Journal tags: sound

Building More Expressive Products by Val Head

It’s day two of An Event Apart in Boston and Val is giving a new talk about building expressive products:

The products we design today must connect with customers across different screen sizes, contexts, and even voice or chat interfaces. As such, we create emotional expressiveness in our products not only through visual design and language choices, but also through design details such as how interface elements move, or the way they sound. By using every tool at our disposal, including audio and animation, we can create more expressive products that feel cohesive across all of today’s diverse media and social contexts. In this session, Val will show how to harness the design details from different media to build overarching themes—themes that persist across all screen sizes and user and interface contexts, creating a bigger emotional impact and connection with your audience.

I’m going to attempt to live blog her talk. Here goes…

This is about products that intentionally express personality. When you know what your product’s personality is, you can line up your design choices to express that personality intentionally (as opposed to leaving it to chance).

Tunnel Bear has a theme around a giant bear that will protect you from all the bad things on the internet. It makes a technical product very friendly—very different from most VPN companies.

Mailchimp have been doing this for years, but with a monkey (ape, actually, Val), not a bear—Freddie. They’ve evolved and changed it over time, but it always has personality.

But you don’t need a cute animal to express personality. Authentic Weather is a sarcastic weather app. It’s quite sweary and that stands out. They use copy, bold colours, and giant type.

Personality can be more subtle, like with Stripe. They use slick animations and clear, concise design.

Being expressive means conveying personality through design. Type, colour, copy, layout, motion, and sound can all express personality. Val is going to focus on the last two: motion and sound.

Expressing personality with motion

Animation can be used to tell your story. We can do that through:

  • Easing choices (ease-in, ease-out, bounce, etc.),
  • Duration values and offsets, and
  • The properties we animate (see the sketch after this list).
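
To make that concrete, here's a rough sketch using the Web Animations API. It shows all three levers: an easing choice, a duration value, and (reading "offsets" as staggered delays between elements) an offset per item. The .list-item selector is purely hypothetical.

    // Staggered entrance: each item animates with the same easing and
    // duration, but starts slightly later than the one before it.
    const items = document.querySelectorAll('.list-item'); // hypothetical selector

    items.forEach((item, index) => {
      item.animate(
        [
          { opacity: 0, transform: 'translateY(10px)' },
          { opacity: 1, transform: 'translateY(0)' }
        ],
        {
          duration: 300,      // duration value
          easing: 'ease-out', // easing choice
          delay: index * 60,  // offset: a small stagger per item
          fill: 'both'
        }
      );
    });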

Here are four personality types…

Calm, soft, reassuring

You can use opacity, soft blurs, small movements, and easing curves with gradual changes. You can use:

  • fade,
  • scale + fade,
  • blur + fade,
  • blur + scale + fade.

Pro tip for blurs: the end of blurs always looks weird. Fade out with opacity before your blur gets weird.

You can use Penner easing equations for your easings. See them in action on easings.net, where each one is shown as a motion graph plotting change against time. The flatter the curve, the more linear the motion. They have a lot more range than the defaults you get with CSS keyword values.

For calm, soft, and reassuring, you could use easeInQuad, easeOutQuad, or easeInOutQuad. But that’s like saying “you could use dark blue.” These will get you close, but you need to work on the detail.
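
As a rough illustration, a calm scale-and-fade entrance might look something like this sketch using the Web Animations API. The cubic-bezier values are one commonly quoted approximation of easeOutQuad (easings.net lists curves for each Penner equation), so treat the exact numbers as a starting point rather than canon.

    // Calm, soft, reassuring: opacity, a small scale change,
    // and a gradual quad-style ease-out.
    const card = document.querySelector('.card'); // hypothetical element

    card.animate(
      [
        { opacity: 0, transform: 'scale(0.96)' },
        { opacity: 1, transform: 'scale(1)' }
      ],
      {
        duration: 400,
        easing: 'cubic-bezier(0.25, 0.46, 0.45, 0.94)', // roughly easeOutQuad
        fill: 'forwards'
      }
    );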

Confident, stable, strong

You can use direct movements, straight lines, symmetrical ease-in-outs. You should avoid blurs, bounces, and overshoots. You can use:

  • quick fade,
  • scale + fade,
  • direct start and stops.

You can use Penner equations like easeInCubic, easeOutCubic and easeInOutCubic.

Lively, energetic, friendly

You can use overshoots, anticipation, and “snappy” easing curves. You can use:

  • overshoot,
  • overshoot + scale,
  • anticipation,
  • anticipation + overshoot.

To get the sense of overshoots and anticipations you can use easing curves like easeInBack, easeOutBack, and easeInOutBack. Those aren’t the only ones though. Anything that sticks out the bottom of the graph will give you anticipation. Anything that sticks out the top of the graph will give you overshoot.

If cubic bezier curves don’t get you quite what you’re going for, you can add keyframes to your animation. You could have keyframes for 0%, 90%, and 100%, where the value at the 90% keyframe goes past the final value at 100%.
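
Something like this sketch, again using the Web Animations API, shows the same idea: the 90% keyframe overshoots the resting value before the 100% keyframe settles back. The .cta selector and the scale values are just illustrative.

    // Hand-rolled overshoot: scale up past the final size at 90%,
    // then settle at 100% (equivalent to a CSS @keyframes rule
    // with 0%, 90%, and 100% stops).
    const button = document.querySelector('.cta'); // hypothetical element

    button.animate(
      [
        { transform: 'scale(0.8)' },               // 0%
        { transform: 'scale(1.05)', offset: 0.9 }, // 90%: past the resting size
        { transform: 'scale(1)' }                  // 100%
      ],
      { duration: 350, easing: 'ease-out', fill: 'both' }
    );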

Stripe uses a touch of overshoot on their charts and diagrams; nice and subtle. Slack uses a bit of overshoot to create a sense of friendliness in their loader.

Playful, fun, lighthearted

You can use bounces, shape morphs, squashes and stretches. This is probably not the personality for a bank. But it could be for a game, or some other playful product. You can use:

  • bounce,
  • elastic,
  • morph,
  • squash and stretch (springs).

You can use easing equations for the first two, but for the others, they’re really hard to pull off with just CSS. You probably need JavaScript.

The easing curve for elastic movement is a more complicated Penner equation that can’t be done in CSS. GreenSock will help you visualise your elastic easings. For springs, you probably need a dedicated library for spring motions.
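
Roughly, with GreenSock, that might look something like this (assuming GSAP 3 syntax and a hypothetical .bubble element); its ease strings cover elastic and back/overshoot curves that plain CSS easing can't express:

    // Elastic settle using GSAP 3's built-in elastic ease.
    import { gsap } from 'gsap';

    gsap.to('.bubble', {
      scale: 1,
      duration: 1.2,
      ease: 'elastic.out(1, 0.4)' // amplitude and period: tweak to taste
    });

For true spring physics, where stiffness and velocity drive the motion rather than a fixed duration, a dedicated spring library is the usual route.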

Expressing personality with sound

We don’t talk about sound much in web design. There are old angry blog posts about it. And not every website should use sound. But why don’t we even consider it on the web?

We were burnt by those terrible Flash sites with sound on every single button mouseover. And yet the Facebook native app does that today …but in a much more subtle way. The volume is mixed lower, and the sound is flatter; more like a haptic feel. And there’s more variation in the sounds. Just because we did sound badly in the past doesn’t mean we can’t do it well today.

People say they don’t want their computers making sound in an office environment. But isn’t responsive design all about how we don’t just use websites on our desktop computers?

Amber Case has a terrific book about designing products with sound, and she’s all about calm technology. She points out that the larger the display, the less important auditive and tactile feedback becomes. But on smaller screens, the need increases. Maybe that’s why we’re fine with mobile apps making sound but not with our desktop computers doing it?

People say that sound is annoying. That’s like saying siblings are annoying. Sound is annoying when it’s:

  • not appropriate for the situation,
  • played at the wrong time,
  • too loud,
  • not under the user’s control.

But all of those are design decisions that we can control.

So what can we do with sound?

Sound can enhance what we perceive from animation. The “breathe” mode in the Calm meditation app has some lovely animation, and some great sound to go with it. The animation is just a circle getting smaller and bigger—if you took the sound away, it wouldn’t be very impressive.

Sound can also set a mood. Sirin Labs has an extreme example for the Solarin device with futuristic sounds. It’s quite reminiscent of the Flash days, but now it’s all done with browser technologies.

Sound is a powerful brand differentiator. Val now plays sounds (without visuals) from:

  • Slack,
  • Outlook Calendar.

They have strong associations for us. These are earcons: icons for the ears. They can be designed to provoke specific emotions. There was a great explanation on the Blackberry website, of all places (they had a whole design system around their earcons).

Here are some uses of sounds…

Alerts and notifications

You have a new message. You have new email. Your timer is up. You might not be looking at the screen, waiting for those events.

Navigating space

Apple TV has layers of menus. You go “in” and “out” of the layers. As you travel “in” and “out”, the animation is reinforced with sound—an “in” sound and an “out” sound.

Confirming actions

When you buy with Apple Pay, you get auditory feedback. Twitter uses sound for the “pull to refresh” action. It gives you confirmation in a tactile way.

Marking positive moments

This is a great way of making a positive impact in your users’ minds—celebrate the accomplishments. Clear—by Realmac Software—gives lovely rising auditory feedback as you tick things off your to-do list. Compare that to hardware products that only make sounds when something goes wrong—they don’t celebrate your accomplishments.

Here are some best practices for user interface sounds:

  • UI sounds should be short, less than 400ms (see the sketch after this list).
  • End on an ascending interval for positive feedback or beginnings.
  • End on a descending interval for negative feedback, ending, or closing.
  • Give the user controls to stop or customise the sound.
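
As a rough illustration of the first two points, here's a minimal Web Audio sketch of a short earcon that ends on an ascending interval. The frequencies, timings, and volume are arbitrary choices for the sake of the example, not anyone's official sound.

    // A short (~300ms) positive earcon: a sine tone that steps up a fifth.
    const audioContext = new AudioContext();

    function playPositiveEarcon() {
      const oscillator = audioContext.createOscillator();
      const gain = audioContext.createGain();
      const now = audioContext.currentTime;

      oscillator.type = 'sine';
      oscillator.frequency.setValueAtTime(440, now);        // A4
      oscillator.frequency.setValueAtTime(660, now + 0.15); // up to E5: ascending feels positive

      // Quick fade in and out so the tone does not click.
      gain.gain.setValueAtTime(0.0001, now);
      gain.gain.exponentialRampToValueAtTime(0.2, now + 0.02);
      gain.gain.exponentialRampToValueAtTime(0.0001, now + 0.3);

      oscillator.connect(gain).connect(audioContext.destination);
      oscillator.start(now);
      oscillator.stop(now + 0.3);
    }

Most browsers suspend audio until the user has interacted with the page, so in practice you’d trigger this from a user action anyway, which also keeps the sound under the user’s control.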

When it comes to being expressive with sounds, different intervals can evoke different emotions:

  • Consonant intervals feel pleasant and positive.
  • Dissonant intervals feel strong, active, or negative.
  • Large intervals feel powerful.
  • Octaves convey lightheartedness.

People have made sounds for you if you don’t want to design your own. Octave is a free library of UI sounds. You can buy sounds from motionsound.io, targeted specifically at sounds to go with motions.

Let’s wrap up by exploring where to find your product’s personality:

  • What is it trying to help users accomplish?
  • What is it like? (its mood and disposition)

You can run workshops to answer these questions. You can also do research with your users. You might have one idea about your product’s personality that’s different to your customers’. You need to project a believable personality. Talk to your customers.

Designing for Emotion has some great exercises for finding personality. Conversational Design also has some great exercises in it. Once you have the words to describe your personality, it gets easier to design for it.

So have a think about using motion and sound to express your product’s personality. Be intentional about it. It will also make the web a more interesting place.

Sonic sparklines

I’ve seen some lovely examples of the Web Audio API recently.

At the Material conference, Halldór Eldjárn demoed his Poco Apollo project. It generates music on the fly in the browser to match a random image from NASA’s Apollo archive on Flickr. Brian Eno, eat your heart out!

At Codebar Brighton a little while back, local developer Luke Twyman demoed some of his audio-visual work, including the gorgeous Solarbeat—an audio orrery.

The latest issue of the Clearleft newsletter has some links on sound design in interfaces:

I saw Ruth give a fantastic talk on the Web Audio API at CSS Day this year. It had just the right mixture of code and inspiration. I decided there and then that I’d have to find some opportunity to play around with web audio.

As ever, my own website is the perfect playground. I added an audio Easter egg to adactio.com a while back, and so far, no one has noticed. That’s good. It’s a very, very silly use of sound.

In her talk, Ruth emphasised that the Web Audio API is basically just about dealing with numbers. Lots of the examples of nice usage are the audio equivalent of data visualisation. Data sonification, if you will.

I’ve got little bits of dataviz on my website: sparklines. Each one is a self-contained SVG file. I added a script element to the SVG with a little bit of JavaScript that converts numbers into sound (I kind of wish that the script were scoped to the containing SVG but that’s not the way JavaScript in SVG works—it’s no different to putting a script element directly in the body). Clicking on the sparkline triggers the sound-playing function.
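
The gist of the approach, roughly sketched (this isn't the exact script running on adactio.com, and the data array is just an example), is to map each data point to a frequency and step through them with a single oscillator:

    // Turn a series of numbers into a rising-and-falling tone.
    function playSparkline(values) { // e.g. playSparkline([3, 1, 4, 1, 5, 9])
      const audioContext = new AudioContext();
      const oscillator = audioContext.createOscillator();
      const gain = audioContext.createGain();
      const now = audioContext.currentTime;
      const noteLength = 0.1; // seconds per data point
      const max = Math.max(...values, 1);

      values.forEach((value, index) => {
        // Scale each value into a rough 200Hz to 800Hz range.
        const frequency = 200 + (value / max) * 600;
        oscillator.frequency.setValueAtTime(frequency, now + index * noteLength);
      });

      // Fade out at the end to avoid a click.
      gain.gain.setValueAtTime(0.2, now);
      gain.gain.exponentialRampToValueAtTime(0.0001, now + values.length * noteLength);

      oscillator.connect(gain).connect(audioContext.destination);
      oscillator.start(now);
      oscillator.stop(now + values.length * noteLength);
    }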

It sounds terrible. It’s like a theremin with hiccups.

Still, I kind of like it. I mean, I wish it sounded nicer (and I’m open to suggestions on how to achieve that—feel free to fork the code), but there’s something endearing about hearing a month’s worth of activity turned into a wobbling wave of sound. And it’s kind of fun to hear how a particular tag is used more frequently over time.

Anyway, it’s just a silly little thing, but anywhere you spot a sparkline on my site, you can tap it to hear it translated into sound.

Soundcloudbusting

Matt wrote a great article called Ten Years of Podcasting: Fighting Human Nature (although I’m not entirely sure why he put it on Ev’s site instead of—or in addition to—his own). It’s a look back at the history of podcasting, and how it has grown out of its nerdy origins to become more of a mainstream activity. In it, he kindly gives a shout-out to Huffduffer:

…a way to make piecemeal meta-podcasts on the fly built up from random shows (here’s my feed).

Matt has written about how he uses Huffduffer before: a quick introduction to adding your Huffduffer feed to Instacast. It’s equally straightforward with Overcast, and most other iOS podcast apps.

If you use the iOS app Workflow, there’s a nifty tutorial for extracting the audio from YouTube videos, posting the audio to Dropbox, and subscribing in Huffduffer. I’m letting the side down somewhat though: Huffduffer’s API is currently read-only, but it would be so much more powerful if you could post from other apps. I need to wrap my head around OAuth to do this. I was hoping to do an OwnYourGram-style API with IndieAuth and micropub (once your Huffduffer profile has your website URL, and that URL has rel="me" links to OAuth providers like Twitter, Flickr, or Github, all the pieces should be in place), but alas IndieAuth only works on a domain or subdomain basis so /username URLs are out.

Anyway, back to Matt’s article about podcasting. He writes:

Personally, I like it when new podcasts use Soundcloud for their hosting, because on a desktop computer it means I can easily dip into their archives and play random episodes, scrub to certain segments and get a feel for the show before I subscribe.

It’s true that if you’re sitting in front of a desktop computer, Soundcloud is a great way to listen to an audio file there and then. But it’s a lousy way to host a podcast.

The whole point of podcasting is that it’s time-shifted. You get to listen to the audio you want, when you want. The whole point of Soundcloud is that you listen to audio then and there. That’s great if you’re a musician, looking to make sure that people can’t make copies of your music, but it’s terrible if you’re a podcaster.

To be fair, Soundcloud’s primary audience is still musicians, rather than podcasters, so it makes sense for them to prioritise that use-case. But still, they really go out of their way to obfuscate the actual audio file. Even if the publisher has checked the right box to allow users to download the audio file, the result is a very literal interpretation of that: you can download the file, but you can’t copy the URL and paste it into, say, an app for listening later (and you certainly can’t huffduff it).

Case in point: Matt finishes his article with:

If you don’t have time to read the above, it’s available as a 14min audio file…

That audio file is hosted on Soundcloud. You can listen to it there, or you can listen to it through the embedded player on the article itself. But that’s it. You can’t take it with you. You can’t listen to it later. You can’t, for example, listen to it in your car, even though as Matt says:

…for most Americans, killing time listening to podcasts in a car is a great place.

If you can figure out a way to get at Matt’s audio file (and maybe even huffduff it), I’d be much obliged.

Like Merlin says:

That sound

Despite being a huge Pixar fan, I still haven’t seen Wall•E. That’s mostly due to my belief that a typical cinema is not necessarily the best viewing environment for any movie, but particularly for one that you want to get really engrossed in …unless the cinema is empty of humans.

I’m not sure if I can hold out much longer though, especially after reading this wonderful story about how the people at Pixar responded to one blogger’s reaction to seeing the first trailer for the movie last year. Eda Cherry describes herself as having a strong fondness for robots so Wall•E is already pushing all the right buttons. The moment when he says his own name is the moment that pushes her over the edge — it makes her cry every time. Partly it’s the robot’s droopy eyes as he looks up into space but also:

It’s the voice modulation.

That would be the work of sound designer Ben Burtt. I remember as a child receiving the quarterly Star Wars fan club newsletter, Bantha Tracks, and reading about the amazing amount of found sounds that went into the soundscape of that galaxy far, far away: animal noises, broken TV sets, tuning forks tapped against high-tension wires. And of course R2D2, voiced by Ben Burtt himself.

Now, with Wall•E, he’s voicing another lovable robot, one capable of moving humans to tears. His involvement is no coincidence. In the initial brainstorming for the project, John Lasseter repeatedly described it as R2D2: The Movie.

The journey involved in turning that initial idea into a finished film is a long one. For a closer look at the process at Pixar, be sure to read Peter Merholz’s chat with Michael B. Johnson. Their storyboarding process sounds a lot like wireframing:

We’d much rather fail with a bunch of sketches that we did (relatively) quickly and cheaply, than once we’ve modeled, rigged, shaded, animated, and lit the film. Fail fast, that’s the mantra.