The Accidental Side Project
This gets me right in the feels.
I can’t believe I was lucky enough to contribute to 24 Ways seven times over its fifteen year lifespan!
When people ask where to find you on the web, what do you tell them? Your personal website can be your home on the web. Or, if you don’t like to share your personal life in public, it can be more like your office. As with your home or your office, you can make it work for your own needs. Do you need a place that’s great for socialising, or somewhere to present your work? Without the constraints of somebody else’s platform, you get to choose what works for you.
A terrific piece from Laura enumerating the many ways that having your own website can empower you.
Have you already got your own website? Fabulous! Is there anything you can do to make it easier for those who don’t have their own sites yet? Could you help a person move their site away from a big platform? Could you write a tutorial or script that provides guidance and reassurance?
Blogging about what you are working on is really valuable for the writer because it forces you to think logically about what you are doing in order to tell a good story.
It’s also really valuable to blog about what you’ve learned, especially if you’ve made a mistake. It makes sure you’ve learned the lesson and helps others avoid making the same mistakes.
This article first appeared in 24 Ways, the online advent calendar for geeks.
It’s that time of year, when we gather together as families to celebrate the life of the greatest person in history. This man walked the Earth long before us, but he left behind words of wisdom. Those words can guide us every single day, but they are at the forefront of our minds during this special season.
I am, of course, talking about Murphy, and the golden rule he gave unto us:
Anything that can go wrong will go wrong.
So true! I mean, that’s why we make sure we’ve got nice 404 pages. It’s not that we want people to ever get served a File Not Found message, but we acknowledge that, despite our best efforts, it’s bound to happen sometime. Murphy’s Law, innit?
But there are some Murphyesque situations where even your lovingly crafted 404 page won’t help. What if your web server is down? What if someone is trying to reach your site but they lose their internet connection? These are all things that can—and will—go wrong.
I guess there’s nothing we can do about those particular situations, right?
Wrong!
A service worker is a Murphy-battling technology that you can inject into a visitor’s device from your website. Once it’s installed, it can intercept any requests made to your domain. If anything goes wrong with a request—as is inevitable—you can provide instructions for the browser. That’s your opportunity to turn those server outage frowns upside down. Take those network connection lemons and make network connection lemonade.
If you’ve got a custom 404 page, why not make a custom offline page too?
Step one is to make …actually, wait. There’s a step before that. Step zero. Get your site running on HTTPS, if it isn’t already. You won’t be able to use a service worker unless everything’s being served over HTTPS, which makes sense when you consider the awesome power that a service worker wields.
If you’re developing locally, service workers will work fine for localhost, even without HTTPS. But for a live site, HTTPS is a must.
Alright, assuming your site is being served over HTTPS, then step one is to create an offline page. Make it as serious or as quirky as is appropriate for your particular brand. If the website is for a restaurant, maybe you could put the telephone number and address of the restaurant on the custom offline page (unsolicited advice: you could also put this on the home page, you know). Here’s an example of the custom offline page for this year’s Ampersand conference.
When you’re done, publish the offline page at a suitably imaginative URL, like, say, /offline.html.
Now create a JavaScript file called serviceworker.js. This is the script that the browser will look to when certain events are triggered. The first event to handle is what to do when the service worker is installed on the user’s device. When that happens, an event called install is fired. You can listen out for this event using addEventListener:
addEventListener('install', installEvent => {
    // put your instructions here.
}); // end addEventListener
In this case, you want to make sure that your lovingly crafted custom offline page is put into a nice safe cache. You can use the Cache API to do this. You get to create as many caches as you like, and you can call them whatever you want. Here, I’m going to call the cache Johnny just so I can refer to it as JohnnyCache in the code:
addEventListener('install', installEvent => {
    installEvent.waitUntil(
        caches.open('Johnny')
        .then( JohnnyCache => {
            // return the promise so waitUntil actually waits for the caching
            return JohnnyCache.addAll([
                '/offline.html'
            ]); // end addAll
        }) // end open.then
    ); // end waitUntil
}); // end addEventListener
I’m betting that your lovely offline page is linking to a CSS file, maybe an image or two, and perhaps some JavaScript. You can cache all of those at this point:
addEventListener('install', installEvent => {
    installEvent.waitUntil(
        caches.open('Johnny')
        .then( JohnnyCache => {
            return JohnnyCache.addAll([
                '/offline.html',
                '/path/to/stylesheet.css',
                '/path/to/javascript.js',
                '/path/to/image.jpg'
            ]); // end addAll
        }) // end open.then
    ); // end waitUntil
}); // end addEventListener
Make sure that the URLs are correct. If just one of the URLs in the list fails to resolve, none of the items in the list will be cached.
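By the way, if some of those files are nice-to-haves rather than must-haves, here’s a little sketch of one way around that (an aside of my own, not part of the original article): cache the optional files individually with add, and keep the all-or-nothing addAll for the critical stuff:

addEventListener('install', installEvent => {
    installEvent.waitUntil(
        caches.open('Johnny')
        .then( JohnnyCache => {
            // nice-to-haves: cached one at a time, so a single bad URL
            // doesn't scupper the others
            ['/path/to/image.jpg'].forEach( file => {
                JohnnyCache.add(file)
                .catch( cacheError => {
                    // this file didn't make it into the cache
                }); // end add.catch
            }); // end forEach
            // must-haves: addAll is all-or-nothing
            return JohnnyCache.addAll([
                '/offline.html',
                '/path/to/stylesheet.css'
            ]); // end addAll
        }) // end open.then
    ); // end waitUntil
}); // end addEventListener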
The next event you want to listen for is the fetch event. This is probably the most powerful—and, let’s be honest, the creepiest—feature of a service worker. Once it has been installed, the service worker lurks on the user’s device, waiting for any requests made to your site. Every time the user requests a web page from your site, a fetch event will fire. Every time that page requests a style sheet or an image, a fetch event will fire. You can provide instructions for what should happen each time:
addEventListener('fetch', fetchEvent => {
    // What happens next is up to you!
}); // end addEventListener
Let’s write a fairly conservative script with the following logic:

- First, try to fetch the file from the network.
- If that doesn’t work, try to find the file in the cache.
- If that doesn’t work and it’s a request for a web page, show the custom offline page instead.

Here’s how that translates into JavaScript:
// Whenever a file is requested
addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    fetchEvent.respondWith(
        // First, try to fetch it from the network
        fetch(request)
        .then( responseFromFetch => {
            return responseFromFetch;
        }) // end fetch.then
        // But if that doesn't work
        .catch( fetchError => {
            // try to find it in the cache
            // (returning the promise so respondWith receives the response)
            return caches.match(request)
            .then( responseFromCache => {
                if (responseFromCache) {
                    return responseFromCache;
                // But if that doesn't work
                } else {
                    // and it's a request for a web page
                    if (request.headers.get('Accept').includes('text/html')) {
                        // show the custom offline page instead
                        return caches.match('/offline.html');
                    } // end if
                } // end if/else
            }) // end match.then
        }) // end fetch.catch
    ); // end respondWith
}); // end addEventListener
I am fully aware that I may have done some owl-drawing there. If you need a more detailed breakdown of what’s happening at each point in the code, I’ve written a whole book for you. It’s the perfect present for Murphymas.
You can publish your service worker script at /serviceworker.js, but you still need to tell the browser where to look for it. You can do that using JavaScript. Put this in an existing JavaScript file that you’re calling in on every page of your site, or add it in a script element at the end of every page’s HTML:
if (navigator.serviceWorker) {
    navigator.serviceWorker.register('/serviceworker.js');
}
That tells the browser to start installing the service worker, but not without first checking that the browser understands what a service worker is. When it comes to JavaScript, feature detection is your friend.
You might already have some JavaScript files in a folder like /assets/js/ and you might be tempted to put your service worker script in there too. Don’t do that. If you do, the service worker will only be able to handle requests made for files within /assets/js/. By putting the service worker script in the root directory, you’re making sure that every request can be intercepted.
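To illustrate the scoping (another aside of my own, using the default rules): a service worker’s scope defaults to the directory its script lives in.

// registered from the root, the default scope is '/',
// so requests for anything on the site can be intercepted
navigator.serviceWorker.register('/serviceworker.js');

// registered from a subdirectory, the default scope is
// '/assets/js/', so only requests within that path would
// be intercepted
navigator.serviceWorker.register('/assets/js/serviceworker.js');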
Nicely done! You’ve made sure that if—no, when—a visitor can’t reach your website, they’ll get your hand-tailored offline page. You have temporarily defeated the forces of chaos! You have briefly fought the tide of entropy! You have made a small but ultimately futile gesture against the inevitable heat-death of the universe!
This is just the beginning. You can do more with service workers.
What if, every time you fetched a page from the network, you stored a copy of that page in a cache? Then if that person tries to reach that page later, but they’re offline, you could show them the cached version.
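Here’s a rough sketch of that idea (my own illustration rather than production-ready code), reusing the Johnny cache from earlier:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    // only GET requests are safe to cache
    if (request.method !== 'GET') {
        return;
    }
    fetchEvent.respondWith(
        // try the network first
        fetch(request)
        .then( responseFromFetch => {
            // stash a copy of this page in the cache for later
            const copy = responseFromFetch.clone();
            caches.open('Johnny')
            .then( JohnnyCache => {
                JohnnyCache.put(request, copy);
            }); // end open.then
            return responseFromFetch;
        }) // end fetch.then
        .catch( fetchError => {
            // if the network fails, try the cache
            return caches.match(request);
        }) // end fetch.catch
    ); // end respondWith
}); // end addEventListener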
Or, what if instead of reaching out to the network first, you checked to see if a file is in the cache first? You could serve up that cached version—which would be blazingly fast—and still fetch a fresh version from the network in the background to pop in the cache for next time. That might be a good strategy for images.
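And here’s a sketch of that cache-first approach (again, just an illustrative outline, assuming the same Johnny cache):

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.method !== 'GET') {
        return;
    }
    fetchEvent.respondWith(
        // look in the cache first
        caches.match(request)
        .then( responseFromCache => {
            // meanwhile, fetch a fresh copy in the background…
            const fetchPromise = fetch(request)
            .then( responseFromFetch => {
                const copy = responseFromFetch.clone();
                caches.open('Johnny')
                .then( JohnnyCache => {
                    JohnnyCache.put(request, copy);
                }); // end open.then
                return responseFromFetch;
            }); // end fetch.then
            // …and return the cached copy straight away if there is one
            return responseFromCache || fetchPromise;
        }) // end match.then
    ); // end respondWith
}); // end addEventListener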
So many options! The hard part isn’t writing the code, it’s figuring out the steps you want to take. Once you’ve got those steps written out, then it’s a matter of translating them into JavaScript.
Inevitably there will be some obstacles along the way—usually it’s a misplaced curly brace or a missing parenthesis. Don’t be too hard on yourself if your code doesn’t work at first. That’s just Murphy’s Law in action.
They let me write a 24 Ways article again. Will they never learn?
This one’s a whirlwind tour of using a service worker to provide a custom offline page, in the style of Going Offline.
By the way, just for the record, I initially rejected this article’s title out of concern that injecting a Cliff Richard song into people’s brains was cruel and unusual punishment. I was overruled.
Paul walks us through the process of making some incremental accessibility improvements to this year’s 24 Ways.
Creating something new will always attract attention and admiration, but there’s an under-celebrated nobility in improving what already exists. While not all changes may be visual, they can have just as much impact.
I thought this post from Drew was a great way to wrap up another excellent crop from 24 Ways.
There are so many new tools, frameworks, techniques, styles and libraries to learn. You know what? You don’t have to use them.
Great advice from Una on getting buy-in and ensuring the long-term success of a design system.
24 Ways is back! Yay! This year’s edition kicks off with a great article by Hui Jing on using @supports:
Chances are, the latest features will not ship across all browsers at the same time. But you know what? That’s perfectly fine. If we accept this as a feature of the web, instead of a bug, we’ve just opened up a lot more web design possibilities.
Charlie conducts an experiment by living without JavaScript for a day.
So how was it? Well, with just a few minutes of sans-javascript life under my belt, my first impression was “Holy shit, things are fast without javascript”. There’s no ads. There’s no video loading at random times. There’s no sudden interrupts by “DO YOU WANT TO FUCKING SUBSCRIBE?” modals.
As you might expect, lots of sites just don’t work, but there are plenty of sites that work just fine—Google search, Amazon, Wikipedia, BBC News, The New York Times. Not bad!
This has made me appreciate the number of large sites that make the effort to build robust sites that work for everybody. But even on sites that are progressively enhanced, it’s a sad indictment that they can be so slow on the multi-core, hyperpowerful Mac that I use every day, yet immediately become fast when JavaScript is disabled.
Some great thoughts here from Francis on how crafting solid HTML is information architecture.
Some really great CSS tips from Rich on sizing display text for multiple viewports.
I really, really like Heydon’s framing of inclusive design: yes, it covers accessibility, but it’s more than that, and it’s subtly different to universal design.
He also includes some tips which read like design principles to me:
- Involve code early
- Respect conventions
- Don’t be exact
- Enforce simplicity
Come to think of it, they’re really good design principles in that they are all reversible, i.e. you could imagine the polar opposites being design principles elsewhere.
24 Ways is back! That’s how we web nerds know that the Christmas season is here. It kicked off this year with a most excellent bit of hardware hacking from Seb: Internet of Stranger Things.
The site is looking lovely as always. There’s also a component library to accompany it: Bits, the front-end component library for 24 ways. Nice work, courtesy of Paul. (I particularly like the comment component example.)
The component library is built with Fractal, the magnificent tool that Mark has open-sourced. We’ve been using it at Clearleft for a while now, but we haven’t had a chance to make any of the component libraries public, so it’s really great to be able to point to the 24 Ways example. The code is all on Github too.
There’s a really good buzz around Fractal right now. Lots of people in the design systems Slack channel are talking about it. There’s also a dedicated Fractal Slack channel for people getting into the nitty-gritty of using the tool.
If you’re currently wrestling with the challenges of putting a front-end component library together, be sure to give Fractal a whirl.
Ethan demonstrates progressive enhancement at the pattern level using flexbox.
I’ve found that thinking about my design as existing in broad experience tiers – in layers – is one of the best ways of designing for the modern web.
I have no hands-on experience with React, but this tutorial by Jack Franklin looks like a great place to start. Before the tutorial begins he succinctly and clearly outlines the perfect architecture for building on the web today:
- The user visits www.yoursite.com and the server executes your JavaScript to generate the HTML it needs to render the page.
- In the background, the client-side JavaScript is executed and takes over the duty of rendering the page.
- The next time a user clicks, rather than being sent to the server, the client-side app is in control.
- If the user doesn’t have JavaScript enabled, each click on a link goes to the server and they get the server-rendered content again.
YES!!!
Y’know, I had a chance to chat briefly with Jack at the Edge conference in London and I congratulated him on the launch of a Go Cardless site that used exactly this technique. He told me that the decision to flip the switch and make it act as a single page app came right at the end of the project. I think that points to a crucial mindset that’s reiterated here:
Now we’ll build the React application entirely on the server, before adding the client-side JavaScript right at the end.
This article first appeared in 24 Ways, the online advent calendar for geeks.
24 Ways has been going strong for ten years. That’s an aeon in internet timescales. Just think of all the changes we’ve seen in that time: the rise of Ajax, the explosion of mobile devices, the unrecognisably-changed landscape of front-end tooling.
Tools and technologies come and go, but one thing has remained constant for me over the past decade: progressive enhancement.
Progressive enhancement isn’t a technology. It’s more like a way of thinking. Instead of thinking about the specifics of how a finished website might look, progressive enhancement encourages you to think about the fundamental meaning of what the website is providing. So instead of thinking of a website in terms of its ideal state in a modern browser on a nice widescreen device, progressive enhancement allows you to think about the core functionality in a more abstract way.
Once you’ve figured out what the core functionality is—adding an item to a shopping cart, or posting a message, or sharing a photo—then you can enable that functionality in the simplest possible way. That usually means starting with good old-fashioned HTML. Links and forms are often all you need. Then, once you’ve got the core functionality working in a basic way, you can start to enhance to make a progressively better experience for more modern browsers.
The advantage of working this way isn’t just that your site will work in older browsers (albeit in a rudimentary way). It also ensures that if anything goes wrong in a modern browser, it won’t be catastrophic.
There’s a common misconception that progressive enhancement means that you’ll spend your time dealing with older browsers, but in fact the opposite is true. Putting the basic functionality into place doesn’t take very long at all. And once you’ve done that, you’re free to spend all your time experimenting with the latest and greatest browser technologies, secure in the knowledge that even if they aren’t universally supported yet, that’s okay: you’ve already got your fallback in place.
The key to thinking about web development this way is realising that there isn’t one final interface—there could be many, slightly different interfaces depending on the properties and capabilities of any particular user agent at any particular moment. And that’s okay. Websites do not need to look the same in every browser.
Once you truly accept that, it’s an immensely liberating idea. Instead of spending your time trying to make websites look the same in wildly varying browsers, you can spend your time making sure that the core functionality of what you’re building works everywhere, while providing the best possible experience for more capable browsers.
Allow me to demonstrate with a simple example: navigation.
Let’s say we’ve got a straightforward website about the twelve days of Christmas, with a page for each day. The core functionality is pretty clear:

- Reading the content for each of the twelve days.
- Browsing from day to day.

The first use-case is easily satisfied by marking up the text with headings, paragraphs, and all the usual structural HTML elements. The second use-case is satisfied by providing a list of good ol’ hyperlinks.
Now where’s the best place to position this navigation list? Personally, I’m a big fan of the jump-to-footer pattern. This puts the content first and the navigation second. At the top of the page there’s a link with an href attribute pointing to the fragment identifier for the navigation.
<body>
    <main role="main" id="top">
        <a href="#menu" class="control">Menu</a>
        ...
    </main>
    <nav role="navigation" id="menu">
        ...
        <a href="#top" class="control">Dismiss</a>
    </nav>
</body>
See the footer-anchor pattern in action.
Because it’s nothing more than a hyperlink, this works in just about every browser since the dawn of the web. Following hyperlinks is what web browsers were made to do (hence the name).
The footer-anchor pattern is a particularly neat solution on small-screen devices, like mobile phones. Once more screen real-estate is available, I can use the magic of CSS to reposition the navigation above the content. I could use position: absolute, flexbox, or in this case, display: table.
@media all and (min-width: 35em) {
    .control {
        display: none;
    }
    body {
        display: table;
    }
    [role="navigation"] {
        display: table-caption;
        columns: 6 15em;
    }
}
See the wide-screen styles in action.
Right. At this point I’m providing core functionality to everyone, and I’ve got nice responsive styles for wider screens. I could stop here, but the real advantage of progressive enhancement is that I don’t have to. From here on, I can go crazy adding all sorts of fancy enhancements for modern browsers, without having to worry about providing a fallback for older browsers—the fallback is already in place.
What I’d really like is to provide a swish off-canvas pattern for small-screen devices. Here’s my plan:

- Position the navigation under the main content.
- Listen out for the .control links being activated and intercept that action.
- When those links are activated, toggle a class of .active on the body.
- If the .active class exists, slide the content out to reveal the navigation.

Here’s the CSS for positioning the content and navigation:
@media all and (max-width: 35em) {
    [role="main"] {
        transition: all .25s;
        width: 100%;
        position: absolute;
        z-index: 2;
        top: 0;
        right: 0;
    }
    [role="navigation"] {
        width: 75%;
        position: absolute;
        z-index: 1;
        top: 0;
        right: 0;
    }
    .active [role="main"] {
        transform: translateX(-75%);
    }
}
In my JavaScript, I’m going to listen out for any clicks on the .control links and toggle the .active class on the body accordingly:
(function (win, doc) {
    'use strict';
    var linkclass = 'control',
        activeclass = 'active',
        toggleClassName = function (element, toggleClass) {
            // double backslash needed so the string produces \s in the regex
            var reg = new RegExp('(\\s|^)' + toggleClass + '(\\s|$)');
            if (!element.className.match(reg)) {
                element.className += ' ' + toggleClass;
            } else {
                element.className = element.className.replace(reg, '');
            }
        },
        navListener = function (ev) {
            ev = ev || win.event;
            var target = ev.target || ev.srcElement;
            if (target.className.indexOf(linkclass) !== -1) {
                ev.preventDefault();
                toggleClassName(doc.body, activeclass);
            }
        };
    doc.addEventListener('click', navListener, false);
}(this, this.document));
I’m all set, right? Not so fast!
I’ve made the assumption that addEventListener will be available in my JavaScript. That isn’t a safe assumption. That’s because JavaScript—unlike HTML or CSS—isn’t fault-tolerant. If you use an HTML element or attribute that a browser doesn’t understand, or you use a CSS selector, property or value that a browser doesn’t understand, it’s no big deal. The browser will just ignore what it doesn’t understand: it won’t throw an error, and it won’t stop parsing the file.
JavaScript is different. If you make an error in your JavaScript, or use a JavaScript method or property that a browser doesn’t recognise, that browser will throw an error, and it will stop parsing the file. That’s why it’s important to test for features before using them in JavaScript. That’s also why it isn’t safe to rely on JavaScript for core functionality.
In my case, I need to test for the existence of addEventListener:
(function (win, doc) {
    if (!win.addEventListener) {
        return;
    }
    ...
}(this, this.document));
The good folk over at the BBC call this kind of feature-test cutting the mustard. If a browser passes the test, it cuts the mustard, and so it gets the enhancements. If a browser doesn’t cut the mustard, it doesn’t get the enhancements. And that’s fine because, remember, websites don’t need to look the same in every browser.
I want to make sure that my off-canvas styles are only going to apply to mustard-cutting browsers. I’m going to use JavaScript to add a class of .cutsthemustard to the document:
(function (win, doc) {
    if (!win.addEventListener) {
        return;
    }
    ...
    var enhanceclass = 'cutsthemustard';
    doc.documentElement.className += ' ' + enhanceclass;
}(this, this.document));
Now I can use the existence of that class name to adjust my CSS:
@media all and (max-width: 35em) {
    .cutsthemustard [role="main"] {
        transition: all .25s;
        width: 100%;
        position: absolute;
        z-index: 2;
        top: 0;
        right: 0;
    }
    .cutsthemustard [role="navigation"] {
        width: 75%;
        position: absolute;
        z-index: 1;
        top: 0;
        right: 0;
    }
    .cutsthemustard .active [role="main"] {
        transform: translateX(-75%);
    }
}
See the enhanced mustard-cutting off-canvas navigation. Remember, this only applies to small screens so you might have to squish your browser window.
This was a relatively simple example, but it illustrates the thinking behind progressive enhancement: once you’re providing the core functionality to everyone, you’re free to go crazy with all the latest enhancements for modern browsers.
Progressive enhancement doesn’t mean you have to provide all the same functionality to everyone—quite the opposite. That’s why it’s so key to figure out early on what the core functionality is, and make sure that it can be provided with the most basic technology. But from that point on, you’re free to add many more features that aren’t mission-critical. You should be rewarding more capable browsers by giving them more of those features, such as animation in CSS, geolocation in JavaScript, and new input types in HTML.
Like I said, progressive enhancement isn’t a technology. It’s a way of thinking. Once you start thinking this way, you’ll be prepared for whatever the next ten years throws at us.
My contribution to this year’s edition of the web’s best advent calendar.
A superb article by Josh on planning for progressive enhancement—clearly laid out and carefully explained.
It’s December. That means it’s time for the geek advent calendars to get revved up again:
For every day until Christmas Eve, you can find a tasty geek treat on each of those sites.
Today’s offering on 24 Ways is a little something I wrote called Conditional Loading for Responsive Designs. It expands on the technique I’m using on Huffduffer to conditionally load inessential content into a sidebar with Ajax where the layout is wide enough to accommodate it:
if (document.documentElement.clientWidth > 640) {
    // Use Ajax to retrieve content here.
}
In that example, the Ajax only kicks in if the viewport is wider than 640 pixels. Assuming I’ve got a media query that also kicks in at 640 pixels, everything is hunky-dory.
But …it doesn’t feel very DRY to have that 640 pixel number repeated in two places: once in the CSS and again in the JavaScript. It feels particularly icky if I’m using ems for my media query breakpoints (as I often do) while using pixels in JavaScript.
At my recent responsive enhancement workshop in Düsseldorf, Andreas Nebiker pointed out an elegant solution: instead of testing the width of the viewport in JavaScript, why not check for a style change that would have been executed within a media query instead?
So, say for example I’ve got some CSS like this:
@media all and (min-width: 640px) {
    [role="complementary"] {
        width: 30%;
        float: right;
    }
}
Then in my JavaScript I could test to see if that element has the wide-screen layout or not:
var sidebar = document.querySelector('[role="complementary"]'),
    floating = window.getComputedStyle(sidebar, null).getPropertyValue('float');
if (floating == 'right') {
    // Use Ajax to retrieve content here.
}
Or something like that. The breakpoint is only ever specified once, so if I ever change it from 640 pixels to something else (like 40 ems), then I only have to make that change in one place. Feel free to grab the example and play around with it.
By the way, you’ll notice that in the original 24 Ways article and also in this updated example, I’m only testing the layout on page load, not on page resize. It would be fairly easy to add in an onResize test as well, but I’m increasingly coming to the conclusion that—apart from the legitimate case of orientation change on tablets—the only people resizing their browser windows after the page loads are web designers testing responsive designs. Still, it would be nice to get some more data to test that hypothesis.
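For what it’s worth, here’s a rough sketch of what that onResize test could look like (my own illustration, with a flag so the Ajax request only ever fires once):

var loaded = false;
function loadIfWide() {
    if (loaded) {
        return;
    }
    var sidebar = document.querySelector('[role="complementary"]'),
        floating = window.getComputedStyle(sidebar, null).getPropertyValue('float');
    if (floating == 'right') {
        loaded = true;
        // Use Ajax to retrieve content here.
    }
}
// check on resize as well as on page load
window.addEventListener('resize', loadIfWide, false);
loadIfWide();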