Be progressive

Aaron wrote a great post a little while back called A Fundamental Disconnect. In it, he points to a worldview amongst many modern web developers, who see JavaScript as a universally-available technology in web browsers. They are, in effect, viewing a browser’s JavaScript engine as a runtime environment, and treating web development no differently from any other kind of software development.

The one problem I’ve seen, however, is the fundamental disconnect many of these developers seem to have with the way deploying code on the Web works. In traditional software development, we have some say in the execution environment. On the Web, we don’t.

Treating JavaScript support in “the browser” as a known quantity is as much of a consensual hallucination as deciding that all viewports are 960 pixels wide. Even that phrasing—“the browser”—shows a framing that’s at odds with the reality of developing for the web; we don’t have to think about “the browser”, we have to think about browsers:

Lakoffian self-correction: if I’m about to talk about doing something “in the browser”, I try to catch myself and say “in browsers” instead.

While we might like to think that browsers have all reached a certain level of equilibrium, as Aaron puts it “the Web is messy”:

And, as much as we might like to control a user’s experience down to the very pixel, those of us who have been working on the Web for a while understand that it’s a fool’s errand and have adjusted our expectations accordingly. Unfortunately, this new crop of Web developers doesn’t seem to have gotten that memo.

Please don’t think that either Aaron or I are saying that you shouldn’t use JavaScript. Far from it! It’s simply a matter of how you wield the power of JavaScript. If you make your core tasks dependent on JavaScript, some of your potential users will inevitably be left out in the cold. But if you start by building on a classic server/client model, and then enhance with JavaScript, you can have your cake and eat it too. Modern browsers get a smooth, rich experience. Older browsers get a clunky experience with full page refreshes, but that’s still much, much better than giving them nothing at all.

Aaron makes the case that, while we cannot control which browsers people will use, we can control the server environment.

Stuart takes issue with that assertion in a post called Fundamentally Disconnected. In it, he points out that the server isn’t quite the controlled environment that Aaron claims:

Aaron sees requiring a specific browser/OS combination as an impractical impossibility and the wrong thing to do, whereas doing this on the server is positively virtuous. I believe that this is no virtue.

It’s true enough that the server isn’t some rock-solid never-changing environment. Anyone who’s ever had to install patches or update programming languages knows this. But at least it’s one single environment …whereas the web has an overwhelming multitude of environments; one for every browser/OS/device combination.

Stuart finishes on a stirring note:

The Web has trained its developers to attempt to build something that is fundamentally egalitarian, fundamentally available to everyone. That’s why the Web’s good. The old software model, of something which only works in one place, isn’t the baseline against which the Web should be judged; it’s something that’s been surpassed.

However he wraps up by saying that…

…the Web is the largest, most widely deployed, most popular and most ubiquitous computing platform the world has ever known. And its programming language is JavaScript.

In a post called Missed Connections, Aaron pushes back against that last point:

The fact is that you can’t build a robust Web experience that relies solely on client-side JavaScript.

While JavaScript may technically be available and consistently-implemented across most devices used to access our sites nowadays, we do not control how, when, or even if that JavaScript is ultimately executed.

Stuart responds in a post called Reconnecting (and, by the way, how great is it to see this kind of thoughtful blog-to-blog discussion going on?).

I am, in general and in total agreement with Aaron, opposed to the idea that without JavaScript a web app doesn’t work.

But here’s the problem with progressively enhancing from server functionality to a rich client:

A web app which does not require its client-side scripting, which works on the server and then is progressively enhanced, does not work in an offline environment.

Good point.

Now, at this juncture, I could point out that—by using progressive enhancement—you can still have the best of both worlds. Stuart has anticipated that:

It is in theory possible to write a web app which does processing on the server and is entirely robust against its client-side scripting being broken or missing, and which does processing on the client so that it works when the server’s unavailable or uncontactable or expensive or slow. But let’s be honest here. That’s not an app. That’s two apps.

Ah, there’s the rub!

When I’ve extolled the virtues of progressive enhancement in the past, the pushback I most often receive is on this point. Surely it’s wasteful to build something that works on the server and then reimplement much of it on the client?

Personally, I try not to completely reinvent all the business logic that I’ve already figured out on the server, and then rewrite it all in JavaScript. I prefer to use JavaScript—and specifically Ajax—as a dumb waiter, shuffling data back and forth between the client and server, where the real complexity lies.
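As a rough sketch of that dumb waiter pattern (the endpoint and field names here are hypothetical), the form works with a full page refresh by default; where JavaScript is available, it intercepts the submission and asks the server for just the data:

<form action="/search" method="get" id="search-form">
  <input type="search" name="q">
  <button type="submit">Search</button>
</form>
<div id="results"></div>

<script>
var form = document.getElementById('search-form');
if (form && window.fetch) {
  form.addEventListener('submit', function (event) {
    event.preventDefault();
    // Ask the server for the same resource, but as data rather than a whole page
    fetch(form.action + '?q=' + encodeURIComponent(form.elements.q.value), {
      headers: { 'Accept': 'application/json' }
    }).then(function (response) {
      return response.json();
    }).then(function (data) {
      // The server did the real work; the client just displays the result
      document.getElementById('results').textContent = data.summary;
    }).catch(function () {
      // If anything goes wrong, fall back to the full page refresh
      form.submit();
    });
  });
}
</script>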

I also think that building in this way will take longer …at first. But then on the next project, it takes less time. And on the project after that, it takes less time again. From that perspective, it’s similar to switching from tables for layout to using CSS, or switching from building fixed-width sites to responsive design: the initial learning curve is steep, but then it gets easier over time, until it simply becomes normal.

But fundamentally, Stuart is right. Developers don’t like to violate the DRY principle: Don’t Repeat Yourself. Writing code for the server environment, and then writing very similar code for the browser—I mean browsers—is a bad code smell.

Here’s the harsh truth: building websites with progressive enhancement is not convenient.

Building a client-side web thang that requires JavaScript to work is convenient, especially if you’re using a framework like Angular or Ember. In fact, that’s the main selling point of those frameworks: developer convenience.

The trade-off is that to get that level of developer convenience, you have to sacrifice the universal reach that the web provides, and limit your audience to the browsers that can run a pre-determined level of JavaScript. Many developers are quite willing to make that trade-off.

Developer convenience is a very powerful and important force. I wish that progressive enhancement could provide the same level of developer convenience offered by Angular and Ember, but right now, it doesn’t. Instead, its benefits are focused on the end user, often at the expense of the developer.

Personally, I’m willing to take that hit. I’ve always maintained that, given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time. But I absolutely understand the mindset of developers who choose otherwise.

But perhaps there’s a way to cut this Gordian knot. What if you didn’t need to write your code twice? What if you could write code for the server and then run the very same code on the client?

This is the promise of isomorphic JavaScript. It’s a terrible name for a great idea.

For me, this is the most exciting aspect of Node.js:

With Node.js, a fast, stable server-side JavaScript runtime, we can now make this dream a reality. By creating the appropriate abstractions, we can write our application logic such that it runs on both the server and the client — the definition of isomorphic JavaScript.

Some big players are looking into this idea. It’s the thinking behind AirBnB’s Rendr.

Interestingly, the reason why many large sites are investigating this approach isn’t about universal access—quite often they have separate siloed sites for different device classes. Instead it’s about performance. The problem with having all of your functionality wrapped up in JavaScript on the client is that, until all of that JavaScript has loaded, the user gets absolutely nothing. Compare that to rendering an HTML document sent from the server, and the perceived performance difference is very noticeable.

Here’s the ideal situation:

  1. A browser requests a URL.
  2. The server sends HTML, which renders quickly, along with some mustard-cutting JavaScript.
  3. If the browser doesn’t cut the mustard, or JavaScript fails, fall back to full page refreshes.
  4. If the browser does cut the mustard, keep all the interaction in the client, just like a single page app.

With Node.js on the server, and JavaScript in the client, steps 3 and 4 could theoretically use the same code.
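That mustard-cutting step might look something like this sketch; the specific feature tests are only an example (the BBC’s well-known version checked for querySelector, localStorage and addEventListener):

<script>
// Step 2: a small mustard-cutting script sent along with the server-rendered HTML
if ('querySelector' in document &&
    'localStorage' in window &&
    'addEventListener' in window) {
  // Step 4: this browser cuts the mustard, so load the client-side enhancements
  var enhancements = document.createElement('script');
  enhancements.src = '/js/enhancements.js'; // hypothetical bundle
  enhancements.async = true;
  document.head.appendChild(enhancements);
}
// Step 3: otherwise do nothing; links and forms keep working with full page refreshes
</script>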

So why aren’t we seeing more of these holy-grail apps that achieve progressive enhancement without code duplication?

Well, partly it’s back to that question of controlling the server environment.

This is something that Nicholas Zakas tackled a year ago when he wrote about Node.js and the new web front-end. He proposes a third layer that sits between the business logic and the rendered output. By applying the idea of isomorphic JavaScript, this interface layer could be run on the server (as Node.js) or on the client (as JavaScript), while still allowing you to have the rest of your server environment running whatever programming language works for you.
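Here’s a rough sketch of what such an interface layer could look like: a rendering function that knows nothing about its environment, so Node.js can require it on the server and browsers can load it as a plain script (the module and markup are invented for illustration):

// render-comment.js: the shared interface layer
(function (root, factory) {
  if (typeof module === 'object' && module.exports) {
    module.exports = factory(); // Node.js on the server
  } else {
    root.renderComment = factory(); // plain script in browsers
  }
}(this, function () {
  return function renderComment(comment) {
    // Business logic (validation, storage) stays on the server, in whatever
    // language suits; this layer only turns data into markup
    return '<article class="comment">' +
      '<h3>' + comment.author + '</h3>' +
      '<p>' + comment.text + '</p>' +
      '</article>';
  };
}));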

It’s still early days for this kind of thinking, and there are lots of stumbling blocks—trying to write JavaScript that can be executed on both the server and the client isn’t so easy. But I’m pretty excited about where this could lead. I love the idea of building in a way that provides the performance and universal access of progressive enhancement, while also providing the developer convenience of JavaScript frameworks.

In the meantime, building with progressive enhancement may have to involve a certain level of inconvenience and duplication of effort. It’s a price I’m willing to pay, but I wish I didn’t have to. And I totally understand that others aren’t willing to pay that price.

But while the mood might currently seem to be in favour of using monolithic JavaScript frameworks to build client-side apps that rely on JavaScript in browsers, I think that the tide might change if we started to see poster children for progressive enhancement.

Three years ago, when I was trying to convince clients and fellow developers that responsive design was the way to go, it was a hard sell. It reminded me of trying to sell the benefits of using web standards instead of using tables for layout. Then, just as Doug’s redesign of Wired and Mike’s redesign of ESPN helped sell the idea of CSS for layout, the Filament Group’s work on the Boston Globe made it a lot easier to sell the idea of responsive design. Then Paravel designed a responsive Microsoft homepage and the floodgates opened.

Now …who wants to do the same thing for progressive enhancement?


Responses

Scott Jehl

@adactiojournal if only this site/thang would go online.. we’d have a nice example for your question at the end… :-/

# Posted by Scott Jehl on Thursday, October 23rd, 2014 at 2:01pm

Aaron Gustafson

Late last week, Josh Korr, a project manager at Viget, posted at length about what he sees as a fundamental flaw with the argument for progressive enhancement. In reading the post, it became clear to me that Josh really doesn’t have a good grasp on progressive enhancement or the reasons its proponents think it’s a good philosophy to follow. Despite claiming to be “an expert at spotting fuzzy rhetoric and teasing out what’s really being said”, Josh makes a lot of false assumptions and inferences. My response would not have fit in a comment, so here it is…

Before I dive in, it’s worth noting that Josh admits that he is not a developer. As such, he can’t really speak to the bits where the rubber really meets the road with respect to progressive enhancement. Instead, he focuses on the argument for it, which he sees as a purely moral one… and a flimsy moral one at that.

I’m also unsure as to how Josh would characterize me. I don’t think I fit his mold of PE “hard-liners”, but since I’ve written two books and countless articles on the subject and he quotes me in the piece, I’ll go out on a limb and say he probably thinks I am.

Ok, enough with the preliminaries, let’s jump over to his piece…

Right out of the gate, Josh demonstrates a fundamental misread of progressive enhancement. If I had to guess, it probably stems from his source material, but he sees progressive enhancement as a moral argument:

It’s a moral imperative that everything on the web should be available to everyone everywhere all the time. Failing to achieve — or at least strive for — that goal is inhumane.

Now he’s quick to admit that no one has ever explicitly said this, but this is his takeaway from the articles and posts he’s read. It’s a pretty harsh, black & white, you’re either with us or against us sort of statement that has so many people picking sides and lobbing rocks and other heavy objects at anyone who disagrees with them. And everyone he quotes in the piece as an example of why he thinks this is progressive enhancement’s central conceit is much more of an “it depends” sort of person.

To clarify, progressive enhancement is neither moral nor amoral. It’s a philosophy that recognizes the nature of the Web as a medium and asks us to think about how to build products that are robust and capable of reaching as many potential customers as possible. It isn’t concerned with any particular technology, it simply asks that we look at each tool we use with a critical eye and consider both its benefits and drawbacks. And it’s certainly not anti-JavaScript.

I could go on, but let’s circle back to Josh’s piece. Off the bat he makes some pretty bold claims about what he intends to prove in this piece:

  1. Progressive enhancement is a philosophical, moral argument disguised as a practical approach to web development.
  2. This makes it impossible to engage with at a practical level.
  3. When exposed to scrutiny, that moral argument falls apart.
  4. Therefore, if PEers can’t find a different argument, it’s ok for everyone else to get on with their lives.

For the record, I plan to address his arguments quite practically. As I mentioned, progressive enhancement is not solely founded on morality, though that can certainly be viewed as a facet. The reality is that progressive enhancement is quite pragmatic, addressing the Web as it exists not as we might hope that it exists or how we experience it.

Over the course of a few sections—which I wish I could link to directly, but alas, the headings don’t have unique ids—he examines a handful of quotes and attempts to tease out their hidden meaning by following the LSAT’s Logic Reasoning framework. We’ll start with the first one.

Working without JavaScript

Statement:

  • “When we write JavaScript, it’s critical that we recognize that we can’t be guaranteed it will run.” — Aaron Gustafson
  • “If you make your core tasks dependent on JavaScript, some of your potential users will inevitably be left out in the cold.” — Jeremy Keith

Unstated assumptions:

  • Because there is some chance JavaScript won’t run, we must always account for that chance.
  • Core tasks can always be achieved without JavaScript.
  • It is always bad to ignore some potential users for any reason.

His first attempt at teasing out the meaning of these statements comes close, but ignores some critical word choices. First off, neither Jeremy nor I speak in absolutes. As I mentioned before, we (and the other folks he quotes) all believe that the right technical choices for a project depend specifically on the purpose and goals of that specific project. In other words it depends. We intentionally avoid absolutist words like “always” (which, incidentally, Josh has no problem throwing around, on his own or on our behalf).

For the development of most websites, the benefits of following a progressive enhancement philosophy far outweigh the cost of doing so. I’m hoping Josh will take a few minutes to read my post on the true cost of progressive enhancement in relation to actual client projects. As a project manager, I hope he’d find it enlightening and useful.

It’s also worth noting that he’s not considering the reason we make statements like this: Many sites rely 100% on JavaScript without needing to. The reasons why sites (like news sites, for instance) are built to be completely reliant on a fragile technology are somewhat irrelevant. But what isn’t irrelevant is that it happens. Often. That’s why I said “it’s critical that we recognize that we can’t be guaranteed it will run” (emphasis mine). A lack of acknowledgement of JavaScript’s fragility is one of the main problems I see with web development today. I suspect Jeremy and everyone else quoted in the post feels exactly the same. To be successful in a medium, you need to understand the medium. And the (sad, troubling, interesting) reality of the Web is that we don’t control a whole lot. We certainly control a whole lot less than we often believe we do.

As I mentioned, I disagree with his characterization of the argument for progressive enhancement being a moral one. Morality can certainly be one argument for progressive enhancement, and as a proponent of egalitarianism I certainly see that. But it’s not the only one. If you’re in business, there are a few really good business-y reasons to embrace progressive enhancement:

  • Legal: Progressive enhancement and accessibility are very closely tied. Whether brought by legitimate groups or opportunists, lawsuits over the accessibility of your web presence can happen; following progressive enhancement may help you avoid them.
  • Development Costs: As I mentioned earlier, progressive enhancement is a more cost-effective approach, especially for long-lived projects. Here’s that link again: The True Cost of Progressive Enhancement.
  • Reach: The more means by which you enable users to access your products, information, etc., the more opportunities you create to earn their business. Consider that no one thought folks would buy big-ticket items on mobile just a few short years ago. Boy, were they wrong. Folks buy cars, planes, and more from their tablets and smartphones on the regular these days.
  • Reliability: When your site is down, not only do you lose potential customers, you run the risk of losing existing ones too. There have been numerous incidents where big sites got hosed due to JavaScript dependencies and they didn’t have a fallback. Progressive enhancement ensures users can always do what they came to your site to do, even if it’s not the ideal experience.

Hmm, no moral arguments for progressive enhancement there… but let’s continue.

Some experience vs. no experience

Statement:

  • “[With a PE approach,] Older browsers get a clunky experience with full page refreshes, but that’s still much, much better than giving them nothing at all.” — Jeremy Keith
  • “If for some reason JavaScript breaks, the site should still work and look good. If the CSS doesn’t load correctly, the HTML content should still be there with meaningful hyperlinks.” — Nick Pettit

Unstated assumptions:

  • A clunky experience is always better than no experience.
  • HTML content — i.e. text, images, unstyled forms — is the most important part of most websites.

You may be surprised to hear that I have no issue with Josh’s distillation here. Clunky is a bit of a loaded word, but I agree that an experience is better than no experience, especially for critical tasks like checking your bank account, registering to vote, making a purchase from an online shop. In my book, I talk a little bit about a strange thing we experienced when A List Apart stopped delivering CSS to Netscape Navigator 4 way back in 2001:

We assume that those who choose to keep using 4.0 browsers have reasons for doing so; we also assume that most of those folks don’t really care about “design issues.” They just want information, and with this approach they can still get the information they seek. In fact, since we began hiding the design from non–compliant browsers in February 2001, ALA’s Netscape 4 readership has increased, from about 6% to about 11%.

Folks come to our web offerings for a reason. Sometimes it’s to gather information, sometimes it’s to be entertained, sometimes it’s to make a purchase. It’s in our best interest to remove every potential obstacle that can preclude them from doing that. That’s good customer service.

Project priorities

Statement:

  • “Question any approach to the web where fancy features for a few are prioritized & basic access is something you’ll ‘get to’ eventually.” — Tim Kadlec

Unstated assumptions:

  • Everything beyond HTML content is superfluous fanciness.
  • It’s morally problematic if some users cannot access features built with JavaScript.

Not to put words in Tim’s mouth (like Josh is here), but what Tim’s quote is discussing is hype-driven (as opposed to user-centered) design. We (as developers) often prioritize our own convenience/excitement/interest over our users’ actual needs. It doesn’t happen all the time (note I said often), but it happens frequently enough to require us to call it out now and again (as Tim did here).

As for the “unstated assumptions”, I know for a fact that Tim would never call “everything beyond HTML” superfluous. What he is saying is that we should question—as in weigh the pros and cons of—each and every design pattern and development practice we consider. It’s important to do this because there are always tradeoffs. Some considerations that should be on your list include:

  • Download speed;
  • Time to interactivity;
  • Interaction performance;
  • Perceived performance;
  • Input methods;
  • User experience;
  • Screen size & orientation;
  • Visual hierarchy;
  • Aesthetic design;
  • Contrast;
  • Readability;
  • Text equivalents of rich interfaces for visually impaired users and headless UIs;
  • Fallbacks; and
  • Copywriting.

This list is by no means exhaustive nor is it in any particular order; it’s what came immediately to mind for me. Some interfaces may have fewer or more considerations as each is different. And some of these considerations might be in opposition to others depending on the interface. It’s critical that we consider the implications of our design decisions by weighing them against one another before we make any sort of decision about how to progress. Otherwise we open ourselves up to potential problems and the cost of changing things goes up the further into a project we are:

The cost of changing your mind goes up the further into any project you are. Just ask any contractor you hire to work on your house.

As a project manager, I’m sure Josh understands this reality.

As to the “morally problematic” bit, I’ll refer back to my earlier discussion of business considerations. Sure, morality can certainly be part of it, but I’d argue that it’s unwise to make assumptions about your users regardless. It’s easy to fall into the trap of thinking that all of our users are like us (or like the personas we come up with). My employer, Microsoft, makes a great case for why we should avoid doing this in their Inclusive Design materials:

When we design only for others like us, we exclude everyone who is not like us.

If you’re in business, it doesn’t pay to exclude potential customers (or alienate current ones).

Erecting unnecessary barriers

Statement:

  • “Everyone deserves access to the sum of all human knowledge.” — Nick Pettit
  • “[The web is] built with a set of principles that — much like the principles underlying the internet itself — are founded on ideas of universality and accessibility. ‘Universal access’ is a pretty good rallying cry for the web.” — Jeremy Keith
  • “The minute we start giving the middle finger to these other platforms, devices and browsers is the minute where the concept of The Web starts to erode. Because now it’s not about universal access to information, knowledge and interactivity. It’s about catering to the best of breed and leaving everyone else in the cold.” — Brad Frost

Unstated assumptions:

  • What’s on the web comprises the sum of human knowledge.
  • Progressive enhancement is fundamentally about universal access to this sum of human knowledge.
  • It is always immoral if something on the web isn’t available to everyone.

I don’t think anyone quoted here would argue that the Web (taken in its entirety) is “the sum of all human knowledge”—Nick, I imagine, was using that phrase somewhat hyperbolically. But there is a lot of information on the Web folks should have access to, whether from a business standpoint or a legal one. What Nick, Jeremy, and Brad are really highlighting here is that we often make somewhat arbitrary design & development decisions that can block access to useful or necessary information and interactions.

In my talk Designing with Empathy (slides), I discussed “mystery meat” navigation. I can’t imagine any designer sets out to make their site difficult to navigate, but we are influenced by what we see (and are inspired by) on the web. Some folks took inspiration from web-based art projects like this Toyota microsite:

On Toyota’s Mind is a classic example of mystery meat navigation. It’s a Flash site and you can navigate when you happen to mouse over “hotspots” in the design.

Though probably not directly influenced by On Toyota’s Mind, Yeshiva of Flatbush was certainly influenced by the concept of “experiential” (which is a polite way of saying “mystery meat”) navigation.

Yeshiva of Flatbush uses giant circles for their navigation. Intuitive, right?

That’s a design/UX example, but development is no different. How many Single Page Apps have you seen out there that really didn’t need to be built that way? Dozens? We often put the cart before the horse and decide to build a site using a particular stack or framework without even considering the type of content we’re dealing with or whether that decision is in the best interest of the project or its end users. That goes directly back to Tim’s earlier point.

Progressive enhancement recognizes that experience is a continuum and we all have different needs when accessing the Web. Some are permanent: Low vision or blindness. Some are temporary: Imprecise mousing due to injury. Others are purely situational: Glare when your users are outside on a mobile device or have turned their screen brightness down to conserve battery. When we make our design and development decisions in the service of the project and the users who will access it, everyone wins.

Real answers to real questions

In the next section, Josh tries to say we only discuss progressive enhancement as a moral imperative. Clearly I don’t (and would go further to say no one else who was quoted does either). He argues that ours is “a philosophical argument, not a practical approach to web development”. I call bullshit. As I’ve just discussed in the previous sections, progressive enhancement is a practical, fiscally-responsible, developmentally robust philosophical approach to building for the Web.

But let’s look at some of the questions he says we don’t answer:

“Wait, how often do people turn off JavaScript?”

Folks turning off JavaScript isn’t really the issue. It used to be, but that was years ago. I discussed the misconception that this is still a concern a few weeks ago. The real issue is whether or not JavaScript is available. Obviously your project may vary, but the UK government pegged their non-JavaScript usage at 1.1%. The more interesting bit, however, was that only 0.2% of their users fell into the “Javascript off or no JavaScript support” camp. 0.9% of their users should have gotten the JavaScript-based enhancement on offer, but didn’t. The potential reasons are myriad. JavaScript is great, but you can’t assume it’ll be available.

“I’m not trying to be mean, but I don’t think people in Sudan are going to buy my product.”

This isn’t really a question, but it is the kinda thing I hear every now and then. An even more aggressive and ill-informed version I got was “I sell TVs; blind people don’t watch TV”. As a practical person, I’m willing to admit that your organization probably knows its market pretty well. If your products aren’t available in certain regions, it’s probably not worth your while to cater to folks in that region. But here’s some additional food for thought:

  • When you remove barriers to access for one group, you create opportunities for others. A perfect example of this is the curb cut. Curb cuts were originally created to facilitate folks in wheelchairs getting across the road. In creating curb cuts, we’ve also enabled kids to ride bicycles more safely on the sidewalk, delivery personnel to more easily move large numbers of boxes from their trucks into buildings, and parents to more easily cross streets with a stroller. Small considerations for one group pay dividends to more. What rational business doesn’t want to enable more folks to become customers?
  • Geography isn’t everything. I’m not as familiar with specific design considerations for Sudanese users, but since about 97% of Sudanese people are Muslim, let’s tuck into that. Ignoring translations and right-to-left text, let’s just focus on cultural sensitivity. For instance, a photo of a muscular, shirtless guy is relatively acceptable in much of the West, but would be incredibly offensive to a traditional Muslim population. Now your target audience may not be 100% Muslim (nor may your content lend itself to scantily-clad men), but if you are creating sites for mass consumption, knowing this might help you art direct the project better and build something that doesn’t offend potential customers.

Reach is incredibly important for companies and is something the Web enables quite easily. To squander that—whether intentionally or not—would be a shame.

Failures of understanding

Josh spends the next section discussing what he views as failures of the argument for progressive enhancement. He’s of course, still debating it as a purely moral argument, which I think I’ve disproven at this point, but let’s take a look at what he has to say…

The first “fail” he casts on progressive enhancement proponents is that we “are wrong about what’s actually on the Web.” Josh offers three primary offerings on the Web:

  • Business and personal software, both of which have exploded in use now that software has eaten the world and is accessed primarily via the web
  • Copyrighted news and entertainment content (text, photos, music, video, video games)
  • Advertising and marketing content

This is the fundamental issue with seeing the Web only through the lens of your own experience. Of course he would list software as the number one thing on the Web—I’m sure he uses Basecamp, Harvest, GitHub, Slack, TeamWork, Google Docs, Office 365, or any of a host of business-related Software as a Service offerings every day. As a beneficiary of fast network speeds, I’m not at all surprised that entertainment is his number two: Netflix, Hulu, HBO Go/Now… It’s great to be financially-stable and live in the West. And as someone who works at a web agency, of course advertising would be his number three. A lot of the work Viget, and most other agencies for that matter, does is marketing-related; nothing wrong with that. But the Web is so much more than this. Here’s just a fraction of the stuff he’s overlooked:

  • eCommerce,
  • Social media,
  • Banks,
  • Governments,
  • Non-profits,
  • Small businesses,
  • Educational institutions,
  • Research institutions,
  • Religious groups,
  • Community organizations, and
  • Forums.

It’s hard to find figures on anything but porn—which incidentally accounts for somewhere between 4% and 35% of the Web, depending on who you ask—but I have to imagine that these categories he’s overlooked probably account for the vast majority of “pages” on the Web even if they don’t account for the majority of traffic on it. Of course, as of 2014, the majority of traffic on the Web was bots, so…

The second “fail” he identifies is that our “concepts of universal access and moral imperatives… make no sense” in light of “fail” number one. He goes on to provide a list of things he seems to think we want even though advocating for progressive enhancement (and even universal access) doesn’t mean advocating for any of these things:

  • All software and copyrighted news/entertainment content accessed via the web should be free. and Netflix, Spotify, HBO Now, etc. should allow anyone to download original music and video files because some people don’t have JavaScript. I’ve never heard anyone say that… ever. Advocating a smart development philosophy doesn’t make you anti-copyright or against making money.
  • Any content that can’t be accessed via old browsers/devices shouldn’t be on the web in the first place. No one made that judgement. We just think it behooves you to increase the potential reach of your products and to have a workable fallback in case the ideal access scenario isn’t available. You know, smart business decisions.
  • Everything on the web should have built-in translations into every language. This would be an absurd idea given that the number of languages in use on this planet tops 6,500. Even if you consider that 2,000 of those have fewer than 1,000 speakers, it’s still absurd. I don’t know anyone who would advocate for translation to every language.1
  • Honda needs to consider a universal audience for its marketing websites even though (a) its offline advertising is not universal, and (b) only certain people can access or afford the cars being advertised. To his first point, Honda actually does offline advertising in multiple languages. They even issue press releases mentioning it: “The newspaper and radio advertisements will appear in Spanish or English to match the primary language of each targeted media outlet.” As for his second argument… making assumptions about target audience and who can or cannot afford your product seems pretty friggin’ elitist; it’s also incredibly subjective. For instance, we did a project for a major investment firm where we needed to support Blackberry 4 & 5 even though there were many more popular smartphones on the market. The reason? They had several high-dollar investors who loved their older phones. You can’t make assumptions.
  • All of the above should also be applied to offline software, books, magazines, newspapers, TV shows, CDs, movies, advertising, etc. Oh, I see, he’s being intentionally ridiculous.

I’m gonna skip the third fail since it presumes morality is the only argument progressive enhancement has and then chastises the progressive enhancement community for not spending time fighting for equitable Internet access and net neutrality and against things like censorship (which, of course, many of us actually do).

In his closing section, Josh talks about progressive enhancement moderates and he quotes Matt Griffin on A List Apart:

One thing that needs to be considered when we’re experimenting … is who the audience is for that thing. Will everyone be able to use it? Not if it’s, say, a tool confined to a corporate intranet. Do we then need to worry about sub-3G network users? No, probably not. What about if we’re building on the open web but we’re building a product that is expressly for transferring or manipulating HD video files? Do we need to worry about slow networks then? … Context, as usual, is everything.

In other words, it depends, which is what we’ve all been saying all along.

I’ll leave you with these facts:

  • Progressive enhancement has many benefits, not the least of which are resilience and reach.
  • You don’t have to like or even use progressive enhancement, but that doesn’t detract from its usefulness.
  • If you subscribe to progressive enhancement, you may have a project (or several) that aren’t really good candidates for it (e.g., online photo editing software).
  • JavaScript is a crucial part of the progressive enhancement toolbox.
  • JavaScript availability is never guaranteed, so it’s important to consider offering fallbacks for critical tasks.
  • Progressive enhancement is neither moral nor amoral; it’s just a smart way to build for the Web.

Is progressive enhancement necessary to use on every project?

No.

Would users benefit from progressive enhancement if it was followed on more sites than it is now?

Heck yeah.

Is progressive enhancement right for your project?

It depends.

My sincere thanks to Sara Soueidan, Baldur Bjarnason, Jason Garber, and Tim Kadlec for taking the time to give me feedback on this piece.

  1. Of course, last I checked, over 55% of the Web was in English and just shy of 12% of the world speaks English, so…

remysharp.com

This post was originally written in 2015, but upon re-reading it today, it still (just about) holds up, so I finally hit publish.

I had thought that an EdgeConf panel would be about developers not using JavaScript because they were more interested in building high end web apps, full of WebRTC, Web Audio and the like. But it’s not.

I had the pleasure of introducing the Progressive Enhancement panel and contributing to the panel in 2015. For my introduction, I ran some “research” and did some pondering about what exactly is progressive enhancement.


Here’s the thing: after getting responses from 800+ developers (on a Twitter poll), I’ve come to realise that most developers, or certainly everyone following me, everyone watching (the EdgeConf stream), everyone reading, see progressive enhancement as a good thing. The “right thing” to do. They understand that it can deliver the web site’s content to a wider audience. There’s no doubt.

There are accessibility benefits and SEO benefits. SEO, I’ve heard directly from developers, is one way that the business has bought in to taking a PE approach to development.

But the truth is: progressive enhancement is not part of the workflow.

What is Progressive Enhancement?

Well…it’s a term made up by Steve Champeon, who used it to describe the techniques he (or he and his team) were using to build web sites instead of taking a graceful degradation approach.

As such, there’s no one single line that defines progressive enhancement. However, Wikipedia defines it as:

[progressive enhancement] allows everyone to access the basic content and functionality of a web page, using any browser or Internet connection

Graceful degradation works the other way around, in that the complete functionality is delivered to the browser, and edge cases and “older browsers” (not meeting the technical requirements) degrade down to (potentially) less functionality.

The problem is that, based on a survey of my own followers (that’s to say people likely to have similar interests and values when it comes to web dev), 25% of 800 developers still believe that progressive enhancement is simply making the site work without JavaScript enabled.

How do you make it work without JavaScript?

I can imagine that anyone starting out in web development might find this question pretty daunting. They’ve been pressed with solving some complicated problem, and they’ve finally worked out how to make it work using a marriage of StackOverflow copy & pasting and newly gained advice from books and stuff…but now, all of a sudden: make it work without the code 😱

Which explains the silver bullet response that I’ve heard time after time: “how would a WebRTC chat site work?” …it wouldn’t.

In fact, here is The Very Clever Jake Archibald’s excellent SVGOMG web site…with JavaScript turned off, watch as frustration boils over and I’m left to throw my computer out of the window…

Putting aside silly jokes, “how does a web site work without JavaScript?” isn’t really a good question. In fact, it’s entirely out of context.

A better question to ask could be: how do we deliver a baseline web site that’s usable with the most minimal of requirements?

Very much what Jeremy Keith has said recently in response to criticism that it’s impossible to progressively enhance everything with today’s expectations. Progressive enhancement is:

…figuring out what is core functionality and what is an enhancement.

So how does the web community re-frame its thinking and look at progressive enhancement as the baseline that you build upon?

Why does it matter?

Today many developers are writing “thick clients”, that is, JavaScript driving a lot, if not all, of the functionality and presentation in the browser.

They do it by delivering and rendering views in the browser. The big upside of this is that the site is extremely fast to respond to the user’s input. The other big benefit is that there are a good number of frameworks (React, Vue, Angular, Polymer to name the “biggies” of today) that lend themselves greatly to client side MVC, i.e. full application logic in the client side code.

The problem is that the frameworks will often (try to) reinvent fundamental building blocks of a web experience. A few simple/classic examples:

  • The link isn’t a link at all, which means you can’t open it in a new tab, or copy it, or share it…or even click it the way you’d expect to
  • The button isn’t a button
  • You can’t share a link to the page you’re looking at (because it’s all client side rendered and doesn’t have a link)
  • Screen readers can’t navigate the content properly

I recently wrote about how I had failed the anchor. It pretty much touched on all the points above.

This doesn’t mean this isn’t possible, just that it’s often forgotten, in the same way that Flash was often labelled as inaccessible. That wasn’t true: it was possible to make Flash accessible; it’s just that the default development path didn’t include it.

A more extreme example of this was seen in Flipboard’s mobile site. Importantly: mobile site. Flipboard renders the entire page using a canvas element. I can’t speak for the accessibility of the site, but on mobile it performs beautifully. It feels…“native”. And with that, it’s also broken. I can’t copy links, and I can’t copy text - akin to the Flash apps and even Java applet days. It looks great, but it doesn’t feel “of the web”†.

† caveat: this was true in 2015, it’s possible…likely it’s been thrown away and fixed…I hope.

The problem is: browsers are pretty poor when compared to the proprietary and closed platforms they’re constantly compared to.

There’s pressure (from SF/Apple/who knows) to deliver web sites that feel “native” (no, I won’t define this) and browsers are always playing catch up with native, proprietary platforms: this is a fact.

Native media elements, native sockets, native audio, native push notifications, native control over network - this all took its merry time to get to the browser. So when a company decides that the tried and tested approach to styling a list of articles won’t give them the unique UX they want and the 60fps interaction, then of course they’re going to bake up their own technology (in Flipboard’s case, re-inventing wheels with canvas…the exact same way Bespin did back in its day).

But…how would a thick-client work without JavaScript?

Angular, for instance, did not have a developer story for how to develop a site with progressive enhancement as a baseline.

Does this mean it’s not possible? I don’t think so. Without the stories though, developers will gravitate towards solved problems (understandably).

What does this story look like when a framework is a prerequisite of the project?

Web Components

Web Components are a hot debate topic. They could cause all kinds of mess on the web. On the other hand, they’re also a perfect fit for progressive enhancement.

Take the following HTML:

<input type="text" name="creditcard" required autocomplete="cc-number">

Notice that I’m not using the pattern attribute because it’s hard to match correctly to credit cards (they might have spaces between groups of numbers, or dashes, or not at all).

There’s also no validation, and the first number also tells us what kind of card is being used (4 indicates a Visa card for instance).

A web component could progressively enhance the element similarly to the way the browser would natively enhance type="date" for instance.

<stripe-cc-card>
  <input type="text" name="creditcard" required autocomplete="cc-number">
</stripe-cc-card>
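A minimal sketch of how that enhancement might be wired up (stripe-cc-card is a made-up element, and the card detection is deliberately naive):

class StripeCCCard extends HTMLElement {
  connectedCallback() {
    // The input keeps working on its own; we only layer behaviour on top
    const input = this.querySelector('input[autocomplete="cc-number"]');
    if (!input) return;
    input.addEventListener('input', () => {
      const digits = input.value.replace(/[\s-]/g, '');
      // A leading 4 suggests a Visa card
      this.dataset.cardType = digits.charAt(0) === '4' ? 'visa' : 'unknown';
      input.setCustomValidity(/^\d{12,19}$/.test(digits) ? '' : 'Check the card number');
    });
  }
}
customElements.define('stripe-cc-card', StripeCCCard);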

I wonder, are web components the future of progressive enhancement?

Potential problems on the horizon

Developers are inherently lazy. It’s what makes them/us optimise our workflows and become very good at solving problems. We re-use what’s known to work and tend to eke out the complex parts that we can “live without”. Sadly, this can be at the cost of accessibility and progressive enhancement.

I think there are some bigger potential problems on the horizon: ES6 - esnext (i.e. the future of JavaScript).

“But progressive enhancement has nothing to do with ES-whatever…”

Taking a step back for a moment. Say we’re writing an HTML only web site (no CSS or JS). But we want to use the latest most amazing native email validation:

<input type="email" required>

Simple. But…what happens if type="email" isn’t supported? Well, nothing bad. The element will be a text element (and we can validate on the server). The HTML doesn’t break.

JavaScript isn’t quite the same, but we can code defensively, using feature detection and polyfills where appropriate.
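For instance, a tiny sketch of defensive coding: detect the feature, and only reach for a (hypothetical) polyfill when it’s missing:

<script>
// Feature detection: does this browser understand type="email"?
var probe = document.createElement('input');
probe.setAttribute('type', 'email');
var supportsEmail = (probe.type === 'email');

// Only load the polyfill when the feature is missing
if (!supportsEmail) {
  var polyfill = document.createElement('script');
  polyfill.src = '/js/email-validation-polyfill.js'; // hypothetical path
  document.head.appendChild(polyfill);
}
</script>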

ES6 has features that break this design: syntax-breaking features that cannot exist alongside our ES5 and cannot be polyfilled. They must be transpiled.

There’s currently talk of smart pipelines that can deliver polyfilled code to “old” browsers and light native ES-x features to newer browsers. Though, I would imagine the older browsers would be running on older machines and therefore wouldn’t perform well with more code in the JavaScript bundles. New browsers running on new machines, by comparison, are probably faster and more capable than their elderly peers at running lots of code. IDK, just a thought.

Syntax breaking

There’s a small number of ES6 features that are syntax breaking, the “arrow function” in particular.

This means that if an arrow function is encountered by a browser that doesn’t support ES6 arrows, it’ll cause a syntax error. If the site is following best practice and combining all its JavaScript into a single file, that means all of its JavaScript just broke (I’ve personally seen this on JS Bin when we used jshint, which uses ES5 setters and broke IE8).

I’ve asked people on the TC39 group and JavaScript experts as to what the right approach here is (bear in mind this is still early days).

The answer was a mix of:

  • Use feature detection (including for syntax breaking features) and conditionally load the right script, either the ES5 or ES6 (there’s a sketch of this after the list)
  • Transpile your ES6 to ES5 and make both available
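The feature detection route might look something like this sketch (the bundle names are made up): try to parse a bit of new syntax, catch the error if the browser can’t, and load the right script:

<script>
var supportsES6 = false;
try {
  // Parsing (not running) an arrow function; older engines throw a SyntaxError we can catch
  new Function('() => {}');
  supportsES6 = true;
} catch (e) { }

var bundle = document.createElement('script');
bundle.src = supportsES6 ? '/js/app.es6.js' : '/js/app.es5.js';
document.head.appendChild(bundle);
</script>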

This seems brittle, and the more complexity there is, the more likely it becomes that, as time goes by, new projects will leave out the transpile part and forget about supporting older browsers - or even newer browsers that don’t ship with ES6 support (perhaps because the VM footprint is smaller and has to work in a super low powered environment).

JavaScript doesn’t exhibit the same resilience that HTML & CSS do, so the fear is that it’ll leave users who can’t upgrade faced with a broken or blank page.

Is there a workflow that solves this? Or are we forced to support two incompatible languages on the web?

Thanks for reading. As usual, it depends. In fact, that it does depend, applies to every single project I work on.


# Wednesday, July 24th, 2019 at 12:00pm


Related posts

Progressive disclosure defaults

If you’re going to toggle the display of content with CSS, make sure the more complex selector does the hiding, not the showing.

Schooltijd

Going back to school in Amsterdam.

Of the web

Baldur Bjarnason has written my mind.

The principle of most availability

Reframing the principle of least power.

Upgrades and polyfills

Apple’s policy of locking browser updates to operating system updates is bad for the web and bad for the planet.

Related links

Building a robust frontend using progressive enhancement - Service Manual - GOV.UK

Oh, how I wish that every team building for the web would use this sensible approach!


Developers Rail Against JavaScript ‘Merchants of Complexity’ - The New Stack

Perhaps the tide is finally turning against complex web frameworks.


Reckoning: Part 1 — The Landscape - Infrequently Noted

I want to be a part of a frontend culture that accepts and promotes our responsibilities to others, rather than wallowing in self-centred “DX” puffery. In the hierarchy of priorities, users must come first.

Alex doesn’t pull his punches in this four-part truth-telling:

  1. The Landscape
  2. Object Lesson
  3. Caprock
  4. The Way Out

The React anti-pattern of hugely bloated single-page apps has to stop. And we can stop it.

Success or failure is in your hands, literally. Others in the equation may have authority, but you have power.

Begin to use that power to make noise. Refuse to go along with plans to build YAJSD (Yet Another JavaScript Disaster). Engineering leaders look to their senior engineers for trusted guidance about what technologies to adopt. When someone inevitably proposes the React rewrite, do not be silent. Do not let the bullshit arguments and nonsense justifications pass unchallenged. Make it clear to engineering leadership that this stuff is expensive and is absolutely not “standard”.


HTML Web Components Can Have a Little Shadow DOM, As A Treat | Scott Jehl, Web Designer/Developer

This is an interesting thought from Scott: using Shadow DOM in HTML web components but only as a way of providing sort-of user-agent styles:

providing some default, low-specificity styles for our slotted light-dom HTML elements while allowing them to be easily overridden.


It’s about time I tried to explain what progressive enhancement actually is - Piccalilli

Progressive enhancement is a design and development principle where we build in layers which automatically turn themselves on based on the browser’s capabilities.

The idea of progressive enhancement is that everyone gets the perfect experience for them, rather than a pre-determined “perfect” experience from a design and development team.

