Something went wrong

Ways out of the JavaScript crisis

Smashing things against each other

There is much to learn when things break. Think of the Large Hadron Collider (LHC) at CERN in Switzerland. Scientists accelerate particles like protons and lead ions to enormous energies and SMASH them into each other. They make a huge mess and sift through the debris to find something valuable. Like the Higgs field and the Higgs boson (the “Goddamn particle”) that happens to give other particles their mass. It is no coincidence that CERN also invented the World Wide Web, which is the LHC’s precursor experiment (just kidding).

In the spirit of moving fast (at 99.9999991% of the speed of light) and breaking things, my Firefox browser is configured to block all cookies, ads, tracking and other privacy invasions. Firefox warns you that this “will cause websites to break”. This somehow reverses cause and effect, but indeed, some web sites self-destruct when the browser rejects cookies. As a web dev, I can handle that, for example by opening the site in a browser instance that allows cookies.

When the browser blocks cookies, it not only ignores the Cookie HTTP header and document.cookie, it also blocks access to the JavaScript storage APIs localStorage and sessionStorage. When a script merely accesses these objects, the script is terminated with an exception.

A script should prepare for the fact that access to localStorage and sessionStorage may throw an error. The precaution is to wrap the access in a try-catch.

For example, if you want to store the user’s theme preference, you can use sessionStorage in a fault-tolerant way:

let theme = 'dark'; // Whatever the user has chosen
try {
  sessionStorage.setItem('theme', theme);
} catch (error) {
  console.error('Could not access sessionStorage', error);
}

When reading the preference, you can query the system theme and override it with the user choice:

const systemTheme = matchMedia('(prefers-color-scheme: dark)').matches
  ? 'dark'
  : 'light';
let userTheme;
try {
  userTheme = sessionStorage.getItem('theme');
} catch (error) {
  console.error('Could not access sessionStorage', error);
}
const finalTheme = userTheme || systemTheme;
// Do something with finalTheme
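
If storage is accessed in several places, it may help to centralize the guard. Here is a minimal sketch of a reusable wrapper – the name safeSessionStorage is a hypothetical helper, not part of any library:

const safeSessionStorage = {
  get(key) {
    try {
      return sessionStorage.getItem(key);
    } catch (error) {
      console.error('Could not access sessionStorage', error);
      return null;
    }
  },
  set(key, value) {
    try {
      sessionStorage.setItem(key, value);
      return true;
    } catch (error) {
      console.error('Could not access sessionStorage', error);
      return false;
    }
  },
};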

Most browsers allow storing values using these APIs, so very few sites guard the access. While I do encourage you to program defensively, I reckon my browser configuration is an outlier.

Unchecked storage access alone does not cause havoc. The exception blows up the current stack, stopping the particular script task. In the debris of this explosion, nothing interesting can be found. Typically, most of the site features continue to work.

Next: Next.js

There is a class of web sites where such an innocuous JavaScript exception causes havoc: Sites built with Next.js, a framework based on React.js. (In theory, this is not specific to Next.js. In practice, it is.)

Let us smash browsers and sites together and record the debris! Take two sites you visit every day (or not): TikTok and Node.js. If your browser blocks storage access, nodejs.org is empty. Simple as that. Tiktok.com shows a sad robot that says: “Page not available. Sorry about that! Please try again later.”

What’s happening here? Next.js renders the HTML page and sends it to the browser. A significant amount of client-side JavaScript is loaded which “hydrates” the HTML DOM. In the most naive way, this means that React.js re-renders the whole page and updates the DOM if necessary.
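
In a typical client entry script, hydration boils down to a call like the following – a minimal sketch using React’s public hydrateRoot API; the App component and the root element are placeholders, not taken from any particular site:

import { hydrateRoot } from 'react-dom/client';
import { App } from './App';

// Attach React to the server-rendered HTML inside #root.
// If a component throws during hydration and nothing catches the error,
// React tears the rendered tree down again (see below).
hydrateRoot(document.getElementById('root'), <App />);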

If an exception occurs during this process, React will remove the server-rendered HTML from the document. This can be observed on nodejs.org.

But fear not. React allows you to catch errors using an error boundary. Like a try-catch, but for the React component tree. An error bubbles up to the nearest error boundary.

In the spirit of Graceful Degradation, you can render a meaningful fallback if an error happens in a certain component branch. You can take action to recover from that error. (The popular react-error-boundary package is actually quite powerful.)
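
For illustration, here is a minimal sketch of a scoped error boundary using the react-error-boundary package – the Comments widget and the fallback copy are made up for this example:

import { ErrorBoundary } from 'react-error-boundary';
import { Comments } from './Comments';

function CommentsSection() {
  return (
    // Errors thrown inside Comments are contained here;
    // the rest of the page keeps working.
    <ErrorBoundary fallback={<p>Comments could not be loaded.</p>}>
      <Comments />
    </ErrorBoundary>
  );
}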

In practice, React error boundaries are misused. In almost all cases, there is one (1) error boundary at the top of the React render tree. And the only thing it does is to show a pointless error message like “Something went wrong”. This can be observed on tiktok.com.

On vercel.com, the site of the company that makes Next.js, you get a more nerdy “Application error: a client-side exception has occurred (see the browser console for more information).”

Stoyan has a catchy name for this behavior:

You've heard of FOUC, FOUT, FOIT, FOFT... now say hello to FORC - Flash Of Readable Content. This phenomenon occurs when a perfectly readable web page is "hydrated" by JavaScript and an error in said hydration takes over the otherwise usable experience. FORC is pronounced "fork" as in "I'd like to stick a fork in my eye"

I haven’t mentioned blocked cookies because of their particular relevance. Blocking them is merely one way to trigger errors in JavaScript-heavy web sites. It allows you to observe how a site handles any JavaScript error. It’s the LHC way of examining JavaScript apps.

The JavaScript crisis

I’ve opened this article with an anecdote to illustrate the abysmal state of client-side JavaScript usage.

For some reason, in some case or other, client-side JavaScript will inevitably fail. There are plenty of intricate, piecemeal solutions for preventing this.

Even more problematic than error-prone JavaScript is the sheer amount of client-side JavaScript. Browsers have to download, parse, compile and execute the code. This blocks the rendering of the page. For mobile internet users in particular, the transmission of JavaScript costs money and its execution drains the battery.

As a result, web pages are unreliable, slow and unusable for many people. Writing more robust JavaScript and saving a few kilobytes are stopgap solutions. The overall solution is web architectures that minimize the reliance on client-side JavaScript.

This problem is well-known across the industry, with many people and companies working on improved architectures. Google, for example, has identified the performance problem and tries to address it with its metrics, the Core Web Vitals.

Despite these efforts, the situation is not improving, as shown by the HTTP Archive with its Web Almanac.

The community crisis

To stretch the metaphor even further, at the LHC, there are in fact two beams of particles accelerated and shot at each other.

Similarly, the web development community is divided into two camps on the issue of client-side JavaScript. They do not talk to each other, but ridicule each other.

The technical discord is overlaid with layers of social discord, effectively preventing the community from solving the crisis. As Jake Lazaroff described, prominent voices in the community are injecting toxicity into the discourse. This hurts web development, discourages beginners and prevents us from building better web sites.

In my opinion, the current state of client-side JavaScript usage on the web is so abysmal in part because the criticism of that state is so abysmal. As someone who is trying to improve the state, I don’t exempt myself from this verdict. If we want to make any progress, we need to start with this confession.

I’d like to discuss what I’m pretty sure doesn’t work and what I hope might work.

Camp mentality

I do acknowledge the frustration and anger on both sides, in particular on the side of JavaScript critics. They have been ignored, insulted and abused. I do understand how the two groups became estranged.

At the same time, I see righteousness, condescension and vilification. I see an “us vs. them” mentality. The camps are not talking to each other in any constructive way. This has to stop.

Given the deadlock, the web community needs to convene in a respectful and productive way, take criticism seriously and make radical changes.

Poor analysis and conspiracy theories

There are manifold reasons why error-prone, low-performing, JavaScript-heavy architectures have prevailed. It is important to understand why managers and developers choose certain web technologies so we can help them make better choices or improve the technology they choose.

There is a simple, popular and convenient explanation: A small cabal of JavaScript framework vendors, “merchants of complexity”, “fraudsters”, “grifters”, have conspired to sell us inferior architectures. Meta with React.js and Vercel with Next.js are named as the culprits.

Of course, there are powerful economic actors who want to sell their goods and paint them in the best possible light. Of course, we need to hold the billionaires behind frameworks accountable for poor performance, accessibility and usability. The mistakes of the framework makers and the shortcomings of their products are also responsible for the JavaScript crisis. They did not listen to criticism; they did too little, too late. They even tried to silence critics.

Nonetheless, the conspiracy theory is as simple as it is insufficient. It is downright dangerous since it spreads nothing but hate and actively prevents better analysis that is able to solve the crisis.

Tom Dale wrote in 2015 that “developers aren’t getting tricked by framework authors; they find the benefits worth it.” In 2023, Laurie Voss rejected the conspiracy theory and pointed out economic and social reasons, assuming managers and developers are rational economic actors rather than mere fraud victims.

In 2019, Charlie Owen described React in terms of Fordism. React fueled the commodification of the web. With React, Facebook introduced an assembly line that standardized the unit of labor, treating developers as factory workers instead of craft professionals.

Baldur Bjarnason followed the same line in his 2024 article. React and similar technologies allow capital to “play employees in different regions and fields against each other”. These technologies replace expensive specialists with more commodified generalists. “React and the component model standardises the software developer and reduces their individual bargaining power […] It helps [management and executives to] erase the various specialities – CSS, accessibility, standard JavaScript in the browser, to name a few – from the job market.”

All of these profound critiques manage entirely without conspiracy theories.

The Big Rewrite

New architectures are popping up that try to solve the shortcomings of the existing ones. They often focus on server-side rendering (SSR), Progressive Enhancement, web components or compilers to generate less JavaScript.

While this research is valuable, it typically creates new islands in the framework archipelago. The new techniques are often sold with the same zeal, marketing and superficiality as the previous JavaScript frameworks.

Marco Rogers described the industry’s situation as a Frontend Treadmill: whatever framework you choose, it will be obsolete in five years. If it still exists, it will have changed fundamentally. Developers are forced to keep up with this unhealthy pace.

Rewriting your frontend will not lead to the promised land, Rogers says. Instead, he advises to dive deep into the framework you’re already using. Also, learn fundamental web technologies that will outlive any framework.

Simplicity and complexity

The most common argument against client-side JavaScript solutions is that they bring unnecessary technical complexity. It’s a commonplace that everyone agrees with right away, no further explanation or analysis necessary. Just show the meme that depicts node_modules as the heaviest object in the universe and everyone in the audience will lol-sob.

I find the debate around complexity detached at best and harmful at worst. In almost all cases, the person who cries over-engineering has no insight into the process and the requirements. They claim they could solve the problem with a tenth of the code. Often they want to sell their own framework and want you to rewrite your frontend with it – see the previous section.

Debating complexity is pointless because it’s a subjective metric. Every developer has a different gut feeling about simplicity, complexity and the appropriate amount of complexity for a given task. When people try to find an objective definition, they come to wildly different results. And that’s okay.

Instead, we should focus on hard metrics from a user perspective. Performance, efficiency, compatibility, accessibility and fault-tolerance can be measured, tested and evaluated, automatically and manually.

Any amount of complexity is fine as long as these goals are met. This typically involves moving the complexity to the server, including the edge, where all forms of optimization and scaling are possible.

The dichotomy between User Experience (UX) and Developer Experience (DX)

Another cliché of JavaScript criticism is that modern web development favors developer experience (DX) to the disadvantage of user experience (UX). Instead of rolling up their sleeves, lazy developers allegedly choose technologies (frameworks in particular) that make their lives easy but are detrimental to their users. Modern development tools are accused of creating heaven for developers and hell for users.

In my opinion, UX vs. DX is a false dichotomy. Some people say there is an inherent inverse relationship between the two. More DX, less UX. I’d argue there is no necessary connection between the two.

Claiming that developers have put their convenience first is an implausible explanation for the current crisis. I don’t think established stacks like React and Next.js provide a good developer experience, let alone a superior one. I don’t think they still offer a productivity benefit.

Frameworks, dev servers, bundlers, transpilers/compilers, compile-to-JS languages, linters, testing tools and editors are hard to learn, hard to configure and perform poorly. We can’t complain about “tooling fatigue” and at the same time allege that developers favor DX over UX.

Much innovation is happening regarding developer experience. For better or worse, every tool in the frontend toolchain is being rewritten in Rust to make it faster. Each framework has spawned multiple meta-frameworks. Each framework has its own transpiler or compiler. (React recently introduced a compiler. Svelte 5 gets a new compiler. The Angular compiler was recently rewritten.)

I suppose these DX innovations have a mixed impact on UX. The new tools optimize and minimize the code, resulting in less, faster JavaScript. But improved DX may cause a rebound effect. Faster tools make it easier to deploy more and more client-side JavaScript, worsening the overall situation.

When your Rust-based build tool churns out 20 MB of client-side JavaScript in milliseconds and your dev server hot-reloads those 20 MB in milliseconds, you don’t feel the devastation this causes for your users.

This is the point where criticism needs to chime in. We need tools that make it easy to do the right thing and hard to impair the UX. Tools that inform the developer about the impact of a decision on performance and reliability.

Critics often dismiss the significance of developer experience for achieving the values they advocate. Developer experience as pure convenience is indeed worthless. But it’s probably the most effective way to guide and influence developer decisions. Frameworks in particular can bake in accessibility rules and principles like Progressive Enhancement. The default choices should be the right choices. The default, easy way should lead to a robust, fast site.

As Tom Dale noted in 2015, we need to reduce the cost of code. Frameworks should provide zero-cost abstractions.

The obsession with developer experience

While I reject the DX vs. UX dichotomy, I acknowledge that developers and their managers chase good developer experience. Sometimes obsessively and selfishly. That is why DX is a huge market, with millions in venture capital flowing into dev tooling that promises a small productivity benefit. DX is a major driving force in our industry.

Developers identify with their tools rather than with the values of their craft, such as usability and accessibility, or with business goals such as trust, excellent service and friendliness towards customers. Not to mention elegance, creativity and beauty.

Developers look for an integrated development environment: One language for everything (JavaScript/TypeScript), one server software for everything (Next.js, for example). This deceptive integration bulldozes important architectural distinctions: backend vs. frontend, HTML vs. JS, CSS vs. JS. JavaScript-heavy architectures have pulled HTML and CSS into their vortex. Now they are coming for every other aspect as well.

As Laurie Voss described, developers are hawks “greedily sucking up browser resources to save themselves time”, while users are doves “meekly trying to load a website and hoping it will run”. But Voss argues that a stable equilibrium between hawks and doves has established itself, and that we need to shift that equilibrium.

React and derailed frontends

Mocking React is yet another cliché. React has become a code word for everything that is wrong with JavaScript. In some circles you don’t have to provide any facts or arguments, you just have to mention that a site uses React to get laughs and sighs.

In fact, there is well-informed fundamental criticism of React. Josh Collinsworth wrote the seminal articles The self-fulfilling prophecy of React in 2022 and Things you forgot (or never knew) because of React in 2023.

In summary, React is sub-par by any measure and everyone in the industry should acknowledge that. Since React is the dominant JavaScript framework marketed by one of the most powerful IT corporations, it deserves its fair amount of criticism.

But React is not the single adversary, the end boss. It’s unfounded to blame “React” for JavaScript atrocities. The mere use of React does not explain why frontends derail and become slow, inaccessible and unusable.

For example, people found the new GitHub frontend, which is built with heavy client-side JavaScript, to be slow and buggy. Some quickly attributed this to the use of React. But as I wrote, the React library is not the sole cause of bad performance. It is comparatively slow and large. With client-only components, it promotes a development model that is known to be slow and fragile.

But React’s performance cost is usually buried under many architectural layers that multiply the problem. In the case of GitHub, React accounts for 67 of 675 KB of JavaScript on pages like Pull Requests.

For historical reasons, GitHub uses a wild mix of Primer React, React Router, Catalyst web components, lit-html, morphdom and Rails Turbo.

Several competing technologies and frontend architectures coexist on one page. I would identify this as the main problem. These pages perform horribly not because React is one cornerstone, but because inconsistent engineering decisions lead to a half-baked single-page application.

The GitHub case is a good example of how frontend projects deteriorate over time. According to Conway’s law, software projects mirror the organizational and communication structure of the design team – and all previous teams. Over the course of years, every real-world software project becomes a monument of shifts in product direction, power struggles, management and organizational restructuring, conflicting programming patterns and incomplete migrations.

Every industry hype, every JavaScript trend leaves its mark in the codebase at the expense of user experience. For every line of code written and vetted by well-meaning developers, there are ten lines of third-party code from component libraries, tracking scripts and advertisement.

A web framework is neither responsible for nor capable of preventing this dynamic. But it can provide a rigid architecture that mitigates the harm when developers (are forced to) go wild. Choosing boring, robust, fast, minimal, modular, replaceable, standardized technologies as the basis reduces the likelihood of frontend derailment.

Common goals

Almost 25 years ago, PHP provided one of the first affordable development environments for web applications. Symfony, Laravel, Django and Rails later established best practices for web applications.

The JavaScript community initially ignored these practices when single-page apps conquered the client. When Node.js became popular, Node-based frameworks adopted the practices, but also reinvented the wheel. Many Node “full-stack” frameworks today still leave out the frontend (“use React or Angular or whatever”).

After a decade of criticism, JavaScript folks realized that server-side rendering and logic was useful, after all. In recent years, “meta-frameworks” have rediscovered the distinction between server and client logic – JavaScript that runs on the server vs. JavaScript that runs on the client vs. JavaScript that runs on the server and client (“universal” JavaScript). They have rediscovered semantic markup, CSS-in-CSS (also known as CSS), forms and links.

The JavaScript community is roughly where PHP was in 2000. Which is a good thing. We have just scratched the surface of what a sensible use of JavaScript might look like. This involves rendering some pages statically, rendering some pages dynamically on the server, and rendering interactive “islands” on the client.

What I am missing and what could improve the situation are common goals across JavaScript frameworks:

  • Every starter project should meet certain performance metrics. General-purpose frameworks should be measured on common tasks. In addition to minuscule examples like TodoMVC, we need examples for typical tasks like Create, Read, Update, Delete (CRUD), complex forms with validation, as well as loading and streaming content.
  • The performance impact of developer decisions should be transparent. We already have the tools to quickly assess performance, from bundle size warnings to Lighthouse CLI and CI.

    Of course, a build tool alone cannot predict the final page performance. But the performance budget – the amount of HTML, critical CSS and initial JavaScript – is pretty much fixed if you are targeting an acceptable load time for common internet connections on common devices (see the sketch after this list).

  • There should be a visible separation between server and client code. Server-run JavaScript has a categorically different footprint than client-side JavaScript. At the same time, we need a smooth and safe transition between server-only components, client-only components and universal components.
  • Ship not literally zero kilobytes of client-side JavaScript, but the minimal feasible amount. Coding abstractions should come at negligible cost to the end users.
  • Frameworks should “magically disappear” in client code thanks to compilers. The remaining runtime shipped to the client should be minimal. Modern frameworks like Preact, Svelte and Solid have demonstrated that capable, high-level abstractions only require a few kilobytes of JavaScript.

    I understand that some frameworks today do not aggressively optimize for the first page view or for the size of a Hello World page. Many JavaScript-heavy sites never see the light of the World Wide Web, but run on intranets or even locally.

    Still, almost all frameworks have responded to criticism and made good progress on initial load performance. Angular, for example, now integrates server-side rendering (SSR) and static site generation (SSG). (According to my test, a standalone Hello World app amounts to 26.42 KB of client-side JavaScript. Angular 19.0.0-next.3, zoneless, without routing.)
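
To illustrate the budget point above: common build tools can already enforce such a limit at build time. Here is a minimal sketch for a webpack-based build – the 170 KB figure is an arbitrary example budget, not a recommendation from this article:

// webpack.config.js
module.exports = {
  performance: {
    hints: 'error', // turn budget violations into build errors
    maxEntrypointSize: 170 * 1024, // budget per entry point, in bytes
    maxAssetSize: 170 * 1024, // budget per emitted asset, in bytes
  },
};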

Primitives and APIs

After 25 years, we finally have a clue how to share and connect server logic and client-side interactivity in a meaningful way. Unfortunately, everyone is still working in silos.

There are many complementary and competing patterns, such as server-side rendering (SSR), static site generation (SSG), Islands, Hydration and Resumability. This Cambrian explosion is exciting. But we should also strive for convergence. These ideas should be translated into primitives and APIs shared between frameworks.

Each framework provides similar capabilities. Each framework now features a component model where “Lego bricks” structure the UI. Still, if you want to use two widgets from different vendors on one page, each one ships with its own version of React.

Build tools need to apply similar optimizations. Even though they are built on similar foundations (Webpack/Rspack, Rollup/Rolldown, esbuild, SWC etc.), there is zero interoperability.

There is some hope. Recently, the Vue.js community has become the steward of the entire JavaScript ecosystem. Vue is always-neutral Switzerland. Vue land is common land. Its tools have proven useful across all stacks, including Vite, Nitro, Vinxi, Vitest.

Having shared fundamental tools is nice, but standardized APIs are better. We need a high-level, interoperable, portable component model that supports server, client and universal rendering.

Your JavaScript code should be interoperable across runtimes and frameworks. There should be no vendor or framework lock-in. We should be able to copy components into any web project and they should play by the same rules.

To be useful, a component model would need to cover declarative HTML templates, content projection (think of slots and portals), CSS scoping and encapsulation, attaching JavaScript behavior, value and form binding, attribute/property type definitions and public JavaScript APIs. It should feature good defaults plus generic programmable APIs.

With HTML custom elements, we have a standardized API that allows us to integrate client-side DOM logic of different provenance. This is a great achievement without question. Many people are so excited about web components that they already proclaim “ditch your framework, just #UseThePlatform”.
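
For illustration, a minimal custom element – the tag name theme-toggle and its behavior are made up for this example:

class ThemeToggle extends HTMLElement {
  connectedCallback() {
    // Attach behavior to whatever markup the element wraps.
    this.addEventListener('click', () => {
      document.documentElement.classList.toggle('dark');
    });
  }
}

// Any page or framework can now render <theme-toggle> and get this behavior.
customElements.define('theme-toggle', ThemeToggle);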

In my opinion, the current web component APIs provide 1% of what we technically need to overcome the JavaScript crisis. I’m afraid web-component-based frameworks currently add to the fragmentation. I’m skeptical whether techniques like the Shadow DOM and Declarative Shadow DOM will even be part of a broad solution, and so are proponents of web components.

Mayank pointed out that web components are not components and went on to explain what a component is anyway. Components in the sense of frameworks like React “exist in a different plane” than web components. “Web component APIs can be useful when creating components, but they are not the complete answer. Components should be able to do a lot more than what web component APIs are capable of today.”

To my knowledge, there are only a few projects like Enhance and WebC that are currently exploring this problem space. They are groundbreaking, but still fringe. I guess agnostic and inclusive tools like Astro and 11ty are having the biggest impact right now. They break up framework silos, start with plain HTML and demonstrate how frameworks can work together.

After assessing server-side rendering of web components, including Enhance and WebC, Jared White concludes that “we [need] a sense of server-side interop, a vision of a component format where you could squint and see how server components could be authored in one environment and ported over to another environment without too much hassle”. He adds, “we desperately need a common language around thinking of web components as fullstack components”.

Closing words

I don’t think I’m saying anything new in this post. People much smarter than me have said these things before in a more sensible and eloquent way. I’ve tried to give them due credit. Also I’m merely repeating what I have already written in my last five blog posts on the topic. In this post, I wanted to compile the aspects I find noteworthy for reference – mostly my own – and for posterity.

I’ve talked about various possible technical solutions, but I think the most pressing issue is the discourse, which needs to be fixed. We need to recognize that clichés, sermons, insults and righteousness have failed. We need to reconcile with mutual respect, sharing the common goal of making the web usable, accessible, secure and safe for everyone.