Journal tags: tools

Unsaid

I went to the UX Brighton conference yesterday.

The quality of the presentations was really good this year, probably the best yet. Usually there are one or two stand-out speakers (like Tom Kerwin last year), but this year, the standard felt very high to me.

But…

The theme of the conference was UX and “AI”, and I’ve never been more disappointed by what wasn’t said at a conference.

Not a single speaker addressed where the training data for current large language models comes from (it comes from scraping other people’s copyrighted creative works).

Not a single speaker addressed the energy requirements for current large language models (the requirements are absolutely mahoosive—not just for the training, but for each and every query).

My charitable reading of the situation yesterday was that every speaker assumed that someone else would cover those issues.

The less charitable reading is that this was a deliberate decision.

Whenever the issue of ethics came up, it was only ever in relation to how we might use these tools: considering user needs, being transparent, all that good stuff. But never once did the question arise of whether it’s ethical to even use these tools.

In fact, the message was often the opposite: words like “responsibility” and “duty” came up, but only in the admonition that UX designers have a responsibility and duty to use these tools! And if that carrot didn’t work, there’s always the stick of scaring you into using these tools for fear of being left behind and having a machine replace you.

I was left feeling somewhat depressed about the deliberately narrow focus. Maggie’s talk was the only one that dealt with any externalities, looking at how the firehose of slop is blasting away at society. But again, the focus was only ever on how these tools are used or abused; nobody addressed the possibility of deliberately choosing not to use them.

If audience members weren’t yet using generative tools in their daily work, the assumption was that they were lagging behind and it was only a matter of time before they’d get on board the hype train. There was no room for the idea that someone might examine the roots of these tools and make a conscious choice not to fund their development.

There’s a quote by Finnish architect Eliel Saarinen that UX designers like repeating:

Always design a thing by considering it in its next larger context. A chair in a room, a room in a house, a house in an environment, an environment in a city plan.

But none of the speakers at UX Brighton chose to examine the larger context of the tools they were encouraging us to use.

One speaker told us “Be curious!”, but clearly that curiosity should not extend to the foundations of the tools themselves. Ignore what’s behind the curtain. Instead look at all the cool stuff we can do now. Don’t worry about the fact that everything you do with these tools is built on a bedrock of exploitation and environmental harm. We should instead blithely build a new generation of user interfaces on the burial ground of human culture.

Whenever I get into a discussion about these issues, it always seems to come back ’round to whether these tools are actually any good or not. People point to the genuinely useful tasks they can accomplish. But that’s not my issue. There are absolutely smart and efficient ways to use large language models—in some situations, it’s like suddenly having a superpower. But as Molly White puts it:

The benefits, though extant, seem to pale in comparison to the costs.

There are no ethical uses of current large language models.

And if you believe that the ethical issues will somehow be ironed out in future iterations, then that’s all the more reason to stop using the current crop of exploitative large language models.

Anyway, like I said, all the talks at UX Brighton were very good. But I just wish one of them had addressed the underlying questions that any good UX designer should ask: “Where did this data come from? What are the second-order effects of deploying this technology?”

Having a talk on those topics would’ve been nice, but I would’ve settled for having five minutes of one talk, or even one minute. But there was nothing.

There’s one possible explanation for this glaring absence that’s quite depressing to consider. It may be that these topics weren’t covered because there’s an assumption that everybody already knows about them, and frankly, doesn’t care.

To use an outdated movie reference, imagine a raving Charlton Heston shouting that “Soylent Green is people!”, only to be met with indifference. “Everyone knows Soylent Green is people. So what?”

Mismatch

This seems to be the attitude of many of my fellow nerds—designers and developers—when presented with tools based on large language models that produce dubious outputs based on the unethical harvesting of other people’s work and require staggering amounts of energy to run:

This is the future! I need to start using these tools now, even if they’re flawed, because otherwise I’ll be left behind. They’ll only get better. It’s inevitable.

Whereas this seems to be the attitude of those same designers and developers when faced with stable browser features that can be safely used today without frameworks or libraries:

I’m sceptical.

What price?

I’ve noticed a really strange justification from people when I ask them about their use of generative tools that use large language models (colloquially and inaccurately labelled as artificial intelligence).

I’ll point out that the training data requires the wholesale harvesting of creative works without compensation. I’ll also point out the ludicrously profligate energy use required not just for the training, but for the subsequent queries.

And here’s the thing: people will acknowledge those harms but they will justify their actions by saying “these things will get better!”

First of all, there’s no evidence to back that up.

If anything, as the well gets poisoned by their own outputs, large language models may well end up eating their own slop and getting their own version of mad cow disease. So this might be as good as they’re ever going to get.

And when it comes to energy usage, all the signals from NVIDIA, OpenAI, and others are that power usage is going to increase, not decrease.

But secondly, what the hell kind of logic is that?

It’s like saying “It’s okay for me to drive my gas-guzzling SUV now, because in the future I’ll be driving an electric vehicle.”

The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

I suspect that most people know full well that the “they’ll get better!” defence doesn’t hold water. But you can convince yourself of anything when everyone around you is telling you that this is the future, baby, and you’d better get on board or you’ll be left behind.

Baldur reminds us that this is how people talked about asbestos:

Every time you had an industry campaign against an asbestos ban, they used the same rhetoric. They focused on the potential benefits – cheaper spare parts for cars, cheaper water purification – and doing so implicitly assumed that deaths and destroyed lives, were a low price to pay.

This is the same strategy that’s being used by those who today talk about finding productive uses for generative models without even so much as gesturing towards mitigating or preventing the societal or environmental harms.

It reminds me of the classic Ursula Le Guin short story The Ones Who Walk Away from Omelas, which depicts:

…the utopian city of Omelas, whose prosperity depends on the perpetual misery of a single child.

Once citizens are old enough to know the truth, most, though initially shocked and disgusted, ultimately acquiesce to this one injustice that secures the happiness of the rest of the city.

It turns out that most people will blithely accept injustice and suffering not for a utopia, but just for some bland hallucinated slop.

Don’t get me wrong: I’m not saying large language models are without their uses. I love seeing what Simon and Matt are doing when it comes to coding. And large language models can be great for transforming content from one format to another, like transcribing speech into text. But the balance sheet just doesn’t add up.

As Molly White put it: AI isn’t useless. But is it worth it?:

Even as someone who has used them and found them helpful, it’s remarkable to see the gap between what they can do and what their promoters promise they will someday be able to do. The benefits, though extant, seem to pale in comparison to the costs.

Manual ’till it hurts

I’ve been going buildless on a few projects recently—or, as Brad crudely puts it, raw-dogging websites. Not just obviously simple things like Clearleft’s Browser Support page, but sites like:

They also have 0 dependencies.

Like Max says:

Funnily enough, many build tools advertise their superior “Developer Experience” (DX). For my money, there’s no better DX than shipping code straight to the browser and not having to worry about some cryptic node_modules error in between.

Making websites without a build step is a gift to your future self. When you open that project six months or a year or two years later, there’ll be no faffing about with npm updates, installs, or vulnerabilities.

Need to edit the CSS? You edit the CSS. Need to change the markup? You change the markup.

It’s remarkably freeing. It’s also very, very performant.

If you’re thinking that your next project couldn’t possibly be made without a build step, let me tell you about a phrase I first heard in the indie web community: “Manual ‘till it hurts”. It’s basically a two-step process:

  1. Start doing what you need to do by hand.
  2. When that becomes unworkable, introduce some kind of automation.

It’s remarkable how often you never reach step two.

I’m not saying premature optimisation is the root of all evil. I’m just saying it’s premature.

Start simple. Get more complex if and when you need to.

You might never need to.

Trust

In their rush to cram in “AI” “features”, it seems to me that many companies don’t actually understand why people use their products.

Google is acting as though its greatest asset is its search engine. Same with Bing.

Mozilla Developer Network is acting as though its greatest asset is its documentation. Same with Stack Overflow.

But their greatest asset is actually trust.

If I use a search engine I need to be able to trust that the filtering is good. If I look up documentation I need to trust that the information is good. I don’t expect perfection, but I also don’t expect to have to constantly be thinking “was this generated by a large language model, and if so, how can I know it’s not hallucinating?”

“But”, the apologists will respond, “the results are mostly correct! The documentation is mostly true!”

Sure, but as Terence puts it:

The intern who files most things perfectly but has, more than once, tipped an entire cup of coffee into the filing cabinet is going to be remembered as “that klutzy intern we had to fire.”

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

I am honestly astonished that so many companies don’t seem to realise what they’re destroying.

InstAI

If you use Instagram, there may be a message buried in your notifications. It begins:

We’re getting ready to expand our AI at Meta experiences to your region.

Fuck that. Here’s the important bit:

To help bring these experiences to you, we’ll now rely on the legal basis called legitimate interests for using your information to develop and improve AI at Meta. This means that you have the right to object to how your information is used for these purposes. If your objection is honoured, it will be applied going forwards.

Follow that link and fill in the form. For the field labelled “Please tell us how this processing impacts you” I wrote:

It’s fucking rude.

That did the trick. I got an email saying:

We’ve reviewed your request and will honor your objection.

Mind you, there’s still this:

We may still process information about you to develop and improve AI at Meta, even if you object or don’t use our products and services.

Bookmarklets for testing your website

I’m at day two of Indie Web Camp Brighton.

Day one was excellent. It was really hard to choose which sessions to go to because they all sounded interesting. That’s a good problem to have.

I ended up participating in:

  • a session on POSSE,
  • a session on NFC tags,
  • a session on writing, and
  • a session on testing your website that was hosted by Ros

In that testing session I shared some of the bookmarklets I use regularly.

Bookmarklets? They’re bookmarks that sit in the toolbar of your desktop browser. Just like any other bookmark, they’re links. The difference is that these links begin with javascript: rather than http. That means you can put programmatic instructions inside the link. Click the bookmark and the JavaScript gets executed.

In my mind, there are two different approaches to making a bookmarklet. One kind of bookmarklet contains lots of clever JavaScript—that’s where the smart stuff happens. The other kind of bookmarklet is deliberately dumb. All it does is take the URL of the current page and pass it to another service—that’s where the smart stuff happens.

I like that second kind of bookmarklet.
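
To give a sense of the shape of these things, here’s a minimal sketch of that second kind of bookmarklet: it grabs the address of the current page and hands it off to the W3C HTML validator. The actual bookmarklets below may use slightly different URLs; this is just illustrative.

javascript:(function () {
    // Take the URL of the current page...
    var url = encodeURIComponent(document.location.href);
    // ...and pass it to the validator, which does the smart stuff.
    window.open('https://validator.w3.org/nu/?doc=' + url);
})();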

Here are some bookmarklets I’ve made. You can drag any of them up to the toolbar of your browser. Or you could create a folder called, say, “bookmarklets”, and drag these links up there.

Validation: This bookmarklet will validate the HTML of whatever page you’re on.

Validate HTML

Carbon: This bookmarklet will run the domain through the website carbon calculator.

Calculate carbon

Accessibility: This bookmarklet will run the current page through the Web Accessibility Evaluation Tools.

WAVE

Performance: This bookmarklet will take the current page and run it through PageSpeed Insights, which includes a Lighthouse test.

PageSpeed

HTTPS: This bookmarklet will run your site through the SSL checker from SSL Labs.

SSL Report

Headers: This bookmarklet will test the security headers on your website.

Security Headers

Drag any of those links to your browser’s toolbar to “install” them. If you don’t like one, you can delete it the same way you can delete any other bookmark.

PageSpeed Insights bookmarklet

I’m a little obsessed with web performance. I like being able to check a page’s core web vitals quickly and easily.

Four years ago, I made a Lighthouse bookmarklet. Whatever web page you were on, when you clicked on the bookmarklet you’d get the Lighthouse results for that page. Handy!

It doesn’t work anymore. This is probably because Google are in the loop. Four years is pretty good innings for anything involving that company.

I kid (mostly). Lighthouse itself is still going strong, despite being a Google product. But the bookmarklet needs updating.

Rather than just get Lighthouse results, I figured that the full PageSpeed Insights results would be even better. If your website is in the Chrome UX report, you get to see those CrUX details too.

So here’s the updated bookmarklet:

PageSpeed Insights

Drag that up to your desktop browser’s bookmarks toolbar. Press it whenever you want to test the page you’re on.
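
Under the hood, a bookmarklet like this can be as simple as opening the PageSpeed Insights site with the current page’s address passed along in the query string. Something along these lines should do it, though the exact URL structure is my assumption, so adjust it if Google have moved things around:

javascript:(function () {
    // Open PageSpeed Insights, passing along the URL of the current page.
    window.open('https://pagespeed.web.dev/report?url=' + encodeURIComponent(document.location.href));
})();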

Continuous partial ick

The output of generative tools based on large language models gives me the ick.

This isn’t a measured logical response. It’s more of an involuntary emotional reaction.

I could try to justify my reaction by saying I’m concerned about the exploitation involved in the training data, or the huge energy costs involved, or the disenfranchisement of people who create art. But those would be post-facto rationalisations.

I just find myself wrinkling my nose and mentally going “Ew!” whenever somebody posts the output of some prompt they gave to ChatGPT or Midjourney.

Again, I’m not saying this is rational. It’s more instinctual.

You could well say that this is my problem. You may be right. But I wonder what it is that’s so unheimlich about these outputs that triggers my response.

Just to clarify, I am talking about direct outputs, shared verbatim. If someone were to use one of these tools in the process of creating something I’d be none the wiser. I probably couldn’t even tell that a large language model was involved at some point. I’m fine with that. It’s when someone takes something directly from one of these tools and then shares it online, that’s what raises my bile.

I was at a conference a few months back where your badge featured a hallucinated picture of you. Now, this probably sounded like a fun idea. It probably is a fun idea. I can’t tell. All I know is that it made me feel a little queasy.

Perhaps it’s a question of taste. In which case, I’m being a snob. I’m literally turning my nose up at something I deem to be tacky.

But isn’t it tacky, though? It’s not something I can describe, but there’s just something about the vibe of these images—and words—that feels off. It’s sort of creepy, but it’s mostly just the mediocrity that sits so uneasily with me.

These tools do an amazing job of solving the quantity problem—how to produce an image or piece of text quickly. And by most measurements, you could say that they also solve the quality problem. These outputs are good enough to pass for “the real thing.” The outputs are, like, 90% to 95% there. And the gap is closing.

And yet. There’s something in that gap. Something that I feel in my gut. Something that makes me go “nope.”

A memex in every web browser

When Matthew Modine’s character first shows up in Christopher Nolan’s Oppenheimer, I figured the rest of the cinema audience wouldn’t have appreciated me shouting out “VANNEVAR BUSH IN THE HOUSE!” so I screamed it on the inside.

The Manhattan Project was not his only claim to fame or infamy. When it comes to the world we now live in, Bush’s idea of the memex has been almost equally influential. His article As We May Think became a touchstone for Douglas Engelbart and later Tim Berners-Lee.

But as Matt Thompson points out:

…the device he describes does not resemble the internet or anything I’ve ever found on it.

Then he says:

What Bush was describing sounds to me like what you might get if you turned a browser history — the most neglected piece of the software — into a robust and fully featured machine of its own. It would help you map the path you charted through a web of knowledge, refine those maps, order them, and share them

Yes! This!! I 100% agree with the description of browser history as “the most neglected piece of the software.” While I wouldn’t go as far as Chris when he says web browsers kind of suck, I’m kind of amazed that there hasn’t been more innovation and competition in this space.

If anything we’ve outsourced the management of our browsing history to services like Delicious and Pinboard, or to tools like Obsidian and Roam Research. Heck, the links section of my website is my attempt to manage and annotate my own associative trails.

Imagine if that were baked right into a web browser. Then imagine how beautiful such a rich source of data might look.

Like Matt says:

I don’t think anything like this exists. So Bush’s essay still transfixes me.

Decision time

I’ve always associated good design with thoughtfulness. Like, I should be able to point to any element in an interface and the designer should be able to tell me the reasons it’s there. Those reasons may be rooted in user needs or aesthetics or some other consideration, but the point is that there’s a justification for it. Justify every pixel!

But I’ve come to realise that this is a bit reductionist. Now when I point at an interface element, I still expect the designer to be able to justify its inclusion, but I’d also like to know the trade-offs that were made.

Suppose there’s a large hero image. I’m sure the designer would have no problem justifying its inclusion on the basis of impact and the emotional heft it delivers. But did they also understand the potential downsides? Were they aware of the performance implications of including a large image?

I hope the answer to both questions is yes. They understood the costs, but they decided that, on balance, the positives outweighed the negatives.

When it comes to the positives, universal principles of design often apply. Colour theory, typography, proximity, and so on. But the downsides tend to be specific to the medium that the design is delivered in.

Let’s say you’re designing for print. You want to include an extra typeface just for footnotes. No problem. There isn’t really a downside. In print, you can use all the typefaces you want. But if this were for the web, then the calculation would be different. Every extra typeface comes with a performance penalty. A decision that might be justified in one medium might not work in another medium.

It works both ways; on the web you can use all the colours you want, without incurring any penalties, but in print—depending on the process you’re using—you might have to weigh up that decision very differently.

From this perspective, every design decision is like a balance sheet. A good web designer understands the benefits and the costs behind each decision they make.

It’s a similar story when it comes to web development. Heck, we even have the term “tech debt” to describe decisions that we know aren’t for the best in the long term.

In fact, I’d say that consideration of the long-term effects is something that should play a bigger part in technical decisions.

When we’re weighing up the pros and cons of using a particular tool, we have a tendency to think in the here and now. How might this help me right now? How might this hinder me right now?

But often a decision that delivers short-term gain may well end up delivering long-term pain.

Alexander Petros describes this succinctly:

Reopen a node repository after 3 months and you’ll find that your project is mired in a flurry of security warnings, backwards-incompatible library “upgrades,” and a frontend framework whose cultural peak was the exact moment you started the project and is now widely considered tech debt.

When I wrote about making the Patterns Day website I described my process as doing it “the long hard stupid way”—a term that Frank coined in a talk he gave a few years back. But perhaps my hands-on approach is only long, hard and stupid in the short term. With each passing year, the codebase will retain a degree of readability and accessibility that I would’ve sacrificed had I depended on automated build processes.

Robin Berjon puts this into the historical perspective of Taylorism and Luddism:

Whenever something is automated, you lose some control over it. Sometimes that loss of control improves your life because exerting control is work, and sometimes it worsens your life because it reduces your autonomy.

Or as Marshall McLuhan put it:

Every extension is also an amputation.

…which is fine as long as the benefits of the extension outweigh the costs of the amputation. My worry is that, when it comes to evaluating technology for building on the web, we aren’t considering the longer-term costs.

Maintenance matters. With the passing of time, maintenance matters more and more.

Maybe we avoid thinking about the long-term costs because it would lead to decision paralysis. That’s understandable. But I take comfort from some words of wisdom on the web from the 1990s. Tim Berners-Lee’s style guide for hypertext:

Because hypertext is potentially unconstrained you are a little daunted. Do not be. You can write a document as simply as you like. In many ways, the simpler the better.

Automation

I just described prototype code as code to be thrown away. On that topic…

I’ve been observing how people are programming with large language models and I’ve seen a few trends.

The first thing that just about everyone agrees on is that the code produced by a generative tool is not fit for public consumption. At least not straight away. It definitely needs to be checked and tested. If you enjoy debugging and doing code reviews, this might be right up your street.

The other option is to not use these tools for production code at all. Instead use them for throwaway code. That could be prototyping. But it could also be the code for those annoying admin tasks that you don’t do very often.

Take content migration. Say you need to grab a data dump, do some operations on the data to transform it in some way, and then pipe the results into a new content management system.

That’s almost certainly something you’d want to automate with bespoke code. Once the content migration is done, the code can be thrown away.
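
A throwaway script like that doesn’t have to be anything fancy. Here’s a purely hypothetical sketch in Node (the file names and field names are made up for illustration) of the kind of one-off transformation I mean:

// One-off migration: read a data dump, reshape each entry,
// and write out something the new content management system can import.
const fs = require('fs');

const oldEntries = JSON.parse(fs.readFileSync('export.json', 'utf8'));

const newEntries = oldEntries.map(entry => ({
    title: entry.heading,
    slug: entry.heading.toLowerCase().replace(/\s+/g, '-'),
    body: entry.content,
    published: new Date(entry.timestamp).toISOString()
}));

fs.writeFileSync('import.json', JSON.stringify(newEntries, null, 2));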

Read Matt’s account of coding up his Braggoscope. The code needed to spider a thousand web pages, extract data from those pages, find similarities, and output the newly-structured data in a different format.

I’ve noticed that these are just the kind of tasks that large language models are pretty good at. In effect you’re training the tool on your own very specific data and getting it to do your drudge work for you.

To me, it feels right that the usefulness happens on your own machine. You don’t put the machine-generated code in front of other humans.

Nailspotting

I’m sure you’ve heard the law of the instrument: when all you have is a hammer, everything looks like a nail.

There’s another side to it. If you’re selling hammers, you’ll depict a world full of nails.

Recent hammers include cryptobollocks and virtual reality. It wasn’t enough for blockchains and the metaverse to be potentially useful for some situations; they staked their reputations on being utterly transformative, disrupting absolutely every facet of life.

This kind of hype is a terrible strategy in the long-term. But if you can convince enough people in the short term, you can make a killing on the stock market. In truth, the technology itself is superfluous. It’s the hype that matters. And if the hype is over-inflated enough, you can even get your critics to do your work for you, broadcasting their fears about these supposedly world-changing technologies.

You’d think we’d learn. If an industry cries wolf enough times, surely we’d become less trusting of extraordinary claims. But the tech industry continues to cry wolf—or rather, “hammer!”—at regular intervals.

The latest hammer is machine learning, usually—incorrectly—referred to as Artificial Intelligence. What makes this hype cycle particularly infuriating is that there are genuine use cases. There are some nails for this hammer. They’re just not as plentiful as the breathless hype—both positive and negative—would have you believe.

When I was hosting the DiBi conference last week, there was a little section on generative “AI” tools. Matt Garbutt covered the visual side, demoing tools like Midjourney. Scott Salisbury covered the text side, showing how you can generate code. Afterwards we had a panel discussion.

During the panel I asked some fairly straightforward questions that nobody could answer. Who owns the input (the data used by these generative tools)? Who owns the output?

On the whole, it stayed quite grounded and mercifully free of hyperbole. Both speakers were treating the current crop of technologies as tools. Everyone agreed we were on the hype cycle, probably the peak of inflated expectations, looking forward to reaching the plateau of productivity.

Scott explicitly warned people off using generative tools for production code. His advice was to stick to side projects for now.

Matt took a closer look at where these tools could fit into your day-to-day design work. Mostly it was pretty sensible, except when he suggested that there could be any merit to using these tools as a replacement for user testing. That’s a terrible idea. A classic hammer/nail mismatch.

I think I moderated the panel reasonably well, but I have one regret. I wish I had first read Baldur Bjarnason’s new book, The Intelligence Illusion. I started reading it on the train journey back from Edinburgh but it would have been perfect for the panel.

The Intelligence Illusion is very level-headed. It is neither pro- nor anti-AI. Instead it takes a pragmatic look at both the benefits and the risks of using these tools in your business.

It has excellent advice for spotting genuine nails. For example:

Generative AI has impressive capabilities for converting and modifying seemingly unstructured data, such as prose, images, and audio. Using these tools for this purpose has less copyright risk, fewer legal risks, and is less error prone than using it to generate original output.

Think about transcripts of videos or podcasts—an excellent use of this technology. As Baldur puts it:

The safest and, probably, the most productive way to use generative AI is to not use it as generative AI. Instead, use it to explain, convert, or modify.

He also says:

Prefer internal tools over externally-facing chatbots.

That chimes with what I’ve been seeing. The most interesting uses of this technology that I’ve seen involve a constrained dataset. Like the way Luke trained a language model on his own content to create a useful chat interface.

Anyway, The Intelligence Illusion is full of practical down-to-earth advice based on plenty of research backed up with copious citations. I’m only halfway through it and it’s already helped me separate the hype from the reality.

Browser history

I woke up today to a very annoying new bug in Firefox. The browser shits the bed in an unpredictable fashion when rounding up single pixel line widths in SVG. That’s quite a problem on The Session where all the sheet music is rendered in SVG. Those thin lines in sheet music are kind of important.

Browser bugs like these are very frustrating. There’s nothing you can do from your side other than filing a bug. The locus of control is very much with the developers of the browser.

Still, the occasional regression in a browser is a price I’m willing to pay for a plurality of rendering engines. Call me old-fashioned but I still value the ecological impact of browser diversity.

That said, I understand the argument for converging on a single rendering engine. I don’t agree with it but I understand it. It’s like this…

Back in the bad old days of the original browser wars, the browser companies just made shit up. That made life a misery for web developers. The Web Standards Project knocked some heads together. Netscape and Microsoft would agree to support standards.

So that’s where the bar was set: browsers agreed to work to the same standards, but competed by having different rendering engines.

There’s an argument to be made for raising that bar: browsers agree to work to the same standards, and have the same shared rendering engine, but compete by innovating in all other areas—the browser chrome, personalisation, privacy, and so on.

Like I said, I understand the argument. I just don’t agree with it.

One reason for zeroing in on a single rendering engine is that it’s just too damned hard to create or maintain an entirely different rendering engine now that web standards are incredibly powerful and complex. Only a very large company with very deep pockets can hope to be a rendering engine player. Google. Apple. Heck, even Microsoft threw in the towel and abandoned their rendering engine in favour of Blink and V8.

And yet. Andreas Kling recently wrote about the Ladybird browser. How we’re building a browser when it’s supposed to be impossible:

The ECMAScript, HTML, and CSS specifications today are (for the most part) stellar technical documents whose algorithms can be implemented with considerably less effort and guesswork than in the past.

I’ll be watching that project with interest. Not because I plan to use the browser. I’d just like to see some evidence against the complexity argument.

Meanwhile most other browser projects are building on the raised bar of a shared browser engine. Blisk, Brave, and Arc all use Chromium under the hood.

Arc is the most interesting one. Built by the wonderfully named Browser Company of New York, it’s attempting to inject some fresh thinking into everything outside of the rendering engine.

Experiments like Arc feel like they could have more in common with tools-for-thought software like Obsidian and Roam Research. Those tools build knowledge graphs of connected nodes. A kind of hypertext of ideas. But we’ve already got hypertext tools we use every day: web browsers. It’s just that they don’t do much with the accumulated knowledge of our web browsing. Our browsing history is a boring reverse chronological list instead of a cool-looking knowledge graph or timeline.

For inspiration we can go all the way back to Vannevar Bush’s genuinely seminal 1945 article, As We May Think. The device Bush imagined, the memex, was a direct inspiration for Douglas Engelbart, Ted Nelson, and Tim Berners-Lee.

The article describes a kind of hypertext machine that worked with microfilm. Thanks to Tim Berners-Lee’s World Wide Web, we now have a global digital hypertext system that we access every day through our browsers.

But the article also described the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.

Our browsing histories are a kind of associative trail. They’re as unique as fingerprints. Even if everyone in the world started on the same URL, our browsing histories would quickly diverge.

Bush imagined that these associative trails could be shared:

The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.

Heck, making a useful browsing history could be a real skill:

There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record.

Taking something personal and making it public isn’t a new idea. It was what drove the wave of web 2.0 startups. Before Flickr, your photos were private. Before Delicious, your bookmarks were private. Before Last.fm, what music you were listening to was private.

I’m not saying that we should all make our browsing histories public. That would be a security nightmare. But I am saying there’s a lot of untapped potential in our browsing histories.

Let’s say we keep our browsing histories private, but make better use of them.

From what I’ve seen of large language model tools, the people getting the most use out of them are training them on a specific corpus. Like, “take this book and then answer my questions about the characters and plot” or “take this codebase and then answer my questions about the code.” If you treat these chatbots as calculators for words they can be useful for some tasks.

Large language model tools are getting smaller and more portable. It’s not hard to imagine one getting bundled into a web browser. It feeds on your browsing history. The bigger your browsing history, the more useful it can be.

Except, y’know, for the times when it just makes shit up.

Vannevar Bush didn’t predict a Memex that would hallucinate bits of microfilm that didn’t exist.

Chain of tools

I shared this link in Slack with my co-workers today:

Cultivating depth and stillness in research by Andy Matuschak.

I wasn’t sure whether it belonged in the #research or the #design channel. While it’s ostensibly about research, I think it applies to design more broadly. Heck, it probably applies to most fields. I should have put it in the Slack channel I created called #iiiiinteresting.

The article is all about that feeling of frustration when things aren’t progressing quickly, even when you know intellectually that not everything should always progress quickly.

The article is filled with advice for battling this feeling, including this observation on curiosity:

Curiosity can also totally change my relationship to setbacks. Say I’ve run an experiment, collected the data, done the analysis, and now I’m writing an essay about what I’ve found. Except, halfway through, I notice that one column of the data really doesn’t support the conclusion I’d drawn. Oops. It’s tempting to treat this development as a frustrating impediment—something to be overcome expediently. Of course, that’s exactly the wrong approach, both emotionally and epistemically. Everything becomes much better when I react from curiosity instead: “Oh, wait, wow! Fascinating! What is happening here? What can this teach me? How might this change what I try next?”

But what really resonated with me was this footnote attached to that paragraph:

I notice that I really struggle to generate curiosity about problems in programming. Maybe it’s because I’ve been doing it so long, but I think it’s because my problems are usually with ephemeral ideas, incidental to what I actually care about. When I’m fighting some godforsaken Javascript build system, I don’t feel even slightly curious to “really” understand those parochial machinations. I know they’re just going to be replaced by some new tool next year.

I feel seen.

I know I’m not alone. I know people who were driven out of front-end development because they felt the unspoken ultimatum was to either become a “full stack” developer or see yourself out.

Remember Chris’s excellent post, The Great Divide? Zach referenced it recently. He wrote:

The question I keep asking though: is the divide borne from a healthy specialization of skills or a symptom of unnecessary tooling complexity?

Mostly I feel sad about the talented people we’ve lost because they felt their front-of-the-front-end work wasn’t valued.

But wait! Can I turn my frown upside down? Can I take Andy Matuschak’s advice and say, “Oh, wait, wow! Fascinating! What is happening here? What can this teach me?”

Here’s one way of squinting at the situation…

There’s an opportunity here. If many people—myself included—feel disheartened and ground down by the amount of time they need to spend dealing with toolchains and build systems, what kind of system would allow us to get on with making websites without having to deal with that stuff?

I’m not proposing that we get rid of these complex toolchains, but I am wondering if there’s a way to make it someone else’s job.

I guess this job is DevOps. In theory it’s a specialised field. In practice everyone adding anything to a codebase partakes in continual partial DevOps because they must understand the toolchains and build processes in order to change one line of HTML.

I’m not saying “Don’t Make Me Think” when it comes to the tooling. I totally get that some working knowledge is probably required. But the ratio has gotten out of whack. You need a lot of working knowledge of the toolchains and build processes.

In fact, that’s mostly what companies hire for these days. If you’re well versed in HTML, CSS, and vanilla JavaScript, but you’re not up to speed on pipelines and frameworks, you’re going to have a hard time.

That doesn’t seem right. We should change it.

No code

When I wrote about democratising dev, I made brief mention of the growing “no code” movement:

Personally, I would love it if the process of making websites could be democratised more. I’ve often said that my nightmare scenario for the World Wide Web would be for its fate to lie in the hands of an elite priesthood of programmers with computer science degrees. So I’m all in favour of no-code tools …in theory.

But I didn’t describe what no-code is, as I understand it.

I’m taking the term at face value to mean a mechanism for creating a website—preferably on a domain you control—without having to write anything in HTML, CSS, JavaScript, or any back-end programming language.

By that definition, something like WordPress.com (as opposed to WordPress itself) is a no-code tool:

Create any kind of website. No code, no manuals, no limits.

I’d also put Squarespace in the same category:

Start with a flexible template, then customize to fit your style and professional needs with our website builder.

And its competitor, Wix:

Discover the platform that gives you the freedom to create, design, manage and develop your web presence exactly the way you want.

Webflow provides the same kind of service, but with a heavy emphasis on marketing websites:

Your website should be a marketing asset, not an engineering challenge.

Bubble is trying to cover a broader base:

Bubble lets you create interactive, multi-user apps for desktop and mobile web browsers, including all the features you need to build a site like Facebook or Airbnb.

Whereas Carrd opts for a minimalist one-page approach:

Simple, free, fully responsive one-page sites for pretty much anything.

All of those tools emphasise that you don’t need to know how to code in order to have a professional-looking website. But there’s a parallel universe of more niche no-code tools where the emphasis is on creativity and self-expression instead of slickness and professionalism.

neocities.org:

Create your own free website. Unlimited creativity, zero ads.

mmm.page:

Make a website in 5 minutes. Messy encouraged.

hotglue.me:

unique tool for web publishing & internet samizdat

I’m kind of fascinated by these two different approaches: professional vs. expressionist.

I’ve seen people grapple with this question when they decide to have their own website. Should it be a showcase of your achievements, almost like a portfolio? Or should it be a glorious mess of imagery and poetry to reflect your creativity? Could it be both? (Is that even doable? Or desirable?)

Robin Sloan recently published his ideas—and specs—for a new internet protocol called Spring ’83:

Spring ‘83 is a protocol for the transmission and display of something I am calling a “board”, which is an HTML fragment, limited to 2217 bytes, unable to execute JavaScript or load external resources, but otherwise unrestricted. Boards invite publishers to use all the richness of modern HTML and CSS. Plain text and blue links are also enthusiastically supported.

It’s not a no-code tool (you need to publish in HTML), although someone could easily provide a no-code tool to sit on top of the protocol. Conceptually though, it feels like it’s in a similar space to the chaotic good of neocities.org, mmm.page, and hotglue.me with maybe a bit of tilde.town thrown in.

It feels like something might be in the air. With Spring ’83, the Block protocol, and other experiments, people are creating some interesting small pieces that could potentially be loosely joined. No code required.

Declarative design systems

When I wrote about the idea of declarative design it really resonated with a lot of people.

I think that there’s a general feeling of frustration with the imperative approach to designing and developing—attempting to specify everything exactly up front. It just doesn’t scale. As Jason put it, the traditional web design process is fundamentally broken:

This is the worst of all worlds—a waterfall process creating dozens of artifacts, none of which accurately capture how the design will look and behave in the browser.

In theory, design systems could help overcome this problem; spend a lot of time up front getting a component to be correct and then it can be deployed quickly in all sorts of situations. But the word “correct” is doing a lot of work there.

If you’re approaching a design system with an imperative mindset then “correct” means “exact.” With this approach, precision is seen as valuable: precise spacing, precise numbers, precise pixels.

But if you’re approaching a design system with a declarative mindset, then “correct” means “resilient.” With this approach, flexibility is seen as valuable: flexible spacing, flexible ranges, flexible outputs.

These are two fundamentally different design approaches and yet the results of both would be described as a design system. The term “design system” is tricky enough to define as it is. This is one more layer of potential misunderstanding: one person says “design system” and means a collection of very precise, controlled, and exact components; another person says “design system” and means a predefined set of boundary conditions that can be used to generate components.

Personally, I think the word “system” is the important part of a design system. But all too often design systems are really collections rather than systems: a collection of pre-generated components rather than a system for generating components.

The systematic approach is at the heart of declarative design; setting up the rules and ratios in advance but leaving the detail of the final implementation to the browser at runtime.

Let me give an example of what I think is a declarative approach to a component. I’ll use the “hello world” of design system components—the humble button.

Two years ago I wrote about programming CSS to perform Sass colour functions. I described how CSS features like custom properties and calc() can be used to recreate mixins like darken() and lighten().

I showed some CSS for declaring the different colour elements of a button using hue, saturation and lightness encoded as custom properties. Here’s a CodePen with some examples of different buttons.

See the Pen Button colours by Jeremy Keith (@adactio) on CodePen.

If these buttons were in an imperative design system, then the output would be the important part. The design system would supply the code needed to make those buttons exactly. If you need a different button, it would have to be added to the design system as a variation.

But in a declarative design system, the output isn’t as important as the underlying ruleset. In this case, there are rules like:

For the hover state of a button, the lightness of its background colour should dip by 5%.

That ends up encoded in CSS like this:

button:hover {
    background-color: hsl(
        var(--button-colour-hue),
        var(--button-colour-saturation),
        calc(var(--button-colour-lightness) - 5%)
    );
}

In this kind of design system you can look at some examples to see the results of this rule in action. But those outputs are illustrative. They’re not the final word. If you don’t see the exact button you want, that’s okay; you’ve got the information you need to generate what you need and still stay within the pre-defined rules about, say, the hover state of buttons.
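
For example, because the rule works with whatever inputs it’s given, generating another button variant is just a matter of supplying different values for those custom properties. A rough sketch, where the class name and the numbers are purely illustrative:

.button-danger {
    /* New inputs for the same rules: a reddish hue... */
    --button-colour-hue: 10;
    --button-colour-saturation: 80%;
    --button-colour-lightness: 45%;
    /* ...and the existing hover rule still drops the lightness by 5%. */
}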

This seems like a more scalable approach to me. It also seems more empowering.

One of the hardest parts of embedding a design system within an organisation is getting people to adopt it. In my experience, nobody likes adopting something that’s being delivered from on-high as a pre-made set of components. It’s meant to be helpful: “here, use these pre-made components to save time not reinventing the wheel”, but it can come across as overly controlling: “we don’t trust you to exercise good judgement so stick to these pre-made components.”

The declarative approach is less controlling: “here are pre-defined rules and guidelines to help you make components.” But this lack of precision comes at a cost. The people using the design system need to have the mindset—and the ability—to create the components they need from the systematic rules they’ve been provided.

My gut feeling is that the imperative mindset is a good match for most of today’s graphic design tools like Figma or Sketch. Those tools deal with precise numbers rather than ranges and rules.

The declarative mindset, on the other hand, increasingly feels like a good match for CSS. The language has evolved to allow rules to be set up through custom properties, calc(), clamp(), minmax(), and so on.
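
For instance, a single declarative rule using clamp() can stand in for a whole stack of breakpoints when it comes to fluid type. This is just a rough sketch with arbitrary numbers, not a recommendation:

html {
    /* Let the font size scale with the viewport,
       but never dip below 1rem or grow beyond 1.5rem. */
    font-size: clamp(1rem, 0.75rem + 1vw, 1.5rem);
}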

So, as always, there isn’t a right or wrong approach here. It all comes down to what’s most suitable for your organisation.

If your designers and developers have an imperative mindset and Figma files are considered the source of truth, then they would be better served by an imperative design system.

But if you’re lucky enough to have a team of design engineers that think in terms of HTML and CSS, then a declarative design system will be a force multiplier. A bicycle for the design engineering mind.

Re-evaluating technology

There’s a lot of emphasis put on decision-making: making sure you’re making the right decision; evaluating all the right factors before making a decision. But we rarely talk about revisiting decisions.

I think perhaps there’s a human tendency to treat past decisions as fixed. That’s certainly true when it comes to evaluating technology.

I’ve been guilty of this. I remember once chatting with Mark about something written in PHP—probably something I had written—and I made some remark to the effect of “I know PHP isn’t a great language…” Mark rightly called me on that. The language wasn’t great in the past but it has come on in leaps and bounds. My perception of the language, however, had not updated accordingly.

I try to keep that lesson in mind whenever I’m thinking about languages, tools and frameworks that I’ve investigated in the past but haven’t revisited in a while.

Andy talks about this as the tech tool carousel:

The carousel is like one of those on a game show that shows the prizes that can be won. The tool will sit on there until I think it’s gone through enough maturing to actually be a viable tool for me, the team I’m working with and the clients I’m working for.

Crucially a carousel is circular: tools and technologies come back around for re-evaluation. It’s all too easy to treat technologies as being on a one-way conveyor belt—once they’ve passed in front of your eyes and you’ve weighed them up, that’s it; you never return to re-evaluate your decision.

This doesn’t need to be a never-ending process. At some point it becomes clear that some technologies really aren’t worth returning to:

It’s a really useful strategy because some tools stay on the carousel and then I take them off because they did in fact, turn out to be useless after all.

See, for example, anything related to cryptobollocks. It’s been well over a decade and blockchains remain a solution in search of problems. As Molly White put it, it’s not still the early days:

How long can it possibly be “early days”? How long do we need to wait before someone comes up with an actual application of blockchain technologies that isn’t a transparent attempt to retroactively justify a technology that is inefficient in every sense of the word? How much pollution must we justify pumping into our atmosphere while we wait to get out of the “early days” of proof-of-work blockchains?

Back to the web (the actual un-numbered World Wide Web)…

Nolan Lawson wrote an insightful article recently about how he senses that the balance has shifted away from single page apps. I’ve been sensing the same shift in the zeitgeist. That said, both Nolan and I keep an eye on how browsers are evolving and getting better all the time. If you weren’t aware of changes over the past few years, it would be easy to still think that single page apps offer some unique advantages that in fact no longer hold true. As Nolan wrote in a follow-up post:

My main point was: if the only reason you’re using an SPA is because “it makes navigations faster,” then maybe it’s time to re-evaluate that.

For another example, see this recent XKCD cartoon:

“You look around one day and realize the things you assumed were immutable constants of the universe have changed. The foundations of our reality are shifting beneath our feet. We live in a house built on sand.”

The day I discovered that Apple Maps is kind of good now

Perhaps the best example of a technology that warrants regular re-evaluation is the World Wide Web itself. Over the course of its existence it has been seemingly bettered by other more proprietary technologies.

Flash was better than the web. It had vector graphics, smooth animations, and streaming video when the web had nothing like it. But over time, the web caught up. Flash was the hare. The World Wide Web was the tortoise.

In more recent memory, the role of the hare has been played by native apps.

I remember talking to someone on the Twitter design team who was designing and building for multiple platforms. They were frustrated by the web. It just didn’t feel as fully-featured as iOS or Android. Their frustration was entirely justified …at the time. I wonder if they’ve revisited their judgement since then though.

In recent years in particular it feels like the web has come on in leaps and bounds: service workers, native JavaScript APIs, and an astonishing boost in what you can do with CSS. Most important of all, the interoperability between browsers is getting better and better. Universal support for new web standards arrives at a faster rate than ever before.

But developers remain suspicious, still preferring to trust third-party libraries over native browser features. They made a decision about those libraries in the past. They evaluated the state of browser support in the past. I wish they would re-evaluate those decisions.

Alas, inertia is a very powerful force. Sticking with a past decision—even if it’s no longer the best choice—is easier than putting in the effort to re-evaluate everything again.

What’s the phrase? “Strong opinions, weakly held.” We’re very good at the first part and pretty bad at the second.

Just the other day I was chatting with one of my colleagues about an online service that’s available on the web and also as a native app. He was showing me the native app on his phone and said it’s not a great app.

“Why don’t you add the website to your phone?” I asked.

“You know,” he said. “The website’s going to be slow.”

He hadn’t tested this. But years of dealing with crappy websites on his phone in the past had trained him to think of the web as being inherently worse than native apps (even though there was nothing this particular service was doing that required any native functionality).

It has become a truism now. Native apps are better than the web.

And you know what? Once upon a time, that would’ve been true. But it hasn’t been true for quite some time …at least from a technical perspective.

But even if the technologies in browsers have reached parity with native apps, that won’t matter unless we can convince people to revisit their previously-formed beliefs.

The technologies are the easy bit. Getting people to re-evaluate their opinions about technologies? That’s the hard part.

Declarative design

I feel like in the past few years there have been a number of web design approaches that share a similar mindset. Intrinsic web design by Jen; Every Layout by Andy and Heydon; Utopia by Trys and James.

To some extent, their strengths lie in technological advances in CSS: flexbox, grid, calc, and so on. But more importantly, they share an approach. They all focus on creating the right inputs rather than trying to control every possible output. Leave the final calculations for those outputs to the browser—that’s what computers are good at.

As Andy puts it:

Be the browser’s mentor, not its micromanager.

Reflecting on Utopia’s approach, Jim Nielsen wrote:

We say CSS is “declarative”, but the more and more I write breakpoints to accommodate all the different ways a design can change across the viewport spectrum, the more I feel like I’m writing imperative code. At what quantity does a set of declarative rules begin to look like imperative instructions?

In contrast, one of the principles of Utopia is to be declarative and “describe what is to be done rather than command how to do it”. This approach declares a set of rules such that you could pick any viewport width and, using a formula, derive what the type size and spacing would be at that size.

Declarative! Maybe that’s the word I’ve been looking for to describe the commonalities between Utopia, Every Layout, and intrinsic web design.

So if declarative design is a thing, does that mean imperative design is also a thing? And what might the tools and technologies for imperative design look like?

I think that Tailwind might be a good example of an imperative design tool. It’s only about the specific outputs. Systematic thinking is actively discouraged; instead you say exactly what you want the final pixels on the screen to be.

I’m not saying that declarative tools—like Utopia—are right and that imperative tools—like Tailwind—are wrong. As always, it depends. In this case, it depends on the mindset you have.

If you agree with this statement, you should probably use an imperative design tool:

CSS is broken and I want my tools to work around the way CSS has been designed.

But if you agree with this statement, you should probably use a declarative design tool:

CSS is awesome and I want my tools to amplify the way that CSS has been designed.

If you agree with the first statement but you then try using a declarative tool like Utopia or Every Layout, you will probably have a bad time. You’ll probably hate it. You may declare the tool to be “bad”.

Likewise if you agree with the second statement but you then try using an imperative tool like Tailwind, you will probably have a bad time. You’ll probably hate it. You may declare the tool to be “bad”.

It all depends on whether the philosophy behind the tool matches your own philosophy. If those philosophies match up, then using the tool will be productive and that tool will act as an amplifier—a bicycle for the mind. But if the philosophy of the tool doesn’t match your own philosophy, then you will be fighting the tool at every step—it will slow you down.

Knowing that this spectrum exists between declarative tools and imperative tools can help you when you’re evaluating technology. You can assess whether a web design tool is being marketed on the premise that CSS is broken or on the premise that CSS is awesome.

I wonder whether your path into web design and development might also factor into which end of the spectrum you’d identify with. Like, if your background is in declarative languages like HTML and CSS, maybe intrinsic web design really resonates. But if your background is in imperative languages like JavaScript, perhaps Tailwind makes more sense to you.

Again, there’s no right or wrong here. This is about matching the right tool to the right mindset.

Personally, the declarative design approach fits me like a glove. It feels like it’s in the tradition of John’s A Dao Of Web Design or Ethan’s Responsive Web Design—ways of working with the grain of the web.

Inertia

When I’ve spoken in the past about evaluating technology, I’ve mentioned two categories of tools for web development. I still don’t know quite what to call these categories. Internal and external? Developer-facing and user-facing?

The first category covers things like build tools, version control, transpilers, pre-processors, and linters. These are tools that live on your machine—or on the server—taking what you’ve written and transforming it into the raw materials of the web: HTML, CSS, and JavaScript.

The second category of tools are those that are made of the raw materials of the web: CSS frameworks and JavaScript libraries.

I think the criteria for evaluating these different kinds of tools should be very different.

For the first category, developer-facing tools, use whatever you want. Use whatever makes sense to you and your team. Use whatever’s effective for you.

But for the second category, user-facing tools, that attitude is harmful. If you make users download a CSS or JavaScript framework in order to benefit your workflow, then you’re making users pay a tax for your developer convenience. Instead, I firmly believe that user-facing tools should provide some direct benefit to end users.

When I’ve asked developers in the past why they’ve chosen to use a particular JavaScript framework, they’ve been able to give me plenty of good answers. But all of those answers involved the benefit to their developer workflow—efficiency, consistency, and so on. That would be absolutely fine if we were talking about the first category of tools, developer-facing tools. But those answers don’t hold up for the second category of tools, user-facing tools.

If a user-facing tool is only providing a developer benefit, is there any way to turn it into a developer-facing tool?

That’s very much the philosophy of Svelte. You can compare Svelte to other JavaScript frameworks like React and Vue but you’d be missing the most important aspect of Svelte: it is, by design, in that first category of tools—developer-facing tools:

Svelte takes a different approach from other frontend frameworks by doing as much as it can at the build step—when the code is initially compiled—rather than running client-side. In fact, if you want to get technical, Svelte isn’t really a JavaScript framework at all, as much as it is a compiler.

You install it on your machine, you write your code in Svelte, but what it spits out at the other end is HTML, CSS, and JavaScript. Unlike Vue or React, you don’t ship the library to end users.
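For example, a component like this (written in the Svelte 3/4 syntax) gets compiled into a small piece of plain JavaScript at build time; the Svelte library itself never gets sent down the wire:

  <script>
    // Component state lives in an ordinary variable.
    let count = 0;
  </script>

  <button on:click={() => count += 1}>
    Clicked {count} {count === 1 ? 'time' : 'times'}
  </button>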

In my opinion, this is an excellent design decision.

I know there are ways of getting React to behave more like a tool in that first category, but it is most definitely not the default behaviour. And default behaviour really, really matters. For React, the default behaviour is to assume all the code you write—and the tool you use to write it—will be sent over the wire to end users. For Svelte, the default behaviour is the exact opposite.

I’m sure you can find a way to get Svelte to send too much JavaScript to end users, but you’d be fighting against the grain of the tool. With React, you have to fight against the grain of the tool in order to not send too much JavaScript to end users.

But much as I love Svelte’s approach, I think it’s got its work cut out for it. It faces a formidable foe: inertia.

If you’re starting a greenfield project and you’re choosing a JavaScript framework, then Svelte is very appealing indeed. But how often do you get to start a greenfield project?

React has become so ubiquitous in the front-end development community that it’s often an unquestioned default choice for every project. It feels like enterprise software at this point. No one ever got fired for choosing React. Whether it’s appropriate or not becomes almost irrelevant. In much the same way that everyone is on Facebook because everyone is on Facebook, everyone uses React because everyone uses React.

That’s one of its biggest selling points to managers. If you’ve settled on React as your framework of choice, then hiring gets a lot easier: “If you want to work here, you need to know React.”

The same logic applies from the other side. If you’re starting out in web development, and you see that so many companies have settled on using React as their framework of choice, then it’s an absolute no-brainer: “If I want to work anywhere, I need to know React.”

This then creates a positive feedback loop. Everyone knows React because everyone is hiring React developers because everyone knows React because everyone is hiring React developers because…

At no point is there time to stop and consider if there’s a tool—like Svelte, for example—that would be less harmful for end users.

This is where I think Astro might have the edge over Svelte.

Astro has the same philosophy as Svelte. It’s a developer-facing tool by default. Have a listen to Drew’s interview with Matthew Phillips:

Astro does not add any JavaScript by default. You can add your own script tags obviously and you can do anything you can do in HTML, but by default, unlike a lot of the other component-based frameworks, we don’t actually add any JavaScript for you unless you specifically tell us to. And I think that’s one thing that we really got right early.

But crucially, unlike Svelte, Astro allows you to use the same syntax as the incumbent, React. So if you’ve learned React—because that’s what you needed to learn to get a job—you don’t have to learn a new syntax in order to use Astro.
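So a hypothetical Astro page might look something like this, importing an existing React component. By default it gets rendered to HTML at build time; you only opt in to client-side JavaScript by adding a directive like client:load:

  ---
  // Greeting.jsx is an imaginary React component; it gets rendered
  // to static HTML at build time unless you say otherwise.
  import Greeting from '../components/Greeting.jsx';
  ---
  <main>
    <Greeting name="world" />
  </main>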

I know you probably can’t take an existing React site and convert it to Astro with the flip of a switch, but at least there’s a clear upgrade path.

Astro reminds me of Sass. Specifically, it reminds me of the .scss syntax. You could take any CSS file, rename its file extension from .css to .scss and it was automatically a valid Sass file. You could start using Sass features incrementally. You didn’t have to rewrite all your style sheets.

Sass also has a .sass syntax. If you take a CSS file and rename it with a .sass file extension, it is not going to work. You need to rewrite all your CSS to use the .sass syntax. Some people used the .sass syntax, but the overwhelming majority of people used .scss.
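Here’s the same made-up rule in both syntaxes, just to show the difference:

  /* .scss syntax: braces and semicolons, just like CSS */
  .card {
    padding: 1em;
    h2 { color: rebeccapurple; }
  }

  // .sass syntax: indentation replaces the braces and semicolons
  .card
    padding: 1em
    h2
      color: rebeccapurple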

I remember talking with Hampton about this and he confirmed the proportions. It was also the reason why one of his creations, Sass, was so popular, while another of his creations, Haml, was not, comparatively speaking: Sass is a superset of CSS, but Haml is not a superset of HTML; it’s a completely different syntax.

I’m not saying that Svelte is like Haml and Astro is like Sass. But I do think that Astro has inertia on its side.