The Value Of Science by Richard P. Feynman [PDF]
This short essay by Richard Feynman is quite a dose of perspective on a Monday morning
Wow! Grace Hopper has always been a hero to me, but I had no idea she was such a fantastic presenter. She’s completely engaging, with the timing and deadpan delivery of a stand-up comedian at times.
Matt has made a new website for tracking our collective progress up the Kardashev scale:

Maximising energy generation, distribution and usage at street level, for as many people as possible, every day.
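For context (this detail isn't from Matt's site): Carl Sagan's interpolation formula turns the Kardashev scale into a continuous rating based on total power use, K = (log₁₀ P − 6) / 10 with P in watts. A minimal sketch:

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's interpolation: K = (log10(P) - 6) / 10, with P in watts.

    K = 1 corresponds to ~10^16 W (planetary scale),
    K = 2 to ~10^26 W (stellar), K = 3 to ~10^36 W (galactic).
    """
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power use is roughly 2e13 W,
# which lands us at about K = 0.73.
print(round(kardashev(2e13), 2))
```

On this measure, "levelling up" to a Type I civilisation means growing our power use by roughly three orders of magnitude.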
Benjamín Labatut draws a line from the Vedas to George Boole and Claude Shannon onward to Geoffrey Hinton and Frank Herbert’s Butlerian Jihad.
In the coming years, as people armed with AI continue making the world faster, stranger, and more chaotic, we should do all we can to prevent these systems from giving more and more power to the few who can build them.
We’re all tired of: write some code, come back to it in six months, try to make it do more, and find the whole project is broken until you upgrade everything.
Progressive enhancement allows you to do the opposite: write some code, come back to it in six months, and it’s doing more than the day you wrote it!
Things can be different:
The core value of the IndieWeb, individual empowerment, helped me realise a fundamental change in perspective: that the web was beautiful and at times difficult, but that we, the people, were in control.
I really, really like this post from Matt (except for the bit where he breaks Simon’s rule).
Robin Sloan on The Culture:
The Culture is a utopia: a future you might actually want to live in. It offers a coherent political vision. This isn’t subtle or allegorical; on the page, citizens of the Culture very frequently articulate and defend their values. (Their enthusiasm for their own politics is considered annoying by most other civilizations.)
Coherent political vision doesn’t require a lot, just some sense of “this is what we ought to do”, yet it is absent from plenty of science fiction that dwells only in the realm of the cautionary tale.
I don’t have much patience left for that genre. I mean … we have been, at this point, amply cautioned.
Vision, on the other hand: I can’t get enough.
I’m really excited about John’s talk at this year’s UX London. Feels like a good time to revisit his excellent talk from dConstruct 2015:
I’m going to be opening up the second day of UX London 2024, 18th-20th June. As part of that talk, I’ll be revisiting a talk called Metadesign for Murph which I gave at dConstruct in 2015. It might be one of my favourite talks that I’ve ever given.
Beautiful writing from Rebecca Solnit, that encapsulates what I’ve been trying to say:
You want tomorrow to be different than today, and it may seem the same, or worse, but next year will be different than this one, because those tiny increments added up. The tree today looks a lot like the tree yesterday, and so does the baby.
I’m very excited that John is speaking at this year’s UX London!
Humans are allergic to change. And, as Jeremy impressively demonstrated, we tend to overlook the changes that happen more gradually. We want the Big Bang, the sudden change, the headline that reads, “successful nuclear fusion solves climate change for good.” But that’s (usually) not how change works. Change often happens gradually, first very slowly, and then, once it reaches a certain threshold, it can happen overnight.
Annalee Newitz:
When we imagine future tech, we usually focus on the ways it could turn humans into robotic workers, easily manipulated by surveillance capitalism. And that’s not untrue. But in this story, I wanted to suggest that there is a more subversive possibility. Modifying our bodies with technology could bring us closer to the natural world.
If we’re serious about creating a sustainable future, perhaps we should change this common phrase from “Form follows Function” to “Form – Function – Future”. While form and function are essential considerations, the future, represented by sustainability, should be at the forefront of our design thinking. And if sustainability is truly to lead the way we create new products, then maybe we should revise the phrase even further to “Future – Function – Form.” This ordering would encourage us to first ask, “What is the most sustainable way to design X?” and then consider how the function of X can be met while ensuring it remains non-harmful to people and the planet.
Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.
Once again, absolutely spot-on analysis from Ted Chiang.
I’m not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism. The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.
Maggie Appleton:
An exploration of the problems and possible futures of flooding the web with generative AI content.
I not only worry that “cli-fi” might not be an effective form of environmental expression – I have come to believe that the genre might be actively dangerous, stunting our cultural ability to imagine a future worth living in or fighting for.
Stick a singularity in your “effective altruism” pipe and smoke it.
Solarpunk and synthetic biology as a two-pronged approach to the future:
Neither synbio nor Solarpunk has all the right answers, but when they are joined in a symbiotic relationship, they become greater than the sum of their parts. If people could express what they needed, and if scientists could champion those desires — then Solarpunk becomes a will and synbio becomes a way.
Manufactured inevitability, a.k.a. bullshit:
There’s a standard trope that tech evangelists deploy when they talk about the latest fad. It goes something like this:
- Technology XYZ is arriving. It will be incredible for everyone. It is basically inevitable.
- The only thing that can stop it is regulators and/or incumbent industries. If they are so foolish as to stand in its way, then we won’t be rewarded with the glorious future that I am promising.
We can think of this rhetorical move as a Reverse Scooby-Doo. It’s as though Silicon Valley has assumed the role of a Scooby-Doo villain — but decided in this case that he’s actually the hero. (“We would’ve gotten away with it, too, if it wasn’t for those meddling regulators!”)
The critical point is that their faith in the promise of the technology is balanced against a revulsion towards existing institutions. (The future is bright! Unless they make it dim.) If the future doesn’t turn out as predicted, those meddlers are to blame. It builds a safety valve into their model of the future, rendering all predictions unfalsifiable.