
Tuesday, September 19, 2023

Google ending Basic HTML support for Gmail in 2024

Understandably they're saying little about it publicly, but word is getting around that Google's fast, super-compatible Basic HTML mode for Gmail will be removed in a few short months. "We’re writing to let you know that the Gmail Basic HTML view for desktop web and mobile web will be disabled starting early January 2024. The Gmail Basic HTML views are previous versions of Gmail that were replaced by their modern successors 10+ years ago and do not include full Gmail feature functionality."

There are also reports that you can no longer switch into Basic HTML mode either. Most of you who want to use it probably already are, but if you're not, you can try this, this, this, this or even this to see if it gets around the front-end block.

Google can of course do whatever they want, and there are always maintenance costs to keeping old stuff around — in this case, for users unlikely to be monetized in any meaningful fashion because you don't run all their crap. You are exactly the people Google wants to get rid of, and doing so is by design. As such, it's effectively a giant "screw you," and it will be a problem for those folks relying on this as a fast way to read Gmail with TenFourFox or any other limited system. (Hey, wanna buy a Pixel 8 to read Gmail?)

Speaking of "screw you," and with no small amount of irony given this is published on a Google platform, I certainly hope the antitrust case goes somewhere.

Wednesday, July 6, 2022

Network Solutions screws the pooch again

Floodgap appears to be down because Network Solutions' name servers are timing out and name queries are not reliably resolving (everything's up on this end). There is no ETA. If this continues for much longer I'll figure out an alternative, but between this and the social engineering hack last year, maybe this is a sign I need to move it to a different registrar even though I prepaid a few years ago.

Saturday, October 2, 2021

curl, Let's Encrypt and Apple laziness

The built-in version of curl on any Power Mac version of OS X is not capable of TLS 1.1 or higher, so most of you who need it will have already upgraded to an equivalent with MacPorts. However, even for later Intel Macs that are ostensibly supported -- including my now legacy MacBook Air with Mojave I keep around for running 32-bit Intel -- the expiration of one of Let's Encrypt's root certificates yesterday means curl may suddenly cease connecting to TLS sites with Let's Encrypt certificates. Yesterday I was trying to connect to one of my own Floodgap sites, unexpectedly got certificate errors I wasn't seeing in TenFourFox or mainline Firefox, and, after a moment of panic, realized what had happened. While you can use -k to ignore the error, that basically defeats the entire idea of having a certificate to start with.

The real hell of it is that Mojave 10.14 is still technically supported by Apple, and you would think updating the curl root certificate store would be an intrinsic part of security updates, but you'd be wrong. The issue with old roots even affects Safari on some Monterey betas, making the best explanation more Apple laziness than benign neglect. Firefox added this root ages ago and so did TenFourFox.

If you are using MacPorts curl, which is (IMHO) the best option on Power Macs due to Ken's diligence and still a dandy alternative to Homebrew on Intel Macs, the easiest fix is to ensure curl-ca-bundle is up-to-date. Homebrew (and I presume Tigerbrew, for 10.4) can do brew install curl-ca-bundle, assuming your installation is current.
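In practice, keeping that current is a line or two (assuming your MacPorts or Homebrew installation is otherwise up to date):

  # MacPorts
  sudo port selfupdate
  sudo port install curl-ca-bundle    # or "sudo port upgrade curl-ca-bundle" if already installed

  # Homebrew (and, presumably, Tigerbrew)
  brew install curl-ca-bundle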

However, I use the built-in curl on the Mojave MacBook Air. Ordinarily I would just do an in-place update of the root certificate bundle, as I did on my 10.4 G5 before I started using a self-built curl, but thanks to System Integrity Protection you're not allowed to do that anymore even as root. Happily, the cURL maintainers themselves have a downloadable root certificate store which is periodically refreshed. Download that, put it somewhere in your home directory, and in your .login or .profile or whatever, set CURL_CA_BUNDLE to its location (on my system, I have a ~/bin directory, so I put it there and set it to /Users/yourname/bin/cacert.pem).
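Concretely, something like this (the path matches my example above; ironically, you may need -k for this one fetch if your existing store is too stale to validate the download):

  # grab the cURL project's current Mozilla-derived root store
  curl -o ~/bin/cacert.pem https://curl.se/ca/cacert.pem

  # then in ~/.profile (adjust for csh-style .login):
  export CURL_CA_BUNDLE=/Users/yourname/bin/cacert.pem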

Sunday, January 10, 2021

Another way social media is bad

Social media like Twitter, Facebook, etc., has been in the news this week for obvious reasons due to the political unrest in the United States, where this blog and yours truly are based. For the same obvious reasons I'm not going to discuss that here since I can't moderate such a discussion and there are a million other places to talk about it. Likewise, please don't do so in the comments; I will remove those posts.

But relevant to this blog and this audience is social media's impact on trying to get the most bang for your buck out of your old devices and computers. Full-fat Twitter and Facebook (and others) are computationally expensive: the bells and whistles cost in terms of JavaScript, and there is no shortage of other client-side analytics to feed you the posts that keep you engaged and to monitor your actions to construct ad profiles. A number of our outstanding bugs in TenFourFox are directly due to this, and some can't be fixed without terrible consequences (such as Facebook's asm.js draw code assuming little-endian floats, which would be a nightmare to constantly byteswap -- that's why the reaction icons don't show up), and pretty much none of them are easy to diagnose because all of their code is minified to hell. As these sites track and rely on changes in hardware and the browser base, the problems continuously get worse. Most of TenFourFox's development is done by me and me alone, and a single developer simply can't keep up with all the Web platform changes anymore.

Moreover, whatever features are available still have to contend with what the hardware is capable of. As our base is overwhelmingly Power Macs, I expect people to realize they are using computers which are no less than 15 years old and often more. We support operating systems with inadequate GPU support and we have to use Carbon APIs, so we will never be 64-bit, even on G5. Built-in basic adblock cuts a lot of fat, and we have a JIT and the fastest JavaScript on any 32-bit PowerPC platform, but it's still not enough. No one at these sites cares about our systems; I've never had any luck trying to contact developers other than autoreply contact forms and unhelpful support desks which cater to users instead of other devs. Sometimes the sites offer light versions, such as basic Facebook, which some of you use. Sometimes the light version only comes as a mobile-specific app (like Twitter Lite), which doesn't help us. Sometimes user agent twiddling can help, but many users don't know how or can't be bothered. And the continued availability of these alternatives is always subject to whether the home site wants to keep supporting them, because they probably do have impacts on what browsing and activity information can be aggregated.
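(For the record, the twiddling itself is trivial in TenFourFox or anything else Firefox-derived; it's knowing to do it that's the problem. In about:config, create a string preference like the following -- the UA string here is just an illustrative mobile-flavoured example, not a recommendation:

  general.useragent.override = Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0

Delete the preference to go back to normal.)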

Many people effectively rate a computer today on how well it can access social media, and a computer that can't is therefore useless. This means you permit these companies to determine when the computer you spent your hard-earned money on should go in the trash. That decision probably won't be made maliciously, but it certainly won't be made to benefit you.

These are private companies and they get to decide how they will spend their money and time. But we, in turn, shouldn't depend on them for anything nor expect anything from them, and we should think about finding ways to extricate ourselves from them and maintain contact with the people we care about in other fashions. On our systems in particular this will only get worse and it doesn't have to. The power they have over our wallets and our public discourse is only — and entirely — because collectively we gave it to them.

Saturday, September 19, 2020

Google, nobody asked to make the Blogger interface permanent

As a followup to my previous rant on the obnoxious new Blogger "upgrade," I will grudgingly admit Blogger has done some listening. You can now embed images and links similarly to the way you used to, which restores some missing features and erases at least a part of my prior objections. But not the major one, because usability is still a rotting elephant's placenta. I remain an inveterate user of the HTML blog view, and yet the HTML editor still thinks it knows better than you how to format your code and what tags you should use; you can't turn it off and you can't make it faster. And I remain unclear on what the point of all this was, because there is little improvement in functionality except mobile previewing.

Naturally, Google has removed the "return to legacy Blogger" button, but you can still get around that at least for the time being. On your main Blogger posts screen you will note a long multidigit number in the URL (perhaps that's why they're trying to hide URLs in Chrome). That's your blog ID. Copy that number and paste it in where the XXX is in this URL template (all one line):

https://www.blogger.com/blogger.g?blogID=XXX&useLegacyBlogger=true#allposts

Bookmark it and you're welcome. I look forward to some clever person making a Firefox extension to do this very thing very soon, and if you make one, post it in the comments.
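Until then, this bookmarklet sketch does the same thing by prompting for the blog ID and building the URL above (all one line again):

  javascript:location.href='https://www.blogger.com/blogger.g?blogID='+prompt('Blog ID?')+'&useLegacyBlogger=true#allposts'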

Wednesday, August 5, 2020

Google, nobody asked for a new Blogger interface

Even New Coke is better than New Blogger!

I'm writing this post in what Google is euphemistically referring to as an improvement. I don't understand this. I managed to ignore New Blogger for a few weeks but Google's ability to fark stuff up has the same air of inevitability as rotting corpses. Perhaps on mobile devices it's better, and even that is a matter of preference, but it's space-inefficient on desktop due to larger buttons and fonts, it's noticeably slower, it's buggy, and very soon it's going to be your only choice.

My biggest objection, however, is what they've done to the HTML editor. I'm probably the last person on earth to do so, but I write my posts in raw HTML. This was fine in the old Blogger interface which was basically a big freeform textbox you typed tags into manually. There was some means to intercept tags you didn't close, which was handy, and when you added elements from the toolbar you saw the HTML as it went in. Otherwise, WYTIWYG (what you typed is what you got). Since I personally use fairly limited markup and rely on the stylesheet for most everything, this worked well.

The new one is a line editor ... with indenting. Blogger has always really, really wanted you to use <p> as a container, even though a closing tag has never been required. But now, thanks to the indenter, if you insert a new paragraph then it starts indenting everything, including lines you've already typed, and there's no way to turn this off! Either you close every <p> tag immediately to defeat this behaviour, or you start using a lot of <br>s, which specifically defeats any means of semantic markup. (More about this in a moment.) First world problem? Absolutely. But I didn't ask for this "assistance" either, nor to require me to type additional unnecessary content to get around a dubious feature.

But wait, there's less! By switching into HTML view, you lose ($#@%!, stop indenting that line when I type emphasis tags!) the ability to insert hyperlinks, images or other media by any means other than manually typing them out. You can't even upload an image, let alone automatically insert the HTML boilerplate and edit it.

So switch into Compose view to actually do any of those things, and what happens? Like before, Blogger rewrites your document, but now this happens all the time because of what you can't do in HTML view. Certain arbitrarily-determined naughtytags(tm) like <em> become <i> (my screen-reader friends will be disappointed). All those container close tags that are unnecessary bloat suddenly appear. Oh, and watch out for that dubiously-named "Format HTML" button, the only special feature to appear in the HTML view, as opposed to anything actually useful. To defeat the HTML autocorrupt while I was checking things for this article, I actually copied and repasted my entire text multiple times so that Blogger would stop the hell messing with it. Who asked for this?? Clearly the designers of this travesty, assuming it isn't some cruel joke perpetrated by a sadistic UI anti-expert or a covert means to make people really cheesed off at Blogger so Google can claim no one uses it and shut it down, now intend HTML view to be strictly touch-up only, if that, and not a primary means of entering a post. Heaven forbid people should learn HTML anymore and try to write something efficient.

Oh, what else? It's slower, because of all the additional overhead (remember, it used to be just a big ol' box o' text that you just typed into, and a selection of mostly static elements making up the UI otherwise). Old Blogger was smart enough (or perhaps it was a happy accident) to know you already had a preview tab open and would send your preview there. New Blogger opens a new, unnecessary tab every time. The fonts and the buttons are bigger, but the icons are of similar size, defeating any reasonable argument about accessibility and just looking stupid on the G5 or the Talos II. There's lots of wasted empty space, too. This may reflect the contents of the crania of the people who worked on it, and apparently they don't care (I complained plenty of times before switching back; I expect no reply because they owe me nothing), so I feel no shame in abusing them.

Most of all, however, there is no added functionality. There is no workflow I know of that this makes better, and by removing stuff that used to work, demonstrably makes at least my own workflow worse.

So am I going to rage-quit Blogger? Well, no, at least not for the blogs I have that presently exist (feel free to visit, linked in the blogroll). I have years of documents here going back to TenFourFox's earliest inception in 2010, many of which are still very useful to vintage Power Mac users, and people know where to find them. It was the lowest effort move at the time to start a blog here and while Blogger wasn't futzing around with their own secret sauce it worked out well.

So, for future posts, my anticipated Rube Goldbergian nightmare is to use Compose view to load my images, copy the generated HTML off, type the rest of the tags manually in a text editor as God and Sir Tim intended and cut and paste it into a blank HTML view before New Blogger has a chance to mess with it. Hopefully they don't close the hole with paste not auto-indenting, for all that's holy. And if this is the future of Blogger, then if I have any future projects in mind, I think it's time for me to start self-hosting them and take a hike. Maybe this really is Google's way of getting this place to shut down.

(I actually liked New Coke, by the way.)

Saturday, October 12, 2019

Chrome users gloriously freed from obviously treacherous and unsafe uBlock Origin

Thank you, O Great Chrome Web Store, for saving us from the clearly hazardous, manifestly unscrupulous, overtly duplicitous uBlock Origin. Because, doubtlessly, this open-source ad-block extension by its very existence and nature could never "have a single purpose that is clear to users." I mean, it's an ad-blocker. Those are bad.

Really, this is an incredible own goal on Google's part. Although I won't resist the opportunity to rag on them, I also grudgingly admit that this is probably incompetence rather than malice and likely yet another instance of something falling through the cracks in Google's all-powerful, rarely examined automatic algorithms (though there is circumstantial evidence to the contrary). Having a human examine these choices costs money in engineering time, and frankly when the automated systems are misjudging something that will probably cost Google's ad business money as well, there's just no incentive to do anything about it. But it's a bad look, especially with how two-faced the policy on Manifest V3 has turned out to be and its effect on ad-blocker options for Chrome.

UPDATE: I hate always being right. Peter Kasting, a big wheel and original member of the Chrome team, escalated the issue and the extension is back, but for how long? And will it happen again? And what if you're not a squeaky enough wheel to gain enough attention to your plight?

It is important to note that this block is for Chrome rather than Chromium-based browsers (like Edge, Opera, Brave, etc.). That said, Chrome is clearly the one-ton gorilla, and Google doesn't like you sideloading extensions either. While Mozilla reviews extensions too, and there have been controversial rejections on their part, I can say as an add-on author of over a decade that there is at least a human on the other end even if once in a while the human is a butthead. (A volunteer butthead, to be sure, but still a butthead.) So far I think they've reached a reasonable compromise between safety and user choice even if sometimes the efforts don't scale. On the other hand, Google clearly hasn't by any metric.

This is a good time to remind people who may not know that TenFourFox has built-in basic adblock, targeted at the JavaScript-based nuisances that are most pernicious on our older systems. It's not only an integral part of the browser but it's also actually written in C++, so it's faster than a JavaScript-based add-on and works at a much lower level. It can also be combined with Private Browsing and other adblocker add-ons for even more comprehensive protection.

You may have suspected by the relative lack of activity on this blog and at Github that there aren't going to be any new features in the next TenFourFox release, and you'd be right. Between my wife and me actually being in the same hemisphere for a couple weeks, an incredible amount of work at the dayjob and work on the POWER9 side for mainline Firefox, I've just been too short-handed to do much development this cycle. It will instead be numbered FPR16 SPR1 with security patches only and I'll use the opportunity to change our upstream certificate source to 68ESR. Watch for it sometime next week.

Friday, August 16, 2019

Chrome murders FTP like Jeffrey Epstein

What is it with these people? Why can't things that are working be allowed to still go on working? (Blah blah insecure blah blah unused blah blah maintenance blah blah web everything.)

This leaves an interesting situation where Google has, in its very own search index, HTML pages served over FTP that its own browser won't be able to view:

At the top of the search results, even!

Obviously those FTP HTML pages load just fine in mainline Firefox, at least as of this writing, and of course TenFourFox. (UPDATE: This won't work in Firefox either after Fx70, though FTP in general will still be accessible. Note that it references Chrome's announcements; as usual, these kinds of distributed firing squads tend to be self-reinforcing.)

Is it a little ridiculous to serve pages that way? Okay, I'll buy that. But it works fine and wasn't bothering anyone, and the pages must have some relevance because Google even indexed them.

Why is everything old suddenly so bad?

Friday, May 3, 2019

TenFourFox not affected by the addon apocalypse

Tonight's Firefox add-on apocalypse, traced to a mistakenly expired intermediate signing certificate, is currently roiling Firefox users worldwide. It bit me on my Talos II, which really cheesed me off because it tanked all my carefully constructed site containers. (And that's an official Mozilla addon!)

This brief post is just to reassure you that TenFourFox is unaffected -- I disagreed with signature enforcement on add-ons from the beginning and explicitly disabled it.
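(For reference, in the Firefox builds that still honour it -- ESR, Developer Edition and Nightly, though not release builds -- the equivalent escape hatch is flipping this about:config pref, at the usual cost of losing signature checks entirely:

  xpinstall.signatures.required = false

TenFourFox simply ships with the enforcement disabled.)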

Monday, December 3, 2018

Edge gets Chrome-plated, and we're all worse off

I used to think that WebKit would eat the world, but later on I realized it was Blink. In retrospect this should have been obvious when the mobile version of Microsoft Edge was announced to use Chromium (and not Microsoft's own rendering engine EdgeHTML), but now rumour has it that Edge on its own home turf -- Windows 10 -- will be Chromium too. Microsoft engineers have already been spotted committing to the Chromium codebase, apparently for the ARM version. No word on whether this next browser, codenamed Anaheim, will still be called Edge.

In the sense that Anaheim won't (at least in name) be Google, just Chromium, there's reason to believe that it won't have the repeated privacy erosions that have characterized Google's recent moves with Chrome itself. But given how much DNA WebKit and Blink share, that means there are effectively two current major rendering engines left: Chromium and Gecko (Firefox). The little ones like NetSurf, bless its heart, don't have enough marketshare (or, currently, features) to rate, Trident in Internet Explorer 11 is intentionally obsolete, and the rest are too deficient to be anywhere near usable (Dillo, etc.). So this means Chromium arrogates more browser share to itself and Firefox will continue to be the second-class citizen until it, too, has too small a marketshare to be relevant. Then Google has eaten the Web. And we are worse off for it.

Bet Mozilla's reconsidering that stupid embedding decision now.

Saturday, March 3, 2018

And now for something completely different: Make that Power Mac into a radio station (plus: the radioSHARK tank and AltiVec + LAME = awesome)

As I watch Law and Order reruns on my business trip, first, a couple followups. The big note is that it looks like Intel and some ARM cores aren't the only ones vulnerable to Meltdown; Raptor Computer Systems confirms that Meltdown affects at least POWER7 through POWER9 as well, and the Talos II has already been patched. It's not clear if this is true for POWER4 (which would include the G5) through POWER6, as these processor generations have substantial microarchitectural differences. However, it doesn't change anything for the G3 and 7400: because they appear to be immune to Spectre-type attacks, they must also be immune to Meltdown. As a practical matter, though, unless you're running an iffy program locally there is no known JavaScript vector that successfully exploits Spectre (let alone Meltdown) on Power Macs, even on the 7450 and G5, which are known to be vulnerable to Spectre.

Also, the TenFourFox Downloader is now live. After only a few days up with no other promotion, it's pulling down about 200 downloads a day. I note that some small number are current TenFourFox users, which isn't really what this is intended for: the Downloader is unavoidably -- and in this case, also unnecessarily -- less secure, and just consumes bandwidth on Floodgap downloading a tool to download something the browser can just download directly. If you're using TenFourFox already (at least 38 or later), please just download upgrades with the browser itself. In addition, some are Intel Mac users on 10.6 and earlier, which the Downloader intentionally won't grab for because we don't support them. Nevertheless, the Downloader is clearly accomplishing its goal, which is important given that many websites won't be accessible to Power Mac users anymore without it, so it will be a permanent addition to the site.

Anyway, let's talk about Power Macs and radios. I'm always fond of giving my beloved old Macs new things to do, so here's something you can think about for that little G4 Mac mini you tossed in the closet. Our 2,400 square foot house has a rather curious floor plan: it's a typical California single-floor ranch but configured as a highly elongated L-shape along the bottom and right legs of the property's quadrilateral. If I set something playing somewhere in the back of the house you probably won't hear it very well even just a couple rooms away. The usual solution is to buy something like a Sonos, which is convenient and easy to operate, but streaming devices like that can have synchronization issues and they are definitely not cheap.

But there's another solution: set up a house FM transmitter. With a little spare time and the cost of the transmitter (mine cost $125), you can devise a scheme that turns any FM radio inside your house into a remote speaker with decent audio quality. Larger and better engineered than those cheapo little FM transmitters you might use in a car, the additional power allows the signal to travel through walls and with careful calibration can cover even a relatively large property. Best of all, adding additional drops is just the cost of another radio (instead of an expensive dedicated receiver), and because it's broadcast everything is in perfect sync. If your phone has an FM radio you can even listen to your home transmitter on that!

There are some downsides to this approach, of course. One minor downside is that, because it's broadcast, your neighbours could tune in (don't play your potentially embarrassing, uh, "home movie" audio soundtracks this way). Another minor downside is that the audio quality is decent but not perfect. The transmitter is in your house, so interference is likely to be less, but things as simple as intermittently energized electrical circuits, bad antenna positioning, etc., can all make reception sometimes maddeningly unpredictable. If you're an uncompromising audiophile, or you need more than two-channel audio, you're just going to have to get a dedicated streaming system.

The big one, though, is that you are now transmitting on a legally regulated audio band without a license. The US Federal Communications Commission has provisions under Part 15 for unlicensed AM/FM transmission which limit your signal to an effective distance of just 200 feet. There are more specific regulations about radiated signal strength, but the rule of thumb I use is that if you can detect a usable signal at your property line you are probably already in violation (and you can bet I took a lot of samples when I was setting this up). The FCC doesn't generally drive around residential neighbourhoods with a radio detector van and no one's going to track down a signal no one but you can hear, but if your signal leaks off your property it only takes one neighbourhood busybody with a scanner and nothing better to do to complain and initiate an investigation. Worse, if you transmit on the same frequency as an actually licensed local station and meaningfully interfere with their signal, and they detect it (and if it's meaningful interference, I guarantee you they will sooner or later), you're in serious trouble. The higher the rated wattage for your transmitter, the greater the risk you run of getting busted, especially if you are in a densely populated area. If you ever get a notice of violation, take it seriously, take your transmitter completely offline immediately, and make sure you tell the FCC in writing you turned it off. Don't turn it back on again until you're sure you're in compliance or you may be looking at a fine of up to $75,000. If you're not in the United States, you'd better know what the law is there too.

So let's assume you're confident you're in (or can be in) compliance with your new transmitter, which you can be easily with some reasonable precautions I'll discuss in a moment. You could just plug the transmitter into a dedicated playback device, and some people do just that, but by connecting the transmitter to a handy computer you can do so many other useful things. So I plugged it into my Sawtooth G4 file server, which lives approximately in the middle of the house in the dedicated home server room:

There it is, the slim black box with the whip antenna coming off the top sandwiched between the FireWire hub (a very, very useful device and much more reliable than multiple FireWire controllers) and the plastic strut the power strip is mounted on. This is the Whole House FM Transmitter 3.0 "WHFT3" which can be powered off USB or batteries (portable!), has mic and line-level inputs (though in this application only line input is connected), includes both rubber duck and whip antennas (a note about this presently) and retails for about $125. Amazon carries it too (I don't get a piece of any sales, I'm just a satisfied customer). It can crank up to around 300 milliwatts, which may not seem like much to the uninitiated, but easily covers the 100 foot range of my house and is less likely to be picked up by nosy listeners than some of the multi-watt Chinese import RF blowtorches they sell on eBay (for a point of comparison, a typical ham mobile radio emits around 5 watts). It also has relatively little leakage, meaning it is unlikely to be a source of detectable RF interference when properly tuned.

By doing it this way, the G4, which is ordinarily just acting as an FTP and AFP server, now plays music from playlists and the audio is broadcast over the FM transmitter. How you decide to do this is where the little bit of work comes in, but I can well imagine just having MacAmp Lite X or Audion running on it and you can change what's playing over Screen Sharing or VNC. In my case, I wrote up a daemon to manage playlists and a command-line client to manipulate it. 10.5+ offers a built-in tool called afplay to play audio files from the command line, or you can use this command line playback tool for 10.2 through 10.4. The radio daemon uses this tool (the G4 server runs Tiger) to play each file in the selected folder in order. I'll leave writing such a thing to the reader since my radio daemon has some dependencies on the way my network is configured, but it's not very complex to devise in general.
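If you just want the flavour of it, the heart of such a daemon is nothing more exotic than this (assumes 10.5+'s afplay and a hypothetical playlist folder; substitute the linked tool on older systems, as I did):

  #!/bin/sh
  # loop over the selected folder forever, playing each file in order
  while true; do
      for f in /Volumes/Music/playlist/*.mp3; do
          afplay "$f"
      done
  done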

Either way works fine, but you also need to make sure that the device has appropriate signal strength and input levels. The WHFT3 allows you to independently adjust how much strength it transmits with a simple control on the side; you can also adjust the relative levels for the mic and line input if you are using both. (There is a sorta secret high-level transmission mode you can enable which I strongly recommend you do not: you will almost certainly be out of FCC compliance if you do. Mine didn't need this.) You should set this only as high as necessary to get good reception where you need it, which brings us to making sure the input level is also correct, as the WHFT3 is somewhat more prone to a phenomenon called over-modulation than some other devices. This occurs when the input level is too high and manifests as distortion or clipping but only when audio is actually playing.

To calibrate my system, I first started with a silent signal. Since the frequency I chose had no receivable FM station in my region of greater Los Angeles (and believe me, finding a clear spot on the FM dial is tough in the Los Angeles area), I knew that I would only hear static on that frequency. I turned on the transmitter with no input using the "default" rubber duck antenna and went around the house with an FM radio with its antenna fully retracted. When I heard static instead of nothing, I knew I was exceeding the transmission range, which gave me an approximate "worst case" distance for inside the house. I then walked around the property line with the FM radio and its antenna fully extended this time for a "within compliance" test. I only picked up static outside the house, but inside I couldn't get enough range in the kitchen even with the transmitter cranked up all the way, so I ended up switching the rubber duck antenna for the included whip antenna. The whip is not the FCC-approved configuration (you are warned), but got me the additional extra range, and I was able to back down the transmitter strength and still be "neighbour proof" at the property line. This is also important for audio quality since if you have the transmitter power all the way up the WHFT3 tends to introduce additional distortion no matter what your input level is.

Next was to figure out the appropriate input level. I blasted Bucko and Champs Australian Christmas music and backed down the system volume on the G4 until there was no distortion for the entire album (insert your own choice of high volume audio here, such as Spice Girls or Anthrax), and checked the new level a few times with a couple other albums until I was satisfied that distortion and overmodulation were at a minimum. Interestingly, while you can AppleScript the volume setting in future, what you get from osascript -e 'set ovol to output volume of (get volume settings)' is in different units than what you feed to osascript -e 'set volume X': the first returns a number from 0-100 in 14-unit steps, but the second expects a number from 1-10 in 0.1-unit steps. The volume on my G4 is reported by AppleScript as "56" but I set that on startup in a launchd startup item with a volume value of 4.0 (i.e., 4 times 14 equals 56). Don't ask me why Apple did it this way.
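In other words, with the actual values from my G4:

  # reading the level returns the 0-100 scale (here, 56)
  osascript -e 'set ovol to output volume of (get volume settings)'

  # setting it uses the 1-10 scale (here, 4.0, because 4 x 14 = 56)
  osascript -e 'set volume 4.0'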

There were two things left to do. First was to build up a sufficient library of music to play from the file server, which (you may find this hard to believe) really is just a file server and handles things like backups and staging folders, not a media server. There are many tools like the most excellent X Lossless Decoder utility -- still Tiger and PowerPC compatible! -- which will rip your CDs into any format you like. I decided on MP3 since the audio didn't need to be lossless and the files are smaller, but most of the discs I cared about were already ripped in lossless format on the G5, so it was more a matter of transcoding them quickly. The author of XLD makes the AltiVec-accelerated LAME encoder he uses available separately, but this didn't work right on 10.4, so I took his patches against LAME 3.100, tweaked them further, restored G3 and 10.4 compatibility, and generated a three-headed binary that selects for G3, G4 and a special optimized version for G5. You can download LAMEVMX here, or get the source code from Github.
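Driving it is like driving any other LAME; a batch transcode is a one-liner (standard LAME switches, hypothetical paths, and substitute whatever your LAMEVMX binary is called):

  # convert a folder of AIFF rips to VBR MP3
  for f in *.aiff; do lame -V 2 "$f" "${f%.aiff}.mp3"; done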

On the G5 LAMEVMX just tears through music at around 25x to as much as 30x playback speed, over three times as fast as the non-SIMD version. I stuck the MP3 files on a USB drive and plugged that in the Sawtooth so I didn't have to take up space on its main RAID, and the radio daemon iterates off that.

The second was figuring out some way to use my radios as, well, radios. Yes, you could just tune them to another station and then tune them back, but I was lazy, and when you get an analogue tuner set at that perfect point you really don't want to have to do it again over and over. Moreover, I usually listen to AM radio, not FM. One option is to see if they stream over the Internet, which may even be better quality, though receiving them over the radio eliminates having to have a compatible client and any irregularities with your network. With a little help from an unusual USB device, you can do that too:

This is the Griffin radioSHARK, which is nothing less than a terrestrial radio receiver bolted onto a USB HID. It receives AM and FM and transmits back to the Mac over USB audio or analogue line-level out. How do we hook this up to our Mac radio station? One option is to just connect its audio output directly, but you should have already guessed I'd rather use the digital output over USB. While you can use Griffin's software to tune the radio and play it through (which is even AppleScript-able, at least version 2), it's PowerPC-only and won't run on 10.7+ if you're using an old Intel Mac for this purpose, and I always prefer to do this kind of thing programmatically anyhow.

For the tuner side, enterprising people on the Linux side eventually figured out how to talk to the HID directly and thus tune the radio manually (there are two different protocols for the two versions of the radioSHARK; more on this in a moment). I combined both protocols together and merged it with an earlier but more limited OS X utility, and the result is radioSH, a commandline radio tuner. (You can also set the radioSHARK's fun blue and red LEDs with this tool and use it as a cheapo annunciator device. Read the radioSH page for more on that.) I compiled it for PowerPC and 32-bit Intel, and the binary runs on anything from 10.4 to 10.13 until Apple cuts off 32-bit binary compatibility. The source code is available too.

For USB audio playthru, any USB audio utility will suffice, such as LineIn (free, PowerPC compatible) or SoundSource (not free, not PowerPC compatible), or even QuickTime Player with a New Audio Recording and the radioSHARK's USB audio output as source. Again, I prefer to do this under automatic control, so I wrote a utility using the MTCoreAudio framework to do the playback in the background. (Use this source file and tweak appropriately for your radioSHARK's USB audio endpoint UID.) At this point, getting the G4 radio station to play the radio was as simple as adding code to the radio daemon to tune the radio with radioSH and play the USB audio stream through the main audio output using that background tool when a playlist wasn't active (and to turn off the background streamer when a playlist was running). Fortunately, USB playthru uses very little CPU even on this 450MHz machine.

I mentioned there are two versions of the radioSHARK, white (v1) and black (v2), which have nearly completely different hardware (betrayed by their completely different HID protocols). The black radioSHARK is very uncommon. I've seen some reports that there are v1 white units with v2 black internals, but of the three white radioSHARKs I own, all of them are detected as v1 devices. This makes a difference because while neither unit tunes AM stations particularly well, the v1 seems to have poorer AM reception and more distortion, and the v2 is less prone to carrier hum. To get the AM stations I listen to more reliably with better quality, I managed to track down a black radioSHARK and stuck it in the attic:

To improve AM reception really all you can do is rotate or reposition the receiver and the attic seemed to get these stations best. A 12-foot USB extension cable routes back to the G4 radio station. The radioSHARK is USB-powered, so that's the only connection I had to run.

To receive the radio on the Quad G5 while I'm working, I connected one of the white radioSHARKs (since it's receiving FM, there wasn't much advantage to trying to find another black unit). I tune it on startup with radioSH to the G4 and listen with LineIn. Note that because it's receiving the radio signal over USB there is a tiny delay and the audio is just a hair out of sync with the "live" analogue radios in the house. If you're mostly an Intel Mac house, you can of course do the same thing with the same device in the same way (on my MacBook Air, I use radioSH to tune and play the audio in QuickTime Player).

For a little silliness I added a "call sign" cron job that uses /usr/bin/say to speak a "station ID" every hour on the hour. The system just mixes it over the radio daemon's audio output, so no other code changes were necessary. There you go, your very own automatic G4 radio station in your very own house. Another great use for your trusty old Power Mac!
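The cron entry itself is about as simple as cron gets (the station wording is obviously hypothetical):

  # speak a "station ID" at the top of every hour
  0 * * * * /usr/bin/say "You're listening to the house radio station"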

Oh, one more followup, this time on Because I Got High Sierra. My mother's Mac mini, originally running Mavericks, somehow got upgraded to High Sierra without her realizing it. The immediate effect was to make Microsoft Word 2011 crash on startup (I migrated her to LibreOffice), but the delayed effect was, on the next reboot (for the point update to 10.13.2), this alarming screen:

The system wouldn't boot! On every startup it would complain that "macOS could not be installed on your computer" and "The path /System/Installation/Packages/OSInstall.mpkg appears to be missing or damaged." Clicking Restart just caused the same message to appear.

After some cussing and checking that the drive was okay in the Recovery partition, the solution was to start in Safe Mode, go to the App Store and force another system update. After about 40 minutes of chugging away, the system grudgingly came up after everything was (apparently) refreshed. Although some people with this error message reported that they could copy the OSInstall.mpkg file from some other partition on their drive, I couldn't find such a file even in the Recovery partition or anywhere else. I suspect the difference is that these people encountered this error immediately after "upgrading" to Because I Got High Sierra, while my mother's computer encountered this after a subsequent update. This problem does not appear to be rare. It doesn't seem to have been due to insufficient disk space or a hardware failure and I can't find anything that she did wrong (other than allowing High Sierra to install in the first place). What would she have done if I hadn't been visiting that weekend, I wonder? On top of all the other stupid stuff in High Sierra, why do I continue to waste my time with this idiocy?

Does Apple even give a damn anymore?

Sunday, January 28, 2018

Why my next laptop isn't gonna be a Mac, either

I'm typing this in an early build of TenFourFox Feature Parity Release 6. This version contains speculative fixes for hangs on Facebook and crashes with textboxes on some systems, plus a tuned-up UI, accelerated video frame colour conversion and -- the biggest feature -- basic adblock integrated directly into the browser core. The basic adblock is effective enough that I've even started running "naked" without Bluhell Firewall, and it's so much quicker with no add-on overhead. I have some more features planned, with a beta somewhere around mid-February. Watch for it.

Meanwhile, Raptor has announced that the first production run of the Talos II -- their big, beefy, open and fully-auditable POWER9-based beast -- has begun; check out this picture of their non-SAS production sample motherboard. I'm really looking forward to mine arriving, hopefully sometime in February. It's been delayed apparently by some supplier shenanigans, but if they're moving to mass production, they must have enough parts to get it to us early orderers (my order was in August 2017).

I bought the Talos II because I wanted something non-x86 without lurking proprietary obscenities like the Intel Management Engine (or even AMD's Platform Secure Processor) that could nevertheless match those chips in performance, and the only practical thing even close to it is modern Power ISA. It had to be beefier than the Quad G5 I'm typing this on, which is why beautiful but technically underwhelming systems like the AmigaOne X5000 were never an option, because this 11-year-old Quad mops the floor with it (no AltiVec, wtf!). It had to be practical, i.e., in a desktop form factor with a power draw that wouldn't require a second electrical meter, and it had to actually exist. Hello, Talos. It was pretty clear even before I decided on the specific machine that my next desktop computer wasn't going to be a Mac; I briefly toyed with gritting my teeth and waiting around for whatever the next iteration of the Mac Pro would be, but eventually concluded pro users just weren't a priority demographic to Apple's hardware designers anymore. After all, if we were, why would they make us wait so long? And why should I wait and pay buck$$$ for another iteration of an architecture I don't like anyway?

But now I'm not sure we're even a priority to their software designers. Here's where I lost it: from the idiots who couldn't even secure a password field properly came their bloodyminded attempt to improve the security of the operating system by removing command line telnet and ftp (directly from the Apple CSR, "it is not possible to access FTP through the terminal, because High Sierra is a more secure operating system" [sic]) -- and they even screwed the removal up.

That's the absolute last straw for me. Sure, as someone who actually uses them on my internal network, I could reinstall them or anything else Apple starts decommissioning in Homebrew (right up until Apple takes some other component away that can't be easily restored in that fashion, or decides to lock down the filesystem further). Sure, hopefully if I upgrade (I use this term advisedly) my Haswell i7 MacBook Air from Sierra to High Sierra, I might not have too many bugs, and Apple might even fix what's left, maybe, or maybe in 10.14, maybe. Sure, I could vainly search for 64-bit versions of the tools I use, some of which might not exist or be maintained, and spend a lot of time trying to upgrade the ones I've written which work just fine now (breaking the unified build I do on my G5 and being generally inconvenient), and could click through the warning you'll now get in 10.13.4 whenever you open a 32-bit app and leap whatever hoops I have to jump through on 10.14 to run them "without compromise."
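(To be fair, the reinstall really is trivial -- if memory serves, the Homebrew formulas are telnet and tnftp, the latter being the same NetBSD-derived client macOS used to ship:

  brew install telnet tnftp

But that's hardly the point.)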

Sure, I could do all that. And I could continue to pay a fscking lot of money for the privilege of doing all that, too. Or, for the first time since 1987, in over thirty years of using Macs starting with my best friend's dad's Macintosh Plus, I could say I'm just totally done with modern Macs. And I think that's what I'll be doing.

Because the bottom line is this: Apple doesn't want users anymore who just want things to keep working. Hell, on this Quad in 10.4, I can run most software for 68K Macs! (in fact, I do -- some of those old tools are very speedy). But Classic ended with the Intel Macs, and Rosetta crapped out after 10.6. Since then every OS release has broken a little here, and deprecated a little there, and deleted a little somewhere else, to the point where every year, when WWDC came along and Apple announced what they were screwing around with next, I dreaded the inevitable OS upgrade on a relatively middling laptop I dropped $1800 on in 2014. What was it going to break? What new problems were lurking? What would be missing that I actually used? There was no time to adapt because soon it was onto next year's new mousetrap and its own set of new problems. So now, with the clusterflub that Because I Got High Sierra's turned out to be, I've simply had enough. I'm just done.

So come on, you Apple apologists. Tell me how Apple doesn't owe me anything. Tell me how every previous version of OS X had its bugs, and annual major OS churn actually makes good sense. Tell me how it's unfair that poor, cash-starved Apple should continue to subsidize people who want to run perfectly good old software or maintain their investment in peripherals. Tell me how Apple's doing their users a favour by getting rid of those crufty niche tools that "nobody" uses anyway, and how I can just deal. If this is what you want from your computer vendor, then good for you because by golly you're getting it, good and hard. For me, this MacBook's staying on Sierra and I'll wipe it with Linux or FreeBSD when Sierra doesn't get any more updates. Maybe there will be a nice ARMbook around by then because I definitely won't be buying another Mac.

Friday, August 11, 2017

Time to sink the Admiral (or, why using the DMCA to block adblockers is a bad move)

One of the testing steps I have to do, but don't enjoy, is running TenFourFox "naked" (without my typical adblock add-ons) to get an assessment of how it functions drinking from the toxic firehose that is the typical modern ad network. (TL;DR: Power Macs run modern Web ads pretty poorly. But, as long as it doesn't crash.) Now to be sure, as far as I'm concerned sites get to monetize their pages however they choose. Heck, there are ads on this blog, provided through Google AdSense, so that I can continue to not run a tip jar. The implicit social contract is that they can stick their content behind a paywall or run ads beside it, and it's up to me/you to decide whether we're going to put up with that and read the content. If we read it, we should pony up in either eyeballs or dinero.

This, of course, assumes that the ads we get served are reasonable and in a reasonable quantity. However, it's pretty hard to make money simply off per-click ads and networks with low CPM, so many sites run a quantity widely referred to as a "metric a$$ton" and the ads they run are not particularly selective. If those ads end up being fat or heavy or run scripts and drag the browser down, they consider that the cost of doing business. If, more sinisterly, they end up spying on or fingerprinting you, or worse, try to host malware and other malicious content, well, it's not their problem because it's not their ad (but don't block them all the same).

What the solution to this problem is not, is begging us to whitelist them because they're a good site. If you're not terribly discriminating about what ads you burden your viewers with, then how good can your site really be? The other non-solution is to offer effectively the Hobson's choice of "ads or paywall." What, the solution to the ads you don't curate is to give you my credit card number so you can be equally careful with that?

So until this situation changes and sites get a little smarter about how they do sponsorship (let me call out a positive example: The Onion's sponsored content [slightly NSFW related article]), I don't have a moral problem with adblocking, because really that's the only way to equalize the power dynamic. Block the ads on this blog if you want; I don't care. Click on them or not, your choice. In fact, for the Power Macs TenFourFox targets, I find an adblocker just about essential, and my hat is off to those saints of the church who don't run one. Lots of current sites are molasses in January on barbiturates without it and I can only improve this problem to a certain degree. Heck, they drag on my i7 MacBook Air. What chance does my iMac G4 have?

That's why this egregious abuse of statute is particularly pernicious: a company called Admiral, which operates an anti-adblocker, managed to use a DMCA request to Github to get the address of the site hosting their beacon image (to determine if you're blocking them or not) removed from the EasyList adblock listing. They've admitted it, too.

The legal theory, as I understand it (don't ask me to defend it), is that adblockers allow users to circumvent measures designed to "control access," which is a specific component of the American DMCA. (It is not, in fact, the case in Europe.) It might be more accurate to say that the components of adblockers that block adblocker blocking are primarily what they object to. (Uh, yo dawg.) Since the volunteer maintainers of EasyList are the weak link and the list they maintain is the one most adblockers use as a base, this single action gets them unblocked by most adblock extensions and potentially gives other ad networks a fairly big club to force compliance to boot.

The problem with this view, and it is certainly not universally shared, is that given that adblockers work by preventing certain components of the page from loading, theoretically anything that does not load the website completely as designed is therefore in violation. The famous text browser Lynx, for example, does not display images or run JavaScript, and since most ads and adblocker-blockers are implemented with images and JavaScript, it is now revealed as a sinister tool of the godless communist horde. NoScript blocks JavaScript on sites you select, and for the same reasons will cause the end of the American Republic. Intentionally unplugging your network cable at the exact moment when the site is pushing you a minified blob of JS crap -- or the more technically adept action of blackholing that address in your hosts file or on your router -- prevents the site from loading code to function in the obnoxious manner the ad network wants it to, and as a result is clearly treason. Notice that in all these examples the actual code of the site is not modified, just whether the client will process (or in the last example even just receive) and display it. Are all these examples "circumvention"?
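(The hosts file trick, for those following along at home, is just an entry like this hypothetical one in /etc/hosts -- the client then resolves the ad host to nowhere and never even receives the blob:

  0.0.0.0    beacon.adnetwork.example

No page code modified there, either.)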

This situation cannot stand and it's time for us independent browser maintainers to fight fire with fire. If Admiral isn't willing to back down, I'll issue the ultimatum that I will write code into TenFourFox to treat any of Admiral's web properties as malicious, and I encourage other browser maintainers to do the same. We already use Safe Browsing to block sites that try to load malicious code and we already generate warnings for sites with iffy credentials or bad certificates, so it's not a stretch to say that a site that actively attacks user choice is similarly harmful. The block will only be by default and a user that really wants to can turn it off, but the point will be made. I challenge Admiral to step up their game and start picking on people their own size if they really believe this is the best course of action.

And hey, even if this doesn't work, I should get lots of ad clicks from this, right? Right?






I'll get my coat.

Saturday, January 14, 2017

45.7.0 available (also: Talos fails)

TenFourFox 45.7.0 is now available for testing. In addition to reducing the layout paint delay I also did some tweaks to garbage collection by removing some code that isn't relevant to us, including some profile accounting work we don't need to bother computing. If there is a request to reinstate this code in a non-debug build we can talk about a specific profiling build down the road, probably after exiting source parity. As usual the build finalizes Monday evening Pacific time, though I didn't notice that the release had been pushed forward another week, to January 24, and additional security patches have since landed. There will therefore be a respin this weekend; the current download links have been invalidated and cancelled.

For 45.8 I plan to start work on the built-in user-agent switcher, and I'm also looking into a new initiative I'm calling "Operation Short Change" to wring even more performance out of IonPower. Currently, the JavaScript JIT's platform-agnostic section generates simplistic unoptimized generic branches. Since these generic branches could call any code at any displacement and PowerPC conditional branch instructions have only a limited number of displacement bits, we pad the branches with nops (i.e., nop/nop/nop/bc) so they can be patched up later if necessary to a full-displacement branch (lis/ori/mtctr/bcctr) if the branch turns out to be far away. This technique of "branch stanzas" dates back all the way to the original nanojit we had in TenFourFox 4 and Ben Stuhl did a lot of optimization work on it for our JaegerMonkey implementation that survived nearly unchanged in PPCBC and in a somewhat modified form today in IonPower-NVLE.

However, in the case of many generic branches the Ion code generator creates, they jump to code that is always just a few instruction words away and the distance between them never moves. These locations are predictable and having a full branch stanza in those cases wastes memory and instruction cache space; fortunately we already have machinery to create these fixed "short branches" in our PPC-specific code generator and now it's time to further modify Ion to generate these branches in the platform-agnostic segment as well. At the same time, since we don't generally use LR actually as a link register due to a side effect of how we branch, I'm going to investigate whether using LR is faster for long branches than CTR (i.e., lis/ori/mtlr/b(c)lr instead of mtctr/b(c)ctr). Certainly on G5 I expect it probably will be because having mtlr and blr/bclr in the same dispatch group doesn't seem to incur the same penalty that mtctr and bctr/bcctr in the same dispatch group do. (Our bailouts do use LR, but in an indirect form that intentionally clobbers the register anyway, so saving it is unimportant.)
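For the assembly-minded, the three shapes under discussion look roughly like this (registers and predicates illustrative only):

  # short branch: target near and fixed, no padding needed
  bc    <cond>, target

  # generic branch stanza as emitted, padded so it can be patched later
  nop
  nop
  nop
  bc    <cond>, target

  # the same stanza patched to a full-displacement branch via CTR
  lis   r12, target@ha      # upper halfword of the target address
  ori   r12, r12, target@l  # lower halfword
  mtctr r12                 # load the count register
  bcctr <cond>              # branch through CTR (the LR variant would be mtlr/bclr)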

On top of all that there is also the remaining work on AltiVec VP9 and some other stuff, so it's not like I won't have anything to do for the next few weeks.

On a more disappointing note, the Talos crowdfunding campaign for the most truly open, truly kick-*ss POWER8 workstation you can put on your desk has run aground, "only" raising $516,290 of the $3.7m goal. I guess it was just too expensive for enough people to take a chance on, and in fairness I really can't fault folks for having a bad case of sticker shock with a funding requirement as high as they were asking. But you get the computer you're willing to pay for. If you want a system made cheaper by economies of scale, then you're going to get a machine that doesn't really meet your specific needs because it's too busy not meeting everybody else's. Ultimately it's sad that no one's money was where their mouth was, because for maybe double-ish the cost of the mythical updated Mac Pro Tim Cook doesn't see fit to make, you could have had a truly unencumbered machine that really could compete on performance with x86. But now we won't. And worst of all, I think this will scare off other companies from even trying.

Thursday, October 27, 2016

Apple desktop users screwed again

Geez, Tim. You could have at least phoned in a refresh for the mini. Instead, we get a TV app and software function keys. Apple must be using the Mac Pro cases as actual trash cans by now.

(Screenshot: "Siri, is Phil Schiller insane?")

That's a comfort.

(Also, since my wife and I both own 11" MacBook Airs and like them as much as I can realistically like an Intel Mac, we'll mourn their passing.)

Friday, September 23, 2016

TenFourFox 45.5.0b1 available: now with little-endian (integer) typed arrays, AltiVec VP9, improved MP3 support and a petulant rant

The TenFourFox 45.5.0 beta (yes, it says it's 45.4.0; I didn't want to rev the version number yet) is now available for testing (downloads, hashes). This blog post will serve as the current "release notes," since we have until November 8 for the next release and I haven't decided everything I'll put in it; while I continue to do more work, I figured I'd give you something to play with in the meantime. Here's what's new so far, roughly in order of importance.

First, minimp3 has been converted to a platform decoder. That change by itself fixed a number of other bugs, probably related to how we chunked frames, such as Google Translate voice clips getting truncated and problems with some types of MP3 live streams; now we use Mozilla's built-in frame parser instead, and in this capacity minimp3 acts mostly as a disembodied codec. The new implementation works well with Google Translate, Soundcloud, Shoutcast and most of the other things I tried. (See, now there's a good use for that Mac mini G4 gathering dust on your shelf: install TenFourFox and set it up for remote screensharing access, and use it as a headless Internet radio -- I'm sitting here listening to National Public Radio over Shoutcast in a foxbox as I write this. Space-saving, environmentally responsible computer recycling! Yes, I know I'm full of great ideas. Yes. You're welcome.)

Interestingly, or perhaps frustratingly, although it somewhat improved Amazon Music (by making duration and startup more reliable), the issue with tracks not advancing still persisted for tracks under a certain critical length, which is dependent on machine speed. (The test case here was all the little five-or-six-second Fingertips tracks from They Might Be Giants' Apollo 18, which also happens to be one of my favourite albums, and is kind of wrecked by this problem.) My best guess is that Amazon Music's JavaScript player interface ends up on a different, possibly asynchronous code path in 45 than in 38 due to a different browser feature profile, and if the track runs out somehow it doesn't get the end-of-stream event in time. Since machine speed was a factor, I just amped up JavaScript to enter the Baseline JIT very quickly. That still doesn't fix it completely and Apollo 18 is still messed up, but it gets the critical track length down to around 10 or 15 seconds on this Quad G5 in Reduced mode, and now most non-pathological playlists will work fine. I'll keep messing with it.

In addition, this release carries the first pass at AltiVec decoding for VP9. It has some of the inverse discrete cosine transforms and one of the inverse Hadamard transforms vectorized, and I also wrote vector code for two of the convolutions, but they malfunction on the iMac G4 and it seems faster without them anyway because a lot of these routines work on unaligned data. Overall, our code really outshines the SSE2 versions I based it on, if I do say so myself. We can collapse a number of shuffles and merges into a single vector permute, and the AltiVec multiply-sum instruction can take an additional constant for use as a bias, allowing us to skip an add step (the SSE2 version must do the multiply-sum and then add the bias rounding constant in separate operations; this code occurs quite a bit). Only some of the smaller transforms are converted so far because the big ones are really intimidating. I'm able to model most of these operations on my old Core 2 Duo Mac mini, so I can do a step-by-step conversion in a relatively straightforward fashion, but it's agonizingly slow going with these bigger ones. I'm also not going to attempt any of the encoding-specific routines, so if Google wants this code they'll have to import it themselves.
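To give you a taste of why AltiVec comes out ahead here, consider that multiply-sum pattern. This is a hedged toy example of my own, not the actual libvpx code, but it shows the rounding bias being folded into the multiply-sum itself:

    #include <altivec.h>

    /* SSE2's pmaddwd needs a separate paddd to add the rounding bias;
       AltiVec's vec_msum takes an accumulator as its third operand, so
       the bias rides along for free in a single instruction. */
    vector signed int
    multiply_sum_with_bias(vector signed short coeffs,
                           vector signed short pixels,
                           vector signed int rounding_bias) {
        return vec_msum(coeffs, pixels, rounding_bias);
    }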

G3 owners, even though I don't support video on your systems, you get a little boost too because I've also cut out the loopfilter entirely. This improves everybody's performance and the mostly minor degradation in quality just isn't bad enough to be worth the CPU time required to clean it up. With this initial work the Quad is able to play many 360p streams at decent frame rates in Reduced mode and in Highest Performance mode even some 480p ones. The 1GHz iMac G4, which I don't technically support for video as it is below the 1.25GHz cutoff, reliably plays 144p and even some easy-to-decode (pillarboxed 4:3, mostly, since it has lots of "nothing" areas) 240p. This is at least as good as our AltiVec VP8 performance and as I grind through some of the really heavyweight transforms it should get even better.

To turn this on, go to our new TenFourFox preference pane (TenFourFox > Preferences... and click TenFourFox) and make sure MediaSource is enabled, then visit YouTube. You should have more quality settings now, and I recommend turning annotations off as well. Pausing the video while the rest of the page loads, and before changing your quality setting, is always a good idea; just click once anywhere on the video itself and wait for it to stop. You can evaluate it on my scientifically validated set of abuses of grammar (and spelling), 1970s carousel tape decks, gestures we make at Gmail other than the middle finger and really weird MTV interstitials. However, because Google will "auto-"control the stream bitrate without further configuration, and makes that decision based on network speed rather than dropped frames, I'm leaving the "slower" appellation, because frankly it will be, at least by default. Nevertheless, please advise if you think MSE should be the default in the next version or if you think more baking is necessary; the pref will be user-exposed regardless.

But the biggest and most far-reaching change is, as promised, little-endian typed arrays (the "LE" portion of the IonPower-NVLE project). The rationale for this change is that, largely due to the proliferation of asm.js code and the little-endian Emscripten systems that generate it, more and more code our big-endian machines can't run properly is being casually imported into sites. We saw this with images on Facebook, later with WhatsApp Web, and again with MEGA.nz, among others. asm.js isn't merely the domain of tech demos and high-end ported game engines anymore.

The change is intentionally very focused and very specific. Only typed array access is converted to little-endian, and only integer typed array access at that: DataView objects, the underlying ArrayBuffers and regular untyped arrays in particular remain native. When a multibyte integer (a 16-bit halfword or 32-bit word) is written out to a typed array in IonPower-LE, it is transparently byteswapped from big-endian to little-endian and stored in that format. When it is read back in, it is byteswapped back to big-endian. Thus, the intrinsic big-endianness of the engine hasn't changed -- jsvals are still tag followed by payload, and integers, doubles and single-precision floats are still MSB at the lowest address -- only the way it deals with an integer typed array has. Since asm.js uses a big typed array buffer essentially as a heap, this is sufficient to present at least a notional illusion of little-endianness as the asm.js script accesses that buffer, as long as those accesses are integer.

I mentioned that floats (single-precision and double alike) are not byteswapped, and there's an important reason for that. At the interpreter level, the virtual machine's typed array load and store methods are passed through the GNU gcc built-in to swap the byte order back and forth (which, at least for 32 bits, generates pretty efficient code). At the Baseline JIT level, the IonMonkey MacroAssembler is modified to call special methods that generate the swapped loads and stores in IonPower, but it wasn't nearly that simple for the full Ion JIT itself, because both unboxed scalar values (which need to stay big-endian because they're native) and typed array elements (which need to be byteswapped) go through the same code path. After I spent a couple days struggling with this, Jan de Mooij suggested I modify the MIR for loading and storing scalar values to mark whether the operation actually accesses a typed array. I added that to the IonBuilder, and now Ion-compiled code works too.
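Conceptually, the interpreter-level version looks something like this little C sketch (the function names are mine, not SpiderMonkey's):

    #include <stdint.h>
    #include <string.h>

    /* Store a 32-bit element into the typed array heap in little-endian
       order, and read it back, swapping on both ends with the GNU gcc
       built-in mentioned above. */
    static void ta_store_u32(uint8_t *heap, size_t offset, uint32_t value) {
        uint32_t le = __builtin_bswap32(value);
        memcpy(heap + offset, &le, sizeof(le));
    }

    static uint32_t ta_load_u32(const uint8_t *heap, size_t offset) {
        uint32_t le;
        memcpy(&le, heap + offset, sizeof(le));
        return __builtin_bswap32(le);
    }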

All of these integer accesses incur almost no penalty. There's a little additional overhead in the interpreter, but Baseline and Ion simply substitute the built-in PowerPC byteswapped load and store instructions (lwbrx, stwbrx, lhbrx, sthbrx, etc.) we already employ for irregexp, so compiled code has virtually no extra runtime overhead at all. Although the PowerPC specification warns that byteswapped instructions may have additional latency on some implementations, no PPC chip ever used in a Power Mac falls into that category, and they aren't "cracked" on G5 either. The pseudo-little-endian mode that exists on G3/G4 systems but not on the G5 is separate from these assembly language instructions, which work on all PowerPCs going all the way back to the original 601, the G5 included.
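If you're curious what that boils down to, here's an illustrative GNU C rendering of the 32-bit load side (the JIT of course emits the instruction directly rather than going through inline assembly):

    #include <stdint.h>

    /* A single byte-reversed load: the same lwbrx the JIT substitutes. */
    static inline uint32_t load_u32_swapped(const uint32_t *p) {
        uint32_t value;
        __asm__("lwbrx %0,0,%1" : "=r"(value) : "r"(p), "m"(*p));
        return value;
    }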

Floating point values, on the other hand, are a different story. There are no instructions to directly store a single or double precision value in a byteswapped fashion, and since there are also no direct moves between general purpose and floating point registers, the float has to be spilled to memory, picked up by a GPR (or two, if it's a double) and swapped at that point to complete the operation. Getting it back requires reversing the process: the GPR (or two) is spilled this time, and the double or float is repopulated from memory after the swap is done. All of that would have significantly penalized float arrays, and we have enough performance problems without it, so single and double precision floating point values remain big-endian.
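For the morbidly curious, here's a sketch of what the rejected byteswapped double store would have had to do (illustrative C, not real engine code):

    #include <stdint.h>
    #include <string.h>

    /* Rejected approach: the double must be spilled from the FPR to
       memory, picked up by GPRs (two of them on 32-bit PowerPC), swapped
       in integer registers, and written out again -- roughly double the
       memory traffic of a plain stfd. */
    static void store_double_swapped(uint8_t *heap, size_t offset, double value) {
        uint64_t bits;
        memcpy(&bits, &value, sizeof(bits));   /* spill FPR, load into GPRs */
        bits = __builtin_bswap64(bits);        /* swap in integer registers */
        memcpy(heap + offset, &bits, sizeof(bits));
    }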

Fortunately, most of the little snippets of asm.js floating around (that aren't entire Emscriptenized blobs: more about that in a moment) seem perfectly happy with this hybrid approach, presumably because they're oriented towards performance and thus integer operations. MEGA.nz seems to load now, at least what I can test of it, and WhatsApp Web now correctly generates the QR code to allow your phone to sync (just in time for you to stop using WhatsApp and switch to Signal because Mark Zuckerbrat has sold you to his pimps here too).

But what about bigger things? Well ...

Yup. That's DOSBOX emulating MECC's classic Oregon Trail (from the Internet Archive's MS-DOS Game Library), converted to asm.js with Emscripten and running inside TenFourFox. Go on and try that in 45.4. It doesn't work; it just throws an exception and screeches to a halt.

To be sure, it doesn't fully work in this release of 45.5 either. But some of the games do: try playing Oregon Trail yourself, or Where in the World is Carmen Sandiego or even the original, old school in its MODE 40 splendour, Те́трис (that's Tetris, comrade). Even Commander Keen Goodbye Galaxy! runs, though not even the Quad can make it reasonably playable. In particular the first two probably will run on nearly any Power Mac since they're not particularly dependent on timing (I was playing Oregon Trail on my iMac G4 last night), though you should expect it may take anywhere from 20 seconds to a minute to actually boot the game (depending on your CPU) and I'd just mute the tab since not even the Quad G5 at full tilt can generate convincing audio. But IonPower-LE will now run them, and they run pretty well, considering.

Does that seem impractical? Okay then: how about something vaguely useful ... like ... Linux?

This is, of course, Fabrice Bellard's famous jslinux emulator, and yes, IonPower now runs this too. Please don't expect much out of it if you're not on a high-end G5; even the Quad at full tilt took about 80 seconds of elapsed time to get to a root prompt. But it really works and it's usable.

Getting into truly ridiculous territory, here's Linux running on OpenRISC:

This is the jor1k emulator, and it's only for the highest-end G5 systems, folks. Set it to 5fps to have any chance of booting it in less than five minutes. But again -- like the dog walking on its hind legs, the wonder is not that it's done well, but that it's done at all.

vi freaks like me will also get a kick out of vim.js. Or, if you miss Classic apps, now TenFourFox can be your System 7 (mouse sync is a little too slow here but it boots):

Now for the bad news: notice that I said things don't fully work. With em-dosbox, the Emscriptenized DOSBOX, notice that I only said some games run in TenFourFox -- not most, not even many. Wolfenstein 3D, for example, gets as far as the main menu and starting a new game, and then bugs out with a "Reboot requested" message which seems to originate from the emulated BIOS. (It works fine on my MacBook Air, and I did get it to run under PCE.js, albeit glacially.) Catacombs 3D just sits there, trying to load a level and never finishing. Most of the other games don't even get that far, and a few don't start at all.

I also tried a Windows 95 emulator (also DOSBOX, apparently), which got partway into the boot sequence and then threw a JavaScript "SimulateInfiniteLoop" exception; and the Internet Archive's arcade games under MAME, which start up and then exhaust recursion and abort (this seems like it should be fixable or tunable, but I haven't explored it further so far). And of course programs requiring WebGL will never, ever run on TenFourFox.

Debugging Emscripten goo output is quite difficult and usually causes tumours in lab rats, but several possible explanations come to mind (none of them mutually exclusive). One could be that the code actually does depend on the byte ordering of floats and doubles as well as integers, as do some of the Mozilla JIT conformance tests. That, however, is not ever going to change, because it would require making everything else suck for that kind of edge case to work. Another potential explanation is that the intrinsic big-endianness of the engine is causing things to fail somewhere else -- for example, code inadvertently written in such a way that the resulting data gets byteswapped an asymmetric number of times, or some other violation of assumptions. Another is that the execution time is just too damn long and the code doesn't account for that possibility. Finally, there might simply be a bug in what I wrote, but I'm not aware of any similar hybrid-endian engine, so I've really got nothing to compare it to.

In any case, the little-endian typed array conversion definitely fixes the stuff that needed to get fixed and opens up some future possibilities for web applications we can run just like an Intel Mac can. The real question is whether asm.js compilation (OdinMonkey, as opposed to IonPower) pays off on PowerPC now that the memory model is apparently good enough for at least most things. It would definitely run faster than IonPower, possibly several times faster, but the performance delta would not be as massive as IonPower versus the interpreter (about a factor of 40 difference), the compilation step might bring lesser systems to their knees, and it would require some significant additional engineering to get it off the ground (read: a lot more work for me to do). Given that most of our systems are not going to run these big massive applications well even with execution time cut by half or even two-thirds (and some of them don't work correctly as it is), it might seem a real case of diminishing returns to make that investment of effort. I'll just have to see how many free cycles I have and how involved the effort is likely to be. For right now, IonPower can run them, and that's the important thing.

Finally, the petulant rant. I am a fairly avid reader of Thom Holwerda's OSNews because it reports on a lot of marginal and unusual platforms and computing news that most of the regular outlets eschew. The articles are in general very interesting, including this heads-up on booting the last official GameCube game (and since the CPU in the Nintendo GameCube is a G3 derivative, that's even relevant on this blog). However, I'm going to take issue with one part of his otherwise thought-provoking discussion on the new Apple A10 processor and the alleged impending death of Mac OS (er, macOS), where he says, "I didn't refer to Apple's PowerPC days for nothing. Back then, Apple knew it was using processors with terrible performance and energy requirements, but still had to somehow convince the masses that PowerPC was better faster stronger than x86; claims which Apple itself exposed — overnight — as flat-out lies when the company switched to Intel."

Besides my issue with what he links in that last sentence as proof, which actually doesn't establish Apple had been lying (it's actually a Low End Mac piece contemporary with the Intelcalypse asking if they were), this is an incredibly facile oversimplification. Before the usual suspects hop on the comments with their usual suspecty things, let's just go ahead for the sake of argument and say everything its detractors said about the G5 and the late generation G4 systems are true, i.e., they're hot, underpowered and overhungry. (I contest the overhungry part in particular for the late laptop G4 systems, by the way. My 2005 iBook G4 to this day still gets around five hours on a charge if I'm aggressive and careful about my usage. For a 2005 system that's damn good, especially since Apple said six for the same model I own but only 4.5 for the 2008 MacBooks. At least here you're comparing Reality Distortion Field to Reality Distortion Field, and besides, all the performance/watt in the world doesn't do you a whole hell of a lot of good if your machine's out of puff.)

So let's go ahead and just take all that as given for discussion purposes. My beef with that comment is that it conveniently ignores every other PowerPC chip before the Intel transition just to make the point. For example, PC Magazine back in the day noted that a 400MHz Yosemite G3 outperformed a contemporary 450MHz Pentium II on most of their tests (read it for yourself: April 20, 1999, page 53). The G3, which doesn't have SIMD of any kind, even beat the P2 running MMX code. For that matter, a 350MHz 604e was over twice as fast at integer performance as a 300MHz P2. I point all of this out not (necessarily) to go opening old wounds but to remind those ignorant of computing history that there was a time in "Apple's PowerPC days" when even the architecture's detractors will admit it was at least competitive. That time clearly ended when the rot later set in, but he certainly doesn't make that distinction.

To be sure, was this the point of his article? Not really, since he was addressing ARM more than PowerPC, but it sort of is. Thom asserts in his exchange with Grüber Alles that Apple and those within the RDF cherry-pick benchmarks to favour what suits them, which is absolutely true, and I just did it myself; but Apple isn't any different than anyone else in that regard (put away the "tu quoque" please), and Apple did this as much in the Power Mac days to sell widgets as they do now in the iOS ones. For that matter, Thom himself backtracks near the end and says, "there is one reason why benchmarks of Apple's latest mobile processors are quite interesting: Apple's inevitable upcoming laptop and desktop switchover to its own processors." For the record, I see this as highly unlikely due to the Intel Mac's frequent use as a client virtual machine host, though it's interesting to speculate. But the rise of the A-series is hardly comparable with Apple's PowerPC days at all, at least not as a monolithic unit. If he had compared the benchmark situation with the 2004-5 timeframe, when the PowerPC roadmap was running out of gas and, even as boosters like yours truly would have conceded the gap was widening, Apple relentlessly ginned up evidence otherwise, I think I'd have grudgingly concurred. And maybe that's actually what he meant. However, what he wrote lumps everything from the 601 to the 970MP into a single throwaway comment, is baffling from someone who also uses and admires Mac OS 9 (as I do), and dilutes his core argument. Something like that I'd expect from the breezy mainstream computer media types. Thom, however, should know better.

(On a related note, Ars Technica was a lot better when they were more tech and less politics.)

Next up: updates to our custom gdb debugger and a maintenance update for TenFourFoxBox. Stay tuned, and in the meantime try the beta and see if you like it. Post your comments, and, once you've played a video or six, tell me what you think the default should be for 45.5 (regular VP8 video or MSE/VP9).

Saturday, September 10, 2016

TenFourFox 45.4.0 available (plus: priorities for feature parity and down with Dropbox)

With the peerless level of self-proclaimed courageousness necessary to remove a perfectly good headphone jack for a perfectly stupid reason (because apparently no one at Apple charges their phone and listens to music at the same time), TenFourFox 45.4.0 is released (downloads, hashes, release notes). This will be the final release in the beta cycle and assuming no critical problems will become the public official release sometime Monday Pacific time. Localizations will be frozen today and uploaded to SourceForge sometime this evening or tomorrow, so get them in ASAP.

The major change in this release is additional tweaking to the MediaSource implementation, and I'm now more comfortable with how it functions on G4 systems thanks to a combination of some additional later patches I backported and adjustments to our own hacks, which now not only aggressively report dropped frames but also force rebuffering if needed. The G4 systems no longer seize and freeze (and, occasionally, fatally assert) on some streams, and the audio never becomes unsynchronized, though there is some stuttering if the system is too overworked trying to keep the video and audio together. That said, I'm going to keep MediaSource off for 45.4 so that there will be as little unnecessary support churn as possible while you test it (if you haven't already done so, set media.mediasource.enabled to true in about:config; do not touch the other options). In 45.5, assuming there are no fatal problems with it (I don't consider performance a fatal flaw, just an important one), it will be the default, and it will be surfaced as an option in the upcoming TenFourFox-specific preference pane.

However, to make the most of MediaSource we're going to need AltiVec support for VP9 (we only have it for VP3 and VP8). While upper-spec G5 systems can just power through the decoding process (and this might make hi-def video finally reasonable on the last-generation machines), even high-spec G4 systems have impaired decoding due to CPU and bus bandwidth limitations, and the low-end G4 systems are nearly hopeless at all but the very lowest bitrates. Officially I still have a minimum 1.25GHz recommendation, but I'm painfully aware that even those G4s barely squeak by. We're the only ones who ever used libvpx's moldy VMX code for VP8 and kept it alive, and they don't have anything at all for VP9 (just x86 and ARM, though interestingly it looks like MIPS support is in progress). Fortunately, the code was clearly written to make it easier for porters to hand-vectorize, and I can do this work incrementally instead of having to convert all the codec pieces simultaneously, roughly as sketched below.
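For the curious, hand-vectorizing one of these routines looks roughly like this toy before-and-after (my own illustration, not actual libvpx code, and it assumes 16-byte-aligned inputs):

    #include <altivec.h>
    #include <stdint.h>

    /* Scalar reference: saturating add of eight 16-bit lanes. */
    static void add_sat_scalar(int16_t *dst, const int16_t *a, const int16_t *b) {
        for (int i = 0; i < 8; i++) {
            int32_t sum = a[i] + b[i];
            dst[i] = sum > 32767 ? 32767 : (sum < -32768 ? -32768 : sum);
        }
    }

    /* AltiVec version: all eight lanes in one saturating vec_adds. */
    static void add_sat_altivec(int16_t *dst, const int16_t *a, const int16_t *b) {
        vector signed short va = vec_ld(0, a);   /* requires alignment */
        vector signed short vb = vec_ld(0, b);
        vec_st(vec_adds(va, vb), 0, dst);
    }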

Interestingly, even though our code now faithfully and fully reports every single dropped frame, YouTube doesn't seem to do anything with this information right now (if you right-click and select "Stats for nerds" you'll see our count dutifully increase as frames are skipped). It does downshift for network congestion, so I'm trying to think of a way to fool it and make dropped frames look like a network throughput problem instead. Doing so would technically be a violation of the spec but you can't shame that which has no shame and I have no shame. Our machines get no love from Google anyway so I'm perfectly okay with abusing their software.

I have a first draft of the conversion of our minimp3 decoder to a platform codec written, but I haven't yet wired it up or tested it, so this version still uses the old codec wrapper and still has the track-shifting problem with Amazon Music. That is probably the highest priority for 45.5, since it is an obvious regression from 38. On the security side, this release also disables RTCPeerConnection to eliminate the WebRTC IP "leak" (since I've basically given up on WebRTC for Power Macs). You can still reenable it from about:config as usual.

The top three priorities for the next couple versions (with links to the open Github issues) are, highest first: fixing Amazon Music, AltiVec VP9 codepaths, and the "little-endian typed array" portion of IonPower-NVLE to fix site compatibility with little-endian-generated asm.js. Much of this work will proceed in parallel, and the idea is to have a beta 45.5 for you to test in a couple weeks. Other high-priority items on my list to backport include allowing WebKit/Blink to eat the web (er, supporting certain WebKit-prefixed properties) to make us act the same as regular Firefox, support for ChaCha20+Poly1305, WebP images, expanded WebCrypto support, the "NV" portion of IonPower-NVLE and certain other smaller-scope HTML/CSS features. I'll be opening tracking issues for these as they enter my worklist, but I have not yet determined how I will version the browser to reflect these backported new features. For now we'll continue with 45.x.y while we remain on 45ESR and see where we end up.

As we look into the future, though, it's always instructive to compare it with the past. Anticipating that even the Google Code Archive will be flushed down the Mountain View memory hole (the wiki looks like it's already gone, but you can get most of our old wikidocs from Github), I've archived 4.0b7, 4.0.3, 8.0, 10.0.11, 17.0.11 and Intel 17.0.2 on SourceForge along with their corresponding source changesets. These Google Code-only versions were selected because they were either terminal (quasi-)ESR releases or have historical or technical relevance (4.0b7 was our first beta release of TenFourFox ever, "way back" in 2010; 8.0 was the last release that was pure tracejit, which some people prefer; and of course Intel 17.0.2 was our first and so far only release on Intel Macs). There is no documentation and there are no release notes; they're just there for your archival entertainment and foolhardiness. Remember that old versions run an excellent chance of corrupting your profile, so start them up with one you can throw away.

Finally, a good reason to dump Dropbox (besides the jerking around they give those of you trying to keep the PowerPC client working) is their rather shameful, secret, unauthorized abuse of your Mac's accessibility framework by forging an entry in the privacy database. (Such permissions allow it to control other applications on your Mac as if it were you at the user interface. The security implications of that should be immediately obvious, but if they're not, see this discussion.) The fact this is possible at all is a bug Apple absolutely must fix, and apparently has in macOS Sierra, but exploiting it in this fashion is absolutely appalling behaviour on Dropbox's part, because it won't even let you turn it off. To their credit they're taking their lumps on Hacker News and TechCrunch, but accepting their explanation of "only asking for privileges we use" requires a level of trust that frankly they don't seem worthy of, and saying they never store your administrator password is a bit disingenuous when they use administrative access to create a whole folder of setuid binaries -- they don't need your password at that point to control the system. Moreover, what if there were an exploitable security bug in their software?

Mind you, I don't have a problem with apps requesting that access if I understand why and the request isn't obfuscated. As a comparison, GOG.com has a number of classic DOS games I love that were ported for DOSBox and work well on my MacBook Air. These require that same accessibility access for proper control of input methods. Though I hope they come up with a different workaround eventually, the GOG installer does explain why and does use the proper APIs for requesting that privilege, and you can either refuse on the spot or disable it later if you decide you're not comfortable with it. That's how it's supposed to work, but that's not what Dropbox did, and they intentionally hid it and the other inappropriate system-level things they were sneaking through. Whether out of a genuine concern for user experience or just trying to get around what they felt was an unnecessary security precaution, it's not acceptable and it's potentially exploitable, and they need to answer for that.

Watch for 45.4 going final in a couple days, and hopefully a 45.5 beta in a couple weeks.