XSLT RIP
xslt.rip
393 points by edent 6 hours ago
I don't really need or use XSLT (I think), so I am not really affected either way. But I am also growing mightily tired of Google acting like "I am the web" now. This is annoying to no end. I really don't want Google to dictate to mankind what the web is or should be. Them killing off uBlock Origin also shows this corporate mindset at work.
This is also why I dislike AI browsers in general. They generate a view for the user that may not be real. They act like a proxy-gate, intercepting things willy-nilly. I may be old school, but I don't want governments or corporations to jump in as middlemen and deny me information and opportunities of my own choosing. (Also, Google Suck, I mean Google Search, has sucked for at least the last 5 years now. That was not accidental - that was deliberate by Google.)
I was hoping the site itself would be an XML document. Thankfully, it is an XML document.
% curl https://xslt.rip/
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/index.xsl" type="text/xsl"?>
<html>
<head>
<title>XSLT.RIP</title>
</head>
<body>
<h1>If you're reading this, XSLT was killed by Google.</h1>
<p>Thoughts and prayers.</p>
<p>Rest in peace.</p>
</body>
</html>
This is actually a clever way to detect whether the browser supports XSLT. The actual content is XHTML in https://xslt.rip/index.xsl
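For the curious, a stylesheet like the site's index.xsl works by matching the fallback document and replacing it wholesale. A hypothetical minimal sketch (the real index.xsl serves the site's full XHTML content, not this):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- If this template ever runs, the browser's XSLT engine is alive
       and well, so the "killed by Google" fallback never appears. -->
  <xsl:template match="/html">
    <html>
      <head><title>XSLT.RIP</title></head>
      <body>
        <h1>If you're reading this, your browser supports XSLT.</h1>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

A browser without XSLT support ignores the `<?xml-stylesheet?>` processing instruction and just renders the raw XML, which is exactly the obituary text above.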
The author is a frontend designer and has a nice website, too: https://dbushell.com/
I like the personal, individual style of both pages.
Heh, I honestly thought the domain name stood for "D-Bus Hell" and not their own name.
Chuckling at the disclaimer 'No AI made by a human.' I doubt many web devs could say that these days, because so many use AI now. I was speaking with a web dev this summer and he told me AI made him at least twice as productive. It's an arms race to the bottom imo.
Which raises the question: are people consciously measuring their productivity? If so, how? And did they do it the same way before and after adopting AI tooling?
Anecdotal, but I don't measure my productivity, because it's immeasurable. I don't want to be reduced to lines of code produced or JIRA tickets completed. We don't even measure velocity, for that matter. Plus when I do end up with a task that involves writing something, my productivity depends entirely on focus, energy levels and motivation.
To me XSLT came with a flood of web complexity that led to having effectively only 2 possible web browsers. It seems a bit funny because the website looks like straight out of the 90s when "everything was better"
I have the same mixed feelings. Complexity is antidemocratic in a sense. The more complex a spec gets the fewer implementations you get and the more easily it can be controlled by a small number of players.
It’s the extend part of embrace, extend, extinguish. The extinguish part comes when smaller and independent players can’t keep up with the extend part.
A more direct way of saying it is: adopt, add complexity cost overhead, shake out competition.
Ironically, that text is all you get if you load the site from a text browser (Lynx etc.) It doesn't feel too different from <noscript>This website requires JavaScript</noscript>...
I now wonder if XSLT is implemented by any browser that isn't controlled by Google (or derived from one that is).
> now wonder if XSLT is implemented by any browser that isn't controlled by Google (or derived from one that is).
Edge IE 11 mode is still there for you. Which also supports IE 6+ like it always did, presumably. They didn’t reimplement IE in Edge; IE is still there. Microsoft was all in on xml technologies back in the day.
Firefox hasn't removed XSLT support yet.
I should've worded differently. By the narrative of this website, Google is "paying" Mozilla & Apple to remove XSLT, thus they are "controlled" by Google.
I personally don't quite believe it's all that black and white, just wanted to point out that the "open web" argument is questionable even if you accept this premise.
Worth noting XSLT is actually based on DSSSL, the Scheme-based document transformation and styling language of SGML. Core SGML already has "link processes" as a means to associate simple transforms/renames, reusing other markup machinery concepts such as attributes, but it also introduces a rather low-level automaton construct to describe context-dependent and stateful transformations (the kind that would've been used for recto/verso rendering on even/odd print pages).
I think it's interesting because XSLT, based on DSSSL, is already Turing-complete and thus the XML world lacked a "simple" sub-Turing transformation, templating, and mapping macro language that could be put in the hands of power users without going all the way to introduce a programming language requiring proper development cycles, unit testing, test harnesses, etc. to not inevitably explode in the hands of users. The idea of SGML is very much that you define your own little markup vocabulary for the kind of document you want to create at hand, including powerful features for ad-hoc custom Wiki markup such as markdown, and then create a canonical mapping to a rendering language such as HTML; a perspective completely lost in web development with nonsensical "semantic HTML" postulates and delivery of absurd amounts of CSS microsyntax.
Completely correct and the operative phrase here is “absurd amounts” which actually captures our entire contemporary computing stack in almost every dimension that matters.
I'm strongly against the removal of XSLT support from browsers—I use both the JavaScript "XSLTProcessor" functions [0] and "<?xml-stylesheet …?>" [1] on my personal website, I commented on the original GitHub thread [2], and I use XSLT for non-web purposes [3].
But I think that this website is being hyperbolic: I believe that Google's stated security/maintenance justifications are genuine (but wildly misguided), and I certainly don't believe that Google is paying Mozilla/Apple to drop XSLT support. I'm all in favour of trying to preserve XSLT support, but a page like this is more likely to annoy the decision-makers than to convince them to not remove XSLT support.
[0]: https://www.maxchernoff.ca/tools/Stardew-Valley-Item-Finder/
[1]: https://www.maxchernoff.ca/atom.xml
[2]: https://github.com/whatwg/html/pull/11563#issuecomment-31909...
[3]: https://github.com/gucci-on-fleek/lua-widow-control/blob/852...
>I use both the JavaScript "XSLTProcessor" functions [0] and "<?xml-stylesheet …?>" [1] on my personal website
You are on some very very small elite team of web standards users then
FYI: Many Firefox and Thunderbird extensions use <?xml-stylesheet?> . Perhaps not XSLTProcessor though.
Can’t you just do the xslt transformation server-side? Then you can use the newest and best xslt tools, and the output will work in any browser, even browsers that never had any built-in xslt support.
> Can't you just do the xslt transformation server-side?
For my Atom feed, sure. I'm already special-casing browsers for my Atom feed [0], so it wouldn't really be too difficult to modify that to just return HTML instead. And as others mentioned, you can style RSS/Atom directly with CSS [1].
For my Stardew Valley Item Finder web app, no. I specifically designed that web app to work offline (as an installable PWA), so anything server-side won't work. I'll probably end up adding the JS/wasm polyfill [2] to that when Chrome finally removes support, but the web app previously had zero dependencies, so I'm a little bit annoyed that I'll have to add a 2MB dependency.
[0]: https://github.com/gucci-on-fleek/maxchernoff.ca/blob/8d3538...
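A server-side fallback along these lines can be sketched with nothing but the Python standard library. To be clear, this is an illustration of the approach, not real XSLT (the stdlib has no XSLT engine; a real setup would run the feed's actual stylesheet through xsltproc or lxml), and the feed below is a made-up example:

```python
# Sketch: turn an Atom feed into minimal HTML on the server, so browsers
# without XSLT support still get a readable page. NOT an XSLT engine --
# just the same shape of transformation, done by hand with the stdlib.
import xml.etree.ElementTree as ET
from html import escape

ATOM_NS = "{http://www.w3.org/2005/Atom}"

FEED = """<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Feed</title>
  <entry><title>First post</title><link href="https://example.com/1"/></entry>
  <entry><title>Second post</title><link href="https://example.com/2"/></entry>
</feed>"""

def feed_to_html(xml_text: str) -> str:
    """Render an Atom feed as a simple HTML list of links."""
    root = ET.fromstring(xml_text)
    title = escape(root.findtext(f"{ATOM_NS}title", ""))
    items = []
    for entry in root.iter(f"{ATOM_NS}entry"):
        href = entry.find(f"{ATOM_NS}link").get("href", "#")
        text = escape(entry.findtext(f"{ATOM_NS}title", ""))
        items.append(f'<li><a href="{escape(href)}">{text}</a></li>')
    return f"<h1>{title}</h1><ul>{''.join(items)}</ul>"

print(feed_to_html(FEED))
```

The real stylesheet would of course produce much richer output, but this is roughly what "do the transformation server-side" means in practice.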
That is actually Mozilla's stance in the linked issue, except on the client side: they would rather replace it with a non-native implementation (so there are no more surprising security issues) if removing it outright is impractical.
There is actually an example of such a situation: Mozilla removed the Adobe PDF plugin a long time ago and replaced it with pdf.js. It's still a slight performance regression for very large PDFs, but it is enough for most use cases.
The bottom line, though, is "it's actually worth doing because people are using it". They won't actively support a feature that few people use, because they don't have the staff to support it.
>But I think that this website is being hyperbolic
Intentionally in a humourous way, yes
I think also literally, independent of the cheeky tone.
Where it lost me was:
>RSS is used to syndicate NEWS and by killing it Google can control the media. XSLT is used worldwide by multiple government sites. Google are now trying to control LEGISLATION. With these technologies removed what is stopping Google?
I mean yes, Google lobbies, and certainly can lobby for bad things. And though I personally didn't know much of anything about XSLT, from reading a bit about it I am certainly ready to accept the premise that we want it. But... is Google lobbying for an XSLT law? Does "control legislation" mean deprecating a tool for publishing info on government sites?
I actually love the cheeky style overall, and would say it's a brilliant signature style to get attention, but implying this is tied to a campaign to control laws is rhetorical overreach, even by its own intentionally cheeky standards.
I think the reason you're considering it rhetorical overreach is because you're taking it seriously. If the author doesn't actually mind the removal of XSLT support (i.e. possibly rues its removal, but understands and accepts the reasons), then it's really a perfectly fine way to just be funny.
> Does "control legislation" mean deprecate a tool for publishing info on government sites?
I believe the intended meaning, in context, is "... for publishing the literal text of laws on government sites".
Right, my quote and your clarification are saying the same thing (at least that's what I had in mind when I wrote that).
But that leaves us back where we started, because characterizing that as "control the laws" is an instance of the rhetorical overreach I'm talking about, strongly implying something like literal control over the policy-making process.
> but a page like this is more likely to annoy the decision-makers than to convince them to not remove XSLT support.
You cannot “convince decision-makers” with a webpage anyway. The goal of this one is to raise awareness on the topic, which is pretty much the only thing you can do with a mere webpage.
For some reason people seem to think raising awareness is all you need to do. That only works if people already generally agree with you on the issue. Want to save endangered animals? raising awareness is great. However if you're on an issue where people are generally aware but unconvinced, raising more awareness does not help. Having better arguments might.
>For some reason people seem to think raising awareness is all you need to do.
I guess I'm not seeing how that follows. It can still be complementary to the overall goal rather than a failure to understand the necessity of persuasion. I think the needed alchemy is a serving of both, and I think it actually is trying to persuade at least to some degree.
I take your point with endangered animal awareness as a case of a cause where more awareness leads to diminishing returns. But if anything that serves to emphasize how XSLT is, by contrast, not anywhere near "save the animals" level of oversaturation. Because save the animals (in some variation) is on the bumper sticker of at least one car in any grocery store parking lot, and I don't think XSLT is close to that.
I think it's the other way around. Simply raising awareness about endangered animals may be enough to gain traction since many/most people are naturally sympathetic about it. Conversely, XSLT being deprecated has lower awareness initially, but when you raise it many people hearing that aren't necessarily sympathetic - I don't think most engineers think particularly fondly about XSLT, my reaction to it being deprecated is basically "good riddance, I didn't think anyone was really using it in browsers anyway".
As an open source developer, I also have a lot of sympathy for Google in this situation. Having a legacy feature hold the entire project back despite almost nobody using it, because the tiny fraction that do are very vocal and think it's fine to be abusive to developers to get what they want, despite the fact that it's free software they didn't pay a dime for, is something I think a lot of open source devs can sympathize with.
> For some reason people seem to think raising awareness is all you need to do.
I don't think many do.
It's just that raising awareness is the first step (and likely the only one you'll ever see anyway, because for most topics you aren't in a position where convincing *you* in particular has any impact).
Convincing me personally does not have any impact. Convincing people like me, en masse, does.
A mass doesn't move because it's convinced (i.e. rationally) of something, but because they are emotionally impacted.
Rational arguments come later, and mostly behind closed doors.
My emotional response to XSLT being removed was: "finally!". You would need some good arguments to convince me that, despite my emotions applauding this decision, it is actually a bad thing.
Sure, but translating that movement into actual policy change usually depends on how sympathetic uninvolved people are to the protestors, which in turn depends on how rational the protestors are perceived to be. Decision makers are affected by public sentiment, but the sentiment of the uninvolved public generally carries more weight.
That's why the other side usually tries to smear protests as crazy mobs who would never be happy. The moment you convince uninvolved people of this, the protestors lose most of their power.
> Rational arguments come later, and mostly behind closed doors.
I disagree with this. Rational arguments behind closed doors happen before resorting to protest not after. If you're resorting to protest you are trying to leverage public support into a more powerful position. That's about how much power you have not the soundness of your argument.
>> You cannot “convince decision-makers” with a webpage anyway.
They should probably be called "decision-maders"
> but wildly misguided
Why? Last time this came up, the consensus was that libxslt was barely maintained, never intended to be used in a secure context, and full of bugs.
I'm fully in favour of removing such insecure features that barely anyone uses.
I think if the XSLT people really wanted to save it the best thing to do would have been to write a replacement in Rust. But good luck with that.
> Last time this came up the consensus was that libxslt was barely maintained and never intended to be used in a secure context and full of bugs.
Sure, I agree with you there, but removing XSLT support entirely doesn't seem like a very good solution. The Chrome developer who proposed removing XSLT developed a browser extension that embeds libxslt [0], so my preferred solution would be to bundle that by default with the browser. This would:
1. Fix any libxslt security issues immediately, instead of leaving it enabled for 18 months until it's fully deprecated.
2. Solve any backwards compatibility concerns, since it's using the exact same library as before. This would avoid needing to get "consensus" from other browser makers, since they wouldn't be removing any features.
3. Be easy and straightforward to implement and maintain, since the extension is already written and browsers already bundle some extensions by default. Writing a replacement in Rust/another memory-safe language is certainly a good idea, but this solution requires far less effort.
This option was proposed to the Chrome developers, but was rejected for vague and uncompelling reasons [1].
> I think if the XSLT people really wanted to save it the best thing to do would have been to write a replacement in Rust.
That's already been done [2], but maintaining that and integrating it into the browsers is still lots of work, and the browser makers clearly don't have enough time/interest to bother with it.
[0]: https://github.com/mfreed7/xslt_extension
[1]: https://github.com/whatwg/html/issues/11523#issuecomment-315...
From your [1] “rejected for vague and uncompelling reasons”:
>>> To see how difficult it would be, I wrote a WASM-based polyfill that attempts to allow existing code to continue functioning, while not using native XSLT features from the browser.
>> Could Chrome ship a package like this instead of using native XSLT code, to address some of the security concerns? (I'm thinking about how Firefox renders PDFs without native code using PDF.js.)
> This is definitely something we have been thinking about. However, our current feeling is that since the web has mostly moved on from XSLT, and there are external libraries that have kept current with XSLT 3.0, it would be better to remove 1.0 from browsers, rather than keep an old version around with even more wrappers around them.
The bit that bothers me is that Google continue to primarily say they’re removing it for security reasons, although they have literally made a browser extension which is a drop-in replacement and removes 100% of the security concerns. The people that are writing about the reasons know this (one of them is the guy that wrote it), which makes the claim a blatant lie.
I want people to call Google specifically out on this (and Apple and Mozilla if they ever express it that way, which they may have done but I don’t know): that their “security” argument is deceit, trickery, dishonesty, grossly misleading, a bald-faced lie. If they said they want to remove it because barely anyone uses it and it will shrink their distribution by one megabyte, I would still disagree because I value the ability to apply XSLT on feeds and other XML documents (my Atom and RSS feed stylesheets are the most comprehensive I know of), but I would at least listen to such honest arguments. But falsely hiding behind “security”? I impugn their honour.
(If their extension is not, as their descriptions have implied, a complete, drop-in replacement with no caveats, I invite correction and may amend my expressed opinion.)
an insecure mess contained in a sandbox is still an insecure mess
it just has slightly less chance of affecting something else
The easier thing might have been if Chrome & co opted to include any number of polyfills in JS bundled with the browser instead of making an odd situation where things just break.
I think you can recognize that the burden of maintaining a proven security nightmare is annoying while simultaneously getting annoyed for them over-grabbing on this.
libxslt != XSLT.
It's like removing JPEG support because libjpeg is insecure!
Which would be a totally sensible thing to do. Especially if JPEG were a rarely used image format with few libraries supporting it, the main one being unmaintained.
If this were true you could fix this today with the other library. That library is the only implementation used and its features are relied upon.
Firefox doesn’t use libxslt. I presume IE didn’t either. It’s only WebKit-heritage browsers that use libxslt.
There is already a replacement in rust but people like you and the Google engineers have ignored that fact. “Good luck” they all say turning their nose away from reality so they can kill it. Thanks for your support.
>Last time this came up the consensus was that libxstl was barely maintained and never intended to be used in a secure context and full of bugs.
Being this is HN, did anyone suggest rewriting it in rust? :)
I'm aware I'm in a minority, but I find it sad that XSLT stalled and is mostly dead in the market. The amount of effort put into replicating most of the XML+XPath+XSLT ecosystem we had as open standards 25 years ago, using ever-changing libraries with their own host of incompatible limitations, rather than improving what we already had, has been a colossal waste of talent.
Was SOAP a bad system that misunderstood HTTP while being vastly overarchitected for most of its use cases? Yes. Could overuse of XML schemas render your documents unreadable and overcomplex to work with? Of course. Were early XML libraries well designed around the reality of existing programming languages? No. But also, was JSON's early implementation of 'you can just eval() it into memory' ever good engineering? No, and by the time you've written a JSON parser that beats that, you could've produced an equally improved XML system while retaining the much greater functionality it already had.
RIP a good tech killed by committees overembellishing it and engineers failing to recognise what they already had over the high of building something else.
There are still virtually zero good XML parsers but plenty of good JSON parsers so I do not buy your assertion. Writing a good JSON parser can be done by most good engineers, but I have yet to use a good XML parser.
This is based on my personal experience of having to parse XML in Ruby, Perl, Python, Java and Kotlin. It is a pain every time, and I have run into parser bugs at least twice in my career, while I have never experienced a bug in a JSON parser. Implementing a JSON parser correctly is way simpler. And they are also generally more user friendly.
> by the time you've written a JSON parser that beats that you could've equally produced an equally improved XML system while retaining the much greater functionality it already had.
Here is where you lose me
The JSON spec fits on two screen pages https://www.json.org/json-en.html
The XML spec is a book https://www.w3.org/TR/xml/
> The JSON spec fits on two screen pages https://www.json.org/json-en.html
It absolutely does not. From the very first paragraph:
It is based on a subset of the JavaScript Programming Language Standard ECMA-262 3rd Edition - December 1999.
which is absolutely a book you can download and read here: https://ecma-international.org/publications-and-standards/st...
Furthermore, JSON has so many dangerously-incompatible implementations that the errata for JSON implementations fills multiple books, such as advice to "always" treat numbers as strings, popular datetime "extensions" that know nothing of timezones, and so on.
> The XML spec is a book https://www.w3.org/TR/xml/
Yes, but that's also everything you need to know in order to understand XML, and my experience implementing APIs is that every XML implementation is obviously-correct, because anyone making a serious XML implementation has demonstrated the attention span to read a book, while every JSON implementation is going to have some fucking weird thing I'm going to have to experiment with, because the author thought they could "get the gist" from reading two pages on a blog.
> my experience implementing API is that every XML implementation is obviously-correct
This is not my experience. Just this week I encountered one that doesn’t decode entity/character references in attribute values <https://news.ycombinator.com/item?id=45826247>, which seems a pretty fundamental error to me.
As for doctypes and especially entities defined in doctypes, they’re not at all reliable across implementations. Exclude doctypes and processing instructions altogether and I’d be more willing to go along with what you said, but “obviously-correct” is still too far.
Past what is strictly the XML parsing layer to the interpretation of documents, things get worse in a way that they can’t with JSON due to its more limited model: when people use event-driven parsing, or even occasionally when they traverse trees, they very frequently fail to understand reasonable documents, due to things like assuming a single text node, ignoring the possibilities of CDATA or comments.
I think you are misreading the phrase "based on". The author, I believe, intends it to mean something like "descends from", "has its origins in", or "is similar to" and not that the ECMAScript 262 spec needs to be understood as a prerequisite for implementing a JSON parser. Indeed, IIRC the JSON spec defined there differs in a handful of respects from how JavaScript would parse the same object, although these might since have been cleaned up elsewhere.
JSON as a standalone language requires only the information written on that page.
The "References" section of the XML spec is almost as long as the JSON spec itself
> [...] serious XML implementation [...]
You are cherry-picking here
Aside from the other commenter's point about this being a misleading comparison, you didn't need to reinvent the whole XML ecosystem from scratch, it was already there and functional. One of the big claims I've seen for JSON though is that it has array support, which XML doesn't. And which is correct as far as it goes, but also it would have been far from impossible to code up a serializer/deserializer that let you treat a collection of identically typed XML nodes as an array. Heck, for all I know it exists, it's not conceptually difficult.
> RIP a good tech killed by committees overembellishing it and engineers failing to recognise what they already had over the high of building something else.
Hope I can quote this about the Transformer architecture one day
With browser being as complicated as they are, I kind of support this decision.
That said, I never used XSLT for anything, and I don't see how its support in browsers is tied to RSS. (Sure, you could render your page from your RSS feed, but that seems like a marginal use case to me.)
Would you be willing to entertain the idea that, perhaps, you haven't noticed you actually used XSLT during your mundane browsing? Sample page, how would you tell? https://www.europarl.europa.eu/politicalparties/index_en.xml
There exists a much better html version of that page, which also comes up as the first google result and is easier to discover on the website. https://www.europarl.europa.eu/about-parliament/en/organisat...
The lack of the jump scare cookie banner on the XSLT version is certainly an improvement, but I otherwise agree. Google search burying XSLT driven pages isn't a surprise given their stance.
Sure, there are examples of websites using XSLT, but so far I've only seen a dozen or maybe two dozen, and it really looks like they are extremely rare. And I'm pretty sure the EU parliament et al. will find someone to rework their page.
This really is just a storm in a waterglass. Nothing like the tens of thousands of flash and java applet based web pages that went defunct when we deprecated those technologies.
Those had good rationale for deprecating that I would say don't apply in this instance. Flash and Java applets were closed, insecure plugins outside the web's open standards, so removing them made sense. XSLT is a W3C standard built into the web's data and presentation layer. Dropping it means weakening the open infrastructure rather than cleaning it up.
> This really is just a storm in a waterglass. Nothing like the tens of thousands of flash and java applet based web pages that went defunct when we deprecated those technologies.
Sure, but Flash and Java were never standards-compliant parts of the web platform. As far as I'm aware, this is the first time that something has been removed from the web platform without any replacements—Mutation Events [0] come close, but Mutation Observers are a fairly close replacement, and it took 10 years for them to be fully deprecated and removed from browsers.
[0]: https://developer.mozilla.org/en-US/docs/Web/API/MutationEve...
You ignored the argument (though probably not intentionally). You talk about how many you've seen, but you've probably seen way more and never realized it.
If there were that many, why do people only list the same handful again and again? And where are all the /operators/ of those websites complaining? Is it possible that installing an XSLT processor on the server is not as big a hassle as everyone pretends?
Again: this is nothing like Flash or Java applets (or even ActiveX). People were seriously considering Apple's decision to not support Flash on iPhone as a strategic blunder due to the number of sites using it. Your local news station probably had video or a stock market ticker using Flash. You didn't have to hunt for examples.
Battle.net's forums used to use XSLT and be a buggy mess, but not sure if that was related to their use of XSLT.
Naturally I meant as a developer. I don't doubt I've come across XSLT-rendered pages.
If you view an RSS or Atom feed in chrome today you just get a screen of xml eg. https://developer.wordpress.org/news/feed/
In the golden old days of 2018, browsers at least applied some styling https://evertpot.com/firefox-rss/
You can still manually apply styling using xslt https://www.cedricbonhomme.org/blog/index.xml
But XSLT is not strictly required for styling. In fact, Firefox also supports an out-of-band stylesheet inclusion via the `Link` HTTP header [1]:
Link: </style.css>; rel=stylesheet
(Yes, this works even without the <?xml-stylesheet?> PI others have mentioned.)
I think the best strategy for Google is to support this and simultaneously ditch XSLT. This way nothing is truly lost.
[1] You can test your browser from: https://annevankesteren.nl/test/html-element/style-header.ph...
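The server side of that Link-header trick can be demonstrated with Python's standard library alone. This only shows a server attaching the header; whether a given browser actually honours it is exactly what the test page above checks:

```python
# Minimal demo: serve XML with an out-of-band stylesheet reference via
# the HTTP "Link" header, then fetch it back and read the header.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FeedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'<?xml version="1.0"?><feed/>'
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        # The out-of-band equivalent of <?xml-stylesheet?>:
        self.send_header("Link", "</style.css>; rel=stylesheet")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FeedHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/feed.xml") as resp:
    link_value = resp.headers["Link"]
print(link_value)  # </style.css>; rel=stylesheet
server.shutdown()
```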
In Safari, at least, clicking an RSS link prompts you to open it in an RSS reader, which I think is a superior experience. Reading an RSS feed in the browser is not without use, but I'd argue that that's mostly the job of the site itself.
> You can still manually apply styling using xslt
Unless I'm using XSLT without knowing, you can do this with the xml-stylesheet processing instruction
For RSS feeds, XSLT stylesheets are used to display a human-readable version in the browser.
Random example: https://lepture.com/en/feed.xml
This is useful because feed URLs look the same as web page URLs, so users are inclined to click on them and open them in a web browser instead of an RSS reader. (Many users these days don't even know what an RSS reader is). The stylesheet allows them to view the feed in the browser, instead of just being shown the XML source code.
Why is this so critical? We don't do this for any other format. If you put an MS Office document on a page, the browser doesn't render it; we download it and pass it off to a dedicated program. Why is RSS so special here?
Well, IMO it would be cool if we could do that, but the MS Office formats are a lot more complicated so it's a lot more work to implement. Also, quite often the whole point of sharing a file in MS Office format is so that the user can take it and edit it, which would require a dedicated program anyway.
Why would Google keep supporting AMP if the line is drawn only by use?
They chose to kill off a spec and have it removed from every browser because they don't like it. They chose to keep maintaining AMP because it's their pet project and spec. It's as simple as that; it has nothing to do with limited resources forcing them to trim features rather than maintain or improve them.
Because the "semantic web" was an interesting idea.
And: Because it exists/existed and thus people relied upon it.
With the number of sites on the web, even a small share relying on a feature, each with just a handful of users, adds up to a big number of people impacted.
I don't see how XSLT is connected to the semantic web
The GP asked "Why is RSS so special here?"
And XSLT in that context is interesting because one can ship the RSS file, the web browser renders it into something human-readable with XSLT, and a smart browser can do smart things with it. All from the same file.
Ok, but maintaining a web browser that supports a ton of small features that nobody-except-me-and-my-cousin are using has a huge cost; you don't support obscure features just because someone somewhere is relying on them (relevant: https://xkcd.com/1172/).
If you think about it, basically nothing except HTML is a critical function of browsers. You can solve everything just with that. We don’t even need CSS, or any custom styling at all. JavaScript is absolutely not necessary.
Yes and no.
You can have a document without CSS but you can’t style it.
You can have a document without JavaScript, but only a static one (still interactive, but only through forms)
On the other hand, you can replace XSLT with server side rendering, or JavaScript. It does not serve a truly unique function.
I don't think it's a critical feature, but it is nice-to-have.
Imagine if you opened a direct link to a JPEG image and instead of the browser rendering it, you'd have to save it and open it in Photoshop locally. Wouldn't that be inconvenient?
Many browsers do support opening web-adjacent documents directly because it's convenient for users. Maybe not Microsoft Word documents, but PDF files are commonly supported.
Yeah, but browsers actually make use of that format. And it's not like you can add a special header to JPEG files to do custom reformatting of the JPEG via a Turing-complete language. Browsers just display the file.
You can do the same by checking Accept headers, User-Agent if you truly must.
Aren't there other ways to load and parse a technical format like RSS to a human-readable format? Like you would do with JSON.
Or can't you polyfill this / use a library to parse this?
You can do the transformation server-side, but it's not trivial to set it up. It would involve detecting the web browser using the "Accept" header (hopefully RSS readers don't accept text/html), then using XSLT to transform the XML to XHTML that is sent to the client instead, and you probably need to cache that for performance reasons. And that's assuming the feed is just a static file, and not dynamically generated.
In theory you could do the transformation client side, but then you'd still need the server to return a different document in the browser, even if it's just a stub for the client-side code, because XML files cannot execute Javascript on their own.
Another option is to install a browser extension but of course the majority of users will never do that, which minimizes the incentive for feed authors to include a stylesheet in the first place.
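To make the server-side approach above concrete, here is a rough sketch of the Accept-header negotiation in Python. The function name and the media-type heuristics are illustrative assumptions, not from any real feed server; actual clients vary in the headers they send.

```python
# Rough sketch: decide whether to serve transformed HTML to a browser or
# the raw XML feed to a feed reader, based on the Accept header.
# Heuristic only -- real clients send a wide variety of Accept values.

def prefers_html(accept_header: str) -> bool:
    """Return True if the client ranks text/html above feed media types.

    Browsers typically send something like:
        text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    while feed readers tend to send application/atom+xml or similar.
    """
    parts = []
    for item in accept_header.split(","):
        fields = item.strip().split(";")
        media_type = fields[0].strip()
        q = 1.0  # default quality per HTTP content negotiation
        for param in fields[1:]:
            param = param.strip()
            if param.startswith("q="):
                try:
                    q = float(param[2:])
                except ValueError:
                    q = 0.0
        parts.append((media_type, q))
    html_q = max((q for mt, q in parts if mt == "text/html"), default=0.0)
    feed_q = max((q for mt, q in parts
                  if mt in ("application/atom+xml", "application/rss+xml",
                            "application/xml", "text/xml")), default=0.0)
    return html_q > feed_q
```

On the HTML branch the server would run the XSLT transform (and cache the result); on the other branch it serves the XML file untouched.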
Not without servers rendering the HTML or depending on client-side JS for parsing and rendering the content.
It's also worth noting that the latest XSLT spec actually supports JSON as well. Had browsers decided to implement that spec rather than remove support altogether, you'd be able to render JSON content to HTML entirely client-side without JS.
This site is a bit of a Rorschach test as it plays both sides of this argument: bad Google for killing XSLT, and the silliness of pushing for XSLT adoption in 2025.
"Tell your friends and family about XSLT. Keep XSLT alive! Add XSLT to your website and weblog today before it is too late!"
I already have XSLT in my website because I have an Atom feed and XSLT is the only way to serve formatted Atom/RSS feeds in a static site. Perhaps you have never considered the idea that someone might want to purchase some cheap static hosting to serve their personal website, but it is a fine way to do things. This change pries the web ever further out of the hands of common people and into the big websites that just want the browser to serve their apps.
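For reference, the whole mechanism on a static site is a single processing instruction at the top of the feed file (the stylesheet path here is illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/feed.xsl" type="text/xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Weblog</title>
  <!-- entries follow -->
</feed>
```

Feed readers ignore the instruction and parse the XML as usual; browsers with XSLT support fetch `/feed.xsl` and render the transformed result.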
IMHO, Google has become the most powerful tech company out there! It has a strong monopoly in almost every aspect of our lives, and it is becoming extremely difficult to completely decouple from it. My problem with this is that it now dictates and influences what can be done, what is allowed and what is not, and, with its latest Android saga (https://news.ycombinator.com/item?id=45017028), it's become worrying.
I strongly encourage building a website named something like keepXSLTAlive.tld to advocate for XSLT, as others did with https://keepandroidopen.org/ for Android (https://news.ycombinator.com/item?id=45742488), or keep this current site (https://xslt.rip/) but update the UI a little to better reflect the protest vibe.
What you say about google might be true. And its changes to android might be bad…
But that does not mean xslt should be kept alive just because of that. It should be judged on its own merits
And that's part of the problem: they didn't judge it on its merits.
Google judged a 25 year old spec that is now 2 major versions out of date.
So why is almost nobody here actually defending it on its own merits? In my opinion XSLT was a bad idea ~20 years ago when I started in web development. It was convoluted, not nice to work with, and the implementations were buggy.
Most people seem to think it is bad because it is Google who want to remove it. Personally I just see Google finally doing something good.
There is so much defense of XSLT it’s crazy you assume no one is here defending it. This thread isn’t the single defense point against Google.
Not only that, Google engineer Mason Freed has shown pretty forcefully that he will not listen to defense, reason, or logic. This is further evidenced by Google repeatedly trying to kill it for 25 years.
Personally I just see you licking Google’s boot.
End of an era! I remember going through XSLT tutorials many decades ago and learning everything there was to learn about this curious technology that could make boring XML documents come 'alive'. I still use it to style my RSS feeds, for example, <https://susam.net/feed.xml>. It always felt satisfying that an XML file with a stylesheet could serve as both data and presentation.
Keeping links to the original announcements for future reference:
1) <https://groups.google.com/a/chromium.org/g/blink-dev/c/CxL4g...>
2) <https://developer.chrome.com/docs/web-platform/deprecating-x...>
I know that every such feature adds significant complexity and maintenance burden, and most people probably don't even know that many browsers can render XSLT. Nevertheless, it feels like yet another interesting and niche part of the web, still used by us old-timers, is going away.
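For anyone curious what styling a feed this way involves, here is a minimal sketch of a feed stylesheet. It is an illustrative example, not the actual stylesheet used on susam.net; real feed stylesheets are usually more elaborate.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal illustrative stylesheet: render an Atom feed's entries
     as an HTML list of links. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:atom="http://www.w3.org/2005/Atom">
  <xsl:output method="html"/>
  <xsl:template match="/atom:feed">
    <html>
      <body>
        <h1><xsl:value-of select="atom:title"/></h1>
        <ul>
          <xsl:for-each select="atom:entry">
            <li>
              <a href="{atom:link/@href}">
                <xsl:value-of select="atom:title"/>
              </a>
            </li>
          </xsl:for-each>
        </ul>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

The same XML file remains a perfectly valid Atom feed for machine consumers; only browsers apply the transform.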
Boy is this an awesome web page. Suddenly I have the urge to create an html page with iframes, blink, marquee and table tags (for layout of course)
You can always render blink and marquee with Canvas.
Just kidding, Canvas is obsolete technology, this should obviously be done with WebGPU
I know you're being sarcastic, but to be pedantic WebGPU (usually) uses canvas. Canvas is the element, WebGPU is one of the ways of rendering to a canvas, in addition to WebGL and CanvasRenderingContext2D.
And also don't expect smooth sailing with WebGPU yet, unless all your users have modern mainstream browsers with up to date hardware.
And even that isn't enough; no browser supports WebGPU on all platforms out of the box. https://caniuse.com/webgpu
Chrome supports it on Windows and macOS, Linux users need to explicitly enable it. Firefox has only released it for Windows users, support on other platforms is behind a feature flag. And you need iOS 26 / macOS Tahoe for support in Safari. On mobile the situation should be a bit better in theory, though in my experience mobile device GPU drivers are so terrible they can't even handle WebGL2 without huge problems.
Needs an "under construction" banner
Recently I had to grab content from a page that was laid out with tables. Just nested tables over tables, not even ids for the elements.
I invite you to view the source of the very page we're on right now.
thanks for this, you made my day! i never bothered to look.
i still remember when tables were forced out of fashion by hordes of angry div believers! they became anathema and instantly made you a pariah. the arguments were very passionate but never made any sense to me: the preaching was separating structure from presentation, mostly to enable semantics, and then semantics became all swamped with presentation so you could get those damned divs aligned in a sensible way :-)
just don't use (or abuse) them for layout but tables still seem to me the most straightforward way to render, well, tabular content.
While I agree with the sentiment, I loathe these "retro" websites that don't actually look like how most websites looked back then. It's like how people remember the 80s as neon blue and pink when it was more of a brownish beige.
>While I agree with the sentiment, I loathe these "retro" websites that don't actually look like how most websites looked back then.
Countless websites on Geocities and elsewhere looked just like that. MY page looked like that (but more edgy, with rotating neon skull gifs). All those silly GIFs were popular and there were sites you could find and download some for personal use.
>It's like how people remember the 80s as neon blue and pink when it was more of a brownish beige.
In North Platte or Yorkshire maybe. Otherwise plenty of neon blue and pink in the 80s. Starting from video game covers, arcades, neon being popular with bars and clubs, far more colorful clothing being popular, "Memphis" style graphic design, etc.
The brown, beige, and dark orange were extremely prevalent in the 80s --- but a lot of that was a result of the fact that most things in your environment are never brand new; the first half of the 80s was mostly built in the second half of the 70s.
This look with animations and bright text on dark repeated backgrounds was definitely popular for a while in the late 90s. You wouldn’t see it on larger sites like Yahoo or CNN, but it was definitely not unheard of for personal sites.
Gray backgrounds where also popular, with bright blue for unvisited links and purple for visited links. IIRC this was inspired by the default colors of Netscape Navigator 2.
> IIRC this was inspired by the default colors of Netscape Navigator 2.
"Inspired" is an interesting word for "didn't set custom values." And I believe Mosaic used the same colors before. I'm not even sure when HTML introduced the corresponding attributes (this was all before CSS ...)
Now that you mention it, something did seem a little off about the thinking-butt emoji...
> don't actually look like how most websites looked back then
https://geocities.restorativland.org/Area51/
> was more of a brownish beige.
Did you never watch MTV?
Exactly.
If there is no white 1x1 pixel stretched in an attempt to make something that resembles an actual layout, or multiple weird tables, I always ask: are they even trying?
In all seriousness: XSLT got quite a good run. Time to let it rest.
1x1 pixels for padding and aligning? That came later. Your memory is off.
In the 90s, sites did kinda look like that.
The 1x1 pixel GIF hack arrived shortly after Netscape 1.1 introduced tables. I believe this was before colored text and tiled backgrounds became available. So the hack is definitely part of the "golden age" of web design.
1x1 pixels for padding and aligning were absolutely a thing in the late 90s (1997+). Don't know what alternative history you have in mind, but they were used throughout the "table layout" era.
What came later was the float layout hell, sorry, "solution".
Could just be the author’s personal style?
I once got into a cab in NYC on Halloween and the driver said to me, hey, you really nailed that 80s hairstyle, thinking I had styled it for Halloween. I had to tell him dude, I’m from the 80s.
It's interesting that we don't have a replacement for this use case. For me, XSLT hits a sweet spot where I can send a machine-parsable XML document and a small XSLT sheet from dirt cheap static web hosting (where I cannot perform server-side transforms, or control HTTP headers). This is fairly minimal and avoids needing to keep multiple files in sync.
I could add a polyfill, but that adds multiple MB, making this approach heavyweight.
XSLT was once described to me as "Pain wrapped in Hate", and I fully agree. I'm truly shocked that there is ANY opposition to its removal and retirement.
Stockholm Syndrome: we went through the torture of learning it, and now we love it.
> Tell your friends and family about XSLT.
I had a good chuckle at the idea of sitting around the dinner table at Christmas telling my parents and in-laws all about XSLT.
Don’t… you’re forgetting the Christmas of ’02 when cousin Marvin brought up the issue of Tabs vs Spaces!! Uncle Frank still holds a grudge and he’s still not on speaking terms with Adam
I haven't been too chatty about it but the furor over this being removed has, I suspect, everything to do with there being no real plan to replace what it does. No I don't just mean styling RSS feeds. I mean writing websites as semantic documents!! The whole thing the web is (was) about!
Since the XSLTProcessor feature can be realized with a Polyfill (https://github.com/mfreed7/xslt_polyfill), I find myself agreeing with Google.
Btw, I love this page! Highly entertaining, yet at the same time use of XSLT.
If they were going to ship the xslt polyfill by default with Chrome, I wouldn't disagree.
I've worked with a hospital; their electronic medical records are written in XML and use XSLT to render HTML.
They will be able to do that in perpetuity.
It's just direct browsing support for rendering using XSLT that's removed.
Which is one excellent use of XSLT. It is not that useful for general web.
From https://chromeenterprise.google:
> For over a decade, Chrome has supported millions of organizations with more secure browsing – while pioneering a safer, more productive open web for all.
… and …
> Our commitment to Chromium and open philosophy to integration means Chrome works well with other parts of your tech stack, so you can continue building the enterprise ecosystem that works for you.
Per the current version of https://developer.chrome.com/docs/web-platform/deprecating-x..., by August 17, 2027, XSLT support is removed from Chrome Enterprise. That means even Chrome's enterprise-targeted, non-general-web browser is going to lose support for XSLT.
Most people who use XSLT like the grandparent described were never using it on the client side but on the server side. Nothing Google Chrome does will affect the server side.
To clarify: the first web browser evolved from an SGML-based documentation browser at CERN. This was the first vision of the web: well-structured content pages, connected via hyperlinks (the "hyper" part meaning that links could point beyond the current set of pages). So, something like a global library. Many people are still nostalgic for this past.
Surprisingly, the "hyperlinked documents" structure was universal enough to allow rudimentary interactive web applications like shops or reservation forms. The web became useful to commerce. At first, interactive functionality was achieved by what amounted to hacks: nav blocks repeated at every page, frames and iframes, synchronous form submissions. Of course, web participants pushed for more direct support for application building blocks, which included Javascript, client-side templates, and ultimately Shadow DOM and React.
XSLT is ultimately a client-side template language too (can be used at the server side just as well, of course). However, this is a template language for a previous era: non-interactive web of documents (and it excels at that). It has little use for the current era: web of interactive applications.
What makes XSLT inherently unsuitable for an interactive application in your mind? All it does is transform one XML document into another; there's no earthly reason why you can't ornament that XML output in a way that supports interactive JS-driven features, or use XSLT to built fragments of dynamically created pages that get compiled into the final rendered artifact elsewhere.
My only use of XSLT (2000-2003) was to make interactive e-learning applications. I'd have used it in 2014 too, for an interactive "e-brochure", if I could have worked out a cross-browser solution for runtime transformation of XML fragments. (I suspect it was possible then but I couldn't work it out in the time I had for the job...)
If you can use it to generate HTML, you can use it to generate an interactive experience.
If they have security in mind, they should intend to deprecate and remove HTML. The benefits of keeping it are slowly disappearing as AI content on the web is taking over, HTML contains far more quirks than XSLT, and let's not talk about the aging C codebases behind HTML...
What security vulnerabilities do you think of? Modern html5 parsers are really good and secure. The html5 standard largely solved the issues.
In all seriousness, XSLT looked stillborn even 25 years ago when it was introduced.
Agree. It always seemed like a strange and poorly conceived technology to me.
There is absolutely nothing to prevent anyone from generating arbirary DOM content from XML using JS; indeed, there's nothing stopping them from creating a complete XSLT implementation. There's just no need to have it in the core of the browser.
You don’t need to generate anything with JavaScript, aside from one call to build an entire DOM object from your XML document. Boom, whole thing’s a DOM.
I guess the fact that it’s obscure knowledge that browsers have great, fast tools for working directly with XML is why we’re losing nice things and will soon be stuck in a nothing-but-JavaScript land of shit.
Lots of protocols are based on XML and browsers are (though, increasingly, “were”) very capable of handling them, with little more than a bridge on the server to overcome their inability to do TCP sockets. Super cool capability, with really good performance because all the important stuff’s in fast and efficient languages rather than JS.
XSLT has a life outside the browser and remains valuable where XML is the way data is exchanged. And RSS does not demand XSLT in the browser so far as I know. I think RIP is a bit excessive.
The website is overly dramatic. Google doesn't hate XSLT; it is simply that no one wants to maintain libxslt, and it is full of security issues. Given how rarely it is used, it is just not worth the time + money. If the author wants to raise money to pay a developer willing to maintain libxslt, Google might revise the decision.
"Full of security issues" is similarly overly dramatic, haha. Fil-C appears to already compile libxml2 [1], so I wonder how far off libxslt would be?
[1] https://github.com/pizlonator/fil-c/tree/deluge/projects/lib...
> Full of security issues is similarly overly dramatic
It doesn’t seem dramatic at all:
> Finding and exploiting 20-year-old bugs in web browsers
> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.
— https://www.offensivecon.org/speakers/2025/ivan-fratric.html
— https://www.youtube.com/watch?v=U1kc7fcF5Ao
> libxslt -- unmaintained, with multiple unfixed vulnerabilities
— https://vuxml.freebsd.org/freebsd/b0a3466f-5efc-11f0-ae84-99...
> no one wants to maintain libxslt
For $0? Probably not. For $40m/year, I bet you could create an entire company that just maintains and supports all these "abandoned" projects.
> For $0? Probably not. For $40m/year, I bet you could create an entire company
No sane commercial entity will dump even a cent into supporting an unused technology.
You'd have better luck pitching this idea to your senator to set up an agency for dead stuff; it would create tens or hundreds of jobs. And what's $40mm in the big picture?
> your senator
Funny you should mention that. US Title Code uses XSLT.
I know it is there. I am more curious as to why no one updated all that to modern browser technology.
Until these recent rumblings out of Google, it was modern browser technology.
> it is simply no one wants to maintain libxslt and it is full of security issues. Given how rarely it is used, it is just not worth the time + money.
As for money: Remind me what was Google's profit last year?
As for usage: XSLT is used on about 10x more sites [1] than Chrome-only non-standards like USB, WebTransport and others that Google has no trouble shoving into the browser
[1] Compare XSLT https://chromestatus.com/metrics/feature/timeline/popularity... with USB https://chromestatus.com/metrics/feature/timeline/popularity... or WebTransport: https://chromestatus.com/metrics/feature/timeline/popularity... or even MIDI (also supported by Firefox) https://chromestatus.com/metrics/feature/timeline/popularity...
For me the usage argument sounds like an argument to kill the other standards rather than to keep this one.
Browsers should try things. But if after many years there is no adoption they should also retire them. This would be no different if the organization is charity or not.
> For me the usage argument sounds like an argument to kill the other standards rather than to keep this one.
Google themselves have a document on why killing anything in the web platform is problematic: e.g. Chrome stats severely under-report corporate usage. See "Blink principles of web compatibility" https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...
It has great examples for when removal didn't break things, and when it did break things etc.
I don't know if anyone pays attention to this document anymore. Someone from Chrome linked to this document when they wanted to remove alert/prompt, and it completely contradicted their narrative.
> Remind me what was Google's profit last year?
Last I checked, Google isn't a charity.
Their products are built on open source. Android and Chrome come to my mind, but also their core infrastructure, it's all Linux and other FOSS under the hood.
Besides, xkcd #2347 [1] is talking about precisely that situation - there is a shitload of very small FOSS libraries that underpin everything and yet, funding from the big dogs for whom even ten fulltime developer salaries would be a sneeze has historically lacked hard.
The thing is, XSLT isn't underpinning much of anything; that is why Google is removing it instead of fixing it.
Google does contribute to software that it uses. When I say Google is not a charity, I mean: why would they continue to use a library that is not useful to them, just so they can have an excuse to contribute to it? It makes very little sense.
> The thing is, xslt isn't underpinning much of anything
An awful lot of stuff depends on XSLT under the hood. Web frontend, maybe not much any more; that ship has long since sailed. But anything Java? Anything XML-SOAP? That kind of stuff breathes XML and XSLT. And at least MS Office's new-generation file formats are XML... and I'm pretty sure OpenOffice is just the same.
> The thing is, xslt isn't underpinning much of anything
Neither do huge complicated standards that Chrome pushed in recent years.
> that is why google is removing it instead of fixing it.
And yet Google has no issues supporting, deploying and fixing features that see 10x less usage. Also, see this comment: https://news.ycombinator.com/item?id=45874740
> i mean why would they continue to use a library that is not useful to them, just so they can have an excuse to contribute to it? It makes very little sense.
They took upon themselves the role of benevolent stewards of the web. According to their own principles they should exercise extreme care when adding or removing features to the web.
However, since they dominate the browser market, and have completely subsumed all web-related committees, they have turned into arrogant uncaring dictators.
> However, since they dominate the browser market, and have completely subsumed all web-related committees, they have turned into arrogant uncaring dictators.
Apple and Firefox agree with them. They did not do this unilaterally. By some accounts it was actually Firefox originally pushing for this.
To be honest, there are two ways to solve the problem of xkcd 2347: either put effort into the very small library, or just stop depending on it. Both solutions are fine by me, and Google apparently just chose the latter one here.
If not depending on a library is an option, then you don't really have an xkcd 2347 problem. The entire point of that comic is that some undermaintained dependencies are critical, without reasonable alternatives.
Except it's not Google whose "products" stop working by removing that dependency.
Why not switch the browser to use a JavaScript implementation internally instead of the old C++ implementation?
This is about forcing everyone into JSON. Incredibly sad the amount of “just take Google’s word for it” in this thread. We have truly lost our way as a tech-embracing society and now eschew reason.
There is a reason the lead Google engineers initials are “MF”.
Looks like more of a retro-fun site, than a protest. Most serious websites of 90's had more like light brownish background with black text with occasional small image on the side, double borders for table cells, Times font, horizontal rules, links with bold font in blue color, side-bar with navigation links, bread-crumbs at the top telling where you are now, may be also next-prev links at the bottom, and a title banner at the top.
Game sites and other "desperate-for-attention" sites had animated GIFs all over, scrolling or blinking text, dark backgrounds with bright multi-colored text in different font sizes and faces, and sound as well, looking pretty chaotic.
Professional and serious websites, yes, but there were plenty of websites on Geocities that looked very much like this. These websites may not have been the majority of the internet, but they weren't rare either.
Just browsing around on a geocities website you can find pages like https://geocities.restorativland.org/CollegePark/Lounge/3449... and https://geocities.restorativland.org/Eureka/1415/ (audio warning on both)
If anything, this retro site is a bit too modern for having translucent panels, the background not being badly tiled, and text effects being too stylish.
Got to love the GitHub issue; it shows exactly the sad state of things. Google owns the internet now and we are all chumps for even thinking there is anything open left.
Dissenting opinions will be marked as abuse!
Why not just write an XSLT implementation in JS/WASM, or compile the existing one to WASM? This is the same approach that Firefox uses for PDFs and Ruffle for Flash. That way it is still supported by the browser and sandboxed.
This already exists, and I agree that it's the best solution here, but for some reason this was rejected by the Chrome developers. I discussed this solution a little more elsewhere in the thread [0].
Killing RSS = killing the decentralized internet (blogs, podcasts, etc.) = empowering centralized platforms such as YouTube, Spotify, etc.
Youtube has pretty much always supported RSS and still does. Google killed their RSS reader, but if they wanted to kill RSS they wouldn't put it in their video platform.
When it comes to killing web technology, Google is mostly killing their own weird APIs that nobody ended up using or pruning away code that almost nobody uses according to their statistics.
> Youtube has pretty much always supported RSS and still does.
It has RSS feeds for individual channels. It does not _support_ RSS in any meaningful way.
Can you please clarify? For me, maintaining my own watch lists, that is, per channel RSS feeds, all neatly organized in my RSS aggregator's folders, is the only way to fly.
I tried to use a PHP CMS called Symfony that used XSLT back in the early to mid 2000s. Was definitely interesting and a learning curve.
A counterpoint to the idea that this is entirely Google's doing: https://meyerweb.com/eric/thoughts/2025/08/22/no-google-did-...
I think you should disclose that you work on the Google Chrome team in a post like this.
Yeah my bad; I was on the go. I'm on the Chrome team, I work on DevTools.
This is unfortunate and sad but understandable. Slightly off-topic: a friend dared me to look for a sandbox CSP bypass and I discovered one using XSLT. I reported it to Mozilla few months ago, CVE-2025-8032. https://www.mozilla.org/en-US/security/advisories/mfsa2025-5...
Google cannot kill anything on its own.
If people continue to use XML-supporting technology, these open standards will continue to thrive.
I'm sure this site will be supported eventually by the Ladybird Web browser - can't wait to switch to it next August.
Google isn’t killing XSLT. They just don’t want to support it in their browser any more. The site is misleading.
When you have 70+% browser market share, stopping support for something _is_ killing it.
It is misleading in so far that XSLT is an independent standard [1] and isn't owned by Google, so they cannot "kill it", or rather they'd have to ask W3C to mark it as deprecated.
What they can do is remove support for XSLT in Chrome and thus basically kill XSLT for websites. Which until now I didn't even know was supported and used.
XSLT can be used in many other areas as well, e.g. for XSL-FO [2]
[1] https://www.w3.org/TR/xslt-30/ [2] https://en.wikipedia.org/wiki/XSL_Formatting_Objects
You say they cannot kill it, and yet they are about to. We'll see who wins, reality or your word games.
I don't think XSLT was invented for the purpose of rendering XML into HTML in the first place. Perhaps it never should have been introduced in browsers to begin with?
I truly loved XSLT back in the day and I strongly believe it to be an ingenious technology.
And I truly believe it's time to retire this monstrosity.
My first graduate job at a large British telco involved a lot of XML...
- WSDL files that were used to describe Enterprise services on a bus. These were then stored and shared in the most convoluted way in a Sharepoint page <shudders>
- XSD definitions of our custom XML responses to be validated <grimace>
- XSLTs to allow us to manipulate and display XML from other services, just so it would display properly on Oracle Siebel CRM <heavy sweats>
> XSLT will soon enter the Google graveyard.
AFAIK the "google graveyard" is just for google products they have killed off.
Given that Google owns the web, it can be argued that any web tech killed by Google is part of the Google Graveyard
What a beautiful look. I really like websites with this design :)
Hearing about this again and again and I still need to ask: who actually uses that, and for what?
And how does it break RSS? (Which I at least heard of people using it before)
Some people used XSLT to style their RSS feeds when displaying them in the browser. An alternative is to use CSS to style the feeds. Personally I don't see why I would want styled feeds.
Show people what looping over a range looks like in XSLT, you cowards!
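For the record: XSLT 1.0 has no numeric range to iterate over, so looping from 1 to n is typically done with a recursive named template. A sketch (template and parameter names are illustrative):

```xml
<!-- XSLT 1.0 has no range loop; counting from 1 to n takes recursion. -->
<xsl:template name="loop">
  <xsl:param name="i" select="1"/>
  <xsl:param name="n"/>
  <xsl:if test="$i &lt;= $n">
    <li><xsl:value-of select="$i"/></li>
    <xsl:call-template name="loop">
      <xsl:with-param name="i" select="$i + 1"/>
      <xsl:with-param name="n" select="$n"/>
    </xsl:call-template>
  </xsl:if>
</xsl:template>

<!-- Invocation: emits <li>1</li> through <li>5</li> -->
<xsl:call-template name="loop">
  <xsl:with-param name="n" select="5"/>
</xsl:call-template>
```

XSLT 2.0 and later finally added a direct form, `<xsl:for-each select="1 to $n">`, but browsers only ever shipped 1.0.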
I used to generate a blog and tumblelog entirely from XML files using an XSLT processor, it will not be missed.
That’s a “classic”-looking site!
Lots of Comic Sans and animated GIFs (which means that I still have XSLT, I guess).
I know that XSLT can be implemented in JS (and I have used Saxon-JS, it's good!) but the loss of functionality for the XML processing instruction will be a shame.
There is nothing like it in the modern web stack; such a pity.
Please kill it, and then let's sit at a table, all of us adults, and decide what else should be killed. Maybe specify a minimum subset of modern features a browser must support. Please, let's do it; it could reignite browser competition. Projects like Ladybird should not have to implement obscure backwards-compatibility layout specs... What about the non-modern websites? The browser could ask to download an extra wasm module for opening something like https://www.spacejam.com/1996/
So sad. I love XSLT. I wish XML had been the thing instead of JSON.
> XSLT will soon enter the Google graveyard.
The google graveyard is for products Google has made. It's not for features that were unshipped. XSLT will not enter the Google graveyard for that reason.
>We must conclude Google hates XML & RSS!
Google Reader was shut down due to declining usage and Google's unwillingness to keep investing resources in the product. It's not that Google hates XML and RSS; it's that end users and developers don't use XSLT and RSS enough to warrant the investment.
>by killing [RSS] Google can control the media
The vast majority of people in the world do not get their news via RSS. It never would have taken over the media complex. There are other surfaces for news, like X, which Google does not control. Google is not the only place where news surfaces.
> Google are now trying to control LEGISLATION. With these technologies removed what is stopping Google?
It is quite a reach to say that Google removing XSLT will give them control over government legislation. They are completely unrelated.
>How much did Google pay for this support?
Google is not paying for support. These browsers essentially have revenue-sharing agreements for the traffic they provide to Google; the payments are for the traffic.
Hmm, I agree with the statement, but why does the website need to look like it is from the early '90s?
It's not dead yet, a new maintainer showed up. But, Google Chrome decided to ditch it, which is fine by me. It was a cluster fuck, similar to libxml2, but even worse.
Good old DSSSL days, sigh.
I love everything about this site. The design, the vibe, the rhetoric.. It’s a work of art!
Great neuron exercise seeing Flaming Text again
But what is XSLT? Why is it important?
These points should be addressed first on the website.
> Google pays Mozilla up to $420 million per year...
What the hell is Mozilla doing with that money? How useless are all those people?
It bothers me they can't even seem to design a user interface that looks like it came out of the last decade. Thunderbird is an even bigger mess.
At least Thunderbird makes big changes now without any big funding. Firefox on the other hand is getting.... a new mascot https://www.firefox.com/kit/
Mitchell Baker has “a family to feed”.
(IIRC her salary increased something like 10 folds over the past 15 years or so)
Edit: It has jumped from $490k[1] to $6.25M[2] from 2009 to 2024.
Edit 2: by looking the figures up, I learned that she's gone at last, good riddance (though I highly doubt her successor is going to take a 12-fold pay cut)
[1]: https://static.mozilla.com/foundation/documents/mf-2009-irs-... page 8
[2]: https://assets.mozilla.net/annualreport/2024/b200-mozilla-fo... page 8 as well.
Love the aesthetics.
they are playing us for fools!
The web site should also use terms like "arrogant priests rule the web" from browsers' attempt to kill alert/prompt: https://www.quirksmode.org/blog/archives/2021/08/breaking_th...
Also: "the needs of users and authors (i.e. developers) should be treated as higher priority than those of implementors (i.e. browser vendors), yet the higher priority constituencies are at the mercy of the lower priority ones": https://dev.to/richharris/stay-alert-d
I can't even tell if this is satire or just hyperbole.
Given that XSLT transforms XML into HTML, why has no one simply built a server side XSLT system? So these existing sites that use XSLT can just adopt that, and not need to rely on browser support.
I remember Gentoo Linux had all its official documentation in a system just like that, maybe 15-20 years ago. It was written and stored as XML, XSLT-processed and rendered into HTML on the webservers.
They moved everything into a wiki later.
EDIT: Oh, their developers' manual is still done like that: https://github.com/gentoo/devmanual into https://devmanual.gentoo.org/
Server-side XSLT tools have existed for 25 years or so. The people complaining about this want existing websites using XSLT on the client to continue to work without changes.
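Right — the classic route is `xsltproc` (from libxslt itself) at build time, and the same transform is a few lines in any language with XSLT bindings. A sketch using Python's lxml (a third-party package, not the stdlib; feed content here is made up):

```python
# Server-side XSLT sketch: apply a stylesheet to an XML document,
# producing the HTML a browser would otherwise have rendered client-side.
from lxml import etree

XSL = b"""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/feed">
    <html><body>
      <h1><xsl:value-of select="title"/></h1>
    </body></html>
  </xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.XML(XSL))       # compile the stylesheet once
doc = etree.XML(b"<feed><title>My Feed</title></feed>")
html = str(transform(doc))                   # serialize the result tree
print(html)
```

The equivalent one-liner at deploy time would be something like `xsltproc feed.xsl feed.xml > feed.html` — but as the comment above notes, neither helps the sites that depend on the browser doing this on the client.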
Now that XSLT has the power of Comic Sans on its side, I don't know what could possibly go wrong anymore.
Add XSLT to your website and weblog today before it is too late!
I cannot tell if this is satire or not, very well done
This is propaganda.
Who on earth approved .rip as a TLD? Stupid
To make the web safer, they will replace simple static web pages with remote code execution on the user's machine. Yet another “fuck you” to people who don't want to shove JavaScript in everything. God forbid I serve a simple static site to people. Nonono. XSLT is fantastic for people who actually want to write XML documents like the good old days, or add styling to Atom feeds.
Edit: and for a slightly calmer response: Google has like, a bajillion dollars. They could address any security issues with XSLT by putting a few guys on making a Rust port and have it out by next week. Then they could update it to support the modern version in two weeks if it being out of date is a concern. RSS feeds need XSLT to display properly, they are a cornerstone of the independent web, yet Google simply does not care.
It's truly troubling to see a trillion dollar corporation claim that the reason for removing a web browser feature that has existed since the 90s is because the library powering it was unmaintained for 6 months, and has security issues. The same library that has been maintained by a single developer for years, without any corporate support, while corporations reaped the benefits of their work.
Say what you will about how this is technically allowed in open source, it is nothing short of morally despicable. A real https://xkcd.com/2347/ situation.
It would cost Google practically nothing to step up and fix all security issues, and continue maintenance if they wanted to. To say nothing of simply supporting the original maintainer financially.
But IMO the more important topic within this whole saga is that libxml2 maintenance will also end this year. Will we also see support for XML removed?
> Say what you will about how this is technically allowed in open source, it is nothing short of morally despicable. A real https://xkcd.com/2347/ situation.
I think https://xkcd.com/1172/ is more fitting.
> But IMO the more important topic within this whole saga is that libxml2 maintenance will also end this year. Will we also see support for XML removed?
No, because xml has meaningful usage on the web. The situations are very different.
> No, because xml has meaningful usage on the web. The situations are very different.
They're really not. If "meaningful usage" was a factor, Google should stop maintaining AMP, USB, WebTransport, etc.[1]
If security and maintenance are a concern, then they should definitely also remove XML, since libxml2 has the same issues as libxslt.
Google says:
> Similar to the severe security issues in libxslt, severe security issues were recently reported against libxml2 which is used in Chromium for parsing, serialization and testing the well-formedness of XML. To address future security issues with XML parsing In Chromium we plan to phase out the usage of libxml2 and replace XML parsing with a memory-safe XML parsing library written in Rust
Perhaps there are some Rust gurus out there that can deliver a XSLT crate in a similar fashion, which other folks can then integrate?
The problem seems to be that the current libxslt library is buggy because it is written in C, a memory-unsafe language (use-after-free bugs etc.).
[BTW, Chris Hanson's old book "C: Interfaces and Implementations" demonstrated how to code in C in a way that avoids use after free: use pointers to pointers instead of pointers and set them to zero upon free-ing memory blocks; e.g.
    /* source: https://github.com/drh/cii/blob/master/src/arena.c */
    void Arena_dispose(T *ap) {
        assert(ap && *ap);  /* ap must point to a live arena pointer */
        Arena_free(*ap);    /* release the arena's memory chunks */
        free(*ap);          /* free the arena header itself */
        *ap = NULL;         /* avoid use after free */
    }
]
> Perhaps there are some Rust gurus out there that can deliver an XSLT crate in a similar fashion, which other folks can then integrate?
Even if one existed right now, I would be surprised if it changed Google's mind.
Agreed. Because this decision has nothing to do with safety or low usage, like they claim. It's just another example of a corporation abusing their dominance to shape the web according to their interests.
> They're really not. If "meaningful usage" was a factor, Google should stop maintaining AMP, USB, WebTransport, etc.[1]
Meaningful usage being a factor does not mean it is the only factor.
I think it goes without saying that google isn't going to remove support for xml (including things like SVG) anytime soon.
There are more xml parsers than just that and it’s a smaller scope to rewrite or maintain.
> Will we also see support for XML removed?
Hopefully YES.
Let the downvotes come, I know there are XML die hard fans here on HN.
[dead]
Meh. RSS was great. XSLT was always awful. Javascript does everything XSLT did, so much better. Let it die.
Wow you got negged so hard, likely by people that have never really written XSLT code.
I have and I've always hated it. I still to this day will never touch an IBM DataPower appliance, though I'm more than capable because of XSLT.
They (IBM) even tried to make it more appealing by allowing Javascript to run on DataPower instead of XSLT to process XML documents.
It's a crap language designed for XML (which is too verbose) and there are way better alternatives.
JavaScript and JSON won because of their simplicity. The JavaScript ecosystem, however (Node.js, npm, yarn, etc.), is what takes away from an otherwise excellent programming language.