Poprocks wrote:
> On 2020-08-13, Axel Berger wrote:
> > That's not the main point. Very few sites come even close to being
> > valid. 90% of the bloat (and the same share of the malware entry points)
> > comes from trying to second-guess whatever the author might have meant
> > by the utter nonsense he wrote.
>
> I don't agree with that at all. I'd say that accounts for some of the
> overhead, but a very small percentage.
Indeed, and a site that isn't valid HTML according to the spec doesn't take
materially more compute resources to parse. Maybe it's more ambiguous and
the results might differ between browsers, but it's not slower. HTML isn't
the problem; Javascript is.
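As a rough illustration (plain browser Javascript, made-up markup): the HTML
parsing algorithm includes error recovery, so mis-nested or unclosed tags
still come out as a DOM tree in a single pass.

  // Feed deliberately broken markup to the parser; it still builds a
  // tree, with the errors repaired by the parser's own recovery rules.
  const doc = new DOMParser().parseFromString(
    '<p>unclosed <b>bold <i>mis-nested</b></i>', 'text/html');
  // Serialising the result shows the repaired, well-formed tree.
  console.log(doc.body.innerHTML);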
(I put sites like Facebook and YouTube through the W3C validator. They
didn't pass, but the problems were mostly applying name="..." to elements
that shouldn't have it according to the spec. That's not going to slow down
parsing - but those attributes are there so Javascript can find and
manipulate the DOM.)
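For example (a sketch - the element name and class here are invented, not
what those sites actually use): the spec only allows name="..." on certain
elements, but pages hang it on others because scripts can then find them
cheaply.

  // Hypothetical markup: <div name="video-player"> ... </div>
  // Invalid per the validator, but trivially reachable from script:
  const players = Array.from(document.getElementsByName('video-player'));
  players.forEach(el => el.classList.add('expanded'));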
> The real issue is that webpages have stopped being *pages* and are now
> essentially applications in and of themselves, all run through slow,
> interpreted languages.
And some of those scripts are doing intrinsically complicated things, like
running a live auction of the user's eyeballs to the highest-bidding
advertiser - all while the page loads.
Block those things and page load times come down a lot, but there's still
tons of complicated Javascript in something like Google Docs, which you
can't block without breaking it.
To the OP: there are browsers that have a full WebKit engine (so the JS
works) but whose UI is written in C(++) rather than Javascript, which makes
them faster and gives them a lower footprint. Look at Otter Browser,
QtWebKit and some others.
Theo
--- SoupGate-Win32 v1.05
* Origin: Agency HUB, Dunedin - New Zealand | FidoUsenet Gateway (3:770/3)