OK, but this doesn't really answer the question. If setTimeout() could be abused by creating a busy loop, why couldn't the same be done with postTask()?
Or in other words, why is the "protect web devs from themselves" camp ok with only "securing" one API but not the other?
As for "put up interventions to nudge devs to use the right APIs": What ever happened to "paving the cowpaths"? Wasn't that the big idea in WHATWG circles originally? So if web devs are familiar with setTimeout() and tend to use it for all kinds of things, why not improve that API instead of trying to coax devs into using a new one?
Because browser developers still have a major incentive to care about not misusing the resources (CPU/battery) of browser users, and website developers very clearly do not.
This is the natural consequence of a platform having high capability but a low barrier to entry. Conscientious use of resources cannot be assumed and is in fact the exception rather than the rule, and so guardrails must be put in place.
This is an enormous problem with software in general. IMO it's probably because software has been abstracted into the stratosphere to the point that most developers aren't at all aware of resources or even the machine it's running on. That's someone else's problem. I really hate it.
While the problem is very real, I don't think it's the fault of abstractions or even developers. If you have to fight your product manager for the authorization to spend a little time using resources correctly, it is probably because there's no organization-wide incentive to reduce resource usage of a web application, unless you're called Apple, Google or Mozilla.
What Andy giveth, Bill taketh away
Of which, the biggest example is shipping Chrome with the application.
Whenever I refactor an endpoint and take its p99 from 1 minute to 1 second, I think about how a 4K video being uploaded undoes all of that progress.
Since the result is the same to the end user regardless of what I say here, I know I will not sway many people but...
As someone who has done what I will call "corporate web dev" for a long time, it's almost never the actual site or web app that is abusing your resources. It's all the junk 3rd-party scripts that the business and marketing people force onto it.
These scripts are intended to be "low code" solutions. Even if the developers working at those places mean well, nobody reads their docs, least of all the marketing goons with unfettered access via a CSP nonce, copy-pasting whatever example <script> tags they think they need to inject to make the thing go.
If you ever want a laugh and have tons of free time you should find one of those sites loaded with these kinds of scripts that an ad blocker would normally get rid of, read the docs for how those scripts were supposed to be used, and bask in the insane stupidity and cargo cult nonsense causing duplicate events and many redundant http calls and websockets... and then turn your ad blocker back on.
You may then ask yourself sensible questions such as: "doesn't all this royally fuck up their analytics data?" and "does some poor soul making reports from that mess ever clean it up?". The answer is yes it does, and no they don't. They instead will try to play the blame game and claim it's the underlying site or web app causing the issues until they find another job. There's a lot of churn in that space.
Wondering why a rather simple form didn't work, I viewed the console, clicked the error, and found something querying an API with a var called neuralnetwork. I was quite satisfied with myself for finding the problem in just a few seconds. Extra points for naming the variable after the technology rather than what it contains. Imagine naming your form data not something boring like streetname but mysql, or just database, or perhaps API!?
I asked about this a few years ago on SO and there is some good info: https://stackoverflow.com/q/61338780/265521
E.g. Chrome has this comment:
// Chromium uses a minimum timer interval of 4ms. We'd like to go
// lower; however, there are poorly coded websites out there which do
// create CPU-spinning loops. Using 4ms prevents the CPU from
// spinning too busily and provides a balance between CPU spinning and
// the smallest possible interval timer.
At the time, at least, the 4ms only kicked in after 5 levels of nesting, as mentioned in the article, but there was still a 1ms limit before that. Seems like that has been removed though, based on jayflux's comment.
I remember reading that high precision timers can be used for browser fingerprinting and/or for timing attacks, but I didn't find anything specifically about setTimeout()/setInterval() after searching a bit.
Also, loosening the accuracy of timers allows the system to optimize CPU power states and save battery. Again, not sure if that's related here.
Maybe someone else here can add more detail.
You might be referring to the Spectre mitigation changes:
Timer precision from performance.now and other sources is reduced to 1ms (r226495)
https://webkit.org/blog/8048/what-spectre-and-meltdown-mean-...
Although you can claw that precision back by enabling cross-origin isolation for your site, at least in Firefox and Chrome, which both quantize high res timers to 100μs in non-isolated contexts but only 5μs in isolated contexts. I'm not sure exactly what Safari does.
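In case it's useful, a minimal sketch of what opting into cross-origin isolation looks like (the exact quantization numbers above vary by browser and version):

// The server must send these headers on the top-level document:
//   Cross-Origin-Opener-Policy: same-origin
//   Cross-Origin-Embedder-Policy: require-corp

// Page script can then check whether isolation actually took effect
// (and with it, the finer timer precision):
if (self.crossOriginIsolated) {
  console.log('isolated: high-res timers are coarsened less here');
} else {
  console.log('not isolated: timers are quantized more aggressively');
}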
That's high precision clocks (aka performance.now(), with sub-millisecond resolution), not timers.
The precision of setTimeout has never been high. It kind of maps to the OS scheduler, and OS schedulers often enforce their own minimum timeouts (Windows has defaulted to 15.625 ms resolution for a very long time, and the newer high resolution timers max out at 1 ms across most operating systems).
Don’t unfocused tabs also get throttled? Otherwise we’d all be melting our computers with the 40 open tabs we have. For some of us that’s a slow day.
As someone who wrote an entire indexedDB wrapper library just to understand the "micro task" issues that are referenced in this blog post, and THEN dedicated a couple hundred words of my readme to explaining this footgun[0], I am so glad to hear about `scheduler.postTask`. That's new information to me!
Thanks for including that example!
[0] https://github.com/catapart/record-setter?tab=readme-ov-file...
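For anyone who hasn't hit it: the footgun is that an IndexedDB transaction auto-commits as soon as control returns to the event loop with no pending IDB requests, so awaiting any non-IDB promise mid-transaction kills it. A rough sketch, assuming an open `db` handle and an illustrative "records" object store:

async function saveTwoRecords(db) {
  const tx = db.transaction('records', 'readwrite');
  const store = tx.objectStore('records');

  store.put({ id: 1, value: 'a' }); // fine: issued while tx is active

  await fetch('/anything');         // tx auto-commits here: the event
                                    // loop saw no pending IDB work

  store.put({ id: 2, value: 'b' }); // throws TransactionInactiveError
}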
The story of web development:
"For the time being, I’ll just do what most web devs do: choose whatever API accomplishes my goals today, and hope that browsers don’t change too much in the future."
It's a strategy that's worked out very well. Standards groups and browsers prioritize backwards compatibility very highly. It's hard to remember any real compatibility breakages in standardized HTML/CSS/JS features (i.e. not third-party plugins like Flash).
Challenge accepted.
https://developer.mozilla.org/en-US/docs/Glossary/blink_elem...
I guess it's the end of days, if tags have stopped blinking.
> And the beast shall come forth surrounded by a roiling cloud of vengeance. The house of the unbelievers shall be razed and they shall be scorched to the earth. Their tags shall blink until the end of days. — from The Book of Mozilla, 12:10
Have I not always heard that timeout-based callbacks always run at or after the timeout, but never before?
“Do this {} at least Xms from now”, right?
Sure, but the nuance here is that there is an (otherwise usable) range of values for which the timers are only ever "after" instead of "at or after". I.e. the lower bound is artificially increased while the upper bound remains unlimited.
I don’t think “artificially increased” is correct. See your sibling. If the runtime waits until expiry, and only then adds the task to the end of the work queue, there’s no point at which any delayed work could happen at expiry except the work to place it on the end of (an empty) queue.
Any busy runtime (e.g. one with lots of parallel tasks, plus anything running less than optimally) will have a delay.
Artificially increased is what's happening when you request a timeout of 0 and the browser always makes it 4 ms or more.
Imagine this code:
let value = 0;
(function tick () {
  value += 1;
  console.log(value);
  setTimeout(tick, 1);
})();
If you let `tick()` run for 1 second, what would you expect `value` to be? Theoretically, it should be around 1,000, because all you're doing is running a function that increments `value` and then puts itself back onto the execution queue after a 1 ms delay. But because of the 4 ms minimum that browsers enforce, you'll never see `value` go above 250; the delay is being artificially increased.
Yeah, exactly. Timeout-based callbacks register a timer with the runtime, and when the timer is up, the callback gets added to the end of the task queue (so once the timeout expires, you still have to wait for the current loop iteration to finish executing before your callback runs).
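A quick way to see the "at or after, never before" behavior for yourself (timings are approximate):

const start = performance.now();

setTimeout(() => {
  // Requested 0ms, but the callback can't preempt the busy loop below,
  // so this logs roughly 100ms, not 0ms.
  console.log(`fired after ${performance.now() - start}ms`);
}, 0);

// Block the main thread for ~100ms before yielding to the event loop.
while (performance.now() - start < 100) { /* spin */ }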
This has always been my understanding as well: it schedules a function to run at a time, but it won't pre-empt something that is blocking when it needs to run, so there's always the possibility that it has to wait for another function to finish first. A timeout is not a guarantee.
I always kinda figured that any "timer" in any language would technically need to work that way unless you're running a very fancy real-time system because multitasking, especially in high load scenarios, means there just may not be clock cycles available for your task at the exact millisecond you set something to execute at.
So it is with JS; I kinda figured EVERYTHING would need to be heavily throttled in a browser in order to respect the device running that browser.
> Even if you’ve been doing JavaScript for a while, you might be surprised to learn that setTimeout(0) is not really setTimeout(0). Instead, it could run 4 milliseconds later:
Is this still the case? Even with this change? https://chromestatus.com/feature/4889002157015040
I think it's still the case. The 4ms happens if you call setTimeout nested several times. I don't know the exact limit. But it's 5-ish times where that kicks in IIRC.
Edit: Here's the MDN bit on that, I was correct:
https://developer.mozilla.org/en-US/docs/Web/API/Window/setT...
> browsers will enforce a minimum timeout of 4 milliseconds once a nested call to setTimeout has been scheduled 5 times.
And the link from there to the spec about that:
https://html.spec.whatwg.org/multipage/timers-and-user-promp...
> If nesting level is greater than 5, and timeout is less than 4, then set timeout to 4.
I think that change is talking about the minimum timeout for the first 5 nested calls to `setTimeout(0)`.
Previously the first 5 would take 1ms, and then the rest would take 4ms. After that change the first 5 take 0ms and the rest take 4ms.
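If you want to check what your browser does, here's a rough measurement sketch (expect some noise from scheduling jitter):

let last = performance.now();
let level = 0;

(function probe() {
  const now = performance.now();
  console.log(`nesting level ${level}: ${(now - last).toFixed(2)}ms`);
  last = now;
  if (++level < 10) setTimeout(probe, 0);
  // Typical output: near-0ms gaps at first, then ~4ms once the nesting
  // level passes the threshold from the HTML spec quoted above.
})();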
This was news to me but incredibly interesting: https://developer.mozilla.org/en-US/docs/Web/API/Scheduler/p...
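For anyone else seeing it for the first time, basic usage looks roughly like this (priorities per the MDN page above; feature-detect, since support isn't universal, and the two callbacks here are placeholders):

if ('scheduler' in window) {
  // Defer non-urgent work; no setTimeout-style nesting clamp applies.
  scheduler.postTask(renderAnalyticsWidget, { priority: 'background' });

  // postTask returns a promise resolving to the callback's return value.
  scheduler.postTask(computeSomething, { priority: 'user-visible' })
    .then((result) => console.log(result));
}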
I was aware of that for browsers; is this also true for Electron?
Electron is just a packager for Chromium, with the minimum code necessary to achieve that objective.
Technically it is a browser. Unless I am making a serious logic error here, it should be applicable.
OK, but is the Electron runtime freed of this throttling limitation?
Yes, that is the question. There seems to be no good reason to keep that limitation in the packaged browser.
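If it helps: Electron does expose a switch for at least part of this, the backgroundThrottling web preference on BrowserWindow. A main-process sketch, to the best of my understanding:

// Disable Chromium's background timer throttling for this window.
// (Whether you should is another question; it reintroduces exactly
// the CPU/battery cost the throttling exists to avoid.)
const { BrowserWindow } = require('electron');

const win = new BrowserWindow({
  webPreferences: {
    backgroundThrottling: false,
  },
});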
Background JavaScript processes can really add up across a lot of browser tabs, each firing up to stay "smart" or "current" as if it were the only tab in its user's life.
I'm not sure how many people struggle with browser tabs gone wild. Limiting JavaScript can have varying degrees of success, since it's relative to how the page/site/app is built to begin with.