Really interesting read (from Varun Gandhi) about optimizers and transparent vs observable, reliable behavior: https://typesanitizer.com/blog/rethink-optimizers.html
I've had similar thoughts when working on weval and Cranelift -- sometimes one wants to assert that a load-bearing simplification really does happen, and perhaps also to have a more systematic way of directing compiler effort. (Something like Tcl tool scripting in the EDA world, maybe?) We have infra for testing, but it's not user-facing; maybe that should change...
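A minimal sketch of what such an assertion could look like: FileCheck-style pattern matching over textual IR, here checking that a multiply-by-8 was strength-reduced to a shift. The CLIF-style snippet and the `assert_optimized` helper are illustrative only, not weval/Cranelift's actual test infrastructure:

```rust
// Hypothetical "load-bearing simplification" check: the optimized IR
// must contain every required pattern and none of the forbidden ones.
fn assert_optimized(ir: &str, must: &[&str], must_not: &[&str]) {
    for p in must {
        assert!(ir.contains(p), "expected pattern missing: {p}");
    }
    for p in must_not {
        assert!(!ir.contains(p), "forbidden pattern present: {p}");
    }
}

fn main() {
    // Illustrative CLIF-style output: v0 * 8 rewritten as v0 << 3.
    let optimized = "
function %f(i32) -> i32 {
block0(v0: i32):
    v1 = ishl_imm v0, 3
    return v1
}";
    // Fails loudly if the multiply survived -- no silently ignored hint.
    assert_optimized(optimized, &["ishl_imm"], &["imul"]);
    println!("load-bearing simplification verified");
}
```

The point of the sketch is the failure mode: instead of a hint the optimizer may drop, the build breaks when the expected rewrite doesn't happen.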
@cfallin I hit the same problem with Postgres, went to the IRC channel and got a rather condescending reply that you can't tweak the optimizer because it knows best (slightly paraphrased)
I later had to add HTTP basic auth to that section of a website since it was a DoS vector. I hate Postgres
@cfallin Cool post! This section resonated with me:
> you do not really want the best thing you can do to just be a “hint” which the optimizer is free to silently ignore.
That's part of the reason I enjoyed the ISPC model for writing data-oriented code. The number of ways in which optimization passes can silently fail is substantially smaller. Once SIMD is explicit in the language, it's easier to build a mental model -- and less time is spent paranoidly inspecting the asm output of my Rust code.
@cfallin great post indeed, thanks for the link!
yeah, I was also thinking about the fact that a lot of compiler/VM/related projects *do* have internal tools for testing and introspecting this stuff. But how to make those tools usable and useful to the outside world is pretty unclear to me.