A macroscopic view of the work that compilers do is that it's simply a translation of semantics that are ergonomic for humans into semantics that are ergonomic for machines. From this view, we get a 2-Semantic model. To put it more concretely, we have one set of semantics for source code (let's call it HS, for high-level semantics) and one set for executing on hardware (let's call it LS, for low-level semantics).
One of the problems of computer science (and also a reason why it's exciting) is that the field is constantly changing, which means that both HS and LS designs change. With the 2-Semantic model, we are constantly throwing away old code that works, either because it makes assumptions about hardware that are no longer true, or because it's written in old languages that don't interoperate with new ones.
Many modern languages have moved to more of a 3-Semantic model that inserts a semantic layer between HS and LS, which we can call MS. We've entered an era of virtual machines and abstract runtimes. This buffer layer makes it easier to swap out the semantics on the human side or the machine side while still preserving the usefulness of legacy code. But what about the design of the semantics for this middle layer?
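To make the layering concrete, here is a minimal sketch in Haskell. Everything in it (the names `Expr`, `Instr`, `compile`, `run`) is invented for illustration and corresponds to no real VM: `Expr` plays the role of HS, the stack-machine `Instr` plays MS, and a small interpreter stands in for LS.

```haskell
-- HS: a tree-shaped expression language, ergonomic for humans.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr

-- MS: a flat stack-machine instruction set, the buffer layer.
data Instr
  = Push Int
  | IAdd
  | IMul

-- HS -> MS: the front end only needs to know about MS, never LS.
compile :: Expr -> [Instr]
compile (Lit n)   = [Push n]
compile (Add a b) = compile a ++ compile b ++ [IAdd]
compile (Mul a b) = compile a ++ compile b ++ [IMul]

-- MS -> LS: a Haskell interpreter stands in for the back end here;
-- a real system would lower these instructions to hardware instead.
run :: [Instr] -> [Int] -> Int
run []            (x : _)        = x
run (Push n : is) stack          = run is (n : stack)
run (IAdd   : is) (y : x : rest) = run is (x + y : rest)
run (IMul   : is) (y : x : rest) = run is (x * y : rest)
run _             _              = error "ill-formed MS program"

main :: IO ()
main = print (run (compile (Mul (Add (Lit 1) (Lit 2)) (Lit 3))) [])  -- 9
```

The buffering shows up in the types: `compile` targets only `[Instr]`, so a new source language means a new `compile` and new hardware means a new `run`, but neither change forces the other.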
There are many modern choices for MS. Most of them come from a clear Turing-machine heritage: the JVM, the CLR, LLVM, and so on. Some functional languages choose a lambda-calculus-based MS, although there doesn't seem to be any standardized MS shared between functional languages (hopefully GRIN will change that).
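A toy contrast makes the two heritages visible. Below, the same source program, `let x = 1 + 2 in x * x`, is encoded first as a hypothetical stack-machine MS (imperative instructions over a stack and numbered local slots, in the spirit of JVM bytecode) and then as a hypothetical lambda-calculus MS (one expression, where sharing is a `let` binding). Neither encoding is a real IR; both are sketches for this post.

```haskell
-- Turing-machine heritage (JVM/CLR/LLVM style): a sequence of
-- imperative instructions over a stack and numbered local slots.
data StackInstr
  = PushI Int  -- push a constant
  | AddI       -- pop two values, push their sum
  | MulI       -- pop two values, push their product
  | Store Int  -- pop the top of the stack into a local slot
  | Load Int   -- push a local slot onto the stack
  deriving Show

squareSum :: [StackInstr]
squareSum = [PushI 1, PushI 2, AddI, Store 0, Load 0, Load 0, MulI]

-- Lambda-calculus heritage (in the spirit of GHC Core or GRIN's
-- input language): the whole program is one expression, and sharing
-- is a let binding rather than a store/load pair.
data Term
  = LitT Int
  | VarT String
  | LetT String Term Term
  | PrimT String Term Term  -- a primitive binary operation
  deriving Show

squareSum' :: Term
squareSum' =
  LetT "x" (PrimT "add" (LitT 1) (LitT 2))
           (PrimT "mul" (VarT "x") (VarT "x"))

main :: IO ()
main = print squareSum >> print squareSum'
```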
Semantics that are easy for humans to understand come with human baggage, and semantics that are easy for machines to understand come with machine baggage. What about developing semantics that carry as little baggage as possible? That seems to be the domain of mathematics, although the mathematical community has its own baggage.
I would like to see an MS that lasts 50-100 years and becomes almost universal among languages. I don't think any of the current layers are viable for that sort of longevity or mass adoption.
So what are the problems with the current MS designs?
- They don't capture semantics from HS that are relevant to optimization in LS (see the sketch after this list)
- They expose too many low-level details that aren't relevant to high-level languages
- They are more complex than they need to be
- They are biased towards particular HS designs
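To make the first bullet concrete, here is a small Haskell illustration. At the HS level the law `map f (map g xs) = map (f . g) xs` holds by the semantics of `map`, so fusing the two passes below is free. After lowering to a loop-and-memory MS, the same rewrite must be re-discovered through loop and alias analysis, if it can be recovered at all. This is a generic illustration, not a claim about any particular IR's capabilities.

```haskell
-- HS level: the fusion law holds outright, so these two functions
-- are interchangeable, and a compiler may pick the one-pass form.
twoPasses :: [Int] -> [Int]
twoPasses xs = map (* 2) (map (+ 1) xs)

onePass :: [Int] -> [Int]
onePass xs = map ((* 2) . (+ 1)) xs

-- MS level: once lowered to a Turing-machine-style MS, the same
-- program is two loops writing through memory; proving they can be
-- fused means re-deriving, via loop and alias analysis, a fact the
-- HS semantics stated for free.

main :: IO ()
main = print (twoPasses [1, 2, 3] == onePass [1, 2, 3])  -- True
```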