Transforms happen at coherence boundaries. This reflects the fact that agency is about crossing membranes.
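A minimal sketch of that idea, with entirely hypothetical names (Membrane, receive, emit are illustrations, not anything in the actual VM): the transform lives at the boundary itself, so data is changed exactly when it crosses, never inside the coherent interior.

    # Hypothetical sketch: transforms fire only where data crosses
    # a membrane, not inside the coherent interior.
    class Membrane:
        def __init__(self, ingress, egress):
            self.ingress = ingress   # transform applied on the way in
            self.egress = egress     # transform applied on the way out
            self.interior = []       # coherent state behind the boundary

        def receive(self, signal):
            # Crossing inward is the act that triggers the transform.
            self.interior.append(self.ingress(signal))

        def emit(self, value):
            # Crossing outward likewise transforms.
            return self.egress(value)

    # Example: parse raw text on the way in, serialize on the way out.
    m = Membrane(ingress=str.split, egress=" ".join)
    m.receive("agency crosses membranes")
    print(m.emit(m.interior[0]))   # -> "agency crosses membranes"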
Listening resembles an event-driven model, but there isn't a pre-determined set of events you can listen to. Instead it's more like stored procedures (triggers) in a SQL database: you specify what changes in the scape/state/data you want to match on, and those are what you "hear".
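A sketch of the contrast, under assumed names (Scape, listen, put are hypothetical): listeners register an open-ended predicate over changes, the way a SQL trigger fires on rows matching its condition, rather than subscribing to entries in a fixed event catalog.

    # Hypothetical sketch: listeners match on arbitrary state changes,
    # not on a pre-determined event list.
    class Scape:
        def __init__(self):
            self.state = {}
            self.listeners = []   # (predicate, callback) pairs

        def listen(self, predicate, callback):
            self.listeners.append((predicate, callback))

        def put(self, key, value):
            old = self.state.get(key)
            self.state[key] = value
            change = {"key": key, "old": old, "new": value}
            # Any listener whose predicate matches this change "hears"
            # it, like a SQL trigger firing on a matching row.
            for predicate, callback in self.listeners:
                if predicate(change):
                    callback(change)

    s = Scape()
    s.listen(lambda c: c["key"].startswith("temp"),
             lambda c: print("heard:", c))
    s.put("temperature", 21.5)   # matches, so the listener fires
    s.put("humidity", 0.4)       # no match, silence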
What's the equivalent, in our virtual machine, of the stack-frame instructions used to implement calling (and returning from) a procedure on more traditional CPUs?
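For reference, a toy model of what those instructions do on a conventional machine (hypothetical CALL/RET helpers for illustration, not a description of our VM): a frame records where to resume and holds the callee's locals, and returning pops it.

    # Toy model of conventional CALL/RET semantics.
    class Frame:
        def __init__(self, return_addr):
            self.return_addr = return_addr
            self.locals = {}

    call_stack = []

    def CALL(target, pc):
        call_stack.append(Frame(return_addr=pc + 1))  # save resume point
        return target                                  # jump to procedure

    def RET():
        return call_stack.pop().return_addr            # resume the caller

    pc = 0
    pc = CALL(target=100, pc=pc)   # enter procedure at address 100
    pc = RET()                     # back at the instruction after the call
    print(pc)                      # -> 1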
The possibility of implementing/optimizing scape processing on GPUs, i.e. each element of the scape has a processor assigned to it, as a pixel might in a GPU, along with address handling for resolution scaling, etc.
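A sketch of the analogy under stated assumptions (run_kernel and sample are hypothetical; the sequential loop merely stands in for what a GPU would run in parallel): each scape element gets its own kernel invocation, like one shader invocation per pixel, and the address math rescales coordinates between resolutions the way texture sampling does.

    # Hypothetical sketch: one (conceptual) processor per scape element.
    def run_kernel(scape, kernel):
        # A GPU would launch these invocations in parallel.
        return [kernel(i, v) for i, v in enumerate(scape)]

    def sample(scape, width, x_hi, hi_width):
        # Address handling for resolution scaling: map a coordinate in
        # the high-resolution view back to the stored resolution.
        x = x_hi * width // hi_width
        return scape[x]

    scape = [0, 1, 2, 3]
    print(run_kernel(scape, lambda i, v: v * 2))   # -> [0, 2, 4, 6]
    print(sample(scape, 4, x_hi=6, hi_width=8))    # -> 3 (scape[3])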
The underlying data engine, with smart caching, enabling true cloud architectures.
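One way to read "smart caching" here, sketched with hypothetical names (CachingNode, fetch_remote, and the content-addressing scheme are assumptions): each node serves reads from a local cache and falls back to the wider cloud on a miss, and because keys are content hashes any cached copy anywhere is interchangeable with the original, so the network of caches behaves like one coherent data engine.

    # Hypothetical sketch: local cache backed by the wider cloud.
    import hashlib

    def address(value: bytes) -> str:
        # Content addressing: the hash IS the key.
        return hashlib.sha256(value).hexdigest()

    class CachingNode:
        def __init__(self, fetch_remote):
            self.cache = {}
            self.fetch_remote = fetch_remote   # e.g. ask peer nodes

        def get(self, key):
            if key not in self.cache:          # miss: go to the cloud
                self.cache[key] = self.fetch_remote(key)
            return self.cache[key]             # hit: purely local

    store = {address(b"hello"): b"hello"}      # stand-in for remote peers
    node = CachingNode(fetch_remote=store.__getitem__)
    k = address(b"hello")
    print(node.get(k))   # remote fetch, then cached
    print(node.get(k))   # served locally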