Denormalisation could just be an illusion if we had a system that tracked every place where a copy of a piece of data resides and took responsibility for updating them all. The same system would be useful for caching: it would keep track of every cache holding a piece of information and keep them up to date.
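A minimal sketch of that tracking idea, assuming a hypothetical `ReplicaTracker` class (the names and callback shape are my invention, not a real API): each copy of a key registers a writer callback, and an update fans out to every registered copy so denormalised data and caches never drift.

```python
from collections import defaultdict

class ReplicaTracker:
    """Tracks every location (table, cache, index) that holds a copy of a
    record, and fans out updates to all of them. A hypothetical sketch."""

    def __init__(self):
        # key -> list of (store name, writer callback)
        self._locations = defaultdict(list)

    def register(self, key, store_name, write_fn):
        """Record that `store_name` holds a copy of `key`; `write_fn`
        knows how to write the new value into that store."""
        self._locations[key].append((store_name, write_fn))

    def update(self, key, value):
        """Write the new value to every registered copy of `key`."""
        for _store_name, write_fn in self._locations[key]:
            write_fn(key, value)

# Usage: two plain dicts stand in for a primary table and a cache.
primary, cache = {}, {}
tracker = ReplicaTracker()
tracker.register("user:1", "primary", lambda k, v: primary.__setitem__(k, v))
tracker.register("user:1", "cache", lambda k, v: cache.__setitem__(k, v))
tracker.update("user:1", {"name": "Ada"})
```

In a real system the callbacks would be writes to actual datastores, and the fan-out would need ordering and failure handling, but the bookkeeping is the same.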
I call this the data maintenance system: a frontend to your data that stores it in underlying systems. It should deploy various indexing algorithms, configurable for the data in question, and it should be no stranger to indirection in data lookup and retrieval. The data maintenance system is aware of other systems and the data inside them: you register the presence of a MySQL database and describe the structure of the data inside it, and do the same for Redis, PostgreSQL, and any other databases you have. The system then works out how to fetch data from the underlying datastore. Have an Elasticsearch cluster that needs to be populated with data from the database? That can be scheduled too.
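The registration-and-scheduling idea might look like this sketch; the `Datastore` and `DataMaintenanceSystem` classes and their method names are hypothetical, chosen only to illustrate the shape of the frontend:

```python
class Datastore:
    """Describes one underlying system and the structure of its data."""
    def __init__(self, name, kind, schema):
        self.name = name      # e.g. "orders-db"
        self.kind = kind      # e.g. "mysql", "redis", "elasticsearch"
        self.schema = schema  # declarative description of what it holds

class DataMaintenanceSystem:
    """A hypothetical frontend that knows every registered datastore."""

    def __init__(self):
        self._stores = {}
        self._sync_jobs = []

    def register(self, store):
        """Tell the system a datastore exists and what data is inside it."""
        self._stores[store.name] = store

    def describe(self, name):
        return self._stores[name].schema

    def schedule_sync(self, source, target, fields):
        """Declare that `target` must be kept populated from `source`,
        e.g. an Elasticsearch cluster fed from a MySQL database."""
        self._sync_jobs.append((source, target, tuple(fields)))
        return len(self._sync_jobs)  # a job id

# Usage: register a MySQL source and a search cluster fed from it.
dms = DataMaintenanceSystem()
dms.register(Datastore("orders-db", "mysql", {"orders": ["id", "total"]}))
dms.register(Datastore("search", "elasticsearch", {"orders": ["id", "total"]}))
job_id = dms.schedule_sync("orders-db", "search", ["id", "total"])
```

The point of the declarative schema is that the system, not the caller, works out how to fetch and copy the data.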
I should be able to instantiate data structures such as a B-tree or a radix trie to index records, and configure them, or shard data where necessary.
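As one instantiable index structure, here is a minimal prefix trie; for brevity it is uncompressed, whereas a production radix trie would merge single-child chains, but the interface (insert, point lookup, prefix scan) is the same:

```python
class Trie:
    """An uncompressed prefix trie mapping string keys to values.
    A radix trie would compress chains of single-child nodes."""
    _MISSING = object()  # sentinel so None can be a stored value

    def __init__(self):
        self._children = {}
        self._value = Trie._MISSING

    def insert(self, key, value):
        node = self
        for ch in key:
            node = node._children.setdefault(ch, Trie())
        node._value = value

    def get(self, key, default=None):
        node = self
        for ch in key:
            node = node._children.get(ch)
            if node is None:
                return default
        return default if node._value is Trie._MISSING else node._value

    def keys_with_prefix(self, prefix):
        """All stored keys starting with `prefix` -- the operation that
        makes tries useful as record indexes."""
        node = self
        for ch in prefix:
            node = node._children.get(ch)
            if node is None:
                return []
        out, stack = [], [(node, prefix)]
        while stack:
            n, p = stack.pop()
            if n._value is not Trie._MISSING:
                out.append(p)
            for ch, child in n._children.items():
                stack.append((child, p + ch))
        return sorted(out)

# Usage: index records by key, then scan by prefix.
idx = Trie()
idx.insert("user:1", "alice")
idx.insert("user:2", "bob")
idx.insert("session:9", "token")
```

`idx.keys_with_prefix("user:")` then returns `["user:1", "user:2"]`, which is exactly the kind of range-style lookup a B-tree would also support.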
I should be able to migrate my data easily by switching databases and have the data migrated automatically, and to shift data around for efficiency. Hot keys should be detected and migrated or load balanced.
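Hot-key detection can be as simple as counting accesses per key and flagging any key whose share of traffic crosses a threshold; those keys become candidates for migration or load balancing. This `HotKeyDetector` is a hypothetical sketch (a real one would use a sliding window or a streaming sketch rather than raw counters):

```python
from collections import Counter

class HotKeyDetector:
    """Counts accesses per key and reports keys that take more than
    `threshold` of all traffic. A sketch: raw counters, no time window."""

    def __init__(self, threshold=0.2):
        self.threshold = threshold  # fraction of total accesses
        self.counts = Counter()

    def record(self, key):
        self.counts[key] += 1

    def hot_keys(self):
        total = sum(self.counts.values())
        if total == 0:
            return []
        return sorted(k for k, c in self.counts.items()
                      if c / total >= self.threshold)

# Usage: one key receives 9 of 10 accesses and is flagged as hot.
det = HotKeyDetector(threshold=0.5)
for _ in range(9):
    det.record("session:42")
det.record("session:7")
hot = det.hot_keys()
```

Once flagged, the data maintenance system could respond by replicating the hot key across nodes or moving it to a faster store.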
The data maintenance system could be SQL-based or key-value based.