Continuing with our series of blog posts on the valuation of data and its application to…
The Increasing Economic Value of Private Data in the Era of Large Language Models: A Case for Secure Computation
As Large Language Models (LLMs) make vast quantities of public data readily accessible, effectively commoditising public knowledge, private data becomes increasingly valuable. This shift has been called “The Tragedy of the AI Data Commons” in a recent publication on the widening divide between the value of public and private data.
Secure computation and other privacy-preserving techniques are the last bastion against the erosion of data value caused by LLMs. As we pointed out in “The Valuation of Secrecy and Privacy Multiplier”, secure computation preserves and improves the valuation of data during public dissemination, including its ingestion by LLMs.
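As a toy illustration of this contrast (not the model from the referenced posts), one can compare a dataset whose value erodes once its contents leak into the public domain against one whose owner monetises it under secure computation, keeping the underlying data secret. All function names, parameter names, and values below are illustrative assumptions.

```python
from math import exp

def open_dissemination_value(v0: float, leak_rate: float, years: float) -> float:
    """Toy assumption: once disclosed publicly (e.g. ingested by LLMs),
    the data's exclusive value decays exponentially over time."""
    return v0 * exp(-leak_rate * years)

def secure_computation_value(v0: float, privacy_multiplier: float) -> float:
    """Toy assumption: secrecy is preserved, so the base value is scaled
    by a multiplier reflecting exclusive access to the data's insights."""
    return v0 * privacy_multiplier

v0 = 100.0  # initial data value, arbitrary units
eroded = open_dissemination_value(v0, leak_rate=0.5, years=3)
retained = secure_computation_value(v0, privacy_multiplier=1.3)
print(f"open after 3 years: {eroded:.1f}, under secure computation: {retained:.1f}")
```

The decay rate and multiplier here are placeholders; the point is only the qualitative divergence between the two trajectories.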
And although dynamic stochastic general equilibrium models have started to incorporate the effects of LLMs as productivity enhancers (for example, “From Servers to Rates: AI, ICT Capital, and the Natural Rate” and “Tech Booms: From Dot-Com To AI”), they do not yet model the rational responses of heterogeneous agents, who can counteract these negative effects through intangible capital deepening enabled by secure computation. In the AI era, secure computation is the only way to preserve and increase the value of data assets.