The Brookings Institution, a self-styled non-partisan US think tank, is putting pressure on the European Union (EU) over its AI regulation plans.
The essence of a Brookings piece published this week is that the EU is on the wrong track in seeking to impose liability on AI systems described as "general purpose."
And then there's the usual, often severely nebulous criticism about "stifling innovation" – this happens every time Big Tech finds its future plans and/or bottom line under any kind of real or perceived threat.
In this case, Brookings mentioned something that reports in the mainstream tech press singled out – GPT-3 – as an example of technology that would allegedly suffer should the slow-moving European Commission actually go ahead and adopt its AI Act, announced in 2021.
GPT-3 is described as a "cutting edge" AI tool, while the gripe with the EU's proclaimed goals around regulating AI is sold as concern for open source, and how the rules might "chill" the innovative environment around it.
Sounds good – but not so fast. When it comes to GPT-3, free and open source's "favorite new best friend" Microsoft has its finger in this pie. As an example of the kind of thing Brookings is worried about here, it's worth recalling that Generative Pre-trained Transformer 3 (GPT-3) is a deep learning autoregressive language model built by OpenAI, a company that started out as a non-profit.
The promising – and, judging by Google's past criticism, commercially competitive – tech was in the hands of a company that became for-profit in 2019, by and large abandoning its original open source practice, and then in 2020 received billions in investment from Microsoft.
This "champion" of "open source AI" is in reality heavily dependent on one of the historically least open source-friendly big companies that ever existed – the public API can be used by other users to receive output, but Microsoft has its claws firmly in, having secured licensed use of GPT-3.
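That access model is worth making concrete: third parties never download GPT-3's weights, they send prompts to OpenAI's hosted HTTP API and receive output back. A minimal sketch of assembling such a call, assuming OpenAI's public `/v1/completions` endpoint format (the helper name and model choice here are illustrative, not taken from any official client):

```python
# Sketch of the gatekeeping Brookings glosses over: users query a hosted
# API with a vendor-issued key; the model itself never leaves OpenAI.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt: str, model: str = "davinci",
                             max_tokens: int = 64) -> urllib.request.Request:
    """Assemble (but do not send) an authenticated completion request."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Access is gated entirely by this key, not by open weights.
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
        method="POST",
    )

req = build_completion_request("Summarize the EU AI Act in one sentence.")
print(req.full_url)  # https://api.openai.com/v1/completions
```

The point of the sketch: nothing here resembles open source distribution – revoke the key, and the "open" tool is gone.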
Yet Brookings paints its criticism of the AI Act as concern for "the little (open source) guy." In the end, though, it seems to be all about keeping corporations safe from liability, whatever the mental gymnastics on display.
Like so: "In the end, the (EU) attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of general-purpose AI," Brookings writes.