When one side develops a weapon, it is traditional for its opponents to seek to seize it, or to design an equally destructive weapon, in order to establish a balance of deterrence. The digital world is no exception: for every data-gathering or attack capability one side deploys through increasingly sophisticated vectors, the other side develops countermeasures and builds its own modes of attack.
The emergence of generative AI in the public sphere continues the astroturfing[^1] that has been going on for decades, helping to erode the very concept of truth, both as reality and as possibility. If everything becomes credible, then nothing is truly true. Attempts at regulation are emerging, but they are moving slowly and proving complex, as not all players agree. Some have already undermined the truth in their own countries: destroying the truth in others is just a continuation of this process.
So, instead, it is turned into a business. We congratulate ourselves on having invented tools so powerful that they keep becoming more destructive. Efforts are made to dismantle the weak regulatory barriers already in place or about to be introduced. We praise the economic multiplier effect of this technology, without any in-depth knowledge of the matter.
And as always, we turn the vocabulary on its head: regulation is a prison; state aid is a prerequisite for success (we would love to see the same logic applied to social aid); moderation is a form of censorship; freedom is domination by the powerful; competition, including winning military contracts, is peace.
Having said that, what’s next? What issues would I have liked to see emerge from the French AI summit (and I’m not saying they weren’t addressed there, just that they weren’t publicized)?
I don’t think we can imagine a world today without generative AI. The technology has permeated so many practices in such a short time that a world without it seems impossible. But we can ask ourselves, collectively, what kind of AI we want.
From an ecological point of view, first of all, I think that the European Union’s Corporate Sustainability Reporting Directive (CSRD) and, more generally, the Science-Based Targets initiative (SBTi) are going in the right direction by creating a non-financial accounting of greenhouse gas (GHG) emissions.
This can only work if regulation is maintained, strengthened, and publicized over time. Knowing that a company like Microsoft exceeds its targets by 29% is important. Only if that overrun is reflected in its clients’ own sustainability reports do we create a competitive dynamic around ecological indicators, for the benefit of the planet.
But the question goes far beyond the ecological issue: we need companies to discuss and set up very precise vetting frameworks for the AIs they use and the use cases where they apply them (a sketch follows the list below). These frameworks must contain:
- Specific use cases, and their success criteria
- A survey of available models, their capabilities and biases
- Test protocols with samples of use cases and an evaluation of feedback
- An explainability framework to ensure the transparency of the decision-making process, independently audited to validate its legality and the organization’s accountability
- A continuous monitoring mechanism, with a disaster recovery plan if the AI needs to be disconnected
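
To make this checklist concrete, here is a minimal sketch of how such a framework could be captured as a structured, versionable record kept alongside each AI deployment. This is only an illustration under my own assumptions: the class and field names are hypothetical and do not come from any existing standard or library.

```python
# Illustrative sketch only: each class mirrors one item of the checklist above.
# All names are hypothetical, not an established standard.
from dataclasses import dataclass


@dataclass
class UseCase:
    description: str              # the specific business use case
    success_criteria: list[str]   # how success will be measured


@dataclass
class ModelSurvey:
    model_name: str
    capabilities: list[str]       # what the model is suited for
    known_biases: list[str]       # documented limitations and biases


@dataclass
class TestProtocol:
    sample_cases: list[UseCase]   # representative samples of the use cases
    feedback_summary: str         # evaluation of the feedback collected


@dataclass
class ExplainabilityAudit:
    auditor: str                  # independent third party
    decision_process_documented: bool
    legality_validated: bool
    accountability_owner: str     # who in the organization is accountable


@dataclass
class MonitoringPlan:
    metrics: list[str]            # what is continuously monitored
    disconnect_procedure: str     # disaster recovery plan if the AI must be pulled


@dataclass
class AIFramework:
    """One complete vetting framework for a given AI deployment."""
    use_cases: list[UseCase]
    surveyed_models: list[ModelSurvey]
    tests: list[TestProtocol]
    explainability: ExplainabilityAudit
    monitoring: MonitoringPlan
```

Whether such a record lives in code, in a spreadsheet, or in an audit report matters less than the fact that it exists, is kept up to date, and can be handed to an independent auditor.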
I’m convinced that all these needs will create jobs in the future: consultancies that help choose AI models, organizations specializing in continuous monitoring, independent audit firms, and even firms specializing in the specific task behind the business need (you don’t audit generative AI for translation, labeling, or general discussion in the same way).
The speed at which we make progress on these issues will depend on our political courage and our ability to stand up to the economic interests of the major AI players, who don’t really want this kind of competition.
It’s up to us to force them.
[^1]: Astroturfing: a deceptive propaganda practice in which organizations or individuals create the illusion of widespread popular support for their product, service, or point of view.