Monday, September 8, 2025

Moment of Truth

We stand at a moment of astonishing magnitude. On the one hand, the richest and most powerful economic superpower in the world is undergoing an authoritarian takeover whose hallmark is a flagrant disregard for the truth. On the other hand, modern AI technology has suddenly permitted machines to accurately process natural language, images, speech, and other complex structured and semi-structured signals, giving humanity broad access to incredibly powerful tools that permit us to understand and use information in unprecedented ways.

Leading frontier language models are among the most rapidly adopted and widely used information technologies that have ever existed. They democratize intelligence: they make it easy for ordinary people to access expertise far beyond their own, and on the whole they perform well. Serious issues with 'hallucinations' (where models invent facts) are becoming less common, and AI is becoming a widely accepted computational toolset for almost any task that requires complex reasoning and the use of information.

It is only a matter of time before this technology is weaponized as a tool for disinformation across public life. Up until now, such efforts (such as Elon Musk's attempt to promote claims of a so-called 'white genocide' in South Africa) have backfired dramatically, but it would be naïve to suppose that all such attempts can be so easily spotted. Given the importance of information networks in securing and exploiting power (see Yuval Harari's book 'Nexus'), we should expect concerted, intense efforts from powerful stakeholders to have AI models and agents tailor their answers to support the pursuit of power rather than provide the most accurate and truthful responses as a matter of principle.

Having said that, we still have a runway on which to bend the arc of AI technology toward truth. This is for two reasons: (A) models' dependence on high-quality training data for good performance is hard to circumvent with disinformation; and (B) we have only begun to scratch the surface of how we can best use these tools to solve real problems.

I believe that a key part of developing this technology lies in how we address the core underlying process by which humans ascertain what is true: argumentation.

Argument is present across every human endeavor involving competing approaches to the use of information. It is a centerpiece of philosophy, law, politics, academia, science, and business. It is a key but largely understated component of writing, espionage, warfare, terrorism, and even medicine (consider, for example, the role that advocacy and argument played in removing homosexuality as a pathology from the Diagnostic and Statistical Manual of Mental Disorders, or 'DSM', in 1973). Even though it is the water in which we all swim, it is currently well-modeled and understood by AI technology in only a small number of domains (such as Mathematical Olympiad proofs).

For AI technology, it is essential that we understand (A) how argument works in a human context and (B) how we can transcend the constraints of that paradigm to solve previously intractable challenges. Human argumentation is beset by unproductive methods and patterns (see 'Argumentation Schemes' by Walton et al., 2013) which mostly do not center the standard scientific model of 'abductive reasoning' (in which we seek explanations of observations). Thus, in fields where the textual record is dominated by human discussion geared toward securing power for one party over another, any model that learns to construct compelling, outcome-driving arguments from that record will inherit those ineffective, complex, flawed approaches.

How, then, can we find workable solutions? Could we condition reasoning and argumentation AI models and systems attempting to solve complex problems (like cancer, homelessness, gun violence, drug policy, political redistricting, etc.) to approach these problems scientifically, finding explanations for phenomena by curating and organizing observations judged to be relevant? Pragmatically, this approach could use reinforcement learning methods to train specialized models capable of domain-specific problem-solving based on an abductive reasoning methodology (such methods have already been used in biology: see the `rbio` project at the Chan Zuckerberg Initiative), as sketched below.
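To make the idea concrete, here is a minimal, hypothetical sketch in Python of what an abductive-reasoning reward signal might look like. Everything in it is illustrative: the names, the toy `Hypothesis` structure, and the scoring are my own assumptions, not the `rbio` project's methods; a real system would use learned models and curated domain data. The point is only to show the shape of the objective: reward explanations that cover the observed evidence, and penalize needless complexity.

```python
"""A toy sketch of an abductive-reasoning reward for RL fine-tuning.

Hypothetical and simplified: hypotheses and observations are plain
identifiers, and the 'reward' is evidence coverage minus a complexity
penalty (an Occam's-razor term).
"""

from dataclasses import dataclass


@dataclass(frozen=True)
class Hypothesis:
    statement: str
    explained: frozenset   # ids of observations this hypothesis accounts for
    complexity: int        # e.g. number of mechanisms it must assume


def abductive_reward(hypothesis: Hypothesis,
                     observations: frozenset,
                     complexity_weight: float = 0.1) -> float:
    """Score a candidate explanation: reward coverage of the observed
    evidence, penalize needless complexity. This is 'inference to the
    best explanation' in toy form."""
    if not observations:
        return 0.0
    covered = len(hypothesis.explained & observations)
    coverage = covered / len(observations)
    return coverage - complexity_weight * hypothesis.complexity


if __name__ == "__main__":
    observations = frozenset({"obs1", "obs2", "obs3"})
    candidates = [
        Hypothesis("one mechanism explains all three observations",
                   frozenset({"obs1", "obs2", "obs3"}), complexity=1),
        Hypothesis("two unrelated mechanisms explain all three",
                   frozenset({"obs1", "obs2", "obs3"}), complexity=2),
        Hypothesis("explains only one observation",
                   frozenset({"obs1"}), complexity=1),
    ]
    # In an RL loop this score would be the reward for a model-generated
    # hypothesis; here we simply rank fixed candidates.
    for h in sorted(candidates,
                    key=lambda h: abductive_reward(h, observations),
                    reverse=True):
        print(f"{abductive_reward(h, observations):+.2f}  {h.statement}")
```

In a reinforcement learning loop, this score (or a learned analogue of it) would be the reward assigned to a model-generated hypothesis, steering the policy toward inference to the best explanation rather than toward the rhetorical point-scoring that dominates much human argumentative text.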

Naturally, the discussion presented here is hardly a concrete solution (truly instantiating it would require detailed modeling, training-set curation, and rigorous AI research), but I present it as a viable strategy for technology leaders. Given resources, it could help put truth within people's reach. It is essential that we act now, before it is too late.

It may be possible to entrench the usefulness of such truth-empowering tools by helping humanity understand and accomplish more than ever before and by generating genuine societal change. If we can do this, the short runway we currently enjoy against authoritarian disinformation may extend by a month or a year. If successful, it could even serve to inoculate people against disinformation. If all such efforts fail, modern AI is likely to be used to consolidate large-scale surveillance, propaganda, and authoritarianism, to the detriment of all but a few.

I feel that we really are living in a 'moment of truth', and we must act creatively to rise to the occasion.