In February 2024 Tucker Carlson travelled to Moscow and used ‘X’ to broadcast a live interview with Vladimir Putin. In doing so, Carlson offered platform access to an audience of millions, and Putin needed no second invitation to disseminate his warped historical justifications for the invasion of Ukraine. In that same fortnight, OpenAI released a demo of Sora, a new AI product that could create entirely synthetic but alarmingly real-looking videos in response to a user’s text prompt. Considered together, these two seemingly unrelated events offer insight into the realities of a 21st-century information environment in which humanity’s instinctive reliance on identity-based narratives combines with the dominance of global online platforms and the emergence of accessible AI products to create a new media landscape that can be defined as one ‘without-truth’. This post intends to identify the conditions for the arrival of an era ‘without-truth’.
Alongside the Sora demo, OpenAI simultaneously published a post on their website entitled ‘Disrupting malicious uses of AI by state-affiliated threat actors’. In it, the authors stated that whilst they ‘build AI tools that improve lives and help solve complex challenges’, they ‘know that malicious actors will sometimes try to abuse our tools to harm others.’ This report, along with another released in August 2024, perhaps explains why Sora has not yet been made available to the general public: OpenAI recognise that the paradox of human creativity is accentuated all the more by the creative opportunities and threats offered by state-of-the-art generative models.
This continues a theme first explored in Post#1, which focused on the double-edged nature of humanity’s relationship with narratives: the same forces that give us identity and meaning as a species, and that can drive us to extraordinary heights, are also responsible for our very worst tendencies. In a similar vein, the democratising impact of social media has dragged large swathes of global discourse into the gutter, with the impact visible across a range of indicators, from mental health to political polarisation. By creating products that bring their technologies to a consumer market, OpenAI are following this trend, lowering the barrier to entry for these powerful capabilities and enabling creativity and destruction in equal measure.
As stated, this paradox is not a new phenomenon. But there have always been limiting factors on the destructive aspect becoming dominant. Technological expertise has traditionally been a barrier to entry. Destructive ideas require effective dissemination and a willing audience; without them, they run aground. And even given access to an effective dissemination method and a willing audience (today’s social media platforms being the obvious example), destructive ideas can still be challenged and refuted by evidence and reason.
However, there is a very real danger that a new paradigm emerges when consumer access to the creativity afforded by a product such as Sora is layered with the platform dissemination available on social media, with both providing an outlet for humanity’s propensity to consume a narrative. This layering of ‘product and platform’ could create conditions where perceptions of reality are routinely challenged, where doubt over veracity and authenticity prevails, and where the extremes typically dominate. The challenge this poses to our public information spaces can be seen in discourse on any of the conflicts tragically taking place around the world today. If an individual has certainty about what to believe regarding events in Ukraine, Gaza or Lebanon, they will typically fall into one of two categories: 1) they are in one of those war zones and are experiencing events first-hand; or 2) they are a distant, online observer who occupies one of the polarised extremes.
Ironically, it was not the pursuit of information on the world’s conflicts that forced the realisation that the global information environment could now be one ‘without-truth’, but something more trivial: the receipt of a video purporting to show UK Prime Minister Keir Starmer at the Labour Party Conference in September, calling for an “immediate ceasefire in Gaza” and ‘the return of the sausages’. That this video could be dismissed so easily as a ‘deepfake’ underlines the extent to which an audience’s capacity for incredulity has been saturated by this ‘platform/product’ effect. This is just one example, but it is demonstrative of a rational human being’s response to a relatively mundane event. It is becoming increasingly clear that, routinely, we do not know what is true and what is not. Space for nuance and subjectivity in our global information environment is becoming scarce, and competing realities are viewed as zero-sum rather than complementary. We are ‘without-truth’.
Building on the platform/product effect described above, four factors can be seen to have created this phenomenon:
1) The prevailing effect of the traditional narratives and stories that define us as a species. We are predisposed to believing fiction.
2) The normalisation of the factors responsible for the creation of the ‘post-truth’ era, e.g. social media platforms, online anonymity and the smartphone.
3) The perception of AI as something that challenges reality. The media hype around AI, rather than actual realised capability, will remain the dominant driver of what people believe.
4) The actual use of AI consumer products to generate synthetic media and create fake content. OpenAI has already exposed the intent of malign actors.
All four of these factors combine to influence our interpretation of truth and reality in the global information environment. The first three are currently the most dominant. As and when video-generation products of Sora’s quality become more readily accessible, the fourth will add fuel to the first three whilst also bringing its own significant complexities. We need to be prepared. We can’t be ‘without-truth’.