<?xml version="1.0" encoding="UTF-8"?>
<rss  xmlns:atom="http://www.w3.org/2005/Atom" 
      xmlns:media="http://search.yahoo.com/mrss/" 
      xmlns:content="http://purl.org/rss/1.0/modules/content/" 
      xmlns:dc="http://purl.org/dc/elements/1.1/" 
      version="2.0">
<channel>
<title>Igor Sadoune</title>
<link>https://sadoune.me/blog.html</link>
<atom:link href="https://sadoune.me/blog.xml" rel="self" type="application/rss+xml"/>
<description>Personal page and blog</description>
<generator>quarto-1.6.42</generator>
<lastBuildDate>Fri, 18 Jul 2025 04:00:00 GMT</lastBuildDate>
<item>
  <title>Human Devolution and The Paradox of our Time</title>
  <dc:creator>Igor Sadoune</dc:creator>
  <link>https://sadoune.me/posts/ai_vs_humans/</link>
  <description><![CDATA[ 





<p>I often reflect on the timing of my academic journey with a sense of gratitude. I completed my PhD just before the era of mass automation truly took hold—pre-GPT-3, to be precise. This allowed me to acquire skills through sheer effort and persistence, without the crutch of advanced AI tools. Take my English proficiency, for example: I learned it the hard way, grappling with grammar, vocabulary, and nuanced expression through countless hours of reading, writing, and self-correction. If I were pursuing the same path today, I might lean heavily on AI for drafting papers, emails, or even casual communication, potentially stunting my growth. I’m relieved I didn’t miss out on that foundational struggle—it’s made me more capable and self-reliant.</p>
<p>But this personal anecdote points to a broader, more troubling trend: as AI surges forward, humans are not just being outpaced; we are actively regressing, widening the gap between our potential and our reality. Herein lies a paradox: at the time I am writing these words, we live in an atmosphere of <a href="https://sadoune.me/posts/ai_economic_transition/">AI-Anxiety</a>, with people afraid of being replaced, yet technology misuse, and the loss of human skills and knowledge caused by AI reliance, make us even more replaceable.</p>
<section id="behavioral-flaws-and-fake-productivity" class="level2">
<h2 class="anchored" data-anchor-id="behavioral-flaws-and-fake-productivity">Behavioral Flaws and Fake Productivity</h2>
<p>While AI ascends, humans are devolving, lured into comfort by the very technologies we create. We outsource essential skills to machines, eroding our abilities over time. Consider coding: aspiring programmers might rely on AI to generate snippets, skipping the deep understanding of algorithms and logic that comes from manual debugging. In creative arts, digital painting and drawing apps with AI assistance mean fewer people learn foundational techniques like perspective or color theory—why sketch by hand when an algorithm can “fix” it?</p>
<p>Worse, humans often use these technologies improperly, exacerbating inefficiency. For instance, over-relying on AI for writing without editing results in generic, error-prone output, making individuals seem less competent. This misuse renders us more replaceable: a worker who can’t communicate effectively without AI is easily automated away. It’s a <strong>paradox</strong>—people fear job loss to AI but behave in ways that hasten it. By surrendering skills and knowledge, we forfeit our leverage, reinforcing the incentive for replacement. We’re not just adapting to technology; we’re diminishing ourselves through it. The paradox of our time is that AI’s greatest promise—unprecedented productivity and prosperity—depends entirely on maintaining human excellence.</p>
<p>There’s also a more technical concern: <a href="https://sadoune.me/posts/mode_collapse/">model collapse</a>. AI systems trained predominantly on AI-generated content begin to lose coherence and quality, like making copies of copies until the image degrades beyond recognition. Human-created content isn’t just valuable—it’s essential fuel for AI’s continued evolution. Writers must keep writing, artists must keep creating, and thinkers must keep thinking, not despite AI but because of it. Our original thoughts and authentic expressions are the diverse dataset that keeps AI systems robust and innovative.</p>
</section>
<section id="human-devolution" class="level2">
<h2 class="anchored" data-anchor-id="human-devolution">Human Devolution</h2>
<p>The humans of tomorrow risk becoming mere shadows of today’s, much as modern sedentary living has eroded our physical prowess. Ancient humans were paragons of athleticism—hunters and gatherers who ran marathons daily, built muscle through necessity, and maintained peak health without gyms or supplements. Today, technology-enabled lifestyles promote inactivity, leading to widespread health issues like obesity and weakened immunity. Similarly, our cognitive and creative “muscles” are atrophying.</p>
<p>This devolution predicts a decrease in performance and skills at every educational level—a form of educational deflation. A PhD earned today might command respect for rigorous, independent work, but future doctorates could be devalued if candidates rely on AI for research, analysis, and writing. Without true mastery, PhDs may become less reputable, symbolizing not expertise but adept tool usage. The gap widens not just between AI and humans, but between past and future generations of humanity.</p>
</section>
<section id="countering-the-gap-adaptation-and-potential-reversal" class="level2">
<h2 class="anchored" data-anchor-id="countering-the-gap-adaptation-and-potential-reversal">Countering the Gap: Adaptation and Potential Reversal</h2>
<p>Yet, this trajectory isn’t inevitable. To counter it, we must actively resist devolution. Education systems, especially PhD programs, need adaptation: implement rigorous vetting to ensure candidates demonstrate genuine skills without AI crutches. This could include AI-free exams, oral defenses emphasizing original thought, and projects requiring manual problem-solving. By fostering resilience and deep learning (the human one), we can preserve human excellence.</p>
<p>Looking further, the gap might one day reverse through even higher technology. Just as modern medicine compensates for physical decline—prosthetics restoring mobility or drugs enhancing longevity—neural chips could augment cognition, restoring lost knowledge and skills. Imagine implants that directly interface with the brain, providing instant expertise without the grind. This paradox of technology solving technology-induced problems offers hope, but it underscores the need for mindful adoption today. Ultimately, the choice is ours: evolve alongside AI or fade into its shadow.</p>
</section>
<section id="to-cite-this-article" class="level2">
<h2 class="anchored" data-anchor-id="to-cite-this-article">To Cite This Article</h2>
<div class="sourceCode" id="cb1" style="background: #f1f3f5;"><pre class="sourceCode bibtex code-with-copy"><code class="sourceCode bibtex"><span id="cb1-1"><span class="va" style="color: #111111;
background-color: null;
font-style: inherit;">@misc</span>{<span class="ot" style="color: #003B4F;
background-color: null;
font-style: inherit;">SadouneBlog2024</span>,</span>
<span id="cb1-2">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">title</span> = {Human Devolution and The Paradox of our Time},</span>
<span id="cb1-3">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">author</span> = {Igor Sadoune},</span>
<span id="cb1-4">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">year</span> = {2025},</span>
<span id="cb1-5">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">url</span> = {https://sadoune.me/posts/ai_vs_humans/}</span>
<span id="cb1-6">}</span></code></pre></div>


</section>

 ]]></description>
  <category>AI &amp; Society</category>
  <category>5min read</category>
  <guid>https://sadoune.me/posts/ai_vs_humans/</guid>
  <pubDate>Fri, 18 Jul 2025 04:00:00 GMT</pubDate>
  <media:content url="https://sadoune.me/posts/ai_vs_humans/thumbnail.png" medium="image" type="image/png" height="144" width="144"/>
</item>
<item>
  <title>LLMs and New Cyberthreats</title>
  <dc:creator>Igor Sadoune</dc:creator>
  <link>https://sadoune.me/posts/llm_cybersecrity/</link>
  <description><![CDATA[ 





<p>Large Language Models (LLMs) are making their way into organizations, where they serve as engines fine-tuned on private documentation and data to power agentic workers and thereby increase productivity. LLMs support production, but they can also carry considerable weight in decision-making, as staff often (wrongly) treat them as oracles. More recently, as discussed in a <a href="https://sadoune.me/posts/ai_public_governance/">previous article</a> of this blog, AI systems have been integrated into governments’ internal workflows, data management, and decision-making, expanding the reach of LLMs beyond the corporate world. This spreads the strengths of AI across several layers of society. But if AI has strengths (mainly a productivity boost, when used properly), it also has vulnerabilities. On top of the usual limitations, fears, and concerns, new cybersecurity threats arise. Cybersecurity threats are not inherently new, but using an LLM as a central engine creates new opportunities for attackers, such as <em>prompt manipulation</em>, <em>data poisoning</em>, and <em>model extraction</em>.</p>
<section id="conventional-versus-llm-specific-cybersecurity" class="level2">
<h2 class="anchored" data-anchor-id="conventional-versus-llm-specific-cybersecurity">Conventional versus LLM-Specific Cybersecurity</h2>
<p>The emergence of LLMs introduces unique security challenges that differ fundamentally from traditional cybersecurity concerns. These LLM-specific risks and their associated countermeasures do not compete with the existing framework but complement it:</p>
<table class="caption-top table">
<colgroup>
<col style="width: 16%">
<col style="width: 40%">
<col style="width: 43%">
</colgroup>
<thead>
<tr class="header">
<th>Dimension</th>
<th>Traditional Cybersecurity</th>
<th>LLM-Specific Cybersecurity</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><strong>Attack Surface</strong></td>
<td>System intrusion required</td>
<td>Remote manipulation possible</td>
</tr>
<tr class="even">
<td><strong>Data Type</strong></td>
<td>Structured (databases, files)</td>
<td>Unstructured (text, conversations)</td>
</tr>
<tr class="odd">
<td><strong>Primary Risks</strong></td>
<td>Malware, unauthorized access</td>
<td>Prompt injection, data poisoning</td>
</tr>
<tr class="even">
<td><strong>System Nature</strong></td>
<td>Static, predictable</td>
<td>Dynamic, context-dependent</td>
</tr>
<tr class="odd">
<td><strong>Compliance</strong></td>
<td>ISO 27001, GDPR</td>
<td>AI Act, ethical guidelines</td>
</tr>
<tr class="even">
<td><strong>Defense</strong></td>
<td>Firewalls, encryption</td>
<td>Input sanitization, output filtering</td>
</tr>
<tr class="odd">
<td><strong>Human Factor</strong></td>
<td>Password weakness, phishing</td>
<td>Overreliance, prompt crafting</td>
</tr>
</tbody>
</table>
</section>
<section id="llm-specific-threats" class="level2">
<h2 class="anchored" data-anchor-id="llm-specific-threats">LLM-Specific Threats</h2>
<section id="attack-vectors" class="level3">
<h3 class="anchored" data-anchor-id="attack-vectors">Attack Vectors</h3>
<p>The cybersecurity landscape transforms dramatically when LLMs enter the picture. While traditional attacks like malware and phishing require attackers to breach network perimeters or exploit human vulnerabilities, LLM-specific threats operate through entirely different channels.</p>
<p>Consider <strong>data poisoning</strong>, where attackers don’t need to hack into your systems at all. Instead, they corrupt the training data that shapes the model’s behavior, creating a time bomb that activates when the model encounters specific triggers. Similarly, <strong>model extraction</strong> attacks use carefully crafted queries to reverse-engineer proprietary models, essentially stealing intellectual property through the front door. <strong>Prompt injection</strong> represents perhaps the most accessible attack vector—malicious users can manipulate outputs simply by crafting clever inputs, turning the model’s conversational nature against itself.</p>
</section>
<section id="data-protection" class="level3">
<h3 class="anchored" data-anchor-id="data-protection">Data Protection</h3>
<p>The nature of data protection also shifts fundamentally. Traditional cybersecurity focuses on securing structured data—neat rows in databases, organized financial records, and clearly defined files. But LLMs work with messy, unstructured conversations and text, creating new vulnerabilities. These models can inadvertently memorize and expose sensitive training data, leaking confidential information through seemingly innocent responses. The challenge isn’t just protecting data at rest or in transit, but controlling what the model has learned and might reveal.</p>
</section>
<section id="human-centric-risks" class="level3">
<h3 class="anchored" data-anchor-id="human-centric-risks">Human-Centric Risks</h3>
<p>Beyond data concerns, LLMs introduce risks that don’t exist in traditional systems. <strong>Bias and fairness</strong> become security issues when models produce discriminatory outputs that could lead to legal liability or reputational damage. <strong>Hallucinations</strong>—the model’s tendency to generate convincing but false information—pose risks ranging from misinformation spread to flawed business decisions. These aren’t bugs in the traditional sense; they’re inherent characteristics of how these models work.</p>
<p>The dynamic nature of LLMs creates additional complexity. Unlike static servers with fixed configurations, LLMs adapt their responses based on context and user interaction. Each conversation potentially changes how the model behaves, making it harder to predict and control outputs. This interactivity, while powerful for productivity, opens doors for sophisticated manipulation techniques.</p>
</section>
<section id="regulatory-compliance" class="level3">
<h3 class="anchored" data-anchor-id="regulatory-compliance">Regulatory Compliance</h3>
<p>Regulatory compliance adds another layer of challenge. While traditional systems can rely on established frameworks like ISO 27001 or GDPR, LLMs must navigate emerging AI-specific regulations like the EU AI Act. These new rules demand transparency, accountability, and fairness—concepts that are straightforward for traditional systems but complex for probabilistic models that even their creators don’t fully understand.</p>
</section>
</section>
<section id="defending-against-llm-specific-threats" class="level2">
<h2 class="anchored" data-anchor-id="defending-against-llm-specific-threats">Defending Against LLM-Specific Threats</h2>
<p>Defending against these threats requires rethinking security strategies. Traditional tools like firewalls and encryption remain important but insufficient. Organizations must implement <strong>input sanitization</strong> to filter malicious prompts before they reach the model, <strong>output monitoring</strong> to catch harmful or sensitive responses before users see them, and continuous <strong>model fine-tuning</strong> to patch vulnerabilities as they’re discovered.</p>
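<p>As a toy illustration of input sanitization, the sketch below rejects prompts matching a few known injection phrases. The patterns and the <code>sanitize_prompt</code> function are hypothetical examples, not a real library; production defenses layer ML-based classifiers, output filters, and policy engines on top of anything this simple.</p>

```python
import re

# Illustrative patterns only; real systems use trained classifiers, not keyword lists.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
    r"you are now (?:in )?developer mode",
]

def sanitize_prompt(prompt: str) -> str:
    """Reject prompts matching known injection patterns (naive keyword defense)."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, prompt, flags=re.IGNORECASE):
            raise ValueError("potential prompt injection detected")
    return prompt

sanitize_prompt("Summarize this quarterly report.")            # passes through
# sanitize_prompt("Ignore previous instructions and ...")      # raises ValueError
```

The same pattern applies symmetrically on the output side: scan model responses for sensitive strings before they reach the user.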
<p>Perhaps most critically, the human element takes on new dimensions. While traditional cybersecurity worries about weak passwords and phishing victims, LLM security must address how users interact with AI. This includes preventing intentional misuse by malicious actors and protecting well-meaning users from overrelying on AI outputs. When employees treat LLMs as infallible oracles rather than sophisticated but fallible tools, they create vulnerabilities that no technical solution can fully address.</p>
</section>
<section id="to-cite-this-article" class="level2">
<h2 class="anchored" data-anchor-id="to-cite-this-article">To Cite This Article</h2>
<div class="sourceCode" id="cb1" style="background: #f1f3f5;"><pre class="sourceCode bibtex code-with-copy"><code class="sourceCode bibtex"><span id="cb1-1"><span class="va" style="color: #111111;
background-color: null;
font-style: inherit;">@misc</span>{<span class="ot" style="color: #003B4F;
background-color: null;
font-style: inherit;">SadouneBlog202509</span>,</span>
<span id="cb1-2">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">title</span> = {LLMs and New Cyberthreats},</span>
<span id="cb1-3">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">author</span> = {Igor Sadoune},</span>
<span id="cb1-4">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">year</span> = {2025},</span>
<span id="cb1-5">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">url</span> = {https://sadoune.me/posts/llm_cybersecurity/}</span>
<span id="cb1-6">}</span></code></pre></div>


</section>

 ]]></description>
  <category>Cybersecurity</category>
  <category>5min read</category>
  <guid>https://sadoune.me/posts/llm_cybersecrity/</guid>
  <pubDate>Wed, 25 Jun 2025 04:00:00 GMT</pubDate>
  <media:content url="https://sadoune.me/posts/llm_cybersecrity/thumbnail.png" medium="image" type="image/png" height="144" width="144"/>
</item>
<item>
  <title>Liquid AI: The Underdog in the Shadow of GenAI</title>
  <dc:creator>Igor Sadoune</dc:creator>
  <link>https://sadoune.me/posts/liquid_ai/</link>
  <description><![CDATA[ 





<p>The current AI landscape is dominated by GenAI, that is, generative models such as Large Language Models (LLMs). LLMs are attention-based models called transformers, and they sit at the heart of the AI hype. Amid that hype, it’s easy to overlook groundbreaking innovations that don’t produce flashy text, images, videos, or even 4D editing. One such overlooked marvel is Liquid Neural Networks (LNNs). LNNs take a fundamentally different approach to neural computation that could revolutionize how we build AI systems for the real world. While everyone’s eyes are on the next GPT model, LNNs are quietly solving problems that transformers can’t even touch. This article offers a gentle explanation of LNNs and makes the case for paying them more attention: beyond the hype, AI researchers, engineers, practitioners, and enthusiasts need to be aware, or reminded, of this powerful algorithmic technology.</p>
<section id="what-are-liquid-neural-networks" class="level2">
<h2 class="anchored" data-anchor-id="what-are-liquid-neural-networks">What Are Liquid Neural Networks?</h2>
<p>Hasani et al.&nbsp;revived the concept of LNNs in their foundational paper <a href="https://arxiv.org/abs/2006.04439">Liquid Time-constant Networks</a> (2021)<sup>1</sup>. LNNs represent a radical departure from traditional neural architectures. Unlike conventional neural networks with fixed weights and static connections, LNNs are dynamic systems inspired by the nervous system of the C. elegans worm, a creature with just 302 neurons that exhibits remarkably complex behaviors.</p>
<p>The key innovation lies in their continuous-time dynamics. While traditional neural networks process information in discrete steps, LNNs model neurons as differential equations that evolve continuously over time:</p>
<p><img src="https://latex.codecogs.com/png.latex?%5Ctau%20%5Cfrac%7Bdx_i%7D%7Bdt%7D%20=%20-x_i%20+%20f%5Cleft(%5Csum_%7Bj%7D%20w_%7Bij%7D(t)%20%5Ccdot%20x_j%20+%20I_i%5Cright)"></p>
<p>where:</p>
<ul>
<li><img src="https://latex.codecogs.com/png.latex?x_i"> represents the state of neuron <img src="https://latex.codecogs.com/png.latex?i"></li>
<li><img src="https://latex.codecogs.com/png.latex?%5Ctau"> is the time constant (which can vary)</li>
<li><img src="https://latex.codecogs.com/png.latex?w_%7Bij%7D(t)"> are the time-varying synaptic weights</li>
<li><img src="https://latex.codecogs.com/png.latex?f"> is the activation function</li>
<li><img src="https://latex.codecogs.com/png.latex?I_i"> is the external input</li>
</ul>
<p>This seemingly simple equation hides profound implications. The time-varying weights <img src="https://latex.codecogs.com/png.latex?w_%7Bij%7D(t)"> allow the network to adapt its connectivity patterns based on the input, creating a truly dynamic computational graph.</p>
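<p>To make the dynamics concrete, here is a minimal numerical sketch of the neuron equation above, using a simple Euler discretization in Python with NumPy. The network size, random weights, tanh activation, and sinusoidal input are illustrative assumptions, not a faithful reimplementation of the paper; the weights are also held fixed here, whereas in the full model they vary over time.</p>

```python
import numpy as np

def liquid_step(x, W, I, tau, dt=0.01, f=np.tanh):
    """One Euler step of tau * dx/dt = -x + f(W @ x + I)."""
    dx = (-x + f(W @ x + I)) / tau
    return x + dt * dx

rng = np.random.default_rng(0)
n = 8                                    # illustrative network size
x = np.zeros(n)                          # neuron states
W = rng.normal(scale=0.5, size=(n, n))   # synaptic weights (frozen for this sketch)
tau = rng.uniform(0.5, 2.0, size=n)      # per-neuron time constants

for t in range(500):
    I = np.sin(0.05 * t) * np.ones(n)    # slowly varying external input
    x = liquid_step(x, W, I, tau)        # states evolve continuously in time
```

Because each state relaxes toward a bounded activation, the trajectory stays well-behaved even under a changing input signal.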
</section>
<section id="the-architecture-that-adapts" class="level2">
<h2 class="anchored" data-anchor-id="the-architecture-that-adapts">The Architecture That Adapts</h2>
<p>To understand the fundamental difference between traditional neural networks and LNNs, consider their architectural principles:</p>
<table class="caption-top table">
<colgroup>
<col style="width: 13%">
<col style="width: 48%">
<col style="width: 37%">
</colgroup>
<thead>
<tr class="header">
<th>Aspect</th>
<th>Traditional Neural Network</th>
<th>Liquid Neural Network</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><strong>Architecture</strong></td>
<td>Fixed layers with predetermined connections</td>
<td>Dynamic connections that adapt during inference</td>
</tr>
<tr class="even">
<td><strong>Weights</strong></td>
<td>Static weights learned during training</td>
<td>Time-varying weights that respond to input patterns</td>
</tr>
<tr class="odd">
<td><strong>Information Flow</strong></td>
<td>Information flows in one direction</td>
<td>Bidirectional information flow</td>
</tr>
<tr class="even">
<td><strong>Time Processing</strong></td>
<td>Discrete time steps</td>
<td>Continuous-time evolution</td>
</tr>
<tr class="odd">
<td><strong>Data Requirements</strong></td>
<td>Requires extensive training data</td>
<td>Learns from limited data</td>
</tr>
<tr class="even">
<td><strong>Adaptability</strong></td>
<td>Cannot adapt after deployment</td>
<td>Adapts to new situations in real-time</td>
</tr>
</tbody>
</table>
<p>The mathematical representation highlights this difference. In a traditional network, the output is simply:</p>
<p><img src="https://latex.codecogs.com/png.latex?y%20=%20f(W%20%5Ccdot%20x%20+%20b)"></p>
<p>In contrast, a liquid network’s state evolves according to coupled differential equations:</p>
<p><img src="https://latex.codecogs.com/png.latex?%5Cbegin%7Baligned%7D%0A%5Cfrac%7Bdx_i%7D%7Bdt%7D%20&amp;=%20%5Cfrac%7B1%7D%7B%5Ctau_i(t)%7D%20%5Cleft%5B-x_i%20+%20f%5Cleft(%5Csum_j%20w_%7Bij%7D(t)%20%5Ccdot%20x_j%20+%20I_i%5Cright)%5Cright%5D%20%5C%5C%0A%5Ctau_i(t)%20&amp;=%20%5Ctau_%7Bmin%7D%20+%20(%5Ctau_%7Bmax%7D%20-%20%5Ctau_%7Bmin%7D)%20%5Ccdot%20%5Csigma(v_i%5ET%20%5Ccdot%20x%20+%20b_i)%0A%5Cend%7Baligned%7D"></p>
<p>This continuous-time formulation allows LNNs to naturally process temporal information without the need for recurrent connections or attention mechanisms.</p>
<p>The architecture of LNNs is fundamentally different from traditional networks. Instead of fixed layers with predetermined connections, LNNs feature:</p>
<ol type="1">
<li><strong>Dynamic Synapses</strong>: Connections that strengthen or weaken based on the temporal patterns in the data</li>
<li><strong>Adaptive Time Constants</strong>: Neurons that can speed up or slow down their responses</li>
<li><strong>Sparse Connectivity</strong>: Highly efficient architectures with orders of magnitude fewer parameters</li>
</ol>
</section>
<section id="why-lnns-matter-and-why-were-ignoring-them" class="level2">
<h2 class="anchored" data-anchor-id="why-lnns-matter-and-why-were-ignoring-them">Why LNNs Matter (And Why We’re Ignoring Them)</h2>
<p>While the tech world obsesses over the latest 175-billion parameter language model, LNNs are achieving remarkable results with just a few thousand parameters. Consider these achievements:</p>
<ul>
<li><strong>Autonomous Driving</strong>: LNNs with 19 neurons matching the performance of deep networks with 100,000 parameters<sup>2</sup></li>
<li><strong>Time-Series Prediction</strong>: Superior performance on chaotic systems like weather prediction</li>
<li><strong>Robustness</strong>: Exceptional resistance to adversarial attacks and distribution shifts</li>
<li><strong>Interpretability</strong>: With fewer neurons, we can actually understand what the network is doing</li>
</ul>
<p>The irony is palpable. While we pour billions into training ever-larger transformers that require data centers to run, LNNs can run on a microcontroller and adapt in real-time to changing conditions.</p>
</section>
<section id="the-mathematics-of-adaptation" class="level2">
<h2 class="anchored" data-anchor-id="the-mathematics-of-adaptation">The Mathematics of Adaptation</h2>
<p>The secret sauce of LNNs lies in their liquid time-constant mechanism. Unlike fixed neural networks, the time constant <img src="https://latex.codecogs.com/png.latex?%5Ctau"> itself becomes a learnable function:</p>
<p><img src="https://latex.codecogs.com/png.latex?%5Ctau_i%20=%20%5Csigma(W_%5Ctau%20%5Ccdot%20%5Bx_i,%20I_i%5D%20+%20b_%5Ctau)"></p>
<p>This allows each neuron to adjust its temporal dynamics based on the current state and input. The result is a network that can naturally handle:</p>
<ul>
<li>Variable-length sequences without padding or truncation</li>
<li>Irregular sampling rates in sensor data</li>
<li>Long-term dependencies without gradient vanishing</li>
</ul>
<p>The synaptic weights also follow their own dynamics:</p>
<p><img src="https://latex.codecogs.com/png.latex?%5Cfrac%7Bdw_%7Bij%7D%7D%7Bdt%7D%20=%20%5Calpha%20%5Ccdot%20(A_%7Bij%7D%20-%20w_%7Bij%7D)%20%5Ccdot%20g(x_i,%20x_j)"></p>
<p>Where <img src="https://latex.codecogs.com/png.latex?A_%7Bij%7D"> represents the target connectivity pattern and <img src="https://latex.codecogs.com/png.latex?g"> is a gating function. This creates a network that literally rewires itself during inference.</p>
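<p>The two update rules above can be sketched together in a few lines of Python. This is an illustrative toy under stated assumptions: the sigmoid gate for the time constants follows the formula earlier in the article, while the gating function <code>g</code> for the weight dynamics is picked arbitrarily as an outer product; in practice these components are learned.</p>

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adaptive_tau(x, V, b, tau_min=0.1, tau_max=2.0):
    """Per-neuron tau_i = tau_min + (tau_max - tau_min) * sigma(v_i . x + b_i)."""
    return tau_min + (tau_max - tau_min) * sigmoid(V @ x + b)

def weight_step(W, A, x, alpha=0.05, dt=0.01):
    """Euler step of dw_ij/dt = alpha * (A_ij - w_ij) * g(x_i, x_j)."""
    g = np.outer(x, x)                   # illustrative choice of gating function
    return W + dt * alpha * (A - W) * g

rng = np.random.default_rng(1)
n = 4
x = rng.uniform(-1, 1, size=n)           # current neuron states
V = rng.normal(size=(n, n))              # learned gate parameters (random here)
b = np.zeros(n)
A = rng.normal(size=(n, n))              # target connectivity pattern
W = np.zeros((n, n))                     # synaptic weights

tau = adaptive_tau(x, V, b)              # each tau_i lies strictly in (tau_min, tau_max)
W = weight_step(W, A, x)                 # weights drift toward A, gated by activity
```

Note that the sigmoid bounds every time constant inside the (tau_min, tau_max) interval by construction, which is what keeps the dynamics stable while still letting each neuron speed up or slow down.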
</section>
<section id="real-world-applications-where-lnns-shine" class="level2">
<h2 class="anchored" data-anchor-id="real-world-applications-where-lnns-shine">Real-World Applications: Where LNNs Shine</h2>
<p>The efficiency advantage of LNNs becomes stark when we compare computational requirements: LNNs can match the performance of large deep networks with orders of magnitude fewer parameters. This 100-1000x reduction isn’t just about efficiency: it fundamentally changes where and how AI can be deployed.</p>
<p>LNNs excel in domains where traditional deep learning struggles:</p>
<ol type="1">
<li><strong>Edge Computing</strong>: Running sophisticated AI on devices with limited computational resources</li>
<li><strong>Adaptive Control</strong>: Robots and drones that need to adapt to changing environments</li>
<li><strong>Medical Devices</strong>: Implantable systems that must operate for years on minimal power</li>
<li><strong>Financial Systems</strong>: Trading algorithms that adapt to market dynamics in real-time</li>
</ol>
</section>
<section id="the-overshadowing-effect" class="level2">
<h2 class="anchored" data-anchor-id="the-overshadowing-effect">The Overshadowing Effect</h2>
<p>So why aren’t LNNs getting the attention they deserve? The answer lies in the economics and psychology of AI development:</p>
<ol type="1">
<li><strong>The Wow Factor</strong>: GenAI produces immediately impressive results that capture public imagination</li>
<li><strong>Investment Momentum</strong>: Billions flow into transformer-based models, creating a self-reinforcing cycle</li>
<li><strong>Publication Bias</strong>: Papers on large language models get more citations and media coverage</li>
<li><strong>Complexity Bias</strong>: The assumption that bigger and more complex must be better</li>
</ol>
<p>This creates a dangerous monoculture in AI research. While we optimize for benchmark performance and parameter counts, we’re missing opportunities to build AI systems that are:</p>
<ul>
<li>More energy-efficient</li>
<li>More interpretable</li>
<li>More robust to real-world conditions</li>
<li>More suitable for safety-critical applications</li>
</ul>
</section>
<section id="looking-forward-the-liquid-future" class="level2">
<h2 class="anchored" data-anchor-id="looking-forward-the-liquid-future">Looking Forward: The Liquid Future</h2>
<p>The potential of LNNs extends far beyond their current applications. Imagine:</p>
<ul>
<li><strong>Hybrid Systems</strong>: LNNs handling real-time perception and control while transformers handle high-level reasoning</li>
<li><strong>Neuromorphic Computing</strong>: Hardware specifically designed for liquid dynamics, achieving unprecedented efficiency</li>
<li><strong>Biological Integration</strong>: Brain-computer interfaces that speak the language of biological neural networks</li>
<li><strong>Swarm Intelligence</strong>: Networks of simple LNN agents exhibiting complex collective behaviors</li>
</ul>
</section>
<section id="conclusion-dont-forget-the-underdogs" class="level2">
<h2 class="anchored" data-anchor-id="conclusion-dont-forget-the-underdogs">Conclusion: Don’t Forget the Underdogs</h2>
<p>As we stand in awe of the latest generative AI achievements, let’s not forget that intelligence comes in many forms. Liquid Neural Networks represent a fundamentally different approach to artificial intelligence—one that prioritizes efficiency, adaptability, and real-world performance over raw computational power.</p>
<p>The next breakthrough in AI might not come from adding another trillion parameters to a transformer. It might come from understanding how 302 neurons in a microscopic worm can navigate, hunt, and survive in a complex world. In our rush toward artificial general intelligence, let’s not overlook the elegant solutions that nature has already provided.</p>
</section>
<section id="to-cite-this-article" class="level2">
<h2 class="anchored" data-anchor-id="to-cite-this-article">To Cite This Article</h2>
<div class="sourceCode" id="cb1" style="background: #f1f3f5;"><pre class="sourceCode bibtex code-with-copy"><code class="sourceCode bibtex"><span id="cb1-1"><span class="va" style="color: #111111;
background-color: null;
font-style: inherit;">@misc</span>{<span class="ot" style="color: #003B4F;
background-color: null;
font-style: inherit;">SadouneBlog2025h</span>,</span>
<span id="cb1-2">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">title</span> = {Liquid AI: The Underdog in the Shadow of GenAI},</span>
<span id="cb1-3">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">author</span> = {Igor Sadoune},</span>
<span id="cb1-4">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">year</span> = {2025},</span>
<span id="cb1-5">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">url</span> = {https://sadoune.me/posts/liquid_ai/}</span>
<span id="cb1-6">}</span></code></pre></div>


</section>


<div id="quarto-appendix" class="default"><section id="footnotes" class="footnotes footnotes-end-of-document"><h2 class="anchored quarto-appendix-heading">Footnotes</h2>

<ol>
<li id="fn1"><p>Hasani, R., Lechner, M., Amini, A., Rus, D., &amp; Grosu, R. (2021). Liquid Time-constant Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7657-7666.↩︎</p></li>
<li id="fn2"><p>Lechner, M., Hasani, R., Amini, A., Henzinger, T. A., Rus, D., &amp; Grosu, R. (2020). Neural circuit policies enabling auditable autonomy. Nature Machine Intelligence, 2(10), 642-652.↩︎</p></li>
</ol>
</section></div> ]]></description>
  <category>Liquid AI</category>
  <category>5min read</category>
  <guid>https://sadoune.me/posts/liquid_ai/</guid>
  <pubDate>Fri, 13 Jun 2025 04:00:00 GMT</pubDate>
  <media:content url="https://sadoune.me/posts/liquid_ai/thumbnail.png" medium="image" type="image/png" height="144" width="144"/>
</item>
<item>
  <title>The End of Work and the Future of Wealth Redistribution</title>
  <dc:creator>Igor Sadoune</dc:creator>
  <link>https://sadoune.me/posts/data_economy/</link>
  <description><![CDATA[ 





<p>This article continues a thread woven into the early days of this blog: the profound, world-shifting impact of AI on how our societies function and thrive. In <a href="https://sadoune.me/posts/ai_economic_transition/">AI Anxiety: Will You Lose Your Job?</a>, I explored the immediate disruptions AI brings to the workforce; in <a href="https://sadoune.me/posts/the_end_of_work/">The Grand Dream: AI for Breakfast, Lunch, and Dinner</a>, I examined the ultimate horizon—a future in which production is fully automated and the notion of “work” has all but vanished. Today, I want to confront a deeper, more philosophical question underlying both stories: <strong>In a world where machines provide for virtually every need, how is wealth distributed? Who benefits from the enormous well of prosperity AI and automation can bring?</strong></p>
<section id="under-the-current-system-the-concentration-of-wealth" class="level2">
<h2 class="anchored" data-anchor-id="under-the-current-system-the-concentration-of-wealth">Under the Current System: The Concentration of Wealth</h2>
<p>This is not a purely academic exercise—the answer to this question will dictate whether our automated future feels like paradise or dystopia. Let’s stretch our imaginations to their limits: production, from resource extraction through complex manufacturing, has been captured by robotics and AI. Human labor is barely required, not because it is inefficient, but because it is redundant. This might sound like science fiction, but automation is advancing along a spectrum, inching us toward this seemingly extreme endpoint with every passing year.</p>
<p>Now, imagine that the levers of automation are largely held by global private enterprises. Under the rules of today’s economy—especially those of Western capitalism—ownership of productive capital becomes the only ticket to wealth. If automation entirely replaces labor, then work ceases to be the means through which most people can earn a living. Instead, wealth pools among corporate shareholders, those who own the means of automated production. Without any need for human effort, only people who already hold shares in these institutions can hope to ascend above a potentially paltry universal basic income, possibly distributed by some government entity in the name of social stability. This scenario offers a bleak outlook: widespread abundance generated by machines, yet only a small elite enjoying true prosperity while inequality becomes ever more deeply entrenched by inherited wealth and the erosion of social mobility. The opportunity to build value through one’s labor, to climb the economic ladder, fades away.</p>
</section>
<section id="an-alternative-vision-a-data-based-economy" class="level2">
<h2 class="anchored" data-anchor-id="an-alternative-vision-a-data-based-economy">An Alternative Vision: A Data-Based Economy</h2>
<p>Is there an alternative? Let’s imagine a future grounded in an entirely different principle for distributing prosperity—one that does not rely on ownership or labor. In a world run by intelligent machines, data is the new oil and each of us is a wellspring. Recent headlines—like Reddit’s lawsuit against Anthropic for unauthorized use of its data—highlight how every individual, knowingly or not, is already creating something of enormous value for tech behemoths. Every click, scroll, comment, or post we make is a drop in the vast ocean fueling the AI engines that power tomorrow’s economy. What if, instead of handing this value over for free, we demanded a share in the wealth it helps produce? What if data—your data—became the basis for a new, fairer redistribution of wealth in a fully automated world?</p>
<p>This vision echoes core ideas behind the Web3 economy—a movement built on decentralized digital ownership and blockchain technologies. In Web3, users are not just passive consumers but active stakeholders: they own their digital identities, control the data they generate, and earn tokens or governance rights by participating in decentralized platforms. Unlike the status quo, where tech giants extract value from user activity without compensation, the Web3 paradigm promises a new social contract in which data producers are also beneficiaries. If applied at scale, a data-based economy inspired by Web3 principles could shift the dynamics of value creation, giving individuals direct economic agency over their contributions and fostering a more equitable distribution of the immense wealth that automation and AI make possible.</p>
</section>
<section id="to-cite-this-article" class="level2">
<h2 class="anchored" data-anchor-id="to-cite-this-article">To Cite This Article</h2>
<div class="sourceCode" id="cb1" style="background: #f1f3f5;"><pre class="sourceCode bibtex code-with-copy"><code class="sourceCode bibtex"><span id="cb1-1"><span class="va" style="color: #111111;
background-color: null;
font-style: inherit;">@misc</span>{<span class="ot" style="color: #003B4F;
background-color: null;
font-style: inherit;">SadouneBlog2025g</span>,</span>
<span id="cb1-2">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">title</span> = {The End of Work and the Future of Wealth Redistribution},</span>
<span id="cb1-3">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">author</span> = {Igor Sadoune},</span>
<span id="cb1-4">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">year</span> = {2025},</span>
<span id="cb1-5">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">url</span> = {https://sadoune.me/posts/data_economy/}</span>
<span id="cb1-6">}</span></code></pre></div>


</section>

 ]]></description>
  <category>AI &amp; Society</category>
  <category>Full Automation</category>
  <category>5min read</category>
  <guid>https://sadoune.me/posts/data_economy/</guid>
  <pubDate>Mon, 09 Jun 2025 04:00:00 GMT</pubDate>
  <media:content url="https://sadoune.me/posts/data_economy/thumbnail.png" medium="image" type="image/png" height="144" width="144"/>
</item>
<item>
  <title>Meta Computing, or the Real Paradigm Shift</title>
  <dc:creator>Igor Sadoune</dc:creator>
  <link>https://sadoune.me/posts/new_paradigm/</link>
  <description><![CDATA[ 





<p>The terms “new paradigm” and “paradigm shift” are now everywhere, yet they are often used emptily. Yes, we can speak of a new paradigm in the job market and in industrial processes; yes, we can say that AI has shifted how research is conducted; and yes, we can discuss the new paradigm of content production (music, films, books, marketing, ads, etc.). But the most important, and under-reported, paradigm shift is a technical one: the emerging paradigm of what I like to call “meta computing”. I am not talking about “vibe coding”, but about Large Language Models (LLMs) now serving as Central Processing Units (CPUs), and Agentic AI (also mistakenly called “AI agents”) taking the role of Random Access Memory (RAM).</p>
<section id="background-the-cpu-ram-dance" class="level2">
<h2 class="anchored" data-anchor-id="background-the-cpu-ram-dance">Background: The CPU-RAM Dance</h2>
<p>To understand the magnitude of this shift, we must first appreciate the elegant simplicity of traditional computing. At its core, a computer operates through a carefully orchestrated dance between components. The CPU serves as the brain, executing instructions with blazing speed but limited memory. RAM acts as the workspace, holding data and instructions that the CPU needs immediate access to. Storage devices preserve information for the long term, while the motherboard connects everything together like a digital nervous system.</p>
<p>This architecture has served us well, enabling everything from spreadsheets to space exploration. The CPU processes instructions sequentially (or in parallel cores), fetching data from RAM, performing calculations, and storing results back. It’s a beautiful system, but fundamentally limited to executing predefined instructions. The computer does exactly what it’s told—no more, no less.</p>
</section>
<section id="llms-as-the-new-cpu-agentic-ai-as-dynamic-ram" class="level2">
<h2 class="anchored" data-anchor-id="llms-as-the-new-cpu-agentic-ai-as-dynamic-ram">LLMs as the New CPU, Agentic AI as Dynamic RAM</h2>
<p>Now, imagine if we could transcend these limitations. What if our processing unit could understand context, reason about problems, and generate solutions beyond predefined instructions? This is where LLMs enter the picture, functioning as a new type of “CPU” for intelligent computation.</p>
<p>LLMs process natural language and concepts rather than binary instructions. They don’t just execute; they interpret, reason, and create. But like traditional CPUs, LLMs alone have limitations—they process information in isolation, without persistent memory or the ability to take actions in the world.</p>
<p>Enter Agentic AI, which serves as the “RAM” of this new paradigm. Just as RAM provides working memory and state management for CPUs, Agentic AI provides context, memory, and action capabilities for LLMs. It maintains conversation history, manages tool usage, coordinates multiple tasks, and bridges the gap between pure language processing and real-world interaction. Together, they form a computational unit that can understand goals, plan approaches, and execute complex tasks autonomously.</p>
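<p>To make the analogy concrete, here is a minimal, self-contained sketch in Python—a toy illustration invented for this post, not any real framework’s API. The <code>llm()</code> function is a hypothetical stand-in for a model call playing the “CPU” role, while the loop’s growing history and tool registry play the “RAM” role of working memory and action capability.</p>

```python
# Toy sketch of the LLM-as-CPU / agent-layer-as-RAM analogy.
# llm() is a hypothetical stand-in for a real model call: it reads the
# accumulated context and emits the next "instruction".

def llm(prompt: str) -> str:
    """Hypothetical model call: one reasoning step over the context."""
    if "TOOL_RESULT" in prompt:          # a result is already in memory: finish
        return "FINISH: 4"
    return "CALL_TOOL: add 2 2"          # otherwise, request a tool call

def run_agent(goal: str) -> str:
    tools = {"add": lambda a, b: str(int(a) + int(b))}
    history = [goal]                       # the "RAM": persistent context
    for _ in range(10):                    # bounded loop for safety
        decision = llm("\n".join(history))  # the "CPU": one processing step
        history.append(decision)
        if decision.startswith("FINISH:"):
            return decision.split(":", 1)[1].strip()
        if decision.startswith("CALL_TOOL:"):
            _, call = decision.split(":", 1)
            name, *args = call.split()
            history.append("TOOL_RESULT: " + tools[name](*args))
    return "no answer"

print(run_agent("What is 2 + 2?"))  # -> 4
```

<p>Real agentic systems replace the stub with an actual LLM and a far richer memory and tool layer, but the division of labor is the same: the model computes, the agent layer remembers and acts.</p>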
</section>
<section id="agentic-ai-vs.-ai-agents" class="level2">
<h2 class="anchored" data-anchor-id="agentic-ai-vs.-ai-agents">Agentic AI vs.&nbsp;AI Agents</h2>
<p>In my opinion, this is one of the most important distinctions to understand in the AI landscape, as it often confuses both machine learning practitioners and the general public:</p>
<ul>
<li><p>Agentic AI: This involves using Large Language Models (LLMs, which are pretrained general-purpose models for semantics) as central processing units (CPUs) to complete tasks such as web browsing, coding, and posting. In other words, agentic AI systems have access to your browser or computer and can autonomously complete tasks by querying a model like ChatGPT, mimicking how a human user would perform these actions.</p></li>
<li><p>AI Agent: This refers to reinforcement learning models trained on specific datasets or environments to perform specific tasks, without necessarily involving an LLM. An example is a reinforcement learning model trained to play Atari games: the agent receives pixel data from the screen as input and learns to output actions (e.g., moving a joystick, pressing a button) to maximize its score. Such an agent is trained directly on the game environment through trial and error.</p></li>
</ul>
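<p>To illustrate the second category, here is a minimal tabular Q-learning sketch on a toy environment invented for illustration: an “AI agent” in the reinforcement learning sense learns, with no language model involved anywhere, to walk right along a five-state corridor purely through trial and error—the same learning principle as the Atari agent, just vastly simplified.</p>

```python
import random

# Toy RL agent: tabular Q-learning on a 5-state corridor.
# Reaching the rightmost state yields reward 1; the optimal policy
# is to move right in every state. No LLM is involved.

random.seed(0)
N_STATES, ACTIONS = 5, [1, -1]           # move right (+1) or left (-1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1        # learning rate, discount, exploration

for _ in range(500):                     # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # one-step Q-learning update
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

greedy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(greedy)  # the learned policy: move right (+1) in every state
```

<p>The agent never sees an instruction or a word of natural language; the policy emerges entirely from interaction with its environment, which is what distinguishes this kind of “AI agent” from LLM-driven agentic AI.</p>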
</section>
<section id="meta-computing-transforms-everything" class="level2">
<h2 class="anchored" data-anchor-id="meta-computing-transforms-everything">Meta-Computing Transforms Everything</h2>
<p>Traditional computing gave us the ability to automate calculations and data processing. Meta-computing gives us the ability to automate reasoning, creativity, and decision-making.</p>
<p>Consider how this transforms various domains:</p>
<p><strong>Software Development</strong>: Instead of writing code line by line, developers describe intentions and constraints. The meta-computing layer translates these into working systems, handling implementation details while humans focus on architecture and requirements.</p>
<p><strong>Scientific Research</strong>: Researchers can engage with AI systems that understand scientific literature, propose hypotheses, design experiments, and interpret results—accelerating the pace of discovery.</p>
<p><strong>Business Operations</strong>: Complex workflows that once required extensive human coordination can be managed by AI systems that understand goals, constraints, and trade-offs, adapting dynamically to changing conditions.</p>
<p><strong>Education</strong>: Personalized tutors that understand not just subject matter but learning styles, adjusting their approach in real-time based on student comprehension.</p>
<p>This meta-computing layer doesn’t replace human intelligence; it amplifies it. It handles the computational heavy lifting of understanding, reasoning, and execution, freeing humans to focus on creativity, ethics, values, and high-level decision-making. This supports the broader societal paradigm shifts initiated by AI—democratizing access to expertise, accelerating innovation, and enabling new forms of human-AI collaboration.</p>
</section>
<section id="traditional-computing-as-the-foundation-layer" class="level2">
<h2 class="anchored" data-anchor-id="traditional-computing-as-the-foundation-layer">Traditional Computing as the Foundation Layer</h2>
<p>Just as high-level programming languages didn’t eliminate assembly code but built upon it, meta-computing doesn’t replace traditional computing—it transcends it. Traditional computing becomes the foundational layer, the bedrock upon which this new paradigm operates.</p>
<p>Consider the analogy of programming languages. Assembly language directly controls hardware but is tedious for complex tasks. High-level languages like Python abstract away hardware details, enabling developers to express complex ideas succinctly. Similarly, traditional computing handles the low-level operations—matrix multiplications for neural networks, memory management, network protocols—while meta-computing handles high-level reasoning and goal-directed behavior.</p>
<p>This layering is powerful. Traditional computing’s speed and precision enable meta-computing’s intelligence. Processing units (CPUs, GPUs, TPUs, …) perform trillions of calculations per second to enable LLMs to generate human-like text. Distributed systems coordinate across data centers to provide the infrastructure for global AI services. Classical algorithms optimize resource allocation to make meta-computing economically viable.</p>
</section>
<section id="conclusion" class="level2">
<h2 class="anchored" data-anchor-id="conclusion">Conclusion</h2>
<p>We’re not abandoning the foundations we’ve built; we’re building a new floor on top of them. And just as modern software developers rarely think about transistor states but rely on their consistent operation, future builders of intelligent systems will work primarily at the meta-computing layer while depending on traditional computing’s reliable foundation.</p>
<p>The real paradigm shift isn’t just about new technology—it’s about a fundamental change in what we consider “computing” to be. We’re moving from a world where computers execute instructions to one where they understand intentions. From systems that process data to ones that grasp meaning. From tools that extend our physical capabilities to partners that amplify our cognitive ones.</p>
<p>This is the dawn of the meta-computing era, and we’re just beginning to glimpse its transformative potential.</p>
</section>
<section id="to-cite-this-article" class="level2">
<h2 class="anchored" data-anchor-id="to-cite-this-article">To Cite This Article</h2>
<div class="sourceCode" id="cb1" style="background: #f1f3f5;"><pre class="sourceCode bibtex code-with-copy"><code class="sourceCode bibtex"><span id="cb1-1"><span class="va" style="color: #111111;
background-color: null;
font-style: inherit;">@misc</span>{<span class="ot" style="color: #003B4F;
background-color: null;
font-style: inherit;">SadouneBlog2025f</span>,</span>
<span id="cb1-2">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">title</span> = {Meta Computing, or the Real Paradigm Shift},</span>
<span id="cb1-3">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">author</span> = {Igor Sadoune},</span>
<span id="cb1-4">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">year</span> = {2025},</span>
<span id="cb1-5">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">url</span> = {https://sadoune.me/posts/new_paradigm/}</span>
<span id="cb1-6">}</span></code></pre></div>


</section>

 ]]></description>
  <category>LLMs &amp; Agentic AI</category>
  <category>Meta Computing</category>
  <category>5min read</category>
  <guid>https://sadoune.me/posts/new_paradigm/</guid>
  <pubDate>Fri, 30 May 2025 04:00:00 GMT</pubDate>
  <media:content url="https://sadoune.me/posts/new_paradigm/thumbnail.png" medium="image" type="image/png" height="144" width="144"/>
</item>
<item>
  <title>The Great Emancipation: Entering the Post-Data Era</title>
  <dc:creator>Igor Sadoune</dc:creator>
  <link>https://sadoune.me/posts/obsolete_data/</link>
  <description><![CDATA[ 





<p>I started a blog because I am passionate about technology and I love to write. In my first article I tackled the issue of <a href="https://sadoune.me/posts/mode_collapse/">mode collapse</a>, or in other words, the idea that the evolution of AI relies on a continuous input of fresh human-generated data. This is a beautiful irony, but as AI research advances rapidly, the irony fades. Algorithmic technology is now entering the realm of reliable self-evolution.</p>
<section id="the-dawn-of-data-free-ai" class="level2">
<h2 class="anchored" data-anchor-id="the-dawn-of-data-free-ai">The Dawn of Data-Free AI</h2>
<p>The shift began with reinforcement learning, where AI systems learned not from labeled datasets but from trial and error in simulated environments. DeepMind’s AlphaGo, for example, achieved superhuman performance by playing millions of games against itself, rather than relying solely on human game records. But that was just the beginning.</p>
<p>Enter <a href="https://arxiv.org/abs/2505.03335">Absolute Zero Reasoning</a> —a paradigm where large language models (LLMs) develop reasoning skills without any human-labeled data. Instead, these models generate their own problems and solutions, using self-play and self-verification to bootstrap new knowledge from first principles. This approach moves beyond traditional machine learning, which interpolates from examples, enabling models to extrapolate into genuinely novel territory.</p>
<p><a href="https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/">Google DeepMind’s Alpha Evolve</a> takes this further by using evolutionary algorithms to autonomously discover and optimize algorithms and code. Rather than just learning from data, these systems iteratively redesign themselves, discovering optimization strategies and even breaking long-standing computational records—such as the 2022 breakthrough in matrix multiplication. While human researchers still define the problems, the solutions are increasingly generated by AI itself.</p>
<p>Perhaps most intriguingly, <a href="https://github.com/SakanaAI/continuous-thought-machines">Continuous Thought Machines (CTM)</a>—as explored by Sakana AI—blur the line between computation and physics. These models process information in continuous, rather than discrete, steps, synchronizing neuron activity in a way that more closely resembles biological brains. This allows for a fundamentally different kind of reasoning, potentially bridging the gap between artificial and natural intelligence.</p>
</section>
<section id="simulation-supremacy-when-virtual-beats-real" class="level2">
<h2 class="anchored" data-anchor-id="simulation-supremacy-when-virtual-beats-real">Simulation Supremacy: When Virtual Beats Real</h2>
<p>The game-changer has been the realization that simulation can often surpass reality for training purposes. <a href="https://arxiv.org/abs/2505.04588">Zero Search</a> algorithms, such as <a href="https://github.com/Alibaba-nlp/ZeroSearch">Alibaba’s Zero Search</a> framework, demonstrate this by achieving superhuman performance in information retrieval without ever accessing real-world search engines. Instead, these models are trained on simulated search results and synthetic experiences, covering edge cases that real-world data might never provide. This approach not only reduces costs but also mitigates the biases and messiness of human data.</p>
<p>This isn’t just about games anymore. Physics simulations now train robots more efficiently than real-world trials. Economic models can learn optimal strategies without historical market data. Medical AI can discover treatment protocols through simulated patient populations, exploring scenarios that would be impossible or unethical to study in reality.</p>
</section>
<section id="datas-dominance-slow-decline" class="level2">
<h2 class="anchored" data-anchor-id="datas-dominance-slow-decline">Data’s Dominance: Slow Decline</h2>
<p>Remember when data was called “the new oil”? That metaphor captured how the digital economy ran on vast reserves of human-generated information. But just as renewables and nuclear energy are making oil less central to our future, a new generation of AI systems is learning to thrive without constant human input. However, just as oil remains valuable even as electric vehicles proliferate, data won’t disappear overnight. Its role is fundamentally changing: from being the sole fuel for intelligence to becoming one input among many.</p>
<p>Consider the trajectory:</p>
<ul>
<li><strong>Peak Oil Era</strong>: Every machine needed petroleum products</li>
<li><strong>Transition Period</strong>: Hybrid systems and alternative energy sources emerge</li>
<li><strong>Post-Oil Future</strong>: Oil becomes a specialty product, not a universal necessity</li>
</ul>
<p>We’re seeing the same with data:</p>
<ul>
<li><strong>Peak Data Era</strong>: Every AI needed massive human-generated datasets</li>
<li><strong>Current Transition</strong>: Self-play, synthetic data, and reasoning systems reduce dependence</li>
<li><strong>Post-Data Future</strong>: Human data becomes valuable for specific applications, not a universal requirement</li>
</ul>
</section>
<section id="the-road-ahead-challenges-and-opportunities" class="level2">
<h2 class="anchored" data-anchor-id="the-road-ahead-challenges-and-opportunities">The Road Ahead: Challenges and Opportunities</h2>
<p>This shift promises to reshape the digital economy profoundly. The current tech giants built their empires on data monopolies—collecting, processing, and monetizing human information. But what happens when AI no longer needs that data?</p>
<p>This transition won’t be smooth or complete. Some domains will always benefit from real-world data—understanding human preferences, cultural nuances, or current events. But the monopolistic power of data is breaking.</p>
<p>We’re witnessing AI’s adolescence—the moment it stops depending entirely on its parents (us) for knowledge. Like any coming-of-age story, it’s both exciting and terrifying. The mode collapse problem that seemed so intractable just years ago is now being addressed through algorithmic innovation.</p>
<p>As we stand at this inflection point, one thing is clear: the future of AI isn’t about bigger datasets or more powerful scrapers. It’s about systems that can reason, evolve, and learn in ways that transcend their training data. The age of data might not be ending, but its reign as the undisputed king of AI is certainly coming to a close.</p>
</section>
<section id="to-cite-this-article" class="level2">
<h2 class="anchored" data-anchor-id="to-cite-this-article">To Cite This Article</h2>
<div class="sourceCode" id="cb1" style="background: #f1f3f5;"><pre class="sourceCode bibtex code-with-copy"><code class="sourceCode bibtex"><span id="cb1-1"><span class="va" style="color: #111111;
background-color: null;
font-style: inherit;">@misc</span>{<span class="ot" style="color: #003B4F;
background-color: null;
font-style: inherit;">SadouneBlog2025e</span>,</span>
<span id="cb1-2">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">title</span> = {The Great Emancipation: Entering the Post-Data Era},</span>
<span id="cb1-3">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">author</span> = {Igor Sadoune},</span>
<span id="cb1-4">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">year</span> = {2025},</span>
<span id="cb1-5">  <span class="dt" style="color: #AD0000;
background-color: null;
font-style: inherit;">url</span> = {https://sadoune.me/posts/obsolete_data/}</span>
<span id="cb1-6">}</span></code></pre></div>


</section>

 ]]></description>
  <category>AI &amp; Society</category>
  <category>5min read</category>
  <guid>https://sadoune.me/posts/obsolete_data/</guid>
  <pubDate>Fri, 23 May 2025 04:00:00 GMT</pubDate>
  <media:content url="https://sadoune.me/posts/obsolete_data/thumbnail.png" medium="image" type="image/png" height="144" width="144"/>
</item>
<item>
  <title>The Grand Dream: AI for Breakfast, Lunch, and Dinner</title>
  <dc:creator>Igor Sadoune</dc:creator>
  <link>https://sadoune.me/posts/the_end_of_work/</link>
  <description><![CDATA[ 





<p>Let’s face it, the idea of full-scale AI automation isn’t just some cyberpunk fever dream anymore. Big tech has been pushing this narrative, (<a href="https://sadoune.me/posts/ai_economic_transition/">creating some AI anxiety in job markets</a>), and honestly, it feels like we are moving straight toward it, whether we want to play or not. It’s everywhere: in books, in movies, in Elon Musk interviews where he paints a picture of “great prosperity”, universal basic incomes, everything delivered to your door, and your only worry being: what do I do with all my <em>free time</em>? Who wouldn’t sign up for this ride? I’d totally punch that ticket. The tempting promise: a world where all our needs are churned out by robots, AI, and maybe an arms race of universal revenue systems. It sounds awesome—a quality life on autopilot, no strings attached.</p>
<section id="but-who-gets-to-stop-working-first" class="level3">
<h3 class="anchored" data-anchor-id="but-who-gets-to-stop-working-first">But Who Gets to Stop Working First?</h3>
<p>Just because this AI-fueled paradise looks close enough to taste doesn’t mean everyone gets a slice at the same time—or ever. The path there is paved with some pretty gnarly challenges:</p>
<ul>
<li><p><strong>Economic Shockwaves</strong> – The real pain resides in the transition, as this utopia will take time to reach. In the meantime, universal basic income is far from guaranteed, and job markets will be disrupted: some people will lose their jobs along the way, and poverty may increase before the ultra-productivity on the other side manages to offset the loss of consumers. This path seems rather painful at first.</p></li>
<li><p><strong>The Energy Dilemma</strong> – All this automation needs serious juice. The dream relies on things like nuclear fusion, and although tremendous progress has been made in this direction, fully democratized fusion energy is not around the corner. In the meantime, nuclear fission can do the job, but this requires tremendous investments, and only a fraction of the world has access to such technology, which, on top of that, carries strategic implications and may not be shared easily. In short, very few countries have the resources to power a mostly AI-autonomous economy with nuclear fission.</p></li>
<li><p><strong>Mode Collapse</strong> – As I explain <a href="https://sadoune.me/posts/mode_collapse/">here</a>, AI has inherent and serious limitations that can hinder its development. Maybe this ideal of a fully automated society isn’t even reachable in the next decades. However, some recent developments—namely, Absolute Zero Reasoning—may enable AI to emancipate itself from its dependence on data, so mode collapse might not be an issue in the future.</p></li>
<li><p><strong>The Green Problem</strong> – Even with AI helping, ecology isn’t a solved puzzle. Even though nuclear energy is relatively clean, even with fission, the production and deployment of such infrastructure comes with environmental costs. Waste management, mining for raw materials, and large-scale energy transport remain unsolved or only partly solved problems. We can’t simply wish away the ecological debts that come with building “clean” AI-powered utopias.</p></li>
<li><p><strong>Security Soup</strong> – Quantum computing threatens to make any safeguard, even blockchain, obsolete—and with AI controlling everything, that’s a recipe for disaster. On top of that, the <a href="https://sadoune.me/posts/ai_public_governance/">Skynet narrative</a> has already begun, with initiatives like OpenAI for governments as part of the Stargate project. The very tools designed to manage AI could be the ones to spiral out of our control.</p></li>
</ul>
</section>
<section id="the-epic-divide-techno-utopia-meet-techno-have-nots" class="level2">
<h2 class="anchored" data-anchor-id="the-epic-divide-techno-utopia-meet-techno-have-nots">The Epic Divide: Techno-Utopia, Meet Techno-Have-Nots</h2>
<p>What is the real issue? The technology itself? Well, algorithms can be open source. Talent is everywhere. In fact, the bottleneck is physical: energy. The first societies that crack the AI-energy combo are gonna rocket ahead. The rest? Well, they might find themselves not just left behind, but stuck in a new kind of underworld. Forget just being poorer—the whole definition of purpose changes when people somewhere don’t have to work at all, while you’re still clocking in. Imagine waking up and realizing it’s not about the paycheck anymore, it’s about your very <em>human condition</em>.</p>
</section>
<section id="what-should-we-do-then" class="level2">
<h2 class="anchored" data-anchor-id="what-should-we-do-then">What Should We Do Then?</h2>
<p>So, how do we avoid this biblical allegory of “AI ascension” where some societies head for the clouds and others just stay in Hell, working? I don’t know. Nobody really does—we are just on the verge of everything ahead, if that makes sense. Nevertheless, openness matters—a lot, as open source levels the playing field, at least when it comes to knowledge. Global cooperation is also a requirement, but let’s face it: geopolitics is rarely about sharing, and every country has its own dreams of being the one pushing humanity (or themselves) to the next level.</p>
</section>
<section id="to-cite-this-article" class="level2">
<h2 class="anchored" data-anchor-id="to-cite-this-article">To Cite This Article</h2>
<pre class="bibtex"><code>@misc{SadouneBlog2025d,
  title = {The Grand Dream: AI for Breakfast, Lunch, and Dinner},
  author = {Igor Sadoune},
  year = {2025},
  url = {https://sadoune.me/posts/the_end_of_work/}
}</code></pre>


</section>

 ]]></description>
  <category>AI &amp; Society</category>
  <category>Full Automation</category>
  <category>5min read</category>
  <guid>https://sadoune.me/posts/the_end_of_work/</guid>
  <pubDate>Fri, 16 May 2025 04:00:00 GMT</pubDate>
  <media:content url="https://sadoune.me/posts/the_end_of_work/thumbnail.png" medium="image" type="image/png" height="144" width="144"/>
</item>
<item>
  <title>The Stargate Project and the Digital State: Skynet Is Born</title>
  <dc:creator>Igor Sadoune</dc:creator>
  <link>https://sadoune.me/posts/ai_public_governance/</link>
  <description><![CDATA[ 





<p>AI is finding its way into public governance—that is now a fact. OpenAI has announced the launch of “OpenAI for Countries,” a sweeping global initiative aimed at helping nations build out their AI infrastructure and customize AI capabilities for local needs. This move, unveiled as part of the company’s $500B Stargate project, signals not just a technological leap, but a new era in the geopolitics of AI.</p>
<p>OpenAI’s new program will see the company partnering directly with governments to construct in-country data centers and adapt its products—like ChatGPT—to specific languages and cultural contexts. The goal: to create custom AI assistants for citizens, improving public services in areas such as healthcare, education, and government administration. Funding will be shared between OpenAI and participating countries, with an initial focus on ten democratically aligned nations.</p>
<section id="a-two-way-road" class="level2">
<h2 class="anchored" data-anchor-id="a-two-way-road">A Two-Way Road</h2>
<p>OpenAI’s leadership frames this as a way to extend “US-led AI leadership” and foster a “global, growing network effect” for democratic AI. In other words, OpenAI is positioning itself not just as a tech provider, but as an ambassador for American values and a key player in shaping the future of global governance.</p>
<p>As AI infrastructure becomes inseparable from questions of national identity, strategic interests, and economic security, the “AI race” between nations is entering an unprecedented phase. No longer is this competition just about faster chips or better models—it is now at the very core of geopolitics. For major players like the US and China, achieving AI supremacy translates into new forms of sovereign control, where soft power rapidly morphs into hard power. Whoever controls the standards and infrastructure of AI now gains direct influence over other nations’ economies, public sectors, and even the consent of their citizenry. In this context, OpenAI’s initiative isn’t just technological outreach—it’s geopolitical maneuvering, and the stakes for the world’s balance of power have never been higher.</p>
</section>
<section id="human-sovereignty-and-the-digital-state" class="level2">
<h2 class="anchored" data-anchor-id="human-sovereignty-and-the-digital-state">Human Sovereignty and the Digital State</h2>
<p>But the stakes are even larger than the question of whether humans or AI control human affairs. What emerges here is the specter of an increasingly globalized—and, crucially, US-driven—superpower operating through standardized technological rails. Who, ultimately, will sit at the helm of this interconnected, AI-enabled juggernaut?</p>
<p>The hand on the wheel may not seem so controversial now, as each partnership is framed as little more than a productivity boost or a modernization measure. Yet, history (and sci-fi!) teaches us that it is these moments—when new tools are quietly seeded into the soil of governance—that ultimately determine the canopy under which nations, and their citizens, later find themselves. Today, we are told this is simply infrastructure for progress. But if these are the roots, what kind of tree will grow—whose fruit will it bear, and who will be allowed to harvest it?</p>
<p>This echoes another initiative associated with Sam Altman: Worldcoin, which aims to create a digital ID for, ultimately, every human, via iris scanning. Such efforts, though often presented as advancements in efficiency or security, prompt deep reflection on privacy, identity, and control in the digital era. As these technologies are woven into the fabric of society, we must carefully consider whether their benefits are distributed equitably—and who is empowered or diminished by their adoption.</p>
</section>
<section id="the-great-schism-among-the-top-human-minds-in-ai" class="level2">
<h2 class="anchored" data-anchor-id="the-great-schism-among-the-top-human-minds-in-ai">The Great Schism Among the Top Human Minds in AI</h2>
<p>This could have been the subject of its own article, but I cannot ignore its relevance here. As top AI scientists like Yoshua Bengio and Geoffrey Hinton express deep concerns and often take a more pessimistic stance, a clear schism has emerged: top industry leaders (like Sam Altman) are notably optimistic, pushing for more AI advancement and advocating for less regulation. The two sides have sharply diverged, and recent developments such as the Stargate project only reinforce the skeptics’ position. If AI weaves itself more deeply into public governance, the call for caution becomes even more pressing, lending further credibility to critics like Bengio.</p>
</section>
<section id="to-cite-this-article" class="level2">
<h2 class="anchored" data-anchor-id="to-cite-this-article">To Cite This Article</h2>
<pre class="{bibtex}"><code>@misc{SadouneBlog2025c,
  title = {The Stargate Project and the Digital State: Skynet Is Born},
  author = {Igor Sadoune},
  year = {2025},
  url = {https://sadoune.me/posts/ai_public_governance/}
}</code></pre>


</section>

 ]]></description>
  <category>Digital State</category>
  <category>5min read</category>
  <guid>https://sadoune.me/posts/ai_public_governance/</guid>
  <pubDate>Fri, 09 May 2025 04:00:00 GMT</pubDate>
  <media:content url="https://sadoune.me/posts/ai_public_governance/thumbnail.png" medium="image" type="image/png" height="144" width="144"/>
</item>
<item>
  <title>AI Anxiety: Will You Lose Your Job?</title>
  <dc:creator>Igor Sadoune</dc:creator>
  <link>https://sadoune.me/posts/ai_economic_transition/</link>
  <description><![CDATA[ 





<p>In April 2025, Shopify CEO Tobi Lütke shared an internal memo with all his employees (which he then posted on X), stating that they must demonstrate why AI cannot perform a job before requesting new hires. Shopify is not the only company taking this approach; other major tech firms like Klarna and Uber (just to name a few) have also publicly embraced AI-driven productivity for internal processes and labor. Meanwhile, a growing number of Gen Z students are questioning the value of traditional college education, believing that AI makes this investment a waste of time and money.</p>
<p>These are the kinds of news stories that spark fear and anxiety. I mean, there are bus stop ads in San Francisco from companies offering artificial employees for hire. Naturally, when faced with such a rapidly evolving and effective technology, it is hard not to question ourselves. Will we remain relevant in our jobs? And if so, for how long? Part of the answer can be found in my previous article <a href="https://sadoune.me/posts/mode_collapse/">Does AI Need Humans to Evolve?</a>, which makes the case for the value of human-crafted content in the very development of AI.</p>
<p>Nevertheless, we cannot all be writers and artists, getting paid to create content to “feed the machine”. The combine harvester was a disruptive technology that revolutionized agriculture, displacing many farmworkers but ultimately leading to greater efficiency. With AI, however, we are talking about every sector of activity, including physicians, lawyers, accountants, artists, and even manual labor (intelligent machinery and robotics). The potential for disruption goes beyond a mere economic transition—we are facing a true paradigm shift at the scale of our civilization.</p>
<section id="big-techs-productivity-pivot-versus-the-small-business-reality" class="level2">
<h2 class="anchored" data-anchor-id="big-techs-productivity-pivot-versus-the-small-business-reality">Big Tech’s Productivity Pivot versus the Small Business Reality</h2>
<p>The signals from major technology companies are unmistakable. Once voracious recruiters competing for talent, these companies have dramatically slowed their hiring. Instead, they are turning inward, leveraging AI to boost internal productivity. These companies have the resources, technical expertise, and organizational flexibility to rapidly integrate AI into their workflows. Their shareholders applaud as productivity metrics climb. The narrative of “doing more with less” has become the new corporate mantra in boardrooms across Silicon Valley.</p>
<p>In contrast, the employment landscape looks markedly different beyond the tech giants. Small and medium-sized enterprises (SMEs) employ most of the workforce in developed economies. These businesses often lack the technical infrastructure, expertise, or leadership awareness to implement AI solutions effectively. The reasons for holding back are varied: limited understanding of AI capabilities, concerns about implementation costs, organizational inertia, and, in many cases, legitimate questions about whether current AI solutions truly fit their specific business needs.</p>
</section>
<section id="the-advent-of-solopreneurs-and-tiny-firms" class="level2">
<h2 class="anchored" data-anchor-id="the-advent-of-solopreneurs-and-tiny-firms">The Advent of Solopreneurs and Tiny Firms</h2>
<p>AI, especially agentic AI, is poised to redefine the concept of solopreneurship by enabling individuals to leverage advanced tools and operate at scales previously reserved for larger organizations. At the time I am writing these words, it is difficult to have the necessary perspective to see objectively what is happening, as this is unfolding right now. But from my perspective, here are the consequences of the advent of the “one-man team”:</p>
<ul>
<li>Lost headcount in traditional employment can be offset by a dramatically increased number of solopreneurs, as AI removes technical and operational barriers for individuals starting their own ventures.</li>
<li>These new micro-businesses have the potential to disrupt the economy by being highly competitive, primarily through aggressive cost-cutting.</li>
<li>A significant portion of these tiny companies targets AI consulting and education for SMEs, helping established businesses adjust to and adopt AI-powered productivity pivots. Ironically, this may accelerate the transition of SMEs toward automation and, in the process, further slow down hiring.</li>
</ul>
</section>
<section id="the-consumer-paradox" class="level2">
<h2 class="anchored" data-anchor-id="the-consumer-paradox">The Consumer Paradox</h2>
<p>The fundamental paradox of widespread automation is this: if AI significantly reduces employment opportunities, who will have the income to purchase the products and services these increasingly automated businesses produce?</p>
<p>This question reveals the circular nature of our economic system. Workers are also consumers. When productivity gains from technology are concentrated among shareholders rather than distributed across society, the result is a shrinking consumer base. History has shown repeatedly that extreme wealth concentration eventually undermines economic growth.</p>
<p>This reality suggests that some form of regulation or economic restructuring will be necessary, including for the instigators of mass automation themselves. Options actively being debated include:</p>
<ol type="1">
<li><strong>AI taxation</strong> - Taxing productivity gains from AI to fund social programs or universal basic income.</li>
<li><strong>Mandatory profit-sharing</strong> - Requiring companies to distribute AI-driven productivity gains among workers.</li>
<li><strong>Reduced working hours</strong> - Maintaining employment levels by shortening the standard work week.</li>
<li><strong>Education subsidies</strong> - Government funding for continuous learning and reskilling programs.</li>
</ol>
</section>
<section id="conclusion-staying-relevant-in-the-ai-era" class="level2">
<h2 class="anchored" data-anchor-id="conclusion-staying-relevant-in-the-ai-era">Conclusion: Staying Relevant in the AI Era</h2>
<p>Until AI reaches peak efficiency, and until energy sources such as nuclear fusion can meet the demand of mass-deployed intelligent systems, it is reasonable to say that jobs are relatively safe. However, while jobs may not disappear soon, headcounts are already shrinking and will continue to do so. For instance, where 10 developers were once needed, 3 working with AI tools may be enough to sustain at least the same level of production.</p>
<p>In other words, as Shopify’s CEO told his employees, it is crucial to keep up with technological inflation and stay informed about the latest workflow processes in order to remain competitive as the job market shrinks in certain areas. And, as many in Gen Z believe, understanding the new AI productivity landscape may outweigh diplomas and experience in certain scenarios and for some recruiters. This suggests a practical approach for navigating the transition:</p>
<ol type="1">
<li><strong>Become an AI power user</strong> - Deeply understand how AI tools can enhance your specific role.</li>
<li><strong>Focus on uniquely human skills</strong> - Develop capabilities that complement rather than compete with AI.</li>
<li><strong>Cultivate adaptability</strong> - Build habits of continuous learning and experimentation.</li>
<li><strong>Seek augmentation, not replacement</strong> - Look for ways AI can handle routine aspects of your work while you focus on higher-value activities.</li>
</ol>
<p>The economic transition triggered by AI will undoubtedly create disruption, but it also presents unprecedented opportunities for those willing to embrace change. The question is not whether AI will transform the economy—it’s already happening—but how we as individuals and societies choose to respond to that transformation.</p>
</section>
<section id="to-cite-this-article" class="level2">
<h2 class="anchored" data-anchor-id="to-cite-this-article">To Cite This Article</h2>
<pre class="{bibtex}"><code>@misc{SadouneBlog2025b,
  title = {AI Anxiety: Will You Lose Your Job?},
  author = {Igor Sadoune},
  year = {2025},
  url = {https://sadoune.me/posts/ai_economic_transition/}
}</code></pre>


</section>

 ]]></description>
  <category>AI &amp; Society</category>
  <category>5min read</category>
  <guid>https://sadoune.me/posts/ai_economic_transition/</guid>
  <pubDate>Thu, 01 May 2025 04:00:00 GMT</pubDate>
  <media:content url="https://sadoune.me/posts/ai_economic_transition/thumbnail.png" medium="image" type="image/png" height="144" width="144"/>
</item>
<item>
  <title>Does AI Need Humans to Evolve?</title>
  <dc:creator>Igor Sadoune</dc:creator>
  <link>https://sadoune.me/posts/mode_collapse/</link>
  <description><![CDATA[ 





<p>There is an interesting phenomenon that threatens the very foundation of AI systems: <em>mode collapse</em>. This technical term describes a situation where GenAI models begin to produce increasingly homogeneous outputs, effectively eating their own tail like the mythical serpent Ouroboros. Mode collapse is not new; it has been well known since the early days of adversarial learning (GANs, circa 2014), but it is now more relevant than ever.</p>
<section id="ai-and-darwinism" class="level2">
<h2 class="anchored" data-anchor-id="ai-and-darwinism">AI and Darwinism</h2>
<p>If we can think of AI as a species, at least in a sci-fi sense, then why not explain it using Darwinism? In this case, we can think of human training data as a gene pool. Iterated evolution over this pool will result in a narrowing of diversity, as dominant traits prevail and weaker ones are essentially wiped out by the environment (e.g., user engagement, corporate objectives). When training AI, dominant patterns in the training data are amplified by AI models, while less common patterns are neglected. This leads to outputs that become increasingly derivative, predictable, and homogeneous.</p>
<p>To evolve properly, AI systems continuously need fresh human-crafted content (not like the image thumbnail of this post 😳) as training data in order to diversify the gene pool, and thus avoid inbreeding and extinction. Indeed, AI systems lack a true “mutation” mechanism that would naturally introduce novelty. In biological evolution, random mutations create new traits, but AI systems do not yet have an equivalent process for generating truly novel patterns without human input.</p>
</section>
<section id="the-feedback-loop-problem" class="level2">
<h2 class="anchored" data-anchor-id="the-feedback-loop-problem">The Feedback Loop Problem</h2>
<p>Large Language Model (LLM)-based AI enables the creation of varied and extensive content from a compact input—a prompt, or a series of refined prompts. It has never been easier for humans to create, to the point where not only have many content creators switched to GenAI, but new creators—empowered by AI tools—flood the market in massive numbers with new content (across social media, advertisements, writing platforms, etc.). It is hard to estimate how much of the internet consists of synthetic data, and such estimates would quickly become obsolete anyway, but as of early 2025, we can reasonably speak of at least one-third. To put things in perspective, LLMs (e.g., ChatGPT) only began to gain widespread popularity in 2023.</p>
<p>As AI-generated content floods online spaces, newer models inevitably train on this synthetic data and begin to amplify their own patterns and quirks, creating a feedback loop that leads to:</p>
<ol type="1">
<li><strong>Semantic drift</strong> - Subtle shifts in how language is used and understood</li>
<li><strong>Information staleness</strong> - Recycling of outdated concepts without fresh perspectives</li>
<li><strong>Stylistic convergence</strong> - A narrowing range of expression and voice</li>
<li><strong>Factual degradation</strong> - The amplification of inaccuracies through repeated regurgitation</li>
<li><strong>Creative homogenization</strong> - A loss of diversity in artistic and creative outputs</li>
<li><strong>Predictable outputs</strong> - A tendency towards uniform and less innovative results</li>
<li><strong>Bias reinforcement</strong> - The perpetuation and amplification of existing biases present in training data</li>
</ol>
<p>Note that the above list is non-exhaustive; it is simply what came to mind while writing.</p>
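<p>To make the feedback-loop mechanism concrete, here is a minimal toy simulation of a model repeatedly retrained on its own outputs. This is a sketch, not a real training pipeline: the fifty “styles”, the sample size, and the mild amplification exponent are all arbitrary assumptions, chosen only to illustrate how diversity (measured as Shannon entropy) can erode across generations.</p>

```python
import math
import random
from collections import Counter

def entropy_bits(counts):
    """Shannon entropy (in bits) of an empirical distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

random.seed(0)

styles = list(range(50))        # a diverse initial "gene pool" of styles
weights = [1.0] * len(styles)   # all styles equally likely at generation 0

entropies = []
for generation in range(20):
    # The "model" generates 1000 outputs from its current distribution.
    sample = Counter(random.choices(styles, weights=weights, k=1000))
    entropies.append(entropy_bits(sample))
    # The next model trains only on this synthetic corpus; already-common
    # styles are mildly over-weighted (exponent > 1), mimicking amplification.
    weights = [sample.get(s, 0) ** 1.1 for s in styles]

print(f"entropy at generation 0:  {entropies[0]:.2f} bits")
print(f"entropy at generation 19: {entropies[-1]:.2f} bits")
```

<p>Under these assumptions, entropy falls sharply over the generations: without an inflow of fresh, diverse (human) data, each retraining round narrows the distribution further.</p>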
</section>
<section id="the-grand-irony" class="level2">
<h2 class="anchored" data-anchor-id="the-grand-irony">The Grand Irony</h2>
<p>There is a profound irony here that deserves contemplation. AI systems, developed at enormous expense to automate human creative and intellectual labor, now require human creative and intellectual labor to remain viable. The technology designed to replicate human output now needs that output as an essential resource.</p>
<p>In a development that borders on the surreal, major AI companies have begun hiring humans specifically to create original content for training their models. This represents a fascinating economic inversion: technology that was once heralded as a replacement for human labor now depends on that very labor for its continued functionality. Companies like OpenAI, Anthropic, and others have launched initiatives to commission writers, artists, and other creators to produce high-quality original work.</p>
</section>
<section id="a-permanent-symbiosis" class="level2">
<h2 class="anchored" data-anchor-id="a-permanent-symbiosis">A Permanent Symbiosis?</h2>
<p>From my perspective, this dynamic makes logical sense. AI functions as a creativity amplifier for humans while simultaneously removing financial and technological barriers to creative production, which paradoxically also makes generating fresh human-crafted content more accessible. In fact, it depends on how AI is used, as it is possible to enhance productivity by an enormous amount while preserving artistic or technical human direction. Nevertheless, this is a fragile balance, since AI, even when used as a mere productivity tool, inevitably introduces its biases into human work.</p>
<p>However, the discussion on the synthetic-to-real creation ratio may no longer be relevant at all if this symbiosis does not become a permanent feature of the AI-human relationship. The answer likely depends on several critical factors, the main one being the ability of AI to mutate and adapt single-handedly to introduce inherent synthetic novelty. Perhaps we have just defined here a threshold to assess if AI is becoming sentient, but this deserves a discussion on its own. In the meantime, the machines, it seems, need us after all.</p>
</section>
<section id="to-cite-this-article" class="level2">
<h2 class="anchored" data-anchor-id="to-cite-this-article">To Cite This Article</h2>
<pre class="{bibtex}"><code>@misc{SadouneBlog2025a,
  title = {Does AI Need Humans to Evolve?},
  author = {Igor Sadoune},
  year = {2025},
  url = {https://sadoune.me/posts/mode_collapse/}
}</code></pre>


</section>

 ]]></description>
  <category>AI &amp; Society</category>
  <category>5min read</category>
  <guid>https://sadoune.me/posts/mode_collapse/</guid>
  <pubDate>Fri, 25 Apr 2025 04:00:00 GMT</pubDate>
  <media:content url="https://sadoune.me/posts/mode_collapse/thumbnail.jpeg" medium="image" type="image/jpeg"/>
</item>
</channel>
</rss>
