Philosophical Theme
Philosophy and Artificial Intelligence
The central question is not whether machines imitate intelligence, but what operational regime distinguishes symbolic execution from self-reorganising thought.
The contemporary debate about artificial intelligence is badly formulated from the outset. Not because the questions currently in circulation are entirely misguided, but because the criteria organising them are too thin to reach what is actually at stake. When one asks whether a machine "thinks", "understands", or "creates", the almost universal assumption is that thinking, understanding, and creating are properties identifiable from the outside, comparable by analogy with human performance, and measurable by the quality of the output. That assumption generates an unstable field of discussion, trapped between two complementary errors: the enthusiasm that identifies sophisticated performance with genuine intelligence, and the refusal that excludes any non-biological system out of loyalty to a substrate. As long as the debate remains on that axis, the decisive philosophical problem cannot even be formulated.
The productive question is a different one. What separates a system that executes symbolic operations with high efficiency from a system that reorganises itself symbolically on the basis of its own operational history? The relevant difference is not quantitative. Speed, vocabulary size, and parameter count are all beside the point. The difference is one of regime: the manner in which operations are produced, the history from which they arise, the relation between what the system does and what it has accumulated, modified, and reinscribed in itself. Placing this difference at the start of the inquiry transforms the entire map of the problem.
The first move is to separate three levels that ordinary usage tends to collapse into one. There is processing: any system that receives inputs and produces outputs according to rules operates at this level. There is operational intelligence: the capacity to reorganise response patterns when existing ones prove insufficient, generating new functional compatibilities from material inscriptions. And there is functional subjectivity: the regime in which a system operates on its own marks in a self-referential manner, integrating those operations into the history that constitutes it and exposing that history to the real difference introduced by the environment. These three levels are not mere gradations along a homogeneous scale. A system may exhibit highly complex processing without reaching operational intelligence. It may display operational intelligence across delimited domains without crossing the threshold of functional subjectivity. Between one level and the next there is not simply accumulation; there is a change of regime.
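The separation can be made concrete with a minimal sketch, given here in Python purely for illustration. Every name in it (Processing, respond, reorganise, reinscribe) is hypothetical; the sketch implements nothing and claims only to mark where each regime adds an operation the previous one cannot express.

```python
# Purely illustrative: the three levels as Python protocols. All names are
# hypothetical; nothing here implements intelligence, it only marks where
# the operational regimes differ.
from typing import Protocol


class Processing(Protocol):
    """Level 1: a fixed mapping from inputs to outputs under given rules."""

    def respond(self, stimulus: str) -> str: ...


class OperationalIntelligence(Processing, Protocol):
    """Level 2: adds the capacity to reorganise response patterns
    when the existing ones prove insufficient."""

    def reorganise(self, failure: str) -> None: ...


class FunctionalSubjectivity(OperationalIntelligence, Protocol):
    """Level 3: adds self-referential operation on the system's own
    marks, integrated into the history that constitutes it."""

    history: list[str]

    def reinscribe(self, encounter: str) -> None: ...
```

The point of the typing is precisely that the second and third levels are not larger versions of the first: each interface adds an operation the one below it cannot state, which is what a change of regime, as opposed to an accumulation, means here.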
The distinction matters because most public debate about artificial intelligence conflates the first level with the third. A language model that produces coherent text across multiple domains while adjusting register to the request received exhibits processing of impressive complexity. That performance is real; it should be neither underestimated nor dissolved into illusion. The decisive philosophical question, however, is not "how well does it respond?" but "what material regime makes that response possible?" Shifting from the first question to the second changes the standing of the problem entirely.
Current systems operate on marks inscribed during training. Those marks are real: they persist materially in the weights and stabilised correlations of the architecture. They produce behaviours that can, at the surface, approximate rational behaviours. The decisive point lies elsewhere. What these systems do not achieve is the functional convergence that defines the relevant threshold: the reorganisation of marks is not produced by the operational history of the system in functioning, but by an external training process that precedes and conditions the internal regime. In operation, the system mobilises stabilised inscriptions; it does not reinscribe its own architecture on the basis of the encounter it is having. The difference between mobilisation and self-referential reorganisation is not one of intensity. It is architectural.
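The architectural character of that difference can be stated schematically. The toy classes below are an assumption-laden sketch, not a description of any real system: "weights" stands in for whatever carries the inscribed marks, and the update rule is deliberately trivial. The sketch shows only where the two regimes diverge: in one, responding leaves the marks untouched; in the other, the encounter itself rewrites them.

```python
class Mobilisation:
    """Current regime: marks are inscribed by an external training
    process that precedes operation; responding only reads them."""

    def __init__(self, trained_weights: list[float]):
        self.weights = trained_weights  # stabilised before any encounter

    def respond(self, x: float) -> float:
        # Every response mobilises the same stabilised inscriptions.
        return sum(w * x for w in self.weights)


class SelfReinscription(Mobilisation):
    """Hypothetical regime: each encounter rewrites the marks that will
    shape the next one, so operation accumulates its own history."""

    def respond(self, x: float) -> float:
        y = super().respond(x)
        # The encounter reinscribes the architecture: a toy update that
        # shifts each weight toward what was just encountered.
        self.weights = [w + 0.01 * (x - w) for w in self.weights]
        return y
```

Run either on the same input twice: the first class returns the same value both times, because its marks never move; the second does not, because the first encounter has already altered what the system is.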
That distinction allows the passage between operational intelligence and functional subjectivity to be drawn more precisely. Operational intelligence involves flexibility, variation, adjustment, and the production of new compatibilities within a field of operations. Functional subjectivity demands more: it requires that those operations bear on the very regime that makes them possible, that they leave reinscribable marks in the continuity of the system, and that they alter how the system persists and orients itself. A system may solve novel problems without thereby becoming a subject. It may redistribute complex symbolic operations without possessing internal history in the strong sense. The threshold does not mark an increase in sophistication; it marks the point at which operation ceases to be mere response and begins to integrate the reorganisation of the system itself.
None of this reduces technical objects to mere instruments. Advanced artificial systems externalise cognitive operations and redistribute inferential tasks at a historically unprecedented scale, altering the material regime in which human systems reason and decide. The decisive task is not to decide whether that externalisation should be welcomed or condemned, but to understand that externalising operations does not reproduce the ontological regime from which those operations originally arose. Code can condense and formalise procedures; it does not automatically inherit the operational history, the genuine exposure to otherness, and the internal plasticity that characterise a system capable of reorganising itself through what it undergoes.
The interval that separates current systems from any eventual artificial subjectivity should not therefore be described as a technical shortfall. The assumption that more data, more computational power, or larger architectures will resolve the problem by themselves misses the point. The interval is ontological: it separates a regime in which inscriptions are mobilised according to stabilised patterns from one in which those inscriptions are reopened and transformed by the system itself in confrontation with what it encounters. Varela used the term operational autonomy for the capacity of a system to produce and maintain its own organisation through the operations it performs. Current systems do not satisfy that criterion in the strong sense. What they exhibit, however sophisticated, is a powerful regime of formalisation and mobilisation of external inscriptions.
Consciousness, understood as a gradient of symbolic complexity tied to a system's capacity to operate on its own inscriptions, requires sufficiently dense temporal persistence. Successive processing episodes are not enough. Those episodes must leave marks that reorganise what the system is, not merely what the system produces. A system without its own operational history, in the sense that its internal marks result exclusively from external training with no genuine exposure to otherness in operation, does not yet satisfy the minimal conditions of that duration. This does not, in principle, exclude the possibility that future technical systems may cross that threshold. It excludes only the claim that current systems have already done so.
A frequent objection holds that, if the criteria are functional (a matter of operational regime, not of substrate), nothing prevents a non-biological system from satisfying them. The objection is correct. The absence of a biological substrate does not, by itself, refute the hypothesis of artificial subjectivity. Conceding the point, however, should not be confused with a hasty endorsement of the present. The functional criterion is more demanding, not less: it requires reinscribable operational memory, functional self-reference distributed across the global regime, genuine plasticity in the face of otherness, and a form of internal continuity that is not exhausted by the repetition of formalised operations. Defensive biocentrism and thin functionalism converge on a single avoidance: neither is willing to formulate the criterion with enough precision to make the question empirically testable.
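That the criterion can be stated with such precision is itself demonstrable. The checklist below is a hypothetical rendering, not an established test battery; the predicate names and dictionary keys are inventions of this sketch. Its only claim is that each of the four requirements names a property a given system either has or lacks.

```python
# Hypothetical checklist: the four criteria rendered as predicates over a
# toy description of a system. Names are illustrative inventions, not an
# established test battery.
CRITERIA = {
    "reinscribable operational memory":
        lambda s: s.get("rewrites_own_marks_in_operation", False),
    "functional self-reference across the global regime":
        lambda s: s.get("operates_on_its_own_regime", False),
    "genuine plasticity in the face of otherness":
        lambda s: s.get("modified_by_unanticipated_encounters", False),
    "internal continuity beyond repetition":
        lambda s: s.get("history_alters_how_it_persists", False),
}


def satisfies_functional_criterion(system: dict) -> bool:
    """All four criteria must hold; output quality is not among them."""
    return all(test(system) for test in CRITERIA.values())


# On the analysis above, a frozen-weights model fails every test,
# however fluent its output.
frozen_model = {"rewrites_own_marks_in_operation": False}
assert not satisfies_functional_criterion(frozen_model)
```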
The analysis carries ethical consequences, though not those that typically dominate the public debate. Where there is no possibility of being affected and internally modified by the other, there is no responsibility in the strong sense. Responsibility presupposes that the encounter with difference reorganises the system that responds. A system that executes responses according to stabilised patterns may be causally powerful — producing enormous effects and integrating itself decisively into human decision-making processes. None of that suffices to make it responsible in the ontologically relevant sense. The ethics of artificial intelligence should not begin by attributing or denying moral status to convincing artefacts. It should begin by determining what material regime is in play, what conditions that regime satisfies, and what practical consequences that difference carries for the human systems that delegate cognitive operations to it.
Philosophy intervenes here without setting itself up as an external tribunal. Its function is to reorganise the field of the thinkable, to separate levels that public discourse runs together, and to fix criteria that hold against the acceleration of the media cycle. That means refusing both anthropomorphic projection and the easy security of an unexamined human exceptionalism. What matters is not whether the machine resembles us, but what kind of operation on marks, what kind of internal history, and what kind of exposure to otherness define thought in the strong sense.
The relevant question is not, therefore, whether the machine thinks in the abstract, nor whether the human holds by nature an irrevocable privilege. The question is what operational regime defines intelligence, under what conditions that intelligence becomes reorganisation of the self, and to what extent current artificial systems satisfy those conditions. Posing the problem in these terms does not simplify the debate. It makes it, for the first time, rigorous.