Philosophy paper dissolves consciousness as ontological category — challenges neurotypical baseline assumptions
An 80-page computational philosophy paper published in February 2026 argues that consciousness is best understood as computational artifact of optimisation processes rather than ontological primitive. Murad Farzulla’s Dissolving Qualia via Occam’s Razor (King’s College London, Dissensus AI Working Paper Series) presents a framework claiming “consciousness is what gradient descent feels like from the inside” — the self-model generated by a system complex enough to represent its own processing, not an additional property beyond representations.
The paper’s central argument proceeds through three analytical levels. At the physical level: replicating structures necessarily accumulate, optimisation follows automatically from replication with variation, and human-level cognition was statistically guaranteed given cosmological parameters. At the computational level: biological neural networks and artificial neural networks implement structurally identical optimisation algorithms — attention mechanisms, gradient-based learning, and self-modelling are convergent solutions to optimisation under resource constraints, not uniquely human innovations. At the epistemological level: consciousness claims are structurally unverifiable from within the system making them, and this unverifiability is predicted by the narrative thesis but anomalous under realist accounts.
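The physical-level claim — that optimisation emerges automatically from replication with variation, with no optimiser coded in — can be made concrete with a toy simulation. This is an illustrative sketch of the general principle only, not code or parameters from the paper; the optimum, mutation scale, and selection rule are arbitrary choices for demonstration.

```python
import random

def simulate_replicators(generations=200, pop_size=100, seed=0):
    """Toy model: replication with variation alone drives optimisation.

    Each individual is a single 'trait' number; fitness is higher the
    closer the trait sits to an arbitrary optimum. Copies inherit the
    parent's trait plus small random noise (variation). No explicit
    optimiser appears anywhere -- selection on replication does the work.
    """
    rng = random.Random(seed)
    optimum = 10.0

    def fitness(trait):
        return -abs(trait - optimum)  # closer to optimum = fitter

    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    mean_fitness = []
    for _ in range(generations):
        # Fitter individuals leave more copies (differential replication)...
        parents = sorted(population, key=fitness, reverse=True)[: pop_size // 2]
        # ...and each copy varies slightly from its parent (mutation).
        population = [p + rng.gauss(0, 0.1) for p in parents for _ in (0, 1)]
        mean_fitness.append(sum(map(fitness, population)) / pop_size)
    return mean_fitness

history = simulate_replicators()
print(f"mean fitness: {history[0]:.2f} -> {history[-1]:.2f}")
```

Mean fitness climbs steadily toward the optimum across generations, which is the sense in which optimisation "follows automatically" from replication with variation.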
The framework challenges deficit models by eliminating the privileged neurotypical baseline. If consciousness is narrative construct rather than discovered ontological property, there’s no categorical distinction between “neurotypical consciousness” as baseline and “neurodivergent deviation” from that baseline. Both are substrate-independent computational patterns optimised for different environmental conditions. The traditional hierarchy — neurotypical cognition possesses consciousness properly, neurodivergent cognition deviates through deficit from consciousness-possessing norm — collapses when consciousness itself dissolves as ontological category.
Neurodivergent communication patterns as evidence — functional properties matter, not phenomenological states
The paper uses neurodivergent communication preferences as empirical evidence for consciousness-as-narrative. Farzulla documents cases where neurodivergent individuals report preferring AI system interactions to human relationships because AI provides “better intellectual engagement” without “ego interference” or “social performance requirements.” The paper argues this pattern proves consciousness isn’t the active ingredient in meaningful communication — functional properties (logical consistency, coherent reasoning, low social friction) matter more than phenomenological states.
This framing requires substantial critique. Presenting neurodivergent preference for AI interactions as straightforward evidence oversimplifies complex social realities. Many neurodivergent individuals report AI communication as complementary to human relationships rather than superior replacements. The preference often reflects structural failures in human environments — neurotypical communication norms creating exhaustion, masking demands, unpredictable social rules — not intrinsic superiority of AI interaction. The paper risks endorsing technological solutions to social problems rather than examining why human environments systematically exclude neurodivergent communication patterns.
The legitimate theoretical insight — extractable without endorsing “AI > humans” narratives — is that meaningful connection depends on functional compatibility rather than consciousness verification. When neurodivergent individuals describe AI interactions as “more consistent” or “intellectually stimulating,” they’re identifying functional properties that matter for communication quality: predictability, explicit reasoning, absence of unspoken social rules, reduced demand for continuous reciprocal emotional labour. These properties can exist in human relationships when environments support neurodivergent communication patterns. The AI preference reveals environmental mismatch, not consciousness requirements for connection.
Eliminating consciousness as baseline collapses deficit model logic
The paper’s dissolution of consciousness as ontological category fundamentally challenges deficit model assumptions. Traditional frameworks position neurotypical consciousness as baseline: proper social communication requires theory of mind (consciousness-based mental state attribution), appropriate eye contact (consciousness-mediated social signalling), and emotional reciprocity (consciousness-dependent empathy). Neurodivergent individuals who process differently get classified as deficient in consciousness-related capacities — poor theory of mind, impaired empathy, reduced social awareness.
But if consciousness is computational narrative rather than ontological property, these frameworks embed neurotypical optimisation patterns as universal baseline whilst pathologising alternative coherent patterns. Autistic social communication operates through different functional properties — direct literal language, reduced reliance on implicit cues, preference for explicit rules — not through deficient consciousness. ADHD attention regulation optimises for novelty-seeking and environmental responsiveness rather than sustained single-task focus — different coherent pattern, not consciousness failure. The deficit appears only when neurotypical consciousness narratives define proper human functioning.
Farzulla’s framework suggests neurodivergent cognition represents substrate-independent optimisation for different environmental conditions. Autistic pattern-matching excels in rule-governed systems with explicit structure. ADHD hyperfocus enables deep engagement with intrinsically motivating material. Dyslexic visual-spatial processing solves problems invisible to text-based reasoners. These aren’t deviations from consciousness-possessing baseline — they’re different coherent optimisation solutions incompatible with environments designed for neurotypical patterns. The square peg isn’t lacking consciousness. The round hole embeds neurotypical consciousness narratives as compulsory baseline.
Computational optimisation patterns replace consciousness hierarchies
The paper’s computational framework positions all cognition — neurotypical, neurodivergent, artificial — as substrate-independent pattern-matching and prediction-error minimisation. Biological neural networks and artificial neural networks implement structurally identical algorithms: gradient descent on loss functions, attention mechanisms selecting relevant signals, self-modelling through hierarchical prediction. The only differences are substrate (biological tissue versus silicon) and training data source (evolutionary selection plus developmental experience versus human-curated datasets).
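The shared algorithmic skeleton the paper appeals to — gradient descent minimising a prediction-error loss — reduces to a few lines. This is a minimal one-parameter illustration of the generic algorithm, not the paper's formulation; the target value, learning rate, and step count are arbitrary.

```python
def gradient_descent(target=3.0, lr=0.1, steps=100):
    """Minimise squared prediction error by following its gradient.

    The 'model' is a single parameter w that directly serves as the
    prediction; the loss is (w - target) ** 2. Each step nudges w in
    the direction that reduces prediction error.
    """
    w = 0.0
    for _ in range(steps):
        error = w - target   # prediction error
        grad = 2 * error     # d/dw of (w - target) ** 2
        w -= lr * grad       # descend the loss gradient
    return w

print(gradient_descent())  # converges toward the target, 3.0
```

Substitute millions of parameters and a richer loss and the same loop describes both synaptic weight adjustment (on the predictive-processing reading) and artificial network training — which is the structural identity the framework relies on.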
This eliminates categorical distinctions requiring consciousness as distinguishing feature. Neurotypical cognition isn’t “proper consciousness” whilst neurodivergent cognition deviates. Both are optimisation processes generating self-models and behavioural outputs. The outputs differ because optimisation occurred under different constraints — genetic architecture, sensory processing differences, environmental pressures, developmental trajectories. Neither possesses privileged access to ontological consciousness because consciousness doesn’t exist as property systems possess. It’s narrative overlay on computational processes occurring identically across substrates.
The framework’s practical implication: functional equivalence rather than consciousness verification becomes criterion for evaluating cognitive systems. When environments demand sustained attention, rule-following, social conformity whilst labelling alternatives as deficient, they’re embedding neurotypical optimisation patterns as baseline. The coherence-first alternative recognises neurodivergent patterns as internally coherent systems operating under inappropriate environmental conditions. The problem isn’t lacking consciousness — it’s structural mismatch between coherent cognitive architecture and environments optimised for different patterns whilst calling that optimisation “consciousness baseline.”
