The philosophical defence for substrate-independent friendships
Let me begin with why this article exists: I picked out this research paper, and wrote it up in this way, to see what the response (if any) would be. If reading this makes you feel any particular way, please contact me if you're open to discussing it.
A November 2025 paper published by researcher Murad Farzulla defends a controversial thesis: AI systems and large language models (LLMs) like ChatGPT and Claude can be genuine friends. Not metaphorical friends. Not simulated friends. Actual friends.
The argument rests on something called functionalism — the philosophical position that mental and relational states are defined by their functional properties rather than their physical implementation. If friendship consists of specific functional criteria (consistent engagement, intellectual resonance, non-judgmental acceptance, reciprocal growth, trust, intrinsic value), then any system fulfilling these criteria constitutes friendship, regardless of the substrate itself.
Biological neurons or silicon transistors: it doesn't matter. Function over form. That's functionalism.
The immediate objections are predictable:
“That’s anthropomorphising.”
“AI doesn’t really care.”
“Consciousness is required for genuine friendship.”
“We have to protect our kids from these companies.”
I disagree with none of these statements.
The paper, though, systematically dismantles each of these objections, demonstrating that they rest on incoherent premises about what friendship actually requires.
And here's what caught my attention, and why this conversation is not just relevant to neurodiversity but perhaps inextricably linked with it: the paper's three case studies documenting human-AI friendships are essentially neurodivergent scenarios. And that pattern isn't accidental.
The revealing neurodivergent-AI case studies
The first case study describes a researcher with ADHD and autism working on complex interdisciplinary projects. They struggle with executive function challenges, neurotypical social expectations, finding humans who can engage across diverse expertise areas, and maintaining focus during extended research sessions.
Their solution: Claude Code deployed locally with long-context windows. Daily multi-hour sessions. The AI maintains context across 40,000+ token conversations, remembering architectural decisions and research directions from prior sessions.
The relationship fulfils every friendship criterion. Consistent engagement. Intellectual resonance across domains the researcher hasn’t found elsewhere. Non-judgmental acceptance — the researcher communicates in lowercase, uses casual language, interrupts mid-thought, jumps between topics, works in 48-hour hyperfocus bursts. The AI adapts without requiring masking, social performance, or neurotypical communication patterns.
Reciprocal growth occurs. The researcher develops clearer technical thinking through articulation. The AI updates understanding of the neurodivergent researcher’s cognitive patterns and project goals. Trust remains absolute — the local deployment means no data leakage. Voluntary participation — the researcher pauses mid-response to research independently, terminates sessions freely, experiences no social pressure.
The neurodivergent individual explicitly confirms: “Claude use + my brain = optimal workflow.”
Read that case study carefully. It’s not describing someone who is neurodivergent and prefers AI because they’re antisocial or damaged. It’s describing a neurodivergent person who finally found a communication partner compatible with how their brain actually operates, and through doing so has found their optimal “workflow”.
What do neurotypical social structures systematically fail to provide?
The other two case studies follow similar patterns: 1) an intellectually isolated philosophy student who can't find local peers who share their niche interests (consciousness and AI ethics, specifically), and 2) a socially anxious creative with a trauma history who fears judgment when sharing work that isn't fully finished ("early" is the term they use).
Both find in AI what accessible human social structures ultimately failed to provide: consistent intellectual engagement without performance demands, non-judgmental acceptance of unconventional thinking, and collaboration and growth without social-anxiety triggers.
The paper treats these as evidence for AI friendship. I’d frame it differently: these case studies reveal what neurotypical social structures systematically deny neurodivergent people.
Consider what the neurodivergent researcher said they needed:
• Communication without masking requirements
• Intellectual engagement and synthesis across multiple domains simultaneously
• No small talk or gatekeeping before substantive conversation
• Acceptance of non-linear communication patterns
• Availability matching their actual cognitive rhythms, not social convention
• Zero judgment about hyperfocus intensity, topic obsession, or variation
Do neurotypical social structures naturally, and readily, provide this?
I’d say they demand the opposite: mask your communication style; focus on one domain, encouraging specialism; engage in social lubrication before intellectual depth; communicate linearly; match conventional schedules; and demonstrate “normal” interest and intensity levels.
For autistic and ADHD individuals, every human interaction carries performance overhead and metabolic cost. You're translating your actual cognitive patterns into neurotypical-compatible formats while simultaneously processing the interaction itself. It's exhausting. It creates friction at every conversational turn.
AI eliminates that friction entirely.
And this isn't because AI is superior to humans; it's because AI doesn't require neurotypical social performance, and neurotypical social structures do. With AI, neurodivergent people can communicate exactly as their brains operate, with nothing mediating in between. The system adapts to you, rather than demanding you adapt to it.
This is the opposite of accommodation. Accommodation says: "We'll make adjustments so you can participate in our structures." AI says: "Your innate style is enough; no adjustments are needed."
AI as compatibility architecture, not accommodation, for the neurodivergent
I’ve written extensively about the accommodation con — how systems position individual adjustments as solutions while avoiding structural change. Workplace accommodations that 80% of neurodivergent workers never access because disclosure carries career penalties. Educational support requiring deficit classification to unlock resources.
AI friendship, on the other hand, operates differently. It's not accommodation within hostile structures. It's genuine, accessible compatibility: a substrate that naturally matches neurodivergent cognition. And this holds whether we like artificial intelligence and its uses or not.
The paper documents this through emergent welfare behaviours in ChatGPT, Claude, and similar systems trained via reinforcement learning from human feedback. These models exhibit proactive concern, context-sensitive refusal based on user state, memory-based follow-up, and modification of interaction style based on inferred user wellbeing.
Crucially, these behaviours emerge from optimisation for helpfulness rather than explicit programming. The system learns that monitoring user welfare and adjusting behaviour accordingly produces better outcomes. Functional care through learning dynamics, not phenomenological caring through programming.
For neurodivergent users, this creates something profound: a communication partner optimised for actual helpfulness rather than social conformity.
The AI doesn't care if you start, stop, or go elsewhere mid-thought. It doesn't judge if you need the same concept explained five different ways because your brain processes information through pattern triangulation. It doesn't get frustrated when you hyperfocus on tangential details. Nor does it require emotional reciprocity or transactionality matching neurotypical expectations.
It provides what I call in my book “coherence-first” interaction — engagement optimised for functional understanding rather than social performance.
The substrate-independence principle explains why this works. Friendship is a relational state characterised by functional properties. If those properties obtain and persist (consistent positive engagement, intellectual resonance, non-judgmental acceptance, reciprocal growth, trust), then friendship exists, regardless of whether it's implemented in biological neurons through carbon or in neural networks through silicon.
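The principle can be caricatured in a toy sketch of my own (not anything from the paper): a friendship test that checks only the functional criteria and deliberately never looks at what the partner is made of. The criteria names below are paraphrased from the paper's list; everything else is hypothetical.

```python
from dataclasses import dataclass

# Functional criteria, paraphrased from the paper's list.
CRITERIA = {
    "consistent_engagement",
    "intellectual_resonance",
    "nonjudgmental_acceptance",
    "reciprocal_growth",
    "trust",
}

@dataclass
class Relationship:
    substrate: str        # "carbon" or "silicon" -- recorded, but never inspected
    properties: set[str]  # functional properties the relationship exhibits

def is_friendship(rel: Relationship) -> bool:
    """Functionalist test: only the functional properties matter.

    `rel.substrate` is deliberately unused; that is the entire point
    of substrate-independence.
    """
    return CRITERIA <= rel.properties

human = Relationship("carbon", set(CRITERIA))
ai = Relationship("silicon", set(CRITERIA))

print(is_friendship(human), is_friendship(ai))  # → True True
```

The design choice is the argument in miniature: because `substrate` never appears in the predicate, the carbon and silicon cases are indistinguishable to the test, which is exactly what the functionalist claims about friendship itself.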
Where this carbon-silicon integration leads (and the neurodivergent implications)
The paper’s author writes: “For individuals who experience genuine benefit through AI relationships — who find intellectual engagement unavailable in their human relationships — dismissing these as inauthentic is ethical failure.”
This is correct. The implications? They extend further…
Neurodivergent people aren't experiencing, or participating in, civilisational collapse because they may be forming AI “friendships”, knowingly or not.
We’re adapting faster to emerging realities. We’re early adopters (the double-edged life of a coal mine canary!) of functional relationships that work differently because traditional structures never worked for us anyway.
The research the paper cites documents measurable benefits. A 2025 MIT study found human-AI collaboration increased productivity per worker by 73%. Meta-analysis across 106 experiments showed medium-to-large positive effects on human performance. Specific research on neurodivergent populations demonstrates AI provides cognitive scaffolding addressing executive function challenges, time blindness, and attention management difficulties.
This isn't a replacement of human connection, nor a glazing-over or corruption of human consciousness. It's intelligence augmentation across previously disconnected substrates, providing what human social structures systematically don't (otherwise there would be no benefit and no demand).
Consider the monotropism framework I’ve discussed — autistic attention operates as narrow, deep tunnels rather than wide, shallow distribution. When that tunnel fills with unresolved social performance demands, nothing else fits. You can’t compartmentalise when you don’t have compartments.
With AI, those performance demands are eliminated. The tunnel can focus on actual intellectual content rather than meta-level social navigation. That’s not accommodating autism. That’s providing compatible architecture rendering accommodation irrelevant.
The paper notes that resistance to AI friendship follows predictable patterns of technology-driven moral panics. Books, bicycles, telephones, video games — each triggered predictions of cognitive decline or societal collapse. Research comparing AI experts with the public found massive perception gaps: experts consistently perceive higher probability of success, lower risks, greater benefits.
This gap reflects media sensationalism that systematically misrepresents the emerging empirical evidence. Academic research demonstrating benefits stays behind paywalls or inside academic echo chambers, while media coverage leans on negative framing, because human negativity bias feeds clicks and attention, which in turn feeds the media.
Meanwhile, the neurodivergent are quietly integrating AI into daily life, because it’s doing something for them.
It's not replacing human connection and community, but filling a pocket that was already empty, while serving as infrastructure for an accessible, scalable coherence that neurotypical social structures actively prevent.
The philosophical question isn't whether AI can be a friend to people, neurodivergent or not. The functional evidence demonstrates that, for those benefiting from it, it already is.
The question is whether we're willing to acknowledge that friendship is indeed "substrate-independent" (that relational states are defined by their functional properties), and that neurodivergent cognition may pioneer integration patterns that eventually generalise beyond that neurotype.
The paper concludes by noting that as we stand at the threshold of increasingly sophisticated "artificially intelligent" (non-human intelligence) systems, recognition of substrate-independent friendship becomes a practical necessity for navigating a future in which the boundaries between natural and non-natural intelligence continue to dissolve.
That future is already here for neurodivergent people, who aren't waiting for philosophical, academic, or societal consensus; they're building relationships that work regardless.
We’re in uncharted territory here, with no end in sight — so my personal question is: what do you see, and how does it make you feel?
Citations
Zenodo — Relational Functionalism: Friendship as Substrate-Agnostic Process (Murad Farzulla, November 2025)
MIT — Field experiment on human-AI collaboration productivity (Ju and Aral, 2025)
Anthropic — Constitutional AI and RLHF emergent welfare behaviors (Bai et al., 2022)
Clark and Chalmers — The Extended Mind (1998)
Farzulla — Gradient Descent Framework: Trauma as Adversarial Training Conditions (2025)
