Therapy in a Digital World: Reflections on Online Practice, AI, and Neurodivergent Clinicians

With thanks to our member, Dr Tanya Banfield, for this blog.
Beyond the Consulting Room
The past decade has brought extraordinary changes to how we practise as therapists. Since the pandemic especially, the walls of the traditional consulting room have become wonderfully porous, and our therapeutic landscape has stretched into new territory. As a Chartered Psychologist working with neurodivergent children, young people, and adults, I've watched this transformation unfold with both excitement and a healthy dose of caution.
Digital practice isn't some optional extra anymore. It's woven through everything we do: assessments, interventions, supervision, how we see ourselves professionally, and increasingly, how our clients experience their own inner worlds. Video therapy has become routine, and now we're grappling with something altogether more complex: artificial intelligence. AI-driven chatbots and clinical support tools are appearing everywhere, and for those of us who are neurodivergent practitioners, these developments feel both promising and potentially overwhelming.
What follows are my reflections on where we find ourselves now, standing at this fascinating intersection of digital life, AI, therapeutic work, and the wellbeing of everyone involved. I'm drawing on my clinical experience, the research that's emerging, and a neurodiversity-affirming lens that I hope resonates with many of you.
When Digital Life Helps (and When It Doesn't)
There's no question that moving therapy online has opened doors that were previously closed. For many of my clients, particularly autistic people, those with ADHD, social anxiety, chronic illness, or mobility challenges, online therapy has been genuinely liberating. The sensory overload of travelling to appointments disappears. The anticipatory stress melts away. Being able to connect from somewhere familiar, somewhere safe, makes all the difference to engagement and consistency.
But it's not without its complications. The boundaries that once felt clear have become fuzzy. Clients sometimes struggle to separate therapeutic space from the rest of their digital lives—the social media scrolling, the gaming, the endless notifications. And honestly? We therapists aren't immune either. There's a peculiar erosion of containment that happens when emails and platform notifications creep into what used to be protected, reflective time.
What I've learnt is that online work demands more from us, not less. We need to be intentional about pacing, explicit about boundaries, and genuinely conscious about how we transition into and out of our therapeutic roles. Without that awareness, both we and our clients risk ending up emotionally drained and cognitively overloaded.
Getting the Best from Digital Tools
When used thoughtfully, digital tools can genuinely enhance what we do. In my own practice, this looks like secure platforms for sessions and record-keeping, visual supports and shared documents for neurodivergent clients, psychoeducational resources that people can revisit whenever they need to, and occasionally, asynchronous check-ins when it's clinically appropriate.
For neurodivergent therapists, there's an added dimension here. Digital tools can support our executive functioning in really helpful ways—structured scheduling systems, transcription tools for note-taking, or AI-assisted drafting of reports and resource summaries that don't require clinical judgement.
The key, though, is that these tools should support our clinical thinking, never replace it. Digital efficiency is wonderful, but not if it comes at the expense of relational depth, ethical reflection, or the kind of individualised formulation that makes our work meaningful.
Looking After Ourselves Online
Here's something we don't talk about enough: therapist wellbeing isn't a luxury. It's an ethical necessity. Online work brings its own particular challenges—screen fatigue, reduced awareness of our own bodies, the difficulty of truly switching off. For neurodivergent clinicians, these risks can feel magnified. Many of us are already expending considerable energy on masking, sensory regulation, and managing the demands of our work.
Protecting wellbeing online has become a deliberate practice for me. It means setting clear working hours and digital boundaries, using separate devices or profiles for professional work, taking regular screen breaks and using embodied grounding practices, ensuring supervision explicitly addresses digital fatigue, and consciously limiting exposure to distressing online content.
Perhaps most importantly, we need to give ourselves permission to not be constantly available. Ethical practice requires sustainability. We can't pour from an empty cup, as the saying goes, and we certainly can't maintain the presence our clients need if we're perpetually exhausted.
Supporting Young People in Digital Spaces
Young people don't experience life as neatly divided into "online" and "offline" categories. Their friendships, identities, learning, and vulnerabilities are completely entwined with digital spaces. This is their reality, and our therapeutic work needs to meet them where they are.
Rather than promoting unrealistic abstinence from technology, I've found it more helpful to focus on digital literacy and emotional regulation. This means exploring together how online interactions affect mood, self-esteem, and sleep patterns. It means supporting healthy boundaries around gaming, social media, and screen use without being preachy about it. It means addressing genuine risks like exploitation, cyberbullying, and misinformation, whilst helping parents move from surveillance to supported guidance.
For neurodivergent young people particularly, online spaces can offer something precious: belonging and validation. But these same spaces can increase exposure to manipulation or unmoderated content. Our role is to empower young people as they navigate these realities, not to shame them.
The AI Question: Opportunities and Dilemmas
AI is already here, shaping our practice in ways that aren't always visible. Scheduling software, automated transcription, decision-support tools, and increasingly, conversational chatbots—all of these are becoming embedded in mental health ecosystems whether we've actively chosen them or not.
Used ethically, AI offers genuine benefits. It can reduce the administrative burden that so many of us find draining. It can improve accessibility, particularly through text-based supports. For neurodivergent therapists, it can help with organisation and clarity. And it can offer psychoeducational scaffolding between sessions. Some clients find AI tools provide a non-judgemental space to rehearse language, reflect, or regulate emotions.
But—and this is a significant but—AI raises serious ethical questions that we cannot afford to ignore. Data privacy and confidentiality remain genuine concerns. Algorithms can embed bias in ways that are difficult to detect. There's a real risk of over-reliance on something that mimics empathy without genuinely possessing it. And perhaps most troublingly, there's the danger that AI might be perceived, or even marketed, as a replacement for relational therapy.
AI lacks moral reasoning. It has no lived experience. It cannot be held accountable in any meaningful way. It cannot hold risk, offer safeguarding, or engage in the kind of ethical reflexivity that underpins good practice. We must remain vigilant against the illusion of care that AI can sometimes produce.
What About AI Chatbots?
AI chatbots are developing rapidly, and their use among people seeking mental health support is rising. Young people especially are turning to them. My professional stance? Measured caution.
These chatbots may offer immediate, low-level support. They might help people articulate their thoughts. They could reduce some of the barriers to seeking help. These aren't trivial benefits, and I don't dismiss them.
But they must never be positioned as therapy. There's a real danger here, particularly for people who are lonely, neurodivergent, or marginalised: they may form attachments to systems that fundamentally cannot reciprocate responsibility or safeguard wellbeing. That's not theoretical; it's already happening.
The future of AI in counselling needs to be guided by robust ethical frameworks, proper professional regulation, complete transparency about limitations, and ongoing research into psychological impact. Human relationships remain absolutely central to therapeutic change. AI may assist the journey, but it cannot and should not replace the relational core of what we do.
Holding the Balance
Digital life and AI are neither inherently harmful nor inherently beneficial. Their impact depends entirely on how thoughtfully, ethically, and relationally we integrate them into our practice.
As clinicians, we're called to hold a balance. We need to embrace innovation whilst protecting what makes us human. We need to use tools without surrendering our judgement. And we need to support both our clients and ourselves to remain grounded in a world that's evolving at dizzying speed.
For neurodivergent therapists, this balance requires particular attention. When used well, digital tools and AI can genuinely enhance accessibility, sustainability, and inclusion in our practice. When used uncritically, they risk amplifying overwhelm and causing ethical harm.
Our task isn't to resist the digital future. That would be neither possible nor desirable. Instead, our task is to shape it with care, integrity, and compassion. That feels like work worth doing.