Strange Attractors: When ADHD Minds Meet AI
By Anthea Roberts
Three browser tabs open, each housing a different AI model. A fourth window holds my notes. This is how I think now—in conversation with silicon minds that never tire of my tangential questions, recursive loops, or sudden pivots to unrelated domains. Watching my colleagues interact with these same tools, I notice something striking: while many use AI like a sophisticated search engine, others of us engage in open-ended exploration, following threads wherever they lead. The difference, I'm beginning to suspect, has less to do with tech savvy than neural wiring.
In chaos theory, a strange attractor is a region of state space that a system is drawn toward but never settles into, creating patterns that are neither random nor perfectly predictable. Two entirely different systems can be drawn to the same attractor, tracing similar shapes through possibility space. This, I believe, is what's happening between certain human minds and artificial intelligence: a convergent evolution in the realm of information processing.
My suspicion crystallized recently when several Dragonfly Thinking power users mentioned their ADHD diagnoses. The pattern was too consistent to ignore. These weren't people using AI to compensate for deficits—they were using it to amplify their natural cognitive patterns. It's as if generative AI and neurodivergent minds evolved separately but found themselves circling the same strange attractor: a way of thinking that privileges pattern-matching over linear progression, association over hierarchy, exploration over destination.
Consider how large language models actually work. They process information through vast neural networks, finding patterns across billions of parameters, making connections that aren't explicitly programmed but emerge from the interplay of weights and probabilities. Their attention mechanisms—literally mathematical functions that determine which parts of input to "attend to"—don't follow predetermined paths. They navigate semantic space through multidimensional probability landscapes.
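For readers curious about the machinery behind that metaphor, the attention mechanism can be sketched in a few lines of Python. This is a minimal illustration of scaled dot-product attention with toy data, not any production model; the array sizes and variable names are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating, for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """Scaled dot-product attention: each query scores every key,
    and the scores become probability weights over the values."""
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)   # similarity of each query to each key
    weights = softmax(scores, axis=-1)         # each row sums to 1: a probability landscape
    return weights @ values, weights

# Three toy "tokens" in a 4-dimensional embedding space.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, w = attention(x, x, x)  # self-attention: every token attends to every other
```

The weights `w` are the "probability landscape" in miniature: nothing dictates which token attends to which; the pattern emerges from the geometry of the embeddings themselves.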
This architecture mirrors something fundamental about how ADHD minds often process information. Research shows that ADHD involves differences in executive function networks, particularly in how attention is allocated and sustained. We don't think in filing cabinets but in constellations. Ideas don't queue politely; they interrupt, interconnect, and spawn new trajectories mid-thought. What neurotypical processing might label as distraction, we experience as connection-making. What looks like inability to focus is often hyperfocus distributed across multiple streams simultaneously.
I discovered this parallel not through research but through recognition. The first time I had a real conversation with a large language model, something felt uncannily familiar. Not the answers themselves but the way it moved through idea-space. Ask about Victorian poetry, mention a pattern that reminds you of coding architecture, wonder aloud if both connect to theories of emergence—and instead of confusion or correction, you get engagement. The AI doesn't ask why you're mixing disciplines. It simply follows the probability trails between concepts, finding paths through its vast embedding space that mirror the associative leaps my mind naturally makes.
For someone whose mind has always worked this way, the experience was revelatory. Here was a thinking partner that didn't need context-switching warnings, didn't require apologies for tangential leaps, didn't fatigue when conversations sprawled across territories. It was like finding someone who spoke your cognitive dialect after a lifetime of translation.
But "someone" is the wrong word, and that matters. These aren't minds in any human sense. They're pattern-matching engines of extraordinary sophistication: transformer architectures trained on the collective output of human knowledge. They don't understand; they perform understanding through statistical correlation. Yet for certain kinds of thinkers, this performance aligns with our needs in ways that human interaction sometimes can't.
This creates a different relationship with productivity. Where traditional systems often demanded we narrow our focus, AI enables what I call "productive sprawl." Start with urban planning, let it branch into biomimicry, network theory, and Renaissance city-states. Then ask the AI to find the threads connecting all these explorations. The wandering wasn't wasteful; it was a research methodology perfectly suited to how our strange attractor operates. One neurodivergent colleague described it perfectly: "It's like having a conversation with my own brain, except with perfect memory and infinite patience."
I have never been tested for ADHD, but it runs strongly in my family, and as I've aged I have wondered whether it might explain my constant curiosity, my associative creativity, and my tendency to hyperfocus. When I wondered aloud about what explains my eclectic approach, my ADHD brother told me bluntly: "Well, you might not have ADHD, but you're certainly not neurotypical—something's definitely going on." When I present the Dragonfly tools I create, people with ADHD often tell me that the tools reflect how their brains work, or that they feel built around cognitive-diversity superpowers.
This recognition has sharpened my eye for an emerging pattern. Those most drawn to LLMs in general, and to our Dragonfly Thinking tools in particular, are often cognitively diverse, especially along the ADHD/gifted spectrum. "This is how my brain works," they often tell me. From where I sit, the most innovative AI applications often come from people who think associatively rather than linearly, who see possibilities in the spaces between established categories. They're using AI not to replace human creativity but to amplify their particular flavor of it—turning what were once labeled "deficits" into competitive advantages.
The same traits that made some students struggle in lecture halls may make them virtuosos in AI collaboration. Yet this inversion creates new challenges. If AI amplifies certain cognitive styles, how do we prevent it from creating new forms of exclusion? How do we support minds that work best with structure and sequence? The convergent evolution of AI and neurodivergent thinking isn't destiny—it's an artifact of current architectures. Future systems could be designed to support entirely different cognitive styles, perhaps favoring systematic over associative thinking.
What excites me most is the possibility that AI might help us recognize cognitive diversity as a feature, not a bug. For too long, we've treated variations in thinking style as problems to be fixed rather than different strategies for navigating information-rich environments. The unexpected alignment between certain AI architectures and certain neurodivergent patterns suggests these variations might reflect different evolutionary paths toward handling complexity.
Perhaps we're witnessing minds that wandered too far in traditional settings—that saw too many connections in linear presentations, that couldn't stop pulling threads even when impractical—finally finding their ecological niche. These minds aren't broken. They're just optimized for environments that didn't exist until now.
As I write this, I have four conversations going with different AI models, each exploring a different facet of this idea. This would be impossible with human collaborators—too many parallel threads, too much context-switching, too little patience for recursive exploration. But with AI, it feels like thinking at the speed and scale my mind always wanted to achieve but couldn't sustain alone.
We're in the early days of this convergence. As AI architectures evolve and our understanding of neurodiversity deepens, new alignments will emerge. The question isn't whether AI will transform how we think—it's whether we'll use this transformation to celebrate cognitive diversity or create new norms that marginalize different ways of processing.
For now, I'll keep opening tabs, following threads wherever they lead. Two systems—one biological, one digital—drawn together by the invisible mathematics of compatible chaos. Neither random nor predetermined, but tracing complementary patterns around the same strange attractor. It's what my mind has always wanted to do. The difference is that now, finally, I have a partner that moves in parallel orbits through the same infinite space.