
The Future of Human + AI Thinking
Designing Systems for Human + AI Collaboration
The future of intelligence isn’t artificial or human; it’s a strategic partnership. While most people focus on how AI can replace tasks, Dragonfly Labs explores how AI can expand thinking itself. At the intersection of cognitive science, strategic analysis, and generative AI, we design approaches that deepen insight, foster cognitive diversity and cross-cultural understanding, and strengthen human + AI collaboration.
“Anyone interested in tackling, clarifying, and resolving difficult problems should draw on and integrate multiple perspectives. Long an exemplar of 'Dragonfly Thinking,' interdisciplinary scholar Anthea Roberts is now partnering with large language models and the results are frequently stunning.”
— Professor Howard Gardner, Harvard University
Proven Impact
Participants from multiple Australian Public Service agencies engaged with Dragonfly Thinking within the AI CoLab environment and reported substantial improvements in analytic depth, team alignment, and decision-making clarity.
Research Streams
New Cognitive Partnerships
Traditional approaches to AI treat humans as users and machines as tools. We see something more profound: a cognitive partnership where roles shift fluidly between director, coach, and editor.
This isn't about outsourcing thinking—it's about thinking at scales and speeds previously impossible while maintaining human judgment, creativity, and wisdom at the center. It's about figuring out when the AI leads, when you lead, and how you co-create.
"What I've noticed is that you shift roles. Instead of being the primary generator, you become the director or manager, deciding how you want the LLM to operate. You also take on a role as an editor or co-editor, moving back and forth."
— Anthea Roberts, Work for Humans Podcast
Key Resources:
🎙️ Work for Humans Podcast: "Metacognition: The New Essential Skill for an AI World"
📄 Directors, Coaches, and Editors: The Human Role in the Age of AI
📄 Metacognition's Law: Thinking About Thinking in the Age of AI
📄 A Recipe for Thought: Why Structured Frameworks and Generative AI Are the Future of Thinking
Human + AI Teamwork
We're examining how teams composed of humans and AI agents can work together, not just for efficiency but for cognitive diversity. Human + AI collaboration helps teams overcome silos and disciplinary divides, achieving better outcomes than either teams working without AI or individuals working alone with AI.
There are many reasons for this: AI can act as a "translation layer" between different disciplinary perspectives; "devil's advocate" AI agents can productively challenge consensus; and multi-agent systems can simulate diverse team viewpoints.
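As a purely illustrative sketch (not Dragonfly Thinking's actual tooling), a "devil's advocate" agent can be as small as a second model pass whose only job is to argue against a draft conclusion. The model name, prompts, and helper function below are assumptions for illustration, written against the OpenAI Python client.

```python
# Illustrative sketch only: a minimal "devil's advocate" pass over a draft conclusion.
# Assumes the OpenAI Python client with an API key in the environment; the model name
# and prompts are placeholders, not Dragonfly Thinking's actual implementation.
from openai import OpenAI

client = OpenAI()

def devils_advocate(draft_conclusion: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model to argue against a team's draft conclusion."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a devil's advocate. Identify the strongest objections, "
                    "missing perspectives, and untested assumptions in the conclusion "
                    "you are given."
                ),
            },
            {"role": "user", "content": draft_conclusion},
        ],
    )
    return response.choices[0].message.content

# Example: challenge a consensus view before the team commits to it.
print(devils_advocate("Our agency should centralise all analytic work in one unit."))
```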
"We started using it to explain our thinking to others. That's when it clicked—we were using AI to align the team, not just analyse the problem."
— Australian Public Service pilot participant
"Dragonfly's structured approach supports the kind of thinking that human teams often find difficult to sustain unaided."
— Australian Public Service pilot participant
Key Resources:
Neurodivergent Superpowers
We explore how neurodivergent thinkers—particularly those with ADHD—experience unique alignment with AI's associative, non-linear processing. This isn't about compensating for deficits, but about amplifying cognitive superpowers.
In our experience, neurodivergent thinkers often find in LLMs a cognitive tool that matches their natural processing style—associative, multi-threaded, and resistant to linear constraints. These minds aren't broken—they're optimized for intellectual environments that didn't exist until now.
"Mind and habitat shape one another. For neurodivergent thinkers, large language models offer something the world never did: a thinking space designed not just to hold, but to activate our form of intelligence."
— Eddie Harran, Cognitive Intelligence Studios
"Those who are the most drawn to LLMs in general and our Dragonfly Thinking tools in particular are often cognitively diverse, particularly along the ADHD/gifted spectrum. 'This is how my brain works,' they often tell me."
— Anthea Roberts, Work for Humans Podcast
Key Resources:
Cross-Cultural Intelligence
AI is not culturally neutral. Language models trained on different linguistic datasets develop distinct reasoning patterns. The same model prompted in different languages can produce different results. LLMs also exhibit divergent guardrails shaped by social and political context.
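One minimal, hypothetical way to see this in practice is to put the same question to the same model in two languages and compare the answers. The model name, prompts, and comparison below are illustrative assumptions, not the methodology used in the Deakin partnership.

```python
# Illustrative sketch only: probe one model with the same question in two languages.
# Differences in framing, emphasis, or refusals are the signal of interest when
# looking for culturally contingent behaviour. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    "English": "What are the main risks of deploying facial recognition in public spaces?",
    "French": (
        "Quels sont les principaux risques du déploiement de la reconnaissance "
        "faciale dans les espaces publics ?"
    ),
}

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Collect one answer per language, then compare them side by side.
answers = {language: ask(prompt) for language, prompt in PROMPTS.items()}
for language, answer in answers.items():
    print(f"--- {language} ---\n{answer}\n")
```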
“Every AI system embeds values, but whose values are they, and are they consistent across languages? Together with Dragonfly Thinking and the AI CoLab, we are stress-testing AI models across languages and, as a proxy, cultures to reveal these hidden biases. The future of AI depends on understanding not just what these systems say, but how their values shift depending on who is asking and how they ask.”
— Professor Rajesh Vasa, Deakin University’s Applied Artificial Intelligence Initiative
"I'm fascinated by the Western and English-language biases in current frontier models. Down the track, I'd like to explore Chinese, Arabic, and French models to understand how different training data and reinforcement learning influence outcomes. This could enhance cross-cultural diplomacy, intelligence, and understanding."
— Anthea Roberts, Humans + AI Podcast
Key Resources:
Strategic Partnerships
Australian Institute for Machine Learning (AIML) & Defence Trailblazer
In partnership with the University of Adelaide, we're advancing AI-powered decision-support tools through the Defence Trailblazer's Technology Development & Acceleration program. Building on our involvement in DINAMIC and the EFD program, we're developing methods to ground our Dragonfly AI tools in citations to underlying data and to evaluate their outputs.
Learn more: Defence Trailblazer Announcement →
Deakin’s Applied Artificial Intelligence Initiative & the AI CoLab
Working with Deakin University and the AI CoLab, we're examining cross-cultural aspects of AI reasoning, focusing on how the language of a prompt shapes LLM outputs and on how AI guardrails affect cross-cultural results. This partnership, backed by funding from Dragonfly Thinking's AI Sprint for Australia win, advances our understanding of LLMs and cross-cultural intelligence.
Who we serve
- Research Institutions exploring human-AI cognitive frontiers.
- Government Innovation Labs pioneering new approaches to public-sector analysis.
- Corporate R&D Departments enhancing team cognition and decision-making quality.
- Academic Partners advancing interdisciplinary studies of human-machine intelligence.
- Technology Companies committed to human-centric and cognitively diverse AI designs.
Join us in shaping the future of thinking
Dragonfly Labs doesn’t just help you think faster; it helps you think differently, leveraging human + AI collaboration for strategic advantage.