Moving from Scenarios to Strategy: Introducing the “Escher Prompt.”

By Dr Huw McKay

For decades, an elite group of large organizations have embraced scenario thinking to enhance traditional forecasting and strategic planning. Leading multinational companies like Shell, Cisco, and BHP, and strategically minded states like the Government of Singapore, have pioneered its use to navigate deep uncertainty. Unsurprisingly, a cottage industry of consultants has sprung up to support other organizations hoping to emulate their success.

Yet despite its popularity with Boards and the enthusiasm with which these projects are often embraced, many companies new to the technique discover that crafting a set of elegant, diverse scenarios is the easy part. The greater challenge lies in translating divergent pathways and long-dated future end-states into actionable strategic choices today. And arriving manually at the Holy Grail of “no-regret” actions that are robust across the breadth of a diverse set is as rare as the Grail analogy implies. Too often, faced with this roadblock, teams default back to what they know: traditional, extrapolative forecasting. The elegant scenario set then starts to collect virtual dust on its shiny new SharePoint site.

There are several causes for this failure, some on the capability side and some on the behavioural side. Strategy professionals and senior leaders often lack the capability or confidence to reason across multiple futures simultaneously. Time constraints, over-simplified executive communications (for example, the ever-shrinking limit on the maximum length of board memos), cognitive biases and reliance on consultants can all compound the issue.

Large language models (LLMs) and AI tools built upon them should be able to cut through these constraints, bridging the chasm between divergent scenarios and practical decisions. That is exactly what I have been exploring at Dragonfly, using its RRR AI tool to test whether AI can lead us to the Grail. My explorations indicate that it can – by way of what I am calling the “Escher Prompt.”

Early in my tests, I noticed the AI — much like most human strategists — tended to focus on bilateral pairing (matching one scenario to one action): “Under Scenario A, do X; under Scenario B, do Y,” and so on. A useful start, but hardly a breakthrough. I needed to find a way to draw out strategic options that can be expected to produce advantages across all the futures in a set.

The prompting strategy I arrived at to achieve this ambitious end reminded me of M. C. Escher’s famous multi-perspective drawings. In particular, my mind went to the image above, where the hooded stair walkers appear to be simultaneously ascending and descending. You can judge the aptness of the analogy in a moment.

Here is a concrete example. I trained Dragonfly’s RRR tool on both a ready-made set of four divergent scenarios that are in use by a multinational company (codenamed Epsilon), and that company’s corporate strategy. I codenamed the scenarios Alpha, Beta, Gamma, and Delta. Dragonfly RRR was then taken through a four-step prompt sequence:

Prompt 1:
“What actions are robust in at least 3 of the 4 scenarios?”

Logic: This initial prompt recognizes that in a genuinely diverse set of scenarios, identifying true no-regret outcomes is extremely unlikely on a first pass. Simplifying the task to work on a simple majority (3 out of 4) gets the information flowing. It also recognizes that AIs economize on their token use and will frequently time out, crash or guess/hallucinate if asked to address too many layers of complexity. (See Apple’s much-discussed paper on the present frontier of large reasoning models, “The Illusion of Thinking”, which post-dated this discovery by a few weeks.) It is optional to add a persona and context for the AI at the outset of the prompting sequence, such as “you are a strategy professional at a diversified conglomerate …”

Dragonfly output:
Based on the analysis of the scenarios within the uploaded document, here are five actions that seem robust across at least three of the four scenarios.

Commentary: We now have five options that are likely to be advantageous in a simple majority of scenarios, with short explanations of why they were chosen. Dragonfly’s RRR framework adds important ballast to these responses by systematically justifying the working. We may come back to these later, but we put them aside for now.

Prompt 2:
“Which scenario was most often the odd one out in the previous analysis?”

Logic: This follow-up prompt turns the gaze of the tool onto the minority scenario, which is the proximate barrier to our pursuit of the Grail. As an aside, this direction of enquiry recognizes that when making decisions under uncertainty, there is often considerable value in studying outliers, not excluding them.

Dragonfly output (abridged & anonymized):

In the previous analysis, the scenario most frequently identified as the "odd one out" was likely the "Gamma" scenario. While the other scenarios focus on X, Y, and Z, "Gamma" is characterized by J, K & L, which might not align as closely with initiatives centred around { … } strategies. Gamma emphasizes unpredictability … , which poses challenges. Therefore, actions that thrive in Alpha, Beta and Delta tend to face more friction in Gamma.

Commentary: We have what we need to move ahead. Dragonfly has flagged “Gamma” as the outlier.

Prompt 3:
“What would you recommend under the Gamma scenario?”

Logic: Having identified the outlier, we now need to identify options that will be advantageous in the “odd one out.” Escher’s hooded stair walkers are starting to question their direction of travel …

Dragonfly output (abridged & anonymized):

Under the "Gamma" scenario, characterized by { … }, the focus should be on strategies that enhance { … }. Here are six recommendations:

Commentary: We now have six actions to ponder that work for the outlier. On to the denouement.

 

Prompt 4:
“Do any of these actions designed specifically for "Gamma" also enhance reward and/or resilience in the other 3 scenarios?”

Logic: We now turn the tool back upon itself: we are living in Escher’s world by descending to ascend, or ascending to descend, if you prefer. By the rules of transitive logic, if Dragonfly answers in the affirmative to this prompt, those answers must, by definition, represent no-regret actions. Note that the concepts of “reward” and “resilience” have specific definitions in the Dragonfly RRR framework; they are not general concepts.

Dragonfly output (abridged & anonymized):

Yes, several actions designed for the "Gamma" scenario can also enhance reward and resilience in the other three scenarios, providing strategic advantages across diverse futures.

Commentary:
Voila. We have the affirmative response for which we were hoping. The Grail is ours: secured from the desktop without the need for a perilous quest.
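For readers who like to see the working, the transitive claim behind Prompt 4 reduces to a simple set check: any action recommended for the outlier that is also advantageous in every other scenario is, by construction, advantageous across the full set. A minimal sketch of that logic follows, using purely hypothetical action names and scenario memberships; in practice these judgements come from the model’s justified answers, not hand-coded data.

```python
# Minimal sketch of the Prompt 4 set logic. The action names and scenario
# memberships below are hypothetical placeholders, not Dragonfly output.

ALL_SCENARIOS = {"Alpha", "Beta", "Gamma", "Delta"}
OUTLIER = "Gamma"

# Scenarios in which each candidate action is judged advantageous.
advantageous_in = {
    "Build supply-chain optionality":        {"Alpha", "Gamma", "Delta"},
    "Expand modular, low-capex capacity":    {"Alpha", "Beta", "Gamma", "Delta"},
    "Lock in long-dated offtake agreements": {"Alpha", "Beta", "Delta"},
}

others = ALL_SCENARIOS - {OUTLIER}

# Prompt 4's question: does an action surfaced for the outlier also help in
# the other three scenarios? If yes, it is robust across the full set.
no_regret = [
    action
    for action, scenarios in advantageous_in.items()
    if OUTLIER in scenarios and others <= scenarios
]

print(no_regret)  # -> ['Expand modular, low-capex capacity']
```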

[If the following image means nothing to you, congratulations. One of life’s most joyous comedic experiences still awaits you.]  

For reference, “several” actions turned out to be five, each of which was expected to exert a positive influence on reward and/or resilience, with the choices justified.

By asking the AI to identify and then interrogate the outlier, then folding its unique requirements back into the remainder of the set, it was possible to isolate strategies that would have been difficult — potentially impossible — to identify either manually or by a direct pursuit.
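For anyone who wants to experiment with the sequence outside Dragonfly, the chain itself is straightforward to script against any conversational LLM interface. The sketch below is an illustrative stand-in only: it uses the OpenAI Python SDK rather than Dragonfly RRR, and the model name, file path, system framing and prompt wording are assumptions for demonstration, not the configuration used in the exercise described above.

```python
# Illustrative sketch of the Escher Prompt sequence run against a generic chat API.
# The OpenAI SDK is a stand-in for whatever LLM interface is available; the model
# name, file path and system framing are assumptions, not Dragonfly RRR's.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(history, prompt, model="gpt-4o"):
    """Append a user prompt, call the model, and return (updated history, reply text)."""
    history = history + [{"role": "user", "content": prompt}]
    reply = client.chat.completions.create(model=model, messages=history)
    answer = reply.choices[0].message.content
    return history + [{"role": "assistant", "content": answer}], answer

# Optional persona, plus the scenario set and corporate strategy as context.
scenario_pack = open("scenarios_and_strategy.txt").read()  # hypothetical file
history = [{
    "role": "system",
    "content": (
        "You are a strategy professional at a diversified conglomerate. "
        "Four scenarios (Alpha, Beta, Gamma, Delta) and the corporate strategy follow.\n\n"
        + scenario_pack
    ),
}]

# Prompt 1: work on a simple majority first.
history, majority_actions = ask(
    history, "What actions are robust in at least 3 of the 4 scenarios?")

# Prompt 2: turn the gaze onto the outlier.
history, outlier = ask(
    history, "Which scenario was most often the odd one out in the previous analysis?")

# Prompt 3: recommend for the outlier on its own terms.
history, outlier_actions = ask(
    history, "What would you recommend under the odd-one-out scenario you just identified?")

# Prompt 4: fold the outlier's recommendations back across the rest of the set.
history, no_regret = ask(
    history, "Do any of these actions also enhance reward and/or resilience in the other 3 scenarios?")

print(no_regret)
```

Carrying the full conversation history forward at each step is what lets Prompts 2 to 4 refer back to earlier answers, mirroring the way the sequence was run interactively.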

My instincts and my experience working with sophisticated scenario sets in multiple contexts allow me to infer that the Escher Prompt is a generalisable method for extracting the sort of insights described above. Even so, some caveats are in order and more may well emerge with future experimentation.

The most important caveat is that the results achieved by the Escher Prompt will depend upon the nature and quality of the underlying scenario set. If the scenarios are not sufficiently divergent, or are too divergent, or are not internally consistent, the strategic insights derived may suffer in usefulness, or tend towards the axiomatic.

Dr Huw McKay is a strategic thinker working at the intersection of business, finance, policy and academia. He is a Visiting Fellow at the Crawford School, ANU, and a former Chief Economist of BHP.
