Leave Intuition to the Machines
January 29, 2024
Is it time for System 3 thinking by humans?
Just two months after its launch in late 2022, ChatGPT reached 100 million monthly active users. Along with other advanced language models, it quickly started to encroach on territory traditionally exempt from automation, such as tasks requiring creativity, intuition and decision-making.
So, what does this mean for managerial work? We predict that the blend of artificial intelligence (AI) and human thought will remain indispensable – at least for now – but with an unexpected twist. Far from being limited to grunt work, AI will be entrusted with some of the more creative and intuitive components of decision-making, tasks viewed as fundamentally human. It won’t replace managerial work but rather reshape it.
Two styles of thinking: fast and slow
In his best-selling book, Thinking, Fast and Slow, Nobel laureate Daniel Kahneman brought to the mainstream the concept of two distinct modes of human thought. “System 1” thinking is fast, intuitive, instinctive, yet prone to mistakes in unfamiliar circumstances. “System 2” thinking, on the other hand, is slow, intentional and better able to conquer new situations by applying rules that it has learned in the past.
We generally approach tasks intuitively, only engaging System 2 if something in the environment suggests that thinking harder might be required. While our System 1 improves naturally from the experience and feedback we accumulate over time, improving our System 2 thinking takes conscious effort, for instance through formal education that develops our logic.
With sufficient practice, the acquired skills of System 2 become embedded in the intuition of System 1, in a sort of virtuous cycle.
AI’s evolution has taken a different path. Its starting point is logic, akin to System 2 thinking in humans. Rapid logical computation is what allowed IBM’s Deep Blue to defeat chess grandmaster Garry Kasparov in 1997.
However, the advent of machine learning brought forth a novel variant of machine intelligence. It demands extensive training on data, after which it operates almost instantly. While notoriously opaque, its workings are remarkably effective on average. We argue that this mirrors humans’ System 1 thinking: Human intuition is built on years of experience but operates almost instantly.
This development allowed AlphaGo, built by Alphabet’s DeepMind, to defeat Lee Sedol, one of the world’s top Go players, in 2016. Go was a game humans were expected to keep dominating precisely because intuitive play is crucial to success.
Combining thinking styles across humans and AI
How will humans work alongside AI? The fundamental premise of most narratives is that tasks can be divided into subtasks, which humans and machines undertake based on their relative strengths. This line of reasoning echoes enduring principles behind specialisation, outsourcing, offshoring and strategic alliances.
We propose a shift in focus from task specialisation to a specialisation by thinking type. If machine intelligence is capable of intuitive reasoning (System 1) on a superhuman scale, and if existing computational systems already outpace humans in logical reasoning (System 2), where does that leave room for humans? We contend that the answer is in the integration of these two systems.
Though the ability to pivot between System 1 and System 2 has long been emphasised in decision-making research, with debate over how well humans are able to do so, it is not generally seen as its own system. Yet if System 1 and System 2 tasks are carried out by AI, this pivoting between the two – call it System 3 – is where human intelligence comes into play.
As it stands, humans hold both an absolute and relative edge in this System 3 form of thought. They can identify when a process needs to be changed and select between different options and analyses. The durability of this advantage remains uncertain, as advancements in computer science seem poised to combine traditional computation with machine learning. However, it’s evident that humans will remain the sole masters of System 3 thinking for a substantial window of time.
What does it mean for managers?
To bring this idea to life, let’s consider a classic managerial dilemma: “Which project should I invest in among several options?” Some process of funnelling is needed to go from a large set of projects to a smaller set that bears closer examination. The “projects” could equally be candidates for recruitment, potential partners for a strategic alliance, or takeover targets. Conversely, generating a large enough list of initial candidates (ideation) is also important to ensure good coverage of the possibilities. Given the vast amount of data associated with the various projects, some of which may not be easily processed, some form of intuition or judgment can be helpful, particularly under time pressure.
This is where System 1 thinking kicks off the process for most managers. Their years of experience in a context may have generated insights that operate sub-consciously, producing what we think of as managerial intuition. But what if, rather than relying solely on their gut instinct for the initial selection, managers enlisted a large language model (LLM) to sift through the myriad of initial options and generate a shortlist of feasible alternatives? A shortlist generated by an LLM could itself be longer and could be drawn from a far larger initial pool of candidates.
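As a minimal sketch of what this first, intuition-like step might look like in code, consider handing the screening task to a language model and asking for a shortlist. The ask_llm wrapper, the prompt wording and the returned project ids below are hypothetical placeholders for whichever model and API a team actually uses, not any specific product’s interface.

```python
import json

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for whichever LLM API a team uses (hosted or
    local). Stubbed with a canned reply so the sketch runs without any
    external calls."""
    return '["p-017", "p-042"]'

def llm_shortlist(candidates: list[dict], k: int = 10) -> list[str]:
    """Ask the model to pick the k most promising projects (the System 1 step)."""
    prompt = (
        "You are screening investment projects. From the JSON list below, "
        f"return the ids of the {k} most promising projects as a JSON array.\n"
        + json.dumps(candidates)
    )
    reply = ask_llm(prompt)
    return json.loads(reply)  # e.g. ["p-017", "p-042", ...]
```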
This shortlist could then undergo a rigorous review using systematic, logical procedures that can be thoroughly checked and explained. This is within the purview of well-trained, methodical managers using System 2 thinking, as well as traditional rule-based AI systems. But here too, the scale and computational power of AI offer advantages. Checking facts, conducting analyses, ranking candidates on multiple criteria, clustering them in higher-dimensional spaces – these are all procedures that machines can do, and have been doing, for a long time.
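The System 2 pass is easier to make concrete, because it is just explicit, checkable computation. The criteria and weights in this sketch are illustrative assumptions, not a recommended scoring model:

```python
# Illustrative System 2 step: rank shortlisted projects on explicit criteria.
def score(project: dict, weights: dict) -> float:
    """Weighted sum over named criteria; every term is visible and auditable."""
    return sum(w * project[criterion] for criterion, w in weights.items())

def rank(shortlist: list[dict], weights: dict) -> list[dict]:
    """Order the shortlist from the highest score to the lowest."""
    return sorted(shortlist, key=lambda p: score(p, weights), reverse=True)

# Assumed criteria and weights, purely for illustration.
weights = {"expected_return": 0.5, "strategic_fit": 0.3, "risk": -0.2}
shortlist = [
    {"id": "p-017", "expected_return": 0.8, "strategic_fit": 0.6, "risk": 0.3},
    {"id": "p-042", "expected_return": 0.5, "strategic_fit": 0.9, "risk": 0.1},
]
for project in rank(shortlist, weights):
    print(project["id"], round(score(project, weights), 2))
```

Because every criterion and weight is written down, the ranking can be audited, debated and rerun, which is exactly the kind of transparency that System 1 output lacks.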
However, this process is iterative and doesn’t end with one cycle. The strict application of rules to the shortlisted candidates might expose flaws in both the shortlist and the applied rules. The ability to identify these shortcomings, and fine-tune both the shortlist creation and the selection process, exemplifies quintessential System 3 thinking.
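A hedged sketch of that loop, reusing the hypothetical llm_shortlist and rank helpers from the snippets above: the only genuinely human step is human_review, which stands in for the manager’s judgment about whether the shortlist or the rules need to change.

```python
def human_review(ranked: list[dict]) -> dict | None:
    """Placeholder for the manager's judgment: return revised weights when the
    ranking exposes a flaw in the rules, or None when the result looks sound.
    In reality this is deliberation, not a function call."""
    return None  # stubbed: accept the first ranking

def select_project(candidates: list[dict], weights: dict, max_rounds: int = 3) -> dict:
    """Iterate shortlisting, ranking and review until the manager is satisfied."""
    for _ in range(max_rounds):
        ids = llm_shortlist(candidates)                   # System 1: machine intuition
        shortlist = [c for c in candidates if c["id"] in ids]
        ranked = rank(shortlist, weights)                 # System 2: machine precision
        revision = human_review(ranked)                   # System 3: the human pivot
        if revision is None:
            return ranked[0]
        weights = revision                                # refine the rules and iterate
    return ranked[0]
```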
We believe that this form of thinking is where human managers should invest their skill development efforts. It offers an exciting fusion: human cognitive flexibility harnessing “machine precision” together with “machine intuition”, maximising the strengths of both and mitigating their weaknesses. The metaphorical image we have is that of a human charioteer guiding the twin steeds of machine precision and machine intuition, yoked together to produce rapid progress in decision-making.