Non-Linearities and Futures Thinking

Centre for Strategic Futures
Jun 24, 2024


By Joel Nee

“By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.” — Paul Krugman, 1998

Image designed by Stanley Yang


In 1980, McKinsey projected that the US cellphone market would have 900,000 subscribers by 2000. The actual figure turned out to be 109 million, roughly 100x larger than predicted, with global cellphone services by then adding 900,000 new subscribers every three days.[i]

Obviously, dunking on supposed “masters of the universe” is easy, and making projections is not. Nonetheless, when experts get it wrong to the extent of several orders of magnitude, it is instructive to ask, “Why?” More importantly, how can we try to not get it so wrong?

This piece argues that unexpected non-linearities[ii] are behind many such mistakes in thinking about the future. The world does not lend itself nicely to linear paths or normally distributed outcomes because the world is complex, with multitudinous, unexpectedly interacting elements. In this context, futures thinking can pave the way by recognising potential non-linearities in advance.

This piece has three parts: the first establishes a statistical model that helps visualise non-linearities and draws out its implications. The second discusses how futures thinking may help chart a path through. The third concludes.

Non-Linearity: A Statistical Model

This part establishes a statistical model[iii] that helps visualise a manifestation of non-linearity — a feedback loop[iv] — and its impact.

A stylised problem: 100 people decide every week whether to go to a bar, with a 50% independent chance that each person will go on a given week. To simulate a feedback loop, introduce the Overcrowding Rule: if the bar is overcrowded for one week — with 60 or more people — people become annoyed,[v] reducing the probability that each person will go from 50% to 40% the next week (and only the next week).
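The stylised problem above can be sketched in a few lines of Python. This is a minimal sketch: the parameter names and random seed are my own choices, and the exact trajectories in the article's figures came from the author's own runs.

```python
import random

N_PEOPLE = 100
P_BASE = 0.50     # baseline probability each person attends in a given week
P_ANNOYED = 0.40  # reduced probability the week after overcrowding
THRESHOLD = 60    # attendance at or above this triggers the Overcrowding Rule

def simulate(weeks, overcrowding_rule=True, seed=None):
    """Simulate weekly bar attendance, optionally with the Overcrowding Rule."""
    rng = random.Random(seed)
    attendance = []
    p = P_BASE
    for _ in range(weeks):
        # Each of the 100 people independently decides to go with probability p
        went = sum(rng.random() < p for _ in range(N_PEOPLE))
        attendance.append(went)
        # The rule only affects the *next* week, then resets
        p = P_ANNOYED if (overcrowding_rule and went >= THRESHOLD) else P_BASE
    return attendance

baseline = simulate(100, overcrowding_rule=False, seed=1)
with_rule = simulate(100, overcrowding_rule=True, seed=1)
```

Plotting `baseline` and `with_rule` side by side gives the kind of comparison shown in the two figures: with the rule active, a week at or above 60 depresses the next week's turnout, occasionally producing the sharp dips discussed below.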

Below are two simulations over 100 weeks each charting the number of people who turned up each week. One simulation contains the Overcrowding Rule and, as a basis for comparison, the other does not (i.e. it simply simulates a 50% independent chance that each person will go on a given week). Guess which chart shows the Overcrowding Rule:

Figure 1
Figure 2

Figure 2 includes the Overcrowding Rule. Two observations from this exercise:

a) Both charts look very similar. In both, attendance jiggles around 50, as expected for 100 people each with a 50% independent chance of attending. For the simulation without the Overcrowding Rule, attendance numbers can indeed be well-modelled by a normal distribution.[vi] The key is how easily the simulation with the Overcrowding Rule can be mistaken for the one without it: the non-linear feedback loop is not immediately obvious, and the normal distribution can seemingly model both simulations — call this the Normality Assumption. Even if the slight downside bias in attendance in Figure 2 were apparent, that is only because the data is presented from a bird's-eye view over 100 weeks. The Overcrowding Rule's effect is much harder to perceive for a neutral observer within the simulation (e.g. the bar's bartender) living through each week.

b) The Overcrowding Rule produces non-linear critical discontinuities that the linear Normality Assumption does not account for. Specifically, at weeks 58 and 96 in Figure 2, attendance collapses to around 30 — at no point do the attendance numbers in Figure 1 come close.[vii] For the bartender, this collapse will probably come as quite a shock if he expected attendance to follow a simple normal distribution à la Figure 1, especially since that expectation seems to be borne out over the first 57 weeks.

It is true that collapses in attendance numbers rarely occur. However, the interesting things in real life happen not in equilibrium behaviour but in the rare, sizeable departures from equilibrium — these are where opportunities and crises abound, institutions and people are transformed, and fortunes are made and lost.

Stylised as the model above might be, non-linear feedback loops have led to real-world critical discontinuities that were not anticipated because of the Normality Assumption. For example, the 2008 Financial Crisis was caused in part by a feedback loop ripping through the US subprime mortgage market — a collapse in subprime mortgage debt values led to deleveraging, in turn further depressing subprime mortgage debt values. This cascade led to critical discontinuities like bank failures. These events were not anticipated by financial institutions because of the Normality Assumption: their normally distributed models ignored systemic risk and assumed past observations of asset price fluctuations would be sufficiently good estimates of future asset price behaviour.[viii] Similar unexpected non-linear events are abundant throughout various fields, including political systems and elections, industry supply chains, infectious disease transmission, technological development, and geopolitical conflicts.

Relevance to Futures Thinking

Since history is littered with the flaws of the Normality Assumption and the subsequent surprise of non-linear critical discontinuities, why do agents keep fighting yesterday's war and fail to plan for non-linear scenarios? Because it is easy. Recall that Figure 2 with the Overcrowding Rule looks very similar to Figure 1 without it; simply ignoring the Overcrowding Rule gives a pretty good and intuitive estimate of Figure 2's attendance numbers most of the time, until it does not. More generally, lots of things look well-behaved until they are not, by which point it is usually too late. Just as the bartender finds it difficult to see the feedback loop that causes a non-linear critical discontinuity in attendance numbers, so too do organisations find it difficult to identify possible non-linearities as real-time participants within the system. This is especially so if the organisation is focused on day-to-day operations and “firefighting”.[ix]

This is where futures thinking comes in — to resist the institutional imperative to think about the future simply as a well-defined bell curve of possible outcomes along a linear extrapolation of historical equilibrium states. Since non-linearities make the past an imperfect guide to the future (at best), futurists can add value through forward-looking environmental scanning and sensemaking. For example, futurists may identify emerging trends through the detection of “weak signals”, i.e. subtle indicators of latent dynamics and mechanisms.[x] Such signals may point to potential non-linearities and provide forewarning. In the stylised bar problem, this might mean using multidimensional signals to identify the “Overcrowding Rule” in advance of a collapse in attendance numbers, e.g. noting patrons' complaints that the bar is getting too crowded as attendance approaches 60, or observing how the pleasantness of the bar's atmosphere shifts with attendance numbers. Understanding such plausible future states then allows for strategies to deal with them.[xi]

It is important to note that futures thinking is not the same as forecasting or prediction. Futures work helps us figure out how, not what, to think about the future; it concerns itself with the implications should future states materialise, not with which future state will likely materialise. Hence, futures thinking would not have enabled an organisation to perfectly predict the 2008 Financial Crisis. However, a futurist might have picked up weak signals suggesting plausible future non-linearities in the form of financial system fragilities, sensitising an organisation to such scenarios and helping it cope better when they eventually occurred.


When Paul Krugman assessed the Internet's future impact on the economy in 1998, he essentially extrapolated linearly from a previous technological development: the fax machine. It is impossible to know for sure, but when McKinsey projected the US cellphone market size in 2000, it likely used approximately linear rather than exponential assumptions of unit growth. Unjustifiably assuming linearity is why these visions of the future were so wrong. Futures thinking provides an alternative path: one that is wary of the occasional dramatic mistakes that easy, instinctive linear projections produce, and, in so doing, girds organisations to live to fight another day.

Joel Nee is Foresight Analyst at the Centre for Strategic Futures.

The views expressed in this blog are those of the author and do not reflect the official position of the Centre for Strategic Futures or any agency of the Government of Singapore.


[i] The Economist, “Cutting the Cord,” 1999.

[ii] Non-linearities refer to phenomena where effect (output) is not proportional to cause (input), beyond that which can be attributed to random perturbations alone. This may take the form of feedback loops, critical discontinuities, or emergent behaviour, among others.

[iii] Inspired by a classic statistical model from complexity science — the El Farol bar problem.

[iv] A feedback loop is one where agents react to a transient environment, which consequently reacts to the agents. Feedback loops have been noted to be “paramount in any complex system.” See Sharma, Daksh, “Complexity Economics,” The Economics Review, 2019; and Eliassi-Rad, Tina, Henry Farrell, David Garcia, Stephan Lewandowsky, Patricia Palacios, Don Ross, Didier Sornette, Karim Thebault, and Karoline Wiesner, “What Science Can Do for Democracy: A Complexity Science Approach,” Humanities & Social Sciences Communications 7, no. 30 (2020).

[v] Cf. Yogi Berra: “Nobody goes there anymore. It’s too crowded.”

[vi] A binomial distribution is approximately normal for sufficiently large n (here, n = 100).

[vii] The feedback loop does not even have to be particularly pronounced for critical discontinuities to emerge. In this case, there is only a ~2.8% chance in any given week that attendance reaches 60 or more, triggering the feedback loop the next week; and when the loop does kick in, it only reduces each person's probability of attending from 50% to 40%. Despite this, attendance relatively frequently falls below 30, which would be a 4-sigma event under a normal distribution (roughly a 1-in-30,000 occurrence).
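The tail probabilities quoted in this note can be checked with Python's standard library. This is a quick verification sketch, not part of the original piece: the exact binomial tail gives the chance of triggering the rule, and the normal tail gives the likelihood of a 4-sigma shortfall under the Normality Assumption (mean 50, standard deviation 5).

```python
from math import comb, erfc, sqrt

n = 100  # number of potential patrons

# Exact binomial probability that attendance is >= 60 with p = 0.5,
# i.e. the chance the Overcrowding Rule is triggered in a given week (~0.028)
p_overcrowd = sum(comb(n, k) for k in range(60, n + 1)) / 2**n

# One-sided normal tail probability of a 4-sigma event: attendance below 30
# given mean 50 and standard deviation 5, i.e. P(Z <= -4)
p_below_30 = 0.5 * erfc(4 / sqrt(2))

print(f"P(attendance >= 60) = {p_overcrowd:.4f}")
print(f"normal-tail P(attendance < 30) = 1 in {1 / p_below_30:,.0f}")
```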

[viii] For an overview of financial institutions’ failed modelling because of the Normality Assumption in the lead-up to the 2008 Financial Crisis, see Dowd, Kevin, John Cotter, Chris Humphrey, and Margaret Woods, “How Unlucky is 25-Sigma?” Journal of Portfolio Management 34, no. 4 (2008).

[ix] Other possible reasons might be institutional inertia and the costs involved in anticipating and preparing for future scenarios — for an overview of how such factors played into the critical discontinuity of the 2011 Fukushima nuclear disaster (including the ignoring of a historical tsunami that inundated the region in 869 AD and overly low seawalls), see Acton, James and Mark Hibbs, “Why Fukushima Was Preventable,” Carnegie Endowment for International Peace, 2012.

[x] This is similar to what Michele Wucker refers to as looking for “grey rhinos”. See Michele Wucker, The Gray Rhino: How to Recognise and Act on the Obvious Dangers We Ignore (New York: St. Martin’s Publishing Group, 2016).

[xi] This piece has focused on the potential downsides of non-linearities and how futures thinking can help organisations prepare for them. On the corollary, the same argument applies to the upsides of non-linearities and how futures thinking can position organisations to take full advantage of possible opportunities.


