We often hear claims about future technologies that will shape the world. Typically, a small set of technologies are invoked; at present, these would include AI, quantum technology and biotechnology.
How could we gauge claims about these technologies and decide where to make investments? This is an important question for the strategic mind (less for the short-term investor who rides the hype cycle).
Fortunately, a lot has been written on the general topic, though not always using the same terminology, which means it takes years to read and assimilate the different literatures and draw conclusions.
Let us first break down the analysis, with the proviso that what follows is a representation of what I do not know, rather than of what is not known in general, and is therefore a guide to my reading (this page is very irregularly updated and I have made no attempt to construct a coherent argument).
The device encodes our fate
The first aspect I want to consider is the “internal” technical features of the proposed device, on the assumption that they somehow encode its future success or failure.
I believe this is an implicit argument in at least some writing about the future of technology even if it is rarely admitted (presumably, not admitted for fear of sounding “naively” deterministic or of being ignorant of decades of work on the social construction of technology).
However, in my view, encoding of futures in the properties of devices cannot be completely dismissed, even if there are important caveats. There is “something about” devices that matters in this story and ought to be investigated.
Let us begin with “quantum technology” (which is one of the fields claimed for at least two decades to presage the future of technology) and evaluate some of the arguments that have been made about it.
We are currently in the midst of a second quantum revolution. The first quantum revolution gave us new rules that govern physical reality. The second quantum revolution will take these rules and use them to develop new technologies… Quantum technology allows us to organise and control the components of a complex system governed by the laws of quantum physics. This is in contrast to conventional technology which can be understood within the framework of classical mechanics.
Dowling and Milburn, 2002, Quantum Technology: The Second Quantum Revolution, in: arXiv
The authors of this now old but well-articulated piece give examples of how they believe knowledge gathered through the first quantum revolution (as they put it) underpins many contemporary devices.
For example, they state that ‘electronic wave functions that underpin the electronic semiconductor physics…drive the computer-chip industry and the Information Age’ while ‘the realization that a light wave must be treated as a particle gives to us the understanding we need to explain the photoelectric effect for constructing solar cells and photocopying machines’.
One possible reading of this slightly ambiguous statement is in my view that the first quantum revolution authored the modern age, so to speak, while the second quantum revolution would author the next.
However, the statement also admits a slightly different but more precise interpretation, in which the first revolution was merely explanatory of existing technology rather than itself an authorial force. In the second quote, directly below, this more precise meaning dominates.
The hallmark of this Second Quantum Revolution is the realization that we humans are no longer passive observers of the quantum world that Nature has given us. In the First Quantum Revolution, we used quantum mechanics to understand what already existed. We could explain the periodic table, but not design and build our own atoms. We could explain how metals and semiconductors behaved, but not do much to manipulate that behavior. The difference between science and technology is the ability to engineer your surroundings to your own ends, and not just explain them. In the Second Quantum Revolution, we are now actively employing quantum mechanics to alter the quantum face of our physical world.
ibid.
It could well be the case, and indeed is the case, that quantum mechanics can help explain the workings of devices that already exist.
But it seems more uncertain that any such knowledge underpinned the development of these devices in the past or, to challenge the promoters of the technology further, that it would necessarily underlie new inventions in future.
Overall, I draw a distinction between scientific knowledge as an explanatory tool and scientific knowledge as an inventive tool, and note that the gulf between the two ideas has not yet been gauged.
As a first attempt to address this point, let us reflect on the “quantum” inventions named and their possible epistemic basis.
| Invention | Epistemic basis |
|---|---|
| Computer chips and the Information Age | Multiple inventors in American electronics firms, building on ideas developed by Werner Jacobi (Siemens) and the British military engineer Geoffrey Dummer. |
| Solar cells | Bell Labs developed the first “practical” silicon solar cell but drew on a long history of inventions and discoveries dating back to the observation of the photovoltaic effect by Edmond Becquerel in the 19th century. |
| Photocopiers | The inventor, Chester Carlson (Xerox), was inspired by the work of Pál Selényi, a Hungarian engineer, concerning electrostatic picture recording. |
However, as the above table shows, it is actually quite difficult to determine the epistemic basis of a particular technology: you have to decide what you consider the central insight in the process of development and then determine how that insight came about. This also requires a great deal of esoteric historical knowledge that only a small number of professional historians possess.
Certainly, arguments could be made for classical mechanics of the Newtonian kind being an inventive tool.
Arguments could also be made, and are regularly made, concerning nuclear physics and the nuclear bomb.
But looking at the famous Manhattan Project and the destruction of Hiroshima and Nagasaki, the process-industry experience contributed by Du Pont could also be deemed crucial to the enrichment of uranium, as could experience from past aerial campaigns to the building of a long-range bomber.
I am sure the arguments are much less clear for quantum mechanics and I am not yet convinced we could attribute the three inventions cited above to knowledge of that topic. This means the historical record, or at least “common knowledge” of it, does not seem to help us much in deciphering the present.
I think we also know that inventions can be developed empirically or otherwise stumbled upon, which might be the case for most inventions, although we cannot say for sure.
Successful inventions can appear based on what would later be seen as false premises or false lines of thought, an example of doing the right thing for the wrong reasons. Herbal medicines are a historical example that prompted the development of some pharmaceuticals, even though modern investigators would of course disagree with antique reasoning such as the theory of humors.
The number of successful inventions developed from what would be perceived as correct scientific understanding and with a strategic goal in view might indeed be the less common kind. That is to say, if the Vannevar Bush-type “linear model” (whereby “cutting edge” scientific knowledge authored invention) existed in reality, it applied to only a fraction of the inventive space.
At the same time, however, it is rather on this particular idea that ex-ante technological determinism depends if we are to read the future of technology from current scientific knowledge, and therefore it is something that ought to be “bottomed out” empirically as much as possible.
Regrettably, I have so far found little data to adduce on this point. A foundational analysis by Mansfield (1991) argued that ‘one-tenth of the new products and processes commercialized during 1975-85 in the information processing, electrical equipment, chemicals, instruments, drugs, metals, and oil industries could not have been developed (without substantial delay) without recent academic research’ (p. 11).
Of course, there is no sure means of saying whether this relatively small contribution has become more important since that analysis.
A good although old article by Cohen, et al., 2002, The Influence of Public Research on Industrial R&D, in: Management Science cited data from Narin, et al. (1997) which ‘concluded that the linkages between industrial R&D and current public research (conducted in either academia or government labs in the prior 10 years) grew dramatically between the late-1980s and early-to-mid-1990s’ (p. 3).
It could be the case, therefore, that the percentage of inventions derived intentionally or strategically from scientific knowledge increased from a possible late-twentieth century baseline of around 10% (as detected by Mansfield). This could be attributed to quite deliberate effort such as technology transfer offices as well as the promotion of the impact agenda, translational research, open innovation, even the “neo-liberal” university.
The linear model was therefore not a relic of the past, as sometimes claimed, but indeed a description of the present and even the future and, contrary to the idea that we long abandoned it, we are actually its greatest advocates. As such, we would be actively generating a particular kind of technological determinism at a scale and perhaps of a quality that did not exist before.
This assertion is obviously not well-evidenced, but it is politically disorienting, which seems one good reason to value it as an analytical tool. It implies that technological futures such as quantum technology require for their successful execution the creation of an ever-more dominant linear model of innovation, which in turn facilitates a certain kind of ex-ante technological determinism.
This could lead us to ask what function describes the penetration of the linear model over time, and what predictions we can make for its future trajectory (I have not yet seen such data).
Additionally, it points us to the need to understand the epistemic basis of successful inventions, which more often than not we have not fully considered, lost as we often are in the word soup of translational research, linear models, and technology transfer.
Conversely, if the epistemic basis for successful inventions remained mostly with trial and error, stumbling, tinkering, luck, intuition, false premises and other poorly-specified processes of that kind, rather than with strategic, directed activity, it would have implications for how we think about, and invest in, ideas. At the least, we would need to make efforts to specify these inventive processes.
A trial-and-error approach is amenable to scale, massed capital, and quasi-bureaucratic structures capable of organizing and documenting repeated trials. The discovery of many medicines and pesticides, for example, depended on this process: the possible outputs of a petrochemical plant, or natural products, were screened for activity with procedures carefully refined to experiment en masse, such as through the use of robotics, clinical trials, or large research establishments staffed by hundreds of people operating pipettes.
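The logic of this mass-screening process can be caricatured in a few lines of code. This is a purely illustrative sketch, not a description of any real assay: the `assay` function, the activity threshold, and the size of the candidate library are all invented placeholders standing in for an empirical wet-lab measurement.

```python
import random

# Illustrative sketch of trial-and-error at scale: a large library of
# candidates is screened "en masse" for activity, with no mechanistic
# theory of why any particular candidate works.
random.seed(0)  # fixed seed so the run is reproducible

def assay(candidate: float) -> float:
    """Stand-in for an empirical activity measurement (hypothetical)."""
    return candidate * random.random()

# e.g. the outputs of a petrochemical plant, or natural products
library = [random.random() for _ in range(10_000)]

# Keep only the strong responders; everything else is discarded.
hits = [c for c in library if assay(c) > 0.95]

# The approach succeeds through throughput, capital, and record-keeping,
# not understanding: a small team could run the same loop, just far slower.
print(f"screened {len(library)} candidates, found {len(hits)} hits")
```

The point of the caricature is that nothing in the loop requires scientific knowledge of the candidates; it requires only the organization and capital to run and document enough trials.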
Tinkering and intuition, by contrast, while perhaps much cheaper and requiring less organization, tend also to be more directionless and diffuse, and therefore much less amenable to massed capital (as firms perhaps discovered when they unsuccessfully sought profits in university labs, where such tinkering could flourish, among other places).
This somewhat elides the distinction made by Freeman and Soete on pp. 103-105 of their famous textbook The Economics of Industrial Innovation (though I do not want to put words in their mouths). They argued for the importance of corporate or (quasi-)bureaucratic inventors in the twentieth century and contrasted this with the classic Sources of Invention by the right-wing economist John Jewkes, which they argued had highlighted individual genius.
Where I would make a slight caveat is that Freeman and Soete foregrounded the entity behind the invention whereas I am interested in the epistemic basis, not denying however, that the two could be connected. But I believe this epistemic train of thought takes us away from the perhaps slightly restrictive intellectual baggage that gets carried on this topic.
A corporate inventor could in theory undertake more diffuse kinds of research, of course, although you have to harbor doubts about the ability of commercially unproductive mavericks to operate in a quasi-bureaucracy (unless there was some kind of special shield from the company leadership such as through the transfer of profits and rents to “philanthropic activities” as occurs with the IT billionaires).
Equally, an individual inventor or a small team could undertake trial-and-error methods, but it would be slow because they would lack the capital and organization for the necessary throughput.
Looked at another way, a corporate inventor would be more able to obtain any amount of esoteric information about scientific matters through a range of measures such as employing consultants, building partnerships, buying IP, and so on, thereby implying a more rational approach.
These measures would not be available to a small team for financial or legal reasons. Hence, the small innovator would be at an information disadvantage and more reliant on guesses.
Certainly, the epistemic situation is difficult to summarize. We need to know much more about the inventive process and its relation to knowledge.
There is an academic literature on maintenance and possibly related ideas like jugaad, frugal innovation, and so on. This is quite interesting because it is epistemic in its focus.
But it would be only one angle on a complex problem, which is why I think study of these topics is probably an intellectual dead end in and of itself. Jugaad-type ideas have also come in for some critique as basically racist, in common with older, evidently neo-colonial terms like appropriate or intermediate technology.
We also have a big literature on the anthropology of scientists, in which we are informed that what goes on inside laboratories is not at all scientific. This seems to me a document of the various poorly-specified processes cited above, like intuition, guesses and trial and error, although not always in those terms. But it is sometimes hard to tell when sociologists of science are being ironic as opposed to serious. I am not yet sure how to use this literature for the ends we seek here; Latour’s challenge to connect the dots has not, as far as I have heard, been fulfilled.
A comprehensive epistemic analysis, which we therefore lack, would give us the tools to draw conclusions about the probability of a particular invention succeeding, and also the most appropriate strategies to obtain it.
However, this would still not be enough. We would also have to ask whether our scientific knowledge of natural processes would ever be sufficient to design successful interventions, even in principle, or at least sufficient within the proposed time-frame. Evidently, in some cases the answer would be yes; in others, perhaps no.
Genetic engineering is relatively old and obviously emerged from scientific study, yet the enormous complexity of crop genomes defeated many efforts despite significant investment. The biological world offers a model of a complex system that defeats human ingenuity (cancer might prove a similar problem).
Prof. Dr. Böttcher proposed a complexity scale for biological systems including human societies (figures 3-4). In the case of genetic engineering it might be the case that anything much above A. thaliana on his complexity scale would prove impossible to engineer using molecular methods (excluding of course the very simple modifications developed by Monsanto that introduced particular traits). This would rule out modifications of many crop plants (presumably – although he did not note crop plants like wheat or rice on his scale).
One interesting idea from Topcu, et al. (see bibliography below) was that decomposition of a complex problem actually increased its complexity (if I understood their argument correctly). If we applied these ideas to crop plants, we might conclude that “reductionist” approaches such as genetic engineering would indeed make our task even more difficult and reinforce the idea that the only viable methods lay with conventional plant breeding (this kind of approach apparently recognizes the complexity of the system in which we are trying to intervene).
There is obviously a whole literature condemning reductionist approaches but what is important here is our ability to quantify complexity as well as the impact of proposed solutions, rather than just moralizing about it.
Referencing Prof. Böttcher’s scale, biotechnology could therefore have very limited applications above a certain point perhaps marked by the various model organisms such as A. thaliana, and would be unlikely to produce substantial gains.
If we considered the quantum world in the same light, we could be forced to draw the same conclusions. Where, therefore, would we place the quantum world on that scale of complexity? What would be the interactions between the complexity of the systems concerned and our proposed interventions?
This points us to the need to understand the level of complexity of the sphere we are trying to control and the ability of the tools we have available to do so. In the coming years I will summarize the literature I read on degrees of complexity of systems.
We could in conclusion say we need to grasp the nature of the inventive process that might be required and the possible roles of scientific knowledge in that process.
Secondly, we would have to understand the complexity of the sphere under consideration with the assumption that more complex systems are less amenable to solutions based on scientific knowledge, because it is not possible to understand them at a sufficiently mechanical level to make directed interventions.
This would present particular problems for scientific approaches that rely on decomposition of complex problems, potentially rendering those problems effectively unsolvable, or forever just out of reach.
The world encodes our fate
Herein we move into a wider world of ex-ante technological determinism.
While I believe there is value in addressing the ability of devices to encode their own fate, so to speak, let us not throw out the baby with the bath water regarding the social construction of technology.
The ability of a given device to change the means of production is not as such coded within the device itself but in theory in a broader social, economic and political world, if we could only understand the factors.
Here I want to move beyond the hype cycle to an understanding of what sustains an invention, at what point, if at all, it would fade, and how ex-ante determinations could be made on the basis that the future of a device is somehow encoded in present conditions, if only they could be read in the right way.
There is a big academic literature on this topic which we can draw upon. The literature on technology readiness levels might be interesting, but this is only one aspect. I have not yet had time to summarize my reading.
But it seems to me, overall, there is an idea that scientific knowledge feeds through into inventions over decades (although perhaps less depending on which study you read). In each case, we would ask what caused the delay – what forces opposed it and what forces favored it?
Summary of approach (highly provisional)
The contours of the predictive model to assess claims would include the following set of questions.
- What evidence is there that a linear model operates in the area under consideration? What hope is there of this model being strengthened?
- What is the complexity of the problem that the proposed intervention will solve? What is the impact of the intervention on the complexity of the problem?
- What problem does the proposed invention address and what would be the competitor options (including incumbents)?
- How long do we think the inventive process will require, and do we foresee political forces capable of sustaining it for that period of time?
Ideally we would have quantitative outputs that would allow us to weight investments in our portfolio. I believe the above discussion implies quantitative measures could be developed.
But we currently do not have them, so let us instead game out the above questions in a subjective way with a few big claims.
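To make the idea of quantitative outputs concrete, here is a minimal sketch of how the four questions might be turned into portfolio weights. Everything in it is a hypothetical placeholder: the field names, the 0-1 scores, and the naive equal-weight aggregation are my own inventions for illustration, not a validated model.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """One technological claim, scored 0-1 on each of the four questions."""
    name: str
    linear_model_evidence: float      # Q1: evidence a linear model operates
    intervention_tractability: float  # Q2: higher = complexity is more tractable
    competitive_position: float       # Q3: strength vs. competitor options
    political_sustainability: float   # Q4: forces to sustain the inventive process

    def score(self) -> float:
        # Naive equal weighting; a real model would need justified weights.
        return (self.linear_model_evidence
                + self.intervention_tractability
                + self.competitive_position
                + self.political_sustainability) / 4

def portfolio_weights(claims: list[Claim]) -> dict[str, float]:
    """Normalize scores so the portfolio weights sum to 1."""
    total = sum(c.score() for c in claims)
    return {c.name: c.score() / total for c in claims}

# Subjective placeholder scores: one claim strong on Q1 and Q4 but
# uncertain on Q2 and Q3, another weaker throughout.
claims = [
    Claim("AI", 0.8, 0.5, 0.5, 0.8),
    Claim("gene-engineering", 0.3, 0.2, 0.3, 0.4),
]
print(portfolio_weights(claims))
```

Such a sketch only restates subjective judgments as numbers, of course; its value would depend entirely on how the per-question scores were grounded in evidence.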
Game out (highly provisional)
The Joint Research Centre of the European Commission recently highlighted the following five “technologies and innovations” that will not surprise anyone, namely, artificial intelligence, gene-engineering, internet of things, augmented reality and internet of the body.
For the time being we will just game out the first two (the last three all concern computers and so can perhaps be merged with AI, although I must admit I do not actually know what the last one means).
Regarding AI, on the first point there is reasonable evidence that a linear model operates and might even be strengthened. Large firms are reportedly gobbling up the scientific world and due to large amounts of capital are even expanding it. This would surely only deepen the connections between scientific knowledge and invention. On the fourth question, as well, we appear to see massive amounts of capital in play which could imply the political sustainability of the program as the investors, who are very powerful, would get burned if it collapsed.
However, on the other two points the answers are less certain. Given that we do not know the exact problem the technologists have in mind to remedy, we cannot really assess the complexity of the system, nor the impact of the remedy, nor its competitors.
While AI produces a mixed picture with some unknowns, I think the read-out for gene-engineering is perhaps less convincing. In some sectors there is not very strong evidence for a linear model and no obvious means of it being strengthened. There seems to be no escalating agenda to increase the penetration of the linear model.
As already discussed, the complexity of biological systems above a certain threshold, perhaps marked by the model organisms, may be too great, while at the same time a reductionist intervention might heighten that complexity, taking us further from our goal.
There are competitors to the proposed technology in many areas that would achieve the same result while there are rather uncertain political forces to sustain the necessary work. As such, gene-engineering would receive much less emphasis than AI in my putative investment portfolio, for these reasons.
Evidently, these ideas are general and impressionistic. There is nothing as such new here. A quantitative model applied to specific proposals would be most useful.
Bibliography (highly provisional)
Topcu, et al., 2022, The Dark Side of Modularity: How Decomposing Problems Can Increase System Complexity, in: Journal of Mechanical Design (link)
Bessen, 2015, Learning by Doing: The Real Connection Between Innovation, Wages, and Wealth
Albino, et al., 2014, Understanding the development trends of low-carbon energy technologies: A patent analysis, in: Applied Energy. “The number of patents that are based upon basic research is growing.”
Salado and Nilchiani, 2014, The Concept of Problem Complexity, in: Procedia Computer Science
Agar, 2012, Science in the 20th Century and Beyond (“Translational research”)
Chesbrough, et al. (eds.), 2006, Open Innovation: Researching a New Paradigm
Freeman and Soete, 1997, The Economics of Industrial Innovation (third edition)
Emmeche, 1997, Aspects of complexity in life and science, in: Philosophica
Mansfield, 1991, Academic research and industrial innovation, in: Research Policy
Nelson and Winter, 1977, In search of useful theory of innovation, in: Research Policy