From Biotech to “Tech-Bio”

Centre for Strategic Futures
8 min read · Apr 17, 2024

By Harper Chew

Image generated by Stanley Yang with Generative AI


Traditional modes of biotechnology resembled a game of Tetris played in the dark, where the fit of each tetromino remained uncertain until it was placed. It was a labour-intensive process: scientists meticulously pipetted and manipulated biological samples by hand, relying on manual techniques to analyse genetic material. It was slow, inefficient, and relied heavily on luck.

Engineering has historically served as the backbone of technological advancement, as engineers created tools, devices and systems that made it possible to explore and manipulate the world. As we scaled the S-curve of innovation [1], the field of biology took on an engineering flavour, rendering the term ‘biotechnology’ insufficient to describe the field. In response, the label ‘tech-bio’ [2] has emerged, reflecting a shift where biology is drawn out of the confines of laboratory spaces and into high-tech, low-touch innovations. Tech-bio is an exercise in applying bits onto atoms (more on this later): technology seeks to manipulate and redefine biology through both hardware and software solutions, giving us a new understanding of biological data, along with techniques that allow for the design and construction of new biological parts, devices, and systems.

This blogpost is the introduction to our two-part series on tech-bio. In this first part, we will be looking at the key innovations that enabled the shift from biotech to tech-bio, and how these can be applied in the biopharma world.

“Bits onto atoms”: digitisation of biology

The easiest and most straightforward way of understanding tech-bio is that it is technology first, biology second.

The digitisation of biology is a prominent example of how advancements in technology have led to the move towards tech-bio.

DNA nucleotides consist of a nitrogenous base, a phosphate group, and a sugar molecule. Each nucleotide bonds with an adjacent nucleotide, creating a ‘rung’ in the ‘DNA ladder’. DNA nucleotides function similarly to “back-end” coding languages: they not only encode genetic information but also contain instructions for various cellular processes such as gene regulation and cell replication. Advancements in DNA synthesis and sequencing have therefore enabled scientists to digitise biological data (such as genetic sequences and protein structures) at scale.
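The “DNA as code” analogy can be made concrete with a minimal sketch (illustrative only, not from the article): transcribing a DNA template strand into mRNA and translating it, three bases at a time, using a deliberately tiny subset of the standard codon table. The sequences and the partial table are invented for illustration.

```python
# Treating DNA like "source code": transcribe a template strand to mRNA,
# then translate codons into amino acids until a stop codon appears.

COMPLEMENT = {"A": "U", "T": "A", "C": "G", "G": "C"}  # DNA template -> mRNA

# A deliberately tiny subset of the standard genetic code.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(template_strand: str) -> str:
    """Build the mRNA string complementary to a DNA template strand."""
    return "".join(COMPLEMENT[base] for base in template_strand)

def translate(mrna: str) -> list:
    """Read the mRNA three bases at a time until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "?")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

mrna = transcribe("TACAAACCGATT")
print(mrna)             # AUGUUUGGCUAA
print(translate(mrna))  # ['Met', 'Phe', 'Gly']
```

Once genetic information is represented this way, it becomes data that software can store, search, compare, and edit at scale, which is the essence of the digitisation described above.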

For example, the Human Genome Project, completed in 2003, saw the sequencing of the entire human genome: deciphering the order of nucleotide bases along the DNA strands in all chromosomes of a human cell. This sequence acts as a map of human DNA. It is this advancement, and the realisation that we can indeed “decode” the genomic ‘book of life’, that enabled scientists to explore how DNA translates into organism function and to seek out genes associated with certain diseases [3]. The foundational knowledge gleaned from the project provided essential insights that contributed to the development of CRISPR technology. CRISPR, originally discovered in bacteria as an immune system, has been adapted for genome editing because of its precise and efficient targeting capabilities. Today, genome sequencing has become more accessible than ever, with whole-genome sequencing costs falling from USD 10,000 a decade ago to just USD 600 today [4].

“High-tech, low touch”: AI and automation

Traditional modes of scientific exploration required scientists to operate under the assumption that a system adheres to intrinsic rules. The scientist’s job was to create a hypothesis proposing what said rules might be and design experiments to assess the validity of the hypothesis. It is an iterative cycle of trial and error to get to the end outcome within an unknown solution space.

On the other hand, tech-bio’s technology-first exploration starts by exploring the solution space. Artificial intelligence (AI) models process the data and learn the system dynamics. Scientists then learn about the system by asking the model to make predictions about its biology.

Biology is complicated: nucleotides possess independent effects (individual properties of each nucleotide base) and external effects (factors outside the DNA sequence that influence the behaviour of nucleotides, such as temperature, pH, etc.), forming a complex web of interactions difficult for the human mind to grasp. Computational tools and machine learning algorithms have therefore been employed not only to help make sense of such data, but also to simulate biological processes and predict their outcomes under different conditions. This is especially valuable in drug development. For instance, the Shanghai Public Health Clinical Centre was able to decode the ribonucleic acid (RNA), or genetic building blocks, of the COVID-19 virus within 48 hours [5] by leveraging mutation prediction and structural modelling techniques. This predictive modelling aided understanding of the virus’s mechanisms of infection and pathogenesis.

In summary, the key catalysts that enabled the shift from biotech to tech-bio are:

1. Ability to visualise, measure, identify and manipulate at the molecular level.

2. Treating genetic instructions (DNA, RNA, etc.) like codes to be written, edited, and executed.

3. Ability to gather, digitise and analyse genomes from massive populations.

4. Integration of the biological and non-biological components.

5. Automation.

Case Study: exploring tech-bio applications for biopharma

Technology is eating the world, and the field of healthcare stands to win massive gains through technological integration. One of the most prominent examples is Moderna’s astonishing speed in developing the COVID-19 vaccine. Using their own custom AI to analyse vast amounts of genomic data related to COVID-19, Moderna was able to identify the suitable antigen and speed through the discovery phase. Within 42 days, the company had a vaccine ready for human testing [6].

This segment will detail how a technology-first approach could work during various stages of biopharma development.

1. Target Discovery

Traditional drug discovery begins with the identification of biological targets, such as proteins or genes, which are believed to produce a desired clinical effect. Next-generation target discovery methodologies, as seen in tech-bio companies, typically involve two key themes: a move towards foundation models in biology, and the use of machine learning to design experiments.

First, tech-bio leverages a wide variety of data sources and models them jointly to better understand the desired target, phenotype, and drug combination. The hypothesis here is that such joint modelling leads to a richer understanding of the phenotypes of interest, and eventually better results in tasks such as target prioritisation.

Second, machine learning (ML) is used to design experiments by leveraging in silico predictions. One can, for example, simulate in silico numerous experiments and use the results to prioritise which experiments to run in vitro or in vivo, allowing for faster discovery and validation. [7] This also allows for faster iterations of experimental loops, with each iteration improving the ML’s predictive accuracy.

2. Development

New technologies allow for precise and efficient development of therapeutic solutions. For example, scaled molecular biology technologies such as CRISPR can edit (i.e., insert, delete, modify) or even replace multiple genes in a living system. Behaviour induced by said changes can be measured at scale. Scientists can use this information not only to discover new targets for therapeutic treatments but also to speed up the process of identifying and validating potential drug candidates. [8] Programmable protein engineering also serves as a critical ingredient for fields like precision medicine and targeted drug delivery. Using diffusion (i.e., generative) models, AI can predict protein structures to optimise drug discovery.
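Viewed through the “genes as code” lens, a targeted edit behaves much like guided find-and-replace. The sketch below is a loose analogy, not a biological simulation, and the sequences are invented: a short guide sequence locates the target site, and the edit is applied only if the guide binds.

```python
# Loose analogy: if a genome is text, a targeted edit resembles
# find-and-replace directed by a short guide sequence.

def targeted_edit(genome: str, guide: str, replacement: str) -> str:
    """Replace the first occurrence of `guide` with `replacement`."""
    site = genome.find(guide)
    if site == -1:
        return genome                    # guide never binds: no edit made
    return genome[:site] + replacement + genome[site + len(guide):]

genome = "ATGCCGTTAGGA"
edited = targeted_edit(genome, guide="CGTTA", replacement="CATTA")
print(edited)   # ATGCCATTAGGA
```

Real genome editing is vastly more involved (off-target binding, repair pathways, delivery), but the analogy captures why programmability matters: the same mechanism can be retargeted simply by changing the guide.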

Additionally, cloud labs and lab automation innovations allow for increased efficiency, scalability, and accessibility in scientific research and experimentation. Through automation and remote access capabilities, researchers can streamline workflows, conduct more experiments, and analyse more data with integrated ML technologies. Cloud labs also facilitate collaboration between scientists from different locations, fostering interdisciplinary research [9][10].

3. Delivery

Traditional therapeutic delivery systems typically involved delivering small molecules, proteins and antibodies to the cell’s surface, deployed systemically throughout the body. The new generation of modalities, such as nucleic acids and live cells, is more specific and offers benefits such as tissue targeting and controlled release. For example, mRNA works as an instruction manual for cells to produce a viral protein, or part of a protein, triggering an immune response. The next frontier is overcoming the blood-brain barrier, a highly selective barrier which protects the brain from harmful substances. If we can develop novel capsids [11] capable of penetrating the barrier, we will be able to deliver therapeutics directly to the brain, allowing us to treat a range of neurological disorders (e.g., Alzheimer’s, Parkinson’s and others). [12]

Tech-bio companies form a ‘closed loop’, combining advanced multi-omic sequencing [13], an integrated data infrastructure and machine learning to design and optimise experiments. With the aid of ML, historical data from experiments informs subsequent approaches, leading to greater output at lower experimental cost and time.

4. Regulation

Regulatory agencies, such as the Food and Drug Administration (FDA) and other health authorities, review the safety, efficacy, and quality of new drugs before they can be marketed and distributed to patients. Processing time can be long, and the constant back and forth is tedious. To expedite this process, companies can leverage natural language processing (NLP) technology, enabling regulators to communicate with specialised large language models (LLMs). For example, pharmaceutical companies can look to build AI language models on top of their foundation models. These models, trained on vast amounts of data related to drug development, safety, and efficacy, become a tool to streamline the process of answering queries from regulators. Additionally, regulators can further prompt the model for details when needed, enhancing transparency and information communication [14].


While we have kept the discussion within drug development, it is not hard to extrapolate and imagine a future where tech-bio proliferates across society. Imagine a world where everyone is aware of their genetic risks and precision medicine is standard practice. [15] Imagine a world where biomakers, akin to advanced 3D printers, are normalised. Where technology is literally integrated into our biological selves, enabling real-time health monitoring with precision and accuracy. The next part of this blog series features a short work of fiction, where we explore what this future might look like through the diary entry of an introspective homemaker.

Harper Chew was Research Assistant at the Centre for Strategic Futures.

The views expressed in this blog are those of the author and do not reflect the official position of the Centre for Strategic Futures or any agency of the Government of Singapore.


[1] Nieto, Mariano, Francisco López, and Fernando Cruz. “Performance Analysis of Technology Using the S Curve Model: The Case of Digital Signal Processing (DSP) Technologies.” Technovation 18, no. 6 (January 1, 1998): 439–57.

[2] Amirav-Drory, Omri. “The Crowdsourced History of Techbio.” NFX. Accessed February 27, 2024.

[3] “Human Genome Project Fact Sheet,” n.d.

[4] Mullin, Emily. “The Era of Fast, Cheap Genome Sequencing Is Here.” Wired. September 29, 2022.

[5] McKie, Robin. “The Vaccine Miracle: How Scientists Waged the Battle against Covid-19.” The Guardian. December 6, 2020.

[6] Big Think. “How AI Played an Instrumental Role in Making mRNA Vaccines”. November 1, 2023.




[11] Novel capsids are unique protein shells used in gene therapy. They are designed to have specific properties, such as the ability to target certain cells or tissues more effectively, or to evade the immune system.

[12] Voyager Therapeutics Inc. “Voyager Therapeutics Presents New Data Supporting Tau Antibody Program for Alzheimer’s Disease and GBA1 Gene Therapy Program for Parkinson’s Disease at the AD/PDTM Conference”, March 28, 2023.

[13] Multi-omic combines the data sets of multiple “omes” (such as genome, transcriptome, epigenome, proteome, and microbiome). This comprehensive approach provides biological insights at multiple levels.

[14] DIGITIMES. “AI Healthcare: Moderna CIO Highlights Transformative Role at CES 2024,” January 26, 2024.

[15] “Commentary: Do you really want to find out if you’ll get Alzheimer’s?” CNA. March 4, 2024.


