Biology has traditionally been defined by careful observation, slow experimentation and a reliance on discovery rather than design. It dealt with highly variable living systems and was often described as a messy, unpredictable science. Artificial intelligence (AI) is changing that.
Nvidia’s chief executive Jensen Huang recently explained that biology is entering a new era. In interviews and keynotes he argued that once biological systems become predictable and reproducible, they can be designed and built like machines. AI’s ability to model proteins and pathways, predict outcomes and automate experimentation creates an engineering‐like compounding feedback loop. Huang noted that such feedback loops enabled the digital revolution, and he believes the same will happen in life sciences. His observation is shared by other technologists who see biology “becoming engineering, not just science”. This article explores how AI is reshaping biology, examines the design‑build‑test‑learn workflow, highlights recent case studies, discusses ethical and regulatory challenges, and explains the implications for society.
Why Biology Feels Like Engineering Now
Predictive models and protein design
AI and machine learning models are making biological systems predictable. DeepMind’s AlphaFold has released a database of more than 200 million protein structures—essentially the entire catalog of known proteins—and made them freely available. This resource, along with generative protein design tools, allows researchers to predict three‑dimensional shapes and engineer new proteins instead of discovering them by trial and error. Other AI systems, such as protein language models and diffusion models, can generate completely new sequences with desired properties.
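For readers who want to poke at this resource directly, the sketch below retrieves the predicted structure record for a given UniProt accession. It assumes Python's requests library and the public REST endpoint documented at alphafold.ebi.ac.uk; the response field names are an assumption based on that documentation and may change between releases.

```python
import requests

def fetch_alphafold_prediction(uniprot_id: str) -> dict:
    """Fetch AlphaFold's predicted-structure record for a UniProt accession.

    Uses the public REST endpoint documented at alphafold.ebi.ac.uk.
    Field names reflect the current API and may change between releases.
    """
    url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_id}"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    record = resp.json()[0]  # one record per accession in current releases
    return {
        "accession": record.get("uniprotAccession"),
        "pdb_url": record.get("pdbUrl"),  # downloadable coordinates (PDB format)
        "confidence": record.get("globalMetricValue"),  # mean pLDDT score
    }

# Example: human hemoglobin subunit beta (UniProt accession P68871)
print(fetch_alphafold_prediction("P68871"))
```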
Predictive modelling extends beyond proteins. In genome editing, the early CRISPR–Cas9 system used simple statistical models to select guide RNAs, but by 2014 researchers were already employing machine‑learning algorithms to enhance single‑guide RNA (sgRNA) design. Since then, scientists have combined transcriptomic, proteomic and epigenomic data to build models that assess the performance and risks of gene editing across different cell types. AI now helps match individual patients with optimal targets, predict off‑target effects and optimize prime editing strategies. As a result, genome editing moves from guesswork to rational design.
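To make the idea concrete, here is a toy sketch of what a learned sgRNA scorer looks like, using scikit-learn. The features, training data and guide sequences are invented for illustration and are far simpler than the published models discussed above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

NUC = {"A": 0, "C": 1, "G": 2, "T": 3}

def featurize(guide: str) -> np.ndarray:
    """One-hot encode a 20-nt guide plus its GC content (a toy feature set;
    real sgRNA models use richer position and sequence-context features)."""
    onehot = np.zeros((len(guide), 4))
    for i, base in enumerate(guide.upper()):
        onehot[i, NUC[base]] = 1.0
    gc = (guide.upper().count("G") + guide.upper().count("C")) / len(guide)
    return np.append(onehot.ravel(), gc)

# Hypothetical training data: guides paired with measured cutting efficiency.
guides = ["GACGTTACCGGATTACCGGA", "TTTTAAGCATCGGATCCAAT", "GGGCCCGTAACGTTAGGCAT"]
efficiency = [0.82, 0.31, 0.67]

X = np.array([featurize(g) for g in guides])
model = GradientBoostingRegressor().fit(X, efficiency)

# Rank a new candidate guide by predicted efficiency.
print(model.predict([featurize("GATTACAGATTACAGATTAC")]))
```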
Exponential data and high‑throughput automation
Engineering disciplines thrive on data‑driven iteration. Biology now benefits from huge datasets—genomic sequences, gene‑expression profiles, proteomic measurements, imaging data and more. Machine learning algorithms mine these datasets to discover patterns and propose new designs. For example, research groups like the Carbonell Lab emphasize that automation and machine learning can drive every stage of the design‑build‑test‑learn (DBTL) cycle, from robotic assembly and microfluidic growth to experimental planning and predictive modelling. Large language models tuned to scientific data, such as SciQA, together with generative models for molecules, enable rapid hypothesis generation and design suggestions.
AI also powers self‑driving laboratories. A University of Illinois project combined an AI algorithm that proposed enzyme mutations with robotics to build and test thousands of variants. A second AI system then learned from the results and fed the next design cycle. Within a few iterations, the team improved an enzyme’s activity 26‑fold, demonstrating how AI‑driven automation can accelerate discovery. Héctor Garcia Martín from Berkeley Lab notes that synthetic biology is traditionally slow and laborious, but coupling machine learning with robotics and microfluidic automation can speed up the process by roughly twenty‑fold. Such closed‑loop systems turn biology into an engineering process where designs are rapidly tested, evaluated and improved.
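It is worth pausing on what that compounding means numerically. A quick back-of-envelope calculation shows the per-cycle gain implied by a 26-fold overall improvement; the round counts below are assumptions for illustration, since the study only says "a few iterations."

```python
# If activity improves 26-fold "within a few iterations", what is the
# per-cycle gain? The round counts below are assumptions for illustration.
fold_improvement = 26
for rounds in (3, 4, 5):
    per_cycle = fold_improvement ** (1 / rounds)
    print(f"{rounds} rounds -> {per_cycle:.2f}x gain per cycle")
```

Even modest per-cycle gains, sustained by automation, multiply into large improvements quickly, which is exactly the dynamic engineering disciplines exploit.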
Design‑build‑test‑learn (DBTL) becomes routine
The DBTL paradigm has long been used in synthetic biology. AI accelerates each step (a minimal code sketch of the full loop follows the list):
- Design. Algorithms generate thousands of candidate molecules or genetic constructs. The National Academies of Sciences reports that AI‑enabled tools can produce data‑driven insights and generate novel candidates within days, a task that would otherwise take years. Large language models like CRISPR‑GPT and ChemCrow assist scientists in finding information, automating literature searches and proposing hypotheses.
- Build. Robotic platforms assemble DNA sequences, grow organisms and express proteins. Automated labs use robotics, 3D printing and microfluidics to build thousands of designs in parallel.
- Test. High‑throughput assays measure performance. Machine learning models analyze results, update predictions and identify promising variants. Self‑driving labs close the loop by feeding test results back into the design stage.
- Learn. AI discovers patterns and refines models. The iterative cycle enables compounding improvements, similar to how software engineering benefits from feedback.
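As promised above, here is a minimal skeleton of such a closed DBTL loop. Everything in it is a stand-in: single-site random mutagenesis in place of a generative design model, random scores in place of robotic assays, and a lookup table in place of a trained predictor. The point is the shape of the loop, not the components.

```python
import random

def propose_designs(model: dict, parent: str, n: int) -> list[str]:
    """Design: generate candidate variants. Toy single-site random
    mutagenesis stands in for a generative model; a real system would
    use `model` to bias proposals toward promising regions."""
    alphabet = "ACDEFGHIKLMNPQRSTVWY"
    variants = []
    for _ in range(n):
        pos = random.randrange(len(parent))
        variants.append(parent[:pos] + random.choice(alphabet) + parent[pos + 1:])
    return variants

def build_and_test(variants: list[str]) -> dict[str, float]:
    """Build + Test: in a real biofoundry this dispatches to robotic
    assembly and high-throughput assays; here random scores stand in."""
    return {v: random.random() for v in variants}

def learn(model: dict, results: dict[str, float]) -> dict:
    """Learn: fold new measurements into the model. A lookup table here;
    real systems retrain a predictor on all accumulated data."""
    model.update(results)
    return model

model: dict[str, float] = {}
best = "MKTAYIAKQRQISFVKSHFSRQ"  # toy parent sequence
for cycle in range(5):
    designs = propose_designs(model, best, n=24)
    model = learn(model, build_and_test(designs))
    best = max(model, key=model.get)
    print(f"cycle {cycle}: best score so far {model[best]:.3f}")
```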
This cycle, once the domain of hardware engineers, is now standard practice in biofoundries and synthetic biology labs.
Better odds of success in drug discovery
AI is already shortening drug development timelines. Insilico Medicine’s anti‑fibrotic drug for idiopathic pulmonary fibrosis moved from target discovery to Phase I trials in under 30 months, whereas typical pathways take 3–6 years. A 2024 analysis noted that AI‑discovered molecules in Phase I trials have a success rate of 80–90%, significantly higher than the historic industry average of 40–65%. Recursion Pharmaceuticals reported that its AI platform took 18 months to move a molecule into clinical testing—a process that normally takes about 42 months. Such improvements suggest that AI can double R&D productivity.
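The quoted figures are easy to sanity-check with a few lines of arithmetic, using only the numbers cited above:

```python
# Back-of-envelope arithmetic on the figures quoted above.
conventional_months, ai_months = 42, 18
print(f"Timeline speedup: {conventional_months / ai_months:.1f}x")  # ~2.3x

ai_success, historic_success = (0.80, 0.90), (0.40, 0.65)
low = ai_success[0] / historic_success[1]   # most conservative comparison
high = ai_success[1] / historic_success[0]  # most generous comparison
print(f"Phase I success uplift: {low:.2f}x to {high:.2f}x")
```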
Predictability and compounding innovation
Engineering disciplines are powerful because innovations build upon each other. Once a problem is solved, the solution becomes a building block for future designs. Huang believes biology is entering a similar phase. Tools like AlphaFold allow researchers to start with a precise protein structure instead of solving it from scratch. Gene‑editing models can design new CRISPR proteins by fine‑tuning large pre‑trained protein language models. Automated labs store every failed and successful experiment, enabling continuous learning and rapid reuse of parts and protocols. As more biological components become predictable, innovation should compound and accelerate, just as in software or semiconductor engineering.
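To illustrate what "fine-tuning a pre-trained protein language model" involves in practice, here is a minimal sketch assuming the Hugging Face transformers library and the small public ESM-2 checkpoint. The sequences and training setup are toy stand-ins; a real pipeline would add token masking, validation splits and vastly more data.

```python
import torch
from transformers import AutoTokenizer, EsmForMaskedLM

# Small public ESM-2 checkpoint; the sequences and training setup below
# are toy stand-ins, not a recipe for designing CRISPR proteins.
name = "facebook/esm2_t6_8M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(name)
model = EsmForMaskedLM.from_pretrained(name)

sequences = ["MKTAYIAKQRQISFVKSHFSRQ", "MENDELWASGREAT"]  # toy data
batch = tokenizer(sequences, return_tensors="pt", padding=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for step in range(3):  # a handful of steps, purely for illustration
    # Note: real masked-LM training masks a fraction of tokens first;
    # for brevity this computes the loss over all visible tokens.
    out = model(**batch, labels=labels)
    optimizer.zero_grad()
    out.loss.backward()
    optimizer.step()
    print(f"step {step}: loss {out.loss.item():.3f}")
```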
Ethical Risks and Societal Implications
The engineering of biology using AI offers extraordinary promise but also introduces significant risks. Scholars and biosecurity experts highlight several concerns.
Biosecurity and dual‑use threats
AI systems that design or optimize biological systems could be misused to create harmful agents. A review in Frontiers in Bioengineering and Biotechnology notes that AI lowers technical barriers and thus amplifies the dual‑use potential of biotechnology, making it easier for malicious actors to design novel pathogens. The technology transforms knowledge that required advanced expertise into something accessible through AI tools. This raises fears of bioweapons or other malicious uses.
The Frontier Model Forum—a consortium of AI companies—addresses these concerns by categorizing safeguards into several layers:
- Capability limitation. Alter model training or data so that models do not acquire dangerous knowledge.
- Behavioral alignment. Train models to refuse harmful requests and shape responses to avoid enabling misuse.
- Detection and intervention. Monitor inputs and outputs to detect suspicious use and block unsafe content.
- Access control. Restrict who can use a model and what capabilities they can access.
- Supporting ecosystem mitigations. Provide tools and information to governments and other actors to implement broader defenses.
These layers must operate together, and developers need to share research on which techniques are effective. The issue brief emphasizes that safeguards must evolve as AI capabilities grow and that no single measure is sufficient.
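To make the layering concrete, the sketch below composes hypothetical versions of these safeguards in code. The function names, watchlist and logic are invented for illustration and do not reflect any vendor's actual implementation.

```python
# An illustrative (entirely hypothetical) composition of the safeguard
# layers described above: each request passes through every layer, and
# any one of them can stop it.
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    prompt: str
    user_is_vetted: bool

BLOCKED_TERMS = ["enhance pathogen transmissibility"]  # stand-in watchlist

def access_control(req: Request) -> bool:
    # Layer: restrict who may use high-capability features.
    return req.user_is_vetted

def detect_misuse(req: Request) -> bool:
    # Layer: monitor inputs for suspicious content.
    return not any(term in req.prompt.lower() for term in BLOCKED_TERMS)

def behavioral_alignment(response: str) -> str:
    # Layer: stand-in for a model trained to refuse harmful requests.
    return response

def handle(req: Request) -> str:
    for check in (access_control, detect_misuse):
        if not check(req):
            return "Request declined by safeguard layer."
    return behavioral_alignment("...model output...")

print(handle(Request("u1", "Summarize CRISPR delivery methods", True)))
```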
Data privacy and algorithmic bias
AI models rely on massive amounts of genetic and health data, which raises privacy issues. An article in the Avicenna Journal of Medical Biotechnology warns that mishandling such sensitive data can lead to privacy violations and that consent and data ownership must be addressed. Bias is another concern. If training data does not represent all populations, AI models might reinforce health disparities, misdiagnose certain groups or propose ineffective treatments.
Transparency and explainability
Many AI systems operate as “black boxes.” Researchers and regulators worry about using opaque algorithms for clinical decisions. The same Avicenna review highlights the need for transparency and interpretability to gain public trust and allow proper oversight. Without explainable AI, decisions about treatments, gene edits or diagnostics could be difficult to justify.
Ethical boundaries and equitable access
AI enables unprecedented genetic manipulation. This raises questions about how far humans should go in editing genomes or designing new organisms. The Avicenna review urges policymakers to consider the ethical boundaries of genetic manipulation and to ensure that the benefits of AI-driven biotechnology are equitably distributed, preventing new technologies from widening social inequalities.
Evolving Regulatory Frameworks
Regulators around the world are scrambling to keep up with rapid AI advances in biotechnology. Recent guidance documents and policy plans illustrate the emerging framework.
United States
In January 2025 the U.S. Food and Drug Administration (FDA) released a draft guidance titled “Considerations for the Use of Artificial Intelligence To Support Regulatory Decision‑Making for Drug and Biological Products.” It recommends that drug developers use a risk‑based credibility assessment when presenting AI-generated evidence. The guidance suggests evaluating model reliability, relevance to the specific context of use and the quality of input data. The document aims to ensure that AI‑generated analyses used to support safety, efficacy or quality are trustworthy.
Commentators note that the FDA needs additional authority and resources. A 2025 perspective in Communications Medicine argues that regulators must invest in capacity and capability, developing specialized AI teams and ensuring that staff can evaluate complex models. The same article calls for legislative action to create statutory bodies to regulate AI-enabled therapeutics and warns that otherwise, litigation and legal challenges could slow down innovation.
European Union
The European Medicines Agency (EMA) and the Heads of Medicines Agencies (HMAs) adopted an AI workplan for 2023–2028. The plan aims to maximize benefits and manage risks, focusing on four dimensions: guidance and policy support, adopting AI tools and technologies, collaboration and training, and structured experimentation. The workplan emphasizes that AI tools must comply with data protection laws and that regulators will regularly update policies as the technology evolves.
China, Japan and Canada
Other governments are also formulating strategies. A 2025 perspective notes that China’s New Generation AI Development Plan aims for global leadership by 2030 and includes draft regulations addressing data security and algorithm transparency. Japan’s AI Strategy 2019 and Society 5.0 initiative integrate AI across society, emphasizing ethical development. Canada’s Pan‑Canadian AI Strategy and a Directive on Automated Decision‑Making focus on fairness and accountability. The article notes that many regulators have not yet fully clarified how they will regulate AI-generated platforms and products, and it calls for harmonized global standards.
Harmonization efforts
The same perspective stresses that countries risk creating incompatible rules and urges a unified approach similar to the International Conference on Harmonization (ICH) for pharmaceuticals. Some progress has been made: the FDA, UK’s MHRA and Health Canada have collaborated on guiding principles for AI-enabled medical devices, including real‑time performance monitoring and transparency. Further harmonization is necessary to support global development and deployment of AI‑driven therapeutics.
The Road Ahead: Opportunities and Cautions
The integration of AI into biology signals a profound shift. When proteins, genomes and cells become predictable and designable, biology takes on the qualities of engineering. Jensen Huang’s assertion that biology will become an engineering discipline encapsulates this transition. AI tools—predictive models, automated labs and data-driven design—transform biological research from an exploratory science into an iterative, compounding process.
Yet society must balance innovation with responsibility. The promise of faster cures, renewable bio‑manufacturing and personalized medicine should not blind us to the perils of dual‑use misuse, privacy violations or algorithmic bias. Regulators worldwide are beginning to address these issues, but policies must adapt quickly and globally. Ethical frameworks should accompany technical safeguards to ensure equitable and safe deployment of AI-powered bioengineering.
In the coming decade, the line between a biologist and a software engineer will blur. High school students may design novel enzymes in silico, and clinicians could use AI‑generated genome edits to treat diseases. Whether this future leads to better health and environmental sustainability or to new risks depends on how researchers, companies, regulators and the public navigate this transformation. AI doesn’t just make biology faster—it demands that we treat life itself as an engineering project, with all the responsibility that entails.