(This article originally appeared at Forbes.com)
Artificial intelligence (AI) has become a hot topic in the life sciences lately. With a growing number of groundbreaking AI use cases in other high-tech industries -- ranging from self-driving cars to speech and image recognition tools to personal assistants (you know Siri, don’t you?) -- players in the biopharmaceutical industry are looking toward AI to speed up drug discovery, cut R&D costs, decrease failure rates in drug trials and, ultimately, create better medicines.
And because there is clearly a fair amount of hype surrounding AI, the public has already developed a degree of skepticism about its real value.
To realistically assess the current state of the AI field in biopharma, let’s review some of the historical drivers of progress in this area.
Despite its halo of futurism, the field of AI is relatively old, with its official birth at the famous Dartmouth College conference in 1956. Fifteen years later, AI made its way into the medical field amid growing public interest and hype. Several biomedical AI-based systems were developed during the 1970s, including Internist-1, CASNET and MYCIN. The high expectations were not met, however, and in 1973, Sir James Lighthill’s report to the British government delivered a damning assessment of the field’s progress. Government and public interest in AI cooled, which led to shortages in research funding.
In the early 1980s, interest was rekindled by the creation of AI-based “expert systems,” which were quickly adopted worldwide. The first system of this kind, XCON, was a staggering commercial success that spawned a multimillion-dollar industry by 1985. Two years later, the market for specialized AI products collapsed, and the so-called “AI winter” set in, with little hope that the field would ever re-emerge as a mainstream topic. Computer scientists would often avoid any association with AI so as not to be regarded as “wild-eyed dreamers,” as John Markoff wrote in The New York Times in 2005.
The downturns in AI progress in the late 1970s and 1980s stemmed from an immature technological environment. Only in the early 1990s did the field gradually advance, and it soon flourished on the waves of exponential growth in computational power (Moore's law), data communication (the internet), cloud technologies (Salesforce, AWS, cloud apps, etc.) and big data (the “big data revolution”). One clear lesson from this history is that the availability of sufficient data to train AI models is crucial for breakthroughs in any field.
In 1997, IBM's supercomputer Deep Blue defeated Garry Kasparov in chess, marking the first in what would be a series of milestones for AI. Since 2012, progress in AI has accelerated dramatically following a major breakthrough in deep learning with neural networks. That year, the press was abuzz with reports of how a computer managed to learn to recognize cats from a large collection of YouTube videos without any prior instruction about cats.
Big Pharma’s Bet On AI
AI has exciting opportunities to prosper in the biopharmaceutical field. Advances in combinatorial chemistry in the 1990s generated many millions of novel chemical compounds for testing as possible drugs. This stimulated the development of high-throughput screening (HTS) techniques to perform such testing on relatively short timescales, generating numerous public and private databases of compound bioactivities and toxicities. Simultaneously, rapid progress in biology unfolded in the 1990s, with advances in gene sequencing and “multi-omics” studies leading to the accumulation of billions of data points describing genes, proteins and metabolites, and mapping the interconnections between different biochemical processes and their phenotypic manifestations.
The availability of big data in the life sciences, combined with rapid progress in deep neural networks, has led to a wave of AI-based drug discovery startups sweeping through the biopharma industry over the last three years. A number of significant AI-big pharma collaborations were announced in 2016-2017, including Pfizer and IBM Watson, Sanofi Genzyme and Recursion Pharmaceuticals, and GSK and Exscientia, among others.
Sifting Through The Hype
While multiple media outlets continue to rave about how AI is the future of healthcare, the technology has yet to prove itself in the biopharmaceutical industry. As of today, there are no AI-inspired, FDA-approved drugs on the market. It is also important to realize that while AI-based data analytics can bring innovation to every stage of the drug discovery and development process, it will not magically substitute for chemical synthesis, laboratory experiments, clinical trials, regulatory approvals and production. What AI can do, though, is optimize and speed up R&D efforts, minimize the time and cost of early drug discovery, and help anticipate possible toxicity risks or side effects early enough to hopefully avoid tragic incidents in late-stage human trials. It can also help incorporate knowledge derived from genomics and other biological disciplines into drug discovery, yielding revolutionary ideas for drugs and therapies.
AI failed to deliver on its promise in the 1970s and 1980s, but the situation is now fundamentally different. The industry has access to sufficient computational power, commercially available cloud-based services, and big chemical and biological datasets to train models. This technological environment, in conjunction with existing deep learning techniques, provides exciting opportunities for major industry disruption in the next three to five years. The need to start embracing AI technologies and revamping human-resource strategies to create data science-driven interdisciplinary teams has become a matter of future business sustainability for biopharma organizations.