On where AI can make the biggest impact:
When you look at AI in discovery and also in development, you’re really talking about improving the probability of success, which is the yield (10% at best; maybe 20% for genetic medicines), and then you’re looking at efficiencies: can you do this better and faster? The focus of the next few years needs to be on that success rate, because that is the biggest challenge. The cost of making medicines is high because of the cost of failure.
Najat Khan, Recursion
Watch: From science fiction, to science facts, to science hype: reality check
On planning ahead to make sure models actually get used:
We’ve heard numerous examples of publications that are out there, including in diagnostics and the medical space, where the models are not getting deployed for various reasons. There is a lot of opportunity to think, earlier and up front, about how you would deploy the model and how you would integrate it back into the real world. And that’s not only in the diagnostic space, which is obviously fairly complex with regulatory and reimbursement issues; it also pertains to things like workflow improvements.
Martin Stumpe, Danaher
Watch: AI in action: moving beyond hype to meaningful impact
On keeping up with the pace of technology:
The key next challenge is that the regulatory spectrum has not kept pace with how fast these developments are happening. Digital pathology adoption in the U.S. is around 4% for clinical use, which is really low. And I think a key reason is there hasn’t been a good, acceptable use case (an FDA-approved device) for primary diagnosis. That’s preventing us from getting to that adoption.
Faisal Mahmood, Harvard Medical School
Watch: Beyond the hype: advancements in protein modeling, digital pathology, and human virtual models
On a critical hinge point for the healthcare delivery business:
We focus a lot on cost, but value-based care is quality over cost. And we need to do two things at the same time: we need to lower costs and we need to improve outcomes. Well, it’s straightforward to measure costs. Coming up with ways of measuring quality is much harder, and until we get better answers to that, I think we’re going to be stuck in the position we are in now. Can AI be the tool by which we measure quality? Maybe. I haven’t seen it yet, but I hope it happens.
Anthony Philippakis, GV
Watch: Lost in translational AI: what is the true impact in human health?
On the importance of trust:
We’re just emerging out of a pandemic that showed us that science can deliver miracles and that we have the tools to save lives, and yet people reject those tools (I’m talking about the vaccines, of course) because of a lack of trust. So it’s not just about getting there. It’s about getting there in a way that people trust, whether it’s because their doctor told them or because they trust institutions to have appropriate governance.
Vardit Ravitsky, The Hastings Center
Watch: Architecting the rules of the game: ethics and law
On how AI can make medicines better from the beginning:
Most programs fail in the clinic because of the early stages of the decision-making process. That is, we are very poor at selecting the right therapeutic hypotheses, the right targets to modulate, and the right patient population. So this seems like a very important place where we would want to deploy this AI technology to try to make the process better.
Daphne Koller, insitro
Watch: Machine learning for better medicines
These quotes were edited for length and clarity.