As part of Aspire 2025’s spotlight panel on AI in life sciences, leaders from across the industry came together to unpack what’s working—and what still needs work—when it comes to AI adoption. Mehdi Sarmady, VP of Genomic and Data Sciences at Spark Therapeutics, brought a unique perspective grounded in real-world success stories and deep technical fluency.
Moderated by John Seffernick, SVP of Strategy at Stellix, the conversation explored how AI is enhancing innovation, productivity, and regulatory preparedness in next-generation gene therapy development.
John: Where have you seen AI actually move the needle in your work?
Mehdi: At Spark, we’ve seen two high-impact applications. First, in protein design—especially for gene therapy capsid engineering—we’re now using AI models to design capsids optimized for tissue targeting. What used to take years can now be generated at the click of a button, thanks to a lab-in-the-loop system built on fine-tuned protein foundation models.

Second, we’ve dramatically reduced time spent on report writing and regulatory documentation by automating repeatable text-generation workflows. It’s not flashy, but it saves time, boosts quality, and improves consistency across study reports.
John: Everyone’s talking about ROI. But how do you justify AI investment at the enterprise level?
Mehdi: That’s the hard part. AI is still in the experimentation phase for many enterprises. At Roche, for example, we had multiple instances of GPT-based tools running in different silos. The real challenge was value realization. After a couple of years, leadership started asking: “What did we really gain from all this?”

The lesson? Start with targeted use cases, not FOMO. It’s okay to be a late adopter if it helps you avoid dead ends—and AI is moving so fast that today’s complexity might become tomorrow’s commodity.
John: Trust and adoption go hand in hand. How do you build buy-in across your organization?
Mehdi: It has to be both top-down and bottom-up. At Roche, we ran an “AI Chatbot Challenge” that gave every employee access to a secure AI sandbox where they could build bots without writing a single line of code. People created bots for everything from report writing to key opinion leader sentiment analysis.

That exercise helped employees visualize what was possible and sparked a culture shift around AI experimentation—without the fear of judgment or failure.
John: How do you see the role of autonomy evolving in AI-driven life science operations?
Mehdi: I think we’re following the same curve as autonomous vehicles—first, you have to keep your hands on the wheel; eventually, the system earns enough trust to go fully hands-off.

In our case, we’re already using AI to autonomously generate regulatory reports—but only where we’ve built the data foundation and oversight structures to support that. AI is only as trustworthy as the environment it operates in. That’s why digital infrastructure and explainability are so critical.
John: What about compliance and data security? How do you manage risk?
Mehdi: This is the next frontier. Enterprises need to evolve their IT and governance models to handle the sensitivity of AI-generated knowledge. The keys are clear audit trails, prompt transparency, and training.

We’re thinking about things like digital watermarks or labels that show which documents were generated or assisted by AI. That way, when regulatory bodies come knocking, there’s clarity—and accountability.
Final Takeaway
For Mehdi Sarmady, the takeaway is clear: AI is already transforming how life sciences companies operate—but success depends on foundations, not flash.

“With the right data, the right oversight, and the right people, AI isn’t just a buzzword—it’s a force multiplier for discovery, productivity, and compliance.”