GE HealthCare’s artificial intelligence chief offers medtech AI tips

GE HealthCare Chief AI Officer Parminder Bhatia discusses medtech artificial intelligence development, safety and regulations.

GE HealthCare Chief AI Officer Parminder Bhatia [Photo courtesy of GE HealthCare]

Following the first part of our interview with GE HealthCare Chief AI Officer Parminder Bhatia, in which he discussed the device developer’s use of artificial intelligence and his vision for the technology, this second part offers advice from Bhatia and his team to help others take advantage of AI in medtech.

The following has been lightly edited for clarity and space.

MDO: What are regulators looking for when reviewing AI in medtech, and do you have any unique advice to help device developers meet the bar?

Bhatia: “Regulators play an important role, and we share their focus on patient safety. That’s why we incorporate our Responsible AI principles at every stage of our product development, which include a focus on safety, validity, transparency, explainability, and fairness. Be intentional at every step in layering in tests, checks, and safeguards. You can’t retrofit safety. It has to be there from the start.”

Related: Explainable AI lessons from the developers of the EarliPoint Evaluation for autism

How do you expect regulations around medtech AI will change in the coming years?

Bhatia: “Oversight is essential, and I expect we’ll see regulations around medtech AI continue to be honed over the coming years. Regulators are already working closely with industry experts to fully understand both the opportunities and the complexities that come with technologies like generative and agentic AI. As part of that, I expect that we’ll see more emphasis on transparency, explainability, and lifecycle monitoring, as well as mechanisms for continuous validation in real-world use. At GE HealthCare, we welcome that direction because it aligns with how we already work. Our Responsible AI principles — safety, validity, transparency, explainability, and fairness — are built in from the very beginning of product development. That’s how we ensure our innovations not only meet today’s requirements but are prepared for the regulatory expectations of tomorrow.”

How can medical device developers build trust in AI?

Bhatia: “There are two big things. The first is making sure the solutions are safe and effective, built with responsible AI principles in mind. The second is education, and ensuring care teams understand how the solutions work, including usage but also explainability and the safeguards.”

Related: Five tips from Philips for building trust in medtech AI

Do you have any mantras, mottos, or often-repeated phrases you use with your teams or partners when talking about AI and digital tech?

GE HealthCare says its Vscan Air CL system’s AI-powered algorithm “delivers fast, reliable bladder volume measurements while maintaining clear visualization of other pelvic anatomy,” which helps reduce unnecessary catheterizations. [Image courtesy of GE HealthCare]

Bhatia: “There are a few guiding phrases I repeat often. One is: ‘Start with the problem, not the technology.’ At GE HealthCare, we work backward from the toughest challenges our customers face, whether it’s reducing clinician burnout, improving workflow efficiency, or expanding access to care. Technology is a means to an end, not the end itself.

“Another is: ‘This is healthcare’s iPhone moment.’ Just as the smartphone unlocked entirely new ways of interacting with technology, I believe foundation models and agentic AI will redefine how care teams use data, making care more proactive, personalized, and accessible.

“For example, access is one of the biggest global challenges. With handheld devices like our Vscan Air combined with Caption AI, a nurse in a rural clinic can capture diagnostic-quality ultrasound images without waiting for a specialist. That’s what it means to start with the customer problem — limited access to expertise — and then use technology as the enabler to close that gap.”

Related: A physicist at GE HealthCare explains how imaging can advance cancer and brain care

What have you learned about AI that might help other device developers innovate or succeed with their own projects?

Bhatia: “One of the most important lessons I’ve learned about AI is that innovation takes time, persistence, and iteration. Early in my career, during graduate school, I worked on developing advanced MRI technology. At the time, it felt like we were building something far ahead of where the industry was, but that experience taught me that groundbreaking ideas often take years before they find the right moment and infrastructure to scale.

“That perspective is very relevant to AI today. Device developers should know that success isn’t about building the flashiest algorithm. It’s about creating something clinically meaningful, validating it rigorously and being patient enough to see it through.

“I’ve also learned the importance of collaboration. The MRI work I did was not in isolation. It was part of a larger community of researchers, clinicians, and engineers. The same holds true with AI in healthcare. No single person or company has all the answers, but when we bring together expertise across academia, industry, and clinical practice, that’s when real breakthroughs happen.”

Related: Advice from J&J MedTech’s global digital head on understanding user needs, building trust in AI, digitization efforts and more
