“When AI makes a mistake in drug safety, the real test isn’t who to blame — it’s who takes responsibility to fix it.”
– Vrushali Negandhi
Picture this: it’s 2 a.m. in a pharmacovigilance operations room somewhere in the Gulf. The coffee is strong enough to wake the dead, and a machine learning algorithm, let’s call her Algy, has just flagged an “unusual pattern” in adverse event data. Someone stares at the screen and mutters, “Did Algy just find a signal?” Someone else shrugs, “She’s never wrong, right?”
But here’s the catch: when Algy is wrong – who takes the blame?
Welcome to the age of algorithmic accountability in drug safety. The Middle East, with its rapidly maturing healthcare ecosystem and ambitious AI strategies, is diving head-first into this ethical, regulatory, and scientific puzzle.
The uneven AI awakening
“Awareness and alignment of AI use in PV are uneven across MENA, and that’s a risk in itself.”
Dr. Ahmed Hegazy
“AI systems within pharmacovigilance are already in place and used by many companies, including their affiliates, within the MENA countries,” explains Dr. Ahmed Hegazy, MD at Pvigilant Health. “However, other companies, especially local distributors and some health authorities, are not. So the level of awareness and implementation is not the same across the pharmacovigilance community.”
That unevenness, Dr. Hegazy warns, could translate into risk. “Raising awareness and understanding of how and what to use AI for is essential… alignment of regulatory frameworks is critical for smooth implementation and minimizing risks of noncompliance.”
And he’s right: the numbers speak for themselves. The AI in pharmacovigilance market is projected to grow at a 14.6% CAGR (2025–2034) globally, with the MENA region identified as one of the fastest-emerging markets (InsightAce Analytics, 2025). Yet local adoption remains patchy, a mix of ambition and experimentation.
AI in Pharmacovigilance – 2025 Snapshot
· Global PV-AI market growth: 14.6% CAGR (2025–2034)
· GCC pharmacovigilance maturity index: Moderate to High in KSA, UAE; Emerging in Iraq, Bahrain, Oman
· 72% of surveyed PV professionals cite “ethical clarity” as the biggest AI implementation challenge.
Regulators are waking up too…
The good news? Regional regulators aren’t sitting idle.
“Some regulators within the region are already using AI in signal management to identify local signals early,” shares Dr. Hegazy. “They’re contributing, not just following, global regulatory bodies. Monthly newsletters are issued to disseminate this information to the healthcare community.”
Translation: MENA’s drug safety leaders are starting to play offense, not just defense. The shift from “reactive” to “proactive” pharmacovigilance could be a turning point in regional patient safety.
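For readers curious what “flagging a signal” actually looks like under the hood, here is a minimal sketch of a proportional reporting ratio (PRR), one of the classic disproportionality statistics that automated signal-detection systems build on. The drug name, event, and all counts below are invented purely for illustration.

```python
# Proportional reporting ratio (PRR): a classic disproportionality
# statistic used in pharmacovigilance signal detection.
# All counts below are invented for illustration.

def prr(a, b, c, d):
    """2x2 contingency table of spontaneous reports:
        a: drug of interest, event of interest
        b: drug of interest, all other events
        c: all other drugs, event of interest
        d: all other drugs, all other events
    """
    rate_drug = a / (a + b)   # event rate among the drug's reports
    rate_rest = c / (c + d)   # event rate among everything else
    return rate_drug / rate_rest

# Hypothetical counts: 20 reports pair "DrugX" with liver injury,
# 480 pair it with other events; 150 / 34_850 for all other drugs.
score = prr(20, 480, 150, 34_850)
print(round(score, 2))  # a common screening rule flags PRR >= 2
```

The arithmetic is trivial; the hard part, as the experts above stress, is what happens next: a human assessor still has to judge whether a statistically disproportionate pairing is a genuine safety signal or an artifact of reporting patterns.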
AI won’t replace judgment (Relax, humans still matter)
“AI should help us excel, not replace us. Pharmacovigilance is still deeply human work.”
Dr. Katya Nabeel Ailabouni
“Let’s not forget what pharmacovigilance really means,” smiles Dr. Katya Nabeel Ailabouni, Pharmacovigilance Officer at Emirates Drug Establishment. “It’s the science and activities related to detecting, assessing, understanding, and preventing adverse effects or any other drug-related problems. AI has become a cornerstone in transforming these activities, but it must always operate under human oversight.”
Dr. Ailabouni believes AI should amplify, not replace, human expertise. “AI enhances PV professionals and practices; it helps safety scientists excel and elevate the profession, as long as it’s used smartly and wisely.”
The takeaway? Algorithms may process a million safety reports in minutes, but empathy, context, and ethical judgment remain exclusively human features. (No offense, Algy.)
The Governance Gap: where rules meet responsibility
In Iraq, Dr. Manal Younus, Head of Pharmacovigilance at the Ministry of Health, brings structure to this conversation, literally. “The CIOMS Working Group XIV report outlines nine key principles that serve as the foundation for the safe use of AI in pharmacovigilance,” she explains. “These include governance, accountability, human oversight, transparency, explainability, data privacy, and fairness.”
She’s also pushing for systemic reform. “Universities and training programs should integrate AI ethics, governance, and risk management into pharmacovigilance curricula. Emphasizing human oversight, model validity, transparency, and fairness enables students to evaluate AI tools robustly.”
Dr. Younus’s vision is simple but profound: the next generation of PV professionals should be fluent in both regulation and responsibility.
Risk-Based AI: Ethics in Action
Alina Panourgia, Co-Founder of Composite Pharma, doesn’t mince words. “AI systems in healthcare are ranked as high-risk,” she says. “Use the risk-based AI governance frameworks already available (CIOMS, WHO, EMA), which emphasize transparency, fairness, audit trails, and human oversight.”
Her advice for the GCC is practical and bold: “Apply local privacy laws, and run bias assessments carefully based on the unique population mix. Ideally, create shared PV datasets including data from academia, regulators, and hospitals. And maybe have an AI ethics panel.”
In a region where data is fragmented across borders and institutions, her idea of shared pharmacovigilance datasets could revolutionize both signal detection and trust.
Panourgia doubles down on education too:
“Teach data privacy and governance, and focus on human-in-the-loop oversight; that’s why it’s needed. And future system engineers must have ethics and data privacy ingrained in them from day one. It should be part of their DNA.”
Global Voices, Local Lessons
Adding a global-industry perspective with regional nuance, Pav Rushiraj, Director, Head of Pharmacovigilance & UK QPPV at Ipsen, stresses the urgency of ethical integrity and inclusivity.
“As AI becomes integrated into pharmacovigilance systems, ensuring ethical integrity and safety for diverse GCC populations requires frameworks that address bias, transparency, and accountability. AI models trained primarily on Western datasets may not reflect GCC-specific genetic, cultural, and clinical variations, leading to underrepresentation and potential harm in signal detection or risk assessment.”
His proposed solution is pragmatic and forward-looking:
- Academic–Regulatory Collaboration: Partner local universities, healthcare institutions, and regulators (SFDA, DOH, NHRA) with global AI ethics experts to co-develop governance standards.
- Localized Data Stewardship: Build GCC pharmacovigilance data repositories to represent local populations in AI training.
- Ethical Oversight Boards: Establish multidisciplinary AI ethics committees within PV systems to review algorithms for bias and cultural fit.
- Transparency and Auditing: Document model decision-making and run periodic bias audits to ensure compliance.
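What might one of those periodic bias audits check in practice? A toy sketch, under loudly stated assumptions: the subgroup labels, sample counts, and the 0.8 “four-fifths-style” review threshold below are all illustrative, not taken from any real PV system.

```python
# Toy bias-audit check: compare an AI triage model's case-flag rates
# across population subgroups. Subgroup names, counts, and the 0.8
# review threshold are illustrative assumptions, not real data.
from collections import defaultdict

def flag_rate_disparity(records):
    """records: list of (subgroup, was_flagged) pairs.
    Returns ({subgroup: flag_rate}, min/max rate ratio)."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        flagged[group] += int(was_flagged)
    rates = {g: flagged[g] / total[g] for g in total}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Invented audit sample: two subgroups with different flag rates.
sample = [("A", True)] * 40 + [("A", False)] * 60 \
       + [("B", True)] * 25 + [("B", False)] * 75
rates, ratio = flag_rate_disparity(sample)
print(rates, round(ratio, 2))  # ratio below 0.8 would warrant review
```

A disparity like this doesn’t prove the model is unfair, but it is exactly the kind of quantitative trigger an ethics oversight board would want documented and investigated, which is the point of the auditing bullet above.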
“In essence,” Pav concludes, “sustainable AI in GCC pharmacovigilance demands a co-created ethical ecosystem, combining global AI ethics with regional diversity safeguards.”
So… When Algy Hurts, Who Heals?
Let’s be honest — it’s complicated. Liability doesn’t sit neatly with one entity. When an AI-driven PV system fails to flag a critical adverse event, accountability could fall on:
- the developer who trained the model,
- the company that deployed it,
- the regulator that approved it, or
- the human operator who trusted it blindly.
But maybe the answer isn’t who’s responsible, but how responsibility is shared.
The Middle East’s emerging PV ecosystem, if guided by fairness, transparency, and collaboration, can become a model for ethical AI worldwide. Because while algorithms may crunch the data, humans still carry the conscience.
Final Word: Hope, Humor & Human Oversight
So next time Algy sends an alert at 2 a.m., let’s thank her, but also double-check her homework.
Because in the race toward AI-powered pharmacovigilance, the Middle East has something the machines don’t: the wisdom to pause, question, and correct course. And that’s precisely what will keep patient safety, and trust, intact.
The future of pharmacovigilance in the Middle East won’t be written by code alone — it will be co-authored by regulators, safety scientists, ethicists, and yes, a few brave algorithms learning their way into trust.
And that conversation is just getting started.
This January 21–22 at the Grand Mercure Dubai City, the 5th Edition Drug Safety Symposium 2026 – Middle East Chapter will bring together PV leaders, regulators, and AI innovators to unpack exactly this: from automation in case management to the ethical governance of machine learning models.
Because as we stand on the edge of AI-driven pharmacovigilance, one truth remains clear:
Technology may accelerate vigilance, but integrity will always define it.
