Joint Clinical Assessments: bureaucratic nightmare, AI opportunity, or both?
Market access specialists will know that the exact process for developing a Joint Clinical Assessment (JCA) under the EU’s new Health Technology Assessment regulation has been hotly debated, and that the additional burden in individual markets has been unclear. Now, at a post-ISPOR panel discussion, Spain’s Sonia García Pérez has given her view, and it’s a showstopper. The head of the Division for EU and International Affairs at the Spanish Medicine Agency said, “We estimate that creating a JCA will require three to four times more resources than the current IPT [therapeutic positioning report].” García Pérez went on to suggest that Spain would be able to manage this surge in evaluation demand, in part because of the semi-autonomous status of its regions and additional EU support.
However, from January 2025 the additional requirements will fall on all EU markets for new treatments in the pilot therapy areas, including oncology, advanced therapy medicinal products, certain medical devices and in vitro diagnostic tools – and not everywhere is as confident as Spain. This has led many in the industry to speculate on how experts on both sides of the regulatory fence will manage a 300–400% increase in workload, in what was intended to be an adjunct that would accelerate market access. In one of those interesting coincidences of history, some believe salvation might be just on the horizon, in the form of artificial intelligence (AI).
On the horizon, or over it?
Commercially available, general-purpose generative AI, along the lines of ChatGPT, does not seem a good fit. The well-known problems of AI inaccuracies and elaborate hallucinations, and the inability of most large language models (LLMs) to adequately reference or audit summarized or contextualized outputs, do not tally with the precision that JCA demands. Credibility is vital – a hallucination in one aspect of one market’s JCA could bring the entire process into disrepute and, crucially, delay patient access to vital drugs.
In any event, building a JCA is not a single authoring process. Amiculum's market access specialists have been deconstructing the likely JCA development workflows and, with one eye on our own internal AI experimentation, have been looking for credible and useful AI-enabled shortcuts. We believe we have found several areas in which specialist AI tools could assist.
No ordinary AI…
Amiculum's solution is to use a different approach to AI analysis and authoring: retrieval-augmented generation (RAG). RAG can be complicated to explain, but essentially there are two elements to understand. First, a specific data set (eg a large range of clinical/research data) is converted into a searchable, indexed database – often described as a ‘data pipeline’ – which becomes the critical reference source for downstream outputs. Specific external (online) resources can also be integrated into the pipeline. Second, an LLM (typically an enterprise version of ChatGPT or another commercially available system) is targeted specifically at the data pipeline to analyse, organize and synthesize new content, based on our relevant prompting. This addresses the hallucination and relevance concerns described above for general-purpose LLM use, providing a secure process with relevant, citable outputs. It also means that sensitive client information can be retained in a separate secure database and not shared externally.
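To make the two-step idea concrete, the minimal sketch below shows the shape of a RAG workflow: retrieve the most relevant passages from an indexed data set, then assemble them into a prompt that instructs the LLM to answer only from those citable sources. This is an illustrative simplification, not Amiculum's actual pipeline – the document names and the word-overlap scoring are hypothetical stand-ins; a production system would use embeddings, a vector store and an enterprise LLM endpoint.

```python
# Illustrative sketch of the retrieval step in a RAG pipeline.
# Corpus contents and scoring are hypothetical; real pipelines use
# embedding-based search, then send the prompt to an enterprise LLM.

from collections import Counter


def tokenize(text):
    # Lowercase and strip basic punctuation for crude term matching
    return [w.lower().strip(".,?") for w in text.split()]


def score(query_tokens, doc_tokens):
    # Simple relevance score: total occurrences of query terms in the document
    counts = Counter(doc_tokens)
    return sum(counts[t] for t in set(query_tokens))


def retrieve(query, documents, top_k=2):
    # Rank the indexed 'data pipeline' by relevance and keep the top passages
    q = tokenize(query)
    ranked = sorted(documents.items(),
                    key=lambda kv: score(q, tokenize(kv[1])),
                    reverse=True)
    return ranked[:top_k]


def build_prompt(query, documents):
    # Prepend retrieved, citable passages so the LLM answers from the
    # indexed data set rather than from its general training data
    passages = retrieve(query, documents)
    context = "\n".join(f"[{name}] {text}" for name, text in passages)
    return (f"Answer using only the sources below, citing each by name.\n"
            f"{context}\n\nQuestion: {query}")


# Hypothetical indexed data set of clinical evidence snippets
corpus = {
    "trial_A": "Phase 3 trial of drug X showed improved overall survival in oncology patients.",
    "trial_B": "Safety data for drug X reported mild adverse events only.",
    "guideline_1": "EU HTA regulation requires joint clinical assessment for oncology products.",
}

prompt = build_prompt("What survival benefit did drug X show?", corpus)
```

Because the prompt carries named source passages, every claim in the generated draft can be traced back to an entry in the database – which is what makes the output auditable in a way that a bare LLM query is not.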
This approach enables our market access specialists to take a more strategic role – defining the overall authoring approach and objectives, and structuring, reviewing and fine-tuning critical outputs – with the RAG model providing rapid analysis of the data pool and generation of initial draft content.
The end result
So what is the outcome? Does 400% become 200%? In truth, we don’t know yet. While we can certainly see that RAG will help to shortcut data mining, literature reviews, and even dossier development for JCA (which will be vital with the tight development timelines), what we can’t predict is how many other shortcuts will come about across the European Union as a result of reuse and/or acceptance of other market evaluations. That may yet prove to be the deciding factor that renders RAG AI either interesting but superfluous, or an actively essential tool to get novel treatments into patients across the EU.