Methodology for Eliciting Expert Opinion

Missing evidence is a frequently encountered problem when assessing the effectiveness or cost-effectiveness of an intervention. Even when high-quality evidence exists for some outcomes, evidence for important clinical or health economic outcomes, such as effectiveness relative to treatments currently used in the NHS or longer-term effectiveness, may be missing. Populating decision analytical models, which typically involve numerous parameters, is often challenging because the evidence is ‘sparse’ (for example, on the duration of treatment effects or the incidence of rare complications).

Missing evidence is particularly problematic in the health technology assessment (HTA) of diagnostics and medical devices, because current regulatory frameworks do not require the generation of robust evidence of effectiveness. In the assessment of clinical, public health and social care interventions, evidence may be limited, particularly for complex interventions or models of care, where studies are often not undertaken to the methodological quality ideally required to support decision-making. Moreover, studies undertaken in patients with a single condition are unlikely to generalise reliably to the mix of patients typically presenting to primary and secondary care services with a range of co-existing conditions or multimorbidities.

In the absence of high-quality published evidence to inform estimates of effectiveness or cost-effectiveness, other sources of evidence are needed, including expert opinion. For the purposes of this work, ‘experts’ might include health or social care professionals, academics, patients, service users or lay members.

There are a number of established methods of collecting expert opinion. Simpler techniques are based on set questions and do not identify or adjust for expert bias. Controlled expert elicitation techniques are more reliable and often ask the same question in different ways. Such techniques identify uncertainty in individual parameters, both within the answers of particular experts and between experts. Methods for expert elicitation are reasonably well established and are classed into individual elicitation and consensus methods. However, there has been little comparison or independent validation of the performance of these techniques, including their costs and time requirements. Some approaches are also resource-intensive and typically require face-to-face meetings with the experts, which is challenging within the timescales of an HTA. The application of these techniques in a decision-making setting also poses methodological challenges.

Probabilistic models are often used in HTAs to capture the effects of both variability (in responses and populations) and uncertainty. To be useful in such models, gaps in the evidence base need to be encoded as probability distributions. The need for expert judgements is even more apparent when probabilistic models are placed within a Bayesian framework, which is becoming more popular in HTA. Here expert judgements form the basis of an analysis as ‘prior’ distributions. There is therefore great interest in developing defensible elicitation protocols for HTA that will result in justifiable probability distributions.
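As a concrete illustration of encoding an elicited judgement as a prior distribution, the sketch below converts a hypothetical expert's best-guess probability and self-rated confidence into a Beta prior. The numbers and the "equivalent sample size" device are illustrative assumptions only, not part of any specific HTA or elicitation protocol:

```python
# Sketch: encoding a single elicited probability as a Beta prior.
# The "equivalent sample size" device (expert confidence expressed as a
# number of hypothetical observations) is one common heuristic; the
# figures below are invented for illustration.

def beta_prior_from_elicitation(best_guess: float, equivalent_n: float):
    """Return (alpha, beta) of a Beta prior whose mean equals the expert's
    best guess and whose total weight equals `equivalent_n` hypothetical
    observations."""
    alpha = best_guess * equivalent_n
    beta = (1.0 - best_guess) * equivalent_n
    return alpha, beta

def beta_mean_sd(alpha: float, beta: float):
    """Analytic mean and standard deviation of a Beta(alpha, beta)."""
    n = alpha + beta
    mean = alpha / n
    var = (alpha * beta) / (n * n * (n + 1.0))
    return mean, var ** 0.5

# Hypothetical example: an expert judges a complication risk of about 10%,
# with confidence worth roughly 50 observed patients.
a, b = beta_prior_from_elicitation(0.10, 50.0)
mean, sd = beta_mean_sd(a, b)
```

In a Bayesian analysis this Beta(5, 45) prior would then be updated with any trial or registry data as it accrues; the wider the expert's uncertainty, the smaller the equivalent sample size chosen.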

More information is available on the NICE website.

It is advisable to get in touch as early as possible, and at least 12 weeks before the submission deadline, to discuss proposals.

Highlight Notice

MRC and NIHR invite applications (through the Methodology Research Programme) to conduct research into methodology for expert opinion elicitation. A vignette (PDF, 206KB) commissioned by the Methodology Advisory Group provides more background information. Applications are particularly sought on the following:

  • A short project to compare the available expert elicitation protocols and software and to identify their strengths and weaknesses, with the aim of identifying the most important aspects of “good” expert elicitation. This could lead into a longer-term project to propose and validate an optimally efficient protocol and software.
  • Investigation of whether the mode of expert opinion elicitation affects results, for example a comparison of face-to-face versus online methods. It would be valuable to validate individual elicitation methods/protocols, and to conduct head-to-head comparisons of different elicitation methods/protocols. Are different or more reliable answers achieved with an elaborate, structured method versus a streamlined one?
  • Methodology to identify and understand the range of opinions that are being obtained from an elicitation exercise, particularly where they may be conflicting.
  • Methods for synthesising elicited expert opinion with other strands of evidence, to inform, for example, HTA or diagnostic decision-making.

Application process and schedule

Applications for projects are invited through the normal MRC funding grant schemes (research grant, new investigator research grant, etc.) and will be considered at the regular Methodology Research Programme Panel meetings, to the usual deadlines. Applications will be in competition with other proposals received, but the Panel will be mindful of the strategic importance of this area.

Contact and guidance

The titles of all applications in response to this highlight should be prefixed with HEE: when filling out the JES form, and on any attachments, e.g. “HEE: A method for…”

It is essential to discuss your proposals with MRC Head Office at an early stage. All applications must be approved by the Methodology Programme Manager prior to submission. Please contact:

Dr Sam Rowley

Prospective applicants are also encouraged to discuss their proposed work with NICE, to ensure it will most effectively address the relevant challenges and to explore possibilities for interaction.

The appropriate contact is Professor Sarah Garner: