UHN Libraries has developed this guide to provide an overview of the evidence-based practice (EBP) process, from formulating clinical questions to locating, appraising, and applying evidence in patient care. It also highlights key tools, models, and resources to support healthcare teams in delivering evidence-informed care.
What is Evidence-Based Practice (EBP)?
EBP (also called evidence-informed practice) builds on the core principles of Evidence-Based Medicine (EBM) to guide clinical decision-making. In 1996, Sackett et al. defined EBM as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.”
EBP empowers healthcare professionals to provide care that is not only effective but also individualized and patient-centred. It moves beyond reliance on past practice or anecdotal experience, promoting a more informed evidence-based approach. It involves asking relevant clinical questions and applying findings from rigorous research.
By integrating the best available evidence with clinical expertise and patient preferences and values, EBP improves patient outcomes, enhances the quality of care, and ensures efficient use of resources. It also supports professional development by encouraging healthcare providers to continually update their knowledge and practices based on emerging evidence. EBP is about cultivating a mindset that values rigorous scrutiny, continuous learning, and a commitment to quality care.
Multiple definitions exist for EBP, but all emphasize the integration of the best available research evidence, clinical expertise, patient values, and contextual factors to guide evidence-informed decisions in practice.
In 2019, Melnyk and Fineout-Overholt defined EBP as “a paradigm and lifelong problem-solving approach to clinical decision making that involves the conscientious use of the best available evidence (including a systematic search for and critical appraisal of the most relevant evidence to answer a clinical question) with one’s own clinical expertise and patient values and preferences to improve outcomes for individuals, communities, and systems.”
Steps in Evidence-Based Practice
EBP is a dynamic, self-directed learning process in which healthcare providers identify patient needs and practice gaps and ask focused questions to address them.
The five core steps of EBP, together with a sixth step of dissemination, are:
1. Ask: Define a clear, focused clinical question based on identified information needs that can be answered with research evidence.
2. Acquire: Systematically search for the best available evidence to answer the clinical question.
3. Appraise: Critically evaluate the evidence for validity and applicability.
4. Apply: Integrate the appraised evidence into clinical practice, considering the patient's needs, preferences, and values.
5. Assess/Audit: Evaluate the outcomes of evidence application to ensure sustained improvements in patient care and clinical performance.
6. Disseminate: Share EBP findings to promote knowledge translation and system-wide improvement.
Individual and Organizational Evidence-Based Practice
EBP can be implemented at both individual and organizational levels. Some EBP initiatives focus on individual clinical decision-making, such as addressing a specific question to improve a patient’s experience with a particular intervention using relevant research evidence. Other initiatives are broader in scope, requiring collaborative, team-based efforts to address shared clinical challenges, such as updating a policy or standard procedure.
While the EBP process remains consistent across both contexts, organizational-level implementation often includes additional considerations, such as aligning with institutional priorities, engaging interprofessional teams, and incorporating formal evaluation strategies. These elements are indicated in the Iowa Model (Figure 1.2), which guides the structured and systematic integration of evidence across healthcare systems.
Types of Clinical Questions
Clinical questions fall into two main categories: background questions and foreground questions.
Background questions
Background questions explore general knowledge about a condition, treatment, process, or concept to build a foundational understanding. They are best answered using clinical textbooks or clinical evidence summaries, such as BMJ Best Practice or ClinicalKey.
Foreground Questions
Foreground questions are specific, clinical questions that guide decision-making in patient care. Answering these questions requires an evidence-based approach and depends on the type of question and the level of evidence available.
Foreground questions can address several domains, such as therapy, diagnosis, prognosis, prevention, and etiology. Depending on their focus, they can be quantitative (examining measurable outcomes such as effectiveness, accuracy, or risk) or qualitative (exploring experiences and perceptions).
Question Type | Quantitative | Qualitative |
---|---|---|
Therapy | Effectiveness of treatments, interventions, or drugs | Patient experiences, perceptions, or preferences regarding treatments |
Diagnostic | Accuracy of diagnostic tests, sensitivity, specificity | Patient feelings or perceptions about diagnostic procedures |
Prognosis | Likelihood or risk of future outcomes, survival rates | Patient perspectives on future health outcomes or experiences with disease progression |
Etiology | Causation or risk factors for diseases, associations | Patient perceptions or social factors influencing disease or health behaviors |
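The diagnostic row above refers to sensitivity and specificity. As an illustrative aside (not part of the guide's own tables), these measures are conventionally defined from a 2x2 comparison of a test result against a reference standard, where TP, FP, TN, and FN denote true positives, false positives, true negatives, and false negatives:

$$\text{Sensitivity} = \frac{TP}{TP + FN} \qquad \text{Specificity} = \frac{TN}{TN + FP}$$

A highly sensitive test helps rule a condition out when negative, while a highly specific test helps rule it in when positive.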
Formulating a Clinical Question: PICO Model
Without a well-focused question, finding appropriate resources and relevant evidence can be challenging and time-consuming. Foreground questions are often structured using the PICO framework, which helps refine and focus the search for evidence by identifying key elements of the question.
PICO stands for Patient (or Population), Intervention (or Exposure), Comparison, and Outcome. When developing a PICO-based question, consider the type of clinical inquiry, such as therapy, prevention, diagnosis, prognosis, or etiology. The table below shows how PICO elements adapt to each question type, with relevant examples.
Question Type | Patient | Intervention or Exposure | Comparison | Outcome |
---|---|---|---|---|
Therapy | Patient’s disease/ condition (age, gender, ethnicity, etc.) | specific drugs or procedural intervention | alternative drug, procedural intervention, or standard care | treatment effectiveness, or management of disease/ condition |
Example | In adult patients with osteoarthritis of the knee | does physical therapy or | NSAIDs alone | reduce pain and improve function? |
Diagnosis | Patient’s disease/ condition (age, gender, ethnicity, etc.) | specific diagnostic tools or procedure | alternative diagnostic tools or procedure | effective diagnosis of disease/ condition |
Example | In patients with suspected Alzheimer’s disease | is MRI or | cognitive testing | more effective for early detection? |
Prognosis | Patient’s disease/ condition (age, gender, ethnicity, etc.) | specific drugs or procedural intervention | Usually not applicable | occurrence or absence of new disease/ condition |
Example | In patients with stage II breast cancer | who receive chemotherapy | | what is the five-year survival rate? |
Prevention | Patient’s disease/ condition (age, gender, ethnicity, etc.) | specific drugs or procedural intervention | Another preventative measure, or standard care | prevention of disease/ condition |
Example | In elderly patients | does a daily vitamin D supplement | | reduce the risk of falls? |
Etiology | Patient’s disease/ condition (age, gender, ethnicity, etc.) | exposure to a certain condition or risk behaviour | absence of the condition or risk behaviour, or standard care | development of disease/ condition |
Example | In adults | who smoke | compared to non-smokers | what is the relative risk of developing lung cancer? |
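As a brief illustrative aside on the etiology example (the formula is standard epidemiology, not drawn from the guide itself), relative risk (RR) compares the incidence of the outcome in the exposed group with the incidence in the unexposed group:

$$RR = \frac{\text{incidence of lung cancer among smokers}}{\text{incidence of lung cancer among non-smokers}}$$

An RR greater than 1 indicates that the exposure is associated with an increased risk of the outcome.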
Clinical evidence sources are broadly categorized into two main types:
Primary sources (unfiltered resources): These are the original studies, such as clinical trials, cohort studies, and case studies, that report research, observations, or experiments on therapy, diagnosis, prognosis, or etiology.
Secondary sources (filtered/pre-appraised resources): These sources synthesize and evaluate primary research, offering summaries, systematic reviews, meta-analyses, and evidence-based guidelines to aid in clinical decision-making.
The 6S Hierarchy of Evidence
DiCenso et al. (2009) introduced the 6S hierarchy of evidence to streamline the retrieval of high-quality resources. This framework organizes evidence in a pyramid structure, with secondary sources at the top and primary studies (single studies) at the base. When searching for evidence, begin at the highest available level and move downward only if appropriate evidence is not found. While top-level resources are preferred, lower levels may be consulted for questions that cannot be fully addressed with higher-level evidence.
The table below outlines each level of the 6S pyramid, providing brief descriptions to guide efficient evidence retrieval.
Level | Description |
---|---|
Systems | Computerised decision support systems. |
Summaries | Regularly updated clinical guidelines or textbooks that integrate evidence-based information about specific clinical problems. |
Synopses of Syntheses | Summaries of the information found in systematic reviews. By condensing evidence from lower levels of the pyramid, these synopses often provide sufficient information to support clinical action. |
Syntheses | Commonly referred to as systematic reviews, syntheses are comprehensive summaries of all the evidence surrounding a specific research question. |
Synopses of Single Studies | Summaries of the evidence from individual high-quality studies. |
Single Studies | Original research conducted to answer specific clinical questions, such as randomized controlled trials, cohort studies, case-control studies, case series, and case reports. |
Additional Evidence Sources
Meta-Search (Federated search): These tools retrieve evidence from multiple sources across all levels of the 6S pyramid simultaneously, returning results from summaries, pre-appraised research, and non-appraised primary studies.
Ranking Level of Evidence
The EBP approach has led to the development of various systems for ranking levels of evidence, which classify studies according to their design and methodological rigour. These systems may differ in the number of levels they use; some include 5, 7, or 8, with Level I typically representing the most reliable and robust evidence. However, the type of evidence considered highest depends on the type of clinical question being asked, for example:
Therapy questions: Level I evidence includes systematic reviews or meta-analyses of RCTs.
Prognosis questions: Level I evidence includes systematic reviews of non-experimental studies (such as cohort studies), although these may be ranked lower (Level III or IV) in systems focused on therapy questions.
Diagnosis questions: Level I evidence includes systematic reviews of high-quality diagnostic accuracy studies.
The table below lists three evidence-ranking models for therapy questions.
Oxford Centre for Evidence-Based Medicine: Levels of Evidence | Melnyk and Fineout-Overholt: 7 Levels of Evidence | Polit and Beck: 8 Levels of Evidence |
---|---|---|
Level 1a: Systematic review of RCTs | Level I: Systematic review or meta-analysis of all relevant RCTs | Level I: Systematic review/meta-analysis of RCTs |
Level 1b: Individual RCT | Level II: Single well-designed RCT | Level II: Randomized controlled trial (RCT) |
Level 2a: Systematic review of cohort studies | Level III: Well-designed controlled trial without randomization | Level III: Nonrandomized trial (quasi-experiment) |
Level 2b: Individual cohort study | Level IV: Well-designed case-control or cohort studies | Level IV: Systematic review of nonexperimental (observational) studies |
Level 2c: Outcomes research | Level V: Systematic review of descriptive and qualitative studies (meta-syntheses) | Level V: Nonexperimental/observational study |
Level 3a: Systematic review of case-control studies | Level VI: Single descriptive or qualitative study | Level VI: Systematic review/meta-synthesis of qualitative studies |
Level 3b: Individual case-control study | Level VII: Opinion of authorities or reports of expert committees | Level VII: Qualitative or descriptive study |
Level 4: Case series | | Level VIII: Non-research source (opinion, internal evidence) |
Level 5: Expert opinion | | |
While evidence ranking systems provide a useful framework, they do not determine the quality of a study. For example, RCTs are often considered the "gold standard," but the study design alone does not guarantee high quality. Key factors such as proper randomization, concealed allocation, blinding, sample size, and study duration all play a critical role. Therefore, critically appraising each study or evidence source is essential to assess its validity and applicability.
Critical appraisal skills equip clinicians to systematically assess the quality, relevance, and validity of individual research findings. When research on a similar topic produces conflicting results, critical appraisal is essential for discerning the most valid and applicable evidence. It is important to recognize that neither the source nor the authorship alone guarantees the credibility of a study. While research aims to generate meaningful evidence from data, methodological weaknesses or bias in study design can compromise the accuracy of its conclusions.
The critical appraisal process involves determining whether the study design aligns with the research question, evaluating both internal validity (systematic error/bias) and external validity (generalizability), and identifying biases affecting methodological quality.
Appraisal methods apply across a broad spectrum of research designs, including randomized controlled trials, cohort studies, qualitative research, diagnostic studies, systematic reviews, and more. Schardt and Myatt (2008) highlight key considerations for appraising therapy, diagnostic, prognosis, and etiology/harm studies.
Various tools are available to support critical appraisal, tailored to different study types. These tools, often in the form of checklists, scales, or domain-based frameworks, help assess study quality (how well bias was minimized) and identify potential risks of bias (how the lack of safeguards may have affected results).
Some tools assess both study quality and risk of bias, while others focus on just one aspect. Using more than one tool is often necessary to ensure a comprehensive evaluation of study quality and bias. The table below outlines common study types alongside the critical appraisal tools typically used to assess their quality and potential for bias. Additional tools are available through resources such as LATITUDES Network, a collection of validity assessment tools for use in evidence syntheses, and CATevaluation, which features tools that have been tested for validity and/or reliability.
Whereas these tools focus on assessing individual studies, GRADE (Grading of Recommendations, Assessment, Development and Evaluations) is used to assess the certainty of evidence across multiple studies for specific outcomes, typically in the context of systematic reviews or guideline development. GRADE builds upon the foundation established during critical appraisal by considering factors such as risk of bias, consistency, directness, precision, and publication bias.
Clinical expertise involves a strong understanding of the patient population, the ability to anticipate treatment effects and potential side effects, and an awareness of the resources at hand. It also draws on practical experience and critical thinking skills. Clinical judgment develops through the integration of these elements, with the understanding that a treatment effective for one individual may not be appropriate for another.
Glasziou et al. (2007) note that this step is sometimes referred to as assessing the "external validity" or "generalizability" of the research. In practice, clinicians may weigh a study’s relevance and feasibility concurrently with appraising its quality, depending on the clinical context.
Before applying research findings, clinicians should consider whether the results are relevant to their own patients and feasible to implement in their practice setting.
The clinician or EBP team must evaluate whether the strength and quality of the evidence justify a change in practice. This includes engaging patients in meaningful conversations to ensure that care decisions reflect not only scientific evidence but also the patient's voice. If uncertainty remains, it may be necessary to generate additional evidence through an internal EBP project or more formal research.
Even the most promising intervention can fall short if not implemented with a clear understanding of the patient's values and perspective.
This step in EBP focuses on evaluating the impact of applying evidence-informed interventions on patient outcomes and clinical performance. This evaluation promotes continuous improvement and ensures clinical decisions remain aligned with best practices and patient needs.
Evaluation involves two core components:
Self-evaluation/reflection and evaluation of the EBP process:
Clinicians reflect on their performance and the EBP process, from formulating clinical questions to implementing evidence, to identify strengths, gaps, and opportunities for improvement. Self-reflection also reinforces critical thinking, accountability, and professional growth.
To guide self-evaluation, clinicians can assess their performance of EBP steps 1-4 using the self-reflection questions developed by the University of Canberra.
Evaluating changes in practice
Clinicians assess whether applying evidence-based changes has led to improvements in patient care or system-level performance. This process involves measuring the effectiveness of changes, comparing them to baseline data, engaging patients to ensure their experiences and outcomes are considered, and determining whether interventions should be sustained, adjusted, or discontinued.
Quality improvement (QI) frameworks and clinical audit tools play a key role in sustaining EBP. QI emphasizes measurable outcomes, continuous feedback, and data-driven decisions to ensure changes are patient-centered and effective. Audits support this process by tracking compliance, evaluating performance, and identifying opportunities for improvement.
Dissemination involves sharing EBP findings through presentations, publications, institutional platforms, and active engagement with stakeholders, including healthcare providers, policymakers, and institutional leaders, to promote broader adoption, knowledge exchange, and continuous learning.