
Evaluation of Clinical Decision Support Systems

contents
  • Definitions
  • Aspects of DSS evaluated
  • Resources 1: Australian Health Information Council
  • References


Definitions
"Evaluation is a means to assess the quality, value, effects and impacts of information technology and applications in the health care environment, to improve health information applications and to enable the emergence of an evidence-based health informatics profession and practice" [Ammenwerth et al, 2004].

"Evaluation is the act of measuring or exploring properties of a health information system (in planning, development, implementation, or operation), the result of which informs a decision to be made concerning that system in a specific context" [Ammenwerth et al, 2004]

Randolph A. Miller has proposed that the bottom line in evaluating clinical decision support systems (CDSSs) should be "whether the user plus the system is better than the unaided user with respect to a specified task" [Miller, 1996].
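
Miller's criterion can be made operational with a paired comparison: the same clinicians handle the same cases once unaided and once with the system, and each decision is scored correct or incorrect. The sketch below is illustrative only; the case counts are invented and the exact McNemar test is just one reasonable analysis for such paired binary outcomes.

from math import comb

def exact_mcnemar(b, c):
    """Two-sided exact McNemar p-value for paired binary outcomes.
    b = cases decided correctly unaided but incorrectly with the CDSS
    c = cases decided incorrectly unaided but correctly with the CDSS
    """
    n = b + c
    if n == 0:
        return 1.0
    # Tail probability of the smaller discordant count under Binomial(n, 0.5),
    # doubled for a two-sided test and capped at 1.
    tail = sum(comb(n, i) for i in range(0, min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical result from 100 paired cases: 12 cases were handled correctly
# only with the system's help, 4 only without it (84 were concordant).
print(f"exact McNemar p = {exact_mcnemar(b=4, c=12):.3f}")

A small p-value indicates that "user plus system" and "unaided user" performance differ on the specified task; whether the system helps or harms is read off from which discordant count is larger.
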
Features of DSS that are or can be evaluated
Evaluation studies of CDSSs have typically aimed to measure the impact of a system on a well-delineated and limited part of the process of care. The systems evaluated most frequently have been designed, for example, to support diagnosis, disease management, drug management or preventive interventions. Other evaluation topics have included a system's impact on the quality of decision making, its impact on clinical actions, its usability, its integration with workflow, and the quality of the clinical advice it offers. The cost-effectiveness of CDSSs and their ability to help improve clinical outcomes have been evaluated relatively infrequently.

DSS Evaluation Dilemmas
Some significant dilemmas can arise in CDSS evaluation (source: presentation in June 2005 by Jeremy Wyatt, Associate Director of Research, National Institute for Health and Clinical Excellence, England and Wales NHS):
  1. "Will there be sufficient, traceable, direct evidence for selected products ? What if no impact studies ?"
  2. "What is the relevant control – no intervention or same knowledge available in alternative format ?"
  3. "Is it feasible to model costs & benefits of wide-ranging DSS (eg. NHSDirect’s Clinical Assessment system) ?"
  4. "Fast changing technology – how applicable are study results to later versions ?"
  5. "DSS can have long term effects on users:"
    • "Training users, so DSS value would diminish"
    • "Causing user skills to waste away, so DSS value would increase"
  6. "Does DSS effectiveness depend so much on user skills etc. that RCTs are unhelpful ? "

DSS Evaluation Resources 1: Australian Health Information Council

In 2004, the Electronic Decision Support subgroup of the Australian Health Information Council published an evolving set of freely available resources on the evaluation of electronic decision support systems (EDSS). The materials also cover planning and developing EDSS implementation.

Though the resources form part of an effort to develop a National Evaluation Framework for EDSS, stimulated by the Australian National Electronic Decision Support Taskforce report Electronic Decision Support for Australia's Health Sector (2003), they have been designed for use by the whole EDSS community.

Evaluation topics covered include:
  • clinical impact
  • impact on working practices
  • usability
  • knowledge content
  • system requirements
  • technical issues
  • interoperability
  • managing an evaluation.

Guidelines are also provided on:
  • Key study designs for the quantitative evaluation of an EDSS
  • Conducting focus groups
  • Conducting interviews
  • Developing and testing questionnaires
  • Heuristic evaluation (a form of usability inspection)
  • Conducting usability testing
  • Computer log analysis
  • Qualitative data analysis resources
  • Quantitative data analysis and sample size resources (an illustrative sample-size sketch follows this list)
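
As an illustration of the sample-size guidance referred to in the last item, the sketch below estimates how many patient encounters per arm a simple two-arm study would need to detect a change in the proportion of guideline-concordant actions. It is not taken from the AHIC materials; the baseline rate, target rate, significance level and power are all assumptions, and the normal approximation for comparing two proportions is only one of several acceptable methods.

from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p_control, p_edss, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided comparison of two
    proportions, using the standard normal approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for significance level
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p_control + p_edss) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_control * (1 - p_control)
                              + p_edss * (1 - p_edss))) ** 2
    return ceil(numerator / (p_control - p_edss) ** 2)

# Assumed scenario: guideline adherence rises from 55% without the EDSS to 70% with it.
print(n_per_arm(0.55, 0.70))   # about 163 encounters per arm at 80% power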

references: general

Jytte Brender. Handbook of Evaluation Methods for Health Informatics. Elsevier Academic Press, 2006. 368 pp.

[Elsevier]


" Contents
Part I. Introduction 1. Introduction 2. Conceptual Apparatus 3. Types of User Assessments of IT-based Solutions 4. Choosing or Constructing Methods for Evaluation Part II. Methods and Techniques 5. Introduction 6. Overview of Assessment Methods 7. Description of Methods and Techniques 8. Other Useful Information Part III. Methodological and Methodical Perils and Pitfalls at Assessment 9. Background Information 10. Approach to Identification of PItfalls and Perils 11. Framework for Meta-Assessment of Assessment Studies 12. Discussion List of Abbreviations List of References. "

Friedman CP, Wyatt JC. Evaluation methods in medical informatics. New York: Springer-Verlag, 1997.

[Review - BMJ]

" Computers have not been kind to physicians. They enable monitoring and pestering everywhere. None the less, there have been generations of computer enthusiasts in health care, including many doctors. The appeal seems obvious: we forget easily, the computer never forgets; we are slow to do simple things, the computer is fast; we tire, the machine is tireless. Even 30 years ago it seemed a natural partnership. With the computer to leverage our strengths and ease our frailties, we could all be Oslers. Every clinician would become more skilful, assisted by an uncomplaining aide of unwavering alertness and unerring recollection. For decades the pioneers worked to make this vision real. From early efforts has come a discipline that borrows liberally from computer science and epidemiology, ethnography and cognitive science, sociology and management, economics and industrial engineering, library science and mathematics, epistemology and health services research. Add in all of the healthcare professions, and call it medical (or, better, health) informatics. It is a mélange that appeals to restless minds. Medical informatics deals with all the aspects of medical information, except its content. Not how best to manage asthma, but rather how information on asthma management might be stored, retrieved, and applied. The results of this work are information systems that are often challenging to understand, measure, evaluate, and improve. That challenge can now be met with the sturdy assistance of Friedman and Wyatt, two respected informaticians. They have produced a landmark book, which should have a place of honour in the library of anyone seeking a deeper understanding of clinical informatics in general and evaluation in particular. The authors focus on clinical information resources, which is a large subset of the wider field of medical informatics. With few wasted words in three short chapters, they provide the careful reader with a good understanding of large clinical computing systems. They do a similarly excellent job of outlining evaluation, though they perhaps overdo the "subjectivist" (intuitive, holistic, sociable) versus "objectivist" (rational, reductionist, no dates) dichotomy. The discussions on measurement, validity, and study design will feel familiar to students of epidemiology, but the perspective of information systems brings new value. The authors, mindful of the many disciplines they draw on, define and use terms with precision. The analysis and discussion of subjectivist methods will be new to many readers, and, philosophy aside, it is hard to improve on. ... "
Wyatt J, Spiegelhalter D. Field trials of medical decision-aids: potential problems and solutions. In Clayton P (ed). Proc. 15th Symposium on Computer Applications in Medical Care, Washington 1991. New York: McGraw-Hill, 1991: 3-7.

[PubMed]

" Only clinical trials can assess the impact of prototype medical decision-aids, but they are seldom performed before dissemination. Many problems are encountered when designing such studies, including ensuring generality, deciding what to measure, feasible study designs, correcting for biases caused by the trial itself and by the decision-aid, resolving the "Evaluation Paradox", and potential legal and ethical doubts. These are discussed in this paper. "
Wyatt J, Spiegelhalter D. Evaluating medical expert systems: what to test and how? Medical Informatics 1990; 15: 205-217.

[PubMed]

" Many believe that medical expert systems have great potential to improve health care, but few of these systems have been rigorously evaluated, and even fewer are in routine use. We propose the evaluation of medical expert systems in two stages: laboratory and field testing. In the former, the perspectives of both prospective users and experts responsible for implementation are valuable. In the latter, the study must be designed to test, in an unbiased manner, whether the system is used in clinical practice, and if it is used, how it affects the structure, process and outcome of health care encounters. We conclude with proposals for encouraging the objective evaluation of these systems. "
References: issues and challenges

Brender J. Evaluation of health information applications--challenges ahead of us. Methods Inf Med. 2006;45(1):62-6. Review.

[PubMed]

" OBJECTIVES: The aim of the paper is to review the challenges for evaluation in the light of characteristics of the healthcare sector, present as well as future. METHODS: The approach is a synthesis based on highlights from the literature. RESULTS: The review addresses the following issues: 1) the role of evaluation activities within a systems development or implementation context; 2) suggestions on the nature of success and failure characteristics; and 3) evaluation aspects viewed in the perspective of different types of systems. Constructive evaluation, evaluation being the act of bringing about a decision-making basis, is perceived as the means to minimize failure and maximize success from the very beginning of the development or implementation. Based on these discussions, the challenges that evaluation and evaluators are facing are debated. CONCLUSION: The ultimate challenge ahead is first to fill the gap of presently needed evaluation methods. This need is in particular related to evaluation of cognitive and work process-oriented aspects of IT-based solutions. Finally, the challenge is to provide constructive evaluation methods and methodologies for dealing with the full complexity and dynamics of the target domain, for application within the entire lifecycle of the IT-based systems and solutions. "

Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005 Apr 2;330(7494):765. Epub 2005 Mar 14. Review.

[PubMed]   [PubMed Central]   [BMJ]

" OBJECTIVE: To identify features of clinical decision support systems critical for improving clinical practice. DESIGN: Systematic review of randomised controlled trials. DATA SOURCES: Literature searches via Medline, CINAHL, and the Cochrane Controlled Trials Register up to 2003; and searches of reference lists of included studies and relevant reviews. STUDY SELECTION: Studies had to evaluate the ability of decision support systems to improve clinical practice. DATA EXTRACTION: Studies were assessed for statistically and clinically significant improvement in clinical practice and for the presence of 15 decision support system features whose importance had been repeatedly suggested in the literature. RESULTS: Seventy studies were included. Decision support systems significantly improved clinical practice in 68% of trials. Univariate analyses revealed that, for five of the system features, interventions possessing the feature were significantly more likely to improve clinical practice than interventions lacking the feature. Multiple logistic regression analysis identified four features as independent predictors of improved clinical practice: automatic provision of decision support as part of clinician workflow (P < 0.00001), provision of recommendations rather than just assessments (P = 0.0187), provision of decision support at the time and location of decision making (P = 0.0263), and computer based decision support (P = 0.0294). Of 32 systems possessing all four features, 30 (94%) significantly improved clinical practice. Furthermore, direct experimental justification was found for providing periodic performance feedback, sharing recommendations with patients, and requesting documentation of reasons for not following recommendations. CONCLUSIONS: Several features were closely correlated with decision support systems' ability to improve patient care significantly. Clinicians and other stakeholders should implement clinical decision support systems that incorporate these features whenever feasible and appropriate."

Ammenwerth E, Brender J, Nykanen P, Prokosch HU, Rigby M, Talmon J; HIS-EVAL Workshop Participants. Visions and strategies to improve evaluation of health information systems. Reflections and lessons based on the HIS-EVAL workshop in Innsbruck. Int J Med Inform. 2004 Jun 30;73(6):479-91.

[PubMed]   [Elsevier]    [ScienceDirect]

" summary Background: Health care is entering the Information Society. It is evident that the use of modern information and communication technology offers tremendous opportunities to improve health care. However, there are also hazards associated with information technology in health care. Evaluation is a means to assess the quality, value, effects and impacts of information technology and applications in the health care environment, to improve health information applications and to enable the emergence of an evidence-based health informatics profession and practice. Objective: In order to identify and address the frequent problems of getting evaluation understood and recognised, to promote transdisciplinary exchange within evaluation research, and to promote European cooperation, the ExploratoryWorkshop on ‘‘New Approaches to the Systematic Evaluation of Health Information Systems’’ (HIS-EVAL) was organized by the University for Health Sciences, Medical Informatics and Technology (UMIT), Innsbruck, Austria, in April 2003 with sponsorship from the European Science Foundation (ESF). Methods: The overall program was structured in three main parts: (a) discussion of problems and barriers to evaluation; (b) defining our visions and strategies with regard to evaluation of health information systems; and (c) organizing short-term and long-term activities to reach those visions and strategies. Results: The workshop participants agreed on the Declaration of Innsbruck (see Appendix B), comprising four observations and 12 recommendations with regard to evaluation of health information systems. Future activities comprise European networking as well as the development of guidelines and standards for evaluation studies. Conclusion: The HIS-EVAL workshop was intended to be the starting point for setting up a network of European scientists working on evaluation of health information systems, to obtain synergy effects by combining the research traditions from different evaluation fields, leading to a new dimension and collaboration on further research on information systems’ evaluation. "
Ammenwerth E, Graber S, Herrmann G, Burkle T, Konig J. Evaluation of health information systems-problems and challenges. Int J Med Inform. 2003 Sep;71(2-3):125-35. Review.

[PubMed]   [ScienceDirect]

" OBJECTIVES: Information technology (IT) is emerging in health care. A rigorous evaluation of this technology is recommended and of high importance for decision makers and users. However, many authors report problems during the evaluation of information technology in health care. In this paper, we discuss some of these problems, and propose possible solutions for these problems. METHODS: Based on own experience and backed up by a literature review, some important problems during IT evaluation in health care together with their reasons, consequences and possible solutions are presented and structured. RESULTS AND CONCLUSIONS: We define three main problem areas-the complexity of the evaluation object, the complexity of an evaluation project, and the motivation for evaluation. Many evaluation problems can be subsumed under those three problem areas. A broadly accepted framework for evaluation of IT in healthcare seems desirable to address those problems. Such a framework should help to formulate relevant questions, to find adequate methods and tools, and to apply them in a sensible way. "

Miller RA. Evaluating evaluations of medical diagnostic systems. J Am Med Inform Assoc. 1996 Nov-Dec;3(6):429-31.

[PubMed]   [PubMed Central]

" System evaluation in biomedical informatics should take place as an ongoing, strategically planned process, not as a single event or a small number of episodes. Complex software systems and accepted medical practices both evolve rapidly, so evaluators and readers of evaluations face moving targets. Thus, it is crucial for readers to be able to place any individual evaluation study into proper perspective. This advice applies to the nascent technology of medical diagnostic decision support systems (MDDSS). That the editor of a prestigious medical journal judged this entire technology’ based on a single, well-done, but intermediate- level and partial evaluation of several systems coupled with his own anecdotal experience emphasizes our professional obligation to characterize such systems and their evaluations responsibly ... "

Moehr JR. Evaluation: salvation or nemesis of medical informatics? Comput Biol Med. 2002 May;32(3):113-25.

[PubMed]   [ScienceDirect]

" The currently prevailing paradigms of evaluation in medical/health informatics are reviewed. Some problems with application of the objectivist approach to the evaluation of real-rather than simulated-(health) information systems are identified. The rigorous application of the objectivist approach, which was developed for laboratory experiments, is difficult to adapt to the evaluation of information systems in a practical real-world environment because such systems tend to be complex, changing rapidly over time, and often existing in a variety of variants. Practical and epistemological reasons for the consequent shortcomings of the objectivist approach are detailed. It is argued that insistence on the application of the objectivist principles to real information systems may hamper rather than advance insights and progress because of this. Alternatives in the form of the subjectivist approach and extensions to both the objectivist and subjectivist approaches that circumvent the identified problems are summarized. The need to include systems engineering approaches in, and to further extend, the evaluation methodology is pointed out. "
References: evaluation methods

Talmon J, Enning J, Castaneda G et al. The VATAM guidelines. Int J Med Inform. 1999 Dec;56(1-3):107-15.

[PubMed]

" Evaluation and assessment of the impact of information and communication technology in medicine is gaining interest. Unfortunately, till now there were no agreed upon approaches. The objective of the VATAM project is to develop guidelines that will assist assessors to set-up and execute studies. This paper describes the background of the VATAM project and provides an account of the current state of the guidelines. It concludes with an indication of the developments that will take place in the short term to further elaborate the guidelines and some considerations for consolidation of VATAM's results. "

Clarke K, O'Moore R, Smeets R et al. A methodology for evaluation of knowledge-based systems in medicine. Artif Intell Med. 1994 Apr;6(2):107-21.

[PubMed]

" Evaluation is critical to the development and successful integration of knowledge-based systems into their application environment. This is of particular importance in the medical domain--not only for reasons of safety and correctness, but also to reinforce the users' confidence in these systems. In this paper we describe an iterative, four-phased development evaluation cycle covering the following areas: (i) early prototype development, (ii) validity of the system, (iii) functionality of the system, and (iv) impact of the system. "

Beuscart-Zephir MC, Brender J, Beuscart R, Menager-Depriester I. Cognitive evaluation: how to assess the usability of information technology in healthcare. Comput Methods Programs Biomed. 1997 Sep;54(1-2):19-28.

[PubMed]

" As the adoption of information technology has increased, so too has the demands that these systems become more adapted to the physicians and nurses environments, to make access and management of information easier. The developers of information systems in Healthcare must use quality management techniques to ensure that their product will satisfy given requirements. This underlines the importance of the preliminary phase where Users Requirements are elicited. Some methodologies, such as KAVAS (E.M.S. Van Gennip, F. Gremy, Med. Inform. 18, 1993, 179) chose to use a continuous assessment protocol as a key strategy for quality management. At each stage of the conception and development of a prototype, the assessment checks that it conforms to the expectation of the users' requirements. The methodology of evaluation is then seen as a dynamic process which is able to improve the design and development of a dedicated system. The purpose of this paper is to demonstrate the necessity to include a cognitive evaluation phase in the process of evaluation by: (1) evaluating the integration (usability) of the I.T. in the activity of the users; and (2) understanding the motives underlying their management of information. This will help the necessary integration of information management in the workload of the healthcare professionals and the compatibility of the prototypes with the daily activity of the users. "
Lundsgaarde H. Evaluating medical expert systems. Soc Sci Med 1987; 24: 805-819.

[PubMed]

" Approximately 90% of all computerized medical expert systems have not been evaluated in clinical environments. This paper: identifies the principal methods used to assess the performance of medical expert systems in both laboratory and clinical settings, describes the different research strategies used in the evaluation of medical expert systems at different development stages, and discusses past evaluation efforts in relationship to future applications of different decision support technologies and expert systems in health care. "
links
  • Guidelines for the evaluation of electronic decision support systems - Australian Health Information Council
acknowledgements
Professor Jeremy Wyatt, National Institute for Health and Clinical Excellence, England and Wales NHS.
page history
Entry on OpenClinical: May 25 2005 (v0.1)
Last main update: June 26 2005



