10.29.2011 | 0 Comments
Trackback to articles by John Zaleski on healthcare information technology.
10.15.2011 | 0 Comments
I’m reminded of the scene in the film Braveheart in which young William Wallace, asleep and dreaming after the murder of his father at the hands of Edward the Longshanks, hears his dead father, Malcolm, turn to him and say, “Your heart is free. Have the courage to follow it.”
I find that an interesting parallel to one of the rules Steve Jobs stated in his Commencement Address to Stanford graduates several years back, and reiterated in Carmine Gallo’s post in Entrepreneur titled “Steve Jobs and the Seven Rules of Success”:
Rule 1: “Do what you love. … People with passion can change the world for the better.”
From the perspective of healthcare information technology, my view of applying this rule is: don’t do what everyone else is doing just because it is in vogue. As I’ve written in other articles, I believe the key to effective and helpful patient care is incorporating information from multiple sources, and that looking outside the field for hints and guidance is essential to doing so. Good ideas from other fields enrich the space and can add great value. Systems engineering and integration are but two such disciplines that I see others beginning to take up and apply.
Another Steve Jobs concept is connecting things that evade others, or that others ignore. Again, Gallo writes: “…people with a broad set of life experiences can often see things that others miss… Connect ideas from different fields.” This is often easy to do in retrospect: hindsight is 20/20. What this means is that you must expose yourself to a broad range of experiences so that you can look at a problem from outside the field, as described at the end of the last paragraph.
In my own case, when I was preparing to do my research way back in the early ’90s and indicated my goals and objective to my Dissertation Committee, one member of the Committee told me privately that he gave me a 30% chance of succeeding. My goal was to develop a model of post-operative weaning in order to demonstrate that a systems engineering modeling approach could be applied to project a state in patients, by treating them as a system and creating a model of multi-dimensional inputs. When he made that comment I felt very depressed: what was I doing? Was I wrong? Was I biting off more than I could chew? Yet after the successful defense of my dissertation a year or two later, the same Committee member told me that I had proved my case. As Steve Jobs said, you can only connect the dots looking backwards. You must have faith looking ahead.
10.14.2011 | 0 Comments
Article of note from HealthcareITNews on 5 ways telemedicine can boost care in rural communities cites #2 as “Telemedical devices for remote monitoring of in-home care improve clinical observations.” The suggestion is to put hospital-quality patient care devices in the home. Data transmitted to monitoring centers can then be used to monitor and evaluate patient status. This area could benefit not just from medical device connectivity but also from clinical decision support tools at these monitoring centers.
10.14.2011 | 4 Comments
The AAMI Medical Device Alarms Summit was held October 4th & 5th in Herndon, VA, at the Hyatt. Much will be published on the AAMI web site in this regard, along with out-briefs and collateral, so I will leave the complete minutes and summary of the goings-on to those charged with producing them. However, I am compelled to focus on a few related themes that were raised by several of the speakers and on which I voiced my opinion publicly during the meeting. I will do so relative to two specific speakers and provide the input that I shared during the public question and answer forums.
The keynote address on medical devices was given between 8:45 and 9:15 by George Blike, MD, of the Dartmouth-Hitchcock Medical Center. Early in his presentation, Dr. Blike observed that not much has changed since the 1999 Institute of Medicine report To Err is Human was published. In that report, the IOM concluded, based on two studies, that between 44,000 and 98,000 Americans were killed each year due to preventable medical errors. While the options to diagnose and treat have increased measurably since that time, the complexity of the treatment process undermines the benefits. The same is true of information complexity: the amount of information now available in electronic form exceeds the environmental space limitations surrounding the patient.
Medical device alarms, Dr. Blike continued, serve to redirect attention: “from something that is less important to something that is more important.”
However, uncertainty plays a large role in medicine, and the uncertainty about the meaning of alarms requires knowing much more than just whether a parameter is outside some norm or threshold. It is about knowing the context surrounding the patient. It is more than managing alarms as “nuisances” in the clinical space. While it is true that clinical staff can become “snow blind” to the continuing cacophony of alarms within the environment, the reason for reducing the alarms is not to reduce the cacophony, but to focus the alarms to redirect attention in a way that truly helps the patient.
Dr. Blike referenced Lucian Leape, MD: “Anesthesia is the only system in healthcare that begins to approach the vaunted six sigma level of perfection that other industries strive for.”
The management of a patient becomes more like a feedback control problem, in which the process blocks of Detect, Diagnose, Treat and Monitor are key to the closed loop system. In an environment where upwards of 40 parameters must be monitored over time (ICU), it becomes a multi-dimensional, multi-parameter feedback control system exercise. In environments such as the ICU where nursing:patient ratios may be 1:2 or 1:1, the care team must be able to detect problems, diagnose cause, treat, and then monitor. As part of this process, the medical device alarms identify deviations of the system state of the patient from the expected state.
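As an illustrative sketch of the Detect step only, multi-parameter detection amounts to comparing each monitored value against its expected range. The parameter names and limits below are invented for illustration and are not drawn from Dr. Blike’s talk (assuming Python):

```python
# Hypothetical "Detect" step of the feedback loop: flag parameters whose
# values fall outside their expected ranges. Names and limits invented.
LIMITS = {
    "heart_rate": (50, 120),   # beats/min
    "spo2":       (92, 100),   # percent
    "resp_rate":  (8, 30),     # breaths/min
}

def detect_deviations(observations):
    """Return the parameters deviating from their expected ranges."""
    alarms = {}
    for name, value in observations.items():
        low, high = LIMITS[name]
        if not (low <= value <= high):
            alarms[name] = value
    return alarms

obs = {"heart_rate": 135, "spo2": 96, "resp_rate": 12}
alarms = detect_deviations(obs)   # only heart_rate deviates here
```

In practice the ranges would be patient- and context-specific, which is precisely the contextual gap discussed above.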
The keynote address of Dr. Blike was followed by a panel titled “Defining the Problem: It’s More Than a Nuisance.” James Blum, MD, of the University of Michigan Health System and Barbara Drew, RN, PhD, of the University of California, San Francisco were the members of this panel. With no disrespect to Dr. Drew, whose presentation was impressive, extremely interesting, and informative, I wish to focus on the comments of Dr. Blum because of the message he communicated in terms of systems integration and data management. Dr. Blum’s presentation most definitely resonated with me, as its theme has been a rallying call of my own for almost 20 years (indeed, a large reason for my entry into the field).
I took some photos of slides, and there are three that speak strongly to me. The first of these is the slide on physiologic monitors, shown below. The key point is that alarms, in general, are not “smart,” and this is especially true of physiologic monitoring, in which there is no “penalty for high sensitivity with low specificity” and a general lack of data integration. Historic and retrospective data are not readily available, nor can prospective analysis and projection be performed with the data. Moreover, these data are not integrated with the wealth of other information that provides context on the patient. While alarms of a critical nature (e.g.: ventricular tachycardia or asystole) need no other context, there are situations that do (e.g.: O2 saturation changes, respiratory function changes), and thus it is important to incorporate the context surrounding the patient into medical device alarms.
The second of these charts from Dr. Blum is one that resonates very strongly with me: Electronic Medical Records may be good charting instruments, but in terms of their clinical decision support capacity relative to real-time data, they are very mediocre. The reason I say this is because of the last two bullets on his slide: the data resolution can be limited (critical care charting ~ 15 minute intervals), and the data suffer from garbage-in, garbage-out. Because modeling of physiological systems often requires high-fidelity, high-resolution information, sparsely collected data will often miss crucial events that may be seconds in duration or less. For example, assessments of heart rate and respiratory rate variability can be quite important and predictive as to patient stability, as can critical medical device alarms related to V-TACH or ASYSTOLE. The EMR is simply not equipped to capture these data trends at its rate of data collection: the likelihood of missing such events altogether is simply higher than that of capturing them.
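To make the sampling argument concrete, here is a small sketch (invented numbers, assuming NumPy) of why ~15-minute charting intervals can miss events that last only seconds:

```python
import numpy as np

# Hypothetical 1 Hz heart-rate stream over one hour (3600 samples).
hr = np.full(3600, 80.0)
hr[1000:1010] = 180.0            # a 10-second tachycardic burst

# Critical-care charting at ~15-minute intervals keeps only 4 samples.
charted = hr[::900]              # one value every 900 seconds

# The full-resolution stream sees the event; the charted record does not.
event_seen_raw = bool(np.any(hr > 140))        # True
event_seen_chart = bool(np.any(charted > 140))  # False
```

Unless a charted sample happens to coincide with the burst, the event simply never appears in the record.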
The third and final of these slides is the following titled “Integration.” The essence of the problem with medical device alarms is that they are fairly “one-dimensional” in nature: they typically are associated with the patient care device (PCD) and its function (e.g.: physiologic monitor, infusion pump, mechanical ventilator, etc.) This does not mean that they are univariate, but rather they do not take into account the entire context of the patient and environs, as well as patient history, chemistry, etc. The Integration slide makes the point that multiple systems must be taken together–or fused–to provide an intelligent assessment of what is important versus that which is not.
The last of the three photos above captures what is, to me, the essence of effectively managing medical device alarms: the application of systems engineering and systems integration disciplines. First, complete and unfettered integration of data, from medical devices through ancillary information systems (lab, PACS, EMR, etc.), is required. Next, laying out the use cases and scenarios related to the types of problems and conditions a patient can experience needs to be done in a holistic way that ignores vendor and device boundaries. This may involve integrating data and user interfaces, and creating methods that “feed” on the data available from multiple sources and assimilate them to produce integrated outputs. The display mechanisms are important but secondary at this point: dashboards that allow singular access to information (much akin to avionics design) may be appropriate here. More important, however, is the overall integration of information to provide predictive modeling, retrospective trending, and evaluation of scenarios on the fly. This takes a longitudinal look at the patient state in terms of everything about the patient. As Dr. Blum identified, there may be 40 parameters (give or take) that form the basic state of the patient.
When I began my career in the aerospace field, I focused on state space modeling of complex systems. This modeling often involved various forms of filtering, including Kalman and batch least squares filters. The systems integration aspect involved evaluating the trend or future state with respect to the current state, based upon a system model representation of the entity being modeled. From this model, a projection could be made into the future. As Dr. Blike and others stated during the conference, medicine is one field where uncertainty plays a large part, and it is perhaps naive to think that one could model the human being in the ways that many in the aerospace industry model their systems. However, 20+ years ago, when I began my studies at the University of Pennsylvania and my research into prediction and modeling at the University of Pennsylvania Medical Center, that is precisely what I was doing on a smaller scale with a specific class of patients. The subject of my dissertation was predicting the post-operative respiratory behavior of coronary artery bypass grafting patients, a unique class of patient in surgical intensive care units. Many of the concepts brought up during the Medical Device Alarms Summit resonated with me from the early days of my dissertation. One in particular was the idea of taking multi-source, multi-variate data and massaging them into an assessment of outcome. When I conducted my research, multi-source data from the laboratory and the patient record (history, demographics, etc.) were incorporated into the overall assessment of outcome. As I followed patients from surgery through endotracheal extubation, I found many interesting relationships once all the data were laid out before me: relationships between re-warming time and the patient’s anesthetic dosing, and between time to begin breathing and time to extubate.
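For readers unfamiliar with the filtering mentioned above, here is a minimal scalar Kalman filter sketch of the predict/update idea. This is a generic textbook form, not the model from my dissertation, and the noise values are invented:

```python
# Minimal scalar Kalman filter: project a state forward, then correct
# it with a noisy measurement. A sketch of the idea only.
def kalman_step(x, p, z, q=0.01, r=1.0):
    """One predict/update cycle for a constant-state model.

    x: state estimate, p: estimate variance,
    z: new measurement, q: process noise, r: measurement noise.
    """
    # Predict: the state persists; uncertainty grows by process noise.
    x_pred, p_pred = x, p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Example: the estimate converges toward repeated measurements of 10.0.
x, p = 0.0, 1.0
for _ in range(50):
    x, p = kalman_step(x, p, 10.0)
```

The same predict-then-correct structure, extended to many state variables, is what makes projecting a patient’s future trajectory from streaming data conceivable.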
The approach took into account the fact that there were uncertainties in the modeling. The objective was to establish a gross, coarse model of behavior by looking at the patient as a “black box.” Higher fidelity warranted more accurate modeling. However, approaching the patient as a system and taking into account all information is one approach I believe is the key to effective medical device alarms management.
10.12.2011 | 0 Comments
Posted a comment on the HIMSS blog relative to a new piece on the importance of medical device connectivity.
10.11.2011 | 3 Comments
From time to time I have been asked to provide explicit details on the mechanics and methods behind Haar wavelet transforms. The purpose of this post is to walk through two simple examples that demonstrate the use of the Haar transform relative to two one-dimensional signals (time signals). The details of the Haar basis and Haar wavelet transform are available elsewhere. The purpose here is to provide a simple example of how the Haar basis is computed using a simple tool such as an Excel spreadsheet.
Let’s begin with the end product: the following is a 4×4 Haar matrix computed using Microsoft Excel:
The Haar Matrix, which we will denote Hn, is given as follows for the case of 4 data elements, denoted as Vn:
This computes to that shown in the figure above. The Haar wavelet coefficients are computed by first inverting the Haar matrix and multiplying by the output signal vector, Vn:
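Since the original spreadsheet figures are not reproduced here, the following sketch shows the same computation in code (assuming NumPy and one common orthonormal Haar convention; the spreadsheet’s scaling and signal values may differ):

```python
import numpy as np

# 4x4 Haar matrix, orthonormal convention (illustrative; the Excel
# version in the post may use a different scaling).
s = 1.0 / np.sqrt(2.0)
H4 = np.array([
    [0.5,  0.5,   s, 0.0],
    [0.5,  0.5,  -s, 0.0],
    [0.5, -0.5, 0.0,   s],
    [0.5, -0.5, 0.0,  -s],
])

# Wavelet coefficients: invert the Haar matrix and multiply by the
# signal vector Vn (Excel's MINVERSE/MMULT perform the same steps).
v = np.array([4.0, 2.0, 5.0, 7.0])   # arbitrary example signal
c = np.linalg.inv(H4) @ v

# For an orthonormal matrix the inverse is simply the transpose,
# and multiplying H4 by the coefficients recovers the signal exactly.
```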
The Haar matrix inverse is calculated using Excel, and the result is shown in the following figure:
Each cell in the Excel spreadsheet is computed using the following cell entry:
Where $B$2 corresponds to the cell in the first row and column of the original Haar matrix and $E$5 corresponds to the cell in its last row and column. The (1,1) at the end of the expression must be adjusted for each cell: in the example above, (1,1) represents the element in the first row and column of the matrix inverse, and the expression is placed in the first cell of the spreadsheet range corresponding to that element. The last row and column element would be (4,4). So, for example, the cells would be populated as such:
Time and signal vector chosen arbitrarily for this example is as follows:
A plot of this signal is shown in the next figure:
The wavelet coefficients are computed using the following expression:
It is possible to cull coefficients on some basis, such as their magnitude with respect to the largest coefficient. We can arbitrarily impose a threshold with respect to the largest coefficient (-4.9497) and remove those coefficients (set to zero) that are at or below this magnitude. Suppose we set a threshold of 30%. The wavelet coefficients with 30% threshold imposed result in the removal of the second coefficient:
The signal can be recomputed using this culled set of coefficients. The reconstructed values are calculated using the following expression:
Excel spreadsheet calculation:
A plot of the signal with an overlay plot of the recreated signal using 30% threshold on the wavelet coefficients is displayed in the figure below. Note the comparison between the two signals, indicating some loss of fidelity owing to the removal of the wavelet coefficient. This is a crude representation of the effects of destructive compression on the reconstruction of signals:
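The whole threshold-and-reconstruct procedure can be sketched end to end (again with an invented signal and an assumed orthonormal convention, so the coefficient values differ from the spreadsheet’s):

```python
import numpy as np

# Orthonormal 4x4 Haar matrix (illustrative convention).
s = 1.0 / np.sqrt(2.0)
H = np.array([
    [0.5,  0.5,   s, 0.0],
    [0.5,  0.5,  -s, 0.0],
    [0.5, -0.5, 0.0,   s],
    [0.5, -0.5, 0.0,  -s],
])

v = np.array([4.0, 2.0, 5.0, 7.0])     # arbitrary example signal
c = np.linalg.inv(H) @ v               # wavelet coefficients

# Cull coefficients at or below 30% of the largest magnitude,
# mirroring the thresholding rule described in the post.
threshold = 0.30 * np.max(np.abs(c))
c_culled = np.where(np.abs(c) > threshold, c, 0.0)

v_rebuilt = H @ c_culled               # lossy reconstruction
error = np.linalg.norm(v - v_rebuilt)  # fidelity lost to the culling
```

The nonzero reconstruction error is the loss of fidelity visible in the overlay plot.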
The method can be extended easily to higher dimensions (powers of two). Let us consider an application of the Haar wavelet transform with an 8×8 Haar matrix:
The inverse of this matrix is as follows:
The base signal is defined as follows:
A plot of this signal provides a convenient visual rendering of the data:
The wavelet coefficients are calculated in precisely the same way as the 4×4 example shown previously:
Finally, the imposition of signal thresholds (20% and 30%) is shown and the signal is reconstructed in the manner previously described, only extended to an 8×8 Haar matrix. The resulting plot with the wavelet threshold impositions are plotted as overlays in the following figure:
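As a sketch of the extension, a Haar matrix of any power-of-two size can be built recursively (assuming NumPy; the orthonormal convention here may differ from the spreadsheet’s scaling):

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar synthesis matrix for n = 2^k, built recursively:
    the left columns carry pairwise averages, the right columns carry
    pairwise differences."""
    if n == 1:
        return np.array([[1.0]])
    s = 1.0 / np.sqrt(2.0)
    h = haar_matrix(n // 2)
    averages = np.kron(h, s * np.array([[1.0], [1.0]]))
    details = np.kron(np.eye(n // 2), s * np.array([[1.0], [-1.0]]))
    return np.hstack([averages, details])

H8 = haar_matrix(8)
v = np.arange(8, dtype=float)   # arbitrary example signal
c = H8.T @ v                    # coefficients (inverse = transpose)
```

Thresholding and reconstruction then proceed exactly as in the 4×4 case.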
My book on modeling medical device data may be found here. Other links, such as the paper on modeling of re-awakening time, are also available on this site for the interested reader. I’ve also included a PDF version of this blog entry for download.
10.08.2011 | 0 Comments
This has been a very tough week, and although I have made a point of attempting at least one blog entry per day, it simply was not possible due to the busyness of my schedule and travel. This coming week is no better. However, I felt strongly motivated to take a moment while (finally) at home to make a blog entry as I get ready to pack for my next trip tomorrow… at least my cats still recognize me.
But, Steve Jobs’ death at the age of 56 is a blow in more ways than one. He was a creative genius. I liken him to Nikola Tesla in some ways for his uniqueness as well as his personal behavior–both the oddities and the passion. But, unlike Tesla, Jobs knew how to manage a business: he had both creative genius and business acumen, and this combination is even more rare than creative genius alone. There are many creative geniuses who have brilliant ideas that are manifested as tinkering in their garages, and who die penniless and/or insane. However, it is more rare for the creative genius to recognize and tap into that which can be commoditized and marketed in a way that the public will seek — nay — will run to as a “must have.”
This, to me, is sheer brilliance. I wish I had this talent!
HealthcareITNews had a piece recently on the legacy Steve Jobs is leaving in terms of the Apple products and their impact on and use in healthcare. While healthcare per se was not Apple’s forte, the platforms, the ubiquity of the appliances, and the application infrastructure are well designed, easy to use, and provide the perfect foundation for deploying usable applications. I have written on Apple, healthcare, and technology in the past. The ability to support web-based applications on a palm-sized platform (iPhone) that offers external device connectivity (through USB), a camera, a high-resolution user interface, and the ability to share applications through the Apple Store means that the sky is the limit.
I have spent a fair amount of time writing my own personal applications and learning the iOS SDK, and there are enormous possibilities and untapped benefits that have yet to be realized. Many applications, from web-based EMRs to image-viewing software (picture archiving and communication systems, or PACS), are readily available. The use of the iPhone or the iPad for interactive note taking, electronic medical record interaction, and similar applications is well represented in the Apple Store.
There are many iPhone and iPad knock-offs out there. All have their relative benefits. But Apple will always be first in terms of commercial appeal. Although there are some additional features I would like to see in the iPad (I’m sure Apple is hanging on every word I write), one of the key gaps I currently see is the lack of multiple USB ports for connecting to external devices. The single 30-pin interface (image shown below) provides for mapping to USB. However, it is the only hard-wired external interface available on the unit. This somewhat limits the capabilities for tertiary device connectivity when the iPad or iPhone is docked within a docking station or otherwise connected to charging power.
From a clinical environmental perspective, the iPhone and iPad are not of medical grade (i.e., they do not satisfy UL 60601-1-1 requirements, etc.). For remote physician use or for personal use within the healthcare environment at the point of care, however, they are very capable. For general-purpose use by staff in environments where the units may be dropped or receive rough handling, they are probably not the best tools for the trade.
Steve Jobs’ legacy may be the passion and creativity of his vision that has been passed on to individuals who can see the application potential of the technologies he created in many areas, not just healthcare. I know from my own perspective that he has inspired me.
Thanks for visiting Medicinfotech.
10.02.2011 | 1 Comment
Anyone who has read Phillip Longman’s book, Best Care Anywhere, about the turnaround and best practices of the VA system might come away wondering whether the Veterans Administration has figured out how to do something right. In his book, Longman goes through the process and development of the VA’s own electronic medical record system, covering its beginnings in the bowels and basements of VA hospitals across the country, in which physicians and computer developers worked partially in secret to bring forward a very usable yet unpolished electronic medical record system.
Longman describes the process by which modules and methods developed individually by computer programmers working under the supervision of physicians across the United States developed and bolted together a very user-centric and physician-focused healthcare information system, and how these methods were laid open for improvement, refinement and validation & verification by numerous application developers over decades of creative hacking. In short, open source development at the application level. The result of this open source Frankenstein was what became known as VistA.
Part of the success of VistA and the VA in general is the fact that most patients who are within the VA system are cared for over a lifetime of ailments. Patients within the VA are likely to remain from the time they are discharged from military service to the time they pass on. This enables and facilitates their health management over that period of time and causes a shift in care focus from that of solving the immediate crisis to preventative care. This same re-focus can also be seen with all-inclusive payer-provider networks in which patients are managed over a long period of time. As Longman points out on page 102,
“…what ultimately undid HMOs and true managed care was that, because of the constant churning of patients, they couldn’t make good on their early promise to…’keep people healthy’”.
As I have written elsewhere, the focus on chronic diseases is increasing. Given the estimate of 90 million Americans presently living with chronic illness [Longman, page 103], the need for long-term care is increasing. A long-term clinical record that can incorporate and manage patient data and support telemedicine is needed to assist in providing oversight of such patients over time. Consider the chart contained in the following figure, from a 2008 Robert Litan study of chronic ailments and their relative costs. Management through measurement and maintenance is part of the process for ensuring the best information is provided to the care provider: making sure the end-user physician has the best information in a form that is suitable for easing the data burden.
Considering the model of the VA and VistA, it is interesting to consider the extension to other supporting technologies in and around the patient: for instance, open source data collection methods or open source architectures that can be used to facilitate the collection of medical device data for inclusion within the record, as part of the overall view of the patient. A key attribute of “open sourcing” is the ability to improve the overall product via a large user community through trial and use case implementation and extrapolation. Indeed, the larger the user community, the higher the likelihood that use cases can be found and tested, thereby providing a more robust end product. The challenge is, of course, regulatory management of open source frameworks. To a large degree open source software is anathema to the FDA regulatory process, because that process turns on control and management of access. This is understandable. Yet perhaps a balance can be achieved whereby a healthy and robust community can “kick the tires” on the software, while tested and certified builds are nestled away from the mainstream, to be brought forward in a controlled manner with limited or restricted interaction from the user community. I am certain that I will hear healthy argument on both sides, whether in comment or through private email. Nonetheless, there is an important lesson to be learned from the relative successes of the VistA platform that extends beyond its open source roots. It must be remembered that VistA is also part of a network of patients and providers that is fairly exclusive and controlled. Were this model extended to a wider range of patients, it may be that the model would fall apart. The open source attribute of VistA is certainly not shared by the larger industry-developed electronic medical records: the ability to modify these private EMRs on the fly is not possible, nor is the code made available for general consumption.
Ergo, one can see an impending conflict between open source and private industry in general.
However, it is this author’s belief that a more open capability for developing end-user-centric systems would actually move the ball further down the field by directing the focus toward the harder problems in medicine and healthcare information technology. The focus on Meaningful Use, and on the measurands surrounding the specific criteria associated with its various phases, is directing the objective away from care and towards financial gain and financial penalties. Ultimately, this is not helping the patient.
Update: See my response to a blog post in regard to this originally posted article here.
10.01.2011 | 3 Comments
Medical Device Alarms Summit
The impending Medical Device Alarms Summit has prompted me to do some research into the area of medical device alarms in general, and to go back and review related old papers and research of mine. The conference coordinators provided links to some research material, which in turn led me to dig up references to papers by one of my former advisors, CW Hanson, and colleague Bryan Marshall of PENN. Their paper, “Artificial intelligence applications in the intensive care unit” (Crit Care Med 2001, Vol. 29, No. 2), is referenced within one of the recommended research articles, co-authored by Michael Imhoff and Silvia Kuhls, titled “Alarm Algorithms in Critical Care Monitoring” (International Anesthesia Research Society, 2006, 0003-2999/06).
Imhoff describes three classes of medical devices responsible for alarms: monitoring devices, therapeutic devices that “support or replace failing organs,” and therapeutic devices that “administer medications and/or fluids to the patient.” While medical devices have evolved in the area of providing closed-loop control in the form of feedback from the patient through sensors, Imhoff describes two key issues that remain in the area of medical device alarms. These are:
1) Identifying conditions for which an alarm needs “to be thrown”, and
2) the consistent and unambiguous annunciation of the alarm in a manner that makes it clear to the end-user that a critical event has occurred and can be differentiated from other such events.
In my work and my experience, the types of medical device alarms have been legion. Imhoff describes several classes of alarms, and I will further characterize these as clinical versus technical. I view technical alarms as those that identify conditions within the device itself: for example, a device disconnecting from a patient (e.g.: a probe falls off, a cable disconnects, or the device ceases to communicate with the middleware or interfacing system). These types of alarms, Imhoff explains, can have clinical impact. This, of course, makes sense for somewhat obvious reasons.
The second class of alarms are those that are clinical: identifying, based on measurements made by the monitoring or therapeutic devices, conditions of danger or impending danger (e.g.: heart rate too low or too high, blood pressure too low or too high, O2 saturation too low, etc.). These are the conditions for which one expects an alarm to be “thrown.” However, as Imhoff puts it, failure of the technical architecture can result in the inability to detect the clinical conditions. Again, this is quite obvious. Ergo, it is necessary to notify of both technical and clinical failures, since a technical failure will result in the inability to detect the clinical one, especially when considering remote monitoring environments.
Imhoff & Hanson both describe and discuss modeling and prediction techniques for identifying conditions, monitoring and modeling approaches for predicting future trajectory, that involve many different types of techniques. Key among these are artificial neural networks, fuzzy logic, Kalman filtering, Bayesian estimation, least squares filtering, and others.
In my dissertation, “Modeling post-operative respiratory state in coronary artery bypass grafting patients,” and in the EMBS paper that followed it, “Modeling Spontaneous Minute Volume in Coronary Artery Bypass Graft Patients,” I describe a template-based approach for predicting viability for spontaneous breathing trials. The key point is that, from a regulatory and a predictability perspective, most methods require large amounts of patient data in order to develop reliable and predictable outcomes. Deterministic behavior is required, especially for FDA approval of such methods; hence, many methods have remained in the realm of research and clinical trials. Nonetheless, reducing the overall effects of alarm fatigue and improving the forecasting of patient outcome remains a fertile area. I intend to report on the outcome of this workshop and am quite interested in the discussions that will ensue.
For further reading, I will (of course), point the reader to one or both of my books:
9.28.2011 | 0 Comments
HealthcareITNews published an interesting piece on the four types of applications that are key to watch in the hospital environment and for mHealth in general. Christina Thielst, healthcare administrator and founder of the blog Christina’s Considerations, reports that the four types will:
1. Enable mobility by freeing physicians from their offices and desktop computers;
2. Provide remote access to lab results and medical imaging;
3. Enable smartphones and other ubiquitous appliances to become medical devices (e.g.: medical device connectivity) and,
4. Assist in providing practice management by facilitating rounds, support billing, etc.
9.22.2011 | 0 Comments
Medicinfotech now employs a new plug-in to detect mobile devices. You can see the look and feel of the site using the following utility.
9.22.2011 | 0 Comments
On September 12-13 of this year the FDA held a Workshop on Mobile Medical Applications–Draft Guidance. The link to this site is provided here. The FDA is seeking comment on how they should approach medical applications that are accessories to medical devices and, in addition, standalone software that supports or provides for functions related to clinical decision making–that is, Clinical Decision Support.
The FDA has provided a link to submit electronic comments on the draft guidance at this location. Alternatively, written comments may be submitted to:
Division of Dockets Management (HFA-305),
Food and Drug Administration
5630 Fishers Lane, Rm. 1061
Rockville, MD 20852
Identify comments with the docket number: FDA-2011-D-0530
Presentations during this session focused on the following:
1) The definition of standalone CDS software
2) The different types of CDS software
3) The levels of support these types of CDS software provide
The speakers included Dr. Kristen Meier of the FDA; Dr. Jonathan White, of AHRQ; Dr. Stan Pestotnik, TheraDoc Clinical IT; Dr. Richard Katz, GWU; Dr. Meghan Dierks, Beth Israel Deaconess Medical Center; and Meryl Bloomrosen, AMIA. Hyperlinks are included with the presenter names to their respective presentations.
Dr. Kristen Meier presented on “Standalone Clinical Decision Support.”
This panel focused on the factors used to determine the risk classification of different types of software that manage or provide clinical decision support, and on ways to approach assessing their safety and effectiveness.
Dr. Meghan Dierks included some interesting and provocative points on a definition of “What is Clinical Decision Support?” She stated that CDS…
“…has the potential to influence any or all of the typical decisions that I make when caring for a patient…”
“Any tool that can influence my ability to: (a) detect current state; (b) detect change; (c) predict future states; (d) identify and present goals and objectives; (e) identify and present options available.”
Furthermore, I found the following key to my personal understanding of what CDS influences:
“Any tool that can influence my ability to: (a) identify, present optimal choice among several criteria; (b) assess information adequacy/quality; (c) search for additional information; (d) check for bias; (e) structure the approach–intervention.”
Basically, this is interventional guidance.
Dr. Dierks continued: what are the key risks if the CDS fails? Is the failure visible to the clinician, and can the clinician detect it in sufficient time to override or intervene on behalf of the patient before it is too late? Are there cues, insights, or strategies available to the clinician in the absence of the CDS system or methodology? From there, questions of severity of harm and likelihood of harm apply, all aspects typically assessed as part of hazard and risk analysis under the FDA Quality System.
I am encouraged by this level of discussion: it engages clinicians as part of a process from which, I believe, they have for far too long been absent in healthcare IT. It is time for the electronic medical record (EMR) “community” to evolve into the role of evaluating tools that will truly bring forward Meaningful Care (not necessarily the same as Meaningful Use). This means enabling, facilitating, and removing barriers that impede the clinician in the process of assessing and caring for the patient. The IT infrastructure that has garnered much of the healthcare IT focus on charting over the past several years now needs to evolve into effective use of that information IN A WAY that helps the clinician. This statement has many implications, from optimal display of data and reduction in “screen flips” to truly enabling that which is most important to the physician end-user. This is achieved not simply by providing better “mouse traps” in the form of sophistication. It means providing data in a straightforward manner, and presenting options and analyses close at hand, in a way that is easy to access so that they may be assessed. The technology should NEVER get in the way of the physician. I believe we (the community as a whole) have a long way to go in this regard.
Some time ago I mentioned that I was going to be writing much more on the topic of clinical informatics and clinical decision support. I have posts here, here and here that refer to this dialog. I plan to write much more.
FDA regulatory framework for 510(k) clearance
Thanks for visiting http://www.Medicinfotech.com
9.18.2011 | 0 Comments
Recently, I have received comments from some folks indicating that the RSS feed from this site has not been functioning properly. I include a link here by which this site’s RSS feed may be verified. Alternatively, you can click on the image link below:
- Medicinfotech Web Management
9.08.2011 | 2 Comments
This post represents the first in a series I’ve decided to write on the subject of pragmatic application of clinical informatics–from the perspective of one who has been in the trenches on the side of development and implementation.
e-HealthExpert.org provides a definition of Clinical Informatics attributed to Columbia University that is:
“Clinical Informatics is the scientific study of the effective use of information in patient care, clinical research and medical education.”
e-HealthExpert.org goes on to define the key goals of clinical informatics as:
“The ultimate goals of clinical informatics are to streamline the processes of patient care, to provide clinicians with accurate data in a timely manner, improve the quality of care, and to reduce costs.”
I am an engineer by education and a Clinical Informaticist by training. Having spent the better part of 20 years analyzing data; developing, modeling, and creating expert systems and fuzzy-logic-based controllers; and leading the development of products that bring these capabilities forward in a practical way, it is clearer to me now than at any other time just how rich this space is and what potential exists for improving overall healthcare through the appropriate use of analytical tools at the point of care.
A problem (perceived or real) among researchers into complex problems and modeling is that achieving high-fidelity realism comes at the expense of time, effort, and generality. The “nirvana” for modelers (at least this was my case in the old days) was a highly accurate model that could be applied across many different situations. Alas, in practice (not for lack of trying), I was typically only able to achieve high positive predictive performance when specificity was very high. This may be taken as the sign of a mediocre modeler, or a tough and complex problem (or some of both; I’m not proud!). Yet this is not a unique challenge: fidelity and accuracy trade off against breadth of applicability. Systems modeling takes a different tack: produce a relatively accurate “black box” model that gets you within the “ball park” of the right answer. Lacking full knowledge of the detailed mechanics internal to the black box, this type of input-output modeling maps input stimulus to output behavior without the concomitant effort of creating a higher-fidelity model. The relative benefits? Well, that depends on the overall objective you are seeking. The real world is replete with situations in which this is very much the case: obtain a ballpark answer in a finite amount of time, or obtain a highly accurate answer at a cost in time and effort that may not be affordable.
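The black-box idea can be sketched in a few lines: fit a simple linear map from an observed input stimulus to an observed output by ordinary least squares, with no model of the internal mechanics at all. The data below are synthetic and purely illustrative.

```python
def fit_linear(xs, ys):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x,y and variance of x give the least-squares slope
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

# Synthetic stimulus/response pairs: output is roughly 2*input + 1 plus noise
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3.1, 4.9, 7.2, 8.8, 11.1]
slope, intercept = fit_linear(xs, ys)

def predict(x):
    """Ballpark prediction from the fitted black-box model."""
    return slope * x + intercept
```

The fitted line recovers approximately the underlying input-output relationship, which is exactly the "ball park" answer the paragraph above describes: useful for rough projection, silent about mechanism.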
It all depends on the objective.
Much work exists in the fields of expert systems, fuzzy logic, neural networks, and knowledge management. In my own dissertation and elsewhere I have written about practical system models for predictive assessments to guide decision making. The question of required accuracy comes down to the required end result. If the end goal is to guide workflow functions (when to revisit a patient; determining which data need to be recorded to support certain management or interventional functions; general notifications as to the status of the patient), then a high-level model may be sufficient, as it serves the purpose of providing a “heads up” to the end user.
On the other hand, if the model is to be used to support interventional, microscopic assessments of critical events, to guide critical alarm notifications, or to inform a clinician that patient safety will be at risk if certain action is not taken, then high accuracy may be absolutely necessary and anything less will be intolerable.
Another concern I have heard over the years is the general one that you cannot replace the clinician with a computer, as spoofed in the rather comical Herman cartoon.
I have always drawn the following analogy to clinical decision support systems: informatics tools are the equivalent of a well-organized toolbox for the mechanic. The mechanic (not to suggest that clinicians are auto mechanics, or vice versa, but to draw the linkage between two artisans plying their trades) is still in charge of making the diagnosis, making the repair, and analyzing the situation (situation assessment). Informatics is the toolbox, and the organization of tools within it provides a convenient way for the clinician to select the right “wrench.” It also provides a mechanism for suggesting a tool to apply to a given situation, but does not force the mechanic to use one tool or one approach. The benefit can be improved workflow or a better decision that saves time, energy, and even lives.
In future posts I will endeavor to go into some specific details on methods and approaches, as well as surveys of the field. Thanks for visiting Medicinfotech.
8.18.2011 | 0 Comments
Found this small physiological vital signs monitor, the VitalPoint(r) Pro, on this site here. It caught my attention not only because of its size, but also because it has extra-enterprise uses (home health use). I like small devices because they are light, agile, and can be used in locations other than intensive care units, emergency departments, and operating rooms. I’m going to continue my research on these devices, because their smaller size makes them easier to integrate with electronic medical record systems. This makes them scalable and usable by a much wider population.
8.14.2011 | 0 Comments
iPhoneMedicalApps.com has a nice piece on the use of iPhone technology in the mHealth application environment. The development of iPhone applications in general has expanded enormously over the course of the last two years, and the healthcare application area involving mobile health platforms and infrastructure (mHealth 2.0) has been tremendously enabled by the ubiquity and accessibility of the iPhone and other appliances (for example, Android).
The key is that there is no slowing in this market, and the limitations are only those associated with the creativity of the developers. So mHealth will be an enormous growth field for as far into the future as the eye can see.
8.11.2011 | 0 Comments
In regard to the SC Magazine post, this is a key concern in the safety assessment of medical devices at the point of care. Vulnerabilities that can lead to patient safety hazards, even down the road, well beyond the original horizon of the device, require creative thinking and the ability to accommodate the effects of new technologies during the design process and beyond.
It is understood that this device is older and that, when it was originally designed, the vulnerabilities were perhaps not understood and therefore not designed around. Yet, given this knowledge today, a new severity-likelihood assessment needs to be performed, mitigations for the revealed vulnerability identified, and a corrective action plan created to address its effects.
Medical device design and development involves adherence to the quality system requirements put forward by the FDA. As part of the design of any medical device, it is necessary to assess risk severity and likelihood of occurrence to evaluate whether a particular risk poses an unacceptable hazard to a patient, a clinician, a technician, or anyone else who comes in contact with the medical device. The term “acceptable” depends on the situation, the specifics of the device, and the context surrounding the patient. The objective of the assessment is to identify ways to mitigate risks in terms of their severity, their likelihood, or both.
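The severity-likelihood assessment described above can be sketched as a simple risk matrix. The 1-5 scales, category names, and acceptability thresholds below are illustrative assumptions only; real acceptance criteria come from the manufacturer’s own quality system, not from this example.

```python
# Illustrative ordinal scales (assumed, not FDA-prescribed)
SEVERITY = {"negligible": 1, "minor": 2, "serious": 3,
            "critical": 4, "catastrophic": 5}
LIKELIHOOD = {"improbable": 1, "remote": 2, "occasional": 3,
              "probable": 4, "frequent": 5}

def risk_level(severity, likelihood):
    """Classify a hazard from its severity and likelihood ratings."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 15:
        return "unacceptable"   # must be mitigated before release
    if score >= 6:
        return "investigate"    # mitigate as far as practicable
    return "acceptable"

# A newly revealed vulnerability may raise the likelihood rating,
# forcing re-assessment of a previously acceptable hazard:
before = risk_level("serious", "improbable")  # 3 * 1 = 3
after = risk_level("serious", "probable")     # 3 * 4 = 12
```

The point of the example is the re-assessment step: the hazard itself did not change, but new knowledge about likelihood moved it into a band that demands mitigation.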
Medical device vulnerability implies the need for proper severity-likelihood analysis to evaluate and identify ways to mitigate risk.
8.06.2011 | 0 Comments
While trolling the USPTO site this evening I found that I have just been awarded my 6th US Patent, titled “Distributed system for monitoring patient video, audio and medical parameter data.”
The main web site provides the capability to download images of the actual patent. The quick search tool may also be used to locate it, for those interested. Here is a screen image of the patent:
In this US Patent, I discuss the use of video imagery combined with medical device parameter data to provide a more holistic picture of patient care. The significance of this offering, and its key differentiator from competition already on the market, is its use of an enterprise network for web-based display of imagery through the Internet.
Please view my web site on my PhD Dissertation to see the benefits of medical device data for patient care and clinical decision making.
12.16.2009 | 0 Comments
Yes, it has been a while since I have posted anything. This is due to two reasons: first, I’ve been recovering from surgery. Second, I’ve been given a contract to write a book on medical devices and modeling. My intention is to focus on the book for the next several months, as I do not have a ghost writer (I’m it), and making a living is necessary as well. However, I will provide updates and details as I progress. As of this moment, chapter 1 is almost complete and I am about 4,000 words along toward a total of 92,000.