Wednesday, September 23, 2009

HEALTH IT: GOALS, CONCERNS AND CHOICES

Health information technology, if properly implemented, can move us closer to two fundamental goals of healthcare reform:
  • Improved quality of care, through:
    * Making comprehensive patient information (clinical history, medications, test results, etc.) available at the point of care
    * Better decision support
    * Patient involvement
  • Cost containment, through:
    * Prevention of errors; elimination of duplicate tests and treatment procedures
    * Faster dissemination of research results and best practices

From the provider standpoint, there are three logical levels of information aggregation. First, all pieces of medical data for the current encounter (complaint, diagnosis, tests, reports, notes, medications, treatment procedures) are combined with the patient’s relatively static personal information (demographics, family history, social history) into a visit summary. At the next level, the entire visit history for that patient at this point of care is incorporated. Finally, patient records from all source EHR systems are brought together to form a personal health record (PHR).
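The three levels might be modeled along these lines. This is a minimal sketch in Python; every class and field name here is my own illustration, not any standard or existing schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PatientInfo:
    """Relatively static personal information."""
    demographics: Dict[str, str]
    family_history: List[str] = field(default_factory=list)
    social_history: List[str] = field(default_factory=list)

@dataclass
class Encounter:
    """All medical data for a single encounter."""
    complaint: str
    diagnosis: str
    tests: List[str] = field(default_factory=list)
    medications: List[str] = field(default_factory=list)
    notes: List[str] = field(default_factory=list)

@dataclass
class VisitSummary:
    """Level 1: encounter data combined with personal information."""
    patient: PatientInfo
    encounter: Encounter

@dataclass
class LocalRecord:
    """Level 2: full visit history at one point of care."""
    patient: PatientInfo
    visits: List[Encounter] = field(default_factory=list)

@dataclass
class PHR:
    """Level 3: records aggregated from all source EHR systems,
    keyed by a source-system identifier."""
    patient: PatientInfo
    sources: Dict[str, List[Encounter]] = field(default_factory=dict)
```

Each level simply wraps the one below it, which is why the first two are already routine inside a single EHR system, while the third is where the interoperability questions begin.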

In fact, the first and second levels of aggregation can be found in most EHR applications. There is still work ahead of us to simplify and automate communication between them and, for example, lab or bedside monitoring systems. Ultimately, that depends on the architecture of each EHR system and is not going to be covered by interoperability standards: the internal representation of patient data, and the way it is stored, delivered and presented to the user, is up to the system vendor. Luckily, this is largely a technical task of establishing the necessary protocols and data exchange formats.

It gets more complicated when we need to retrieve patient records created by another EHR system outside of our network. Basically, it does not matter whether that system is two blocks away or across the country; from a bird’s-eye perspective, the process looks the same: locate the records, send a request, get the results. On closer inspection, though, each of those steps involves a number of actions, and the architectural decisions have far-reaching implications.

For an EHR system, the capability to find external health records for a patient depends on access to a registry that links an identifier, unique for each person, with all repositories where those records are stored. The system either has to know that identifier, or should be able to obtain it based on the patient data it has. Basically, there are two choices: a nationwide patient number, assigned at birth or on arrival to the U.S. with proof of residence, or a Record Locator Service (RLS), which creates a unique master index for every person and tags all records for that person with it. The main objection against the nationwide identifier is that without a proper identity verification process, it may be abused, much like the SSN is, to gain access to somebody else’s medical records. An RLS, for its part, brings a few mostly technical issues. In general terms, it uses record matching techniques based on the demographic information available in the EHR. If certain fields are empty or contain incorrect values, matching may return false positives or negatives.
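To illustrate the matching problem, here is a toy demographic matcher. The field names, weights and threshold are all hypothetical; real RLS implementations use far more sophisticated probabilistic techniques, but the failure mode is the same — a missing or wrong field shifts the score and can flip a match:

```python
def match_score(rec_a, rec_b, weights=None):
    """Naive weighted field-by-field comparison of two demographic
    records. Empty or incorrect fields lower the score, which is how
    false negatives (and, with sloppy data, false positives) arise."""
    weights = weights or {"last_name": 0.3, "first_name": 0.2,
                          "dob": 0.4, "zip": 0.1}
    score = 0.0
    for field_name, weight in weights.items():
        a = rec_a.get(field_name)
        b = rec_b.get(field_name)
        if a and b and a.lower() == b.lower():
            score += weight
    return score

def find_candidates(query, index, threshold=0.7):
    """Return ids of master-index entries that score at or above
    the threshold for the queried demographics."""
    return [pid for pid, rec in index.items()
            if match_score(query, rec) >= threshold]
```

Note that with the weights above, a record missing its date of birth can never clear the 0.7 threshold even if every other field matches — exactly the kind of false negative the paragraph describes.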

The record retrieval process can also differ depending on where the data is stored. If there is a centralized repository containing copies of all medical records, it is relatively straightforward. The data management application:

  1. Authenticates the user (checks his credentials)
  2. Authorizes him (grants access rights) based on his profile
  3. Applies privacy protection rules (laws, regulations, patient consent instructions)
  4. Formats results and sends them back to the requester
  5. Creates audit records (who, what, when)
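The five steps above could be sketched roughly like this. This is a toy in-memory version: the user store, role names and rule logic are stand-ins of my own invention, not any real system’s API:

```python
import datetime

# Hypothetical stand-ins for a real identity store and rule engine.
USERS = {"dr_lee": {"id": "dr_lee", "password": "s3cret", "role": "physician"}}
ROLE_CATEGORIES = {"physician": {"clinical"}, "clerk": {"administrative"}}

def authenticate(credentials):
    """Step 1: check the requester's credentials."""
    user = USERS.get(credentials.get("user"))
    if user and user["password"] == credentials.get("password"):
        return user
    return None

def privacy_allows(user, record):
    """Steps 2-3: grant access rights from the profile, then apply
    privacy rules. A placeholder for laws, regulations and patient
    consent instructions."""
    allowed = ROLE_CATEGORIES.get(user["role"], set())
    return record.get("category") in allowed

def handle_request(credentials, patient_id, repository, audit_log):
    # 1. Authenticate the user
    user = authenticate(credentials)
    if user is None:
        return {"error": "authentication failed"}
    # 2-3. Authorize and filter each record through the privacy rules
    records = [r for r in repository.get(patient_id, [])
               if privacy_allows(user, r)]
    # 4. Format results for the requester
    result = {"patient_id": patient_id, "records": records}
    # 5. Audit: who, what, when
    audit_log.append({"who": user["id"], "what": patient_id,
                      "when": datetime.datetime.utcnow().isoformat()})
    return result
```

Even in this toy, steps 2 and 3 collapse into a single `privacy_allows` predicate; in reality that predicate is where most of the complexity lives, as the next paragraph argues.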

The most complex part of the workflow logic corresponds to steps 2 and 3. For example, an ED doctor requests health records for the out-of-state patient he is currently caring for. If the state of residence has different disclosure rules than the state the patient is being treated in, which of them should apply? Should the doctor be allowed to see psychiatric records if the patient has apparent congestive heart failure?

Maintaining profiles of external users and implementing all that logic may prove too overwhelming for an individual EHR system in a pure P2P world. It makes a lot of sense to set up an intermediary that will handle most of that process. Each connected EHR system will only need to know a limited number of user categories, and what information has to be provided depending on which category the requesting user belongs to. In this framework, though, the big unknown is the availability of source EHR systems, especially in small hospitals and practices. Storing a copy of patient records at a local RHIO, much like in the centralized repository, will insulate the source EHR system from external requests, but the need for a record linking mechanism will still remain.
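The category idea might look like this in code. The category names and record fields below are invented for illustration; the point is only that the source EHR keys its disclosure logic off a handful of categories rather than thousands of individual external users:

```python
# Hypothetical category-to-view mapping an intermediary might maintain.
CATEGORY_VIEWS = {
    "emergency_physician": {"allergies", "medications", "problem_list", "labs"},
    "pharmacist": {"medications", "allergies"},
    "billing": {"demographics", "encounters"},
}

def filter_record(record, category):
    """A source EHR system only needs the requester's category,
    not the individual user's profile, to decide what to release."""
    view = CATEGORY_VIEWS.get(category, set())
    return {k: v for k, v in record.items() if k in view}
```

An unknown category falls through to the empty view, so the default is to disclose nothing.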

As far as clinical research and public health activities are concerned, where data mining and statistical analysis are applied against large volumes of data, a centralized repository of personal health records is the most efficient and, perhaps, secure option. Traditionally, researchers receive anonymized, or de-identified, patient data directly from healthcare institutions. HIPAA allows for disclosure of de-identified records, but in a recently published paper, Paul Ohm of the University of Colorado Law School states that release of raw data does not guarantee necessary privacy protection, especially if the data enters the public domain. In many cases, though, the end result of interest is computed statistics, bearing no links whatsoever with any personal information. By keeping data inside the repository, and controlling and monitoring access to it, we have a better chance to avoid unintended consequences.
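Keeping records inside the repository and returning only computed statistics could look roughly like this. The minimum-cohort threshold and field names are illustrative assumptions (the threshold is in the spirit of k-anonymity, not an implementation of it):

```python
import statistics

def cohort_mean(repository, field_name, predicate, min_cohort=5):
    """Compute an aggregate statistic inside the repository; raw
    records never leave. Refuses to answer for cohorts so small
    that the result could point back to individuals."""
    values = [rec[field_name] for rec in repository
              if predicate(rec) and field_name in rec]
    if len(values) < min_cohort:
        return None  # refuse: cohort too small to release safely
    return statistics.mean(values)
```

The caller gets a number (or a refusal), never the underlying records, which is the access-control property the paragraph argues for.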

I tend to believe that there is no ideal architecture or technology that could perfectly meet all our needs, and personal health records are no exception. The law, policies and standards will continue developing, hopefully with the goals I mentioned earlier in sight.

2 comments:

  1. Good post, Alexander. A few comments--[in brackets]--follow ...

    You wrote: From the provider standpoint, there are three logical levels of information aggregation. First, all pieces of medical data for the current encounter (complaint, diagnosis, tests, reports, notes, medications, treatment procedures) are combined with relatively static patient’s personal information (demographic, family history, social history), into a visit summary. At the next level, the entire visit history for that patient at this point of care is incorporated

    > [I’d emphasize the collection of clinical and financial—i.e., quality and cost—data here].

    Finally, all patient records from all source EHR systems are added together to form PHR

    > [In addition to data from EHRs, the PHR would include data entered directly by the patient. With the patient’s authorization, certain data from the PHR would/could also be sent to the clinicians’ EHRs].

    For an EHR system, the capability to find external health records for a patient depends on access to a registry that links an identifier, which must be unique for each person, with all repositories where those records are stored. It either has to know that identifier, or should be able to get it based on the patient data it has. Basically, there are two choices: a nationwide patient number, which will be assigned at birth or on arrival to the U.S. with a proof of residence, or a Record Locator Service (RLS), that is to create a unique master index for every person and to tag all records for that person with it.

    > [What about using a biometric index instead of a nationwide patient number or RLS? ]

    Maintaining profiles of external users and implementing all that logic may prove too overwhelming for an individual EHR system in a pure P2P world. It makes a lot of sense to set up an intermediary that will handle most of that process. Each connected EHR system will only need to know a limited number of user categories, and what information has to be provided depending on which category the requesting user belongs to. In this framework, though, the big unknown is availability of source EHR systems, especially, in small hospitals and practices. Storing a copy of patient records at a local RHIO, much like in the centralized repository, will insulate the source EHR system from external requests, but the need for a record linking mechanism will still remain.

    >[In a P2P (node-to-node) network, a publisher-subscriber methodology (similar to the way the phone system works) would help assure that EHRs are available to exchange patient data with authorized parties who would have to keep their computers turned on and have access to the Internet via e-mail (or other means). In this cyber-architecture, patient data could be stored locally in encrypted files in the providers’ and patients’ computers, while an intermediary node maintained by a local RHIO could make the e-mail (or IP) addresses of all authorized publishing nodes available to the appropriate subscribing nodes. For cross-region data exchange, the different RHIO nodes could share the e-mail addresses.]

  2. Steve,

    First of all, thank you for your comments.

    I agree that to be complete, pure medical information has to be linked to costs of providing care. As for quality, I assume you are talking about the outcome of a specific encounter, rather than the Meaningful Use clinical quality measures. Quantification of outcomes is a big topic on its own, and is at the heart of the pay-for-performance model. But yes, discharge or visit summary, for that matter, definitely belongs there.

    Yes, patients should have the ability to view their entire record and maintain certain parts of it.

    A biometric identifier should work, as long as we also take into account accident victims being brought to an emergency room.

    I am not sure whether I completely comprehend the technical details of the exchange framework that you describe. The goal is to find the location of the record and retrieve it with as little latency as possible. I understand that HTTP(S) and (S)FTP, which provide the best automation capabilities, require a static IP address, firewall, server, etc. VPN connection eliminates the need for the first two conditions, but is not really scalable. True, SMTP/POP3 works whenever your computer is connected to the Internet, but I am not sure if there are any e-mail clients that support auto-response to any account from a given list, if this is what you suggest. I am not saying this is impossible, or even too difficult to implement, but it would hardly work fast enough. Again, perhaps, I misunderstood your idea.
