Fast Healthcare Interoperability Resources (“FHIR”) is still a dream yet to be realized
For the past decade, FHIR has been, and may still be, the promised solution to the messy landscape of healthcare interoperability. But if you are working on provider-facing healthcare solutions like we are, you have probably shared a similar experience: it never really works as it should.
Things might be different if you are building a greenfield project that doesn’t rely on legacy vendors, or if you are only working with one FHIR system (instead of a dozen different workflows for a single task). Most of the time, though, the friction pushes us to the alternative: push-based HL7 messages.
What do most HealthTech companies actually need?
Once you start building a data acquisition and integration strategy, you will be dazzled by the choices:
API aggregators that plug into Health Information Exchanges (“HIEs”)
Data aggregators that contract with pharmacies, payors, and the government to collect and anonymize data for sale
Third-party integration providers that offer a simpler interface to help you scale faster
Google Cloud’s managed FHIR/HL7v2 stores (the Cloud Healthcare API)
The list goes on…
All of these providers typically charge hefty fees and require year-long commitments, which unfortunately stifles innovation, making it harder for your everyday “built-in-a-garage” start-ups to take flight.
Scoping becomes absolutely essential during this process. Because healthcare data is not centralized, a company focused on creating accurate AI/ML-powered recommendations cannot simply ask for access to the entire EMR. Most use cases can instead be supported by mapping the underlying need onto specific HL7 feeds; the usual suspects are ADT (Admission, Discharge, and Transfer), MDM (Medical Document Management), and SIU (Scheduling Information Unsolicited) feeds, along with common data types such as PN (Person Name) within them.
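To make that scoping exercise concrete, here is a minimal Python sketch that pulls the message type out of MSH-9 and decides which feed a message belongs to. The sample message and the feed map are purely illustrative, not a complete routing table:

```python
# Toy sketch: scope incoming HL7 v2 traffic by message type (MSH-9).
# The sample message and the feed mapping below are illustrative only.

SAMPLE_ADT = (
    "MSH|^~\\&|SENDING_APP|SENDING_FACILITY|RECEIVING_APP|RECEIVING_FACILITY|"
    "202401151030||ADT^A01|MSG00001|P|2.5.1\r"
    "PID|1||12345^^^MRN||DOE^JANE||19800101|F\r"
)

# Which feed a message type belongs to (illustrative subset).
FEEDS = {
    "ADT": "admissions, discharges, transfers",
    "MDM": "clinical documents",
    "SIU": "scheduling",
}


def message_type(raw_hl7: str) -> str:
    """Return the MSH-9 message type (e.g. 'ADT^A01') from a raw HL7 v2 message."""
    msh = raw_hl7.split("\r")[0]   # MSH is always the first segment
    fields = msh.split("|")        # MSH-1 is the field separator itself,
    return fields[8]               # so MSH-9 lands at index 8


if __name__ == "__main__":
    mtype = message_type(SAMPLE_ADT)                      # 'ADT^A01'
    feed = FEEDS.get(mtype.split("^")[0], "out of scope")
    print(mtype, "->", feed)
```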
It’s easier than you think
Before health tech startups, there were legacy healthcare IT vendors. Their best-kept secret in healthcare interoperability, one shared by HIEs, payors, and large health systems alike, is Mirth Connect (developed by NextGen). It is the golden shortcut to kick off your interop journey.
To quote NextGen, Mirth functions like a translator that can “pull” data from and “push” data to a database, and filter or transform messages between different standards (e.g., HL7 to FHIR).
A self-hosted instance is easily scalable and can support up to a few hundred bi-directional connections, at no cost. Any start-up can quickly pick up the interface and use it to pull and push messages with its provider orgs. You can even use Synthea as a learning tool to simulate incoming data streams.
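If you want to play with this end-to-end before any provider org is connected, here is a hedged sketch of the Synthea side. The output path and per-patient Bundle layout are assumptions based on Synthea’s default FHIR export; adjust them to wherever your run actually wrote its files:

```python
# Minimal sketch: replay Synthea-generated FHIR bundles as a mock "incoming" stream.
# Assumes Synthea's default output layout (output/fhir/*.json, one Bundle per patient).
import glob
import json
import time


def replay_bundles(pattern: str = "output/fhir/*.json", delay_s: float = 1.0):
    """Yield FHIR resources one at a time, pausing to mimic a live feed."""
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            bundle = json.load(f)
        for entry in bundle.get("entry", []):
            yield entry["resource"]
            time.sleep(delay_s)


if __name__ == "__main__":
    for resource in replay_bundles(delay_s=0.1):
        print(resource["resourceType"], resource.get("id"))
```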
Enter streaming
Traditionally, streaming solutions were reserved for capturing real-time or near real-time systems such as clickstream analytics or high-frequency trading. But as the analytical demands from different use cases grow, right-sizing the data ingestion interval has become key to the success of these programs. Building a streaming data platform allows us to gather minute-by-minute updates on what is happening to patients and provide timely suggestions on possible interventions and support at the time patients need it the most.
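In Spark terms, “right-sizing” often comes down to a single trigger setting. A minimal PySpark sketch, using the built-in rate source as a stand-in for a real feed (the interval values are arbitrary examples):

```python
# Sketch: the same streaming job, "right-sized" by its trigger interval alone.
# The rate source is a stand-in; swap in your actual feed.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("right-sizing-demo").getOrCreate()

# A toy unbounded source that emits one row per second.
events = spark.readStream.format("rate").option("rowsPerSecond", 1).load()

query = (
    events.writeStream
    .format("console")
    # Near real time: pick up new data every minute...
    .trigger(processingTime="1 minute")
    # ...or, for a cheaper daily batch, schedule the job once a day and use
    # .trigger(availableNow=True)   # Spark 3.3+; older versions: .trigger(once=True)
    .start()
)
query.awaitTermination()
```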
To stand this up, you just need a VPN to transmit the HL7 messages over the MLLP protocol and channel them into your Mirth instance. Optionally (but highly recommended), you can add a service like Kafka or Event Hubs as a message sink to provide consistency and keep costs down, for example by limiting downstream ingestion jobs to once a day.
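For the MLLP leg, a small test sender is enough to verify the pipe into Mirth. In the sketch below the host, port, and message are placeholders; the framing bytes are the standard MLLP delimiters:

```python
# Sketch: push one HL7 v2 message to a Mirth MLLP listener over a TCP socket.
# MLLP framing: 0x0B start block, 0x1C + 0x0D end block.
import socket

MIRTH_HOST = "10.0.0.5"   # hypothetical address reachable over your VPN
MIRTH_PORT = 6661         # hypothetical Mirth channel listener port

START_BLOCK = b"\x0b"
END_BLOCK = b"\x1c\x0d"

HL7_MESSAGE = (
    "MSH|^~\\&|DEMO_APP|DEMO_FACILITY|MIRTH|DEST|202401151030||ADT^A01|MSG00001|P|2.5.1\r"
    "PID|1||12345^^^MRN||DOE^JANE||19800101|F\r"
).encode("utf-8")


def send_mllp(message: bytes) -> bytes:
    """Frame a message with MLLP delimiters, send it, and return the raw ACK."""
    with socket.create_connection((MIRTH_HOST, MIRTH_PORT), timeout=10) as sock:
        sock.sendall(START_BLOCK + message + END_BLOCK)
        return sock.recv(4096)  # the listener replies with an MLLP-framed ACK


if __name__ == "__main__":
    print(send_mllp(HL7_MESSAGE))
```

From there, Mirth takes over: it filters and routes each accepted message into whichever sink you chose, whether that is a Kafka topic, an Event Hub, or a plain database.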
How to process the messages
I wholeheartedly recommend trying out Smolder (https://github.com/databrickslabs/smolder), an open-source HL7 message parsing library for Spark. It is a far nicer experience than Java-based HAPI or other similar solutions. It just works.
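A hedged PySpark sketch of what that looks like. It assumes the Smolder JAR is attached to your cluster, that it registers the “hl7” data source format, and that the resulting schema matches the project README (a message-header column plus a segments array of id/fields structs); the input path is hypothetical:

```python
# Sketch: read raw HL7 v2 message files with Smolder from PySpark.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode

spark = SparkSession.builder.appName("hl7-parsing").getOrCreate()

# Each file under this (hypothetical) path contains one HL7 v2 message.
df = spark.read.format("hl7").load("/mnt/raw/hl7/adt/")

# Flatten the segments array and keep only patient-identification (PID) segments.
pid = (
    df.select(explode("segments").alias("segment"))
      .filter(col("segment.id") == "PID")
)
pid.show(truncate=False)
```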
Happy Streaming!