Currently I'm researching two areas of #openEHR CDR implementation: 1. Conformance verification, 2. Performance / High throughput.
For #openEHR conformance verification I'm continuing my work at #HiGHmed, in which I designed hundreds of test cases and data sets (archetypes, templates, JSON and XML instances) to verify an openEHR CDR complies with the openEHR specs.
The first implementation of those #openEHR compliance test cases was done in #RobotFramework, a Python tool. Initially the tests were really nice to read, but as complexity grew, the Robot tests started to look messy and became difficult to maintain.
The current testing framework I'm using is #SpockFramework, in which tests are written in #Groovy and run on the #JVM. Reports are nice, though not as detailed as the Robot ones, but at least the code for complex tests is understandable and maintainable.
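To give an idea of the style, here is a minimal sketch of how a conformance check reads in Spock. The spec name and the embedded composition fragment are made up for illustration; they are not taken from the actual test suite.

```groovy
import groovy.json.JsonSlurper
import spock.lang.Specification

// Illustrative only: checks that a canonical JSON composition carries
// the minimal RM metadata. The JSON fragment below is a made-up example.
class CompositionFormatSpec extends Specification {

    def "a canonical JSON composition declares its RM type and archetype id"() {
        given: 'a minimal composition instance'
        def json = '''
        {
          "_type": "COMPOSITION",
          "archetype_details": {
            "archetype_id": { "value": "openEHR-EHR-COMPOSITION.encounter.v1" }
          }
        }
        '''

        when: 'the instance is parsed'
        def composition = new JsonSlurper().parseText(json)

        then: 'the mandatory metadata is present'
        composition._type == 'COMPOSITION'
        composition.archetype_details.archetype_id.value.endsWith('.v1')
    }
}
```

Even with more complex fixtures, the given/when/then blocks keep the intent of each test readable, which is exactly what was getting lost in the Robot suites.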
Though #openEHR compliance verification is not only about running tests: it's about a framework that allows profiling the testing needs based on the context and type of system (not all openEHR implementations are CDRs!). That is why I've designed a Compliance Verification Framework.
The #openEHR CVF is under development and will be released freely so anyone can implement it. It's technology agnostic and focuses on: 1. providing vendors a tool to verify their systems comply with the openEHR specs, 2. letting customers verify that what they are buying is compliant.
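Just to illustrate the profiling idea, here is a guess at what a verification profile could capture; the field names are my own assumptions, since the CVF format is not published yet.

```groovy
// Hypothetical verification profile; none of these field names come
// from the actual CVF, which is still under development.
def profile = [
    systemType    : 'CDR',                                  // not all openEHR systems are CDRs
    specVersion   : 'RM 1.1.0',
    testSuites    : ['EHR', 'COMPOSITION', 'DIRECTORY', 'AQL'],
    optionalSuites: ['DEMOGRAPHIC']                         // skipped when the system has no demographics
]

// A runner would pick the suites to execute based on the profile.
profile.testSuites.each { suite ->
    println "Running conformance suite: ${suite}"
}
```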
In parallel I'm working on the implementation of the #openEHR CVF using #SpockFramework against @AtomikServer, so we can verify the test implementation is correct and, at the same time, that Atomik behaves correctly.
My second area of research is performance and high-throughput communication of #openEHR data. I'm currently checking where parallel programming could be applied, and researching the #EventDriven approach as a way to handle some data processing by receiving async notifications.
I'm detecting a lot of potential places in @AtomikServer where async notifications could be implemented, instead of having everything in the same thread, such as writing commit operation logs for #openEHR compositions, folders, etc.
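As a rough sketch of the event-driven idea, assuming a made-up CommitEvent and notifier (none of this is actual Atomik code): the commit operation returns as soon as the data is persisted, and the log entry is written from a worker thread.

```groovy
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors

// Hypothetical event published after a composition/folder commit.
class CommitEvent {
    String compositionUid
    String committer
}

class CommitNotifier {
    private final ExecutorService pool = Executors.newFixedThreadPool(2)

    // Called from the commit path: returns immediately, logging runs async.
    void notifyCommitted(CommitEvent event) {
        pool.submit { logCommit(event) }
    }

    private void logCommit(CommitEvent event) {
        // Stand-in for the real commit log write.
        println "commit log: ${event.compositionUid} by ${event.committer}"
    }

    void shutdown() { pool.shutdown() }
}

// Usage: after persisting a composition.
def notifier = new CommitNotifier()
notifier.notifyCommitted(new CommitEvent(compositionUid: '8849182c::atomik::1', committer: 'dr.house'))
notifier.shutdown()
```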
In the middle I'm playing with an alternative #openEHR API over TCP that allows higher throughput than HTTP 1.1, doing some tests with #protobuf and other encoding mechanisms.
The goal would be to: 1. reduce #openEHR payload size, 2. use a lower-layer transport protocol, and, with the combination of both, reach better throughput than HTTP+JSON. Benchmarks will be publicly available.
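On the transport side, a toy sketch of length-prefixed framing over a raw TCP socket, assuming a listener on localhost:9090 (a made-up endpoint); the payload bytes stand in for a protobuf-encoded composition, since the actual encoding experiments aren't published yet.

```groovy
// Length-prefixed framing: 4-byte size header + payload, no HTTP overhead.
// 'localhost:9090' and the payload content are assumptions for the sketch.
byte[] payload = 'composition-bytes'.bytes   // would be protobuf output in the experiment

new Socket('localhost', 9090).withCloseable { socket ->
    def out = new DataOutputStream(socket.outputStream)
    out.writeInt(payload.length)             // receiver reads the size first...
    out.write(payload)                       // ...then exactly that many bytes
    out.flush()
}
```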
I'll write papers about these experiments to document my findings, and hopefully find somewhere to publish/present the work.
BTW if you are interested in these topics, have some free time and Java skills, and want to learn and help, drop me a line!
There are some #openEHR marketing materials circulating that state "openEHR is a clinical data persistence standard", which is incorrect, and people are getting and forwarding the wrong message.
First, the marketing material mixes up the #openEHR specification with a specific implementation (won't name names). Second, the openEHR specifications don't even mention how to persist clinical data.
In terms of data, #openEHR allows/enables long-term clinical data management, but lets vendors implement data persistence in any way, paradigm or technology that fits their needs.
Healthcare information has many zoom levels, from microcellular, tissue, organ or DNA data, to international metrics and statistics, and everything in between. When working on healthcare data platforms it's important to know at which level(s) you are working.
That will determine how you design your components, repositories, rules, APIs, etc., and how your processes move between different zoom levels, for instance when aggregating data.
The data zoom level will also determine how data is exchanged and processed. Understanding that will lead you to a better platform architecture, while not understanding at which zoom level you are working can lead to a messy software architecture.
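A toy example of what crossing zoom levels means in code, with made-up numbers: individual patient readings (micro level) are aggregated into per-country averages (macro level), and the component doing it has to understand both levels.

```groovy
// Made-up patient-level readings (micro zoom level).
def readings = [
    [patientId: 'p1', country: 'UY', systolic: 140],
    [patientId: 'p2', country: 'UY', systolic: 120],
    [patientId: 'p3', country: 'DE', systolic: 130]
]

// Aggregation changes the zoom level: individual readings become a statistic.
def avgByCountry = readings
    .groupBy { it.country }
    .collectEntries { country, rows -> [country, rows*.systolic.sum() / rows.size()] }

println avgByCountry   // [UY:130, DE:130]
```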
I remembered the publication "Estándares e interoperabilidad en salud electrónica: Requisitos para una gestión sanitaria efectiva y eficiente" by the United Nations Economic Commission for Latin America and the Caribbean (CEPAL), where I mentioned #openEHR back in 2011.
I was a co-author together with Selene Indarte, at that time president of SUEIIDISS (HL7 Uruguay); she wrote from the healthcare management point of view, and yours truly from the standards and interoperability one.
I think the publication was very important at the time, and for me, being given the opportunity by CEPAL to write it with so few years of experience in the field, but with a path already traveled, was an honor. Here is the publication: repositorio.cepal.org/handle/11362/3…
In the last couple of weeks I've been studying and testing the #openEHR demographic model, which has a lot of potential, though it needs some improvements.
First, it needs more flexibility to specify roles, by making the identities attribute optional. Second, there is an inconsistency in the languages attribute, which is DV_TEXT; a better option would be CODE_PHRASE.
Also, the #openEHR demographic model needs more support from modeling tools; we actually need demographic OPTs to be able to test conformance with the openEHR specs.
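To make the DV_TEXT vs. CODE_PHRASE point concrete, a simplified sketch; these classes are plain stand-ins for the RM types, not the actual openEHR definitions.

```groovy
// Simplified stand-ins for the openEHR RM data types.
class DvText { String value }              // free text, e.g. 'Spanish'
class CodePhrase {                         // coded term, e.g. ISO 639-1 'es'
    String terminologyId
    String codeString
}

class Actor {
    List<DvText> languages                 // current spec: plain text, hard to query consistently
    // List<CodePhrase> languages          // suggested: coded against a terminology
}
```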