HIStalk Interviews Walter Stewart, CEO, Medcurio
Walter “Buzz” Stewart, PhD, MPH, is co-founder and CEO of Medcurio.
Tell me about yourself and the company.
I have focused my career on working with data, from epidemiology training to leading research centers and startups. That focus defined my path, both in education, where I earned a PhD in epidemiology and was a professor at Hopkins for a while, and in my first startup, which was data-based.
I had a 16-year journey through two health systems, where I founded and ran research centers. I realized from the problems that I was experiencing that it was going to be difficult for AI and automation to work at scale.
I ended up where I never would have predicted, which is in the EHR integration space. I launched Medcurio as a step toward fundamentally solving the EHR integration problems that we face in healthcare. Namely, that it’s difficult to get access to all the real-time data that you need to drive predictive models or other kinds of processes.
How does your product make it easier to get access to real-time EHR data compared to tools that the vendor themselves might offer?
We have used all of those tools. I used them for the better part of 10 to 12 years trying to develop real-time predictive models, first at Geisinger and then at Sutter. Getting real-time applications into production took months, and even small changes took weeks.
It took weeks to more than a month to make changes based on user feedback. It was difficult to maintain user interest with that kind of turnaround time. The application broke from schema changes, which was disruptive and unpredictable. Other downstream functions made it impossible to do things in real time, and we didn’t see that bottleneck going away.
We made up a starter list of the problems we wanted to solve. We built a no-code API platform that installs in less than a day in a health system’s on-prem environment. It erases those problems.
An analyst can log in and build APIs to access any data they want without writing a line of code. They can make changes to those APIs in minutes to hours. Other features give health systems the kind of control that I wish I’d had, and that I’m sure my colleagues wish they’d had, when I was journeying through Sutter and Geisinger.
What competitive or clinical advantages can health systems gain from using real-time data?
This is an important point. It took us a while to recognize that we had been working on real-time use of data for 10 years, so we just assumed that the rest of the market had the same passion. When we launched Medcurio and built our foundation product in mid-2020, we found that we were talking about things in a way that only the top 5% or 10% of health systems were thinking about.
When I think about using data in real time, I go to a couple of areas that are becoming prominent in the era of AI. One is predicting events, which can be useful in many ways. We often think of it for predicting risky events, such as getting to a heart failure patient before they end up in the hospital. Predictive models can be valuable for that, for inpatient infections, or for a host of other things. It’s a powerful area where AI could have profound influence.
But I think probably the more important areas are in workflow automation, whether that’s back-office workflow automation, or automating a whole process. If you take something like prior authorization, you have snippets of automation, with manual work in between those snippets. The power of being able to move any electronic health record data in real time is that you can put the whole thing together with a set of APIs that power each step in a process and hand off from one step to the next.
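To make that hand-off idea concrete, here is a minimal sketch of how a set of real-time data APIs could be chained into an end-to-end prior authorization workflow. Everything in it is hypothetical: the gateway URL, endpoint names, and payload fields are illustrative assumptions, not Medcurio’s actual interface.

```python
# A sketch of chaining real-time EHR data APIs into a prior-authorization
# workflow. All endpoint names and payload fields are hypothetical.
import requests

BASE_URL = "https://ehr-gateway.example.org/api"  # hypothetical on-prem gateway

def run_prior_auth(patient_id: str, order_id: str) -> dict:
    """Each step pulls the real-time data it needs, then hands its output
    to the next step, replacing the manual work between automation snippets."""
    # Step 1: pull current coverage and demographics for the patient.
    coverage = requests.get(f"{BASE_URL}/coverage", params={"patient": patient_id}).json()

    # Step 2: pull the clinical documentation tied to the order.
    clinicals = requests.get(f"{BASE_URL}/order-clinicals", params={"order": order_id}).json()

    # Step 3: assemble and submit the authorization request to the payer.
    submission = {"coverage": coverage, "clinicals": clinicals, "order": order_id}
    determination = requests.post(f"{BASE_URL}/payer/prior-auth", json=submission).json()

    # Step 4: write the determination back so the next workflow step can act on it.
    requests.post(f"{BASE_URL}/orders/{order_id}/auth-status", json=determination)
    return determination
```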
How do data latency and completeness problems potentially limit the innovation or implementation of AI solutions, especially agent-driven technologies?
I would list three things. Access to real-time EHR data is limited, latency reduces ROI, and slow iteration impedes improvement.
A unique quality of AI is that, compared to the era before, it will continue to drive unending demand for data volume and data diversity. That will always expand, and if you can’t meet that need in real time, you will have to pull back what you’re trying to do with AI based on the data that you can get.
The second is that you may be able to get only 24-hour-old data from end-of-day downloads. For some workflows, that might work. But for workflows where a lot of ROI opportunity is at stake, most of that value has to be driven by being able to access all the data that you need in real time, without constraints.
I don’t care what automated solution you create, you are always going to iterate on it, making it better and identifying ways that it’s getting hung up. You can’t evolve an automated workflow if, after you identify the data you need, it takes months to get it because of infrastructure challenges.
How does your relationship with EHR vendors work when you become a layer between them and their customers, and how do you make sure that vendor changes don’t break something?
We were very aware of those challenges. We developed a technology that is not specific to healthcare and adapted it to the data model. Our technology can talk to any InterSystems IRIS database.
We install with the folks on the health system side. They mostly manage the install process, which takes two to four hours. They log into a GUI and can point-and-click to build, test, and then deploy APIs. That process takes anywhere from two to five days.
Our vision was that if I was in a health system, I wouldn’t have to wait in line forever to get something done to meet my demands or needs. This technology is designed for health systems to have control of their data.
What does the health system need to do to use your system in a no-code environment to create APIs that access EHR data?
In an Epic environment, we’ve had Epic analysts in their first year of training logging in and building APIs in an hour. It’s pretty seamless, because when you log in, you’re looking at things in a way that is just logically coherent.
Building an API involves two parts. Who do you want data on, and what data do you want on them? All of that can be done by a point-and-click process.
If you are building APIs for an application, let’s say a prior auth application that might involve a third-party vendor, that vendor just needs to know the API ID. A single endpoint is called for all APIs. That process is quite straightforward.
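As a rough illustration of that single-endpoint pattern, the call from a third-party application might look something like the sketch below. The URL, parameter names, and authentication scheme are assumptions for illustration; only the API ID concept comes from the interview.

```python
# Illustrative only: a third-party vendor calling a prebuilt no-code API.
# The single-endpoint pattern means the caller needs just the API ID;
# the hostname, request shape, and auth scheme here are assumed.
import requests

ENDPOINT = "https://medcurio.healthsystem.example/api/v1/execute"  # hypothetical
API_ID = "prior-auth-demographics-001"  # hypothetical ID assigned at deployment

def call_api(api_id: str, params: dict, token: str) -> dict:
    """Every deployed API is reached through the same endpoint; the API ID
    routes the request to the definition the analyst built point-and-click."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {token}"},
        json={"apiId": api_id, "parameters": params},
    )
    response.raise_for_status()
    return response.json()

# Example call (assumes a deployed API and a valid token):
# result = call_api(API_ID, {"mrn": "123456"}, token="...")
```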
How are clients typically using the data?
We have seen three categories. One is strategic management of real-time data access. We have a system that uses it at scale in that way. They have rules of the game for how they access data based on priorities, such as using the EHR vendor APIs, FHIR, or some other method. If using these methods will take more than four hours, they use our VennU data access platform instead because it is so straightforward and easy to manage.
Our platform allows assigning multiple role-based users to one API. That gives them quite a bit of flexibility around how one API can be used by different groups. For example, some might have access to personal identifiers and some might not, depending on their role.
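Here is a minimal sketch of that role-based pattern, with hypothetical role names and fields: one API’s output, filtered depending on whether the caller’s role is cleared to see personal identifiers.

```python
# One API definition, multiple roles: identifier fields are stripped for
# roles not cleared to see them. Role names and fields are illustrative.
IDENTIFIER_FIELDS = {"name", "mrn", "date_of_birth", "address"}

ROLE_PERMISSIONS = {
    "care_manager": {"sees_identifiers": True},
    "quality_analyst": {"sees_identifiers": False},  # de-identified view only
}

def filter_record(record: dict, role: str) -> dict:
    """Return the same API's output, shaped by the caller's role."""
    if ROLE_PERMISSIONS[role]["sees_identifiers"]:
        return record
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}
```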
We have seen homegrown uses, most commonly real-time display vehicles, whether for inpatient or ambulatory settings.
Some are using third-party solutions. Salesforce is a good example. One system had struggled for six months with data they couldn’t get, and they solved it with our technology in a couple of hours. They went from 10,000 to 1.5 million API calls per day in the 18 months after solving that single data access problem.
I think it varies depending upon where a system is on the spectrum of trying to automate or just observe its core intellectual assets.
How is the customer charged?
It’s an annual license based on health system revenue, with support fees. It is designed to motivate health systems to use it to the max.
How does the federal government’s emphasis on FHIR and APIs as an interoperability solution affect your business?
FHIR certainly is one path to accessing data, today and for the future. But there will still be real constraints, because FHIR exposes only a narrow sliver of the data; there are a lot of fields that you can’t access through it.
Our roadmap calls for developing what we call a FHIR facade for our platform. Because we have flexibility in how we output the data that’s being requested via an API, we can output it in different formats. A FHIR facade feature will allow users to get data in a FHIR-compatible format that could be interchangeable. That would provide greater scale, both within and across systems.
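As a sketch of what a facade like that could do, the snippet below re-shapes a flat query result into a FHIR R4 Patient resource. The input field names are hypothetical; the output structure follows the published FHIR Patient resource.

```python
# A rough sketch of a "FHIR facade" output layer: take the flat record an
# API returns and re-emit it as a FHIR-shaped resource. Input field names
# are assumed; the output follows the FHIR R4 Patient structure.
def to_fhir_patient(record: dict) -> dict:
    """Re-shape a flat query result into a FHIR Patient resource."""
    return {
        "resourceType": "Patient",
        "identifier": [{"system": "urn:mrn", "value": record["mrn"]}],
        "name": [{"family": record["last_name"], "given": [record["first_name"]]}],
        "birthDate": record["date_of_birth"],  # FHIR expects YYYY-MM-DD
    }
```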
Do EHR vendor decisions or government mandates about data access have any impact on your business?
We don’t touch the data. Our security review is really simple. We install our technology. We coach on how to use our technology. The health system controls it. Our technology gives them access to 100% of their electronic health record data.
Our product talks directly to the InterSystems IRIS database. A user who is building an API can visualize things in a way that allows them to easily build the requirements for who they want data on and what data fields they want to access. Once the API is in production, it can be called by any group or application that the security officer designates as allowed.
The power of our platform is that it solves what I would consider to be my greatest challenge when I was a leader of these research centers, which was that I just couldn’t get access to the data that I wanted. That was the first problem we wanted to solve, and that’s why we felt that the best path was inventing this no-code approach to getting access to any data field.
What are the most important parts of the company’s strategy over the next few years?
We are getting to the end of our full roadmap for the VennU data access platform. We have this powerful platform that is an enabler for automation, so we will move from platform development to partnerships with solution vendors.