I was intrigued by Mr. H’s mention last week of the Mass General Brigham FaceAge AI tool that can estimate age from facial photos. Researchers found that patients with cancer appeared older than their stated age. The older they looked, the lower their odds of survival.
Physicians have historically relied on visual assessment to gauge how a patient might fare; the tool formalizes that instinct by extracting features from a facial photo and using them to estimate the person’s biological age. An article describing the tool was recently published in The Lancet Digital Health if you’re interested in all the details.
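For readers who like to peek under the hood, here is a rough sketch of how photo-to-age estimators are typically structured. It is not the FaceAge model itself, just an illustration: a pretrained image backbone with its classification layer swapped for a single regression output, which would then need to be fine-tuned on faces with known ages before its predictions mean anything.

```python
# Illustrative sketch only; not the actual FaceAge pipeline.
# A pretrained CNN backbone has its classifier replaced with a one-unit
# regression head that, after fine-tuning on labeled face photos, would
# output an estimated age in years. The head here is untrained.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # single "age" output
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),  # real pipelines crop to the detected face first
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def estimate_age(photo_path: str) -> float:
    """Return a (placeholder) age estimate for the face in the photo."""
    image = Image.open(photo_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: [1, 3, 224, 224]
    with torch.no_grad():
        return backbone(batch).item()

# Example usage: estimate_age("selfie.jpg")
```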
This item, like many things that Mr. H mentions, got me thinking. I found a couple of sites that host biological age calculators and completed the relevant surveys to see where I would land. Some were more detailed, asking for various lab values. Fortunately, I had results for all of the requested labs and even some of the exercise performance measures that were included on one of the questionnaires. I also found a tool that is very similar to FaceAge, although not the exact one used in the study, and snapped my selfie.
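Conceptually, these survey-based calculators adjust your chronological age up or down based on risk factors. For the curious, here is a purely hypothetical sketch of that idea. The inputs and weights are made up for illustration and do not come from any of the tools I tried.

```python
# Purely illustrative: a toy "biological age" score built from a few
# self-reported inputs. The coefficients are hypothetical placeholders,
# not the validated weights used by published models such as PhenoAge.
def biological_age(chronological_age: float,
                   systolic_bp: float,
                   fasting_glucose_mg_dl: float,
                   weekly_exercise_minutes: float) -> float:
    """Return a toy biological age estimate from survey-style inputs."""
    score = chronological_age
    score += 0.05 * (systolic_bp - 120)                # penalty for elevated blood pressure
    score += 0.03 * (fasting_glucose_mg_dl - 90)       # penalty for elevated fasting glucose
    score -= 0.01 * min(weekly_exercise_minutes, 300)  # credit for regular exercise, capped
    return round(score, 1)

# Example: a 50-year-old with good numbers scores a few years "younger."
print(biological_age(50, 118, 88, 250))  # 47.3 with these toy weights
```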
The survey-based calculators estimated my biological age as anywhere from 4.6 to 9 years below my actual age. The facial photo tool thought that I was more than 10 years younger. I suppose my liberal use of sunscreen and hats is paying off, since my facial wrinkles were scored as 2 out of a possible 100 points. I also did well on the “undereye” measure, although I admit that my photo was taken when I was well rested. I’m sure it would not have scored as well had it been taken after a shift in the emergency department.
I don’t look at a lot of high-resolution pictures of my face, so when my score report came back with a full-screen image of my face right in front of me, I was somewhat surprised to see that artifacts from years of wearing an N95 mask while seeing patients are still visible. I’m guessing that when I look in the mirror, my brain processes that out, so it was a little startling.
I’d be interested to see how I would score on a medical-grade tool such as the one mentioned in the article. Although it was a fun exercise to complete the different surveys and see where I stand, none of the recommendations provided alongside the results of any of the tools were different from what I usually hear during my primary care preventive visits: keep moving, eat as healthy as possible, and watch out for the rogue genes you’re carrying around.
I would be interested to hear about your experiences with similar tools and whether they have motivated you to do anything different from a lifestyle perspective.
Mr. H also recently mentioned efforts by NASA and Google to develop a proof-of-concept AI-powered “Crew Medical Officer Digital Assistant” (CMO-DA) to support astronauts on long space missions. As a Star Trek devotee, I couldn’t help but think of the Emergency Medical Hologram from “Star Trek: Voyager.”
The project uses Google Cloud’s Vertex AI environment, and the assistant has been run through three scenarios: an ankle injury, flank pain, and ear pain. The TechCrunch article noted that “a trio of physicians, one being an astronaut, graded the assistant’s performance across the initial evaluation, history-taking, clinical reasoning, and treatment.” A particular astronaut/physician came to mind when I read that, and if there’s a hologram to be created, I’m sure other space fangirls out there would find him an acceptable model.
The reviewers found the model to have a 74% likelihood of correctness for the flank pain scenario, 80% for ear pain, and 88% for the ankle injury. I’m not sure what the aggregate numbers are for human physicians, but I’m fairly certain I’ve had a higher accuracy rate for those conditions since they’re common in the urgent and emergency care space. However, NASA notes that it hopes to tune the model to be “situationally aware” of space-specific elements, including microgravity. I would hazard a guess that most physicians, except for those with aerospace certifications, don’t have a lot of knowledge of that or other extraterrestrial factors.
The article links out to a NASA slide deck. Since I do love a good NASA presentation, I had to check it out. I was excited to see that there is a set of “NASA Trustworthy AI Principles” that address some key factors that are sometimes lacking in the systems I encounter. The principles address accountable management of AI systems, privacy, safety, and the importance of having humans in the loop to “monitor and guide machine learning processes.” They note that “AI system risk tradeoffs must be considered when determining benefit of use.” I see a lot of organizations choosing AI solutions just for the sake of “doing AI” without really considering the impacts of those systems, so that one in particular resonated with me.
Another principle that resonated with this former bioethics student was that of beneficence, specifically that trustworthy AI should be inclusive, advance equity, and protect privacy while minimizing biases and supporting “the wellbeing of the environment and persons present and future.” Prevention of bias and discrimination, prevention of covert manipulation, and scientific rigor are also addressed in the principles, as is the idea that there must be transparency in “design, development, deployment, and functioning, especially regarding personal data use.” I wish more organizations were willing to adopt a set of AI principles like this, but given the commercial nature of most AI efforts, I can understand why these ideals might be pushed to the side.
In addition to the CMO-DA project, three other projects are in the works: a Clinical Finding Form (CliFF), Mission Control Central (MCC) Flight Surgeon Emergency Procedures, and a collaboration with UpToDate. I love a catchy acronym and “CliFF” certainly fits the bill.
I recently finished the novel “Atmosphere” by Taylor Jenkins Reid. If you are curious about the emergency procedures that a mission control flight surgeon might need to have at their fingertips, the book does not disappoint.
The deck goes on to discuss the evolution of large language models, retrieval-augmented generation, and prompt engineering within the context of the greater NASA project. It specifically notes that any solution must run on premises, a requirement that makes particular sense given the communications blackouts that are inherent in space travel.
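For anyone who hasn’t seen retrieval-augmented generation up close, the pattern is straightforward: retrieve the most relevant reference passages for a question, then prepend them to the prompt that goes to the language model. Here is a toy, locally runnable sketch of that pattern, not NASA’s implementation; the passages are stand-ins, and the call to a local model at the end is hypothetical.

```python
# Toy retrieval-augmented generation (RAG) sketch, not NASA's implementation.
# Everything runs locally, in the spirit of the on-premises requirement:
# a TF-IDF retriever selects the most relevant reference passages, which
# are prepended to the crew member's question before it goes to an LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in reference passages; a real system would index full documents.
passages = [
    "Ankle inversion injury: assess per Ottawa ankle rules, immobilize, ice, elevate.",
    "Flank pain with hematuria suggests renal colic; hydration and analgesia are first steps.",
    "Ear pain in microgravity: cephalad fluid shifts can worsen eustachian tube dysfunction.",
]

vectorizer = TfidfVectorizer().fit(passages)
passage_vectors = vectorizer.transform(passages)

def build_prompt(question: str, top_k: int = 2) -> str:
    """Retrieve the top_k most similar passages and assemble an augmented prompt."""
    scores = cosine_similarity(vectorizer.transform([question]), passage_vectors)[0]
    best = scores.argsort()[::-1][:top_k]
    context = "\n".join(passages[i] for i in best)
    return f"Reference material:\n{context}\n\nCrew question: {question}\nAnswer:"

prompt = build_prompt("Crew member reports right flank pain radiating to the groin.")
print(prompt)
# The assembled prompt would then be passed to a locally hosted model,
# e.g. answer = local_llm.generate(prompt)  # hypothetical local model call
```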
There are more details in the deck about the specific AI approach and the scenarios. I particularly enjoyed learning about “abdominal palpation in microgravity” and the need to make sure that the patient is secured to the examination table to prevent floating away. I also learned that “due to the microgravity environment, the patient’s abdominal contents may shift,” which got me wondering exactly how many organs are subject to shifting, since many of them are fairly well anchored by blood vessels and other not-so-stretchy structures.
The deck listed the three physician personas who scored the scenarios. Based on physician specialty, it’s likely that my favorite astronaut wasn’t one of them, but I was happy to see that an obstetrician/gynecologist was included.
Apparently there was a live demonstration of the CMO-DA at the meeting for which the presentation deck was created, so if anyone has connections at NASA, I know of at least one clinical informaticist who would love to see it. I’ll definitely be setting up some online alerts for some of these topics and following closely as the tools evolve.
Did you ever dream of being an astronaut, and what ultimately sidelined you from that career? Leave a comment or email me.
Email Dr. Jayne.