By Aaron Earls
It is “appointed for people to die once,” according to Hebrews 9:27, but Google thinks it may be able to tell you when that appointment is.
Researchers from Stanford, the University of Chicago, and UC San Francisco worked with Google to develop an artificial intelligence program that they claim can predict a hospital patient’s likelihood of survival better than existing methods. The findings appear in a new paper published in the journal Nature.
Trial runs at two U.S. hospitals found the AI program predicted patient outcomes more accurately than traditional means. AI was able to predict a patient’s length of stay in the hospital with 86 percent accuracy, compared to 76 percent accuracy for traditional predictive models. AI also outperformed traditional methods when predicting a patient’s chances of dying at the hospital (95 percent to 86 percent) and the possibility of readmission after being discharged (77 percent to 70 percent).
The models run by the AI “outperformed traditional, clinically used predictive models in all cases,” said Google’s Alvin Rajkomar, according to the New York Post.
The AI had more than 46 billion pieces of anonymized patient data to evaluate from the 216,221 adults at participating hospitals during the trial run.
This ability to access and evaluate extensive information—much of it previously buried in PDFs or on handwritten notes—is what gave the AI the predictive edge over traditional means, according to the researchers.
Nigam Shah, an associate professor at Stanford and co-author of Google’s research paper, told Bloomberg News that as much as 80 percent of the time spent on today’s predictive models is making the data presentable.
The Google AI avoids this prep work. “You can throw in the kitchen sink and not have to worry about it,” Shah told Bloomberg.
Researchers believe AI could help medical professionals make quicker diagnoses.
But others have different kinds of worries.
In an op-ed at Harvard Business Review, Andrew Burt, chief privacy officer at tech company Immuta, and Samuel Volchenboum, associate professor of pediatrics at the University of Chicago, shared their concerns about private personal data being controlled almost exclusively by large tech giants.
“Governments must ensure that the massive amounts of data these new methods require don’t become the province of only a few companies, as has occurred in the data-intensive worlds of online advertising and credit scoring,” they write.
But privacy is not Burt and Volchenboum’s only worry. They also point out that many of the connections made by AI are incomprehensible to human doctors and researchers.
While acknowledging how accurate the Google AI was and how beneficial it could prove to be, they observed that its “predictions, however, were based on patterns in the data that the researchers could not fully explain.”
There are also ethical concerns as to how this technology will guide choices doctors face. “Decisions about insurance coverage for patients seeking certain medical treatment, or hospitals trying to allocate scarce beds for patients are obvious examples of potentially problematic scenarios where such AI predictions could come into play,” according to Business Insider.
AARON EARLS (@WardrobeDoor) is online editor of Facts & Trends.