New Learning Model Claims to Reduce Bias, Improve Accuracy
August 30, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Promises, promises. We have seen developers try and fail to eliminate bias in machine learning models before. Now ScienceDaily reports, “New Model Reduces Bias and Enhances Trust in AI Decision-Making and Knowledge Organization.” Will this effort by University of Waterloo researchers be the first to succeed? The team worked in a field where AI bias and inaccuracy can be most devastating: healthcare. The write-up tells us:
“Hospital staff and medical professionals rely on datasets containing thousands of medical records and complex computer algorithms to make critical decisions about patient care. Machine learning is used to sort the data, which saves time. However, specific patient groups with rare symptomatic patterns may go undetected, and mislabeled patients and anomalies could impact diagnostic outcomes. This inherent bias and pattern entanglement leads to misdiagnoses and inequitable healthcare outcomes for specific patient groups. Thanks to new research led by Dr. Andrew Wong, a distinguished professor emeritus of systems design engineering at Waterloo, an innovative model aims to eliminate these barriers by untangling complex patterns from data to relate them to specific underlying causes unaffected by anomalies and mislabeled instances. It can enhance trust and reliability in Explainable Artificial Intelligence (XAI).”
Wong states his team was able to disentangle the statistics within a complex set of medical-results data, leading to the development of a new XAI model they call Pattern Discovery and Disentanglement (PDD). The post continues:
“The PDD model has revolutionized pattern discovery. Various case studies have showcased PDD, demonstrating an ability to predict patients’ medical results based on their clinical records. The PDD system can also discover new and rare patterns in datasets. This allows researchers and practitioners alike to detect mislabels or anomalies in machine learning.”
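The write-up does not explain how PDD actually works, but the claim that it can "detect mislabels or anomalies" has a familiar shape. As a purely illustrative sketch (not the PDD algorithm, and not code from the Waterloo team), here is one simple way mislabeled records can be surfaced: flag any record whose assigned label disagrees with the labels of its nearest neighbors. All names and the toy data below are invented for illustration.

```python
# Illustrative sketch only -- NOT the PDD algorithm. Flags records whose
# given label disagrees with the majority label of their nearest neighbors,
# one simple way "mislabeled instances" can be surfaced in a dataset.

def nearest_neighbor_labels(data, labels, idx, k=3):
    """Return the labels of the k records closest to record idx."""
    target = data[idx]
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(target, data[j])), j)
        for j in range(len(data)) if j != idx
    )
    return [labels[j] for _, j in dists[:k]]

def flag_suspect_labels(data, labels, k=3):
    """Indices whose label differs from the majority of their k neighbors."""
    suspects = []
    for i in range(len(data)):
        neigh = nearest_neighbor_labels(data, labels, i, k)
        majority = max(set(neigh), key=neigh.count)
        if labels[i] != majority:
            suspects.append(i)
    return suspects

# Toy clinical-style records: (feature1, feature2) with a risk label.
data = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9),
        (5.0, 5.1), (5.1, 4.9), (4.9, 5.0),
        (1.0, 0.95)]
labels = ["low", "low", "low", "high", "high", "high",
          "high"]  # the last record sits in the "low" cluster -- suspicious

print(flag_suspect_labels(data, labels))  # → [6]
```

Real systems use far more sophisticated machinery than neighbor voting, but the sketch conveys the idea: a model of the data's structure can cast doubt on individual labels, which is the capability the article attributes to PDD.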
If accurate, PDD could lead to more thorough algorithms that avoid hasty conclusions. Less bias and fewer mistakes. Can this ability be extrapolated to other fields, like law enforcement, social services, and mortgage decisions? Assurances are easy.
Cynthia Murrell, August 30, 2023