The Home of Dinobabies Knows How to Eliminate AI Bias
August 26, 2022
It is common knowledge in tech and the news media that AI training datasets are flawed. These datasets are unfortunately prone to teaching AI how to be “racist” and “sexist.” AI systems are computer programs, so they are not intentionally biased; the datasets that teach them are, because they underrepresent or mischaracterize women and dark-skinned people. The obvious fix is to build better datasets, but large hoards of unpolluted data are hard to find. MIT News describes a possible solution in the article “A Technique To Improve Both Fairness And Accuracy In Artificial Intelligence.”
Researchers already know that AI models make mistakes, so they use selective regression to estimate a confidence level for each prediction. If the confidence is too low, the model rejects the prediction rather than guessing. Researchers at MIT and the MIT-IBM Watson AI Lab discovered what we already know: even with selective regression, models remain less accurate for women and ethnic minorities, who are underrepresented in the data. (A rough code sketch of the idea appears after the quotation below.) The MIT researchers designed two algorithms to fix the bias:
“One algorithm guarantees that the features the model uses to make predictions contain all information about the sensitive attributes in the dataset, such as race and sex, that is relevant to the target variable of interest. Sensitive attributes are features that may not be used for decisions, often due to laws or organizational policies. The second algorithm employs a calibration technique to ensure the model makes the same prediction for an input, regardless of whether any sensitive attributes are added to that input.”
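For readers who want the nuts and bolts, here is a minimal sketch of selective regression in Python, together with the per-group check the researchers argue is necessary. Everything in it is an assumption made for illustration: the synthetic data, the forest-variance confidence score, and the 30 percent abstention rate are placeholders, not the team’s implementation.

```python
# Minimal sketch of selective regression: abstain on low-confidence inputs,
# then check error per group, not just overall. Illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic data: a binary "group" column stands in for a sensitive attribute.
# Group 1 is deliberately underrepresented and noisier.
n_major, n_minor = 900, 100
X = rng.normal(size=(n_major + n_minor, 3))
group = np.array([0] * n_major + [1] * n_minor)
noise = np.where(group == 0, 0.1, 0.5)  # minority rows are harder to fit
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=noise)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Confidence proxy: low variance across the forest's trees = high confidence.
per_tree = np.stack([tree.predict(X) for tree in model.estimators_])
uncertainty = per_tree.std(axis=0)

# Selective regression: abstain on the least confident 30% of inputs.
threshold = np.quantile(uncertainty, 0.7)
accepted = uncertainty <= threshold

# The catch the MIT researchers highlight: measure coverage and error
# per group among the accepted predictions.
pred = model.predict(X)
for g in (0, 1):
    mask = accepted & (group == g)
    coverage = mask.sum() / (group == g).sum()
    mae = np.abs(pred[mask] - y[mask]).mean()
    print(f"group {g}: coverage={coverage:.2f}, MAE={mae:.3f}")
```

Run on data like this, the per-group numbers typically diverge: the underrepresented group gets rejected more often and still shows higher error, which is exactly the disparity the two algorithms are meant to shrink.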
In the researchers’ test cases, both algorithms reduced these performance disparities.
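To make the second quoted algorithm’s goal concrete, here is a hedged sketch that tests the invariance property it targets: a model’s prediction should not move when a sensitive attribute is added to the input. This checks the property rather than implementing the paper’s calibration technique; the ridge models and synthetic labels are placeholders.

```python
# Test the invariance goal: do predictions shift when a sensitive
# attribute is appended to the input? Checks the property only; this is
# not the calibration technique from the paper.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

n = 1000
X = rng.normal(size=(n, 3))
s = rng.integers(0, 2, size=n)  # stand-in sensitive attribute
# The label leaks the attribute, so a naive model will learn to use it.
y = X @ np.array([1.0, -2.0, 0.5]) + 0.8 * s + rng.normal(scale=0.1, size=n)

plain = Ridge().fit(X, y)                            # attribute withheld
augmented = Ridge().fit(np.column_stack([X, s]), y)  # attribute appended

gap = np.abs(plain.predict(X) - augmented.predict(np.column_stack([X, s])))
print(f"max prediction shift when the attribute is added: {gap.max():.3f}")
print(f"mean shift: {gap.mean():.3f}")
# A model calibrated the way the quote describes should drive these
# shifts toward zero.
```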
It is too bad that datasets are biased; they do not paint an accurate picture of people, and researchers are left to patch the disparities after the fact. It is even more unfortunate that clean datasets are hard to locate and that the open Internet cannot simply be scraped for replacements, because of all the junk created by trolls.
Whitney Grace, August 26, 2022