My question about bias in machine learning is: why does it matter? My understanding of bias is that it is something that leads to a distorted perception of reality. But if we are able to build machine learning algorithms that make accurate predictions, why would we need to worry about bias at all? The predictions are accurate.

Say you had a machine learning algorithm that priced health insurance from data on a person's health, and it determined that men should be charged more. Would we call that bias? Of course not: insurance companies are already aware of men's increased risk-taking behavior and price their products accordingly. (A small synthetic sketch of this scenario follows below.)

So the whole notion of "preventing bias in machine learning" sounds to me a lot like preventing machine learning from making accurate, data-driven determinations that are uncomfortable for people to accept because they undermine widely held political narratives.
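To make the scenario concrete, here is a minimal synthetic sketch (it assumes numpy and scikit-learn are available; the data and all the numbers are made up for illustration): a model that is accurate by the usual metrics can still produce a systematic price gap between groups, which is the quantity the fairness literature typically measures as a group disparity.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic data: sex (0 = female, 1 = male) correlates with risky
# behavior, and risky behavior drives the true expected claim cost.
sex = rng.integers(0, 2, n)
risk = rng.normal(loc=0.5 * sex, scale=1.0)        # men riskier on average
cost = 1_000 + 400 * risk + rng.normal(0, 100, n)  # true claim cost

X = np.column_stack([sex, risk])
model = LinearRegression().fit(X, cost)
pred = model.predict(X)

# The model is accurate by a standard metric...
print("R^2:", round(model.score(X, cost), 3))

# ...and it still reproduces a group-level price gap.
gap = pred[sex == 1].mean() - pred[sex == 0].mean()
print("mean predicted price gap (men - women):", round(gap, 2))
```

Running this prints a high R^2 alongside a nonzero average price gap between men and women, which is exactly the tension I am asking about: the gap reflects a real correlation in the (synthetic) data, yet it is the kind of output that "bias prevention" seems intended to suppress.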
Link to video: 1. Behavioral Economics - LabXchange