Bias, Bias, Bias

After some years of steady build-up, the hype around machine learning and AI seems to be back in full swing: Vicarious, a US startup focused on developing human-level AI and backed by Elon Musk, Mark Zuckerberg and Jeff Bezos, has raised more than $50m of venture capital so far without a product on the market (its list of investors reads like a Silicon Valley hall of fame). “Vicarious is bringing us all closer to a future where computers perceive, imagine, and reason just like humans”, says Peter Thiel, quoted on the company’s website. At the same time, acquisitions of AI-based startups are in the news with increasing frequency. Just a few weeks ago, Twitter bought the UK-based startup Magic Pony, a company that had existed for only some 18 months, for approximately $150m. With all that buzz and excitement about a new era of technological advances, and as their real-life impact becomes graspable, a critical and sociological discussion of the pitfalls is ever more pressing. With great algorithmic power comes great responsibility: how can machine learning and AI deal with biases?

The nature of human bias

Whether we like to acknowledge it or not, we all have our biases. You might be more likely to tip a waitress than a waiter, to take a lighthearted example. As we grow up, our socio-economic surroundings, our friends and family, our teachers and classmates help us conceptualize our worldview, our beliefs and our understanding of things. While a truly unbiased view of the world and its processes might simply not be possible given our human nature, it is only a constant wrangle with our biases that helps us eliminate most of them from our decision-making.

In his recent book “The Sharing Economy”, Arun Sundararajan, Professor and the Robert L. and Dale Atkins Rosen Faculty Fellow at New York University’s Leonard N. Stern School of Business, argues that our prevalent biases have gained leverage with the rise of the sharing economy. As more and more people are empowered to participate as “small business owners” and provide their services on platforms like Uber, Airbnb or Etsy, reviews are not only a differentiator but can be a prerequisite for business success. Uber, for instance, has implemented a rating threshold: once a driver’s review average falls below it, the driver becomes ineligible to offer services on the platform. The visibility and influence of individual reviews on sharing-economy platforms gives biases larger leverage than before. A negative review might be influenced by a host of (hidden) factors, not all of them associated with the actual service or product experience.

Machine learning algorithms as detectives for human bias?

In theory, algorithms could help detect these kinds of biases. Compliance with non-discrimination rules is an important topic, and companies regularly undergo audits of their processes and hiring practices. Interestingly though, biases, whether conscious or unconscious, are somehow tolerated in other aspects of business operations (such as reviews for participants in the sharing economy). While this might also be a topic for legislative regulators, companies should have the intrinsic motivation to develop smart algorithms that help detect biases in reviews and recommendations, since those reviews are the backbone of their value proposition.
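As a minimal sketch of what such a detection routine could look like, the snippet below compares average ratings across two provider groups with a simple significance test. The ratings, the grouping and the thresholds are invented for illustration; a production system would of course control for confounders such as service category or price level.

```python
# Minimal sketch of a review-bias check. The ratings and the grouping
# (e.g., by perceived gender of the provider) are hypothetical.
from scipy import stats

# Hypothetical star ratings (1-5) for two groups of service providers
ratings_group_a = [5, 4, 5, 3, 4, 5, 4, 4, 5, 3]
ratings_group_b = [4, 3, 4, 3, 3, 4, 2, 3, 4, 3]

# Welch's t-test: is the gap in mean ratings larger than chance would suggest?
t_stat, p_value = stats.ttest_ind(ratings_group_a, ratings_group_b,
                                  equal_var=False)

mean_a = sum(ratings_group_a) / len(ratings_group_a)
mean_b = sum(ratings_group_b) / len(ratings_group_b)
print(f"mean A = {mean_a:.2f}, mean B = {mean_b:.2f}, p = {p_value:.3f}")

# A low p-value flags a systematic gap worth investigating; it does not by
# itself prove discrimination, since group membership may correlate with
# genuine differences in the service experience.
```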

[Figure: Search volume trend for machine learning]

Algorithms, it seems, could be a rather ‘objective’ mirror for business processes and interactions, spotting biases and allowing companies to develop countermeasures.

Machine learning algorithms as the embodiment of human bias?

At the same time, algorithms are, at least initially, always set up by human beings, so existing biases are likely to be reflected in code. Indeed, in areas like predictive policing, where software essentially points police towards the geographic areas where the probability of criminal activity is highest for a given timeframe, such biases have already been uncovered. In the US, for example, these software solutions appear to be biased against African-Americans. Why so? Any kind of machine learning algorithm is trained on historic data. If the average police officer, guided by experience and intuition, conducts more shifts in African-American neighborhoods, more incidents will be recorded there, and software trained to incorporate those statistical patterns will reflect, and reinforce, the very same biases. Similarly, the assessment of how likely prisoners are to commit another crime after release is also ethnically biased, according to ProPublica. So, if we are to turn to algorithms to assess biased processes, aren’t we actually squaring the problem?
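This feedback loop is easy to reproduce in a toy simulation. The sketch below assumes two neighborhoods with identical true crime rates but a biased patrol allocation; everything here (names, rates, the naive “model”) is invented and stands in for no real predictive-policing system.

```python
# Toy simulation of the patrol-data feedback loop. All names and rates are
# invented; this is an illustration, not a real predictive-policing system.
import random

random.seed(42)

TRUE_CRIME_RATE = {"north": 0.10, "south": 0.10}  # identical by construction
PATROL_SHARE = {"north": 0.25, "south": 0.75}     # biased patrol allocation
TOTAL_PATROLS = 10_000

# Crimes are only *recorded* where officers actually patrol, so the training
# data reflects patrol intensity, not just the underlying crime rate.
recorded = {n: 0 for n in TRUE_CRIME_RATE}
for neighborhood, share in PATROL_SHARE.items():
    for _ in range(int(TOTAL_PATROLS * share)):
        if random.random() < TRUE_CRIME_RATE[neighborhood]:
            recorded[neighborhood] += 1

# A naive "model" trained on this history allocates future patrols in
# proportion to recorded crime, reproducing the original allocation bias.
total = sum(recorded.values())
for neighborhood, count in recorded.items():
    print(f"{neighborhood}: recorded crimes = {count}, "
          f"predicted risk share = {count / total:.0%}")
```

Although both neighborhoods are equally “criminal” by construction, the recorded data assigns roughly three quarters of the predicted risk to the more heavily patrolled one.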

Can algorithms detect biases of algorithms?

As weird as it may sound, algorithms might, once again, be part of the solution, at least to some degree. Researchers from Carnegie Mellon University in the US have recently presented a paper that outlines a testing system for algorithms. This system is specifically designed to test the outcomes of algorithms against various input datasets, and it could point the way towards a more transparent, unbiased, non-discriminatory implementation of algorithms.
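To illustrate the general idea behind such black-box testing (this is not the Carnegie Mellon system itself, just a sketch of the principle), one can probe a model by flipping a single protected attribute while holding everything else fixed and measuring how often the decision changes. The model and feature layout below are hypothetical stand-ins.

```python
# Illustration of black-box bias testing: flip one protected attribute per
# record and count how often the decision changes. The model below is a
# deliberately biased, hypothetical stand-in.

def flip_influence(model, records, attribute="gender"):
    """Share of records whose decision changes when only `attribute` flips."""
    changed = 0
    for record in records:
        counterfactual = dict(record)
        counterfactual[attribute] = "F" if record[attribute] == "M" else "M"
        if model(record) != model(counterfactual):
            changed += 1
    return changed / len(records)

def biased_model(record):
    # Hypothetical black box that conditions on gender as well as income
    return ("approve"
            if record["income"] > 40_000 and record["gender"] == "M"
            else "reject")

records = [{"gender": g, "income": i}
           for g in ("M", "F")
           for i in (30_000, 50_000, 80_000)]

share = flip_influence(biased_model, records)
print(f"decisions influenced by gender alone: {share:.0%}")
```

A non-zero share is direct evidence that the protected attribute alone can change outcomes, which is exactly the kind of signal an auditing framework would surface.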

This might be of crucial importance as our world increasingly relies upon algorithmic power. Advancing the sociological debate and pushing for transparency of algorithms will be significant tasks if we want to unleash the full power of machine learning and AI. As our biases are inherent in our human nature, so are they in our algorithms. Being aware of them and finding the right framework (institutions!) to allow for check-ups and transparency should not be a burden, but an essential part of the exercise.
