A Practical Approach to Detecting and Correcting Bias in AI Systems [The New Stack]

The following piece on bias in AI was originally published in The New Stack by Sofus Macskássy, VP of Data Science at HackerRank. 


As companies look to bring artificial intelligence into the core of their business, calls for greater transparency into AI algorithms and accountability for the decisions they make are on the rise.

That makes sense: if people are going to rely on AI to make important decisions with real-world consequences, they need to trust it. But trust comes in many forms, which makes it difficult to pin down. First, AI needs to explain why it made a particular recommendation. That builds trust because people understand the reasoning. Deeper levels of trust come from knowing that a system is fair and unbiased. Demonstrating that is much harder.

This leaves companies in a tough spot when it comes to leveraging AI: they can either fly blind or fall behind. In 2018, Amazon — a clear frontrunner in AI — shut down its experimental AI recruiting tool after the team discovered major issues with bias in the system.

What’s needed is a more practical approach. Here’s what 15 years building AI and machine learning models at companies like Facebook and Branch, and now HackerRank, has taught me about detecting and correcting bias in AI systems.
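Detecting bias often begins with simple measurement. As a minimal sketch (using hypothetical data, not any method from the full article), one common starting point is to compare a model's positive-outcome rate across demographic groups, sometimes called the demographic parity difference:

```python
def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each group."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return rates

def parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical hiring-model outputs: 1 = "advance candidate", 0 = "reject"
preds  = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(selection_rates(preds, groups))   # group A: 0.75, group B: 0.25
print(parity_difference(preds, groups)) # 0.5
```

A large gap like this doesn't prove discrimination on its own, but it flags where to look: the next step is usually to ask whether the training data or features explain the disparity.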

Read the full article at The New Stack.
