Editor’s note: The Towards Data Science podcast’s “Climbing the Data Science Ladder” series is hosted by Jeremie Harris. Jeremie helps run a data science mentorship startup called SharpestMinds. You can listen to the podcast below:
Most of us believe that decisions affecting us should be reached through a reasoning process that combines data we trust with logic we find acceptable.
As long as human beings are making these decisions, we can probe at that reasoning to find out whether we agree with it. We can ask why we were denied that bank loan, or why a judge handed down a particular sentence, for example.
But today, machine learning is automating away more and more of these important decisions. Our lives are increasingly governed by decision-making processes that we can’t interrogate or understand. Worse, machine learning algorithms can exhibit bias or make serious mistakes, so a world run by algorithms risks becoming a dystopian black-box-ocracy, potentially a worse outcome than even the most imperfect human-designed systems we have today.
That’s why AI ethics and AI safety have drawn so much attention in recent years, and why I was so excited to talk to Alayna Kennedy, a data scientist at IBM whose work is focused on the ethics of machine learning, and the risks associated with ML-based decision-making. Alayna has consulted with key players in the US government’s AI effort, and has expertise applying machine learning in industry as well, through previous work on neural network modelling and fraud detection.
Here are some of my biggest take-homes from the conversation: