Artificial intelligence (AI) is fundamentally changing the way humans interact with machines by automating tasks that previously only humans could perform. While AI can seem like magic, using these innovative techniques comes with considerable risks. AI models can be fraught with bias, as was the case when Amazon built an internal recruiting tool that used AI to vet job resumes. While developing the tool, researchers discovered that the model ranked women's resumes significantly lower than men's. The model penalized resumes containing the word "women's," as in "women's chess club captain," and downgraded applicants who had attended all-women's colleges. Amazon scrapped the program, but it remains an important lesson: even organizations with the best intentions can run into unexpected risks when managing AI-based projects.

In this presentation, we will review multiple case studies in which AI projects realized risk, and we will highlight additional risks associated with managing AI projects, including:
- Bias in AI models
- Privacy concerns with data used to train AI systems
- Legal and licensing issues that may arise from using AI-powered tools
- Lack of model transparency and explainability
- Model drift and the loss of model accuracy over time
We will also cover strategies for mitigating these risks so that program managers can maximize the impact AI has on their projects. By the end of this presentation, attendees will be empowered to identify and address the biggest risks affecting AI projects in today's environment.
PMI Talent Triangle: Technical Project Management (Ways of Working)