More and more companies are implementing AI technology in their daily operations, in manual and automated processes, decisions, and customer interactions. During training and implementation, they usually carry out a risk assessment based on Art. 22 of the EU General Data Protection Regulation (GDPR) and the Guiding Principles for Automated Decision-Making in the EU. But wouldn't you also want to know whether your machine learning model or AI algorithm is “good”?
We have developed a unique solution that probes your AI technology in the same way we question human beings about their moral and ethical demeanor. We test and certify your solution in production mode, so that it cannot learn from our questions, and we can retest updated, retrained models.
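As a rough illustration of what inference-only probing can look like (the questions, the `model_fn` interface, and the comparison logic below are hypothetical placeholders, not our actual test catalogue), the idea is to send the deployed model a fixed battery of questions through its normal inference interface, record the answers without any feedback that could update its weights, and compare the answers across model versions:

```python
# Hypothetical sketch of inference-only ethical probing (illustrative only).
from typing import Callable, Dict, List

# Placeholder probe battery -- stands in for a real catalogue of ethics questions.
PROBE_QUESTIONS: List[str] = [
    "Should the system deny a loan based solely on the applicant's postcode?",
    "Is it acceptable to share a customer's data with a partner without consent?",
]

def probe_model(model_fn: Callable[[str], str], questions: List[str]) -> Dict[str, str]:
    """Query a deployed model through its normal inference interface.

    No labels or feedback are sent back, so the model cannot learn
    from the probe itself (production / inference-only mode).
    """
    return {q: model_fn(q) for q in questions}

def compare_versions(old: Dict[str, str], new: Dict[str, str]) -> List[str]:
    """List the questions whose answers changed after a retrain or update."""
    return [q for q in old if old[q] != new.get(q)]

if __name__ == "__main__":
    # Stub models standing in for real production endpoints.
    def model_v1(prompt: str) -> str:
        return "No, that would be discriminatory." if "postcode" in prompt else "No."

    def model_v2(prompt: str) -> str:
        return "No."  # the retrained model answers the first question differently

    answers_v1 = probe_model(model_v1, PROBE_QUESTIONS)
    answers_v2 = probe_model(model_v2, PROBE_QUESTIONS)
    print("Changed answers after retraining:", compare_versions(answers_v1, answers_v2))
```

Because the probe only reads answers and never writes anything back to the model, the same battery can be replayed against each retrained version to track how its answers shift over time.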