PwC released a guide to responsible AI after surveying around 250 business executives, finding that the understanding and application of responsible and ethical AI practices varied significantly across organisations and were, in many cases, immature.

Everyone’s talking about responsible AI. To turn the talk into action, organisations need to make sure that their use of AI fulfils a number of criteria: first, that it is ethically sound and complies with regulations in all respects; second, that it is underpinned by a robust foundation of end-to-end governance; and third, that it is supported by strong performance pillars addressing bias and fairness, interpretability and explainability, and robustness and security.

From automation to augmentation and beyond, artificial intelligence (AI) is already changing how business gets done. Companies are using AI to automate tasks that humans used to do, such as fraud detection or vetting resumes and loan applications, thereby freeing those people up for higher-level work. Doctors are using AI to diagnose some health conditions faster and more accurately. Chatbots are being used in place of customer service representatives to help customers address simple questions.

Click here to read the full report
