In a recent blog, we spoke about how artificial intelligence (AI) can take on tasks that people would otherwise do. Substituting a tool for a person can be an effective way of chomping through repetitive and relatively unsophisticated jobs. Giving people AI-powered tools can help them be more productive: such tools don’t take jobs away directly, but they do make tasks easier.
However, any opportunity carries risk. How do you mitigate that risk so you can capitalise on the opportunity? You need a framework and a set of policies to govern the way you work.
There are three questions we need to ask ourselves when testing any solution that contains AI:
If we don’t ask these questions, or we arrive at incorrect conclusions, then we are exposed to safety risks and costs. We may promise gold and deliver mud. We risk implementing a tool that reduces staff and service user satisfaction, rather than increasing it.
The answer to question 1 requires legal expertise, and your vendor should be able to provide assurances, for example via their clinical safety assessments if it is a clinical solution. The answers to questions 2 and 3, however, are more complex and require a new approach to defining projects. This should start with establishing a set of goals for the use of AI. These goals can be enshrined in principles and then used to create specific policies that can be applied to particular use cases.
Tom Lawry’s seminal book, AI in Health (HIMSS Publishing; 1st edition, February 2020), provides a great starting point for understanding potential goals for your organisation:
How do we ensure fair and equitable treatment of staff and service users? How do we ensure the safety and effectiveness of interactions with AI-powered services?
Will our AI-based system(s) operate reliably, safely and consistently, even when under cyber-attack? Are they designed to operate within a clear set of parameters under expected performance conditions?
How do we control and process data and protect sensitive data from improper use? How do we ensure the provenance of data models? How do we protect the computing estate?
How do AI tools and people work together? How do we avoid digital inequality? How do we avoid intentional or unintentional exclusion? How do we ensure AI is creating a net benefit across multiple measures? What is the impact on staff (for example, on wellbeing)?
How do we govern AI-powered work? What can/can’t we do? What checks and balances need to be in place? What certifications do we and our vendors need?
These are all solid questions, but how do you bake the answers into an AI policy for your organisation?
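One practical starting point is to turn questions like those above into a structured checklist that every AI use case must pass before go-live. The sketch below is a minimal, hypothetical illustration in Python of that idea; the class names, principle labels and example use case are our own illustrative assumptions, not a standard, a vendor API or anything prescribed in Lawry’s book.

```python
# A hypothetical sketch: capturing principle-level questions in a
# machine-readable form so every AI use case is reviewed against the
# same checks. All names here are illustrative, not a product API.
from dataclasses import dataclass, field


@dataclass
class PolicyCheck:
    principle: str        # e.g. "Fairness", "Privacy & Security"
    question: str         # the question the reviewer must answer
    answered: bool = False
    evidence: str = ""    # link or note, e.g. a vendor's clinical safety case


@dataclass
class UseCase:
    name: str
    checks: list[PolicyCheck] = field(default_factory=list)

    def outstanding(self) -> list[PolicyCheck]:
        """Return checks that still lack an answer and supporting evidence."""
        return [c for c in self.checks if not (c.answered and c.evidence)]


# Example: reviewing a hypothetical generative AI tool for drafting clinic letters.
use_case = UseCase(
    name="Generative AI drafting of clinic letters",
    checks=[
        PolicyCheck("Fairness", "Are staff and service users treated equitably?"),
        PolicyCheck("Reliability & Safety", "Does it operate within clear parameters?"),
        PolicyCheck("Privacy & Security", "Is sensitive data protected from improper use?"),
        PolicyCheck("Inclusiveness", "Does it avoid exclusion and create a net benefit?"),
        PolicyCheck("Accountability", "Are the required checks and certifications in place?"),
    ],
)

if use_case.outstanding():
    print(f"'{use_case.name}' is not ready: {len(use_case.outstanding())} checks open")
```

The point is not the code itself but the discipline: each use case inherits the same principle-level questions, and it cannot go live until every one has an answer and evidence attached.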
Your AI policy must have the following five components if it is going to be applied to the sorts of use cases listed above.
If you do this first, then you will have a greater chance of landing a safe implementation of this technology. We know it has the power to change the world, but we need to introduce it in a safe and managed way, where expectations are set and benefits can be delivered while risks are mitigated.
We would be really interested to hear from organisations that want to deploy generative AI safely, so we can share experiences.