In the rush to promote and use AI and other data-powered technologies, have we properly considered the potential negative impacts of these tools?
When it comes to AI, a good question to ask yourself is: can we use these tools whilst protecting people, companies and communities from harmful impacts? The short answer is yes. But it takes effort and attention.
A variety of tools and frameworks exist that let you move beyond simple legal compliance with new tech and apply the ethics and values that you and your team really care about. Tools like the ODI’s Data Ethics Canvas allow anyone to apply an ethical position to any kind of data project, whether that’s using a new dataset or adopting a tool powered by ChatGPT. And it’s not about a black-and-white approach to new tech – labelling one thing as intrinsically bad and another as good – it’s about identifying the ethical basis for minimising risks and amplifying benefits.
Regularly applying ethical thinking is what makes data ethics a superpower – a superpower that allows you to take on the might of the big tech companies by making it clear which tools and techniques are acceptable to you and the people you care most about.
Found this Little Missions post interesting?
Subscribe to get Little Missions delivered straight to your inbox.