Data ethics isn’t the sole preserve of data scientists. Data work has the potential to deliver impressive business results. But when the focus on business results comes at the expense of our values and ethics, we risk losing it all.
Back in March 2018, Mark Zuckerberg was sitting pretty. In the previous two years Facebook’s stock price, and his wealth, had almost doubled. The scale and influence of the platform he had created just 14 years before were unrivalled. Yes, questions were being asked about the impact social media was having on society. But just look at that advertising growth…
On March 17th, The Guardian and the New York Times broke the Cambridge Analytica story. The story knocked $36 billion off Facebook’s stock value overnight. Yes, the value of Facebook stock has increased since then. But so has the media and regulatory scrutiny of the business. Cambridge Analytica is no more.
The loss of trust in Facebook’s ability to match its business practices with societal ethics and values continues to have a long-term impact on the business. This impact was most recently evidenced by the advertising boycott the social media platform experienced in early summer 2020.
The Facebook and Cambridge Analytica scandal has been well explored elsewhere. It is one of many similar stories. From race-based insurance profiling to exposing the running habits of army personnel, data-driven technology has the potential to create significant harm. Often this harm is unintentional, and even technically legal. Legal or not, when harm is exposed, the impact on the trust people and governments place in a business is often significant. This loss of trust leads to greater regulatory oversight, customer departures and, in some cases, the end of a business.
What’s going wrong? It’s about balance
Businesses need to find balance: weighing the value they get from data-powered products and services against a clear understanding of the potential negative impacts. It is leaders who need to bring this balance to their business.
Business leaders can deliver this balance with a focus on data ethics. This starts with grasping the fundamentals, then making sure practical action is being taken so that ethics and values are addressed across the business.
Here are some of the aspects of data ethics that I believe leaders need to consider.
Professional ethics have been around for a long time
A number of professions and communities already have well-established ethics codes and practices. Some of these standards – such as statistics and computing – have a close relationship with data ethics practices.
Many people will be familiar with the Hippocratic Oath: one of the longest-standing and most widely recognised ethical statements for a profession. The oath is historically taken by new doctors, requiring them to uphold specific ethical standards. Read the original and a modern version of the Hippocratic Oath.
Leaders need to consider the professional ethics that need to be embraced in the data work their business conducts.
Ethical decisions are too often delegated to people who design and build technology
One of the issues in modern business is that people are using software and data to design and build technologies that make decisions about other people: decisions involved in driving a car, deciding who gets a job, or whether someone gets offered a mortgage. These decisions are made by algorithms. But this means we need answers to some key questions, like: who will be designing the algorithms?
The algorithms are likely to be machine learning algorithms that use data to learn what to do. But what data will those algorithms learn from? What ethical and cultural values and behaviour will be represented in that data? And what impact will all of this have?
Too often in business, the answers to these questions are delegated to technical teams and individuals, whose focus is on building a tool with business impact. They may, and often do, consider ethical issues. But how can you be sure?
GDPR doesn’t address ethics
One key issue that leaders need to appreciate is that recent regulations like the GDPR (General Data Protection Regulation) set out guidelines on how to process data in an open and transparent way. GDPR doesn’t, however, establish guidelines for the less clear-cut ethical areas of data handling. That work is still ongoing in many countries and organisations; in the meantime, leaders need to ensure that practical action is being taken in their business.
The more data we create and use the greater the risk
One of the reasons AI works for business is that more data than ever is being stored and managed. More data has been created in the past two years than in the entire previous history of the human race. But the opportunity to apply large amounts of data in new technologies such as AI is often accompanied by ethical risks, and these become pitfalls when data ethics isn’t carefully considered.
Unintended bias can creep in everywhere
Online car insurance firms use predetermined algorithms to assess the risk of a user filing a claim against their policy. In 2018, large firms such as Admiral and Marks & Spencer faced a public backlash when it was found that insurance quotes for drivers with the traditional English name ‘John’ were far lower than identical quotes for drivers named ‘Mohammed’. Under particular scrutiny was the insurer Admiral, whose deal on GoCompare for fully comprehensive insurance on a 2007 Ford Focus in Leicester was priced at £1,333 for ‘John Smith’ and £2,252 for ‘Mohammed Ali’.
Sixty quotes, run across ten different cities on GoCompare and other comparison sites, revealed that the companies in question consistently charged more if the driver was called Mohammed. Following the public outcry, the UK’s Financial Conduct Authority (FCA) announced it would extend its consumer data research to examine whether names can affect the cost of car insurance cover.
Some states in the US have now banned price optimisation on the grounds that it is unfairly discriminatory.
The issues with price comparison start with a well-intended use of data: attempts to provide competitive, personalised insurance. However, the bias that crept into these services shows how even well-intended data uses can create a significant negative impact, first for consumers and then for the reputation of the companies involved.
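The mechanism behind cases like this is often indirect: the pricing model never sees a name or ethnicity, but it does see features (a postcode band, say) that correlate with them. The sketch below is a hypothetical, simplified illustration of that proxy effect, with made-up numbers and a toy pricing rule standing in for a real learned model; it is not how any named insurer actually prices cover.

```python
import random

random.seed(0)

# Hypothetical synthetic data: the protected attribute ("group") is never
# given to the pricing rule, but a proxy feature (postcode risk band) is
# correlated with it -- the classic route for unintended bias.
def make_driver():
    group = random.choice(["A", "B"])
    # Assumption for illustration: group B drivers more often live in
    # higher-banded postcodes.
    postcode_band = random.gauss(3 if group == "A" else 5, 1)
    claims_risk = random.gauss(1.0, 0.3)  # true risk identical for both groups
    return group, postcode_band, claims_risk

drivers = [make_driver() for _ in range(5000)]

# Toy "model": price each quote from the postcode band alone, a stand-in
# for an algorithm that has learned to weight this feature heavily.
def quote(postcode_band):
    return 500 + 150 * postcode_band

def mean_quote(group):
    quotes = [quote(band) for g, band, _ in drivers if g == group]
    return sum(quotes) / len(quotes)

print(f"Group A mean quote: £{mean_quote('A'):.0f}")
print(f"Group B mean quote: £{mean_quote('B'):.0f}")
# Group B is quoted more despite identical underlying risk, purely
# because the pricing rule keys on a correlated proxy feature.
```

The point of the sketch is that removing a sensitive attribute from the inputs does not remove the bias: any feature correlated with it can carry the discrimination through. Detecting this requires deliberately testing model outputs across groups, which is exactly the kind of oversight leaders should be asking their teams about.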
Data ethics starts with leaders
Many organisations have started on data transformation journeys. These journeys risk ending in failure when leaders don’t provide well-informed guidance and oversight. This isn’t about micromanaging. It’s about placing equal value on trust and revenue growth. Trust is hard to build and easy to destroy. Data ethics gives leaders an essential tool for preserving trust first, then building data-informed businesses that create value and avoid harm.
This starts with data literacy. Leaders need to understand data, how their business interacts with it, and the impact it is having on their business and sector. Data ethics should be considered a key element of this literacy. The benefit? Greater trust in their business.