In debates on data, privacy and the use of algorithms, social media companies have typically received the most attention and scrutiny. Parliament’s DCMS Sub-Committee on Online Harms and Misinformation is opening up that conversation with its inquiry on data ethics, examining these issues in other areas of business.
The sub-committee was set up in March 2020 to consider a range of issues in this area, including the forthcoming online harms legislation. It has so far carried out an inquiry on online harms and misinformation, resulting in a report that largely focused on misinformation during the Covid-19 pandemic. The sub-committee is now conducting an inquiry into online harms and data ethics, and has held two oral evidence sessions to date. The first, in September, focused on the need to moderate ‘harmful’ content on the social media platform TikTok. The second, held on 13 October, was a broader conversation on data ethics across different sectors.
In April 2019, the Government published its Online Harms White Paper, which sought to “make clear companies’ responsibilities to keep UK users, particularly children, safer online”. Its key proposal was that online platforms should have a “duty of care” to their users, which would place an obligation on them to tackle harmful activities on their digital platforms and services, in particular, on social media. The paper also stated that compliance “with this duty of care will be overseen and enforced by an independent regulator” who would have powers to levy fines on non-compliant parties. The Government’s chosen regulator is Ofcom.
While the White Paper related largely to content moderation, it also stated that the regulator would have the “power to request annual transparency reports from companies” and to request additional information “including about the impact of algorithms in selecting content for users and to ensure that companies proactively report on both emerging and known harms.”
The Government confirmed its intention to bring in legislation on online harms in the Queen’s Speech in December 2019. The Government then published an initial response to its consultation on the Online Harms White Paper in February 2020, committing to a fuller response later in the year. This has been delayed by the pandemic, but is expected imminently. Since then, the DCMS sub-committee has been set up and has begun inquiries into online harms, misinformation and data ethics.
While initial discussions on monitoring and evaluating algorithms have focused on content selection on social media platforms, the sub-committee’s latest session on data ethics examined how people are affected by increasingly data-driven services and automated decisions in other areas.
The session’s witnesses were Dr Jiahong Chen, a Research Fellow in IT Law at Horizon Digital Economy Research at the University of Nottingham, Carly Kind, Director of the Ada Lovelace Institute, and Dr Jeni Tennison, Vice-President at the Open Data Institute. The main takeaways from the session are outlined below.
Data Collection and Usage Outside of Social Media
Members of the sub-committee asked the witnesses to shed more light on data collection in other sectors of the economy. Dr Tennison highlighted that conversations on data ethics are not limited to how personal data is collected and used by social media platforms, but could also cover data collected through other kinds of accounts (e.g. utility accounts), which reveal different patterns of consumption and lifestyle choices or habits. She also noted that data on air quality and traffic provide information about communities that is then interpreted and used in decision-making; the ethics of how that information is used should therefore also be discussed, given its potential impact on communities.
Public Trust in Data Collection and Algorithms
Relatedly, Ms Kind argued that data collection and data-driven technologies face a lack of public trust, particularly in the wake of the Ofqual scandal over the summer and early NHS Test and Trace failures. In the case of Ofqual, she argued that the “amount of damage that’s been done in trust in statistical models far outweighs what actually happened” and that, therefore, “going forward there’s a very high bar that needs to be met with any new data driven intervention.”
The State’s Role in Promoting Transparency in the Use of Algorithms
Dr Tennison and Ms Kind also pointed to the shortcomings of self-regulation in mitigating harms related to data collection and algorithm usage. Ms Kind in particular argued that “having some external accountability measure is imperative in creating an online space that is more hospitable to a wide range of communities” and that the public would value an “external, independent regulator.”
The Limits of Informed Consent, the Importance of Digital Literacy, and Building a Culture of Ethics
One key theme of the session was the limits of informed consent in data collection and usage. Where consent is the lawful basis for processing, the GDPR requires that it be informed: the data subject must know the identity of the party collecting the data, what data processing activities that party intends to conduct and for what purpose, and that they can withdraw their consent at any time. However, both Dr Tennison and Dr Chen noted that most individuals do not have the time or resources to properly read terms and conditions and give genuinely informed consent, and that regulation needs to step in to ensure that users are not exploited.
However, the witnesses also recognised the limits of public policy’s ability to keep pace with the speed at which new technologies are developed and implemented, and recommended investment in digital literacy as well as the promotion of a culture of ethics that both the private and public sectors agree on. Ms Kind said: “Legislation might not be able to keep up but we can do a better job of building a coherent understanding of what public legitimacy for technology looks like and what standards companies have to meet in order to enjoy a social license to operate, to enjoy the public legitimacy of their users and consumers.”
Given recent coverage of data collection and algorithm usage in public policy, as well as controversies surrounding their failures and biases, legislators are keen to gain a deeper understanding of how these processes operate and to regulate their usage. The session highlighted how both legislators and researchers working on data ethics are increasingly concerned that the GDPR model of informed consent may not do enough to effectively protect service users’ data, and that state bodies may need to regulate the collection and usage of this data even further. Although the debate around data collection and algorithms has largely been shaped by experiences on social media platforms, the concerns raised are likely to affect the regulation of data collection and use across a broad range of sectors; there is therefore wide scope for corporate engagement in this area to educate and work with policymakers. This will be especially true once the full response to the Online Harms White Paper is published and the associated legislation is drafted.
Taso Advisory supports clients with the political, policy, and regulatory challenges they face, and helps them to design and deliver credible responses to mitigate risks and seize opportunities. We make complex challenges simple, give actionable advice, and support in delivery. You can find out more about what we do and who we work with.
For a confidential discussion about how we can support your public policy and public affairs work in relation to online harms, and keep you informed of developments, please get in touch by emailing [email protected] or by calling +44 (0) 20 3488 4489.