It has been a tough few years for Facebook, who have repeatedly come under fire for their failure to protect their users' data.
The most notable of these is the 2018 Cambridge Analytica scandal, in which it was revealed that the data of over 50 million people had been harvested from their Facebook profiles without their consent and used to target them with ads, distributed by Cambridge Analytica, intended to swing their votes in the 2016 US presidential election towards Donald Trump. Similar activity was also revealed around the 2016 EU referendum in the UK. Several months after the scandal broke, the GDPR (General Data Protection Regulation) came into force; had it been in place when the scandal occurred, Facebook could have faced a fine of up to 4% of their worldwide turnover (over £1 billion).
At the start of this week, Facebook were criticised by presidential candidate Elizabeth Warren for running intentionally inaccurate ads. Her campaign produced a Facebook ad containing false claims to see whether it would be approved without being flagged, and it was. In response, Facebook have introduced a number of new measures to safeguard people against misinformation and to vet political advertisers.
1. Ads that encourage non-participation
Facebook will now ban ads that discourage people from voting. Cambridge Analytica created the ‘Do so’ campaign in Trinidad and Tobago, harnessing rebellious teen spirit to persuade young people to abstain from voting. This ban is crucial in political contests decided by narrow margins, such as the EU referendum.
2. Tools for analysing ads
The image below is a screenshot from the Ad Library. The new features allow users to see exactly where an ad was running: across Facebook, Instagram, Messenger and/or the Audience Network. A US presidential candidate ad spend tracker is also in development, so that spend can be viewed at a state and regional level. The sponsor is clearly labelled, and ads are flagged as relating to electoral or political issues. Furthermore, to run political ads at scale, posters must now provide a government ID and prove their location (to minimise overseas interference), ensuring accountability for the posts.
3. Labelling state funded media and flagging false information
In addition to the steps already taken, this month Facebook will begin labelling state-controlled media outlets with information about whether they are fully or partly run by a government, so that readers know where the information is coming from. A labelling system will also flag information that is false or suspected to be false, making readers more aware of the trustworthiness of a source.
4. Further changes
Finally, Facebook have made an initial $2m investment to support media projects, such as programmes to help prevent the spread of misinformation from large accounts, and educational workshops on how to assess the motives behind ads.
These changes will affect Facebook users globally, but in the UK specifically there are plans to open a dedicated UK election operation centre. Its purpose will be to monitor ongoing political advertising and ensure that the guidelines are being followed, hopefully preventing further malpractice. Facebook are also campaigning for electoral law to be updated for the digital age, as existing law does not cover modern developments.
Hundreds of Facebook employees have signed a letter to Mark Zuckerberg criticising his decision not to check political posts for credibility. Facebook claimed that they are trying to protect voices, and that in a democracy people should decide what is credible rather than being told by tech companies. However, other employees feel that this allows politicians to weaponise the platform, since people who trust content posted by political figures will not necessarily critique what they read online. Nonetheless, the flagging system will come into effect soon, which should make it easier for people to identify unreliable claims for themselves.
Overall, these steps are certainly promising, but whilst it is encouraging to see Facebook holding themselves accountable, only time will tell whether a new scandal is waiting just around the corner. I anticipate that users will become more sceptical about overt political activity on their feeds, and this scepticism will likely transfer to other ads too. It is important that brands and agencies are similarly transparent, to help rebuild trust among the public.
Lauren Haynes, Account Executive