Artificial intelligence and data protection issues made big headlines in the US and Britain earlier this year when a whistleblower at tech firm Cambridge Analytica revealed that the company had harvested millions of Facebook profiles without their owners' consent.
Now research from the European Consumer Organisation (BEUC) suggests that many internet giants are not in compliance with the new consumer privacy rules, and that some use deceptive techniques to persuade users to agree to share their private information. The group has also been using AI to take the fight to the online big data firms.
The European General Data Protection Regulation (GDPR) came into force in May 2018. It replaced the 1995 Data Protection Directive, which became law before most of the modern internet companies even existed in their current form. GDPR gives users more rights over information disclosure and the deletion of their data; regulators across Europe are able to cooperate more closely than in the past, and can issue more substantial fines of up to 4% of a company's worldwide annual turnover, or €20 million, whichever is higher.
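For concreteness, the fine ceiling in Article 83(5) of the GDPR follows this simple "whichever is higher" rule. The short sketch below illustrates the calculation; the function name and the example turnover figure are invented for illustration only.

```python
# Illustration of the GDPR fine ceiling (Article 83(5)): up to EUR 20
# million or 4% of worldwide annual turnover, whichever is higher.
def max_gdpr_fine_eur(annual_turnover_eur: float) -> float:
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A hypothetical firm with EUR 10bn annual turnover faces a ceiling of
# EUR 400 million; a small firm is still exposed to the EUR 20m floor.
print(max_gdpr_fine_eur(10_000_000_000))  # -> 400000000.0
print(max_gdpr_fine_eur(50_000_000))      # -> 20000000.0
```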
While all companies that hold personal data are affected by the legislation, the biggest impact will be on those who trade the data to third parties, using customer lists as a mass marketing tool. Companies holding the data must obtain full and specific consent before it can be used in this way.
BEUC reports, however, that consent is not always freely given. Citing Deceived by Design, a report prepared by the Norwegian Consumer Council, the organisation states that large internet companies like Microsoft, Google and Facebook employ a number of "exploitative design choices" when asking for consent, such as making consent the default setting, reducing functionality if consent is not given, or generally giving the illusion of choice where agreement is required. These "dark design patterns" inevitably benefit the service provider at the expense of the internet user.
Consumer consent is usually tied to lengthy and detailed legal documentation, which in practice most people will neither read nor fully understand. Employing the same AI techniques frequently used by internet firms themselves, BEUC joined forces with analysts at the University of Florence to perform automated analysis of the service terms and conditions of 14 online corporations including Amazon, Apple, Twitter, Uber, Booking.com and Netflix.
In total almost 3,700 sentences of text were scanned. Of these, 401 (11%) used unclear language, while 1,240 (33.9%) were flagged as problematic because they did not provide sufficient information on how the data would be used.
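The article does not detail the researchers' method, but the analysis was of this general kind: training a classifier to label individual policy sentences. The sketch below is a minimal illustration under that assumption; the training sentences, labels and model choice are invented toys, not the actual data or system used by BEUC and the Florence team.

```python
# Hypothetical sketch: sentence-level classification of privacy-policy
# text. Labels mirror the report's categories ("ok", "unclear",
# "insufficient"), but the data and model here are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy training set: a handful of hand-labelled policy sentences.
train_sentences = [
    "We share your email address with our payment processor Acme Ltd.",
    "We may share your data with selected partners.",
    "Your data may be processed as necessary for certain purposes.",
    "We delete your account data within 30 days of a deletion request.",
    "Third parties may receive information where appropriate.",
    "We retain logs for as long as reasonably required.",
]
train_labels = ["ok", "insufficient", "unclear", "ok", "insufficient", "unclear"]

# Bag-of-words features feeding a linear classifier -- a common
# baseline for clause-level text classification.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
classifier = LinearSVC()
classifier.fit(vectorizer.fit_transform(train_sentences), train_labels)

# Classify unseen policy sentences.
new_sentences = [
    "We may disclose your information to affiliates as appropriate.",
    "We store your order history for five years for tax purposes.",
]
for sentence, label in zip(
    new_sentences, classifier.predict(vectorizer.transform(new_sentences))
):
    print(f"{label:>12}: {sentence}")
```

In practice such a system would be trained on thousands of expert-annotated sentences rather than a handful of examples, but the pipeline shape (annotate, vectorise, classify, review) is the same.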
The report concludes that:
a) companies do not provide all the information required under the GDPR transparency obligations. For example, companies do not always properly inform users about the third parties with whom they share data or from whom they receive it;
b) processing of personal data often does not happen in accordance with GDPR requirements. For instance, some clauses state that by simply using the company's website, the user agrees to its privacy policy; and
c) policies are formulated in vague and unclear language, which makes it very hard for consumers to understand the actual content of the policy and how their data is used in practice.
Specific cases of abuse cited in the report include clauses which do not disclose the recipients of information, what information could be transmitted, or the reasons for sending the information to third parties. Other clauses in online contracts fail the GDPR requirement to clearly identify who holds the information, or to give adequate contact details for the data handler.
A contract term from Uber misleadingly states that a background check may be required for some services. A term for Booking.com fails to distinguish between legitimate and illegitimate interests in using the information. Apple is criticised in the report for failing to accurately state which strategic partners might receive user information. Facebook's legal contracts fail to specify how long information may be held. And the privacy terms for Airbnb are found non-compliant for not informing users accurately of their rights, or of the steps they can take to exercise them.
Commenting on the findings of the report, BEUC Director General Monique Goyens raised concerns about the level of non-compliance identified: "Many privacy policies may not meet the standard of the law. It is key that enforcement authorities take a closer look at this."
She also suggested that AI could be used more often in the future to assist lawyers to "keep companies in check, and ensure people's rights are respected". She added that while users were "increasingly surrounded by connected products and use digital services... assessing such policies is essential to protect people's privacy and autonomy".
BEUC also wishes to establish a future gold standard: a framework under which all information about data sharing would be transparent and consensual. This specifically includes provisions on who will receive information, what will be stored, and for what purposes information may be shared.
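What a machine-readable disclosure under such a standard might contain is not specified. Purely as a hypothetical sketch, the record below captures the three elements BEUC names (recipient, stored data, and purpose); every field name here is invented for illustration, not part of any BEUC proposal.

```python
# Hypothetical structure for a transparent data-sharing disclosure.
from dataclasses import dataclass, field

@dataclass
class DataSharingDisclosure:
    recipient: str                                             # who will receive the data
    data_categories: list[str] = field(default_factory=list)   # what is stored/shared
    purposes: list[str] = field(default_factory=list)          # why it is shared
    retention_period_days: int | None = None                   # how long it is kept

# An example record a policy page could expose alongside its legal text.
disclosure = DataSharingDisclosure(
    recipient="Acme Analytics Ltd.",
    data_categories=["email address", "purchase history"],
    purposes=["targeted advertising"],
    retention_period_days=365,
)
print(disclosure)
```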
With most legal documents the devil is always in the detail. Online big data firms can update their policies relatively quickly, while legislation, particularly across Europe, can take years to draft and may be out of date on arrival. When new regulations do come out, few people can be sure of their exact implications. Privacy concerns continue to put pressure on large data holders to be transparent, but the commercial demand for client lists, and the revenue to be earned by sharing this information, will continue to grow. In theory, user information is more protected than ever. But with the spread of new laws, more complex contracts, and the creation of further regulation, those who prosper most may well turn out to be the digital lawyers.