Should We Regulate Digital Platforms?

I invited Phil Howard and Gillian Bolsover in November 2016 to guest edit this special issue of Big Data on "Computational Propaganda." I am delighted with the collection of academic papers they have curated on a subject that is front and center at the moment: politicians, journalists, and scholars are grappling with how, and to what extent, social-media platforms were used to manipulate the presidential election of the oldest democracy in the world. While the details were murky last year, it was becoming evident to us at Big Data that we were witnessing, for the first time, the vulnerability of social media to political mass manipulation.

Despite a large-scale scientific study* conducted by Facebook in 2012 demonstrating that its users' moods could be manipulated via messages fed to them, the company continued to maintain a position of "algorithmic neutrality" on content. Other reports in the media suggested that it was difficult for most people to distinguish humans from bots on Twitter. While the evidence suggested that social-media platforms were ripe for manipulation, the platforms did not seem particularly worried about the potential consequences of these research findings.

A year later, however, with mounting evidence of manipulation, it is time to ask ourselves whether we need to make changes now to preempt potentially worse consequences going forward, namely, the use of digital platforms as weapons by malicious actors. Common sense suggests that we do. While there are no easy answers, we need to start asking whether our liberal free-market values are vulnerable to rogue governments or groups able to sow havoc with minimal financial resources. We need to start finding answers.

The Facebook study of 2012 had indeed sparked outrage and concern about the use of data for social experimentation without the consent of human subjects. It was worrisome that the data-usage policies of virtually all digital platforms had become increasingly rapacious over the years, allowing them to do as they please with the data they so assiduously collect. Facebook's social experiment would never have been approved by an Institutional Review Board (IRB) for university research involving human subjects. But the outrage resulted in nothing. There was no regulatory body then, and there still isn't one, that addresses the unethical use of data by business entities, even though the implications are arguably as serious as those involving national security.

Facebook's research, conducted with a great deal of rigor and on a large scale, clearly demonstrated that its users could be manipulated on its platform. Would this public knowledge not induce a motivated party to exploit such a vulnerability, especially if the stakes are high and the costs low? The U.S. government is well aware that it has no shortage of overt and covert enemies. So why would it not take steps to thwart them on digital platforms, just as it attempts to do in other arenas? The very real possibility that a party with meager resources can derail a vibrant democracy at virtually no cost represents an incalculable "externality" imposed on us through a known vulnerability in our digital ecosystem.

In a recent article on "FinTech" platforms, my coauthor and I point out that because trust is so important when it comes to our money, we have historically held banks and financial firms to much higher standards of compliance and control than other businesses.
Financial institutions are required to follow well-defined processes, with oversight and fail-safe plans aimed at minimizing risk and maximizing public trust. The terror attacks of 2001 ushered in stringent "know your customer" (KYC) regulations, requiring that such institutions know their customers in great detail. The 2008–2009 crisis resulted in even more stringent regulation aimed at mitigating economic instability and curbing the use of funds for financing terrorism and other illicit