US Plans to Regulate Big Tech Like Google, Amazon, Apple, Facebook. Other Countries Can Take Cue
India and Europe’s forays into data localisation could potentially “tip the scales away from Big Tech” by regulating access to their citizens’ data.

The US is reevaluating its relationship with Big Tech. In 2020, 72 percent of Americans told pollsters that social media companies have too much power and influence in politics. What they sense is borne out in the data. Together, the stock market value of Amazon, Apple, Facebook, and Google totals US$5 trillion. In their respective markets, Amazon is forecast to capture half of US e-commerce retail and Apple almost 21 percent of smartphone market share. Facebook boasts almost 60 percent of the globe’s social media users, with 1.86 billion logging in daily, and Google already controls 90 percent of all general search engine queries. Hand in hand with this market power, these and other US tech companies wield an inordinate amount of influence over public discourse — with implications for free speech, the body politic and technology itself.

Why does this matter to the rest of the world? Canadian professor Blayne Haggart summed it up neatly in the days following former US President Donald Trump’s purging from nearly 20 social media platforms in January. Haggart declared that the comprehensive digital “deplatforming” of a sitting US president and the bedlam surrounding it revealed “the extent to which these platforms…are uniquely shaped by and respond to American needs and values” and this “newfound willingness to censor problematic speech and problematic actors will almost certainly inform how they conduct their business in the rest of the world.” When American tech companies sneeze, the world catches a cold.

The regulatory landscape

Much ink has been spilled over the need for guardrails on digital platforms. British politicians, former US Secretaries of Defense, European luminaries, and international researchers all assert that authoritarians will determine the rules of the road online if democratic societies do not do it first. Washington DC recognises the moment. Current energy in the US capital behind proposals designed to rein in and impose safeguards on Big Tech (companies like, but not limited to, Amazon, Apple, Facebook, Google and Twitter) manifests in two potential avenues — reform of Section 230 (which roughly grants tech companies immunity from liability for content on their platforms) and anti-trust legislation. This essay will focus on major US tech companies and explore these two potential near-term avenues in the context of the information environment, not because they represent a panacea or even the appropriate course of action, but because they stand the best chance of implementation in the next few years. In a marked departure from the inertia of the last two decades, both sides of the aisle in Washington now have Big Tech in their regulatory crosshairs.

The “twenty-six words that created the internet” are no longer sacrosanct. On content moderation and Section 230 of the Communications Decency Act (CDA), Republicans believe these companies are doing too much, while Democrats believe they are doing too little. The “Good Samaritan” premise in Section 230(c) — that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” — allowed embryonic Silicon Valley companies to flourish free from innovation-strangling litigation in the 1990s and early 2000s. This, along with its civil liability provision that shields interactive computer services from liability for users’ speech or for their own attempts to restrict access to content deemed offensive or “otherwise objectionable,” is now the subject of intense focus by lawmakers. They see tech companies as beneficiaries of special government protections, their outsized growth engineered by the “sweeping immunity” granted in the original bill and then extended through a slew of court decisions. On one side, certain policymakers interpret Section 230 as carte blanche for wanton deplatforming, excessive content moderation, and incoherent application of terms of service. On the other, lawmakers fault Section 230 for platforms’ failure to thoroughly scrub “disinformation” from their sites. As such, over 20 bills were introduced to amend or reimagine this law in 2020 alone.

In a departure suggesting that updates are likely on the horizon, conservative stalwart and US Supreme Court Justice Clarence Thomas signaled that Section 230 reform is no longer out of bounds. A willingness to consider reform by a conservative justice marks a shift in protective attitudes toward Section 230, which can be read as the legislation that allowed tech platforms to grow into the economic powerhouses they are today. But attempts to modify the legislation vary in degree. They range from significant overhauls (like those described in the May 2020 executive order) to additional carve-outs (like those proposed in September 2020 by the Justice Department) to more subtle measures that refine specific phrasing in the text. However, this movement for reform is not ineluctable. The American Civil Liberties Union (ACLU) and others have come to the law’s defense, calling Section 230 “critical to protecting free speech online” and warning that its elimination would jeopardise the publication of content like the “videos, photos, and tutorials… each of us is relying on to stay connected today.”

Lawmakers also point to potential anti-trust legislation as another mechanism to rein in Big Tech. Anti-trust scrutiny grew in tandem with these companies’ market dominance and consolidation of power. In 2019, Senator Elizabeth Warren publicly targeted Amazon, Facebook, and Google with threats of a break-up. The Democrat-led House Judiciary Committee’s antitrust subcommittee reported in 2020 that “companies that once were scrappy, underdog startups that challenged the status quo have become the kinds of monopolies we last saw in the era of oil barons and railroad tycoons.” Four Republicans concurred with the overall findings of the majority report and, where they did not, offered a bipartisan solution to resolve the committee’s concerns instead.

A separate 2021 bill to “fund the regulators” from Senator Amy Klobuchar would nearly double the annual budgets of the two agencies that share anti-trust enforcement responsibilities. The Department of Justice anti-trust division and the Federal Trade Commission would each receive an additional US$300 million per year under Klobuchar’s plan, which she claimed gained bipartisan support even from Trump’s White House Chief of Staff Mark Meadows before the presidential transition. Both parties also lauded the October 2020 filing by the Justice Department accusing Google of illegally protecting its search and advertising monopoly. Along with these measures and proposals, the Senate Judiciary Committee antitrust subcommittee’s first hearing in March revealed indications of bipartisan overlap. Whether and how Washington responds to this anti-trust fervour will depend on whom President Joe Biden appoints to lead the Department of Justice anti-trust division and chair the Federal Trade Commission, and on their attitudes toward implementing these years-long threats. Nevertheless, such bipartisan agreement and movement toward reform portend real action ahead. As Klobuchar noted in 2021, “we are making antitrust cool again.”

Unintended consequences for foreign policy

At their core, these legislative pushes against US technology companies — on content moderation and anti-trust — are a direct response to the companies’ outsized impact on public discourse and to how their consolidation of market power affects individual consumers. The size, scale, and reach of digital platforms render them transformative — they control the flow of information in such an expansive way as to fundamentally shape the public square, wielding as much or more power than a government or nation-state.

In an American context, this will naturally shape the discourse on free speech and raise concerns about its protection. The deplatforming of Trump ushered these concerns to the fore. No friends of Trump, Russian dissident Alexei Navalny, Mexican President Andrés Manuel López Obrador and German Chancellor Angela Merkel all spoke out in protest against these deplatforming decisions and their implications for free speech. Among civil society groups, ACLU Senior Legislative Counsel Kate Ruane stated in January that “it should concern everyone when companies like Facebook and Twitter wield the unchecked power to remove people from platforms that have become indispensable for the speech of billions…”

For their part, tech companies are vocal about an imperative to balance public safety and free expression.

In his October 2019 speech at Georgetown University, Facebook CEO Mark Zuckerberg laid out this dichotomy between “[avoiding] real world harm” and promoting free speech. He reminded his employees and himself that “…as we all work to define internet policy and regulation to address public safety, we should also be proactive and write policy that helps the values of voice and expression triumph around the world.” Yet, in practice, this rhetoric confronts certain hard technical and market realities. Despite moderators’ attempts to “err on the side of greater expression” when confronting uncertainty, content moderation — by its nature — does the opposite. Stanford scholar Daphne Keller points to an “over-removal” issue, wherein companies calculate that “the easiest, cheapest, and most risk-avoidant path for any technical intermediary is simply to process a removal request and not question its validity.”

Further, when companies actively insert themselves between “the user and content,” they degrade user trust. According to Pew, roughly three-quarters of adult Americans believe “social media sites intentionally censor political viewpoints that they find objectionable.” This is not helped by the uneven and opaque application of their policies and terms of service. For instance, despite a no-tolerance policy on “inciting violence,” Twitter did not flag Iran’s Supreme Leader Ayatollah Ali Khamenei’s anti-Semitic tweets calling for armed resistance against Israel in May 2020. The company also left up Chinese Communist Party representatives’ celebration of the sterilisation and genocide of over 1.5 million Uighurs in Xinjiang until public pressure precipitated a review and takedown.

Apart from economic competition, instances of collusion in the information space give anti-trust warriors additional justification for their exertions. Big Tech’s complicity in taking down the much smaller Twitter competitor Parler at the height of its popularity in January of this year is not just an example of anti-competitive behavior, but one of dominating the information environment. Google and Apple’s ban of Parler, despite its perch atop the app store at the time, is significant because together these companies control almost 100 percent of the global market share for mobile operating systems. On top of that, Amazon’s decision to drop its hosting service for Parler matters because it controls nearly a third of the cloud infrastructure services market.

This means that if decisions are made to deny service at the cloud hosting infrastructure or internet service provider level, direct access to digital viewpoints, actors or companies who run afoul of these providers is highly circumscribed. It is a good thing when tech platforms cooperate to share “signals” about security issues like child exploitation, terrorism and adversarial foreign government influence operations. However, when companies work together to crush their smaller competitors and decide who has access to the new town square, as they did in the case of Parler, lawmakers may feel justified in reaching for the anti-trust lever. Implications for the ‘infodemic’ are also stark: when dominant companies work together to take down alternative digital platforms, users with nowhere else to go will be pushed further and further into the darker corners of the internet.

Yet these moves by the private sector do not occur in isolation. Companies like Facebook, which counts nearly 90 percent of its user base outside the US and Canada, have a massive global reach. The impact of ad-hoc content moderation decisions, combined with a prodigious consolidation of power, affects the rest of the world. Global powers and partners are aware that decisions within the US will shape how these companies do business outside the US. As such, nations already dabbling in data localisation and internet sovereignty measures are primed to follow through.

In a bifurcated future, where tech titans like Eric Schmidt predict a sundering of the digital world into a “Chinese-led internet and a non-Chinese internet led by America,” the US is no longer the prime mover. Even so, other countries are beginning to balk at this binary and assert sovereignty over their digital content and data. In January, Australia was entangled in a public battle with Google and Facebook over proposed legislation requiring the platforms to pay for its digital news content. India and Europe’s forays into data localisation could potentially “tip the scales away from Big Tech” by regulating access to their citizens’ data. In 2021, a Canadian think tank fellow proposed advancing domestic control over platforms through a “federated internet of interoperable democratic sovereign countries.”

And all of this is still separate from measures like the European Union’s Digital Services Act framework and General Data Protection Regulation, which create data protection and platform governance rules designed by Europeans and binding across member states. These frameworks are already causing headaches for US tech companies, often to the tune of tens of millions of dollars, and are only gaining steam among the US’s traditional allies. The perception of inconsistent platform governance calls into question US authority over the very platforms its companies built.

To prevent further mishaps, debate within the US is key. Questions about the trade-offs between real-world harm and free expression should be settled by an engaged citizenry, civil society groups, the free press and the courts. The contours of this debate should take shape as a combination of policy and tech solutions.

Green shoots: Blending technology & policy solutions

Americans must think dynamically in the context of these growing foreign policy challenges that threaten to fracture the digital world into a disjointed constellation of open and closed systems. Blending new technologies and policy is the antidote to ad-hoc fixes by tech companies and hamfisted US government regulation. On the policy side, Zuckerberg’s idea for the US government to pursue a framework distinct from the binary “telecommunications company versus publisher” approach could be an opportunity taken up by a new administration. This framework should be based on transparency, openness and recourse, with tech companies held accountable for their content moderation decisions.

The first step would be to mandate the publication of content moderation processes and practices to help restore trust, such as through Facebook’s public transparency reports. In the future, instituting algorithmic transparency among these tech companies should be non-negotiable. In addition, calls for an “online Bill of Rights” and a national data protection framework to restore individual rights in the digital space are good starting points. Such a framework or federal privacy bill would go far in enshrining user protections. Recommendations like those contained in the final 2020 Cyberspace Solarium Commission report, which calls for a “national data security and privacy protection law establishing and standardising requirements for the collection, retention, and sharing of user data,” are also primed for adoption. Further, new policy approaches that rely on basic principles of federalism to devolve decisions and authority to the most local level possible are flowering. Florida Governor Ron DeSantis’ ‘Transparency in Technology Act’ proposal, aimed at consumer protection within individual states, is one such approach.
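Mandated transparency becomes most useful when moderation decisions are published in a machine-readable form that researchers and regulators can audit in aggregate, not just summarised in prose. The sketch below is a minimal, hypothetical illustration in Python of what a single record in such a content moderation transparency log could look like; the schema and field names are assumptions for illustration, not any company’s actual reporting format.

```python
# A minimal sketch (not any company's actual schema) of a machine-readable
# content moderation transparency record. All field names are illustrative.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class ModerationRecord:
    content_id: str         # opaque identifier, not the content itself
    policy_cited: str       # which rule in the terms of service was applied
    action: str             # e.g. "remove", "label", "downrank", "no_action"
    automated: bool         # flagged by an algorithm or by a human reviewer
    appeal_available: bool  # whether the user was offered recourse
    decided_at: str         # ISO-8601 timestamp of the decision


def to_report_line(record: ModerationRecord) -> str:
    """Serialise one decision as a JSON line for a published transparency report."""
    return json.dumps(asdict(record), sort_keys=True)


if __name__ == "__main__":
    example = ModerationRecord(
        content_id="post-123",
        policy_cited="incitement-to-violence",
        action="remove",
        automated=False,
        appeal_available=True,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )
    print(to_report_line(example))
```

Published at scale, records like these would let outside observers test whether terms of service are applied evenly — the very question the Pew finding above suggests users doubt.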

But these policy solutions must be complemented by the technology they govern. Companies and citizens should continue to invest in technology solutions that foster the democratic values of individual privacy, openness and transparency, such as decentralisation, privacy by design, and mechanisms and protocols that favor more user control. A host of alternative platforms and decentralised technologies have burst onto the scene in the past few months in the US, heralded by the GameStop rebellion, Bitcoin’s burgeoning valuation, and new ways of thinking about the internet writ large. Such a rethinking is exemplified by projects like DFINITY, which claims to create a “public internet” through a global compute platform beholden to no single corporate entity.

Similarly, focusing on privacy by design — building privacy protections into the initial stages of technology development — will go a long way toward avoiding privacy abuses after digital tools are rolled out to the general population. Likewise, more user control through protocols like a Domain Name System that privatises host transactions can help institutionalise these privacy-first norms within the companies themselves, if widely adopted by multiple firms. US tech sector leaders must also commit to implementing efforts like the one Twitter CEO Jack Dorsey described in his 2020 testimony to the Senate Commerce Committee, where he described “…enabling people to choose algorithms created by third parties to rank and filter the content” as “an incredibly energising idea that’s in reach.”
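The idea Dorsey describes amounts to a simple architectural commitment: the platform exposes a common ranking interface and lets the user choose which implementation — including ones written by third parties — orders their feed. The sketch below is a minimal Python illustration of that pattern; the rankers, fields and registry names are hypothetical assumptions, not any platform’s actual API.

```python
# A minimal sketch of user-selected ranking algorithms: the platform defines a
# common Ranker interface, and the user's settings pick which one builds the feed.
from typing import Callable, Dict, List

Post = Dict[str, float]                      # e.g. {"id": ..., "age_hours": ..., "likes": ...}
Ranker = Callable[[List[Post]], List[Post]]  # any function that reorders a feed


def chronological(posts: List[Post]) -> List[Post]:
    """Newest first; no engagement optimisation."""
    return sorted(posts, key=lambda p: p["age_hours"])


def engagement_weighted(posts: List[Post]) -> List[Post]:
    """Favour posts with more likes, discounted by age."""
    return sorted(posts, key=lambda p: p["likes"] / (1 + p["age_hours"]), reverse=True)


# Registry the platform could expose; third parties would register their own rankers.
RANKERS: Dict[str, Ranker] = {
    "chronological": chronological,
    "engagement": engagement_weighted,
}


def build_feed(posts: List[Post], user_choice: str) -> List[Post]:
    """Rank a feed with whichever algorithm the user selected in their settings."""
    return RANKERS[user_choice](posts)


if __name__ == "__main__":
    feed = [
        {"id": 1, "age_hours": 2.0, "likes": 10},
        {"id": 2, "age_hours": 30.0, "likes": 500},
    ]
    print([p["id"] for p in build_feed(feed, "chronological")])  # [1, 2]
    print([p["id"] for p in build_feed(feed, "engagement")])     # [2, 1]
```

The design point is that the platform keeps hosting and policy enforcement while ceding the attention-allocation decision to whichever ranker the user trusts.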

Whether American companies act on these promises will be the difference between partner nations taking matters into their own hands through heightened data localisation and a more open, free flow of data. Put simply, what is at stake is whether the US’s friends continue to trust the products and services coming out of the country. Social media platforms were ostensibly conceived to democratise ideas, not stifle them. They were made to distribute the power of information, not consolidate and wield it like a cudgel. Blending technological solutions with smart policy is a small step toward restoring the health of free and open societies in the digital world. Convincing partner nations that the US can be trusted to do so is the first hurdle.

This paper was originally published on ORF.
