Big Tech faces ‘major’ EU law on hate speech and disinformation
By KELVIN CHAN, AP Business Writer
LONDON (AP) — Tackling hate speech, misinformation and other harmful content online, the European Union is set to agree on sweeping legislation that would force big tech companies to enforce tougher controls, make it easier for users to report problems, and empower regulators to punish non-compliance with billions in fines.
EU officials were negotiating late Friday night on the final details of the Digital Services Act, which would overhaul digital rules for 27 countries and cement Europe’s reputation as a global leader in reining in the power of social media and other digital platforms, such as Facebook, Google and Amazon.
The law would be the EU’s third major law targeting the tech industry, a notable contrast to the United States, where lobbyists representing Silicon Valley interests have largely been successful in keeping federal lawmakers at bay.
While the Justice Department and the Federal Trade Commission have filed major antitrust suits against Google and Facebook, Congress remains politically divided on efforts to tackle competition, online privacy, misinformation and more.
New EU rules, designed to protect internet users and their “fundamental rights online”, would make tech companies more liable for content created by users and amplified by their platforms’ algorithms.
“The DSA is nothing less than a paradigm shift in technology regulation. It is the first major attempt to establish rules and standards for algorithmic systems in digital media markets,” said Ben Scott, a former technology policy adviser to Hillary Clinton who is now executive director of the advocacy group Reset.
Once agreed in principle, the law will still need to be formally approved by the European Parliament and the European Council, although that is not expected to be a major obstacle. It has not yet been decided when the law will take effect.
Negotiators hoped to strike a deal by the end of Friday, ahead of Sunday’s French election. A new French government could stake out different positions on digital content.
The need to regulate Big Tech more effectively became clearer after the 2016 US presidential election, when it was discovered that Russia had used social media platforms to try to influence the country’s vote. Tech companies like Facebook and Twitter have promised to crack down on misinformation, but the problems have only gotten worse. During the pandemic, health misinformation has flourished and once again companies have been slow to act, cracking down after years of letting anti-vaccine lies thrive on their platforms.
Under the EU law, governments would be able to ask companies to remove a wide range of content deemed illegal, including material promoting terrorism, child sexual abuse, hate speech and commercial scams. Social media platforms such as Facebook and Twitter would have to give users tools to flag such content in an “easy and efficient” way so that it can be quickly removed. Online marketplaces like Amazon would have to do the same for questionable products, such as counterfeit sneakers or dangerous toys.
These systems will be standardized so that they work the same way on any online platform.
Companies that break the rules face fines of up to 6% of their annual global revenue, which for tech giants would amount to billions of dollars. Repeat offenders could be banned from the EU market.
Tech giants have lobbied furiously in Brussels to water down the EU rules. Google said in a statement Friday that it looked forward to “working with policymakers to get the remaining technical details right to make sure the law works for everyone.” Amazon referred to a blog post from last year that said it welcomed measures that boost trust in online services. Facebook did not respond to requests for comment and Twitter declined to comment.
The Digital Services Act would ban ads aimed at minors, as well as ads targeting users based on their gender, ethnicity or sexual orientation. It would also ban deceptive techniques companies use to nudge people into doing things they didn’t intend to, such as services that are easy to sign up for but hard to cancel.
To show that they are making progress in limiting these practices, tech companies would have to carry out annual risk assessments of their platforms.
Until now, regulators have had no access to the inner workings of Google, Facebook and other popular services. But under the new law, companies will have to be more transparent and provide information to regulators and independent researchers about their content moderation efforts. This could mean, for example, YouTube turning over data on whether its recommendation algorithm has directed users to more Russian propaganda than normal.
To enforce the new rules, the European Commission is expected to hire more than 200 new staff. To pay for it, tech companies would be charged a “monitoring fee,” which could amount to up to 0.1% of their annual global net income, depending on the final negotiated terms.
Experts said the new rules are likely to trigger copycat regulatory efforts from governments in other countries, while tech companies will also come under pressure to roll out the rules beyond EU borders.
“If Joe Biden stands at the podium and says, ‘By golly, why don’t American consumers deserve the same protections that Google and Facebook are giving European consumers,’ it will be hard for those companies to refuse to apply the same rules” elsewhere, Scott said.
But companies are unlikely to do so voluntarily, said Zach Meyers, senior fellow at the Center for European Reform think tank. There’s simply too much money at stake if a company like Meta, which owns Facebook and Instagram, is limited in how it can target advertising to specific groups of users.
“Big tech companies will strongly resist other countries adopting these similar rules, and I can’t imagine companies voluntarily enforcing these rules outside the EU,” Meyers said.
The EU reached a separate agreement last month on its so-called Digital Markets Act, a law aimed at limiting the market power of tech giants and making them treat smaller rivals fairly.
And in 2018, the EU’s General Data Protection Regulation set the global standard for data privacy protection, though it has been criticized for failing to change the behavior of tech companies. A big part of the problem is that a company’s lead privacy regulator is in the country where its European headquarters are located, which for most tech companies is Ireland.
Irish regulators have opened dozens of data privacy inquiries, but have issued judgments on only a handful. Critics say the problem is a lack of staff, but the Irish regulator says the cases are complex and time-consuming.
EU officials say they have learned from this experience and will make the bloc’s Executive Commission the enforcer of the Digital Services Act and the Digital Markets Act.
AP Technology Editor Barbara Ortutay contributed to this story.
Copyright 2022 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.