Five technology giants including Twitter and Meta have pledged to self-regulate and adhere to a new voluntary code of practice in New Zealand that aims to curb harmful online content. The move, however, has been dismissed as “window dressing” and an attempt to preempt regulation.
Google, Meta, TikTok, Amazon, and Twitter agreed to sign up for the Aotearoa New Zealand Code of Practice for Online Safety and Harms, which “obligates” tech companies to “actively reduce harmful content” on their respective digital platforms and services in the country. The agreement includes Google’s YouTube, Meta’s Facebook and Instagram, and Amazon’s Twitch platforms.
The move marked the launch of the code of practice, which came into effect Monday after a year of development efforts led by Netsafe, a non-profit organization focused on online safety.
Dependent on self-regulation, the code outlines principles and best practices that look to improve online safety and cut harmful content. It can be applied to a range of products and services that serve different user communities, addressing different concerns and use cases, according to Netsafe.
The code focuses on seven themes under which content is deemed harmful, including cyberbullying or harassment, incitement of violence, misinformation, and child sexual exploitation and abuse.
Under the code, signatories will make “best efforts” toward four key commitments: reducing the prevalence of harmful online content, giving users more control and the ability to make informed choices, enhancing the transparency of policies and processes, and supporting independent research.
Netsafe said: “It provides flexibility for potential signatories to innovate and respond to online safety and harmful content concerns in a way that best matches their risk profiles, as well as recalibrate and shift tactics in order to iterate, improve, and address evolving threats online in real-time.”
It added that the code was not designed to replace “obligations” involved in existing laws or other voluntary regulatory frameworks. Instead, it focused on the signatories’ architecture comprising their systems, policies, processes, products, and tools put in place to combat the spread of harmful content.
NZ Tech has been brought in to take over the establishment and administration of the code. The not-for-profit NGO (non-governmental organization) represents 20 technology communities and more than 1,000 members across New Zealand.
Several digital platforms, including all five tech companies that signed up for it, were involved in the initial drafting of the code. Feedback from civil society groups, interest groups, the government, and the general public was also taken into consideration.
The code will be monitored by a “new multi-stakeholder governance” group, Netsafe said, which noted that the code was built on online safety principles from Australia and the EU.
Companies that agreed to adhere to the new code of practice would have to publish annual reports on their progress against its commitments and would be subject to sanctions if they breached them.
Netsafe CEO Brent Carey said harmful content reports climbed more than 25% amid increased online use fueled by the global pandemic. “There are too many Kiwis being bullied, harassed, and abused online, which is why the industry has rallied together to protect users,” Carey said.
Code promotes model that avoids ‘real accountability’
One industry critic, though, has hit out at the establishment of the code, calling it a framework that avoids change and accountability.
Tohatoha NZ CEO Mandy Henk said in a post that the code seemed like a “Meta-led effort to subvert a New Zealand institution”, in a bid to claim legitimacy without having done the work to earn it.
“This is a weak attempt to preempt regulation, in New Zealand and overseas, by promoting an industry-led model that avoids the real change and real accountability needed to protect communities, individuals, and the health of our democracy,” Henk said. “This code talks a lot about transparency, but transparency without accountability is just window dressing. In our view, nothing in this code enhances the accountability of the platforms or ensures those who are harmed by their business models are made whole again or protected from future harms.”
Tohatoha NZ is a not-for-profit organization that advocates for public education on the social impacts of technology.
Henk said the processes that led to the Aotearoa New Zealand Code of Practice revealed that the minds behind it had “no awareness” of the imbalance of power between users and online platforms and had no interest in correcting this inequity.
She also noted that NZ Tech was an advocacy group that lacked the expertise, experience, and community accountability needed to administer a code of practice of this nature. It was neither impartial nor focused on the needs of those harmed by the tech platforms, she added.
She further called out Netsafe for being involved in establishing the code, when its role as the approved administrator for New Zealand’s Harmful Digital Communications Act meant there was a conflict of interest. “It aligns [Netsafe] too closely with the companies impacted by the Harmful Digital Communications Act and increases the risk of regulatory capture,” she said. “This code is a distraction from their core work of administering the Act, which is crucially important. NetSafe’s focus should be on serving the New Zealand public and enhancing the safety of every New Zealander who uses the internet.”
Henk instead called for a government-led process to develop online content regulations, arguing this would provide the legitimacy and resources needed to establish a regulatory framework that safeguards the rights of internet users.
She pointed to the Content Regulatory Review as a step in the right direction.