Regulating artificial intelligence: Can we keep up? And do we need to?
A New Zealand tech leader and good friend of mine recently pointed me to some Silicon Valley schools that have removed technology from the classroom. The approach, championed by the Waldorf School, was highlighted by the New York Times way back in 2011:
“The chief technology officer of eBay sends his children to a nine-classroom school here. So do employees of Silicon Valley giants like Google, Apple, Yahoo and Hewlett-Packard. But the school’s chief teaching tools are anything but high-tech: pens and paper, knitting needles and, occasionally, mud. Not a computer to be found. No screens at all.”
Here we see private education providers and tech-entrepreneur parents making choices, albeit from a position of significant information advantage and privilege, to regulate the use of technology in education.
Is there a case for government regulation of AI in Aotearoa?
This Californian example reminds us that not all regulation has to involve government intervention, and in Aotearoa we’ve also tested different forms of industry self-regulation. If government is to consider stepping in instead, the starting point is to assess the risk of harm:
"The Government of Aotearoa New Zealand uses regulation to protect the community from harm and to improve the living standards of its people. Regulation is likely to involve legislation in some form, but it is not just about the law. – The Treasury, “Regulatory stewardship”
We have a well-established framework for assessing and managing economic harm, mainly through legislation enforced by the Commerce Commission. But the debate around regulating technology, particularly AI, involves not just economic harms but also harms relating to privacy, consumer protection, and human rights. The wide range of intended and unintended consequences, coupled with the pace of change, makes it hard for regulators to develop the right strategy.
Rapid technological change has had regulators wringing their hands for almost two decades. Yet the range of interventions being used in New Zealand still looks much as it always has. It’s starting to look like we may be struggling to keep up.
Is “watch and wait” a viable regulatory strategy?
My colleague Marianna Pekar leads our thinking at MartinJenkins on new digital technologies and the positive potential they offer us here in Aotearoa. As a data scientist, she provided feedback to Stats NZ before the Algorithm Charter for Aotearoa New Zealand was finalised, and she oversaw the Charter’s implementation at the Social Wellbeing Agency.
The Charter was a bit of a breakthrough in how government can demonstrate transparency and accountability for the use of data. But in our work here at MartinJenkins, we still come across government agencies that are unaware of the Charter and what they should be doing to support it.
To Marianna, the Algorithm Charter is a good start in the direction of regulating the use of data and tech, but not enough in itself. Overseas approaches to privacy regulation have been shaped around compliance with the EU’s General Data Protection Regulation (GDPR), and Marianna is now keeping a keen eye on the EU’s approach to regulating AI as well.
The EU’s draft Artificial Intelligence Act: The first comprehensive AI law
By all accounts, the EU is leading the way in creating the first comprehensive artificial intelligence law. In June the European Parliament agreed on a draft Artificial Intelligence (AI) Act, with the next step being to negotiate the final Act with the member states.
The EU is setting up a regulatory body to enforce the new rules, and the rules could be fully operational by the second half of 2024. The EU has the scale to attract some great regulatory talent.
The EU’s priority is the safe, transparent, non-discriminatory, and accountable use of artificial intelligence. Their groundbreaking work involves a risk-based approach, and the penalties proposed are heavy. Other countries are following suit, particularly the United States, Canada, and Australia.
The EU framework aims to both protect the fundamental rights of individuals and businesses and also foster AI investment, uptake, and innovation. These two goals obviously sit in some tension. If you’re a company using AI technology in New Zealand, you should watch these developments closely.
The EU’s risk-based approach
The EU’s approach categorises AI systems into four risk levels: unacceptable, high, limited, and minimal or no risk.
“High risk” systems include critical infrastructure, recruitment processes, and credit scoring. They’re subject to strict obligations: high-quality datasets to minimise bias, detailed documentation for compliance assessments, adequate information for users, and human oversight measures.
Remote biometric identification is also “high risk”, with prohibitions on law enforcement using it in public spaces (very relevant for us, given the consultation underway on this in New Zealand).
On the other hand, “minimal risk” AI applications, such as AI-driven video games, would continue to be freely usable – a huge relief to my 13-year-old son and potentially the New Zealand gaming industry.
A fork in the regulatory road?
With AI, Aotearoa seems to be at something of a fork in the regulatory road. Will we be a regulatory taker, waiting and letting other large countries or country blocs set the standards? Or will we spring into action now with our own approach?
Privacy and data protection is an example of where we didn’t just import a regime from one of the big overseas hitters, like the EU’s GDPR. We developed a homegrown regime, tailored to Aotearoa. On the other hand, that homegrown approach does create headaches for some New Zealand companies – those doing business in GDPR-covered countries, which therefore have to comply with two different regimes.
My colleague Marianna, perhaps like those Silicon Valley tech execs, has the inside skinny and far more knowledge than I have about AI and the specific issues regulators will need to address. She’s impressed with how the EU has taken a future-proof mindset, writing rules that can be adapted to rapid technological advancements.
In broad outline, Marianna thinks the EU’s approach is suitable for us here. It would be an advance on what we currently have: voluntary guidelines in the form of the Algorithm Charter, without any compliance monitoring and enforcement teeth.
But Marianna also believes strongly we can’t just directly copy a regulatory regime from overseas – particularly not one designed for such a large and varied jurisdiction as the EU. Some values, she tells me, should not be imported. New Zealand needs to develop its own position, where we balance promoting AI innovation with avoiding a dystopian algocracy. This will require a thorough consultation process involving all our diverse communities as well as AI and ethics experts.
What’s happened in New Zealand so far?
There are some things happening here. A group of academics are part of the Global Partnership on Artificial Intelligence, and Waikato University has a dedicated Artificial Intelligence Institute. There’s a cross-agency work programme on AI led out of the Ministry of Business, Innovation and Employment. The Office of the Prime Minister’s Chief Science Advisor has also recently called for action.
But the only regulator I currently see sending out clear signals about its regulatory posture on AI is the Office of the Privacy Commissioner. Of course, its mandate is firmly, and appropriately, focussed on one specific form of harm: that related to privacy.
There’s a risk that this privacy-focussed action, ahead of other regulators, could have a chilling effect on public regulators’ ability to think about the positive opportunities as they race to stamp out the risks.
Our regulatory system is wired to be risk-averse
Our public-sector regulatory system is wired to act in this risk-averse way. As someone who’s been both a regulator and a regulated party, and is now a roving regulatory adviser, I can tell you that New Zealand regulators have historically lived a sheltered and very separate existence, by careful design, penned off from industry and other regulated parties.
Good regulatory theory reminds you that your state-sanctioned intrusive powers must be wielded with care and diligence, and with the public good kept front of mind. Cosying up to regulated parties just leads to bad outcomes, says the regulatory literature.
But how do you keep up with tech developments and be a responsible and responsive regulator without close industry connections? And if you are a company wanting to be innovative, how do you support the regulator to focus on the right risks with a good degree of regulatory certainty so you can attract investment?
Are we ready to grapple with the challenges posed by AI?
How government responds to concerns about the need to regulate AI also connects with the wider issue of trust and confidence in regulators and governments generally, or the lack of it. Matching survey data out of the US, a recent survey across a selection of European countries revealed serious concerns about AI, with a majority in favour of heavy government regulation.
So, to put it crudely, there is a chance that regulating AI is going to be popular with New Zealand punters as well. But it’s unclear whether we have the level of regulatory sophistication needed to grapple with some of the AI issues we are seeing coming out of the EU. The steps taken by our Government Regulatory Practice Initiative to professionalise our regulatory workforce have been admirable, but we’re still a small country with limited regulatory capability, and AI is ubiquitous.
This is something that any new government will need to watch and respond to in 2024. In the meantime, I will send my kid to school with a Chromebook, read articles in the New York Times about someone’s ideas on how I could be a better parent, and watch what’s happening in the EU with interest, to see if theirs turns out to be a sound strategy.