Regulating disruptive technologies: an update

Emerging technologies are creating upheaval for regulatory environments worldwide.  In this article we look at recent regulatory updates relating to two forms of disruptive technology: drones and artificial intelligence.

Drones

Last month the New Zealand Government released a draft Civil Aviation Bill for consultation (draft Bill).  The draft Bill has been developed to modernise the aviation regulatory system and would replace the Civil Aviation Act 1990 (CA Act) and the Airport Authorities Act 1966.

Much like aviation legislation the world over, the CA Act is not entirely adaptable to emerging technologies such as drones.

The draft Bill proposes a number of relevant changes, including to the ‘pilot in command’ concept and the definition of ‘accident’.  It also proposes new rights relating to detention, seizure and destruction of drones.

Under the CA Act the ‘pilot in command’ has ultimate responsibility for the safe operation of the flight. The current concept of ‘pilot in command’ is not suited to developing aviation technology where there may be no pilot on board. The proposed changes ensure that, in the absence of an on-board pilot, the duties, powers and obligations of the pilot fall to the operator of the aircraft. Although we do wonder: what happens when drones are inevitably operated by robots?

A further change relates to the definition of accident.  The existing CA Act requires parties to notify the Civil Aviation Authority (CAA) if there is an accident involving manned aircraft.  This limits the CAA’s ability to investigate drone accidents, understand the safety risks from these aircraft, and thereby regulate them effectively.  The draft Bill proposes updating the definition of accident to include unmanned aircraft.

The draft Bill also proposes a number of options that would better allow action to be taken against a drone operated in contravention of civil aviation law, or in a manner that may endanger people or property.

The proposed changes acknowledge the increasing risks of drone use. The commentary released with the draft Bill cites the 2018 Gatwick drone incident, which caused an estimated £50 million loss to the United Kingdom economy and affected thousands of passengers. It is situations such as this that have encouraged the Ministry of Transport, together with other government agencies, to form an Unmanned Aircraft (UA) Leadership Group to undertake a broader programme of work regarding drones, to ensure regulation continues to support the safe and effective use of this technology.

Public submissions on the draft Bill are invited, and these close on 6 July 2019.  If you would like assistance in drafting submissions, or would like any further information about how changes to the CA Act could impact you, please contact us.

Artificial intelligence (AI)

On 22 May the Organisation for Economic Co-operation and Development (OECD) formally adopted its ‘Recommendation on Artificial Intelligence’ (Recommendation). The Recommendation includes the first international standards agreed by governments for the responsible stewardship of trustworthy AI.

AI, with all its potential, presents real challenges with regard to its regulation. The adoption of the Recommendation by over 40 countries, including New Zealand, acknowledges the need for international guidance in this area. While not legally binding, the Recommendation provides policy standards for regulators and stakeholders around the world, and establishes a core reference point for AI governance.

The Recommendation includes a set of five principles designed to ensure AI technology develops for the benefit of humans.  The principles for the responsible stewardship of trustworthy AI relate to:

  • Inclusive growth, sustainable development and well-being;
  • Human-centred values and fairness;
  • Transparency and explainability;
  • Robustness, security and safety; and
  • Accountability.

These principles are designed to be practical and flexible enough to stand the test of time in this rapidly evolving field.  The principles reflect existing and emerging dialogue around the ethical issues facing AI.

The OECD is not the only body looking at these issues. Around the world, a number of countries are looking at what they need to do to ensure the appropriate development of AI.

In January this year the Australian Human Rights Commission released a whitepaper titled ‘Artificial Intelligence: Governance and Leadership’, which highlighted a number of ethical concerns relating to AI. Following this, the Australian Department of Industry, Innovation and Science published ‘Artificial Intelligence: Australia’s Ethics Framework’. The report examines key issues by exploring a series of case studies that have prompted ethical debate in the AI space, and provides a framework of core principles for the development of AI technology. There is clear crossover between these principles and those of the OECD.

In the European Union, the European Commission’s High-Level Expert Group on AI has released ‘Ethics Guidelines for Trustworthy AI’. The guidelines highlight the need for AI to be lawful, ethical and robust. They offer technical and non-technical methods for trustworthy AI development and deployment, and a non-exhaustive assessment list for AI practitioners to adopt. Although still in its pilot phase, the assessment list could prove a valuable reference tool for developers.

And most recently, on 9 June at the G20 ministerial meeting on trade and the digital economy in Japan, the G20 ministers, joined by New Zealand, agreed on AI principles based on those adopted by the OECD. As with the OECD Recommendation, the G20 guidelines call for users and developers of AI to be fair and accountable, with transparent decision-making processes, and to respect the rule of law and internationally recognised values around privacy, equality and employment rights.

It’s clear that challenges posed by AI are also on the New Zealand agenda. A report from the New Zealand Law Foundation, ‘Government Use of Artificial Intelligence in New Zealand’, released in May this year, warns against the use of unregulated AI algorithms by government. The report is the outcome of phase one of a three-year project to evaluate the legal and policy implications of AI for New Zealand. In the next phase, researchers intend to look at the impacts of AI on work and employment.

As public awareness of issues relating to AI grows, we expect the regulatory responses will grow too. While we are yet to see binding legislation in this space, issues relating to AI are clearly front of mind for regulators and stakeholders alike. For now, we suggest that anyone working on AI projects keeps abreast of these developments and ensures that any AI is developed in a way that aligns with the five principles of the Recommendation adopted by the OECD.

Lane Neave has a specialist technology team. From start-ups and SMEs to large corporates, we are keen to work with you on your technology projects. If you want to understand how we can help your business, please get in touch.

Business Law team

Gerard Dale, Claire Evans, Graeme Crombie, Evelyn Jones, Anna Ryan, Joelle Grace, Peter Orpin, Ellen Sewell, Matt Tolan, Carlo Wan, Kristina Sutherland, Jacob Nutt, Whitney Moore, Alex Stone, Ben Cooper, Lisa Catto

