When the tech lands you in court…!


By Zein El Hassan

Innovation v risk management

There is growing tension between the pace of technology-driven innovation and the ability to effectively manage the associated risks. We’ve all heard leadership encouraging us to “move faster on tech” and the echoed caution from legal and risk management that “we need to manage the risks”. This is not a new phenomenon, but the nature of the technology, its use cases and the associated risks are changing at an accelerating rate, especially since the democratisation of large language models like ChatGPT.

It’s exciting to brainstorm tech-driven use cases and I recently imagined a super fund app that creates a visual and audio avatar that provides financial advice in a real-time conversation with super fund members. While the technology to create this digital experience is already available, the challenge is whether it is possible to manage the risks inherent in this live exchange with customers.

Consumer protection laws

The risk is that the technology is built, used or marketed in a way that misleads consumers and, in the case of the super fund app, that it provides financial advice that is not appropriate for the member’s circumstances or goals. In both cases, that is a breach of several financial services laws.

Just as ‘greenwashing risk’ has been the subject of recent regulatory investigations and court cases, so too has the use of algorithms and new technology, especially where there is a disconnect between the way the tech performs and how it is marketed to consumers.

Where this happens, the regulators are likely to come knocking, sometimes tipped off by competitors. The matter ultimately ends up in court, with fines in the millions, and is usually coupled with complex customer and operational remediation programs, significant reputational damage and additional ongoing regulatory requirements.

Importantly, the lack of an intention to mislead customers will not save you from these court actions, nor will the absence of consumer loss. These examples of misleading conduct are under the constant regulatory scrutiny of the Australian Securities and Investments Commission (ASIC) for financial services and the Australian Competition and Consumer Commission (ACCC) for everything else.

Not surprisingly, the legislative provisions prohibiting misleading and deceptive conduct under financial services laws and consumer protection laws are couched in similar terms and the underlying case law is very similar as well.

It’s important to remember that the policeman, prosecutor, and plaintiff in these proceedings is the regulator (ASIC or ACCC) and not consumers. However, class action lawyers and litigation funders are always watching the press releases from the regulators to drum up business for class actions by your customers.

So, what are recent examples where this has landed corporates in court and what can we learn from them?

We have all used websites that offer to find the best price for all sorts of things, from hotel rooms, hire cars and flights to insurance, super and banking products.

These sites invariably use algorithms: data is collected and sorted from various sources, and decisions are made through an iterative process of validation and verification that generates recommendations to consumers about products or services.

At a high level, this process is like the decision-tree process used in digital financial advice tools and AI-driven digital customer engagement platforms.
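
For illustration only, a minimal sketch of this kind of decision-tree logic might look like the following Python snippet; the questions, thresholds and product labels are entirely hypothetical and deliberately simplified, not any provider’s actual implementation.

    # Hypothetical decision-tree sketch of a digital recommendation tool.
    # The questions, thresholds and product labels below are invented for
    # illustration; a real tool would collect far more information and be
    # rigorously tested before being put in front of customers.

    def recommend_option(age: int, balance: float, risk_tolerance: str) -> str:
        """Walk a simple decision tree and return a (hypothetical) product label."""
        if risk_tolerance not in {"low", "medium", "high"}:
            # Unvalidated or missing inputs at this step are one way 'faulty tech' creeps in.
            raise ValueError("risk_tolerance must be 'low', 'medium' or 'high'")
        if age >= 60:
            return "Capital Stable option" if risk_tolerance == "low" else "Balanced option"
        if balance < 50_000:
            return "Balanced option"
        return "Growth option" if risk_tolerance == "high" else "Balanced option"

    # Example: a 45-year-old with a $120,000 balance and a high risk tolerance.
    print(recommend_option(45, 120_000.0, "high"))  # -> Growth option

Even in a toy example like this, the legal exposure turns on whether the questions asked, the branching logic and the way the tool is described to customers line up with one another.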

These tech-driven operational processes or models give rise to different risks at each stage of their lifecycle, including the design, testing, implementation, and monitoring stages. There are also different risks involved depending on whether the technology is built in-house or procured from third parties (or open sourced) and the nature of the foundational tech on which it’s built and the data on which it’s trained and tested.

Whatever their source, these risks may result in either ‘faulty tech’ or ‘faulty’ implementation, and sometimes you may have both! From a legal perspective, when either ‘fault’ interacts with consumers, it may result in misleading and deceptive conduct that can land you in court.

An example of ‘faulty tech’ is where the tech (algorithm or AI) doesn’t perform as expected because of an error, omission or inadequacy in the programming of the algorithm. It may also arise from the quality of the data that is used by the algorithm or used to train the AI, or a combination of both.

For digital financial advice, the risk is that the algorithm does not collect sufficient information about the customer’s relevant circumstances to generate a financial advice recommendation that is appropriate for the customer’s circumstances (which is a breach of financial services laws).

Faulty implementation is where the tech performs as intended; however, what it does is different to what the customer expects it to do, usually because the disclosures or advertising about the functionality of the tech, or its limitations, are misleading to the end user.

In one recent court case, the algorithm recommended the service that paid the highest revenue to the company. This was contrary to the advertising on the company’s website, which said the website would recommend the lowest price to the customer. The misleading advertising resulted in tens of millions of dollars in fines, stripping out all the profit generated by the advertising campaigns. The method of calculating the fine, and its size, were intended to act as a major deterrent to others so that fines are not seen as a cost of doing business.
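
As a stylised illustration only (with invented offers and field names, not the facts of any particular case), the gap between the advertised ranking and the implemented ranking can be as simple as the code sorting on the wrong field:

    # Stylised illustration of 'faulty implementation': the site advertises
    # "we show you the cheapest offer first" but the code ranks by revenue to the company.
    offers = [
        {"provider": "A", "price": 210.0, "fee_paid_to_site": 30.0},
        {"provider": "B", "price": 185.0, "fee_paid_to_site": 10.0},
        {"provider": "C", "price": 199.0, "fee_paid_to_site": 25.0},
    ]

    advertised_ranking = sorted(offers, key=lambda o: o["price"])           # what consumers are told
    actual_ranking = sorted(offers, key=lambda o: -o["fee_paid_to_site"])   # what the algorithm does

    print([o["provider"] for o in advertised_ranking])  # ['B', 'C', 'A']
    print([o["provider"] for o in actual_ranking])      # ['A', 'C', 'B']

The tech here does exactly what it was built to do; the problem is the mismatch between that behaviour and the representation made to consumers.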

An example of poor implementation of digital financial advice is where the limitations of the digital tool are not effectively disclosed to the customer so that the customer does not realise that the recommendation may not be appropriate for their circumstances.

Broader legal framework

Beyond financial services laws, it’s also necessary to ensure that the new tech complies with privacy, anti-discrimination, online safety, intellectual property and other applicable laws and regulatory requirements.

Where the tech is owned by third-party providers, navigating intellectual property, privacy and data security issues in technology contracts can be a minefield, including in relation to derivative data, data retention and new cloud-based technology, such as allocating responsibility for major outages.

New and emerging risks

In addition to the above types of risks, the use of generative AI and machine learning is giving rise to new types of risk, including hallucinations and drifting.

Hallucination is the risk of a generative AI model giving inaccurate and sometimes ridiculous answers to users’ questions because the tech has no expertise in the subject matter, or any programmable ‘common sense’, when generating the answer. Recent examples include gen AI recommending that glue be used to stop pizza from falling apart and, more seriously, gen AI relying on overseas court cases to answer questions about Australian law and making up reference notes.

Of equal concern is where the tech is not faulty at implementation but becomes faulty over time because its output ‘drifts’ away from what is expected. This is a potential risk for tech with machine learning capabilities, where the tech is trained on certain data to produce outputs that achieve a programmed objective, but the outputs drift away from expectations over time for several reasons.

For example, the externally sourced data on which the tech was ‘trained’ may be different to the in-house or customer data that is used when the tech is implemented by the business. Also, where the output of the tech is fed back into the tech, it may amplify the movement (or ‘drift’) away from the expected results based on the ‘training’ data.
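
One simple way to watch for this is to compare the live inputs a deployed model receives against the statistics of its training data. The sketch below is a minimal illustration only, assuming a single numeric input feature and an invented alert threshold; real monitoring tools track many features and use more sophisticated tests.

    # Minimal drift check: compare live input data against training-data statistics.
    # The feature values, window size and threshold here are illustrative assumptions only.
    from statistics import mean, stdev

    def drift_alert(training_values: list[float], live_values: list[float],
                    z_threshold: float = 3.0) -> bool:
        """Flag drift if the live mean sits too many training standard deviations away."""
        train_mean = mean(training_values)
        train_sd = stdev(training_values)
        if train_sd == 0:
            return mean(live_values) != train_mean
        z = abs(mean(live_values) - train_mean) / train_sd
        return z > z_threshold

    training = [52.0, 48.5, 50.2, 49.9, 51.3, 47.8]   # data the model was trained on
    live = [68.4, 71.0, 69.5, 70.2]                   # data the deployed model now sees
    if drift_alert(training, live):
        print("Input distribution has drifted - escalate for review and retesting.")

The point is not the particular statistic used but that drift is detectable only if the business keeps monitoring the tech after go-live, rather than relying on pre-deployment testing alone.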

How do you manage these risks?

The emerging industry view is that many of these risks can be managed effectively by uplifting existing frameworks (as opposed to building separate and adjacent ones).

However, new technology (especially generative AI and machine learning) requires specific and bespoke governance and risk management assessments having regard to the variety of new use cases and the rapidly evolving functionality and features of the new tech.

The initial challenge is to understand the new technology and the risks inherent in its new functionality and features, and how those risks manifest in different use cases and from using different external and internal data sources.

The ongoing and more difficult challenge is that the new technology requires different software tools and platforms to do the testing, different continuous monitoring software to track the ongoing performance of the tech, and subject matter experts (as opposed to generalists) to run them.

Good governance, the right expertise, robust processes, fit-for-purpose testing tools and proper documentation of all those activities are key to effective governance and risk management of the new technology and use cases.

Looking through the legal lens

When assessing the uses and impact of new technology, it’s important to remember that non-compliance with legal obligations is what will land you in court.  So, remember to consider your legal obligations when reviewing and uplifting your governance, risk management and compliance frameworks and the associated processes that govern your deployment of new technology.

It is also important to consider the technology-related risks that reside in all the links in your technology supply chain, from the original developers of the tech to the downstream entities that train, test, iterate and ultimately deploy the tech in your business. In this regard, it is critical to ensure that there are robust technology contracts in place and strong governance oversight of your supply chain.

Regulatory perspectives

So, how are the regulators supervising the application of this rapidly changing technology?

In the financial services industry, APRA and ASIC focus on system-wide stability and the safety of consumers and investors, respectively.

APRA’s guidance to its regulated entities (banks, insurers and super funds) is to tread carefully when using new advanced AI tech and to conduct due diligence, appropriate monitoring and oversight. APRA already has prudential standards governing technology-related risks and is mandating an industry-wide uplift in operational risk management and resilience through the implementation of new prudential standards.

ASIC has recently reiterated that all participants in the financial system have a duty to balance innovation with the responsible, safe and ethical use of emerging technologies and is closely monitoring how the development and application of AI is affecting the safety and integrity of the financial system.  ASIC also has a successful track record of actively pursuing and prosecuting players that engage in misleading and deceptive conduct, including in the use of new technology.

Law reforms

Globally, the nature of AI regulation varies across jurisdictions from a risk-based categorisation of AI uses with calibrated legislative controls for high-risk applications to a principles-based approach empowering sector-specific regulators.

In Australia, new technology, including AI, is regulated under several existing laws and regulatory requirements as outlined above.

In addition to encouraging industry usage of voluntary ethical frameworks (such as Australia’s AI Ethics Framework), the Australian Government is also undertaking industry consultation on whether we need new laws that minimise the risk of AI-facilitated harms before they occur and to ensure that there is an adequate response to harms after they occur.

In this regard, the Australian Government is working on developing guidelines and frameworks to ensure that AI is used ethically and responsibly.  This initiative is expected to result in new laws focusing on safeguarding data, ensuring transparency in algorithms and AI systems and establishing accountability for the application of new technology.   This effort encompasses not only dedicated AI laws but also broader legislation concerning data and privacy (such as the Security of Critical Infrastructure Act and Privacy Act).  So, in short, more regulation is coming.

Concluding comments

There is no doubt that the pace of innovation is accelerating and so is the excitement about potential use cases, and the trepidation about the risks inherent in the use of the new technology.  However, the ultimate test that will keep you out of the courts is whether the new tech and the way it is used complies with your legal obligations.

Accordingly, and as a reminder, it is important to undertake that legal testing throughout each stage of the lifecycle of the new technology, including through the design, testing, implementation and monitoring stages.

We also have practical experience in using AI through our “Argos” platform, which helps our clients track, understand and communicate the impact of regulatory change, producing same-day regulatory updates as well as analysis by our Financial Services Team at Mills Oakley.

As you can see, we are passionate about this topic and are here to support you in navigating the legal issues and managing these technology-related risks.

For further information, please do not hesitate to contact us.
