ASIC Report 798, Beware the gap: Governance arrangements in the face of AI innovation, identified the most common uses of AI for insurance claims as:
- Supporting the claims process: Claims triaging, decision engines to support claims staff, document indexation, identifying claims for cost recovery; and
- Automating a component of the claims decisioning process, while humans remain responsible for the overall claims decision.
and emerging uses as:
- The use of generative AI and natural language processing techniques to extract and summarise key information from claims, emails and other key documents.
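To make the extraction and summarisation use case concrete, here is a minimal, rules-based sketch in Python. It is an illustration only: the `ClaimExtract` fields, the regex patterns and the keyword list are assumptions, not drawn from REP 798 or any insurer's system, and a production pipeline would use governed NLP or generative AI models subject to the obligations discussed below.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Illustrative only: field names, patterns and keywords are assumptions,
# not drawn from REP 798 or any insurer's actual system.

@dataclass
class ClaimExtract:
    claim_number: Optional[str]
    incident_date: Optional[str]
    amount_claimed: Optional[str]
    summary: str

def extract_claim_details(text: str) -> ClaimExtract:
    """Pull key fields from claim correspondence and build a short extractive summary."""
    claim_no = re.search(r"\bCLM[-/]?\d{6,}\b", text, re.IGNORECASE)
    date = re.search(r"\b\d{1,2}/\d{1,2}/\d{4}\b", text)
    amount = re.search(r"\$\s?[\d,]+(?:\.\d{2})?", text)
    # Naive extractive "summary": keep sentences mentioning loss-related terms.
    keywords = ("damage", "loss", "flood", "storm", "injury", "repair")
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    summary = " ".join(s for s in sentences
                       if any(k in s.lower() for k in keywords))
    if not summary and sentences:
        summary = sentences[0]
    return ClaimExtract(
        claim_number=claim_no.group(0) if claim_no else None,
        incident_date=date.group(0) if date else None,
        amount_claimed=amount.group(0) if amount else None,
        summary=summary[:500],
    )

# Example: extract_claim_details("Claim CLM-123456 lodged 03/01/2025. "
#                                "Storm damage to roof; repair quote $8,200.")
```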
Financial services laws are technology neutral; therefore, when claims handling and settling services are provided using AI, the general obligation to provide those services ‘efficiently, honestly and fairly’ remains.
Providing claims handling and settling efficiently, honestly and fairly
ASIC Information Sheet 253 (INFO 253) provides guidance on handling and settling claims efficiently, honestly and fairly.
To satisfy this obligation, you will generally need to handle and settle insurance claims:
- in a timely way;
- in the least onerous and intrusive way possible;
- fairly and transparently; and
- in a way that supports consumers, particularly those experiencing vulnerability or financial hardship.
Australia’s AI Ethics Principles
ASIC supports the incorporation of the eight Australian AI Ethics Principles into AI policies and procedures, and licensees should apply them when adopting AI in claims processing.
The eight AI Ethics Principles are:
- Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment.
- Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals.
- Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
- Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
- Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
- Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
- Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
- Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.
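As a rough illustration of how the transparency, contestability and accountability principles might be operationalised in a claims system, the Python sketch below records which model contributed to an outcome and who signed it off, and generates a plain-language disclosure a claimant could act on. The schema and wording are assumptions for illustration, not a prescribed or regulator-endorsed format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hedged sketch: one possible audit record for AI-assisted claim decisions.
# All field names are illustrative assumptions, not a mandated schema.

@dataclass
class AIDecisionRecord:
    claim_id: str
    model_name: str                 # which system produced the output (accountability)
    model_version: str              # exact version, so the outcome can be reproduced
    inputs_summary: str             # what data the model saw (transparency)
    recommendation: str             # the model's output
    human_reviewer: Optional[str]   # who signed off, if anyone (accountability)
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def disclosure_text(self) -> str:
        """Plain-language notice supporting transparency and contestability."""
        reviewed = (f"reviewed by {self.human_reviewer}"
                    if self.human_reviewer else "not reviewed by a person")
        return (f"Claim {self.claim_id}: an AI system ({self.model_name} "
                f"v{self.model_version}) contributed to this outcome and was "
                f"{reviewed}. You may request a review of this decision.")
```

Capturing the model version and any human reviewer at decision time is what later makes an outcome explainable, attributable and contestable.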
Licensees must consider their existing regulatory obligations
What licensees need to do to comply with their existing regulatory obligations when using AI depends on the nature, scale and complexity of their business. It also depends on the strength of their existing risk management and governance practices. This means there is no one-size-fits-all approach for the responsible use of AI. (ASIC REP 798)
ASIC provides the following examples in REP 798:
- Licensees must do all things necessary to ensure that financial services or credit services are provided in a way that meets all of the elements of ‘efficiently, honestly and fairly’. Licensees should consider how their AI use may impact their ability to do so; for example, if AI models bring risks of unfairly biased or discriminatory treatment of consumers, or if the licensees are not able to explain AI outcomes or decisions.
- Licensees must not engage in unconscionable conduct. Licensees must ensure that their AI use does not result in acting unconscionably towards consumers. Licensees must ensure that AI is not used to unfairly exploit consumer vulnerabilities or behavioural biases. It is also critical that licensees mitigate and manage the risks of unfair bias and discrimination of vulnerable consumers from AI use.
- Licensees must not make false or misleading representations. Licensees must ensure that the representations they make about their AI use, model performance and outputs are consistent with how they operate. If licensees choose to rely on AI-generated representations when supplying or promoting financial services, they must ensure that those representations are not false or misleading.
- Licensees should have measures for complying with their obligations, including their general obligations, and these should be documented, implemented, monitored and regularly reviewed. If the use of AI poses new risks or challenges to complying with obligations, licensees should identify and update relevant compliance measures.
- Licensees must have adequate technological and human resources. Licensees should consider whether there are staff with the skills and experience to understand the AI used, and who can review AI-generated outputs (see the human-review sketch after this list). Licensees should have sufficient technological resources to maintain data integrity, protect confidential information, meet current and anticipated future operational needs (including in relation to system capacity), and comply with all legal obligations. APRA-regulated insurers must meet the requirements of Prudential Standards CPS 220, CPS 230 and CPS 234. In addition, a recent amendment to the Privacy Act requires that, where personal information is used in automated decisions (even if signed off by a human), certain information must be disclosed in the entity’s Privacy Policy, including how the claimant’s or third party’s personal information is used in the computer program. This amendment takes effect on 10 December 2026.
- Licensees must have adequate risk management systems. Licensees should consider how the use of AI changes their risk profile, whether this requires changes to their risk management frameworks, and whether they are still meeting their risk management obligations in light of their use of AI. APRA-regulated insurers must meet the requirements of Prudential Standards CPS 220 and CPS 230.
- Licensees remain responsible for outsourced functions, and they should have measures in place to choose suitable service providers, monitor their performance, and deal appropriately with any actions by such providers. Licensees should consider how these expectations apply if they use third-party providers at any stage in the AI lifecycle. CPS 231 (currently in force) and, from 1 July 2025, CPS 230 (supported by CPG 230) set out this requirement for material service providers providing claims processing and other claims management services to general insurers.
- Company directors and officers must discharge their duties with a reasonable degree of care and diligence. These duties extend to the adoption, deployment and use of AI. Directors and officers should be aware of the use of AI within their companies, the extent to which they rely on AI-generated information to discharge their duties and the reasonably foreseeable associated risks.
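As an illustration of the human-resourcing point above (staff who can review AI-generated outputs), the sketch below routes an AI triage recommendation to a human unless it is low-stakes and high-confidence. The thresholds and routing rules are assumptions a licensee would calibrate within its own risk management framework; in either path a human remains responsible for the overall claims decision.

```python
from enum import Enum

# Illustrative human-in-the-loop gate. Thresholds and rules are assumptions,
# not requirements from REP 798, CPS 220 or CPS 230.

class Route(Enum):
    HUMAN_REVIEW = "human_review"
    FAST_TRACK = "fast_track"

def route_triage(ai_confidence: float,
                 claim_amount: float,
                 vulnerable_customer: bool,
                 confidence_threshold: float = 0.9,
                 amount_threshold: float = 10_000.0) -> Route:
    """Decide how much human scrutiny an AI triage recommendation gets first."""
    if vulnerable_customer:                    # support consumers experiencing vulnerability
        return Route.HUMAN_REVIEW
    if claim_amount >= amount_threshold:       # high-stakes claims always go to a person
        return Route.HUMAN_REVIEW
    if ai_confidence < confidence_threshold:   # low model confidence -> human review
        return Route.HUMAN_REVIEW
    return Route.FAST_TRACK                    # high-confidence, low-stakes recommendation
```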
The use of AI will benefit insureds when making a claim; however, proceed cautiously…
AI can clearly help address a number of the claims issues identified in the recent Parliamentary flood inquiry and the GI Code of Practice review reports.
Systemic issues such as claims delays stand to benefit from the use of AI in the claims process; however, the use of AI must continue to comply with financial services laws and be provided in an ethical manner.
It is clear that APRA and ASIC will continue to monitor the use of AI in claims very closely.
APRA will continue to monitor insurers’ compliance with their obligations under the Financial Accountability Regime (FAR), CPS 220, CPS 230 and CPS 234 (especially in the context of outsourcing claims to material service providers).
In REP 798, ASIC states:
We remain focused on advancing digital and data resilience and safety, targeting technology-enabled misconduct and the poor use of AI. Understanding and responding to the use of AI across the entities we regulate is a key priority for ASIC.
We will:
- continue to monitor how our regulated population uses AI, and the adequacy of their risk management and governance processes;
- contribute to the Australian Government’s development of AI-specific regulation;
- engage and collaborate with domestic and international regulator counterparts; and
- where necessary and appropriate, take enforcement action if licensees’ use of AI results in breaches of their obligations.