
Supporting safe and responsible AI use in healthcare

Georgie Haysom, BSc, LLB (Hons), LLM (Bioethics), GAICD, General Manager, Advocacy, Education and Research, Avant

Sunday, 15 December 2024

With artificial intelligence (AI) technology progressing rapidly, new advances in how it is being used seem to emerge almost daily. In healthcare, while AI has many potential benefits, it can also create medico-legal risk.

Listening to our members

Over the past 18 months we have received an increasing number of requests from members for advice about the potential medico-legal risks of using AI, including general purpose AI such as ChatGPT and AI scribes for clinical notetaking. A recent survey of 600 members found that three in four respondents rated their knowledge of AI as fair to poor. While only a small number (around 10%) were using AI, around 40% of respondents indicated they were likely to use an AI scribe in the future.

For AI scribes, assessing their quality, safety and performance isn’t simple. Members have told us they want some reassurance and guidance about which AI scribes are safe to use. However, AI scribes are currently not subject to any regulatory oversight and doctors must do their own due diligence.

Responsibility currently a grey area

Another area of concern is the uncertainty about who is legally responsible if a patient is harmed by the use of AI. The complexity of AI algorithms and machine learning makes it hard to trace the decision-making process, and this creates difficulties in determining who is accountable for errors or adverse outcomes.

Broad indemnity clauses in some AI provider contracts could unfairly shift responsibility onto the doctors using the AI tool, when they are not responsible for controlling the risk.

Evolving regulatory environment

As AI continues to progress, it is only natural that regulatory frameworks will evolve alongside the technology to support opportunities for innovation and respond to the distinct risks.

The Australian Government has proposed mandatory guardrails for AI systems in high-risk settings, including healthcare. These would require developers and deployers of AI to take steps to ensure their products are safe.

At the same time, health laws are being reviewed, with proposals to clarify and strengthen legislation and regulation for AI in healthcare settings. Complementary reviews of consumer laws and privacy laws are also underway.

Avant’s position and advocacy

We believe mandatory minimum standards are required for AI tools used in healthcare that fall outside the Therapeutic Goods Administration’s regulatory framework. These should cover a range of issues including privacy and security, risk management and insurance. The standards should extend to any AI tools that suggest clinical findings or make recommendations that, if inaccurate or not acted upon, could cause adverse patient outcomes.

Legislative and regulatory obligations should be placed on the entities across the AI supply chain and throughout the AI lifecycle that can most effectively prevent harms before people interact with the AI tool. Where harm does occur, there need to be mechanisms for appropriately determining liability and obtaining redress. It is not appropriate that healthcare professionals bear sole liability, particularly where they do not control the risk.

Ongoing consultation will be essential to ensure the appropriate regulation of AI in healthcare and should involve all relevant stakeholders, including insurers.

Avant is actively engaging with government about these proposals to minimise the medico-legal risks for members and support effective public policy development on AI in healthcare. We have made submissions to the various consultations.

Our focus is on advocating for clear frameworks and guidelines to address the complexities of AI in healthcare and manage the medico-legal risks. We want to ensure that responsibility and liability are clear and properly managed. Both doctors and patients need to be adequately protected in the case of patient harm.

Our CPD courses for Avant members

Tick off some CPD hours and learn more with our in-depth eLearning courses, free for Avant members. Our courses span educational activities, reviewing performance and measuring outcomes.


Need support?

Dealing with a medico-legal issue can be stressful. Find out how Avant and other organisations can help.
