There have been a number of changes to Australia’s privacy laws recently and businesses need to be aware of how these changes will impact them in the event of a privacy or data breach, including through the use of AI programs.
Earlier privacy reforms
One of the most significant changes to the Privacy Act was the introduction of a mandatory data breach reporting scheme in 2018. We’ve written previously about the introduction of the scheme, including when a data breach will be considered a notifiable data breach incident, and what businesses need to do to comply with the scheme under the Privacy Act. Our earlier alert can be found here.
More recently, in November 2022, further amendments were made to the Privacy Act that introduced harsher penalties of up to $50M for serious or repeated offences, increased protections against the misuse of Australians’ personal data outside Australia, and enhanced information sharing and enforcement powers for regulators. The changes were made following the high-profile data breaches at Optus and Medibank, and the Australian Information Commissioner has since filed an application against Medibank in relation to its data breach. Our earlier alert on these changes and the application against Medibank can be found here.
Proposed privacy reforms and reviews
The Government released its Privacy Act Review Report in February 2023, and in September 2023, confirmed its agreement, in principle, to advancing amendments to privacy laws in Australia by adopting over time 116 proposals in the report.
In September 2024, the Government introduced a Bill into Parliament to amend the Privacy Act in line with some of the recommendations from the Review Report, including:
- A new statutory tort for serious invasions of privacy;
- Development of a Children’s Online Privacy Code to strengthen protections from online harms;
- Streamlined information sharing processes during emergencies or in the event of a data breach;
- Increased enforcement powers for the Australian Information Commissioner including new civil penalties;
- Improved transparency around automated decision making and security of personal information;
- A ‘white list’ prescribing countries with protections of personal information similar to those in the Privacy Act, to assist organisations when disclosing information to overseas organisations.
The Bill also contains a new criminal offence of ‘doxxing’, being the targeted release of personal information online in a way that would be menacing, harassing or malicious. It carries a penalty of up to 6 years’ imprisonment, or up to 7 years where a person or group is targeted on the basis of protected characteristics such as race, religion, sex, sexual orientation, gender identity, disability, nationality or ethnic origin.
On 29 November 2024, the reforms, with 106 of the 116 recommended reforms included, passed both Houses of Parliament, and on 10 December 2024 the Bill received Royal Assent. The amended law is now in effect.
The consequence of the reforms is that organisations bound by the Privacy Act should:
- Identify any practices involving automated decisions so privacy policies can be updated;
- Review and tighten the collection and use of personal information, as the statutory tort for serious invasions of privacy likely increases litigation risk because no loss or damage needs to be proven, an element required in most other jurisdictions worldwide; and
- Review and update Privacy Policies and ensure appropriate training is implemented.

Any business interested in the impact these amendments will have on its organisation, or in how its Privacy Policies and Procedures should be updated, can contact our Intellectual Property & Technology Team.
Guidance from OAIC around privacy and AI
In October 2024, the Office of the Australian Information Commissioner (OAIC) released important guidance on the intersection between privacy and the use of commercially available AI products in Australia. The impact of AI on individuals’ privacy has long been a concern globally, and the ever-increasing availability and usage of AI has prompted the new guidance.
Essentially, the guidance is aimed at organisations using or developing AI systems that were built with, or that collect, store, use or disclose, personal information, including sensitive information (such as health data). Most people are familiar with generative AI products such as ChatGPT, DALL-E, Microsoft Copilot or Meta’s AI tools, and the various tasks they can perform, such as creating content, analysing data or summarising information, as well as other forms of AI such as facial or voice recognition, chatbots or digital assistants.
There are many different ways that personal information may be at risk for organisations using AI systems. Risks arise where confidential client data or information is input into a non-secure AI system, becomes part of its training data, and can then appear in content generated for completely unrelated users of the same AI system. This includes information that is inferred, incorrect or artificially generated, such as deepfake images that disclose sensitive or personal information about an individual. As with everything online, once the information is out there it is impossible to make it secure again or to control what someone else does with it. Where information has been disclosed through the use of AI systems without consent, organisations can be liable for significant penalties under Australia’s updated privacy laws.
The top five takeaways from the OAIC’s recent guidance for organisations using AI are:
- privacy obligations apply to any personal information entered into an AI program as well as any content generated that contains personal information;
- businesses should update their privacy policies and notifications to clearly and transparently reflect their use of AI;
- the generation or inference of personal information, including images, using AI constitutes a collection of personal information, and organisations must comply with the Privacy Act;
- where personal information is entered into an AI program, organisations must only use or disclose the information for the primary purpose for which it was collected; and
- as a matter of best practice, organisations should not enter personal information, especially sensitive information, into publicly available generative AI programs.
Further information about the OAIC’s recent guidance can be found here.
Contact us
Companies that use AI systems and programs need to be vigilant and transparent with their clients regarding their use of AI and their collection and handling of personal information, or they risk significant penalties following the recent changes to Australia’s Privacy Act. It is also important for companies to regularly review and update their privacy and data collection policies and notifications to ensure ongoing compliance with the Privacy Act as the new changes are implemented.
If you have any concerns about your organisation’s use or intended use of AI and the impacts and risks to privacy, please reach out to our Intellectual Property + Technology team for advice tailored to your situation.
For further information please contact Ben Gouldson.