education.vic.gov.au

Policy last updated

18 June 2024

Scope

  • Schools

Date:
June 2024

Policy

Policy

This policy sets out requirements for schools that choose to explore the use of generative artificial intelligence (AI) tools. It provides advice for school leaders and teachers around how to use generative AI tools in a safe and responsible way, as well as advice on how to promote academic integrity in recognition of potential student use of generative AI tools.

Summary

  • Schools must:
    • obtain opt-in consent from parents and carers before using any generative AI tool that requires personal information beyond provision of a student’s school email address and creation of a password for registration
    • protect student data and privacy by restricting the uploading of personally identifiable information into generative AI tools or software that integrates generative AI tools
    • comply with any terms set out by the provider of the tool.
  • Schools must direct staff and students to not use generative AI tools to:
    • upload or generate media depictions of students, staff or parents
    • generate artefacts that mimic a cultural tradition in a way that is disrespectful or offensive.
  • Schools must also direct staff to:
    • not use generative AI tools to communicate with students and parents in ways that undermine authentic learning relationships or replace the unique voice and professional judgement of teachers and school leaders.

Details

On 1 December 2023, Education ministers released the Australian Framework for Generative Artificial Intelligence in SchoolsExternal Link (the National Framework). The aim of the National Framework is to provide guidance on understanding, using and responding to generative AI. It includes 6 principles and 25 guiding statements that define what safe, ethical, and responsible use of generative AI should look like in Australian schools. This policy, guidance and resources are designed to complement the National Framework.

Schools must comply with the requirements detailed in this policy when using generative AI in all contexts.

Schools may use generative AI tools if use:

  • complies with the requirements of this policy and considers the guidance outlined in the Guidance tab in relation to:
    • protecting privacy and personal data
    • appropriate use of generative AI tools
    • promoting academic integrity
  • complies with other relevant department policies that are not specific to generative AI:
  • complies with any terms set out by the provider of the tool, which may include an age limit
  • is accompanied by reasonable steps to identify, understand and appropriately manage risks.

Prior to implementation of any generative AI tool, schools must ensure that the tool can be implemented in an accessible and inclusive way. This includes meeting obligations under the Equal Opportunity Act 2010 (Vic), the Disability Discrimination Act 1992 (Cth) and the Disability Standards for Education 2005 (Cth) to make reasonable adjustments to accommodate staff or students with disability.

Prior to implementation of any generative AI tool, schools are encouraged to:

Protecting privacy and personal data

Schools are expected to take reasonable steps to ensure the protection of data entered into generative AI tools or software that integrates generative AI tools (including user prompts). They should do this by prioritising the use of tools that:

  1. do not share this data with third parties
  2. do not use this data to train a generative AI model
  3. do not save or store this data for future use by the provider of the tool.

Generative AI is a ‘new and emerging technology’ as defined in the Privacy and Information Sharing policy. This means that schools must obtain opt-in consent from parents and carers before using any generative AI tool that requires personal information beyond provision of a student’s school email address and creation of a password for registration (for example, name or phone number). Staff and students also must not use the password from their school email when registering for a generative AI tool.

In addition, if personal information is not required for registration, schools should provide the opportunity for students to opt-out and are encouraged to communicate with parents and carers about how the generative AI tool will be used. Opt-out consent recognises known risks of generative AI technologies (for example, the creation of biased or inaccurate content) as well as general risks associated with new technologies, which often rapidly evolve. Links to collection notice templates are available in the Resources tab.

When using any generative AI tool, schools must direct staff and students to not load any personal information about students or staff onto the tool (for example, student names, reports, personal histories and contact details). Staff must also be directed to not enter any information about the school that could be sensitive (for example, student assessment data and student attendance records). This is because content may be used and reused by the platform and its users, which may constitute a privacy breach.

For further information, schools may refer to the February 2024 public statement made by the Office of the Victorian Information CommissionerExternal Link .

A one-page overview of the steps schools must take to help protect personal data when using generative AI tools is available in the Resources tab.

Appropriate use of generative AI tools

Where a school chooses to use generative AI tools, staff and students must be directed to:

  • not upload media including depictions of students, staff or parents (for example, photos, audio, video), or generate images or other media in the likeness of these persons
  • not generate artefacts that mimic a cultural tradition in a way that is disrespectful or offensive (for example, images mimicking Koorie artwork).

Staff must also be directed to not use generative AI tools to communicate with students and parents in ways that undermine authentic learning relationships or replace the unique voice and professional judgement of teachers and school leaders. This includes not using generative AI tools to directly:

  • communicate with parents or students
  • make judgements about student learning achievement or progress
  • write student reports for parents or carers.

During implementation of any generative AI tool, schools should:

  • ensure the use of generative AI tools is disclosed when tools have an impact on others – disclosure can be given to teachers, staff, students, parents and carers
  • ensure monitoring of benefits and risks
  • consider de-implementing any tool if benefits are not realised or risks are not being adequately managed.

Promoting academic integrity

Schools are strongly encouraged to proactively identify and manage risks around academic integrity and generative AI. This can include:

  • setting expectations and building a culture of academic integrity
  • designing assessments with consideration of generative AI tools
  • identifying and responding to the inappropriate use of generative AI tools in assessments.

Advice for schools to help manage academic integrity risks is outlined in the Guidance tab.

Definitions

Generative artificial intelligence (AI)
Generative AI is a type of computer-based model that can generate new content or modify existing content in response to user prompts. The inputs and outputs of generative AI tools can include text, images, audio, computer code, and other types of data. Refer to the Guidance tab for more information about generative AI.

Personal information
Personal information is recorded information or opinion, whether true or not, about a person whose identity is apparent, or can reasonably be ascertained, from the information. The information or opinion can be recorded in any form. A person's name, address, phone number and date of birth (age) are all examples of personal information.


Guidance

Guidance

This guidance contains the following chapters:

  • Introducing generative AI tools
  • Protecting privacy and personal data
  • Appropriate use of generative AI tools
  • Promoting academic integrity

Introducing generative AI tools

Introducing generative AI tools

What is generative AI?

Generative artificial intelligence (AI) is a type of computer-based model that can generate new content or modify existing content in response to user prompts. The inputs and outputs of generative AI tools can include text, images, audio, computer code, and other types of data. Generative AI tools use machine learning, a process where tools are trained to recognise complex patterns in large datasets. This enables them to produce outputs that can closely resemble human-generated content without explicit programming.

Common types of generative AI tools and functions

Common types of generative AI tools and functions are outlined below, noting that the examples are provided for illustrative purposes only and are not specifically recommended or endorsed by the department. Generative AI tools and functions can be provided through standalone products or embedded into other software and applications. Some generative AI tools combine multiple functions into one product.

  • Text generation – specialising in producing human-like text (tools that do this are sometimes known as large language models (LLMs)
  • Image generation – able to transform text into images or create images
  • Video generation – able to create videos or edit existing videos
  • Audio generation – able to create audio files, such as music, speech, or sound effects
  • Code generation – can produce computer programming code, such as Python, Java, or C++
  • Design generation – can produce layouts, visual compositions, and graphical elements for a wide range of design projects

Protecting privacy and personal data

Protecting privacy and personal data

Privacy and data risks

Generative artificial intelligence (AI) tools pose unique privacy and data protection risks that are important for school communities to be aware of. For example:

  • generative AI providers may request or require student data that includes personal, sensitive or health information to provide access to their generative AI tools
  • students or other members of the school community may upload personal, sensitive or health information belonging to themselves or others without being aware of the associated privacy and data protection risks
  • some generative AI tools have the capacity to match data inputs with other information about individuals, which increases privacy risk through building individual data profiles
  • the safety and storage processes of data inputs into generative AI tools may be unclear
  • it may be difficult or impossible for a generative AI model to forget personal, sensitive or health information once it has been uploaded
  • inputs copied from the internet or from untrustworthy sources, when pasted as a prompt into a generative AI tool, may cause the tool to behave in unexpected or unsafe ways (for example, an image copied from the internet and pasted into a multimodal generative AI tool, may have invisible text embedded within it that the user was not aware of). This is sometimes referred to as a ‘prompt injection’
  • content generated by generative AI about a person can constitute a new collection of personal information about that individual, which can be seen as unreasonably intrusive and therefore a breach of privacy.

Suggested actions: Protecting privacy and personal data

School leaders can:

  • select tools that limit the unnecessary collection or processing of personal, sensitive and health information (either by default or by allowing schools to adjust the settings of the tool)
  • adjust the settings of generative AI tools used by students and staff to protect their privacy
  • monitor the use of tools to ensure that personal information is not uploaded in tools that share this data with third parties, use this data to train a generative AI model, or save or store this data for future use by the provider of the tool.

Teachers can:

  • only collect or upload data necessary for the task at hand, de-identify data before uploading it, use strong encryption to protect data, and only give data access to authorised personnel
  • direct students to avoid uploading personal, sensitive and health information to generative AI tools
  • guide students to be careful when using prompts based on inputs copied and pasted from the internet or from untrustworthy sources.

Appropriate use of generative AI tools

Appropriate use of generative AI tools

Teaching and learning about generative AI

Schools are encouraged to help prepare students to understand and use generative artificial intelligence (AI) tools, by supporting them to learn:

  • what generative AI is, how it works, and its opportunities and risks
  • how to use generative AI tools in a safe and responsible way, while avoiding potential harms
  • how to critically analyse and evaluate the functioning and outputs of generative AI tools.

Schools are encouraged to closely supervise student use of digital technology in the classroom, including generative AI tools, as detailed in the Digital Technologies – Responsible Use policy.

Setting up generative AI tools in a way that is safe and effective

In addition to the requirements outlined in the Policy tab, schools are strongly encouraged to configure generative AI tools settings to protect the privacy of users and restrict harmful content. For example, parameters that may be adjusted include:

  • adjusting the possible prompts that can be inputted into the tool
  • adjusting the possible length of any outputs of the tool
  • reducing the default ‘temperature value’ of the tool, so that it provides less variable, more focused outputs
  • adjusting privacy settings so that student inputs (prompts) are not disclosed to others or used to train the generative AI model
  • setting up the tool with customised standing instructions that are relevant to local context and that will apply to all responses provided by the tool (schools are encouraged to include in these instructions a requirement for the tool to not breach copyright in its responses).

Teaching students about prompting generative AI tools

Generative AI systems are typically operated by users entering a ‘prompt’ to guide the underlying model’s output. Prompts can be text-based, and can also include voice, images and other inputs. A user’s prompt may not initially produce the desired output from a generative AI tool, so in many cases prompt refinement is required. This is known as ‘prompt engineering’.

Where a school chooses to use generative AI tools, staff and/or students should be supported to reflect on and create effective prompts. They can do this by:

  • being specific about what kind of content to produce, including length, genre, style, what tone to use, and what audience to create for
  • asking the tool to act as a type of person or thing, such as what professional expertise or age bracket to assume, what species, object or concept to act as, or what historical figure’s voice to use
  • stipulating the format, such as whether to produce an essay, a poem, or a type of computer code
  • setting parameters, such as what to include or not include in the output
  • providing examples to demonstrate a feature of the desired output, such as tone, structure, or subject matter
  • using an iterative process, such as beginning with one prompt, then tweaking it with each subsequent prompt to improve the output, or change certain aspects of the output
  • asking the tool to help create a more effective prompt, by seeking more information from the user to refine desired output.

Teaching students to critically analyse and evaluate generative AI outputs

A generative AI tool’s outputs are based on the data it is trained on, which can lead it to reproduce inaccuracies or biases inherent in the training data. This cannot always be prevented using effective prompting, which makes it essential that generative AI outputs are critically analysed and evaluated.

Teachers can support students to critically analyse and evaluate generative AI outputs by teaching them to compare and verify outputs with reliable alternative sources of information. This can help to identify inconsistencies and biases in the generative AI output, while also providing an opportunity for reflection on why these inconsistencies and biases exist. It can also provide opportunities for students to learn about what makes a source of evidence reliable, and how they can evaluate different sources for their reliability.

Teachers can support students to consider the following questions:

  • Is this output relevant, applicable and fit-for-purpose?
  • Is this output true, accurate and current?
  • Is this output logical, coherent and internally consistent?
  • Is this output free from discriminatory or harmful content?
  • How can this output be improved with human modification?
Suggested actions: Teaching and learning about generative AI

School leaders can:

  • support teachers and staff to engage with professional learning regarding generative AI tools, how they work, the potential opportunities as well as limitations and biases
  • model the effective, safe and responsible use of generative AI and ensure that privacy, copyright and other legal obligations are consistently met.

Teachers can:

  • base any use of generative AI tools on evidence-based learning and wellbeing practices, as illustrated in the Victorian Teaching and Learning Model
  • engage with professional learning regarding generative AI tools, how they work, the potential opportunities as well as limitations and biases
  • initiate classroom discussions with students about generative AI tools, how they work, and how they can be used safely and appropriately
  • as student usage of generative AI tools increases, extend student learning about how generative AI works
  • model the safe and responsible use of generative AI.

Supporting safe and respectful use

Some of the inappropriate or harmful uses of generative AI tools include:

  • misrepresentation of information (for example, through the creation and dissemination of information that is wholly or partly inaccurate, that reduces a person or group’s ability to accurately understand something, or that is intentionally or unintentionally designed to cause harm to others)
  • harassment, including cyberbullying (for example, through the creation and dissemination of ‘deep fakes’)
  • privacy violations (for example, by uploading personal, sensitive or health information about another person into a generative AI tool and/or using that information to generate new content about that person)
  • cultural appropriation (for example, by creating new artefacts influenced by or mimicking a cultural tradition in a way that is disrespectful or offensive)
  • discrimination (for example, using generative AI tools to produce and disseminate biased content that reinforces harmful stereotypes or unfairly excludes individuals, communities or groups)
  • entering information that may breach another person’s intellectual property rights.

More detail: Misrepresentation of information

Generative AI tools can misrepresent information or allow others to do so, by making it easier to generate content that is untrue or harmful while seeming true or convincing. Misrepresentation of information can be both intentional (disinformation) and unintentional (misinformation). It is important that students are guided not to misrepresent information or disseminate disinformation or misinformation using generative AI tools. Below are 2 examples of misrepresentation of information that schools can consider protecting students from and guiding students not to produce:

  • Deep fakes: Generative AI tools can be used to create ‘deep fakes’, which are pieces of fake multimedia content which portray a person or aspects of a person saying or doing something they did not in fact say or do. When using generative AI tools, it is important that students are taught to build empathy and understand the impacts and consequences of creating such content. Deep fakes can cause serious psychological harm in addition to spreading disinformation, and it is important that this issue is given weight in schools and handled with care.
  • Biased content: The outputs of generative AI tools can be inaccurate or biased, even when expressed in confident language. Generative AI tools can also respond to the preferences of users, which may result in the adoption of dominant ideas and remove more diverse views, creating ‘echo chambers’ and reinforcing biases. When using generative AI tools, it is important to build compassion and critical thinking in students and support diversity.

Schools are encouraged to collaborate with students, parents and carers to establish protections to prevent the above uses, to protect the safety, wellbeing and inclusion of staff and students. Actions can include setting expectations and consequences for instances where expectations are not followed.

Suggested actions: Supporting safe and respectful use

School leaders can:

  • work with students, parents and carers to establish school-wide expectations on what appropriate use of generative AI tools looks like – this can be included in the school’s Acceptable Use Agreement (AUA)
  • consider inappropriate or harmful use of generative AI tools when reviewing the school’s Bullying Prevention policyExternal Link (staff login required).

Teachers can:

Demonstrating transparency and accountability

Human responsibility

Schools are strongly encouraged to ensure individuals who use generative AI tools have agency, judgement, and responsibility, and that outputs from a generative AI tool face human review prior to decisions being made. For example, it is important that the professional judgement of teachers remains central when assessing and evaluating student progress, sequencing student learning tasks, or providing feedback on student work.

It is also important that generative AI tools enhance rather than replace teacher and school voice. Authentic communication between teachers and students, and each school and its broader community, is crucial to building and maintaining trust.

Disclosure

Schools can build trust by disclosing the use of generative AI tools, especially when use may have an impact on others. Disclosure can be given to teachers, staff, students, parents and carers.

Example wording for simple disclosures may include:

  • 'This resource has been (wholly, substantially or partly) generated using generative artificial intelligence.'
  • 'This resource has been adapted from an output generated using generative artificial intelligence.'
  • 'This resource has been edited using generative artificial intelligence.'

A more detailed disclosure may include the following suggested format, which has been developed by Monash UniversityExternal Link :

I acknowledge the use of [insert AI system(s) and link] to [specific use of generative artificial intelligence]. The prompts used include [list of prompts]. The output from these prompts was used to [explain use].

Monitoring

Where a school chooses to use generative AI tools, they are encouraged to establish processes to monitor how well these tools are implemented and how much they impact learning and wellbeing outcomes. This can assist in determining how to change the tools’ parameters or change which tools are in use.

Alongside gathering evidence on impacts, schools are encouraged to consider the administrative and financial costs associated with the use of generative AI. Processes for monitoring can include staff, student, parent and carer surveys, as well as student performance data in subjects and year levels where generative AI tools are used.

Suggested actions: Demonstrating transparency and accountability

School leaders can:

  • provide school communities with easy-to-understand information about generative AI tools being used, why they are being used, and any risks associated with their use
  • communicate to all members of the school community their responsibilities and accountabilities when using generative AI tools
  • adopt disclosure approaches for instances where generative AI is used (for example, using the wording provided above)
  • appropriately acknowledge the use of generative AI tools in school administration in line with school disclosure policies
  • establish processes for keeping records of the use of generative AI tools
  • establish processes for monitoring the use of generative AI tools.

Teachers can:

  • exercise their own judgement when using generative AI tools and take responsibility for decision making where generative AI tools are used
  • appropriately acknowledge the use of generative AI tools in teaching practice in line with school disclosure approaches.

Promoting academic integrity

Promoting academic integrity

Generative artificial intelligence (AI) tools can provide the opportunity for students to develop higher order critical thinking, analytical and evaluation skills, and can create efficiencies which allow students to focus more of their time on the skill being assessed. However, these tools can also make it easier for students to undermine forms of assessment that are in use today.

Learning design approaches and processes for ensuring academic integrity need to be responsive to these opportunities and risks.

To support schools to operationalise the Australian Framework for Generative Artificial Intelligence in SchoolsExternal Link guiding statements on ‘learning design’ and ‘academic integrity’, this chapter is structured in 3 parts:

  • setting expectations and building a culture of academic integrity
  • designing assessments with consideration of generative AI tools
  • identifying and responding to the inappropriate use of generative AI tools in assessments.

Setting expectations and building a culture of academic integrity

Academic integrity refers to the expectation that students behave honestly and ethically in the completion and assessment of student work. Maintaining academic integrity ensures students are assessed fairly and teachers, students and parents/carers get an accurate understanding of student progress to help guide further teaching and learning.

Schools can support a culture of academic integrity by providing students with clear expectations about values, responsibilities and behaviours related to learning and assessment, and by responding to breaches of academic integrity in a consistent and proportionate way.

Some schools already have local academic integrity or honesty policies to support a whole-school culture of academic integrity or use the department’s acceptable use agreement templates to agree on behavioural expectations when engaging in digital learning activities. Schools’ existing acceptable use agreements can be updated to include the use of generative AI tools.

Schools’ expectations about how students can use generative AI with academic integrity can include the following:

  • definitions for academic integrity, cheating, and plagiarism
  • age-appropriate guidance on citing and referencing generative AI tools (where appropriate, schools can adopt the use of established referencing systems such as the American Psychological Association’s APA StyleExternal Link , or simplified approaches, such as outlined in the above guidance on disclosure)
  • providing examples of academic misconduct using generative AI tools
  • clearly outlining the consequences for academic misconduct using generative AI tools.

VCAA’s VCE and VPC Administrative Handbooks

While written for VCE, the following sections of VCAA’s VCE Administrative HandbookExternal Link can apply more broadly and provide examples for acceptable student behaviour: ‘Integrity of VCE school-based assessments’, ‘Rules for authentication of school-based assessment’, ‘Strategies for avoiding authentication problems’. The following sections of VCAA’s VPC Administrative HandbookExternal Link are also relevant: ‘Integrity of student work’, ‘Authentication’.

Suggested actions: Setting expectations and establishing a culture of academic integrity

School leaders can develop Acceptable Use Agreements (using DE templates) to ensure students are aware of expectations when engaging in digital learning activities.

Designing assessments with consideration of generative AI tools

Selecting the right approach for assessment

Teachers already have well established rules and practices in place to ensure that student assessments are tailored to their purpose and administrated fairly. For example, many assessments will include time-limits and rules for whether questions are provided beforehand, what notes and materials are allowed, and whether the assessment can be taken home for completion.

Similarly, teachers are encouraged to clearly establish whether and how generative AI should be incorporated into any given assessment, based on its purpose and context. This can reduce the scope for students to use generative AI tools uncritically and without acknowledgement.

Assessments such as diagnostic or high-stakes summative assessments are likely suited to approaches that prohibit or limit the use of generative AI tools. These approaches are most appropriate where the use of generative AI tools may prevent the accurate measurement of student understanding, or inhibit fair comparisons between students.

Assessments such as research projects and portfolios are likely suited to approaches that allow some use of generative AI, often with modifications, to ensure that the intended skills and knowledge are fairly assessed. In some cases, the use of generative AI tools may enhance student assessments by making them more efficient or focused on measuring the intended skills or knowledge. In this case, teachers may consider encouraging or even requiring use of generative AI tools.

Options for assessment design

Prohibit the use of generative AI where it will prevent teachers from gaining an accurate understanding of student learning – for example:

  • require that students complete work in class without access to a computer, or under supervision
  • require that students write in their own words and authenticate this by signing and acknowledging
  • require that students complete their work using a department or school-supplied online word processor or collaboration platform, so that their progress can be monitored in real time, including whether text has been copied from a generative AI tool into a document.

Limit certain uses of generative AI while permitting others – for example:

  • select assessments that cannot be fully completed by a generative AI tool, such as an oral presentation, but where research using generative AI tools is permitted.

Modify tasks to reduce the scope for students to use generative AI tools uncritically and without acknowledgement – for example:

  • require that students use or refer to specific sources when completing an assessment task
  • require that students submit different elements of a written task, including brainstorming, drafting, revising, and editing
  • ask students to demonstrate or apply knowledge of a topic in relation to their own personal experiences
  • use open-ended questions that require students to think critically and creatively and apply local context or information that a generative AI tool would not have access to.

Incorporate the optional use of generative AI tools – for example:

  • require that students explain the reasoning behind their answer, so that the teacher can assess whether students truly understand and can apply what they have written
  • require that students cite any use of generative AI tools, including (where relevant) the prompts entered and the outputs provided
  • allow students to discuss or reflect on their work (for example, a small group discussion, or a video reflection completed individually)
  • support students to critically analyse and evaluate the outputs of a generative AI tool, including asking them to refine and improve their prompt to produce higher quality outputs and/or asking them to cross-reference outputs with credible and verifiable sources.

Encourage the use of generative AI tools to support improved learning outcomes – for example:

  • ask students to brainstorm and refine generative AI prompts which can help them gather information on the set topic
  • encourage students to copy their draft assignments into a generative AI tool to ask for specific suggestions about how they can be improved.

Require, where the purpose of the assessment is to use the tools – for example:

  • that students use generative AI tools to assess their ability to craft inputs and generate outputs, or their ability to critically analyse and evaluate outputs.
Suggested actions: Designing assessments with consideration of generative AI tools

School leaders can:

  • review school assessment design approach with teaching staff
  • work with staff and the school community (for example, through professional learning communities) to define acceptable student use of generative AI tools in assessments.

Teachers can:

  • design assessments with consideration of the appropriate approach for whether and/or how generative AI tools can be used
  • communicate how generative AI tools can or cannot be used in assessments.

Identifying and responding to inappropriate use of generative AI tools in assessments

School leaders and teachers are encouraged to identify and respond to academic misconduct using generative AI tools in a consistent manner in line with any local academic integrity policy.

Examples of academic misconduct using generative AI tools can include:

  • plagiarism – students using generative AI tools to complete their schoolwork (for example, essays, presentations, images, music, and other assignments) and then presenting this work as their own without attribution
  • cheating – students using generative AI tools to help them cheat in assessments (for example, by using generative AI tools to produce responses to questions on tests and quizzes).

Identifying when students are using these tools dishonestly can be challenging given the sophistication of generative AI outputs. Schools can reduce the risk of misconduct by designing assessment with consideration of AI tools and authenticating student work where misconduct is possible.

Authenticating student work

Schools can identify potential instances where submitted work may not have been wholly completed by the student, including submitted work that:

  • is significantly different from the student’s usual level of performance
  • is not in the student’s usual writing style
  • contains inconsistencies, such as different writing styles or different text formats
  • is similar to the output from popular generative AI tools when provided with the assessment question.

Teachers are encouraged to continue using well established processes to authenticate student work, including asking students to explain how they completed an assessment and discuss or present their understanding of the main ideas.

Using plagiarism detection tools

While plagiarism detection tools may have some capacity to identify the potential use of generative AI tools, they can often provide inaccurate or inconclusive results. These can include false positives (incorrectly identifying plagiarism), false negatives (failing to detect actual plagiarism), and accidental self-plagiarism (where students reuse their own work). Teachers are encouraged to use alternative methods for validating the integrity of student work, such as asking students to explain submitted work that does not seem representative of their previously demonstrated knowledge or skills.

Plagiarism detection tools may require the collection and storage of student data, such as typed essays which may contain sensitive content. Any use of plagiarism detection tools by staff, or students at the direction of staff, must comply with the Digital Technologies – Responsible Use and Privacy and Information Sharing policies. If schools elect to use a third-party plagiarism detection tool, they must follow the steps outlined in the ICT Software in Schools – Risk Assessment policy.

Suggested actions: Identifying and responding to the inappropriate use of generative AI tools in assessments

School leaders can:

  • support staff to identify and respond to inappropriate use of generative AI tools in a consistent and proportionate way, for example, through discussions or examples presented within professional learning communities.

Teachers can:

  • identify potential inappropriate uses of generative AI tools in assessments
  • authenticate student work using established and transparent strategies
  • respond to inappropriate uses of generative AI tools in assessment tasks in a consistent and proportionate way.

Resources

Resources

Overviews

Protecting personal data on generative AI tools – one-page overview (DOCX)External Link

Opt-in collection notice templates

Collection notice templates are available in Privacy and Information Sharing – Resources: Collection notices

Further information


Reviewed 20 June 2024