AI Policy

Validum Institute’s AI Policy outlines the responsible use of AI tools in learning, teaching and assessment to maintain academic integrity, transparency and educational quality.

Background

Validum acknowledges that, with the increased use of AI in a wide variety of situations, it is important to draw on the benefits of AI while considering and minimising the potential risks and costs.

The purpose of this Policy is to outline the way in which AI may be used by Validum learners.

For the purposes of this Policy, AI means artificial intelligence and machine learning software or applications, including but not limited to ChatGPT, Jasper, Synthesia (videos, eLearning), Murf (artificially generated voiceovers for content), Canva (image generation, text to image), GPT-3, GPT-4 and Claude, or any later or similar versions of such software or applications.

The focus of the Policy is directed at ChatGPT; however, this Policy applies equally to any other AI used for the same purpose, including AI that had not yet been created at the time the Policy was written.

Validum understands and acknowledges that AI:

  • can be a useful resource for the purposes of research and drafting;
  • uses information gained from data that has been fed into it, together with data mined from the internet;
  • allows users to feed data into it to generate content; and
  • in many cases, is not to be treated as a private or trusted platform, as AI may retain information entered, which may then be accessed by third parties.

Policy

The use of AI is permitted for research, assistive, idea-generation, information-gathering and brainstorming purposes only.

All learners, prior to submitting each assessment, currently sign a declaration that the answers in the assessment are produced by the learner and are the learner’s own work.

In line with the current declaration, Validum expects that all assessment answers submitted by learners in the course of undertaking studies with Validum will be the learner’s own original work.

Learners are NOT to use AI to:

  1. cheat;
  2. create substantive answers to assessment questions that are copied and pasted directly into an assessment and passed off as the learner’s original work (this is plagiarism); or
  3. answer questions beyond the learner’s apparent ability or skill.

Learners are responsible for proof-reading, fact-checking and editing any content created by AI, and for requesting sources wherever possible and relevant.

It is also the learner’s responsibility to ensure that any content generated by AI does not breach any copyright or other intellectual property laws.

PLEASE NOTE – if a Validum Trainer and Assessor determines that a learner has blatantly and repeatedly used AI to complete their Assessment(s) in a manner which breaches this Policy, the Trainer and Assessor may (in their sole and absolute discretion) refuse to mark the learner’s Assessment(s) and require the learner to re-attempt the Assessment(s) in their own words.

Risks in using AI

Validum notes that there are potential risks and/or detrimental effects associated with the use of AI in a work and study context. These may include but are not limited to the following:

  • breaches of privacy;
  • breaches of confidentiality;
  • disclosure of commercially sensitive information; and
  • lack of currency and accuracy in generated content.

Because AI relies upon the information it is fed, a large proportion of incorrect or unreliable information may collectively be used to generate false content.

NEVER enter personal or confidential information. Anything you enter into an AI prompt may be stored for future use and may be recalled or used at any time without your consent.

The learner acknowledges that Validum will not be liable for any loss, cost, expense, claim, damage or adverse impact suffered by the learner caused by, or as a result of, using AI, entering personal or confidential information into AI, or relying on content generated by AI.