When deciding whether a generative AI tool is right for your project, classroom context, or workflow, we encourage you to work through the following questions.
- What problem is being solved? There are some important applications of AI that improve equity and access, like better captioning of video or higher-quality audio transcription. It is worth considering, however, whether the problem to be solved is a pedagogical one, or a problem of efficiency or scale. Not all educational technologies are interested in education; often, they are really about coping with systemic issues like large class sizes and insufficient prep time. Knowing why we are making a decision can help with weighing ethical choices.
- Potential harms. Understanding how tools work and what risks they might pose to students is important. In general, you should not adopt, or require students to use, any tool that hasn't been through a Privacy Impact Assessment. Beyond that, you ought to consider questions like:
  - Are racialized, disabled, or gender-diverse students disproportionately impacted by some aspect of this technology? Facial recognition tools, for example, may have negative impacts on students in these groups.
  - Does this tool store any personal information students may give it, and is there guidance on what students should and should not share with the tool? Do you understand how the data is stored, for how long, and who owns it?
  - Have you used the tool yourself and explored whether its responses are likely to contain harmful information? Have you read up on the tool and the recommendations of other practitioners?
- Who gains? Where do the benefits lie in using this tool? For example, will more learners find the materials accessible because you made use of this particular tool? Conversely, if there is money or data changing hands, who is profiting?
- Ethical considerations. Working with generative AI can require difficult decisions about how to live our values. These tools can offer huge benefits, but they also come with important drawbacks that should be weighed.
  - Environmental/sustainability concerns. Generative AI tools use a lot of water and generate a lot of carbon. By some estimates, a 20-25 question interaction with ChatGPT consumes roughly 500 mL of clean drinking water, and an AI-powered search produces 4-5x more carbon than a traditional search. How do these added costs fit into our workflows at a university like TRU, where sustainability is central to our mission?
  - Labour concerns. One reason recent iterations of GPT, for example, produce far less offensive output than earlier versions is the highly traumatic human labour behind them, labour that is largely underpaid and located in the Global South.
  - Intellectual property concerns. Using these tools means drawing on large databases of content amassed without the original creators' consent. We must consider how we talk about intellectual property and originality when we discuss how this work functions. And because the sources of these materials are unknown, it's also important to consider what biases the datasets may be reinforcing.
Working through these questions doesn't make the decision to use a generative AI tool as part of your workflow any easier or less fraught, but it can make the decision-making process more transparent. It might also make a worthwhile classroom activity for your students.