Academic Integrity and AI

Wednesday, April 26 | 12pm – 1pm


Mark Edwards, PhD

Dr. Mark Edwards is Director, Academic Initiatives, and works closely with the Associate Dean, Academic & Innovation on projects including process and policy development; implementation of Indigenous-focused programs, including the new MEd in Indigenous Education; and the Ministry of Education-funded expansion of seats in the Faculty’s early childhood program.




Teresa Dobson, PhD

Dr. Teresa Dobson is Associate Dean, Academic, Graduate and Innovation for the Faculty of Education, as well as Interim Director of the Master of Educational Technology Program. Dr. Dobson’s research is in the digital humanities, with a primary focus on experimental read-write interface design and use, as well as literary education. Dr. Dobson has taught secondary English methods courses in the UBC BEd program, as well as English literature in university and grade schools (IB, AP, general stream). Her experiences investigating academic integrity cases for the Faculty of Education and teaching academic writing in literature classrooms will inform this session.



Faeyza Mufti, MET

Faeyza leads the technology-enabled learning design initiatives at ETS. She has an educational background in Computer Science, Project Management, and Educational Technology Design.


Jamilee Baroud, Curriculum and Evaluation Consultant, ETS
Amir Doroudian, Learning Designer, ETS



Estimated reading time: 10 minutes

In this interactive Viewpoint session, “Academic Integrity and AI,” we sought to broaden understandings of academic integrity and artificial intelligence and to share perspectives on pedagogy, leadership, and future directions from multiple disciplinary roles and spaces.

Opening remarks

Faeyza Mufti
Faeyza Mufti began the session by defining Generative Artificial Intelligence (AI) and sharing ChatGPT’s response to her prompt, “what happens when AI meets AI?”: the two make some intelligent connections. Faeyza unravelled these intelligent connections, focusing on the affordances and implications of Generative AI as a learning tool in education, and outlined two opposing approaches: ban the tool as a threat, or embrace it with guidelines on appropriate and permissible use.

One of the most prominent perceived threats of Generative AI is that students might use it to complete their schoolwork. This concern has led to the emergence of AI detection tools such as GPTZero, which can generate false positives and should therefore be treated with caution.

Faeyza then drew from recent research to exemplify that when educators embrace Generative AI for teaching and learning, the process of engagement with the tool should be purposeful and extend the capability of the human mind. One method is to leverage higher order thinking skills in assignments and writing tasks so that completion can be less easily substituted by Generative AI. In regard to assessment, Generative AI can assist teachers to redesign alternative assessments and flip the classroom.

Faeyza concluded by reminding us that students must be taught how to use Generative AI to improve prompt writing and thus data output and to evaluate responses, which are not always valid. Additionally, while a plethora of Generative AI tools are available that can be used for a variety of purposes including research, writing, generating marketing plans, and referencing, there are privacy and data concerns that must be considered in educational contexts.

Dr. Teresa Dobson
Dr. Dobson began by framing the conversation about academic integrity not as a way to undermine the affordances and possibilities of Generative AI, but as a way to elaborate on some of the dilemmas circulating in higher education. One major concern is understanding how to acknowledge sources when working with Generative AI. Dr. Dobson noted that academic integrity cases spiked throughout the pandemic with the rise of online education, though the Faculty has yet to see a case related to Generative AI. This spike has nonetheless made the university deeply concerned about academic integrity, and it has created an Academic Integrity Hub with concrete definitions and methods for accurately citing work. However, because AI tools do not cite the data they draw on,

“…a student who is using it won’t necessarily be able to cite the sources that ChatGPT used to generate that text. So, it undermines that notion of academic integrity that’s so central to the tenets of the university.” She added that output is limited to data collected prior to 2022, and that the training data carries bias reflecting the lack of diversity among the people programming and creating the tools. Students need to be aware of these limitations, and it is the educator’s role to support students’ development of AI literacy skills.

Dr. Mark Edwards
Dr. Mark Edwards began by noting the significance of the conversation for the Faculty of Education and continued to address three topics related to AI in education. First, Dr. Edwards suggested that because

“…the collective agreement states that all faculty, through their academic freedom, can use whatever means they feel are most fruitful in teaching the topic that they’re teaching,”

educators must tell their students how they can use AI in their specific course context and include those details in the syllabus; resources for different models can be found on the Academic Integrity Hub. If AI is going to be permitted, he suggests that educators define how it will be permitted in order to avoid potential academic misconduct. He also recommends that every department and academic unit discuss what it means to use AI now and in the future. Style guides are a helpful resource: APA, for example, regularly updates its guidance on the accurate use of AI and citation practices, and other style guides are publishing their own reference guidance as well.

Dr. Edwards also identified two areas of significance to academic integrity. The first is admissions: he suggests that people will inevitably use AI, so everyone in the Faculty and its departments must determine how they want to evaluate statements of intent. The second is that educators may need to change their position on AI partway through a term, especially if students do not have equitable access to it. In other words, not every student will be able to use Generative AI, or to use it in depth, so this equity issue must be addressed before the technology is integrated.

Discussion themes

Panelists and attendees alike were eager to discuss whether, and to what extent, AI might curtail critical thinking. The panelists recognized that although AI lacks human-like abilities to think critically, integrating Generative AI may still alter how we conceptualize critical thinking in education. Faeyza stated that if assignments are designed to emphasize higher-order thinking, including the ability to analyze, evaluate, and peer review, then critical thinking may not be so readily substituted by these tools.

Dr. Dobson added that the act of generating text and developing a prompt — prompt engineering — while using Generative AI is a form of critical thinking in itself. The advent of Generative AI encourages educators to think about the kinds of critical thinking our students currently engage in, and how we can continue to encourage them to think critically by incorporating effective critical thinking skills into Generative AI tasks and assignments.

In conversation, panelists identified two main themes regarding Generative AI and academic integrity. The first is to encourage educators to be transparent with students about whether the use of ChatGPT is permitted in a course, especially because there are no Faculty-wide regulations dictating whether instructors can or cannot integrate the tool; if it is permitted, educators are asked to be clear about the parameters of that use. The second is educating students about how to use ChatGPT, what purposes it is best suited to, and the fact that AI-generated responses do not provide references. As Dr. Dobson eloquently noted:

“If we conceptualize scholarship for learning as being in communication with other people who are thinking and writing about the subject, how do we know who those people are if we use an intermediary to gather the data…that’s the challenge.”

The panelists agreed that bias in Generative AI datasets can heighten inequities, limiting both access to tools like ChatGPT and understanding of how to use them and judge the validity of their outputs. As Dr. Edwards noted, if students cannot equitably access Generative AI tools, then assignments that require their use will contribute to digital inequities. Dr. Dobson added:

“AI absorbs the bias that's already present in the data that it draws from and students who are working with…ChatGPT need to recognize that computational systems are only as good as the people who make them [and] …there's not enough diversity among the people who make technological decisions…. So, it's important to sort of think of the bias that's already inherent in the people who are programming the tools and creating the tools.”

As Faeyza pointed out, one major limitation of Generative AI relates to data privacy. A recent ChatGPT incident on March 22, whereby people could see each other’s prompts, garnered widespread concern. In education, data privacy is of utmost importance, which is why all institution-wide approved tools are FIPPA compliant; one of the main limitations of Generative AI in educational contexts, then, is the protection of student and teacher privacy. The panelists agreed that educating students on their data privacy rights in relation to Generative AI is a good first step. A next step, when incorporating ChatGPT into assignments or tasks, is to give students the option to opt in or out of using it over privacy concerns.

Presentation slides

View Academic Integrity and AI session slides


A few resources were also made available prior to the session.

Equity, Diversity, and Inclusion (EDI)

This event is part of our Equity, Diversity, and Inclusion (EDI) initiative, which aims to raise awareness of EDI topics, particularly accessibility in in-person, online, and hybrid courses, in line with the forthcoming provincial accessibility guidelines. Learn more about our related events and resources.

We invite readers to continue the discussion in the comment area below. Share your thoughts and experiences in relation to academic integrity and AI.
