
Lessons Learned from Incorporating Generative AI Into An Online Ethics Course

Last year I incorporated generative AI in several ways into the Pharmacy Ethics course I have been teaching for the University of Mississippi School of Pharmacy since spring 2010. The experience was illuminating, in part because I found the accuracy and plagiarism concerns usually raised in discussions of AI entirely manageable. However, another serious concern emerged for me: how AI influences students’ ability or willingness to make original arguments with conviction.

Over the years, this course has evolved through several delivery modes: an in-person lecture format, a hybrid format during a period when I was abroad, fully online both before and during the Covid-19 emergency, and, most recently, an independent study offering. Each of these prompted specific changes in my teaching practices. For example, lectures disappeared entirely, and group discussions became a bigger part of the course, though they have receded under the independent study format.

I have also completely set aside the textbook I started with, first in favor of adaptive learning courseware and now entirely in favor of open educational resources (OER) built into the Blackboard LMS. In general, I would characterize the course as a flipped model that prioritizes active learning.

With each of these changes, I tried to learn from students about what worked for them and to amplify that. For example, early in the Covid-19 emergency, I was surprised to learn how much students enjoyed the synchronous online meetings with peers they had never met in person before.

Now generative AI is creating the next set of significant changes to my courses. It is informing not just the kind of work students do in their practice activities, projects, and assessments, but also how I present information and create other learning experiences for them.

Students’ frequent use of generative AI has also led me to reflect on how to encourage them to determine their own beliefs and to take responsibility for their own arguments in the face of very authoritative-sounding AI outputs.

Using AI to develop course content

In Pharmacy Ethics we cover the foundations of healthcare ethics, ethical codes in pharmacy education and pharmacy practice, and social, political, and health issues that are grounded in pharmaceutical use. Some of those issues include IVF, medication abortion, end-of-life care, gender-affirming medications, medical execution, and weight-loss drugs.

The goal of the curriculum is for students to understand the ethical standards of care and practice in healthcare and to apply them to contemporary issues. A related goal is for students to practice reading and analyzing legislation drafted to regulate, directly or indirectly, pharmaceutical products.

Having the course in the independent studies division gives students more scheduling flexibility: they take the course online at their own pace, completing it within a single term, and there are no synchronous sessions to attend. From my perspective, however, it has not been ideal, because the course is now a solitary venture for students, and so much of education is social, involving engagement with others who have different experiences and perspectives. Enrollments have been averaging 30 to 50 per term, so in earlier formats there were plenty of opportunities for students to hear viewpoints different from their own.

Because I no longer have control over the course modality, I set out to make the content and assessments as relevant and interactive as possible for students. Last year I refreshed all of the content and assessments to make the course more topical, including by building readings and assignments around recent state and federal court cases.

To bring in those recent developments, I used Perplexity to generate summaries of legal proceedings. Perplexity is the best tool for this kind of work because it pulls information directly from the internet in real time and provides citations for its facts and figures.

I fact-checked all of the outputs and the sources Perplexity cited, and I often rewrote outputs to make them more accessible to students who may not be familiar with specialized legal and medical terminology. Students always had the option to read the linked original documents, but the AI summaries gave them the quick overview they needed to gain background for a case study or an analysis of a court decision.

Once I had my content set, I used the generative AI tool built into Blackboard Ultra to generate quiz questions for each module. The tool is very useful but not entirely reliable, so I had to go through each question pool not only to verify the accuracy of every question and answer but often to rewrite questions in language students would understand. I also had to eliminate several subjective questions and answers from the question pools.

It may seem as if the process of fact-checking and rewriting outputs was time-consuming, but having built course content from my own research in the past, I found the AI-generated content and assessment questions a big time saver.

I tried to use Blackboard Ultra’s image generator, but the results were disappointing: they were often fantastical or cartoonish rather than photographic. The image generator also required incredibly specific prompts that still did not hit the mark for the kinds of images I needed, such as page banners and illustrations of specific medications covered in the course content.

Assigning students to use generative AI

I then turned to incorporating generative AI into my assignments. I created a generative AI policy for each assignment so students would not have to keep referring back to a policy in the syllabus. This also helped clarify how I wanted them to use AI on each assignment.

It was important to me that students not merely use AI to look up answers but also use it as a study or analysis tool. I created several assignments in which students were instructed to use generative AI for analysis or to investigate bias.

In one early assessment, I asked students to use an image generator to create an image of a pharmacist. They were to upload the image to the submission portal and then tell me whether it looked like them. Students learned from this exercise that many image generators default to pharmacists being male and white. When the women in the class and the Black and Brown men asked the tool to regenerate the image with a more specific prompt, “A Black female pharmacist” for example, the image often had problematic aspects, such as the person sporting a stethoscope or being impossibly attractive.

In another assignment, I asked students to use generative AI to create a 12-question quiz to help them understand the three ethical principles of healthcare. Building on that assignment, I had them input the Pharmacist’s Oath and analyze it against those same principles.

Generative AI was useful in assignments where it would otherwise be quite burdensome for students to get information. For example, one assignment asked them to report their congressional representative’s reasoning for votes on certain healthcare bills. Not all members of Congress put out statements following votes, but AI could quickly search the legislator’s website, interviews, and other public documents and put together a summary of that person’s statements indicating support for or opposition to a particular issue.

Generative AI was also useful in helping students break down and analyze complicated Supreme Court cases that formed the basis for several assignments. This may sound like an area where high levels of inaccuracy would creep in, but most AI inaccuracy stems from the model relying on its training data. It performs much better when you direct its attention to particular PDFs or webpages, which I had shown students how to do by this point.

Overreliance on generative AI

However, as we progressed through the course, students were increasingly asked to weigh in on ethical issues in healthcare. It was my belief that in the last third of the class, students would have gained enough insight to begin forming their own opinions on ethical matters.

But this often did not happen. I found that even when the assignment prompt was subjective in nature, students entered it into ChatGPT and accepted the output as their own. Sometimes students would note whether the AI output aligned with their personal beliefs, or adjust it so that it did, but many times they did not.

For example, one assignment asks students to consider the appropriate age for minors to make decisions about gender-affirming medications such as puberty blockers and hormone therapies. The most common ChatGPT output recommended puberty blockers at age 12 for girls and 14 for boys, with hormone therapies starting at age 16. Some students cut and pasted that output and added that they agreed or disagreed, but the majority of submissions did not include even that much.

This mostly uncritical acceptance of AI outputs is disturbing to me. This is different from accepting incorrect or invented outputs — now commonly called AI hallucinations. This is outsourcing matters of personal belief. Students were willing to present the AI output as their own rather than grappling with the issue or honestly stating their beliefs.

Another struggle students had was acknowledging their use of AI. Even though I told them clearly they were allowed to use it and that there was no penalty for doing so and no extra credit for not using it, they often failed to make a statement about their AI use in their assignment submissions.

I had to send several announcements to students taking the course about marking submissions as generated by AI. When students did finally comply with the policy (because I refused to grade AI-generated submissions that were not acknowledged as such), their acknowledgements were passive (“AI was used on this assignment”) or curt (“ChatGPT”). Very few students actively stated, “I used AI to help me with this assignment.” In the next term I plan to add checkboxes to each assignment that allow students to indicate whether or not they used AI.

I asked our Every Learner student interns why they thought other students might hesitate to acknowledge AI use. The interns speculated that, even though students were told they could use it, they worried their instructors might think less of them for using it.

Putting the personal back in opinions

I strongly believe faculty should be experimenting with uses of generative AI in their classes and should be willing to adjust assignments based on student behavior, so that instead of banning generative AI they are teaching students to use it responsibly.

I would rate the AI overhaul of the class a success, but I want to encourage students to work more critically with generative AI. First, I will have them outline how they changed an output or, if they did not, why they made that decision.

Additionally, on assignments in which they are asked their opinion, I need to encourage them to challenge the output or adjust the prompt so that it reflects their actual beliefs rather than the aggregate consensus of the training set. I am working on rewording those assignments so that simply cutting and pasting the output will no longer be acceptable; this way students are forced to weigh their beliefs against the ethical foundations of healthcare and the ethical underpinnings of legislation regulating pharmaceutical care.

Overall, my experience suggests that the accuracy and academic dishonesty concerns about AI, where most of the discourse is focused, are manageable. It’s possible for individual educators and students to take responsibility for checking the outputs of generative AI and to get better at prompting. We can use AI effectively to support the hard work of gathering and synthesizing information.

The greater risk is one I plan to reflect on more and one we need more conversation about: students treating AI not just as accurate but as wise, knowing, or insightful. On complex ethical issues, students are exchanging their own voice for the convenience of an AI output generated by a facile sweep of the internet.

This leads students astray via a logical fallacy, argumentum ad populum, in which the frequency with which an opinion is expressed online is mistaken for a reason to adopt it as one’s own. As we continue to explore ways to use generative AI in college courses, we need to guard against uses that result in students letting it do the hard work of developing an informed personal perspective.
