Campus thoughts on artificial intelligence

On Wednesday, Sept. 11, three Goshen College employees — Alysha Liljeqvist, assistant professor of business; Fritz Hartman, library director; and Luke Beck Kreider, assistant professor of religion and sustainability — spoke at convocation about the use of artificial intelligence on campus and shared their personal views. These speeches included the following quotes:

Liljeqvist: It took us 175 years to go from the invention of the telegraph to a little red bird in a slingshot, and it took just seven years to go from that little red bird to the founding of OpenAI. That was really fast, and that’s what we’re experiencing. It’s both exciting and a little overwhelming, but at its core, it’s still just ones and zeroes. Think of it like this: we are just planting a tree, but it’s still in its early stages. The trunk is growing, but the branches and the leaves, the full potential, are still developing. The future is exciting, so be curious. Try it out in your field, and see how this tree grows.

Hartman: In 1900, there were 4,000 cars on the road, and there were 21 million horses. By 1920, there were more [cars than horses], and by 1950, we all had cars, in just 50 short years. But guess what happened right away — we didn’t know how to drive those cars. In 1913, 33.38 people died per 10,000 vehicles on the road. Today, that number is 1.5 — that’s a 95% improvement. So what happened? The rules of the road did. We got ourselves a bunch of signs. We painted lines on the road. We made a whole bunch of stupid mistakes, but we learned from those mistakes, and I think that’s what’s going to happen here in the AI realm. The speed is going to go faster, but the basics of the road are going to take on more importance. We’ll have to be able to assess the quality of our source material, have even higher academic integrity standards for ourselves, and have at least a background knowledge of what AI is doing.

Kreider: Writing involves empathy, putting yourself in someone else’s shoes, trying to see something from their perspective. It requires trying to reach out to that person and share something with them. Writing is an exercise in vulnerability, empathy and self-understanding, even if you’re not writing about yourself. So I’ve been telling my students not to use AI for their writing, because I think that’s kind of the whole point of education at a place like Goshen — we go through those kinds of exercises, doing the stuff that exercises the muscles we need to become good people: ethically, socially and intellectually well equipped and empowered. Writing your own ideas in your own voice is one example. It exercises your muscles for empathy, vulnerability and self-understanding. So from my perspective, the main problem with having ChatGPT write for us is not that it’s dishonest; it’s that it defeats the basic purpose of a liberal arts education.


For context, Goshen College recently updated its policy on academic integrity, adding this paragraph:

The use of generative artificial intelligence to complete assignments is also considered plagiarism when it misrepresents work as a student’s own words and ideas. In some cases, professors may allow or even require the use of AI for instructional purposes. Such exceptions apply only when a professor has given explicit permission to use these tools. When using AI as a source of information, the student is responsible for the veracity of that information.

Under the direction of Robert Brenneman, professor of criminal justice and sociology, students carried out a survey on usage of AI at GC last academic year. Some statistics are highlighted here:

  • Roughly one-quarter of all GC students polled have used AI tools like ChatGPT to complete assignments.
  • Three-quarters of the 53 students who have used AI in that way intend to use it for that same purpose in the future.
  • Of those 53 students, only 7.6% use it to complete assignments and turn the work in with only minor edits or none at all, compared to 47% of students nationally.

To build on Wednesday’s convocation and the evolving use of AI at Goshen College, we asked a variety of faculty and staff in student-serving roles to share their views on artificial intelligence on campus. They are:

  • Robert Brenneman, professor of criminal justice and sociology and program director of CJRJ
  • Suzanne Ehst, associate academic dean
  • Anna Groff, assistant professor of communication
  • Andrew Hartzler, professor of accounting
  • Jesse Loewen, associate director of academic success
  • Kortney Stern, visiting assistant professor of English

The responses have been edited for length and clarity, but no artificial intelligence was used in this article’s creation.


On responding to students’ use of AI…

Robert Brenneman (criminal justice and sociology): A few times it’s been very clear, and other times it’s been cases where I suspected it but didn’t investigate further. On shorter, lower-stakes assignments, it’s not always worth the energy and time necessary to confront a student if I suspect inappropriate use of AI. I have had to confront students on two occasions where it seemed either egregious (on a major assignment and used in a way that was problematic) or enough of a pattern that it was hindering a student’s growth as a writer.

Jesse Loewen (student success): I have not seen students use it, but I have talked with them about their use of AI. Last year, I had a senior who had a vague idea of what she wanted to write about and then used AI to polish some of those ideas by asking AI to do x, y, and z.

Kortney Stern (English): I have had more “gray area” uses of AI in my courses, such as students using Grammarly. I had one particular student who turned in an essay, and I immediately felt there was a glaring contrast in voice, style and tone between this essay and their previous one. They fully believed this was their paper, but when the question of AI came up, they said they had used a program that asks users to upload their writing, and the program changed all of the vocabulary and syntax to reflect what it believed was “academic.” The interesting thing about this case was that the student did write the essay, but AI changed their word choice and phrasing so much that it no longer read as the student’s original work. I think I am seeing more and more of these types of “gray area” cases than I am direct copy-and-paste, clear-cut, AI-generated papers.

Andrew Hartzler (accounting): It’s becoming much more common to use AI to generate responses to writing assignments and outlines.

On personal use of AI…

Anna Groff (communication): I use it more for generating ideas and content rather than writing: for brainstorming class activities and planning, summaries of books or other resources, lists of movies or other media related to a topic, outlining week-by-week topics for a class, etc.

Jesse Loewen (student success): I use the Grammarly AI feature multiple times a day when emailing. I often write too much. Asking AI to shorten my emails, check grammar, and so on allows me to write more professional emails that identify the concern or need more quickly. Especially when the email is very detailed, involves different perspectives of an issue, or includes someone of “significant importance,” Grammarly AI allows me to feel confident about what I’m sending.

Robert Brenneman (criminal justice and sociology): I have dabbled with AI to generate “plausible” false answers for multiple choice items on a test. I have also experimented with it extensively just to see how it could be used by students. I asked it a question about my own research and it made up some wonderful journal article titles in journals that don’t exist — all with my name on them!

Kortney Stern (English): I personally have not used AI for what I refer to as “outsourcing my work.” I have discussed this with friends who teach at other universities, and I know many who do. I personally like to think of every opportunity to write as an opportunity to practice. For me, the art of writing is a life-long craft, but that is just my personal belief. I do understand why some feel that, if they already have the skill, they might as well use AI to assist with tasks that can be easily delegated and accurately completed, such as writing an email or a syllabus.

Suzanne Ehst (associate academic dean): I’ve mostly used large language models (LLMs) like ChatGPT in my work. They are not always helpful. If I’m writing something that is quite context-specific, these tools usually produce text that is too generic. However, for things that are more general — like updating our academic integrity policy to include explicit guidance on AI — LLMs can generate usable text more quickly than I can.

Andrew Hartzler (accounting): I use it in the form of Grammarly to proofread emails and most documents. I have not used generative AI more heavily yet.

On using AI in the classroom…

Robert Brenneman (criminal justice and sociology): I have used ChatGPT in class on two occasions in which it seemed useful. In Research Methods we asked it to create a consent form for our survey and it did a pretty nice job producing “standard” text that we could then modify for our own use. In Senior Seminar I asked students to use an AI prompt to generate a cover letter for a job ad they found, and it produced jaw-droppingly beautiful prose. But on further examination students realized that the content was fairly generic and not really specific to who they were.

Kortney Stern (English): One activity I do enjoy doing in class is asking an AI source to “write a poem, song, paragraph, etc. in the style of X author.” Then, I ask the class to see what they notice AI is picking up on. Is it trying to mimic themes, tones, etc.? Are any of its choices odd, incorrect or inappropriate? Are there any glaring omissions? Alternatively, I might ask them to have AI generate a poem, and then have them “correct” it and/or use it as a jumping-off point to add on to.

Andrew Hartzler (accounting): For writing, I’m using the College’s current AI policy. Its use in class is not as relevant to my subjects because students have to show me on paper how to apply concepts they are learning. I would say our department is much more open to its use than most others and my colleague Alysha [Liljeqvist] uses it heavily.

On professional adaptations to evolving AI…

Anna Groff (communication): I want to continue to experiment with it myself and find ways to have students explore it carefully and thoughtfully. That may mean building it into the brainstorming stage for story ideas, research ideas, sources, etc. but also using caution when it comes to writing final versions of articles/papers. I ultimately want to read students’ own thoughts and words in their assignments — that’s far more important than perfect writing. And I want to help them avoid even the slightest temptation to use it to make up quotes in journalism or data for research.

Kortney Stern (English): I am personally always a bit slow to welcome technology. That being said, I think it is important for the classroom to be reflective of the “real world” in terms of content, themes, assignments and educational tools, such as AI. Like all tools (including learning tools), AI can be used, misused, or abused. What is most important to me is that students learn any given skill (such as how to write an academic essay, how to construct a strong thesis, etc.) first, and then once they are in control of that skill they can begin to play with its conventions and even turn to AI for help brainstorming paper topics or creating an outline, for example. So, in this sense, I am more hesitant to support AI use in my 100-level courses unless it is for an in-class activity, but I feel less restrictive towards its uses in class and outside of class for 300-level courses.

Andrew Hartzler (accounting): I will use it anywhere it creates efficiencies in generating classroom examples, given its ability to quickly restructure language and processes from input prompts. I think it is a more relevant tool in language/discussion-based classes than in basic process courses, but in upper-level process courses, it could be an excellent teaching tool as an example generator and a way to accelerate the completion of processes.

General thoughts on AI…

Andrew Hartzler (accounting): It is just like any other tool that can be used for good or harm depending on the motivations of the user. I am concerned that at lower levels of learning, it will give students a more effective way of creating the illusion that they understand things that they really don’t understand. That will make them highly ineffective when they are in job settings or at higher levels of learning since they will not be able to complete new and more complex tasks because they never learned basic processes and structures. That being said, students must learn how to use it as a tool if they are to function effectively in environments where use of AI is expected. It is essentially a template generator and hypothesizer on steroids. The ability to discern the difference between input and intended result will be where the educational system will need to step up. It’s just like using a calculator for math. It dramatically speeds up the process, but if the user doesn’t understand the concepts, the calculator is not going to give them the correct result.

Robert Brenneman (criminal justice and sociology): I keep telling students that AI will only make authenticity that much more valuable. They will need to be AI literate at the very least. That means knowing how and when to use AI to their advantage. And poor grammar and weak prose will stick out like a sore thumb from now on. But authenticity, values, passion, the ability to surprise and awaken others with our ideas and actions — the “market” for these things is only going to grow. AI thrives on predictability. Thoughtful, self-critical liberal arts college grads excel at other things.

Jesse Loewen (student success): It is a tool, but so many students want shortcuts to success. Therefore, it’s certainly a danger in some ways. I often think about the fields where you need to show your learning each and every day. Going through college using AI and never really truly applying yourself, your skills, and so on concerns me about our future generations. As AI continues to evolve, we must talk about it and talk with students and staff at all levels of education.

Suzanne Ehst (associate academic dean): New tools are generating a lot of good conversation in higher education, and we need to keep it going. There is so much important, multidisciplinary territory to discuss here, and a liberal arts college is a great place to integrate our various areas of expertise — expertise around workplace readiness, ethical use, teaching and learning, and information literacy to name just a few areas. It’s both daunting and exciting.

Kortney Stern (English): If I were queen for the day, my vision would be to take AI use out of the shadows. Right now, I think the automatic assumption is that AI equals plagiarism. In reality, AI often results in gray areas, such as my example about the way AI changed my student’s voice. I think this is a really productive conversation that could be had: what is tone in writing? How do we recognize voice on the page? When is a writing voice no longer our own? These are great conversations that could be had but are often taken away because of the fear of AI — both in terms of its use and potential punishment for using it. I wish we could have an open dialogue with students. How are they using it? Are they using it? How frequently? In which types of classes or assignments? Did they use it more in their first year and less in their upper-division courses? When do they think they have “crossed the line,” if there is such a thing? I would love to hear their perspectives. The only way we can have that kind of open transparency is if they do not fear repercussions, so we, instructors, must also be open to listening before assuming or acting.


In closing, Jesse Loewen added a new perspective to the conversation:

When discussing students whose first language is not English, I had a colleague push back on me, arguing that those students should use AI. I was coming from the perspective of, sure, using AI for basic stuff, but to really learn and understand a language, I felt that there needed to be a different approach. This conversation still sticks with me because I don’t know the best language-learning path.

Another way to look at it is that a non-English speaker walks into a fast-food restaurant and pulls out their phone. They use AI to help them order their food. Sure, that can work. However, my thought was that the individual may want to get to a point in life where they don’t need to have their phone ready to order food. They would have gotten to a place where they learned the language or had enough confidence to order because they understood the language. Maybe they took English courses like the ones we offer at the college, or maybe they had some support person who worked closely with them on their English. Perhaps I’m overthinking it, but I’m really curious about what those who are learning English prefer. What’s their preference? Use their phone for the next 20 years or really learn the language? For someone working in a specific field, relying on a phone for translation/AI isn’t ideal if the alternative is helping that person learn the language. It’s just something to ponder.


Luke Kreider, the final speaker at convocation, closed his speech with this passage:

Some of you saw the movie Oppenheimer last year about the creation of the atomic bomb. That film shows what really happened in history. Once you build the bomb, it becomes irrational not to use it. Once you use the technology, it starts to seem crazy or even dangerous not to build more and bigger ones — and once in a while, to use them. I’ve been given lots and lots of examples of this: once you make the technology, you have to use it. And I know that AI is not a nuclear weapon, but I worry about that technological coercion. Will we end up sort of forced into how and whether we use it? What will be the cost of that conversion? For me, one of the most valuable, one of the most morally significant and deep human tasks in life is figuring out what we actually think, what we want to say to others. In my view, one of the biggest moral hazards of AI is that it may become overwhelmingly tempting, even necessary, to let the robots think for us, and so to let them do all the work that we ourselves need to do to sustain our abilities to think for ourselves and make judgments and decisions and conclusions, to think with our emotions and our bodies, and to think in conversation with the body and with other human beings. I think, as a college community, we should consider how to use AI in ways that support us in cultivating our core values and growing core capacities for the kinds of lives we want to live.