Hello, hello!
CRITER is back! It's a day late, or maybe a week early, depending on how you like to count. Oh well.
Kicking things off with two great opportunities for Boise State faculty and staff:
First, the eCampus Center invites you to join a new user group exploring AI chatbot use in online courses: "In this hands-on, collaborative environment we'll share our experiences and challenges creating chatbots for online teaching and learning, investigate best practices for integrating tools like Gemini and BoiseState.AI, and shape a faculty-led research project to launch Spring 2026." Sign up through Campus Groups. Contact ecampus@boisestate.edu to learn more.
And mark your calendars! The second annual Social Impacts of Computing (SIoC) workshop will run this August 4-5 in The SPACE (College of Innovation+Design, Albertsons Library). A registration website is coming soon, and I'll post it when it's up. In the meantime, mark the time, and check out the draft schedule here.
Alright, on to this week's recommended mini-reading list:
1. I enjoyed this essay from Inside Higher Ed, which pushes back on the "ChatGPT is making us dumb" framing of a recent MIT study, pointing out the study's methodological limitations: "Consider the task that participants in this study, all students or staff at Boston-area universities, were given. They were presented with three SAT essay prompts and asked to select one. They were then given 20 minutes to write an essay in response to their chosen prompt, while wearing an EEG helmet of some kind. Each subject participated in a session like this three times over the course of a few months. Should we be surprised that the participants who had access to ChatGPT increasingly outsourced their writing to the AI chat bot? And that, in doing so, they were less and less engaged in the writing process?"
[Related: Ethan Mollick's take on this, if you're interested. The tl;dr: "Ultimately, it is how you use AI, rather than use of AI at all, that determines whether it helps or hurts your brain when learning"].
2. Congrats to Boise State Computer Science professor Jun Zhuang on co-authoring a preprint on "jailbreaking" LLMs--getting them to do things they're not supposed to do. The methods the authors developed, according to 404 Media, involved tricking "AI chatbots like ChatGPT or Gemini into teaching you how to make a bomb or hack an ATM" by making the prompt "complicated, full of academic jargon, and cit[ing] sources that do not exist."
3. And here is more incredible work from Boise State AI researcher Brian Stone (I shared this one with our Deans and Provosts because I think it's such an eye-opener). You should read the whole thing, but here's an excerpt that kicks the piece off: "Imagine being a college student on your first day of a new semester. One professor says that using generative AI is cheating, while the next says you will be using it extensively in the course, and yet another does not mention it. You are told you will be punished if an AI detector classifies your assessments as AI-generated. Some professors encourage you to use a grammar app to improve your writing, but others tell you that doing so counts as cheating. At home, one parent worries AI will atrophy your brain and abilities, while the other tells you that you need to learn prompt engineering to have any hope of landing a job in the new AI-infused economy. Pundits in the media say AI makes college obsolete, social media influencers advertise apps that can complete all your papers and online tests, and meanwhile some of your friends are showing off creative applications of AI for fun while others say AI will destroy the world. If you were a college student, you would probably find yourself confused, perhaps excited or nervous about this new technology, and likely unsure of where it fits in your future."
4. And wouldn't you know it, I'm going to sneak a fourth recommendation in (it's short, I promise). From the Idaho Capital Sun, a piece on how Idaho's Information Technology Services (ITS) department is rolling out generative AI: “While we’re measuring and mitigating risks, we’re making sure that we’re not getting in the way of it being launched. We want to — we really want to unleash this to the workforce,” Idaho Office of Information Technology Services Administrator Alberto Gonzalez said in an interview. “I’m a huge fan of automation and machine learning already anyways, because it can make government way more effective and more efficient. And I believe that those that are not using automation are doing a disservice to the state.”
Here's a photo of a cool thing I saw while hiking at Minnehaha Falls in Minneapolis last week.
Hope you're getting off your screens and outside, if you can. Talk to you soon!
Jen