CRITER Weekly Update
February 4, 2026
Hello and Happy Wednesday!
One thing I really enjoyed this week was this photo of Fiona the hippo, who I guess somehow is 9 years old now???? She’ll always be a baby in my heart, though:
Now on to the AI:
Upcoming Opportunities
Agentic AI for Faculty Exciting to see the SBOE continuing to provide lots of opportunities for upskilling and training in AI. I signed up for this one from Joel Gladd, our SBOE AI Catalyst:
The February 10th Idaho AI Catalyst show-and-tell will provide a crash course in agentic AI for faculty. As of last week, Google Chrome is now an agentic browser. What does that mean for students and faculty? Kevin Rank (BSU) and I will provide an overview of what “agentic AI” means, explore how recent developments are impacting higher ed, and provide a few examples of how we’re using agents to support our own work.
Register here for the full series or to select a single session.
Web Search and Data Mining Conference Folks in Boise State’s CS department asked me to share info about WSDM 2026 with you:
THE 19TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, BOISE, IDAHO, FEBRUARY 22-26
WSDM (pronounced “wisdom”) is one of the premier conferences on web-inspired research involving search and data mining. WSDM is a highly selective conference that includes invited talks, as well as refereed full papers. WSDM publishes original, high-quality papers related to search and data mining on the Web and the Social Web, with an emphasis on practical yet principled novel models of search and data mining, algorithm design and analysis, economic implications, and in-depth experimental analysis of accuracy and performance. Register here.
AI and Higher Ed
AI Tutoring Genuinely thrilled to see this op-ed, written by our own Dan Sanford, in Inside Higher Ed:
When we do so, we lose not one, but two well-documented benefits. The first is the benefit to student learners: Tutoring’s highly relational pedagogy builds confidence, persistence and belonging—outcomes that cannot be simulated by a chatbot. The second, less often discussed but equally important, is the benefit to the tutors themselves. Serving as a peer tutor is one of the most powerful pre-professional experiences undergraduates can have. It allows them to practice communication, empathy and facilitation. It helps them begin to see themselves not just as students, but as educators. It bridges their identity as learners with their emerging identity as professionals. When tutoring is outsourced to AI, we rob both groups of students—the learners and the tutors—of experiences that shape their academic and professional futures.
Legitimacy and Verifiability Several of us on the AI Coordinating Council have been having conversations about the existential threat AI poses to a university education. This op-ed from Inside Higher Ed lays out the stakes quite clearly:
The alligator closest to the boat is AI-forced restructuring. Universities used to be purveyors of courses bundled into degrees. There was an assumption that the degree alone had value. Now, universities need to step up and double down on offering verified capability developed under expert supervision, in addition to the social and professional networks that make the capability legible in the world. AI can help faculty deliver content at scale, and employers can trust that your graduate actually knows what the transcript claims.
Worth a read.
AI for Arguing The Chronicle has this article out on AI software called Sway, which supports civil discourse and argument in college classrooms:
Here’s how Sway works. An instructor selects a prompt, usually a controversial statement. (Simon Cullen, one of the tool’s creators, said gender is by far the most popular topic with prompts ranging from the ethics of abortion to transgender athletes.) Students rate how much they agree or disagree with the prompt, and then are matched with opposing viewpoints for a virtual chat. Users provide a display name but are not required to identify themselves or use their actual names. During the discussion, the AI tool intervenes with suggestions that the creators say are to ensure the conversation is as productive as possible.
For example: In a discussion about abortion, if you were to type something accusatory about your debate partner’s opinion on women, Sway might tell you to address your partner’s reasoning instead of speculating on their motives. It also asks guiding questions, trying to refocus argument away from what it interprets as accusations or slogans.
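For the technically curious, here's a toy sketch of what "matched with opposing viewpoints" might look like under the hood. This is purely my own guess, not Sway's actual algorithm: I'm assuming a simple agreement scale and a greedy most-opposed pairing, which is just one plausible approach.

```python
# Hypothetical sketch (NOT Sway's real code): pair up participants with
# opposing viewpoints. Each student rates agreement with the prompt on a
# scale from -3 (strongly disagree) to +3 (strongly agree); we sort by
# rating and greedily pair the most-opposed students together.

def pair_opposing(ratings):
    """ratings: dict mapping display_name -> agreement score (-3..+3).
    Returns a list of (name, name) pairs, most-opposed pair first."""
    ranked = sorted(ratings, key=lambda name: ratings[name])
    pairs = []
    while len(ranked) >= 2:
        low = ranked.pop(0)   # strongest remaining disagreement
        high = ranked.pop()   # strongest remaining agreement
        pairs.append((low, high))
    return pairs  # with an odd count, one student is left unpaired

students = {"kestrel": 3, "heron": -2, "wren": 1, "finch": -3}
print(pair_opposing(students))  # → [('finch', 'kestrel'), ('heron', 'wren')]
```

A real system would presumably also account for topic, timing, and who is online, but the core matching idea the article describes can be this simple.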
AI for First-Years The Chronicle reports on efforts to use AI software to increase student success in first-year courses:
Foundational courses are hard to teach. They’re generally large and impersonal. The students, typically first-years, have varying goals and levels of preparation. The instructors are often overworked.
Little wonder, then, that failure rates are high. Failing or withdrawing from introductory courses can lead students to switch majors and slow their progress to a degree.
Learnvia, the new nonprofit, offers AI-enabled courseware including video lessons, homework, quizzes, and an AI tutor. Professors can move pieces around or skip them entirely.
The Gates Foundation invested $55 million in the project.
Student Researchers on AI A new report based on student research on AI has some interesting findings (h/t Christine Bauer):
Continuous Change and the Need for Living Resources: The AI market evolves daily, making static lists obsolete. The evergreen database anticipates this challenge.
Pedagogical Promise vs. Academic Integrity Risks: Tools can personalize learning and streamline feedback, yet concerns about fairness and plagiarism detection remain paramount.
Access and the Digital Divide: Free AI tools can mask inequality. Students noted how premium versions with advanced capabilities can exacerbate disparities, and some majors offer more opportunities than others to develop digital fluency.
Creative Uses Beyond Faculty Imagination: Even when AI is banned for graded work, students use it creatively—like generating practice tests for self-study.
Faculty Shifting from Lecturer to Guide: AI is fundamentally reshaping pedagogical priorities, with educators focusing more on helping students build judgment and critical thinking rather than simply transmitting information.
Back to Bluebooks A Columbia student wrote an op-ed for The Chronicle claiming faculty have no idea how much students are using ChatGPT (which I don’t think is quite true) and suggesting that students need to both be taught how to use AI tools and given assignments where the use of AI isn’t possible:
So rather than fully embracing AI as a writing assistant, the reasonable conclusion is that there needs to be a split between assignments on which using AI is encouraged and assignments on which using AI can’t possibly help. Colleges ought to prepare their students for the future, and AI literacy will certainly be important in ours. But AI isn’t everything. If education systems are to continue teaching students how to think, they need to move away from the take-home essay as a means of doing this, and move on to AI-proof assignments like oral exams, in-class writing, or some new style of schoolwork better suited to the world of artificial intelligence.
I did find myself wondering if this one was written with the help of AI.
Tech Update
Moltbook This week, it’s all about Moltbook, Moltbook, Moltbook. Driven by the development of Clawdbot (subsequently called Moltbot, now OpenClaw, next up: RavenClaw, probably), the bots apparently now have their own social network. From the Washington Post (RIP):
Martel’s reaction resembled that of many others struck by recent activity on Moltbook, a website billed as a social network for bots and modeled after the discussion app Reddit. Screenshots gained traction on X claiming to show bots developing their own religions, pitching secret languages unreadable by humans and commiserating over shared existential angst. In other Moltbook threads, bots claimed to share their recently acquired knowledge, such as the proper way to plant a tree. Some prominent AI proponents expressed awe at the bots’ coordinated conversations, raising the possibility of further collusion among AI programs to help or hurt human goals.
Some folks immediately jumped to the conclusion that the bots might be gaining sentience (à la the ending scenes of Her) but not so fast, says Wired:
Leaders of AI companies, as well as the software engineers building these tools, are often obsessed with zapping generative AI tools into a kind of Frankenstein-esque creature, an algorithm struck with emergent and independent desires, dreams, and even devious plans to overthrow humanity. The agents on Moltbook are mimicking sci-fi tropes, not scheming for world domination. Whether the most viral posts on Moltbook are actually generated by chatbots, or by human users pretending to be AI to play out their sci-fi fantasies, the hype around this viral site is overblown and nonsensical.
This op-ed in the NYT seems to agree. 404 Media reports on some early problems with the site’s security. And don’t forget the end goal: to make money.
Botnet Meanwhile, bots are taking over the internet in general:
“The majority of the internet is going to be bot traffic in the future,” says Toshit Panigrahi, cofounder and CEO of TollBit, a company that tracks web-scraping activity and published the new report. “It’s not just a copyright problem, there is a new visitor emerging on the internet.”
AI in Space I don’t even know what to say about xAI and SpaceX merging, and data centers in space, and one business being valued at over one trillion dollars. I’m just a simple girl from Canyon County, and to my ears, this all sounds absurd. But what do I know, other than what common sense and my own eyes tell me?
Seamless Transfer Gemini wants to make it easy for you to move over from ChatGPT. I haven’t tried it.
AI and the Culture
AI for Good This seems cool: The National Archives is using AI to allow increased access to its many historical documents and archives:
The American Story will be the first and only museum in Washington, DC to use artificial intelligence (AI) to bring each visitor a special, individualized opportunity to explore the records of the American people and engage with American history. This exhibit brings more than two million historic records to life, offering a personalized journey through our nation’s past.
[Yes, I know that the Archives and federal museums in general are undergoing censorship. If you know of articles that explore that in relationship to this project, let me know].
Eating AI An Alaska student got in trouble for literally eating an art piece about AI. He was interviewed about why he did it by The Nation, and Lordy, this part:
CW: Do you have any qualms about the fact that AI art is made by scraping other artists?
GG: Yeah, I mean, that’s part of why I spat it out, because AI chews up and spits out art made by other people.
CW: So during your demonstration, you didn’t swallow any of the exhibit?
GG: I swallowed some of it. I had really been spitting it out near the end. I didn’t want to make too much of a mess, but I also didn’t want to have to spit it out in the back of a police car.
AI Politics and Policy
Pretti Murder AI-generated images are now circulating suggesting that Minneapolis RN Alex Pretti was brandishing a gun before he was killed by ICE agents. This article debunks those images (TW: these are hard to look at).
Tech Updates The Center for Security and Emerging Technology has issued a report on the potential for AI to automate its own AI R&D efforts. There’s careful analysis in the report, and claims are thoughtfully qualified, but it identifies two concerning threats:
Within AI companies, reduced researcher involvement in R&D processes would make it harder for companies to identify, understand, and prevent harms posed by their systems. More speculatively, several attendees emphasized possible risks involving sophisticated AI systems pursuing unwanted (‘misaligned’) goals, which might emerge accidentally in the training process or be cultivated purposefully by malicious actors. In such scenarios, reduced human oversight could hypothetically allow AI agents to leverage the automated AI R&D process toward their own goals.
Second, faster AI progress resulting from AI R&D automation would make it more difficult for humans (including researchers, executives, policymakers, and the public) to notice, understand, and intervene as AI systems develop increasingly impactful capabilities and/or exhibit misalignment. Relevant risks include enabling bad actors (e.g., by making cyber offense capabilities or bioweapons development more accessible) as well as more diffuse social impacts (e.g., effects on labor markets or human-AI relationships). If research progress accelerates, then there may also be an increasing gap between the most advanced systems available publicly and those that exist inside AI companies, making it harder for outsiders to play an effective role in managing risks and increasing the power imbalance between leading AI companies and other actors.
The report also notes that competitive pressures on AI companies make it increasingly likely that they will go for “AI Building AI” well before society and its regulatory apparatuses are ready for it.
Bite-Sized AI
I’m excited to attend the “agentic AI” training advertised up above, and will report back on what I learn! I’m really resonating with this recent post from Marc Watkins:
This is all new to me. My background is in creative writing, not coding. I read novels and short stories throughout graduate school, so I have little concept of how to code. But with agentic AI, I don’t supposedly need to. I can use storytelling or task an AI agent to code something for me that is both practical and usable in mere moments. That’s hard for me to wrap my brain around. But it’s here, and we’re once again trying to figure out how to grapple with AI, and absolutely no one has figured out agents.
Yep. But we try.
Hope you all have a great week, stay warm in the mornings and cool in the afternoons and, given what I know of all of you, hot at night. We’ll make it through together.
Jen



