CRITER Weekly RoundUp
Wednesday, February 11, 2026
Hello everyone, and happy Wednesday!
One thing I really enjoyed this week was Bad Bunny’s Super Bowl halftime performance, duh.
Okay, on to the AI!
AI and Higher Ed
Behavioral agility Interesting little primer on thinking through organizational adoption of AI, using the Crucial Conversations framework (h/t Melissa Jensen):
However, when it comes to reaping organizational benefits, the path is usually more tortured. The constraint on producing organization-wide benefits has less to do with the muscularity of your AI system and more to do with the agility of your human system. Unless human habits keep pace with AI insights, the results can be anything from disappointing to disastrous. What your human system needs is behavioral agility—the ability or capacity to adapt to change.
Yes, and there are good reasons organizations and workers resist these changes. But I think the overall sentiment here is right. It’s not enough to just make a technology available. You have to invest in the humans who will be using it, too.
Academia.edu tries podcasts And academics aren’t in love with the results:
They’ve taken to Academia.edu’s own feedback forums to raise their objections, calling it “offensive,” “borderline impersonation/plagiarism,” and accusing the platform of using “human work … as a free content farm for AI training.” Several academics have left the platform altogether in protest.
The site’s founder and chief executive, Richard Price, acknowledged the podcast could make mistakes and said his engineers are working on improvements. “We’re still in the early days of audio generation, just as an industry,” he said, describing the mistakes as “occasional weirdnesses … like teething issues and technical glitches.”
‘And therefore’ and pray Some of the world’s best mathematicians aren’t too worried that AI is going to “solve math”:
WILLIAMS One test on my problem produced an interesting series of responses. The model would come up with an answer, and say, “OK, this is the final solution.” Then it would say, “Wait, stop, what about this?” and modify its answer in some way. And so on: “OK, here’s the final solution. Wait, there’s a catch!” It went into an infinite loop.
Another response gave an answer to a closely related but different question.
TAMARA KOLDA My preliminary results were disappointing in that the A.I. just was confused about the problem, ignoring key information in some parts of the answer, but not even being consistent. I’ve since revised the problem statement and added some more explicit instructions to try to give the A.I. a better chance. So, we’ll see how it goes with the final results.
MARTIN HAIRER One thing I noticed, in general, was that the model tended to give a lot of details on the things that were easy, where you would be like: “Yeah, sure, go a bit faster. I’m bored with what you’re saying.” And then it would give very little detail with the crux of the argument. Sometimes it would be like reading a paper by a bad undergraduate student, where they sort of know where they’re starting from, they know where they want to go, but they don’t really know how to get there. So they wander around here and there, and then at some point they just stick in “and therefore” and pray.
AI Awakenings Enjoyed the latest from Lance Eaton where he argues for the importance of “awakening” and critical engagement with AI, an argument I agree with:
I want to talk about locating research as a practice we might look at through this lens of new moves. If we have to be critically engaged with AI, not critically disengaged (and I would argue we do), what does that look like?
Here’s a question. How many people have had students submit works cited or bibliographies of research that didn’t exist?
That’s all of us that typically assign such things, right?
What does that tell us? Well, it tells us students are engaging with AI. They’re not necessarily engaging critically, but they are engaging with it.
And for many of us, we often stop the conversation there. “See, it didn’t work. See, they are misusing it.”
Here’s the first part of finding new moves.
Phew. So much of our current moment, in my view, is about acknowledging our resistance and fatigue, and yet seeing that we are being called to creative, critical engagement. New moves.
This op-ed from The Chronicle goes even further, arguing universities are neglecting their duty to critically analyze the possible harms of AI before pushing adoption.
Academic Integrity Lawsuit A student at Adelphi who was erroneously accused of cheating with AI (flagged by Turnitin) and was not allowed to appeal has now won his lawsuit against the university.
AI and the Culture
Don’t let the machines do the living Anne Helen Peterson has an essay out about the relationship between AI and its productivity promises:
More efficiency doesn’t mean less work. It means more of it — for the same pay. We raise the expectations of what we can produce, and at what cost, and then as the market continues to demand further optimization, less waste, more product, we raise the expectations yet again. To meet them, some of us sacrifice sleep or health; others sacrifice anything approximating leisure or community. We produce more but learn less. And whoever we’re producing the product for — readers, viewers, listeners, managers, supervisors — they’re also trying to do more with less time, and optimize their work (or leisure, or parenting) and have far less attention, energy, and time to expend on whatever we produce.
In these situations, AI optimization simply accelerates the bullshit work cycle.
She discusses implications for teaching and learning, too. And a new article out from Harvard Business Review cosigns many of her claims. Worth looking at.
More on Claude’s constitution Had a few of you reach out last week (!!!) about Claude’s Constitution. Here’s a piece from Wired that provides more helpful context:
Amanda Askell, the philosophy PhD who was lead writer of this revision, explains that Anthropic’s approach is more robust than simply telling Claude to follow a set of stated rules. “If people follow rules for no reason other than that they exist, it’s often worse than if you understand why the rule is in place,” Askell explains. The constitution says that Claude is to exercise “independent judgment” when confronting situations that require balancing its mandates of helpfulness, safety, and honesty.
And The New Yorker has a long piece about what’s happening over at Anthropic, too (calling Ed Ferrier).
AI and high-stakes advice I’ve done a number of trainings lately and have received questions about whether chatbots can be used for financial and legal advice. My rule of thumb: the higher the stakes of your decision, the more you should double- and triple-check the bot’s work (or maybe not use it at all). And I still raise privacy concerns with folks who upload sensitive information to these tools. The NYT reports on the increasing number of people using chatbots for retirement advice:
“A.I. is filling a gap for millions of people who may not have access to traditional financial guidance,” said Courtney Alev, a consumer financial advocate at Intuit Credit Karma. “If used thoughtfully, it can help people start planning for retirement earlier, set clearer goals and make more informed decisions.”
But using A.I.-powered financial guidance can come with significant risks. Chatbots can produce inaccurate or overly generalized advice, misinterpret personal circumstances or offer recommendations that lack important context, said Megan Slatter, a wealth adviser at Crewe Advisors. Despite the perceived benefits, more than half of Americans who acted on financial advice from generative A.I. told Credit Karma they ultimately made a poor financial decision or a mistake.
Grok and AI Sexual Harassment We’ve talked about Grok being used on X to create CSAM and other harmful images of women, girls, and others, but this report from 404 Media really drives home how awful this stuff can be:
“I think that people assume, because the pictures aren’t real, that it’s not as damaging,” Brewer told me. “But if anything, this was worse because it just fills you with such a sense of lack of control and fear that they could do this to anyone. Children, women, literally anyone, someone could take a picture of you at the store, going grocery shopping, and ask AI or whatever to do this.”
AI Politics and Policy
AI Labels Seeing some discussion around wanting more oversight of AI labeling practices. Tech Policy Press has this on the challenges of designing effective AI disclosures:
Regulators already know what happens when disclosures are treated as checkboxes. Cookie banners were meant to give people meaningful choices. In practice, many became click fatigue, a ritual that trains users to accept or ignore without understanding. Recent experiments show that cookie consent behavior is heavily driven by banner design and friction, with many users settling into stable “always accept” habits across sites, a pattern consistent with click fatigue rather than informed choice. European data protection regulators eventually had to confront not just whether information was presented, but whether interface design manipulated or exhausted users. The European Data Protection Board’s guidelines on deceptive design patterns in social media interfaces show why usable design is part of compliance.
AI labeling appears to be on track to repeat the same mistake, but with higher civic stakes. Synthetic media disclosures are being built into products optimized for speed, emotion, and engagement. If transparency is not designed for human comprehension in that environment, it risks degrading into performative transparency rather than functional public protection.
Watermarking? Interesting to see this op-ed in the Chronicle, then, arguing that watermarking is a relatively simple solution universities could adopt to detect AI use:
The problem of discouraging students from using LLMs seems tractable. It could be solved through cooperation and enforcement between LLM companies, learning management systems (LMS), universities, and the government. All LLM platforms would be required to watermark their text; they would be forced to build tools that integrate with systems like Canvas so those watermarks can be detected in student submissions. Instructors could then use the watermark detector to figure out whether students have used LLMs and take action.
Ah, okay. So, not simple.
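For context on what the op-ed is imagining: the watermarking schemes usually proposed for LLM text are statistical, not visible marks. Roughly, the generator biases its word choices toward a pseudorandom “green” subset of the vocabulary keyed to the previous token, and a detector that knows the key counts how often green words appear. Here’s a minimal toy sketch of that idea (every name and number below is made up for illustration; this is not any vendor’s actual scheme):

```python
import hashlib
import random

VOCAB = [f"w{i}" for i in range(1000)]  # toy vocabulary

def green_list(prev_token: str, frac: float = 0.5) -> set:
    # Key the green/red split to the previous token so it is
    # reproducible by anyone who knows the scheme.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * frac)))

def generate(n: int, bias: float = 0.9, seed: int = 0) -> list:
    # Stand-in "LLM": picks a green-listed token with probability
    # `bias`, otherwise samples the whole vocabulary.
    rng = random.Random(seed)
    out = ["<s>"]
    for _ in range(n):
        greens = green_list(out[-1])
        if rng.random() < bias:
            out.append(rng.choice(sorted(greens)))
        else:
            out.append(rng.choice(VOCAB))
    return out[1:]

def green_fraction(tokens: list) -> float:
    # Detector: recompute each green list and count hits.
    # Watermarked text scores well above the ~0.5 baseline of
    # ordinary (human) text.
    hits = sum(1 for prev, tok in zip(["<s>"] + tokens, tokens)
               if tok in green_list(prev))
    return hits / len(tokens)
```

Even this toy version shows why “simple” is doing a lot of work in the op-ed: detection only works if every provider cooperates on a shared scheme, and paraphrasing or light editing of the output degrades the statistical signal.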
AI and the Economy
This piece in The Atlantic. Oof:
AI is already transforming work, one delegated task at a time. If the transformation unfolds slowly enough and the economy adjusts quickly enough, the economists may be right: We’ll be fine. Or better. But if AI instead triggers a rapid reorganization of work—compressing years of change into months, affecting roughly 40 percent of jobs worldwide, as the International Monetary Fund projects—the consequences will not stop at the economy. They will test political institutions that have already shown how brittle they can be.
The question, then, is whether we’re approaching the kind of disruption that can be managed with statistics—or the kind that creates statistics no one can bear to count.
Bite-Sized AI
This week I tried out the Comet Browser from Perplexity. Comet is Perplexity’s new “agentic browser,” meaning it can supposedly do stuff for you, like navigate webpages and perform tasks.
We’ll see.
I downloaded the Comet app to my Mac. The app opens to a pleasant red, rotating planet (or blue basketball?) and auto-plays soothing ambient music.
It asked me to give my Avatar a name and then prompted me to check some boxes:
Make Comet your default? (nope)
Add Comet to your dock? (sure)
Open Comet when your computer starts? (nah)
Help us improve Comet (no, who has the time)
Then I’m prompted to sign in to Google, and the first tripping hazard appears: I’m asked whether I’d like to allow Comet to access and use my saved passkeys so that it can sign me in with them.
This makes me nervous! Agentic AI having access to logins and stuff makes me nervous! But I guess I do that with my current preferred browser, Firefox, so yikes, I agree. Maybe this is a terrible decision? I trust Firefox but have no reason to trust Comet?
But because I love you all, and love being a fool, I persist.
I’m then asked if I want to “always browse without ads.” Sounds good, but then I’d have to make Comet, not Firefox, my default browser, and (see above) I’m not ready for that. So, no.
Okay, so I’m finally to the chat window, which looks pretty similar to other chatbot interfaces. It suggests a number of “Assistant” prompts, like “Summarize and follow-up” or “Find that moment.”
I select “Find that moment.” The example it gives to illustrate: “Find the exact moment in the moon landing video, with timestamp, when Neil Armstrong says his famous ‘One step...’ line.” It does just that, taking me to a YouTube video of the moon landing. The results also feature a tab with links, a tab with images, and a tab with videos.
This interface is pretty cool. I can see how this might be a better search experience than using Google Search, where you might have to wade through a variety of links, separately search Google Images, etc.
I do notice Perplexity sneakily upgraded my search results to what I would see with a paid “pro” account, so I feel like they’re definitely trying to reel me in; results for the free version probably won’t be so satisfying moving forward.
But I don’t totally see the “agentic” aspect of Comet yet, either.
I try this prompt: “Build me a short powerpoint presentation that summarizes the reactions to Bad Bunny’s Superbowl Halftime performance, including slides that explain the symbolism of the show, the controversy over how many views it received, and critiques of the performance.”
I get an outline of a presentation (superficial at best), like I would in ChatGPT, but I’d have to upgrade to Pro to get extended access to Perplexity’s file and app creation.
I could probably ask it to do some automated tasks for me, but I’m still wary on the security front, so I haven’t.
And with that, I’m out of time. I would need to see more examples of how this is fundamentally better than just using Perplexity or even the other chatbots I normally use, and also don’t want to sign up to pay for yet another pro version.
Alright, that’s it for me! More soon, have a great week, pull up your knee-socks and keep your shins warm, we have a few more chilly days ahead.
Jen