Okay, a day late. You should see my inbox.
Hi hi hi! Like you, I'm racing through the day, in a hot sprint to commencement next Saturday. So let's get right to the news...
AI and Human Being
When I do trainings on generative AI, I spend some time talking about the "model" part of "Large Language Model" (it's easy to get stuck just on the L's). We talk briefly about how models can be tweaked/trained/tuned and that gives a lot of power to the folks doing the training. So I read this Erik Hoel essay on gen AI and Studio Ghibli with a ton of interest: "The internet’s Ghiblification was not an accident. Changing a photo into an anime style was specifically featured in OpenAI’s original announcement. Why? Because OpenAI does, or at least seems to do, something arguably kind of evil: they train their models to specifically imitate the artists the model trainers themselves like. Miyazaki for anime seems a strong possibility, but the same thing just happened with their new creative writing bot, which (ahem, it appears) was trained to mimic Nabokov."
AI and Higher Ed
I know it's hard to keep track of all the Executive Orders flapping around these days, but this one on AI and education certainly caught my eye: "By fostering AI competency, we will equip our students with the foundational knowledge and skills necessary to adapt to and thrive in an increasingly digital society." And it looks like federal funding to support this is coming? Who knows?
Library peeps (and everyone else): here's an interesting piece in IHE that argues we already have an ethical framework that could apply to using AI--the Belmont report: "Academia already has an agreed-upon set of ethical principles and processes for assessing potential interventions. The principles in “The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research” govern our approach to research with humans and can fruitfully be applied if we think of potential uses of AI as interventions. These principles not only benefit academia in making assessments about using AI but also provide a framework for technology developers thinking through their design requirements."
I like it!
Here's Marc Watkins on what it means for faculty that the big AI companies have realized that students are their real "power users": "Put bluntly, without access to premium GenAI, faculty will not be able to gauge how this technology impacts student learning. Running your assignment directions through a free model that isn’t as powerful as one of the premium models, or thinking students won’t use the greater usage limits bundled with premium access, is sure to create a false sense of what students who use premium GenAI can and cannot do in the disciplines we teach."
Honestly, though, I could have quoted this whole essay. Take a few minutes to read it, if you can.
[Related, Watkins just let us know about the first of three open-access special issues on AI and education, with lots of focus on writing instruction!].
I don't know about you, but I have some very talented, experienced friends and loved ones who have been looking for work for a while, and it's looking tough out there. The Atlantic's Derek Thompson wonders if it's the first glimmers of AI exerting downward pressure on the job market: "'When you think from first principles about what generative AI can do, and what jobs it can replace, it’s the kind of things that young college grads have done” in white-collar firms, Deming told me. “They read and synthesize information and data. They produce reports and presentations.'" Thompson is just making some educated guesses here. I would add that generative AI is probably making hiring and finding work harder for everyone because of the amount of AI slop and algorithms causing noise in the matchmaking process. But that's just my educated guess.
Related: if you're following AI news (including this newsletter) you've probably heard of Cluely, designed by a Columbia student previously in the news for developing AI to hack tech interviews. Cluely basically encourages you to use AI to "cheat on everything." Kevin Rank shared this 15-minute YouTube video from the AI Daily News explaining the tech (Kevin has created a whole NotebookLM notebook on it, to teach the controversy! So cool!).
Tech Updates
AI may be ending search, as we know it, and that's an existential crisis for...everything digital?
The NYT reports that being polite to our chatbots is expensive (in terms of dollars and energy both) but some ethicists think there is value in it: "But there’s another reason to be kind. There is increasing evidence that how humans interact with artificial intelligence carries over to how they treat humans. 'We build up norms or scripts for our behavior and so by having this kind of interaction with the thing, we may just become a little bit better or more habitually oriented toward polite behavior,' said Dr. Jaime Banks, who studies the relationships between humans and A.I. at Syracuse University."
[Related, another NYT piece on Anthropic studying whether chatbots can develop consciousness, and if so, do they deserve moral standing: "Mr. Fish acknowledged that there probably wasn’t a single litmus test for A.I. consciousness. (He thinks consciousness is probably more of a spectrum than a simple yes/no switch, anyway.) But he said there were things that A.I. companies could do to take their models’ welfare into account, in case they do become conscious someday." Calling Ed Ferrier!]
For my Boise State friends, there's an update from OIT on AI apps being used in Zoom: "Beginning May 12, all third-party AI agents, also known as 'AI bots,' will be disabled in Boise State's Zoom environment. This includes Otter.ai and Read AI. Zoom AI Companion will not be disabled and is the only University-approved AI tool to be used with Zoom."
AI in Politics and Policy
Whoa, Semafor has a monster piece out on how a group chat over Signal for wealthy tech bros has encouraged a form of ideological groupthink among some of the richest and most powerful men in the country/world--these are the same folks who are influencing American politics and guiding the gen AI revolution. Consequential stuff: "The group chats aren’t always primarily a political space, but they are the single most important place in which a stunning realignment toward Donald Trump was shaped and negotiated, and an alliance between Silicon Valley and the new right formed. The group chats are 'the memetic upstream of mainstream opinion,' wrote one of their key organizers, Sriram Krishnan, a former partner in the venture capital firm Andreessen Horowitz (typically styled a16z) who is now the White House senior policy adviser for AI."
Hugging Face has released a pretty interesting chatbot that can estimate the energy consumption of various AI tasks. I took it for a whirl and thought the results were pretty interesting. I asked it how much energy a short 10-question interaction with ChatGPT might use, on average, and with some equivocation, it gave me back this:
"Phone charge: A typical smartphone battery is around 19 Wh. So, a 10-20 question conversation would use less than 1% of a phone's battery.
Lightbulb: A standard incandescent lightbulb uses about 25-100 Wh per hour. So, the energy used for a conversation is equivalent to a few seconds of light from a small bulb."
Good to know!
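If you want to sanity-check the bot's comparisons yourself, the arithmetic is simple. A minimal sketch: the 19 Wh battery figure comes from the quoted answer, but the per-question energy value here is my own assumption, picked to be consistent with the bot's "less than 1%" claim rather than a measured number.

```python
# Back-of-the-envelope check of the chatbot's energy comparisons.
# The 19 Wh battery figure is from the quoted answer; the per-question
# energy (0.015 Wh) is an ASSUMED value, chosen only so the result
# stays consistent with the bot's "less than 1% of a battery" claim.

WH_PER_QUESTION = 0.015   # assumed average energy per question (Wh)
BATTERY_WH = 19           # typical smartphone battery, per the bot's answer
BULB_WATTS = 60           # a standard incandescent bulb

questions = 10
total_wh = questions * WH_PER_QUESTION

battery_pct = total_wh / BATTERY_WH * 100
bulb_seconds = total_wh / BULB_WATTS * 3600  # convert Wh to seconds of runtime

print(f"{total_wh:.2f} Wh total")
print(f"{battery_pct:.2f}% of a phone battery")
print(f"{bulb_seconds:.0f} seconds of a {BULB_WATTS} W bulb")
```

With that assumed per-question number, ten questions come out to a fraction of a percent of a phone charge and single-digit seconds of bulb time, which matches the shape of the bot's answer. Swap in your own per-question estimate to see how sensitive the comparison is.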
AI Image of the Week
Courtesy of SuperHumanAI...folks on Reddit are uploading images to ChatGPT with the prompt "Create the exact replica of this image, don't change a thing" and, uh, clearly the bots have been driving to Ontario on the weekends.
Very strange. Just like everything these days.
That's it for me! Close the laptop, take off your shoes, go outside. Talk to you soon!
Jen