Oh, hi there! Happy Wednesday! Looks like you all survived Thanksgiving, and Black Friday, and Small Business Saturday, and Cyber Monday, and Giving Tuesday. I'm going to call today "Unsubscribe from Marketing Emails Day." Maybe you have a good holiday you'd like to pilot?
While you're thinking about it, take a look at the news this week...let's dive in!
AI in the Workplace
This is kind of niche, but I read this TechCrunch article with some interest, because it talks about how the company Inflection AI is pivoting from developing general-purpose LLMs to building LLMs tailored for other enterprises. This made me wonder about 1) whether predictions are correct that creating ever-better LLMs will become impossible as they run out of novel training data, and 2) whether the real money will be in designing LLMs for particular organizational contexts that prioritize private datasets and particular kinds of functionality. I guess we'll see!
AI in Higher Ed
If you love a fancy advent calendar as much as I do, then The 12 Days of AI might delight you. This is a fantastic way to build some skills in a digestible format (h/t to the new Director of the School for the Digital Future, Kelly Arispe!).
I love bite-size opportunities like the 12 Days, because at this point I think the problem with AI and Higher Ed isn't that there aren't enough training materials or resources out there, it's that there are too many. Lance Eaton also helps us out with the info-glut by creating roundups like Crowdsourcing AI Institutional Policies (h/t Heidi Estrem). V. helpful if you're on committees at colleges trying to figure out where to start!
Marc Watkins published a guest post by David Nelson that reviews OpenAI's new (very late) student guide: "Similarly, and most damning, OpenAI prioritizes throughout an individual interaction with a machine and excludes interpersonal learning. We know that learning is a social activity, that students gain deeper understanding in conversations with their peers, in collaborative spaces where their ideas are challenged, informed and tempered into stronger, more complex beliefs and values. OpenAI promotes Socratic dialogue where 'ChatGPT can act as an intellectual sparring partner' and philosophical debates with historical theorists to develop and compare your ideas. This type of intellectual development is grounded in heterogeneity and interpersonal dynamics, both of which are antithetical to LLMs. And while Chat can provide a starting point for evaluating perspectives relative to established theories and ideas, it incentivizes shortcuts like 'why read Kant when the machine can read it for me?' I’m intrigued by the possibilities of introductory learners asking Chat to 'help me understand Kant’s impenetrable writing because it is really dense and give me examples of what Kant might say in response to my thoughts.' Yet, this learning would likely be done in humanities and social science courses, and those same activities could be accomplished with classmates and an instructor to much greater effect."
Don't know if I agree with everything here, but worth taking a look.
And finally, for my public service people, Stanford HAI has a research brief out on how we might right the imbalance of AI development, which is heavily skewed toward the private sector: "In the last decade, however, the field has been increasingly dominated by the private sector. Building and deploying AI systems has become hugely resource intensive, often requiring billions of dollars in investment, custom supercomputing clusters, and enormous datasets containing much of the available data on the internet. This shift has created a significant power imbalance, where academic talent and government support flows to private companies that now produce the vast majority of the world’s most powerful AI systems."
AI and Politics
A good piece in The Conversation about how truth didn't die in this last election (or, AI didn't kill it, anyway): "In a Pew survey of Americans from earlier this fall, nearly eight times as many respondents expected AI to be used for mostly bad purposes in the 2024 election as those who thought it would be used mostly for good. There are real concerns and risks in using AI in electoral politics, but it definitely has not been all bad. The dreaded 'death of truth' has not materialized – at least, not due to AI. And candidates are eagerly adopting AI in many places where it can be constructive, if used responsibly. But because this all happens inside a campaign, and largely in secret, the public often doesn’t see all the details."
Related: Things might have gone better than hoped here in the US with regard to elections and AI but, uh, maybe not so much in Romania. Ouchescǔ!
AI and Big Tech
Lots of news coverage coming out about Amazon finally entering the chat with AI: "Amazon is building one of the world’s most powerful artificial intelligence supercomputers in collaboration with Anthropic, an OpenAI rival that is working to push the frontier of what is possible with artificial intelligence. When completed, it will be five times larger than the cluster used to build Anthropic’s current most powerful model. Amazon says it expects the supercomputer, which will feature hundreds of thousands of Amazon’s latest AI training chip, Trainium 2, to be the largest reported AI machine in the world when finished."
[Cool, cool. "Trainium 2" does not at all sound like a lethal element in the Marvel Cinematic Universe.]
Tech Tips
This section should probably just be called "Tech Tips from Kevin Rank," but we'll all just have to wait for Kevin to get his own newsletter for that (I'll plug it here if you do, Kevin!). Read on for advice on how to streamline answers to student questions about assignments:
Using the Gemini @Gmail extension to generate FAQ questions from your email.
I have a fairly large final project in my class. It spans 4 weeks of lectures, 2 major assignments, and a few smaller individual pieces.
I have Instructions, Slides, and Rubrics that are ready to be used with an AI. I then have 2 lectures a week across 2 sections, and I generate automated captions on those videos. Next, I put all of that into a CustomGPT. The materials are also in NotebookLM, and I have shown students how to use it.
This gives me a large amount of data that is RAG-accessible. Since our school uses Google, we can use Gemini. They just enabled extensions this week, letting us use the Gmail, Docs, and Drive extensions. I realized that, with the Gmail extension, I can ask Gemini to create a list of FAQ questions for me. Then I use those questions in my CustomGPT to answer the student questions.
The Gemini Prompt - Remember to use the @gmail extension:
Analyze all emails from the past three weeks for questions related to the final project, including individual or team components (e.g., 'Project A'). Identify and extract questions that are:
- Asked repeatedly by different students.
- Relevant to understanding, completing, or clarifying the final project requirements.
- Notable for their importance, even if asked only once.
For each question, provide the original phrasing (as written by the student) or a concise, rephrased version that captures the same meaning. Organize these questions into a clear and comprehensive FAQ-style list, and ensure the output is detailed, well-organized, and easy to reference.
I took the questions and put them into my ChatGPT CustomGPT with a simple prompt:
I have compiled a list of questions about the final project. Please use the information you have to create answers for each item. Make it into a list format.
<Results from Gemini>
When you add the results from Gemini, you may get a few questions that don't make sense; I manually removed or revised those. Because of how the prompt is written, it also flags when a question has been substantially rephrased, letting you choose whether the new version is better or should be dropped.
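If you want to sanity-check what the Gemini step hands back (or you don't have the extension enabled), the "find repeatedly asked questions" part can be roughly approximated locally. Here's a minimal Python sketch, assuming you've already exported recent email bodies as plain strings; the normalization is deliberately crude compared to what an LLM does:

```python
from collections import Counter
import re

def extract_faq_candidates(email_bodies, min_count=2):
    """Pull question sentences out of email text and surface repeats.

    Rough stand-in for the Gemini step: grab sentences ending in '?',
    normalize them, and count how often the same question appears.
    """
    counts = Counter()
    originals = {}
    for body in email_bodies:
        # Find question-like sentences (anything ending in '?').
        for q in re.findall(r"[^.!?]*\?", body):
            q = q.strip()
            if not q:
                continue
            # Normalize: lowercase, strip punctuation, collapse whitespace.
            key = re.sub(r"[^a-z0-9 ]", "", q.lower())
            key = re.sub(r"\s+", " ", key).strip()
            counts[key] += 1
            originals.setdefault(key, q)  # remember first original wording
    # Keep questions asked at least min_count times, most frequent first.
    return [(originals[k], n) for k, n in counts.most_common() if n >= min_count]

emails = [
    "Hi! Can we get an extension on the final project?",
    "Hello! Can we get an extension on the final project? Thanks either way.",
    "Is Project A a team assignment or an individual one?",
]
print(extract_faq_candidates(emails))
# → [('Can we get an extension on the final project?', 2)]
```

In practice you'd want fuzzier matching than this (an LLM groups paraphrases; exact-normalized matching only catches near-identical wording), but it's enough to spot the questions ten different students have asked.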
This all came to me after I answered the 10th question today about whether they could have an extension on the due date (something I have literally told them about in at least 3 separate lectures, which are also available to watch...).
-- I don't like assigning due dates during break. So while the due date is 11/24, I don't penalize them if they keep working on it and turn it in by the end of break, 12/1.
If you end up trying this out, let me know how it goes.
AI(-related) Image of the Week
From newsletter Superhuman AI:
LOLS! H/t Rose Sellars for the newsletter tip.
That's it for me! Remember, all I want for Christmas is you, and for you to
Talk to you soon!
Jen