Generative AI and The Public Humanities
Humanities Tennessee’s Shared Futures Lab is exploring what the future could hold for humanities programs and organizations. In addition to looking at our past and current programming, we’re searching for evidence – or signals of change – that real disruptions or trends are already underway and could lead to widespread adoption and change. Futurists have long pointed to artificial intelligence (AI) as one of these signals, and within the last year we have seen generative AI applied across industries and studied at universities.
Dr. Fei-Fei Li, a pioneering computer scientist and director of the Stanford Institute for Human-Centered Artificial Intelligence, wrote in her 2023 memoir The Worlds I See:
What role can public humanities professionals and organizations play in our AI present and future? And how can we use AI to do our humanities work better?
A Very Brief Introduction to Generative AI
Generative artificial intelligence is a branch of machine learning in which users prompt a model to produce new content in the form of text, images, video, or audio. A large language model (LLM) is one such model: it is trained on enormous collections of human-created text, and it responds to a user prompt by predicting what should come next based on the patterns it learned from its training data.
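To make “prediction based on patterns” concrete, here is a deliberately tiny sketch in Python. Real LLMs use neural networks with billions of parameters rather than simple word counts, but this toy model captures the core idea: it “learns” which word tends to follow another by counting pairs in a small training text, then “generates” by choosing the most frequent continuation. The corpus is invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for the vast text an LLM learns from.
corpus = (
    "public humanities programs bring scholarship to the public . "
    "museums bring history to the public . "
    "libraries bring books to the public ."
).split()

# "Training": count which word follows each word in the corpus.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in training."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "(no prediction)"

print(predict_next("the"))  # -> "public", the pattern the corpus taught it
```

Real models learn far subtler patterns across billions of examples, but the principle is the same: the output is a statistical continuation of the prompt, not a looked-up fact.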
Two concerns with generative AI are biased results and hallucinations. If a model’s training data includes unrepresentative samples or embedded biases, the model will learn those biases and may generate content that promotes stereotypes. AI hallucinations occur when a model produces misleading or incorrect output and presents it as fact. They happen because the model is predicting which word is most likely to come next, whether or not it has correct information to draw on. An additional concern is the use of AI-generated images, videos, and audio files to intentionally deceive the public.
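The same toy model makes both concerns visible. In the sketch below, the invented training text is skewed (most events happen to be “presented by the lab”), and the model is asked about an event it never saw. It still produces a fluent, confident, and wrong completion, because all it can do is echo the most frequent pattern.

```python
from collections import Counter, defaultdict

# Toy corpus skewed so that most events are "presented by the lab".
corpus = (
    "the exhibit was presented by the lab . "
    "the panel was presented by the lab . "
    "the workshop was presented by a partner ."
).split()

bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def complete(prompt: str, length: int = 3) -> str:
    """Greedily extend the prompt with the most frequent next word."""
    words = prompt.split()
    for _ in range(length):
        counts = bigrams[words[-1]]
        if not counts:
            break
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

# The model has no record of who presented "the reading", but it still
# answers confidently, based purely on frequency in its skewed corpus.
print(complete("the reading was presented by"))
# -> "the reading was presented by the lab ."
```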
Role of the Public Humanities
Generative AI is an undeniably powerful technological tool that will require individuals to learn new skills to use effectively and ethically. In October 2023, the National Endowment for the Humanities launched an initiative to support research projects “that explore the impacts of AI-related technologies on truth, trust, and democracy; safety and security; and privacy, civil rights, and civil liberties.”
Humanities disciplines, including history, English, and philosophy, are grounded in research, critical thinking, and analysis. Public humanities projects bring humanities scholarship outside the classroom to the general public in forms such as websites, museum exhibits, publications, discussions, and workshops. Public humanities organizations already engage in difficult conversations and have the potential to expand their programmatic offerings to include discussions about generative AI, its limitations, and the opportunities it presents.
We can imagine futures where organizations convene panel discussions about the implications of generative AI for state-level politics or create museum exhibits that explore the changing nature of labor. Organizations could host workshops for artists and writers about their rights when generative AI models are trained on their work. Perhaps future projects will include training an LLM on nuanced historical narratives so that it provides more complete answers to user prompts.
No matter what form future programming takes, public humanities organizations are well-positioned to engage their audiences in conversations about generative AI.
Shared Futures Lab’s Internal Generative AI Use Case
The Shared Futures Lab began in January 2024. We published our first blog post and social media posts at the end of that month. From the beginning – and partially in anticipation of this post – we’ve used Google’s Gemini to draft social media plans and content. For each blog post, we input the final draft and prompt Gemini to create a Facebook post that references the purpose of the Lab as laid out on the HT website. We then edit the generated result to reflect our organizational tone and style.
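For illustration, a version of this drafting step could also be scripted. Below is a minimal sketch assuming Google’s google-generativeai Python package; the model name, the draft file, the purpose statement, and the prompt wording are all illustrative assumptions, not the Lab’s actual setup.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes an API key from Google AI Studio
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice

# Hypothetical stand-ins for the Lab's inputs.
lab_purpose = "The Shared Futures Lab explores possible futures for the public humanities."
blog_draft = open("blog_post_final_draft.txt").read()

prompt = (
    "Draft a Facebook post announcing this blog post. "
    f"Reference the Lab's purpose: {lab_purpose}\n\n"
    f"Blog post:\n{blog_draft}"
)

response = model.generate_content(prompt)
print(response.text)  # a draft that still needs human editing for tone and accuracy
```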
Sometimes the model generates a post that is intriguing but fundamentally incorrect. One of the most common hallucinations we encounter is a post stating that programs are “presented by the Shared Futures Lab.” Almost all of the programs we have discussed were presented by partner organizations, and none are presented by this Lab.
Once we have a post that accurately describes the full article, we prompt the model to rewrite the Facebook post for LinkedIn. Invariably, the model creates an entirely new post that highlights the business implications or opportunities of the programming, reflecting that platform’s professional audience. These posts also require editing and bias checks before being posted.
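A scripted version of this second step might use a chat session so that the model keeps the edited Facebook post in context when producing the LinkedIn variant. This is again a sketch under the same assumptions (the google-generativeai package and an illustrative model name); the prompts are placeholders.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice

# A chat session preserves context between the two prompts.
chat = model.start_chat()

edited_facebook_post = "..."  # the human-edited Facebook post from the previous step
chat.send_message(f"Here is our final Facebook post:\n{edited_facebook_post}")

response = chat.send_message(
    "Rewrite this post for LinkedIn, keeping the facts unchanged."
)
print(response.text)  # still needs editing and a bias check before posting
```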
Additionally, we have used Gemini to brainstorm social media hashtags and a title for our podcast. In both instances, we used the results as a starting point for a team conversation that led to a final decision.
These mundane applications of generative AI cut down on staff time while enabling us to share our humanities content in new ways. The novelty has worn off, but generative AI has become one more tool that we can use to fulfill our public humanities mission to foster community and civility in Tennessee.