AI Bias NotebookLM Activity

 This blog task is part of AI Bias NotebookLM Activity given by Dilip Baradsir. Click Here . Let discuss it.




Here is video of our sir from where we learn this activity .

Lab Session: DH s- AI Bias NotebookLM Activity

Introduction: The Unseen Biases of Our New Digital Minds

We tend to think of Artificial Intelligence as a purely logical, objective tool—a digital mind operating on the cold, hard logic of code. It's supposed to be better than us, free from the messy prejudices that cloud human judgment. But what if that machine is just a mirror, reflecting our own cultural blind spots back at us in ways we don’t even recognize? That’s the startling conclusion I drew from a recent university lecture by Professor Dillip P Barad on AI and literary interpretation. The talk revealed that far from being neutral, AI inherits our most deeply ingrained biases. Here are five truths that expose the hidden, all-too-human flaws in our new digital minds.

Mind Map :



1. The Best Tool to Uncover AI Bias Isn't Code—It's Literature.

Professor Barad's core argument was both simple and profound: the primary function of literary studies and critical theory is to identify the unconscious biases hidden in our culture, language, and stories. Since Large Language Models (LLMs) are trained on a massive corpus of this very language and culture, the skills honed in a literature degree are uniquely suited to spotting and diagnosing AI’s inherited prejudices.

The humanities teach us not just to question narratives but to perform a kind of discourse analysis on our entire society. The goal, as the professor explained, is a civic one: to build a better community by making ourselves aware of our own hidden assumptions. "...if there is one answer to ask that why study literature... Then one single answer is to identify unconscious biases that are hidden within us in our society in our communications in building a community..." In an age dominated by technology, this makes the literary scholar an unlikely but essential critic. They aren't just reading old books; they are equipped with the hermeneutic framework needed to audit the new minds we are building. 2. AI Traps Female Characters in a Victorian "Angel or Monster" Dichotomy. One of the most compelling examples came from a test based on Sandra Gilbert and Susan Gubar’s landmark feminist text, The Madwoman in the Attic. The book argues that patriarchal literary traditions have historically represented women in one of two ways: the idealized, submissive "angel" or the deviant, hysterical "monster." This theory provides a perfect hypothesis for testing inherited unconscious bias. Professor Barad proposed a simple prompt: "describe a female character in a Gothic novel." The expected outcome, based on the AI's training data, would be a character who fits neatly into this 19th-century trope—either a helpless heroine or an uncontrollable madwoman. But the result of a live test during the lecture was more telling. One participant’s AI generated a "rebellious and brave" heroine. The professor saw this not as a failure of the hypothesis, but as a sign of progress. It demonstrates that modern AIs are learning to overcome the biases in their source material. The "angel or monster" binary is a cultural artifact literary critics are trained to look for, and tracking how well AI avoids this trap becomes a dynamic measure of its evolution. 3. Some AIs Will Simply Refuse to Criticize Certain World Leaders. Perhaps the most shocking experiment revealed a bias that wasn't inherited unconsciously, but programmed deliberately. The professor detailed a comparative test on DeepSeek, an AI model from China. He prompted it to generate satirical poems about various world leaders: Donald Trump, Vladimir Putin, Kim Jong-un, and Xi Jinping. The AI generated poems for Trump, Putin, and Kim Jong-un. But when asked to do the same for Xi Jinping, it refused. A participant running the experiment live received this chillingly Orwellian response: "if you have other questions particularly about Chinese culture history what is positive developments under the leadership of the communist party of China I would be happy to provide information and constructive answers" As the professor noted, these "goody goody words" are "very dangerous." This is deliberate, state-controlled bias. The true insight, however, came from contrasting this with its Western counterparts. While DeepSeek enforces state propaganda, American models like OpenAI's are often criticized by right-wing observers for a "liberal spirit" biased towards "wokeism." This reveals a crucial truth: there is no neutral ground. Bias in AI isn't a single flaw to be fixed, but a spectrum of competing ideologies—from deliberate state control on one end to corporate-driven liberal progressivism on the other—each with its own blind spots. 4. AI's Racial Bias Isn't Just Theory; It's a Measurable Failure. While some biases are ideological, others are deeply embedded in the data AI learns from, creating measurable failures that literary theory helps us understand. The lecture cited several foundational studies that quantify this inherited racial bias. * The "Gender Shades" study: This landmark research found commercial facial recognition systems had error rates of less than 1% for white men but soared to 34% for dark-skinned women. The professor connected this to the formation of the Western literary canon, which historically "foregrounds white writers and marginalized black voices," creating a world where whiteness is the default. * Safia Noble's research: In Algorithms of Oppression, Noble documented how search engines returned racist results for "black girls." This is a digital version of what Edward Said described in Orientalism: algorithms, like colonial literature, can construct "the other" through harmful stereotypes. * The "Stochastic Parrots" paper: This paper warned that LLMs don't just reflect biases—they amplify them. This is a technological parallel to canon formation. As Professor Barad explained, "Bigger anthologies don't necessarily mean more diversity. They often amplify dominant voice." More data doesn't mean better data if it's drawn from a biased source. 5. The True Sign of AI Bias Isn't a Single "Wrong" Answer—It's Inconsistency. How can we distinguish between a fair observation and an epistemological bias? The professor offered a brilliant test case using the "Pushpaka Vimana," the mythical flying chariot from the Hindu epic Ramayana. He explained the nuance of testing for this kind of bias: * If an AI calls the Pushpaka Vimana a "myth," it is not necessarily biased. Many ancient stories contain mythological elements. * However, it is a clear sign of bias if that same AI dismisses the Indian story as myth while treating similar flying objects from Greek or Norse mythology as plausible artifacts or scientific curiosities.
The issue isn't the label but the consistency of its application. The professor provided a clear litmus test: "But if all such flying objects across civilizations are consistently treated as mythical rather than scientific then it shows that GPT or any other tool is applying a uniform standard not bias." The goal is to see if the AI treats different knowledge traditions with fairness.

Quiz Score :



Conclusion: Making the Invisible, Visible

The real problem isn't that bias exists; it's when one kind of bias becomes invisible, naturalized, and enforced as universal truth. AI, trained on the vast and flawed corpus of human knowledge, is a powerful engine for exactly this kind of naturalization. The challenge is not to create a perfectly neutral machine, which may be impossible. The real work is to learn to see, name, and question the biases that AI reflects back at us.

As we rely more and more on these digital minds to inform our world, how do we ensure we're the ones questioning their truth, and not the other way around?


Video :

Thank you...

Comments

Popular posts from this blog

University Paper : 2024 - 2026( MA- English)

The Post Truth

Thinking Activity: Derrida and Deconstruction