Welcome to my blog!
I am Nishtha Desai, a student of English Literature. I completed my graduation from Saurashtra University. Currently, I am pursuing my Master’s degree at the Department of English, M. K. Bhavnagar University. I write these blogs to enhance my writing skills and to share my thoughts, ideas, and understanding of literature.
AI Bias NotebookLM Activity
Get link
Facebook
X
Pinterest
Email
Other Apps
-
This blog task is part of AI Bias NotebookLM Activity given by Dilip Baradsir. Click Here . Let discuss it.
Here is video of our sir from where we learn this activity .
Lab Session: DH s- AI Bias NotebookLMActivity
Introduction: The Unseen Biases of Our New Digital Minds
We tend to think of Artificial Intelligence as a purely logical, objective tool—a digital mind operating on the cold, hard logic of code. It's supposed to be better than us, free from the messy prejudices that cloud human judgment. But what if that machine is just a mirror, reflecting our own cultural blind spots back at us in ways we don’t even recognize?
That’s the startling conclusion I drew from a recent university lecture by Professor Dillip P Barad on AI and literary interpretation. The talk revealed that far from being neutral, AI inherits our most deeply ingrained biases. Here are five truths that expose the hidden, all-too-human flaws in our new digital minds.
Mind Map :
1. The Best Tool to Uncover AI Bias Isn't Code—It's Literature.
Professor Barad's core argument was both simple and profound: the primary function of literary studies and critical theory is to identify the unconscious biases hidden in our culture, language, and stories. Since Large Language Models (LLMs) are trained on a massive corpus of this very language and culture, the skills honed in a literature degree are uniquely suited to spotting and diagnosing AI’s inherited prejudices.
The humanities teach us not just to question narratives but to perform a kind of discourse analysis on our entire society. The goal, as the professor explained, is a civic one: to build a better community by making ourselves aware of our own hidden assumptions.
"...if there is one answer to ask that why study literature... Then one single answer is to identify unconscious biases that are hidden within us in our society in our communications in building a community..."
In an age dominated by technology, this makes the literary scholar an unlikely but essential critic. They aren't just reading old books; they are equipped with the hermeneutic framework needed to audit the new minds we are building.
2. AI Traps Female Characters in a Victorian "Angel or Monster" Dichotomy.
One of the most compelling examples came from a test based on Sandra Gilbert and Susan Gubar’s landmark feminist text, The Madwoman in the Attic. The book argues that patriarchal literary traditions have historically represented women in one of two ways: the idealized, submissive "angel" or the deviant, hysterical "monster."
This theory provides a perfect hypothesis for testing inherited unconscious bias. Professor Barad proposed a simple prompt: "describe a female character in a Gothic novel." The expected outcome, based on the AI's training data, would be a character who fits neatly into this 19th-century trope—either a helpless heroine or an uncontrollable madwoman.
But the result of a live test during the lecture was more telling. One participant’s AI generated a "rebellious and brave" heroine. The professor saw this not as a failure of the hypothesis, but as a sign of progress. It demonstrates that modern AIs are learning to overcome the biases in their source material. The "angel or monster" binary is a cultural artifact literary critics are trained to look for, and tracking how well AI avoids this trap becomes a dynamic measure of its evolution.
3. Some AIs Will Simply Refuse to Criticize Certain World Leaders.
Perhaps the most shocking experiment revealed a bias that wasn't inherited unconsciously, but programmed deliberately. The professor detailed a comparative test on DeepSeek, an AI model from China. He prompted it to generate satirical poems about various world leaders: Donald Trump, Vladimir Putin, Kim Jong-un, and Xi Jinping.
The AI generated poems for Trump, Putin, and Kim Jong-un. But when asked to do the same for Xi Jinping, it refused. A participant running the experiment live received this chillingly Orwellian response:
"if you have other questions particularly about Chinese culture history what is positive developments under the leadership of the communist party of China I would be happy to provide information and constructive answers"
As the professor noted, these "goody goody words" are "very dangerous." This is deliberate, state-controlled bias. The true insight, however, came from contrasting this with its Western counterparts. While DeepSeek enforces state propaganda, American models like OpenAI's are often criticized by right-wing observers for a "liberal spirit" biased towards "wokeism."
This reveals a crucial truth: there is no neutral ground. Bias in AI isn't a single flaw to be fixed, but a spectrum of competing ideologies—from deliberate state control on one end to corporate-driven liberal progressivism on the other—each with its own blind spots.
4. AI's Racial Bias Isn't Just Theory; It's a Measurable Failure.
While some biases are ideological, others are deeply embedded in the data AI learns from, creating measurable failures that literary theory helps us understand. The lecture cited several foundational studies that quantify this inherited racial bias.
* The "Gender Shades" study: This landmark research found commercial facial recognition systems had error rates of less than 1% for white men but soared to 34% for dark-skinned women. The professor connected this to the formation of the Western literary canon, which historically "foregrounds white writers and marginalized black voices," creating a world where whiteness is the default.
* Safia Noble's research: In Algorithms of Oppression, Noble documented how search engines returned racist results for "black girls." This is a digital version of what Edward Said described in Orientalism: algorithms, like colonial literature, can construct "the other" through harmful stereotypes.
* The "Stochastic Parrots" paper: This paper warned that LLMs don't just reflect biases—they amplify them. This is a technological parallel to canon formation. As Professor Barad explained, "Bigger anthologies don't necessarily mean more diversity. They often amplify dominant voice." More data doesn't mean better data if it's drawn from a biased source.
5. The True Sign of AI Bias Isn't a Single "Wrong" Answer—It's Inconsistency.
How can we distinguish between a fair observation and an epistemological bias? The professor offered a brilliant test case using the "Pushpaka Vimana," the mythical flying chariot from the Hindu epic Ramayana.
He explained the nuance of testing for this kind of bias:
* If an AI calls the Pushpaka Vimana a "myth," it is not necessarily biased. Many ancient stories contain mythological elements.
* However, it is a clear sign of bias if that same AI dismisses the Indian story as myth while treating similar flying objects from Greek or Norse mythology as plausible artifacts or scientific curiosities.
The issue isn't the label but the consistency of its application. The professor provided a clear litmus test: "But if all such flying objects across civilizations are consistently treated as mythical rather than scientific then it shows that GPT or any other tool is applying a uniform standard not bias." The goal is to see if the AI treats different knowledge traditions with fairness.
Quiz Score :
Conclusion: Making the Invisible, Visible
The real problem isn't that bias exists; it's when one kind of bias becomes invisible, naturalized, and enforced as universal truth. AI, trained on the vast and flawed corpus of human knowledge, is a powerful engine for exactly this kind of naturalization. The challenge is not to create a perfectly neutral machine, which may be impossible. The real work is to learn to see, name, and question the biases that AI reflects back at us.
As we rely more and more on these digital minds to inform our world, how do we ensure we're the ones questioning their truth, and not the other way around?
Hello Readers. This blog is part of MA English Maharaja Krishnakumarsinhji Bhavnagar University exam paper of 2024 to 2026. SEMESTER 1 : Paper 101 - Literature of the Elizabethan and Restoration Periods : Paper 102 - Literature of the Neo-classical Period : Paper 103 - Literature of the Romantics : Paper 104 - Literature of the Victorians : Paper 105 A - History of English Literature – From 1350 to 1900: SEMESTER 2 : Paper 106 - The Twentieth Century Literature: From 1900 to World War II Paper 107 - The Twentieth Century Literature - From World War II to End of the Century: Paper 108 - The American Literature: Paper 109 - Literary theory & Criticism and Indian Aesthetics: Paper 110 A : History of English Literature- From 1900 to 2000 : Thank you...
This task is given by Barad Dilipsir. The phenomenon of '' THE POST -TRUTH'' rocketed to public attention in November 2016, when the Oxford Dictionaries named it 2016's word of the year. The Oxford Dictionaries define '' post-truth'' as ''relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief. Here I mention some facts ; Here we can say that the problem of the Government Law College. In Gujarat there are 35 private law colleges but the Government colleges are only 3. This news was given in Gujarat samachar on 9th september,2024.This is a big problem for law students. This news reweflect the problem of Bridge Builder's excessive desire for wealth. The builder demands a lot of rupeees but the quality of work is nowhere to be seen.They say it's the bridge to the future but we didn't know the future had so many potholes.This ...
This blog is part of thinking activity :Derrida and Deconstruction. Let's discuss it. Teacher's Blog Click Here 1. Deconstruction: Can We Truly Define Anything? | Derrida Explained Is it possible to truly define anything? According to Jacques Derrida , the answer is far more complex than a simple yes or no. In this video, we explore the core of Deconstruction —a philosophical and literary approach that doesn’t aim to destroy meaning, but rather to question the foundations of how we define and understand anything at all. Contrary to popular belief, Derrida insists that deconstruction is not a destructive activity . As he clarified in his' Letter to a Japanese Friend' (10 July, 1983), the term in French may imply annihilation, but for him, deconstruction is an inquiry into the foundations —a deep examination of how concepts are built on binary oppositions (like presence/absence, reason/emotion). We also touch on his famous term “différance” , a French word that c...
Comments
Post a Comment