Showing posts with label critical thinking. Show all posts

Saturday, April 25, 2026

Q&A: In the age of AI, what is a library for?; UVAToday, April 15, 2026

Alice Berry, UVAToday; Q&A: In the age of AI, what is a library for?

"Q. Where do you fall on the AI enthusiast to AI detractor spectrum?

A. A faculty member at another university asked me recently whether it was defensible to ban AI in her course. I said yes.

That probably isn’t what people expect from someone who spent the last three years building a framework for AI literacy. But it was the honest answer for now. She believed her students needed to develop a specific skill that AI use would short-circuit, and banning it was the right call for that course.

What I would ask of faculty who choose that path is to stay open, keep up with how the technology is developing, and be willing to try approaches others have tested. That is part of what the lab is for: to produce case studies that give faculty something real to work from when they are ready to revisit the question.

I’m wary of the two confident positions on AI in higher education right now: the people certain it will transform teaching, and the people certain it will destroy it. Both are getting ahead of what we actually know about what’s happening in our classrooms.

Q. What is the function of a library in this AI age?

A. A research library has always done two things: help people find information, and help them judge it. AI changes the tools, not the mission. If anything, the mission gets sharper. The library is also one of the few places in a university built to convene across disciplines, and AI literacy requires exactly that: technical knowledge, ethics, critical thinking, practical skill, and societal impact all at once. No single department owns that combination. 

A library can hold it together. That is why we are launching the AI Literacy and Action Lab here. Dean Acampora and I share the conviction that AI is an opportunity for the liberal arts, not a threat to them. The lab is built on that shared premise: AI literacy is a liberal arts problem as much as a technical one, and a university that treats it only as technical will get the answer wrong."

Sunday, December 28, 2025

Our king, our priest, our feudal lord – how AI is taking us back to the dark ages; The Guardian, December 26, 2025

The Guardian; Our king, our priest, our feudal lord – how AI is taking us back to the dark ages

"This summer, I found myself battling through traffic in the sweltering streets of Marseille. At a crossing, my friend in the passenger seat told me to turn right toward a spot known for its fish soup. But the navigation app Waze instructed us to go straight. Tired, and with the Renault feeling like a sauna on wheels, I followed Waze’s advice. Moments later, we were stuck at a construction site.

A trivial moment, maybe. But one that captures perhaps the defining question of our era, in which technology touches nearly every aspect of our lives: who do we trust more – other human beings and our own instincts, or the machine?

The German philosopher Immanuel Kant famously defined the Enlightenment as “man’s emergence from his self-imposed immaturity.” Immaturity, he wrote, “is the inability to use one’s understanding without guidance from another”. For centuries, that “other” directing human thought and life was often the priest, the monarch, or the feudal lord – the ones claiming to act as God’s voice on Earth. In trying to understand natural phenomena – why volcanoes erupt, why the seasons change – humans looked to God for answers. In shaping the social world, from economics to love, religion served as our guide.

Humans, Kant argued, always had the capacity for reason. They just hadn’t always had the confidence to use it. But with the American and later the French Revolution, a new era was dawning: reason would replace faith, and the human mind, unshackled from authority, would become the engine of progress and a more moral world. “Sapere aude!” or “Have courage to use your own understanding!”, Kant urged his contemporaries.

Two and a half centuries later, one may wonder whether we are quietly slipping back into immaturity. An app telling us which road to take is one thing. But artificial intelligence threatens to become our new “other” – a silent authority that guides our thoughts and actions. We are in danger of ceding the hard-won courage to think for ourselves – and this time, not to gods or kings, but to code...

With all the benefits AI brings, the challenge is this: how can we harness its promise of superhuman intelligence without eroding human reasoning, the cornerstone of the Enlightenment and of liberal democracy itself? That may be one of the defining questions of the 21st century. It is one we would do well not to delegate to the machine."

Thursday, July 10, 2025

Microsoft Pledges $4 Billion Toward A.I. Education; The New York Times, July 9, 2025

The New York Times; Microsoft Pledges $4 Billion Toward A.I. Education


[Kip Currier: Not one mention of "ethics" or "AI ethics" in this New York Times article.

So, I sent an email to the reporter today (7/10/25):

Dear Natasha Singer,

I was surprised, and actually disconcerted, to not see any mention of "ethics" or "AI ethics" concepts in your article "Microsoft Pledges $4 Billion Toward A.I. Education". Given well-documented concerns about the vital need for ethical guidelines and frameworks vis-a-vis AI from a wide range of stakeholders (e.g. religious leaders/Rome Call for AI Ethics, the U.N. AI Advisory Body, academics, etc.), I would have expected your reporting to at least have mentioned potential ethical considerations about this Microsoft funding plan, which carries such significant implications for education and societies.

Best wishes,

Kip Currier]


[Excerpt]

"Microsoft said on Wednesday that it planned to give more than $4 billion in cash and technology services to train millions of people to use artificial intelligence, amid an intensifying Silicon Valley crusade to embed chatbots into classrooms.

Microsoft, the maker of the Copilot chatbot, said the resources would go to schools, community colleges, technical colleges and nonprofits. The company is also starting a new training program, Microsoft Elevate Academy, to “deliver A.I. education and skilling at scale” and help 20 million people earn certificates in A.I.

“Microsoft will serve as an advocate to ensure that students in every school across the country have access to A.I. education,” Brad Smith, the president of Microsoft, said in an interview on Sunday.

Microsoft did not immediately specify how much of the more than $4 billion the company planned to dispense as grants and how much of it would be in the form of Microsoft A.I. services and cloud computing credits.

The announcement comes as tech companies are racing to train millions of teachers and students on their new A.I. tools. Even so, researchers say it is too soon to tell whether the classroom chatbots will end up improving educational outcomes or eroding important skills like critical thinking.

On Tuesday, the American Federation of Teachers, a union representing 1.8 million members, said it was setting up a national A.I. training center for educators, with $23 million in funding from Microsoft and two other chatbot makers, OpenAI and Anthropic."