Showing posts with label AI literacy.

Monday, December 15, 2025

Kinds of Intelligence | LJ Directors’ Summit 2025; Library Journal, December 2, 2025

 Lisa Peet, Library Journal; Kinds of Intelligence | LJ Directors’ Summit 2025

"LJ’s 2025 Directors’ Summit looked at artificial—and very real—intelligence from multiple angles

If there was any doubt about what issues are on the minds of today’s library leaders, Library Journal’s 2025 Directors’ Summit, held October 16 and 17 at Denver Public Library (DPL), had some ready answers: AI and people.

Nick Tanzi hit both notes handily in his keynote, “Getting Your Public Library AI-Ready.” Tanzi, assistant director of South Huntington Public Library (SHPL), NY, and technology consultant at The-Digital-Librarian.com (and a 2025 LJ Mover & Shaker), began with a reminder of other at-the-time “disruptive” technologies, starting with a 1994 clip of Today Show anchors first encountering “@” and “.com.”

During most of this digital change, he noted, libraries had the technologies before many patrons and could lead the way. Now everyone has access to some form of AI, but it’s poorly understood. And access without understanding is a staff problem as well as a patron problem.

So, what does it mean for a library to be AI-ready? Start with policy and training, said Tanzi, and then translate that to public services, rather than the other way around. Library policies need to be AI-proofed, beginning by looking at what’s already in place and where it might be stressed by AI: policies governing collection development, reconsideration of materials, tool use, access control, the library’s editorial process, and confidential data. Staff are already using some form of AI at work—do they have organizational guidance?

Tanzi advised fostering AI literacy across the library. At SHPL, he formed an AI user group; it has no prerequisite for participation and staff are paid for their time. Members explore new tools, discuss best practices, complete “homework,” and share feedback, which also allows Tanzi to stress-test policies. It’s not a replacement for formal training, but helps him discover which tools work best in various departments and speeds up learning.

We need to demystify AI tools for staff and patrons, Tanzi noted, and teach ethics around them. Your ultimate goal is to create informed citizens; libraries can build community around AI education, partnering with the local school district, colleges, and government."

Sunday, October 12, 2025

Notre Dame hosts Vatican AI adviser, Carnegie Mellon professor during AI ethics conference; South Bend Tribune, October 9, 2025

 Rayleigh Deaton, South Bend Tribune; Notre Dame hosts Vatican AI adviser, Carnegie Mellon professor during AI ethics conference

"The increasingly ubiquitous nature of artificial intelligence in today's world raises questions about how the technology should be approached and who should be making the decisions about its development and implementation.

To the Rev. Paolo Benanti, an associate professor of ethics of AI at LUISS University and the AI adviser to the Vatican, and Aarti Singh, a professor in Carnegie Mellon University's Machine Learning Department, ethical AI use begins when the technology is used to better humanity, and this is done by making AI equitable and inclusive.

Benanti and Singh were panelists during a session on Wednesday, Oct. 8, at the University of Notre Dame's inaugural R.I.S.E. (Responsibility, Inclusion, Safety and Ethics) AI Conference. Hosted by the university's Lucy Family Institute for Data & Society, the conference ran Oct. 6-8 and focused on how AI can be used to address multidisciplinary societal issues while upholding ethical standards...

And, Singh said, promoting public AI awareness is vital. She said this is done through introducing AI training as early as elementary school and encouraging academics to develop soft skills to be able to communicate their AI research with laypeople — something they're not always good at.

"There are many programs being started now that are encouraging from the student level, but of course also faculty, in academia, to go out there and talk," Singh said. "I think the importance of doing that now is really crucial, and we should step up.""

Thursday, July 10, 2025

EU AI Act at the Crossroads: GPAI Rules, AI Literacy Guidance and Potential Delays; JD Supra, July 8, 2025

 Mark Booth, Steven Farmer, Scott Morton, JD Supra; EU AI Act at the Crossroads: GPAI Rules, AI Literacy Guidance and Potential Delays

"The EU AI Act (AI Act), effective since February 2025, introduces a risk-based regulatory framework for AI systems and a parallel regime for general-purpose AI (GPAI) models. It imposes obligations on various actors, including providers, deployers, importers and manufacturers, and requires that organizations ensure an appropriate level of AI literacy among staff. The AI Act also prohibits “unacceptable risk” AI use cases and imposes rigorous requirements on “high-risk” systems. For a comprehensive overview of the AI Act, see our earlier client alert.

As of mid-2025, the implementation landscape is evolving. This update takes stock of where things stand, focusing on: (i) new guidance on the AI literacy obligations for providers and deployers; (ii) the status of the General-Purpose AI Code of Practice under development and its implications; and (iii) the prospect of delayed enforcement of some of the AI Act’s key provisions."