Showing posts with label AI.

Monday, October 13, 2025

US Supreme Court asked to hear dispute over copyrights for AI creations; Reuters, October 10, 2025

Reuters; US Supreme Court asked to hear dispute over copyrights for AI creations

 "A computer scientist on Friday asked the U.S. Supreme Court to reconsider a ruling that a work of art generated by artificial intelligence cannot be copyrighted under U.S. law.

Stephen Thaler told the justices that the U.S. Copyright Office's decision denying copyright protection for the art made by his AI system "created a chilling effect on anyone else considering using AI creatively" and "defies the constitutional goals from which Congress was empowered to create copyright.""

Sunday, October 12, 2025

Tilly Norwood & AI Confusion Will Shape Looming Guild Negotiations, Copyright Experts Agree; Deadline, October 12, 2025

Dade Hayes, Deadline; Tilly Norwood & AI Confusion Will Shape Looming Guild Negotiations, Copyright Experts Agree

"Handel and Mishawn Nolan, managing partner of intellectual property law firm Nolan Heimann, shared their perspectives during a panel Friday afternoon at Infinity Festival in Los Angeles.

Digital scanning of human actors, for the purposes of using their likenesses in film and TV projects is another tricky area for the unions given how untested the legal questions are, the attorneys agreed.

“I actually have a client right now” whose body is being scanned, Nolan said. “What I received [from the company] was just a sort of standard certificate of engagement. It was all rights, just like you would normally use. And I said, ‘Well, what are you gonna do with the data? What is the scope of the use?’”

Because of the intense pressure on productions to move quickly, Nolan said, “everyone would like to just turn around [a talent agreement] tomorrow.” But the complexities of copyright issues raised by AI, which is evolving at a breakneck clip, require a lot more thought, she argued. “The way that we’ve always done business can’t be done in the future. It can’t be done instantaneously,” she continued. “You have to take a moment and think about, what are you doing? What are you capturing? What are you going to use it for? How are you going to use it? How long are you going to have access to it? And what happens in the long term? Who holds onto it? Is it safe? Is it gonna be destroyed?”"

Friday, October 10, 2025

Here's who owns what when it comes to AI, creativity and intellectual property; World Economic Forum, October 10, 2025

Seemantani Sharma, Co-Founder and Intellectual Property & Innovation Expert, Mabill Technologies, World Economic Forum; Here's who owns what when it comes to AI, creativity and intellectual property

"Rethinking ownership

The intersection of AI, consciousness and intellectual property requires us to rethink how ownership should evolve. Keeping intellectual property strictly human-centred safeguards accountability, moral agency and the recognition of human creativity. At the same time, acknowledging AI’s expanding role in production may call for new approaches in law. These could take the form of shared ownership models, new categories of liability or entirely new rights frameworks.


For now, the legal balance remains with humans. As long as AI lacks consciousness, it cannot be considered a rights-holder under existing intellectual property theories. Nonetheless, as machine intelligence advances, society faces a pivotal choice. Do we reinforce a human-centred system to protect dignity and creativity or do we adapt the law to reflect emerging realities of collaboration between humans and machines?


This is more than a legal debate. It is a test of how much we value human creativity in an age of intelligent machines. The decisions we take today will shape the future of intellectual property and the meaning of authorship, innovation and human identity itself."

Sunday, October 5, 2025

America goes gambling; Quartz, October 5, 2025

 Jackie Snow, Quartz; America goes gambling


[Kip Currier: This Quartz article, "America Goes Gambling," is a timely one about a significant AI-driven development: massive growth in online gambling, sports betting, and gambling addictions after the U.S. Supreme Court struck down a de facto ban on these activities (outside of Nevada and tribal casinos) in its 2018 Murphy v. NCAA decision.

I spoke on the issue of AI-enhanced online gambling and sports betting at the September 2025 Faithful Futures: Guiding AI with Wisdom and Witness conference in Minneapolis and am currently finishing a chapter for publication on this emerging topic.]


[Excerpt]

"On any given Sunday this football season, Americans are placing millions in legal sports bets, a level of widespread wagering that would have been almost impossible a decade ago when only Nevada offered legal sportsbooks.

Today's football slate represents the peak of a sports betting boom that has fundamentally altered how Americans watch games. Sunday's action is part of an industry that's grown from $4.9 billion in total annual wagers in 2017 to almost $150 billion in 2024. But beneath the Sunday spectacle lies a growing concern, with addiction specialists reporting record demand for gambling help as the line between sports entertainment and financial risk becomes increasingly blurred.

The transformation has been swift and dramatic. When the Supreme Court struck down the federal sports betting ban in Murphy v. NCAA in 2018, legal sports betting was confined to Nevada and tribal casinos. Today, legal sports betting operates in 39 states and Washington, D.C., with more statehouses considering laws that would greenlight it."

Monday, September 29, 2025

In the fight over AI, copyright is America’s competitive weapon; The Hill, September 29, 2025

The Hill; In the fight over AI, copyright is America’s competitive weapon


"On Monday, a new $100 million super-PAC network, Leading the Future, was introduced to shape artificial intelligence policy in next year’s elections. The group says it will fight for sensible guardrails on AI while pushing back against efforts it believes could slow AI development.

But if Leading the Future is to live up to its name, it must avoid an easy trap: framing copyright protections as an obstacle to American AI competitiveness."

Sunday, September 28, 2025

Why I gave the world wide web away for free; The Guardian, September 28, 2025

Tim Berners-Lee, The Guardian; Why I gave the world wide web away for free

"Sharing your information in a smart way can also liberate it. Why is your smartwatch writing your biological data to one silo in one format? Why is your credit card writing your financial data to a second silo in a different format? Why are your YouTube comments, Reddit posts, Facebook updates and tweets all stored in different places? Why is the default expectation that you aren’t supposed to be able to look at any of this stuff? You generate all this data – your actions, your choices, your body, your preferences, your decisions. You should own it. You should be empowered by it.

Somewhere between my original vision for web 1.0 and the rise of social media as part of web 2.0, we took the wrong path. We’re now at a new crossroads, one where we must decide if AI will be used for the betterment or to the detriment of society. How can we learn from the mistakes of the past? First of all, we must ensure policymakers do not end up playing the same decade-long game of catchup they have done over social media. The time to decide the governance model for AI was yesterday, so we must act with urgency.

In 2017, I wrote a thought experiment about an AI that works for you. I called it Charlie. Charlie works for you like your doctor or your lawyer, bound by law, regulation and codes of conduct. Why can’t the same frameworks be adopted for AI? We have learned from social media that power rests with the monopolies who control and harvest personal data. We can’t let the same thing happen with AI.

So how do we move forward? Part of the frustration with democracy in the 21st century is that governments have been too slow to meet the demands of digital citizens. The AI industry landscape is fiercely competitive, and development and governance are dictated by companies. The lesson from social media is that this will not create value for the individual.

I coded the world wide web on a single computer in a small room. But that small room didn’t belong to me, it was at Cern. Cern was created in the aftermath of the second world war by the UN and European governments who identified a historic, scientific turning point that required international collaboration. It is hard to imagine a big tech company agreeing to share the world wide web for no commercial reward like Cern allowed me to. That’s why we need a Cern-like not-for-profit body driving forward international AI research.

I gave the world wide web away for free because I thought that it would only work if it worked for everyone. Today, I believe that to be truer than ever. Regulation and global governance are technically feasible, but reliant on political willpower. If we are able to muster it, we have the chance to restore the web as a tool for collaboration, creativity and compassion across cultural borders. We can re-empower individuals, and take the web back. It’s not too late."

Wednesday, September 24, 2025

Copyright and AI: Controlling Rights and Managing Risks; Morgan Lewis, September 23, 2025

Joshua M. Dalton (Partner, Boston), Colleen Ganin (Partner, New York), and Michael R. Pfeuffer (Senior Attorney, Pittsburgh), Morgan Lewis; Copyright and AI: Controlling Rights and Managing Risks

"The law on copyright and AI is still developing, with courts and policymakers testing the limits of authorship, infringement, and fair use. Companies should expect continued uncertainty and rapid change in this space."

AI as Intellectual Property: A Strategic Framework for the Legal Profession; JD Supra, September 18, 2025

Co-authors: James E. Malackowski and Eric T. Carnick, JD Supra; AI as Intellectual Property: A Strategic Framework for the Legal Profession

"The artificial intelligence revolution presents the legal profession with its most significant practice development opportunity since the emergence of the internet. AI spending across hardware, software, and services reached $279.22 billion in 2024 and is projected to grow at a compound annual growth rate of 35.9% through 2030, reaching $1.8 trillion.[i] AI is rapidly enabling unprecedented efficiencies, insights, and capabilities in industry. The innovations underlying these benefits are often the result of protectable intellectual property (IP) assets. The ability to raise capital and achieve higher valuations can often be traced back to such IP. According to data from Carta, startups categorized as AI companies raised approximately one-third of total venture funding in 2024. Looking only at late-stage funding (Series E+), almost half (48%) of total capital raised went to AI companies.[ii]Organizations that implement strategic AI IP management can realize significant financial benefits.

At the same time, AI-driven enhancements have introduced profound industry risks, e.g., disruption of traditional business models; job displacement and labor market reductions; ethical and responsible AI concerns; security, regulatory, and compliance challenges; and potentially, in more extreme scenarios, broad catastrophic economic consequences. Such risks are exacerbated by the tremendous pace of AI development and adoption, in some cases surpassing societal understanding and regulatory frameworks. According to McKinsey, 78% of respondents say their organizations use AI in at least one business function, up from 72% in early 2024 and 55% a year earlier.[iii]

This duality—AI as both a catalyst and a disruptor—is now a feature of the modern global economy. There is an urgent need for legal frameworks that can protect AI innovation, facilitate the proper commercial development and deployment of AI-related IP, and navigate the risks and challenges posed by this new technology. Legal professionals who embrace AI as IP™ will benefit from this duality. Early indicators suggest significant advantages for legal practitioners who develop specialized AI as IP expertise, while traditional IP practices may face commoditization pressures."

Monday, September 22, 2025

If Anyone Builds It, Everyone Dies review – how AI could kill us all; The Guardian, September 22, 2025

The Guardian; If Anyone Builds It, Everyone Dies review – how AI could kill us all

"“History,” they write, “is full of … examples of catastrophic risk being minimised and ignored,” from leaded petrol to Chornobyl. But what about predictions of catastrophic risk being proved wrong? History is full of those, too, from Malthus’s population apocalypse to Y2K. Yudkowsky himself once claimed that nanotechnology would destroy humanity “no later than 2010”.

The problem is that you can be overconfident, inconsistent, a serial doom-monger, and still be right. It’s important to be aware of our own motivated reasoning when considering the arguments presented here; we have every incentive to disbelieve them.

And while it’s true that they don’t represent the scientific consensus, this is a rapidly changing, poorly understood field. What constitutes intelligence, what constitutes “super”, whether intelligence alone is enough to ensure world domination – all of this is furiously debated.

At the same time, the consensus that does exist is not particularly reassuring. In a 2024 survey of 2,778 AI researchers, the median probability placed on “extremely bad outcomes, such as human extinction” was 5%. Worryingly, “having thought more (either ‘a lot’ or ‘a great deal’) about the question was associated with a median of 9%, while having thought ‘little’ or ‘very little’ was associated with a median of 5%”.

Yudkowsky has been thinking about the problem for most of his adult life. The fact that his prediction sits north of 99% might reflect a kind of hysterical monomania, or an especially thorough engagement with the problem. Whatever the case, it feels like everyone with an interest in the future has a duty to read what he and Soares have to say."

Friday, September 19, 2025

The 18th-century legal case that changed the face of music copyright law; WIPO Magazine, September 18, 2025

Eyal Brook, Partner, Head of Artificial Intelligence, S. Horowitz & Co, WIPO Magazine; The 18th-century legal case that changed the face of music copyright law

"As we stand at the threshold of the AI revolution in music creation, perhaps the most valuable lesson from this history is not any particular legal doctrine but rather the recognition that our conceptions of musical works and authorship are not fixed but evolving.

Imagine what would have happened had Berne negotiators decided to define the term in 1886. The “musical work” as a legal concept was born from Johann Christian Bach’s determination to assert his creative rights – and it continues to transform with each new technological development and artistic innovation.

The challenge for copyright law in the 21st century is to keep fulfilling copyright’s fundamental purpose: to recognize and reward human creativity in all its forms. This will require not just legal ingenuity but also a willingness to reconsider our most basic assumptions about what music is and how it comes into being.

Bach’s legacy, then, is not just the precedent that he established but the ongoing conversation he initiated – an unfinished symphony of legal thought that continues to evolve with each new technological revolution and artistic movement.

As we face the challenges of AI and whatever technologies may follow, we would do well to remember that the questions we ask today about ownership and creativity echo those first raised in a London courtroom almost 250 years ago by a composer determined to claim what he believed was rightfully his."

Thursday, September 18, 2025

AI could never replace my authors. But, without regulation, it will ruin publishing as we know it; The Guardian, September 18, 2025

Jonny Geller, The Guardian; AI could never replace my authors. But, without regulation, it will ruin publishing as we know it


[Kip Currier: This is a thought-provoking piece by literary agent Jonny Geller. He suggests an "artists’ rights charter for AI that protects two basic principles: permission and attribution". His charter idea conveys some aspects of the copyright area called "moral rights".

Moral rights provide copyright creators with a right of paternity (i.e., attribution) and a right of integrity. The latter can enable creators to exercise some level of control over how their copyrighted works may be adapted. The moral right of integrity, for example, was an argument in cases involving whether black-and-white films (legally) could be or (ethically) should be colorized. (See Colors in Conflicts: Moral Rights and the Foreign Exploitation of Colorized U.S. Motion Pictures.) Moral rights are not widespread in U.S. copyright law because of tensions between the moral right of integrity and the right of free expression/free speech under the U.S. Constitution (whose September 17, 1787 birthday was yesterday). The Visual Artists Rights Act (1990) is a narrow example of moral rights under U.S. copyright law.

To Geller's proposed Artists' Rights Charter for AI I'd suggest adding the word and concept of "Responsibilities". Compelling arguments can be made for providing authors with some rights regarding use of their copyrighted works as AI training data. And, commensurately, persuasive arguments can be made that authors have certain responsibilities if they use AI at any stage of their creative processes. Authors can and ethically should be transparent about how they have used AI, if applicable, in the creation stages of their writing.

Of course, how to operationalize that as an ethical standard is another matter entirely. But the fact that it may be challenging to develop such ethical guidance for authors, and to instill it as a broad standard, doesn't mean it shouldn't be attempted.]


[Excerpt]

"The single biggest threat to the livelihood of authors and, by extension, to our culture, is not short attention spans. It is AI...

As a literary agent and CEO of one of the largest agencies in Europe, I think this is something everyone should care about – not because we fear progress, but because we want to protect it. If you take away the one thing that makes us truly human – our ability to think like humans, create stories and imagine new worlds – we will live in a diminished world.

AI that doesn’t replace the artist, or that will work with them transparently, is not all bad. An actor who is needed for reshoots on a movie may authorise use of the footage they have to complete a picture. This will save on costs, the environmental impact and time. A writer may wish to speed up their research and enhance their work by training their own models to ask the questions that a researcher would. The translation models available may enhance the range of offering of foreign books, adding to our culture.

All of this is worth discussing. But it has to be a discussion and be transparent to the end user. Up to now, work has simply been stolen and there are insufficient guardrails on the distributors, studios, publishers. As a literary agent, I have a more prosaic reason to get involved – I don’t think it is fair for someone’s work to be taken without their permission to create an inferior competitor.

What can we do? We could start with some basic principles for all to sign up to. An artists’ rights charter for AI that protects two basic principles: permission and attribution."

Tuesday, September 16, 2025

AI will make the rich unfathomably richer. Is this really what we want?; The Guardian, September 16, 2025

The Guardian; AI will make the rich unfathomably richer. Is this really what we want?

"Socially, the great gains of the knowledge economy have also failed to live up to their promises. With instantaneous global connectivity, we were promised cultural excellence and social effervescence. Instead, we’ve been delivered an endless scroll of slop. Smartphone addictions have made us more vicious, bitter and boring. Social media has made us narcissistic. Our attention spans have been zapped by the constant, pathological need to check our notifications. In the built environment, the omnipresence of touchscreen kiosks has removed even the slightest possibility of social interaction. Instead of having conversations with strangers, we now only interact with screens. All of this has made us more lonely and less happy. As a cure, we’re now offered AI companions, which have the unfortunate side effect of occasionally inducing psychotic breaks. Do we really need any more of this?"

Monday, September 15, 2025

Google's top AI scientist says ‘learning how to learn’ will be next generation's most needed skill; Associated Press via Pittsburgh Post-Gazette, September 12, 2025

 DEREK GATOPOULOS, Associated Press via Pittsburgh Post-Gazette; Google's top AI scientist says ‘learning how to learn’ will be next generation's most needed skill

"A top Google scientist and 2024 Nobel laureate said Friday that the most important skill for the next generation will be “learning how to learn” to keep pace with change as artificial intelligence transforms education and the workplace.

Speaking at an ancient Roman theater at the foot of the Acropolis in Athens, Demis Hassabis, CEO of Google’s DeepMind, said rapid technological change demands a new approach to learning and skill development...

Greek Prime Minister Kyriakos Mitsotakis joined Mr. Hassabis at the Athens event after discussing ways to expand AI use in government services. Mr. Mitsotakis warned that the continued growth of huge tech companies could create great global financial inequality.

“Unless people actually see benefits, personal benefits, to this (AI) revolution, they will tend to become very skeptical," he said. "And if they see ... obscene wealth being created within very few companies, this is a recipe for significant social unrest.”"

Sunday, September 14, 2025

Preparing faith leaders to prepare others to use artificial intelligence in a faithful way; Presbyterian News Service, September 4, 2025

Mike Ferguson, Presbyterian News Service; Preparing faith leaders to prepare others to use artificial intelligence in a faithful way

"It turns out an engineer whose career included stops at Boeing and Amazon — and who happens to be a person of deep faith — has plenty to say about how faith leaders can use artificial intelligence in places of worship.

Jovonia Taylor-Hayes took to the lectern Wednesday during Faithful Futures: Guiding AI with Wisdom and Witness, which is being offered online and at Westminster Presbyterian Church in Minneapolis. The PC(USA)’s Office of Innovation is among the organizers and sponsors, which also includes The Episcopal Church, the United Methodist Church and the Evangelical Lutheran Church in America.

Think of all the varied ways everyday people use AI, Taylor-Hayes said, including as an aid to streamline grocery shopping and resume building; by medical teams for note-taking; for virtual meetings and closed-captioning, which is getting better, she said; and in customer service.

“The question is, what does it look like when we stop and think about what AI means to me personally? Where does your head and heart go?” she asked. One place where hers goes to is scripture, including Ephesians 2:10 and Psalm 139:14. “God has prepared us,” she said, “to do what we need to do.”

During the first of two breakout sessions, she asked small groups both in person and online to discuss questions including where AI shows up in their daily work and life and why they use AI as a tool."

Saturday, September 13, 2025

World Meeting on Human Fraternity: Disarming words to disarm the world; Vatican News, September 13, 2025

Roberto Paglialonga, Vatican News; World Meeting on Human Fraternity: Disarming words to disarm the world


[Kip Currier: There is great wisdom and guidance in these words from Pope Leo and Fr. Enzo Fortunato (highlighted from this Vatican News article for emphasis):

“Pope Leo XIV’s words echo: ‘Before being believers, we are called to be human.’” Therefore, Fr. Fortunato concluded, we must “safeguard truth, freedom, and dignity as common goods of humanity. That is the soul of our work—not the defense of corporations or interests.”

What is in the best interests of corporations and shareholders should not -- must not -- ever be this planet's central organizing principle.

To the contrary, that which is at the very center of our humanity -- truth, freedom, the well-being and dignity of each and every person, and prioritization of the best interests of all members of humanity -- MUST be our North Star and guiding light.]


[Excerpt]

"Representatives from the world of communication and information—directors and CEOs of international media networks— gathered in Rome for the “News G20” roundtable, coordinated by Father Enzo Fortunato, director of the magazine Piazza San Pietro. The event took place on Friday 12 September in the Sala della Protomoteca on Rome's Capitoline Hill. The participants addressed a multitude of themes, including transparency and freedom of information in times of war and conflict: the truth of facts as an essential element to “disarm words and disarm the world,” as Pope Leo XIV has said, so that storytelling and narrative may once again serve peace, dialogue, and fraternity. They also discussed the responsibility of those who work in media to promote the value of competence, in-depth reporting, and credibility in an age dominated by unchecked social media, algorithms, clickbait slogans, and rampant expressions of hatred and violence from online haters.

Three pillars of our time: truth, freedom, dignity


In opening the workshop, Father Fortunato outlined three “pillars” that can no longer be taken for granted in our time: truth, freedom, and dignity. Truth, he said, is “too often manipulated and exploited,” and freedom is “wounded,” as in many countries around the world “journalists are silenced, persecuted, or killed.” Yet “freedom of the press should be a guarantee for citizens and a safeguard for democracy.” Today, Fr. Fortunato continued, “we have many ‘dignitaries’ but little dignity”: people are targeted by “hate and defamation campaigns, often deliberately orchestrated behind a computer screen. Words can wound more than weapons—and not infrequently, those wounds lead to extreme acts.” Precisely in a historical period marked by division and conflict, humanity—despite its diverse peoples, cultures, and opinions—is called to rediscover what unites it. “Pope Leo XIV’s words echo: ‘Before being believers, we are called to be human.’” Therefore, Fr. Fortunato concluded, we must “safeguard truth, freedom, and dignity as common goods of humanity. That is the soul of our work—not the defense of corporations or interests.”"

Friday, September 12, 2025

GPT-5’s Ethics Guidelines for Using It in Philosophical Research; Daily Nous, September 10, 2025

Daily Nous; GPT-5’s Ethics Guidelines for Using It in Philosophical Research

"In a post last month, we discussed the question, “How much use of AI in our research is acceptable?...

What do you think of ChatGPT-5’s three positions regarding ethical AI use in research? Are they missing anything? Are they too demanding? Are they any good?"

Thursday, September 11, 2025

Books by Bots: Librarians grapple with AI-generated material in collections; American Libraries, September 2, 2025

Reema Saleh, American Libraries; Books by Bots: Librarians grapple with AI-generated material in collections

"How to Spot AI-Generated Books

Once an AI-generated book has made it to your library, it will likely give itself away with telltale signs such as jumbled, repetitive, or contradicting sentences; glaring grammatical errors or false statements; or digital art that looks too smooth around the corners.

Of course, if you can get a digital sneak-peek inside a book before ordering, all the better. But if not, how can you head off AI content so it never arrives on your desk? The following tips can help.

  • Look into who the author is and how “real” they seem, says Robin Bradford, a collection development librarian at a public library in Washington. An author with no digital footprint is a red flag, especially if they are credited with a slew of titles each year. Also a red flag: a book with no author listed at all.
  • Exercise caution regarding self-published books, small presses, or platforms such as Amazon, which filters out less AI-generated content than other vendors do.
  • Think about whether the book is capitalizing on the chance that a reader will confuse it with another, more popular book, says Jane Stimpson, a library instruction and educational technology consultant for the Massachusetts Library System. Does it have a cover similar to that of an existing bestseller? Just as animated Disney movies get imitated by low-budget knockoffs, popular titles get imitated by AI-generated books.
  • Check if there is mention of AI use in the Library of Congress record associated with the book, says Sarah Manning, a collection development librarian at Boise (Idaho) Public Library (BPL). If the book has been registered with the US Copyright Office, its record may mention AI."

Monday, September 8, 2025

Faith leaders bring ethical concerns, curiosity to AI debate at multi-denominational conference; Episcopal News Service (ENS), September 5, 2025

David Paulsen, Episcopal News Service (ENS); Faith leaders bring ethical concerns, curiosity to AI debate at multi-denominational conference

"Some of the most tech-forward minds in the Protestant church gathered here this week at the Faithful Futures conference, where participants wrestled with the ethical, practical and spiritual implications of artificial intelligence. The Episcopal Church is one of four Protestant denominations that hosted the Sept. 2-5 conference. About halfway through, one of the moderators acknowledged that AI has advanced so far and so rapidly that most conferences on AI are no longer focused just on AI...

AI raises spiritual questions over what it means to be human

Much of the conference seemed to pivot on questions that defied easy answers. In an afternoon session Sept. 3, several church leaders who attended last year’s Faithful Futures conference in Seattle, Washington, were invited to give 10-minute presentations on their preferred topics.

“What happens to theology when the appearance of intelligence is no longer uniquely human?” said the Rev. Michael DeLashmutt, a theology professor at General Theological Seminary in New York, New York, who also serves as the Episcopal seminary’s senior vice president.

DeLashmutt argued that people of faith, in an era of AI, must not forget what it means to be Christian and to be human. “Being human means being relational, embodied, justice-oriented and open to God’s spirit,” he said. “So, I think the real risk is not that machines will become human, but that we will forget the fullness of what humanity actually is.”

Kip Currier, a computing and information professor at the University of Pittsburgh, warned that AI is being used by sports betting platforms to appeal to gamblers, including those suffering from addiction. Mark Douglas, an ethics professor at Columbia Theological Seminary, outlined the ecological impact of AI data centers, which need to consume massive amounts of energy and water.

The Rev. Andy Morgan, a Presbyterian pastor based in Knoxville, Tennessee, described himself as his denomination’s “unofficial AI person” and suggested that preachers should not be afraid of using AI to improve their sermons – as long as they establish boundaries to prevent delegating too much to the technology."

Saturday, September 6, 2025

Big Questions About AI and the Church Video; August 25, 2025

Big Questions About AI and the Church Video

Kip Currier: This Big Questions About AI and the Church video (1:12:14) was created by the members of my cohort (Cohort 7) and me. Our cohort grew out of the groundbreaking August 2024 ecumenical AI & The Church Summit in Seattle that we all attended.

The video perhaps raises more questions than it answers; its aim is to encourage reflection and discussion of the many-faceted issues and concerns at the nexus of AI, faith communities, and our broader societies.

Many thanks to our cohort member Rev. Dr. Andy P. Morgan for spearheading, synthesizing, and uploading this video to YouTube. 

Monday, September 1, 2025

TIME100 AI 2025; Time

 Time; TIME100 AI 2025

"Meet the innovators, leaders, and thinkers reshaping our world through groundbreaking advances in artificial intelligence."