Showing posts with label AI ethics. Show all posts

Thursday, May 14, 2026

'AI has no soul': Pope Leo expected to address AI's ethical challenges; USA TODAY, May 13, 2026

Marc Ramirez, USA TODAY; 'AI has no soul': Pope Leo expected to address AI's ethical challenges

"Is thinking basically computing? Are humans just biological versions of machines – only less efficient than their AI counterparts?

The concept that people may develop such a mindset is a major concern for Catholic observers given the breakneck pace at which AI is developing.

“As soon as you start thinking of yourself as a machine, only not as good, then you’re just a commodity and have no other reason to live,” said John Cavadini, director of the McGrath Institute for Church Life at the University of Notre Dame in South Bend, Indiana. “It’s a pathway to desolation.”

That’s why Cavadini and others are looking forward to the imminent release of Pope Leo XIV’s first major encyclical, expected to address the growing ethical and moral challenges of artificial intelligence.

The treatise will be Leo’s most authoritative document to date, as topical as it is symbolic: Though the Vatican has set no specific date, a May 15 release would come 135 years to the day that Pope Leo XIII, with whom the current pontiff shares his name, issued what is considered the first social encyclical of modern times, Rerum Novarum...

As the term implies, an encyclical is a "circular letter" designed to be shared among a community...

The overarching concern, Daly said, is whether AI will be leveraged to promote human flourishing or whether efficiency and productivity will become the focus, leaving patients behind...

Another overlooked but important risk of AI, Daly said, is that technological advances tend to favor those already represented in such settings – in other words, those adept with new technology and who have electronic health records...

Hayes-Mota hopes the papal document can place the church, especially in the U.S., at the forefront of an emerging and urgent public conversation. The pope, he said, can play a leading role in fostering that conversation and ensuring it’s “anchored in moral values” and the fundamental questions AI is raising."

Tuesday, May 12, 2026

How AI Killed a 133-Year-Old Princeton Tradition; The Atlantic, May 12, 2026

Rose Horowitch, The Atlantic ; How AI Killed a 133-Year-Old Princeton Tradition

The school’s famous Honor Code was no match for chatbot-enabled cheating.

"Much of higher education’s value rests on the assumption that cheating is an exception, not the rule. A diploma is meaningless if employers and graduate programs can’t trust that graduates learned something in college. Prospective students and their families must believe that their tuition dollars will purchase a good education. And taxpayers need to trust that public-school students are getting something from their four years of subsidized education. Rampant AI use breaks down these signals. “It is bad policy to suspect a man of being a rogue in order to be sure that he is a scholar,” The Princetonian warned in 1876. Perhaps so. But the alternative is even worse."

Sunday, May 10, 2026

A.I. Populism Is Here. And No One Is Ready.; The New York Times, May 8, 2026

The New York Times; A.I. Populism Is Here. And No One Is Ready. Silicon Valley oligarchs worried about the risks their technology posed to the world. They forgot about people.

"In one sense, the vision peddled by A.I. companies is remarkably depersonalized: We hand more and more responsibility and judgment off to superintelligent black boxes, which rapidly begin shaping the course of the human future with decisions that remain illegible to the rest of us, including their designers. “People outside the field are often surprised and alarmed to learn that we do not understand how our own A.I. creations work,” Anthropic’s Dario Amodei wrote last year. “They are right to be concerned: This lack of understanding is essentially unprecedented in the history of technology.”

In another sense, and in the meantime, A.I. represents perhaps the most personalized sales pitch ever foisted on the passive American consumer — a vision of a near-total takeover of the country’s economic, social and cognitive lives by tools engineered by just five companies, run by five particular people, several of whom are widely described as sociopaths. The list is so short that you may know most of them by first name: Sam, Dario, Elon and Mark. (Demis Hassabis, who runs Google’s DeepMind, is perhaps less famous.)

These men are all already billionaires, or close to it, and on their current trajectories their wealth and influence look set to expand exponentially as, around them, anti-elitism multiplies, too. Perhaps this is one reason 50 percent of Americans told the Pew Research Center last year they were more concerned than excited about what’s to come from A.I. Only 10 percent said they were more excited. That is a yawning gap into which an entire society is being asked to tumble."

Saturday, May 9, 2026

Tech is turning increasingly to religion in a quest to create ethical AI; AP, May 7, 2026

Krysta Fauria, AP; Tech is turning increasingly to religion in a quest to create ethical AI

"As concerns mount over artificial intelligence and its rapid integration into society, tech companies are increasingly turning to faith leaders for guidance on how to shape the technology — a surprising about-face on Silicon Valley’s longstanding skepticism of organized religion.

Leaders from various religious groups met last week with representatives from companies including Anthropic and OpenAI for the inaugural “Faith-AI Covenant” roundtable in New York to discuss how best to infuse morality and ethics into the fast-developing technology. It was organized by the Geneva-based Interfaith Alliance for Safer Communities, which seeks to take on issues such as extremism, radicalization and human trafficking. The roundtable is expected to be the first of several around the globe, including in Beijing, Nairobi and Abu Dhabi.

Tech executives need to recognize their power — and their responsibility — to make the right decisions, said Baroness Joanna Shields, a key partner in the initiative. She worked as a tech executive with stints at Google and Facebook before pivoting to British politics.

“Regulation can’t keep up with this,” she said. But the leaders of the world’s religions, with billions of followers globally, have the “expertise of shepherding people’s moral safety,” she reasoned. Faith leaders ought to have a voice, Shields said.

“This dialogue, this direct connection is so important because the people who are building this understand the power and capabilities of what they’re building and they want to do it right — most of them,” she said of AI tech executives.

The goal of this initiative, according to Shields, is an eventual “set of norms or principles” informed by different groups and faiths, from Christians to Sikhs to Buddhists, that companies will abide by...

Present at the meeting were a variety of faith groups, including representatives from the Hindu Temple Society of North America, the Baha’i International Community, The Sikh Coalition, the Greek Orthodox Archdiocese of America and The Church of Jesus Christ of Latter-day Saints, widely known as the Mormon church...

The partnership highlights a growing coalition between faith and tech, born out of an effort to create moral AI — a contested concept which begs questions about whether that is possible and what it means...

“There’s some aspect of PR to it. The slogan was ‘Move fast and break things.’ And they broke too many things and too many people,” said Brian Boyd, the U.S. faith liaison for the nonprofit Future of Life Institute...

But other advocates for AI regulation and safety aren’t so sure these efforts are genuine.

“At best it’s a distraction. At worst it’s diverting attention from things that really matter,” said Rumman Chowdhury, the CEO of the nonprofit Humane Intelligence and the U.S. science envoy for AI under the Biden administration.

Chowdhury says she’s not inclined to believe religion is the best place to help answer questions surrounding AI and ethics, but thinks she understands why companies are increasingly turning to it.

“I think a very naive take that Silicon Valley has had for a couple of years related to generative AI was that we could arrive at some sort of universal principles of ethics,” she said. “They have very quickly realized that that’s just not true. That’s not real. So now they’re looking at maybe religion as a way of dealing with the ambiguity of ethically gray situations.”"

Wednesday, May 6, 2026

‘Avatar’ Suit Focuses on Hot Topic in A.I. Age: A Character’s Face; The New York Times, May 5, 2026

The New York Times; ‘Avatar’ Suit Focuses on Hot Topic in A.I. Age: A Character’s Face

"An actress accused the director James Cameron of stealing her likeness to create an “Avatar” character in a lawsuit filed on Tuesday in California — a case that reflects a core fear among Hollywood performers in the artificial intelligence age: losing control of their own faces.

The actress, Q’orianka Kilcher, also sued Disney, which controls the multibillion-dollar “Avatar” franchise, which started in 2009...

The lawsuit involves Neytiri, the digitally created, blue-skinned warrior princess in Mr. Cameron’s three “Avatar” blockbusters. According to the complaint, Mr. Cameron used a photo of Ms. Kilcher as a teenager — without her knowledge — as the foundation for Neytiri, incorporating her features “directly into his production art” and digital production pipeline.

“Neytiri’s lips, chin, jawline and overall mouth shape” in the trilogy “are Q’orianka Kilcher’s,” the complaint said. “This was not a fleeting inspiration or a vague homage; it was a literal transplant of a real teenager’s facial structure.”

In 2010, Ms. Kilcher, who is also an Indigenous rights activist, met Mr. Cameron by chance at a charity event in Hollywood, where he told her that she was the “early inspiration” for Neytiri’s look, according to the complaint. “She did not take this to mean that her actual face had been replicated,” the complaint said.

Ms. Kilcher is suing now, the complaint said, because of an interview that Mr. Cameron gave to a French media outlet in 2024. In the interview, Mr. Cameron mentions Ms. Kilcher and “points to an image of Neytiri and says unambiguously: ‘This is actually her lower face,’” the complaint said. The interview came to her attention a year later."

Monday, May 4, 2026

Poll: The midterms' new big players are pushing agendas that voters don’t fully support; Politico, May 3, 2026

Erin Doherty, Jasper Goodman, Jessica Piper, Daniel Barnes and Brendan Bordelon, Politico; Poll: The midterms' new big players are pushing agendas that voters don’t fully support

"Deep-pocketed political groups tied to artificial intelligence and cryptocurrency are rapidly reshaping the midterm money landscape — but many Americans are uneasy with the industries behind the spending.

New results from The POLITICO Poll find broad public skepticism about crypto and AI, creating a possible conflict for candidates benefitting from an influx of contributions from the two industries. These groups are pouring millions of dollars into competitive 2026 races to elevate politicians who they believe will support their agendas in Washington.

Meanwhile, Americans have been slow to embrace either technology.

A 45 percent plurality of Americans say investing in cryptocurrency is not worth the risk, even if it can yield high returns, and a 44 percent plurality say AI is developing too quickly, according to the April survey conducted by independent firm Public First.

Nearly half of Americans say they trust a traditional bank with their money more than a cryptocurrency platform, while just 17 percent say the opposite. And two-thirds support lawmakers either imposing strict regulations or setting broad principles for the AI industry."

Friday, May 1, 2026

Pentagon Makes Deals With A.I. Companies to Expand Classified Work; The New York Times, May 1, 2026

Julian E. Barnes, The New York Times; Pentagon Makes Deals With A.I. Companies to Expand Classified Work

"The Pentagon announced on Friday that it had reached deals with some of the technology industry’s biggest companies in an effort to expand the military’s artificial intelligence capabilities and increase the number of firms authorized to be on classified networks.

The companies, according to the Defense Department, agreed to allow the Pentagon to employ their technology for “any lawful use,” a standard resisted by Anthropic, which was initially the only artificial intelligence model available on classified networks.

The Pentagon had previously confirmed deals with Elon Musk’s xAI, OpenAI and Google. In addition the Pentagon said it had reached deals with Amazon Web Services, Microsoft, Nvidia and Reflection AI, a start-up."

Wednesday, April 29, 2026

A.I. Bots Told Scientists How to Make Biological Weapons; The New York Times, April 29, 2026

The New York Times; A.I. Bots Told Scientists How to Make Biological Weapons

"Dr. Relman is part of a small group of experts enlisted by A.I. companies to vet their products for catastrophic risks. In recent months, some have shared with The Times more than a dozen chatbot conversations revealing that even publicly available models can do more than disseminate dangerous information. The virtual assistants have described in lucid, bullet-pointed detail how to buy raw genetic material, turn it into deadly weapons and deploy them in public spaces, the transcripts show. Some have even brainstormed ways to evade detection."

Monday, April 27, 2026

Musk’s lawsuit against OpenAI seen as a ‘test case’ for AI ethics; The Christian Science Monitor, April 27, 2026

The Christian Science Monitor; Musk’s lawsuit against OpenAI seen as a ‘test case’ for AI ethics

"A dispute between ChatGPT’s parent company, OpenAI, and one of the company’s founders – billionaire and tech entrepreneur Elon Musk – will play out in a federal court in Oakland, California, beginning April 27. 

Mr. Musk, who left the company in 2018, is suing OpenAI, claiming its leaders manipulated him into thinking he was contributing money to a nonprofit. He wants the company returned to its nonprofit status and seeks monetary compensation. 

OpenAI says Mr. Musk, who has since raised billions through the launch of his own for-profit company xAI, is misrepresenting facts to gain a competitive edge."

Sunday, April 26, 2026

This Is How We Get Moral A.I. Companies; The New York Times, April 26, 2026

 The New York Times; This Is How We Get Moral A.I. Companies

"Artificial intelligence can be wondrous, but the technology underneath is more than a little monstrous. It eats up all the words in the world, from blogs to books, often without permission. It burns whole forests’ worth of energy, digesting that raw material into its models, and gulps billions of gallons of water to cool down. These are the same qualities we perceive in Godzilla, but distributed. Is it any wonder that the Japanese word “kaiju,” or strange beast, has “AI” smack in the middle?...

The entire culture of American technology is built around two terms: disruption and, of course, scale. But ethics are constraints on disruption and scale. Truly ethics-bound organizations — the U.S. justice system, the American Medical Association, the Catholic priesthood — have hard scaling limits. Their rules run deep, and their requirements to serve are so onerous that only a few people can do the job. Punishments for transgressors include losing their licenses, being defrocked and being disbarred. Software industry people might have good degrees and are often good people, but they are making it up as they go along. They take no oath, are inconsistently certified and can only be fired, not exiled from the trade."

Saturday, April 25, 2026

'Too Dangerous to Release' Is Becoming AI's New Normal; Time, April 24, 2026

 Nikita Ostrovsky, Time; 'Too Dangerous to Release' Is Becoming AI's New Normal

 "On April 16, OpenAI announced GPT-Rosalind, a new AI model targeted at the life sciences. It significantly outperforms their current publicly available models in chemistry and biology tasks, as well as experimental design. As with Anthropic’s Claude Mythos and OpenAI’s GPT-5.4-Cyber, also released this month, the model is not available to the general public—reserved, at least initially, for “qualified customers” through a “trusted access program.” 

The releases signal a new and concerning trend of AI companies deeming their most capable models too powerful to entrust to the general public. “I think frontier developers are restricting access to their most capable models because they are genuinely worried about some of the capabilities these models have,” says Peter Wildeford, head of policy at the AI Policy Network, an advocacy group. 

It is unclear why OpenAI decided to restrict access to GPT-Rosalind in particular. An OpenAI spokesperson said in an email that giving access to trusted partners allows the company to “make more capable systems available sooner to verified users, while still managing risk thoughtfully.”

Who decides? 

The rapid advance of AI capabilities raises the question of whether private companies should be making the increasingly weighty decisions about whether and how potentially dangerous AI models should be built, and who should be allowed to use them."

The World’s First Museum of A.I. Art Will Open in Los Angeles as the Art World Ponders Questions of Ethics and Sustainability; Smithsonian Magazine, April 24, 2026

Michele Debczak, Smithsonian Magazine; The World’s First Museum of A.I. Art Will Open in Los Angeles as the Art World Ponders Questions of Ethics and Sustainability

"The four-block strip that houses such Los Angeles institutions as the Walt Disney Concert Hall, the Broad and the Museum of Contemporary Art will get a different type of cultural attraction this summer. Dataland, billed as the world’s first museum dedicated to A.I.-generated art, is set to open on June 20.

The brainchild of digital artists Refik Anadol and Efsun Erkiliç, Dataland will anchor the Grand LA complex, designed by architect Frank Gehry, in downtown Los Angeles. The privately funded museum covers 35,000 square feet, 10,000 of which are reserved for the technology required to support the exhibitions. Rather than traditional halls displaying individual artworks, Dataland’s five galleries and 30-foot ceiling are designed for total immersion.

“It’s very exciting to say that A.I. art is not image only,” Anadol tells Jessica Gelt for the Los Angeles Times. “It’s a very multisensory, multimedium experience—meaning sound, image, video, text, smell, taste and touch. They are all together in conversation.”

The museum’s inaugural exhibition, called “Machine Dreams: Rainforest,” was inspired by a trip to the Amazon. Anadol’s studio created an open-access A.I. model called the Large Nature Model, fed it millions of images of nature, and then prompted the machine to “learn and play with the intelligent behaviors of the natural world,” Richard Whiddington writes for Artnet. The result, as Anadol puts it per the Times, is a “living museum” where visitors can walk among “digital sculptures.” In addition to a kaleidoscope of imagery, museum guests will be immersed in soundscapes, woven from audio that includes oral histories of the Yawanawá people of Brazil and the last recorded call of the extinct Kaua‘i ‘ō‘ō bird of Hawaii, Léa Zeitoun reports for Designboom."

Thursday, April 23, 2026

Penalties stack up as AI spreads through the legal system; NPR, April 3, 2026

NPR; Penalties stack up as AI spreads through the legal system

""Recently we had 10 cases from 10 different courts on a single day," says Damien Charlotin, a researcher at the business school HEC Paris who keeps a worldwide tally of instances of courts sanctioning people for using erroneous information generated by AI...

The numbers started taking off last year, and Charlotin says the rate is still increasing. He counts a total of more than 1,200 to date, of which about 800 are from U.S. courts.

Penalties are also on the rise, he says. A federal court may have set a record last month with an order for a lawyer in Oregon to pay $109,700 in sanctions and costs for filing AI-generated errors.

The professional embarrassments even take place at the level of state supreme courts...

"I am surprised that people are still doing this when it's been in the news," says Carla Wale, associate dean of information & technology and director of the law library at the University of Washington School of Law. She's designing special training in AI ethics for students who are interested. But she also says the ethical rules aren't completely settled...

When lawyers get in trouble for using AI, it's because they've violated the long-standing rule that holds them responsible for the accuracy of their filings, regardless of how they were generated."

Wednesday, April 22, 2026

Anthropic’s Leaked Code Tests Copyright Challenges in A.I. Era; The New York Times, April 22, 2026

The New York Times; Anthropic’s Leaked Code Tests Copyright Challenges in A.I. Era

Artificial intelligence tools are making it faster than ever to reproduce creative work. Does copyright even matter anymore?

"Sigrid Jin was waiting to board a plane when he saw the stunning news that the artificial intelligence start-up Anthropic had accidentally leaked the source code for Claude Code, its popular A.I. agent. Mr. Jin, 25, an undergraduate student, scrambled to post a copy online. His worried girlfriend quickly texted him: Was he violating copyright law?

Mr. Jin turned to a team of A.I. assistants for a solution. He directed them to rewrite the leaked code in another programming language, then shared that version online. Within hours, more than 100,000 people had liked or linked to it.

Anthropic, one of the leading A.I. companies alongside OpenAI, has said the leak had been caused by human error and, citing copyright violations, demanded that GitHub, an online library of computer code, remove posts sharing the code. Thousands of posts were taken down. But Mr. Jin’s version remains online. He said Anthropic had not asked him to take it down.

It is unclear whether Anthropic, which did not respond to questions from The New York Times, is drawing a distinction with the rewritten code. Mr. Jin said he believed rewriting the code transformed it into a new work, one that Anthropic could not claim ownership over.

He said he was driven less by money or fame than by a desire to make a broader philosophical point. What is the value of copyrighted intellectual property in an era when A.I. can easily replicate not just computer code but art, music and literature in minutes?

“I just wanted to raise some ethical questions in the A.I. agent era,” he said. “Any creative work can be reproduced in a second.”"

Anthropic Wants Claude to Be Moral. Is Religion Really the Answer?; The New York Times, April 20, 2026

 David DeSteno, The New York Times; Anthropic Wants Claude to Be Moral. Is Religion Really the Answer?

"In a public statement of its intentions for its Claude chatbot, the artificial intelligence company Anthropic has said that it wants Claude to be “a genuinely good, wise and virtuous agent.” The company raised the moral stakes this month, when it announced that its latest A.I. model, Claude Mythos Preview, poses too great a cybersecurity threat to be widely released. Behind the scenes, Anthropic has been trying to shore up the ethical foundations of its products, working with a Catholic priest and consulting with other prominent Christians to help foster Claude’s moral and spiritual development.

Anthropic’s intentions are admirable, but the project of drawing on religion to cultivate the ethical behavior of Claude (or any other chatbot) is likely to fail. Not because there isn’t moral wisdom in Scripture, sermons and theological treatises — texts that Claude has undoubtedly already scraped from the web and integrated — but because Claude is missing a crucial mechanism by which religion fosters moral growth: a body."

Tuesday, April 21, 2026

Palantir manifesto described as ‘ramblings of a supervillain’ amid UK contract fears; The Guardian, April 21, 2026

The Guardian; Palantir manifesto described as ‘ramblings of a supervillain’ amid UK contract fears

Alarm caused by posts by Alex Karp, the tech firm’s CEO, championing US military dominance and the use of AI weapons

"The US spy tech company Palantir published a manifesto extolling the benefits of American power and implying some cultures are inferior to others – in what MPs have called “a parody of a RoboCop film” and “the ramblings of a supervillain”.

“Some cultures have produced vital advances; others remain dysfunctional and regressive,” wrote Palantir in a 22-point post on X over the weekend, which also called for an end to the “postwar neutering” of Germany and Japan...

The pronouncement is the most recent of a number of high-profile statements from Palantir and its chief executive, Alex Karp, which appear to indicate that Karp views himself as not simply the head of a software company, but a pundit with important insights into the future of civilisation."...

In an interview with CNBC in early March, Karp suggested that AI would “disrupt” the power of “highly educated, often female voters who vote mostly Democrat”, and instead empower “vocationally trained, often male, working-class voters”."

Sunday, April 19, 2026

The philosopher trying to teach ethics to AI developers; NPR, April 17, 2026

NPR; The philosopher trying to teach ethics to AI developers

Thursday, April 16, 2026

Val Kilmer Resurrected by AI: ‘As Deep as the Grave’ Trailer Brings Late Actor Back to the Big Screen (EXCLUSIVE); Variety, April 15, 2026

  Brent Lang, Variety; Val Kilmer Resurrected by AI: ‘As Deep as the Grave’ Trailer Brings Late Actor Back to the Big Screen (EXCLUSIVE)

"The filmmakers behind “As Deep as the Grave” have debuted the trailer for the upcoming historical drama, giving viewers a first look at the AI technology that was used to create Val Kilmer‘s performance.

Kilmer, who died in 2025 after battling throat cancer, was cast as Father Fintan, a Catholic priest and Native American spiritualist, but was too sick to shoot his role. With the cooperation of Kilmer’s estate and his daughter Mercedes, the “As Deep as the Grave” team used generative AI to include the actor in the finished film."

Monday, April 13, 2026

Nobody is governing AI; Quartz, April 8, 2026


Jackie Snow, Quartz; Nobody is governing AI

Artificial intelligence is advancing faster than lawmakers can regulate it, while global AI governance fragments in real time

"Artificial intelligence is now making hiring decisions, tutoring children, optimizing power grids, and targeting weapons systems. The rules governing any of that are, almost everywhere, either nonexistent, stalled in committee, or under active attack.

In the United States, the federal government has spent three years producing executive orders, frameworks, and guidelines, none of which have become law. States that tried to fill the gap have been threatened with funding cuts and lawsuits. In Europe, the most ambitious AI legislation in the world is being delayed or softened before most of it has even taken effect. The technology, meanwhile, has not paused for any of this."

Saturday, April 11, 2026

Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders.; The Washington Post, April 11, 2026

The Washington Post; Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders.

The artificial intelligence company asked religious leaders for guidance on building a moral chatbot.


"The company hosted about 15 Christian leaders from Catholic and Protestant churches, academia and the business world at its headquarters in late March for a two-day summit that included discussion sessions and a private dinner with senior Anthropic researchers, according to four participants who spoke with The Washington Post.


Anthropic staff sought advice on how to steer Claude’s moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a “child of God.”"