Showing posts with label AI ethics. Show all posts

Friday, November 7, 2025

The ethics of AI, from policing to healthcare; KPBS; November 3, 2025

Jade Hindmon / KPBS Midday Edition Host, Ashley Rusch / Producer, KPBS; The ethics of AI, from policing to healthcare

"Artificial intelligence is everywhere — from our office buildings, to schools and government agencies.

The Chula Vista Police Department is joining cities to use AI to write police reports. Several San Diego County police departments also use AI-powered drones to support their work. 

Civil liberties advocates are concerned about privacy, safety and surveillance. 

On Midday Edition, we sit down with an expert in AI ethics to discuss the philosophical questions of responsible AI.

Guest:

  • David Danks, professor of data science, philosophy and policy at UC San Diego"

Wednesday, November 5, 2025

Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI; ZME Science, November 4, 2025

Tudor Tarita, ZME Science; Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI


[Kip Currier: This is a troubling, eye-opening report by Originality.ai on AI-generated books proliferating on Amazon in the sub-area of "herbal remedies". As a ZME Science article on the report suggests, if this is the state of herbal books on the world's largest bookseller platform, what is the state of other book areas and genres?

The lack of transparency and authenticity vis-a-vis AI-generated books is deeply concerning. If a potential book buyer knows that a book is principally or wholly "authored" by AI and still elects to purchase it with that knowledge, that's their choice. But, as the Originality.ai report identifies, potential book buyers are being presented with fake author names on AI-generated books, and neither the purveyors of those books nor the platforms that make them available for purchase disclose that the works were not written by human experts and authors. That is a deceptive business practice and consumer fraud.

Consumers should have the right to know material information about all products in the marketplace. No one (except bad actors) would countenance children's toys that deceptively contain harmful lead, or dog and cat treats made with substances that can cause harm or death. Why should consumers not be similarly concerned about books that purport to be created by human experts but may contain information that can cause harm, and even death, in some cases?

Myriad ethical and legal questions are implicated, such as:

  • What are the potential harms of AI-generated books that are falsely attributed to human authors?
  • What responsibility do platforms like Amazon have for fake products?
  • What responsibility do platforms like Amazon have for AI-generated books?
  • What do you as a consumer want to know about books that are available for purchase on platforms like Amazon?
  • What are the potential short-term and long-term implications of AI-generated books falsely attributed to human authors for consumers, authors, publishers, and societies?]


[Excerpt]

"At the top of Amazon’s “Herbal Remedies” bestseller list, The Natural Healing Handbook looked like a typical wellness guide. With leafy cover art and promises of “ancient wisdom” and “self-healing,” it seemed like a harmless book for health-conscious readers.

But “Luna Filby”, the Australian herbalist credited with writing the book, doesn’t exist.

A new investigation from Originality.ai, a company that develops tools to detect AI-generated writing, reveals that The Natural Healing Handbook and hundreds of similar titles were likely produced by artificial intelligence. The company scanned 558 paperback titles published in Amazon’s “Herbal Remedies” subcategory in 2025 and found that 82% were likely written by AI.

“We inputted Luna’s author biography, book summary, and any available sample pages,” the report states. “All came back flagged as likely AI-generated with 100% confidence.”

A Forest of Fakes

It’s become hard (sometimes, almost impossible) to distinguish whether something is written by AI. So there’s often a sliver of a doubt. But according to the report, The Natural Healing Handbook is part of a sprawling canopy of probable AI-generated books. Many of them are climbing Amazon’s rankings, often outselling work by real writers...

Where This Leaves Us

AI is flooding niches that once relied on careful expertise and centuries of accumulated knowledge. Real writers are being drowned out by machines regurgitating fragments of folklore scraped from the internet.

“This is a damning revelation of the sheer scope of unlabeled, unverified, unchecked, likely AI content that has completely invaded [Amazon’s] platform,” wrote Michael Fraiman, author of the Originality.ai report.

The report looked at herbal books, but there are likely many other niches hidden

Amazon’s publishing model allows self-published authors to flood categories for profit. And now, AI tools make it easier than ever to generate convincing, although hollow, manuscripts. Every new “Luna Filby” who hits #1 proves that the model still works.

Unless something changes, we may be witnessing the quiet corrosion of trust in consumer publishing."

Tuesday, October 21, 2025

Staying Human in the Age of AI: November 6-7, 2025; The Grefenstette Center for Ethics, Duquesne University, November 6-7, 2025

 The Grefenstette Center for Ethics, Duquesne University; Staying Human in the Age of AI: November 6-7, 2025

"The Grefenstette Center for Ethics is excited to announce our sixth annual Tech Ethics Symposium, Staying Human in the Age of AI, which will be held in person at Duquesne University's Power Center and livestreamed online. This year's event will feature internationally leading figures in the ongoing discussion of ethical and responsible uses of AI. The two-day Symposium is co-sponsored by the Patricia Doherty Yoder Institute for Ethics and Integrity in Journalism and Media, the Center for Teaching Excellence, and the Albert P. Viragh Institute for Ethics in Business.

We are excited to once again host a Student Research Poster Competition at the Symposium. All undergraduate and graduate student research posters on any topic in the area of tech/digital/AI ethics are welcome. Accepted posters will be awarded $75 to offset printing costs. In addition to that award, undergraduate posters will compete for the following prizes: the Outstanding Researcher Award, the Ethical PA Award, and the Pope Francis Award. Graduate posters can win Grand Prize or Runner-Up. All accepted posters are eligible for an Audience Choice award, to be decided by Symposium attendees on the day of the event! Student Research Poster submissions will be due Friday, October 17. Read the full details of the 2025 Student Research Poster Competition.

The Symposium is free to attend and open to all university students, faculty, and staff, as well as community members. Registrants can attend in person or experience the Symposium via livestream. Registration is now open!"

Saturday, October 18, 2025

OpenAI Blocks Videos of Martin Luther King Jr. After Racist Depictions; The New York Times, October 17, 2025

The New York Times; OpenAI Blocks Videos of Martin Luther King Jr. After Racist Depictions


[Kip Currier: This latest tech company debacle is another example of breakdowns in technology design thinking and ethical leadership. No one in all of OpenAI could foresee that Sora 2.0 might be used in these ways? Or they did but didn't care? Either way, this is morally reckless and/or negligent conduct.

The leaders and design folks at OpenAI (and other tech companies) would be well-advised to look at Tool 6 in An Ethical Toolkit for Engineering/Design Practice, created by Santa Clara University Markkula Center for Applied Ethics:

Tool 6: Think About the Terrible People: Positive thinking about our work, as Tool 5 reminds us, is an important part of ethical design. But we must not envision our work being used only by the wisest and best people, in the wisest and best ways. In reality, technology is power, and there will always be those who wish to abuse that power. This tool helps design teams to manage the risks associated with technology abuse.

https://www.scu.edu/ethics-in-technology-practice/ethical-toolkit/

The "Move Fast and Break Things" ethos is alive and well in Big Tech.]


[Excerpt]

"OpenAI said Thursday that it was blocking people from creating videos using the image of the Rev. Dr. Martin Luther King Jr. with its Sora app after users created vulgar and racist depictions of him.

The company said it had made the decision at the request of the King Center as well as Dr. Bernice King, the civil rights leader’s daughter, who had objected to the videos.

The announcement was another effort by OpenAI to respond to criticism of its tools, which critics say operate with few safeguards.

“Some users generated disrespectful depictions of Dr. King’s image,” OpenAI said in a statement. “OpenAI has paused generations depicting Dr. King as it strengthens guardrails for historical figures.”"

Thursday, October 16, 2025

AI’s Copyright War Could Be Its Undoing. Only the US Can End It.; Bloomberg, October 14, 2025

Bloomberg; AI’s Copyright War Could Be Its Undoing. Only the US Can End It.

 "Whether creatives like Ulvaeus are entitled to any payment from AI companies is one of the sector’s most pressing and consequential questions. It’s being asked not just by Ulvaeus and fellow musicians including Elton John, Dua Lipa and Paul McCartney, but also by authors, artists, filmmakers, journalists and any number of others whose work has been fed into the models that power generative AI — tools that are now valued in the hundreds of billions of dollars."

Sunday, October 12, 2025

Notre Dame hosts Vatican AI adviser, Carnegie Mellon professor during AI ethics conference; South Bend Tribune, October 9, 2025

 Rayleigh Deaton, South Bend Tribune; Notre Dame hosts Vatican AI adviser, Carnegie Mellon professor during AI ethics conference

"The increasingly ubiquitous nature of artificial intelligence in today's world raises questions about how the technology should be approached and who should be making the decisions about its development and implementation.

To the Rev. Paolo Benanti, an associate professor of ethics of AI at LUISS University and the AI adviser to the Vatican, and Aarti Singh, a professor in Carnegie Mellon University's Machine Learning Department, ethical AI use begins when the technology is used to better humanity, and this is done by making AI equitable and inclusive.

Benanti and Singh were panelists during a session on Wednesday, Oct. 8, at the University of Notre Dame's inaugural R.I.S.E. (Responsibility, Inclusion, Safety and Ethics) AI Conference. Hosted by the university's Lucy Family Institute for Data & Society, the conference ran Oct. 6-8 and focused on how AI can be used to address multidisciplinary societal issues while upholding ethical standards...

And, Singh said, promoting public AI awareness is vital. She said this is done through introducing AI training as early as elementary school and encouraging academics to develop soft skills to be able to communicate their AI research with laypeople — something they're not always good at.

"There are many programs being started now that are encouraging from the student level, but of course also faculty, in academia, to go out there and talk," Singh said. "I think the importance of doing that now is really crucial, and we should step up.""

Wednesday, October 8, 2025

What AI-generated Tilly Norwood reveals about digital culture, ethics and the responsibilities of creators; The Conversation, October 8, 2025

Director, Creative Innovation Studio; Associate Professor, RTA School of Media, Toronto Metropolitan University, The Conversation; What AI-generated Tilly Norwood reveals about digital culture, ethics and the responsibilities of creators


"Imagine an actor who never ages, never walks off set or demands a higher salary.

That’s the promise behind Tilly Norwood, a fully AI-generated “actress” currently being courted by Hollywood’s top talent agencies. Her synthetic presence has ignited a media firestorm, denounced as an existential threat to human performers by some and hailed as a breakthrough in digital creativity by others.

But beneath the headlines lies a deeper tension. The binaries used to debate Norwood — human versus machine, threat versus opportunity, good versus bad — flatten complex questions of art, justice and creative power into soundbites. 

The question isn’t whether the future will be synthetic; it already is. Our challenge now is to ensure that it is also meaningfully human."

Tuesday, September 16, 2025

Ethical AI Design and Implementation: A Systematic Literature Review; AIS eLibrary, August 2025

Katia Guerra, AIS eLibrary; Ethical AI Design and Implementation: A Systematic Literature Review

"Abstract

This study analyzes to what extent information systems (IS) research has investigated artificial intelligence (AI) applications and the ethical concerns that these applications pose in light of the EU AI Act and the recommendations and guidelines provided by other institutions, including the White House, UNESCO, OECD, and Université de Montréal. A systematic literature review methodology and a semantic text similarity analysis will be employed to conduct this investigation. The results of such investigation will lead to contributions to IS researchers by synthesizing previous IS studies on ethical AI design and implementation and proposing an agenda and future directions for IS research to make it more oriented toward the compliance of AI systems with current ethical provisions and considerations. This study will also help practitioners to be more aware of AI ethics and foster technical and managerial solutions that could be developed in compliance with current institutional ethical demands."

Wednesday, September 10, 2025

An Essay Contest Winner Used A.I. Should She Return the $1,000 Award?; The Ethicist, The New York Times; September 10, 2025

The Ethicist, The New York Times; An Essay Contest Winner Used A.I. Should She Return the $1,000 Award?

[Kip Currier: This is a thought-provoking and timely ethical dilemma, especially with the proliferation of AI into more and more aspects of our personal and professional lives.

The question posed to The Ethicist in this edition of his column concerns students submitting essays for a contest whose winners are awarded a monetary scholarship. The questioner suspects that some of the students, including one of the winners, used AI to write their essays and wonders whether to confront the winner. The beauty of this question is that we don't know for sure whether AI was or was not used; it's entirely speculative. What would you do?

Does your thinking change as to whether using AI to write something is ethical or unethical if:

  • AI is used by a university professor to prepare a lecture
  • AI is used by a university professor to create an essay exam
  • AI is used by an elementary school teacher to prepare a lesson
  • AI is used by an elementary school teacher to create a multiple choice test
  • AI is used by your lawyer to write the legal brief for your lawsuit
  • AI is used by your lawyer's paralegal to write the legal brief for your lawsuit
  • AI is used by the judge deciding your case to synthesize the court's verdict
  • AI is used by a library director to compose the library's strategic plan
  • AI is used by a non-profit university to compose the university's strategic plan
  • AI is used by a for-profit company to compose the company's strategic plan
  • AI is used by a military branch to compose a strategy for military engagement
  • AI is used by a government agency to compose a strategy for national security
  • AI is used by local law enforcement to compose a strategy for public safety
  • AI is used by a summer camp to compose a strategy for camp safety
  • AI is used by your doctor to devise the treatment plan for your relative's cancer treatment
  • AI is used by a scientist to devise treatments for helping patients with cancer
  • AI is used to write a song for your significant other's birthday
  • AI is used to write a song for a musical you are creating
  • AI is used to write a song for a pharmaceutical company ad on TV
  • AI is used by your clergy head to write an annual report
  • AI is used by your clergy head to write a sermon
  • AI is used by your clergy head to write the eulogy for the funeral of one of your parents


Questions: Are you able to identify any variations in your ethical reasoning and how you decide your positions in the scenarios above?

What are you basing your decisions on? 

Are some scenarios easier or harder for you than others? If so, why?

In which situations, if any, do you think it is okay or not okay to use AI?

What additional information, if any, would you like to know that might help you to make decisions about whether and when the uses of AI are ethical or unethical?]


[Excerpt]

"I volunteer with our local historical society, which awards a $1,000 scholarship each year to two high school students who submit essays about a meaningful experience with a historical site. This year, our committee noticed a huge improvement in the quality of the students’ essays, and only after announcing the winners did we realize that one of them, along with other students, had almost certainly used artificial intelligence. What to do? I think our teacher liaison should be told, because A.I. is such a challenge for schools. I also feel that this winner should be confronted. If we are right, that might lead her to confess her dishonesty and return the award. — Name Withheld"

Friday, August 29, 2025

Medicare Will Require Prior Approval for Certain Procedures; The New York Times, August 28, 2025

Reed Abelson, The New York Times; Medicare Will Require Prior Approval for Certain Procedures


[Kip Currier: Does anyone who receives Medicare -- or cares about someone who does -- really think that letting AI make "prior approvals" for any Medicare procedures is a good thing?

Read the entire article, but just the money quote below should give any thinking person heart palpitations about this AI Medicare pilot project's numerous red flags and conflicts of interest...]


[Excerpt]

"The A.I. companies selected to oversee the program would have a strong financial incentive to deny claims. Medicare plans to pay them a share of the savings generated from rejections."

Saturday, August 23, 2025

PittGPT debuts today as private AI source for University; University Times, August 21, 2025

Marty Levine, University Times; PittGPT debuts today as private AI source for University

"Today marks the rollout of PittGPT, Pitt’s own generative AI for staff and faculty — a service that will be able to use Pitt’s sensitive, internal data in isolation from the Internet because it works only for those logging in with their Pitt ID.

“We want to be able to use AI to improve the things that we do” in our Pitt work, said Dwight Helfrich, director of the Pitt enterprise initiatives team at Pitt Digital. That means securely adding Pitt’s private information to PittGPT, including Human Resources, payroll and student data. However, he explains, in PittGPT “you would only have access to data that you would have access to in your daily role” — in your specific Pitt job.

“Security is a key part of AI,” he said. “It is much more important in AI than in other tools we provide.” Using PittGPT — as opposed to the other AI services available to Pitt employees — means that any data submitted to it “stays in our environment and it is not used to train a free AI model.”

Helfrich also emphasizes that “you should get a very similar response to PittGPT as you would get with ChatGPT,” since PittGPT had access to “the best LLM’s on the market” — the large language models used to train AI.

Faculty, staff and students already have free access to such AI services as Google Gemini and Microsoft Copilot. And “any generative AI tool provides the ability to analyze data … and to rewrite things” that are still in early or incomplete drafts, Helfrich said.

“It can help take the burden off some of the work we have to do in our lives” and help us focus on the larger tasks that, so far, humans are better at undertaking, added Pitt Digital spokesperson Brady Lutsko. “When you are working with your own information, you can tell it what to include” — it won’t add misinformation from the internet or its own programming, as AI sometimes does. “If you have a draft, it will make your good work even better.”

“The human still needs to review and evaluate that this is useful and valuable,” Helfrich said of AI’s contribution to our work. “At this point we can say that there is nothing in AI that is 100 percent reliable.”

On the other hand, he said, “they’re making dramatic enhancements at a pace we’ve never seen in technology. … I’ve been in technology 30 years and I’ve never seen anything improve as quickly as AI.” In his own work, he said, “AI can help review code and provide test cases, reducing work time by 75 percent. You just have to look at it with some caution and just (verify) things.”

“Treat it like you’re having a conversation with someone you’ve just met,” Lutsko added. “You have some skepticism — you go back and do some fact checking.”

Lutsko emphasized that the University has guidance on Acceptable Use of Generative Artificial Intelligence Tools as well as a University-Approved GenAI Tools List.

Pitt’s list of approved generative AI tools includes Microsoft 365 Copilot Chat, which is available to all students, faculty and staff (as opposed to the version of Copilot built into Microsoft 365 apps, which is an add-on available to departments through Panther Express for $30 per month, per person); Google Gemini; and Google NotebookLM, which Lutsko said “serves as a dedicated research assistant for precise analysis using user-provided documents.”

PittGPT joins that list today, Helfrich said.

Pitt also has been piloting Pitt AI Connect, a tool for researchers to integrate AI into software development (using an API, or application programming interface).

And Pitt also is already deploying the PantherAI chatbot, clickable from the bottom right of the Pitt Digital and Office of Human Resources homepages, which provides answers to common questions that may otherwise be deep within Pitt’s webpages. It will likely be offered on other Pitt websites in the future.

“Dive in and use it,” Helfrich said of PittGPT. “I see huge benefits from all of the generative AI tools we have. I’ve saved time and produced better results.”"

Friday, July 25, 2025

Trump’s AI agenda hands Silicon Valley the win—while ethics, safety, and ‘woke AI’ get left behind; Fortune, July 24, 2025

Sharon Goldman, Fortune; Trump’s AI agenda hands Silicon Valley the win—while ethics, safety, and ‘woke AI’ get left behind

"For the “accelerationists”—those who believe the rapid development and deployment of artificial intelligence should be pursued as quickly as possible—innovation, scale, and speed are everything. Over-caution and regulation? Ill-conceived barriers that will actually cause more harm than good. They argue that faster progress will unlock massive economic growth, scientific breakthroughs, and national advantage. And if superintelligence is inevitable, they say, the U.S. had better get there first—before rivals like China’s authoritarian regime.

AI ethics and safety has been sidelined

This worldview, articulated by Marc Andreessen in his 2023 blog post, has now almost entirely displaced the diverse coalition of people who worked on AI ethics and safety during the Biden Administration—from mainstream policy experts focused on algorithmic fairness and accountability, to the safety researchers in Silicon Valley who warn of existential risks. While they often disagreed on priorities and tone, both camps shared the belief that AI needed thoughtful guardrails. Today, they find themselves largely out of step with an agenda that prizes speed, deregulation, and dominance.

Whether these groups can claw their way back to the table is still an open question. The mainstream ethics folks—with roots in civil rights, privacy, and democratic governance—may still have influence at the margins, or through international efforts. The existential risk researchers, once tightly linked to labs like OpenAI and Anthropic, still hold sway in academic and philanthropic circles. But in today’s environment—where speed, scale, and geopolitical muscle set the tone—both camps face an uphill climb. If they’re going to make a comeback, I get the feeling it won’t be through philosophical arguments. More likely, it would be because something goes wrong—and the public pushes back."

Friday, July 11, 2025

AI must have ethical management, regulation protecting human person, Pope Leo says; The Catholic Register, July 11, 2025

 Carol Glatz , The Catholic Register; AI must have ethical management, regulation protecting human person, Pope Leo says

"Pope Leo XIV urged global leaders and experts to establish a network for the governance of AI and to seek ethical clarity regarding its use.

Artificial intelligence "requires proper ethical management and regulatory frameworks centered on the human person, and which goes beyond the mere criteria of utility or efficiency," Cardinal Pietro Parolin, Vatican secretary of state, wrote in a message sent on the pope's behalf.

The message was read aloud by Archbishop Ettore Balestrero, the Vatican representative to U.N. agencies in Geneva, at the AI for Good Summit 2025 being held July 8-11 in Geneva. The Vatican released a copy of the message July 10."

Thursday, July 10, 2025

EU's AI code of practice for companies to focus on copyright, safety; Reuters, July 10, 2025

Reuters; EU's AI code of practice for companies to focus on copyright, safety

"The European Commission on Thursday unveiled a draft code of practice aimed at helping firms comply with the European Union's artificial intelligence rules and focused on copyright-protected content safeguards and measures to mitigate systemic risks.

Signing up to the code, which was drawn up by 13 independent experts, is voluntary, but companies that decline to do so will not benefit from the legal certainty provided to a signatory.

The code is part of the AI rule book, which will come into effect in a staggered manner and will apply to Google owner Alphabet, Facebook owner Meta, OpenAI, Anthropic, Mistral and other companies."

Wednesday, July 9, 2025

How the Vatican Is Shaping the Ethics of Artificial Intelligence; American Enterprise Institute, July 7, 2025

Shane Tews, American Enterprise Institute; How the Vatican Is Shaping the Ethics of Artificial Intelligence

"Father Paolo Benanti is an Italian Catholic priest, theologian, and member of the Third Order Regular of St. Francis. He teaches at the Pontifical Gregorian University and has served as an advisor to both former Pope Francis and current Pope Leo on matters of artificial intelligence and technology ethics within the Vatican.

Below is a lightly edited and abridged transcript of our discussion...

In the Vatican document, you emphasize that AI is just a tool—an elegant one, but it shouldn’t control our thinking or replace human relationships. You mention it “requires careful ethical consideration for human dignity and common good.” How do we identify that human dignity point, and what mechanisms can alert us when we’re straying from it?

I’ll try to give a concise answer, but don’t forget that this is a complex element with many different applications, so you can’t reduce it to one answer. But the first element—one of the core elements of human dignity—is the ability to self-determine our trajectory in life. I think that’s the core element, for example, in the Declaration of Independence. All humans have rights, but you have the right to the pursuit of happiness. This could be the first description of human rights.

In that direction, we could have a problem with this kind of system because one of the first and most relevant elements of AI, from an engineering perspective, is its prediction capabilities. Every time a streaming platform suggests what you can watch next, it’s changing the number of people using the platform or the online selling system. This idea that interaction between human beings and machines can produce behavior is something that could interfere with our quality of life and pursuit of happiness. This is something that needs to be discussed.

Now, the problem is: don’t we have a cognitive right to know if we have a system acting in that way? Let me give you some numbers. When you’re 65, you’re probably taking three different drugs per day. When you reach 68 to 70, you probably have one chronic disease. Chronic diseases depend on how well you stick to therapy. Think about the debate around insulin and diabetes. If you forget to take your medication, your quality of life deteriorates significantly. Imagine using this system to help people stick to their therapy. Is that bad? No, of course not. Or think about using it in the workplace to enhance workplace safety. Is that bad? No, of course not.

But if you apply it to your life choices—your future, where you want to live, your workplace, and things like that—that becomes much more intense. Once again, the tool could become a weapon, or the weapon could become a tool. This is why we have to ask ourselves: do we need something like a cognitive right regarding this? That you are in a relationship with a machine that has the tendency to influence your behavior.

Then you can accept it: “I have diabetes, I need something that helps me stick to insulin. Let’s go.” It’s the same thing that happens with a smartwatch when you have to close the rings. The machine is pushing you to have healthy behavior, and we accept it. Well, right now we have nothing like that framework. Should we think about something in the public space? It’s not a matter of allowing or preventing some kind of technology. It’s a matter of recognizing what it means to be human in an age of such powerful technology—just to give a small example of what you asked me."

Wednesday, July 2, 2025

Evangelical Report Says AI Needs Ethics; Christianity Today, July/August 2025

Daniel Silliman, Christianity Today; Evangelical Report Says AI Needs Ethics

"The Swiss Evangelical Alliance published a 78-page report on the ethics of artificial intelligence, calling on Christians to “help reduce the misuse of AI” and “set an example in the use of AI by demonstrating how technology can be used responsibly and for the benefit of all.” Seven people worked on the paper, including two theologians, several software engineers and computer science experts, a business consultant, and a futurist. They rejected the idea that Christians should close themselves off to AI, as that would not do anything to mitigate the risks of the developing technology. The group concluded that AI has a lot of potential to do good, if given ethical boundaries and shaped by Christian values such as honesty, integrity, and charity."

Saturday, June 28, 2025

Global South voices ‘marginalised in AI Ethics’; Gates Cambridge, June 27, 2025

Gates Cambridge; Global South voices ‘marginalised in AI Ethics’

"A Gates Cambridge Scholar is first author of a paper on how AI Ethics is sidelining Global South voices, reinforcing marginalisation.

The study, Distributive Epistemic Injustice in AI Ethics: A Co-productionist Account of Global North-South Politics in Knowledge Production, was published by the Association for Computing Machinery and is based on a study of nearly 6,000 AI Ethics publications between 1960 and 2024. Its first author is Abdullah Hasan Safir [2024], who is doing a PhD in Interdisciplinary Design. Other co-authors include Gates Cambridge Scholars Ramit Debnath [2018] and Kerry McInerney [2017].

The findings were recently presented at the ACM’s FAccT conference, considered one of the top AI Ethics conferences in the world. They show that experts from the Global North currently legitimise their expertise in AI Ethics through dynamic citational and collaborative practices in knowledge production within the field, including co-citation and the institutionalisation of AI Ethics."

Sunday, June 22, 2025

Pope Leo calls for an ethical AI framework in a message to tech execs gathering at the Vatican; CNN, June 20, 2025

CNN; Pope Leo calls for an ethical AI framework in a message to tech execs gathering at the Vatican

"Pope Leo XIV says tech companies developing artificial intelligence should abide by an “ethical criterion” that respects human dignity.

AI must take “into account the well-being of the human person not only materially, but also intellectually and spiritually,” the pope said in a message sent Friday to a gathering on AI attended by Vatican officials and Silicon Valley executives.

“No generation has ever had such quick access to the amount of information now available through AI,” he said. But “access to data — however extensive — must not be confused with intelligence.”

He also expressed concern about AI’s impact on children’s “intellectual and neurological development,” writing that “society’s well-being depends upon their being given the ability to develop their God-given gifts and capabilities.”

That statement from the Pope came on the second of a two-day meeting for tech leaders in Rome to discuss the societal and ethical implications of artificial intelligence. The second annual Rome Conference on AI was attended by representatives from AI leaders including Google, OpenAI, Anthropic, IBM, Meta and Palantir along with academics from Harvard and Stanford and representatives of the Holy See.

The event comes at a somewhat fraught moment for AI, with the rapidly advancing technology promising to improve worker productivity, accelerate research and eradicate disease, but also threatening to take human jobs, produce misinformation, worsen the climate crisis and create even more powerful weapons and surveillance capabilities. Some tech leaders have pushed back against regulations intended to ensure that AI is used responsibly, which they say could hinder innovation and global competition.

“In some cases, AI has been used in positive and indeed noble ways to promote greater equality, but there is likewise the possibility of its misuse for selfish gain at the expense of others, or worse, to foment conflict and aggression,” Leo said in his Friday statement."

Monday, June 9, 2025

5 Dangerous Myths About AI Ethics You Shouldn’t Believe; Forbes, May 14, 2025

Bernard Marr, Forbes; 5 Dangerous Myths About AI Ethics You Shouldn’t Believe

"AI can empower just about any business to innovate and drive efficiency, but it also has the potential to do damage and cause harm. This means that everyone putting it to use needs to understand the ethical frameworks in place to keep everyone safe.

At the end of the day, AI is a tool. AI ethics can be thought of as the safety warning you get in big letters at the front of any user manual, setting out some firm dos and don’ts about using it.

Using AI almost always involves making ethical choices. In a business setting, understanding the many ways it can affect people and culture means we have the best information for making those choices.

It’s a subject there’s still a lot of confusion around, not least involving who is responsible and who should be ensuring this gets done. So here are five common misconceptions I come across involving the ethics of generative AI and machine learning."

Saturday, June 7, 2025

Do AI systems have moral status?; Brookings, June 4, 2025

Brookings; Do AI systems have moral status?

"In March, researchers announced that a large language model (LLM) passed the famous Turing test, a benchmark designed by computer scientist Alan Turing in 1950 to evaluate whether computers could think. This follows research from last year suggesting that the time is now for artificial intelligence (AI) labs to take the welfare of their AI models into account."