Showing posts with label ethics. Show all posts

Tuesday, December 2, 2025

The case of the fake references in an ethics journal; Retraction Watch, December 2, 2025

Retraction Watch; The case of the fake references in an ethics journal

"Many would-be whistleblowers write to us about papers with nonexistent references, possibly hallucinated by artificial intelligence. One reader recently alerted us to fake references in … an ethics journal. In an article about whistleblowing.

The paper, published in April in the Journal of Academic Ethics, explored “the whistleblowing experiences of individuals with disabilities in Ethiopian public educational institutions.” 

Erja Moore, an independent researcher based in Finland, came across the article while looking into a whistleblowing case in that country. “I started reading this article and found some interesting references that I decided to read as well,” Moore told Retraction Watch. “To my surprise, those articles didn’t exist.”...

The Journal of Academic Ethics is published by Springer Nature. Eleven of the fabricated references cite papers in the Journal of Business Ethics — another Springer Nature title.

“On one hand this is hilarious that an ethics journal publishes this, but on the other hand it seems that this is a much bigger problem in publishing and we can’t really trust scientific articles any more,” Moore said."

Wednesday, November 26, 2025

AI, ethics, and the lawyer's duty after Noland v. Land of the Free; Daily Journal, November 24, 2025

 Reza Torkzadeh, Daily Journal; AI, ethics, and the lawyer's duty after Noland v. Land of the Free

"Noland establishes a bright line for California lawyers. AI may assist with drafting or research, but it does not replace judgment, verification or ethical responsibility. Technology may change how legal work is produced -- it does not change who is accountable for it."

Tuesday, November 18, 2025

Student cheating dominates talk of generative AI in higher ed, but universities and tech companies face ethical issues too; The Conversation, November 17, 2025

Professor of Sociology, College of the Holy Cross, The Conversation; Student cheating dominates talk of generative AI in higher ed, but universities and tech companies face ethical issues too

"Debates about generative artificial intelligence on college campuses have largely centered on student cheating. But focusing on cheating overlooks a larger set of ethical concerns that higher education institutions face, from the use of copyrighted material in large language models to student privacy.

As a sociologist who teaches about AI and studies the impact of this technology on work, I am well acquainted with research on the rise of AI and its social consequences. And when one looks at ethical questions from multiple perspectives – those of students, higher education institutions and technology companies – it is clear that the burden of responsible AI use should not fall entirely on students’ shoulders.

I argue that responsibility, more generally, begins with the companies behind this technology and needs to be shouldered by higher education institutions themselves."

The Unraveling of the Justice Department: Sixty attorneys describe a year of chaos and suspicion.; The New York Times Magazine, November 16, 2025

Emily Bazelon and Rachel Poser, Photographs by Stephen Voss, The New York Times Magazine; The Unraveling of the Justice Department: Sixty attorneys describe a year of chaos and suspicion.

"President Trump’s second term has brought a period of turmoil and controversy unlike any in the history of the Justice Department. Trump and his appointees have blasted through the walls designed to protect the nation’s most powerful law enforcement agency from political influence; they have directed the course of criminal investigations, openly flouted ethics rules and caused a breakdown of institutional culture. To date, more than 200 career attorneys have been fired, and thousands more have resigned. (The Justice Department says many of them have been replaced.)

What was it like inside this institution as Trump’s officials took control? It’s not an easy question to answer. Justice Department norms dictate that career attorneys, who are generally nonpartisan public servants, rarely speak to the press. And the Trump administration’s attempts to crack down on leaks have made all federal employees fearful of sharing information.

But the exodus of lawyers has created an opportunity to understand what’s happening within the agency. We interviewed more than 60 attorneys who recently resigned or were fired from the Justice Department. Much of what they told us is reported here for the first time..."

"“They didn’t want the ethics office calling them up and telling them what to do.” Joseph Tirrell, former director of the Departmental Ethics Office"

Friday, November 14, 2025

Cleveland attorney’s use of AI in court filings raises ethical questions for legal profession; Cleveland.com, November 12, 2025

 

Cleveland.com; Cleveland attorney’s use of AI in court filings raises ethical questions for legal profession

"A Cleveland defense attorney is under scrutiny in two counties after submitting court filings containing fabrications generated by artificial intelligence — a case that’s prompting broader questions about how lawyers are ethically navigating the use of AI tools in legal practice.

William Norman admitted that a paralegal in his office used ChatGPT to draft a motion to reopen a murder conviction appeal. The document included quotes that did not exist in the trial transcript and misrepresented statements made by the prosecutor."

AMA ethics journal shutters after 26 years; Retraction Watch, November 13, 2025

Retraction Watch; AMA ethics journal shutters after 26 years 

"The American Medical Association will cease publication of its ethics journal at the end of this year. 

The AMA Journal of Ethics, an open-access, peer-reviewed journal, was founded in 1999 under the name Virtual Mentor.

“The loss of the AMA JoE will be most acutely felt by medical students and trainees, since it had a unique production model that included them in the process,” said Matthew Wynia, a physician and bioethicist at the University of Colorado whose work has been featured in the journal and who previously led the AMA Institute for Ethics.

The journal publishes monthly issues on a specific theme, such as private equity in health care, antimicrobial resistance, palliative surgery and more. The journal also covered ethics in publishing and research, including a 2015 article titled “How Publish or Perish Promotes Inaccuracy in Science—and Journalism” written by Retraction Watch’s cofounder Ivan Oransky...

The journal’s website will remain online with all content freely available, “in keeping with our guiding premise that ethics inquiry is a public good,” Audiey C. Kao, editor-in-chief of the AMA Journal of Ethics and vice president of the AMA’s Ethics Group for more than two decades, wrote in a statement on the journal’s website. “With humility, I am hopeful and confident that this archived journal content will stay evergreen for years to come.”

The AMA did not provide a reason for the decision to shutter the journal." 

Thursday, November 6, 2025

Staying Human in the Age of AI; Duquesne University, Grefenstette Center for Ethics, November 6-7, 2025

Duquesne University, Grefenstette Center for Ethics; 2025 Tech Ethics Symposium: Staying Human in the Age of AI

"The Grefenstette Center for Ethics is excited to announce our sixth annual Tech Ethics Symposium, Staying Human in the Age of AI, which will be held in person at Duquesne University's Power Center and livestreamed online. This year's event will feature internationally leading figures in the ongoing discussion of ethical and responsible uses of AI. The two-day Symposium is co-sponsored by the Patricia Doherty Yoder Institute for Ethics and Integrity in Journalism and Media, the Center for Teaching Excellence, and the Albert P. Viragh Institute for Ethics in Business.

We are excited to once again host a Student Research Poster Competition at the Symposium. All undergraduate and graduate student research posters on any topic in the area of tech/digital/AI ethics are welcome. Accepted posters will be awarded $75 to offset printing costs. In addition to that award, undergraduate posters will compete for the following prizes: the Outstanding Researcher Award, the Ethical PA Award, and the Pope Francis Award. Graduate posters can win Grand Prize or Runner-Up. All accepted posters are eligible for an Audience Choice award, to be decided by Symposium attendees on the day of the event! Student Research Poster submissions will be due Friday, October 17. Read the full details of the 2025 Student Research Poster Competition.

The Symposium is free to attend and open to all university students, faculty, and staff, as well as community members. Registrants can attend in person or experience the Symposium via livestream. Registration is now open!"

Monday, November 3, 2025

With AI technology rapidly advancing, ethics must evolve as well - opinion; The Jerusalem Post, November 2, 2025

Avi Jorisch, The Jerusalem Post; With AI technology rapidly advancing, ethics must evolve as well - opinion

"Wisdom over intelligence

Whether this century becomes our renaissance or our ruin will depend on a quality that can’t be coded or automated: wisdom.

For all our progress, we have not yet learned to match our technological power with moral imagination. We’ve achieved abundance without equilibrium, connection without community, knowledge without humility. The danger isn’t that machines will become more human – it’s that humans will become more machine-like, optimizing for speed and efficiency while forgetting the soul.

Humanity’s story has always been shaped by invention. The wheel, the compass, the printing press, the microchip – each expanded what we could do, but not necessarily who we could be. The Industrial Revolution lifted billions from poverty, yet it also gave us the mechanized wars of the twentieth century. Nuclear energy promised limitless power – and then birthed Hiroshima.

Today, as we stand on the edge of quantum computing, gene editing, and artificial general intelligence, the pattern repeats. The tools evolve. The ethics lag behind.

We need a new kind of moonshot – not just of science, but of spirit."

Tuesday, October 21, 2025

Gambling. Investing. Gaming. There’s No Difference Anymore.; The New York Times, October 20, 2025

Jonathan D. Cohen and , The New York Times; Gambling. Investing. Gaming. There’s No Difference Anymore.


[Kip Currier: It's good to see online gambling issues getting more attention, as in this 10/20/25 New York Times Op-Ed. One of the piece's writers is Jonathan D. Cohen, author of the 2025 book "Losing Big: America’s Reckless Bet on Sports Gambling".

I spoke on these issues in my talk -- AI Gambling Thirst Traps and God: Christian Imperatives, Church Roles, and Ethical Responsibilities -- at the September 2-5, 2025 Faithful Futures: Guiding AI with Wisdom and Witness conference in Minneapolis. A publication based on the talk is forthcoming.]


[Excerpt]

"If it feels as if gambling is everywhere, that’s because it is. But today’s gamblers aren’t just retirees at poker tables. They’re young men on smartphones. And thanks to a series of quasi-legal innovations by the online wagering industry, Americans can now bet on virtually anything from their investment accounts. 

In recent years, this industry has been gamifying the investing experience; on brightly colored smartphone apps, risking your money is as easy and attractive as playing Candy Crush. On the app of the investment brokerage Robinhood, users can now buy stocks on one tab, “bet” on Oscars outcomes on another, and trade crypto on a third.

Given a recent explosion in unsafe gambling and growing evidence of severe financial harm, one might ask whether the government should be permitting 18-year-olds to effectively bet on the Dallas Cowboys with the same accounts they can use to invest in Coca-Cola. Under President Trump, who has a son serving as an adviser to two entrants in the sports prediction marketplace, the answer appears to be a firm yes."

Sunday, September 14, 2025

Preparing faith leaders to prepare others to use artificial intelligence in a faithful way; Presbyterian News Service, September 4, 2025

Mike Ferguson, Presbyterian News Service; Preparing faith leaders to prepare others to use artificial intelligence in a faithful way

"It turns out an engineer whose career included stops at Boeing and Amazon — and who happens to be a person of deep faith — has plenty to say about how faith leaders can use artificial intelligence in places of worship.

Jovonia Taylor-Hayes took to the lectern Wednesday during Faithful Futures: Guiding AI with Wisdom and Witness, which is being offered online and at Westminster Presbyterian Church in Minneapolis. The PC(USA)’s Office of Innovation is among the organizers and sponsors, which also includes The Episcopal Church, the United Methodist Church and the Evangelical Lutheran Church in America.

Think of all the varied ways everyday people use AI, Taylor-Hayes said, including as an aid to streamline grocery shopping and resume building; by medical teams for note-taking; for virtual meetings and closed-captioning, which is getting better, she said; and in customer service.

“The question is, what does it look like when we stop and think about what AI means to me personally? Where does your head and heart go?” she asked. One place where hers goes to is scripture, including Ephesians 2:10 and Psalm 139:14. “God has prepared us,” she said, “to do what we need to do.”

During the first of two breakout sessions, she asked small groups both in person and online to discuss questions including where AI shows up in their daily work and life and why they use AI as a tool."

Monday, September 8, 2025

Faith leaders bring ethical concerns, curiosity to AI debate at multi-denominational conference; Episcopal News Service (ENS), September 5, 2025

David Paulsen, Episcopal News Service (ENS); Faith leaders bring ethical concerns, curiosity to AI debate at multi-denominational conference

"Some of the most tech-forward minds in the Protestant church gathered here this week at the Faithful Futures conference, where participants wrestled with the ethical, practical and spiritual implications of artificial intelligence. The Episcopal Church is one of four Protestant denominations that hosted the Sept. 2-5 conference. About halfway through, one of the moderators acknowledged that AI has advanced so far and so rapidly that most conferences on AI are no longer focused just on AI...

AI raises spiritual questions over what it means to be human

Much of the conference seemed to pivot on questions that defied easy answers. In an afternoon session Sept. 3, several church leaders who attended last year’s Faithful Futures conference in Seattle, Washington, were invited to give 10-minute presentations on their preferred topics.

“What happens to theology when the appearance of intelligence is no longer uniquely human?” said the Rev. Michael DeLashmutt, a theology professor at General Theological Seminary in New York, New York, who also serves as the Episcopal seminary’s senior vice president.

DeLashmutt argued that people of faith, in an era of AI, must not forget what it means to be Christian and to be human. “Being human means being relational, embodied, justice-oriented and open to God’s spirit,” he said. “So, I think the real risk is not that machines will become human, but that we will forget the fullness of what humanity actually is.”

Kip Currier, a computing and information professor at the University of Pittsburgh, warned that AI is being used by sports betting platforms to appeal to gamblers, including those suffering from addiction. Mark Douglas, an ethics professor at Columbia Theological Seminary, outlined the ecological impact of AI data centers, which need to consume massive amounts of energy and water.

The Rev. Andy Morgan, a Presbyterian pastor based in Knoxville, Tennessee, described himself as his denomination’s “unofficial AI person” and suggested that preachers should not be afraid of using AI to improve their sermons – as long as they establish boundaries to prevent delegating too much to the technology."

Saturday, September 6, 2025

Big Questions About AI and the Church Video; August 25, 2025

Big Questions About AI and the Church Video

Kip Currier: This Big Questions About AI and the Church video (1:12:14) was created by the members of my cohort and me (Cohort 7). Our cohort emanated from the groundbreaking August 2024 ecumenical AI & The Church Summit in Seattle that we all attended.

Perhaps raising more questions than providing answers, the video's aim is to encourage reflection and discussion of the many-faceted issues and concerns at the nexus of AI, faith communities, and our broader societies.

Many thanks to our cohort member Rev. Dr. Andy P. Morgan for spearheading, synthesizing, and uploading this video to YouTube. 

Thursday, July 10, 2025

Microsoft Pledges $4 Billion Toward A.I. Education; The New York Times, July 9, 2025

Natasha Singer, The New York Times; Microsoft Pledges $4 Billion Toward A.I. Education


[Kip Currier: Not one mention of "ethics" or "AI ethics" in this New York Times article.

So, I sent an email to the reporter today (7/10/25):

Dear Natasha Singer,

I was surprised, and actually disconcerted, not to see any mention of "ethics" or "AI ethics" concepts in your article "Microsoft Pledges $4 Billion Toward A.I. Education". Given well-documented concerns about the vital need for ethical guidelines and frameworks vis-a-vis AI from a wide range of stakeholders (e.g. religious leaders/Rome Call for AI Ethics, the U.N. AI Advisory Body, academics, etc.), I would have expected your reporting to at least mention potential ethical considerations about this Microsoft funding plan, which carries such significant implications for education and societies.

Best wishes,

Kip Currier]

 

[Excerpt]

"Microsoft said on Wednesday that it planned to give more than $4 billion in cash and technology services to train millions of people to use artificial intelligence, amid an intensifying Silicon Valley crusade to embed chatbots into classrooms.

Microsoft, the maker of the Copilot chatbot, said the resources would go to schools, community colleges, technical colleges and nonprofits. The company is also starting a new training program, Microsoft Elevate Academy, to “deliver A.I. education and skilling at scale” and help 20 million people earn certificates in A.I.

“Microsoft will serve as an advocate to ensure that students in every school across the country have access to A.I. education,” Brad Smith, the president of Microsoft, said in an interview on Sunday.

Microsoft did not immediately specify how much of the more than $4 billion the company planned to dispense as grants and how much of it would be in the form of Microsoft A.I. services and cloud computing credits.

The announcement comes as tech companies are racing to train millions of teachers and students on their new A.I. tools. Even so, researchers say it is too soon to tell whether the classroom chatbots will end up improving educational outcomes or eroding important skills like critical thinking.

On Tuesday, the American Federation of Teachers, a union representing 1.8 million members, said it was setting up a national A.I. training center for educators, with $23 million in funding from Microsoft and two other chatbot makers, OpenAI and Anthropic."

Tuesday, June 10, 2025

Global AI: Compression, Complexity, and the Call for Rigorous Oversight; ABA SciTech Lawyer, May 9, 2025

Joan Rose Marie Bullock, ABA SciTech Lawyer; Global AI: Compression, Complexity, and the Call for Rigorous Oversight

"Equally critical is resisting haste. The push to deploy AI, whether in threat detection or data processing, often outpaces scrutiny. Rushed implementations, like untested algorithms in critical systems, can backfire, as any cybersecurity professional can attest from post-incident analyses. The maxim of “measure twice, cut once” applies here: thorough vetting trumps speed. Lawyers, trained in precedent, recognize the cost of acting without foresight; technologists, steeped in iterative testing, understand the value of validation. Prioritizing diligence over being first mitigates catastrophic failures of privacy breaches or security lapses that ripple worldwide."

Wednesday, May 21, 2025

We're All Copyright Owners. Why You Need to Care About AI and Copyright; CNET, May 19, 2025

Katelyn Chedraoui, CNET; We're All Copyright Owners. Why You Need to Care About AI and Copyright

"Most of us don't think about copyright very often in our daily lives. But in the age of generative AI, it has quickly become one of the most important issues in the development and outputs of chatbots and image and video generators. It's something that affects all of us because we're all copyright owners and authors...

What does all of this mean for the future?

Copyright owners are in a bit of a holding pattern for now. But beyond the legal and ethical implications, copyright in the age of AI raises important questions about the value of creative work, the cost of innovation and the ways in which we need or ought to have government intervention and protections. 

There are two distinct ways to view the US's intellectual property laws, Mammen said. The first is that these laws were enacted to encourage and reward human flourishing. The other is more economically focused; the things that we're creating have value, and we want our economy to be able to recognize that value accordingly."

Saturday, December 28, 2024

Overcoming AI’s Nagging Trust And Ethics Issues; Forbes, December 28, 2024

Joe McKendrick, Forbes; Overcoming AI’s Nagging Trust And Ethics Issues

"Trust and ethics in AI is what is making business leaders nervous. For example, at least 72% of executives responding to a recent survey from the IBM Institute for Business Value say they “are willing to forgo generative AI benefits due to ethical concerns.” In addition, more than half (56%) indicate they are delaying major investments in generative AI until there is clarity on AI standards and regulations...

“Today, guardrails are a growing area of practice for the AI community given the stochastic nature of these models,” said Ross. “Guardrails can be employed for virtually any area of decisioning, from examining bias to preventing the leakage of sensitive data.”...

The situation is not likely to change soon, Jeremy Rambarran, professor at Touro University Graduate School, pointed out. “Although the output that's being generated may be unique, depending on how the output is being presented, there's always a chance that part of the results may not be entirely accurate. This will eventually change down the road as algorithms are enhanced and could eventually be updated in an automated manner.”...

How can AI be best directed to be ethical and trustworthy? Compliance requirements, of course, will be a major driver of AI trust in the future, said Rambarran. “We need to ensure that AI-driven processes comply with ethical guidelines, legal regulations, and industry standards. Humans should be aware of the ethical implications of AI decisions and be ready to intervene when ethical concerns arise.”"

Monday, September 30, 2024

USU's College of Humanities & Social Sciences Hosts Conference on Ethics of AI; Utah State University (USU), September 23, 2024

 Utah State University (USU); USU's College of Humanities & Social Sciences Hosts Conference on Ethics of AI

"AI’s emergence from the obscure to the unavoidable has come with many questions and concerns — some of which deal with how we can and should use it ethically.


To help answer some of these questions, the USU Communication Studies and Philosophy Department and the Center for Anticipatory Intelligence hosted a conference.

They brought in scholars from a variety of disciplines to discuss these issues — with these experts coming from the University of Cambridge, New York University and Northeastern University, among others."

Wednesday, July 10, 2024

Considering the Ethics of AI Assistants; Tech Policy Press, July 7, 2024

Justin Hendrix, Tech Policy Press; Considering the Ethics of AI Assistants

"Just a couple of weeks before Pichai took the stage, in April, Google DeepMind published a paper that boasts 57 authors, including experts from a range of disciplines from different parts of Google, including DeepMind, Jigsaw, and Google Research, as well as researchers from academic institutions such as Oxford, University College London, Delft University of Technology, University of Edinburgh, and a think tank at Georgetown, the Center for Security and Emerging Technology. The paper speculates about the ethical and societal risks posed by the types of AI assistants Google and other tech firms want to build, which the authors say are “likely to have a profound impact on our individual and collective lives.”"

Friday, June 14, 2024

Pope Francis is first pontiff to address G7 leaders with AI speech; Axios, June 14, 2024

"Pope Francis made history Friday as the first pontiff to speak at the Group of Seven meeting in Fasano, Italy, where he discussed his concerns with artificial intelligence.

Why it matters: The pope has long urged caution around AI, calling it "a fascinating tool and also a terrifying one," during his remarks Friday even as he acknowledged its potential applications in medicine, labor, culture, communications, education and politics. 

  • "The holy scriptures say that God gave to human beings his spirit in order for them to have wisdom, intelligence and knowledge in all kinds of tasks," he said. "Science and technology are therefore extraordinary products of the potential which is active in us human beings.""

Saturday, April 6, 2024

Where AI and property law intersect; Arizona State University (ASU) News, April 5, 2024

 Dolores Tropiano, Arizona State University (ASU) News; Where AI and property law intersect

"Artificial intelligence is a powerful tool that has the potential to be used to revolutionize education, creativity, everyday life and more.

But as society begins to harness this technology and its many uses — especially in the field of generative AI — there are growing ethical and copyright concerns for both the creative industry and legal sector.

Tyson Winarski is a professor of practice with the Intellectual Property Law program in Arizona State University’s Sandra Day O’Connor College of Law. He teaches an AI and intellectual property module within the course Artificial Intelligence: Law, Ethics and Policy, taught by ASU Law Professor Gary Marchant.

“The course is extremely important for attorneys and law students,” Winarski said. “Generative AI is presenting huge issues in the area of intellectual property rights and copyrights, and we do not have definitive answers as Congress and the courts have not spoken on the issue yet.”"