
Wednesday, September 10, 2025

An Essay Contest Winner Used A.I. Should She Return the $1,000 Award?; The Ethicist, The New York Times; September 10, 2025


[Kip Currier: This is a thought-provoking and timely ethical dilemma, especially with the proliferation of AI into more and more aspects of our personal and professional lives.

The question posed to The Ethicist in this edition of his column concerns students submitting essays for a contest whose winners receive a monetary scholarship. The questioner suspects that some students used AI to write their essays and wonders whether to confront the winners. The beauty of this question is that we don't know for certain whether AI was or was not used; it's entirely speculative. What would you do?

Does your assessment of whether using AI to write something is ethical or unethical change if:

  • AI is used by a university professor to prepare a lecture
  • AI is used by a university professor to create an essay exam
  • AI is used by an elementary school teacher to prepare a lesson
  • AI is used by an elementary school teacher to create a multiple choice test
  • AI is used by your lawyer to write the legal brief for your lawsuit
  • AI is used by your lawyer's paralegal to write the legal brief for your lawsuit
  • AI is used by the judge deciding your case to synthesize the court's verdict
  • AI is used by a library director to compose the library's strategic plan
  • AI is used by a non-profit university to compose the university's strategic plan
  • AI is used by a for-profit company to compose the company's strategic plan
  • AI is used by a military branch to compose a strategy for military engagement
  • AI is used by a government agency to compose a strategy for national security
  • AI is used by local law enforcement to compose a strategy for public safety
  • AI is used by a summer camp to compose a strategy for camp safety
  • AI is used by your doctor to devise the treatment plan for your relative's cancer treatment
  • AI is used by a scientist to devise treatments for helping patients with cancer
  • AI is used to write a song for your significant other's birthday
  • AI is used to write a song for a musical you are creating
  • AI is used to write a song for a pharmaceutical company ad on TV
  • AI is used by your clergy head to write an annual report
  • AI is used by your clergy head to write a sermon
  • AI is used by your clergy head to write the eulogy for the funeral of one of your parents


Questions: Can you identify any variations in your ethical reasoning across the scenarios above, and in how you arrive at your positions?

What are you basing your decisions on? 

Are some scenarios easier or harder for you than others? If so, why?

In which situations, if any, do you think it is okay or not okay to use AI?

What additional information, if any, would help you decide whether and when these uses of AI are ethical or unethical?


[Excerpt]

"I volunteer with our local historical society, which awards a $1,000 scholarship each year to two high school students who submit essays about a meaningful experience with a historical site. This year, our committee noticed a huge improvement in the quality of the students’ essays, and only after announcing the winners did we realize that one of them, along with other students, had almost certainly used artificial intelligence. What to do? I think our teacher liaison should be told, because A.I. is such a challenge for schools. I also feel that this winner should be confronted. If we are right, that might lead her to confess her dishonesty and return the award. — Name Withheld"

Thursday, April 17, 2025

The philosopher’s machine: my conversation with Peter Singer’s AI chatbot; The Guardian, April 17, 2025


"Singer has had a long and high-profile career in philosophy. He has only recently retired from Princeton University, where he was professor of bioethics. His CV is extensive (it runs to a full 102 pages). He has a Substack, a podcast, a Bluesky account, a journal he co-edits, and a polished online presence. As I’m scrolling through the storied septuagenarian philosopher’s blog, the chatbot pops up in the corner of the page. I can’t help feeling like I am being sold something.

In launching his chatbot, Singer described the tool as “employing the Socratic method” – that is, dialogue – to “[guide] users through a process of critical thinking and self-examination”. But I have questions about that.

Me: Do you think it is philosophically justifiable to outsource the discussion of ethics to a chatbot?

Peter Singer AI: Outsourcing ethical discussions to a chatbot can be philosophically justifiable if viewed as a tool to enhance and facilitate ethical reflection. Chatbots can provide insights, pose questions, and offer perspectives that might help individuals think more deeply about ethical issues.

However, it’s crucial to remember that chatbots lack human experience, empathy, and the ability to understand context fully. They should be used as a complement to human judgment and conversation, not a replacement.

Engaging with a chatbot can stimulate thinking and provide a starting point for deeper exploration, but ultimately, ethical decision-making involves personal reflection and discussion with others.

My most memorable and exciting philosophy lessons were the ones in which, as a class, we hotly debated ideas in which we were all materially and emotionally invested. The AI floats loftily above the emotional fray, coolly dispensing pre-programmed wisdom from the cloud, while we grapple with the contradictions and complications of actually living. Between the chatbot and me, there is only one of us for whom the dialogue elicits an emotional response, because for only one of us does the conversation actually matter."