
Tuesday, February 18, 2025

AI and ethics: No advancement can ever justify a human rights violation; Vatican News, February 16, 2025

 Kielce Gussie, Vatican News; AI and ethics: No advancement can ever justify a human rights violation

"By 2028, global spending on artificial intelligence will skyrocket to $632 billion, according to the International Data Corporation. In a world where smartphones, computers, and ChatGPT continue to be the center of debate, it's no wonder the need for universal regulation and awareness has become a growing topic of discussion.

To address this issue, an international two-day summit focused on AI was held in Paris, France. The goal was to bring stakeholders from the public, private, and academic sectors together to begin building an AI ecosystem that is trustworthy and safe.

Experts in various areas of the artificial intelligence sphere gathered to partake in the discussion, including Australian professor and member of the Australian Government’s Artificial Intelligence Expert Group, Edward Santow. He described feeling hopeful that the summit would advance the safety agenda of AI.

Trustworthiness and safety

On the heels of this summit, the Australian Embassy to the Holy See hosted a panel discussion to address the ethical and human rights challenges in utilizing AI. There, Prof. Santow described his experience at the Paris summit, highlighting the difficulty in building an atmosphere of trust with AI on a global scale."

 

Wednesday, February 12, 2025

As US and UK refuse to sign the Paris AI Action Summit statement, other countries commit to developing ‘open, inclusive, ethical’ AI; TechCrunch, February 11, 2025

Romain Dillet, TechCrunch; As US and UK refuse to sign the Paris AI Action Summit statement, other countries commit to developing ‘open, inclusive, ethical’ AI

"The Artificial Intelligence Action Summit in Paris was supposed to culminate with a joint declaration on artificial intelligence signed by dozens of world leaders. While the statement isn’t as ambitious as the Bletchley and Seoul declarations, both the U.S. and the U.K. have refused to sign it.

It proves once again that it is difficult to reach a consensus around artificial intelligence — and other topics — in the current (fraught) geopolitical context.

“We feel very strongly that AI must remain free from ideological bias and that American AI will not be co-opted into a tool for authoritarian censorship,” U.S. vice president, JD Vance, said in a speech during the summit’s closing ceremony.

“The United States of America is the leader in AI, and our administration plans to keep it that way,” he added.

In all, 61 countries — including China, India, Japan, Australia, and Canada — have signed the declaration that states a focus on “ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy.” It also calls for greater collaboration when it comes to AI governance, fostering a “global dialogue.”

Early reactions have expressed disappointment over a lack of ambition."

Monday, February 10, 2025

UNESCO Holds Workshop on AI Ethics in Cuba; UNESCO, February 7, 2025

 UNESCO; UNESCO Holds Workshop on AI Ethics in Cuba

"During the joint UNESCO-MINCOM National Workshop "Ethics of Artificial Intelligence: Equity, Rights, Inclusion" in Havana, the results of the application of the Readiness Assessment Methodology (RAM) for the ethical development of AI in Cuba were presented.

Similarly, there was a discussion on the Ethical Impact Assessment (EIA), a tool aimed at ensuring that AI systems follow ethical rules and are transparent...

The meeting began with a video message from the Assistant Director-General for Social and Human Sciences, Gabriela Ramos, who emphasized that artificial intelligence already has a significant impact on many aspects of our lives, reshaping the way we work, learn, and organize society.

Technologies can bring us greater productivity, help deliver public services more efficiently, empower society, and drive economic growth, but they also risk perpetuating global inequalities, destabilizing societies, and endangering human rights if they are not safe, representative, and fair, and above all, if they are not accessible to everyone.

Gabriela Ramos, Assistant Director-General for Social and Human Sciences"

Wednesday, February 5, 2025

Google lifts its ban on using AI for weapons; BBC, February 5, 2025

 Lucy Hooker & Chris Vallance, BBC; Google lifts its ban on using AI for weapons

"Google's parent company has ditched a longstanding principle and lifted a ban on artificial intelligence (AI) being used for developing weapons and surveillance tools.

Alphabet has rewritten its guidelines on how it will use AI, dropping a section which previously ruled out applications that were "likely to cause harm".

In a blog post Google defended the change, arguing that businesses and democratic governments needed to work together on AI that "supports national security".

Experts say AI could be widely deployed on the battlefield - though there are fears about its use too, particularly with regard to autonomous weapons systems."

Monday, January 27, 2025

Beyond ChatGPT: WVU researchers to study use and ethics of artificial intelligence across disciplines; WVU Today, January 22, 2025

 WVU Today; Beyond ChatGPT: WVU researchers to study use and ethics of artificial intelligence across disciplines

"Two West Virginia University researchers have designed a curriculum to engage liberal arts faculty in discussions on the social, ethical and technical aspects of artificial intelligence and its role in classrooms.

Through a grant from the National Endowment for the Humanities, Erin Brock Carlson, assistant professor of English, and Scott Davidson, professor of philosophy, both at the WVU Eberly College of Arts and Sciences, have designed an interdisciplinary, cross-institutional program to facilitate conversations among faculty about the benefits and drawbacks of AI, how it functions and the need for human interpretation.

The award will fund a summer workshop in which Carlson and Davidson will offer AI trainings for humanities faculty and guide them through creation and development of courses with an AI component. The researchers will then assist as faculty offer those courses to students, assess progress and help with the implementation of the projects that develop.

The researchers said they hope to challenge the notion that artificial intelligence research falls into the domain of STEM fields. 

“The humanities gets overlooked and underappreciated so often,” Carlson said. “We are doing important, meaningful research, just like our colleagues in STEM and other fields. This is a chance to use a humanities lens to examine contemporary problems and developments like artificial intelligence and also to get conversations going between fields that oftentimes don’t talk to one another as much as we should.”

Co-directors Carlson and Davidson will be joined by a team of mentors and fellows — two from data science fields and two from the humanities perspective — that will serve and assist as resources in the interdisciplinary conversations. The seminar and summer workshops will support the creation or redesign of 10 courses. They plan to invite off-campus experts to help facilitate the workshops, work with the faculty and support their projects.

“It’s really about expanding capacity at the University and in the humanities to investigate the implications of AI or to actually use AI in humanities courses, whether it’s for writing, creating art or creating projects through the use of AI,” Davidson said. “There are a lot of different possibilities and directions that we hope these courses take. If we have 10 of them, it’s really going to have a big impact on humanities education here at the University.”

Carlson and Davidson acknowledge that attitudes about AI tend to be either extremely optimistic or extremely skeptical but that the reality is somewhere in the middle.

“AI is such a simplistic term to describe a whole suite of different technologies and developments that folks are dealing with every day, whether they know it or not,” Carlson said, noting that discussions could focus on personal, social and economic impacts of AI use, as well as how it affects character and intellectual values. 

Davidson was inspired to focus on AI when he found an erroneous, AI-generated summary of one of his own articles.

“It was totally wrong,” he said. “I didn’t say those things, and it made me think about how somebody might look me up and find that summary of my article and get this false impression of me. That really highlighted that we need to build an understanding in students of the need to inquire deeper and to understand that you have to be able to evaluate AI’s accuracy and its reliability.”

Carlson and Davidson said the conversations need to consider AI’s drawbacks, as well. Using AI consumes large amounts of water and electricity, resulting in greenhouse emissions. Data centers produce electronic waste that can contain mercury and lead.

They also intend to follow legal cases and precedents surrounding the use of AI.

“That’s another aspect of AI and the ways that it represents people,” Carlson said. “Because it has a very real, material impact on people in communities. It’s not just a super computer in a room. It’s a network that has a bunch of different implications for a bunch of different people, ranging from jobs to familial relationships. That’s the value of the humanities — to ask these tough questions because it’s increasingly difficult to avoid all of it.”

Conversations, as they expand, will need to keep up with the pace of AI’s rapidly developing landscape.  

“There’s going to be a lot of people involved in this,” she said. “We put together an amazing team. We want it to be an open, honest and ethical conversation that brings in other folks and opens up further conversations across the College and the University at large.”"

Monday, January 6, 2025

We're using AI for stupid and unnecessary reasons. What if we just stopped? | Opinion; Detroit Free Press, January 6, 2025

Nancy Kaffer, Detroit Free Press; We're using AI for stupid and unnecessary reasons. What if we just stopped? | Opinion

"We're jumping feet first into unreliable, unproven tech with devastating environmental costs and a dense thicket of ethical problems.

It's a bad idea. And — because I enjoy shouting into the void — we really ought to stop."

Friday, December 27, 2024

Why ethics is becoming AI's biggest challenge; ZDNet, December 27, 2024

Joe McKendrick, ZDNet; Why ethics is becoming AI's biggest challenge

"Many of the technical issues associated with artificial intelligence have been resolved, but the hard work surrounding AI ethics is now coming to the forefront. This is proving even more challenging than addressing technology issues.

The challenge for development teams at this stage is "to recognize that creating ethical AI is not strictly a technical problem but a socio-technical problem," said Phaedra Boinodiris, global leader for trustworthy AI at IBM Consulting, in a recent podcast. This means extending AI oversight beyond IT and data management teams across organizations.

To build responsibly curated AI models, "you need a team composed of more than just data scientists," Boinodiris said. "For decades, we've been communicating that those who don't have traditional domain expertise don't belong in the room. That's a huge misstep."

"It's also notable that well-curated AI models "are also more accurate models," she added. To achieve this, "the team designing the model should be multidisciplinary rather than siloed." The ideal AI team should include "linguistics and philosophy experts, parents, young people, everyday people with different life experiences from different socio-economic backgrounds," she urged. "The wider the variety, the better." Team members are needed to weigh in on the following types of questions:

  • "Is this AI solving the problem we need it to?"
  • "Is this even the right data according to domain experts?"
  • "What are the unintended effects of AI?"
  • "How can we mitigate those effects?""

Tuesday, October 15, 2024

AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members; Business Wire, October 15, 2024

Business Wire; AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members

"The AI Ethics Council, founded by OpenAI CEO Sam Altman and Operation HOPE CEO John Hope Bryant, announced today that Reid Hoffman (Co-Founder of LinkedIn and Inflection AI and Partner at Greylock) and Van Jones (CNN commentator, Dream Machine Founder and New York Times best-selling author) have joined as a members. Formed in December 2023, the Council brings together an interdisciplinary body of diverse experts including civil rights activists, HBCU presidents, technology and business leaders, clergy, government officials and ethicists to collaborate and set guidelines on ways to ensure that traditionally underrepresented communities have a voice in the evolution of artificial intelligence and to help frame the human and ethical considerations around the technology. Ultimately, the Council also seeks to help determine how AI can be harnessed to create vast economic opportunities, especially for the underserved.

Mr. Hoffman and Mr. Jones join an esteemed group on the Council, which will serve as a leading authority in identifying, advising on and addressing ethical issues related to AI. In addition to Mr. Altman and Mr. Bryant, founding AI Ethics Council members include..."

Friday, August 30, 2024

AI Ethics Part Two: AI Framework Best Practices; Mondaq, August 29, 2024

Laura Gibbs, Ben Verley, Justin Gould, Kristin Morrow, Rebecca Reeder, Mondaq; AI Ethics Part Two: AI Framework Best Practices

"Ethical artificial intelligence frameworks are still emerging across both public and private sectors, making the task of building a responsible AI program particularly challenging. Organizations often struggle to define the right requirements and implement effective measures. So, where do you start if you want to integrate AI ethics into your operations?

In Part I of our AI ethics series, we highlighted the growing pressure on organizations to adopt comprehensive ethics frameworks and the impact of failing to do so. We emphasized the key motivators for businesses to proactively address potential risks before they become reality.

This article delves into what an AI ethics framework is and why it is vital for mitigating these risks and fostering responsible AI use. We review AI ethics best practices, explore common challenges and pitfalls, and draw insights from the experiences of leading industry players across various sectors. We also discuss key considerations to ensure an effective and actionable AI ethics framework, providing a solid foundation for your journey towards ethical AI implementation.

AI Ethics Framework: Outline

A comprehensive AI ethics framework offers practitioners a structured guide with established rules and practices, enabling the identification of control points, performance boundaries, responses to deviations, and acceptable risk levels. Such a framework ensures timely ethical decision-making by asking the right questions. Below, we detail the main functions, core components, and key controls necessary for a robust AI ethics framework."
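
The elements this outline names (control points, performance boundaries, responses to deviations, acceptable risk levels) become auditable once they are written down as data rather than prose. Below is a minimal Python sketch along those lines; the metric names, thresholds, and response categories are illustrative assumptions on my part, not drawn from the Mondaq article.

    # Hypothetical sketch: control points with explicit performance
    # boundaries and a prescribed response when a boundary is crossed.
    # All names and thresholds are invented for illustration.
    from dataclasses import dataclass
    from enum import Enum

    class Response(Enum):
        ACCEPTABLE = "within acceptable risk"
        REVIEW = "escalate for human review"
        HALT = "halt deployment"

    @dataclass
    class ControlPoint:
        name: str              # the metric being controlled
        lower: float           # performance boundary (inclusive)
        upper: float
        on_deviation: Response # response when the boundary is crossed

        def evaluate(self, observed: float) -> Response:
            if self.lower <= observed <= self.upper:
                return Response.ACCEPTABLE
            return self.on_deviation

    controls = [
        ControlPoint("demographic_parity_gap", 0.0, 0.05, Response.REVIEW),
        ControlPoint("validation_accuracy", 0.90, 1.0, Response.HALT),
    ]

    for cp, observed in [(controls[0], 0.08), (controls[1], 0.93)]:
        print(cp.name, "->", cp.evaluate(observed).value)

The specific numbers matter less than the structure: once each control point carries an explicit boundary and a prescribed response, deviations can be detected and escalated consistently rather than debated case by case.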