Showing posts with label AI ethics. Show all posts

Monday, January 27, 2025

Beyond ChatGPT: WVU researchers to study use and ethics of artificial intelligence across disciplines; WVU Today, January 22, 2025

WVU Today; Beyond ChatGPT: WVU researchers to study use and ethics of artificial intelligence across disciplines

"Two West Virginia University researchers have designed a curriculum to engage liberal arts faculty in discussions on the social, ethical and technical aspects of artificial intelligence and its role in classrooms.

Through a grant from the National Endowment for the Humanities, Erin Brock Carlson, assistant professor of English, and Scott Davidson, professor of philosophy, both at the WVU Eberly College of Arts and Sciences, have designed an interdisciplinary, cross-institutional program to facilitate conversations among faculty about the benefits and drawbacks of AI, how it functions and the need for human interpretation.

The award will fund a summer workshop in which Carlson and Davidson will offer AI trainings for humanities faculty and guide them through creation and development of courses with an AI component. The researchers will then assist as faculty offer those courses to students, assess progress and help with the implementation of the projects that develop.

The researchers said they hope to challenge the notion that artificial intelligence research falls into the domain of STEM fields. 

“The humanities gets overlooked and underappreciated so often,” Carlson said. “We are doing important, meaningful research, just like our colleagues in STEM and other fields. This is a chance to use a humanities lens to examine contemporary problems and developments like artificial intelligence and also to get conversations going between fields that oftentimes don’t talk to one another as much as we should.”

Co-directors Carlson and Davidson will be joined by a team of mentors and fellows — two from data science fields and two from the humanities — who will serve as resources in the interdisciplinary conversations. The seminar and summer workshops will support the creation or redesign of 10 courses. They plan to invite off-campus experts to help facilitate the workshops, work with the faculty and support their projects.

“It’s really about expanding capacity at the University and in the humanities to investigate the implications of AI or to actually use AI in humanities courses, whether it’s for writing, creating art or creating projects through the use of AI,” Davidson said. “There are a lot of different possibilities and directions that we hope these courses take. If we have 10 of them, it’s really going to have a big impact on humanities education here at the University.”

Carlson and Davidson acknowledge that attitudes about AI tend to be either extremely optimistic or extremely skeptical but that the reality is somewhere in the middle.

“AI is such a simplistic term to describe a whole suite of different technologies and developments that folks are dealing with every day, whether they know it or not,” Carlson said, noting that discussions could focus on personal, social and economic impacts of AI use, as well as how it affects character and intellectual values. 

Davidson was inspired to focus on AI when he found an erroneous, AI-generated summary of one of his own articles.

“It was totally wrong,” he said. “I didn’t say those things, and it made me think about how somebody might look me up and find that summary of my article and get this false impression of me. That really highlighted that we need to build an understanding in students of the need to inquire deeper and to understand that you have to be able to evaluate AI’s accuracy and its reliability.”

Carlson and Davidson said the conversations need to consider AI’s drawbacks, as well. Using AI consumes large amounts of water and electricity, resulting in greenhouse gas emissions. Data centers produce electronic waste that can contain mercury and lead. 

They also intend to follow legal cases and precedents surrounding the use of AI.

“That’s another aspect of AI and the ways that it represents people,” Carlson said. “Because it has a very real, material impact on people in communities. It’s not just a super computer in a room. It’s a network that has a bunch of different implications for a bunch of different people, ranging from jobs to familial relationships. That’s the value of the humanities — to ask these tough questions because it’s increasingly difficult to avoid all of it.”

As the conversations expand, they will need to keep pace with AI’s rapidly developing landscape.  

“There’s going to be a lot of people involved in this,” she said. “We put together an amazing team. We want it to be an open, honest and ethical conversation that brings in other folks and opens up further conversations across the College and the University at large.”"

Monday, January 6, 2025

We're using AI for stupid and unnecessary reasons. What if we just stopped? | Opinion; Detroit Free Press, January 6, 2025

Nancy Kaffer, Detroit Free Press; We're using AI for stupid and unnecessary reasons. What if we just stopped? | Opinion

"We're jumping feet first into unreliable, unproven tech with devastating environmental costs and a dense thicket of ethical problems.

It's a bad idea. And — because I enjoy shouting into the void — we really ought to stop."

Friday, December 27, 2024

Why ethics is becoming AI's biggest challenge; ZDNet, December 27, 2024

Joe McKendrick, ZDNet; Why ethics is becoming AI's biggest challenge

"Many of the technical issues associated with artificial intelligence have been resolved, but the hard work surrounding AI ethics is now coming to the forefront. This is proving even more challenging than addressing technology issues.

The challenge for development teams at this stage is "to recognize that creating ethical AI is not strictly a technical problem but a socio-technical problem," said Phaedra Boinodiris, global leader for trustworthy AI at IBM Consulting, in a recent podcast. This means extending AI oversight beyond IT and data management teams across organizations.

To build responsibly curated AI models, "you need a team composed of more than just data scientists," Boinodiris said. "For decades, we've been communicating that those who don't have traditional domain expertise don't belong in the room. That's a huge misstep."

"It's also notable that well-curated AI models "are also more accurate models," she added. To achieve this, "the team designing the model should be multidisciplinary rather than siloed." The ideal AI team should include "linguistics and philosophy experts, parents, young people, everyday people with different life experiences from different socio-economic backgrounds," she urged. "The wider the variety, the better." Team members are needed to weigh in on the following types of questions:

  • "Is this AI solving the problem we need it to?"
  • "Is this even the right data according to domain experts?"
  • "What are the unintended effects of AI?"
  • "How can we mitigate those effects?""

Tuesday, October 15, 2024

AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members; Business Wire, October 15, 2024

Business Wire; AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members

"The AI Ethics Council, founded by OpenAI CEO Sam Altman and Operation HOPE CEO John Hope Bryant, announced today that Reid Hoffman (Co-Founder of LinkedIn and Inflection AI and Partner at Greylock) and Van Jones (CNN commentator, Dream Machine Founder and New York Times best-selling author) have joined as a members. Formed in December 2023, the Council brings together an interdisciplinary body of diverse experts including civil rights activists, HBCU presidents, technology and business leaders, clergy, government officials and ethicists to collaborate and set guidelines on ways to ensure that traditionally underrepresented communities have a voice in the evolution of artificial intelligence and to help frame the human and ethical considerations around the technology. Ultimately, the Council also seeks to help determine how AI can be harnessed to create vast economic opportunities, especially for the underserved.

Mr. Hoffman and Mr. Jones join an esteemed group on the Council, which will serve as a leading authority in identifying, advising on and addressing ethical issues related to AI. In addition to Mr. Altman and Mr. Bryant, founding AI Ethics Council members include:

Friday, August 30, 2024

AI Ethics Part Two: AI Framework Best Practices; Mondaq, August 29, 2024

Laura Gibbs, Ben Verley, Justin Gould, Kristin Morrow and Rebecca Reeder, Mondaq; AI Ethics Part Two: AI Framework Best Practices

"Ethical artificial intelligence frameworks are still emerging across both public and private sectors, making the task of building a responsible AI program particularly challenging. Organizations often struggle to define the right requirements and implement effective measures. So, where do you start if you want to integrate AI ethics into your operations?

In Part I of our AI ethics series, we highlighted the growing pressure on organizations to adopt comprehensive ethics frameworks and the impact of failing to do so. We emphasized the key motivators for businesses to proactively address potential risks before they become reality.

This article delves into what an AI ethics framework is and why it is vital for mitigating these risks and fostering responsible AI use. We review AI ethics best practices, explore common challenges and pitfalls, and draw insights from the experiences of leading industry players across various sectors. We also discuss key considerations to ensure an effective and actionable AI ethics framework, providing a solid foundation for your journey towards ethical AI implementation.

AI Ethics Framework: Outline

A comprehensive AI ethics framework offers practitioners a structured guide with established rules and practices, enabling the identification of control points, performance boundaries, responses to deviations, and acceptable risk levels. Such a framework ensures timely ethical decision-making by asking the right questions. Below, we detail the main functions, core components, and key controls necessary for a robust AI ethics framework."

Essential Skills for IT Professionals in the AI Era; IEEE Spectrum, August 27, 2024

IEEE Spectrum; Essential Skills for IT Professionals in the AI Era

"Artificial Intelligence is transforming industries worldwide, creating new opportunities in health care, finance, customer service, and other disciplines. But the ascendance of AI raises concerns about job displacement, especially as the technology might automate tasks traditionally done by humans.

Jobs that involve data entry, basic coding, and routine system maintenance are at risk of being eliminated—which might worry new IT professionals. AI also creates new opportunities for workers, however, such as developing and maintaining new systems, data analysis, and cybersecurity. If IT professionals enhance their skills in areas such as machine learning, natural language processing, and automation, they can remain competitive as the job market evolves.

Here are some skills IT professionals need to stay relevant, as well as advice on how to thrive and opportunities for growth in the industry...

Key insights into AI ethics

Understanding the ethical considerations surrounding AI technologies is crucial. Courses on AI ethics and policy provide important insights into ethical implications, government regulations, stakeholder perspectives, and AI’s potential societal, economic, and cultural impacts.

I recommend reviewing case studies to learn from real-world examples and to get a grasp of the complexities surrounding ethical decision-making. Some AI courses explore best practices adopted by organizations to mitigate risks."

Sunday, August 18, 2024

UC Berkeley Law School To Offer Advanced Law Degree Focused On AI; Forbes, August 16, 2024

 Michael T. Nietzel, Forbes; UC Berkeley Law School To Offer Advanced Law Degree Focused On AI

"The University of California, Berkeley School of Law has announced that it will offer what it’s calling “the first-ever law degree with a focus on artificial intelligence (AI).” The new AI-focused Master of Laws (LL.M.) program is scheduled to launch in summer 2025.

The program, which will award an AI Law and Regulation certificate for students enrolled in UC Berkeley Law’s LL.M. executive track, is designed for working professionals and can be completed over two summers or through remote study combined with one summer on campus...

According to Assistant Law Dean Adam Sterling, the curriculum will cover topics such as AI ethics, the fundamentals of AI technology, and current and future efforts to regulate AI. “This program will equip participants with in-depth knowledge of the ethical, regulatory, and policy challenges posed by AI,” Sterling added. “It will focus on building practice skills to help them advise and represent leading law firms, AI companies, governments, and non-profit organizations.”"