Showing posts with label AI ethics. Show all posts

Monday, January 6, 2025

We're using AI for stupid and unnecessary reasons. What if we just stopped? | Opinion; Detroit Free Press, January 6, 2025

Nancy Kaffer, Detroit Free Press; We're using AI for stupid and unnecessary reasons. What if we just stopped? | Opinion

"We're jumping feet first into unreliable, unproven tech with devastating environmental costs and a dense thicket of ethical problems.

It's a bad idea. And — because I enjoy shouting into the void — we really ought to stop."

Friday, December 27, 2024

Why ethics is becoming AI's biggest challenge; ZDNet, December 27, 2024

Joe McKendrick, ZDNet; Why ethics is becoming AI's biggest challenge

"Many of the technical issues associated with artificial intelligence have been resolved, but the hard work surrounding AI ethics is now coming to the forefront. This is proving even more challenging than addressing technology issues.

The challenge for development teams at this stage is "to recognize that creating ethical AI is not strictly a technical problem but a socio-technical problem," said Phaedra Boinodiris, global leader for trustworthy AI at IBM Consulting, in a recent podcast. This means extending AI oversight beyond IT and data management teams across organizations.

To build responsibly curated AI models, "you need a team composed of more than just data scientists," Boinodiris said. "For decades, we've been communicating that those who don't have traditional domain expertise don't belong in the room. That's a huge misstep."

"It's also notable that well-curated AI models "are also more accurate models," she added. To achieve this, "the team designing the model should be multidisciplinary rather than siloed." The ideal AI team should include "linguistics and philosophy experts, parents, young people, everyday people with different life experiences from different socio-economic backgrounds," she urged. "The wider the variety, the better." Team members are needed to weigh in on the following types of questions:

  • "Is this AI solving the problem we need it to?"
  • "Is this even the right data according to domain experts?"
  • "What are the unintended effects of AI?"
  • "How can we mitigate those effects?""

Tuesday, October 15, 2024

AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members; Business Wire, October 15, 2024

Business Wire; AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members

"The AI Ethics Council, founded by OpenAI CEO Sam Altman and Operation HOPE CEO John Hope Bryant, announced today that Reid Hoffman (Co-Founder of LinkedIn and Inflection AI and Partner at Greylock) and Van Jones (CNN commentator, Dream Machine Founder and New York Times best-selling author) have joined as a members. Formed in December 2023, the Council brings together an interdisciplinary body of diverse experts including civil rights activists, HBCU presidents, technology and business leaders, clergy, government officials and ethicists to collaborate and set guidelines on ways to ensure that traditionally underrepresented communities have a voice in the evolution of artificial intelligence and to help frame the human and ethical considerations around the technology. Ultimately, the Council also seeks to help determine how AI can be harnessed to create vast economic opportunities, especially for the underserved.

Mr. Hoffman and Mr. Jones join an esteemed group on the Council, which will serve as a leading authority in identifying, advising on and addressing ethical issues related to AI. In addition to Mr. Altman and Mr. Bryant, founding AI Ethics Council members include:

Friday, August 30, 2024

AI Ethics Part Two: AI Framework Best Practices; Mondaq, August 29, 2024

Laura Gibbs, Ben Verley, Justin Gould, Kristin Morrow, Rebecca Reeder, Mondaq; AI Ethics Part Two: AI Framework Best Practices

"Ethical artificial intelligence frameworks are still emerging across both public and private sectors, making the task of building a responsible AI program particularly challenging. Organizations often struggle to define the right requirements and implement effective measures. So, where do you start if you want to integrate AI ethics into your operations?

In Part I of our AI ethics series, we highlighted the growing pressure on organizations to adopt comprehensive ethics frameworks and the impact of failing to do so. We emphasized the key motivators for businesses to proactively address potential risks before they become reality.

This article delves into what an AI ethics framework is and why it is vital for mitigating these risks and fostering responsible AI use. We review AI ethics best practices, explore common challenges and pitfalls, and draw insights from the experiences of leading industry players across various sectors. We also discuss key considerations to ensure an effective and actionable AI ethics framework, providing a solid foundation for your journey towards ethical AI implementation.

AI Ethics Framework: Outline

A comprehensive AI ethics framework offers practitioners a structured guide with established rules and practices, enabling the identification of control points, performance boundaries, responses to deviations, and acceptable risk levels. Such a framework ensures timely ethical decision-making by asking the right questions. Below, we detail the main functions, core components, and key controls necessary for a robust AI ethics framework."

Essential Skills for IT Professionals in the AI Era; IEEE Spectrum, August 27, 2024

IEEE Spectrum; Essential Skills for IT Professionals in the AI Era

"Artificial Intelligence is transforming industries worldwide, creating new opportunities in health care, finance, customer service, and other disciplines. But the ascendance of AI raises concerns about job displacement, especially as the technology might automate tasks traditionally done by humans.

Jobs that involve data entry, basic coding, and routine system maintenance are at risk of being eliminated—which might worry new IT professionals. AI also creates new opportunities for workers, however, such as developing and maintaining new systems, data analysis, and cybersecurity. If IT professionals enhance their skills in areas such as machine learning, natural language processing, and automation, they can remain competitive as the job market evolves.

Here are some skills IT professionals need to stay relevant, as well as advice on how to thrive and opportunities for growth in the industry...

Key insights into AI ethics

Understanding the ethical considerations surrounding AI technologies is crucial. Courses on AI ethics and policy provide important insights into ethical implications, government regulations, stakeholder perspectives, and AI’s potential societal, economic, and cultural impacts.

I recommend reviewing case studies to learn from real-world examples and to get a grasp of the complexities surrounding ethical decision-making. Some AI courses explore best practices adopted by organizations to mitigate risks."

Sunday, August 18, 2024

UC Berkeley Law School To Offer Advanced Law Degree Focused On AI; Forbes, August 16, 2024

Michael T. Nietzel, Forbes; UC Berkeley Law School To Offer Advanced Law Degree Focused On AI

"The University of California, Berkeley School of Law has announced that it will offer what it’s calling “the first-ever law degree with a focus on artificial intelligence (AI).” The new AI-focused Master of Laws (LL.M.) program is scheduled to launch in summer 2025.

The program, which will award an AI Law and Regulation certificate for students enrolled in UC Berkeley Law’s LL.M. executive track, is designed for working professionals and can be completed over two summers or through remote study combined with one summer on campus...

According to Assistant Law Dean Adam Sterling, the curriculum will cover topics such as AI ethics, the fundamentals of AI technology, and current and future efforts to regulate AI. “This program will equip participants with in-depth knowledge of the ethical, regulatory, and policy challenges posed by AI,” Sterling added. “It will focus on building practice skills to help them advise and represent leading law firms, AI companies, governments, and non-profit organizations.”"

Wednesday, July 31, 2024

Who Guards AI Ethics? Ending The Blame Game; Forbes, July 30, 2024

Gary Drenik, Forbes; Who Guards AI Ethics? Ending The Blame Game

"As AI becomes increasingly sophisticated and ubiquitous, a critical question has emerged: Who bears the responsibility for ensuring its ethical development and implementation?

According to a recent survey by Prosper Insights & Analytics, about 37% of US adults agree AI solutions need human oversight. However, corporations and governments are engaging in a frustrating game of hot potato, each pointing fingers and shirking accountability. This lack of clear responsibility poses significant risks.

On one hand, excessive government control and overregulation could stifle innovation, hindering AI's progress and potential to solve complex problems. Conversely, unchecked corporate influence and a lack of proper oversight could result in an "AI Wild West," where profit-driven motives supersede ethical considerations. This could result in biased algorithms, privacy breaches and the exacerbation of social inequalities."

Wednesday, July 10, 2024

Considering the Ethics of AI Assistants; Tech Policy Press, July 7, 2024

Justin Hendrix, Tech Policy Press; Considering the Ethics of AI Assistants

"Just a couple of weeks before Pichai took the stage, in April, Google DeepMind published a paper that boasts 57 authors, including experts from a range of disciplines from different parts of Google, including DeepMind, Jigsaw, and Google Research, as well as researchers from academic institutions such as Oxford, University College London, Delft University of Technology, University of Edinburgh, and a think tank at Georgetown, the Center for Security and Emerging Technology. The paper speculates about the ethical and societal risks posed by the types of AI assistants Google and other tech firms want to build, which the authors say are “likely to have a profound impact on our individual and collective lives.”"

Saturday, June 22, 2024

NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’; Los Angeles Times, June 21, 2024

Samantha Masunaga, Los Angeles Times; NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’

"Artificial intelligence is “exciting,” but guardrails must be put in place to protect labor, intellectual property and ethics, NBCUniversal Studio Group Chairman Donna Langley said Friday at an entertainment industry law conference.

During a wide-ranging, on-stage conversation at the UCLA Entertainment Symposium, the media chief emphasized that first, “the labor piece of it has to be right,” a proclamation that was met with applause from the audience. 

“Nor should we infringe on people’s rights,” she said, adding that there also needs to be “very good, clever, sophisticated copyright laws around our IP.”...

AI has emerged as a major issue in Hollywood, as technology companies have increasingly courted studios and industry players. But it is a delicate dance, as entertainment industry executives want to avoid offending actors, writers and other workers who view the technology as a threat to their jobs."

AI lab at Christian university aims to bring morality and ethics to artificial intelligence; Fox News, June 17, 2024

Christine Rousselle, Fox News; AI lab at Christian university aims to bring morality and ethics to artificial intelligence

"A new AI Lab at a Christian university in California is grounded in theological values — something the school hopes will help to prevent Christians and others of faith from falling behind when it comes to this new technology.

"The AI Lab at Biola University is a dedicated space where students, faculty and staff converge to explore the intricacies of artificial intelligence," Dr. Michael J. Arena told Fox News Digital...

The lab is meant to "be a crucible for shaping the future of AI," Arena said via email, noting the lab aims to do this by "providing education, fostering dialogue and leading innovative AI projects rooted in Christian beliefs." 

While AI has been controversial, Arena believes that educational institutions have to "embrace AI or risk falling behind" in technology. 

"If we don't engage, we risk falling asleep at the wheel," Arena said, referring to Christian and faith-centered institutions. 

He pointed to social media as an example of how a failure to properly engage with an emerging technology with a strong approach to moral values has had disastrous results."