Thursday, October 17, 2024

Californians want controls on AI. Why did Gavin Newsom veto an AI safety bill?; The Guardian, October 16, 2024

 Garrison Lovely, The Guardian; Californians want controls on AI. Why did Gavin Newsom veto an AI safety bill? 

"I’m writing a book on the economics and politics of AI and have analyzed years of nationwide polling on the topic. The findings are pretty consistent: people worry about risks from AI, favor regulations, and don’t trust companies to police themselves. Incredibly, these findings tend to hold true for both Republicans and Democrats.

So why would Newsom buck the popular bill?

Well, the bill was fiercely resisted by most of the AI industry, including Google, Meta and OpenAI. The US has let the industry self-regulate, and these companies desperately don’t want that to change – whatever sounds their leaders make to the contrary...

The top three names on the congressional letter – Zoe Lofgren, Anna Eshoo, and Ro Khanna – have collectively taken more than $4m in political contributions from the industry, accounting for nearly half of their lifetime top-20 contributors. Google was their biggest donor by far, with nearly $1m in total.

The death knell probably came from the former House speaker Nancy Pelosi, who published her own statement against the bill, citing the congressional letter and Li’s Fortune op-ed.

In 2021, reporters discovered that Lofgren’s daughter is a lawyer for Google, which prompted a watchdog to ask Pelosi to negotiate her recusal from antitrust oversight roles.

Who came to Lofgren’s defense? Eshoo and Khanna.

Three years later, Lofgren remains in these roles, which have helped her block efforts to rein in big tech – against the will of even her Silicon Valley constituents.

Pelosi’s 2023 financial disclosure shows that her husband owned between $16m and $80m in stocks and options in Amazon, Google, Microsoft and Nvidia...

Sunny Gandhi of the youth tech advocacy group Encode Justice, which co-sponsored the bill, told me: “When you tell the average person that tech giants are creating the most powerful tools in human history but resist simple measures to prevent catastrophic harm, their reaction isn’t just disbelief – it’s outrage. This isn’t just a policy disagreement; it’s a moral chasm between Silicon Valley and Main Street.”

Newsom just told us which of these he values more."

Wednesday, October 16, 2024

What's Next in AI: How do we regulate AI, and protect against worst outcomes?; Pittsburgh Post-Gazette, October 13, 2024

EVAN ROBINSON-JOHNSON, Pittsburgh Post-Gazette; What's Next in AI: How do we regulate AI, and protect against worst outcomes?

"Gov. Josh Shapiro will give more of an update on that project and others at a Monday event in Pittsburgh.

While most folks will likely ask him how Pennsylvania can build and use the tools of the future, a growing cadre in Pittsburgh is asking a broader policy question about how to protect against AI’s worst tendencies...

There are no federal laws that regulate the development and use of AI. Even at the state level, policies are sparse. California Gov. Gavin Newsom vetoed a major AI safety bill last month that would have forced greater commitments from the nation’s top AI developers, most of which are based in the Golden State...

Google CEO Sundar Pichai made a similar argument during a visit to Pittsburgh last month. He encouraged students from local high schools to build AI systems that will make the world a better place, then told a packed audience at Carnegie Mellon University that AI is “too important a technology not to regulate.”

Mr. Pichai said he’s hoping for an “innovation-oriented approach” that mostly leverages existing regulations rather than reinventing the wheel."

NASCAR aware of allegations a team engineer stole intellectual property to give to rival team; AP, October 14, 2024

JENNA FRYER, AP; NASCAR aware of allegations a team engineer stole intellectual property to give to rival team

"NASCAR has acknowledged it is aware of allegations that an engineer for a Cup Series team accessed proprietary information and shared it with another team...

Until a lawsuit is filed or a complaint is lodged with NASCAR, there is nothing the series can do, raising concerns that employees will be able to hand over intellectual property to rivals without ramifications."

His daughter was murdered. Then she reappeared as an AI chatbot.; The Washington Post, October 15, 2024

The Washington Post; His daughter was murdered. Then she reappeared as an AI chatbot.

"Jennifer’s name and image had been used to create a chatbot on Character.AI, a website that allows users to converse with digital personalities made using generative artificial intelligence. Several people had interacted with the digital Jennifer, which was created by a user on Character’s website, according to a screenshot of her chatbot’s now-deleted profile.

Crecente, who has spent the years since his daughter’s death running a nonprofit organization in her name to prevent teen dating violence, said he was appalled that Character had allowed a user to create a facsimile of a murdered high-schooler without her family’s permission. Experts said the incident raises concerns about the AI industry’s ability — or willingness — to shield users from the potential harms of a service that can deal in troves of sensitive personal information...

The company’s terms of service prevent users from impersonating any person or entity...

AI chatbots can engage in conversation and be programmed to adopt the personalities and biographical details of specific characters, real or imagined. They have found a growing audience online as AI companies market the digital companions as friends, mentors and romantic partners...

Rick Claypool, who researched AI chatbots for the nonprofit consumer advocacy organization Public Citizen, said while laws governing online content at large could apply to AI companies, they have largely been left to regulate themselves. Crecente isn’t the first grieving parent to have their child’s information manipulated by AI: Content creators on TikTok have used AI to imitate the voices and likenesses of missing children and produce videos of them narrating their deaths, to outrage from the children’s families, The Post reported last year.

“We desperately need for lawmakers and regulators to be paying attention to the real impacts these technologies are having on their constituents,” Claypool said. “They can’t just be listening to tech CEOs about what the policies should be … they have to pay attention to the families and individuals who have been harmed.”

Tuesday, October 15, 2024

AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members; Business Wire, October 15, 2024

Business Wire; AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members

"The AI Ethics Council, founded by OpenAI CEO Sam Altman and Operation HOPE CEO John Hope Bryant, announced today that Reid Hoffman (Co-Founder of LinkedIn and Inflection AI and Partner at Greylock) and Van Jones (CNN commentator, Dream Machine Founder and New York Times best-selling author) have joined as members. Formed in December 2023, the Council brings together an interdisciplinary body of diverse experts including civil rights activists, HBCU presidents, technology and business leaders, clergy, government officials and ethicists to collaborate and set guidelines on ways to ensure that traditionally underrepresented communities have a voice in the evolution of artificial intelligence and to help frame the human and ethical considerations around the technology. Ultimately, the Council also seeks to help determine how AI can be harnessed to create vast economic opportunities, especially for the underserved.

Mr. Hoffman and Mr. Jones join an esteemed group on the Council, which will serve as a leading authority in identifying, advising on and addressing ethical issues related to AI. In addition to Mr. Altman and Mr. Bryant, founding AI Ethics Council members include:..."

Monday, October 14, 2024

Copyright law violation? Mark Robinson campaign used photos from freelancer without permission; North Carolina Public Radio, October 11, 2024

Dave DeWitt, North Carolina Public Radio; Copyright law violation? Mark Robinson campaign used photos from freelancer without permission

"“Unfortunately, we see this happen repeatedly, every election season,” says Alicia Calzada, Deputy General Counsel for National Press Photographers Association. “In many cases it is not malicious. Rather it is a consequence of a campaign not understanding the basics of copyright law. This is especially true in down-ballot races, but we see infringement all the way up and down the ballot, and we see it in both parties.

“That said, the communications team of a gubernatorial campaign should be professional enough to know better."...

Hey says she plans to reach out to the Robinson campaign and ask that the photos be removed immediately. As of Friday afternoon, the photos were still on the site. Hey says she intends to explore legal options if the photos are not removed.

In some ways, the damage may already be done.

“In the context of a political campaign, a photojournalist needs to remain impartial as a part of their job responsibilities, and so when a campaign uses photographs without permission, it threatens the appearance of impartiality,” Calzada said. “This is one reason why many photojournalists fight so hard to protect their copyright.”"

Sunday, October 13, 2024

Art Collective Behind Viral Image of Kamala Harris Sues for Copyright Infringement; artnet, October 11, 2024

Jo Lawson-Tancred, artnet; Art Collective Behind Viral Image of Kamala Harris Sues for Copyright Infringement

"A lawsuit filed by Good Trubble in a California district court on October 10 alleges that Irem Erdem of Round Rock, Texas, deliberately committed copyright infringement because of the image’s “widespread dissemination” online.

The digitally-created artwork designed by Bria Goeller for Good Trubble is titled That Little Girl Was Me. It was released on October 20, 2020, and went viral shortly after the last U.S. presidential election in November 2020, when Harris became the first Black and South Asian woman to be elected vice president. The image can be bought as a print or on t-shirts and other products on Good Trubble’s website, including a new version featuring the White House in celebration of Harris’s current bid for the presidency.

"The image pairs the figure of Harris with the silhouette of activist Ruby Bridges as a young girl. It quotes from Norman Rockwell’s iconic 1964 painting The Problem We All Live With, which depicts the historic event of a six-year-old Bridges being escorted by four deputy U.S. marshals into an all-white public school during the New Orleans school desegregation crisis of 1960. This measure was taken to protect her from the threat of violence, which is hinted at by a racial slur and the splatter of thrown tomatoes scrawled on the wall behind her."

Saturday, October 12, 2024

5th Circuit rules ISP should have terminated Internet users accused of piracy; Ars Technica, October 11, 2024

 JON BRODKIN, Ars Technica; 5th Circuit rules ISP should have terminated Internet users accused of piracy

"Music publishing companies notched another court victory against a broadband provider that refused to terminate the accounts of Internet users accused of piracy. In a ruling on Wednesday, the conservative-leaning US Court of Appeals for the 5th Circuit sided with the big three record labels against Grande Communications, a subsidiary of Astound Broadband.

The appeals court ordered a new trial on damages because it said the $46.8 million award was too high, but affirmed the lower court's finding that Grande is liable for contributory copyright infringement."

Friday, October 11, 2024

Why The New York Times' lawyers are inspecting OpenAI's code in a secretive room; Business Insider, October 10, 2024

Business Insider; Why The New York Times' lawyers are inspecting OpenAI's code in a secretive room

"OpenAI is worth $157 billion largely because of the success of ChatGPT. But to build the chatbot, the company trained its models on vast quantities of text it didn't pay a penny for.

That text includes stories from The New York Times, articles from other publications, and an untold number of copyrighted books.

The examination of the code for ChatGPT, as well as for Microsoft's artificial intelligence models built using OpenAI's technology, is crucial for the copyright infringement lawsuits against the two companies.

Publishers and artists have filed about two dozen major copyright lawsuits against generative AI companies. They are out for blood, demanding a slice of the economic pie that made OpenAI the dominant player in the industry and which pushed Microsoft's valuation beyond $3 trillion. Judges deciding those cases may carve out the legal parameters for how large language models are trained in the US."

Monday, October 7, 2024

Authors Guild to offer “Human Authored” label on books to compete with AI; Marketplace.org, October 7, 2024

Matt Levin, Marketplace.org; Authors Guild to offer “Human Authored” label on books to compete with AI

"The Authors Guild, the professional association representing published novelists and nonfiction writers, is set to offer to its 15,000 members a new certificate they can place directly on their book covers.

About the size of literary award stickers or celebrity book club endorsements adorning the cover art of the latest bestseller, the certificate is a simple, round logo with two boldfaced words inside: “Human Authored.”

As in, written by a human — and not artificial intelligence.

A round, gold stamp reads "Human Authored," "Authors Guild."
(Courtesy The Authors Guild)

“It isn’t just to prevent fraud and deception,” said Douglas Preston, a bestselling novelist and nonfiction writer and member of the Authors Guild Council. “It’s also a declaration of how important storytelling is to who we are as a species. And we’re not going to let machines elbow us aside and pretend to be telling us stories, when it’s just regurgitating literary vomitus.”

‘We Have to Work Together’: Action Beyond Banned Books Week; American Libraries, October 2, 2024

  Paula Mauro, American Libraries; ‘We Have to Work Together’: Action Beyond Banned Books Week

"While Banned Books Week ended on September 28, writer, director, producer—and Banned Books Week honorary chair—Ava DuVernay stresses the importance of continuing the work of amplifying marginalized voices...

“This banned book effort is an agenda by people who want to make some of us less free, to silence the voices of some of us,” DuVernay tells American Libraries. “We can overcome this, but we have to work together.”

DuVernay recorded a video conversation with Banned Books Week Youth Honorary Chair Julia Garnett, a student activist who fought book bans in her home state of Tennessee and now attends Smith College in Northampton, Massachusetts. In the video, the two discuss DuVernay’s approach to championing diverse viewpoints as a filmmaker and ways the rest of us can join and stay in the fight.

The video is available here, as well as on the Banned Books Week YouTube channel. Highlights from the video are also excerpted below...

How student activists can cope with feeling lonely in their anticensorship fights—often as the youngest person in the room:

First of all, I commend you. I take my hat off to you. I bow to all activists who are doing that hard work. I think the one thing to remember is, it’s lonely because that’s what leadership is. There’s someone who’s leading, and that is who we’re following. And it’s lonely at the front. It’s about building coalition and making sure that the folks around you are aware, are educated, are interested, and are leaning in.

People have different levels of engagement, and that’s okay. But even one person can have an impact…. And if I feel that kind of loneliness—that, “Gosh, I’m the only one out here doing it, and everyone else is doing this”—if you feel it and you still want to do it, you’re on the right track. And there’s nothing better than feeling like you’re on the right track. So, not easy. But glorious, you know?"

Who uses libraries? Even in the stacks, there’s a political divide.; The Washington Post, October 4, 2024

The Washington Post; Who uses libraries? Even in the stacks, there’s a political divide.

"When we took a look at the nation’s declining reading habits, our struggling bookstores and the prodigious number of books consumed by America’s top 1 percent of readers, scores of you wrote in with a singular question: What about the libraries?!

You people sure do love libraries! You wanted to know everything. Who are the biggest library users? How many of our books do we get from libraries? What else do we use libraries for?

We scoured all the government sources we could think of before turning to the cabal of polling prodigies over at YouGov to see what they could gin up.

As usual, YouGov exceeded our expectations, asking at least 50 library-related questions of 2,429 U.S. adults in April. They touched on just about everything: librarian approval ratings, restrictions on drag queen story times, number of books read. They also asked about the library services we actually use, up to and including how many of us avail ourselves of the library restrooms."

Saturday, October 5, 2024

Library cancels Harry Potter programming over copyright issue; Buckrail, October 4, 2024

Marianne Zumberge, Buckrail; Library cancels Harry Potter programming over copyright issue

"It’s a sad day for little witches and wizards in Jackson Hole. The Teton County Library’s (TCL) slate of Harry Potter programming has been canceled due to copyright infringement. 

TCL announced the news on Wednesday, Oct. 2. TCL said it had received a cease-and-desist letter from Warner Bros. Entertainment Inc., which owns and controls all things Potter.

“Prior to receiving the letter, Library staff was unaware that this free educational event was a copyright infringement,” TCL’s announcement reads. “In the past, libraries had been encouraged to hold Harry Potter-themed events to promote the books as they were released.”

Three events had been planned for October: A Night at Hogwarts, Harry Potter Trivia for Adults and Harry Potter Family Day."

Friday, October 4, 2024

Beyond the hype: Key components of an effective AI policy; CIO, October 2, 2024

  Leo Rajapakse, CIO; Beyond the hype: Key components of an effective AI policy

"An AI policy is a living document 

Crafting an AI policy for your company is increasingly important due to the rapid growth and impact of AI technologies. By prioritizing ethical considerations, data governance, transparency and compliance, companies can harness the transformative potential of AI while mitigating risks and building trust with stakeholders. Remember, an effective AI policy is a living document that evolves with technological advancements and societal expectations. By investing in responsible AI practices today, businesses can pave the way for a sustainable and ethical future tomorrow."

Ethical uses of generative AI in the practice of law; Reuters, October 3, 2024

  Thomson Reuters; Ethical uses of generative AI in the practice of law

"In the rapidly evolving landscape of legal technology, the integration of generative AI tools presents both unprecedented opportunities and significant ethical challenges. Ryan Groff, a distinguished member of the Massachusetts Bar and a lecturer at New England Law, explores these dimensions in his enlightening webinar, “Ethical Uses of Generative AI in the Practice of Law.” 

"In the webinar, Ryan Groff discusses the ethical implications of using generative AI (GenAI) in legal practices, tracing the history of GenAI applications in law and distinguishing between the various AI tools available today. Groff emphasizes that while AI can enhance the efficiency of legal practices, it should not undermine the critical judgment of lawyers. He underscores the importance of maintaining rigorous supervision, safeguarding client confidentiality, and ensuring technological proficiency."

Thursday, October 3, 2024

Gilead Agrees to Allow Generic Version of Groundbreaking H.I.V. Shot in Poor Countries; The New York Times, October 2, 2024

The New York Times; Gilead Agrees to Allow Generic Version of Groundbreaking H.I.V. Shot in Poor Countries

"The drugmaker Gilead Sciences on Wednesday announced a plan to allow six generic pharmaceutical companies in Asia and North Africa to make and sell at a lower price its groundbreaking drug lenacapavir, a twice-yearly injection that provides near-total protection from infection with H.I.V.

Those companies will be permitted to sell the drug in 120 countries, including all the countries with the highest rates of H.I.V., which are in sub-Saharan Africa. Gilead will not charge the generic drugmakers for the licenses.

Gilead says the deal, made just weeks after clinical trial results showed how well the drug works, will provide rapid and broad access to a medication that has the potential to end the decades-long H.I.V. pandemic.

But the deal leaves out most middle- and high-income countries — including Brazil, Colombia, Mexico, China and Russia — that together account for about 20 percent of new H.I.V. infections. Gilead will sell its version of the drug in those countries at higher prices. The omission reflects a widening gulf in health care access that is increasingly isolating the people in the middle."

Tuesday, October 1, 2024

Fake Cases, Real Consequences [No digital link as of 10/1/24]; ABA Journal, Oct./Nov. 2024 Issue

 John Roemer, ABA Journal; Fake Cases, Real Consequences [No digital link as of 10/1/24]

"Legal commentator Eugene Volokh, a professor at UCLA School of Law who tracks AI in litigation, in February reported on the 14th court case he's found in which AI-hallucinated false citations appeared. It was a Missouri Court of Appeals opinion that assessed the offending appellant $10,000 in damages for a frivolous filing.

Hallucinations aren't the only snag, Volokh says. "It's also with the output mischaracterizing the precedents or omitting key context. So one still has to check that output to make sure it's sound, rather than just including it in one's papers."...

Echoing Volokh and other experts, ChatGPT itself seems clear-eyed about its limits. When asked about hallucinations in legal research, it replied in part: "Hallucinations in chatbot answers could potentially pose a problem for lawyers if they relied solely on the information provided by the chatbot without verifying its accuracy.""

Monday, September 30, 2024

OpenAI Faces Early Appeal in First AI Copyright Suit From Coders; Bloomberg Law, September 30, 2024

Isaiah Poritz, Bloomberg Law; OpenAI Faces Early Appeal in First AI Copyright Suit From Coders

"OpenAI Inc. and Microsoft Corp.‘s GitHub will head to the country’s largest federal appeals court to resolve their first copyright lawsuit from open-source programmers who claim the companies’ AI coding tool Copilot violates a decades-old digital copyright law.

Judge Jon S. Tigar granted the programmers’ request for a mid-case turn to the US Court of Appeals for the Ninth Circuit, which must determine whether OpenAI’s copying of open-source code to train its AI model without proper attribution to the programmers could be a violation of the Digital Millennium Copyright Act...

The programmers argued that Copilot fails to include authorship and licensing terms when it outputs code. Unlike other lawsuits against AI companies, the programmers didn’t allege that OpenAI and GitHub engaged in copyright infringement, which is different from a DMCA violation."

USU's College of Humanities & Social Sciences Hosts Conference on Ethics of AI; Utah State University (USU), September 23, 2024

 Utah State University (USU); USU's College of Humanities & Social Sciences Hosts Conference on Ethics of AI

"AI’s emergence from the obscure to the unavoidable has come with many questions and concerns — some of which deal with how we can and should use it ethically.

To help answer some of these questions, the USU Communication Studies and Philosophy Department and the Center for Anticipatory Intelligence hosted a conference.

They brought in scholars from a variety of disciplines to discuss these issues — with these experts coming from the University of Cambridge, New York University and Northeastern University, among others."

Sunday, September 29, 2024

Gavin Newsom vetoes sweeping AI safety bill, siding with Silicon Valley; Politico, September 29, 2024

LARA KORTE and JEREMY B. WHITE, Politico; Gavin Newsom vetoes sweeping AI safety bill, siding with Silicon Valley

"Gov. Gavin Newsom vetoed a sweeping California bill meant to impose safety vetting requirements for powerful AI models, siding with much of Silicon Valley and leading congressional Democrats in the most high-profile fight in the Legislature this year."

AI could be an existential threat to publishers – that’s why Mumsnet is fighting back; The Guardian, September 28, 2024

The Guardian; AI could be an existential threat to publishers – that’s why Mumsnet is fighting back

"After nearly 25 years as a founder of Mumsnet, I considered myself pretty unshockable when it came to the workings of big tech. But my jaw hit the floor last week when I read that Google was pushing to overhaul UK copyright law in a way that would allow it to freely mine other publishers’ content for commercial gain without compensation.

At Mumsnet, we’ve been on the sharp end of this practice, and have recently launched the first British legal action against the tech giant OpenAI. Earlier in the year, we became aware that it was scraping our content – presumably to train its large language model (LLM). Such scraping without permission is a breach of copyright laws and explicitly of our terms of use, so we approached OpenAI and suggested a licensing deal. After lengthy talks (and signing a non-disclosure agreement), it told us it wasn’t interested, saying it was after “less open” data sources...

If publishers wither and die because the AIs have hoovered up all their traffic, then who’s left to produce the content to feed the models? And let’s be honest – it’s not as if these tech giants can’t afford to properly compensate publishers. OpenAI is currently fundraising to the tune of $6.5bn, the single largest venture capital round of all time, valuing the enterprise at a cool $150bn. In fact, it has just been reported that the company is planning to change its structure and become a for-profit enterprise...

I’m not anti-AI. It plainly has the potential to advance human progress and improve our lives in myriad ways. We used it at Mumsnet to build MumsGPT, which uncovers and summarises what parents are thinking about – everything from beauty trends to supermarkets to politicians – and we licensed OpenAI’s API (application programming interface) to build it. Plus, we think there are some very good reasons why these AI models should ingest Mumsnet’s conversations to train their models. The 6bn-plus words on Mumsnet are a unique record of 24 years of female interaction about everything from global politics to relationships with in-laws. By contrast, most of the content on the web was written by and for men. AI models have misogyny baked in and we’d love to help counter their gender bias.

But Google’s proposal to change our laws would allow billion-dollar companies to waltz untrammelled over any notion of a fair value exchange in the name of rapid “development”. Everything that’s unique and brilliant about smaller publisher sites would be lost, and a handful of Silicon Valley giants would be left with even more control over the world’s content and commerce."