Saturday, October 19, 2024

Courts Agree That No One Should Have a Monopoly Over the Law. Congress Shouldn’t Change That; Electronic Frontier Foundation (EFF), October 16, 2024

 CORYNNE MCSHERRY, Electronic Frontier Foundation (EFF); Courts Agree That No One Should Have a Monopoly Over the Law. Congress Shouldn’t Change That

"For more than a decade, giant standards development organizations (SDOs) have been fighting in courts around the country, trying use copyright law to control access to other laws. They claim that that they own the copyright in the text of some of the most important regulations in the country – the codes that protect product, building and environmental safety--and that they have the right to control access to those laws. And they keep losing because, it turns out, from New York, to Missouri, to the District of Columbia, judges understand that this is an absurd and undemocratic proposition. 

They suffered their latest defeat in Pennsylvania, where a district court held that UpCodes, a company that has created a database of building codes – like the National Electrical Code – can include codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law. Some courts, including the Fifth Circuit Court of Appeals, have rejected that theory outright, holding that standards lose copyright protection when they are incorporated into law. Others, like the DC Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that whether or not the legal status of the standards changes once they are incorporated into law, posting them online is a lawful fair use.

In this case, ASTM v. UpCodes, the court followed the latter path. Relying in large part on the DC Circuit’s decision, as well as an amicus brief EFF filed in support of UpCodes, the court held that providing access to the law (for free or subject to a subscription for “premium” access) was a lawful fair use. A key theme of the ruling is the public interest in accessing law:"

Friday, October 18, 2024

Mass shooting survivors turn to an unlikely place for justice – copyright law; The Guardian, October 18, 2024

The Guardian; Mass shooting survivors turn to an unlikely place for justice – copyright law

"In a Nashville courtroom in early July, survivors of the 2023 Covenant school shooting celebrated an unusual legal victory. Citing copyrightlaw, Judge l’Ashea Myles ruled that the assailant’s writings and other creative property could not be released to the public.

After months of hearings, the decision came down against conservative lawmakers, journalists and advocates who had sued for access to the writings, claiming officials had no right to keep them from the public. But since parents of the assailant – who killed six people at the private Christian elementary school, including three nine-year-old children – signed legal ownership of the shooter’s journals over to the families of surviving students last year, Myles said releasing the materials would violate the federal Copyright Act...

Keeping a shooter’s name or creative property – such as a manifesto or recording – out of the public eye does more than protect the emotional wellbeing of those impacted, experts say. It also helps to prevent future massacres.

That such material can serve as inspiration has been widely documented, explains Rachel Carroll Rivas of the Southern Poverty Law Center’s Intelligence Project. “Those videos just have an inherent dangerous factor, and they really shouldn’t be allowed to spread across the internet,” she said.

The danger stems from the fact that shooters, research has shown, often desire attention, recognition and notoriety."

It Sure Looks Like Trump Watches Are Breaking Copyright Law; Wired, October 18, 2024

Matt Giles, Wired; It Sure Looks Like Trump Watches Are Breaking Copyright Law

"According to the Associated Press, though, TheBestWatchesonEarth LLC advertised a product it can’t deliver, as that image is owned by the 178-year-old news agency. This week, the AP told WIRED it is pursuing a cease and desist against the LLC, which is registered in Sheridan, Wyoming. (The company did not reply to a request for comment about the cease and desist letter.)

Evan Vucci, the AP’s Pulitzer Prize–winning chief photographer, took that photograph, and while he told WIRED he does not own the rights to that image, the AP confirmed earlier this month in an email to WIRED that it is filing the written notice. “AP is proud of Evan Vucci’s photo and recognizes its impact,” wrote AP spokesperson Nicole Meir. “We reserve our rights to this powerful image, as we do with all AP journalism, and continue to license it for editorial use only.”"

Penguin Random House underscores copyright protection in AI rebuff; The Bookseller, October 18, 2024

 MATILDA BATTERSBY, The Bookseller; Penguin Random House underscores copyright protection in AI rebuff

"The world’s biggest trade publisher has changed the wording on its copyright pages to help protect authors’ intellectual property from being used to train large language models (LLMs) and other artificial intelligence (AI) tools, The Bookseller can exclusively reveal.

Penguin Random House (PRH) has amended its copyright wording across all imprints globally, confirming it will appear “in imprint pages across our markets”. The new wording states: “No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems”, and will be included in all new titles and any backlist titles that are reprinted.

The statement also “expressly reserves [the titles] from the text and data mining exception”, in accordance with a European Parliament directive.

The move specifically to ban the use of its titles by AI firms for the development of chatbots and other digital tools comes amid a slew of copyright infringement cases in the US and reports that large tranches of pirated books have already been used by tech companies to train AI tools. In 2024, several academic publishers including Taylor & Francis, Wiley and Sage have announced partnerships to license content to AI firms.

PRH is believed to be the first of the Big Five anglophone trade publishers to amend its copyright information to reflect the acceleration of AI systems and the alleged reliance by tech companies on using published work to train language models."

Thursday, October 17, 2024

Californians want controls on AI. Why did Gavin Newsom veto an AI safety bill?; The Guardian, October 16, 2024

 Garrison Lovely, The Guardian; Californians want controls on AI. Why did Gavin Newsom veto an AI safety bill? 

"I’m writing a book on the economics and politics of AI and have analyzed years of nationwide polling on the topic. The findings are pretty consistent: people worry about risks from AI, favor regulations, and don’t trust companies to police themselves. Incredibly, these findings tend to hold true for both Republicans and Democrats.

So why would Newsom buck the popular bill?

Well, the bill was fiercely resisted by most of the AI industry, including Google, Meta and OpenAI. The US has let the industry self-regulate, and these companies desperately don’t want that to change – whatever sounds their leaders make to the contrary...

The top three names on the congressional letter – Zoe Lofgren, Anna Eshoo, and Ro Khanna – have collectively taken more than $4m in political contributions from the industry, accounting for nearly half of their lifetime top-20 contributors. Google was their biggest donor by far, with nearly $1m in total.

The death knell probably came from the former House speaker Nancy Pelosi, who published her own statement against the bill, citing the congressional letter and Li’s Fortune op-ed.

In 2021, reporters discovered that Lofgren’s daughter is a lawyer for Google, which prompted a watchdog to ask Pelosi to negotiate her recusal from antitrust oversight roles.

Who came to Lofgren’s defense? Eshoo and Khanna.

Three years later, Lofgren remains in these roles, which have helped her block efforts to rein in big tech – against the will of even her Silicon Valley constituents.

Pelosi’s 2023 financial disclosure shows that her husband owned between $16m and $80m in stocks and options in Amazon, Google, Microsoft and Nvidia...

Sunny Gandhi of the youth tech advocacy group Encode Justice, which co-sponsored the bill, told me: “When you tell the average person that tech giants are creating the most powerful tools in human history but resist simple measures to prevent catastrophic harm, their reaction isn’t just disbelief – it’s outrage. This isn’t just a policy disagreement; it’s a moral chasm between Silicon Valley and Main Street.”

Newsom just told us which of these he values more."

Wednesday, October 16, 2024

What's Next in AI: How do we regulate AI, and protect against worst outcomes?; Pittsburgh Post-Gazette, October 13, 2024

EVAN ROBINSON-JOHNSON, Pittsburgh Post-Gazette; What's Next in AI: How do we regulate AI, and protect against worst outcomes?

"Gov. Josh Shapiro will give more of an update on that project and others at a Monday event in Pittsburgh.

While most folks will likely ask him how Pennsylvania can build and use the tools of the future, a growing cadre in Pittsburgh is asking a broader policy question about how to protect against AI’s worst tendencies...

There are no federal laws that regulate the development and use of AI. Even at the state level, policies are sparse. California Gov. Gavin Newsom vetoed a major AI safety bill last month that would have forced greater commitments from the nation’s top AI developers, most of which are based in the Golden State...

Google CEO Sundar Pichai made a similar argument during a visit to Pittsburgh last month. He encouraged students from local high schools to build AI systems that will make the world a better place, then told a packed audience at Carnegie Mellon University that AI is “too important a technology not to regulate.”

Mr. Pichai said he’s hoping for an “innovation-oriented approach” that mostly leverages existing regulations rather than reinventing the wheel."

NASCAR aware of allegations a team engineer stole intellectual property to give to rival team; AP, October 14, 2024

JENNA FRYER, AP; NASCAR aware of allegations a team engineer stole intellectual property to give to rival team

"NASCAR has acknowledged it is aware of allegations that an engineer for a Cup Series team accessed proprietary information and shared it with another team...

Until a lawsuit is filed or a complaint is lodged with NASCAR, there is nothing the series can do, raising concerns that employees will be able to hand over intellectual property to rivals without ramifications."

His daughter was murdered. Then she reappeared as an AI chatbot.; The Washington Post, October 15, 2024

The Washington Post; His daughter was murdered. Then she reappeared as an AI chatbot.

"Jennifer’s name and image had been used to create a chatbot on Character.AI, a website that allows users to converse with digital personalities made using generative artificial intelligence. Several people had interacted with the digital Jennifer, which was created by a user on Character’s website, according to a screenshot of her chatbot’s now-deleted profile.

Crecente, who has spent the years since his daughter’s death running a nonprofit organization in her name to prevent teen dating violence, said he was appalled that Character had allowed a user to create a facsimile of a murdered high-schooler without her family’s permission. Experts said the incident raises concerns about the AI industry’s ability — or willingness — to shield users from the potential harms of a service that can deal in troves of sensitive personal information...

The company’s terms of service prevent users from impersonating any person or entity...

AI chatbots can engage in conversation and be programmed to adopt the personalities and biographical details of specific characters, real or imagined. They have found a growing audience online as AI companies market the digital companions as friends, mentors and romantic partners...

Rick Claypool, who researched AI chatbots for the nonprofit consumer advocacy organization Public Citizen, said while laws governing online content at large could apply to AI companies, they have largely been left to regulate themselves. Crecente isn’t the first grieving parent to have their child’s information manipulated by AI: Content creators on TikTok have used AI to imitate the voices and likenesses of missing children and produce videos of them narrating their deaths, to outrage from the children’s families, The Post reported last year.

“We desperately need for lawmakers and regulators to be paying attention to the real impacts these technologies are having on their constituents,” Claypool said. “They can’t just be listening to tech CEOs about what the policies should be … they have to pay attention to the families and individuals who have been harmed.”"

Tuesday, October 15, 2024

AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members; Business Wire, October 15, 2024

Business Wire; AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members

"The AI Ethics Council, founded by OpenAI CEO Sam Altman and Operation HOPE CEO John Hope Bryant, announced today that Reid Hoffman (Co-Founder of LinkedIn and Inflection AI and Partner at Greylock) and Van Jones (CNN commentator, Dream Machine Founder and New York Times best-selling author) have joined as a members. Formed in December 2023, the Council brings together an interdisciplinary body of diverse experts including civil rights activists, HBCU presidents, technology and business leaders, clergy, government officials and ethicists to collaborate and set guidelines on ways to ensure that traditionally underrepresented communities have a voice in the evolution of artificial intelligence and to help frame the human and ethical considerations around the technology. Ultimately, the Council also seeks to help determine how AI can be harnessed to create vast economic opportunities, especially for the underserved.

Mr. Hoffman and Mr. Jones join an esteemed group on the Council, which will serve as a leading authority in identifying, advising on and addressing ethical issues related to AI. In addition to Mr. Altman and Mr. Bryant, founding AI Ethics Council members include:"

Monday, October 14, 2024

Copyright law violation? Mark Robinson campaign used photos from freelancer without permission; North Carolina Public Radio, October 11, 2024

Dave DeWitt, North Carolina Public Radio; Copyright law violation? Mark Robinson campaign used photos from freelancer without permission

"“Unfortunately, we see this happen repeatedly, every election season,” says Alicia Calzada, Deputy General Counsel for National Press Photographers Association. “In many cases it is not malicious. Rather it is a consequence of a campaign not understanding the basics of copyright law. This is especially true in down-ballot races, but we see infringement all the way up and down the ballot, and we see it in both parties.

“That said, the communications team of a gubernatorial campaign should be professional enough to know better."...

Hey says she plans to reach out to the Robinson campaign and ask that the photos be removed immediately. As of Friday afternoon, the photos were still on the site. Hey says she intends to explore legal options, if the photos are not removed.

In some ways, the damage may already be done.

“In the context of a political campaign, a photojournalist needs to remain impartial as a part of their job responsibilities, and so when a campaign uses photographs without permission, it threatens the appearance of impartiality,” Calzada said. “This is one reason why many photojournalists fight so hard to protect their copyright.”"

Sunday, October 13, 2024

Art Collective Behind Viral Image of Kamala Harris Sues for Copyright Infringement; artnet, October 11, 2024

Jo Lawson-Tancred, artnet; Art Collective Behind Viral Image of Kamala Harris Sues for Copyright Infringement

"A lawsuit filed by Good Trubble in a California district on October 10 alleges that Irem Erdem of Round Rock, Texas, deliberately committed copyright infringement because of the image’s “widespread dissemination” online.

The digitally-created artwork designed by Bria Goeller for Good Trubble is titled That Little Girl Was Me. It was released on October 20, 2020, and went viral shortly after the last U.S. presidential election in November 2020, when Harris became the first Black and South Asian woman to be elected vice president. The image can be bought as a print or on t-shirts and other products on Good Trubble’s website, including a new version featuring the White House in celebration of Harris’s current bid for the presidency.

The image pairs the figure of Harris with the silhouette of activist Ruby Bridges as a young girl. It quotes from Norman Rockwell’s iconic 1964 painting The Problem We All Live With, which depicts the historic event of six-year-old Bridges being escorted by four deputy U.S. marshals into the all-white public school during the New Orleans school desegregation crisis of 1960. This measure was taken to protect her from the threat of violence, which is hinted at by a racial slur and the splatter of thrown tomatoes scrawled on the wall behind her."

Saturday, October 12, 2024

5th Circuit rules ISP should have terminated Internet users accused of piracy; Ars Technica, October 11, 2024

 JON BRODKIN, Ars Technica; 5th Circuit rules ISP should have terminated Internet users accused of piracy

"Music publishing companies notched another court victory against a broadband provider that refused to terminate the accounts of Internet users accused of piracy. In a ruling on Wednesday, the conservative-leaning US Court of Appeals for the 5th Circuit sided with the big three record labels against Grande Communications, a subsidiary of Astound Broadband.

The appeals court ordered a new trial on damages because it said the $46.8 million award was too high, but affirmed the lower court's finding that Grande is liable for contributory copyright infringement."

Friday, October 11, 2024

Why The New York Times' lawyers are inspecting OpenAI's code in a secretive room; Business Insider, October 10, 2024

Business Insider; Why The New York Times' lawyers are inspecting OpenAI's code in a secretive room

"OpenAI is worth $157 billion largely because of the success of ChatGPT. But to build the chatbot, the company trained its models on vast quantities of text it didn't pay a penny for.

That text includes stories from The New York Times, articles from other publications, and an untold number of copyrighted books.

The examination of the code for ChatGPT, as well as for Microsoft's artificial intelligence models built using OpenAI's technology, is crucial for the copyright infringement lawsuits against the two companies.

Publishers and artists have filed about two dozen major copyright lawsuits against generative AI companies. They are out for blood, demanding a slice of the economic pie that made OpenAI the dominant player in the industry and which pushed Microsoft's valuation beyond $3 trillion. Judges deciding those cases may carve out the legal parameters for how large language models are trained in the US."

Monday, October 7, 2024

Authors Guild to offer “Human Authored” label on books to compete with AI; Marketplace.org, October 7, 2024

Matt Levin, Marketplace.org; Authors Guild to offer “Human Authored” label on books to compete with AI

"The Authors Guild, the professional association representing published novelists and nonfiction writers, is set to offer to its 15,000 members a new certificate they can place directly on their book covers.

About the size of literary award stickers or celebrity book club endorsements adorning the cover art of the latest bestseller, the certificate is a simple, round logo with two boldfaced words inside: “Human Authored.”

As in, written by a human — and not artificial intelligence.

[Image: A round, gold stamp reads “Human Authored,” “Authors Guild.” (Courtesy The Authors Guild)]

“It isn’t just to prevent fraud and deception,” said Douglas Preston, a bestselling novelist and nonfiction writer and member of the Authors Guild Council. “It’s also a declaration of how important storytelling is to who we are as a species. And we’re not going to let machines elbow us aside and pretend to be telling us stories, when it’s just regurgitating literary vomitus.”"

‘We Have to Work Together’: Action Beyond Banned Books Week; American Libraries, October 2, 2024

  Paula Mauro, American Libraries; ‘We Have to Work Together’: Action Beyond Banned Books Week

"While Banned Books Week ended on September 28, writer, director, producer—and Banned Books Week honorary chair—Ava DuVernay stresses the importance of continuing the work of amplifying marginalized voices...

“This banned book effort is an agenda by people who want to make some of us less free, to silence the voices of some of us,” DuVernay tells American Libraries. “We can overcome this, but we have to work together.”

DuVernay recorded a video conversation with Banned Books Week Youth Honorary Chair Julia Garnett, a student activist who fought book bans in her home state of Tennessee and now attends Smith College in Northampton, Massachusetts. In the video, the two discuss DuVernay’s approach to championing diverse viewpoints as a filmmaker and ways the rest of us can join and stay in the fight.

The video is available here, as well as on the Banned Books Week YouTube channel. Highlights from the video are also excerpted below...

How student activists can cope with feeling lonely in their anticensorship fights—often as the youngest person in the room:

First of all, I commend you. I take my hat off to you. I bow to all activists who are doing that hard work. I think the one thing to remember is, it’s lonely because that’s what leadership is. There’s someone who’s leading, and that is who we’re following. And it’s lonely at the front. It’s about building coalition and making sure that the folks around you are aware, are educated, are interested, and are leaning in.

People have different levels of engagement, and that’s okay. But even one person can have an impact…. And if I feel that kind of loneliness—that, “Gosh, I’m the only one out here doing it, and everyone else is doing this”—if you feel it and you still want to do it, you’re on the right track. And there’s nothing better than feeling like you’re on the right track. So, not easy. But glorious, you know?"

Who uses libraries? Even in the stacks, there’s a political divide.; The Washington Post, October 4, 2024

The Washington Post; Who uses libraries? Even in the stacks, there’s a political divide.

"When we took a look at the nation’s declining reading habits, our struggling bookstores and the prodigious number of books consumed by America’s top 1 percent of readers, scores of you wrote in with a singular question: What about the libraries?!

You people sure do love libraries! You wanted to know everything. Who are the biggest library users? How many of our books do we get from libraries? What else do we use libraries for?

We scoured all the government sources we could think of before turning to the cabal of polling prodigies over at YouGov to see what they could gin up.

As usual, YouGov exceeded our expectations, asking at least 50 library-related questions of 2,429 U.S. adults in April. They touched on just about everything: librarian approval ratings, restrictions on drag queen story times, number of books read. They also asked about the library services we actually use, up to and including how many of us avail ourselves of the library restrooms."

Saturday, October 5, 2024

Library cancels Harry Potter programming over copyright issue; Buckrail, October 4, 2024

Marianne Zumberge, Buckrail; Library cancels Harry Potter programming over copyright issue

"It’s a sad day for little witches and wizards in Jackson Hole. The Teton County Library’s (TCL) slate of Harry Potter programming has been canceled due to copyright infringement. 

TCL announced the news on Wednesday, Oct. 2. TCL said it had received a cease-and-desist letter from Warner Bros. Entertainment Inc., which owns and controls all things Potter.

“Prior to receiving the letter, Library staff was unaware that this free educational event was a copyright infringement,” TCL’s announcement reads. “In the past, libraries had been encouraged to hold Harry Potter-themed events to promote the books as they were released.”

Three events had been planned for October: A Night at Hogwarts, Harry Potter Trivia for Adults and Harry Potter Family Day."

Friday, October 4, 2024

Beyond the hype: Key components of an effective AI policy; CIO, October 2, 2024

  Leo Rajapakse, CIO; Beyond the hype: Key components of an effective AI policy

"An AI policy is a living document 

Crafting an AI policy for your company is increasingly important due to the rapid growth and impact of AI technologies. By prioritizing ethical considerations, data governance, transparency and compliance, companies can harness the transformative potential of AI while mitigating risks and building trust with stakeholders. Remember, an effective AI policy is a living document that evolves with technological advancements and societal expectations. By investing in responsible AI practices today, businesses can pave the way for a sustainable and ethical future tomorrow."

Ethical uses of generative AI in the practice of law; Reuters, October 3, 2024

  Thomson Reuters; Ethical uses of generative AI in the practice of law

"In the rapidly evolving landscape of legal technology, the integration of generative AI tools presents both unprecedented opportunities and significant ethical challenges. Ryan Groff, a distinguished member of the Massachusetts Bar and a lecturer at New England Law, explores these dimensions in his enlightening webinar, “Ethical Uses of Generative AI in the Practice of Law.” 

In the webinar, Ryan Groff discusses the ethical implications of using generative AI (GenAI) in legal practices, tracing the history of GenAI applications in law and distinguishing between various AI tools available today.  He provides an insightful overview of the historical application of GenAI in legal contexts and differentiates the various AI tools currently available. Groff emphasizes that while AI can enhance the efficiency of legal practices, it should not undermine the critical judgment of lawyers. He underscores the importance of maintaining rigorous supervision, safeguarding client confidentiality, and ensuring technological proficiency."

Thursday, October 3, 2024

Gilead Agrees to Allow Generic Version of Groundbreaking H.I.V. Shot in Poor Countries; The New York Times, October 2, 2024

The New York Times; Gilead Agrees to Allow Generic Version of Groundbreaking H.I.V. Shot in Poor Countries

"The drugmaker Gilead Sciences on Wednesday announced a plan to allow six generic pharmaceutical companies in Asia and North Africa to make and sell at a lower price its groundbreaking drug lenacapavir, a twice-yearly injection that provides near-total protection from infection with H.I.V.

Those companies will be permitted to sell the drug in 120 countries, including all the countries with the highest rates of H.I.V., which are in sub-Saharan Africa. Gilead will not charge the generic drugmakers for the licenses.

Gilead says the deal, made just weeks after clinical trial results showed how well the drug works, will provide rapid and broad access to a medication that has the potential to end the decades-long H.I.V. pandemic.

But the deal leaves out most middle- and high-income countries — including Brazil, Colombia, Mexico, China and Russia — that together account for about 20 percent of new H.I.V. infections. Gilead will sell its version of the drug in those countries at higher prices. The omission reflects a widening gulf in health care access that is increasingly isolating the people in the middle."

Tuesday, October 1, 2024

Fake Cases, Real Consequences [No digital link as of 10/1/24]; ABA Journal, Oct./Nov. 2024 Issue

 John Roemer, ABA Journal; Fake Cases, Real Consequences [No digital link as of 10/1/24]

"Legal commentator Eugene Volokh, a professor at UCLA School of Law who tracks AI in litigation, in February reported on the 14th court case he's found in which AI-hallucinated false citations appeared. It was a Missouri Court of Appeals opinion that assessed the offending appellant $10,000 in damages for a frivolous filing.

Hallucinations aren't the only snag, Volokh says. "It's also with the output mischaracterizing the precedents or omitting key context. So one still has to check that output to make sure it's sound, rather than just including it in one's papers."

Echoing Volokh and other experts, ChatGPT itself seems clear-eyed about its limits. When asked about hallucinations in legal research, it replied in part: "Hallucinations in chatbot answers could potentially pose a problem for lawyers if they relied solely on the information provided by the chatbot without verifying its accuracy."