Tuesday, January 28, 2025

It's Copyright Week 2025: Join Us in the Fight for Better Copyright Law and Policy; Electronic Frontier Foundation (EFF), January 27, 2025

Katharine Trendacosta, Electronic Frontier Foundation (EFF); It's Copyright Week 2025: Join Us in the Fight for Better Copyright Law and Policy

"We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation 

We continue to fight for a version of copyright that does what it is supposed to. And so, every year, EFF and a number of diverse organizations participate in Copyright Week. Each year, we pick five copyright issues to highlight and advocate a set of principles of copyright law. This year’s issues are: 

  • Monday: Copyright Policy Should Be Made in the Open With Input From Everyone: Copyright is not a niche concern. It affects everyone’s experience online, therefore laws and policy should be made in the open and with users’ concerns represented and taken into account. 
  • Tuesday: Copyright Enforcement as a Tool of Censorship: Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.  
  • Wednesday: Device and Digital Ownership: As the things we buy increasingly exist either in digital form or as devices with software, we also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it – meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.  
  • Thursday: The Preservation and Sharing of Information and Culture: Copyright often blocks the preservation and sharing of information and culture, traditionally in the public interest. Copyright law and policy should encourage and not discourage the saving and sharing of information. 
  • Friday: Free Expression and Fair Use: Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to comment, criticize, and rework our common culture.  

Every day this week, we’ll be sharing links to blog posts on these topics at https://www.eff.org/copyrightweek." 

Elton John backs Paul McCartney in criticising proposed overhaul to UK copyright system; The Guardian, January 27, 2025

The Guardian; Elton John backs Paul McCartney in criticising proposed overhaul to UK copyright system

"Elton John has backed Paul McCartney in criticising a proposed overhaul of the UK copyright system, and has called for new rules to prevent tech companies from riding “roughshod over the traditional copyright laws that protect artists’ livelihoods”.

John has backed proposed amendments to the data (use and access) bill that would extend existing copyright protections, when it goes before a vote in the House of Lords on Tuesday.

The government is also consulting on an overhaul of copyright laws that would result in artists having to opt out of letting AI companies train their models using their work, rather than an opt-in model...

John told the Sunday Times that he felt “wheels are in motion to allow AI companies to ride roughshod over the traditional copyright laws that protect artists’ livelihoods. This will allow global big tech companies to gain free and easy access to artists’ work in order to train their artificial intelligence and create competing music. This will dilute and threaten young artists’ earnings even further. The musician community rejects it wholeheartedly.”

He said that “challenging financial situations” and increased touring costs made it “harder than ever for new and emerging musicians to make the finances of the industry stack up to sustain a fledgling career”, and added that the UK’s place on the world stage as “a leader in arts and popular culture is under serious jeopardy” without robust copyright protection.

“It is the absolute bedrock of artistic prosperity, and the country’s future success in the creative industries depends on it.”

The government consultation runs until 25 February and will explore how to improve trust between the creative and AI sectors, and how creators can license and get paid for use of their material."

Monday, January 27, 2025

Beyond ChatGPT: WVU researchers to study use and ethics of artificial intelligence across disciplines; WVU Today, January 22, 2025

 WVU Today; Beyond ChatGPT: WVU researchers to study use and ethics of artificial intelligence across disciplines

"Two West Virginia University researchers have designed a curriculum to engage liberal arts faculty in discussions on the social, ethical and technical aspects of artificial intelligence and its role in classrooms.

Through a grant from the National Endowment for the Humanities, Erin Brock Carlson, assistant professor of English, and Scott Davidson, professor of philosophy, both at the WVU Eberly College of Arts and Sciences, have designed an interdisciplinary, cross-institutional program to facilitate conversations among faculty about the benefits and drawbacks of AI, how it functions and the need for human interpretation.

The award will fund a summer workshop in which Carlson and Davidson will offer AI trainings for humanities faculty and guide them through creation and development of courses with an AI component. The researchers will then assist as faculty offer those courses to students, assess progress and help with the implementation of the projects that develop.

The researchers said they hope to challenge the notion that artificial intelligence research falls into the domain of STEM fields. 

“The humanities gets overlooked and underappreciated so often,” Carlson said. “We are doing important, meaningful research, just like our colleagues in STEM and other fields. This is a chance to use a humanities lens to examine contemporary problems and developments like artificial intelligence and also to get conversations going between fields that oftentimes don’t talk to one another as much as we should.”

Co-directors Carlson and Davidson will be joined by a team of mentors and fellows — two from data science fields and two from the humanities perspective — that will serve and assist as resources in the interdisciplinary conversations. The seminar and summer workshops will support the creation or redesign of 10 courses. They plan to invite off-campus experts to help facilitate the workshops, work with the faculty and support their projects.

“It’s really about expanding capacity at the University and in the humanities to investigate the implications of AI or to actually use AI in humanities courses, whether it’s for writing, creating art or creating projects through the use of AI,” Davidson said. “There are a lot of different possibilities and directions that we hope these courses take. If we have 10 of them, it’s really going to have a big impact on humanities education here at the University.”

Carlson and Davidson acknowledge that attitudes about AI tend to be either extremely optimistic or extremely skeptical but that the reality is somewhere in the middle.

“AI is such a simplistic term to describe a whole suite of different technologies and developments that folks are dealing with every day, whether they know it or not,” Carlson said, noting that discussions could focus on personal, social and economic impacts of AI use, as well as how it affects character and intellectual values. 

Davidson was inspired to focus on AI when he found an erroneous, AI-generated summary of one of his own articles.

“It was totally wrong,” he said. “I didn’t say those things, and it made me think about how somebody might look me up and find that summary of my article and get this false impression of me. That really highlighted that we need to build an understanding in students of the need to inquire deeper and to understand that you have to be able to evaluate AI’s accuracy and its reliability.”

Carlson and Davidson said the conversations need to consider AI’s drawbacks, as well. Using AI consumes large amounts of water and electricity, resulting in greenhouse gas emissions. Data centers produce electronic waste that can contain mercury and lead. 

They also intend to follow legal cases and precedents surrounding the use of AI.

“That’s another aspect of AI and the ways that it represents people,” Carlson said. “Because it has a very real, material impact on people in communities. It’s not just a super computer in a room. It’s a network that has a bunch of different implications for a bunch of different people, ranging from jobs to familial relationships. That’s the value of the humanities — to ask these tough questions because it’s increasingly difficult to avoid all of it.”

Conversations, as they expand, will need to keep up with the pace of AI’s rapidly developing landscape.  

“There’s going to be a lot of people involved in this,” she said. “We put together an amazing team. We want it to be an open, honest and ethical conversation that brings in other folks and opens up further conversations across the College and the University at large.”"

Saturday, January 25, 2025

Copyright Under Siege: How Big Tech Uses AI And China To Exploit Creators; Forbes, January 25, 2025

Virginie Berger, Forbes; Copyright Under Siege: How Big Tech Uses AI And China To Exploit Creators

"Generative AI is reshaping creativity in ways that highlight a troubling paradox: while touted as a force for innovation, it increasingly relies on exploiting copyrighted materials, songs, books, and artworks, without consent or compensation. This transformation underscores the growing conflict between technological progress and the preservation of artistic integrity. At the heart of the issue lies a troubling paradox: while companies like OpenAI and Google promote AI as a force for innovation, their reliance on scraping copyrighted materials, songs, books, and artworks, undermines the very creativity they claim to enhance. This exploitation is often disguised as progress or justified as necessary for global competitiveness, particularly in the AI race against China. However, these claims mask a deeper reality: the consolidation of power by Big Tech at the expense of creators. As the balance of influence shifts, those who drive culture and innovation are increasingly marginalized, raising urgent questions about the future of intellectual property and creative industries."

Paul McCartney: Don't let AI rip off artists; BBC, January 25, 2025

Laura Kuenssberg, BBC; Paul McCartney: Don't let AI rip off artists

"Sir Paul McCartney has told the BBC proposed changes to copyright law could allow "rip off" technology that might make it impossible for musicians and artists to make a living.

The government is considering an overhaul of the law that would allow AI developers to use creators' content on the internet to help develop their models, unless the rights holders opt out.

In a rare interview for Sunday with Laura Kuenssberg, Sir Paul said "when we were kids in Liverpool, we found a job that we loved, but it also paid the bills", warning the proposals could remove the incentive for writers and artists and result in a "loss of creativity". 

The government said it aimed to deliver legal certainty through a copyright regime that provided creators with "real control" and transparency."

Thursday, January 23, 2025

Rock & Roll Hall of Fame Aims to Axe Copyright Lawsuit Over Van Halen Guitar Photo; Billboard, January 22, 2025

Bill Donahue, Billboard; Rock & Roll Hall of Fame Aims to Axe Copyright Lawsuit Over Van Halen Guitar Photo

"The Rock Hall is just the latest company to face such a lawsuit from Zlozower, who also snapped images of Led Zeppelin, The Rolling Stones, Michael Jackson and Bruce Springsteen over a decades-long career. Since 2016, court records show he’s filed nearly 60 copyright cases against a range of defendants over images of Elvis Costello, Guns N’ Roses, Mötley Crüe and more...

In their motion to dismiss the case, the Rock Hall’s attorneys say the museum made a “transformative use” of Zlozower’s original image — a key question when courts decide fair use. They say the Hall used it not simply as an image of the band, but “to contextualize Eddie Van Halen’s instruments on display in the museum as historical artifacts.”

“RRHOF incorporated a portion of plaintiff’s photograph displayed next to the exhibition object, as one piece of source material to document and represent the use of the guitar,” the museum’s lawyers write. “This proximal association between source material and exhibition object helps visitors connect information and delve more deeply into the exhibition objects.”

In making that argument, the Hall’s attorneys had a handy piece of legal precedent to cite: A 2021 ruling by a federal appeals court tossed out a copyright lawsuit against New York City’s Metropolitan Museum of Art over the use of another image of Van Halen in a different exhibit on the same famous set of guitars."

Monday, January 20, 2025

Is the law playing catch-up with AI?; Harvard Law Today, January 16, 2025

Harvard Law Today; Is the law playing catch-up with AI?

"Harvard Law Today: Why was the Artificial Intelligence and Intellectual Property Law Conference in November convened? Why is it important to be talking about AI and IP right now?

William Lee: In the past, this event has been much more focused on the specifics of the law and comparisons of the different approaches across jurisdictions. This year, the conference addressed AI more generally with moderators and panelists from a wider variety of fields including homeland security, life sciences, technological development, non-profit advocacy, and even ethics. I think it was an introduction into AI for many of the people in the room and who better to provide that introduction than [Harvard Law School Professor] Jonathan Zittrain ’95. Matt Ferraro, senior counselor for cybersecurity and emerging technology to the secretary of Homeland Security and executive director of the Artificial Intelligence Safety and Security Board, led a panel primarily of industry leaders, explaining the capabilities and trajectory of AI technology. Then, Iain Cunningham from NVIDIA chaired an excellent panel mostly composed of academics and people from industry discussing how IP law and AI interact. We also had input from experts on the AI and IP relationship in jurisdictions across the globe, including Europe, the UK, and Africa, on a panel moderated by Terry Fisher that was particularly illuminating. Then, we closed with a judges panel where a group of five Federal Circuit and two District Court judges offered views on AI issues as well as IP more broadly.

Louis Tompros: IP law has historically, and inherently, operated at the intersection of law and fast-moving technology. Artificial Intelligence is currently where technology is moving the fastest and where the law has the most ground to cover in order to keep pace. This conference was designed to educate folks about AI technology and the various IP law approaches taken in the United States and around the world, and to help forecast how protections for creative and useful ideas will function in the context of these innovative systems. We try to make the IP conference as broadly appealing and relevant to the groups of constituents that are interested in participating, that is, people within the legal community, the business community, and the academic community, including Harvard Law School students. This year was the first time ever that the conference was fully subscribed via pre-registration which is, I think, a great testament to the level and breadth of interest. You can tell that we got it right precisely because of the incredible interest in this year’s event.

HLT: Throughout history, innovations have compelled IP law to adjust and evolve to account for new technology, like the radio, the television, and the internet. Is AI different?

Tompros: The law can’t possibly stay ahead. It will always lag a bit behind. Honestly, that’s part of the fun of IP law because the law is perpetually having to evolve by necessity to keep pace with rapidly evolving aspects of technology. I don’t think AI is different in kind from previous technological revolutions that affected the law, but I do think it is quite possibly different in scale. The pace of the development of the technology here is so accelerated that the speed at which technological advances are coming makes it even harder for the already trailing legal system to catch up. That leads to some interesting possibilities, but it also leads to some serious challenges. Ultimately, it demands creative and innovative thinking in the design of legal structures established to try to manage it."

Sunday, January 19, 2025

Congress Must Change Copyright Law for AI | Opinion; Newsweek, January 16, 2025

Assistant Professor of Business Law, Georgia College and State University, Newsweek; Congress Must Change Copyright Law for AI | Opinion

"Luckily, the Constitution points the way forward. In Article I, Section 8, Congress is explicitly empowered "to promote the Progress of Science" through copyright law. That is to say, the power to create copyrights isn't just about protecting content creators, it's also about advancing human knowledge and innovation.

When the Founders gave Congress this power, they couldn't have imagined artificial intelligence, but they clearly understood that intellectual property laws would need to evolve to promote scientific progress. Congress therefore not only has the authority to adapt copyright law for the AI age, it has the duty to ensure our intellectual property framework promotes rather than hinders technological progress.

Consider what's at risk with inaction...

While American companies are struggling with copyright constraints, China is racing ahead with AI development, unencumbered by such concerns. The Chinese Communist Party has made it clear that they view AI supremacy as a key strategic goal, and they're not going to let intellectual property rights stand in their way.

The choice before us is clear, we can either reform our copyright laws to enable responsible AI development at home or we can watch as the future of AI is shaped by authoritarian powers abroad. The cost of inaction isn't just measured in lost innovation or economic opportunity, it is measured in our diminishing ability to ensure AI develops in alignment with democratic values and a respect for human rights.

The ideal solution here isn't to abandon copyright protection entirely, but to craft a careful exemption for AI training. This could even include provisions for compensating content creators through a mandated licensing framework or revenue-sharing system, ensuring that AI companies can access the data they need while creators can still benefit from and be credited for their work's use in training these models.

Critics will argue that this represents a taking from creators for the benefit of tech companies, but this misses the broader picture. The benefits of AI development flow not just to tech companies but to society as a whole. We should recognize that allowing AI models to learn from human knowledge serves a crucial public good, one we're at risk of losing if Congress doesn't act."

Saturday, January 18, 2025

News organizations sue OpenAI over copyright infringement claims; Jurist.org, January 16, 2025

Jurist.org; News organizations sue OpenAI over copyright infringement claims

"The case centers on allegations that OpenAI unlawfully utilized copyrighted content from various publishers, including The New York Times, to train its generative AI models and the hearing could determine whether OpenAI will face trial.

The plaintiffs claim that ChatGPT’s ability to generate human-like responses stems from the unauthorized use of their work without permission or compensation to develop their large language models (LLMs). OpenAI and its financial backer Microsoft argue that its use of data is protected under the fair use doctrine, which allows limited use of copyrighted material without permission for purposes such as commentary, criticism or education.

Additionally, OpenAI’s legal team asserts that The New York Times has not demonstrated actual harm resulting from their practices and that its use of the copyrighted material is transformative as it does not replicate the content verbatim. On the other hand, the plaintiffs are arguing copyright infringement because OpenAI removed identifiable information such as author bylines and publication details when using the content. They also contend that the LLMs absorb and reproduce expressions from the training data without genuine understanding."

Friday, January 17, 2025

Apple sidelines AI news summaries due to errors; Japan Today, January 17, 2025

 Japan Today; Apple sidelines AI news summaries due to errors

"Apple pushed out a software update on Thursday which disabled news headlines and summaries generated using artificial intelligence that were lambasted for getting facts wrong.

The move by the tech titan comes as it enhances its latest lineup of devices with "Apple Intelligence" in a market keen for assurance that the iPhone maker is a contender in the AI race.

Apple's decision to temporarily disable the recently launched AI feature comes after the BBC and other news organizations complained that users were getting mistake-riddled or outright wrong headlines or news summary alerts."

Thursday, January 16, 2025

In AI copyright case, Zuckerberg turns to YouTube for his defense; TechCrunch, January 15, 2025

TechCrunch; In AI copyright case, Zuckerberg turns to YouTube for his defense

"Meta CEO Mark Zuckerberg appears to have used YouTube’s battle to remove pirated content to defend his own company’s use of a data set containing copyrighted e-books, reveals newly released snippets of a deposition he gave late last year.

The deposition, which was part of a complaint submitted to the court by plaintiffs’ attorneys, is related to the AI copyright case Kadrey v. Meta. It’s one of many such cases winding through the U.S. court system that’s pitting AI companies against authors and other IP holders. For the most part, the defendants in these cases – AI companies – claim that training on copyrighted content is “fair use.” Many copyright holders disagree."

Biden bids farewell with dark warning for America: the oligarchs are coming; The Guardian, January 15, 2025

The Guardian; Biden bids farewell with dark warning for America: the oligarchs are coming

"The primetime speech did not mention Donald Trump by name. Instead it will be remembered for its dark, ominous warning about something wider and deeper of which Trump is a symptom.

“Today, an oligarchy is taking shape in America of extreme wealth, power, and influence that literally threatens our entire democracy, our basic rights and freedom and a fair shot for everyone to get ahead,” Biden said.

The word “oligarchy” comes from the Greek words meaning rule (arche) by the few (oligos). Some have argued that the dominant political divide in America is no longer between left and right, but between democracy and oligarchy, as power becomes concentrated in the hands of a few. The wealthiest 1% of Americans now has more wealth than the bottom 90% combined.

The trend did not start with Trump but he is set to accelerate it. The self-styled working-class hero has picked the richest cabinet in history, including 13 billionaires, surrounding himself with the very elite he claims to oppose. Elon Musk, the world’s richest man, has become a key adviser. Tech titans Musk, Jeff Bezos and Mark Zuckerberg – collectively worth a trillion dollars – will be sitting at his inauguration on Monday.

Invoking former president Dwight Eisenhower’s farewell address in January 1961 that warned against the rise of a military-industrial complex, Biden said: “Six decades later, I’m equally concerned about the potential rise of a tech industrial complex. It could pose real dangers for our country as well. Americans are being buried under an avalanche of misinformation and disinformation, enabling the abuse of power.”

In an acknowledgement of news deserts and layoffs at venerable institutions such as the Washington Post, Biden added starkly: “The free press is crumbling. Editors are disappearing. Social media is giving up on fact checking. Truth is smothered by lies, told for power and for profit. We must hold the social platforms accountable, to protect our children, our families and our very democracy from the abuse of power.”

Zuckerberg’s recent decision to abandon factcheckers on Facebook, and Musk’s weaponisation of X in favour of far-right movements including Maga, was surely uppermost in Biden’s mind. Trust in the old media is breaking down as people turn to a fragmented new ecosystem. It has all happened with disorienting speed."

Wednesday, January 15, 2025

USPTO releases AI strategic plan; FedScoop, January 15, 2025

FedScoop; USPTO releases AI strategic plan

"The U.S. Patent and Trademark Office is planning to examine the intersection of artificial intelligence innovation and advancing intellectual property policies as part of a new AI strategy the agency released Tuesday. 

In the document, the USPTO said it intends to study AI-related implications for IP protections and potential uses for the technology to safeguard trademark rights. The agency plans to advocate for the development of “sound judicial precedents and legislation that promote both AI innovation and respect for IP rights, while not unnecessarily constraining future AI innovation.”"

'The New York Times' takes OpenAI to court. ChatGPT's future could be on the line; NPR, January 14, 2025

NPR; 'The New York Times' takes OpenAI to court. ChatGPT's future could be on the line

"A group of news organizations, led by The New York Times, took ChatGPT maker OpenAI to federal court on Tuesday in a hearing that could determine whether the tech company has to face the publishers in a high-profile copyright infringement trial.

Three publishers' lawsuits against OpenAI and its financial backer Microsoft have been merged into one case. Leading each of the three combined cases are the Times, The New York Daily News and the Center for Investigative Reporting.

Other publishers, like the Associated Press, News Corp. and Vox Media, have reached content-sharing deals with OpenAI, but the three litigants in this case are taking the opposite path: going on the offensive."

Meta Lawyer Lemley Quits AI Case Citing Zuckerberg 'Descent'; Bloomberg Law, January 14, 2025

Bloomberg Law; Meta Lawyer Lemley Quits AI Case Citing Zuckerberg 'Descent'

"California attorney Mark Lemley dropped Meta Platforms Inc. as a client in a high-profile copyright case because of CEO Mark Zuckerberg’s “descent into toxic masculinity and Neo-Nazi madness,” the Stanford University professor said on LinkedIn."

Tuesday, January 14, 2025

USPTO announces new Artificial Intelligence Strategy to empower responsible implementation of innovation; United States Patent and Trademark Office (USPTO), January 14, 2025

United States Patent and Trademark Office (USPTO); USPTO announces new Artificial Intelligence Strategy to empower responsible implementation of innovation

"AI Strategy outlines how the USPTO will address AI's impact across IP policy, agency operations, and the broader innovation ecosystem  

WASHINGTON—The U.S. Patent and Trademark Office (USPTO) announced a new Artificial Intelligence (AI) Strategy to guide the agency’s efforts toward fulfilling the potential of AI within USPTO operations and across the intellectual property (IP) ecosystem. The Strategy offers a vision for how the USPTO can foster responsible and inclusive AI innovation, harness AI to support the agency’s mission, and advance a positive future for AI to ensure that the country maintains its leadership in innovation. 

“We have a responsibility to promote, empower, and protect innovation,” said Derrick Brent, Acting Under Secretary of Commerce for Intellectual Property and Acting Director of the USPTO. “Developing a strategy to unleash the power of AI while mitigating risks provides a framework to advance innovation and intellectual property.”  

The strategy aims to achieve the USPTO’s AI vision and mission through five focus areas which include: 

  1. Advance the development of IP policies that promote inclusive AI innovation and creativity. 
  2. Build best-in-class AI capabilities by investing in computational infrastructure, data resources, and business-driven product development. 
  3. Promote the responsible use of AI within the USPTO and across the broader innovation ecosystem.
  4. Develop AI expertise within the USPTO’s workforce.
  5. Collaborate with other U.S. government agencies, international partners, and the public on shared AI priorities.

The USPTO and our sister agencies within the Department of Commerce, as well as the U.S. Copyright Office, are providing critical guidance and recommendations to advance AI-driven innovation and creativity. In 2022, the USPTO created the AI and Emerging Technology (ET) Partnership, which has worked closely with the AI/ET community to gather public feedback through a series of sessions on topics related to AI and innovation, biotech, and intellectual property (IP) policy. Since its 2022 launch, more than 6,000 stakeholders have engaged with us on these critical issues. In addition, the USPTO collaborates across government to advance American leadership in AI by promoting innovation and competition as set forth in the Biden-Harris Administration’s landmark October 2023 AI Executive Order. 

The full text of the AI Strategy can be found on the AI Strategy webpage. Additional information on AI, including USPTO guidance and more on USPTO’s AI/ET Partnership, can be found on our AI webpage."

Thursday, January 9, 2025

Elon Musk says all human data for AI training ‘exhausted’; The Guardian, January 9, 2025

The Guardian; Elon Musk says all human data for AI training ‘exhausted’

"However, Musk also warned that AI models’ habit of generating “hallucinations” – a term for inaccurate or nonsensical output – was a danger for the synthetic data process.

He said in the livestreamed interview with Mark Penn, the chair of the advertising group Stagwell, that hallucinations had made the process of using artificial material “challenging” because “how do you know if it … hallucinated the answer or it’s a real answer”.

Andrew Duncan, the director of foundational AI at the UK’s Alan Turing Institute, said Musk’s comment tallied with a recent academic paper estimating that publicly available data for AI models could run out as soon as 2026. He added that over-reliance on synthetic data risked “model collapse”, a term referring to the outputs of models deteriorating in quality...

High-quality data, and control over it, is one of the legal battlegrounds in the AI boom. OpenAI admitted last year it would be impossible to create tools such as ChatGPT without access to copyrighted material, while the creative industries and publishers are demanding compensation for use of their output in the model training process."

Wednesday, January 8, 2025

Nosferatu (1922) Dracula Copyright Infringement Story Explained; Screen Rant, January 8, 2025

"Copyright was a big thing even back in 1922 when Nosferatu came out."

3Blue1Brown copyright takedown blunder by AI biz blamed on human error; The Register, January 7, 2025

Thomas Claburn, The Register; 3Blue1Brown copyright takedown blunder by AI biz blamed on human error

"The errant copyright takedown of a popular video explaining Bitcoin, from Grant Sanderson's smash-hit YouTube channel 3Blue1Brown, has been reversed, with human error blamed.

The brand protection company responsible for the unwarranted takedown, ChainPatrol.io, also acknowledged that the legal demand was a mistake, and has been in communication with Sanderson about its efforts to return the video to YouTube. Sanderson – who has 6.8 million subscribers on YouTube, and whose in-depth videos mainly on mathematics and science have been viewed more than 600 million times – raised the alarm in a social media post on Monday.

He said he had just learned that YouTube had removed a 2017-era video he had made and applied a copyright strike – three of which will get an account terminated – despite the fact that he had only used his own content in his video."

The Internet Archive is in danger; WBUR, January 7, 2025

 

WBUR; The Internet Archive is in danger


"More than 900 billion webpages are preserved on The Wayback Machine, a history of humanity online. Now, copyright lawsuits could wipe it out.

Guests

Brewster Kahle, founder and director of the Internet Archive. Digital librarian and computer engineer.

James Grimmelmann, professor of digital and information law at Cornell Tech and Cornell Law School. Studies how laws regulating software affect freedom, wealth, and power."

How Sherlock Holmes Broke Copyright Law; The Atlantic, January 7, 2025

Alec Nevala-Lee, The Atlantic; How Sherlock Holmes Broke Copyright Law

"The estate based its argument on a distinction between “flat” and “round” fictional characters first proposed by E. M. Forster in his 1927 book, Aspects of the Novel, a concept frequently invoked in high-school literature classes but never previously tested in court.

In its legal filings, the estate drew a contrast between “flat” characters without depth—such as Superman and Amos and Andy—and “round” characters such as Holmes, who were capable of complexity and change. Doyle, it said, continued to develop Holmes to the very end, gradually transforming him from a reasoning machine into an empathetic figure who displays affection for women, dogs, and even his long-suffering partner. And it soon became clear that this argument would have enormous implications for copyright holders, who would be motivated to retain control over their characters by changing them incrementally for as long as possible."

Monday, January 6, 2025

CSotD: Telnaes is only unemployed, not gone; The Daily Cartoonist, January 4, 2025

The Daily Cartoonist; CSotD: Telnaes is only unemployed, not gone

"We try to avoid duplication and stepping on each other’s toes around here, and by now you’ve likely seen DD Degg’s coverage of Ann Telnaes’ resignation from the Washington Post. And if you haven’t seen his coverage here, you’ve almost certainly seen some coverage because it is all over the Internet, with regret and praise coming from around the globe. As of seven this morning, her Substack announcement had 5,307 likes and had been shared 910 times...

Seeing these pieces on the importance of political cartooning and press freedom, it’s easy to recognize how inconsistent it would have been for her to accept the squelching of her voice by the Post’s current management.

Telnaes will no longer be on the pages of the Washington Post, but perhaps going out into the wider world will make her voice heard by a more diverse audience, particularly if the Post continues to cater to the new administration while hemorrhaging both talent and readership.

She’ll need support on her Substack, by which I mean subscriptions, not just applause, and if you haven’t been supporting small and local media outlets, this is an excellent place to start. 

The cartoon her editor refused to run, which was the final straw that induced her to walk away from a prestigious and well-paying job, offers the very reasonable suggestion that the billionaires who control major media are selling out to the administration, not just with obedience but in several cases with substantial financial contributions.


And here’s something else they’d just as soon not hear anyone say: It seems that major media may be working to gain influence with the wrong people, that they’re making friends with oligarchs but losing touch with their actual customers...


Samizdat is a term that defined underground writings — mimeographed or photocopied — that circulated in the Soviet Union as it began to totter and crash. In our country, in these times, we’re seeing the growth of Substacks and other small-scale publishing by people who, like Ann Telnaes, want to say what they think needs to be said, without being filtered and both-sidesed and required to be “fair and balanced” by management that is more interested in marketing than in journalism.


Supporting small publishers and individual writers matters. The big boys will get along with or without you, but the voices we need to hear need backing."

At the Intersection of A.I. and Spirituality; The New York Times, January 3, 2025

The New York Times; At the Intersection of A.I. and Spirituality

"For centuries, new technologies have changed the ways people worship, from the radio in the 1920s to television sets in the 1950s and the internet in the 1990s. Some proponents of A.I. in religious spaces have gone back even further, comparing A.I.’s potential — and fears of it — to the invention of the printing press in the 15th century.

Religious leaders have used A.I. to translate their livestreamed sermons into different languages in real time, blasting them out to international audiences. Others have compared chatbots trained on tens of thousands of pages of Scripture to a fleet of newly trained seminary students, able to pull excerpts about certain topics nearly instantaneously.

But the ethical questions around using generative A.I. for religious tasks have become more complicated as the technology has improved, religious leaders say. While most agree that using A.I. for tasks like research or marketing is acceptable, other uses for the technology, like sermon writing, are seen by some as a step too far."

We're using AI for stupid and unnecessary reasons. What if we just stopped? | Opinion; Detroit Free Press, January 6, 2025

Nancy Kaffer, Detroit Free Press; We're using AI for stupid and unnecessary reasons. What if we just stopped? | Opinion

"We're jumping feet first into unreliable, unproven tech with devastating environmental costs and a dense thicket of ethical problems.

It's a bad idea. And — because I enjoy shouting into the void — we really ought to stop."