Issues and developments related to IP, AI, and OM, examined in the IP and tech ethics graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology", coming in Summer 2025, includes major chapters on IP, AI, OM, and other emerging technologies (IoT, drones, robots, autonomous vehicles, VR/AR). Kip Currier, PhD, JD
Thursday, January 16, 2025
In AI copyright case, Zuckerberg turns to YouTube for his defense; TechCrunch, January 15, 2025
Wednesday, January 15, 2025
'The New York Times' takes OpenAI to court. ChatGPT's future could be on the line; NPR, January 14, 2025
Bobby Allyn, NPR; 'The New York Times' takes OpenAI to court. ChatGPT's future could be on the line
"A group of news organizations, led by The New York Times, took ChatGPT maker OpenAI to federal court on Tuesday in a hearing that could determine whether the tech company has to face the publishers in a high-profile copyright infringement trial.
Three publishers' lawsuits against OpenAI and its financial backer Microsoft have been merged into one case. Leading each of the three combined cases are the Times, The New York Daily News and the Center for Investigative Reporting.
Other publishers, like the Associated Press, News Corp. and Vox Media, have reached content-sharing deals with OpenAI, but the three litigants in this case are taking the opposite path: going on the offensive."
Monday, January 6, 2025
OpenAI holds off on promise to creators, fails to protect intellectual property; The American Bazaar, January 3, 2025
Vishnu Kamal, The American Bazaar; OpenAI holds off on promise to creators, fails to protect intellectual property
"OpenAI may yet again be in hot water as it seems that the tech giant may be reneging on its earlier assurances. Reportedly, in May, OpenAI said it was developing a tool to let creators specify how they want their works to be included in—or excluded from—its AI training data. But seven months later, this feature has yet to see the light of day.
Called Media Manager, the tool would “identify copyrighted text, images, audio, and video,” OpenAI said at the time, to reflect creators’ preferences “across multiple sources.” It was intended to stave off some of the company’s fiercest critics, and potentially shield OpenAI from IP-related legal challenges...
OpenAI has faced various legal challenges related to its AI technologies and operations. One major issue involves the privacy and data usage of its language models, which are trained on large datasets that may include publicly available or copyrighted material. This raises concerns over privacy violations and intellectual property rights, especially regarding whether the data used for training was obtained with proper consent.
Additionally, there are questions about the ownership of content generated by OpenAI’s models. If an AI produces a work based on copyrighted data, it is tricky to determine who owns the rights—whether it’s OpenAI, the user who prompted the AI, or the creators of the original data.
Another concern is the liability for harmful content produced by AI. If an AI generates misleading or defamatory information, legal responsibility could fall on OpenAI."
Saturday, December 28, 2024
SDNY Order Renews Possibility of Digital Millennium Copyright Act as Legal Recourse for News Organizations in the Age of AI; The National Law Review, December 23, 2024
Dan Jasnow of ArentFox Schiff LLP - AI Law Blog, The National Law Review; SDNY Order Renews Possibility of Digital Millennium Copyright Act as Legal Recourse for News Organizations in the Age of AI
"Key Takeaway: The Intercept’s Case Against OpenAI Will Clarify the Future of DMCA Protection Against AI Developers
Until now, other DMCA claims against AI developers have largely failed — most of these cases have not proceeded past the motion-to-dismiss stage — but the order allowing The Intercept’s claim to proceed renews the possibility that the DMCA may be a viable claim against AI developers. For rights holders, 1202(b) provides distinct causes of action against AI developers with different evidentiary requirements than traditional copyright infringement claims. For developers, 1202(b) is another legal risk to be managed, particularly in the wake of the order in The Intercept case."
Tuesday, December 10, 2024
OpenAI makes AI video generator Sora publicly available in US; The Guardian, December 9, 2024
Dara Kerr, The Guardian; OpenAI makes AI video generator Sora publicly available in US
"Anyone in the US can now use OpenAI’s artificial intelligence video generator, Sora, which the company announced on Monday would become publicly available. OpenAI first presented Sora in February, but it was only accessible to select artists, film-makers and safety testers. At multiple points on Monday, though, OpenAI’s website did not allow for new sign-ups for Sora, citing heavy traffic...
While generative AI has improved considerably over the past year, it is still prone to hallucinations, or incorrect responses, and plagiarism. AI image generators also often produce unrealistic images, such as people with several arms or misplaced facial features.
Critics warn that this type of AI video technology could be misused by bad actors for disinformation, scams and deepfakes. There have already been deepfake videos of the Ukrainian president, Volodymyr Zelenskyy, supposedly calling for a ceasefire and of Kamala Harris supposedly describing herself as “the ultimate diversity hire”.
OpenAI said in its blogpost that it would initially limit uploads of specific people and that it will block content with nudity. The company said that it was additionally “blocking particularly damaging forms of abuse, such as child sexual abuse materials and sexual deepfakes”.
Sora will be available to users who already subscribe and pay for OpenAI’s tools. People in the US and “most countries internationally” will have access to the tool, but it will not be available in the UK or Europe."
Wednesday, December 4, 2024
OpenAI Must Hand Over Execs' Social Media DMs in Copyright Suits; Bloomberg Law, December 3, 2024
"
Saturday, November 30, 2024
‘AI Jesus’ avatar tests man’s faith in machines and the divine; AP, November 28, 2024
JAMEY KEATEN, AP; ‘AI Jesus’ avatar tests man’s faith in machines and the divine
"Researchers and religious leaders on Wednesday released findings from a two-month experiment through art in a Catholic chapel in Switzerland, where an avatar of “Jesus” on a computer screen — tucked into a confessional — took questions by visitors on faith, morality and modern-day woes, and offered responses based on Scripture...
Philipp Haslbauer, an IT specialist at the Lucerne University of Applied Sciences and Arts who pulled together the technical side of the project, said the AI responsible for taking the role of “AI Jesus” and generating responses was GPT-4o by OpenAI, and an open-source version of the company’s Whisper was used for speech comprehension.
An AI video generator from Heygen was used to produce voice and video from a real person, he said. Haslbauer said no specific safeguards were used “because we observed GPT-4o to respond fairly well to controversial topics.”
Visitors broached many topics, including true love, the afterlife, feelings of solitude, war and suffering in the world, the existence of God, plus issues like sexual abuse cases in the Catholic Church or its position on homosexuality.
Most visitors described themselves as Christians, though agnostics, atheists, Muslims, Buddhists and Taoists took part too, according to a recap of the project released by the Catholic parish of Lucerne.
About one-third were German speakers, but “AI Jesus” — which is conversant in about 100 languages — also had conversations in languages like Chinese, English, French, Hungarian, Italian, Russian and Spanish."
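The AP piece describes a three-stage pipeline: an open-source Whisper model for speech recognition, GPT-4o for generating responses, and a Heygen avatar generator for voice and video. Below is a minimal sketch of how such a pipeline might be wired together; the model choices, the system prompt, and the send_to_avatar_service placeholder are illustrative assumptions, not details taken from the Lucerne project.

```python
# Hypothetical sketch of a confessional-booth pipeline like the one described
# above: open-source Whisper for speech-to-text, GPT-4o for the reply, and a
# placeholder standing in for an avatar video service such as Heygen.
# Model names, the system prompt, and send_to_avatar_service() are assumptions.

import whisper                 # pip install openai-whisper
from openai import OpenAI      # pip install openai

stt_model = whisper.load_model("base")   # open-source speech recognition
client = OpenAI()                        # reads OPENAI_API_KEY from the environment


def answer_visitor(audio_path: str) -> str:
    """Transcribe a visitor's question and generate a Scripture-grounded reply."""
    question = stt_model.transcribe(audio_path)["text"]

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You answer questions about faith and morality, "
                        "grounding responses in Scripture."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


def send_to_avatar_service(text: str) -> None:
    """Placeholder: forward the reply to a talking-head video API (e.g., Heygen)."""
    print(text)  # a real integration would call the vendor's API here


if __name__ == "__main__":
    send_to_avatar_service(answer_visitor("visitor_question.wav"))
```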
Friday, November 29, 2024
Major Canadian News Outlets Sue OpenAI in New Copyright Case; The New York Times, November 29, 2024
Matina Stevis-Gridneff, The New York Times; Major Canadian News Outlets Sue OpenAI in New Copyright Case
"A coalition of Canada’s biggest news organizations is suing OpenAI, the maker of the artificial intelligence chatbot, ChatGPT, accusing the company of illegally using their content in the first case of its kind in the country.
Five of the country’s major news companies, including the publishers of its top newspapers, newswires and the national broadcaster, filed the joint suit in the Ontario Superior Court of Justice on Friday morning...
The Canadian outlets, which include the Globe and Mail, the Toronto Star and the CBC — the Canadian Broadcasting Corporation — are seeking what could add up to billions of dollars in damages. They are asking for 20,000 Canadian dollars, or $14,700, per article they claim was illegally scraped and used to train ChatGPT.
They are also seeking a share of the profits made by what they claim is OpenAI’s misuse of their content, as well as for the company to stop such practices in the future."
Thursday, November 21, 2024
OpenAI accidentally deleted potential evidence in NY Times copyright lawsuit; TechCrunch, November 20, 2024
Kyle Wiggers, TechCrunch; OpenAI accidentally deleted potential evidence in NY Times copyright lawsuit
"OpenAI tried to recover the data — and was mostly successful. However, because the folder structure and file names were “irretrievably” lost, the recovered data “cannot be used to determine where the news plaintiffs’ copied articles were used to build [OpenAI’s] models,” per the letter.
“News plaintiffs have been forced to recreate their work from scratch using significant person-hours and computer processing time,” counsel for The Times and Daily News wrote. “The news plaintiffs learned only yesterday that the recovered data is unusable and that an entire week’s worth of its experts’ and lawyers’ work must be re-done, which is why this supplemental letter is being filed today.”
The plaintiffs’ counsel makes clear that they have no reason to believe the deletion was intentional. But they do say the incident underscores that OpenAI “is in the best position to search its own datasets” for potentially infringing content using its own tools."
Wednesday, November 20, 2024
Indian news agency sues OpenAI alleging copyright infringement; TechCrunch, November 18, 2024
Manish Singh, TechCrunch; Indian news agency sues OpenAI alleging copyright infringement
"One of India’s largest news agencies, Asian News International (ANI), has sued OpenAI in a case that could set a precedent for how AI companies use copyrighted news content in the world’s most populous nation.
Asian News International filed a 287-page lawsuit in the Delhi High Court on Monday, alleging the AI company illegally used its content to train its AI models and generated false information attributed to the news agency. The case marks the first time an Indian media organization has taken legal action against OpenAI over copyright claims.
During Tuesday’s hearing, Justice Amit Bansal issued a summons to OpenAI after the company confirmed it had already ensured that ChatGPT wasn’t accessing ANI’s website. The bench said that it was not inclined to grant an injunction order on Tuesday, as the case required a detailed hearing for being a “complex issue.”
The next hearing is scheduled to be held in January."
Saturday, November 9, 2024
OpenAI Gets a Win as Court Says No Harm Was Demonstrated in Copyright Case; Gizmodo, November 8, 2024
Todd Feathers, Gizmodo; OpenAI Gets a Win as Court Says No Harm Was Demonstrated in Copyright Case
"OpenAI won an initial victory on Thursday in one of the many lawsuits the company is facing for its unlicensed use of copyrighted material to train generative AI products like ChatGPT.
A federal judge in the southern district of New York dismissed a complaint brought by the media outlets Raw Story and AlterNet, which claimed that OpenAI violated copyright law by purposefully removing what is known as copyright management information, such as article titles and author names, from material that it incorporated into its training datasets.
OpenAI had filed a motion to dismiss the case, arguing that the plaintiffs did not have standing to sue because they had not demonstrated a concrete harm to their businesses caused by the removal of the copyright management information. Judge Colleen McMahon agreed, dismissing the lawsuit but leaving the door open for the plaintiffs to file an amended complaint."
Wednesday, October 23, 2024
Former OpenAI Researcher Says the Company Broke Copyright Law; The New York Times, October 23, 2024
Cade Metz, The New York Times; Former OpenAI Researcher Says the Company Broke Copyright Law
"Mr. Balaji believes the threats are more immediate. ChatGPT and other chatbots, he said, are destroying the commercial viability of the individuals, businesses and internet services that created the digital data used to train these A.I. systems.
“This is not a sustainable model for the internet ecosystem as a whole,” he told The Times."
Tuesday, October 15, 2024
AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members; Business Wire, October 15, 2024
Business Wire; AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members
"The AI Ethics Council, founded by OpenAI CEO Sam Altman and Operation HOPE CEO John Hope Bryant, announced today that Reid Hoffman (Co-Founder of LinkedIn and Inflection AI and Partner at Greylock) and Van Jones (CNN commentator, Dream Machine Founder and New York Times best-selling author) have joined as a members. Formed in December 2023, the Council brings together an interdisciplinary body of diverse experts including civil rights activists, HBCU presidents, technology and business leaders, clergy, government officials and ethicists to collaborate and set guidelines on ways to ensure that traditionally underrepresented communities have a voice in the evolution of artificial intelligence and to help frame the human and ethical considerations around the technology. Ultimately, the Council also seeks to help determine how AI can be harnessed to create vast economic opportunities, especially for the underserved.
Mr. Hoffman and Mr. Jones join an esteemed group on the Council, which will serve as a leading authority in identifying, advising on and addressing ethical issues related to AI. In addition to Mr. Altman and Mr. Bryant, founding AI Ethics Council members include:
- George T. French, Jr., JD, PhD, President, Clark Atlanta University and Chair of Presidents, United Negro College Fund
- Helene D. Gayle, MD, MPH, President, Spelman College
- Bernice A. King, JD, M. Div, CEO, The King Center
- David A. Thomas, PhD, President, Morehouse College
- Angela F. Williams, JD, M. Div, President and CEO, United Way Worldwide
- Ambassador Andrew J. Young, Chairman, The Andrew J. Young Foundation, former U.N. Ambassador, former Mayor of Atlanta and civil rights icon"
Friday, October 11, 2024
Why The New York Times' lawyers are inspecting OpenAI's code in a secretive room; Business Insider, October 10, 2024
Jacob Shamsian, Business Insider; Why The New York Times' lawyers are inspecting OpenAI's code in a secretive room
"OpenAI is worth $157 billion largely because of the success of ChatGPT. But to build the chatbot, the company trained its models on vast quantities of text it didn't pay a penny for.
That text includes stories from The New York Times, articles from other publications, and an untold number of copyrighted books.
The examination of the code for ChatGPT, as well as for Microsoft's artificial intelligence models built using OpenAI's technology, is crucial for the copyright infringement lawsuits against the two companies.
Publishers and artists have filed about two dozen major copyright lawsuits against generative AI companies. They are out for blood, demanding a slice of the economic pie that made OpenAI the dominant player in the industry and which pushed Microsoft's valuation beyond $3 trillion. Judges deciding those cases may carve out the legal parameters for how large language models are trained in the US."
Monday, September 30, 2024
OpenAI Faces Early Appeal in First AI Copyright Suit From Coders; Bloomberg Law, September 30, 2024
Isaiah Poritz, Bloomberg Law; OpenAI Faces Early Appeal in First AI Copyright Suit From Coders
"
Judge Jon S. Tigar granted the programmers’ request for a mid-case turn to the US Court of Appeals for the Ninth Circuit, which must determine whether OpenAI’s copying of open-source code to train its AI model without proper attribution to the programmers could be a violation of the Digital Millennium Copyright Act...
The programmers argued that Copilot fails to include authorship and licensing terms when it outputs code. Unlike other lawsuits against AI companies, the programmers didn’t allege that OpenAI and GitHub engaged in copyright infringement, which is different from a DMCA violation."
Sunday, September 29, 2024
AI could be an existential threat to publishers – that’s why Mumsnet is fighting back; The Guardian, September 28, 2024
Justine Roberts, The Guardian; AI could be an existential threat to publishers – that’s why Mumsnet is fighting back
"After nearly 25 years as a founder of Mumsnet, I considered myself pretty unshockable when it came to the workings of big tech. But my jaw hit the floor last week when I read that Google was pushing to overhaul UK copyright law in a way that would allow it to freely mine other publishers’ content for commercial gain without compensation.
At Mumsnet, we’ve been on the sharp end of this practice, and have recently launched the first British legal action against the tech giant OpenAI. Earlier in the year, we became aware that it was scraping our content – presumably to train its large language model (LLM). Such scraping without permission is a breach of copyright laws and explicitly of our terms of use, so we approached OpenAI and suggested a licensing deal. After lengthy talks (and signing a non-disclosure agreement), it told us it wasn’t interested, saying it was after “less open” data sources...
If publishers wither and die because the AIs have hoovered up all their traffic, then who’s left to produce the content to feed the models? And let’s be honest – it’s not as if these tech giants can’t afford to properly compensate publishers. OpenAI is currently fundraising to the tune of $6.5bn, the single largest venture capital round of all time, valuing the enterprise at a cool $150bn. In fact, it has just been reported that the company is planning to change its structure and become a for-profit enterprise...
I’m not anti-AI. It plainly has the potential to advance human progress and improve our lives in myriad ways. We used it at Mumsnet to build MumsGPT, which uncovers and summarises what parents are thinking about – everything from beauty trends to supermarkets to politicians – and we licensed OpenAI’s API (application programming interface) to build it. Plus, we think there are some very good reasons why these AI models should ingest Mumsnet’s conversations to train their models. The 6bn-plus words on Mumsnet are a unique record of 24 years of female interaction about everything from global politics to relationships with in-laws. By contrast, most of the content on the web was written by and for men. AI models have misogyny baked in and we’d love to help counter their gender bias.
But Google’s proposal to change our laws would allow billion-dollar companies to waltz untrammelled over any notion of a fair value exchange in the name of rapid “development”. Everything that’s unique and brilliant about smaller publisher sites would be lost, and a handful of Silicon Valley giants would be left with even more control over the world’s content and commerce."
Wednesday, September 25, 2024
OpenAI, Microsoft, Amazon among first AI Pact signatories; Euronews, September 25, 2024
Cynthia Kroet, Euronews; OpenAI, Microsoft, Amazon among first AI Pact signatories
"OpenAI, Microsoft and Amazon are among 100 companies who are the first to sign up to a voluntary alliance aiming to help usher in new AI legislation, the European Commission said today (25 September)...
The Commission previously said that some 700 companies have shown interest in joining the Pact – which involves voluntary preparatory commitments to help businesses get ready for the incoming AI Act...
The Pact supports industry's voluntary commitments related to easing the uptake of AI in organisations, identifying AI systems likely to be categorised as high-risk under the rules and promoting AI literacy.
In addition to these core commitments, more than half of the signatories committed to additional pledges, including ensuring human oversight, mitigating risks, and transparently labelling certain types of AI-generated content, such as deepfakes, the Commission said...
The AI Act, the world’s first legal framework that regulates AI models according to the risk they pose, entered into force in August."
OpenAI Training Data to Be Inspected in Authors’ Copyright Cases; Hollywood Reporter, September 24, 2024
Winston Cho, Hollywood Reporter; OpenAI Training Data to Be Inspected in Authors’ Copyright Cases
"For the first time, OpenAI will provide access to its training data for review of whether copyrighted works were used to power its technology.
In a Tuesday filing, authors suing the Sam Altman-led firm and OpenAI indicated that they came to terms on protocols for inspection of the information. They’ll seek details related to the incorporation of their works in training datasets, which could be a battleground in the case that may help establish guardrails for the creation of automated chatbots...
U.S. District Judge Vince Chhabria at a hearing on Friday questioned whether the attorneys can adequately represent the writers.
“It’s very clear to me from the papers, from the docket and from talking to the magistrate judge that you have brought this case and you have not done your job to advance it,” Chhabria said, according to Politico. “You and your team have barely been litigating the case. That’s obvious… This is not your typical proposed class action. This is an important case. It’s an important societal issue. It’s important for your clients.”
Thursday, August 29, 2024
OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims; Bloomberg Law, August 29, 2024
Annelise Gilbert, Bloomberg Law; OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims
"Diverting attention to hacking claims or how many tries it took to obtain exemplary outputs, however, avoids addressing most publishers’ primary allegation: AI tools illegally trained on copyrighted works."