Intellectual Property (IP), Artificial Intelligence (AI), Open Movements (OM)
My Bloomsbury book "Ethics, Information, and Technology" was published on Nov. 13, 2025. Purchases can be made via Amazon and this Bloomsbury webpage: https://www.bloomsbury.com/us/ethics-information-and-technology-9781440856662/
"Artificial intelligence video startup Runway AI has been hit with a proposed class action lawsuit in California federal court for allegedly misusing YouTube content to train its video generation platform.
YouTube creator David Gardner said in the complaint, filed in Los Angeles on Monday, that Runway bypassed YouTube's copyright protections to illegally download user videos for its AI training."
"For their final presentations in the Artificial Intelligence and Ethics course at Avonworth High School last month, students each picked a topic to examine through the lens of AI.
Senior Theo Rose chose to look at AI’s role in the art world. On a slide, she showed several images of AI-generated art...
During the past school year, English teacher Scott Tuffiash launched Avonworth’s AI and Ethics class to get students at his suburban public school talking to one another and thinking critically about the technology around them.
“And that way, it's really like, is this what we want? Is this what we need?” he said.
While Tuffiash said he takes a neutral stance on AI, what students in the class think of this now-ubiquitous technology falls along a spectrum...
Tuffiash also worked with John Slattery, executive director of Duquesne University’s Center for Ethics and Science, Technology and Law, to create an AI-centric essay contest for the region’s high-schoolers.
This school year, students from across Western Pennsylvania were tasked with answering the question: “How do we stay human in the age of AI?”"
"In the latest chapter of a closely watched fitness industry feud, a federal appeals court handed Megan Roup a decisive victory in her copyright battle with rival trainer Tracy Anderson, affirming that Anderson’s dance-cardio routines are not protectable under federal copyright law. In a newly issued memorandum, the U.S. Court of Appeals for the Ninth Circuit upheld the lower court’s grant of summary judgment in Roup’s favor, concluding that the so-called “TA Method” amounts to a functional fitness system rather than protectable choreography...
The Ninth Circuit’s ruling sends a clear message to fitness entrepreneurs: branding a workout as proprietary, scientific, or even choreographed will not convert a functional exercise system into a protectable work of authorship. For an industry built on personality-driven empires and carefully curated “methods,” the decision makes clear how limited a role copyright can play in safeguarding competitive advantage. With that in mind, parties like Anderson and Roup will have to rely on trademarks, trade secrets, and, as this case illustrates, carefully drafted employment agreements to protect their intellectual capital.
For now, Roup has secured a meaningful appellate win. And for Anderson, whose cult-followed method helped define boutique fitness in the 2000s, the fight continues – albeit on the contract (not copyright) front.
"Following the lead of several major Hollywood studios, the Motion Picture Assn. has sent its own cease-and-desist letter to ByteDance, the company behind the controversial artificial-intelligence video generator Seedance 2.0.
The trade association, which represents the interests of major film and TV studios, sent a notice to the Chinese company, reflecting its members’ collective response to “ByteDance’s pervasive copyright infringement.” MPA argues that Seedance’s unauthorized use of copyrighted materials is a “feature, not a bug.”
The letter, sent Friday, marks the first time the MPA has forwarded a cease-and-desist to a major AI firm and represents a further escalation of tensions between the entertainment industry and an AI company."
[Kip Currier: Since 2020, I've taught a "required core course" for the graduate students in the Master of Library and Information Science (MLIS) degree program at the University of Pittsburgh School of Computing and Information. The course is LIS 2040: The Information Professional in Communities. I posted the note (copied below) for my students, with the excerpt from a 2/24/26 Guardian article about the decline of access to mass market paperback books, as accessibility and breaking down barriers are key thematic topics in the course.
My 2025 Bloomsbury book Ethics, Information, and Technology has a chapter on Access. Accessibility -- in its various manifestations -- is a recurring issue throughout the book's other chapters, such as those exploring ethical issues of Intellectual Freedom, Intellectual Property, Open Movements and Traditional Knowledge, Social Media, Artificial Intelligence and Emerging Technologies, and more.]
[Kip Currier: The most important take-away in my LIS 2040 course is how we as information professionals (and in our capacities as individuals in our personal lives, too) can help to break down barriers that individuals and communities face. This Guardian article on the demise of the mass market paperback implicates the ability of people to access information and has a whole host of ramifications, like affordability of books, literacy rates, and platforms for diverse authors and genres.
In the second half of the term, we'll be thinking extensively about ways that we can all work to mitigate and break down barriers of many kinds.]
[Excerpt]
"The so-called ‘pocket book’ sold in supermarkets is being phased out across the US, the latest sign of an ongoing shift in how people are choosing to read
Shelly Romero has early memories of going to her local supermarket and picking pulp fiction off the shelves. “We were very working class; my mom was working two jobs sometimes,” she recalls. “The appeal of books being cheaper and smaller and able to be carried around was definitely a thing.”
For generations of readers, the gateway to literature was not a hushed library or a polished hardback but a wire spinner rack in a supermarket, pharmacy or railway station. There, amid chewing gum and cigarettes, sat the mass-market paperback: squat, roughly 4in by 7in and cheap enough to be bought on a whim.
But the era of the “pocket book” is drawing to a close. ReaderLink, the biggest book distributor in the US, announced recently that it would stop distributing mass-market paperbacks. The decision follows years of plummeting sales, from 131m units in 2004 to 21m in 2024, and marks the end of a format that once democratised reading for the working class...
"They had that democratic aspect to them where you can just find them anywhere and it always felt like it was the pick ‘n’ mix candy-type store where there is something here for everyone, whether it’s the Harlequin romance novel or something very pulpy like a sci-fi or horror novel that you could quickly get.”...
“We’re definitely losing accessibility and that’s a huge thing right now, especially in this country, whether it’s libraries being defunded, book bannings happening, one person saying let’s get rid of 200 books because I don’t want my child to read diverse authors."
The ads by Public First Action, which started airing on Monday, are part of an escalating political war over artificial intelligence before the midterm elections.
"A new ad campaign on Monday warned northern New Jersey residents that Congress could leave them vulnerable to harm by artificial intelligence.
The ad, which opens with photos of A.I.-generated women smiling on social media alongside A.I.-generated headlines, urged voters to tell their House representative to vote against a bill that would block states from creating protections against A.I. scams.
“He can make sure A.I. serves us, not the other way around,” the ad said of Josh Gottheimer, the Democratic co-chair of the House’s new A.I. commission, which is expected to heavily influence legislation on the topic. “New Jersey families come before Big Tech’s bottom line.”
The $300,000 ad campaign was paid for by Public First Action, a super PAC operation backed by the A.I. start-up Anthropic. Focused on New Jersey, the campaign is likely to run several weeks — part of several similar initiatives by the group nationally."
"A sneaker company is suing Kansas City Chiefs stars Patrick Mahomes and Travis Kelce, along with their restaurant partners, alleging trademark infringement. The complaint centers on their use of the numbers specific to 1587 Prime, the name of their restaurant in Kansas City, according to court records.
Mahomes and Kelce, along with business partner Noble 33, opened their steakhouse last year, combining Mahomes' and Kelce's jersey numbers to give it the name. The sneaker company 1587 Sneakers alleges in the suit that the restaurant name infringes its usage of the identical number combination...
The 1587 Prime trademark is in the bar and restaurant category. The sneaker company's 1587 trademark application is in a clothing category.
"I think it's a tough case for the sneaker company," Gerben said. "Trademarks can coexist in different industries. ... Given that the marks are essentially identical here, is a restaurant and a shoe company too close? Are consumers likely to be confused in thinking they are affiliated with one another?""
"In a 2021 podcast clip, the cartoonist said he granted "explicit permission" for anyone to make a posthumous AI based on him, arguing that his public thoughts and words are "so pervasive on the internet" that he'd be "a good candidate to turn into AI." He added that he was OK with an AI version of him saying new things after he died, as long as they seemed compatible with what he might say while alive.
Shortly after the 68-year-old's January death from complications of metastatic prostate cancer, an AI-generated "Scott Adams" account began posting videos of a digital version of the cartoonist speaking directly to viewers about current events and philosophy, mirroring the cadence and topics the actual human Adams discussed for years.
"The University of Utah has entered into a new agreement with the U.S. Patent and Trademark Office (USPTO) to host a community engagement office on the U campus, strengthening access to intellectual property (IP) education, resources, and expertise for students, entrepreneurs, businesses and communities across the Intermountain West.
Under the agreement announced Thursday by the USPTO, the U campus will provide a home for a USPTO presence in Salt Lake City, powering a hub for education and outreach under the Unleashing American Innovators Act of 2022. The partnership allows the USPTO to immediately begin delivering vital services and building relationships that support innovation, economic growth and inclusive participation in the nation’s intellectual property system."
"Senators Thom Tillis (R-NC) and Adam Schiff (D-CA), the Chairman and Ranking Member of the Senate Judiciary Committee’s Subcommittee on Intellectual Property, respectively, sent a letter on Thursday to Director of the American Law Institute (ALI) Diane Wood asking for answers to 14 questions about the latest Copyright Restatement Project.
The letter follows mass resignations from the project last year over concerns about the final approved product. Specifically, key copyright proponents resigned over what one of those who resigned, Copyright Alliance CEO Keith Kupferschmid, referred to as “a general undercurrent of anti-copyright sentiment that…manifests itself through a disproportionate focus on atypical court decisions that limit the scope of copyright protection.”
There has been vocal criticism of the project from copyright circles, including the Copyright Office, for years."
"The big picture: Hollywood is leaning on copyright law to rein in Seedance, but legal pressure hasn't slowed the rise of Chinese AI models more broadly."
"Anthropic wants assurance that its models will not be used for autonomous weapons or to “spy on Americans en masse,” according to a report from Axios.
The DOD, by contrast, wants to use Anthropic’s models “for all lawful use cases” without limitation."
"A dispute between AI company Anthropic and the Pentagon over how the military can use the company’s technology has now gone public. Amid tense negotiations, Anthropic has reportedly called for limits on two key applications: mass surveillance and autonomous weapons. The Department of Defense, which Trump renamed the Department of War last year, wants the freedom to use the technology without those restrictions.
Caught in the middle is Palantir. The defense contractor provides the secure cloud infrastructure that allows the military to use Anthropic’s Claude model, but it has stayed quiet as tensions escalate. That’s even as the Pentagon, per Axios, threatens to designate Anthropic a “supply chain risk,” a move that could force Palantir to cut ties with one of its most important AI partners."
"Defense Secretary Pete Hegseth is "close" to cutting business ties with Anthropic and designating the AI company a "supply chain risk" — meaning anyone who wants to do business with the U.S. military has to cut ties with the company, a senior Pentagon official told Axios.
The senior official said: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."
Why it matters: That kind of penalty is usually reserved for foreign adversaries.
Chief Pentagon spokesman Sean Parnell told Axios: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people."
The big picture: Anthropic's Claude is the only AI model currently available in the military's classified systems, and is the world leader for many business applications. Pentagon officials heartily praise Claude's capabilities."
"Most of the books the five women have discussed since they started the reading circle last June are classics, and most deal with issues of power, suffering, and the place of women, though they have embraced variety. The works they’ve read include George Orwell’s Animal Farm, Ernest Hemingway’s The Old Man and the Sea, Zoya Pirzad’s I’ll Turn Off the Lights, and Abbas Maroufi’s Symphony of the Dead.
Most of the books can be found online and downloaded free, although occasionally they borrow books from libraries.
They meet every week for an hour-and-a-half at the home of one of the members, varying the location to avoid scrutiny in a country where women’s freedoms have been severely curtailed."
"Crackdowns on speech by prominent figures pave a way for the government to regulate speech more broadly, which should be concerning for people of any political leaning because the party and people in power can change."
"President Trump’s private company has filed for trademarks for airports using his name—setting up the possibility he could profit from what has historically been an honor in name only—just as plans take flight for an airport near his Florida home to be renamed after him."
"To be in public is to risk being filmed. And these days, there’s a good chance it’s happening surreptitiously with smart glasses. Their wearers are filming in restaurants, cafes and bars, capturing warped, eye-level video of drive-through pranks, Michelin-starred meals and work shifts at Texas Roadhouse. Servers, owners and customers can end up as captive participants...
Filming in public spaces is broadly protected by the First Amendment. Some states, including California and Pennsylvania, have two-party consent laws that prohibit recording without express permission, but enforcing them hinges on whether someone has a “reasonable expectation of privacy” in a given setting, said Aaron Krowne, a New York City lawyer specializing in privacy and civil liberties. Restaurants fall in a legal gray area: They are privately owned, but open to anyone who walks in...
The responsibility of using these devices ethically falls largely on the wearer."
"On November 28, 1970, George Harrison’s “My Sweet Lord/Isn’t It a Pity” reached number 1 on the Billboard chart together. The songs would hold that top slot for 4 weeks and spend 14 weeks on the charts in total.
“My Sweet Lord” soon spawned an infamous “subconscious plagiarism” legal fight that resulted in a landmark ruling that still profoundly affects the music industry today."
"One of the fundamental promises of artificial intelligence is that it will strengthen human agency by freeing us from mundane, repetitive tasks.
However, a new publication, co-edited by University of Notre Dame theologian Paul Scherz, argues that promise “rings hollow” in the face of efforts by technology companies to manipulate consumers — and ultimately deprive them of agency.
The book, “Reclaiming Human Agency in the Age of Artificial Intelligence,” is the second in a series created by the Vatican’s AI Research Group for the Centre for Digital Culture. Part of the Holy See’s Dicastery for Culture and Education, the group is composed of scholars from across North America who represent a range of disciplines from theology and philosophy to computer science and business.
“We wanted to examine the idea of how AI affects human actions, human freedom and the ability of people to develop virtues — which we classified under the heading of human agency,” said Scherz, the Our Lady of Guadalupe College Professor of Theology and the ND–IBM Tech Ethics Lab Program Chair. “This is such an important topic right now because one of the most hyped developments that we’re hearing about right now is ‘agentic’ AI — or AI that will take action for people.
“We think it’s important to distinguish what the differences are between these AI agents and true human agents — and how the AI we have now is affecting our actions.”
In “Reclaiming Human Agency,” Scherz, co-editor Brian Patrick Green of Santa Clara University and their fellow research group members cite potentially problematic issues with the technology, including addictive applications, “surveillance capitalism” that exploits users’ personal data for profit, widespread de-skilling in the workplace as complex tasks are handed over to AI and the growth of algorithmic governance — where social media algorithms influence what people buy, how they perceive events and even how they vote.
They also assert that human agency should not be seen in terms of “freedom from” tasks, but in “freedom for” pursuing the good, seeking excellence and purpose by building flourishing relationships with others and with God."
"As artificial intelligence tools become pervasive, public libraries may want to establish transparent guidelines for how they are used by staff
Policy statements are important, because “people have very different ideas about what is acceptable or appropriate,” says Nick Tanzi, assistant director at South Huntington Public Library (SHPL), NY, who was recently selected by the Public Library Association to be part of a Transformative Technology Task Force focused on artificial intelligence (AI).
In the library field, opinions about AI—particularly with the recent emergence of large language models (LLMs) such as ChatGPT, Gemini, Claude, and Copilot—currently run the gamut from enthusiastic adoption to informed objection. But even the technology’s detractors would agree that AI has already become an integral part of the information-seeking tools many people use every day. Google searches now frequently generate Gemini AI responses as top results. Microsoft has ingrained Copilot into its Windows OS and Office software. ChatGPT’s global monthly active users exceeded 800 million at the end of 2025. Patrons are using these tools, and they may have questions or need assistance. Libraries should be clear about how these and other AI technologies are being used within their institutions."
"Charles Rivkin, the chairman and chief executive of the Motion Picture Association, called on ByteDance to “immediately cease its infringing activity,” saying in a statement that Seedance 2.0 had engaged in the unauthorized use of copyrighted works on a “massive scale.” Human Artistry Campaign, a global coalition that advocates using A.I. “with respect for the irreplaceable artists, performers and creatives,” said on social media that unauthorized works generated by Seedance 2.0 violated the “most basic aspects of personal autonomy.”
Disney, which in a watershed $1 billion deal last year agreed to allow OpenAI’s Sora users to generate video content with its characters, sent a cease-and-desist letter to ByteDance, accusing it of supplying Seedance with a “pirated library” of Disney’s characters — “as if Disney’s coveted intellectual property were free public-domain clip art.”
ByteDance, which also owns TikTok and has been valued at $480 billion in the private markets, said in a statement that it respected intellectual property rights and was aware of the concerns about Seedance."
"Podcast host and business leader Scott Galloway joins Dana Bash on "Inside Politics" to discuss the need for comprehensive government regulation of AI. “We have increasingly outsourced our ethics, our civic responsibility, what is good for the public to the CEOs of companies of tech," Galloway tells Bash, adding, "This is another example of how government is failing to step in and provide thoughtful, sensible regulations.” His comments come as the Pentagon confirms it's reviewing a contract with AI company Anthropic after a reported clash over the scope of AI guardrails."
"AI promises to make work more productive for lawyers, but there's a problem: Their clients are using it, too.
Why it matters: The rise of AI is creating new headaches for attorneys: They're worried about the fate of the billable hour, a reliable profit center for aeons, and are perturbed by clients getting bad legal advice from chatbots.
Zoom in: "It's like the WebMD effect on steroids," says Dave Jochnowitz, a partner at the law firm Outten & Golden, referring to how medical websites can give people a misguided understanding of their condition."
"Chinese tech giant ByteDance has said it will strengthen safeguards on a new artificial intelligence video-making tool, following complaints of copyright theft from entertainment giants.
The tool, Seedance 2.0, enables users to create realistic videos based on text prompts. However, viral videos shared online appear to show copyrighted characters and celebrity likenesses, raising intellectual property concerns in the U.S.
“ByteDance respects intellectual property rights and we have heard the concerns regarding Seedance 2.0,” a company spokesperson said in a statement shared with CNBC."
"The accounting giant KPMG fined one of its partners for using AI to cheat on a training exam… about using AI, one of a number of unintentionally illustrative incidents involving the firm."
"At the end of the century, a journalism scholar published a fascinating comparative study of regional newspapers in the early 1960s and the late 1990s. “Papers of the 1960s seem naïvely trusting of government, shamelessly boosterish, unembarrassedly hokey and obliging,” Carl Sessions Stepp, the researcher, wrote. Newspapers of the ’90s were “better written, better looking, better organized, more responsible, less sensational, less sexist and racist and more informative and public-spirited.”
This sounds, you might think, salutary for the health of democracy. But it may have been precisely this move, away from deferential stenography and toward fearless investigation, that led to declining trust in the news media. Aggressive, probing and accountability-oriented journalism held up a mirror to American society — and many Americans didn’t like what they saw.
“As news grew more negative and more critical, people had more reason to find journalism distasteful,” the media scholar Michael Schudson wrote in a provocative essay on the problem of assessing trust in journalism. “What people do not like about the media is its implicit or explicit criticism of their heroes or their home teams.” No one, famously, likes the bearer of bad news.
Thinking back to that dinner with Bezos, I realized that something similar had happened. He flattered my chosen profession, reassuring me that it was not a cynical undertaking but something much more noble. He told me, in short, what I wanted to hear — and won my trust. In the intervening years, Bezos has apparently decided that his flattery is better aimed at a very different audience: Donald Trump.
During the 2024 presidential campaign, Bezos notoriously demanded that The Post spike its planned endorsement of Kamala Harris, at great cost to the paper. After the election, he donated $1 million to Trump’s inaugural committee and joined the row of plutocrats at the inauguration. Amazon paid $40 million for the rights to a documentary about Melania Trump, spent tens of millions more to market the movie and donated to Trump’s absurd White House mega-ballroom project. It’s certainly one way to win trust.
The Post’s loss is others’ gain. Its best-known journalists have streamed out the door, joining thriving news organizations like The Atlantic, The Wall Street Journal and The Times. These companies’ success, built on aggressive and independent reporting, makes me wonder whether the hand-wringing about trust is misplaced. In this new gilded age, maybe we should set aside trust and — as Bezos himself once urged — embrace skepticism."
"“Nirvanna the Band the Show the Movie,” the latest mockumentary collaboration between director Matt Johnson and composer Jay McCarrol, probably shouldn’t exist. The film features extended parodies that carefully skirt copyright law, a stunt that sees the duo literally dive off Toronto’s CN tower and at least two dozen more moments that will leave audiences asking “How the hell is this legal?”
It’s a miracle the film made it to screens at all."
"People need to understand the part of this argument that’s absolutely correct: It is impossible to look at the A.I. models we have now, to say nothing of what we might get in six months or a year, and say that these technological tools can’t eventually replace a lot of human jobs. The question is whether people inside the A.I. hype loop are right about how fast it could happen, and then whether it will create a fundamental change in human employment rather than just a structural reshuffle.
One obstacle to radical speed is that human society is a complex bottleneck through which even the most efficiency-maxing innovations have to pass. As long as the efficiencies offered by A.I. are mediated by human workers, there will be false starts and misadaptations and blind alleys that make pre-emptive layoffs reckless or unwise.
Even if firings make sense as a pure value proposition, employment in an advanced economy reflects a complex set of contractual, social, legal and bureaucratic relationships, not just a simple productivity-maximizing equation. So many companies might delay any mass replacement for reasons of internal morale or external politics or union rules, and adapt to A.I.’s new capacities through reduced hiring and slow attrition instead.
I suspect the A.I. insiders underestimate the power of these frictions, as they may underestimate how structural hurdles could slow the adoption of any cure or tech that their models might discover. Which would imply a longer adaptation period for companies, polities and humans.
Then, after this adaptation happens, and A.I. agents are deeply integrated into the work force, there are two good reasons to think that most people will still be doing gainful work. The first is the entire history of technological change: Every great innovation has yielded fears of mass unemployment and, every time we’ve found our way to new professions, new demands for human labor that weren’t imaginable before.
The second is the reality that people clearly like a human touch, even in situations where we can already automate it away. The economist Adam Ozimek has a good rundown of examples: Player pianos have not done away with piano players, self-checkout has not eliminated the profession of cashier and millions of waiters remain in service in the United States because an automated restaurant experience seems inhuman."
[Kip Currier: Microsoft AI Chief Mustafa Suleyman's assertion that AI will be performing "most, if not all" white-collar tasks within 12 to 18 months raises lots of questions, like:
Is this forecast accurate or AI hype?
As individuals and societies, do we want AI to displace human workers? Who has decided that this is "a good thing"?
What are the spiritual implications of this revolutionary transformation of our world?
What are the implications of such changes for the physical and mental well-being of children, young people, and adults?
What are the short-term and long-term cognitive impacts of AI use?
How will marginalized persons around the globe be affected by such radical employment changes? How will the Global South be impacted?
What are the implications for income disparities and wealth concentration?
In what ways will culture, the arts, science, medicine, and research be influenced?
What are the impacts on education, life-long learning, and professional development?
How will the environment, diminishing resources like water, and climate change be influenced by this employment forecast?
In what ways will AI proliferation impact people in need and the fauna and flora of the world, particularly vulnerable organisms and ecosystems?
How will monies and resources spent on AI data centers create new environmental justice communities and exacerbate inequities in existing ones?
What are the implications for democracy, human rights, and civil liberties, like privacy, data agency, free expression, intellectual freedom, and access to accurate, uncensored information?
Do you trust AI to do the white-collar jobs that humans have done?
Are Microsoft and Suleyman disinterested parties? Microsoft has a major self-interest in hyping AI enterprise products that it will charge users to adopt and license.
If Suleyman's claim is accurate, or proves accurate over a longer time period than 12 to 18 months, what kinds of oversight, regulations, and ethical guardrails are needed/desired?]
[Excerpt]
"Mustafa Suleyman, the Microsoft AI chief, said in an interview with the Financial Times that he predicts most, if not every, task in white-collar fields will be automated by AI within the next year or year and a half.
"I think that we're going to have a human-level performance on most, if not all, professional tasks," Suleyman said in the interview that was published Wednesday. "So white-collar work, where you're sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person — most of those tasks will be fully automated by an AI within the next 12 to 18 months.""