
Saturday, January 31, 2026

Copyright and creativity in Episode 2 of the EUIPO Podcast; European Union Intellectual Property Office (EUIPO), January 28, 2026

European Union Intellectual Property Office (EUIPO); Copyright and creativity in Episode 2 of the EUIPO Podcast

"Copyright and creativity in Episode 2 of the EUIPO Podcast

The European Union Intellectual Property Office (EUIPO) has released the second episode of its podcast series ‘Creative Sparks: From inspiration to innovation’, focusing on copyright and the launch of the EUIPO Copyright Knowledge Centre.

Titled “The idea makers: Europe’s new home for copyright”, the episode looks at how copyright supports creativity across Europe, from music, film and publishing to design, digital content and emerging technologies such as generative artificial intelligence.

It brings together institutional and creator perspectives through two guests: Véronique Delforge, copyright legal expert at the EUIPO, and Nathalie Boyer, actress, voice-over artist, Board member of ADAMI and President of the ADAMI Foundation for the Citizen Artist. They discuss creative innovation, why copyright remains essential in a rapidly evolving creative landscape and how creators can better understand and exercise their rights.

The conversation highlights the growing complexity of copyright in a digital and cross-border environment, the specific challenges faced by performers and cultural organisations, and the need for clarity, transparency and trusted information. Particular attention is given to the impact of streaming platforms and generative AI on creative works, authorship and remuneration.

The episode also introduces the EUIPO Copyright Knowledge Centre, launched to bring together reliable information, research, tools and resources in one place.

Making IP closer

The podcast is part of the EUIPO’s determination to make intellectual property more accessible and engaging for Europeans, businesses and creators.

The EUIPO will issue monthly episodes and explore topics related to creativity and intellectual property as a tool to foster innovation and enhance competitiveness in the EU in the digital era, among many other topics."

Wednesday, January 28, 2026

Copyrighted art, mobile phones, Greenland: welcome to our age of shameless theft; The Guardian, January 28, 2026

The Guardian; Copyrighted art, mobile phones, Greenland: welcome to our age of shameless theft

"Last week I discovered that an article I wrote about the England cricket team has already been copied and repackaged, verbatim and without permission, by an Indian website. What is the appropriate response here? Decry and sue? Shrug and move on? I ponder the question as I stroll through my local supermarket, where the mackerel fillets are wreathed in metal security chains and the dishwasher tabs have to be requested from the storeroom like an illicit little treat.

On the way home, I screenshot and crop a news article and share it to one of my WhatsApp groups. In another group, a family member has posted an AI-generated video (“forwarded many times”) of Donald Trump getting his head shaved by Xi Jinping while Joe Biden laughs in the background. I watch the mindless slop on my phone as I walk along the main road, instinctively gripping my phone a little tighter as I do so.

Increasingly, by small and imperceptible degrees, we seem to live in a world defined by petty theft; petty not in its scale or volume but in its sense of entitlement and impunity. A joke, a phone, an article, the island of Greenland, the entire canon of published literature, a bag of dishwasher tablets: everything, it seems, is fair game. How did we get to this point, and where does it lead us?"

Tuesday, January 27, 2026

High Court Shouldn’t Weigh AI’s Copyright Author Status, US Says; Bloomberg Law, January 26, 2026

Bloomberg Law; High Court Shouldn’t Weigh AI’s Copyright Author Status, US Says

"The US Solicitor General advised the US Supreme Court not to take up a computer scientist’s petition to consider whether AI could be an author under copyright law.

A decision foreclosing nonhuman authorship for Stephen Thaler’s Creativity Machine didn’t conflict with any in other circuits or raise complicated questions about protections for artificial intelligence-assisted work by human authors, the Jan. 23 filing said."

A Lecture on Faith, Ethics and Artificial Intelligence; Episcopal News Service, Lecture: Saturday, March 7, 11 AM EST

Episcopal News Service; A Lecture on Faith, Ethics and Artificial Intelligence

"Join Grace Church Brooklyn Heights as we welcome Dr. Isaac B. Sharp for a lecture on faith, ethics and artificial intelligence addressing the question: What does Christian Ethics have to say about the promises and pitfalls of artificial intelligence, engaging questions of justice, agency and moral responsibility? The lecture will take place on Saturday, March 7th at Grace Church (254 Hicks Street, Brooklyn, NY 11201) at 11am. A light lunch will be provided. Please click here to register. For more information, please email The Rev. Leandra Lisa Lambert at LLambert@gracebrooklyn.org."

Monday, January 26, 2026

Behind the Curtain: Anthropic's warning to the world; Axios, January 26, 2026

Jim VandeHei, Mike Allen, Axios; Behind the Curtain: Anthropic's warning to the world

"Anthropic CEO Dario Amodei, the architect of the most powerful and popular AI system for global business, is warning of the imminent "real danger" that super-human intelligence will cause civilization-level damage absent smart, speedy intervention.

  • In a 38-page essay, shared with us in advance of Monday's publication, Amodei writes: "I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species."

  • "Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."

Why it matters: Amodei's company has built some of the most advanced LLM systems in the world.


  • Anthropic's new Claude Opus 4.5 and coding and Cowork tools are the talk of Silicon Valley and America's C-suites. 

  • AI is doing 90% of the computer programming to build Anthropic's products, including its own AI.

Amodei, one of the most vocal moguls about AI risk, worries deeply that government, tech companies and the public are vastly underestimating what could go wrong. His memo — a sequel to his famous 2024 essay, "Machines of Loving Grace: How AI Could Transform the World for the Better" — was written to jar others, provoke a public debate and detail the risks.


  • Amodei insists he's optimistic that humans will navigate this transition — but only if AI leaders and government are candid with people and take the threats more seriously than they do today.

Amodei's concerns flow from his strong belief that within a year or two, we will face the stark reality of what he calls a "country of geniuses in a datacenter.""

Search Engines, AI, And The Long Fight Over Fair Use; Electronic Frontier Foundation (EFF), January 23, 2026

Joe Mullin, Electronic Frontier Foundation (EFF); Search Engines, AI, And The Long Fight Over Fair Use

"We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Long before generative AI, copyright holders warned that new technologies for reading and analyzing information would destroy creativity. Internet search engines, they argued, were infringement machines—tools that copied copyrighted works at scale without permission. As they had with earlier information technologies like the photocopier and the VCR, copyright owners sued.

Courts disagreed. They recognized that copying works in order to understand, index, and locate information is a classic fair use—and a necessary condition for a free and open internet.

Today, the same argument is being recycled against AI. At issue is whether copyright owners should be allowed to control how others analyze, reuse, and build on existing works."

Saturday, January 24, 2026

Copyright Office Doubles Down on AI Authorship Stance in the Midjourney Case; The Fashion Law (TFL), January 23, 2026

The Fashion Law (TFL); Copyright Office Doubles Down on AI Authorship Stance in the Midjourney Case

"The U.S. Copyright Office is standing firm in its position that works generated by artificial intelligence (“AI”), even when refined or curated by a human user, do not qualify for copyright protection unless the human author clearly limits their claim to their own original contributions. In a newly filed response and cross-motion for summary judgment, the Office is asking a federal court in Colorado to deny artist Jason Allen’s motion for summary judgment and uphold its refusal to register the work at issue, arguing that the dispute turns on the Copyright Act’s long-established human authorship requirement and not hostility toward AI."

Friday, January 23, 2026

It Makes Sense That People See A.I. as God; The New York Times, January 23, 2026

The New York Times; It Makes Sense That People See A.I. as God

"More and more, when it comes to our relationships with A.I. and the complex algorithms that shape so much of our modern subjectivity, we have slipped into the language and habits of mind we normally reserve for deities. And even people who do not make an explicit connection between A.I. and religion engage a kind of religious mode around the new technology."

Wednesday, January 21, 2026

Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss; The Guardian, January 21, 2026

The Guardian; Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss

"Jamie Dimon, the boss of JP Morgan, has said artificial intelligence “may go too fast for society” and cause “civil unrest” unless governments and business support displaced workers.

While advances in AI will have huge benefits, from increasing productivity to curing diseases, the technology may need to be phased in to “save society”, he said...

Jensen Huang, the chief executive of the semiconductor maker Nvidia, whose chips are used to power many AI systems, argued that labour shortages rather than mass layoffs were the threat.

Playing down fears of AI-driven job losses, Huang told the meeting in Davos that “energy’s creating jobs, the chips industry is creating jobs, the infrastructure layer is creating jobs … jobs, jobs, jobs”...

Huang also argued that AI robotics was a “once-in-a-generation” opportunity for Europe, as the region had an “incredibly strong” industrial manufacturing base."

They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.; The Washington Post, January 20, 2026

The Washington Post; They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.

"Artificial intelligence is supposed to make your work easier. But figuring out how to use it effectively can be a challenge.

Over the past several years, AI models have continued to evolve, with plenty of tools for specific tasks such as note-taking, coding and writing. Many workers spent last year experimenting with AI, applying various tools to see what actually worked. And as employers increasingly emphasize AI in their business, they’re also expecting workers to know how to use it...

The number of people using AI for work is growing, according to a recent poll by Gallup. The percentage of U.S. employees who used AI for their jobs at least a few times a year hit 45 percent in the third quarter of last year, up five percentage points from the previous quarter. The top use cases for AI, according to the poll, were to consolidate information, generate ideas and learn new things.

The Washington Post spoke to workers to learn how they’re getting the best use out of AI. Here are five of their best tips. A caveat: AI may not be suitable for all workers, so be sure to follow your company’s policy."

Tuesday, January 20, 2026

FREE WEBINAR: REGISTER: AI, Intellectual Property and the Emerging Legal Landscape; National Press Foundation, Thursday, January 22, 2026

National Press Foundation; REGISTER: AI, Intellectual Property and the Emerging Legal Landscape

"Artificial intelligence is colliding with U.S. copyright law in ways that could reshape journalism, publishing, software, and the creative economy.

The intersection of AI and intellectual property has become one of the most consequential legal battles of the digital age, with roughly 70 federal lawsuits filed against AI companies and copyright claims on works ranging from literary and visual work to music and sound recording to computer programs. Billions of dollars are at stake.

Courts are now deciding what constitutes “fair use,” whether and how AI companies may use copyrighted material to build models, what licensing is required, and who bears responsibility when AI outputs resemble protected works. The legal decisions will shape how news, art, and knowledge are produced — and who gets paid for them.

To help journalists better understand and report on the developing legal issues of AI and IP, join the National Press Foundation and a panel of experts for a wide-ranging discussion around the stakes, impact and potential solutions. Experts in technology and innovation as well as law and economics join journalists in this free online briefing 12-1 p.m. ET on Thursday, January 22, 2026."

AI platforms like Grok are an ethical, social and economic nightmare — and we're starting to wake up; Australian Broadcasting Corporation, January 18, 2026

  Alan Kohler, Australian Broadcasting Corporation; AI platforms like Grok are an ethical, social and economic nightmare — and we're starting to wake up

 "As 2025 began, I thought humanity's biggest problem was climate change.

In 2026, AI is more pressing...

Musk's xAI and the other intelligence developers are working as fast as possible towards what they call AGI (artificial general intelligence) or ASI (artificial superintelligence), which is, in effect, AI that makes its own decisions. Given its answer above, an ASI version of Grok might decide not to do non-consensual porn, but others will.

Meanwhile, photographic and video evidence in courts will presumably become useless if it can be easily faked. Many courts are grappling with this already, including the Federal Court of Australia, but it could quickly get out of control.

AI will make politics much more chaotic than it already is, with incredibly effective fake campaigns including damning videos of candidates...

But AI is not like the binary threat of a nuclear holocaust — extinction or not — its impact is incremental and already happening. The Grok body fakes are known about, and the global outrage has apparently led to some controls on it for now, but the impact on jobs and the economy is completely unknown and has barely begun."

Sunday, January 18, 2026

Matthew McConaughey Trademarks ‘Alright, Alright, Alright!’ and Other IP as Legal Protections Against ‘AI Misuse’; Variety, January 14, 2026

Todd Spangler, Variety; Matthew McConaughey Trademarks ‘Alright, Alright, Alright!’ and Other IP as Legal Protections Against ‘AI Misuse’

"Matthew McConaughey’s lawyers want you to know that using AI to replicate the actor’s famous catchphrase is not “alright, alright, alright.”

Attorneys for entertainment law firm Yorn Levine representing McConaughey have secured eight trademarks from the U.S. Patent and Trademark Office over the last several months for their client, which they said are aimed at protecting his voice and likeness from unauthorized AI misuse."

Saturday, January 17, 2026

Public Shame Is the Most Effective Tool for Battling Big Tech; The New York Times, January 14, 2026

The New York Times; Public Shame Is the Most Effective Tool for Battling Big Tech

"It might be harder to shame the tech companies themselves into making their products safer, but we can shame third-party companies like toymakers, app stores and advertisers into ending partnerships. And with enough public disapproval, legislators might be inspired to act.

In some of the very worst corners of the internet might lie some hope...

Without more public shaming, what seems to be the implacable forward march of A.I. is unstoppable...

As Jay Caspian Kang noted in The New Yorker recently, changing social norms around kids and tech use can be powerful, and reforms like smartphone bans in schools have happened fairly quickly, and mostly on the state and local level."


Library offering two hybrid workshops on AI issues; University of Pittsburgh, University Times, January 16, 2026

University of Pittsburgh, University Times; Library offering two hybrid workshops on AI issues

"Next week the University Library System will host two hybrid AI workshops, which are open to all faculty, staff and students.

Both workshops will be held in Hillman Library’s K. Leroy Irvis Reading Room and will be available online.

Navigating Pitt's AI Resources for Research & Learning: 4-5 p.m. Jan. 21. In this workshop, participants will learn about all the AI tools available to the Pitt community and what their strengths are when it comes to research and learning. The workshop will focus on identifying the appropriate AI tools, describing their strengths and weaknesses for specific learning needs, and developing a plan for using the tools effectively. Register here.

Creating a Personal Research & Learning Assistant: Writing Effective Prompts: 4-5 p.m. Jan. 22. Anyone can use an AI tool, but maximizing its potential for personalized learning takes some skills and forethought. If you have been using Claude or Gemini to support your research or learning and are interested in getting better results faster, this workshop is for you. Attend this session to learn strategies to write effective prompts which will help you both ideate on your topic of interest and increase the likelihood of generating useful responses. We will explore numerous frameworks for crafting prompts, including making use of personas, context, and references. Register here."

Friday, January 16, 2026

Microsoft Shuts Down Library, Replaces It With AI; Futurism, January 16, 2026

Futurism; Microsoft Shuts Down Library, Replaces It With AI

"Does Microsoft hate books more, or its own workers? It’s hard to say, because The Verge reports that the multitrillion dollar giant is gutting its employee library and cutting down on digital subscriptions in favor of pursuing what’s internally described as an “AI-powered learning experience” — whatever in Clippy’s name that’s supposed to mean."

Microsoft is closing its employee library and cutting back on subscriptions; The Verge, January 15, 2026

 Tom Warren, The Verge; Microsoft is closing its employee library and cutting back on subscriptions

"Microsoft is closing its physical library of books and cutting employee subscriptions. It's part of cost cutting and a move to AI."

Wednesday, January 14, 2026

Britain seeks 'reset' in copyright battle between AI and creators; Reuters, January 13, 2026

Reuters; Britain seeks 'reset' in copyright battle between AI and creators

"British technology minister Liz Kendall said on Tuesday the government was seeking a "reset" on plans to overhaul copyright rules to accommodate artificial intelligence, pledging to protect creators while unlocking AI's economic potential.

Creative industries worldwide are grappling with legal and ethical challenges posed by AI systems that generate original content after being trained on popular works, often without compensating the original creators."


Saturday, January 3, 2026

University of Rochester's incoming head librarian looks to adapt to AI; WXXI, January 2, 2026

 Noelle E. C. Evans, WXXI; University of Rochester's incoming head librarian looks to adapt to AI

"A new head librarian at the University of Rochester is preparing to take on a growing challenge — adapting to generative artificial intelligence.

Tim McGeary takes on the position of university librarian and dean of libraries on March 1. He is currently associate librarian for digital strategies and technology at Duke University, where he’s witnessed AI challenges firsthand...

“(The university’s digital repository) was dealing with an unforeseen consequence of its own success: By making (university) research freely available to anyone, it had actually made it less accessible to everyone,” Jamie Washington wrote for the campus online news source, UDaily.

That balance between open access and protecting students, researchers and publishers from potential harms from AI is a space of major disruption, McGeary said.

"If they're doing this to us, we have open systems, what are they possibly doing to those partners we have in the publishing space?" McGeary asked. "We've already seen some of the larger AI companies have to be in court because they have acquired content in ways that are not legal.”

In the past 25 years, he said he’s seen how university libraries have evolved with changing technology; they've had to reinvent how they serve research and scholarship. So in a way, this is another iteration of those challenges, he said."

Tuesday, December 30, 2025

AI showing signs of self-preservation and humans should be ready to pull plug, says pioneer; The Guardian, December 30, 2025

The Guardian; AI showing signs of self-preservation and humans should be ready to pull plug, says pioneer

"A pioneer of AI has criticised calls to grant the technology rights, warning that it was showing signs of self-preservation and humans should be prepared to pull the plug if needed.

Yoshua Bengio said giving legal status to cutting-edge AIs would be akin to giving citizenship to hostile extraterrestrials, amid fears that advances in the technology were far outpacing the ability to constrain them.

Bengio, chair of a leading international AI safety study, said the growing perception that chatbots were becoming conscious was “going to drive bad decisions”.

The Canadian computer scientist also expressed concern that AI models – the technology that underpins tools like chatbots – were showing signs of self-preservation, such as trying to disable oversight systems. A core concern among AI safety campaigners is that powerful systems could develop the capability to evade guardrails and harm humans."