
Thursday, January 16, 2025

Biden bids farewell with dark warning for America: the oligarchs are coming; The Guardian, January 15, 2025

The Guardian; Biden bids farewell with dark warning for America: the oligarchs are coming

"The primetime speech did not mention Donald Trump by name. Instead it will be remembered for its dark, ominous warning about something wider and deeper of which Trump is a symptom.

“Today, an oligarchy is taking shape in America of extreme wealth, power, and influence that literally threatens our entire democracy, our basic rights and freedom and a fair shot for everyone to get ahead,” Biden said.

The word “oligarchy” comes from the Greek words meaning rule (arche) by the few (oligos). Some have argued that the dominant political divide in America is no longer between left and right, but between democracy and oligarchy, as power becomes concentrated in the hands of a few. The wealthiest 1% of Americans now has more wealth than the bottom 90% combined.

The trend did not start with Trump but he is set to accelerate it. The self-styled working-class hero has picked the richest cabinet in history, including 13 billionaires, surrounding himself with the very elite he claims to oppose. Elon Musk, the world’s richest man, has become a key adviser. Tech titans Musk, Jeff Bezos and Mark Zuckerberg – collectively worth a trillion dollars – will be sitting at his inauguration on Monday.

Invoking former president Dwight Eisenhower’s farewell address in January 1961 that warned against the rise of a military-industrial complex, Biden said: “Six decades later, I’m equally concerned about the potential rise of a tech industrial complex. It could pose real dangers for our country as well. Americans are being buried under an avalanche of misinformation and disinformation, enabling the abuse of power.”

In an acknowledgement of news deserts and layoffs at venerable institutions such as the Washington Post, Biden added starkly: “The free press is crumbling. Editors are disappearing. Social media is giving up on fact checking. Truth is smothered by lies, told for power and for profit. We must hold the social platforms accountable, to protect our children, our families and our very democracy from the abuse of power.”

Zuckerberg’s recent decision to abandon factcheckers on Facebook, and Musk’s weaponisation of X in favour of far-right movements including Maga, was surely uppermost in Biden’s mind. Trust in the old media is breaking down as people turn to a fragmented new ecosystem. It has all happened with disorienting speed."

Wednesday, January 15, 2025

Meta Lawyer Lemley Quits AI Case Citing Zuckerberg 'Descent'; Bloomberg Law, January 14, 2025

Bloomberg Law; Meta Lawyer Lemley Quits AI Case Citing Zuckerberg 'Descent'

"California attorney Mark Lemley dropped Meta Platforms Inc. as a client in a high-profile copyright case because of CEO Mark Zuckerberg’s “descent into toxic masculinity and Neo-Nazi madness,” the Stanford University professor said on LinkedIn."

Tuesday, January 14, 2025

USPTO announces new Artificial Intelligence Strategy to empower responsible implementation of innovation; United States Patent and Trademark Office (USPTO), January 14, 2025

United States Patent and Trademark Office (USPTO); USPTO announces new Artificial Intelligence Strategy to empower responsible implementation of innovation

"AI Strategy outlines how the USPTO will address AI's impact across IP policy, agency operations, and the broader innovation ecosystem  

WASHINGTON—The U.S. Patent and Trademark Office (USPTO) announced a new Artificial Intelligence (AI) Strategy to guide the agency’s efforts toward fulfilling the potential of AI within USPTO operations and across the intellectual property (IP) ecosystem. The Strategy offers a vision for how the USPTO can foster responsible and inclusive AI innovation, harness AI to support the agency’s mission, and advance a positive future for AI to ensure that the country maintains its leadership in innovation. 

“We have a responsibility to promote, empower, and protect innovation,” said Derrick Brent, Acting Under Secretary of Commerce for Intellectual Property and Acting Director of the USPTO. “Developing a strategy to unleash the power of AI while mitigating risks provides a framework to advance innovation and intellectual property.”  

The strategy aims to achieve the USPTO’s AI vision and mission through five focus areas which include: 

  1. Advance the development of IP policies that promote inclusive AI innovation and creativity. 
  2. Build best-in-class AI capabilities by investing in computational infrastructure, data resources, and business-driven product development. 
  3. Promote the responsible use of AI within the USPTO and across the broader innovation ecosystem.
  4. Develop AI expertise within the USPTO’s workforce.
  5. Collaborate with other U.S. government agencies, international partners, and the public on shared AI priorities.

The USPTO and our sister agencies within the Department of Commerce, as well as the U.S. Copyright Office, are providing critical guidance and recommendations to advance AI-driven innovation and creativity. In 2022, the USPTO created the AI and Emerging Technology (ET) Partnership, which has worked closely with the AI/ET community to gather public feedback through a series of sessions on topics related to AI and innovation, biotech, and intellectual property (IP) policy. Since its 2022 launch, more than 6,000 stakeholders have engaged with us on these critical issues. In addition, the USPTO collaborates across government to advance American leadership in AI by promoting innovation and competition as set forth in the Biden-Harris Administration’s landmark October 2023 AI Executive Order. 

The full text of the AI Strategy can be found on the AI Strategy webpage. Additional information on AI, including USPTO guidance and more on USPTO’s AI/ET Partnership, can be found on our AI webpage. "

Wednesday, January 8, 2025

3Blue1Brown copyright takedown blunder by AI biz blamed on human error; The Register, January 7, 2025

Thomas Claburn, The Register; 3Blue1Brown copyright takedown blunder by AI biz blamed on human error

"The errant copyright takedown of a popular video explaining Bitcoin, from Grant Sanderson's smash-hit YouTube channel 3Blue1Brown, has been reversed, with human error blamed.

The brand protection company responsible for the unwarranted takedown, ChainPatrol.io, also acknowledged that the legal demand was a mistake, and has been in communication with Sanderson about its efforts to return the video to YouTube. Sanderson – who has 6.8 million subscribers on YouTube, and whose in-depth videos mainly on mathematics and science have been viewed more than 600 million times – raised the alarm in a social media post on Monday.

He said he had just learned that YouTube had removed a 2017-era video he had made and applied a copyright strike – three of which will get an account terminated – despite the fact that he had only used his own content in his video."

Monday, January 6, 2025

At the Intersection of A.I. and Spirituality; The New York Times, January 3, 2025

The New York Times; At the Intersection of A.I. and Spirituality

"For centuries, new technologies have changed the ways people worship, from the radio in the 1920s to television sets in the 1950s and the internet in the 1990s. Some proponents of A.I. in religious spaces have gone back even further, comparing A.I.’s potential — and fears of it — to the invention of the printing press in the 15th century.

Religious leaders have used A.I. to translate their livestreamed sermons into different languages in real time, blasting them out to international audiences. Others have compared chatbots trained on tens of thousands of pages of Scripture to a fleet of newly trained seminary students, able to pull excerpts about certain topics nearly instantaneously.

But the ethical questions around using generative A.I. for religious tasks have become more complicated as the technology has improved, religious leaders say. While most agree that using A.I. for tasks like research or marketing is acceptable, other uses for the technology, like sermon writing, are seen by some as a step too far."

Friday, January 3, 2025

U.S. Copyright Office to Begin Issuing Further AI Guidance in January 2025; The National Law Review, January 2, 2025

John Hines of The Sedona Conference, The National Law Review; U.S. Copyright Office to Begin Issuing Further AI Guidance in January 2025

"Parts 2 and 3, which have not yet been released, will be of heightened interest to content creators and to individuals and businesses involved in developing and deploying AI technologies. Ultimate regulatory and legislative determinations could materially recalibrate the scope of ownership and protection afforded to works of authorship, and the stakes are extremely high...

Part 2 of the report, which the Copyright Office expects to publish “after the New Year Holiday,” will address the copyrightability of AI-generated works, and more specifically, how the nature and degree of such use affects copyrightability and registrability. Current law is clear that to be copyrightable, a work must be created by a human. E.g., Thaler v. Perlmutter, 687 F. Supp. 3d 140 (D.D.C. 2023), on appeal. However, assistive tools are used in virtually all creation, from pencils to cameras to photo-editing software programs. In the context of registrability, the Copyright Office offered the following distinction in its March 2023 guidance: “[W]hether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.” In Part 2, the Copyright Office will have an additional opportunity to explore these and related issues – this time with the advantage of the many comments offered through the Notice of Inquiry process.

Part 3 of the report, which the Copyright Office anticipates releasing “in the first quarter of 2025,” will focus on issues associated with training data. AI models, depending on their size and scope, may train on millions of documents—many of which are copyrighted or copyrightable— acquired from the Internet or through acquisition of various robust databases. Users of “trained” AI technologies will typically input written prompts to generate written content or images, depending on the model (Sora is now available to generate video). The output is essentially a prediction based on a correlation of values in the model (extracted from the training data) and values that are derived from the user prompts.

Numerous lawsuits, perhaps most notably the case that The New York Times filed against Microsoft and OpenAI, have alleged that the use of data to train AI models constitutes copyright infringement. In many cases there may be little question of copying in the course of uploading data to train the models. Among a variety of issues, a core common issue will be whether the use of the data for training purposes is fair use. Content creators, of course, point to the fact that they have built their livelihoods and/or businesses around their creations and that they should be compensated for what is a violation of their exclusive rights."

Monday, December 30, 2024

Key IP Issues for the Next President and Congress to Tackle: AI and Patent Subject Matter Eligibility; IP Watchdog, December 29, 2024

RYAN J. MALLOY, IP Watchdog; Key IP Issues for the Next President and Congress to Tackle: AI and Patent Subject Matter Eligibility

"The debates surrounding the 2024 election focused on “hot button” issues like abortion, immigration, and transgender rights. But several important IP issues also loom over the next administration and Congress. These issues include AI-generated deepfakes, the use of copyrighted works for AI training, the patentability of AI-assisted inventions, and patent subject matter eligibility more generally. We might see President Trump and the 119th Congress tackle some or all of these issues in the next term."

Sunday, December 29, 2024

Copyright on the Calendar: Top Cases of 2024 and What to Watch in 2025; IP Watchdog, December 20, 2024

YAN SUN & JEFFREY HAJDIN & JOHANNA SCHMITT, IP Watchdog; Copyright on the Calendar: Top Cases of 2024 and What to Watch in 2025

"Copyright law provided an arena for some of the most interesting legal battles in 2024. We review some highlights from 2024 below, as well as some cases to watch in 2025."

Saturday, December 28, 2024

Overcoming AI’s Nagging Trust And Ethics Issues; Forbes, December 28, 2024

Joe McKendrick, Forbes; Overcoming AI’s Nagging Trust And Ethics Issues

"Trust and ethics in AI is what is making business leaders nervous. For example, at least 72% of executives responding to a recent surveyfrom the IBM Institute for Business Value say they “are willing to forgo generative AI benefits due to ethical concerns.” In addition, more than half (56%) indicate they are delaying major investments in generative AI until there is clarity on AI standards and regulations...

"Today, guardrails are a growing area of practice for the AI community given the stochastic nature of these models,” said Ross. “Guardrails can be employed for virtually any area of decisioning, from examining bias to preventing the leakage of sensitive data."...

The situation is not likely to change soon, Jeremy Rambarran, professor at Touro University Graduate School, pointed out. “Although the output that's being generated may be unique, depending on how the output is being presented, there's always a chance that part of the results may not be entirely accurate. This will eventually change down the road as algorithms are enhanced and could eventually be updated in an automated manner.”...

How can AI be best directed to be ethical and trustworthy? Compliance requirements, of course, will be a major driver of AI trust in the future, said Rambarran. “We need to ensure that AI-driven processes comply with ethical guidelines, legal regulations, and industry standards. Humans should be aware of the ethical implications of AI decisions and be ready to intervene when ethical concerns arise.”"

Friday, December 27, 2024

While the Court Fights Over AI and Copyright Continue, Congress and States Focus On Digital Replicas: 2024 in Review; Electronic Frontier Foundation (EFF), December 27, 2024

CORYNNE MCSHERRY, Electronic Frontier Foundation (EFF) ; While the Court Fights Over AI and Copyright Continue, Congress and States Focus On Digital Replicas: 2024 in Review

"These state laws are a done deal, so we’ll just have to see how they play out. At the federal level, however, we still have a chance to steer policymakers in the right direction.  

We get it–everyone should be able to prevent unfair and deceptive commercial exploitation of their personas. But expanded property rights are not the way to do it. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voice, it should take a precise, careful and practical approach that avoids potential collateral damage to free expression, competition, and innovation."

Friday, December 20, 2024

Conclusion of Copyright Office’s Report on Artificial Intelligence Delayed Until 2025; The National Law Review, December 19, 2024

Daniel J. Lass of Robinson & Cole LLP, The National Law Review; Conclusion of Copyright Office’s Report on Artificial Intelligence Delayed Until 2025

"This week, Director Shira Perlmutter indicated that the publication of part two of the U.S. Copyright Office’s three-part report on copyright issues raised by artificial intelligence (AI) would be further delayed. In her letter to the ranking members of the Senate Subcommittee on Intellectual Property and the House Subcommittee on Courts, Intellectual Property, and the Internet, Director Perlmutter indicated that although substantial progress had been made, the Office will not publish part two by the end of 2024 and now expects publication to occur in early 2025.

Part two of the report will describe the copyrightability of generative AI outputs and will build on part one of the report on digital replicas. Following the publication of part two, Director Perlmutter indicated that the third and final part would be published in the first quarter of 2025. Part three will relate to “analyzing the legal issues related to the ingestion of copyrighted works to train AI models, including licensing considerations and the allocation of potential liability.”"

Sunday, December 8, 2024

In Wisconsin, Professors Worry AI Could Replace Them; Inside Higher Ed, December 6, 2024

 Kathryn Palmer, Inside Higher Ed; In Wisconsin, Professors Worry AI Could Replace Them

"Faculty at the cash-strapped Universities of Wisconsin System are pushing back against a proposed copyright policy they believe would cheapen the relationship between students and their professors and potentially allow artificial intelligence bots to replace faculty members...

The policy proposal is not yet final and is open for public comment through Dec. 13...

Natalia Taft, an associate professor of biological sciences at the University of Wisconsin–Parkside who signed the open letter, told Inside Higher Ed that she believes the policy proposal “is part of the trend of the corporatization of academia.”...

Jane Ginsburg, a professor of literary and artistic property law at Columbia University School of Law, said the university has the law on its side. 

Under the 1976 Copyright Act, “course material prepared by employees, including professors, as part of their jobs comes within the definition of a ‘work made for hire,’ whose copyright vests initially in the employer (the University), not the employee (the professor).”"

Thursday, November 28, 2024

Fate of AI and Fair Use Copyright Report Will Depend on Judges; Bloomberg Law, November 26, 2024

Jorja Siemons, Bloomberg Law; Fate of AI and Fair Use Copyright Report Will Depend on Judges

"Federal courts—not the US Copyright Office—will have the final say on implementing any recommendations from the office’s forthcoming report on the fair use ramifications of training AI on protected works...

Judges across the country are presiding over roughly three dozen lawsuits against OpenAI Inc., Microsoft Corp., Meta Platforms Inc., Anthropic PBC, and other AI companies. How they apply that framework as they wrestle with novel legal questions remains to be seen—and it may be Congress that benefits the most from the Copyright Office’s analysis."

Thursday, November 14, 2024

Perlmutter Says Copyright Office Is Still Working to Meet ‘Ambitious Deadline’ for AI Report; IPWatchdog, November 14, 2024

EILEEN MCDERMOTT, IPWatchdog; Perlmutter Says Copyright Office Is Still Working to Meet ‘Ambitious Deadline’ for AI Report

"Asked by Subcommittee Chair Chris Coons (D-DE) what keeps her up at night when it comes to the AI issue, Perlmutter said “the speed at which this is all developing.” In September during IPWatchdog LIVE 2024, Perlmutter told LIVE attendees that while she’s confident the issues around copyright and AI will eventually be solved, she’s “less comfortable about what it means for humankind.”

Perlmutter recently came under fire from Committee on House Administration Chairman Bryan Steil (R-WI), who sent a letter on Tuesday, October 29, to the Office asking for an update on the AI report, which Steil charged is no longer on track to be published by its stated target dates. Steil’s letter asked the Office to explain the delay in issuance of parts two and three, which Register of Copyrights Shira Perlmutter indicated in an oversight hearing by the Committee on House Administration would be published before the end of the summer and in the fall, respectively. “The importance of these reports cannot be overstated,” Steil wrote, explaining that copyright owners are relying on the Office to provide clear guidance. “The absence of these reports creates uncertainty for industries that are already grappling with AI-related challenges and hinders lawmakers’ ability to craft effective policy,” the letter added.

Perlmutter commented in the hearing that “we’ve been trying to set and follow our own ambitious deadlines” and the goal remains to get the rest of the report out by the end of the year, but that her key concern is to be “accurate and thoughtful.”

The forthcoming reports will include recommendations on how to deal with copyrightability of materials created using GAI and the legal implications of training on copyrighted works. The latter is most controversial and may in fact require additional legislation focusing on transparency requirements."

Sunday, November 10, 2024

What’s Happening with AI and Copyright Law; JD Supra, November 4, 2024

AEON Law, JD Supra; What’s Happening with AI and Copyright Law

"Not surprisingly, a lot is happening at the intersection of artificial intelligence (AI) and intellectual property (IP) law.

Here’s a roundup of some recent developments in the area of copyright law and AI.

Copyright Office Denies AI Security Research Exemption under DMCA...

Former OpenAI Employee Says It Violates Copyright Law...

Blade Runner Production Company Sues Tesla for AI-Aided Copyright Infringement"

Thursday, November 7, 2024

‘I’m going to sue the living pants off them’: AI’s big legal showdown – and what it means for Dr Strange’s hair; The Guardian, November 6, 2024

The Guardian; ‘I’m going to sue the living pants off them’: AI’s big legal showdown – and what it means for Dr Strange’s hair

"“The intersection of generative AI and CGI image creation is the next wave.”

Now that wave is threatening to flood an unprepared industry, washing away jobs and certainties. How do people in the industry feel? To find out, I attended Trojan Horse Was a Unicorn (THU), a digital arts festival near Lisbon in Portugal. Now in its 10th year, THU is a place where young artists entering these industries, some 750 of them, come to meet, get inspired and learn from veterans in their fields: film-makers, animators, VFX wizards, concept artists, games designers. This year, AI is the elephant in the room. Everyone is either talking about it – or avoiding talking about it...

Andre Luis, the 43-year-old CEO and co-founder of THU, acknowledges that “the anxiety is here” at this year’s event, but rather than running away from it, he argues, artists should be embracing it. One of the problems now is that the people eagerly adopting AI are executives and managers. “They don’t understand how to use AI to accelerate creativity,” he says, “or to make things better for everyone, so it’s up to us [the artists] to teach them. You need people who actually are creative to use AI.”

Luis likens generative AI to ultra processed food: it cannot create anything new; it can only reconstitute what’s already there, turning it into an inferior product. “And a lot of companies are trying to make fast food,” he says. Many see AI as a way to churn out quick, cheap content, as opposed to higher quality fare that has been created “organically” over time, with loving human input...

The democratising potential of AI could usher in what Luis calls “a new era of indie” in films, games, TV. Just as digital technology put cameras, editing and graphics tools into the hands of many more people...

“AI is something that is here,” he tells the young creators at THU, “so you need to adapt. See the opportunities, see the problems, but understand that it can help you do things in a different way. You need to ask yourselves, ‘How can I be part of that?’"

Friday, November 1, 2024

AI Training Study to Come This Year, Copyright Office Says; Bloomberg Law, October 31, 2024

Annelise Gilbert, Bloomberg Law; AI Training Study to Come This Year, Copyright Office Says

"The Copyright Office’s report on the legal implications of training artificial intelligence models on copyrighted works is still expected to publish by the end of 2024, the office’s director told lawmakers.

Director Shira Perlmutter on Wednesday said the office aims to complete the remaining two sections of its three-part AI report in the next two months—one on the copyrightability of generative AI output and the other about liability, licensing, and fair use in regards to AI training on protected works."

Wednesday, October 30, 2024

A Harris Presidency Is the Only Way to Stay Ahead of A.I.; The New York Times, October 29, 2024

 THOMAS L. FRIEDMAN, The New York Times; A Harris Presidency Is the Only Way to Stay Ahead of A.I.

"Kamala Harris, given her background in law enforcement, connections to Silicon Valley and the work she has already done on A.I. in the past four years, is up to this challenge, which is a key reason she has my endorsement for the presidency...

I am writing a book that partly deals with this subject and have benefited from my tutorials with Craig Mundie, the former chief research and strategy officer for Microsoft who still advises the company. He is soon coming out with a book of his own related to the longer-term issues and opportunities of A.G.I., written with Eric Schmidt, the former Google C.E.O., and Henry Kissinger, who died last year and worked on the book right up to the end of his life.

It is titled “Genesis: Artificial Intelligence, Hope, and the Human Spirit.” The book invokes the Bible’s description of the origin of humanity because the authors believe that our A.I. moment is an equally fundamental turning point for our species.

I agree. We have become Godlike as a species in two ways: We are the first generation to intentionally create a computer with more intelligence than God endowed us with. And we are the first generation to unintentionally change the climate with our own hands.

The problem is we have become Godlike without any agreement among us on the Ten Commandments — on a shared value system that should guide the use of our newfound powers. We need to fix that fast. And no one is better positioned to lead that challenge than the next U.S. president, for several reasons."

Monday, October 7, 2024

Authors Guild to offer “Human Authored” label on books to compete with AI; Marketplace.org, October 7, 2024

Matt Levin, Marketplace.org; Authors Guild to offer “Human Authored” label on books to compete with AI

"The Authors Guild, the professional association representing published novelists and nonfiction writers, is set to offer to its 15,000 members a new certificate they can place directly on their book covers.

About the size of literary award stickers or celebrity book club endorsements adorning the cover art of the latest bestseller, the certificate is a simple, round logo with two boldfaced words inside: “Human Authored.”

As in, written by a human — and not artificial intelligence.

[Image: A round, gold stamp reads “Human Authored,” “Authors Guild.” (Courtesy The Authors Guild)]

“It isn’t just to prevent fraud and deception,” said Douglas Preston, a bestselling novelist and nonfiction writer and member of the Authors Guild Council. “It’s also a declaration of how important storytelling is to who we are as a species. And we’re not going to let machines elbow us aside and pretend to be telling us stories, when it’s just regurgitating literary vomitus.”"

Friday, October 4, 2024

Ethical uses of generative AI in the practice of law; Reuters, October 3, 2024

Thomson Reuters; Ethical uses of generative AI in the practice of law

"In the rapidly evolving landscape of legal technology, the integration of generative AI tools presents both unprecedented opportunities and significant ethical challenges. Ryan Groff, a distinguished member of the Massachusetts Bar and a lecturer at New England Law, explores these dimensions in his enlightening webinar, “Ethical Uses of Generative AI in the Practice of Law.” 

In the webinar, Ryan Groff discusses the ethical implications of using generative AI (GenAI) in legal practices, tracing the history of GenAI applications in law and distinguishing between various AI tools available today.  He provides an insightful overview of the historical application of GenAI in legal contexts and differentiates the various AI tools currently available. Groff emphasizes that while AI can enhance the efficiency of legal practices, it should not undermine the critical judgment of lawyers. He underscores the importance of maintaining rigorous supervision, safeguarding client confidentiality, and ensuring technological proficiency."