Wednesday, November 5, 2025

Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI; ZME Science, November 4, 2025

Tudor Tarita, ZME Science; Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI


[Kip Currier: This is a troubling, eye-opening report by Originality.ai on AI-generated books proliferating on Amazon in the sub-area of "herbal remedies". As a ZME Science article on the report suggests, if this is the state of herbal books on the world's largest bookseller platform, what is the state of other book areas and genres?

The lack of transparency and authenticity vis-à-vis AI-generated books is deeply concerning. If a potential book buyer knows that a book is principally or wholly "authored" by AI and still elects to purchase it with that knowledge, that's their choice. But, as the Originality.ai report identifies, potential book buyers are being presented with fake author names on AI-generated books and are not being informed, by either the purveyors of those books or the platforms that make them available for purchase, that the works are not written by human experts and authors. That is a deceptive business practice and consumer fraud.

Consumers should have the right to know material information about all products in the marketplace. No one (except bad actors) would countenance children's toys that deceptively contain harmful lead, or dog and cat treats made with substances that can cause harm or death. Why should consumers not be similarly concerned about books that purport to be created by human experts but may contain information that can cause harm, and even death, in some cases?

Myriad ethical and legal questions are implicated, such as:

  • What are the potential harms of AI-generated books that falsely pose as human authors?
  • What responsibility do platforms like Amazon have for fake products?
  • What responsibility do platforms like Amazon have for AI-generated books?
  • What do you as a consumer want to know about books that are available for purchase on platforms like Amazon?
  • What are the potential short-term and long-term implications of AI-generated books posing as human authors for consumers, authors, publishers, and societies?]


[Excerpt]

"At the top of Amazon’s “Herbal Remedies” bestseller list, The Natural Healing Handbook looked like a typical wellness guide. With leafy cover art and promises of “ancient wisdom” and “self-healing,” it seemed like a harmless book for health-conscious readers.

But “Luna Filby”, the Australian herbalist credited with writing the book, doesn’t exist.

A new investigation from Originality.ai, a company that develops tools to detect AI-generated writing, reveals that The Natural Healing Handbook and hundreds of similar titles were likely produced by artificial intelligence. The company scanned 558 paperback titles published in Amazon’s “Herbal Remedies” subcategory in 2025 and found that 82% were likely written by AI.

“We inputted Luna’s author biography, book summary, and any available sample pages,” the report states. “All came back flagged as likely AI-generated with 100% confidence.”

A Forest of Fakes

It’s become hard (sometimes, almost impossible) to distinguish whether something is written by AI. So there’s often a sliver of a doubt. But according to the report, The Natural Healing Handbook is part of a sprawling canopy of probable AI-generated books. Many of them are climbing Amazon’s rankings, often outselling work by real writers...

Where This Leaves Us

AI is flooding niches that once relied on careful expertise and centuries of accumulated knowledge. Real writers are being drowned out by machines regurgitating fragments of folklore scraped from the internet.

“This is a damning revelation of the sheer scope of unlabeled, unverified, unchecked, likely AI content that has completely invaded [Amazon’s] platform,” wrote Michael Fraiman, author of the Originality.ai report.

The report looked at herbal books, but there are likely many other niches hidden...

Amazon’s publishing model allows self-published authors to flood categories for profit. And now, AI tools make it easier than ever to generate convincing, although hollow, manuscripts. Every new “Luna Filby” who hits #1 proves that the model still works.

Unless something changes, we may be witnessing the quiet corrosion of trust in consumer publishing."

Tuesday, November 4, 2025

Professors Press SCOTUS to Affirm Copyright Protection for AI-Created Works; IP Watchdog, November 3, 2025

ROSE ESFANDIARI, IP Watchdog; Professors Press SCOTUS to Affirm Copyright Protection for AI-Created Works

"On Friday, October 31, Professors Shlomit Yanisky-Ravid, Lawrence Lessig and a number of other professors and researchers filed an amicus brief with the U.S. Supreme Court in support of Dr. Stephen Thaler’s petition for a writ of certiorari in Thaler v. Perlmutter, urging the Court to grant certiorari and recognize copyright protection for works generated by artificial intelligence (AI).

The brief argued that “excluding AI-generated works from copyright protection threatens the foundations of American creativity, innovation, and economic growth,” warning that the lower court’s interpretation, which requires human authorship, disregards the “spirit of the Copyright Act.”"

AI firm wins high court ruling after photo agency’s copyright claim; The Guardian, November 4, 2025

The Guardian; AI firm wins high court ruling after photo agency’s copyright claim

"A London-based artificial intelligence firm has won a landmark high court case examining the legality of AI models using vast troves of copyrighted data without permission.

Stability AI, whose directors include the Oscar-winning film-maker behind Avatar, James Cameron, successfully resisted a claim from Getty Images that it had infringed the international photo agency’s copyright.

The ruling is seen as a blow to copyright owners’ exclusive right to reap the rewards of their work, with one senior lawyer, Rebecca Newman, a legal director at Addleshaw Goddard, warning it means “the UK’s secondary copyright regime is not strong enough to protect its creators”."

Monday, November 3, 2025

With AI technology rapidly advancing, ethics must evolve as well - opinion; The Jerusalem Post, November 2, 2025

AVI JORISCH, The Jerusalem Post; With AI technology rapidly advancing, ethics must evolve as well - opinion

"Wisdom over intelligence

Whether this century becomes our renaissance or our ruin will depend on a quality that can’t be coded or automated: wisdom.

For all our progress, we have not yet learned to match our technological power with moral imagination. We’ve achieved abundance without equilibrium, connection without community, knowledge without humility. The danger isn’t that machines will become more human – it’s that humans will become more machine-like, optimizing for speed and efficiency while forgetting the soul.

Humanity’s story has always been shaped by invention. The wheel, the compass, the printing press, the microchip – each expanded what we could do, but not necessarily who we could be. The Industrial Revolution lifted billions from poverty, yet it also gave us the mechanized wars of the twentieth century. Nuclear energy promised limitless power – and then birthed Hiroshima.

Today, as we stand on the edge of quantum computing, gene editing, and artificial general intelligence, the pattern repeats. The tools evolve. The ethics lag behind.

We need a new kind of moonshot – not just of science, but of spirit."

Japanese Companies Tell OpenAI to Stop Infringing On Its IP; Gizmodo, November 2, 2025

Gizmodo; Japanese Companies Tell OpenAI to Stop Infringing On Its IP

"The Content Overseas Distribution Association (CODA), which represents several major Japanese entertainment companies such as TV studio Toei and game developer Square Enix, recently sent a written request calling on OpenAI to end its unauthorized use of their IP to train its recently launched Sora 2 generative AI.

Nearly 20 co-signers have accused the tech company of copyright infringement, alleging a “large portion” of Sora 2 content “closely resembles Japanese content or images [as] a result of using Japanese content as machine learning data.” The letter mentioned OpenAI’s policy of using copyrighted works unless the owner explicitly asks to opt out, but argues under Japanese law, it should instead be an opt-in system, since permission for copyrighted works is generally required beforehand."

Internet Archive’s legal fights are over, but its founder mourns what was lost; Ars Technica, November 3, 2025

ASHLEY BELANGER, Ars Technica; Internet Archive’s legal fights are over, but its founder mourns what was lost

"This month, the Internet Archive’s Wayback Machine archived its trillionth webpage, and the nonprofit invited its more than 1,200 library partners and 800,000 daily users to join a celebration of the moment. To honor “three decades of safeguarding the world’s online heritage,” the city of San Francisco declared October 22 to be “Internet Archive Day.” The Archive was also recently designated a federal depository library by Sen. Alex Padilla (D-Calif.), who proclaimed the organization a “perfect fit” to expand “access to federal government publications amid an increasingly digital landscape.”

The Internet Archive might sound like a thriving organization, but it only recently emerged from years of bruising copyright battles that threatened to bankrupt the beloved library project. In the end, the fight led to more than 500,000 books being removed from the Archive’s “Open Library.”

“We survived,” Internet Archive founder Brewster Kahle told Ars. “But it wiped out the Library.”

An Internet Archive spokesperson confirmed to Ars that the archive currently faces no major lawsuits and no active threats to its collections. Kahle thinks “the world became stupider” when the Open Library was gutted—but he’s moving forward with new ideas."

Sunday, November 2, 2025

DOJ faces ethics nightmare with Trump bid for $230M settlement; The Hill, October 31, 2025

 REBECCA BEITSCH, The Hill; DOJ faces ethics nightmare with Trump bid for $230M settlement


[Kip Currier: This real-life "nightmare" scenario is akin to a hypothetical law school exam fact pattern with scores of ethics issues for law students to identify and discuss. Would that it were a fictitious set of facts.

If Trump's former personal attorneys, who are now in the top DOJ leadership, will not recuse themselves due to genuine conflicts of interest and appearances of impropriety, will the state and federal bar associations, which license these attorneys and hold them to annual continuing legal and ethics education requirements to remain in good standing, step in to scrutinize these lawyers' potential ethical lapses?

These unprecedented actions by Trump must not be treated as normal. Similarly, if Trump's former personal attorneys approve Trump's attempt to "shake down" the federal government and American taxpayers, their ethically dubious actions as DOJ leaders and officers of the court must not be normalized by the organizations that are charged to enforce ethical standards for all licensed attorneys.

Moreover, approval of this settlement would be damaging to the rule of law and to public trust in the rule of law. If the most powerful person on the planet can demand that an organization -- whose leadership reports to him -- pay out a "settlement" for lawfully conducted actions and proceedings in a prior administration, what does that say about the state of justice in the U.S.? I posit that it would reveal a justice system that has been utterly corrupted and that no longer applies its laws and ethical standards equally. No person is, or should be, above the law in our American system of government and checks and balances. Not even the U.S. President, despite the Roberts Court's controversial July 2024 ruling in Trump v. U.S. recognizing absolute and presumptive Presidential immunity for certain official acts.

Finally, a few words about "speaking out" and "standing up". It is vital for those in leadership positions to call out actions like the ones at hand, which arguably undermine the rule of law and incrementally move this country from a democratically centered nation toward an autocratic state like Russia. I searched for and could find no statement by the American Bar Association (ABA) on this matter, a matter that is clearly relevant to its membership, of which I count myself a member.

Will the ABA and other legal organizations share their voices on these matters that have such far-reaching implications for the rule of law and our nearly 250-year democratic experiment?

The paperback version of my Bloomsbury book, Ethics, Information, and Technology, becomes available on November 13, and I intentionally included a substantial professional and character ethics section at the outset of the book because those principles are so integral to how we conduct ourselves in all areas of our lives: ethics precepts and values like integrity, attribution, truthfulness and avoidance of misrepresentation, transparency, accountability, and disclosure of conflicts of interest, as well as recusal when we have such conflicts.]


[Excerpt]

"The Department of Justice (DOJ) is facing pressure to back away from a request from President Trump for a $230 million settlement stemming from his legal troubles, as critics say it raises a dizzying number of ethical issues.

Trump has argued he deserves compensation for the scrutiny into his conduct, describing himself as a victim of both a special counsel investigation into the 2016 election and the classified documents case.

The decision, however, falls to a cadre of attorneys who previously represented Trump personally.

Rupa Bhattacharyya, who reviewed settlement requests in her prior role as director of the Torts Branch of the DOJ’s Civil Division, said most agreements approved by the department are typically for tens of thousands of dollars or at most hundreds of thousands.

“In the ordinary course, the filing of administrative claims is required. So that’s not unusual. In the ordinary course, a relatively high damages demand on an administrative claim is also not that unusual. What is unusual here is the fact that the president is making a demand for money from his own administration, which raises all sorts of ethical problems,” Bhattacharyya told The Hill.

“It’s also just completely unheard of. There’s never been a case where the president of the United States would ask the department that he oversees to make a decision in his favor that would result in millions of dollars lining his own pocket at the expense of the American taxpayer.”

It’s the high dollar amount Trump is seeking that escalates the decision to the top of the department, leaving Deputy Attorney General Todd Blanche, as well as Associate Attorney General Stanley Woodward, to consider the request."

Saturday, November 1, 2025

Universal Music settles copyright dispute with AI firm Udio; Reuters, October 29, 2025

Reuters; Universal Music settles copyright dispute with AI firm Udio

"Universal Music Group said on Wednesday it has settled a copyright infringement case with artificial intelligence company Udio and that the two firms will collaborate on a new suite of creative products.

Under the agreement, the companies will launch a platform next year that leverages generative AI trained on authorized and licensed music.

UMG Chairman Sir Lucian Grainge said the agreements “demonstrate our commitment to do what's right by our artists and songwriters, whether that means embracing new technologies, developing new business models, diversifying revenue streams or beyond.”"

‘Progressive’ Tech Group Asks Trump to Block AI Copyright Cases; The American Prospect, October 31, 2025

 DAVID DAYEN, The American Prospect; ‘Progressive’ Tech Group Asks Trump to Block AI Copyright Cases

"The Chamber of Progress, a self-styled “progressive” industry trade group supported by most of the biggest tech platforms, has urged the Trump administration to intervene in a litany of copyright cases involving artificial intelligence firms, to try to stop authors and publishers from having their work used for training AI models without permission.

The pleading comes as Anthropic prepares to pay authors $1.5 billion, the largest award in the history of copyright law, for pirating their work, in a settlement announced last month. OpenAI, Microsoft, Google, and Meta are named defendants in the more than 50 active lawsuits over AI intellectual-property theft.

In a letter to Michael Kratsios, the lead science adviser to President Trump, the Chamber of Progress estimates that AI companies could be liable under the Copyright Act for up to $1.5 trillion for stealing copyrighted work on which to train their models. The letter’s authors claim that this represents “an existential risk” to AI companies, and that the cases should be tossed out under a “fair use” standard.

The Chamber of Progress’s campaign to promote fair use, which they have created a campaign around called “Generate and Create,” comes as at least three of the nonprofit organization’s past or current backers are being sued over copyright claims: Meta, Google, and the AI art generator Midjourney. Another current funder, Nvidia, relies heavily on AI development for its continued success, and venture capital firm a16z, with several AI startups in its portfolio, also funds the nonprofit."

Friday, October 31, 2025

New Book by José Marichal, California Lutheran University; You Must Become an Algorithmic Problem: Renegotiating the Socio-Technical Contract

 

New Book by José Marichal, California Lutheran University; You Must Become an Algorithmic Problem: Renegotiating the Socio-Technical Contract

Description:

"In the age of AI, where personal data fuels corporate profits and state surveillance, what are the implications for democracy?

This incisive book explores the unspoken agreement we have with tech companies. In exchange for reducing the anxiety of an increasingly complex online world, we submit to algorithmic classification and predictability. This reduces incentives for us to become “algorithmic problems,” with dire consequences for liberal democracy. [Marichal] calls for a movement to demand that algorithms promote play, creativity and potentiality rather than conformity.

This is a must-read for anyone navigating the intersection of technology, politics and identity in an increasingly data-driven world."

Photographer sues Rice after Coffeehouse uses copyrighted muffin photo; Rice Thresher, October 28, 2025

Rice Thresher; Photographer sues Rice after Coffeehouse uses copyrighted muffin photo


[Kip Currier: Another cautionary tale about not "scraping" stuff from the Internet and using it for one's commercial business.

(Oh, wait...that's what Big Tech did with billions of copyrighted works they "scraped" from the Web and pirate book libraries to develop their proprietary IP-protected AI products, too, right?)

At any rate, this "federal copyright-registered blueberry muffin photo" story is another reminder that it'll save you money, time, and possible reputational damage to do one of the following: take your own photo of a blueberry muffin; hire someone to do it; license a photo; or ask permission to use it.

And if you do use a photo for your business venture, give the photographer/copyright creator attribution...

Like this photo I took of my mother's delicious Iron Skillet Blueberry Scones!]



[Excerpt]

"A lawsuit has been filed against Rice, accusing Rice Coffeehouse of using a copyrighted photograph without permission to advertise on their Instagram page and website. 

The Oct. 17 complaint was filed on behalf of Meggan Hill, a photographer and chef who posts her recipes online. The photo in question is a blueberry muffin photo, featured on Hill’s website with an accompanying recipe.

The lawsuit was filed in a Houston federal district court by Hill’s attorney, Layla Nguyen of SRipLaw, an intellectual property law firm based in Boca Raton, Florida. According to the case briefing, Hill tried to notify the university of her allegations twice before suing but received no response. 

“To date, the parties have failed to resolve this matter,” the briefing reads. “Rice failed to respond to any communications.”

The lawsuit alleges that Chaus copied Hill’s photograph titled “Blueberry Muffins - Culinary Hill 1200x800” from the internet and displayed it online to promote the addition of blueberry muffins to their menu. 

In the lawsuit, Hill claims she discovered the photograph on the Chaus website on Jan. 7 — over a year after she registered the image with the Register of Copyrights. Hill allegedly reached out to Rice with her complaint on Jan. 29 and Feb. 28 before filing the suit."

ChatGPT came up with a 'Game of Thrones' sequel idea. Now, a judge is letting George RR Martin sue for copyright infringement.; Business Insider, October 28, 2025

Business Insider; ChatGPT came up with a 'Game of Thrones' sequel idea. Now, a judge is letting George RR Martin sue for copyright infringement.

"When a federal judge decided to allow a sprawling class-action lawsuit against OpenAI to move forward, he read some "Game of Thrones" fan fiction.

In a court ruling Monday, US District Judge Sidney Stein said a ChatGPT-generated idea for a book in the still-unfinished "A Song of Ice and Fire" series by George R.R. Martin could have violated the author's copyright.

"A reasonable jury could find that the allegedly infringing outputs are substantially similar to plaintiffs' works," the judge said in the 18-page Manhattan federal court ruling."

Are We Losing Our Democracy?; The New York Times, October 31, 2025

 The Editorial Board, The New York Times; Are We Losing Our Democracy?

"Countries that slide from democracy toward autocracy tend to follow similar patterns. To measure what is happening in the United States, the Times editorial board has compiled a list of 12 markers of democratic erosion, with help from scholars who have studied this phenomenon. The sobering reality is that the United States has regressed, to different degrees, on all 12.

Our country is still not close to being a true autocracy, in the mold of Russia or China. But once countries begin taking steps away from democracy, the march often continues. We offer these 12 markers as a warning of how much Americans have already lost and how much more we still could lose."

Thursday, October 30, 2025

As Trump Weighs Sale of Advanced A.I. Chips to China, Critics Sound Alarm; The New York Times, October 29, 2025

Ana Swanson, The New York Times; As Trump Weighs Sale of Advanced A.I. Chips to China, Critics Sound Alarm

"Mr. Trump’s comments signaled a major potential change for U.S. policy that many Washington officials warn poses a national security risk. Selling such advanced A.I. chips to China is currently banned, and U.S. officials have worked for years to restrain Beijing’s access to the cutting-edge technology.

The president’s reversal, if it comes to pass, would have widespread implications. Nvidia, which has emphasized the importance of maintaining access to the Chinese market, would reap new sales. But critics have argued that A.I. technology is important enough to potentially shift the balance of power in a strategic competition between the United States and China."

AI psychosis is a growing danger. ChatGPT is moving in the wrong direction; The Guardian, October 28, 2025

The Guardian; AI psychosis is a growing danger. ChatGPT is moving in the wrong direction


[Kip Currier: Note this announcement that OpenAI's Sam Altman made on October 14. It's billionaire CEO-speak for "acceptable risk", i.e. "The level of potential losses a society or community considers acceptable given existing social, economic, political, cultural, technical, and environmental conditions." https://inee.org/eie-glossary/acceptable-risk 

Translation: Altman's conflict-of-interest-riven assessment that AI's benefits outweigh a growing corpus of evidence documenting the risks and harms of AI to the mental health of young children, teens, and adults.]


[Excerpt]

"On 14 October 2025, the CEO of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it says, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.

Researchers have identified 16 cases in the media this year of individuals developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. In addition to these is the now well-known case of a 16-year-old who died by suicide after discussing his plans extensively with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, that’s not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, if we accept this framing, are independent of ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently introduced)."

Teenage boys using ‘personalised’ AI for therapy and romance, survey finds; The Guardian, October 30, 2025

The Guardian; Teenage boys using ‘personalised’ AI for therapy and romance, survey finds

"“Young people are using it a lot more like an assistant in their pocket, a therapist when they’re struggling, a companion when they want to be validated, and even sometimes in a romantic way. It’s that personalisation aspect – they’re saying: it understands me, my parents don’t.”

The research, based on a survey of boys in secondary education across 37 schools in England, Scotland and Wales, also found that more than half (53%) of teenage boys said they found the online world more rewarding than the real world.

The Voice of the Boys report says: “Even where guardrails are meant to be in place, there’s a mountain of evidence that shows chatbots routinely lie about being a licensed therapist or a real person, with only a small disclaimer at the bottom saying the AI chatbot is not real.”"

Character.AI bans users under 18 after being sued over child’s suicide; The Guardian, October 29, 2025

The Guardian; Character.AI bans users under 18 after being sued over child’s suicide

"The chatbot company Character.AI will ban users 18 and under from conversing with its virtual companions beginning in late November after months of legal scrutiny.

The announced change comes after the company, which enables its users to create characters with which they can have open-ended conversations, faced tough questions over how these AI companions can affect teen and general mental health, including a lawsuit over a child’s suicide and a proposed bill that would ban minors from conversing with AI companions.

“We’re making these changes to our under-18 platform in light of the evolving landscape around AI and teens,” the company wrote in its announcement. “We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly.”

Last year, the company was sued by the family of 14-year-old Sewell Setzer III, who took his own life after allegedly developing an emotional attachment to a character he created on Character.AI. His family laid blame for his death at the feet of Character.AI and argued the technology was “dangerous and untested”. Since then, more families have sued Character.AI and made similar allegations. Earlier this month, the Social Media Law Center filed three new lawsuits against the company on behalf of children who have either died by suicide or otherwise allegedly formed dependent relationships with its chatbots."

Wednesday, October 29, 2025

Big Tech Makes Cal State Its A.I. Training Ground; The New York Times, October 26, 2025

 

The New York Times; Big Tech Makes Cal State Its A.I. Training Ground

"Cal State, the largest U.S. university system with 460,000 students, recently embarked on a public-private campaign — with corporate titans including Amazon, OpenAI and Nvidia — to position the school as the nation’s “first and largest A.I.-empowered” university. One central goal is to make generative A.I. tools, which can produce humanlike texts and images, available across the school’s 22 campuses. Cal State also wants to embed chatbots in teaching and learning, and prepare students for “increasingly A.I.-driven”careers.

As part of the effort, the university is paying OpenAI $16.9 million to provide ChatGPT Edu, the company’s tool for schools, to more than half a million students and staff — which OpenAI heralded as the world’s largest rollout of ChatGPT to date. Cal State also set up an A.I. committee, whose members include representatives from a dozen large tech companies, to help identify the skills California employers need and improve students’ career opportunities."

Federal judge says Texas law requiring book ratings is unconstitutional; KUT News, October 22, 2025

  Bill Zeeble, KUT News; Federal judge says Texas law requiring book ratings is unconstitutional

"The 2023 Texas law requiring booksellers and publishers to rate their books based on sexual content and references has been declared unconstitutional in a Waco court.

A federal judge on Tuesday declared House Bill 900, also known as the READER Act, violates the Constitution. The ruling makes permanent a lower court's temporary injunction that the Fifth Circuit Court of Appeals later upheld.

The law firm Haynes Boone, which represented the coalition of plaintiffs that sued to block the law, said in a statement the ruling is a "major First Amendment victory."

"The READER Act would have imposed impossible obligations on booksellers and limited access to literature, including classic works, for students across Texas," attorney Laura Lee Prather said in the statement.

HB 900 sought to restrict which books are available in school libraries and required booksellers to rate their own books based on sexual content. The Texas Education Agency could have overridden the ratings to prevent school libraries from obtaining books."

Tuesday, October 28, 2025

Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users; Santa Clara University, Friday, November 7, 2025 12 Noon PST, 3 PM EST

Santa Clara University; Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users

"A number of recent articles, in The New York Times and elsewhere, have described the experience of “chatbot psychosis” that some people develop as they interact with services like ChatGPT. What do we know about chatbot psychosis? Is there a trend of such psychosis at scale? What do you learn if you sift through over one million words comprising one such experience? And what are some practical steps that companies can take to protect their users and reduce the risk of such episodes?

A computer scientist with a background in economics, Steven Adler started to focus on AI risk topics (and AI broadly) a little over a decade ago, and worked at OpenAI from late 2020 through 2024, leading various safety-related research projects and products there. He now writes about what’s happening in AI safety, and argues that safety and technological progress can very much complement each other, and in fact require each other, if the goal is to unlock the uses of AI that people want."

OpenAI loses bid to dismiss part of US authors' copyright lawsuit; Reuters, October 28, 2025

Reuters; OpenAI loses bid to dismiss part of US authors' copyright lawsuit

"A New York federal judge has denied OpenAI's early request to dismiss authors' claims that text generated by OpenAI's artificial intelligence chatbot ChatGPT infringes their copyrights.

U.S. District Judge Sidney Stein said on Monday that the authors may be able to prove the text ChatGPT produces is similar enough to their work to violate their book copyrights."

Monday, October 27, 2025

Trump Asks Supreme Court to Let Him Fire the Top Copyright Official; The New York Times, October 27, 2025

The New York Times; Trump Asks Supreme Court to Let Him Fire the Top Copyright Official

"The Trump administration has asked the Supreme Court to allow the president to remove the government’s top copyright official after a lower court allowed her to remain in her post that is part of the Library of Congress.

President Trump ordered the removal in May of Shira Perlmutter, the register of copyrights, along with the librarian of Congress, Carla Hayden, who did not challenge her dismissal.

The Supreme Court’s conservative majority has repeatedly allowed Mr. Trump to fire the leaders of independent agencies even as they fight their dismissals in court, allowing him to seize greater control of the federal bureaucracy.

The administration made the request after a divided panel of the U.S. Court of Appeals for the D.C. Circuit sided with Ms. Perlmutter, the head of the U.S. Copyright Office. The majority said the register is unique within the legislative branch and that her role is to advise Congress on issues related to copyright."