Sunday, November 9, 2025

The AI spending frenzy is so huge that it makes no sense; The Washington Post, November 7, 2025

The Washington Post; The AI spending frenzy is so huge that it makes no sense

" In just the past year, the four richest companies developing AI — Microsoft, Google, Amazon and Meta — have spent roughly $360 billion combined for big-ticket projects, which included building AI data centers and stuffing them with computer chips and equipment, according to my analysis of financial disclosures.

(Amazon founder Jeff Bezos owns The Washington Post.)

That same amount of money could pay for about four years’ worth of the Supplemental Nutrition Assistance Program (SNAP), the federal government program that distributes more than $90 billion in yearly food assistance to 42 million Americans. SNAP benefits are in limbo for now during the government shutdown...

Eight of the world’s top 10 most valuable companies are AI-centric or AI-ish American corporate giants — Nvidia, Apple, Microsoft, Google, Amazon, Broadcom, Meta and Tesla. That’s according to tallies from S&P Global Market Intelligence based on the total price of the companies’ stock held by investors."


A.I. Is on Its Way to Something Even More Remarkable Than Intelligence; The New York Times, November 8, 2025

Barbara Gail Montero, The New York Times; A.I. Is on Its Way to Something Even More Remarkable Than Intelligence

"Some worry that if A.I. becomes conscious, it will deserve our moral consideration — that it will have rights, that we will no longer be able to use it however we like, that we might need to guard against enslaving it. Yet as far as I can tell, there is no direct implication from the claim that a creature is conscious to the conclusion that it deserves our moral consideration. Or if there is one, a vast majority of Americans, at least, seem unaware of it. Only a small percentage of Americans are vegetarians."

Saturday, November 8, 2025

Stability AI’s legal win over Getty leaves copyright law in limbo; The Verge, November 5, 2025

Robert Hart, The Verge; Stability AI’s legal win over Getty leaves copyright law in limbo

"Stability AI, the creator of popular AI art tool Stable Diffusion, was largely victorious against Getty Images on Tuesday in a British legal battle over the material used to train AI models. The case originally looked set to produce a landmark ruling on AI and copyright in the UK, but it landed with a thud and failed to set any clear precedent for the big question dividing AI companies and creative firms: whether AI models need permission to train on copyrighted works.

The case, first filed in 2023, is the first major AI copyright claim to reach England’s High Court, though the verdict offers little clarity to other AI companies and rightsholders."

Friday, November 7, 2025

To Preserve Records, Homeland Security Now Relies on Officials to Take Screenshots; The New York Times, November 6, 2025

The New York Times; To Preserve Records, Homeland Security Now Relies on Officials to Take Screenshots


[Kip Currier: This new discretionary DHS records policy is counter to sound ethics practices and democracy-centered values.

Preservation of records promotes transparency, the historical record, accountability, access to information, informed citizenries, the right to petition one's government, free and independent presses, and more. The new DHS records policy undermines all of the above.]



[Excerpt]

"The Department of Homeland Security has stopped using software that automatically captured text messages and saved trails of communication between officials, according to sworn court statements filed this week.

Instead, the agency began in April to require officials to manually take screenshots of their messages to comply with federal records laws, citing cybersecurity concerns with the autosave software.

Public records experts say the new record-keeping policy opens ample room for both willful and unwitting noncompliance with federal open records laws in an administration that has already shown a lack of interest in, or willingness to skirt, records laws. That development could be particularly troubling as the department executes President Trump’s aggressive agenda of mass deportations, a campaign that has included numerous accusations of misconduct by law enforcement officials, the experts said.

“If you are an immigration official or an agent and believe that the public might later criticize you, or that your records could help you be held accountable, would you go out of the way to preserve those records that might expose wrongdoing?” said Lauren Harper, who advocates government transparency at the Freedom of the Press Foundation."

The ethics of AI, from policing to healthcare; KPBS, November 3, 2025

Jade Hindmon / KPBS Midday Edition Host, Ashley Rusch / Producer, KPBS; The ethics of AI, from policing to healthcare

"Artificial intelligence is everywhere — from our office buildings, to schools and government agencies.

The Chula Vista Police Department is joining cities to use AI to write police reports. Several San Diego County police departments also use AI-powered drones to support their work. 

Civil liberties advocates are concerned about privacy, safety and surveillance. 

On Midday Edition, we sit down with an expert in AI ethics to discuss the philosophical questions of responsible AI.

Guest:

  • David Danks, professor of data science, philosophy and policy at UC San Diego"

Thursday, November 6, 2025

Staying Human in the Age of AI; Duquesne University, Grefenstette Center for Ethics, November 6-7, 2025

Duquesne University, Grefenstette Center for Ethics; 2025 Tech Ethics Symposium: Staying Human in the Age of AI

"The Grefenstette Center for Ethics is excited to announce our sixth annual Tech Ethics Symposium, Staying Human in the Age of AI, which will be held in person at Duquesne University's Power Center and livestreamed online. This year's event will feature internationally leading figures in the ongoing discussion of ethical and responsible uses of AI. The two-day Symposium is co-sponsored by the Patricia Doherty Yoder Institute for Ethics and Integrity in Journalism and Media, the Center for Teaching Excellence, and the Albert P. Viragh Institute for Ethics in Business.

We are excited to once again host a Student Research Poster Competition at the Symposium. All undergraduate and graduate student research posters on any topic in the area of tech/digital/AI ethics are welcome. Accepted posters will be awarded $75 to offset printing costs. In addition to that award, undergraduate posters will compete for the following prizes: the Outstanding Researcher Award, the Ethical PA Award, and the Pope Francis Award. Graduate posters can win Grand Prize or Runner-Up. All accepted posters are eligible for an Audience Choice award, to be decided by Symposium attendees on the day of the event! Student Research Poster submissions will be due Friday, October 17. Read the full details of the 2025 Student Research Poster Competition.

The Symposium is free to attend and open to all university students, faculty, and staff, as well as community members. Registrants can attend in person or experience the Symposium via livestream. Registration is now open!"

Wednesday, November 5, 2025

Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI; ZME Science, November 4, 2025

Tudor Tarita, ZME Science; Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI


[Kip Currier: This is a troubling, eye-opening report by Originality.ai on AI-generated books proliferating on Amazon in the sub-area of "herbal remedies". As a ZME Science article on the report suggests, if this is the state of herbal books on the world's largest bookseller platform, what is the state of other book areas and genres?

The lack of transparency and authenticity vis-a-vis AI-generated books is deeply concerning. If a potential book buyer knows that a book is principally or wholly "authored" by AI and still elects to purchase it with that knowledge, that's their choice. But, as the Originality.ai report identifies, potential book buyers are being presented with fake author names on AI-generated books and are not being informed by the purveyors of those books, or the platforms that make them available for purchase, that the works are not written by human experts and authors. That is a deceptive business practice and consumer fraud.

Consumers should have the right to know material information about all products in the marketplace. No one (except bad actors) would countenance children's toys that deceptively contain harmful lead, or dog and cat treats made with substances that can cause harm or death. Why should consumers not be similarly concerned about books that purport to be created by human experts but may contain information that can cause harm, and in some cases even death?

Myriad ethical and legal questions are implicated, such as:

  • What are the potential harms of AI-generated books that falsely pose as human authors?
  • What responsibility do platforms like Amazon have for fake products?
  • What responsibility do platforms like Amazon have for AI-generated books?
  • What do you as a consumer want to know about books that are available for purchase on platforms like Amazon?
  • What are the potential short-term and long-term implications of AI-generated books posing as human authors for consumers, authors, publishers, and societies?]


[Excerpt]

"At the top of Amazon’s “Herbal Remedies” bestseller list, The Natural Healing Handbook looked like a typical wellness guide. With leafy cover art and promises of “ancient wisdom” and “self-healing,” it seemed like a harmless book for health-conscious readers.

But “Luna Filby”, the Australian herbalist credited with writing the book, doesn’t exist.

A new investigation from Originality.ai, a company that develops tools to detect AI-generated writing, reveals that The Natural Healing Handbook and hundreds of similar titles were likely produced by artificial intelligence. The company scanned 558 paperback titles published in Amazon’s “Herbal Remedies” subcategory in 2025 and found that 82% were likely written by AI.

“We inputted Luna’s author biography, book summary, and any available sample pages,” the report states. “All came back flagged as likely AI-generated with 100% confidence.”

A Forest of Fakes

It’s become hard (sometimes, almost impossible) to distinguish whether something is written by AI. So there’s often a sliver of a doubt. But according to the report, The Natural Healing Handbook is part of a sprawling canopy of probable AI-generated books. Many of them are climbing Amazon’s rankings, often outselling work by real writers...

Where This Leaves Us

AI is flooding niches that once relied on careful expertise and centuries of accumulated knowledge. Real writers are being drowned out by machines regurgitating fragments of folklore scraped from the internet.

“This is a damning revelation of the sheer scope of unlabeled, unverified, unchecked, likely AI content that has completely invaded [Amazon’s] platform,” wrote Michael Fraiman, author of the Originality.ai report.

The report looked at herbal books, but there are likely many other niches hidden...

Amazon’s publishing model allows self-published authors to flood categories for profit. And now, AI tools make it easier than ever to generate convincing, although hollow, manuscripts. Every new “Luna Filby” who hits #1 proves that the model still works.

Unless something changes, we may be witnessing the quiet corrosion of trust in consumer publishing."

Tuesday, November 4, 2025

Professors Press SCOTUS to Affirm Copyright Protection for AI-Created Works; IP Watchdog, November 3, 2025

ROSE ESFANDIARI, IP Watchdog; Professors Press SCOTUS to Affirm Copyright Protection for AI-Created Works

"On Friday, October 31, Professors Shlomit Yanisky-Ravid, Lawrence Lessig and a number of other professors and researchers filed an amicus brief with the U.S. Supreme Court in support of Dr. Stephen Thaler’s petition for a writ of certiorari in Thaler v. Perlmutter, urging the Court to grant certiorari and recognize copyright protection for works generated by artificial intelligence (AI).

The brief argued that “excluding AI-generated works from copyright protection threatens the foundations of American creativity, innovation, and economic growth,” warning that the lower court’s interpretation, which requires human authorship, disregards the “spirit of the Copyright Act.”"

AI firm wins high court ruling after photo agency’s copyright claim; The Guardian, November 4, 2025

The Guardian; AI firm wins high court ruling after photo agency’s copyright claim

"A London-based artificial intelligence firm has won a landmark high court case examining the legality of AI models using vast troves of copyrighted data without permission.

Stability AI, whose directors include the Oscar-winning film-maker behind Avatar, James Cameron, successfully resisted a claim from Getty Images that it had infringed the international photo agency’s copyright.

The ruling is seen as a blow to copyright owners’ exclusive right to reap the rewards of their work, with one senior lawyer, Rebecca Newman, a legal director at Addleshaw Goddard, warning it means “the UK’s secondary copyright regime is not strong enough to protect its creators”."

Monday, November 3, 2025

With AI technology rapidly advancing, ethics must evolve as well - opinion; The Jerusalem Post, November 2, 2025

AVI JORISCH, The Jerusalem Post; With AI technology rapidly advancing, ethics must evolve as well - opinion

"Wisdom over intelligence

Whether this century becomes our renaissance or our ruin will depend on a quality that can’t be coded or automated: wisdom.

For all our progress, we have not yet learned to match our technological power with moral imagination. We’ve achieved abundance without equilibrium, connection without community, knowledge without humility. The danger isn’t that machines will become more human – it’s that humans will become more machine-like, optimizing for speed and efficiency while forgetting the soul.

Humanity’s story has always been shaped by invention. The wheel, the compass, the printing press, the microchip – each expanded what we could do, but not necessarily who we could be. The Industrial Revolution lifted billions from poverty, yet it also gave us the mechanized wars of the twentieth century. Nuclear energy promised limitless power – and then birthed Hiroshima.

Today, as we stand on the edge of quantum computing, gene editing, and artificial general intelligence, the pattern repeats. The tools evolve. The ethics lag behind.

We need a new kind of moonshot – not just of science, but of spirit."

Japanese Companies Tell OpenAI to Stop Infringing On Its IP; Gizmodo, November 2, 2025

Gizmodo; Japanese Companies Tell OpenAI to Stop Infringing On Its IP

"The Content Overseas Distribution Association (CODA), which represents several major Japanese entertainment companies such as TV studio Toei and game developer Square Enix, recently sent a written request calling on OpenAI to end its unauthorized use of their IP to train its recently launched Sora 2 generative AI.

Nearly 20 co-signers have accused the tech company of copyright infringement, alleging a “large portion” of Sora 2 content “closely resembles Japanese content or images [as] a result of using Japanese content as machine learning data.” The letter mentioned OpenAI’s policy of using copyrighted works unless the owner explicitly asks to opt out, but argues under Japanese law, it should instead be an opt-in system, since permission for copyrighted works is generally required beforehand."

Internet Archive’s legal fights are over, but its founder mourns what was lost; Ars Technica, November 3, 2025

ASHLEY BELANGER, Ars Technica; Internet Archive’s legal fights are over, but its founder mourns what was lost

"This month, the Internet Archive’s Wayback Machine archived its trillionth webpage, and the nonprofit invited its more than 1,200 library partners and 800,000 daily users to join a celebration of the moment. To honor “three decades of safeguarding the world’s online heritage,” the city of San Francisco declared October 22 to be “Internet Archive Day.” The Archive was also recently designated a federal depository library by Sen. Alex Padilla (D-Calif.), who proclaimed the organization a “perfect fit” to expand “access to federal government publications amid an increasingly digital landscape.”

The Internet Archive might sound like a thriving organization, but it only recently emerged from years of bruising copyright battles that threatened to bankrupt the beloved library project. In the end, the fight led to more than 500,000 books being removed from the Archive’s “Open Library.”

“We survived,” Internet Archive founder Brewster Kahle told Ars. “But it wiped out the Library.”

An Internet Archive spokesperson confirmed to Ars that the archive currently faces no major lawsuits and no active threats to its collections. Kahle thinks “the world became stupider” when the Open Library was gutted—but he’s moving forward with new ideas."

Sunday, November 2, 2025

DOJ faces ethics nightmare with Trump bid for $230M settlement; The Hill, October 31, 2025

REBECCA BEITSCH, The Hill; DOJ faces ethics nightmare with Trump bid for $230M settlement


[Kip Currier: This real-life "nightmare" scenario is akin to a hypothetical law school exam fact pattern with scores of ethics issues for law students to identify and discuss. Would that it were a fictitious set of facts.

If Trump's former personal attorneys, now in the top DOJ leadership, will not recuse themselves despite genuine conflicts of interest and appearances of impropriety, will the state and federal bar associations that license these attorneys, and that hold them to annual continuing legal and ethics education requirements to remain in good standing, step in to scrutinize these lawyers' potential ethical lapses?

These unprecedented actions by Trump must not be treated as normal. Similarly, if Trump's former personal attorneys approve Trump's attempt to "shake down" the federal government and American taxpayers, their ethically dubious actions as DOJ leaders and officers of the court must not be normalized by the organizations charged with enforcing ethical standards for all licensed attorneys.

Moreover, approval of this settlement would damage the rule of law and public trust in it. If the most powerful person on the planet can demand that an organization whose leadership reports to him pay out a "settlement" for lawfully conducted actions and proceedings in a prior administration, what does that say about the state of justice in the U.S.? I posit that it would say the justice system has been utterly corrupted and is no longer subject to the equal application of its laws and ethical standards. No person is, or should be, above the law in our American system of government and its checks and balances. Not even the U.S. President, despite the Roberts Court's controversial July 2024 ruling in Trump v. U.S. recognizing absolute and presumptive Presidential immunity in certain spheres.

Finally, a few words about "speaking out" and "standing up". It is vital for those in leadership positions to call out actions like these, which arguably undermine the rule of law and incrementally move this country from a democratically centered nation toward an autocratic state like Russia. I searched for and could find no statement by the American Bar Association (ABA) on this matter, one that is clearly relevant to its membership, of which I count myself a member.

Will the ABA and other legal organizations share their voices on these matters that have such far-reaching implications for the rule of law and our nearly 250-year democratic experiment?

The paperback version of my Bloomsbury book, Ethics, Information, and Technology, becomes available on November 13. I intentionally included a substantial professional and character ethics section at the outset of the book because those principles are so integral to how we conduct ourselves in all areas of our lives: ethics precepts and values like integrity, attribution, truthfulness and avoidance of misrepresentation, transparency, accountability, and disclosure of conflicts of interest, as well as recusal when we have them.]


[Excerpt]

"The Department of Justice (DOJ) is facing pressure to back away from a request from President Trump for a $230 million settlement stemming from his legal troubles, as critics say it raises a dizzying number of ethical issues.

Trump has argued he deserves compensation for the scrutiny into his conduct, describing himself as a victim of both a special counsel investigation into the 2016 election and the classified documents case.

The decision, however, falls to a cadre of attorneys who previously represented Trump personally.

Rupa Bhattacharyya, who reviewed settlement requests in her prior role as director of the Torts Branch of the DOJ’s Civil Division, said most agreements approved by the department are typically for tens of thousands of dollars or at most hundreds of thousands.

“In the ordinary course, the filing of administrative claims is required. So that’s not unusual. In the ordinary course, a relatively high damages demand on an administrative claim is also not that unusual. What is unusual here is the fact that the president is making a demand for money from his own administration, which raises all sorts of ethical problems,” Bhattacharyya told The Hill.

“It’s also just completely unheard of. There’s never been a case where the president of the United States would ask the department that he oversees to make a decision in his favor that would result in millions of dollars lining his own pocket at the expense of the American taxpayer.”

It’s the high dollar amount Trump is seeking that escalates the decision to the top of the department, leaving Deputy Attorney General Todd Blanche, as well as Associate Attorney General Stanley Woodward, to consider the request."

Saturday, November 1, 2025

Universal Music settles copyright dispute with AI firm Udio; Reuters, October 29, 2025

Reuters; Universal Music settles copyright dispute with AI firm Udio

"Universal Music Group said on Wednesday it has settled a copyright infringement case with artificial intelligence company Udio and that the two firms will collaborate on a new suite of creative products.

Under the agreement, the companies will launch a platform next year that leverages generative AI trained on authorized and licensed music.

UMG Chairman Sir Lucian Grainge said the agreements "demonstrate our commitment to do what's right by our artists and songwriters, whether that means embracing new technologies, developing new business models, diversifying revenue streams or beyond.""

‘Progressive’ Tech Group Asks Trump to Block AI Copyright Cases; The American Prospect, October 31, 2025

DAVID DAYEN, The American Prospect; ‘Progressive’ Tech Group Asks Trump to Block AI Copyright Cases

"The Chamber of Progress, a self-styled “progressive” industry trade group supported by most of the biggest tech platforms, has urged the Trump administration to intervene in a litany of copyright cases involving artificial intelligence firms, to try to stop authors and publishers from having their work used for training AI models without permission.

The pleading comes as Anthropic prepares to pay authors $1.5 billion, the largest award in the history of copyright law, for pirating their work, in a settlement announced last month. OpenAI, Microsoft, Google, and Meta are named defendants in the more than 50 active lawsuits over AI intellectual-property theft.

In a letter to Michael Kratsios, the lead science adviser to President Trump, the Chamber of Progress estimates that AI companies could be liable under the Copyright Act for up to $1.5 trillion for stealing copyrighted work on which to train their models. The letter’s authors claim that this represents “an existential risk” to AI companies, and that the cases should be tossed out under a “fair use” standard.

The Chamber of Progress’s campaign to promote fair use, which it has branded “Generate and Create,” comes as at least three of the nonprofit organization’s past or current backers are being sued over copyright claims: Meta, Google, and the AI art generator Midjourney. Another current funder, Nvidia, relies heavily on AI development for its continued success, and venture capital firm a16z, with several AI startups in its portfolio, also funds the nonprofit."

Friday, October 31, 2025

New Book by José Marichal, California Lutheran University; You Must Become an Algorithmic Problem: Renegotiating the Socio-Technical Contract

 

New Book by José Marichal, California Lutheran University; You Must Become an Algorithmic Problem: Renegotiating the Socio-Technical Contract

Description:

"In the age of AI, where personal data fuels corporate profits and state surveillance, what are the implications for democracy?

This incisive book explores the unspoken agreement we have with tech companies. In exchange for reducing the anxiety of an increasingly complex online world, we submit to algorithmic classification and predictability. This reduces incentives for us to become “algorithmic problems” with dire consequences for liberal democracy. [Marichal] calls for a movement to demand that algorithms promote play, creativity and potentiality rather than conformity.

This is a must-read for anyone navigating the intersection of technology, politics and identity in an increasingly data-driven world."

Photographer sues Rice after Coffeehouse uses copyrighted muffin photo; Rice Thresher, October 28, 2025

Rice Thresher; Photographer sues Rice after Coffeehouse uses copyrighted muffin photo


[Kip Currier: Another cautionary tale about not "scraping" stuff from the Internet and using it for one's commercial business.

(Oh, wait...that's what Big Tech did with billions of copyrighted works they "scraped" from the Web and pirate book libraries to develop their proprietary IP-protected AI products, too, right?)

At any rate, this "federal copyright-registered blueberry muffin photo" story is another reminder that it'll save you money, time, and possible reputational damage to do one of the following: take your own photo of a blueberry muffin; hire someone to do it; license a photo; or ask permission to use it.

And if you do use a photo for your business venture, give the photographer/copyright creator attribution...

Like this photo I took of my mother's delicious Iron Skillet Blueberry Scones!]



[Excerpt]

"A lawsuit has been filed against Rice, accusing Rice Coffeehouse of using a copyrighted photograph without permission to advertise on their Instagram page and website. 

The Oct. 17 complaint was filed on behalf of Meggan Hill, a photographer and chef who posts her recipes online. The photo in question is a blueberry muffin photo, featured on Hill’s website with an accompanying recipe.

The lawsuit was filed in a Houston federal district court by Hill’s attorney, Layla Nguyen of SRipLaw, an intellectual property law firm based in Boca Raton, Florida. According to the case briefing, Hill tried to notify the university of her allegations twice before suing but received no response. 

“To date, the parties have failed to resolve this matter,” the briefing reads. “Rice failed to respond to any communications.”

The lawsuit alleges that Chaus copied Hill’s photograph titled “Blueberry Muffins - Culinary Hill 1200x800” from the internet and displayed it online to promote the addition of blueberry muffins to their menu. 

In the lawsuit, Hill claims she discovered the photograph on the Chaus website on Jan. 7 — over a year after she registered the image with the Register of Copyrights. Hill allegedly reached out to Rice with her complaint on Jan. 29 and Feb. 28 before filing the suit."

ChatGPT came up with a 'Game of Thrones' sequel idea. Now, a judge is letting George RR Martin sue for copyright infringement; Business Insider, October 28, 2025

Business Insider; ChatGPT came up with a 'Game of Thrones' sequel idea. Now, a judge is letting George RR Martin sue for copyright infringement.

"When a federal judge decided to allow a sprawling class-action lawsuit against OpenAI to move forward, he read some "Game of Thrones" fan fiction.

In a court ruling Monday, US District Judge Sidney Stein said a ChatGPT-generated idea for a book in the still-unfinished "A Song of Ice and Fire" series by George R.R. Martin could have violated the author's copyright.

"A reasonable jury could find that the allegedly infringing outputs are substantially similar to plaintiffs' works," the judge said in the 18-page Manhattan federal court ruling."

Are We Losing Our Democracy?; The New York Times, October 31, 2025

 The Editorial Board, The New York Times; Are We Losing Our Democracy?

"Countries that slide from democracy toward autocracy tend to follow similar patterns. To measure what is happening in the United States, the Times editorial board has compiled a list of 12 markers of democratic erosion, with help from scholars who have studied this phenomenon. The sobering reality is that the United States has regressed, to different degrees, on all 12.

Our country is still not close to being a true autocracy, in the mold of Russia or China. But once countries begin taking steps away from democracy, the march often continues. We offer these 12 markers as a warning of how much Americans have already lost and how much more we still could lose."

Thursday, October 30, 2025

As Trump Weighs Sale of Advanced A.I. Chips to China, Critics Sound Alarm; The New York Times, October 29, 2025

Ana Swanson et al., The New York Times; As Trump Weighs Sale of Advanced A.I. Chips to China, Critics Sound Alarm

"Mr. Trump’s comments signaled a major potential change for U.S. policy that many Washington officials warn poses a national security risk. Selling such advanced A.I. chips to China is currently banned, and U.S. officials have worked for years to restrain Beijing’s access to the cutting-edge technology.

The president’s reversal, if it comes to pass, would have widespread implications. Nvidia, which has emphasized the importance of maintaining access to the Chinese market, would reap new sales. But critics have argued that A.I. technology is important enough to potentially shift the balance of power in a strategic competition between the United States and China."

AI psychosis is a growing danger. ChatGPT is moving in the wrong direction; The Guardian, October 28, 2025

The Guardian; AI psychosis is a growing danger. ChatGPT is moving in the wrong direction


[Kip Currier: Note this announcement that OpenAI's Sam Altman made on October 14. It's billionaire CEO-speak for "acceptable risk", i.e. "The level of potential losses a society or community considers acceptable given existing social, economic, political, cultural, technical, and environmental conditions." https://inee.org/eie-glossary/acceptable-risk 

Translation: Altman's conflict-of-interest-riven assessment that AI's benefits outweigh a growing corpus of evidence documenting AI's risks and harms to the mental health of young children, teens, and adults.]


[Excerpt]

"On 14 October 2025, the CEO of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it says, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.

Researchers have identified 16 cases in the media this year of individuals developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. In addition to these is the now well-known case of a 16-year-old who died by suicide after discussing his plans extensively with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, that’s not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, if we accept this framing, are independent of ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently introduced)."