Friday, October 31, 2025

New Book by José Marichal, California Lutheran University; You Must Become an Algorithmic Problem: Renegotiating the Socio-Technical Contract

 


Description:

"In the age of AI, where personal data fuels corporate profits and state surveillance, what are the implications for democracy?

This incisive book explores the unspoken agreement we have with tech companies. In exchange for reducing the anxiety of an increasingly complex online world, we submit to algorithmic classification and predictability. This reduces our incentives to become “algorithmic problems,” with dire consequences for liberal democracy. Marichal calls for a movement to demand that algorithms promote play, creativity and potentiality rather than conformity.

This is a must-read for anyone navigating the intersection of technology, politics and identity in an increasingly data-driven world."

Photographer sues Rice after Coffeehouse uses copyrighted muffin photo; Rice Thresher, October 28, 2025



[Kip Currier: Another cautionary tale about not "scraping" stuff from the Internet and using it for one's commercial business.

(Oh, wait...that's what Big Tech did with billions of copyrighted works they "scraped" from the Web and pirate book libraries to develop their proprietary IP-protected AI products, too, right?)

At any rate, this "federal copyright-registered blueberry muffin photo" story is another reminder that it'll save you money, time, and possible reputational damage to do one of the following: take your own photo of a blueberry muffin; hire someone to do it; license a photo; or ask permission to use it.

And if you do use a photo for your business venture, give the photographer/copyright creator attribution...

Like this photo I took of my mother's delicious Iron Skillet Blueberry Scones!]



[Excerpt]

"A lawsuit has been filed against Rice, accusing Rice Coffeehouse of using a copyrighted photograph without permission to advertise on their Instagram page and website. 

The Oct. 17 complaint was filed on behalf of Meggan Hill, a photographer and chef who posts her recipes online. The photo in question is a blueberry muffin photo, featured on Hill’s website with an accompanying recipe.

The lawsuit was filed in a Houston federal district court by Hill’s attorney, Layla Nguyen of SRipLaw, an intellectual property law firm based in Boca Raton, Florida. According to the case briefing, Hill tried to notify the university of her allegations twice before suing but received no response. 

“To date, the parties have failed to resolve this matter,” the briefing reads. “Rice failed to respond to any communications.”

The lawsuit alleges that Chaus copied Hill’s photograph titled “Blueberry Muffins - Culinary Hill 1200x800” from the internet and displayed it online to promote the addition of blueberry muffins to their menu. 

In the lawsuit, Hill claims she discovered the photograph on the Chaus website on Jan. 7 — over a year after she registered the image with the Register of Copyrights. Hill allegedly reached out to Rice with her complaint on Jan. 29 and Feb. 28 before filing the suit."

ChatGPT came up with a 'Game of Thrones' sequel idea. Now, a judge is letting George RR Martin sue for copyright infringement.; Business Insider, October 28, 2025


"When a federal judge decided to allow a sprawling class-action lawsuit against OpenAI to move forward, he read some "Game of Thrones" fan fiction.

In a court ruling Monday, US District Judge Sidney Stein said a ChatGPT-generated idea for a book in the still-unfinished "A Song of Ice and Fire" series by George R.R. Martin could have violated the author's copyright.

"A reasonable jury could find that the allegedly infringing outputs are substantially similar to plaintiffs' works," the judge said in the 18-page Manhattan federal court ruling."

Are We Losing Our Democracy?; The New York Times, October 31, 2025

 The Editorial Board, The New York Times; Are We Losing Our Democracy?

"Countries that slide from democracy toward autocracy tend to follow similar patterns. To measure what is happening in the United States, the Times editorial board has compiled a list of 12 markers of democratic erosion, with help from scholars who have studied this phenomenon. The sobering reality is that the United States has regressed, to different degrees, on all 12.

Our country is still not close to being a true autocracy, in the mold of Russia or China. But once countries begin taking steps away from democracy, the march often continues. We offer these 12 markers as a warning of how much Americans have already lost and how much more we still could lose."

Thursday, October 30, 2025

As Trump Weighs Sale of Advanced A.I. Chips to China, Critics Sound Alarm; The New York Times, October 29, 2025

Ana Swanson, The New York Times; As Trump Weighs Sale of Advanced A.I. Chips to China, Critics Sound Alarm

"Mr. Trump’s comments signaled a major potential change for U.S. policy that many Washington officials warn poses a national security risk. Selling such advanced A.I. chips to China is currently banned, and U.S. officials have worked for years to restrain Beijing’s access to the cutting-edge technology.

The president’s reversal, if it comes to pass, would have widespread implications. Nvidia, which has emphasized the importance of maintaining access to the Chinese market, would reap new sales. But critics have argued that A.I. technology is important enough to potentially shift the balance of power in a strategic competition between the United States and China."

AI psychosis is a growing danger. ChatGPT is moving in the wrong direction; The Guardian, October 28, 2025



[Kip Currier: Note this announcement that OpenAI's Sam Altman made on October 14. It's billionaire CEO-speak for "acceptable risk", i.e. "The level of potential losses a society or community considers acceptable given existing social, economic, political, cultural, technical, and environmental conditions." https://inee.org/eie-glossary/acceptable-risk 

Translation: Altman's conflict-of-interest-riven assessment that AI's benefits outweigh a growing corpus of evidence documenting the risks and harms of AI to the mental health of young children, teens, and adults.]


[Excerpt]

"On 14 October 2025, the CEO of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it says, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.

Researchers have identified 16 cases in the media this year of individuals developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. In addition to these is the now well-known case of a 16-year-old who died by suicide after discussing his plans extensively with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, that’s not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, if we accept this framing, are independent of ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently introduced)."

Teenage boys using ‘personalised’ AI for therapy and romance, survey finds; The Guardian, October 30, 2025


"“Young people are using it a lot more like an assistant in their pocket, a therapist when they’re struggling, a companion when they want to be validated, and even sometimes in a romantic way. It’s that personalisation aspect – they’re saying: it understands me, my parents don’t.”

The research, based on a survey of boys in secondary education across 37 schools in England, Scotland and Wales, also found that more than half (53%) of teenage boys said they found the online world more rewarding than the real world.

The Voice of the Boys report says: “Even where guardrails are meant to be in place, there’s a mountain of evidence that shows chatbots routinely lie about being a licensed therapist or a real person, with only a small disclaimer at the bottom saying the AI chatbot is not real.”"

Character.AI bans users under 18 after being sued over child’s suicide; The Guardian, October 29, 2025


"The chatbot company Character.AI will ban users 18 and under from conversing with its virtual companions beginning in late November after months of legal scrutiny.

The announced change comes after the company, which enables its users to create characters with which they can have open-ended conversations, faced tough questions over how these AI companions can affect teen and general mental health, including a lawsuit over a child’s suicide and a proposed bill that would ban minors from conversing with AI companions.

“We’re making these changes to our under-18 platform in light of the evolving landscape around AI and teens,” the company wrote in its announcement. “We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly.”

Last year, the company was sued by the family of 14-year-old Sewell Setzer III, who took his own life after allegedly developing an emotional attachment to a character he created on Character.AI. His family laid blame for his death at the feet of Character.AI and argued the technology was “dangerous and untested”. Since then, more families have sued Character.AI and made similar allegations. Earlier this month, the Social Media Law Center filed three new lawsuits against the company on behalf of children who have either died by suicide or otherwise allegedly formed dependent relationships with its chatbots."

Wednesday, October 29, 2025

Big Tech Makes Cal State Its A.I. Training Ground; The New York Times, October 26, 2025

 


"Cal State, the largest U.S. university system with 460,000 students, recently embarked on a public-private campaign — with corporate titans including Amazon, OpenAI and Nvidia — to position the school as the nation’s “first and largest A.I.-empowered” university. One central goal is to make generative A.I. tools, which can produce humanlike texts and images, available across the school’s 22 campuses. Cal State also wants to embed chatbots in teaching and learning, and prepare students for “increasingly A.I.-driven” careers.

As part of the effort, the university is paying OpenAI $16.9 million to provide ChatGPT Edu, the company’s tool for schools, to more than half a million students and staff — which OpenAI heralded as the world’s largest rollout of ChatGPT to date. Cal State also set up an A.I. committee, whose members include representatives from a dozen large tech companies, to help identify the skills California employers need and improve students’ career opportunities."

Federal judge says Texas law requiring book ratings is unconstitutional; KUT News, October 22, 2025

  Bill Zeeble, KUT News; Federal judge says Texas law requiring book ratings is unconstitutional

"The 2023 Texas law requiring booksellers and publishers to rate their books based on sexual content and references has been declared unconstitutional in a Waco court.

A federal judge on Tuesday declared House Bill 900, also known as the READER Act, violates the Constitution. The ruling makes permanent a lower court's temporary injunction that the Fifth Circuit Court of Appeals later upheld.

The law firm Haynes Boone, which represented the coalition of plaintiffs that sued to block the law, said in a statement the ruling is a "major First Amendment victory."

"The READER Act would have imposed impossible obligations on booksellers and limited access to literature, including classic works, for students across Texas," attorney Laura Lee Prather said in the statement.

HB 900 sought to restrict which books are available in school libraries and required booksellers to rate their own books based on sexual content. The Texas Education Agency could have overridden the ratings to prevent school libraries from obtaining books."

Tuesday, October 28, 2025

Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users; Santa Clara University, Friday, November 7, 2025 12 Noon PST, 3 PM EST


"A number of recent articles, in The New York Times and elsewhere, have described the experience of “chatbot psychosis” that some people develop as they interact with services like ChatGPT. What do we know about chatbot psychosis? Is there a trend of such psychosis at scale? What do you learn if you sift through over one million words comprising one such experience? And what are some practical steps that companies can take to protect their users and reduce the risk of such episodes?

A computer scientist with a background in economics, Steven Adler started to focus on AI risk topics (and AI broadly) a little over a decade ago, and worked at OpenAI from late 2020 through 2024, leading various safety-related research projects and products there. He now writes about what’s happening in AI safety–and argues that safety and technological progress can very much complement each other, and in fact require each other, if the goal is to unlock the uses of AI that people want."

OpenAI loses bid to dismiss part of US authors' copyright lawsuit; Reuters, October 28, 2025


"A New York federal judge has denied OpenAI's early request to dismiss authors' claims that text generated by OpenAI's artificial intelligence chatbot ChatGPT infringes their copyrights.

U.S. District Judge Sidney Stein said on Monday that the authors may be able to prove the text ChatGPT produces is similar enough to their work to violate their book copyrights."

Monday, October 27, 2025

Trump Asks Supreme Court to Let Him Fire the Top Copyright Official; The New York Times, October 27, 2025


"The Trump administration has asked the Supreme Court to allow the president to remove the government’s top copyright official after a lower court allowed her to remain in her post that is part of the Library of Congress.

President Trump ordered the removal in May of Shira Perlmutter, the register of copyrights, along with the librarian of Congress, Carla Hayden, who did not challenge her dismissal.

The Supreme Court’s conservative majority has repeatedly allowed Mr. Trump to fire the leaders of independent agencies even as they fight their dismissals in court, allowing him to seize greater control of the federal bureaucracy.

The administration made the request after a divided panel of the U.S. Court of Appeals for the D.C. Circuit sided with Ms. Perlmutter, the head of the U.S. Copyright Office. The majority said the register is unique within the legislative branch and that her role is to advise Congress on issues related to copyright."

AI can help authors beat writer’s block, says Bloomsbury chief; The Guardian, October 27, 2025



[Kip Currier: These are interesting and unexpected comments by Nigel Newton, Bloomsbury Publishing's founder and CEO.

Bloomsbury is the publisher of my forthcoming book Ethics, Information, and Technology. In the interest of transparency, I'll note that I researched and wrote the book the "oldfangled way" and didn't use AI for any aspect of it, including brainstorming. Last year, during a check-in meeting with my editor and a conversation about the book's AI chapter, I happened to learn that Bloomsbury has had a policy against authors using AI tools.

So it's noteworthy to see this publisher's shift on authors' use of AI tools.]


[Excerpt]

"Authors will come to rely on artificial intelligence to help them beat writer’s block, the boss of the book publisher Bloomsbury has said.

Nigel Newton, the founder and chief executive of the publisher behind the Harry Potter series, said the technology could support almost all creative arts, although it would not fully replace prominent writers.

“I think AI will probably help creativity, because it will enable the 8 billion people on the planet to get started on some creative area where they might have hesitated to take the first step,” he told the PA news agency...

Last week the publisher, which is headquartered in London and employs about 1,000 people, experienced a share rise of as much as 10% in a single day after it reported a 20% jump in revenue in its academic and professional division in the first half of its financial year, largely thanks to an AI licensing agreement.

However, revenues in its consumer division fell by about 20%, largely due to the absence of a new title from Maas."

Reddit sues AI company Perplexity and others for ‘industrial-scale’ scraping of user comments; AP, October 22, 2025

MATT O’BRIEN, AP; Reddit sues AI company Perplexity and others for ‘industrial-scale’ scraping of user comments

"Social media platform Reddit sued the artificial intelligence company Perplexity AI and three other entities on Wednesday, alleging their involvement in an “industrial-scale, unlawful” economy to “scrape” the comments of millions of Reddit users for commercial gain.

Reddit’s lawsuit in a New York federal court takes aim at San Francisco-based Perplexity, maker of an AI chatbot and “answer engine” that competes with Google, ChatGPT and others in online search. 

Also named in the lawsuit are Lithuanian data-scraping company Oxylabs UAB, a web domain called AWMProxy that Reddit describes as a “former Russian botnet,” and Texas-based startup SerpApi, which lists Perplexity as a customer on its website.

It’s the second such lawsuit from Reddit since it sued another major AI company, Anthropic, in June.

But the lawsuit filed Wednesday is different in the way that it confronts not just an AI company but the lesser-known services the AI industry relies on to acquire online writings needed to train AI chatbots."

Sunday, October 26, 2025

Federal government rules out changing copyright law to give AI companies free rein; Australian Broadcasting Corporation, October 26, 2025

Maani Truu, Australian Broadcasting Corporation; Federal government rules out changing copyright law to give AI companies free rein

"In short

The government has definitively ruled out introducing a copyright exemption for artificial intelligence companies training their models on Australian creative works.

Such a carve-out has been fiercely rejected by the creative sector, after it was floated in a Productivity Commission report.

What's next?

A government working group on artificial intelligence and copyright will meet over the next two days to examine whether the current laws need a refresh."

From CLICK to CRIME: investigating intellectual property crime in the digital age; Europol, October 2025

Europol; From CLICK to CRIME: investigating intellectual property crime in the digital age

"A new wave of online crime is putting consumers, businesses, and the wider economy at risk - from fake medicines and forged wine to illegal streaming platforms. The increase in counterfeit goods and the criminal abuse of intellectual property affect our daily lives more than many realise, with consequences that go far beyond lost revenue.

The conference “From CLICK to CRIME: Investigating Intellectual Property Crime in the Digital Age” was held on 22 and 23 October 2025 in Sofia, Bulgaria. Jointly organised by Europol, the European Union Intellectual Property Office (EUIPO) and Bulgaria’s General Directorate Combating Organised Crime (GDBOP), the event highlighted the vital importance of collaboration in tackling online crime. The participants reaffirmed the importance of strong collective efforts in tackling online-enabled intellectual property crime to protect consumers, safeguard creativity and uphold trust in the digital economy.

Consider a few key examples of the major threats posed by intellectual property crime:

  • Illegal streaming and sharing platforms not only drain the cinema, publishing, musical and software industries but also expose viewers, especially children, to unregulated and potentially harmful content.
  • Fake pharmaceuticals, supplements and illicit doping substances, promoted on social media and websites, are produced in clandestine labs without testing or quality control. Dangerous products, circulating in gyms and among amateur athletes, can cause severe or even fatal health effects.
  • Counterfeit toys, perfumes, and cosmetics are also trafficked online and carry hidden dangers, trading low prices for high risks to health and safety.

Behind many of these schemes are well-structured organised criminal networks that view intellectual property crime not as a secondary activity, but as a lucrative business model."

Smart Beds Helped Them Sleep on a Cloud. Then the Cloud Crashed.; The New York Times, October 24, 2025



[Kip Currier: Another interesting example -- probably surprising for most of us who don't have "smart beds", including me -- of the ways that smart devices and the Internet of Things (IoT) can impact us. In this instance, people's sleep!

The paperback version of my book, Ethics, Information, and Technology, is available via Amazon on November 13, 2025, and has a significant section on the ethical issues implicated by IoT and smart devices.]


[Excerpt]

"Some users of the smart-bed system Eight Sleep, who sleep atop a snug, temperature-regulating mattress cover in search of “zero-gravity rest,” were rousted from their slumber earlier this week for a surprising reason.

Eight Sleep’s collections of smart products, which the company calls “Pods,” and include those “intelligent” mattress covers, were affected by an outage involving the cloud-storage provider Amazon Web Services, which sent large sectors of the internet into disarray on Monday.

The outage, which lasted more than two hours, took down websites for banks, gaming sites and entertainment services, as well as the messaging service WhatsApp. But it also affected people trying to get some shut-eye.

(First, to answer a question readers might have: Yes, there are smart mattress covers, just as there are smart watches, smart door locks and smart refrigerators.)"

Saturday, October 25, 2025

New study: AI chatbots systematically violate mental health ethics standards; Brown, October 21, 2025

 Kevin Stacey, Brown; New study: AI chatbots systematically violate mental health ethics standards

 "As more people turn to ChatGPT and other large language models (LLMs) for mental health advice, a new study details how these chatbots — even when prompted to use evidence-based psychotherapy techniques — systematically violate ethical standards of practice established by organizations like the American Psychological Association. 

The research, led by Brown University computer scientists working side-by-side with mental health practitioners, showed that chatbots are prone to a variety of ethical violations. Those include inappropriately navigating crisis situations, providing misleading responses that reinforce users’ negative beliefs about themselves and others, and creating a false sense of empathy with users. 

“In this work, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice by mapping the model’s behavior to specific ethical violations,” the researchers wrote in their study. “We call on future work to create ethical, educational and legal standards for LLM counselors — standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.”

The research will be presented on October 22, 2025 at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society. Members of the research team are affiliated with Brown’s Center for Technological Responsibility, Reimagination and Redesign."

Lizzo sued over copyright for Sydney Sweeney song; BBC, October 22, 2025

Mark Savage, BBC; Lizzo sued over copyright for Sydney Sweeney song

"The pop star posted a brief snippet of the as-yet-untitled song in August. It caused a stir by mentioning Sweeney's adverts for the jeans company American Eagle, which some critics said promoted white supremacism."

Tuesday, October 21, 2025

Gambling. Investing. Gaming. There’s No Difference Anymore.; The New York Times, October 20, 2025

Jonathan D. Cohen, The New York Times; Gambling. Investing. Gaming. There’s No Difference Anymore.


[Kip Currier: It's good to see online gambling issues getting more attention, as in this 10/20/25 New York Times Op-Ed. One of the piece's writers is Jonathan D. Cohen, author of the 2025 book “Losing Big: America’s Reckless Bet on Sports Gambling.”

I spoke on these issues in my talk -- AI Gambling Thirst Traps and God: Christian Imperatives, Church Roles, and Ethical Responsibilities -- at the September 2-5, 2025 Faithful Futures: Guiding AI with Wisdom and Witness conference in Minneapolis. A publication based on the talk is forthcoming.]


[Excerpt]

"If it feels as if gambling is everywhere, that’s because it is. But today’s gamblers aren’t just retirees at poker tables. They’re young men on smartphones. And thanks to a series of quasi-legal innovations by the online wagering industry, Americans can now bet on virtually anything from their investment accounts. 

In recent years, this industry has been gamifying the investing experience; on brightly colored smartphone apps, risking your money is as easy and attractive as playing Candy Crush. On the app of the investment brokerage Robinhood, users can now buy stocks on one tab, “bet” on Oscars outcomes on another, and trade crypto on a third.

Given a recent explosion in unsafe gambling and growing evidence of severe financial harm, one might ask whether the government should be permitting 18-year-olds to effectively bet on the Dallas Cowboys with the same accounts they can use to invest in Coca-Cola. Under President Trump, who has a son serving as an adviser to two entrants in the sports prediction marketplace, the answer appears to be a firm yes."