Showing posts with label OpenAI. Show all posts

Friday, April 24, 2026

Sam Altman Wants to Know Whether You’re Human; The Atlantic, April 24, 2026

Will Gottsegen, The Atlantic; Sam Altman Wants to Know Whether You’re Human

And he has a way to prove it.

"As the CEO of OpenAI and the chairman of Tools for Humanity, Altman has a financial interest both in the products that create these dangers and in the ones that guard against them."

DeepSeek’s Sequel Set to Extend China’s Reach in Open-Source A.I.; The New York Times, April 24, 2026

Meaghan Tobin, The New York Times; DeepSeek’s Sequel Set to Extend China’s Reach in Open-Source A.I.

"DeepSeek released its models as open source, which means others can freely use and modify them. By contrast, OpenAI and Anthropic kept their leading models proprietary. The episode demonstrated that an open-source system could perform almost as well as closed versions. In the months that followed, Chinese firms released dozens of other open-source models. By the end of 2025, these models made up a significant share of global A.I. usage.

On Friday, DeepSeek released a preview of V4, its long-awaited follow-up model, which it intends to open source. The new model excels at writing computer code, an increasingly important skill for leading A.I. systems. It significantly outperformed every other open-source system at generating code, according to tests from Vals AI, a company that tracks the performance of A.I. technologies.

DeepSeek released its new model just days after Moonshot AI, another Chinese start-up, introduced its latest open-source model, Kimi 2.6. While these systems trail the coding capabilities of the leading U.S. models from Anthropic and OpenAI, the gap is narrowing.

The implications are meaningful. Using A.I. to write code is faster and frees up human programmers to focus on bigger issues. It also means people can use DeepSeek’s latest release to power A.I. agents, which are personal digital assistants that can use other software applications on behalf of office workers, including spreadsheets, online calendars and email services."

Wednesday, April 22, 2026

When AI advice enters a murder case; Politico, April 22, 2026

Aaron Man, Politico; When AI advice enters a murder case

"Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI on Tuesday following a mass shooting at Florida State University that resulted in two deaths last year. The attorney general stated during a press conference that ChatGPT “offered significant advice” to the suspected gunman, Phoenix Ikner, based on a preliminary review by prosecutors.

“If this were a person on the other end of the screen, we would be charging them with murder,” Uthmeier said.

The prospect of OpenAI facing criminal liability raises new questions about whether developers should be held responsible for a chatbot’s potential role in such a tragedy.

Legal scholars told DFD that, compared with prior civil cases, imposing criminal liability on the company would be a much steeper uphill battle. A key challenge, according to them, would be proving OpenAI acted with criminal intent."

Friday, April 17, 2026

AI Startups Have These Copyright Lawyers on Speed Dial; Bloomberg Law, April 16, 2026

David Schultz, Bloomberg Law; AI Startups Have These Copyright Lawyers on Speed Dial

"Something similar connects many of the top attorneys representing the artificial intelligence industry in its most consequential battles: their resumes.

The common thread is Durie Tangri. More than 50 attorneys from the defunct Bay Area intellectual property firm are at the center of epic Silicon Valley copyright fights, just more than three years after Morrison Foerster acquired the practice...

“Tech copyright is a small world,” said Joseph Gratz, one of the alums at Morrison.

The Durie Tangri alums have benefited from the demand in tech copyright law, said Gratz, who has appeared in court defending OpenAI in almost two dozen federal lawsuits...

One of the marquee cases Durie Tangri took on was the decade-long copyright infringement suit over Google’s book digitization. Sonal Mehta, a Durie Tangri alum who is now at WilmerHale, said the boutique relished taking on matters that ventured into uncharted territory.

“We weren’t afraid to be operating in gray areas or to be looking at where the law hadn’t fully developed,” Mehta said. “We didn’t need to feel like every argument had to be something that was a cookie cutter argument that had already been made and won 20 times before.”"

Tuesday, April 14, 2026

Sam Altman home attacks spark concern over AI-motivated violence; Axios, April 14, 2026

Nadia Lopez, Axios; Sam Altman home attacks spark concern over AI-motivated violence

"The big picture: These incidents come amid heightened tension around AI's rapid development, with public anxiety over its political and economic implications rising even as companies continue to push the technology forward.

Threat level: AI is being cast in increasingly existential terms, including by its own creators. Warnings over the chaos the technology could unleash have become part of mainstream discourse, alongside promises of sweeping economic transformation.

This dual promise of disruption and progress has helped elevate AI into one of the most consequential policy debates in the world, but also one of the most emotionally charged."

Agency in the Age of AI; Time, April 14, 2026

John Palfrey, Time; Agency in the Age of AI

"OpenAI’s recent acquisition of OpenClaw, an open-source, autonomous AI agent designed to run locally on a user’s computer, is a sign that AI agents are quickly being given more responsibilities and more access—from emails to bank accounts, a decision with unintended consequences, including deleted inboxes and Amazon Web Services outages. Peter Steinberger, the founder of OpenClaw, said he wants to “build an agent that even my mum can use.” But there is a difference between using technology to improve efficiency and giving technology agency that humans should hold. 

These developments prompt hard questions, particularly for young people who are seeking agency in their personal and professional lives. Does it make sense to train to be an actuary if AI is supposed to be good at predicting unknown outcomes based on data? Is it worth the cost today to train to be a lawyer or an accountant or pursue higher education at all when all the answers are supposedly at our fingertips? Put another way, what does agency look like in an era dominated by the spread of AI?"

Monday, April 13, 2026

OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash; Los Angeles Times, April 13, 2026

Queenie Wong, Los Angeles Times; OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash

"Hours after a Molotov cocktail was thrown at his San Francisco home, OpenAI Chief Executive Sam Altman addressed the criticism surrounding artificial intelligence that appears to have been the impetus for the attack. 

In a lengthy blog post, Altman shared a family photo of his husband and child, stating he hopes it might convince people not to repeat the attack despite their opinions on him.

The San Francisco Police Department arrested a 20-year-old man in connection with the Friday morning attack but did not publicly comment on the motivation. Altman and his company, the maker of ChatGPT, have been at the center of a heated debate about whether AI will change the world for better or worse."

It’s finally happened: I’m now worried about AI. And consulting ChatGPT did nothing to allay my fears; The Guardian, April 8, 2026

The Guardian; It’s finally happened: I’m now worried about AI. And consulting ChatGPT did nothing to allay my fears

"I’ll confess: prior to this moment of giving the subject more than two seconds’ thought, my anxieties around AI were extremely localised. I thought in immediate terms of my own household income, and beyond that, of how the job market might look 10 years from now when my children graduate. I wondered if I should boycott ChatGPT, many of whose architects support Trump, and decided that, yes, I should – an easy sacrifice because I don’t use it in the first place.

Anything bigger than that seemed fanciful. Last year, when Karen Hao’s book Empire of AI was published, it laid out a case against Sam Altman and his company, OpenAI, that briefly pierced the tedium of the discourse to say that Altman’s leadership is cult-like and blind to cost – no different, in other words, to his tech predecessors, except much more dangerous. Still, I didn’t read the book.

The investigation this week in the New Yorker offers a lower-commitment on-ramp to the subject, while giving the casual reader an exciting opportunity: to ask ChatGPT, the AI-powered chatbot created by Altman’s OpenAI, to summarise the key findings of a piece that is highly critical of ChatGPT and Altman."

Friday, April 10, 2026

OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters; Wired, April 9, 2026

Maxwell Zeff, Wired; OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

The ChatGPT-maker testified in favor of an Illinois bill that would limit when AI labs can be held liable—even in cases where their products cause “critical harm.”

"OPENAI IS THROWING its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage."

Thursday, April 9, 2026

Judge slams key OpenAI witness in copyright infringement case for ‘hazy recollections’; New York Daily News via Chicago Tribune, April 9, 2026

New York Daily News via Chicago Tribune; Judge slams key OpenAI witness in copyright infringement case for ‘hazy recollections’

"An unimpressed Manhattan judge ordered a corporate representative for OpenAI to undergo a second deposition after finding he failed to answer “even the simplest questions” the first time around about what the company has described as efforts to limit chatbots from stealing writers’ work.

​Magistrate Judge Ona Wang, in a sharply-worded 11-page order Tuesday, said OpenAI had been put on notice that the company’s purported expert on plagiarism John Vincent “Vinnie” Monaco was woefully underprepared for his January deposition, ordering him to submit to 3.5 more hours of questioning that took place Wednesday.

​In granting a motion from the Chicago Tribune, New York Times and other news outlets suing OpenAI to compel the additional testimony, Wang deferred ruling on a request for sanctions, saying it would depend on how Monaco fared in his do-over. She said she may issue fines or recommend some of his answers be deemed as admissions.

​OpenAI has previously said that Monaco has more knowledge than any of its engineers about Project Giraffe, an internal operation which the company claims is designed to develop ways to limit its large language models, or LLMs, from inadvertently regurgitating copyrighted works — the issue at the core of the ongoing Manhattan Federal Court lawsuit."

Wednesday, April 8, 2026

Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions; CNBC, April 8, 2026

Jonathan Vanian, CNBC; Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions

"Meta is debuting its first major artificial intelligence model since the costly hiring of Scale AI’s Alexandr Wang nine months ago, as the Facebook parent aims to carve out a niche in a market that’s being dominated by OpenAI, Anthropic and Google.

Dubbed Muse Spark and originally codenamed Avocado, the AI model announced Wednesday is the first from the company’s new Muse series developed by Meta Superintelligence Labs, the AI unit that Wang oversees. Wang joined Meta in June as part of the company’s $14.3 billion investment in Scale AI, where he was CEO."

Saturday, April 4, 2026

AI agents are scrambling power users' brains; Axios, April 4, 2026

Megan Morrone, Axios; AI agents are scrambling power users' brains

"A growing number of software developers say AI coding tools are frying their brains. 

The big picture: The most popular agentic AI systems have triggered something that looks a lot like addiction among some of tech's highest performers."

Thursday, March 26, 2026

OpenAI shutters AI video generator Sora in abrupt announcement; The Guardian, March 24, 2026

The Guardian; OpenAI shutters AI video generator Sora in abrupt announcement

Tech firm ‘says goodbye’ to Sora, made publicly available in 2024, just six months after its launch of a stand-alone app

"In an abrupt announcement on Tuesday, OpenAI said it was “saying goodbye” to its AI video generator Sora. The move comes just six months after the company’s splashy launch of a stand-alone app with which people could make and share hyper-realistic AI videos in a scrolling social feed."

Saturday, March 21, 2026

The dictionaries are suing OpenAI for ‘massive’ copyright infringement, and say ChatGPT is starving publishers of revenue; Fortune, March 21, 2026

Fortune; The dictionaries are suing OpenAI for ‘massive’ copyright infringement, and say ChatGPT is starving publishers of revenue

"In a filing submitted to the Southern District of New York, the companies accuse OpenAI of cannibalizing the traffic and ad revenue that publishers depend on to survive. “ChatGPT starves web publishers, like [the] Plaintiffs, of revenue,” the complaint reads. Where a traditional search engine sends users to a publisher’s website, Britannica and Merriam-Webster allege ChatGPT instead absorbs the content and delivers a polished answer. It also alleges the AI company fed its LLM with researched and fact-checked work of the companies’ hundreds of human writers and editors...

In an apt example, the complaint describes a prompt asking “How does Merriam-Webster define plagiarize?” to which the model reportedly responded with a definition identical to the one found in the Merriam-Webster dictionary. The complaint adds that the dictionary has been registered with the U.S. Copyright Office."

Tuesday, March 17, 2026

Now OpenAI is getting sued by the dictionary; Quartz, March 17, 2026

Quartz Staff, Quartz; Now OpenAI is getting sued by the dictionary

Encyclopedia Britannica and Merriam-Webster sued the ChatGPT maker, accusing it of copying almost 100,000 articles to train its AI models

"Encyclopedia Britannica and its subsidiary Merriam-Webster have filed suit against OpenAI, alleging that the ChatGPT maker copied their copyrighted content without authorization to train its large language models.

The lawsuit, filed in Manhattan federal court last week, alleges that OpenAI used close to 100,000 Britannica articles to train its models, and that ChatGPT responses frequently reproduce or closely paraphrase Britannica's reference content, including encyclopedia articles and dictionary entries. The complaint also alleges OpenAI uses a retrieval-augmented generation system to pull from Britannica's content in real time when generating responses."

Monday, March 16, 2026

The dictionary sues OpenAI; TechCrunch, March 16, 2026

Amanda Silberling, TechCrunch; The dictionary sues OpenAI

"Encyclopedia Britannica and Merriam-Webster have filed a lawsuit against OpenAI, alleging in its complaint that the AI giant has committed “massive copyright infringement.”

Britannica, which owns Merriam-Webster, retains the copyright to nearly 100,000 online articles, which have been scraped and used to train OpenAI’s LLMs without permission, the publisher alleges in the lawsuit.

Britannica also accuses OpenAI of violating copyright laws when it generates outputs that contain “full or partial verbatim reproductions” of its content and when the AI lab uses its articles in ChatGPT’s RAG (retrieval augmented generation) workflow. OpenAI’s RAG tool is how the LLM scans the web or other databases for newly updated information when responding to a query. Britannica also alleges that OpenAI violates the Lanham Act, a trademark statute, when it generates made-up hallucinations and attributes them falsely to the publisher."

Friday, March 13, 2026

OpenAI sued for practicing law without a license; ABA Journal, March 6, 2026

Amanda Robert, ABA Journal; OpenAI sued for practicing law without a license

"OpenAI has been accused of practicing law without a license in a lawsuit brought by Nippon Life Insurance Co. of America. 

According to the insurer’s complaint, which was filed on Wednesday in the Northern District of Illinois, OpenAI’s artificial intelligence platform ChatGPT pushed a woman seeking disability benefits to breach a settlement agreement and file dozens of motions that “serve no legitimate legal or procedural purpose.”"

Wednesday, March 11, 2026

Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism; The Guardian, March 4, 2026

The Guardian; Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism

"OpenAI, the company behind ChatGPT, is on track to lose $14bn this year. Its market share is collapsing, and its own CEO, Sam Altman, has admitted it “screwed up” an element of the product. All it takes to accelerate that decline is 10 seconds of your time.

A grassroots boycott called QuitGPT has been spreading across the US and beyond, asking people to cancel their ChatGPT subscriptions. More than a million people have answered the call. Mark Ruffalo and Katy Perry have thrown their weight behind it. It is one of the most significant consumer boycotts in recent memory, and I believe it’s time for Europeans to join...

In contrast, cancelling ChatGPT is a piece of cake. You can do it in 10 seconds, and the alternatives are just as good or even better. History shows why #QuitGPT has so much potential: effective campaigns such as the 1977 Nestlé boycott and the 2023 Bud Light boycott were successful because they were narrow and easy. They had a clear target and people had lots of good alternatives.

The great boycotts of history did not succeed because millions of people suddenly became heroic activists. They succeeded because buying a different brand of coffee, or choosing a different beer, was something anyone could do on a Tuesday afternoon. The small act, repeated at scale, becomes a political earthquake.

Go to quitgpt.org. Cancel your subscription. Using the free version? Delete the app, because your conversations still feed the machine. Then try an alternative, and tell at least one person why.

OpenAI’s president bet $25m that you would not notice where your money was going, and that, even if you did, you would not care enough to spend 10 seconds switching to something else. Time to prove him wrong."

Tuesday, March 10, 2026

Nielsen's Gracenote sues OpenAI for copyright infringement; Axios, March 10, 2026

Sara Fischer, Axios; Nielsen's Gracenote sues OpenAI for copyright infringement

"How it works: Gracenote employs hundreds of editors who use human insight and judgment to create millions of narrative descriptions, original video descriptors, unique identifiers and other program identifiers that TV providers and other clients can use to help customers discover content. 

For example, Gracenote editors described HBO's "Game of Thrones" as "the depiction of two power families — kings and queens, knights and renegades, liars and honest men — playing a deadly game of control of the Seven Kingdoms of Westeros, and to sit atop the Iron Throne."

In the lawsuit, Gracenote alleges OpenAI scraped and used a near-exact copy of that descriptor when prompted by a ChatGPT user to describe "Game of Thrones." 

It provides several other examples where, with minimal prompting, OpenAI's various ChatGPT models recite large portions of Gracenote's program descriptions verbatim. 

Between the lines: Gracenote's entire Programs Database, which includes its metadata and the proprietary relational map its editors use to connect that data, is registered with the U.S. Copyright Office."