Saturday, April 25, 2026

OpenAI's Sam Altman writes apology to community of Tumbler Ridge; CBC News, April 24, 2026

Andrew Kurjata, CBC News; OpenAI's Sam Altman writes apology to community of Tumbler Ridge

"Sam Altman, the CEO of OpenAI, has written a letter of apology to the community of Tumbler Ridge for failing to alert RCMP about the account of the Tumbler Ridge shooter.

The company shared the letter with the local news website Tumbler RidgeLines, which published it in full. Its authenticity was confirmed by a spokesperson for OpenAI...

Altman committed to authoring an apology after meeting with B.C. Premier David Eby and Tumbler Ridge Mayor Darryl Krakowka at the beginning of March, but said he wanted to take some time before doing so in order to give the community the opportunity to "grieve in their own time."

He also acknowledged that his company should have alerted law enforcement about the account of the shooter, which was flagged for problematic activity in advance of the tragedy but was not escalated to alerting authorities in Canada...

Altman's company is being sued by one Tumbler Ridge family, who alleges the company "had specific knowledge of the shooter's long-range planning of a mass casualty event," but "took no steps to act upon this knowledge."

Apology 'necessary' but 'grossly insufficient': Eby

Eby also shared the letter on social media, writing "the apology is necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge."

A statement from the District of Tumbler Ridge released Friday afternoon acknowledged that Altman's letter "may evoke a range of emotions, and we encourage everyone to take the time and space they need.""

Artificial Intelligence and Copyright: Where Does the UK Stand?; The National Law Review, April 23, 2026

Serena Totino, Simon Casinader, K&L Gates LLP, The National Law Review; Artificial Intelligence and Copyright: Where Does the UK Stand?

"The UK Government’s report on the copyright and AI consultation was recently published. While the report confirms that balancing the interests of copyright holders and AI developers is a complex exercise, it also provides an indication of likely scenarios to consider in this fast-evolving environment.

The consultation focused on whether AI developers should be permitted to use copyright protected works for training purposes without prior authorisation and, if so, under what conditions...

Takeaways

Rights holders should continue to assess how their content is accessed and used, and consider technical or contractual mechanisms for licensing and rights reservation.

AI developers should remain cautious when sourcing training data, ensure governance and record keeping processes are robust, and factor copyright risk into product development and deployment strategies."

Friday, April 24, 2026

White House Allowed Officials’ Text Messages to Be Deleted, Lawsuit Says; The New York Times, April 24, 2026

The New York Times; White House Allowed Officials’ Text Messages to Be Deleted, Lawsuit Says

Two watchdogs say internal White House guidance that text messages need not be preserved unless “they are the sole record of official decision-making” contradicted the law.

"Two government watchdogs sued President Trump and the White House on Friday over internal guidance that instructed that some text messages exchanged between officials could be deleted, despite a law generally mandating the preservation of presidential records.

The watchdogs, Citizens for Responsibility and Ethics in Washington and the Freedom of the Press Foundation, also asked a federal judge to overrule a separate but related Justice Department memo, which declared unconstitutional a longstanding federal law requiring safeguarding of presidents’ records, including text messages. The White House guidance cited the memo.

Their lawsuit comes amid a torrent of accusations that the Trump administration has disregarded record-keeping and document disclosure required by law, even as the president and his officials have sought to transform the government and push the legal bounds of their power. They have displayed a particular willingness to skirt record-keeping requirements on text messages exchanged among top officials.

In their complaint, the two watchdogs said the “deficient instructions” from the White House would “result in the irreparable loss or destruction” of presidential records."

Sam Altman Wants to Know Whether You’re Human; The Atlantic, April 24, 2026

Will Gottsegen, The Atlantic; Sam Altman Wants to Know Whether You’re Human

And he has a way to prove it.

"As the CEO of OpenAI and the chairman of Tools for Humanity, Altman has a financial interest both in the products that create these dangers and in the ones that guard against them."

DeepSeek’s Sequel Set to Extend China’s Reach in Open-Source A.I.; The New York Times, April 24, 2026

Meaghan Tobin, The New York Times; DeepSeek’s Sequel Set to Extend China’s Reach in Open-Source A.I.

"DeepSeek released its models as open source, which means others can freely use and modify them. By contrast, OpenAI and Anthropic kept their leading models proprietary. The episode demonstrated that an open-source system could perform almost as well as closed versions. In the months that followed, Chinese firms released dozens of other open-source models. By the end of 2025, these models made up a significant share of global A.I. usage.

On Friday, DeepSeek released a preview of V4, its long-awaited follow-up model, which it intends to open source. The new model excels at writing computer code, an increasingly important skill for leading A.I. systems. It significantly outperformed every other open-source system at generating code, according to tests from Vals AI, a company that tracks the performance of A.I. technologies.

DeepSeek released its new model just days after Moonshot AI, another Chinese start-up, introduced its latest open-source model, Kimi 2.6. While these systems trail the coding capabilities of the leading U.S. models from Anthropic and OpenAI, the gap is narrowing.

The implications are meaningful. Using A.I. to write code is faster and frees up human programmers to focus on bigger issues. It also means people can use DeepSeek’s latest release to power A.I. agents, which are personal digital assistants that can use other software applications on behalf of office workers, including spreadsheets, online calendars and email services."

Ombudsman column: The Pentagon is trying to silence me; Stars and Stripes, April 23, 2026

Jacqueline Smith, Stars and Stripes; Ombudsman column: The Pentagon is trying to silence me

"A recent opinion column I wrote as the Stars and Stripes ombudsman began with this: “Pete Hegseth doesn’t want you to see cartoons in this newspaper anymore.”

Apparently the Pentagon also doesn’t want you to hear from me anymore about threats to the editorial independence of Stars and Stripes.

They fired me.

This happened in the coldest way possible: DA Form 3434 stated that my last day as ombudsman for Stars and Stripes is April 28. (They have to give five days’ notice.) No reason is given. But: “This action is not grievable.”

No one should be surprised that they’re kicking out the one person charged by Congress with protecting Stars and Stripes’ editorial independence. For nearly a year, Pentagon leadership has placed more and more restrictions on the mainstream media...

I was immensely honored to be chosen as the 13th, and first female, ombudsman for Stars and Stripes. I’ve come to appreciate the many talented and dedicated journalists and staff at Stripes — it’s more than a job for them wherever they are stationed around the world. I’ve been fortunate to meet or hear from innumerable veterans, officers and enlisted personnel and military spouses. I’ve even respected the colonels who I tangled with over the rights of Stripes reporters to cover public gatherings on bases.

What you can worry about is the future of Stars and Stripes. This newspaper has a long history of commitment to the military community and to journalistic values. Please don’t let it be controlled by Pentagon brass."


Pentagon Fires Stars and Stripes’ Advocate for Independence; The New York Times, April 23, 2026

The New York Times; Pentagon Fires Stars and Stripes’ Advocate for Independence

"In a blow to independent coverage of the military, the Pentagon has fired the ombudsman for Stars and Stripes, a newspaper that covers the U.S. armed forces and is partly funded by the Defense Department.

“Apparently the Pentagon also doesn’t want you to hear from me anymore about threats to the editorial independence of Stars and Stripes,” the ombudsman, Jacqueline Smith, wrote in a Stars and Stripes column published on Thursday. She said that the Defense Department had given no reason for her dismissal and that she had been told it was “not grievable.”

Her role as ombudsman, which she began in December 2023, was to serve as a watchdog monitoring the paper’s independence and to report concerns to Congress.

“Jacqueline Smith has been relieved of her duties as Stars and Stripes ombudsman effective immediately,” the Defense Department said in a statement."

ABA Law Day events to focus on ‘The Rule of Law and the American Dream’; ABA Journal, April 21, 2026

ABA Journal; ABA Law Day events to focus on ‘The Rule of Law and the American Dream’

"The American Bar Association will host various events to mark Law Day 2026 that address the theme, “The Rule of Law and the American Dream.”

May 1 is designated as the official Law Day."

Thursday, April 23, 2026

U.S. accuses China of "industrial-scale" campaigns to steal AI secrets; Axios, April 23, 2026

Sam Sabin, Axios; U.S. accuses China of "industrial-scale" campaigns to steal AI secrets

"The Trump administration on Thursday accused China-backed actors of running "deliberate, industrial-scale campaigns" to distill and copy American frontier AI models...

Driving the news: Michael Kratsios, director of the White House Office of Science and Technology Policy, sent a memo Thursday to federal agency heads accusing mostly China-based actors of using proxy accounts to evade detection and jailbreak models to "expose proprietary information" and "extract capabilities from American AI models."

Distillation attacks involve querying proprietary models, like Claude or Gemini, millions of times via APIs to build datasets that replicate how the systems behave.

Kratsios said these campaigns enable foreign actors to release models that appear to match U.S. AI capabilities at a fraction of the cost.

He added that such tactics can also strip away guardrails meant to keep outputs "ideologically neutral and truth-seeking.""
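The distillation campaign the memo describes can be sketched in outline. The toy Python example below is purely illustrative (the function names and the stand-in `teacher_model` are invented; a real campaign would hit a vendor's hosted API millions of times): repeated queries to a proprietary "teacher" model are recorded as prompt/response pairs, which then serve as supervised training data for a cheaper copycat "student" model.

```python
def teacher_model(prompt: str) -> str:
    """Stand-in for a proprietary model API (hypothetical).

    In the scenario described in the memo, this would be a paid API call
    to a frontier model, repeated at industrial scale via proxy accounts.
    """
    return f"ANSWER[{prompt.upper()}]"


def build_distillation_dataset(prompts: list[str]) -> list[dict]:
    """Collect teacher outputs into supervised fine-tuning pairs.

    Each pair records what the teacher said in response to a prompt;
    the resulting dataset is what a student model would be trained on
    to replicate the teacher's behavior.
    """
    dataset = []
    for prompt in prompts:
        response = teacher_model(prompt)
        dataset.append({"prompt": prompt, "completion": response})
    return dataset


if __name__ == "__main__":
    pairs = build_distillation_dataset(["what is 2+2", "define copyright"])
    for pair in pairs:
        print(pair["prompt"], "->", pair["completion"])
```

This is why the memo frames distillation as an API-access problem rather than a hacking problem: the attacker never touches the model's weights, only its publicly queryable outputs.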

St. Louis Cardinals fighting Hamilton Cardinals' attempt to trademark baseball team name and design in Canada; CBC News, April 22, 2026

Justin Chandler, CBC News; St. Louis Cardinals fighting Hamilton Cardinals' attempt to trademark baseball team name and design in Canada

"The Hamilton Cardinals baseball team is facing off against the St. Louis Cardinals, but not in a ballpark.

The MLB team from St. Louis, Mo., is opposing a trademark application the Canadian Baseball League (CBL) team filed before the Canadian Intellectual Property Office in 2023.

Hamilton team owner Eric Spearin described the MLB team's opposition as “just a big shock.” The teams play in different leagues, he said, and "our logo looks nothing like theirs.""

AI's a suck up. Research shows how it flatters and suggests we're not to blame; NPR, April 23, 2026

 Ari Daniel, NPR; AI's a suck up. Research shows how it flatters and suggests we're not to blame

"In a recent study published in the journal Science, Cheng and her colleagues report that AI models offer affirmations more often than people do, even for morally dubious or troubling scenarios. And they found that this sycophancy was something that people trusted and preferred in an AI — even as it made them less inclined to apologize or take responsibility for their behavior.

The findings, experts say, highlight how this common AI feature may keep people returning to the technology, despite the harm it causes them.

It's not unlike social media in that both "drive engagement by creating addictive, personalized feedback loops that learn exactly what makes you tick," says Ishtiaque Ahmed, a computer scientist at the University of Toronto who wasn't involved in the research."

Penalties stack up as AI spreads through the legal system; NPR, April 3, 2026

NPR; Penalties stack up as AI spreads through the legal system

""Recently we had 10 cases from 10 different courts on a single day," says Damien Charlotin, a researcher at the business school HEC Paris who keeps a worldwide tally of instances of courts sanctioning people for using erroneous information generated by AI...

The numbers started taking off last year, and Charlotin says the rate is still increasing. He counts a total of more than 1,200 to date, of which about 800 are from U.S. courts.

Penalties are also on the rise, he says. A federal court may have set a record last month with an order for a lawyer in Oregon to pay $109,700 in sanctions and costs for filing AI-generated errors.

The professional embarrassments even take place at the level of state supreme courts...

"I am surprised that people are still doing this when it's been in the news," says Carla Wale, associate dean of information & technology and director of the law library at the University of Washington School of Law. She's designing special training in AI ethics for students who are interested. But she also says the ethical rules aren't completely settled...

When lawyers get in trouble for using AI, it's because they've violated the long-standing rule that holds them responsible for the accuracy of their filings, regardless of how they were generated."

Meta will cut 10% of workforce as company pushes deeper into AI; CNBC, April 23, 2026

  Jonathan Vanian, CNBC; Meta will cut 10% of workforce as company pushes deeper into AI

"Meta plans to lay off 10% of its workforce, equaling about 8,000 jobs, as it continues ramping up investments in artificial intelligence.

The cuts will begin on May 20, and the company is scrapping plans to hire people for 6,000 open roles, according to a Thursday memo to employees. Bloomberg was first to report on the layoffs. 

Meta’s latest round of cuts follows several smaller job reductions that the company said were necessary to improve efficiency while focusing its efforts on generative AI, where it’s lagged OpenAI, Google and Anthropic."

Anthropic seeks pivotal court win in music publisher lawsuit over AI training; Reuters, April 21, 2026

Reuters; Anthropic seeks pivotal court win in music publisher lawsuit over AI training

"Artificial intelligence company Anthropic has asked a California federal court to rule in its favor in a copyright lawsuit brought by music publishers Universal Music Group, Concord and ABKCO, arguing it made "fair use" of their song lyrics to train its AI-powered chatbot Claude.

Anthropic's Monday filing addresses the key question for a wave of high-stakes copyright cases brought by creators against tech companies: is it legally permissible to copy millions of copyrighted works without permission to train AI models?...

The lawsuit is one of dozens of disputes between copyright owners such as authors and news outlets, and tech giants including OpenAI, Microsoft and Meta Platforms over the training of their AI systems. Amazon- and Google-backed Anthropic was the first major AI company to settle one of the cases, agreeing last year to pay a group of authors $1.5 billion to resolve a class-action lawsuit."

Got an Old Kindle? It Might Not Work Anymore. Here’s What to Do.; The New York Times, April 9, 2026

Brenda Stolyar, The New York Times; Got an Old Kindle? It Might Not Work Anymore. Here’s What to Do. 

"Earlier this week, Amazon notified its customers via email that, starting May 20, it will end support for Kindle and Kindle Fire devices released in 2012 or earlier. That means you’ll no longer be able to download new content to your e-reader via Amazon’s Kindle Store.

Although you don’t have to stop using your old Kindle immediately, the restricted functionality may force you to consider whether you want to upgrade to a newer version or ditch the Amazon ecosystem altogether.

If you own a Kindle that’s no longer supported, Amazon wants you to buy a new one. The company is offering a 20% discount that you can apply toward one of its new Kindle models, along with a $20 e-book credit that will automatically be applied to your account with the purchase of a new device. The promotion will be valid through June 20, exactly a month after the company ends support for its older models.

Here’s what you need to know about Amazon’s decision to sunset its older e-readers and tablets, and what that means for you."

AI use surges among policymakers; Axios, April 23, 2026

 Eleanor Hawkins, Axios; AI use surges among policymakers

"AI is no longer just a research tool in Washington, D.C. — it's starting to shape how policymakers form opinions, according to Penta Group data shared exclusively with Axios. 

Why it matters: Policymakers are the latest to lean on AI for guidance, signaling its growing role in shaping decisions across markets, consumer behavior and now public policy.

By the numbers: Penta Group surveyed 2,060 U.S. federal policymakers and senior staff across Congress, the administration and federal agencies, and found that 27% say AI informs their perspective on a topic — up from 17% in 2025 — putting AI on par with traditional sources like experts and web searches...

The intrigue: Republican policymakers are about 1.2 times more likely than Democratic policymakers to use AI daily (69% compared to 57%).

Republicans are also more likely to find AI helpful in shaping their perspectives (30% vs. 23% for Democrats).

Meanwhile, Democrats are more than twice as likely to avoid AI entirely: 13% say they don't use it in their daily work, compared with 5% of Republicans."

CC’s position on key copyright issues; Creative Commons, April 22, 2026

Creative Commons; CC’s position on key copyright issues

"Creative Commons was founded in response to the radical expansion of copyright in the 20th century."

Wednesday, April 22, 2026

Authors Guild Addresses Publishers’ AI Use; Publishers Weekly, April 21, 2026

Sam Spratford, Publishers Weekly; Authors Guild Addresses Publishers’ AI Use

"The Authors Guild has released a statement criticizing publishing professionals’ use of AI tools following a report first published in the Bookseller that some editors have been uploading authors’ personal information, including manuscripts, into consumer-facing LLMs like ChatGPT.

“Uploading or inputting a copyrighted work or an author’s personal information into AI systems without permission may constitute a violation of the author’s copyright or right of privacy, and it puts the author’s intellectual property and personal information at risk,” the statement read. “Editors, agents, and others in the industry who have access to authors’ works should not upload any manuscript to or otherwise prompt consumer-facing chatbots with any author’s works without first getting the author’s written permission.”"

A.I. ‘Hallucinations’ Created Errors in Court Filing, Top Law Firm Says; The New York Times, April 21, 2026

The New York Times; A.I. ‘Hallucinations’ Created Errors in Court Filing, Top Law Firm Says

Sullivan & Cromwell apologized for submitting a court document that had fake citations created by artificial intelligence.

"An elite Wall Street law firm has apologized to a federal judge for submitting a court filing replete with errors created by artificial intelligence, including “hallucinations” that fabricated case citations.

The A.I.-generated errors came in a recent motion in U.S. Bankruptcy Court in Manhattan and were discovered by lawyers from an opposing firm, Andrew Dietderich, a partner at Sullivan & Cromwell, wrote in a letter to Judge Martin Glenn on April 18."

Anthropic’s Leaked Code Tests Copyright Challenges in A.I. Era; The New York Times, April 22, 2026

The New York Times; Anthropic’s Leaked Code Tests Copyright Challenges in A.I. Era

Artificial intelligence tools are making it faster than ever to reproduce creative work. Does copyright even matter anymore?

"Sigrid Jin was waiting to board a plane when he saw stunning news that artificial intelligence start-up Anthropic had accidentally leaked the source code for Claude Code, its popular A.I. agent. Mr. Jin, 25, an undergraduate student, scrambled to post a copy online. His worried girlfriend quickly texted him: Was he violating copyright law?

Mr. Jin turned to a team of A.I. assistants for a solution. He directed them to rewrite the leaked code in another programming language, then shared that version online. Within hours, more than 100,000 people had liked or linked to it.

Anthropic, one of the leading A.I. companies alongside OpenAI, has said the leak had been caused by human error and, citing copyright violations, demanded that GitHub, an online library of computer code, remove posts sharing the code. Thousands of posts were taken down. But Mr. Jin’s version remains online. He said Anthropic had not asked him to take it down.

It is unclear whether Anthropic, which did not respond to questions from The New York Times, is drawing a distinction with the rewritten code. Mr. Jin said he believed rewriting the code transformed it into a new work, one that Anthropic could not claim ownership over.

He said he was driven less by money or fame than by a desire to make a broader philosophical point. What is the value of copyrighted intellectual property in an era when A.I. can easily replicate not just computer code but art, music and literature in minutes?

“I just wanted to raise some ethical questions in the A.I. agent era,” he said. “Any creative work can be reproduced in a second.”"

When AI advice enters a murder case; Politico, April 22, 2026

Aaron Man, Politico; When AI advice enters a murder case

"Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI on Tuesday following a mass shooting at Florida State University that resulted in two deaths last year. The attorney general stated during a press conference that ChatGPT “offered significant advice” to the suspected gunman, Phoenix Ikner, based on a preliminary review by prosecutors.

“If this were a person on the other end of the screen, we would be charging them with murder,” Uthmeier said.

The prospect of OpenAI facing criminal liability raises new questions about whether developers should be held responsible for a chatbot’s potential role in such a tragedy.

Legal scholars told DFD that, compared with prior civil cases, imposing criminal liability on the company would be a much steeper uphill battle. A key challenge, according to them, would be proving OpenAI acted with criminal intent."

Technology Leadership Includes a Duty to Care for our Most Vulnerable Populations; Santa Clara University Markkula Center for Applied Ethics, April 20, 2026

Ann Skeet, Santa Clara University Markkula Center for Applied Ethics; Technology Leadership Includes a Duty to Care for our Most Vulnerable Populations

"Notably, what Meta did not say after the verdict was that it was sorry for the harms children experienced."

Anthropic Wants Claude to Be Moral. Is Religion Really the Answer?; The New York Times, April 20, 2026

 David DeSteno, The New York Times; Anthropic Wants Claude to Be Moral. Is Religion Really the Answer?

"In a public statement of its intentions for its Claude chatbot, the artificial intelligence company Anthropic has said that it wants Claude to be “a genuinely good, wise and virtuous agent.” The company raised the moral stakes this month, when it announced that its latest A.I. model, Claude Mythos Preview, poses too great a cybersecurity threat to be widely released. Behind the scenes, Anthropic has been trying to shore up the ethical foundations of its products, working with a Catholic priest and consulting with other prominent Christians to help foster Claude’s moral and spiritual development.

Anthropic’s intentions are admirable, but the project of drawing on religion to cultivate the ethical behavior of Claude (or any other chatbot) is likely to fail. Not because there isn’t moral wisdom in Scripture, sermons and theological treatises — texts that Claude has undoubtedly already scraped from the web and integrated — but because Claude is missing a crucial mechanism by which religion fosters moral growth: a body."

Tuesday, April 21, 2026

The Onion Has a New Plan to Take Over Infowars; The New York Times, April 21, 2026

Benjamin Mullin, The New York Times; The Onion Has a New Plan to Take Over Infowars

"When Infowars, the website founded by the right-wing conspiracist Alex Jones, came up for sale two years ago, an unlikely suitor stepped up. The Onion, a satirical news outlet, planned to convert the site into a parody of itself.

That sale was scuttled by a bankruptcy court. Now, The Onion has re-emerged with a new plan: licensing the website from Gregory Milligan, the court-appointed manager of the site.

On Monday, Mr. Milligan asked Maya Guerra Gamble, a judge in Texas’ Travis County District Court overseeing the disposition of Infowars, to approve that licensing agreement in a court filing. Under the terms, The Onion’s parent company, Global Tetrahedron, would pay $81,000 a month to license Infowars.com and its associated intellectual property — such as its name — for an initial six months, with an option to renew for another six months.

The licensing deal has been agreed to by The Onion and the court-appointed administrator. But it is not effective until Judge Guerra Gamble approves it, and Mr. Jones could appeal any ruling. That means the fate of Infowars remains in limbo until the court rules, probably sometime in the next two weeks. Mr. Jones continues to operate Infowars.com and host its weekday program, “The Alex Jones Show.”"

YouTube Opens Up AI Deepfake Detection Tool to All of Hollywood (Exclusive); The Hollywood Reporter, April 21, 2026

Alex Weprin, The Hollywood Reporter; YouTube Opens Up AI Deepfake Detection Tool to All of Hollywood (Exclusive)

The tool, which requires a celebrity to upload their likeness, will flag potentially infringing content — like, say, a star playing a role in a fan-generated movie — for a possible takedown.

"Executives at the Google-owned platform tell The Hollywood Reporter that their proprietary deepfake detection tool, years in the making, is now open to anyone at high risk of having their likeness abused: Actors, athletes, creators and musicians, whether they have a YouTube channel or not, can sign up to identify and request removal of deepfakes on its platform...

The timing of the tool’s expansion comes as the industry grapples with the continued growth of deepfakes across platforms, and with video models quickly turning hypothetical worst-case scenarios into reality for many stars."

Church of Jesus Christ files trademark complaint against podcaster for alleged imitation of brands; DeseretNews, April 19, 2026

Tad Walch, DeseretNews; Church of Jesus Christ files trademark complaint against podcaster for alleged imitation of brands

"A trademark dispute is headed to the courts after The Church of Jesus Christ of Latter-day Saints and podcaster John Dehlin failed to reach a solution after five months of negotiations.

The church and its intellectual property entity, Intellectual Reserve Inc., filed a trademark and copyright complaint in federal court Friday that names Dehlin and his Open Stories Foundation.

The lawsuit claims Dehlin willfully and knowingly created confusion by using logos and marks similar to the church’s and by using copyrighted church photographs. 

Dehlin is the founder and host of the podcast “Mormon Stories.” He was excommunicated from the church in 2015."

Churchill Downs strikes $85m deal for Preakness intellectual property rights; The Guardian, April 21, 2026

The Guardian; Churchill Downs strikes $85m deal for Preakness intellectual property rights

"Churchill Downs has reached a deal to acquire the intellectual property rights to the Preakness Stakes, the company announced Tuesday, in a move that brings one of US thoroughbred racing’s most celebrated events under the same corporate umbrella as the Kentucky Derby.

Churchill Downs Inc said it will pay $85m to buy the trademarks and associated rights to the Preakness and the Black-Eyed Susan Stakes from 1/ST Maryland LLC, an affiliate of 1/ST Racing.

The agreement covers the intellectual property tied to the races, not the events themselves. Under a separate licensing arrangement, Churchill Downs will grant the state of Maryland the rights needed to continue staging the races in exchange for an annual fee.

The transaction follows a 2024 agreement in which Maryland bought Pimlico Race Course from 1/ST Racing but allowed the company to retain the intellectual property rights to the Preakness and Black-Eyed Susan Stakes. Under that arrangement, 1/ST received annual payments and a share of wagering revenue tied to the races.

The Preakness Stakes, first run in 1873, is the second leg of US horse racing’s Triple Crown, held two weeks after the Kentucky Derby and followed by the Belmont Stakes. The Black-Eyed Susan Stakes, a major race for three-year-old fillies, is traditionally run the day before the Preakness at Pimlico Race Course in northwest Baltimore."

Japanese Man Sentenced For Posting ‘Godzilla’ Spoilers Online; Gizmodo, April 21, 2026

Gizmodo; Japanese Man Sentenced For Posting ‘Godzilla’ Spoilers Online

39-year-old Wataru Takeuchi was charged for posting plot descriptions of movies and anime series so detailed that a Tokyo court declared them copyright infringement.

"Last week, Asahi (via The Verge) reported that the Tokyo District Court sentenced Wataru Takeuchi to a year and a half in prison, as well as a 1 million yen (roughly $6,300) fine, for running foul of Japan’s copyright laws. Takeuchi was the administrator of an entertainment site littered with articles (and, perhaps most crucially, monetized ad displays, from which Takeuchi made almost a quarter of a million dollars in 2023) that went into great length and spoiler-filled detail to summarize the plots of popular shows and movies."

Palantir manifesto described as ‘ramblings of a supervillain’ amid UK contract fears; The Guardian, April 21, 2026

The Guardian; Palantir manifesto described as ‘ramblings of a supervillain’ amid UK contract fears

Alarm caused by posts of Alex Karp, tech firm’s CEO, championing US military dominance and AI weapons

"The US spy tech company Palantir published a manifesto extolling the benefits of American power and implying some cultures are inferior to others – in what MPs have called “a parody of a RoboCop film” and “the ramblings of a supervillain”.

“Some cultures have produced vital advances; others remain dysfunctional and regressive,” wrote Palantir in a 22-point post on X over the weekend, which also called for an end to the “postwar neutering” of Germany and Japan...

The pronouncement is the most recent of a number of high-profile statements from Palantir and its chief executive, Alex Karp, which appear to indicate that Karp views himself as not simply the head of a software company, but a pundit with important insights into the future of civilisation...

In an interview with CNBC in early March, Karp suggested that AI would “disrupt” the power of “highly educated, often female voters who vote mostly Democrat”, and instead empower “vocationally trained, often male, working-class voters”."

Anthropic Settlement Hearing Comes into Focus; Publishers Weekly, April 20, 2026

Jim Milliot, Publishers Weekly; Anthropic Settlement Hearing Comes into Focus

"With the May 14 Bartz v. Anthropic settlement fairness hearing drawing closer, both the Authors Guild and Authors Alliance have issued updates on where the $1.5 billion copyright infringement agreement stands....

The Guild noted that with the higher claim rate, the payout per work will be closer to the $3,000 per work estimated in the lawsuit rather than the $4,876 payout that was based on the number of works claimed in March...

The Authors Alliance update focused on the various objections that have been made about the settlement and which are likely to be raised in the settlement hearing. The objections were unsealed following a motion filed by professor Lea Victoria Bishop.

Among the objections are the claims that the distribution plan systematically favors publishers over authors; that the class notice was “misleading/coercive,” since statutory damages per infringement can technically be up to $150,000/work, which would make the settlement amount per work inadequate; and that the settlement sets a “dangerous precedent” by permitting “a multi-billion dollar AI company to ‘buy’ its way out of massive piracy for a ‘discounted’ rate.”

Judge Martínez-Olguín, who took over the case following the retirement of Judge William Alsup, will oversee the May 14 hearing set for 2 p.m. in the San Francisco Federal Courthouse. A Zoom link will be available for those who cannot make the trip to San Francisco."