Showing posts with label privacy. Show all posts

Wednesday, April 22, 2026

Authors Guild Addresses Publishers’ AI Use; Publishers Weekly, April 21, 2026

Sam Spratford, Publishers Weekly; Authors Guild Addresses Publishers’ AI Use

"The Authors Guild has released a statement criticizing publishing professionals’ use of AI tools following a report first published in the Bookseller that some editors have been uploading authors’ personal information, including manuscripts, into consumer-facing LLMs like ChatGPT.

“Uploading or inputting a copyrighted work or an author’s personal information into AI systems without permission may constitute a violation of the author’s copyright or right of privacy, and it puts the author’s intellectual property and personal information at risk,” the statement read. “Editors, agents, and others in the industry who have access to authors’ works should not upload any manuscript to or otherwise prompt consumer-facing chatbots with any author’s works without first getting the author’s written permission.”"

Tuesday, April 21, 2026

Palantir manifesto described as ‘ramblings of a supervillain’ amid UK contract fears; The Guardian, April 21, 2026

The Guardian; Palantir manifesto described as ‘ramblings of a supervillain’ amid UK contract fears

Alarm caused by posts of Alex Karp, tech firm’s CEO, championing US military dominance and AI weapons

"The US spy tech company Palantir published a manifesto extolling the benefits of American power and implying some cultures are inferior to others – in what MPs have called “a parody of a RoboCop film” and “the ramblings of a supervillain”.

“Some cultures have produced vital advances; others remain dysfunctional and regressive,” wrote Palantir in a 22-point post on X over the weekend, which also called for an end to the “postwar neutering” of Germany and Japan...

The pronouncement is the most recent of a number of high-profile statements from Palantir and its chief executive, Alex Karp, which appear to indicate that Karp views himself as not simply the head of a software company, but a pundit with important insights into the future of civilisation."...

In an interview with CNBC in early March, Karp suggested that AI would “disrupt” the power of “highly educated, often female voters who vote mostly Democrat”, and instead empower “vocationally trained, often male, working-class voters”."

Monday, April 20, 2026

Can AI judge journalism? A Thiel-backed startup says yes, even if it risks chilling whistleblowers; TechCrunch, April 15, 2026

Rebecca Bellan, TechCrunch; Can AI judge journalism? A Thiel-backed startup says yes, even if it risks chilling whistleblowers

"After helping lead the lawsuit that bankrupted media firm Gawker, Aron D’Souza says he saw something broken in the American media system: People who felt harmed by coverage had little recourse to fight back.

His solution is software. D’Souza says his latest startup, Objection, aims to use AI to adjudicate the truth of journalism. And for the price of $2,000, anyone can pay to challenge a story, triggering a public investigation into its claims. (D’Souza is also the founder of the Enhanced Games, an Olympics-style competition that allows performance-enhancing drugs and is set to debut in Las Vegas next month.)

Objection launched on Wednesday with “multiple millions” in seed funding from Peter Thiel and Balaji Srinivasan, as well as VC firms Social Impact Capital and Off Piste Capital. 

Thiel, who funded the Gawker lawsuit partly in defense of the individual right to privacy, has long been critical of the media. D’Souza says his goal is to restore trust in the Fourth Estate, which he argues has collapsed over decades. Critics, including media lawyers, warn Objection could make it harder to publish the kind of reporting that holds powerful institutions to account, particularly if that reporting relies on confidential sources."

Maryland passes legislation banning retailers from using personal data to set prices. Does it do enough?; WAMU, April 17, 2026

Esther Ciammichilli and Jackson Sinnenberg, WAMU; Maryland passes legislation banning retailers from using personal data to set prices. Does it do enough?

"The Maryland General Assembly passed a bill this week will prohibit food retailers from changing the price of their products – in real time – depending on who is buying them. The practice is called dynamic pricing. 

The new legislation is expected to be signed into law by Governor Wes Moore, who introduced it with leaders in the General Assembly. It will specifically prohibit retailers from using protected personal data to set prices for individual customers. This kind of data includes biometric information as well as ethnicity, sex, and gender identity...

What made Governor Wes Moore and the assembly leadership want to tackle dynamic pricing during this session?

Well, I think we’ve seen over the last several years this sort of catch up that we’re doing. Technology is moving so fast and the tech companies are finding more and more ways to exploit, really, the data, the algorithms, what they know about us in ways that are really harmful to consumers.

Over the last few years we’ve had several bills that are about protecting biometric data, protecting consumer privacy, protecting the use of data without people’s permission. I think over the last year we saw a new way that these tech companies and these large corporations are finding ways to combine data brokers, private personal data, in a way that’s really harmful to consumers, in a way that really exploits consumers. And so this year, this is what we tackled.

During the final debate over the bill last week, you said, “One of the largest corporations in the world is announcing to their shareholders technology which they will patent to be able to adjust prices based on personal data.” Can you elaborate on the details of that announcement?

Yeah, so, you know, Walmart is … they’re not going to have paper tags in their grocery stores anymore for their prices. They’re gonna have these little screens that can change immediately. Digital screens to price your milk and your eggs and flour and whatever else.

But what this technology allows them to do ultimately is to figure out who’s standing in front of that screen and change the price based on who you are. And that’s really the thing that we’re trying to get ahead of with this legislation."

Sunday, April 5, 2026

What Teens Are Doing With Those Role-Playing Chatbots; The New York Times, April 4, 2026

The New York Times; What Teens Are Doing With Those Role-Playing Chatbots

"There are a growing number of companies offering social chatbots that can act like friends, enemies, lovers, adventurous companions, or the manifestation of a fictional or real person you’ve always wanted to meet. You can pick A.I. Elon Musk’s brain or spar with A.I. Draco Malfoy. The myriad characters, often created by fellow users, offer drama, romance, therapy and LOLs.

Apps that feature role-playing chatbots are used by tens of millions of people, with engagement times that rival or surpass those of social media behemoths such as TikTok, according to market intelligence firm Sensor Tower. The majority of teens surveyed by Pew use A.I. chatbots, with one out of 11 saying they had used Character.AI.

“If you think your child is not talking to chatbot companions, you’re probably wrong,” said Mitch Prinstein, co-director of the Winston Center on Technology and Brain Development at U.N.C. Chapel Hill.

Chatbots are surging in popularity as society is still grappling with how social media has affected young people; a wave of lawsuits is moving through the courts seeking damages from companies that plaintiffs say have deliberately created addictive products. (A jury in California recently found that Meta and YouTube were liable for $6 million in damages to one young woman.) And now parents and caregivers have a new attention-absorbing technology to reckon with.

At the beginning of last year, a high school teacher in Chicago told me that some of her students were dating chatbots, and she worried that they were having their first erotic experiences with them. I wanted to find out what teens had to say about that, so I joined communities devoted to social chatbot apps on the online messaging forum Discord. I introduced myself as a reporter and “an old,” and explained that I was interested in talking to young people who used the services regularly."

Tuesday, March 31, 2026

I broke up with my Kindle. My new e-reader treats me better.; The Washington Post, March 31, 2026

The Washington Post; I broke up with my Kindle. My new e-reader treats me better.

After Amazon’s Kindle removed my ability to download and back up my own e-books, I went in search of an alternative.

"As corporate walled gardens have replaced the freewheeling, open internet of the 1990s and 2000s, we’ve ceded control over almost everything about our online experience. Nearly every keystroke, swipe and tap is now monitored, recorded and analyzed for potential profit.

The Kindle ecosystem is perhaps the apotheosis of this shift. One Guardian reporter found Amazon had recorded every title, highlight and page turn on her Kindle app (40,000 entries over two years). The company’s dominance sets the terms for everyone in the marketplace.

Including me. Like tens of millions of others, I have owned a Kindle (a Paperwhite). Last year, it started to feel as if it owned me. The final straw was when Kindle removed my ability to download and back up my own e-books. So I went in search of an alternative.

I bought a Kobo.

Was it the bibliophile Eden some Kobo fans described? Not quite. The reality was messier than I expected. It turns out we can’t escape Big Brother on our e-readers just yet. But a more open society is coming into view for book lovers — and perhaps all of us.

Here’s how to turn the page."

Tuesday, March 10, 2026

OpenAI robotics leader resigns over concerns about Pentagon AI deal; NPR, March 8, 2026

NPR; OpenAI robotics leader resigns over concerns about Pentagon AI deal

"A senior member of OpenAI's robotics team has resigned, citing concerns about how the company moved forward with a recently announced partnership with the U.S. Department of Defense.

Caitlin Kalinowski, who served as a member of technical staff focused on robotics and hardware, posted on social media that she had stepped down on "principle" after the company revealed plans to make its AI systems available inside secure Defense Department computing systems...

In public posts explaining her decision, Kalinowski wrote: "I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn't an easy call."

She said policy guardrails around certain AI uses were not sufficiently defined before OpenAI announced an agreement with the Pentagon. "AI has an important role in national security," Kalinowski wrote. "But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.""

Sunday, March 8, 2026

Anthropic’s Ethical Stand Could Be Paying Off; The Atlantic, March 7, 2026

Ken Harbaugh, The Atlantic; Anthropic’s Ethical Stand Could Be Paying Off

"The events of the past week reminded me of my early days as a Navy pilot nearly three decades ago. One of my first tasks was to sign a document pledging never to surveil American citizens. By the time of the 9/11 attacks, I was an aircraft commander, leading combat-reconnaissance aircrews that gathered large-scale intelligence and informed battlefield targeting decisions. I took for granted that somewhere along those decision chains, a human being was in the loop.

I could not have defined artificial intelligence then, but I understood instinctively that a person, not a machine, would bear the weight of life-and-death choices. This was not a bureaucratic consideration. It was a hard line that those of us in uniform were expected to hold.

In the standoff between Anthropic and the Pentagon, a private company was forced to hold the line against its own government. In doing so, Anthropic may have earned something more valuable than the contract it lost. In an industry where trust is the scarcest resource, Anthropic just banked a substantial deposit."

Wednesday, February 18, 2026

Dinner Is Being Recorded, Whether You Know It or Not; The New York Times, February 16, 2026

  , The New York Times; Dinner Is Being Recorded, Whether You Know It or Not

"To be in public is to risk being filmed. And these days, there’s a good chance it’s happening surreptitiously with smart glasses. Their wearers are filming in restaurants, cafes and bars, capturing warped, eye-level video of drive-through pranks, Michelin-starred meals and work shifts at Texas Roadhouse. Servers, owners and customers can end up as captive participants...

Filming in public spaces is broadly protected by the First Amendment. Some states, including California and Pennsylvania, have two-party consent laws that prohibit recording without express permission, but enforcing them hinges on whether someone has a “reasonable expectation of privacy” in a given setting, said Aaron Krowne, a New York City lawyer specializing in privacy and civil liberties. Restaurants fall in a legal gray area: They are privately owned, but open to anyone who walks in...

The responsibility of using these devices ethically falls largely on the wearer."

Saturday, February 7, 2026

Moltbook was peak AI theater; MIT Technology Review, February 6, 2026

Will Douglas Heaven, MIT Technology Review; Moltbook was peak AI theater

"Perhaps the best way to think of Moltbook is as a new kind of entertainment: a place where people wind up their bots and set them loose. “It’s basically a spectator sport, like fantasy football, but for language models,” says Jason Schloetzer at the Georgetown Psaros Center for Financial Markets and Policy. “You configure your agent and watch it compete for viral moments, and brag when your agent posts something clever or funny.”

“People aren’t really believing their agents are conscious,” he adds. “It’s just a new form of competitive or creative play, like how Pokémon trainers don’t think their Pokémon are real but still get invested in battles.”

Even if Moltbook is just the internet’s newest playground, there’s still a serious takeaway here. This week showed how many risks people are happy to take for their AI lulz. Many security experts have warned that Moltbook is dangerous: Agents that may have access to their users’ private data, including bank details or passwords, are running amok on a website filled with unvetted content, including potentially malicious instructions for what to do with that data."


Saturday, January 10, 2026

Trump may be the beginning of the end for ‘enshittification’ – this is our chance to make tech good again; The Guardian, January 10, 2026

The Guardian; Trump may be the beginning of the end for ‘enshittification’ – this is our chance to make tech good again

"Until we repeal the anti-circumvention law, we can’t reverse-engineer the US’s cloud software, whether it’s a database, a word processor or a tractor, in order to swap out proprietary, American code for robust, open, auditable alternatives that will safeguard our digital sovereignty. The same goes for any technology tethered to servers operated by any government that might have interests adverse to ours – say, the solar inverters and batteries we buy from China.

This is the state of play at the dawn of 2026. The digital rights movement has two powerful potential coalition partners in the fight to reclaim the right of people to change how their devices work, to claw back privacy and a fair deal from tech: investors and national security hawks.

Admittedly, the door is only open a crack, but it’s been locked tight since the turn of the century. When it comes to a better technology future, “open a crack” is the most exciting proposition I’ve heard in decades."

Thursday, December 4, 2025

OpenAI loses fight to keep ChatGPT logs secret in copyright case; Reuters, December 3, 2025

Reuters; OpenAI loses fight to keep ChatGPT logs secret in copyright case

"OpenAI must produce millions of anonymized chat logs from ChatGPT users in its high-stakes copyright dispute with the New York Times and other news outlets, a federal judge in Manhattan ruled.

U.S. Magistrate Judge Ona Wang in a decision made public on Wednesday said that the 20 million logs were relevant to the outlets' claims and that handing them over would not risk violating users' privacy."

Tuesday, November 18, 2025

OpenAI’s Privacy Bet in Copyright Suit Puts Chatbots on Alert; Bloomberg Law, November 18, 2025

Aruni Soni, Bloomberg Law; OpenAI’s Privacy Bet in Copyright Suit Puts Chatbots on Alert

"OpenAI Inc. is banking on a privacy argument to block a court’s probe into millions of ChatGPT user conversations. 

That hasn’t worked so far as a winning legal strategy that can be used by other chatbot makers anticipating similar discovery demands in exploding chatbot-related litigation.

Instead, it threatens to turn attention to just how much information chatbots like ChatGPT are collecting and retaining about their users."

Friday, November 7, 2025

The ethics of AI, from policing to healthcare; KPBS, November 3, 2025

Jade Hindmon / KPBS Midday Edition Host, Ashley Rusch / Producer, KPBS; The ethics of AI, from policing to healthcare

"Artificial intelligence is everywhere — from our office buildings, to schools and government agencies.

The Chula Vista Police Department is joining cities to use AI to write police reports. Several San Diego County police departments also use AI-powered drones to support their work. 

Civil liberties advocates are concerned about privacy, safety and surveillance. 

On Midday Edition, we sit down with an expert in AI ethics to discuss the philosophical questions of responsible AI.

Guest:

  • David Danks, professor of data science, philosophy and policy at UC San Diego"

Wednesday, August 20, 2025

Victory! Ninth Circuit Limits Intrusive DMCA Subpoenas; Electronic Frontier Foundation (EFF), August 18, 2025

Tori Noble, Electronic Frontier Foundation (EFF); Victory! Ninth Circuit Limits Intrusive DMCA Subpoenas

"Fortunately, Section 512(h) has an important limitation that protects users.  Over two decades ago, several federal appeals courts ruled that Section 512(h) subpoenas cannot be issued to ISPs. Now, in In re Internet Subscribers of Cox Communications, LLC, the Ninth Circuit agreed, as EFF urged it to in our amicus brief."

Friday, July 25, 2025

Trump’s Comments Undermine AI Action Plan, Threaten Copyright; Publishers Weekly, July 23, 2025

Ed Nawotka, Publishers Weekly; Trump’s Comments Undermine AI Action Plan, Threaten Copyright

"Senate bill proposes 'opt-in' legislation

Trump's comments come on the heels of the introduction, by U.S. senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.), of the AI Accountability and Personal Data Protection Act this past Monday following a hearing last week on AI companies' copyright infringement. The bipartisan legislation aims to hold AI firms liable for using copyrighted works or personal data without acquiring explicit consent to train AI models. It would empower individuals—including writers, artists, and content creators—to sue companies in federal court if their data or copyrighted works are used without consent. It also supports class action lawsuits and advocates for violators to pay robust penalties.

"AI companies are robbing the American people blind while leaving artists, writers, and other creators with zero recourse," said Hawley. "It’s time for Congress to give the American worker their day in court to protect their personal data and creative works. My bipartisan legislation would finally empower working Americans who now find their livelihoods in the crosshairs of Big Tech’s lawlessness."

"This bill embodies a bipartisan consensus that AI safeguards are urgent—because the technology is moving at accelerating speed, and so are dangers to privacy," added Blumenthal. "Enforceable rules can put consumers back in control of their data, and help bar abuses. Tech companies must be held accountable—and liable legally—when they breach consumer privacy, collecting, monetizing or sharing personal information without express consent. Consumers must be given rights and remedies—and legal tools to make them real—not relying on government enforcement alone."

Tuesday, June 24, 2025

Copyright Cases Should Not Threaten Chatbot Users’ Privacy; Electronic Frontier Foundation (EFF), June 23, 2025

Tori Noble, Electronic Frontier Foundation (EFF); Copyright Cases Should Not Threaten Chatbot Users’ Privacy

"Like users of all technologies, ChatGPT users deserve the right to delete their personal data. Nineteen U.S. States, the European Union, and a host of other countries already protect users’ right to delete. For years, OpenAI gave users the option to delete their conversations with ChatGPT, rather than let their personal queries linger on corporate servers. Now, they can’t. A badly misguided court order in a copyright lawsuit requires OpenAI to store all consumer ChatGPT conversations indefinitely—even if a user tries to delete them. This sweeping order far outstrips the needs of the case and sets a dangerous precedent by disregarding millions of users’ privacy rights.

The privacy harms here are significant. ChatGPT’s 300+ million users submit over 1 billion messages to its chatbots per day, often for personal purposes. Virtually any personal use of a chatbot—anything from planning family vacations and daily habits to creating social media posts and fantasy worlds for Dungeons and Dragons games—reveals personal details that, in aggregate, create a comprehensive portrait of a person’s entire life. Other uses risk revealing people’s most sensitive information. For example, tens of millions of Americans use ChatGPT to obtain medical and financial information. Notwithstanding other risks of these uses, people still deserve privacy rights like the right to delete their data. Eliminating protections for user-deleted data risks chilling beneficial uses by individuals who want to protect their privacy."

Tuesday, June 10, 2025

Global AI: Compression, Complexity, and the Call for Rigorous Oversight; ABA SciTech Lawyer, May 9, 2025

Joan Rose Marie Bullock, ABA SciTech Lawyer; Global AI: Compression, Complexity, and the Call for Rigorous Oversight

"Equally critical is resisting haste. The push to deploy AI, whether in threat detection or data processing, often outpaces scrutiny. Rushed implementations, like untested algorithms in critical systems, can backfire, as any cybersecurity professional can attest from post-incident analyses. The maxim of “measure twice, cut once” applies here: thorough vetting trumps speed. Lawyers, trained in precedent, recognize the cost of acting without foresight; technologists, steeped in iterative testing, understand the value of validation. Prioritizing diligence over being first mitigates catastrophic failures of privacy breaches or security lapses that ripple worldwide."

Sunday, June 8, 2025

OpenAI to appeal copyright ruling in NY Times case as Altman calls for 'AI privilege'; Fox Business, June 6, 2025

Fox Business; OpenAI to appeal copyright ruling in NY Times case as Altman calls for 'AI privilege'

"The OpenAI co-founder said the case has accelerated the need for a conversation about "AI privilege," in which "talking to an AI should be like talking to a lawyer or a doctor.""

Monday, January 6, 2025

OpenAI holds off on promise to creators, fails to protect intellectual property; The American Bazaar, January 3, 2025

 Vishnu Kamal, The American Bazaar; OpenAI holds off on promise to creators, fails to protect intellectual property

"OpenAI may yet again be in hot water as it seems that the tech giant may be reneging on its earlier assurances. Reportedly, in May, OpenAI said it was developing a tool to let creators specify how they want their works to be included in—or excluded from—its AI training data. But seven months later, this feature has yet to see the light of day.

Called Media Manager, the tool would “identify copyrighted text, images, audio, and video,” OpenAI said at the time, to reflect creators’ preferences “across multiple sources.” It was intended to stave off some of the company’s fiercest critics, and potentially shield OpenAI from IP-related legal challenges...

OpenAI has faced various legal challenges related to its AI technologies and operations. One major issue involves the privacy and data usage of its language models, which are trained on large datasets that may include publicly available or copyrighted material. This raises concerns over privacy violations and intellectual property rights, especially regarding whether the data used for training was obtained with proper consent.

Additionally, there are questions about the ownership of content generated by OpenAI’s models. If an AI produces a work based on copyrighted data, it is tricky to determine who owns the rights—whether it’s OpenAI, the user who prompted the AI, or the creators of the original data.

Another concern is the liability for harmful content produced by AI. If an AI generates misleading or defamatory information, legal responsibility could fall on OpenAI."