Showing posts with label privacy. Show all posts

Sunday, April 5, 2026

What Teens Are Doing With Those Role-Playing Chatbots; The New York Times, April 4, 2026

The New York Times; What Teens Are Doing With Those Role-Playing Chatbots

"There are a growing number of companies offering social chatbots that can act like friends, enemies, lovers, adventurous companions, or the manifestation of a fictional or real person you’ve always wanted to meet. You can pick A.I. Elon Musk’s brain or spar with A.I. Draco Malfoy. The myriad characters, often created by fellow users, offer drama, romance, therapy and LOLs.

Apps that feature role-playing chatbots are used by tens of millions of people, with engagement times that rival or surpass those of social media behemoths such as TikTok, according to market intelligence firm Sensor Tower. The majority of teens surveyed by Pew use A.I. chatbots, with one out of 11 saying they had used Character.AI.

“If you think your child is not talking to chatbot companions, you’re probably wrong,” said Mitch Prinstein, co-director of the Winston Center on Technology and Brain Development at U.N.C. Chapel Hill.

Chatbots are surging in popularity as society is still grappling with how social media has affected young people; a wave of lawsuits is moving through the courts seeking damages from companies that plaintiffs say have deliberately created addictive products. (A jury in California recently found that Meta and YouTube were liable for $6 million in damages to one young woman.) And now parents and caregivers have a new attention-absorbing technology to reckon with.

At the beginning of last year, a high school teacher in Chicago told me that some of her students were dating chatbots, and she worried that they were having their first erotic experiences with them. I wanted to find out what teens had to say about that, so I joined communities devoted to social chatbot apps on the online messaging forum Discord. I introduced myself as a reporter and “an old,” and explained that I was interested in talking to young people who used the services regularly."

Tuesday, March 31, 2026

I broke up with my Kindle. My new e-reader treats me better.; The Washington Post, March 31, 2026

The Washington Post; I broke up with my Kindle. My new e-reader treats me better.

After Amazon’s Kindle removed my ability to download and back up my own e-books, I went in search of an alternative.

"As corporate walled gardens have replaced the freewheeling, open internet of the 1990s and 2000s, we’ve ceded control over almost everything about our online experience. Nearly every keystroke, swipe and tap is now monitored, recorded and analyzed for potential profit.

The Kindle ecosystem is perhaps the apotheosis of this shift. One Guardian reporter found Amazon had recorded every title, highlight and page turn on her Kindle app (40,000 entries over two years). The company’s dominance sets the terms for everyone in the marketplace.

Including me. Like tens of millions of others, I have owned a Kindle (a Paperwhite). Last year, it started to feel as if it owned me. The final straw was when Kindle removed my ability to download and back up my own e-books. So I went in search of an alternative.

I bought a Kobo.

Was it the bibliophile Eden some Kobo fans described? Not quite. The reality was messier than I expected. It turns out we can’t escape Big Brother on our e-readers just yet. But a more open society is coming into view for book lovers — and perhaps all of us.

Here’s how to turn the page."

Tuesday, March 10, 2026

OpenAI robotics leader resigns over concerns about Pentagon AI deal; NPR, March 8, 2026

NPR; OpenAI robotics leader resigns over concerns about Pentagon AI deal

"A senior member of OpenAI's robotics team has resigned, citing concerns about how the company moved forward with a recently announced partnership with the U.S. Department of Defense.

Caitlin Kalinowski, who served as a member of technical staff focused on robotics and hardware, posted on social media that she had stepped down on "principle" after the company revealed plans to make its AI systems available inside secure Defense Department computing systems...

In public posts explaining her decision, Kalinowski wrote: "I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn't an easy call."

She said policy guardrails around certain AI uses were not sufficiently defined before OpenAI announced an agreement with the Pentagon. "AI has an important role in national security," Kalinowski wrote. "But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.""

Sunday, March 8, 2026

Anthropic’s Ethical Stand Could Be Paying Off; The Atlantic, March 7, 2026

Ken Harbaugh, The Atlantic; Anthropic’s Ethical Stand Could Be Paying Off

"The events of the past week reminded me of my early days as a Navy pilot nearly three decades ago. One of my first tasks was to sign a document pledging never to surveil American citizens. By the time of the 9/11 attacks, I was an aircraft commander, leading combat-reconnaissance aircrews that gathered large-scale intelligence and informed battlefield targeting decisions. I took for granted that somewhere along those decision chains, a human being was in the loop.

I could not have defined artificial intelligence then, but I understood instinctively that a person, not a machine, would bear the weight of life-and-death choices. This was not a bureaucratic consideration. It was a hard line that those of us in uniform were expected to hold.

In the standoff between Anthropic and the Pentagon, a private company was forced to hold the line against its own government. In doing so, Anthropic may have earned something more valuable than the contract it lost. In an industry where trust is the scarcest resource, Anthropic just banked a substantial deposit."

Wednesday, February 18, 2026

Dinner Is Being Recorded, Whether You Know It or Not; The New York Times, February 16, 2026

The New York Times; Dinner Is Being Recorded, Whether You Know It or Not

"To be in public is to risk being filmed. And these days, there’s a good chance it’s happening surreptitiously with smart glasses. Their wearers are filming in restaurants, cafes and bars, capturing warped, eye-level video of drive-through pranks, Michelin-starred meals and work shifts at Texas Roadhouse. Servers, owners and customers can end up as captive participants...

Filming in public spaces is broadly protected by the First Amendment. Some states, including California and Pennsylvania, have two-party consent laws that prohibit recording without express permission, but enforcing them hinges on whether someone has a “reasonable expectation of privacy” in a given setting, said Aaron Krowne, a New York City lawyer specializing in privacy and civil liberties. Restaurants fall in a legal gray area: They are privately owned, but open to anyone who walks in...

The responsibility of using these devices ethically falls largely on the wearer."

Saturday, February 7, 2026

Moltbook was peak AI theater; MIT Technology Review, February 6, 2026

Will Douglas Heaven, MIT Technology Review; Moltbook was peak AI theater

"Perhaps the best way to think of Moltbook is as a new kind of entertainment: a place where people wind up their bots and set them loose. “It’s basically a spectator sport, like fantasy football, but for language models,” says Jason Schloetzer at the Georgetown Psaros Center for Financial Markets and Policy. “You configure your agent and watch it compete for viral moments, and brag when your agent posts something clever or funny.”

“People aren’t really believing their agents are conscious,” he adds. “It’s just a new form of competitive or creative play, like how Pokémon trainers don’t think their Pokémon are real but still get invested in battles.”

Even if Moltbook is just the internet’s newest playground, there’s still a serious takeaway here. This week showed how many risks people are happy to take for their AI lulz. Many security experts have warned that Moltbook is dangerous: Agents that may have access to their users’ private data, including bank details or passwords, are running amok on a website filled with unvetted content, including potentially malicious instructions for what to do with that data."
Saturday, January 10, 2026

Trump may be the beginning of the end for ‘enshittification’ – this is our chance to make tech good again; The Guardian, January 10, 2026

The Guardian; Trump may be the beginning of the end for ‘enshittification’ – this is our chance to make tech good again

"Until we repeal the anti-circumvention law, we can’t reverse-engineer the US’s cloud software, whether it’s a database, a word processor or a tractor, in order to swap out proprietary, American code for robust, open, auditable alternatives that will safeguard our digital sovereignty. The same goes for any technology tethered to servers operated by any government that might have interests adverse to ours – say, the solar inverters and batteries we buy from China.

This is the state of play at the dawn of 2026. The digital rights movement has two powerful potential coalition partners in the fight to reclaim the right of people to change how their devices work, to claw back privacy and a fair deal from tech: investors and national security hawks.

Admittedly, the door is only open a crack, but it’s been locked tight since the turn of the century. When it comes to a better technology future, “open a crack” is the most exciting proposition I’ve heard in decades."

Thursday, December 4, 2025

OpenAI loses fight to keep ChatGPT logs secret in copyright case; Reuters, December 3, 2025

Reuters; OpenAI loses fight to keep ChatGPT logs secret in copyright case

"OpenAI must produce millions of anonymized chat logs from ChatGPT users in its high-stakes copyright dispute with the New York Times and other news outlets, a federal judge in Manhattan ruled.

U.S. Magistrate Judge Ona Wang in a decision made public on Wednesday said that the 20 million logs were relevant to the outlets' claims and that handing them over would not risk violating users' privacy."

Tuesday, November 18, 2025

OpenAI’s Privacy Bet in Copyright Suit Puts Chatbots on Alert; Bloomberg Law, November 18, 2025

Aruni Soni, Bloomberg Law; OpenAI’s Privacy Bet in Copyright Suit Puts Chatbots on Alert

"OpenAI Inc. is banking on a privacy argument to block a court’s probe into millions of ChatGPT user conversations. 

That hasn’t worked so far as a winning legal strategy that can be used by other chatbot makers anticipating similar discovery demands in exploding chatbot-related litigation.

Instead, it threatens to turn attention to just how much information chatbots like ChatGPT are collecting and retaining about their users."

Friday, November 7, 2025

The ethics of AI, from policing to healthcare; KPBS, November 3, 2025

Jade Hindmon, KPBS Midday Edition Host, and Ashley Rusch, Producer, KPBS; The ethics of AI, from policing to healthcare

"Artificial intelligence is everywhere — from our office buildings, to schools and government agencies.

The Chula Vista Police Department is joining cities to use AI to write police reports. Several San Diego County police departments also use AI-powered drones to support their work. 

Civil liberties advocates are concerned about privacy, safety and surveillance. 

On Midday Edition, we sit down with an expert in AI ethics to discuss the philosophical questions of responsible AI.

Guest:

  • David Danks, professor of data science, philosophy and policy at UC San Diego"

Wednesday, August 20, 2025

Victory! Ninth Circuit Limits Intrusive DMCA Subpoenas; Electronic Frontier Foundation (EFF), August 18, 2025

Tori Noble, Electronic Frontier Foundation (EFF); Victory! Ninth Circuit Limits Intrusive DMCA Subpoenas

"Fortunately, Section 512(h) has an important limitation that protects users. Over two decades ago, several federal appeals courts ruled that Section 512(h) subpoenas cannot be issued to ISPs. Now, in In re Internet Subscribers of Cox Communications, LLC, the Ninth Circuit agreed, as EFF urged it to in our amicus brief."

Friday, July 25, 2025

Trump’s Comments Undermine AI Action Plan, Threaten Copyright; Publishers Weekly, July 23, 2025

Ed Nawotka, Publishers Weekly; Trump’s Comments Undermine AI Action Plan, Threaten Copyright

"Senate bill proposes 'opt-in' legislation

Trump's comments come on the heels of the introduction, by U.S. senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.), of the AI Accountability and Personal Data Protection Act this past Monday following a hearing last week on AI companies' copyright infringement. The bipartisan legislation aims to hold AI firms liable for using copyrighted works or personal data without acquiring explicit consent to train AI models. It would empower individuals—including writers, artists, and content creators—to sue companies in federal court if their data or copyrighted works are used without consent. It also supports class action lawsuits and advocates for violators to pay robust penalties.

"AI companies are robbing the American people blind while leaving artists, writers, and other creators with zero recourse," said Hawley. "It’s time for Congress to give the American worker their day in court to protect their personal data and creative works. My bipartisan legislation would finally empower working Americans who now find their livelihoods in the crosshairs of Big Tech’s lawlessness."

"This bill embodies a bipartisan consensus that AI safeguards are urgent—because the technology is moving at accelerating speed, and so are dangers to privacy," added Blumenthal. "Enforceable rules can put consumers back in control of their data, and help bar abuses. Tech companies must be held accountable—and liable legally—when they breach consumer privacy, collecting, monetizing or sharing personal information without express consent. Consumers must be given rights and remedies—and legal tools to make them real—not relying on government enforcement alone.""

Tuesday, June 24, 2025

Copyright Cases Should Not Threaten Chatbot Users’ Privacy; Electronic Frontier Foundation (EFF), June 23, 2025

Tori Noble, Electronic Frontier Foundation (EFF); Copyright Cases Should Not Threaten Chatbot Users’ Privacy

"Like users of all technologies, ChatGPT users deserve the right to delete their personal data. Nineteen U.S. States, the European Union, and a host of other countries already protect users’ right to delete. For years, OpenAI gave users the option to delete their conversations with ChatGPT, rather than let their personal queries linger on corporate servers. Now, they can’t. A badly misguided court order in a copyright lawsuit requires OpenAI to store all consumer ChatGPT conversations indefinitely—even if a user tries to delete them. This sweeping order far outstrips the needs of the case and sets a dangerous precedent by disregarding millions of users’ privacy rights.

The privacy harms here are significant. ChatGPT’s 300+ million users submit over 1 billion messages to its chatbots per day, often for personal purposes. Virtually any personal use of a chatbot—anything from planning family vacations and daily habits to creating social media posts and fantasy worlds for Dungeons and Dragons games—reveals personal details that, in aggregate, create a comprehensive portrait of a person’s entire life. Other uses risk revealing people’s most sensitive information. For example, tens of millions of Americans use ChatGPT to obtain medical and financial information. Notwithstanding other risks of these uses, people still deserve privacy rights like the right to delete their data. Eliminating protections for user-deleted data risks chilling beneficial uses by individuals who want to protect their privacy."

Tuesday, June 10, 2025

Global AI: Compression, Complexity, and the Call for Rigorous Oversight; ABA SciTech Lawyer, May 9, 2025

Joan Rose Marie Bullock, ABA SciTech Lawyer; Global AI: Compression, Complexity, and the Call for Rigorous Oversight

"Equally critical is resisting haste. The push to deploy AI, whether in threat detection or data processing, often outpaces scrutiny. Rushed implementations, like untested algorithms in critical systems, can backfire, as any cybersecurity professional can attest from post-incident analyses. The maxim of “measure twice, cut once” applies here: thorough vetting trumps speed. Lawyers, trained in precedent, recognize the cost of acting without foresight; technologists, steeped in iterative testing, understand the value of validation. Prioritizing diligence over being first mitigates catastrophic failures of privacy breaches or security lapses that ripple worldwide."

Sunday, June 8, 2025

OpenAI to appeal copyright ruling in NY Times case as Altman calls for 'AI privilege'; Foxbusiness, June 6, 2025

Foxbusiness; OpenAI to appeal copyright ruling in NY Times case as Altman calls for 'AI privilege'

"The OpenAI co-founder said the case has accelerated the need for a conversation about "AI privilege," in which "talking to an AI should be like talking to a lawyer or a doctor.""

Monday, January 6, 2025

OpenAI holds off on promise to creators, fails to protect intellectual property; The American Bazaar, January 3, 2025

Vishnu Kamal, The American Bazaar; OpenAI holds off on promise to creators, fails to protect intellectual property

"OpenAI may yet again be in hot water as it seems that the tech giant may be reneging on its earlier assurances. Reportedly, in May, OpenAI said it was developing a tool to let creators specify how they want their works to be included in—or excluded from—its AI training data. But seven months later, this feature has yet to see the light of day.

Called Media Manager, the tool would “identify copyrighted text, images, audio, and video,” OpenAI said at the time, to reflect creators’ preferences “across multiple sources.” It was intended to stave off some of the company’s fiercest critics, and potentially shield OpenAI from IP-related legal challenges...

OpenAI has faced various legal challenges related to its AI technologies and operations. One major issue involves the privacy and data usage of its language models, which are trained on large datasets that may include publicly available or copyrighted material. This raises concerns over privacy violations and intellectual property rights, especially regarding whether the data used for training was obtained with proper consent.

Additionally, there are questions about the ownership of content generated by OpenAI’s models. If an AI produces a work based on copyrighted data, it is tricky to determine who owns the rights—whether it’s OpenAI, the user who prompted the AI, or the creators of the original data.

Another concern is the liability for harmful content produced by AI. If an AI generates misleading or defamatory information, legal responsibility could fall on OpenAI."

Saturday, November 30, 2024

Why I regret using 23andMe: I gave up my DNA just to find out I’m British; The Guardian, November 30, 2024

The Guardian; Why I regret using 23andMe: I gave up my DNA just to find out I’m British

"With the future of 23andMe in peril, the overarching question among previous customers now is what will happen to the data that has already been collected. Leuenberger noted that by entering DNA into a database, users sacrifice not only their own privacy but that of blood relatives. Because an individual’s DNA is similar in structure to that of their relatives, information about others can be gleaned from one person’s sample. This is especially pronounced with the rise of open-access DNA sites like GEDMatch, on which users can upload genetic data that can be compared to other samples. A consumer genealogy test contributed to the identification of serial killer Joseph James DeAngelo.

“What is ethically tricky with genetic data is that it’s not just about self-knowledge – it’s also knowledge about all of your relatives,” Leuenberger said. “Morally speaking, it is not necessarily information that is yours to give – and this risk is exacerbated if this company goes down and the fate of the data becomes more perilous.”"

Friday, July 12, 2024

AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections; Digiday, July 12, 2024

Marty Swant, Digiday; AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections

"The U.S. Senate Commerce Committee on Thursday held a hearing to address a range of concerns about the intersection of AI and privacy. While some lawmakers expressed concern about AI accelerating risks – such as online surveillance, scams, hyper-targeting ads and discriminatory business practices — others cautioned regulations might further protect tech giants and burden smaller businesses."

Saturday, June 29, 2024

2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work; Thomson Reuters Institute, 2024

Thomson Reuters Institute; 2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work

"Inaccuracy, privacy worries persist -- More than half of respondents identified such worries as inaccurate responses (70%); data security (68%); privacy and confidentiality of data (62%); complying with laws and regulations (60%); and ethical and responsible usage (57%), as primary concerns for GenAI."

Thursday, May 23, 2024

US intelligence agencies’ embrace of generative AI is at once wary and urgent; Associated Press, May 23, 2024

Frank Bajak, Associated Press; US intelligence agencies’ embrace of generative AI is at once wary and urgent

"The CIA’s inaugural chief technology officer, Nand Mulchandani, thinks that because gen AI models “hallucinate” they are best treated as a “crazy, drunk friend” — capable of great insight and creativity but also bias-prone fibbers. There are also security and privacy issues: adversaries could steal and poison them, and they may contain sensitive personal data that officers aren’t authorized to see.

That’s not stopping the experimentation, though, which is mostly happening in secret. 

An exception: Thousands of analysts across the 18 U.S. intelligence agencies now use a CIA-developed gen AI called Osiris. It runs on unclassified and publicly or commercially available data — what’s known as open-source. It writes annotated summaries and its chatbot function lets analysts go deeper with queries...

Another worry: Ensuring the privacy of “U.S. persons” whose data may be embedded in a large-language model.

“If you speak to any researcher or developer that is training a large-language model, and ask them if it is possible to basically kind of delete one individual piece of information from an LLM and make it forget that -- and have a robust empirical guarantee of that forgetting -- that is not a thing that is possible,” John Beieler, AI lead at the Office of the Director of National Intelligence, said in an interview.

It’s one reason the intelligence community is not in “move-fast-and-break-things” mode on gen AI adoption."