Showing posts with label Big Tech. Show all posts

Sunday, March 29, 2026

Meta’s court losses spell potential trouble for AI research, consumer safety; CNBC, March 29, 2026

Jonathan Vanian, CNBC; Meta’s court losses spell potential trouble for AI research, consumer safety

"Over a decade ago, Meta – then known as Facebook – hired social science researchers to analyze how the social network’s services were affecting users. It was a way for the company and its peers to show they were serious about understanding the benefits and potential risks of their innovations.

But as Meta’s court losses this week illustrate, the researchers’ work can become a liability. Brian Boland, a former Facebook executive who testified in both trials — one in New Mexico and the other in Los Angeles — says the damning findings from Meta’s internal research and documents seemed to contradict the way the company portrayed itself publicly. Juries in the two trials determined that Meta inadequately policed its site, putting kids in harm’s way. 

Mark Zuckerberg’s company began clamping down on its research teams a few years ago after a Facebook researcher, Frances Haugen, became a prominent whistleblower. The newer crop of tech companies, like OpenAI and Anthropic, subsequently invested heavily in researchers and charged them with studying the impact of modern AI on users and publishing their findings. 

With AI now getting outsized attention for the harmful effects it’s having on some users, those companies must ask if it’s in their best interest to continue funding research or to suppress it."

Thursday, March 26, 2026

Is Big Tech Facing a Big Tobacco Moment?; The New York Times, March 26, 2026

Andrew Ross Sorkin, Bernhard Warner, Sarah Kessler, Michael J. de la Merced, Niko Gallogly and Brian O’Keefe, The New York Times; Is Big Tech Facing a Big Tobacco Moment?

Back-to-back courtroom losses have put technology giants, including Meta and Google, in uncertain territory as they face lawsuits and bans on teen users.

"Andrew here. Back in 2018, I moderated a panel at the World Economic Forum that included Marc Benioff of Salesforce. It was then that he essentially declared that Facebook was the modern-day equivalent of cigarettes, and that it and other social media companies should be regulated as such.

Well, Meta’s loss in court on Wednesday, in a case about whether its platforms were designed to be addictive to adolescents, may be a watershed. Investors don’t seem to be fazed — the company’s shares hardly moved after the verdict came out — but the decision could change the conversation around the company yet again. More below...

Some legal experts wonder if Big Tech is staring at a Big Tobacco moment, a reference to how cigarette makers had to overhaul their businesses — at a huge expense — after courts ruled that some of their products were addictive and harmful.

“We’re in a new era, a digital era, where we have to rethink definitions for products based on which entities might have superior information to prevent these injuries and accidents,” Catherine Sharkey, a professor of law at N.Y.U., told The Times. She added that the “implications” of those verdicts were “very, very big.”

“This has potentially large impacts on other areas in tech, A.I. and beyond that,” Jessica Nall, a San Francisco lawyer who represents tech companies and executives, told The Wall Street Journal. “The floodgates are already open.”

Meta and Google plan to appeal. The companies have signaled that they will fight efforts to make them drastically redesign their products and algorithms."

Saturday, March 14, 2026

The Guardian view on changes to copyright laws: authors should be protected over big tech; The Guardian, March 13, 2026

The Guardian; The Guardian view on changes to copyright laws: authors should be protected over big tech

"In a scene that might have come from a dystopian novel, books were being stamped with “Human Authored” logos at this week’s London Book Fair. The Society of Authors described its labelling scheme as “an important sticking plaster to protect and promote human creativity in lieu of AI labelled content in the marketplace”.

Visitors to the fair were also being given copies of Don’t Steal This Book, an anthology of about 10,000 writers including Nobel laureate Kazuo Ishiguro, Malorie Blackman, Jeanette Winterson and Richard Osman, in which the pages are completely blank. The back cover states: “The UK government must not legalise book theft to benefit AI companies.” The message is clear: writers have had enough.

The fair comes the week before the government is due to deliver its progress report on AI and copyright, after proposals for a relaxation of existing laws caused outrage last year. Philippa Gregory, the novelist, described the plans for an “opt-out” policy, which puts the onus on writers to refuse permission for their work to be trawled, as akin to putting a sign on your front door asking burglars to pass by...

A House of Lords report published last week lays out two possible futures: one in which the UK “becomes a world-leading home for responsible, legalised artificial intelligence (AI) development” and another in which it continues “to drift towards tacit acceptance of large-scale, unlicensed use of creative content”. One scenario protects UK artists, the other benefits global tech companies. To avoid a world of empty content, the choice is clear."

Wednesday, March 11, 2026

Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism; The Guardian, March 4, 2026

The Guardian; Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism

"OpenAI, the company behind ChatGPT, is on track to lose $14bn this year. Its market share is collapsing, and its own CEO, Sam Altman, has admitted it “screwed up” an element of the product. All it takes to accelerate that decline is 10 seconds of your time.

A grassroots boycott called QuitGPT has been spreading across the US and beyond, asking people to cancel their ChatGPT subscriptions. More than a million people have answered the call. Mark Ruffalo and Katy Perry have thrown their weight behind it. It is one of the most significant consumer boycotts in recent memory, and I believe it’s time for Europeans to join...

In contrast, cancelling ChatGPT is a piece of cake. You can do it in 10 seconds, and the alternatives are just as good or even better. History shows why #QuitGPT has so much potential: effective campaigns such as the 1977 Nestlé boycott and the 2023 Bud Light boycott were successful because they were narrow and easy. They had a clear target and people had lots of good alternatives.

The great boycotts of history did not succeed because millions of people suddenly became heroic activists. They succeeded because buying a different brand of coffee, or choosing a different beer, was something anyone could do on a Tuesday afternoon. The small act, repeated at scale, becomes a political earthquake.

Go to quitgpt.org. Cancel your subscription. Using the free version? Delete the app, because your conversations still feed the machine. Then try an alternative, and tell at least one person why.

OpenAI’s president bet $25m that you would not notice where your money was going, and that, even if you did, you would not care enough to spend 10 seconds switching to something else. Time to prove him wrong."

Thursday, February 5, 2026

‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI; The Guardian, February 5, 2026

 Anuj Behal, The Guardian; ‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI


[Kip Currier: The largely unaddressed plight of content moderators became more real for me after reading this haunting 9/9/24 piece in the Washington Post, "I quit my job as a content moderator. I can never go back to who I was before."

As mentioned in the graphic article's byline, content moderator Alberto Cuadra spoke with journalist Beatrix Lockwood. Maya Scarpa's illustrations poignantly give life to Alberto Cuadra's first-hand experiences and ongoing impacts from the content moderation he performed for an unnamed tech company. I talk about Cuadra's experiences and the ethical issues of content moderation, social media, and AI in my Ethics, Information, and Technology book.]


[Excerpt]

"Murmu, 26, is a content moderator for a global technology company, logging on from her village in India’s Jharkhand state. Her job is to classify images, videos and text that have been flagged by automated systems as possible violations of the platform’s rules.

On an average day, she views up to 800 videos and images, making judgments that train algorithms to recognise violence, abuse and harm.

This work sits at the core of machine learning’s recent breakthroughs, which rest on the fact that AI is only as good as the data it is trained on. In India, this labour is increasingly performed by women, who are part of a workforce often described as “ghost workers”.

“The first few months, I couldn’t sleep,” she says. “I would close my eyes and still see the screen loading.” Images followed her into her dreams: of fatal accidents, of losing family members, of sexual violence she could not stop or escape. On those nights, she says, her mother would wake and sit with her...

“In terms of risk,” she says, “content moderation belongs in the category of dangerous work, comparable to any lethal industry.”

Studies indicate content moderation triggers lasting cognitive and emotional strain, often resulting in behavioural changes such as heightened vigilance. Workers report intrusive thoughts, anxiety and sleep disturbances.

A study of content moderators published last December, which included workers in India, identified traumatic stress as the most pronounced psychological risk. The study found that even where workplace interventions and support mechanisms existed, significant levels of secondary trauma persisted."

Saturday, January 17, 2026

Public Shame Is the Most Effective Tool for Battling Big Tech; The New York Times, January 14, 2026

The New York Times; Public Shame Is the Most Effective Tool for Battling Big Tech

"It might be harder to shame the tech companies themselves into making their products safer, but we can shame third-party companies like toymakers, app stores and advertisers into ending partnerships. And with enough public disapproval, legislators might be inspired to act.

In some of the very worst corners of the internet might lie some hope...

Without more public shaming, what seems to be the implacable forward march of A.I. is unstoppable...

As Jay Caspian Kang noted in The New Yorker recently, changing social norms around kids and tech use can be powerful, and reforms like smartphone bans in schools have happened fairly quickly, and mostly on the state and local level."


Wednesday, October 29, 2025

Big Tech Makes Cal State Its A.I. Training Ground; The New York Times, October 26, 2025

 

The New York Times; Big Tech Makes Cal State Its A.I. Training Ground

"Cal State, the largest U.S. university system with 460,000 students, recently embarked on a public-private campaign — with corporate titans including Amazon, OpenAI and Nvidia — to position the school as the nation’s “first and largest A.I.-empowered” university. One central goal is to make generative A.I. tools, which can produce humanlike texts and images, available across the school’s 22 campuses. Cal State also wants to embed chatbots in teaching and learning, and prepare students for “increasingly A.I.-driven” careers.

As part of the effort, the university is paying OpenAI $16.9 million to provide ChatGPT Edu, the company’s tool for schools, to more than half a million students and staff — which OpenAI heralded as the world’s largest rollout of ChatGPT to date. Cal State also set up an A.I. committee, whose members include representatives from a dozen large tech companies, to help identify the skills California employers need and improve students’ career opportunities."

Monday, September 29, 2025

I Sued Anthropic, and the Unthinkable Happened; The New York Times, September 29, 2025

The New York Times; I Sued Anthropic, and the Unthinkable Happened

"In August 2024, I became one of three named plaintiffs leading a class-action lawsuit against the A.I. company Anthropic for pirating my books and hundreds of thousands of other books to train its A.I. The fight felt daunting, almost preposterous: me — a queer, female thriller writer — versus a company now worth $183 billion?

Thanks to the relentless work of everyone on my legal team, the unthinkable happened: Anthropic agreed to pay authors and publishers $1.5 billion in the largest copyright settlement in history. A federal judge preliminarily approved the agreement last week.

This settlement sends a clear message to the Big Tech companies splashing generative A.I. over every app and page and program: You are not above the law. And it should signal to consumers everywhere that A.I. isn’t an unstoppable tsunami about to overwhelm us. Now is the time for ordinary Americans to recognize our agency and act to put in place the guardrails we want.

The settlement isn’t perfect. It’s absurd that it took an army of lawyers to demonstrate what any 10-year-old knows is true: Thou shalt not steal. At around $3,000 per work, shared by the author and publisher, the damages are far from life-changing (and, some argue, a slap on the wrist for a company flush with cash). I also disagree with the judge’s ruling that, had Anthropic acquired the books legally, training its chatbot on them would have been “fair use.” I write my novels to engage human minds — not to empower an algorithm to mimic my voice and spit out commodity knockoffs to compete directly against my originals in the marketplace, nor to make that algorithm’s creators unfathomably wealthy and powerful.

But as my fellow plaintiff Kirk Wallace Johnson put it, this is “the beginning of a fight on behalf of humans that don’t believe we have to sacrifice everything on the altar of A.I.” Anthropic will destroy its trove of illegally downloaded books; its competitors should take heed to get out of the business of piracy as well. Dozens of A.I. copyright lawsuits have been filed against OpenAI, Microsoft and other companies, led in part by Sylvia Day, Jonathan Franzen, David Baldacci, John Grisham, Stacy Schiff and George R. R. Martin. (The New York Times has also brought a suit against OpenAI and Microsoft.)

Though a settlement isn’t legal precedent, Bartz v. Anthropic may serve as a test case for other A.I. lawsuits, the first domino to fall in an industry whose “move fast, break things” modus operandi led to large-scale theft. Among the plaintiffs of other cases are voice actors, visual artists, record labels, YouTubers, media companies and stock-photo libraries, diverse stakeholders who’ve watched Big Tech encroach on their territory with little regard for copyright law...

Now the book publishing industry has sent a message to all A.I. companies: Our intellectual property isn’t yours for the taking, and you cannot act with impunity. This settlement is an opening gambit in a critical battle that will be waged for years to come."

Saturday, June 21, 2025

Conservative groups demand Congress protect intellectual property from patent abuse; Washington Examiner, June 18, 2025

"A collection of 28 conservative groups is urging Republican Congress members to pass the PERA, PREVAIL, and RESTORE acts — all aimed at patent protection — as Chinese influence permeates U.S. intellectual property...

The PREVAIL Act, or Promoting and Respecting Economically Vital American Innovation Leadership, was introduced by Sen. Chris Coons (D-DE) in the last Congress and aims to “invest in inventors in the United States, maintain the United States as the leading innovation economy in the world, and protect the property rights of the inventors that grow the economy of the United States.”

The PERA Act, or the Patent Eligibility Restoration Act, was introduced by Sen. Thom Tillis (R-NC) also in the last Congress and aims to restore patent eligibility to several fields. Lastly, the RESTORE Act, or Realizing Engineering, Science, and Technology Opportunities by Restoring Exclusive Patent Rights, works to give patent owners the right to a “rebuttable presumption that the court should grant a permanent injunction with respect to that infringing conduct” if a court finds that there was an infringement of a right secured by patent.

All three acts could work as pro-patent freedom legislation, possibly helping U.S. intellectual property owners fight back against Big Tech and China."


Tuesday, May 20, 2025

The AI and Copyright Issues Dividing Trump’s Court; Jacobin, May 19, 2025

David Moscrop, Jacobin; The AI and Copyright Issues Dividing Trump’s Court

"As many have pointed out, the copyright-AI battle is not only a central struggle within the Trump administration; it is also a broader conflict over who controls intellectual property and to what end. For decades, corporations have abused copyright to unreasonably extend coverage periods and impoverish the public domain. Their goal: maximizing both control over IP and profits. But AI firms aren’t interested in reforming that system. They’re not looking to open access or enrich the commons — they just want training data. And in fighting for it, they may end up reshaping copyright law in ways that outlast this administration.

As Nguyen notes, after the Register of Copyrights, Shira Perlmutter, was turfed by DOGE-aligned officials, Trump antitrust adviser Mike Davis posted to Truth Social: “Now tech bros are going to steal creators’ copyrights for AI profits. . . . This is 100 percent unacceptable.” Trump reposted it. That’s the shape of the struggle: MAGA populists, who see their own content as sacred property, are up against a tech elite that views all content as extractable fuel."

Tuesday, February 25, 2025

Musicians release silent album to protest UK's AI copyright changes; Reuters, February 25, 2025

Reuters; Musicians release silent album to protest UK's AI copyright changes

"More than 1,000 musicians, including Kate Bush and Cat Stevens, on Tuesday released a silent album to protest proposed changes to Britain's copyright laws, which could allow tech firms to train artificial intelligence models using their work."

Monday, February 24, 2025

Copyright 'sell-out' will silence British musicians, says BRIAN MAY; Daily Mail, February 23, 2025

Andy Behring, Daily Mail; Copyright 'sell-out' will silence British musicians, says BRIAN MAY

"No one will make music in Britain any more if Labour's AI copyright proposal succeeds, Sir Brian May warned last night as he backed the Daily Mail's campaign against it.

The Queen guitarist said he feared it may already be 'too late' because 'monstrously arrogant' Big Tech barons have already carried out an industrial-scale 'theft' of Britain's cultural genius.

He called on the Government to apply the brakes before the next chapter of Britain's rich cultural heritage – which includes Shakespeare, Chaucer, James Bond, The Beatles and Britpop – is nipped in the bud thanks to Sir Keir Starmer's copyright 'sell-out'...

Sir Brian said: 'My fear is that it's already too late – this theft has already been performed and is unstoppable, like so many incursions that the monstrously arrogant billionaire owners of AI and social media are making into our lives. The future is already forever changed.'"

Monday, June 24, 2024

How to Fix “AI’s Original Sin”; O'Reilly, June 18, 2024

 Tim O’Reilly, O'Reilly; How to Fix “AI’s Original Sin”

"In conversation with reporter Cade Metz, who broke the story, on the New York Times podcast The Daily, host Michael Barbaro called copyright violation “AI’s Original Sin.”

At the very least, copyright appears to be one of the major fronts so far in the war over who gets to profit from generative AI. It’s not at all clear yet who is on the right side of the law. In the remarkable essay “Talkin’ Bout AI Generation: Copyright and the Generative-AI Supply Chain,” Cornell’s Katherine Lee and A. Feder Cooper and James Grimmelmann of Microsoft Research and Yale note:

Copyright law is notoriously complicated, and generative-AI systems manage to touch on a great many corners of it. They raise issues of authorship, similarity, direct and indirect liability, fair use, and licensing, among much else. These issues cannot be analyzed in isolation, because there are connections everywhere. Whether the output of a generative AI system is fair use can depend on how its training datasets were assembled. Whether the creator of a generative-AI system is secondarily liable can depend on the prompts that its users supply.

But it seems less important to get into the fine points of copyright law and arguments over liability for infringement, and instead to explore the political economy of copyrighted content in the emerging world of AI services: Who will get what, and why?"

Wednesday, June 12, 2024

Big Tech Launches Campaign to Defend AI Use; The Hollywood Reporter, June 6, 2024

Winston Cho , The Hollywood Reporter; Big Tech Launches Campaign to Defend AI Use

"Chamber of Progress, a tech industry coalition whose members include Amazon, Apple and Meta, is launching a campaign to defend the legality of using copyrighted works to train artificial intelligence systems.

The group says the campaign, called “Generate and Create” and unveiled on Thursday, will aim to highlight “how artists use generative AI to enhance their creative output” and “showcase how AI lowers barriers for producing art” as part of an initiative to “defend the longstanding legal principle of fair use under copyright law.”"

Monday, May 27, 2024

‘That’ll cost you, ChatGPT’ — copyright needs an update for the age of AI; The Hill, May 23, 2024

Christopher Kenneally, The Hill; ‘That’ll cost you, ChatGPT’ — copyright needs an update for the age of AI

"Beyond commercially published books, journals, and newspapers, AI databases derive from a vast online trove of publicly available social media and Wikipedia entries, as well as digitized library and museum collections, court proceedings, and government legislation and regulation.

Consumption of public and private individual data on the “open” web marks an important shift in digital evolution. No one is left out. Consequently, we have all become stakeholders.

AI is now forcing us to consider viewing copyright as a public good...

Statutory licensing schemes for copyright-protected works are already applied to cable television systems and music recordings with great success. Fees collected for AI rights-licensing of publicly available works need not be burdensome. The funds can help to underwrite essential public education in digital literacy and civil discourse online.

OpenAI, along with Meta, Apple, Google, Amazon, and others who stand to benefit, must recognize the debt owed to the American people for the data that fuels their AI solutions."

Wednesday, December 27, 2023

Classical Musicians Victimized by Erroneous Copyright Claims; Violinist.com, December 19, 2023

Laurie Niles, Violinist.com; Classical Musicians Victimized by Erroneous Copyright Claims

""One or more actions were applied to your video because of a copyright match."

This was just one of two copyright claims that Amy Beth Horman received from Facebook Thursday, disputing ownership of videos of her daughter's violin performances. First, she received a copyright claim for a video of Ava's live performance of the Mendelssohn Violin Concerto this week. Then, she got another for video she had posted in 2020 of then-10-year-old Ava performing "Meditation from Thais." These are both classical works that are in the public domain - not subject to copyright.

Nonetheless, classical musicians receive these kinds of dreaded messages on a regular basis if they post videos of their performances on social media outlets such as Facebook, Instagram or YouTube.

Has the musician violated anyone's copyright? Almost never. These are automated copyright claims created by bots on behalf of big companies like Sony Music Entertainment, Warner Music Group or Universal Music. If the bot finds that your performance has approximately the same notes and timing as one in their catalogue, they then claim that they own rights to your recording. But musicians have every right to perform and post a public domain work. Even so, musicians often find their recordings muted, earnings from ads on their performances given instead to the company filing the erroneous claim, and threats of having their accounts suspended or banned."

Wednesday, December 20, 2023

AI’s Billion-Dollar Copyright Battle Starts With a Font Designer; Bloomberg Law, December 18, 2023

Isaiah Poritz, Bloomberg Law; AI’s Billion-Dollar Copyright Battle Starts With a Font Designer

"The makers of Copilot, which include OpenAI Inc., Microsoft Corp., GitHub Inc., and other top AI companies, are now facing nearly a dozen lawsuits from authors, artists, and programmers. They claim the industry has vacuumed up their creative work—without consent or compensation—to train AI chatbots and image generators that are already beginning to replace them.

At the core of these novel cases sits Butterick, a typographer and lawyer hailed by some for leading the fight to hold AI accountable, and slammed by others as a Luddite and an obstacle to transformative technological advances."