Showing posts with label AI. Show all posts

Sunday, March 8, 2026

Anthropic’s Ethical Stand Could Be Paying Off; The Atlantic, March 7, 2026

Ken Harbaugh, The Atlantic; Anthropic’s Ethical Stand Could Be Paying Off

"The events of the past week reminded me of my early days as a Navy pilot nearly three decades ago. One of my first tasks was to sign a document pledging never to surveil American citizens. By the time of the 9/11 attacks, I was an aircraft commander, leading combat-reconnaissance aircrews that gathered large-scale intelligence and informed battlefield targeting decisions. I took for granted that somewhere along those decision chains, a human being was in the loop.

I could not have defined artificial intelligence then, but I understood instinctively that a person, not a machine, would bear the weight of life-and-death choices. This was not a bureaucratic consideration. It was a hard line that those of us in uniform were expected to hold.

In the standoff between Anthropic and the Pentagon, a private company was forced to hold the line against its own government. In doing so, Anthropic may have earned something more valuable than the contract it lost. In an industry where trust is the scarcest resource, Anthropic just banked a substantial deposit."

Thursday, March 5, 2026

A Long-Running AI Copyright Question Gets an Answer as Supreme Court Stays Mum; CNET, March 4, 2026

Omar Gallaga, CNET; A Long-Running AI Copyright Question Gets an Answer as Supreme Court Stays Mum

The man behind the AI-generated image in question reflects on what he calls a "philosophical milestone."

"A legal battle over AI copyright that has gone on for more than a decade may have reached its end, with the US Supreme Court declining to hear a case involving AI-generated visual art...

In an email to CNET, Thaler said that although the court declined to hear his appeal, "I see this moment as a philosophical milestone rather than a defeat."

While he's unsure if legal action will continue, Thaler says he's still certain that the law on copyright, as written, is intended to exclude nonhuman inventors.

"By bringing DABUS into the legal system, I confronted a question long confined to theory: whether invention and creativity must remain tied to humans or whether autonomous computational processes could genuinely originate ideas," Thaler said."

Tuesday, March 3, 2026

The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’; The Conversation, March 1, 2026

Lecturer, International Relations, Deakin University, The Conversation; The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’

"In the leadup to the weekend’s US and Israeli attacks on Iran, the US Department of Defense was locked in tense negotiations with artificial intelligence (AI) company Anthropic over exactly how the Pentagon could use the firm’s technology.

Anthropic wanted guarantees its Claude systems would not be used for purposes such as domestic surveillance in the US and operating autonomous weapons without human control. 

In response, US president Donald Trump on Friday directed all US federal agencies to cease using Anthropic’s technology, saying he would “never allow a radical left, woke company to dictate how our great military fights and wins wars!”

Hours later, rival AI lab OpenAI (maker of ChatGPT) announced it had struck its own deal with the Department of Defense. The key difference appears to be that OpenAI permits “all lawful uses” of its tools, without specifying ethical lines OpenAI won’t cross.

What does this mean for military AI? Is it the end for the idea of “ethical AI” in warfare?"

US Supreme Court declines to hear dispute over copyrights for AI-generated material; Reuters, March 2, 2026

Reuters; US Supreme Court declines to hear dispute over copyrights for AI-generated material

"The U.S. Supreme Court declined on Monday to take up the issue of whether art generated by artificial intelligence can be copyrighted under U.S. law, turning away a case involving a computer scientist from Missouri who was denied a copyright for a piece of visual art made by his AI system.

Plaintiff Stephen Thaler had appealed to the justices after lower courts upheld a U.S. Copyright Office decision that the AI-crafted visual art at issue in the case was ineligible for copyright protection because it did not have a human creator."

Monday, March 2, 2026

Everybody’s Talking About AI: Takeaways from the February 20, 2026 Fordham Law Symposium; Lexology, February 26, 2026

Seyfarth Shaw LLP - Owen Wolfe, Lexology; Everybody’s Talking About AI: Takeaways from the February 20, 2026 Fordham Law Symposium

"On February 20, 2026, Gadgets, Gigabytes and Goodwill Blog co-editor Owen Wolfe spoke at the Fordham School of Law as part of the Fordham Intellectual Property, Media & Entertainment Law Journal Symposium, The Meaning of Ownership: Rethinking Intellectual Property, Creativity, and Control in the Age of Innovation. Owen discussed how courts have so far applied the “fair use” doctrine to cases involving generative AI, distinguishing between use of copyrighted materials in gen AI training and gen AI outputs that are alleged to be substantially similar to the original works. He noted that the decisions to date have been mixed, with some courts finding that certain uses of copyrighted works for AI training are fair use, and other courts expressing skepticism about whether that is the correct result. Owen also surveyed arguments both for and against a finding of fair use, giving the audience food for thought about what courts might decide in the future and whether we might see an amendment to the Copyright Act down the road.

Owen’s talk followed one by Dr. Douglas Lind, a professor at Virginia Tech, who surveyed the history of copyright law in the United States. He focused on the law’s treatment of phonograph records and sound recordings when those new technologies first emerged. Dr. Lind noted that copyright law evolved, and the Copyright Act was eventually amended, to address those new technologies. Dr. Lind raised the question of whether the Copyright Act should be amended again to address gen AI."

'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military; TechRadar, March 1, 2026

TechRadar; 'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military

"After Claude developer Anthropic walked away from a deal with the US Department of War over safety and security concerns, OpenAI has decided to sign an agreement with the military – and ChatGPT users are far from happy about it.

As reported by Windows Central, a growing number of people are canceling their ChatGPT subscriptions and switching to other AI chatbots instead, including Claude. A quick browse of social media or Reddit is enough to see that there's a growing backlash to the move.

Some Redditors are posting guides to extracting yourself and your data from ChatGPT, while others are accusing OpenAI of having "no ethics at all" and "selling their soul" by agreeing to allow their AI models to be used by the US military complex."

Sunday, March 1, 2026

An Ohio newspaper has a new star writer. It isn’t human.; The Washington Post, March 1, 2026

The Washington Post; An Ohio newspaper has a new star writer. It isn’t human.

At the 184-year-old Cleveland Plain Dealer, a top editor’s push to let AI draft news articles is boosting traffic — and spooking staffers.


"The Plain Dealer, Cleveland’s largest newspaper, has begun to feature a new byline. On recent articles about an ice carving festival, a medical research discovery and a roaming pack of chicken-slaying dogs, a reporter’s name is paired with the words “Advance Local Express Desk.” It means: This article was drafted by artificial intelligence."

Saturday, February 28, 2026

If A.I. Is a Weapon, Who Should Control It?; The New York Times, February 28, 2026

The New York Times; If A.I. Is a Weapon, Who Should Control It?

"We spent the Cold War worrying mostly about military folly, and A.I. entered into our anxieties even then: the Soviet Doomsday Machine in “Dr. Strangelove,” the game-playing computer in “WarGames” and of course the fateful “Terminator” decision to make Skynet operational.

But for the last few years, as A.I. advances have concentrated potentially extraordinary power in the hands of a few companies and C.E.O.s — themselves embedded in a Bay Area culture of science-fiction dreams and apocalyptic fears — it’s become more natural to worry more about private power and ambition, about would-be A.I. god-kings rather than presidents and generals.

Until, that is, the current collision between the Department of Defense and Anthropic, the artificial intelligence pioneer, over whether Anthropic’s A.I. models should be bound by the company’s ethical constraints or made available for all uses the Pentagon might have in mind."

Friday, February 20, 2026

The battle over Scott Adams' AI afterlife; Business Insider, February 20, 2026

 Katherine Tangalakis-Lippert, Business Insider; The battle over Scott Adams' AI afterlife

 "In a 2021 podcast clip, the cartoonist said he granted "explicit permission" for anyone to make a posthumous AI based on him, arguing that his public thoughts and words are "so pervasive on the internet" that he'd be "a good candidate to turn into AI." He added that he was OK with an AI version of him saying new things after he died, as long as they seemed compatible with what he might say while alive.

Shortly after the 68-year-old's January death from complications of metastatic prostate cancer, an AI-generated "Scott Adams" account began posting videos of a digital version of the cartoonist speaking directly to viewers about current events and philosophy, mirroring the cadence and topics the actual human Adams discussed for years.

His family says it's a violation, not a tribute."

Thursday, February 19, 2026

Anthropic is clashing with the Pentagon over AI use. Here’s what each side wants; CNBC, February 18, 2026

 Ashley Capoot, CNBC; Anthropic is clashing with the Pentagon over AI use. Here’s what each side wants

"Anthropic wants assurance that its models will not be used for autonomous weapons or to “spy on Americans en masse,” according to a report from Axios. 

The DOD, by contrast, wants to use Anthropic’s models “for all lawful use cases” without limitation."

Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon; Fast Company, February 17, 2026

 REBECCA HEILWEIL, Fast Company; Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon

"A dispute between AI company Anthropic and the Pentagon over how the military can use the company’s technology has now gone public. Amid tense negotiations, Anthropic has reportedly called for limits on two key applications: mass surveillance and autonomous weapons. The Department of Defense, which Trump renamed the Department of War last year, wants the freedom to use the technology without those restrictions.

Caught in the middle is Palantir. The defense contractor provides the secure cloud infrastructure that allows the military to use Anthropic’s Claude model, but it has stayed quiet as tensions escalate. That’s even as the Pentagon, per Axios, threatens to designate Anthropic a “supply chain risk,” a move that could force Palantir to cut ties with one of its most important AI partners."

Pentagon threatens Anthropic punishment; Axios, February 16, 2026

 Dave Lawler, Maria Curi, Mike Allen, Axios; Pentagon threatens Anthropic punishment

"Defense Secretary Pete Hegseth is "close" to cutting business ties with Anthropic and designating the AI company a "supply chain risk" — meaning anyone who wants to do business with the U.S. military has to cut ties with the company, a senior Pentagon official told Axios.

The senior official said: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."

Why it matters: That kind of penalty is usually reserved for foreign adversaries. 

Chief Pentagon spokesman Sean Parnell told Axios: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people."

The big picture: Anthropic's Claude is the only AI model currently available in the military's classified systems, and is the world leader for many business applications. Pentagon officials heartily praise Claude's capabilities."

Tuesday, February 17, 2026

New research from Notre Dame theologian and Vatican working group explores how to ‘reclaim human agency’ in age of AI; Notre Dame News, February 17, 2026

 Carrie Gates, Notre Dame News; New research from Notre Dame theologian and Vatican working group explores how to ‘reclaim human agency’ in age of AI

"One of the fundamental promises of artificial intelligence is that it will strengthen human agency by freeing us from mundane, repetitive tasks.

However, a new publication, co-edited by University of Notre Dame theologian Paul Scherz, argues that promise “rings hollow” in the face of efforts by technology companies to manipulate consumers — and ultimately deprive them of agency.

The book, “Reclaiming Human Agency in the Age of Artificial Intelligence,” is the second in a series created by the Vatican’s AI Research Group for the Centre for Digital Culture. Part of the Holy See’s Dicastery for Culture and Education, the group is composed of scholars from across North America who represent a range of disciplines from theology and philosophy to computer science and business.

“We wanted to examine the idea of how AI affects human actions, human freedom and the ability of people to develop virtues — which we classified under the heading of human agency,” said Scherz, the Our Lady of Guadalupe College Professor of Theology and the ND–IBM Tech Ethics Lab Program Chair. “This is such an important topic right now because one of the most hyped developments that we’re hearing about right now is ‘agentic’ AI — or AI that will take action for people.

“We think it’s important to distinguish what the differences are between these AI agents and true human agents — and how the AI we have now is affecting our actions.”

In “Reclaiming Human Agency,” Scherz, co-editor Brian Patrick Green of Santa Clara University and their fellow research group members cite potentially problematic issues with the technology, including addictive applications, “surveillance capitalism” that exploits users’ personal data for profit, widespread de-skilling in the workplace as complex tasks are handed over to AI and the growth of algorithmic governance — where social media algorithms influence what people buy, how they perceive events and even how they vote.

They also assert that human agency should not be seen in terms of “freedom from” tasks, but in “freedom for” pursuing the good, seeking excellence and purpose by building flourishing relationships with others and with God."

Saturday, February 14, 2026

How Fast Can A.I. Change the Workplace?; The New York Times, February 14, 2026

 ROSS DOUTHAT, The New York Times; How Fast Can A.I. Change the Workplace?

"People need to understand the part of this argument that’s absolutely correct: It is impossible to look at the A.I. models we have now, to say nothing of what we might get in six months or a year, and say that these technological tools can’t eventually replace a lot of human jobs. The question is whether people inside the A.I. hype loop are right about how fast it could happen, and then whether it will create a fundamental change in human employment rather than just a structural reshuffle.

One obstacle to radical speed is that human society is a complex bottleneck through which even the most efficiency-maxing innovations have to pass. As long as the efficiencies offered by A.I. are mediated by human workers, there will be false starts and misadaptations and blind alleys that make pre-emptive layoffs reckless or unwise.

Even if firings make sense as a pure value proposition, employment in an advanced economy reflects a complex set of contractual, social, legal and bureaucratic relationships, not just a simple productivity-maximizing equation. So many companies might delay any mass replacement for reasons of internal morale or external politics or union rules, and adapt to A.I.’s new capacities through reduced hiring and slow attrition instead.

I suspect the A.I. insiders underestimate the power of these frictions, as they may underestimate how structural hurdles could slow the adoption of any cure or tech that their models might discover. Which would imply a longer adaptation period for companies, polities and humans.

Then, after this adaptation happens, and A.I. agents are deeply integrated into the work force, there are two good reasons to think that most people will still be doing gainful work. The first is the entire history of technological change: Every great innovation has yielded fears of mass unemployment and, every time we’ve found our way to new professions, new demands for human labor that weren’t imaginable before.

The second is the reality that people clearly like a human touch, even in situations where we can already automate it away. The economist Adam Ozimek has a good rundown of examples: Player pianos have not done away with piano players, self-checkout has not eliminated the profession of cashier and millions of waiters remain in service in the United States because an automated restaurant experience seems inhuman."

Microsoft AI CEO predicts 'most, if not all' white-collar tasks will be automated by AI within 18 months; Business Insider, February 12, 2026

Business Insider; Microsoft AI CEO predicts 'most, if not all' white-collar tasks will be automated by AI within 18 months


[Kip Currier: Microsoft AI Chief Mustafa Suleyman's assertion that AI will be performing "most, if not all" white-collar tasks within 12 to 18 months raises lots of questions, like:

  • Is this forecast accurate or AI hype?
  • As individuals and societies, do we want AI to displace human workers? Who has decided that this is "a good thing"?
  • What are the spiritual implications of this revolutionary transformation of our world?
  • What are the implications of such changes for the physical and mental well-being of children, young people, and adults?
  • What are the short-term and long-term cognitive impacts of AI use?
  • How will marginalized persons around the globe be affected by such radical employment changes? How will the Global South be impacted?
  • What are the implications for income disparities and wealth concentration?
  • In what ways will culture, the arts, science, medicine, and research be influenced?
  • What are the impacts on education, life-long learning, and professional development?
  • How will the environment, diminishing resources like water, and climate change be influenced by this employment forecast?
  • In what ways will AI proliferation impact people in need and the fauna and flora of the world, particularly vulnerable organisms and ecosystems?
  • How will monies and resources spent on AI data centers create new environmental justice communities and exacerbate inequities in existing ones?
  • What are the implications for democracy, human rights, and civil liberties, like privacy, data agency, free expression, intellectual freedom, and access to accurate, uncensored information?
  • Do you trust AI to do the white-collar jobs that humans have done? 
  • Are Microsoft and Suleyman disinterested parties? Microsoft has major self-interest in hyping AI enterprise products that Microsoft will be charging users to adopt and license.
  • If Suleyman's claim is accurate, or even is accurate but in a longer time period than 12 to 18 months, what kinds of oversight, regulations, and ethical guardrails are needed/desired?]


[Excerpt]

"Mustafa Suleyman, the Microsoft AI chief, said in an interview with the Financial Times that he predicts most, if not every, task in white-collar fields will be automated by AI within the next year or year and a half.

"I think that we're going to have a human-level performance on most, if not all, professional tasks," Suleyman said in the interview that was published Wednesday. "So white-collar work, where you're sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person — most of those tasks will be fully automated by an AI within the next 12 to 18 months.""

Friday, February 13, 2026

Lawyer sets new standard for abuse of AI; judge tosses case; Ars Technica, February 6, 2026

ASHLEY BELANGER, Ars Technica; Lawyer sets new standard for abuse of AI; judge tosses case

"Frustrated by fake citations and flowery prose packed with “out-of-left-field” references to ancient libraries and Ray Bradbury’s Fahrenheit 451, a New York federal judge took the rare step of terminating a case this week due to a lawyer’s repeated misuse of AI when drafting filings.

In an order on Thursday, District Judge Katherine Polk Failla ruled that the extraordinary sanctions were warranted after an attorney, Steven Feldman, kept responding to requests to correct his filings with documents containing fake citations."

Wednesday, February 11, 2026

Adam Schiff And John Curtis Introduce Bill To Require Tech To Disclose Copyrighted Works Used In AI Training Models; Deadline, February 10, 2026

Ted Johnson, Deadline; Adam Schiff And John Curtis Introduce Bill To Require Tech To Disclose Copyrighted Works Used In AI Training Models

"Sen. Adam Schiff (D-CA) and Sen. John Curtis (R-UT) are introducing a bill that touches on one of the hottest Hollywood-tech debates in the development of AI: The use of copyrighted works in training models.

The Copyright Labeling and Ethical AI Reporting Act would require companies to file a notice with the Register of Copyrights that details the copyrighted works used to train datasets for an AI model. The notice would have to be filed before a new model is publicly released, and would apply retroactively to models already available to consumers.

The Copyright Office also would be required to establish a public database of the notices filed. There also would be civil penalties for failure to disclose the works used."

OpenAI Is Making the Mistakes Facebook Made. I Quit.; The New York Times, February 11, 2026

 Zoë Hitzig , The New York Times; OpenAI Is Making the Mistakes Facebook Made. I Quit.

"This week, OpenAI started testing ads on ChatGPT. I also resigned from the company after spending two years as a researcher helping to shape how A.I. models were built and priced, and guiding early safety policies before standards were set in stone.

I once believed I could help the people building A.I. get ahead of the problems it would create. This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer.

I don’t believe ads are immoral or unethical. A.I. is expensive to run, and ads can be a critical source of revenue. But I have deep reservations about OpenAI’s strategy."

Tuesday, February 10, 2026

No, the human-robot singularity isn’t here. But we must take action to govern AI; The Guardian, February 10, 2026

The Guardian; No, the human-robot singularity isn’t here. But we must take action to govern AI

"Based upon my years of research on bots, AI and computational propaganda, I can tell you two things with near certainty. First, Moltbook is nothing new. Humans have built bots that can talk to one another – and to humans – for decades. They’ve been designed to make outlandish, even frightening, claims throughout this time. Second, the singularity is not here. Nor is AGI. According to most researchers, neither is remotely close. AI’s advancement is limited by a number of very tangible factors: mathematics, data access and business costs among them. Claims that AGI or the singularity have arrived are not grounded in empirical research or science.

But as tech companies breathlessly promote their AI capabilities another thing is also clear: big tech is now far from being the countervailing force it was during the first Trump administration. The overblown claims emanating from Silicon Valley about AI have become intertwined with the nationalism of the US government as the two work together in a bid to “win” the AI race. Meanwhile, ICE is paying Palantir $30m to provide AI-enabled software that may be used for government surveillance. Musk and other tech executives continue to champion far-right causes. Google and Apple also removed apps people were using to track ICE from their digital storefronts after political pressure.

Even if we don’t yet have to worry about the singularity, we do need to fight back against this marriage of convenience caused by big tech’s quest for higher valuations and Washington’s desire for control. When tech and politicians are in lockstep, constituents will need to use their power to decide what will happen with AI."