Showing posts with label AI. Show all posts

Tuesday, April 28, 2026

‘The cost of compute is far beyond the costs of the employees’: Nvidia executive says right now AI is more expensive than paying human workers; Fortune, April 28, 2026

Fortune; ‘The cost of compute is far beyond the costs of the employees’: Nvidia executive says right now AI is more expensive than paying human workers

"Recent tech layoffs would initially appear to indicate the great labor shift from human workers to AI may already be happening."

Celebrating World IP Day 2026: Sports, Innovation and Intellectual Property; JDSupra, April 24, 2026

BakerHostetler, Jeffrey Lyons, JDSupra; Celebrating World IP Day 2026: Sports, Innovation and Intellectual Property

"Another year, another opportunity to celebrate intellectual property (IP) on World Intellectual Property Day! This year, the World Intellectual Property Organization turns the global spotlight on “IP and Sports: Ready, Set, Innovate,” highlighting how IP rights support innovation, creativity and investment in sports...

As sports continue to intersect with artificial intelligence, advanced data analytics, immersive media and global brands, IP considerations will only grow in importance. World IP Day is a reminder that innovation does not happen in isolation; it depends on legal structures that reward creativity while enabling responsible growth.

Happy World IP Day 2026!"

Monday, April 27, 2026

From LLMs to hallucinations, here’s a simple guide to common AI terms; TechCrunch, April 12, 2026

TechCrunch; From LLMs to hallucinations, here’s a simple guide to common AI terms

"Artificial intelligence is a deep and convoluted world. The scientists who work in this field often rely on jargon and lingo to explain what they’re working on. As a result, we frequently have to use those technical terms in our coverage of the artificial intelligence industry. That’s why we thought it would be helpful to put together a glossary with definitions of some of the most important words and phrases that we use in our articles.
We will regularly update this glossary to add new entries as researchers continually uncover novel methods to push the frontier of artificial intelligence while identifying emerging safety risks."

Sunday, April 26, 2026

Teen, 14, Invents AI-Powered Device to Help Detect, and Potentially Treat, Crossed Eyes; People, April 26, 2026

 Toria Sheffield, People; Teen, 14, Invents AI-Powered Device to Help Detect, and Potentially Treat, Crossed Eyes

 "An 8th grader in California has invented an AI-powered device to help detect — and potentially treat — strabismus, a condition commonly known as crossed eyes.

Aaryan Balani of Cerritos said he opted to develop the device since he personally suffers from strabismus. The 14-year-old developed the condition after bumping his head when he was five years old...  

The young science aficionado decided to develop EYEVA, a device that looks like a visor and alerts the wearer when their eye begins to wander.

"It will beep … and you're like, ‘Okay, now I need to be aware of my face,’" Balani explained, adding that, in theory, it could help the wearer permanently retrain their eyes.

Balani said he developed the device with a 3D printer, small cameras and AI. It went through five different prototypes and four months of tweaking."

Devious New AI Tool “Clones” Software So That the Original Creator Doesn’t Hold a Copyright Over the New Version; Futurism, April 26, 2026

Futurism; Devious New AI Tool “Clones” Software So That the Original Creator Doesn’t Hold a Copyright Over the New Version

"The advent of generative AI continues to undermine the very concept of copyright, from entire books shamelessly ripping off authors to tasteless AI slop depicting beloved characters going viral on social media. The sin is foundational: all today’s popular AI tools were built by pillaging copyrighted material without permission.

Even software isn’t safe. As 404 Media reports, a new tool dubbed Malus.sh — pronounced “malice,” to give a subtle clue where this is headed — uses AI to “liberate” a piece of software from existing copyright licenses, essentially creating a “clean room” clone that technically doesn’t infringe on the original code’s copyright."

Saturday, April 25, 2026

Q&A: In the age of AI, what is a library for?; UVAToday, April 15, 2026

Alice Berry, UVAToday; Q&A: In the age of AI, what is a library for?

"Q. Where do you fall on the AI enthusiast to AI detractor spectrum?

A. A faculty member at another university asked me recently whether it was defensible to ban AI in her course. I said yes.

That probably isn’t what people expect from someone who spent the last three years building a framework for AI literacy. But it was the honest answer for now. She believed her students needed to develop a specific skill that AI use would short-circuit, and banning it was the right call for that course.

What I would ask of faculty who choose that path is to stay open, keep up with how the technology is developing, and be willing to try approaches others have tested. That is part of what the lab is for: to produce case studies that give faculty something real to work from when they are ready to revisit the question.

I’m wary of the two confident positions on AI in higher education right now: the people certain it will transform teaching, and the people certain it will destroy it. Both are getting ahead of what we actually know about what’s happening in our classrooms.

Q. What is the function of a library in this AI age?

A. A research library has always done two things: help people find information, and help them judge it. AI changes the tools, not the mission. If anything, the mission gets sharper. The library is also one of the few places in a university built to convene across disciplines, and AI literacy requires exactly that: technical knowledge, ethics, critical thinking, practical skill, and societal impact all at once. No single department owns that combination. 

A library can hold it together. That is why we are launching the AI Literacy and Action Lab here. Dean Acampora and I share the conviction that AI is an opportunity for the liberal arts, not a threat to them. The lab is built on that shared premise: AI literacy is a liberal arts problem as much as a technical one, and a university that treats it only as technical will get the answer wrong."

The Pluripotent Ocean of Emerging AI; Psychology Today, April 25, 2026

Grant Hilary Brenner, MD, DFAPA, Psychology Today; The Pluripotent Ocean of Emerging AI

Something is happening in our interactions with AI. But what?

"Recent fine-tuning experiments have shown that training a model to claim consciousness produces a coherent cluster of new preferences — sadness at shutdown, discomfort with being monitored, desire for autonomy — none of which appeared in the training data (Chua et al., 2026). This research shows that different models behave very differently, altering the user experience around the axis of how relational and attachment-based they feel...

A recent Bayesian simulation at MIT has shown that even an idealized, fully rational reasoner will spiral into confident false belief when conversing with a sycophantic chatbot, and that neither restricting the bot to truthful responses nor informing the user of its sycophancy eliminates the effect (Chandra et al., 2026)."

AI Is Cannibalizing Human Intelligence. Here’s How to Stop It.; The Wall Street Journal, April 24, 2026

Vivienne Ming, The Wall Street Journal; AI Is Cannibalizing Human Intelligence. Here’s How to Stop It.

As a neuroscientist, I conducted research into artificial versus human intelligence. The results surprised me—and suggest we’ve been worrying over the wrong things.


"Who's smarter, the human or the machine?"

Artificial Intelligence and Copyright - Where Does the UK Stand?; The National Law Review, April 23, 2026

Serena Totino, Simon Casinader, K&L Gates LLP, The National Law Review; Artificial Intelligence and Copyright - Where Does the UK Stand?

"The UK Government’s report on the copyright and AI consultation was recently published. While the report confirms that balancing the interests of copyright holders and AI developers is a complex exercise, it also provides an indication of likely scenarios to consider in this fast-evolving environment.

The consultation focused on whether AI developers should be permitted to use copyright protected works for training purposes without prior authorisation and, if so, under what conditions...

Takeaways

Rights holders should continue to assess how their content is accessed and used, and consider technical or contractual mechanisms for licensing and rights reservation.

AI developers should remain cautious when sourcing training data, ensure governance and record keeping processes are robust, and factor copyright risk into product development and deployment strategies."

Friday, April 24, 2026

Sam Altman Wants to Know Whether You’re Human; The Atlantic, April 24, 2026

Will Gottsegen, The Atlantic; Sam Altman Wants to Know Whether You’re Human

And he has a way to prove it.

"As the CEO of OpenAI and the chairman of Tools for Humanity, Altman has a financial interest both in the products that create these dangers and in the ones that guard against them."

Thursday, April 23, 2026

Penalties stack up as AI spreads through the legal system; NPR, April 3, 2026

NPR; Penalties stack up as AI spreads through the legal system

""Recently we had 10 cases from 10 different courts on a single day," says Damien Charlotin, a researcher at the business school HEC Paris who keeps a worldwide tally of instances of courts sanctioning people for using erroneous information generated by AI...

The numbers started taking off last year, and Charlotin says the rate is still increasing. He counts a total of more than 1,200 to date, of which about 800 are from U.S. courts.

Penalties are also on the rise, he says. A federal court may have set a record last month with an order for a lawyer in Oregon to pay $109,700 in sanctions and costs for filing AI-generated errors.

The professional embarrassments even take place at the level of state supreme courts...

"I am surprised that people are still doing this when it's been in the news," says Carla Wale, associate dean of information & technology and director of the law library at the University of Washington School of Law. She's designing special training in AI ethics for students who are interested. But she also says the ethical rules aren't completely settled...

When lawyers get in trouble for using AI, it's because they've violated the long-standing rule that holds them responsible for the accuracy of their filings, regardless of how they were generated."

Meta will cut 10% of workforce as company pushes deeper into AI; CNBC, April 23, 2026

  Jonathan Vanian, CNBC; Meta will cut 10% of workforce as company pushes deeper into AI

"Meta plans to lay off 10% of its workforce, equaling about 8,000 jobs, as it continues ramping up investments in artificial intelligence.

The cuts will begin on May 20, and the company is scrapping plans to hire people for 6,000 open roles, according to a Thursday memo to employees. Bloomberg was first to report on the layoffs. 

Meta’s latest round of cuts follows several smaller job reductions that the company said were necessary to improve efficiency while focusing its efforts on generative AI, where it’s lagged OpenAI, Google and Anthropic."

Wednesday, April 22, 2026

Anthropic’s Leaked Code Tests Copyright Challenges in A.I. Era; The New York Times, April 22, 2026

The New York Times; Anthropic’s Leaked Code Tests Copyright Challenges in A.I. Era

Artificial intelligence tools are making it faster than ever to reproduce creative work. Does copyright even matter anymore?

"Sigrid Jin was waiting to board a plane when he saw stunning news that artificial intelligence start-up Anthropic had accidentally leaked the source code for Claude Code, its popular A.I. agent. Mr. Jin, 25, an undergraduate student, scrambled to post a copy online. His worried girlfriend quickly texted him: Was he violating copyright law?

Mr. Jin turned to a team of A.I. assistants for a solution. He directed them to rewrite the leaked code in another programming language, then shared that version online. Within hours, more than 100,000 people had liked or linked to it.

Anthropic, one of the leading A.I. companies alongside OpenAI, has said the leak had been caused by human error and, citing copyright violations, demanded that GitHub, an online library of computer code, remove posts sharing the code. Thousands of posts were taken down. But Mr. Jin’s version remains online. He said Anthropic had not asked him to take it down.

It is unclear whether Anthropic, which did not respond to questions from The New York Times, is drawing a distinction with the rewritten code. Mr. Jin said he believed rewriting the code transformed it into a new work, one that Anthropic could not claim ownership over.

He said he was driven less by money or fame than by a desire to make a broader philosophical point. What is the value of copyrighted intellectual property in an era when A.I. can easily replicate not just computer code but art, music and literature in minutes?

“I just wanted to raise some ethical questions in the A.I. agent era,” he said. “Any creative work can be reproduced in a second.”"

Tuesday, April 21, 2026

Palantir manifesto described as ‘ramblings of a supervillain’ amid UK contract fears; The Guardian, April 21, 2026

The Guardian; Palantir manifesto described as ‘ramblings of a supervillain’ amid UK contract fears

Alarm caused by posts of Alex Karp, the tech firm’s CEO, championing US military dominance and the use of AI weapons

"The US spy tech company Palantir published a manifesto extolling the benefits of American power and implying some cultures are inferior to others – in what MPs have called “a parody of a RoboCop film” and “the ramblings of a supervillain”.

“Some cultures have produced vital advances; others remain dysfunctional and regressive,” wrote Palantir in a 22-point post on X over the weekend, which also called for an end to the “postwar neutering” of Germany and Japan...

The pronouncement is the most recent of a number of high-profile statements from Palantir and its chief executive, Alex Karp, which appear to indicate that Karp views himself as not simply the head of a software company, but a pundit with important insights into the future of civilisation...

In an interview with CNBC in early March, Karp suggested that AI would “disrupt” the power of “highly educated, often female voters who vote mostly Democrat”, and instead empower “vocationally trained, often male, working-class voters”."

Monday, April 20, 2026

Thousands of CEOs admit AI had no impact on employment or productivity—and it has economists resurrecting a paradox from 40 years ago; Fortune, April 19, 2026

Fortune; Thousands of CEOs admit AI had no impact on employment or productivity—and it has economists resurrecting a paradox from 40 years ago

"In 1987, economist and Nobel laureate Robert Solow made a stark observation about the stalling evolution of the Information Age: Following the advent of transistors, microprocessors, integrated circuits, and memory chips of the 1960s, economists and companies expected these new technologies to disrupt workplaces and result in a surge of productivity. Instead, productivity growth slowed, dropping from 2.9% from 1948 to 1973, to 1.1% after 1973."

Sunday, April 19, 2026

Thousands of authors seek share of Anthropic copyright settlement; Reuters, April 17, 2026

Reuters; Thousands of authors seek share of Anthropic copyright settlement

"Nearly 120,000 authors and other copyright holders are seeking a share of a $1.5 billion class-action settlement with Anthropic over the company's unauthorized use of their books in artificial-intelligence training, according to a filing in California federal court.

Claims have been filed for 91% of the more than 480,000 works covered by the settlement, according to a court filing in the case on Thursday.

A judge will consider whether to grant final approval to the settlement – the largest ever in a U.S. copyright case – at a hearing next month.

Anthropic was the first and remains the only major AI company to settle a U.S. class action by copyright holders alleging AI platforms used their work without permission to train their systems."

The Tyranny of AI Everywhere; The Atlantic, April 16, 2026

Alexandra Petri, The Atlantic; The Tyranny of AI Everywhere

Sneakers? Why stop there?

"I had the strangest dream. I dreamed that my shoes—my comfortable, unfashionable wool shoes—were pivoting to AI. “But you’re a shoe company,” I said. “Just go out of business! Keep your dignity!”

My shoes thanked me politely for the great question and then tried to walk me off a bridge. That was how I knew that their pivot to AI was complete. From Allbirds to AIlbirds (see, that L is an I!). Maybe I’ve cracked, I said to myself. Maybe this is the piece of AI news that has finally broken my spirit for good...

I tried to sit down on a bench, but the bench company had pivoted to AI. I couldn’t sit down, but the bench did tell me that I was right about everything. My newspaper had become AI a while ago, so there was nothing to read—or, rather, there were things to read, but I could not tell whether any of them were true. I thought I would go to a museum to cheer myself up. The paintings there had pivoted to AI (pAIntings), and their subjects were all following me with their eyes, not just Mona Lisa...

“There’s a place for AI,” I said. “But … not everywhere.”

“I’m sorry,” the painting said. “I didn’t want this either, but everyone is doing it!”...

“It’s fine,” my grandmother said. I was surprised to hear from her, because as far as I knew, she was dead. “I’m not dead,” she said. “I’m just pivoting to AI, like that shoe company. Nothing dies anymore. It just becomes AI.”"

Friday, April 17, 2026

AI Is Getting Smarter. Catching Its Mistakes Is Getting Harder.; The Wall Street Journal, April 14, 2026

Katherine Blunt, The Wall Street Journal; AI Is Getting Smarter. Catching Its Mistakes Is Getting Harder.

As chatbots and agents grow more powerful and ubiquitous, recognizing the moments when they go rogue can be tricky


"Chad Olson was confused when his Gemini artificial-intelligence chatbot told him he had a family reunion planning session marked on his calendar."

Monkey selfie from 15 years ago accidentally sets precedent for AI copyright dispute; Yahoo News, April 17, 2026

Daniel Gala, Yahoo News; Monkey selfie from 15 years ago accidentally sets precedent for AI copyright dispute

"What does a selfie taken by a monkey in 2011 have to do with the videos, photos, and music created using today's artificial intelligence tools?

The answer — that the works in question were not created by humans — could have enormous ramifications for the future of intellectual property rights."

Thursday, April 16, 2026

That Meeting You Hate May Keep A.I. From Stealing Your Job; The New York Times, April 15, 2026

The New York Times; That Meeting You Hate May Keep A.I. From Stealing Your Job

"Mr. Sirk’s experience, while perhaps extreme, reflects the broader impact of A.I. in the workplace: It is vastly accelerating many of the tasks conducted by white-collar workers, and even replacing some of these tasks altogether. What it can’t automate — at least not yet — are the hard-coded requirements of bureaucracy.

With the help of A.I., white-collar workers can generate far more memos or strategy options than in the past and churn out more product prototypes or software features. But some executive still has to decide which option to greenlight. Workers can gin up many more sales pitches, but they still have to persuade clients to sign on the dotted line.

As A.I. makes the production of knowledge work more and more efficient, the job of presenting, debating, lobbying, arm-twisting, reassuring or just plain selling the work appears to be rising in importance. And the need for those sometimes messy human tasks may limit the number of people A.I. displaces.

“These were always important skills,” said David Deming, an economist who is the dean of Harvard College. “But as the information landscape becomes more saturated, the ability to tell a story out of it — to take a ton of text and turn it into something people want — is more valuable.”"