Showing posts with label trust.

Sunday, September 28, 2025

Hastings Center Releases Medical AI Ethics Tool for Policymakers, Patients, and Providers; The Hastings Center for Bioethics, September 25, 2025

The Hastings Center for Bioethics; Hastings Center Releases Medical AI Ethics Tool for Policymakers, Patients, and Providers

"As artificial intelligence rapidly transforms healthcare, The Hastings Center for Bioethics has released an interactive tool to help policymakers, patients and providers understand the ways that AI is being used in medicine—from making a diagnosis to evaluating insurance claims—and navigate the ethical questions that emerge along the way.

The new tool, a Patient’s Journey with Medical AI, follows an imaginary patient through five interactions with medical AI. It guides users through critical decision points in diagnostics, treatment, and communication, offering personalized insights into how algorithms might influence their care. 

Each decision point in the Patient’s Journey includes a summary of the ethical issues raised and multiple choice questions intended to stimulate thinking and discussion about particular uses of AI in medicine. Policy experts from across the political spectrum were invited to review the tool for accuracy and utility.

The Patient’s Journey is the latest in a set of resources developed through Hastings on the Hill, a project that translates bioethics research for use by policymakers—with an initial focus on medical AI. “This isn’t just about what AI can do — it’s about what it should do,” said Hastings Center President Vardit Ravitsky, who directs Hastings on the Hill. “Patients deserve to understand how technologies affect their health decisions, and policymakers can benefit from expert guidance as they seek to ensure that AI serves the public good.”

The Greenwall Foundation is supporting this initiative. Additional support comes from The Donaghue Foundation and the National Institutes of Health’s Bridge2AI initiative.

In addition to using Hastings on the Hill resources, policymakers, industry leaders, and others who shape medical AI policy and practice are invited to contact The Hastings Center with questions related to ethical issues they are encountering. Hastings Center scholars and fellows can provide expert nonpartisan analysis on urgent bioethics issues, such as algorithmic bias, patient privacy, data governance, and informed consent.

“Ethics should not be an afterthought,” says Ravitsky. “Concerns about biased health algorithms and opaque clinical decision tools have underscored the need for ethical oversight alongside technical innovation.”

“The speed of AI development has outpaced the ethical guardrails we need,” said Erin Williams, President and CEO of EDW Wisdom, LLC — the consultancy working with The Hastings Center. “Our role is to bridge that gap —ensuring that human dignity, equity, and trust are not casualties of technological progress.”

Explore Patient’s Journey with Medical AI. Learn more about Hastings on the Hill."

Thursday, September 18, 2025

AI could never replace my authors. But, without regulation, it will ruin publishing as we know it; The Guardian, September 18, 2025

Jonny Geller, The Guardian; AI could never replace my authors. But, without regulation, it will ruin publishing as we know it


[Kip Currier: This is a thought-provoking piece by literary agent Jonny Geller. He suggests an "artists’ rights charter for AI that protects two basic principles: permission and attribution". His charter idea conveys some aspects of the copyright area called "moral rights".

Moral rights provide copyright creators with a right of paternity (i.e. attribution) and a right of integrity. The latter can enable creators to exercise some level of control over how their copyrighted works may be adapted. The moral right of integrity, for example, was an argument in cases involving whether black and white films (legally) could be or (ethically) should be colorized. (See Colors in Conflicts: Moral Rights and the Foreign Exploitation of Colorized U.S. Motion Pictures.) Moral rights are not widespread in U.S. copyright law because of tensions between the moral right of integrity and the right of free expression/free speech under the U.S. Constitution (whose September 17, 1787 birthday was yesterday). The Visual Artists Rights Act (1990) is a narrow example of moral rights under U.S. copyright law.

To Geller's proposed Artists' Rights Charter for AI I'd suggest adding the word and concept of "Responsibilities". Compelling arguments can be made for providing authors with some rights regarding use of their copyrighted works as AI training data. And, commensurately, persuasive arguments can be made that authors have certain responsibilities if they use AI at any stage of their creative processes. Authors can and ethically should be transparent about how they have used AI, if applicable, in the creation stages of their writing.

Of course, how to operationalize that as an ethical standard is another matter entirely. But just because it may be challenging to develop ethical guidance for authors and to instill it as a broad standard doesn't mean it shouldn't be attempted.]


[Excerpt]

"The single biggest threat to the livelihood of authors and, by extension, to our culture, is not short attention spans. It is AI...

As a literary agent and CEO of one of the largest agencies in Europe, I think this is something everyone should care about – not because we fear progress, but because we want to protect it. If you take away the one thing that makes us truly human – our ability to think like humans, create stories and imagine new worlds – we will live in a diminished world.

AI that doesn’t replace the artist, or that will work with them transparently, is not all bad. An actor who is needed for reshoots on a movie may authorise use of the footage they have to complete a picture. This will save on costs, the environmental impact and time. A writer may wish to speed up their research and enhance their work by training their own models to ask the questions that a researcher would. The translation models available may enhance the range of offering of foreign books, adding to our culture.

All of this is worth discussing. But it has to be a discussion and be transparent to the end user. Up to now, work has simply been stolen and there are insufficient guardrails on the distributors, studios, publishers. As a literary agent, I have a more prosaic reason to get involved – I don’t think it is fair for someone’s work to be taken without their permission to create an inferior competitor.

What can we do? We could start with some basic principles for all to sign up to. An artists’ rights charter for AI that protects two basic principles: permission and attribution."

Friday, May 30, 2025

This Latest AI Book Debacle Is A Disturbing Part Of A Growing Trend; ScreenRant, May 29, 2025

Rose Graceling-Moore, ScreenRant; This Latest AI Book Debacle Is A Disturbing Part Of A Growing Trend

"Yet another AI scandal has hit self-publishing, as an author left generative AI in a final draft of their book - but this isn't an isolated incident, and reveals a growing, and deeply problematic, trend."

Thursday, January 16, 2025

Biden bids farewell with dark warning for America: the oligarchs are coming; The Guardian, January 15, 2025

The Guardian; Biden bids farewell with dark warning for America: the oligarchs are coming

"The primetime speech did not mention Donald Trump by name. Instead it will be remembered for its dark, ominous warning about something wider and deeper of which Trump is a symptom.

“Today, an oligarchy is taking shape in America of extreme wealth, power, and influence that literally threatens our entire democracy, our basic rights and freedom and a fair shot for everyone to get ahead,” Biden said.

The word “oligarchy” comes from the Greek words meaning rule (arche) by the few (oligos). Some have argued that the dominant political divide in America is no longer between left and right, but between democracy and oligarchy, as power becomes concentrated in the hands of a few. The wealthiest 1% of Americans now has more wealth than the bottom 90% combined.

The trend did not start with Trump but he is set to accelerate it. The self-styled working-class hero has picked the richest cabinet in history, including 13 billionaires, surrounding himself with the very elite he claims to oppose. Elon Musk, the world’s richest man, has become a key adviser. Tech titans Musk, Jeff Bezos and Mark Zuckerberg – collectively worth a trillion dollars – will be sitting at his inauguration on Monday.

Invoking former president Dwight Eisenhower’s farewell address in January 1961 that warned against the rise of a military-industrial complex, Biden said: “Six decades later, I’m equally concerned about the potential rise of a tech industrial complex. It could pose real dangers for our country as well. Americans are being buried under an avalanche of misinformation and disinformation, enabling the abuse of power.”

In an acknowledgement of news deserts and layoffs at venerable institutions such as the Washington Post, Biden added starkly: “The free press is crumbling. Editors are disappearing. Social media is giving up on fact checking. Truth is smothered by lies, told for power and for profit. We must hold the social platforms accountable, to protect our children, our families and our very democracy from the abuse of power.”

Zuckerberg’s recent decision to abandon factcheckers on Facebook, and Musk’s weaponisation of X in favour of far-right movements including Maga, was surely uppermost in Biden’s mind. Trust in the old media is breaking down as people turn to a fragmented new ecosystem. It has all happened with disorienting speed."

Saturday, December 28, 2024

Overcoming AI’s Nagging Trust And Ethics Issues; Forbes, December 28, 2024

Joe McKendrick, Forbes; Overcoming AI’s Nagging Trust And Ethics Issues

"Trust and ethics in AI is what is making business leaders nervous. For example, at least 72% of executives responding to a recent surveyfrom the IBM Institute for Business Value say they “are willing to forgo generative AI benefits due to ethical concerns.” In addition, more than half (56%) indicate they are delaying major investments in generative AI until there is clarity on AI standards and regulations...

"Today, guardrails are a growing area of practice for the AI community given the stochastic nature of these models,” said Ross. “Guardrails can be employed for virtually any area of decisioning, from examining bias to preventing the leakage of sensitive data."...

The situation is not likely to change soon, Jeremy Rambarran, professor at Touro University Graduate School, pointed out. “Although the output that's being generated may be unique, depending on how the output is being presented, there's always a chance that part of the results may not be entirely accurate. This will eventually change down the road as algorithms are enhanced and could eventually be updated in an automated manner.”...

How can AI be best directed to be ethical and trustworthy? Compliance requirements, of course, will be a major driver of AI trust in the future, said Rambarran. “We need to ensure that AI-driven processes comply with ethical guidelines, legal regulations, and industry standards. Humans should be aware of the ethical implications of AI decisions and be ready to intervene when ethical concerns arise.”"

Wednesday, September 4, 2024

NEH Awards $2.72 Million to Create Research Centers Examining the Cultural Implications of Artificial Intelligence; National Endowment for the Humanities (NEH), August 27, 2024

 Press Release, National Endowment for the Humanities (NEH); NEH Awards $2.72 Million to Create Research Centers Examining the Cultural Implications of Artificial Intelligence

"The National Endowment for the Humanities (NEH) today announced grant awards totaling $2.72 million for five colleges and universities to create new humanities-led research centers that will serve as hubs for interdisciplinary collaborative research on the human and social impact of artificial intelligence (AI) technologies.

As part of NEH’s third and final round of grant awards for FY2024, the Endowment made its inaugural awards under the new Humanities Research Centers on Artificial Intelligence program, which aims to foster a more holistic understanding of AI in the modern world by creating scholarship and learning centers across the country that spearhead research exploring the societal, ethical, and legal implications of AI. 

Institutions in California, New York, North Carolina, Oklahoma, and Virginia were awarded NEH grants to establish the first AI research centers and pilot two or more collaborative research projects that examine AI through a multidisciplinary humanities lens. 

The new Humanities Research Centers on Artificial Intelligence grant program is part of NEH’s agencywide Humanities Perspectives on Artificial Intelligence initiative, which supports humanities projects that explore the impacts of AI-related technologies on truth, trust, and democracy; safety and security; and privacy, civil rights, and civil liberties. The initiative responds to President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which establishes new standards for AI safety and security, protects Americans’ privacy, and advances equity and civil rights."

Friday, May 6, 2022

What Is Happening to the People Falling for Crypto and NFTs; The New York Times, May 5, 2022

Farhad Manjoo, The New York Times; What Is Happening to the People Falling for Crypto and NFTs

"In the past year Yuga Labs, the well-funded start-up that makes Bored Apes, has embarked on a parade of new and even farther-out digital spinoffs of its simians. Its latest ventures have highlighted the head-scratching, money-burning, broken-casino vibe of what’s being called the internet’s next big thing. Cryptocurrencies, blockchains, NFTs and the constellation of hyped-up technologies known as “web3” have been celebrated as a way to liberate the internet from the tech giants who control it now. Instead what’s happening with Bored Apes suggests they’re doing the opposite: polluting the digital world in a thick haze of errors, swindles and expensive, largely unregulated financial speculation that ruins whatever scrap of trust still remains online...

But how many people have to lose their shirts before we realize that web3 isn’t a solution to any of our problems?"

Saturday, February 8, 2020

Putting China in charge of the world’s intellectual property is a bad idea; The Washington Post, January 30, 2020



"Beijing is lobbying hard to take over leadership of the international organization that oversees intellectual property, which could result in dire consequences for the future of technology and economic competition. But the U.S.-led effort to prevent this from happening faces a steep uphill climb.

In March, 83 countries will vote to elect the next director general of the World Intellectual Property Organization (WIPO), a U.N.-created body founded in 1967 “to promote the protection of intellectual property throughout the world.” The Chinese candidate, Wang Binying, currently serves as one of its four deputy director-generals and is widely seen as the front-runner.

On its face, allowing China to assume leadership of the WIPO poses a clear risk to the integrity of the institution, given that the U.S. government has singled out China as the leading source of intellectual property theft in the world."

Monday, April 22, 2019

Wary of Chinese Espionage, Houston Cancer Center Chose to Fire 3 Scientists; The New York Times, April 22, 2019

Mihir Zaveri, The New York Times; Wary of Chinese Espionage, Houston Cancer Center Chose to Fire 3 Scientists

"“A small but significant number of individuals are working with government sponsorship to exfiltrate intellectual property that has been created with the support of U.S. taxpayers, private donors and industry collaborators,” Dr. Peter Pisters, the center’s president, said in a statement on Sunday.

“At risk is America’s internationally acclaimed system of funding biomedical research, which is based on the principles of trust, integrity and merit.”

The N.I.H. had also flagged two other researchers at MD Anderson. One investigation is proceeding, the center said, and the evidence did not warrant firing the other researcher.

The news of the firings was first reported by The Houston Chronicle and Science magazine.

The investigations began after Francis S. Collins, the director of the National Institutes of Health, sent a letter in August to more than 10,000 institutions the agency funds, warning of “threats to the integrity of U.S. biomedical research.”"

Friday, March 15, 2019

Review: 'The Inventor' is a coolly appalling portrait of Elizabeth Holmes and the Theranos scandal; The Los Angeles Times, March 14, 2019

Justin Chang, The Los Angeles Times; Review: 'The Inventor' is a coolly appalling portrait of Elizabeth Holmes and the Theranos scandal


"As a quick glance at this week’s headlines will remind you — a staggering college admissions scandal, a wave of indictments in the cases of Paul Manafort and Jussie Smollett — we are living in deeply fraudulent times. But if there are few people or institutions worthy of our trust anymore, perhaps we can still trust that, eventually, Alex Gibney will get around to making sense of it all. Over the course of his unflagging, indispensable career he has churned out documentaries on Scientology and Enron, Lance Armstrong and Casino Jack — individual case studies in a rich and fascinating investigation of the American hustler at work.
 
Gibney approaches his subjects with the air of an appalled moralist and, increasingly, a grudging connoisseur. His clean, straightforward style, which usually combines smart talking heads, slick graphics and reams of meticulous data, is clearly galvanized by these charismatic individuals, who are pathological in their dishonesty and riveting in their chutzpah. And he is equally fascinated by the reactions, ranging from unquestioning belief to conflicted loyalty, that they foster among their followers and associates, who in many cases shielded them, at least for a while, from public discovery and censure.
 
“The Inventor: Out for Blood in Silicon Valley,” Gibney’s latest exercise in coolly measured outrage, is an engrossing companion piece to his other works in this vein. The subject of this HBO documentary is Elizabeth Holmes, the self-styled biotech visionary who dropped out of Stanford at age 19 and founded a company called Theranos, which promised to bring about a revolution in preventive medicine and personal healthcare. Its top-secret weapon was a compact machine called the Edison, which could purportedly run more than 200 individual tests from just a few drops of blood, obtained with just a prick of the finger.
 
Holmes’ vision of a brave new world — one in which anyone could stop by Walgreens and obtain a comprehensive, potentially life-saving snapshot of their health — proved tantalizing enough to raise more than $400 million and earned her a reputation as possibly the greatest inventor since, well, Thomas Edison. Her investors included Betsy DeVos, Rupert Murdoch and the Waltons; Henry Kissinger, George Shultz and James Mattis sat on her board of directors. But that was all before the Wall Street Journal’s John Carreyrou and other investigative journalists exposed glaring faults in the Edison’s design and sent the company’s $10-billion valuation spiraling down to nothing. Theranos dissolved in 2018, and Holmes and former company president Sunny Balwani were charged with conspiracy and fraud.

Full disclosure: As the son of a retired medical technologist who spent more than 30 years testing blood the traditional way, I approached “The Inventor” with great fascination and more than a little schadenfreude. The movie, for its part, seems both magnetized and repelled by its subject, a reaction that it will likely share with its audience. Gibney is perhaps overly fond of deploying intense, lingering close-ups of Holmes’ face and peering deep into her unnerving blue eyes (“She didn’t blink,” a former employee recalls). If the eyes are the windows to the soul, “The Inventor” just keeps looking and looking, as though uncertain whether or not its subject has one."

Wednesday, June 14, 2017

National Geographic Traveler Used My Photo for a Cover and Never Paid Me; PetaPixel, June 12, 2017

Mustafa Turgut, PetaPixel; National Geographic Traveler Used My Photo for a Cover and Never Paid Me

"After a couple of months of receiving no payment, I emailed them again asking them when they would be paying for the use of my photo on their cover.

They never responded to my email, and they have not responded to any contact attempt since then.

Frustrated, I began emailing the global National Geographic headquarters with my story. Although I have tried contacting headquarters over and over, I have yet to receive a single response.

I then began posting on National Geographic social media pages in 2013, but all of my posts were deleted shortly after I wrote them."