Showing posts with label informed consent. Show all posts

Wednesday, July 26, 2023

If artificial intelligence uses your work, it should pay you; The Washington Post, July 26, 2023

If artificial intelligence uses your work, it should pay you

"Renowned technologists and economists, including Jaron Lanier and E. Glen Weyl, have long argued that Big Tech should not be allowed to monetize people’s data without compensating them. This concept of “data dignity” was largely responding to the surveillance advertising business models of companies such as Google and Facebook, but Lanier and Weyl also pointed out, quite presciently, that the principle would only grow more vital as AI rose to prominence...

When I do a movie, and I sign my contract with a movie studio, I agree that the studio will own the copyright to the movie. Which feels fair and non-threatening. The studio paid to make the movie, so it should get to monetize the movie however it wants. But if I had known that by signing this contract and allowing the studio to be the movie’s sole copyright holder, I would then be allowing the studio to use that intellectual property as training data for an AI that would put me out of a job forever, I would never have signed that contract."

Saturday, December 10, 2022

Your selfies are helping AI learn. You did not consent to this.; The Washington Post, December 9, 2022

The Washington Post; Your selfies are helping AI learn. You did not consent to this.

"My colleague Tatum Hunter spent time evaluating Lensa, an app that transforms a handful of selfies you provide into artistic portraits. And people have been using the new chatbot ChatGPT to generate silly poems or professional emails that seem like they were written by a human. These AI technologies could be profoundly helpful but they also come with a bunch of thorny ethical issues.

Tatum reported that Lensa’s portrait wizardry comes from the styles of artists whose work was included in a giant database for coaching image-generating computers. The artists didn’t give their permission to do this, and they aren’t being paid. In other words, your fun portraits are built on work ripped off from artists. ChatGPT learned to mimic humans by analyzing your recipes, social media posts, product reviews and other text from everyone on the internet...

Hany Farid, a computer science professor at the University of California at Berkeley, told me that individuals, government officials, many technology executives, journalists and educators like him are far more attuned than they were a few years ago to the potential positive and negative consequences of emerging technologies like AI. The hard part, he said, is knowing what to do to effectively limit the harms and maximize the benefits." 

Thursday, May 24, 2018

New privacy rules could spell the end of legalese — or create a lot more fine print; The Washington Post, May 24, 2018

Elizabeth Dwoskin, The Washington Post; New privacy rules could spell the end of legalese — or create a lot more fine print

"“The companies are realizing that it is not enough to get people to just click through,” said Lorrie Cranor, director of the CyLab Usable Privacy and Security Laboratory at Carnegie Mellon University and the U.S. Federal Trade Commission’s former chief technologist. “That they need to communicate so that people are not surprised when they find out what they consented to.”

That has become more apparent in the past two months since revelations that a Trump-connected consultancy, Cambridge Analytica, made off with the Facebook profiles of up to 87 million Americans. Cranor said that consumer outrage over Cambridge was directly related to concerns that companies were engaging in opaque practices behind the scenes, and that consumers had unknowingly allowed it to happen by signing away their rights.

Irrespective of simpler explanations, the impact and success of the GDPR will hinge upon whether companies will try to force users to consent to their tracking or targeting as condition for access to their services, said Alessandro Acquisti, a Carnegie Mellon computer science professor and privacy researcher. “This will tell us a lot regarding whether the recent flurry of privacy policy modifications demonstrates a sincere change in the privacy stance of those companies or is more about paying lip service to the new regulation. The early signs are not auspicious.”"

Wednesday, April 13, 2016

Making the Most of Clinical Trial Data; New York Times, April 12, 2016

Editorial Board, New York Times; Making the Most of Clinical Trial Data:
"Some researchers may oppose sharing data they have worked hard to gather, or worry that others will analyze it incorrectly. Creating opportunities for collaboration on subsequent analysis may help alleviate these concerns.
Of course, any data sharing must take patients’ privacy into account; patients must be informed before joining a clinical trial that their data may be shared and researchers must ensure that the data cannot be used to identify individuals.
By making data available and supporting analysis, foundations, research institutions and drug companies can increase the benefit of clinical trials and pave the way for new findings that could help patients."

Wednesday, February 17, 2016

Balancing Benefits and Risks of Immortal Data: Participants’ Views of Open Consent in the Personal Genome Project; Hastings Center Report, December 17, 2015

Oscar A. Zarate, Julia Green Brody, Phil Brown, Monica D. Ramirez-Andreotta, Laura Perovich and Jacob Matz, Hastings Center Report; Balancing Benefits and Risks of Immortal Data: Participants’ Views of Open Consent in the Personal Genome Project:
"Abstract
An individual's health, genetic, or environmental-exposure data, placed in an online repository, creates a valuable shared resource that can accelerate biomedical research and even open opportunities for crowd-sourcing discoveries by members of the public. But these data become “immortalized” in ways that may create lasting risk as well as benefit. Once shared on the Internet, the data are difficult or impossible to redact, and identities may be revealed by a process called data linkage, in which online data sets are matched to each other. Reidentification (re-ID), the process of associating an individual's name with data that were considered deidentified, poses risks such as insurance or employment discrimination, social stigma, and breach of the promises often made in informed-consent documents. At the same time, re-ID poses risks to researchers and indeed to the future of science, should re-ID end up undermining the trust and participation of potential research participants.
The ethical challenges of online data sharing are heightened as so-called big data becomes an increasingly important research tool and driver of new research structures. Big data is shifting research to include large numbers of researchers and institutions as well as large numbers of participants providing diverse types of data, so the participants’ consent relationship is no longer with a person or even a research institution. In addition, consent is further transformed because big data analysis often begins with descriptive inquiry and generation of a hypothesis, and the research questions cannot be clearly defined at the outset and may be unforeseeable over the long term. In this article, we consider how expanded data sharing poses new challenges, illustrated by genomics and the transition to new models of consent. We draw on the experiences of participants in an open data platform—the Personal Genome Project—to allow study participants to contribute their voices to inform ethical consent practices and protocol reviews for big-data research."