Showing posts with label transparency. Show all posts

Friday, October 4, 2024

Beyond the hype: Key components of an effective AI policy; CIO, October 2, 2024

  Leo Rajapakse, CIO; Beyond the hype: Key components of an effective AI policy

"An AI policy is a living document 

Crafting an AI policy for your company is increasingly important due to the rapid growth and impact of AI technologies. By prioritizing ethical considerations, data governance, transparency and compliance, companies can harness the transformative potential of AI while mitigating risks and building trust with stakeholders. Remember, an effective AI policy is a living document that evolves with technological advancements and societal expectations. By investing in responsible AI practices today, businesses can pave the way for a sustainable and ethical future tomorrow."

Thursday, September 5, 2024

Intellectual property and data privacy: the hidden risks of AI; Nature, September 4, 2024

Amanda Heidt, Nature; Intellectual property and data privacy: the hidden risks of AI

"Timothée Poisot, a computational ecologist at the University of Montreal in Canada, has made a successful career out of studying the world’s biodiversity. A guiding principle for his research is that it must be useful, Poisot says, as he hopes it will be later this year, when it joins other work being considered at the 16th Conference of the Parties (COP16) to the United Nations Convention on Biological Diversity in Cali, Colombia. “Every piece of science we produce that is looked at by policymakers and stakeholders is both exciting and a little terrifying, since there are real stakes to it,” he says.

But Poisot worries that artificial intelligence (AI) will interfere with the relationship between science and policy in the future. Chatbots such as Microsoft’s Bing, Google’s Gemini and ChatGPT, made by tech firm OpenAI in San Francisco, California, were trained using a corpus of data scraped from the Internet — which probably includes Poisot’s work. But because chatbots don’t often cite the original content in their outputs, authors are stripped of the ability to understand how their work is used and to check the credibility of the AI’s statements. It seems, Poisot says, that unvetted claims produced by chatbots are likely to make their way into consequential meetings such as COP16, where they risk drowning out solid science.

“There’s an expectation that the research and synthesis is being done transparently, but if we start outsourcing those processes to an AI, there’s no way to know who did what and where the information is coming from and who should be credited,” he says...

The technology underlying genAI, which was first developed at public institutions in the 1960s, has now been taken over by private companies, which usually have no incentive to prioritize transparency or open access. As a result, the inner mechanics of genAI chatbots are almost always a black box — a series of algorithms that aren’t fully understood, even by their creators — and attribution of sources is often scrubbed from the output. This makes it nearly impossible to know exactly what has gone into a model’s answer to a prompt. Organizations such as OpenAI have so far asked users to ensure that outputs used in other work do not violate laws, including intellectual-property and copyright regulations, or divulge sensitive information, such as a person’s location, gender, age, ethnicity or contact information. Studies have shown that genAI tools might do both1,2."

Sunday, September 1, 2024

QUESTIONS FOR CONSIDERATION ON AI & THE COMMONS; Creative Commons, July 24, 2024

Anna Tumadóttir, Creative Commons; QUESTIONS FOR CONSIDERATION ON AI & THE COMMONS

"The intersection of AI, copyright, creativity, and the commons has been a focal point of conversations within our community for the past couple of years. We’ve hosted intimate roundtables, organized workshops at conferences, and run public events, digging into the challenging topics of credit, consent, compensation, transparency, and beyond. All the while, we’ve been asking ourselves: what can we do to foster a vibrant and healthy commons in the face of rapid technological development? And how can we ensure that creators and knowledge-producing communities still have agency?...

We recognize that there is a perceived tension between openness and creator choice. Namely, if we give creators choice over how to manage their works in the face of generative AI, we may run the risk of shrinking the commons. To potentially overcome, or at least better understand the effect of generative AI on the commons, we believe that finding a way for creators to indicate “no, unless…” would be positive for the commons. Our consultations over the course of the last two years have confirmed that:

  • Folks want more choice over how their work is used.
  • If they have no choice, they might not share their work at all (under a CC license or strict copyright).

If these views are as wide ranging as we perceive, we feel it is imperative that we explore an intervention, and bring far more nuance into how this ecosystem works.

Generative AI is here to stay, and we’d like to do what we can to ensure it benefits the public interest. We are well-positioned with the experience, expertise, and tools to investigate the potential of preference signals.

Our starting point is to identify what types of preference signals might be useful. How do these vary or overlap in the cultural heritage, journalism, research, and education sectors? How do needs vary by region? We’ll also explore exactly how we might structure a preference signal framework so it’s useful and respected, asking, too: does it have to be legally enforceable, or is the power of social norms enough?

Research matters. It takes time, effort, and most importantly, people. We’ll need help as we do this. We’re seeking support from funders to move this work forward. We also look forward to continuing to engage our community in this process. More to come soon."

Friday, July 12, 2024

AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections; Digiday, July 12, 2024

Marty Swant, Digiday; AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections

"The U.S. Senate Commerce Committee on Thursday held a hearing to address a range of concerns about the intersection of AI and privacy. While some lawmakers expressed concern about AI accelerating risks – such as online surveillance, scams, hyper-targeting ads and discriminatory business practices — others cautioned regulations might further protect tech giants and burden smaller businesses."

Saturday, June 8, 2024

NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI; The National Law Review, June 4, 2024

  James G. Gatto of Sheppard, Mullin, Richter & Hampton LLP, The National Law Review; NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI

"The number of bar associations that have issued AI ethics guidance continues to grow, with NJ being the most recent. In its May 2024 report (Report), the NJ Task Force on Artificial Intelligence and the Law made a number of recommendations and findings as detailed below. With this Report, NJ joins the list of other bar associations that have issued AI ethics guidance, including Florida, California, New York, and DC, as well as the US Patent and Trademark Office. The Report notes that the practice of law is “poised for substantial transformation due to AI,” adding that while the full extent of this transformation remains to be seen, attorneys must keep abreast of and adapt to evolving technological landscapes and embrace opportunities for innovation and specialization in emerging AI-related legal domains.

The Task Force included four workgroups, including: i) Artificial Intelligence and Social Justice Concerns; ii) Artificial Intelligence Products and Services; iii) Education and CLE Programming; and iv) Ethics and Regulatory Issues. Each workgroup made findings and recommendations, some of which are provided below (while trying to avoid duplicating what other bar associations have addressed). Additionally, the Report includes some practical tools including guidance on Essential Factors for Selecting AI Products and Formulating an AI Policy in Legal Firms, provides a Sample Artificial Intelligence and Generative Artificial Intelligence Use Policy and Questions for Vendors When Selecting AI Products and Services, links to which are provided below.

The Report covers many of the expected topics with a focus on:

  • prioritizing AI education, establishing baseline procedures and guidelines, and collaborating with data privacy, cybersecurity, and AI professionals as needed;
  • adopting an AI policy to ensure the responsible integration of AI in legal practice and adherence to ethical and legal standards; and
  • the importance of social justice concerns related to the use of AI, including the importance of transparency in AI software algorithms, bias mitigation, and equitable access to AI tools and the need to review legal AI tools for fairness and accessibility, particularly tools designed for individuals from marginalized or vulnerable communities.

Some of the findings and recommendations are set forth below."

Wednesday, May 15, 2024

The Generative AI Copyright Disclosure Act of 2024: Balancing Innovation and IP Rights; The National Law Review, May 13, 2024

 Danner Kline of Bradley Arant Boult Cummings LLP, The National Law Review; The Generative AI Copyright Disclosure Act of 2024: Balancing Innovation and IP Rights

"As generative AI systems become increasingly sophisticated and widespread, concerns around the use of copyrighted works in their training data continue to intensify. The proposed Generative AI Copyright Disclosure Act of 2024 attempts to address this unease by introducing new transparency requirements for AI developers.

The Bill’s Purpose and Requirements

The primary goal of the bill is to ensure that copyright owners have visibility into whether their intellectual property is being used to train generative AI models. If enacted, the law would require companies to submit notices to the U.S. Copyright Office detailing the copyrighted works used in their AI training datasets. These notices would need to be filed within 30 days before or after the public release of a generative AI system.

The Copyright Office would then maintain a public database of these notices, allowing creators to search and see if their works have been included. The hope is that this transparency will help copyright holders make more informed decisions about licensing their IP and seeking compensation where appropriate."

Friday, February 18, 2022

The government dropped its case against Gang Chen. Scientists still see damage done; WBUR, February 16, 2022

Max Larkin, WBUR; The government dropped its case against Gang Chen. Scientists still see damage done

"When federal prosecutors dropped all charges against MIT professor Gang Chen in late January, many researchers rejoiced in Greater Boston and beyond.

Chen had spent the previous year fighting charges that he had lied and omitted information on U.S. federal grant applications. His vindication was a setback for the "China Initiative," a controversial Trump-era legal campaign aimed at cracking down on the theft of American research and intellectual property by the Chinese government.

Researchers working in the United States say the China Initiative has harmed both their fellow scientists and science itself — as a global cooperative endeavor. But as U.S.-China tensions remain high, the initiative remains in place."

Thursday, May 20, 2021

A Little-Known Statute Compels Medical Research Transparency. Compliance Is Pretty Shabby.; On The Media, April 21, 2021

 On The Media; A Little-Known Statute Compels Medical Research Transparency. Compliance Is Pretty Shabby.

"Evidence-based medicine requires just that: evidence. Access to the collective pool of knowledge produced by clinical trials is what allows researchers to safely and effectively design future studies. It's what allows doctors to make the most informed decisions for their patients.

Since 2007, researchers have been required by law to publish the findings of any clinical trial with human subjects within a year of the trial's conclusion. Over a decade later, even the country's most well-renowned research institutions sport poor reporting records. This week, Bob spoke with Charles Piller, an investigative journalist at Science Magazine who's been documenting this dismal state of affairs since 2015. He recently published an op-ed in the New York Times urging President Biden to make good on his 2016 "promise" to start withholding funds to force compliance."

Wednesday, August 19, 2020

A New Copyright Office Warehouse–25 Years in the Making; Library of Congress, August 19, 2020

Library of Congress; A New Copyright Office Warehouse–25 Years in the Making

"The following is a guest post by Paul Capel, Supervisory Records Management Section Head.

The United States Copyright Office holds the most comprehensive collection of copyright records in the world. The Office has over 200,000 boxes of deposit copies spread among three storage facilities in Landover, Maryland; a contracted space in Pennsylvania; and the National Archives and Records Administration (NARA) facility in Massachusetts. Even with these three warehouses, that’s not enough space. Each day, the Office receives new deposits, and despite the increase in electronic deposits, our physical deposits continue to grow year after year.

These deposits are managed by the Deposit Copies Storage Unit, a dedicated team that springs into action to retrieve deposits when requested by examiners or researchers or for litigation cases. In this type of work, speed and efficiency of retrieval are critical. Managing deposits across three storage locations can present a challenge to our ideal retrieval times. When our records are stored in several locations, the potential for miscommunication or misplaced deposits increases.

This October, the Office will be opening a new 40,000 square foot warehouse that has been in discussion for over twenty-five years. We will be moving our deposits out of facilities that are more than forty years old to centrally locate them in a new state-of-the-art facility. This is a huge undertaking, and we are aiming to move 88,000 boxes from Landover in under 45 days. The new space is environmentally controlled and meets preservation requirements for the storage of federal records. Even more importantly, the new facility will allow the Office to maintain control over all our records in a single location, which will improve our retrieval times and will enable us to serve our stakeholders better.

This new facility is a great start, but we have an even bigger vision for our deposits. To truly inventory and track our deposits, the Office is investigating a warehouse management system that will help staff inventory, track, locate, and manage all the items in our warehouse. This type of system will help the Office enhance the availability and accessibility of materials, decreasing manual processing, and allowing for real-time tracking of deposits at any given time. It will also let us know who has them and when their period of retention ends.

This system will provide all the notifications expected from any modern delivery service. Copyright Office staff will be able to obtain a copy of their order and tell when it is in transit, know when it has been delivered, and sign for it digitally. This system will also provide transparency to others who might have an interest in requesting the same deposit, to see where it currently is, who has it, and how long they have had it."

Saturday, February 8, 2020

Putting China in charge of the world’s intellectual property is a bad idea; The Washington Post, January 30, 2020



"Beijing is lobbying hard to take over leadership of the international organization that oversees intellectual property, which could result in dire consequences for the future of technology and economic competition. But the U.S.-led effort to prevent this from happening faces a steep uphill climb.

In March, 83 countries will vote to elect the next director general of the World Intellectual Property Organization (WIPO), a U.N.-created body founded in 1967 “to promote the protection of intellectual property throughout the world.” The Chinese candidate, Wang Binying, currently serves as one of its four deputy director-generals and is widely seen as the front-runner.

On its face, allowing China to assume leadership of the WIPO poses a clear risk to the integrity of the institution, given that the U.S. government has singled out China as the leading source of intellectual property theft in the world."

Wednesday, January 22, 2020

It’s Copyright Week 2020: Stand Up for Copyright Laws That Actually Serve Us All; Electronic Frontier Foundation (EFF), January 20, 2020

Katharine Trendacosta, Electronic Frontier Foundation (EFF); It’s Copyright Week 2020: Stand Up for Copyright Laws That Actually Serve Us All

"We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation...

We continue to fight for a version of copyright that does what it is supposed to. And so, every year, EFF and a number of diverse organizations participate in Copyright Week. Each year, we pick five copyright issues to highlight and advocate a set of principles of copyright law. This year’s issues are:
  • Monday: Fair Use and Creativity
    Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to comment, criticize, and rework our common culture.
  • Tuesday: Copyright and Competition
    Copyright should not be used to control knowledge, creativity, or the ability to tinker with or repair your own devices. Copyright should encourage more people to share, make, or repair things, rather than concentrate that power in only a few players.
  • Wednesday: Remedies
    Copyright claims should not raise the specter of huge, unpredictable judgments that discourage important uses of creative work. Copyright should have balanced remedies that also provide a real path for deterring bad-faith claims.
  • Thursday: The Public Domain
    The public domain is our cultural commons and a crucial resource for innovation and access to knowledge. Copyright should strive to promote, and not diminish, a robust, accessible public domain.
  • Friday: Copyright and Democracy
    Copyright must be set through a participatory, democratic, and transparent process. It should not be decided through back-room deals, secret international agreements, unaccountable bureaucracies, or unilateral attempts to apply national laws extraterritorially.
Every day this week, we’ll be sharing links to blog posts and actions on these topics at https://www.eff.org/copyrightweek and at #CopyrightWeek on Twitter.

As we said last year, and the year before that, if you too stand behind these principles, please join us by supporting them, sharing them, and telling your lawmakers you want to see copyright law reflect them."

Wednesday, December 11, 2019

Elsevier signs first open-access deal in the United States; Science, November 25, 2019

Science News Staff, Science; Elsevier signs first open-access deal in the United States

"Publishing giant Elsevier has signed its first open-access deal with a U.S. institution, Carnegie Mellon University (CMU) in Pittsburgh, Pennsylvania, Inside Higher Ed reports. The arrangement, which CMU announced on 21 November, will allow CMU scholars to publish articles in any Elsevier journal on an immediately free-to-read basis. CMU researchers will also continue to have access to paywalled Elsevier articles, which previous contracts covered with subscription fees.

CMU did not disclose the cost of the arrangement, which has been a sticking point in Elsevier’s open-access negotiations with other research institutions. After the University of California system insisted on a price cut, Elsevier’s negotiations failed in February; in April, a research consortium in Norway cut a deal with Elsevier similar to CMU’s, while agreeing to a price hike. “All I can say is that we achieved the financial objectives we set out to achieve,” Keith Webster, dean of CMU’s university libraries and director of emerging and integrative media initiatives, tells Inside Higher Ed.

CMU researchers only publish about 175 papers annually in Elsevier journals. That low volume gives Elsevier an opportunity to test the 4-year arrangement with relatively low financial risk."

Thursday, May 17, 2018

New Guidelines For Tech Companies To Be Transparent, Accountable On Censoring User Content; Intellectual Property Watch, May 7, 2018

Intellectual Property Watch; New Guidelines For Tech Companies To Be Transparent, Accountable On Censoring User Content

"The Electronic Frontier Foundation (EFF) called on Facebook, Google, and other social media companies today to publicly report how many user posts they take down, provide users with detailed explanations about takedowns, and implement appeals policies to boost accountability.

EFF, ACLU of Northern California, Center for Democracy & Technology, New America’s Open Technology Institute, and a group of academic experts and free expression advocates today released the Santa Clara Principles, a set of minimum standards for tech companies to augment and strengthen their content moderation policies. The plain language, detailed guidelines call for disclosing not just how and why platforms are removing content, but how much speech is being censored. The principles are being released in conjunction with the second edition of the Content Moderation and Removal at Scale conference. Work on the principles began during the first conference, held in Santa Clara, California, in February.

“Our goal is to ensure that enforcement of content guidelines is fair, transparent, proportional, and respectful of users’ rights,” said EFF Senior Staff Attorney Nate Cardozo."

Wednesday, March 28, 2018

Why an Indian hotel startup is taking the difficult route of filing patents; Quartz, March 28, 2018

Ananya Bhattacharya, Quartz; Why an Indian hotel startup is taking the difficult route of filing patents

"India and patents
High costs, lengthy processing periods, and a general lack of awareness are huge deterrents for startups eyeing patents in India. Gaps in the system, like a shortage of examiners, have caused hundreds of thousands of applications to pile up.
“Filing patents is common practice in other parts of the world but the importance of filing patents has only of late become apparent to startups in India,” said Anindya Ghose, the Heinz Riehl professor of business at New York University. Shorter processing times for intellectual property (IP) rights applications, an 80% rebate on patent fees for startups, and more transparency around the system are helping.
However, the country is still ranked an unimpressive 44th out of 50 in a score of IP robustness compiled by the US Chamber of Commerce (pdf) this year. “India’s score continues to suggest that additional, meaningful reforms are needed to complement the Policy,” the federal entity said."

Thursday, August 3, 2017

To Protect Voting, Use Open-Source Software; New York Times, August 3, 2017

R. James Woolsey and Brian J. Fox, New York Times; To Protect Voting, Use Open-Source Software

"If the community of proprietary vendors, including Microsoft, would support the use of an open-source model for elections, we could expedite progress toward secure voting systems.

With an election on the horizon, it’s urgent that we ensure that those who seek to make our voting systems more secure have easy access to them, and that Mr. Putin does not."

Saturday, July 29, 2017

Open data comes to Syracuse; WRVO, July 27, 2017

Ellen Abbott, WRVO; Open data comes to Syracuse

"Mayor Stephanie Miner says this kind of open data policy is the wave of the future.

"This is how people are thinking about governmental services in terms of transparency. And now that resources are as tight as they are. This will help you measure the effectiveness and efficiency of policies put into place."

Friday, July 21, 2017

Should Open Access And Open Data Come With Open Ethics?; Forbes, July 20, 2017

Kalev Leetaru, Forbes; Should Open Access And Open Data Come With Open Ethics?

"In the end, the academic community must decide if “openness” and “transparency” apply only to the final outputs of our scholarly institutions, with individual researchers, many from fields without histories of ethical prereview, are exclusively empowered to decide what constitutes ethical and moral conduct and just how much privacy should be permitted in our digital society, or if we should add “open ethics” to our focus on open access and open data and open universities up to public discourse on just what the future of “big data” research should look like."

Sunday, July 16, 2017

How can we stop algorithms telling lies?; Guardian, July 16, 2017

Cathy O'Neil, Guardian; How can we stop algorithms telling lies?


[Kip Currier: Cathy O'Neil is shining much-needed light on the little-known but influential power of algorithms on key aspects of our lives. I'm using her thought-provoking 2016 Weapons of Math Destruction: How Big Data Increases Inequality And Threatens Democracy as one of several required reading texts in my Information Ethics graduate course at the University of Pittsburgh's School of Computing and Information.]

"A proliferation of silent and undetectable car crashes is harder to investigate than when it happens in plain sight.

I’d still maintain there’s hope. One of the miracles of being a data sceptic in a land of data evangelists is that people are so impressed with their technology, even when it is unintentionally creating harm, they openly describe how amazing it is. And the fact that we’ve already come across quite a few examples of algorithmic harm means that, as secret and opaque as these algorithms are, they’re eventually going to be discovered, albeit after they’ve caused a lot of trouble.

What does this mean for the future? First and foremost, we need to start keeping track. Each criminal algorithm we discover should be seen as a test case. Do the rule-breakers get into trouble? How much? Are the rules enforced, and what is the penalty? As we learned after the 2008 financial crisis, a rule is ignored if the penalty for breaking it is less than the profit pocketed. And that goes double for a broken rule that is only discovered half the time...

It’s time to gird ourselves for a fight. It will eventually be a technological arms race, but it starts, now, as a political fight. We need to demand evidence that algorithms with the potential to harm us be shown to be acting fairly, legally, and consistently. When we find problems, we need to enforce our laws with sufficiently hefty fines that companies don’t find it profitable to cheat in the first place. This is the time to start demanding that the machines work for us, and not the other way around."

Thursday, June 1, 2017

Five questions about open science answered; Phys.org, May 30, 2017

Elizabeth Gilbert, Katie Corker, Phys.org; Five questions about open science answered

"What is "open science"?

Open science is a set of practices designed to make scientific processes and results more transparent and accessible to people outside the research team. It includes making complete research and lab procedures freely available online to anyone. Many scientists are also proponents of open access, a parallel movement involving making research articles available to read without a subscription or access fee."

Tuesday, May 2, 2017

Chinese Government and Hollywood Launch Snoop-and-Censor Copyright Filter; Electronic Frontier Foundation (EFF), May 1, 2017

Jeremy Malcolm, Electronic Frontier Foundation (EFF); Chinese Government and Hollywood Launch Snoop-and-Censor Copyright Filter

"Two weeks ago the Copyright Society of China (also known as the China Copyright Association) launched its new 12426 Copyright Monitoring Center, which is dedicated to scanning the Chinese Internet for evidence of copyright infringement. This frightening panopticon is said to be able to monitor video, music and images found on "mainstream audio and video sites and graphic portals, small and medium vertical websites, community platforms, cloud and P2P sites, SmartTV, external set-top boxes, aggregation apps, and so on."...

The announcement of China's government-linked 12426 Copyright Monitoring Center is absolutely chilling. It is just as chilling that the governments of the United States and Europe are being lobbied by copyright holders to follow China's lead. Although this call is being heard on both sides of the Atlantic, it has gained the most ground in Europe, where it needs to be urgently stopped in its tracks. Europeans can learn more and speak out against these draconian censorship demands at the Save the Meme campaign website."