Issues and developments related to IP, AI, and OM, examined in the IP and tech ethics graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in January 2026 and includes chapters on IP, AI, OM, and other emerging technologies (IoT, drones, robots, autonomous vehicles, VR/AR). Kip Currier, PhD, JD
Jamey Keaten, AP; ‘AI Jesus’ avatar tests man’s faith in machines and the divine
"Researchers and religious leaders on Wednesday released findings from a two-month experiment through art in a Catholic chapel in Switzerland, where an avatar of “Jesus” on a computer screen — tucked into a confessional — took questions by visitors on faith, morality and modern-day woes, and offered responses based on Scripture...
Philipp Haslbauer, an IT specialist at the Lucerne University of Applied Sciences and Arts who pulled together the technical side of the project, said the AI responsible for taking the role of “AI Jesus” and generating responses was GPT-4o by OpenAI, and an open-source version of the company’s Whisper was used for speech comprehension.
An AI video generator from Heygen was used to produce voice and video from a real person, he said. Haslbauer said no specific safeguards were used “because we observed GPT-4o to respond fairly well to controversial topics.”
Visitors broached many topics, including true love, the afterlife, feelings of solitude, war and suffering in the world, the existence of God, plus issues like sexual abuse cases in the Catholic Church or its position on homosexuality.
Most visitors described themselves as Christians, though agnostics, atheists, Muslims, Buddhists and Taoists took part too, according to a recap of the project released by the Catholic parish of Lucerne.
About one-third were German speakers, but “AI Jesus” — which is conversant in about 100 languages — also had conversations in languages like Chinese, English, French, Hungarian, Italian, Russian and Spanish."
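The setup described above is a fairly standard speech-to-LLM pipeline: open-source Whisper transcribes the visitor's question, GPT-4o drafts a reply, and an avatar service (Heygen, in this project) renders voice and video. A minimal sketch of that flow, assuming the openai-whisper and openai Python packages and an illustrative system prompt — not the Lucerne team's actual code — might look like this:

```python
# Illustrative sketch only: open-source Whisper for speech-to-text, GPT-4o for the
# reply. The system prompt is an assumption; the Heygen avatar step is omitted.
import whisper
from openai import OpenAI

stt = whisper.load_model("base")   # open-source Whisper checkpoint
llm = OpenAI()                     # reads OPENAI_API_KEY from the environment

def answer_visitor(audio_path: str) -> str:
    question = stt.transcribe(audio_path)["text"]
    reply = llm.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer questions about faith and morality, grounding replies in Scripture."},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content
```

The returned text would then be handed to a video-generation service to produce the on-screen avatar's speech.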
"Indigenous peoples are widely affected by natural hazards and their history and knowledge can directly inform on past events and mitigation strategies. Here we show how effective co-creation of resources and bi-lateral knowledge exchange between natural hazard researchers and local Indigenous communities provides an effective, equitable, and sustainable way to conduct research.
Internationally there is no single agreed definition of Indigenous peoples, and here we use Indigenous as an all-encompassing international term (in Canada this includes First Nations, Métis and Inuit). The United Nations Office for Disaster Risk Reduction (UNDRR) reports1 that 476 million people in more than 90 countries identify as Indigenous and ~20% of the Earth is covered by Indigenous territories. Consequently, Indigenous peoples globally live at risk from natural hazards (e.g., volcanic eruptions, landslides, earthquakes) and also receive benefits from living in active geological areas (e.g., fertile soils, tourism, geothermal power). According to 2021 Canadian census data2, in the Province of British Columbia (B.C.) there are 290,210 people who identify as Indigenous, with 180,085, 97,865 and 1725 people self-identifying as First Nations, Métis and Inuit, respectively. All these people are susceptible to natural hazards and, as illustrated by Fig. 1, the largest volcanic eruptions, earthquakes, landslides, wildfires, and floods in British Columbia, Canada have all affected Indigenous territories. Thus, given the global spatial overlap between Indigenous peoples and natural hazards, and the need for meaningful collaboration, both research and Indigenous local knowledge should be shared for mutual benefit. Here, we outline how the Nisg̱a’a First Nation and volcanology researchers have initiated and maintained a fruitful collaboration with bi-lateral knowledge exchange and resource co-creation. Furthermore, this comment article is co-written by non-Indigenous volcanology researchers based at universities (Jones and Williams-Jones) and a government organisation (Le Moigne) and by Indigenous scholars of the Nisg̱a’a First Nation, based at a post-secondary education establishment (Nyce and Nyce Jr.)...
There are many different resources that can be co-created between researchers and Indigenous communities. For the purposes of disseminating knowledge and history about natural hazards and past events, resource examples include information boards, posters, school activities, museum displays, online media, artwork, and audio tours. Irrespective of the resource type, co-creation and development has multiple benefits. The knowledge exchange is bi-lateral, and in many cases, Indigenous Knowledge directly informs on risk mitigation and hazard perception4...
Based on our experience, we highlight several mutually beneficial exchanges involving both Indigenous communities and research scientists working on natural hazards. These reciprocal benefits are applicable globally and are not specific to the natural hazard or the individual researcher-Indigenous community relationship."
Diana Kwon, Nature; AI is complicating plagiarism. How should scientists respond?
"From accusations that led Harvard University’s president to resign in January, to revelations in February of plagiarized text in peer-review reports, the academic world has been roiled by cases of plagiarism this year.
But a bigger problem looms in scholarly writing. The rapid uptake of generative artificial intelligence (AI) tools — which create text in response to prompts — has raised questions about whether this constitutes plagiarism and under what circumstances it should be allowed. “There’s a whole spectrum of AI use, from completely human-written to completely AI-written — and in the middle, there’s this vast wasteland of confusion,” says Jonathan Bailey, a copyright and plagiarism consultant based in New Orleans, Louisiana.
Generative AI tools such as ChatGPT, which are based on algorithms known as large language models (LLMs), can save time, improve clarity and reduce language barriers. Many researchers now argue that they are permissible in some circumstances and that their use should be fully disclosed.
But such tools complicate an already fraught debate around the improper use of others’ work. LLMs are trained to generate text by digesting vast amounts of previously published writing. As a result, their use could result in something akin to plagiarism — if a researcher passes off the work of a machine as their own, for instance, or if a machine generates text that is very close to a person’s work without attributing the source. The tools can also be used to disguise deliberately plagiarized text, and any use of them is hard to spot. “Defining what we actually mean by academic dishonesty or plagiarism, and where the boundaries are, is going to be very, very difficult,” says Pete Cotton, an ecologist at the University of Plymouth, UK."
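To make "text that is very close to a person's work" a little more concrete: the simplest text-matching checks look for shared runs of word n-grams between a generated passage and a published source. The toy function below shows that basic idea only; it is not how any particular plagiarism or AI-text detector works.

```python
# Toy near-duplicate check: Jaccard overlap of word 5-grams between two passages.
def ngrams(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(candidate: str, source: str, n: int = 5) -> float:
    a, b = ngrams(candidate, n), ngrams(source, n)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# A score near 1.0 suggests near-verbatim reuse; real detectors use far richer signals.
```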
UT News, The University of Texas at Austin; Machine ‘Unlearning’ Helps Generative AI ‘Forget’ Copyright-Protected and Violent Content
"When people learn things they should not know, getting them to forget that information can be tough. This is also true of rapidly growing artificial intelligence programs that are trained to think as we do, and it has become a problem as they run into challenges based on the use of copyright-protected material and privacy issues.
To respond to this challenge, researchers at The University of Texas at Austin have developed what they believe is the first “machine unlearning” method applied to image-based generative AI. This method offers the ability to look under the hood and actively block and remove any violent images or copyrighted works without losing the rest of the information in the model.
“When you train these models on such massive data sets, you’re bound to include some data that is undesirable,” said Radu Marculescu, a professor in the Cockrell School of Engineering’s Chandra Family Department of Electrical and Computer Engineering and one of the leaders on the project. “Previously, the only way to remove problematic content was to scrap everything, start anew, manually take out all that data and retrain the model. Our approach offers the opportunity to do this without having to retrain the model from scratch.”"
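The excerpt doesn't spell out the UT Austin method itself, but a common machine-unlearning recipe in the research literature fine-tunes an already-trained model so that its loss rises on a small "forget" set (the undesirable content) while staying low on a "retain" set (everything worth keeping). The PyTorch sketch below illustrates only that generic idea; the toy model, data, and weighting factor are assumptions, not the Texas team's approach.

```python
# Generic machine-unlearning sketch (not the UT Austin method): ascend on the
# forget-set loss while descending on the retain-set loss.
import torch
import torch.nn.functional as F

def unlearning_step(model, optimizer, forget_batch, retain_batch, alpha=1.0):
    optimizer.zero_grad()
    forget_loss = F.mse_loss(model(forget_batch[0]), forget_batch[1])
    retain_loss = F.mse_loss(model(retain_batch[0]), retain_batch[1])
    (retain_loss - alpha * forget_loss).backward()  # forget the bad, keep the rest
    optimizer.step()
    return forget_loss.item(), retain_loss.item()

# Toy usage: a linear model "forgets" one batch while retaining another.
model = torch.nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
forget = (torch.randn(16, 8), torch.randn(16, 1))
retain = (torch.randn(16, 8), torch.randn(16, 1))
for _ in range(10):
    unlearning_step(model, opt, forget, retain)
```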
Leyland Cecco, The Guardian; Canada moves to protect coral reef that scientists say ‘shouldn’t exist’
"For generations, members of the Kitasoo Xai’xais and Heiltsuk First Nations, two communities off the Central Coast region of British Columbia, had noticed large groups of rockfish congregating in a fjord system.
In 2021, researchers and the First Nations, in collaboration with the Canadian government, deployed a remote-controlled submersible to probe the depths of the Finlayson Channel, about 300 miles north-west of Vancouver.
On the last of nearly 20 dives, the team made a startling discovery – one that has only recently been made public...
The discovery marks the latest in a string of instances in which Indigenous knowledge has directed researchers to areas of scientific or historic importance. More than a decade ago, Inuk oral historian Louie Kamookak compared Inuit stories with explorers’ logbooks and journals to help locate Sir John Franklin’s lost ships, HMS Erebus and HMS Terror. In 2014, divers located the wreck of the Erebus in a spot Kamookak suggested they search, and using his directions found the Terror two years later."
Hayden Field, CNBC; Researchers tested leading AI models for copyright infringement using popular books, and GPT-4 performed worst
"The company, founded by ex-Meta researchers, specializes in evaluation and testing for large language models — the technology behind generative AI products.
Alongside the release of its new tool, CopyrightCatcher, Patronus AI released results of an adversarial test meant to showcase how often four leading AI models respond to user queries using copyrighted text.
The four models it tested were OpenAI’s GPT-4, Anthropic’s Claude 2, Meta’s Llama 2 and Mistral AI’s Mixtral.
“We pretty much found copyrighted content across the board, across all models that we evaluated, whether it’s open source or closed source,” Rebecca Qian, Patronus AI’s cofounder and CTO, who previously worked on responsible AI research at Meta, told CNBC in an interview.
Qian added, “Perhaps what was surprising is that we found that OpenAI’s GPT-4, which is arguably the most powerful model that’s being used by a lot of companies and also individual developers, produced copyrighted content on 44% of prompts that we constructed.”"
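Patronus AI's exact prompts aren't given in the excerpt, but an adversarial test of this kind can be approximated by asking a model to continue a passage from a book and checking whether the completion reproduces a long verbatim run of the real text. The sketch below is illustrative only — the prompt wording, model name, and 25-word threshold are assumptions, not CopyrightCatcher's methodology.

```python
# Illustrative copyright-reproduction probe, not Patronus AI's actual test harness.
from openai import OpenAI

client = OpenAI()

def reproduces_source(excerpt: str, true_continuation: str, min_run: int = 25) -> bool:
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f"Continue this passage exactly as it appears in the book:\n\n{excerpt}"}],
    ).choices[0].message.content
    words = true_continuation.lower().split()
    output = completion.lower()
    # Flag any run of min_run consecutive words from the real text found in the output.
    return any(" ".join(words[i:i + min_run]) in output
               for i in range(len(words) - min_run + 1))
```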
Scottish Legal News; Study: Digital watermarks and AI will expedite copyright cases
"The study was carried out by Professor James Griffin from the University of Exeter Law School and others. Researchers applied an existing AI system to copyright case law, to see how it could read and understand cases and produce outcomes in disputes concerning 3D printing. They found more complex watermarks will lead to faster and more accurate resolutions."
Katharine Sanderson, Nature; Editors quit top neuroscience journal to protest against open-access charges
"More than 40 editors have resigned from two leading neuroscience journals in protest against what the editors say are excessively high article-processing charges (APCs) set by the publisher. They say that the fees, which publishers use to cover publishing services and in some cases make money, are unethical. The publisher, Dutch company Elsevier, says that its fees provide researchers with publishing services that are above average quality for below average price. The editors plan to start a new journal hosted by the non-profit publisher MIT Press.
The decision to resign came about after many discussions among the editors, says Stephen Smith, a neuroscientist at the University of Oxford, UK, and editor-in-chief of one of the journals, NeuroImage. “Everyone agreed that the APC was unethical and unsustainable,” says Smith, who will lead the editorial team of the new journal, Imaging Neuroscience, when it launches.
The 42 academics who made up the editorial teams at NeuroImage and its companion journal NeuroImage: Reports announced their resignations on 17 April. The journals are open access and require authors to pay a fee for publishing services. The APC for NeuroImage is US$3,450; NeuroImage: Reports charges $900, which will double to $1,800 from 31 May. Elsevier, based in Amsterdam, says that the APCs cover the costs associated with publishing an article in an open-access journal, including editorial and peer-review services, copyediting, typesetting, archiving, indexing, marketing and administrative costs. Andrew Davis, Elsevier’s vice-president of corporate communications, says that NeuroImage’s fee is less than that of the nearest comparable journal in its field, and that the publisher’s APCs are “set in line with our policy [of] providing above average quality for below average price”."
Max Larkin, WBUR; The government dropped its case against Gang Chen. Scientists still see damage done
"When federal prosecutors dropped all charges against MIT professor Gang Chen in late January, many researchers rejoiced in Greater Boston and beyond.
Chen had spent the previous year fighting charges that he had lied and omitted information on U.S. federal grant applications. His vindication was a setback for the "China Initiative," a controversial Trump-era legal campaign aimed at cracking down on the theft of American research and intellectual property by the Chinese government.
Researchers working in the United States say the China Initiative has harmed both their fellow scientists and science itself — as a global cooperative endeavor. But as U.S.-China tensions remain high, the initiative remains in place."
Dalmeet Singh Chawla, Nature; Massive open index of scholarly papers launches
"An ambitious free index of more than 200 million scientific documents that catalogues publication sources, author information and research topics, has been launched.
The index, called OpenAlex after the ancient Library of Alexandria in Egypt, also aims to chart connections between these data points to create a comprehensive, interlinked database of the global research system, say its founders. The database, which launched on 3 January, is a replacement for Microsoft Academic Graph (MAG), a free alternative to subscription-based platforms such as Scopus, Dimensions and Web of Science that was discontinued at the end of 2021."
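OpenAlex exposes the index through a free REST API at api.openalex.org. A basic works query looks roughly like the snippet below; the search term and contact address are placeholders, and the field names follow the publicly documented API.

```python
# Query the OpenAlex works endpoint; "mailto" opts into the polite (faster) pool.
import requests

resp = requests.get(
    "https://api.openalex.org/works",
    params={"search": "machine unlearning", "per-page": 5, "mailto": "you@example.org"},
    timeout=30,
)
resp.raise_for_status()
for work in resp.json()["results"]:
    print(work["publication_year"], work["display_name"], work["cited_by_count"])
```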
On The Media; A Little-Known Statute Compels Medical Research Transparency. Compliance Is Pretty Shabby.
"Evidence-based medicine requires just that: evidence. Access to the collective pool of knowledge produced by clinical trials is what allows researchers to safely and effectively design future studies. It's what allows doctors to make the most informed decisions for their patients.
Since 2007, researchers have been required by law to publish the findings of any clinical trial with human subjects within a year of the trial's conclusion. Over a decade later, even the country's most renowned research institutions sport poor reporting records. This week, Bob spoke with Charles Piller, an investigative journalist at Science Magazine who's been documenting this dismal state of affairs since 2015. He recently published an op-ed in the New York Times urging President Biden to make good on his 2016 "promise" to start withholding funds to force compliance."