My Bloomsbury book "Ethics, Information, and Technology" was published on Nov. 13, 2025. Purchases can be made via Amazon and this Bloomsbury webpage: https://www.bloomsbury.com/us/ethics-information-and-technology-9781440856662/
DAMON ROOT, Reason; SCOTUS Tackles Illegal File Sharing, Internet Music Piracy, and Copyright Law
"A decision in Cox Communications v. Sony Music Entertainment won't be coming our way until sometime next year, so until then, we'll just have to wait and see."
Ronald Mann, SCOTUSblog; Court seems dubious of billion-dollar judgment for copyright infringement
"My basic reaction to the argument is that the justices would be uncomfortable with accepting the broadest version of the arguments that Cox has presented to it (that the ISP is protected absent an affirmative act of malfeasance), but Sony’s position seems so unpalatable to them that a majority is most unlikely to coalesce around anything that is not a firm rejection of the lower court’s ruling against Cox. I wouldn’t expect that ruling to come soon, but I don’t think there is much doubt about what it will say."
Natasha Singer, The New York Times; College Students Flock to a New Major: A.I.
"Artificial intelligence is the hot new college major...
Now interest in understanding, using and learning how to build A.I. technologies is soaring, and schools are racing to meet rising student and industry demand.
Over the last two years, dozens of U.S. universities and colleges have announced new A.I. departments, majors, minors, courses, interdisciplinary concentrations and other programs.
In 2022, for instance, the Massachusetts Institute of Technology created a major called “A.I. and decision-making.” Students in the program learn to develop A.I. systems and study how technologies like robots interact with humans and the environment. This year, nearly 330 students are enrolled in the program — making A.I. the second-largest major at M.I.T. after computer science.
“Students who prefer to work with data to address problems find themselves more drawn to an A.I. major,” said Asu Ozdaglar, the deputy dean of academics at the M.I.T. Schwarzman College of Computing. Students interested in applying A.I. in fields like biology and health care are also flocking to the new major, she added."
JULIANNE HILL, ABA Journal; Is unauthorized artificial intelligence use in law school an honor code violation?
"With generative artificial intelligence’s growing availability and acceptance into students’ workflow, some law schools are wondering whether unauthorized AI use should be an honor code violation—something that could potentially trip up aspiring lawyers in the character and fitness portion of the bar licensure process...
Lack of clarity
The problem stems from unclear AI policies within law schools and universities, says Daniel W. Linna Jr., a senior lecturer and the director of law and technology initiatives at the Northwestern University Pritzker School of Law in Illinois.
These cases “illustrate why these policies are problematic,” says Linna, a 2018 Journal Legal Rebel.
The vast majority of policies that Linna has seen at law schools don’t draw firm lines between what is and what isn’t acceptable...
“We don’t have a good means of policing this,” Linna says. “What if someone is wrongly accused and or maybe even makes innocent mistakes? This should really force law schools to reconsider what we’re trying to accomplish with these policies and whether we’re doing more harm than good.”...
Along with clear AI policies, says Kellye Testy, the executive director and CEO of the Association of American Law Schools, the solution includes solid ethical training for law students to use AI before entering the workplace, where comfort with the tool will be expected."
Jeffrey C. Dixon, Professor of Sociology, College of the Holy Cross, The Conversation; Student cheating dominates talk of generative AI in higher ed, but universities and tech companies face ethical issues too
"Debates about generative artificial intelligence on college campuses have largely centered on student cheating. But focusing on cheating overlooks a larger set of ethical concerns that higher education institutions face, from the use of copyrighted material in large language models to student privacy.
As a sociologist who teaches about AI and studies the impact of this technology on work, I am well acquainted with research on the rise of AI and its social consequences. And when one looks at ethical questions from multiple perspectives – those of students, higher education institutions and technology companies – it is clear that the burden of responsible AI use should not fall entirely on students’ shoulders.
I argue that responsibility, more generally, begins with the companies behind this technology and needs to be shouldered by higher education institutions themselves."
Visuals by Philip Cheung, The New York Times; Big Tech Makes Cal State Its A.I. Training Ground
Kimmy Yam, NBC News; MIT is first school to reject Trump administration's agenda in exchange for funding benefits
"The Massachusetts Institute of Technology on Friday became the first school to reject an offer of federal funds in exchange for agreeing to the Trump administration's education agenda.
MIT disagreed with a number of aspects of the administration's proposal, which was sent to nine major universities last week, arguing that it would restrict the university's freedom of expression and independence, Sally Kornbluth, president of the Cambridge-based school, wrote in a letter Friday to the Department of Education.
“In our view, America’s leadership in science and innovation depends on independent thinking and open competition for excellence. In that free marketplace of ideas, the people of MIT gladly compete with the very best, without preferences,” Kornbluth wrote. “Therefore, with respect, we cannot support the proposed approach to addressing the issues facing higher education.”"
MARTY LEVINE, University Times; PittGPT debuts today as private AI source for University
"Today marks the rollout of PittGPT, Pitt’s own generative AI for staff and faculty — a service that will be able to use Pitt’s sensitive, internal data in isolation from the Internet because it works only for those logging in with their Pitt ID.
“We want to be able to use AI to improve the things that we do” in our Pitt work, said Dwight Helfrich, director of the Pitt enterprise initiatives team at Pitt Digital. That means securely adding Pitt’s private information to PittGPT, including Human Resources, payroll and student data. However, he explains, in PittGPT “you would only have access to data that you would have access to in your daily role” — in your specific Pitt job.
“Security is a key part of AI,” he said. “It is much more important in AI than in other tools we provide.” Using PittGPT — as opposed to the other AI services available to Pitt employees — means that any data submitted to it “stays in our environment and it is not used to train a free AI model.”
Helfrich also emphasizes that “you should get a very similar response to PittGPT as you would get with ChatGPT,” since PittGPT has access to “the best LLMs on the market” — the large language models that power generative AI.
Faculty, staff and students already have free access to such AI services as Google Gemini and Microsoft Copilot. And “any generative AI tool provides the ability to analyze data … and to rewrite things” that are still in early or incomplete drafts, Helfrich said.
“It can help take the burden off some of the work we have to do in our lives” and help us focus on the larger tasks that, so far, humans are better at undertaking, added Pitt Digital spokesperson Brady Lutsko. “When you are working with your own information, you can tell it what to include” — it won’t add misinformation from the internet or its own programming, as AI sometimes does. “If you have a draft, it will make your good work even better.”
“The human still needs to review and evaluate that this is useful and valuable,” Helfrich said of AI’s contribution to our work. “At this point we can say that there is nothing in AI that is 100 percent reliable.”
On the other hand, he said, “they’re making dramatic enhancements at a pace we’ve never seen in technology. … I’ve been in technology 30 years and I’ve never seen anything improve as quickly as AI.” In his own work, he said, “AI can help review code and provide test cases, reducing work time by 75 percent. You just have to look at it with some caution and just (verify) things.”
“Treat it like you’re having a conversation with someone you’ve just met,” Lutsko added. “You have some skepticism — you go back and do some fact checking.”
Lutsko emphasized that the University has guidance on Acceptable Use of Generative Artificial Intelligence Tools as well as a University-Approved GenAI Tools List.
Pitt’s list of approved generative AI tools includes Microsoft 365 Copilot Chat, which is available to all students, faculty and staff (as opposed to the version of Copilot built into Microsoft 365 apps, which is an add-on available to departments through Panther Express for $30 per month, per person); Google Gemini; and Google NotebookLM, which Lutsko said “serves as a dedicated research assistant for precise analysis using user-provided documents.”
PittGPT joins that list today, Helfrich said.
Pitt also has been piloting Pitt AI Connect, a tool for researchers to integrate AI into software development (using an API, or application programming interface).
And Pitt also is already deploying the PantherAI chatbot, clickable from the bottom right of the Pitt Digital and Office of Human Resources homepages, which provides answers to common questions that may otherwise be deep within Pitt’s webpages. It will likely be offered on other Pitt websites in the future.
“Dive in and use it,” Helfrich said of PittGPT. “I see huge benefits from all of the generative AI tools we have. I’ve saved time and produced better results.”"
MYLES WERNTZ, Christianity Today; CHATBOT CHEATING IN ETHICS CLASS
"The ethical and practical problems are legion: copyright disputes, ecological effects, a possible economic bubble, and plain deceit. Still, for an undergraduate on a deadline, the appeal is obvious."
Theodore Schleifer and Nicholas Kulish, The New York Times; Larry Ellison Wants to Do Good, Do Research and Make a Profit
"Mr. Ellison has rarely engaged with the community of Giving Pledge signers, according to two people with knowledge of the matter. He has cherished his autonomy and does not want to be influenced to support Mr. Gates’s causes, one of the people said, while also sensitive to any idea that he is backing off the pledge.
But the stakes of Mr. Ellison’s message on X are enormous. His fortune is about 10 times what it was when he signed the pledge as the software company he founded, Oracle, rides the artificial intelligence boom. Mr. Ellison controls a staggering 40-plus percent of the company’s stock...
“Oxford, Cambridge and the whole university sector are under pressure to capitalize on intellectual property because of long-running government policy belief that the U.K. has fallen behind economically,” said John Picton, an expert in nonprofit law at the University of Manchester."
Michael Barclay, Quartz; The Pope has a message for AI executives
Kathryn Palmer, Inside Higher Ed; In Wisconsin, Professors Worry AI Could Replace Them
"Faculty at the cash-strapped Universities of Wisconsin System are pushing back against a proposed copyright policy they believe would cheapen the relationship between students and their professors and potentially allow artificial intelligence bots to replace faculty members...
The policy proposal is not yet final and is open for public comment through Dec. 13...
Natalia Taft, an associate professor of biological sciences at the University of Wisconsin–Parkside who signed the open letter, told Inside Higher Ed that she believes the policy proposal “is part of the trend of the corporatization of academia.”...
Jane Ginsburg, a professor of literary and artistic property law at Columbia University School of Law, said the university has the law on its side.
Under the 1976 Copyright Act, “course material prepared by employees, including professors, as part of their jobs comes within the definition of a ‘work made for hire,’ whose copyright vests initially in the employer (the University), not the employee (the professor).”"
Lauren Coffey, Inside Higher Ed; Internet Archive Court Loss Leaves Higher Ed in Gray Area
"Pandemic-era library programs that helped students access books online could be threatened by an appeals court ruling issued last week.
Libraries across the country, from Carnegie Mellon University to the University of California system, turned to what’s known as a digital or controlled lending program in 2020, which gave students a way to borrow books that weren’t otherwise available. Those programs are small in scale and largely experimental but part of a broader shift in modernizing the university library.
But the appeals court ruling could upend those programs...
Still, librarians at colleges and elsewhere, along with other experts, feared that the long-running legal fight between the Internet Archive and leading publishers could imperil the ability of libraries to own and preserve books, among other ramifications."
J. D. WRIGHT, University of Pittsburgh University Times; Teaching and the Legal Landscape: Primer on the Fair Use Doctrine in copyright law
"As the fall semester hurtles toward us, along with decisions about what readings and other materials we’ll assign as homework or present in class, think about the implications of posting copyrighted works on Canvas or presenting them in class. What is acceptable, and what trespasses beyond permissible bounds? Considerations like these are ripe for exploration as we engage in the regular ritual of preparing courses for a new term.
Our question is: Does the Fair Use Doctrine exempt a copyrighted work from the general rule requiring rights-holder approval before someone else can distribute that content?
Applying U.S. intellectual property law, including the Fair Use Doctrine, can be a maddeningly fact-specific process that makes broad generalizations incomplete, unreliable, or even dangerous. However, we can outline some basic principles and a set of steps to follow as you make important decisions about what to redistribute or duplicate—and what not to. In close cases, play it safe or seek legal guidance; this article provides background and context, not legal advice."