Friday, April 10, 2026

OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters; Wired, April 9, 2026

MAXWELL ZEFF, Wired; OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

The ChatGPT-maker testified in favor of an Illinois bill that would limit when AI labs can be held liable—even in cases where their products cause “critical harm.”

"OPENAI IS THROWING its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage."

Amazon upsets ebook lovers by ending support for old Kindle devices; Guardian, April 9, 2026

The Guardian; Amazon upsets ebook lovers by ending support for old Kindle devices


Up to 2m e-readers made before 2013 will no longer be able to download new titles

"Amazon is to stop supporting older Kindle models, leaving longtime ebook fans unable to access new content from the Kindle store.

Devices released during or before 2012 will no longer receive updates from 20 May, affecting owners of older Kindles, including the earliest models such as the Touch and some Fire tablets. It is thought that 2m e-readers could be affected.

Users will still be able to read ebooks they have downloaded, and their accounts and their Kindle library will remain accessible on mobile and desktop apps. Active users have been offered discounts to help “transition to newer devices”. Amazon said performing a factory reset on affected Kindles would make them unusable.

Disappointed users have vented their frustration online, including in comments on The Verge, accusing Amazon of “causing waste at a large scale” and saying their devices would be reduced to a paperweight despite still working."

Thursday, April 9, 2026

Claude Mythos Is Everyone’s Problem; The Atlantic, April 9, 2026

Matteo Wong, The Atlantic; Claude Mythos Is Everyone’s Problem

What happens when AI can hack everything?

"These companies can or could soon have the capability to launch major cyberattacks, conduct mass surveillance, influence military operations, cause huge swings in financial and labor markets, and reorient global supply chains. In theory, nothing governs these companies other than their own morals and their investors. They are developing the power to upend nations and economies. These are the AI superpowers."

Judge slams key OpenAI witness in copyright infringement case for ‘hazy recollections’; New York Daily News via Chicago Tribune, April 9, 2026

New York Daily News via Chicago Tribune; Judge slams key OpenAI witness in copyright infringement case for ‘hazy recollections’

"An unimpressed Manhattan judge ordered a corporate representative for OpenAI to undergo a second deposition after finding he failed to answer “even the simplest questions” the first time around about what the company has described as efforts to limit chatbots from stealing writers’ work.

Magistrate Judge Ona Wang, in a sharply worded 11-page order Tuesday, said OpenAI had been put on notice that the company’s purported expert on plagiarism, John Vincent “Vinnie” Monaco, was woefully underprepared for his January deposition, ordering him to submit to 3.5 more hours of questioning that took place Wednesday.

In granting a motion from the Chicago Tribune, New York Times and other news outlets suing OpenAI to compel the additional testimony, Wang deferred ruling on a request for sanctions, saying it would depend on how Monaco fared in his do-over. She said she may issue fines or recommend some of his answers be deemed as admissions.

OpenAI has previously said that Monaco has more knowledge than any of its engineers about Project Giraffe, an internal operation which the company claims is designed to develop ways to limit its large language models, or LLMs, from inadvertently regurgitating copyrighted works — the issue at the core of the ongoing Manhattan Federal Court lawsuit."

Another Court Rules Copyright Can’t Stop People From Reading and Speaking the Law; Electronic Frontier Foundation (EFF), April 8, 2026

MITCH STOLTZ, Electronic Frontier Foundation (EFF); Another Court Rules Copyright Can’t Stop People From Reading and Speaking the Law

"Another court has ruled that copyright can’t be used to keep our laws behind a paywall. The U.S. Court of Appeals for the Third Circuit upheld a lower court’s ruling that it is fair use to copy and disseminate building codes that have been incorporated into federal and state law, even though those codes are developed by private parties who claim copyright in them. The court followed the suggestions EFF and others presented in an amicus brief, and joined a growing list of courts that have placed public access to the law over private copyright holders’ desire for control."

Who owns ideas in the AI age?; Fortune, April 8, 2026

Fortune; Who owns ideas in the AI age?; David Shelley, CEO of Hachette’s U.K. and U.S. operations, on taking on Big Tech, defending copyright, and why the future of human creativity is at stake.

"Can you ever really own an idea?"

Wednesday, April 8, 2026

Grambling State Secures Trademark for its "G" Logo After Almost 30 Years; Ebony Magazine, April 7, 2026

STARR ROCQUE, Ebony Magazine; Grambling State Secures Trademark for its "G" Logo After Almost 30 Years

"Grambling State University secured a major win in court this month. The HBCU secured its iconic “G” logo under a US trademark. The historic logo has represented the school’s athletic excellence and pride since the 1970s. However, the process of securing the trademark, led by the Division of Administration and Business Affairs and counsel Kean Miller, had been ongoing since 1998. 

This new milestone follows a coordinated effort to address prior court refusals to grant the trademark while considering other nationally recognized “G” marks, such as those associated with the University of Georgia and the Green Bay Packers."

Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions; CNBC, April 8, 2026

Jonathan Vanian, CNBC; Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions

"Meta is debuting its first major artificial intelligence model since the costly hiring of Scale AI’s Alexandr Wang nine months ago, as the Facebook parent aims to carve out a niche in a market that’s being dominated by OpenAI, Anthropic and Google.

Dubbed Muse Spark and originally codenamed Avocado, the AI model announced Wednesday is the first from the company’s new Muse series developed by Meta Superintelligence Labs, the AI unit that Wang oversees. Wang joined Meta in June as part of the company’s $14.3 billion investment in Scale AI, where he was CEO."

Tuesday, April 7, 2026

The ‘Shy Girl’ Fiasco Shows Why Trust in Writers Is Plummeting; The New York Times, March 25, 2026

Andrea Bartz, The New York Times; The ‘Shy Girl’ Fiasco Shows Why Trust in Writers Is Plummeting

"But as generative artificial intelligence worms its way through the publishing industry, I’m bracing for a stomach-turning query: Did you actually write this?

The worry has been at the front of my mind since last week, when Hachette canceled the forthcoming U.S. publication of the horror novel “Shy Girl” after readers and journalists flagged prose that sounded like A.I. slop. (The author maintains that a freelance editor is to blame for any prose written by a large language model.)

Though I’m against the use of generative A.I. in creative writing, not everyone feels the same way. What does seem clear, however, is that most readers want disclosure when A.I. has been used, and they are quick to note the telltale rhythms and patterns of popular large language models.

But as A.I. models continue to improve, I’m concerned that it will become difficult to distinguish between something written by a human versus a bot. As more A.I.-generated writing is put out in the world, more readers will question whether the text they are poring over was penned by a human. We’re barreling toward a rapid erosion of trust between authors and readers, and the publishing industry is unprepared to deal with the consequences."

I told the internet I use AI. Boy, was it mad.; The Washington Post, April 5, 2026

The Washington Post; I told the internet I use AI. Boy, was it mad.

"...Many people think that using AI at any stage of the writing process amounts to outsourcing your thinking to a machine, and they reacted badly to a journalist suggesting some AI use might be all right.

Obviously, I disagree, but I recognize those folks are grappling with important questions, such as “What is writing for?” and “Which uses of AI serve those purposes, and which undermine them?”"

The New York Times drops freelance journalist who used AI to write book review; The Guardian, March 31, 2026

The Guardian; The New York Times drops freelance journalist who used AI to write book review

"The New York Times has cut ties with a freelance journalist after discovering he used artificial intelligence to help write a book review that echoed elements of a review of the same book in the Guardian.

It came after a New York Times reader flagged similarities between the paper’s January review of Watching Over Her by Jean-Baptiste Andrea, written by author and journalist Alex Preston, and an August review of the same book written by Christobel Kent in the Guardian.

The New York Times launched an investigation, during which Preston admitted that he had used AI to assist writing the review and did not spot the sections that were pulled from the Guardian before submitting it. In a statement to the Guardian on Tuesday, Preston said that he was “hugely embarrassed” and had “made a serious mistake”."

The Copyright Act in the age of AI; Politico, April 6, 2026

AARON MAK, Politico; The Copyright Act in the age of AI

"The Copyright Act is reaching a major milestone this year, yet some legal scholars aren’t sure how well it will hold up in the age of artificial intelligence.

Stanford University held a summit on Friday to celebrate (and fret about) the 1976 act, which is the foundation of modern copyright law, as the 50th anniversary of its signing approaches in October. Academics advanced a number of proposals to update and reinterpret American copyright law, though several also warned against stretching it too far. The consensus: AI will reshape copyright whether we like it or not, and it’s time to grapple with the implications."

Iowa can restrict LGBTQ+ books and topics at schools, appellate court rules; Associated Press via The Guardian, April 6, 2026

 Associated Press via The Guardian; Iowa can restrict LGBTQ+ books and topics at schools, appellate court rules

Ruling, vacating lower court’s temporary block, applies to classrooms and libraries up to sixth grade 

"Iowa can enforce a law that restricts teachers from talking about LGBTQ+ topics with students in kindergarten through the sixth grade and bans some books in libraries and classrooms, an appellate court said on Monday.

The decision for now vacates a lower court judge’s temporary blocks on the law.

The measure was first approved by Republican majorities in the Iowa house and senate and the Republican governor, Kim Reynolds, in 2023, which they said reinforced age-appropriate education in kindergarten through 12th grades. It has been a back-and-forth battle in the courts in the three years since lawsuits were filed by the Iowa State Education Association, major publishing houses and bestselling authors, as well as Iowa Safe Schools, an LGBTQ+ advocacy organization."

Monday, April 6, 2026

The Federal Government Is Rushing Toward AI. Our Reporting Offers Three Cautionary Tales.; ProPublica, April 6, 2026

Renee Dudley, ProPublica; The Federal Government Is Rushing Toward AI. Our Reporting Offers Three Cautionary Tales.

"This emerging technology has its grip on everyone: Home users, corporations and the federal government are all rushing to use it. President Donald Trump and his Cabinet say AI will transform the nation, making us more prosperous, efficient and secure — if only we can adopt it fast enough. 

But this messaging isn’t new. President Barack Obama’s administration used nearly identical language a decade and a half ago as the U.S. barreled into the technological revolution of cloud computing.


I’ve studied how the federal government has handled — and mishandled — this transition over the past two decades, and my reporting offers some cautionary tales and valuable lessons as policymakers encourage the use of AI and federal agencies adopt the technology."

US music publishers suing Anthropic make their case against AI 'fair use'; Reuters, March 24, 2026

Reuters; US music publishers suing Anthropic make their case against AI 'fair use'

"Music publishers Universal Music Group, Concord and ABKCO have asked a judge in California to rule that U.S. copyright law does not insulate artificial intelligence startup Anthropic from liability for copying their song lyrics to train its AI-powered chatbot Claude.

The publishers' request, filed on Monday in federal court in San Jose, tees up a critical question in the legal battle between creators and tech companies: Does the doctrine of "fair use" apply to the copying of millions of copyrighted works to train AI models?"

Brands Adopt ‘No AI’ Disclaimers to Stand Out Amid the Slop; Wall Street Journal, April 6, 2026

Patrick Coffee, Wall Street Journal; Brands Adopt ‘No AI’ Disclaimers to Stand Out Amid the Slop

Marketers move to get ahead of growing consumer skepticism by labeling content that doesn’t use AI

"As the AI-generated imagery and video colloquially called slop spreads across social media and video feeds, marketers are going out of their way to tell consumers they’re not to blame.

For some, it’s part of a message about authenticity that they want to send to their customers.

“We commit: No AI generated bodies or people,” promised a campaign last month from Aerie, the intimate apparel brand owned by American Eagle Outfitters."

Contentious House USPTO Oversight Hearing Centers on PTAB Reforms, Trump’s Political Influence; IP Watchdog, March 25, 2026

STEVE BRACHMANN, IP Watchdog; Contentious House USPTO Oversight Hearing Centers on PTAB Reforms, Trump’s Political Influence

"Today, the U.S. House of Representatives Judiciary Committee’s Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet conducted its first oversight hearing of the U.S. Patent and Trademark Office (USPTO) during the second Trump Administration. The harshest lines of questioning for USPTO Director John Squires during the hearing were reserved for the agency’s notice of proposed rulemaking (NPRM) to reform rules of practice at the Patent Trial and Appeal Board (PTAB) as well as President Trump’s political influence at the agency. During the hearing, Squires also confirmed that the agency’s Patent Public Advisory Committee (PPAC) would soon be revived, following an offer to join PPAC extended last night to an undisclosed independent inventor...

The full House Judiciary Committee’s Ranking Member, Jamie Raskin (D-MD), also hammered Director Squires on President Trump’s influence, raising questions throughout the hearing about the USPTO’s role in filing two trademark applications for “Board of Peace” on behalf of President Trump...

Rep. Zoe Lofgren’s (D-CA) concerns over Squires’ PTAB reforms involved not just the centralization of decision-making authority over IPR proceedings but the lack of explanation for decisions to not institute or de-institute stemming from what Lofgren called “barebones summary denials.” Squires responded that PTAB judges work from a record of more than 600 written decisions, and that controversial reforms like settled expectations have cut both ways in favor of patent owners and challengers. However, Lofgren commented that the 64% of IPRs discretionarily denied under Director Squires was due to the adoption of discretionary considerations and a centralized Director review process acting as barriers that Congress never envisioned."

Anthropic Suddenly Cares Intensely About Intellectual Property After Realizing With Horror That It Accidentally Leaked Claude’s Source Code; Futurism, April 3, 2026

Futurism; Anthropic Suddenly Cares Intensely About Intellectual Property After Realizing With Horror That It Accidentally Leaked Claude’s Source Code

"As the Wall Street Journal reports, Anthropic is scrambling to contain a leak of its Claude Code AI model’s source code by issuing a copyright takedown request for more than 8,000 copies of it — a gallingly ironic stance for the company to be taking, considering how it trained its models in the first place.

The leak isn’t considered to be an outright disaster; no customer data was exposed, Anthropic says, nor were the internal mathematical “weights” that determine how the AI “learns” and which distinguish it from other models. But it did expose the techniques its engineers used to get its AI model to act as an autonomous agent, a form of digital infrastructure coders call a harness, and other tricks for making the AI operate as seamlessly as it does.

Hence Anthropic’s copyright takedown request, which targets the thousands of copies that were shared on GitHub. It later narrowed its request from 8,000 copies to 96 copies, according to the WSJ reporting, claiming that the initial one covered more accounts than intended.

It’s certainly within Anthropic’s right to issue the takedown request, but the hypocrisy of Anthropic running to the law to protect its intellectual property is plain to see, especially for a company that’s relentlessly positioned itself as the ethical adult in the room."

Sunday, April 5, 2026

Claude's Constitution; Anthropic, January 21, 2026

Anthropic; Claude's Constitution

Our vision for Claude's character

"Claude’s constitution is a detailed description of Anthropic’s intentions for Claude’s values and behavior. It plays a crucial role in our training process, and its content directly shapes Claude’s behavior. It’s also the final authority on our vision for Claude, and our aim is for all of our other guidance and training to be consistent with it.

Training models is a difficult task, and Claude’s behavior might not always reflect the constitution’s ideals. We will be open—for example, in our system cards—about the ways in which Claude’s behavior comes apart from our intentions. But we think transparency about those intentions is important regardless.

The document is written with Claude as its primary audience, so it might read differently than you’d expect. For example, it’s optimized for precision over accessibility, and it covers various topics that may be of less interest to human readers. We also discuss Claude in terms normally reserved for humans (e.g., “virtue,” “wisdom”). We do this because we expect Claude’s reasoning to draw on human concepts by default, given the role of human text in Claude’s training; and we think encouraging Claude to embrace certain human-like qualities may be actively desirable.

This constitution is written for our mainline, general-access Claude models. We have some models built for specialized uses that don’t fully fit this constitution; as we continue to develop products for specialized use cases, we will continue to evaluate how to best ensure our models meet the core objectives outlined in this constitution.

For a summary of the constitution, and for more discussion of how we’re thinking about it, see our blog post “Claude’s new constitution.”

Powerful AI models will be a new kind of force in the world, and people creating them have a chance to help them embody the best in humanity. We hope this constitution is a step in that direction.

We’re releasing Claude’s constitution in full under a Creative Commons CC0 1.0 Deed, meaning it can be freely used by anyone for any purpose without asking for permission.

Many people at Anthropic and beyond contributed to the creation of this document, as did several Claude models. Amanda Askell is the primary author and wrote the majority of the text. Joe Carlsmith wrote significant parts of many sections and played a core role in revising the text. Chris Olah, Jared Kaplan, and Holden Karnofsky made significant contributions to its content and development. More detailed contribution statement and acknowledgments below.

The preface and the acknowledgements are not part of the official constitution."

The Catholic Priest Who Helped Write Anthropic’s A.I. Ethics Code; Observer, March 31, 2026

Observer; The Catholic Priest Who Helped Write Anthropic’s A.I. Ethics Code

"Father Brendan McGuire is writing a novel about a disenchanted monk and his A.I. companion. He’s doing it with Claude. That detail—a Catholic priest using Anthropic’s chatbot to explore questions of faith and artificial consciousness—tells you something about where Silicon Valley’s moral reckoning has arrived. McGuire, 60, leads St. Simon Catholic Parish in Los Altos, Calif., a congregation that counts some of the Valley’s A.I. researchers among its members. Earlier this year, he and a group of faith leaders helped Anthropic shape the Claude Constitution, the set of guiding principles governing how its A.I. behaves.

He is not, in other words, an outside critic. He is something more complicated: a true believer in both God and technology, trying to hold them in the same hand. “I left the tech industry, but it never really left me,” McGuire told Observer...

McGuire wasn’t Anthropic’s only religious collaborator. Bishop Paul Tighe of the Vatican’s Dicastery for Culture and Education and Brian Patrick Green, a technology ethics director at Santa Clara University, also reviewed the Claude Constitution. Green and other Catholic scholars recently filed a federal court brief supporting Anthropic in its lawsuit against the U.S. government, which challenges the company’s effective blacklisting by the Pentagon after it refused to allow its A.I. systems to be used for autonomous warfare or domestic surveillance. The brief praised those ethical limits as “minimal standards of ethical conduct for technical progress.”...

Anthropic says its engagement with religious voices—part of a broader effort to engage a wide variety of communities to keep pace with technological acceleration—is only a beginning. The company plans to expand outreach beyond Catholic institutions to other religious leaders going forward."

What Teens Are Doing With Those Role-Playing Chatbots; The New York Times, April 4, 2026

The New York Times; What Teens Are Doing With Those Role-Playing Chatbots

"There are a growing number of companies offering social chatbots that can act like friends, enemies, lovers, adventurous companions, or the manifestation of a fictional or real person you’ve always wanted to meet. You can pick A.I. Elon Musk’s brain or spar with A.I. Draco Malfoy. The myriad characters, often created by fellow users, offer drama, romance, therapy and LOLs.

Apps that feature role-playing chatbots are used by tens of millions of people, with engagement times that rival or surpass those of social media behemoths such as TikTok, according to market intelligence firm Sensor Tower. The majority of teens surveyed by Pew use A.I. chatbots, with one out of 11 saying they had used Character.AI.

“If you think your child is not talking to chatbot companions, you’re probably wrong,” said Mitch Prinstein, co-director of the Winston Center on Technology and Brain Development at U.N.C. Chapel Hill.

Chatbots are surging in popularity as society is still grappling with how social media has affected young people; a wave of lawsuits is moving through the courts seeking damages from companies that plaintiffs say have deliberately created addictive products. (A jury in California recently found that Meta and YouTube were liable for $6 million in damages to one young woman.) And now parents and caregivers have a new attention-absorbing technology to reckon with.

At the beginning of last year, a high school teacher in Chicago told me that some of her students were dating chatbots, and she worried that they were having their first erotic experiences with them. I wanted to find out what teens had to say about that, so I joined communities devoted to social chatbot apps on the online messaging forum Discord. I introduced myself as a reporter and “an old,” and explained that I was interested in talking to young people who used the services regularly."

Saturday, April 4, 2026

Small Kansas town gets its first library after mysterious donation; CBS, April 4, 2026

 [Video] CBS; Small Kansas town gets its first library after mysterious donation

"Frontenac, Kansas, is a community of 3,000 residents. When its city administrator received a mysterious $4.6 million donation from a couple whose dying wish was for a library to be built, the town sprang into action to solidify its legacy."

A folk musician became a target for AI fakes and a copyright troll; The Verge, April 4, 2026

Terrence O'Brien, The Verge; A folk musician became a target for AI fakes and a copyright troll

"The worlds of generative AI, music distribution, and copyright are complex with multiple points of failure and opportunities for abuse."

Mysterious donation gives small Kansas town its first public library; CBS News, April 4, 2026

CBS News; Mysterious donation gives small Kansas town its first public library

[Kip Currier: Incredibly inspiring story underscoring the positive impacts that one person -- or in this case two people, a librarian and her husband -- can have on the world.

Keep this story in mind and pass it along to anyone who questions the value of and need for libraries in our communities.]

"For over 140 years, the small town of Frontenac, Kansas, had almost everything its 3,000 residents needed – except a public library.

All that changed when city administrator John Zafuta got an unexpected phone call in 2019. 

"It was a surprise," Zafuta said. "An attorney told us that we were the sole beneficiary of the Tavella family trust."

Richard and Jeanette Tavella had both grown up in Frontenac before moving north to Kansas City, where Jeanette worked as a librarian until her death in 2019. Richard had died the year before. The couple wanted to use their parting gift to write a new legacy in their hometown. The town would receive $4.6 million from their estate, and use it to build Frontenac's first public library...

Building a space for community 

Seth Nutt, a teacher, historian and Frontenac native, was brought on as the library's director...

Today, its shelves hold over 17,000 titles. The library also hosts activities catered to different age groups, including story time for children, cooking classes for teens and book clubs for adults. It's also home to the McKay Street Coffeehouse and the Heritage Hall Museum. The museum is the first dedicated exclusively to Frontenac's history, and is also overseen by Nutt. 

"You hear people debate, do you need a library, do you not? And once you see it, and once you see how involved the community is with it, we've definitely needed it," said fire lieutenant and building inspector Justin Ziesenis, who had brought his six-year-old daughter to the library to pick out books. 

"I think it has made a difference in this community. It's drawing more people here, and more students are getting involved with reading books," said eighth-grader Rylinn Girth-Barnow, who had come to the library for a cooking class."

Copyright Head Chides Supreme Court’s ‘Shocking’ Cox Opinion; Bloomberg Law, April 3, 2026

Bloomberg Law; Copyright Head Chides Supreme Court’s ‘Shocking’ Cox Opinion

"US Copyright Office Director Shira Perlmutter said the Supreme Court put “little thought” into the implications of its recent decision absolving an internet service provider for not shuttering accounts of users who repeatedly pirated music.

"The throwing out of many, many decades of copyright law on secondary liability so quickly and with so little analysis was shocking," Perlmutter said Friday at a Stanford Law School event."

AI agents are scrambling power users' brains; Axios, April 4, 2026

Megan Morrone, Axios; AI agents are scrambling power users' brains

"A growing number of software developers say AI coding tools are frying their brains. 

The big picture: The most popular agentic AI systems have triggered something that looks a lot like addiction among some of tech's highest performers."

‘Occasionally a picture can change the course of history’: 33 scandalous photos that shocked the world; The Guardian, April 4, 2026

The Guardian; ‘Occasionally a picture can change the course of history’: 33 scandalous photos that shocked the world

"The Bullingdon Club photograph, 1987

By Rona Marsden

In 2007, the Mail on Sunday published a photograph taken 20 years earlier: a group portrait of the Bullingdon Club’s class of 1987. Ten young members appear in the bespoke uniform of the exclusive all-male “dining club” at the University of Oxford. Among them are two future luminaries of the Conservative party: David Cameron (standing, second left) and Boris Johnson (seated on the right).

The club’s reputation as a drinking society for badly behaved posh boys – in 1987, a plant pot was thrown out of a window during a Bullingdon party – made the photo a source of embarrassment for Cameron, then leader of the opposition. “We do things when we are young that we deeply regret,” he said in 2009.

Soon after, the company that holds the copyright for the image withdrew permission to republish it. This painting by Oxford-based artist Rona Marsden was commissioned by BBC Newsnight as an alternative. The image remains a striking illustration of the elitism of Britain’s ruling class, and the vast inequality within the country. GS"