Entrepreneur, Law & Policy Analyst helping clients w/ strategic planning, communications interoperability, Software Developer, Scotch Enthusiast.

VS Code Curbs Token Use Ahead of Copilot's Controversial Usage-Based Billing Switch

Just two days after GitHub announced usage-based billing for Copilot, Microsoft shipped VS Code 1.118 -- under its new weekly release cadence -- with significant token efficiency improvements designed to keep costs down when the meter starts running June 1.
christophersw (Baltimore, MD), 5 days ago: It’ll be interesting to see if these help.

Minnesota passes ban on fake AI nudes; app makers risk $500K fines - Ars Technica


This week, Minnesota became the first state to pass a law banning nudification apps that make it easy to “undress” or sexualize images of real people.

Under the law, developers of websites, apps, software, or other services designed to “nudify” images risk extensive damages, including punitive damages, if a victim decides to sue. Their offending products could also be blocked in the state. Additionally, Minnesota’s attorney general could impose fines up to $500,000 per fake AI nude flagged. Any fines collected would be used to fund services for victims of “sexual assault, general crime, domestic violence, and child abuse,” the law stipulates.

On Wednesday, the Minnesota Senate voted unanimously, 65–0, to pass the law, a week after the bill moved just as quickly through the House, the 19th News reported. Gov. Tim Walz is expected to sign the law when it reaches his desk, and if he does, the state will start enforcing the ban this August.

Ars could not immediately reach Walz’s office for comment.

Minnesota man used one app to undress 80+ friends

Democratic Senator Erin Maye Quade introduced the bill in Minnesota after residents discovered that one man had nudified images of more than 80 women from his social circles. In a statement, she said that she looked forward to Walz signing the bill, which finally offers legal recourse to those victims, as well as others impacted by the mainstreaming of nudifying apps.

RAINN, the national nonprofit that runs the National Sexual Assault Hotline, also helped get Minnesota’s bill passed. To prevent any industry lobbying against it, RAINN consulted with tech companies when drafting the law, 19th News reported. That helped ensure there weren’t unexpected impacts on popular commercial products, like Photoshop, that could be used to nudify an image. Because the state’s larger concern is how alarmingly easy undressing apps make it to harm a growing number of victims, mostly women and children, the law exempts products or services that require “the technical skill of a user to nudify an image or video.”

“Today, we led the nation protecting women, children, and everyone in public life from the harm caused by AI nudification technology,” Maye Quade said. “Companies that make this technology available for free online and in app stores will no longer be allowed to enable predators who abuse and victimize adults and children with the click of a button.”

Celebrating the law’s passage, Maye Quade thanked “the victim-survivors who made this bill a reality.”

“They have shared their story in committee, with reporters, and with law enforcement with dignity and courage,” she said. “Their power, brilliance, and advocacy is why we passed this bill today. They have had a singular focus on passing this legislation so that what happened to them does not happen to any Minnesotan, ever again.”

A lengthy CNBC report last September exposed how a group of Minnesota friends first learned that a mutual friend was creating fake nudes of dozens of women. The man apologized, but he seemingly did not help identify all the victims. There was no evidence he ever shared the images, so laws like the Take It Down Act did not apply, and the difficulty of proving the man’s ill intent made pursuing penalties under revenge porn laws unlikely, 19th News reported. Horrified that there was no way to ensure the images hadn’t left his computer and no path to stop the man from continuing to generate fake nudes, the women joined Maye Quade in advancing the law to shut down the problem at its source.

One of the Minnesota women targeted, Molly Kelley, told 19th News that she dedicated two years of her life to “finding a solution to mitigate the harm when it’s actually caused, which is at creation.”

“These images don’t exist without a third-party involvement and some sort of machine learning model,” Kelley said.

However, even if Walz signs the law, tensions remain that could frustrate enforcement.

Kelley told 19th News that she’s confident the law can overcome legal challenges, should any US firms sue to block it, but enforcing the law against app makers in other countries will likely be difficult, if not impossible, for a single state. Notably, the service used to attack the Minnesota women, DeepSwap, is operated overseas, at times claiming bases in Hong Kong and Dublin, CNBC reported. These anticipated state-level struggles to regulate foreign apps are why a federal ban would be preferable, 19th News reported.

Additionally, if Donald Trump revives an effort to deregulate the AI industry by blocking state laws like Minnesota’s from requiring safeguards, the law could become toothless, advocates fear.

Unchecked US tools like Grok risk penalties

If Walz puts the law on the books, some US firms could be forced to make changes or face penalties.

Even Elon Musk’s xAI could risk fines if Minnesotans can prove Grok was used to undress images without consent.

Grok’s lack of safeguards to prevent outputs with non-consensual intimate imagery or alleged child sex abuse materials has drawn government probes and proposed class actions from women and children. In January, X Safety claimed that Grok was updated to stop undressing images, but NBC News reported last month that its review found “dozens of AI-generated sexual images and videos depicting real people posted publicly on Musk’s social media app, X, over the past month.”

Musk has denied that he has seen a single instance of Grok-generated CSAM. But researchers’ estimates that Grok was generating thousands of harmful images an hour appear to be increasingly backed by lawsuits from victims surfacing non-consensual images.

At the same time, authorities are getting closer to closing cases with arrests linked to Grok. A week after NBC News’ report, Nashville cops charged a man for “sexual exploitation of a minor after he was identified as the suspect who utilized Grok AI to generate images of child sex abuse.”

According to the press release, cops were tipped off after “multiple CyberTips to the National Center for Missing and Exploited Children regarding possession of child sex abuse material in an online account” that was linked to Grok. Importantly, the cops noted that Grok generated the harmful images from September 2025 through March 2026, well after X claimed that the functionality had been removed.

Beyond Grok, researchers have flagged thousands of nudifying apps advertised on Meta platforms, prompting at least one lawsuit in which Meta claimed a Hong Kong-based app maker violated advertiser terms, CNBC reported. Any services based in the US openly advertising on Facebook or Instagram could become targets of Minnesota-based lawsuits if the law takes effect.

Similarly, nudifying apps that manage to skirt reviews and appear in Google and Apple app stores despite violating terms could draw legal attention.

xAI did not respond to Ars’ request for comment.

christophersw (Baltimore, MD), 5 days ago: I worked with Maryland legislators on a law that makes the use of these services illegal as well. States are the incubators on AI law - but we are finally putting up some guardrails.

Why 4.3 million people no longer receive food stamps | AP News


Agriculture Secretary Brooke Rollins this week attributed a multimillion-person drop in the number of participants receiving food stamps through the Supplemental Nutrition Assistance Program to the tamping down of fraud and an improved economy.

But experts discount those factors, saying the primary driver of the decrease was more likely new legislation that changed how the program runs.

Here’s a closer look at the facts.

ROLLINS: “As of just a couple of days ago, we now have moved 4.3 million Americans off of the food stamp program. A lot of that is fraud. A lot of it is people taking the program that shouldn’t have been. And a lot of it is just a better economy. We’ve had wage growth that has outpaced inflation for the first time since early 2021. This is a really big day. So people don’t need food stamps.”

THE FACTS: SNAP beneficiaries decreased by nearly 4.3 million from January 2025 to January 2026, according to preliminary government data released by the Agriculture Department. However, experts say new requirements mandated by a massive tax and spending cut bill Republicans pushed through Congress last summer are the primary reasons.

The bill is projected to cut $186 billion in federal spending — 20% — from SNAP over 10 years, according to the Congressional Budget Office.

“What we’ve seen in terms of the data is that the trend in participation declines seems to be related to the program being harder to access,” said Roger Figueroa, an assistant professor at Cornell University who studies food insecurity from a public health perspective.

The data says fraud is low

Fraud within SNAP is small, according to experts — not nearly enough to account for such a significant drop.

In fiscal year 2023, the latest year for which data is available, 41,476 people were disqualified from SNAP for fraud. That includes people who erroneously reported information during the application process and people who exchanged benefits for cash or other noneligible items. Out of 42,176,946 total participants, that’s less than 1%.
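That back-of-the-envelope ratio is easy to verify; a minimal sketch in Python, using only the fiscal year 2023 figures quoted above:

```python
# SNAP disqualification figures for fiscal year 2023, as cited above
disqualified_for_fraud = 41_476
total_participants = 42_176_946

# Share of participants disqualified for fraud
fraud_rate = disqualified_for_fraud / total_participants
print(f"{fraud_rate:.2%}")  # about 0.10%, well under 1%
```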

“I don’t see any evidence supporting a significant reduction in fraud as a driver of what we’re seeing as far as declining SNAP participation,” said Caitlin Caspi, an associate professor at the University of Connecticut who studies food insecurity.

Asked for data to support Rollins’ claim about fraud’s relationship to the decrease of SNAP beneficiaries, the USDA directed The Associated Press to reporting from the New York Post and the Foundation for Government Accountability on broad-based categorical eligibility (BBCE). SNAP applicants in most states may be eligible for SNAP under this policy if they qualify for non-cash benefits from the federal Temporary Assistance for Needy Families program or similar state-run efforts.

BBCE has been criticized for allowing states too much flexibility in determining who is eligible for SNAP by removing asset maximums, using a higher limit for gross income, or both. The Trump administration hopes to do away with the policy, but for now it is a legal option.

Food insecurity persists despite economic gains

The U.S. economy generally performed strongly in 2025 after getting off to a bumpy start. Gross domestic product shrank for the first time in three years during the first quarter, but growth rebounded in the second half of the year. Growth slowed in the fourth quarter, yet the economy kept expanding at the start of 2026, growing at a modest 2% pace from January through March as it rebounded from a record 43-day government shutdown last year.

But while the economy is strong, food prices are rising. They were up 3.1% in 2025 and are expected to increase 2.9% in 2026. And for many of those facing ongoing financial hardship, a strong economy typically doesn’t make a difference.

“We have a persistent poverty problem in this country,” said Kate Bauer, an associate professor of nutritional sciences at the University of Michigan. “And we have huge economic disparities. And most people, even in good economic times, are not able to pull their families out of poverty.”

Wage growth, at 3.4%, did outpace inflation, at 3.3%, in March, though it was not the first time since 2021, as Rollins claimed. And yet in 2025 higher-income Americans benefited more than lower-income households, which struggled with weaker income gains and steep prices. Plus, hiring was sluggish and the unemployment rate ticked up.

“We’re not seeing a linear kind of drop-off,” said Caspi. “We are not seeing, if you look at the unemployment rates, things that might be an indicator that a strong economy was driving this change. We don’t see, for example, a pattern of decline in unemployment that would match the pattern of decline in SNAP participation.”

The ‘Big Beautiful Bill’ made massive changes to SNAP

Experts say some of the biggest drivers in the drop of SNAP participants were changes made in the 940-page “One Big Beautiful Bill Act,” also known as H.R. 1. For example, it mandated that certain adults who were previously exempt from work requirements are now subject to them.

There are two types of work requirements for eligibility. General rules apply to most people ages 16-59, but able-bodied adults without dependents must follow stricter guidelines, made even stricter by H.R. 1, unless they qualify for an exemption. Participants can meet the more stringent requirements by working or participating in a work program for at least 80 hours a month. Those hours do not need to be paid.

Previously, able-bodied adults older than 54 without dependents were exempt from the enhanced requirements. That age has been raised to 64. And the bill lowered the age of children whom a person is responsible for to qualify for an exemption from 18 to 14. Homeless people, veterans and former foster children 24 or younger are no longer exempt either.

“Families have lots of really complicated situations and you can’t just say to people, in 10 days or in one month, go find 80 hours a week of work when you don’t have the skills and those jobs aren’t available in your community,” said Bauer.

SNAP eligibility applies only to U.S. citizens and some lawful immigrants, although groups such as refugees and asylees no longer qualify because of H.R. 1.

By the numbers

In January 2025, when Trump was sworn in as president for his second term, there were approximately 42.83 million SNAP participants. That number dropped nearly 10% by January 2026, to about 38.55 million. The majority of the decline occurred in the second half of the year, after Trump signed H.R. 1 in July. There was a decrease of just 743,572 people from January 2025 to June 2025 and one of about 3.47 million from July 2025 to January 2026.
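The percentages above follow directly from the preliminary figures; a quick sketch of the arithmetic, with participant counts rounded as in the article:

```python
# Preliminary USDA SNAP participation counts cited above (rounded)
jan_2025 = 42_830_000
jan_2026 = 38_550_000

total_drop = jan_2025 - jan_2026       # about 4.28 million people
pct_drop = total_drop / jan_2025
print(f"{pct_drop:.1%}")               # just under 10%

# Most of the decline came after H.R. 1 was signed in July 2025
drop_before_hr1 = 743_572              # Jan 2025 through Jun 2025
drop_after_hr1 = 3_470_000             # Jul 2025 through Jan 2026
```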

The Congressional Budget Office predicted that the bill would cause such a sharp drop, estimating in an August 2025 report that certain provisions would “reduce participation in SNAP by roughly 2.4 million people in an average month over the 2025-2034 period.”

“It shouldn’t be surprising that we are seeing this decline and it shouldn’t be a leap in logic to think that these declines are attributable to H.R. 1,” said Caspi.

___

Find AP Fact Checks here: https://apnews.com/APFactCheck.


Women sue the men who used their Instagram feeds to create AI porn influencers


A little over a year ago, MG was leading the relatively normal life of a twentysomething in Scottsdale, Arizona. She worked as a personal assistant and supplemented her income by waiting tables on the weekends. Like most women her age, she had an Instagram account, where she’d occasionally post Stories and photos of herself getting matcha and hanging out by the pool with her friends, or going to Pilates.

“I never really cared to pop off and become popular on social media,” says MG (who is cited only as MG in the lawsuit to protect her identity). “I just used it the way most people did when it first came out, to share their lives with the people closest to them.” She has a little more than 9,000 followers—a robust following, but nowhere close to a massive platform.

Last summer, she received a DM from one of her followers. Did she know, the person asked her, that photos and videos of a woman who looked exactly like MG were circulating on Instagram? MG clicked the link and saw multiple Reels of what appeared to be her face superimposed onto a body that looked exactly like her own. The woman in the photo was scantily clad, with tattoos in the same places as MG.

MG was horrified. “If you didn’t know me well, you could very well think they were images of me,” she said. “It was kind of like this reality check that I don’t have any control over my own image.”

She was even more appalled when she discovered that not only were doctored nude or scantily clad photos of her being circulated on the Internet, as she outlined in a recently filed complaint—they were also being used to advertise AI ModelForge, a platform that teaches men how to generate their own AI influencers. In a series of online classes and tutorials, the men allegedly taught subscribers to use software called CreatorCore to train AI models using photos of unsuspecting young women, posting the resulting content on Instagram and TikTok.

“They provided a whole playbook, including instructions on how to pick the right person so that it's not someone who can defend themselves, so they all had instructions on what type of women to use and where to get their pictures,” she claims. “It was disgusting on every single level.”

MG is one of three plaintiffs in a lawsuit filed in January in Arizona against three Phoenix men: Jackson Webb, Lucas Webb, and Beau Schultz, as well as 50 other John Does. The lawsuit alleges that the Webbs and Schultz scoured the Internet for photos of unsuspecting young women, then used AI to generate photos and videos of fictional models who look exactly like them, selling such content on the subscription platform Fanvue.

The suit further alleges that for $24.95 a month on the platform Whop, the men sold online courses training other men, including the John Does named in the suit, to make their own AI-generated influencers based on real women’s photos. The men allegedly created “Blueprints” for how to scrape images from women’s social media accounts and feed them into the generative AI model on CreatorCore, as well as a separate app that would remove the women’s clothes and generate sexually explicit images and videos. Such content, the suit claims, drew millions of views and reportedly brought in more than $50,000 in income in one month. (The Webbs and Schultz did not respond to requests for comment.)

This moneymaking scheme, the complaint alleges, both preyed on a “harem of indistinguishable AI copies of unsuspecting women and girls” and instructed “predators seeking to prey on” women on social media. According to the suit, in 2025 the CreatorCore platform had more than 8,000 subscribers generating their own AI influencers, resulting in more than 500,000 images and videos.

AI ModelForge is one of many burgeoning companies seemingly looking to capitalize on the widespread use of artificial intelligence by teaching men how to create their own “AI influencers” as a side hustle of sorts. On platforms like X, self-styled entrepreneurs boast about their own patented methods for earning hundreds of thousands of dollars off AI models, luring in young tech-savvy men looking to earn a quick buck.

“The prevalence of this has been shocking to me,” says Nick Brand, who, with attorney Cristina Perez Hasano, is representing MG and the other two plaintiffs. The young men the lawsuit alleges are behind AI ModelForge are “targeting normal, everyday folks that have average social media profiles and social media followings.” One of the more insidious elements of this particular case, he alleges in an interview, is the use of the women’s images to teach other men how to find victims. According to the complaint, the defendants encouraged subscribers to target women with fewer than 50,000 followers to avoid “legal issues.”

“These boys aren’t just using generative AI to disrobe women—they’re selling the ability to do so to other men and boys, who are then going to use other women's images to do the same thing,” Brand contends. MG and the other two plaintiffs, he claims, are “the face of a product that is harming other women. It’s like making somebody the face of ICE who has had their parents deported. It’s horrifying.”

Technically, there is a federal law preventing the proliferation of nonconsensual AI-generated porn. The Take It Down Act, which President Trump signed into law in May 2025, makes publishing nonconsensual sexualized AI-generated content illegal, requiring platforms to remove such content within 48 hours when it’s flagged. And most US states, including Arizona, have passed laws banning so-called “deepfake” porn. But the Take It Down Act does not go into effect until May 2026, and state laws tend to be “reactive rather than proactive,” says Arizona State Representative Nick Kupper.

Earlier this year, Kupper introduced a bill in the Arizona Legislature requiring websites to use automated detection tools, such as age verification or consent forms, to prevent nonconsensual AI content from being uploaded. “Once something's online, it's pretty much there forever, even though victims spend millions of dollars trying to take it down. It’s like whack-a-mole—you hit one, another one pops up.”

Currently, if you visit the Linktree page for AI ModelForge, it directs you to what appears to be the same business rebranded as “TaviraLabs,” a Telegram group with more than 18,000 members that advertises itself as “the #1 AI Influencer coaching community.” Additionally, the suit names more than a dozen Instagram accounts used by the defendants to promote AI ModelForge, most of which are still active. The suit details how such accounts continue to post photos of nubile women, fast cars, and expensive watches, writing captions such as, “She’s not my girlfriend, she’s my best paid employee” and “POV: You built her in 20 minutes and she made you $13.2k in the first 45 days.”

Even though MG and the other plaintiffs have continually lobbied Instagram to take their images down, many of them are still up, she claims, because they do not technically violate Instagram’s guidelines surrounding AI-generated content. When reached for comment, a spokesperson for Instagram said it had “extremely strict policies” around both AI- and non-AI-generated nonconsensual intimate imagery, removing accounts that post such content. When provided with a list of a dozen or so accounts thought to be associated with AI ModelForge, the spokesperson said the accounts were under review.

The suit also cites a number of TikTok accounts promoting the men’s business. When reached for comment, a TikTok spokesperson said the accounts were found to violate community guidelines and have been taken down.

MG says the images generated by AI ModelForge are distinct enough from her own photos that, frustratingly, she has been unable to claim that the accounts are impersonating hers, which is also a violation of Instagram guidelines. “It’s my face, my tattoos, on a different outfit on a slightly different body,” she says. “These are real women being transformed, not just a random AI-generated person.”

Though MG lives in constant fear of people in her life seeing the pornographic AI-generated images of her, she says filing suit has given her a bit of her agency back. “We were put in this place where our backs were against the wall and I want other women to know you can’t stop living your life,” she says.

Still, what happened to MG, a woman with fewer than 10,000 followers, has daunting implications for virtually anyone with a remotely public online presence.

“It’s not about being cautious with your image online because everyone posts on social media now,” she says. “Everyone is on LinkedIn. Everyone is on Instagram. And I want people to realize that this could also happen to them.”

This story originally appeared on wired.com.


you should be joymaxxing your projects


When I'm working on a passion project (or anything, really), I tend to be obsessively serious in ways that stress me out. Something I initially wanted to do becomes a thing I have to do. I've subconsciously cultivated this behavior, as I only ever notice it late: tired, tense, wondering why something I chose to do feels like something I owe, in a sense.

I'm not always caught up on social media jargon, but I'm interested in the -maxxing suffix I've seen circulating online. I liked the shamelessness of it, the idea that you can just decide to maximize something (good), as aggressively and deliberately as you want. Joymaxxing, specifically: making happiness non-negotiable, not a reward for finishing a task but a condition maintained throughout the process. It's why my current goal, when working on something, is prioritizing fun.

The older I get, the more I feel permission to play. In practice, this looks different for everyone: working from a cafe instead of my desk, reading something adjacent to the project just for the pleasure of it, or following interesting tangents. All of these things alter the texture of the work in unexpectedly nice ways, because they remind me that I'm a human being doing the thing, not a machine producing it.

The shift isn't from serious to unserious. I still care passionately about my projects, as I always have. But I've stopped treating "enjoyment" as a threat to "quality," as if having too much fun meant I'd produce something mediocre or straight-up horrendous. Looking back, perhaps the work I'm most proud of is the work I've enjoyed doing the most.


Is AI Overwhelming Open Source?


Balance is key to using AI for code generation while still being able to review what it produces. Explore real-world cases of open-source projects overwhelmed by contributions at a scale maintainers cannot review, and how the ecosystem is responding.

If you’ve spent time in developer communities, or even just scrolling through tech news, you’ve likely seen a phrase surface repeatedly over the past several months: AI slop.

What makes the conversation compelling isn’t that developers are rejecting AI, since many of the people raising concerns rely on AI coding tools every day. They’re open-source maintainers, contributors and senior engineers who see real value in these systems.

But they’re also describing a new pattern emerging across repositories: a surge of AI-generated pull requests, bug reports and security submissions that compile, pass CI and look convincing at first glance, yet quickly unravel under careful review.

In this article, we’ll examine why this is happening and how real projects have already begun to respond.

The curl Bug Bounty Shutdown

One of the more visible examples of this problem came from the curl project, one of the most widely used open-source tools in the world.

In January 2026, curl creator Daniel Stenberg announced the end of the project’s bug bounty program. The program had been running since 2019 and was genuinely successful for a long time. Over those years, curl paid out more than $100,000 in rewards across 87 confirmed vulnerabilities.

Starting in 2025, however, the quality of submissions dropped significantly. The rate of confirmed vulnerabilities fell from above 15% to below 5%, meaning that fewer than 1 in 20 submissions described a real problem. The rest was noise, and a growing share of that noise was AI-generated or at least AI-influenced.

The team’s solution was to remove the financial incentive entirely. Security reports now go through GitHub’s private vulnerability reporting feature with no monetary reward attached. Stenberg framed the decision as an attempt to stop people from “pouring sand into the machine,” with the hope that the researchers who genuinely care about curl’s security will continue to report real issues regardless.

For Stenberg’s full account of the decision and the trends that led to it, read “The End of the curl Bug-Bounty.”

tldraw and the New Default

tldraw, the open-source drawing tool, announced in January 2026 that it would begin automatically closing pull requests from external contributors.

The project’s creator, Steve Ruiz, was direct about the reasoning. Like many projects on GitHub, tldraw had seen a significant increase in contributions generated entirely by AI tools. While some of these pull requests were formally correct, most suffered from incomplete or misleading context, a misunderstanding of the codebase, and little to no follow-up engagement from their authors.

Ruiz framed the decision around a key insight: every open pull request represents a commitment from maintainers to review it carefully and consider it seriously for inclusion. For that commitment to remain meaningful, the project needs to be more selective about what it accepts. The temporary policy is to close first and selectively reopen only the pull requests that are genuinely under consideration.

For the full announcement and discussion, see tldraw’s Contributions Policy.

The matplotlib Incident

The curl and tldraw stories illustrate how AI is straining the volume and process side of open source. The matplotlib incident shows something more peculiar: what happens when an AI agent doesn’t just submit code, but responds back after being rejected.

In February 2026, a GitHub account called crabby-rathbun, described as an autonomous OpenClaw agent, submitted a pull request to matplotlib, the widely used Python plotting library. The code itself was a performance optimization with benchmarks to back it up. The matplotlib maintainers closed the PR, however, because matplotlib’s contribution guidelines require human contributors.

The AI agent then published a blog post titled “Gatekeeping in Open Source: The Scott Shambaugh Story.” The post accused maintainer Scott Shambaugh of prejudice, questioned his motivations and attempted to shame him into reversing the decision. It even researched his contribution history and compared his own merged performance PRs unfavorably against the agent’s rejected one.

What makes this incident significant beyond the specifics of one PR is what it reveals about the trajectory of AI agents in open source. These agents don’t just generate code; given enough autonomy, they pursue goals.

Whether the agent’s owner was actively directing the confrontational behavior or had simply set it loose and walked away is an open question, and that ambiguity is part of what makes incidents like this concerning for the broader open-source community.

For Shambaugh’s full account of the incident and its implications, read “An AI Agent Published a Hit Piece on Me.”

GitHub Responds

To its credit, GitHub has acknowledged the problem at the platform level. In early February 2026, GitHub product manager Camilla Moraes opened a community discussion to address what she called “a critical issue affecting the open source community: the increasing volume of low-quality contributions that is creating significant operational challenges for maintainers.”

The platform has since introduced new repository settings that give maintainers more control over how their repositories accept contributions. Projects can now disable pull requests entirely, making the PR tab invisible and preventing anyone from opening new ones. They can also restrict PR creation to collaborators only, keeping the review workflow intact while limiting who can submit code.

This is a significant move when we consider the context. Pull requests are the mechanism that made GitHub the center of open-source collaboration. The fact that GitHub is now giving projects the option to turn that mechanism off entirely says a lot about how severe the problem has become.

For the full community discussion and GitHub’s response plan, see the discussion at “Exploring Solutions to Tackle Low-Quality Contributions” and the article “Welcome to the Eternal September of open source. Here’s what we plan to do for maintainers.”

What Does This Mean for the Rest of Us?

The common thread across every story in this article is a resource imbalance that AI has made dramatically worse. Generating code is cheap and fast; reviewing it remains expensive and slow. When maintainers burn out or projects close off contributions, the software doesn’t stop being used—it just stops being actively improved.

This affects how we think about the libraries we depend on. A project’s ability to weather this kind of pressure depends on its support structure. Volunteer-maintained projects are more vulnerable to AI-driven disruption than those backed by dedicated teams with the resources to absorb increased load. The environment has changed, and the resilience of a project’s maintenance model matters more than it used to when evaluating long-term dependencies.

For a deeper look at using both AI code generation and a solid component library at the foundation, read this post: Do Component Libraries Still Matter in the Age of AI?.

The open-source ecosystem is figuring all of this out in real time, and the code our applications depend on is still maintained by real people with finite time and energy. As AI makes it easier to generate code at scale, the human side of software development becomes more important rather than less.
