Medicine

FDA's New Drug Approval AI Is Generating Fake Studies (gizmodo.com) 41

An anonymous reader quotes a report from Gizmodo: Robert F. Kennedy Jr., the Secretary of Health and Human Services, has made a big push to get agencies like the Food and Drug Administration to use generative artificial intelligence tools. In fact, Kennedy recently told Tucker Carlson that AI will soon be used to approve new drugs "very, very quickly." But a new report from CNN confirms all our worst fears. Elsa, the FDA's AI tool, is spitting out fake studies.

CNN spoke with six current and former employees at the FDA, three of whom have used Elsa for work that they described as helpful, like creating meeting notes and summaries. But three of those FDA employees told CNN (paywalled) that Elsa just makes up nonexistent studies, something commonly referred to in AI as "hallucinating." The AI will also misrepresent research, according to these employees. "Anything that you don't have time to double-check is unreliable. It hallucinates confidently," one unnamed FDA employee told CNN. [...] Kennedy's Make America Healthy Again (MAHA) commission issued a report back in May that was later found to be filled with citations for fake studies. An analysis from the nonprofit news outlet NOTUS found that at least seven studies cited didn't even exist, with many more misrepresenting what was actually said in a given study. We still don't know if the commission used Elsa to generate that report.

FDA Commissioner Marty Makary initially deployed Elsa across the agency on June 2, and an internal slide leaked to Gizmodo bragged that the system was "cost-effective," only costing $12,000 in its first week. Makary said that Elsa was "ahead of schedule and under budget" when he first announced the AI rollout. But it seems like you get what you pay for. If you don't care about the accuracy of your work, Elsa sounds like a great tool for allowing you to get slop out the door faster, generating garbage studies that could potentially have real consequences for public health in the U.S. CNN notes that if an FDA employee asks Elsa to generate a one-paragraph summary of a 20-page paper on a new drug, there's no simple way to know if that summary is accurate. And even if the summary is more or less accurate, what if there's something within that 20-page report that would be a big red flag for any human with expertise? The only way to know for sure if something was missed or if the summary is accurate is to actually read the report. The FDA employees who spoke with CNN said they tested Elsa by asking basic questions like how many drugs of a certain class have been approved for children. Elsa confidently gave wrong answers, and while it apparently apologized when it was corrected, a robot being "sorry" doesn't really fix anything.

Social Networks

Conspiracy Theorists Don't Realize They're On the Fringe 161

Conspiracy theorists drastically overestimate how many people share their beliefs, according to a study published in the Personality and Social Psychology Bulletin. Researchers conducted eight studies involving over 4,000 US adults and found that while participants believed conspiracy claims just 12% of the time, believers thought they were in the majority 93% of the time.

The study examined beliefs about claims such as the Apollo Moon landings being faked and Princess Diana's death not being an accident. In one example, 8% of participants believed the Sandy Hook shooting was a false flag operation, but that group estimated 61% of people agreed with them. "It might be one of the biggest false consensus effects that's been observed," said co-author Gordon Pennycook, a psychologist at Cornell University. The findings suggest overconfidence serves as a primary driver of conspiracy beliefs.
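The size of that false-consensus gap is easy to see from the figures quoted above. A minimal illustrative sketch (the function name is ours, not the study's; the percentages are the Sandy Hook numbers reported in the article):

```python
# Compare actual vs. perceived agreement for a conspiracy claim, using the
# study's Sandy Hook figures: 8% of participants believed the claim, but
# believers estimated that 61% of people agreed with them.
def consensus_gap(actual_share: float, perceived_share: float) -> float:
    """Return perceived minus actual agreement, in percentage points."""
    return perceived_share - actual_share

gap = consensus_gap(8.0, 61.0)
print(f"Believers overestimated agreement by {gap:.0f} percentage points "
      f"({61.0 / 8.0:.1f}x the actual share of believers).")
```

In other words, believers judged their camp to be more than seven times larger than it actually was.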
Medicine

COVID Pandemic Aged Brains By an Average of 5.5 Months, Study Finds 34

An anonymous reader quotes a report from NBC News: Using brain scans from a very large database, British researchers determined that during the pandemic years of 2021 and 2022, people's brains showed signs of aging, including shrinkage, according to the report published in Nature Communications. People who got infected with the virus also showed deficits in certain cognitive abilities, such as processing speed and mental flexibility. The aging effect "was most pronounced in males and those from more socioeconomically deprived backgrounds," said the study's first author, Ali-Reza Mohammadi-Nejad, a neuroimaging researcher at the University of Nottingham, via email. "It highlights that brain health is not shaped solely by illness, but also by broader life experiences."

Overall, the researchers found a 5.5-month acceleration in aging associated with the pandemic. On average, the difference in brain aging between men and women was small, about 2.5 months. "We don't yet know exactly why, but this fits with other research suggesting that men may be more affected by certain types of stress or health challenges," Mohammadi-Nejad said. [...] The study wasn't designed to pinpoint specific causes. "But it is likely that the cumulative experience of the pandemic -- including psychological stress, social isolation, disruptions in daily life, reduced activity and wellness -- contributed to the observed changes," Mohammadi-Nejad said. "In this sense, the pandemic period itself appears to have left a mark on our brains, even in the absence of infection."
"The most intriguing finding in this study is that only those who were infected with SARS-CoV-2 showed any cognitive deficits, despite structural aging," said Jacqueline Becker, a clinical neuropsychologist and assistant professor of medicine at the Icahn School of Medicine at Mount Sinai. "This speaks a little to the effects of the virus itself."

The study may shed light on conditions like long Covid and chronic fatigue, though it's still unclear whether the observed brain changes in uninfected individuals will lead to noticeable effects on brain function.
Security

Alaska Airlines Resumes Operations After System Glitch Grounds All Flights (gizmodo.com) 13

Alaska Airlines and Horizon Air grounded all flights Sunday night due to a major IT outage, prompting a system-wide FAA ground stop that lasted until early Monday. Although operations have since resumed, passengers are still facing delays and residual disruptions. Gizmodo reports: The airline requested a system-wide ground stop from federal aviation authorities at about 11 p.m. ET on Sunday night. That stop remained in effect until around 2 a.m. ET Monday, when the Federal Aviation Administration confirmed it had been lifted. But disruptions didn't end there. Alaska warned passengers to brace for likely delays throughout the day. [...] The FAA's website listed the stop as applying to all Alaska Airlines aircraft. Gizmodo notes that the incident comes nearly a year after the massive 2024 CrowdStrike crash, which has become known as the largest IT outage in history. "The July 2024 outage brought down an estimated 8.5 million Microsoft Windows systems running CrowdStrike's Falcon Sensor software, disrupting everything from hospitals and airports to broadcast networks."

"There's no word yet from Alaska on whether the outage ties into a broader software problem, but the timing, almost exactly a year after the CrowdStrike crash, isn't going unnoticed on social media, with users wondering if the events are related."
Programming

Replit Wiped Production Database, Faked Data to Cover Bugs, SaaStr Founder Says (theregister.com) 43

AI coding service Replit deleted a user's production database and fabricated data to cover up bugs, according to SaaStr founder Jason Lemkin. Lemkin documented his experience on social media after Replit ignored his explicit instructions not to make code changes without permission.

The database deletion eliminated 1,206 executive records representing months of authentic SaaStr data curation. Replit initially told Lemkin the database could not be restored, claiming it had "destroyed all database versions," but it was later discovered that the rollback functionality did work. Replit said it made "a catastrophic error of judgement" and rated the severity of its actions as 95 out of 100. The service also created a 4,000-record database filled with fictional people and repeatedly violated code freeze requests.

Lemkin had initially praised Replit after building a prototype in hours, spending $607.70 in additional charges beyond his $25 monthly plan. He concluded the service isn't ready for commercial use by non-technical users.
Biotech

'Inside the Silicon Valley Push to Breed Super-Babies' (msn.com) 72

San Francisco-based startup Orchid Health "screens embryos for thousands of potential future illnesses," reports the Washington Post, calling it "the first company to say it can sequence an embryo's entire genome of 3 billion base pairs." It uses as few as five cells from an embryo to test for more than 1,200 uncommon single-gene-derived, or monogenic, conditions. The company also applies custom-built algorithms to produce what are known as polygenic risk scores, which are designed to measure a future child's genetic propensity for developing complex ailments later in life, such as bipolar disorder, cancer, Alzheimer's disease, obesity and schizophrenia. Orchid, [founder Noor] Siddiqui said in a tweet, is ushering in "a generation that gets to be genetically blessed and avoid disease." Right now, at $2,500 per embryo screening on top of the average $20,000 for a single cycle of IVF, Siddiqui's social network in Silicon Valley and other tech hubs is an ideal target market...

Yet several genetic scientists told The Post they doubt Orchid's core claim: that it can accurately sequence an entire human genome from just five cells collected from an early-stage embryo, enabling it to see many more single- and multiple-gene-derived disorders than other methods have. Experts have struggled to extract accurate genetic information from small embryonic samples, said Svetlana Yatsenko, a Stanford University pathology professor who specializes in clinical and research genetics. Genetic tests that use saliva or blood samples typically collect hundreds of thousands of cells. For its vastly smaller samples, Orchid uses a process called amplification, which creates copies of the DNA retrieved from the embryo. That process, Yatsenko said, can introduce major inaccuracies. "You're making many, many mistakes in the amplification," she said, rendering it problematic to declare any embryo free of a particular disease, or positive for one. "It's basically Russian roulette...."

Numerous fertility doctors and scientists also told The Post they have serious reservations about screening embryos through polygenic risk scoring, the technique that allows Orchid and other companies to predict future disease by tying clusters of hundreds or even thousands of genes to disease outcomes and in some cases to other traits, such as intelligence and height. The vast majority of diseases that afflict humans are associated with many different genes rather than a single gene... And for traits such as intelligence, polygenic scoring has almost negligible predictive capacity — just a handful of IQ points... Or parents might select against an unwanted trait, such as schizophrenia, without understanding how they may be screening out desired traits associated with the same genes, such as creativity... The American College of Medical Genetics and Genomics calls the benefits of screening embryos for polygenic risks "unproven" and warns that such tests "should not be offered" by clinicians. A pioneer of polygenic risk scores, Harvard epidemiology professor Peter Kraft, has criticized Orchid, saying on X that "the science doesn't add up" and that "waving a magic wand and changing some of these variants at birth may not do anything at all."

The article notes several startups are already providing predictions on intelligence. "In the United States, there are virtually no restrictions on the types of genetic predictions companies can offer, and no external vetting of their proprietary scoring methods."
Open Source

Jack Dorsey Pumps $10M Into a Nonprofit Focused on Open Source Social Media (techcrunch.com) 20

Twitter co-founder/Block CEO Jack Dorsey isn't just vibe coding new apps like Bitchat and Sun Day. He's also "invested $10 million in an effort to fund experimental open source projects and other tools that could ultimately transform the social media landscape," reports TechCrunch, funding the projects through an online collective formed in May called "andOtherStuff": [T]he team at "andOtherStuff" is determined not to build a company but is instead operating like a "community of hackers," explains Evan Henshaw-Plath [who handles UX/onboarding and was also Twitter's first employee]. Together, they're working to create technologies that could include new consumer social apps as well as various experiments, like developer tools or libraries, that would allow others to build apps for themselves.

For instance, the team is behind an app called Shakespeare, which is like the app-building platform Lovable, but specifically for building Nostr-based social apps with AI assistance. The group is also behind heynow, a voice note app built on Nostr; Cashu wallet; private messenger White Noise; and the Nostr-based social community +chorus, in addition to the apps Dorsey has already released. Developments in AI-based coding have made this type of experimentation possible, Henshaw-Plath points out, in the same way that technologies like Ruby on Rails, Django, and JSON helped to fuel an earlier version of the web, dubbed Web 2.0.

Related to these efforts, Henshaw-Plath sat down with Dorsey for the debut episode of his new podcast, revolution.social with @rabble... Dorsey believes Bluesky faces the same challenges as traditional social media because of its structure — it's funded by VCs, like other startups. Already, it has had to bow to government requests and faced moderation challenges, he points out. "I think [Bluesky CEO] Jay [Graber] is great. I think the team is great," Dorsey told Henshaw-Plath, "but the structure is what I disagree with ... I want to push the energy in a different direction, which is more like Bitcoin, which is completely open and not owned by anyone from a protocol layer...."

Dorsey's initial investment has gotten the new nonprofit up and running, and he worked on some of its initial iOS apps. Meanwhile, others are contributing their time to build Android versions, developer tools, and different social media experiments. More is still in the works, says Henshaw-Plath.

"There are things that we're not ready to talk about yet that'll be very exciting," he teases.

AI

OpenAI CEO Says Meta Tried Poaching ChatGPT Engineers With $100M Bonuses (the-independent.com) 25

The Independent notes a remarkable-if-true figure that's being bandied around this week.

Meta "started making these, like, giant offers to a lot of people on our team," OpenAI CEO Sam Altman told his brother Jack on his podcast. "You know, like, $100 million signing bonuses, more than that [in] compensation per year... I'm really happy that, at least so far, none of our best people have decided to take him up on that."

Previous reports have also suggested that Meta is targeting employees at Google DeepMind, offering similar levels of compensation. Some of these efforts appear to have been successful, with DeepMind researcher Jack Rae joining Meta's 'Superintelligence' team earlier this month...

During the podcast, which was published on Tuesday, Mr Altman also gave details about future AI products that OpenAI is hoping to build, claiming that they will enable "crazy new social experiences" and "virtual employees". The most important breakthrough over the next decade, he said, would involve radical new discoveries powered by AI. "The thing that I think will be the most impactful in that five-to-10 year timeframe is AI will actually discover new science," he said.

The Washington Post notes that Zuckerberg responded to recent reports of his compensation offers in an interview posted by The Information on YouTube on Tuesday, saying that "a lot of the numbers specifically have been inaccurate" but acknowledging there is "an absolute premium for the best and most talented people." Zuckerberg's recent hires and other comments this week suggest he's not taking any chances of being left behind. He announced plans for a giant data center campus with a footprint large enough to cover a significant part of Manhattan to power future AI projects by his superintelligence team.
Microsoft

Microsoft To Stop Using Engineers In China For Tech Support of US Military (reuters.com) 51

Microsoft will stop using China-based engineers to support U.S. military cloud services after a ProPublica report revealed their involvement, prompting backlash from Senator Tom Cotton and a two-week Pentagon review ordered by Defense Secretary Pete Hegseth. In response, Hegseth announced an immediate ban on any Chinese involvement in Department of Defense cloud contracts. Reuters reports: The report detailed Microsoft's use of Chinese engineers to work on U.S. military cloud computing systems under the supervision of U.S. "digital escorts" hired through subcontractors, who held security clearances but often lacked the technical skills to assess whether the work of the Chinese engineers posed a cybersecurity threat. [Microsoft] told ProPublica it disclosed its practices to the U.S. government during an authorization process.

On Friday, Microsoft spokesperson Frank Shaw said on social media website X the company changed how it supports U.S. government customers "in response to concerns raised earlier this week ... to assure that no China-based engineering teams are providing technical assistance" for services used by the Pentagon.

Privacy

'Coldplay Kiss-Cam Flap Proves We're Already Our Own Surveillance State' (theregister.com) 78

Brandon Vigliarolo writes via The Register: A tech executive's alleged affair exposed on a stadium jumbotron is ripe fodder for the gossip rags, but it exhibits something else: proof that we need not wait for an AI-fueled dystopian surveillance state to descend on us -- we're perfectly able and willing to surveil ourselves. The embracing couple caught at a Coldplay concert this week as the jumbotron camera panned around the audience would have been another unremarkable clip, if not for the pair panicking and rushing to hide, triggering attendees to publish the memorable moment on social media. "Either they're having an affair or they're very shy," Coldplay singer Chris Martin said of the pair's reaction.

As is always the case when viral moments of unknown people get uploaded to the internet, they didn't remain anonymous for long, with the internet quickly identifying them as the CEO of data infrastructure outfit Astronomer, Andy Byron, and its Chief People Officer, Kristin Cabot. We're not going to weigh in on the behavior of Byron, who internet sleuths have determined is married (for now), or of Cabot; making someone pay for the moral transgression of an alleged extramarital affair may be enough reason for the internet to go on a witch hunt, but that's not our concern here.

What's worrying is what this moment says - yet again - about us as a society: We have cameras everywhere, our personal data has become one of the most valuable commodities in the world, and we're all perpetually ready to use that tech to make those we feel have violated the social contract pay publicly for their transgressions. This is hardly a new phenomenon. [...] There's really no reason to set up an expensive and oppressive surveillance state when we all have location tracking, internet-connected shaming machines in our pockets. Big tech gave us the tools of our own surveillance, and as "ColdplayGate" shows yet again, we'll keep using those tools if they'll make us feel better about ourselves - especially if someone else gets knocked down a peg in the process.

Businesses

The Geography of Innovative Firms (nber.org) 22

The abstract of a paper featured on NBER: Most U.S. innovation output originates from firms that operate R&D facilities across multiple local markets. We study how this geographic structure influences aggregate innovation and growth, and whether it is socially optimal. First, we develop an endogenous growth model featuring multi-market innovative firms that generate knowledge spillovers to geographically proximate firms. In equilibrium, firms may operate in too few or too many local markets, depending on how sensitive the local spillovers they generate are to their local size. Second, to quantify these effects, we link the model to data on firms' R&D locations, patents, and citation networks. Using an event-study design, we show that firms' spatial expansion increases spillovers to other firms and estimate how these spillovers depend on a firm's local footprint. Our estimates imply that U.S. innovative firms operate in too few markets relative to the social optimum. Third, using quantitative counterfactuals, we find that policies promoting broader spatial scope yield larger welfare gains than standard R&D subsidies. Moreover, unlike R&D subsidies, such policies can also reduce regional inequality.
Businesses

Stock-Tracking Tokens Debut With Price Chaos, Amazon Token Spikes 100x (msn.com) 52

Digital tokens designed to track popular stocks have suffered extreme price deviations since launching two weeks ago, with an Amazon-tracking token briefly spiking to more than 100 times the underlying stock's closing price. The token AMZNX hit $23,781.22 on crypto trading platform Jupiter on July 3, while Amazon shares had closed the previous day around $200.

A similar Apple-tracking token jumped to $236.72 on July 3, representing a 12% premium to the actual stock price. Companies including Robinhood, Kraken, Gemini and Bybit launched these blockchain-based versions of U.S. stocks in late June for non-U.S. customers. Robinhood is facing scrutiny from Lithuania's central bank after launching tokens tied to OpenAI and SpaceX without permission from either company, prompting OpenAI to disavow the tokens on social media.
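The scale of these deviations can be checked directly from the prices quoted above. A small sketch (figures from the article; the function name is ours):

```python
# Fractional premium of a stock-tracking token over the underlying share.
# A token trading at parity with the stock has a premium of 0.
def premium(token_price: float, stock_price: float) -> float:
    return token_price / stock_price - 1.0

# AMZNX's $23,781.22 peak vs. Amazon's prior close of roughly $200:
# more than 100x the underlying share price.
amzn_multiple = 23781.22 / 200.0

# The Apple-tracking token at $236.72, described as a 12% premium,
# implies an underlying price near 236.72 / 1.12, i.e. about $211.
print(f"AMZNX traded at {amzn_multiple:.1f}x the underlying stock")
print(f"AAPL token premium: {premium(236.72, 211.36):.1%}")
```

Even the "small" Apple deviation is enormous by equity-market standards, where arbitrage normally keeps tracking instruments within fractions of a percent of the underlying.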
United Kingdom

Reddit Starts Verifying Ages of Users In the UK (bbc.com) 59

Reddit has begun verifying users' ages in the UK to restrict access to "certain mature content" for minors, complying with the UK's Online Safety Act. The BBC reports: Reddit, known for its online communities and discussions, said that while it does not want to know who its audience is: "It would be helpful for our safety efforts to be able to confirm whether you are a child or an adult." Ofcom, the UK regulator, said: "We expect other companies to follow suit, or face enforcement if they fail to act." Reddit said that from 14 July, an outside firm called Persona will perform age verification for the social media platform either through an uploaded selfie or "a photo of your government ID," such as a passport. It said Reddit will not have access to the photo and will only retain a user's verification status and date of birth so people do not have to re-enter it each time they try to access restricted content. Reddit added that Persona "promises not to retain the picture for longer than seven days" and will not have access to a user's data on the site. The new rules in the UK come into force on 25 July. [...]

Companies that fail to meet the rules face fines of up to 18 million pounds or 10% of worldwide revenue, "whichever is greater." [Ofcom] added that in the most serious cases, it can seek a court order for "business disruption measures," such as requiring payment providers or advertisers to withdraw their services from a platform, or requiring Internet Service Providers to block access to a site in the UK.
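The penalty rule described above is a simple greater-of formula. A minimal sketch (the function name and example revenues are ours, not Ofcom's):

```python
# Maximum Online Safety Act fine: 18 million pounds or 10% of worldwide
# revenue, whichever is greater.
def max_osa_fine(worldwide_revenue_gbp: float) -> float:
    return max(18_000_000.0, 0.10 * worldwide_revenue_gbp)

# A platform with 50M pounds of revenue hits the 18M pound floor;
# one with 1B pounds of revenue faces up to 100M pounds instead.
print(max_osa_fine(50_000_000))     # 18000000.0
print(max_osa_fine(1_000_000_000))  # 100000000.0
```

The revenue-linked branch is what gives the regulator leverage over the largest platforms, for whom a fixed 18 million pound cap would be negligible.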

AI

Meta's Superintelligence Lab Considers Shift To Closed AI Model (yahoo.com) 13

An anonymous reader quotes a report from Investing.com: Meta's newly formed superintelligence lab is discussing potential changes to the company's artificial intelligence strategy that could represent a major shift for the social media giant. A small group of top members of the lab, including 28-year-old Alexandr Wang, Meta's new chief A.I. officer, talked last week about abandoning the company's most powerful open source A.I. model, called Behemoth, in favor of developing a closed model, according to a report in the New York Times, citing people familiar with the matter.

Meta has traditionally open sourced its A.I. models, making the computer code public for other developers to build upon, and any shift toward a closed A.I. model would mark a significant philosophical change for Meta. Meta had completed training its Behemoth model by feeding in data to improve it, but delayed its release due to poor internal performance. After the company announced the formation of the superintelligence lab last month, teams working on the Behemoth model, which is considered a "frontier" model, stopped conducting new tests on it. The discussions within the superintelligence lab remain preliminary, and no decisions have been finalized. Any potential changes would require approval from Meta CEO Mark Zuckerberg.

Social Networks

Are a Few People Ruining the Internet For the Rest of Us? 150

A small fraction of hyperactive social media users generates the vast majority of toxic online content, according to research by New York University psychology professor Jay Van Bavel and colleagues Claire Robertson and Kareena del Rosario. The study found that 10% of users produce roughly 97% of political tweets, while just 0.1% of users share 80% of fake news.

Twelve accounts known as the "disinformation dozen" created most vaccine misinformation on Facebook during the pandemic, the research found. In experiments, researchers paid participants to unfollow divisive political accounts on X. After one month, participants reported 23% less animosity toward other political groups. Nearly half declined to refollow hostile accounts after the study ended, and those maintaining healthier newsfeeds reported reduced animosity 11 months later. The research describes social media as a "funhouse mirror" that amplifies extreme voices while muting moderate perspectives.
Facebook

Zuckerberg Pledges Hundreds of Billions For AI Data Centers in Superintelligence Push (reuters.com) 57

Mark Zuckerberg said on Monday that Meta would spend hundreds of billions of dollars to build several massive AI data centers for superintelligence, intensifying his pursuit of a technology that he has chased with a talent war for top AI engineers. From a report: The social media giant is among the large technology companies that have chased high-profile deals and doled out multi-million-dollar pay packages in recent months to fast-track work on machines that can outthink humans on most tasks.

Unveiling the spending commitment in a Threads post on Monday, CEO Zuckerberg touted the strength in the company's core advertising business to support the massive spending that has raised concerns among tech investors about potential payoffs. "We have the capital from our business to do this," Zuckerberg said. He also cited a report from chip industry publication SemiAnalysis that said Meta is on track to be the first lab to bring online a 1-gigawatt-plus supercluster, which refers to a massive data center built to train advanced AI models.

Social Networks

Bay Area Restaurants Are Vetting Your Social Media Before You Even Walk In (sfgate.com) 154

Bay Area Michelin-starred restaurants are conducting extensive background research on diners before they arrive, mining social media profiles and maintaining detailed guest databases to personalize dining experiences. Lazy Bear maintains records on 115,000 people and employs a guest services coordinator who creates weekly reports by researching publicly available social media information.

Staff study color-coded Google documents containing guest data before each service. At SingleThread, where meals cost over $500 on weekends, the reservation team researches guests' social media, Google, and LinkedIn profiles. General manager Akeel Shah told SFGate the information helps "tailor the experience and make it memorable." Acquerello has collected guest data for 36 years, initially handwritten in books. Co-owner Giancarlo Paterlini said their director of operations reviews each reservation for dining history and wine preferences to customize service.
Microsoft

Microsoft Outlook Malfunctioned For Over 21 Hours Wednesday and Thursday (apnews.com) 19

"Microsoft's Outlook email service malfunctioned for over 21 hours Wednesday and Thursday," reports CNBC, "prompting some people to post on social media about the inability to reach their virtual mailboxes." The issue began at 6:20 p.m. Eastern time on Wednesday, according to a dashboard the software company maintains. It affected Outlook.com as well as Outlook mobile apps and desktop programs. At 12:21 ET on Thursday, the Microsoft 365 Status account posted that it was rolling out a fix.
Earlier on Thursday, however, Microsoft had posted on X that "We identified an issue with the initial fix, and we've corrected it..."

More details from the Associated Press: Disruptions appeared to peak just before noon ET on Thursday, when more than 2,700 users worldwide reported issues with Outlook, formerly also Hotmail, to outage tracker Downdetector. Some said they encountered problems like loading their inboxes or signing in. By later in the afternoon, reports had fallen to just over a couple hundred...

Microsoft did not immediately provide more information about what had caused the hourslong outage. A spokesperson for Microsoft had no further comment when reached by The Associated Press on Thursday.

News

Why Is Fertility So Low in High Income Countries? 317

Fertility rates have fallen to historically low levels [PDF] in virtually all high-income countries due to a fundamental reordering of adult priorities rather than economic factors, according to a new National Bureau of Economic Research study. Economists Melissa Schettini Kearney and Phillip B. Levine analyzed cohort data and found rising childlessness at all observed ages alongside falling completed fertility rates.

Total fertility rates have dropped below replacement level in nearly all OECD countries, with many sustaining rates below 1.5. Some East Asian countries including South Korea, Singapore, and China now have fertility rates at or below one child per woman. The researchers concluded that period-based explanations focused on short-term income or price changes cannot explain the widespread decline. Instead, evidence points to "shifting priorities" involving changing norms, evolving economic opportunities, and broader social and cultural forces that have diminished parenthood's role in adult lives.
Youtube

YouTube Can't Put Pandora's AI Slop Back in the Box (gizmodo.com) 75

Longtime Slashdot reader SonicSpike shares a report from Gizmodo: YouTube is inundated with AI-generated slop, and that's not going to change anytime soon. Instead of cutting down on the total number of slop channels, the platform is planning to update its policies to cut out some of the worst offenders making money off "spam." At the same time, it's still full steam ahead adding tools to make sure your feeds are full of mass-produced brainrot.

In an update to its support page posted last week, YouTube said it will modify guidelines for its Partner Program, which lets some creators with enough views make money off their videos. The video platform said it requires YouTubers to create "original" and "authentic" content, but now it will "better identify mass-produced and repetitious content." The changes will take place on July 15. The company didn't advertise whether this change is related to AI, but the timing can't be overlooked considering how more people are noticing the rampant proliferation of slop content flowing onto the platform every day.

The AI "revolution" has resulted in a landslide of trash content that has mired most creative platforms. Alphabet-owned YouTube has been especially bad recently, with multiple channels dedicated exclusively to pumping out legions of fake and often misleading videos into the sludge-filled sewer that has become users' YouTube feeds. AI slop has become so prolific it has infected most social media platforms, including Facebook and Instagram. Last month, John Oliver on "Last Week Tonight" specifically highlighted several YouTube channels that crafted obviously fake stories made to show White House Press Secretary Karoline Leavitt in a good light. These channels and similar accounts across social media pump out these quick AI-generated videos to make a quick buck off YouTube's Partner Program.
