Government

53 LA County Public Health Workers Fall for Phishing Email. 200,000 People May Be Affected (yahoo.com)

The Los Angeles Times reports that "The personal information of more than 200,000 people in Los Angeles County was potentially exposed after a hacker used a phishing email to steal the login credentials of 53 public health employees, the county announced Friday." Details that were possibly accessed in the February data breach include the first and last names, dates of birth, diagnoses, prescription information, medical record numbers, health insurance information, Social Security numbers and other financial information of Department of Public Health clients, employees and other individuals. "Affected individuals may have been impacted differently and not all of the elements listed were present for each individual," the agency said in a news release...

The data breach happened between Feb. 19 and 20 when employees received a phishing email, which tries to trick recipients into providing important information such as passwords and login credentials. The employees clicked on a link in the body of the email, thinking they were accessing a legitimate message, according to the agency...

The county is offering free identity monitoring through Kroll, a financial and risk advisory firm, to those affected by the breach. Individuals whose medical records were potentially accessed by the hacker should review them with their doctor to ensure the content is accurate and hasn't been changed. Officials say people should also review the Explanation of Benefits statement they receive from their insurance company to make sure they recognize all the services that have been billed. Individuals can also request credit reports and review them for any inaccuracies.

From the official statement by the county's Public Health department: Upon discovery of the phishing attack, Public Health disabled the impacted e-mail accounts, reset and re-imaged the user's device(s), blocked websites that were identified as part of the phishing campaign and quarantined all suspicious incoming e-mails. Additionally, awareness notifications were distributed to all workforce members to remind them to be vigilant when reviewing e-mails, especially those including links or attachments. Law enforcement was notified upon discovery of the phishing attack, and they investigated the incident.
AI

CISA Head Warns Big Tech's 'Voluntary' Approach to Deepfakes Isn't Enough (msn.com)

The Washington Post reports: Commitments from Big Tech companies to identify and label fake artificial-intelligence-generated images on their platforms won't be enough to keep the tech from being used by other countries to try to influence the U.S. election, said the head of the Cybersecurity and Infrastructure Security Agency. AI won't completely change the long-running threat of weaponized propaganda, but it will "inflame" it, CISA Director Jen Easterly said at The Washington Post's Futurist Summit on Thursday. Tech companies are doing some work to try to label and identify deepfakes on their platforms, but more needs to be done, she said. "There is no real teeth to these voluntary agreements," Easterly said. "There needs to be a set of rules in place, ultimately legislation...."

In February, tech companies, including Google, Meta, OpenAI and TikTok, said they would work to identify and label deepfakes on their social media platforms. But their agreement was voluntary and did not include an outright ban on deceptive political AI content. The agreement came months after the tech companies also signed a pledge organized by the White House that they would label AI images. Congressional and state-level politicians are debating numerous bills to try to regulate AI in the United States, but so far the initiatives haven't made it into law. The E.U. parliament passed an AI Act this year, but it won't fully go into force for another two years.

Unix

Version 256 of systemd Boasts '42% Less Unix Philosophy' (theregister.com)

Liam Proven reports via The Register: The latest version of the systemd init system is out, with the openly confrontational tag line: "Available soon in your nearest distro, now with 42 percent less Unix philosophy." As Lennart Poettering's announcement points out, this is the first version of systemd whose version number is a nine-bit value. Version 256, as usual, brings in a broad assortment of new features, but also turns off some older features that are now considered deprecated. For instance, it won't run under cgroups version 1 unless forced.

Around since 2008, cgroups is a Linux kernel containerization mechanism originally donated by Google, as The Reg noted a decade ago. Cgroups v2 was merged in 2016 so this isn't a radical change. System V service scripts are now deprecated too, as is the SystemdOptions EFI variable. Additionally, there are some new commands and options. Some are relatively minor, such as the new systemd-vpick binary, which can automatically select the latest member of versioned directories. Before any OpenVMS admirers get excited, no, Linux does not now support versions on files or directories. Instead, this is a fresh option that uses a formalized versioning system involving: "... paths whose trailing components have the .v/ suffix, pointing to a directory. These components will then automatically look for suitable files inside the directory, do a version comparison and open the newest file found (by version)."
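The `.v/` lookup described above can be sketched in a few lines of Python. This is an illustrative re-implementation, not the real systemd-vpick (which has its own formalized version-comparison rules); the `<stem>_<version><ext>` naming scheme is assumed here for the example:

```python
# Illustrative sketch of ".v/" versioned-directory resolution, loosely
# modeled on the behavior described for systemd-vpick. Entries are assumed
# to be named like "<stem>_<major>.<minor>...<ext>" for this example.
import os
import re

def vpick(path):
    """Given a directory path ending in ".v", return the newest versioned
    file inside it, comparing dotted numeric versions component-wise."""
    pattern = re.compile(r"_(\d+(?:\.\d+)*)")
    best, best_ver = None, None
    for entry in os.listdir(path):
        m = pattern.search(entry)
        if not m:
            continue  # skip entries without a recognizable version
        ver = tuple(int(part) for part in m.group(1).split("."))
        if best_ver is None or ver > best_ver:
            best, best_ver = entry, ver
    return os.path.join(path, best) if best else None
```

Note that tuple comparison makes `1.10` correctly sort after `1.2`, which a naive string comparison would get wrong.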

The latest function, which The Reg FOSS desk suspects will ruffle some feathers, is a whole new command, run0, which effectively replaces the sudo command as used in Apple's macOS and in Ubuntu ever since the first release. Agent P introduced the new command in a Mastodon thread. He says that the key benefit is that run0 doesn't need setuid, a basic POSIX function, which, to quote its Linux manual page, "sets the effective user ID of the calling process." [...] Another new command is importctl, which handles importing and exporting both block-level and file-system-level disk images. And there's a new type of system service called a capsule, and "a small new service manager" called systemd-ssh-generator, which lets VMs and containers accept SSH connections so long as systemd can find the sshd binary -- even if no networking is available.
The release notes are available here.
The Internet

The Stanford Internet Observatory is Being Dismantled (platformer.news)

An anonymous reader shares a report: After five years of pioneering research into the abuse of social platforms, the Stanford Internet Observatory is winding down. Its founding director, Alex Stamos, left his position in November. Renee DiResta, its research director, left last week after her contract was not renewed. One other staff member's contract expired this month, while others have been told to look for jobs elsewhere, sources say.

Some members of the eight-person team might find other jobs at Stanford, and it's possible that the university will retain the Stanford Internet Observatory branding, according to sources familiar with the matter. But the lab will not conduct research into the 2024 election or other elections in the future.

AI

Clearview AI Used Your Face. Now You May Get a Stake in the Company. (nytimes.com)

A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database. From a report: Clearview AI, which is based in New York, scraped billions of photos from the web and social media sites like Facebook, LinkedIn and Instagram to build a facial recognition app used by thousands of police departments, the Department of Homeland Security and the F.B.I. After The New York Times revealed the company's existence in 2020, lawsuits were filed across the country. They were consolidated in federal court in Chicago as a class action.

The litigation has proved costly for Clearview AI, which would most likely go bankrupt before the case made it to trial, according to court documents. The company and those who sued it were "trapped together on a sinking ship," lawyers for the plaintiffs wrote in a court filing proposing the settlement. "These realities led the sides to seek a creative solution by obtaining for the class a percentage of the value Clearview could achieve in the future," added the lawyers, from Loevy + Loevy in Chicago.

Anyone in the United States who has a photo of himself or herself posted publicly online -- so almost everybody -- could be considered a member of the class. The settlement would collectively give the members a 23 percent stake in Clearview AI, which is valued at $225 million, according to court filings. (Twenty-three percent of the company's current value would be about $52 million.) If the company goes public or is acquired, those who had submitted a claim form would get a cut of the proceeds. Alternatively, the class could sell its stake. Or the class could opt, after two years, to collect 17 percent of Clearview's revenue, which it would be required to set aside.

ISS

NASA Accidentally Broadcasts Simulation of Distressed Astronauts On ISS (reuters.com)

An anonymous reader quotes a report from Reuters: NASA accidentally broadcast a simulation of astronauts being treated for decompression sickness on the International Space Station (ISS) on Wednesday, prompting speculation of an emergency in posts on social media. About 5:28 p.m. U.S. Central Time (2228 GMT), the National Aeronautics and Space Administration's (NASA) live YouTube channel broadcast audio that indicated a crew member was experiencing the effects of decompression sickness (DCS), NASA said on its official ISS X account.

A female voice asks crew members to "get commander back in his suit", check his pulse and provide him with oxygen, later saying his prognosis was "tenuous", according to copies of the audio posted on social media. NASA did not verify the recordings or republish the audio. Several space enthusiasts posted a link to the audio on X with warnings that there was a serious emergency on the ISS. "This audio was inadvertently misrouted from an ongoing simulation where crew members and ground teams train for various scenarios in space and is not related to a real emergency," the ISS account post said. "There is no emergency situation going on aboard the International Space Station," it added.

Crew members on the ISS were in their sleep period at the time of the audio broadcast as they prepared for a spacewalk at 8 a.m. EDT on Thursday, the ISS post said. NASA's ISS YouTube channel -- at the time the audio was accidentally broadcast -- now shows an error message saying the feed has been interrupted.

Social Networks

A Growing Number of Americans Are Getting Their News From TikTok (theverge.com)

According to a new survey from the Pew Research Center, TikTok is the second most popular source of news for Americans after X, "though most TikTok users don't primarily think of the shortform video app as a news source," notes The Verge. The survey looked at how Facebook, Instagram, TikTok and X play a role in Americans' news diets. From the report: Among TikTok users, only 15 percent say keeping up with the news is a major reason they use the app. Still, 35 percent of those surveyed said they wouldn't have seen the news they get on TikTok elsewhere. And unlike other apps, the news users see on TikTok is just as likely to come from influencers or celebrities as it is from journalists -- and it's far more likely to come from total strangers. (Meanwhile, most Facebook and Instagram users say the news that pops up on their feeds is posted by friends, relatives, or other people they know; on X, users are more likely to see news posted by media outlets or reporters.)
Businesses

Wells Fargo Fires Employees for Faking Work By Simulating Keyboard Activity (yahoo.com)

Wells Fargo fired more than a dozen employees last month after investigating claims that they were faking work. From a report: The staffers, all in the firm's wealth- and investment-management unit, were "discharged after review of allegations involving simulation of keyboard activity creating impression of active work," according to disclosures filed with the Financial Industry Regulatory Authority. "Wells Fargo holds employees to the highest standards and does not tolerate unethical behavior," a company spokesperson said in a statement.

Devices and software to imitate employee activity, sometimes known as "mouse movers" or "mouse jigglers," took off during the pandemic-spurred work-from-home era, with people swapping tips for using them on social-media sites Reddit and TikTok. Such gadgets are available on Amazon.com for less than $20.

Games

Call of Duty: Black Ops 6's Enormous 309GB Download 'Not Representative of a Typical Player Install Experience' (eurogamer.net)

Activision has clarified Call of Duty: Black Ops 6 isn't 309GB after all -- or at least, you can download the core of it for less. From a report: This is despite Xbox's store page for the game stating that Call of Duty: Black Ops 6's install size is a rather chunky 309.85 GB. This made many heads turn, because that seemed excessive. The Call of Duty team has now issued a correction with more detail. Writing on social media platform X, Activision stated the file size currently listed for Black Ops 6 "does not represent the download size or disk footprint" for its upcoming Call of Duty game.

"The sizes as shown include the full installations of Modern Warfare 2, Modern Warfare 3, Warzone and all relevant content packs, including all localised languages combined which is not representative of a typical player install experience," it explained, before adding: "Players will be able to download Black Ops 6 at launch without downloading any other Call of Duty titles or all of the language packs."

Security

The Mystery of an Alleged Data Broker's Data Breach (techcrunch.com)

An anonymous reader shares a report: Since April, a hacker with a history of selling stolen data has claimed a data breach of billions of records -- impacting at least 300 million people -- from a U.S. data broker, which would make it one of the largest alleged data breaches of the year. The data, seen by TechCrunch, on its own appears partly legitimate -- if imperfect.

The stolen data, which was advertised on a known cybercrime forum, allegedly dates back years and includes U.S. citizens' full names, their home address history and Social Security numbers -- data that is widely available for sale by data brokers. But confirming the source of the alleged data theft has proven inconclusive; such is the nature of the data broker industry, which gobbles up individuals' personal data from disparate sources with little to no quality control. The alleged data broker in question, according to the hacker, is National Public Data, which bills itself as "one of the biggest providers of public records on the Internet."

On its official website, National Public Data claimed to sell access to several databases: a "People Finder" one where customers can search by Social Security number, name and date of birth, address or telephone number; a database of U.S. consumer data "covering over 250 million individuals;" a database containing voter registration data that contains information on 100 million U.S. citizens; a criminal records one; and several more. Malware research group vx-underground said on X (formerly Twitter) that they reviewed the whole stolen database and could "confirm the data present in it is real and accurate."

Social Networks

The Word 'Bot' Is Increasingly Being Used As an Insult On Social Media (newscientist.com)

The definition of the word "bot" is shifting to become an insult to someone you know is human, according to researchers who analyzed more than 22 million tweets. Researchers found this shift began around 2017, with left-leaning users more likely to accuse right-leaning users of being bots. "A potential explanation might be that media frequently reported about right-wing bot networks influencing major events like the [2016] US election," says Dennis Assenmacher at Leibniz Institute for Social Sciences in Cologne, Germany. "However, this is just speculation and would need confirmation." NewScientist reports: To investigate, Assenmacher and his colleagues looked at how users perceive what is a bot or not. They did so by looking at how the word "bot" was used on Twitter between 2007 and December 2022 (the social network changed its name to X in 2023, following its purchase by Elon Musk), analyzing the words that appeared next to it in more than 22 million English-language tweets. The team found that before 2017, the word was usually deployed alongside allegations of automated behavior of the type that would traditionally fit the definition of a bot, such as "software," "script" or "machine." After that date, the use shifted. "Now, the accusations have become more like an insult, dehumanizing people, insulting them, and using this as a technique to deny their intelligence and deny their right to participate in a conversation," says Assenmacher. The study has been published in the journal Proceedings of the Eighteenth International AAAI Conference on Web and Social Media.
AI

Scammers' New Way of Targeting Small Businesses: Impersonating Them (wsj.com)

Copycats are stepping up their attacks on small businesses. Sellers of products including merino socks and hummingbird feeders say they have lost customers to online scammers who use the legitimate business owners' videos, logos and social-media posts to assume their identities and steer customers to cheap knockoffs or simply take their money. WSJ: "We used to think you'd be targeted because you have a brand everywhere," said Alastair Gray, director of anticounterfeiting for the International Trademark Association, a nonprofit that represents brand owners. "It now seems with the ease at which these criminals can replicate websites, they can cut and paste everything." Technology has expanded the reach of even the smallest businesses, making it easy to court customers across the globe. But evolving technology has also boosted opportunities for copycats; ChatGPT and other advances in artificial intelligence make it easier to avoid language or spelling errors, often a signal of fraud.

Imitators also have fine-tuned their tactics, including by outbidding legitimate brands for top position in search results. "These counterfeiters will market themselves just like brands market themselves," said Rachel Aronson, co-founder of CounterFind, a Dallas-based brand-protection company. Policing copycats is particularly challenging for small businesses with limited financial resources and not many employees. Online giants such as Amazon.com and Meta Platforms say they use technology to identify and remove misleading ads, fake accounts or counterfeit products.

Space

SpaceX Hopes to Eventually Build One Starship Per Day at Its Texas 'Starfactory' (space.com)

SpaceX's successful launch (and reentry) of Starship was just the beginning, reports Space.com: SpaceX now aims to build on the progress with its Starship program as it continues work on Starfactory, a new manufacturing facility under construction at the company's Starbase site in South Texas... "When you step into this factory, it is truly inspirational. My heart jumps out of my chest," Kate Tice, manager of SpaceX Quality Systems Engineering, said [during SpaceX's livestream of the Starship flight test]. "Now this will enable us to increase our production rate significantly as we build toward our long-term goal of producing one Ship per day and coming off the production line soon, Starship Version Two."

This new version of Starship is designed to be easier to mass-produce, SpaceX CEO Elon Musk said on social media.

Space.com argues that the long-term expansion comes as SpaceX "looks to use Starship to eventually make humanity interplanetary."
Encryption

Researcher Finds Side-Channel Vulnerability in Post-Quantum Key Encapsulation Mechanism (thecyberexpress.com)

Slashdot reader storagedude shared this report from The Cyber Express: A security researcher discovered an exploitable timing leak in the Kyber key encapsulation mechanism (KEM) that's in the process of being adopted by NIST as a post-quantum cryptographic standard. Antoon Purnal of PQShield detailed his findings in a blog post and on social media, and noted that the problem has been fixed with the help of the Kyber team. The issue was found in the reference implementation of the Module-Lattice-Based Key-Encapsulation Mechanism (ML-KEM) that's in the process of being adopted as a NIST post-quantum key encapsulation standard. "A key part of implementation security is resistance against side-channel attacks, which exploit the physical side-effects of cryptographic computations to infer sensitive information," Purnal wrote.

To secure against side-channel attacks, cryptographic algorithms must be implemented in a way so that "no attacker-observable effect of their execution depends on the secrets they process," he wrote. In the ML-KEM reference implementation, "we're concerned with a particular side channel that's observable in almost all cryptographic deployment scenarios: time." The vulnerability can occur when a compiler optimizes the code, in the process silently undoing "measures taken by the skilled implementer." In Purnal's analysis, the Clang compiler was found to emit a vulnerable secret-dependent branch in the poly_frommsg function of the ML-KEM reference code needed in both key encapsulation and decapsulation, corresponding to the expand_secure implementation.
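The hazard Purnal describes is easiest to see in miniature. When expanding a secret message bit into a polynomial coefficient, the standard constant-time idiom is an arithmetic mask rather than an `if` on the secret; the vulnerability arose because the compiler can silently turn the mask form back into a branch. A Python sketch of the two idioms follows (functional illustration only: the actual concern applies to compiled C code, and Python makes no timing guarantees; `Q = 3329` is the ML-KEM modulus):

```python
# Illustrative sketch of the secret-dependent-branch hazard described above.
# Expanding a secret bit into the constant (Q+1)//2 can be written two ways.

Q = 3329  # the ML-KEM modulus

def expand_branching(bit):
    # Secret-dependent branch: which path executes depends on the secret
    # bit, so execution time can leak it through a timing side channel.
    if bit:
        return (Q + 1) // 2
    return 0

def expand_branchless(bit):
    # Constant-time idiom: -bit is an all-ones mask when bit == 1 and
    # all-zeros when bit == 0, so the same instructions run either way.
    return (-bit) & ((Q + 1) // 2)
```

Both functions compute the same values; the point of the finding is that a compiler optimizing the branchless form may emit the branching form anyway, which is why patching must be verified at the emitted-code level, not just in the source.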

While the reference implementation was patched, "It's important to note that this does not rule out the possibility that other libraries, which are based on the reference implementation but do not use the poly_frommsg function verbatim, may be vulnerable — either now or in the future," Purnal wrote.

Purnal also published a proof-of-concept demo on GitHub. "On an Intel Core i7-13700H, it takes between 5-10 minutes to leak the entire ML-KEM 512 secret key using end-to-end decapsulation timing measurements."
AI

Artists Are Deleting Instagram For New App Cara In Protest of Meta AI Scraping (fastcompany.com)

Some artists are jumping ship for the anti-AI portfolio app Cara after Meta began using Instagram content to train its AI models. Fast Company explains: The portfolio app bills itself as a platform that protects artists' images from being used to train AI, and only allowing AI content to be posted if it's clearly labeled. Based on the number of new users the Cara app has garnered over the past few days, there seems to be a need. Between May 31 and June 2, Cara's user base tripled from less than 100,000 to more than 300,000 profiles, skyrocketing to the top of the app store. [...] Cara is a social networking app for creatives, in which users can post images of their artwork, memes, or just their own text-based musings. It shares similarities with major social platforms like X (formerly Twitter) and Instagram on a few fronts. Users can access Cara through a mobile app or on a browser. Both options are free to use. The UI itself is like an arts-centric combination of X and Instagram. In fact, some UI elements seem like they were pulled directly from other social media sites. (It's not the most innovative approach, but it is strategic: as a new app, any barriers to potential adoption need to be low).

Cara doesn't train any AI models on its content, nor does it allow third parties to do so. According to Cara's FAQ page, the app aims to protect its users from AI scraping by automatically implementing "NoAI" tags on all of its posts. The website says these tags "are intended to tell AI scrapers not to scrape from Cara." Ultimately, they appear to be html metadata tags that politely ask bad actors not to get up to any funny business, and it's pretty unlikely that they hold any actual legal weight. Cara admits as much, too, warning its users that the tags aren't a "fully comprehensive solution and won't completely prevent dedicated scrapers." With that in mind, Cara assesses the "NoAI" tagging system as "a necessary first step in building a space that is actually welcoming to artists -- one that respects them as creators and doesn't opt their work into unethical AI scraping without their consent."
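Because such tags are plain HTML metadata, honoring them is entirely up to the scraper. A minimal detector using only Python's standard library shows how little machinery is involved; the `robots`/`noai` tag name and value here are assumptions for illustration, since the exact tags Cara emits aren't documented in the report:

```python
# Sketch: detect a hypothetical "noai"-style robots meta tag in page HTML.
# The tag name ("robots") and directive ("noai") are assumed for this
# example; the point is that compliance is voluntary on the scraper's side.
from html.parser import HTMLParser

class NoAITagFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.found = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)  # attrs arrives as a list of (name, value) pairs
        name = (attrs.get("name") or "").lower()
        content = (attrs.get("content") or "").lower()
        if name == "robots" and "noai" in content:
            self.found = True

def has_noai_tag(html):
    finder = NoAITagFinder()
    finder.feed(html)
    return finder.found
```

A well-behaved crawler would run a check like this before ingesting a page; a "dedicated scraper," as Cara's own warning puts it, simply wouldn't.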

In December, Cara launched another tool called Cara Glaze to defend its artists' work against scrapers. (Users can only use it a select number of times.) Glaze, developed by the SAND Lab at University of Chicago, makes it much more difficult for AI models to accurately understand and mimic an artist's personal style. The tool works by learning how AI bots perceive artwork, and then making a set of minimal changes that are invisible to the human eye but confusing to the AI model. The AI bot then has trouble "translating" the art style and generates warped recreations. In the future, Cara also plans to implement Nightshade, another University of Chicago software that helps protect artwork against AI scrapers. Nightshade "poisons" AI training data by adding invisible pixels to artwork that can cause AI software to completely misunderstand the image. Beyond establishing shields against data mining, Cara also uses a third party service to detect and moderate any AI artwork that's posted to the site. Non-human artwork is forbidden, unless it's been properly labeled by the poster.

The Internet

Remote Amazon Tribe Connects To Internet, Gets Addicted To Porn and Social Media 96

The Marubo people, an isolated Indigenous tribe in the Amazon, have gained high-speed internet access through Elon Musk's Starlink service, drastically altering their traditional way of life. While the internet has brought significant benefits like improved communication and emergency response, it has also introduced challenges such as social media addiction, exposure to inappropriate content, and cultural erosion. The New York Times reports: After only nine months with Starlink, the Marubo are already grappling with the same challenges that have racked American households for years: teenagers glued to phones; group chats full of gossip; addictive social networks; online strangers; violent video games; scams; misinformation; and minors watching pornography. Modern society has dealt with these issues over decades as the internet continued its relentless march. The Marubo and other Indigenous tribes, who have resisted modernity for generations, are now confronting the internet's potential and peril all at once, while debating what it will mean for their identity and culture.

The internet was an immediate sensation. "It changed the routine so much that it was detrimental," [admitted one Marubo leader, Enoque Marubo]. "In the village, if you don't hunt, fish and plant, you don't eat." Leaders realized they needed limits. The internet would be switched on for only two hours in the morning, five hours in the evening, and all day Sunday. During those windows, many Marubo are crouched over or reclined in hammocks on their phones. They spend lots of time on WhatsApp. There, leaders coordinate between villages and alert the authorities to health issues and environmental destruction. Marubo teachers share lessons with students in different villages. And everyone is in much closer contact with faraway family and friends. To Enoque, the biggest benefit has been in emergencies. A venomous snake bite can require swift rescue by helicopter. Before the internet, the Marubo used amateur radio, relaying a message between several villages to reach the authorities. The internet made such calls instantaneous. "It's already saved lives," he said.

In April, seven months after Starlink's arrival, more than 200 Marubo gathered in a village for meetings. Enoque brought a projector to show a video about bringing Starlink to the villages. As proceedings began, some leaders in the back of the audience spoke up. The internet should be turned off for the meetings, they said. "I don't want people posting in the groups, taking my words out of context," another said. During the meetings, teenagers swiped through Kwai, a Chinese-owned social network. Young boys watched videos of the Brazilian soccer star Neymar Jr. And two 15-year-old girls said they chatted with strangers on Instagram. One said she now dreamed of traveling the world, while the other wants to be a dentist in Sao Paulo. This new window to the outside world had left many in the tribe feeling torn. "Some young people maintain our traditions," said TamaSay Marubo, 42, the tribe's first woman leader. "Others just want to spend the whole afternoon on their phones."
Social Networks

Israel Reportedly Uses Fake Social Media Accounts To Influence US Lawmakers On Gaza War (nytimes.com)

An anonymous reader quotes a report from the New York Times: Israel organized and paid for an influence campaign last year targeting U.S. lawmakers and the American public with pro-Israel messaging, as it aimed to foster support for its actions in the war with Gaza, according to officials involved in the effort and documents related to the operation. The covert campaign was commissioned by Israel's Ministry of Diaspora Affairs, a government body that connects Jews around the world with the State of Israel, four Israeli officials said. The ministry allocated about $2 million to the operation and hired Stoic, a political marketing firm in Tel Aviv, to carry it out, according to the officials and the documents. The campaign began in October and remains active on the platform X. At its peak, it used hundreds of fake accounts that posed as real Americans on X, Facebook and Instagram to post pro-Israel comments. The accounts focused on U.S. lawmakers, particularly ones who are Black and Democrats, such as Representative Hakeem Jeffries, the House minority leader from New York, and Senator Raphael Warnock of Georgia, with posts urging them to continue funding Israel's military.

ChatGPT, the artificial intelligence-powered chatbot, was used to generate many of the posts. The campaign also created three fake English-language news sites featuring pro-Israel articles. The Israeli government's connection to the influence operation, which The New York Times verified with four current and former members of the Ministry of Diaspora Affairs and documents about the campaign, has not previously been reported. FakeReporter, an Israeli misinformation watchdog, identified the effort in March. Last week, Meta, which owns Facebook and Instagram, and OpenAI, which makes ChatGPT, said they had also found and disrupted the operation. The secretive campaign signals the lengths Israel was willing to go to sway American opinion on the war in Gaza.

Facebook

Meta Withheld Information on Instagram, WhatsApp Deals, FTC Says (yahoo.com)

Meta Platforms withheld information from federal regulators during their original reviews of the Instagram and WhatsApp acquisitions, the US Federal Trade Commission said in a court filing as part of a lawsuit seeking to break up the social networking giant. From a report: In its filing Tuesday, the FTC said the case involves "information Meta had in its files and did not provide" during the original reviews. "At Meta's request the FTC undertook only a limited review" of the deals, the agency said. "The FTC now has available vastly more evidence, including pre-acquisition documents Meta did not provide in 2012 and 2014."

Meta said that it met all of its legal obligations during the Instagram and WhatsApp merger reviews. The FTC has failed to provide evidence to support its claims, a spokesperson said. "The evidence instead shows that Meta faces fierce competition and that Meta's significant investment of time and resources in Instagram and WhatsApp has benefited consumers by making the apps into the services millions of users enjoy today for free," spokesperson Chris Sgro said in a statement. "The FTC has done nothing to build its case over the past four years, while Meta has invested billions to build quality products."

The Internet

Internet Addiction Alters Brain Chemistry In Young People, Study Finds (theguardian.com)

An anonymous reader quotes a report from The Guardian: Young people with internet addiction experience changes in their brain chemistry which could lead to more addictive behaviors, research suggests. The study, published in PLOS Mental Health, reviewed previous research using functional magnetic resonance imaging (fMRI) to examine how regions of the brain interact in people with internet addiction.

They found that the effects were evident throughout multiple neural networks in the brains of young people, and that there was increased activity in parts of the brain when participants were resting. At the same time, there was an overall decrease in the functional connectivity in parts of the brain involved in active thinking, which is the executive control network of the brain responsible for memory and decision-making. The research found that these changes resulted in addictive behaviors and tendencies in adolescents, as well as behavioral changes linked to mental health, development, intellectual ability and physical coordination.
"Adolescence is a crucial developmental stage during which people go through significant changes in their biology, cognition and personalities," said Max Chang, the study's lead author and an MSc student at the UCL Great Ormond Street Institute of Child Health (GOS ICH). "As a result, the brain is particularly vulnerable to internet addiction-related urges during this time, such as compulsive internet usage, cravings towards usage of the mouse or keyboard and consuming media. The findings from our study show that this can lead to potentially negative behavioral and developmental changes that could impact the lives of adolescents. For example, they may struggle to maintain relationships and social activities, lie about online activity and experience irregular eating and disrupted sleep."

Chang said he hopes the findings allow early signs of internet addiction to be treated effectively. "Clinicians could potentially prescribe treatment to aim at certain brain regions or suggest psychotherapy or family therapy targeting key symptoms of internet addiction," said Chang. "Importantly, parental education on internet addiction is another possible avenue of prevention from a public health standpoint. Parents who are aware of the early signs and onset of internet addiction will more effectively handle screen time, impulsivity, and minimize the risk factors surrounding internet addiction."
AI

ChatGPT, Claude and Perplexity All Went Down At the Same Time (techcrunch.com)

Sarah Perez reports via TechCrunch: After a multi-hour outage that took place in the early hours of the morning, OpenAI's ChatGPT chatbot went down again -- but this time, it wasn't the only AI provider affected. On Tuesday morning, both Anthropic's Claude and Perplexity began seeing issues, too, but these were more quickly resolved. Google's Gemini appears to be operating at present, though it may have also briefly gone offline, according to some user reports.

It's unusual for three major AI providers to all be down at the same time, which could signal a broader infrastructure issue or internet-scale problem, such as those that affect multiple social media sites simultaneously, for example. It's also possible that Claude and Perplexity's issues were not due to bugs or other issues, but from receiving too much traffic in a short period of time due to ChatGPT's outage.
