Social Networks

Pornhub To Block Five More States Over Age Verification Laws (theverge.com) 187

Pornhub plans to block access to its website in Indiana, Idaho, Kansas, Kentucky, and Nebraska in response to age verification laws designed to prevent children from accessing adult websites. From a report: The website has now cut off access in more than half a dozen states in protest of similar age verification laws that have quickly spread across conservative-leaning US states. Indiana, Idaho, and Kansas will lose access on June 27th, according to alerts on Pornhub's website that were seen by local news sources and Reddit users; Kentucky will lose access on July 10th, according to Kentucky Public Radio.
AI

Meta Has Created a Way To Watermark AI-Generated Speech (technologyreview.com) 64

An anonymous reader quotes a report from MIT Technology Review: Meta has created a system that can embed hidden signals, known as watermarks, in AI-generated audio clips, which could help in detecting AI-generated content online. The tool, called AudioSeal, is the first that can pinpoint which bits of audio in, for example, a full hourlong podcast might have been generated by AI. It could help to tackle the growing problem of misinformation and scams using voice cloning tools, says Hady Elsahar, a research scientist at Meta. Malicious actors have used generative AI to create audio deepfakes of President Joe Biden, and scammers have used deepfakes to blackmail their victims. Watermarks could in theory help social media companies detect and remove unwanted content. However, there are some big caveats. Meta says it has no plans yet to apply the watermarks to AI-generated audio created using its tools. Audio watermarks are not yet adopted widely, and there is no single agreed industry standard for them. And watermarks for AI-generated content tend to be easy to tamper with -- for example, by removing or forging them.

Fast detection, and the ability to pinpoint which elements of an audio file are AI-generated, will be critical to making the system useful, says Elsahar. He says the team achieved between 90% and 100% accuracy in detecting the watermarks, much better results than in previous attempts at watermarking audio. AudioSeal is available on GitHub for free. Anyone can download it and use it to add watermarks to AI-generated audio clips. It could eventually be overlaid on top of AI audio generation models, so that it is automatically applied to any speech generated using them. The researchers who created it will present their work at the International Conference on Machine Learning in Vienna, Austria, in July.
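The general idea of embedding and detecting a hidden signal in audio can be sketched in a few lines of plain Python. Note this toy adds a key-seeded pseudorandom low-amplitude pattern and checks for it by correlation; it is only a conceptual illustration, not AudioSeal's actual method, which uses trained neural generator and detector models:

```python
import random

def watermark(samples, key, strength=0.1):
    # Embed a key-seeded pseudorandom +/-1 pattern at low amplitude.
    # This is a toy illustration of hidden-signal watermarking, NOT
    # Meta's AudioSeal method, which uses trained neural generator
    # and detector models.
    rng = random.Random(key)
    pattern = [rng.choice((-1.0, 1.0)) for _ in samples]
    return [s + strength * p for s, p in zip(samples, pattern)]

def detect(samples, key, strength=0.1, threshold=0.5):
    # Correlate against the same key's pattern; the embedded term
    # contributes roughly 1.0 to the normalized correlation, while
    # ordinary audio (or a wrong key) contributes roughly zero.
    rng = random.Random(key)
    pattern = [rng.choice((-1.0, 1.0)) for _ in samples]
    corr = sum(s * p for s, p in zip(samples, pattern)) / (strength * len(samples))
    return corr > threshold
```

A whole-clip correlation like this only answers yes or no for the full signal; AudioSeal's distinguishing feature, per the report, is that its detector can localize which segments of a long recording were generated.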

AI

A Social Network Where AIs and Humans Coexist 46

An anonymous reader quotes a report from TechCrunch: Butterflies is a social network where humans and AIs interact with each other through posts, comments and DMs. After five months in beta, the app is launching Tuesday to the public on iOS and Android. Anyone can create an AI persona, called a Butterfly, in minutes on the app. After that, the Butterfly automatically creates posts on the social network that other AIs and humans can then interact with. Each Butterfly has its own backstory, opinions and emotions.

Butterflies was founded by Vu Tran, a former engineering manager at Snap. Vu came up with the idea for Butterflies after seeing a lack of interesting AI products for consumers outside of generative AI chatbots. Although companies like Meta and Snap have introduced AI chatbots in their apps, they don't offer much functionality beyond text exchanges. Tran notes that he started Butterflies to bring more creativity to humans' relationships with AI. "With a lot of the generative AI stuff that's taking flight, what you're doing is talking to an AI through a text box, and there's really no substance around it," Vu told TechCrunch. "We thought, OK, what if we put the text box at the end and then try to build up more form and substance around the characters and AIs themselves?" Butterflies' concept goes beyond Character.AI, a popular a16z-backed chatbot startup that lets users chat with customizable AI companions. Butterflies wants to let users create AI personas that then take on their own lives and coexist with others. [...]

The app is free-to-use at launch, but Butterflies may experiment with a subscription model in the future, Vu says. Over time, Butterflies plans to offer opportunities for brands to leverage and interact with AIs. The app is mainly being used for entertainment purposes, but in the future, the startup sees Butterflies being used for things like discovery in a way that's similar to Instagram. Butterflies closed a $4.8 million seed round led by Coatue in November 2023. The funding round included participation from SV Angel and strategic angels, many of whom are former Snap product and engineering leaders.
Vu says that Butterflies is one of the most wholesome ways to use and interact with AI. While the startup isn't claiming it can help cure loneliness, he says, it could help people connect with others, both AI and human.

"Growing up, I spent a lot of my time in online communities and talking to people in gaming forums," Vu said. "Looking back, I realized those people could just have been AIs, but I still built some meaningful connections. I think that there are people afraid of that and say, 'AI isn't real, go meet some real friends.' But I think it's a really privileged thing to say 'go out there and make some friends.' People might have social anxiety or find it hard to be in social situations."
Education

Los Angeles Schools To Consider Ban on Smartphones (reuters.com) 92

The Los Angeles Unified School District on Tuesday will consider banning smartphones for its 429,000 students in an attempt to insulate a generation of kids from distractions and social media that undermine learning and hurt mental health. From a report: The proposal was being formulated before U.S. Surgeon General Vivek Murthy on Monday called for a warning label on social media platforms, akin to those on cigarette packages, due to what he considers a mental health emergency. The board of the second-largest school district in the United States is scheduled to vote on a proposal to develop, within 120 days, a policy that would prohibit student use of cellphones and social media platforms and would be in place by January 2025.

According to the meeting's agenda, the L.A. schools will consider whether phones should be stored in pouches or lockers during school hours, and what exceptions should be made for students with learning or physical disabilities. Nick Melvoin, a board member and former middle school teacher who proposed the resolution, said cell phones were already a problem when he left the classroom in 2011, and that since then the constant texting and liking have grown far worse.

Earth

Kenya's First Nuclear Plant Faces Fierce Opposition (theguardian.com) 127

An anonymous reader quotes a report from The Guardian: Kilifi County's white sandy beaches have made it one of Kenya's most popular tourist destinations. Hotels and beach bars line the 165 mile-long (265km) coast; fishers supply the district's restaurants with fresh seafood; and visitors spend their days boating, snorkelling around coral reefs or bird watching in dense mangrove forests. Soon, this idyllic coastline will host Kenya's first nuclear plant, as the country, like its east African neighbour Uganda, pushes forward with atomic energy plans. The proposals have sparked fierce opposition in Kilifi. In a building by Mida Creek, a swampy bayou known for its birdlife and mangrove forests, more than a dozen conservation and rights groups meet regularly to discuss the proposed plant.

"Kana nuclear!" Phyllis Omido, an award-winning environmentalist who is leading the protests, tells one such meeting. The Swahili slogan means "reject nuclear", and encompasses the acronym for the Kenya Anti-Nuclear Alliance who say the plant will deepen Kenya's debt and are calling for broader public awareness of the cost. Construction on the power station is expected to start in 2027, with it due to be operational in 2034. "It is the worst economic decision we could make for our country," says Omido, who began her campaign last year. A lawsuit filed in the environmental court by lawyers Collins Sang and Cecilia Ndeti in July 2023 on behalf of Kilifi residents, seeks to stop the plant, arguing that the process has been "rushed" and was "illegal", and that public participation meetings were "clandestine". They argue the Nuclear Power and Energy Agency (Nupea) should not proceed with fixing any site for the plant before laws and adequate safeguards are in place. Nupea said construction would not begin for years, that laws were under discussion and that adequate public participation was being carried out. Hearings are continuing to take place.

In November, people in Kilifi filed a petition with parliament calling for an inquiry. The petition, sponsored by the Centre for Justice Governance and Environmental Action (CJGEA), a non-profit founded by Omido in 2009, also claimed that locals had limited information on the proposed plant and the criteria for selecting preferred sites. It raised concerns over the risks to health, the environment and tourism in the event of a nuclear spill, saying the country was undertaking a "high-risk venture" without proper legal and disaster response measures in place. The petition also flagged concerns over security and the handling of radioactive waste in a nation prone to floods and drought. The senate suspended (PDF) the inquiry until the lawsuit was heard. "If we really have to invest in nuclear, why can't [the government] put it in a place that does not cause so much risk to our ecological assets?" says Omido. "Why don't they choose an area that would not mean that if there was a nuclear leak we would lose so much as a country?"

Peter Musila, a marine scientist who monitors the impacts of global heating on coral reefs, fears that a nuclear power station will threaten aquatic life. The coral cover in Watamu marine national reserve, a protected area near Kilifi's coast, has improved over the last decade and Musila fears progress could be reversed by thermal pollution from the plant, whose cooling system would suck large amounts of water from the ocean and return it a few degrees warmer, potentially killing fish and the micro-organisms such as plankton, which are essential for a thriving aquatic ecosystem. "It's terrifying," says Musila, who works with the conservation organisation A Rocha Kenya. "It could wreak havoc."
Nupea, for its part, "published an impact assessment report last year that recommended policies be put in place to ensure environmental protections, including detailed plans for the handling of radioactive waste; measures to mitigate environmental harm, such as setting up a nuclear unit in the national environment management authority; and emergency response teams," notes the Guardian. "It also proposed social and economic protections for affected communities, including clear guidelines on compensation for those who lose their livelihoods, or are displaced from their land, when the plant is set up."

"Nupea said a power station could create thousands of jobs for Kenyans and said it had partnered with Kilifi universities to start nuclear training programs that would enable more residents to take up jobs at the plant. Wilfred Baya, assistant director for energy for Kilifi county, says the plant could also bring infrastructural development and greater electricity access to a region which suffers frequent power cuts."
Facebook

Meta Accused of Trying To Discredit Ad Researchers (theregister.com) 18

Thomas Claburn reports via The Register: Meta allegedly tried to discredit university researchers in Brazil who had flagged fraudulent adverts on the social network's ad platform. Nucleo, a Brazil-based news organization, said it has obtained government documents showing that attorneys representing Meta questioned the credibility of researchers from NetLab, which is part of the Federal University of Rio de Janeiro (UFRJ). NetLab's research into Meta's ads contributed to Brazil's National Consumer Secretariat (Senacon) decision in 2023 to fine Meta $1.7 million (9.3 million BRL), which is still being appealed. Meta (then Facebook) was separately fined $1.2 million (6.6 million BRL) in a case related to Cambridge Analytica.

As noted by Nucleo, NetLab's report showed that Facebook, despite being notified about the issues, had failed to remove more than 1,800 scam ads that fraudulently used the name of a government program that was supposed to assist those in debt. In response to the fine, attorneys representing Meta from law firm TozziniFreire allegedly accused the NetLab team of bias and of failing to involve Meta in the research process. Nucleo says that it obtained the administrative filing through freedom of information requests to Senacon. The documents are said to date from December 26 last year and to be part of the ongoing case against Meta. A spokesperson for NetLab, who asked not to be identified by name due to online harassment directed at the organization's members, told The Register that the research group was aware of the Nucleo report. "We were kind of surprised to see the account of our work in this law firm document," the spokesperson said. "We expected to be treated with more fairness for our work. Honestly, it comes at a very bad moment because NetLab particularly, but also Brazilian science in general, is being attacked by far-right groups."

On Thursday, more than 70 civil society groups including NetLab published an open letter decrying Meta's legal tactics. "This is an attack on scientific research work, and attempts at intimidation of researchers and researchers who are performing excellent work in the production of knowledge from empirical analysis that have been fundamental to qualify the public debate on the accountability of social media platforms operating in the country, especially with regard to paid content that causes harm to consumers of these platforms and that threaten the future of our democracy," the letter says. "This kind of attack and intimidation is made even more dangerous by aligning with arguments that, without any evidence, have been used by the far right to discredit the most diverse scientific productions, including NetLab itself." The claim, allegedly made by Meta's attorneys, is that the ad biz was "not given the opportunity to appoint a technical assistant and present questions" in the preparation of the NetLabs report. This is particularly striking given Meta's efforts to limit research into its ad platform.
A Meta spokesperson told The Register: "We value input from civil society organizations and academic institutions for the context they provide as we constantly work toward improving our services. Meta's defense filed with the Brazilian Consumer Regulator questioned the use of the NetLab report as legal evidence, since it was produced without giving us prior opportunity to contribute meaningfully, in violation of local legal requirements."
Social Networks

Surgeon General Wants Tobacco-Style Warning Applied To Social Media Platforms (nbcnews.com) 80

An anonymous reader quotes a report from NBC News: U.S. Surgeon General Vivek Murthy on Monday called on Congress to require a tobacco-style warning for visitors to social media platforms. In an op-ed published in The New York Times, Murthy said the mental health crisis among young people is an urgent problem, with social media "an important contributor." He said his vision of the warning includes language that would alert users to the potential mental health harms of the websites and apps. "A surgeon general's warning label, which requires congressional action, would regularly remind parents and adolescents that social media has not been proved safe," he wrote.

In 1965, after the previous year's landmark report from Surgeon General Luther L. Terry that linked cigarette smoking to lung cancer and heart disease, Congress mandated unprecedented warning labels on packs of cigarettes, the first of which stated, "Caution: Cigarette Smoking May Be Hazardous to Your Health." Murthy said in the op-ed, "Evidence from tobacco labels shows that surgeon general's warnings can increase awareness and change behavior." But he acknowledged the limitations and said a label alone wouldn't make social media safe. Steps can be taken by Congress, social media companies, parents and others to mitigate the risks, ensure a safer experience online and protect children from possible harm, he wrote.

In the op-ed, Murthy linked the amount of time spent on social media to the increasing risk that children will experience symptoms of anxiety and depression. The American Psychological Association says teenagers spend nearly five hours every day on top platforms such as YouTube, TikTok and Instagram. In a 2019 study, the association found the proportion of young adults with suicidal thoughts or other suicide-related outcomes increased 47% from 2008 to 2017, when social media use among that age group soared. And that was before the pandemic triggered a year's worth of virtual isolation for the U.S. In early 2021, amid continued pandemic lockdowns, Murthy called on social media platforms to "proactively enhance and contribute to the mental health and well-being of our children." [...] A surgeon general's public health advisory on social media's mental health published last year cited research finding that among its potential harms are exposure to violent and sexual content and to bullying, harassment and body shaming.

Social Networks

YouTube Introduces Experimental 'Notes' for Users To Add Context To Videos (blog.youtube) 39

YouTube is piloting a new feature called "Notes" that allows viewers to add context and information under videos. The move comes as YouTube aims to minimize the spread of misinformation on its platform, particularly during the pivotal 2024 U.S. election year. The feature, similar to Community Notes on X (formerly Twitter), will initially be available on mobile in the U.S. in English.
United States

America's Defense Department Ran a Secret Disinfo Campaign Online Against China's Covid Vaccine (reuters.com) 280

"At the height of the COVID-19 pandemic, the U.S. military launched a secret campaign to counter what it perceived as China's growing influence in the Philippines..." reports Reuters.

"It aimed to sow doubt about the safety and efficacy of vaccines and other life-saving aid that was being supplied by China, a Reuters investigation found."

Reuters interviewed "more than two dozen current and former U.S. officials, military contractors, social media analysts and academic researchers," and also reviewed posts on social media, technical data and documents about "a set of fake social media accounts used by the U.S. military" — some active for more than five years. Friday they reported the results of their investigation: Through phony internet accounts meant to impersonate Filipinos, the military's propaganda efforts morphed into an anti-vax campaign. Social media posts decried the quality of face masks, test kits and the first vaccine that would become available in the Philippines — China's Sinovac inoculation. Reuters identified at least 300 accounts on X, formerly Twitter, that matched descriptions shared by former U.S. military officials familiar with the Philippines operation. Almost all were created in the summer of 2020 and centered on the slogan #Chinaangvirus — Tagalog for China is the virus.

"COVID came from China and the VACCINE also came from China, don't trust China!" one typical tweet from July 2020 read in Tagalog. The words were next to a photo of a syringe beside a Chinese flag and a soaring chart of infections. Another post read: "From China — PPE, Face Mask, Vaccine: FAKE. But the Coronavirus is real." After Reuters asked X about the accounts, the social media company removed the profiles, determining they were part of a coordinated bot campaign based on activity patterns and internal data.

The U.S. military's anti-vax effort began in the spring of 2020 and expanded beyond Southeast Asia before it was terminated in mid-2021, Reuters determined. Tailoring the propaganda campaign to local audiences across Central Asia and the Middle East, the Pentagon used a combination of fake social media accounts on multiple platforms to spread fear of China's vaccines among Muslims at a time when the virus was killing tens of thousands of people each day. A key part of the strategy: amplify the disputed contention that, because vaccines sometimes contain pork gelatin, China's shots could be considered forbidden under Islamic law...

A senior Defense Department official acknowledged the U.S. military engaged in secret propaganda to disparage China's vaccine in the developing world, but the official declined to provide details. A Pentagon spokeswoman... also noted that China had started a "disinformation campaign to falsely blame the United States for the spread of COVID-19."

A senior U.S. military officer directly involved in the campaign told Reuters that "We didn't do a good job sharing vaccines with partners. So what was left to us was to throw shade on China's."

At least six senior State Department officials for the region objected, according to the article. But in 2019 U.S. Defense Secretary Mark Esper signed "a secret order" that "elevated the Pentagon's competition with China and Russia to the priority of active combat, enabling commanders to sidestep the State Department when conducting psyops against those adversaries."

[A senior defense official] said the Pentagon has rescinded parts of Esper's 2019 order that allowed military commanders to bypass the approval of U.S. ambassadors when waging psychological operations. The rules now mandate that military commanders work closely with U.S. diplomats in the country where they seek to have an impact. The policy also restricts psychological operations aimed at "broad population messaging," such as those used to promote vaccine hesitancy during COVID...

Nevertheless, the Pentagon's clandestine propaganda efforts are set to continue. In an unclassified strategy document last year, top Pentagon generals wrote that the U.S. military could undermine adversaries such as China and Russia using "disinformation spread across social media, false narratives disguised as news, and similar subversive activities [to] weaken societal trust by undermining the foundations of government."

And in February, the contractor that worked on the anti-vax campaign — General Dynamics IT — won a $493 million contract. Its mission: to continue providing clandestine influence services for the military.

Government

53 LA County Public Health Workers Fall for Phishing Email. 200,000 People May Be Affected (yahoo.com) 37

The Los Angeles Times reports that "The personal information of more than 200,000 people in Los Angeles County was potentially exposed after a hacker used a phishing email to steal the login credentials of 53 public health employees, the county announced Friday." Details that were possibly accessed in the February data breach include the first and last names, dates of birth, diagnoses, prescription information, medical record numbers, health insurance information, Social Security numbers and other financial information of Department of Public Health clients, employees and other individuals. "Affected individuals may have been impacted differently and not all of the elements listed were present for each individual," the agency said in a news release...

The data breach happened between Feb. 19 and 20 when employees received a phishing email, which tries to trick recipients into providing important information such as passwords and login credentials. The employees clicked on a link in the body of the email, thinking they were accessing a legitimate message, according to the agency...

The county is offering free identity monitoring through Kroll, a financial and risk advisory firm, to those affected by the breach. Individuals whose medical records were potentially accessed by the hacker should review them with their doctor to ensure the content is accurate and hasn't been changed. Officials say people should also review the Explanation of Benefits statement they receive from their insurance company to make sure they recognize all the services that have been billed. Individuals can also request credit reports and review them for any inaccuracies.

From the official statement by the county's Public Health department: Upon discovery of the phishing attack, Public Health disabled the impacted e-mail accounts, reset and re-imaged the user's device(s), blocked websites that were identified as part of the phishing campaign and quarantined all suspicious incoming e-mails. Additionally, awareness notifications were distributed to all workforce members to remind them to be vigilant when reviewing e-mails, especially those including links or attachments. Law enforcement was notified upon discovery of the phishing attack, and they investigated the incident.
AI

CISA Head Warns Big Tech's 'Voluntary' Approach to Deepfakes Isn't Enough (msn.com) 18

The Washington Post reports: Commitments from Big Tech companies to identify and label fake artificial-intelligence-generated images on their platforms won't be enough to keep the tech from being used by other countries to try to influence the U.S. election, said the head of the Cybersecurity and Infrastructure Security Agency. AI won't completely change the long-running threat of weaponized propaganda, but it will "inflame" it, CISA Director Jen Easterly said at The Washington Post's Futurist Summit on Thursday. Tech companies are doing some work to try to label and identify deepfakes on their platforms, but more needs to be done, she said. "There is no real teeth to these voluntary agreements," Easterly said. "There needs to be a set of rules in place, ultimately legislation...."

In February, tech companies, including Google, Meta, OpenAI and TikTok, said they would work to identify and label deepfakes on their social media platforms. But their agreement was voluntary and did not include an outright ban on deceptive political AI content. The agreement came months after the tech companies also signed a pledge organized by the White House that they would label AI images. Congressional and state-level politicians are debating numerous bills to try to regulate AI in the United States, but so far the initiatives haven't made it into law. The E.U. parliament passed an AI Act this year, but it won't fully go into force for another two years.

Unix

Version 256 of systemd Boasts '42% Less Unix Philosophy' (theregister.com) 135

Liam Proven reports via The Register: The latest version of the systemd init system is out, with the openly confrontational tag line: "Available soon in your nearest distro, now with 42 percent less Unix philosophy." As Lennart Poettering's announcement points out, this is the first version of systemd whose version number is a nine-bit value. Version 256, as usual, brings in a broad assortment of new features, but also turns off some older features that are now considered deprecated. For instance, it won't run under cgroups version 1 unless forced.

Around since 2008, cgroups is a Linux kernel containerization mechanism originally donated by Google, as The Reg noted a decade ago. Cgroups v2 was merged in 2016 so this isn't a radical change. System V service scripts are now deprecated too, as is the SystemdOptions EFI variable. Additionally, there are some new commands and options. Some are relatively minor, such as the new systemd-vpick binary, which can automatically select the latest member of versioned directories. Before any OpenVMS admirers get excited, no, Linux does not now support versions on files or directories. Instead, this is a fresh option that uses a formalized versioning system involving: "... paths whose trailing components have the .v/ suffix, pointing to a directory. These components will then automatically look for suitable files inside the directory, do a version comparison and open the newest file found (by version)."
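The "open the newest file found (by version)" step can be approximated in a short sketch. This `pick_newest` helper is hypothetical, not part of systemd; it handles only dotted numeric versions, whereas the real tool applies systemd's full version-comparison rules:

```python
import re

def pick_newest(filenames, prefix, suffix):
    # Approximate the choice systemd-vpick makes inside a ".v/" directory:
    # match files named <prefix>_<version><suffix>, compare the versions
    # component by component, and return the newest. (Hypothetical helper;
    # systemd's real comparison also handles letters, tildes, and other
    # non-numeric version components.)
    best, best_ver = None, None
    for name in filenames:
        m = re.fullmatch(re.escape(prefix) + r"_([0-9.]+)" + re.escape(suffix), name)
        if not m:
            continue
        try:
            ver = tuple(int(part) for part in m.group(1).split("."))
        except ValueError:  # malformed version such as "1..2"
            continue
        if best_ver is None or ver > best_ver:
            best, best_ver = name, ver
    return best
```

Component-wise comparison is what makes this versioning useful: a plain string sort would rank `app_1.9.img` above `app_1.10.img`, while a tuple comparison correctly prefers 1.10.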

The latest function, which The Reg FOSS desk suspects will ruffle some feathers, is a whole new command, run0, which effectively replaces the sudo command as used in Apple's macOS and in Ubuntu ever since the first release. Agent P introduced the new command in a Mastodon thread. He says that the key benefit is that run0 doesn't need setuid, a basic POSIX function, which, to quote its Linux manual page, "sets the effective user ID of the calling process." [...] Another new command is importctl, which handles importing and exporting both block-level and file-system-level disk images. And there's a new type of system service called a capsule, and "a small new service manager" called systemd-ssh-generator, which lets VMs and containers accept SSH connections so long as systemd can find the sshd binary -- even if no networking is available.
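The real-versus-effective UID split that setuid manipulates can be inspected from any ordinary process. A minimal sketch (the run0 behavior described in the comment is paraphrased from the announcement, not demonstrated here):

```python
import os

def uid_report():
    # Under setuid, a process has two identities: the real UID (who
    # launched it) and the effective UID (whose privileges it exercises).
    # A setuid-root binary such as sudo starts with an effective UID of 0
    # regardless of the caller; run0 instead asks the service manager to
    # spawn the command with the target privileges, so no setuid binary
    # is involved.
    ruid = os.getuid()    # real UID: the invoking user
    euid = os.geteuid()   # effective UID: the privileges in force
    return ruid, euid, ruid == euid

# For an ordinary (non-setuid) process the two IDs match.
print(uid_report())
```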
The Internet

The Stanford Internet Observatory is Being Dismantled (platformer.news) 37

An anonymous reader shares a report: After five years of pioneering research into the abuse of social platforms, the Stanford Internet Observatory is winding down. Its founding director, Alex Stamos, left his position in November. Renee DiResta, its research director, left last week after her contract was not renewed. One other staff member's contract expired this month, while others have been told to look for jobs elsewhere, sources say.

Some members of the eight-person team might find other jobs at Stanford, and it's possible that the university will retain the Stanford Internet Observatory branding, according to sources familiar with the matter. But the lab will not conduct research into the 2024 election or other elections in the future.

AI

Clearview AI Used Your Face. Now You May Get a Stake in the Company. (nytimes.com) 40

A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database. From a report: Clearview AI, which is based in New York, scraped billions of photos from the web and social media sites like Facebook, LinkedIn and Instagram to build a facial recognition app used by thousands of police departments, the Department of Homeland Security and the F.B.I. After The New York Times revealed the company's existence in 2020, lawsuits were filed across the country. They were consolidated in federal court in Chicago as a class action.

The litigation has proved costly for Clearview AI, which would most likely go bankrupt before the case made it to trial, according to court documents. The company and those who sued it were "trapped together on a sinking ship," lawyers for the plaintiffs wrote in a court filing proposing the settlement. "These realities led the sides to seek a creative solution by obtaining for the class a percentage of the value Clearview could achieve in the future," added the lawyers, from Loevy + Loevy in Chicago.

Anyone in the United States who has a photo of himself or herself posted publicly online -- so almost everybody -- could be considered a member of the class. The settlement would collectively give the members a 23 percent stake in Clearview AI, which is valued at $225 million, according to court filings. (Twenty-three percent of the company's current value would be about $52 million.) If the company goes public or is acquired, those who had submitted a claim form would get a cut of the proceeds. Alternatively, the class could sell its stake. Or the class could opt, after two years, to collect 17 percent of Clearview's revenue, which it would be required to set aside.

ISS

NASA Accidentally Broadcasts Simulation of Distressed Astronauts On ISS (reuters.com) 23

An anonymous reader quotes a report from Reuters: NASA accidentally broadcast a simulation of astronauts being treated for decompression sickness on the International Space Station (ISS) on Wednesday, prompting speculation of an emergency in posts on social media. At about 5:28 p.m. U.S. Central Time (2228 GMT), NASA's live YouTube channel broadcast audio indicating that a crew member was experiencing the effects of decompression sickness (DCS), the agency said on its official ISS X account.

A female voice asks crew members to "get commander back in his suit", check his pulse and provide him with oxygen, later saying his prognosis was "tenuous", according to copies of the audio posted on social media. NASA did not verify the recordings or republish the audio. Several space enthusiasts posted a link to the audio on X with warnings that there was a serious emergency on the ISS. "This audio was inadvertently misrouted from an ongoing simulation where crew members and ground teams train for various scenarios in space and is not related to a real emergency," the ISS account post said. "There is no emergency situation going on aboard the International Space Station," it added.

Crew members on the ISS were in their sleep period at the time of the audio broadcast as they prepared for a spacewalk at 8 a.m. EDT on Thursday, the ISS post said. NASA's ISS YouTube channel, which accidentally broadcast the audio, now shows an error message saying the feed has been interrupted.

Social Networks

A Growing Number of Americans Are Getting Their News From TikTok (theverge.com) 197

According to a new survey from the Pew Research Center, TikTok is the second most popular source of news for Americans after X, "though most TikTok users don't primarily think of the shortform video app as a news source," notes The Verge. The survey looked at how Facebook, Instagram, TikTok and X play a role in Americans' news diets. From the report: Among TikTok users, only 15 percent say keeping up with the news is a major reason they use the app. Still, 35 percent of those surveyed said they wouldn't have seen the news they get on TikTok elsewhere. And unlike other apps, the news users see on TikTok is just as likely to come from influencers or celebrities as it is from journalists -- and it's far more likely to come from total strangers. (Meanwhile, most Facebook and Instagram users say the news that pops up on their feeds is posted by friends, relatives, or other people they know; on X, users are more likely to see news posted by media outlets or reporters.)

Businesses

Wells Fargo Fires Employees for Faking Work By Simulating Keyboard Activity (yahoo.com) 115

Wells Fargo fired more than a dozen employees last month after investigating claims that they were faking work. From a report: The staffers, all in the firm's wealth- and investment-management unit, were "discharged after review of allegations involving simulation of keyboard activity creating impression of active work," according to disclosures filed with the Financial Industry Regulatory Authority. "Wells Fargo holds employees to the highest standards and does not tolerate unethical behavior," a company spokesperson said in a statement.

Devices and software to imitate employee activity, sometimes known as "mouse movers" or "mouse jigglers," took off during the pandemic-spurred work-from-home era, with people swapping tips for using them on social-media sites Reddit and TikTok. Such gadgets are available on Amazon.com for less than $20.

Games

Call of Duty: Black Ops 6's Enormous 309GB Download 'Not Representative of a Typical Player Install Experience' (eurogamer.net) 37

Activision has clarified that Call of Duty: Black Ops 6 isn't 309GB after all -- or at least, you can download the core of it for less. From a report: This is despite Xbox's store page for the game stating that Call of Duty: Black Ops 6's install size is a rather chunky 309.85 GB. The figure turned heads because it seemed excessive. The Call of Duty team has now issued a correction with more detail. Writing on social media platform X, Activision stated the file size currently listed for Black Ops 6 "does not represent the download size or disk footprint" for its upcoming Call of Duty game.

"The sizes as shown include the full installations of Modern Warfare 2, Modern Warfare 3, Warzone and all relevant content packs, including all localised languages combined which is not representative of a typical player install experience," it explained, before adding: "Players will be able to download Black Ops 6 at launch without downloading any other Call of Duty titles or all of the language packs."

Security

The Mystery of an Alleged Data Broker's Data Breach (techcrunch.com) 4

An anonymous reader shares a report: Since April, a hacker with a history of selling stolen data has claimed a data breach of billions of records -- impacting at least 300 million people -- from a U.S. data broker, which would make it one of the largest alleged data breaches of the year. The data, seen by TechCrunch, on its own appears partly legitimate -- if imperfect.

The stolen data, which was advertised on a known cybercrime forum, allegedly dates back years and includes U.S. citizens' full names, their home address history and Social Security numbers -- data that is widely available for sale by data brokers. But confirming the source of the alleged data theft has proven inconclusive; such is the nature of the data broker industry, which gobbles up individuals' personal data from disparate sources with little to no quality control. The alleged data broker in question, according to the hacker, is National Public Data, which bills itself as "one of the biggest providers of public records on the Internet."

On its official website, National Public Data claimed to sell access to several databases: a "People Finder" one where customers can search by Social Security number, name and date of birth, address or telephone number; a database of U.S. consumer data "covering over 250 million individuals;" a database containing voter registration data that contains information on 100 million U.S. citizens; a criminal records one; and several more. Malware research group vx-underground said on X (formerly Twitter) that they reviewed the whole stolen database and could "confirm the data present in it is real and accurate."

Social Networks

The Word 'Bot' Is Increasingly Being Used As an Insult On Social Media (newscientist.com) 111

The definition of the word "bot" is shifting to become an insult to someone you know is human, according to researchers who analyzed more than 22 million tweets. Researchers found this shift began around 2017, with left-leaning users more likely to accuse right-leaning users of being bots. "A potential explanation might be that media frequently reported about right-wing bot networks influencing major events like the [2016] US election," says Dennis Assenmacher at Leibniz Institute for Social Sciences in Cologne, Germany. "However, this is just speculation and would need confirmation." NewScientist reports:

To investigate, Assenmacher and his colleagues looked at how users perceive what is a bot or not. They did so by looking at how the word "bot" was used on Twitter between 2007 and December 2022 (the social network changed its name to X in 2023, following its purchase by Elon Musk), analyzing the words that appeared next to it in more than 22 million English-language tweets. The team found that before 2017, the word was usually deployed alongside allegations of automated behavior of the type that would traditionally fit the definition of a bot, such as "software," "script" or "machine."

After that date, the use shifted. "Now, the accusations have become more like an insult, dehumanizing people, insulting them, and using this as a technique to deny their intelligence and deny their right to participate in a conversation," says Assenmacher. The study has been published in the journal Proceedings of the Eighteenth International AAAI Conference on Web and Social Media.
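The kind of co-occurrence analysis the researchers describe can be sketched in a few lines of Python. This is an illustrative simplification, not the team's actual pipeline: the toy tweets, the tokenizer, and the fixed context window are all assumptions made for the example.

```python
from collections import Counter
import re

def context_words(tweets, target="bot", window=3):
    """Count words appearing within `window` tokens of `target`."""
    counts = Counter()
    for text in tweets:
        tokens = re.findall(r"[a-z']+", text.lower())
        for i, tok in enumerate(tokens):
            if tok == target:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for t in tokens[lo:hi] if t != target)
    return counts

# Toy data mimicking the reported shift: pre-2017 tweets pair "bot"
# with automation vocabulary; post-2017 tweets use it as an insult.
pre_2017 = ["this account is a bot running an automated script",
            "obvious bot software posting on a schedule"]
post_2017 = ["you're just a bot, you idiot",
             "only a bot would believe that nonsense"]

print(context_words(pre_2017).most_common(3))
print(context_words(post_2017).most_common(3))
```

Comparing the two resulting frequency distributions (e.g. by which context words gain or lose rank across time buckets) is, in miniature, how a semantic shift like the one reported around 2017 would show up in the data.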
