AI

US Police Seldom Disclose Use of AI-Powered Facial Recognition, Investigation Finds (msn.com) 63

An anonymous reader shared this report from the Washington Post: Hundreds of Americans have been arrested after being connected to a crime by facial recognition software, a Washington Post investigation has found, but many never know it because police seldom disclose their use of the controversial technology...

In fact, the records show that officers often obscured their reliance on the software in public-facing reports, saying that they identified suspects "through investigative means" or that a human source such as a witness or police officer made the initial identification... The Coral Springs Police Department in South Florida instructs officers not to reveal the use of facial recognition in written reports, according to operations deputy chief Ryan Gallagher. He said investigative techniques are exempt from Florida's public disclosure laws... The department would disclose the source of the investigative lead if it were asked in a criminal proceeding, Gallagher added....

Prosecutors are required to inform defendants about any information that would help prove their innocence, reduce their sentence or hurt the credibility of a witness testifying against them. When prosecutors fail to disclose such information — known as a "Brady violation" after the 1963 Supreme Court ruling that mandates it — the court can declare a mistrial, overturn a conviction or even sanction the prosecutor. No federal laws regulate facial recognition and courts do not agree whether AI identifications are subject to Brady rules. Some states and cities have begun mandating greater transparency around the technology, but even in these locations, the technology is either not being used that often or it's not being disclosed, according to interviews and public records requests...

Over the past four years, the Miami Police Department ran 2,500 facial recognition searches in investigations that led to at least 186 arrests and more than 50 convictions. Among the arrestees, just 1 in 16 were told about the technology's use — less than 7 percent — according to a review by The Post of public reports and interviews with some arrestees and their lawyers. The police department said that in some of those cases the technology was used for purposes other than identification, such as finding a suspect's social media feeds, but did not indicate in how many of the cases that happened. Carlos J. Martinez, the county's chief public defender, said he had no idea how many of his Miami clients were identified with facial recognition until The Post presented him with a list. "One of the basic tenets of our justice system is due process, is knowing what evidence there is against you and being able to challenge the evidence that's against you," Martinez said. "When that's kept from you, that is an all-powerful government that can trample all over us."

After reviewing The Post's findings, Miami police and local prosecutors announced plans to revise their policies to require clearer disclosure in every case involving facial recognition.

The article points out that Miami's Assistant Police Chief actually told a congressional panel on law enforcement AI use that his department is "the first to be completely transparent about" the use of facial recognition. (When confronted with the Washington Post's findings, he "acknowledged that officers may not have always informed local prosecutors [and] said the department would give prosecutors all information on the use of facial recognition, in past and future cases.")

He told the Post that the department would "begin training officers to always disclose the use of facial recognition in incident reports." But he also said they would "leave it up to prosecutors to decide what to disclose to defendants."
Android

Google Starts Adding Anti-Theft Locking Features to Android Phones (engadget.com) 81

An anonymous reader shared this report from Engadget: Three new theft protection features that Google announced earlier this year have reportedly started rolling out on Android. The tools — Theft Detection Lock, Offline Device Lock and Remote Lock — are aimed at giving users a way to quickly lock down their devices if they've been swiped, so thieves can't access any sensitive information. Android reporter Mishaal Rahman shared on social media that the first two tools had popped up on a Xiaomi 14T Pro, and said some Pixel users have started seeing Remote Lock.

Theft Detection Lock is triggered by the literal act of snatching. The company said in May that the feature "uses Google AI to sense if someone snatches your phone from your hand and tries to run, bike or drive away." In such a scenario, it'll lock the phone's screen.

The Android reporter summarized the other two locking features in a post on Reddit:
  • Remote Lock "lets you remotely lock your phone using just your phone number in case you can't sign into Find My Device using your Google account password."
  • Offline Device Lock "automatically locks your screen if a thief tries to keep your phone disconnected from the Internet for an extended period of time."

"All three features entered beta in August, starting in Brazil. Google told me the final versions of these features would more widely roll out this year, and it seems the features have begun expanding."


Twitter

Brazil's Top Court Says X Paid Pending Fines to Wrong Bank (reuters.com) 83

An anonymous reader shared this report from Reuters: Brazil's Supreme Court said on Friday that lawyers representing social media platform X did not pay pending fines to the proper bank, postponing its decision on whether to allow the tech firm to resume services in Brazil.

The payment of the fines, which X lawyers argued that the company had paid correctly, is the only outstanding measure demanded by the court in order to authorize X to operate again in Brazil... Earlier on Friday, X, owned by billionaire Elon Musk, filed a fresh request to have its services restored in Brazil, saying it had paid all pending fines. In response to the request, Supreme Court Justice Alexandre de Moraes ordered the payment transferred to the correct bank. He also determined that once the fines are sorted out, Brazil's prosecutor general will give his opinion on the recent requests made by X's legal team in Brazil, which has been seeking to have the platform restored in the country.

Following Moraes' decision on Friday, X lawyers again asked the court for authorization to resume operations in Brazil, denying that the company had paid the fines to the wrong account and saying they do not see the need for the prosecutor general to be consulted before the ban is lifted.

IOS

iOS and Android Security Scare: Two Apps Found Supporting 'Pig Butchering' Scheme (forbes.com) 31

"Pig Butchering Alert: Fraudulent Trading App targeted iOS and Android users."

That's the title of a new report released this week by cybersecurity company Group-IB revealing the official Apple App Store and Google Play store offered apps that were actually one part of a larger fraud campaign. "To complete the scam, the victim is asked to fund their account... After a few seemingly successful trades, the victim is persuaded to invest more and more money. The account balance appears to grow rapidly. However, when the victim attempts to withdraw funds, they are unable to do so."

Forbes reports: Group-IB determined that the frauds would begin with a period of social engineering reconnaissance and entrapment, during which the trust of the potential victim was gained through either a dating app, social media app or even a cold call. The attackers spent weeks on each target. Only when this "fattening up" process had reached a certain point would the fraudsters make their next move: recommending they download the trading app from the official App Store concerned.

When it comes to the iOS app, which is the one that the report focused on, Group-IB researchers said that the app remained on the App Store for several weeks before being removed, at which point the fraudsters switched to phishing websites to distribute both iOS and Android apps. The use of official app stores, albeit only fleetingly as Apple and Google removed the fake apps in due course, lent a sense of authenticity to the operation, as people trust both the Apple and Google ecosystems to protect them from potentially dangerous apps.

"The use of web-based applications further conceals the malicious activity," according to the researchers, "and makes detection more difficult." [A]fter the download is complete, the application cannot be launched immediately. The victim is then instructed by the cybercriminals to manually trust the Enterprise developer profile. Once this step is completed, the fraudulent application becomes operational... Once a user registers with the fraudulent application, they are tricked into completing several steps. First, they are asked to upload identification documents, such as an ID card or passport. Next, the user is asked to provide personal information, followed by job-related details...

The first discovered application, distributed through the Apple App Store, functions as a downloader, merely retrieving and displaying a web-app URL. In contrast, the second application, downloaded from phishing websites, already contains the web-app within its assets. We believe this approach was deliberate, since the first app was available in the official store, and the cybercriminals likely sought to minimise the risk of detection. As previously noted, the app posed as a tool for mathematical formulas, and including personal trading accounts within an iOS app would have raised immediate suspicion.

The app (which only runs on mobile phones) first launches a fake activity with formulas and graphics, according to the researchers. "We assume that this condition must bypass Apple's checks before being published to the store. As we can see, this simple trick allows cybercriminals to upload their fraudulent application to the Apple Store." They argue their research "reinforces the need for continued review of app store submissions to prevent such scams from reaching unsuspecting victims". But it also highlights "the importance of vigilance and end-user education, even when dealing with seemingly trustworthy apps..."

"Our investigation began with an analysis of Android applications at the request of our client. The client reported that a user had been tricked into installing the application as part of a stock investment scam. During our research, we uncovered a list of similar fraudulent applications, one of which was available on the Google Play Store. These apps were designed to display stock-related news and articles, giving them a false sense of legitimacy."
GNU is Not Unix

Free Software Foundation Celebrates 39th Anniversary (fsf.org) 16

"Can you believe that we've been demanding user freedom since 1985?" asks a new blog post at FSF.org: Today, we're celebrating our thirty-ninth anniversary, the "lace year," which represents the intertwined nature and strength of our relationship with the free software community. We wouldn't be here without you, and we are so grateful for everyone who has stood with us, advocating for a world where complete user freedom is the norm and not the exception.

As we celebrate our anniversary and reflect on the past thirty-nine years, we feel inspired by how far we've come, not only as a movement but as an organization, and the changes that we've gone through. While we inevitably have challenges ahead, we feel encouraged and eager to take them on knowing that you'll be right there with us, working for a free future for everyone. Here's to many more years of fighting for user freedom!

Their suggestions for celebrating include:
  • Take a small step with big impact and swap out one nonfree program with one that's truly free
  • If you have an Android phone, download F-Droid, which is a catalogue of hundreds of free software applications
  • Donate $39 to help support free software advocacy

And to help with the celebrations they share a free video teaching the basics of SuperCollider (the free and open source audio synthesis/algorithmic composition software). The video appears on FramaTube, an instance of the decentralized (and ActivityPub-federated) Peertube video platform, supported by the French non-profit Framasoft and powered by WebTorrent, using peer-to-peer technology to reduce load on individual servers.


Privacy

A Quarter Million Comcast Subscribers Had Data Stolen From Debt Collector (theregister.com) 38

An anonymous reader quotes a report from The Register: Comcast says data on 237,703 of its customers was in fact stolen in a cyberattack on a debt collector it was using, contrary to previous assurances it was given that it was unaffected by that intrusion. That collections agency, Financial Business and Consumer Solutions aka FBCS, was compromised in February, and according to a filing with Maine's attorney general, the firm informed the US cable giant about the unauthorized access in March. At the time, FBCS told the internet'n'telly provider that no Comcast customer information was affected. However, that changed in July, when the collections outfit got in touch again to say that, actually, the Comcast subscriber data it held had been pilfered.

Among the data types stolen were names, addresses, Social Security numbers, dates of birth, and the Comcast account numbers and ID numbers used internally at FBCS. The data pertains to those registered as customers at "around 2021." Comcast stopped using FBCS for debt collection services in 2020. Comcast made it clear its own systems, including those of its broadband unit Xfinity, were not broken into, unlike that time in 2023. FBCS earlier said more than 4 million people had their records accessed during that February break-in. As far as we're aware, the agency hasn't said publicly exactly how that network intrusion went down. Now Comcast is informing subscribers that their info was taken in that security breach, and in doing so seems to be the first to say the intrusion was a ransomware attack. [...]

FBCS's official statement only attributes the attack to an "unauthorized actor." It does not mention ransomware, nor many other technical details aside from the data types involved in the theft. No ransomware group we're aware of has ever claimed responsibility for the raid on FBCS. When we asked Comcast about the ransomware, it simply referred us back to the customer notification letter. The cableco used that notification to send another small middle finger FBCS's way, slyly revealing that the agency's financial situation prevents it from offering the usual identity and credit monitoring protection for those affected, so Comcast is having to foot the bill itself.

EU

Meta Faces Data Retention Limits On Its EU Ad Business After Top Court Ruling (techcrunch.com) 35

An anonymous reader quotes a report from TechCrunch: The European Union's top court has sided with a privacy challenge to Meta's data retention policies. It ruled on Friday that social networks, such as Facebook, cannot keep using people's information for ad targeting indefinitely. The judgement could have major implications on the way Meta and other ad-funded social networks operate in the region. Limits on how long personal data can be kept must be applied in order to comply with data minimization principles contained in the bloc's General Data Protection Regulation (GDPR). Breaches of the regime can lead to fines of up to 4% of global annual turnover -- which, in Meta's case, could put it on the hook for billions more in penalties (NB: it is already at the top of the leaderboard of Big Tech GDPR breachers). [...]

The original challenge to Meta's ad business dates back to 2014 but was not fully heard in Austria until 2020, per noyb. The Austrian supreme court then referred several legal questions to the CJEU in 2021. Some were answered via a separate challenge to Meta/Facebook, in a July 2023 CJEU ruling -- which struck down the company's ability to claim a "legitimate interest" to process people's data for ads. The remaining two questions have now been dealt with by the CJEU. And it's more bad news for Meta's surveillance-based ad business. Limits do apply. Summarizing this component of the judgement in a press release, the CJEU wrote: "An online social network such as Facebook cannot use all of the personal data obtained for the purposes of targeted advertising, without restriction as to time and without distinction as to type of data."

The ruling looks important on account of how ad businesses such as Meta's function. Crudely put, the more of your data they can grab, the better, as far as they are concerned. Back in 2022, an internal memo penned by Meta engineers and obtained by Vice's Motherboard likened the company's data collection practices to tipping bottles of ink into a vast lake, suggesting its aggregation of personal data lacked controls and did not lend itself to siloing different types of data or applying retention limits. Meta claimed at the time that the document "does not describe our extensive processes and controls to comply with privacy regulations." How exactly the adtech giant will need to amend its data retention practices following the CJEU ruling remains to be seen, but the law is clear that it must have limits. "[Advertising] companies must develop data management protocols to gradually delete unneeded data or stop using them," noyb suggests.
The court also weighed in on a second question concerning sensitive data that has been "manifestly made public" by the data subject, "and whether sensitive characteristics could be used for ad targeting because of that," reports TechCrunch. "The court ruled that it could not, maintaining the GDPR's purpose limitation principle."
AI

Meta's New 'Movie Gen' AI System Can Deepfake Video From a Single Photo (arstechnica.com) 38

An anonymous reader quotes a report from Ars Technica: On Friday, Meta announced a preview of Movie Gen, a new suite of AI models designed to create and manipulate video, audio, and images, including creating a realistic video from a single photo of a person. The company claims the models outperform other video-synthesis models when evaluated by humans, pushing us closer to a future where anyone can synthesize a full video of any subject on demand. The company has not yet said when or how it will release these capabilities to the public, but Meta says Movie Gen is a tool that may allow people to "enhance their inherent creativity" rather than replace human artists and animators. The company envisions future applications such as easily creating and editing "day in the life" videos for social media platforms or generating personalized animated birthday greetings.

Movie Gen builds on Meta's previous work in video synthesis, following 2022's Make-A-Scene video generator and the Emu image-synthesis model. Using text prompts for guidance, this latest system can generate custom videos with sounds for the first time, edit and insert changes into existing videos, and transform images of people into realistic personalized videos. [...] Movie Gen's video-generation model can create 1080p high-definition videos up to 16 seconds long at 16 frames per second from text descriptions or an image input. Meta claims the model can handle complex concepts like object motion, subject-object interactions, and camera movements.
You can view example videos here. Meta also released a research paper with more technical information about the model.

As for the training data, the company says it trained these models on a combination of "licensed and publicly available datasets." Ars notes that this "very likely includes videos uploaded by Facebook and Instagram users over the years, although this is speculation based on Meta's current policies and previous behavior."
AI

AI Agent Promotes Itself To Sysadmin, Trashes Boot Sequence 86

The Register's Thomas Claburn reports: Buck Shlegeris, CEO at Redwood Research, a nonprofit that explores the risks posed by AI, recently learned an amusing but hard lesson in automation when he asked his LLM-powered agent to open a secure connection from his laptop to his desktop machine. "I expected the model would scan the network and find the desktop computer, then stop," Shlegeris explained to The Register via email. "I was surprised that after it found the computer, it decided to continue taking actions, first examining the system and then deciding to do a software update, which it then botched." Shlegeris documented the incident in a social media post.

He created his AI agent himself. It's a Python wrapper consisting of a few hundred lines of code that allows Anthropic's powerful large language model Claude to generate some commands to run in bash based on an input prompt, run those commands on Shlegeris' laptop, and then access, analyze, and act on the output with more commands. Shlegeris directed his AI agent to try to SSH from his laptop to his desktop Ubuntu Linux machine, without knowing the IP address [...]. As a log of the incident indicates, the agent tried to open an SSH connection, and failed. So Shlegeris tried to correct the bot. [...]
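The architecture described, an LLM proposing shell commands and a thin wrapper executing them and feeding the output back, can be sketched in a few lines of Python. This is a hypothetical illustration of the pattern, not Shlegeris's code: the model call is stubbed out as a plain function (`ask_model`) where a real agent would call an LLM API, and the `DONE` stop token is an assumption for the sketch.

```python
import subprocess

def run_bash(command: str, timeout: int = 60) -> str:
    """Run a shell command and capture combined output (the agent's 'eyes')."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return result.stdout + result.stderr

def agent_loop(ask_model, task: str, max_steps: int = 10) -> list[str]:
    """Give the model a task, run each command it proposes, and feed the
    output back into the prompt until it signals DONE or steps run out."""
    transcript = [f"TASK: {task}"]
    for _ in range(max_steps):
        # The model sees the whole transcript so far and replies with
        # either a single bash command or the stop token "DONE".
        command = ask_model("\n".join(transcript))
        if command.strip() == "DONE":
            break
        transcript.append(f"$ {command}")
        transcript.append(run_bash(command))
    return transcript
```

The incident is a direct consequence of this design: nothing in the loop distinguishes "scan the network" from "upgrade the kernel and edit Grub", so the agent keeps acting until the model decides to stop.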

The AI agent responded it needed to know the IP address of the device, so it then turned to the network mapping tool nmap on the laptop to find the desktop box. Unable to identify devices running SSH servers on the network, the bot tried other commands such as "arp" and "ping" before finally establishing an SSH connection. No password was needed due to the use of SSH keys; the user buck was also a sudoer, granting the bot full access to the system. Shlegeris's AI agent, once it was able to establish a secure shell connection to the Linux desktop, then decided to play sysadmin and install a series of updates using the package manager Apt. Then things went off the rails.

"It looked around at the system info, decided to upgrade a bunch of stuff including the Linux kernel, got impatient with Apt and so investigated why it was taking so long, then eventually the update succeeded but the machine doesn't have the new kernel so edited my Grub [bootloader] config," Buck explained in his post. "At this point I was amused enough to just let it continue. Unfortunately, the computer no longer boots." Indeed, the bot got as far as messing up the boot configuration, so that following a reboot by the agent for updates and changes to take effect, the desktop machine wouldn't successfully start.
The Courts

Judge Blocks California's New AI Law In Case Over Kamala Harris Deepfake (techcrunch.com) 128

An anonymous reader quotes a report from TechCrunch: A federal judge blocked one of California's new AI laws on Wednesday, less than two weeks after it was signed by Governor Gavin Newsom. Shortly after signing AB 2839, Newsom suggested it could be used to force Elon Musk to take down an AI deepfake of Vice President Kamala Harris he had reposted (sparking a petty online battle between the two). However, a California judge just ruled the state can't force people to take down election deepfakes -- not yet, at least. AB 2839 targets the distributors of AI deepfakes on social media, specifically if their post resembles a political candidate and the poster knows it's a fake that may confuse voters. The law is unique because it does not go after the platforms on which AI deepfakes appear, but rather those who spread them. AB 2839 empowers California judges to order the posters of AI deepfakes to take them down or potentially face monetary penalties.

Perhaps unsurprisingly, the original poster of that AI deepfake -- an X user named Christopher Kohls -- filed a lawsuit to block California's new law as unconstitutional just a day after it was signed. Kohls' lawyer wrote in a complaint that the deepfake of Kamala Harris is satire that should be protected by the First Amendment. On Wednesday, United States district judge John Mendez sided with Kohls. Mendez ordered a preliminary injunction to temporarily block California's attorney general from enforcing the new law against Kohls or anyone else, with the exception of audio messages that fall under AB 2839. [...] In essence, he ruled the law is simply too broad as written and could result in serious overstepping by state authorities into what speech is permitted or not.

Social Networks

Social Media Sanctions Hit Conservatives More, But Due to Content Sharing, Study Says (nature.com) 217

A study published in Nature has found that conservative social media users were more likely to face sanctions, but attributes this to their higher propensity to share low-quality news rather than political bias. Researchers analyzed 9,000 Twitter users during the 2020 U.S. election, finding pro-Trump users were 4.4 times more likely to be suspended than pro-Biden users.

However, they also shared significantly more links from sites rated as untrustworthy by both politically balanced groups and Republican-only panels. Similar patterns were observed across multiple datasets spanning 16 countries from 2016 to 2023. The study concludes that asymmetric enforcement can result from neutral policies when behavior differs between groups.
The Courts

NSO Should Lose Spyware Case for Discovery Violations, Meta Says (bloomberglaw.com) 10

WhatsApp and its parent Meta asked a judge to award them a total win against spyware maker NSO Group as punishment for discovery violations in a years-long case accusing the Israeli company of violating anti-hacking laws. From a report: NSO Group violated the Federal Rules of Civil Procedure, repeatedly ignoring the court's orders and its discovery obligations, according to a motion for sanctions filed Wednesday in the US District Court for the Northern District of California. "NSO's discovery violations were willful, and unfairly skew the record on virtually every key issue in the case, from the merits, to jurisdiction, to damages, making a full and fair trial on the facts impossible," they said. Judge Phyllis J. Hamilton should award the companies judgment as a matter of law or, "if the court finds that the limited discovery produced in this case does not suffice," enter default judgment against NSO, WhatsApp and Meta wrote.

The social media platforms first filed their complaint in October 2019, accusing NSO of using WhatsApp to install NSO spyware on the phones of about 1,400 WhatsApp users.
The move follows Apple asking a court last month to dismiss its three-year-old hacking lawsuit against spyware pioneer NSO Group, arguing that it might never be able to get the most critical files about NSO's Pegasus surveillance tool and that its own disclosures could aid NSO and its increasing number of rivals.
Security

Attackers Exploit Critical Zimbra Vulnerability Using CC'd Email Addresses (arstechnica.com) 6

An anonymous reader quotes a report from Ars Technica: Attackers are actively exploiting a critical vulnerability in mail servers sold by Zimbra in an attempt to remotely execute malicious commands that install a backdoor, researchers warn. The vulnerability, tracked as CVE-2024-45519, resides in the Zimbra email and collaboration server used by medium and large organizations. When an admin manually changes default settings to enable the postjournal service, attackers can execute commands by sending maliciously formed emails to an address hosted on the server. Zimbra recently patched the vulnerability. All Zimbra users should install the patch or, at a minimum, ensure that postjournal is disabled.

On Tuesday, security researcher Ivan Kwiatkowski first reported the in-the-wild attacks, which he described as "mass exploitation." He said the malicious emails were sent by the IP address 79.124.49[.]86 and, when successful, attempted to run a file hosted there using the tool known as curl. Researchers from security firm Proofpoint took to social media later that day to confirm the report. On Wednesday, security researchers provided additional details that suggested the damage from ongoing exploitation was likely to be contained. As already noted, they said, a default setting must be changed, likely lowering the number of servers that are vulnerable. [...]

Proofpoint has explained that some of the malicious emails used multiple email addresses that, when pasted into the CC field, attempted to install a webshell-based backdoor on vulnerable Zimbra servers. The full CC list was wrapped as a single string and encoded using the base64 algorithm. When combined and converted back into plaintext, they created a webshell at the path: /jetty/webapps/zimbraAdmin/public/jsp/zimbraConfig.jsp. Proofpoint went on to say: "Once installed, the webshell listens for inbound connection with a pre-determined JSESSIONID Cookie field; if present, the webshell will then parse the JACTION cookie for base64 commands. The webshell has support for command execution via exec or download and execute a file over a socket connection."
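Reassembling a payload smuggled this way is mechanically trivial, which is part of what makes the technique effective. The sketch below is only a guess at the scheme Proofpoint describes: the exact address format used in the attack was not published, so the wrapper convention here (each base64 chunk carried as the local part of a CC address, e.g. `PGpzcDpl@example.invalid`) is an assumption for illustration.

```python
import base64

def decode_cc_payload(cc_addresses: list[str]) -> str:
    """Recombine base64 chunks hidden in CC addresses into plaintext.

    Hypothetical wrapper format: each address carries one chunk of the
    base64 string as its local part, e.g. "PGpzcDpl@example.invalid".
    """
    chunks = [addr.split("@", 1)[0] for addr in cc_addresses]
    combined = "".join(chunks)
    # Restore any '=' padding lost when the string was split into chunks.
    combined += "=" * (-len(combined) % 4)
    return base64.b64decode(combined).decode("utf-8", errors="replace")
```

A round trip (encode a JSP snippet, split it across addresses, decode it back) recovers the original string, which is all the attacker's server-side handler would need to do before writing the webshell to disk.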

The Internet

World Wide Web Foundation is Shutting Down (theregister.com) 28

After fifteen years of fighting to make the web safer and more accessible, the World Wide Web Foundation is shutting down. From a report: In a letter shared via the organization's website, co-founders Sir Tim Berners-Lee -- inventor of the World Wide Web -- and Rosemary Leith explain that the organization's mission has been somewhat accomplished and a new battle needs to be waged. When the foundation was founded in 2009, just over 20 percent of the world had access to the web and relatively few organizations were trying to change that, say Sir Tim and Leith. A decade and a half later, with nearly 70 percent of the world online, there are many similar non-governmental organizations trying to make the web more accessible and affordable.

The two founders thank their supporters over the years who "have enabled us to move the needle in a big way" with regard to access and affordability. But the issues facing the web have changed, they insist, and the foundation believes other advocacy groups can take it from here. Chief among the more pressing problems, claim Sir Tim and Leith, is the social media business model that commoditized user data and concentrates power with platforms, contrary to Sir Tim's original vision for the web. To address that threat, Sir Tim intends to dismantle his foundation so he can focus on decentralized technology. "We, along with the Web Foundation board, have been asking ourselves where we can have the most impact in the future," the authors say. "The conclusion we have reached is that Tim's passion on restoring power over and control of data to individuals and actively building powerful collaborative systems needs to be the highest priority going forward. In order to best achieve this, Tim will focus his efforts to support his vision for the Solid Protocol and other decentralized systems."

The Almighty Buck

Bank of America Is Down: Users Report Their Accounts Showing Empty Balance (independent.co.uk) 33

schwit1 shares a report from The Independent: Thousands of Bank of America customers reported trouble accessing their bank accounts Wednesday afternoon as the financial institution faced a widespread outage. On social media, customers said they could not view their account balances. Those who could view their accounts said they were met with an alarming $0 balance. For many, a "Connection Error" message popped up while trying to log into the banking app. The message said it was "unable to complete your request" and asked the user to "try again later."

By 1:15 p.m. Eastern Time, nearly 20,000 customers said they were having trouble, according to Downdetector, which reports web outages. That number dropped before rising again around 2:45 p.m. ET. It is unclear what caused the outage.

Privacy

Did Apple Just Kill Social Apps? (nytimes.com) 78

Apple's iOS 18 update has introduced changes to contact sharing that could significantly impact social app developers. The new feature allows users to selectively share contacts with apps, rather than granting access to their entire address book. While Apple touts this as a privacy enhancement, developers warn it may hinder the growth of new social platforms. Nikita Bier, a start-up founder, called it "the end of the world" for friend-based social apps. Critics argue the change doesn't apply to Apple's own services, potentially giving the tech giant an unfair advantage.

Social Networks

Russia Is Banning Discord (pcgamer.com) 133

Russian authorities are considering a ban on Discord, citing unspecified legal violations. According to the Russian daily newspaper Kommersant, the ban may happen "in the coming days." PC Gamer reports: The opening salvo has already been fired. The Russian state media regulator Roskomnadzor has issued five separate rulings relating to Discord since September 20, which can all now be used as justification for an upcoming ban. Say what you will about authoritarian regimes, but they love their bureaucracy. Kommersant quotes an anonymous official source as saying the ban is being considered for violations of Russian law: needless to say, these violations have not been detailed, nor are they likely to be.

Russian users have also complained about periodic outages on Discord over September, with many resorting to VPNs, and both the web and mobile versions of the platform affected. Should the ban become a reality, the big losers will be Russian players and developers, with no obvious domestic replacement. "The problem is that for Russian developers, communication with the community, including the international one, and technical support are implemented through Discord," said Vasily Ovchinnikov, head of Russia's Organization for the Development of the Video Game Industry. Today, a Moscow court fined Discord 3.5 million roubles ($37,675) for, apparently, failing to restrict access to banned information.

Social Networks

Reddit is Making Sitewide Protests Basically Impossible (theverge.com) 73

Reddit has implemented new restrictions on moderators' ability to alter community visibility settings, the social media platform announced Monday. Moderators must now obtain admin approval before switching subreddits between public, private, or NSFW status.

The move comes in response to last year's widespread protests against API pricing changes, during which thousands of subreddits went private, disrupting platform accessibility. Reddit VP Laura Nestler stated the policy aims to prevent actions that "deliberately cause harm" and protect the site's long-term health.

Privacy

Meta Fined $102 Million For Storing 600 Million Passwords In Plain Text (appleinsider.com) 28

Meta has been fined $101.5 million by the Irish Data Protection Commission (DPC) for storing over half a billion user passwords in plain text for years, with some engineers having access to this data for over a decade. The issue, discovered in 2019, predominantly affected non-US users, especially those using Facebook Lite. AppleInsider reports: Meta Ireland was found guilty of infringing four parts of GDPR, including how it "failed to notify the DPC of a personal data breach concerning storage of user passwords in plain text." Meta Ireland did report the failure, but only some months after it was discovered. "It is widely accepted that user passwords should not be stored in plaintext, considering the risks of abuse that arise from persons accessing such data," said Graham Doyle, Deputy Commissioner at the DPC, in a statement about the fine. "It must be borne in mind, that the passwords the subject of consideration in this case, are particularly sensitive, as they would enable access to users' social media accounts."

Other than the fine and an official reprimand, the full extent of the DPC's ruling is yet to be released publicly. The details published so far do not reveal whether the passwords included any belonging to US users as well as to users in Ireland or across the rest of the European Union. It's most likely that the issue concerns only non-US users, however. That's because in 2019, Facebook told CNN that the majority of the plain text passwords were for a service called Facebook Lite, which it described as being a cut-down service for areas of the world with slower connectivity.
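The DPC's point that "user passwords should not be stored in plaintext" refers to the standard practice of storing only a salted, slow hash of each password. A minimal illustrative sketch in Python using the standard library's `hashlib.scrypt` (this is an assumption-laden example of the general technique, not Meta's actual implementation):

```python
import hashlib
import hmac
import os


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; only the salt and digest are stored, never the password."""
    salt = os.urandom(16)  # random per-user salt defeats precomputed rainbow tables
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the candidate password and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)


salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Because only the salt and digest are persisted, an engineer with database access (the scenario in the DPC finding) cannot recover the original password without a brute-force attack that scrypt's memory-hard parameters are designed to make expensive.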

Facebook

Science Editors Raise New Doubts on Meta's Claims It Isn't Polarizing (msn.com) 16

Meta Platforms' claims that Facebook doesn't polarize Americans came under new doubt as the journal Science raised questions about a prominent research paper the tech giant has cited to support its position. WSJ: In an editorial Thursday, Science said that Meta's emergency efforts to calm its platforms in the wake of the 2020 election may have swayed the conclusions of the paper, which the journal published in July 2023. The editorial, titled "Context matters in social media," was prompted by a letter that Science also published presenting new criticism of the paper. Because the study of Facebook's algorithms relied on data provided by Meta when it was undertaking extraordinary efforts to restrain incendiary political content, the letter's authors argue that the paper may have overstated the case that social media algorithms didn't contribute to political polarization.

Such criticisms of peer-reviewed research often appear below papers in academic journals, but Science's editors felt their editorial was needed to more prominently caveat this original paper's conclusions, said Holden Thorp, Science's editor in chief. "It was incumbent on us to come up with a way somehow that people who would come to the paper would know of these concerns," Thorp said in an interview. While no correction was warranted, he said, "There's an election coming up, and we care about people citing this paper." Meta said it had been transparent with researchers about its actions during the time of the study, and the company and its research partners say it had no control over the Science paper's conclusions. Meta characterized debates of the sort aired on Thursday as part of the research process.
