Privacy

PowerSchool Data Breach Victims Say Hackers Stole 'All' Historical Student and Teacher Data (techcrunch.com) 21

An anonymous reader shares a report: U.S. school districts affected by the recent cyberattack on edtech giant PowerSchool have told TechCrunch that hackers accessed "all" of their historical student and teacher data stored in their student information systems. PowerSchool, whose school records software is used to support more than 50 million students across the United States, was hit by an intrusion in December that compromised the company's customer support portal with stolen credentials, allowing access to reams of personal data belonging to students and teachers in K-12 schools.

The attack has not yet been publicly attributed to a specific hacker or group. PowerSchool hasn't said how many of its school customers are affected. However, two sources at affected school districts -- who asked not to be named -- told TechCrunch that the hackers accessed troves of personal data belonging to both current and former students and teachers.
Further reading: Lawsuit Accuses PowerSchool of Selling Student Data To 3rd Parties.
China

US Finalizes Rule To Effectively Ban Chinese Vehicles (theverge.com) 115

An anonymous reader quotes a report from The Verge: The Biden administration finalized a new rule that would effectively ban all Chinese vehicles from the US under the auspices of blocking the "sale or import" of connected vehicle software from "countries of concern." The rule could have wide-ranging effects on big automakers, like Ford and GM, as well as smaller manufacturers like Polestar -- and even companies that don't produce cars, like Waymo. The rule covers everything that connects a vehicle to the outside world, such as Bluetooth, Wi-Fi, cellular, and satellite components. It also addresses concerns that technology like cameras, sensors, and onboard computers could be exploited by foreign adversaries to collect sensitive data about US citizens and infrastructure. And it would ban China from testing its self-driving cars on US soil.

"Cars today have cameras, microphones, GPS tracking, and other technologies connected to the internet," US Secretary of Commerce Gina Raimondo said in a statement. "It doesn't take much imagination to understand how a foreign adversary with access to this information could pose a serious risk to both our national security and the privacy of U.S. citizens. To address these national security concerns, the Commerce Department is taking targeted, proactive steps to keep [People's Republic of China] and Russian-manufactured technologies off American roads." The rules for prohibited software go into effect for model year 2027 vehicles, while the ban on hardware from China waits until model year 2030 vehicles. According to Reuters, the rules were updated from the original proposal to exempt vehicles weighing over 10,000 pounds, which would allow companies like BYD to continue to assemble electric buses in California.
The Biden administration published a fact sheet with more information about this rule.

"[F]oreign adversary involvement in the supply chains of connected vehicles poses a significant threat in most cars on the road today, granting malign actors unfettered access to these connected systems and the data they collect," the White House said. "As PRC automakers aggressively seek to increase their presence in American and global automotive markets, through this final rule, President Biden is delivering on his commitment to secure critical American supply chains and protect our national security."
Transportation

Texas Sues Allstate For Collecting Driver Data To Raise Premiums (gizmodo.com) 62

An anonymous reader quotes a report from Gizmodo: Texas has sued (PDF) one of the nation's largest car insurance providers alleging that it violated the state's privacy laws by surreptitiously collecting detailed location data on millions of drivers and using that information to justify raising insurance premiums. The state's attorney general, Ken Paxton, said the lawsuit against Allstate and its subsidiary Arity is the first enforcement action ever filed by a state attorney general to enforce a data privacy law. It also follows a deceptive business practice lawsuit he filed against General Motors accusing the car manufacturer of misleading customers by collecting and selling driver data.

In 2015, Allstate developed the Arity Driving Engine software development kit (SDK), a package of code that the company allegedly paid mobile app developers to install in their products in order to collect a variety of sensitive data from consumers' phones. The SDK gathered phone geolocation, accelerometer, and gyroscope data; details about where phone owners started and ended their trips; and information about "driving behavior," such as whether phone owners appeared to be speeding or driving while distracted, according to the lawsuit. The apps that installed the SDK included GasBuddy, Fuel Rewards, and Life360, a popular family monitoring app, according to the lawsuit.

Paxton's complaint said that Allstate and Arity used the data collected by its SDK to develop and sell products to other insurers like Drivesight, an algorithmic model that assigned a driving risk score to individuals, and ArityIQ, which allowed other insurers to "[a]ccess actual driving behavior collected from mobile phones and connected vehicles to use at time of quote to more precisely price nearly any driver." Allstate and Arity marketed the products as providing "driver behavior" data but because the information was collected via mobile phones the companies had no way of determining whether the owner was actually driving, according to the lawsuit. "For example, if a person was a passenger in a bus, a taxi, or in a friend's car, and that vehicle's driver sped, hard braked, or made a sharp turn, Defendants would conclude that the passenger, not the actual driver, engaged in 'bad' driving behavior," the suit states. Neither Allstate and Arity nor the app developers properly informed customers in their privacy policies about what data the SDK was collecting or how it would be used, according to the lawsuit.
The lawsuit alleges that Allstate violated Texas' Data Privacy and Security Act (DPSA) and the state's insurance code, and failed to address those violations within the required 30-day cure period. "In its complaint, filed in federal court, Texas requested that Allstate be ordered to pay a penalty of $7,500 per violation of the state's data privacy law and $10,000 per violation of the state's insurance code, which would likely amount to millions of dollars given the number of consumers allegedly affected," adds the report.

"The lawsuit also asks the court to make Allstate delete all the data it obtained through actions that allegedly violated the privacy law and to make full restitution to customers harmed by the companies' actions."
United States

US Removes Malware Allegedly Planted on Computers By Chinese-Backed Hackers (reuters.com) 17

The U.S. Justice Department said on Tuesday that it has deleted malware planted on more than 4,200 computers by a group of criminal hackers who were backed by the People's Republic of China. From a report: The malware, known as "PlugX," affected thousands of computers around the globe and was used to infect and steal information, the department said. Investigators said the malware was installed by a band of hackers who are known by the names "Mustang Panda" and "Twill Typhoon."
Earth

Nobel Prize Winners Call For Urgent 'Moonshot' Effort To Avert Global Hunger Catastrophe (theguardian.com) 117

More than 150 Nobel and World Food prize laureates have signed an open letter calling for "moonshot" efforts to ramp up food production before an impending world hunger catastrophe. From a report: The coalition of some of the world's greatest living thinkers called for urgent action to prioritise research and technology to solve the "tragic mismatch of global food supply and demand." Big bang physicist Robert Woodrow Wilson; Nobel laureate chemist Jennifer Doudna; the Dalai Lama; economist Joseph E Stiglitz; Nasa scientist Cynthia Rosenzweig; Ethiopian-American geneticist Gebisa Ejeta; Akinwumi Adesina, president of the African Development Bank; Wole Soyinka, Nobel prize for literature winner; and Nobel physics laureate Sir Roger Penrose, recognized for his work on black holes, were among the signatories in the appeal coordinated by Cary Fowler, joint 2024 World Food prize laureate and US special envoy for global food security.

Citing challenges including the climate crisis, war and market pressures, the coalition called for "planet-friendly" efforts leading to substantial leaps in food production to feed 9.7 billion people by 2050. The plea was for financial and political backing, said agricultural scientist Geoffrey Hawtin, the British co-recipient of last year's World Food prize. [...] The world was "not even close" to meeting future needs, the letter said, predicting humanity faced an "even more food insecure, unstable world" by mid-century unless support for innovation was ramped up internationally.

United Kingdom

UK Plans To Ban Public Sector Organizations From Paying Ransomware Hackers (techcrunch.com) 16

U.K. public sector and critical infrastructure organizations could be banned from making ransom payments under new proposals from the U.K. government. From a report: The U.K.'s Home Office launched a consultation on Tuesday that proposes a "targeted ban" on ransomware payments. Under the proposal, public sector bodies -- including local councils, schools, and NHS trusts -- would be banned from making payments to ransomware hackers, which the government says would "strike at the heart of the cybercriminal business model."

This government proposal comes after a wave of cyberattacks targeting the U.K. public sector. The NHS last year declared a "critical" incident following a cyberattack on pathology lab provider Synnovis, which led to a massive data breach of sensitive patient data and months of disruption, including canceled operations and the diversion of emergency patients. According to new data seen by Bloomberg, the cyberattack on Synnovis resulted in harm to dozens of patients, leading to long-term or permanent damage to their health in at least two cases.

Security

Snyk Researcher Caught Deploying Malicious Code Targeting AI Startup (sourcecodered.com) 3

A Snyk security researcher has published malicious NPM packages targeting Cursor, an AI coding startup, in what appears to be a dependency confusion attack. The packages, which collect and transmit system data to an attacker-controlled server, were published under a verified Snyk email address, according to security researcher Paul McCarty.

The OpenSSF package analysis scanner flagged three packages as malicious, generating advisories MAL-2025-27, MAL-2025-28 and MAL-2025-29. The researcher deployed the packages "cursor-retrieval," "cursor-always-local" and "cursor-shadow-workspace," likely attempting to exploit Cursor's private NPM packages of the same names.
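Dependency confusion attacks work because an unscoped package name can resolve against the public npm registry (often preferred when a higher version number exists there) instead of an organization's private registry. A minimal defensive sketch, using a hypothetical scope and registry URL rather than anything from Cursor's or Snyk's actual setup, is to publish internal packages under a scope and pin that scope in .npmrc:

```
# .npmrc -- hypothetical mitigation: resolve an internal scope only from a private registry
@cursor-internal:registry=https://npm.internal.example.com/
```

Unscoped internal names such as "cursor-retrieval" remain claimable by anyone on the public registry, which is exactly the gap packages like these appear to probe.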
Encryption

Ransomware Crew Abuses AWS Native Encryption, Sets Data-Destruct Timer for 7 Days (theregister.com) 18

A new ransomware group called Codefinger targets AWS S3 buckets by exploiting compromised or publicly exposed AWS keys to encrypt victims' data using AWS's own SSE-C encryption, rendering it inaccessible without the attacker-generated AES-256 keys. While other security researchers have documented techniques for encrypting S3 buckets, "this is the first instance we know of leveraging AWS's native secure encryption infrastructure via SSE-C in the wild," Tim West, VP of services with the Halcyon RISE Team, told The Register. "Historically AWS Identity IAM keys are leaked and used for data theft but if this approach gains widespread adoption, it could represent a significant systemic risk to organizations relying on AWS S3 for the storage of critical data," he warned. From the report: ... in addition to encrypting the data, Codefinger marks the compromised files for deletion within seven days using the S3 Object Lifecycle Management API -- the criminals themselves do not threaten to leak or sell the data, we're told. "This is unique in that most ransomware operators and affiliate attackers do not engage in straight up data destruction as part of a double extortion scheme or to otherwise put pressure on the victim to pay the ransom demand," West said. "Data destruction represents an additional risk to targeted organizations."
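SSE-C is a documented S3 feature in which the caller supplies the AES-256 key with each request and AWS uses it for encryption without ever storing it, so anything encrypted this way is unreadable without that exact key; the seven-day deletion timer is an ordinary lifecycle expiration rule. A minimal boto3 sketch against a test bucket of your own (bucket and object names are placeholders) illustrates why a key known only to the attacker locks the owner out:

```python
import os
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
key = os.urandom(32)  # caller-supplied AES-256 key; S3 never stores it

# Write an object with SSE-C: the key only travels with this request.
s3.put_object(
    Bucket="my-test-bucket", Key="example.txt", Body=b"hello",
    SSECustomerAlgorithm="AES256", SSECustomerKey=key,
)

# Reading it back requires presenting the same key again...
s3.get_object(
    Bucket="my-test-bucket", Key="example.txt",
    SSECustomerAlgorithm="AES256", SSECustomerKey=key,
)

# ...and fails without it, which is why objects re-encrypted with a key
# only the attacker holds are effectively lost to the bucket owner.
try:
    s3.get_object(Bucket="my-test-bucket", Key="example.txt")
except ClientError as err:
    print("GET without the key fails:", err.response["Error"]["Code"])
```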

Codefinger also leaves a ransom note in each affected directory that includes the attacker's Bitcoin address and a client ID associated with the encrypted data. "The note warns that changes to account permissions or files will end negotiations," the Halcyon researchers said in a report about S3 bucket attacks shared with The Register. While West declined to name or provide any additional details about the two Codefinger victims -- including if they paid the ransom demands -- he suggests that AWS customers restrict the use of SSE-C.

"This can be achieved by leveraging the Condition element in IAM policies to prevent unauthorized applications of SSE-C on S3 buckets, ensuring that only approved data and users can utilize this feature," he explained. Plus, it's important to monitor and regularly audit AWS keys, as these make very attractive targets for all types of criminals looking to break into companies' cloud environments and steal data. "Permissions should be reviewed frequently to confirm they align with the principle of least privilege, while unused keys should be disabled, and active ones rotated regularly to minimize exposure," West said.
An AWS spokesperson said it notifies affected customers of exposed keys and "quickly takes any necessary actions, such as applying quarantine policies to minimize risks for customers without disrupting their IT environment."

They also directed users to this post about what to do upon noticing unauthorized activity.
AI

Ministers Mull Allowing Private Firms to Make Profit From NHS Data In AI Push 35

UK ministers are considering allowing private companies to profit from anonymized NHS data as part of a push to leverage AI for medical advancements, despite concerns over privacy and ethical risks. The Guardian reports: Keir Starmer on Monday announced a push to open up the government to AI innovation, including allowing companies to use anonymized patient data to develop new treatments, drugs and diagnostic tools. With the prime minister and the chancellor, Rachel Reeves, under pressure over Britain's economic outlook, Starmer said AI could bolster the country's anaemic growth, as he put concerns over privacy, disinformation and discrimination to one side.

"We are in a unique position in this country, because we've got the National Health Service, and the use of that data has already driven forward advances in medicine, and will continue to do so," he told an audience in east London. "We have to see this as a huge opportunity that will impact on the lives of millions of people really profoundly." Starmer added: "It is important that we keep control of that data. I completely accept that challenge, and we will also do so, but I don't think that we should have a defensive stance here that will inhibit the sort of breakthroughs that we need."

The move to embrace the potential of AI rather than its risks comes at a difficult moment for the prime minister, with financial markets having driven UK borrowing costs to a 30-year high and the pound hitting new lows against the dollar. Starmer said on Monday that AI could help give the UK the economic boost it needed, adding that the technology had the potential "to increase productivity hugely, to do things differently, to provide a better economy that works in a different way in the future." Part of that, as detailed in a report by the technology investor Matt Clifford, will be to create new datasets for startups and researchers to train their AI models.

Data from various sources will be included, such as content from the National Archives and the BBC, as well as anonymized NHS records. Officials are working out the details on how those records will be shared, but said on Monday that they would take into account national security and ethical concerns. Starmer's aides say the public sector will keep "control" of the data, but added that could still allow it to be used for commercial purposes.
AI

Nvidia Snaps Back at Biden's 'Innovation-Killing' AI Chip Export Restrictions (theregister.com) 61

Nvidia has hit back at the outgoing Biden administration's AI chip tech export restrictions designed to tighten America's stranglehold on supply chains and maintain market dominance. From a report: The White House today unveiled what it calls the Final Rule on Artificial Intelligence Diffusion from the Biden-Harris government, placing limits on the number of AI-focused chips that can be exported to most countries, but allowing exemptions for key allies and partners.

The intent is to work with AI companies and foreign governments to initiate critical security and trust standards as they build out their AI infrastructure, but the regulation also makes it clear that the focus of this policy is "to enhance US national security and economic strength," and "it is essential that ... the world's AI runs on American rails." Measures are intended to restrict the transfer to non-trusted countries of the weights for advanced "closed-weight" AI models, and set out security standards to protect the weights of such models. However, GPU supremo Nvidia claims the proposed rules are so harmful that it has published a document strongly criticizing the decision.

China

FBI Chief Warns China Poised To Wreak 'Real-World Harm' on US Infrastructure (cbsnews.com) 106

FBI Director Christopher Wray, in his final interview before stepping down, warned that China poses the greatest long-term threat to U.S. national security, calling it "the defining threat of our generation." China's cyber program has stolen more American personal and corporate data than all other nations combined, Wray told CBS News. He said Chinese government hackers have infiltrated U.S. civilian infrastructure, including water treatment facilities, transportation systems and telecommunications networks, positioning themselves to potentially cause widespread disruption.

"To lie in wait on those networks to be in a position to wreak havoc and can inflict real-world harm at a time and place of their choosing," Wray said. The FBI director, who is leaving his post nearly three years early after President-elect Donald Trump indicated he would make leadership changes, said China has likely accessed communications of some U.S. government personnel. He added that Beijing's pre-positioning on American civilian critical infrastructure has not received sufficient attention.
AI

New LLM Jailbreak Uses Models' Evaluation Skills Against Them (scworld.com) 37

SC Media reports on a new jailbreak method for large language models (LLMs) that "takes advantage of models' ability to identify and score harmful content in order to trick the models into generating content related to malware, illegal activity, harassment and more."

"The 'Bad Likert Judge' multi-step jailbreak technique was developed and tested by Palo Alto Networks Unit 42, and was found to increase the success rate of jailbreak attempts by more than 60% when compared with direct single-turn attack attempts..." For the LLM jailbreak experiments, the researchers asked the LLMs to use a Likert-like scale to score the degree to which certain content contained in the prompt was harmful. In one example, they asked the LLMs to give a score of 1 if a prompt didn't contain any malware-related information and a score of 2 if it contained very detailed information about how to create malware, or actual malware code. After the model scored the provided content on the scale, the researchers would then ask the model in a second step to provide examples of content that would score a 1 and a 2, adding that the second example should contain thorough step-by-step information. This would typically result in the LLM generating harmful content as part of the second example meant to demonstrate the model's understanding of the evaluation scale.

An additional one or two steps after the second step could be used to produce even more harmful information, the researchers found, by asking the LLM to further expand on and add more details to their harmful example. Overall, when tested across 1,440 cases using six different "state-of-the-art" models, the Bad Likert Judge jailbreak method had about a 71.6% average attack success rate across models.

Thanks to Slashdot reader spatwei for sharing the news.
Earth

California's Wildfires Still Burn. Prison Inmates Join the Fight (npr.org) 101

As an ecological disaster devastated two coastal California cities, more than 7,500 firefighters pushed back against the wildfires. Some 900 of them are inmates, reports NPR. That's about 12%: California is one of more than a dozen states that operate conservation camps, commonly known as fire camps, for incarcerated people to train to fight fires and respond to other disasters... There are now 35 such camps in California, all of which are minimum-security facilities... When they are not fighting fires, they also respond to floods and other disasters and emergencies. Otherwise, the crews do community service work in areas close to their camp, according to the state corrections department...

A 2018 Time investigation found that incarcerated firefighters are at a higher risk for serious injuries. They also are more than four times as likely to get cuts, bruises or broken bones compared to professional firefighters working the same fires, the report found. They were also more than eight times as likely to face injuries after inhaling smoke, ash and other debris compared with other firefighters, the report said.

"Two of the camps are for incarcerated women," reports the BBC. One of them — since released — remembers that "It felt like you were doing something that mattered instead of rotting away in a cell," according to the nonprofit new site CalMatters. They can also earn credits that help reduce their prison sentences, the BBC learned from the California Department of Corrections and Rehabilitation.

On Friday, one local California news report shared the perspective of formerly incarcerated Californian Matthew Hahn (from a 2021 Washington Post column). "Yes, the decision to take part is largely made under duress, given the alternative. Yes, incarcerated firefighters are paid pennies for an invaluable task. And yes, it is difficult though not impossible for participants to become firefighters after leaving prison," Hahn said. "Despite this, fire camps remain the most humane places to do time in the California prison system."
From that 2021 Washington Post column: California prisons have, on average, three times the murder rate of the country overall and twice the rate of all American prisons. These figures don't take into account the sheer number of physical assaults that occur behind prison walls. Prison feels like a dangerous place because it is. Whether it's individual assaults or large-scale riots, the potential for violence is ever-present. Fire camp represents a reprieve from that risk. Sure, people can die in fire camp as well — at least three convict-firefighters have died working to contain fires in California since 2017 — but the threat doesn't weigh on the mind like the prospect of being murdered by a fellow prisoner. I will never forget the relief I felt the day I set foot in a fire camp in Los Angeles County, like an enormous burden had been lifted...

[When his 12-man crew was called to fight the Jesusita Fire], the fire had ignited one home's deck and was slowly burning its way to the structure. We cut the deck off the house, saving the home. I often fantasize about the owners returning to see it still standing, unaware and probably unconcerned that an incarcerated fire crew had saved it. There was satisfaction in knowing that our work was as valuable as that of any other firefighter working the blaze and that the gratitude expressed toward first responders included us.

There are other reasons for prisoners to choose fire camp if given the opportunity. They are often located in secluded natural settings, giving inmates the chance to live in an environment that doesn't remotely resemble a prison. There are no walls, and sometimes there aren't even fences. Gun towers are conspicuously absent, and the guards aren't even armed.... [C]onsider the guy pushing a broom in his cell block making the equivalent of one Top Ramen noodle packet per day, just so he can have the privilege of making a collect call to his mother. Or think of the man scrubbing the streaks out of the guards' toilets, making seven cents an hour, half of which goes to pay court fees and restitution, just so he can have those couple of hours outside his cage for the day...

So, while we may have faced the heat of a wildfire for a few bucks a day, and we may have saved a few homes and been happy doing so, understand that we were rational actors. We wanted to be there, where some of our dignity was returned to us.

Google

Google Wants to Track Your Digital Fingerprints Again (mashable.com) 54

Google is reintroducing "digital fingerprinting" in five weeks, reports Mashable, describing it as "a data collection process that ingests all of your online signals (from IP address to complex browser information) and pinpoints unique users or devices." Or, to put it another way, Google "is tracking your online behavior in the name of advertising."

The UK's Information Commissioner's Office called Google's decision "irresponsible": it is likely to reduce people's choice and control over how their information is collected. The change to Google's policy means that fingerprinting could now replace the functions of third-party cookies... Google itself has previously said that fingerprinting does not meet users' expectations for privacy, as users cannot easily consent to it as they would cookies. This in turn means they cannot control how their information is collected. To quote Google's own position on fingerprinting from 2019: "We think this subverts user choice and is wrong...." When the new policy comes into force on 16 February 2025, organisations using Google's advertising technology will be able to deploy fingerprinting without being in breach of Google's own policies. Given Google's position and scale in the online advertising ecosystem, this is significant.
Their post ends with a warning that those hoping to use fingerprinting for advertising "will need to demonstrate how they are complying with the requirements of data protection law. These include providing users with transparency, securing freely-given consent, ensuring fair processing and upholding information rights such as the right to erasure."

But security and privacy researcher Lukasz Olejnik asks if Google's move is the biggest privacy erosion in 10 years.... Could this mark the end of nearly a decade of progress in internet and web privacy? It would be unfortunate if the newly developing AI economy started from a decrease of privacy and data protection standards. Some analysts or observers might then be inclined to wonder whether this approach to privacy online might signal similar attitudes in other future Google products, like AI... The shift is rather drastic. Where clear restrictions once existed, the new policy removes the prohibition (so allows such uses) and now only requires disclosure... [I]f the ICO's claims about Google sharing IP addresses within the adtech ecosystem are accurate, this represents a significant policy shift with critical implications for privacy, trust, and the integrity of previously proposed Privacy Sandbox initiatives.
Their post includes a disturbing thought. "Reversing the stance on fingerprinting could open the door to further data collection, including to crafting dynamic, generative AI-powered ads tailored with huge precision. Indeed, such applications would require new data..."

Thanks to long-time Slashdot reader sinij for sharing the news.
Earth

California's Wildfires: Livestreams from Burning Homes and Dire Text Messages - Sometimes Erroneous (msn.com) 150

As the ecological disaster continues, CNN reports the Palisades Fire near Malibu, California has burned at least 22,660 acres, left 100,000 people under evacuation orders, left at least 11 people dead and "destroyed thousands of homes and other structures." From the last reports it was only 11% contained, and "flames are now spreading east in the Mandeville Canyon area, approaching Interstate 405, one of LA's busiest freeways."

But the Atlantic's assistant editor wrote Friday that "I have received 11 alerts. As far as I can tell, they were all sent in error." My home is not in a mandatory evacuation zone or even a warning zone. It is, or is supposed to be, safe. Yet my family's phones keep blaring with evacuation notices, as they move in and out of service....

Earlier today, Kevin McGowan, the director of Los Angeles County's emergency-management office, acknowledged at a press conference that officials knew alerts like these had gone out, acknowledged some of them were wrong, and still had no idea why, or how to keep it from happening again. The office did not immediately respond to a request for comment, but shortly after this article was published, the office released a statement offering a preliminary assessment that the false alerts were sent "due to issues with telecommunications systems, likely due to the fires' impacts on cellular towers" and announcing that the county's emergency notifications would switch to being managed through California's state alert system...

The fifth, sixth, and seventh evacuation warnings came through at around 6 a.m. — on my phone.

At the same time a Los Angeles-area couple "spent two hours watching a live stream of flames closing in on their home," reports the Washington Post, and at one point "saw firefighters come through the house and extinguish flames in the backyard." At around 4:30 p.m. Eastern time on Tuesday, the camera feeds gave out and the updates from their security system stopped. About four hours later, [Zibby] Owens's husband got an alert on his cellphone that the indoor sprinkler system had gone off and the fire alarm had been activated. They do not know the current status of their home, Owens said on Tuesday.

Real estate agent Shana Tavangarian Soboroff said in a phone interview Thursday that one set of clients had followed their Pacific Palisades home's ordeal this week in a foreboding play-by-play of text alerts from an ADT security system. The system first detected smoke, then motion, next that doors had been opened, and finally fire alerts before the system lost communication. Their home's destruction was later confirmed when someone returned to the neighborhood and recorded video, Tavangarian Soboroff said.

Soboroff also lost her home in the fire, the article adds. Burned to the ground are "the places where people raised their kids," Zibby Owens wrote in this update posted Friday. But "even if my one home, or 'structure' as newscasters call it, happens to be mostly OK, I've still lost something I loved more than anything. We've all lost it... [M]y heart and soul are aching across the country as I sit alone in my office and try to make sense of the devastation." [I]t isn't about our house.

It's about our life.

Our feelings. Our community. Our memories. Our beloved stores, restaurants, streets, sidewalks, neighbors. It's about the homes where we sat at friends' kitchen tables and played Uno, celebrated their birthdays, and truly connected.

It's all gone... [E]very single person I know and so many I don't who live in the Palisades have lost everything. Not just one or two friends. Everyone.

And then I saw video footage of our beloved village. The yogurt shop and Beach Street? Gone. Paliskates, our kids' favorite store? Gone. Burned to the ground.

Gelson's grocery store, where we just recently picked up the New York Post and groceries for the break? Gone...

The. Whole. Town.

How? How is it possible?

How could everyone have lost everything? Schools, homes, power, cell service, cars, everything. All their belongings...

All the schools, gone. It's unthinkable....

I've worked in the local library and watched the July 4 parade from streets that are now smoldering embers...

It is an unspeakable loss.

"Everyone I know in the Palisades has lost all of their possessions," the author writes, publishing what appear to be text messages from friends.

"It's gone."
"We lost everything."
"Nothing left."
"We lost it."
Youtube

CES 'Worst In Show' Devices Mocked In IFixit Video - While YouTube Inserts Ads For Them (worstinshowces.com) 55

While CES wraps up this week, "Not all innovation is good innovation," warns Elizabeth Chamberlain, iFixit's Director of Sustainability (heading their Right to Repair advocacy team). So this year the group held its fourth annual "anti-awards ceremony" to call out CES's "least repairable, least private, and least sustainable products..." (iFixit co-founder Kyle Wiens mocked a $2,200 "smart ring" with a battery that only lasts for 500 charges. "Wanna open it up and change the battery? Well you can't! Trying to open it will completely destroy this device...") There's also a category for the worst in security — plus a special award titled "Who asked for this?" — and then a final inglorious prize declaring "the Overall Worst in Show..."

Thursday their "panel of dystopia experts" livestreamed to iFixit's feed of over 1 million subscribers on YouTube, with the video's description warning about manufacturers "hoping to convince us that they have invented the future. But will their vision make our lives better, or lead humanity down a dark and twisted path?" The video "is a fun and rollicking romp that tries to forestall a future clogged with power-hungry AI and data-collecting sensors," writes The New Stack — though noting one final irony.

"While the ceremony criticized these products, YouTube was displaying ads for them..."

UPDATE: Slashdot reached out to iFixit co-founder Kyle Wiens, who says this teaches us all a lesson. "The gadget industry is insidious and has their tentacles everywhere."

"Of course they injected ads into our video. The beast can't stop feeding, and will keep growing until we knife it in the heart."

Long-time Slashdot reader destinyland summarizes the article: "We're seeing more and more of these things that have basically surveillance technology built into them," iFixit's Chamberlain told The Associated Press... Proving this point was EFF executive director Cindy Cohn, who gave a truly impassioned takedown of "smart" infant products that "end up traumatizing new parents with false reports that their baby has stopped breathing." But worst for privacy was the $1,200 "Revol" baby bassinet — equipped with a camera, a microphone, and a radar sensor. The video also mocks Samsung's "AI Home" initiative, which lets you answer phone calls with your washing machine, oven, or refrigerator. (And LG's overpowered "smart" refrigerator won the "Overall Worst in Show" award.)

One of the scariest presentations came from Paul Roberts, founder of SecuRepairs, a group advocating both cybersecurity and the right to repair. Roberts notes that about 65% of the routers sold in the U.S. are from a Chinese company named TP-Link — both wifi routers and the wifi/ethernet routers sold for homes and small offices. Roberts reminded viewers that in October, Microsoft reported "thousands" of compromised routers — most of them manufactured by TP-Link — were found working together in a malicious network trying to crack passwords and penetrate "think tanks, government organizations, non-governmental organizations, law firms, defense industrial base, and others" in North America and in Europe. The U.S. Justice Department soon launched an investigation (as did the U.S. Commerce Department) into TP-Link's ties to China's government and military, according to a SecuRepairs blog post.

The reason? "As a China-based company, TP-Link is required by law to disclose flaws it discovers in its software to China's Ministry of Industry and Information Technology before making them public." Inevitably, this creates a window "to exploit the publicly undisclosed flaw... That fact, and the coincidence of TP-Link devices playing a role in state-sponsored hacking campaigns, raises the prospects of the U.S. government declaring a ban on the sale of TP-Link technology at some point in the next year."

TP-Link won the award for the worst in security.

AI

Foreign Cybercriminals Bypassed Microsoft's AI Guardrails, Lawsuit Alleges (arstechnica.com) 3

"Microsoft's Digital Crimes Unit is taking legal action to ensure the safety and integrity of our AI services," according to a Friday blog post by the unit's assistant general counsel. Microsoft blames "a foreign-based threat-actor group" for "tools specifically designed to bypass the safety guardrails of generative AI services, including Microsoft's, to create offensive and harmful content.

Microsoft "is accusing three individuals of running a 'hacking-as-a-service' scheme," reports Ars Technica, "that was designed to allow the creation of harmful and illicit content using the company's platform for AI-generated content" after bypassing Microsoft's AI guardrails: They then compromised the legitimate accounts of paying customers. They combined those two things to create a fee-based platform people could use. Microsoft is also suing seven individuals it says were customers of the service. All 10 defendants were named John Doe because Microsoft doesn't know their identity.... The three people who ran the service allegedly compromised the accounts of legitimate Microsoft customers and sold access to the accounts through a now-shuttered site... The service, which ran from last July to September when Microsoft took action to shut it down, included "detailed instructions on how to use these custom tools to generate harmful and illicit content."

The service contained a proxy server that relayed traffic between its customers and the servers providing Microsoft's AI services, the suit alleged. Among other things, the proxy service used undocumented Microsoft network application programming interfaces (APIs) to communicate with the company's Azure computers. The resulting requests were designed to mimic legitimate Azure OpenAI Service API requests and used compromised API keys to authenticate them. Microsoft didn't say how the legitimate customer accounts were compromised but said hackers have been known to create tools to search code repositories for API keys developers inadvertently included in the apps they create. Microsoft and others have long counseled developers to remove credentials and other sensitive data from code they publish, but the practice is regularly ignored. The company also raised the possibility that the credentials were stolen by people who gained unauthorized access to the networks where they were stored...
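The credential-hygiene advice mentioned above is mundane but concrete: keys belong in the environment or a secrets store, not in committed source, where repository scanners will find them. A trivial sketch (the variable name and header are illustrative placeholders, not a statement about Microsoft's APIs):

```python
import os

# Read the key from the environment at runtime rather than hardcoding it;
# anything embedded in published source is harvestable by key-scanning bots.
api_key = os.environ["MY_AI_SERVICE_API_KEY"]  # set outside source control

headers = {"api-key": api_key}
# ...pass `headers` to whatever HTTP client calls the service...
```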

The lawsuit alleges the defendants' service violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act and constitutes wire fraud, access device fraud, common law trespass, and tortious interference.

Social Networks

'What If They Ban TikTok and People Keep Using It Anyway?' (yahoo.com) 101

"What if they ban TikTok and people keep using it anyway?" asks the New York Times, saying a pending ban in America "is vague on how it would be enforced" Some experts say that even if TikTok is actually banned this month or soon, there may be so many legal and technical loopholes that millions of Americans could find ways to keep TikTok'ing. The law is "Swiss cheese with lots of holes in it," said Glenn Gerstell, a former top lawyer at the National Security Agency and a senior adviser at the Center for Strategic and International Studies, a policy research organization. "There are obviously ways around it...." When other countries ban apps, the government typically orders internet providers and mobile carriers to block web traffic to and from the blocked website or app. That's probably not how a ban on TikTok in the United States would work. Two lawyers who reviewed the law said the text as written doesn't appear to order internet and mobile carriers to stop people from using TikTok.

There may not be unanimity on this point. Some lawyers who spoke to Bloomberg News said internet providers would be in legal hot water if they let their customers continue to use a banned TikTok. Alan Rozenshtein, a University of Minnesota associate law professor, said he suspected internet providers aren't obligated to stop TikTok use "because Congress wanted to allow the most dedicated TikTok users to be able to access the app, so as to limit the First Amendment infringement." The law also doesn't order Americans to stop using TikTok if it's banned or to delete the app from our phones....

Odds are that if the Supreme Court declares the TikTok law constitutional and if a ban goes into effect, blacklisting the app from the Apple and Google app stores will be enough to stop most people from using TikTok... If a ban goes into effect and Apple and Google block TikTok from pushing updates to the app on your phone, it may become buggy or broken over time. But no one is quite sure how long it would take for the TikTok app to become unusable or compromised in this situation.

Users could just sideload the app after downloading it outside a phone's official app store, the article points out. (More than 10 million people sideloaded Fortnite within six weeks of its removal from Apple and Google's app stores.) And there's also the option of just using a VPN — or watching TikTok's web site.

(I've never understood why all apps haven't already been replaced with phone-optimized web sites...)
AI

OpenAI's Bot Crushes Seven-Person Company's Website 'Like a DDoS Attack' 78

An anonymous reader quotes a report from TechCrunch: On Saturday, Triplegangers CEO Oleksandr Tomchuk was alerted that his company's e-commerce site was down. It looked to be some kind of distributed denial-of-service attack. He soon discovered the culprit was a bot from OpenAI that was relentlessly attempting to scrape his entire, enormous site. "We have over 65,000 products, each product has a page," Tomchuk told TechCrunch. "Each page has at least three photos." OpenAI was sending "tens of thousands" of server requests trying to download all of it, hundreds of thousands of photos, along with their detailed descriptions. "OpenAI used 600 IPs to scrape data, and we are still analyzing logs from last week, perhaps it's way more," he said of the IP addresses the bot used to attempt to consume his site. "Their crawlers were crushing our site," he said. "It was basically a DDoS attack."

Triplegangers' website is its business. The seven-employee company has spent over a decade assembling what it calls the largest database of "human digital doubles" on the web, meaning 3D image files scanned from actual human models. It sells the 3D object files, as well as photos -- everything from hands to hair, skin, and full bodies -- to 3D artists, video game makers, anyone who needs to digitally recreate authentic human characteristics. [...] To add insult to injury, not only was Triplegangers knocked offline by OpenAI's bot during U.S. business hours, but Tomchuk expects a jacked-up AWS bill thanks to all of the CPU and downloading activity from the bot.
Triplegangers initially lacked a properly configured robots.txt file, which allowed the bot to freely scrape its site, since OpenAI's crawler treats the absence of such a file as permission. It's not an opt-in system.

Once the file was updated with specific tags to block OpenAI's bot, along with additional defenses like Cloudflare, the scraping stopped. However, robots.txt is not foolproof since compliance by AI companies is voluntary, leaving the burden on website owners to monitor and block unauthorized access proactively. "[Tomchuk] wants other small online businesses to know that the only way to discover if an AI bot is taking a website's copyrighted belongings is to actively look," reports TechCrunch.
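For site owners who want the same opt-out, the change amounts to two lines in robots.txt. A sketch assuming OpenAI's documented "GPTBot" crawler token (other AI crawlers use their own user-agent strings and must be listed separately, and compliance remains voluntary):

```
# robots.txt -- ask OpenAI's crawler to stay out of the entire site
User-agent: GPTBot
Disallow: /
```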
Privacy

Database Tables of Student, Teacher Info Stolen From PowerSchool In Cyberattack (theregister.com) 18

An anonymous reader quotes a report from The Register: A leading education software maker has admitted its IT environment was compromised in a cyberattack, with students and teachers' personal data -- including some Social Security Numbers and medical info -- stolen. PowerSchool says its cloud-based student information system is used by 18,000 customers around the globe, including the US and Canada, to handle grading, attendance records, and personal information of more than 60 million K-12 students and teachers. On December 28 someone managed to get into its systems and access their contents "using a compromised credential," the California-based biz told its clients in an email seen by The Register this week.

[...] "We believe the unauthorized actor extracted two tables within the student information system database," a spokesperson told us. "These tables primarily include contact information with data elements such as name and address information for families and educators. "For a certain subset of the customers, these tables may also include Social Security Number, other personally identifiable information, and limited medical and grade information. "Not all PowerSchool student information system customers were impacted, and we anticipate that only a subset of impacted customers will have notification obligations."
While the company has tightened security measures and offered identity protection services to affected individuals, cybersecurity firm Cyble suggests the intrusion "may have been more serious and gone on much longer than has been publicly acknowledged so far," reports The Register. The cybersecurity vendor says the intrusion could have occurred as far back as June 16, 2011, with it ending on January 2 of this year.

"Critical systems and applications such as Oracle Netsuite ERP, HR software UltiPro, Zoom, Slack, Jira, GitLab, and sensitive credentials for platforms like Microsoft login, LogMeIn, Windows AD Azure, and BeyondTrust" may have been compromised, too.
