Social Networks

Surgeon General: There Isn't Enough Evidence That Social Media Is Safe For Kids (statnews.com) 137

An anonymous reader quotes a report from STAT News: Amid what he called the worst youth mental health crisis in recent memory, U.S. Surgeon General Vivek Murthy issued an advisory Tuesday warning about social media's impact on developing young brains. "Through the last two and a half years I've been in office, I've been hearing concerns from kids and parents," Murthy told STAT. "Parents are asking 'Is social media safe for my kids?' Based on our review of the data, there isn't enough evidence that it is safe for our kids."

The advisory calls on policymakers and technology companies to take steps to minimize the risks of social media. "This is not going to be an issue that we solve with one sector alone," Murthy said. Policymakers, according to the report, need to develop age restrictions and safety standards for social media -- much like the regulations that the U.S. has in place for everything from cars to medicine. Specifically, Murthy would like to see policymakers require a higher standard of data privacy for children to protect them from potential harms like exploitation and abuse. Technology companies, meanwhile, need to be more transparent about the data they share, according to Murthy. He calls on companies to assess the potential risks of online interactions and take active steps to prevent potential misuse. He also suggests the establishment of scientific advisory committees to inform approaches and policies aimed at creating safe online environments for children.

The advisory also suggests families attempt to protect young people's mental health by developing a family media plan aimed at establishing healthy technology boundaries at home, such as creating "tech-free zones" that restrict phone use during certain hours or family mealtime. But Murthy noted that parents are already at the end of their rope in trying to manage how their children are exposed to and using this rapidly evolving technology. That responsibility has fallen entirely on them up to this point. "We've got to move quickly," he said. "None of us should be satisfied until we have clear evidence that these platforms are safe."
The surgeon general's report comes two weeks after the American Psychological Association issued a health advisory on teens and social media use. The group noted the increased risk of anxiety and depression among adolescents who are exposed to discrimination and bullying online. "Other research has shown that adolescents ages 12-15 who spent more than three hours per day on social media face a heightened risk of experiencing poor mental health outcomes compared to those who spent less time online," adds STAT News.
Earth

CEO of Biggest Carbon Credit Certifier To Resign After Claims Offsets Worthless (theguardian.com) 80

The head of the world's leading carbon credit certifier has announced he will step down as CEO next month. From a report: It comes amid concerns that Verra, a Washington-based nonprofit, approved tens of millions of worthless offsets that are used by major companies for climate and biodiversity commitments, according to a joint Guardian investigation earlier this year. In a statement on LinkedIn on Monday, Verra's CEO, David Antonioli, said he would leave his role after 15 years leading the organisation that dominates the $2bn voluntary carbon market, which has certified more than 1bn credits through its verified carbon standard (VCS).

Antonioli thanked current and former staff, and said he was immensely proud of what Verra had accomplished through the environmental standards it operates. He did not give a reason for his departure and said he would be taking a break once he left the role. Judith Simon, Verra's recently appointed president, will serve as interim CEO following Antonioli's departure on 16 June. "The trust you placed in Verra and myself in my role as CEO has meant a lot, and I leave knowing we have made tremendous strides together in addressing some of the world's most vexing environmental and social problems. Working with you on these important issues has been a great highlight of my career," he said.

Facebook

Meta Sells Giphy To Shutterstock at a Loss in a $53 Million Deal (cnbc.com) 19

The online stock-photo marketplace Shutterstock announced Tuesday it would acquire Giphy from Meta Platforms for $53 million, a significant loss for Meta, which acquired Giphy in 2020 for $315 million. From a report: The acquisition is an all-cash deal, and in an investor presentation, Shutterstock said it would maintain its full-year revenue guidance. The acquisition would add "minimal revenue in 2023," Shutterstock noted. The deal is expected to close in June. Shutterstock's shares rose nearly 2% in morning trading Tuesday. The U.K.'s Competition and Markets Authority had ordered Meta to divest Giphy in 2022, citing potential anti-competitive effects. The CMA disclosed it was probing the deal in June 2020. Giphy, which is a platform for searching for and using animated images in messaging apps, was well-integrated into Meta's ecosystem, and had been an acquisition target for the social-media company years before Meta acquired it in 2020.
Google

Google CEO: Building AI Responsibly is the Only Race That Really Matters (ft.com) 53

Sundar Pichai, CEO of Google and Alphabet, writing at Financial Times: While some have tried to reduce this moment to just a competitive AI race, we see it as so much more than that. At Google, we've been bringing AI into our products and services for over a decade and making them available to our users. We care deeply about this. Yet, what matters even more is the race to build AI responsibly and make sure that as a society we get it right. We're approaching this in three ways. First, by boldly pursuing innovations to make AI more helpful to everyone. We're continuing to use AI to significantly improve our products -- from Google Search and Gmail to Android and Maps. These advances mean that drivers across Europe can now find more fuel-efficient routes; tens of thousands of Ukrainian refugees are helped to communicate in their new homes; flood forecasting tools are able to predict floods further in advance. Google DeepMind's work on AlphaFold, in collaboration with the European Molecular Biology Laboratory, resulted in a groundbreaking understanding of over 200mn catalogued proteins known to science, opening up new healthcare possibilities.

Our focus is also on enabling others outside of our company to innovate with AI, whether through our cloud offerings and APIs, or with new initiatives like the Google for Startups Growth program, which supports European entrepreneurs using AI to benefit people's health and wellbeing. We're launching a social innovation fund on AI to help social enterprises solve some of Europe's most pressing challenges. Second, we are making sure we develop and deploy the technology responsibly, reflecting our deep commitment to earning the trust of our users. That's why we published AI principles in 2018, rooted in a belief that AI should be developed to benefit society while avoiding harmful applications. We have many examples of putting those principles into practice, such as building in guardrails to limit misuse of our Universal Translator. This experimental AI video dubbing service helps experts translate a speaker's voice and match their lip movements. It holds enormous potential for increasing learning comprehension but we know the risks it could pose in the hands of bad actors and so have made it accessible to authorised partners only. As AI evolves, so does our approach: this month we announced we'll provide ways to identify when we've used it to generate content in our services.

Facebook

Meta Fined Record $1.3 Billion in EU Over US Data Transfers (bloomberg.com) 84

Facebook owner Meta was hit by a record $1.3 billion European Union privacy fine and given a deadline to stop shipping users' data to the US after regulators said it failed to protect personal information from the prying eyes of American security services. Bloomberg News: The social network giant's continued data transfers to the US didn't address "the risks to the fundamental rights and freedoms" of people whose data was being transferred across the Atlantic, according to a decision by the Irish Data Protection Commission announced on Monday. On top of the fine, which eclipses a $806 million EU privacy penalty previously doled out to Amazon, Meta was given five months to "suspend any future transfer of personal data to the US" and six months to stop "the unlawful processing, including storage, in the US" of transferred personal EU data. A data-transfers ban for Meta was widely expected and once prompted the US firm to threaten a total withdrawal from the EU. But its impact has now been muted by the transition phase given in the decision and the prospect of a new EU-US data flows agreement that could already be operational by the middle of this year.
AI

Is Concern About Deadly AI Overblown? (sfgate.com) 190

"Formerly fringe beliefs that machines could suddenly surpass human-level intelligence and decide to destroy mankind are gaining traction," acknowledges the Washington Post. "And some of the most well-respected scientists in the field are speeding up their own timelines for when they think computers could learn to outthink humans and become manipulative.

"But many researchers and engineers say concerns about killer AIs that evoke Skynet in the Terminator movies aren't rooted in good science. Instead, it distracts from the very real problems that the tech is already causing..." It is creating copyright chaos, is supercharging concerns around digital privacy and surveillance, could be used to increase the ability of hackers to break cyberdefenses and is allowing governments to deploy deadly weapons that can kill without human control... [I]nside the Big Tech companies, many of the engineers working closely with the technology do not believe an AI takeover is something that people need to be concerned about right now, according to conversations with Big Tech workers who spoke on the condition of anonymity to share internal company discussions. "Out of the actively practicing researchers in this discipline, far more are centered on current risk than on existential risk," said Sara Hooker, director of Cohere for AI, the research lab of AI start-up Cohere, and a former Google researcher...

The ripple effects of the technology are still unclear, and entire industries are bracing for disruption, with even high-paying jobs like those of lawyers and physicians at risk of being replaced. The existential risks seem more stark, but many would argue they are harder to quantify and less concrete: a future where AI could actively harm humans, or even somehow take control of our institutions and societies. "There are a set of people who view this as, 'Look, these are just algorithms. They're just repeating what it's seen online.' Then there is the view where these algorithms are showing emergent properties, to be creative, to reason, to plan," Google CEO Sundar Pichai said during an interview with "60 Minutes" in April. "We need to approach this with humility...."

There's no question that modern AIs are powerful, but that doesn't mean they are an imminent existential threat, said Hooker, the Cohere for AI director. Much of the conversation around AI freeing itself from human control centers on it quickly overcoming its constraints, like the AI antagonist Skynet does in the Terminator movies. "Most technology and risk in technology is a gradual shift," Hooker said. "Most risk compounds from limitations that are currently present."

The Post also points out that some of the heaviest criticism of the "killer robot" debate "has come from researchers who have been studying the technology's downsides for years."

"It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse," a four-person team of researchers opined recently. "Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities."
Cellphones

Are Smartphones Costing Gen Z Crucial Life Experiences? (cnn.com) 158

CNN's chief medical correspondent spoke to psychology professor Jean Twenge of San Diego State University, who in 2018 published a book that, even before lockdowns, warned that teenagers were missing crucial life experiences. Its title? "iGen: Why Today's Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy — and Completely Unprepared for Adulthood — and What That Means for the Rest of Us."

From CNN's report: In her book, Twenge makes the case that Gen Z (or iGen, as she calls them) is growing up in a way that is fundamentally different from previous generations. She told me that some of the biggest behavioral changes ever recorded in human history coincided with the release of the smartphone.

Twelfth-graders now are more like eighth-graders from previous generations, waiting longer to take part in activities associated with independence and adulthood, according to Twenge. They are less likely to go out with friends, drive, go to prom or drink alcohol than Gen X 12th-graders were. They are more likely to lie on their beds and scroll through endless social media feeds. They may be physically safer, but the long-term effect on their mental and brain health is a big question mark.

Twenge told me that she "saw just a very, very sudden change, especially in mental health but also in optimism and expectations ... between millennials and iGen or Gen Z."

CNN's chief medical correspondent ultimately recommends parents talk to teenagers about how they're using social media. But the article also recommends: "don't catastrophize." In all likelihood, you'll find out your kids are on some type of screen or device more often than you would like, but — this is key — not everyone develops a problem. In other words, don't assume the worst about the impact that use of technology will have on your child's brain and development. Most people may not develop catastrophic problems, but it can be challenging to predict who is most vulnerable...

And lastly, in the words of author and science journalist Catherine Price, remember that life is what we pay attention to. Think about that for a moment; it is such a simple idea, but it is so true. I find it both deeply inspirational and empowering because it implies that we have it within our control to determine what our lives are like. The next time you go to pick up your phone, Price wants us to remember the three Ws: What for? Why now? What else?

Price also wrote a book — titled "How to Break Up With Your Phone: The 30-Day Plan to Take Back Your Life." Here's how CNN ends their article: As Dr. Keneisha Sinclair-McBride, a clinical psychologist at Boston Children's Hospital and an assistant professor of psychology at Harvard Medical School, pointed out, we possess something very valuable that Big Tech companies want: our time and attention. We need to be judicious about how we allocate these precious resources — not just because they are important to TikTok, Snap or Instagram but because they are priceless for us, too.
Privacy

Freenet 2023: a Drop-in Decentralized Replacement for the Web - and More (freenet.org) 54

Wikipedia describes Freenet as "a peer-to-peer platform for censorship-resistant, anonymous communication," released in the year 2000. "Both Freenet and some of its associated tools were originally designed by Ian Clarke," Wikipedia adds. (And in 2000 Clarke answered questions from Slashdot's readers...)

And now Ian Clarke (aka Sanity — Slashdot reader #1,431) returns to share this announcement: Freenet, a familiar name to Slashdot readers for over 23 years, has undergone a radical transformation: Freenet 2023, or "Locutus". While the original Freenet was like a decentralized hard drive, the new Freenet is like a full decentralized computer, allowing the creation of entirely decentralized services such as messaging, group chat, search, and social networking. The new Freenet is implemented in Rust and designed for efficiency, flexibility, and transparency to the end user.
"Designed for simplicity and flexibility, Freenet 2023 can be used seamlessly through your web browser, providing an experience that feels just like using the traditional web," explains the announcement...

And in the comments below, Ian points out that "When the new Freenet is up and running, I think it will be the first system of any kind that could host something like Wikipedia, not just the data but the wiki CMS system it's built on. An editable wikipedia, entirely decentralized and very scalable...

"We've already had interest from everyone from video game developers who want to build a decentralized MMORPG, to political advocacy groups across the political spectrum. Plenty of people value freedom."
Transportation

Kia and Hyundai Agree To $200 Million Settlement for Making Cars Viral Theft Targets 52

Hyundai and Kia will pay out $200 million in a class-action lawsuit settlement, compensating roughly 9 million people for their losses after a 2022 social media trend revealed how relatively simple it was to steal certain models. From a report: As reported by Reuters, $145 million of the payout goes to the out-of-pocket expenses of those whose cars were stolen. Many Kias made from 2011 to 2021, and Hyundais from 2015 to 2021, lacked electronic engine immobilizers, which would prevent a car from starting unless an electronically matched key was present. Without the immobilizer, the car could be started by turning the ignition with other objects, such as a USB-A cable that thieves discovered was a perfect fit.

Customers whose cars were totaled are eligible for up to $6,125, while damaged vehicles and property can receive a maximum of $3,375, along with costs for raised insurance, car rental, towing, tickets, and others. Kia and Hyundai had previously pledged to provide free software upgrades to vehicles and free wheel locks (i.e. The Club), typically in coordination with regional police departments. The National Highway Traffic Safety Administration said in February that the companies have given out 26,000 wheel locks since November 2022. A September 2022 report by the Insurance Institute for Highway Safety (IIHS) showed that immobilizers were standard on 96 percent of cars sold in the US by 2015 but only 26 percent of Kias and Hyundais. Cars with immobilizers were stolen at a rate of 1.21 per 1,000 insured vehicles, according to the IIHS; those without immobilizers had a 2.18 per 1,000 rate. Kia and Hyundai's far-too-thrifty design decisions might have been simply a balance sheet story were it not for the "Kia Challenge," a 2022 TikTok trend that detailed theft techniques and joyrides. By February 2023, the National Highway Traffic Safety Administration attributed 14 crashes and eight deaths to challenge-inspired thefts.
Google

Google Pushes New Domains Onto the Internet, and the Internet Pushes Back (arstechnica.com) 50

A recent move by Google to populate the Internet with eight new top-level domains is prompting concerns that two of the additions could be a boon to online scammers who trick people into clicking on malicious links. From a report: Two weeks ago, Google added eight new TLDs to the Internet, bringing the total number of TLDs to 1,480, according to the Internet Assigned Numbers Authority, the governing body that oversees the DNS Root, IP addressing, and other Internet protocol resources. Two of Google's new TLDs -- .zip and .mov -- have sparked scorn in some security circles. While Google marketers say the aim is to designate "tying things together or moving really fast" and "moving pictures and whatever moves you," respectively, these suffixes are already widely used to designate something altogether different. Specifically, .zip is an extension used in archive files that use a compression format known as zip. The format .mov, meanwhile, appears at the end of video files, usually when they were created in Apple's QuickTime format. Many security practitioners are warning that these two TLDs will cause confusion when they're displayed in emails, on social media, and elsewhere. The reason is that many sites and software automatically convert strings like "arstechnica.com" or "mastodon.social" into a URL that, when clicked, leads a user to the corresponding domain. The worry is that emails and social media posts that refer to a file such as setup.zip or vacation.mov will automatically turn them into clickable links -- and that scammers will seize on the ambiguity.
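The worry is easy to demonstrate with a minimal auto-linkifier of the kind many sites and messaging apps run over plain text. The tiny TLD list and regex below are illustrative assumptions, not any real product's code; the point is that once "zip" and "mov" are valid TLDs, ordinary file names start matching:

```python
import re

# Tiny illustrative TLD list; real linkifiers consult the full IANA set,
# which now includes Google's new .zip and .mov TLDs.
TLDS = sorted({"com", "social", "zip", "mov"})

# Match a bare "name.tld" token so it can be wrapped in an anchor tag.
PATTERN = re.compile(r"\b([a-z0-9-]+\.(?:" + "|".join(TLDS) + r"))\b")

def linkify(text: str) -> str:
    """Turn anything that looks like a bare domain into a clickable link."""
    return PATTERN.sub(r'<a href="https://\1">\1</a>', text)

print(linkify("Download setup.zip and watch vacation.mov"))
# Both file names become clickable links:
# Download <a href="https://setup.zip">setup.zip</a> and watch
# <a href="https://vacation.mov">vacation.mov</a>
```

A scammer who registers the domain setup.zip can then harvest clicks from any message that merely mentions the file name.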
Open Source

Bluesky Social Just Took a Big Open-Source Step Forward (zdnet.com) 17

An anonymous reader quotes a report from ZDNet: Bluesky Social, the popular new beta social network, is taking a big open-source step forward. On May 15th, 2023, it open-sourced the codebase for its Bluesky Social app on GitHub. This fits well with its plans. From the start, its owner, BlueSky Public Benefit LLC, a public benefit corporation, was building an "open and decentralized" social network.

Unlike Twitter, which is still tripping over its own open source feet, Bluesky client code is for anyone who wants to work on improving the code or use it as the basis for their own social network. Twitter's recommendation code, on the other hand, is essentially unusable. The Bluesky code, licensed under the MIT License, can be used now. Indeed, while it's been out for only about 24 hours, it's already been forked 88 times and has earned over 1,300 GitHub Stars.

While it's specifically the Bluesky Social app's codebase, it's also a resource for AT Protocol programmers. This protocol supports a decentralized social network. Its features include connecting with anyone on a server that supports AT Protocol; controlling how users see the world via an open algorithm market; and enabling users to change hosts without losing their content, followers, or identity. The code itself is written in React Native. This is an open-source, user-interface JavaScript software framework. It's used primarily to build applications that run on both iOS and Android devices.
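As a rough sketch of what "connecting with anyone on a server that supports AT Protocol" looks like on the wire: the protocol exposes its methods as XRPC calls over HTTPS, so resolving a handle to its stable identifier (a DID) is a plain GET request. The method name `com.atproto.identity.resolveHandle` comes from the public AT Protocol lexicon; the host and handle below are assumptions for illustration only.

```python
from urllib.parse import urlencode

def xrpc_get_url(host: str, method: str, **params) -> str:
    """Build the URL for an AT Protocol XRPC query (an HTTPS GET)."""
    return f"https://{host}/xrpc/{method}?{urlencode(params)}"

# Resolving a handle to a DID -- the stable identity that lets users
# change hosts without losing their content, followers, or identity.
url = xrpc_get_url("bsky.social", "com.atproto.identity.resolveHandle",
                   handle="example.bsky.social")
print(url)
# https://bsky.social/xrpc/com.atproto.identity.resolveHandle?handle=example.bsky.social
```

Because identity resolves through the protocol rather than through any one server, a client built from this codebase can point the same call at any AT-Protocol host.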

The Courts

Supreme Court Rules Against Reexamining Section 230 (theverge.com) 58

Adi Robertson writes via The Verge: The Supreme Court has declined to consider reinterpreting foundational internet law Section 230, saying it wasn't necessary for deciding the terrorism-related case Gonzalez v. Google. The ruling came alongside a separate but related ruling in Twitter v. Taamneh, where the court concluded that Twitter had not aided and abetted terrorism. In an unsigned opinion (PDF) issued today, the court said the underlying complaints in Gonzalez were weak, regardless of Section 230's applicability. The case involved the family of a woman killed in a terrorist attack suing Google, which the family claimed had violated the law by recommending terrorist content on YouTube. They sought to hold Google liable under anti-terrorism laws.

The court dismissed the complaint largely because of its unanimous ruling (PDF) in Twitter v. Taamneh. Much like in Gonzalez, a family alleged that Twitter knowingly supported terrorists by failing to remove them from the platform before a deadly attack. In a ruling authored by Justice Clarence Thomas, however, the court declared that the claims were "insufficient to establish that these defendants aided and abetted ISIS" for the attack in question. Thomas declared that Twitter's failure to police terrorist content failed the requirement for some "affirmative act" that involved meaningful participation in an illegal act. "If aiding-and-abetting liability were taken too far, then ordinary merchants could become liable for any misuse of their goods and services, no matter how attenuated their relationship with the wrongdoer," writes Thomas. That includes "those who merely deliver mail or transmit emails" becoming liable for the contents of those messages or even people witnessing a robbery becoming liable for the theft. "There are no allegations that defendants treated ISIS any differently from anyone else. Rather, defendants' relationship with ISIS and its supporters appears to have been the same as their relationship with their billion-plus other users: arm's length, passive, and largely indifferent."

For Gonzalez v. Google, "the allegations underlying their secondary-liability claims are materially identical to those at issue in Twitter," says the court. "Since we hold that the complaint in that case fails to state a claim for aiding and abetting ... it appears to follow that the complaint here likewise fails to state such a claim." Because of that, "we therefore decline to address the application of 230 to a complaint that appears to state little, if any, plausible claim for relief." [...] The Gonzalez ruling is short and declines to deal with many of the specifics of the case. But the Twitter ruling does take on a key question from Gonzalez: whether recommendation algorithms constitute actively encouraging certain types of content. Thomas appears skeptical: "To be sure, plaintiffs assert that defendants' 'recommendation' algorithms go beyond passive aid and constitute active, substantial assistance. We disagree. By plaintiffs' own telling, their claim is based on defendants' 'provision of the infrastructure which provides material support to ISIS.' Viewed properly, defendants' 'recommendation' algorithms are merely part of that infrastructure. All the content on their platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself. As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS' content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants' passive assistance into active abetting. Once the platform and sorting-tool algorithms were up and running, defendants at most allegedly stood back and watched; they are not alleged to have taken any further action with respect to ISIS."
"The interpretation may deal a blow to one common argument for adding special liability to social media: the claim that recommendation systems go above and beyond simply hosting content and explicitly encourage that content," adds Robertson. "This ruling's reasoning suggests that simply recommending something on an 'agnostic' basis -- as opposed to, in one hypothetical from Thomas, creating a system that 'consciously and selectively chose to promote content provided by a particular terrorist group' -- isn't an active form of encouragement."
The Courts

Supreme Court Sidesteps Challenge To Internet Companies' Broad Protections From Lawsuits (apnews.com) 48

The Supreme Court on Thursday sidestepped a case against Google that might have allowed more lawsuits against social media companies. From a report: The justices' decision returns to a lower court the case of a family of an American college student who was killed in an Islamic State terrorist attack in Paris. The family wants to sue Google for YouTube videos they said helped attract IS recruits and radicalize them. Google claims immunity from the lawsuit under a 1996 law that generally shields social media companies from liability for content posted by others. Lower courts agreed with Google. The justices had agreed to consider whether the legal shield is too broad. But in arguments in February, several sounded reluctant to weigh in now. In an unsigned opinion Thursday, the court wrote that it was declining to address the law at issue.
Government

Montana Becomes First US State To Ban TikTok (reuters.com) 135

Montana is now the first U.S. state to ban TikTok after Montana Governor Greg Gianforte signed legislation to ban the app from operating in the state. Reuters reports: Montana will make it unlawful for Google and Apple's app stores to offer the TikTok app within its borders. The ban takes effect Jan. 1, 2024. TikTok, which has over 150 million American users, is facing growing calls from U.S. lawmakers and state officials to ban the app nationwide over concerns about potential Chinese government influence over the platform. Gov. Gianforte, a Republican, said the bill will further "our shared priority to protect Montanans from Chinese Communist Party surveillance."

Montana, which has a population of just over 1 million people, said TikTok could face fines for each violation, plus additional fines of $10,000 per day if it violates the ban. The ban will likely face numerous legal challenges arguing that it violates the First Amendment free speech rights of users. An attempt by then-President Donald Trump to ban new downloads of TikTok and WeChat through a Commerce Department order in 2020 was blocked by multiple courts and never took effect.
The legislation that Gianforte signed also generally prohibits "the use of all social media applications that collect and provide personal information or data to foreign adversaries on government-issued devices," adds Reuters.

It's unclear if the bill signed today would effectively ban all social media in Montana, since most social media networks collect such information and share it with entities in foreign countries.
Businesses

Vice, Decayed Digital Colossus, Files for Bankruptcy (nytimes.com) 44

Vice Media has filed for bankruptcy, "punctuating a yearslong descent from a new-media darling to a cautionary tale of the problems facing the digital publishing industry," write Lauren Hirsch and Benjamin Mullin via the New York Times. The media company was valued at $5.7 billion in 2017. From the report: The bankruptcy will not interrupt daily operations for Vice's businesses, which in addition to its flagship website include the ad agency Virtue, the Pulse Films division and Refinery29, a women-focused site acquired by Vice in 2019. A group of Vice's lenders, including Fortress Investment Group and Soros Fund Management, is in the leading position to acquire the company out of bankruptcy. The group has submitted a bid of $225 million, which would be covered by its existing loans to the company. It would also take over "significant liabilities" from Vice after any deal closes. A sale process follows next. The lenders have secured a $20 million loan to continue operating Vice and then, if a better bid does not emerge, the group that includes Fortress and Soros will acquire Vice.

Investments from media titans like Disney and shrewd financial investors like TPG, which spent hundreds of millions of dollars, will be rendered worthless by the bankruptcy, cementing Vice's status among the most notable bad bets in the media industry. Like some of its peers in the digital-media industry, including BuzzFeed and Vox Media, Vice and its investors bet big on the rising power of social media networks like Facebook and Instagram, anticipating they would deliver a tide of young, upwardly mobile readers that advertisers craved. Though readers came by the millions, new media companies had trouble wringing profits from them, and the bulk of digital ad dollars went to the major tech platforms.

Power

How Off-Grid Solar Power Transforms Remote Villages (apnews.com) 71

Some 775 million people around the world didn't have electricity last year, according to the International Energy Agency. But the Associated Press points out that's changing in some of the world's most remote places — thanks to off-grid solar systems.

Here's a typical example from the world's fourth most-populous country... Before electricity came to the village a bit less than two years ago, the day ended when the sun went down. Villagers in Laindeha, on the island of Sumba in eastern Indonesia, would set aside the mats they were weaving or coffee they were sorting to sell at the market as the light faded.

A few families who could afford them would start noisy generators that rumbled into the night, emitting plumes of smoke. Some people wired lightbulbs to old car batteries, which would quickly die or burn out appliances, as they had no regulator. Children sometimes studied by makeshift oil lamps, but these occasionally burned down homes when knocked over by the wind. That's changed since grassroots social enterprise projects have brought small, individual solar panel systems to Laindeha and villages like it across the island...

Around the world, hundreds of millions of people live in communities without regular access to power, and off-grid solar systems like these are bringing limited electricity to such places years before power grids reach them... Indonesia has brought electricity to millions of people in recent years, going from 85% to nearly 97% coverage between 2005 and 2020, according to World Bank data. But there are still more than half a million people in Indonesia living in places the grid doesn't reach.

While barriers still remain, experts say off-grid solar programs on the island could be replicated across the vast archipelago nation, bringing renewable energy to remote communities.

AI

Will AI Just Turn All of Human Knowledge into Proprietary Products? (theguardian.com) 139

"Tech CEOs want us to believe that generative AI will benefit humanity," argues a column in the Guardian, adding "They are kidding themselves..."

"There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life... " AI — far from living up to all those utopian hallucinations — is much more likely to become a fearsome tool of further dispossession and despoliation...

What work are these benevolent stories doing in the culture as we encounter these strange new tools? Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon ...) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent.

This should not be legal. In the case of copyrighted material that we now know trained the models (including this newspaper), various lawsuits have been filed that will argue this was clearly illegal... The trick, of course, is that Silicon Valley routinely calls theft "disruption" — and too often gets away with it. We know this move: charge ahead into lawless territory; claim the old rules don't apply to your new tech; scream that regulation will only help China — all while you get your facts solidly on the ground. By the time we all get over the novelty of these new toys and start taking stock of the social, political and economic wreckage, the tech is already so ubiquitous that the courts and policymakers throw up their hands... These companies must know they are engaged in theft, or at least that a strong case can be made that they are. They are just hoping that the old playbook works one more time — that the scale of the heist is already so large and unfolding with such speed that courts and policymakers will once again throw up their hands in the face of the supposed inevitability of it all...

[W]e trained the machines. All of us. But we never gave our consent. They fed on humanity's collective ingenuity, inspiration and revelations (along with our more venal traits). These models are enclosure and appropriation machines, devouring and privatizing our individual lives as well as our collective intellectual and artistic inheritances. And their goal never was to solve climate change or make our governments more responsible or our daily lives more leisurely. It was always to profit off mass immiseration, which, under capitalism, is the glaring and logical consequence of replacing human functions with bots.

Thanks to long-time Slashdot reader mspohr for sharing the article.
Social Networks

Former ByteDance Exec Claims CCP 'Maintained' Access to US Data (axios.com) 26

An anonymous Slashdot reader shared this report from Axios: The Chinese Communist Party "maintained supreme access" to data belonging to TikTok parent company ByteDance, including data stored in the U.S., a former top executive claimed in a lawsuit Friday...

In a wrongful dismissal suit filed in San Francisco Superior Court, Yintao Yu said ByteDance "has served as a useful propaganda tool for the Chinese Communist Party." Yu, whose claim says he served as head of engineering for ByteDance's U.S. offices from August 2017 to November 2018, alleged that inside the Beijing-based company, the CCP "had a special office or unit, which was sometimes referred to as the 'Committee'." The "Committee" didn't work for ByteDance but "played a significant role," in part by "gui[ding] how the company advanced core Communist values," the lawsuit claims... The CCP could also access U.S. user data via a "backdoor channel in the code," the suit states...

In an interview with the New York Times, which first reported the lawsuit, Yu said promoting anti-Japanese sentiment was done without hesitation.

"The allegations come as federal officials weigh the fate of the social media giant in the U.S. amid growing concerns over national security and data privacy," the article adds.

Yu also accused ByteDance of a years-long, worldwide "scheme" of scraping data from Instagram and Snapchat to post on its own services.
HP

HP Updates Firmware, Blocks Its Printers From Using Cheaper Ink Cartridges from Rivals (telegraph.co.uk) 212

Hewlett-Packard printers recently got a firmware update that "blocks customers from using cheaper, non-HP ink cartridges," reports the Telegraph: Customers' devices were remotely updated in line with new terms which mean their printers will not work unless they are fitted with approved ink cartridges. It prevents customers from using any cartridges other than those fitted with an HP chip, which are often more expensive. If the customer tries to use a non-HP ink cartridge, the printer will refuse to print.

HP printers used to display a warning when a "third-party" ink cartridge was inserted, but now printers will simply refuse to print altogether.

The printer company said it issued the update to reduce the risk of malware attacks, saying "third-party cartridges that use non-HP chips or circuitry can pose risks to the hardware performance, print quality, and security." It also said it used regular updates to improve its services, such as introducing alerts for some customers telling them when their ink is running low. However, according to HP's website, the company also blocks the use of rival cartridges in order to "maintain the integrity of our printing systems, and protect our intellectual property".

Outraged customers have flooded social media with complaints, saying they felt "cheated" by the update. HP ink cartridges can cost more than double the price of third-party offerings... Some customers can choose to disable HP's cartridge-blocking feature in the printer's settings, HP said, but it depends on the printer model. Others will be stuck with a printer that only works if they commit to spending more on ink cartridges approved by HP.

Cellphones

Millions of Mobile Phones Come Pre-Infected With Malware, Say Researchers (theregister.com) 45

Trend Micro researchers at Black Hat Asia are warning that millions of Android devices worldwide come pre-infected with malicious firmware before the devices leave their factories. "This hardware is mainly cheapo Android mobile devices, though smartwatches, TVs, and other things are caught up in it," reports The Register. From the report: This insertion of malware began as the price of mobile phone firmware dropped, we're told. Competition between firmware distributors became so furious that eventually the providers could not charge money for their product. "But of course there's no free stuff," said [Trend Micro researcher Fyodor Yarochkin], who explained that, as a result of this cut-throat situation, firmware started to come with an undesirable feature -- silent plugins. The team analyzed dozens of firmware images looking for malicious software. They found over 80 different plugins, although many of those were not widely distributed. The plugins that were the most impactful were those that had a business model built around them, were sold on the underground, and marketed in the open on places like Facebook, blogs, and YouTube.

The objective of the malware is to steal information, or to make money from information collected or delivered. The malware turns the devices into proxies that are used to steal and sell SMS messages, take over social media and online messaging accounts, and generate revenue through adverts and click fraud. One type of plugin, the proxy plugin, allows criminals to rent out devices for up to around five minutes at a time. For example, those renting control of a device could acquire data on keystrokes, geographical location, IP address and more. "The user of the proxy will be able to use someone else's phone for a period of 1200 seconds as an exit node," said Yarochkin. He also said the team found a Facebook cookie plugin that was used to harvest activity from the Facebook app.

Through telemetry data, the researchers estimated that at least millions of infected devices exist globally, concentrated in Southeast Asia and Eastern Europe. The criminals' own self-reported figure, said the researchers, was around 8.9 million. As for where the threats are coming from, the duo wouldn't say specifically, although the word "China" showed up multiple times in the presentation, including in an origin story related to the development of the dodgy firmware. Yarochkin said the audience should consider where most of the world's OEMs are located and make their own deductions.

The team confirmed the malware was found in the phones of at least 10 vendors, and that possibly around 40 more were affected. Those seeking to avoid infected mobile phones can go some way toward protecting themselves by buying higher-end devices. That is to say, this sort of bad firmware tends to turn up at the cheaper end of the Android ecosystem, and sticking to bigger brands is a good idea, though not necessarily a guarantee of safety. "Big brands like Samsung, like Google took care of their supply chain security relatively well, but for threat actors, this is still a very lucrative market," said Yarochkin.
