Social Networks

Is Reddit Dying? (eff.org) 266

"Compared to the website's average daily volume over the past month, the 52,121,649 visits Reddit saw on June 13th represented a 6.6 percent drop..." reports Engadget (citing data provided by internet analytics firm Similarweb). [A]s many subreddits continue to protest the company's plans and its leadership contemplates policy changes that could change its relationship with moderators, the platform could see a slow but gradual decline in daily active users. That's unlikely to bode well for Reddit ahead of its planned IPO and beyond.
In fact, the Financial Times now reports that Reddit "acknowledged that several advertisers had postponed certain premium ad campaigns in order to wait for the blackouts to pass." But they also got this dire prediction from a historian who helps moderate the subreddit "r/Askhistorians" (with 1.8 million subscribers).

"If they refuse to budge in any way I do not see Reddit surviving as it currently exists. That's the kind of fire I think they're playing with."

More people had the same thought. The Reddit protests drew this response earlier this week from EFF's associate director of community organizing: This tension between these communities and their host has, again, fueled more interest in the Fediverse as a decentralized refuge... Unfortunately, discussions of Reddit-like fediverse services Lemmy and Kbin on Reddit were colored by paranoia after the company banned users and subreddits related to these projects (reportedly due to "spam"). While these accounts and subreddits have been reinstated, the potential for censorship around such projects has made a Reddit exodus feel more urgently necessary...
Saturday the EFF official reiterated their concerns when Wired asked: does this really signal the death of Reddit? "I can't see it as anything but that... [I]t's not a big collapse when a social media website starts to die, but it is a slow attrition unless they change their course. The longer they stay in their position, the more loss of users and content they're going to face."

Wired even heard a thought-provoking idea from Amy Bruckman, a regents' professor/senior associate chair at the School of Interactive Computing at Georgia Institute of Technology. Bruckman "advocates for public funding of a nonprofit version of something akin to Reddit."

Meanwhile, hundreds of people are now placing bets on whether Reddit will backtrack on its upcoming API pricing — or oust CEO Steve Huffman — according to Insider, citing reports from online betting company BetUS.

CEO Huffman's complaint that the moderators were ignoring the wishes of Reddit's users led to a funny counter-response, according to the Verge. After asking users to vote on whether to end the protest, two forums saw overwhelming support instead for the only offered alternative: the subreddits "now only allow posts about comedian and Last Week Tonight host John Oliver."

Both r/pics (more than 30 million subscribers) and r/gifs (more than 21 million subscribers) offered two options to users to vote on... The results were conclusive:

r/pics: return to normal, -2,329 votes; "only allow images of John Oliver looking sexy," 37,331 votes.
r/gifs: return to normal, -1,851 votes; only feature GIFs of John Oliver, 13,696 votes...

On Twitter, John Oliver encouraged the subreddits — and even gave them some fodder. "Dear Reddit, excellent work," he wrote to kick off a thread that included several ridiculous pictures. A spokesperson for Last Week Tonight with John Oliver didn't immediately reply to a request for comment.

Government

Texas Bans Kids From Social Media Without Parental Consent (theverge.com) 254

Texas Governor Greg Abbott has signed a bill prohibiting children under 18 from joining various social media platforms without parental consent. Similar legislation has been passed in Utah and Louisiana. The Verge reports: The bill, HB 18, requires social media companies to receive explicit consent from a minor's parent or guardian before they'd be allowed to create their own accounts starting in September of next year. It also forces these companies to prevent children from seeing "harmful" content -- like content related to eating disorders, substance abuse, or "grooming" -- by creating new filtering systems.

Texas' definition of a "digital service" is extremely broad. Under the law, parental consent would be necessary for kids trying to access nearly any site that collects identifying information, like an email address. There are some exceptions, including sites that primarily deliver educational or news content and email services. The Texas attorney general could sue companies found to have violated this law. The law's requirements to filter loosely defined "harmful material" and provide parents with control over their child's accounts mirror language in some federal legislation that has spooked civil and digital rights groups.

Like HB 18, the US Senate-led Kids Online Safety Act orders platforms to prevent minors from being exposed to content related to disordered eating and other destructive behaviors. But critics fear this language could encourage companies like Instagram or TikTok to overmoderate non-harmful content to avoid legal challenges. Overly strict parental controls could also harm kids in abusive households, allowing parents to spy on marginalized children searching for helpful resources online.

Patents

US Patent Office Proposes Rule To Make It Much Harder To Kill Bad Patents (techdirt.com) 110

An anonymous reader quotes a report from Techdirt: So, this is bad. Over the last few years, we've written plenty about the so-called "inter partes review" or "IPR" that came into being about a decade ago as part of the "America Invents Act," which was the first major change to the patent system in decades. For much of the first decade of the 2000s, patent trolls were running wild and creating a massive tax on innovation. There were so many stories of people (mostly lawyers) getting vague and broad patents that they never had any intention of commercializing, then waiting for someone to come along and build something actually useful and innovative... and then shaking them down with the threat of patent litigation. The IPR process, while not perfect, was at least an important tool in pushing back on some of the worst of the worst patents. In its most basic form, the IPR process allows nearly anyone to challenge a bad patent and have the special Patent Trial and Appeal Board (PTAB) review the patent to determine if it should have been granted in the first place. Given that a bad patent can completely stifle innovation for decades, this seems like the very least that the Patent Office should offer to try to get rid of innovation-killing bad patents.

However, patent trolls absolutely loathe the IPR process for fairly obvious reasons. It kills their terrible patents. The entire IPR process has been challenged over and over again and (thankfully) the Supreme Court said that it's perfectly fine for the Patent Office to review granted patents to see if they made a mistake. But, of course, that never stops the patent trolls. They've complained to Congress. And, now, it seems that the Patent Office itself is trying to help them out. Recently, the USPTO announced a possible change to the IPR process that would basically lead to limiting who can actually challenge bad patents, and which patents could be challenged.

The wording of the proposed changes seems to be written in a manner to be as confusing as possible. But there are a few different elements to the proposal. One part would limit who can bring challenges to patents under the IPR system, utilizing the power of the director to do a "discretionary denial." For example, it would say that "certain for-profit entities" are not allowed to bring challenges. Why? That's not clear. [...] But the more worrisome change is this one: "Recognizing the important role the USPTO plays in encouraging and protecting innovation by individual inventors, startups, and under-resourced innovators who are working to bring their ideas to market, the Office is considering limiting the impact of AIA post-grant proceedings on such entities by denying institution when certain conditions are met." Basically, if a patent holder is designated as an "individual inventor, startup" or "under-resourced innovator" then their patents are protected from the IPR process. But, as anyone studying this space well knows, patent trolls often present themselves as all three of those things (even though it's quite frequently not at all true). [...] And, again, none of this should matter. A bad patent is a bad patent. Why should the USPTO create different rules that protect bad patents? If the patent is legit, it will survive the IPR process.
The Electronic Frontier Foundation issued a response to the proposed changes: "The U.S. Patent Office has proposed new rules about who can challenge wrongly granted patents. If the rules become official, they will offer new protections to patent trolls. Challenging patents will become far more onerous, and impossible for some. The new rules could stop organizations like EFF, which used this process to fight the Personal Audio 'podcasting patent,' from filing patent challenges altogether."

The digital rights group added: "If these rules were in force, it's not clear that EFF would have been able to protect the podcasting community by fighting, and ultimately winning, a patent challenge against Personal Audio LLC. Personal Audio claimed to be an inventor-owned company that was ready to charge patent royalties against podcasters large and small. EFF crowd-funded a patent challenge and took out the Personal Audio patent after a 5-year legal battle (that included a full IPR process and multiple appeals)."
Electronic Frontier Foundation

Federal Judge Makes History In Holding That Border Searches of Cell Phones Require a Warrant (eff.org) 79

In a groundbreaking ruling, a district court judge in New York, United States v. Smith (S.D.N.Y. May 11, 2023), declared that a warrant is necessary for cell phone searches at the border, absent exigent circumstances. The Electronic Frontier Foundation (EFF) reports: The Ninth Circuit in United States v. Cano (2019) held that a warrant is required for a device search at the border that seeks data other than "digital contraband" such as child pornography. Similarly, the Fourth Circuit in United States v. Aigbekaen (2019) held that a warrant is required for a forensic device search at the border in support of a domestic criminal investigation. These courts and the Smith court were informed by Riley v. California (2014). In that watershed case, the Supreme Court held that the police must get a warrant to search an arrestee's cell phone. [...]

The Smith court's application of Riley's balancing test is nearly identical to the arguments we've made time and time again. The Smith court also cited Cano, in which the Ninth Circuit engaged extensively with EFF's amicus brief even though it didn't go as far as requiring a warrant in all cases. The Smith court acknowledged that no federal appellate court "has gone quite this far (although the Ninth Circuit has come close)."

We're pleased that our arguments are moving through the federal judiciary and finally being embraced. We hope that the Second Circuit affirms this decision and that other courts -- including the Supreme Court -- are courageous enough to follow suit and protect personal privacy.

Government

'Delete Act' Seeks To Give Californians More Power To Block Data Tracking (kqed.org) 62

On Tuesday, the Senate Judiciary Committee in Sacramento is expected to consider a new bill called "The Delete Act," or SB 362, which aims to give Californians the power to block data tracking. "The onus is on individuals to try to protect their data from an estimated 2,000-4,000 data brokers worldwide -- many of which have no other relationship with consumers beyond the trade in their data," reports KQED. "This lucrative trade is also known as surveillance advertising, or the 'ad tech' industry." From the report: EFF supports The Delete Act, or SB 362, by state Sen. Josh Becker, who represents the Peninsula. "I want to be able to hit that delete button and delete my personal information, delete the ability of these data brokers to collect and track me," said Becker, of his second attempt to pass such a bill. "These data brokers are out there analyzing, selling personal information. You know, this is a way to put a stop to it."

Tracy Rosenberg, a data privacy advocate with Media Alliance and Oakland Privacy, said she anticipates a lot of pushback from tech companies, because "making [the Delete Act] workable probably destroys their businesses as most of us, by now, don't really see the value in the aggregating and sale of our data on the open market by third parties... "It is a pretty basic-level philosophical battle about whether your personal information is, in fact, yours to share as you see appropriate and when it is personally beneficial to you, or whether it is property to be bought and sold," Rosenberg said.

Electronic Frontier Foundation

EFF Warns US 'Deserves Stronger Spyware Protections Than Biden's Executive Order' (eff.org) 31

In March U.S. President Joe Biden "signed an executive order that limits U.S. government agencies from using commercially available spyware," writes EFF senior policy analyst Matthew Guariglia.

"But that doesn't mean there will be no government use of spyware in the United States...." The executive order arrived only days before revelations that the United States, which was previously thought to have steered clear of some of the most infamous foreign spyware products, actually had a contract to test and deploy the notorious Pegasus created by Israeli company NSO Group. The contract was signed under a fake name on November 8, 2021 between an organization that acts as a front for the U.S. government and an American affiliate of NSO group. Only five days before, on November 3, 2021, the U.S. Commerce Department added NSO Group and other foreign spyware companies to a blacklist — the "Entity List for engaging in activities that are contrary to the national security or foreign policy interests of the United States." So the signing of this straw contract was in apparent breach of this ban. NSO Group is just one of the companies that should be covered by the new executive order....

Though the NSO Group's Pegasus spyware has garnered particular attention for its widespread use against human rights advocates, journalists, and politicians, the executive order did not name any company specifically, keeping the policy broad. This may lead some government agencies to think that their purchase of foreign spyware might fly under the radar if it comes from another, smaller vendor, or the vendor can plausibly deny that it is really spyware that they are selling. We urge the Biden administration to publish a non-exhaustive list of spyware companies included as part of this ban. That would send a clear message to agencies who wish to exploit any ambiguity in order to skirt the law.

The EFF applauds the U.S. order for specifying ways in which spyware is not to be used — including a ban on its use against journalists, activists, political figures, and any U.S. person "without proper legal authorization, safeguards, and oversight." And the EFF also notes positive signs of progress towards stopping government misuse of spyware:
Building upon the U.S. executive order, a global coalition of eleven countries, including Australia, Canada, Costa Rica, Denmark, France, New Zealand, Norway, Sweden, Switzerland, the United Kingdom, and the United States, are working towards a common goal of countering the misuse of commercial spyware. This alliance is committed to establishing robust guardrails and procedures that uphold fundamental human rights, civil liberties, and the rule of law, within each of their respective systems.
But the EFF also points out that the U.S. government's biggest concern appears to be spyware that is foreign-made. "While this signals discomfort with foreign-made spyware, no one should take this as an indication that the U.S. government is averse to using similar technologies developed internally, or indeed acquiring foreign spyware companies for domestic use.

"Given the government's long history of using and abusing incredibly invasive techniques, people in the United States should push for robust human rights safeguards to ensure the government won't proceed with only the minor restrictions of this executive order to rein them in."
Electronic Frontier Foundation

'The Broad, Vague RESTRICT Act Is a Dangerous Substitute For Comprehensive Data Privacy Legislation' (eff.org) 76

The recently introduced RESTRICT Act, otherwise known as the "TikTok ban," is a dangerous substitute for comprehensive data privacy legislation, writes the Electronic Frontier Foundation in a blog post. From the post: As we wrote in our initial review of the bill, the RESTRICT Act would authorize the executive branch to block 'transactions' and 'holdings' of 'foreign adversaries' that involve 'information and communication technology' and create 'undue or unacceptable risk' to national security and more. We've explained our opposition to the RESTRICT Act and urged everyone who agrees to take action against it. But we've also been asked to address some of the concerns raised by others. We do that here in this post. At its core, RESTRICT would exempt certain information services from the federal statute, known as the Berman Amendments, which protects the free flow of information in and out of the United States and supports the fundamental freedom of expression and human rights concerns. RESTRICT would give more power to the executive branch and remove many of the commonsense restrictions that exist under the Foreign Intelligence Surveillance Act (FISA) and the aforementioned Berman Amendments. But S. 686 also would do a lot more.

EFF opposes the bill, and encourages you to reach out to your representatives to ask them not to pass it. Our reasons for opposition are primarily that this bill is being used as a cudgel to protect data from foreign adversaries, but under our current data privacy laws, there are many domestic adversaries engaged in manipulative and invasive data collection as well. Separately, handing relatively unchecked power over to the executive branch to make determinations about what sort of information technologies and technology services are allowed to enter the U.S. is dangerous. If Congress is concerned about foreign powers collecting our data, it should focus on comprehensive consumer data privacy legislation that will have a real impact, and protect our data no matter what platform it's on -- TikTok, Facebook, Twitter, or anywhere else that profits from our private information. That's why EFF supports such consumer data privacy legislation. Foreign adversaries won't be able to get our data from social media companies if the social media companies aren't allowed to collect, retain, and sell it in the first place.
EFF says it's not clear if the RESTRICT Act will even result in a "ban" on TikTok. It does, however, have potential to punish people for using a VPN to access TikTok if it is restricted. In conclusion, the group says the bill is similar to a surveillance bill and is "far too broad in the power it gives to investigate potential user data."
The Courts

Internet Archive Loses in Court. Judge Rules They Can't Scan and Lend eBooks (theverge.com) 96

The Verge reports: A federal judge has ruled against the Internet Archive in Hachette v. Internet Archive, a lawsuit brought against it by four book publishers, deciding that the website does not have the right to scan books and lend them out like a library. Judge John G. Koeltl decided that the Internet Archive had done nothing more than create "derivative works," and so would have needed authorization from the books' copyright holders — the publishers — before lending them out through its National Emergency Library program. The Internet Archive says it will appeal.
The decision was "a blow to all libraries and the communities we serve," argued Chris Freeland, the director of Open Libraries at the Internet Archive. In a blog post he argued the decision "impacts libraries across the U.S. who rely on controlled digital lending to connect their patrons with books online. It hurts authors by saying that unfair licensing models are the only way their books can be read online. And it holds back access to information in the digital age, harming all readers, everywhere."
The Verge adds that the judge rejected "fair use" arguments which had previously protected a 2014 digital book preservation project by Google Books and HathiTrust: Koeltl wrote that any "alleged benefits" from the Internet Archive's library "cannot outweigh the market harm to the publishers," declaring that "there is nothing transformative about [Internet Archive's] copying and unauthorized lending," and that copying these books doesn't provide "criticism, commentary, or information about them." He notes that the Google Books use was found "transformative" because it created a searchable database instead of simply publishing copies of books on the internet.

Koeltl also dismissed arguments that the Internet Archive might theoretically have helped publishers sell more copies of their books, saying there was no direct evidence, and that it was "irrelevant" that the Internet Archive had purchased its own copies of the books before making copies for its online audience. According to data obtained during the trial, the Internet Archive currently hosts around 70,000 e-book "borrows" a day.

Thanks to long-time Slashdot reader esme for sharing the news.
The Courts

GitHub and EFF Back YouTube Ripper In Legal Battle With the RIAA (torrentfreak.com) 20

GitHub and digital rights group EFF have filed briefs supporting stream-ripping site Yout.com in its legal battle with the RIAA. GitHub warns that the lower court's decision threatens to criminalize the work of many other developers. The EFF, meanwhile, stresses that an incorrect interpretation of the DMCA harms people who use stream-rippers lawfully. TorrentFreak reports: In 2020, YouTube ripper Yout.com sued the RIAA, asking a Connecticut district court to declare that the site does not violate the DMCA's anti-circumvention provision. The music group had previously used DMCA takedown notices to remove many of Yout's appearances in Google's search results. This had a significant impact on revenues, the site argued, adding that it always believed it wasn't breaking any laws and hoped the court would agree. Last October, the Connecticut district court concluded that Yout had failed to show that it doesn't circumvent YouTube's technological protection measures. As such, it could be breaking the law. Yout operator Johnathan Nader opted to appeal the decision. Nader's attorneys filed their opening brief (PDF) last week at the Court of Appeals for the Second Circuit, asking it to reverse the lower court's decision. The YouTube ripper is not the only party calling for a reversal. Yesterday, Microsoft-owned developer platform GitHub submitted an amicus brief that argues for the same. And in a separate filing, the EFF also agrees that the lower court's decision should be overturned.

GitHub's brief starts by pointing out that the company takes no position on the ultimate resolution of this appeal, nor does it side with all of Yout's arguments. However, it does believe that the lower court's interpretation of the DMCA is dangerous. The district court held that stream rippers can violate the DMCA's anti-circumvention provision. The court noted that these tools allow people to download video and audio from YouTube, despite the streaming platform's lack of a download button. According to GitHub, this conclusion is premature, dangerous, and places other software types at risk. In the present lawsuit, GitHub reiterates that stream-ripping tools should not be outlawed. The fact that YouTube doesn't have a download button doesn't mean that tools that enable people to download videos circumvent technological access restrictions. "YouTube's decision not to provide its own 'download' button, however, is not a restriction on access to works. It merely affects how users experience them," GitHub writes. If the court order is allowed to stand, GitHub warns that a broad group of developers could be exposed to criminal liability, effectively chilling technological innovation. YouTube download tools are not the only types of software at risk, according to GitHub. There are many others that affect 'how users experience' online websites. These could also be seen as problematic, based on the district court's expansive interpretation of the DMCA. These widely accepted tools could put their creators at risk if the DMCA is interpreted too strictly, GitHub warns.

The Electronic Frontier Foundation (EFF) also submitted an amicus curiae brief (PDF) yesterday. The digital rights group takes interest in copyright cases, particularly when they get in the way of people's ability to freely use technology. In this instance, EFF points out that stream-rippers such as Yout.com provide a neutral technology with plenty of legal uses. They can be used for infringing purposes, but that's also true for existing technologies -- the printing press, for example. "Like every reproduction technology -- from the printing press to the smartphone -- these programs, colloquially called 'streamrippers,' have important lawful uses as well as infringing ones." "Video creators, educators, journalists, and human rights organizations all depend on the ability to make copies of user-uploaded videos," EFF adds. In common with GitHub, EFF notes that the absence of a download button on YouTube doesn't imply that download tools automatically violate the DMCA, especially when there are no effective download restrictions on the platform. [...] According to EFF, Yout and similar tools provide the same functions as video cassette recorders once did. They allow people to make copies of videos that are posted publicly by their creators. In addition, these tools are vital for some reporters and useful to creatives who use them for future work.

United States

FCC Nomination Stalled for One Year, Preventing Restoration of US Net Neutrality (siliconvalley.com) 85

Why hasn't America restored net neutrality protections? "President Biden's nomination to serve on the Federal Communications Commission has been stalled in the Senate for more than a year," complain the editorial boards of two Silicon Valley newspapers: Confirming Gigi Sohn would end the 2-2 deadlock on the FCC that is keeping Biden from fulfilling his campaign promise to restore net neutrality, ensuring that all internet traffic is treated equally. Polls show that 75% of Americans support net neutrality rules. They know that an open internet is essential for innovation and economic growth, for fostering the next generation of entrepreneurs....

[T]elecommunication giants such as AT&T, Verizon and Comcast don't want that to happen. They favor the status quo, which lets them pick winners and losers by charging content providers higher rates for speedier access to customers. They seek to expand the cable system model and allow kingmakers to rake in billions at the expense of smaller, new startups that struggle to gain a wider audience on their slow-speed offerings. So Republicans and a handful of Democrats are holding up Sohn's confirmation, claiming that her "radical" views disqualify her....

They also object to Sohn's current service as an Electronic Frontier Foundation board member, saying it proves she wouldn't be an unbiased and impartial FCC Commissioner. The San Francisco-based EFF is a leading nonprofit with a mission of defending digital privacy, free speech and innovation....

Enough is enough. Confirm Sohn and allow the FCC to fulfill its mission of promoting connectivity and ensuring a robust and competitive internet market.

The Courts

Supreme Court Allows Reddit Mods To Anonymously Defend Section 230 (arstechnica.com) 152

An anonymous reader quotes a report from Ars Technica: Over the past few days, dozens of tech companies have filed briefs in support of Google in a Supreme Court case that tests online platforms' liability for recommending content. Obvious stakeholders like Meta and Twitter, alongside popular platforms like Craigslist, Etsy, Wikipedia, Roblox, and Tripadvisor, urged the court to uphold Section 230 immunity in the case or risk muddying the paths users rely on to connect with each other and discover information online. Out of all these briefs, however, Reddit's was perhaps the most persuasive (PDF). The platform argued on behalf of everyday Internet users, whom it claims could be buried in "frivolous" lawsuits for frequenting Reddit, if Section 230 is weakened by the court. Unlike other companies that hire content moderators, Reddit says the content it displays is "primarily driven by humans -- not by centralized algorithms." Because of this, Reddit's brief paints a picture of trolls suing not major social media companies, but individuals who get no compensation for their work recommending content in communities. That legal threat extends to both volunteer content moderators, Reddit argued, as well as more casual users who collect Reddit "karma" by upvoting and downvoting posts to help surface the most engaging content in their communities.

"Section 230 of the Communications Decency Act famously protects Internet platforms from liability, yet what's missing from the discussion is that it crucially protects Internet users -- everyday people -- when they participate in moderation like removing unwanted content from their communities, or users upvoting and downvoting posts," a Reddit spokesperson told Ars. Reddit argues in the brief that such frivolous lawsuits have been lobbed against Reddit users and the company in the past, and Section 230 protections historically have consistently allowed Reddit users to "quickly and inexpensively" avoid litigation. [...]

The Supreme Court will have to weigh whether Reddit's arguments are valid. To help make its case defending Section 230 immunity protections for recommending content, Reddit received special permission from the Supreme Court to include anonymous comments from Reddit mods in its brief. This, Reddit's spokesperson notes, is "a significant departure from normal Supreme Court procedure." The Electronic Frontier Foundation, a nonprofit defending online privacy, championed the court's decision to allow moderators to contribute comments anonymously.
"We're happy the Supreme Court recognized the First Amendment rights of Reddit moderators to speak to the court about their concerns," EFF's senior staff attorney, Sophia Cope, told Ars. "It is quite understandable why those individuals may be hesitant to identify themselves should they be subject to liability in the future for moderating others' speech on Reddit."

"Reddit users that interact with third-party content -- including 'hosting' content on a sub-Reddit that they manage, or moderating that content -- could definitely be open to legal exposure if the Court carves out "recommending' from Section 230's protections, or otherwise narrows Section 230's reach," Cope told Ars.
Crime

San Jose Police Announce Three Stolen Vehicles Recovered Using Automatic License Plate Reader (kron4.com) 114

Saturday night in the Silicon Valley city of San Jose, the assistant police chief tweeted out praise for their recently-upgraded Automatic License Plate Readers: Officers in Air3 [police helicopter], monitoring the ALPR system, got alerted to 3 stolen cars. They directed ground units to the cars. All 3 drivers in custody! No dangerous vehicle pursuits occurred, nor were they needed.

2 drivers tried to run away. But, you can't outrun a helicopter!

There are photos — one of the vehicles appears to be a U-Haul pickup truck — and the tweet drew exactly one response, from San Jose mayor Matt Mahan: "Nice job...! Appreciate the excellent police work and great to see ALPRs having an impact. Don't steal cars in San Jose!"
Some context: The San Jose Spotlight (a nonprofit local news site) noted that prior to last year license plate readers had been mounted exclusively on police patrol cars (and in use since 2006). But last year the San Jose Police Department launched a new "pilot program" with four cameras mounted at a busy intersection, that "captured nearly 300,000 plate scans in just the last month, according to city data."

By August this had led to plans for 150 more stationary ALPR cameras, a local TV station reported. "Just this week, police said they solved an armed robbery and arrested a suspected shooter thanks to the cameras." During a forum to update the community, San Jose police also mentioned success stories in other cities like Vallejo where they've reported a 100% increase in identifying stolen vehicles. San Jose is now installing hundreds around the city and the first batch is coming in the next two to three months....

The biggest concern among those attending Wednesday's virtual forum was privacy. But the city made it clear the data is only shared with trained police officers and certain city staff, no out-of-state or federal agencies. "Anytime that someone from the San Jose Police Department accesses the ALPR system, they have to input a reason, the specific plates they are looking for and all of that information is logged so that we can keep track of how many times it's being used and what it's being used for," said Albert Gehami, Digital Privacy Officer for San Jose.
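The logged-access policy Gehami describes can be sketched in a few lines. Everything here — the function names, the record fields, the sample detections — is hypothetical and illustrative, not San Jose's actual system:

```python
import datetime

# Hypothetical sketch of an audited ALPR lookup: every query must
# record who searched, which plate, and why, so access can be
# reviewed later. All names and data here are invented.
AUDIT_LOG = []
DETECTIONS = [  # made-up sample detections
    {"plate": "8ABC123", "location": "1st & Santa Clara", "time": "2023-06-17T21:05"},
]

def lookup_plate(officer_id: str, plate: str, reason: str) -> list:
    """Return detections for a plate, logging the access first."""
    if not reason.strip():
        raise ValueError("a reason is required for every ALPR query")
    AUDIT_LOG.append({
        "officer": officer_id,
        "plate": plate,
        "reason": reason,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return [d for d in DETECTIONS if d["plate"] == plate]

hits = lookup_plate("officer-42", "8ABC123", "stolen vehicle report #1234")
print(len(hits), len(AUDIT_LOG))  # 1 1
```

The point of the design is that the audit trail is written before any results come back, so oversight bodies can review every query — which is exactly the kind of record the city says it keeps.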

More privacy concerns were raised in September, reports the San Jose Spotlight: The San Jose City Council unanimously approved a policy Tuesday that formally bans the police department from selling any license plate data, using that information for investigating a person's immigration status or for monitoring legally protected activities like protests or rallies.

Even with these new rules, some privacy advocates and community groups are still opposed to the technology. Victor Sin, chair of the Santa Clara Valley Chapter of ACLU of Northern California, expressed doubt that the readers are improving public safety. He made the comments in a letter to the council from himself and leaders of four other community organizations. "Despite claims that (automated license plate reader) systems can reduce crime, researchers have expressed concerns about the rapid acquisition of this technology by law enforcement without evidence of its efficacy," the letter reads. Groups including the Asian Law Alliance and San Jose-Silicon Valley NAACP also said the city should reduce the amount of time it keeps license plate data on file down from one year....

Mayor Sam Liccardo said he's already convinced the readers are useful, but added the council should try to find a way to measure their effect. "It's probably not a bad idea for us to decide what are the outcomes we're trying to achieve, and if there is some reasonable metric that captures that outcome in a meaningful way," Liccardo said. "Was this used to actually help us arrest anybody, or solve a crime or prevent an accident?"

An EFF position paper argues that "ALPR data is gathered indiscriminately, collecting information on millions of ordinary people. By plotting vehicle times and locations and tracing past movements, police can use stored data to paint a very specific portrait of drivers' lives, determining past patterns of behavior and possibly even predicting future ones — in spite of the fact that the vast majority of people whose license plate data is collected and stored have not even been accused of a crime.... [ALPR technology] allows officers to track everyone..."
Maybe the police officer's tweet was meant to boost public support for the technology? It's already led to a short report from another local news station: San Jose police recovered three stolen cars using their automated license-plate recognition technology (ALPR) on Saturday, according to officials with the San Jose Police Department.

Officers inside of Air3, one of SJPD's helicopters, spotted three stolen cars using ALPR before directing ground units their way. Police say no pursuits occurred, though two of the drivers tried to run away.

Privacy

CES's 'Worst in Show' Criticized Over Privacy, Security, and Environmental Threats (youtube.com) 74

"We are seeing, across the gamut, products that impact our privacy, products that create cybersecurity risks, that have overarchingly long-term environmental impacts, disposable products, and flat-out just things that maybe should not exist."

That's the CEO of the how-to repair site iFixit, introducing their third annual "Worst in Show" ceremony for the products displayed at this year's CES. But the show's slogan promises it's also "calling out the most troubling trends in tech." For example, the EFF's executive director started with two warnings. First, "If it's communicating with your phone, it's generally communicating to the cloud too." But more importantly, if a product is gathering data about you and communicating with the cloud, "you have to ask yourself: is this company selling something to me, or are they selling me to other people? And this year, as in many past years at CES, it's almost impossible to tell from the products and the advertising copy around them! They're just not telling you what their actual business model is, and because of that — you don't know what's going on with your privacy."

After warning about the specific privacy implications of a urine-analyzing add-on for smart toilets, they noted there was a close runner-up for the worst privacy: the increasing number of scam products that "are basically based on the digital version of phrenology, like trying to predict your emotions based upon reading your face or other things like that. There's a whole other category of things that claim to do things that they cannot remotely do."

To judge the worst in show by environmental impact, Consumer Reports sent the Associate Director for their Product Sustainability, Research and Testing team, who chose the 55-inch portable "Displace TV" for being powered only by four lithium-ion batteries (rather than, say, a traditional power cord).

And the "worst in show" award for repairability went to the Ember Mug 2+ — a $200 travel mug "with electronics and a battery inside...designed to keep your coffee hot." Kyle Wiens, iFixit's CEO, first noted it was a product which "does not need to exist" in a world which already has equally effective double-insulated, vacuum-insulated mugs and Thermoses. But even worse: it's battery powered, and (at least in earlier versions) that battery can't be easily removed! If you email the company asking for support on replacing the battery, Wiens claims that "they will give you a coupon on a new, disposable coffee mug. So this is the kind of product that should not exist, doesn't need to exist, and is doing active harm to the world."

"The interesting thing is people care so much about their $200 coffee mug, the new feature is 'Find My iPhone' support. So not only is it harming the environment, it's also spying on where you're located!"

The founder of SecuRepairs.org first warned about "the vast ecosystem of smart, connected products that are running really low-quality, vulnerable software that make our persons and our homes and businesses easy targets for hackers." But for the worst in show for cybersecurity award, they then chose Roku's new Smart TV, partly because smart TVs in general "are a problematic category when it comes to cybersecurity, because they're basically surveillance devices, and they're not created with security in mind." And partly because to this day it's hard to tell if Roku has fixed or even acknowledged its past vulnerabilities — and hasn't implemented a prominent bug bounty program. "They're not alone in this. This is a problem that affects electronics makers of all different shapes and sizes at CES, and it's something that as a society, we just need to start paying a lot more attention to."

And U.S. PIRG's "Right to Repair" campaign director gave the "Who Asked For This" award to Neutrogena's "SkinStacks" 3D printer for edible skin-nutrient gummies — which are personalized after phone-based face scans. ("Why just sell vitamins when you could also add in proprietary refills and biometric data harvesting.")
DRM

Unpaid Taxes Could Destroy Porn Studio Accused of Copyright Trolling (arstechnica.com) 22

Slashdot has covered the legal hijinx of Malibu Media over the years. Now Ars Technica reports that the studio could be destroyed by unpaid taxes: Over the past decade, Malibu Media has emerged as a prominent so-called "copyright troll," suing thousands of "John Does" for allegedly torrenting adult content hosted on the porn studio's website, "X-Art." Whether defendants were guilty or not didn't seem to matter to Malibu, critics claimed, as much as winning as many settlements as possible. As courts became more familiar with Malibu, however, some judges grew suspicious of the studio's litigiousness. As early as 2012, a California judge described these lawsuits as "essentially an extortion scheme," and by 2013, a Wisconsin judge ordered sanctions, agreeing with critics who said that Malibu's tactics were designed to "harass and intimidate" defendants into paying Malibu thousands in settlements.

By 2016, Malibu started losing footing in this arena — and even began fighting with its own lawyer. At that point, file-sharing lawsuits became less commonplace, with critics noting a significant reduction in Malibu's lawsuits over the next few years. Now, TorrentFreak reports that Malibu's litigation machine appears to finally be running out of steam — with its corporate status suspended in California sometime between mid-2020 and early 2021 after failing to pay taxes. Last month, a Texas court said that Malibu has until January 20 to pay what's owed in back taxes and get its corporate status reinstated. If that doesn't happen over the next few weeks, one of Malibu's last lawsuits on the books will be dismissed, potentially marking the end of Malibu's long run of alleged copyright trolling.

Government

iFixit Put Up a Right To Repair Billboard Along New York Governor's Drive To Work (pirg.org) 32

Right to Repair website iFixit put up a billboard in Albany, New York, calling for Gov. Kathy Hochul to sign the landmark Right to Repair law, which was passed overwhelmingly nearly six months ago by the state legislature. PIRG reports: Supported by Repair.org, U.S. PIRG and NYPIRG, Consumer Reports, Environment New York, the Story of Stuff Project, Sierra Club Atlantic Chapter, NRDC, Environmental Action and EFF, calls for the governor to sign the bill have increased. The legislation must advance to the governor by the end of December and be signed by January 10, 2023.

The Albany Times Union editorialized twice for the governor to sign the bill, recently noting that the bill has come under intense opposition from manufacturers: "Meanwhile, lobbyists, big corporations and a few trade organizations are pressing for a veto ... Ms. Hochul must sign the bill, and then lawmakers should get to work passing an expanded version that includes all the products that were needlessly stripped from the original. Big corporations and the lobbyists they hire won't be happy, but that shouldn't matter a bit."

Electronic Frontier Foundation

Aaron Swartz Day Commemorated With International Hackathon (eff.org) 27

Long-time Slashdot reader destinyland shares this announcement from the EFF's DeepLinks blog:

This weekend, EFF is celebrating the life and work of programmer, activist, and entrepreneur Aaron Swartz by participating in the 2022 Aaron Swartz Day and Hackathon. This year, the event will be held in person at the Internet Archive in San Francisco on Nov. 12 and Nov. 13. It will also be livestreamed; links to the livestream will be posted each morning.

Those interested in attending in-person or remotely can register for the event here.

Aaron Swartz was a digital rights champion who believed deeply in keeping the internet open. His life was cut short in 2013, after federal prosecutors charged him under the Computer Fraud and Abuse Act (CFAA) for systematically downloading academic journal articles from the online database JSTOR. Facing the prospect of a long and unjust sentence, Aaron died by suicide at the age of 26....

Those interested in working on projects in Aaron's honor can also contribute to the annual hackathon, which this year includes several projects: SecureDrop, Bad Apple, the Disability Technology Project (Sat. only), and EFF's own Atlas of Surveillance. In addition to the hackathon in San Francisco, there will also be concurrent hackathons in Ecuador, Argentina, and Brazil. For more information on the hackathon and for a full list of speakers, check out the official page for the 2022 Aaron Swartz Day and Hackathon.

Speakers this year include Chelsea Manning and Cory Doctorow, as well as Internet Archive founder Brewster Kahle, EFF executive director Cindy Cohn, and Creative Commons co-founder Lisa Rein.
Electronic Frontier Foundation

Peter Eckersley, Co-Creator of Let's Encrypt, Dies at 43 (sophos.com) 35

Seven years ago, Slashdot reader #66,542 announced "Panopticlick 2.0," a site showing how your web browser handles trackers.
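Panopticlick's core metric was how identifying each browser attribute is, expressed in bits of surprisal: the rarer your configuration, the more bits it reveals. A minimal sketch of that calculation — with invented frequency counts, not EFF's real dataset:

```python
import math
from collections import Counter

# A toy illustration of the idea behind Panopticlick: each browser
# attribute (user agent, fonts, screen size...) leaks identifying
# information, measurable as surprisal in bits: -log2(frequency).
# The counts below are made up for the example.
observed = Counter({
    "Mozilla/5.0 (X11; Linux x86_64) ...": 1,    # rare configuration
    "Mozilla/5.0 (Windows NT 10.0) ...": 700,    # common
    "Mozilla/5.0 (Macintosh) ...": 299,
})

def surprisal_bits(value: str, counts: Counter) -> float:
    """Bits of identifying information revealed by observing `value`."""
    total = sum(counts.values())
    return -math.log2(counts[value] / total)

bits = surprisal_bits("Mozilla/5.0 (X11; Linux x86_64) ...", observed)
print(f"{bits:.2f} bits")  # 9.97 bits: roughly a 1-in-1,000 user agent
```

Sum the bits across enough independent attributes and a browser becomes effectively unique — which is what the site demonstrated to visitors.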

But it was just one of the many privacy-protecting projects Peter Eckersley worked on, as a staff technologist at the EFF for more than a decade. Eckersley also co-created Let's Encrypt, which today is used by hundreds of millions of people.

Friday the EFF's director of cybersecurity announced the sudden death of Eckersley at age 43. "If you have ever used Let's Encrypt or Certbot or you enjoy the fact that transport layer encryption on the web is so ubiquitous it's nearly invisible, you have him to thank for it," the announcement says. "Raise a glass."

Peter Eckersley's web site is still online, touting "impactful privacy and cybersecurity projects" that he co-created, including not just Let's Encrypt, Certbot, and Panopticlick, but also Privacy Badger and HTTPS Everywhere. And in addition, "During the COVID-19 pandemic he convened the stop-covid.tech group, advising many groups working on privacy-preserving digital contact tracing and exposure notification, assisting with several strategy plans for COVID mitigation." You can also still find Peter Eckersley's GitHub repositories online.

But Peter "had apparently revealed recently that he had been diagnosed with cancer," according to a tribute posted online by security company Sophos, noting his impact is all around us: If you click on the padlock in your browser [2022-09-0T22:37:00Z], you'll see that this site, like our sister blog site Sophos News, uses a web certificate that's vouched for by Let's Encrypt, now a well-established Certificate Authority (CA). Let's Encrypt, as a CA, signs TLS cryptographic certificates for free on behalf of bloggers, website owners, mail providers, cloud servers, messaging services...anyone, in fact, who needs or wants a vouched-for encryption certificate, subject to some easy-to-follow terms and conditions....

Let's Encrypt wasn't the first effort to try to build a free-as-in-freedom and free-as-in-beer infrastructure for online encryption certificates, but the Let's Encrypt team was the first to build a free certificate signing system that was simple, scalable and solid. As a result, the Let's Encrypt project was soon able to gain the trust of the browser making community, to the point of quickly getting accepted as an approved certificate signer (a trusted-by-default root CA, in the jargon) by most mainstream browsers....
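The "simple, scalable" part comes from the ACME protocol (RFC 8555), which Let's Encrypt and Certbot use to automate issuance. One small, self-contained piece is the HTTP-01 key authorization a client serves to prove it controls a domain — the challenge token joined to the account key's RFC 7638 thumbprint. This sketch uses a made-up token and a truncated example key:

```python
import base64
import hashlib
import json

# Sketch of the ACME HTTP-01 key authorization (RFC 8555).
# The client serves token.thumbprint at
# /.well-known/acme-challenge/<token> to prove domain control.
# The JWK and token below are illustrative examples only.

def b64url(data: bytes) -> str:
    """Base64url without padding, as ACME requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwk_thumbprint(jwk: dict) -> str:
    # RFC 7638: SHA-256 over the JWK's required members,
    # serialized with sorted keys and no whitespace.
    canonical = json.dumps(jwk, sort_keys=True, separators=(",", ":"))
    return b64url(hashlib.sha256(canonical.encode()).digest())

def key_authorization(token: str, jwk: dict) -> str:
    return f"{token}.{jwk_thumbprint(jwk)}"

account_jwk = {"e": "AQAB", "kty": "RSA", "n": "0vx7..."}  # truncated example key
print(key_authorization("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA", account_jwk))
```

Because every step — account key, challenge, proof — is deterministic and scriptable, a tool like Certbot can renew certificates with no human in the loop, which is what made near-universal TLS practical.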

In recent years, Peter founded the AI Objectives Institute, with the aim of ensuring that we pick the right social and economic problems to solve with AI:

"We often pay more attention to how those goals are to be achieved than to what those goals should be in the first place. At the AI Objectives Institute, our goal is better goals."

Google

Dad Photographs Son for Doctor. Google Flags Him as Criminal, Notifies Police (yahoo.com) 241

"The nurse said to send photos so the doctor could review them in advance," the New York Times reports, describing how an ordeal began in February of 2021 for a software engineer named Mark who had a sick son: Mark's wife grabbed her husband's phone and texted a few high-quality close-ups of their son's groin area to her iPhone so she could upload them to the health care provider's messaging system. In one, Mark's hand was visible, helping to better display the swelling. Mark and his wife gave no thought to the tech giants that made this quick capture and exchange of digital data possible, or what those giants might think of the images. With help from the photos, the doctor diagnosed the issue and prescribed antibiotics, which quickly cleared it up....

Two days after taking the photos of his son, Mark's phone made a blooping notification noise: His account had been disabled because of "harmful content" that was "a severe violation of Google's policies and might be illegal." A "learn more" link led to a list of possible reasons, including "child sexual abuse & exploitation...." He filled out a form requesting a review of Google's decision, explaining his son's infection. At the same time, he discovered the domino effect of Google's rejection. Not only did he lose emails, contact information for friends and former colleagues, and documentation of his son's first years of life, his Google Fi account shut down, meaning he had to get a new phone number with another carrier. Without access to his old phone number and email address, he couldn't get the security codes he needed to sign in to other internet accounts, locking him out of much of his digital life....
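Automated scanning of this kind typically combines two techniques: matching uploads against databases of hashes of known abusive images, and machine-learning classifiers that flag previously unseen photos. The hash-matching half is simple to sketch — with toy byte strings, and a cryptographic hash standing in for the perceptual hashes real systems use so that re-encoded copies still match:

```python
import hashlib

# Toy sketch of hash-set matching against a database of known
# images. All byte strings here are invented; production systems
# use perceptual hashes (e.g. PhotoDNA-style), not plain SHA-256,
# so that resized or re-compressed copies are still detected.
KNOWN_HASHES = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def flags_upload(image_bytes: bytes) -> bool:
    """Return True if the upload matches a known hash."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

print(flags_upload(b"known-bad-image-bytes"))   # True
print(flags_upload(b"a family medical photo"))  # False
```

Note the limitation this story illustrates: Mark's photos were novel, so no hash list could have matched them — flagging new images falls to classifiers, and that is where false positives like his arise.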

A few days after Mark filed the appeal, Google responded that it would not reinstate the account, with no further explanation. Mark didn't know it, but Google's review team had also flagged a video he made and the San Francisco Police Department had already started to investigate him.... In December 2021, Mark received a manila envelope in the mail from the San Francisco Police Department. It contained a letter informing him that he had been investigated as well as copies of the search warrants served on Google and his internet service provider. An investigator, whose contact information was provided, had asked for everything in Mark's Google account: his internet searches, his location history, his messages and any document, photo and video he'd stored with the company. The search, related to "child exploitation videos," had taken place in February, within a week of his taking the photos of his son.

Mark called the investigator, Nicholas Hillard, who said the case was closed. Mr. Hillard had tried to get in touch with Mark but his phone number and email address hadn't worked....

Mark appealed his case to Google again, providing the police report, but to no avail.... A Google spokeswoman said the company stands by its decisions...

"The day after Mark's troubles started, the same scenario was playing out in Texas," the Times notes, quoting a technologist at the EFF who speculates other people experiencing the same thing may not want to publicize it. "There could be tens, hundreds, thousands more of these."

Reached for a comment on the incident, Google told the newspaper that "Child sexual abuse material is abhorrent and we're committed to preventing the spread of it on our platforms."
The Courts

Federal Court Upholds First Amendment Protections For Student's Off-Campus Social Media Post (eff.org) 105

"Students should not have to fear expulsion for expressing themselves on social media after school and off-campus, but that is just what happened to the plaintiff in C1.G v. Siegfried," writes Mukund Rathi via the Electronic Frontier Foundation (EFF). "Last month, the Tenth Circuit Court of Appeals ruled the student's expulsion violated his First Amendment rights. The court's opinion affirms what we argued in an amicus brief last year." From the report: We strongly support the Tenth Circuit's holding that schools cannot regulate how students use social media off campus, even to spread "offensive, controversial speech," unless they target members of the school community with "vulgar or abusive language."

The case arose when the student and his friends visited a thrift shop on a Friday night. There, they posted a picture on Snapchat with an offensive joke about violence against Jews. He deleted the post and shared an apology just a few hours later, but the school suspended and eventually expelled him. [...] The Tenth Circuit held the First Amendment protected the student's speech because "it does not constitute a true threat, fighting words, or obscenity." The "post did not include weapons, specific threats, or speech directed toward the school or its students." While the post spread widely and the school principal received emails about it, the court correctly held that this did not amount to "a reasonable forecast of substantial disruption" that would allow regulation of protected speech.

IT

Indonesia Unblocks Steam and Yahoo, But Fortnite and FIFA Are Still Banned (theverge.com) 4

Indonesia has lifted its ban on Steam and Yahoo now that both companies complied with the country's restrictive laws that regulate online activity. From a report: The Indonesian Ministry of Communication and Information (Kominfo) announced the news in a translated update on Twitter, noting that Counter-Strike: Global Offensive and Dota 2 are back online as well. Last week, Indonesia blocked access to Steam, PayPal, Yahoo, Epic Games, and Origin after the companies failed to meet a deadline to register with the country's database. This requirement is bundled with a broader law, called MR5, that Indonesia first introduced in 2020. The law gives the Indonesian government the authority to order platforms to take down content considered illegal as well as request the data of specific users. In 2021, the digital rights group Electronic Frontier Foundation (EFF) called the policy "invasive of human rights." Although PayPal has yet to comply, Indonesia unblocked access to the service for five days starting July 31st to give users a chance to withdraw money and make payments. According to the Indonesian news outlet Antara News, PayPal reportedly plans on registering with the country's database soon.

Slashdot Top Deals