Communications

FCC Proposes $5 Million Fine For Activists' Election Robocalls (axios.com) 80

The Federal Communications Commission has proposed a $5 million fine against right-wing activists Jacob Wohl and Jack Burkman for allegedly making illegal robocalls discouraging mail voting ahead of the 2020 election. From a report: The record-setting penalty from the FCC comes as the pair faces criminal charges of voter suppression in Michigan and a federal lawsuit in New York accusing them of making 85,000 robocalls to Black Americans in an attempt to keep them from voting. The FCC says Wohl and Burkman made over 1,000 pre-recorded calls to wireless phones without receiving consent for those calls, in violation of the Telephone Consumer Protection Act. The messages warned that if voters cast their ballots by mail, their "personal information will be part of a public database that will be used by police departments to track down old warrants and be used by credit card companies to collect outstanding debts," according to an FCC news release.
AI

AI-Powered Tech Put a 65-Year-Old in Jail For Almost a Year Despite 'Insufficient Evidence' (apnews.com) 98

"ShotSpotter" is an AI-powered tool that claims it can detect the sound of gunshots. Installing it can cost up to $95,000 per square mile, every year, reports the Associated Press.

There's just one problem. "The algorithm that analyzes sounds to distinguish gunshots from other noises has never been peer reviewed by outside academics or experts." "The concern about ShotSpotter being used as direct evidence is that there are simply no studies out there to establish the validity or the reliability of the technology. Nothing," said Tania Brief, a staff attorney at The Innocence Project, a nonprofit that seeks to reverse wrongful convictions.

A 2011 study commissioned by the company found that dumpsters, trucks, motorcycles, helicopters, fireworks, construction, trash pickup and church bells have all triggered false positive alerts, mistaking these sounds for gunshots. ShotSpotter CEO Ralph Clark said the company is constantly improving its audio classifications, but the system still logs a small percentage of false positives. In the past, these false alerts — and lack of alerts — have prompted cities from Charlotte, North Carolina, to San Antonio, Texas, to end their ShotSpotter contracts, the AP found.

And the potential for problems isn't just hypothetical. Just ask 65-year-old Michael Williams: Williams was jailed last August, accused of killing a young man from the neighborhood who asked him for a ride during a night of unrest over police brutality in May... "I kept trying to figure out, how can they get away with using the technology like that against me?" said Williams, speaking publicly for the first time about his ordeal. "That's not fair." Williams sat behind bars for nearly a year before a judge dismissed the case against him last month at the request of prosecutors, who said they had insufficient evidence.

Williams' experience highlights the real-world impacts of society's growing reliance on algorithms to help make consequential decisions about many aspects of public life... ShotSpotter evidence has increasingly been admitted in court cases around the country, now totaling some 200. ShotSpotter's website says it's "a leader in precision policing technology solutions" that helps stop gun violence by using "sensors, algorithms and artificial intelligence" to classify 14 million sounds in its proprietary database as gunshots or something else. But an Associated Press investigation, based on a review of thousands of internal documents, emails, presentations and confidential contracts, along with interviews with dozens of public defenders in communities where ShotSpotter has been deployed, has identified a number of serious flaws in using ShotSpotter as evidentiary support for prosecutors. AP's investigation found the system can miss live gunfire right under its microphones, or misclassify the sounds of fireworks or cars backfiring as gunshots.

Forensic reports prepared by ShotSpotter's employees have been used in court to improperly claim that a defendant shot at police, or provide questionable counts of the number of shots allegedly fired by defendants. Judges in a number of cases have thrown out the evidence... The company's methods for identifying gunshots aren't always guided solely by the technology. ShotSpotter employees can, and often do, change the source of sounds picked up by its sensors after listening to audio recordings, introducing the possibility of human bias into the gunshot detection algorithm. Employees can and do modify the location or number of shots fired at the request of police, according to court records. And in the past, city dispatchers or police themselves could also make some of these changes.

Three more eye-popping details from the AP's 4,000-word exposé:
  • "One study published in April in the peer-reviewed Journal of Urban Health examined ShotSpotter in 68 large, metropolitan counties from 1999 to 2016, the largest review to date. It found that the technology didn't reduce gun violence or increase community safety..."
  • "Forensic tools such as DNA and ballistics evidence used by prosecutors have had their methodologies examined in painstaking detail for decades, but ShotSpotter claims its software is proprietary, and won't release its algorithm..."
  • "In 2018, [ShotSpotter] acquired a predictive policing company called HunchLab, which integrates its AI models with ShotSpotter's gunshot detection data to purportedly predict crime before it happens."

Google

Google Says Geofence Warrants Make Up One-Quarter Of All US Demands (techcrunch.com) 55

For the first time, Google has published the number of geofence warrants it's historically received from U.S. authorities, providing a rare glimpse into how frequently these controversial warrants are issued. TechCrunch's Zack Whittaker reports: The figures, published Thursday, reveal that Google has received thousands of geofence warrants each quarter since 2018, at times accounting for about one-quarter of all U.S. warrants that Google receives. The data shows that the vast majority of geofence warrants are obtained by local and state authorities, with federal law enforcement accounting for just 4% of all geofence warrants served on the technology giant. According to the data, Google received 982 geofence warrants in 2018, 8,396 in 2019 and 11,554 in 2020. But the figures only provide a small glimpse into the volume of warrants received, and do not break down how often Google pushes back on overly broad requests.

Geofence warrants are also known as "reverse-location" warrants, since they seek to identify people of interest who were in the vicinity at the time a crime was committed. Police do this by asking a court to order Google, which stores vast amounts of location data to drive its advertising business, to turn over details of who was in a geographic area, such as a radius of a few hundred feet at a certain point in time, to help identify potential suspects. Google has long shied away from providing these figures, in part because geofence warrants are largely thought to be unique to Google. Law enforcement has long known that Google stores vast troves of location data on its users in a database called Sensorvault, first revealed by The New York Times in 2019.
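The core of such a request can be pictured as a simple spatial-and-temporal filter over stored location records. The sketch below is illustrative only; the record layout and field names are hypothetical, not Google's actual Sensorvault schema.

```python
# Illustrative sketch of the kind of query a geofence warrant describes:
# given stored (user, lat, lon, timestamp) records, return every user seen
# within a given radius of a point during a time window. Hypothetical schema.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class LocationPing:
    user: str
    lat: float
    lon: float
    ts: int  # Unix epoch seconds

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    R = 6_371_000  # mean Earth radius, meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def geofence_query(pings, lat, lon, radius_m, t_start, t_end):
    """Return every user with at least one ping inside the fence and window."""
    return {
        p.user
        for p in pings
        if t_start <= p.ts <= t_end
        and haversine_m(p.lat, p.lon, lat, lon) <= radius_m
    }
```

Anyone whose phone reported even a single ping inside the fence ends up in the result set, which is why critics consider these warrants inherently overbroad.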
Google spokesperson Alex Krasov said in a statement: "We vigorously protect the privacy of our users while supporting the important work of law enforcement. We developed a process specifically for these requests that is designed to honor our legal obligations while narrowing the scope of data disclosed."
AT&T

Hacker Selling Private Data Allegedly From 70 Million AT&T Customers (restoreprivacy.com) 12

An anonymous reader quotes a report from Restore Privacy: A well-known threat actor with a long list of previous breaches is selling private data that was allegedly collected from 70 million AT&T customers. We analyzed the data and found it to include Social Security numbers, dates of birth, and other private information. The hacker is asking $1 million for the entire database (direct sell) and has provided RestorePrivacy with exclusive information for this report. The threat actor goes by the name of ShinyHunters and was also behind other previous exploits that affected Microsoft, Tokopedia, Pixlr, Mashable, Minted, and more. The hacker posted the leak on an underground hacking forum earlier today, along with a sample of the data that we analyzed. AT&T initially denied the breach in a statement to RestorePrivacy. The hacker responded by saying, "they will keep denying until I leak everything." "Based on our investigation yesterday, the information that appeared in an internet chat room does not appear to have come from our systems," AT&T said in a statement. When pressed harder and asked specifically if there was no AT&T breach, the company said: "Based on our investigation, no, we don't believe this was a breach of AT&T systems."

"Given this information did not come from us, we can't speculate on where it came from or whether it is valid," they added. The hacker says they're willing to reach "an agreement" with AT&T to remove the data from sale.

The possible breach of AT&T follows a T-Mobile hack from earlier this week, which impacted some 40 million records of former and prospective customers.
Apple

We Built a CSAM System Like Apple's - the Tech Is Dangerous (washingtonpost.com) 186

An anonymous reader writes: Earlier this month, Apple unveiled a system that would scan iPhone and iPad photos for child sexual abuse material (CSAM). The announcement sparked a civil liberties firestorm, and Apple's own employees have been expressing alarm. The company insists reservations about the system are rooted in "misunderstandings." We disagree.

We wrote the only peer-reviewed publication on how to build a system like Apple's -- and we concluded the technology was dangerous. We're not concerned because we misunderstand how Apple's system works. The problem is, we understand exactly how it works.

Our research project began two years ago, as an experimental system to identify CSAM in end-to-end-encrypted online services. As security researchers, we know the value of end-to-end encryption, which protects data from third-party access. But we're also horrified that CSAM is proliferating on encrypted platforms. And we worry online services are reluctant to use encryption without additional tools to combat CSAM.

We sought to explore a possible middle ground, where online services could identify harmful content while otherwise preserving end-to-end encryption. The concept was straightforward: If someone shared material that matched a database of known harmful content, the service would be alerted. If a person shared innocent content, the service would learn nothing. People couldn't read the database or learn whether content matched, since that information could reveal law enforcement methods and help criminals evade detection.

But we encountered a glaring problem.

Our system could be easily repurposed for surveillance and censorship. The design wasn't restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.
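The structural point can be made concrete with a toy matcher. Real designs, including the one in the authors' paper and Apple's, rely on perceptual hashing and private set intersection so the database itself stays hidden; plain SHA-256 over exact bytes stands in here purely to show that the matching core is agnostic to what the database contains.

```python
# Toy content matcher, deliberately simplified. The key observation: the
# scan logic never knows or cares what kind of content the database holds.
import hashlib

def content_hash(data):
    """Stand-in fingerprint; real systems use perceptual hashes."""
    return hashlib.sha256(data).hexdigest()

def build_blocklist(known_items):
    """Whoever supplies the database controls what gets flagged."""
    return {content_hash(item) for item in known_items}

def scan(shared_item, blocklist):
    """True means 'alert the service'. Nothing here restricts the
    blocklist to any particular category of content."""
    return content_hash(shared_item) in blocklist
```

Swap in a different hash set and the identical `scan()` now hunts for leaked documents or political imagery; no user-visible part of the system changes.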
About the authors of this report: Jonathan Mayer is an assistant professor of computer science and public affairs at Princeton University. He previously served as technology counsel to then-Sen. Kamala D. Harris and as chief technologist of the Federal Communications Commission Enforcement Bureau. Anunay Kulshrestha is a graduate researcher at the Princeton University Center for Information Technology Policy and a PhD candidate in the department of computer science.
Privacy

'Apple's Device Surveillance Plan Is a Threat To User Privacy -- And Press Freedom' (freedom.press) 213

The Freedom of the Press Foundation is calling Apple's plan to scan photos on user devices to detect known child sexual abuse material (CSAM) a "dangerous precedent" that "could be misused when Apple and its partners come under outside pressure from governments or other powerful actors." They join the EFF, whistleblower Edward Snowden, and many other privacy and human rights advocates in condemning the move. Advocacy Director Parker Higgins writes: Very broadly speaking, the privacy invasions come from situations where "false positives" are generated -- that is to say, an image or a device or a user is flagged even though there are no sexual abuse images present. These kinds of false positives could happen if the matching database has been tampered with or expanded to include images that do not depict child abuse, or if an adversary could trick Apple's algorithm into erroneously matching an existing image. (Apple, for its part, has said that an accidental false positive -- where an innocent image is flagged as child abuse material for no reason -- is extremely unlikely, which is probably true.) The false positive problem most directly touches on press freedom issues when considering that first category, with adversaries that can change the contents of the database that Apple devices are checking files against. An organization that could add leaked copies of its internal records, for example, could find devices that held that data -- including, potentially, whistleblowers and journalists who worked on a given story. This could also reveal the extent of a leak if it is not yet known. Governments that could include images critical of their policies or officials could find dissidents who are exchanging those files.
[...]
Journalists, in particular, have increasingly relied on the strong privacy protections that Apple has provided even when other large tech companies have not. Apple famously refused to redesign its software to open the phone of an alleged terrorist -- not because they wanted to shield the content on a criminal's phone, but because they worried about the precedent it would set for other people who rely on Apple's technology for protection. How is this situation any different? No backdoor for law enforcement will be safe enough to keep bad actors from continuing to push it open just a little bit further. The privacy risks from this system are too extreme to tolerate. Apple may have had noble intentions with this announced system, but good intentions are not enough to save a plan that is rotten at its core.

Privacy

Afghans Scramble To Delete Digital History, Evade Biometrics (reuters.com) 203

Thousands of Afghans struggling to ensure the physical safety of their families after the Taliban took control of the country have an additional worry: that biometric databases and their own digital history can be used to track and target them. From a report: U.N. Secretary-General Antonio Guterres has warned of "chilling" curbs on human rights and violations against women and girls, and Amnesty International on Monday said thousands of Afghans - including academics, journalists and activists - were "at serious risk of Taliban reprisals." After years of a push to digitise databases in the country, and introduce digital identity cards and biometrics for voting, activists warn these technologies can be used to target and attack vulnerable groups. "We understand that the Taliban is now likely to have access to various biometric databases and equipment in Afghanistan," the Human Rights First group wrote on Twitter on Monday.

"This technology is likely to include access to a database with fingerprints and iris scans, and include facial recognition technology," the group added. The U.S.-based advocacy group quickly published a Farsi-language version of its guide on how to delete digital history - that it had produced last year for activists in Hong Kong - and also put together a manual on how to evade biometrics. Tips to bypass facial recognition include looking down, wearing things to obscure facial features, or applying many layers of makeup, the guide said, although fingerprint and iris scans were difficult to bypass.

Security

Secret Terrorist Watchlist With 2 Million Records Exposed Online (bleepingcomputer.com) 87

A secret terrorist watchlist with 1.9 million records, including classified "no-fly" records, was exposed on the internet. The list was left accessible on an Elasticsearch cluster that had no password on it. BleepingComputer reports: In July this year, Security Discovery researcher Bob Diachenko came across a plethora of JSON records in an exposed Elasticsearch cluster that piqued his interest. The 1.9 million-strong recordset contained sensitive information on people, including their names, country citizenship, gender, date of birth, passport details, and no-fly status. The exposed server was indexed by search engines Censys and ZoomEye, indicating Diachenko may not have been the only person to come across the list.

The researcher discovered the exposed database on July 19th, interestingly on a server with a Bahrain IP address rather than a US one, and reported the data leak to the U.S. Department of Homeland Security (DHS) the same day. "I discovered the exposed data on the same day and reported it to the DHS. The exposed server was taken down about three weeks later, on August 9, 2021. It's not clear why it took so long, and I don't know for sure whether any unauthorized parties accessed it," writes Diachenko in his report. The researcher considers this data leak serious, given that watchlists can include people who are suspected of illicit activity but not necessarily charged with any crime. "In the wrong hands, this list could be used to oppress, harass, or persecute people on the list and their families. It could cause any number of personal and professional problems for innocent people whose names are included in the list," says the researcher.
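The underlying misconfiguration is easy to picture: an unsecured Elasticsearch node answers plain, unauthenticated REST calls on its default port 9200. The probe below is a generic sketch (hostnames are placeholders), not the researcher's actual tooling.

```python
# Minimal sketch of the check that turns up open clusters: hit the node's
# root endpoint with no credentials and see how it answers.
import urllib.error
import urllib.request

def classify(status_code):
    """Interpret a node's answer to an unauthenticated request."""
    if status_code == 200:
        return "open"            # anyone can read cluster metadata and data
    if status_code in (401, 403):
        return "auth required"   # secured, as it should be
    return "unknown"

def probe(host, timeout=5.0):
    """Query the cluster's root endpoint with no credentials at all."""
    try:
        with urllib.request.urlopen(f"http://{host}:9200/", timeout=timeout) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as err:
        return classify(err.code)
    except OSError:
        return "unreachable"
```

A properly secured cluster rejects the request outright; an open one hands back cluster metadata, and from there its indices and documents are equally readable.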

Electronic Frontier Foundation

Edward Snowden and EFF Slam Apple's Plans To Scan Messages and iCloud Images (macrumors.com) 55

Apple's plans to scan users' iCloud Photos library against a database of child sexual abuse material (CSAM) to look for matches, and to scan children's messages for explicit content, have come under fire from privacy whistleblower Edward Snowden and the Electronic Frontier Foundation (EFF). MacRumors reports: In a series of tweets, the prominent privacy campaigner and whistleblower Edward Snowden highlighted concerns that Apple is rolling out a form of "mass surveillance to the entire world" and setting a precedent that could allow the company to scan for any other arbitrary content in the future. Snowden also noted that Apple has historically been an industry leader in terms of digital privacy, and even refused to unlock an iPhone owned by Syed Farook, one of the shooters in the December 2015 attacks in San Bernardino, California, despite being ordered to do so by the FBI and a federal judge. Apple opposed the order, noting that it would set a "dangerous precedent."

The EFF, an eminent international non-profit digital rights group, has issued an extensive condemnation of Apple's move to scan users' iCloud libraries and messages, saying that it is extremely "disappointed" that a "champion of end-to-end encryption" is undertaking a "shocking about-face for users who have relied on the company's leadership in privacy and security." The EFF highlighted how various governments around the world have passed laws that demand surveillance and censorship of content on various platforms, including messaging apps, and that Apple's move to scan messages and iCloud Photos could be legally required to encompass additional materials or easily be widened. "Make no mistake: this is a decrease in privacy for all 'iCloud Photos' users, not an improvement," the EFF cautioned.

Transportation

Infrastructure Bill Could Enable Government To Track Drivers' Travel Data (theintercept.com) 238

Presto Vivace shares a report from The Intercept: The Senate's $1.2 trillion bipartisan infrastructure bill proposes a national test program that would allow the government to collect drivers' data in order to charge them per-mile travel fees. The new revenue would help finance the Highway Trust Fund, which currently depends mostly on fuel taxes to support roads and mass transit across the country. Under the proposal, the government would collect information about the miles that drivers travel from smartphone apps, on-board devices, automakers, insurance companies, gas stations, or other means. For now, the initiative would only be a test effort -- the government would solicit volunteers who drive commercial and passenger vehicles -- but the idea still raises concerns about the government tracking people's private data.

The bill would establish an advisory board to guide the program that would include officials representing state transportation departments and the trucking industry as well as data security and consumer privacy experts. As the four-year pilot initiative goes on, the Transportation and Treasury departments would also have to keep Congress informed of how they maintain volunteers' privacy and how the per-mile fee idea could affect low-income drivers. Still, [Sean Vitka, policy counsel at Demand Progress] said the concept could put Americans' private data at risk. "We already know the government is unable to keep data like this secure, which is another reason why the government maintaining a giant database of travel information about people in the United States is a bad idea."
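It is worth noting how little data a per-mile fee actually requires. The sketch below, with a purely hypothetical rate (the bill sets no figure), computes a fee from an odometer delta alone, the privacy-preserving end of the design space, in contrast to collecting GPS trails.

```python
# A per-mile fee needs only total miles driven, not where or when they were
# driven. The rate below is invented for illustration.
RATE_PER_MILE = 0.018  # hypothetical fee in dollars; the bill specifies no rate

def quarterly_fee(odometer_start, odometer_end):
    """Fee computed from an odometer delta alone; no location data needed."""
    miles = max(0.0, odometer_end - odometer_start)
    return round(miles * RATE_PER_MILE, 2)
```

Whether the pilot settles on an odometer-style tally or on app- and sensor-based location reporting is exactly the design choice privacy advocates are worried about.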
"If you think this is a bad idea, NOW would be a good time to let your Senators and representative know," says Slashdot reader Presto Vivace.
Security

Hackers Shut Down System For Booking COVID-19 Shots in Italy's Lazio Region (reuters.com) 33

Hackers have attacked and shut down the IT systems of the company that manages COVID-19 vaccination appointments for the Lazio region surrounding Rome, the regional government said on Sunday. From a report: "A powerful hacker attack on the region's CED (database) is under way," the region said in a Facebook posting. It said all systems had been deactivated, including those of the region's health portal and vaccination network, and warned the inoculation programme could suffer a delay. "It is a very powerful hacker attack, very serious... everything is out. The whole regional CED is under attack," Lazio region's health manager Alessio D'Amato said.


Programming

Are Python Libraries Riddled With Security Holes? (techradar.com) 68

"Almost half of the packages in the official Python Package Index (PyPI) repository have at least one security issue," reports TechRadar, citing a new analysis by Finnish researchers, which even found five packages with more than a thousand issues each... The researchers used static analysis to uncover the security issues in the open source packages, which they reason end up tainting software that uses them. In total the research scanned 197,000 packages and found more than 749,000 security issues in all... Explaining their methodology, the researchers note that despite the inherent limitations of static analysis, they still found at least one security issue in about 46% of the packages in the repository. Of the issues identified, most (442,373) are of low severity and 227,426 are of moderate severity, while the 80,065 high-severity issues are concentrated in 11% of the flagged PyPI packages.
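At its simplest, static analysis of this kind walks a package's syntax tree looking for known-risky patterns. The toy scanner below illustrates the idea with Python's standard `ast` module; the rule table is invented for illustration and is far cruder than what real tools such as Bandit apply.

```python
# Toy static analyzer: parse source, walk the AST, and flag calls that
# commonly show up as security findings. Illustrative rules only.
import ast

RISKY_CALLS = {"eval": "high", "exec": "high"}  # invented severity table

def scan_source(source):
    """Return (lineno, call_name, severity) for each flagged call."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            severity = RISKY_CALLS.get(node.func.id)
            if severity:
                findings.append((node.lineno, node.func.id, severity))
    return findings
```

Run over 197,000 packages, even rules this crude would produce large raw counts, which is one reason severity breakdowns matter as much as totals.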
The Register supplies some context: Other surveys of this sort have come to similar conclusions about software package ecosystems. Last September, a group of IEEE researchers analyzed 6,673 actively used Node.js apps and found about 68 per cent depended on at least one vulnerable package... The situation is similar with package registries like Maven (for Java), NuGet (for .NET), RubyGems (for Ruby), CPAN (for Perl), and CRAN (for R). In a phone interview, Ee W. Durbin III, director of infrastructure at the Python Software Foundation, told The Register, "Things like this tend not to be very surprising. One of the most overlooked or misunderstood parts of PyPI as a service is that it's intended to be freely accessible, freely available, and freely usable. Because of that we don't make any guarantees about the things that are available there..."

Durbin welcomed the work of the Finnish researchers because it makes people more aware of issues that are common among open package management systems and because it benefits the overall health of the Python community. "It's not something we ignore but it's also not something we historically have had the resources to take on," said Durbin. That may be less of an issue going forward. According to Durbin, there's been significantly more interest over the past year in supply chain security and what companies can do to improve the situation. For the Python community, that's translated into an effort to create a package vulnerability reporting API and the Python Advisory Database, a community-run repository of PyPI security advisories that's linked to the Google-spearheaded Open Vulnerability Database.

China

Early Virus Sequences 'Mysteriously' Deleted Have Been Not-So-Mysteriously Undeleted (nytimes.com) 128

"A batch of early coronavirus data that went missing for a year has emerged from hiding," reports the New York Times. (Jesse Bloom, a virologist at the Fred Hutchinson Cancer Center in Seattle, had found copies of 13 of the deleted sequences on Google Cloud.)

Though their deletion raised some suspicions, "An odd explanation has emerged, stemming from an editorial oversight by a scientific journal," reports the Times. "And the sequences have been uploaded into a different database, overseen by the Chinese government."

The Times also notes that the researchers had already posted their early findings online in March 2020: That month, they also uploaded the sequences to an online database called the Sequence Read Archive, which is maintained by the National Institutes of Health, and submitted a paper describing their results to a scientific journal called Small. The paper was published in June 2020... [A] spokeswoman for the N.I.H. said that the authors of the study had requested in June 2020 that the sequences be withdrawn from the database. The authors informed the agency that the sequences were being updated and would be added to a different database... On July 5, more than a year after the researchers withdrew the sequences from the Sequence Read Archive and two weeks after Dr. Bloom's report was published online, the sequences were quietly uploaded to a database maintained by China National Center for Bioinformation by Ben Hu, a researcher at Wuhan University and a co-author of the Small paper.

On July 21, the disappearance of the sequences was brought up during a news conference in Beijing... According to a translation of the news conference by a journalist at the state-controlled Xinhua News Agency, the vice minister of China's National Health Commission, Dr. Zeng Yixin, said that the trouble arose when editors at Small deleted a paragraph in which the scientists described the sequences in the Sequence Read Archive. "Therefore, the researchers thought it was no longer necessary to store the data in the N.C.B.I. database," Dr. Zeng said, referring to the Sequence Read Archive, which is run by the N.I.H.

An editor at Small, which specializes in science at the micro and nano scale and is based in Germany, confirmed his account. "The data availability statement was mistakenly deleted," the editor, Plamena Dogandzhiyski, wrote in an email. "We will issue a correction very shortly, which will clarify the error and include a link to the depository where the data is now hosted." The journal posted a formal correction to that effect on Thursday.

While the researchers' first report had described their sequences as coming from patients "early in the epidemic," thus provoking intense curiosity, the sequences were, as promised, updated, to include a more specific date after they were published in the database, according to the Times. "They were taken from Renmin Hospital of Wuhan University on January 30 — almost two months after the earliest reports of Covid-19 in China."
Privacy

Estonia Says a Hacker Downloaded 286,000 ID Photos From Government Database (therecord.media) 11

Estonian officials said they arrested a local suspect last week who used a vulnerability to gain access to a government database and downloaded government ID photos for 286,438 Estonians. From a report: The attack took place earlier this month, and the suspect was arrested on July 23, Estonian police said in a press conference yesterday, July 28. The identity of the attacker was not disclosed, and he was only identified as a Tallinn-based male. Officials said the suspect discovered a vulnerability in a database managed by the Information System Authority (RIA), the Estonian government agency which manages the country's IT systems.
Facebook

Facebook, Twitter and Other Tech Giants To Target Attacker Manifestos, Far-right Militias in Database (reuters.com) 197

A counterterrorism organization formed by some of the biggest U.S. tech companies including Facebook and Microsoft is significantly expanding the types of extremist content shared between firms in a key database, aiming to crack down on material from white supremacists and far-right militias, the group told Reuters. From the report: Until now, the Global Internet Forum to Counter Terrorism's (GIFCT) database has focused on videos and images from terrorist groups on a United Nations list and so has largely consisted of content from Islamist extremist organizations such as Islamic State, al Qaeda and the Taliban. Over the next few months, the group will add attacker manifestos -- often shared by sympathizers after white supremacist violence -- and other publications and links flagged by U.N. initiative Tech Against Terrorism. It will use lists from intelligence-sharing group Five Eyes, adding URLs and PDFs from more groups, including the Proud Boys, the Three Percenters and neo-Nazis. The firms, which include Twitter and Alphabet's YouTube, share "hashes," unique numerical representations of original pieces of content that have been removed from their services. Other platforms use these to identify the same content on their own sites in order to review or remove it.
United Kingdom

Hole Blasted In Guntrader: UK Firearms Sales Website's CRM Database Breached, 111K Users' Info Spilled Online (theregister.com) 63

Criminals have hacked into a Gumtree-style website used for buying and selling firearms, making off with a 111,000-entry database containing partial information from a CRM product used by gun shops across the UK. The Register reports: The Guntrader breach earlier this week saw the theft of a SQL database powering both the Guntrader.uk buy-and-sell website and its electronic gun shop register product, comprising about 111,000 users and dating between 2016 and 17 July this year. The database contains names, mobile phone numbers, email addresses, user geolocation data, and more including bcrypt-hashed passwords. It is a severe breach of privacy not only for Guntrader but for its users: members of the UK's licensed firearms community. Guntrader spokesman Simon Baseley told The Register that Guntrader.uk had emailed all the users affected by the breach on July 21 and issued a further update yesterday.

Guntrader is roughly similar to Gumtree: users post ads along with their contact details on the website so potential purchasers can get in touch. Gun shops (known in the UK as "registered firearms dealers" or RFDs) can also use Guntrader's integrated gun register product, which is advertised as offering "end-to-end encryption" and "daily backups", making it (so Guntrader claims) "the most safe and secure gun register system on today's market." [British firearms laws say every transfer of a firearm (sale, drop-off for repair, gift, loan, and so on) must be recorded, with the vast majority of these also being mandatory to report to the police when they happen...]

The categories of data in the stolen database are: Latitude and longitude data; First name and last name; Police force that issued an RFD's certificate; Phone numbers; Fax numbers; bcrypt-hashed passwords; Postcode; Postal addresses; and User's IP addresses. Logs of payments were also included, with Coalfire's Barratt explaining that while no credit card numbers were included, something that looks like a SHA-256 hashed string was included in the payment data tables. Other payment information was limited to prices for rifles and shotguns advertised through the site.
The Register recommends you check if your data is included in the hack by visiting Have I Been Pwned. If you are affected and you used the same password on Guntrader that you used on other websites, you should change it as soon as possible.
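The leak mixes bcrypt password hashes with what appears to be a SHA-256 string in the payment tables, and the two formats are easy to tell apart by eye. A rough illustrative classifier (the heuristics are my own, not from the article):

```python
import re

def classify_hash(s: str) -> str:
    """Rough classifier for leaked hash strings (illustrative heuristics only)."""
    # Modular-crypt bcrypt format: $2b$<cost>$<22-char salt><31-char digest>
    if re.fullmatch(r"\$2[aby]\$\d{2}\$[./A-Za-z0-9]{53}", s):
        return "bcrypt"
    # 64 hex characters = 256 bits, consistent with SHA-256 output
    if re.fullmatch(r"[0-9a-f]{64}", s, re.IGNORECASE):
        return "sha256-like"
    return "unknown"
```

The distinction matters: bcrypt is deliberately slow and salted, so cracking leaked passwords is costly, whereas an unsalted SHA-256 of predictable input can often be reversed by brute force.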
Google

Google Turns AlphaFold Loose On the Entire Human Genome (arstechnica.com) 20

An anonymous reader quotes a report from Ars Technica: Just one week after Google's DeepMind AI group finally described its biology efforts in detail, the company is releasing a paper that explains how it analyzed nearly every protein encoded in the human genome and predicted its likely three-dimensional structure -- a structure that can be critical for understanding disease and designing treatments. In the very near future, all of these structures will be released under a Creative Commons license via the European Bioinformatics Institute, which already hosts a major database of protein structures. In a press conference associated with the paper's release, DeepMind's Demis Hassabis made clear that the company isn't stopping there. In addition to the work described in the paper, the company will release structural predictions for the genomes of 20 major research organisms, from yeast to fruit flies to mice. In total, the database launch will include roughly 350,000 protein structures.
[...]
At some point in the near future (possibly by the time you read this), all this data will be available on a dedicated website hosted by the European Bioinformatics Institute, a European Union-funded organization that describes itself in part as follows: "We make the world's public biological data freely available to the scientific community via a range of services and tools." The AlphaFold data will be no exception; once the above link is live, anyone can use it to download information on the human protein of their choice. Or, as mentioned above, the mouse, yeast, or fruit fly version. The 20 organisms that will see their data released are also just a start. DeepMind's Demis Hassabis said that over the next few months, the team will target every gene sequence available in DNA databases. By the time this work is done, over 100 million proteins should have predicted structures. Hassabis wrapped up his part of the announcement by saying, "We think this is the most significant contribution AI has made to science to date." It would be difficult to argue otherwise.
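Once the EBI-hosted site is live, per-protein files are expected to be addressable by UniProt accession. A hedged sketch of building such a download URL; the path scheme and version suffix are assumptions based on the launch announcement and may change:

```python
def alphafold_url(uniprot_acc: str, fmt: str = "pdb", version: int = 1) -> str:
    """Build the expected AlphaFold DB download URL for a UniProt accession.

    The path scheme (AF-<accession>-F1-model_v<N>) is an assumption from
    the database launch; check the live site for the current version.
    """
    assert fmt in ("pdb", "cif"), "structures are expected in PDB or mmCIF format"
    return (f"https://alphafold.ebi.ac.uk/files/"
            f"AF-{uniprot_acc}-F1-model_v{version}.{fmt}")

# e.g. human hemoglobin subunit alpha (UniProt accession P69905)
url = alphafold_url("P69905")
```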
Further reading: Google details its protein-folding software, academics offer an alternative (Ars Technica)
Bug

MITRE Updates List of Top 25 Most Dangerous Software Bugs (bleepingcomputer.com) 16

An anonymous reader quotes a report from BleepingComputer: MITRE has shared this year's top 25 list of most common and dangerous weaknesses plaguing software throughout the previous two years. MITRE developed the top 25 list using Common Vulnerabilities and Exposures (CVE) data from 2019 and 2020 obtained from the National Vulnerability Database (NVD) (roughly 27,000 CVEs). "A scoring formula is used to calculate a ranked order of weaknesses that combines the frequency that a CWE is the root cause of a vulnerability with the projected severity of its exploitation," MITRE explained. "This approach provides an objective look at what vulnerabilities are currently seen in the real world, creates a foundation of analytical rigor built on publicly reported vulnerabilities instead of subjective surveys and opinions, and makes the process easily repeatable."
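MITRE's published methodology normalizes a weakness's frequency in NVD data and its average CVSS severity, then multiplies the two. A sketch of that scoring step (simplified from the methodology description; treat the exact normalization details as an assumption):

```python
def top25_score(count: int, avg_cvss: float,
                counts: list[int], cvss_values: list[float]) -> float:
    """Score a CWE as normalized frequency x normalized severity x 100,
    following the min-max normalization MITRE describes."""
    fr = (count - min(counts)) / (max(counts) - min(counts))
    sv = (avg_cvss - min(cvss_values)) / (max(cvss_values) - min(cvss_values))
    return fr * sv * 100
```

A weakness that is both the most frequent root cause and the most severe on average would score 100; rarity or low severity pulls the score toward zero.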

MITRE's 2021 top 25 bugs are dangerous because they are usually easy to discover, have a high impact, and are prevalent in software released during the last two years. They can also be abused by attackers to potentially take complete control of vulnerable systems, steal targets' sensitive data, or trigger a denial-of-service (DoS) following successful exploitation. The list [here] provides insight to the community at large into the most critical and current software security weaknesses.

AI

AI Firm DeepMind Puts Database of the Building Blocks of Life Online (theguardian.com) 19

Last year the artificial intelligence group DeepMind cracked a mystery that has flummoxed scientists for decades: stripping bare the structure of proteins, the building blocks of life. Now, having amassed a database of nearly all human protein structures, the company is making the resource available online free for researchers to use. From a report: The key to understanding our basic biological machinery is its architecture. The chains of amino acids that comprise proteins twist and turn to make the most confounding of 3D shapes. It is this elaborate form that explains protein function; from enzymes that are crucial to metabolism to antibodies that fight infectious attacks. Despite years of onerous and expensive lab work that began in the 1950s, scientists have only decoded the structure of a fraction of human proteins.

DeepMind's AI program, AlphaFold, has predicted the structure of nearly all 20,000 proteins expressed by humans. In an independent benchmark test that compared predictions to known structures, the system was able to predict the shape of a protein to a good standard 95% of the time. DeepMind, which has partnered with the European Molecular Biology Laboratory's European Bioinformatics Institute (EMBL-EBI), hopes the database will help researchers to analyse how life works at an atomic scale by unpacking the apparatus that drives some diseases, make strides in the field of personalised medicine, create more nutritious crops and develop "green enzymes" that can break down plastic.

Privacy

Man Behind LinkedIn Scraping Said He Grabbed 700 Million Profiles 'For Fun' (9to5mac.com) 27

The man behind last month's scraping of LinkedIn data, which exposed the location, phone numbers, and inferred salaries of 700 million users, says that he did it "for fun" -- though he is also selling the data. 9to5Mac reports: BBC News spoke with the man who took the data, under the name Tom Liner: "How would you feel if all your information was catalogued by a hacker and put into a monster spreadsheet with millions of entries, to be sold online to the highest paying cyber-criminal? That's what a hacker calling himself Tom Liner did last month 'for fun' when he compiled a database of 700 million LinkedIn users from all over the world, which he is selling for around $5,000 [...]. In the case of Mr Liner, his latest exploit was announced at 08:57 BST in a post on a notorious hacking forum [...] 'Hi, I have 700 million 2021 LinkedIn records,' he wrote. Included in the post was a link to a sample of a million records and an invite for other hackers to contact him privately and make him offers for his database."

Liner says he was also behind the scraping of 533 million Facebook profiles back in April (you can check whether your data was grabbed): "Tom told me he created the 700 million LinkedIn database using 'almost the exact same technique' that he used to create the Facebook list. He said: 'It took me several months to do. It was very complex. I had to hack the API of LinkedIn. If you do too many requests for user data in one time then the system will permanently ban you.'"
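The hacker's remark about being "permanently banned" for too many requests describes ordinary API rate limiting. A minimal server-side sliding-window limiter sketch (the thresholds and ban behavior are illustrative, not LinkedIn's actual policy):

```python
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Sliding-window rate limiter: permanently ban any client that exceeds
    max_requests within a window_s-second window (illustrative only)."""

    def __init__(self, max_requests: int, window_s: float):
        self.max_requests = max_requests
        self.window_s = window_s
        self.hits = defaultdict(deque)  # client id -> recent request timestamps
        self.banned = set()

    def allow(self, client: str, now: float) -> bool:
        if client in self.banned:
            return False
        q = self.hits[client]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        q.append(now)
        if len(q) > self.max_requests:
            self.banned.add(client)  # "the system will permanently ban you"
            return False
        return True
```

Scrapers evade limiters like this by spreading requests across many accounts or IPs, which is why pure rate limiting rarely stops a determined bulk-collection effort on its own.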
