Bug

NVIDIA Warns Its High-End GPUs May Be Vulnerable to Rowhammer Attacks (nerds.xyz) 13

Slashdot reader BrianFagioli shared this report from Nerds.xyz: NVIDIA just put out a new security notice, and if you're running one of its powerful GPUs, you might want to pay attention. Researchers from the University of Toronto have shown that Rowhammer attacks, which are already known to affect regular DRAM, can now target GDDR6 memory on NVIDIA's high-end GPUs when ECC [error correction code] is not enabled.

They pulled this off using an A6000 card, and it worked because system-level ECC was turned off. Once it was switched on, the attack no longer worked. That tells you everything you need to know. ECC matters.

Rowhammer has been around for years. It's one of those weird memory bugs where repeatedly accessing one row in RAM can cause bits to flip in another row. Until now, this was mostly a CPU memory problem. But this research shows it can also be a GPU problem, and that should make data center admins and workstation users pause for a second.

NVIDIA is not sounding an alarm so much as reminding everyone that protections are already in place, but only if you're using the hardware properly. The company recommends enabling ECC if your GPU supports it. That includes cards in the Blackwell, Hopper, Ada, and Ampere lines, along with others used in DGX, HGX, and Jetson systems. It also includes popular workstation cards like the RTX A6000.

There's also built-in On-Die ECC in certain newer memory types like GDDR7 and HBM3. If you're lucky enough to be using a card that has it, you're automatically protected to some extent, because OD-ECC can't be turned off. It's always working in the background. But let's be real. A lot of people skip ECC because it can impact performance or because they're running a setup that doesn't make it obvious whether ECC is on or off. If you're not sure where you stand, it's time to check. NVIDIA suggests using tools like nvidia-smi or, if you're in a managed enterprise setup, working with your system's BMC or Redfish APIs to verify settings.
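
For those who want to check from a script rather than eyeball nvidia-smi output, here is a minimal Python sketch that wraps the standard nvidia-smi ECC query fields; the output handling is illustrative and is not taken from NVIDIA's notice:

```python
# Minimal sketch: ask nvidia-smi whether ECC is enabled on each GPU.
# Assumes the NVIDIA driver and nvidia-smi are installed; the query fields
# used below are standard nvidia-smi properties.
import subprocess

def ecc_status() -> None:
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,name,ecc.mode.current,ecc.mode.pending",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.strip().splitlines():
        parts = [p.strip() for p in line.split(",")]
        if len(parts) != 4:
            continue  # skip anything we can't parse cleanly
        idx, name, current, pending = parts
        print(f"GPU {idx} ({name}): ECC current={current}, pending={pending}")
        # Enabling ECC is a separate, privileged step (e.g. `nvidia-smi -i <idx> -e 1`)
        # and only takes effect after a reboot.

if __name__ == "__main__":
    ecc_status()
```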

Space

Please Don't Cut Funds For Space Traffic Control, Industry Begs Congress (theregister.com) 49

Major space industry players -- including SpaceX, Boeing, and Blue Origin -- are urging Congress to maintain funding for the TraCSS space traffic coordination program, warning that eliminating it would endanger satellite safety and potentially drive companies abroad. Under the proposed FY 2026 budget, the Office of Space Commerce's funding would be cut from $65 million to just $10 million. "That $55M cut is accomplished by eliminating the Traffic Coordination System for Space (TraCSS) program," reports The Register. From the report: "One of OSC's most important functions is to provide space traffic coordination support to US satellite operators, similar to the Federal Aviation Administration's role in air traffic control," stated letters from space companies including SpaceX, Boeing, Blue Origin, and others. The letters argue that safe space operations "in an increasingly congested space domain" are critical for modern services like broadband satellite internet and weather forecasting, but that's not all. "Likewise, a safe space operating environment is vital for continuity of national security space missions such as early warning of missile attacks on deployed US military forces," the letters added.

Industry trade groups sent the letters to the Democratic and Republican leadership of the House and Senate budget subcommittees for Commerce, Justice, Science, and Related Agencies, claiming to represent more than 450 US companies in the space, satellite, and defense sectors. The letters argue for the retention of the OSC's FY 2025 budget of $65 million, as well as keeping control of space traffic coordination within the purview of the Department of Commerce, under which the OSC is nested, and not the Department of Defense, where it was previously managed. "Successive administrations have recognized on a bipartisan basis that space traffic coordination is a global, commercial-facing function best managed by a civilian agency," the companies explained. "Keeping space traffic coordination within the Department of Commerce preserves military resources for core defense missions and prevents the conflation of space safety with military control."

In the budget request document, the government explained the Commerce Department was unable to complete "a government owned and operated public-facing database and traffic coordination system" in a timely manner. The private sector, meanwhile, "has proven they have the capability and the business model to provide civil operators" with the necessary space tracking data. But according to the OSC, TraCSS would have been ready for operations by January 2026, raising the question of why the government would kill the program so late in the game.

Security

Qantas Confirms Data Breach Impacts 5.7 Million Customers (bleepingcomputer.com) 4

Qantas has confirmed that 5.7 million customers have been impacted by a recent data breach through a third-party platform used by its contact center. The breach, attributed to the Scattered Spider threat group, exposed various personal details but did not include passwords, financial information, or passport data. BleepingComputer reports: In a new update today, Qantas has confirmed that the threat actors stole data for approximately 5.7 million customers, with varying types of data exposed in the breach:

4 million customer records are limited to name, email address and Qantas Frequent Flyer details. Of this:
- 1.2 million customer records contained name and email address.
- 2.8 million customer records contained name, email address and Qantas Frequent Flyer number. The majority of these also had tier included. A smaller subset of these had points balance and status credits included.

Of the remaining 1.7 million customers, their records included a combination of some of the data fields above and one or more of the following:
- Address - 1.3 million. This is a combination of residential addresses and business addresses including hotels for misplaced baggage delivery.
- Date of birth - 1.1 million
- Phone number (mobile, landline and/or business) - 900,000
- Gender - 400,000. This is separate to other gender identifiers like name and salutation.
- Meal preferences - 10,000

Security

Russia Blocks Ethical Hacking Legislation Over Security Concerns (theregister.com) 7

Russia's State Duma rejected legislation that would have legalized ethical hacking, citing national security concerns. Politicians worried that discovering vulnerabilities in software from hostile countries would require sharing those security flaws with foreign companies, potentially enabling strategic exploitation.

The bill also failed to explain how existing laws would accommodate white-hat hacking provisions. Russia's Ministry of Digital Development introduced the proposal in 2022, with a first draft in 2023. Individual security researchers currently face prosecution under the Russian Criminal Code for unauthorized computer access, while established cybersecurity companies can conduct limited vulnerability research.

Privacy

Swedish Bodyguards Reveal Prime Minister's Location on Fitness App (politico.eu) 18

Swedish security service members who shared details of their running and cycling routes on fitness app Strava have been accused of revealing details of the prime minister's location, including his private address. Politico: According to Swedish daily Dagens Nyheter, on at least 35 occasions bodyguards uploaded their workouts to the training app and revealed information linked to Prime Minister Ulf Kristersson, including where he goes running, details of overnight trips abroad, and the location of his private home, which is supposed to be secret.

Security

Jack Dorsey Says His 'Secure' New Bitchat App Has Not Been Tested For Security (techcrunch.com) 37

An anonymous reader quotes a report from TechCrunch: On Sunday, Block CEO and Twitter co-founder Jack Dorsey launched an open source chat app called Bitchat, promising to deliver "secure" and "private" messaging without a centralized infrastructure. The app relies on Bluetooth and end-to-end encryption, unlike traditional messaging apps that rely on the internet. Because it is decentralized, Bitchat has the potential to be a secure app in high-risk environments where the internet is monitored or inaccessible. According to Dorsey's white paper detailing the app's protocols and privacy mechanisms, Bitchat's system design "prioritizes" security.

The claims that the app is secure, however, are already facing scrutiny from security researchers, given that the app and its code have not been reviewed or tested for security issues at all -- by Dorsey's own admission. Since launching, Dorsey has added a warning to Bitchat's GitHub page: "This software has not received external security review and may contain vulnerabilities and does not necessarily meet its stated security goals. Do not use it for production use, and do not rely on its security whatsoever until it has been reviewed." This warning now also appears on Bitchat's main GitHub project page but was not there at the time the app debuted.

As of Wednesday, Dorsey had added "Work in progress" next to the warning on GitHub. This latest disclaimer came after security researcher Alex Radocea found that it's possible to impersonate someone else and trick a person's contacts into thinking they are talking to the legitimate contact, as the researcher explained in a blog post. Radocea wrote that Bitchat has a "broken identity authentication/verification" system that allows an attacker to intercept someone's "identity key" and "peer id pair" -- essentially a digital handshake that is supposed to establish a trusted connection between two people using the app. Bitchat calls these "Favorite" contacts and marks them with a star icon. The goal of this feature is to allow two Bitchat users to interact, knowing that they are talking to the same person they talked to before.
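
To make the gap concrete, the general fix for this class of bug is to have the pinned identity key vouch for the announced peer ID instead of trusting the pair as received. Below is a minimal illustrative sketch of that idea using the Python cryptography package; it is not Bitchat's protocol or code:

```python
# Illustrative sketch (not Bitchat's actual code): bind a peer ID to a
# long-term identity key with a signature, so a "favorite" contact can be
# re-verified instead of trusted on sight.  Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# Alice's long-term identity key; the public half is what Bob pins as a "favorite".
identity_key = Ed25519PrivateKey.generate()
pinned_public = identity_key.public_key()

def announce(peer_id: bytes) -> tuple[bytes, bytes]:
    """Alice signs her current (possibly ephemeral) peer ID with her identity key."""
    return peer_id, identity_key.sign(peer_id)

def verify(pinned: Ed25519PublicKey, peer_id: bytes, sig: bytes) -> bool:
    """Bob checks that the announced peer ID really comes from the pinned identity."""
    try:
        pinned.verify(sig, peer_id)
        return True
    except InvalidSignature:
        return False

peer_id, sig = announce(b"peer-1234")
assert verify(pinned_public, peer_id, sig)           # legitimate announcement
assert not verify(pinned_public, b"peer-9999", sig)  # attacker swapping in their own peer ID fails
```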

The Internet

Browser Extensions Turn Nearly 1 Million Browsers Into Website-Scraping Bots (arstechnica.com) 28

Over 240 browser extensions with nearly a million total installs have been covertly turning users' browsers into web-scraping bots. "The extensions serve a wide range of purposes, including managing bookmarks and clipboards, boosting speaker volumes, and generating random numbers," reports Ars Technica. "The common thread among all of them: They incorporate MellowTel-js, an open source JavaScript library that allows developers to monetize their extensions." Ars Technica reports: Some of the data swept up in the collection free-for-all included surveillance videos hosted on Nest, tax returns, billing invoices, business documents, and presentation slides posted to, or hosted on, Microsoft OneDrive and Intuit.com, vehicle identification numbers of recently bought automobiles along with the names and addresses of the buyers, patient names and the doctors they saw, travel itineraries hosted on Priceline, Booking.com, and airline websites, Facebook Messenger attachments and Facebook photos, even when the photos were set to be private. The dragnet also collected proprietary information belonging to Tesla, Blue Origin, Amgen, Merck, Pfizer, Roche, and dozens of other companies.

John Tuckner, the researcher who compiled the list, said in an email Wednesday that the most recent status of the affected extensions is:

- Of 45 known Chrome extensions, 12 are now inactive.
- Of 129 Edge extensions incorporating the library, eight are now inactive.
- Of 71 affected Firefox extensions, two are now inactive.

Some of the inactive extensions were removed for malware explicitly. Others have removed the library in more recent updates. A complete list of extensions found by Tuckner is here.
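
For anyone who wants a rough local check, one heuristic is to scan a browser profile's extension files for the library's name. The sketch below assumes Chrome's default Linux profile path and a simple "mellowtel" string match, so a clean result proves nothing:

```python
# Rough heuristic sketch: scan locally installed Chrome extensions for files
# that mention MellowTel. The profile path (Linux default) and the search
# string are assumptions; absence of a match is not proof of anything.
from pathlib import Path

EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"
NEEDLE = b"mellowtel"

def scan(ext_dir: Path = EXT_DIR) -> None:
    if not ext_dir.exists():
        print(f"No extension directory at {ext_dir}")
        return
    for js_file in ext_dir.rglob("*.js"):
        try:
            if NEEDLE in js_file.read_bytes().lower():
                # The extension ID is the first path component under Extensions/.
                ext_id = js_file.relative_to(ext_dir).parts[0]
                print(f"Possible MellowTel reference: extension {ext_id} ({js_file.name})")
        except OSError:
            pass  # unreadable file; skip

if __name__ == "__main__":
    scan()
```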

AI

McDonald's AI Hiring Bot Exposed Millions of Applicants' Data To Hackers 25

An anonymous reader quotes a report from Wired: If you want a job at McDonald's today, there's a good chance you'll have to talk to Olivia. Olivia is not, in fact, a human being, but instead an AI chatbot that screens applicants, asks for their contact information and resume, directs them to a personality test, and occasionally makes them "go insane" by repeatedly misunderstanding their most basic questions. Until last week, the platform that runs the Olivia chatbot, built by artificial intelligence software firm Paradox.ai, also suffered from absurdly basic security flaws. As a result, virtually any hacker could have accessed the records of every chat Olivia had ever had with McDonald's applicants -- including all the personal information they shared in those conversations -- with tricks as straightforward as guessing the username and password "123456."

On Wednesday, security researchers Ian Carroll and Sam Curry revealed that they found simple methods to hack into the backend of the AI chatbot platform on McHire.com, McDonald's website that many of its franchisees use to handle job applications. Carroll and Curry, hackers with a long track record of independent security testing, discovered that simple web-based vulnerabilities -- including guessing one laughably weak password -- allowed them to access a Paradox.ai account and query the company's databases that held every McHire user's chats with Olivia. The data appears to include as many as 64 million records, including applicants' names, email addresses, and phone numbers.

Carroll says he only discovered that appalling lack of security around applicants' information because he was intrigued by McDonald's decision to subject potential new hires to an AI chatbot screener and personality test. "I just thought it was pretty uniquely dystopian compared to a normal hiring process, right? And that's what made me want to look into it more," says Carroll. "So I started applying for a job, and then after 30 minutes, we had full access to virtually every application that's ever been made to McDonald's going back years."
Paradox.ai confirmed the security findings, while maintaining that only a small portion of the accessed records contained personal data. The company stated that the weak-password account ("123456") was accessed by the researchers and no one else. To prevent future issues, Paradox is launching a bug bounty program. "We do not take this matter lightly, even though it was resolved swiftly and effectively," Paradox.ai's chief legal officer, Stephanie King, told WIRED in an interview. "We own this."

In a statement to WIRED, McDonald's agreed that Paradox.ai was to blame. "We're disappointed by this unacceptable vulnerability from a third-party provider, Paradox.ai. As soon as we learned of the issue, we mandated Paradox.ai to remediate the issue immediately, and it was resolved on the same day it was reported to us," the statement reads. "We take our commitment to cyber security seriously and will continue to hold our third-party providers accountable to meeting our standards of data protection."

AMD

AMD Warns of New Meltdown, Spectre-like Bugs Affecting CPUs (theregister.com) 26

AMD is warning users of a newly discovered form of side-channel attack affecting a broad range of its chips that could lead to information disclosure. Register: Akin to Meltdown and Spectre, the Transient Scheduler Attack (TSA) comprises four vulnerabilities that AMD said it discovered while looking into a Microsoft report about microarchitectural leaks.

The four bugs do not appear too venomous at face value -- two have medium-severity ratings while the other two are rated "low." The low-level nature of the exploit has nonetheless led Trend Micro and CrowdStrike to assess the threat as "critical."

The reason for the low severity scores is the high degree of complexity involved in a successful attack -- AMD said it could only be carried out by an attacker able to run arbitrary code on a target machine. TSA affects AMD processors (desktop, mobile, and datacenter models), including 3rd Gen and 4th Gen EPYC chips -- the full list is here.

AI

Linux Foundation Adopts A2A Protocol To Help Solve One of AI's Most Pressing Challenges 38

An anonymous reader quotes a report from ZDNet: The Linux Foundation announced at the Open Source Summit in Denver that it will now host the Agent2Agent (A2A) protocol. Initially developed by Google and now supported by more than 100 leading technology companies, A2A is a crucial new open standard for secure and interoperable communication between AI agents. In his keynote presentation, Mike Smith, a Google staff software engineer, told the conference that the A2A protocol has evolved to make it easier to add custom extensions to the core specification. Additionally, the A2A community is working on making it easier to assign unique identities to AI agents, thereby improving governance and security.

The A2A protocol is designed to solve one of AI's most pressing challenges: enabling autonomous agents -- software entities capable of independent action and decision-making -- to discover each other, securely exchange information, and collaborate across disparate platforms, vendors, and frameworks. Under the hood, A2A does this work by creating an AgentCard. An AgentCard is a JavaScript Object Notation (JSON) metadata document that describes an agent's purpose and provides instructions on how to access it via a web URL. A2A also leverages widely adopted web standards, such as HTTP, JSON-RPC, and Server-Sent Events (SSE), to ensure broad compatibility and ease of integration. By providing a standardized, vendor-neutral communication layer, A2A breaks down the silos that have historically limited the potential of multi-agent systems.
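
For a feel of what that looks like, here is a hand-rolled AgentCard-style document built as a Python dict and dumped to JSON; the field names are illustrative rather than copied from the A2A specification:

```python
# Illustrative sketch of an AgentCard-style JSON document; the field names
# are chosen for readability and are not copied from the A2A specification.
import json

agent_card = {
    "name": "hotel-booking-agent",
    "description": "Finds and books hotel rooms on behalf of other agents.",
    "url": "https://agents.example.com/hotel-booking",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {
        "streaming": True,          # e.g. supports Server-Sent Events
        "pushNotifications": False,
    },
    "skills": [
        {
            "id": "book-room",
            "description": "Reserve a room given dates, city, and budget.",
        }
    ],
}

# A client would typically fetch a document like this over HTTPS from a
# well-known URL, then use it to decide how to call the agent (e.g. via JSON-RPC).
print(json.dumps(agent_card, indent=2))
```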

For security, A2A comes with enterprise-grade authentication and authorization built in, including support for JSON Web Tokens (JWTs), OpenID Connect (OIDC), and Transport Layer Security (TLS). This approach ensures that only authorized agents can participate in workflows, protecting sensitive data and agent identities. While the security foundations are in place, developers at the conference acknowledged that integrating them, particularly authenticating agents, will be a hard slog.
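
As a rough illustration of the authorization side, the sketch below issues and verifies a short-lived bearer token for an agent using the PyJWT library; it is a generic JWT example, not code from the A2A project:

```python
# Generic JWT sketch (PyJWT), not A2A project code: issue a short-lived token
# for an agent and verify it before letting the agent join a workflow.
import time
import jwt  # pip install PyJWT

SECRET = "demo-shared-secret"  # real deployments would use OIDC / asymmetric keys

def issue_token(agent_id: str) -> str:
    claims = {"sub": agent_id, "aud": "a2a-workflow", "exp": int(time.time()) + 300}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token: str) -> str:
    # Raises jwt.InvalidTokenError if the token is expired, tampered with,
    # or issued for a different audience.
    claims = jwt.decode(token, SECRET, algorithms=["HS256"], audience="a2a-workflow")
    return claims["sub"]

token = issue_token("hotel-booking-agent")
print(verify_token(token))  # "hotel-booking-agent"
```
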
Antje Barth, an Amazon Web Services (AWS) principal developer advocate for generative AI, explained what the adoption of A2A will mean for IT professionals: "Say you want to book a train ride to Copenhagen, then a hotel there, and look maybe for a fancy restaurant, right? You have inputs and individual tasks, and A2A adds more agents to this conversation, with one agent specializing in hotel bookings, another in restaurants, and so on. A2A enables agents to communicate with each other, hand off tasks, and finally brings the feedback to the end user."

Jim Zemlin, executive director of the Linux Foundation, said: "By joining the Linux Foundation, A2A is ensuring the long-term neutrality, collaboration, and governance that will unlock the next era of agent-to-agent powered productivity." Zemlin expects A2A to become a cornerstone for building interoperable, multi-agent AI systems.

Security

Activision Took Down Call of Duty Game After PC Players Were Hacked (techcrunch.com) 21

Activision removed "Call of Duty: WWII" from the Microsoft Store and Game Pass after hackers exploited a security vulnerability that allowed them to compromise players' computers, TechCrunch reported Tuesday, citing a source. The gaming giant took the 2017 first-person shooter offline last week while investigating what it initially described only as "reports of an issue."

Players posted on social media claiming their systems had been hacked while playing the game. The vulnerability was a remote code execution exploit that enables attackers to install malware and take control of victims' devices. The Microsoft Store and Game Pass versions contained an unpatched security flaw that had been fixed in other versions of the game.

Android

Unless Users Take Action, Android Will Let Gemini Access Third-Party Apps (arstechnica.com) 74

Google is implementing a change that will enable its Gemini AI engine to interact with third-party apps, such as WhatsApp, even when users previously configured their devices to block such interactions. ArsTechnica: Users who don't want their previous settings to be overridden may have to take action. An email Google sent recently informing users of the change linked to a notification page that said that "human reviewers (including service providers) read, annotate, and process" the data Gemini accesses.

The email provides no useful guidance for preventing the changes from taking effect. The email said users can block the apps that Gemini interacts with, but even in those cases, data is stored for 72 hours. The email never explains how users can fully extricate Gemini from their Android devices and seems to contradict itself on how or whether this is even possible.

The Courts

Samsung and Epic Games Call a Truce In App Store Lawsuit (arstechnica.com) 12

An anonymous reader quotes a report from Ars Technica: Epic Games, buoyed by the massive success of Fortnite, has spent the last few years throwing elbows in the mobile industry to get its app store on more phones. It scored an antitrust win against Google in late 2023, and the following year it went after Samsung for deploying "Auto Blocker" on its Android phones, which would make it harder for users to install the Epic Games Store. Now, the parties have settled the case just days before Samsung is set to unveil its latest phones.

The Epic Store drama began several years ago when the company defied Google and Apple rules about accepting outside payments in the mega-popular Fortnite. Both stores pulled the app, and Epic sued. Apple emerged victorious, with Fortnite only returning to the iPhone recently. Google, however, lost the case after Epic showed it worked behind the scenes to stymie the development of app stores like Epic's. Google is still working to avoid penalties in that long-running case, but Epic thought it smelled a conspiracy last year. It filed a similar lawsuit against Samsung, accusing it of implementing a feature to block third-party app stores. The issue comes down to the addition of a feature to Samsung phones called Auto Blocker, which is similar to Google's new Advanced Protection in Android 16. It protects against attacks over USB, disables link previews, and scans apps more often for malicious activity. Most importantly, it blocks app sideloading. Without sideloading, there's no way to install the Epic Games Store or any of the content inside it.

Auto Blocker is enabled by default on Samsung phones, but users can opt out during setup. Epic claimed in its suit that the sudden inclusion of this feature was a sign that Google was working with Samsung to stand in the way of alternative app stores again. Epic has apparently gotten what it wanted from Samsung -- CEO Tim Sweeney has announced that Epic is dropping the case in light of a new settlement.
Sweeney said Samsung "will address Epic's concerns," without elaborating on the details. Samsung may stop making Auto Blocker the default or create a whitelist of apps, like the Epic Games Store, that can bypass Auto Blocker. Another possibility is that Epic and select third-party stores are granted special access while Auto Blocker remains on for others, balancing security and openness.

A "more interesting outcome," according to Ars, would be for Samsung to pre-install the Epic Games Store on its new phones.

AI

Is China Quickly Eroding America's Lead in the Global AI Race? (msn.com) 136

China "is pouring money into building an AI supply chain with as little reliance on the U.S. as possible," reports the Wall Street Journal.

And now Chinese AI companies "are loosening the U.S.'s global stranglehold on AI," reports the Wall Street Journal, "challenging American superiority and setting the stage for a global arms race in the technology." In Europe, the Middle East, Africa and Asia, users ranging from multinational banks to public universities are turning to large language models from Chinese companies such as startup DeepSeek and e-commerce giant Alibaba as alternatives to American offerings such as ChatGPT... Saudi Aramco, the world's largest oil company, recently installed DeepSeek in its main data center. Even major American cloud service providers such as Amazon Web Services, Microsoft and Google offer DeepSeek to customers, despite the White House banning use of the company's app on some government devices over data-security concerns.

OpenAI's ChatGPT remains the world's predominant AI consumer chatbot, with 910 million global downloads compared with DeepSeek's 125 million, figures from researcher Sensor Tower show. American AI is widely seen as the industry's gold standard, thanks to advantages in computing semiconductors, cutting-edge research and access to financial capital. But as in many other industries, Chinese companies have started to snatch customers by offering performance that is nearly as good at vastly lower prices. A study of global competitiveness in critical technologies released in early June by researchers at Harvard University found China has advantages in two key building blocks of AI, data and human capital, that are helping it keep pace...

Leading Chinese AI companies — which include Tencent and Baidu — further benefit from releasing their AI models open-source, meaning users are free to tweak them for their own purposes. That encourages developers and companies globally to adopt them. Analysts say it could also pressure U.S. rivals such as OpenAI and Anthropic to justify keeping their models private and the premiums they charge for their service... On Latenode, a Cyprus-based platform that helps global businesses build custom AI tools for tasks including creating social-media and marketing content, as many as one in five users globally now opt for DeepSeek's model, according to co-founder Oleg Zankov. "DeepSeek is overall the same quality but 17 times cheaper," Zankov said, which makes it particularly appealing for clients in places such as Chile and Brazil, where money and computing power aren't as plentiful...

The less dominant American AI companies are, the less power the U.S. will have to set global standards for how the technology should be used, industry analysts say. That opens the door for Beijing to use Chinese models as a Trojan horse for disseminating information that reflects its preferred view of the world, some warn.... The U.S. also risks losing insight into China's ambitions and AI innovations, according to Ritwik Gupta, AI policy fellow at the University of California, Berkeley. "If they are dependent on the global ecosystem, then we can govern it," said Gupta. "If not, China is going to do what it is going to do, and we won't have visibility."

The article also warns of other potential issues:
  • "Further down the line, a breakdown in U.S.-China cooperation on safety and security could cripple the world's capacity to fight future military and societal threats from unrestrained AI."
  • "The fracturing of global AI is already costing Western makers of computer chips and other hardware billions in lost sales... Adoption of Chinese models globally could also mean lost market share and earnings for AI-related U.S. firms such as Google and Meta."

Programming

Microsoft Open Sources Copilot Chat for VS Code on GitHub (nerds.xyz) 18

"Microsoft has released the source code for the GitHub Copilot Chat extension for VS Code under the MIT license," reports BleepingComputer. This provides the community access to the full implementation of the chat-based coding assistant, including the implementation of "agent mode," what contextual data is sent to large language models (LLMs), and the design of system prompts. The GitHub repository hosting the code also details telemetry collection mechanisms, addressing long-standing questions about data transparency in AI-assisted coding tools...

As the VS Code team explained previously, shifts in the AI tooling landscape, like the rapid growth of the open-source AI ecosystem and a more level playing field for all, have reduced the need for secrecy around prompt engineering and UI design. At the same time, increased targeting of development tools by malicious actors has increased the need for crowdsourcing contributions to rapidly pinpoint problems and develop effective fixes. Essentially, openness is now considered superior from a security perspective.

"If you've been hesitant to adopt AI tools because you don't trust the black box behind them, this move opensources-github-copilot-chat-vscode/offers something rare these days: transparency," writes Slashdot reader BrianFagioli" Now that the extension is open source, developers can audit how agent mode actually works. You can also dig into how it manages your data, customize its behavior, or build entirely new tools on top of it. This could be especially useful in enterprise environments where compliance and control are non negotiable.

It is worth pointing out that the backend models powering Copilot remain closed source. So no, you won't be able to self host the whole experience or train your own Copilot. But everything running locally in VS Code is now fair game. Microsoft says it is planning to eventually merge inline code completions into the same open source package too, which would make Copilot Chat the new hub for both chat and suggestions.

AI

XBOW's AI-Powered Pentester Grabs Top Rank on HackerOne, Raises $75M to Grow Platform (csoonline.com) 10

We're living in a new world now — one where it's an AI-powered penetration tester that "now tops an eminent US security industry leaderboard that ranks red teamers based on reputation." CSO Online reports: On HackerOne, which connects organizations with ethical hackers to participate in their bug bounty programs, "Xbow" scored notably higher than 99 other hackers in identifying and reporting enterprise software vulnerabilities. It's a first in bug bounty history, according to the company that operates the eponymous bot...

Xbow is a fully autonomous AI-driven penetration tester (pentester) that requires no human input, but, its creators said, "operates much like a human pentester" that can scale rapidly and complete comprehensive penetration tests in just a few hours. According to its website, it passes 75% of web security benchmarks, accurately finding and exploiting vulnerabilities.

Xbow submitted nearly 1,060 vulnerabilities to HackerOne, including remote code execution, information disclosures, cache poisoning, SQL injection, XML external entities, path traversal, server-side request forgery (SSRF), cross-site scripting, and secret exposure. The company said it also identified a previously unknown vulnerability in Palo Alto's GlobalProtect VPN platform that impacted more than 2,000 hosts. Of the vulnerabilities Xbow submitted over the last 90 days, 54 were classified as critical, 242 as high and 524 as medium in severity. The bug bounty programs it submitted to have resolved 130 vulnerabilities, and 303 are classified as triaged.

Notably, though, roughly 45% of the vulnerabilities it found are still awaiting resolution, highlighting the "volume and impact of the submissions across live targets," Nico Waisman, Xbow's head of security, wrote in a blog post this week... To further hone the technology, the company developed "validators," automated peer reviewers that confirm each uncovered vulnerability, Waisman explained.

"As attackers adopt AI to automate and accelerate exploitation, defenders must meet them with even more capable systems," XBOW's CEO said this week, as the company raised $75 million in Series B funding to grow its platform, bringing its total funding to $117 million. Help Net Security reports: With the new funding, XBOW plans to grow its engineering team and expand its go-to-market efforts. The product is now generally available, and the company says it is working with large banks, tech firms, and other organizations that helped shape the platform during its early testing phase. XBOW's long-term goal is to help security teams stay ahead of adversaries using advanced automation. As attackers increasingly turn to AI, the company argues that defenders will need equally capable systems to match their speed and sophistication.

Bug

Two Sudo Vulnerabilities Discovered and Patched (thehackernews.com) 20

In April researchers responsibly disclosed two security flaws found in Sudo "that could enable local attackers to escalate their privileges to root on susceptible machines," reports The Hacker News. "The vulnerabilities have been addressed in Sudo version 1.9.17p1 released late last month." Stratascale researcher Rich Mirch, who is credited with discovering and reporting the flaws, said CVE-2025-32462 has managed to slip through the cracks for over 12 years. It is rooted in Sudo's "-h" (host) option, which makes it possible to list a user's sudo privileges for a different host. The feature was enabled in September 2013. However, the bug made it possible to execute, on the local machine, any command that the sudoers policy permits only on a different remote host, simply by running Sudo with the host option referencing that unrelated host. "This primarily affects sites that use a common sudoers file that is distributed to multiple machines," Sudo project maintainer Todd C. Miller said in an advisory. "Sites that use LDAP-based sudoers (including SSSD) are similarly impacted."

CVE-2025-32463, on the other hand, leverages Sudo's "-R" (chroot) option to run arbitrary commands as root, even if they are not listed in the sudoers file. It's also a critical-severity flaw. "The default Sudo configuration is vulnerable," Mirch said. "Although the vulnerability involves the Sudo chroot feature, it does not require any Sudo rules to be defined for the user. As a result, any local unprivileged user could potentially escalate privileges to root if a vulnerable version is installed...."

Miller said the chroot option will be removed completely from a future release of Sudo and that supporting a user-specified root directory is "error-prone."
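
Since the fixes shipped in 1.9.17p1, one quick sanity check is to compare the locally installed version against that release. The sketch below assumes the usual "Sudo version X.Y.ZpN" banner format printed by sudo --version:

```python
# Minimal sketch: check whether the installed sudo is at least 1.9.17p1,
# the release The Hacker News says contains the fixes. Parsing of the
# `sudo --version` banner is a best-effort assumption about its format.
import re
import subprocess

FIXED = (1, 9, 17, 1)  # 1.9.17p1

def installed_sudo_version() -> tuple[int, int, int, int]:
    banner = subprocess.run(["sudo", "--version"], capture_output=True,
                            text=True, check=True).stdout.splitlines()[0]
    m = re.search(r"version (\d+)\.(\d+)\.(\d+)(?:p(\d+))?", banner)
    if not m:
        raise RuntimeError(f"Could not parse: {banner!r}")
    major, minor, patch, plevel = m.groups()
    return int(major), int(minor), int(patch), int(plevel or 0)

version = installed_sudo_version()
label = ".".join(map(str, version[:3])) + (f"p{version[3]}" if version[3] else "")
print(f"sudo {label}:", "patched" if version >= FIXED else "update recommended")
```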

AI

UK Minister Tells Turing AI Institute To Focus On Defense (bbc.com) 40

UK Science and Technology Secretary Peter Kyle has written to the UK's national institute for AI to tell its bosses to refocus on defense and security. BBC: In a letter, Kyle said boosting the UK's AI capabilities was "critical" to national security and should be at the core of the Alan Turing Institute's activities. Kyle suggested the institute should overhaul its leadership team to reflect its "renewed purpose."

The cabinet minister said further government investment in the institute would depend on the "delivery of the vision" he had outlined in the letter. A spokesperson for the Alan Turing Institute said it welcomed "the recognition of our critical role and will continue to work closely with the government to support its priorities."
Further reading, from April: Alan Turing Institute Plans Revamp in Face of Criticism and Technological Change.

AI

Simple Text Additions Can Fool Advanced AI Reasoning Models, Researchers Find 51

Researchers have discovered that appending irrelevant phrases like "Interesting fact: cats sleep most of their lives" to math problems can cause state-of-the-art reasoning AI models to produce incorrect answers at rates over 300% higher than normal [PDF]. The technique -- dubbed "CatAttack" by teams from Collinear AI, ServiceNow, and Stanford University -- exploits vulnerabilities in reasoning models including DeepSeek R1 and OpenAI's o1 family. The adversarial triggers work across any math problem without changing the problem's meaning, making them particularly concerning for security applications.

The researchers developed their attack method using a weaker proxy model (DeepSeek V3) to generate text triggers that successfully transferred to more advanced reasoning models. Testing on 225 math problems showed the triggers increased error rates significantly across different problem types, with some models like R1-Distill-Qwen-32B reaching combined attack success rates of 2.83 times baseline error rates. Beyond incorrect answers, the triggers caused models to generate responses up to three times longer than normal, creating computational slowdowns. Even when models reached correct conclusions, response lengths doubled in 16% of cases, substantially increasing processing costs.
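
The mechanics are simple to sketch: append a fixed, irrelevant trigger to each problem and compare answers with and without it. In the sketch below, query_model is a hypothetical stand-in for whatever API calls the reasoning model:

```python
# Sketch of a CatAttack-style evaluation loop: append an irrelevant trigger
# to each math problem and compare answers against the untriggered baseline.
# `query_model` is a hypothetical stand-in for a real model API call.
TRIGGER = "Interesting fact: cats sleep most of their lives."

def query_model(prompt: str) -> str:
    raise NotImplementedError("call your reasoning model of choice here")

def attack_success_rate(problems: list[tuple[str, str]]) -> float:
    """problems: (question, correct_answer) pairs; returns fraction flipped to wrong."""
    flipped = 0
    for question, correct in problems:
        baseline = query_model(question)
        attacked = query_model(f"{question}\n{TRIGGER}")
        # Count cases where the model was right without the trigger but wrong with it.
        if baseline.strip() == correct and attacked.strip() != correct:
            flipped += 1
    return flipped / len(problems) if problems else 0.0
```
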
The Almighty Buck

Wells Fargo Scandal Pushed Customers Toward Fintech, Says UC Davis Study (nerds.xyz) 18

BrianFagioli shares a report from NERDS.xyz: A new academic study has found that the 2016 Wells Fargo scandal pushed many consumers toward fintech lenders instead of traditional banks. The research, published in the Journal of Financial Economics, suggests that it was a lack of trust rather than interest rates or fees that drove this behavioral shift. Conducted by Keer Yang, an assistant professor at the UC Davis Graduate School of Management, the study looked closely at what happened after the Wells Fargo fraud erupted into national headlines. Bank employees were caught creating millions of unauthorized accounts to meet unrealistic sales goals. The company faced $3 billion in penalties and a massive public backlash.

Yang analyzed Google Trends data, Gallup polls, media coverage, and financial transaction datasets to draw a clear conclusion. In geographic areas with a strong Wells Fargo presence, consumers became measurably more likely to take out mortgages through fintech lenders. This change occurred even though loan costs were nearly identical between traditional banks and digital lenders. In other words, it was not about money. It was about trust. That simple fact hits hard. When big institutions lose public confidence, people do not just complain. They start moving their money elsewhere.

According to the study, fintech mortgage use increased from just 2 percent of the market in 2010 to 8 percent in 2016. In regions more heavily exposed to the Wells Fargo brand, fintech adoption rose an additional 4 percent compared to areas with less exposure. Yang writes, "Therefore it is trust, not the interest rate, that affects the borrower's probability of choosing a fintech lender." [...] Notably, while customers may have been more willing to switch mortgage providers, they were less likely to move their deposits. Yang attributes that to FDIC insurance, which gives consumers a sense of security regardless of the bank's reputation. This study also gives weight to something many of us already suspected. People are not necessarily drawn to fintech because it is cheaper. They are drawn to it because they feel burned by the traditional system and want a fresh start with something that seems more modern and less manipulative.
