Robotics

OpenAI's Former Research Chief Raises $70M to Automate Manufacturing With AI (msn.com) 22

"OpenAI's former chief research officer is raising $70 million for a new startup building an AI and software platform to automate manufacturing," reports the Wall Street Journal, citing "people familiar with the matter.

"Arda, the new startup co-founded by Bob McGrew, is raising at a valuation of $700 million, according to people familiar with the matter...." Arda is developing an AI and software platform, including a video model that can analyze footage from factory floors and use it to train robots to run factories autonomously, the people said. The company's software will coordinate machines and humans across the entire production process, from product design and manufacturability to finished goods coming off the line.

The startup's goal is to make manufacturing cost-effective in the Western part of the globe, reducing reliance on China as geopolitical and national security concerns rise... At OpenAI, McGrew was tasked with training robots to do tasks in the physical world, according to his LinkedIn profile. McGrew was also one of the earliest employees at Palantir.

IT

2/3 of Node.js Users Run an Outdated Version. So OpenJS Announces Program Offering Upgrade Providers (openjsf.org) 26

How many Node.js users are running unsupported or outdated versions? Roughly two-thirds, according to data from Node's nonprofit steward, OpenJS.

So they've announced "the Node.js LTS Upgrade and Modernization program" to help enterprises move safely off legacy/end-of-life Node.js. "This program gives enterprises a clear, trusted path to modernize," said the executive director of the OpenJS Foundation, "while staying aligned with the Node.js project and community." The Node.js LTS Upgrade and Modernization program connects organizations with experienced Node.js service providers who handle the work of upgrading safely.

Approved partners assess current versions and dependencies, manage phased upgrades to supported LTS releases, and offer temporary security support when immediate upgrades are not possible... Partners are surfaced exactly where users go when upgrades become unavoidable, including the Node.js website, documentation, and end-of-life guidance.
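As a rough illustration of the problem the program targets (a sketch, not OpenJS tooling), the snippet below checks a deployed Node.js major version against its published end-of-life date. The EOL table is abridged and illustrative; the official Node.js release schedule is the authoritative source.

```python
import subprocess
from datetime import date

# Abridged, illustrative end-of-life dates; confirm against the official
# Node.js release schedule before relying on them.
EOL = {
    16: date(2023, 9, 11),
    18: date(2025, 4, 30),
    20: date(2026, 4, 30),
}

def installed_major() -> int:
    """Ask the local `node` binary for its version, e.g. 'v18.19.0' -> 18."""
    out = subprocess.run(["node", "--version"], capture_output=True, text=True, check=True)
    return int(out.stdout.strip().lstrip("v").split(".")[0])

def check_support(today: date = date.today()) -> None:
    """Report whether the installed Node.js line is still supported."""
    major = installed_major()
    eol = EOL.get(major)
    if eol is None:
        print(f"Node {major}: not in the local EOL table; check the release schedule.")
    elif today > eol:
        print(f"Node {major}: end-of-life since {eol}; plan an upgrade to a supported LTS line.")
    else:
        print(f"Node {major}: supported until {eol}.")

if __name__ == "__main__":
    check_support()
```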

The program follows the existing OpenJS Ecosystem Sustainability Program revenue model, with partners retaining 85% of revenue and 15% supporting OpenJS and Node.js through Open Collective and foundation operations. OpenJS provides the guardrails, alignment, and oversight to keep the program credible and connected to the project. We're pleased to welcome NodeSource as the inaugural partner in the Node.js LTS Upgrade and Modernization program.

"The goal is simple: reduce risk without breaking production or trust with the upstream project."
AI

AI CEOs Worry the Government Will Nationalize AI (thenewstack.io) 125

Palantir's CEO was blunt. "If Silicon Valley believes we are going to take away everyone's white-collar job... and you're going to screw the military — if you don't think that's going to lead to the nationalization of our technology, you're retarded..."

And OpenAI's Sam Altman is thinking about the same thing, writes long-time Slashdot reader destinyland: "It has seemed to me for a long time it might be better if building AGI were a government project," Sam Altman publicly mused last week... Altman speculated on the possibility of the government "nationalizing" private AI companies into a public project, admitting more than once he's wondered what would happen next. "I obviously don't know," Altman said — but he added that "I have thought about it, of course." He hedged, though, that "It doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important."

Could powerful AI tools one day slip from the hands of private companies to be controlled by the U.S. government? Fortune magazine's AI editor points out that "many other breakthroughs with big strategic implications — from the Manhattan Project to the space race to early efforts to develop AI — were government-funded and largely government-directed." And Fortune added that last week the Defense Department threatened Anthropic with the Defense Production Act, which allows the president to designate "critical and strategic" goods for which businesses must accept the government's contracts. Fortune speculates this would've been "a sort of soft nationalization of Anthropic's production pipeline". Altman acknowledged Saturday that he'd felt the threat of attempted nationalization "behind a lot of the questions" he'd fielded on X.com.

How exactly will this AI build-out be handled — and how should AI companies be working with the government? In a sprawling ask-me-anything session on X that included other members of OpenAI leadership, one Missouri-based developer even broached an AGI-government scenario directly with OpenAI's Head of National Security Partnerships, Katherine Mulligan. If OpenAI built an AGI — something that even passed its own Turing test for AGI — would that be a case where its government contracts compelled them to grant access to the Defense Department?

"No," Mulligan answered. At our current moment in time, "We control which models we deploy"

The article notes 100 OpenAI employees joined 856 Google employees in an online letter titled "We Will Not Be Divided" urging their bosses to refuse their models' use in domestic mass surveillance and in autonomous killing without human oversight.

But Adafruit's managing director Phillip Torrone (also long-time Slashdot reader ptorrone) sees analogies to America's atomic bomb-building Manhattan Project, and "what happened when the scientists who built the thing tried to set conditions on how the thing would be used." (The government pressured them to back down, which he compares to the Pentagon's designating Anthropic a "supply chain risk" before offering OpenAI a contract "with the same red lines, just worded differently".)

Ironically, Anthropic CEO Dario Amodei frequently recommends the Pulitzer Prize-winning 1986 book The Making of the Atomic Bomb...
AI

OpenAI's Head of Robotics Resigns, Says Pentagon Deal Was 'Rushed Without the Guardrails Defined' (engadget.com) 56

In a tweet that's been viewed 1.3 million times in the last six hours, OpenAI's head of robotics announced their resignation. They said they "care deeply about the Robotics team and the work we built together," so this "wasn't an easy call," but offered this reason for resigning: AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.

This was about principle, not people. I have deep respect for Sam and the team, and I'm proud of what we built together.

"To be clear, my issue is that the announcement was rushed without the guardrails defined," explains a later tweet. "It's a governance concern first and foremost. These are too important for deals or announcements to be rushed." And when asked how many OpenAI employees had left after OpenAI signed their new Pentagon deal, the roboticist said... "I can't share any internal details."

The roboticist previously worked at Meta before leaving to join OpenAI in late 2024, reports Engadget: OpenAI confirmed Kalinowski's resignation and said in a statement to Engadget that the company understands people have "strong views" about these issues and will continue to engage in discussions with relevant parties. The company also said in the statement that it does not support the practices Kalinowski raised concerns about. "We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons," the OpenAI statement read.
Firefox

How Anthropic's Claude Helped Mozilla Improve Firefox's Security (yahoo.com) 41

"It took Anthropic's most advanced artificial-intelligence model about 20 minutes to find its first Firefox browser bug during an internal test of its hacking prowess," reports the Wall Street Journal. The Anthropic team submitted it, and Firefox's developers quickly wrote back: This bug was serious. Could they get on a call? "What else do you have? Send us more," said Brian Grinstead, an engineer with Mozilla, Firefox's parent organization.

Anthropic did. Over a two-week period in January, Claude Opus 4.6 found more high-severity bugs in Firefox than the rest of the world typically reports in two months, Mozilla said... In the two weeks it was scanning, Claude discovered more than 100 bugs in total, 14 of which were considered "high severity..." Last year, Firefox patched 73 bugs that it rated as either high severity or critical.

A Mozilla blog post calls Firefox "one of the most scrutinized and security-hardened codebases on the web. Open source means our code is visible, reviewable, and continuously stress-tested by a global community." So they're impressed — and also thankful Anthropic provided test cases "that allowed our security team to quickly verify and reproduce each issue." Within hours, our platform engineers began landing fixes, and we kicked off a tight collaboration with Anthropic to apply the same technique across the rest of the browser codebase... A number of the lower-severity findings were assertion failures, which overlapped with issues traditionally found through fuzzing, an automated testing technique that feeds software huge numbers of unexpected inputs to trigger crashes and bugs. However, the model also identified distinct classes of logic errors that fuzzers had not previously uncovered...
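For readers less familiar with the fuzzing technique mentioned above, here is a minimal mutation-fuzzer sketch in Python. It is illustrative only, not Mozilla's or Anthropic's actual tooling, and the buggy parse_header target is hypothetical.

```python
import random

def parse_header(data: bytes):
    """Hypothetical parser under test: expects b'LEN:<n>;' followed by payload bytes."""
    prefix, sep, rest = data.partition(b";")
    if not sep or not prefix.startswith(b"LEN:"):
        return None                      # graceful rejection of malformed input
    n = int(prefix[4:])                  # bug: raises on non-numeric length fields
    return {"length": n, "payload": rest[:n]}

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip, insert, or delete a few random bytes in the seed input."""
    buf = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        op = rng.choice(("flip", "insert", "delete"))
        if op == "flip" and buf:
            i = rng.randrange(len(buf))
            buf[i] ^= 1 << rng.randrange(8)
        elif op == "insert":
            buf.insert(rng.randrange(len(buf) + 1), rng.randrange(256))
        elif op == "delete" and buf:
            del buf[rng.randrange(len(buf))]
    return bytes(buf)

def fuzz(seed: bytes, iterations: int = 10_000) -> list[bytes]:
    """Feed many mutated inputs to the parser and collect those that crash it."""
    rng = random.Random(0)
    crashers = []
    for _ in range(iterations):
        data = mutate(seed, rng)
        try:
            parse_header(data)
        except Exception:                # uncaught exceptions stand in for crashes
            crashers.append(data)
    return crashers

if __name__ == "__main__":
    found = fuzz(b"LEN:5;hello")
    print(f"{len(found)} crashing inputs found")
```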

We view this as clear evidence that large-scale, AI-assisted analysis is a powerful new addition to the security engineer's toolbox. Firefox has undergone some of the most extensive fuzzing, static analysis, and regular security review over decades. Despite this, the model was able to reveal many previously unknown bugs. This is analogous to the early days of fuzzing; there is likely a substantial backlog of now-discoverable bugs across widely deployed software.

"In the time it took us to validate and submit this first vulnerability to Firefox, Claude had already discovered fifty more unique crashing inputs" in 6,000 C++ files, Anthropic says in a blog post (which points out they've also used Claude Opus 4.6 to discover vulnerabilities in the Linux kernel).

"Anthropic "also rolled out Claude Code Security, an automated code security testing tool, last month," reports Axios, noting the move briefly rattled cybersecurity stocks...
IOS

Apple Blocks US Users From Downloading ByteDance's Chinese Apps (wired.com) 25

An anonymous reader quotes a report from Wired: While TikTok operates in the United States under new ownership, Apple has deployed technical restrictions to block iOS users in the United States from downloading other apps made by the video platform's Chinese parent organization ByteDance. ByteDance owns a vast array of different apps spanning social media, entertainment, artificial intelligence, and other sectors. The leading one is Douyin, the Chinese version of TikTok, which has over 1 billion monthly active users. While most of those users reside in China, iPhone owners around the world have traditionally been able to download these apps from anywhere without using a VPN, as long as they have a valid App Store account registered in China.

That's not true anymore. Starting in late January, iPhone users in the U.S. with Chinese App Store accounts began reporting that they were encountering new obstacles when they tried to download apps developed by ByteDance. WIRED has confirmed that even with a valid Chinese App Store account, downloading or updating a ByteDance-owned Chinese app is blocked on Apple devices located in the United States. Instead, a pop-up window appears that says, "This app is unavailable in the country or region you're in." The restriction appears to apply only to ByteDance-owned apps and not those developed by other Chinese companies.

The timing and technical specifics suggest the restriction is related to the deal TikTok agreed to in January to divest Chinese ownership of its U.S. operations. The agreement was the result of the so-called TikTok ban law passed by Congress in 2024, which also barred companies like Apple and Google from distributing other apps majority-owned by ByteDance. The Protecting Americans from Foreign Adversary Controlled Applications Act states that no company can "distribute, maintain, or update" any app majority-controlled by ByteDance "within the land or maritime borders of the United States."

The law was primarily aimed at TikTok, which has more than 100 million users in the U.S. and had been the subject of years of debate in Washington over whether its Chinese ownership posed a national security risk. But ByteDance also has dozens of other apps that at some point were also removed from Apple's and Google's app stores in the U.S. Now the restrictions appear to extend to even more apps that are not technically designed for U.S. audiences, such as Douyin, the AI chatbot Doubao, and the fiction reading platform Fanqie Novel.

Security

US Cybersecurity Agency Adds Exploited VMware Aria Operations Flaw To KEV Catalog (thehackernews.com) 4

joshuark writes: The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has added a VMware Aria Operations vulnerability tracked as CVE-2026-22719 to its Known Exploited Vulnerabilities (KEV) catalog, flagging the flaw as exploited in attacks. VMware Aria Operations is an enterprise monitoring platform that helps organizations track the performance and health of servers, networks, and cloud infrastructure. CISA is requiring federal civilian agencies to address the issue by March 24, 2026. Broadcom said it is aware of reports indicating the vulnerability is exploited in attacks but cannot confirm the claims.

"A malicious unauthenticated actor may exploit this issue to execute arbitrary commands which may lead to remote code execution in VMware Aria Operations while support-assisted product migration is in progress," the advisory explains. Broadcom released security patches on February 24 and also provided a temporary workaround for organizations unable to apply the patches immediately. The mitigation is a shell script named "aria-ops-rce-workaround.sh," which must be executed as root on each Aria Operations appliance node. There are currently no details on how the vulnerability is being exploited in the wild, who is behind it, and the scale of such efforts.

Transportation

Vehicle Tire Pressure Sensors Enable Silent Tracking (darkreading.com) 96

Longtime Slashdot reader linuxwrangler writes: Dark Reading reports that a team of researchers has determined that signals from tire pressure monitoring systems (TPMSs), required in U.S. cars since 2007, can be used to track the presence, type, weight, and driving pattern of vehicles. The researchers report (PDF) that the TPMS data, which includes unique sensor IDs, is sent in clear text without authentication and can be intercepted 40-50 meters from a vehicle using devices costing $100. "Researchers have discovered that most TPMS sensors transmit a unique identifier in clear text that never changes during the lifetime of the tire," the researchers pointed out. "This unencrypted wireless communication makes the signals susceptible to eavesdropping and potential tracking by any third party in proximity to the car."
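To illustrate the tracking concern the researchers describe (a hedged sketch, not their actual code or data): because each sensor broadcasts a fixed, unencrypted ID, an observer with receivers at multiple points can link sightings of the same vehicle simply by grouping intercepted packets on that ID. The sensor IDs and receiver names below are made up.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Sighting:
    sensor_id: str     # fixed, unencrypted TPMS identifier
    location: str      # which roadside receiver heard the broadcast
    timestamp: float   # seconds since epoch

def link_vehicles(sightings: list[Sighting]) -> dict[str, list[tuple[str, float]]]:
    """Group sightings by sensor ID: the same ID heard at different receivers
    means the same vehicle passed both points."""
    tracks: dict[str, list[tuple[str, float]]] = defaultdict(list)
    for s in sightings:
        tracks[s.sensor_id].append((s.location, s.timestamp))
    return {sid: sorted(hits, key=lambda h: h[1]) for sid, hits in tracks.items()}

# Illustrative data: sensor "3FA21C" is heard at two receivers ten minutes apart,
# revealing the vehicle's route and approximate travel time.
observed = [
    Sighting("3FA21C", "receiver_A", 1_700_000_000.0),
    Sighting("9B04D7", "receiver_A", 1_700_000_050.0),
    Sighting("3FA21C", "receiver_B", 1_700_000_600.0),
]
for sensor, route in link_vehicles(observed).items():
    print(sensor, "->", route)
```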
Encryption

TikTok Says End-To-End Encryption Makes Users Less Safe (bbc.com) 86

An anonymous reader quotes a report from the BBC: TikTok will not introduce end-to-end encryption (E2EE) -- the controversial privacy feature used by nearly all its rivals -- arguing it makes users less safe. E2EE means only the sender and recipient of a direct message can view its contents, making it the most secure form of communication available to the general public. Platforms such as Facebook, Instagram, Messenger and X have embraced it because they say their priority is maximizing user privacy.
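As a minimal sketch of what that end-to-end model means in practice (using the PyNaCl library for illustration, not any platform's actual implementation): each endpoint holds its own private key, so the service relaying the message only ever handles ciphertext.

```python
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; private keys never leave it.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts a DM to Bob using her private key and Bob's public key.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at 6?")

# The platform relays and stores only `ciphertext`; it holds neither private key,
# so it cannot read the message. Bob decrypts on his device.
plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
assert plaintext == b"meet at 6?"
```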

But critics have said E2EE makes it harder to stop harmful content spreading online, because it means tech firms and law enforcement have no way of viewing any material sent in direct messages. The situation is made more complex because TikTok has long faced accusations that ties to the Chinese state may put users' data at risk. TikTok has consistently denied this, but earlier this year the social media firm's US operations were separated from its global business on the orders of US lawmakers.

TikTok told the BBC it believed end-to-end encryption prevented police and safety teams from being able to read direct messages if they needed to. It confirmed its approach to the BBC in a briefing about security at its London office, saying it wanted to protect users, especially young people from harm. It described this stance as a deliberate decision to set itself apart from rivals.
"Grooming and harassment risks are very real in DMs [direct messages] so TikTok now can credibly argue that it's prioritizing 'proactive safety' over 'privacy absolutism' which is a pretty powerful soundbite," said social media industry analyst Matt Navarra. But Navarra said the move also "puts TikTok out of step with global privacy expectations" and might reinforce wariness for some about its ownership.
Iphone

A Possible US Government iPhone-Hacking Toolkit Is Now In the Hands of Foreign Spies, Criminals (wired.com) 39

Security researchers say a highly sophisticated iPhone exploitation toolkit dubbed "Coruna," which possibly originated from a U.S. government contractor, has spread from suspected Russian espionage operations to crypto-stealing criminal campaigns. Apple has patched the exploited vulnerabilities in newer iOS versions, but tens of thousands of devices may have already been compromised. An anonymous reader quotes an excerpt from Wired's report: Security researchers at Google on Tuesday released a report describing what they're calling "Coruna," a highly sophisticated iPhone hacking toolkit that includes five complete hacking techniques capable of bypassing all the defenses of an iPhone to silently install malware on a device when it visits a website containing the exploitation code. In total, Coruna takes advantage of 23 distinct vulnerabilities in iOS, a rare collection of hacking components that suggests it was created by a well-resourced, likely state-sponsored group of hackers.

In fact, Google traces components of Coruna to hacking techniques it spotted in use in February of last year and attributed to what it describes only as a "customer of a surveillance company." Then, five months later, Google says a more complete version of Coruna reappeared in what appears to have been an espionage campaign carried out by a suspected Russian spy group, which hid the hacking code in a common visitor-counting component of Ukrainian websites. Finally, Google spotted Coruna in use yet again in what seems to have been a purely profit-focused hacking campaign, infecting Chinese-language crypto and gambling sites to deliver malware that steals victims' cryptocurrency.

Conspicuously absent from Google's report is any mention of who the original surveillance company "customer" that deployed Coruna may have been. But the mobile security company iVerify, which also analyzed a version of Coruna it obtained from one of the infected Chinese sites, suggests the code may well have started life as a hacking kit built for or purchased by the US government. Google and iVerify both note that Coruna contains multiple components previously used in a hacking operation known as "Triangulation" that was discovered targeting Russian cybersecurity firm Kaspersky in 2023, which the Russian government claimed was the work of the NSA. (The US government didn't respond to Russia's claim.)

Coruna's code also appears to have been originally written by English-speaking coders, notes iVerify's cofounder Rocky Cole. "It's highly sophisticated, took millions of dollars to develop, and it bears the hallmarks of other modules that have been publicly attributed to the US government," Cole tells WIRED. "This is the first example we've seen of very likely US government tools -- based on what the code is telling us -- spinning out of control and being used by both our adversaries and cybercriminal groups." Regardless of Coruna's origin, Google warns that a highly valuable and rare hacking toolkit appears to have traveled through a series of unlikely hands, and now exists in the wild where it could still be adopted -- or adapted -- by any hacker group seeking to target iPhone users.
"How this proliferation occurred is unclear, but suggests an active market for 'second hand' zero-day exploits," Google's report reads. "Beyond these identified exploits, multiple threat actors have now acquired advanced exploitation techniques that can be re-used and modified with newly identified vulnerabilities."
Chrome

Google Chrome Is Switching To a Two-Week Release Cycle (9to5google.com) 31

Google is accelerating Chrome's major release cadence from four weeks to two starting with version 153 on September 8th. "...our goal is to ensure developers and users have immediate access to the latest performance improvements, fixes and new capabilities," says Google. "Building on our history of adapting our release process to match the demands of a modern web, Chrome is moving to a two-week release cycle." The company says the "smaller scope" of these releases "minimizes disruption and simplifies post-release debugging." They also cite "recent process enhancements" that will "maintain [Chrome's] high standards for stability." 9to5Google reports: There will still be weekly security updates between milestones. This applies to desktop, Android, and iOS, while there are "no changes to the Dev and the Canary channels": "A Chrome Beta for each version will ship three weeks before the stable release. We recommend developers test with the beta to keep up to date with any upcoming changes that might impact your sites and applications."

The eight-week Extended Stable release schedule for enterprise customers and Chromium embedders will not change. Chromebooks will also have "extended release options": "Our priority is a seamless experience, so the latest Chrome releases will roll out to Chromebooks after dedicated platform testing. We are adapting these channels for the new two-week browser cycle and we will share more details soon regarding milestone updates for managed devices."

Businesses

Accenture Acquires Ookla, Downdetector As Part of $1.2 Billion Deal (theregister.com) 15

Accenture is acquiring Downdetector parent company Ookla from Ziff Davis in a $1.2 billion deal to bolster its network analytics and visibility tools for telecoms, hyperscalers, and enterprises. "The deal, which will transfer all of Ziff Davis's Connectivity division to Accenture, includes Ookla's Speedtest, Ekahau, and RootMetrics," notes The Register: "Modern networks have evolved from simple infrastructure into business-critical platforms," said Accenture CEO Julie Sweet in a canned statement. "Without the ability to measure performance, organizations cannot optimize experience, revenue, or security." Ookla is meant to let them do just that.

Data captured at the network and device layer are used to enhance fraud prevention in banking, smart-home monitoring, and traffic optimization in retail, Accenture said. Ookla's platform, which lets users test their own connectivity speed, captures more than 1,000 attributes per test, and provides the foundation for those analytics, Accenture said.

Android

Motorola Partners With GrapheneOS 72

At MWC 2026, Motorola announced a partnership with the GrapheneOS Foundation to bring the hardened, Google-free Android variant to future devices. Until now, the OS had been designed exclusively for Google Pixel phones. "We are thrilled to be partnering with Motorola to bring GrapheneOS's industry-leading privacy and security-focused mobile operating system to their next-generation smartphone," a GrapheneOS statement reads. "This collaboration marks a significant milestone in expanding the reach of GrapheneOS, and we applaud Motorola for taking this meaningful step towards advancing mobile security."

GrapheneOS is a privacy- and security-focused mobile OS with Android app compatibility, developed as a nonprofit open source project. It's often referred to as the "de-Googled OS" because Google apps are not available by default. However, users can install them via a sandboxed version of Google Play Services.
The Military

America Used Anthropic's AI for Its Attack On Iran, One Day After Banning It (engadget.com) 64

Engadget reports: In a lengthy post on Truth Social on February 27, President Trump ordered all federal agencies to "immediately cease all use of Anthropic's technology" following strong disagreements between the Department of Defense and the AI company. A few hours later, the U.S. conducted a major air attack on Iran with the help of Anthropic's AI tools, according to a report from The Wall Street Journal.
Even Trump's post noted there would be a six-month phase-out for Anthropic's technology (adding that Anthropic "better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.")

Anthropic's Claude technology was also used by the U.S. military less than two months ago in its operation in Venezuela — reportedly making Anthropic the first AI developer known to be used in a classified U.S. War Department operation. The Wall Street Journal reported Anthropic's technology found its way into the mission through Anthropic's contract with Palantir.
The Military

Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic (x.com) 42

Saturday afternoon Sam Altman announced he'd start answering questions on X.com about OpenAI's work with America's Department of War — and all the developments over the past few days. (After that department's negotiations with Anthropic had failed, they announced they'd stop using Anthropic's technology and threatened to designate it a "Supply-Chain Risk to National Security". Then they'd reached a deal for OpenAI's technology — though Altman says it includes OpenAI's own similar prohibitions against using their products for domestic mass surveillance and requiring "human responsibility" for the use of force in autonomous weapon systems.)

Altman said Saturday that enforcing that "Supply-Chain Risk" designation on Anthropic "would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.... We should all care very much about the precedent... To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it."

Altman also said that for a long time, OpenAI was planning to do "non-classified work only," but this week found the Department of War "flexible on what we needed..." Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.

I know what it's like to feel backed into a corner, and I think it's worth some empathy to the Department of War. They are... a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind." And then we say "But we won't help you, and we think you are kind of evil." I don't think I'd react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Question: Are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk...?

Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that...

Question: Why the rush to sign the deal? Obviously the optics don't look great.

Sam Altman: It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good.

If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don't know where it's going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years...

Question: What was the core difference why you think the Department of War accepted OpenAI but not Anthropic?

Sam Altman: [...] We believe in a layered approach to safety — building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I only had to pick one...

I think Anthropic may have wanted more operational control than we did...

Question: Were the terms that you accepted the same ones Anthropic rejected?

Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.

Question: Will you turn off the tool if they violate the rules?

Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Questions were also answered by OpenAI's head of National Security Partnerships (who at one point posted that they'd managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years.) And they stressed that in OpenAI's deal with the Department of War, "We control how we train the models and what types of requests the models refuse." Question: Are employees allowed to opt out of working on Department of War-related projects?

Answer: We won't ask employees to support Department of War-related projects if they don't want to.

Question: How much is the deal worth?

Answer: It's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact...

Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a 'threat to democratic values'?

Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract.

They also detailed OpenAI's position on LinkedIn: Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware...

Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and we have the ability to iterate on safety safeguards over time. If our team sees that our models aren't refusing queries they should, or there's more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.

U.S. law already constrains the worst outcomes. We accepted the "all lawful uses" language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can't anticipate.

AI

US Threatens Anthropic with 'Supply-Chain Risk' Designation. OpenAI Signs New War Department Deal (anthropic.com) 51

It started Friday when all U.S. federal agencies were ordered to "immediately cease" using Anthropic's AI technology after contract negotiations stalled over Anthropic's requested prohibitions against mass domestic surveillance and fully autonomous weapons. But later Friday there were even more repercussions...

In a post to his 1.1 million followers on X.com, U.S. Secretary of War Pete Hegseth criticized Anthropic for what he called "a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon." Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic... Cloaked in the sanctimonious rhetoric of "effective altruism," [Anthropic and CEO Dario Amodei] have attempted to strong-arm the United States military into submission — a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable...

In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic... America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.

Meanwhile, Anthropic said on Friday that "no amount of intimidation or punishment from the Department of War will change our position." (And "We will challenge any supply chain risk designation in court.") Designating Anthropic as a supply chain risk would be an unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government's classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so. We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government... Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement.
Anthropic also defended the two exceptions they'd requested that had stalled contract negotiations. "[W]e do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."

Also Friday, OpenAI announced that "we reached an agreement with the Department of War to deploy our models in their classified network." OpenAI CEO Sam Altman emphasized that the agreement retains and confirms OpenAI's own prohibitions against using their products for domestic mass surveillance — and requires "human responsibility" for the use of force including for autonomous weapon systems. "The Department of War agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the Department of War also wanted." We are asking the Department of War to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
The Internet

After US-Israel Attacks, 90 Million Iranians Lose Internet Connectivity (cnn.com) 240

CNN reports that images from Iran's capital "have shown cars jammed along Tehran's streets, with heavy traffic on major roads after today's wave of attacks by the US and Israel." And though Iran has a population of 93 million, the attacks suddenly plunged Iran into "a near-total internet blackout with national connectivity at 4% of ordinary levels," according to internet monitoring experts at NetBlocks.

CNN reports: Since Iran's brutal crackdown earlier this year, the regime has made progress toward allowing only a subset of people with security clearance to access the international web, experts said. After previous internet shutdowns, some platforms never returned. The Iranian government blocked Instagram after the internet shutdown and protests in 2022, and the popular messaging app Telegram following protests in 2018.
The International Atomic Energy Agency announced an hour ago that they're "closely monitoring developments" — keeping in contact with countries in the region and so far seeing "no evidence of any radiological impact." They're also urging "restraint to avoid any nuclear safety risks to people in the region."

UPDATE (1 PM PST): Qatar, Bahrain and Kuwait "are shifting to remote learning starting Sunday until further notice following Iran's retaliatory strikes on Saturday," reports CNN.
The Internet

Google Quantum-Proofs HTTPS (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: Google on Friday unveiled its plan for its Chrome browser to secure HTTPS certificates against quantum computer attacks without breaking the Internet. The objective is a tall order. The quantum-resistant cryptographic data needed to transparently publish TLS certificates is roughly 40 times bigger than the classical cryptographic material used today. Today's X.509 certificates comprise six elliptic curve signatures and two EC public keys, each about 64 bytes in size. This material can be cracked with Shor's algorithm running on a quantum computer. Certificates containing the equivalent quantum-resistant cryptographic material are roughly 2.5 kilobytes. All this data must be transmitted when a browser connects to a site.

To bypass the bottleneck, companies are turning to Merkle Trees, a data structure that uses cryptographic hashes and other math to verify the contents of large amounts of information using a small fraction of the material used in more traditional verification processes in public key infrastructure. Merkle Tree Certificates "replace the heavy, serialized chain of signatures found in traditional PKI with compact Merkle Tree proofs," members of Google's Chrome Secure Web and Networking Team wrote Friday. "In this model, a Certification Authority (CA) signs a single 'Tree Head' representing potentially millions of certificates, and the 'certificate' sent to the browser is merely a lightweight proof of inclusion in that tree."
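A hedged sketch of the inclusion-proof idea described above: a client that trusts only the signed tree head can verify a single certificate's membership from a logarithmic number of sibling hashes, without downloading the whole tree. The hashing and layout here are illustrative, not Chrome's actual MTC wire format.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    """Return all levels of the Merkle tree, leaves first, root last."""
    level = [h(b"\x00" + leaf) for leaf in leaves]    # domain-separate leaf hashes
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                            # duplicate last node if odd
            level = level + [level[-1]]
        level = [h(b"\x01" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels: list[list[bytes]], index: int) -> list[bytes]:
    """Collect the sibling hash at each level for the leaf at `index`."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])                # sibling of the current node
        index //= 2
    return proof

def verify(leaf: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    """Recompute the root from the leaf and its sibling path; compare to the signed tree head."""
    node = h(b"\x00" + leaf)
    for sibling in proof:
        if index % 2 == 0:
            node = h(b"\x01" + node + sibling)
        else:
            node = h(b"\x01" + sibling + node)
        index //= 2
    return node == root

certs = [f"cert-{i}".encode() for i in range(5)]      # stand-ins for issued certificates
levels = build_tree(certs)
root = levels[-1][0]                                  # the CA signs this single tree head
proof = inclusion_proof(levels, 3)
assert verify(certs[3], 3, proof, root)               # browser checks membership from the short proof
```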

[...] Google is [also] adding cryptographic material from quantum-resistant algorithms such as ML-DSA (PDF). This addition would allow forgeries only if an attacker were to break both classical and post-quantum encryption. The new regime is part of what Google is calling the quantum-resistant root store, which will complement the Chrome Root Store the company formed in 2022. The [Merkle Tree Certificates] MTCs use Merkle Trees to provide quantum-resistant assurances that a certificate has been published without having to add most of the lengthy keys and hashes. Using other techniques to reduce the data sizes, the MTCs will be roughly the same 64-byte length they are now [...]. The new system has already been implemented in Chrome.

Government

CISA Replaces Bumbling Acting Director After a Year (techcrunch.com) 26

New submitter DeanonymizedCoward shares a report from TechCrunch: The U.S. Cybersecurity and Infrastructure Security Agency (CISA) is reportedly in crisis following major budget cuts, layoffs, and furloughs under the Trump administration, says TechCrunch. The agency has now replaced its acting director, Madhu Gottumukkala, after a turbulent year marked by controversy and internal turmoil. During his tenure, Gottumukkala allegedly mishandled sensitive information by uploading government documents to ChatGPT, oversaw a one-third reduction in staff, and reportedly failed a counterintelligence polygraph needed for classified access. His leadership also saw the suspension of several senior officials, including CISA's chief security officer. Nextgov also reported that CISA lost another top senior official, Bob Costello, the agency's chief information officer tasked with overseeing the agency's IT systems and data policies. "Last month, CISA's acting director Madhu Gottumukkala reportedly took steps to transfer Costello, but other political appointees blocked it," added Nextgov.
Google

South Korea Set To Get a Fully Functioning Google Maps (reuters.com) 14

South Korea has reversed a two-decade policy and approved the export of high-precision map data, paving the way for a fully functional Google Maps in the country. Reuters reports: The approval was made "on the condition that strict security requirements are met," the Ministry of Land, Infrastructure and Transport said in a statement. Those conditions include blurring military and other sensitive security-related facilities, as well as restricting longitude and latitude coordinates for South Korean territory on products such as Google Maps and Google Earth, it said.

The decision is expected to hurt Naver and Kakao -- local internet giants which currently dominate the country's market for digital map services. But it will appease Washington, which has urged Seoul to tackle what it says is discrimination against U.S. tech companies. South Korea, still technically at war with North Korea, had shot down Google's previous bids in 2007 and 2016 to be allowed to export the data, citing the risks that information about sensitive military and security facilities could be exposed.
"Google can now come in, slash usage fees, and take the market," said Choi Jin-mu, a geography professor at Kyung Hee University. "If Naver and Kakao are weakened or pushed out and Google later raises prices, that becomes a monopoly. Then, even companies that rely on map services -- logistics firms, for example -- become dependent, and in the long run, even government GIS (geographic information) systems could end up dependent on Google or Apple. That's the biggest concern."
