Desktops (Apple)

Apple Readies Several New Macs With Next-Generation M2 Chips (bloomberg.com) 47

Apple has started widespread internal testing of several new Mac models with next-generation M2 chips, according to developer logs, part of its push to make more powerful computers using homegrown processors. Bloomberg: The company is testing at least nine new Macs with four different M2-based chips -- the successors to the current M1 line -- with third-party apps in its App Store, according to the logs, which were corroborated by people familiar with the matter. The move is a key step in the development process, suggesting that the new machines may be nearing release in the coming months. The M2 chip is Apple's latest attempt to push the boundaries of computer processing after a split with Intel in recent years. Apple has gradually replaced Intel chips with its own silicon, and now looks to make further gains with a more advanced line. After years of slow growth, the Mac computer division enjoyed a resurgence the past two years, helped in part by home office workers buying new equipment. The business generated $35.2 billion in sales the past fiscal year, about 10% of Apple's total.
Apple

How Apple's Monster M1 Ultra Chip Keeps Moore's Law Alive 109

By combining two processors into one, the company has squeezed a surprising amount of performance out of silicon. From a report: "UltraFusion gave us the tools we needed to be able to fill up that box with as much compute as we could," Tim Millet, vice president of hardware technologies at Apple, says of the Mac Studio. Benchmarking of the M1 Ultra has shown it to be competitive with the fastest high-end computer chips and graphics processors on the market. Millet says some of the chip's capabilities, such as its potential for running AI applications, will become apparent over time, as developers port over the necessary software libraries. The M1 Ultra is part of a broader industry shift toward more modular chips. Intel is developing a technology that allows different pieces of silicon, dubbed "chiplets," to be stacked on top of one another to create custom designs that do not need to be redesigned from scratch. The company's CEO, Pat Gelsinger, has identified this "advanced packaging" as one pillar of a grand turnaround plan. Intel's competitor AMD is already using a 3D stacking technology from TSMC to build some server and high-end PC chips. This month, Intel, AMD, Samsung, TSMC, and ARM announced a consortium to work on a new standard for chiplet designs. In a more radical approach, the M1 Ultra uses the chiplet concept to connect entire chips together.

Apple's new chip is all about increasing overall processing power. "Depending on how you define Moore's law, this approach allows you to create systems that engage many more transistors than what fits on one chip," says Jesus del Alamo, a professor at MIT who researches new chip components. He adds that it is significant that TSMC, at the cutting edge of chipmaking, is looking for new ways to keep performance rising. "Clearly, the chip industry sees that progress in the future is going to come not only from Moore's law but also from creating systems that could be fabricated by different technologies yet to be brought together," he says. "Others are doing similar things, and we certainly see a trend towards more of these chiplet designs," adds Linley Gwennap, author of the Microprocessor Report, an industry newsletter. The rise of modular chipmaking might help boost the performance of future devices, but it could also change the economics of chipmaking. Without Moore's law, a chip with twice the transistors may cost twice as much. "With chiplets, I can still sell you the base chip for, say, $300, the double chip for $600, and the uber-double chip for $1,200," says Todd Austin, an electrical engineer at the University of Michigan.
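Austin's pricing point can be made concrete with a little arithmetic. This is a toy sketch, not anyone's actual pricing model: the function and constant names are ours, and the dollar figures are simply the illustrative ones from his quote.

```python
# Toy illustration of the chiplet pricing model Todd Austin describes:
# each product tier is built from 1, 2, or 4 copies of the same base die,
# so the price can scale roughly linearly with the number of chiplets
# instead of requiring a brand-new, much more expensive monolithic design.
BASE_DIE_PRICE = 300  # dollars; the "$300 base chip" from the quote


def chiplet_price(num_dies: int) -> int:
    """Price of a package built from `num_dies` copies of the base die."""
    return BASE_DIE_PRICE * num_dies


# The three tiers from the quote: base, double, and "uber-double".
prices = {n: chiplet_price(n) for n in (1, 2, 4)}
print(prices)  # {1: 300, 2: 600, 4: 1200}
```

The contrast with the monolithic approach is that a single die with twice the transistors costs more than twice as much to design and yield, which is exactly the economics the chiplet model sidesteps.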
Supercomputing

Russia Cobbles Together Supercomputing Platform To Wean Off Foreign Suppliers (theregister.com) 38

Russia is adapting to a world where it no longer has access to many technologies abroad with the development of a new supercomputer platform that can use foreign x86 processors such as Intel's in combination with the country's homegrown Elbrus processors. The Register reports: The new supercomputer reference system, dubbed "RSK Tornado," was developed on behalf of the Russian government by HPC system integrator RSC Group, according to an English translation of a Russian-language press release published March 30. RSC said it created RSK Tornado as a "unified interoperable" platform to "accelerate the pace of import substitution" for HPC systems, data processing centers and data storage systems in Russia. In other words, the HPC system architecture is meant to help Russia quickly adjust to the fact that major chip companies such as Intel, AMD and TSMC -- plus several other technology vendors, like Dell and Lenovo -- have suspended product shipments to the country as a result of sanctions by the US and other countries in reaction to Russia's invasion of Ukraine.

RSK Tornado supports up to 104 servers in a rack, with the idea being to support foreign x86 processors (should they become available) as well as Russia's Elbrus processors, which debuted in 2015. The hope appears to be the ability for Russian developers to port HPC, AI and big data applications from x86 architectures to the Elbrus architecture, which, in theory, will make it easier for Russia to rely on its own supply chain and better cope with continued sanctions from abroad. RSK Tornado systems software is RSC proprietary and is currently used to orchestrate supercomputer resources at the Interdepartmental Supercomputer Center of the Russian Academy of Sciences, St Petersburg Polytechnic University and the Joint Institute for Nuclear Research. RSC claims to have also developed its own liquid-cooling system for supercomputers and data storage systems, the latter of which can use Elbrus CPUs too.

Intel

Intel Says It'll Deliver 2025 Chip Tech a Half Year Early (cnet.com) 35

After years of trouble and delay, Intel's chipmaking business finally has some good news to report. The most advanced manufacturing process the company has committed to will arrive in the second half of 2024, six months earlier than planned. From a report: Intel fell behind rivals Taiwan Semiconductor Manufacturing Co. (TSMC) and Samsung because of problems modernizing its manufacturing, and it brought chip designer Pat Gelsinger back to the company as chief executive in 2021. Shortly afterward, Intel laid out a road map calling for five improvements to its manufacturing processes in four years, with manufacturing processes named Intel 7, Intel 4, Intel 3, Intel 20A and Intel 18A. Each step improves a chip's performance relative to its power consumption. Those steps are the foundation of a plan to catch up to rivals in 2024 and surpass them in 2025.
Intel

Intel Suspends All Operations in Russia 'Effective Immediately' (arstechnica.com) 107

Intel, one of the world's largest semiconductor companies, is suspending business operations in Russia "effective immediately," the company announced late Tuesday. From a report: "Intel continues to join the global community in condemning Russia's war against Ukraine," the company said in a statement. Intel stopped shipping chips to customers in Russia and Belarus in early March. Intel said that it is "working to support all of our employees through this difficult situation, including our 1,200 employees in Russia."

Ordinarily, it would be a drastic step for a multinational company like Intel to exit a market the size of Russia. But Western sanctions have made it increasingly difficult for global companies to operate in Russia. Earlier this week, the Biden administration announced broad sanctions on the Russian electronics industry, which presumably includes many of Intel's partners and customers in Russia. Two of Intel's major competitors, AMD and Nvidia, halted sales of their products in Russia early last month. Taiwanese chipmaker TSMC has also restricted sales in Russia.

AMD

AMD To Acquire Pensando in a $1.9 Billion Bid for Networking Tech (protocol.com) 12

AMD said early Monday that it plans to acquire networking chip maker Pensando for $1.9 billion in cash, in a bid to arm itself with tech that competes directly with Nvidia's and Intel's data-center chip packages. From a report: Pensando was founded by several former Cisco engineers, and makes edge computing technology that competes with AWS Nitro, Intel's DPU launched last year, and Nvidia's data processing units called BlueField. In a release distributed in advance of the announcement, AMD said that buying the closely held Pensando will give it a networking platform that will bolster its existing server chip lineup. Pensando's chips are an increasingly important part of data center design, as it becomes impossible to simply throw larger numbers of processors at demanding computing tasks. As regular chips scale up, the networking connections become a bottleneck, and the DPU's goal (Intel calls it an IPU) is to free up the central processor to perform other functions.
Intel

Intel Beats AMD and Nvidia with Arc GPU's Full AV1 Support (neowin.net) 81

Neowin notes growing support for the "very efficient, potent, royalty-free video codec" AV1, including Microsoft's adding of support for hardware acceleration of AV1 on Windows.

But AV1 even turned up in Intel's announcement this week of the Arc A-series, a new line of discrete GPUs, Neowin reports: Intel has been quick to respond and the company has become the first such GPU hardware vendor to have full AV1 support on its newly launched Arc GPUs. While AMD and Nvidia both offer AV1 decoding with their newest GPUs, neither has support for AV1 encoding.

Intel says that hardware encoding of AV1 on its new Arc GPUs is 50 times faster than software-only solutions. It also adds that the efficiency of AV1 encode with Arc is 20% better compared to HEVC. With this feature, Intel hopes to capture at least some of the streaming and video editing market: users who are looking for a more robust AV1 encoding solution than CPU-based software approaches.

From Intel's announcement: Intel Arc A-Series GPUs are the first in the industry to offer full AV1 hardware acceleration, including both encode and decode, delivering faster video encode and higher quality streaming while consuming the same internet bandwidth. We've worked with industry partners to ensure that AV1 support is available today in many of the most popular media applications, with broader adoption expected this year. The AV1 codec will be a game changer for the future of video encoding and streaming.
Security

Lapsus$ Gang Claims New Hack With Data From Apple Health Partner (theverge.com) 5

After a short "vacation," the Lapsus$ hacking gang is back. In a post shared through the group's Telegram channel on Wednesday, Lapsus$ claimed to have stolen 70GB of data from Globant -- an international software development firm headquartered in Luxembourg, which boasts some of the world's largest companies as clients. From a report: Screenshots of the hacked data, originally posted by Lapsus$ and shared on Twitter by security researcher Dominic Alvieri, appeared to show folders bearing the names of a range of global businesses: among them were delivery and logistics company DHL, US cable network C-Span, and French bank BNP Paribas. Also in the list were tech giants Facebook and Apple, with the latter referred to in a folder titled "apple-health-app." The data appears to be development material for Globant's BeHealthy app, described in a prior press release as software developed in partnership with Apple to track employee health behaviors using features of the Apple Watch.
Intel

Intel Enters Discrete GPU Market With Launch of Arc A-Series For Laptops (hothardware.com) 23

MojoKid writes: Today Intel finally launched its first major foray into discrete GPUs for gamers and creators. Dubbed Intel Arc A-Series and comprised of 5 different chips built on two different Arc Alchemist SoCs, the company announced its entry level Arc 3 Graphics is shipping in market now with laptop OEMs delivering new all-Intel products shortly. The two SoCs set the foundation across three performance tiers, including Arc 3, Arc 5, and Arc 7.

For example, Arc A370M arrives today with 8 Xe cores, 8 ray tracing units, 4GB of GDDR6 memory linked to a 64-bit memory bus, and a 1,550MHz graphics clock. Graphics power is rated at 35-50W. However, Arc A770M, Intel's highest-end mobile GPU, will come with 32 Xe cores, 32 ray tracing units, 16GB of GDDR6 memory over a 256-bit interface and a 1,650MHz graphics clock. Doing the math, Arc A770M could be up to 4X more powerful than Arc A370M. In terms of performance, Intel showcased benchmarks from a laptop outfitted with a Core i7-12700H processor and Arc A370M GPU that can top the 60 FPS threshold at 1080p in many games where integrated graphics would come up far short. Examples included Doom Eternal (63 fps) at high quality settings, and Hitman 3 (62 fps) and Destiny 2 (66 fps) at medium settings. Intel is also showcasing new innovations for content creators, with its Deep Link, Hyper Encode and AV1 video compression support offering big gains in video upscaling, encoding and streaming. Finally, Intel Arc Control software will offer unique features like Smooth Sync, which blends tearing artifacts when V-Sync is turned off, as well as Creator Studio with background blur, frame tracking and broadcast features for direct game streaming services support.
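The "up to 4X" claim falls straight out of the spec ratios quoted above. A quick sketch (the dictionary and variable names are ours, the figures are the ones in the paragraph):

```python
# Spec ratios between Arc A770M and Arc A370M, using the figures quoted above.
a370m = {"xe_cores": 8, "rt_units": 8, "bus_bits": 64, "clock_mhz": 1550}
a770m = {"xe_cores": 32, "rt_units": 32, "bus_bits": 256, "clock_mhz": 1650}

# Ratio of each A770M spec to the corresponding A370M spec.
ratios = {k: a770m[k] / a370m[k] for k in a370m}
print(ratios)
# Xe cores, ray tracing units, and memory bus width are all 4x larger,
# hence "up to 4X"; the slightly higher clock (~6%) could nudge peak
# throughput a little beyond that in practice.
```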

Linux

Asahi Linux Is Reverse-Engineering Support For Apple Silicon, Including M1 Ultra (arstechnica.com) 46

An anonymous reader quotes a report from Ars Technica: For months, a small group of volunteers has worked to get this Arch Linux-based distribution up and running on Apple Silicon Macs, adapting existing drivers and (in the case of the GPU) painstakingly writing their own. And that work is paying off -- last week, the team released its first alpha installer to the general public, and as of yesterday, the software supports the new M1 Ultra in the Mac Studio. In the current alpha, an impressive list of hardware already works, including Wi-Fi, USB 2.0 over the Thunderbolt ports (USB 3.0 only works on Macs with USB-A ports, but USB 3.0 over Thunderbolt is "coming soon"), and the built-in display. But there are still big features missing, including DisplayPort and Thunderbolt, the webcam, Bluetooth, sleep mode, and GPU acceleration. That said, regarding GPU acceleration, the developers say that the M1 is fast enough that a software-rendered Linux desktop feels faster on the M1 than a GPU-accelerated desktop feels on many other ARM chips.

Asahi's developers don't think the software will be "done," with all basic M1-series hardware and functionality supported and working out of the box, "for another year, maybe two." By then, Apple will probably have introduced another generation or two of M-series chips. But the developers are optimistic that much of the work they're doing now will continue to work on future generations of Apple hardware with relatively minimal effort. [...] If you want to try Asahi Linux on an M1 Mac, the current installer is run from the command line and requires "at least 53GB of free space" for an install with a KDE Plasma desktop. Asahi only needs about 15GB, but the installer requires you to leave at least 38GB of free space to the macOS install so that macOS system updates don't break. From there, dual-booting should work similarly to the process on Intel Macs, with the alternate OS visible from within Startup Disk or the boot picker you can launch when you start your Mac. Future updates should be installable from within your new Asahi Linux installation and shouldn't require you to reinstall from scratch.

Intel

Nvidia Would Consider Using Intel as a Foundry, CEO Says (bloomberg.com) 21

Nvidia, one of the largest buyers of outsourced chip production, said it will explore using Intel as a possible manufacturer of its products, but said Intel's journey to becoming a foundry will be difficult. From a report: Nvidia Chief Executive Officer Jensen Huang said he wants to diversify his company's suppliers as much as possible and will consider working with Intel. Nvidia currently uses Taiwan Semiconductor Manufacturing Co and Samsung Electronics to build its products. "We're very open-minded to considering Intel," Huang said Wednesday in an online company event. "Foundry discussions take a long time. It's not just about desire. We're not buying milk here."
Google

Steam (Officially) Comes To Chrome OS 24

An anonymous reader shares a report: This may feel like deja vu because Google itself mistakenly leaked this announcement a few days ago, but the company today officially announced the launch of Steam on Chrome OS. Before you run off to install it, there are a few caveats: This is still an alpha release and only available on the more experimental and unstable Chrome OS Dev channel. The number of supported devices is also still limited since it'll need at least 8GB of memory, an 11th-generation Intel Core i5 or i7 processor and Intel Iris Xe Graphics. That's a relatively high-end configuration for what are generally meant to be highly affordable devices and somewhat ironically means that you can now play games on Chrome OS devices that are mostly meant for business users. The list of supported games is also still limited but includes the likes of Portal 2, Skyrim, The Witcher 3: Wild Hunt, Half-Life 2, Stardew Valley, Factorio, Stellaris, Civilization V, Fallout 4, Disco Elysium and Untitled Goose Game.
Security

How to Eliminate the World's Need for Passwords (arstechnica.com) 166

The board members of the FIDO Alliance include Amazon, Apple, Google, Microsoft, PayPal, and RSA (as well as Intel and Arm). It describes its mission as reducing the world's "over-reliance on passwords."

Today Wired reports that the group thinks "it has finally identified the missing piece of the puzzle" for finally achieving large-scale adoption of a password-supplanting technology: On Thursday, the organization published a white paper that lays out FIDO's vision for solving the usability issues that have dogged passwordless features and, seemingly, kept them from achieving broad adoption....

The paper is conceptual, not technical, but after years of investment to integrate what are known as the FIDO2 and WebAuthn passwordless standards into Windows, Android, iOS, and more, everything is now riding on the success of this next step.... FIDO is looking to get to the heart of what still makes passwordless schemes tough to navigate. And the group has concluded that it all comes down to the procedure for switching or adding devices. If the process for setting up a new phone, say, is too complicated, and there's no simple way to log in to all of your apps and accounts — or if you have to fall back to passwords to reestablish your ownership of those accounts — then most users will conclude that it's too much of a hassle to change the status quo.

The passwordless FIDO standard already relies on a device's biometric scanners (or a master PIN you select) to authenticate you locally without any of your data traveling over the Internet to a web server for validation. The main concept that FIDO believes will ultimately solve the new device issue is for operating systems to implement a "FIDO credential" manager, which is somewhat similar to a built-in password manager. Instead of literally storing passwords, this mechanism will store cryptographic keys that can sync between devices and are guarded by your device's biometric or passcode lock. At Apple's Worldwide Developer Conference last summer, the company announced its own version of what FIDO is describing, an iCloud feature known as "Passkeys in iCloud Keychain," which Apple says is its "contribution to a post-password world...."
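The challenge-response idea underlying these passwordless schemes can be sketched in a few lines. One loud caveat: real FIDO2/WebAuthn uses per-site public/private key pairs, while this stdlib-only illustration substitutes a shared HMAC key so it stays self-contained; the essential flow (the secret never leaves the device, and only a signature over a fresh challenge crosses the network) is the same. All names here are hypothetical.

```python
import hashlib
import hmac
import secrets

# The credential created at registration. In real FIDO the device keeps a
# private key and the server stores only the public key; here, for a
# dependency-free sketch, both sides share one symmetric key.
device_key = secrets.token_bytes(32)  # stays on the device, behind biometrics/PIN
server_copy = device_key              # what the server registered


def server_issue_challenge() -> bytes:
    # A fresh random nonce per login attempt, so old responses can't be replayed.
    return secrets.token_bytes(16)


def device_sign(challenge: bytes) -> bytes:
    # In a real authenticator this runs only after the local biometric or
    # PIN check succeeds; the key itself is never transmitted.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()


def server_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(server_copy, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)


challenge = server_issue_challenge()
assert server_verify(challenge, device_sign(challenge))  # login succeeds
```

The "FIDO credential manager" idea in the paragraph above amounts to syncing `device_key`-like secrets (in reality, private keys) between your devices under the protection of the platform's biometric or passcode lock.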

FIDO's white paper also includes another component, a proposed addition to its specification that would allow one of your existing devices, like your laptop, to act as a hardware token itself, similar to stand-alone Bluetooth authentication dongles, and provide physical authentication over Bluetooth. The idea is that this would still be virtually phish-proof since Bluetooth is a proximity-based protocol and can be a useful tool as needed in developing different versions of truly passwordless schemes that don't have to retain a backup password. Christiaan Brand, a product manager at Google who focuses on identity and security and collaborates on FIDO projects, says that the passkey-style plan follows logically from the smartphone or multi-device image of a passwordless future. "This grand vision of 'Let's move beyond the password,' we've always had this end state in mind to be honest, it just took until everyone had mobile phones in their pockets," Brand says....

To FIDO, the biggest priority is a paradigm shift in account security that will make phishing a thing of the past.... When asked if this is really it, if the death knell for passwords is truly, finally tolling, Google's Brand turns serious, but he doesn't hesitate to answer: "I feel like everything is coalescing," he says. "This should be durable."

Such a change won't happen overnight, the article points out. "With any other tech migration (ahem, Windows XP), the road will inevitably prove arduous."
Math

Linux Random Number Generator Sees Major Improvements (phoronix.com) 80

An anonymous Slashdot reader summarizes some important news from the web page of Jason Donenfeld (creator of the open-source VPN protocol WireGuard): The Linux kernel's random number generator has seen its first set of major improvements in over a decade, improving everything from the cryptography to the interface used. Not only does it finally retire SHA-1 in favor of BLAKE2s [in Linux kernel 5.17], but it also at long last unites '/dev/random' and '/dev/urandom' [in the upcoming Linux kernel 5.18], finally ending years of Slashdot banter and debate:

The most significant outward-facing change is that /dev/random and /dev/urandom are now exactly the same thing, with no differences between them at all, thanks to their unification in the "random: block in /dev/urandom" commit. This removes a significant age-old crypto footgun, already accomplished by other operating systems eons ago. [...] The upshot is that every Internet message board disagreement on /dev/random versus /dev/urandom has now been resolved by making everybody simultaneously right! Now, for the first time, these are both the right choice to make, in addition to getrandom(0); they all return the same bytes with the same semantics. There are only right choices.
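From userspace, the unified pool looks the same however you reach it. A small Python sketch of the interfaces involved (`os.getrandom` is Linux-only, hence the guard; the variable names are ours):

```python
import os

# os.urandom() reads from the kernel CSPRNG; with the unified pool,
# /dev/random, /dev/urandom, and getrandom(0) all draw the same bytes
# with the same semantics.
key = os.urandom(32)  # 32 random bytes, suitable as key material
assert len(key) == 32

# On Linux (Python 3.6+), os.getrandom() invokes the syscall directly;
# flags=0 corresponds to the getrandom(0) mentioned above, which blocks
# only until the pool has been initialized at boot.
if hasattr(os, "getrandom"):
    more = os.getrandom(32, 0)
    assert len(more) == 32
```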

Phoronix adds: One exciting change to also note is the getrandom() system call may be a hell of a lot faster with the new kernel. The getrandom() call for obtaining random bytes is yielding much faster performance with the latest code in development. Intel's kernel test robot is seeing an 8450% improvement with the stress-ng getrandom() benchmark. Yes, an 8450% improvement.
Graphics

More Apple M1 Ultra Benchmarks Show It Doesn't Beat the Best GPUs from Nvidia and AMD (tomsguide.com) 121

Tom's Guide tested a Mac Studio workstation equipped with an M1 Ultra with the Geekbench 5.4 CPU benchmarks "to get a sense of how effectively it handles single-core and multi-core workflows."

"Since our M1 Ultra is the best you can buy (at a rough price of $6,199) it sports a 20-core CPU and a 64-core GPU, as well as 128GB of unified memory (RAM) and a 2TB SSD."

Slashdot reader exomondo shares their results: We ran the M1 Ultra through the Geekbench 5.4 CPU benchmarking test multiple times and after averaging the results, we found that the M1 Ultra does indeed outperform top-of-the-line Windows gaming PCs when it comes to multi-core CPU performance. Specifically, the M1 Ultra outperformed a recent Alienware Aurora R13 desktop we tested (w/ Intel Core i7-12700KF, GeForce RTX 3080, 32GB RAM), an Origin Millennium (2022) we just reviewed (Core i9-12900K CPU, RTX 3080 Ti GPU, 32GB RAM), and an even more powerful RTX 3090-equipped HP Omen 45L we tested recently (Core i9-12900K, GeForce RTX 3090, 64GB RAM) in the Geekbench 5.4 multi-core CPU benchmark.

However, as you can see from the chart of results below, the M1 Ultra couldn't match its Intel-powered competition in terms of CPU single-core performance. The Ultra-powered Studio also proved slower to transcode video than the aforementioned gaming PCs, taking nearly 4 minutes to transcode a 4K video down to 1080p using Handbrake. All of the gaming PCs I just mentioned completed the same task faster, over 30 seconds faster in the case of the Origin Millennium. Before we even get into the GPU performance tests it's clear that while the M1 Ultra excels at multi-core workflows, it doesn't trounce the competition across the board. When we ran our Mac Studio review unit through the Geekbench 5.4 OpenCL test (which benchmarks GPU performance by simulating common tasks like image processing), the Ultra earned an average score of 83,868. That's quite good, but again it fails to outperform Nvidia GPUs in similarly-priced systems.

They also share some results from the OpenCL Benchmarks browser, which publicly displays scores from different GPUs that users have uploaded: Apple's various M1 chips are on the list as well, and while the M1 Ultra leads that pack it's still quite a ways down the list, with an average score of 83,940. Incidentally, that means it ranks below much older GPUs like Nvidia's GeForce RTX 2070 (85,639) and AMD's Radeon VII (86,509). So here again we see that while the Ultra is fast, it can't match the graphical performance of GPUs that are 2-3 years old at this point — at least, not in these synthetic benchmarks. These tests don't always accurately reflect real-world CPU and GPU performance, which can be dramatically influenced by what programs you're running and how they're optimized to make use of your PC's components.
Their conclusion? When it comes to tasks like photo editing or video and music production, the M1 Ultra w/ 128GB of RAM blazes through workloads, and it does so while remaining whisper-quiet. It also makes the Mac Studio a decent gaming machine, as I was able to play less demanding games like Crusader Kings III, Pathfinder: Wrath of the Righteous and Total War: Warhammer II at reasonable (30+ fps) framerates. But that's just not on par with the performance we expect from high-end GPUs like the Nvidia GeForce RTX 3090....

Of course, if you don't care about games and are in the market for a new Mac with more power than just about anything Apple's ever made, you want the Studio with M1 Ultra.

AMD

Radeon Super Resolution Arrives To Speed Up Your Games in AMD Adrenalin (anandtech.com) 7

Alongside their spring driver update, AMD this morning is also unveiling the first nugget of information about the next generation of their FidelityFX Super Resolution (FSR) technology. From a report: Dubbed FSR 2.0, the next generation of AMD's upscaling technology will be taking the logical leap into adding temporal data, giving FSR more data to work with, and thus improving its ability to generate details. And, while AMD is being coy with details for today's early teaser, at a high level this technology should put AMD much closer to competing with NVIDIA's temporal-based DLSS 2.0 upscaling technology, as well as Intel's forthcoming XeSS upscaling tech.

AMD's current version of FSR, which is now being referred to as FSR 1.0, was released last summer by the company. Implemented as a compute shader, FSR 1.0 was a (relatively) simple spatial upscaler, which could only use data from the current frame for generating a higher resolution frame. Spatial upscaling's simplicity is great for compatibility, but it's limited by the data it has access to, whereas more advanced multi-frame techniques can draw on additional data to generate more detailed images. For that reason, AMD has been very careful with their image quality claims for FSR 1.0, treating it more like a supplement to other upscaling methods than a rival to NVIDIA's class-leading DLSS 2.0.
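The single-frame limitation is easy to see in a toy spatial upscaler. This is a nearest-neighbor sketch of ours, not AMD's algorithm (FSR 1.0 actually uses edge-adaptive upscaling plus a sharpening pass), but it is confined to the current frame's pixels in the same way:

```python
# Toy nearest-neighbor spatial upscaler: like any purely spatial technique,
# it can only redistribute the pixels already present in the current frame.
# Detail that was never sampled cannot be recovered; temporal techniques
# (FSR 2.0, DLSS 2.0) add data from previous frames to reconstruct it.

def upscale_2x(frame):
    """Double a frame's width and height; `frame` is a 2D list of pixel values."""
    out = []
    for row in frame:
        doubled = [p for p in row for _ in (0, 1)]  # repeat each pixel horizontally
        out.append(doubled)
        out.append(list(doubled))                   # repeat each row vertically
    return out


low_res = [[1, 2],
           [3, 4]]
print(upscale_2x(low_res))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Every output pixel is a copy of an input pixel from the same frame, which is precisely the constraint that temporal accumulation is designed to lift.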

Apple

Apple's Charts Set the M1 Ultra up for an RTX 3090 Fight it Could Never Win (theverge.com) 142

An anonymous reader shares a report: When Apple introduced the M1 Ultra -- the company's most powerful in-house processor yet and the crown jewel of its brand new Mac Studio -- it did so with charts boasting that the Ultra was capable of beating out Intel's best processor or Nvidia's RTX 3090 GPU all on its own. The charts, in Apple's recent fashion, were maddeningly labeled with "relative performance" on the Y-axis, and Apple doesn't tell us what specific tests it runs to arrive at whatever numbers it uses to then calculate "relative performance." But now that we have a Mac Studio, we can say that in most tests, the M1 Ultra isn't actually faster than an RTX 3090, as much as Apple would like to say it is.
Chrome

Google Casually Announces Steam For Chrome OS Is Coming In Alpha For Select Chromebooks (engadget.com) 19

At the 2022 Google for Games Developer Summit where its Stadia B2B cloud gaming platform was unveiled, Google announced the long-awaited availability of Steam on Chromebooks. 9to5Google reports: Google specifically said that the "Steam Alpha just launched, making this longtime PC game store available on select Chromebooks for users to try." That said, no other details appear to be live this morning, but we did reveal the device list last month. As we noted at the time: "At a minimum, your Chromebook needs to have an (11th gen) Intel Core i5 or i7 processor and a minimum of 7 GB of RAM. This eliminates almost all Chromebooks but those in the upper-mid range and high end."

Google today said "you can check that out on the Chromebook community forum." The post in question is now live, but without any actual availability timeline beyond "coming soon." However, we did learn that the "early, alpha-quality version of Steam" will first come to the Chrome OS Dev channel for a "small set" of devices.

Meanwhile, Google also said Chrome OS is getting a new "games overlay" on "select" Android titles to make them "playable with user-driven keyboard and mouse configurations on Chromebooks without developer changes." It will launch later this year in a public beta.
Further reading: The part of the keynote where this announcement was made can be viewed here.

Google's Domain Name Registrar is Out of Beta After Seven Years
Intel

Intel Announces $88 Billion Megafab to Keep Chipmaking in Europe (cnet.com) 16

Intel on Tuesday revealed plans for a second new "megafab," a chipmaking site in Magdeburg, Germany, that's the centerpiece of an expected $88 billion in investments across several European countries. The capacity expansion comes on top of other gargantuan spending commitments in the United States, including a planned megafab in Ohio, intended to bring Intel back to the forefront of chip manufacturing. From a report: "The world has an insatiable demand for semiconductors," Intel Chief Executive Pat Gelsinger said in a video announcing the investments. Today, 80% of chipmaking takes place in Asia, but the company's spending in the US and Europe will mean a "more balanced and resilient" supply chain that isn't so dependent on Asia. Intel will start with new chip fabrication facilities, called fabs, at the Magdeburg site costing about $19 billion, with construction set to begin in 2023 and manufacturing in 2027, Gelsinger said. That'll let Intel build chips with leading-edge technology, both for itself and, through a major expansion of its Intel Foundry Services business, for other customers as well.
AMD

Intel Finds Bug In AMD's Spectre Mitigation, AMD Issues Fix (tomshardware.com) 44

"News of a fresh Spectre BHB vulnerability that only impacts Intel and Arm processors emerged this week," reports Tom's Hardware, "but Intel's research around these new attack vectors unearthed another issue.

"One of the patches that AMD has used to fix the Spectre vulnerabilities has been broken since 2018." Intel's security team, STORM, found the issue with AMD's mitigation. In response, AMD has issued a security bulletin and updated its guidance to recommend using an alternative method to mitigate the Spectre vulnerabilities, thus repairing the issue anew....

Intel's research into AMD's Spectre fix begins in a roundabout way — Intel's processors were recently found to still be susceptible to Spectre v2-based attacks via a new Branch History Injection variant, this despite the company's use of the Enhanced Indirect Branch Restricted Speculation (eIBRS) and/or Retpoline mitigations that were thought to prevent further attacks. In need of a newer Spectre mitigation approach to patch the far-flung issue, Intel turned to studying alternative mitigation techniques. There are several other options, but all entail varying levels of performance tradeoffs. Intel says its ecosystem partners asked the company to consider using AMD's LFENCE/JMP technique. The "LFENCE/JMP" mitigation is a Retpoline alternative commonly referred to as "AMD's Retpoline."

As a result of Intel's investigation, the company discovered that the mitigation AMD has used since 2018 to patch the Spectre vulnerabilities isn't sufficient — the chips are still vulnerable. The issue impacts nearly every modern AMD processor spanning almost the entire Ryzen family for desktop PCs and laptops (second-gen to current-gen) and the EPYC family of datacenter chips....

In response to the STORM team's discovery and paper, AMD issued a security bulletin (AMD-SB-1026) that states it isn't aware of any currently active exploits using the method described in the paper. AMD also instructs its customers to switch to using "one of the other published mitigations (V2-1 aka 'generic retpoline' or V2-4 aka 'IBRS')." The company also published updated Spectre mitigation guidance reflecting those changes [PDF]....

AMD's security bulletin thanks Intel's STORM team by name and noted it engaged in the coordinated vulnerability disclosure, thus allowing AMD enough time to address the issue before making it known to the public.

Thanks to Slashdot reader Hmmmmmm for submitting the story...
