Businesses

Nvidia and AMD GPUs Are Returning To Shelves and Prices Are Finally Falling (theverge.com) 78

For nearly two years, netting a PS5, Xbox Series X, or a modern AMD Radeon or Nvidia RTX graphics card without paying a fortune has been a matter of luck (or a lot of skill). At the shortage's peak, scalpers were successfully charging double or even triple MSRP for a modern GPU. But it's looking like the great GPU shortage is nearly over. From a report: In January, sites including Tom's Hardware reported that prices were finally beginning to drop, and drop they did; they've now fallen an average of 30 percent in the three months since. On eBay, the most popular graphics cards are only commanding a street price of $200-$300 over MSRP. And while that might still seem like a lot, some have fallen further: used Nvidia RTX 3080 Ti and AMD RX 6900 XT cards are currently fetching less than their original asking price, a sure sign that sanity is returning to the marketplace.

Just as importantly, some graphics cards are actually staying in stock at retailers when their prices are too high -- again, something that sounds perfectly normal but that we haven't seen in a while. For many months, boutiques like my local retailer Central Computers could only afford to sell you a GPU as part of a big PC bundle, but now it's making every card available on its own. GameStop is selling a Radeon RX 6600 for just $10 over MSRP, and it hasn't yet sold out. Newegg has also continually been offering an RTX 3080 Ti for just $10 over MSRP (after rebate, too) -- even if $1,200 still seems high for that card's level of performance.

GNU is Not Unix

Richard Stallman Speaks on the State of Free Software, and Answers Questions (libreplanet.org) 112

Richard Stallman celebrated his 69th birthday last month. And Wednesday, he gave a 92-minute presentation called "The State of the Free Software Movement."

Stallman began by thanking everyone who's contributed to free software, and encouraged others who want to help to visit gnu.org/help. "The Free Software movement is universal, and morally should not exclude anyone. Because even though there are crimes that should be punished, cutting off someone from contributing to free software punishes the world. Not that person."

He then noted some things that have gotten better in the free software movement, including big improvements in projects like GNU Emacs, such as how it displays external packages. (In addition, "GNU Health now has a hospital management facility, which should make it applicable to a lot more medical organizations so they can switch to free software. And [Skype alternative] GNU Jami got a big upgrade.")

What's getting worse? Well, the libre-booted machines that we have are getting older and scarcer. Finding a way to support something new is difficult, because Intel and AMD are both designing their hardware to subjugate people. If they were basically haters of the public, it would be hard for them to do it much worse than they're doing.

And Macintoshes are moving towards being jails, like the iMonsters. It's getting harder for users to install and run even their own programs. And this of course should be illegal. It should be illegal to sell a computer that doesn't let users install software of their own from source code. And the law probably shouldn't allow the computer to stop you from installing binaries that you get from others either, even though it's true that in cases like that, you're doing it at your own risk. But tying people down, strapping them into their chairs so that they can't do anything that hurts themselves, makes things worse, not better. There are other systems where you can find ways to trust people that don't depend on being under the power of a giant company.

We've seen problems sometimes where supported old hardware gets de-supported because somebody doesn't think it's important any more — it's so old, how could that matter? But there are reasons...why old hardware sometimes remains very important, and people who aren't thinking about this issue might not realize that...


Stallman also had some advice for students required by their schools to use non-free software like Zoom for their remote learning. "If you have to use a non-free program, there's one last thing... which is to say in each class session, 'I am bitterly ashamed of the fact that I'm using Zoom for this class.' Just that. It's a few seconds. But say it each time.... And over time, the fact that this is really important to you will sink in."

And then halfway through, Stallman began taking questions from the audience...

Read on for Slashdot's report on Stallman's remarks...
Apple

How Apple's Monster M1 Ultra Chip Keeps Moore's Law Alive 109

By combining two processors into one, the company has squeezed a surprising amount of performance out of silicon. From a report: "UltraFusion gave us the tools we needed to be able to fill up that box with as much compute as we could," Tim Millet, vice president of hardware technologies at Apple, says of the Mac Studio. Benchmarking of the M1 Ultra has shown it to be competitive with the fastest high-end computer chips and graphics processors on the market. Millet says some of the chip's capabilities, such as its potential for running AI applications, will become apparent over time, as developers port over the necessary software libraries. The M1 Ultra is part of a broader industry shift toward more modular chips. Intel is developing a technology that allows different pieces of silicon, dubbed "chiplets," to be stacked on top of one another to create custom designs that do not need to be redesigned from scratch. The company's CEO, Pat Gelsinger, has identified this "advanced packaging" as one pillar of a grand turnaround plan. Intel's competitor AMD is already using a 3D stacking technology from TSMC to build some server and high-end PC chips. This month, Intel, AMD, Samsung, TSMC, and ARM announced a consortium to work on a new standard for chiplet designs. In a more radical approach, the M1 Ultra uses the chiplet concept to connect entire chips together.

Apple's new chip is all about increasing overall processing power. "Depending on how you define Moore's law, this approach allows you to create systems that engage many more transistors than what fits on one chip," says Jesus del Alamo, a professor at MIT who researches new chip components. He adds that it is significant that TSMC, at the cutting edge of chipmaking, is looking for new ways to keep performance rising. "Clearly, the chip industry sees that progress in the future is going to come not only from Moore's law but also from creating systems that could be fabricated by different technologies yet to be brought together," he says. "Others are doing similar things, and we certainly see a trend towards more of these chiplet designs," adds Linley Gwennap, author of the Microprocessor Report, an industry newsletter. The rise of modular chipmaking might help boost the performance of future devices, but it could also change the economics of chipmaking. Without Moore's law, a chip with twice the transistors may cost twice as much. "With chiplets, I can still sell you the base chip for, say, $300, the double chip for $600, and the uber-double chip for $1,200," says Todd Austin, an electrical engineer at the University of Michigan.
Supercomputing

Russia Cobbles Together Supercomputing Platform To Wean Off Foreign Suppliers (theregister.com) 38

Russia is adapting to a world where it no longer has access to many foreign technologies by developing a new supercomputer platform that can use foreign x86 processors such as Intel's in combination with the country's homegrown Elbrus processors. The Register reports: The new supercomputer reference system, dubbed "RSK Tornado," was developed on behalf of the Russian government by HPC system integrator RSC Group, according to an English translation of a Russian-language press release published March 30. RSC said it created RSK Tornado as a "unified interoperable" platform to "accelerate the pace of import substitution" for HPC systems, data processing centers and data storage systems in Russia. In other words, the HPC system architecture is meant to help Russia quickly adjust to the fact that major chip companies such as Intel, AMD and TSMC -- plus several other technology vendors, like Dell and Lenovo -- have suspended product shipments to the country as a result of sanctions by the US and other countries in reaction to Russia's invasion of Ukraine.

RSK Tornado supports up to 104 servers in a rack, with the idea being to support foreign x86 processors (should they become available) as well as Russia's Elbrus processors, which debuted in 2015. The hope appears to be that Russian developers will be able to port HPC, AI and big data applications from x86 architectures to the Elbrus architecture, which, in theory, will make it easier for Russia to rely on its own supply chain and better cope with continued sanctions from abroad. RSK Tornado's systems software is RSC proprietary and is currently used to orchestrate supercomputer resources at the Interdepartmental Supercomputer Center of the Russian Academy of Sciences, St Petersburg Polytechnic University and the Joint Institute for Nuclear Research. RSC claims to have also developed its own liquid-cooling system for supercomputers and data storage systems, the latter of which can use Elbrus CPUs too.

Intel

Intel Suspends All Operations in Russia 'Effective Immediately' (arstechnica.com) 107

Intel, one of the world's largest semiconductor companies, is suspending business operations in Russia "effective immediately," the company announced late Tuesday. From a report: "Intel continues to join the global community in condemning Russia's war against Ukraine," the company said in a statement. Intel stopped shipping chips to customers in Russia and Belarus in early March. Intel said that it is "working to support all of our employees through this difficult situation, including our 1,200 employees in Russia."

Ordinarily, it would be a drastic step for a multinational company like Intel to exit a market the size of Russia. But Western sanctions have made it increasingly difficult for global companies to operate in Russia. Earlier this week, the Biden administration announced broad sanctions on the Russian electronics industry, which presumably includes many of Intel's partners and customers in Russia. Two of Intel's major competitors, AMD and Nvidia, halted sales of their products in Russia early last month. Taiwanese chipmaker TSMC has also restricted sales in Russia.

AMD

AMD Confirms Its GPU Drivers Are Overclocking CPUs Without Asking (tomshardware.com) 73

AMD has confirmed to Tom's Hardware that a bug in its GPU driver is, in fact, changing Ryzen CPU settings in the BIOS without permission. This condition has been shown to auto-overclock Ryzen CPUs without the user's knowledge. From the report: Reports of this issue began cropping up on various social media outlets recently, with users reporting that their CPUs had mysteriously been overclocked without their consent. The issue was subsequently investigated and tracked back to AMD's GPU drivers. AMD originally added support for automatic CPU overclocking through its GPU drivers last year, with the idea that adding in a Ryzen Master module into the Radeon Adrenalin GPU drivers would simplify the overclocking experience. Users with a Ryzen CPU and Radeon GPU could use one interface to overclock both. Previously, it required both the GPU driver and AMD's Ryzen Master software.

Overclocking a Ryzen CPU requires the software to manipulate the BIOS settings, just as we see with other software overclocking utilities. For AMD, this can mean simply engaging the auto-overclocking Precision Boost Overdrive (PBO) feature. This feature does all the dirty work, like adjusting voltages and frequency on the fly, to give you a one-click automatic overclock. However, applying a GPU profile in the AMD driver can now inexplicably alter the BIOS settings to enable automatic overclocking. This is problematic because of the potential ill effects of overclocking -- in fact, overclocking a Ryzen CPU automatically voids the warranty. AMD's software typically requires you to click a warning to acknowledge that you understand the risks associated with overclocking, and that it voids your warranty, before it allows you to overclock the system. Unfortunately, that isn't happening here.
Until AMD issues a fix, "users have taken to using the Radeon Software Slimmer to delete the Ryzen Master SDK from the GPU driver, thus preventing any untoward changes to the BIOS settings," adds Tom's Hardware.
AMD

AMD To Acquire Pensando in a $1.9 Billion Bid for Networking Tech (protocol.com) 12

AMD said early Monday that it plans to acquire networking chip maker Pensando for $1.9 billion in cash, in a bid to arm itself with tech that competes directly with Nvidia's and Intel's data-center chip packages. From a report: Pensando was founded by several former Cisco engineers, and makes edge computing technology that competes with AWS Nitro, Intel's DPU launched last year, and Nvidia's data processing units called BlueField. In a release distributed in advance of the announcement, AMD said that buying the closely held Pensando will give it a networking platform that will bolster its existing server chip lineup. Pensando's chips are an increasingly important part of data center design, as it becomes impossible to simply throw larger numbers of processors at demanding computing tasks. As regular chips scale up, the networking connections become a bottleneck, and the DPU's goal (Intel calls it an IPU) is to free up the central processor to perform other functions.
Intel

Intel Beats AMD and Nvidia with Arc GPU's Full AV1 Support (neowin.net) 81

Neowin notes growing support for the "very efficient, potent, royalty-free video codec" AV1, including Microsoft's adding of support for hardware acceleration of AV1 on Windows.

But AV1 even turned up in Intel's announcement this week of the Arc A-series, a new line of discrete GPUs, Neowin reports: Intel has been quick to respond, and the company has become the first GPU hardware vendor to offer full AV1 support on its newly launched Arc GPUs. While AMD and Nvidia both offer AV1 decoding with their newest GPUs, neither has support for AV1 encoding.

Intel says that hardware encoding of AV1 on its new Arc GPUs is 50 times faster than software-only solutions. It also says that AV1 encoding with Arc is 20 percent more efficient than HEVC. With this feature, Intel hopes to capture at least some of the streaming and video-editing market made up of users looking for a more robust AV1 encoding solution than CPU-based software approaches.

From Intel's announcement: Intel Arc A-Series GPUs are the first in the industry to offer full AV1 hardware acceleration, including both encode and decode, delivering faster video encode and higher quality streaming while consuming the same internet bandwidth. We've worked with industry partners to ensure that AV1 support is available today in many of the most popular media applications, with broader adoption expected this year. The AV1 codec will be a game changer for the future of video encoding and streaming.
Graphics

More Apple M1 Ultra Benchmarks Show It Doesn't Beat the Best GPUs from Nvidia and AMD (tomsguide.com) 121

Tom's Guide tested a Mac Studio workstation equipped with an M1 Ultra with the Geekbench 5.4 CPU benchmarks "to get a sense of how effectively it handles single-core and multi-core workflows."

"Since our M1 Ultra is the best you can buy (at a rough price of $6,199) it sports a 20-core CPU and a 64-core GPU, as well as 128GB of unified memory (RAM) and a 2TB SSD."

Slashdot reader exomondo shares their results: We ran the M1 Ultra through the Geekbench 5.4 CPU benchmarking test multiple times and after averaging the results, we found that the M1 Ultra does indeed outperform top-of-the-line Windows gaming PCs when it comes to multi-core CPU performance. Specifically, the M1 Ultra outperformed a recent Alienware Aurora R13 desktop we tested (w/ Intel Core i7-12700KF, GeForce RTX 3080, 32GB RAM), an Origin Millennium (2022) we just reviewed (Core i9-12900K CPU, RTX 3080 Ti GPU, 32GB RAM), and an even more powerful RTX 3090-equipped HP Omen 45L we tested recently (Core i9-12900K, GeForce RTX 3090, 64GB RAM) in the Geekbench 5.4 multi-core CPU benchmark.

However, as you can see from the chart of results below, the M1 Ultra couldn't match its Intel-powered competition in terms of CPU single-core performance. The Ultra-powered Studio also proved slower to transcode video than the aforementioned gaming PCs, taking nearly 4 minutes to transcode a 4K video down to 1080p using Handbrake. All of the gaming PCs I just mentioned completed the same task faster, over 30 seconds faster in the case of the Origin Millennium. Before we even get into the GPU performance tests it's clear that while the M1 Ultra excels at multi-core workflows, it doesn't trounce the competition across the board. When we ran our Mac Studio review unit through the Geekbench 5.4 OpenCL test (which benchmarks GPU performance by simulating common tasks like image processing), the Ultra earned an average score of 83,868. That's quite good, but again it fails to outperform Nvidia GPUs in similarly-priced systems.

They also share some results from the OpenCL Benchmarks browser, which publicly displays scores from different GPUs that users have uploaded: Apple's various M1 chips are on the list as well, and while the M1 Ultra leads that pack it's still quite a ways down the list, with an average score of 83,940. Incidentally, that means it ranks below much older GPUs like Nvidia's GeForce RTX 2070 (85,639) and AMD's Radeon VII (86,509). So here again we see that while the Ultra is fast, it can't match the graphical performance of GPUs that are 2-3 years old at this point — at least, not in these synthetic benchmarks. These tests don't always accurately reflect real-world CPU and GPU performance, which can be dramatically influenced by what programs you're running and how they're optimized to make use of your PC's components.
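For scale, the gaps implied by those OpenCL averages are small in percentage terms. A quick back-of-the-envelope comparison, using only the averaged scores quoted above:

```python
# Geekbench 5.4 OpenCL average scores quoted in the article.
scores = {
    "Apple M1 Ultra": 83_940,
    "Nvidia GeForce RTX 2070": 85_639,
    "AMD Radeon VII": 86_509,
}

baseline = scores["Apple M1 Ultra"]
for gpu, score in scores.items():
    # Percentage difference relative to the M1 Ultra's average score.
    delta = (score - baseline) / baseline * 100
    print(f"{gpu}: {score} ({delta:+.1f}% vs. M1 Ultra)")
```

Run against these numbers, the older RTX 2070 and Radeon VII lead the M1 Ultra by only a few percent, which matches the article's framing: the Ultra trails, but not by much, in these synthetic tests.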
Their conclusion? When it comes to tasks like photo editing or video and music production, the M1 Ultra w/ 128GB of RAM blazes through workloads, and it does so while remaining whisper-quiet. It also makes the Mac Studio a decent gaming machine, as I was able to play less demanding games like Crusader Kings III, Pathfinder: Wrath of the Righteous and Total War: Warhammer II at reasonable (30+ fps) framerates. But that's just not on par with the performance we expect from high-end GPUs like the Nvidia GeForce RTX 3090....

Of course, if you don't care about games and are in the market for a new Mac with more power than just about anything Apple's ever made, you want the Studio with M1 Ultra.

AMD

Radeon Super Resolution Arrives To Speed Up Your Games in AMD Adrenalin (anandtech.com) 7

Alongside their spring driver update, AMD this morning is also unveiling the first nugget of information about the next generation of their FidelityFX Super Resolution (FSR) technology. From a report: Dubbed FSR 2.0, the next generation of AMD's upscaling technology will be taking the logical leap into adding temporal data, giving FSR more data to work with, and thus improving its ability to generate details. And, while AMD is being coy with details for today's early teaser, at a high level this technology should put AMD much closer to competing with NVIDIA's temporal-based DLSS 2.0 upscaling technology, as well as Intel's forthcoming XeSS upscaling tech.

AMD's current version of FSR, which is now being referred to as FSR 1.0, was released last summer by the company. Implemented as a compute shader, FSR 1.0 was a (relatively) simple spatial upscaler, which could only use data from the current frame for generating a higher resolution frame. Spatial upscaling's simplicity is great for compatibility, but it's limited by the data it has access to; more advanced multi-frame techniques can draw on additional data to generate more detailed images. For that reason, AMD has been very careful with their image quality claims for FSR 1.0, treating it more like a supplement to other upscaling methods than a rival to NVIDIA's class-leading DLSS 2.0.
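To make the spatial-vs-temporal distinction concrete, here is a toy illustration (this is not AMD's FSR algorithm, just a minimal nearest-neighbor upscale): a spatial-only technique can only reuse pixels from the single frame it is handed, so no detail beyond that frame can ever appear in the output.

```python
# Minimal nearest-neighbor spatial upscale: every output pixel is copied
# from the single input frame. This illustrates "spatial-only" upscaling;
# it is a toy sketch, not AMD's actual FSR shader.
def upscale_nearest(frame, factor):
    """frame: 2D list of pixel values; factor: integer scale factor."""
    return [
        [frame[y // factor][x // factor]
         for x in range(len(frame[0]) * factor)]
        for y in range(len(frame) * factor)
    ]

frame = [[1, 2],
         [3, 4]]
print(upscale_nearest(frame, 2))
# A temporal upscaler (FSR 2.0, DLSS 2.0) would additionally blend in
# samples accumulated from previous frames to recover finer detail.
```

The limitation is visible in the structure: each output pixel is a function of the current frame alone, which is exactly the constraint FSR 2.0's temporal data is meant to lift.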

AMD

Intel Finds Bug In AMD's Spectre Mitigation, AMD Issues Fix (tomshardware.com) 44

"News of a fresh Spectre BHB vulnerability that only impacts Intel and Arm processors emerged this week," reports Tom's Hardware, "but Intel's research around these new attack vectors unearthed another issue.

"One of the patches that AMD has used to fix the Spectre vulnerabilities has been broken since 2018." Intel's security team, STORM, found the issue with AMD's mitigation. In response, AMD has issued a security bulletin and updated its guidance to recommend using an alternative method to mitigate the Spectre vulnerabilities, thus repairing the issue anew....

Intel's research into AMD's Spectre fix begins in a roundabout way — Intel's processors were recently found to still be susceptible to Spectre v2-based attacks via a new Branch History Injection variant, this despite the company's use of the Enhanced Indirect Branch Restricted Speculation (eIBRS) and/or Retpoline mitigations that were thought to prevent further attacks. In need of a newer Spectre mitigation approach to patch the far-flung issue, Intel turned to studying alternative mitigation techniques. There are several other options, but all entail varying levels of performance tradeoffs. Intel says its ecosystem partners asked the company to consider using AMD's LFENCE/JMP technique. The "LFENCE/JMP" mitigation is a Retpoline alternative commonly referred to as "AMD's Retpoline."

As a result of Intel's investigation, the company discovered that the mitigation AMD has used since 2018 to patch the Spectre vulnerabilities isn't sufficient — the chips are still vulnerable. The issue impacts nearly every modern AMD processor spanning almost the entire Ryzen family for desktop PCs and laptops (second-gen to current-gen) and the EPYC family of datacenter chips....

In response to the STORM team's discovery and paper, AMD issued a security bulletin (AMD-SB-1026) that states it isn't aware of any currently active exploits using the method described in the paper. AMD also instructs its customers to switch to using "one of the other published mitigations (V2-1 aka 'generic retpoline' or V2-4 aka 'IBRS')." The company also published updated Spectre mitigation guidance reflecting those changes [PDF]....

AMD's security bulletin thanks Intel's STORM team by name and notes that the team engaged in coordinated vulnerability disclosure, allowing AMD enough time to address the issue before it was made known to the public.

Thanks to Slashdot reader Hmmmmmm for submitting the story...
China

How China Built an Exascale Supercomputer Out of Old 14nm Tech (nextplatform.com) 29

Slashdot reader katydid77 shares a report from the supercomputing site The Next Platform: If you need any proof that it doesn't take the most advanced chip manufacturing processes to create an exascale-class supercomputer, you need look no further than the Sunway "OceanLight" system housed at the National Supercomputing Center in Wuxi, China. Some of the architectural details of the OceanLight supercomputer came to our attention as part of a paper published by Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and Beijing Academy of Artificial Intelligence, which is running a pretrained machine learning model called BaGuaLu across more than 37 million cores and 14.5 trillion parameters (presumably with FP32 single precision), and has the capability to scale to 174 trillion parameters (approaching what is called "brain-scale," where the number of parameters starts approaching the number of synapses in the human brain)....

Add it all up, and the 105 cabinet system tested on the BaGuaLu training model, with its 107,250 SW26010-Pro processors, had a peak theoretical performance of 1.51 exaflops. We like base 2 numbers and think that the OceanLight system probably scales to 160 cabinets, which would be 163,840 nodes and just under 2.3 exaflops of peak FP64 and FP32 performance. If it is only 120 cabinets (also a base 2 number), OceanLight will come in at 1.72 exaflops peak. But these rack scales are, once again, just hunches. If the 160 cabinet scale is the maximum for OceanLight, then China could best the performance of the 1.5 exaflops "Frontier" supercomputer being tuned up at Oak Ridge National Laboratories today and also extend beyond the peak theoretical performance of the 2 exaflops "Aurora" supercomputer coming to Argonne National Laboratory later this year — and maybe even further than the "El Capitan" supercomputer going into Lawrence Livermore National Laboratory in 2023 and expected to be around 2.2 exaflops to 2.3 exaflops according to the scuttlebutt.
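The linear extrapolation behind those cabinet-count hunches is easy to reproduce. A quick sketch (the article's own figures are rounded, so the results agree only approximately):

```python
# Back-of-the-envelope check of the cabinet-scaling estimates:
# 105 cabinets were measured at 1.51 exaflops peak, and the article
# extrapolates linearly to 120- and 160-cabinet configurations.
measured_cabinets = 105
measured_exaflops = 1.51

per_cabinet = measured_exaflops / measured_cabinets  # ~14.4 petaflops/cabinet
for cabinets in (120, 160):
    print(f"{cabinets} cabinets ~= {per_cabinet * cabinets:.1f} exaflops peak")
```

This reproduces the article's estimates of roughly 1.7 exaflops at 120 cabinets and just under 2.3 exaflops at 160 cabinets.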

We would love to see the thermals and costs of OceanLight. The SW26010-Pro chip could burn very hot, to be sure, and run up the electric bill for power and cooling, but if SMIC [China's largest foundry] can get good yield on 14 nanometer processes, the chip could be a lot less expensive to make than, say, a massive GPU accelerator from Nvidia, AMD, or Intel. (It's hard to say.) Regardless, having indigenous parts matters more than power efficiency for China right now, and into its future, and we said as much last summer when contemplating China's long road to IT independence. Imagine what China can do with a shrink to 7 nanometer processes when SMIC delivers them — apparently not even using extreme ultraviolet (EUV) light — many years hence....

The bottom line is that the National Research Center of Parallel Computer Engineering and Technology (known as NRCPC), working with SMIC, has had an exascale machine in the field for a year already. (There are two, in fact.) Can the United States say that right now? No it can't.

AMD

New UCIe Chiplet Standard Supported by Intel, AMD and Arm (anandtech.com) 20

A number of industry stalwarts including Intel, AMD, Arm, TSMC, and Samsung on Wednesday introduced a new Universal Chiplet Interconnect Express (UCIe) consortium. AnandTech: Taking significant inspiration from the very successful PCI-Express playbook, with UCIe the involved firms are creating a standard for connecting chiplets, with the goal of having a single set of standards that not only simplify the process for all involved, but lead the way towards full interoperability between chiplets from different manufacturers, allowing chips to mix-and-match chiplets as chip makers see fit. In other words, to make a complete and compatible ecosystem out of chiplets, much like today's ecosystem for PCIe-based expansion cards.

The comparisons to PCIe are apt on multiple levels, and this is perhaps the best way to quickly understand the UCIe group's goals. Not only is the new standard being made available in an open fashion, but the companies involved will be establishing a formal consortium group later this year to administer UCIe and further develop it. Meanwhile from a general technology perspective, the use of chiplets is the latest step in the continual consolidation of integrated circuits, as smaller and smaller transistors have allowed more and more functionality to be brought on-chip. In essence, features that have been on an expansion card or separate chip up until now are starting to make their way on to the chip/SoC itself. So like PCIe moderates how these parts work together as expansion cards, a new standard has become needed to moderate how these parts should work together as chiplets.

AMD

AMD Is Now Worth More Than Rival Intel (yahoo.com) 25

Hmmmmmm shares a report from Yahoo Finance: AMD's market cap currently stands at $188 billion after shares rose nearly 2% in Tuesday's session. Intel's market cap is $182 billion. That marks the second time in a week AMD's market value has climbed above Intel's -- the first time it happened was a week ago. Followers of this battle may not be surprised to see this happen (and to see it continue from here) for several reasons. First, AMD has been winning the battle on Wall Street with the sexier investment thesis. AMD last week closed its $35 billion acquisition of Xilinx. Second, AMD has flat out posted better financials than Intel (for some time) as it has gained market share in key areas (notably in servers). AMD's sales and profits rose 68% and 117%, respectively, in 2021. The company outlined 31% revenue growth for 2022 and gross profit margins of 51%. Intel's 2021 sales and earnings increased 2% and 7%, respectively. The company sees sales in 2022 rising about 2%. Profits are expected to drop 36% as Intel further builds out its chip-making capacity.
Intel

Intel's 12th Gen Alder Lake Chips for Thinner and Lighter Laptops Have Arrived (theverge.com) 28

Intel launched the first wave of its 12th Gen Alder Lake chips at CES 2022 -- but only for its H-series lineup of chips, destined for the most powerful and power-hungry laptops. And now, it's rolling out the rest of its Alder Lake laptop lineup: the P-series and U-series models it briefly showed off in January, which are set to power the thinner, lighter, and cheaper laptops of 2022. From a report: In total, there are a whopping 20 chips fit for a wide range of hardware across the P-series, U-series (15W), and U-series (9W) categories, with the first laptops powered by the new processors set to arrive in March. Like their more powerful H-series cousins (and the Alder Lake desktop chips that Intel launched in late 2021 and at CES 2022), the new P-series and U-series chips have a lot more cores than 2020's 11th Gen models, with a hybrid architecture approach that combines performance and efficiency cores to maximize both power and battery life. And Intel is promising some big improvements focused around those boosted core counts, touting up to 70 percent better multi-thread performance than previous 11th Gen (and AMD) hardware. The company also says that it wins out in benchmarks against chips like Apple's M1 and M1 Pro (although not the M1 Max), and AMD's Ryzen R7 5800U in tasks like web browsing and photo editing.
Intel

Intel Discloses Multi-Generation Xeon Scalable Roadmap: New E-Core Only Xeons in 2024 (anandtech.com) 5

AnandTech reports: It's no secret that Intel's enterprise processor platform has been stretched in recent generations. Compared to the competition, Intel is chasing its multi-die strategy while relying on a manufacturing platform that hasn't offered the best in the market. That being said, Intel is quoting more shipments of its latest Xeon products in December than AMD shipped in all of 2021, and the company is launching the next generation Sapphire Rapids Xeon Scalable platform later in 2022. Beyond Sapphire Rapids has been somewhat under the hood, with minor leaks here and there, but today Intel is lifting the lid on that roadmap.

Currently in the market is Intel's Ice Lake 3rd Generation Xeon Scalable platform, built on Intel's 10nm process node with up to 40 Sunny Cove cores. The die is large, around 660 mm2, and in our benchmarks we saw a sizeable generational uplift in performance compared to the 2nd Generation Xeon offering. The response to Ice Lake Xeon has been mixed, given the competition in the market, but Intel has forged ahead by leveraging a more complete platform coupled with FPGAs, memory, storage, networking, and its unique accelerator offerings. Datacenter revenues, depending on the quarter you look at, are either up or down based on how customers are digesting their current processor inventories (as stated by CEO Pat Gelsinger).
Further reading: Intel Arc Update: Alchemist Laptops Q1, Desktops Q2; 4M GPUs Total for 2022.
AMD

AMD Closes $50 Billion Purchase of Xilinx (tomshardware.com) 19

AMD on Monday completed the acquisition of Xilinx, creating a company that can offer various types of compute devices, including CPUs, GPUs, and FPGAs. Tom's Hardware reports: AMD CEO Dr. Lisa Su told analyst Patrick Moorhead that the first processor combining Xilinx technologies will arrive in 2023, which is in contrast to the company's previous integration efforts. After AMD bought ATI Technologies in 2006, it took the company five years to build its first accelerated processing units (which included AMD's x86 cores and ATI's GPU). This time AMD inked a long-term development pact with Xilinx and was able to work collaboratively even before the regulators approved the transaction. It remains to be seen what exactly AMD plans to offer, but it is reasonable to expect the new processor to feature AMD's x86 cores and Xilinx's programmable engines.

The move will help AMD to continue expanding its presence in the datacenter sector and offer unique solutions that will combine IP ingredients designed by the two companies. Interestingly, the first fruits of the deal are expected to materialize next year. [...] The Xilinx business will become AMD's Adaptive and Embedded Computing Group (AECG), led by former Xilinx CEO Victor Peng. As a result, Xilinx's existing leadership will remain in place for at least a while. Furthermore, AMD's embedded business will cease to be a part of the company's enterprise and semi-custom unit and will merge into AECG, which might be good news as executives from the enterprise division will now spend more time on EPYC CPUs.
"The acquisition of Xilinx brings together a highly complementary set of products, customers and markets combined with differentiated IP and world-class talent to create the industry's high-performance and adaptive computing leader," said Lisa Su, chief executive of AMD. "Xilinx offers industry-leading FPGAs, adaptive SoCs, AI inference engines and software expertise that enable AMD to offer the strongest portfolio of high-performance and adaptive computing solutions in the industry and capture a larger share of the approximately $135 billion market opportunity we see across cloud, edge, and intelligent devices."
Intel

Intel Wins Historic Court Fight Over EU Antitrust Fine (bloomberg.com) 22

Intel won a historic victory in its court fight over a record 1.06 billion-euro ($1.2 billion) competition fine, in a landmark ruling that upends one of the European Union's most important antitrust cases. From a report: The EU General Court ruled on Wednesday that regulators made key errors in a landmark 2009 decision over allegedly illegal rebates that the U.S. chip giant gave to PC makers to squeeze out rival Advanced Micro Devices (AMD). While the surprise ruling can be appealed one more time, it's a stinging defeat for the European Commission, which hasn't lost a big antitrust case in court for more than 20 years. The Luxembourg-based EU court said the commission provided an "incomplete" analysis when it fined Intel, criticizing it for failing to provide sufficient evidence to back up its findings of anti-competitive risks.
AMD

AMD Returns To Smartphone Graphics (theregister.com) 13

AMD's GPU technology is returning to mobile handsets with Samsung's Exynos 2200 system-on-chip, which was announced on Tuesday. The Register reports: The Exynos 2200 processor, fabricated using a 4nm process, has Armv9 CPU cores and the oddly named Xclipse GPU, which is an adaptation of AMD's RDNA 2 mainstream GPU architecture. AMD was in the handheld GPU market until 2009, when it sold the Imageon GPU and handheld business for $65m to Qualcomm, which turned the tech into the Adreno GPU for its Snapdragon family. AMD's Imageon processors were used in devices from Motorola, Panasonic, Palm and others making Windows Mobile handsets. AMD's now returning to a more competitive mobile graphics market with Apple, Arm and Imagination also possessing homegrown smartphone GPUs.

Samsung and AMD announced the companies were working together on graphics in June last year. With Exynos 2200, Samsung has moved on from Arm's Mali GPU family, which was in the predecessor Exynos 2100 used in the current flagship Galaxy smartphones. Samsung says the power-optimized GPU has hardware-accelerated ray tracing, which simulates lighting effects and other features to make gaming a better experience. [...] The Exynos 2200 has an image signal processor that can apparently handle 200-megapixel pictures and record 8K video. Other features include HDR10+ support, and 4K video decoding at up to 240fps or 8K decoding at up to 60fps. It supports display refresh rates of up to 144Hz.

The eight-core CPU cluster features a balance of high-performing and power-efficient cores. It has one Arm Cortex-X2 flagship core, three Cortex-A710 big cores and four Cortex-A510s, which is in the same ballpark as Qualcomm's Snapdragon 8 Gen 1 and Mediatek's Dimensity 9000, which are the only other chips using Arm's Armv9 cores and are made using a 4nm process. An integrated 5G modem supports both sub-6GHz and millimeter wave bands, and a feature to mix LTE and 5G signals speeds up data transfers to 10Gbps. The chip also has a security processor and an AI engine that is said to be two times faster than its predecessor in the Exynos 2100.

Hardware

Samsung No-showed On Its Major Exynos 2200 Launch (arstechnica.com) 18

ArsTechnica: So here's a crazy story. Samsung was supposed to have a big SoC launch on Tuesday, but that launch did not happen. Samsung didn't cancel or delay the event. The January 11 date was announced, but when the time for the event came, nothing happened! Samsung pulled a no-call no-show for a major product launch. [...] The Exynos 2200 was (?) shaping up to be a major launch for Samsung. It is, after all, the first Samsung SoC with the headline-grabbing feature of having an AMD GPU. The two companies announced this deal a year ago, and we've been giddy about it ever since. The Exynos 2200 is (or was) going to debut in the Galaxy S22. That launch event is currently scheduled for February 8, assuming Samsung doesn't ghost everyone again.
