Hardware

VMware, AMD, Samsung and RISC-V Push For Confidential Computing Standards (theregister.com) 15

VMware has joined AMD, Samsung, and members of the RISC-V community to work on an open and cross-platform framework for the development and operation of applications using confidential computing hardware. The Register reports: Revealing the effort at the Confidential Computing Summit 2023 in San Francisco, the companies say they aim to bring about an industry transition to practical confidential computing by developing the open source Certifier Framework for Confidential Computing project. Among other goals, the project aims to standardize on a set of platform-independent developer APIs that can be used to develop or adapt application code to run in a confidential computing environment, with a Certifier Service overseeing them in operation. VMware claims to have researched, developed and open sourced the Certifier Framework, but with AMD on board, plus Samsung (which develops its own smartphone chips), the group has the x86 and Arm worlds covered. Also on board is the Keystone project, which is developing an enclave framework to support confidential computing on RISC-V processors.

Confidential computing is designed to protect applications and their data from theft or tampering by protecting them inside a secure enclave, or trusted execution environment (TEE). This uses hardware-based security mechanisms to prevent access from everything outside the enclave, including the host operating system and any other application code. Such security protections are likely to be increasingly important in the context of applications running in multi-cloud environments, VMware reckons.
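The core trust mechanism behind enclaves is attestation: before handing an enclave any secrets, a verifier checks a hardware-backed "measurement" (a hash of the code loaded into the enclave) against a known-good value. The sketch below is a purely conceptual illustration of that idea in Python; it is not the Certifier Framework API or any vendor's real attestation protocol.

```python
import hashlib

# Conceptual sketch only (not a real TEE API): a verifier trusts an enclave
# only if the measurement (hash of the code loaded into it) matches a value
# the verifier already knows to be good. Real attestation additionally
# involves hardware-signed evidence; that part is omitted here.

EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-application-v1.0").hexdigest()

def verify_attestation(reported_measurement: str) -> bool:
    """Accept the enclave only if its measurement matches the known-good hash."""
    return reported_measurement == EXPECTED_MEASUREMENT

# A genuine enclave reports the hash of the code it actually runs:
genuine = hashlib.sha256(b"trusted-application-v1.0").hexdigest()
tampered = hashlib.sha256(b"trusted-application-v1.0-with-backdoor").hexdigest()

print(verify_attestation(genuine))   # True
print(verify_attestation(tampered))  # False
```

Any tampering with the enclave's code changes the measurement, so the verifier refuses to release secrets to it, which is the property that makes the host OS and other applications untrusted by default.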

Another scenario for confidential computing put forward by Microsoft -- which believes confidential computing will become the norm -- is multi-party computation and analytics. This sees several users each contribute their own private data to an enclave, where it can be analyzed securely to produce results much richer than each would have got purely from their own data set. This is described as an emerging class of machine learning and "data economy" workloads that are based on sensitive data and models aggregated from multiple sources, which will be enabled by confidential computing. However, VMware points out that like many useful hardware features, it will not be widely adopted until it becomes easier to develop applications in the new paradigm.
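To give a flavor of the multi-party idea, here is a toy additive-secret-sharing example. Note the hedge: in the TEE scenario Microsoft describes, the parties simply trust the enclave with their raw data; classic secret sharing, shown below, achieves a similar goal (a joint result without revealing inputs) by purely cryptographic means and is included only as an illustration.

```python
import random

# Illustrative sketch (not tied to any vendor's TEE): three hospitals want
# the total of their private patient counts without revealing individual
# values. Additive secret sharing: each party splits its value into random
# shares that sum to the value; only combined sums are ever exchanged.

MODULUS = 2**61 - 1  # arithmetic is done modulo a large prime

def make_shares(value: int, n_parties: int) -> list[int]:
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)  # last share fixes the sum
    return shares

private_inputs = [120, 45, 300]          # each party's secret value
all_shares = [make_shares(v, 3) for v in private_inputs]

# Each party i publishes only the sum of the i-th shares it received:
partial_sums = [sum(col) % MODULUS for col in zip(*all_shares)]
total = sum(partial_sums) % MODULUS

print(total)  # 465 -- the joint result, with no individual input revealed
```

Each published partial sum is statistically random on its own, so no party learns another's input, yet the partial sums combine to the exact joint total.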

AI

Oracle Spending 'Billions' on Nvidia Chips This Year, Ellison Says (reuters.com) 27

Oracle is spending "billions" of dollars on chips from Nvidia as it expands a cloud computing service targeting a new wave of artificial intelligence companies, Oracle founder and Chairman Larry Ellison said. From a report: Oracle's cloud division is working to gain ground against larger rivals such as Amazon Web Services and Microsoft. To get an edge, Oracle has focused on building fast networks that can shuffle around the huge amount of data needed to create AI systems similar to ChatGPT.

Oracle is also buying huge numbers of GPUs designed to crunch that data for AI work. Beyond the billions going to Nvidia, Oracle is spending even more on CPUs from Ampere Computing, a chip startup it has invested in, and from AMD, Ellison said at an Ampere event.

Chrome

Google's New Standard For ChromeOS: 'Chromebook X' (9to5google.com) 27

Google is launching the "Chromebook X" program, aiming to differentiate high-quality laptops and tablets from standard Chromebooks by improving hardware specifications and adding exclusive features such as enhanced video conferencing capabilities and unique wallpapers. Chromebook X devices, expected to be priced between $350 and $500, will provide users with an elevated experience beyond the basic functionality of traditional Chromebooks. The devices are anticipated to be available in stores by the end of the year, coinciding with the release of ChromeOS version 115 or newer. 9to5Google reports: For the past few months, Google has been preparing new branding for above average devices from various Chromebook makers. Notably, we haven't yet seen any signs of Google making a Chromebook X device of its own, which is honestly a shame considering how long it's been since a Pixelbook has been released. The Chromebook X brand, which could change before launch, will appear somewhere on a laptop/tablet's chassis, with a mark that could be as simple as an "X" next to the usual "Chromebook" logo. There should also be a special boot screen instead of the standard "chromeOS" logo that's shown on all machines today.

Aside from the added "X," what actually sets a Chromebook X apart from other devices is the hardware inside. Specifically, Google appears to require a certain amount of RAM, a good-quality camera for video conferencing, and a (presumably) higher-end display. Beyond that, Google has so far made specific preparations for Chromebook X models to be built on four platforms spanning AMD and Intel processors (though newer generations will likely also be included): AMD Zen 2+ (Skyrim), AMD Zen 3 (Guybrush), and Intel 12th Gen Core (Brya and Nissa).

To further differentiate Chromebook X models from low-end Chromebooks, Google is also preparing an exclusive set of features. As mentioned, one of the key focuses of Chromebook X is video conferencing, with Google requiring an up-to-spec camera. Complementing that hardware, Google is bringing unique features like Live Caption (adding generated captions to video calls), a built-in portrait blur effect, and "voice isolation." Earlier this year, we reported that ChromeOS was readying a set of "Time Of Day" wallpapers and screen savers that would change in appearance throughout the day, particularly to match the sunrise and sunset. We now know that these are going to be exclusive to Chromebook X devices. To ensure that those wallpapers only appear on Chromebook X and can't be forcibly enabled, Google is preparing a system it calls "feature management." At the moment, feature management is only used to check whether to enable Chromebook X exclusives. Based on that, some other exclusive features of Chromebook X include support for up to 16 virtual desks, "pinned" (available offline) files from Google Drive, and a revamped retail demo mode.

Security

Latest SUSE Linux Enterprise Goes All in With Confidential Computing 7

SUSE's latest release of SUSE Linux Enterprise 15 Service Pack 5 (SLE 15 SP5) focuses on security, with the company claiming it is the first distro to offer full support for confidential computing to protect data. From a report: According to SUSE, the latest version of its enterprise platform is designed to deliver high-performance computing capabilities, with an inevitable mention of AI/ML workloads, plus it claims to have extended its live-patching capabilities. The release also comes just weeks after the community release openSUSE Leap 15.5 was made available, with the two sharing a common core. The Reg's resident open source guru noted that Leap 15.6 has now been confirmed as under development, which implies that a future SLE 15 SP6 should also be in the pipeline.

SUSE announced the latest version at its SUSECON event in Munich, along with a new report on cloud security issues claiming that more than 88 percent of IT teams have reported at least one cloud security incident over the past year. This appears to be the justification for the claim that SLE 15 SP5 is the first Linux distro to support "the entire spectrum" of confidential computing, allowing customers to run fully encrypted virtual machines on their infrastructure to protect applications and their associated data. Confidential computing relies on hardware-based security mechanisms in the processor to provide this protection, so enterprises hoping to take advantage of this will need to ensure their servers have the necessary support, such as AMD's Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) and Intel's Trust Domain Extensions (TDX).
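On Linux, one quick way to see whether a host advertises the AMD side of that hardware support is to look for the relevant CPU flags in /proc/cpuinfo. The helper below is a small sketch; the flag names (`sev`, `sev_es`, `sev_snp`) are the ones the Linux kernel commonly reports for AMD SEV support, but you should verify the exact set against your own kernel's documentation.

```python
def confidential_flags(cpuinfo_text: str) -> set[str]:
    """Return which confidential-computing CPU flags appear in cpuinfo text.

    Flag names here are those Linux commonly reports for AMD SEV support
    (sev, sev_es, sev_snp); check your kernel's docs for the full list.
    """
    wanted = {"sev", "sev_es", "sev_snp"}
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            found |= wanted & set(line.split(":", 1)[1].split())
    return found

# On a real host you would read the live file:
#   flags = confidential_flags(open("/proc/cpuinfo").read())
sample = "flags\t\t: fpu vme sev sev_es sev_snp sse2"
print(confidential_flags(sample))
```

Presence of the flags only means the CPU supports the feature; the kernel, hypervisor, and firmware must also enable it before encrypted VMs can actually run.
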

Intel

Intel To Spend $33 Billion in Germany in Landmark Expansion (reuters.com) 12

Intel will invest more than 30 billion euros ($33 billion) in Germany as part of its expansion push in Europe, the U.S. company said on Monday, marking the biggest investment by a foreign company in Europe's top economy. From a report: The deal to build two leading-edge semiconductor facilities in the eastern city of Magdeburg involves 10 billion euros in German subsidies, a person familiar with the matter said. Intel CEO Pat Gelsinger said he was grateful to the German government and the state of Saxony-Anhalt, where Magdeburg is located, for "fulfilling the vision of a vibrant, sustainable, leading-edge semiconductor industry in Germany and the EU." Under Gelsinger, Intel has been investing billions in building factories across three continents to restore its dominance in chipmaking and better compete with rivals AMD, Nvidia and Samsung.

Bug

Dev Boots Linux 292,612 Times to Find Kernel Bug (tomshardware.com) 32

Long-time Slashdot reader waspleg shared this story from Hot Hardware: Red Hat Linux developer Richard WM Jones has shared an eyebrow-raising tale of Linux bug hunting. Jones noticed that Linux 6.4 has a bug which means it will hang on boot about 1 in 1,000 times. Jones set out to pinpoint the bug and prove he had caught it red-handed. However, his headlining travail, involving booting Linux 292,612 times (and another 1,000 times to confirm the bug), apparently "only took 21 hours." It also seems that the bug is less common on Intel hardware than on AMD-based machines.
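A quick back-of-the-envelope shows why so many boots are needed for a bug like this. If a hang occurs on roughly 1 in 1,000 boots, the probability that it would have shown up at least once in N boots grows slowly, so a run of clean boots only gradually becomes convincing evidence. This is generic binomial arithmetic, not Jones's actual methodology:

```python
# Back-of-the-envelope for intermittent-bug hunts: if a hang occurs on
# roughly 1 in 1,000 boots, how confident are we after n clean boots that
# the bug is really gone (rather than that we just got lucky)?

p_hang = 1 / 1000

def confidence_after_clean_boots(n: int) -> float:
    """Probability that at least one hang would have appeared in n boots."""
    return 1 - (1 - p_hang) ** n

for n in (1_000, 3_000, 10_000):
    print(f"{n:>6} clean boots -> {confidence_after_clean_boots(n):.1%} confident")
```

A thousand clean boots gives only about 63 percent confidence; getting near certainty takes several thousand more, which is why rigorous bisection of a 1-in-1,000 hang racks up boot counts in the hundreds of thousands.
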

AI

AWS is Considering AMD's New AI Chips (reuters.com) 12

Amazon Web Services, the world's largest cloud computing provider, is considering using new artificial intelligence chips from AMD, though it has not made a final decision, an AWS executive told Reuters. From the report: The remarks came during an AMD event where the chip company outlined its strategy for the AI market, which is dominated by rival Nvidia. In interviews with Reuters, AMD Chief Executive Lisa Su outlined an approach to winning over major cloud computing customers by offering a menu of all the pieces needed to build the kinds of systems to power services similar to ChatGPT, but letting customers pick and choose which they want, using industry standard connections.

While AWS has not made any public commitments to use AMD's new MI300 chips in its cloud services, Dave Brown, vice president of elastic compute cloud at Amazon, said AWS is considering them. "We're still working together on where exactly that will land between AWS and AMD, but it's something that our teams are working together on," Brown said. "That's where we've benefited from some of the work that they've done around the design that plugs into existing systems."

Encryption

Hackers Can Steal Cryptographic Keys By Video-Recording Power LEDs 60 Feet Away (arstechnica.com) 26

An anonymous reader quotes a report from Ars Technica: Researchers have devised a novel attack that recovers the secret encryption keys stored in smart cards and smartphones by using cameras in iPhones or commercial surveillance systems to video record power LEDs that show when the card reader or smartphone is turned on. The attacks enable a new way to exploit two previously disclosed side channels, a class of attack that measures physical effects that leak from a device as it performs a cryptographic operation. By carefully monitoring characteristics such as power consumption, sound, electromagnetic emissions, or the amount of time it takes for an operation to occur, attackers can assemble enough information to recover secret keys that underpin the security and confidentiality of a cryptographic algorithm. [...]

On Tuesday, academic researchers unveiled new research demonstrating attacks that provide a novel way to exploit these types of side channels. The first attack uses an Internet-connected surveillance camera to take a high-speed video of the power LED on a smart card reader -- or of an attached peripheral device -- during cryptographic operations. This technique allowed the researchers to pull a 256-bit ECDSA key off the same government-approved smart card used in Minerva. The other allowed the researchers to recover the private SIKE key of a Samsung Galaxy S8 phone by training the camera of an iPhone 13 on the power LED of a USB speaker connected to the handset, in a similar way to how Hertzbleed pulled SIKE keys off Intel and AMD CPUs. Power LEDs are designed to indicate when a device is turned on. They typically cast a blue or violet light that varies in brightness and color depending on the power consumption of the device they are connected to.

There are limitations to both attacks that make them unfeasible in many (but not all) real-world scenarios (more on that later). Despite this, the published research is groundbreaking because it provides an entirely new way to facilitate side-channel attacks. Not only that, but the new method removes the biggest barrier holding back previously existing methods from exploiting side channels: the need to have instruments such as an oscilloscope, electric probes, or other objects touching or being in proximity to the device being attacked. In Minerva's case, the device hosting the smart card reader had to be compromised for researchers to collect precise-enough measurements. Hertzbleed, by contrast, didn't rely on a compromised device but instead took 18 days of constant interaction with the vulnerable device to recover the private SIKE key. To attack many other side channels, such as the one in the World War II encrypted teletype terminal, attackers must have specialized and often expensive instruments attached or near the targeted device. The video-based attacks presented on Tuesday reduce or completely eliminate such requirements. All that's required to steal the private key stored on the smart card is an Internet-connected surveillance camera that can be as far as 62 feet away from the targeted reader. The side-channel attack on the Samsung Galaxy handset can be performed by an iPhone 13 camera that's already present in the same room.

Videos here and here show the video-capture process of a smart card reader and a Samsung Galaxy phone, respectively, as they perform cryptographic operations. "To the naked eye, the captured video looks unremarkable," adds Ars. "But by analyzing the video frames for different RGB values in the green channel, an attacker can identify the start and finish of a cryptographic operation."
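The signal-processing idea Ars describes can be sketched in a few lines: average the green channel of each frame and flag frames whose level departs from the idle baseline. The frames below are synthetic lists of (r, g, b) pixels, purely to show the concept; the researchers' real pipeline (rolling-shutter sampling, key recovery) is far more involved.

```python
# Toy illustration of green-channel analysis: the power LED's brightness
# shifts slightly while the CPU performs a cryptographic operation, so the
# per-frame mean of the green channel marks when the operation starts and
# stops. Synthetic frames only -- not the researchers' actual pipeline.

def mean_green(frame):
    return sum(g for _, g, _ in frame) / len(frame)

def active_frames(frames, baseline, threshold=2.0):
    """Indices of frames whose green level departs from the idle baseline."""
    return [i for i, f in enumerate(frames)
            if abs(mean_green(f) - baseline) > threshold]

idle = [(10, 100, 200)] * 4   # LED at its idle brightness
busy = [(10, 94, 200)] * 4    # slight dip while the CPU draws more power
frames = [idle] * 5 + [busy] * 3 + [idle] * 5

print(active_frames(frames, baseline=mean_green(idle)))  # [5, 6, 7]
```

Locating the start and finish of the operation is only the first step; the actual key extraction then correlates the fine-grained brightness trace with the cryptographic algorithm's data-dependent power draw.
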

AMD

AMD Likely To Offer Details on AI Chip in Challenge To Nvidia (reuters.com) 18

Advanced Micro Devices on Tuesday is expected to reveal new details about an AI "superchip" that analysts believe will be a strong challenger to Nvidia, whose chips dominate the fast-growing artificial intelligence market. From a report: AMD Chief Executive Lisa Su will give a keynote address at an event in San Francisco on the company's strategy in the data center and AI markets. Analysts expect fresh details about a chip called the MI300, AMD's most advanced graphics processing unit, the category of chips that companies like OpenAI use to develop products such as ChatGPT. Nvidia dominates the AI computing market with 80% to 95% of market share, according to analysts.

Last month, Nvidia's market capitalization briefly touched $1 trillion after the company said it expected a jump in revenue after it secured new chip supplies to meet surging demand. Nvidia has few competitors working at a large scale. While Intel and several startups such as Cerebras Systems and SambaNova Systems have competing products, Nvidia's biggest sales threat so far is the internal chip efforts at Alphabet's Google and Amazon's cloud unit, both of which rent their custom chips to outside developers.

Intel

Intel's Revival Plan Runs Into Trouble. 'We Had Some Serious Issues.' (wsj.com) 79

Rivals such as Nvidia have left Intel far behind. CEO Pat Gelsinger aims to reverse firm's fortunes by vastly expanding its factories. From a report: Pat Gelsinger is keenly aware he must act fast to stop Intel from becoming yet another storied American technology company left in the dust by nimbler competitors. Over the past decade, rivals overtook Intel in making the most advanced chips, graphics-chip maker Nvidia leapfrogged Intel to become America's most valuable semiconductor company, and perennial also-ran AMD has been stealing market share. Intel, by contrast, has faced repeated delays introducing new chips and frustration from would-be customers. "We didn't get into this mud hole because everything was going great," said Gelsinger, who took over as CEO in 2021. "We had some serious issues in terms of leadership, people, methodology, et cetera that we needed to attack."

As he sees it, Intel's problems stem largely from how it botched a transition in how chips are made. Intel came to prominence by both designing circuits and making them in its own factories. Now, chip companies tend to specialize either in circuit design or manufacturing, and Intel hasn't been able to pick up much business making chips designed by other people. So far, the turnaround has been rough. Gelsinger, 62 years old and a devout Christian, said he takes inspiration from the biblical story of Nehemiah, who rebuilt the walls of Jerusalem under attack from his enemies. Last year, he told a Christian group in Singapore: "You'll have your bad days, and you need to have a deep passion to rebuild." Gelsinger's plan is to invest as much as hundreds of billions of dollars into new factories that would make semiconductors for other companies alongside Intel's own chips. Two years in, that contract-manufacturing operation, called a "foundry" business, is bogged down with problems.

AMD

AMD Has A One-Liner To Help Speed Up Linux System Resume Time 23

Michael Larabel, reporting at Phoronix: AMD engineers have been working out many quirks and oddities in system suspend/resume handling to make it more reliable on their hardware, particularly around Ryzen laptops. In addition to suspend/resume reliability improvements and suspend-to-idle (s2idle) enhancements, one of their engineers also discovered an easy one-liner that shaves time off system resume. AMD engineer Basavaraj Natikar realized that a missing check in the USB XHCI driver was causing an extra 120 ms delay during system resume. It's only 120 ms, but it's a broad win: the fix lands in common XHCI driver code, it's part of AMD's larger effort to improve the Ryzen platform on Linux, and the savings comes from altering a single line of code.
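The general shape of such a fix, sketched here in Python rather than the actual kernel C and with entirely hypothetical logic (this is not Natikar's patch), is that a fixed delay which previously ran unconditionally only runs when a check shows it is actually needed:

```python
# Hypothetical sketch (Python, not the real kernel C) of how one added
# check can remove a fixed delay from a resume path: only wait when the
# condition the delay exists for can actually apply.

RESUME_DELAY_MS = 120

def resume_time_ms(port_needs_recovery: bool) -> int:
    delay = 0
    if port_needs_recovery:       # the "missing check"
        delay += RESUME_DELAY_MS  # previously this wait ran unconditionally
    return delay

print(resume_time_ms(True))   # 120
print(resume_time_ms(False))  # 0 -- the saved 120 ms on the common path
```
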

AMD

AMD's and Nvidia's Latest Sub-$400 GPUs Fail To Push the Bar on 1440p Gaming (theverge.com) 96

An anonymous reader shares a report: I'm disappointed. I've been waiting for AMD and Nvidia to offer up more affordable options for this generation of GPUs that could really push 1440p into the mainstream, but what I've been reviewing over the past week hasn't lived up to my expectations. Nvidia and AMD are both releasing new GPUs this week that are aimed at the budget PC gaming market. After seven years of 1080p dominating the mainstream, I was hopeful this generation would deliver 1440p value cards. Instead, Nvidia has started shipping a $399 RTX 4060 Ti today that the company is positioning as a 1080p card and not the 1440p sweet spot it really should be at this price point.

AMD is aggressively pricing its new Radeon RX 7600 at just $269, and it's definitely more suited to the 1080p resolution at that price point and performance. I just wish there were an option between the $300 and $400 marks that offered enough performance to push us firmly into the 1440p era. More than 60 percent of PC gamers are playing at 1080p, according to Valve's latest Steam data. That means GPU makers like AMD and Nvidia don't have to target 1440p with cards that sell in high volume because demand seems to be low. Part of that low demand could be because a monitor upgrade isn't a common purchase for PC gamers, or they'd have to pay more for a graphics card to even support 1440p. That's probably why both of these cards also still ship with just 8GB of VRAM: why ship more if you're only targeting 1080p? A lower resolution doesn't need as much VRAM for texture quality. I've been testing both cards at 1080p and 1440p to get a good idea of where they sit in the GPU market right now. It's fair to say that the RTX 4060 Ti essentially offers the same 1440p performance as an RTX 3070 for $399. That's $100 less than the RTX 3070's $499 price point, which, in October 2020, I said offered a 1440p sweet spot for games during that period of time. It's now nearly three years on, and I'd certainly expect more performance here at 1440p. Why is yesterday's 1440p card suddenly a 1080p one for Nvidia?

Intel

Intel Gives Details on Future AI Chips as It Shifts Strategy (reuters.com) 36

Intel on Monday provided a handful of new details on a chip for artificial intelligence (AI) computing it plans to introduce in 2025 as it shifts strategy to compete against Nvidia and Advanced Micro Devices. From a report: At a supercomputing conference in Germany on Monday, Intel said its forthcoming "Falcon Shores" chip will have 288 gigabytes of memory and support 8-bit floating point computation. Those technical specifications are important as artificial intelligence models similar to services like ChatGPT have exploded in size, and businesses are looking for more powerful chips to run them.
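Those two specifications are linked: at one byte per parameter, 8-bit floating point roughly doubles the model size that fits in a given amount of memory compared with FP16. The arithmetic below is an illustrative upper bound only; real deployments also need memory for activations, KV caches, and framework overhead.

```python
# Rough capacity arithmetic: FP8 stores one parameter per byte, so the
# chip's memory roughly bounds the model it can hold on its own.
# Upper bound only -- activations and caches consume memory too.

MEMORY_GB = 288  # Falcon Shores memory, per Intel

def max_params_billions(bytes_per_param: float, mem_gb: int = MEMORY_GB) -> float:
    return mem_gb * 1e9 / bytes_per_param / 1e9

print(max_params_billions(1.0))  # FP8:  288.0 (billions of parameters)
print(max_params_billions(2.0))  # FP16: 144.0
```

This is why memory capacity and low-precision formats headline AI-chip announcements: together they determine how large a model a single accelerator can serve without splitting it across devices.
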

The details are also among the first to trickle out as Intel carries out a strategy shift to catch up to Nvidia, which leads the market in chips for AI, and AMD, which is expected to challenge Nvidia's position with a chip called the MI300. Intel, by contrast, has essentially no market share after its would-be Nvidia competitor, a chip called Ponte Vecchio, suffered years of delays. Intel on Monday said it has nearly completed shipments for Argonne National Lab's Aurora supercomputer based on Ponte Vecchio, which Intel claims has better performance than Nvidia's latest AI chip, the H100. But Intel's Falcon Shores follow-on chip won't come to market until 2025, when Nvidia will likely have another chip of its own out.

AMD

AMD Now Powers 121 of the World's Fastest Supercomputers (tomshardware.com) 22

The Top 500 list of the fastest supercomputers in the world was released today, and AMD continues its streak of impressive wins with 121 systems now powered by AMD's silicon -- a year-over-year increase of 29%. From a report: Additionally, AMD continues to hold the #1 spot on the Top 500 with the Frontier supercomputer, while the test and development system based on the same architecture continues to hold the second spot in power efficiency metrics on the Green 500 list. Overall, AMD also powers seven of the top ten systems on the Green 500 list. The AMD-powered Frontier remains the only fully-qualified exascale-class supercomputer on the planet, as the Intel-powered two-exaflop Aurora has still not submitted a benchmark result after years of delays.

In contrast, Frontier is now fully operational and is being used by researchers in a multitude of science workloads. In fact, Frontier continues to improve from tuning -- the system entered the Top 500 list with 1.102 exaflops of performance in June 2022 but has now improved to 1.194 exaflops, a gain of roughly 8 percent. That's an impressive increase from the same 8,699,904 CPU cores it debuted with. For perspective, that extra 92 petaflops of performance from tuning represents the same amount of computational horsepower as the entire Perlmutter system that ranks eighth on the Top 500.

AI

Meta's Building an In-House AI Chip to Compete with Other Tech Giants (techcrunch.com) 17

An anonymous reader shared this report from the Verge: Meta is building its first custom chip specifically for running AI models, the company announced on Thursday. As Meta increases its AI efforts — CEO Mark Zuckerberg recently said the company sees "an opportunity to introduce AI agents to billions of people in ways that will be useful and meaningful" — the chip and other infrastructure plans revealed Thursday could be critical tools for Meta to compete with other tech giants also investing significant resources into AI.

Meta's new MTIA chip, which stands for Meta Training and Inference Accelerator, is its "in-house, custom accelerator chip family targeting inference workloads," Meta VP and head of infrastructure Santosh Janardhan wrote in a blog post... But the MTIA chip is seemingly a long ways away: it's not set to come out until 2025, TechCrunch reports.

Meta has been working on "a massive project to upgrade its AI infrastructure in the past year," Reuters reports, "after executives realized it lacked the hardware and software to support demand from product teams building AI-powered features."

As a result, the company scrapped plans for a large-scale rollout of an in-house inference chip and started work on a more ambitious chip capable of performing training and inference, Reuters reported...

Meta said it has an AI-powered system to help its engineers create computer code, similar to tools offered by Microsoft, Amazon and Alphabet.

TechCrunch calls these announcements "an attempt at a projection of strength from Meta, which historically has been slow to adopt AI-friendly hardware systems — hobbling its ability to keep pace with rivals such as Google and Microsoft."

Meta's VP of Infrastructure told TechCrunch "This level of vertical integration is needed to push the boundaries of AI research at scale." Over the past decade or so, Meta has spent billions of dollars recruiting top data scientists and building new kinds of AI, including AI that now powers the discovery engines, moderation filters and ad recommenders found throughout its apps and services. But the company has struggled to turn many of its more ambitious AI research innovations into products, particularly on the generative AI front. Until 2022, Meta largely ran its AI workloads using a combination of CPUs — which tend to be less efficient for those sorts of tasks than GPUs — and a custom chip designed for accelerating AI algorithms...

The MTIA is an ASIC, a kind of chip that combines different circuits on one board, allowing it to be programmed to carry out one or many tasks in parallel... Custom AI chips are increasingly the name of the game among the Big Tech players. Google created a processor, the TPU (short for "tensor processing unit"), to train large generative AI systems like PaLM-2 and Imagen. Amazon offers proprietary chips to AWS customers both for training (Trainium) and inferencing (Inferentia). And Microsoft, reportedly, is working with AMD to develop an in-house AI chip called Athena.

Meta says that it created the first generation of the MTIA — MTIA v1 — in 2020, built on a 7-nanometer process. It can scale beyond its internal 128 MB of memory to up to 128 GB, and in a Meta-designed benchmark test — which, of course, has to be taken with a grain of salt — Meta claims that the MTIA handled "low-complexity" and "medium-complexity" AI models more efficiently than a GPU. Work remains to be done in the memory and networking areas of the chip, Meta says, which present bottlenecks as AI models grow in size, requiring workloads to be split up across several chips. (Not coincidentally, Meta recently acquired an Oslo-based team building AI networking tech at British chip unicorn Graphcore.) And for now, the MTIA's focus is strictly on inference — not training — for "recommendation workloads" across Meta's app family...

If there's a common thread in today's hardware announcements, it's that Meta's attempting desperately to pick up the pace where it concerns AI, specifically generative AI... In part, Meta's feeling increasing pressure from investors concerned that the company's not moving fast enough to capture the (potentially large) market for generative AI. It has no answer — yet — to chatbots like Bard, Bing Chat or ChatGPT. Nor has it made much progress on image generation, another key segment that's seen explosive growth.

If the predictions are right, the total addressable market for generative AI software could be $150 billion. Goldman Sachs predicts that it'll raise GDP by 7%. Even a small slice of that could erase the billions Meta's lost in investments in "metaverse" technologies like augmented reality headsets, meetings software and VR playgrounds like Horizon Worlds.

Hardware

US Focuses on Invigorating 'Chiplet' Production in the US (nytimes.com) 19

More than a decade ago engineers at AMD "began toying with a radical idea," remembers the New York Times. Instead of designing one big microprocessor, they "conceived of creating one from smaller chips that would be packaged tightly together to work like one electronic brain."

But with "diminishing returns" from Moore's Law, packaging smaller chips suddenly becomes more important. [Alternate URL here.] As much as 80% of microprocessors will be using these designs by 2027, according to an estimate from the market research firm Yole Group cited by the Times: The concept, sometimes called chiplets, caught on in a big way, with AMD, Apple, Amazon, Tesla, IBM and Intel introducing such products. Chiplets rapidly gained traction because smaller chips are cheaper to make, while bundles of them can top the performance of any single slice of silicon. The strategy, based on advanced packaging technology, has since become an essential tool to enabling progress in semiconductors. And it represents one of the biggest shifts in years for an industry that drives innovations in fields like artificial intelligence, self-driving cars and military hardware. "Packaging is where the action is going to be," said Subramanian Iyer, a professor of electrical and computer engineering at the University of California, Los Angeles, who helped pioneer the chiplet concept. "It's happening because there is actually no other way."

The catch is that such packaging, like making chips themselves, is overwhelmingly dominated by companies in Asia. Although the United States accounts for around 12 percent of global semiconductor production, American companies provide just 3 percent of chip packaging, according to IPC, a trade association. That issue has now landed chiplets in the middle of U.S. industrial policymaking. The CHIPS Act, a $52 billion subsidy package that passed last summer, was seen as President Biden's move to reinvigorate domestic chip making by providing money to build more sophisticated factories called "fabs." But part of it was also aimed at stoking advanced packaging factories in the United States to capture more of that essential process... The Commerce Department is now accepting applications for manufacturing grants from the CHIPS Act, including for chip packaging factories. It is also allocating funding to a research program specifically on advanced packaging...

Some chip packaging companies are moving quickly for the funding. One is Integra Technologies in Wichita, Kan., which announced plans for a $1.8 billion expansion there but said that was contingent on receiving federal subsidies. Amkor Technology, an Arizona packaging service that has most of its operations in Asia, also said it was talking to customers and government officials about a U.S. production presence... Packaging services still need others to supply the substrates that chiplets require to connect to circuit boards and one another... But the United States has no major makers of those substrates, which are primarily produced in Asia and evolved from technologies used in manufacturing circuit boards. Many U.S. companies have also left that business, another worry that industry groups hope will spur federal funding to help board suppliers start making substrates.

In March, Mr. Biden issued a determination that advanced packaging and domestic circuit board production were essential for national security, and announced $50 million in Defense Production Act funding for American and Canadian companies in those fields. Even with such subsidies, assembling all the elements required to reduce U.S. dependence on Asian companies "is a huge challenge," said Andreas Olofsson, who ran a Defense Department research effort in the field before founding a packaging start-up called Zero ASIC. "You don't have suppliers. You don't have a work force. You don't have equipment. You have to sort of start from scratch."

AMD

AMD Will Replace AGESA With Open Source Initialization Library 'openSIL' (phoronix.com) 9

Phoronix shares some overlooked news from AMD's openSIL presentation at the OCP Regional Summit in April. Specifically, that AMD openSIL — their open-source x86 silicon initialization library — "is planned to eventually replace AMD's well known AGESA [the AMD Generic Encapsulated Software Architecture, its current silicon-initialization firmware component]" around 2026, and "it will be supported across AMD's entire processor stack — just not limited to EPYC server processors as some were initially concerned..." Raj Kapoor, AMD Fellow and AMD's Chief Firmware Architect, in fact began the AMD openSIL presentation by talking about the challenges they've had with AGESA in adapting it to Coreboot for Chromebook purposes with Ryzen SoCs... With AMD openSIL not expected to be production ready until around 2026, this puts it roughly in line for an AMD Zen 6 or Zen 7 introduction. The proof of concept code for AMD Genoa is expected to come soon...

The presentation also noted that beyond AMD openSIL code being open-source, the openSIL specification will also be open. AMD "invites every silicon vendor" to participate in this open-source system firmware endeavor.
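The appeal of an open silicon-initialization library is that any host firmware (coreboot, an EDK2-based UEFI build, and so on) can drive it through a small, stable call interface at fixed points during boot. As a rough illustration of that pattern — the `sil_*` names below are invented for this sketch, not openSIL's actual API — the host-firmware side might look like:

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical sketch: host firmware calls into a silicon-init library at
 * discrete phases ("timepoints") during boot. All sil_* names here are
 * illustrative, not the real openSIL interface. */

typedef int (*sil_phase_fn)(void);

static int sil_memory_init(void) { puts("memory controller/DRAM init"); return 0; }
static int sil_fabric_init(void) { puts("data fabric setup");           return 0; }
static int sil_pcie_init(void)   { puts("PCIe link training");          return 0; }

static const sil_phase_fn sil_timepoints[] = {
    sil_memory_init,
    sil_fabric_init,
    sil_pcie_init,
};

/* Host firmware drives each phase in order; returns 0 on success,
 * -1 as soon as any phase fails. */
int sil_run_all_timepoints(void)
{
    for (size_t i = 0; i < sizeof sil_timepoints / sizeof sil_timepoints[0]; i++)
        if (sil_timepoints[i]() != 0)
            return -1;
    return 0;
}
```

Keeping the library behind a narrow function-pointer boundary like this is what lets the same silicon-init code serve coreboot, UEFI, and other hosts — the point of an open specification.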

AMD

Report: Microsoft is Partnering with AMD on Athena AI Chipset 17

According to Bloomberg (paywalled), Microsoft is helping finance AMD's expansion into AI chips. Meanwhile, AMD is working with Microsoft to create an in-house chipset, codenamed Athena, for the software giant's data centers. Paul Thurrott reports: Athena is designed as a cost-effective replacement for AI chipsets from Nvidia, which currently dominates this market. And it comes with newfound urgency, as running Microsoft's ChatGPT-powered Bing chatbot workloads on third-party chips is incredibly expensive. With Microsoft planning to expand its use of AI dramatically this year, it needs a cheaper alternative.

Microsoft's secretive hardware efforts also come amid a period of Big Tech layoffs. But the firm's new Microsoft Silicon business, led by former Intel executive Rani Borkar, is growing and now has almost 1,000 employees, several hundred of whom are working on Athena. The software giant has invested about $2 billion in this effort so far, Bloomberg says. (And that's above the $11 billion it's invested in ChatGPT maker OpenAI.) Bloomberg also says that Microsoft intends to keep partnering with Nvidia too, and that it will continue buying Nvidia chipsets as needed.

AMD

AMD Posts First Loss in Years as Consumer Chip Sales Plummet by 65% (tomshardware.com) 44

AMD has posted its first quarterly loss in years due to weak sales of processors for client PCs. From a report: AMD's sales of chips for client PCs dropped 65%. AMD's data center and gaming hardware shipments remained strong and were flat year-over-year, which is quite an achievement given the slowing purchases of servers and weak demand for gaming hardware among consumers. While AMD's management expects the CPU market to start recovering in the second half of the year, the company's outlook for Q2 is not that optimistic.

In the first quarter of FY2023, AMD's revenue amounted to $5.353 billion, which is a 9% decrease compared to the same period in the previous year and a slight decrease compared to the previous quarter. Unfortunately, the company slipped into the red with a $139 million net loss as compared to a $786 million net income in Q1 FY2022. Additionally, AMD's gross margin decreased from 48% in Q1 FY2022 to 44% in Q1 FY2023. [...] AMD's results were a mixed bag as all of the company's business units except its Client Computing business remained more or less flat compared to the first quarter of FY2022, and even remained profitable. In fact, AMD's Data Center unit even managed to modestly increase its revenue, yet its profitability declined.

Open Source

Linux Kernel 6.3 Released (zdnet.com) 16

An anonymous reader quotes a report from ZDNet, written by Steven Vaughan-Nichols: The latest Linux kernel is out with a slew of new features -- and, for once, this release has been nice and easy. [...] Speaking of Rust, everyone's favorite memory-safe language, the new kernel comes with user-mode Linux support for Rust code. Miguel Ojeda, the Linux kernel developer who's led the efforts to bring Rust to Linux, said the additions mean we're "getting closer to a point where the first Rust modules can be upstreamed."

Other features in the Linux 6.3 kernel include support and enablement for upcoming and yet-to-be-released Intel and AMD CPUs and graphics hardware. While these updates will primarily benefit future hardware, several changes in this release directly impact today's users' day-to-day experience. The kernel now supports AMD's automatic Indirect Branch Restricted Speculation (IBRS) feature for Spectre mitigation, providing a less performance-intensive alternative to retpoline-based speculative-execution mitigations.
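The kernel selects its Spectre v2 mitigation at boot and reports the choice to userspace through sysfs, in /sys/devices/system/cpu/vulnerabilities/spectre_v2. As a hedged illustration (the enum and helper below are this example's own, not kernel code; the sample strings follow the kernel's usual reporting format), a program could classify the reported mitigation like so:

```c
#include <string.h>

/* Toy classifier for the spectre_v2 mitigation string the kernel exposes in
 * /sys/devices/system/cpu/vulnerabilities/spectre_v2. The enum and function
 * are illustrative, not a kernel API. */
enum spectre_v2_kind { SPECTRE_V2_OTHER, SPECTRE_V2_RETPOLINE, SPECTRE_V2_IBRS };

enum spectre_v2_kind classify_spectre_v2(const char *status)
{
    /* Check for retpolines first: retpoline kernels often also report
     * "IBRS_FW" (IBRS for firmware calls only), which would otherwise
     * match the IBRS test below. */
    if (strstr(status, "Retpoline"))
        return SPECTRE_V2_RETPOLINE;
    if (strstr(status, "IBRS"))
        return SPECTRE_V2_IBRS;
    return SPECTRE_V2_OTHER;
}
```

On hardware with automatic IBRS, the reported string mentions IBRS rather than retpolines, which is the cheaper path the article describes.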

Linux 6.3 also includes new power management drivers for ARM and RISC-V architectures. RISC-V has gained support for accelerated string functions via the Zbb bit manipulation extension, while ARM received support for Scalable Matrix Extension 2 (SME2) instructions. For filesystems, Linux 6.3 brings AES-SHA2-based encryption support for NFS, optimizations for EXT4 direct I/O performance, low-latency decompression for EROFS, and a faster Btrfs file-system driver. Bottom line: many file operations will be a bit more secure and faster.
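The Zbb speed-up for string functions comes from instructions like orc.b, which let routines such as strlen test a whole register for a zero byte at once instead of checking one byte at a time. A portable C sketch of the same zero-byte trick, for illustration only:

```c
#include <stdint.h>

/* Classic SWAR trick: the expression sets the high bit of a byte lane iff
 * that byte is zero (reliably so for the first zero byte). Instructions
 * like RISC-V Zbb's orc.b make this kind of whole-word byte test cheap. */
static int has_zero_byte(uint64_t w)
{
    return ((w - 0x0101010101010101ULL) & ~w & 0x8080808080808080ULL) != 0;
}

/* Index (0-7) of the first zero byte in a little-endian word, or -1 if
 * there is none. Byte lanes below the first zero byte never produce false
 * positives, so scanning up from the low byte is safe. */
int first_zero_byte(uint64_t w)
{
    uint64_t mask = (w - 0x0101010101010101ULL) & ~w & 0x8080808080808080ULL;
    int i;
    if (mask == 0)
        return -1;
    for (i = 0; !(mask & 0x80); i++)
        mask >>= 8;
    return i;
}
```

A strlen built on this primitive examines eight bytes per iteration; the Zbb-accelerated kernel routines apply the same idea with a single instruction per word.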

For gamers, the new kernel provides a native Steam Deck controller interface in HID. It also includes compatibility for the Logitech G923 Xbox edition racing wheel and improvements to the 8BitDo Pro 2 wired game controllers. Who says you can't game on Linux? Single-board computers, such as the Banana Pi R3, BPI-M2 Pro, and Orange Pi R1 Plus, also benefit from updated drivers in this release. There's also support for more Wi-Fi adapters and chipsets. These include: Realtek RTL8188EU Wi-Fi adapter support; Qualcomm Wi-Fi 7 wireless chipset support; and Ethernet support for NVIDIA BlueField 3 DPU. For users dealing with complex networks that mix old-school and modern infrastructure, the new kernel's multi-path TCP (MPTCP) support can now handle mixed IPv4 and IPv6 flows.

Linux 6.3 is available from kernel.org for anyone who wants to compile the kernel themselves.
