Intel

Intel To Invest $25 Billion in Israel After Winning Incentives (bloomberg.com) 150

Intel confirmed it will invest a total of $25 billion in Israel after securing $3.2 billion in incentives from the country's government. From a report: The outlay, announced by the Israeli government in June and unconfirmed by Intel until now, will go toward an expansion of the company's wafer fabrication site in Kiryat Gat, south of Tel Aviv. The incentives amount to 12.8% of Intel's planned investment.

"The expansion plan for the Kiryat Gat site is an important part of Intel's efforts to foster a more resilient global supply chain, alongside the company's ongoing and planned manufacturing investments in Europe and the US," Intel said in a statement Tuesday. Intel is among chipmakers diversifying manufacturing outside of Asia, which dominates chip production. The semiconductor pioneer is trying to restore its technological heft after being overtaken by rivals including Nvidia and Taiwan Semiconductor Manufacturing Co.

AMD

Ryzen vs. Meteor Lake: AMD's AI Often Wins, Even On Intel's Hand-Picked Tests (tomshardware.com) 6

Velcroman1 writes: Intel's new generation of "Meteor Lake" mobile CPUs herald a new age of "AI PCs," computers that can handle inference workloads such as generating images or transcribing audio without an Internet connection. Officially named "Intel Core Ultra" processors, the chips are the first to feature an NPU (neural processing unit) that's purpose-built to handle AI tasks. But there are few ways to actually test this feature at present: software will need to be rewritten to specifically direct operations at the NPU.

Intel has steered testers toward its Open Visual Inference and Neural Network Optimization (OpenVINO) AI toolkit. With those benchmarks, Tom's Hardware tested the new Intel chips against AMD -- and surprisingly, AMD chips often came out on top, even on these hand-selected benchmarks. Clearly, optimization will take some time!
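For readers wondering what "directing operations at the NPU" looks like in practice, here is a minimal sketch using OpenVINO's Python API. The model path is a placeholder, and the "NPU" device string assumes a Meteor Lake machine with current drivers; this is an illustration, not Intel's benchmark code.

    import openvino as ov

    core = ov.Core()
    # On a Meteor Lake laptop with up-to-date drivers this typically
    # reports something like ['CPU', 'GPU', 'NPU'].
    print(core.available_devices)

    # "model.xml" is a placeholder for any OpenVINO IR model on disk.
    model = core.read_model("model.xml")

    # The same model can be compiled for different devices; targeting
    # "NPU" offloads inference to the new AI engine, while "CPU" runs
    # the identical workload on the processor cores for comparison.
    compiled_npu = core.compile_model(model, "NPU")
    compiled_cpu = core.compile_model(model, "CPU")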

Intel

Intel Unveils New AI Chip To Compete With Nvidia and AMD (cnbc.com) 13

Intel unveiled new computer chips on Thursday, including Gaudi3, an AI chip for generative AI software. Gaudi3 will launch next year and will compete with rival chips from Nvidia and AMD that power large, power-hungry AI models. From a report: The most prominent AI models, like OpenAI's ChatGPT, run on Nvidia GPUs in the cloud. It's one reason Nvidia stock has been up nearly 230% year-to-date while Intel shares are up 68%. And it's why companies like AMD and now Intel have announced chips that they hope will attract AI companies away from Nvidia's dominant position in the market.

While the company was light on details, Gaudi3 will compete with Nvidia's H100, the main choice among companies that build huge farms of the chips to power AI applications, and AMD's forthcoming MI300X, when it starts shipping to customers in 2024. Intel has been building Gaudi chips since 2019, when it bought a chip developer called Habana Labs.

Intel

Intel Core Ultra Processors Debut for AI-powered PCs (venturebeat.com) 27

Intel launched its Intel Core Ultra processors for AI-powered PCs at its AI Everywhere event today. From a report: The big chip maker said these processors spearhead a new era in computing, offering unparalleled power efficiency, superior compute and graphics performance, and an unprecedented AI PC experience to mobile platforms and edge devices. Available immediately, these processors will be used in over 230 AI PCs coming from renowned partners like Acer, ASUS, Dell, Gigabyte, and more.

The Intel Core Ultra processors represent an architectural shift for Intel, marking its largest design change in 40 years. These processors harness the Intel 4 process technology and Foveros 3D advanced packaging, leveraging leading-edge processes for optimal performance and capabilities. The chips combine a performance-core (P-core) architecture with improved instructions per cycle (IPC), Efficient-cores (E-cores), and low-power Efficient-cores (LP E-cores). They deliver up to 11% more compute power compared to competitors, ensuring superior CPU performance for ultrathin PCs.

Features of Intel Core Ultra
Intel Arc GPU: Featuring up to eight Xe-cores, this GPU incorporates AI-based Xe Super Sampling, offering double the graphics performance compared to prior generations. It includes support for modern graphics features like ray tracing, mesh shading, AV1 encode and decode, HDMI 2.1, and DisplayPort 2.1 20G.
AI Boost NPU: Intel's latest NPU, Intel AI Boost, focuses on low-power, long-running AI tasks, augmenting AI processing on the CPU and GPU, offering 2.5x better power efficiency compared to its predecessors.
Advanced Performance Capabilities: With up to 16 cores, 22 threads, and Intel Thread Director for optimized workload scheduling, these processors boast a maximum turbo frequency of 5.1 GHz and support for up to 96 GB DDR5 memory capacity.
Cutting-edge Connectivity: Integrated Intel Wi-Fi 6E and support for discrete Intel Wi-Fi 7 deliver blazing wireless speeds, while Thunderbolt 4 ensures connectivity to multiple 4K monitors and fast storage with speeds of 40 Gbps.
Enhanced AI Performance: OpenVINO toolkits, ONNX, and ONNX Runtime offer streamlined workflow, automatic device detection, and enhanced AI performance.
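As a rough sketch of the ONNX Runtime path named in the last bullet, the snippet below assumes an OpenVINO-enabled onnxruntime build and a placeholder model file; the providers actually available vary by installation.

    import onnxruntime as ort

    # Lists the execution providers this onnxruntime build supports;
    # the OpenVINO provider only appears in OpenVINO-enabled builds.
    print(ort.get_available_providers())

    # Prefer the OpenVINO provider when present, falling back to the
    # default CPU provider otherwise. "model.onnx" is a placeholder.
    session = ort.InferenceSession(
        "model.onnx",
        providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
    )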

Portables (Apple)

First AirJet-Equipped Mini PC Tested (tomshardware.com) 49

An anonymous reader quotes a report from Tom's Hardware: Zotac's ZBox PI430AJ mini PC is the first computer to use Frore Systems' fanless AirJet cooler, and as tested by HKEPC, it's not a gimmick. Two AirJet coolers were able to keep Intel's N300 CPU below 70 degrees Celsius under load, allowing for an incredibly thin mini PC with impressive performance. AirJet is the only active cooling solution for PCs that doesn't use fans; even so-called liquid coolers still use fans. Instead of using fans to push and pull air, AirJet uses ultrasonic waves, which have a variety of benefits: lower power consumption, near-silent operation, and a much thinner and smaller size. AirJet coolers can also do double duty as both intake and exhaust vents, whereas a fan can only do intake or exhaust, not both.

Equipped with two of the smaller AirJet Mini models, which are rated to cool 5.25 watts of heat each, the ZBox PI430AJ is just 23.7mm thick, or 0.93 inches. The mini PC's processor is Intel's low-end N300 Atom CPU with a TDP of 7 watts, and after HKEPC put the ZBox through a half-hour-long stress test, the N300 only peaked at 67 C. That's all thanks to AirJet being so thin and being able to both intake and exhaust air. For comparison, Beelink's Mini S12 Pro mini PC with the lower-power N100, which has a TDP of 6 watts, is 1.54 inches thick (66% thicker than the ZBox PI430AJ). Traditional fan-equipped coolers just can't match AirJet coolers in size, which is perhaps AirJet's biggest advantage.
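Those figures are easy to sanity-check; a quick back-of-the-envelope calculation using only numbers from the review:

    # Cooling capacity vs. CPU heat output, per the review's figures.
    airjet_mini_w = 5.25    # watts of heat each AirJet Mini is rated to move
    coolers = 2
    n300_tdp_w = 7.0        # Intel N300 TDP in watts

    capacity_w = airjet_mini_w * coolers
    print(f"capacity: {capacity_w} W, headroom: {capacity_w - n300_tdp_w} W")

    # Thickness comparison: 1.54 in / 0.93 in is about 1.66x (~66% thicker).
    print(f"Beelink vs. Zotac thickness ratio: {1.54 / 0.93:.2f}x")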
Last month, engineers from Frore Systems integrated the AirJet into an M2-based Apple MacBook Air. "With proper cooling, the relatively inexpensive laptop matched the performance of a more expensive MacBook Pro based on the same processor," reports Tom's Hardware.
Bug

Nearly Every Windows and Linux Device Vulnerable To New LogoFAIL Firmware Attack (arstechnica.com) 69

"Researchers have identified a large number of bugs to do with the processing of images at boot time," writes longtime Slashdot reader jd. "This allows malicious code to be installed undetectably (since the image doesn't have to pass any validation checks) by appending it to the image. None of the current secure boot mechanisms are capable of blocking the attack." Ars Technica reports: LogoFAIL is a constellation of two dozen newly discovered vulnerabilities that have lurked for years, if not decades, in Unified Extensible Firmware Interfaces responsible for booting modern devices that run Windows or Linux. The vulnerabilities are the product of almost a year's worth of work by Binarly, a firm that helps customers identify and secure vulnerable firmware. The vulnerabilities are the subject of a coordinated mass disclosure released Wednesday. The participating companies comprise nearly the entirety of the x64 and ARM CPU ecosystem, starting with UEFI suppliers AMI, Insyde, and Phoenix (sometimes still called IBVs or independent BIOS vendors); device manufacturers such as Lenovo, Dell, and HP; and the makers of the CPUs that go inside the devices, usually Intel, AMD or designers of ARM CPUs. The researchers unveiled the attack on Wednesday at the Black Hat Security Conference in London.

As its name suggests, LogoFAIL involves logos, specifically those of the hardware seller that are displayed on the device screen early in the boot process, while the UEFI is still running. Image parsers in UEFIs from all three major IBVs are riddled with roughly a dozen critical vulnerabilities that have gone unnoticed until now. By replacing the legitimate logo images with identical-looking ones that have been specially crafted to exploit these bugs, LogoFAIL makes it possible to execute malicious code at the most sensitive stage of the boot process, which is known as DXE, short for Driver Execution Environment. "Once arbitrary code execution is achieved during the DXE phase, it's game over for platform security," researchers from Binarly, the security firm that discovered the vulnerabilities, wrote in a whitepaper. "From this stage, we have full control over the memory and the disk of the target device, thus including the operating system that will be started." From there, LogoFAIL can deliver a second-stage payload that drops an executable onto the hard drive before the main OS has even started. The following video demonstrates a proof-of-concept exploit created by the researchers. The infected device -- a Gen 2 Lenovo ThinkCentre M70s running an 11th-Gen Intel Core with a UEFI released in June -- runs standard firmware defenses, including Secure Boot and Intel Boot Guard.
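Binarly hasn't published the flawed firmware code itself, but the class of bug described, a parser that trusts attacker-controlled header fields, can be sketched in a few lines. The Python below is purely illustrative and is not the actual UEFI code:

    import struct

    def parse_logo(data: bytes) -> bytearray:
        # Naive parser in the style of the flaws described: it trusts
        # the width/height fields in the header without validating them
        # against the actual payload size.
        width, height = struct.unpack_from("<II", data, 0)
        pixels = data[8:]
        out = bytearray(width * height)     # attacker controls this size
        for i in range(width * height):
            out[i] = pixels[i]              # reads far past the real payload
        return out

    # A crafted "logo" whose header claims many more pixels than exist.
    crafted = struct.pack("<II", 1000, 1000) + b"\x00" * 16
    try:
        parse_logo(crafted)
    except IndexError:
        # Python raises an error; in C firmware the same mismatch becomes
        # an out-of-bounds memory access, and from there code execution.
        print("header/payload mismatch")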
LogoFAIL vulnerabilities are tracked under the following designations: CVE-2023-5058, CVE-2023-39538, CVE-2023-39539, and CVE-2023-40238. However, this list is currently incomplete.

"A non-exhaustive list of companies releasing advisories includes AMI (PDF), Insyde, Phoenix, and Lenovo," reports Ars. "People who want to know if a specific device is vulnerable should check with the manufacturer."

"The best way to prevent LogoFAIL attacks is to install the UEFI security updates that are being released as part of Wednesday's coordinated disclosure process. Those patches will be distributed by the manufacturer of the device or the motherboard running inside the device. It's also a good idea, when possible, to configure UEFIs to use multiple layers of defenses. Besides Secure Boot, this includes both Intel Boot Guard and, when available, Intel BIOS Guard. There are similar additional defenses available for devices running AMD or ARM CPUs."
Intel

Intel Calls AMD's Chips 'Snake Oil' (tomshardware.com) 189

Aaron Klotz, reporting for Tom's Hardware: Intel recently published a new playbook titled "Core Truths" that puts AMD under direct fire for using its older Zen 2 CPU architecture in its latest Ryzen 7000 mobile series product stack. Intel later removed the document, but we have the slides below. The playbook is designed to educate customers about AMD's product stack and even calls it "snake oil."

Intel's playbook specifically targets AMD's latest Ryzen 5 7520U, criticizing the fact that it features AMD's Zen 2 architecture from 2019 even though it sports a Ryzen 7000 series model name. Further on, the company accuses AMD of selling "half-truths" to unsuspecting customers, stressing that young students' education depends on the best CPU performance from the latest and greatest CPU technologies made today. To drive the point home, Intel illustrated the playbook with "snake oil" imagery and pictures of used-car salesmen.

The playbook also criticizes AMD's new naming scheme for its Ryzen 7000 series mobile products, quoting Ars Technica: "As a consumer, you're still intended to see the number 7 and think, 'Oh, this is new.'" Intel also published CPU benchmark comparisons of the 7520U against its 13th Gen Core i5-1335U to back up its points. Unsurprisingly, the 1335U was substantially faster than its Zen 2 counterpart.

Hardware

Apple's Chip Lab: Now 15 Years Old With Thousands of Engineers (cnbc.com) 68

"As of this year, all new Mac computers are powered by Apple's own silicon, ending the company's 15-plus years of reliance on Intel," according to a new report from CNBC.

"Apple's silicon team has grown to thousands of engineers working across labs all over the world, including in Israel, Germany, Austria, the U.K. and Japan. Within the U.S., the company has facilities in Silicon Valley, San Diego and Austin, Texas..." The latest A17 Pro announced in the iPhone 15 Pro and Pro Max in September enables major leaps in features like computational photography and advanced rendering for gaming. "It was actually the biggest redesign in GPU architecture and Apple silicon history," said Kaiann Drance, who leads marketing for the iPhone. "We have hardware accelerated ray tracing for the first time. And we have mesh shading acceleration, which allows game developers to create some really stunning visual effects." That's led to the development of iPhone-native versions from Ubisoft's Assassin's Creed Mirage, The Division Resurgence and Capcom's Resident Evil 4.

Apple says the A17 Pro is the first 3-nanometer chip to ship at high volume. "The reason we use 3-nanometer is it gives us the ability to pack more transistors in a given dimension. That is important for the product and much better power efficiency," said the head of Apple silicon, Johny Srouji. "Even though we're not a chip company, we are leading the industry for a reason." Apple's leap to 3-nanometer continued with the M3 chips for Mac computers, announced in October. Apple says the M3 enables features like 22-hour battery life and, similar to the A17 Pro, boosted graphics performance...

In a major shift for the semiconductor industry, Apple turned away from using Intel's PC processors in 2020, switching to its own M1 chip inside the MacBook Air and other Macs. "It was almost like the laws of physics had changed," said John Ternus, Apple's hardware engineering chief. "All of a sudden we could build a MacBook Air that's incredibly thin and light, has no fan, 18 hours of battery life, and outperformed the MacBook Pro that we had just been shipping." He said the newest MacBook Pro with Apple's most advanced chip, the M3 Max, "is 11 times faster than the fastest Intel MacBook Pro we were making. And we were shipping that just two years ago." Intel processors are based on x86 architecture, the traditional choice for PC makers, with a lot of software developed for it. Apple bases its processors on rival Arm architecture, known for using less power and helping laptop batteries last longer.

Apple's M1 in 2020 was a proving point for Arm-based processors in high-end computers, with other big names like Qualcomm — and reportedly AMD and Nvidia — also developing Arm-based PC processors. In September, Apple extended its deal with Arm through at least 2040.

Since Apple first debuted its homegrown semiconductors in 2010 in the iPhone 4, other companies started pursuing their own custom semiconductor development, including Amazon, Google, Microsoft and Tesla.

CNBC reports that Apple is also reportedly working on its own Wi-Fi and Bluetooth chip. Apple's Srouji wouldn't comment on "future technologies and products" but told CNBC "we care about cellular, and we have teams enabling that."
United States

Nvidia CEO Says US Will Take Years To Achieve Chip Independence (bloomberg.com) 121

Nvidia Chief Executive Officer Jensen Huang, who runs the semiconductor industry's most valuable company, said the US is as much as 20 years away from breaking its dependence on overseas chipmaking. From a report: Huang, speaking at the New York Times's DealBook conference in New York, explained how his company's products rely on myriad components that come from different parts of the world -- not just Taiwan, where the most important elements are manufactured. "We are somewhere between a decade and two decades away from supply chain independence," he said. "It's not a really practical thing for a decade or two."

The outlook suggests there's a long road ahead for a key Biden administration objective -- bringing more of the chipmaking industry to US shores. The president has championed bipartisan legislation to support the building of manufacturing facilities here. And many of the biggest companies are planning to expand their US operations. That includes Taiwan Semiconductor Manufacturing Co., Nvidia's top manufacturing partner, as well as Samsung and Intel.

Businesses

Nvidia Beats TSMC and Intel To Take Top Chip Industry Revenue Crown For the First Time (tomshardware.com) 21

Nvidia has swung from fourth to first place in an assessment of chip industry revenue published today. From a report: Taipei-based financial analyst Dan Nystedt noted that the green team took the revenue crown from contract chip-making titan TSMC as Q3 financials came into view. Those keeping an eye on the world of investing and finance will have seen our report about Nvidia's earnings explosion, evidenced by the firm's publication of its Q3 FY24 results.

Nvidia charted an amazing performance, with a headlining $18.12 billion in revenue for the quarter, up 206% year-over-year (YoY). The firm's profits were also through the roof, and Nystedt posted a graph showing Nvidia elbowed past its chip industry rivals by this metric in Q3 2023, too. Nvidia's advance is supported by multiple highly successful operating segments, which have provided a multiplicative effect on its revenue and income. Again, we saw clear evidence of a seismic shift in revenue, with the latest set of financials shared with investors earlier this week.

Microsoft

Microsoft Celebrates 20th Anniversary of 'Patch Tuesday' (microsoft.com) 17

This week the Microsoft Security Response Center celebrated the 20th anniversary of Patch Tuesday updates.

In a blog post they call the updates "an initiative that has become a cornerstone of the IT world's approach to cybersecurity." Originating from the Trustworthy Computing memo by Bill Gates in 2002, our unwavering commitment to protecting customers continues to this day and is reflected in Microsoft's Secure Future Initiative announced this month. Each month, we deliver security updates on the second Tuesday, underscoring our pledge to cyber defense. As we commemorate this milestone, it's worth exploring the inception of Patch Tuesday and its evolution through the years, demonstrating our adaptability to new technology and emerging cyber threats...

Before this unified approach, our security updates were sporadic, posing significant challenges for IT professionals and organizations in deploying critical patches in a timely manner. Senior leaders of the Microsoft Security Response Center (MSRC) at the time spearheaded the idea of a predictable schedule for patch releases, shifting from a "ship when ready" model to a regular weekly, and eventually, monthly cadence...

In addition to consolidating patch releases into a monthly schedule, we also organized the security update release notes into a consolidated location. Prior to this change, customers had to navigate through various Knowledge Base articles, making it difficult to find the information they needed to secure themselves. Recognizing the need for clarity and convenience, we provided a comprehensive overview of monthly releases. This change was pivotal at a time when not all updates were delivered through Windows Update, and customers needed a reliable source to find essential updates for various products.

Patch Tuesday has also influenced other vendors in the software and hardware spaces, leading to a broader industry-wide practice of synchronized security updates. This collaborative approach, especially with hardware vendors such as AMD and Intel, aims to provide a united front against vulnerabilities, enhancing the overall security posture of our ecosystems. While the volume and complexity of updates have increased, so has the collaboration with the security community. Patch Tuesday has fostered better relationships with security researchers, leading to more responsible vulnerability disclosures and quicker responses to emerging threats...

As the landscape of security threats evolves, so does our strategy, but our core mission of safeguarding our customers remains unchanged.
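As an aside, the second-Tuesday cadence described above is simple to compute; a minimal sketch:

    import calendar
    from datetime import date

    def patch_tuesday(year: int, month: int) -> date:
        """Return the second Tuesday of the given month."""
        first_weekday = date(year, month, 1).weekday()  # Monday == 0
        days_to_first_tuesday = (calendar.TUESDAY - first_weekday) % 7
        return date(year, month, 1 + days_to_first_tuesday + 7)

    print(patch_tuesday(2023, 10))  # 2023-10-10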

Supercomputing

Linux Foundation Announces Intent to Form 'High Performance Software Foundation' (linuxfoundation.org) 5

This week the Linux Foundation "announced the intention to form the High Performance Software Foundation."

"Through a series of technical projects, the High Performance Software Foundation aims to build, promote, and advance a portable software stack for high performance computing by increasing adoption, lowering barriers to contribution, and supporting development efforts." As use of high performance computing becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation intends to leverage investments made by the United States Department of Energy's Exascale Computing Project, the EuroHPC Joint Undertaking, and other international projects in accelerated high performance computing to exploit the performance of this diversifying set of architectures. As an umbrella project under the Linux Foundation, HPSF intends to provide a neutral space for pivotal projects in the high performance software ecosystem, enabling industry, academia, and government entities to collaborate together on the scientific software stack.

The High Performance Software Foundation already benefits from strong support across the high performance computing landscape, including leading companies and organizations like Amazon Web Services, Argonne National Laboratory, CEA, CIQ, Hewlett Packard Enterprise, Intel, Kitware, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, NVIDIA, Oak Ridge National Laboratory, Sandia National Laboratory, and the University of Oregon.

Its first open source technical projects include:
  • Spack: the high performance computing package manager.
  • Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way.
  • AMReX: a performance-portable software framework designed to accelerate solving partial differential equations on block-structured, adaptively refined meshes.
  • WarpX: a performance-portable Particle-in-Cell code with advanced algorithms that won the 2022 Gordon Bell Prize.
  • Trilinos: a collection of reusable scientific software libraries, known in particular for linear, non-linear, and transient solvers, as well as optimization and uncertainty quantification.
  • Apptainer: a container system and image format specifically designed for secure high-performance computing.
  • VTK-m: a toolkit of scientific visualization algorithms for accelerator architectures.
  • HPCToolkit: performance measurement and analysis tools for computers ranging from laptops to the world's largest GPU-accelerated supercomputers.
  • E4S: the Extreme-scale Scientific Software Stack.
  • Charliecloud: a high performance computing-tailored, lightweight, fully unprivileged container implementation.

Linux

Canonical Intros Microcloud: Simple, Free, On-prem Linux Clustering (theregister.com) 16

Canonical hosted an amusingly failure-filled demo of its new easy-to-install, Ubuntu-powered tool for building small-to-medium scale, on-premises high-availability clusters, Microcloud, at an event in London yesterday. From a report: The intro to the talk leaned heavily on Canonical's looming 20th anniversary, and with good reason. Ubuntu has carved out a substantial slice of the Linux market for itself on the basis of being easier to use than most of its rivals, at no cost -- something that many Linux players still seem not to fully comprehend. The presentation was as buzzword-heavy as one might expect, and it's also extensively based on Canonical's in-house tech, such as the LXD containervisor, Snap packaging, and, optionally, the Ubuntu Core snap-based immutable distro. (The only missing buzzword didn't crop up until the Q&A session, and we were pleased by its absence: it's not built on and doesn't use Kubernetes, but you can run Kubernetes on it if you wish.)

We're certain this is going to turn off or alienate a lot of the more fundamentalist Penguinistas, but we are equally sure that Canonical won't care. In the immortal words of Kevin Smith, it's not for critics. Microcloud combines several existing bits of off-the-shelf FOSS tech in order to make it easy to link from three to 50 Ubuntu machines into an in-house, private high-availability cluster, with live migration and automatic failover. It uses its own LXD containervisor to manage nodes and workloads, Ceph for distributed storage, OpenZFS for local storage, and OVN to virtualize the cluster interconnect. All the tools are packaged as snaps. It supports both x86-64 and Arm64 nodes, including Raspberry Pi kit, and clusters can mix both architectures. The event included several demonstrations using an on-stage cluster of three ODROID machines with "Intel N6005" processors, so we reckon they were ODROID H3+ units -- which we suspect the company chose because of their dual Ethernet connections.

Network

Ethernet is Still Going Strong After 50 Years (ieee.org) 81

The technology has become the worldwide LAN standard. From a report: Ethernet became commercially available in 1980 and quickly grew into the industry LAN standard. To provide computer companies with a framework for the technology, in June 1983 Ethernet was adopted as a standard by the IEEE 802 Local Area Network Standards Committee. Currently, the IEEE 802 family consists of 67 published standards, with 49 projects under development. The committee works with standards agencies worldwide to publish certain IEEE 802 standards as international guidelines.

A plaque recognizing the technology is displayed outside the PARC facility. It reads: "Ethernet wired LAN was invented at Xerox Palo Alto Research Center (PARC) in 1973, inspired by the ALOHAnet packet radio network and the ARPANET. In 1980 Xerox, DEC, and Intel published a specification for 10 Mbps Ethernet over coaxial cable that became the IEEE 802.3-1985 Standard. Later augmented for higher speeds, and twisted-pair, optical, and wireless media, Ethernet became ubiquitous in home, commercial, industrial, and academic settings worldwide."

Bug

Intel Fixes High-Severity CPU Bug That Causes 'Very Strange Behavior' (arstechnica.com) 22

An anonymous reader quotes a report from Ars Technica: Intel on Tuesday pushed microcode updates to fix a high-severity CPU bug that has the potential to be maliciously exploited against cloud-based hosts. The flaw, affecting virtually all modern Intel CPUs, causes them to "enter a glitch state where the normal rules don't apply," Tavis Ormandy, one of several security researchers inside Google who discovered the bug, reported. Once triggered, the glitch state results in unexpected and potentially serious behavior, most notably system crashes that occur even when untrusted code is executed within a guest account of a virtual machine, which, under most cloud security models, is assumed to be safe from such faults. Escalation of privileges is also a possibility.

The bug, tracked under the common name Reptar and the designation CVE-2023-23583, is related to how affected CPUs manage prefixes, which change the behavior of instructions sent by running software. Intel x64 decoding generally allows redundant prefixes -- meaning those that don't make sense in a given context -- to be ignored without consequence. During testing in August, Ormandy noticed that the REX prefix was generating "unexpected results" when running on Intel CPUs that support a newer feature known as fast short repeat move, which was introduced in the Ice Lake architecture to fix microcoding bottlenecks. The unexpected behavior occurred when adding the redundant rex.r prefixes to the FSRM-optimized rep mov operation. [...]
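To make "redundant prefix" concrete, here is one standard x86-64 encoding of the pattern described: a rep movsb carrying a REX.R prefix that has no meaning for that instruction. This illustrates the encoding only; it is not Ormandy's exact proof of concept, and machine code of this shape should not be executed on unpatched hardware.

    # Byte-level view of a redundant-prefix instruction (illustrative).
    REP   = 0xF3  # legacy prefix selecting the FSRM-optimized "rep movsb" path
    REX_R = 0x44  # REX.R prefix; movsb has no register field, so it is redundant
    MOVSB = 0xA4  # opcode for movsb

    insn = bytes([REP, REX_R, MOVSB])
    print(insn.hex(" "))  # f3 44 a4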

Intel's official bulletin lists two classes of affected products: those that were already fixed and those that are fixed using microcode updates released Tuesday. An exhaustive list of affected CPUs is available here. As usual, the microcode updates will be available from device or motherboard manufacturers. While individuals aren't likely to face any immediate threat from this vulnerability, they should check with the manufacturer for a fix. People with expertise in x86 instruction and decoding should read Ormandy's post in its entirety. For everyone else, the most important takeaway is this: "However, we simply don't know if we can control the corruption precisely enough to achieve privilege escalation." That means it's not possible for people outside of Intel to know the true extent of the vulnerability severity. That said, anytime code running inside a virtual machine can crash the hypervisor the VM runs on, cloud providers like Google, Microsoft, Amazon, and others are going to immediately take notice.

AMD

AMD-Powered Frontier Remains Fastest Supercomputer in the World (tomshardware.com) 25

The Top500 organization released its semi-annual list of the fastest supercomputers in the world, with the AMD-powered Frontier supercomputer retaining its spot at the top of the list with 1.194 Exaflop/s (EFlop/s) of performance, fending off a half-scale 585.34 Petaflop/s (PFlop/s) submission from the Argonne National Laboratory's Intel-powered Aurora supercomputer. From a report: Argonne's submission, which only employs half of the Aurora system, lands at the second spot on the Top500, unseating Japan's Fugaku as the second-fastest supercomputer in the world. Intel also made inroads with 20 new supercomputers based on its Sapphire Rapids CPUs entering the list, but AMD's EPYC continues to take over the Top500 as it now powers 140 systems on the list -- a 39% year-over-year increase.

Intel and Argonne are currently still working to bring Aurora fully online for users in 2024. As such, the Aurora submission represented 10,624 Intel CPUs and 31,874 Intel GPUs working in concert to deliver 585.34 PFlop/s at a total of 24.69 megawatts (MW) of power. In contrast, AMD's Frontier holds the performance title at 1.194 EFlop/s, which is more than twice the performance of Aurora, while consuming a comparably miserly 22.70 MW (yes, that's less power for the full Frontier supercomputer than half of the Aurora system). Aurora did not land on the Green500, a list of the most power-efficient supercomputers, with this submission, but Frontier continues to hold eighth place on that list. However, Aurora is expected to eventually reach up to 2 EFlop/s of performance when it comes fully online. When complete, Aurora will have 21,248 Xeon Max CPUs and 63,744 Max Series 'Ponte Vecchio' GPUs spread across 166 racks and 10,624 compute blades, making it the largest known single deployment of GPUs in the world. The system leverages HPE Cray EX - Intel Exascale Compute Blades and uses HPE's Slingshot-11 networking interconnect.
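The efficiency gap follows directly from the figures above:

    # Power efficiency computed from the reported figures.
    systems = {
        "Frontier":      (1194.0, 22.70),   # PFlop/s, megawatts
        "Aurora (half)": (585.34, 24.69),
    }
    for name, (pflops, mw) in systems.items():
        # Convert PFlop/s to GFlop/s and MW to W, then divide.
        gflops_per_watt = (pflops * 1e6) / (mw * 1e6)
        print(f"{name}: {gflops_per_watt:.1f} GFlops/W")
    # Prints roughly 52.6 GFlops/W for Frontier vs. 23.7 for Aurora's half-system run.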

AI

Nvidia Upgrades Processor as Rivals Challenge Its AI Dominance (bloomberg.com) 39

Nvidia, the world's most valuable chipmaker, is updating its H100 artificial intelligence processor, adding more capabilities to a product that has fueled its dominance in the AI computing market. From a report: The new model, called the H200, will get the ability to use high-bandwidth memory, or HBM3e, allowing it to better cope with the large data sets needed for developing and implementing AI, Nvidia said Monday. Amazon's AWS, Alphabet's Google Cloud and Oracle's Cloud Infrastructure have all committed to using the new chip starting next year.

The current version of the Nvidia processor -- known as an AI accelerator -- is already in famously high demand. It's a prized commodity among technology heavyweights like Larry Ellison and Elon Musk, who boast about their ability to get their hands on the chip. But the product is facing more competition: AMD is bringing its rival MI300 chip to market in the fourth quarter, and Intel claims that its Gaudi 2 model is faster than the H100. With the new product, Nvidia is trying to keep up with the size of data sets used to create AI models and services, it said. Adding the enhanced memory capability will make the H200 much faster at bombarding software with data -- a process that trains AI to perform tasks such as recognizing images and speech.

Education

How 'Hour of Code' Will Teach Students About Issues with AI (code.org) 17

Launched in 2013, "Hour of Code" is an annual tradition organized by the education non-profit Code.org (which provides free coding lessons to schools). Its FAQ describes the December event for K-12 students as "a worldwide effort to celebrate computer science, starting with 1-hour coding activities," and over 100 million schoolkids have participated over the years.

This year's theme will be "Creativity With AI," and the "computer vision" lesson includes a short video (less than 7 minutes) featuring a Tesla Autopilot product manager from its computer vision team. "I build self-driving cars," they say in the video. "Any place where there can be resources used more efficiently I think is a place where technology can play a role. But of course one of the best, impactful ways of AI, I hope, is through self-driving cars." (The video then goes on to explain how lots of training data ultimately generates a statistical model, "which is just a fancy way of saying, a guessing machine.")

The 7-minute video is part of a larger lesson plan (with a total estimated time of 45 minutes) in which students tackle a fun story problem: if a sports arena's scoreboard is showing digital numbers, what series of patterns would a machine-vision system have to recognize to identify each digit? (Students are asked to collaborate in groups.) And it's just one of seven 45-minute lessons, each one accompanied by a short video. (The longest video is 7 minutes and 28 seconds, and all seven videos, if watched back-to-back, would run for about 31 minutes.)
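One way students might attack the scoreboard problem: model each digit as the set of lit segments on a standard seven-segment display and match against a lookup table. A minimal sketch, using the conventional a-g segment labels:

    # Segments: a=top, b=top-right, c=bottom-right, d=bottom,
    #           e=bottom-left, f=top-left, g=middle.
    DIGITS = {
        frozenset("abcdef"):  0,
        frozenset("bc"):      1,
        frozenset("abdeg"):   2,
        frozenset("abcdg"):   3,
        frozenset("bcfg"):    4,
        frozenset("acdfg"):   5,
        frozenset("acdefg"):  6,
        frozenset("abc"):     7,
        frozenset("abcdefg"): 8,
        frozenset("abcdfg"):  9,
    }

    def read_digit(lit_segments: str) -> int:
        """Map a set of lit segments (e.g. "bc") to the digit shown."""
        return DIGITS[frozenset(lit_segments)]

    print(read_digit("bc"))       # 1
    print(read_digit("abcdefg"))  # 8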

Not all the lessons involve actual coding, but the goal seems to be familiarizing students (starting at the 6th grade level) with the artificial intelligence of today and the issues it raises. The second-to-last lesson is titled "Algorithmic Bias," with a video including interviews with an ethicist at OpenAI and AI-focused professors from MIT and Stanford. And the last lesson, "Our AI Code of Ethics," challenges students to assemble documents and videos on AI-related "ethical pitfalls," and then pool their discoveries into an educational resource "for AI creators and legislators everywhere."

This year's installment is being billed as "the largest learning event in history." And it's scheduled for the week of December 4 so it coincides with "Computer Science Education Week" (a CS-education event launched in 2009 by the Association for Computing Machinery, with help from partners including Intel, Microsoft, Google, and the National Science Foundation).
AMD

Gaining on Intel? AMD Increases CPU Market Share In Desktops, Laptops, and Servers (techspot.com) 21

A report from TechSpot says AMD has recently increased its market share in the CPU sector for desktops, laptops, and servers: According to Mercury Research (via Tom's Hardware), AMD gained 5.8% unit share in desktops, 3.8% in laptops, and 5.8% in servers. In terms of revenue share, Team Red gained 4.1% in desktops, 5.1% in laptops, and 1.7% in servers. The report does not mention competitors by name, but the global PC industry has only one other major CPU supplier, Intel, which has a major stake in all of these market segments.

While Intel and AMD make x86 processors for PCs, Qualcomm offers Arm-based SoCs for Windows notebooks, but its market share is minuscule by comparison. So, while the report doesn't say anything about the market share of Intel or Qualcomm, it is fair to assume that most of AMD's gains came at Intel's expense.

Thanks to Slashdot reader jjslash for sharing the news.
Intel

Intel's 14th Gen 'Raptor Lake Refresh' CPUs Nail a Total of 50 World Records (tomshardware.com) 40

Velcroman1 writes: Overclocking master Allen 'Splave' Golibersuch surfaced on Tom's Hardware to detail his work with liquid nitrogen to set a slew of new world records with Intel's "Raptor Lake Refresh" CPUs. They include 15 world records with the Core i7-14700K and eight records with the Core i5-14600K, along with four records with the Core i9-14900K, spanning benchmarks from Cinebench to wPrime and H265.

"My top speeds were 7,730.11 MHz on all cores on the 14900K, 7,859.05 MHz on the 14600K and 7,600 MHz on the 14700K," writes Splave. "All of these achieved in Cinebench R23 while using Liquid Nitrogen cooling."
"At the end of a week of playing around, I broke the 8-core Cinebench record at a crazy 7.73 GHz on all cores," concludes Splave. "Overall, these CPUs potentially OC better than their predecessors and cost the same. It was a rather refreshing refresh, I would say."
