Ubuntu

Ubuntu 24.04 Yields a 20% Performance Advantage Over Windows 11 On Ryzen 7 Framework Laptop (phoronix.com) 53

Michael Larabel reports via Phoronix: With the Framework 16 laptop, one of the performance pieces I've been meaning to carry out has been seeing how Linux performs against Microsoft Windows 11 for this AMD Ryzen 7 7840HS powered modular/upgradeable laptop. Recently getting around to it in my benchmarking queue, I also compared the performance of Ubuntu 23.10 to the near-final Ubuntu 24.04 LTS on this laptop up against a fully-updated Microsoft Windows 11 installation. The Framework 16 review unit, as a reminder, was configured with the 8-core / 16-thread AMD Ryzen 7 7840HS Zen 4 SoC with Radeon RX 7700S graphics, a 512GB SN810 NVMe SSD, MediaTek MT7922 WiFi, and a 2560 x 1600 display.

In the few months of testing out the Framework 16, predominantly under Linux, it's been working out very well. The laptop also has a Windows 11 partition as shipped by Framework, and after updating that install, it made for an interesting comparison against the Ubuntu 23.10 and Ubuntu 24.04 performance. The same Framework 16 AMD laptop was used throughout all of the testing for looking at the out-of-the-box performance across Microsoft Windows 11, Ubuntu 23.10, and the near-final state of Ubuntu 24.04. [...]

Out of 101 benchmarks carried out on all three operating systems with the Framework 16 laptop, Ubuntu 24.04 was the fastest in 67% of those tests, the prior Ubuntu 23.10 led in 22% (typically with slim margins over 24.04), and Microsoft Windows 11 was the front-runner just 10% of the time... Taking the geomean of all 101 benchmark results, Ubuntu 23.10 was 16% faster than Microsoft Windows 11, while Ubuntu 24.04 enhanced the Ubuntu Linux performance by 3% to yield a 20% advantage over Windows 11 on this AMD Ryzen 7 7840HS laptop. Ubuntu 24.04 is looking very good in the performance department and will see its stable release next week.
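A quick aside on the math: the geometric mean (not the arithmetic mean) is the standard way to aggregate ratio-type benchmark results, and the two relative figures above compose multiplicatively. A minimal TypeScript sketch, with hypothetical per-benchmark ratios standing in for the actual Phoronix numbers:

```typescript
// Geometric mean of per-benchmark speedup ratios (Ubuntu / Windows,
// higher is better). The sample values are hypothetical placeholders.
function geomean(ratios: number[]): number {
  // Summing logs avoids overflow/underflow when multiplying many ratios.
  const logSum = ratios.reduce((acc, r) => acc + Math.log(r), 0);
  return Math.exp(logSum / ratios.length);
}

console.log(geomean([1.25, 0.98, 1.31, 1.1, 1.22]).toFixed(3));

// The article's figures compose multiplicatively: 23.10 at 1.16x Windows
// and 24.04 at 1.03x 23.10 gives 1.16 * 1.03 = 1.195, i.e. the ~20% quoted.
console.log((1.16 * 1.03).toFixed(3));
```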

PlayStation (Games)

Sony's PS5 Pro is Real and Developers Are Getting Ready For It (theverge.com) 25

Sony is getting ready to release a more powerful PS5 console, possibly by the end of this year. After reports of leaked PS5 Pro specifications surfaced recently, The Verge has obtained a full list of specs for the upcoming console. From the report: Sources familiar with Sony's plans tell me that developers are already being asked to ensure their games are compatible with this upcoming console, with a focus on improving ray tracing. Codenamed Trinity, the PlayStation 5 Pro model will include a more powerful GPU and a slightly faster CPU mode. All of Sony's changes point to a PS5 Pro that will be far more capable of rendering games with ray tracing enabled or hitting higher resolutions and frame rates in certain titles. Sony appears to be encouraging developers to use graphics features like ray tracing more with the PS5 Pro, with games able to use a "Trinity Enhanced" (PS5 Pro Enhanced) label if they "provide significant enhancements."

Sony expects GPU rendering on the PS5 Pro to be "about 45 percent faster than standard PlayStation 5," according to documents outlining the upcoming console. The PS5 Pro GPU will be larger and use faster system memory to help improve ray tracing in games. Sony is also using a "more powerful ray tracing architecture" in the PS5 Pro, with speeds up to three times better than the regular PS5. "Trinity is a high-end version of PlayStation 5," reads one document, with Sony indicating it will continue to sell the standard PS5 after this new model launches. Sony is expecting game developers to have a single package that will support both the PS5 and PS5 Pro consoles, with existing games able to be patched for higher performance.

Google

With Vids, Google Thinks It Has the Next Big Productivity Tool For Work (theverge.com) 56

For decades, work has revolved around documents, spreadsheets, and slide decks. Word, Excel, PowerPoint; Pages, Numbers, Keynote; Docs, Sheets, Slides. Now Google is proposing to add another to that triumvirate: an app called Vids that aims to help companies and consumers make collaborative, shareable video more easily than ever. From a report: Google Vids is very much not an app for making beautiful movies... or even not-that-beautiful movies. It's meant more for the sorts of things people do at work: make a pitch, update the team, explain a complicated concept. The main goal is to make everything as easy as possible, says Kristina Behr, Google's VP of product management for the Workspace collaboration apps. "The ethos that we have is, if you can make a slide, you can make a video in Vids," she says. "No video production is required."

Based on what I've seen of Vids so far, it appears to be roughly what you'd get if you transformed Google Slides into a video app. You collect assets from Drive and elsewhere and assemble them in order -- but unlike the column of slides in the Slides sidebar, you're putting together a left-to-right timeline for a video. Then, you can add voiceover or film yourself and edit it all into a finished video. A lot of those finished videos, I suspect, will look like recorded PowerPoint presentations or Meet calls or those now-ubiquitous training videos where a person talks to you from a small circle in the bottom corner while graphics play on the screen. There will be lots of clip art-heavy product promos, I'm sure. But in theory, you can make almost anything in Vids. You can either do all this by yourself or prompt Google's Gemini AI to make a first draft of the video for you. Gemini can build a storyboard; it can write a script; it can read your script aloud with text-to-speech; it can create images for you to use in the video. The app has a library of stock video and audio that users can add to their own Vids, too.

AMD

AMD To Open Source Micro Engine Scheduler Firmware For Radeon GPUs 23

AMD plans to document and open source its Micro Engine Scheduler (MES) firmware for GPUs, giving users more control over Radeon graphics cards. From a report: It's part of a larger effort AMD confirmed earlier this week about making its GPUs more open source at both the software level, with respect to the ROCm stack for GPU programming, and the hardware level. Details were scarce with this initial announcement, and the only concrete thing it introduced was a GitHub tracker.

However, yesterday AMD divulged more details, specifying that one of the things it would be making open source was the MES firmware for Radeon GPUs. AMD says it will be publishing documentation for MES around the end of May, and will then release the source code some time afterward. For George Hotz and his startup, Tiny Corp, this is great news. Throughout March, Hotz had agitated for AMD to make MES open source in order to fix issues he was experiencing with his RX 7900 XTX-powered AI server box. He had talked several times to AMD representatives, and even to the company's CEO, Lisa Su.

Software

Rickroll Meme Immortalized In Custom ASIC That Includes 164 Hardcoded Programs (theregister.com) 9

Matthew Connatser reports via The Register: An ASIC designed to display the infamous Rickroll meme is here, alongside 164 other assorted functions. The project is a product of Matthew Venn's Zero to ASIC Course, which offers prospective chip engineers the chance to "learn to design your own ASIC and get it fabricated." Since 2020, Zero to ASIC has accepted several designs that are incorporated into a single chip called a multi-project wafer (MPW), a cost-saving measure as making one chip for one design would be prohibitively expensive. Zero to ASIC has two series of chips: MPW and Tiny Tapeout. The MPW series usually includes just a handful of designs, such as the four on MPW8 submitted in January 2023. By contrast, the original Tiny Tapeout chip included 152 designs, and Tiny Tapeout 2 (which arrived last October) had 165, though that could be bumped up to 250. Of the 165 designs, one in particular may strike a chord: Design 145, or the Secret File, made by engineer and YouTuber Bitluni. His Secret File design for the Tiny Tapeout ASIC is designed to play a small part of Rick Astley's music video for Never Gonna Give You Up, also known as the Rickroll meme.

Bitluni was a late inclusion on the Tiny Tapeout 2 project, having been invited just three days before the submission deadline. He initially just made a persistence-of-vision controller, which was revised twice for a total of three designs. "At the end, I still had a few hours left, and I thought maybe I should also upload a meme project," Bitluni says in his video documenting his ASIC journey. His meme of choice was of course the Rickroll. One might even call it an Easter egg. However, given that the chip's area was divided among as many as 250 design slots, there wasn't a ton of room for both the graphics processor and the file it was supposed to render, a short GIF of the music video. Ultimately, this had to be shrunk from 217 kilobytes to less than half a kilobyte, making its output look similar to games on the Atari 2600 from 1977. Accessing the Rickroll rendering processor and other designs isn't simple. Bitluni created a custom circuit board to mount the Tiny Tapeout 2 chip, creating a device that could then be plugged into a motherboard capable of selecting specific designs on the ASIC. Unfortunately for Bitluni, his first PCB had a design error on it that he had to correct, but the revised version worked and was able to display the Rickroll GIF in hardware via a VGA port.
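Some back-of-the-envelope math makes the squeeze concrete. The frame size and bit depth below are illustrative assumptions, not Bitluni's actual encoding:

```typescript
// Storage budget for fitting video frames into an ASIC design slot.
// width/height/bitsPerPixel are assumed values for illustration only.
const budgetBytes = 512;   // "less than half a kilobyte"
const bitsPerPixel = 1;    // two-color output, hence the Atari 2600 look
const width = 48;
const height = 32;

const bytesPerFrame = (width * height * bitsPerPixel) / 8;
const framesInBudget = Math.floor(budgetBytes / bytesPerFrame);
const reduction = (217 * 1024) / budgetBytes;

console.log(`${bytesPerFrame} bytes/frame -> ${framesInBudget} frames fit`);
console.log(`~${reduction.toFixed(0)}x smaller than the 217 KB original`);
```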

News

Taiwan Quake Puts World's Most Advanced Chips at Risk (msn.com) 99

Taiwan's biggest earthquake in 25 years has disrupted production at the island's semiconductor companies, raising the possibility of fallout for the technology industry and perhaps the global economy. From a report: The potential repercussions are significant because of the critical role Taiwan plays in the manufacture of advanced chips, the foundation of technologies from artificial intelligence and smartphones to electric vehicles.

The 7.4-magnitude earthquake led to the collapse of at least 26 buildings, four deaths and the injury of 57 people across Taiwan, with much of the fallout still unknown. Taiwan Semiconductor Manufacturing Co., the world's largest maker of advanced chips for customers like Apple and Nvidia, halted some chipmaking machinery and evacuated staff. Local rival United Microelectronics also stopped machinery at some plants and evacuated certain facilities at its hubs of Hsinchu and Tainan.

Taiwan is the leading producer of the most advanced semiconductors in the world, including the processors at the heart of the latest iPhones and the Nvidia graphics chips that train AI models like OpenAI's ChatGPT. TSMC has become the tech linchpin because it's the most advanced in producing complex chips. Taiwan is the source of an estimated 80% to 90% of the highest-end chips -- there is effectively no substitute. Jan-Peter Kleinhans, director of the technology and geopolitics project at Berlin-based think tank Stiftung Neue Verantwortung, has called Taiwan "potentially the most critical single point of failure" in the semiconductor industry.

The Matrix

'Yes, We're All Trapped in the Matrix Now' (cnn.com) 185

"As you're reading this, you're more likely than not already inside 'The Matrix'," according to a headline on the front page of CNN.com this weekend.

It linked to an opinion piece by Rizwan Virk, founder of MIT's startup incubator/accelerator program. He's now a doctoral researcher at Arizona State University, where his profile identifies him as an "entrepreneur, video game pioneer, film producer, venture capitalist, computer scientist and bestselling author." Virk's 2019 book was titled "The Simulation Hypothesis: An MIT Computer Scientist Shows Why AI, Quantum Physics and Eastern Mystics Agree We Are in a Video Game." In the decades since [The Matrix was released], this idea, now called the simulation hypothesis, has come to be taken more seriously by technologists, scientists and philosophers. The main reason for this shift is the stunning improvements in computer graphics, virtual and augmented reality (VR and AR) and AI. Taking into account three developments just this year from Apple, Neuralink and OpenAI, I can now confidently state that as you are reading this article, you are more likely than not already inside a computer simulation. This is because the closer our technology gets to being able to build a fully interactive simulation like the Matrix, the more likely it is that someone has already built such a world, and we are simply inside their video game world...

In 2003, Oxford philosopher Nick Bostrom imagined that a "technologically mature" civilization could easily create a simulated world. The logic, then, is that if any civilization ever reaches this point, it would create not just one but a very large number of simulations (perhaps billions), each with billions of AI characters, simply by firing up more servers. With simulated worlds far outnumbering the "real" world, the likelihood that we are in a simulation would be significantly higher than not. It was this logic that prompted Elon Musk to state, a few years ago, that the chances that we are not in a simulation (i.e. that we are in base reality) were "one in billions." It's a theory that is difficult to prove — but difficult to disprove as well. Remember, the simulations would be so good that you wouldn't be able to tell the difference between a physical and a simulated world. Either the signals are being beamed directly into your brain, or we are simply AI characters inside the simulation...
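Written out, Bostrom's counting argument is short. As a sketch (the uniform prior over indistinguishable worlds is the argument's load-bearing assumption): with N simulated worlds for every one base reality,

```latex
P(\text{simulated}) = \frac{N}{N + 1},
\qquad
P(\text{base reality}) = \frac{1}{N + 1},
\qquad
\lim_{N \to \infty} P(\text{simulated}) = 1.
```

For N in the billions, the probability of being in base reality is on the order of one in billions, matching the phrasing attributed to Musk above.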

Recent developments in Silicon Valley show that we could get to the simulation point very soon. Just this year, Apple released its Vision Pro headset — a mixed-reality (including augmented and virtual reality) device that, if you believe initial reviews (ranging from mildly positive to ecstatic), heralds the beginning of a new era of spatial computing — or the merging of digital and physical worlds... we can see a direct line to being able to render a realistic fictional world around us... Just last month, OpenAI released Sora AI, which can now generate highly realistic videos that are pretty damn difficult to distinguish from real human videos. The fact that AI can so easily fool humans visually as well as through text (and according to some, has already passed the well-known Turing Test) shows that we are not far from fully immersive worlds populated with simulated AI characters that seem (and perhaps even think they are) conscious. Already, millions of humans are chatting with AI characters, and millions of dollars are pouring into making AI characters more realistic. Some of us may be players of the game, who have forgotten that we allowed the signal to be beamed into our brain, while others, like Neo or Morpheus or Trinity in "The Matrix," may have been plugged in at birth...

The fact that we are approaching the simulation point so soon in our future means that the likelihood that we are already inside someone else's advanced simulation goes up exponentially. Like Neo, we would be unable to tell the difference between a simulated and a physical world. Perhaps the most appropriate response to that is another of Reeves' most famous lines from that now-classic sci-fi film: Woah.

The author notes that the idea of being trapped inside a video game already "had been articulated by one of the Wachowskis' heroes, science fiction author Philip K. Dick, who stated, all the way back in 1977, 'We are living in a computer programmed reality.'" A few years ago, I interviewed Dick's wife Tessa and asked her what he would have thought of "The Matrix." She said his first reaction would have been that he loved it; however, his second reaction would most likely have been to call his agent to see if he could sue the filmmakers for stealing his ideas.

Graphics

Canva Acquires Affinity To Fill the Adobe-Sized Holes In Its Design Suite (theverge.com) 31

Web-based design platform Canva has acquired the Affinity creative software suite for an undisclosed sum, though Bloomberg reports that it's valued at "several hundred million [British] pounds." The Verge reports that the acquisition helps the company "[position] itself as a challenger to Adobe's grip over the digital design industry." From the report: Canva announced the deal on Tuesday, which gives the company ownership over Affinity Designer, Photo, and Publisher -- three popular creative applications for Windows, Mac, and iPad that provide similar features to Adobe's Illustrator, Photoshop, and InDesign software, respectively. [T]he acquisition makes sense as the Australian-based company tries to attract more creative professionals. As of January this year, Canva's design platform attracted around 170 million monthly global users. That's a lot of people who probably aren't using equivalent Adobe software like Express, but unlike Adobe, Canva doesn't have its own design applications that target creative professionals like illustrators, photographers, and video editors.

Affinity apps are used by over three million global users according to Canva -- that's a fraction of Adobe's user base, but Affinity shouldn't be underestimated here. The decision to make its Affinity applications a one-time-purchase with no ongoing subscription fees has earned it a loyal fanbase, especially with creatives who are actively looking for alternatives to Adobe's subscription-based design ecosystem. In an interview with the Sydney Morning Herald, Canva co-founder Cameron Adams said that Affinity applications will remain separate from Canva's platform, but that some small integrations should be expected over time. "Our product teams have already started chatting and we have some immediate plans for lightweight integration, but we think the products themselves will always be separate," said Adams.

Open Source

OpenTTD (Unofficial Remake of 'Transport Tycoon Deluxe' Game) Turns 20 (openttd.org) 17

In 1995 Scottish video game designer Chris Sawyer created the business simulator game Transport Tycoon Deluxe — and within four years, Wikipedia notes, work began on an open-source remake that's still being actively developed. "According to a study of the 61,154 open-source projects on SourceForge in the period between 1999 and 2005, OpenTTD ranked as the 8th most active open-source project to receive patches and contributions. In 2004, development moved to their own server."

Long-time Slashdot reader orudge says he's been involved for almost 25 years. "Exactly 21 years ago, I received an ICQ message (look it up, kids) out of the blue from a guy named Ludvig Strigeus (nicknamed Ludde)." "Hello, you probably don't know me, but I've been working on a project to clone Transport Tycoon Deluxe for a while," he said, more or less... Ludde made more progress with the project [written in C] over the coming year, and it looks like we even attempted some multiplayer games (not too reliable, especially over my dial-up connection at the time). Eventually, when he was happy with what he had created, he agreed to allow me to release the game as open source. Coincidentally, this happened exactly a year after I'd first spoken to him, on the 6th March 2004...

Things really got going after this, and a community started to form with enthusiastic developers fixing bugs, adding in new features, and smoothing off the rough edges. Ludde was, I think, a bit taken aback by how popular it proved, and even rejoined the development effort for a while. A read through the old changelogs reveals just how many features were added over a very short period of time. Quick wins like higher vehicle limits came in very quickly, and support for TTDPatch's NewGRF format started to be functional just four months later. Large maps, improved multiplayer, better pathfinders, improved TTDPatch compatibility, and of course, ports to a great many different operating systems, such as Mac OS X, BeOS, MorphOS and OS/2. It was a very exciting time to be a TTD fan!

Within six years, ambitious projects to create free replacements for the original TTD graphics, sounds and music sets were complete, and OpenTTD finally had its 1.0 release. And while we may not have the same frantic addition of new features we had in 2004, there have still been massive improvements to the code, with plenty of exciting new features over the years, with major releases every year since 2008. The move to GitHub in 2018 and the release of OpenTTD on Steam in 2021 have also re-energised development efforts, with thousands of people now enjoying playing the game regularly. And development shows no signs of slowing down, with the upcoming OpenTTD 14.0 release including over 40 new features!

"Personally, I would like to say thank you to everyone who has supported OpenTTD development over the past two decades..." they write, adding "Finally, of course, I'd like to thank you, the players! None of us would be here if people weren't still playing the game.

"Seeing how the first twenty years have gone, I can't wait to see what the next twenty years have in store. :)"

IT

HDMI Forum Rejects Open-Source HDMI 2.1 Driver Support Sought By AMD (phoronix.com) 114

Michael Larabel, reporting at Phoronix: One of the limitations of AMD's open-source Linux graphics driver has been the inability to implement HDMI 2.1+ functionality, owing to legal requirements from the HDMI Forum. AMD engineers had been working with the HDMI Forum on a solution for providing HDMI 2.1+ capabilities in their open-source Linux kernel driver, but it looks like those efforts have, for now, concluded and failed. For three years there has been a bug report around 4K@120Hz being unavailable via HDMI 2.1 on the AMD Linux driver. Similarly, there have been bug reports like 5K @ 240Hz not being possible either with the AMD graphics driver on Linux.

As covered back in 2021, the HDMI Forum closing public specification access is hurting open-source support. AMD as well as the X.Org Foundation have been engaged with the HDMI Forum to try to come up with a solution to be able to provide open-source implementations of the now-private HDMI specs. AMD Linux engineers have spent months working with their legal team and evaluating all HDMI features to determine if/how they can be exposed in their open-source driver. AMD had code working internally and then the past few months were waiting on approval from the HDMI Forum. Sadly, the HDMI Forum has turned down AMD's request for open-source driver support.

KDE

KDE Plasma 6 Released (kde.org) 35

"Today, the KDE Community is announcing a new major release of Plasma 6.0 and Gear 24.02," writes longtime Slashdot reader jrepin. "The new version brings new windows and desktop overview effects, improved color management, a cleaner theme, better overall performance, and much more." From the announcement: KDE Plasma is a modern, feature-rich desktop environment for Linux-based operating systems. Known for its sleek design, customizable interface, and extensive set of applications, it is also open source, devoid of ads, and makes protecting your privacy and personal data a priority.

With Plasma 6, the technology stack has undergone two major upgrades: a transition to the latest version of the application framework, Qt 6, and a migration to the modern Linux graphics platform, Wayland. We will continue providing support for the legacy X11 session for users who prefer to stick with it for now. [...] KDE Gear 24.02 brings many applications to Qt 6. In addition to the changes in Breeze, many applications adopted a more frameless look for their interface.

IT

Lenovo's Laptop Concept is Fully Transparent, But the Point Isn't Entirely Clear (techcrunch.com) 25

An anonymous reader shares a report from the ongoing Mobile World Congress trade show: This year's big scrum gatherer was Lenovo's long-rumored transparent laptop. It's real. It functions surprisingly well and -- nearest anyone can tell -- its existence is a testament to form over function. That's a perfectly fine thing to be when you're a concept device. When it comes to actually shipping a product, however, that's another conversation entirely. [...] Broadly speaking, it looks like a laptop, with a transparent pane where the screen should be. It's perhaps best understood as a kind of augmented reality device, in the sense that its graphics are overlaid on whatever happens to be behind it.

It's a crowd pleaser, with a futuristic air to it that embodies all manner of sci-fi tech tropes. The transparent display has become a kind of shorthand for future tech in stock art, and it's undeniably neat to see the thing in action. [...] The bottom of the device is covered in a large capacitive touch surface. This area serves as both a keyboard and a large stylus-compatible drawing surface. The flat surface can't compete with real, tactile keyboards, of course. Typing isn't the greatest experience here, as evidenced by previous dual-screen Lenovo laptops. But that's the tradeoff for the versatility of the virtual version.

Businesses

Nvidia Posts Record Revenue Up 265% On Booming AI Business (cnbc.com) 27

In its fourth quarter earnings report today, Nvidia beat Wall Street's forecast for earnings and sales, causing shares to rise about 10% in extended trading. CNBC reports: Here's what the company reported compared with what Wall Street was expecting for the quarter ending in January, based on a survey of analysts by LSEG, formerly known as Refinitiv:

Earnings per share: $5.16 adjusted vs. $4.64 expected
Revenue: $22.10 billion vs. $20.62 billion expected

Nvidia said it expected $24.0 billion in sales in the current quarter. Analysts polled by LSEG were looking for $5.00 per share on $22.17 billion in sales. Nvidia CEO Jensen Huang addressed investor fears that the company may not be able to keep up this growth or level of sales for the whole year on a call with analysts. "Fundamentally, the conditions are excellent for continued growth" in 2025 and beyond, Huang told analysts. He says demand for the company's GPUs will remain high due to generative AI and an industry-wide shift away from central processors to the accelerators that Nvidia makes.

Nvidia reported $12.29 billion in net income during the quarter, or $4.93 per share, up 769% versus last year's $1.41 billion or 57 cents per share. Nvidia's total revenue rose 265% from a year ago, based on strong sales for AI chips for servers, particularly the company's "Hopper" chips such as the H100, it said. "Strong demand was driven by enterprise software and consumer internet applications, and multiple industry verticals including automotive, financial services and health care," the company said in commentary provided to investors. Those sales are reported in the company's Data Center business, which now comprises the majority of Nvidia's revenue. Data center sales were up 409% to $18.40 billion. Over half the company's data center sales went to large cloud providers. [...]

The company's gaming business, which includes graphics cards for laptops and PCs, was merely up 56% year over year to $2.87 billion. Graphics cards for gaming used to be Nvidia's primary business before its AI chips started taking off, and some of Nvidia's graphics cards can be used for AI. Nvidia's smaller businesses did not show the same meteoric growth. Its automotive business declined 4% to $281 million in sales, and its OEM and other business, which includes crypto chips, rose 7% to $90 million. Nvidia's business making graphics hardware for professional applications rose 105% to $463 million.

Businesses

Nvidia Becomes Third Most Valuable US Company (cnbc.com) 75

Nvidia is now the third most valuable company in the U.S., surpassing Google parent Alphabet and Amazon. It's only behind Apple and Microsoft in terms of market cap. CNBC reports: Nvidia rose over 2% to close at $739.00 per share, giving it a market value of $1.83 trillion to Google's $1.82 trillion market cap. The move comes one day after Nvidia surpassed Amazon in terms of market value. The symbolic milestone is more confirmation that Nvidia has become a Wall Street darling on the back of elevated AI chip sales, valued even more highly than some of the large software companies and cloud providers that develop and integrate AI technology into their products.

Nvidia shares are up over 221% over the past 12 months on robust demand for its AI server chips that can cost more than $20,000 each. Companies like Google and Amazon need thousands of them for their cloud services. Before the recent AI boom, Nvidia was best known for consumer graphics processors it sold to PC makers to build gaming computers, a less lucrative market.

Portables (Apple)

Asahi Linux Project's OpenGL Support On Apple Silicon Officially Surpasses Apple's (arstechnica.com) 43

Andrew Cunningham reports via Ars Technica: For around three years now, the team of independent developers behind the Asahi Linux project has worked to support Linux on Apple Silicon Macs, despite Apple's total lack of involvement. Over the years, the project has gone from a "highly unstable experiment" to a "surprisingly functional and usable desktop operating system." Even Linus Torvalds has used it to run Linux on Apple's hardware. The team has been steadily improving its open source, standards-conformant GPU driver for the M1 and M2 since releasing them in December 2022, and today, the team crossed an important symbolic milestone: The Asahi driver's support for the OpenGL and OpenGL ES graphics APIs has officially passed what Apple offers in macOS. The team's latest graphics driver fully conforms with OpenGL version 4.6 and OpenGL ES version 3.2, the most recent version of either API. Apple's support in macOS tops out at OpenGL 4.1, announced in July 2010.

Developer Alyssa Rosenzweig wrote a detailed blog post that announced the new driver, which had to pass "over 100,000 tests" to be deemed officially conformant. The team achieved this milestone despite the fact that Apple's GPUs don't support some features that would have made implementing these APIs more straightforward. "Regrettably, the M1 doesn't map well to any graphics standard newer than OpenGL ES 3.1," writes Rosenzweig. "While Vulkan makes some of these features optional, the missing features are required to layer DirectX and OpenGL on top. No existing solution on M1 gets past the OpenGL 4.1 feature set... Without hardware support, new features need new tricks. Geometry shaders, tessellation, and transform feedback become compute shaders. Cull distance becomes a transformed interpolated value. Clip control becomes a vertex shader epilogue. The list goes on."
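Rosenzweig's point that "transform feedback becomes a compute shader" maps directly onto modern API terms. As a hedged illustration in WGSL via WebGPU (not the Asahi driver's actual internals, just the same idea expressed in a portable API): the vertex transform runs as a compute pass that writes its results into a storage buffer, standing in for the fixed-function capture hardware the M1 lacks:

```typescript
// Emulating fixed-function transform feedback with a compute shader.
// The WGSL below is a sketch of the idea, not code from the Asahi driver:
// run the vertex math in a compute pass and capture the outputs into a
// storage buffer, which is what transform feedback hardware would record.
const transformFeedbackWGSL = /* wgsl */ `
  @group(0) @binding(0) var<storage, read>       inPos  : array<vec4f>;
  @group(0) @binding(1) var<storage, read_write> outPos : array<vec4f>;
  @group(0) @binding(2) var<uniform>             mvp    : mat4x4f;

  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) id: vec3u) {
    if (id.x >= arrayLength(&inPos)) { return; }
    // The "vertex shader" body, captured to memory instead of rasterized.
    outPos[id.x] = mvp * inPos[id.x];
  }
`;
```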

Now that the Asahi GPU driver supports the latest OpenGL and OpenGL ES standards -- released in 2017 and 2015, respectively -- the work turns to supporting the low-overhead Vulkan API on Apple's hardware. Vulkan support in macOS is limited to translation layers like MoltenVK, which translates Vulkan API calls to Metal ones that the hardware and OS can understand. [...] Rosenzweig's blog post didn't give any specific updates on Vulkan except to say that the team was "well on the road" to supporting it. In addition to supporting native Linux apps, supporting more graphics APIs in Asahi will allow the operating system to take better advantage of software like Valve's Proton, which has already seen a few games written for x86-based Windows PCs running on Arm-based Apple hardware.

Businesses

Sam Altman Seeks Trillions of Dollars To Reshape Business of Chips and AI (wsj.com) 54

Sam Altman was already trying to lead the development of human-level artificial intelligence. Now he has another great ambition: raising trillions of dollars to reshape the global semiconductor industry. From a report: The OpenAI chief executive officer is in talks with investors including the United Arab Emirates government to raise funds for a wildly ambitious tech initiative that would boost the world's chip-building capacity, expand its ability to power AI, among other things, and cost several trillion dollars, according to people familiar with the matter. The project could require raising as much as $5 trillion to $7 trillion, one of the people said.

The fundraising plans, which face significant obstacles, are aimed at solving constraints to OpenAI's growth, including the scarcity of the pricey AI chips required to train large language models behind AI systems such as ChatGPT. Altman has often complained that there aren't enough of these kinds of chips -- known as graphics processing units, or GPUs -- to power OpenAI's quest for artificial general intelligence, which it defines as systems that are broadly smarter than humans. Such a sum of investment would dwarf the current size of the global semiconductor industry. Global sales of chips were $527 billion last year and are expected to rise to $1 trillion annually by 2030. Global sales of semiconductor manufacturing equipment -- the costly machinery needed to run chip factories -- last year were $100 billion, according to an estimate by the industry group SEMI.

AI

AI PCs To Account for Nearly 60% of All PC Shipments by 2027, IDC Says (idc.com) 70

IDC, in a press release: A new forecast from IDC shows shipments of artificial intelligence (AI) PCs -- personal computers with specific system-on-a-chip (SoC) capabilities designed to run generative AI tasks locally -- growing from nearly 50 million units in 2024 to more than 167 million in 2027. By the end of the forecast, IDC expects AI PCs will represent nearly 60% of all PC shipments worldwide. [...] Until recently, running an AI task locally on a PC was done on the central processing unit (CPU), the graphics processing unit (GPU), or a combination of the two. However, this can have a negative impact on the PC's performance and battery life because these chips are not optimized to run AI efficiently. PC silicon vendors have now introduced AI-specific silicon to their SoCs called neural processing units (NPUs) that run these tasks more efficiently.

To date, IDC has identified three types of NPU-enabled AI PCs:
1. Hardware-enabled AI PCs include an NPU that offers less than 40 tera operations per second (TOPS) performance and typically enables specific AI features within apps to run locally. Qualcomm, Apple, AMD, and Intel are all shipping chips in this category today.

2. Next-generation AI PCs include an NPU with 40 to 60 TOPS performance and an AI-first operating system (OS) that enables persistent and pervasive AI capabilities in the OS and apps. Qualcomm, AMD, and Intel have all announced future chips for this category, with delivery expected to begin in 2024. Microsoft is expected to roll out major updates (and updated system specifications) to Windows 11 to take advantage of these high-TOPS NPUs.

3. Advanced AI PCs are PCs that offer more than 60 TOPS of NPU performance. While no silicon vendors have announced such products, IDC expects them to appear in the coming years. This IDC forecast does not include advanced AI PCs, but they will be incorporated into future updates. (These TOPS thresholds are sketched as code below.)
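Since the taxonomy reduces to NPU throughput thresholds, it fits in a few lines of code. A small illustrative classifier (the function name and return labels are this sketch's own, not anything IDC publishes):

```typescript
// IDC's AI PC categories, keyed purely off NPU throughput (TOPS).
type AiPcCategory = "hardware-enabled" | "next-generation" | "advanced";

function classifyAiPc(npuTops: number): AiPcCategory | "not an AI PC" {
  if (npuTops <= 0) return "not an AI PC";     // no NPU at all
  if (npuTops < 40) return "hardware-enabled"; // < 40 TOPS, app-level AI
  if (npuTops <= 60) return "next-generation"; // 40-60 TOPS, AI-first OS
  return "advanced";                           // > 60 TOPS, none shipping yet
}

console.log(classifyAiPc(16)); // "hardware-enabled"
console.log(classifyAiPc(45)); // "next-generation"
```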
Michael Dell, commenting on X: This is correct and might be underestimating it. AI PCs are coming fast and Dell is ready.

Movies

Avatar VFX Workers Vote To Unionize (hollywoodreporter.com) 28

Visual effects artists working on James Cameron's Avatar movies have voted to unionize in a National Labor Relations Board (NLRB) election. From the Hollywood Reporter: Of an eligible 88 workers at Walt Disney Studios subsidiary TCF US Productions 27, Inc. who assist with productions for Cameron's Lightstorm Entertainment, 57 voted to join the union and 19 voted against, while two ballots were void. These workers include creatures costume leads and environment artists as well as others in the stage, environments, render, post viz, sequence, turn over and kabuki departments. Management and labor now have a few days to file any objections, and if none are raised, the election results will be certified.

This bargaining unit doesn't include employees of VFX facility vendors, notably Weta FX, which is the lead VFX house on the Avatar films and employs the vast majority of the more than 1,000 artists who work on a typical Avatar movie. But unionizing the group represents a major inroad for the VFX industry labor movement, believes one VFX industry source who spoke with THR. "While insignificant as a number, this is the core team that answers to Jim Cameron," says the source. "They are not necessarily impressive in size, but in influence."

The workers first went public with their organizing bid in December, when they filed for a union election with the NLRB. At the time, participating workers said in public statements that they were aiming to gain comparable benefits and pay to their unionized peers and have greater input into working conditions. "Every one of my coworkers has dedicated so much time, creativity and passion to make these films a reality. So when you see them struggling to cover their health premiums, or being overworked because they took on multiple roles, or are just scraping by on their wages ... you cannot keep silent," said kabuki lead Jennifer Anaya.

Android

Google Is Rolling Out WebGPU For Next-Gen Gaming On Android 14

In a blog post today, Google announced that WebGPU is "now enabled by default in Chrome 121 on devices running Android 12 and greater powered by Qualcomm and ARM GPUs," with support for more Android devices rolling out gradually. Previously, the API was only available on Windows PCs that support Direct3D 12, macOS, and ChromeOS devices that support Vulkan.

Google says WebGPU "offers significant benefits such as greatly reduced JavaScript workload for the same graphics and more than three times improvements in machine learning model inferences." With lower-level access to a device's GPU, developers are able to enable richer and more complex visual content in web applications. This will be especially apparent with games, as you can see in this demo.
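For developers who want to check whether a given device has received the rollout, feature detection uses the standard WebGPU entry points. A minimal sketch (TypeScript; the GPU* types come from the @webgpu/types package):

```typescript
// Minimal WebGPU feature detection and device setup.
// navigator.gpu is only defined where WebGPU is enabled (e.g. Chrome 121+
// on supported Qualcomm/ARM-GPU Android 12+ devices).
async function initWebGPU(): Promise<GPUDevice | null> {
  if (!("gpu" in navigator)) {
    console.log("WebGPU is not available in this browser/OS combination");
    return null;
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    console.log("No suitable GPU adapter (unsupported or blocklisted GPU)");
    return null;
  }
  // The GPUDevice is the handle used to create buffers, shaders, pipelines.
  return adapter.requestDevice();
}

initWebGPU().then((device) => {
  if (device) console.log("WebGPU ready");
});
```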

Next up: WebGPU for Chrome on Linux.

Security

A Flaw In Millions of Apple, AMD, and Qualcomm GPUs Could Expose AI Data (wired.com) 22

An anonymous reader quotes a report from Wired: As more companies ramp up development of artificial intelligence systems, they are increasingly turning to graphics processing unit (GPU) chips for the computing power they need to run large language models (LLMs) and to crunch data quickly at massive scale. Between video game processing and AI, demand for GPUs has never been higher, and chipmakers are rushing to bolster supply. In new findings released today, though, researchers are highlighting a vulnerability in multiple brands and models of mainstream GPUs -- including Apple, Qualcomm, and AMD chips -- that could allow an attacker to steal large quantities of data from a GPU's memory. The silicon industry has spent years refining the security of central processing units, or CPUs, so they don't leak data in memory even when they are built to optimize for speed. However, since GPUs were designed for raw graphics processing power, they haven't been architected to the same degree with data privacy as a priority. As generative AI and other machine learning applications expand the uses of these chips, though, researchers from New York-based security firm Trail of Bits say that vulnerabilities in GPUs are an increasingly urgent concern. "There is a broader security concern about these GPUs not being as secure as they should be and leaking a significant amount of data," Heidy Khlaaf, Trail of Bits' engineering director for AI and machine learning assurance, tells WIRED. "We're looking at anywhere from 5 megabytes to 180 megabytes. In the CPU world, even a bit is too much to reveal."

To exploit the vulnerability, which the researchers call LeftoverLocals, attackers would need to already have established some amount of operating system access on a target's device. Modern computers and servers are specifically designed to silo data so multiple users can share the same processing resources without being able to access each others' data. But a LeftoverLocals attack breaks down these walls. Exploiting the vulnerability would allow a hacker to exfiltrate data they shouldn't be able to access from the local memory of vulnerable GPUs, exposing whatever data happens to be there for the taking, which could include queries and responses generated by LLMs as well as the weights driving the response. In their proof of concept, the researchers demonstrate an attack where a target asks the open source LLM Llama.cpp to provide details about WIRED magazine. Within seconds, the attacker's device collects the majority of the response provided by the LLM by carrying out a LeftoverLocals attack on vulnerable GPU memory. The attack program the researchers created uses less than 10 lines of code. [...] Though exploiting the vulnerability would require some amount of existing access to targets' devices, the potential implications are significant given that it is common for highly motivated attackers to carry out hacks by chaining multiple vulnerabilities together. Furthermore, establishing "initial access" to a device is already necessary for many common types of digital attacks.
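The shape of such a "listener" program is simple enough to sketch. The WGSL below illustrates the pattern only: declare workgroup-local memory, read it without ever writing it, and copy the contents somewhere visible. Note that WebGPU/WGSL mandates zero-initialization of workgroup variables, so this exact code leaks nothing in a browser; that guarantee is precisely what the vulnerable native GPU stacks lacked:

```typescript
// Pattern of a LeftoverLocals-style listener kernel (illustration only).
// WebGPU zero-initializes workgroup memory, so this leaks nothing here;
// on a vulnerable native GPU stack, the analogous kernel could read
// another process's leftover local-memory contents.
const listenerWGSL = /* wgsl */ `
  var<workgroup> scratch : array<u32, 1024>; // deliberately never written

  @group(0) @binding(0) var<storage, read_write> dump : array<u32, 1024>;

  @compute @workgroup_size(64)
  fn main(@builtin(local_invocation_id) lid: vec3u) {
    // Copy "uninitialized" local memory out to a buffer the host can map.
    for (var i = lid.x; i < 1024u; i += 64u) {
      dump[i] = scratch[i];
    }
  }
`;
```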
The researchers did not find evidence that Nvidia, Intel, or Arm GPUs contain the LeftoverLocals vulnerability, but Apple, Qualcomm, and AMD all confirmed to WIRED that they are impacted. Here's what each of the affected companies had to say about the vulnerability, as reported by Wired:

Apple: An Apple spokesperson acknowledged LeftoverLocals and noted that the company shipped fixes with its latest M3 and A17 processors, which it unveiled at the end of 2023. This means that the vulnerability is seemingly still present in millions of existing iPhones, iPads, and MacBooks that depend on previous generations of Apple silicon. On January 10, the Trail of Bits researchers retested the vulnerability on a number of Apple devices. They found that Apple's M2 MacBook Air was still vulnerable, but the iPad Air (3rd generation, with an A12 chip) appeared to have been patched.
Qualcomm: A Qualcomm spokesperson told WIRED that the company is "in the process" of providing security updates to its customers, adding, "We encourage end users to apply security updates as they become available from their device makers." The Trail of Bits researchers say Qualcomm confirmed it has released firmware patches for the vulnerability.
AMD: AMD released a security advisory on Wednesday detailing its plans to offer fixes for LeftoverLocals. The protections will be "optional mitigations" released in March.
Google: For its part, Google says in a statement that it "is aware of this vulnerability impacting AMD, Apple, and Qualcomm GPUs. Google has released fixes for ChromeOS devices with impacted AMD and Qualcomm GPUs."
