AMD

AMD Unveils Zen 2 CPU Architecture, Navi GPU Architecture and a Slew of Products (hothardware.com) 167

MojoKid writes: AMD let loose today with a number of high-profile launches at the E3 2019 Expo in Los Angeles, CA. The company disclosed its full Zen 2 Ryzen 3000 series microarchitecture, which AMD claims offers an IPC uplift of 15% generation over generation, thanks to better branch prediction, higher integer throughput, and reduced effective latency to memory. Zen 2 also significantly beefs up floating point throughput, with double the FP performance of the previous generation. AMD also announced a 16-core/32-thread variant, dubbed the Ryzen 9 3950X, that drops at $750 -- a full $950 cheaper than a similar-spec 16-core Intel Core i9-9960X. On the graphics side, AMD's RDNA architecture in Navi will power the company's new Radeon RX 5700 series, which is said to offer performance competitive with NVIDIA's GeForce RTX 2070 and 2060 series. The Navi-based GPU at the heart of the upcoming Radeon RX 5700 series is manufactured on TSMC's 7nm process node and features GDDR6 memory, along with PCI Express 4.0 interface support. Versus AMD's previous-generation GCN (Graphics Core Next) architecture, RDNA delivers more than 50% better performance-per-watt and 25% better overall performance. More than half of that improvement comes from architecture optimizations, according to AMD; the GPU also gets a boost from its 7nm process and frequency gains. Radeon RX 5700 and 5700 XT cards will be available on July 7th, along with AMD Ryzen 3000 chips, but pricing hasn't been established yet for the Radeon GPUs.
XBox (Games)

Project Scarlett is the Next Xbox Console, Launching in Holiday 2020 (polygon.com) 115

Project Scarlett is the next Microsoft video game console. Phil Spencer, executive vice president of gaming at the company, announced the hardware during Microsoft's E3 2019 press briefing. From a report: "The console should be optimized for one thing and one thing only," said Spencer, "gaming." Spencer explained that the console has been developed by the team responsible for the Xbox One X. A promotional video featuring various Xbox employees promised variable refresh rates, real-time ray tracing, 8K resolution, frame rates up to 120 fps, and a new SSD that offers upwards of 40 times the performance of the current generation. The tech at the heart of the console -- which Microsoft said is four times as powerful as the Xbox One X -- will be a custom chip based on AMD's Zen 2 and Navi technology.
Desktops (Apple)

Apple's Top Spec Mac Pro and Pro Display Will Cost At Least $50,000 (theverge.com) 335

Apple announced this week that its new Mac Pro starts at an already pricey $6,000, but the company neglected to mention how much the top-of-the-line model will cost. From a report on The Verge: So we shopped around for equivalent parts to the top-end spec that Apple's promising. As it turns out: $33,720.88 is likely the bare minimum -- and that's before factoring in the four GPUs, which could easily jack that price up to around $45,000. For all that dough, big-budget video editors and other creative types get a lot of firepower: a 28-core Intel Xeon W processor, an almost-impossible-to-comprehend 1.5TB of RAM, 4TB of SSD storage, and four AMD Radeon Pro Vega II GPUs (via two Vega II Duo cards) -- assuming you can afford one. Add in a Pro Display XDR monitor (and a Pro Stand to go with it), and you're looking at a workstation that could clear $50,000. Keep in mind too that these estimates are based on market prices for these (or similar) parts: Apple historically has charged far more for its pre-built configurations than for a computer you'd build on your own.
Desktops (Apple)

Apple Announces All-New Redesigned Mac Pro, Starting at $5,999 (theverge.com) 317

The long-awaited Mac Pro is here. From a report: The new Intel Xeon processor inside the Mac Pro will have up to 28 cores, with up to 300W of power and heavy-duty cooling, "so it can run unconstrained at full power at all times." System memory can be maxed out at an eyebrow-raising 1.5TB, says Apple. There are eight internal PCI Express slots, with four of them being double-wide. Two USB-C and two USB-A ports will grace the front of the system, which is at least one more USB-C port than you'll find on a majority of desktop PC systems and cases today. With this Mac Pro, Apple is launching a custom expansion module it calls an MPX Module. This is a giant quad-wide PCIe card that fits two graphics cards, has its own dedicated heatsink, and also has a Thunderbolt 3 connector on the bottom for extra bandwidth / power / display connectivity. Apple says you can spec that out with AMD's Radeon Pro Vega II or Radeon Pro Vega II Duo, the latter of which would get you four GPUs in total. The power supply of the new Mac Pro maxes out at 1.4kW. Three large fans sit at the front, just behind the new aluminum grille, blowing air across the system at a rate of 300 cubic feet per minute. It starts at $5,999.
AMD

Samsung and AMD Announce Multi-Year Strategic Graphics IP Licensing Deal For SLSI Mobile GPUs (anandtech.com) 17

Samsung and AMD announced today a new multi-year strategic partnership between the two companies, under which Samsung SLSI will license graphics IP from AMD for use in mobile GPUs. From a report: The announcement is a major disruptive move for the mobile graphics landscape, as it signifies that Samsung is going forward with the productization of its own in-house GPU architecture in future Exynos chipsets. Samsung is said to have started work on its own "S-GPU" at its research division back around 2012, with the company handing over the new IP to a new division called "ACL," or Advanced Computing Lab, in San Jose, which has a joint charter with SARC (Samsung Austin R&D Center, where Samsung currently designs its custom mobile CPU & memory controller IP). With today's announced partnership, Samsung will license "custom graphics IP" from AMD. What this IP means is a bit unclear from the press release, but we have some strong pointers on what it might be.

Samsung's own GPU architecture is already quite far along, having seen seven years of development, and is already being integrated into test silicon. Unless the deal was signed years ago and only publicly announced today, that would suggest the IP being discussed is a patent deal rather than new architectural IP from AMD that Samsung would integrate into its own designs. Samsung's new GPU is the first from-scratch design in over a decade, in an industry dominated by long-established incumbents with massive patent pools. Today's announcement therefore likely means that Samsung is buying a patent chest from AMD in order to protect itself from possible litigation by other industry players.

Intel

Intel Boldly Claims Its 'Ice Lake' Integrated Graphics Are As Good as AMD's (pcworld.com) 147

While Intel is expected to detail its upcoming 10nm processor, Ice Lake, during its Tuesday keynote here at Computex, the company is already making one bold claim -- that Ice Lake's integrated Gen11 graphics engine is on par with or better than AMD's current Ryzen 7 graphics. From a report: It's a claim that Ryan Shrout, a former journalist and now the chief performance strategist for Intel, says Intel doesn't make lightly. "I don't think we can overstate how important this is for us, to make this claim and this statement about the one area that people railed on us for in the mobile space," Shrout said shortly before Computex began. Though Intel actually supplies the largest number of integrated graphics chipsets in the PC space, it does so on the strength of its CPU performance (and also thanks to strong relationships with laptop makers). Historically, AMD has leveraged its Radeon "Vega" GPUs to attract buyers seeking a more powerful integrated graphics solution. But what Intel is trying to do now, with its Xe discrete graphics on the horizon, is let its GPUs stand on their own merits. Referencing a series of benchmarks and games from the 3DMark Sky Diver test to Fortnite to Overwatch, Intel claims performance that's 3 to 15 percent faster than the Ryzen 7. Intel's argument is based on a comparison of a U-series Ice Lake part at 25 watts versus a Ryzen 7 3700U, also at 25 watts.
AMD

AMD Unveils the 12-Core Ryzen 9 3900X, at Half the Price of Intel's Competing Core i9 9920X Chipset (techcrunch.com) 261

AMD CEO Lisa Su today unveiled news about its chips and graphics processors that will increase pressure on competitors Intel and Nvidia, both in terms of pricing and performance. From a report: All new third-generation Ryzen CPUs, the first 7-nanometer desktop chips, will go on sale on July 7. The showstopper of Su's keynote was the announcement of AMD's 12-core, 24-thread Ryzen 9 3900X chip, the flagship of its third-generation Ryzen family. It will retail starting at $499, less than half the price of Intel's competing Core i9-9920X, which is priced at $1,189 and up. The 3900X has a 4.6GHz boost clock and 70MB of total cache, and uses 105 watts of thermal design power (versus the i9-9920X's 165 watts), making it more efficient. AMD says that in a Blender demo against the Intel i9-9920X, the 3900X finished about 18 percent more quickly. Starting prices for other chips in the family are $199 for the 6-core, 12-thread 3600; $329 for the 8-core, 16-thread Ryzen 3700X (with a 4.4GHz boost, 36MB of total cache, and a 65-watt TDP); and $399 for the 8-core, 16-thread Ryzen 3800X (4.5GHz, 32MB cache, 105W).
AMD

Intel Performance Hit 5x Harder Than AMD After Spectre, Meltdown Patches (extremetech.com) 170

Phoronix has conducted a series of tests to show just how much the Spectre and Meltdown patches have impacted the raw performance of Intel and AMD CPUs. While the patches have resulted in performance decreases across the board, ranging from virtually nothing to significant depending on the application, it appears that Intel received the short end of the stick, as its CPUs have been hit five times harder than AMD's, according to ExtremeTech. From the report: The collective impact of enabling all patches is not a positive for Intel. While the impacts vary tremendously from virtually nothing to significant on an application-by-application level, the collective whack is about 15-16 percent on all Intel CPUs with Hyper-Threading still enabled. Disabling Hyper-Threading increases the overall performance impact to 20 percent (for the 7980XE), 24.8 percent (8700K), and 20.5 percent (6800K).

The AMD CPUs are not tested with SMT disabled, because disabling SMT isn't a required fix for the situation on AMD chips, but the cumulative impact of the decline is much smaller. AMD loses ~3 percent with all fixes enabled. The impact of these changes is enough to change the relative performance weighting between the tested solutions. With no fixes applied, across its entire test suite, the CPU performance ranking is (from fastest to slowest): 7980XE (288), 8700K (271), 2990WX (245), 2700X (219), 6800K (200). With the full suite of mitigations enabled, the CPU performance ranking is (from fastest to slowest): 2990WX (238), 7980XE (231), 2700X (213), 8700K (204), 6800K (159).
In closing, ExtremeTech writes: "AMD, in other words, now leads the aggregate performance metrics, moving from 3rd and 4th to 1st and 3rd. This isn't the same as winning every test, and since the degree to which each test responds to these changes varies, you can't claim that the 2990WX is now across-the-board faster than the 7980XE in the Phoronix benchmark suite. It isn't. But the cumulative impact of these patches could result in more tests where Intel and AMD switch rankings as a result of performance impacts that only hit one vendor."
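For readers who want to check those aggregate figures, the per-CPU slowdowns follow directly from the before/after suite scores quoted above. A minimal sketch in Python, using only the numbers as reported (not re-benchmarked):

    # Aggregate Phoronix suite scores quoted above: (no mitigations, full mitigations).
    # Per the article, the Intel "full mitigations" figures include disabling Hyper-Threading.
    scores = {
        "7980XE": (288, 231),
        "8700K":  (271, 204),
        "2990WX": (245, 238),
        "2700X":  (219, 213),
        "6800K":  (200, 159),
    }

    for cpu, (before, after) in scores.items():
        drop = (before - after) / before * 100
        print(f"{cpu}: {before} -> {after}  ({drop:.1f}% slower)")

Running it reproduces the roughly 20-25 percent hits for the Intel parts and the ~3 percent hits for the AMD parts cited above.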
Businesses

Ask Slashdot: Are the Big Players In Tech Even Competing With Each Other? 145

dryriver writes: For capitalism to work for consumers in a beneficial way, the big players have to compete hard against each other and innovate courageously. What appears to be happening instead, however, is that every year almost everybody is making roughly the same product at roughly the same price point. Most 4K TVs at the same price point have the same features -- there is little to distinguish manufacturer A from manufacturer B. Ditto for smartphones -- nobody suddenly puts a 3D scanning capable lightfield camera, shake-the-phone-to-charge-it or something similarly innovative into their next phone. Ditto for game consoles -- Xbox and Playstation are not very different from each other at all. Nintendo does "different," but underpowers its hardware. Ditto for laptops -- the only major difference I see in laptops is the quality of the screen panel used and of the cooling system. The last laptop with an auto stereoscopic 3D screen I have seen is the long-discontinued Toshiba Satellite 3D. Ditto for CPUs and GPUs -- it doesn't really matter whether you buy Intel, AMD, or Nvidia. There is nothing so "different" or "distinct" in any of the electronics they make that it makes you go "wow, that is truly groundbreaking." Ditto for sports action cameras, DSLRs, portable storage and just about everything else "tech." So where precisely -- besides pricing and build-quality differences -- is the competition in what these companies are doing? Shouldn't somebody be trying to "pull far ahead of the pack" or "ahead of the curve" with some crazy new feature that nobody else has? Or is true innovation in tech simply dead now?
Businesses

Hewlett Packard Enterprise To Acquire Supercomputer Maker Cray for $1.3 Billion (anandtech.com) 101

Hewlett Packard Enterprise will be buying the supercomputer maker Cray for roughly $1.3 billion, the companies said this morning. HPE intends to use Cray's knowledge and technology to bolster its own supercomputing and high-performance computing offerings; when the deal closes, HPE will become the world leader in supercomputing technology. From a report: Cray of course needs no introduction. The current leader in the supercomputing field and founder of supercomputing as we know it, Cray has been a part of the supercomputing landscape since the 1970s. Having started with fully custom systems, in more recent years Cray has morphed into an integrator and scale-out specialist, combining processors from the likes of Intel, AMD, and NVIDIA into supercomputers, and applying its own software, I/O, and interconnect technologies. The timing of the acquisition announcement closely follows other major news from Cray: the company just landed a $600 million US Department of Energy contract to supply the Frontier supercomputer to Oak Ridge National Laboratory in 2021. Frontier is one of two exascale supercomputers Cray is involved in -- the other being the 2021 Aurora system, where Cray is a subcontractor -- and in fact Cray is involved in the only two exascale systems ordered by the US Government thus far. So in both a historical and modern context, Cray was and is one of the biggest players in the supercomputing market.
AMD

World's Fastest Supercomputer Coming To US in 2021 From Cray, AMD (cnet.com) 89

The "exascale" computing race is getting a new entrant called Frontier, a $600 million machine with Cray and AMD technology that could become the world's fastest when it arrives at Oak Ridge National Laboratory in 2021. From a report: Frontier should be able to perform 1.5 quintillion calculations per second, a level called 1.5 exaflops and enough to claim the performance crown, the Energy Department announced Tuesday. Its speed will be about 10 times faster than that of the current record holder on the Top500 supercomputer ranking, the IBM-built Summit machine, also at Oak Ridge, and should surpass a $500 million, 1-exaflops Cray-Intel supercomputer called Aurora to be built in 2021 at Argonne National Laboratory. There's no guarantee the US will win the race to exascale machines -- those that cross the 1-exaflop threshold -- because China, Japan and France each could have exascale machines in 2020. At stake is more than national bragging rights: It's also about the ability to perform cutting-edge research in areas like genomics, nuclear physics, cosmology, drug discovery, artificial intelligence and climate simulation.
Software

Blender Developers Find Old Linux Drivers Are Better Maintained Than Windows (phoronix.com) 151

Unsurprisingly, compared with the world of proprietary graphics drivers on Windows -- where driver releases stop once support is retired -- old open-source Linux OpenGL drivers are found to be better maintained. From a report: Blender developers working on shipping Blender 2.80 this July, the big update to the open-source 3D modeling software, today rolled out the Linux GPU requirements for the next release. The requirements themselves aren't too surprising: NVIDIA GPUs released in the last ten years, AMD GCN for best support, and Intel Haswell graphics or newer. In the case of NVIDIA graphics, the company tends to do a good job of maintaining its legacy driver branches. With AMD Radeon and Intel graphics, Blender developers acknowledge that older hardware may work better on Linux.
AMD

AMD Gained Market Share For 6th Straight Quarter, CEO Says (venturebeat.com) 123

Advanced Micro Devices CEO Lisa Su said during her remarks on AMD's first-quarter earnings conference call with analysts today that she was confident about the state of competition with rivals like Intel and Nvidia in processors and graphics chips. She also pointed out that the company gained market share in processors for the 6th straight quarter. From a report: AMD's revenue was $1.27 billion for the first quarter, down 23% from the same quarter a year ago. But Su noted that revenue from Ryzen and Epyc processors and datacenter graphics processing units (GPUs) more than doubled year-over-year, helping expand the gross margin by 5 percentage points. If there was a weak spot in the quarter, it was softness in the graphics channel and lower semi-custom revenue (which includes game console chips). Su said AMD's unit shipments increased significantly and the company's new products drove a higher client average selling price (ASP).
Graphics

Ask Slashdot: Why Are 3D Games, VR/AR Still Rendered Using Polygons In 2019? 230

dryriver writes: A lot of people seem to believe that computers somehow need polygons, NURBS surfaces, voxels or point clouds "to be able to define and render 3D models to the screen at all." This isn't really true. All a computer needs to light, shade, and display a 3D model is to know the answer to the question "is there a surface point at coordinate XYZ or not." Many different mathematical structures or descriptors can be dreamed up that can tell a computer whether there is indeed a 3D model surface point at coordinate XYZ or behind a given screen pixel XY. Polygons/triangles are a very old approach to 3D graphics that was primarily designed not to overstress the very limited CPU and RAM resources of the first computers capable of displaying raster 3D graphics. The brains who invented the technique back in the late 1960s probably figured that by the 1990s at the latest, their method would be replaced by something better and more clever. Yet here we are in 2019 buying pricey Nvidia, AMD, and other GPUs that are primarily polygon/triangle accelerators.

Why is this? Creating good-looking polygon models is still a slow, difficult, iterative, and money-intensive task in 2019. A good chunk of the $60 you pay for an AAA PC or console game covers the sheer amount of time, manpower, and effort required to build everything in a 15-hour game experience out of unwieldy triangles and polygons. So why still use polygons at all? Why not dream up a completely new "there is a surface point here" technique that makes good 3D models easier to create and may render much, much faster than polygons/triangles on modern hardware to boot? Why use a 50-year-old approach to 3D graphics when new, better approaches can be pioneered?
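One family of alternatives the submission gestures at is implicit surfaces: instead of storing triangles, you store a function that answers the "is there a surface point at coordinate XYZ" question directly -- or, more usefully, how far XYZ is from the nearest surface. As a purely illustrative sketch (a hypothetical toy example in Python, not taken from any engine), here is a signed distance function for a sphere and a minimal sphere-tracing loop:

    import math

    def sphere_sdf(x, y, z, cx=0.0, cy=0.0, cz=3.0, r=1.0):
        # Signed distance from (x, y, z) to the sphere's surface:
        # negative inside, zero on the surface, positive outside.
        return math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2) - r

    def trace(ox, oy, oz, dx, dy, dz, max_steps=64, eps=1e-4):
        # March a ray from (ox, oy, oz) along the unit direction (dx, dy, dz)
        # until the distance field says we have reached a surface point.
        t = 0.0
        for _ in range(max_steps):
            d = sphere_sdf(ox + dx * t, oy + dy * t, oz + dz * t)
            if d < eps:
                return t      # a surface point exists behind this pixel
            t += d            # safe step: nothing is closer than d
        return None           # no surface hit

    print(trace(0.0, 0.0, 0.0, 0.0, 0.0, 1.0))   # hits the sphere at t = 2.0

Whether representations like this can be authored and rendered faster than triangle meshes on today's GPUs is exactly the open question the submission raises; current rasterization hardware is heavily optimized for the triangle case.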
PlayStation (Games)

What To Expect From Sony's Next-Gen PlayStation (wired.com) 131

Daetrin writes: Sony is unwilling to confirm "PlayStation 5" as the name, but its next console is "no mere upgrade," according to a report from Wired, which cites Sony executives -- who spoke on the record:

"PlayStation's next-generation console ticks all those boxes, starting with an AMD chip at the heart of the device. (Warning: some alphabet soup follows.) The CPU is based on the third generation of AMD's Ryzen line and contains eight cores of the company's new 7nm Zen 2 microarchitecture. The GPU, a custom variant of Radeon's Navi family, will support ray tracing, a technique that models the travel of light to simulate complex interactions in 3D environments. While ray tracing is a staple of Hollywood visual effects and is beginning to worm its way into $10,000 high-end processors, no game console has been able to manage it. Yet."

The console will also have a solid-state drive and is currently planned to be backward-compatible with both PS4 games and PSVR.

AMD

Could AMD's Upcoming EPYC 'Rome' Server Processors Feature Up To 162 PCIe Lanes? (tomshardware.com) 107

jwhyche (Slashdot reader #6,192) tipped us off to some interesting speculation about AMD's upcoming Zen 2-based EPYC Rome server processors. "The new Epyc processor would be Gen 4 PCIe where Intel is still using Gen 3. Gen 4 PCIe features twice the bandwidth of the older Gen 3 specification."

And now Tom's Hardware reports: While AMD has said that a single EPYC Rome processor could deliver up to 128 PCIe lanes, the company hasn't stated how many lanes two processors could deliver in a dual-socket server. According to ServeTheHome.com, there's a distinct possibility EPYC could feature up to 162 PCIe 4.0 lanes in a dual-socket configuration, which is 82 more lanes than Intel's dual-socket Cascade Lake Xeon servers. That even beats Intel's latest 56-core 112-thread Platinum 9200-series processors, which expose 80 PCIe lanes per dual-socket server.

Patrick Kennedy at ServeTheHome, a publication focused on high-performance computing, and RetiredEngineer on Twitter have both concluded that two Rome CPUs could support 160 PCIe 4.0 lanes. Kennedy even expects there will be an additional PCIe lane per CPU (meaning 129 in a single socket), bringing the total number of lanes in a dual-socket server up to 162, but with the caveat that this additional lane per socket could only be used for the baseboard management controller (or BMC), a vital component of server motherboards... If @RetiredEngineer and ServeTheHome did their math correctly, then Intel has even more serious competition than AMD has let on.
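The arithmetic behind the headline number is simple enough to verify from the figures quoted above; a quick back-of-the-envelope sketch in Python (all inputs are the speculative estimates cited, not confirmed specifications):

    single_socket_lanes = 128   # AMD's stated PCIe 4.0 lane count for one Rome CPU
    bmc_lane_per_cpu    = 1     # extra lane per socket Kennedy expects, reserved for the BMC
    dual_socket_usable  = 160   # ServeTheHome / @RetiredEngineer estimate for two sockets
    intel_dual_socket   = 80    # lanes exposed by a dual-socket Xeon Platinum 9200 server

    print(single_socket_lanes + bmc_lane_per_cpu)   # 129 lanes in a single socket
    total = dual_socket_usable + 2 * bmc_lane_per_cpu
    print(total)                                    # 162 lanes in a dual-socket server
    print(total - intel_dual_socket)                # 82 more than the Intel system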

Intel

Intel Announces Cascade Lake With Up To 56 Cores and Optane Persistent Memory DIMMs (tomshardware.com) 112

At its Data-Centric Innovation Day, Intel today announced its Cascade Lake line of Xeon Scalable data center processors. From a report: The second-generation lineup of Xeon Scalable processors comes in 53 flavors that span up to 56 cores and 12 memory channels per chip, but as a reminder that Intel is briskly expanding beyond "just" processors, the company also announced the final arrival of its Optane DC Persistent Memory DIMMs along with a range of new data center SSDs, Ethernet controllers, 10nm Agilex FPGAs, and Xeon D processors. This broad spectrum of products leverages Intel's overwhelming presence in the data center -- it currently occupies ~95% of the world's server sockets -- as a springboard to chew into other markets, including its new assault on the memory space with the Optane DC Persistent Memory DIMMs. The long-awaited DIMMs open a new market for Intel and have the potential to disrupt the entire memory hierarchy, but also serve as a potentially key component that can help the company fend off AMD's coming 7nm EPYC Rome processors.
Graphics

Crytek Shows 4K 30 FPS Ray Tracing On Non-RTX AMD and NVIDIA GPUs (techspot.com) 140

dryriver writes: Crytek has published a video showing an ordinary AMD Vega 56 GPU -- which has no ray tracing-specific circuitry and only costs around $450 -- real-time ray tracing a complex 3D city environment at 4K 30 FPS. Crytek says that the technology demo runs fine on most normal NVIDIA and AMD gaming GPUs. As if this weren't impressive enough, the software real-time ray tracing technology is still in development and not even final. The frame rates achieved may thus climb further, raising the question of precisely what the benefits of owning a super-expensive NVIDIA RTX 20xx series GPU are. Nvidia has claimed over and over again that without its amazing new RTX cores and AI denoiser, GPUs will choke on real-time ray tracing tasks in games. Crytek appears to have proven already that with some intelligently written code, bog-standard GPU cores can handle real-time ray tracing just fine -- no RTX cores, AI denoiser, or anything else NVIDIA touts as necessary.
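Crytek hasn't published its implementation details, but the point that ray tracing needs no special-purpose hardware is easy to illustrate: the core operation in a triangle-based ray tracer is an intersection test that any shader core (or a CPU) can execute with ordinary arithmetic. A generic, illustrative CPU-side example of the classic Moller-Trumbore ray/triangle test in Python (not Crytek's code):

    def ray_triangle(orig, direction, v0, v1, v2, eps=1e-8):
        # Moller-Trumbore intersection: returns distance t along the ray, or None.
        sub   = lambda a, b: (a[0]-b[0], a[1]-b[1], a[2]-b[2])
        dot   = lambda a, b: a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
        cross = lambda a, b: (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

        e1, e2 = sub(v1, v0), sub(v2, v0)
        h = cross(direction, e2)
        a = dot(e1, h)
        if -eps < a < eps:
            return None                      # ray is parallel to the triangle
        f = 1.0 / a
        s = sub(orig, v0)
        u = f * dot(s, h)
        if u < 0.0 or u > 1.0:
            return None
        q = cross(s, e1)
        v = f * dot(direction, q)
        if v < 0.0 or u + v > 1.0:
            return None
        t = f * dot(e2, q)
        return t if t > eps else None

    # A ray fired down +z hits this triangle at a distance of 5 units.
    print(ray_triangle((0, 0, 0), (0, 0, 1), (-1, -1, 5), (1, -1, 5), (0, 1, 5)))

The hard part of real-time ray tracing isn't this test but performing billions of them per second against large scenes, which is where acceleration structures, clever sampling, and denoising -- in hardware or in software -- come in.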
iMac

Apple Finally Updates the iMac With Significantly More Powerful CPU and GPU Options (arstechnica.com) 143

Today, Apple will finally begin taking orders for newly refreshed 21- and 27-inch iMacs. The new versions don't change the basic design or add major new features, but they offer substantially faster configuration options for the CPU and GPU. From a report: The 21.5-inch iMac now has a 6-core, eighth-generation Intel CPU option -- up from a maximum of four cores before. The 27-inch now has six cores as the standard configuration, with an optional upgrade to a 3.6GHz, 9th-gen, 8-core Intel Core i9 CPU that Apple claims will double performance over the previous 27-inch iMac. The base 27-inch model has a 3GHz 6-core Intel Core i5 CPU, with intermediate configurations at 3.1GHz and 3.7GHz (both Core i5). The big news is arguably that both sizes now offer high-end, workstation-class Vega graphics options for the first time. Apple added a similar upgrade option to the 15-inch MacBook Pro late last year. In this case, the 21.5-inch iMac has an option for the 20-compute-unit version of Vega with 4GB of HBM2 video memory. That's the same as the top-end 15-inch MacBook Pro option.

The 27-inch iMac can now be configured with the Radeon Pro Vega 48 with 8GB of HBM2. For reference, the much pricier iMac Pro has Vega 56 and Vega 64 options. Apple claims the Vega 48 will net a 50-percent performance improvement over the Radeon Pro 580, the previous top configuration. Speaking of the previous top configuration, the non-Vega GPU options are the same as what was available yesterday. The only difference is that they now have an "X" affixed to the numbers in their names, per AMD branding conventions -- i.e., Radeon Pro 580X instead of 580. RAM options are the same in terms of volume (up to 32GB for the 21.5-inch and 64GB for the 27-inch), but the DDR4 RAM is slightly faster now, at 2666MHz.

Graphics

NVIDIA Launches New $219 Turing-Powered GeForce GTX 1660 (hothardware.com) 101

MojoKid writes: NVIDIA took the wraps off yet another lower-cost Turing-based graphics card today, dubbed the GeForce GTX 1660. For a $219 MSRP, the card offers a cut-down NVIDIA TU116 GPU comprising 1408 CUDA cores with a 1785MHz boost clock, and 6GB of GDDR5 RAM with 192.1GB/s of bandwidth. Generally speaking, the new GeForce GTX 1660 is 15% to 30% faster than NVIDIA's previous-generation GeForce GTX 1060, but it doesn't support the ray tracing and DLSS features found on NVIDIA's higher-end Turing cards. Performance-wise, the GeForce GTX 1660 is generally faster than an AMD Radeon RX 590. Boards from various OEM partners should be in the channel for purchase this week.
