Graphics

NVIDIA Launches New $219 Turing-Powered GeForce GTX 1660 (hothardware.com) 101

MojoKid writes: NVIDIA took the wraps off yet another lower-cost Turing-based graphics card today, dubbed the GeForce GTX 1660. For a $219 MSRP, the card offers a cut-down NVIDIA TU116 GPU comprising 1,408 CUDA cores with a 1785MHz boost clock and 6GB of GDDR5 RAM delivering 192.1GB/s of bandwidth. The new GeForce GTX 1660 is generally 15% to 30% faster than NVIDIA's previous-generation GeForce GTX 1060, but it lacks the ray tracing and DLSS features that NVIDIA's RTX-series Turing cards support. It is also generally faster overall than AMD's Radeon RX 590. Boards from various OEM partners should be in the channel for purchase this week.
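
As a sanity check on that figure, graphics memory bandwidth follows directly from bus width and effective data rate. A minimal sketch in Python, assuming a 192-bit bus and roughly an 8 Gbps effective memory speed (typical for this class of card; the exact rate is whatever NVIDIA's spec sheet lists):

    # Bandwidth = (bus width in bits / 8 bits per byte) * effective data rate.
    # Assumed figures: 192-bit bus, ~8 Gbps effective rate per pin -- typical for
    # this card class, but not confirmed in the summary above.
    bus_width_bits = 192
    data_rate_gbps = 8.0

    bandwidth_gb_per_s = bus_width_bits / 8 * data_rate_gbps
    print(f"{bandwidth_gb_per_s:.0f} GB/s")  # 192 GB/s, in line with the quoted ~192.1 GB/s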
Intel

Intel CPU Shortages To Worsen in Q2 2019: Research (digitimes.com) 97

Shortages of Intel's CPUs are expected to worsen in the second quarter compared to the first as demand for Chromebooks, which are mostly equipped with Intel's entry-level processors, enters its high period, according to Digitimes Research. From the report: Digitimes Research expects Intel CPUs' supply gap to shrink to 2-3% in the first quarter, with Core i3 overtaking Core i5 as the series hit hardest by shortages. The shortages started in August 2018, with major brands including Hewlett-Packard (HP), Dell and Lenovo all experiencing supply gaps of over 5% at their worst point. Although most market watchers originally believed that the shortages would gradually ease after vendors completed their inventory preparations for the year-end holidays, the supply gap in the fourth quarter of 2018 stayed at the same level as in the third quarter because HP launched a second wave of CPU inventory buildup during the last quarter of the year, prompting other vendors to follow suit. Taiwan-based vendors were underprepared and saw their supply gaps expand from single-digit percentages to over 10% in the fourth quarter. As a result, the notebook market continued to suffer a 4-5% supply gap in the fourth quarter of 2018.
First Person Shooters (Games)

Study Shows Gamers At High FPS Have Better Kill-To-Death Ratios In Battle Royale Games (hothardware.com) 149

MojoKid writes: Gaming enthusiasts and pro gamers have long believed that playing on a high-refresh-rate display at high frame rates offers a competitive edge in fast-action games like PUBG, Fortnite and Apex Legends. The premise is that the faster the display can update the action, the more milliseconds you save when tracking targets and reacting. That sounds logical, but there has never been much tabulated data to back the theory up. NVIDIA, however, just took it upon itself to use its GeForce Experience tool to compile anonymized data on gamers by hours played per week, panel refresh rate and graphics card type. Though the data obviously covers only NVIDIA GPU users, the numbers speak for themselves.

Generally speaking, the more powerful the GPU -- and thus the higher the frame rate -- and the higher the panel refresh rate, the higher the kill-to-death (K/D) ratio of the gamers profiled. In fact, it didn't much matter how many hours per week were played: casual gamers and heavy-duty daily players alike saw anywhere from roughly a 50 to 150 percent increase in K/D ratio. It should also be underscored that the specific GPU brand isn't what matters; gamers with AMD graphics cards that can push high frame rates at 1080p or similar resolutions should see similar K/D gains. The sweet spot appears to be around 144Hz/144FPS -- the closer your system can get to that, and the higher the frame rate and refresh rate beyond it, the better.
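
The refresh-rate argument comes down to simple arithmetic: every extra hertz shaves milliseconds off the gap between on-screen updates. A quick illustrative calculation (not from NVIDIA's data set):

    # Time between display refreshes for common refresh rates.
    for hz in (60, 120, 144, 240):
        frame_time_ms = 1000.0 / hz
        print(f"{hz:3d} Hz -> {frame_time_ms:5.2f} ms per refresh")

    # 60 Hz updates every ~16.7 ms while 144 Hz updates every ~6.9 ms, so a new
    # frame can reach the screen roughly 10 ms sooner -- the margin the K/D
    # numbers above attribute value to.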

Open Source

Linux 5.0 Released (phoronix.com) 107

An anonymous reader writes: Linus Torvalds has released Linux 5.0, kicking off the kernel's 28th year of development. Linux 5.0 features include AMD FreeSync support, open-source NVIDIA Turing GPU support, Intel Icelake graphics, Intel VT-d scalable mode, Spectre Variant Two mitigations for NXP PowerPC processors, and countless other additions. eWeek adds: Among the new features that have landed in Linux 5.0 is support for the Adiantum encryption system, developed by Google for low-power devices. Google's Android mobile operating system and ChromeOS desktop operating system both rely on the Linux kernel. "Storage encryption protects your data if your phone falls into someone else's hands," Paul Crowley and Eric Biggers of Google's Android Security and Privacy Team wrote in a blog post. "Adiantum is an innovation in cryptography designed to make storage encryption more efficient for devices without cryptographic acceleration, to ensure that all devices can be encrypted." Memory management in Linux also gets a boost in the 5.0 kernel, with a series of improvements designed to help prevent memory fragmentation, which can reduce performance.
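
For the curious, a rough way to check whether a running kernel exposes Adiantum from userspace is to look at the kernel version and at /proc/crypto. A minimal, hedged sketch in Python -- note that template-based algorithms such as adiantum(xchacha12,aes) may only appear in /proc/crypto after something (fscrypt, dm-crypt) has instantiated them, so an empty result isn't conclusive:

    import platform

    # Linux 5.0+ ships the Adiantum template; some older kernels carry backports.
    print("kernel:", platform.release())

    # /proc/crypto lists the crypto algorithms currently registered with the kernel.
    try:
        with open("/proc/crypto") as f:
            names = {line.split(":", 1)[1].strip()
                     for line in f if line.startswith("name")}
        adiantum = sorted(n for n in names if "adiantum" in n)
        print("adiantum entries:", adiantum or "none registered yet")
    except FileNotFoundError:
        print("/proc/crypto not available (not a Linux system?)")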
Graphics

AMD Radeon VII Graphics Card Launched, Benchmarks Versus NVIDIA GeForce RTX (hothardware.com) 73

MojoKid writes: AMD officially launched its new Radeon VII flagship graphics card today, based on the company's 7nm second-generation Vega architecture. In addition to core GPU optimizations, Radeon VII provides 2X the graphics memory at 16GB and 2.1X the memory bandwidth at a full 1TB/s, compared to AMD's previous-generation Radeon RX Vega 64. The move to 7nm allowed AMD to shrink the Vega 20 GPU die down to 331 square millimeters. That shrink, and the resulting die-area savings, is what allowed AMD to add two additional stacks of HBM2 memory and increase the high-bandwidth cache (frame buffer) capacity to 16GB. The GPU on board the Radeon VII has 60 CUs and a total of 3,840 active stream processors, with a board power TDP of 300 watts. As you might expect, it's a beast in the benchmarks, able to pull ahead of NVIDIA's GeForce RTX 2080 in spots but ultimately landing somewhere between an RTX 2070 and an RTX 2080 overall. AMD Radeon VII cards will be available in a matter of days at an MSRP of $699, with custom boards from third-party partners showing up shortly as well.
AMD

GPU Accelerated Realtime Skin Smoothing Algorithms Make Actors Look Perfect 138

dryriver writes: A recent Guardian article about the need for actors and celebrities -- male and female -- to look their best in a high-definition media world ended on the note that several low-profile Los Angeles VFX outfits specialize in "beautifying actors" in movies, TV shows and video ads. They reportedly use software named "Beauty Box," resulting in films and other motion content that are -- for lack of a better term -- "motion Photoshopped." After some investigating, it turns out that "Beauty Box" is a sophisticated CUDA- and OpenGL-accelerated skin-smoothing plugin for many popular video production applications. It not only smooths even terribly rough or wrinkly-looking skin effectively, but also suppresses skin spots, blemishes, scars, acne and freckles in realtime, or near-realtime, using the video processing capabilities of modern GPUs.

The product's short demo reel is here with a few examples. Everybody knows about photoshopped celebrities in an Instagram world, and in the print magazine world that came long before it, but far fewer people seem to realize that the near-perfect actor, celebrity, or model skin you see in high-budget productions is often the result of "digital makeup" -- if you were to stand next to the person being filmed in real life, you'd see skin far more ordinary or aged than the near-perfection visible on the big or small screen. The fact that the algorithms are realtime-capable also means they may already be in use for live television broadcasts without anyone noticing, particularly in HD and 4K broadcasts. The question, as with photoshopped magazine fashion models 25 years ago, is whether the technology creates an unrealistic expectation that one must have "perfectly smooth-looking" skin to look attractive, particularly among people who are past their teenage years.
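
Beauty Box's actual algorithm is proprietary, but the general "digital makeup" idea -- edge-preserving smoothing blended back into the frame -- can be sketched in a few lines with OpenCV. A hypothetical, minimal example (the input file name and blend strength are made up; a production tool would add skin-tone masking, temporal consistency and GPU acceleration):

    import cv2

    # Hypothetical input frame; a real pipeline would process a video stream.
    frame = cv2.imread("frame.png")

    # Edge-preserving smoothing: the bilateral filter averages nearby pixels that
    # are also similar in color, so pores and blemishes blur away while strong
    # edges (eyes, lips, hairline) stay sharp.
    smoothed = cv2.bilateralFilter(frame, d=9, sigmaColor=75, sigmaSpace=75)

    # Blend the smoothed result back over the original so some natural skin
    # texture survives; the strength here is an arbitrary illustrative value.
    strength = 0.6
    output = cv2.addWeighted(smoothed, strength, frame, 1.0 - strength, 0)

    cv2.imwrite("frame_smoothed.png", output)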
AMD

Nvidia CEO Trashes AMD's New GPU: 'The Performance Is Lousy' (gizmodo.com) 115

An anonymous reader shares a report: Yesterday, AMD announced a new graphics card, the $700 Radeon VII, based on its second-generation Vega architecture. The GPU is the first one available to consumers based on the 7nm process. Smaller processes tend to be faster and more energy efficient, which means it could theoretically be faster than GPUs built on larger processes, like the first-generation Vega GPU (14nm) or Nvidia's RTX 20-series (12nm). I say "could," because so far Nvidia's RTX 20-series has been speedy in our benchmarks. The entire line, from the $1,000+ 2080 Ti down to the $350 2060 announced Sunday, supports ray tracing. This complex technology allows you to trace a point of light from a source to a surface in a digital environment. What it means in practice is video games with hyperrealistic reflections and shadows.
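
At its core, ray tracing is a geometric test: fire a ray from the camera (or a light) and compute what it hits. A minimal Python sketch of the simplest such test, ray-sphere intersection -- purely illustrative; RTX hardware accelerates this kind of test, plus bounding-volume traversal, billions of times per second:

    import math

    def ray_sphere_hit(origin, direction, center, radius):
        # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t >= 0.
        ox, oy, oz = (origin[i] - center[i] for i in range(3))
        dx, dy, dz = direction
        a = dx * dx + dy * dy + dz * dz
        b = 2.0 * (ox * dx + oy * dy + oz * dz)
        c = ox * ox + oy * oy + oz * oz - radius * radius
        disc = b * b - 4 * a * c
        if disc < 0:
            return None                        # the ray misses the sphere
        t = (-b - math.sqrt(disc)) / (2 * a)   # nearest intersection distance
        return t if t >= 0 else None

    # A ray from the origin straight down the z axis toward a unit sphere 5 units away.
    print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # ~4.0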

It's impressive technology, and Nvidia has touted it as the primary reason to upgrade from previous-generation GPUs. AMD's GPUs, notably, do not support it. At a round table Gizmodo attended with Nvidia CEO Jensen Huang, he jokingly dismissed AMD's Tuesday announcement, claiming the announcement itself was "underwhelming" and that his company's 2080 would "crush" the Radeon VII in benchmarks. "The performance is lousy," he said of the rival product. When asked to comment on these slights, AMD CEO Lisa Su told a group of reporters, "I would probably suggest he hasn't seen it." When pressed about his comments, especially his touting of ray tracing, she said, "I'm not gonna get into it tit for tat, that's just not my style."

AMD

AMD Announces Radeon VII, Its Next-Generation $699 Graphics Card (theverge.com) 145

An anonymous reader shares a report: AMD has been lagging behind Nvidia for years in the high-end gaming graphics card race, to the point that it's primarily been pushing bang-for-the-buck cards like the RX 580 instead. But at CES, the company says it has a GPU that's competitive with Nvidia's RTX 2080. It's called the Radeon VII ("Seven"), and it uses the company's first 7nm graphics chip, which we'd seen teased previously. It'll ship on February 7th for $699, according to the company. That's the same price as a standard Nvidia RTX 2080. [...] AMD says the second-gen Vega architecture offers 25 percent more performance at the same power as previous Vega graphics, and the company showed it running Devil May Cry 5 here at 4K resolution, ultra settings, and frame rates "way above 60 fps." AMD says the card has a terabyte per second of memory bandwidth.
Graphics

NVIDIA Launches $349 GeForce RTX 2060, Will Support Other Adaptive Sync Monitors (hothardware.com) 145

MojoKid writes from a report via Hot Hardware: NVIDIA launched a new, more reasonably priced GeForce RTX card today, dubbed the GeForce RTX 2060. The new midrange graphics card will list for $349 and pack all the same features as NVIDIA's higher-end GeForce RTX 2080 and 2070 series cards. The card is also somewhat shorter than other RTX 20-series cards at only 9.5" (including the case bracket), and its GPU has a few functional blocks disabled. Although it's packing a TU106 like the 2070, six of its Streaming Multiprocessors (SMs) have been disabled, along with a proportional share of its Tensor and RT cores. All told, the RTX 2060 has 1,920 active CUDA cores, 240 Tensor cores, and 30 RT cores. Although the GeForce RTX 2060 seems like the next-gen cousin to the 1060, the RTX 2060 is significantly more powerful and more in line with the GeForce GTX 1070 Ti and GTX 1080 in terms of raw benchmark performance. It can also play ray-tracing-enabled games like Battlefield V at decent frame rates at 1080p with high image quality and maximum ray tracing enabled. NVIDIA has also apparently decided to support open, standards-based adaptive refresh rate monitor technology and will soon begin supporting even AMD FreeSync monitors in a future driver update.
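
Those core counts follow directly from the SM count, assuming Turing's usual per-SM resources (64 FP32 CUDA cores, 8 Tensor cores, and 1 RT core per SM). A quick arithmetic check:

    # Turing TU106 as used in the RTX 2070 has 36 SMs; the RTX 2060 disables 6.
    FULL_TU106_SMS = 36
    DISABLED_SMS = 6
    active_sms = FULL_TU106_SMS - DISABLED_SMS           # 30

    cuda_cores   = active_sms * 64   # 64 FP32 cores per Turing SM -> 1,920
    tensor_cores = active_sms * 8    # 8 Tensor cores per SM       -> 240
    rt_cores     = active_sms * 1    # 1 RT core per SM            -> 30

    print(cuda_cores, tensor_cores, rt_cores)  # 1920 240 30 -- matches the article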
AMD

AMD's New 12nm Ryzen Laptop Chips Look To Put the Pressure on Intel (theverge.com) 105

AMD has been pushing its Ryzen lineup of processors for a few years now, looking to put pressure on Intel's seemingly unbeatable hold on the chip landscape. From a report: At CES 2019, AMD unveiled its second generation of Ryzen laptop chips, which look to jump ahead of Intel's 14nm roadblock and offer some of the first 12nm laptop processors on the market. To that end, AMD is launching a new lineup of Ryzen 3, Ryzen 5, and Ryzen 7 chips across both the 15W U-series and 35W H-series, almost all of which are built on the company's new 12nm Zen+ architecture. For the more powerful H-series, there is a pair of new chips: the Ryzen 7 3750H, offering four cores / eight threads and a 2.3 GHz base clock (boosting to 4.0 GHz), and the Ryzen 5 3550H, also a four-core / eight-thread processor but with a 2.1 GHz base clock (boosting to 3.7 GHz) and only eight GPU cores to the Ryzen 7 3750H's ten. Further reading: AMD Gets Serious About Chromebooks at CES 2019.
AMD

AMD Gets Serious About Chromebooks at CES 2019 (cnet.com) 28

An anonymous reader shares a report: AMD's early CES 2019 announcements brought some updates on its laptop processors, including a targeted attempt to capture part of the growing cheap-Chromebook market, slightly faster mobile Ryzens, and a promise to keep everyone's AMD laptop drivers up to date with the latest zero-day game-release optimizations. Sadly, the news didn't include the much-anticipated, high-performance 7-nanometer Navi GPUs or the rumored Ryzen 3000-series desktop CPUs -- hopefully the company is just holding back that info for its CEO's keynote on Wednesday. For the first time, AMD has gained a little traction in Chromebooks, with partner announcements at CES such as the HP Chromebook 14 AMD and the Acer Chromebook 315. The announcements come in conjunction with the new A4-9120C and its sibling, the A6-9220C, which have slower CPU and GPU clock speeds than the 15-watt full-fat versions. That allows AMD to match the 6-watt target power draw of Intel's competing Celeron and Pentium models. AMD claims somewhat better performance than those Intel parts on both Chrome OS and Android apps, which is plausible given that its clock speeds are still faster despite the drop. Further reading: AMD at CES 2019: Ryzen Mobile 3000-Series Launched, 2nd Gen Mobile at 15W and 35W, and Chromebooks.
Operating Systems

Linux 4.20 Released in Time for Christmas (betanews.com) 47

Linus Torvalds has announced the general availability of v4.20 of the Linux kernel. In a post to the Linux Kernel Mailing List, Torvalds said that there was no point in delaying the release of the latest stable version of the kernel just because so many people are taking a break for the holiday season. From a report: He says that while there are no known issues with the release, the shortlog is a little longer than he would have liked. However, "nothing screams 'oh, that's scary'", he insists. The most notable features and changes in the new version center on new hardware support: bringing up the graphics for AMD Picasso and Raven 2 APUs, continued work on bringing up Vega 20, continued work by Intel on its Icelake Gen 11 graphics support, support for the Hygon Dhyana CPUs out of China based upon AMD Zen, C-SKY 32-bit CPU support, Qualcomm Snapdragon 835 SoC enablement, Intel 2.5G Ethernet controller support for "Foxville", Creative Sound Blaster ZxR and AE-5 sound card support, and a lot of smaller additions.

Besides new hardware support when it comes to graphics processors, in the DRM driver space there is also VCN JPEG acceleration for Raven Ridge, GPUVM performance work resulting in some nice Vulkan gaming boosts, full PPGTT support in the Intel DRM driver for Haswell/Ivy Bridge/Valley View, and HDMI 2.0 support for the NVIDIA/Nouveau driver. On the CPU front there are early signs of AMD Zen 2 bring-up, nested virtualization now enabled by default for AMD/Intel CPUs, faster context switching for IBM POWER9, and various x86_64 optimizations. Fortunately, the STIBP work for cross-hyperthread Spectre V2 mitigation was smoothed out over the release candidates, so the performance there is all good now.

Btrfs performance improvements, new F2FS features, faster FUSE performance, and MDRAID improvements for RAID10 round out the file-system/storage work. One of the technical highlights of Linux 4.20 that will be built upon moving forward is PCIe peer-to-peer memory support, enabling device-to-device memory copies over PCIe for use cases like data going directly from NICs to SSD storage or between multiple GPUs.

Security

Researchers Discover SplitSpectre, a New Spectre-like CPU Attack (zdnet.com) 48

An anonymous reader writes from a report via ZDNet: Three academics from Northeastern University and three researchers from IBM Research have discovered a new variation of the Spectre CPU vulnerability that can be exploited via browser-based code. The vulnerability, which the researchers codenamed SplitSpectre, is a variation of the original Spectre v1 vulnerability, which was discovered last year and became public in January 2018. The difference with SplitSpectre is not in which parts of a CPU's microarchitecture the flaw targets, but in how the attack is carried out. Researchers say a SplitSpectre attack is both faster and easier to execute, improving an attacker's ability to recover data from targeted CPUs. The research team says it was able to successfully carry out a SplitSpectre attack against Intel Haswell and Skylake CPUs, and AMD Ryzen processors, via SpiderMonkey 52.7.4, Firefox's JavaScript engine. The good news is that existing Spectre mitigations would thwart SplitSpectre attacks.
Microsoft

Microsoft's Surface Roadmap Reportedly Includes Ambient Computing and a Modular All-in-One PC (venturebeat.com) 41

Journalist Brad Sams is releasing a book chronicling Microsoft's Surface brand: Beneath a Surface. VentureBeat writes: While you'll want to read all 26 chapters to get the juicy details, the last one includes Microsoft's hardware roadmap for 2019, and even part of 2020 -- spanning various Surface products and a little Xbox. Here's a quick rundown of Microsoft's current Surface lineup plans:

Spring 2019: A new type of Surface-branded ambient computing device designed to address "some of the common frustrations of using a smartphone," but that isn't itself a smartphone.
Q4 2019: Surface Pro refresh with USB-C (finally), smaller bezels, rounded corners, and new color options.
Q4 2019: AMD-based Surface Laptop -- Microsoft is exploring using the Picasso architecture.
Late 2019: Microsoft's foldable tablet Andromeda could be larger than earlier small form factor prototypes for a pocketable device with dual screens and LTE connectivity.
Q1 2020: Surface Book update that might include new hinge designs (high-end performance parts may delay availability).
2020: A Surface monitor, and the modular design debuted for Surface Hub 2 could make its way to Surface Studio. The idea is to bring simple upgrades to all-in-one PCs, rather than having to replace the whole computer.
GeekWire adds: A pair of new lower-cost Xbox One S devices could come next year. Sams reports that one of the models may be all-digital, without a disc drive.
Businesses

TSMC, a Company Few Americans Know, is About To Dethrone Intel (bloomberg.com) 195

For more than 30 years, Intel has dominated chipmaking, producing the most important component in the bulk of the world's computers. That run is now under threat from a company many Americans have never heard of. From a report: Taiwan Semiconductor Manufacturing Co. was created in 1987 to churn out chips for companies that lacked the money to build their own facilities. The approach was famously dismissed at the time by Advanced Micro Devices founder Jerry Sanders. "Real men have fabs," he quipped at a conference, using industry lingo for factories. These days, ridicule has given way to envy as TSMC plants have risen to challenge Intel at the pinnacle of the $400 billion industry. AMD recently chose TSMC to make its most advanced processors, having spun off its own struggling factories years before.

TSMC's threat to Intel reflects a sea change in chipmaking that's seen one company after another hire TSMC to manufacture the chips they design. Hsinchu-based TSMC has scores of customers, including tech giants Apple and Qualcomm, second-tier players like AMD, and minnows such as Ampere Computing. The explosion of components built this way has given TSMC the technical know-how needed to churn out the smallest, most efficient and powerful chips in the highest volumes.

"It's a once-in-a-50-year situation," said Renee James, the former No. 2 at Intel who heads startup Ampere. Her company is less than two years old and yet it's going after Intel's dominant server chip business. That Ampere thinks it can compete is a testament to stumbles by Intel, and TSMC's ability to benefit from those mistakes. It's been a decade since Intel faced major competition and its 90 percent revenue share in computer processing will again deliver record results this year. But some on Wall Street are concerned, and rivals are emboldened, because TSMC has a real chance to replace Intel as the best chipmaker in the business. Last year, the Taiwanese company amassed a bigger market value than its U.S. rival for the first time.

Cloud

Amazon Web Services Introduces its Own Custom-Designed ARM Server Processor, Promises 45 Percent Lower Costs For Some Workloads (geekwire.com) 65

After years of waiting for someone to design an ARM server processor that could work at scale on the cloud, Amazon Web Services just went ahead and designed its own. From a report: Vice president of infrastructure Peter DeSantis introduced the AWS Graviton Processor Monday night, adding a third chip option for cloud customers alongside instances that use processors from Intel and AMD. The company did not provide a lot of details about the processor itself, but DeSantis said that it was designed for scale-out workloads that benefit from a lot of servers chipping away at a problem. The new instances will be known as EC2 A1, and they can run applications written for Amazon Linux, Red Hat Enterprise Linux, and Ubuntu. They are generally available in four regions: US East (Northern Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland). Intel dominates the market for server processors, both in the cloud and in the on-premises server market. AMD has tried to challenge that lead over the years with little success, although its new Epyc processors have been well-received by server buyers and cloud companies like AWS. John Gruber of DaringFireball, where we first spotted this story, adds: Makes you wonder what the hell is going on at Intel and AMD -- first they missed out on mobile, now they're missing out on the cloud's move to power-efficient ARM chips.
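
For developers who want to try the new instances, spinning one up looks like any other EC2 launch; only the instance type (and an arm64 AMI) changes. A hypothetical boto3 sketch -- the AMI ID is a placeholder, and the region and instance size are arbitrary examples:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # one of the four launch regions

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder: must be an arm64 image (e.g. Amazon Linux 2 for arm64)
        InstanceType="a1.large",          # Graviton-based EC2 A1 family
        MinCount=1,
        MaxCount=1,
    )
    print("launched:", response["Instances"][0]["InstanceId"])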
Security

Researchers Discover Seven New Meltdown and Spectre Attacks (zdnet.com) 98

A team of nine academics today revealed seven new CPU attacks. The seven impact AMD, ARM, and Intel CPUs to various degrees. From a report: Two of the seven new attacks are variations of the Meltdown attack, while the other five are variations on the original Spectre attack -- two well-known attacks that were revealed at the start of the year and found to impact CPU models going back to 1995. Researchers say they discovered the seven new CPU attacks while performing "a sound and extensible systematization of transient execution attacks" -- a catch-all term the research team used to describe attacks on the various internal mechanisms a CPU uses to process data, such as the speculative execution process, the CPU's internal caches, and other internal execution stages. The research team says it has successfully demonstrated all seven attacks with proof-of-concept code. Experiments to confirm six other Meltdown attacks did not succeed, according to a graph published by the researchers. Update: In a statement to Slashdot, an Intel spokesperson said, "the vulnerabilities documented in this paper can be fully addressed by applying existing mitigation techniques for Spectre and Meltdown, including those previously documented here, and elsewhere by other chipmakers. Protecting customers continues to be a critical priority for us and we are thankful to the teams at Graz University of Technology, imec-DistriNet, KU Leuven, & the College of William and Mary for their ongoing research."
Intel

Intel Launches New Core i9-9980XE 18-Core CPU With 4.5GHz Boost Clock (hothardware.com) 192

MojoKid writes: When Intel officially announced its 9th Generation Core processors, it used the opportunity to also unveil a refreshed line-up of 9th Gen-branded Core-X series processors. Unlike other 9th Gen Core i products, however, which leverage an updated Coffee Lake microarchitecture, new processors in Intel's Core-X series remain based on Skylake-X architecture but employ notable tweaks in manufacturing and packaging of the chips, specifically with a solder TIM (Thermal Interface Material) under their heat spreaders for better cooling and more overclocking headroom. The Core i9-9980XE is the new top-end CPU that supplants the Core i9-7980XE at the top of Intel's stack. The chip features 18 Skylake-X cores (36 threads) with a base clock of 3.0GHz that's 400MHz higher than the previous gen. The Core i9-9980XE has max Turbo Boost 2.0 and Turbo Boost Max 3.0 frequencies of 4.4GHz and 4.5GHz, which are 200MHz and 100MHz higher than Intel's previous gen Core i9-7980XE, respectively.

In the benchmarks, the new Core i9-9980XE is easily the fastest many-core desktop processor Intel has released to date, outpacing all previous-gen Intel processors and AMD Threadripper X-series processors in heavily threaded applications. However, the 18-core Core i9-9980XE typically trailed AMD's 24- and 32-core Threadripper WX-series processors. Intel's Core i9-9980XE also offered relatively strong single-threaded performance, with an IPC advantage over any current AMD Ryzen processor.

AMD

To Keep Pace With Moore's Law, Chipmakers Turn to 'Chiplets' (wired.com) 130

As chipmakers struggle to keep up with Moore's law, they are increasingly looking for alternatives to boost computers' performance. "We're seeing Moore's law slowing," says Mark Papermaster, chief technology officer at chip designer AMD. "You're still getting more density but it costs more and takes longer. It's a fundamental change." Wired has a feature story which looks at those alternatives and the progress chipmakers have been able to make with them so far. From a report: AMD's Papermaster is part of an industry-wide effort around a new doctrine of chip design that Intel, AMD, and the Pentagon all say can help keep computers improving at the pace Moore's law has conditioned society to expect. The new approach comes with a snappy name: chiplets. You can think of them as something like high-tech Lego blocks. Instead of carving new processors from silicon as single chips, semiconductor companies assemble them from multiple smaller pieces of silicon -- known as chiplets. "I think the whole industry is going to be moving in this direction," Papermaster says. Ramune Nagisetty, a senior principal engineer at Intel, agrees. She calls it "an evolution of Moore's law."

Chip chiefs say chiplets will enable their silicon architects to ship more powerful processors more quickly. One reason is that it's quicker to mix and match modular pieces linked by short data connections than to painstakingly graft and redesign them into a single new chip. That makes it easier to serve customer demand, for example for chips customized to machine learning, says Nagisetty. New artificial-intelligence-powered services such as Google's Duplex bot that makes phone calls are enabled in part by chips specialized for running AI algorithms.

Chiplets also provide a way to minimize the challenges of building with cutting-edge transistor technology. The latest, greatest, and smallest transistors are also the trickiest and most expensive to design and manufacture with. In processors made up of chiplets, that cutting-edge technology can be reserved for the pieces of a design where the investment will most pay off. Other chiplets can be made using more reliable, established, and cheaper techniques. Smaller pieces of silicon are also inherently less prone to manufacturing defects.
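
The defect point lends itself to a quick back-of-the-envelope calculation. Under the simple Poisson yield model (one of several approximations the industry uses), the chance that a die is defect-free falls off exponentially with its area, which is why several small chiplets can be far cheaper to manufacture than one large monolithic die. A sketch with made-up numbers:

    import math

    def poisson_yield(area_mm2, defects_per_mm2):
        """Probability that a die of the given area contains zero defects."""
        return math.exp(-area_mm2 * defects_per_mm2)

    # Illustrative numbers only -- not any vendor's actual defect density or die sizes.
    DEFECT_DENSITY = 0.002   # defects per mm^2
    BIG_DIE = 600.0          # one large monolithic die, in mm^2
    CHIPLET = 75.0           # one of eight chiplets covering the same total area

    print(f"600 mm^2 monolithic die yield: {poisson_yield(BIG_DIE, DEFECT_DENSITY):.0%}")  # ~30%
    print(f"75 mm^2 chiplet yield:         {poisson_yield(CHIPLET, DEFECT_DENSITY):.0%}")  # ~86%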

AMD

AMD Reveals Zen 2 Processor Architecture in Bid To Stay Ahead of Intel (venturebeat.com) 100

AMD on Monday revealed the Zen 2 architecture for the family of processors it will launch in the coming years, starting in 2019. The move is a follow-up to the competitive Zen designs AMD launched in March 2017, and it promises a two-fold improvement in performance throughput. From a report: AMD hopes the Zen 2 processors will keep it ahead of, or at parity with, Intel, the world's biggest maker of PC processors. The earlier Zen designs enabled chips that could process 52 percent more instructions per clock cycle than the previous generation. Zen has spawned AMD's most competitive chips in a decade, including Ryzen for the desktop, Threadripper (with up to 32 cores) for gamers, Ryzen Mobile for laptops, and Epyc for servers. You can expect to see Zen 2 cores in future models of those chip families. AMD's focus is on making central processing units (CPUs), graphics processing units (GPUs), and accelerated processing units (APUs) that put the two other units together on the same chip.
