Intel

Intel Unveils New Core H-Series Laptop and 11th Gen Desktop Processors At CES 2021 (hothardware.com) 68

MojoKid writes: At its virtual CES 2021 event today, Intel's EVP Gregory Bryant unveiled an array of new processors and technologies targeting virtually every market, from affordable Chromebooks to enthusiast-class gaming laptops and high-end desktops. Intel's 11th Gen Core vPro platform was announced, featuring new Intel Hardware Shield AI-enabled detection of ransomware and crypto-mining malware threats. In addition, the Intel Rocket Lake-S based Core i9-11900K 8-core CPU was revealed, offering up to a 19% improvement in IPC and the ability to outpace AMD's Ryzen 9 5900X 12-core CPU in some workloads, such as gaming. Also, a new high-end hybrid processor, code-named Alder Lake, was previewed. Alder Lake packs both high-performance cores and high-efficiency cores on a single product, for what Intel calls its "most power-scalable system-on-chip" ever. Alder Lake will also be manufactured using an enhanced version of 10nm SuperFin technology with improved power and thermal characteristics, and targets both desktop and mobile form factors when it arrives later this year.

Finally, Intel launched its new 11th Gen Core H-Series Tiger Lake H35 parts that will appear in high-performance laptops as thin as 16mm. At the top of the 11th Gen H-Series stack is the Intel Core i7-11375H Special Edition, a 35W quad-core processor (8 threads) that turbos up to 5GHz, supports PCI Express 4.0, and is targeted at ultraportable gaming notebooks. Intel is claiming single-threaded performance improvements in the neighborhood of 15% over previous-gen architectures and a greater than 40% improvement in multi-threaded workloads. Intel's Bryant also announced an 8-core mobile processor variant leveraging the same architecture as the 11th Gen H-Series that is slated to start shipping a bit later this quarter, running at 5GHz on multiple cores with 20 lanes of PCIe Gen 4 connectivity.

Hardware

Graphics Cards Are About To Get a Lot More Expensive, Asus Warns (pcworld.com) 159

Ever since Nvidia's GeForce RTX 30-series and AMD's Radeon RX 6000-series graphics cards launched last fall, overwhelming demand and tight supply, exacerbated by a cryptocurrency boom, have caused prices for all graphics cards to go nuts. Brace yourself: It looks like it's about to get even worse. From a report: In the Asus DIY PC Facebook group, Asus technical marketing manager Juan Jose Guerrero III warned that prices for the company's components will increase in the new year. "We have an announcement in regards to MSRP price changes that are effective in early 2021 for our award-winning series of graphic cards and motherboards," Guerrero wrote, warning that "additional models" may also receive price increases. "Our new MSRP reflects increases in cost for components, operating costs, and logistical activities plus a continuation of import tariffs. We worked closely with our supply and logistic partners to minimize price increases. ASUS greatly appreciates your continued business and support as we navigate through this time of unprecedented market change."
Intel

Linus Torvalds Rails At Intel For 'Killing' the ECC Industry (theregister.com) 218

An anonymous reader quotes a report from The Register: Linux creator Linus Torvalds has accused Intel of preventing widespread use of error-correcting memory and being "instrumental in killing the whole ECC industry with its horribly bad market segmentation." ECC stands for error-correcting code. ECC memory uses additional parity bits to verify that the data read from memory is the same as the data that was written. Without this check, memory is vulnerable to occasional corruption where a bit is flipped spontaneously, for example, by background radiation. Memory can also be attacked using a technique called Rowhammer, where rapid repeated reads of the same memory locations can cause adjacent locations to change their state. ECC memory solves these problems and has been available for over 50 years, yet most personal computers do not use it. Cost is a factor, but what riles Torvalds is that Intel has made ECC support a feature of its Xeon range, aimed at servers and high-end workstations, and does not support it in other ranges such as the Core series.
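To make the mechanism concrete, here is a minimal sketch (in Python, purely illustrative) of how parity bits let a single flipped bit be detected and corrected. It uses a toy Hamming(7,4) code protecting 4 data bits with 3 parity bits; real ECC DIMMs use wider SECDED codes over 64-bit words and do this in the memory controller, but the principle is the same.

# Toy Hamming(7,4) encoder/corrector: 4 data bits, 3 parity bits.
# Illustrates the principle behind ECC memory; real hardware uses wider codes.
def hamming74_encode(d):                  # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

def hamming74_correct(c):                 # c: list of 7 codeword bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]        # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]        # parity check over positions 4,5,6,7
    syndrome = s1 + (s2 << 1) + (s3 << 2) # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1              # correct the single-bit error
    return [c[2], c[4], c[5], c[6]], syndrome

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                              # simulate a spontaneous bit flip
data, pos = hamming74_correct(word)
print(data, "- error corrected at position", pos)   # [1, 0, 1, 1], position 5

Without the parity bits, the flipped bit would simply be returned as valid data; with them, the corruption is caught and repaired, which is exactly the property Torvalds argues consumer memory should have had all along.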

The topic came up in a discussion about AMD's new Zen 3 Ryzen 9 5000 series processors on the Real World Tech forum site. AMD has semi-official ECC support in most of its processors. "I don't really see AMD's unofficial ECC support being a big deal," said an unwary contributor. "ECC absolutely matters," retorted Torvalds. "Intel has been detrimental to the whole industry and to users because of their bad and misguided policies wrt ECC. Seriously. And if you don't believe me, then just look at multiple generations of rowhammer, where each time Intel and memory manufacturers bleated about how it's going to be fixed next time... And yes, that was -- again -- entirely about the misguided and arse-backwards policy of 'consumers don't need ECC', which made the market for ECC memory go away."

The accusation is significant particularly at a time when security issues are high on the agenda. The suggestion is that Intel's marketing decisions have held back adoption of a technology that makes users more secure -- though rowhammer is only one of many potential attack mechanisms -- as well as making PCs more stable. "The arguments against ECC were always complete and utter garbage. Now even the memory manufacturers are starting to do ECC internally because they finally owned up to the fact that they absolutely have to," said Torvalds. Torvalds said that Xeon prices deterred usage. "I used to look at the Xeon CPU's, and I could never really make the math work. The Intel math was basically that you get twice the CPU for five times the price. So for my personal workstations, I ended up using Intel consumer CPU's." Prices, he said, dropped last year "because of Ryzen and Threadripper... but it was a 'too little, much too late' situation." By way of mitigation, he added that "apart from their ECC stance I was perfectly happy with [Intel's] consumer offerings."

AMD

Xbox Series X and S Shortages Have Microsoft Asking AMD for Help (gizmodo.com) 32

Supply issues have hamstrung the rollout of the latest generation of video game consoles. Even now, nearly two months after the Xbox Series X and Xbox Series S launched, Microsoft is still scrambling to meet demand and has reportedly reached out to chipmaker AMD to fast-track production on its end. From a report: AMD manufactures the GPU and CPU for both consoles, so if it's able to push out its chips faster, Microsoft could, in theory, churn out more consoles by extension. As spotted by VGC, Microsoft is "working as hard as we can" to pump out more systems and has even contacted AMD for help, according to Xbox head Phil Spencer in a recent appearance on the Major Nelson Radio podcast hosted by Xbox Live director of programming Larry Hryb. "I get some people [asking], 'why didn't you build more? Why didn't you start earlier? Why didn't you ship them earlier?' I mean, all of those things," Spencer said. "It's really just down to physics and engineering. We're not holding them back: We're building them as fast as we can. We have all the assembly lines going. I was on the phone last week with [CEO and president] Lisa Su at AMD [asking], 'How do we get more? How do we get more?' So it's something that we're constantly working on."
AMD

Speculation Grows As AMD Files Patent for GPU Design (hothardware.com) 39

Long-time Slashdot reader UnknowingFool writes: AMD filed a patent on using chiplets for a GPU, with hints on why it has waited this long to extend its CPU chiplet strategy to GPUs. The latency between chiplets poses more of a performance problem for GPUs, and AMD is attempting to solve it with a new interconnect called a high bandwidth passive crosslink. This new interconnect would allow each GPU chiplet to communicate more effectively with the others and with the CPU.
"With NVIDIA working on its own MCM design with Hopper architecture, it's about time that we left monolithic GPU designs in the past and enable truly exponential performance growth," argues Wccftech.

And Hot Hardware delves into the details, calling it a "hybrid CPU-FPGA design that could be enabled by Xilinx tech." While they often aren't as great as CPUs on their own, FPGAs can do a wonderful job accelerating specific tasks... [A]n FPGA in the hands of a capable engineer can offload a wide variety of tasks from a CPU and speed processes along. Intel has talked a big game about integrating Xeons with FPGAs over the last six years, but it hasn't resulted in a single product hitting its lineup. A new patent by AMD, though, could mean that the FPGA newcomer might be ready to make one of its own...

AMD made 20 claims in its patent application, but the gist is that a processor can include one or more execution units that can be programmed to handle different types of custom instruction sets. That's exactly what an FPGA does...

AMD has been working on different ways to speed up AI calculations for years. First the company announced and released the Radeon Instinct series of AI accelerators, which were just big headless Radeon graphics processors with custom drivers. The company doubled down on that in 2018 with the release of the MI60, its first 7-nm GPU, ahead of the Radeon RX 5000 series launch. A shift to focusing on AI via FPGAs after the Xilinx acquisition makes sense, and we're excited to see what the company comes up with.

Bug

'Cyberpunk 2077' Players Are Fixing Parts of the Game Before CD Projekt (vice.com) 79

Cyberpunk 2077 is here in all its glory and pain. On some machines, it's a visual spectacle pushing the limits of current technology and delivering on the promise of Deus Ex, but open world. On other machines, including last-gen consoles, it's an unoptimized and barely playable nightmare. Developer CD Projekt Red has said it's working to improve the game, but fans already have a number of fixes, particularly if you're using an AMD CPU. From a report: Fans aren't waiting for the developer, however, and over the weekend AMD CPU users discovered that a few small tweaks could improve performance on their PCs. Some players reported performance gains of as much as 60 percent. Cyberpunk 2077 seems to be a CPU-intensive game and, at release, it isn't properly optimized for AMD chips. "If you run the game on an AMD CPU and check your usage in task manager, it seems to utilise 4 (logical, 2 physical) cores in frequent bursts up to 100% usage, whereas the rest of the physical cores sit around 40-60%, and their logical counterparts remain idle," Redditor BramblexD explained in a post on the /r/AMD subreddit. Basically, Cyberpunk 2077 is only utilizing a portion of an AMD chip's power.

Digital Foundry, a YouTube channel that does in-depth technical analysis of video games, noticed the AMD issue as well. "It really looks like Cyberpunk is not properly using the hyperthreads on Ryzen CPUs," Digital Foundry said in a recent video. To fix this issue, the community has developed three separate solutions. One involves altering the game's executable with a hex editor, another involves editing a config file, and a third is an unofficial patch built by the community. All three do the same thing -- unleash the power of AMD's processors. "Holy shit are you a wizard or something? The game is finally playable now!" one redditor said of the hex editing technique. "With this tweak my CPU usage went from 50% to ~75% and my frametime is so much more stable now."
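For readers who want to reproduce the diagnostic described above without Task Manager, a small sketch along these lines samples per-core load while the game runs and reveals the pattern players reported (a few logical cores pegged while their SMT siblings sit idle). It is a generic check, not part of any of the community fixes, and assumes the third-party psutil package is installed.

import psutil   # pip install psutil

physical = psutil.cpu_count(logical=False)
logical = psutil.cpu_count(logical=True)
print(f"{physical} physical cores, {logical} logical cores")

# One-second sample of per-logical-core utilization, in percent.
usage = psutil.cpu_percent(interval=1.0, percpu=True)
for core, pct in enumerate(usage):
    marker = "  <-- busy" if pct > 90 else ""
    print(f"logical core {core:2d}: {pct:5.1f}%{marker}")

If only a handful of cores show up as busy while the rest sit idle, the game's thread scheduling, rather than raw CPU power, is the bottleneck the tweaks above are addressing.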

Hardware

'This Is a Bad Time to Build a High-End Gaming PC' (extremetech.com) 177

Joel Hruska, writing at ExtremeTech: It's not just a question of whether top-end hardware is available, but whether midrange and last-gen hardware is selling at reasonable prices. If you want to go AMD, be aware that Ryzen 5000 CPUs are hard to find and the 6800 and 6800 XT are vanishingly rare. The upper-range Ryzen 3000 CPUs available on Amazon and Newegg are also selling for well above their prices six months ago. If you want to build an Intel system, the situation is a little different. A number of the 9th and 10th-gen chips are actually priced at MSRP and not too hard to find. The Core i7-9700K has fallen to $269, for example, and it's still one of Intel's fastest gaming CPUs. At that price, paired with a Z370 motherboard, you could build a gaming-focused system, so long as you don't actually need a new high-end GPU. The Core i7-10700K is $359, which isn't quite as competitive, but it squares off reasonably well against chips like the 3700X at $325. Amazon and Newegg both report the 3600X selling for more, at $400 and $345, respectively.

But even if these prices are appealing, the current GPU market makes building a gaming system much above the lower-midrange to midrange tier a non-starter. Radeon 6000 GPUs and RTX 3000 GPUs are both almost impossible to find, and the older, slower, and less feature-rich cards that you can buy are almost all selling for more today than they were six months ago. Not every GPU has been kicked into the stratosphere, but between the cards you can't buy and the cards you shouldn't buy, there's a limited number of deals currently on the market. Your best bet is to set up price alerts with the vendor in question on the specific SKUs you are watching. There is some limited good news, though: DRAM and SSDs are both still reasonably priced. DRAM and SSD prices are both expected to decline 10-15 percent through Q4 2020 compared with the previous quarter, and there are good deals to be had on both. [...] Power supply prices look reasonable, too, and motherboard availability looks solid. If you don't need to buy a GPU right now and you're willing to use Intel, or prefer to, there's a more reasonable case to be made for building a system. But if you need a high-end GPU and/or want a high-end Ryzen chip to go with it, you may be better off shopping prebuilt systems or waiting a few more months.

PlayStation (Games)

Is Sony Developing a Dual-GPU PS5 Pro? (collider.com) 60

According to a Sony patent spotted by T3, the console maker may be working on a new PlayStation 5 with two graphics cards. From the report: The patent describes a "scalable game console" where "a second GPU [is] communicatively coupled to the first GPU" and the system is for "home console and cloud gaming" usage. To us here at T3 that suggests a next-gen PlayStation console, most likely a new PS5 Pro flagship, supercharged with two graphics cards instead of just one. These would both come in the APU (accelerated processing unit) format that the PlayStation 5's system-on-a-chip (SoC) does, with two custom-made AMD APUs working together to deliver enhanced gaming performance and cloud streaming.

The official Sony patent notes that "plural SoCs may be used to provide a 'high-end' version of the console with greater processing and storage capability," while "the 'high end' system can also contain more memory such as random-access memory (RAM) and other features and may also be used for a cloud-optimized version using the same game console chip with more performance." And, with the PlayStation 5 console only marginally weaker on paper than the Xbox Series X (the PS5 delivers 10.28 teraflops compared to the Xbox Series X's 12 teraflops), a new PS5 Pro console that comes with two APUs rather than one, improving local gaming performance as well as cloud gaming, would no doubt be a death blow to the Xbox Series X's claim as king of the next-gen consoles.

The cloud gaming part of the patent is particularly interesting, too, as it seems to suggest that this technology could find itself not just in a new flagship PS5 Pro console, but also in more streamlined cloud-based hardware. An upgraded PS5 Digital Edition seems a smart bet, as does the much-rumored PSP 5G. [...] Will we see a PS5 Pro anytime soon? Here at T3 we think absolutely not -- we imagine we'll get at least two straight years of the PS5 before we see anything at all. As for a cloud-based next-gen PSP 5G, though...

Hardware

NVIDIA Launches GeForce RTX 3060 Ti, Sets a New Gaming Performance Bar At $399 (hothardware.com) 70

MojoKid writes: NVIDIA expanded its line-up of Ampere-based graphics cards today with a new, lower-cost GeForce RTX 3060 Ti. As its name suggests, the new $399 NVIDIA GPU supplants the previous-gen GeForce RTX 2060 / RTX 2060 Super and slots in just behind the recently-released GeForce RTX 3070. The GeForce RTX 3060 Ti features 38 SMs with 128 CUDA cores each (4,864 total), 4 third-gen Tensor cores per SM (152 total), and 38 second-gen RT cores. The GPU has a typical boost clock of 1,665MHz and is linked to 8GB of standard GDDR6 memory (not the GDDR6X of the RTX 3080/3090) via a 256-bit memory interface that offers up to 448GB/s of peak bandwidth. In terms of overall performance, the RTX 3060 Ti lands in the neighborhood of the GeForce RTX 2080 Super, and well ahead of cards like AMD's Radeon RX 5700 XT. The GeForce RTX 3060 Ti's 8GB frame buffer may give some users pause, but for 1080p and 1440p gaming, it shouldn't be a problem for the overwhelming majority of titles. It's also par for the course in this $399 price band. Cards are reported to be shipping at retail tomorrow.
Graphics

Radeon RX 6800 and 6800 XT Performance Marks AMD's Return To High-End Graphics (hothardware.com) 62

MojoKid writes: AMD officially launched its Radeon RX 6800 and Radeon RX 6800 XT graphics cards today, previously known in the PC gaming community as Big Navi. The company claimed these high-end GPUs would compete with NVIDIA's best GeForce RTX 30 series cards, and it appears AMD made good on its claims. AMD's new Radeon RX 6800 XT and Radeon RX 6800 are based on the company's RDNA 2 GPU architecture, with the former sporting 72 Compute Units (CUs) and a 2250MHz boost clock, while the RX 6800 sports 60 CUs at a 2105MHz boost clock. Both cards come equipped with 16GB of GDDR6 memory and 128MB of on-die cache AMD calls Infinity Cache, which improves bandwidth and latency by sitting in front of the card's 256-bit GDDR6 memory interface and feeding the GPU.

In the benchmarks, it is fair to say the Radeon RX 6800 is typically faster than an NVIDIA GeForce RTX 3070, just as AMD suggested. Things are not as cut and dried for the Radeon RX 6800 XT, though, as the GeForce RTX 3080 and Radeon RX 6800 XT trade victories depending on the game title or workload, but the RTX 3080 has an edge overall. In DXR ray tracing performance, NVIDIA has a distinct advantage at the high end. Though the Radeon RX 6800 wasn't too far behind an RTX 3070, neither the Radeon RX 6800 XT nor the 6800 came close to the GeForce RTX 3080. Pricing is set at $649 and $579 for the AMD Radeon RX 6800 XT and Radeon RX 6800, respectively, and the cards are on sale as of today. However, demand is likely to be fierce, as this new crop of high-end graphics cards from both companies has been quickly evaporating from retail shelves.

Desktops (Apple)

Apple's M1 Is Exceeding Expectations (extremetech.com) 274

Reviews are starting to pour in of Apple's MacBook Pro, MacBook Air and Mac Mini featuring the new M1 ARM-based processor -- and they're overwhelmingly positive. "As with the Air, the Pro's performance exceeds expectations," writes Nilay Patel via The Verge.

"Apple's next chapter offers strong performance gains, great battery and starts at $999," says Brian Heater via TechCrunch.

"When Apple said it would start producing Macs with its own system-on-chip processors, custom CPU and GPU silicon (and a bunch of other stuff) to replace parts from Intel and AMD, we figured it would be good. I never expected it would be this good," says Jason Cross in his review of the MacBook Air M1.

"The M1 is a serious, serious contender for one of the all-time most efficient and highest-performing architectures we've ever seen deploy," says ExtremeTech's Joel Hruska.

"Spending a few days with the 2020 Mac mini has shown me that it's a barnburner of a miniature desktop PC," writes Chris Welch via The Verge. "It outperforms most Intel Macs in several benchmarks, runs apps reliably, and offers a fantastic day-to-day experience whether you're using it for web browsing and email or for creative editing and professional work. That potential will only grow when Apple inevitably raises the RAM ceiling and (hopefully) brings back those missing USB ports..."

"Quibbling about massively parallel workloads -- which the M1 wasn't designed for -- aside, Apple has clearly broken the ice on high-performance ARM desktop and laptop designs," writes Jim Salter via Ars Technica. "Yes, you can build an ARM system that competes strongly with x86, even at very high performance levels."

"The M1-equipped MacBook Air now packs far better performance than its predecessors, rivaling at times the M1-based MacBook Pro. At $999, it's the best value among macOS laptops," concludes PCMag.

"For developers, the Apple Silicon Macs also represent the very first full-fledged Arm machines on the market that have few-to-no compromises. This is a massive boost not just for Apple, but for the larger Arm ecosystem and the growing Arm cloud-computing business," writes Andrei Frumusanu via AnandTech. "Overall, Apple hit it out of the park with the M1."

AMD

Microsoft Reveals Pluton, a Custom Security Chip Built Into Intel, AMD, and Qualcomm Processors (techcrunch.com) 143

An anonymous reader shares a report: For the past two years, some of the world's biggest chip makers have battled a series of hardware flaws, like Meltdown and Spectre, which made it possible -- though not easy -- to pluck passwords and other sensitive secrets directly from their processors. The chip makers rolled out patches, but the flaws forced the companies to rethink how they approach chip security. Now, Microsoft thinks it has the answer with its new security chip, which it calls Pluton. The chip, announced today, is the brainchild of a partnership between Microsoft and chip makers Intel, AMD, and Qualcomm. Pluton acts as a hardware root of trust, which in simple terms protects a device's hardware from tampering, such as from hardware implants or hackers exploiting flaws in the device's low-level firmware. By integrating the chip inside future Intel, AMD, and Qualcomm central processing units, or CPUs, it becomes far more difficult for hackers with physical access to a computer to launch hardware attacks and extract sensitive data, the companies said. "The Microsoft Pluton design will create a much tighter integration between the hardware and the Windows operating system at the CPU that will reduce the available attack surface," said David Weston, director of enterprise and operating system security at Microsoft.
Programming

Why Apple Silicon Needs an Open Source Fortran Compiler (walkingrandomly.com) 113

"Earlier this week Apple announced their new, ARM-based 'Apple Silicon' machines to the world in a slick marketing event that had many of us reaching for our credit cards," writes Mike Croucher, technical evangelist at The Numerical Algorithms Group.

"Simultaneously, The Numerical Algorithms Group announced that they had ported their Fortran Compiler to the new platform. At the time of writing this is the only Fortran compiler publicly available for Apple Silicon although that will likely change soon as open source Fortran compilers get updated."

An anonymous Slashdot reader offers this analysis: Apple Silicon currently has no open source Fortran compiler and Apple themselves are one of the few silicon manufacturers who don't have their own Fortran compiler. You could be forgiven for thinking that this doesn't matter to most users... if it wasn't for the fact that sizeable percentages of foundational data science platforms such as R and SciPy are written in Fortran.
Croucher argues that "More modern systems, such as R, make direct use of a lot of this code because it is highly performant and, perhaps more importantly, has been battle tested in production for decades. Numerical computing is hard (even when all of your instincts suggest otherwise) and when someone demonstrably does it right, it makes good sense to reuse rather than reinvent..."

"The community needs and will demand open source (or at least free) Fortran compilers if data scientists are ever going to realise the full potential of Apple's new hardware and I have no doubt that these are on the way. Other major silicon providers (e.g. Intel, AMD, NEC and NVIDIA/PGI) have their own Fortran compiler that co-exist with the open ones. Perhaps Apple should join the club..."
AMD

AMD Ryzen 5000 Series Processors Set a New Performance Bar Over Intel (hothardware.com) 70

MojoKid writes: AMD made bold claims when the company unveiled its new Zen 3-based Ryzen 5000 series processors early last month. Statements like "historic IPC uplift" and "fastest for gamers" were waved about like flags of victory. However, as with most things in the computing world, independent testing is always the best way to validate claims. Today AMD lifted the embargo on third-party reviews and, in testing, AMD's new Ryzen 5000 series CPUs set a new performance bar virtually across the board, and one that Intel currently can't touch. There are four processors in the initial Ryzen 5000 series lineup, though it's a safe bet more will be coming later. The current entry point is the Ryzen 5 5600X 6-core / 12-thread processor, followed by the 8-core / 16-thread Ryzen 7 5800X, 12-core / 24-thread Ryzen 9 5900X, and the flagship 16-core / 32-thread Ryzen 9 5950X. All of these new CPUs are backwards compatible with AMD socket AM4 motherboards. In comparison to Zen 2, Zen 3 has a larger L1 branch target buffer and improved bandwidth through multiple parts of its pipeline, with additional load/store flexibility. Where Zen 2 could handle 2 loads and 1 store per cycle, Zen 3 can handle 3 loads and 2 stores. All told, AMD is claiming an average 19% increase in IPC with Zen 3, which is a huge uplift gen-over-gen. Couple that IPC uplift with stronger multi-core scaling and a new unified L3 cache configuration, and Zen 3's performance looks great across a wide variety of workloads, for content creation and especially gaming. AMD's Ryzen 9 5950X, Ryzen 9 5900X, Ryzen 7 5800X and Ryzen 5 5600X are priced at $799, $549, $449 and $299, respectively, and should be on retail and etail shelves starting today.
Linux

SiFive Unveils Plan For Linux PCs With RISC-V Processors (venturebeat.com) 42

SiFive today announced it is creating a platform for Linux-based personal computers based on RISC-V processors. VentureBeat reports: Assuming customers adopt the processors and use them in PCs, the move might be part of a plan to create Linux-based PCs that use royalty-free processors. This could be seen as a challenge to computers based on designs from Intel, Advanced Micro Devices, Apple, or Arm, but giants of the industry don't have to cower just yet. The San Mateo, California-based company unveiled HiFive Unmatched, a development design for a Linux-based PC that uses its RISC-V processors. At the moment, these development PCs are early alternatives, most likely targeted at hobbyists and engineers who may snap them up when they become available in the fourth quarter for $665.

The SiFive HiFive Unmatched board will have a SiFive processor, dubbed the SiFive FU740 SoC, a 5-core processor with four SiFive U74 cores and one SiFive S7 core. The U-series cores are Linux-based 64-bit application processor cores based on RISC-V. These cores can be mixed and matched with other SiFive cores, such as the SiFive FU740. These components are all leveraging SiFive's existing intellectual property portfolio. The HiFive Unmatched board comes in the mini-ITX standard form factor to make it easy to build a RISC-V PC. SiFive also added some standard industry connectors -- ATX power supplies, PCI-Express expansion, Gigabit Ethernet, and USB ports are present on a single-board RISC-V development system.

The HiFive Unmatched board includes 8GB of DDR4 memory, 32MB of QSPI flash memory, and a microSD card slot on the motherboard. For debugging and monitoring, developers can access the console output of the board through the built-in microUSB type-B connector. Developers can expand it using PCI-Express slots, including both a PCIe general-purpose slot (PCIe Gen 3 x8) for graphics, FPGAs, or other accelerators and M.2 slots for NVME storage (PCIe Gen 3 x4) and Wi-Fi/Bluetooth modules (PCIe Gen 3 x1). There are four USB 3.2 Gen 1 type-A ports on the rear, next to the Gigabit Ethernet port, making it easy to connect peripherals. The system will ship with a bootable SD card that includes Linux and popular system developer packages, with updates available for download from SiFive.com. It will be available for preorders soon.

For some more context: Could RISC-V processors compete with Intel, ARM, and AMD?
Intel

Could RISC-V Processors Compete With Intel, ARM, and AMD? (venturebeat.com) 112

"As promised, SiFive has unveiled a new computer featuring the company's SiFive FU740 processor based on RISC-V architecture," reports Liliputing: The company, which has been making RISC-V chips for several years, is positioning its new SiFive HiFive Unmatched computer as a professional development board for those interested in working with RISC-V. But unlike the company's other HiFive boards, the new Unmatched model is designed so that it can be easily integrated into a standard PC...

SiFive says the system can support GNU/Linux distributions including Yocto, Debian, and Fedora.

"SiFive is releasing the HiFive Unleashed in an effort to afford developers the ability to build RISC-V based systems, using readily available, off-the-shelf parts," explains Forbes: SiFive says it built the board to address the market need for easily accessible RISC-V hardware to further advance development of new platforms, products, and software using the royalty-free ISA...

A short video demo shows the HiFive Unmatched installed in a common mid-tower PC chassis, running the included Linux distro, with an AMD Radeon graphics card pushing the pixels. In the video, the HiFive Unmatched is compiling an application and is shown browsing the web and opening a PDF. SiFive also notes that video playback is accelerated in hardware with the included version of Linux.

"At the moment, these development PCs are early alternatives, most likely targeted at hobbyists and engineers who may snap them up when they become available in the fourth quarter for $665," notes VentureBeat.

But they add that "While it's still early days, it's not inconceivable that RISC-V processors could someday be alternatives to Intel-based PCs and PC processors." The startup has raised $190 million to date, and former Qualcomm executive Patrick Little recently joined SiFive as CEO. His task will be to establish the company's RISC-V processors as an alternative to ARM. This move comes in the wake of Nvidia's $40 billion acquisition of Arm, the world's leading processor architecture.

If Little is also looking to challenge Intel and AMD in PCs, he'll have his work cut out for him. For starters, SiFive is currently focused on Linux-based PCs, not Microsoft Windows PCs. Secondly, SiFive wouldn't build these processors or computers on its own. Its customers — anyone brave enough to take on the PC giants — would have to do that.

"I wouldn't see this as SiFive moving out of the box. It's more like they're expanding their box," said Linley Group senior analyst Aakash Jani. "They're using their core architecture to enable other chip designers to build PCs, or whatever they plan to build."

China

New Chinese Laptop Appears With 14nm Loongson Quad-Core 3A4000 CPU (tomshardware.com) 75

"BDY electronics, a Chinese laptop manufacturer, has unveiled an all-new 13.3-inch laptop sporting Longsoon's new Dragon Core 3A4000 quad-core 14nm CPU," reports Tom's Hardware: The biggest feature of this laptop is the CPU, featuring Longsoon's latest 14nm quad-core 3A4000 CPU. Longsoon claims the CPU is 100% faster than the previous generation 3A3000 and is comparable in performance to AMD's "Excavator" cores used in the A8-7680 Godavari architecture.

Of course, this demonstrates how far behind Loongson is from TSMC and Intel in the performance, speed, and efficiency of its latest node. However, the chairman of Loongson Technology, Hu Weiwu, says "14nm and 28nm (for its GPU node) is enough for 90% of applications," so it appears the company isn't too worried about catching up to performance leaders like Intel and AMD.

Due to this laptop being aimed at the Chinese market, Windows is not supported at all. It only runs Chinese "domestic operating systems," which are typically modified versions of Linux. Fortunately, this does mean you can install any Linux flavor you want on the laptop, which can be handy if you don't want to be limited to China-specific software.

Slashdot reader Hmmmmmm points out that Loongson's upcoming 3A5000 CPU "will be a 12nm CPU that is 50% faster than the 3A4000."
AMD

AMD Reveals The Radeon RX 6000 Series 57

Preparing to close out a major month of announcements for AMD -- and to open the door to the next era of architectures across the company -- AMD wrapped up its final keynote presentation of the month by announcing its Radeon RX 6000 series of video cards. From a report: Hosted once more by AMD CEO Dr. Lisa Su, AMD's hour-long keynote revealed the first three parts in AMD's new RDNA2-architecture video card family: the Radeon RX 6800 ($579), 6800 XT ($649), and 6900 XT ($999). With these cards forming the core of its new high-end video card lineup, AMD means to do battle with the best of the best from arch-rival NVIDIA. And we'll get to see first-hand whether AMD can retake the high-end market on November 18th, when the first two cards hit retail shelves. AMD's forthcoming video card launch has been a long time coming for the company, and one it has been teasing particularly heavily. For AMD, the Radeon RX 6000 series represents the culmination of efforts from across the company, as everyone from the GPU architecture team and the semi-custom SoC team to the Zen CPU team has played a role in developing AMD's latest GPU technology. All the while, these new cards are AMD's best chance in at least half a decade to finally catch up to NVIDIA at the high end of the video card market. So understandably, the company is jazzed -- and in more than just a marketing manner -- about what the RX 6000 means.

Anchoring the new cards is AMD's RDNA2 GPU architecture. RDNA2 is launching near-simultaneously across consoles and PC video cards next month, where it will be the backbone of some 200 million video game consoles plus countless AMD GPUs and APUs to come. Accordingly, AMD has pulled out all of the stops in designing it, assembling an architecture that's on the cutting-edge of technical features like ray tracing and DirectX 12 Ultimate support, all the while leveraging the many things they've learned from their successful Zen CPU architectures to maximize RDNA2's performance. RDNA2 is also rare in that it isn't being built on a new manufacturing process, so coming from AMD's earlier RDNA architecture and associated video cards, AMD is relying on architectural improvements to deliver virtually all of their performance gains. Truly, it's AMD's RDNA2 architecture that's going to make or break their new cards.
Medicine

Folding@Home Exascale Supercomputer Finds Potential Targets For COVID-19 Cure (networkworld.com) 38

An anonymous reader quotes a report from Network World: The Folding@home project has shared new results of its efforts to simulate proteins from the SARS-CoV-2 virus to better understand how they function and how to stop them. Folding@home is a distributed computing effort that uses small clients to run simulations for biomedical research when users' PCs are idle. The clients operate independently of each other, each performing its own unique simulation and sending the results back to the F@h servers. In its SARS-CoV-2 simulations, F@h first targeted the spike, the cone-shaped appendage on the surface of the virus consisting of three proteins. The spike must open to attach itself to a human cell in order to infiltrate and replicate. F@h's mission was to simulate this opening process to gain unique insight into what the open state looks like and find a way to inhibit the connection between the spike and human cells.

And it did so. In a newly published paper, the Folding@home team said it was able to simulate an "unprecedented" 0.1 seconds of the viral proteome. They captured dramatic opening of the spike complex, as well as shape-shifting in other proteins that revealed more than 50 "cryptic" pockets that expand targeting options for the design of antivirals. [...] The model derived from the F@h simulations shows that the spike opens up and exposes buried surfaces. These surfaces are necessary for infecting a human cell and can also be targeted with antibodies or antivirals that bind to the surface to neutralize the virus and prevent it from infecting someone.
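The structure described above -- independent work units farmed out to idle machines, with results collected centrally -- is what lets a volunteer network reach this kind of aggregate throughput. Here is a toy sketch of that pattern in Python, with a made-up scoring function standing in for a real molecular dynamics work unit; it illustrates the embarrassingly parallel idea only and bears no resemblance to F@h's actual client protocol.

from concurrent.futures import ProcessPoolExecutor
import random

def run_work_unit(seed):
    # Stand-in for a simulation work unit: each task is independent,
    # needs no communication with the others, and returns a small result.
    rng = random.Random(seed)
    return seed, max(rng.random() for _ in range(100_000))

if __name__ == "__main__":
    results = {}
    with ProcessPoolExecutor() as pool:            # the "volunteer clients"
        for seed, score in pool.map(run_work_unit, range(16)):
            results[seed] = score                  # the "server" collects results
    print(f"collected {len(results)} work units; best score {max(results.values()):.4f}")

Because no work unit depends on any other, adding more volunteer machines scales throughput almost linearly, which is how hundreds of thousands of idle PCs added up to exascale-class aggregate compute.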
"And the tech sector played a big role in helping the find," adds the anonymous Slashdot reader. "Microsoft, Nvidia, AMD, Intel, AWS, Oracle, and Cisco all helped with hardware and cloud services. Pure Storage donated a one petabyte all-flash storage array. Linus Tech Tips, a hobbyist YouTube channel for home system builders with 12 million followers, set up a 100TB server to take the load off."
AMD

AMD Grabs Xilinx for $35 Billion as Chip Industry Consolidation Continues (techcrunch.com) 37

The chip industry consolidation dance continued this morning as AMD has entered into an agreement to buy Xilinx for $35 billion, giving the company access to a broad set of specialized workloads. From a report: AMD sees this deal as combining two companies that complement each other's strengths without cannibalizing its own markets. CEO Lisa Su believes the acquisition will help make her company the high performance chip leader. "By combining our world-class engineering teams and deep domain expertise, we will create an industry leader with the vision, talent and scale to define the future of high performance computing," Su said in a statement.
