United Kingdom

Potential Sites For UK's First Prototype Fusion Power Plant Identified (bbc.co.uk) 82

A total of 15 potential sites are in the running to host the UK's first prototype fusion power plant. The BBC reports: Fusion is seen as a potential source of almost limitless clean energy but is currently only used in experiments. An open call for sites was made last year and nominations closed at the end of March this year. Following checks for compliance with key entry criteria, the UK Atomic Energy Authority (UKAEA) has published a long list of possible locations. The sites, from north to south, with nominating body, are: Dounreay, East Airdrie, Poneil, Ardeer, Chapelcross, Moorside, Bay Fusion, Goole, West Burton, Ratcliffe on Soar, Pembroke, Severn Edge, Aberthaw, Bridgwater Bay, and Bradwell (Essex).

The UKAEA said that acceptance of the sites did not indicate that they were "preferred or desired" or that it believed they were "in all cases, possible." It stressed it was simply that the procedural entry criteria had been met and assessment had now begun. It said a shortlisting process would take place in the autumn with a final site decision likely by the end of next year. UKAEA is hoping to have such a plant operating in the early 2040s, with an initial concept design ready by 2024.

Google

Google Used Reinforcement Learning To Design Next-Gen AI Accelerator Chips (venturebeat.com) 18

Chip floorplanning is the engineering task of designing the physical layout of a computer chip. In a paper published in the journal Nature, Google researchers applied a deep reinforcement learning approach to chip floorplanning, creating a new technique that "automatically generates chip floorplans that are superior or comparable to those produced by humans in all key metrics, including power consumption, performance and chip area." VentureBeat reports: The Google team's solution is a reinforcement learning method capable of generalizing across chips, meaning that it can learn from experience to become both better and faster at placing new chips. Training AI-driven design systems that generalize across chips is challenging because it requires learning to optimize the placement of all possible chip netlists (graphs of circuit components like memory components and standard cells including logic gates) onto all possible canvases. [...] The researchers' system aims to place a "netlist" graph of logic gates, memory, and more onto a chip canvas, such that the design optimizes power, performance, and area (PPA) while adhering to constraints on placement density and routing congestion. The graphs range in size from millions to billions of nodes grouped in thousands of clusters, and typically, evaluating the target metrics takes from hours to over a day.

Starting with an empty chip, the Google team's system places components sequentially until it completes the netlist. To guide the system in selecting which components to place first, components are sorted by descending size; placing larger components first reduces the chance that no feasible placement will remain for them later. Training the system required creating a dataset of 10,000 chip placements, where the input is the state associated with the given placement and the label is the reward for the placement (i.e., wirelength and congestion). The researchers built it by first picking five different chip netlists, to which an AI algorithm was applied to create 2,000 diverse placements for each netlist. The system took 48 hours to "pre-train" on an Nvidia Volta graphics card and 10 CPUs, each with 2GB of RAM. Fine-tuning initially took up to 6 hours, but applying the pre-trained system to a new netlist without fine-tuning generated a placement in less than a second on a single GPU in later benchmarks. In one test, the Google researchers compared their system's recommendations with a manual baseline: the production design of a previous-generation TPU chip created by Google's TPU physical design team. Both the system and the human experts consistently generated viable placements that met timing and congestion requirements, but the AI system also outperformed or matched manual placements in area, power, and wirelength while taking far less time to meet design criteria.
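None of the paper's code appears in the article, but the core idea -- place components one at a time, largest first, and score each candidate placement with a cheap proxy for wirelength and congestion -- can be illustrated with a toy sketch. Everything below is hypothetical: the blocks, the nets, and the greedy position search stand in for Google's learned policy, which is a neural network trained with reinforcement learning, not a hand-written heuristic.

```python
# Toy sketch of sequential placement with a wirelength proxy reward.
# NOT Google's method: real netlists have millions of nodes and the
# placement decisions come from a learned RL policy, not this greedy search.
from itertools import product

GRID = 8  # an 8x8 placement canvas

# Hypothetical macro blocks: (name, width, height), sorted largest-first,
# as the article describes, so big blocks are placed while space remains.
blocks = sorted(
    [("sram0", 3, 2), ("sram1", 3, 2), ("alu", 2, 2), ("io", 1, 2), ("ctrl", 1, 1)],
    key=lambda b: b[1] * b[2],
    reverse=True,
)

# Hypothetical nets: each net connects a set of blocks.
nets = [{"sram0", "alu"}, {"sram1", "alu"}, {"alu", "ctrl"}, {"ctrl", "io"}]

occupied = set()   # grid cells already used (acts as a hard density constraint)
centers = {}       # block name -> (x, y) center of its placement

def footprint(x, y, w, h):
    return {(x + dx, y + dy) for dx in range(w) for dy in range(h)}

def hpwl(placed):
    """Half-perimeter wirelength: a standard cheap proxy for routed wirelength."""
    total = 0.0
    for net in nets:
        pts = [placed[b] for b in net if b in placed]
        if len(pts) > 1:
            xs, ys = zip(*pts)
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

for name, w, h in blocks:
    best = None
    for x, y in product(range(GRID - w + 1), range(GRID - h + 1)):
        cells = footprint(x, y, w, h)
        if cells & occupied:
            continue  # overlapping placements are disallowed
        trial = dict(centers, **{name: (x + w / 2, y + h / 2)})
        cost = hpwl(trial)  # reward = -cost; lower wirelength is better
        if best is None or cost < best[0]:
            best = (cost, x, y, cells)
    cost, x, y, cells = best
    occupied |= cells
    centers[name] = (x + w / 2, y + h / 2)
    print(f"placed {name} at ({x},{y}), running HPWL = {cost:.1f}")
```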

Data Storage

Ultra-High-Density HDDs Made With Graphene Store Ten Times More Data (phys.org) 62

Graphene can be used for ultra-high density hard disk drives (HDD), with up to a tenfold jump compared to current technologies, researchers at the Cambridge Graphene Centre have shown. Phys.Org reports: The study, published in Nature Communications, was carried out in collaboration with teams at the University of Exeter and in India, Switzerland, Singapore, and the US. [...] HDDs contain two major components: platters and a head. Data are written on the platters using a magnetic head, which moves rapidly above them as they spin. The space between head and platter is continually decreasing to enable higher densities. Currently, carbon-based overcoats (COCs) -- layers used to protect platters from mechanical damage and corrosion -- occupy a significant part of this spacing. The data density of HDDs has quadrupled since 1990, and the COC thickness has reduced from 12.5nm to around 3nm, which corresponds to one terabyte per square inch. Now, graphene has enabled researchers to multiply this by ten.

The Cambridge researchers have replaced commercial COCs with one to four layers of graphene, and tested friction, wear, corrosion, thermal stability, and lubricant compatibility. Beyond its unbeatable thinness, graphene fulfills all the ideal properties of an HDD overcoat in terms of corrosion protection, low friction, wear resistance, hardness, lubricant compatibility, and surface smoothness. Graphene enables a two-fold reduction in friction and provides better corrosion and wear protection than state-of-the-art solutions. In fact, a single graphene layer reduces corrosion by 2.5 times. Cambridge scientists transferred graphene onto hard disks made of iron-platinum as the magnetic recording layer, and tested Heat-Assisted Magnetic Recording (HAMR) -- a new technology that enables an increase in storage density by heating the recording layer to high temperatures. Current COCs do not perform at these high temperatures, but graphene does. Thus, graphene, coupled with HAMR, can outperform current HDDs, providing an unprecedented data density, higher than 10 terabytes per square inch.

Robotics

McDonald's Starts Testing Automated Drive-Thru Ordering (cnbc.com) 133

New submitter DaveV1.0 shares a report from CNBC: At 10 McDonald's locations in Chicago, workers aren't taking down customers' drive-thru orders for McNuggets and french fries -- a computer is, CEO Chris Kempczinski said Wednesday. Kempczinski said the restaurants using the voice-ordering technology are seeing about 85% order accuracy. Only about a fifth of orders need to be taken by a human at those locations, he said, speaking at Alliance Bernstein's Strategic Decisions conference.

In 2019, under former CEO Steve Easterbrook, McDonald's went on a spending spree, snapping up restaurant tech. One of those acquisitions was Apprente, which uses artificial intelligence software to take drive-thru orders. Kempczinski said the technology will likely take more than one or two years to implement. "Now there's a big leap from going to 10 restaurants in Chicago to 14,000 restaurants across the U.S., with an infinite number of promo permutations, menu permutations, dialect permutations, weather — and on and on and on," he said. Another challenge has been training restaurant workers to stop themselves from jumping in to help.

Hardware

US PC Shipments Soar 73% In the First Quarter As Apple Falls From Top Spot (techcrunch.com) 76

An anonymous reader quotes a report from TechCrunch: With increased demand from the pandemic, Canalys reports that U.S. PC shipments were up 73% over the same period last year. That added up to a total of 34 million units sold. While Apple had a good quarter with sales up 36%, it was surpassed by HP, which sold 11 million units in total with annual growth up an astonishing 122.6%. As Canalys pointed out, the first quarter tends to be a weaker one for Apple hardware following the holiday season, but it's a big move for HP nonetheless. Other companies boasting big growth numbers include Samsung at 116% and Lenovo at 92.8%. Dell was up 29.2%, fairly modest compared with the rest of the group.

Overall, though, it was a stunning quarter as units flew off the shelves. Canalys Research Analyst Brian Lynch says some of this can be attributed to the increased demand from 2020 as people moved to work and school from home and needed new machines to get their work done, but regardless the growth was unrivaled historically. "Q1 2021 still rates as one of the best first quarters the industry has ever seen. Vendors have prioritized fulfilling U.S. backlogs before supply issues are addressed in other parts of the world," Lynch said in a statement. Perhaps not surprisingly, low-cost Chromebooks were the most popular item as people looking to refresh their devices, especially for education purposes, turned to the lower end of the PC market, which likely had a negative impact on higher-priced Apple products, as well as contributing to its drop from the top spot.
According to Canalys, Chromebook sales were up a whopping 548% with Samsung leading that growth with an astonishing 1,963% growth rate. "Asus, HP and Lenovo all reported Chromebook sales rates up over 900%," adds TechCrunch.
Power

Reducing Poverty Can Actually Lower Energy Demand, Finds Research (arstechnica.com) 196

An anonymous reader shares a report from The Conversation: As people around the world escape poverty, you might expect their energy use to increase. But my research in Nepal, Vietnam, and Zambia found the opposite: lower levels of deprivation were linked to lower levels of energy demand. What is behind this counterintuitive finding? [...] We found that households that do have access to clean fuels, safe water, basic education and adequate food -- that is, those not in extreme poverty -- can use as little as half the energy of the national average in their country. This is important, as it goes directly against the argument that more resources and energy will be needed for people in the global south to escape extreme poverty. The biggest factor is the switch from traditional cooking fuels, like firewood or charcoal, to more efficient (and less polluting) electricity and gas.

In Zambia, Nepal, and Vietnam, modern energy resources are extremely unfairly distributed -- more so than income, general spending, or even spending on leisure. As a consequence, poorer households use more dirty energy than richer households, with ensuing health and gender impacts. Cooking with inefficient fuels consumes a lot of energy, and even more when water needs to be boiled before drinking. But do households with higher incomes and more devices have a better chance of escaping poverty? Some do, but having a higher income or a mobile phone is neither a prerequisite for nor a guarantee of having basic needs satisfied. Richer households without access to electricity or sanitation are not spared from having malnourished children or health problems from using charcoal. Ironically, for most households, it is easier to obtain a mobile phone than a clean, nonpolluting fuel for cooking. Therefore, measuring progress via household income leads to an incomplete understanding of poverty and its deprivations.

So what? Are we arguing against the global south using more energy for development? No: instead of focusing on how much energy is used, we are pointing to the importance of collective services (like electricity, indoor sanitation and public transport) for alleviating the multiple deprivations of poverty. In addressing these issues we cannot shy away from asking why so many countries in the global south have such a low capacity to invest in those services. It has to do with the fact that poverty does not just happen: it is created via interlinked systems of wealth extraction such as structural adjustment, or high costs of servicing national debts. Given that climate change is caused by the energy use of a rich minority in the global north but the consequences are borne by the majority in the poorer global south, human development is not only a matter of economic justice but also climate justice. Investing in vital collective services underpins both.

Robotics

Sidewalk Robots are Now Delivering Food in Miami (msn.com) 74

18-inch tall robots on four wheels zipping across city sidewalks "stopped people in their tracks as they whipped out their camera phones," reports the Florida Sun-Sentinel.

"The bots' mission: To deliver restaurant meals cheaply and efficiently, another leap in the way food comes to our doors and our tables." The semiautonomous vehicles were engineered by Kiwibot, a company started in 2017 to game-change the food delivery landscape...

In May, Kiwibot sent a 10-robot fleet to Miami as part of a nationwide pilot program funded by the Knight Foundation. The program is driven to understand how residents and consumers will interact with this type of technology, especially as the trend of robot servers grows around the country. And though Broward County is of interest to Kiwibot, Miami-Dade County officials jumped on board, agreeing to launch robots around neighborhoods such as Brickell, downtown Miami and several others, in the next couple of weeks... "Our program is completely focused on the residents of Miami-Dade County and the way they interact with this new technology. Whether it's interacting directly or just sharing the space with the delivery bots," said Carlos Cruz-Casas, with the county's Department of Transportation...

Remote supervisors use real-time GPS tracking to monitor the robots. Four cameras are placed on the front, back and sides of the vehicle, which the supervisors can view on a computer screen. [A spokesperson says later in the article "there is always a remote and in-field team looking for the robot."] If crossing the street is necessary, the robot will need a person nearby to ensure there is no harm to cars or pedestrians. The plan is to allow deliveries up to a mile and a half away so robots can make it to their destinations in 30 minutes or less.

Earlier Kiwi tested its sidewalk-travelling robots around the University of California at Berkeley, where at least one of its robots burst into flames. But the Sun-Sentinel reports that "In about six months, at least 16 restaurants came on board making nearly 70,000 deliveries...

"Kiwibot now offers their robotic delivery services in other markets such as Los Angeles and Santa Monica by working with the Shopify app to connect businesses that want to employ their robots." But while delivery fees are normally $3, this new Knight Foundation grant "is making it possible for Miami-Dade County restaurants to sign on for free."

A video shows the reactions the sidewalk robots are getting from pedestrians on a sidewalk, a dog on a leash, and at least one potential restaurant customer looking forward to no longer having to tip human food-delivery workers.
AMD

RISC Vs. CISC Is the Wrong Lens For Comparing Modern x86, ARM CPUs (extremetech.com) 118

Long-time Slashdot reader Dputiger writes: Go looking for the difference between x86 and ARM CPUs, and you'll run into the idea of CISC versus RISC immediately. But 40 years after the publication of David Patterson and David Ditzel's 1981 paper, "The Case for a Reduced Instruction Set Computer," CISC and RISC are poor top-level categories for comparing these two CPU families.
ExtremeTech writes:
The problem with using RISC versus CISC as a lens for comparing modern x86 versus ARM CPUs is that it takes three specific attributes that matter to the x86 versus ARM comparison — process node, microarchitecture, and ISA — crushes them down to one, and then declares ARM superior on the basis of ISA alone. The ISA-centric argument acknowledges that manufacturing geometry and microarchitecture are important and were historically responsible for x86's dominance of the PC, server, and HPC market. This view holds that when the advantages of manufacturing prowess and install base are controlled for or nullified, RISC — and by extension, ARM CPUs — will typically prove superior to x86 CPUs.

The implementation-centric argument acknowledges that ISA can and does matter, but that historically, microarchitecture and process geometry have mattered more. Intel is still recovering from some of the worst delays in the company's history. AMD is still working to improve Ryzen, especially in mobile. Historically, both x86 manufacturers have demonstrated an ability to compete effectively against RISC CPU manufacturers.

Given the reality of CPU design cycles, it's going to be a few years before we really have an answer as to which argument is superior. One difference between the semiconductor market of today and the market of 20 years ago is that TSMC is a much stronger foundry competitor than most of the RISC manufacturers Intel faced in the late 1990s and early 2000s. Intel's 7nm team has got to be under tremendous pressure to deliver on that node.

Nothing in this story should be read to imply that an ARM CPU can't be faster and more efficient than an x86 CPU.

Google

How Reliable Are Modern CPUs? (theregister.com) 64

Slashdot reader ochinko (user #19,311) shares The Register's report about a recent presentation by Google engineer Peter Hochschild. His team discovered machines with higher-than-expected hardware errors that "showed themselves sporadically, long after installation, and on specific, individual CPU cores rather than entire chips or a family of parts." The Google researchers examining these silent corrupt execution errors (CEEs) concluded "mercurial cores" were to blame: CPUs that miscalculated occasionally, under different circumstances, in a way that defied prediction... The errors were not the result of chip architecture design missteps, and they're not detected during manufacturing tests. Rather, Google engineers theorize, the errors have arisen because we've pushed semiconductor manufacturing to a point where failures have become more frequent and we lack the tools to identify them in advance.

In a paper titled "Cores that don't count" [PDF], Hochschild and colleagues Paul Turner, Jeffrey Mogul, Rama Govindaraju, Parthasarathy Ranganathan, David Culler, and Amin Vahdat cite several plausible reasons why the unreliability of computer cores is only now receiving attention, including larger server fleets that make rare problems more visible, increased attention to overall reliability, and software development improvements that reduce the rate of software bugs. "But we believe there is a more fundamental cause: ever-smaller feature sizes that push closer to the limits of CMOS scaling, coupled with ever-increasing complexity in architectural design," the researchers state, noting that existing verification methods are ill-suited for spotting flaws that occur sporadically or as a result of physical deterioration after deployment.

Facebook has noticed the errors, too. In February, the social ad biz published a related paper, "Silent Data Corruption at Scale," that states, "Silent data corruptions are becoming a more common phenomena in data centers than previously observed...."

The risks posed by misbehaving cores include not only crashes, which the existing fail-stop model for error handling can accommodate, but also incorrect calculations and data loss, which may go unnoticed and pose a particular risk at scale. Hochschild recounted an instance where Google's errant hardware conducted what might be described as an auto-erratic ransomware attack. "One of our mercurial cores corrupted encryption," he explained. "It did it in such a way that only it could decrypt what it had wrongly encrypted."
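The defining problem with a mercurial core is that nothing crashes; the result is simply wrong. One generic mitigation (a sketch of the idea, not Google's production tooling) is redundant execution: run the same deterministic work twice and compare the outputs, so a single misbehaving core cannot silently poison the result.

```python
# Minimal sketch of redundant execution to flag silent corruption: run the same
# deterministic computation twice and compare the results. In a real deployment
# the two runs would be pinned to different physical cores (e.g. with
# os.sched_setaffinity on Linux) so one "mercurial core" can't corrupt both.
import hashlib
import os

def checksum_work(data: bytes, rounds: int = 1000) -> str:
    """Deterministic CPU-bound work whose result can be compared across runs."""
    digest = data
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

def run_redundant(data: bytes) -> str:
    first = checksum_work(data)
    second = checksum_work(data)
    if first != second:
        # A mismatch on deterministic input means hardware (or memory) silently
        # corrupted one of the runs -- the failure mode Google calls a CEE.
        raise RuntimeError("silent corruption detected: redundant runs disagree")
    return first

if __name__ == "__main__":
    print(run_redundant(os.urandom(64))[:16], "... redundant runs agree")
```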

How common is the problem? The Register notes that Google's researchers shared a ballpark figure of "a few mercurial cores per several thousand machines," which is similar to the rate reported by Facebook.
Hardware

Apple Working On iPad Pro With Wireless Charging, New iPad Mini (bloomberg.com) 11

An anonymous reader quotes a report from Bloomberg: Apple is working on a new iPad Pro with wireless charging and the first iPad mini redesign in six years, seeking to continue momentum for a category that saw rejuvenated sales during the pandemic. The Cupertino, California-based company is planning to release the new iPad Pro in 2022 and the iPad mini later this year [...]. The main design change in testing for the iPad Pro is a switch to a glass back from the current aluminum enclosure. The updated iPad mini is planned to have narrower screen borders while the removal of its home button has also been tested.

For the new Pro model, the switch to a glass back is being tested, in part, to enable wireless charging for the first time. Making the change in material would bring iPads closer to iPhones, which Apple has transitioned from aluminum to glass backs in recent years. Apple's development work on the new iPad Pro is still early, and the company's plans could change or be canceled before next year's launch [...]. Wireless charging replaces the usual power cable with an inductive mat, which makes it easier for users to top up their device's battery. It has grown into a common feature in smartphones but is a rarity among tablets. Apple added wireless charging to iPhones in 2017 and last year updated it with a magnet-based MagSafe system that ensured more consistent charging speeds.

The company is testing a similar MagSafe system for the iPad Pro. Wireless charging will likely be slower than directly plugging in a charger to the iPad's Thunderbolt port, which will remain as part of the next models. As part of its development of the next iPad Pro, Apple is also trying out technology called reverse wireless charging. That would allow users to charge their iPhone or other gadgets by laying them on the back of the tablet. Apple had previously been working on making this possible for the iPhone to charge AirPods and Apple Watches. In addition to the next-generation iPad Pro and iPad mini, Apple is also working on a thinner version of its entry-level iPad geared toward students. That product is planned to be released as early as the end of this year, about the same time as the new iPad mini.
Apple is still reportedly working on a technology similar to its failed AirPower, a charging mat designed to simultaneously charge an iPhone, Apple Watch and AirPods. People familiar with the matter said it's also internally investigating alternative wireless charging methods that can work over greater distances than an inductive connection.
Power

7-11 Is Opening 500 EV Charging Stations By the End of 2022 (cnet.com) 168

7-11 announced Tuesday that it will be placing 500 EV chargers at 250 stores in the U.S. and Canada by the end of 2022. CNET reports: OK, but if they can't keep the Slurpee machine up and running, what kind of charging can users expect? Well, we don't know, and 7-11 isn't saying, but we do know that they will be DC fast-chargers, and it looks like they'll be supplied by ChargePoint, so we'd bet on anything from 60-ish kilowatts to 125 kilowatts. These new chargers will join 7-11's small network of 22 charging stations at 14 stores in four states, and the whole thing is a part of 7-11's ongoing work to reduce its carbon footprint.
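For a rough sense of what that 60 to 125 kilowatt range means at the curb, charge time is just energy added divided by charger power. The 30 kWh top-up below is an assumed figure for illustration, not anything 7-11 or ChargePoint has announced.

```python
# Back-of-envelope charge times for the 60-125 kW DC fast-charger range the
# article speculates about. The 30 kWh "top-up" is an assumption (roughly
# 100+ miles of range for a typical EV), not a figure from the article.
def minutes_to_add(kwh_added: float, charger_kw: float) -> float:
    return kwh_added / charger_kw * 60

for power_kw in (60, 125):
    print(f"{power_kw} kW charger: ~{minutes_to_add(30, power_kw):.0f} min to add 30 kWh")
# 60 kW -> ~30 min; 125 kW -> ~14 min (ignoring the taper near a full battery)
```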
Wireless Networking

Samsung Will Shut Down the v1 SmartThings Hub This Month (arstechnica.com) 86

Samsung is killing the first-generation SmartThings Hub at the end of the month, kicking off phase two of its plan to shut down the SmartThings ecosystem and force users over to in-house Samsung infrastructure. "Phase one was in October, when Samsung killed the Classic SmartThings app and replaced it with a byzantine disaster of an app that it developed in house," writes Ars Technica's Ron Amadeo. "Phase three will see the shutdown of the SmartThings Groovy IDE, an excellent feature that lets members of the community develop SmartThings device handlers and complicated automation apps." From the report: The SmartThings Hub is basically a Wi-Fi access point -- but for your smart home stuff instead of your phones and laptops. Instead of Wi-Fi, SmartThings is the access point for a Zigbee and Z-Wave network, two ultra low-power mesh networks used by smart home devices. [...] The Hub connects your smart home network to the Internet, giving you access to a control app and connecting to other services like your favorite voice assistant. You might think that killing the old Hub could be a ploy to sell more hardware, but Samsung -- a hardware company -- is actually no longer interested in making SmartThings hardware. The company passed manufacturing for the latest "SmartThings Hub (v3)" to German Internet-of-things company Aeotec. The new Hub is normally $125, but Samsung is offering existing users a dirt-cheap $35 upgrade price.

For users who have to buy a new hub, migrating between hubs in the SmartThings ecosystem is a nightmare. Samsung doesn't provide any kind of migration program, so you have to unpair every single individual smart device from your old hub to pair it to the new one. This means you'll need to perform some kind of task on every light switch, bulb, outlet, and sensor, and you'll have to do the same for any other smart thing you've bought over the years. Doing this on each device is a hassle that usually involves finding the manual to look up the secret "exclusion" input, which is often some arcane Konami code. Picture holding the top button on a paddle light for seven seconds until a status light starts blinking and then opening up the SmartThings app to unpair it. Samsung is also killing the "SmartThings Link for Nvidia Shield" dongle, which let users turn Android TV devices into SmartThings Hubs.

Power

Bill Gates' Next Generation Nuclear Reactor To Be Built In Wyoming (reuters.com) 334

Billionaire Bill Gates' advanced nuclear reactor company TerraPower LLC and PacifiCorp have selected Wyoming to launch the first Natrium reactor project on the site of a retiring coal plant, the state's governor said on Wednesday. Reuters reports: TerraPower, founded by Gates about 15 years ago, and power company PacifiCorp, owned by Warren Buffett's Berkshire Hathaway, said the exact site of the Natrium reactor demonstration plant is expected to be announced by the end of the year. Small advanced reactors, which run on different fuels than traditional reactors, are regarded by some as a critical carbon-free technology that can supplement intermittent power sources like wind and solar as states strive to cut emissions that cause climate change.

The project features a 345 megawatt sodium-cooled fast reactor with molten salt-based energy storage that could boost the system's power output to 500 MW during peak power demand. TerraPower said last year that the plants would cost about $1 billion. Late last year the U.S. Department of Energy awarded TerraPower $80 million in initial funding to demonstrate Natrium technology, and the department has committed additional funding in coming years subject to congressional appropriations.
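The article gives the base and boosted outputs but not how long the boost lasts; TerraPower has elsewhere cited a figure of roughly 5.5 hours, so treating that duration as an assumption, the implied size of the molten-salt store is simple arithmetic.

```python
# Rough sizing of the molten-salt thermal store implied by the Natrium figures.
# The 5.5-hour boost duration is an assumption (a figure TerraPower has cited
# elsewhere), not something stated in the article.
base_mw = 345
peak_mw = 500
boost_hours = 5.5  # assumed

extra_mw = peak_mw - base_mw
storage_mwh = extra_mw * boost_hours
print(f"Boost: {extra_mw} MW for {boost_hours} h -> ~{storage_mwh:.0f} MWh of storage")
# ~850 MWh of dispatchable energy on top of the reactor's steady output
```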

AMD

AMD Unveils Radeon RX 6000M Mobile GPUs For New Breed of All-AMD Gaming Laptops (hothardware.com) 15

MojoKid writes: AMD just took the wraps off its new line of Radeon RX 6000M GPUs for gaming laptops. Combined with its Ryzen 5000 series processors, the company claims all-AMD powered "AMD Advantage" machines will deliver new levels of performance, visual fidelity and value for gamers. AMD unveiled three new mobile GPUs. Sitting at the top is the Radeon RX 6800M, featuring 40 compute units, 40 ray accelerators, a 2,300MHz game clock and 12GB of GDDR6 memory. According to AMD, its flagship Radeon RX 6800M mobile GPU can deliver 120 frames per second at 1440p with a blend of raytracing, compute, and traditional effects.

Next, the new Radeon RX 6700M sports 36 compute units, 36 ray accelerators, a 2,300MHz game clock and 10GB of GDDR6 memory. Finally, the Radeon RX 6600M comes armed with 28 compute units and 28 ray accelerators, a 2,177MHz game clock and 8GB of GDDR6 memory. HotHardware has a deep dive review of a new ASUS ROG Strix G15 gaming laptop with the Radeon RX 6800M on board, as well as an 8-core Ryzen 9 5900HX processor. In the benchmarks, the Radeon RX 6800M-equipped machine puts up numbers that rival GeForce RTX 3070 and 3080 laptop GPUs in traditional rasterized game engines, though it trails a bit in ray tracing enhanced gaming. You can expect this new breed of all-AMD laptops to arrive on the market sometime later this month.

Businesses

Instacart Bets on Robots To Shrink Ranks of 500,000 Gig Shoppers (bloomberg.com) 43

Instacart has an audacious plan to replace its army of gig shoppers with robots -- part of a long-term strategy to cut costs and put its relationship with supermarket chains on a sustainable footing. From a report: The plan, detailed in documents reviewed by Bloomberg, involves building automated fulfillment centers around the U.S., where hundreds of robots would fetch boxes of cereal and cans of soup while humans gather produce and deli products. Some facilities would be attached to existing grocery stores while larger standalone centers would process orders for several locations, according to the documents, which were dated July and December.

Despite working on the strategy for more than a year, however, the company has yet to sign up a single supermarket chain. Instacart had planned to begin testing the fulfillment centers later this year, the documents show. But the company has fallen behind schedule, according to people familiar with the situation. And though the documents mention asking several automation providers to build the technology, Instacart hasn't settled on any, said the people, who requested anonymity to discuss a private matter. In February, the Financial Times reported on elements of the strategy and said Instacart in early 2020 sent out requests for proposals to five robotics companies.

An Instacart spokeswoman said the company was busy buttressing its operations during the pandemic, when it signed up 300,000 new gig workers in a matter of weeks, bringing the current total to more than 500,000. But the delays in getting the automation strategy off the ground could potentially undermine plans to go public this year. Investors know robots will play a critical role in modernizing the $1.4 trillion U.S. grocery industry.

Hardware

The GeForce RTX 3080 Ti is Nvidia's 'New Gaming Flagship' (pcworld.com) 60

Nvidia officially announced the long-awaited GeForce RTX 3080 Ti during its Computex keynote late Monday night, and this $1,200 graphics card looks like an utter beast. The $600 GeForce RTX 3070 Ti also made its debut with faster GDDR6X memory. From a report: All eyes are on the RTX 3080 Ti, though. Nvidia dubbed it GeForce's "new gaming flagship" as the $1,500 RTX 3090 is built for work and play alike, but the new GPU is a 3090 in all but name (and memory capacity). While Nvidia didn't go into deep technical details during the keynote, the GeForce RTX 3080 Ti's specifications page shows it packing a whopping 10,240 CUDA cores -- just a couple hundred less than the 3090's 10,496 count, but massively more than the 8,704 found in the vanilla 3080.

Expect this card to chew through games on par with the best, especially in games that support real-time ray tracing and Nvidia's amazing DLSS feature. The memory system can handle the ride, as it's built using the RTX 3090's upgraded bones. The GeForce RTX 3080 Ti comes with a comfortable 12GB of blazing-fast GDDR6X memory over a wide 384-bit bus, which is half the ludicrous 24GB capacity found in the 3090, but more than enough to handle any gaming workload you throw at it. That's not true with the vanilla RTX 3080, which comes with 10GB of GDDR6X over a smaller bus, as rare titles (like Doom Eternal) can already use more than 10GB of memory when you're playing at 4K resolution with the eye candy cranked to the max. The extra two gigs make the RTX 3080 Ti feel much more future-proof.
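The bandwidth advantage of that wider bus is easy to quantify. The 19 Gbps GDDR6X data rate below is an assumption based on commonly reported specifications; the article itself only mentions bus widths and capacities.

```python
# Peak memory bandwidth = (bus width in bytes) x (per-pin data rate).
# The 19 Gbps GDDR6X data rate is an assumption from commonly reported specs;
# the article only mentions the bus widths.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(f"RTX 3080 Ti (384-bit): ~{bandwidth_gbs(384, 19):.0f} GB/s")
print(f"RTX 3080    (320-bit): ~{bandwidth_gbs(320, 19):.0f} GB/s")
# ~912 GB/s vs ~760 GB/s -- where the "wider bus" advantage shows up
```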

Data Storage

Seagate 'Exploring' Possible New Line of Crypto-Specific Hard Drives (techradar.com) 47

In a Q&A with TechRadar, storage hardware giant Seagate revealed it is keeping a close eye on the crypto space, with a view to potentially launching a new line of purpose-built drives. From the report: Asked whether companies might develop storage products specifically for cryptocurrency use cases, Jason M. Feist, who heads up Seagate's emerging products arm, said it was a "possibility." Feist said he could offer no concrete information at this stage, but did suggest the company is "exploring this opportunity and imagines others may be as well."
Intel

Intel's latest 11th Gen Processor Brings 5.0GHz Speeds To Thin and Light Laptops (theverge.com) 51

Intel made a splash earlier in May with the launch of its first 11th Gen Tiger Lake H-series processors for more powerful laptops, but at Computex 2021, the company is also announcing a pair of new U-series chips -- one of which marks the first 5.0GHz clock speed for the company's U-series lineup of lower voltage chips. From a report: Specifically, Intel is announcing the Core i7-1195G7 -- its new top of the line chip in the U-series range -- and the Core i5-1155G7, which takes the crown of Intel's most powerful Core i5-level chip, too. Like the original 11th Gen U-series chips, the new chips operate in the 12W to 28W range. Both new chips are four core / eight thread configurations, and feature Intel's Iris Xe integrated graphics (the Core i7-1195G7 comes with 96 EUs, while the Core i5-1155G7 has 80 EUs.)

The Core i7-1195G7 features a base clock speed of 2.9GHz, but cranks up to a 5.0GHz maximum single core speed using Intel's Turbo Boost Max 3.0 technology. The Core i5-1155G7, on the other hand, has a base clock speed of 2.5GHz and a boosted speed of 4.5GHz. Getting to 5GHz out of the box is a fairly recent development for laptop CPUs, period: Intel's first laptop processor to cross the 5GHz mark arrived in 2019.

Supercomputing

World's Fastest AI Supercomputer Built from 6,159 NVIDIA A100 Tensor Core GPUs (nvidia.com) 57

Slashdot reader 4wdloop shared this report from NVIDIA's blog, joking that maybe this is where all NVIDIA's chips are going: It will help piece together a 3D map of the universe, probe subatomic interactions for green energy sources and much more. Perlmutter, officially dedicated Thursday at the National Energy Research Scientific Computing Center (NERSC), is a supercomputer that will deliver nearly four exaflops of AI performance for more than 7,000 researchers. That makes Perlmutter the fastest system on the planet on the 16- and 32-bit mixed-precision math AI uses. And that performance doesn't even include a second phase coming later this year to the system based at Lawrence Berkeley National Lab.

More than two dozen applications are getting ready to be among the first to ride the 6,159 NVIDIA A100 Tensor Core GPUs in Perlmutter, the largest A100-powered system in the world. They aim to advance science in astrophysics, climate science and more. In one project, the supercomputer will help assemble the largest 3D map of the visible universe to date. It will process data from the Dark Energy Spectroscopic Instrument (DESI), a kind of cosmic camera that can capture as many as 5,000 galaxies in a single exposure. Researchers need the speed of Perlmutter's GPUs to capture dozens of exposures from one night to know where to point DESI the next night. Preparing a year's worth of the data for publication would take weeks or months on prior systems, but Perlmutter should help them accomplish the task in as little as a few days.
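The "nearly four exaflops" figure lines up with published per-GPU peak rates: an A100 is rated at 312 TFLOPS of dense FP16 tensor math, or 624 TFLOPS with structured sparsity, and multiplying the sparse figure by the GPU count lands almost exactly on the quoted number. This is a back-of-envelope check, not an official breakdown from NERSC or NVIDIA.

```python
# Sanity check of the "nearly four exaflops of AI performance" claim using
# published A100 peak rates (312 TFLOPS dense FP16, 624 TFLOPS with 2:4 sparsity).
gpus = 6159
tflops_sparse_fp16 = 624

total_exaflops = gpus * tflops_sparse_fp16 / 1e6  # 1 exaflop = 1e6 teraflops
print(f"{gpus} GPUs x {tflops_sparse_fp16} TFLOPS = {total_exaflops:.2f} exaflops")
# ~3.84 exaflops, i.e. "nearly four exaflops" of mixed-precision AI throughput
```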

"I'm really happy with the 20x speedups we've gotten on GPUs in our preparatory work," said Rollin Thomas, a data architect at NERSC who's helping researchers get their code ready for Perlmutter. DESI's map aims to shed light on dark energy, the mysterious physics behind the accelerating expansion of the universe.

A similar spirit fuels many projects that will run on NERSC's new supercomputer. For example, work in materials science aims to discover atomic interactions that could point the way to better batteries and biofuels. Traditional supercomputers can barely handle the math required to generate simulations of a few atoms over a few nanoseconds with programs such as Quantum Espresso. But by combining their highly accurate simulations with machine learning, scientists can study more atoms over longer stretches of time. "In the past it was impossible to do fully atomistic simulations of big systems like battery interfaces, but now scientists plan to use Perlmutter to do just that," said Brandon Cook, an applications performance specialist at NERSC who's helping researchers launch such projects. That's where Tensor Cores in the A100 play a unique role. They accelerate both the double-precision floating point math for simulations and the mixed-precision calculations required for deep learning.
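The point about Tensor Cores handling both double-precision simulation math and the mixed-precision math deep learning uses comes down to a precision-for-throughput trade. The sketch below is illustrative only: it runs on the CPU with NumPy rather than on Tensor Cores, and simply shows how much accuracy a matrix product gives up when dropped from FP64 to FP32 or FP16.

```python
# Illustration of the precision trade-off behind mixed-precision computing:
# the same matrix product in FP64, FP32 and FP16. Runs on the CPU with NumPy
# purely for illustration; Tensor Cores perform the equivalent arithmetic in
# hardware (with FP16 inputs typically accumulated in FP32).
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((512, 512))
b = rng.standard_normal((512, 512))

reference = a @ b  # FP64 reference result

for dtype in (np.float32, np.float16):
    approx = (a.astype(dtype) @ b.astype(dtype)).astype(np.float64)
    rel_err = np.linalg.norm(approx - reference) / np.linalg.norm(reference)
    print(f"{np.dtype(dtype).name}: relative error ~ {rel_err:.1e}")
# Typical output: FP32 error near 1e-6, FP16 near 1e-3 -- acceptable for
# training neural networks, not for double-precision physics simulations.
```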

Graphics

Resale Prices Triple for NVIDIA Chips as Gamers Compete with Bitcoin Miners (yahoo.com) 108

"In the niche world of customers for high-end semiconductors, a bitter feud is pitting bitcoin miners against hardcore gamers," reports Quartz: At issue is the latest line of NVIDIA graphics cards — powerful, cutting-edge chips with the computational might to display the most advanced video game graphics on the market. Gamers want the chips so they can experience ultra-realistic lighting effects in their favorite games. But they can't get their hands on NVIDIA cards, because miners are buying them up and adapting them to crunch cryptographic codes and harvest digital currency. The fierce competition to buy chips — combined with a global semiconductor shortage — has driven resale prices up as much as 300%, and led hundreds of thousands of desperate consumers to sign up for daily raffles for the right to buy chips at a significant mark-up.

To broker a peace between its warring customers, NVIDIA is, essentially, splitting its cutting-edge graphics chips into two dumbed-down products: GeForce for gamers and the Cryptocurrency Mining Processor (CMP) for miners. GeForce is the latest NVIDIA graphics card — except key parts of it have been slowed down to make it less valuable for miners racing to solve crypto puzzles. CMP is based on a slightly older version of NVIDIA's graphics card which has been stripped of all of its display outputs, so gamers can't use it to render graphics.

NVIDIA's goal in splitting its product offerings is to incentivize miners to only buy CMP chips, and leave the GeForce chips for the gamers. "What we hope is that the CMPs will satisfy the miners...[and] steer our GeForce supply to gamers," said CEO Jensen Huang on a May 26 conference call with investors and analysts... It won't be easy to keep the miners at bay, however. NVIDIA tried releasing slowed-down graphics chips in February in an effort to deter miners from buying them, but it didn't work. The miners quickly figured out how to hack the chips and make them perform at full-speed again.
