Team Constructs Silicon 2-qubit Gate, Enabling Construction of Quantum Computers 90

monkeyzoo writes: A team at the University of New South Wales (UNSW) in Sydney has made a crucial advance in quantum computing. The work, published in the journal Nature (abstract), demonstrates a two-qubit logic gate — the central building block of a quantum computer — and, significantly, does so in silicon. This makes the building of a quantum computer much more feasible, since it is based on the same manufacturing technology as today's computer industry. Until now, it had not been possible to make two quantum bits 'talk' to each other — and thereby create a logic gate — using silicon. But the UNSW team — working with Professor Kohei M. Itoh of Japan's Keio University — has done just that for the first time. The result means that all of the physical building blocks for a silicon-based quantum computer have now been successfully constructed, allowing engineers to finally begin the task of designing and building a functioning quantum computer.
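A two-qubit gate can be pictured as a small unitary matrix acting on the joint state of two qubits. As an illustration (standard textbook linear algebra, not the UNSW team's silicon implementation), here is the canonical CNOT gate in NumPy:

```python
import numpy as np

# The canonical two-qubit gate: CNOT flips the target qubit when the
# control qubit is |1>. It is a 4x4 unitary on the joint two-qubit state.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Put the control qubit in a superposition with a Hadamard gate, then
# apply CNOT: the result is an entangled Bell state.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
zero = np.array([1, 0], dtype=complex)          # the |0> state
state = CNOT @ np.kron(H @ zero, zero)          # (|00> + |11>) / sqrt(2)
print(np.round(state.real, 3))
```

Combined with single-qubit rotations, a CNOT-equivalent gate suffices for universal quantum computation, which is why a working two-qubit gate completes the set of basic building blocks.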

NASA's Hurricane Model Resolution Increases Nearly 10-Fold Since Katrina 89

zdburke writes: Thanks to improvements in satellites and on-the-ground computing power, NASA's ability to model hurricane data has come a long way in the ten years since Katrina devastated New Orleans. Their blog notes, "Today's models have up to ten times the resolution of those during Hurricane Katrina and allow for a more accurate look inside the hurricane. Imagine going from video game figures made of large chunky blocks to detailed human characters that visibly show beads of sweat on their forehead." Gizmodo covered the post too and added some technical details, noting that, "the supercomputer has more than 45,000 processor cores and runs at 1.995 petaflops."
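The jump from "chunky blocks" to fine detail is computationally expensive. A back-of-the-envelope sketch (assuming a 3-D grid whose time step shrinks along with the grid spacing, which is an illustrative simplification, not NASA's actual model design) shows why a 10x resolution increase demands supercomputer-scale hardware:

```python
# Rough sketch of why a 10x finer model grid demands vastly more computing.
# Assumes a 3-D grid plus a time step that shrinks in proportion to the
# grid spacing (the CFL condition) -- illustrative, not NASA's real model.

def relative_cost(refinement):
    """Cost multiplier when grid spacing shrinks by `refinement` in x, y, z,
    with the time step shortened by the same factor."""
    return refinement ** 3 * refinement  # more grid points * more time steps

print(relative_cost(10))  # a 10x finer grid costs ~10,000x the compute
```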

IBM 'TrueNorth' Neuro-Synaptic Chip Promises Huge Changes -- Eventually 97

JakartaDean writes: Each of IBM's "TrueNorth" chips contains 5.4 billion transistors and runs on 70 milliwatts. The chips are designed to behave like neurons—the basic building blocks of biological brains. Dharmendra Modha, the head of IBM's cognitive computing group, says a system of 24 connected chips simulates 48 million neurons, roughly the same number rodents have.

Whereas conventional chips are wired to execute particular "instructions," the TrueNorth juggles "spikes," much simpler pieces of information analogous to the pulses of electricity in the brain. Spikes, for instance, can show the changes in someone's voice as they speak—or changes in color from pixel to pixel in a photo. "You can think of it as a one-bit message sent from one neuron to another," says one of the chip's chief designers. The chips are designed not for training neural networks, but for executing them. This has significant implications for consumer AI: big companies with lots of resources could focus on the training, while individual TrueNorth chips in people's gadgets handle the execution.
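The spike idea can be sketched with a textbook leaky integrate-and-fire neuron. This is a generic model for illustration, not IBM's actual TrueNorth neuron circuit:

```python
# A minimal leaky integrate-and-fire neuron -- a textbook sketch of the
# "spike" idea described above, not IBM's proprietary TrueNorth model.
# Each output is a one-bit event: 1 when the membrane potential crosses
# the threshold, 0 otherwise.

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x   # leak, then integrate the input
        if potential >= threshold:
            spikes.append(1)               # fire a one-bit spike...
            potential = 0.0                # ...and reset
        else:
            spikes.append(0)
    return spikes

# A steady weak input makes the neuron fire periodically.
print(lif_neuron([0.4] * 10))  # [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Because each message is a single bit rather than a wide arithmetic word, hardware built around this scheme can run on very little power, which is the point of the 70-milliwatt figure above.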

How Weather Modeling Gets Better 43

Dr_Ish writes: Bob Henson over at Weather Underground has posted a fascinating discussion of the recent improvements made to the major weather models that are used to forecast hurricanes and the like. The post also included interesting links that explain more about the models. Quoting: "The latest version of the ECMWF model, introduced in May, has significant changes to model physics and the ways in which observations are brought into and used within the model. The overall improvements include better portrayal of clouds and precipitation, including a more accurate depiction of intense rainfall. The main effect of the model upgrade for tropical cyclones is slightly lower central pressure. During the first 3 days of a forecast, the ECMWF has tended to have a slight weak bias on tropical cyclones; the new version is closer to the mark."

Obama's New Executive Order Says the US Must Build an Exascale Supercomputer 223

Jason Koebler writes: President Obama has signed an executive order authorizing a new supercomputing research initiative with the goal of creating the fastest supercomputers ever devised. The National Strategic Computing Initiative, or NSCI, will attempt to build the first ever exascale computer, 30 times faster than today's fastest supercomputer. Motherboard reports: "The initiative will primarily be a partnership between the Department of Energy, Department of Defense, and National Science Foundation, which will be designing supercomputers primarily for use by NASA, the FBI, the National Institutes of Health, the Department of Homeland Security, and NOAA. Each of those agencies will be allowed to provide input during the early stages of the development of these new computers."
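The "30 times faster" figure lines up with the then-fastest machine, China's Tianhe-2, rated at roughly 33.9 petaflops on the Linpack benchmark:

```python
# Sanity check on the "30 times faster" claim: one exaflops versus
# Tianhe-2, the fastest supercomputer at the time (~33.9 petaflops Linpack).
exaflops = 1e18           # 10^18 floating-point operations per second
tianhe2 = 33.86e15        # Tianhe-2's Linpack rating in flops
print(round(exaflops / tianhe2, 1))  # ~29.5, i.e. roughly 30x
```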

Cray To Build Australia's Fastest Supercomputer 54

Bismillah writes: US supercomputer vendor Cray has scored the contract to build the Australian Bureau of Meteorology's new system, said to be capable of 1.6 petaFLOPS and with an upgrade option in three years' time to hit 5 petaFLOPS. From the iTnews story: "The increase in capacity will allow the BoM to deal with growth in the 1TB of data it collects every day, which it expects to increase by 30 percent every 18 months to two years. It will also allow the agency to collect new areas of information it previously lacked the capacity for. 'The new observation platforms that are coming online are bringing quite a lot more data,' supercomputer program director Tim Pugh told iTnews."
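That growth rate compounds quickly. A quick sketch of the arithmetic, using the BoM's stated figures (1 TB/day growing 30 percent every 18 months, taking the shorter end of their 18-month-to-two-year range):

```python
# Compounding the BoM's stated data growth: 1 TB/day today, growing
# 30% every 18 months (the shorter end of their quoted range).

def daily_volume_tb(years, rate=0.30, period_years=1.5, start_tb=1.0):
    """Projected daily data volume after `years` of compound growth."""
    return start_tb * (1 + rate) ** (years / period_years)

for y in (0, 3, 6, 9):
    print(f"year {y}: {daily_volume_tb(y):.2f} TB/day")
```

On these assumptions the daily intake roughly doubles about every four years, which is why the contract builds in a mid-life upgrade option.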

Ask Slashdot: Best Bang-for-the-Buck HPC Solution? 150

An anonymous reader writes: We are looking into procuring a FEA/CFD machine for our small company. While I know workstations well, the multi-socket rack cluster solutions are foreign to me. On one end of the spectrum, there are companies like HP and Cray that offer impressive setups for millions of dollars (out of our league). On the other end, there are quad-socket mobos from Supermicro and Intel, for 8-18 core CPUs that cost thousands of dollars apiece.

Where do we go from here? Is it even reasonable to order $50k worth of components and put together our own high-performance, reasonably-priced blade cluster? Or is this folly, best left to experts? Who are these experts if we need them?

And what is the better choice here: 16-core Opterons at 2.6 GHz, or 8-core Xeons at 3.4 GHz? Are power and thermals limiting factors? (A full rack cabinet would consume something like 25 kW, it seems.) There seems to be precious little straightforward information about this on the net.
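As a starting point for the CPU question, a back-of-the-envelope peak-FLOPS comparison of the two options the submitter mentions. The FLOPs-per-cycle figures below are assumed values typical of that era's AVX/FMA units, not vendor-quoted specs, so treat this as a sketch, not a recommendation:

```python
# Back-of-the-envelope peak-FLOPS comparison of the two options mentioned,
# using peak = sockets * cores * clock(GHz) * FLOPs/cycle. The FLOPs-per-
# cycle values are assumptions typical of the era, not vendor figures.

def peak_gflops(sockets, cores, ghz, flops_per_cycle):
    return sockets * cores * ghz * flops_per_cycle

opteron = peak_gflops(4, 16, 2.6, 8)    # quad-socket 16-core Opteron board
xeon = peak_gflops(4, 8, 3.4, 16)       # quad-socket 8-core Xeon board
print(f"Opteron board: {opteron:.0f} GFLOPS, Xeon board: {xeon:.0f} GFLOPS")

# Power sanity check: 25 kW per rack at an assumed ~350 W per quad-socket
# board allows on the order of 70 boards per rack, so thermals matter.
print(25_000 // 350)
```

Real FEA/CFD throughput is usually bound by memory bandwidth and interconnect rather than peak FLOPS, which is one reason benchmarking your actual solver on loaner hardware beats spec-sheet math.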

Supercomputing Cluster Immersed In Oil Yields Extreme Efficiency 67

1sockchuck writes: A new supercomputing cluster immersed in tanks of dielectric fluid has posted extreme efficiency ratings. The Vienna Scientific Cluster 3 combines several efficiency techniques to create a system that is stingy in its use of power, cooling and water. VSC3 recorded a PUE (Power Usage Effectiveness) of 1.02, putting it in the realm of data centers run by Google and Facebook. The system avoids the use of chillers and air handlers, and doesn't require any water to cool the fluid in the cooling tanks. Limiting use of water is a growing priority for data center operators, as cooling towers can use large volumes of water resources. The VSC3 system packs 600 teraflops of computing power into 1,000 square feet of floor space.
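PUE is simply total facility power divided by the power delivered to the IT equipment, so a 1.02 rating means only 2 percent overhead for cooling and distribution. The overhead figures below are illustrative, not VSC3's measured numbers:

```python
# PUE = total facility power / IT equipment power. A PUE of 1.02 means
# only 2% of the facility's power goes to cooling and other overhead.
# The kW figures here are illustrative, not VSC3's measured load.

def pue(it_kw, overhead_kw):
    return (it_kw + overhead_kw) / it_kw

print(round(pue(1000, 20), 2))   # 1.02: 20 kW of overhead per MW of IT load
print(round(pue(1000, 800), 2))  # 1.8: closer to a typical enterprise site
```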

AMD Outlines Plans For Zen-Based Processors, First Due In 2016 166

crookedvulture writes: AMD laid out its plans for processors based on its all-new Zen microarchitecture today, promising 40% higher performance-per-clock from the x86 CPU core. Zen will use simultaneous multithreading to execute two threads per core, and it will be built using "3D" FinFETs. The first chips are due to hit high-end desktops and servers next year. In 2017, Zen will combine with integrated graphics in smaller APUs designed for desktops and notebooks. AMD also plans to produce a high-performance server APU with a "transformational memory architecture" likely similar to the on-package DRAM being developed for the company's discrete graphics processors. This chip could give AMD a credible challenger in the HPC and supercomputing markets—and it could also make its way into laptops and desktops.

Humans Dominating Poker Super Computer 93

New submitter IoTdude writes: The Claudico supercomputer uses an algorithm to cope with the gargantuan complexity of Heads-Up No-Limit Texas Hold'em by abstracting the game's vast space of possible decisions. Claudico also updates its strategy as it goes along, but its basic approach to the game involves getting into every hand by calling bets. And it's not working out so far. Halfway through the competition, the four human pros had a cumulative lead of 626,892 chips. Though much could change in the week remaining, a lead of around 600,000 chips is considered statistically significant.

Nuclear Fusion Simulator Among Software Picked For US's Summit Supercomputer 57

An anonymous reader writes: Today, The Register learned of 13 science projects approved by boffins at the US Department of Energy to run on the 300-petaFLOPS Summit. These software packages, selected for the Center for Accelerated Application Readiness (CAAR) program, will be ported to the massively parallel machine in the hope that they will make full use of the supercomputer's architecture. They range from astrophysics, biophysics, chemistry, and climate modeling to combustion engineering, materials science, nuclear physics, plasma physics and seismology.

US Blocks Intel From Selling Xeon Chips To Chinese Supercomputer Projects 229

itwbennett writes: U.S. government agencies have stopped Intel from selling microprocessors for China's supercomputers, apparently reflecting concern about their use in nuclear tests. In February, four supercomputing institutions in China were placed on a U.S. government list that effectively bans them from receiving certain U.S. exports. The institutions were involved in building Tianhe-2 and Tianhe-1A, both of which have allegedly been used for 'nuclear explosive activities,' according to a notice (PDF) posted by the U.S. Department of Commerce. Intel has been selling its Xeon chips to Chinese supercomputers for years, so the ban represents a blow to its business.

US Pens $200 Million Deal For Massive Nuclear Security-Focused Supercomputer 74

An anonymous reader writes: For the first time in over twenty years of supercomputing history, a chipmaker [Intel] has been awarded the contract to build a leading-edge national computing resource. This machine, expected to reach a peak performance of 180 petaflops, will provide massive compute power to Argonne National Laboratory, which will receive the HPC gear in 2018. Supercomputer maker Cray, which itself has had a remarkable couple of years contract-wise in government and commercial spheres, will be the integrator and manufacturer of the "Aurora" super. This machine will be a next-generation variant of its "Shasta" supercomputer line. The new $200 million supercomputer is set to be installed at Argonne's Leadership Computing Facility in 2018, rounding out a trio of systems aimed at bolstering nuclear security initiatives as well as pushing the performance of key technical computing applications valued by the Department of Energy and other agencies.

NSF Commits $16M To Build Cloud-Based and Data-Intensive Supercomputers 29

aarondubrow writes: As supercomputing becomes central to the work and progress of researchers in all fields, new kinds of computing resources and more inclusive modes of interaction are required. The National Science Foundation announced $16M in awards to support two new supercomputing acquisitions for the open science community. The systems — "Bridges" at the Pittsburgh Supercomputing Center and "Jetstream," co-located at the Indiana University Pervasive Technology Institute and The University of Texas at Austin's Texas Advanced Computing Center — respond to the needs of the scientific computing community for more high-end, large-scale computing resources while helping to create a more inclusive computing environment for science and engineering. Reader 1sockchuck adds this article about why funding for the development of supercomputers is more important than ever: America's high-performance computing (HPC) community faces funding challenges and growing competition from China and other countries. At last week's SC14 conference, leading researchers focused on outlining the societal benefits of their work, and how it touches the daily lives of Americans. "When we talk at these conferences, we tend to talk to ourselves," said Wilf Pinfold, director of research and advanced technology development at Intel Federal. "We don't do a good job communicating the importance of what we do to a broader community." Why the focus on messaging? Funding for American supercomputing has been driven by the U.S. government, which is in a transition with implications for HPC funding. As ComputerWorld notes, climate change skeptic Ted Cruz is rumored to be in line to chair a Senate committee that oversees NASA and the NSF.

Does Being First Still Matter In America? 247

dcblogs writes: At the supercomputing conference, SC14, this week, a U.S. Dept. of Energy official said the government has set a goal of 2023 as its delivery date for an exascale system. It may be taking a risky path with that amount of lead time because of increasing international competition. There was a time when the U.S. didn't settle for second place. President John F. Kennedy delivered his famous "we choose to go to the moon" speech in 1962, and seven years later a man walked on the moon. The U.S. exascale goal is nine years away. China, Europe and Japan all have major exascale efforts, and U.S. government funding for supercomputing has already dropped. The European forecast of Hurricane Sandy in 2012 was so far ahead of U.S. models in predicting the storm's path that the National Oceanic and Atmospheric Administration was called before Congress to explain how it happened. It was told by a U.S. official that NOAA wasn't keeping up in computational capability. It's still not keeping up. Cliff Mass, a professor of meteorology at the University of Washington, wrote on his blog last month that the U.S. is "rapidly falling behind leading weather prediction centers around the world" because it has yet to catch up with Europe in computational capability. That criticism followed the U.K. Met Office's recent $128 million purchase of a Cray supercomputer.

US DOE Sets Sights On 300 Petaflop Supercomputer 127

dcblogs writes: U.S. officials Friday announced plans to spend $325 million on two new supercomputers, one of which may eventually be built to support speeds of up to 300 petaflops. The U.S. Department of Energy, the major funder of supercomputers used for scientific research, wants to have the two systems – each with a base speed of 150 petaflops – possibly running by 2017. Going beyond the base speed to reach 300 petaflops will take additional government approvals. If the world stands still, the U.S. may conceivably regain the lead in supercomputing speed from China with these new systems. How adequate this planned investment will look three years from now is a question. Lawmakers weren't reading from the same script as U.S. Energy Secretary Ernest Moniz when it came to assessing the U.S.'s place in the supercomputing world. Moniz said the awards "will ensure the United States retains global leadership in supercomputing." But Rep. Chuck Fleischmann (R-Tenn.) put U.S. leadership in the past tense. "Supercomputing is one of those things that we can step up and lead the world again," he said.

Researchers Simulate Monster EF5 Tornado 61

New submitter Orp writes: I am a member of a research team that created a supercell thunderstorm simulation that is getting a lot of attention. Presented at the 27th Annual Severe Local Storms Conference in Madison, Wisconsin, Leigh Orf's talk was produced entirely as high-def video and put on YouTube shortly after the presentation. In the simulation, the storm's updraft is so strong that it essentially peels rain-cooled air near the surface upward and into the storm's updraft, which appears to play a key role in maintaining the tornado. The simulation was based upon the environment that produced the May 24, 2011 outbreak, which included a long-track EF5 tornado near El Reno, Oklahoma (not to be confused with the May 31, 2013 EF5 tornado that killed three storm researchers).

Interviews: Ask CMI Director Alex King About Rare Earth Mineral Supplies 62

The modern electronics industry relies on inputs and supply chains, both material and technological, and none of them are easy to bypass. These include, besides expertise and manufacturing facilities, the actual materials that go into electronic components. Some of them are as common as silicon; rare earth minerals, not so much. One story linked from Slashdot a few years back predicted that then-known supplies would be exhausted by 2017, though such predictions of scarcity are notoriously hard to get right, as people (and prices) adjust to changes in supply. There's no denying, though, that there's been a crunch on rare earths over the last several years. The minerals themselves aren't necessarily rare in an absolute sense, but they're expensive to extract. The most economically viable deposits are found in China, and rising prices for exports to the U.S., the EU, and Japan have raised political hackles. At the same time, those rising prices have spurred exploration and reexamination of known deposits off the coast of Japan, in the midwestern U.S., and elsewhere.

Alex King is director of the Critical Materials Institute, a part of the U.S. Department of Energy's Ames Laboratory. CMI is heavily involved in making rare earth minerals slightly less rare by means of supercomputer analysis; researchers there are approaching the ongoing crunch by looking both for substitute materials for things like gallium, indium, and tantalum, and for easier ways of separating out the individual rare earths (a difficult process). One team there is working with "ligands – molecules that attach with a specific rare-earth – that allow metallurgists to extract elements with minimal contamination from surrounding minerals" to simplify the extraction process. We'll be talking with King soon; what questions would you like to see posed? (This 18-minute TED talk from King is worth watching first, as is this Q&A.)

16-Petaflops, £97m Cray To Replace IBM At UK Meteorological Office 125

Memetic writes: The UK weather forecasting service is replacing its IBM supercomputer with a Cray XC40 containing 17 petabytes of storage and capable of 16 petaFLOPS. This is Cray's biggest contract outside the U.S. With 480,000 CPU cores, it should be 13 times faster than the current system. It will weigh 140 tons. The aim is to enable more accurate modeling of the unstable UK climate, with UK-wide forecasts at a resolution of 1.5 km run hourly, rather than every three hours, as currently happens. (Here's a similar system from the U.S.)

First Demonstration of Artificial Intelligence On a Quantum Computer 98

KentuckyFC writes: Machine learning algorithms use a training dataset to learn how to recognize features in images and use this 'knowledge' to spot the same features in new images. The time required for this task grows polynomially with the number of images in the training set and the complexity of the "learned" feature. So it's no surprise that quantum computers ought to be able to speed this process up dramatically. Indeed, a group of theoretical physicists last year designed a quantum algorithm that solves the problem in logarithmic rather than polynomial time, a significant improvement.

Now, a Chinese team has successfully implemented this artificial intelligence algorithm on a working quantum computer, for the first time. The information processor is a standard nuclear magnetic resonance quantum computer capable of handling 4 qubits. The team trained it to recognize the difference between the characters '6' and '9' and then asked it to classify a set of handwritten 6s and 9s accordingly, which it did successfully. The team says this is the first time that this kind of artificial intelligence has ever been demonstrated on a quantum computer and opens the way to the more rapid processing of other big data sets — provided, of course, that physicists can build more powerful quantum computers.
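The classification task itself can be sketched classically with a nearest-centroid rule: assign a character to whichever reference pattern it most resembles. The 4x4 "images" below are toy data invented for illustration; the quantum experiment used its own dataset and a quantum machine-learning algorithm, not this rule:

```python
import numpy as np

# A classical sketch of the classification task described above: assign a
# handwritten character to whichever class centroid it is closest to.
# These 4x4 binary "images" are toy data for illustration only; they are
# not the dataset or the algorithm from the quantum experiment.

six = np.array([[0, 1, 1, 0],
                [1, 0, 0, 0],
                [1, 1, 1, 0],
                [1, 1, 1, 0]], dtype=float)
nine = np.array([[0, 1, 1, 1],
                 [0, 1, 1, 1],
                 [0, 0, 0, 1],
                 [0, 1, 1, 0]], dtype=float)

centroids = {"6": six, "9": nine}

def classify(image, centroids):
    """Return the label of the nearest centroid (Euclidean distance)."""
    return min(centroids, key=lambda k: np.linalg.norm(image - centroids[k]))

# A noisy '6' (one pixel flipped) is still closer to the '6' centroid.
noisy_six = six.copy()
noisy_six[0, 0] = 1
print(classify(noisy_six, centroids))  # 6
```

The distance computation at the heart of this rule is exactly the kind of inner-product arithmetic where quantum algorithms promise their speedup as datasets grow.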