Government

US Federal Budget Proposal Cuts Science Funding (washingtonpost.com) 648

hey! writes: The U.S. Office of Management and Budget has released a budget "blueprint" which outlines substantial cuts in both basic research and applied technology funding. The proposal includes a whopping 18% reduction in National Institutes of Health medical research. NIH does get a new $500 million fund to track emerging infectious agents like Zika in the U.S., but loses its funding to monitor those agents overseas. The Department of Energy's research programs also take an 18% cut, potentially affecting basic physics research, high-energy physics, fusion research, and supercomputing. The Advanced Research Projects Agency-Energy (ARPA-E) gets the ax, as does the Advanced Technology Vehicle Manufacturing Program, which enabled Tesla to manufacture its Model S sedan. The EPA loses all climate research funding, and about half the research funding targeted at the human health impacts of pollution. The Energy Star program is eliminated; Superfund funding is drastically reduced. The Chesapeake Bay and Great Lakes cleanup programs are also eliminated, as is all screening of pesticides for endocrine disruption. In the Department of Commerce, Sea Grant is eliminated, along with all coastal zone research funding. The existing GOES and JPSS weather satellites keep their funding, but JPSS-3 and -4 appear to be getting the ax. Support for transferring federally funded research and technology to small and mid-sized manufacturers is eliminated. NASA gets a slight trim and a new focus on deep space exploration, paid for by the elimination of Earth Science programs. You can read more about this "blueprint" in Nature, Science, and the Washington Post, which broke the story. The Environmental Protection Agency, the State Department and the Agriculture Department take the hardest hits, while the Defense Department, Department of Homeland Security, and Department of Veterans Affairs see their budgets grow.
China

NSA, DOE Say China's Supercomputing Advances Put US At Risk (computerworld.com) 130

dcblogs quotes a report from Computerworld: Advanced computing experts at the National Security Agency and the Department of Energy are warning that China is "extremely likely" to take leadership in supercomputing as early as 2020, unless the U.S. acts quickly to increase spending. China's supercomputing advances are putting at risk not only national security, but also U.S. leadership in high-tech manufacturing. If China succeeds, it may "undermine profitable parts of the U.S. economy," according to a report titled U.S. Leadership in High Performance Computing by HPC technical experts at the NSA, the DOE, the National Science Foundation and other agencies. The report stems from a workshop held in September that was attended by 60 people, many of them scientists; 40 work in government, with the balance representing industry and academia. "Meeting participants, especially those from industry, noted that it can be easy for Americans to draw the wrong conclusions about what HPC investments by China mean -- without considering China's motivations," the report states. "These participants stressed that their personal interactions with Chinese researchers and at supercomputing centers showed a mindset where computing is first and foremost a strategic capability for improving the country; for pulling a billion people out of poverty; for supporting companies that are looking to build better products, or bridges, or rail networks; for transitioning away from a role as a low-cost manufacturer for the world; for enabling the economy to move from 'Made in China' to 'Made by China.'"
Supercomputing

D-Wave Open Sources Its Quantum Computing Tool (gcn.com) 45

Long-time Slashdot reader haruchai writes: Canadian company D-Wave has released their qbsolv tool on GitHub to help bolster interest in, and familiarity with, quantum computing. "qbsolv is a metaheuristic or partitioning solver that solves a potentially large QUBO problem by splitting it into pieces that are solved either on a D-Wave system or via a classical tabu solver," they write on GitHub.

This joins the QMASM macro assembler for D-Wave systems, a tool written in Python by Scott Pakin of Los Alamos National Labs. D-Wave president Bo Ewald says "D-Wave is driving the hardware forward but we need more smart people thinking about applications, and another set thinking about software tools."
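For readers unfamiliar with the problem class: a QUBO (quadratic unconstrained binary optimization) instance asks for the binary vector x minimizing x^T Q x. Here's a minimal Python sketch of a toy instance solved by brute force -- not qbsolv itself (which is written in C and adds the partitioning and tabu machinery), just an illustration of what the solver is minimizing:

    import itertools

    # A QUBO instance asks for the binary vector x minimizing x^T Q x.
    # Toy 3-variable instance: linear terms on the diagonal, couplings off it.
    Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,   # linear coefficients
         (0, 1):  2.0, (1, 2):  2.0}                 # quadratic couplings

    def energy(x):
        """Objective value x^T Q x for a binary assignment x."""
        return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

    # Brute force is fine for 3 variables; qbsolv's whole point is to
    # partition instances far too large for this into D-Wave- or
    # tabu-solver-sized pieces.
    best = min(itertools.product((0, 1), repeat=3), key=energy)
    print(best, energy(best))   # (1, 0, 1) -2.0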

AI

IBM's Watson Used In Life-Saving Medical Diagnosis (businessinsider.co.id) 83

"Supercomputing has another use," writes Slashdot reader rmdingler, sharing a story that quotes David Kenny, the General Manager of IBM Watson: "There's a 60-year-old woman in Tokyo. She was at the University of Tokyo. She had been diagnosed with leukemia six years ago. She was living, but not healthy. So the University of Tokyo ran her genomic sequence through Watson and it was able to ascertain that they were off by one thing. Actually, she had two strains of leukemia. They did treat her and she is healthy."

"That's one example. Statistically, we're seeing that about one third of the time, Watson is proposing an additional diagnosis."

Japan

Japan Eyes World's Fastest-Known Supercomputer, To Spend Over $150M On It (reuters.com) 35

Japan plans to build the world's fastest-known supercomputer in a bid to arm the country's manufacturers with a platform for research that could help them develop and improve driverless cars, robotics and medical diagnostics. From a Reuters report: The Ministry of Economy, Trade and Industry will spend 19.5 billion yen ($173 million) on the previously unreported project, a budget breakdown shows, as part of a government policy to get back Japan's mojo in the world of technology. The country has lost its edge in many electronic fields amid intensifying competition from South Korea and China, home to the world's current best-performing machine. In a move that is expected to vault Japan to the top of the supercomputing heap, its engineers will be tasked with building a machine that can make 130 quadrillion calculations per second -- or 130 petaflops in scientific parlance -- as early as next year, sources involved in the project told Reuters. At that speed, Japan's computer would be ahead of China's Sunway TaihuLight, which is capable of 93 petaflops. "As far as we know, there is nothing out there that is as fast," said Satoshi Sekiguchi, a director general at Japan's National Institute of Advanced Industrial Science and Technology, where the computer will be built.
United States

US Sets Plan To Build Two Exascale Supercomputers (computerworld.com) 59

dcblogs quotes a report from Computerworld: The U.S. believes it will be ready to seek vendor proposals to build two exascale supercomputers -- costing roughly $200 million to $300 million each -- by 2019. The two systems will be built at the same time and be ready for use by 2023, although it's possible one of the systems could be ready a year earlier, according to U.S. Department of Energy officials. The U.S. will award the exascale contracts to vendors with two different architectures. But the scientists and vendors developing exascale systems do not yet know whether President-elect Donald Trump's administration will change directions. The incoming administration is a wild card. Supercomputing wasn't a topic during the campaign, and Trump's dismissal of climate change as a hoax, in particular, has researchers nervous that science funding may suffer. At the annual supercomputing conference SC16 last week in Salt Lake City, a panel of government scientists outlined the exascale strategy developed by President Barack Obama's administration. When the session was opened to questions, the first two were about Trump. One attendee quipped that "pointed-head geeks are not going to be well appreciated."
Supercomputing

A British Supercomputer Can Predict Winter Weather a Year In Advance (thestack.com) 177

The national weather service of the U.K. claims it can now predict the weather up to a year in advance. An anonymous reader quotes The Stack: The development has been made possible thanks to supercomputer technology granted by the UK Government in 2014. The £97 million high-performance computing facility has allowed researchers to increase the resolution of climate models and to test the retrospective skill of forecasts over a 35-year period starting from 1980... The forecasters claim that new supercomputer-powered techniques have helped them develop a system to accurately predict North Atlantic Oscillation -- the climatic phenomenon which heavily impacts winters in the U.K.
The researchers apparently tested their supercomputer on 36 years' worth of data, and proudly reported that they could predict winter weather a year in advance -- with 62% accuracy.
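A note on that 62% figure: for seasonal hindcasts, "accuracy" is typically reported as the correlation between predicted and observed values of an index like the NAO over the retrospective period. A hedged sketch of that computation on synthetic data (the Met Office's actual verification method and data aren't reproduced here):

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for 36 winters of observed NAO index values and
    # the corresponding year-ahead hindcasts (the real data isn't public here).
    observed = rng.standard_normal(36)
    forecast = 0.6 * observed + 0.8 * rng.standard_normal(36)  # imperfect skill

    # Pearson correlation between hindcast and observation is a standard
    # skill score for seasonal forecasts; this setup lands near 0.6.
    skill = np.corrcoef(forecast, observed)[0, 1]
    print(f"hindcast correlation skill: {skill:.2f}")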
Australia

Quantum Researchers Achieve 10-Fold Boost In Superposition Stability (thestack.com) 89

An anonymous reader quotes The Stack: A team of Australian researchers has developed a qubit offering ten times the stability of existing technologies. The computer scientists claim that the new innovation could significantly increase the reliability of quantum computing calculations... The new technology, developed at the University of New South Wales, has been named a 'dressed' quantum bit as it combines a single atom with an electromagnetic field. This process allows the qubit to remain in a superposition state for ten times longer than has previously been achieved. The researchers argue that this extra time in superposition could boost the performance stability of quantum computing calculations... Previously fragile and short-lived, retaining a state of superposition has been one of the major barriers to the development of quantum computing. The ability to remain in two states simultaneously is the key to scaling and strengthening the technology further.
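To get a feel for why a tenfold coherence gain matters, compare how superposition fidelity decays under a simple exponential-dephasing model. This is an idealization with made-up time constants, not numbers from the UNSW paper:

    import math

    # Idealized dephasing: coherence decays as exp(-t / T2). The time
    # constants below are made up to illustrate "ten times longer",
    # not figures from the UNSW paper.
    T2_bare, T2_dressed = 1.0, 10.0   # arbitrary units

    for t in (0.5, 1.0, 2.0, 5.0):
        bare = math.exp(-t / T2_bare)
        dressed = math.exp(-t / T2_dressed)
        print(f"t={t:>3}: bare coherence {bare:.3f}, dressed {dressed:.3f}")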
Do you ever wonder what the world will look like when everyone has their own personal quantum computer?
Hardware

Fujitsu Picks 64-Bit ARM For Post-K Supercomputer (theregister.co.uk) 30

An anonymous reader writes: At the International Supercomputing Conference 2016 in Frankfurt, Germany, Fujitsu revealed its Post-K machine will run on the ARMv8 architecture. The Post-K machine is supposed to have 100 times more application performance than the K supercomputer -- which would make it a 1,000 PFLOPS beast -- and is due to go live in 2020. The K machine is the fifth-fastest known super in the world; it crunches 10.5 PFLOPS, needs 12MW of power, and is built out of 705,000 Sparc64 VIIIfx cores. InfoWorld has more details.
China

China Builds World's Fastest Supercomputer Without U.S. Chips (computerworld.com) 247

Reader dcblogs writes: China on Monday revealed its latest supercomputer, a monolithic system with 10.65 million compute cores built entirely with Chinese microprocessors. This follows a U.S. government decision last year to deny China access to Intel's fastest microprocessors. There is no U.S.-made system that comes close to the performance of China's new system, the Sunway TaihuLight. Its theoretical peak performance is 124.5 petaflops (Linpack is 93 petaflops), according to the latest biannual release of the Top500 list of the world's supercomputers, published today. It has long been known that China was developing a 100-plus petaflop system, and it was believed that China would turn to U.S. chip technology to reach this performance level. But just over a year ago, in a surprising move, the U.S. banned Intel from supplying Xeon chips to four of China's top supercomputing research centers. The U.S. initiated this ban because China, it claimed, was using its Tianhe-2 system for nuclear explosive testing activities. The U.S. stopped live nuclear testing in 1992 and now relies on computer simulations. Critics in China suspected the U.S. was acting to slow that nation's supercomputing development efforts. There has been nothing secretive about China's intentions. Researchers and analysts have been warning all along that U.S. exascale (an exascale is 1,000 petaflops) development, supercomputing's next big milestone, was lagging.
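The gap between the two performance figures quoted above is itself informative: Linpack efficiency is simply sustained benchmark performance divided by theoretical peak. A quick check using the numbers in the summary:

    # Sustained (Linpack) performance as a fraction of theoretical peak,
    # using the TaihuLight figures quoted above.
    peak_pflops = 124.5
    linpack_pflops = 93.0
    print(f"Linpack efficiency: {linpack_pflops / peak_pflops:.1%}")  # ~74.7%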
Power

Utility Targets Bitcoin Miners With Power Rate Hike (datacenterfrontier.com) 173

1sockchuck writes: A public utility in Washington state wants to raise rates for high-density power users, citing a flood of requests for electricity to power bitcoin mining operations. Chelan County has some of the cheapest power in the nation, supported by hydroelectric generation from dams along the Columbia River. That got the attention of bitcoin miners, prompting requests to provision 220 megawatts of additional power. After a one-year moratorium, the Chelan utility now wants to raise rates for high density users (more than 250kW per square foot) from 3 cents to 5 cents per kilowatt hour. Bitcoin businesses say the rate hike is discriminatory. But Chelan officials cite the transient nature of the bitcoin business as a risk to recovering their costs for provisioning new power capacity.
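For a sense of scale, here's the annual cost difference for a hypothetical 1 MW mining load running around the clock, using the two rates quoted above (illustrative arithmetic only):

    # Annual power cost for a hypothetical 1 MW load running 24/7
    # under Chelan's current and proposed high-density rates.
    load_kw = 1_000                      # 1 MW, illustrative
    kwh_per_year = load_kw * 24 * 365    # 8,760,000 kWh

    for rate in (0.03, 0.05):            # dollars per kWh
        print(f"at {rate * 100:.0f} cents/kWh: ${kwh_per_year * rate:,.0f}/year")

That works out to roughly $262,800 versus $438,000 per year for the same load.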
Classic Games (Games)

Computer Beats Go Champion 149

Koreantoast writes: Go (weiqi), the ancient Chinese board game, has long been held up as one of the more difficult, unconquered challenges facing AI scientists... until now. Google DeepMind researchers, led by David Silver and Demis Hassabis, developed a new algorithm called AlphaGo, enabling the computer to soundly defeat European Go champion Fan Hui in back-to-back games, five to zero. Played on a 19x19 board, Go presents players with more than 300 possible moves per turn to consider, creating a huge number of potential scenarios and a tremendous computational challenge. All is not lost for humanity yet: DeepMind is scheduled to face off in March against Lee Sedol, considered one of the best Go players in recent history, in a match compared to the Kasparov-Deep Blue duels of previous decades.
Math

Finally Calculated: All the Legal Positions In a 19x19 Game of Go (github.io) 117

Reader John Tromp points to an explanation posted at GitHub of a computational challenge Tromp coordinated that makes a nice companion to the recent discovery of a 22 million-digit Mersenne prime. A distributed effort, using pooled computers from two centers at Princeton plus more contributed from the HP Helion cloud, calculated the number of legal positions on a 19x19 Go board after "many hiccups and a few catastrophes." Simple as the Go board's layout is, the permutations allowed by the rules are anything but simple to calculate: "For running an L19 job, a beefy server with 15TB of fast scratch diskspace, 8 to 16 cores, and 192GB of RAM, is recommended. Expect a few months of running time." More: Large numbers have a way of popping up in the game of Go. Few people believe that a tiny 2x2 Go board allows for more than a few hundred games. Yet 2x2 games number not in the hundreds, nor in the thousands, nor even in the millions. They number in the hundreds of billions! 386356909593 to be precise. Things only get crazier as you go up in boardsize. A lower bound of 10^{10^48} on the number of 19x19 games, as proved in our paper, was recently improved to a googolplex. (For anyone who wants to double-check his work, Tromp has posted the software used as open source.)
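The legality rule being counted is simple enough to brute-force on a tiny board, which makes the scale of the 19x19 computation easier to appreciate. A Python sketch that enumerates all 3^4 colorings of a 2x2 board and keeps those in which every stone group has a liberty (Tromp's published tables give 57 legal positions for 2x2):

    from itertools import product

    # Enumerate all legal 2x2 Go positions by brute force. A position is
    # legal if every maximal group of same-colored stones touches at least
    # one empty point (has a "liberty").
    N = 2
    points = [(r, c) for r in range(N) for c in range(N)]

    def neighbors(p):
        r, c = p
        return [(r + dr, c + dc) for dr, dc in ((1,0), (-1,0), (0,1), (0,-1))
                if 0 <= r + dr < N and 0 <= c + dc < N]

    def legal(board):
        # board: dict point -> 0 (empty), 1 (black), 2 (white)
        seen = set()
        for p in points:
            if board[p] == 0 or p in seen:
                continue
            # flood-fill the same-colored group containing p
            group, stack, color = set(), [p], board[p]
            while stack:
                q = stack.pop()
                if q in group:
                    continue
                group.add(q)
                stack.extend(n for n in neighbors(q) if board[n] == color)
            seen |= group
            # the group needs at least one adjacent empty point
            if not any(board[n] == 0 for q in group for n in neighbors(q)):
                return False
        return True

    count = sum(legal(dict(zip(points, colors)))
                for colors in product((0, 1, 2), repeat=len(points)))
    print(count)   # 57 legal positions on a 2x2 board

The same definition scales to 19x19, where naive enumeration over 3^361 colorings is hopeless; hence the months of running time and terabytes of scratch space for the dynamic-programming approach described above.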
Math

New Mersenne Prime Discovered, Largest Known Prime Number: 2^74,207,281 - 1 (mersenne.org) 132

Dave Knott writes: The Great Internet Mersenne Prime Search (GIMPS) has discovered a new largest known prime number, 2^74,207,281-1, having 22,338,618 digits. The same GIMPS software recently uncovered a flaw in Intel's latest Skylake CPUs, and its global network of CPUs, peaking at 450 trillion calculations per second, remains the longest continuously running "grassroots supercomputing" project in Internet history. The new prime, a member of a special class of extremely rare primes known as Mersenne primes, is almost 5 million digits larger than the previous record prime. It is only the 49th known Mersenne prime ever discovered, each increasingly difficult to find.
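The digit count follows directly from logarithms: a number n has floor(log10(n)) + 1 decimal digits, and log10(2^p - 1) is indistinguishable from p*log10(2) at this scale. A one-liner to check the announced figure:

    import math

    # Number of decimal digits in 2^p - 1 is floor(p * log10(2)) + 1.
    # (Subtracting 1 never changes the digit count, since 2^p is not
    # a power of 10.)
    p = 74_207_281
    print(math.floor(p * math.log10(2)) + 1)   # 22338618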
Businesses

Uber Scaling Up Its Data Center Infrastructure (datacenterfrontier.com) 33

1sockchuck writes: Connected cars generate a lot of data. That's translating into big business for data center providers, as evidenced by a major data center expansion by Uber, which needs more storage and compute power to support its global data platform. Uber drivers' mobile phones send location updates every 4 seconds, which is why the design goal for Uber's geospatial index is to handle a million writes per second. It's a reminder that as our cars become mini data centers, the data isn't staying onboard; it's being offloaded to the data centers of automakers and software companies.
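The million-writes-per-second goal follows from back-of-the-envelope arithmetic on the update interval -- a rough capacity check, not Uber's published sizing math:

    # How many concurrent drivers does a 1M writes/sec geospatial index
    # support at one location update per driver every 4 seconds?
    writes_per_second = 1_000_000
    update_interval_s = 4
    print(f"{writes_per_second * update_interval_s:,} drivers")  # 4,000,000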
Supercomputing

Seymour Cray and the Development of Supercomputers (linuxvoice.com) 54

An anonymous reader writes: Linux Voice has a nice retrospective on the development of the Cray supercomputer. Quoting: "Firstly, within the CPU, there were multiple functional units (execution units forming discrete parts of the CPU) which could operate in parallel; so it could begin the next instruction while still computing the current one, as long as the current one wasn't required by the next. It also had an instruction cache of sorts to reduce the time the CPU spent waiting for the next instruction fetch result. Secondly, the CPU itself contained 10 parallel functional units (parallel processors, or PPs), so it could operate on ten different instructions simultaneously. This was unique for the time." They also discuss modern efforts to emulate the old Crays: "...what Chris wanted was real Cray-1 software: specifically, COS. Turns out, no one has it. He managed to track down a couple of disk packs (vast 10lb ones), but then had to get something to read them; in the end he used an impressive home-brew robot solution to map the information, but that still left deciphering it. A Norwegian coder, Yngve Ådlandsvik, managed to play with the data set enough to figure out the data format and other bits and pieces, and wrote a data recovery script."
Security

Quantum Computer Security? NASA Doesn't Want To Talk About It (csoonline.com) 86

itwbennett writes: At a press event at NASA's Advanced Supercomputer Facility in Silicon Valley on Tuesday, the agency was keen to talk about the capabilities of its D-Wave 2X quantum computer. 'Engineers from NASA and Google are using it to research a whole new area of computing — one that's years from commercialization but could revolutionize the way computers solve complex problems,' writes Martyn Williams. But when questions turned to the system's security, a NASA moderator quickly shut things down [VIDEO], saying the topic was 'for later discussion at another time.'
Supercomputing

Google Finds D-Wave Machine To Be 10^8 Times Faster Than Simulated Annealing (blogspot.ca) 157

An anonymous reader sends this report from the Google Research blog on the effectiveness of D-Wave's 2X quantum computer: We found that for problem instances involving nearly 1000 binary variables, quantum annealing significantly outperforms its classical counterpart, simulated annealing. It is more than 10^8 times faster than simulated annealing running on a single core. We also compared the quantum hardware to another algorithm called Quantum Monte Carlo. This is a method designed to emulate the behavior of quantum systems, but it runs on conventional processors. While the scaling with size between these two methods is comparable, they are again separated by a large factor, sometimes as high as 10^8. A more detailed paper is available at the arXiv.
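For context on the classical baseline: simulated annealing flips bits at random and accepts uphill moves with a temperature-dependent probability. A minimal sketch on a toy 20-variable energy function (nothing like the ~1000-variable instances in Google's benchmark):

    import math
    import random

    random.seed(1)

    # Toy energy: a chain of random +/-1 couplings over 20 binary variables.
    n = 20
    J = {(i, i + 1): random.choice((-1.0, 1.0)) for i in range(n - 1)}

    def energy(x):
        return sum(c * x[i] * x[j] for (i, j), c in J.items())

    # Simulated annealing: random single-bit flips, accepting worse states
    # with probability exp(-dE / T) while the temperature T cools.
    x = [random.choice((0, 1)) for _ in range(n)]
    T = 2.0
    while T > 0.01:
        i = random.randrange(n)
        y = x.copy()
        y[i] ^= 1
        dE = energy(y) - energy(x)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            x = y
        T *= 0.999   # geometric cooling schedule

    print(x, energy(x))

Quantum annealing replaces the thermal escape from local minima with quantum tunneling, which is where the reported 10^8 speedup on these instances comes from.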
Intel

Intel Launches 72-Core Knight's Landing Xeon Phi Supercomputer Chip (hothardware.com) 179

MojoKid writes: Intel announced a new version of their Xeon Phi line-up today, otherwise known as Knight's Landing. Whatever you want to call it, the pre-production chip is a 72-core coprocessor solution manufactured on a 14nm process with 3D Tri-Gate transistors. The family of coprocessors is built around Intel's MIC (Many Integrated Core) architecture, which itself is part of a larger PCI-E add-in card solution for supercomputing applications. Knight's Landing succeeds the current version of Xeon Phi, codenamed Knight's Corner, which has up to 61 cores. The new Knight's Landing chip ups the ante with double-precision performance exceeding 3 teraflops and over 8 teraflops of single-precision performance. It also has 16GB of on-package MCDRAM memory, which Intel says is five times more power efficient than GDDR5 and three times as dense.
Math

'Shrinking Bull's-eye' Algorithm Speeds Up Complex Modeling From Days To Hours (mit.edu) 48

rtoz sends word of a new algorithm that dramatically reduces the computation time for complex modeling problems. Scientists from MIT say it conceptually resembles a shrinking bull's-eye, incrementally narrowing in on its target. "With this method, the researchers were able to arrive at the same answer as classic computational approaches, but 200 times faster." Their full academic paper is available at the arXiv. "The algorithm can be applied to any complex model to quickly determine the probability distribution, or the most likely values, for an unknown parameter. Like the MCMC analysis, the algorithm runs a given model with various inputs — though sparingly, as this process can be quite time-consuming. To speed the process up, the algorithm also uses relevant data to help narrow in on approximate values for unknown parameters."
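For context, the MCMC (Markov chain Monte Carlo) baseline the MIT team compares against works roughly like this Metropolis sampler for a single unknown parameter -- a generic textbook sketch, not the authors' code:

    import math
    import random

    random.seed(0)

    # Toy inverse problem: infer theta in the model y = theta * x from
    # noisy observations, sampling the posterior (flat prior assumed,
    # so posterior is proportional to likelihood) with Metropolis MCMC.
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.1, 3.9, 6.2, 7.8]      # generated near theta = 2
    sigma = 0.2                    # assumed observation noise

    def log_likelihood(theta):
        # This is the expensive "model run" that the shrinking-bull's-eye
        # approach tries to invoke as rarely as possible.
        return sum(-((y - theta * x) ** 2) / (2 * sigma ** 2)
                   for x, y in zip(xs, ys))

    theta, samples = 0.0, []
    for _ in range(20_000):
        proposal = theta + random.gauss(0, 0.1)   # random-walk proposal
        dlog = log_likelihood(proposal) - log_likelihood(theta)
        if dlog >= 0 or random.random() < math.exp(dlog):
            theta = proposal                      # Metropolis accept rule
        samples.append(theta)

    burned = samples[5_000:]                      # discard burn-in
    print(f"posterior mean theta ~ {sum(burned) / len(burned):.3f}")  # near 2

Every Metropolis step here calls the model twice; the MIT algorithm's gain comes from running the expensive model far more sparingly while still converging on the same distribution.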
