Microsoft made its fortune selling the equivalent of Fords, not Ferraris, but today it’s wading into the rarefied world of supercomputing with a new version of Windows for managing massively powerful computing systems.
Chairman Bill Gates is introducing a test version of a new product, Windows Compute Cluster Server 2003, at SC05, a supercomputing conference held this year in Seattle.
Among the 8,600 people at the weeklong event are vendors and scientists who use computers to solve enormously complicated problems, including analyzing proteins, predicting the effects of nuclear explosions and modeling what might happen if an asteroid exploded near Earth.
At this conference, in the Washington State Convention and Trade Center, Gates is also expected to announce support for university research in the field and describe experiences of customers testing its new software, including Rosetta Inpharmatics in Seattle.
Microsoft sees a growing market at research-intensive companies for clustered systems, created by linking together a series of standard PC components to multiply their processing power.
Clusters now account for about 10 percent of all server sales, but Microsoft expects them to grow dramatically as prices continue to fall, the systems become simpler to manage and new applications are developed.
“What we’re seeing is, as the price comes down and advanced applications trickle down from academia, they’re really being picked up by enterprise [businesses],” said Kyril Faenov, Microsoft director of high-performance computing.
But Microsoft is wading into a largely academic field where perhaps 80 percent of the systems use freely shared, open-source software — while the company itself rarely reveals the inner workings of its software to customers.
It’s also an open question whether the Microsoft-based systems are true supercomputers. Traditionally, the term has referred to cost-is-no-object, room-filling systems used mostly by governments and universities.
With costs falling, Microsoft envisions high-powered clusters proliferating in the business world and appearing in every department or work group in biotechnology, energy, financial services and other industries.
Microsoft engineers built demonstration systems for under $4,000 using hardware bought from local computer stores, according to Craig Mundie, a senior vice president who started Microsoft’s push into high-performance computing two years ago.
Microsoft is also entering the field to prepare for the more powerful computers expected to be widely available to consumers in 10 to 15 years.
Today’s laptops are more powerful than the $1 million supercomputers Mundie built in 1991 at his previous company.
Supercomputer performance has grown apace.
A widely used ranking of the top systems, released Monday, reported that the fastest system of all — IBM’s BlueGene/L — doubled in power over the past year to 280.6 teraflops. A teraflop is a trillion calculations per second.
“I don’t know about the definition of a supercomputer these days; it’s a constantly moving target,” said David Bernholdt, a conference speaker and senior research staff member at Oak Ridge National Laboratory in Tennessee.
Bernholdt said researchers are benefiting from a diversity of suppliers entering the market, and smaller systems such as those powered by Microsoft’s software are likely to find plenty of buyers. But “supercomputer,” he said, remains a term for bleeding-edge systems.
Seattle-based Cray pioneered the field of supercomputing more than 30 years ago when its namesake founder set out to build the world’s fastest computers.
A shift began about 10 years ago with the arrival of clustered computer systems that were less expensive to assemble, said David Patterson, a University of California, Berkeley, professor and president of the Association for Computing Machinery, the conference’s co-sponsor.
“That’s kind of the new wave; for some people, they can do the problems they want to solve with basically a lot of desktop computers,” he said.
Because these new systems are based on standard components, researchers can often write software for the machines on their laptops or desktop computers, said Marc Hamilton, director of technology for global education at Sun Microsystems.
Sun is announcing a supercomputer deal with Tokyo Institute of Technology today.
Separately, Cray announced the U.S. Department of Energy’s Sandia National Laboratories is increasing the size of its “Red Storm” system from 10,848 processors to 14,348 processors next year.
Red Storm, developed by Cray, has been used to simulate atmospheric conditions and predict the effects of a nearby asteroid’s explosion.
Brier Dudley: 206-515-5687 or email@example.com