ZDNET UK: Linux develops weather forecasting super-computer
Oct 7, 1999, 19:09 UTC
"Under the terms of a $15 million (£10m) deal, High Performance Technologies (HPT), will install a gigantic cluster of Compaq XP1000 Alpha workstations together running Linux at the Colorado labs. 277 computers will casually crunch 300 billion arithmetic calculations per second under the new architecture -- 20 times more powerful than the lab's previous super-computer."
Related Stories:
Federal Computer Week: Linux takes NOAA by storm (Oct 04, 1999)
PRNewswire: FSL Chooses Revolutionary Alpha Linux Cluster for Supercomputer (Sep 27, 1999)
LinuxPR: LinuxAlpha Supercomputer Wins Weather Procurement (Sep 17, 1999)
Tim Dion - Subject: MS says ... (1999-10-07 19:33:23)
"There are no commercially proven clustering technologies to provide High Availability for Linux." -- Microsoft Corporation. Don't those guys at NOAA read Microsoft's press releases? I am sure Bill Gates and Steve Ballmer will assume this announcement is just more Linux hype. Oh wait, they did say commercially proven, not scientifically proven. Let's see, how many supercomputers run Windows NT?
Mark - Subject: Not famed? (1999-10-07 19:36:50)
I thought the Beowulf work was quite well known? Mark
Felix Finch - Subject: Beowulf = High Avail ? (1999-10-07 22:37:40)
I thought High Availability referred to clusters which can fail over to each other for web servers, etc. Beowulf presumably won't corrupt data if a node fails, but is that the same? High availability implies data replication, all sorts of stuff. Not that NT would be a particularly good platform even *with* H/A :-)
hab - Subject: linux myths (1999-10-08 05:43:05)
Somebody ought to go knock on the door at number 1 Microsoft Way and see if there's anyone home. It almost seems like the folk there couldn't find a clue if it jumped up and bit them on the arse. I could almost feel sorry for them. Almost!
Simone Lazzaris - Subject: Not Famed ? (1999-10-08 07:34:17)
If I'm not wrong, CERN is actually using a Linux Beowulf system to process data from the huge, top-of-technology particle accelerator in Geneva. If this is not fame....
David G. Watson - Subject: Other big beowulf projects... (1999-10-08 14:23:46)
I believe Brookhaven National Laboratory is using a big ol' Beowulf to deal with the huge amount of data that the Relativistic Heavy Ion Collider puts out. And NASA practically invented the Beowulf tech, or at least popularized the name. And then of course there's the Beowulf programming I do for $6.25 an hour at Kent State University (at least it's better than minimum wage, and I'm still in high school, so it's not bad :). We're working on LCD simulations.
dinotrac - Subject: Gotta read the fine print when trying to grab a spinning object. (1999-10-08 19:01:33)
"There are no commercially proven clustering technologies to provide High Availability for Linux." -- Microsoft Corporation. Most of the beowulf stuff I've seen is in the government sector, hence the "commercially proven" disclaimer. Also, beowulf is a high performance clustering technology, not a high availability technology. Never mind that NT needs high availability clustering just to be decent ...
AJWM - Subject: No, Beowulf != High Availability (1999-10-08 19:18:38)
Felix is correct, as is (to a degree) Microsoft in this one regard. Yes, there are supercomputer cluster solutions (not just Beowulf) using Linux, but that isn't what MS meant by a High Availability clustering technology. (IIRC, Microsoft's "solution" in that area is Wolfpack.)

The difference is that rather than each machine working on part of a problem, as with a supercomputer cluster, in an HA cluster each machine is basically doing the same thing and they're all serving as backups to each other. Think database replication, automatic failover, etc., in addition to the load sharing.

Design of the solution depends on how tightly you're clustering things -- Sun's enterprise servers tightly cluster a bunch of (hot-swappable) processor cards in a single box, whereas a very loose cluster might take care of all this at the application (or rather, middleware) level, as with databases on separate machines replicating each other. (The extreme case of this at one place I worked was where the servers were in two different cities and synched to each other via a dedicated T-3; either could go down without affecting the users. We're talking mission-critical stuff here.)

In truth, any such HA requirement is probably best evaluated on an individual basis, and no purely OS-based approach is going to cut it (you need to look at the middleware too), although you certainly want to choose your OS carefully in that situation -- meaning going with something with a proven stability record on reliable hardware, almost certainly not something from Microsoft.
All times are recorded in UTC.
Copyright ©1999 by Linux Today
(webmaster@linuxtoday.com)
Linux is a trademark of Linus Torvalds.
Powered by Linux 2.2.9 and Apache 1.3.6.
Linux Today is a corporate member of Linux International.