Kentucky Linux Athlon Testbed

The Kentucky Linux Athlon Testbed (KLAT2) is a 64+2 node Beowulf cluster built by the University of Kentucky in 2000, using entirely off-the-shelf components. It is capable of over 64 GFLOPS running a 32-bit version of ScaLAPACK tuned for the hardware, and approximately 22.8 GFLOPS running the standard untuned/uncustomized 80/64-bit version. Those figures represent measured performance; the theoretical maxima are 179 and 89 GFLOPS for the 32-bit and 80/64-bit versions, respectively. At a total cost of $41,205 USD, it was one of the first two supercomputers to bring supercomputing under the $1,000 USD per GFLOPS cost barrier.



The entire cluster was based on readily available, off-the-shelf hardware. After testing which hardware would be most effective, the Aggregate (the University of Kentucky research group responsible for the project) chose 700 MHz AMD Athlon processors. This decision was made because the 3DNow! instruction set (AMD's answer to Intel's MMX technology) accelerated the floating-point operations needed for high-end mathematical computing.

The cluster contained 64 primary systems with 2 "hot spare" nodes, all of which contained this basic hardware:

         o One 700 MHz AMD Athlon Slot A module and dual-fan heat sink
         o 128 MB CAS2 PC100 SDRAM
         o FIC SD11 motherboard
         o Four RealTek-based Fast Ethernet NICs
         o Floppy drive (for net boot code)
         o 300 W power supply and mid-tower case with extra fan 

After performance testing, the Aggregate determined that the most cost-effective way to interconnect the cluster was a Flat Neighborhood Network rather than gigabit Ethernet. Once protocol overhead was taken into account, the four Fast Ethernet cards in each node delivered performance comparable to gigabit Ethernet at a much lower cost.

The cluster also required ten 32-port switches (one of which served as an uplink) and over 264 CAT5 cables to connect all of the systems. All of the systems ran Red Hat Linux 6.0 with an updated kernel, and communicated using the Message Passing Interface.


Although AMD donated the Athlon CPUs used for this project, the Aggregate compiled a list of all costs accrued in purchasing the parts and added in the market value of the donated CPUs. Overall, the entire project cost approximately $41,205, the primary costs being roughly $13,200 for processors, $8,100 for the network, $6,900 for motherboards, and $6,200 for memory.

The KLAT2 project was remarkable as one of the first two supercomputers/clustered computer networks to bring the cost of processing under $1,000 USD per GFLOPS. Although the exact timetables are not clear, KLAT2 and Bunyip (a Beowulf cluster built by the Australian National University in Canberra) were built and brought online at roughly the same time. While Bunyip was the first to officially pass the mark, KLAT2's performance was not officially measured until well after it had surpassed the $1,000/GFLOPS mark. Since Bunyip beat the $1,000/GFLOPS margin by only approximately 2%, fluctuations in the exchange rate between US and Australian dollars could temporarily put it out of contention. Also, the KLAT2 project used standard benchmarking software, while Bunyip used a customized version specifically tuned to its hardware. Given all the caveats and disclaimers, the two supercomputers essentially share credit for breaking the $1,000 USD/GFLOPS mark.
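The cost-per-GFLOPS claims above can be checked with back-of-the-envelope arithmetic, using only the dollar and performance figures quoted in this article:

```python
# Sketch: cost per GFLOPS for KLAT2, using figures quoted in the text.
total_cost = 41205     # USD, including market value of donated CPUs
gflops_32bit = 64      # measured, tuned 32-bit ScaLAPACK
gflops_80_64 = 22.8    # measured, standard untuned 80/64-bit version

# Tuned 32-bit performance comes in well under the $1,000/GFLOPS barrier.
print(round(total_cost / gflops_32bit))   # 644 USD per GFLOPS
# The untuned run, by itself, would not have broken the barrier.
print(round(total_cost / gflops_80_64))   # 1807 USD per GFLOPS
```

This makes the milestone concrete: only the tuned 32-bit result puts KLAT2 under the $1,000/GFLOPS mark.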

Flat Neighborhood Network

Because of the large amount of network traffic passed between the computers by the Message Passing Interface, it was important to choose an appropriate network topology for connecting the machines. The result was a mesh-like design in which no machine connects directly to every other machine, but every pair of machines shares at least one common switch, so any two nodes can communicate through a single switch.

The Flat Neighborhood Network design problem is incredibly complex. Only small designs can be tackled by hand, because the problem scales badly as network cards, processing units, and switches are added. With 66 machines of 4 network cards each, the KLAT2 network had 264 network cards with which to make single-switch paths between any two given computers. The network also had to be optimized for the specific traffic it would carry.
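The defining property described above is that every pair of nodes shares at least one switch. A minimal sketch of checking that property, using a toy wiring (the node and switch names below are illustrative, not KLAT2's actual machine-designed wiring):

```python
# Sketch: verifying the Flat Neighborhood Network property on a toy
# wiring. A wiring maps each node to the set of switches its NICs
# plug into; the FNN property holds when every pair of nodes shares
# at least one switch, giving a single-switch path between them.
from itertools import combinations

def is_flat_neighborhood(wiring):
    return all(wiring[a] & wiring[b]
               for a, b in combinations(wiring, 2))

# Toy example (hypothetical): 4 nodes, 2 NICs each, 3 switches.
toy = {
    "n0": {"s0", "s1"},
    "n1": {"s0", "s2"},
    "n2": {"s1", "s2"},
    "n3": {"s0", "s1"},
}
print(is_flat_neighborhood(toy))  # True: every pair shares a switch
```

At KLAT2's scale (66 nodes, 4 NICs each, ten switches), searching the space of wirings that satisfy this check, while also optimizing for expected traffic, is exactly the part that cannot be done by hand.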

On top of the design issues, Flat Neighborhood Networks present several problems with regard to wiring. The designs often have no symmetry in their wiring schemes, which imposes certain site design requirements for the FNN to work. Routing also needs special handling: since the common switch between two computers differs from pair to pair, asking Computer X for the IP address of Computer Y can yield a different result than asking Computer Z for the IP address of Computer Y. Finally, the standard Linux channel bonding typically used in clustered computing does not work with FNN topologies.
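The per-pair addressing quirk can be pictured with a small sketch: each node has one IP per NIC (one per switch subnet), so "the address of Y" depends on which switch the asking node shares with Y. All node names, switch names, and subnets below are hypothetical:

```python
# Sketch: why Y's "address" differs per asker in an FNN.
# Each node's NICs sit on different switch subnets (hypothetical).
wiring = {
    "X": {"s0", "s1"},
    "Y": {"s0", "s2"},
    "Z": {"s1", "s2"},
}
# Hypothetical IP of each node's NIC on each switch it connects to.
ip = {
    ("X", "s0"): "10.0.0.1", ("X", "s1"): "10.1.0.1",
    ("Y", "s0"): "10.0.0.2", ("Y", "s2"): "10.2.0.2",
    ("Z", "s1"): "10.1.0.3", ("Z", "s2"): "10.2.0.3",
}

def address_of(asker, target):
    # Reach the target through a switch both nodes connect to.
    shared = wiring[asker] & wiring[target]
    switch = sorted(shared)[0]
    return ip[(target, switch)]

print(address_of("X", "Y"))  # 10.0.0.2 — X reaches Y via switch s0
print(address_of("Z", "Y"))  # 10.2.0.2 — Z reaches Y via switch s2
```

The two callers get different, equally valid answers for the same target, which is why ordinary hostname-to-IP resolution and standard channel bonding break down on FNN topologies.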

The upside to the difficulty of setting up a Flat Neighborhood Network is that, given the right configuration and components, it is much more cost-effective. The hardware for the KLAT2 network cost approximately $8,100 USD and provided a bisection bandwidth of 25 GB/s. This was far more effective than gigabit Ethernet (which, at the time, was also far more expensive), and it delivered performance similar to channel bonding at a fraction of the cost.

External links

The Aggregate: KLAT2
The Aggregate: Flat Neighborhood Networks


