Mainframe computer

From Wikipedia, the free encyclopedia

An IBM 704 mainframe

Mainframes (often colloquially referred to as Big Iron[1]) are powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and financial transaction processing.

The term originally referred to the large cabinets that housed the central processing unit and main memory of early computers.[2][3] Later the term was used to distinguish high-end commercial machines from less powerful units.

Most large-scale computer system architectures were firmly established in the 1960s and most large computers were based on architecture established during that era up until the advent of Web servers in the 1990s. (Interestingly, the first Web server running anywhere outside Switzerland ran on an IBM mainframe at Stanford University as early as 1990. See History of the World Wide Web for details.)

There were several minicomputer operating systems and architectures that arose in the 1970s and 1980s, but minicomputers are generally not considered mainframes. (UNIX arose as a minicomputer operating system; Unix has scaled up over the years to acquire some mainframe characteristics.)

Many defining characteristics of "mainframe" were established in the 1960s, but those characteristics continue to expand and evolve to the present day.

Description

Modern mainframe computers have abilities not so much defined by their single task computational speed (usually defined as MIPS — Millions of Instructions Per Second) as by their redundant internal engineering and resulting high reliability and security, extensive input-output facilities, strict backward compatibility with older software, and high utilization rates to support massive throughput. These machines often run for years without interruption, with repairs and hardware upgrades taking place during normal operation.

Software upgrades are non-disruptive only when using facilities such as z/OS and Parallel Sysplex, where workload sharing lets one system take over another's applications while it is being refreshed. More recently, several IBM mainframe installations have delivered over a decade of continuous business service as of 2007, with hardware upgrades that did not interrupt service.[citation needed] Mainframes are defined by high availability, one of the main reasons for their longevity, because they are typically used in applications where downtime would be costly or catastrophic. The term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers; proper planning and implementation are required to exploit these features.

In the 1960s, most mainframes had no interactive interface. They accepted sets of punched cards, paper tape, or magnetic tape and operated solely in batch mode to support back office functions such as customer billing. Teletype devices were also common, at least for system operators. By the early 1970s, many mainframes had acquired interactive user interfaces and operated as timesharing computers, supporting hundreds of users simultaneously along with batch processing. Users gained access through specialized terminals or, later, from personal computers equipped with terminal emulation software. By the 1980s, many mainframes supported graphical terminals (and terminal emulation), though not full graphical user interfaces; this form of end-user computing was largely displaced in the 1990s by the personal computer. Today most mainframes have partially or entirely phased out classic terminal access for end users in favor of Web user interfaces, although developers and operational staff typically continue to use terminals or terminal emulators.[citation needed]

Historically, mainframes acquired their name in part because of their substantial size, and because of requirements for specialized heating, ventilation, and air conditioning (HVAC), and electrical power. Those requirements ended by the mid-1990s with CMOS mainframe designs replacing the older bipolar technology. IBM claims its newer mainframes can reduce data center energy costs for power and cooling, and that they can reduce physical space requirements compared to server farms.[4]

Characteristics

Nearly all mainframes have the ability to run (or host) multiple operating systems, and thereby operate not as a single computer but as a number of virtual machines. In this role, a single mainframe can replace dozens or even hundreds of smaller servers. While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication.

Mainframes can add or hot swap system capacity non-disruptively and granularly, to a degree of sophistication usually not found on most servers. Modern mainframes, notably the IBM zSeries, System z9, and System z10 servers, offer two levels of virtualization: logical partitions (LPARs, via the PR/SM facility) and virtual machines (via the z/VM operating system). Some IBM mainframe customers run no more than two machines[citation needed]: one in their primary data center and one in their backup data center (fully active, partially active, or on standby) in case a catastrophe affects the first building. Test, development, training, and production workloads for applications and databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages. In practice most customers use multiple mainframes linked by Parallel Sysplex and shared DASD.

Mainframes are designed to handle very high volume input and output (I/O) and emphasize throughput computing. Since the mid-1960s, mainframe designs have included several subsidiary computers (called channels or peripheral processors) which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files. Giga-record or tera-record files are not unusual.[5] Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online, and can access it much faster.[citation needed] Other server families also offload I/O processing and emphasize throughput computing.
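The channel architecture described above, in which subsidiary processors drain an I/O queue so the CPU never waits on devices, can be illustrated with a small software analogy in Python. This is purely a sketch: the worker thread, queue, and `record-` names are hypothetical stand-ins, not an actual channel program interface.

```python
import queue
import threading

# Software analogy of a mainframe I/O channel: a subsidiary worker
# drains an I/O request queue so the "CPU" thread never blocks on devices.
io_requests = queue.Queue()
io_results = []

def channel_worker():
    """Process queued I/O requests independently of the main thread."""
    while True:
        request = io_requests.get()
        if request is None:           # shutdown signal
            io_requests.task_done()
            break
        io_results.append(request())  # perform the (simulated) device I/O
        io_requests.task_done()

channel = threading.Thread(target=channel_worker)
channel.start()

# The "CPU" queues I/O requests and keeps computing instead of waiting.
for block in range(4):
    io_requests.put(lambda b=block: f"record-{b}")

total = sum(range(1_000))  # CPU-bound work proceeds concurrently
io_requests.put(None)
io_requests.join()         # wait only when results are actually needed
channel.join()
```

The design point mirrored here is that the main thread only synchronizes when it genuinely needs the I/O results, which is how channel subsystems keep the central processor saturated with useful work.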

Mainframe return on investment (ROI), like that of any other computing platform, depends on the platform's ability to scale, support mixed workloads, reduce labor costs, deliver uninterrupted service for critical business applications, and several other risk-adjusted cost factors. Some argue that the modern mainframe is not cost-effective; Hewlett-Packard and Dell unsurprisingly take that view at least at times, as do some independent analysts. Sun Microsystems has also taken that view, although beginning in 2007 it promoted a partnership with IBM that largely focused on IBM support for Solaris on System x and BladeCenter products (and was therefore unrelated to mainframes) but also included positive comments about porting the OpenSolaris operating system to IBM mainframes to grow the Solaris community. Other analysts (such as Gartner[citation needed]) claim that the modern mainframe often has unique value and superior cost-effectiveness, especially for large scale enterprise computing. In fact, Hewlett-Packard continues to manufacture what is arguably its own mainframe, the NonStop system originally created by Tandem. Logical partitioning is now found in many UNIX-based servers, and many vendors are promoting virtualization technologies, in many ways validating the mainframe's design accomplishments while blurring the differences between the various approaches to enterprise computing.

Mainframes also have execution integrity characteristics for fault tolerant computing. For example, z900, z990, System z9, and System z10 servers effectively execute result-oriented instructions twice, compare the results, arbitrate between any differences (through instruction retry and failure isolation), and then shift workloads "in flight" to functioning processors, including spares, without any impact on operating systems, applications, or users. This hardware-level feature, also found in HP's NonStop systems, is known as lock-stepping, because both processors take their "steps" (i.e. instructions) together. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing.
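The execute-twice-compare-retry scheme above happens in hardware, but its control flow can be sketched as a software analogy. Everything here (the function name, retry count, and error message) is hypothetical, intended only to show the lock-step pattern of duplicate execution, comparison, retry, and eventual failover.

```python
def lockstep_execute(instruction, args, retries=3):
    """Software analogy of hardware lock-stepping: run the same operation
    twice, compare results, and retry on a mismatch (a presumed transient
    fault). After exhausting retries, raise so the workload can shift to
    a functioning or spare processor."""
    for _ in range(retries):
        a = instruction(*args)  # primary execution
        b = instruction(*args)  # duplicate execution on the paired unit
        if a == b:              # results agree: commit the result
            return a
    raise RuntimeError("hard fault: isolate unit, shift workload to a spare")

# Example: a deterministic operation always agrees with itself.
result = lockstep_execute(lambda x, y: x + y, (2, 3))
```

In the real hardware the comparison happens per instruction and the failover is transparent to software; the sketch only captures the decision logic, not the transparency.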

Market

IBM mainframes dominate the mainframe market at well over 90% market share.[6] Unisys manufactures ClearPath mainframes, based on earlier Sperry and Burroughs product lines. In 2002, Hitachi co-developed the zSeries z800 with IBM to share expenses, but subsequently the two companies have not collaborated on new Hitachi models. Hewlett-Packard sells its unique NonStop systems, which it acquired with Tandem Computers and which some analysts classify as mainframes. Groupe Bull's DPS, Fujitsu (formerly Siemens) BS2000, and Fujitsu-ICL VME mainframes are still available in Europe. Fujitsu, Hitachi, and NEC (the "JCMs") still maintain nominal mainframe hardware businesses in their home Japanese market, although they have been slow to introduce new hardware models in recent years.

The amount of vendor investment in mainframe development varies with market share. Unisys, HP, Groupe Bull, Fujitsu, Hitachi, and NEC now rely primarily on commodity Intel CPUs rather than custom processors in order to reduce their development expenses, and they have also cut back their mainframe software development. (However, Unisys still maintains its own CMOS processor design for certain high-end ClearPath models, while contracting chip manufacturing to IBM.) In stark contrast, IBM continues to pursue a business strategy of mainframe investment and growth. IBM has its own large research and development organization designing new, homegrown CPUs, including mainframe processors such as 2008's 4.4 GHz quad-core z10 mainframe microprocessor. IBM is also rapidly expanding its software business, including its mainframe software portfolio, to seek additional revenue and profits.[7][8]

History

Several manufacturers produced mainframe computers from the late 1950s through the 1970s. The group was first known as "IBM and the Seven Dwarfs": IBM, Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric, and RCA. After the group shrank, it was referred to as IBM and the BUNCH (Burroughs, UNIVAC, NCR, Control Data, and Honeywell). IBM's dominance grew out of its 700/7000 series and, later, the development of the System/360 series mainframes. The latter architecture has continued to evolve into IBM's current zSeries/z9 mainframes which, along with the then-Burroughs and now Unisys MCP-based mainframes, are among the few mainframe architectures still extant that can trace their roots to this early period. That said, while they can still run 24-bit System/360 code, the 64-bit zSeries and System z9 CMOS servers have nothing physically in common with the older systems. Notable manufacturers outside the USA were Siemens and Telefunken in Germany, ICL in the United Kingdom, and Fujitsu, Hitachi, Oki, and NEC in Japan. The Soviet Union and Warsaw Pact countries manufactured close copies of IBM mainframes during the Cold War; the Strela is an example of an independently designed Soviet computer.

Shrinking demand and tough competition caused a shakeout in the market in the early 1980s: RCA sold its computer business to UNIVAC, and GE also left; Honeywell was bought out by Bull; UNIVAC became a division of Sperry, which later merged with Burroughs to form Unisys Corporation in 1986. In 1991, AT&T briefly owned NCR. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems, given the IT policies and practices at that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s, there was a rough consensus among industry analysts that the mainframe was a dying market, as mainframe platforms were increasingly replaced by personal computer networks. InfoWorld's Stewart Alsop famously predicted that the last mainframe would be unplugged in 1996.

That trend started to turn around in the late 1990s as corporations found new uses for their existing mainframes and as the price of data networking collapsed in most parts of the world, encouraging trends toward more centralized computing. The growth of e-business also dramatically increased the number of back-end transactions processed by mainframe software as well as the size and throughput of databases. Batch processing, such as billing, became even more important (and larger) with the growth of e-business, and mainframes are particularly adept at large scale batch computing. Another factor currently increasing mainframe use is the development of the Linux operating system, which arrived on IBM mainframe systems in 1999 and is typically run in scores or hundreds of virtual machines on a single mainframe. Linux allows users to take advantage of open source software combined with mainframe hardware RAS. Rapid expansion and development in emerging markets, particularly in the People's Republic of China, is also spurring major mainframe investments to solve exceptionally difficult computing problems, e.g. providing unified, extremely high volume online transaction processing databases for 1 billion consumers across multiple industries (banking, insurance, credit reporting, government services, etc.).

In late 2000 IBM introduced the 64-bit z/Architecture, acquired numerous software companies such as Cognos, and introduced those software products to the mainframe. IBM's quarterly and annual reports in the 2000s usually reported increasing mainframe revenues and capacity shipments. However, IBM's mainframe hardware business has not been immune to the recent overall downturn in the server hardware market or to model cycle effects. For example, in the 4th quarter of 2009, IBM's System z hardware revenues decreased by 27% year over year. But MIPS shipments (a measure of mainframe capacity) increased 4% per year over the past two years.[9]

Differences from supercomputers

A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. Supercomputers are used for scientific and engineering problems (Grand Challenge problems) which are limited by processing speed and memory size, while mainframes are used for problems which are limited by data movement in input/output devices, reliability, and for handling multiple business transactions concurrently. The differences are as follows:

  • Mainframe performance is measured in millions of instructions per second (MIPS), dominated by integer operations, whereas supercomputer performance is measured in floating point operations per second (FLOPS). Examples of integer operations include adjusting inventory counts, matching names, indexing tables of data, and making routine yes-or-no decisions. Floating point operations are mostly addition, subtraction, and multiplication with enough digits of precision to model continuous phenomena such as weather. In terms of raw computational ability, supercomputers are more powerful.[10]
  • Mainframes are built to be reliable for transaction processing as it is commonly understood in the business world: a commercial exchange of goods, services, or money. A typical transaction, as defined by the Transaction Processing Performance Council,[11] would include updating a database system for such things as inventory control (goods), airline reservations (services), or banking (money). A transaction may comprise a set of operations including disk reads and writes, operating system calls, or some form of data transfer from one subsystem to another.
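A transaction in the sense used above must either complete fully or leave the database unchanged. The following sketch shows that all-or-nothing property using Python's standard sqlite3 module as a stand-in for a real transaction processing system; the table, item name, and "oversold" check are hypothetical illustrations, not drawn from any benchmark.

```python
import sqlite3

# A minimal transaction in the TPC sense: an inventory update (goods)
# that either commits fully or rolls back, using SQLite for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (item TEXT PRIMARY KEY, qty INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('widget', 10)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE inventory SET qty = qty - 1 WHERE item = 'widget'")
        cur = conn.execute("SELECT qty FROM inventory WHERE item = 'widget'")
        if cur.fetchone()[0] < 0:
            raise ValueError("oversold")  # any exception forces a rollback
except ValueError:
    pass  # the failed transaction left the database unchanged

qty = conn.execute(
    "SELECT qty FROM inventory WHERE item = 'widget'"
).fetchone()[0]
```

Mainframe transaction monitors apply the same commit-or-rollback discipline at vastly higher volumes and with the hardware integrity features described earlier.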

References

  1. ^ "IBM preps big iron fiesta". The Register. July 20, 2005. http://www.theregister.co.uk/2005/07/20/ibm_mainframe_refresh/. 
  2. ^ Oxford English Dictionary, online edition, mainframe, n.
  3. ^ Ebbers, Mike; O’Brien, W.; Ogden, B. (2006). "Introduction to the New Mainframe: z/OS Basics" (pdf). IBM International Technical Support Organization. http://publibz.boulder.ibm.com/zoslib/pdf/zosbasic.pdf. Retrieved 2007-06-01. 
  4. ^ "Get the facts on IBM vs the Competition- The facts about IBM System z "mainframe"". IBM. http://www-03.ibm.com/systems/migratetoibm/getthefacts/mainframe.html#4. Retrieved December 28, 2009. 
  5. ^ "Largest Commercial Database in Winter Corp. TopTen Survey Tops One Hundred Terabytes". Press release. http://www.wintercorp.com/PressReleases/ttp2005_pressrelease_091405.htm. Retrieved 2008-05-16. 
  6. ^ "IBM Tightens Stranglehold Over Mainframe Market; Gets Hit with Antitrust Complaint in Europe". CCIA. 2008-07-02. http://openmainframe.org/news/ibm-tightens-stranglehold-over-mainframe-market-gets-hit-wit.html. Retrieved 2008-07-09. 
  7. ^ "IBM Opens Latin America's First Mainframe Software Center". Enterprise Networks and Servers. August 2007. http://www.enterprisenetworksandservers.com/monthly/art.php?3306. 
  8. ^ "IBM Helps Clients Modernize Applications on the Mainframe". IBM. November 7, 2007. http://www-03.ibm.com/press/us/en/pressrelease/22556.wss. 
  9. ^ "IBM 4Q2009 Financial Report: CFO's Prepared Remarks". IBM. January 19, 2010. http://www.ibm.com/investor/4q09/presentation/4q09prepared.pdf. 
  10. ^ World's Top Supercomputer Retrieved on December 25, 2009
  11. ^ Transaction Processing Performance Council Retrieved on December 25, 2009.
