Computer architecture

Encyclopedia

From Wikipedia, the free encyclopedia

In computer science, computer architecture or digital computer organization is the conceptual design and fundamental operational structure of a computer system. It is a blueprint and functional description of requirements and design implementations for the various parts of a computer, focusing largely on the way by which the central processing unit (CPU) performs internally and accesses addresses in memory.

It may also be defined as the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals.

Computer architecture comprises at least three main subcategories:[1]

  • Instruction set architecture (ISA), the abstract model of the computing system as seen by a machine language (or assembly language) programmer, including the instruction set, memory addressing modes, processor registers, and address and data formats.
  • Microarchitecture, also known as computer organization, a lower-level, more concrete and detailed description of the system that involves how the constituent parts of the system are interconnected and how they interoperate in order to implement the ISA.[2] The size of a computer's cache, for instance, is an organizational issue that generally has nothing to do with the ISA.
  • System design, which includes all of the other hardware components within a computing system, such as:
  1. System interconnects such as computer buses and switches
  2. Memory controllers and hierarchies
  3. CPU off-load mechanisms such as direct memory access (DMA)
  4. Issues like multiprocessing

Once both ISA and microarchitecture have been specified, the actual device needs to be designed into hardware. This design process is called implementation. Implementation is usually not considered architectural definition, but rather hardware design engineering.

Implementation can be further broken down into three (not fully distinct) pieces:

  • Logic Implementation — design of blocks defined in the microarchitecture at (primarily) the register-transfer and gate levels.
  • Circuit Implementation — transistor-level design of basic elements (gates, multiplexers, latches, etc.) as well as some larger blocks (ALUs, caches, etc.) that may be implemented at this level, or even (partly) at the physical level, for performance reasons.
  • Physical Implementation — physical circuits are drawn out, the different circuit components are placed in a chip floorplan or on a board and the wires connecting them are routed.

For CPUs, the entire implementation process is often called CPU design.

The term is also applied to wider-scale hardware architectures, such as cluster computing and Non-Uniform Memory Access (NUMA) architectures.


History

The term “architecture” in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members in 1959 of the Machine Organization department in IBM’s main research center. Johnson had the opportunity to write a proprietary research communication about Stretch, an IBM-developed supercomputer for Los Alamos Scientific Laboratory. In attempting to characterize his chosen level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements was at the level of “system architecture” – a term that seemed more useful than “machine organization”. Subsequently, Brooks, one of the Stretch designers, started Chapter 2 of a book (Planning a Computer System: Project Stretch, ed. W. Buchholz, 1962) by writing, “Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints”. Brooks went on to play a major role in the development of the IBM System/360 line of computers, where “architecture” gained currency as a noun with the definition “what the user needs to know”. Later the computer world would employ the term in many less-explicit ways.

The first mention of the term architecture in the refereed computer literature is in a 1964 article describing the IBM System/360.[3] The article defines architecture as the set of “attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flow and controls, the logical design, and the physical implementation”. In this definition, the programmer’s perspective of the computer’s functional behavior is key. The conceptual-structure part of an architecture description makes the functional behavior comprehensible and extrapolatable to a range of use cases. Only later did ‘internals’ such as “the way by which the CPU performs internally and accesses addresses in memory,” mentioned above, slip into the definition of computer architecture.

Computer architectures

There are many types of computer architecture. Among them, the quantum computer architecture holds the most promise to revolutionize computing.[4]

Computer architecture topics


Sub-definitions

Some practitioners of computer architecture at companies such as Intel and AMD draw finer distinctions:

  • Macroarchitecture — architectural layers that are more abstract than microarchitecture, e.g. ISA
  • Instruction Set Architecture (ISA) — as defined above
  • Assembly ISA — a smart assembler may convert an abstract assembly language common to a group of machines into slightly different machine language for different implementations
  • Programmer Visible Macroarchitecture — higher level language tools such as compilers may define a consistent interface or contract for the programmers using them, abstracting differences between underlying ISAs, UISAs, and microarchitectures. E.g. the C, C++, and Java standards each define such a programmer-visible macroarchitecture.
  • UISA (Microcode Instruction Set Architecture) — a family of machines with different hardware level microarchitectures may share a common microcode architecture, and hence a UISA.
  • Pin Architecture — the set of functions that a microprocessor is expected to provide, from the point of view of a hardware platform. E.g. the x86 A20M, FERR/IGNNE or FLUSH pins, and the messages that the processor is expected to emit after completing a cache invalidation so that external caches can be invalidated. Pin architecture functions are more flexible than ISA functions - external hardware can adapt to changing encodings, or changing from a pin to a message - but the functions are expected to be provided in successive implementations even if the manner of encoding them changes.

The Role of Computer Architecture

Computer Architecture: The Definition

Computer architecture is the coordination of the abstract levels of a processor under changing forces, involving design, measurement, and evaluation. It also includes the overall fundamental working principles of the internal logical structure of a computer system.

Instruction Set Architecture

  1. The ISA is the interface between the software and the hardware.
  2. It is the set of instructions that bridges the gap between high-level languages and the hardware.
  3. For a processor to understand a command, the command must be in binary; the ISA defines how these values are encoded.
  4. The ISA also defines the items in the computer that are available to a programmer: for example, data types, registers, addressing modes, and memory organization.
  5. Registers are high-speed storage for values that the processor can access directly; both data and instructions can be held in registers.

Addressing modes are the ways in which an instruction locates its operands.

Memory organization defines how instructions interact with the memory.
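
As a concrete illustration of instruction encoding, the following C sketch packs an opcode and three register numbers into a 16-bit instruction word and decodes them again. The field layout and the opcode values are invented for this example and do not correspond to any real ISA.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical 16-bit format (not a real ISA):
       bits 15-12 opcode, 11-8 destination register,
       7-4 first source register, 3-0 second source register. */
    enum { OP_ADD = 0x1, OP_SUB = 0x2 };

    int main(void) {
        /* Encode "add r3, r1, r2" into one instruction word. */
        uint16_t insn = (uint16_t)((OP_ADD << 12) | (3 << 8) | (1 << 4) | 2);

        /* Decode: a processor's control logic extracts the same fields. */
        unsigned opcode = (insn >> 12) & 0xF;
        unsigned rd     = (insn >> 8)  & 0xF;
        unsigned rs1    = (insn >> 4)  & 0xF;
        unsigned rs2    =  insn        & 0xF;

        printf("opcode=%u rd=r%u rs1=r%u rs2=r%u\n", opcode, rd, rs1, rs2);
        return 0;
    }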

Computer Organization

Computer organization helps optimize performance-based products. For example, software engineers need to know the processing ability of processors. They may need to optimize software in order to gain the most performance at the least expense. This can require quite detailed analysis of the computer organization. For example, in a multimedia decoder, the designers might need to arrange for most data to be processed in the fastest data path.

Computer organization also helps plan the selection of a processor for a particular project. Multimedia projects may need very rapid data access, while supervisory software may need fast interrupts.

Sometimes certain tasks need additional components as well. For example, a computer capable of virtualization needs virtual memory hardware so that the memory of different simulated computers can be kept separated.

The computer organization and features also affect the power consumption and the cost of the processor.

Design goals

The exact form of a computer system depends on the constraints and goals for which it was optimized. Computer architectures usually trade off standards, cost, memory capacity, latency and throughput. Sometimes other considerations, such as features, size, weight, reliability, expandability and power consumption are factors as well.

The most common scheme focuses on the bottleneck that most reduces the computer's speed. Ideally, cost is allocated proportionally so that the data rate is nearly the same for all parts of the computer, with the most costly part being the slowest. This is how skillful commercial integrators optimize personal computers.

Performance

Computer performance is often described in terms of clock speed (usually in MHz or GHz). This refers to the cycles per second of the main clock of the CPU. However, this metric is somewhat misleading: a machine with a higher clock rate may not necessarily have higher performance. As a result, manufacturers have moved away from clock speed as a measure of performance.
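
One way to see why clock rate alone is misleading is the classic relation: execution time = instruction count × cycles per instruction (CPI) ÷ clock rate. A minimal sketch in C, with figures invented purely for illustration:

    #include <stdio.h>

    /* Execution time = instructions * CPI / clock rate.
       All figures below are invented for illustration. */
    int main(void) {
        double insns = 1e9;                  /* same program on both machines */
        double time_a = insns * 2.0 / 3.0e9; /* machine A: CPI 2.0 at 3 GHz */
        double time_b = insns * 1.0 / 2.0e9; /* machine B: CPI 1.0 at 2 GHz */
        printf("A: %.2f s, B: %.2f s\n", time_a, time_b);
        /* B finishes first (0.50 s vs 0.67 s) despite its slower clock. */
        return 0;
    }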

Cache size also affects performance. If clock speed is how fast a car goes, the cache is like its gas tank: no matter how fast the car goes, it still has to stop for gas, and a bigger tank means fewer stops. Other things being equal, a higher clock speed combined with a larger cache makes a processor run faster.
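
A common way to quantify the cache's contribution is the average memory access time, AMAT = hit time + miss rate × miss penalty. A small C sketch with assumed, not measured, figures:

    #include <stdio.h>

    /* Average memory access time = hit time + miss rate * miss penalty.
       The cycle counts are assumptions for illustration only. */
    int main(void) {
        double hit_time = 1.0;        /* cycles for a cache hit */
        double miss_penalty = 100.0;  /* cycles to reach main memory */
        for (int pct = 1; pct <= 5; pct++) {
            double miss_rate = pct / 100.0;
            double amat = hit_time + miss_rate * miss_penalty;
            printf("miss rate %d%% -> %.1f cycles per access\n", pct, amat);
        }
        return 0;
    }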

Modern CPUs can execute multiple instructions per clock cycle, which dramatically speeds up a program. Other factors influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs being run.

There are two main types of speed, latency and throughput. Latency is the time between the start of a process and its completion. Throughput is the amount of work done per unit time. Interrupt latency is the guaranteed maximum response time of the system to an electronic event (e.g. when the disk drive finishes moving some data). Performance is affected by a very wide range of design choices — for example, pipelining a processor usually makes latency worse (slower) but makes throughput better. Computers that control machinery usually need low interrupt latencies. These computers operate in a real-time environment and fail if an operation is not completed in a specified amount of time. For example, computer-controlled anti-lock brakes must begin braking almost immediately after they have been instructed to brake.
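
The pipelining trade-off mentioned above can be put into numbers. A sketch assuming a hypothetical execution unit that takes 4 ns end to end, re-cut into five 1 ns pipeline stages (the extra stage models register overhead):

    #include <stdio.h>

    /* Illustrative arithmetic only: latency gets worse, throughput better. */
    int main(void) {
        double unpiped_latency = 4.0;   /* ns for one instruction, no pipeline */
        double stage_time = 1.0;        /* ns per pipeline stage */
        int stages = 5;
        double piped_latency = stages * stage_time;   /* 5 ns: worse */
        double unpiped_rate = 1.0 / unpiped_latency;  /* 0.25 instructions/ns */
        double piped_rate = 1.0 / stage_time;         /* 1.00 instructions/ns: better */
        printf("latency:    %.1f ns -> %.1f ns\n", unpiped_latency, piped_latency);
        printf("throughput: %.2f -> %.2f instructions/ns\n", unpiped_rate, piped_rate);
        return 0;
    }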

The performance of a computer can be measured using other metrics, depending upon its application domain. A system may be CPU bound (as in numerical calculation), I/O bound (as in a webserving application) or memory bound (as in video editing). Power consumption has become important in servers and portable devices like laptops.

Benchmarking tries to take all these factors into account by measuring the time a computer takes to run through a series of test programs. Although benchmarking shows strengths, it may not help one to choose a computer. Often the measured machines split on different measures. For example, one system might handle scientific applications quickly, while another might play popular video games more smoothly. Furthermore, designers have been known to add special features to their products, whether in hardware or software, which permit a specific benchmark to execute quickly but which do not offer similar advantages to other, more general tasks.

Power consumption

Power consumption is another design criterion that factors in the design of modern computers. Power efficiency can often be traded for performance or cost benefits. With the increasing power density of modern circuits as the number of transistors per chip scales (Moore's law), power efficiency has increased in importance. Recent processor designs such as the Intel Core 2 put more emphasis on increasing power efficiency. Also, in the world of embedded computing, power efficiency has long been and remains the primary design goal next to performance.


References

  1. ^ John L. Hennessy and David A. Patterson (2003). Computer Architecture: A Quantitative Approach (3rd ed.). Morgan Kaufmann Publishers. ISBN 1558605967.
  2. ^ Laplante, Phillip A. (2001). Dictionary of Computer Science, Engineering, and Technology. CRC Press. pp. 94–95. ISBN 0849326915.
  3. ^ Amdahl, G.M.; Blaauw, G.A.; Brooks, F.P., Jr. (April 1964). "Architecture of the IBM System/360". IBM Journal of Research and Development.
  4. ^ Dumas, Joseph D. (2006). Computer Architecture: Fundamentals and Principles of Computer Design. p. 340.



Study guide


From Wikiversity


Introduction

This section discusses the notion of the modern computer as a Turing machine, in order to understand what an instruction is in the next section.

Instruction Set Architectures

Nowadays there are many electrical and electronic circuits that can do many jobs: adding two numbers, multiplying, copying, conditionals, and so on. In a computer, all of these circuits are arranged in order, and each is given a certain number so that it can be invoked from the processor's registers; the processor executes them. The data required for a job, such as adding two numbers, is fetched either from an input device like the keyboard or a serial port, or from memory itself. This set of circuits is called the instruction set. The number associated with each circuit is called its opcode.

To make use of these circuits, we instruct the processor to invoke a particular circuit by its opcode. In real-world problems we do not use just adding and moving; we use all of them together to accomplish our work. In the case of adding two numbers, we move our inputs into registers in the processor, invoke the add opcode, and move the result to a certain memory location. This is done by giving the opcodes in order, and is called assembly programming. Instructions are kept in RAM and executed one by one by the processor; a minimal simulation of that loop is sketched below.
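
The following C sketch simulates the fetch-decode-execute cycle for a toy accumulator machine. The opcodes, the one-operand instruction format, and the sample program are invented for illustration; no real processor is modelled.

    #include <stdio.h>

    /* Toy opcodes: invented for this example only. */
    enum { LOAD = 0, ADD = 1, STORE = 2, HALT = 3 };

    int main(void) {
        /* Program: each instruction is {opcode, memory address}.
           It computes mem[2] = mem[0] + mem[1]. */
        int program[][2] = { {LOAD, 0}, {ADD, 1}, {STORE, 2}, {HALT, 0} };
        int mem[3] = { 40, 2, 0 };
        int acc = 0;                               /* the single register */

        for (int pc = 0; ; pc++) {                 /* fetch next instruction */
            int op  = program[pc][0];              /* decode the opcode ...  */
            int arg = program[pc][1];              /* ... and its operand    */
            if (op == LOAD)       acc = mem[arg];  /* execute */
            else if (op == ADD)   acc += mem[arg];
            else if (op == STORE) mem[arg] = acc;
            else                  break;           /* HALT */
        }
        printf("result: %d\n", mem[2]);            /* prints 42 */
        return 0;
    }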

Pipelining

Pipelining is a technique in which instruction execution is organized like a production line: the processor is divided into stages (for example fetch, decode, execute, and write-back), and several instructions occupy different stages at the same time, as the sketch below illustrates.
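
A schematic way to see this is to print which instruction sits in which stage on each cycle. In the C sketch below, the four stage names and the instruction count are arbitrary choices for the illustration.

    #include <stdio.h>

    /* Prints the pipeline occupancy per cycle for four instructions
       flowing through four stages; purely schematic. */
    int main(void) {
        const char *stages[] = { "Fetch", "Decode", "Execute", "Write" };
        int n_stages = 4, n_insns = 4;
        for (int cycle = 0; cycle < n_insns + n_stages - 1; cycle++) {
            printf("cycle %d:", cycle + 1);
            for (int s = 0; s < n_stages; s++) {
                int i = cycle - s;   /* instruction i entered Fetch at cycle i */
                if (i >= 0 && i < n_insns)
                    printf("  %s=I%d", stages[s], i + 1);
            }
            printf("\n");
        }
        return 0;
    }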

Memory Hierarchies (Caches)

Why a memory hierarchy? To achieve the best performance, the memory system must keep up with the processor while its cost stays reasonable compared to the other components. A hierarchy of small, fast caches backed by larger, slower memories achieves both, provided programs reuse nearby data, as the sketch below suggests.
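
The effect of the hierarchy is visible from ordinary code. The C sketch below sums the same matrix twice: row by row (sequential addresses, cache-friendly) and column by column (large strides, cache-hostile). Absolute timings depend on the machine; only the contrast between the two traversals is the point.

    #include <stdio.h>
    #include <time.h>

    #define N 2048
    static double a[N][N];   /* ~32 MB of zeros in static storage */

    int main(void) {
        double sum = 0.0;
        clock_t t0 = clock();
        for (int i = 0; i < N; i++)      /* row-major: walks memory in order */
            for (int j = 0; j < N; j++)
                sum += a[i][j];
        clock_t t1 = clock();
        for (int j = 0; j < N; j++)      /* column-major: jumps N*8 bytes per step */
            for (int i = 0; i < N; i++)
                sum += a[i][j];
        clock_t t2 = clock();
        printf("row-major %.2f s, column-major %.2f s (sum %.0f)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
        return 0;
    }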

Instruction Level Parallelism

See also

Computer architecture at Wikipedia.


Simple English

In computer engineering, computer architecture is the conceptual design and fundamental operational structure of a computer system. It is the technical drawings and functional description of all design requirements (especially speeds and interconnections); it describes how to design and implement the various parts of a computer, focusing largely on the way the central processing unit (CPU) operates internally and how it accesses addresses in memory.

It can be defined as the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals.

Computer architecture includes at least three main subcategories:[1]

  1. Instruction set architecture, or ISA, is the abstract model of a computing system that is seen by a machine language (or assembly language) programmer, including the instruction set, memory address modes, processor registers, and address and data formats.
  2. Microarchitecture, also known as computer organization, is a lower-level, detailed description of the system that is sufficient for completely describing the operation of all parts of the computing system, and how they are interconnected and interoperate in order to implement the ISA.[2] The size of a computer's cache, for instance, is an organizational issue that generally has nothing to do with the ISA.
  3. System design, which includes all of the other hardware components within a computing system, such as:
  • System interconnects such as computer buses and switches.
  • Memory controllers and hierarchies.
  • CPU off-load mechanisms such as direct memory access.
  • Issues like multi-processing.

Once both the ISA and the microarchitecture have been specified, the actual computing system needs to be designed into hardware. This design process is called implementation. Implementation is usually a hardware engineering design process.

Implementation can be further broken down into three (not fully separate) pieces:

  • Logic Implementation: Design of blocks defined in the microarchitecture, mainly, at the register-transfer and gate levels.
  • Circuit Implementation: Transistor-level design of basic elements (gates, multiplexers, flip-flops, etc.) as well as some larger blocks (ALUs, caches, etc.) that may be implemented at this level, or even at a lower physical level, for performance reasons.
  • Physical Implementation: Physical circuits are drawn out, the different circuit components are placed in a chip floor-plan or on a board and the wires connecting them are routed.

For CPUs, the entire implementation process is often called CPU design; it can also be a family of related CPU designs, such as RISC and CISC.


More sub-definitions

Some practitioners of computer architecture use finer subcategories:

  • Macroarchitecture: Architectural layers that are more abstract than the microarchitecture, e.g. the ISA.
  • Instruction Set Architecture (ISA): As defined above.
  • UISA (Microcode Instruction Set Architecture): A family of machines with different hardware-level microarchitectures may share a common microcode architecture, and hence share a UISA.
  • Assembly ISA: A smart assembler may convert an abstract assembly language common to a group of CPUs into slightly different machine language for different CPU implementations.
  • Programmer Visible Macroarchitecture: Higher level language tools such as compilers may define a consistent interface for programmers using them, abstracting differences between underlying ISAs, UISAs, and microarchitectures; for example, the C, C++, and Java standards define three different programming interfaces.
  • Pin Architecture: The set of functions that a microprocessor is expected to provide, from the point of view of a hardware platform. E.g. signals that the processor is expected to emit during executing an instruction.


References

  1. John L. Hennessy and David A. Patterson (2003). Computer Architecture: A Quantitative Approach (3rd ed.). Morgan Kaufmann Publishers. ISBN 1558605967.
  2. Phillip A. Laplante (2001). Dictionary of Computer Science, Engineering, and Technology. CRC Press. pp. 94–95. ISBN 0849326915.
