Web server

The inside and front of a Dell PowerEdge Web server

A Web server is a computer program that delivers (serves) content, such as Web pages, using the Hypertext Transfer Protocol (HTTP), over the World Wide Web. The term Web server can also refer to the computer or virtual machine running the program. In large commercial deployments, a server computer running a Web server can be rack-mounted in a server rack or cabinet with other servers to operate a Web farm.

Overview

The primary function of a Web server is to deliver Web pages to clients. This means delivery of HTML documents and any additional content that may be included by a document, such as images, style sheets and scripts.

A client, commonly a Web browser or Web crawler, initiates communication by making a request for a specific resource using HTTP, and the server responds with the content of that resource, or an error message if it is unable to do so. The resource is typically a real file on the server's secondary storage, but this is not necessarily the case and depends on how the Web server is implemented.
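This request-response exchange can be sketched in a few lines. The following Python snippet is an illustrative sketch, not any particular server's code: it builds the kind of HTTP/1.1 GET request a client sends and parses the status line of the server's reply (the host and path are made-up examples).

```python
def build_get_request(host, path):
    """Build a minimal HTTP/1.1 GET request for the given host and path."""
    return (
        "GET {} HTTP/1.1\r\n"
        "Host: {}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).format(path, host).encode("ascii")

def parse_status_line(raw_response):
    """Extract (version, status code, reason phrase) from a raw HTTP response."""
    status_line = raw_response.split(b"\r\n", 1)[0].decode("ascii")
    version, code, reason = status_line.split(" ", 2)
    return version, int(code), reason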

While the primary function is to serve content, a full implementation of HTTP also includes a way of receiving content from clients. This feature is used for submitting Web forms, including uploading of files.

Many generic Web servers also support server-side scripting, e.g., Apache HTTP Server with PHP. This means that the behaviour of the Web server can be scripted in separate files, while the actual server software remains unchanged. Usually this functionality is used to generate HTML documents on the fly, as opposed to returning fixed documents; the two cases are referred to as dynamic and static content respectively. Dynamic content is primarily used for retrieving and/or modifying information in databases; static content is typically much faster to serve and more easily cached.
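The difference between static and dynamic content can be seen with Python's standard-library http.server module. The sketch below (illustrative only; real deployments use dedicated server software) generates an HTML document on the fly for every request:

```python
import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

class DynamicHandler(BaseHTTPRequestHandler):
    """Answer every GET with an HTML page generated on the fly."""

    def do_GET(self):
        body = ("<html><body><p>Generated at {}</p></body></html>"
                .format(datetime.datetime.now().isoformat())).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("127.0.0.1", 8000), DynamicHandler).serve_forever()
```

A static handler would instead read a fixed file from disk; because that file does not change between requests, it can be cached by browsers and intermediaries, which is why static content is typically faster to serve.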

Specialized embedded Web servers can be found in devices such as printers and routers, easing administration through a familiar user interface in the form of a Web page.

History of Web servers

The world's first Web server.

In 1989 Tim Berners-Lee proposed to his employer CERN (European Organization for Nuclear Research) a new project, which had the goal of easing the exchange of information between scientists by using a hypertext system. As a result of the implementation of this project, in 1990 Berners-Lee wrote two programs: a browser called WorldWideWeb and the world's first Web server, later known as CERN httpd.

Between 1991 and 1994 the simplicity and effectiveness of the early technologies used to surf and exchange data over the World Wide Web helped to port them to many different operating systems and to spread their use among many different groups of people: first in scientific organizations, then in universities and finally in industry.

In 1994 Tim Berners-Lee founded the World Wide Web Consortium (W3C) to regulate the further development of the many technologies involved (HTTP, HTML, etc.) through a standardization process.

Common features

  1. Virtual hosting to serve many Web sites using one IP address.
  2. Large file support to be able to serve files whose size is greater than 2 GB on a 32-bit OS.
  3. Bandwidth throttling to limit the speed of responses, so as not to saturate the network and to be able to serve more clients.
  4. Server-side scripting to generate dynamic Web pages, but still keeping Web server and Web site implementations separate from each other.
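Feature 1 above, name-based virtual hosting, works by dispatching on the HTTP/1.1 Host header: one IP address, many sites. A minimal sketch of the idea (the site names and directories are hypothetical):

```python
# Map each hosted site name to its document root (hypothetical values).
SITES = {
    "www.example.com": "/var/www/example",
    "images.example.com": "/var/www/images",
}

def document_root(host_header, default="/var/www/default"):
    """Choose a site's document root from the Host header.

    The optional ":port" suffix is stripped and the name is
    lower-cased, since host names are case-insensitive.
    """
    host = host_header.split(":", 1)[0].lower()
    return SITES.get(host, default)
```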

Path translation

Web servers are able to map the path component of a Uniform Resource Locator (URL) into:

  • a local file system resource (for static requests);
  • an internal or external program name (for dynamic requests).

For a static request the URL path specified by the client is relative to the Web server's root directory.

Consider the following URL as it would be requested by a client:

http://www.example.com/path/file.html

The client's user agent will translate it into a connection to www.example.com with the following HTTP/1.1 request:

GET /path/file.html HTTP/1.1
Host: www.example.com

The Web server on www.example.com will append the given path to the path of its root directory. On Unix machines, this is commonly /var/www. The result is the local file system resource:

/var/www/path/file.html

The Web server will then read the file, if it exists, and send a response to the client's Web browser. The response will describe the content of the file and contain the file itself.
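The path translation described above can be sketched as follows. This is an illustrative implementation, not any particular server's code; note that a real server must refuse paths containing ".." segments that would escape the document root:

```python
import os.path

DOC_ROOT = "/var/www"  # document root, as in the example above

def translate_path(url_path, doc_root=DOC_ROOT):
    """Map the path component of a URL onto the document root.

    Raises ValueError for paths that would escape the root
    (a classic directory-traversal attack).
    """
    candidate = os.path.normpath(os.path.join(doc_root, url_path.lstrip("/")))
    if candidate != doc_root and not candidate.startswith(doc_root + os.sep):
        raise ValueError("path escapes document root: {!r}".format(url_path))
    return candidate
```

For the example above, translate_path("/path/file.html") yields /var/www/path/file.html.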

Load limits

A Web server (program) has defined load limits: it can handle only a limited number of concurrent client connections (usually between 2 and 80,000, by default between 500 and 1,000) per IP address (and TCP port), and it can serve only a certain maximum number of requests per second, depending on:

  • its own settings;
  • the HTTP request type;
  • content origin (static or dynamic);
  • the fact that the served content is or is not cached;
  • the hardware and software limits of the OS where it is working.

When a Web server is at or above its load limits, it becomes unresponsive.
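A server enforces its connection limit with some bounded counter. The sketch below (illustrative, not taken from any real server) uses a semaphore to refuse, rather than queue, connections beyond the configured maximum:

```python
import threading

class ConnectionLimiter:
    """Cap the number of concurrent connections, as a server's
    configured limit (e.g. the default 500-1,000) does."""

    def __init__(self, max_connections):
        self._slots = threading.BoundedSemaphore(max_connections)

    def try_acquire(self):
        # Non-blocking: when all slots are taken, refuse the
        # connection instead of queueing it.
        return self._slots.acquire(blocking=False)

    def release(self):
        self._slots.release()
```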

Kernel-mode and user-mode Web servers

A Web server can either be incorporated into the OS kernel, or run in user space (like other regular applications).

An in-kernel Web server (like TUX on GNU/Linux or Microsoft IIS on Windows) will usually work faster, because, as part of the system, it can directly use all the hardware resources it needs, such as non-paged memory, CPU time-slices, network adapters, or buffers.

Web servers that run in user mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they are not always granted, because the system reserves resources for its own use and has the responsibility of sharing hardware resources with all the other running applications.

Also, applications cannot access the system's internal buffers, which causes useless buffer copies that create another handicap for user-mode Web servers. As a consequence, the only way for a user-mode Web server to match kernel-mode performance is to raise the quality of its code to much higher standards, similar to that of the code used in Web servers that run in the kernel. This is a significant issue under Windows, where the user-mode overhead is about six times greater than that under Linux.[1]

Overload causes

At any time Web servers can be overloaded because of:

  • Too much legitimate Web traffic. Thousands or even millions of clients connecting to the Web site in a short interval, e.g., Slashdot effect;
  • DDoS. Distributed Denial of Service attacks;
  • Computer worms, which sometimes cause abnormal traffic because of millions of infected computers (not coordinated among themselves);
  • XSS viruses can cause high traffic because of millions of infected browsers and/or Web servers;
  • Internet Web robots. Traffic not filtered/limited on large Web sites with very few resources (bandwidth, etc.);
  • Internet (network) slowdowns, so that client requests are served more slowly and the number of connections increases so much that server limits are reached;
  • Web servers (computers) partial unavailability. This can happen because of required or urgent maintenance or upgrade, hardware or software failures, back-end (e.g., DB) failures, etc.; in these cases the remaining Web servers get too much traffic and become overloaded.

Overload symptoms

The symptoms of an overloaded Web server are:

  • requests are served with (possibly long) delays (from 1 second to a few hundred seconds);
  • 500, 502, 503, 504 HTTP errors are returned to clients (sometimes also unrelated 404 error or even 408 error may be returned);
  • TCP connections are refused or reset (interrupted) before any content is sent to clients;
  • in very rare cases, only partial contents are sent (but this behavior may well be considered a bug, even if it usually depends on unavailable system resources).
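From the client's side, the 5xx errors listed above are usually transient, so a well-behaved client retries with an increasing, randomized delay rather than hammering the overloaded server. A sketch of this idea (the constants are illustrative, not from any standard):

```python
import random

# Status codes from the list above that typically signal overload.
OVERLOAD_STATUSES = {500, 502, 503, 504}

def retry_delay(attempt, base=0.5, cap=30.0):
    """Exponential backoff with full jitter: the delay in seconds
    to wait before retry number `attempt` (0-based)."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```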

Anti-overload techniques

To partially overcome the load limits described above and to prevent overload, most popular Web sites use common techniques such as:

  • managing network traffic, by using:
    • Firewalls to block unwanted traffic coming from bad IP sources or having bad patterns;
    • HTTP traffic managers to drop, redirect or rewrite requests having bad HTTP patterns;
    • Bandwidth management and traffic shaping, in order to smooth down peaks in network usage;
  • deploying Web cache techniques;
  • using different domain names to serve different (static and dynamic) content from separate Web servers, e.g.:
    • http://images.example.com
    • http://www.example.com
  • using different domain names and/or computers to separate big files from small and medium sized files; the idea is to be able to fully cache small and medium sized files and to efficiently serve big or huge (over 10 - 1000 MB) files by using different settings;
  • using many Web servers (programs) per computer, each one bound to its own network card and IP address;
  • using many Web servers (computers) that are grouped together so that they act or are seen as one big Web server, see also: Load balancer;
  • adding more hardware resources (i.e. RAM, disks) to each computer;
  • tuning OS parameters for hardware capabilities and usage;
  • using more efficient computer programs for Web servers, etc.;
  • using other workarounds, especially if dynamic content is involved.
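The server-grouping technique above is what a load balancer implements: a group of Web servers (computers) acts as, and is seen by clients as, one big server. The smallest possible sketch is round-robin dispatch over a pool of backends (the addresses are made up):

```python
import itertools

class RoundRobinBalancer:
    """Hand each incoming request to the next backend in turn, so a
    group of Web servers appears to clients as one big server."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        return next(self._cycle)

balancer = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
```

Real load balancers add health checks, session affinity and weighting on top of this basic rotation.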

Market structure

Market share of major Web servers (source: Netcraft)

Below is a list of the top Web server software vendors, as published in a Netcraft survey in January 2010.

Vendor        Product    Web sites hosted (millions)    Percent
Apache        Apache     111                            54%
Microsoft     IIS        50                             24%
Igor Sysoev   nginx      16                             8%
Google        GWS        15                             7%
lighttpd      lighttpd   1                              0%



