The World Wide Web ("WWW" or simply the "Web") is a system of interlinked hypertext documents that runs over the Internet. With a Web browser, a user views Web pages that may contain text, images, and other multimedia, and navigates between them using hyperlinks. The Web was first created around 1990 by the Englishman Tim Berners-Lee, working at CERN in Geneva, Switzerland. As its inventor, Berners-Lee conceptualised the Web as the Semantic Web, in which all content would be descriptively marked up.

Basic terms

The World Wide Web is the combination of four basic ideas:

- Hypertext: a format of information which allows one, in a computer environment, to move from one part of a document to another, or from one document to another, through internal connections among these documents (called "hyperlinks");
- Resource identifiers: unique identifiers used to locate a particular resource (computer file, document or other resource) on the network. These are commonly known as URLs or URIs, although the two have subtle technical differences;
- The client-server model of computing: a system in which client software or a client computer makes requests of server software or a server computer, which provides the client with resources or services such as data or files;
- Markup language: characters or codes embedded in text which indicate structure, semantic meaning, or advice on presentation.

On the World Wide Web, a client program called a user agent retrieves information resources, such as Web pages and other computer files, from Web servers using their URLs. If the user agent is a kind of Web browser, it displays the resources on the user's computer. The user can then follow hyperlinks in each Web page to other World Wide Web resources, whose locations are embedded in the hyperlinks.
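The parts of a resource identifier can be inspected programmatically. As an illustration (the URL below is hypothetical), Python's standard urllib.parse module splits a URL into the scheme, server name, path and other components a user agent works with:

```python
from urllib.parse import urlparse

# A hypothetical URL, used purely for illustration.
url = "http://www.example.com/wiki/World_Wide_Web?action=view#History"

parts = urlparse(url)
print(parts.scheme)    # the protocol the client should use: "http"
print(parts.netloc)    # the server name, to be resolved via DNS: "www.example.com"
print(parts.path)      # the resource on that server: "/wiki/World_Wide_Web"
print(parts.query)     # extra parameters: "action=view"
print(parts.fragment)  # a position within the document: "History"
```

The same components appear, in the same order, in any URL a browser's address bar accepts.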
It is also possible, for example by filling in and submitting Web forms, to post information back to a Web server for it to save or process in some way. Web pages are often arranged in collections of related material called "Web sites". The act of following hyperlinks from one Web site to another is referred to as "browsing", or sometimes as "surfing", the Web.

The phrase "surfing the Internet" was first popularized in print by Jean Armour Polly, a librarian, in an article called "Surfing the INTERNET", published in the University of Minnesota Wilson Library Bulletin in June 1992. Although Polly may have developed the phrase independently, slightly earlier uses of similar terms appeared on Usenet in 1991 and 1992, and some recollections claim it was also used verbally in the hacker community for a couple of years before that. For more information on the distinction between the World Wide Web and the Internet itself, which in everyday use are sometimes confused, see Dark internet, where this is discussed in more detail.

Although the English word worldwide is normally written as one word (without a space or hyphen), the proper name World Wide Web and the abbreviation WWW are now well established even in formal English. The earliest references to the Web called it the WorldWideWeb (an example of computer programmers' fondness for CamelCase) or the World-Wide Web (with a hyphen, the version of the name closest to normal English usage). Ironically, the abbreviation "WWW" is somewhat impractical, as it contains two or three times as many syllables (depending on accent) as the full term "World Wide Web", and thus takes longer to say.

How the Web works

Viewing a Web page or other resource on the World Wide Web normally begins either by typing the URL of the page into a Web browser, or by following a hypertext link to that page or resource.
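When a Web form is submitted, the browser encodes the field names and values into a request body that the server decodes on arrival. A minimal sketch of that round trip (the field names are hypothetical), using Python's standard library helpers:

```python
from urllib.parse import urlencode, parse_qs

# Hypothetical form fields a user might fill in before submitting.
form_fields = {"username": "alice", "comment": "Hello, Web!"}

# The browser encodes the fields as application/x-www-form-urlencoded data
# and sends them in the body of an HTTP POST request.
body = urlencode(form_fields)
print(body)  # username=alice&comment=Hello%2C+Web%21

# The server decodes the same body back into fields to save or process.
received = parse_qs(body)
print(received["comment"][0])  # Hello, Web!
```

The encoding escapes characters that would be ambiguous in a URL or request body (spaces become '+', punctuation becomes %-sequences), which is why the decoded result matches the original input exactly.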
The first step, behind the scenes, is for the server-name part of the URL to be resolved into an IP address by the global, distributed Internet database known as the Domain Name System (DNS). The next step is for an HTTP request to be sent to the Web server at that IP address, asking for the required page.

In the case of a typical Web page, the HTML text is first requested and parsed by the browser, which then makes additional requests, in quick succession, for graphics and any other files that form part of the page. This is the distinction between a single page view and the many hits or Web requests that are often necessary to view the page. The Web browser then renders the page as described by the HTML, CSS and other files received, incorporating the images and other resources as necessary. This produces the on-screen 'page' that the viewer sees.

Most Web pages will themselves contain hyperlinks to other relevant and informative pages, and perhaps to downloads, source documents, definitions and other Web resources. Such a collection of useful, related resources, interconnected via hypertext links, is what has been dubbed a 'web' of information. Making it available on the Internet created what Tim Berners-Lee first called the World Wide Web in the early 1990s.

Caching

If the user returns to a page fairly soon, it is likely that the data will not be retrieved from the source Web server again, as above. By default, browsers cache all Web resources on the local hard drive. The browser will send an HTTP request asking for the data only if it has been updated since the last download. If it has not, the cached version will be reused in the rendering step. This is particularly valuable in reducing the amount of Web traffic on the Internet. The decision about expiration is made independently for each resource (image, stylesheet, JavaScript file, etc., as well as for the HTML itself).
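The request-and-response cycle described above can be sketched with Python's standard library alone: a throwaway local server (a stand-in for a real Web server, purely for illustration) answers an HTTP GET for a page, and a client connects to its address, sends the request, and reads back the status, headers and HTML:

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

# A stand-in Web server, purely for illustration: it serves one HTML page.
class PageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><a href='/other'>a hyperlink</a></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), PageHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: send an HTTP GET to the server's address and port,
# then read the status line, headers and HTML body of the response.
conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/index.html")
response = conn.getresponse()
html = response.read().decode()
print(response.status)                     # 200
print(response.getheader("Content-Type")) # text/html
print(html)
conn.close()
server.shutdown()
```

A real browser would now parse the returned HTML, issue further GETs for any images or stylesheets it references, and render the result; the hyperlink in the body is what the user would follow to the next resource.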
Thus, even on sites with highly dynamic content, many of the basic resources are supplied only once per session or less. It is worth a Web site designer's while to collect all the CSS and JavaScript into a few site-wide files, so that they can be downloaded into users' caches, reducing page download times and demands on the server.

There are other components of the Internet that can cache Web content. The most common in practice are often built into corporate and academic firewalls, where they cache Web resources requested by one user for the benefit of all. Some search engines, such as Google, also store cached content from Web sites.

Apart from the facilities built into Web servers that can ascertain when physical files have been updated, it is possible for designers of dynamically generated Web pages to control the HTTP headers sent back to requesting users, so that pages are not cached when they should not be (for example, Internet banking and news pages). This helps with understanding the difference between the HTTP 'GET' and 'POST' verbs: data requested with a GET may be cached if other conditions are met, whereas data obtained after POSTing information to the server usually will not be.

From Wikipedia, the free encyclopedia. File last modified on 2016-05-11. Contributor: devassal thibault. See also this article on Wikipedia: World Wide Web. All text is available under the terms of the GNU Free Documentation License.
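The expiration decision described above can be sketched as a small, hypothetical helper (an illustrative simplification of real HTTP caching rules, not a complete implementation): given when a resource last changed and the timestamp from the browser's cached copy, it chooses between 304 Not Modified (reuse the cache) and 200 OK (send a fresh copy), while a `Cache-Control: no-store` header, as a banking page might send, forbids caching entirely:

```python
from datetime import datetime, timezone

def respond_to_conditional_get(last_modified, if_modified_since, response_headers):
    """Decide, as a Web server might, between 304 Not Modified and 200 OK.

    last_modified      -- when the resource last changed on the server
    if_modified_since  -- timestamp from the browser's cached copy (or None)
    response_headers   -- headers the page designer attached to the response
    """
    # Pages marked 'no-store' (e.g. Internet banking) must never be served
    # from cache, so a full response is always generated.
    if response_headers.get("Cache-Control") == "no-store":
        return 200
    # If the browser holds a copy and the resource has not changed since,
    # the server answers 304 and the cached version is reused.
    if if_modified_since is not None and last_modified <= if_modified_since:
        return 304
    return 200

changed = datetime(2016, 5, 11, tzinfo=timezone.utc)   # resource last edited
cached  = datetime(2016, 6, 1, tzinfo=timezone.utc)    # browser cached it later

print(respond_to_conditional_get(changed, cached, {}))                             # 304
print(respond_to_conditional_get(changed, None, {}))                               # 200
print(respond_to_conditional_get(changed, cached, {"Cache-Control": "no-store"}))  # 200
```

The same logic explains the GET/POST distinction mentioned above: only idempotent GET responses are candidates for this check at all, whereas a POST response is generated fresh each time.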