COMPUTER NETWORKING AND MANAGEMENT

Task 3 – HTTP

            Upon connecting to the Internet, various protocols and services are used to transmit information to the client.

            According to (2002), the Web is a collection of sites, or documents, which share a specialized protocol, HyperText Transfer Protocol (HTTP), enabling different operating systems to share the same data. Web documents are formatted in HTML, or HyperText Markup Language, to standardise the presentation of text, graphics and links to other sites, with the end result that a document will look more or less the same in a browser for Windows, MacOS or UNIX. Web browsers use HTTP to communicate with a server and then translate HTML code to display pages on the client computer. Clicking on a hypertext link, which is an embedded address or URL, will effect a transfer of information, whether to another document, image or file. Hypertext is something of a misnomer, as links within a web document can also be anchored to images.

            (2002) further explained that HyperText Transfer Protocol enables the easy retrieval of documents from the Internet regardless of where they are held. HTTP defines URLs not only for the Web but also for FTP, Gopher and Usenet sites, making it an extremely useful means of accessing a wide range of documents. To implement data transfer as simply as possible, HTTP provides web authors with the ability to embed hypertext links within documents.

a) Discuss the following terms as applied to the HTTP and explain how each can provide performance improvements to HTTP applications:

• Persistent connections

            Wikipedia explains, “HTTP persistent connection is a connection method introduced in HTTP/1.1 which enables using one connection to send/receive multiple HTTP requests/responses” (see picture below).

Figure 1: Schema of multiple vs. persistent connection. (Source: http://en.wikipedia.org/wiki/Image:HTTP_persistent_connection.svg)

           

            Among its advantages are: (1) since fewer connections are open simultaneously, less memory and CPU time are used; (2) HTTP pipelining of requests and responses is enabled; (3) because TCP connections are fewer, network congestion is reduced; (4) subsequent requests require no new TCP handshake, so their latency is reduced; and (5) errors can be reported without the penalty of closing the TCP connection.

            RFC 2616 explains, “a single-user client should not maintain more than 2 connections with any server or proxy. A proxy should use up to 2*N connections to another server or proxy, where N is the number of simultaneously active users. These guidelines are intended to improve HTTP response times and avoid congestion.”

            On the other hand, many web browsers support persistent connections to their proxies and web servers. Netscape Navigator queues all idle persistent connections. Internet Explorer typically opens two connections per server and times out persistent connections after 60 seconds of inactivity. Mozilla Firefox supports persistent connections with customizable per-server and per-proxy settings. Finally, Opera supports persistent connections in much the same way as Mozilla.
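
            As a rough illustration of connection reuse, the following sketch (Python standard library only; the throwaway local server and handler are purely illustrative) issues two requests over a single http.client connection and checks that the same TCP socket carried both:

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"        # HTTP/1.1 defaults to keep-alive

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # lets the client reuse the socket
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):        # keep the demo quiet
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
socks = []
for _ in range(2):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()                          # a response must be drained before reuse
    socks.append(conn.sock)              # remember which socket carried it

reused = socks[0] is socks[1]            # one TCP connection, two requests
conn.close()
server.shutdown()
```

With HTTP/1.0-style one-request-per-connection, the client would instead pay a fresh TCP handshake for the second request.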

 

• Pipelining

            HTTP requests are normally issued sequentially: the response to the current request must be fully received before the next request is issued. Depending on bandwidth limitations and network latencies, considerable delay can therefore pass before the next request is seen by the server.

             (2005) explains, “HTTP allows multiple HTTP requests to be written out to a socket together without waiting for the corresponding responses. The requestor then waits for the responses to arrive in the order in which they were requested. The act of pipelining the requests can result in a dramatic improvement in page loading times, especially over high latency connections.”

Figure 2: Schema of non-pipelined vs. pipelined connection.

 

            Pipelining of requests brings a striking improvement in page loading times, especially over high-latency connections such as satellite Internet links. Moreover, network load is reduced because fewer TCP packets are sent over the network; several HTTP requests may fit into a single TCP packet.

            HEAD and GET requests are idempotent and are the only requests that can safely be pipelined; PUT and POST requests should not be pipelined. Requests should also not be pipelined on a newly opened connection, since it has not yet been determined whether the proxy or origin server supports HTTP/1.1. This means that pipelining may only be done when reusing an existing persistent connection.

            HTTP pipelining requires support from both the server and the client. Servers, however, are not required to pipeline their responses; they are only required not to fail if a client chooses to pipeline requests.

            RFC 2616 explains that: “a client that supports persistent connections MAY “pipeline” its requests (i.e., send multiple requests without waiting for each response). A server MUST send its responses to those requests in the same order that the requests were received. Clients which assume persistent connections and pipeline immediately after connection establishment SHOULD be prepared to retry their connection if the first pipelined attempt fails. If a client does such a retry, it MUST NOT pipeline before it knows the connection is persistent. Clients MUST also be prepared to resend their requests if the server closes the connection before sending all of the corresponding responses. Clients SHOULD NOT pipeline requests using non-idempotent methods or non-idempotent sequences of methods. Otherwise, a premature termination of the transport connection could lead to indeterminate results. A client wishing to send a non-idempotent request SHOULD wait to send that request until it has received the response status for the previous request.”
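
            To make the idea concrete, this sketch (the host and paths are hypothetical) builds the byte stream a pipelining client would write to the socket in one burst: several GET requests back to back, before any response has arrived:

```python
def pipelined_requests(host, paths):
    """Concatenate several GET requests so they can be written to one
    socket together, without waiting for the intervening responses."""
    requests = []
    for path in paths:
        requests.append(
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: keep-alive\r\n"   # stay on the persistent connection
            "\r\n"
        )
    return "".join(requests).encode("ascii")

# Two requests leave in one write; per RFC 2616 the responses must
# come back in this same order.
payload = pipelined_requests("example.com", ["/a.html", "/b.css"])
```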

 

• Conditional GET / client caching

            GET is the most common type of HTTP request. It is used to request a “representation of the specified resource”. GET is considered a safe method because it is intended only for the retrieval of information and should not alter the state of the server; in other words, it should have no side effects. In practice, however, GET requests may still cause modifications on the server.

            GET is also defined to be an idempotent method, meaning that multiple identical requests should have the same outcome as a single request.

            An example of a change on the server is an HTML page that uses a plain hyperlink to trigger the deletion of a record in a domain database; this alters the server's state as a side effect of a GET request. This practice is discouraged because it can cause problems for search engines, Web caches and other automated agents, which can make unintentional modifications on the server.

            The three basic means HTTP provides for controlling caches are defined by Wikipedia as:

            “Freshness allows a response to be used without re-checking it on the origin server, and can be controlled by both the server and the client. For example, the Expires response header gives a date when the document becomes stale, and the Cache-Control: max-age directive tells the cache how many seconds the response is fresh for; Validation can be used to check whether a cached response is still good after it becomes stale. For example, if the response has a Last-Modified header, a cache can make a conditional request using the If-Modified-Since header to see if it has changed; and, Invalidation is usually a side effect of another request that passes through the cache. For example, if a URL associated with a cached response subsequently gets a POST, PUT or DELETE request, the cached response will be invalidated.”
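
            A minimal sketch of the first two mechanisms, freshness and validation (the function names and header values here are illustrative, not taken from any particular cache implementation):

```python
def is_fresh(stored_at, max_age, now):
    """Freshness: a cached response may be reused without contacting the
    origin server while its age is still below Cache-Control: max-age."""
    return (now - stored_at) < max_age

def revalidation_headers(cached_headers):
    """Validation: build a conditional GET from the cached response's
    validators, so the origin can answer 304 Not Modified with no body."""
    headers = {}
    if "Last-Modified" in cached_headers:
        headers["If-Modified-Since"] = cached_headers["Last-Modified"]
    if "ETag" in cached_headers:
        headers["If-None-Match"] = cached_headers["ETag"]
    return headers

cached = {"Last-Modified": "Tue, 15 Nov 1994 12:45:26 GMT", "ETag": '"abc123"'}
stale = not is_fresh(stored_at=0, max_age=3600, now=7200)  # older than max-age
conditional = revalidation_headers(cached)                 # sent only when stale
```

If the resource has not changed, the 304 response carries headers only, so the client re-displays its cached copy without transferring the body again.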

 

b) Outline THREE different Internet protocols commonly used to retrieve mail from mail servers by e-mail user agents. Compare and contrast these.

            Domain Name System.  (1999) explains that: “A domain name is an easy-to-remember replacement for an Internet address. When an individual or corporation registers for a domain name, it is actually assigned an Internet Protocol (IP) address. This address consists of several domains, moving left to right from the most specific to the most general, with each domain separated by periods. Because IP addresses are difficult to remember, Internet users substitute unique domain names as pseudonyms for the computer's real identification number. When a domain name is entered into a computer it is automatically converted into the numbered address, which contacts the appropriate site.”

            Post Office Protocol. Sloboda explains that: “POP3 is the set of standards and languages that ISPs use to route electronic mail from the sender to the receiver. It is the e-mail equivalent of a street address and zip code. Users log into their ISP and download messages to their computers from the ISP's server. Traditional ISP accounts use the POP3 system. It is one of the easiest to use and requires software that generally comes with a web browser or popular e-mail software. With traditional POP e-mail, the messages are transferred from the computers of the ISP to the user's. Web-based e-mail services store all "read," "unread" and "sent" e-mail on the provider's computer. The messages are never transferred to the user's computer. This gives the user the advantage of being able to log into any Internet-accessible computer to access their e-mail files. With POP e-mail, the user can only access old messages from their own computer. POP e-mail, or POP3, is a common e-mail system used by most ISPs and some corporate e-mail systems.”

            Internet Message Access Protocol. (2005) explains that: “Internet Message Access Protocol (IMAP) is a similar but more powerful program that was developed at  University in 1986. It is currently in its fourth version (IMAP4). Its advantage is that it allows you to search messages that are still on the mail server for keywords and thus decide which to download; that is, it allows one to create killfiles.”

            Of the three, POP3 is the most widely used, being supported by most ISPs and some corporate e-mail systems, and it is the easiest to set up. IMAP, by contrast, keeps messages on the server, so mail can be searched and read from any Internet-connected computer, anywhere, at any time.
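
            The contrast shows up directly in Python's standard-library clients. The following sketch (the server names and credentials would be hypothetical; it is not runnable against a real mailbox as written) downloads everything with poplib but searches server-side with imaplib:

```python
import imaplib
import poplib

def fetch_all_pop3(host, user, password):
    """POP3 model: log in and pull every message down to this computer."""
    mbox = poplib.POP3_SSL(host)
    mbox.user(user)
    mbox.pass_(password)
    count, _size = mbox.stat()
    messages = []
    for i in range(1, count + 1):
        _resp, lines, _octets = mbox.retr(i)   # whole message comes down
        messages.append(b"\r\n".join(lines))
    mbox.quit()
    return messages

def search_imap(host, user, password, keyword):
    """IMAP model: messages stay on the server; search there and decide
    later which ones are worth downloading."""
    mbox = imaplib.IMAP4_SSL(host)
    mbox.login(user, password)
    mbox.select("INBOX")
    _typ, data = mbox.search(None, "TEXT", keyword)  # server-side search
    mbox.logout()
    return data[0].split()                           # matching message ids
```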

 

Task 4 – IntServ and QoS

            In computer networking, IntServ, or Integrated Services, is an architecture that specifies the elements needed to guarantee quality of service (QoS) on networks. For example, IntServ can be used to let sound and video reach the receiver without disruption.

            Under the IntServ design, every application has to make an individual reservation if it requires some kind of guarantee. In addition, every router along the path must implement IntServ. The purpose of a reservation is described by “flow specs”, while the fundamental mechanism for signalling it across the network is RSVP.

a) Explain how the Integrated Services (Intserv) architecture allows an Internet connection to have a guaranteed Quality of Service (QoS).

 

            It has been said that many establishments want the best out of their network. Now, who wouldn't? Everyone wants a network that absolutely guarantees its performance, that is, one that meets an absolute, worst-case bound on response time or latency. Predictable performance alone is not enough; everyone wants performance bounded by the promise that end-to-end delay will not exceed a particular limit. To achieve this, a network must have industrial-strength quality of service (QoS).

            Most routers, and some switches, perform packet scheduling: they manipulate internal queues to manage the order in which different traffic types are scheduled for transmission on outbound links. Compared with these sophisticated techniques, first-in, first-out (FIFO) scheduling is the simple baseline.

            Application of any of these scheduling algorithms requires traffic to be classified first as it enters the router. This classification can be based upon the explicit signaling information located within each packet, TCP/UDP socket number, incoming router port or source/destination IP address.

            Packet scheduling may start as soon as traffic classification is done, and prioritization is the most straightforward technique. “Fairness” may have to be built into the scheduling algorithm, however, or low-priority applications may experience time-outs before they are ever transmitted.

            Multiple streams of high-priority traffic will still suffer queuing delays while competing for the router's output link. Once congestion takes place, prioritization cannot guarantee that vital data will reach its destination in a timely fashion; it only ensures that high-priority packets leave the router ahead of low-priority packets.
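
            A strict-priority scheduler of the kind described above can be sketched in a few lines (a toy model; real routers do this per interface, usually in hardware):

```python
import heapq
from itertools import count

class PriorityScheduler:
    """Strict-priority packet scheduler: lower number = higher priority.
    A monotonically increasing sequence number preserves FIFO order
    within each traffic class."""
    def __init__(self):
        self._heap = []
        self._seq = count()

    def enqueue(self, packet, priority):
        heapq.heappush(self._heap, (priority, next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = PriorityScheduler()
sched.enqueue("bulk-1", priority=2)    # low-priority file transfer
sched.enqueue("voice-1", priority=0)   # high-priority voice
sched.enqueue("bulk-2", priority=2)
sched.enqueue("voice-2", priority=0)
order = [sched.dequeue() for _ in range(4)]
# All voice packets leave ahead of all bulk packets, FIFO within each
# class, which is exactly why bulk traffic can starve under congestion.
```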

            Router prioritization is sufficient to ensure that transaction-processing applications take precedence in a router network. However, it cannot guarantee network performance.

            Then what is enough to guarantee delay through a network if router prioritization does not? According to Passmore (1998), link bandwidth reservation is the simple answer. However, he also explains that “allocating bandwidth in a way that achieves bounded, end-to-end latency is very complex.”

            Bandwidth reservation uses the same packet-scheduling mechanism implemented by routers, but with a difference: if end-to-end latency guarantees are to be met, bandwidth (and corresponding router buffers) must be allocated on every link along a network path. The challenge is to configure the routers so that the required bandwidth is allocated in the face of dynamically changing network topologies, a large number of traffic classes and new application requirements.

            Then, a few years back, an IETF (Internet Engineering Task Force) working group developed an approach to guaranteed latency that combines a more administrator-friendly packet-scheduling technique with a way to control traffic entry into the network. Integrated Services (IntServ) was born.

            Unfortunately, that still was not adequate for complete latency guarantees. A way was needed to reserve bandwidth before a flow traverses the network, so that routers can allocate buffers in advance (or indicate that guaranteed end-to-end service cannot be delivered). This is where RSVP (Resource Reservation Protocol) comes in.

            So what happens here? Wikipedia explains how: “all machines on the network capable of sending QOS data send a PATH message every 30 seconds, which spreads out through the networks. Those who want to listen to them send a corresponding RESV (short for “Reserve”) message which then traces the path backwards to the sender. The RESV message contains the flow specs. The routers between the sender and listener have to decide if they can support the reservation being requested, and if they cannot then send a reject message to let the listener know about it. Otherwise, once they accept the reservation they have to carry the traffic. The routers then store the nature of the flow, and also police it. This is all done in soft state, so if nothing is heard for a certain length of time, then the reservation will time out and be cancelled. This solves the problem if either the sender or the receiver crash or are shut down incorrectly without first canceling the reservation. The individual routers may, at their option, police the traffic to check that it conforms to the flow specs.”
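
            The soft-state behaviour in that description can be modelled in a few lines (the timer values and flow names are illustrative, not taken from the RSVP specification):

```python
class SoftStateTable:
    """Toy model of RSVP soft state: a reservation survives only as long
    as RESV refreshes keep arriving; silence cancels it automatically."""
    def __init__(self, lifetime):
        self.lifetime = lifetime
        self.last_refresh = {}          # flow id -> time of last RESV

    def refresh(self, flow, now):
        self.last_refresh[flow] = now   # each RESV message re-arms the timer

    def expire(self, now):
        stale = [f for f, t in self.last_refresh.items()
                 if now - t > self.lifetime]
        for f in stale:
            del self.last_refresh[f]    # crashed endpoint: state just ages out
        return stale

table = SoftStateTable(lifetime=90)
table.refresh("flowA", now=0)
table.refresh("flowB", now=0)
table.refresh("flowA", now=60)          # flowA keeps sending refreshes
expired = table.expire(now=100)         # flowB silent for 100s > 90s lifetime
```

No explicit teardown message is needed: a sender or receiver that crashes simply stops refreshing, and its reservation ages out on its own.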

b) Explain how the Differentiated Services (Diffserv) architecture allows an Internet connection to have a differentiated QoS.

           

            Wikipedia states that: “DiffServ or Differentiated Services is a computer networking architecture that specifies a simple, scalable and coarse-grained mechanism for classifying and managing network traffic and providing QOS guarantees on modern IP networks. DiffServ can, for example, be used to provide low-latency, guaranteed service (GS) to critical network traffic such as voice or video while providing simple best-effort traffic guarantees to non-critical services such as web traffic or file transfers.”

            DiffServ is another IETF standards effort, an alternative that relies on the Type of Service (TOS) bits within each IP packet header to signal service quality. Each router can then classify traffic based on the TOS bits and adjust its scheduling algorithm as needed for the packets in each service class.

            DiffServ is yet another "good news, bad news" situation. Using TOS bits provides information to the router only after each packet arrives. With this signaling mechanism, there is no concept of a flow and the router doesn't have to maintain any flow state information. This is good news, because DiffServ doesn't suffer from scalability problems.

            The bad news is that the lack of state information makes the router less effective in providing QOS - it can't know how to reserve buffers before incoming traffic flows arrive. That's why the IETF calls it differentiated rather than guaranteed service. While DiffServ promises to become useful for transaction processing or other traffic that benefits from prioritization, it can't provide strict network latency guarantees.
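
            Marking the TOS/DSCP byte is something an application can do itself. This sketch (Linux-style socket option; whether routers actually honour the mark depends entirely on the network's DiffServ policy) stamps a UDP socket with the Expedited Forwarding code point:

```python
import socket

DSCP_EF = 46                 # Expedited Forwarding, the usual voice marking
tos = DSCP_EF << 2           # DSCP occupies the top 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)   # mark outgoing packets
readback = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
sock.close()
```

Every datagram sent on the socket now carries the mark, and each router along the path reads it per-packet; no flow state is set up anywhere, which is precisely DiffServ's scalability advantage over IntServ.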

c) Explain why it is likely to be difficult to implement Intserv in the core of the Internet.

 

            IntServ is not actually very popular. The reason is that it makes it difficult to track all reservations. IntServ works on a small scale, but every router must store state for each reservation, and this state becomes impossible to track as networks scale up to an Internet-sized system.

            It has been found that the solution to this problem is a multi-level approach.

 

Task 5 – 60 Marks

            Using available Internet and library resources, research the provisions of the ISO standard 17799: 2005 – Code of Practice for Information Security Management.

a) Outline the CLAUSES and CONTROL OBJECTIVES of the standard.

            The International Organization for Standardization's ISO 17799 is a set of recommendations organized into ten major sections covering all facets of information-systems policies and procedures. The ten domains of ISO 17799, and what they help with, are (Andress, 2003):

            “(1) Business continuity planning – counteract interruptions to business activities and to critical business processes from the effects of major failures or disasters.

            (2) System access control – control access to information; stop unauthorized admission to information systems; guarantee the protection of networked services; thwart unauthorized computer access; detect unauthorized activities; and guarantee information safety when travelling and telecommuting.

            (3) System development and maintenance – ensure security is built into operational systems; prevent loss, modification, or misuse of user data in application systems; protect the confidentiality, authenticity, and integrity of information; maintain the security of application system software and data; and ensure that information technology (IT) projects and support activities are conducted in a secure manner.

            (4) Environmental and physical security – thwart unauthorized entrance to, damage to, and interference with business premises and information; stop loss or compromise of property and disruption to business activities; and prevent compromise or theft of information-processing facilities.

            (5) Compliance – avoid breaches of any criminal or civil law, any statutory, regulatory, or contractual obligations, and any security requirements; ensure compliance of systems with organizational security policies and standards; and maximize the effectiveness of, and minimize interference to and from, the system-audit process.

            (6) Personnel security – decrease risks of human error, theft, fraud, or misuse of facilities; guarantee that users are aware of information-security threats and concerns and are prepared to support the corporate security policy in the course of their normal work; and reduce the harm from security incidents and malfunctions and learn from such incidents.

            (7) Security organization – manage information security within the organization; maintain the security of organizational information-processing facilities and information assets accessed by third parties; and maintain the security of information when responsibility for information processing has been outsourced to another organization.

            (8) Computer and network management – guarantee the correct and safe operation of information-processing facilities; reduce the risk of systems failure; protect the integrity of software and information; preserve the integrity and availability of information processing and communication; guarantee the safeguarding of information in networks and the protection of the supporting infrastructure; prevent damage to assets and interruptions to business activities; and prevent loss, alteration, or misuse of information exchanged between organizations.

            (9) Asset classification and control – maintain appropriate protection of corporate assets and ensure that information assets receive an appropriate level of protection.

            (10) Security policy – provide management direction and support for information security.”

b) For each of the clauses, critically appraise how the main control objectives relate to an organisation with which you are familiar and briefly outline what would be required to enable the organisation to meet the objective.

            ISO 17799 also provides guidelines for an auditing standard. Obtaining ISO 17799 certification is a long, arduous process. Many toolkits and policy templates are available to help make everything a bit easier.

            Request for Comments (RFCs) 2196 and 2504 also provide excellent starting points for creating a solid security policy. Gartner Group, a Stamford, Connecticut, research firm, recommends that a strong information security policy contain the following key components (see below):

Gartner Group provides the following tips to help you avoid the common pitfalls in policy writing:

Avoid creating information-security policies without considering the organization's culture. Many security policies are developed using templates or sample policies from other organizations. Information-security policies that are inconsistent with the organization's culture and business practices will often lead to widespread noncompliance.

Develop policies that are realistic and explicitly endorsed by management. Before issuing the policy, address concerns regarding user acceptance and costs associated with retrofitting systems or business practices.

Don't underestimate the need for effective policy-awareness programs. Employees must understand a policy before they can be expected to comply with it. An effective awareness program should include advance notice of the policy and an announcement letter from a key stakeholder. When the policy is issued, it should mention compliance-monitoring measures and whether there will be a grace period before these activities begin. It is extremely important that the procedures for obtaining policy exceptions and reporting violations be thoroughly explained. An awareness campaign should include regular reminders to employees and might also include self-audits that can aid employees and departmental managers in identifying compliance issues.

Develop policies in conjunction with compliance-monitoring procedures and include disciplinary action for noncompliance. Compliance-monitoring procedures are needed to ensure that policy-interpretation errors and violations are detected and addressed. Where possible, organizations should implement automated tools to ensure timely, consistent policy enforcement. If manual processes are used, they must be regularly scheduled. Incidents must be formally tracked and investigated, policy violations must be handled according to severity, and disciplinary actions must be applied consistently. Incident-management procedures should address how to investigate and collect evidence and when to contact law-enforcement agencies. Finally, data on employee compliance, exceptions, and violations must be communicated regularly to senior leadership to ensure that they remain informed and supportive.

            (2003) also recommends: a statement of ownership of information, a definition of an employee's or user's responsibility for the protection of the information asset, and the intended recourse for noncompliance. She further explains that the key to a successfully developed and implemented information security policy lies in the answers to the following questions: do employees understand the difference between appropriate and inappropriate use? Will employees report apparent violations? And do employees know how to report apparent violations?

            If the answer to all three questions is yes, then the employees will follow and adhere to the policies.

 

 

 

