Task I

Introduction

            Networking has been one of the most important aspects of the evolution of the computer. Because of this technology, it has become easy for many companies and organizations to share information, data, files and resources. It lessens spending on utilities, and it saves time and effort. It also helps prevent redundant work, because it offers a real-time view of transactions.

            At the same time, Internet technology is a great help in connecting networks in different parts of the globe; it is now considered the global network that connects organizations, businesses, companies and individuals worldwide. The Internet and networking have made the lives and transactions of many businesses a lot easier.

A. Internet Infrastructure

Network Server and Client Operating System

1. My workplace uses a client-server network operating system; all of the resources, information and data flow are controlled by the so-called mother node, or server. The servers run Windows Server 2003, which offers features such as Terminal Services and improved services such as Active Directory, Message Queuing and Internet Information Services (IIS). The company has seven servers that handle printing, website maintenance, e-mail management, file management, virus prevention and database organization. As for the client workstations, all 18 clients are running Windows XP.

 

2. Hardware

Specification                  Size
Processor                      Intel Pentium 4 – 2.00 GHz
Random Access Memory           512 MB
Hard Disk                      80 GB

            All of the client computers, or workstations, have the above specifications, enough to handle the transactions inside and outside of the company. As for the cabling, because the company uses a star topology, it uses shielded twisted pair with RJ45 connectors. Each client also uses a Linksys EtherFast 10/100 LAN card. For the servers, the company uses the Dell PowerEdge 1950. For switching, the Linksys EtherFast 10/100 8-port auto-sensing switch covers the company's needs, and a Linksys Fast Ethernet DSL router connects the network to the Internet.

3. Internet-based Applications

The company offers three Internet-based applications: the order system, the payment system and customer service via chat. The order system is responsible for taking customers' orders and managing and organizing them for the company's inventory. The payment system, as the name says, is responsible for collecting customers' payments for specific services. The last application is the customer aid system, which uses a chat room as the place where customers and the company communicate about concerns with the services that have been rendered.

Internet Setup Security

            The Internet and networking have been a great help to many organizations and companies worldwide. They have helped many businesses communicate with suppliers and customers located in different places around the globe, and they are considered two important means by which any business or organization can expand and take advantage of the growing worldwide market. But like any technology, they have their own share of weaknesses, namely security issues. There are many security practices that must be followed in order to prevent future security dilemmas in networking and the Internet. First and foremost is the use of firewalls. A network firewall, or simply firewall, is software intended to prevent unauthorized traffic from traveling from one network to another. A firewall can help prevent unauthorized users from retrieving, reading and using data that are prohibited and confidential. As far as the company is concerned, it uses a firewall that protects its confidential files from the millions of Internet users.
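
A rough illustration of the rule-based filtering a firewall performs is sketched below in Python; the networks, ports and addresses are hypothetical, and a real firewall filters at the packet level rather than in application code.

    import ipaddress

    # Hypothetical rule set: (permitted source network, destination port) pairs.
    RULES = [
        (ipaddress.ip_network("192.168.1.0/24"), 80),    # internal web traffic
        (ipaddress.ip_network("192.168.1.0/24"), 443),   # internal HTTPS traffic
    ]

    def permit(src_ip, dst_port):
        """Allow traffic only when an explicit rule matches; deny by default."""
        src = ipaddress.ip_address(src_ip)
        return any(src in net and dst_port == port for net, port in RULES)

    assert permit("192.168.1.10", 443)        # authorized internal client
    assert not permit("203.0.113.7", 443)     # outside address is denied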

            Another important aspect is the implementation of antivirus software. This is important because the Internet is not free of viruses and other applications that can harm the client computers as well as the servers. The company has its own antivirus servers that monitor the network and keep it clean of different types of viruses. It is also important to keep the antivirus software updated. Regularly changing the router password can likewise help maintain security inside the network.

            Another piece of software the company must have is e-mail filtering software. This can help prevent viruses and other unwanted applications or data from entering the network.

Task II

Introduction

            Networks nowadays are becoming more complex than ever for different reasons, including the demand for security and other problems that companies are trying to prevent. Because of this growth and complexity, network management and system provisioning are carried out to tame the complexity of managing the network (2003). The reason behind this complexity and growth is that data and information, and the processes and methods for storing, retrieving and developing them, are vital to any organization and business, and there are far more applications running inside a company to generate information than before (2003). Another reason is that networking can help companies and organizations save money in maintaining and storing data and information through its sharing features; it only needs a high level of security in order to prevent future security problems such as unauthorized access to important and confidential data and information.

            Network management comprises the activities, methods, procedures and tools used to operate, administer, maintain and provision the network system (2006). It focuses on the resources available in the network and on the activities involved in monitoring and controlling their operation. All of the resources are combined and used to execute and deliver diverse services. It includes processes such as the deployment, integration and coordination of hardware, software and human participation in order to monitor, test, poll, configure, analyze, evaluate and control the network as a whole, and each element and resource involved in it, to meet real-time operational performance and the quality requirements of the service at a reasonable price (Telecommunication Consultants India Ltd n.d.).

            Network management is labor intensive and undoubtedly not an easy task. It entails various skills, implicit knowledge of the network, and compromise and negotiation skills, in view of the fact that the adopted strategies are put into practice in a situation where clear hierarchical relations are lacking (2004).

Figure 1 Infrastructure for Network Management (2007)
Managing Entity

            The managing entity is software or an application responsible for controlling the collection, processing, analysis and display of network management information, enabling the network manager to control the devices in the network. It is considered the center of network management because it controls the activity within the network and processes the network management information (2005).

            It is the component with a big-picture view of the entire network. It has a set of application-level programs that control and manage the network, with human intervention or a rule-based artificial intelligence or expert system as an assistant. It connects to the managed entities, the application-level processes located at each resource site, which communicate with the network manager in order to respond to the queries issued by the manager and to notify, report and inform the manager about any significant events.

Managed Device

A managed device is a piece of network equipment that is under the control of the managing entity and resides in a managed network. Such a device might be a host, a switch, a router or even a printer (2005).

Managed Object

            A managed object is the entity about which the agent collects data; it can be characterized as a variable that reflects the characteristics of the device being managed (2004). The managing agent is a software process running inside the managed device that is responsible for communicating with the managing entity and for taking action on the managed device under the managing entity's control (2004). A managed object is a part of the managed device that can be monitored or analyzed individually; an example of such an object is a processor (2002).

Management Information Base or MIB

            The Management Information Base, or MIB, is associated with all of the managed agents in a managed device. It is a repository for information about the managed objects that can be used by the SNMP (Simple Network Management Protocol) manager (2002). The MIB has been described as a relational database that stores information about all the activities of the managed network elements, such as their operations and the maintenance personnel pertinent to fault management applications (1998).

All of the information related to the performance of the managed device, such as the number of data packets sent and received, as well as information about the configuration of the managed device, is stored in the MIB (2002).

            The MIB is also used to represent data in a format that is independent of the intricacies of the underlying hardware and software (2002).
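
As a rough sketch of how a managing entity reads a MIB object over SNMP, the following uses the third-party pysnmp package (the 4.x-style synchronous API; the device address and community string are assumptions):

    # pip install pysnmp -- a sketch using the 4.x-style synchronous hlapi
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    # Query a (hypothetical) managed device for sysDescr.0 from the SNMPv2 MIB.
    error_indication, error_status, error_index, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData('public'),                 # community string (assumption)
        UdpTransportTarget(('192.0.2.1', 161)),  # device address (assumption)
        ContextData(),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

    if error_indication:
        print(error_indication)                  # e.g. request timed out
    else:
        for name, value in var_binds:
            print(name, '=', value)              # the device's description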

Network Management Protocol

            The network management protocol runs between the managing entity and the managed objects on the managed devices. It allows the managed objects to warn the managing entity about potential problems in the network, and it allows the managing entity to send commands to the managed objects (2005).

Network Faults

Virus Infection

            A virus can affect the performance of individual workstations in a network and eventually affect the data and information traffic of the network as a whole. That is why virus detection is one of the primary concerns of network management under security management. The use of updated and well-maintained antivirus software and servers can help prevent hazardous viruses from attacking the network.

Unauthorized Access

            Unauthorized access to the network is a big problem for the company, because information is vital to the performance and transactions of a business. With the SNMP protocol, the operation of the router can be secured through configuration. Privilege levels, access control lists (ACLs) and views help the network manager detect unauthorized access as well as prevent it from happening.

            An intrusion detection system, or IDS, can help maintain security inside the network. By analyzing application log files, an application-based IDS is able to detect different types of computer attacks and suspicious activities (2005).
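
A toy illustration of the log analysis an application-based IDS performs; the patterns below are illustrative assumptions, not a production signature set:

    import re

    # Hypothetical signatures: failed logins, SQL injection, path traversal.
    SUSPICIOUS = re.compile(r"failed login|union\s+select|\.\./", re.IGNORECASE)

    def scan_log(lines):
        """Return the log lines matching a known-suspicious pattern."""
        return [line for line in lines if SUSPICIOUS.search(line)]

    sample = [
        "10:00:01 user alice logged in",
        "10:00:09 failed login for user admin from 203.0.113.7",
        "10:00:12 GET /files/../../etc/passwd",
    ]
    for hit in scan_log(sample):
        print("ALERT:", hit)    # flags the second and third lines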

Task III

Introduction

            The authenticity and reliability of information and data are important in any network of any business or organization. Security is the most important factor that enables companies to gain the trust of their consumers and to maintain the secrecy and confidentiality of important, high-level company information. Network security is one of the factors that can affect the competitive advantage of a business, because the network carries information regarding the operations and performance of the company or organization. From the first installations and implementations of networks, security and authenticity have been the two most important issues surrounding the technology. That is the reason there have been many studies that tackle the security threats that might arise and create problems for organizations and companies all over the world.

            When workstations communicate with one another, both parties engage in a dialogue by sending multiple messages in each direction. During the communication, this dialogue must be secured against possible attackers who may want to read, alter, delete, add or even replay messages. Dialogue security is the primary reason for using codes and methods for securing messages.

            Encryption and cryptography can help maintain the confidentiality of the company's information and data by defeating eavesdroppers who aim to intercept messages en route over the Internet. They can also help maintain authentication and integrity inside the company's network. Authentication is maintained by making sure that the sender requesting certain information is not an impostor. Integrity is maintained by detecting any changes made to a message during its travel across the network. The key pair is derived from a very large number n, the product of two prime numbers chosen in accordance with special rules; these primes can each be 100 or more digits in length, yielding an n with roughly twice as many digits as the prime factors.
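
To make the key-pair construction concrete, here is a toy RSA computation in Python; the primes are deliberately tiny for readability, whereas real keys use primes of 100 or more digits as described above:

    # Toy RSA key pair (illustrative only -- never use primes this small).
    p, q = 61, 53
    n = p * q                  # public modulus: 3233
    phi = (p - 1) * (q - 1)    # Euler's totient of n: 3120
    e = 17                     # public exponent, chosen coprime with phi
    d = pow(e, -1, phi)        # private exponent, the modular inverse: 2753

    message = 65
    ciphertext = pow(message, e, n)          # encrypt with public key (e, n)
    assert pow(ciphertext, d, n) == message  # decrypt with private key (d, n)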

A. Assumptions

1. 100,000 up-to-date PCs, each with a 2.8 GHz Intel Pentium 4 processor without Hyper-Threading

2. Checking a single key takes the equivalent of 10 clock cycles.

One gigahertz (GHz) is equivalent to 1,000,000,000 cycles per second (2001).

DES, or the Data Encryption Standard, is a block cipher that uses a 56-bit key to encrypt data in blocks of 64 bits (SAS Publishing & SAS Institute, 2004, p. 7).

            Using the brute-force approach, it would be necessary to try all 2^56 possible keys, or 72,057,594,037,927,936 combinations.

            The 100, 000 PC’s with the equivalent of 2.8 GHz or 2, 800, 000, 000 processors will have a 280,000,000,000,000 cycles per minute.

            Since checking one key takes 10 cycles, this yields 28,000,000,000,000 keys checked per second.

            Therefore, 2^56 / (100,000 × 2.8 GHz ÷ 10 cycles per key) = 72,057,594,037,927,936 / 28,000,000,000,000 ≈ 2,573.4855 seconds, or about 0.715 hours, to try every key.
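
A short Python snippet to sanity-check the arithmetic, using exactly the figures from the assumptions above:

    # Worst-case brute-force time over the 56-bit DES key space.
    keys = 2 ** 56                        # 72,057,594,037,927,936 candidates
    pcs = 100_000                         # machines available
    cycles_per_second = 2_800_000_000     # 2.8 GHz per machine
    keys_per_second = pcs * cycles_per_second // 10   # 10 cycles per key

    seconds = keys / keys_per_second
    print(seconds, seconds / 3600)        # ~2573.49 s, ~0.715 hours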

B. Private and Public Key

            Private key cryptography uses a single key for both the encryption and decryption processes (1998). The sender uses the key to encrypt the plaintext and then sends the ciphertext to the receiver (1998). The receiver then applies the same key to decrypt the message and recover the plaintext. This is also known as symmetric encryption (1998). The primary advantage of private key cryptography is its speed, in contrast to public key cryptography; but it offers less security, because keys must be transmitted and there is a real chance that an adversary might discover a key during transmission. A private key alone cannot perform authentication in an open network (1998).

            The private key cryptography scheme is used by the Data Encryption Standard, or DES, which was designed by IBM during the 1970s and later adopted by the National Bureau of Standards, now the National Institute of Standards and Technology (1998). DES is a block cipher employing a 56-bit key and operating on 64-bit blocks. It consists of a complicated set of rules and permutations designed to permit fast hardware implementations and slow software implementations (1998).

            On the other hand, public key cryptography is considered the most important and major innovation in cryptography. The modern version of public key cryptography was first publicized by Whitfield Diffie and Martin Hellman in 1976. The most important advantages of public key cryptography are its security and convenience; another is that it provides a method that can be used for digital signatures. Its main disadvantage is its speed, and it is also vulnerable to possible impersonation.

            The most common, and the first, implementation of public key cryptography was RSA (1998). The name RSA was derived from the last names of the mathematicians who developed it: Rivest, Shamir and Adleman (1998). It is used by most software products for key exchange, encryption and digital signatures on small blocks of data. It uses a variable-size encryption block and a variable-size key (1998).

C. Computation and Verification of Digital Signature

Digital signatures are created with the use of cryptography, which transforms messages into code that cannot be understood by humans, much like encoding and decoding a message. Digital signatures use public key cryptography, which employs an algorithm with two different but mathematically related keys: one is used to create the digital signature, transforming the data into unintelligible form, while the other is used to verify the digital signature, returning the message to its original form.

            For example, let (Gen, Sign, Ver) be a digital signature scheme over a message space M. Correctness requires that for every key pair (pk, sk) produced by Gen and every message m in M, Ver(pk, m, Sign(sk, m)) = 1; that is, an honestly produced signature always verifies.

 

D. Computation and Verification of Message Authentication Code

            The authenticity of a message, information or data is even more important than confidentiality, because it ensures that the message has not been altered or manipulated by anyone and is indeed genuine and correct (2003). A message authentication code, or MAC, is used because of its features that ensure the authenticity of information and messages in the network (2003). The process of checking and verifying a message starts with the computation of the MAC for the message before it is sent to the recipient (2003). The recipient of the message also computes the MAC for the message and then compares it with the received MAC. If and only if the two values match has the message not been altered, modified or changed during its travel across the network (2003).

            In order to generate the MAC, a cryptographic algorithm together with a secret key is used. The two communicating parties, known as the sender and the receiver, hold the key, and they are the only ones who possess the unique code for future reference (2003). The MAC can be regarded as an error detection code, or EDC, that can also be used to verify that the associated secret key is known (2003).

            For example, suppose A and B share a common secret key K_AB. When A sends a message M to B, it calculates the message authentication code as a function of the message and the key: MAC_M = F(K_AB, M). The message together with the code is then transmitted to the intended recipient. As a security feature, the receiver performs the same computation on the received message, using the same key, to create a new MAC. If the MACs match, it is certain that the message was not altered (2006).
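
A minimal sketch of this exchange using Python's standard hmac module; the key and message values are hypothetical, and HMAC-SHA256 stands in for the unspecified function F:

    import hashlib
    import hmac

    key = b"K_AB-shared-secret"           # known only to A and B (assumption)
    message = b"transfer 100 units to account 42"

    # A computes MAC_M = F(K_AB, M) and transmits (message, mac) to B.
    mac = hmac.new(key, message, hashlib.sha256).digest()

    # B recomputes the MAC over the received message with the same key.
    expected = hmac.new(key, message, hashlib.sha256).digest()
    assert hmac.compare_digest(mac, expected)   # match => message unaltered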

E. Digital Signatures and Message Authentication Code or MAC

Digital Signatures

             The technology of the digital signature was first introduced by Whitfield Diffie and Martin Hellman in 1976; it allows the origin and content of digital information to be established in a compelling way (1998). Since its first use, it has been applied to many electronic businesses that require a worldwide public key infrastructure (1998).

            The sender hashes the message to be sent, creating a message digest. The message digest is then signed, that is, encrypted with the sender's private key, which is unique and known to the sender only; this produces the digital signature, which the sender appends to the message before transmission (2004).

            After transmission, the receiver recomputes the message digest in two ways: first, as the sender did, the receiver hashes the message to produce the message digest; second, the receiver decrypts the digital signature with the true party's public key. The second way produces the correct message digest if and only if the sender is the true party; otherwise the digests will not match and the message will be rejected (2004). Furthermore, the digital signature provides message integrity: if an attacker changes the message while it travels across the network, the message digests will not match and the message will be rejected (2004).
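
A compact sketch of this hash-sign-verify flow, using the third-party cryptography package (an assumption; the text does not prescribe a library):

    # pip install cryptography -- a sketch, not the document's prescribed tool
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"purchase order #1042"

    # Sender: hash the message and sign the digest with the private key.
    signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    # Receiver: verify with the sender's public key; tampering raises an error.
    try:
        private_key.public_key().verify(signature, message,
                                        padding.PKCS1v15(), hashes.SHA256())
        print("signature valid: message is authentic and unaltered")
    except InvalidSignature:
        print("reject: message or signature was altered")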

            According to (1998), the main role of the digital signature is to substitute for real-life signatures and permit users of the electronic world to have a mechanism for signing diverse documents. It also identifies the signer and unequivocally associates the signer with the document that has been signed. Furthermore, it helps provide non-repudiation of the sender and enables authenticated messages to be passed on in a transitive manner.

Message Authentication Code or MAC

            Message authentication is a tool used to verify the integrity of a message, assuring that the data received are exactly as they were when sent and that the claimed sender is valid (2006). A MAC is an algorithm that requires the use of a secret key: it takes a variable-length message and the secret key as input and produces an authentication code as output (2006). The receiver assures that the message has not been changed or modified by comparing the code it computes with the received code; and because only the two communicating workstations have access to the unique secret key, it is also certain that the message came from the real sender.

Figure MAC Communication Process Example

Figure 5 MAC Process

Difference of the Digital Signatures and the MAC

            Both digital signatures and MACs are tools that help maintain security, authenticity and confidentiality inside the network environment.

Most practical digital signature schemes depend on the hardness of number-theoretic problems, which causes a technical problem: they do not fit some applications, because they require large storage space and extensive computation (1998). That is why many applications still prefer to apply MAC algorithms to provide data integrity, data origin and authentication (1998). MACs, however, provide weaker guarantees than digital signatures, because they can only be used in a symmetric setting, between parties that trust each other, and they do not provide non-repudiation of origin. Another important issue with MACs is that they rely on shared symmetric keys, so key management is costly and harder to scale than with asymmetric keys (1998).

            The main difference between the two algorithms is that a MAC uses an authentication tag, also known as a checksum, obtained by applying an authentication scheme together with a secret key to the message. Because the same secret key is used for both computation and verification, the message can be verified only by the intended recipient; a digital signature, by contrast, is computed with a private key and verified with the matching public key, so anyone who holds the public key can verify it.

Task IV

Introduction

            The Transmission Control Protocol, or TCP, is a protocol that specializes in transport over IP, providing reliable, connection-oriented transport of data with guaranteed delivery (2003). It became the most commonly used of the two available transport protocols (2003). It has been defined and refined from the 1980s up to the present time; its strength, longevity and flexibility have carried it from 64 kbps links up to multi-Gbps core networks. It is also considered the most dominant transport-layer protocol used in the Internet from the 1970s to the present, because it provides a reliable end-to-end data transmission service that can be used by many applications such as the WWW, FTP, Telnet, e-mail and similar applications. One of the benefits of this protocol is that it enables computers on separate networks to communicate and share data and information with each other. On the other hand, because of its popularity and the resulting growth of this type of computer network, congestion has become one of the most frequently encountered problems. That is why congestion control algorithms have been developed and implemented; they are designed to reduce packet drops at the bottleneck router.

Three Elements of TCP Congestion Control Algorithm

            Congestion has been one of the most researched and studied aspects of TCP since the Internet experienced congestion for the first time, and since then many solutions have been proposed that can help relieve or even prevent congestion in the Internet. Routers have been widely used to prevent the dilemma by dropping packets when the router buffer reaches a predetermined size, known as the Random Early Detection, or RED, gateway; a router can also notify the sender by setting a certain flag in the packet, known as Explicit Congestion Notification, or ECN, so that the sender adjusts its congestion window, and thus its packet rate, according to indications coming from the network. There are three variants of the TCP congestion control algorithm, or TCP-CC: Tahoe, Reno and Vegas.

Figure 6 Implementation of the TCP-CC

Measure                               TCP-CC Tahoe   TCP-CC Reno   TCP-CC Vegas
RTT Variance Estimation                     ✓              ✓              ✓
Exponential RTO Backoff                     ✓              ✓              ✓
Karn's Algorithm                            ✓              ✓              ✓
Slow Start                                  ✓              ✓              ✓
Dynamic Window Sizing on Congestion         ✓              ✓              ✓
Fast Retransmit                             ✓              ✓              ✓
Fast Recovery                                              ✓              ✓
Modified Fast Recovery                                                    ✓

Source: (2006)

The Tahoe TCP-CC Algorithm

            The Tahoe TCP-CC algorithm was developed by Van Jacobson and Karels in October 1986, after a series of congestion collapses of the Internet during that time (1998). It was the first and original variant to implement the Slow Start, Congestion Avoidance and Fast Retransmit algorithms (2002).

TCP-CC Tahoe is founded on the principle of conservation of packets: if the connection is running at the available bandwidth capacity, a new packet is not injected into the network unless a packet is taken out as well. It operates by using acknowledgements to clock outgoing packets, because an acknowledgement means that a packet was taken off the wire by its receiver.

            Tahoe TCP-CC uses the principle of Additive Increase, Multiplicative Decrease for congestion avoidance. Whenever a packet loss is detected and taken as a sign of congestion, Tahoe saves half of the current window as a threshold, sets cwnd to one, and runs slow start until cwnd reaches the threshold value. After that it increments the window linearly until it encounters another packet loss; as a result, the window grows slowly as it approaches the capacity of the bandwidth (University of California, Berkeley, p. 1).
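
A minimal Python sketch of that window evolution, with cwnd measured in segments; this illustrates the rules described above rather than a complete TCP implementation:

    # Tahoe: slow start below ssthresh, linear growth above, reset on loss.
    def tahoe_on_ack(cwnd, ssthresh):
        if cwnd < ssthresh:
            return cwnd + 1          # slow start: roughly doubles each RTT
        return cwnd + 1.0 / cwnd     # congestion avoidance: +1 segment per RTT

    def tahoe_on_loss(cwnd):
        ssthresh = max(cwnd / 2, 2)  # save half the current window as threshold
        return 1.0, ssthresh         # new (cwnd, ssthresh): slow start again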

Figure 6 Tahoe without the Fast Retransmit

The Reno TCP-CC Algorithm

            TCP Reno introduced the fast recovery algorithm, which sets the congestion window to half of its current value and then resumes congestion avoidance from the halved congestion window (2002). The Reno TCP-CC algorithm provokes losses in order to estimate the available bandwidth in the network and was first implemented in 1990 (1998; 2004). Reno TCP-CC continues to increase its window size by one in every round trip while there are no packet losses in the network (1998). On the other hand, if it encounters a packet loss, it reduces its window size to one half of its present size, the scheme known as additive increase, multiplicative decrease (1998).

            The control mechanism of this variant of TCP-CC is designed so that the sender is informed, and the congestion window decreased, the first time any packet loss is detected, whether through a timeout or through the receipt of duplicate acknowledgements (2004).

            Although this algorithm has helped prevent congestion, it has encountered problems: its congestion avoidance mechanism causes periodic fluctuation in the window size due to the constant updates of the window size (1998). This fluctuation results in a much larger delay jitter and an unproductive, inefficient use of the available bandwidth, because of the many retransmissions of the same packets after the first occurrence of packet drops (1998). Reno TCP-CC also shows an undesirable bias against connections with longer delays (1998).
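
A simplified sketch of Reno's distinctive reaction to three duplicate acknowledgements, continuing the conventions of the Tahoe sketch above (real fast recovery also inflates the window while duplicates keep arriving):

    # Reno on triple duplicate ACK: halve the window and resume congestion
    # avoidance (fast recovery) instead of falling back to slow start.
    def reno_on_triple_dupack(cwnd):
        ssthresh = max(cwnd / 2, 2)
        return ssthresh, ssthresh    # new (cwnd, ssthresh)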

Figure 7 The Reno TCP-CC Algorithm

The Vegas TCP-CC Algorithm

            Due to the shortcomings of Reno TCP-CC, Vegas TCP-CC was developed. Unlike its two predecessors, it adopts a more sophisticated bandwidth estimation technique, using the difference between the expected flow rate and the actual flow rate to estimate the bandwidth available in the network (1998). The actual flow rate comes close to the expected flow rate when the network is not congested, and falls below the expected rate when the network is congested (1998). This difference is used to estimate the congestion level of the network and to update the window size accordingly (1998).

            Vegas TCP-CC is well known for its proactive approach to controlling congestion (2004). It detects congestion before it actually happens and takes corrective measures; for example, it reduces the congestion window before packets start to be lost. It detects incipient congestion by observing changes in the round-trip time. This is how it answers the disadvantage of TCP Reno in homogeneous environments.

            It also maintains a very small backlog of queued packets, which results in a short queuing delay. Once Vegas TCP-CC reaches its equilibrium state, both jitter and delay are very small (2004).

Source: (1998)

Figure 8 The Vegas TCP-CC Algorithm
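
A sketch of the expected-versus-actual rate comparison at the heart of Vegas; the alpha and beta thresholds, measured in segments, are conventional values and an assumption here:

    # Vegas per-RTT adjustment: estimate the backlog queued in the network
    # and back off before packets are actually lost.
    def vegas_adjust(cwnd, base_rtt, rtt, alpha=1, beta=3):
        expected = cwnd / base_rtt                # rate on an empty path
        actual = cwnd / rtt                       # rate observed this RTT
        backlog = (expected - actual) * base_rtt  # segments queued in network
        if backlog < alpha:
            return cwnd + 1                       # under-using the path
        if backlog > beta:
            return cwnd - 1                       # queue building: slow down
        return cwnd                               # within the target band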

Task V

Introduction

            The Internet has been one of the most used media of communication and is very important in the everyday life of most people in the world. It is used for study, for work and for fun and entertainment. There is no doubt that the Internet is already a necessity in the lifestyle of the human race. It also helps shrink the world and make it a much smaller place to live. The Internet lets people travel from one place to another in just one click of the mouse, and it helps many businesses sell their products and offer their services in a much larger market with countless target customers.

            Due to the popularity and wide use of the technology, speed and efficiency are essential on the World Wide Web. Most users get annoyed at slow web pages and sites, while network administrators want to make the most of their available bandwidth. There are always billions of people connected to the Internet, 24 hours a day, 7 days a week, 365 days a year. For this reason there is a real possibility that a website will bog down or a server will fail; in that case, a web cache, or HTTP proxy, is a great help. A properly designed and implemented web cache can reduce network traffic as well as improve access times for popular web sites.

A. What is Cache?

            The word cache is a French word that literally means to store (2001). In the computer and information technology field, a cache is storage for recently used and retrieved computer information that may be used again in the future (2001). Because that information may or may not be used again, a cache is only beneficial when the cost of storing the information is less than the cost of retrieving or computing it again (2001).

            The concept of caching can be found in every aspect of computing, especially networking. Caching works well because it exploits the principle of locality of reference, which groups locality into two types: temporal and spatial (2001).

What is Web Cache?

            A web cache is a computer system in a network that keeps copies of the most recently requested pages, stored in memory or on disk to enable speedy retrieval. It speeds up access to the Internet by storing already accessed and requested web pages, so a page can be retrieved directly and locally from the cache rather than from the Internet.
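
A minimal in-memory sketch of that store-then-serve-locally behavior; the fixed time-to-live and the dictionary store are simplifying assumptions, and real caches honor HTTP cache-control headers:

    import time
    import urllib.request

    _cache = {}   # url -> (body, expiry timestamp)

    def fetch(url, ttl=300.0):
        """Serve from the local cache while fresh; otherwise hit the origin."""
        body, expires = _cache.get(url, (b"", 0.0))
        if time.time() < expires:
            return body                            # cache hit: no network trip
        with urllib.request.urlopen(url) as resp:  # cache miss: origin server
            body = resp.read()
        _cache[url] = (body, time.time() + ttl)
        return body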

Function of http proxy or Web Cache

            Proxies have been defined as intermediary parties in web transactions (2001). A proxy is an application located somewhere between the client and the origin server. It is used together with the firewall to provide security, allowing and recording requests from the internal network to the outside Internet.

            An HTTP proxy acts as both a client and a server: it acts like a server toward the client and like a client toward the server. It receives and processes requests from clients and then forwards those requests to the server. It is also considered an application-layer gateway, because it lives at the application layer of the OSI reference model together with the client and the server. It is used for logging, access control, filtering, virus checking, caching and translation. With the help of an HTTP proxy, a firewall is able to filter and block Java, JavaScript, VBScript and ActiveX content that is malicious or can affect the overall system of a computer (2001).

            The main function of the web cache in managing data traffic on the Internet is that it copies web page components from the website's origin server and then stores them on cache servers, normally within service provider points of presence, or POPs (2001). Because the caches are nearer to the users, delays are reduced and response times improve.

B. Reasons for the Increase of HTTP Proxy Deployment in the Internet

            Web caches, or HTTP proxies, are now widely used by businesses and organizations in different parts of the globe. By now, the use of a web cache or HTTP proxy is a must for any company or organization that uses the Internet as one of its tools for improving performance.

The Growth of the Network Traffic

            As the demand for Internet usage increases, and the number of websites offering different information, data, products and services continues to grow, the need for a network infrastructure that manages the network traffic increases as well. This is the primary reason web caches are becoming popular for organizations and companies around the globe. They help many organizations and companies reduce bandwidth consumption and server load on their networks. They also reduce latency by satisfying client requests directly from a cache that is closer to the client instead of the origin server, which lessens the time spent getting the representation and displaying it (2007). This makes the web more responsive (2007). Web caching has made the Internet faster than ever.

Speed Up Interactive Applications

            Poor website performance is a big disadvantage and can badly affect the operation and performance of companies implementing e-business. Cahners/DataMonitor found in one of their studies that slow web pages cause customers to abort 78 percent of their web transactions. Another reason most companies take advantage of web caching is the seven-second rule, which says that most users will not wait more than seven seconds for a single web page to download before switching to another website. The rule shows that a user's interest in a particular site lapses after seven seconds, and that is why companies, especially those engaged in e-business, are deploying their own web caches.

            A web cache can improve the total response time of a real-world interactive financial services application by a minimum of 5 percent up to a maximum of 15 percent, and this can be enhanced to 15 to 30 percent with more sophisticated, slightly modified software (2001). A web cache thus improves the performance of interactive web applications, and these improvements, together with the chance to charge for them, are expected to benefit both the content generators and the end users of the improved performance.

Control Web Accessibility

            An HTTP proxy can help manage and control the accessibility of websites and web pages that the company or organization has defined as unnecessary or prohibited (2000). It helps the firewall prevent access to sites that may contain pornography, stock prices or even sports scores (2000). This is a great help for companies in maintaining professionalism in the workplace by preventing access to topics and information outside the business.

C. Database-driven Dynamic Web Content

            One of the solutions that can increase speed and lessen latency on the Internet is the use of a database. That is why many companies and organizations are switching from static websites, those built with plain HTML, to new and improved dynamic, database-driven websites (2004).

            Companies store important information and data regarding their products or services, their customers and the other entities involved in their supply chains in the form of databases, which enables them to handle enormous quantities of information with consistency and ease (2004).

            A database-driven, or dynamic, website can help the web cache improve the speed of transactions made on the net, because the proxy processes requests for dynamic contents, which are uncacheable, using the same procedure used for static contents. Otherwise, the time and space used in processing dynamic contents is pure overhead, because those contents will not be reused by other clients.

            A database-driven dynamic website also helps manage traffic and helps the website perform faster. With database-driven e-commerce websites and the deployment of network-wide caches, requests can be served remotely rather than from the origin website. With a database-driven website, it is easy for the cache that is closer to the users to serve their demand, and this helps reduce the overall data traffic of the system (2001).

            The main contribution of the database-driven dynamic website to the effectiveness of HTTP proxies and web caches is that it enables easy retrieval and storage of important information and data, and thus helps decrease, or maintain at a proper level, the traffic in the network.