Information technology, or simply IT, is widely regarded as an emergent field through which the value of processes is captured and enhanced. The design features of the Internet, in particular its robust, decentralised and open communication, have been transformed into recognisably social features of the system, and these are particularly important for the way online services function for individuals and for society as a whole. For instance, the decentralised nature of the communication system allows users to gain access from a variety of locations using a variety of devices, placing greater responsibility on the user. It is in this multi-level interactivity that virtual storage and file sharing strategies become necessities for any entity wishing to interact with its users more effectively and so make sense of the information it can provide to them.

            Connectivity, in terms of both speed and capacity, is essentially enabled by networks of computers and by the algorithms that move information around so that it is close to users whenever it is needed. One requirement of connectivity is sufficient bandwidth for the purpose of building converged devices with high storage capacity (Van Horn, 2006). Bandwidth control is therefore necessary to ensure good quality of service from Internet service providers (ISPs) and other applications. However, connectivity threats can stem from the ISPs themselves, from the connecting network, or from the users. Stewart, Tittle and Chapple (2005, p. 226) identified these threats as illegitimate access to resources and manipulation of the available free space.
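
            To make the idea of application-level bandwidth control concrete, the sketch below implements a simple token-bucket limiter in Python. The class name, the rates and the upload loop are illustrative assumptions and are not taken from the cited sources; it is a minimal sketch of the technique rather than a prescribed implementation.

import time

class TokenBucket:
    """Minimal token-bucket limiter: a transfer proceeds only while byte tokens remain."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec        # refill rate, bytes per second
        self.capacity = burst_bytes           # maximum burst size, bytes
        self.tokens = burst_bytes
        self.last_refill = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now

    def consume(self, nbytes):
        """Block until nbytes of budget are available, then spend them."""
        while True:
            self._refill()
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# Example: cap an upload loop at roughly 1 MB/s with a 256 KB burst.
bucket = TokenBucket(rate_bytes_per_sec=1_000_000, burst_bytes=256_000)
for chunk in (b"x" * 64_000 for _ in range(4)):
    bucket.consume(len(chunk))
    # the actual send(chunk) call would go here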

            Large data transfers over the Internet usually incur long latency and become a bottleneck among users, so there is a need for high-performing data transfer procedures. Sharing and collaboration among users make effective data sharing critical, and single sign-on mechanisms are also required to uphold security. With all of this, users should be provided with a platform on which they can work collaboratively by forming user groups and specifying access permissions, while also being offered effectively unlimited space and few compatibility problems across varying disks and drives (Cerin and Li, 2007, p. 28).
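
            As a hypothetical illustration of the user groups and access permissions mentioned above, the following sketch models a shared item whose permissions are granted per group. The data model, the group names and the can_access helper are assumptions introduced purely for illustration.

from dataclasses import dataclass, field

@dataclass
class Share:
    owner: str
    permissions: dict = field(default_factory=dict)   # group name -> set of allowed actions

# Hypothetical directory of user groups.
groups = {"project-a": {"alice", "bob"}, "auditors": {"carol"}}

def can_access(share, user, action):
    """A user may act on a share if they own it or any of their groups grants the action."""
    if user == share.owner:
        return True
    return any(action in actions and user in groups.get(g, set())
               for g, actions in share.permissions.items())

report = Share(owner="alice",
               permissions={"project-a": {"read", "write"}, "auditors": {"read"}})
assert can_access(report, "bob", "write")        # group member with write access
assert not can_access(report, "carol", "write")  # auditors may only read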

            Estimating network solutions is difficult because factors such as size, scalability, location and growth must also be considered; when weighing data volume and frequency, the growth rate and growth pattern of those data should be taken into account as well. Madison (2004, p. 1527), on the other hand, noted that the Internet has brought, among other things, file sharing systems also known as peer-to-peer (P2P) networks. End-user-oriented sharing programs enable a participant in a network of digital computers to transmit content directly and horizontally, that is, from peer computer to peer computer, rather than transmitting content hierarchically to, or retrieving content from, a higher-level server or host computer.
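
            The horizontal transfer described above can be sketched in a few lines of Python: each peer runs the same program and can both serve a file and fetch one directly from another peer, with no higher-level server in between. The port number, file names and one-file-per-connection protocol are assumptions made for illustration only.

import socket, threading

def serve(path, port=9000):
    """Serve the contents of one file to any peer that connects."""
    srv = socket.socket()
    srv.bind(("0.0.0.0", port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        with conn, open(path, "rb") as f:
            conn.sendall(f.read())

def fetch(host, port, dest):
    """Fetch a file directly from another peer, not from a central host."""
    with socket.create_connection((host, port)) as s, open(dest, "wb") as out:
        while chunk := s.recv(65536):
            out.write(chunk)

# A peer shares a file in the background while remaining free to fetch from others.
threading.Thread(target=serve, args=("shared.dat",), daemon=True).start()
# fetch("198.51.100.7", 9000, "downloaded.dat")   # pull straight from another peer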

            Jia and Zhou (pp. 464-465) maintained that P2P sharing is a product of P2P file networking technology and involves two systems sharing services or files between themselves. Data transfer is quicker when the two users are geographically close to each other. P2P file sharing nonetheless allows users to share whatever work they have created with a large user group over the Internet. Some P2P networks, however, have nodes that hold no content of their own but provide regionally centralised directory services to improve the routing of information requests. Each of these nodes serves a portion of the network, and together they work in a cooperative manner to cater for the whole network (Shen and Barthes, 2005, p. 417).
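
            A regionally centralised directory node of this kind can be pictured as an index that stores no content itself, only a record of which peers hold which files. The in-memory index and the publish and lookup operations below are illustrative assumptions rather than a description of any specific system.

from collections import defaultdict

class DirectoryNode:
    """Holds no content; only maps file names to the peers that share them."""

    def __init__(self, region):
        self.region = region
        self.index = defaultdict(set)     # file name -> set of peer addresses

    def publish(self, peer_addr, filenames):
        """A peer in this region announces the files it is willing to share."""
        for name in filenames:
            self.index[name].add(peer_addr)

    def lookup(self, filename):
        """Return peers holding the file; the transfer itself is then peer to peer."""
        return sorted(self.index.get(filename, set()))

eu_directory = DirectoryNode("eu-west")
eu_directory.publish("10.0.0.5:9000", ["thesis.pdf", "dataset.csv"])
eu_directory.publish("10.0.0.9:9000", ["dataset.csv"])
print(eu_directory.lookup("dataset.csv"))    # ['10.0.0.5:9000', '10.0.0.9:9000']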

            In a pure P2P system, nevertheless, there is no central server or router; all nodes are peers, meaning each node can operate as a router, client or server depending on the query (Jia and Zhou, p. 465). According to Druschel, Kaashoek and Rowstron (2002, p. 85), individual computers communicate directly with each other and share information and resources without using dedicated servers. At the application level, a pure P2P architecture builds a virtual network with its own routing mechanisms. Because of this distributed design, pure P2P systems avoid relying on a central server for resource discovery.
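
            The application-level routing of a pure P2P overlay can be illustrated with a small query-flooding sketch in the spirit of early unstructured networks. The node behaviour, the time-to-live value and the query identifiers below are assumptions made for illustration, not a description of a specific published protocol.

class Node:
    """Each peer is client, server and router at once, depending on the query."""

    def __init__(self, name, files=()):
        self.name = name
        self.files = set(files)
        self.neighbours = []     # other Node objects in the overlay
        self.seen = set()        # query ids already handled, to stop loops

    def query(self, query_id, filename, ttl=3):
        if query_id in self.seen or ttl == 0:
            return []
        self.seen.add(query_id)
        hits = [self.name] if filename in self.files else []     # server role
        for peer in self.neighbours:                             # router role
            hits += peer.query(query_id, filename, ttl - 1)
        return hits

a, b, c = Node("A"), Node("B"), Node("C", files={"song.ogg"})
a.neighbours, b.neighbours, c.neighbours = [b], [a, c], [b]
print(a.query("q1", "song.ogg"))     # ['C'] - located without any central server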

            Because specific issues such as bandwidth, latency and reliability must be considered, there can be limitations that hinder the implementation of sharing applications. Network-based sharing, for instance, involves complex interoperability matrices and is limited by vendor support. Because it requires specific host-based software, fast metadata updates are also difficult to implement (Poelker and Nikitin, 2008, p. 389). One weakness of a P2P system lies in its source peer selection protocol, which is critical to the performance of the sharing system itself. An effective source peer selection protocol can dramatically accelerate download speed and minimise the consumption of network resources.
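
            One way to picture source peer selection is a simple ranking of candidate peers by measured latency and available bandwidth, as in the sketch below. The scoring weights and the peer records are assumptions for illustration; real selection protocols weigh many more factors.

def select_sources(peers, want=3):
    """Return the most promising source peers: low round-trip time, high bandwidth."""
    def score(p):
        # Lower score is better: latency penalised, bandwidth rewarded (assumed weights).
        return p["rtt_ms"] - 0.01 * p["bandwidth_kbps"]
    return sorted(peers, key=score)[:want]

candidates = [
    {"addr": "peer-a", "rtt_ms": 40,  "bandwidth_kbps": 8000},
    {"addr": "peer-b", "rtt_ms": 220, "bandwidth_kbps": 20000},
    {"addr": "peer-c", "rtt_ms": 15,  "bandwidth_kbps": 2000},
]
print([p["addr"] for p in select_sources(candidates, want=2)])   # ['peer-a', 'peer-c']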

            Therefore, without host support of the kind that ISPs can provide to P2P systems, there will be disadvantages in functionality and performance that relate directly to speed, scalability, discovery, and higher levels of connectivity and interactivity. This paper proposes a solution for cooperation between ISPs and P2P networks, with the purpose of creating a synergistic relationship between the two. Cooperation between ISPs and P2P users has seemed problematic and can pose challenges; however, there may be advantages and benefits that outweigh these problems and challenges. How these two entities can cooperate to improve their performance is the core of this investigation and experimentation.
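
            Purely as an illustrative assumption, and not the solution developed in this paper, one simple form such cooperation could take is an ISP-supplied locality hint that a P2P client consults when ordering candidate peers, so that traffic stays inside the ISP's network where possible.

def isp_locality(peer_addr):
    """Stand-in for an ISP-provided oracle; here just a hard-coded prefix check."""
    return "local" if peer_addr.startswith("10.1.") else "remote"

def order_peers(candidates):
    """Peers inside the local ISP come first; the rest keep their original order."""
    return sorted(candidates, key=lambda addr: isp_locality(addr) != "local")

swarm = ["203.0.113.40", "10.1.7.22", "198.51.100.9", "10.1.3.5"]
print(order_peers(swarm))    # local-ISP peers are preferred as download sources

            In practice the locality information would come from the ISP itself rather than from a hard-coded prefix check, which is precisely why cooperation between the two sides is needed.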

