Exercise 1: Explain in general terms, what concurrency (or concurrent programming) is and its purpose with respect to modern computer architectures. Be sure to identify any inherent problems associated with this kind of programming.

 

Concurrent programming is generally understood as a programming technique in which several operations are made available for execution at overlapping times, whether on a single computer or across a number of connected systems; in the latter case the term distributed computing is commonly used. Multiprocessor hardware, for example, achieves improved performance by taking advantage of concurrent programming. Closely related is parallel processing, in which a single task is divided into a number of subtasks that can be computed relatively independently and then combined into a single logical result. This style of programming works well for tasks that decompose easily into independent pieces, such as numerical problems like factorization. One way to achieve it is through distributed computation, a form of information processing in which the work is carried out by separate computers connected through a communication network (Ben-Ari, 2006). More broadly, concurrent computing treats a program as a collection of interacting computational processes, as in parallel programs. A concurrent program can be executed sequentially on a single processor by interleaving the execution steps of each computational process, or in parallel by assigning each computational process to one of a set of processors that may be distributed across a network. The central challenges of concurrent programming are ensuring the correct sequencing of interactions between the different computational processes and coordinating access to resources that are shared among them (Ben-Ari, 2006). A number of different methods can be used to implement concurrent programs, such as running each computational process as a separate operating-system process, or implementing the computational processes as a set of threads within a single operating-system process.

Another purpose concerns how communication between the concurrent components of a system is handled by the programmer. In shared-memory communication, components communicate by changing the contents of shared memory locations, as exemplified by Java; this usually requires some form of locking to coordinate between threads. In message-passing communication, components exchange messages; the exchange may be asynchronous, or it may use a rendezvous style in which the sender blocks until the message is received. Managing access to shared resources also serves correctness: it prevents concurrent processes from interfering with one another. Consider, for example, the following algorithm for making withdrawals from a checking account represented by a shared balance (Ben-Ari, 2006):

bool withdraw( int withdrawal )
{
    if ( balance >= withdrawal )
    {
        balance -= withdrawal;
        return true;
    }
    return false;
}
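
As written, two threads calling withdraw at the same time can both pass the balance check before either one subtracts, leaving the account overdrawn. A minimal Java sketch of one common remedy, using the intrinsic lock of the account object, follows; the Account class name and its field are illustrative, not taken from Ben-Ari (2006).

// A shared bank account guarded by Java's intrinsic locking.
// Class and field names are illustrative (not from the source text).
class Account {
    private int balance;

    Account(int initialBalance) {
        this.balance = initialBalance;
    }

    // Only one thread at a time may execute this method on a given
    // Account instance, so the check and the update happen atomically.
    synchronized boolean withdraw(int withdrawal) {
        if (balance >= withdrawal) {
            balance -= withdrawal;
            return true;
        }
        return false;
    }

    synchronized int getBalance() {
        return balance;
    }
}

Declaring both methods synchronized makes the check-then-act sequence atomic with respect to other synchronized calls on the same object, which is exactly the arbitration described next.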

Because concurrent systems rely on shared resources, concurrent programs generally require some form of arbiter somewhere in the implementation to mediate access to those resources. The benefits are substantial. Application throughput can increase, because parallel execution of a concurrent program allows more tasks to be completed in a given time period. Responsiveness improves for input/output-intensive applications (Ben-Ari, 2006): instead of idly waiting for an input or output operation to complete, a concurrent program can use that time for another task. Concurrency can also lead to a more appropriate program structure, since some problems and problem domains are naturally represented as concurrent tasks or processes. Programming languages support this in different ways: Java, for instance, provides language constructs for concurrency such as threads together with a shared-memory model, with locking provided and implemented on top of it. Many other concurrent programming languages have been developed as research languages rather than as languages for production use, such as Erlang and Limbo, in which concurrency plays a central role in every action.

Ben-Ari M (2006) Principles of Concurrent and Distributed Programming (2nd Ed.). Addison-Wesley.

Historically, much programming has been concerned with sequential programs that execute a single stream of operations; even GUI programming often avoids concurrent execution, for example by having the controller finish setting up the model and view before handling events. Concurrent computation makes programming more complex, so it is worth identifying the problems concurrency poses and how to manage them. In a concurrent program, a number of streams of operations may execute concurrently, and each stream executes much as a sequential program would, except that the streams can communicate with and interfere with one another. Each such sequence of operations is called a thread, and sequential programs are accordingly called single-threaded programs. The operations within each stream are strictly ordered, but the interleaving of operations from a collection of streams is undetermined and depends on the vagaries of a particular execution of the program (Cartwright, 2000, http://www.cs.rice.edu/~cork/book/node96.html). The Java language specification makes no fairness guarantee, although several Java Virtual Machines do guarantee fairness. Threads can communicate with one another in various ways; the Java programming language relies mainly on shared variables to support communication between processes, but it also supports an explicit signaling mechanism.

In general, concurrent programs are hard to write because the array of possible interleavings of operations among threads means that program execution is non-deterministic, and program bugs may therefore be difficult to reproduce (Cartwright, 2000). The complexity introduced by multiple threads and their potential interactions makes such programs much harder to analyze and reason about, so many concurrent programs, including GUI applications, follow stylized design patterns that control the underlying complexity. The best-known mechanism for preventing interference in concurrent programs that may be executed on multiprocessors is locking data objects (Cartwright, 2000). When a data object is locked by a thread, no other thread can access or modify it until the locking thread releases it; other threads continue executing until they attempt to access the locked object. Java relies on object locking to prevent interference. An object can be locked for the duration of a method invocation simply by prefixing the method declaration with the keyword synchronized; for instance, a synchronized increment method can be defined as synchronized void inc() { ct++; } (Cartwright, 2000). Static methods can also be declared synchronized, which locks the class object rather than an instance object. An unusual feature of Java's locking mechanism is that locking an object only inhibits the execution of operations that are declared synchronized: methods that are not declared synchronized will execute even when the object is locked. There is a reasonable argument for this capability, since it supports classes that partition their operations into two groups, those that require synchronization and those that do not, but it also invites subtle synchronization bugs if the synchronized modifier is inadvertently omitted from one method definition.
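
To make the inline inc() fragment concrete, it can be expanded into a small, self-contained sketch; the names Counter, ct and CounterDemo are illustrative assumptions, and the deliberately unsynchronized get() method shows the kind of omission the preceding sentence warns about.

// Illustrative counter demonstrating Java's synchronized methods.
// The names Counter, ct and CounterDemo are assumptions, not from the sources.
class Counter {
    private int ct = 0;

    // Locks the Counter instance for the duration of the call.
    synchronized void inc() {
        ct++;
    }

    // Deliberately NOT synchronized: it can read ct even while another
    // thread holds the lock in inc(), which is the subtle pitfall the
    // text warns about when the modifier is omitted.
    int get() {
        return ct;
    }
}

public class CounterDemo {
    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                c.inc();
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // With synchronized inc() this reliably prints 200000; without the
        // modifier, lost updates would make the total unpredictable.
        System.out.println(c.get());
    }
}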
Indeed, modern event-handling models such as those in Java and DrScheme use a single event handler that executes events in succession; this protocol saves the overhead of synchronization and eliminates potential deadlock (Cartwright, 2000). Concurrent programming in Java has also received thorough book-length treatment covering multi-threaded programming for both stand-alone and distributed environments; such texts are designed mostly for students in concurrent or parallel programming classes, but they are also excellent references for practicing professionals developing multi-threaded programs or applets (Olsson and Keen, 2004; Bar-David and Taubenfeld, 2003). Hartley (1998), for example, first provides a complete explanation of the features of Java necessary to write concurrent programs, including topics such as exception handling, interfaces, and packages. He then gives the reader a solid background for writing multi-threaded programs and presents the problems introduced when writing concurrent programs: race conditions, mutual exclusion, and deadlock. Hartley also provides several software solutions that do not require the use of common process and thread mechanisms. Once the groundwork is laid, Hartley takes a different approach from most Java references: rather than presenting how Java handles mutual exclusion with the synchronized keyword, he first looks at semaphore-based solutions to standard concurrency problems such as the bounded buffer, readers-writers, and the dining philosophers (a sketch in this style, using the standard library's Semaphore class, is given below). He uses the same approach to develop Java classes for monitors and message passing. This way of introducing concurrency allows readers to understand both how Java threads are synchronized and how the basic synchronization mechanism can be used to construct more abstract tools such as semaphores. If there is a shortcoming in the text, it is the limited coverage of remote method invocation (RMI), although there is a section covering it (Olsson and Keen, 2004; Bar-David and Taubenfeld, 2003); this is understandable, as RMI was a fairly recent arrival in the Java community, and the classes Hartley provides could easily be implemented over RMI rather than sockets to handle communication. As Java becomes more dominant on the server side of multi-tier applications, writing thread-safe concurrent applications becomes even more important. Concurrent programming is therefore a strong step toward teaching students and professionals effective skills today.
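
As a sketch of the semaphore-first style Hartley advocates, here is the classic bounded buffer written with the standard java.util.concurrent.Semaphore class (a later library facility, not Hartley's own semaphore classes); the BoundedBuffer name is illustrative.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;

// A bounded buffer in the classic semaphore style: 'spaces' counts free
// slots, 'items' counts filled slots, and 'mutex' guards the queue itself.
// This is an illustrative sketch, not code from Hartley (1998).
class BoundedBuffer<T> {
    private final Deque<T> queue = new ArrayDeque<>();
    private final Semaphore spaces;
    private final Semaphore items = new Semaphore(0);
    private final Semaphore mutex = new Semaphore(1);

    BoundedBuffer(int capacity) {
        this.spaces = new Semaphore(capacity);
    }

    void put(T value) throws InterruptedException {
        spaces.acquire();      // wait for a free slot
        mutex.acquire();
        try {
            queue.addLast(value);
        } finally {
            mutex.release();
        }
        items.release();       // signal that an item is available
    }

    T take() throws InterruptedException {
        items.acquire();       // wait for an available item
        mutex.acquire();
        T value;
        try {
            value = queue.removeFirst();
        } finally {
            mutex.release();
        }
        spaces.release();      // signal that a slot has been freed
        return value;
    }
}

Two counting semaphores track free and filled slots while a binary semaphore protects the queue itself, the structure used in classic semaphore-based treatments of the problem.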

 

 

Exercise 2: Using the essay developed in Exercise 1 as a reference point, explain how the different aspects of concurrent programming may be implemented in JAVA. In particular, you should provide examples of those aspects together with a detailed explanation of each.

 

Parallel programming and distributed programming are two basic approaches to achieving concurrency in a piece of software (Wittwer, 2006); they are different programming paradigms that sometimes intersect. In the past, most programming was sequential, but today's demands on speed are far greater than in past decades and the problems being solved on computers are enormous. A parallel computer can execute two or more jobs within the same period of time (Wittwer, 2006). Events are said to be concurrent if they occur within the same time interval, and tasks executing over a time interval are said to execute concurrently: tasks that exist at the same time and run within the same time period are concurrent. Concurrent tasks can execute in a single-processor or multiprocessing environment (El-Rewini and Abd-El-Barr, 2005). In a single-processor environment, concurrent tasks exist at the same time and execute within the same time period by context switching; in a multiprocessor environment, if enough processors are free, concurrent tasks may execute at the same instant over the same time period. The determining factor for what makes an acceptable time period for concurrency is relative to the application. Concurrency techniques (Goetz, Peierls, Bloch, Bowbeer, Holmes and Lea, 2006; Hartley, 1998) are used to allow a computer program to do more work over the same time period or time interval: rather than designing the program to do one task at a time, the program is broken down so that some of its tasks can be executed concurrently. In some situations, doing more work over the same time period is not the goal; rather, the goal is to simplify the programming solution, because it sometimes makes more sense to think of the solution to a problem as a set of concurrently executed tasks. The same techniques are used in parallel computer architectures.

Java itself is a computer language (Norton and Stanek, 1996) that is secure, portable, object-oriented, multithreaded (Goetz, Peierls, Bloch, Bowbeer, Holmes and Lea, 2006; Kann, 2004; Hartley, 1998), interpreted, byte-coded and garbage-collected, with a strongly typed exception-handling mechanism, and it is well suited to writing distributed programs (Kann, 2004). As an object-oriented programming language it adds features such as overriding and interfaces. Java supports multithreaded programming, which allows a program to do many things simultaneously within the same time interval (a minimal thread-creation sketch is given below, after this overview). Java enables the creation of cross-platform programs by compiling to an intermediate representation called Java byte code, a highly optimized set of instructions designed to be executed by the Java run-time system, the Java Virtual Machine (Goetz, Peierls, Bloch, Bowbeer, Holmes and Lea, 2006; Kann, 2004; Hartley, 1998). Java is intended for the distributed environment of the Internet: it provides a technology called RMI that brings a high level of abstraction to client programming, it handles de-allocation automatically through garbage collection, and the platform supplies a set of tools for creating Java programs.

Aspect-oriented programming (AOP) (Bechini and Tai, 1998; Bechini, Cutajar and Prete, 1998) promises the modularization of so-called crosscutting functionalities in large applications. At present, however, nearly every approach to AOP provides means only for describing sequential aspects that are applied to a sequential base program; in particular, there is no properly defined concurrent approach to AOP (Corbett, 1998; Lea, 1997), with the result that coordination issues between aspects and base programs, as well as between aspects, cannot be precisely investigated. Concurrent event-based AOP (CEAOP) (Bechini and Tai, 1998; Tai and Carver, 1996) addresses this issue. Aspect definitions are translated into concurrent specifications expressed as Finite Sequential Processes (FSP), enabling the use of the Labeled Transition System Analyzer (LTSA) for formal property verification; concurrent aspects can be composed using a set of common composition operators; and a Java prototype implementation generates coordination-specific code from the FSP model defining the concurrent aspect-oriented application. Concretely, AOP (Bechini and Tai, 1998; Tai, 1997) promises means for the modularization of crosscutting functionalities that cannot reasonably be modularized using traditional programming constructs such as objects and components.
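
Before returning to aspects, here is the minimal thread-creation sketch promised above, illustrating Java's basic multithreading support; class names such as Worker and ThreadDemo are illustrative assumptions rather than anything taken from the sources.

// Minimal illustration of creating and joining threads in Java.
// Names such as Worker and ThreadDemo are illustrative assumptions.
public class ThreadDemo {

    static class Worker implements Runnable {
        private final String name;

        Worker(String name) {
            this.name = name;
        }

        @Override
        public void run() {
            // Each worker runs in its own thread of control.
            for (int i = 0; i < 3; i++) {
                System.out.println(name + " step " + i);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(new Worker("worker-A"));
        Thread b = new Thread(new Worker("worker-B"));

        a.start();   // both threads now run concurrently with main
        b.start();

        a.join();    // wait for both to finish before exiting
        b.join();
        System.out.println("both workers finished");
    }
}

The interleaving of the two workers' output is not fixed from run to run, which is precisely the non-determinism discussed in Exercise 1.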
Returning to aspect-oriented programming: the proper modularization of such crosscutting concerns constitutes a major problem in the development of large-scale applications. Crosscutting concerns occur in many concurrent applications at various levels, for instance request handling in web servers, event handling in graphical user interfaces, monitoring and debugging, and coordination. The implemented prototype of CEAOP for Java is a model-driven aspect weaver: it takes a base program written in Java, complemented with a description of the events of interest and a composition of aspects, and produces a concurrent program that is correct with respect to the model by construction. Each aspect of the overall composition, implemented as an active object, can be seen as a labeled transition system (LTS) that progresses concurrently with the other LTSs. Progress is controlled by a monitor, in the general sense of the term, actually implemented as a passive Java object using the monitor associated with that object. Several approaches have also been proposed for analyzing, testing and debugging concurrent programs; since Java is becoming a major language for writing concurrent programs, static and dynamic analysis of concurrent Java programs are important research topics. Some issues in the static analysis of concurrent Java programs are discussed by Corbett (1998), which also informs testing and debugging tools for concurrent Java programs. By modifying the Java classes that support concurrent programming, application programs need only minor modifications under this novel approach to managing the threads needed for testing and debugging concurrent Java programs.

Concerning the coordination of concurrent aspects, several requirements emerge: coordination should be defined in terms of sequences of execution events that trigger the actions to be coordinated; coordination not only of complete actions but also of parts of actions should be supported, to allow flexible coordination policies; and different coordination strategies should be supported when multiple pieces of advice apply at one execution point, for instance composing independent advice concurrently while enabling prioritization through synchronization where necessary. The aim is a compositional model of concurrent AOP that supports coordination through suitable aspect composition operators applied to arbitrary aspects. In addition, because of the inherent difficulty of developing correct concurrent programs, to which aspects may even contribute, such a model for concurrent aspects should support the use of automatic verification techniques such as model checking. Finally, the model should be intuitive and enable practical implementations, in particular supporting the generation of implementations from the models used to define concurrent aspects (Netzer, 1993; Schwartz and Mattern, 1994). In CEAOP, aspects and aspect weaving are formally defined by an automatic transformation into the calculus of Finite Sequential Processes (FSP) (Tsai and Yang, 1995); coordination is supported by a set of general composition operators; aspects and aspect-oriented programs can be manipulated, simulated and automatically model-checked using the Labeled Transition System Analyzer (LTSA) (Tsai and Yang, 1995); and the implementation of CEAOP in Java is realized by generating coordination-specific code from the FSP model defining the concurrent aspect-oriented application. An aspect is a modular unit whose purpose is to modify the execution of a program, called the base program, by inserting behavior and possibly skipping some of its steps; the piece of code describing the modification is called an advice. One extreme enforces strong synchronization by weaving sequential aspects into the concurrent program (Netzer, 1993; Schwartz and Mattern, 1994); another option is not to synchronize on some synchronization events, so as to allow more concurrency between the program and the aspect. The aspect and the base program may, for instance, not synchronize on the update event at the end of the advice; this is expressed in FSP simply by removing that event from the base program and the aspect definitions, using the hiding operator, before composing them (Tsai and Yang, 1995; Netzer, 1993; Schwartz and Mattern, 1994).
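
CEAOP coordinates aspects at the level of the FSP model, but the coordination aspect in plain Java rests on the monitor built into every object: synchronized together with wait() and notifyAll(). As a final illustrative sketch (the MessageBox name is an assumption, not taken from the sources), here is a one-slot hand-off between a producing and a consuming thread:

// A one-slot hand-off using Java's built-in monitor (synchronized/wait/notifyAll).
// The class name MessageBox is illustrative, not taken from the source text.
class MessageBox<T> {
    private T message;
    private boolean full = false;

    synchronized void put(T value) throws InterruptedException {
        while (full) {
            wait();            // wait until the slot is empty
        }
        message = value;
        full = true;
        notifyAll();           // wake any thread waiting in take()
    }

    synchronized T take() throws InterruptedException {
        while (!full) {
            wait();            // wait until a message is available
        }
        full = false;
        notifyAll();           // wake any thread waiting in put()
        return message;
    }
}

The while loops around wait() re-check the condition after the lock is reacquired and guard against spurious wake-ups, which is the standard Java monitor idiom.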

 

 

 

References

Bar-David, Y and Taubenfeld, G (2003) Automatic Discovery of Mutual Exclusion Algorithms

 

Bechini, A and Tai, KC (1998) Timestamps for Programs Using Messages and Shared Variables, in Proc. of 18th IEEE Inter. Conf. on Distributed Computing System

Bechini, A Cutajar, J and Prete, CA (1998) A Tool for Testing of Parallel and Distributed Programs in Message Passing Environments, in Proc. of 9th Mediterranean Electrotechnical Conf.

 

Cartwright, C (2000) 3.2 What is Concurrent Programming?, at http://www.cs.rice.edu/~cork/book/node96.html 

 

Corbett, C (1998) Constructing Compact Models of Concurrent Java Programs, in Proc. of ACM Inter. Symp. Software, Testing and Analysis (ACM Software Engineering Notes, Vol. 23, No. 2, March 1998) pp. 1-11

 

El-Rewini, H and Abd-El-Barr, M (2005) Advanced Computer Architecture and Parallel Processing, John Wiley & Sons, Inc.

 

Goetz, B Peierls, T Bloch, J Bowbeer, J Holmes, D and Lea, D (2006) Java Concurrency in Practice, Addison Wesley Professional

 

Hartley, S (1998) Concurrent Programming Using Java, Oxford University Press

 

Kann, C (2004) Creating Components: Object Oriented, Concurrent, and Distributed Computing in Java, Auerbach Publications

 

Lea, D (1997) Concurrent Programming in Java: Design Principles and Patterns, Addison Wesley

 

Norton, P and Stanek, W (1996) Java Programming, Sams Publishing

 

Netzer, RB (1993) Optimal Tracing and Replay for Debugging Shared-Memory Parallel Programs, in Proc. ACM/ONR Workshop on Parallel and Distributed Debugging, pp. 1-11

 

Olsson, R and Keen, A (2004) The JR Programming Language: Concurrent Programming in an Extended Java

 

Schwartz R and Mattern F (1994) Detecting Causal Relationships in Distributed Computations: in Search of the Holy Grail, Distributed Computing, Vol. 7, 1994, pp. 149-174

 

Tai KC and Carver, RH (1996) Testing of Distributed Programs, chapter 33 of Handbook of Parallel and Distributed Computing, edited by A. Zomaya, McGraw-Hill, pp. 955-978

 

Tai, KC (1997) Race Analysis of Traces of Asynchronous Message-Passing Programs, Proc. 17th IEEE Inter. Conf. Distributed Computing Systems, pp. 261-268

 

Tsai JJP and Yang, SJH (1995) eds., Monitoring and Debugging of Distributed Real-Time Systems, IEEE Computer Society

 

Wittwer, T (2006) An Introduction to Parallel Programming, VSSD.

 

 

 

 

 

 

 

