Distributed Computing in Java 9
Overview of this book

Distributed computing is the concept of accomplishing a large computation by splitting it into multiple smaller logical activities that are performed by diverse systems, maximizing performance while lowering infrastructure investment. This book will teach you how to improve the performance of traditional applications through the use of parallelism and optimized resource utilization in Java 9. After a brief introduction to the fundamentals of distributed and parallel computing, the book moves on to explain different ways of communicating with remote systems/objects in a distributed architecture. You will learn about asynchronous messaging with enterprise integration and related patterns, how to handle large amounts of data using HPC, and how to implement distributed computing for databases. Moving on, it explains how to deploy distributed applications on different cloud platforms and covers self-contained application development. You will also learn about big data technologies and understand how they contribute to distributed computing. The book concludes with detailed coverage of the testing, debugging, troubleshooting, and security aspects of distributed applications, so the programs you build are robust, efficient, and secure.
Table of Contents (17 chapters)
Title Page
Credits
About the Author
About the Reviewer
www.PacktPub.com
Preface
Customer Feedback
2. Communication between Distributed Applications
3. RMI, CORBA, and JavaSpaces

Chapter 5. HPC Cluster Computing

Sometimes, the processing requirements of organizational applications exceed what a regular computer configuration can offer. This can be addressed to an extent by increasing processor capacity and other resource allocations. While this improves performance for a while, it limits future computational growth, and building ever more powerful single machines involves considerable extra cost. There is also a need for efficient algorithms and practices to obtain the best results. A practical and economical alternative to a single high-powered computer is to set up multiple lower-capacity processors that work collectively and coordinate their processing capabilities. In other words, we set up parallel computers that allow processing activities to be distributed among multiple low-capacity computers, combining their results. This would result in a powerful system...
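The same divide-and-combine idea can be illustrated on a single machine with Java's standard concurrency API before scaling it out to a cluster. The following is a minimal sketch (the ParallelSum class and its chunking scheme are illustrative, not from the book) that splits a large summation into smaller tasks, runs them on a pool of workers, and combines the partial results:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        // A large computation: summing a big array of numbers.
        long[] data = new long[10_000_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i + 1;
        }

        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        // Split the work into one chunk per worker.
        int chunkSize = (data.length + workers - 1) / workers;
        List<Future<Long>> partials = new ArrayList<>();
        for (int w = 0; w < workers; w++) {
            final int start = w * chunkSize;
            final int end = Math.min(start + chunkSize, data.length);
            Callable<Long> task = () -> {
                long sum = 0;
                for (int i = start; i < end; i++) {
                    sum += data[i];
                }
                return sum;
            };
            partials.add(pool.submit(task));
        }

        // Combine the partial results into the final answer.
        long total = 0;
        for (Future<Long> partial : partials) {
            total += partial.get();
        }
        pool.shutdown();

        System.out.println("Total: " + total); // expected 50000005000000
    }
}
```

In a real cluster, each chunk would be dispatched to a separate node rather than a local thread, but the split, compute, and combine structure stays the same.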