Clustering - What's new in GlassFish 3.1 - Part 2

Check out the new features of Oracle's application server. In this article we present new features of version 3.1 of the GlassFish application server, such as clustering, versioning, scoped resources and the RESTful API (Part 2).


Clustering

One of the most significant new features of this new version of GlassFish is its clustering ability, i.e., the possibility of creating and administering clusters of GlassFish servers in the same application domain. Before creating and administering a cluster, however, it is important to understand some basic concepts:

  1. Server instance: a process executing a GlassFish application server, hosting all the enterprise applications that are installed in the cluster;
  2. Node: a computer that executes at least one server instance of the cluster. Each node has its own GlassFish configuration and can be managed locally or via SSH;
  3. Cluster: a logical entity that is composed of the many different server instances spread around different nodes of the network which, together, serve the same set of enterprise applications;
  4. Application domain: an administrative namespace composed of the set of enterprise applications deployed to it and a GlassFish configuration. When executed, a domain works as a full application server. GlassFish allows for the creation of multiple domains;
  5. Domain administration server: in a cluster, this is the central server, i.e., the one that administers the application domain of that cluster.

Using multiple computers to serve the same enterprise applications increases their scalability and availability, with the following characteristics:

  1. Many server instances execute in parallel (in the same machine or different machines) with the same application domain (that is, serving the same set of enterprise applications). Instances communicate among themselves and work as if they were a single server;
  2. When the load on the existing instances becomes too high and their processing capacity is not enough to serve all requests, it is possible to add a new server, creating a new instance in the cluster;
  3. If one of the servers of the cluster fails, another instance can take over and the service is not interrupted, guaranteeing availability. For this to happen, not only the applications but also the users' sessions have to be replicated across instances.

In practice, clustering in GlassFish works the following way: each cluster node should have the GlassFish server installed beforehand. Using the administration console, the asadmin tool in a terminal, an IDE integrated with GlassFish or any application that uses GlassFish's RESTful administration interface, the administrator creates a cluster in the domain administration server and then creates instances in the different nodes, adding them to the cluster. Once the cluster is running, it is managed by GlassFish, which guarantees communication among the different instances and performs load balancing, distributing requests among the participants of the cluster.
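To give an idea of the RESTful administration interface mentioned above, the sketch below creates a cluster with a single HTTP POST. It assumes the default administration port 4848 and that the resource path and the id parameter mirror the create-cluster subcommand; the exact names may differ in your installation, so treat this as an illustration rather than a reference.

curl -X POST -H "Accept: application/json" -d "id=cluster1" http://localhost:4848/management/domain/clusters/cluster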

Creating a cluster and its instances

In a terminal, the asadmin tool will be used to demonstrate the clustering capabilities of GlassFish. With the server running, create a cluster with the subcommand create-cluster:

asadmin create-cluster cluster1

The name cluster1 can be replaced by any other name you want to give the cluster being created. Then, instances have to be added to it. For example, to create two instances (instance1 and instance2) in cluster1, use the subcommand create-local-instance, as follows:

asadmin create-local-instance --cluster cluster1 instance1

asadmin create-local-instance --cluster cluster1 instance2

Whenever you want, it is possible to list all the instances of the server with the subcommand list-instances. The option -l returns a more detailed list, indicating the node and the cluster to which each instance belongs:

asadmin list-instances -l
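In our setup, the detailed listing looked roughly like the one below (the port numbers are illustrative and the exact columns may vary between installations):

# asadmin list-instances -l
NAME        HOST        PORT    PID    CLUSTER    STATE
instance1   localhost   24848   --     cluster1   not running
instance2   localhost   24849   --     cluster1   not running
Command list-instances executed successfully.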

Creating a remote node

Instances on the same computer help with the applications' availability, because if one of the processes dies, another one can take over. To also survive the failure of the entire node, and to improve scalability, it is worth creating a node (and instances) on a remote machine. We can do that thanks to another new feature of GlassFish 3.1: centralized management. In other words, the entire cluster can be managed from the GlassFish installation on the local computer.

In the following examples, SSH (Secure Shell) will be used to do centralized management. To manage nodes via SSH, the following requirements should be fulfilled (more details can be found in GlassFish's documentation on remote management via SSH – see Links):

  1. Each remote computer should have GlassFish 3.1 installed and running;
  2. Each remote host should have SSHD (the SSH daemon/server) installed with a registered user that has write and execute permissions to GlassFish. You should know the user name and password for this user;
  3. The local machine should have the SSH client installed;
  4. All remote machines should be in the same sub-network as the local machine, because the current implementation of remote management uses UDP multicast for communication, which imposes this limitation.

We then created the following setup using two computers on the same network: the local host has IP 10.25.0.76, with the SSH client and GlassFish 3.1 installed and running; the remote computer has IP 10.25.0.74, with the SSHD server and GlassFish 3.1 installed and running, user name vitor and password glassfish. On the remote host, GlassFish was installed in the directory /home/vitor/Software/glassfish-3.1/.

Given this configuration, to create a node on the remote machine, the file $GF_HOME/glassfish/bin/sshpwd has to be created on the local machine with the contents of Listing 1. Then, the following command should be executed:

asadmin --passwordfile sshpwd create-node-ssh --nodehost 10.25.0.74 --sshuser vitor remotenode

Listing 1. File with the password of the user that is used in the SSH connection.

AS_ADMIN_SSHPASSWORD=glassfish


  First off, the file $GF_HOME/glassfish/bin/sshpwd (which can be created with any name in any folder, as long as it is correctly specified in the asadmin create-node-ssh command) specifies the password of the user of the remote host who has access to the GlassFish server. In case you do not want to write this password directly in the file, authentication can also be done using a pass phrase, by running the command asadmin create-password-alias mypassphrase and using the following contents for the sshpwd file: AS_ADMIN_SSHKEYPASSPHRASE=ALIAS=mypassphrase. Using a pass phrase is indeed the recommended approach, as it is more secure (even better if the sshpwd file is automatically created and then deleted at each execution).
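For reference, the pass-phrase approach boils down to the two steps below. The alias name mypassphrase is just an example, and, depending on your setup, additional SSH key settings (such as the location of the key file) may also be needed:

asadmin create-password-alias mypassphrase

Contents of the sshpwd file in this case:

AS_ADMIN_SSHKEYPASSPHRASE=ALIAS=mypassphrase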

  In turn, the command asadmin create-node-ssh specifies: the password file (--passwordfile switch), the IP address of the remote host (--nodehost), the remote user to be used by SSH (--sshuser) and, finally, the name of the node to be created (the last parameter of the command). The local GlassFish then uses the SSH client to connect to the SSHD on the remote server and communicate with the GlassFish installed there, creating a new node and associating it with the cluster.

After we run this command, it is possible to see that the node was created with the subcommand list-nodes, as below (the line that starts with the # character indicates the command to execute, whereas the following lines show the result of this command in our scenario):

# asadmin list-nodes

localhost-domain1  CONFIG  localhost

remotenode  SSH  10.25.0.74

Command list-nodes executed successfully.

  The result of the subcommand list-nodes shows a local node of type CONFIG (managed locally) and the node called remotenode that we have just created, of type SSH (managed remotely via SSH). In the previous subsection we have already created instances for the local node. To create two instances on the remote node, we can use the subcommand create-instance, specifying the node, the cluster and the name of the instance to be created, as the following examples illustrate:

asadmin create-instance --cluster cluster1 --node remotenode instance3

asadmin create-instance --cluster cluster1 --node remotenode instance4

Managing clusters using the administration console

  Everything that has been done using the asadmin tool in a terminal can also be achieved in the administration console's Web interface. In this case, it is necessary to first create the remote node and then create the cluster and its instances. In the Common Tasks tree at the left-hand side of the console, click Nodes and then click New.... The form shown in Figure 1 should be displayed.



Figure 1. Creating a new cluster node using the administration console.

  We should fill the fields of this form with the values that have been used before to create the remote node:

  1. Name: remotenode;
  2. Node Host: 10.25.0.74;
  3. SSH User Name: vitor
  4. SSH User Authentication: password;
  5. SSH Password: glassfish;
  6. Confirm SSH Password: glassfish.

  After clicking OK to create the remote node, select Clusters in the Common Tasks tree and click the New... button. In the form that is shown, type the name of the cluster and, below the label Server Instances to Be Created, click the New... button four times, once for each instance to be created. Then, fill in the information about the instances as previously done using asadmin. The filled out form is shown in Figure 2.


Figure 2. Creating a new cluster using the administration console.

  Click OK and wait while GlassFish contacts the remote server and creates all the necessary configuration. After the cluster has been created, it is possible to see its instances and to start and stop it using the administration console. To do that, click on Clusters and use the interface presented in Figure 3. By clicking on the name of a cluster or of one of its instances, you can access the more detailed configuration of each element.


Figure 3. Interface for managing clusters in the administration console.

High availability

  After the cluster has been created and started, a GlassFish server will be available in each instance and an application deployed on the cluster can be accessed in any of them. Furthermore, GlassFish 3.1 can do session fail-over, which consists of replicating the users' sessions of one instance in the others, so that the session survives if one of them fails.

To exemplify these features, an example enterprise application called clusterjsp.ear, from GlassFish 2, will be used (see Links). First, it must be deployed in the cluster. To do so using asadmin, execute the following command:

asadmin deploy --target cluster1 --availabilityenabled=true clusterjsp.ear

  The --target switch indicates that the deployment should be done to the cluster, and not to the standalone GlassFish server, whereas --availabilityenabled activates the high availability feature for this application.
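To double-check that the deployment reached the cluster, the applications deployed on that target can be listed; the output is not reproduced here and will depend on what is installed in your domain:

asadmin list-applications --target cluster1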

To deploy using the administration console, click on the Clusters task to open the list of clusters shown in Figure 3. Then, click on cluster1, open the Applications tab and click on the Deploy... button. Next, specify the clusterjsp.ear file in the Location field and mark the option Enabled in the Availability field. After this, click OK to deploy the application and wait.

  An important remark: besides ticking the high availability option during deployment, as we have just mentioned, the application itself must be configured for this feature to work when deployed in a cluster. For this purpose, the tag <distributable /> must be present in the configuration file WEB-INF/web.xml.
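As a minimal sketch, a web.xml containing this tag looks like the one below (the clusterjsp example already ships with it; the Servlet 3.0 header is an assumption about the descriptor version of your application):

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <!-- Marks the application as distributable, allowing the container to replicate its HTTP sessions -->
    <distributable />
</web-app>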

After it is deployed, the application will be available in all four instances. Before opening it in your Web browser, however, you should know the HTTP port used by each instance. In the administration console, go back to the list of clusters (Figure 3) and click on one of the instances. A page with general information about the chosen instance will be opened, and the item HTTP Port(s) will show three values. The one in the middle is the port number to be used to access the application in the browser.

  When performing our tests, instance1 and instance3 were responding on port 28080, while instance2 and instance4 were using port 28081. Therefore, in our experiment, the clusterjsp.ear application could be accessed at the following URLs: http://localhost:28080/clusterjsp/ (instance1), http://localhost:28081/clusterjsp/ (instance2), http://10.25.0.74:28080/clusterjsp/ (instance3) and http://10.25.0.74:28081/clusterjsp/ (instance4). Any of those addresses will open the application's welcome page (HaJsp.jsp), as shown in Figure 4.


Figure 4. Example application that tests the high availability feature of GlassFish clusters.

  The figure shows the application opened in instance3. In the body of the page it is possible to see the IP of the server, the port used, the name of the instance, etc. Using the form shown after this list it is possible to insert attributes with custom names and values into the session by filling out the appropriate fields and clicking the Add Session Data button. The attributes added to the session will be listed below the form, under the label Data retrieved from the HttpSession.

  As a test, open the page using instance1 (http://localhost:28080/clusterjsp/) and add some attributes to the session, verifying that they are listed in the page. Then, in a new browser tab or window, open the page in instance2 (http://localhost:28081/clusterjsp/) and notice that the attributes are also listed there. The high availability mechanism of GlassFish copied the user session so that, if one instance fails (you can simulate this in the administration console by clicking the Stop button on the general information page of a specific instance), the other instances can continue to serve the client.

The session replication mechanism might not work between local and remote nodes if, given the characteristics of your network, it is not possible to establish communication between them using UDP multicast. The command asadmin validate-multicast can be used to check whether all the nodes are responding, whereas asadmin get-health cluster1 tells whether all cluster instances are working.
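Both checks are simple asadmin subcommands, for example:

asadmin validate-multicast

asadmin get-health cluster1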

Load balancing

  Assembling a high availability cluster is not enough, as it would require the users of your applications to know the addresses and ports of all the instances and, in case of a failure, to switch to another one manually in order to continue using the system. Furthermore, there is no guarantee that your users will be evenly and uniformly distributed among the different instances, which is needed for good scalability of the application. For these reasons, we should use a load balancing tool to complement our cluster.

  The free GlassFish Server Open Source Edition 3.1 does not come with an integrated load balancer. The commercial Oracle GlassFish Server 3.1 has a plug-in that can be installed and managed via the administration console (refer back to the section “Installing GlassFish components with the Update Tool”). In case you prefer the open source version, it is possible to install an external load balancing tool and integrate it with GlassFish, such as the Apache HTTP server with its mod_jk module.

  Installation of this server and module is out of the scope of this article. However, we discuss here how to integrate both servers to provide load balancing. First of all, after installing the Apache HTTP server and activating the mod_jk module, add the configuration lines of Listing 2 to Apache's configuration file. The placeholder directory paths have to be adapted to your system (replace them with the correct paths for the respective files).

Listing 2. Activating mod_jk for load balancing in the Apache HTTP server.

LoadModule jk_module mod-jk-directory/mod_jk.so

JkWorkersFile apache-config-directory/workers.properties

JkMount /*.jsp loadbalancer


  The first line activates the module, and it may already be present, depending on how you have installed Apache and mod_jk (for instance, on Debian-based Linux systems such as Ubuntu, when the mod_jk package is installed the file /etc/apache2/mods-enabled/jk.load is created with the configuration that activates the module). The JkWorkersFile instruction indicates where Apache should find the configuration file of mod_jk, which should in turn contain the contents of Listing 3. Finally, JkMount indicates that the balancing will be performed for all URLs that end in .jsp, using the component loadbalancer.

On Windows systems, the contents of Listing 2 can be included in the conf/httpd.conf file. On Ubuntu systems, on the other hand, Apache is configured by default using virtual hosts. In this case, the JkMount instruction should be placed within the <VirtualHost *:80> tag, in the configuration file /etc/apache2/sites-available/default (names may vary depending on your specific operating system), as in the sketch below.
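The snippet that follows only illustrates where the directive goes; the existing directives in the file will differ on your system:

<VirtualHost *:80>
    # ... existing DocumentRoot, logging and <Directory> directives ...
    JkMount /*.jsp loadbalancer
</VirtualHost>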

Listing 3. Load balancing configuration for the mod_jk module.

worker.list=loadbalancer

worker.instance1.type=ajp13

worker.instance1.host=127.0.0.1

worker.instance1.port=8011

worker.instance1.lbfactor=50

worker.instance1.socket_keepalive=1

worker.instance1.socket_timeout=300

worker.instance2.type=ajp13

worker.instance2.host=127.0.0.1

worker.instance2.port=8012

worker.instance2.lbfactor=50

worker.instance2.socket_keepalive=1

worker.instance2.socket_timeout=300

worker.loadbalancer.type=lb

worker.loadbalancer.balance_workers=instance1,instance2

 

 The module's configuration (Listing 3) creates the loadbalancer component and defines its workers with the same names as the instances of our GlassFish cluster. To simplify the example, only the first two instances (on the local node of the cluster) were considered. Using the AJP/1.3 protocol, mod_jk will receive from Apache all requests to URLs that end in .jsp and alternate between the two local cluster instances (host=127.0.0.1), using ports 8011 and 8012 to communicate with GlassFish.
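If mod_jk should also balance requests to the remote instances, Listing 3 can be extended along the lines sketched below. The AJP ports chosen for the remote host are an assumption and require matching AJP_PORT system properties on instance3 and instance4 (see the asadmin commands later in this section); the socket options shown in Listing 3 can be repeated for these workers as well:

worker.instance3.type=ajp13
worker.instance3.host=10.25.0.74
worker.instance3.port=8011
worker.instance3.lbfactor=50

worker.instance4.type=ajp13
worker.instance4.host=10.25.0.74
worker.instance4.port=8012
worker.instance4.lbfactor=50

worker.loadbalancer.balance_workers=instance1,instance2,instance3,instance4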

  After finishing the configuration on Apache's side and restarting it so the changes take effect, it is time to configure GlassFish to receive these requests and direct them to the cluster instances. With the GlassFish server running, run the following commands using asadmin:

asadmin create-network-listener --jkenabled true --target cluster1 --protocol http-listener-1 --listenerport \${AJP_PORT} jk-listener

asadmin create-jvm-options --target cluster1 "-DjvmRoute=\${AJP_INSTANCE_NAME}"

asadmin create-system-properties --target instance1 AJP_INSTANCE_NAME=instance1

asadmin create-system-properties --target instance1 AJP_PORT=8011

asadmin create-system-properties --target instance2 AJP_INSTANCE_NAME=instance2

asadmin create-system-properties --target instance2 AJP_PORT=8012

  First of all, it is important to note that the above commands use a backslash before each dollar sign that indicates a variable (\$), because the syntax ${VARIABLE}, used inside GlassFish to indicate variables, is the same one used by shells on systems such as Linux. Given that we do not want the shell to evaluate the variable in the command, but rather pass it to GlassFish to be evaluated later, we use \${AJP_PORT} and \${AJP_INSTANCE_NAME} instead of ${AJP_PORT} and ${AJP_INSTANCE_NAME}. On Windows, use the version without the backslash.

  The above commands create a network listener on port ${AJP_PORT} in cluster cluster1 for the integration with mod_jk (--jkenabled true). A JVM option is also added to the cluster to indicate the instance to which each request should be routed. Both the port number and the instance name are specified as variables, replaced with the appropriate value for each instance. After executing all these commands, restart GlassFish (asadmin restart-domain) so the changes take effect.
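If the remote instances are also to receive requests from mod_jk, as in the workers.properties extension sketched earlier, the same kind of system properties has to be created for them; the port numbers below are the ones assumed in that sketch:

asadmin create-system-properties --target instance3 AJP_INSTANCE_NAME=instance3

asadmin create-system-properties --target instance3 AJP_PORT=8011

asadmin create-system-properties --target instance4 AJP_INSTANCE_NAME=instance4

asadmin create-system-properties --target instance4 AJP_PORT=8012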

  All of the above configurations can be seen in GlassFish's administration console and could also have been done using its Web interface. The network listener can be found at Configurations > cluster1-config > Network Config > Network Listeners > jk-listener; the JVM option is at Configurations > cluster1-config > JVM Settings, in the JVM Options tab; the instance properties can be seen by selecting Clusters > cluster1, opening the Instances tab, clicking on the name of the instance and, finally, opening the Properties tab.

If everything was properly configured and there are no problems with your network (if it does not work, you can turn off firewalls and use network analysis tools to make sure the problem is not in the network), at this point it is no longer necessary to choose an instance and open the application using the instance's specific URL (e.g., http://localhost:28080/clusterjsp/ for instance1), as the application is available at a single URL – http://localhost/clusterjsp/ – and Apache is in charge of balancing the load evenly among the different instances.
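A quick way to observe the balancing without a browser is to issue a few requests through Apache and look for the instance name in each response; this assumes, as shown in Figure 4, that the rendered page contains the name of the instance that served the request:

for i in 1 2 3 4; do curl -s http://localhost/clusterjsp/HaJsp.jsp | grep -o "instance[0-9]" | head -n 1; done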

Good practices for high availability

  For a better result in terms of high availability of an enterprise application, it is not enough to know how to configure a cluster in GlassFish. There are many recommendations and good practices that should be followed. We list some of them below:

  1. Analyze the network topology that connects the nodes of the cluster and look for failure points. Consider all elements of the network: routers, switches, firewalls, load balancers, cables and power sources. Provide redundancy in all points that are liable to failure;
  2. Consider storing the users' sessions in the database for an even stronger guarantee that this information will not be lost in case of failure. In order not to compromise performance too much, minimize the amount of data that gets stored in the session and store in the database only data that does not change frequently. The HTTP session can also serve as a cache for information that is read more often than it is changed;
  3. Use monitoring tools to diagnose failures such as deadlocks and memory leaks as soon as possible. Monitor Java threads, synchronization points, shared resources, etc.;
  4. Know the possible failures and how to work around them. For example, some errors coming from the database make the current connection unusable, and the application should create a new connection to replace it. Other, less fatal errors can be solved simply by retrying a few seconds later;
  5. Create a schedule of proactive restarts of the server instead of waiting for it to fail. This way it is possible to plan such operations for the most appropriate moments, avoiding the undesired consequences of the server's gradual performance degradation;
  6. Also organize a backup schedule and always keep a recent copy of both the software and the data, so that the application can be restored in more severe cases.

  To see the first part, access: http://mrbool.com/p/Whats-new-in-GlassFish-3-1-Part-1/22661

  In the next article, we will see Application Versioning, Application-scoped Resources and other characteristics.

Links

glassfish.java.net/downloads/3.1-final.html

Download page for GlassFish 3.1.

netbeans.org/downloads/

Download page for the NetBeans IDE.

eclipse.org/downloads/

Download page for the Eclipse IDE.

download.oracle.com/docs/cd/E18930_01/html/821-2426/gkshg.html

GlassFish documentation about remote management via SSH.

blogs.sun.com/arungupta/resource/glassfish/clusterjsp.zip

Example application for testing clusters, obtained from GlassFish 2 and published in the blog of Arun Gupta.

wikis.sun.com/display/GlassFish/LoadBalanceMod_jkDemo

Instructions for the configuration of load balancing on GlassFish using Apache and mod-jk.

http://www.restfulie.org

Framework for the implementation of RESTful services and clients.

curl.haxx.se/download.html

Download page for the cURL tool.

github.com/douglascrockford/JSON-java

Source-code of the JSON-java parser.



