Introduction
This article shows you how to create and configure multi-instance queue managers and brokers, and provides step-by-step procedures to build and test a Message Broker high-availability (HA) environment.
To understand Message Broker V7's multi-instance broker feature, you must first understand how a multi-instance (MI) queue manager works. This article presents a conceptual overview of MI queue manager instances and the tasks for administering them, followed by a demonstration of switching over between instances. It then describes MI brokers, including how to set them up, configure them, and test them for high availability.
Software used:
- VMware Workstation 7.1.1
- RHEL 5 64-bit ISO image
- WebSphere MQ V7.1
- WebSphere Message Broker V7
Multi-instance queue managers
The term MI queue manager refers to the combination of active and standby instances of the queue manager that share the queue manager data and logs. MI queue managers protect applications against the failure of queue manager processes by having one instance of the queue manager active on one server, and another instance on standby on another server, ready to take over automatically should the active instance fail. Replicating queue manager instances is an effective way to improve the availability of queue manager processes.
Examples in this article were run on a system using WebSphere MQ and Message Broker, with three servers running Red Hat Enterprise Linux 5.0.
The three servers are:
- wmbmi1 (192.168.20.1)
- Hosts the primary (active) instance of the MI queue manager IBMQM and the primary (active) instance of the MI broker IBMBRK
- wmbmi3 (192.168.20.3)
- Hosts the standby instance of the MI queue manager IBMQM and the standby instance of the MI broker IBMBRK
- wmbmi4 (192.168.20.4)
- Hosts the shared network file system /mqha through NFS V4
NOTE
All three virtual machines were set up in VMware with the user root (password: password) and the user mqm (password: mqm).
Configuring the networked file system
An MI queue manager uses a networked file system to manage queue manager instances. The queue manager automates failover using a combination of file system locks and shared queue manager data and logs.
You need to ensure that the user ID (uid) and group ID (gid) of the mqm user are the same on all servers where MI queue manager instances reside.
STEP-1. Matching the uid and gid of the mqm user on all member servers
Adding the mqm user and the mqm and mqbrkrs groups on RHEL 5:
# Create the mqm and mqbrkrs groups
groupadd mqm
groupadd mqbrkrs
# Create the mqm user with mqm as its primary group
adduser -d /home/mqm -g mqm mqm
# Add root to the mqm and mqbrkrs groups, and mqm to mqbrkrs
usermod -G mqm,mqbrkrs root
usermod -G mqbrkrs mqm
[root@wmbmi1 ~]# cat /etc/passwd |grep mqm
mqm:x:500:500::/home/mqm:/bin/bash
[root@wmbmi3 ~]# cat /etc/passwd |grep mqm
mqm:x:500:500::/home/mqm:/bin/bash
[root@wmbmi4 ~]# cat /etc/passwd |grep mqm
mqm:x:500:500::/home/mqm:/bin/bash
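Since mismatched IDs are a common cause of MI setup failures, you can also script the check instead of reading /etc/passwd by eye. The following is a sketch: check_ids is a hypothetical helper, and the expected values 500/500 come from the output above, so adjust them for your site.

```shell
# Sketch: verify that a user's uid and gid match the expected values on this host.
# check_ids is a hypothetical helper, not part of WebSphere MQ.
check_ids() {
  user=$1; expected_uid=$2; expected_gid=$3
  uid=$(id -u "$user" 2>/dev/null) || return 1
  gid=$(id -g "$user" 2>/dev/null) || return 1
  [ "$uid" = "$expected_uid" ] && [ "$gid" = "$expected_gid" ]
}

# Run on each of wmbmi1, wmbmi3, and wmbmi4; all three must agree:
check_ids mqm 500 500 && echo "mqm ids match" || echo "mqm ids differ"
```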
Additionally, the gid of the mqm group must be identical on all the systems. Create the log and data directories under a common shared folder named /mqha. Make sure that /mqha is owned by user mqm and group mqm, and that the access permissions are set to rwx for both user and group. The commands in STEP-2, executed as root on wmbmi4, achieve this:
STEP-2. Creating the shared directories
[root@wmbmi4 mqm]# mkdir -p /mqha/WMQ/IBMQM/data
[root@wmbmi4 mqm]# mkdir -p /mqha/WMQ/IBMQM/logs
[root@wmbmi4 mqm]# mkdir -p /mqha/WMB/IBMBRK
[root@wmbmi4 mqm]# chown -R mqm:mqm /mqha
[root@wmbmi4 mqm]# chmod -R ug+rwx /mqha
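You can verify the ownership and permissions with a small script. This is a hedged sketch: check_mqha_dir is a hypothetical helper that uses GNU stat (as shipped with RHEL), and it only checks the top-level directory, not the whole tree.

```shell
# Sketch: confirm a directory has the required owner and rwx permissions
# for both user and group. check_mqha_dir is a hypothetical helper.
check_mqha_dir() {
  dir=$1; want_owner=$2
  owner=$(stat -c '%U:%G' "$dir") || return 1
  perms=$(stat -c '%A' "$dir")
  [ "$owner" = "$want_owner" ] || return 1
  case $perms in drwxrwx*) return 0 ;; *) return 1 ;; esac
}

# On wmbmi4, after the chown/chmod above:
check_mqha_dir /mqha mqm:mqm && echo "/mqha ok" || echo "/mqha not ready"
```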
Next, you need to configure the NFS server on wmbmi4 and then start it on the same machine. As root on wmbmi4, add the line in STEP-3 to the /etc/exports file:
STEP-3. Configuring the NFS server
/mqha *(rw,fsid=0,no_wdelay,sync)
or
/mqha *(rw,fsid=0,wdelay,insecure,no_subtree_check,sync,anonuid=500,anongid=500)
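For reference, here is what the export options above do. This is a summary sketch; see the exports(5) man page for the authoritative definitions.

```shell
# /etc/exports options used above:
#   rw                read-write access for clients
#   sync              reply only after writes are committed to disk (data integrity)
#   no_wdelay         write to disk immediately instead of batching writes
#   fsid=0            marks /mqha as the NFSv4 pseudo-root, which is why the
#                     clients later mount "192.168.20.4:/" rather than ":/mqha"
#   no_subtree_check  disables subtree checking on the server
#   anonuid/anongid   map anonymous requests to uid/gid 500 (the mqm user here)
/mqha *(rw,fsid=0,no_wdelay,sync)
```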
Start the NFS server by executing the command in STEP-4 as root on wmbmi4:
STEP-4. Starting the NFS server
/etc/init.d/nfs start
If the NFS server is already running, refresh it using the command in STEP- 5:
STEP-5. Refreshing the NFS server
exportfs -ra
STEP-6. Checking that /mqha is exported
showmount -e wmbmi4
[root@wmbmi1 ~]# /usr/sbin/showmount -e 192.168.20.4
Export list for 192.168.20.4:
/mqha *
[root@wmbmi3 ~]# /usr/sbin/showmount -e 192.168.20.4
Export list for 192.168.20.4:
/mqha *
If /mqha does not appear in the export list shown by STEP-6, revisit the NFS server configuration. Once the export is visible, mount the shared folder on the two instance-hosting servers. Execute the commands in STEP-7 as root on both servers to mount the exported file system:
STEP-7. Mounting the exported file system
[root@wmbmi1 ~]# mount -t nfs4 -o hard,intr 192.168.20.4:/ /mqha
[root@wmbmi3 ~]# mount -t nfs4 -o hard,intr 192.168.20.4:/ /mqha
Note: Create the /mqha mount-point directory on both servers and give it the appropriate ownership and permissions before mounting.
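To make the mount persist across reboots, you could also add an fstab entry on wmbmi1 and wmbmi3. This is a sketch; verify the options against your own NFS configuration before relying on it.

```shell
# /etc/fstab entry (sketch) for the two instance-hosting servers:
# mounts the NFSv4 export from wmbmi4 at /mqha at boot time
192.168.20.4:/   /mqha   nfs4   hard,intr   0 0
```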
You must run the amqmfsck command to test whether your networked file system will properly control access to queue manager data and logs:
- Run amqmfsck without any options on each system to check basic locking.
- Run amqmfsck on both WebSphere MQ systems simultaneously, using the -c option, to test writing to the directory concurrently.
- Run amqmfsck on both WebSphere MQ systems simultaneously, using the -w option, to test waiting for and releasing a lock on the directory concurrently.
To work reliably with WebSphere MQ, a shared file system must provide:
- Data write integrity
- Guaranteed exclusive access to files
- The release of locks upon failure
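The "waiting for and releasing a lock" behavior that amqmfsck -w exercises is ordinary advisory file locking, which you can see with the standard flock utility. The following is an illustrative sketch only, not the actual tool: one process holds an exclusive lock on a file, and a second process blocks until the lock is released, just as a standby queue manager waits for the active instance's lock.

```shell
# Illustration of waiting for and releasing a lock: the second flock call
# blocks until the first holder exits and its exclusive lock is released.
lockfile=$(mktemp)
(
  flock -x 9        # take an exclusive advisory lock on fd 9
  sleep 1           # hold it briefly, as an active queue manager would
) 9>"$lockfile" &
holder=$!
sleep 0.2           # give the background process time to take the lock
# This call waits (up to 10s) for the lock, then runs its command:
result=$(flock -x -w 10 "$lockfile" -c 'echo "lock acquired after release"')
echo "$result"
wait "$holder"
rm -f "$lockfile"
```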
Creating a multi-instance queue manager
Start by creating the MI queue manager on the first server, wmbmi1. Log on as the user mqm and issue the command in STEP-8:
STEP-8. Creating a queue manager
[mqm@wmbmi1 ~]$ crtmqm -md /mqha/WMQ/IBMQM/data -ld /mqha/WMQ/IBMQM/logs IBMQM
Once the queue manager is created, display its properties using the command in STEP-9:
STEP-9. Displaying the properties of the queue manager
[mqm@wmbmi1 ~]$ dspmqinf -o command IBMQM
addmqinf -s QueueManager -v Name=IBMQM -v Directory=IBMQM -v Prefix=/var/mqm -v
DataPath=/mqha/WMQ/IBMQM/data/IBMQM
Copy the output from the dspmqinf command and paste it on the command line on wmbmi3, from the console of the user mqm, as shown in STEP-10:
STEP-10. Configuring wmbmi3
[mqm@wmbmi3 ~]$ addmqinf -s QueueManager -v Name=IBMQM -v Directory=IBMQM
-v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMQM/data/IBMQM
WebSphere MQ configuration information added.
Now display the queue managers on both servers using the dspmq command on each. The results should look like STEP-11:
STEP-11. Displaying the queue managers on both servers
[mqm@wmbmi1 ~]$ dspmq
QMNAME(IBMQM) STATUS(Ended immediately)
[mqm@wmbmi3 ~]$ dspmq
QMNAME(IBMQM) STATUS(Ended immediately)
The MI queue manager IBMQM has now been created both on the server wmbmi1 and on the server wmbmi3.
Starting and stopping a multi-instance queue manager
STEP-12. Starting the queue manager in multi-instance mode
[mqm@wmbmi1 ~]$ strmqm -x IBMQM
There are 88 days left in the trial period for this copy of WebSphere MQ.
WebSphere MQ queue manager 'IBMQM' starting.
The queue manager is associated with installation 'Installation1'.
5 log records accessed on queue manager 'IBMQM' during the log replay phase.
Log replay for queue manager 'IBMQM' complete.
Transaction manager state recovered for queue manager 'IBMQM'.
WebSphere MQ queue manager 'IBMQM' started using V7.1.0.0.
[mqm@wmbmi3 ~]$ strmqm -x IBMQM
There are 90 days left in the trial period for this copy of WebSphere MQ.
WebSphere MQ queue manager 'IBMQM' starting.
The queue manager is associated with installation 'Installation1'.
A standby instance of queue manager 'IBMQM' has been started. The active
instance is running elsewhere.
Only two queue manager instances can run at the same time: an active instance and a standby. Should the two instances be started at the same time, WebSphere MQ has no control over which instance becomes the active instance; this is determined by the NFS server. The first instance to acquire exclusive access to the queue manager data becomes the active instance.
STEP-13. Creating and starting the queue manager listener
[mqm@wmbmi1 ~]$ runmqsc IBMQM
5724-H72 (C) Copyright IBM Corp. 1994, 2011. ALL RIGHTS RESERVED.
Starting MQSC for queue manager IBMQM.
define listener(IBMQMLISTENER) trptype(tcp) port(1414) control(qmgr)
1 : define listener(IBMQMLISTENER) trptype(tcp) port(1414) control(qmgr)
AMQ8626: WebSphere MQ listener created.
start listener(IBMQMLISTENER)
2 : start listener(IBMQMLISTENER)
AMQ8021: Request to start WebSphere MQ listener accepted.
dis lsstatus(*)
3 : dis lsstatus(*)
AMQ8631: Display listener status details.
LISTENER(IBMQMLISTENER) STATUS(RUNNING)
PID(4969)
end
There are two ways to stop the MI queue managers you've started. The first is to switch the active and standby instances by using the endmqm command with the -s option. If you issue endmqm -s IBMQM on the active instance, you manually switch control to the standby instance: endmqm -s shuts down the active instance without shutting down the standby. The exclusive access lock on the queue manager data and logs is released, and the standby queue manager takes over.
[mqm@wmbmi1 ~]$ dspmq -x
QMNAME(IBMQM) STATUS(Running)
INSTANCE(wmbmi1) MODE(Active)
INSTANCE(wmbmi3) MODE(Standby)
[mqm@wmbmi3 ~]$ dspmq -x
QMNAME(IBMQM) STATUS(Running as standby)
INSTANCE(wmbmi1) MODE(Active)
INSTANCE(wmbmi3) MODE(Standby)
[mqm@wmbmi1 ~]$ endmqm -s IBMQM
Quiesce request accepted. The queue manager will stop when all outstanding work
is complete, permitting switchover to a standby instance.
[mqm@wmbmi1 ~]$ dspmq -x
QMNAME(IBMQM) STATUS(Running elsewhere)
INSTANCE(wmbmi3) MODE(Active)
[mqm@wmbmi3 ~]$ dspmq -x
QMNAME(IBMQM) STATUS(Running)
INSTANCE(wmbmi3) MODE(Active)
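The second way is to end the queue manager outright rather than switching over. The following is a sketch based on standard endmqm behavior (verify against your MQ version's documentation): issuing endmqm without -s against the active instance also ends any standby, while endmqm -x ends only the instance it is run against.

```shell
# On the server hosting the active instance (wmbmi3 after the switchover above):
endmqm IBMQM     # ends the active instance; the standby instance ends as well
dspmq -x         # on either server, IBMQM should now report an ended status

# To end only a standby instance while leaving the active one running:
endmqm -x IBMQM  # run on the server hosting the standby
```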