Saturday, 27 September 2014

MULTI-HOPPING (Gateway)

Passing messages through one or more intermediate queue managers is called multi-hopping, and each intermediate queue manager is called a hop (or gateway).


PROCEDURE TO CREATE MULTI-HOPPING:

1. Create queue managers QM1, QM2, and QM3, and start them.



2. On QM1, create a remote queue definition whose attributes name the destination local queue and queue manager, i.e. RNAME(QM3_LOCALQ) and RQMNAME(QM3), plus the transmission queue, XMITQ(TQ).


3. On QM1, create a transmission queue called TQ.


4. On QM1, create a sender channel QM1.QM2 that uses transmission queue TQ.


5. On QM2, create a receiver channel QM1.QM2.
6. On QM2, create a transmission queue named after the target queue manager, i.e. QM3.




7. On QM2, create a sender channel QM2.QM3 with XMITQ(QM3).


8. On QM3, create a local queue called QM3_LOCALQ, the queue named in the RNAME attribute of the remote queue definition on QM1.


9. On QM3, create a receiver channel QM2.QM3.





10. Make sure listeners are started and running on both QM2 and QM3, as sketched below.
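Here is one possible MQSC sketch of steps 1-10, run via runmqsc against each queue manager. The remote queue name QM1_REMOTEQ, the host names qm2host/qm3host, and the ports are illustrative assumptions, not part of the original procedure.

* On QM1 (runmqsc QM1)
DEFINE QREMOTE(QM1_REMOTEQ) RNAME(QM3_LOCALQ) RQMNAME(QM3) XMITQ(TQ)
DEFINE QLOCAL(TQ) USAGE(XMITQ)
DEFINE CHANNEL(QM1.QM2) CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('qm2host(1414)') XMITQ(TQ)

* On QM2 (runmqsc QM2)
DEFINE CHANNEL(QM1.QM2) CHLTYPE(RCVR) TRPTYPE(TCP)
DEFINE QLOCAL(QM3) USAGE(XMITQ)
DEFINE CHANNEL(QM2.QM3) CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('qm3host(1415)') XMITQ(QM3)
DEFINE LISTENER(LSR.QM2) TRPTYPE(TCP) PORT(1414) CONTROL(QMGR)
START LISTENER(LSR.QM2)

* On QM3 (runmqsc QM3)
DEFINE CHANNEL(QM2.QM3) CHLTYPE(RCVR) TRPTYPE(TCP)
DEFINE QLOCAL(QM3_LOCALQ)
DEFINE LISTENER(LSR.QM3) TRPTYPE(TCP) PORT(1415) CONTROL(QMGR)
START LISTENER(LSR.QM3)

* Finally start the sender channels:
* on QM1, START CHANNEL(QM1.QM2); on QM2, START CHANNEL(QM2.QM3)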








Testing:

Put a message to the RQD (remote queue definition) on queue manager QM1.


Check the queue depth of QM3_LOCALQ on queue manager QM3.
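A quick command-line way to do both, assuming the MQ sample programs are installed under /opt/mqm/samp/bin (the usual Linux location) and using the illustrative names from the sketch above:

# put a test message via the remote queue definition on QM1
echo "hello via two hops" | /opt/mqm/samp/bin/amqsput QM1_REMOTEQ QM1

# check the current depth of the destination queue on QM3
echo "DIS QL(QM3_LOCALQ) CURDEPTH" | runmqsc QM3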




If the message reaches the destination queue, it proves that with a multi-hopping setup we can communicate through multiple intermediate queue managers.


Note:

For every queue except a remote queue definition we have two status properties:

1. Open input count (IPPROCS)

2. Open output count (OPPROCS)

An application that is connected and putting messages is an "O" (output) process; an application that is processing (getting) messages is an "I" (input) process.
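You can inspect both counters with MQSC; the queue name here is illustrative:

* how many handles are open for input and for output on the queue
DIS QSTATUS(QM3_LOCALQ) TYPE(QUEUE) IPPROCS OPPROCS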


Tuesday, 23 September 2014

Implementing Server-Client communication using CCDT (client channel definition table)

WMQ Server-Client connection methods


Method-1:

Create one channel definition on the client machine and the other on the server.

SERVER: Define a server-connection channel on the machine where the WMQ Server product is installed

DEFINE CHANNEL (CHL1) CHLTYPE (SVRCONN) TRPTYPE (TCP)


CLIENT (on the machine where the WMQ client product is installed)

SET MQSERVER=ChannelName/TransportType/ConnectionName
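For example, on a Windows client (the channel name, IP address, port, queue, and queue manager names are illustrative):

SET MQSERVER=CHL1/TCP/192.168.20.1(1414)

REM verify the connection with the client sample program
amqsputc TEST.QUEUE QM1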


Method-2:

Define both a server-connection channel and a client-connection channel on the server side, then copy the resulting CHANNEL TAB file to the client machine, which uses it to connect to the server.

1. Define a SVRCONN channel on the server side

DEFINE CHANNEL (CHL) CHLTYPE (SVRCONN) TRPTYPE (TCP)

2. Define a CLIENTCONN channel on the server side

DEFINE CHANNEL (CHL) CHLTYPE (CLNTCONN) TRPTYPE (TCP) CONNAME ('xx.xx.xx.xx(port)') QMNAME(QMgr Name)


Note: a client channel definition table can contain more than one client-connection channel definition.
The channel definition file, named AMQCLCHL.TAB, is created automatically and can be found under the default data location in the /qmgrs/QMGRNAME/@ipcc directory. You are required to copy the client channel definition table to the client machine as a binary file.

CLIENT (on the machine where the WMQ client product is installed)

SET MQCHLLIB=location of the folder containing the table file (just SET the path of the folder)

SET MQCHLTAB=AMQCLCHL.TAB
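A concrete sketch on a Windows client, assuming the table was copied to C:\mqclient (that directory and the queue/queue manager names are illustrative):

SET MQCHLLIB=C:\mqclient
SET MQCHLTAB=AMQCLCHL.TAB

REM the client sample now locates the queue manager through the CCDT
amqsputc TEST.QUEUE QM1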

AMQ9777: Channel was blocked

Do the following to resolve the channel authentication issue:

SET CHLAUTH(SYSTEM.BKR.CONFIG) TYPE(ADDRESSMAP) ADDRESS(*) USERSRC(CHANNEL)
SET CHLAUTH(SYSTEM.BKR.CONFIG) TYPE(BLOCKUSER) USERLIST('nobody')

Multi-instance QMGR and Broker on RHEL5 (achieving HA)


Introduction

This article will help you to create and configure multi-instance queue managers and brokers. This article also provides step-by-step procedures to develop and test the Message Broker HA environment.

To understand Message Broker V7's multi-instance broker feature, we must first understand how the multi-instance queue manager works. This article presents a conceptual overview of MI queue manager instances and the tasks for administering them, followed by a demonstration of how we can switch over between multiple instances. After that, the article describes the concepts of MI brokers, including their setup and how to configure and test them for high availability.

Software used:

VMWare workstation 7.1.1

RHEL5 64bit iso image

WebSphere MQ v7.1

WMB v7

Multi-instance queue managers

The term MI queue manager refers to the combination of active and standby instances of the queue manager that share the queue manager data and logs. MI queue managers protect applications against the failure of queue manager processes by having one instance of the queue manager active on one server, and another instance on standby on another server, ready to take over automatically should the active instance fail. Replicating queue manager instances is an effective way to improve the availability of queue manager processes.

Examples in this article were run on a system using WebSphere MQ and Message Broker, with three servers running Red Hat Enterprise Linux 5.0.

The three servers are:


wmbmi1(192.168.20.1)
Hosts the primary active instance of the MI queue manager IBMQM and the primary active instance of the MI broker IBMBRK
wmbmi3(192.168.20.3)
Hosts the duplicate standby instance of the MI queue manager IBMQM and the duplicate standby instance of the MI broker IBMBRK
wmbmi4(192.168.20.4)
Hosts the shared network file system /mqha through NFS V4

NOTE -> the above 3 servers were set up in VMware with user root (password: password)

and user mqm (password: mqm)

Configuring the networked file system

An MI queue manager uses a networked file system to manage queue manager instances. The queue manager automates failover using a combination of file system locks and shared queue manager data and logs.

You need to ensure that the user ID (uid) and group ID (gid) of the user are the same on all the servers where MI queue manager instances reside.


STEP-1. Matching the uid and gid of the mqm user on all member servers


Adding user(mqm) and group(mqm,mqbrkrs) in RHEL5

groupadd mqm
groupadd mqbrkrs

adduser -d /home/mqm -g mqm  mqm
usermod -G mqm,mqbrkrs root
usermod -G mqbrkrs mqm

[root@wmbmi1 ~]# cat /etc/passwd |grep mqm
mqm:x:500:500::/home/mqm:/bin/bash
[root@wmbmi3 ~]#  cat /etc/passwd |grep mqm
mqm:x:500:500::/home/mqm:/bin/bash
[root@wmbmi4 ~]#  cat /etc/passwd |grep mqm
mqm:x:500:500::/home/mqm:/bin/bash



You also need to set the uid and gid of the mqm group to be identical on all the systems. Create log and data directories in a common, shared folder named /mqha. Make sure that the /mqha directory is owned by user and group mqm, and that the access permissions are set to rwx for both user and group. The commands in STEP-2, executed as root on wmbmi4, will achieve this:
 STEP- 2. Creating and setting ownership for directories under the shared folder /mqha


[root@wmbmi4 mqm]# mkdir -p /mqha/WMQ/IBMQM/data
[root@wmbmi4 mqm]# mkdir -p /mqha/WMQ/IBMQM/logs
[root@wmbmi4 mqm]# mkdir -p /mqha/WMB/IBMBRK
[root@wmbmi4 mqm]# chown -R mqm:mqm /mqha
[root@wmbmi4 mqm]# chmod -R ug+rwx /mqha

Next, you need to configure the NFS server on wmbmi4 and then start it on the same machine. Add the excerpt in STEP-3 to the /etc/exports file as root on wmbmi4:


STEP-3. Configuring the NFS server


/mqha *(rw,fsid=0,no_wdelay,sync)

or
/mqha *(rw,fsid=0,wdelay,insecure,no_subtree_check,sync,anonuid=500,anongid=500)

Start the NFS server by executing the command in STEP-4 as root on wmbmi4:


STEP- 4. Starting the NFS server


/etc/init.d/nfs start

If the NFS server is already running, refresh it using the command in STEP- 5:


STEP- 5. Refreshing the NFS server


exportfs -ra


STEP-6. Checking to make sure /mqha is exported


showmount -e wmbmi4
[root@wmbmi1 ~]# /usr/sbin/showmount -e 192.168.20.4
Export list for 192.168.20.4:
/mqha *

[root@wmbmi3 ~]# /usr/sbin/showmount -e 192.168.20.4
Export list for 192.168.20.4:
/mqha *

If the output of the command in STEP-6 does not list /mqha, you will need to mount the shared folder on the two instance-hosting servers. Execute the commands in STEP-7 as root on both servers to mount the exported file system:


STEP-7. Mounting the exported file system


  [root@wmbmi1 ~]# mount -t nfs4 -o hard,intr 192.168.20.4:/ /mqha
  [root@wmbmi3 ~]# mount -t nfs4 -o hard,intr 192.168.20.4:/ /mqha

Note: create the /mqha mount-point directory on both servers and give it the appropriate ownership and permissions.

You must run the amqmfsck command to test whether your networked file system will properly control access to queue manager data and logs (see the sketch after the list below):


  • Run amqmfsck without any options on each system to check basic locking.
  • Run amqmfsck on both WebSphere MQ systems simultaneously, using the -c option, to test writing to the directory concurrently.
  • Run amqmfsck on both WebSphere MQ systems simultaneously, using the -w option, to test waiting for and releasing a lock on the directory concurrently.
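A sketch of the three checks against the data directory created earlier; run the -c and -w variants on wmbmi1 and wmbmi3 at the same time:

  # basic locking check, run separately on each system
  amqmfsck /mqha/WMQ/IBMQM/data

  # concurrent-write check, run simultaneously on both systems
  amqmfsck -c /mqha/WMQ/IBMQM/data

  # lock wait/release check, run simultaneously on both systems
  amqmfsck -w /mqha/WMQ/IBMQM/data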

To work reliably with WebSphere MQ, a shared file system must provide:
  • Data write integrity
  • Guaranteed exclusive access to files
  • The release of locks upon failure
If the file system does not provide these features, the queue manager data and logs may become corrupted when the shared file system is used in an MI queue manager configuration. Currently, NFS V4 provides all the above mentioned facilities, so it is the file system in use in this example.

Creating a multi-instance queue manager

Start by creating the MI queue manager on the first server, wmbmi1. Log on as the user mqm and issue the command in Listing 8:


STEP-8 Creating a queue manager


[mqm@wmbmi1 ~]$ crtmqm -md /mqha/WMQ/IBMQM/data -ld /mqha/WMQ/IBMQM/logs IBMQM

Once the queue manager is created, display its properties using the command in STEP-9:


STEP- 9. Displaying the properties of the queue manager


[mqm@wmbmi1 ~]$ dspmqinf -o command IBMQM

addmqinf -s QueueManager -v Name=IBMQM -v Directory=IBMQM -v Prefix=/var/mqm -v
DataPath=/mqha/WMQ/IBMQM/data/IBMQM

Copy the output from the dspmqinf command and paste it on the command line on wmbmi3 from the console of user mqm, as shown in STEP-10:

STEP-10. Configuring wmbmi3



[mqm@wmbmi3 ~]$ addmqinf -s QueueManager -v Name=IBMQM -v Directory=IBMQM
-v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMQM/data/IBMQM
 
WebSphere MQ configuration information added.

Now display the queue managers on both servers using the dspmq command on each. The results should look like STEP-11:


STEP-11. Displaying the queue managers on both servers


[mqm@wmbmi1 ~]$ dspmq
QMNAME(IBMQM)                   STATUS(Ended immediately)

[mqm@wmbmi3 ~]$ dspmq
QMNAME(IBMQM)                   STATUS(Ended immediately)

The MI queue manager IBMQM has now been created both on the server wmbmi1 and on the server wmbmi3.

Starting and stopping a multi-instance queue manager


STEP- 12. Starting the queue manager in multi-instance mode


[mqm@wmbmi1 ~]$ strmqm -x IBMQM
There are 88 days left in the trial period for this copy of WebSphere MQ.
WebSphere MQ queue manager 'IBMQM' starting.
The queue manager is associated with installation 'Installation1'.
5 log records accessed on queue manager 'IBMQM' during the log replay phase.
Log replay for queue manager 'IBMQM' complete.
Transaction manager state recovered for queue manager 'IBMQM'.
WebSphere MQ queue manager 'IBMQM' started using V7.1.0.0.

[mqm@wmbmi3 ~]$ strmqm -x IBMQM
There are 90 days left in the trial period for this copy of WebSphere MQ.
WebSphere MQ queue manager 'IBMQM' starting.
The queue manager is associated with installation 'Installation1'.
A standby instance of queue manager 'IBMQM' has been started. The active
instance is running elsewhere.

Only two queue manager instances can run at the same time: an active instance and a standby. Should the two instances be started at the same time, WebSphere MQ has no control over which instance becomes the active instance; this is determined by the NFS server. The first instance to acquire exclusive access to the queue manager data becomes the active instance.


STEP-13. Creating and starting the queue manager listener


[mqm@wmbmi1 ~]$ runmqsc IBMQM
5724-H72 (C) Copyright IBM Corp. 1994, 2011.  ALL RIGHTS RESERVED.
Starting MQSC for queue manager IBMQM.


define listener(IBMQMLISTENER) trptype(tcp) port(1414) control(qmgr)
     1 : define listener(IBMQMLISTENER) trptype(tcp) port(1414) control(qmgr)
AMQ8626: WebSphere MQ listener created.
start  listener(IBMQMLISTENER)
     2 : start  listener(IBMQMLISTENER)
AMQ8021: Request to start WebSphere MQ listener accepted.
dis lsstatus(*)
     3 : dis lsstatus(*)
AMQ8631: Display listener status details.
   LISTENER(IBMQMLISTENER)                 STATUS(RUNNING)
   PID(4969)
end

There are two ways to stop the MI queue managers you've started. The first is to switch the active and standby instances by using the endmqm command with the -s option. If you issue the endmqm -s IBMQM command on the active instance, you will manually switch control to the standby instance. The endmqm -s command shuts down the active instance without shutting down the standby. The exclusive access lock on the queue manager data and logs is released, and the standby queue manager takes over.

[mqm@wmbmi1 ~]$ dspmq -x
QMNAME(IBMQM)                   STATUS(Running)
    INSTANCE(wmbmi1) MODE(Active)
    INSTANCE(wmbmi3) MODE(Standby)

[mqm@wmbmi3 ~]$ dspmq -x
QMNAME(IBMQM)                   STATUS(Running as standby)
    INSTANCE(wmbmi1) MODE(Active)
    INSTANCE(wmbmi3) MODE(Standby)

[mqm@wmbmi1 ~]$ endmqm -s IBMQM
Quiesce request accepted. The queue manager will stop when all outstanding work
is complete, permitting switchover to a standby instance.

[mqm@wmbmi1 ~]$ dspmq -x
QMNAME(IBMQM)                   STATUS(Running elsewhere)
    INSTANCE(wmbmi3) MODE(Active)

[mqm@wmbmi3 ~]$ dspmq -x
QMNAME(IBMQM)                   STATUS(Running)
    INSTANCE(wmbmi3) MODE(Active)

Automate Dead Letter Handler using Triggering



How to automate the Dead Letter Handler mechanism using a triggering setup.


Set up intercommunication between the two queue managers (for example, A and B).








On the destination queue manager, change the DEADQ attribute to DEADQ:

ALTER QMGR DEADQ(DEADQ)

Create a local queue and assign SYSTEM.DEAD.LETTER.QUEUE attributes.
DEFINE QLOCAL (DEADQ) LIKE(SYSTEM.DEAD.LETTER.QUEUE)

Dead Letter Handler Techniques:
Scenario 1: Configuring the DLQ Handler as a server service object.
If the receiver channel is not able to put a message on the destination queue, the message is placed on the DLQ of the remote WebSphere MQ queue manager, provided a DLQ is defined for it. The DLQ Handler waits for incoming messages on the DLQ. You specify the DLQ that accepts undelivered messages using the queue manager menu option Properties => Default dead queue. For example, the name of the DLQ for undelivered messages for the IBMQMR queue manager is specified as DEADQ.






1. Check whether the dead letter queue is present or not.





To run the DLQ handler as a service, you need to define a server service object in that queue manager. This example uses a batch file and rules table for the DLQ Handler.
2. Create dlqhandler.bat, copy the details below into it, and place the file at D:\MQ\dlqhandler.bat:
cd "C:\Program Files\IBM\WebSphere MQ\bin"
runmqdlq %1 %2 %3 %4 


The rules table should be defined with REASON and ACTION rules and WAIT(YES), which means that the DLQ Handler waits indefinitely for further messages to arrive on the DLQ.
Rules table definition (RULETBL.RUL):
INPUTQ('DEADQ') INPUTQM(A) RETRYINT(45) WAIT(YES)
REASON(MQRC_Q_FULL) ACTION(FWD) FWDQ(BLOCAL) FWDQM(B)


Listing 3. Define a server service object for the DLQ Handler.

DEFINE SERVICE(dlqhandler) +
SERVTYPE(SERVER) +
CONTROL(MANUAL) +
STARTCMD('C:\IBM\WebSphere\MQ\dlqhandler.bat') +
DESCR('dead letter queue handler as server service') +
STARTARG('DEADQ IBMQMR < C:\IBM\WebSphere\MQ\RULETBL.RUL') +
STDOUT('C:\IBM\WebSphere\MQ\Log.txt') +
STDERR('C:\IBM\WebSphere\MQ\Err.txt') +
REPLACE
The value of STARTCMD is the absolute path of the executable file dlqhandler.bat (on Windows), as shown above in Listing 3. After you have defined the server service object for the DLQ Handler, you need to make sure that the group mqm and the user MUSR_MQADMIN (on Windows) have Read and Execute privileges on the dlqhandler.bat executable file. RULETBL.RUL is the rules table file.

Then start the service and monitor its status.
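A minimal MQSC sketch of these two actions, using the service name defined in Listing 3:

* start the DLQ handler service and check that it is running
START SERVICE(dlqhandler)
DISPLAY SVSTATUS(dlqhandler) ALL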



Scenario 2: Triggering the DLQ Handler
Scenario 2 involves setting up the WebSphere MQ trigger facility so that the DLQ Handler runs and processes undelivered messages only when they arrive on the DLQ. To set up Scenario 2, define a process object processdlq within queue manager B that is triggered whenever the trigger conditions are met. Specify the absolute path of the program or process to be triggered in the Application ID field, to avoid unexpected results when handling messages on the DLQ.




Create a process definition:
Create a process definition named PROCESSDLQ and set the APPLICID as sketched below.
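In MQSC this could look like the following sketch; the APPLICID path assumes the dlq_trigger.bat location described in the next step:

* the application ID is the absolute path of the program to trigger
DEFINE PROCESS(PROCESSDLQ) APPLICID('D:\MQ\dlq_trigger.bat') REPLACE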



Create a bat file called dlq_trigger.bat with the following contents and copy it to the D:\MQ path.
cd "C:\Program Files\IBM\WebSphere MQ\bin"
runmqdlq < RULETBL.RUL




The rules table should look like this:
INPUTQ('DEADQ') INPUTQM(B) RETRYINT(45) WAIT(NO)
REASON(MQRC_Q_FULL) ACTION(FWD) FWDQ(DLQX) FWDQM(B)
Set the triggering feature ON for the DEADQ of the IBMQMR queue manager, so that whenever a message arrives on DEADQ, the DLQ Handler becomes active to handle the message based on the contents of the rules table file RULETBL.RUL. Specify the process name to be triggered as processdlq and the initiation queue as SYSTEM.DEFAULT.INITIATION.QUEUE:



DEFINE QLOCAL(DEADQ) DESCR('WebSphere MQ Default Dead Letter Queue')
INITQ(SYSTEM.DEFAULT.INITIATION.QUEUE) TRIGTYPE(EVERY) TRIGGER PROCESS('processdlq')
1 : DEFINE QLOCAL(DEADQ) DESCR('WebSphere MQ Default Dead Letter Queue')
INITQ(SYSTEM.DEFAULT.INITIATION.QUEUE) TRIGTYPE(EVERY) TRIGGER PROCESS('processdlq')
AMQ8006: WebSphere MQ queue created.
:
DISPLAY QLOCAL(DEADQ) DESCR INITQ TRIGGER TRIGTYPE PROCESS
2 : DISPLAY QLOCAL(DEADQ) DESCR INITQ TRIGGER TRIGTYPE PROCESS
AMQ8409: Display Queue details.
QUEUE(DEADQ) TYPE(QLOCAL)
DESCR(WebSphere MQ Default Dead Letter Queue)
INITQ(SYSTEM.DEFAULT.INITIATION.QUEUE)
TRIGGER PROCESS(processdlq)
TRIGTYPE(EVERY)




Listing 9. Run the trigger monitor using the command prompt.

runmqtrm -m IBMQMR -i SYSTEM.DEFAULT.INITIATION.QUEUE
Listing 10. Definition and status of the trigger monitor server service object
display svstatus('triggerdlq') all
1 : display svstatus('triggerdlq') all
AMQ8632: Display service status details.
SERVICE(triggerdlq) STATUS(RUNNING)
PID(9540) SERVTYPE(SERVER)
STARTDA(2011-12-27) STARTTI(17.11.53)
CONTROL(MANUAL)
STARTCMD(C:\Program Files\IBM\WebSphere MQ\bin\runmqtrm)
STARTARG(-m IBMQMR) STOPCMD( )
STOPARG( ) DESCR(trigger the dlq handler)
STDOUT(C:\IBM\WebSphere\MQ\Log_runmqtrm.txt)
STDERR(C:\IBM\WebSphere\MQ\Err_runmqtrm.txt)


Monday, 22 September 2014

WebSphere MQ version differences

  1. What's new in WebSphere MQ v5.3

    MQSeries was rebranded to IBM WebSphere MQ with the release of version 5.3, launched in 2002.
    Some of the new major features introduced as part of this version are summarized below.

    1. Added security using Secure Sockets Layer (SSL), the Internet standard for secure communication.
    The biggest new feature I've learned about is that channels can utilize SSL security. This feature is enabled at the channel-definition level, not from the application, so recompiling is not required! This provides a secure method to transport messages over the internet.
    2. Enhanced performance, especially for Java™ Message Service (JMS) applications, making WebSphere MQ the JMS provider of choice.
    With version 5.3, JMS is packaged with the product.
    3. Queue file size up to 2 terabytes, from 2 GB.
    The queue file limit on certain platforms is initially set at 2 GB, and on Windows it can grow beyond that automatically.
    4. Introduced support for wildcard characters in the setmqaut command.
    For example: setmqaut -n AB.* -t q +put -p fred
    5. The DISPLAY QSTATUS command was introduced in WMQ V5.3.
    This shows all of the processes that have a queue open, both application programs and channels. The channel programs keep a cache of recently used queue handles, which might keep a queue in use. Although the channel eventually releases that handle, the DISPLAY QSTATUS information shows the channel name so you can force it to end immediately if necessary.

  2. What's new in WebSphere MQ v6.0

    1. An extensible Eclipse-based configuration user interface on Microsoft™ Windows™ and Linux x86 platforms.
    The WebSphere MQ Explorer is supplied with WebSphere MQ V6.0 installations for desktop platforms, such as Microsoft Windows and Linux (x86). The WebSphere MQ Explorer is a graphical user interface (GUI) for monitoring and administering a WebSphere MQ infrastructure from a desktop workstation.
    2. 64-bit Queue Managers introduced with version 6.0

    The most significant functional difference between WebSphere MQ V5.3 and
    WebSphere MQ V6.0 on AIX 5L, Solaris™, and HP-UX is that queue managers
    on these platforms are now 64 bit.
    3. Introduced Internet Protocol Version 6 (IPv6)
    In WebSphere MQ Version 6.0, queue managers can communicate using the IPv6 protocol, in addition to the existing IPv4 protocol.
    4. Introduced GSKit for SSL in Windows  
    WebSphere MQ for Windows V6.0 uses GSKit to provide SSL functionality, in line with other platforms.
    5. Built-in publish/subscribe broker
    In WebSphere MQ V6.0, the publish/subscribe broker is supplied with the product and can be automatically started and stopped with a queue manager. The WebSphere MQ publish/subscribe broker was previously supplied separately to the WebSphere MQ product in SupportPac MA0C.
    6. Introduced functionality to use WMQ v6.0 for Web services
    WebSphere MQ V6.0 is supplied with the functionality required to allow a WebSphere MQ infrastructure to be used as a transport for Web services.
    7. Identify Application Connections to Queue Manager with DISPLAY CONN command

    WebSphere MQ V6.0 provides new functionality to identify all applications connected to a queue manager, see details of how that application has connected to the queue manager, and to see a complete list of the queues that are open for each one of those applications. This functionality is available from the Application Connections window in the WebSphere MQ Explorer. It is also available using the DISPLAY CONN and STOP CONN MQSC commands.
    8.  Listeners can be administered as MQ objects defined in Queue Manager
    In WebSphere MQ V6.0, on all platforms except z/OS where the behavior is unchanged, listeners can be administered as WebSphere MQ objects in the same way as any other object defined on a queue manager. Listeners can be administered graphically using the WebSphere MQ Explorer, or by using the DEFINE/START/STOP/DISPLAY LISTENER and DISPLAY LSSTATUS MQSC commands.
    9. Custom services started and stopped with a queue manager

    Applications specified by an administrator can be automatically started and stopped with a queue manager on all platforms except WebSphere MQ for z/OS. This is performed by defining Service WebSphere MQ objects. These objects can be administered graphically using the WebSphere MQ Explorer, or using the DEFINE/START/STOP/DISPLAY SERVICE and DISPLAY SVSTATUS MQSC commands.

  3. What's new in WebSphere MQ v7.0


    1. Integrated Publish/Subscribe engine
    WebSphere MQ V7.0 provides a new Publish/Subscribe engine that is integrated into the queue manager. This is a major enhancement because the queue manager now internally manages all the Publish/Subscribe functionality. The queue manager receives messages from publishers and subscription requests from subscribers for a range of topics, and it is responsible for queuing and routing these messages to the target subscribers.

    2. WebSphere MQ Client enhancements including read ahead, conversation sharing, and asynchronous put

     Full Duplex :- The protocol that MQ uses over the TCP/IP channel between a client and a queue manager has been converted from half duplex to full duplex. Full duplex means that information can be sent from either end of the session at any time.
    Channel Sharing :- Multiple connections established by threads of a client program can share one instance of a TCP/IP client channel rather than running their own instances. Conversation sharing is controlled by the value of the SHARECNV parameter on both SVRCONN-type channels and CLNTCONN-type channels.
    Read Ahead :- While the client program is requesting messages by making repeated calls to MQGET, the MQ client libraries read ahead on the queue and store additional non-persistent messages in memory on the client system. If the requested message is in memory, it is immediately returned to the MQGET call. Read ahead reduces the overall number of interactions on the client channel, and with it the latency of each MQGET call, which would otherwise have to communicate with the queue manager and wait for the response to come back.
    Asynchronous put :- It allows messages to be put to a queue without waiting for a status response from the queue manager containing the Completion Code and Reason Code. The status response can be obtained after the messages have all been put.
    3. Improved JMS MQ integration
    Read ahead :- The read ahead feature in WebSphere MQ V7.0 allows messages from a destination to be streamed to the WebSphere MQ classes for JMS ahead of the JMS application requesting the messages. This saves the classes from having to send a separate request to the WebSphere MQ queue manager for each message that the JMS application consumes and allows the messages to be consumed with improved performance.
    Asynchronous Put :- Prior to WebSphere MQ V7.0, the WebSphere MQ classes for JMS had to wait for a response back from the queue manager for every message sent by the JMS application. Using the enhanced asynchronous put feature in WebSphere MQ V7.0, the WebSphere MQ classes for JMS forward each message to the queue manager and do not wait for a response. Control is immediately returned to the JMS application, which can proceed to send the next message.
    Asynchronous Consume :- An application that needs to consume a message asynchronously registers a callback function for a destination. When a suitable message is available at the destination, WebSphere MQ calls the function and passes the message as a parameter. The function can then process the message asynchronously.
    Conversation sharing :- This is a new feature in WebSphere MQ V7.0. It allows a single TCP/IP socket to multiplex multiple connections, provided that the two ends of the connection belong to the same process. All Java Message Service applications use multiplexing of sessions by default, without any code modifications.
    4. Administration enhancements including new MQSC commands and MQ Explorer views
    Remote queue manager administration :- This new MQ Explorer feature allows remote instances of MQ Explorer to administer local queue managers.
    The crtmqm command has three new options, available on WebSphere MQ for
    Windows only:
    -sa: Automatic queue manager startup
    -si: Interactive (manual) queue manager startup
    -ss: Service (manual) queue manager startup
    The strmqm command has two new options, available on WebSphere MQ for
    Windows only:
    -si: Interactive (manual) queue manager startup
    -ss: Service (manual) queue manager startup 

  4. What's new in WebSphere MQ v7.1


    1. Multi-Version Installation Introduced in WebSphere MQ v7.1

    WebSphere MQ v7.1 makes it possible to install multiple versions of the product on a single system. This means that alongside WMQ v7.0.1.6, which is the minimum version required for this feature to work, you can install multiple versions of WMQ v7.1 and higher. Some of the highlights are below. We can:
    Install, update, or remove WebSphere MQ installations while a queue manager is running.
    Switch a queue manager to a different installation.
    Develop and test applications against multiple WebSphere MQ releases on a single system.
    Run applications which connect simultaneously to queue managers in different installations.
    Every WebSphere MQ installation has a name, which cannot be changed after installation.
    Each system supports a maximum of 128 installations.
    One installation may be designated as the primary installation with system-wide resources; the environment for other, non-primary installations is set up using the setmqenv command.
    2. WebSphere MQ Telemetry is introduced as an optional component during installation.
    MQ Telemetry Transport (MQTT) is a lightweight network protocol used for publish/subscribe messaging between devices. WebSphere MQ Telemetry provides small client libraries that can be embedded into smart devices, such as sensors, running on a number of different device platforms. Applications built with the clients use MQTT and the WebSphere MQ Telemetry service to publish and subscribe messages reliably with WebSphere MQ. Basic MQTT is a pub/sub architecture, while WebSphere MQ can provide either pub/sub or point-to-point message delivery.
    3. Channel authentication records for more control over connecting systems at the channel level.
    If a client attempts to connect to a queue manager using a blank user ID, we can block access to those clients using channel authentication records. We can also block access from specific IP addresses, and we can map an asserted user ID on the client to a valid user ID on the server. We can create, modify, or remove channel authentication records using the MQSC command SET CHLAUTH.
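    A short sketch of two such rules; the channel and user names are illustrative assumptions:

    * block clients that assert an administrative user ID
    SET CHLAUTH('APP.SVRCONN') TYPE(BLOCKUSER) USERLIST('*MQADMIN')
    * map connections from one address range to a known low-privilege user
    SET CHLAUTH('APP.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('192.168.20.*') MCAUSER('mqapp')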
    4.  Relocatable Installations
    On AIX®, HP-UX, Linux, and Solaris, it is now possible with WebSphere MQ v7.1 to install WebSphere MQ to a location of your choice.
    5.  Dead letter queue usage on Channels and Topics
    The USEDLQ attribute can be configured so that some functions of the queue manager use the dead-letter queue, while other functions do not. This attribute enables the configuration of selected channels to not use the dead-letter queue. USEDLQ also enables topics to be individually configured to determine whether the dead-letter queue is to be used for messages that cannot be delivered to subscribers.
    6. Dump MQ Configuration
    From WebSphere MQ v7.1, the new dump MQ configuration command (dmpmqcfg) on UNIX, Linux, and Windows systems can dump the configuration of queue managers in various scripting formats, including MQSC. The scripts produced by the command can be used to restore or rebuild a queue manager, as sketched below. The previously used SupportPac MS03 (Save Queue Manager) is replaced by this command.
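    For example, to save and later replay a queue manager's configuration (the queue manager name is illustrative):

    # dump all attributes of IBMQM in MQSC format to a file
    dmpmqcfg -m IBMQM -a > IBMQM_config.mqsc

    # replay the saved script against a rebuilt queue manager
    runmqsc IBMQM < IBMQM_config.mqsc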
    7. MQ Configuration data removed from Windows Registry
    Before version 7.1 all WebSphere® MQ configuration information, and most queue manager configuration information, was stored in the Windows registry. From version 7.1 onwards all configuration information is stored in files.
    C:\Program Files\IBM\WebSphere MQ\qmgrs\QMNAME\qm.ini
    C:\Program Files (x86)\IBM\WebSphere MQ\mqs.ini 

  5. What's new in WebSphere MQ v7.0.1


    1. Introduced Multi-Instance Queue Managers

    Enables automatic failover to a standby queue manager instance in the event of an incident or planned outage. This means that in the event of the failure of the MQ queue manager, or the machine it is running on, a standby instance will automatically take over, and client connections will be transferred to this new instance. The key technical feature in achieving this is the placement of the queue manager data, including its logs, in networked storage, which means that all instances of a queue manager can access the same data. Multiple instances of a queue manager can then be defined on different machines that all have access to the networked file system. It should be noted that the instances must all be running on the same operating system.
    2.  Automatic Client Reconnect

    Provides client-connected applications with automatic detection of failures and reconnection to alternative queue managers. With auto-reconnection, the connection is automatically restored, and the handles to open objects are all restored after a failure. It uses the list of addresses in CONNAME to find the queue manager.
    3.  Enhanced Governance
    The Service Definition wizard generates WSDL describing MQ applications. It simplifies the process of creating WMQ service definitions and is integrated into the WMQ Explorer.
    4. Enhanced SSL Security
    Supports certificate checks with the Online Certificate Status Protocol (OCSP) as well as against Certificate Revocation Lists (CRLs). OCSP determines whether a certificate has been revoked, and therefore helps to determine whether the certificate can be trusted.
    5. Enhanced .NET support
    Provides the IBM Message Service Client for .NET developers. Supports use of WebSphere MQ as a custom channel within Windows Communication Foundation.


    What's new in WebSphere MQ v7.1 

              Multi-Version Installation

    1. MQ on Unix and Windows can install multiple levels on a system 

      Relocatable to user-chosen directories

      Can have multiple copies even at the same fixpack level

    2. Permits a single copy of V7.0.1 to remain on the system, so existing systems can be migrated.

       Must be 7.0.1.6 or later

    3. V7.5.0.1 is available as both install and update images

    4. Multi-install gives lots of routes to get to latest code with minimal disruption

    Security: Channel Access Control
    Simplifying configuration for channel access, from clients and from queue managers.
    SET CHLAUTH definitions control who can use channels.
     
    Block connections from specific IP addresses
    Block connections from specific Userids
    Set MCAUSER value used for any channel coming from a specific IP address
    Set MCAUSER value used for any channel having a specific SSL or TLS DN
    Set MCAUSER value used for any channel connecting from a specific Qmgr
    Block connections claiming to be from a particular Qmgr unless from a specific IP address
    Block connections claiming to be from a particular Client Userid from a specific IP address
    Block connections presenting a particular certificate unless from a specific IP address.
     
    Easy to test rules that you define
     
    DISPLAY CHLAUTH can “execute” rules
    Rules can be applied in WARNING mode
    Not actually blocked, but errors generated



    What's new in WebSphere MQ v7.5

    Clustering – Split Transmit Queue 
     
    1. With V7.5 a queue manager can automatically define a PERMANENT-DYNAMIC queue for each CLUSSDR channel.
    –Dynamic queues based upon new model queue “SYSTEM.CLUSTER.TRANSMIT.MODEL”
    –Well known queue names: “SYSTEM.CLUSTER.TRANSMIT.<CHANNEL-NAME>”
    2. Controlled via an attribute affecting all cluster-sender channels on the queue manager.
    Manual definitions are also available:
    –Multiple queues can be defined to cover all, or a subset of, the cluster channels.
    3. Automatic and manual definitions are not mutually exclusive
    –They could be used together
    ALTER QMGR DEFCLXQ( SCTQ | CHANNEL )
    DEFINE QLOCAL(APPQMGR.CLUSTER1.XMITQ) +
        CLCHNAME(CLUSTER1.TO.APPQMGR) USAGE(XMITQ)