Global Resource Serialization for zOS – To Serialize or Not To Serialize


This article summarizes the concepts of GRS in IBM zOS mainframe systems, explaining how data sharing is performed in a multi-system environment. It also outlines the design of related products. Finally, operational aspects of the different topologies are described.


[Figure: shared DASD environment]

As data sharing is a vital issue in the IBM system z mainframe environment, there is a separate component, GRS (Global Resource Serialization), dedicated to this purpose (1). In today's multitasking and multiprocessing mainframe environments, units of work (users, transactions, tasks, programs, processes, jobs) compete for access to resources. A topology to coordinate these accesses is required; otherwise integrity exposures may occur.

Before going deeper into these topologies, let me define "resource" first: data sets (files, in IBM system z terms), records within data sets, database rows, database fields, in-storage table entries, in short any object subject to update in a multiuser, multiprocessing environment.

Please note that access scope is also important. Some tasks just read the information in a resource, while others update it. The first type of access is called "shared access", the second "exclusive access" (2).


A foreign exchange rate analogy works well at this point. Suppose the exchange rates in a core banking application are kept in records of a data set. Most transactions simply "read" a rate and proceed. When an "update" is required, all read-only accesses must be stopped (delayed, postponed), the update transaction executed, and the read-only transactions then allowed to "read" the exchange rates again. Any design flaw will make foreign exchange transactions wait forever, time out and/or collapse, or, even worse, let them execute against stale rates that are cheaper or more expensive than the current ones.
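To make the discipline concrete, here is a minimal Python sketch of shared versus exclusive access. It models only the semantics within a single process; GRS itself works very differently, and all names here are invented for illustration:

```python
import threading

class RateRecord:
    """Minimal sketch of shared/exclusive serialization around a
    hypothetical exchange-rate record."""

    def __init__(self, rate):
        self.rate = rate
        self._cond = threading.Condition()
        self._readers = 0        # transactions holding shared access
        self._writer = False     # True while an update holds exclusive access

    def acquire_shared(self):
        with self._cond:
            while self._writer:              # reads wait only for updates
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()      # a waiting update may proceed

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers > 0:
                self._cond.wait()            # an update waits for everybody
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()          # readers may resume

# A read transaction brackets its lookup with acquire_shared()/release_shared();
# the update transaction uses acquire_exclusive()/release_exclusive().
# (This naive version can starve updates; it only illustrates the semantics.)
```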

This is the easy part if you are in a single-system environment. Now suppose there is more than one operating system image, with transactions running on each system accessing the same resources. With no further precautions implemented, this topology is called a shared DASD (Direct Access Storage Device) environment. Disks are shared: when a system accesses a data set on a disk volume for update, the whole volume is "reserved", and no data set on that volume can be accessed from the other systems. When the update completes, the volume is "released" and can be used by the other systems again (3).
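A rough Python model of how coarse this serialization is (the volume names and helper function are invented for illustration):

```python
import threading

# Sketch of shared-DASD RESERVE/RELEASE semantics: the unit of
# serialization is the whole volume, not the individual data set.
volume_locks = {"VOL001": threading.Lock(), "VOL002": threading.Lock()}

def update_dataset(volume, dataset, new_content, store):
    volume_locks[volume].acquire()       # hardware RESERVE: whole volume held
    try:
        store[(volume, dataset)] = new_content
    finally:
        volume_locks[volume].release()   # RELEASE: volume usable again

# While one system updates PAYROLL.DATA on VOL001, even a read of a
# completely unrelated data set on VOL001 from another system has to wait.
```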

[Figure: GRS ring topology]

Until the 1980s this implementation was adequate for multi-system environments. As clustering activity soared among IBM mainframes to increase availability, it became insufficient. IBM mainframes were becoming members of clustered structures, and these single-system-image structures required data to be shared extensively. All members were connected to one another with high-speed CTCA (Channel To Channel Adapter) links. IBM called this the "ring" topology. When a resource was to be accessed by one of the systems, that system sent the type and name of the resource to all other systems, and every system put this type/name pair into its own queue. This was called "ENQueuing". When the transaction or task finished with the resource, the type and name were again sent to all other systems, which removed the resource from their queues. This was called "DEQueuing" (4).

As you can observe, this design has two deficiencies. First, the resource name and type have to travel to all systems before the resource can be accessed by any of them; second, the resource information has to be stored separately in every system. The first costs time, the second storage.

The type of a resource is called its QNAME (queue name, or major name), and the name of the resource its RNAME (resource name, or minor name). The message carrying both pieces of information is the RSA (Ring System Authority) message. The lists in which resources are defined are the RNLs (Resource Name Lists).
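Putting the last three paragraphs together, here is a conceptual Python sketch of ring mode. It shows both deficiencies but deliberately omits contention checking and the actual RSA mechanics; the class and method names are invented:

```python
class RingMember:
    """Sketch of ring-mode serialization: every member keeps a full copy
    of the global queue, and an ENQ must visit every system (the RSA
    message) before the resource can be used."""

    def __init__(self, name, ring):
        self.name = name
        self.ring = ring            # all members, including this one
        self.queue = set()          # (QNAME, RNAME) pairs held globally
        ring.append(self)

    def enq(self, qname, rname):
        for member in self.ring:    # deficiency 1: the request travels to all
            member.queue.add((qname, rname))   # deficiency 2: stored N times

    def deq(self, qname, rname):
        for member in self.ring:
            member.queue.discard((qname, rname))

ring = []
sysa, sysb = RingMember("SYSA", ring), RingMember("SYSB", ring)
sysa.enq("SYSDSN", "PROD.RATES.DATA")   # every member now holds the entry
```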

[Figure: GRS star topology]

IBM then took clustering seriously and called the basic cluster a "sysplex", from the phrase SYStems comPLEX. IBM later introduced devices called coupling facilities to store and transmit data much faster, integrated them into sysplexes so that members could share data, and named such clusters "parallel sysplex". It was now possible for each member system to query the resource information held in CF (Coupling Facility) structures, and to add it if it was not there. This topology was called the "star" topology, and it has neither deficiency of the ring topology: sending data is fast, and storage is not duplicated (5).
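For contrast, a matching sketch of star mode, with the CF structure reduced to a plain dictionary (this is illustrative only, not a real CF interface):

```python
class CouplingFacility:
    """Sketch of star-mode serialization: one table in a CF structure
    replaces the per-system copies of ring mode."""

    def __init__(self):
        self.lock_table = {}        # (QNAME, RNAME) -> owning system

    def enq(self, system, qname, rname):
        owner = self.lock_table.get((qname, rname))    # one fast query...
        if owner is None:
            self.lock_table[(qname, rname)] = system   # ...one stored copy
            return True
        return owner == system

    def deq(self, system, qname, rname):
        if self.lock_table.get((qname, rname)) == system:
            del self.lock_table[(qname, rname)]

cf = CouplingFacility()
cf.enq("SYSA", "SYSDSN", "PROD.RATES.DATA")   # granted
cf.enq("SYSB", "SYSDSN", "PROD.RATES.DATA")   # False: SYSA already owns it
```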

In the star topology, enqueues are faster than in the ring topology. But the ring topology is the only possible choice for non-parallel-sysplex zOS systems, even though the more recent channels, FICON CTCAs, are not supported.

Not all mainframe users implemented parallel sysplexes, however. Data centers using shared DASD and/or a base sysplex without data sharing continued to use the ring topology. In this ecosystem, the MIM (Multi Image Manager) Data Sharing for zOS product from CA (Computer Associates) also continued to be used. Customers liked the ease of implementation of the product's DASDONLY mode, which does not rely on systems connected via CTCAs. Instead there is a shared data set accessible by all systems in the MIMPLEX (MIM comPLEX); each system accesses the shared data set within a fraction of a second, adding and deleting its own resources (6).
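A conceptual sketch of that arbitration style follows, with every MIM-specific detail replaced by an invented stand-in. The file name, record layout, and interval are made up, and real MIM also serializes the control data set itself (for example with a hardware RESERVE), which this sketch omits:

```python
import json

# DASDONLY-style design: no CTCA links between members. Systems arbitrate
# through a shared control data set, modeled here as a JSON file.
CONTROL_FILE = "mimplex.ctl"
POLL_INTERVAL = 0.1   # the "fraction of a second" access cycle

def cycle(system, to_add, to_delete):
    """One pass: read the shared file, apply this system's own additions
    and deletions, and write it back for the other members to see."""
    try:
        with open(CONTROL_FILE) as f:
            table = json.load(f)
    except FileNotFoundError:
        table = {}
    for qname, rname in to_add:
        table[f"{qname}/{rname}"] = system      # register our ENQs
    for qname, rname in to_delete:
        table.pop(f"{qname}/{rname}", None)     # withdraw our DEQs
    with open(CONTROL_FILE, "w") as f:
        json.dump(table, f)

# Each member would run cycle(...) every POLL_INTERVAL seconds.
```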

Let me get back to the concepts of GRS, the under-the-hood engine of data sharing for zOS. Even if a third-party data sharing product is being used, GRS is initialized at IPL (Initial Program Load) time and is active at all times. GRS has its own storage management component, and one large storage block is allocated during initialization.
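The "one large block at initialization" idiom can be sketched as below. The cell size and class are invented, and real GRS storage management is far more elaborate; the point is only that requests are satisfied from a preallocated pool rather than by dynamic allocation:

```python
class StoragePool:
    """Carve one large block into fixed-size cells up front, then hand
    cells out and take them back without further allocation."""

    CELL_SIZE = 256   # bytes per cell

    def __init__(self, total_bytes):
        count = total_bytes // self.CELL_SIZE
        self._free = [bytearray(self.CELL_SIZE) for _ in range(count)]

    def get_cell(self):
        # No dynamic allocation on the hot path; fail fast if exhausted.
        return self._free.pop() if self._free else None

    def free_cell(self, cell):
        self._free.append(cell)

pool = StoragePool(1024 * 1024)   # the "one large block" obtained once
```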


[Figure: zOS operations]

After the GRS complex starts successfully, no operator intervention is required. If a contention or serialization problem occurs, either GRS or some other zOS component will detect it and notify the operator. Since the systems in a star complex must match the systems in the parallel sysplex, operating a star complex is even simpler and more straightforward.

Operators may display the status of the systems in the GRS complex (for example, with the DISPLAY GRS command), change resource names and types (the RNLs) dynamically, remove member systems when restarting them, and notify the other members of the restart.


(1) IBM, zOS MVS Planning: Global Resource Serialization
(2) IBM, zOS Basic Skills, zOS Concepts: Serializing the Use of Resources
(3) SHARE, The Basics of GRS: An Overview of GRS ENQ Processing
(4) Middle East Technical University, CENG 497: Introduction to Mainframe Architectures and Computing
(5) IBM, Introduction to the New Mainframe: z/OS Basics, pp. 92–97; IBM Redbooks, ABCs of z/OS System Programming, Volume 5, Chapter 4
(6) CA, MIM Resource Sharing Overview
