Tuesday, May 17, 2011

Recover TOCOPY Dataset Not Found

At our site, we use the COPY utility to take backups of the data, and the corresponding entries are recorded in the SYSCOPY catalog table. Once I came across an issue in which the TOCOPY dataset was not present.

Now what I was expecting was that the job would simply abend saying the TOCOPY dataset was not found, but it ended with MAXCC 4. In the SYSPRINT I saw that, instead of failing, RECOVER had picked up the last full image copy dataset whose entry was present in SYSCOPY. So my question was: what should I do if I want the restore to abend when the dataset is not found, instead of restoring from the last full image copy taken?

I asked the same question on some of the forums and received very good replies.

"By default, RECOVER looks for the last usable dataset in SYSCOPY. If the last one is unusable, unreachable or whatever, RECOVER goes to the next one back in time and so on, then applies the necessary log records. The RC 4 probably means the dataset it originally wanted was not found, so it had to settle for another one.
If you want to recover from a specific dataset, you can use TOCOPY. Note that this will NOT apply log records and, for an IC taken with SHRLEVEL CHANGE, may leave you with inconsistent data."
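So, to force the restore to come from one particular image copy, TOCOPY with an explicit dataset name is the way to go. A rough sketch follows, assuming the standard DSNUPROC utility procedure is available at your site; the table space and image copy dataset names are placeholders, not real ones.

//RECOVTS  EXEC DSNUPROC,SYSTEM=DSN,UID='RECVTS',UTPROC=''
//* Object and dataset names below are examples only
//SYSIN    DD *
  RECOVER TABLESPACE DBNAME.TSNAME
    TOCOPY HLQ.FULLCOPY.DBNAME.TSNAME.D110517
/*

Keep in mind, as the reply above says, that TOCOPY alone does not apply log records, so the data will be only as current as that image copy.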

"If an image copy dataset has some kind of problem, db2 will look for prior image copies that can be used. Once a valid dataset is found, the tablespace is recovered using that copy as a starting point. You may wonder 'How does db2 get the data to the place I want it?' The answer is log apply. Archived logs, incremental backups, and active logs are applied to get the data to where you want it. BTW, the newer utilities make it easy to do image copies with non-logged operations. If you do non-logged operations (e.g. load log no), then fallback recovery has some limitations. Based on your description, I don't think you had non logged operations or were lucky. BUT, my point is you must take image copies after or with non logged operations."

Hope it helps.

Cheers!!!

Tuesday, March 22, 2011

Why LEVELID REPAIR??

Hi,

In my previous post, I mentioned the error faced while running REPAIR LEVELID. But why and when is a repair of the LEVELID required?

The most common scenario where you face this issue is while restoring your data from a dump produced by ADRDSSU, or when you have copied your TS using DSN1COPY.

Let's take an example. You are the owner of a library with thousands of books and hundreds of customers. A customer comes in, borrows a book, reads it and returns it. That makes no difference to you. But what if a customer borrows a book and tampers with it, alters it or does some other mischief? You would love to have some automated process through which you can keep an eye on such customers!!! That's what DB2 does with the level ID: it makes sure that no one from outside is messing with its files and that the correct level of data is restored when a non-DB2 restoration technique is used.

DB2 creates the underlying TS and IX datasets when a TS or IX is created. Now if any offline utility (without informing DB2) modifies one of those datasets, a flag is set alerting the user that the DB2-generated datasets have been tampered with, and the object is made unavailable to the application.

Searching further in the manuals, I found:

"At table space close, or pseudo close, all of the table space pages are written back to disk. The header page on disk now reflects the last update to the level ID. At that time, the level ID from the header page is also written to the DB2 log and to SYSLGRNX. The next time the table space is opened for an SQL statement or utility, the level ID (HPGLEVEL) in the header page of the table space is compared to the level ID recorded in SYSLGRNX. If they are not equal, DB2 will not allow access to the table space until the condition is corrected. During a DB2 restart, the level ID recorded on the log is compared to the LEVEL ID in the header page.
Again, if they are not equal, DB2 will not allow access to the table space."

A mismatch in the LEVELID gives-:

DSNB232I UNEXPECTED DATA SET LEVEL ID ENCOUNTERED
LEVEL ID=X'xxxxxxxxxxxxx' EXPECTED LEVEL ID=X'xxxxxxxxxxxxx'

Now REPAIR LEVELID indicates that the level identifier is to be reset to a new identifier. We use REPAIR LEVELID to accept the use of a down-level dataset.
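For reference, a minimal REPAIR control statement to reset the level ID would look something like the sketch below, run in a normal utility job step; the database and table space names are placeholders, and the exact options should be checked against the Utilities Guide for your DB2 version.

REPAIR OBJECT
  LEVELID TABLESPACE DBNAME.TSNAME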

In brief, if you change a file or dataset behind DB2's back (ADRDSSU, DSN1COPY etc.), DB2 finds out about it and informs the user, and this is rectified by using REPAIR LEVELID. You can turn off down-level detection by updating the zparm DLDFREQ to 0.

Cheers!!!

Repair Level Id Operation Disallowed


During one of my assignments of copying data from one environment to another, I faced this issue. At our site we generally use the DSN1COPY DB2 offline utility to copy the data. After successful translation of the OBIDs and PSIDs in the dataset, we run the REPAIR LEVELID utility, in the absence of which we face "Resource Unavailable" with reason code 00C2010D (DB2 was unable to open a required data set because an unexpected level ID was encountered).
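For context, the DSN1COPY step with OBID translation looks roughly like the sketch below. All dataset names and ID pairs are made up for illustration: SYSUT1 is the source TS dataset, SYSUT2 the target, and each SYSXLAT record maps a source ID to a target ID (the DBID pair first, then the PSID pair, then the OBID pairs). Whether you also need the RESET parameter depends on whether the data is being moved to another subsystem.

//COPYSTEP EXEC PGM=DSN1COPY,PARM='OBIDXLAT,RESET'
//STEPLIB  DD DSN=XXXXXX.SDSNLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=*
//* SYSUT1 = source TS dataset, SYSUT2 = target TS dataset (names are examples)
//SYSUT1   DD DSN=SRCCAT.DSNDBC.DBNAME.TSNAME.I0001.A001,DISP=SHR
//SYSUT2   DD DSN=TGTCAT.DSNDBC.DBNAME.TSNAME.I0001.A001,DISP=OLD
//* SYSXLAT records: source-id,target-id (sample values only)
//SYSXLAT  DD *
260,280
2,2
3,3
/*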
While running the REPAIR LEVELID, I came across the issue "REPAIR LEVELID OPERATION DISALLOWED".
Displaying the corresponding database didn't help me; there were no claimers. Searching the manuals, I found possible reasons such as:
  • The data set was in use at the time the REPAIR utility was run.
  • The page set has outstanding log apply activity (an indoubt UR has interest in the page set or the page set has pages in the logical page list (LPL)).
I checked for the second option, i.e. -DIS DATABASE(DBNAME) SP(SPNAME) LPL, but to add to my frustration it showed nothing.
Then I traced back the sequence of the process in my JCL. In the prior step, DSN1COPY was used and the OBID translation was done, and then the REPAIR LEVELID was initiated. For some reason the dataset had not been closed, as I could see in CA Insight, and hence the REPAIR was not successful.
So I decided to run REPAIR LEVELID on all the TS, and it completed successfully.
So if you face a similar issue:
  • Check whether any stopped utility is holding the resource.
  • Check whether the corresponding dataset is open.
  • Rerun the job after some time.
Cheers!!!

Resource Unavailable Reason 00C90080 Abend S04E

"Resource Unavailable" is very common for any DBA, and generally it means the resource of the given "resource type" is being used by someone else, among many other possible reasons. The basic approach is to check the "Resource Type", "Reason" and "Name" in the DSNU message in your sysout, and accordingly check the status of the object in DB2 Interactive or any other third-party tool that your site uses.
You can use the command -DIS DATABASE(DBNAME) SP(SPNAME) CLAIMERS LIMIT(2000)
Depending on your needs, choose from the several options available for the DIS DATABASE command in the DB2 Command Reference.
The most common Resource Types are:
00000100         Database DB
00000200         Table space DB.SP
00000201         Index space DB.SP
00000202         Table space RD.DB.TS
00000210         Partition DB.SP.PT
00000220         Data set DSN

You can also check the LPL state or any restrictive state of the object by using
-DIS DATABASE (DBNAME) SP(SPNAME) LPL
-DIS DATABASE (DBNAME) SP(SPNAME) RES respectively.
The most common reason for a Resource Unavailable with reason code 00C90080 is that your resource is in read-only (RO) mode and you are trying to do an update operation that needs RW access. You can simply start the DB with access RW and rerun your job.
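For example (the database and space names are the same placeholders used above):

-START DATABASE(DBNAME) SPACENAM(SPNAME) ACCESS(RW)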

Cheers !!!

Wednesday, February 9, 2011

CA Insight Panel Display Commands..

Maybe you would like to use some shortcuts in CA Insight...

1) System Stats Panel ---- d SYSSTATS
2) Buffer Pool List Panel ---- d BUFLISTS
3) EDM Pool Panel ---- d EDMPOOL
4) Log Status & Alloc Panel ---- d LOGALLOC
5) Active THDS Panel ---- d THRDACTV
6) Address space Snapshot ---- d MVS
7) Virtual Storage Panel ---- d MVSSTOR
8) Address Space Enqueues ---- d MVSENQS
9) Address space Programs Panel ---- d MVSPGMS
10) Address space Files Panel ---- d MVSFILES
11) Datasets Open Panel ---- d DATASETS
12) DB2 System Parameters ---- d SYSPARMS
13) Log Activity Panel ---- d LOGSTATS
14) Dataset Drain Panel ---- d SYSDRAIN
15) SQL Counts Panel ---- d SQLTOTAL
16) List Prefetch & Parallelism ---- d SYSLPRF
17) Locks Panel ---- d SYSLOCKS
18) Subsystem Services Panel ---- d SYSSVCS
19) Remote Location List Panel ---- d SYSRMOTE
20) Binds & Authorization Check ---- d SYSBIND
21) Storage Panel ---- d SYSSTRG
22) Dynamic Prepare/Direct Row ---- d SYSDYNP
23) DB2 Routine Counts Panel ---- d SYSRTN
24) Dynamic SQL List in Cache ---- d DYNSQLST
25) Storage Utilization Panel ---- d SYSSTG
26) DB2 Command History ---- d DB2CMDS

Cheers!!!

Wednesday, February 2, 2011

DCLGEN

Declaration Generator (DCLGEN) is a command through which we can generate a table declaration: it produces an SQL DECLARE TABLE statement.
It also generates the data type declaration of each column in the assigned language, i.e. COBOL, PL/I or C.
DCLGEN can be executed online as well as in batch.
Online, it can be done through DB2I (the DB2 Interactive menu).
In batch, you can use the DCLGEN command along with its parameters after connecting to the subsystem through DSN.
For executing it in batch, you can find sample jobs in your SDSNSAMP library; search for the members DSNTEJ2C and DSNTEJ2P.
You can also execute DCLGEN directly from TSO: sign on to TSO, issue the TSO command DSN, and then issue the subcommand DCLGEN.

Following is the sample JCL to execute DCLGEN-:

//DCLGENJ JOB (*******),'DCLGEN (DB2)',
//       TIME=(0,20),NOTIFY=&SYSUID,MSGCLASS=X,REGION=0M,
//       USER=XXXXX,PASSWORD=
//*
//* STEP 1: CREATE COPY FILE TABLE DESCRIPTIONS (DCLGEN)
//PH02CS01 EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB  DD DSN=XXXXXX.SDSNLOAD,DISP=SHR
//         DD DISP=SHR,DSN=XXXXXX.SDSNEXIT
//SYSTSPRT DD SYSOUT=*,DCB=(RECFM=F,LRECL=200,BLKSIZE=200)
//SYSUDUMP DD SYSOUT=*
//SYSTSIN  DD *
  DSN SYSTEM(DSN)
  DCLGEN TABLE(XXXXXX.TEST) +
         LIBRARY('XXX.XXXXX.XXX.XXXXX(XXXX)') +
         ACTION(ADD) APOST +
         LANGUAGE(IBMCOB) +
         STRUCTURE(POPTVAL)
  END
//*
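Just to show the shape of what DCLGEN produces, for a hypothetical two-column table (the column names and types below are invented purely for illustration), the member written into the LIBRARY dataset would contain the DECLARE TABLE statement followed by the host structure named in STRUCTURE, roughly like this:

EXEC SQL DECLARE XXXXXX.TEST TABLE
( EMP_ID                        INTEGER NOT NULL,
  EMP_NAME                      VARCHAR(30)
) END-EXEC.

01  POPTVAL.
    10 EMP-ID                PIC S9(9) USAGE COMP.
    10 EMP-NAME.
       49 EMP-NAME-LEN       PIC S9(4) USAGE COMP.
       49 EMP-NAME-TEXT      PIC X(30).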
 
 
Cheers!!!!
 
 

Wednesday, January 26, 2011

Online Reorg DB2

The REORG utility's behavior with SHRLEVEL CHANGE and REFERENCE is very interesting.
Knowing how REORG helps in maintaining consistency in a DB2 system is very important.

REORG is like running the defragmenter in your Windows OS: it brings the data back into clustering order and reorganises the TS and IX so that fragmented space can be reclaimed and access is optimized for fast retrieval.
Data access during REORG can be: 1) NONE 2) REFERENCE 3) CHANGE

REORG with SHRLEVEL REFERENCE -: Read-only access to the dataset is allowed. Any other application can read the data from that TS but cannot make any changes... How is it done?

DB2 creates a shadow copy of the original VSAM dataset (TS) and allows the other applications to read the data from the original dataset. DB2 reloads the reorganised data into the shadow copy datasets. At the end of the reorganization, DB2 switches future access of the applications from the original dataset to the shadow copy.
During the switch, applications do not hold any access on either dataset.
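A minimal control statement for a SHRLEVEL REFERENCE reorg might look like the sketch below; the object names are placeholders, and the inline image copy is assumed to go to a SYSCOPY DD or template.

REORG TABLESPACE DBNAME.TSNAME
  SHRLEVEL REFERENCE
  COPYDDN(SYSCOPY)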

REORG with SHRLEVEL CHANGE -: DB2 reloads the reorganised data into the shadow copy datasets. With SHRLEVEL CHANGE comes the concept of the mapping table.

When data is reloaded into the shadow dataset, the RIDs of the rows change from those in the original dataset. To keep track of those RIDs, i.e. to map source to target (shadow copy), DB2 maintains a mapping table.
The structure is like -:
CREATE TABLE mapping-table-name1
(TYPE CHAR(1) NOT NULL,
SOURCE_RID CHAR(5) NOT NULL,
TARGET_XRID CHAR(9) NOT NULL,
LRSN CHAR(6) NOT NULL);


CREATE UNIQUE INDEX index-name1 ON mapping-table-name1
(SOURCE_RID ASC, TYPE, TARGET_XRID, LRSN);
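With the mapping table and its index in place, the REORG control statement names it through the MAPPINGTABLE keyword. A rough sketch (the object names and the mapping table name MAPTB01 are placeholders; the inline image copy is again assumed to go to a SYSCOPY DD or template):

REORG TABLESPACE DBNAME.TSNAME
  SHRLEVEL CHANGE
  MAPPINGTABLE MAPTB01
  COPYDDN(SYSCOPY)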

For more understanding -:
1) The original tablespace is unloaded and the shadow copy is loaded. The indexes are built as the shadow copy is loaded.
2) The mapping table is maintained to map the position of each row (RID) in the original TS to its position in the shadow copy.
3) As the pages are loaded, they are also written to the image copy dataset.
4) During all this time, reading and updating are allowed and the changes are logged.
5) When the shadow copy is loaded, all the logged changes are applied to the shadow copy to bring it up to date. Information from the mapping table is used to update the correct rows.
6) Pages changed since the original image copy was taken are added to the copy dataset.
7) The old tablespace and indexspace datasets are then renamed and the shadow copies are given the original names.
8) Finally the old objects are deleted.

Note -: If the datasets are not DB2-managed, then you need to preallocate the shadow datasets before you execute REORG.
When preallocating the datasets, define them as LINEAR and use SHAREOPTIONS(3,3).
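A rough IDCAMS sketch for one such shadow dataset follows; the catalog alias, object names, instance qualifier and space values are placeholders, and the actual shadow dataset name depends on the instance qualifier currently in use by the object.

//DEFSHAD  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//* Names and allocation values below are examples only
//SYSIN    DD *
  DEFINE CLUSTER -
    (NAME(VCAT.DSNDBC.DBNAME.TSNAME.J0001.A001) -
     LINEAR -
     REUSE -
     CYLINDERS(100 100) -
     SHAREOPTIONS(3 3))
/*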
Cheers!!