This is a severe error message issued while compiling a PL/I program
Explanation:
The back end of the compiler either could not be found, or it detected an error from which it could not recover. The latter problem can sometimes occur on Intel if your disk is short of free space, and on zSeries if your job's region size is not large enough. Otherwise, report the problem to IBM.
Thursday, December 27, 2007
Wednesday, December 26, 2007
REXX - TRACE command
TRACE is an interactive debugging facility
Syntax : TRACE
A - All => Traces all clauses before execution
C - Commands => Traces all commands before execution
E - Error => Traces any command resulting in an error
F - Failure => Traces any command resulting in a failure. This is the same as 'Normal'.
I - Intermediates => Traces all clauses, along with the intermediate results during evaluation of expressions and substituted names
L - Labels => Traces any labels passed during execution
N - Normal => Traces any command resulting in a negative return code after execution. This is the default setting
O - Off => Traces nothing
R - Results => Traces all clauses before execution. Displays the final results of evaluating an expression
S - Scan => Traces all remaining clauses in the data without processing them
Prefix:
! - inhibits host command execution.
Ex: TRACE !C => Causes commands to be traced but not processed. As each command is bypassed, the REXX RC is set to zero. You can switch off command inhibition, when it is in effect, by issuing a TRACE instruction with a '!' prefix.
? - Controls interactive debug.
During normal execution, a TRACE option with a prefix of ? causes interactive debug to be switched on.
How do I test an ISPF panel?
"Perform Dialog Testing" ==> "Dialog Services" ==> Execute the following commands
LIBDEF ISPPLIB DATASET ID ('PDS in which the panel is present')
ISPEXEC DISPLAY PANEL(Panel name)
ISRROUTE - ISPF command
The command is used in panel design to invoke the SELECT service from a pull-down menu.
ACTION RUN (ISRROUTE) PARM('SELECT CMD(panel name)')
Saturday, November 24, 2007
DB2 datatype - DATE
Date is a three-part value (year, month, and day) designating a point in time using the Gregorian calendar.
Range:
Year - 0001 to 9999
Month - 1 to 12
Day - 1 to 28, 29, 30, or 31, depending on the month (and, for February, the year)
The internal representation of a date is a string of 4 bytes, each holding two packed decimal digits:
The first 2 bytes represent the year
The 3rd byte represents the month
The 4th byte represents the day
The length of a DATE column as described in the catalog is the internal length (4 bytes). The length of a DATE column as described in the SQLDA is the external length, which is 10 bytes unless a date-exit routine was specified when your DB2 system was installed; in that case, the string format of a date can be up to 255 bytes long. Accordingly, DCLGEN defines fixed-length string variables for DATE columns with a length equal to the value of the field LOCAL DATE LENGTH on installation panel DSNTIP4, or a length of 10 bytes if no value was specified for that field.
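The 4-byte packed-decimal layout can be sketched in Python. This is illustration only: `decode_db2_date` is an invented helper for this post, not a DB2 API.

```python
def decode_db2_date(raw: bytes) -> str:
    """Decode DB2's internal DATE format: 4 bytes, each holding two
    packed-decimal digits (yy yy mm dd), into the string yyyy-mm-dd."""
    if len(raw) != 4:
        raise ValueError("internal DATE is exactly 4 bytes")
    # High and low nibble of each byte are one decimal digit each
    digits = "".join(f"{b >> 4}{b & 0x0F}" for b in raw)
    return f"{digits[0:4]}-{digits[4:6]}-{digits[6:8]}"

# X'20071227' -> '2007-12-27'
print(decode_db2_date(bytes([0x20, 0x07, 0x12, 0x27])))
```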
Wednesday, October 17, 2007
RECFM for Print files
FBA stands for Fixed Blocked ANSI. This means that the dataset has fixed-length blocked records and contains an ANSI printer carriage-control character in the first position of each record. In MVS, we write the output report to a file of format FBA and record length 133, then use the utility IEBGENER to print the report.
Other record formats used for output-print datasets are FBM (Fixed Blocked Machine: has machine print-control codes), VBA (Variable Blocked ANSI) and VBM (Variable Blocked Machine). The one-byte field in the first position of each record tells the output device how to handle the output. The types of values it can contain are:
*Page feed / Form feed: Print after advancing to top-of-form
*Single space: Print after advancing one line
*Double space: Print after advancing 2 lines
*N spaces (N= 1 to 12): Print after advancing N lines
*Overstrike: Print after advancing no lines
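Building such a record is simple enough to sketch in Python; `ANSI_CC` and `fba_record` are names invented for this illustration.

```python
# ANSI carriage-control characters placed in byte 1 of an FBA record
ANSI_CC = {
    "single": " ",      # print after advancing one line
    "double": "0",      # print after advancing two lines
    "triple": "-",      # print after advancing three lines
    "overstrike": "+",  # print after advancing no lines
    "page": "1",        # skip to channel 1 (top-of-form)
}

def fba_record(text: str, spacing: str = "single", lrecl: int = 133) -> str:
    """Build one fixed-length print record: control byte + data,
    truncated or blank-padded to the record length."""
    rec = ANSI_CC[spacing] + text
    return rec[:lrecl].ljust(lrecl)

header = fba_record("SALES REPORT", spacing="page")  # starts a new page
```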
Wednesday, August 22, 2007
COND=ONLY in JCL
//stepname EXEC PGM=x,COND=ONLY
The step is executed only if one or more of the preceding steps abnormally terminated. That is, the step is bypassed unless a preceding step abends.
COND=EVEN in JCL
//Stepname EXEC PGM=x,COND=EVEN
The step is executed even if one or more of the preceding steps abnormally terminated. That is, the step is always executed, whether or not a preceding step abends.
Friday, August 10, 2007
SQLCODE = 562
SQLCODE = 562, WARNING: A GRANT OF A PRIVILEGE WAS IGNORED BECAUSE THE GRANTEE ALREADY HAS THE PRIVILEGE FROM THE GRANTOR
Tuesday, August 07, 2007
Handling NULL in the program
Embedded SQL applications must prepare for receiving null values by associating a null-indicator variable with any host variable that can receive a null. An indicator variable is shared by both the database manager and the host application; therefore, it must be declared in the application as a host variable corresponding to the SQL data type SMALLINT.
A null-indicator variable is placed in an SQL statement immediately after the host variable, and is prefixed with a colon. A space can separate the null-indicator variable from the host variable, but is not required. However, do not put a comma between the host variable and the null-indicator variable. You can also specify a null-indicator variable by using the optional INDICATOR keyword, which you place between the host variable and its null indicator.
The null-indicator variable is examined for a negative value. If the value is not negative, the application can use the returned value of the host variable. If the value is negative, the fetched value is null and the host variable should not be used. The database manager does not change the value of the host variable in this case.
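The negative-indicator convention can be mirrored in a small Python sketch; `apply_null_indicators` is an invented name for illustration, not a database API.

```python
def apply_null_indicators(row, indicators):
    """Apply the embedded-SQL null-indicator convention: a negative
    indicator means the fetched value is SQL NULL, so the host-variable
    content is undefined and must not be used."""
    return [None if ind < 0 else value for value, ind in zip(row, indicators)]

# A fetched row where the second column came back NULL (indicator -1);
# whatever was left in that host variable is discarded.
row = ["SMITH", "stale-host-variable-content"]
print(apply_null_indicators(row, [0, -1]))
```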
Friday, July 27, 2007
Packed datasets
Packed data is data in which ISPF has replaced any repeating characters with a sequence showing how many times the character is repeated. Packing data allows you to use direct access storage devices (DASD) more efficiently because the stored data occupies less space than it would otherwise.
If the source data that you want to process is packed, it must be expanded before it can be successfully processed by any of the language processors. The expansion method you should use depends on whether your source data is:
=>A sequential dataset that contains expansion triggers:
An expansion trigger is a keyword that tells ISPF to expand additional data before copying, including, or imbedding it in the source data. ISPF does not recognize expansion triggers in data stored as a sequential dataset. Therefore, for these types of datasets, either:
1. Manually expand the data: edit the source data and enter PACK OFF; or
2. Select the Source Data Packed option before calling one of the language processors.
=>Either of the following:
A. A Sequential dataset that does not contain expansion triggers
B. Any member of a partitioned dataset, either with or without expansion triggers.
ISPF does recognize expansion triggers in data stored as members of a partitioned data set. Also, if your source data does not contain expansion triggers, you do not have to be concerned with them. Therefore, for these two types of data, select the Source Data Packed option before calling one of the language processors.
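The idea behind packing (replacing a run of a repeated character with a count) is essentially run-length encoding. A minimal Python sketch of the concept follows; this is not ISPF's actual on-disk PACK format.

```python
from itertools import groupby

def rle_pack(data: str):
    """Collapse each run of a repeated character to a (char, run_length) pair."""
    return [(ch, sum(1 for _ in grp)) for ch, grp in groupby(data)]

def rle_unpack(runs):
    """Expand (char, run_length) pairs back to the original string."""
    return "".join(ch * n for ch, n in runs)

# Long runs of blanks or asterisks, common in source and reports, shrink well
packed = rle_pack("TOTAL*****          42")
assert rle_unpack(packed) == "TOTAL*****          42"
```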
Thursday, July 26, 2007
Abend S013
Could be one of the following reasons:
-> Conflicting or incomplete DCB parameters, such as BLKSIZE not a multiple of LRECL, or a missing SYSIN DD
-> Tried to create a PDS without allocating directory blocks
-> Member name specified in the DD not found
-> No directory-allocation subparameter in the DD
-> Opened an output dataset as input
-> Track overflow or update attempted, but not supported by the OS
-> May be a record-length or OPEN statement error
Tuesday, July 17, 2007
SQLCODE = -679
Error: The object cannot be created because a drop is pending on the object. This can occur when a CREATE INDEX statement is issued immediately after a DROP INDEX statement.
Drop index
create index on
(col1 Asc, Col2 Asc);
Commit work;
Monday, July 02, 2007
SQL : Error handling : WHENEVER stmt
The WHENEVER statement specifies the action to be taken when a specified exception condition occurs.
This statement can only be embedded in an application program. It is not an executable statement, and it must not be specified in Java or REXX.
Syntax:
WHENEVER { NOT FOUND | SQLERROR | SQLWARNING } { CONTINUE | GOTO host-label }
Default:
WHENEVER { NOT FOUND | SQLERROR | SQLWARNING } CONTINUE
Thursday, June 07, 2007
PLI: ENVIRONMENT(CTLASA)
The CTLASA option specifies American National Standard vertical carriage-positioning characters or American National Standard pocket-selection characters (Level 1). The CTL360 option specifies IBM machine-code control characters.
The American National Standard control characters:
(blank) Space 1 line before printing
0 Space 2 lines before printing
- Space 3 lines before printing
+ Suppress space before printing
1 Skip to channel 1
2 Skip to channel 2
.
.
.
9 Skip to channel 9
A Skip to channel 10
B Skip to channel 11
C Skip to channel 12
V Select stacker 1
W Select stacker 2
Thursday, May 10, 2007
SQLCODE = -401 SQLSTATE= 42818
THE OPERANDS OF AN ARITHMETIC OR COMPARISON OPERATION ARE NOT COMPARABLE
Thursday, May 03, 2007
SQLCODE = -181 SQLSTATE=22007
THE STRING REPRESENTATION OF A DATETIME VALUE IS NOT A VALID DATETIME VALUE
Bad data in Date/Time/Timestamp
A value supplied for DATE, TIME, or TIMESTAMP is invalid
Ex: 0000-00-00
Thursday, April 19, 2007
SQLCODE = -180 SQLSTATE= 22007
Bad data in Date/time/timestamp
The string representation of a DATE, TIME, or TIMESTAMP value is invalid.
Tuesday, April 17, 2007
DB2: How does DB2 store NULL physically?
As an extra one-byte prefix to the column value.
The null prefix is X'00' if the column contains a value other than NULL.
The null prefix is X'FF' if the column value is NULL.
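That layout can be sketched in Python. This is only an illustration of the prefix convention: `decode_nullable_column` is an invented helper, and a real DB2 data page is far more involved.

```python
def decode_nullable_column(stored: bytes):
    """Interpret the one-byte null prefix on a nullable column:
    X'FF' means the column is NULL, X'00' means the value follows."""
    prefix, value = stored[0], stored[1:]
    if prefix == 0xFF:
        return None
    if prefix == 0x00:
        return value
    raise ValueError("unexpected null-prefix byte")

print(decode_nullable_column(b"\x00ABC"))          # value present
print(decode_nullable_column(b"\xff\x00\x00\x00"))  # NULL
```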
Common abends
S322
Timed out. Try changing the job class.
S806
Load module not found. Check the library specified in the JOBLIB.
S913
Insufficient authority. Check whether you have the required access to the dataset.
S0C4
Storage-related problem. Check your Linkage Section table definitions and FD section.
Wednesday, January 24, 2007
What is Unicode?
----------------------------------------
an extract from an article by Sarah Ellis
-----------------------------------------
In this era of globalization, the ability of systems to handle data from
around the world is becoming paramount. However, workstations and servers can use
different code pages, depending on the native language of the workstation user. In
effect, the workstation and servers are speaking “different languages”, and this makes
communication difficult.
For example, if a workstation inserts some data into a DB2 for z/OS system, the data
is converted from ASCII to EBCDIC using a conversion table, which maps the code
points from the source (ASCII) CCSID to the target (EBCDIC) CCSID.
In addition to a conversion cost, a more serious issue is the potential loss of
characters. For example, if a Japanese workstation were inserting data into a
European DB2 system, many characters would not have a code point in the CCSID used
by DB2. Either the characters must be lost (enforced subset conversions) or DB2 must
map them to code points that are not already used (a round trip conversion). The
problem with the second option is that another system reading the data will not know
about this mapping and may not read the data correctly, perhaps mapping the
characters to some of its own characters.
The design objective of Unicode is to avoid these issues by having a single code page
that has a code point mapping for every character in the world. The Unicode
Consortium has devised a number of Universal Transformation Formats (UTFs) which
include unique code points for most current and historical languages, mathematical and
scientific symbols, and can be extended as new characters emerge. These UTFs have
become widely accepted, being used by technologies such as Java, XML and LDAP.
Many consider Unicode as the foundation for globalization of data and it is becoming a
strategic direction for many companies. For example, Microsoft has adopted Unicode
with products such as Word, storing data in Unicode and providing Unicode APIs
for ODBC.
Note:
Unicode only affects character data, or numeric data stored as characters,
i.e. CHAR, VARCHAR, GRAPHIC, VARGRAPHIC, CLOB, DBCLOB. Numeric data stored
as binary, packed decimal, or floating point is not affected.
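The "lost characters" problem is easy to demonstrate in Python, using Latin-1 as a stand-in for a single-byte European code page: it cannot represent Japanese text, while UTF-8 round-trips it losslessly.

```python
s = "価格"  # Japanese for "price"

# UTF-8 can round-trip any Unicode text without loss
assert s.encode("utf-8").decode("utf-8") == s

# A single-byte European code page has no code points for these
# characters; an enforced-subset conversion replaces them (here with '?')
assert s.encode("latin-1", errors="replace") == b"??"
```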
What is an encoding scheme?
An encoding scheme is a collection of code pages (CCSIDs) for various languages used on a particular computing platform. For example, the EBCDIC encoding scheme is used on z/OS and iSeries systems. The ASCII encoding scheme is used on Intel-based (Windows) systems and Unix-based systems.
What is CCSID - Coded Character Set IDentifier?
A CCSID is a number that identifies a particular code page. For example, North Americans use the US English code page denoted by CCSID 037. Germans use CCSID 273, which includes code points for characters specific to their language, such as letters with umlauts. Other examples are 1252, an ASCII CCSID used on the Windows platform, and 1208, which represents the Unicode transformation format UTF-8.
What are Code points?
All data is stored as bytes. For example, on our DB2 for z/OS systems (which use EBCDIC), the character 'a' is stored as X'81', the character 'A' is stored as X'C1', and the character representation of the number '1' is X'F1'.
These byte representations for characters are called Code points.
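Python ships a codec for CCSID 037 (named "cp037"), so these code points can be checked directly:

```python
# CCSID 037 (US EBCDIC) is available in Python's stdlib as "cp037"
assert "a".encode("cp037") == b"\x81"
assert "A".encode("cp037") == b"\xc1"
assert "1".encode("cp037") == b"\xf1"

# Decoding maps the code points back to characters
assert b"\xc1\x81\xf1".decode("cp037") == "Aa1"
```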
Sunday, January 07, 2007