Sunday 6 October 2013

6.1 Teradata Parallel Transporter - DDL operator

DDL operator capabilities in brief:

  • The DDL operator is neither a consumer operator nor a producer operator: it neither reads from nor writes to the data stream.

  • The DDL operator is normally used to perform setup operations prior to executing load or export tasks.







DDL Operator Definition:

The following is an example of a DDL operator definition:

DEFINE OPERATOR DDL_OPERATOR
DESCRIPTION 'Teradata PT DDL OPERATOR Sample'
TYPE DDL
ATTRIBUTES
(
VARCHAR TdpId,
VARCHAR UserName,
VARCHAR UserPassword,
VARCHAR ErrorList,
VARCHAR PrivateLogName
);

The first 3 attributes are security attributes.
PrivateLogName specifies the name of the private log maintained by the Teradata PT Logger inside the public log.
ErrorList: This attribute specifies the error conditions to ignore.
We can use this when the script drops the ET (error) and UV tables. If the tables exist they are dropped, but if a table does not exist the job fails with error code 3807. If we wish to ignore this error code we can write the attribute as:
VARCHAR ErrorList = '3807'
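
For instance, a job step could pair this attribute with DROP statements for work tables that may or may not exist. A minimal sketch, assuming a DDL_OPERATOR defined with VARCHAR ErrorList = '3807' (the database name is hypothetical):

APPLY
('DROP TABLE MyDb.ET_Trans;'),
('DROP TABLE MyDb.UV_Trans;')
TO OPERATOR (DDL_OPERATOR);

If either table does not exist, error 3807 is ignored and the job continues.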








Supported SQL Statements:

The DDL operator supports almost every SQL statement, except statements that return data to the operator (such as SELECT, HELP, and SHOW) or that require the operator to send data back to the Teradata Database.
The DDL operator also does not support the USING clause with the INSERT, UPDATE, and DELETE DML SQL statements.













Specifying DDL Statements in the Teradata PT APPLY Statement:
The following examples show how to specify DDL statements in the APPLY statement:

• One SQL statement per group

APPLY
'ddl stmt1',
'ddl stmt2',
.........
'ddl stmt3'
TO OPERATOR (operator specification)

• Multiple SQL statements in a group, but only one group

APPLY
('ddl stmt1','ddl stmt2', ... ,'ddl stmtn')
TO OPERATOR (operator specification)

• Multiple SQL statements per group, and multiple groups

APPLY
('ddl stmt11','ddl stmt12', ... ,'ddl stmt1n'),
.
.
.
('ddl stmtx1','ddl stmtx2', ... ,'ddl stmtxm')
TO OPERATOR (operator specification)

If more than one statement is specified in a group, the DDL operator combines all of them into a single multistatement request and sends it to the Teradata Database as one implicit transaction. This means that any statement failure or error rolls back the entire transaction.


We can group several SQL statements together to perform a desired logical database task and still take advantage of the automatic rollback feature of the Teradata Database in case any statement fails or any error occurs during the transaction.

We can also have one SQL statement per group if we wish to execute each statement in its own transaction.

However, note that when multiple SQL statements are specified as part of one group in the APPLY statement, and the group contains a DDL statement, that DDL statement must be the last statement in the implicit transaction, which means the last statement in the group.

Therefore, given that the information in parentheses (below) represents a group, the validity
of the statements can be determined as follows:
• Group 1: (DDL) is valid.
• Group 2: (DDL, DDL) is invalid because only one DDL statement is allowed.
• Group 3: (DML, DML, DDL) is valid.
• Group 4: (DML, DML, DML) is valid, even though the group contains no DDL statement.
• Group 5: (DML, DDL, DML) is invalid because the DDL statement is not the last statement in the group.
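
As an illustration, a valid mixed group like Group 3 might look like the following sketch (the database and table names are hypothetical):

APPLY
('DELETE FROM MyDb.Stage_Trans;',
 'INSERT INTO MyDb.Stage_Trans SELECT * FROM MyDb.Src_Trans;',
 'DROP TABLE MyDb.Old_Trans;')
TO OPERATOR (DDL_OPERATOR);

Here the two DML statements and the DDL statement run as one implicit transaction, and the DDL statement is the last statement in the group.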









Checkpointing with the DDL operator:

The SQL statements are executed by groups in the order they are specified in the APPLY statement.
Hence the DDL operator can take a checkpoint after each group is executed.
A checkpoint, with respect to the DDL operator, marks the last group of DDL/DML SQL statements to execute successfully.

The DDL operator restarts at the beginning of the group of SQL statements whose execution
is interrupted by an abnormal termination. If the interrupted group has only one SQL
statement, the DDL operator restarts at that statement.

If the last request was successful prior to a restart, the operator can resume at the next request
in line. If the last request failed prior to a restart, then the operator resumes at the failed
request.

Ex:

APPLY
('DROP TABLE ' || @jobvar_wrk_dbname || '.ET_Trans;'),
('DROP TABLE ' || @jobvar_wrk_dbname || '.UV_Trans;'),
('DROP TABLE ' || @jobvar_tgt_dbname || '.Trans;'),
('CREATE TABLE ' || @jobvar_tgt_dbname
|| '.Trans (Account_Number VARCHAR(50),
Trans_Number VARCHAR(50),
Trans_Date VARCHAR(50),
Trans_ID VARCHAR(50),
Trans_Amount VARCHAR(50));')
TO OPERATOR (DDL_OPERATOR);

Each of the statements above is in its own group.
If, say, the statement that drops the UV table fails for some reason, then when we restart the job it resumes at that failed group and hence drops the UV table.




5.3 Teradata Parallel Transporter - Data Connector Operator Example (Producer and Consumer)


Example 1: The following example uses several operators: DDL, LOAD, EXPORT, and DATA CONNECTOR.
For this lesson we concentrate only on the Data Connector portions of the script.
In this script we read data from a fixed-width input file and load the data into a Teradata table. Next we export from this table and write the data to a delimited output file.

DEFINE JOB EXPORT_EMPLOYEE_TABLE_TO_FILE
DESCRIPTION 'EXPORT SAMPLE EMPLOYEE TABLE TO A FILE'
(

/*****************************/
DEFINE SCHEMA EMPLOYEE_SCHEMA
DESCRIPTION 'SAMPLE EMPLOYEE SCHEMA'
(
EMP_ID   CHAR(10),
EMP_NAME CHAR(10)
);
/*****************************/
--The schema EMPLOYEE_SCHEMA is used to read the fixed-width input file.
--The DATACONNECTOR PRODUCER operator uses Format 'TEXT', and
--so all fields in the schema must be defined as CHAR.

/*****************************/
DEFINE SCHEMA EMPLOYEE_SCHEMA1
DESCRIPTION 'SAMPLE EMPLOYEE SCHEMA'
(
EMP_ID   VARCHAR(10),
EMP_NAME VARCHAR(10)
);

/*****************************/
--This schema is used by the Data Connector consumer to write to a delimited file.
--When the format is specified as 'DELIMITED', the columns must be
--defined as VARCHAR.

/*****************************/
DEFINE OPERATOR FILE_READER()
DESCRIPTION 'TERADATA PARALLEL TRANSPORTER DATA CONNECTOR OPERATOR'
TYPE DATACONNECTOR PRODUCER
SCHEMA EMPLOYEE_SCHEMA
ATTRIBUTES
(
VARCHAR PrivateLogName    = 'file_reader_private.log',
VARCHAR FileName          = 'input_fixed_w_file.txt',
VARCHAR IndicatorMode     = 'N',
VARCHAR OpenMode          = 'Read',
VARCHAR Format            = 'text',
VARCHAR DirectoryPath = '/home/sukul/tpttest',
VARCHAR ArchiveDirectoryPath = '/home/sukul/tpttest/archive',
INTEGER SkipRows = 1,
VARCHAR SkipRowsEveryFile = 'N'
);
/*****************************/
-- The input file input_fixed_w_file.txt is opened in read mode, as indicated by the OpenMode attribute.
-- The directory in which this file must exist is '/home/sukul/tpttest', given by the DirectoryPath attribute.
-- Once the file is read it is moved to the directory /home/sukul/tpttest/archive, indicated by the ArchiveDirectoryPath attribute.
-- The SkipRows attribute indicates that the 1st record of the input file will be ignored.
-- Each attribute must be separated by a comma (,).
-- Note that the name of the operator is followed by ().
-- We can pass any of these attribute values through a job variables file, as sketched below.
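
For example, the directory and file name could come from a job variables file supplied to tbuild with the -v option. A minimal sketch, with hypothetical variable names and file name:

/* Contents of jobvars.txt */
FileReaderDirectoryPath = '/home/sukul/tpttest'
,FileReaderFileName = 'input_fixed_w_file.txt'

/* In the operator definition, reference the variables with @ */
VARCHAR DirectoryPath = @FileReaderDirectoryPath,
VARCHAR FileName = @FileReaderFileName,

/* Run the job with the variables file */
tbuild -f tpttest2 -v jobvars.txt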



/*****************************/
DEFINE OPERATOR LOAD_OPERATOR
TYPE LOAD
SCHEMA *
ATTRIBUTES
(
VARCHAR PrivateLogName = 'load_log',
VARCHAR TdpId = 'prod.tdipid',
VARCHAR UserName = 'sukul',
VARCHAR UserPassword = 'mad22dam',
VARCHAR TargetTable = 'SUPPORT_DATABASE.SOURCE_EMP_TABLE',
VARCHAR LogTable = 'SUPPORT_DATABASE.LG_Trans',
VARCHAR ErrorTable1 = 'SUPPORT_DATABASE.ET_Trans',
VARCHAR ErrorTable2 = 'SUPPORT_DATABASE.UV_Trans'
);
/*****************************/

/*****************************/
DEFINE OPERATOR DDL_OPERATOR()
DESCRIPTION 'TERADATA PARALLEL TRANSPORTER DDL OPERATOR'
TYPE DDL
ATTRIBUTES
(
VARCHAR PrivateLogName = 'DDL_log',
VARCHAR TdpId          = 'prod.tdipid',
VARCHAR UserName       = 'sukul',
VARCHAR UserPassword   = 'mad22dam',
VARCHAR ErrorList      = '3807'
);
/*****************************/


/*****************************/
DEFINE OPERATOR FILE_WRITER()
DESCRIPTION 'TERADATA PARALLEL TRANSPORTER DATA CONNECTOR OPERATOR'
TYPE DATACONNECTOR CONSUMER
SCHEMA *
ATTRIBUTES
(
VARCHAR PrivateLogName    = 'file_writer_privatelog',
VARCHAR FileName          = 'output_delimited_file_list.txt',
VARCHAR IndicatorMode     = 'N',
VARCHAR OpenMode          = 'Write',
VARCHAR Format            = 'delimited',
VARCHAR FILELIST = 'Y',
VARCHAR TEXTDELIMITER = '~~'
);
/*****************************/
-- FileList = 'Y' indicates that the file output_delimited_file_list.txt is not the actual output file.
-- Instead, the output is written to the file named inside output_delimited_file_list.txt.
-- We can specify only one indirect file because we have used only one instance of the consumer operator.
-- The delimiter in the output file will be 2 bytes (~~).

/*****************************/
DEFINE OPERATOR EXPORT_OPERATOR()
DESCRIPTION 'TERADATA PARALLEL TRANSPORTER EXPORT OPERATOR'
TYPE EXPORT
SCHEMA EMPLOYEE_SCHEMA1
ATTRIBUTES
(
VARCHAR PrivateLogName    = 'export_privatelog',
INTEGER MaxSessions       =  32,
INTEGER MinSessions       =  1,
VARCHAR TdpId             = 'prod.tdipid',
VARCHAR UserName          = 'sukul',
VARCHAR UserPassword      = 'mad22dam',
VARCHAR AccountId,
VARCHAR SelectStmt        = 'SELECT TRIM(CAST(EMP_ID AS VARCHAR(10))) , TRIM(CAST(EMP_NAME AS VARCHAR(10))) FROM SUPPORT_DATABASE.SOURCE_EMP_TABLE;'
);
/*****************************/
-- Note that in the select statement we are converting columns to VARCHAR to make them suitable for writing by the DataConnector consumer operator.


-- Note that we can have multiple steps in a job. Each step has its own opening and closing parentheses.
STEP setup
(
APPLY
('DROP TABLE SUPPORT_DATABASE.SOURCE_EMP_TABLE;'),
('CREATE TABLE SUPPORT_DATABASE.SOURCE_EMP_TABLE(EMP_ID INTEGER,
EMP_NAME CHAR(10));')
TO OPERATOR (DDL_OPERATOR () );
);
-- Each SQL statement we write must end with its own semicolon (;).

STEP load_the_file
(
APPLY ('INSERT INTO SUPPORT_DATABASE.SOURCE_EMP_TABLE VALUES (:EMP_ID,:EMP_NAME);') TO OPERATOR (LOAD_OPERATOR())
SELECT * FROM OPERATOR (FILE_READER());
);

STEP export_to_file
(
APPLY TO OPERATOR (FILE_WRITER() )
SELECT * FROM OPERATOR (EXPORT_OPERATOR() [1] );
);
);





Contents of the input file are:
more input_fixed_w_file.txt
0000000001sukul
0000000002uma
0000000033bhanu
0000000004chettri

-- The input file contains 4 records.







Contents of output_delimited_file_list.txt are :
more output_delimited_file_list.txt
/home/sukul/tpttest/outputile.txt

-- This list file contains the name of the actual data file where the records will be written.
-- The actual name of the output file is /home/sukul/tpttest/outputile.txt







We run this script using the following command:
tbuild -f tpttest2
-- tpttest2 is the name of the file containing the above script.





Following is the log of the above run:


Teradata Parallel Transporter Version 13.10.00.05
Job log: /opt/teradata/client/13.10/tbuild/logs/sukul-13.out
Job id is sukul-13, running on mhp36b21
Teradata Parallel Transporter SQL DDL Operator Version 13.10.00.05
DDL_OPERATOR: private log specified: DDL_log
DDL_OPERATOR: connecting sessions
DDL_OPERATOR: sending SQL requests
DDL_OPERATOR: disconnecting sessions
DDL_OPERATOR: Total processor time used = '0.12 Second(s)'
DDL_OPERATOR: Start : Sun Oct  6 08:34:07 2013
DDL_OPERATOR: End   : Sun Oct  6 08:34:08 2013
Job step setup completed successfully
Teradata Parallel Transporter Load Operator Version 13.10.00.04
LOAD_OPERATOR: private log specified: load_log
LOAD_OPERATOR: connecting sessions
Teradata Parallel Transporter DataConnector Version 13.10.00.05
FILE_READER Instance 1 directing private log report to 'file_reader_private.log-1'.
FILE_READER: TPT19008 DataConnector Producer operator Instances: 1
FILE_READER: TPT19003 ECI operator ID: FILE_READER-15386
FILE_READER: TPT19222 Operator instance 1 processing file '/home/sukul/tpttest/input_fixed_w_file.txt'.
LOAD_OPERATOR: preparing target table
LOAD_OPERATOR: entering Acquisition Phase
LOAD_OPERATOR: entering Application Phase
LOAD_OPERATOR: Statistics for Target Table:  'SUPPORT_DATABASE.SOURCE_EMP_TABLE'
LOAD_OPERATOR: Total Rows Sent To RDBMS:      3
LOAD_OPERATOR: Total Rows Applied:            3
LOAD_OPERATOR: disconnecting sessions
FILE_READER: TPT19221 Total files processed: 1.
LOAD_OPERATOR: Total processor time used = '1.51 Second(s)'
LOAD_OPERATOR: Start : Sun Oct  6 08:34:12 2013
LOAD_OPERATOR: End   : Sun Oct  6 08:34:27 2013
Job step load_the_file completed successfully
Teradata Parallel Transporter DataConnector Version 13.10.00.05
FILE_WRITER Instance 1 directing private log report to 'file_writer_privatelog-1'.
FILE_WRITER: TPT19007 DataConnector Consumer operator Instances: 1
FILE_WRITER: TPT19003 ECI operator ID: FILE_WRITER-15415
FILE_WRITER: TPT19222 Operator instance 1 processing file '/home/sukul/tpttest/outputile.txt'.
Teradata Parallel Transporter Export Operator Version 13.10.00.04
EXPORT_OPERATOR: private log specified: export_privatelog
EXPORT_OPERATOR: connecting sessions
EXPORT_OPERATOR: sending SELECT request
EXPORT_OPERATOR: entering End Export Phase
EXPORT_OPERATOR: Total Rows Exported:  3
FILE_WRITER: TPT19221 Total files processed: 1.
EXPORT_OPERATOR: disconnecting sessions
EXPORT_OPERATOR: Total processor time used = '1.51 Second(s)'
EXPORT_OPERATOR: Start : Sun Oct  6 08:34:29 2013
EXPORT_OPERATOR: End   : Sun Oct  6 08:34:42 2013
Job step export_to_file completed successfully
Job sukul completed successfully


-- In the above log we can see that only 3 records get loaded; the 1st record is ignored because we specified SkipRows = 1.

The above log shows that the public log file name is /opt/teradata/client/13.10/tbuild/logs/sukul-13.out

If we try to read this file using commands like more, we get unreadable data, as shown below:

TWB_EVENTSM-#^E^E^EsetupM-^@^M-A^B^CRQXM-?M-^X^A8A^EAsm017r-13,17,0,OperatorEnter,setup,DDL_OPERATOR,1,2013-10-06,,1,0M-^X^A^A;M-p^D
        ^DAPPLY_1[0001]opermsgsM-^L^L^E^LDDL_OPERATORM-^\^D^B^AM- ^EM-!^E^E^Esetup`^B     *M-^U^C^CRQXM-?M-^X^BD^G^E^G
SQL DDLL^K^E^K13.10.00.05`^B    *M-^V^C^CRQXM-?M-^X^A^BD^L^E^LDDL_OPERATORQ^G^E^GDDL_logM- ^A^A;M-p^D
^DAPPLY_1[0001]opermsgsM-^L^L^E^LDDL_OPERATORM-^\^D^B^AM- ^G^E^GDDL_logM-(^E^E^Esetup@^B
'u^B^CRQXM-?M-^X^A8^A^E^A ^Ct^B
'^R^B^CRQXM-?M-^X^A^BD^A^E^A F^C!^E^C!   ===================================================================
     =                                                                 =
     =                  TERADATA PARALLEL TRANSPORTER                  =
     =                                                                 =
     =            SQL DDL OPERATOR     VERSION 13.10.00.05             =
     =                                                                 =
     =          OPERATOR SUPPORT LIBRARY VERSION 13.10.00.04           =
     =                                                                 =
     = COPYRIGHT 2001-2010, TERADATA CORPORATION. ALL RIGHTS RESERVED. =
     =                                                                 =
     ===================================================================
t^B
'e^B^CRQXM-?M-^X^B^BD^E**** 08:34:07R^X^E^XSun Oct  6 08:34:07 2013@^B

To view this public log properly we need to use the tlogview command, as shown below:

tlogview -l sukul-13.out

This command shows us a formatted log. Note that we use the '-l' option to specify the public log name.













Private log:

In each of the operators we specified a private log name.
The private log provides a detailed log specific to that operator.
To find all the private logs embedded in the public log we use the following command:

 tlogview -l sukul-13.out -p

The -p option lists all the private logs embedded in the public log.

Following is the output of the above command:


PXCRM

TWB_SRCTGT

TWB_STATUS

TWB_EVENTS

DDL_log

load_log

file_reader_private.log-1

file_writer_privatelog-1

export_privatelog

Note that the first 4 are system-defined private logs, and the remaining 5 correspond to the 5 operators we defined in our job.

To view any one of the private logs we use the -f option of the tlogview command, as follows:

tlogview -l sukul-13.out -f file_reader_private.log-1

The following is how the private log of the reader operator looks:

     ==========================================================================
     =                                                                        =
     =                     TERADATA PARALLEL TRANSPORTER                      =
     =                                                                        =
     =              DATACONNECTOR OPERATOR VERSION  13.10.00.05               =
     =                                                                        =
     =           DataConnector UTILITY LIBRARY VERSION 13.10.00.17            =
     =                                                                        =
     =    COPYRIGHT 2001-2010, Teradata Corporation.  ALL RIGHTS RESERVED.    =
     =                                                                        =
     ======================================================================

     Operator name: 'FILE_READER' instance 1 of 1 [Producer]

**** 08:34:12 Processing starting at: Sun Oct  6 08:34:12 2013

     ======================================================================
     =                                                                        =
     =                    Operator module static specifics                    =
     =                                                                        =
     =                Compiled for platform: '32-bit HPUX-PA'                 =
     =         Operator module name:'dtacop', version:'13.10.00.05C'          =
     =                                                                        =
     =      pmdcomt_HeaderVersion: 'Common 13.10.00.10' - packing 'none'      =
     =      pmddamt_HeaderVersion: 'Common 13.10.00.01' - packing 'none'      =
     =                                                                        =
     ======================================================================

     Log will include stats only

     Operator 'dtacop' main source version:'13.10.00.21'
**** 08:34:12 From file 'input_fixed_w_file.txt', starting to send rows.
**** 08:34:19 Finished sending rows for input_fixed_w_file.txt (index 1)
     Rows sent: 3, (size: 84) CPU Time: 0.00 Seconds
     Sending stat to infrastructure: DCCheckPointNo=0
     Sending stat to infrastructure: DCRowsRead=3
     Sending stat to infrastructure: DCFilesRead=1
     Sending stat to infrastructure: DCRowErrorNo=0
     Sending stat to infrastructure: DCFileName='input_fixed_w_file.txt'
     Files read by this instance: 1
**** 08:34:25 Total processor time used = '0.00 Seconds(s)'
**** 08:34:25 Total files processed: 1








Only 3 records are loaded into the table. The following are the contents of the table (the leading column is the row number displayed by the SQL client):

select * from SOURCE_EMP_TABLE;

     EMP_ID    EMP_NAME
1    2         uma
2    33        bhanu
3    4         chettri










The output delimited file is as follows:
more outputile.txt
2~~uma
33~~bhanu
4~~chettri

Note that the delimiter is 2 bytes.



5.2 Teradata Parallel Transporter - Data Connector attributes meaning

The following is the meaning of each of the attributes of the Data Connector operator, in detail:

  1. FILENAME and DIRECTORYPATH


Use of the file name differs depending upon the operator type and operating system.

  • Data Connector producer operator

When the Data Connector operator is used as a producer, we can use the * wildcard character in the FileName attribute to process all the files in the UNIX directory (or all members in the PDS or PDSE, on the mainframe).

On UNIX systems the FileName attribute is limited to 255 characters.
We can specify the complete file name
Ex: VARCHAR FileName = '/home/sukul/inputfile.txt'
Or we can specify just the name of the file in the directory, as VARCHAR FileName = 'inputfile.txt'.
In this case the job looks at the optional DirectoryPath attribute. If the DirectoryPath attribute is specified as
VARCHAR DirectoryPath = '/home/holdy/', then the file is assumed to be at the path '/home/holdy/inputfile.txt'.
But if the directory is not defined in the optional DirectoryPath attribute, the file is expected to
be found in the default directory.


On z/OS, FileName can be:
  • A fully qualified data set name or a PDS name including the member name
  • Only the member name of the PDS or PDSE script library, or
  • 'DD:<ddname>'
Note that when only the member name is specified for FileName, the (PDS or PDSE) data set name containing the member must be specified by the DirectoryPath attribute.
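
A minimal sketch of that member-name case (the data set and member names are hypothetical):

VARCHAR FileName      = 'MYMEMBER',
VARCHAR DirectoryPath = 'MYUSER.MY.PDS'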

  • Data Connector consumer operator

When using the DataConnector operator as a consumer, the FileName attribute becomes the complete file specification, and the FileName cannot contain the wildcard character (*).


If the pathname that you specify with the FileName attribute contains any embedded pathname syntax ('/' on UNIX or '\' on Windows), it is accepted as the entire pathname. However, if the DirectoryPath attribute is also present, the DirectoryPath attribute is ignored, and a warning message is issued.


z/OS:

FileName = '//''name.name(member)'''
z/OS PDS DSN Name.Name(Member). Note the // used when specifying the name of the data set.

FileName = '//''name.name'''
z/OS DSN (sequential) Name.Name.

FileName = 'member'
z/OS PDS member expected to reside in the DSN that is defined in the DirectoryPath attribute.

FileName = 'DD:ddname'
z/OS DSN described in the JCL DD statement named 'ddname'.

UNIX:

FileName = '/tmp/user/filename'
UNIX pathname.

FileName = 'filename'
If the DirectoryPath attribute is undefined, filename is located in the default directory.

Windows:

FileName = '\\tmp\userfilename'
Windows pathname.

FileName = 'filename'
Windows file name expected to be found in the directory defined in the DirectoryPath attribute.
If DirectoryPath is not defined, filename is located in the default directory.



  2. FILELIST

Expected values are 'Y' and 'N'.

Adding FileList = ‘Y’ indicates that the file identified by FileName contains a list of files to be processed as
input or used as containers for output.

Note that when we use a file list, the file names are expected to be full path specifications.
If no directory name is included, the files are expected to be located in the current directory.
Supplying full paths for output files enables you to write files to multiple directories or disks.

Attributes that we cannot use with FileList:
  • DirectoryPath attribute
  • ArchiveDirectoryPath attribute

Important note: When the combination of the FileName and FileList attributes is used to control output, the supplied file list must have the same number of files as there are defined consumer instances; a mismatch results in a terminal error. At execution, rows are distributed to the listed files in round-robin fashion if the tbuild -C option is used.
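
For instance, with two consumer instances, a minimal sketch (the file names are hypothetical) might be:

VARCHAR FileName = 'out_list.txt',
VARCHAR FileList = 'Y'

where out_list.txt contains one full output path per consumer instance:

/home/sukul/tpttest/out1.txt
/home/sukul/tpttest/out2.txt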



  3. FORMAT attribute

We specify this attribute to indicate the format of the input or output file.

Format = 'Binary'
Each record contains a two-byte integer data length (n) followed by n bytes of data.

Format = 'Formatted'
Each record is in a format traditionally known as FastLoad or Teradata format: a two-byte integer (n) followed by n bytes of data, followed by an end-of-record marker (X'0A' or X'0D0A').

Format = 'Text'
Each record is entirely character data, an arbitrary number of bytes followed by one of the following end-of-record markers:
• A single-byte line feed (X'0A') on UNIX platforms
• A double-byte carriage-return/line-feed pair (X'0D0A') on Windows platforms
(On Windows, the end-of-record marker is 2 bytes.)
When we specify Text, all the column definitions for the input/output file must be defined as CHAR.

Format = 'Delimited'
Each record is in variable-length text record format, but the records contain fields (columns) separated by one or more delimiter characters, as defined with the TextDelimiter attribute.
With this file format, all of the data types in the DEFINE SCHEMA must be VARCHAR.
The default TextDelimiter attribute value is the pipe character (|).

Format = 'Unformatted'
The data does not conform to any predefined format. Instead, the data is entirely described by the columns in the schema definition of the DataConnector operator.



  4. OPENMODE

Attribute that specifies the read/write access mode. Read means read-only access. Write means
write-only access. If a mode value is not specified for OpenMode, it defaults to Read for a
producer instance and Write for a consumer instance.


  5. TEXTDELIMITER and EscapeTextDelimiter attributes

These attributes indicate the delimiter in the input or output file.
The default is '|'. This means that if we don't specify this attribute, the delimiter is assumed to be |.
Delimiters can be multiple bytes.
To embed a delimiter character in the data, precede it with a backslash ( \ ), the escape character.
Use the EscapeTextDelimiter attribute to change the escape character to something other than the backslash ( \ ), as illustrated below.
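
As a worked illustration (assuming the default '|' delimiter and the default '\' escape), a record like the following is split into two fields, and the escaped delimiter is kept as data:

a\|b|c    ->    field 1 = 'a|b', field 2 = 'c'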

  6. PRIVATELOGNAME

Optional attribute that specifies the name of a log that is maintained by the Teradata PT Logger inside the public log.

  7. AccessModuleName

Optional attribute that specifies the name of the access module file.

  8. AccessModuleInitStr

Optional attribute that specifies the initialization string for the specified access module.

  9. MultipleReaders

Use the MultipleReaders attribute to process a single, very large file with multiple instances of
the Data Connector operator.
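
A minimal sketch of this, reusing the FILE_READER operator from the earlier example and the instance-count syntax shown there (the instance count of 4 is an illustrative assumption):

/* In the FILE_READER attribute list */
VARCHAR MultipleReaders = 'Y'

/* In the job step, request several producer instances */
SELECT * FROM OPERATOR (FILE_READER() [4]);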


  10. SkipRows and SkipRowsEveryFile

The SkipRows attribute expects an integer that indicates the number of rows to be skipped.

The SkipRowsEveryFile attribute expects values as Y[es] and N[o].

For example, if SkipRows = 5 and SkipRowsEveryFile = ‘Y’, the system skips the first five rows
in every file and begins processing each file at the sixth row. You might use this method to
bypass header rows that appear at the beginning of each file.
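
That example corresponds to attribute settings like this sketch:

INTEGER SkipRows = 5,
VARCHAR SkipRowsEveryFile = 'Y'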