Update some attributes of this CSV Import
imports_patch_files_csv(
  id,
  name = NULL,
  source = NULL,
  destination = NULL,
  first_row_is_header = NULL,
  column_delimiter = NULL,
  escaped = NULL,
  compression = NULL,
  existing_table_rows = NULL,
  max_errors = NULL,
  table_columns = NULL,
  loosen_types = NULL,
  execution = NULL,
  redshift_destination_options = NULL
)
Argument | Description |
---|---|
id | integer required. The ID for the import. |
name | string optional. The name of the import. |
source | list optional. A list with the elements fileIds and storagePath (see the value section below). |
destination | list optional. A list with the elements schema, table, remoteHostId, credentialId, primaryKeys, and lastModifiedKeys (see the value section below). |
first_row_is_header | boolean optional. A boolean value indicating whether the first row of the source file is a header row. |
column_delimiter | string optional. The column delimiter for the file. Valid arguments are "comma", "tab", and "pipe". Defaults to "comma". |
escaped | boolean optional. A boolean value indicating whether quotes in the source file are escaped with a backslash. Defaults to false. |
compression | string optional. The type of compression of the source file. Valid arguments are "gzip" and "none". Defaults to "none". |
existing_table_rows | string optional. The behavior if a destination table with the requested name already exists. One of "fail", "truncate", "append", "drop", or "upsert". Defaults to "fail". |
max_errors | integer optional. The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases. |
table_columns | array optional. An array of objects, each with the fields name and sqlType (see the value section below). |
loosen_types | boolean optional. If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false. |
execution | string optional. In upsert mode, controls when the data is moved: if "delayed", the data is moved after a brief delay; if "immediate", it is moved immediately. In non-upsert modes, controls how quickly detailed column stats appear in the data catalogue. Defaults to "delayed", to accommodate concurrent upserts to the same table and speedier non-upsert imports. |
redshift_destination_options | list optional. A list with the elements diststyle, distkey, and sortkeys (see the value section below). |
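A minimal sketch of a call, assuming the civis package is installed and a Civis API key is configured; the import ID, file ID, and destination schema/table names below are hypothetical placeholders:

```r
# Hypothetical values throughout; requires an authenticated civis session.
library(civis)

resp <- imports_patch_files_csv(
  id = 123,                                   # ID of an existing CSV import to update
  name = "daily-contacts-load",
  source = list(fileIds = list(456)),         # import a Civis file by its file ID
  destination = list(schema = "public", table = "contacts"),
  first_row_is_header = TRUE,
  existing_table_rows = "append",
  max_errors = 10
)
resp$id  # the patched import's ID, per the value section below
```

Only the arguments you pass are patched; arguments left as NULL keep their current values on the import.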
A list containing the following elements:
id integer, The ID for the import.
name string, The name of the import.
source list, A list containing the following elements:
  fileIds array, The file ID(s) to import, if importing Civis file(s).
  storagePath list, A list containing the following elements:
    storageHostId integer, The ID of the source storage host.
    credentialId integer, The ID of the credentials for the source storage host.
    filePaths array, The file or directory path(s) within the bucket from which to import. For example, the filePaths for "s3://mybucket/files/all/" would be "/files/all/". If a directory path is specified, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter, etc.).
destination list, A list containing the following elements:
  schema string, The destination schema name.
  table string, The destination table name.
  remoteHostId integer, The ID of the destination database host.
  credentialId integer, The ID of the credentials for the destination database.
  primaryKeys array, A list of column(s) which together uniquely identify a row in the destination table. These columns must not contain NULL values. If the import mode is "upsert", this field is required; see the Civis Helpdesk article on "Advanced CSV Imports via the Civis API" for more information.
  lastModifiedKeys array, A list of the columns indicating a record has been updated. If the destination table does not exist and the import mode is "upsert", this field is required.
firstRowIsHeader boolean, A boolean value indicating whether the first row of the source file is a header row.
columnDelimiter string, The column delimiter for the file. Valid arguments are "comma", "tab", and "pipe". Defaults to "comma".
escaped boolean, A boolean value indicating whether quotes in the source file are escaped with a backslash. Defaults to false.
compression string, The type of compression of the source file. Valid arguments are "gzip" and "none". Defaults to "none".
existingTableRows string, The behavior if a destination table with the requested name already exists. One of "fail", "truncate", "append", "drop", or "upsert". Defaults to "fail".
maxErrors integer, The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.
tableColumns array, An array containing the following fields:
  name string, The column name.
  sqlType string, The SQL type of the column.
loosenTypes boolean, If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.
execution string, In upsert mode, controls when the data is moved: if "delayed", the data is moved after a brief delay; if "immediate", it is moved immediately. In non-upsert modes, controls how quickly detailed column stats appear in the data catalogue. Defaults to "delayed", to accommodate concurrent upserts to the same table and speedier non-upsert imports.
redshiftDestinationOptions list, A list containing the following elements:
  diststyle string, The diststyle to use for the table. One of "even", "all", or "key".
  distkey string, The distkey for this table in Redshift.
  sortkeys array, Sortkeys for this table in Redshift. Please provide a maximum of two.
hidden boolean, The hidden status of the item.
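Per the value section above, upsert mode requires primaryKeys (and lastModifiedKeys when the destination table does not yet exist). A hedged sketch of patching an import into upsert mode; the ID, column, and table names are hypothetical:

```r
# Hypothetical values; upsert requires non-NULL primary key columns.
library(civis)

resp <- imports_patch_files_csv(
  id = 123,
  existing_table_rows = "upsert",
  destination = list(
    schema = "public",
    table = "contacts",
    primaryKeys = list("contact_id"),       # must uniquely identify rows, no NULLs
    lastModifiedKeys = list("updated_at")   # required if the table does not exist yet
  ),
  execution = "immediate"                   # move data immediately rather than delayed
)
```

Note that "immediate" forfeits the default "delayed" behavior that accommodates concurrent upserts to the same table.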