Represents a {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job google_bigquery_job}.
```python
from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJob(
  scope: Construct,
  id: str,
  connection: typing.Union[SSHProvisionerConnection, WinrmProvisionerConnection] = None,
  count: typing.Union[typing.Union[int, float], TerraformCount] = None,
  depends_on: typing.List[ITerraformDependable] = None,
  for_each: ITerraformIterator = None,
  lifecycle: TerraformResourceLifecycle = None,
  provider: TerraformProvider = None,
  provisioners: typing.List[typing.Union[FileProvisioner, LocalExecProvisioner, RemoteExecProvisioner]] = None,
  job_id: str,
  copy: BigqueryJobCopy = None,
  extract: BigqueryJobExtract = None,
  id: str = None,
  job_timeout_ms: str = None,
  labels: typing.Mapping[str] = None,
  load: BigqueryJobLoad = None,
  location: str = None,
  project: str = None,
  query: BigqueryJobQuery = None,
  timeouts: BigqueryJobTimeouts = None
)
```
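A minimal sketch of instantiating the resource inside a CDKTF stack. The provider setup, project, region, and all IDs below are placeholders, not values prescribed by this resource:

```python
from constructs import Construct
from cdktf import App, TerraformStack
from cdktf_cdktf_provider_google import bigquery_job
from cdktf_cdktf_provider_google.provider import GoogleProvider


class BigqueryJobStack(TerraformStack):
    def __init__(self, scope: Construct, id: str):
        super().__init__(scope, id)
        # Provider configuration; project and region values are placeholders.
        GoogleProvider(self, "google", project="my-project", region="us-central1")

        # A minimal query job; job_id must be unique within the project.
        bigquery_job.BigqueryJob(
            self,
            "example_query_job",
            job_id="job_query_example",
            location="US",
            labels={"team": "analytics"},
            query=bigquery_job.BigqueryJobQuery(
                query="SELECT 1",
                use_legacy_sql=False,
            ),
        )


app = App()
BigqueryJobStack(app, "bigquery-job-example")
app.synth()
```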
Name | Type | Description |
---|---|---|
scope | constructs.Construct | The scope in which to define this construct. |
id | str | The scoped construct ID. |
connection | typing.Union[cdktf.SSHProvisionerConnection, cdktf.WinrmProvisionerConnection] | No description. |
count | typing.Union[typing.Union[int, float], cdktf.TerraformCount] | No description. |
depends_on | typing.List[cdktf.ITerraformDependable] | No description. |
for_each | cdktf.ITerraformIterator | No description. |
lifecycle | cdktf.TerraformResourceLifecycle | No description. |
provider | cdktf.TerraformProvider | No description. |
provisioners | typing.List[typing.Union[cdktf.FileProvisioner, cdktf.LocalExecProvisioner, cdktf.RemoteExecProvisioner]] | No description. |
job_id | str | The ID of the job. |
copy | BigqueryJobCopy | copy block. |
extract | BigqueryJobExtract | extract block. |
id | str | Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#id BigqueryJob#id}. |
job_timeout_ms | str | Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job. |
labels | typing.Mapping[str] | The labels associated with this job. You can use these to organize and group your jobs. |
load | BigqueryJobLoad | load block. |
location | str | The geographic location of the job. The default value is US. |
project | str | Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project BigqueryJob#project}. |
query | BigqueryJobQuery | query block. |
timeouts | BigqueryJobTimeouts | timeouts block. |
- Type: constructs.Construct
The scope in which to define this construct.
- Type: str
The scoped construct ID.
Must be unique amongst siblings in the same scope.
- Type: typing.Union[cdktf.SSHProvisionerConnection, cdktf.WinrmProvisionerConnection]
- Type: typing.Union[typing.Union[int, float], cdktf.TerraformCount]
- Type: typing.List[cdktf.ITerraformDependable]
- Type: cdktf.ITerraformIterator
- Type: cdktf.TerraformResourceLifecycle
- Type: cdktf.TerraformProvider
- Type: typing.List[typing.Union[cdktf.FileProvisioner, cdktf.LocalExecProvisioner, cdktf.RemoteExecProvisioner]]
- Type: str
The ID of the job.
The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#job_id BigqueryJob#job_id}
- Type: BigqueryJobCopy
copy block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#copy BigqueryJob#copy}
- Type: BigqueryJobExtract
extract block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#extract BigqueryJob#extract}
- Type: str
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#id BigqueryJob#id}.
Please be aware that the id field is automatically added to all resources in Terraform providers using a Terraform provider SDK version below 2. If you experience problems setting this value it might not be settable. Please take a look at the provider documentation to ensure it should be settable.
- Type: str
Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#job_timeout_ms BigqueryJob#job_timeout_ms}
- Type: typing.Mapping[str]
The labels associated with this job. You can use these to organize and group your jobs.
Note: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field 'effective_labels' for all of the labels present on the resource.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#labels BigqueryJob#labels}
- Type: BigqueryJobLoad
load block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#load BigqueryJob#load}
- Type: str
The geographic location of the job. The default value is US.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#location BigqueryJob#location}
- Type: str
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project BigqueryJob#project}.
- Type: BigqueryJobQuery
query block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#query BigqueryJob#query}
- Type: BigqueryJobTimeouts
timeouts block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#timeouts BigqueryJob#timeouts}
Name | Description |
---|---|
to_string | Returns a string representation of this construct. |
add_override | No description. |
override_logical_id | Overrides the auto-generated logical ID with a specific ID. |
reset_override_logical_id | Resets a previously passed logical Id to use the auto-generated logical id again. |
to_hcl_terraform | No description. |
to_metadata | No description. |
to_terraform | Adds this resource to the terraform JSON output. |
add_move_target | Adds a user defined moveTarget string to this resource to be later used in .moveTo(moveTarget) to resolve the location of the move. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
has_resource_move | No description. |
import_from | No description. |
interpolation_for_attribute | No description. |
move_from_id | Move the resource corresponding to "id" to this resource. |
move_to | Moves this resource to the target resource given by moveTarget. |
move_to_id | Moves this resource to the resource corresponding to "id". |
put_copy | No description. |
put_extract | No description. |
put_load | No description. |
put_query | No description. |
put_timeouts | No description. |
reset_copy | No description. |
reset_extract | No description. |
reset_id | No description. |
reset_job_timeout_ms | No description. |
reset_labels | No description. |
reset_load | No description. |
reset_location | No description. |
reset_project | No description. |
reset_query | No description. |
reset_timeouts | No description. |
```python
def to_string() -> str
```
Returns a string representation of this construct.
```python
def add_override(
  path: str,
  value: typing.Any
) -> None
```
- Type: str
- Type: typing.Any
```python
def override_logical_id(
  new_logical_id: str
) -> None
```
Overrides the auto-generated logical ID with a specific ID.
- Type: str
The new logical ID to use for this stack element.
```python
def reset_override_logical_id() -> None
```
Resets a previously passed logical Id to use the auto-generated logical id again.
```python
def to_hcl_terraform() -> typing.Any
```

```python
def to_metadata() -> typing.Any
```

```python
def to_terraform() -> typing.Any
```
Adds this resource to the terraform JSON output.
```python
def add_move_target(
  move_target: str
) -> None
```
Adds a user defined moveTarget string to this resource to be later used in .moveTo(moveTarget) to resolve the location of the move.
- Type: str
The string move target that will correspond to this resource.
```python
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
```

- Type: str

```python
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
```

- Type: str

```python
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
```

- Type: str

```python
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
```

- Type: str

```python
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
```

- Type: str

```python
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
```

- Type: str

```python
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
```

- Type: str

```python
def get_string_attribute(
  terraform_attribute: str
) -> str
```

- Type: str

```python
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
```

- Type: str
```python
def has_resource_move() -> typing.Union[TerraformResourceMoveByTarget, TerraformResourceMoveById]
```
```python
def import_from(
  id: str,
  provider: TerraformProvider = None
) -> None
```
- Type: str
- Type: cdktf.TerraformProvider
```python
def interpolation_for_attribute(
  terraform_attribute: str
) -> IResolvable
```
- Type: str
```python
def move_from_id(
  id: str
) -> None
```
Move the resource corresponding to "id" to this resource.
Note that the resource being moved from must be marked as moved using its instance function.
- Type: str
Full id of resource being moved from, e.g. "aws_s3_bucket.example".
```python
def move_to(
  move_target: str,
  index: typing.Union[str, typing.Union[int, float]] = None
) -> None
```
Moves this resource to the target resource given by moveTarget.
- Type: str
The previously set user defined string set by .addMoveTarget() corresponding to the resource to move to.
- Type: typing.Union[str, typing.Union[int, float]]
Optional. The index corresponding to the key under which this resource should appear in the for_each of the resource being moved to.
```python
def move_to_id(
  id: str
) -> None
```
Moves this resource to the resource corresponding to "id".
- Type: str
Full id of resource to move to, e.g. "aws_s3_bucket.example".
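A sketch of the move workflow under assumed names: the target construct registers a move target, and the resource being moved calls move_to with that target. Both constructs, IDs, and the query are placeholders for a transitional plan:

```python
# Hypothetical refactor: the new construct that should receive the state.
new_job = bigquery_job.BigqueryJob(
    self,
    "job_renamed",
    job_id="job_example",
    query=bigquery_job.BigqueryJobQuery(query="SELECT 1", use_legacy_sql=False),
)
new_job.add_move_target("renamed-job")

# The resource being moved from resolves the target set above.
old_job = bigquery_job.BigqueryJob(
    self,
    "job_original",
    job_id="job_example",
    query=bigquery_job.BigqueryJobQuery(query="SELECT 1", use_legacy_sql=False),
)
old_job.move_to("renamed-job")
```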
```python
def put_copy(
  source_tables: typing.Union[IResolvable, typing.List[BigqueryJobCopySourceTables]],
  create_disposition: str = None,
  destination_encryption_configuration: BigqueryJobCopyDestinationEncryptionConfiguration = None,
  destination_table: BigqueryJobCopyDestinationTable = None,
  write_disposition: str = None
) -> None
```
- Type: typing.Union[cdktf.IResolvable, typing.List[BigqueryJobCopySourceTables]]
source_tables block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_tables BigqueryJob#source_tables}
- Type: str
Specifies whether the job is allowed to create new tables.
The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion Default value: "CREATE_IF_NEEDED" Possible values: ["CREATE_IF_NEEDED", "CREATE_NEVER"]
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create_disposition BigqueryJob#create_disposition}
- Type: BigqueryJobCopyDestinationEncryptionConfiguration
destination_encryption_configuration block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_encryption_configuration BigqueryJob#destination_encryption_configuration}
- Type: BigqueryJobCopyDestinationTable
destination_table block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_table BigqueryJob#destination_table}
- Type: str
Specifies the action that occurs if the destination table already exists.
The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion. Default value: "WRITE_EMPTY" Possible values: ["WRITE_TRUNCATE", "WRITE_APPEND", "WRITE_EMPTY"]
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#write_disposition BigqueryJob#write_disposition}
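A sketch of configuring the copy block after construction. The project, dataset, and table names are placeholders:

```python
# A copy job configured via put_copy; all identifiers are illustrative.
job = bigquery_job.BigqueryJob(self, "copy_job", job_id="job_copy_example")
job.put_copy(
    source_tables=[
        bigquery_job.BigqueryJobCopySourceTables(
            project_id="my-project",
            dataset_id="source_dataset",
            table_id="source_table",
        )
    ],
    destination_table=bigquery_job.BigqueryJobCopyDestinationTable(
        project_id="my-project",
        dataset_id="dest_dataset",
        table_id="dest_table",
    ),
    create_disposition="CREATE_IF_NEEDED",
    write_disposition="WRITE_TRUNCATE",
)
```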
```python
def put_extract(
  destination_uris: typing.List[str],
  compression: str = None,
  destination_format: str = None,
  field_delimiter: str = None,
  print_header: typing.Union[bool, IResolvable] = None,
  source_model: BigqueryJobExtractSourceModel = None,
  source_table: BigqueryJobExtractSourceTable = None,
  use_avro_logical_types: typing.Union[bool, IResolvable] = None
) -> None
```
- Type: typing.List[str]
A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_uris BigqueryJob#destination_uris}
- Type: str
The compression type to use for exported files.
Possible values include GZIP, DEFLATE, SNAPPY, and NONE. The default value is NONE. DEFLATE and SNAPPY are only supported for Avro.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#compression BigqueryJob#compression}
- Type: str
The exported file format.
Possible values include CSV, NEWLINE_DELIMITED_JSON and AVRO for tables and SAVED_MODEL for models. The default value for tables is CSV. Tables with nested or repeated fields cannot be exported as CSV. The default value for models is SAVED_MODEL.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_format BigqueryJob#destination_format}
- Type: str
When extracting data in CSV format, this defines the delimiter to use between fields in the exported data.
Default is ','
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#field_delimiter BigqueryJob#field_delimiter}
- Type: typing.Union[bool, cdktf.IResolvable]
Whether to print out a header row in the results. Default is true.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#print_header BigqueryJob#print_header}
- Type: BigqueryJobExtractSourceModel
source_model block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_model BigqueryJob#source_model}
- Type: BigqueryJobExtractSourceTable
source_table block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_table BigqueryJob#source_table}
- Type: typing.Union[bool, cdktf.IResolvable]
Whether to use logical types when extracting to AVRO format.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#use_avro_logical_types BigqueryJob#use_avro_logical_types}
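A sketch of an extract job that exports a table to Cloud Storage as gzipped CSV, assuming `job` is a BigqueryJob instance as in the earlier sketch; the bucket and table identifiers are placeholders:

```python
# Export a table to GCS; the wildcard in the URI shards the output.
job.put_extract(
    destination_uris=["gs://my-bucket/extract/part-*.csv"],
    destination_format="CSV",
    compression="GZIP",
    field_delimiter=",",
    print_header=True,
    source_table=bigquery_job.BigqueryJobExtractSourceTable(
        project_id="my-project",
        dataset_id="source_dataset",
        table_id="source_table",
    ),
)
```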
```python
def put_load(
  destination_table: BigqueryJobLoadDestinationTable,
  source_uris: typing.List[str],
  allow_jagged_rows: typing.Union[bool, IResolvable] = None,
  allow_quoted_newlines: typing.Union[bool, IResolvable] = None,
  autodetect: typing.Union[bool, IResolvable] = None,
  create_disposition: str = None,
  destination_encryption_configuration: BigqueryJobLoadDestinationEncryptionConfiguration = None,
  encoding: str = None,
  field_delimiter: str = None,
  ignore_unknown_values: typing.Union[bool, IResolvable] = None,
  json_extension: str = None,
  max_bad_records: typing.Union[int, float] = None,
  null_marker: str = None,
  parquet_options: BigqueryJobLoadParquetOptions = None,
  projection_fields: typing.List[str] = None,
  quote: str = None,
  schema_update_options: typing.List[str] = None,
  skip_leading_rows: typing.Union[int, float] = None,
  source_format: str = None,
  time_partitioning: BigqueryJobLoadTimePartitioning = None,
  write_disposition: str = None
) -> None
```
- Type: BigqueryJobLoadDestinationTable
destination_table block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_table BigqueryJob#destination_table}
- Type: typing.List[str]
The fully-qualified URIs that point to your data in Google Cloud.
For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_uris BigqueryJob#source_uris}
- Type: typing.Union[bool, cdktf.IResolvable]
Accept rows that are missing trailing optional columns.
The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#allow_jagged_rows BigqueryJob#allow_jagged_rows}
- Type: typing.Union[bool, cdktf.IResolvable]
Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.
The default value is false.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#allow_quoted_newlines BigqueryJob#allow_quoted_newlines}
- Type: typing.Union[bool, cdktf.IResolvable]
Indicates if we should automatically infer the options and schema for CSV and JSON sources.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#autodetect BigqueryJob#autodetect}
- Type: str
Specifies whether the job is allowed to create new tables.
The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion Default value: "CREATE_IF_NEEDED" Possible values: ["CREATE_IF_NEEDED", "CREATE_NEVER"]
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create_disposition BigqueryJob#create_disposition}
- Type: BigqueryJobLoadDestinationEncryptionConfiguration
destination_encryption_configuration block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_encryption_configuration BigqueryJob#destination_encryption_configuration}
- Type: str
The character encoding of the data.
The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#encoding BigqueryJob#encoding}
- Type: str
The separator for fields in a CSV file.
The separator can be any ISO-8859-1 single-byte character. To use a character in the range 128-255, you must encode the character as UTF8. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#field_delimiter BigqueryJob#field_delimiter}
- Type: typing.Union[bool, cdktf.IResolvable]
Indicates if BigQuery should allow extra values that are not represented in the table schema.
If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don't match any column names
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#ignore_unknown_values BigqueryJob#ignore_unknown_values}
- Type: str
If sourceFormat is set to newline-delimited JSON, indicates whether it should be processed as a JSON variant such as GeoJSON.
For a sourceFormat other than JSON, omit this field. If the sourceFormat is newline-delimited JSON: - for newline-delimited GeoJSON: set to GEOJSON.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#json_extension BigqueryJob#json_extension}
- Type: typing.Union[int, float]
The maximum number of bad records that BigQuery can ignore when running the job.
If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#max_bad_records BigqueryJob#max_bad_records}
- Type: str
Specifies a string that represents a null value in a CSV file.
For example, if you specify "\N", BigQuery interprets "\N" as a null value when loading a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#null_marker BigqueryJob#null_marker}
- Type: BigqueryJobLoadParquetOptions
parquet_options block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#parquet_options BigqueryJob#parquet_options}
- Type: typing.List[str]
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup.
Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#projection_fields BigqueryJob#projection_fields}
- Type: str
The value that is used to quote data sections in a CSV file.
BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#quote BigqueryJob#quote}
- Type: typing.List[str]
Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration.
Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#schema_update_options BigqueryJob#schema_update_options}
- Type: typing.Union[int, float]
The number of rows at the top of a CSV file that BigQuery will skip when loading the data.
The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#skip_leading_rows BigqueryJob#skip_leading_rows}
- Type: str
The format of the data files.
For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". For parquet, specify "PARQUET". For orc, specify "ORC". [Beta] For Bigtable, specify "BIGTABLE". The default value is CSV.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_format BigqueryJob#source_format}
- Type: BigqueryJobLoadTimePartitioning
time_partitioning block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#time_partitioning BigqueryJob#time_partitioning}
- Type: str
Specifies the action that occurs if the destination table already exists.
The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion. Default value: "WRITE_EMPTY" Possible values: ["WRITE_TRUNCATE", "WRITE_APPEND", "WRITE_EMPTY"]
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#write_disposition BigqueryJob#write_disposition}
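A sketch of a load job that appends CSV files from Cloud Storage into an existing table, again assuming `job` from the earlier sketch; bucket, dataset, and table names are placeholders:

```python
# Load CSV files, skipping a single header row and autodetecting the schema.
job.put_load(
    destination_table=bigquery_job.BigqueryJobLoadDestinationTable(
        project_id="my-project",
        dataset_id="dest_dataset",
        table_id="events",
    ),
    source_uris=["gs://my-bucket/data/*.csv"],
    source_format="CSV",
    skip_leading_rows=1,
    autodetect=True,
    write_disposition="WRITE_APPEND",
)
```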
```python
def put_query(
  query: str,
  allow_large_results: typing.Union[bool, IResolvable] = None,
  create_disposition: str = None,
  default_dataset: BigqueryJobQueryDefaultDataset = None,
  destination_encryption_configuration: BigqueryJobQueryDestinationEncryptionConfiguration = None,
  destination_table: BigqueryJobQueryDestinationTable = None,
  flatten_results: typing.Union[bool, IResolvable] = None,
  maximum_billing_tier: typing.Union[int, float] = None,
  maximum_bytes_billed: str = None,
  parameter_mode: str = None,
  priority: str = None,
  schema_update_options: typing.List[str] = None,
  script_options: BigqueryJobQueryScriptOptions = None,
  use_legacy_sql: typing.Union[bool, IResolvable] = None,
  use_query_cache: typing.Union[bool, IResolvable] = None,
  user_defined_function_resources: typing.Union[IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]] = None,
  write_disposition: str = None
) -> None
```
- Type: str
SQL query text to execute.
The useLegacySql field can be used to indicate whether the query uses legacy SQL or standard SQL. NOTE: queries containing DML language ('DELETE', 'UPDATE', 'MERGE', 'INSERT') must specify 'create_disposition = ""' and 'write_disposition = ""'.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#query BigqueryJob#query}
- Type: typing.Union[bool, cdktf.IResolvable]
If true and query uses legacy SQL dialect, allows the query to produce arbitrarily large result tables at a slight cost in performance.
Requires destinationTable to be set. For standard SQL queries, this flag is ignored and large results are always allowed. However, you must still set destinationTable when result size exceeds the allowed maximum response size.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#allow_large_results BigqueryJob#allow_large_results}
- Type: str
Specifies whether the job is allowed to create new tables.
The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion Default value: "CREATE_IF_NEEDED" Possible values: ["CREATE_IF_NEEDED", "CREATE_NEVER"]
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create_disposition BigqueryJob#create_disposition}
- Type: BigqueryJobQueryDefaultDataset
default_dataset block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#default_dataset BigqueryJob#default_dataset}
- Type: BigqueryJobQueryDestinationEncryptionConfiguration
destination_encryption_configuration block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_encryption_configuration BigqueryJob#destination_encryption_configuration}
- Type: BigqueryJobQueryDestinationTable
destination_table block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_table BigqueryJob#destination_table}
- Type: typing.Union[bool, cdktf.IResolvable]
If true and query uses legacy SQL dialect, flattens all nested and repeated fields in the query results.
allowLargeResults must be true if this is set to false. For standard SQL queries, this flag is ignored and results are never flattened.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#flatten_results BigqueryJob#flatten_results}
- Type: typing.Union[int, float]
Limits the billing tier for this job.
Queries that have resource usage beyond this tier will fail (without incurring a charge). If unspecified, this will be set to your project default.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#maximum_billing_tier BigqueryJob#maximum_billing_tier}
- Type: str
Limits the bytes billed for this job.
Queries that will have bytes billed beyond this limit will fail (without incurring a charge). If unspecified, this will be set to your project default.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#maximum_bytes_billed BigqueryJob#maximum_bytes_billed}
- Type: str
Standard SQL only.
Set to POSITIONAL to use positional (?) query parameters or to NAMED to use named (@myparam) query parameters in this query.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#parameter_mode BigqueryJob#parameter_mode}
- Type: str
Specifies a priority for the query. Default value: "INTERACTIVE" Possible values: ["INTERACTIVE", "BATCH"].
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#priority BigqueryJob#priority}
- Type: typing.List[str]
Allows the schema of the destination table to be updated as a side effect of the query job.
Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#schema_update_options BigqueryJob#schema_update_options}
- Type: BigqueryJobQueryScriptOptions
script_options block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#script_options BigqueryJob#script_options}
- Type: typing.Union[bool, cdktf.IResolvable]
Specifies whether to use BigQuery's legacy SQL dialect for this query.
The default value is true. If set to false, the query will use BigQuery's standard SQL.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#use_legacy_sql BigqueryJob#use_legacy_sql}
- Type: typing.Union[bool, cdktf.IResolvable]
Whether to look for the result in the query cache.
The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. Moreover, the query cache is only available when a query does not have a destination table specified. The default value is true.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#use_query_cache BigqueryJob#use_query_cache}
- Type: typing.Union[cdktf.IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]]
user_defined_function_resources block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#user_defined_function_resources BigqueryJob#user_defined_function_resources}
- Type: str
Specifies the action that occurs if the destination table already exists.
The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion. Default value: "WRITE_EMPTY" Possible values: ["WRITE_TRUNCATE", "WRITE_APPEND", "WRITE_EMPTY"]
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#write_disposition BigqueryJob#write_disposition}
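A sketch of a standard-SQL query job writing into an existing table, assuming `job` from the earlier sketch; the SQL and all identifiers are placeholders:

```python
# A standard-SQL query whose result replaces the destination table.
job.put_query(
    query="SELECT state, COUNT(*) AS n FROM `my-project.source_dataset.events` GROUP BY state",
    use_legacy_sql=False,
    destination_table=bigquery_job.BigqueryJobQueryDestinationTable(
        project_id="my-project",
        dataset_id="dest_dataset",
        table_id="events_by_state",
    ),
    write_disposition="WRITE_TRUNCATE",
)
```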
```python
def put_timeouts(
  create: str = None,
  delete: str = None,
  update: str = None
) -> None
```
- Type: str
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create BigqueryJob#create}.
- Type: str
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#delete BigqueryJob#delete}.
- Type: str
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#update BigqueryJob#update}.
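A sketch of setting custom operation timeouts, assuming `job` from the earlier sketch; the durations are illustrative Terraform duration strings:

```python
# Allow up to ten minutes for each lifecycle operation.
job.put_timeouts(
    create="10m",
    delete="10m",
    update="10m",
)
```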
```python
def reset_copy() -> None
def reset_extract() -> None
def reset_id() -> None
def reset_job_timeout_ms() -> None
def reset_labels() -> None
def reset_load() -> None
def reset_location() -> None
def reset_project() -> None
def reset_query() -> None
def reset_timeouts() -> None
```
Name | Description |
---|---|
is_construct | Checks if x is a construct. |
is_terraform_element | No description. |
is_terraform_resource | No description. |
generate_config_for_import | Generates CDKTF code for importing a BigqueryJob resource upon running "cdktf plan <stack-name>". |
```python
from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJob.is_construct(
  x: typing.Any
)
```
Checks if x is a construct.

Use this method instead of instanceof to properly detect Construct instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the constructs library on disk are seen as independent, completely different libraries. As a consequence, the class Construct in each copy of the constructs library is seen as a different class, and an instance of one class will not test as instanceof the other class. npm install will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the constructs library can be accidentally installed, and instanceof will behave unpredictably. It is safest to avoid using instanceof, and using this type-testing method instead.
- Type: typing.Any
Any object.
```python
from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJob.is_terraform_element(
  x: typing.Any
)
```
- Type: typing.Any
```python
from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJob.is_terraform_resource(
  x: typing.Any
)
```
- Type: typing.Any
```python
from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJob.generate_config_for_import(
  scope: Construct,
  import_to_id: str,
  import_from_id: str,
  provider: TerraformProvider = None
)
```
Generates CDKTF code for importing a BigqueryJob resource upon running "cdktf plan <stack-name>".
- Type: constructs.Construct
The scope in which to define this construct.
- Type: str
The construct id used in the generated config for the BigqueryJob to import.
- Type: str
The id of the existing BigqueryJob that should be imported.
Refer to the {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#import import section} in the documentation of this resource for the id to use.
- Type: cdktf.TerraformProvider
Optional instance of the provider where the BigqueryJob to import is found.
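A sketch of calling this static method from inside a stack's __init__. The import id format below is illustrative only; consult the resource's import section for the exact format:

```python
# Generate import configuration for an existing job during `cdktf plan`.
bigquery_job.BigqueryJob.generate_config_for_import(
    self,
    "imported_job",
    "projects/my-project/jobs/job_existing/location/US",
)
```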
Name | Type | Description |
---|---|---|
node | constructs.Node | The tree node. |
cdktf_stack | cdktf.TerraformStack | No description. |
fqn | str | No description. |
friendly_unique_id | str | No description. |
terraform_meta_arguments | typing.Mapping[typing.Any] | No description. |
terraform_resource_type | str | No description. |
terraform_generator_metadata | cdktf.TerraformProviderGeneratorMetadata | No description. |
connection | typing.Union[cdktf.SSHProvisionerConnection, cdktf.WinrmProvisionerConnection] | No description. |
count | typing.Union[typing.Union[int, float], cdktf.TerraformCount] | No description. |
depends_on | typing.List[str] | No description. |
for_each | cdktf.ITerraformIterator | No description. |
lifecycle | cdktf.TerraformResourceLifecycle | No description. |
provider | cdktf.TerraformProvider | No description. |
provisioners | typing.List[typing.Union[cdktf.FileProvisioner, cdktf.LocalExecProvisioner, cdktf.RemoteExecProvisioner]] | No description. |
copy | BigqueryJobCopyOutputReference | No description. |
effective_labels | cdktf.StringMap | No description. |
extract | BigqueryJobExtractOutputReference | No description. |
job_type | str | No description. |
load | BigqueryJobLoadOutputReference | No description. |
query | BigqueryJobQueryOutputReference | No description. |
status | BigqueryJobStatusList | No description. |
terraform_labels | cdktf.StringMap | No description. |
timeouts | BigqueryJobTimeoutsOutputReference | No description. |
user_email | str | No description. |
copy_input | BigqueryJobCopy | No description. |
extract_input | BigqueryJobExtract | No description. |
id_input | str | No description. |
job_id_input | str | No description. |
job_timeout_ms_input | str | No description. |
labels_input | typing.Mapping[str] | No description. |
load_input | BigqueryJobLoad | No description. |
location_input | str | No description. |
project_input | str | No description. |
query_input | BigqueryJobQuery | No description. |
timeouts_input | typing.Union[cdktf.IResolvable, BigqueryJobTimeouts] | No description. |
id | str | No description. |
job_id | str | No description. |
job_timeout_ms | str | No description. |
labels | typing.Mapping[str] | No description. |
location | str | No description. |
project | str | No description. |
node: Node
- Type: constructs.Node
The tree node.
cdktf_stack: TerraformStack
- Type: cdktf.TerraformStack
fqn: str
- Type: str
friendly_unique_id: str
- Type: str
terraform_meta_arguments: typing.Mapping[typing.Any]
- Type: typing.Mapping[typing.Any]
terraform_resource_type: str
- Type: str
terraform_generator_metadata: TerraformProviderGeneratorMetadata
- Type: cdktf.TerraformProviderGeneratorMetadata
connection: typing.Union[SSHProvisionerConnection, WinrmProvisionerConnection]
- Type: typing.Union[cdktf.SSHProvisionerConnection, cdktf.WinrmProvisionerConnection]
count: typing.Union[typing.Union[int, float], TerraformCount]
- Type: typing.Union[typing.Union[int, float], cdktf.TerraformCount]
depends_on: typing.List[str]
- Type: typing.List[str]
for_each: ITerraformIterator
- Type: cdktf.ITerraformIterator
lifecycle: TerraformResourceLifecycle
- Type: cdktf.TerraformResourceLifecycle
provider: TerraformProvider
- Type: cdktf.TerraformProvider
provisioners: typing.List[typing.Union[FileProvisioner, LocalExecProvisioner, RemoteExecProvisioner]]
- Type: typing.List[typing.Union[cdktf.FileProvisioner, cdktf.LocalExecProvisioner, cdktf.RemoteExecProvisioner]]
copy: BigqueryJobCopyOutputReference
- Type: BigqueryJobCopyOutputReference
effective_labels: StringMap
- Type: cdktf.StringMap
extract: BigqueryJobExtractOutputReference
- Type: BigqueryJobExtractOutputReference
job_type: str
- Type: str
load: BigqueryJobLoadOutputReference
- Type: BigqueryJobLoadOutputReference
query: BigqueryJobQueryOutputReference
- Type: BigqueryJobQueryOutputReference
status: BigqueryJobStatusList
- Type: BigqueryJobStatusList
terraform_labels: StringMap
- Type: cdktf.StringMap
timeouts: BigqueryJobTimeoutsOutputReference
- Type: BigqueryJobTimeoutsOutputReference
user_email: str
- Type: str
copy_input: BigqueryJobCopy
- Type: BigqueryJobCopy
extract_input: BigqueryJobExtract
- Type: BigqueryJobExtract
id_input: str
- Type: str
job_id_input: str
- Type: str
job_timeout_ms_input: str
- Type: str
labels_input: typing.Mapping[str]
- Type: typing.Mapping[str]
load_input: BigqueryJobLoad
- Type: BigqueryJobLoad
location_input: str
- Type: str
project_input: str
- Type: str
query_input: BigqueryJobQuery
- Type: BigqueryJobQuery
timeouts_input: typing.Union[IResolvable, BigqueryJobTimeouts]
- Type: typing.Union[cdktf.IResolvable, BigqueryJobTimeouts]
id: str
- Type: str
job_id: str
- Type: str
job_timeout_ms: str
- Type: str
labels: typing.Mapping[str]
- Type: typing.Mapping[str]
location: str
- Type: str
project: str
- Type: str
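A sketch of reading the computed attributes above, assuming `job` is the BigqueryJob instance from the earlier sketch:

```python
from cdktf import TerraformOutput

# Expose computed attributes of the job as stack outputs.
TerraformOutput(self, "job_type", value=job.job_type)
TerraformOutput(self, "job_user_email", value=job.user_email)
```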
Name | Type | Description |
---|---|---|
tfResourceType | str | No description. |
tfResourceType: str
- Type: str
```python
from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobConfig(
  connection: typing.Union[SSHProvisionerConnection, WinrmProvisionerConnection] = None,
  count: typing.Union[typing.Union[int, float], TerraformCount] = None,
  depends_on: typing.List[ITerraformDependable] = None,
  for_each: ITerraformIterator = None,
  lifecycle: TerraformResourceLifecycle = None,
  provider: TerraformProvider = None,
  provisioners: typing.List[typing.Union[FileProvisioner, LocalExecProvisioner, RemoteExecProvisioner]] = None,
  job_id: str,
  copy: BigqueryJobCopy = None,
  extract: BigqueryJobExtract = None,
  id: str = None,
  job_timeout_ms: str = None,
  labels: typing.Mapping[str] = None,
  load: BigqueryJobLoad = None,
  location: str = None,
  project: str = None,
  query: BigqueryJobQuery = None,
  timeouts: BigqueryJobTimeouts = None
)
```
Name | Type | Description |
---|---|---|
connection | typing.Union[cdktf.SSHProvisionerConnection, cdktf.WinrmProvisionerConnection] | No description. |
count | typing.Union[typing.Union[int, float], cdktf.TerraformCount] | No description. |
depends_on | typing.List[cdktf.ITerraformDependable] | No description. |
for_each | cdktf.ITerraformIterator | No description. |
lifecycle | cdktf.TerraformResourceLifecycle | No description. |
provider | cdktf.TerraformProvider | No description. |
provisioners | typing.List[typing.Union[cdktf.FileProvisioner, cdktf.LocalExecProvisioner, cdktf.RemoteExecProvisioner]] | No description. |
job_id | str | The ID of the job. |
copy | BigqueryJobCopy | copy block. |
extract | BigqueryJobExtract | extract block. |
id | str | Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#id BigqueryJob#id}. |
job_timeout_ms | str | Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job. |
labels | typing.Mapping[str] | The labels associated with this job. You can use these to organize and group your jobs. |
load | BigqueryJobLoad | load block. |
location | str | The geographic location of the job. The default value is US. |
project | str | Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project BigqueryJob#project}. |
query | BigqueryJobQuery | query block. |
timeouts | BigqueryJobTimeouts | timeouts block. |
connection: typing.Union[SSHProvisionerConnection, WinrmProvisionerConnection]
- Type: typing.Union[cdktf.SSHProvisionerConnection, cdktf.WinrmProvisionerConnection]
count: typing.Union[typing.Union[int, float], TerraformCount]
- Type: typing.Union[typing.Union[int, float], cdktf.TerraformCount]
depends_on: typing.List[ITerraformDependable]
- Type: typing.List[cdktf.ITerraformDependable]
for_each: ITerraformIterator
- Type: cdktf.ITerraformIterator
lifecycle: TerraformResourceLifecycle
- Type: cdktf.TerraformResourceLifecycle
provider: TerraformProvider
- Type: cdktf.TerraformProvider
provisioners: typing.List[typing.Union[FileProvisioner, LocalExecProvisioner, RemoteExecProvisioner]]
- Type: typing.List[typing.Union[cdktf.FileProvisioner, cdktf.LocalExecProvisioner, cdktf.RemoteExecProvisioner]]
job_id: str
- Type: str
The ID of the job.
The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#job_id BigqueryJob#job_id}
copy: BigqueryJobCopy
- Type: BigqueryJobCopy
copy block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#copy BigqueryJob#copy}
extract: BigqueryJobExtract
- Type: BigqueryJobExtract
extract block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#extract BigqueryJob#extract}
id: str
- Type: str
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#id BigqueryJob#id}.
Please be aware that the id field is automatically added to all resources in Terraform providers using a Terraform provider SDK version below 2. If you experience problems setting this value it might not be settable. Please take a look at the provider documentation to ensure it should be settable.
job_timeout_ms: str
- Type: str
Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#job_timeout_ms BigqueryJob#job_timeout_ms}
labels: typing.Mapping[str]
- Type: typing.Mapping[str]
The labels associated with this job. You can use these to organize and group your jobs.
Note: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field 'effective_labels' for all of the labels present on the resource.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#labels BigqueryJob#labels}
load: BigqueryJobLoad
- Type: BigqueryJobLoad
load block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#load BigqueryJob#load}
location: str
- Type: str
The geographic location of the job. The default value is US.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#location BigqueryJob#location}
project: str
- Type: str
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project BigqueryJob#project}.
query: BigqueryJobQuery
- Type: BigqueryJobQuery
query block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#query BigqueryJob#query}
timeouts: BigqueryJobTimeouts
- Type: BigqueryJobTimeouts
timeouts block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#timeouts BigqueryJob#timeouts}
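A sketch of building the config struct on its own. Its fields mirror the keyword arguments of the BigqueryJob constructor, and the struct exposes them as read-only properties; all values are placeholders:

```python
# Construct the config struct and read back its typed properties.
config = bigquery_job.BigqueryJobConfig(
    job_id="job_from_config",
    location="US",
    labels={"env": "dev"},
    query=bigquery_job.BigqueryJobQuery(query="SELECT 1", use_legacy_sql=False),
)
assert config.job_id == "job_from_config"
assert config.location == "US"
```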
```python
from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobCopy(
  source_tables: typing.Union[IResolvable, typing.List[BigqueryJobCopySourceTables]],
  create_disposition: str = None,
  destination_encryption_configuration: BigqueryJobCopyDestinationEncryptionConfiguration = None,
  destination_table: BigqueryJobCopyDestinationTable = None,
  write_disposition: str = None
)
```
Name | Type | Description |
---|---|---|
source_tables | typing.Union[cdktf.IResolvable, typing.List[BigqueryJobCopySourceTables]] | source_tables block. |
create_disposition | str | Specifies whether the job is allowed to create new tables. |
destination_encryption_configuration | BigqueryJobCopyDestinationEncryptionConfiguration | destination_encryption_configuration block. |
destination_table | BigqueryJobCopyDestinationTable | destination_table block. |
write_disposition | str | Specifies the action that occurs if the destination table already exists. |
source_tables: typing.Union[IResolvable, typing.List[BigqueryJobCopySourceTables]]
- Type: typing.Union[cdktf.IResolvable, typing.List[BigqueryJobCopySourceTables]]
source_tables block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_tables BigqueryJob#source_tables}
create_disposition: str
- Type: str
Specifies whether the job is allowed to create new tables.
The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion Default value: "CREATE_IF_NEEDED" Possible values: ["CREATE_IF_NEEDED", "CREATE_NEVER"]
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create_disposition BigqueryJob#create_disposition}
destination_encryption_configuration: BigqueryJobCopyDestinationEncryptionConfiguration
- Type: BigqueryJobCopyDestinationEncryptionConfiguration
destination_encryption_configuration block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_encryption_configuration BigqueryJob#destination_encryption_configuration}
destination_table: BigqueryJobCopyDestinationTable
- Type: BigqueryJobCopyDestinationTable
destination_table block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_table BigqueryJob#destination_table}
write_disposition: str
- Type: str
Specifies the action that occurs if the destination table already exists.
The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion. Default value: "WRITE_EMPTY" Possible values: ["WRITE_TRUNCATE", "WRITE_APPEND", "WRITE_EMPTY"]
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#write_disposition BigqueryJob#write_disposition}
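A sketch of the same copy configuration expressed declaratively at construction time, this time using the fully-qualified table_id form documented below; all names are placeholders:

```python
# A copy job defined entirely through the copy struct.
bigquery_job.BigqueryJob(
    self,
    "copy_job_declarative",
    job_id="job_copy_declarative",
    copy=bigquery_job.BigqueryJobCopy(
        source_tables=[
            bigquery_job.BigqueryJobCopySourceTables(
                table_id="projects/my-project/datasets/source_dataset/tables/source_table"
            )
        ],
        destination_table=bigquery_job.BigqueryJobCopyDestinationTable(
            table_id="projects/my-project/datasets/dest_dataset/tables/dest_table"
        ),
        write_disposition="WRITE_APPEND",
    ),
)
```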
```python
from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobCopyDestinationEncryptionConfiguration(
  kms_key_name: str
)
```
Name | Type | Description |
---|---|---|
kms_key_name | str | Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. |
kms_key_name: str
- Type: str
Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table.
The BigQuery Service Account associated with your project requires access to this encryption key.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#kms_key_name BigqueryJob#kms_key_name}
```python
from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobCopyDestinationTable(
  table_id: str,
  dataset_id: str = None,
  project_id: str = None
)
```
Name | Type | Description |
---|---|---|
table_id | str | The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not. |
dataset_id | str | The ID of the dataset containing this table. |
project_id | str | The ID of the project containing this table. |
table_id: str
- Type: str
The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}
dataset_id: str
- Type: str
The ID of the dataset containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}
project_id: str
- Type: str
The ID of the project containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}
```python
from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobCopySourceTables(
  table_id: str,
  dataset_id: str = None,
  project_id: str = None
)
```
Name | Type | Description |
---|---|---|
table_id | str | The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not. |
dataset_id | str | The ID of the dataset containing this table. |
project_id | str | The ID of the project containing this table. |
table_id: str
- Type: str
The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}
dataset_id: str
- Type: str
The ID of the dataset containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}
project_id: str
- Type: str
The ID of the project containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobExtract(
destination_uris: typing.List[str],
compression: str = None,
destination_format: str = None,
field_delimiter: str = None,
print_header: typing.Union[bool, IResolvable] = None,
source_model: BigqueryJobExtractSourceModel = None,
source_table: BigqueryJobExtractSourceTable = None,
use_avro_logical_types: typing.Union[bool, IResolvable] = None
)
Name | Type | Description |
---|---|---|
destination_uris | typing.List[str] | A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written. |
compression | str | The compression type to use for exported files. |
destination_format | str | The exported file format. |
field_delimiter | str | When extracting data in CSV format, this defines the delimiter to use between fields in the exported data. |
print_header | typing.Union[bool, cdktf.IResolvable] | Whether to print out a header row in the results. Default is true. |
source_model | BigqueryJobExtractSourceModel | source_model block. |
source_table | BigqueryJobExtractSourceTable | source_table block. |
use_avro_logical_types | typing.Union[bool, cdktf.IResolvable] | Whether to use logical types when extracting to AVRO format. |
destination_uris: typing.List[str]
- Type: typing.List[str]
A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_uris BigqueryJob#destination_uris}
compression: str
- Type: str
The compression type to use for exported files.
Possible values include GZIP, DEFLATE, SNAPPY, and NONE. The default value is NONE. DEFLATE and SNAPPY are only supported for Avro.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#compression BigqueryJob#compression}
destination_format: str
- Type: str
The exported file format.
Possible values include CSV, NEWLINE_DELIMITED_JSON and AVRO for tables and SAVED_MODEL for models. The default value for tables is CSV. Tables with nested or repeated fields cannot be exported as CSV. The default value for models is SAVED_MODEL.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_format BigqueryJob#destination_format}
field_delimiter: str
- Type: str
When extracting data in CSV format, this defines the delimiter to use between fields in the exported data.
Default is ','
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#field_delimiter BigqueryJob#field_delimiter}
print_header: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
Whether to print out a header row in the results. Default is true.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#print_header BigqueryJob#print_header}
source_model: BigqueryJobExtractSourceModel
source_model block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_model BigqueryJob#source_model}
source_table: BigqueryJobExtractSourceTable
source_table block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_table BigqueryJob#source_table}
use_avro_logical_types: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
Whether to use logical types when extracting to AVRO format.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#use_avro_logical_types BigqueryJob#use_avro_logical_types}
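Putting these fields together, one plausible extract block that exports a table to Cloud Storage as gzipped CSV might look like this (bucket and table IDs are placeholders):
from cdktf_cdktf_provider_google import bigquery_job
extract_config = bigquery_job.BigqueryJobExtract(
    destination_uris=["gs://my-bucket/exports/part-*.csv"],  # '*' shards the output
    destination_format="CSV",
    compression="GZIP",
    print_header=True,
    source_table=bigquery_job.BigqueryJobExtractSourceTable(
        project_id="my-project",
        dataset_id="my_dataset",
        table_id="my_table",
    ),
)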
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobExtractSourceModel(
dataset_id: str,
model_id: str,
project_id: str
)
Name | Type | Description |
---|---|---|
dataset_id | str | The ID of the dataset containing this model. |
model_id | str | The ID of the model. |
project_id | str | The ID of the project containing this model. |
dataset_id: str
- Type: str
The ID of the dataset containing this model.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}
model_id: str
- Type: str
The ID of the model.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#model_id BigqueryJob#model_id}
project_id: str
- Type: str
The ID of the project containing this model.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}
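All three fields are required, so referencing a BigQuery ML model for extraction looks like this (IDs are placeholders):
from cdktf_cdktf_provider_google import bigquery_job
source_model = bigquery_job.BigqueryJobExtractSourceModel(
    project_id="my-project",
    dataset_id="my_dataset",
    model_id="my_model",
)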
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobExtractSourceTable(
table_id: str,
dataset_id: str = None,
project_id: str = None
)
Name | Type | Description |
---|---|---|
table_id | str | The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not. |
dataset_id | str | The ID of the dataset containing this table. |
project_id | str | The ID of the project containing this table. |
table_id: str
- Type: str
The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}
dataset_id: str
- Type: str
The ID of the dataset containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}
project_id: str
- Type: str
The ID of the project containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobLoad(
destination_table: BigqueryJobLoadDestinationTable,
source_uris: typing.List[str],
allow_jagged_rows: typing.Union[bool, IResolvable] = None,
allow_quoted_newlines: typing.Union[bool, IResolvable] = None,
autodetect: typing.Union[bool, IResolvable] = None,
create_disposition: str = None,
destination_encryption_configuration: BigqueryJobLoadDestinationEncryptionConfiguration = None,
encoding: str = None,
field_delimiter: str = None,
ignore_unknown_values: typing.Union[bool, IResolvable] = None,
json_extension: str = None,
max_bad_records: typing.Union[int, float] = None,
null_marker: str = None,
parquet_options: BigqueryJobLoadParquetOptions = None,
projection_fields: typing.List[str] = None,
quote: str = None,
schema_update_options: typing.List[str] = None,
skip_leading_rows: typing.Union[int, float] = None,
source_format: str = None,
time_partitioning: BigqueryJobLoadTimePartitioning = None,
write_disposition: str = None
)
Name | Type | Description |
---|---|---|
destination_table | BigqueryJobLoadDestinationTable | destination_table block. |
source_uris | typing.List[str] | The fully-qualified URIs that point to your data in Google Cloud. |
allow_jagged_rows | typing.Union[bool, cdktf.IResolvable] | Accept rows that are missing trailing optional columns. |
allow_quoted_newlines | typing.Union[bool, cdktf.IResolvable] | Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. |
autodetect | typing.Union[bool, cdktf.IResolvable] | Indicates if we should automatically infer the options and schema for CSV and JSON sources. |
create_disposition | str | Specifies whether the job is allowed to create new tables. |
destination_encryption_configuration | BigqueryJobLoadDestinationEncryptionConfiguration | destination_encryption_configuration block. |
encoding | str | The character encoding of the data. |
field_delimiter | str | The separator for fields in a CSV file. |
ignore_unknown_values | typing.Union[bool, cdktf.IResolvable] | Indicates if BigQuery should allow extra values that are not represented in the table schema. |
json_extension | str | If sourceFormat is set to newline-delimited JSON, indicates whether it should be processed as a JSON variant such as GeoJSON. |
max_bad_records | typing.Union[int, float] | The maximum number of bad records that BigQuery can ignore when running the job. |
null_marker | str | Specifies a string that represents a null value in a CSV file. |
parquet_options | BigqueryJobLoadParquetOptions | parquet_options block. |
projection_fields | typing.List[str] | If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup. |
quote | str | The value that is used to quote data sections in a CSV file. |
schema_update_options | typing.List[str] | Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration. |
skip_leading_rows | typing.Union[int, float] | The number of rows at the top of a CSV file that BigQuery will skip when loading the data. |
source_format | str | The format of the data files. |
time_partitioning | BigqueryJobLoadTimePartitioning | time_partitioning block. |
write_disposition | str | Specifies the action that occurs if the destination table already exists. |
destination_table: BigqueryJobLoadDestinationTable
destination_table block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_table BigqueryJob#destination_table}
source_uris: typing.List[str]
- Type: typing.List[str]
The fully-qualified URIs that point to your data in Google Cloud.
For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_uris BigqueryJob#source_uris}
allow_jagged_rows: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
Accept rows that are missing trailing optional columns.
The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#allow_jagged_rows BigqueryJob#allow_jagged_rows}
allow_quoted_newlines: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.
The default value is false.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#allow_quoted_newlines BigqueryJob#allow_quoted_newlines}
autodetect: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
Indicates if we should automatically infer the options and schema for CSV and JSON sources.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#autodetect BigqueryJob#autodetect}
create_disposition: str
- Type: str
Specifies whether the job is allowed to create new tables.
The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion Default value: "CREATE_IF_NEEDED" Possible values: ["CREATE_IF_NEEDED", "CREATE_NEVER"]
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create_disposition BigqueryJob#create_disposition}
destination_encryption_configuration: BigqueryJobLoadDestinationEncryptionConfiguration
destination_encryption_configuration block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_encryption_configuration BigqueryJob#destination_encryption_configuration}
encoding: str
- Type: str
The character encoding of the data.
The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#encoding BigqueryJob#encoding}
field_delimiter: str
- Type: str
The separator for fields in a CSV file.
The separator can be any ISO-8859-1 single-byte character. To use a character in the range 128-255, you must encode the character as UTF8. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#field_delimiter BigqueryJob#field_delimiter}
ignore_unknown_values: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
Indicates if BigQuery should allow extra values that are not represented in the table schema.
If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: for CSV, trailing columns; for JSON, named values that don't match any column names.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#ignore_unknown_values BigqueryJob#ignore_unknown_values}
json_extension: str
- Type: str
If sourceFormat is set to newline-delimited JSON, indicates whether it should be processed as a JSON variant such as GeoJSON.
For a sourceFormat other than JSON, omit this field. If the sourceFormat is newline-delimited JSON, set this to GEOJSON to process newline-delimited GeoJSON.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#json_extension BigqueryJob#json_extension}
max_bad_records: typing.Union[int, float]
- Type: typing.Union[int, float]
The maximum number of bad records that BigQuery can ignore when running the job.
If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#max_bad_records BigqueryJob#max_bad_records}
null_marker: str
- Type: str
Specifies a string that represents a null value in a CSV file.
For example, if you specify "\N", BigQuery interprets "\N" as a null value when loading a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#null_marker BigqueryJob#null_marker}
parquet_options: BigqueryJobLoadParquetOptions
parquet_options block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#parquet_options BigqueryJob#parquet_options}
projection_fields: typing.List[str]
- Type: typing.List[str]
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup.
Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#projection_fields BigqueryJob#projection_fields}
quote: str
- Type: str
The value that is used to quote data sections in a CSV file.
BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#quote BigqueryJob#quote}
schema_update_options: typing.List[str]
- Type: typing.List[str]
Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration.
Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#schema_update_options BigqueryJob#schema_update_options}
skip_leading_rows: typing.Union[int, float]
- Type: typing.Union[int, float]
The number of rows at the top of a CSV file that BigQuery will skip when loading the data.
The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is as follows: if skipLeadingRows is unspecified, autodetect tries to detect headers in the first row; if they are not detected, the row is read as data, otherwise data is read starting from the second row. If skipLeadingRows is 0, autodetect treats the file as having no headers and data is read starting from the first row. If skipLeadingRows = N > 0, autodetect skips N-1 rows and tries to detect headers in row N; if headers are not detected, row N is just skipped, otherwise row N is used to extract column names for the detected schema.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#skip_leading_rows BigqueryJob#skip_leading_rows}
source_format: str
- Type: str
The format of the data files.
For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". For parquet, specify "PARQUET". For orc, specify "ORC". [Beta] For Bigtable, specify "BIGTABLE". The default value is CSV.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_format BigqueryJob#source_format}
time_partitioning: BigqueryJobLoadTimePartitioning
time_partitioning block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#time_partitioning BigqueryJob#time_partitioning}
write_disposition: str
- Type: str
Specifies the action that occurs if the destination table already exists.
The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion. Default value: "WRITE_EMPTY" Possible values: ["WRITE_TRUNCATE", "WRITE_APPEND", "WRITE_EMPTY"]
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#write_disposition BigqueryJob#write_disposition}
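As a sketch under assumed names, a load block for headered CSV files in Cloud Storage could combine these options as follows:
from cdktf_cdktf_provider_google import bigquery_job
load_config = bigquery_job.BigqueryJobLoad(
    source_uris=["gs://my-bucket/data/*.csv"],  # placeholder bucket and path
    destination_table=bigquery_job.BigqueryJobLoadDestinationTable(
        project_id="my-project",
        dataset_id="my_dataset",
        table_id="my_table",
    ),
    source_format="CSV",
    skip_leading_rows=1,      # skip the header row
    autodetect=True,          # infer the schema from the data
    max_bad_records=10,       # tolerate up to 10 bad rows
    write_disposition="WRITE_APPEND",
)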
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobLoadDestinationEncryptionConfiguration(
kms_key_name: str
)
Name | Type | Description |
---|---|---|
kms_key_name | str | Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. |
kms_key_name: str
- Type: str
Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table.
The BigQuery Service Account associated with your project requires access to this encryption key.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#kms_key_name BigqueryJob#kms_key_name}
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobLoadDestinationTable(
table_id: str,
dataset_id: str = None,
project_id: str = None
)
Name | Type | Description |
---|---|---|
table_id | str | The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not. |
dataset_id | str | The ID of the dataset containing this table. |
project_id | str | The ID of the project containing this table. |
table_id: str
- Type: str
The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}
dataset_id: str
- Type: str
The ID of the dataset containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}
project_id: str
- Type: str
The ID of the project containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobLoadParquetOptions(
enable_list_inference: typing.Union[bool, IResolvable] = None,
enum_as_string: typing.Union[bool, IResolvable] = None
)
Name | Type | Description |
---|---|---|
enable_list_inference | typing.Union[bool, cdktf.IResolvable] | If sourceFormat is set to PARQUET, indicates whether to use schema inference specifically for Parquet LIST logical type. |
enum_as_string | typing.Union[bool, cdktf.IResolvable] | If sourceFormat is set to PARQUET, indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default. |
enable_list_inference: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
If sourceFormat is set to PARQUET, indicates whether to use schema inference specifically for Parquet LIST logical type.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#enable_list_inference BigqueryJob#enable_list_inference}
enum_as_string: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
If sourceFormat is set to PARQUET, indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#enum_as_string BigqueryJob#enum_as_string}
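Both options only have an effect when source_format is "PARQUET"; a minimal sketch:
from cdktf_cdktf_provider_google import bigquery_job
parquet_options = bigquery_job.BigqueryJobLoadParquetOptions(
    enable_list_inference=True,  # infer schema for the Parquet LIST logical type
    enum_as_string=True,         # load Parquet ENUM as STRING rather than BYTES
)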
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobLoadTimePartitioning(
type: str,
expiration_ms: str = None,
field: str = None
)
Name | Type | Description |
---|---|---|
type | str | The only type supported is DAY, which will generate one partition per day. |
expiration_ms | str | Number of milliseconds for which to keep the storage for a partition. |
field | str | If not set, the table is partitioned by pseudo column '_PARTITIONTIME'; if set, the table is partitioned by this field. |
type: str
- Type: str
The only type supported is DAY, which will generate one partition per day.
Providing an empty string used to cause an error, but in OnePlatform the field will be treated as unset.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#type BigqueryJob#type}
expiration_ms: str
- Type: str
Number of milliseconds for which to keep the storage for a partition.
A wrapper is used here because 0 is an invalid value.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#expiration_ms BigqueryJob#expiration_ms}
field: str
- Type: str
If not set, the table is partitioned by pseudo column '_PARTITIONTIME';
if set, the table is partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED. A wrapper is used here because an empty string is an invalid value.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#field BigqueryJob#field}
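For instance, day-level partitioning on a timestamp column with a 90-day retention could be sketched as follows (the column name is a placeholder; note that expiration_ms is a string):
from cdktf_cdktf_provider_google import bigquery_job
time_partitioning = bigquery_job.BigqueryJobLoadTimePartitioning(
    type="DAY",
    field="event_timestamp",                      # top-level TIMESTAMP or DATE column
    expiration_ms=str(90 * 24 * 60 * 60 * 1000),  # 90 days, in milliseconds
)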
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobQuery(
query: str,
allow_large_results: typing.Union[bool, IResolvable] = None,
create_disposition: str = None,
default_dataset: BigqueryJobQueryDefaultDataset = None,
destination_encryption_configuration: BigqueryJobQueryDestinationEncryptionConfiguration = None,
destination_table: BigqueryJobQueryDestinationTable = None,
flatten_results: typing.Union[bool, IResolvable] = None,
maximum_billing_tier: typing.Union[int, float] = None,
maximum_bytes_billed: str = None,
parameter_mode: str = None,
priority: str = None,
schema_update_options: typing.List[str] = None,
script_options: BigqueryJobQueryScriptOptions = None,
use_legacy_sql: typing.Union[bool, IResolvable] = None,
use_query_cache: typing.Union[bool, IResolvable] = None,
user_defined_function_resources: typing.Union[IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]] = None,
write_disposition: str = None
)
Name | Type | Description |
---|---|---|
query | str | SQL query text to execute. |
allow_large_results | typing.Union[bool, cdktf.IResolvable] | If true and query uses legacy SQL dialect, allows the query to produce arbitrarily large result tables at a slight cost in performance. |
create_disposition | str | Specifies whether the job is allowed to create new tables. |
default_dataset | BigqueryJobQueryDefaultDataset | default_dataset block. |
destination_encryption_configuration | BigqueryJobQueryDestinationEncryptionConfiguration | destination_encryption_configuration block. |
destination_table | BigqueryJobQueryDestinationTable | destination_table block. |
flatten_results | typing.Union[bool, cdktf.IResolvable] | If true and query uses legacy SQL dialect, flattens all nested and repeated fields in the query results. |
maximum_billing_tier | typing.Union[int, float] | Limits the billing tier for this job. |
maximum_bytes_billed | str | Limits the bytes billed for this job. |
parameter_mode | str | Standard SQL only. |
priority | str | Specifies a priority for the query. Default value: "INTERACTIVE" Possible values: ["INTERACTIVE", "BATCH"]. |
schema_update_options | typing.List[str] | Allows the schema of the destination table to be updated as a side effect of the query job. |
script_options | BigqueryJobQueryScriptOptions | script_options block. |
use_legacy_sql | typing.Union[bool, cdktf.IResolvable] | Specifies whether to use BigQuery's legacy SQL dialect for this query. |
use_query_cache | typing.Union[bool, cdktf.IResolvable] | Whether to look for the result in the query cache. |
user_defined_function_resources | typing.Union[cdktf.IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]] | user_defined_function_resources block. |
write_disposition | str | Specifies the action that occurs if the destination table already exists. |
query: str
- Type: str
SQL query text to execute.
The useLegacySql field can be used to indicate whether the query uses legacy SQL or standard SQL. NOTE: queries containing DML language ('DELETE', 'UPDATE', 'MERGE', 'INSERT') must specify 'create_disposition = ""' and 'write_disposition = ""'.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#query BigqueryJob#query}
allow_large_results: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
If true and query uses legacy SQL dialect, allows the query to produce arbitrarily large result tables at a slight cost in performance.
Requires destinationTable to be set. For standard SQL queries, this flag is ignored and large results are always allowed. However, you must still set destinationTable when result size exceeds the allowed maximum response size.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#allow_large_results BigqueryJob#allow_large_results}
create_disposition: str
- Type: str
Specifies whether the job is allowed to create new tables.
The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion Default value: "CREATE_IF_NEEDED" Possible values: ["CREATE_IF_NEEDED", "CREATE_NEVER"]
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create_disposition BigqueryJob#create_disposition}
default_dataset: BigqueryJobQueryDefaultDataset
default_dataset block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#default_dataset BigqueryJob#default_dataset}
destination_encryption_configuration: BigqueryJobQueryDestinationEncryptionConfiguration
destination_encryption_configuration block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_encryption_configuration BigqueryJob#destination_encryption_configuration}
destination_table: BigqueryJobQueryDestinationTable
destination_table block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_table BigqueryJob#destination_table}
flatten_results: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
If true and query uses legacy SQL dialect, flattens all nested and repeated fields in the query results.
allowLargeResults must be true if this is set to false. For standard SQL queries, this flag is ignored and results are never flattened.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#flatten_results BigqueryJob#flatten_results}
maximum_billing_tier: typing.Union[int, float]
- Type: typing.Union[int, float]
Limits the billing tier for this job.
Queries that have resource usage beyond this tier will fail (without incurring a charge). If unspecified, this will be set to your project default.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#maximum_billing_tier BigqueryJob#maximum_billing_tier}
maximum_bytes_billed: str
- Type: str
Limits the bytes billed for this job.
Queries that will have bytes billed beyond this limit will fail (without incurring a charge). If unspecified, this will be set to your project default.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#maximum_bytes_billed BigqueryJob#maximum_bytes_billed}
parameter_mode: str
- Type: str
Standard SQL only.
Set to POSITIONAL to use positional (?) query parameters or to NAMED to use named (@myparam) query parameters in this query.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#parameter_mode BigqueryJob#parameter_mode}
priority: str
- Type: str
Specifies a priority for the query. Default value: "INTERACTIVE" Possible values: ["INTERACTIVE", "BATCH"].
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#priority BigqueryJob#priority}
schema_update_options: typing.List[str]
- Type: typing.List[str]
Allows the schema of the destination table to be updated as a side effect of the query job.
Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#schema_update_options BigqueryJob#schema_update_options}
script_options: BigqueryJobQueryScriptOptions
script_options block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#script_options BigqueryJob#script_options}
use_legacy_sql: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
Specifies whether to use BigQuery's legacy SQL dialect for this query.
The default value is true. If set to false, the query will use BigQuery's standard SQL.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#use_legacy_sql BigqueryJob#use_legacy_sql}
use_query_cache: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
Whether to look for the result in the query cache.
The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. Moreover, the query cache is only available when a query does not have a destination table specified. The default value is true.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#use_query_cache BigqueryJob#use_query_cache}
user_defined_function_resources: typing.Union[IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]]
- Type: typing.Union[cdktf.IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]]
user_defined_function_resources block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#user_defined_function_resources BigqueryJob#user_defined_function_resources}
write_disposition: str
- Type: str
Specifies the action that occurs if the destination table already exists.
The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion. Default value: "WRITE_EMPTY" Possible values: ["WRITE_TRUNCATE", "WRITE_APPEND", "WRITE_EMPTY"]
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#write_disposition BigqueryJob#write_disposition}
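Tying these together, a sketch of a query block that writes its result to an explicit destination table (the SQL and all IDs are placeholders):
from cdktf_cdktf_provider_google import bigquery_job
query_config = bigquery_job.BigqueryJobQuery(
    query="SELECT name, COUNT(*) AS n FROM `my-project.my_dataset.my_table` GROUP BY name",
    use_legacy_sql=False,  # standard SQL
    destination_table=bigquery_job.BigqueryJobQueryDestinationTable(
        project_id="my-project",
        dataset_id="my_dataset",
        table_id="name_counts",
    ),
    create_disposition="CREATE_IF_NEEDED",
    write_disposition="WRITE_TRUNCATE",
    priority="BATCH",  # queue rather than run interactively
)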
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobQueryDefaultDataset(
dataset_id: str,
project_id: str = None
)
Name | Type | Description |
---|---|---|
dataset_id | str | The dataset. Can be specified as '{{dataset_id}}' if 'project_id' is also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}' if not. |
project_id | str | The ID of the project containing this dataset. |
dataset_id: str
- Type: str
The dataset. Can be specified as '{{dataset_id}}' if 'project_id' is also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}' if not.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}
project_id: str
- Type: str
The ID of the project containing this dataset.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobQueryDestinationEncryptionConfiguration(
kms_key_name: str
)
Name | Type | Description |
---|---|---|
kms_key_name | str | Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. |
kms_key_name: str
- Type: str
Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table.
The BigQuery Service Account associated with your project requires access to this encryption key.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#kms_key_name BigqueryJob#kms_key_name}
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobQueryDestinationTable(
table_id: str,
dataset_id: str = None,
project_id: str = None
)
Name | Type | Description |
---|---|---|
table_id | str | The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not. |
dataset_id | str | The ID of the dataset containing this table. |
project_id | str | The ID of the project containing this table. |
table_id: str
- Type: str
The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}
dataset_id: str
- Type: str
The ID of the dataset containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}
project_id: str
- Type: str
The ID of the project containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobQueryScriptOptions(
key_result_statement: str = None,
statement_byte_budget: str = None,
statement_timeout_ms: str = None
)
Name | Type | Description |
---|---|---|
key_result_statement | str | Determines which statement in the script represents the "key result", used to populate the schema and query results of the script job. |
statement_byte_budget | str | Limit on the number of bytes billed per statement. Exceeding this budget results in an error. |
statement_timeout_ms | str | Timeout period for each statement in a script. |
key_result_statement: str
- Type: str
Determines which statement in the script represents the "key result", used to populate the schema and query results of the script job.
Possible values: ["LAST", "FIRST_SELECT"]
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#key_result_statement BigqueryJob#key_result_statement}
statement_byte_budget: str
- Type: str
Limit on the number of bytes billed per statement. Exceeding this budget results in an error.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#statement_byte_budget BigqueryJob#statement_byte_budget}
statement_timeout_ms: str
- Type: str
Timeout period for each statement in a script.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#statement_timeout_ms BigqueryJob#statement_timeout_ms}
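For multi-statement scripts, these limits could be set like so; the budget and timeout values are purely illustrative, and note that both are strings:
from cdktf_cdktf_provider_google import bigquery_job
script_options = bigquery_job.BigqueryJobQueryScriptOptions(
    key_result_statement="LAST",
    statement_byte_budget="1073741824",  # ~1 GiB billed per statement (illustrative)
    statement_timeout_ms="60000",        # 60 seconds per statement (illustrative)
)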
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobQueryUserDefinedFunctionResources(
inline_code: str = None,
resource_uri: str = None
)
Name | Type | Description |
---|---|---|
inline_code | str | An inline resource that contains code for a user-defined function (UDF). |
resource_uri | str | A code resource to load from a Google Cloud Storage URI (gs://bucket/path). |
inline_code: str
- Type: str
An inline resource that contains code for a user-defined function (UDF).
Providing an inline code resource is equivalent to providing a URI for a file containing the same code.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#inline_code BigqueryJob#inline_code}
resource_uri: str
- Type: str
A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#resource_uri BigqueryJob#resource_uri}
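A UDF can therefore be supplied either inline or from Cloud Storage; the two fields are alternatives, and the values below are placeholders:
from cdktf_cdktf_provider_google import bigquery_job
inline_udf = bigquery_job.BigqueryJobQueryUserDefinedFunctionResources(
    inline_code="function doubled(row, emit) { emit({out: row.value * 2}); }",
)
gcs_udf = bigquery_job.BigqueryJobQueryUserDefinedFunctionResources(
    resource_uri="gs://my-bucket/udfs/doubled.js",
)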
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobStatus()
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobStatusErrorResult()
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobStatusErrors()
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobTimeouts(
create: str = None,
delete: str = None,
update: str = None
)
Name | Type | Description |
---|---|---|
create | str | Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create BigqueryJob#create}. |
delete | str | Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#delete BigqueryJob#delete}. |
update | str | Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#update BigqueryJob#update}. |
create: str
- Type: str
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create BigqueryJob#create}.
delete: str
- Type: str
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#delete BigqueryJob#delete}.
update: str
- Type: str
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#update BigqueryJob#update}.
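These are typically Terraform duration strings such as "10m"; a minimal sketch with illustrative values:
from cdktf_cdktf_provider_google import bigquery_job
timeouts = bigquery_job.BigqueryJobTimeouts(
    create="10m",
    delete="10m",
    update="10m",
)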
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobCopyDestinationEncryptionConfigurationOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
kms_key_version | str | No description. |
kms_key_name_input | str | No description. |
kms_key_name | str | No description. |
internal_value | BigqueryJobCopyDestinationEncryptionConfiguration | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
kms_key_version: str
- Type: str
kms_key_name_input: str
- Type: str
kms_key_name: str
- Type: str
internal_value: BigqueryJobCopyDestinationEncryptionConfiguration
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobCopyDestinationTableOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
reset_dataset_id | No description. |
reset_project_id | No description. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def reset_dataset_id() -> None
def reset_project_id() -> None
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
dataset_id_input | str | No description. |
project_id_input | str | No description. |
table_id_input | str | No description. |
dataset_id | str | No description. |
project_id | str | No description. |
table_id | str | No description. |
internal_value | BigqueryJobCopyDestinationTable | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
dataset_id_input: str
- Type: str
project_id_input: str
- Type: str
table_id_input: str
- Type: str
dataset_id: str
- Type: str
project_id: str
- Type: str
table_id: str
- Type: str
internal_value: BigqueryJobCopyDestinationTable
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobCopyOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
put_destination_encryption_configuration | No description. |
put_destination_table | No description. |
put_source_tables | No description. |
reset_create_disposition | No description. |
reset_destination_encryption_configuration | No description. |
reset_destination_table | No description. |
reset_write_disposition | No description. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def put_destination_encryption_configuration(
kms_key_name: str
) -> None
- Type: str
Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table.
The BigQuery Service Account associated with your project requires access to this encryption key.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#kms_key_name BigqueryJob#kms_key_name}
def put_destination_table(
table_id: str,
dataset_id: str = None,
project_id: str = None
) -> None
- Type: str
The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}
- Type: str
The ID of the dataset containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}
- Type: str
The ID of the project containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}
def put_source_tables(
value: typing.Union[IResolvable, typing.List[BigqueryJobCopySourceTables]]
) -> None
- Type: typing.Union[cdktf.IResolvable, typing.List[BigqueryJobCopySourceTables]]
def reset_create_disposition() -> None
def reset_destination_encryption_configuration() -> None
def reset_destination_table() -> None
def reset_write_disposition() -> None
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
destination_encryption_configuration | BigqueryJobCopyDestinationEncryptionConfigurationOutputReference | No description. |
destination_table | BigqueryJobCopyDestinationTableOutputReference | No description. |
source_tables | BigqueryJobCopySourceTablesList | No description. |
create_disposition_input | str | No description. |
destination_encryption_configuration_input | BigqueryJobCopyDestinationEncryptionConfiguration | No description. |
destination_table_input | BigqueryJobCopyDestinationTable | No description. |
source_tables_input | typing.Union[cdktf.IResolvable, typing.List[BigqueryJobCopySourceTables]] | No description. |
write_disposition_input | str | No description. |
create_disposition | str | No description. |
write_disposition | str | No description. |
internal_value | BigqueryJobCopy | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
destination_encryption_configuration: BigqueryJobCopyDestinationEncryptionConfigurationOutputReference
destination_table: BigqueryJobCopyDestinationTableOutputReference
source_tables: BigqueryJobCopySourceTablesList
create_disposition_input: str
- Type: str
destination_encryption_configuration_input: BigqueryJobCopyDestinationEncryptionConfiguration
destination_table_input: BigqueryJobCopyDestinationTable
source_tables_input: typing.Union[IResolvable, typing.List[BigqueryJobCopySourceTables]]
- Type: typing.Union[cdktf.IResolvable, typing.List[BigqueryJobCopySourceTables]]
write_disposition_input: str
- Type: str
create_disposition: str
- Type: str
write_disposition: str
- Type: str
internal_value: BigqueryJobCopy
- Type: BigqueryJobCopy
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobCopySourceTablesList(
terraform_resource: IInterpolatingParent,
terraform_attribute: str,
wraps_set: bool
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
wraps_set | bool | Whether the list wraps a set (tolist() will be added so that items can be accessed via an index). |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
- Type: bool
Whether the list wraps a set (tolist() will be added so that items can be accessed via an index).
Name | Description |
---|---|
all_with_map_key | Creates an iterator for this complex list. |
compute_fqn | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
get | No description. |
def all_with_map_key(
map_key_attribute_name: str
) -> DynamicListTerraformIterator
Creates an iterator for this complex list.
The list will be converted into a map keyed by map_key_attribute_name.
- Type: str
def compute_fqn() -> str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def get(
index: typing.Union[int, float]
) -> BigqueryJobCopySourceTablesOutputReference
- Type: typing.Union[int, float]
The index of the item to return.
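Because `source_tables` is modeled as a complex list, individual elements are read through `get`, which returns an output reference rather than a plain dict. A minimal sketch, assuming an existing `BigqueryJob` named `job` with a copy block:

```python
# Index into the wrapped list; the returned output reference exposes the
# element's attributes as tokens resolved at synthesis time.
first_source = job.copy.source_tables.get(0)
first_table_id = first_source.table_id
```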
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
internal_value | typing.Union[cdktf.IResolvable, typing.List[BigqueryJobCopySourceTables]] | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
internal_value: typing.Union[IResolvable, typing.List[BigqueryJobCopySourceTables]]
- Type: typing.Union[cdktf.IResolvable, typing.List[BigqueryJobCopySourceTables]]
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobCopySourceTablesOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str,
complex_object_index: typing.Union[int, float],
complex_object_is_from_set: bool
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
complex_object_index | typing.Union[int, float] | The index of this item in the list. |
complex_object_is_from_set | bool | Whether the list wraps a set (tolist() will be added so that items can be accessed via an index). |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
- Type: typing.Union[int, float]
The index of this item in the list.
- Type: bool
Whether the list wraps a set (tolist() will be added so that items can be accessed via an index).
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
reset_dataset_id | No description. |
reset_project_id | No description. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def reset_dataset_id() -> None
def reset_project_id() -> None
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
dataset_id_input | str | No description. |
project_id_input | str | No description. |
table_id_input | str | No description. |
dataset_id | str | No description. |
project_id | str | No description. |
table_id | str | No description. |
internal_value | typing.Union[cdktf.IResolvable, BigqueryJobCopySourceTables] | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
dataset_id_input: str
- Type: str
project_id_input: str
- Type: str
table_id_input: str
- Type: str
dataset_id: str
- Type: str
project_id: str
- Type: str
table_id: str
- Type: str
internal_value: typing.Union[IResolvable, BigqueryJobCopySourceTables]
- Type: typing.Union[cdktf.IResolvable, BigqueryJobCopySourceTables]
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobExtractOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
put_source_model | No description. |
put_source_table | No description. |
reset_compression | No description. |
reset_destination_format | No description. |
reset_field_delimiter | No description. |
reset_print_header | No description. |
reset_source_model | No description. |
reset_source_table | No description. |
reset_use_avro_logical_types | No description. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def put_source_model(
dataset_id: str,
model_id: str,
project_id: str
) -> None
- Type: str
The ID of the dataset containing this model.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}
- Type: str
The ID of the model.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#model_id BigqueryJob#model_id}
- Type: str
The ID of the project containing this model.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}
def put_source_table(
table_id: str,
dataset_id: str = None,
project_id: str = None
) -> None
- Type: str
The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or in the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}
- Type: str
The ID of the dataset containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}
- Type: str
The ID of the project containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}
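A minimal sketch of the extract block's `put_*` helpers, assuming an existing `BigqueryJob` named `job` with an extract block; the IDs below are hypothetical:

```python
# Extract from a table; use put_source_model instead when exporting a model.
job.extract.put_source_table(
    table_id="events",
    dataset_id="analytics",
    project_id="my-project",
)
```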
def reset_compression() -> None
def reset_destination_format() -> None
def reset_field_delimiter() -> None
def reset_print_header() -> None
def reset_source_model() -> None
def reset_source_table() -> None
def reset_use_avro_logical_types() -> None
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
source_model | BigqueryJobExtractSourceModelOutputReference | No description. |
source_table | BigqueryJobExtractSourceTableOutputReference | No description. |
compression_input | str | No description. |
destination_format_input | str | No description. |
destination_uris_input | typing.List[str] | No description. |
field_delimiter_input | str | No description. |
print_header_input | typing.Union[bool, cdktf.IResolvable] | No description. |
source_model_input | BigqueryJobExtractSourceModel | No description. |
source_table_input | BigqueryJobExtractSourceTable | No description. |
use_avro_logical_types_input | typing.Union[bool, cdktf.IResolvable] | No description. |
compression | str | No description. |
destination_format | str | No description. |
destination_uris | typing.List[str] | No description. |
field_delimiter | str | No description. |
print_header | typing.Union[bool, cdktf.IResolvable] | No description. |
use_avro_logical_types | typing.Union[bool, cdktf.IResolvable] | No description. |
internal_value | BigqueryJobExtract | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
source_model: BigqueryJobExtractSourceModelOutputReference
source_table: BigqueryJobExtractSourceTableOutputReference
compression_input: str
- Type: str
destination_format_input: str
- Type: str
destination_uris_input: typing.List[str]
- Type: typing.List[str]
field_delimiter_input: str
- Type: str
print_header_input: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
source_model_input: BigqueryJobExtractSourceModel
source_table_input: BigqueryJobExtractSourceTable
use_avro_logical_types_input: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
compression: str
- Type: str
destination_format: str
- Type: str
destination_uris: typing.List[str]
- Type: typing.List[str]
field_delimiter: str
- Type: str
print_header: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
use_avro_logical_types: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
internal_value: BigqueryJobExtract
- Type: BigqueryJobExtract
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobExtractSourceModelOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
dataset_id_input | str | No description. |
model_id_input | str | No description. |
project_id_input | str | No description. |
dataset_id | str | No description. |
model_id | str | No description. |
project_id | str | No description. |
internal_value | BigqueryJobExtractSourceModel | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
dataset_id_input: str
- Type: str
model_id_input: str
- Type: str
project_id_input: str
- Type: str
dataset_id: str
- Type: str
model_id: str
- Type: str
project_id: str
- Type: str
internal_value: BigqueryJobExtractSourceModel
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobExtractSourceTableOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
reset_dataset_id | No description. |
reset_project_id | No description. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def reset_dataset_id() -> None
def reset_project_id() -> None
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
dataset_id_input | str | No description. |
project_id_input | str | No description. |
table_id_input | str | No description. |
dataset_id | str | No description. |
project_id | str | No description. |
table_id | str | No description. |
internal_value | BigqueryJobExtractSourceTable | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
dataset_id_input: str
- Type: str
project_id_input: str
- Type: str
table_id_input: str
- Type: str
dataset_id: str
- Type: str
project_id: str
- Type: str
table_id: str
- Type: str
internal_value: BigqueryJobExtractSourceTable
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobLoadDestinationEncryptionConfigurationOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
kms_key_version | str | No description. |
kms_key_name_input | str | No description. |
kms_key_name | str | No description. |
internal_value | BigqueryJobLoadDestinationEncryptionConfiguration | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
kms_key_version: str
- Type: str
kms_key_name_input: str
- Type: str
kms_key_name: str
- Type: str
internal_value: BigqueryJobLoadDestinationEncryptionConfiguration
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobLoadDestinationTableOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
reset_dataset_id | No description. |
reset_project_id | No description. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def reset_dataset_id() -> None
def reset_project_id() -> None
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
dataset_id_input | str | No description. |
project_id_input | str | No description. |
table_id_input | str | No description. |
dataset_id | str | No description. |
project_id | str | No description. |
table_id | str | No description. |
internal_value | BigqueryJobLoadDestinationTable | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
dataset_id_input: str
- Type: str
project_id_input: str
- Type: str
table_id_input: str
- Type: str
dataset_id: str
- Type: str
project_id: str
- Type: str
table_id: str
- Type: str
internal_value: BigqueryJobLoadDestinationTable
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobLoadOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def put_destination_encryption_configuration(
kms_key_name: str
) -> None
- Type: str
Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table.
The BigQuery Service Account associated with your project requires access to this encryption key.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#kms_key_name BigqueryJob#kms_key_name}
def put_destination_table(
table_id: str,
dataset_id: str = None,
project_id: str = None
) -> None
- Type: str
The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or in the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}
- Type: str
The ID of the dataset containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}
- Type: str
The ID of the project containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}
def put_parquet_options(
enable_list_inference: typing.Union[bool, IResolvable] = None,
enum_as_string: typing.Union[bool, IResolvable] = None
) -> None
- Type: typing.Union[bool, cdktf.IResolvable]
If sourceFormat is set to PARQUET, indicates whether to use schema inference specifically for the Parquet LIST logical type.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#enable_list_inference BigqueryJob#enable_list_inference}
- Type: typing.Union[bool, cdktf.IResolvable]
If sourceFormat is set to PARQUET, indicates whether to infer the Parquet ENUM logical type as STRING instead of BYTES by default.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#enum_as_string BigqueryJob#enum_as_string}
def put_time_partitioning(
type: str,
expiration_ms: str = None,
field: str = None
) -> None
- Type: str
The only type supported is DAY, which will generate one partition per day.
Providing an empty string used to cause an error, but in OnePlatform the field will be treated as unset.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#type BigqueryJob#type}
- Type: str
Number of milliseconds for which to keep the storage for a partition.
A wrapper is used here because 0 is an invalid value.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#expiration_ms BigqueryJob#expiration_ms}
- Type: str
If not set, the table is partitioned by pseudo column '_PARTITIONTIME';
if set, the table is partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED. A wrapper is used here because an empty string is an invalid value.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#field BigqueryJob#field}
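A minimal sketch of configuring Parquet handling and time partitioning on the load block, assuming an existing `BigqueryJob` named `job` with a load block; the field name is hypothetical:

```python
# Infer Parquet LIST columns and read ENUMs as STRING rather than BYTES.
job.load.put_parquet_options(
    enable_list_inference=True,
    enum_as_string=True,
)

# Partition the destination table by day on a top-level TIMESTAMP/DATE column,
# expiring each partition after 30 days (the value is in milliseconds).
job.load.put_time_partitioning(
    type="DAY",
    field="created_at",
    expiration_ms="2592000000",
)
```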
def reset_allow_jagged_rows() -> None
def reset_allow_quoted_newlines() -> None
def reset_autodetect() -> None
def reset_create_disposition() -> None
def reset_destination_encryption_configuration() -> None
def reset_encoding() -> None
def reset_field_delimiter() -> None
def reset_ignore_unknown_values() -> None
def reset_json_extension() -> None
def reset_max_bad_records() -> None
def reset_null_marker() -> None
def reset_parquet_options() -> None
def reset_projection_fields() -> None
def reset_quote() -> None
def reset_schema_update_options() -> None
def reset_skip_leading_rows() -> None
def reset_source_format() -> None
def reset_time_partitioning() -> None
def reset_write_disposition() -> None
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
destination_encryption_configuration | BigqueryJobLoadDestinationEncryptionConfigurationOutputReference | No description. |
destination_table | BigqueryJobLoadDestinationTableOutputReference | No description. |
parquet_options | BigqueryJobLoadParquetOptionsOutputReference | No description. |
time_partitioning | BigqueryJobLoadTimePartitioningOutputReference | No description. |
allow_jagged_rows_input | typing.Union[bool, cdktf.IResolvable] | No description. |
allow_quoted_newlines_input | typing.Union[bool, cdktf.IResolvable] | No description. |
autodetect_input | typing.Union[bool, cdktf.IResolvable] | No description. |
create_disposition_input | str | No description. |
destination_encryption_configuration_input | BigqueryJobLoadDestinationEncryptionConfiguration | No description. |
destination_table_input | BigqueryJobLoadDestinationTable | No description. |
encoding_input | str | No description. |
field_delimiter_input | str | No description. |
ignore_unknown_values_input | typing.Union[bool, cdktf.IResolvable] | No description. |
json_extension_input | str | No description. |
max_bad_records_input | typing.Union[int, float] | No description. |
null_marker_input | str | No description. |
parquet_options_input | BigqueryJobLoadParquetOptions | No description. |
projection_fields_input | typing.List[str] | No description. |
quote_input | str | No description. |
schema_update_options_input | typing.List[str] | No description. |
skip_leading_rows_input | typing.Union[int, float] | No description. |
source_format_input | str | No description. |
source_uris_input | typing.List[str] | No description. |
time_partitioning_input | BigqueryJobLoadTimePartitioning | No description. |
write_disposition_input | str | No description. |
allow_jagged_rows | typing.Union[bool, cdktf.IResolvable] | No description. |
allow_quoted_newlines | typing.Union[bool, cdktf.IResolvable] | No description. |
autodetect | typing.Union[bool, cdktf.IResolvable] | No description. |
create_disposition | str | No description. |
encoding | str | No description. |
field_delimiter | str | No description. |
ignore_unknown_values | typing.Union[bool, cdktf.IResolvable] | No description. |
json_extension | str | No description. |
max_bad_records | typing.Union[int, float] | No description. |
null_marker | str | No description. |
projection_fields | typing.List[str] | No description. |
quote | str | No description. |
schema_update_options | typing.List[str] | No description. |
skip_leading_rows | typing.Union[int, float] | No description. |
source_format | str | No description. |
source_uris | typing.List[str] | No description. |
write_disposition | str | No description. |
internal_value | BigqueryJobLoad | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
destination_encryption_configuration: BigqueryJobLoadDestinationEncryptionConfigurationOutputReference
destination_table: BigqueryJobLoadDestinationTableOutputReference
parquet_options: BigqueryJobLoadParquetOptionsOutputReference
time_partitioning: BigqueryJobLoadTimePartitioningOutputReference
allow_jagged_rows_input: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
allow_quoted_newlines_input: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
autodetect_input: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
create_disposition_input: str
- Type: str
destination_encryption_configuration_input: BigqueryJobLoadDestinationEncryptionConfiguration
destination_table_input: BigqueryJobLoadDestinationTable
encoding_input: str
- Type: str
field_delimiter_input: str
- Type: str
ignore_unknown_values_input: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
json_extension_input: str
- Type: str
max_bad_records_input: typing.Union[int, float]
- Type: typing.Union[int, float]
null_marker_input: str
- Type: str
parquet_options_input: BigqueryJobLoadParquetOptions
projection_fields_input: typing.List[str]
- Type: typing.List[str]
quote_input: str
- Type: str
schema_update_options_input: typing.List[str]
- Type: typing.List[str]
skip_leading_rows_input: typing.Union[int, float]
- Type: typing.Union[int, float]
source_format_input: str
- Type: str
source_uris_input: typing.List[str]
- Type: typing.List[str]
time_partitioning_input: BigqueryJobLoadTimePartitioning
write_disposition_input: str
- Type: str
allow_jagged_rows: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
allow_quoted_newlines: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
autodetect: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
create_disposition: str
- Type: str
encoding: str
- Type: str
field_delimiter: str
- Type: str
ignore_unknown_values: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
json_extension: str
- Type: str
max_bad_records: typing.Union[int, float]
- Type: typing.Union[int, float]
null_marker: str
- Type: str
projection_fields: typing.List[str]
- Type: typing.List[str]
quote: str
- Type: str
schema_update_options: typing.List[str]
- Type: typing.List[str]
skip_leading_rows: typing.Union[int, float]
- Type: typing.Union[int, float]
source_format: str
- Type: str
source_uris: typing.List[str]
- Type: typing.List[str]
write_disposition: str
- Type: str
internal_value: BigqueryJobLoad
- Type: BigqueryJobLoad
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobLoadParquetOptionsOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
reset_enable_list_inference | No description. |
reset_enum_as_string | No description. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def reset_enable_list_inference() -> None
def reset_enum_as_string() -> None
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
enable_list_inference_input | typing.Union[bool, cdktf.IResolvable] | No description. |
enum_as_string_input | typing.Union[bool, cdktf.IResolvable] | No description. |
enable_list_inference | typing.Union[bool, cdktf.IResolvable] | No description. |
enum_as_string | typing.Union[bool, cdktf.IResolvable] | No description. |
internal_value | BigqueryJobLoadParquetOptions | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
enable_list_inference_input: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
enum_as_string_input: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
enable_list_inference: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
enum_as_string: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
internal_value: BigqueryJobLoadParquetOptions
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobLoadTimePartitioningOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
reset_expiration_ms | No description. |
reset_field | No description. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def reset_expiration_ms() -> None
def reset_field() -> None
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
expiration_ms_input | str | No description. |
field_input | str | No description. |
type_input | str | No description. |
expiration_ms | str | No description. |
field | str | No description. |
type | str | No description. |
internal_value | BigqueryJobLoadTimePartitioning | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
expiration_ms_input: str
- Type: str
field_input: str
- Type: str
type_input: str
- Type: str
expiration_ms: str
- Type: str
field: str
- Type: str
type: str
- Type: str
internal_value: BigqueryJobLoadTimePartitioning
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobQueryDefaultDatasetOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
reset_project_id | No description. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def reset_project_id() -> None
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
dataset_id_input | str | No description. |
project_id_input | str | No description. |
dataset_id | str | No description. |
project_id | str | No description. |
internal_value | BigqueryJobQueryDefaultDataset | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
dataset_id_input: str
- Type: str
project_id_input: str
- Type: str
dataset_id: str
- Type: str
project_id: str
- Type: str
internal_value: BigqueryJobQueryDefaultDataset
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobQueryDestinationEncryptionConfigurationOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
kms_key_version | str | No description. |
kms_key_name_input | str | No description. |
kms_key_name | str | No description. |
internal_value | BigqueryJobQueryDestinationEncryptionConfiguration | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
kms_key_version: str
- Type: str
kms_key_name_input: str
- Type: str
kms_key_name: str
- Type: str
internal_value: BigqueryJobQueryDestinationEncryptionConfiguration
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobQueryDestinationTableOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
reset_dataset_id | No description. |
reset_project_id | No description. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def reset_dataset_id() -> None
def reset_project_id() -> None
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable which will be appended to errors thrown during resolution. |
fqn | str | No description. |
dataset_id_input | str | No description. |
project_id_input | str | No description. |
table_id_input | str | No description. |
dataset_id | str | No description. |
project_id | str | No description. |
table_id | str | No description. |
internal_value | BigqueryJobQueryDestinationTable | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable which will be appended to errors thrown during resolution.
If this returns an empty array the stack will not be attached.
fqn: str
- Type: str
dataset_id_input: str
- Type: str
project_id_input: str
- Type: str
table_id_input: str
- Type: str
dataset_id: str
- Type: str
project_id: str
- Type: str
table_id: str
- Type: str
internal_value: BigqueryJobQueryDestinationTable
- Type: BigqueryJobQueryDestinationTable
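As a usage sketch (not part of the generated reference): an existing BigqueryJob exposes this output reference through its query block. The name job and the attribute values below are assumptions.

```python
# Hedged sketch: `job` is assumed to be an existing bigquery_job.BigqueryJob
# whose query block configures a destination_table.
destination = job.query.destination_table  # BigqueryJobQueryDestinationTableOutputReference

# Resolved attributes become Terraform tokens at synth time:
table_id = destination.table_id
dataset_id = destination.dataset_id

# The *_input properties return the raw configured value, or None when unset:
configured_project = destination.project_id_input
```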
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobQueryOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def put_default_dataset(
dataset_id: str,
project_id: str = None
) -> None
- Type: str
The dataset. Can be specified as '{{dataset_id}}' if 'project_id' is also set, or in the form 'projects/{{project}}/datasets/{{dataset_id}}' if not.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}
- Type: str
The ID of the project containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}
def put_destination_encryption_configuration(
kms_key_name: str
) -> None
- Type: str
Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table.
The BigQuery Service Account associated with your project requires access to this encryption key.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#kms_key_name BigqueryJob#kms_key_name}
def put_destination_table(
table_id: str,
dataset_id: str = None,
project_id: str = None
) -> None
- Type: str
The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or in the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}
- Type: str
The ID of the dataset containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}
- Type: str
The ID of the project containing this table.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}
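A hedged sketch of calling put_destination_table on the query output reference; job and the resource names are assumptions.

```python
# Fully qualified form: dataset and project are encoded in table_id.
job.query.put_destination_table(
    table_id="projects/my-project/datasets/my_dataset/tables/my_table"
)

# Short form: a bare table_id with explicit dataset_id and project_id.
job.query.put_destination_table(
    table_id="my_table",
    dataset_id="my_dataset",
    project_id="my-project",
)
```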
def put_script_options(
key_result_statement: str = None,
statement_byte_budget: str = None,
statement_timeout_ms: str = None
) -> None
- Type: str
Determines which statement in the script represents the "key result", used to populate the schema and query results of the script job.
Possible values: ["LAST", "FIRST_SELECT"]
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#key_result_statement BigqueryJob#key_result_statement}
- Type: str
Limit on the number of bytes billed per statement. Exceeding this budget results in an error.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#statement_byte_budget BigqueryJob#statement_byte_budget}
- Type: str
Timeout period for each statement in a script.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#statement_timeout_ms BigqueryJob#statement_timeout_ms}
def put_user_defined_function_resources(
value: typing.Union[IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]]
) -> None
- Type: typing.Union[cdktf.IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]]
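To illustrate the two put_* methods above, a minimal sketch; job, the Cloud Storage path, and the timeout value are assumptions.

```python
from cdktf_cdktf_provider_google import bigquery_job

# Populate the script_options block on an assumed existing `job`.
job.query.put_script_options(
    key_result_statement="LAST",
    statement_timeout_ms="300000",  # 5 minutes per statement (assumed value)
)

# Register a user-defined function loaded from Cloud Storage (assumed path).
job.query.put_user_defined_function_resources([
    bigquery_job.BigqueryJobQueryUserDefinedFunctionResources(
        resource_uri="gs://my-bucket/udfs/parse.js"
    )
])
```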
def reset_allow_large_results() -> None
def reset_create_disposition() -> None
def reset_default_dataset() -> None
def reset_destination_encryption_configuration() -> None
def reset_destination_table() -> None
def reset_flatten_results() -> None
def reset_maximum_billing_tier() -> None
def reset_maximum_bytes_billed() -> None
def reset_parameter_mode() -> None
def reset_priority() -> None
def reset_schema_update_options() -> None
def reset_script_options() -> None
def reset_use_legacy_sql() -> None
def reset_use_query_cache() -> None
def reset_user_defined_function_resources() -> None
def reset_write_disposition() -> None
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
fqn | str | No description. |
default_dataset | BigqueryJobQueryDefaultDatasetOutputReference | No description. |
destination_encryption_configuration | BigqueryJobQueryDestinationEncryptionConfigurationOutputReference | No description. |
destination_table | BigqueryJobQueryDestinationTableOutputReference | No description. |
script_options | BigqueryJobQueryScriptOptionsOutputReference | No description. |
user_defined_function_resources | BigqueryJobQueryUserDefinedFunctionResourcesList | No description. |
allow_large_results_input | typing.Union[bool, cdktf.IResolvable] | No description. |
create_disposition_input | str | No description. |
default_dataset_input | BigqueryJobQueryDefaultDataset | No description. |
destination_encryption_configuration_input | BigqueryJobQueryDestinationEncryptionConfiguration | No description. |
destination_table_input | BigqueryJobQueryDestinationTable | No description. |
flatten_results_input | typing.Union[bool, cdktf.IResolvable] | No description. |
maximum_billing_tier_input | typing.Union[int, float] | No description. |
maximum_bytes_billed_input | str | No description. |
parameter_mode_input | str | No description. |
priority_input | str | No description. |
query_input | str | No description. |
schema_update_options_input | typing.List[str] | No description. |
script_options_input | BigqueryJobQueryScriptOptions | No description. |
use_legacy_sql_input | typing.Union[bool, cdktf.IResolvable] | No description. |
use_query_cache_input | typing.Union[bool, cdktf.IResolvable] | No description. |
user_defined_function_resources_input | typing.Union[cdktf.IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]] | No description. |
write_disposition_input | str | No description. |
allow_large_results | typing.Union[bool, cdktf.IResolvable] | No description. |
create_disposition | str | No description. |
flatten_results | typing.Union[bool, cdktf.IResolvable] | No description. |
maximum_billing_tier | typing.Union[int, float] | No description. |
maximum_bytes_billed | str | No description. |
parameter_mode | str | No description. |
priority | str | No description. |
query | str | No description. |
schema_update_options | typing.List[str] | No description. |
use_legacy_sql | typing.Union[bool, cdktf.IResolvable] | No description. |
use_query_cache | typing.Union[bool, cdktf.IResolvable] | No description. |
write_disposition | str | No description. |
internal_value | BigqueryJobQuery | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable, which will be appended to errors thrown during resolution.
If this returns an empty array, the stack will not be attached.
fqn: str
- Type: str
default_dataset: BigqueryJobQueryDefaultDatasetOutputReference
- Type: BigqueryJobQueryDefaultDatasetOutputReference
destination_encryption_configuration: BigqueryJobQueryDestinationEncryptionConfigurationOutputReference
- Type: BigqueryJobQueryDestinationEncryptionConfigurationOutputReference
destination_table: BigqueryJobQueryDestinationTableOutputReference
- Type: BigqueryJobQueryDestinationTableOutputReference
script_options: BigqueryJobQueryScriptOptionsOutputReference
- Type: BigqueryJobQueryScriptOptionsOutputReference
user_defined_function_resources: BigqueryJobQueryUserDefinedFunctionResourcesList
- Type: BigqueryJobQueryUserDefinedFunctionResourcesList
allow_large_results_input: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
create_disposition_input: str
- Type: str
default_dataset_input: BigqueryJobQueryDefaultDataset
- Type: BigqueryJobQueryDefaultDataset
destination_encryption_configuration_input: BigqueryJobQueryDestinationEncryptionConfiguration
- Type: BigqueryJobQueryDestinationEncryptionConfiguration
destination_table_input: BigqueryJobQueryDestinationTable
- Type: BigqueryJobQueryDestinationTable
flatten_results_input: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
maximum_billing_tier_input: typing.Union[int, float]
- Type: typing.Union[int, float]
maximum_bytes_billed_input: str
- Type: str
parameter_mode_input: str
- Type: str
priority_input: str
- Type: str
query_input: str
- Type: str
schema_update_options_input: typing.List[str]
- Type: typing.List[str]
script_options_input: BigqueryJobQueryScriptOptions
- Type: BigqueryJobQueryScriptOptions
use_legacy_sql_input: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
use_query_cache_input: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
user_defined_function_resources_input: typing.Union[IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]]
- Type: typing.Union[cdktf.IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]]
write_disposition_input: str
- Type: str
allow_large_results: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
create_disposition: str
- Type: str
flatten_results: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
maximum_billing_tier: typing.Union[int, float]
- Type: typing.Union[int, float]
maximum_bytes_billed: str
- Type: str
parameter_mode: str
- Type: str
priority: str
- Type: str
query: str
- Type: str
schema_update_options: typing.List[str]
- Type: typing.List[str]
use_legacy_sql: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
use_query_cache: typing.Union[bool, IResolvable]
- Type: typing.Union[bool, cdktf.IResolvable]
write_disposition: str
- Type: str
internal_value: BigqueryJobQuery
- Type: BigqueryJobQuery
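These properties mirror the fields of the query block supplied at construction time. A minimal, self-contained sketch, with the stack name, project, dataset, and SQL all assumed:

```python
from constructs import Construct
from cdktf import App, TerraformStack
from cdktf_cdktf_provider_google import bigquery_job

class QueryJobStack(TerraformStack):
    def __init__(self, scope: Construct, id: str):
        super().__init__(scope, id)
        job = bigquery_job.BigqueryJob(self, "query_job",
            job_id="job_query_example",  # assumed job ID
            query=bigquery_job.BigqueryJobQuery(
                query="SELECT 1",
                destination_table=bigquery_job.BigqueryJobQueryDestinationTable(
                    table_id="my_table",
                    dataset_id="my_dataset",
                    project_id="my-project",
                ),
                write_disposition="WRITE_TRUNCATE",
            ),
        )
        # job.query now returns the BigqueryJobQueryOutputReference documented
        # above, e.g. job.query.write_disposition or job.query.query_input.

app = App()
QueryJobStack(app, "bigquery-job-demo")
app.synth()
```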
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobQueryScriptOptionsOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
reset_key_result_statement | No description. |
reset_statement_byte_budget | No description. |
reset_statement_timeout_ms | No description. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def reset_key_result_statement() -> None
def reset_statement_byte_budget() -> None
def reset_statement_timeout_ms() -> None
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
fqn | str | No description. |
key_result_statement_input | str | No description. |
statement_byte_budget_input | str | No description. |
statement_timeout_ms_input | str | No description. |
key_result_statement | str | No description. |
statement_byte_budget | str | No description. |
statement_timeout_ms | str | No description. |
internal_value | BigqueryJobQueryScriptOptions | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable, which will be appended to errors thrown during resolution.
If this returns an empty array, the stack will not be attached.
fqn: str
- Type: str
key_result_statement_input: str
- Type: str
statement_byte_budget_input: str
- Type: str
statement_timeout_ms_input: str
- Type: str
key_result_statement: str
- Type: str
statement_byte_budget: str
- Type: str
statement_timeout_ms: str
- Type: str
internal_value: BigqueryJobQueryScriptOptions
- Type: BigqueryJobQueryScriptOptions
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobQueryUserDefinedFunctionResourcesList(
terraform_resource: IInterpolatingParent,
terraform_attribute: str,
wraps_set: bool
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
wraps_set | bool | Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index). |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
- Type: bool
Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index).
Name | Description |
---|---|
all_with_map_key | Creates an iterator for this complex list. |
compute_fqn | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
get | No description. |
def all_with_map_key(
map_key_attribute_name: str
) -> DynamicListTerraformIterator
Creates an iterator for this complex list.
The list will be converted into a map with map_key_attribute_name as the key.
- Type: str
def compute_fqn() -> str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def get(
index: typing.Union[int, float]
) -> BigqueryJobQueryUserDefinedFunctionResourcesOutputReference
- Type: typing.Union[int, float]
The index of the item to return.
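For example (a sketch; job is assumed to be an existing BigqueryJob with UDF resources configured):

```python
udfs = job.query.user_defined_function_resources

# get() returns the typed output reference for a single element.
first = udfs.get(0)  # BigqueryJobQueryUserDefinedFunctionResourcesOutputReference
uri = first.resource_uri
```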
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
fqn | str | No description. |
internal_value | typing.Union[cdktf.IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]] | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable, which will be appended to errors thrown during resolution.
If this returns an empty array, the stack will not be attached.
fqn: str
- Type: str
internal_value: typing.Union[IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]]
- Type: typing.Union[cdktf.IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]]
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobQueryUserDefinedFunctionResourcesOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str,
complex_object_index: typing.Union[int, float],
complex_object_is_from_set: bool
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
complex_object_index | typing.Union[int, float] | The index of this item in the list. |
complex_object_is_from_set | bool | Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index). |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
- Type: typing.Union[int, float]
The index of this item in the list.
- Type: bool
Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index).
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
reset_inline_code | No description. |
reset_resource_uri | No description. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def reset_inline_code() -> None
def reset_resource_uri() -> None
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
fqn | str | No description. |
inline_code_input | str | No description. |
resource_uri_input | str | No description. |
inline_code | str | No description. |
resource_uri | str | No description. |
internal_value | typing.Union[cdktf.IResolvable, BigqueryJobQueryUserDefinedFunctionResources] | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable, which will be appended to errors thrown during resolution.
If this returns an empty array, the stack will not be attached.
fqn: str
- Type: str
inline_code_input: str
- Type: str
resource_uri_input: str
- Type: str
inline_code: str
- Type: str
resource_uri: str
- Type: str
internal_value: typing.Union[IResolvable, BigqueryJobQueryUserDefinedFunctionResources]
- Type: typing.Union[cdktf.IResolvable, BigqueryJobQueryUserDefinedFunctionResources]
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobStatusErrorResultList(
terraform_resource: IInterpolatingParent,
terraform_attribute: str,
wraps_set: bool
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
wraps_set | bool | Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index). |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
- Type: bool
Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index).
Name | Description |
---|---|
all_with_map_key | Creates an iterator for this complex list. |
compute_fqn | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
get | No description. |
def all_with_map_key(
map_key_attribute_name: str
) -> DynamicListTerraformIterator
Creates an iterator for this complex list.
The list will be converted into a map with map_key_attribute_name as the key.
- Type: str
def compute_fqn() -> str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def get(
index: typing.Union[int, float]
) -> BigqueryJobStatusErrorResultOutputReference
- Type: typing.Union[int, float]
The index of the item to return.
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
fqn | str | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable, which will be appended to errors thrown during resolution.
If this returns an empty array, the stack will not be attached.
fqn: str
- Type: str
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobStatusErrorResultOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str,
complex_object_index: typing.Union[int, float],
complex_object_is_from_set: bool
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
complex_object_index | typing.Union[int, float] | The index of this item in the list. |
complex_object_is_from_set | bool | Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index). |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
- Type: typing.Union[int, float]
The index of this item in the list.
- Type: bool
Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index).
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
fqn | str | No description. |
location | str | No description. |
message | str | No description. |
reason | str | No description. |
internal_value | BigqueryJobStatusErrorResult | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable, which will be appended to errors thrown during resolution.
If this returns an empty array, the stack will not be attached.
fqn: str
- Type: str
location: str
- Type: str
message: str
- Type: str
reason: str
- Type: str
internal_value: BigqueryJobStatusErrorResult
- Type: BigqueryJobStatusErrorResult
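Because these are computed attributes, they are typically surfaced as stack outputs. A hedged sketch, assuming an existing job and an enclosing stack self:

```python
from cdktf import TerraformOutput

# error_result is a computed list attribute on the job's status block.
error = job.status.get(0).error_result.get(0)

TerraformOutput(self, "job_error_reason", value=error.reason)
TerraformOutput(self, "job_error_message", value=error.message)
```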
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobStatusErrorsList(
terraform_resource: IInterpolatingParent,
terraform_attribute: str,
wraps_set: bool
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
wraps_set | bool | Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index). |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
- Type: bool
Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index).
Name | Description |
---|---|
all_with_map_key | Creates an iterator for this complex list. |
compute_fqn | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
get | No description. |
def all_with_map_key(
map_key_attribute_name: str
) -> DynamicListTerraformIterator
Creates an iterator for this complex list.
The list will be converted into a map with map_key_attribute_name as the key.
- Type: str
def compute_fqn() -> str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def get(
index: typing.Union[int, float]
) -> BigqueryJobStatusErrorsOutputReference
- Type: typing.Union[int, float]
The index of the item to return.
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
fqn | str | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable, which will be appended to errors thrown during resolution.
If this returns an empty array, the stack will not be attached.
fqn: str
- Type: str
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobStatusErrorsOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str,
complex_object_index: typing.Union[int, float],
complex_object_is_from_set: bool
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
complex_object_index | typing.Union[int, float] | The index of this item in the list. |
complex_object_is_from_set | bool | Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index). |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
- Type: typing.Union[int, float]
The index of this item in the list.
- Type: bool
Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index).
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
fqn | str | No description. |
location | str | No description. |
message | str | No description. |
reason | str | No description. |
internal_value | BigqueryJobStatusErrors | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable, which will be appended to errors thrown during resolution.
If this returns an empty array, the stack will not be attached.
fqn: str
- Type: str
location: str
- Type: str
message: str
- Type: str
reason: str
- Type: str
internal_value: BigqueryJobStatusErrors
- Type: BigqueryJobStatusErrors
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobStatusList(
terraform_resource: IInterpolatingParent,
terraform_attribute: str,
wraps_set: bool
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
wraps_set | bool | Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index). |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
- Type: bool
Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index).
Name | Description |
---|---|
all_with_map_key | Creates an iterator for this complex list. |
compute_fqn | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
get | No description. |
def all_with_map_key(
map_key_attribute_name: str
) -> DynamicListTerraformIterator
Creates an iterator for this complex list.
The list will be converted into a map with map_key_attribute_name as the key.
- Type: str
def compute_fqn() -> str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def get(
index: typing.Union[int, float]
) -> BigqueryJobStatusOutputReference
- Type: typing.Union[int, float]
The index of the item to return.
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
fqn | str | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable, which will be appended to errors thrown during resolution.
If this returns an empty array, the stack will not be attached.
fqn: str
- Type: str
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobStatusOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str,
complex_object_index: typing.Union[int, float],
complex_object_is_from_set: bool
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
complex_object_index | typing.Union[int, float] | The index of this item in the list. |
complex_object_is_from_set | bool | Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index). |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
- Type: typing.Union[int, float]
The index of this item in the list.
- Type: bool
Whether the list is wrapping a set (will add tolist() so that an item can be accessed via an index).
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
fqn | str | No description. |
error_result | BigqueryJobStatusErrorResultList | No description. |
errors | BigqueryJobStatusErrorsList | No description. |
state | str | No description. |
internal_value | BigqueryJobStatus | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable, which will be appended to errors thrown during resolution.
If this returns an empty array, the stack will not be attached.
fqn: str
- Type: str
error_result: BigqueryJobStatusErrorResultList
- Type: BigqueryJobStatusErrorResultList
errors: BigqueryJobStatusErrorsList
- Type: BigqueryJobStatusErrorsList
state: str
- Type: str
internal_value: BigqueryJobStatus
- Type: BigqueryJobStatus
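A short sketch reading the status block; job and self are assumptions:

```python
from cdktf import TerraformOutput

status = job.status.get(0)  # BigqueryJobStatusOutputReference

# `state` resolves to the job's running state at apply time.
TerraformOutput(self, "job_state", value=status.state)
```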
from cdktf_cdktf_provider_google import bigquery_job
bigquery_job.BigqueryJobTimeoutsOutputReference(
terraform_resource: IInterpolatingParent,
terraform_attribute: str
)
Name | Type | Description |
---|---|---|
terraform_resource | cdktf.IInterpolatingParent | The parent resource. |
terraform_attribute | str | The attribute on the parent resource this class is referencing. |
- Type: cdktf.IInterpolatingParent
The parent resource.
- Type: str
The attribute on the parent resource this class is referencing.
Name | Description |
---|---|
compute_fqn | No description. |
get_any_map_attribute | No description. |
get_boolean_attribute | No description. |
get_boolean_map_attribute | No description. |
get_list_attribute | No description. |
get_number_attribute | No description. |
get_number_list_attribute | No description. |
get_number_map_attribute | No description. |
get_string_attribute | No description. |
get_string_map_attribute | No description. |
interpolation_for_attribute | No description. |
resolve | Produce the Token's value at resolution time. |
to_string | Return a string representation of this resolvable object. |
reset_create | No description. |
reset_delete | No description. |
reset_update | No description. |
def compute_fqn() -> str
def get_any_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Any]
- Type: str
def get_boolean_attribute(
terraform_attribute: str
) -> IResolvable
- Type: str
def get_boolean_map_attribute(
terraform_attribute: str
) -> typing.Mapping[bool]
- Type: str
def get_list_attribute(
terraform_attribute: str
) -> typing.List[str]
- Type: str
def get_number_attribute(
terraform_attribute: str
) -> typing.Union[int, float]
- Type: str
def get_number_list_attribute(
terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
- Type: str
def get_number_map_attribute(
terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
- Type: str
def get_string_attribute(
terraform_attribute: str
) -> str
- Type: str
def get_string_map_attribute(
terraform_attribute: str
) -> typing.Mapping[str]
- Type: str
def interpolation_for_attribute(
property: str
) -> IResolvable
- Type: str
def resolve(
_context: IResolveContext
) -> typing.Any
Produce the Token's value at resolution time.
- Type: cdktf.IResolveContext
def to_string() -> str
Return a string representation of this resolvable object.
Returns a reversible string representation.
def reset_create() -> None
def reset_delete() -> None
def reset_update() -> None
Name | Type | Description |
---|---|---|
creation_stack | typing.List[str] | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
fqn | str | No description. |
create_input | str | No description. |
delete_input | str | No description. |
update_input | str | No description. |
create | str | No description. |
delete | str | No description. |
update | str | No description. |
internal_value | typing.Union[cdktf.IResolvable, BigqueryJobTimeouts] | No description. |
creation_stack: typing.List[str]
- Type: typing.List[str]
The creation stack of this resolvable, which will be appended to errors thrown during resolution.
If this returns an empty array, the stack will not be attached.
fqn: str
- Type: str
create_input: str
- Type: str
delete_input: str
- Type: str
update_input: str
- Type: str
create: str
- Type: str
delete: str
- Type: str
update: str
- Type: str
internal_value: typing.Union[IResolvable, BigqueryJobTimeouts]
- Type: typing.Union[cdktf.IResolvable, BigqueryJobTimeouts]
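Finally, a hedged sketch of supplying and later adjusting the timeouts block; all names are assumptions, and the job-type block (e.g. query) that a real job requires is omitted for brevity:

```python
from cdktf_cdktf_provider_google import bigquery_job

# Supplied at construction time:
job = bigquery_job.BigqueryJob(self, "job_with_timeouts",
    job_id="job_with_timeouts",  # assumed job ID
    timeouts=bigquery_job.BigqueryJobTimeouts(
        create="10m",
        delete="10m",
        update="10m",
    ),
)

# Or put onto an existing resource and selectively reset afterwards:
job.put_timeouts(bigquery_job.BigqueryJobTimeouts(create="15m"))
job.timeouts.reset_update()
```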