bigqueryJob Submodule

Constructs

BigqueryJob

Represents a {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job google_bigquery_job}.

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJob(
  scope: Construct,
  id: str,
  connection: typing.Union[SSHProvisionerConnection, WinrmProvisionerConnection] = None,
  count: typing.Union[typing.Union[int, float], TerraformCount] = None,
  depends_on: typing.List[ITerraformDependable] = None,
  for_each: ITerraformIterator = None,
  lifecycle: TerraformResourceLifecycle = None,
  provider: TerraformProvider = None,
  provisioners: typing.List[typing.Union[FileProvisioner, LocalExecProvisioner, RemoteExecProvisioner]] = None,
  job_id: str,
  copy: BigqueryJobCopy = None,
  extract: BigqueryJobExtract = None,
  id: str = None,
  job_timeout_ms: str = None,
  labels: typing.Mapping[str] = None,
  load: BigqueryJobLoad = None,
  location: str = None,
  project: str = None,
  query: BigqueryJobQuery = None,
  timeouts: BigqueryJobTimeouts = None
)
Name Type Description
scope constructs.Construct The scope in which to define this construct.
id str The scoped construct ID.
connection typing.Union[cdktf.SSHProvisionerConnection, cdktf.WinrmProvisionerConnection] No description.
count typing.Union[typing.Union[int, float], cdktf.TerraformCount] No description.
depends_on typing.List[cdktf.ITerraformDependable] No description.
for_each cdktf.ITerraformIterator No description.
lifecycle cdktf.TerraformResourceLifecycle No description.
provider cdktf.TerraformProvider No description.
provisioners typing.List[typing.Union[cdktf.FileProvisioner, cdktf.LocalExecProvisioner, cdktf.RemoteExecProvisioner]] No description.
job_id str The ID of the job.
copy BigqueryJobCopy copy block.
extract BigqueryJobExtract extract block.
id str Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#id BigqueryJob#id}.
job_timeout_ms str Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.
labels typing.Mapping[str] The labels associated with this job. You can use these to organize and group your jobs.
load BigqueryJobLoad load block.
location str The geographic location of the job. The default value is US.
project str Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project BigqueryJob#project}.
query BigqueryJobQuery query block.
timeouts BigqueryJobTimeouts timeouts block.

scopeRequired
  • Type: constructs.Construct

The scope in which to define this construct.


idRequired
  • Type: str

The scoped construct ID.

Must be unique amongst siblings in the same scope.


connectionOptional
  • Type: typing.Union[cdktf.SSHProvisionerConnection, cdktf.WinrmProvisionerConnection]

countOptional
  • Type: typing.Union[typing.Union[int, float], cdktf.TerraformCount]

depends_onOptional
  • Type: typing.List[cdktf.ITerraformDependable]

for_eachOptional
  • Type: cdktf.ITerraformIterator

lifecycleOptional
  • Type: cdktf.TerraformResourceLifecycle

providerOptional
  • Type: cdktf.TerraformProvider

provisionersOptional
  • Type: typing.List[typing.Union[cdktf.FileProvisioner, cdktf.LocalExecProvisioner, cdktf.RemoteExecProvisioner]]

job_idRequired
  • Type: str

The ID of the job.

The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#job_id BigqueryJob#job_id}


copyOptional

copy block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#copy BigqueryJob#copy}


extractOptional

extract block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#extract BigqueryJob#extract}


idOptional
  • Type: str

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#id BigqueryJob#id}.

Please be aware that the id field is automatically added to all resources in Terraform providers using a Terraform provider SDK version below 2. If you experience problems setting this value, it might not be settable; check the provider documentation to confirm whether it can be set.


job_timeout_msOptional
  • Type: str

Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#job_timeout_ms BigqueryJob#job_timeout_ms}


labelsOptional
  • Type: typing.Mapping[str]

The labels associated with this job. You can use these to organize and group your jobs.

Note: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field 'effective_labels' for all of the labels present on the resource.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#labels BigqueryJob#labels}


loadOptional

load block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#load BigqueryJob#load}


locationOptional
  • Type: str

The geographic location of the job. The default value is US.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#location BigqueryJob#location}


projectOptional
  • Type: str

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project BigqueryJob#project}.


queryOptional

query block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#query BigqueryJob#query}


timeoutsOptional

timeouts block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#timeouts BigqueryJob#timeouts}
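
As a quick usage illustration, a minimal sketch of instantiating this construct inside a stack. Hedged: the provider configuration, project name, labels, and query text below are hypothetical placeholders, not values from this document.

from constructs import Construct
from cdktf import App, TerraformStack
from cdktf_cdktf_provider_google.provider import GoogleProvider
from cdktf_cdktf_provider_google import bigquery_job

class MyStack(TerraformStack):
    def __init__(self, scope: Construct, id: str):
        super().__init__(scope, id)
        # Hypothetical provider setup; adjust project/credentials as needed.
        GoogleProvider(self, "google", project="my-project")
        # A minimal query job; see the query block documentation below for more options.
        bigquery_job.BigqueryJob(self, "example-query-job",
            job_id="job_example_query",
            location="US",
            labels={"team": "analytics"},
            query=bigquery_job.BigqueryJobQuery(
                query="SELECT 1",
                use_legacy_sql=False,
            ),
        )

app = App()
MyStack(app, "bigquery-job-stack")
app.synth()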


Methods

Name Description
to_string Returns a string representation of this construct.
add_override No description.
override_logical_id Overrides the auto-generated logical ID with a specific ID.
reset_override_logical_id Resets a previously passed logical Id to use the auto-generated logical id again.
to_hcl_terraform No description.
to_metadata No description.
to_terraform Adds this resource to the terraform JSON output.
add_move_target Adds a user defined moveTarget string to this resource to be later used in .moveTo(moveTarget) to resolve the location of the move.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
has_resource_move No description.
import_from No description.
interpolation_for_attribute No description.
move_from_id Move the resource corresponding to "id" to this resource.
move_to Moves this resource to the target resource given by moveTarget.
move_to_id Moves this resource to the resource corresponding to "id".
put_copy No description.
put_extract No description.
put_load No description.
put_query No description.
put_timeouts No description.
reset_copy No description.
reset_extract No description.
reset_id No description.
reset_job_timeout_ms No description.
reset_labels No description.
reset_load No description.
reset_location No description.
reset_project No description.
reset_query No description.
reset_timeouts No description.

to_string
def to_string() -> str

Returns a string representation of this construct.

add_override
def add_override(
  path: str,
  value: typing.Any
) -> None
pathRequired
  • Type: str

valueRequired
  • Type: typing.Any

override_logical_id
def override_logical_id(
  new_logical_id: str
) -> None

Overrides the auto-generated logical ID with a specific ID.

new_logical_idRequired
  • Type: str

The new logical ID to use for this stack element.


reset_override_logical_id
def reset_override_logical_id() -> None

Resets a previously passed logical Id to use the auto-generated logical id again.

to_hcl_terraform
def to_hcl_terraform() -> typing.Any
to_metadata
def to_metadata() -> typing.Any
to_terraform
def to_terraform() -> typing.Any

Adds this resource to the terraform JSON output.

add_move_target
def add_move_target(
  move_target: str
) -> None

Adds a user defined moveTarget string to this resource to be later used in .moveTo(moveTarget) to resolve the location of the move.

move_targetRequired
  • Type: str

The string move target that will correspond to this resource.


get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

has_resource_move
def has_resource_move() -> typing.Union[TerraformResourceMoveByTarget, TerraformResourceMoveById]
import_from
def import_from(
  id: str,
  provider: TerraformProvider = None
) -> None
idRequired
  • Type: str

providerOptional
  • Type: cdktf.TerraformProvider

interpolation_for_attribute
def interpolation_for_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

move_from_id
def move_from_id(
  id: str
) -> None

Move the resource corresponding to "id" to this resource.

Note that the resource being moved from must be marked as moved using its instance function.

idRequired
  • Type: str

Full id of resource being moved from, e.g. "aws_s3_bucket.example".


move_to
def move_to(
  move_target: str,
  index: typing.Union[str, typing.Union[int, float]] = None
) -> None

Moves this resource to the target resource given by moveTarget.

move_targetRequired
  • Type: str

The user-defined string previously set by .addMoveTarget(), corresponding to the resource to move to.


indexOptional
  • Type: typing.Union[str, typing.Union[int, float]]

Optional. The index corresponding to the key under which this resource is to appear in the for_each of the resource it is moved to.
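
To make the moveTarget workflow concrete, a hedged sketch (construct and target names are hypothetical): the destination resource registers a target string, and the resource being moved away from then references that same string.

# The resource the state entry should move to registers a target string...
new_job = bigquery_job.BigqueryJob(self, "new-job", job_id="job_example_query")
new_job.add_move_target("renamed-bigquery-job")

# ...and the resource being moved away from points at it.
old_job.move_to("renamed-bigquery-job")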


move_to_id
def move_to_id(
  id: str
) -> None

Moves this resource to the resource corresponding to "id".

idRequired
  • Type: str

Full id of resource to move to, e.g. "aws_s3_bucket.example".


put_copy
def put_copy(
  source_tables: typing.Union[IResolvable, typing.List[BigqueryJobCopySourceTables]],
  create_disposition: str = None,
  destination_encryption_configuration: BigqueryJobCopyDestinationEncryptionConfiguration = None,
  destination_table: BigqueryJobCopyDestinationTable = None,
  write_disposition: str = None
) -> None
source_tablesRequired

source_tables block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_tables BigqueryJob#source_tables}


create_dispositionOptional
  • Type: str

Specifies whether the job is allowed to create new tables.

The following values are supported: CREATE_IF_NEEDED (if the table does not exist, BigQuery creates the table) and CREATE_NEVER (the table must already exist; if it does not, a 'notFound' error is returned in the job result). Creation, truncation, and append actions occur as one atomic update upon job completion. Default value: "CREATE_IF_NEEDED". Possible values: ["CREATE_IF_NEEDED", "CREATE_NEVER"]

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create_disposition BigqueryJob#create_disposition}


destination_encryption_configurationOptional

destination_encryption_configuration block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_encryption_configuration BigqueryJob#destination_encryption_configuration}


destination_tableOptional

destination_table block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_table BigqueryJob#destination_table}


write_dispositionOptional
  • Type: str

Specifies the action that occurs if the destination table already exists.

The following values are supported: WRITE_TRUNCATE (if the table already exists, BigQuery overwrites the table data and uses the schema from the query result), WRITE_APPEND (if the table already exists, BigQuery appends the data to the table), and WRITE_EMPTY (if the table already exists and contains data, a 'duplicate' error is returned in the job result). Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation, and append actions occur as one atomic update upon job completion. Default value: "WRITE_EMPTY". Possible values: ["WRITE_TRUNCATE", "WRITE_APPEND", "WRITE_EMPTY"]

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#write_disposition BigqueryJob#write_disposition}
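
For illustration, a hedged sketch of configuring a copy job on an existing construct. Table, dataset, and project names are hypothetical, and BigqueryJobCopySourceTables is assumed to accept the same table/dataset/project identifiers as the destination table struct.

job.put_copy(
    source_tables=[
        bigquery_job.BigqueryJobCopySourceTables(
            table_id="src_table",
            dataset_id="src_dataset",
            project_id="my-project",
        ),
    ],
    destination_table=bigquery_job.BigqueryJobCopyDestinationTable(
        table_id="dest_table",
        dataset_id="dest_dataset",
        project_id="my-project",
    ),
    write_disposition="WRITE_TRUNCATE",
)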


put_extract
def put_extract(
  destination_uris: typing.List[str],
  compression: str = None,
  destination_format: str = None,
  field_delimiter: str = None,
  print_header: typing.Union[bool, IResolvable] = None,
  source_model: BigqueryJobExtractSourceModel = None,
  source_table: BigqueryJobExtractSourceTable = None,
  use_avro_logical_types: typing.Union[bool, IResolvable] = None
) -> None
destination_urisRequired
  • Type: typing.List[str]

A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_uris BigqueryJob#destination_uris}


compressionOptional
  • Type: str

The compression type to use for exported files.

Possible values include GZIP, DEFLATE, SNAPPY, and NONE. The default value is NONE. DEFLATE and SNAPPY are only supported for Avro.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#compression BigqueryJob#compression}


destination_formatOptional
  • Type: str

The exported file format.

Possible values include CSV, NEWLINE_DELIMITED_JSON and AVRO for tables and SAVED_MODEL for models. The default value for tables is CSV. Tables with nested or repeated fields cannot be exported as CSV. The default value for models is SAVED_MODEL.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_format BigqueryJob#destination_format}


field_delimiterOptional
  • Type: str

When extracting data in CSV format, this defines the delimiter to use between fields in the exported data.

Default is ','.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#field_delimiter BigqueryJob#field_delimiter}


print_headerOptional
  • Type: typing.Union[bool, cdktf.IResolvable]

Whether to print out a header row in the results. Default is true.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#print_header BigqueryJob#print_header}


source_modelOptional

source_model block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_model BigqueryJob#source_model}


source_tableOptional

source_table block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_table BigqueryJob#source_table}


use_avro_logical_typesOptional
  • Type: typing.Union[bool, cdktf.IResolvable]

Whether to use logical types when extracting to AVRO format.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#use_avro_logical_types BigqueryJob#use_avro_logical_types}
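
A hedged sketch of an extract-to-Cloud-Storage configuration (bucket, table, dataset, and project names are hypothetical):

job.put_extract(
    destination_uris=["gs://my-bucket/extracts/part-*.csv"],
    destination_format="CSV",
    print_header=True,
    source_table=bigquery_job.BigqueryJobExtractSourceTable(
        table_id="my_table",
        dataset_id="my_dataset",
        project_id="my-project",
    ),
)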


put_load
def put_load(
  destination_table: BigqueryJobLoadDestinationTable,
  source_uris: typing.List[str],
  allow_jagged_rows: typing.Union[bool, IResolvable] = None,
  allow_quoted_newlines: typing.Union[bool, IResolvable] = None,
  autodetect: typing.Union[bool, IResolvable] = None,
  create_disposition: str = None,
  destination_encryption_configuration: BigqueryJobLoadDestinationEncryptionConfiguration = None,
  encoding: str = None,
  field_delimiter: str = None,
  ignore_unknown_values: typing.Union[bool, IResolvable] = None,
  json_extension: str = None,
  max_bad_records: typing.Union[int, float] = None,
  null_marker: str = None,
  parquet_options: BigqueryJobLoadParquetOptions = None,
  projection_fields: typing.List[str] = None,
  quote: str = None,
  schema_update_options: typing.List[str] = None,
  skip_leading_rows: typing.Union[int, float] = None,
  source_format: str = None,
  time_partitioning: BigqueryJobLoadTimePartitioning = None,
  write_disposition: str = None
) -> None
destination_tableRequired

destination_table block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_table BigqueryJob#destination_table}


source_urisRequired
  • Type: typing.List[str]

The fully-qualified URIs that point to your data in Google Cloud.

For Google Cloud Storage URIs: each URI can contain one '*' wildcard character, and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: exactly one URI can be specified, and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: exactly one URI can be specified, and the '*' wildcard character is not allowed.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_uris BigqueryJob#source_uris}


allow_jagged_rowsOptional
  • Type: typing.Union[bool, cdktf.IResolvable]

Accept rows that are missing trailing optional columns.

The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#allow_jagged_rows BigqueryJob#allow_jagged_rows}


allow_quoted_newlinesOptional
  • Type: typing.Union[bool, cdktf.IResolvable]

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.

The default value is false.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#allow_quoted_newlines BigqueryJob#allow_quoted_newlines}


autodetectOptional
  • Type: typing.Union[bool, cdktf.IResolvable]

Indicates if we should automatically infer the options and schema for CSV and JSON sources.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#autodetect BigqueryJob#autodetect}


create_dispositionOptional
  • Type: str

Specifies whether the job is allowed to create new tables.

The following values are supported: CREATE_IF_NEEDED (if the table does not exist, BigQuery creates the table) and CREATE_NEVER (the table must already exist; if it does not, a 'notFound' error is returned in the job result). Creation, truncation, and append actions occur as one atomic update upon job completion. Default value: "CREATE_IF_NEEDED". Possible values: ["CREATE_IF_NEEDED", "CREATE_NEVER"]

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create_disposition BigqueryJob#create_disposition}


destination_encryption_configurationOptional

destination_encryption_configuration block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_encryption_configuration BigqueryJob#destination_encryption_configuration}


encodingOptional
  • Type: str

The character encoding of the data.

The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#encoding BigqueryJob#encoding}


field_delimiterOptional
  • Type: str

The separator for fields in a CSV file.

The separator can be any ISO-8859-1 single-byte character. To use a character in the range 128-255, you must encode the character as UTF8. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#field_delimiter BigqueryJob#field_delimiter}


ignore_unknown_valuesOptional
  • Type: typing.Union[bool, cdktf.IResolvable]

Indicates if BigQuery should allow extra values that are not represented in the table schema.

If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: for CSV, trailing columns; for JSON, named values that don't match any column names.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#ignore_unknown_values BigqueryJob#ignore_unknown_values}


json_extensionOptional
  • Type: str

If sourceFormat is set to newline-delimited JSON, indicates whether it should be processed as a JSON variant such as GeoJSON.

For a sourceFormat other than JSON, omit this field. If the sourceFormat is newline-delimited JSON, set this to GEOJSON for newline-delimited GeoJSON.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#json_extension BigqueryJob#json_extension}


max_bad_recordsOptional
  • Type: typing.Union[int, float]

The maximum number of bad records that BigQuery can ignore when running the job.

If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#max_bad_records BigqueryJob#max_bad_records}


null_markerOptional
  • Type: str

Specifies a string that represents a null value in a CSV file.

For example, if you specify "\N", BigQuery interprets "\N" as a null value when loading a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#null_marker BigqueryJob#null_marker}


parquet_optionsOptional

parquet_options block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#parquet_options BigqueryJob#parquet_options}


projection_fieldsOptional
  • Type: typing.List[str]

If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup.

Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#projection_fields BigqueryJob#projection_fields}


quoteOptional
  • Type: str

The value that is used to quote data sections in a CSV file.

BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#quote BigqueryJob#quote}


schema_update_optionsOptional
  • Type: typing.List[str]

Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration.

Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND, and when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values may be specified: ALLOW_FIELD_ADDITION (allow adding a nullable field to the schema) and ALLOW_FIELD_RELAXATION (allow relaxing a required field in the original schema to nullable).

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#schema_update_options BigqueryJob#schema_update_options}


skip_leading_rowsOptional
  • Type: typing.Union[int, float]

The number of rows at the top of a CSV file that BigQuery will skip when loading the data.

The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: if skipLeadingRows is unspecified, autodetect tries to detect headers in the first row; if they are not detected, the row is read as data, otherwise data is read starting from the second row. If skipLeadingRows is 0, autodetect is instructed that there are no headers and data should be read starting from the first row. If skipLeadingRows = N > 0, autodetect skips N-1 rows and tries to detect headers in row N; if headers are not detected, row N is just skipped, otherwise row N is used to extract column names for the detected schema.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#skip_leading_rows BigqueryJob#skip_leading_rows}


source_formatOptional
  • Type: str

The format of the data files.

For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". For parquet, specify "PARQUET". For orc, specify "ORC". [Beta] For Bigtable, specify "BIGTABLE". The default value is CSV.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_format BigqueryJob#source_format}


time_partitioningOptional

time_partitioning block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#time_partitioning BigqueryJob#time_partitioning}


write_dispositionOptional
  • Type: str

Specifies the action that occurs if the destination table already exists.

The following values are supported: WRITE_TRUNCATE (if the table already exists, BigQuery overwrites the table data and uses the schema from the query result), WRITE_APPEND (if the table already exists, BigQuery appends the data to the table), and WRITE_EMPTY (if the table already exists and contains data, a 'duplicate' error is returned in the job result). Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation, and append actions occur as one atomic update upon job completion. Default value: "WRITE_EMPTY". Possible values: ["WRITE_TRUNCATE", "WRITE_APPEND", "WRITE_EMPTY"]

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#write_disposition BigqueryJob#write_disposition}
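
A hedged sketch of a CSV load configuration (bucket, dataset, and table names are hypothetical):

job.put_load(
    destination_table=bigquery_job.BigqueryJobLoadDestinationTable(
        table_id="projects/my-project/datasets/my_dataset/tables/loaded_table",
    ),
    source_uris=["gs://my-bucket/data/*.csv"],
    source_format="CSV",
    skip_leading_rows=1,
    autodetect=True,
    write_disposition="WRITE_APPEND",
)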


put_query
def put_query(
  query: str,
  allow_large_results: typing.Union[bool, IResolvable] = None,
  create_disposition: str = None,
  default_dataset: BigqueryJobQueryDefaultDataset = None,
  destination_encryption_configuration: BigqueryJobQueryDestinationEncryptionConfiguration = None,
  destination_table: BigqueryJobQueryDestinationTable = None,
  flatten_results: typing.Union[bool, IResolvable] = None,
  maximum_billing_tier: typing.Union[int, float] = None,
  maximum_bytes_billed: str = None,
  parameter_mode: str = None,
  priority: str = None,
  schema_update_options: typing.List[str] = None,
  script_options: BigqueryJobQueryScriptOptions = None,
  use_legacy_sql: typing.Union[bool, IResolvable] = None,
  use_query_cache: typing.Union[bool, IResolvable] = None,
  user_defined_function_resources: typing.Union[IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]] = None,
  write_disposition: str = None
) -> None
queryRequired
  • Type: str

SQL query text to execute.

The useLegacySql field can be used to indicate whether the query uses legacy SQL or standard SQL. NOTE: queries containing DML language ('DELETE', 'UPDATE', 'MERGE', 'INSERT') must specify 'create_disposition = ""' and 'write_disposition = ""'.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#query BigqueryJob#query}


allow_large_resultsOptional
  • Type: typing.Union[bool, cdktf.IResolvable]

If true and query uses legacy SQL dialect, allows the query to produce arbitrarily large result tables at a slight cost in performance.

Requires destinationTable to be set. For standard SQL queries, this flag is ignored and large results are always allowed. However, you must still set destinationTable when result size exceeds the allowed maximum response size.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#allow_large_results BigqueryJob#allow_large_results}


create_dispositionOptional
  • Type: str

Specifies whether the job is allowed to create new tables.

The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion Default value: "CREATE_IF_NEEDED" Possible values: ["CREATE_IF_NEEDED", "CREATE_NEVER"]

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create_disposition BigqueryJob#create_disposition}


default_datasetOptional

default_dataset block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#default_dataset BigqueryJob#default_dataset}


destination_encryption_configurationOptional

destination_encryption_configuration block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_encryption_configuration BigqueryJob#destination_encryption_configuration}


destination_tableOptional

destination_table block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_table BigqueryJob#destination_table}


flatten_resultsOptional
  • Type: typing.Union[bool, cdktf.IResolvable]

If true and query uses legacy SQL dialect, flattens all nested and repeated fields in the query results.

allowLargeResults must be true if this is set to false. For standard SQL queries, this flag is ignored and results are never flattened.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#flatten_results BigqueryJob#flatten_results}


maximum_billing_tierOptional
  • Type: typing.Union[int, float]

Limits the billing tier for this job.

Queries that have resource usage beyond this tier will fail (without incurring a charge). If unspecified, this will be set to your project default.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#maximum_billing_tier BigqueryJob#maximum_billing_tier}


maximum_bytes_billedOptional
  • Type: str

Limits the bytes billed for this job.

Queries that will have bytes billed beyond this limit will fail (without incurring a charge). If unspecified, this will be set to your project default.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#maximum_bytes_billed BigqueryJob#maximum_bytes_billed}


parameter_modeOptional
  • Type: str

Standard SQL only.

Set to POSITIONAL to use positional (?) query parameters or to NAMED to use named (@myparam) query parameters in this query.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#parameter_mode BigqueryJob#parameter_mode}


priorityOptional
  • Type: str

Specifies a priority for the query. Default value: "INTERACTIVE" Possible values: ["INTERACTIVE", "BATCH"].

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#priority BigqueryJob#priority}


schema_update_optionsOptional
  • Type: typing.List[str]

Allows the schema of the destination table to be updated as a side effect of the query job.

Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND, and when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values may be specified: ALLOW_FIELD_ADDITION (allow adding a nullable field to the schema) and ALLOW_FIELD_RELAXATION (allow relaxing a required field in the original schema to nullable).

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#schema_update_options BigqueryJob#schema_update_options}


script_optionsOptional

script_options block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#script_options BigqueryJob#script_options}


use_legacy_sqlOptional
  • Type: typing.Union[bool, cdktf.IResolvable]

Specifies whether to use BigQuery's legacy SQL dialect for this query.

The default value is true. If set to false, the query will use BigQuery's standard SQL.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#use_legacy_sql BigqueryJob#use_legacy_sql}


use_query_cacheOptional
  • Type: typing.Union[bool, cdktf.IResolvable]

Whether to look for the result in the query cache.

The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. Moreover, the query cache is only available when a query does not have a destination table specified. The default value is true.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#use_query_cache BigqueryJob#use_query_cache}


user_defined_function_resourcesOptional

user_defined_function_resources block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#user_defined_function_resources BigqueryJob#user_defined_function_resources}


write_dispositionOptional
  • Type: str

Specifies the action that occurs if the destination table already exists.

The following values are supported: WRITE_TRUNCATE (if the table already exists, BigQuery overwrites the table data and uses the schema from the query result), WRITE_APPEND (if the table already exists, BigQuery appends the data to the table), and WRITE_EMPTY (if the table already exists and contains data, a 'duplicate' error is returned in the job result). Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation, and append actions occur as one atomic update upon job completion. Default value: "WRITE_EMPTY". Possible values: ["WRITE_TRUNCATE", "WRITE_APPEND", "WRITE_EMPTY"]

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#write_disposition BigqueryJob#write_disposition}
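
A hedged sketch of a query job that writes its result to a table (project, dataset, table, and query text are hypothetical):

job.put_query(
    query="SELECT state, COUNT(*) AS n FROM `my-project.my_dataset.my_table` GROUP BY state",
    use_legacy_sql=False,
    destination_table=bigquery_job.BigqueryJobQueryDestinationTable(
        table_id="projects/my-project/datasets/my_dataset/tables/state_counts",
    ),
    write_disposition="WRITE_TRUNCATE",
)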


put_timeouts
def put_timeouts(
  create: str = None,
  delete: str = None,
  update: str = None
) -> None
createOptional
  • Type: str

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create BigqueryJob#create}.


deleteOptional
  • Type: str

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#delete BigqueryJob#delete}.


updateOptional
  • Type: str

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#update BigqueryJob#update}.
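
For example (the durations are arbitrary Terraform timeout strings):

job.put_timeouts(create="10m", delete="10m", update="10m")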


reset_copy
def reset_copy() -> None
reset_extract
def reset_extract() -> None
reset_id
def reset_id() -> None
reset_job_timeout_ms
def reset_job_timeout_ms() -> None
reset_labels
def reset_labels() -> None
reset_load
def reset_load() -> None
reset_location
def reset_location() -> None
reset_project
def reset_project() -> None
reset_query
def reset_query() -> None
reset_timeouts
def reset_timeouts() -> None

Static Functions

Name Description
is_construct Checks if x is a construct.
is_terraform_element No description.
is_terraform_resource No description.
generate_config_for_import Generates CDKTF code for importing a BigqueryJob resource upon running "cdktf plan <stack-name>".

is_construct
from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJob.is_construct(
  x: typing.Any
)

Checks if x is a construct.

Use this method instead of instanceof to properly detect Construct instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the constructs library on disk are seen as independent, completely different libraries. As a consequence, the class Construct in each copy of the constructs library is seen as a different class, and an instance of one class will not test as instanceof the other class. npm install will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the constructs library can be accidentally installed, and instanceof will behave unpredictably. It is safest to avoid using instanceof, and using this type-testing method instead.

xRequired
  • Type: typing.Any

Any object.
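
A small usage sketch (the candidate variable is hypothetical):

if bigquery_job.BigqueryJob.is_construct(candidate):
    # Safe to treat candidate as a construct, even across symlinked
    # copies of the constructs library.
    print(candidate.node.path)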


is_terraform_element
from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJob.is_terraform_element(
  x: typing.Any
)
xRequired
  • Type: typing.Any

is_terraform_resource
from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJob.is_terraform_resource(
  x: typing.Any
)
xRequired
  • Type: typing.Any

generate_config_for_import
from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJob.generate_config_for_import(
  scope: Construct,
  import_to_id: str,
  import_from_id: str,
  provider: TerraformProvider = None
)

Generates CDKTF code for importing a BigqueryJob resource upon running "cdktf plan <stack-name>".

scopeRequired
  • Type: constructs.Construct

The scope in which to define this construct.


import_to_idRequired
  • Type: str

The construct id used in the generated config for the BigqueryJob to import.


import_from_idRequired
  • Type: str

The id of the existing BigqueryJob that should be imported.

Refer to the {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#import import section} in the documentation of this resource for the id to use


providerOptional
  • Type: cdktf.TerraformProvider

Optional instance of the provider where the BigqueryJob to import is found.
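
A hedged sketch, called from within a stack (self is the scope). The import id format shown is illustrative; confirm the exact format in the import section linked above.

bigquery_job.BigqueryJob.generate_config_for_import(
    self,
    import_to_id="imported-job",
    import_from_id="projects/my-project/locations/US/jobs/job_example_query",
)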


Properties

Name Type Description
node constructs.Node The tree node.
cdktf_stack cdktf.TerraformStack No description.
fqn str No description.
friendly_unique_id str No description.
terraform_meta_arguments typing.Mapping[typing.Any] No description.
terraform_resource_type str No description.
terraform_generator_metadata cdktf.TerraformProviderGeneratorMetadata No description.
connection typing.Union[cdktf.SSHProvisionerConnection, cdktf.WinrmProvisionerConnection] No description.
count typing.Union[typing.Union[int, float], cdktf.TerraformCount] No description.
depends_on typing.List[str] No description.
for_each cdktf.ITerraformIterator No description.
lifecycle cdktf.TerraformResourceLifecycle No description.
provider cdktf.TerraformProvider No description.
provisioners typing.List[typing.Union[cdktf.FileProvisioner, cdktf.LocalExecProvisioner, cdktf.RemoteExecProvisioner]] No description.
copy BigqueryJobCopyOutputReference No description.
effective_labels cdktf.StringMap No description.
extract BigqueryJobExtractOutputReference No description.
job_type str No description.
load BigqueryJobLoadOutputReference No description.
query BigqueryJobQueryOutputReference No description.
status BigqueryJobStatusList No description.
terraform_labels cdktf.StringMap No description.
timeouts BigqueryJobTimeoutsOutputReference No description.
user_email str No description.
copy_input BigqueryJobCopy No description.
extract_input BigqueryJobExtract No description.
id_input str No description.
job_id_input str No description.
job_timeout_ms_input str No description.
labels_input typing.Mapping[str] No description.
load_input BigqueryJobLoad No description.
location_input str No description.
project_input str No description.
query_input BigqueryJobQuery No description.
timeouts_input typing.Union[cdktf.IResolvable, BigqueryJobTimeouts] No description.
id str No description.
job_id str No description.
job_timeout_ms str No description.
labels typing.Mapping[str] No description.
location str No description.
project str No description.

nodeRequired
node: Node
  • Type: constructs.Node

The tree node.


cdktf_stackRequired
cdktf_stack: TerraformStack
  • Type: cdktf.TerraformStack

fqnRequired
fqn: str
  • Type: str

friendly_unique_idRequired
friendly_unique_id: str
  • Type: str

terraform_meta_argumentsRequired
terraform_meta_arguments: typing.Mapping[typing.Any]
  • Type: typing.Mapping[typing.Any]

terraform_resource_typeRequired
terraform_resource_type: str
  • Type: str

terraform_generator_metadataOptional
terraform_generator_metadata: TerraformProviderGeneratorMetadata
  • Type: cdktf.TerraformProviderGeneratorMetadata

connectionOptional
connection: typing.Union[SSHProvisionerConnection, WinrmProvisionerConnection]
  • Type: typing.Union[cdktf.SSHProvisionerConnection, cdktf.WinrmProvisionerConnection]

countOptional
count: typing.Union[typing.Union[int, float], TerraformCount]
  • Type: typing.Union[typing.Union[int, float], cdktf.TerraformCount]

depends_onOptional
depends_on: typing.List[str]
  • Type: typing.List[str]

for_eachOptional
for_each: ITerraformIterator
  • Type: cdktf.ITerraformIterator

lifecycleOptional
lifecycle: TerraformResourceLifecycle
  • Type: cdktf.TerraformResourceLifecycle

providerOptional
provider: TerraformProvider
  • Type: cdktf.TerraformProvider

provisionersOptional
provisioners: typing.List[typing.Union[FileProvisioner, LocalExecProvisioner, RemoteExecProvisioner]]
  • Type: typing.List[typing.Union[cdktf.FileProvisioner, cdktf.LocalExecProvisioner, cdktf.RemoteExecProvisioner]]

copyRequired
copy: BigqueryJobCopyOutputReference

effective_labelsRequired
effective_labels: StringMap
  • Type: cdktf.StringMap

extractRequired
extract: BigqueryJobExtractOutputReference

job_typeRequired
job_type: str
  • Type: str

loadRequired
load: BigqueryJobLoadOutputReference

queryRequired
query: BigqueryJobQueryOutputReference

statusRequired
status: BigqueryJobStatusList

terraform_labelsRequired
terraform_labels: StringMap
  • Type: cdktf.StringMap

timeoutsRequired
timeouts: BigqueryJobTimeoutsOutputReference

user_emailRequired
user_email: str
  • Type: str

copy_inputOptional
copy_input: BigqueryJobCopy

extract_inputOptional
extract_input: BigqueryJobExtract

id_inputOptional
id_input: str
  • Type: str

job_id_inputOptional
job_id_input: str
  • Type: str

job_timeout_ms_inputOptional
job_timeout_ms_input: str
  • Type: str

labels_inputOptional
labels_input: typing.Mapping[str]
  • Type: typing.Mapping[str]

load_inputOptional
load_input: BigqueryJobLoad

location_inputOptional
location_input: str
  • Type: str

project_inputOptional
project_input: str
  • Type: str

query_inputOptional
query_input: BigqueryJobQuery

timeouts_inputOptional
timeouts_input: typing.Union[IResolvable, BigqueryJobTimeouts]

idRequired
id: str
  • Type: str

job_idRequired
job_id: str
  • Type: str

job_timeout_msRequired
job_timeout_ms: str
  • Type: str

labelsRequired
labels: typing.Mapping[str]
  • Type: typing.Mapping[str]

locationRequired
location: str
  • Type: str

projectRequired
project: str
  • Type: str

Constants

Name Type Description
tfResourceType str No description.

tfResourceTypeRequired
tfResourceType: str
  • Type: str

Structs

BigqueryJobConfig

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobConfig(
  connection: typing.Union[SSHProvisionerConnection, WinrmProvisionerConnection] = None,
  count: typing.Union[typing.Union[int, float], TerraformCount] = None,
  depends_on: typing.List[ITerraformDependable] = None,
  for_each: ITerraformIterator = None,
  lifecycle: TerraformResourceLifecycle = None,
  provider: TerraformProvider = None,
  provisioners: typing.List[typing.Union[FileProvisioner, LocalExecProvisioner, RemoteExecProvisioner]] = None,
  job_id: str,
  copy: BigqueryJobCopy = None,
  extract: BigqueryJobExtract = None,
  id: str = None,
  job_timeout_ms: str = None,
  labels: typing.Mapping[str] = None,
  load: BigqueryJobLoad = None,
  location: str = None,
  project: str = None,
  query: BigqueryJobQuery = None,
  timeouts: BigqueryJobTimeouts = None
)

Properties

Name Type Description
connection typing.Union[cdktf.SSHProvisionerConnection, cdktf.WinrmProvisionerConnection] No description.
count typing.Union[typing.Union[int, float], cdktf.TerraformCount] No description.
depends_on typing.List[cdktf.ITerraformDependable] No description.
for_each cdktf.ITerraformIterator No description.
lifecycle cdktf.TerraformResourceLifecycle No description.
provider cdktf.TerraformProvider No description.
provisioners typing.List[typing.Union[cdktf.FileProvisioner, cdktf.LocalExecProvisioner, cdktf.RemoteExecProvisioner]] No description.
job_id str The ID of the job.
copy BigqueryJobCopy copy block.
extract BigqueryJobExtract extract block.
id str Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#id BigqueryJob#id}.
job_timeout_ms str Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.
labels typing.Mapping[str] The labels associated with this job. You can use these to organize and group your jobs.
load BigqueryJobLoad load block.
location str The geographic location of the job. The default value is US.
project str Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project BigqueryJob#project}.
query BigqueryJobQuery query block.
timeouts BigqueryJobTimeouts timeouts block.

connectionOptional
connection: typing.Union[SSHProvisionerConnection, WinrmProvisionerConnection]
  • Type: typing.Union[cdktf.SSHProvisionerConnection, cdktf.WinrmProvisionerConnection]

countOptional
count: typing.Union[typing.Union[int, float], TerraformCount]
  • Type: typing.Union[typing.Union[int, float], cdktf.TerraformCount]

depends_onOptional
depends_on: typing.List[ITerraformDependable]
  • Type: typing.List[cdktf.ITerraformDependable]

for_eachOptional
for_each: ITerraformIterator
  • Type: cdktf.ITerraformIterator

lifecycleOptional
lifecycle: TerraformResourceLifecycle
  • Type: cdktf.TerraformResourceLifecycle

providerOptional
provider: TerraformProvider
  • Type: cdktf.TerraformProvider

provisionersOptional
provisioners: typing.List[typing.Union[FileProvisioner, LocalExecProvisioner, RemoteExecProvisioner]]
  • Type: typing.List[typing.Union[cdktf.FileProvisioner, cdktf.LocalExecProvisioner, cdktf.RemoteExecProvisioner]]

job_idRequired
job_id: str
  • Type: str

The ID of the job.

The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#job_id BigqueryJob#job_id}


copyOptional
copy: BigqueryJobCopy

copy block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#copy BigqueryJob#copy}


extractOptional
extract: BigqueryJobExtract

extract block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#extract BigqueryJob#extract}


idOptional
id: str
  • Type: str

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#id BigqueryJob#id}.

Please be aware that the id field is automatically added to all resources in Terraform providers using a Terraform provider SDK version below 2. If you experience problems setting this value, it might not be settable; check the provider documentation to confirm whether it can be set.


job_timeout_msOptional
job_timeout_ms: str
  • Type: str

Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#job_timeout_ms BigqueryJob#job_timeout_ms}


labelsOptional
labels: typing.Mapping[str]
  • Type: typing.Mapping[str]

The labels associated with this job. You can use these to organize and group your jobs.

Note: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field 'effective_labels' for all of the labels present on the resource.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#labels BigqueryJob#labels}


loadOptional
load: BigqueryJobLoad

load block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#load BigqueryJob#load}


locationOptional
location: str
  • Type: str

The geographic location of the job. The default value is US.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#location BigqueryJob#location}


projectOptional
project: str
  • Type: str

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project BigqueryJob#project}.


queryOptional
query: BigqueryJobQuery

query block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#query BigqueryJob#query}


timeoutsOptional
timeouts: BigqueryJobTimeouts

timeouts block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#timeouts BigqueryJob#timeouts}


BigqueryJobCopy

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobCopy(
  source_tables: typing.Union[IResolvable, typing.List[BigqueryJobCopySourceTables]],
  create_disposition: str = None,
  destination_encryption_configuration: BigqueryJobCopyDestinationEncryptionConfiguration = None,
  destination_table: BigqueryJobCopyDestinationTable = None,
  write_disposition: str = None
)

Properties

Name Type Description
source_tables typing.Union[cdktf.IResolvable, typing.List[BigqueryJobCopySourceTables]] source_tables block.
create_disposition str Specifies whether the job is allowed to create new tables.
destination_encryption_configuration BigqueryJobCopyDestinationEncryptionConfiguration destination_encryption_configuration block.
destination_table BigqueryJobCopyDestinationTable destination_table block.
write_disposition str Specifies the action that occurs if the destination table already exists.

source_tablesRequired
source_tables: typing.Union[IResolvable, typing.List[BigqueryJobCopySourceTables]]

source_tables block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_tables BigqueryJob#source_tables}


create_dispositionOptional
create_disposition: str
  • Type: str

Specifies whether the job is allowed to create new tables.

The following values are supported: CREATE_IF_NEEDED (if the table does not exist, BigQuery creates the table) and CREATE_NEVER (the table must already exist; if it does not, a 'notFound' error is returned in the job result). Creation, truncation, and append actions occur as one atomic update upon job completion. Default value: "CREATE_IF_NEEDED". Possible values: ["CREATE_IF_NEEDED", "CREATE_NEVER"]

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create_disposition BigqueryJob#create_disposition}


destination_encryption_configurationOptional
destination_encryption_configuration: BigqueryJobCopyDestinationEncryptionConfiguration

destination_encryption_configuration block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_encryption_configuration BigqueryJob#destination_encryption_configuration}


destination_tableOptional
destination_table: BigqueryJobCopyDestinationTable

destination_table block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_table BigqueryJob#destination_table}


write_dispositionOptional
write_disposition: str
  • Type: str

Specifies the action that occurs if the destination table already exists.

The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation, and append actions occur as one atomic update upon job completion. Default value: "WRITE_EMPTY". Possible values: ["WRITE_TRUNCATE", "WRITE_APPEND", "WRITE_EMPTY"]

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#write_disposition BigqueryJob#write_disposition}
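
As a minimal sketch, the copy structs documented above might be assembled like this; every project, dataset, and table ID here is an illustrative placeholder:

from cdktf_cdktf_provider_google import bigquery_job

# All project, dataset, and table IDs below are illustrative placeholders.
copy_config = bigquery_job.BigqueryJobCopy(
  source_tables=[
    bigquery_job.BigqueryJobCopySourceTables(
      project_id="my-project",
      dataset_id="source_dataset",
      table_id="events",
    )
  ],
  destination_table=bigquery_job.BigqueryJobCopyDestinationTable(
    project_id="my-project",
    dataset_id="backup_dataset",
    table_id="events_copy",
  ),
  create_disposition="CREATE_IF_NEEDED",  # create the destination if it is missing
  write_disposition="WRITE_TRUNCATE",     # replace any existing data
)

The optional destination_encryption_configuration block (documented next) would attach here as well, carrying only a kms_key_name.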


BigqueryJobCopyDestinationEncryptionConfiguration

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobCopyDestinationEncryptionConfiguration(
  kms_key_name: str
)

Properties

Name Type Description
kms_key_name str Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table.

kms_key_nameRequired
kms_key_name: str
  • Type: str

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table.

The BigQuery Service Account associated with your project requires access to this encryption key.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#kms_key_name BigqueryJob#kms_key_name}


BigqueryJobCopyDestinationTable

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobCopyDestinationTable(
  table_id: str,
  dataset_id: str = None,
  project_id: str = None
)

Properties

Name Type Description
table_id str The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.
dataset_id str The ID of the dataset containing this table.
project_id str The ID of the project containing this table.

table_idRequired
table_id: str
  • Type: str

The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}


dataset_idOptional
dataset_id: str
  • Type: str

The ID of the dataset containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}


project_idOptional
project_id: str
  • Type: str

The ID of the project containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}


BigqueryJobCopySourceTables

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobCopySourceTables(
  table_id: str,
  dataset_id: str = None,
  project_id: str = None
)

Properties

Name Type Description
table_id str The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.
dataset_id str The ID of the dataset containing this table.
project_id str The ID of the project containing this table.

table_idRequired
table_id: str
  • Type: str

The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}


dataset_idOptional
dataset_id: str
  • Type: str

The ID of the dataset containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}


project_idOptional
project_id: str
  • Type: str

The ID of the project containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}


BigqueryJobExtract

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobExtract(
  destination_uris: typing.List[str],
  compression: str = None,
  destination_format: str = None,
  field_delimiter: str = None,
  print_header: typing.Union[bool, IResolvable] = None,
  source_model: BigqueryJobExtractSourceModel = None,
  source_table: BigqueryJobExtractSourceTable = None,
  use_avro_logical_types: typing.Union[bool, IResolvable] = None
)

Properties

Name Type Description
destination_uris typing.List[str] A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written.
compression str The compression type to use for exported files.
destination_format str The exported file format.
field_delimiter str When extracting data in CSV format, this defines the delimiter to use between fields in the exported data.
print_header typing.Union[bool, cdktf.IResolvable] Whether to print out a header row in the results. Default is true.
source_model BigqueryJobExtractSourceModel source_model block.
source_table BigqueryJobExtractSourceTable source_table block.
use_avro_logical_types typing.Union[bool, cdktf.IResolvable] Whether to use logical types when extracting to AVRO format.

destination_urisRequired
destination_uris: typing.List[str]
  • Type: typing.List[str]

A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_uris BigqueryJob#destination_uris}


compressionOptional
compression: str
  • Type: str

The compression type to use for exported files.

Possible values include GZIP, DEFLATE, SNAPPY, and NONE. The default value is NONE. DEFLATE and SNAPPY are only supported for Avro.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#compression BigqueryJob#compression}


destination_formatOptional
destination_format: str
  • Type: str

The exported file format.

Possible values include CSV, NEWLINE_DELIMITED_JSON, and AVRO for tables, and SAVED_MODEL for models. The default value for tables is CSV. Tables with nested or repeated fields cannot be exported as CSV. The default value for models is SAVED_MODEL.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_format BigqueryJob#destination_format}


field_delimiterOptional
field_delimiter: str
  • Type: str

When extracting data in CSV format, this defines the delimiter to use between fields in the exported data.

Default is ','.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#field_delimiter BigqueryJob#field_delimiter}


print_headerOptional
print_header: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

Whether to print out a header row in the results. Default is true.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#print_header BigqueryJob#print_header}


source_modelOptional
source_model: BigqueryJobExtractSourceModel

source_model block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_model BigqueryJob#source_model}


source_tableOptional
source_table: BigqueryJobExtractSourceTable

source_table block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_table BigqueryJob#source_table}


use_avro_logical_typesOptional
use_avro_logical_types: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

Whether to use logical types when extracting to AVRO format.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#use_avro_logical_types BigqueryJob#use_avro_logical_types}
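
A sketch of an extract configuration that exports a table to Cloud Storage as compressed CSV; the bucket and table IDs are illustrative placeholders:

from cdktf_cdktf_provider_google import bigquery_job

# The bucket, project, dataset, and table IDs are illustrative placeholders.
extract_config = bigquery_job.BigqueryJobExtract(
  destination_uris=["gs://my-bucket/exports/events-*.csv"],
  source_table=bigquery_job.BigqueryJobExtractSourceTable(
    project_id="my-project",
    dataset_id="source_dataset",
    table_id="events",
  ),
  destination_format="CSV",  # NEWLINE_DELIMITED_JSON and AVRO are also valid for tables
  compression="GZIP",
  field_delimiter=",",
  print_header=True,
)

A model export would instead set source_model (with the BigqueryJobExtractSourceModel struct below) and destination_format="SAVED_MODEL".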


BigqueryJobExtractSourceModel

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobExtractSourceModel(
  dataset_id: str,
  model_id: str,
  project_id: str
)

Properties

Name Type Description
dataset_id str The ID of the dataset containing this model.
model_id str The ID of the model.
project_id str The ID of the project containing this model.

dataset_idRequired
dataset_id: str
  • Type: str

The ID of the dataset containing this model.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}


model_idRequired
model_id: str
  • Type: str

The ID of the model.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#model_id BigqueryJob#model_id}


project_idRequired
project_id: str
  • Type: str

The ID of the project containing this model.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}


BigqueryJobExtractSourceTable

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobExtractSourceTable(
  table_id: str,
  dataset_id: str = None,
  project_id: str = None
)

Properties

Name Type Description
table_id str The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.
dataset_id str The ID of the dataset containing this table.
project_id str The ID of the project containing this table.

table_idRequired
table_id: str
  • Type: str

The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}


dataset_idOptional
dataset_id: str
  • Type: str

The ID of the dataset containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}


project_idOptional
project_id: str
  • Type: str

The ID of the project containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}


BigqueryJobLoad

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobLoad(
  destination_table: BigqueryJobLoadDestinationTable,
  source_uris: typing.List[str],
  allow_jagged_rows: typing.Union[bool, IResolvable] = None,
  allow_quoted_newlines: typing.Union[bool, IResolvable] = None,
  autodetect: typing.Union[bool, IResolvable] = None,
  create_disposition: str = None,
  destination_encryption_configuration: BigqueryJobLoadDestinationEncryptionConfiguration = None,
  encoding: str = None,
  field_delimiter: str = None,
  ignore_unknown_values: typing.Union[bool, IResolvable] = None,
  json_extension: str = None,
  max_bad_records: typing.Union[int, float] = None,
  null_marker: str = None,
  parquet_options: BigqueryJobLoadParquetOptions = None,
  projection_fields: typing.List[str] = None,
  quote: str = None,
  schema_update_options: typing.List[str] = None,
  skip_leading_rows: typing.Union[int, float] = None,
  source_format: str = None,
  time_partitioning: BigqueryJobLoadTimePartitioning = None,
  write_disposition: str = None
)

Properties

Name Type Description
destination_table BigqueryJobLoadDestinationTable destination_table block.
source_uris typing.List[str] The fully-qualified URIs that point to your data in Google Cloud.
allow_jagged_rows typing.Union[bool, cdktf.IResolvable] Accept rows that are missing trailing optional columns.
allow_quoted_newlines typing.Union[bool, cdktf.IResolvable] Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.
autodetect typing.Union[bool, cdktf.IResolvable] Indicates if we should automatically infer the options and schema for CSV and JSON sources.
create_disposition str Specifies whether the job is allowed to create new tables.
destination_encryption_configuration BigqueryJobLoadDestinationEncryptionConfiguration destination_encryption_configuration block.
encoding str The character encoding of the data.
field_delimiter str The separator for fields in a CSV file.
ignore_unknown_values typing.Union[bool, cdktf.IResolvable] Indicates if BigQuery should allow extra values that are not represented in the table schema.
json_extension str If sourceFormat is set to newline-delimited JSON, indicates whether it should be processed as a JSON variant such as GeoJSON.
max_bad_records typing.Union[int, float] The maximum number of bad records that BigQuery can ignore when running the job.
null_marker str Specifies a string that represents a null value in a CSV file.
parquet_options BigqueryJobLoadParquetOptions parquet_options block.
projection_fields typing.List[str] If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup.
quote str The value that is used to quote data sections in a CSV file.
schema_update_options typing.List[str] Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration.
skip_leading_rows typing.Union[int, float] The number of rows at the top of a CSV file that BigQuery will skip when loading the data.
source_format str The format of the data files.
time_partitioning BigqueryJobLoadTimePartitioning time_partitioning block.
write_disposition str Specifies the action that occurs if the destination table already exists.

destination_tableRequired
destination_table: BigqueryJobLoadDestinationTable

destination_table block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_table BigqueryJob#destination_table}


source_urisRequired
source_uris: typing.List[str]
  • Type: typing.List[str]

The fully-qualified URIs that point to your data in Google Cloud.

For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character, and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified, and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the '*' wildcard character is not allowed.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_uris BigqueryJob#source_uris}


allow_jagged_rowsOptional
allow_jagged_rows: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

Accept rows that are missing trailing optional columns.

The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#allow_jagged_rows BigqueryJob#allow_jagged_rows}


allow_quoted_newlinesOptional
allow_quoted_newlines: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.

The default value is false.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#allow_quoted_newlines BigqueryJob#allow_quoted_newlines}


autodetectOptional
autodetect: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

Indicates if we should automatically infer the options and schema for CSV and JSON sources.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#autodetect BigqueryJob#autodetect}


create_dispositionOptional
create_disposition: str
  • Type: str

Specifies whether the job is allowed to create new tables.

The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. Creation, truncation, and append actions occur as one atomic update upon job completion. Default value: "CREATE_IF_NEEDED". Possible values: ["CREATE_IF_NEEDED", "CREATE_NEVER"]

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create_disposition BigqueryJob#create_disposition}


destination_encryption_configurationOptional
destination_encryption_configuration: BigqueryJobLoadDestinationEncryptionConfiguration

destination_encryption_configuration block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_encryption_configuration BigqueryJob#destination_encryption_configuration}


encodingOptional
encoding: str
  • Type: str

The character encoding of the data.

The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#encoding BigqueryJob#encoding}


field_delimiterOptional
field_delimiter: str
  • Type: str

The separator for fields in a CSV file.

The separator can be any ISO-8859-1 single-byte character. To use a character in the range 128-255, you must encode the character as UTF-8. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#field_delimiter BigqueryJob#field_delimiter}


ignore_unknown_valuesOptional
ignore_unknown_values: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

Indicates if BigQuery should allow extra values that are not represented in the table schema.

If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: trailing columns; JSON: named values that don't match any column names.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#ignore_unknown_values BigqueryJob#ignore_unknown_values}


json_extensionOptional
json_extension: str
  • Type: str

If sourceFormat is set to newline-delimited JSON, indicates whether it should be processed as a JSON variant such as GeoJSON.

For a sourceFormat other than JSON, omit this field. If the sourceFormat is newline-delimited JSON, set this to GEOJSON for newline-delimited GeoJSON.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#json_extension BigqueryJob#json_extension}


max_bad_recordsOptional
max_bad_records: typing.Union[int, float]
  • Type: typing.Union[int, float]

The maximum number of bad records that BigQuery can ignore when running the job.

If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#max_bad_records BigqueryJob#max_bad_records}


null_markerOptional
null_marker: str
  • Type: str

Specifies a string that represents a null value in a CSV file.

For example, if you specify "\N", BigQuery interprets "\N" as a null value when loading a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#null_marker BigqueryJob#null_marker}


parquet_optionsOptional
parquet_options: BigqueryJobLoadParquetOptions

parquet_options block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#parquet_options BigqueryJob#parquet_options}


projection_fieldsOptional
projection_fields: typing.List[str]
  • Type: typing.List[str]

If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup.

Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#projection_fields BigqueryJob#projection_fields}


quoteOptional
quote: str
  • Type: str

The value that is used to quote data sections in a CSV file.

BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#quote BigqueryJob#quote}


schema_update_optionsOptional
schema_update_options: typing.List[str]
  • Type: typing.List[str]

Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration.

Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#schema_update_options BigqueryJob#schema_update_options}


skip_leading_rowsOptional
skip_leading_rows: typing.Union[int, float]
  • Type: typing.Union[int, float]

The number of rows at the top of a CSV file that BigQuery will skip when loading the data.

The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#skip_leading_rows BigqueryJob#skip_leading_rows}


source_formatOptional
source_format: str
  • Type: str

The format of the data files.

For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". For parquet, specify "PARQUET". For orc, specify "ORC". [Beta] For Bigtable, specify "BIGTABLE". The default value is CSV.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#source_format BigqueryJob#source_format}


time_partitioningOptional
time_partitioning: BigqueryJobLoadTimePartitioning

time_partitioning block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#time_partitioning BigqueryJob#time_partitioning}


write_dispositionOptional
write_disposition: str
  • Type: str

Specifies the action that occurs if the destination table already exists.

The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation, and append actions occur as one atomic update upon job completion. Default value: "WRITE_EMPTY". Possible values: ["WRITE_TRUNCATE", "WRITE_APPEND", "WRITE_EMPTY"]

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#write_disposition BigqueryJob#write_disposition}
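
A sketch of a load configuration that ingests CSV files from Cloud Storage, exercising the CSV-specific options described above; the URIs and IDs are illustrative placeholders:

from cdktf_cdktf_provider_google import bigquery_job

# The bucket URI and all project, dataset, and table IDs are illustrative placeholders.
load_config = bigquery_job.BigqueryJobLoad(
  source_uris=["gs://my-bucket/raw/2025-05-*.csv"],
  destination_table=bigquery_job.BigqueryJobLoadDestinationTable(
    project_id="my-project",
    dataset_id="raw_dataset",
    table_id="events",
  ),
  source_format="CSV",
  skip_leading_rows=1,               # skip the header row
  field_delimiter=",",
  null_marker="\\N",                 # BigQuery reads \N as NULL
  max_bad_records=10,                # tolerate up to 10 invalid rows
  autodetect=True,                   # infer the schema from the data
  write_disposition="WRITE_APPEND",  # append to any existing data
)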


BigqueryJobLoadDestinationEncryptionConfiguration

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobLoadDestinationEncryptionConfiguration(
  kms_key_name: str
)

Properties

Name Type Description
kms_key_name str Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table.

kms_key_nameRequired
kms_key_name: str
  • Type: str

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table.

The BigQuery Service Account associated with your project requires access to this encryption key.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#kms_key_name BigqueryJob#kms_key_name}


BigqueryJobLoadDestinationTable

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobLoadDestinationTable(
  table_id: str,
  dataset_id: str = None,
  project_id: str = None
)

Properties

Name Type Description
table_id str The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.
dataset_id str The ID of the dataset containing this table.
project_id str The ID of the project containing this table.

table_idRequired
table_id: str
  • Type: str

The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}


dataset_idOptional
dataset_id: str
  • Type: str

The ID of the dataset containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}


project_idOptional
project_id: str
  • Type: str

The ID of the project containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}


BigqueryJobLoadParquetOptions

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobLoadParquetOptions(
  enable_list_inference: typing.Union[bool, IResolvable] = None,
  enum_as_string: typing.Union[bool, IResolvable] = None
)

Properties

Name Type Description
enable_list_inference typing.Union[bool, cdktf.IResolvable] If sourceFormat is set to PARQUET, indicates whether to use schema inference specifically for Parquet LIST logical type.
enum_as_string typing.Union[bool, cdktf.IResolvable] If sourceFormat is set to PARQUET, indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.

enable_list_inferenceOptional
enable_list_inference: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

If sourceFormat is set to PARQUET, indicates whether to use schema inference specifically for Parquet LIST logical type.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#enable_list_inference BigqueryJob#enable_list_inference}


enum_as_stringOptional
enum_as_string: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

If sourceFormat is set to PARQUET, indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#enum_as_string BigqueryJob#enum_as_string}
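
A brief sketch of how these options are built; both flags only take effect when the enclosing load block uses source_format="PARQUET":

from cdktf_cdktf_provider_google import bigquery_job

parquet_opts = bigquery_job.BigqueryJobLoadParquetOptions(
  enable_list_inference=True,  # use LIST-specific schema inference
  enum_as_string=True,         # read ENUM logical types as STRING rather than BYTES
)
# Would be passed as parquet_options=parquet_opts on BigqueryJobLoad.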


BigqueryJobLoadTimePartitioning

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobLoadTimePartitioning(
  type: str,
  expiration_ms: str = None,
  field: str = None
)

Properties

Name Type Description
type str The only type supported is DAY, which will generate one partition per day.
expiration_ms str Number of milliseconds for which to keep the storage for a partition.
field str If not set, the table is partitioned by pseudo column '_PARTITIONTIME';

typeRequired
type: str
  • Type: str

The only type supported is DAY, which will generate one partition per day.

Providing an empty string used to cause an error, but in OnePlatform the field will be treated as unset.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#type BigqueryJob#type}


expiration_msOptional
expiration_ms: str
  • Type: str

Number of milliseconds for which to keep the storage for a partition.

A wrapper is used here because 0 is an invalid value.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#expiration_ms BigqueryJob#expiration_ms}


fieldOptional
field: str
  • Type: str

If not set, the table is partitioned by pseudo column '_PARTITIONTIME';

if set, the table is partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED. A wrapper is used here because an empty string is an invalid value.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#field BigqueryJob#field}
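
A sketch of a daily time-partitioning block keyed on a hypothetical top-level TIMESTAMP column, with a 90-day partition expiry:

from cdktf_cdktf_provider_google import bigquery_job

partitioning = bigquery_job.BigqueryJobLoadTimePartitioning(
  type="DAY",
  field="event_ts",  # hypothetical top-level TIMESTAMP column
  expiration_ms=str(90 * 24 * 60 * 60 * 1000),  # keep each partition for 90 days
)

Note that expiration_ms is a string, per the wrapper semantics described above.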


BigqueryJobQuery

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobQuery(
  query: str,
  allow_large_results: typing.Union[bool, IResolvable] = None,
  create_disposition: str = None,
  default_dataset: BigqueryJobQueryDefaultDataset = None,
  destination_encryption_configuration: BigqueryJobQueryDestinationEncryptionConfiguration = None,
  destination_table: BigqueryJobQueryDestinationTable = None,
  flatten_results: typing.Union[bool, IResolvable] = None,
  maximum_billing_tier: typing.Union[int, float] = None,
  maximum_bytes_billed: str = None,
  parameter_mode: str = None,
  priority: str = None,
  schema_update_options: typing.List[str] = None,
  script_options: BigqueryJobQueryScriptOptions = None,
  use_legacy_sql: typing.Union[bool, IResolvable] = None,
  use_query_cache: typing.Union[bool, IResolvable] = None,
  user_defined_function_resources: typing.Union[IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]] = None,
  write_disposition: str = None
)

Properties

Name Type Description
query str SQL query text to execute.
allow_large_results typing.Union[bool, cdktf.IResolvable] If true and query uses legacy SQL dialect, allows the query to produce arbitrarily large result tables at a slight cost in performance.
create_disposition str Specifies whether the job is allowed to create new tables.
default_dataset BigqueryJobQueryDefaultDataset default_dataset block.
destination_encryption_configuration BigqueryJobQueryDestinationEncryptionConfiguration destination_encryption_configuration block.
destination_table BigqueryJobQueryDestinationTable destination_table block.
flatten_results typing.Union[bool, cdktf.IResolvable] If true and query uses legacy SQL dialect, flattens all nested and repeated fields in the query results.
maximum_billing_tier typing.Union[int, float] Limits the billing tier for this job.
maximum_bytes_billed str Limits the bytes billed for this job.
parameter_mode str Standard SQL only.
priority str Specifies a priority for the query. Default value: "INTERACTIVE" Possible values: ["INTERACTIVE", "BATCH"].
schema_update_options typing.List[str] Allows the schema of the destination table to be updated as a side effect of the query job.
script_options BigqueryJobQueryScriptOptions script_options block.
use_legacy_sql typing.Union[bool, cdktf.IResolvable] Specifies whether to use BigQuery's legacy SQL dialect for this query.
use_query_cache typing.Union[bool, cdktf.IResolvable] Whether to look for the result in the query cache.
user_defined_function_resources typing.Union[cdktf.IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]] user_defined_function_resources block.
write_disposition str Specifies the action that occurs if the destination table already exists.

queryRequired
query: str
  • Type: str

SQL query text to execute.

The useLegacySql field can be used to indicate whether the query uses legacy SQL or standard SQL. NOTE: queries containing DML language ('DELETE', 'UPDATE', 'MERGE', 'INSERT') must specify 'create_disposition = ""' and 'write_disposition = ""'.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#query BigqueryJob#query}


allow_large_resultsOptional
allow_large_results: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

If true and query uses legacy SQL dialect, allows the query to produce arbitrarily large result tables at a slight cost in performance.

Requires destinationTable to be set. For standard SQL queries, this flag is ignored and large results are always allowed. However, you must still set destinationTable when result size exceeds the allowed maximum response size.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#allow_large_results BigqueryJob#allow_large_results}


create_dispositionOptional
create_disposition: str
  • Type: str

Specifies whether the job is allowed to create new tables.

The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. Creation, truncation, and append actions occur as one atomic update upon job completion. Default value: "CREATE_IF_NEEDED". Possible values: ["CREATE_IF_NEEDED", "CREATE_NEVER"]

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create_disposition BigqueryJob#create_disposition}


default_datasetOptional
default_dataset: BigqueryJobQueryDefaultDataset

default_dataset block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#default_dataset BigqueryJob#default_dataset}


destination_encryption_configurationOptional
destination_encryption_configuration: BigqueryJobQueryDestinationEncryptionConfiguration

destination_encryption_configuration block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_encryption_configuration BigqueryJob#destination_encryption_configuration}


destination_tableOptional
destination_table: BigqueryJobQueryDestinationTable

destination_table block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#destination_table BigqueryJob#destination_table}


flatten_resultsOptional
flatten_results: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

If true and query uses legacy SQL dialect, flattens all nested and repeated fields in the query results.

allowLargeResults must be true if this is set to false. For standard SQL queries, this flag is ignored and results are never flattened.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#flatten_results BigqueryJob#flatten_results}


maximum_billing_tierOptional
maximum_billing_tier: typing.Union[int, float]
  • Type: typing.Union[int, float]

Limits the billing tier for this job.

Queries that have resource usage beyond this tier will fail (without incurring a charge). If unspecified, this will be set to your project default.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#maximum_billing_tier BigqueryJob#maximum_billing_tier}


maximum_bytes_billedOptional
maximum_bytes_billed: str
  • Type: str

Limits the bytes billed for this job.

Queries that will have bytes billed beyond this limit will fail (without incurring a charge). If unspecified, this will be set to your project default.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#maximum_bytes_billed BigqueryJob#maximum_bytes_billed}


parameter_modeOptional
parameter_mode: str
  • Type: str

Standard SQL only.

Set to POSITIONAL to use positional (?) query parameters or to NAMED to use named (@myparam) query parameters in this query.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#parameter_mode BigqueryJob#parameter_mode}


priorityOptional
priority: str
  • Type: str

Specifies a priority for the query. Default value: "INTERACTIVE" Possible values: ["INTERACTIVE", "BATCH"].

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#priority BigqueryJob#priority}


schema_update_optionsOptional
schema_update_options: typing.List[str]
  • Type: typing.List[str]

Allows the schema of the destination table to be updated as a side effect of the query job.

Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#schema_update_options BigqueryJob#schema_update_options}


script_optionsOptional
script_options: BigqueryJobQueryScriptOptions

script_options block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#script_options BigqueryJob#script_options}


use_legacy_sqlOptional
use_legacy_sql: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

Specifies whether to use BigQuery's legacy SQL dialect for this query.

The default value is true. If set to false, the query will use BigQuery's standard SQL.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#use_legacy_sql BigqueryJob#use_legacy_sql}


use_query_cacheOptional
use_query_cache: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

Whether to look for the result in the query cache.

The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. Moreover, the query cache is only available when a query does not have a destination table specified. The default value is true.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#use_query_cache BigqueryJob#use_query_cache}


user_defined_function_resourcesOptional
user_defined_function_resources: typing.Union[IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]]

user_defined_function_resources block.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#user_defined_function_resources BigqueryJob#user_defined_function_resources}


write_dispositionOptional
write_disposition: str
  • Type: str

Specifies the action that occurs if the destination table already exists.

The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation, and append actions occur as one atomic update upon job completion. Default value: "WRITE_EMPTY". Possible values: ["WRITE_TRUNCATE", "WRITE_APPEND", "WRITE_EMPTY"]

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#write_disposition BigqueryJob#write_disposition}
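
A sketch of a complete query job wiring this struct into the resource itself. The job ID, SQL text, and all project, dataset, and table names are illustrative placeholders; a DML statement would instead set create_disposition and write_disposition to empty strings, as noted above:

from constructs import Construct
from cdktf import App, TerraformStack
from cdktf_cdktf_provider_google import bigquery_job

class QueryJobStack(TerraformStack):
  def __init__(self, scope: Construct, id: str):
    super().__init__(scope, id)
    # A google provider block would also be configured in a real stack.
    bigquery_job.BigqueryJob(
      self, "daily-rollup",
      job_id="daily-rollup-job",  # hypothetical job ID
      query=bigquery_job.BigqueryJobQuery(
        query="SELECT user_id, COUNT(*) AS n FROM `my-project.raw_dataset.events` GROUP BY user_id",
        use_legacy_sql=False,  # standard SQL
        priority="BATCH",
        destination_table=bigquery_job.BigqueryJobQueryDestinationTable(
          project_id="my-project",
          dataset_id="rollup_dataset",
          table_id="daily_counts",
        ),
        write_disposition="WRITE_TRUNCATE",
      ),
    )

app = App()
QueryJobStack(app, "bigquery-query-job")
app.synth()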


BigqueryJobQueryDefaultDataset

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobQueryDefaultDataset(
  dataset_id: str,
  project_id: str = None
)

Properties

Name Type Description
dataset_id str The dataset. Can be specified as '{{dataset_id}}' if 'project_id' is also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}' if not.
project_id str The ID of the project containing this table.

dataset_idRequired
dataset_id: str
  • Type: str

The dataset. Can be specified as '{{dataset_id}}' if 'project_id' is also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}' if not.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}


project_idOptional
project_id: str
  • Type: str

The ID of the project containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}


BigqueryJobQueryDestinationEncryptionConfiguration

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobQueryDestinationEncryptionConfiguration(
  kms_key_name: str
)

Properties

Name Type Description
kms_key_name str Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table.

kms_key_nameRequired
kms_key_name: str
  • Type: str

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table.

The BigQuery Service Account associated with your project requires access to this encryption key.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#kms_key_name BigqueryJob#kms_key_name}


BigqueryJobQueryDestinationTable

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobQueryDestinationTable(
  table_id: str,
  dataset_id: str = None,
  project_id: str = None
)

Properties

Name Type Description
table_id str The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.
dataset_id str The ID of the dataset containing this table.
project_id str The ID of the project containing this table.

table_idRequired
table_id: str
  • Type: str

The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}


dataset_idOptional
dataset_id: str
  • Type: str

The ID of the dataset containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}


project_idOptional
project_id: str
  • Type: str

The ID of the project containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}


BigqueryJobQueryScriptOptions

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobQueryScriptOptions(
  key_result_statement: str = None,
  statement_byte_budget: str = None,
  statement_timeout_ms: str = None
)

Properties

Name Type Description
key_result_statement str Determines which statement in the script represents the "key result", used to populate the schema and query results of the script job.
statement_byte_budget str Limit on the number of bytes billed per statement. Exceeding this budget results in an error.
statement_timeout_ms str Timeout period for each statement in a script.

key_result_statementOptional
key_result_statement: str
  • Type: str

Determines which statement in the script represents the "key result", used to populate the schema and query results of the script job.

Possible values: ["LAST", "FIRST_SELECT"]

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#key_result_statement BigqueryJob#key_result_statement}


statement_byte_budgetOptional
statement_byte_budget: str
  • Type: str

Limit on the number of bytes billed per statement. Exceeding this budget results in an error.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#statement_byte_budget BigqueryJob#statement_byte_budget}


statement_timeout_msOptional
statement_timeout_ms: str
  • Type: str

Timeout period for each statement in a script.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#statement_timeout_ms BigqueryJob#statement_timeout_ms}
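
A sketch of script options for a multi-statement script, bounding each statement's cost and runtime; the budget and timeout values are illustrative:

from cdktf_cdktf_provider_google import bigquery_job

script_opts = bigquery_job.BigqueryJobQueryScriptOptions(
  key_result_statement="LAST",              # schema and results come from the last statement
  statement_byte_budget=str(10 * 1024**3),  # at most 10 GiB billed per statement
  statement_timeout_ms="60000",             # 60 seconds per statement
)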


BigqueryJobQueryUserDefinedFunctionResources

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobQueryUserDefinedFunctionResources(
  inline_code: str = None,
  resource_uri: str = None
)

Properties

Name Type Description
inline_code str An inline resource that contains code for a user-defined function (UDF).
resource_uri str A code resource to load from a Google Cloud Storage URI (gs://bucket/path).

inline_codeOptional
inline_code: str
  • Type: str

An inline resource that contains code for a user-defined function (UDF).

Providing an inline code resource is equivalent to providing a URI for a file containing the same code.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#inline_code BigqueryJob#inline_code}


resource_uriOptional
resource_uri: str
  • Type: str

A code resource to load from a Google Cloud Storage URI (gs://bucket/path).

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#resource_uri BigqueryJob#resource_uri}
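
A sketch showing the two alternative ways to supply a legacy-SQL UDF resource; the JavaScript snippet and the Cloud Storage path are illustrative placeholders:

from cdktf_cdktf_provider_google import bigquery_job

inline_udf = bigquery_job.BigqueryJobQueryUserDefinedFunctionResources(
  inline_code="function double(x) { return 2 * x; }",
)
gcs_udf = bigquery_job.BigqueryJobQueryUserDefinedFunctionResources(
  resource_uri="gs://my-bucket/udfs/helpers.js",  # hypothetical bucket and path
)
# Either form would be passed as user_defined_function_resources=[...] on BigqueryJobQuery.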


BigqueryJobStatus

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobStatus()

BigqueryJobStatusErrorResult

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobStatusErrorResult()

BigqueryJobStatusErrors

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobStatusErrors()

BigqueryJobTimeouts

Initializer

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobTimeouts(
  create: str = None,
  delete: str = None,
  update: str = None
)

Properties

Name Type Description
create str Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create BigqueryJob#create}.
delete str Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#delete BigqueryJob#delete}.
update str Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#update BigqueryJob#update}.

createOptional
create: str
  • Type: str

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#create BigqueryJob#create}.


deleteOptional
delete: str
  • Type: str

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#delete BigqueryJob#delete}.


updateOptional
update: str
  • Type: str

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#update BigqueryJob#update}.
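
A sketch of a timeouts block using Terraform duration strings; the values are illustrative:

from cdktf_cdktf_provider_google import bigquery_job

timeouts = bigquery_job.BigqueryJobTimeouts(
  create="10m",  # Terraform duration strings
  update="10m",
  delete="5m",
)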


Classes

BigqueryJobCopyDestinationEncryptionConfigurationOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobCopyDestinationEncryptionConfigurationOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
kms_key_version str No description.
kms_key_name_input str No description.
kms_key_name str No description.
internal_value BigqueryJobCopyDestinationEncryptionConfiguration No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

kms_key_versionRequired
kms_key_version: str
  • Type: str

kms_key_name_inputOptional
kms_key_name_input: str
  • Type: str

kms_key_nameRequired
kms_key_name: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobCopyDestinationEncryptionConfiguration
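
In typical use these output-reference classes are obtained from an existing resource rather than constructed by hand. A sketch, assuming 'job' is a previously constructed BigqueryJob with a copy block, and assuming the attribute path mirrors the property list above:

# 'job' is a hypothetical, previously constructed BigqueryJob with a copy block;
# the attribute path is assumed from the documented properties, not verified here.
key_version = job.copy.destination_encryption_configuration.kms_key_version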

BigqueryJobCopyDestinationTableOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobCopyDestinationTableOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
reset_dataset_id No description.
reset_project_id No description.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

reset_dataset_id
def reset_dataset_id() -> None
reset_project_id
def reset_project_id() -> None

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
dataset_id_input str No description.
project_id_input str No description.
table_id_input str No description.
dataset_id str No description.
project_id str No description.
table_id str No description.
internal_value BigqueryJobCopyDestinationTable No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

dataset_id_inputOptional
dataset_id_input: str
  • Type: str

project_id_inputOptional
project_id_input: str
  • Type: str

table_id_inputOptional
table_id_input: str
  • Type: str

dataset_idRequired
dataset_id: str
  • Type: str

project_idRequired
project_id: str
  • Type: str

table_idRequired
table_id: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobCopyDestinationTable
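
An illustrative sketch of the `*_input`/plain-property split and the reset helpers (the function name and the full-form table_id precondition are hypothetical): dataset_id_input returns the literal configured value or None, whereas dataset_id resolves as a Terraform token.

from cdktf_cdktf_provider_google import bigquery_job

def normalize_copy_destination(job: bigquery_job.BigqueryJob) -> None:
    dest = job.copy.destination_table
    # Only meaningful if table_id is already in the full
    # 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' form.
    if dest.dataset_id_input is not None and dest.project_id_input is not None:
        dest.reset_dataset_id()  # clear the optional dataset_id
        dest.reset_project_id()  # clear the optional project_id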

BigqueryJobCopyOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobCopyOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
put_destination_encryption_configuration No description.
put_destination_table No description.
put_source_tables No description.
reset_create_disposition No description.
reset_destination_encryption_configuration No description.
reset_destination_table No description.
reset_write_disposition No description.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

put_destination_encryption_configuration
def put_destination_encryption_configuration(
  kms_key_name: str
) -> None
kms_key_nameRequired
  • Type: str

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table.

The BigQuery Service Account associated with your project requires access to this encryption key.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#kms_key_name BigqueryJob#kms_key_name}


put_destination_table
def put_destination_table(
  table_id: str,
  dataset_id: str = None,
  project_id: str = None
) -> None
table_idRequired
  • Type: str

The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or in the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}


dataset_idOptional
  • Type: str

The ID of the dataset containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}


project_idOptional
  • Type: str

The ID of the project containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}


put_source_tables
def put_source_tables(
  value: typing.Union[IResolvable, typing.List[BigqueryJobCopySourceTables]]
) -> None
valueRequired
  • Type: typing.Union[cdktf.IResolvable, typing.List[BigqueryJobCopySourceTables]]

reset_create_disposition
def reset_create_disposition() -> None
reset_destination_encryption_configuration
def reset_destination_encryption_configuration() -> None
reset_destination_table
def reset_destination_table() -> None
reset_write_disposition
def reset_write_disposition() -> None

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
destination_encryption_configuration BigqueryJobCopyDestinationEncryptionConfigurationOutputReference No description.
destination_table BigqueryJobCopyDestinationTableOutputReference No description.
source_tables BigqueryJobCopySourceTablesList No description.
create_disposition_input str No description.
destination_encryption_configuration_input BigqueryJobCopyDestinationEncryptionConfiguration No description.
destination_table_input BigqueryJobCopyDestinationTable No description.
source_tables_input typing.Union[cdktf.IResolvable, typing.List[BigqueryJobCopySourceTables]] No description.
write_disposition_input str No description.
create_disposition str No description.
write_disposition str No description.
internal_value BigqueryJobCopy No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

destination_encryption_configurationRequired
destination_encryption_configuration: BigqueryJobCopyDestinationEncryptionConfigurationOutputReference

destination_tableRequired
destination_table: BigqueryJobCopyDestinationTableOutputReference

source_tablesRequired
source_tables: BigqueryJobCopySourceTablesList

create_disposition_inputOptional
create_disposition_input: str
  • Type: str

destination_encryption_configuration_inputOptional
destination_encryption_configuration_input: BigqueryJobCopyDestinationEncryptionConfiguration

destination_table_inputOptional
destination_table_input: BigqueryJobCopyDestinationTable

source_tables_inputOptional
source_tables_input: typing.Union[IResolvable, typing.List[BigqueryJobCopySourceTables]]

write_disposition_inputOptional
write_disposition_input: str
  • Type: str

create_dispositionRequired
create_disposition: str
  • Type: str

write_dispositionRequired
write_disposition: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobCopy
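
Putting the copy block together end to end, a hedged sketch (project, dataset, table, and key names are placeholders; provider configuration is omitted for brevity) that configures nested blocks through the put_* helpers documented above:

from constructs import Construct
from cdktf import App, TerraformStack
from cdktf_cdktf_provider_google import bigquery_job

class CopyJobStack(TerraformStack):
    def __init__(self, scope: Construct, id: str):
        super().__init__(scope, id)
        job = bigquery_job.BigqueryJob(
            self, "copy_job",
            job_id="example_copy_job",
            copy=bigquery_job.BigqueryJobCopy(
                source_tables=[
                    bigquery_job.BigqueryJobCopySourceTables(
                        table_id="src_table",
                        dataset_id="example_dataset",
                        project_id="example-project",
                    )
                ]
            ),
        )
        # Nested blocks can also be (re)configured after construction:
        job.copy.put_destination_table(
            table_id="dst_table",
            dataset_id="example_dataset",
            project_id="example-project",
        )
        job.copy.put_destination_encryption_configuration(
            kms_key_name="projects/example-project/locations/us/keyRings/example-ring/cryptoKeys/example-key",
        )

app = App()
CopyJobStack(app, "bigquery-copy-job")
app.synth()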

BigqueryJobCopySourceTablesList

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobCopySourceTablesList(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str,
  wraps_set: bool
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.
wraps_set bool Whether the list is wrapping a set (adds tolist() so an item can be accessed via an index).

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


wraps_setRequired
  • Type: bool

Whether the list is wrapping a set (adds tolist() so an item can be accessed via an index).


Methods

Name Description
all_with_map_key Creates an iterator for this complex list.
compute_fqn No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
get No description.

all_with_map_key
def all_with_map_key(
  map_key_attribute_name: str
) -> DynamicListTerraformIterator

Creates an iterator for this complex list.

The list will be converted into a map with map_key_attribute_name as the key.

map_key_attribute_nameRequired
  • Type: str

compute_fqn
def compute_fqn() -> str
resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

get
def get(
  index: typing.Union[int, float]
) -> BigqueryJobCopySourceTablesOutputReference
indexRequired
  • Type: typing.Union[int, float]

The index of the item to return.


Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
internal_value typing.Union[cdktf.IResolvable, typing.List[BigqueryJobCopySourceTables]] No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

internal_valueOptional
internal_value: typing.Union[IResolvable, typing.List[BigqueryJobCopySourceTables]]
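
A short access sketch (the attribute key and helper name are assumptions): get() wraps one element in a typed output reference, and all_with_map_key() converts the list into a map keyed by an attribute, e.g. to feed a for_each iterator.

from cdktf_cdktf_provider_google import bigquery_job

def inspect_copy_sources(job: bigquery_job.BigqueryJob) -> None:
    tables = job.copy.source_tables
    # Index access goes through get(); when the list wraps a set,
    # tolist() is applied under the hood first.
    first = tables.get(0)
    print(first.table_id)  # a Terraform token, not a literal string
    # Re-key the list by table_id for map-style iteration.
    iterator = tables.all_with_map_key("table_id")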

BigqueryJobCopySourceTablesOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobCopySourceTablesOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str,
  complex_object_index: typing.Union[int, float],
  complex_object_is_from_set: bool
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.
complex_object_index typing.Union[int, float] the index of this item in the list.
complex_object_is_from_set bool Whether the list is wrapping a set (adds tolist() so an item can be accessed via an index).

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


complex_object_indexRequired
  • Type: typing.Union[int, float]

The index of this item in the list.


complex_object_is_from_setRequired
  • Type: bool

Whether the list is wrapping a set (adds tolist() so an item can be accessed via an index).


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
reset_dataset_id No description.
reset_project_id No description.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

reset_dataset_id
def reset_dataset_id() -> None
reset_project_id
def reset_project_id() -> None

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
dataset_id_input str No description.
project_id_input str No description.
table_id_input str No description.
dataset_id str No description.
project_id str No description.
table_id str No description.
internal_value typing.Union[cdktf.IResolvable, BigqueryJobCopySourceTables] No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

dataset_id_inputOptional
dataset_id_input: str
  • Type: str

project_id_inputOptional
project_id_input: str
  • Type: str

table_id_inputOptional
table_id_input: str
  • Type: str

dataset_idRequired
dataset_id: str
  • Type: str

project_idRequired
project_id: str
  • Type: str

table_idRequired
table_id: str
  • Type: str

internal_valueOptional
internal_value: typing.Union[IResolvable, BigqueryJobCopySourceTables]
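
For whole-element rewrites, internal_value doubles as an escape hatch; a hedged sketch with a placeholder replacement table:

from cdktf_cdktf_provider_google import bigquery_job

def replace_first_source_table(job: bigquery_job.BigqueryJob) -> None:
    item = job.copy.source_tables.get(0)
    # Assigning internal_value swaps the entire element for a new struct.
    item.internal_value = bigquery_job.BigqueryJobCopySourceTables(
        table_id="projects/example-project/datasets/example_dataset/tables/replacement",
    )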

BigqueryJobExtractOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobExtractOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
put_source_model No description.
put_source_table No description.
reset_compression No description.
reset_destination_format No description.
reset_field_delimiter No description.
reset_print_header No description.
reset_source_model No description.
reset_source_table No description.
reset_use_avro_logical_types No description.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

put_source_model
def put_source_model(
  dataset_id: str,
  model_id: str,
  project_id: str
) -> None
dataset_idRequired
  • Type: str

The ID of the dataset containing this model.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}


model_idRequired
  • Type: str

The ID of the model.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#model_id BigqueryJob#model_id}


project_idRequired
  • Type: str

The ID of the project containing this model.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}


put_source_table
def put_source_table(
  table_id: str,
  dataset_id: str = None,
  project_id: str = None
) -> None
table_idRequired
  • Type: str

The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or in the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}


dataset_idOptional
  • Type: str

The ID of the dataset containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}


project_idOptional
  • Type: str

The ID of the project containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}


reset_compression
def reset_compression() -> None
reset_destination_format
def reset_destination_format() -> None
reset_field_delimiter
def reset_field_delimiter() -> None
reset_print_header
def reset_print_header() -> None
reset_source_model
def reset_source_model() -> None
reset_source_table
def reset_source_table() -> None
reset_use_avro_logical_types
def reset_use_avro_logical_types() -> None

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
source_model BigqueryJobExtractSourceModelOutputReference No description.
source_table BigqueryJobExtractSourceTableOutputReference No description.
compression_input str No description.
destination_format_input str No description.
destination_uris_input typing.List[str] No description.
field_delimiter_input str No description.
print_header_input typing.Union[bool, cdktf.IResolvable] No description.
source_model_input BigqueryJobExtractSourceModel No description.
source_table_input BigqueryJobExtractSourceTable No description.
use_avro_logical_types_input typing.Union[bool, cdktf.IResolvable] No description.
compression str No description.
destination_format str No description.
destination_uris typing.List[str] No description.
field_delimiter str No description.
print_header typing.Union[bool, cdktf.IResolvable] No description.
use_avro_logical_types typing.Union[bool, cdktf.IResolvable] No description.
internal_value BigqueryJobExtract No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

source_modelRequired
source_model: BigqueryJobExtractSourceModelOutputReference

source_tableRequired
source_table: BigqueryJobExtractSourceTableOutputReference

compression_inputOptional
compression_input: str
  • Type: str

destination_format_inputOptional
destination_format_input: str
  • Type: str

destination_uris_inputOptional
destination_uris_input: typing.List[str]
  • Type: typing.List[str]

field_delimiter_inputOptional
field_delimiter_input: str
  • Type: str

print_header_inputOptional
print_header_input: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

source_model_inputOptional
source_model_input: BigqueryJobExtractSourceModel

source_table_inputOptional
source_table_input: BigqueryJobExtractSourceTable

use_avro_logical_types_inputOptional
use_avro_logical_types_input: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

compressionRequired
compression: str
  • Type: str

destination_formatRequired
destination_format: str
  • Type: str

destination_urisRequired
destination_uris: typing.List[str]
  • Type: typing.List[str]

field_delimiterRequired
field_delimiter: str
  • Type: str

print_headerRequired
print_header: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

use_avro_logical_typesRequired
use_avro_logical_types: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

internal_valueOptional
internal_value: BigqueryJobExtract
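
A hedged sketch of adjusting an extract block after construction (IDs are placeholders; destination_uris itself is required and set when the extract block is created):

from cdktf_cdktf_provider_google import bigquery_job

def configure_extract(job: bigquery_job.BigqueryJob) -> None:
    extract = job.extract
    # Point the extract at a source table; dataset_id and project_id are
    # optional when table_id is given in the full-form notation.
    extract.put_source_table(
        table_id="src_table",
        dataset_id="example_dataset",
        project_id="example-project",
    )
    # Clear optional scalars so provider defaults apply again.
    extract.reset_print_header()
    extract.reset_field_delimiter()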

BigqueryJobExtractSourceModelOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobExtractSourceModelOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
dataset_id_input str No description.
model_id_input str No description.
project_id_input str No description.
dataset_id str No description.
model_id str No description.
project_id str No description.
internal_value BigqueryJobExtractSourceModel No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

dataset_id_inputOptional
dataset_id_input: str
  • Type: str

model_id_inputOptional
model_id_input: str
  • Type: str

project_id_inputOptional
project_id_input: str
  • Type: str

dataset_idRequired
dataset_id: str
  • Type: str

model_idRequired
model_id: str
  • Type: str

project_idRequired
project_id: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobExtractSourceModel
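
Unlike put_source_table, all three put_source_model arguments are required. A hedged sketch (treating table and model sources as mutually exclusive is an assumption based on BigQuery extract semantics, not on this binding):

from cdktf_cdktf_provider_google import bigquery_job

def extract_model(job: bigquery_job.BigqueryJob) -> None:
    # Clear any table source first (assumption: an extract reads either
    # a table or a model, not both).
    job.extract.reset_source_table()
    job.extract.put_source_model(
        dataset_id="example_dataset",
        model_id="example_model",
        project_id="example-project",
    )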

BigqueryJobExtractSourceTableOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobExtractSourceTableOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
reset_dataset_id No description.
reset_project_id No description.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

reset_dataset_id
def reset_dataset_id() -> None
reset_project_id
def reset_project_id() -> None

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
dataset_id_input str No description.
project_id_input str No description.
table_id_input str No description.
dataset_id str No description.
project_id str No description.
table_id str No description.
internal_value BigqueryJobExtractSourceTable No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

dataset_id_inputOptional
dataset_id_input: str
  • Type: str

project_id_inputOptional
project_id_input: str
  • Type: str

table_id_inputOptional
table_id_input: str
  • Type: str

dataset_idRequired
dataset_id: str
  • Type: str

project_idRequired
project_id: str
  • Type: str

table_idRequired
table_id: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobExtractSourceTable
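
The typed properties and the generic getters reach the same attributes; a small sketch (helper name assumed) showing both spellings of the same token:

from cdktf_cdktf_provider_google import bigquery_job

def source_table_token(job: bigquery_job.BigqueryJob) -> str:
    st = job.extract.source_table
    # st.table_id and the generic getter below both yield a token
    # referencing the same underlying Terraform attribute.
    return st.get_string_attribute("table_id")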

BigqueryJobLoadDestinationEncryptionConfigurationOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobLoadDestinationEncryptionConfigurationOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
kms_key_version str No description.
kms_key_name_input str No description.
kms_key_name str No description.
internal_value BigqueryJobLoadDestinationEncryptionConfiguration No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

kms_key_versionRequired
kms_key_version: str
  • Type: str

kms_key_name_inputOptional
kms_key_name_input: str
  • Type: str

kms_key_nameRequired
kms_key_name: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobLoadDestinationEncryptionConfiguration
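
Besides put_destination_encryption_configuration on the load block, the struct can be assigned wholesale through internal_value; a hedged sketch with a placeholder key name:

from cdktf_cdktf_provider_google import bigquery_job

def set_load_encryption(job: bigquery_job.BigqueryJob) -> None:
    # Equivalent in effect to job.load.put_destination_encryption_configuration(...).
    job.load.destination_encryption_configuration.internal_value = (
        bigquery_job.BigqueryJobLoadDestinationEncryptionConfiguration(
            kms_key_name="projects/example-project/locations/us/keyRings/example-ring/cryptoKeys/example-key",
        )
    )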

BigqueryJobLoadDestinationTableOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobLoadDestinationTableOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
reset_dataset_id No description.
reset_project_id No description.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

reset_dataset_id
def reset_dataset_id() -> None
reset_project_id
def reset_project_id() -> None

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
dataset_id_input str No description.
project_id_input str No description.
table_id_input str No description.
dataset_id str No description.
project_id str No description.
table_id str No description.
internal_value BigqueryJobLoadDestinationTable No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

dataset_id_inputOptional
dataset_id_input: str
  • Type: str

project_id_inputOptional
project_id_input: str
  • Type: str

table_id_inputOptional
table_id_input: str
  • Type: str

dataset_idRequired
dataset_id: str
  • Type: str

project_idRequired
project_id: str
  • Type: str

table_idRequired
table_id: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobLoadDestinationTable

BigqueryJobLoadOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobLoadOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
put_destination_encryption_configuration No description.
put_destination_table No description.
put_parquet_options No description.
put_time_partitioning No description.
reset_allow_jagged_rows No description.
reset_allow_quoted_newlines No description.
reset_autodetect No description.
reset_create_disposition No description.
reset_destination_encryption_configuration No description.
reset_encoding No description.
reset_field_delimiter No description.
reset_ignore_unknown_values No description.
reset_json_extension No description.
reset_max_bad_records No description.
reset_null_marker No description.
reset_parquet_options No description.
reset_projection_fields No description.
reset_quote No description.
reset_schema_update_options No description.
reset_skip_leading_rows No description.
reset_source_format No description.
reset_time_partitioning No description.
reset_write_disposition No description.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

put_destination_encryption_configuration
def put_destination_encryption_configuration(
  kms_key_name: str
) -> None
kms_key_nameRequired
  • Type: str

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table.

The BigQuery Service Account associated with your project requires access to this encryption key.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#kms_key_name BigqueryJob#kms_key_name}


put_destination_table
def put_destination_table(
  table_id: str,
  dataset_id: str = None,
  project_id: str = None
) -> None
table_idRequired
  • Type: str

The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or in the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}


dataset_idOptional
  • Type: str

The ID of the dataset containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}


project_idOptional
  • Type: str

The ID of the project containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}


put_parquet_options
def put_parquet_options(
  enable_list_inference: typing.Union[bool, IResolvable] = None,
  enum_as_string: typing.Union[bool, IResolvable] = None
) -> None
enable_list_inferenceOptional
  • Type: typing.Union[bool, cdktf.IResolvable]

If sourceFormat is set to PARQUET, indicates whether to use schema inference specifically for the Parquet LIST logical type.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#enable_list_inference BigqueryJob#enable_list_inference}


enum_as_stringOptional
  • Type: typing.Union[bool, cdktf.IResolvable]

If sourceFormat is set to PARQUET, indicates whether to infer the Parquet ENUM logical type as STRING instead of BYTES by default.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#enum_as_string BigqueryJob#enum_as_string}


put_time_partitioning
def put_time_partitioning(
  type: str,
  expiration_ms: str = None,
  field: str = None
) -> None
typeRequired
  • Type: str

The only type supported is DAY, which will generate one partition per day.

Providing an empty string used to cause an error, but in OnePlatform the field will be treated as unset.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#type BigqueryJob#type}


expiration_msOptional
  • Type: str

Number of milliseconds for which to keep the storage for a partition.

A wrapper is used here because 0 is an invalid value.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#expiration_ms BigqueryJob#expiration_ms}


fieldOptional
  • Type: str

If not set, the table is partitioned by the pseudo column '_PARTITIONTIME'; if set, the table is partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED. A wrapper is used here because an empty string is an invalid value.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#field BigqueryJob#field}


reset_allow_jagged_rows
def reset_allow_jagged_rows() -> None
reset_allow_quoted_newlines
def reset_allow_quoted_newlines() -> None
reset_autodetect
def reset_autodetect() -> None
reset_create_disposition
def reset_create_disposition() -> None
reset_destination_encryption_configuration
def reset_destination_encryption_configuration() -> None
reset_encoding
def reset_encoding() -> None
reset_field_delimiter
def reset_field_delimiter() -> None
reset_ignore_unknown_values
def reset_ignore_unknown_values() -> None
reset_json_extension
def reset_json_extension() -> None
reset_max_bad_records
def reset_max_bad_records() -> None
reset_null_marker
def reset_null_marker() -> None
reset_parquet_options
def reset_parquet_options() -> None
reset_projection_fields
def reset_projection_fields() -> None
reset_quote
def reset_quote() -> None
reset_schema_update_options
def reset_schema_update_options() -> None
reset_skip_leading_rows
def reset_skip_leading_rows() -> None
reset_source_format
def reset_source_format() -> None
reset_time_partitioning
def reset_time_partitioning() -> None
reset_write_disposition
def reset_write_disposition() -> None

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
destination_encryption_configuration BigqueryJobLoadDestinationEncryptionConfigurationOutputReference No description.
destination_table BigqueryJobLoadDestinationTableOutputReference No description.
parquet_options BigqueryJobLoadParquetOptionsOutputReference No description.
time_partitioning BigqueryJobLoadTimePartitioningOutputReference No description.
allow_jagged_rows_input typing.Union[bool, cdktf.IResolvable] No description.
allow_quoted_newlines_input typing.Union[bool, cdktf.IResolvable] No description.
autodetect_input typing.Union[bool, cdktf.IResolvable] No description.
create_disposition_input str No description.
destination_encryption_configuration_input BigqueryJobLoadDestinationEncryptionConfiguration No description.
destination_table_input BigqueryJobLoadDestinationTable No description.
encoding_input str No description.
field_delimiter_input str No description.
ignore_unknown_values_input typing.Union[bool, cdktf.IResolvable] No description.
json_extension_input str No description.
max_bad_records_input typing.Union[int, float] No description.
null_marker_input str No description.
parquet_options_input BigqueryJobLoadParquetOptions No description.
projection_fields_input typing.List[str] No description.
quote_input str No description.
schema_update_options_input typing.List[str] No description.
skip_leading_rows_input typing.Union[int, float] No description.
source_format_input str No description.
source_uris_input typing.List[str] No description.
time_partitioning_input BigqueryJobLoadTimePartitioning No description.
write_disposition_input str No description.
allow_jagged_rows typing.Union[bool, cdktf.IResolvable] No description.
allow_quoted_newlines typing.Union[bool, cdktf.IResolvable] No description.
autodetect typing.Union[bool, cdktf.IResolvable] No description.
create_disposition str No description.
encoding str No description.
field_delimiter str No description.
ignore_unknown_values typing.Union[bool, cdktf.IResolvable] No description.
json_extension str No description.
max_bad_records typing.Union[int, float] No description.
null_marker str No description.
projection_fields typing.List[str] No description.
quote str No description.
schema_update_options typing.List[str] No description.
skip_leading_rows typing.Union[int, float] No description.
source_format str No description.
source_uris typing.List[str] No description.
write_disposition str No description.
internal_value BigqueryJobLoad No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

destination_encryption_configurationRequired
destination_encryption_configuration: BigqueryJobLoadDestinationEncryptionConfigurationOutputReference

destination_tableRequired
destination_table: BigqueryJobLoadDestinationTableOutputReference

parquet_optionsRequired
parquet_options: BigqueryJobLoadParquetOptionsOutputReference

time_partitioningRequired
time_partitioning: BigqueryJobLoadTimePartitioningOutputReference

allow_jagged_rows_inputOptional
allow_jagged_rows_input: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

allow_quoted_newlines_inputOptional
allow_quoted_newlines_input: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

autodetect_inputOptional
autodetect_input: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

create_disposition_inputOptional
create_disposition_input: str
  • Type: str

destination_encryption_configuration_inputOptional
destination_encryption_configuration_input: BigqueryJobLoadDestinationEncryptionConfiguration

destination_table_inputOptional
destination_table_input: BigqueryJobLoadDestinationTable

encoding_inputOptional
encoding_input: str
  • Type: str

field_delimiter_inputOptional
field_delimiter_input: str
  • Type: str

ignore_unknown_values_inputOptional
ignore_unknown_values_input: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

json_extension_inputOptional
json_extension_input: str
  • Type: str

max_bad_records_inputOptional
max_bad_records_input: typing.Union[int, float]
  • Type: typing.Union[int, float]

null_marker_inputOptional
null_marker_input: str
  • Type: str

parquet_options_inputOptional
parquet_options_input: BigqueryJobLoadParquetOptions

projection_fields_inputOptional
projection_fields_input: typing.List[str]
  • Type: typing.List[str]

quote_inputOptional
quote_input: str
  • Type: str

schema_update_options_inputOptional
schema_update_options_input: typing.List[str]
  • Type: typing.List[str]

skip_leading_rows_inputOptional
skip_leading_rows_input: typing.Union[int, float]
  • Type: typing.Union[int, float]

source_format_inputOptional
source_format_input: str
  • Type: str

source_uris_inputOptional
source_uris_input: typing.List[str]
  • Type: typing.List[str]

time_partitioning_inputOptional
time_partitioning_input: BigqueryJobLoadTimePartitioning

write_disposition_inputOptional
write_disposition_input: str
  • Type: str

allow_jagged_rowsRequired
allow_jagged_rows: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

allow_quoted_newlinesRequired
allow_quoted_newlines: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

autodetectRequired
autodetect: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

create_dispositionRequired
create_disposition: str
  • Type: str

encodingRequired
encoding: str
  • Type: str

field_delimiterRequired
field_delimiter: str
  • Type: str

ignore_unknown_valuesRequired
ignore_unknown_values: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

json_extensionRequired
json_extension: str
  • Type: str

max_bad_recordsRequired
max_bad_records: typing.Union[int, float]
  • Type: typing.Union[int, float]

null_markerRequired
null_marker: str
  • Type: str

projection_fieldsRequired
projection_fields: typing.List[str]
  • Type: typing.List[str]

quoteRequired
quote: str
  • Type: str

schema_update_optionsRequired
schema_update_options: typing.List[str]
  • Type: typing.List[str]

skip_leading_rowsRequired
skip_leading_rows: typing.Union[int, float]
  • Type: typing.Union[int, float]

source_formatRequired
source_format: str
  • Type: str

source_urisRequired
source_uris: typing.List[str]
  • Type: typing.List[str]

write_dispositionRequired
write_disposition: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobLoad
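
As a usage sketch (not part of the generated reference): `job.load` returns this `BigqueryJobLoadOutputReference` on a `BigqueryJob` whose `load` block is set. The job ID, bucket URI, and table path below are hypothetical placeholders.

```python
from constructs import Construct
from cdktf import TerraformStack, TerraformOutput
from cdktf_cdktf_provider_google import bigquery_job

class LoadJobStack(TerraformStack):
    def __init__(self, scope: Construct, id: str):
        super().__init__(scope, id)
        job = bigquery_job.BigqueryJob(
            self, "load_job",
            job_id="job_load_example",  # hypothetical job ID
            load=bigquery_job.BigqueryJobLoad(
                source_uris=["gs://example-bucket/data.csv"],  # hypothetical URI
                destination_table=bigquery_job.BigqueryJobLoadDestinationTable(
                    table_id="projects/my-project/datasets/my_ds/tables/my_table",
                ),
                source_format="CSV",
                skip_leading_rows=1,
                write_disposition="WRITE_APPEND",
            ),
        )
        # job.load is a BigqueryJobLoadOutputReference; its properties are
        # tokens that resolve at apply time.
        TerraformOutput(self, "load_format", value=job.load.source_format)
```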

BigqueryJobLoadParquetOptionsOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobLoadParquetOptionsOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
reset_enable_list_inference No description.
reset_enum_as_string No description.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

reset_enable_list_inference
def reset_enable_list_inference() -> None
reset_enum_as_string
def reset_enum_as_string() -> None

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
enable_list_inference_input typing.Union[bool, cdktf.IResolvable] No description.
enum_as_string_input typing.Union[bool, cdktf.IResolvable] No description.
enable_list_inference typing.Union[bool, cdktf.IResolvable] No description.
enum_as_string typing.Union[bool, cdktf.IResolvable] No description.
internal_value BigqueryJobLoadParquetOptions No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

enable_list_inference_inputOptional
enable_list_inference_input: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

enum_as_string_inputOptional
enum_as_string_input: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

enable_list_inferenceRequired
enable_list_inference: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

enum_as_stringRequired
enum_as_string: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

internal_valueOptional
internal_value: BigqueryJobLoadParquetOptions
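
A minimal sketch of the corresponding configuration struct (values are illustrative); both fields are optional, matching the `reset_*` methods above.

```python
from cdktf_cdktf_provider_google import bigquery_job

# Passed as `parquet_options=...` inside a BigqueryJobLoad block.
parquet = bigquery_job.BigqueryJobLoadParquetOptions(
    enable_list_inference=True,
    enum_as_string=False,
)
```

Once the block is set, `job.load.parquet_options` returns this output reference, and calling e.g. `reset_enum_as_string()` on it drops the attribute so the provider default applies again.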

BigqueryJobLoadTimePartitioningOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobLoadTimePartitioningOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
reset_expiration_ms No description.
reset_field No description.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

reset_expiration_ms
def reset_expiration_ms() -> None
reset_field
def reset_field() -> None

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
expiration_ms_input str No description.
field_input str No description.
type_input str No description.
expiration_ms str No description.
field str No description.
type str No description.
internal_value BigqueryJobLoadTimePartitioning No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

expiration_ms_inputOptional
expiration_ms_input: str
  • Type: str

field_inputOptional
field_input: str
  • Type: str

type_inputOptional
type_input: str
  • Type: str

expiration_msRequired
expiration_ms: str
  • Type: str

fieldRequired
field: str
  • Type: str

typeRequired
type: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobLoadTimePartitioning
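
A minimal sketch of the matching struct. `type` is the only field without a `reset_*` method above, so it is the only required one; the values are illustrative.

```python
from cdktf_cdktf_provider_google import bigquery_job

# Passed as `time_partitioning=...` inside a BigqueryJobLoad block.
partitioning = bigquery_job.BigqueryJobLoadTimePartitioning(
    type="DAY",
    expiration_ms="86400000",  # a string, per the property types above
    field="event_date",        # hypothetical partition column
)
```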

BigqueryJobQueryDefaultDatasetOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobQueryDefaultDatasetOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
reset_project_id No description.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

reset_project_id
def reset_project_id() -> None

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
dataset_id_input str No description.
project_id_input str No description.
dataset_id str No description.
project_id str No description.
internal_value BigqueryJobQueryDefaultDataset No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

dataset_id_inputOptional
dataset_id_input: str
  • Type: str

project_id_inputOptional
project_id_input: str
  • Type: str

dataset_idRequired
dataset_id: str
  • Type: str

project_idRequired
project_id: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobQueryDefaultDataset
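
A minimal sketch of the matching struct (values are hypothetical). `project_id` is optional, as the `reset_project_id` method above suggests; when it is omitted, `dataset_id` must be fully qualified.

```python
from cdktf_cdktf_provider_google import bigquery_job

# Passed as `default_dataset=...` inside a BigqueryJobQuery block.
default_ds = bigquery_job.BigqueryJobQueryDefaultDataset(
    dataset_id="my_dataset",  # hypothetical
    project_id="my-project",  # hypothetical
)
```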

BigqueryJobQueryDestinationEncryptionConfigurationOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobQueryDestinationEncryptionConfigurationOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
kms_key_version str No description.
kms_key_name_input str No description.
kms_key_name str No description.
internal_value BigqueryJobQueryDestinationEncryptionConfiguration No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

kms_key_versionRequired
kms_key_version: str
  • Type: str

kms_key_name_inputOptional
kms_key_name_input: str
  • Type: str

kms_key_nameRequired
kms_key_name: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobQueryDestinationEncryptionConfiguration
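
A minimal sketch of the matching struct. `kms_key_name` is the only configurable field; `kms_key_version` has no `_input` counterpart above, so it is computed and read-only. The key path is a hypothetical placeholder.

```python
from cdktf_cdktf_provider_google import bigquery_job

# Passed as `destination_encryption_configuration=...` inside a BigqueryJobQuery block.
enc = bigquery_job.BigqueryJobQueryDestinationEncryptionConfiguration(
    kms_key_name="projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key",
)
# After apply, the resolved version is readable through the output reference:
#   job.query.destination_encryption_configuration.kms_key_version
```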

BigqueryJobQueryDestinationTableOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobQueryDestinationTableOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
reset_dataset_id No description.
reset_project_id No description.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

reset_dataset_id
def reset_dataset_id() -> None
reset_project_id
def reset_project_id() -> None

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
dataset_id_input str No description.
project_id_input str No description.
table_id_input str No description.
dataset_id str No description.
project_id str No description.
table_id str No description.
internal_value BigqueryJobQueryDestinationTable No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

dataset_id_inputOptional
dataset_id_input: str
  • Type: str

project_id_inputOptional
project_id_input: str
  • Type: str

table_id_inputOptional
table_id_input: str
  • Type: str

dataset_idRequired
dataset_id: str
  • Type: str

project_idRequired
project_id: str
  • Type: str

table_idRequired
table_id: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobQueryDestinationTable
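
A minimal sketch of the matching struct. `table_id` is required; `dataset_id` and `project_id` may be omitted (both have `reset_*` methods above) when `table_id` is fully qualified. The path below is a hypothetical placeholder.

```python
from cdktf_cdktf_provider_google import bigquery_job

# Passed as `destination_table=...` inside a BigqueryJobQuery block.
dest = bigquery_job.BigqueryJobQueryDestinationTable(
    table_id="projects/my-project/datasets/my_ds/tables/results",
)
```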

BigqueryJobQueryOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobQueryOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
put_default_dataset No description.
put_destination_encryption_configuration No description.
put_destination_table No description.
put_script_options No description.
put_user_defined_function_resources No description.
reset_allow_large_results No description.
reset_create_disposition No description.
reset_default_dataset No description.
reset_destination_encryption_configuration No description.
reset_destination_table No description.
reset_flatten_results No description.
reset_maximum_billing_tier No description.
reset_maximum_bytes_billed No description.
reset_parameter_mode No description.
reset_priority No description.
reset_schema_update_options No description.
reset_script_options No description.
reset_use_legacy_sql No description.
reset_use_query_cache No description.
reset_user_defined_function_resources No description.
reset_write_disposition No description.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

put_default_dataset
def put_default_dataset(
  dataset_id: str,
  project_id: str = None
) -> None
dataset_idRequired
  • Type: str

The dataset. Can be specified as '{{dataset_id}}' if 'project_id' is also set, or in the form 'projects/{{project}}/datasets/{{dataset_id}}' if not.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}


project_idOptional
  • Type: str

The ID of the project containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}
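
Both forms accepted for 'dataset_id' above, as a sketch (`query_ref` stands in for a hypothetical `job.query` output reference):

```python
# Short form: dataset_id alone, with project_id set alongside it.
query_ref.put_default_dataset(dataset_id="my_dataset", project_id="my-project")

# Fully qualified form, without project_id.
query_ref.put_default_dataset(dataset_id="projects/my-project/datasets/my_dataset")
```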


put_destination_encryption_configuration
def put_destination_encryption_configuration(
  kms_key_name: str
) -> None
kms_key_nameRequired
  • Type: str

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table.

The BigQuery Service Account associated with your project requires access to this encryption key.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#kms_key_name BigqueryJob#kms_key_name}


put_destination_table
def put_destination_table(
  table_id: str,
  dataset_id: str = None,
  project_id: str = None
) -> None
table_idRequired
  • Type: str

The table. Can be specified as '{{table_id}}' if 'project_id' and 'dataset_id' are also set, or in the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}' if not.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#table_id BigqueryJob#table_id}


dataset_idOptional
  • Type: str

The ID of the dataset containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#dataset_id BigqueryJob#dataset_id}


project_idOptional
  • Type: str

The ID of the project containing this table.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#project_id BigqueryJob#project_id}


put_script_options
def put_script_options(
  key_result_statement: str = None,
  statement_byte_budget: str = None,
  statement_timeout_ms: str = None
) -> None
key_result_statementOptional
  • Type: str

Determines which statement in the script represents the "key result", used to populate the schema and query results of the script job.

Possible values: ["LAST", "FIRST_SELECT"]

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#key_result_statement BigqueryJob#key_result_statement}


statement_byte_budgetOptional
  • Type: str

Limit on the number of bytes billed per statement. Exceeding this budget results in an error.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#statement_byte_budget BigqueryJob#statement_byte_budget}


statement_timeout_msOptional
  • Type: str

Timeout period for each statement in a script.

Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/6.35.0/docs/resources/bigquery_job#statement_timeout_ms BigqueryJob#statement_timeout_ms}


put_user_defined_function_resources
def put_user_defined_function_resources(
  value: typing.Union[IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]]
) -> None
valueRequired
  • Type: typing.Union[cdktf.IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]]

reset_allow_large_results
def reset_allow_large_results() -> None
reset_create_disposition
def reset_create_disposition() -> None
reset_default_dataset
def reset_default_dataset() -> None
reset_destination_encryption_configuration
def reset_destination_encryption_configuration() -> None
reset_destination_table
def reset_destination_table() -> None
reset_flatten_results
def reset_flatten_results() -> None
reset_maximum_billing_tier
def reset_maximum_billing_tier() -> None
reset_maximum_bytes_billed
def reset_maximum_bytes_billed() -> None
reset_parameter_mode
def reset_parameter_mode() -> None
reset_priority
def reset_priority() -> None
reset_schema_update_options
def reset_schema_update_options() -> None
reset_script_options
def reset_script_options() -> None
reset_use_legacy_sql
def reset_use_legacy_sql() -> None
reset_use_query_cache
def reset_use_query_cache() -> None
reset_user_defined_function_resources
def reset_user_defined_function_resources() -> None
reset_write_disposition
def reset_write_disposition() -> None
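
A sketch of mutating an existing query block through this output reference; `job` is a hypothetical `BigqueryJob` with a `query` block configured, and all values are placeholders.

```python
query_ref = job.query  # BigqueryJobQueryOutputReference

query_ref.put_destination_table(
    table_id="projects/my-project/datasets/my_ds/tables/results",
)
query_ref.put_script_options(
    key_result_statement="LAST",
    statement_timeout_ms="60000",
)
query_ref.reset_use_legacy_sql()    # fall back to the provider default
query_ref.reset_write_disposition()
```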

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
default_dataset BigqueryJobQueryDefaultDatasetOutputReference No description.
destination_encryption_configuration BigqueryJobQueryDestinationEncryptionConfigurationOutputReference No description.
destination_table BigqueryJobQueryDestinationTableOutputReference No description.
script_options BigqueryJobQueryScriptOptionsOutputReference No description.
user_defined_function_resources BigqueryJobQueryUserDefinedFunctionResourcesList No description.
allow_large_results_input typing.Union[bool, cdktf.IResolvable] No description.
create_disposition_input str No description.
default_dataset_input BigqueryJobQueryDefaultDataset No description.
destination_encryption_configuration_input BigqueryJobQueryDestinationEncryptionConfiguration No description.
destination_table_input BigqueryJobQueryDestinationTable No description.
flatten_results_input typing.Union[bool, cdktf.IResolvable] No description.
maximum_billing_tier_input typing.Union[int, float] No description.
maximum_bytes_billed_input str No description.
parameter_mode_input str No description.
priority_input str No description.
query_input str No description.
schema_update_options_input typing.List[str] No description.
script_options_input BigqueryJobQueryScriptOptions No description.
use_legacy_sql_input typing.Union[bool, cdktf.IResolvable] No description.
use_query_cache_input typing.Union[bool, cdktf.IResolvable] No description.
user_defined_function_resources_input typing.Union[cdktf.IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]] No description.
write_disposition_input str No description.
allow_large_results typing.Union[bool, cdktf.IResolvable] No description.
create_disposition str No description.
flatten_results typing.Union[bool, cdktf.IResolvable] No description.
maximum_billing_tier typing.Union[int, float] No description.
maximum_bytes_billed str No description.
parameter_mode str No description.
priority str No description.
query str No description.
schema_update_options typing.List[str] No description.
use_legacy_sql typing.Union[bool, cdktf.IResolvable] No description.
use_query_cache typing.Union[bool, cdktf.IResolvable] No description.
write_disposition str No description.
internal_value BigqueryJobQuery No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

default_datasetRequired
default_dataset: BigqueryJobQueryDefaultDatasetOutputReference

destination_encryption_configurationRequired
destination_encryption_configuration: BigqueryJobQueryDestinationEncryptionConfigurationOutputReference

destination_tableRequired
destination_table: BigqueryJobQueryDestinationTableOutputReference

script_optionsRequired
script_options: BigqueryJobQueryScriptOptionsOutputReference

user_defined_function_resourcesRequired
user_defined_function_resources: BigqueryJobQueryUserDefinedFunctionResourcesList

allow_large_results_inputOptional
allow_large_results_input: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

create_disposition_inputOptional
create_disposition_input: str
  • Type: str

default_dataset_inputOptional
default_dataset_input: BigqueryJobQueryDefaultDataset

destination_encryption_configuration_inputOptional
destination_encryption_configuration_input: BigqueryJobQueryDestinationEncryptionConfiguration

destination_table_inputOptional
destination_table_input: BigqueryJobQueryDestinationTable

flatten_results_inputOptional
flatten_results_input: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

maximum_billing_tier_inputOptional
maximum_billing_tier_input: typing.Union[int, float]
  • Type: typing.Union[int, float]

maximum_bytes_billed_inputOptional
maximum_bytes_billed_input: str
  • Type: str

parameter_mode_inputOptional
parameter_mode_input: str
  • Type: str

priority_inputOptional
priority_input: str
  • Type: str

query_inputOptional
query_input: str
  • Type: str

schema_update_options_inputOptional
schema_update_options_input: typing.List[str]
  • Type: typing.List[str]

script_options_inputOptional
script_options_input: BigqueryJobQueryScriptOptions

use_legacy_sql_inputOptional
use_legacy_sql_input: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

use_query_cache_inputOptional
use_query_cache_input: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

user_defined_function_resources_inputOptional
user_defined_function_resources_input: typing.Union[IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]]

write_disposition_inputOptional
write_disposition_input: str
  • Type: str

allow_large_resultsRequired
allow_large_results: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

create_dispositionRequired
create_disposition: str
  • Type: str

flatten_resultsRequired
flatten_results: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

maximum_billing_tierRequired
maximum_billing_tier: typing.Union[int, float]
  • Type: typing.Union[int, float]

maximum_bytes_billedRequired
maximum_bytes_billed: str
  • Type: str

parameter_modeRequired
parameter_mode: str
  • Type: str

priorityRequired
priority: str
  • Type: str

queryRequired
query: str
  • Type: str

schema_update_optionsRequired
schema_update_options: typing.List[str]
  • Type: typing.List[str]

use_legacy_sqlRequired
use_legacy_sql: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

use_query_cacheRequired
use_query_cache: typing.Union[bool, IResolvable]
  • Type: typing.Union[bool, cdktf.IResolvable]

write_dispositionRequired
write_disposition: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobQuery
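
As a usage sketch, the properties above read back through the same reference (hypothetical `job` inside a stack's `__init__`):

```python
from cdktf import TerraformOutput

TerraformOutput(self, "sql", value=job.query.query)
TerraformOutput(self, "uses_cache", value=job.query.use_query_cache)
```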

BigqueryJobQueryScriptOptionsOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobQueryScriptOptionsOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
reset_key_result_statement No description.
reset_statement_byte_budget No description.
reset_statement_timeout_ms No description.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

reset_key_result_statement
def reset_key_result_statement() -> None
reset_statement_byte_budget
def reset_statement_byte_budget() -> None
reset_statement_timeout_ms
def reset_statement_timeout_ms() -> None

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
key_result_statement_input str No description.
statement_byte_budget_input str No description.
statement_timeout_ms_input str No description.
key_result_statement str No description.
statement_byte_budget str No description.
statement_timeout_ms str No description.
internal_value BigqueryJobQueryScriptOptions No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

key_result_statement_inputOptional
key_result_statement_input: str
  • Type: str

statement_byte_budget_inputOptional
statement_byte_budget_input: str
  • Type: str

statement_timeout_ms_inputOptional
statement_timeout_ms_input: str
  • Type: str

key_result_statementRequired
key_result_statement: str
  • Type: str

statement_byte_budgetRequired
statement_byte_budget: str
  • Type: str

statement_timeout_msRequired
statement_timeout_ms: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobQueryScriptOptions
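
A minimal sketch of the matching struct. All three fields are optional (each has a `reset_*` method above), and all are strings even when the value is numeric, matching the property types; the values are illustrative.

```python
from cdktf_cdktf_provider_google import bigquery_job

# Passed as `script_options=...` inside a BigqueryJobQuery block.
opts = bigquery_job.BigqueryJobQueryScriptOptions(
    key_result_statement="LAST",
    statement_byte_budget="1073741824",  # 1 GiB, illustrative
    statement_timeout_ms="60000",
)
```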

BigqueryJobQueryUserDefinedFunctionResourcesList

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobQueryUserDefinedFunctionResourcesList(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str,
  wraps_set: bool
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.
wraps_set bool Whether the list wraps a set (adds tolist() so an item can be accessed via an index).

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


wraps_setRequired
  • Type: bool

Whether the list wraps a set (adds tolist() so an item can be accessed via an index).


Methods

Name Description
all_with_map_key Creates an iterator for this complex list.
compute_fqn No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
get No description.

all_with_map_key
def all_with_map_key(
  map_key_attribute_name: str
) -> DynamicListTerraformIterator

Creates an iterator for this complex list.

The list will be converted into a map with map_key_attribute_name as the key.

map_key_attribute_nameRequired
  • Type: str

compute_fqn
def compute_fqn() -> str
resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

get
def get(
  index: typing.Union[int, float]
) -> BigqueryJobQueryUserDefinedFunctionResourcesOutputReference
indexRequired
  • Type: typing.Union[int, float]

The index of the item to return.


Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
internal_value typing.Union[cdktf.IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]] No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

internal_valueOptional
internal_value: typing.Union[IResolvable, typing.List[BigqueryJobQueryUserDefinedFunctionResources]]
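
A sketch of indexing into the list (`job` is a hypothetical `BigqueryJob` with `user_defined_function_resources` set on its query block):

```python
udfs = job.query.user_defined_function_resources
first = udfs.get(0)  # -> BigqueryJobQueryUserDefinedFunctionResourcesOutputReference
# first.inline_code and first.resource_uri are then available as string tokens.
```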

BigqueryJobQueryUserDefinedFunctionResourcesOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobQueryUserDefinedFunctionResourcesOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str,
  complex_object_index: typing.Union[int, float],
  complex_object_is_from_set: bool
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.
complex_object_index typing.Union[int, float] The index of this item in the list.
complex_object_is_from_set bool Whether the list wraps a set (adds tolist() so an item can be accessed via an index).

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


complex_object_indexRequired
  • Type: typing.Union[int, float]

The index of this item in the list.


complex_object_is_from_setRequired
  • Type: bool

Whether the list wraps a set (adds tolist() so an item can be accessed via an index).


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
reset_inline_code No description.
reset_resource_uri No description.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

reset_inline_code
def reset_inline_code() -> None
reset_resource_uri
def reset_resource_uri() -> None

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
inline_code_input str No description.
resource_uri_input str No description.
inline_code str No description.
resource_uri str No description.
internal_value typing.Union[cdktf.IResolvable, BigqueryJobQueryUserDefinedFunctionResources] No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

inline_code_inputOptional
inline_code_input: str
  • Type: str

resource_uri_inputOptional
resource_uri_input: str
  • Type: str

inline_codeRequired
inline_code: str
  • Type: str

resource_uriRequired
resource_uri: str
  • Type: str

internal_valueOptional
internal_value: typing.Union[IResolvable, BigqueryJobQueryUserDefinedFunctionResources]

BigqueryJobStatusErrorResultList

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobStatusErrorResultList(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str,
  wraps_set: bool
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.
wraps_set bool Whether the list wraps a set (adds tolist() so an item can be accessed via an index).

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


wraps_setRequired
  • Type: bool

Whether the list wraps a set (adds tolist() so an item can be accessed via an index).


Methods

Name Description
all_with_map_key Creates an iterator for this complex list.
compute_fqn No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.
get No description.

all_with_map_key
def all_with_map_key(
  map_key_attribute_name: str
) -> DynamicListTerraformIterator

Creates an iterator for this complex list.

The list will be converted into a map with map_key_attribute_name as the key.

map_key_attribute_nameRequired
  • Type: str

compute_fqn
def compute_fqn() -> str
resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

get
def get(
  index: typing.Union[int, float]
) -> BigqueryJobStatusErrorResultOutputReference
indexRequired
  • Type: typing.Union[int, float]

The index of the item to return.


Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

BigqueryJobStatusErrorResultOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigquery_job.BigqueryJobStatusErrorResultOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str,
  complex_object_index: typing.Union[int, float],
  complex_object_is_from_set: bool
)
Name Type Description
terraform_resource cdktf.IInterpolatingParent The parent resource.
terraform_attribute str The attribute on the parent resource this class is referencing.
complex_object_index typing.Union[int, float] The index of this item in the list.
complex_object_is_from_set bool Whether the list wraps a set (adds tolist() so an item can be accessed via an index).

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


complex_object_indexRequired
  • Type: typing.Union[int, float]

The index of this item in the list.


complex_object_is_from_setRequired
  • Type: bool

Whether the list wraps a set (adds tolist() so an item can be accessed via an index).


Methods

Name Description
compute_fqn No description.
get_any_map_attribute No description.
get_boolean_attribute No description.
get_boolean_map_attribute No description.
get_list_attribute No description.
get_number_attribute No description.
get_number_list_attribute No description.
get_number_map_attribute No description.
get_string_attribute No description.
get_string_map_attribute No description.
interpolation_for_attribute No description.
resolve Produce the Token's value at resolution time.
to_string Return a string representation of this resolvable object.

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

Properties

Name Type Description
creation_stack typing.List[str] The creation stack of this resolvable which will be appended to errors thrown during resolution.
fqn str No description.
location str No description.
message str No description.
reason str No description.
internal_value BigqueryJobStatusErrorResult No description.

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

locationRequired
location: str
  • Type: str

messageRequired
message: str
  • Type: str

reasonRequired
reason: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobStatusErrorResult
  • Type: BigqueryJobStatusErrorResult
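In practice you rarely construct this output reference yourself; it is handed back when you index into the resource's computed status attribute. A minimal sketch, assuming an existing `BigqueryJob` named `job` inside a `TerraformStack` (the attribute path follows the classes documented in this section):

```python
from cdktf import TerraformOutput

# Assumes this runs inside a TerraformStack's __init__ (hence `self`)
# and that `job` is an existing BigqueryJob that finished with an error.
# `job.status` is a BigqueryJobStatusList; each item exposes an
# `error_result` list whose items are this output reference.
error_result = job.status.get(0).error_result.get(0)

# location, message and reason are string tokens resolved at apply time.
TerraformOutput(self, "job_error_message", value=error_result.message)
TerraformOutput(self, "job_error_reason", value=error_result.reason)
```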

BigqueryJobStatusErrorsList

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobStatusErrorsList(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str,
  wraps_set: bool
)
| Name | Type | Description |
| --- | --- | --- |
| `terraform_resource` | `cdktf.IInterpolatingParent` | The parent resource. |
| `terraform_attribute` | `str` | The attribute on the parent resource this class is referencing. |
| `wraps_set` | `bool` | Whether the list wraps a set (adds `tolist()` so an item can be accessed by index). |

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


wraps_setRequired
  • Type: bool

Whether the list wraps a set (adds `tolist()` so an item can be accessed by index).


Methods

| Name | Description |
| --- | --- |
| `all_with_map_key` | Creates an iterator for this complex list. |
| `compute_fqn` | No description. |
| `resolve` | Produce the Token's value at resolution time. |
| `to_string` | Return a string representation of this resolvable object. |
| `get` | No description. |

all_with_map_key
def all_with_map_key(
  map_key_attribute_name: str
) -> DynamicListTerraformIterator

Creates an iterator for this complex list.

The list will be converted into a map, keyed by the attribute named in `map_key_attribute_name`.

map_key_attribute_nameRequired
  • Type: str
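The errors list can, for example, be re-keyed by one of its attributes to drive `for_each`-style iteration elsewhere. A minimal sketch, assuming an existing `BigqueryJob` named `job`; the choice of `"reason"` as the `map_key_attribute_name` is purely illustrative:

```python
# Convert the list of status errors into a map keyed by each item's
# `reason` attribute ("reason" is an illustrative key choice).
errors_by_reason = job.status.get(0).errors.all_with_map_key("reason")

# The returned DynamicListTerraformIterator can then be passed as the
# `for_each` argument of another resource or construct.
```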

compute_fqn
def compute_fqn() -> str
resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

get
def get(
  index: typing.Union[int, float]
) -> BigqueryJobStatusErrorsOutputReference
indexRequired
  • Type: typing.Union[int, float]

The index of the item to return.
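Indexed access is the usual way to reach a single item; when the underlying attribute is a set, the synthesized expression is wrapped in `tolist()` so the index still applies. A minimal sketch, assuming an existing `BigqueryJob` named `job`:

```python
errors = job.status.get(0).errors  # BigqueryJobStatusErrorsList
first_error = errors.get(0)        # BigqueryJobStatusErrorsOutputReference

# `message` is a string token; its value is only known at apply time.
message = first_error.message
```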


Properties

| Name | Type | Description |
| --- | --- | --- |
| `creation_stack` | `typing.List[str]` | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
| `fqn` | `str` | No description. |

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable, which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

BigqueryJobStatusErrorsOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobStatusErrorsOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str,
  complex_object_index: typing.Union[int, float],
  complex_object_is_from_set: bool
)
| Name | Type | Description |
| --- | --- | --- |
| `terraform_resource` | `cdktf.IInterpolatingParent` | The parent resource. |
| `terraform_attribute` | `str` | The attribute on the parent resource this class is referencing. |
| `complex_object_index` | `typing.Union[int, float]` | The index of this item in the list. |
| `complex_object_is_from_set` | `bool` | Whether the list wraps a set (adds `tolist()` so an item can be accessed by index). |

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


complex_object_indexRequired
  • Type: typing.Union[int, float]

The index of this item in the list.


complex_object_is_from_setRequired
  • Type: bool

Whether the list wraps a set (adds `tolist()` so an item can be accessed by index).


Methods

| Name | Description |
| --- | --- |
| `compute_fqn` | No description. |
| `get_any_map_attribute` | No description. |
| `get_boolean_attribute` | No description. |
| `get_boolean_map_attribute` | No description. |
| `get_list_attribute` | No description. |
| `get_number_attribute` | No description. |
| `get_number_list_attribute` | No description. |
| `get_number_map_attribute` | No description. |
| `get_string_attribute` | No description. |
| `get_string_map_attribute` | No description. |
| `interpolation_for_attribute` | No description. |
| `resolve` | Produce the Token's value at resolution time. |
| `to_string` | Return a string representation of this resolvable object. |

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str
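The typed properties below already cover this block's documented attributes, so the `get_*_attribute` methods mostly serve as an escape hatch for reading an attribute by name. A minimal sketch, assuming an existing `BigqueryJob` named `job`; `"message"` is one of this block's real attributes, so the call is equivalent to the typed `message` property:

```python
error = job.status.get(0).errors.get(0)

# Reads the "message" attribute by name; equivalent to `error.message`.
message = error.get_string_attribute("message")
```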

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.
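`resolve` and `to_string` belong to cdktf's token machinery and are normally invoked for you during synthesis; `to_string` is occasionally handy for inspecting the reversible token that stands in for a value. A minimal sketch, assuming an existing `BigqueryJob` named `job`:

```python
error = job.status.get(0).errors.get(0)

# Before synthesis this prints a token placeholder (a "${...}"-style
# marker), not the real value; Terraform substitutes it at apply time.
print(error.to_string())
```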

Properties

| Name | Type | Description |
| --- | --- | --- |
| `creation_stack` | `typing.List[str]` | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
| `fqn` | `str` | No description. |
| `location` | `str` | No description. |
| `message` | `str` | No description. |
| `reason` | `str` | No description. |
| `internal_value` | `BigqueryJobStatusErrors` | No description. |

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable, which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

locationRequired
location: str
  • Type: str

messageRequired
message: str
  • Type: str

reasonRequired
reason: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobStatusErrors
  • Type: BigqueryJobStatusErrors

BigqueryJobStatusList

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobStatusList(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str,
  wraps_set: bool
)
| Name | Type | Description |
| --- | --- | --- |
| `terraform_resource` | `cdktf.IInterpolatingParent` | The parent resource. |
| `terraform_attribute` | `str` | The attribute on the parent resource this class is referencing. |
| `wraps_set` | `bool` | Whether the list wraps a set (adds `tolist()` so an item can be accessed by index). |

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


wraps_setRequired
  • Type: bool

Whether the list wraps a set (adds `tolist()` so an item can be accessed by index).


Methods

| Name | Description |
| --- | --- |
| `all_with_map_key` | Creates an iterator for this complex list. |
| `compute_fqn` | No description. |
| `resolve` | Produce the Token's value at resolution time. |
| `to_string` | Return a string representation of this resolvable object. |
| `get` | No description. |

all_with_map_key
def all_with_map_key(
  map_key_attribute_name: str
) -> DynamicListTerraformIterator

Creates an iterator for this complex list.

The list will be converted into a map, keyed by the attribute named in `map_key_attribute_name`.

map_key_attribute_nameRequired
  • Type: str

compute_fqn
def compute_fqn() -> str
resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

get
def get(
  index: typing.Union[int, float]
) -> BigqueryJobStatusOutputReference
indexRequired
  • Type: typing.Union[int, float]

The index of the item to return.


Properties

| Name | Type | Description |
| --- | --- | --- |
| `creation_stack` | `typing.List[str]` | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
| `fqn` | `str` | No description. |

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable, which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

BigqueryJobStatusOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobStatusOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str,
  complex_object_index: typing.Union[int, float],
  complex_object_is_from_set: bool
)
| Name | Type | Description |
| --- | --- | --- |
| `terraform_resource` | `cdktf.IInterpolatingParent` | The parent resource. |
| `terraform_attribute` | `str` | The attribute on the parent resource this class is referencing. |
| `complex_object_index` | `typing.Union[int, float]` | The index of this item in the list. |
| `complex_object_is_from_set` | `bool` | Whether the list wraps a set (adds `tolist()` so an item can be accessed by index). |

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


complex_object_indexRequired
  • Type: typing.Union[int, float]

The index of this item in the list.


complex_object_is_from_setRequired
  • Type: bool

Whether the list wraps a set (adds `tolist()` so an item can be accessed by index).


Methods

| Name | Description |
| --- | --- |
| `compute_fqn` | No description. |
| `get_any_map_attribute` | No description. |
| `get_boolean_attribute` | No description. |
| `get_boolean_map_attribute` | No description. |
| `get_list_attribute` | No description. |
| `get_number_attribute` | No description. |
| `get_number_list_attribute` | No description. |
| `get_number_map_attribute` | No description. |
| `get_string_attribute` | No description. |
| `get_string_map_attribute` | No description. |
| `interpolation_for_attribute` | No description. |
| `resolve` | Produce the Token's value at resolution time. |
| `to_string` | Return a string representation of this resolvable object. |

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

Properties

| Name | Type | Description |
| --- | --- | --- |
| `creation_stack` | `typing.List[str]` | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
| `fqn` | `str` | No description. |
| `error_result` | `BigqueryJobStatusErrorResultList` | No description. |
| `errors` | `BigqueryJobStatusErrorsList` | No description. |
| `state` | `str` | No description. |
| `internal_value` | `BigqueryJobStatus` | No description. |

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable, which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

error_resultRequired
error_result: BigqueryJobStatusErrorResultList
  • Type: BigqueryJobStatusErrorResultList

errorsRequired
errors: BigqueryJobStatusErrorsList
  • Type: BigqueryJobStatusErrorsList

stateRequired
state: str
  • Type: str

internal_valueOptional
internal_value: BigqueryJobStatus
  • Type: BigqueryJobStatus
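Putting the status classes together: a minimal, self-contained sketch that surfaces the job's state as a Terraform output. It assumes the resource exposes its status through a computed `status` attribute (as the list and reference classes above imply); the project ID, job ID, and query are placeholders:

```python
from constructs import Construct
from cdktf import App, TerraformStack, TerraformOutput
from cdktf_cdktf_provider_google import provider, bigquery_job

class JobStatusStack(TerraformStack):
    def __init__(self, scope: Construct, id: str):
        super().__init__(scope, id)
        provider.GoogleProvider(self, "google", project="my-project")

        job = bigquery_job.BigqueryJob(
            self, "job",
            job_id="example_job",
            query=bigquery_job.BigqueryJobQuery(query="SELECT 1"),
        )

        # status is a BigqueryJobStatusList; item 0 is this output reference.
        status = job.status.get(0)
        TerraformOutput(self, "job_state", value=status.state)

app = App()
JobStatusStack(app, "job-status")
app.synth()
```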

BigqueryJobTimeoutsOutputReference

Initializers

from cdktf_cdktf_provider_google import bigquery_job

bigqueryJob.BigqueryJobTimeoutsOutputReference(
  terraform_resource: IInterpolatingParent,
  terraform_attribute: str
)
| Name | Type | Description |
| --- | --- | --- |
| `terraform_resource` | `cdktf.IInterpolatingParent` | The parent resource. |
| `terraform_attribute` | `str` | The attribute on the parent resource this class is referencing. |

terraform_resourceRequired
  • Type: cdktf.IInterpolatingParent

The parent resource.


terraform_attributeRequired
  • Type: str

The attribute on the parent resource this class is referencing.


Methods

| Name | Description |
| --- | --- |
| `compute_fqn` | No description. |
| `get_any_map_attribute` | No description. |
| `get_boolean_attribute` | No description. |
| `get_boolean_map_attribute` | No description. |
| `get_list_attribute` | No description. |
| `get_number_attribute` | No description. |
| `get_number_list_attribute` | No description. |
| `get_number_map_attribute` | No description. |
| `get_string_attribute` | No description. |
| `get_string_map_attribute` | No description. |
| `interpolation_for_attribute` | No description. |
| `resolve` | Produce the Token's value at resolution time. |
| `to_string` | Return a string representation of this resolvable object. |
| `reset_create` | No description. |
| `reset_delete` | No description. |
| `reset_update` | No description. |

compute_fqn
def compute_fqn() -> str
get_any_map_attribute
def get_any_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Any]
terraform_attributeRequired
  • Type: str

get_boolean_attribute
def get_boolean_attribute(
  terraform_attribute: str
) -> IResolvable
terraform_attributeRequired
  • Type: str

get_boolean_map_attribute
def get_boolean_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[bool]
terraform_attributeRequired
  • Type: str

get_list_attribute
def get_list_attribute(
  terraform_attribute: str
) -> typing.List[str]
terraform_attributeRequired
  • Type: str

get_number_attribute
def get_number_attribute(
  terraform_attribute: str
) -> typing.Union[int, float]
terraform_attributeRequired
  • Type: str

get_number_list_attribute
def get_number_list_attribute(
  terraform_attribute: str
) -> typing.List[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_number_map_attribute
def get_number_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[typing.Union[int, float]]
terraform_attributeRequired
  • Type: str

get_string_attribute
def get_string_attribute(
  terraform_attribute: str
) -> str
terraform_attributeRequired
  • Type: str

get_string_map_attribute
def get_string_map_attribute(
  terraform_attribute: str
) -> typing.Mapping[str]
terraform_attributeRequired
  • Type: str

interpolation_for_attribute
def interpolation_for_attribute(
  property: str
) -> IResolvable
propertyRequired
  • Type: str

resolve
def resolve(
  _context: IResolveContext
) -> typing.Any

Produce the Token's value at resolution time.

_contextRequired
  • Type: cdktf.IResolveContext

to_string
def to_string() -> str

Return a string representation of this resolvable object.

Returns a reversible string representation.

reset_create
def reset_create() -> None
reset_delete
def reset_delete() -> None
reset_update
def reset_update() -> None

Properties

| Name | Type | Description |
| --- | --- | --- |
| `creation_stack` | `typing.List[str]` | The creation stack of this resolvable, which will be appended to errors thrown during resolution. |
| `fqn` | `str` | No description. |
| `create_input` | `str` | No description. |
| `delete_input` | `str` | No description. |
| `update_input` | `str` | No description. |
| `create` | `str` | No description. |
| `delete` | `str` | No description. |
| `update` | `str` | No description. |
| `internal_value` | `typing.Union[cdktf.IResolvable, BigqueryJobTimeouts]` | No description. |

creation_stackRequired
creation_stack: typing.List[str]
  • Type: typing.List[str]

The creation stack of this resolvable, which will be appended to errors thrown during resolution.

If this returns an empty array the stack will not be attached.


fqnRequired
fqn: str
  • Type: str

create_inputOptional
create_input: str
  • Type: str

delete_inputOptional
delete_input: str
  • Type: str

update_inputOptional
update_input: str
  • Type: str

createRequired
create: str
  • Type: str

deleteRequired
delete: str
  • Type: str

updateRequired
update: str
  • Type: str

internal_valueOptional
internal_value: typing.Union[IResolvable, BigqueryJobTimeouts]
  • Type: typing.Union[cdktf.IResolvable, BigqueryJobTimeouts]
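The `reset_*` methods clear an optional timeout so the synthesized `timeouts` block omits it. A minimal sketch, assuming an existing `BigqueryJob` named `job` and the standard generated `put_timeouts` accessor on the resource:

```python
from cdktf_cdktf_provider_google import bigquery_job

# Attach a timeouts block to the resource.
job.put_timeouts(bigquery_job.BigqueryJobTimeouts(
    create="10m",
    delete="10m",
    update="10m",
))

# Clear just the update timeout; the synthesized block then omits
# `update` while keeping `create` and `delete`.
job.timeouts.reset_update()
```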