Represents input parameters for a training job. When using the gcloud command to submit your training job, you can specify the input parameters as command-line arguments and/or in a YAML configuration file referenced from the --config command-line argument. For details, see the guide to submitting a training job.
Properties

virtual System.Collections.Generic.IList< string > Args [get, set]
    Optional. Command line arguments to pass to the program.

virtual GoogleCloudMlV1EncryptionConfig EncryptionConfig [get, set]
    Custom encryption key options for a training job. If this is set, then all resources created by the training job will be encrypted with the provided encryption key.

virtual GoogleCloudMlV1HyperparameterSpec Hyperparameters [get, set]
    Optional. The set of Hyperparameters to tune.

virtual string JobDir [get, set]
    Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the --job-dir command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training.

virtual GoogleCloudMlV1ReplicaConfig MasterConfig [get, set]
    Optional. The configuration for your master worker.

virtual string MasterType [get, set]
    Optional. Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM.

virtual System.Collections.Generic.IList< string > PackageUris [get, set]
    Required. The Google Cloud Storage location of the packages with the training program and any additional dependencies. The maximum number of package URIs is 100.

virtual GoogleCloudMlV1ReplicaConfig ParameterServerConfig [get, set]
    Optional. The configuration for parameter servers.

virtual System.Nullable< long > ParameterServerCount [get, set]
    Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in parameter_server_type.

virtual string ParameterServerType [get, set]
    Optional. Specifies the type of virtual machine to use for your training job's parameter server.

virtual string PythonModule [get, set]
    Required. The Python module name to run after installing the packages.

virtual string PythonVersion [get, set]
    Optional. The version of Python used in training. You must either specify this field or specify masterConfig.imageUri.

virtual string Region [get, set]
    Required. The region to run the training job in. See the available regions for AI Platform Training.

virtual string RuntimeVersion [get, set]
    Optional. The AI Platform runtime version to use for training. You must either specify this field or specify masterConfig.imageUri.

virtual string ScaleTier [get, set]
    Required. Specifies the machine types, the number of replicas for workers and parameter servers.

virtual GoogleCloudMlV1Scheduling Scheduling [get, set]
    Optional. Scheduling options for a training job.

virtual System.Nullable< bool > UseChiefInTfConfig [get, set]
    Optional. Use 'chief' instead of 'master' in TF_CONFIG when a custom container is used and an evaluator is not specified.

virtual GoogleCloudMlV1ReplicaConfig WorkerConfig [get, set]
    Optional. The configuration for workers.

virtual System.Nullable< long > WorkerCount [get, set]
    Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in worker_type.

virtual string WorkerType [get, set]
    Optional. Specifies the type of virtual machine to use for your training job's worker nodes.

virtual string ETag [get, set]
    The ETag of the item.

Properties inherited from Google::Apis::Requests::IDirectResponseSchema

string ETag
Detailed Description

Represents input parameters for a training job. When using the gcloud command to submit your training job, you can specify the input parameters as command-line arguments and/or in a YAML configuration file referenced from the --config command-line argument. For details, see the guide to submitting a training job.
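As a sketch of how these fields fit together, the following Python dictionary mirrors the REST trainingInput message that a submitted job carries. It uses only fields documented on this page; the bucket paths, module name, and job ID are illustrative placeholders.

```python
# Sketch of a trainingInput request body using only fields documented on
# this page. All paths, names, and values are illustrative placeholders.
training_input = {
    "scaleTier": "BASIC",
    "packageUris": ["gs://my-bucket/packages/trainer-0.1.tar.gz"],
    "pythonModule": "trainer.task",
    "region": "us-central1",
    "runtimeVersion": "1.15",
    "pythonVersion": "3.7",
    "jobDir": "gs://my-bucket/training-output",
    "args": ["--epochs", "10"],
}

# The training input is wrapped in a job resource before submission.
job = {"jobId": "my_training_job", "trainingInput": training_input}
```

The same structure, under a trainingInput key, is what a YAML file passed via --config would express.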
Property Documentation

virtual System.Collections.Generic.IList< string > Args [get, set]
Optional. Command line arguments to pass to the program.
virtual GoogleCloudMlV1EncryptionConfig EncryptionConfig [get, set]
Custom encryption key options for a training job. If this is set, then all resources created by the training job will be encrypted with the provided encryption key.
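As a minimal sketch of this option, the snippet below attaches a customer-managed encryption key to a job. The kmsKeyName field and the key resource path are assumptions about GoogleCloudMlV1EncryptionConfig, not definitions taken from this page.

```python
# Sketch: a training job encrypted with a customer-managed key.
# The kmsKeyName field and key path are assumptions, used for illustration.
training_input = {
    "scaleTier": "BASIC",
    "packageUris": ["gs://my-bucket/packages/trainer-0.1.tar.gz"],
    "pythonModule": "trainer.task",
    "region": "us-central1",
    "encryptionConfig": {
        "kmsKeyName": (
            "projects/my-project/locations/us-central1/"
            "keyRings/my-ring/cryptoKeys/my-key"
        ),
    },
}
```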
virtual string ETag [get, set]
The ETag of the item.
virtual GoogleCloudMlV1HyperparameterSpec Hyperparameters [get, set]
Optional. The set of Hyperparameters to tune.
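A sketch of what such a tuning spec might look like follows. The field names inside it (goal, maxTrials, maxParallelTrials, params, and the per-parameter keys) are assumptions about GoogleCloudMlV1HyperparameterSpec rather than definitions from this page.

```python
# Sketch: a hyperparameter tuning spec. Field names are assumptions about
# GoogleCloudMlV1HyperparameterSpec, shown here only for illustration.
hyperparameters = {
    "goal": "MAXIMIZE",          # optimize the metric upward
    "maxTrials": 10,             # total trials to run
    "maxParallelTrials": 2,      # trials allowed to run concurrently
    "params": [
        {
            "parameterName": "learning_rate",
            "type": "DOUBLE",
            "minValue": 0.0001,
            "maxValue": 0.1,
        },
    ],
}

# The spec would be attached to the training input under "hyperparameters".
training_input = {"scaleTier": "BASIC", "hyperparameters": hyperparameters}
```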
virtual string JobDir [get, set]

Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the --job-dir command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training.
virtual GoogleCloudMlV1ReplicaConfig MasterConfig [get, set]

Optional. The configuration for your master worker.
You should only set masterConfig.acceleratorConfig if masterType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training.
Set masterConfig.imageUri only if you build a custom image. Only one of masterConfig.imageUri and runtimeVersion should be set. Learn more about configuring custom containers.
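A minimal sketch of a custom-container master follows. Because masterConfig.imageUri is set, runtimeVersion is deliberately omitted, since only one of the two may be set; the project and image names are placeholders.

```python
# Sketch: a custom-container master worker. runtimeVersion is omitted
# because masterConfig.imageUri is set (only one of the two may be set).
# Project, image, and region values are placeholders.
training_input = {
    "scaleTier": "CUSTOM",
    "masterType": "n1-standard-8",
    "masterConfig": {"imageUri": "gcr.io/my-project/my-trainer:latest"},
    "region": "us-central1",
}
```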
virtual string MasterType [get, set]

Optional. Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM.
You can use certain Compute Engine machine types directly in this field. The following types are supported:
- n1-standard-4
- n1-standard-8
- n1-standard-16
- n1-standard-32
- n1-standard-64
- n1-standard-96
- n1-highmem-2
- n1-highmem-4
- n1-highmem-8
- n1-highmem-16
- n1-highmem-32
- n1-highmem-64
- n1-highmem-96
- n1-highcpu-16
- n1-highcpu-32
- n1-highcpu-64
- n1-highcpu-96
Learn more about using Compute Engine machine types.
Alternatively, you can use the following legacy machine types:
- standard
- large_model
- complex_model_s
- complex_model_m
- complex_model_l
- standard_gpu
- complex_model_m_gpu
- complex_model_l_gpu
- standard_p100
- complex_model_m_p100
- standard_v100
- large_model_v100
- complex_model_m_v100
- complex_model_l_v100
Learn more about using legacy machine types.
Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
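The constraint between masterType and scaleTier can be sketched as a client-side check. This helper is illustrative only and not part of the API; it simply encodes the rule that masterType must be set when, and is only meaningful when, scaleTier is CUSTOM.

```python
# Illustrative helper (not part of the API): masterType is required when
# scaleTier is CUSTOM, and only valid in that case.
def check_master_type(training_input):
    tier = training_input.get("scaleTier")
    master_type = training_input.get("masterType")
    if tier == "CUSTOM" and not master_type:
        raise ValueError("masterType must be set when scaleTier is CUSTOM")
    if master_type and tier != "CUSTOM":
        raise ValueError("masterType is only valid when scaleTier is CUSTOM")
```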
virtual System.Collections.Generic.IList< string > PackageUris [get, set]
Required. The Google Cloud Storage location of the packages with the training program and any additional dependencies. The maximum number of package URIs is 100.
virtual GoogleCloudMlV1ReplicaConfig ParameterServerConfig [get, set]

Optional. The configuration for parameter servers.
You should only set parameterServerConfig.acceleratorConfig if parameterServerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training.
Set parameterServerConfig.imageUri only if you build a custom image for your parameter server. If parameterServerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
virtual System.Nullable< long > ParameterServerCount [get, set]

Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in parameter_server_type.
This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set parameter_server_type.
The default value is zero.
virtual string ParameterServerType [get, set]

Optional. Specifies the type of virtual machine to use for your training job's parameter server.
The supported values are the same as those described in the entry for master_type.
This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types.
This value must be present when scaleTier is set to CUSTOM and parameter_server_count is greater than zero.
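A minimal sketch of a CUSTOM-tier job with parameter servers follows. parameterServerType is set because parameterServerCount is greater than zero, and it stays in the same machine-type category (Compute Engine) as masterType; all values are placeholders.

```python
# Sketch: parameter servers in a CUSTOM-tier job. The type field is set
# because the count is nonzero, and it matches masterType's category.
training_input = {
    "scaleTier": "CUSTOM",
    "masterType": "n1-highmem-8",
    "parameterServerType": "n1-highmem-8",
    "parameterServerCount": 2,
    "region": "us-central1",
}
```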
virtual string PythonModule [get, set]
Required. The Python module name to run after installing the packages.
virtual string PythonVersion [get, set]

Optional. The version of Python used in training. You must either specify this field or specify masterConfig.imageUri.
The following Python versions are available:
* Python '3.7' is available when runtime_version is set to '1.15' or later.
* Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'.
* Python '2.7' is available when runtime_version is set to '1.15' or earlier.
Read more about the Python versions available for each runtime version.
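One reading of these Python/runtime pairings (3.7 with 1.15 or later, 3.5 with 1.4 through 1.14, 2.7 with 1.15 or earlier) can be encoded as a small helper. It is illustrative only, not part of the API, and compares runtime versions as (major, minor) tuples.

```python
# Illustrative helper (not part of the API): encodes one reading of the
# Python/runtime version pairings described above.
def python_runtime_compatible(python_version, runtime_version):
    major, minor = (int(part) for part in runtime_version.split("."))
    rt = (major, minor)
    if python_version == "3.7":
        return rt >= (1, 15)
    if python_version == "3.5":
        return (1, 4) <= rt <= (1, 14)
    if python_version == "2.7":
        return rt <= (1, 15)
    return False
```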
virtual string Region [get, set]
Required. The region to run the training job in. See the available regions for AI Platform Training.
virtual string RuntimeVersion [get, set]

Optional. The AI Platform runtime version to use for training. You must either specify this field or specify masterConfig.imageUri.
For more information, see the runtime version list and learn how to manage runtime versions.
virtual string ScaleTier [get, set]
Required. Specifies the machine types, the number of replicas for workers and parameter servers.
virtual GoogleCloudMlV1Scheduling Scheduling [get, set]
Optional. Scheduling options for a training job.
virtual System.Nullable< bool > UseChiefInTfConfig [get, set]
Optional. Use 'chief' instead of 'master' in TF_CONFIG when a custom container is used and an evaluator is not specified.
Defaults to false.
virtual GoogleCloudMlV1ReplicaConfig WorkerConfig [get, set]

Optional. The configuration for workers.
You should only set workerConfig.acceleratorConfig if workerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training.
Set workerConfig.imageUri only if you build a custom image for your worker. If workerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
virtual System.Nullable< long > WorkerCount [get, set]

Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in worker_type.
This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set worker_type.
The default value is zero.
virtual string WorkerType [get, set]

Optional. Specifies the type of virtual machine to use for your training job's worker nodes.
The supported values are the same as those described in the entry for masterType.
This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types.
If you use cloud_tpu for this value, see special instructions for configuring a custom TPU machine.
This value must be present when scaleTier is set to CUSTOM and workerCount is greater than zero.
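Putting the replica fields together, the sketch below shows a complete CUSTOM-tier cluster: worker and parameter-server types stay in the same machine-type category as masterType, and each nonzero count is paired with its corresponding type field. All machine types, counts, and the region are placeholders.

```python
# Sketch: a full CUSTOM-tier cluster. Every nonzero replica count is paired
# with a type field, and all types share masterType's category.
training_input = {
    "scaleTier": "CUSTOM",
    "masterType": "n1-standard-16",
    "workerType": "n1-standard-16",
    "workerCount": 4,
    "parameterServerType": "n1-highmem-8",
    "parameterServerCount": 2,
    "region": "us-central1",
}
```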