AWS SageMaker (version v1.*.*)

add_tags

Adds or overwrites one or more tags for the specified Amazon SageMaker resource. You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints. Each tag consists of a key and an optional value. Tag keys must be unique per resource. For more information about tags, see AWS Tagging Strategies.
Tags that you add to a hyperparameter tuning job by calling this API are also added to any training jobs that the hyperparameter tuning job launches after you call this API, but not to training jobs that the hyperparameter tuning job launched before you called this API. To make sure that the tags associated with a hyperparameter tuning job are also added to all training jobs that the hyperparameter tuning job launches, add the tags when you first create the tuning job by specifying them in the Tags parameter of CreateHyperParameterTuningJob

Parameters

$body

Type: object

{
  "ResourceArn" : "The Amazon Resource Name (ARN) of the resource that you want to tag.",
  "Tags" : [ {
    "Value" : "The tag value.",
    "Key" : "The tag key."
  } ]
}
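
As an illustration, the sketch below issues the equivalent call through the AWS SDK for Python (boto3), which exposes the same operation and field names; in this connector the same fields are passed as the $body object. The region, resource ARN, and tag values are placeholders.

import boto3

# Placeholder region, ARN, and tag values; substitute your own resources.
sm = boto3.client("sagemaker", region_name="us-east-1")

response = sm.add_tags(
    ResourceArn="arn:aws:sagemaker:us-east-1:111122223333:training-job/example-training-job",
    Tags=[
        {"Key": "project", "Value": "demo"},
        {"Key": "owner", "Value": "data-science"},
    ],
)
print(response["Tags"])  # The response lists the tags attached to the resource.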

create_algorithm

Create a machine learning algorithm that you can use in Amazon SageMaker and list in the AWS Marketplace.

Parameters

$body

Type: object

{
  "ValidationSpecification" : {
    "ValidationRole" : "The IAM roles that Amazon SageMaker uses to run the training jobs.",
    "ValidationProfiles" : [ {
      "ProfileName" : "The name of the profile for the algorithm. The name must have 1 to 63 characters. Valid characters are a-z, A-Z, 0-9, and - (hyphen).",
      "TransformJobDefinition" : {
        "TransformResources" : {
          "InstanceCount" : "The number of ML compute instances to use in the transform job. For distributed transform jobs, specify a value greater than 1. The default value is 1.",
          "VolumeKmsKeyId" : "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the batch transform job. The VolumeKmsKeyId can be any of the following formats:  \n // KMS Key ID  \"1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // Amazon Resource Name (ARN) of a KMS Key  \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"  ",
          "InstanceType" : "The ML compute instance type for the transform job. If you are using built-in algorithms to transform moderately sized datasets, we recommend using ml.m4.xlarge or ml.m5.large instance types."
        },
        "MaxConcurrentTransforms" : "The maximum number of parallel requests that can be sent to each instance in a transform job. The default value is 1.",
        "MaxPayloadInMB" : "The maximum payload size allowed, in MB. A payload is the data portion of a record (without metadata).",
        "TransformOutput" : {
          "AssembleWith" : "Defines how to assemble the results of the transform job as a single S3 object. Choose a format that is most convenient to you. To concatenate the results in binary format, specify None. To add a newline character at the end of every transformed record, specify Line.",
          "Accept" : "The MIME type used to specify the output data. Amazon SageMaker uses the MIME type with each http call to transfer data from the transform job.",
          "KmsKeyId" : "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. The KmsKeyId can be any of the following formats:   \n // KMS Key ID  \"1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // Amazon Resource Name (ARN) of a KMS Key  \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // KMS Key Alias  \"alias/ExampleAlias\"   \n // Amazon Resource Name (ARN) of a KMS Key Alias  \"arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias\"    \nIf you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.  \nThe KMS key policy must grant permission to the IAM role that you specify in your CreateTramsformJob request. For more information, see Using Key Policies in AWS KMS in the AWS Key Management Service Developer Guide.",
          "S3OutputPath" : "The Amazon S3 path where you want Amazon SageMaker to store the results of the transform job. For example, s3://bucket-name/key-name-prefix. \nFor every S3 object used as input for the transform job, batch transform stores the transformed data with an .out suffix in a corresponding subfolder in the location in the output prefix. For example, for the input data stored at s3://bucket-name/input-name-prefix/dataset01/data.csv, batch transform stores the transformed data at s3://bucket-name/output-name-prefix/input-name-prefix/data.csv.out. Batch transform doesn't upload partially processed objects. For an input S3 object that contains multiple records, it creates an .out file only if the transform job succeeds on the entire file. When the input contains multiple S3 objects, the batch transform job processes the listed S3 objects and uploads only the output for successfully processed objects. If any object fails in the transform job batch transform marks the job as failed to prompt investigation."
        },
        "Environment" : "The environment variables to set in the Docker container. We support up to 16 key and values entries in the map.",
        "TransformInput" : {
          "ContentType" : "The multipurpose internet mail extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each http call to transfer data to the transform job.",
          "SplitType" : "The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. \nWhen splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.  \nSome data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord. \nFor more information about the RecordIO, see Data Format in the MXNet documentation. For more information about the TFRecord, see Consuming TFRecord data in the TensorFlow documentation.",
          "CompressionType" : "If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.",
          "DataSource" : {
            "S3DataSource" : {
              "S3Uri" : "Depending on the value specified for the S3DataType, identifies either a key name prefix or a manifest. For example:  \n  A key name prefix might look like this: s3://bucketname/exampleprefix.   \n  A manifest might look like this: s3://bucketname/example.manifest   The manifest is an S3 object which is a JSON file with the following format:   [    {\"prefix\": \"s3://customer_bucket/some/prefix/\"},    \"relative/path/to/custdata-1\",    \"relative/path/custdata-2\",    ...    ]   The preceding JSON matches the following S3Uris:   s3://customer_bucket/some/prefix/relative/path/to/custdata-1   s3://customer_bucket/some/prefix/relative/path/custdata-1   ...   The complete set of S3Uris in this manifest constitutes the input data for the channel for this datasource. The object that each S3Uris points to must be readable by the IAM role that Amazon SageMaker uses to perform tasks on your behalf. ",
              "S3DataType" : "If you choose S3Prefix, S3Uri identifies a key name prefix. Amazon SageMaker uses all objects with the specified key name prefix for batch transform.  \nIf you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for batch transform.  \nThe following values are compatible: ManifestFile, S3Prefix  \nThe following value is not compatible: AugmentedManifestFile "
            }
          }
        },
        "BatchStrategy" : "A string that determines the number of records included in a single mini-batch. \n SingleRecord means only one record is used per mini-batch. MultiRecord means a mini-batch is set to contain as many records that can fit within the MaxPayloadInMB limit."
      },
      "TrainingJobDefinition" : {
        "HyperParameters" : "The hyperparameters used for the training job.",
        "StoppingCondition" : {
          "MaxRuntimeInSeconds" : "The maximum length of time, in seconds, that the training or compilation job can run. If job does not complete during this time, Amazon SageMaker ends the job. If value is not specified, default value is 1 day. The maximum value is 28 days.",
          "MaxWaitTimeInSeconds" : "The maximum length of time, in seconds, how long you are willing to wait for a managed spot training job to complete. It is the amount of time spent waiting for Spot capacity plus the amount of time the training job runs. It must be equal to or greater than MaxRuntimeInSeconds. "
        },
        "OutputDataConfig" : {
          "KmsKeyId" : "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. The KmsKeyId can be any of the following formats:   \n // KMS Key ID  \"1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // Amazon Resource Name (ARN) of a KMS Key  \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // KMS Key Alias  \"alias/ExampleAlias\"   \n // Amazon Resource Name (ARN) of a KMS Key Alias  \"arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias\"    \nIf you use a KMS key ID or an alias of your master key, the Amazon SageMaker execution role must include permissions to call kms:Encrypt. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. Amazon SageMaker uses server-side encryption with KMS-managed keys for OutputDataConfig. If you use a bucket policy with an s3:PutObject permission that only allows objects with server-side encryption, set the condition key of s3:x-amz-server-side-encryption to \"aws:kms\". For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.  \nThe KMS key policy must grant permission to the IAM role that you specify in your CreateTrainingJob, CreateTransformJob, or CreateHyperParameterTuningJob requests. For more information, see Using Key Policies in AWS KMS in the AWS Key Management Service Developer Guide.",
          "S3OutputPath" : "Identifies the S3 path where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix. "
        },
        "TrainingInputMode" : "The input mode used by the algorithm for the training job. For the input modes that Amazon SageMaker algorithms support, see Algorithms. \nIf an algorithm supports the File input mode, Amazon SageMaker downloads the training data from S3 to the provisioned ML storage Volume, and mounts the directory to docker volume for training container. If an algorithm supports the Pipe input mode, Amazon SageMaker streams data directly from S3 to the container.",
        "ResourceConfig" : {
          "InstanceCount" : "The number of ML compute instances to use. For distributed training, provide a value greater than 1. ",
          "VolumeSizeInGB" : "The size of the ML storage volume that you want to provision.  \nML storage volumes store model artifacts and incremental states. Training algorithms might also use the ML storage volume for scratch space. If you want to store the training data in the ML storage volume, choose File as the TrainingInputMode in the algorithm specification.  \nYou must specify sufficient ML storage for your scenario.   \n Amazon SageMaker supports only the General Purpose SSD (gp2) ML storage volume type. ",
          "VolumeKmsKeyId" : "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the training job. The VolumeKmsKeyId can be any of the following formats:  \n // KMS Key ID  \"1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // Amazon Resource Name (ARN) of a KMS Key  \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"  ",
          "InstanceType" : "The ML compute instance type. "
        },
        "InputDataConfig" : [ {
          "InputMode" : "(Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, Amazon SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in a AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode. \nTo use a model for incremental training, choose File input model.",
          "ChannelName" : "The name of the channel. ",
          "ContentType" : "The MIME type of the data.",
          "RecordWrapperType" : " \nSpecify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, Amazon SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO.  \nIn File mode, leave this field unset or set it to None.",
          "ShuffleConfig" : {
            "Seed" : "Determines the shuffling order in ShuffleConfig value."
          },
          "CompressionType" : "If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.",
          "DataSource" : {
            "FileSystemDataSource" : {
              "FileSystemAccessMode" : "The access mode of the mount of the directory associated with the channel. A directory can be mounted either in ro (read-only) or rw (read-write) mode.",
              "DirectoryPath" : "The full path to the directory to associate with the channel.",
              "FileSystemType" : "The file system type. ",
              "FileSystemId" : "The file system id."
            },
            "S3DataSource" : {
              "S3DataDistributionType" : "If you want Amazon SageMaker to replicate the entire dataset on each ML compute instance that is launched for model training, specify FullyReplicated.  \nIf you want Amazon SageMaker to replicate a subset of data on each ML compute instance that is launched for model training, specify ShardedByS3Key. If there are n ML compute instances launched for a training job, each instance gets approximately 1/n of the number of S3 objects. In this case, model training on each machine uses only the subset of training data.  \nDon't choose more ML compute instances for training than available S3 objects. If you do, some nodes won't get any data and you will pay for nodes that aren't getting any training data. This applies in both File and Pipe modes. Keep this in mind when developing algorithms.  \nIn distributed training, where you use multiple ML compute EC2 instances, you might choose ShardedByS3Key. If the algorithm requires copying training data to the ML storage volume (when TrainingInputMode is set to File), this copies 1/n of the number of objects. ",
              "S3Uri" : "Depending on the value specified for the S3DataType, identifies either a key name prefix or a manifest. For example:   \n  A key name prefix might look like this: s3://bucketname/exampleprefix.   \n  A manifest might look like this: s3://bucketname/example.manifest   The manifest is an S3 object which is a JSON file with the following format:   [    {\"prefix\": \"s3://customer_bucket/some/prefix/\"},    \"relative/path/to/custdata-1\",    \"relative/path/custdata-2\",    ...    ]   The preceding JSON matches the following s3Uris:   s3://customer_bucket/some/prefix/relative/path/to/custdata-1   s3://customer_bucket/some/prefix/relative/path/custdata-2   ...  The complete set of s3uris in this manifest is the input data for the channel for this datasource. The object that each s3uris points to must be readable by the IAM role that Amazon SageMaker uses to perform tasks on your behalf.  ",
              "AttributeNames" : [ "string" ],
              "S3DataType" : "If you choose S3Prefix, S3Uri identifies a key name prefix. Amazon SageMaker uses all objects that match the specified key name prefix for model training.  \nIf you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for model training.  \nIf you choose AugmentedManifestFile, S3Uri identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. AugmentedManifestFile can only be used if the Channel's input mode is Pipe."
            }
          }
        } ]
      }
    } ]
  },
  "InferenceSpecification" : {
    "SupportedContentTypes" : [ "string" ],
    "SupportedRealtimeInferenceInstanceTypes" : [ "string. Possible values: ml.t2.medium | ml.t2.large | ml.t2.xlarge | ml.t2.2xlarge | ml.m4.xlarge | ml.m4.2xlarge | ml.m4.4xlarge | ml.m4.10xlarge | ml.m4.16xlarge | ml.m5.large | ml.m5.xlarge | ml.m5.2xlarge | ml.m5.4xlarge | ml.m5.12xlarge | ml.m5.24xlarge | ml.m5d.large | ml.m5d.xlarge | ml.m5d.2xlarge | ml.m5d.4xlarge | ml.m5d.12xlarge | ml.m5d.24xlarge | ml.c4.large | ml.c4.xlarge | ml.c4.2xlarge | ml.c4.4xlarge | ml.c4.8xlarge | ml.p2.xlarge | ml.p2.8xlarge | ml.p2.16xlarge | ml.p3.2xlarge | ml.p3.8xlarge | ml.p3.16xlarge | ml.c5.large | ml.c5.xlarge | ml.c5.2xlarge | ml.c5.4xlarge | ml.c5.9xlarge | ml.c5.18xlarge | ml.c5d.large | ml.c5d.xlarge | ml.c5d.2xlarge | ml.c5d.4xlarge | ml.c5d.9xlarge | ml.c5d.18xlarge | ml.g4dn.xlarge | ml.g4dn.2xlarge | ml.g4dn.4xlarge | ml.g4dn.8xlarge | ml.g4dn.12xlarge | ml.g4dn.16xlarge | ml.r5.large | ml.r5.xlarge | ml.r5.2xlarge | ml.r5.4xlarge | ml.r5.12xlarge | ml.r5.24xlarge | ml.r5d.large | ml.r5d.xlarge | ml.r5d.2xlarge | ml.r5d.4xlarge | ml.r5d.12xlarge | ml.r5d.24xlarge" ],
    "Containers" : [ {
      "ContainerHostname" : "The DNS host name for the Docker container.",
      "ImageDigest" : "An MD5 hash of the training algorithm that identifies the Docker image used for training.",
      "ModelDataUrl" : "The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).",
      "ProductId" : "The AWS Marketplace product ID of the model package.",
      "Image" : "The Amazon EC2 Container Registry (Amazon ECR) path where inference code is stored. \nIf you are using your own custom algorithm instead of an algorithm provided by Amazon SageMaker, the inference code must meet Amazon SageMaker requirements. Amazon SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker."
    } ],
    "SupportedTransformInstanceTypes" : [ "string. Possible values: ml.m4.xlarge | ml.m4.2xlarge | ml.m4.4xlarge | ml.m4.10xlarge | ml.m4.16xlarge | ml.c4.xlarge | ml.c4.2xlarge | ml.c4.4xlarge | ml.c4.8xlarge | ml.p2.xlarge | ml.p2.8xlarge | ml.p2.16xlarge | ml.p3.2xlarge | ml.p3.8xlarge | ml.p3.16xlarge | ml.c5.xlarge | ml.c5.2xlarge | ml.c5.4xlarge | ml.c5.9xlarge | ml.c5.18xlarge | ml.m5.large | ml.m5.xlarge | ml.m5.2xlarge | ml.m5.4xlarge | ml.m5.12xlarge | ml.m5.24xlarge" ],
    "SupportedResponseMIMETypes" : [ "string" ]
  },
  "AlgorithmDescription" : "A description of the algorithm.",
  "TrainingSpecification" : {
    "SupportedTrainingInstanceTypes" : [ "string. Possible values: ml.m4.xlarge | ml.m4.2xlarge | ml.m4.4xlarge | ml.m4.10xlarge | ml.m4.16xlarge | ml.m5.large | ml.m5.xlarge | ml.m5.2xlarge | ml.m5.4xlarge | ml.m5.12xlarge | ml.m5.24xlarge | ml.c4.xlarge | ml.c4.2xlarge | ml.c4.4xlarge | ml.c4.8xlarge | ml.p2.xlarge | ml.p2.8xlarge | ml.p2.16xlarge | ml.p3.2xlarge | ml.p3.8xlarge | ml.p3.16xlarge | ml.p3dn.24xlarge | ml.c5.xlarge | ml.c5.2xlarge | ml.c5.4xlarge | ml.c5.9xlarge | ml.c5.18xlarge" ],
    "TrainingImageDigest" : "An MD5 hash of the training algorithm that identifies the Docker image used for training.",
    "SupportedHyperParameters" : [ {
      "DefaultValue" : "The default value for this hyperparameter. If a default value is specified, a hyperparameter cannot be required.",
      "Type" : "The type of this hyperparameter. The valid types are Integer, Continuous, Categorical, and FreeText.",
      "Description" : "A brief description of the hyperparameter.",
      "IsRequired" : "Indicates whether this hyperparameter is required.",
      "IsTunable" : "Indicates whether this hyperparameter is tunable in a hyperparameter tuning job.",
      "Range" : {
        "IntegerParameterRangeSpecification" : {
          "MinValue" : "The minimum integer value allowed.",
          "MaxValue" : "The maximum integer value allowed."
        },
        "CategoricalParameterRangeSpecification" : {
          "Values" : [ "string" ]
        },
        "ContinuousParameterRangeSpecification" : {
          "MinValue" : "The minimum floating-point value allowed.",
          "MaxValue" : "The maximum floating-point value allowed."
        }
      },
      "Name" : "The name of this hyperparameter. The name must be unique."
    } ],
    "SupportsDistributedTraining" : "Indicates whether the algorithm supports distributed training. If set to false, buyers can’t request more than one instance during training.",
    "MetricDefinitions" : [ {
      "Regex" : "A regular expression that searches the output of a training job and gets the value of the metric. For more information about using regular expressions to define metrics, see Defining Objective Metrics.",
      "Name" : "The name of the metric."
    } ],
    "TrainingChannels" : [ {
      "SupportedInputModes" : [ "string. Possible values: Pipe | File" ],
      "Description" : "A brief description of the channel.",
      "IsRequired" : "Indicates whether the channel is required by the algorithm.",
      "SupportedContentTypes" : [ "string" ],
      "SupportedCompressionTypes" : [ "string. Possible values: None | Gzip" ],
      "Name" : "The name of the channel."
    } ],
    "TrainingImage" : "The Amazon ECR registry path of the Docker image that contains the training algorithm.",
    "SupportedTuningJobObjectiveMetrics" : [ {
      "MetricName" : "The name of the metric to use for the objective metric.",
      "Type" : "Whether to minimize or maximize the objective metric."
    } ]
  },
  "AlgorithmName" : "The name of the algorithm.",
  "CertifyForMarketplace" : "Whether to certify the algorithm so that it can be listed in AWS Marketplace."
}
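
A minimal sketch of the same request through boto3 follows; only AlgorithmName and TrainingSpecification are strictly required here, and the ECR image URI, instance types, and channel definition are placeholder assumptions for a custom algorithm.

import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Placeholder training image and channel; adapt to your algorithm container.
sm.create_algorithm(
    AlgorithmName="example-algorithm",
    AlgorithmDescription="Custom training algorithm packaged as a Docker image.",
    TrainingSpecification={
        "TrainingImage": "111122223333.dkr.ecr.us-east-1.amazonaws.com/example-algo:latest",
        "SupportedTrainingInstanceTypes": ["ml.m5.xlarge", "ml.c5.2xlarge"],
        "SupportsDistributedTraining": False,
        "TrainingChannels": [
            {
                "Name": "train",
                "SupportedContentTypes": ["text/csv"],
                "SupportedInputModes": ["File"],
            }
        ],
    },
    CertifyForMarketplace=False,
)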

create_code_repository

Creates a Git repository as a resource in your Amazon SageMaker account. You can associate the repository with notebook instances so that you can use Git source control for the notebooks you create. The Git repository is a resource in your Amazon SageMaker account, so it can be associated with more than one notebook instance, and it persists independently from the lifecycle of any notebook instances it is associated with. The repository can be hosted either in AWS CodeCommit or in any other Git repository.

Parameters

$body

Type: object

{
  "CodeRepositoryName" : "The name of the Git repository. The name must have 1 to 63 characters. Valid characters are a-z, A-Z, 0-9, and - (hyphen).",
  "GitConfig" : {
    "SecretArn" : "The Amazon Resource Name (ARN) of the AWS Secrets Manager secret that contains the credentials used to access the git repository. The secret must have a staging label of AWSCURRENT and must be in the following format: \n {\"username\": UserName, \"password\": Password} ",
    "Branch" : "The default branch for the Git repository.",
    "RepositoryUrl" : "The URL where the Git repository is located."
  }
}
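
For example, the repository below could be registered with boto3, which shares this operation's name and fields; the repository URL is a placeholder, and SecretArn is only needed when the repository requires credentials.

import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_code_repository(
    CodeRepositoryName="example-notebooks-repo",
    GitConfig={
        "RepositoryUrl": "https://github.com/example-org/example-notebooks.git",
        "Branch": "main",
        # For a private repository, also pass the Secrets Manager secret that
        # stores {"username": ..., "password": ...}:
        # "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:example-git-creds",
    },
)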

create_compilation_job

Starts a model compilation job. After the model has been compiled, Amazon SageMaker saves the resulting model artifacts to an Amazon Simple Storage Service (Amazon S3) bucket that you specify.
If you choose to host your model using Amazon SageMaker hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts with AWS IoT Greengrass. In that case, deploy them as an ML resource. In the request body, you provide the following:
A name for the compilation job
Information about the input model artifacts
The output location for the compiled model and the device (target) that the model runs on
The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker assumes to perform the model compilation job
You can also provide a Tag to track the model compilation job's resource use and costs. The response body contains the CompilationJobArn for the compiled job. To stop a model compilation job, use StopCompilationJob. To get information about a particular model compilation job, use DescribeCompilationJob. To get information about multiple model compilation jobs, use ListCompilationJobs.

Parameters

$body

Type: object

{
  "OutputConfig" : {
    "S3OutputLocation" : "Identifies the S3 path where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.",
    "TargetDevice" : "Identifies the device that you want to run your model on after it has been compiled. For example: ml_c5."
  },
  "StoppingCondition" : {
    "MaxRuntimeInSeconds" : "The maximum length of time, in seconds, that the training or compilation job can run. If job does not complete during this time, Amazon SageMaker ends the job. If value is not specified, default value is 1 day. The maximum value is 28 days.",
    "MaxWaitTimeInSeconds" : "The maximum length of time, in seconds, how long you are willing to wait for a managed spot training job to complete. It is the amount of time spent waiting for Spot capacity plus the amount of time the training job runs. It must be equal to or greater than MaxRuntimeInSeconds. "
  },
  "CompilationJobName" : "A name for the model compilation job. The name must be unique within the AWS Region and within your AWS account. ",
  "InputConfig" : {
    "DataInputConfig" : "Specifies the name and shape of the expected data inputs for your trained model with a JSON dictionary form. The data inputs are InputConfig$Framework specific.   \n  TensorFlow: You must specify the name and shape (NHWC format) of the expected data inputs using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.   Examples for one input:   If using the console, {\"input\":[1,1024,1024,3]}    If using the CLI, {\\\"input\\\":[1,1024,1024,3]}      Examples for two inputs:   If using the console, {\"data1\": [1,28,28,1], \"data2\":[1,28,28,1]}    If using the CLI, {\\\"data1\\\": [1,28,28,1], \\\"data2\\\":[1,28,28,1]}       \n  MXNET/ONNX: You must specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.   Examples for one input:   If using the console, {\"data\":[1,3,1024,1024]}    If using the CLI, {\\\"data\\\":[1,3,1024,1024]}      Examples for two inputs:   If using the console, {\"var1\": [1,1,28,28], \"var2\":[1,1,28,28]}     If using the CLI, {\\\"var1\\\": [1,1,28,28], \\\"var2\\\":[1,1,28,28]}       \n  PyTorch: You can either specify the name and shape (NCHW format) of expected data inputs in order using a dictionary format for your trained model or you can specify the shape only using a list format. The dictionary formats required for the console and CLI are different. The list formats for the console and CLI are the same.   Examples for one input in dictionary format:   If using the console, {\"input0\":[1,3,224,224]}    If using the CLI, {\\\"input0\\\":[1,3,224,224]}      Example for one input in list format: [[1,3,224,224]]    Examples for two inputs in dictionary format:   If using the console, {\"input0\":[1,3,224,224], \"input1\":[1,3,224,224]}    If using the CLI, {\\\"input0\\\":[1,3,224,224], \\\"input1\\\":[1,3,224,224]}       Example for two inputs in list format: [[1,3,224,224], [1,3,224,224]]     \n  XGBOOST: input data name and shape are not needed. ",
    "S3Uri" : "The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).",
    "Framework" : "Identifies the framework in which the model was trained. For example: TENSORFLOW."
  },
  "RoleArn" : "The Amazon Resource Name (ARN) of an IAM role that enables Amazon SageMaker to perform tasks on your behalf.  \nDuring model compilation, Amazon SageMaker needs your permission to:  \n Read input data from an S3 bucket  \n Write model artifacts to an S3 bucket  \n Write logs to Amazon CloudWatch Logs  \n Publish metrics to Amazon CloudWatch   \nYou grant permissions for all of these tasks to an IAM role. To pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission. For more information, see Amazon SageMaker Roles. "
}
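
A hedged boto3 sketch of this request is shown below; the S3 paths, role ARN, input shape, and target device are placeholder assumptions chosen for a PyTorch image model.

import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Placeholder bucket, role, and model; DataInputConfig must match your model's input shape.
sm.create_compilation_job(
    CompilationJobName="example-compilation-job",
    RoleArn="arn:aws:iam::111122223333:role/ExampleSageMakerRole",
    InputConfig={
        "S3Uri": "s3://example-bucket/models/model.tar.gz",
        "DataInputConfig": '{"input0": [1, 3, 224, 224]}',
        "Framework": "PYTORCH",
    },
    OutputConfig={
        "S3OutputLocation": "s3://example-bucket/compiled-models/",
        "TargetDevice": "ml_c5",
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)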

create_endpoint

Creates an endpoint using the endpoint configuration specified in the request. Amazon SageMaker uses the endpoint to provision resources and deploy models. You create the endpoint configuration with the CreateEndpointConfig API.
Use this API only for hosting models using Amazon SageMaker hosting services.
You must not delete an EndpointConfig in use by an endpoint that is live or while the UpdateEndpoint or CreateEndpoint operations are being performed on the endpoint. To update an endpoint, you must create a new EndpointConfig.
The endpoint name must be unique within an AWS Region in your AWS account.
When it receives the request, Amazon SageMaker creates the endpoint, launches the resources (ML compute instances), and deploys the model(s) on them.
When Amazon SageMaker receives the request, it sets the endpoint status to Creating. After it creates the endpoint, it sets the status to InService. Amazon SageMaker can then process incoming requests for inferences. To check the status of an endpoint, use the DescribeEndpoint API. For an example, see Exercise 1: Using the K-Means Algorithm Provided by Amazon SageMaker.
If any of the models hosted at this endpoint get model data from an Amazon S3 location, Amazon SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provided. AWS STS is activated in your IAM user account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.

Parameters

$body

Type: object

{
  "EndpointName" : "The name of the endpoint. The name must be unique within an AWS Region in your AWS account.",
  "EndpointConfigName" : "The name of an endpoint configuration. For more information, see CreateEndpointConfig. ",
  "Tags" : [ {
    "Value" : "The tag value.",
    "Key" : "The tag key."
  } ]
}
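
For example, once an endpoint configuration exists, the boto3 call below creates the endpoint, and a waiter polls until the status reaches InService; the names and tag are placeholders.

import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_endpoint(
    EndpointName="example-endpoint",
    EndpointConfigName="example-endpoint-config",  # must already exist (CreateEndpointConfig)
    Tags=[{"Key": "env", "Value": "dev"}],
)

# Block until the endpoint status transitions from Creating to InService.
sm.get_waiter("endpoint_in_service").wait(EndpointName="example-endpoint")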

create_endpoint_config

Creates an endpoint configuration that Amazon SageMaker hosting services uses to deploy models. In the configuration, you identify one or more models, created using the CreateModel API, to deploy and the resources that you want Amazon SageMaker to provision. Then you call the CreateEndpoint API.
Use this API only if you want to use Amazon SageMaker hosting services to deploy models into production.
In the request, you define one or more ProductionVariants, each of which identifies a model. Each ProductionVariant parameter also describes the resources that you want Amazon SageMaker to provision. This includes the number and type of ML compute instances to deploy.
If you are hosting multiple models, you also assign a VariantWeight to specify how much traffic you want to allocate to each model. For example, suppose that you want to host two models, A and B, and you assign traffic weight 2 for model A and 1 for model B. Amazon SageMaker distributes two-thirds of the traffic to Model A, and one-third to model B.

Parameters

$body

Type: object

{
  "ProductionVariants" : [ {
    "ModelName" : "The name of the model that you want to host. This is the name that you specified when creating the model.",
    "VariantName" : "The name of the production variant.",
    "InitialInstanceCount" : "Number of instances to launch initially.",
    "InstanceType" : "The ML compute instance type.",
    "AcceleratorType" : "The size of the Elastic Inference (EI) instance to use for the production variant. EI instances provide on-demand GPU computing for inference. For more information, see Using Elastic Inference in Amazon SageMaker.",
    "InitialVariantWeight" : "Determines initial traffic distribution among all of the models that you specify in the endpoint configuration. The traffic to a production variant is determined by the ratio of the VariantWeight to the sum of all VariantWeight values across all ProductionVariants. If unspecified, it defaults to 1.0. "
  } ],
  "KmsKeyId" : "The Amazon Resource Name (ARN) of a AWS Key Management Service key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint.  \nNitro-based instances do not support encryption with AWS KMS. If any of the models that you specify in the ProductionVariants parameter use nitro-based instances, do not specify a value for the KmsKeyId parameter. If you specify a value for KmsKeyId when using any nitro-based instances, the call to CreateEndpointConfig fails. \nFor a list of nitro-based instances, see Nitro-based Instances in the Amazon Elastic Compute Cloud User Guide for Linux Instances. \nFor more information about storage volumes on nitro-based instances, see Amazon EBS and NVMe on Linux Instances.",
  "EndpointConfigName" : "The name of the endpoint configuration. You specify this name in a CreateEndpoint request. ",
  "Tags" : [ {
    "Value" : "The tag value.",
    "Key" : "The tag key."
  } ]
}
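
The sketch below mirrors the two-model example above using boto3: variant weights of 2.0 and 1.0 send roughly two-thirds of the traffic to model A and one-third to model B. The model names and instance type are placeholders, and both models are assumed to already exist.

import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Both models must already have been created with CreateModel.
sm.create_endpoint_config(
    EndpointConfigName="example-endpoint-config",
    ProductionVariants=[
        {
            "VariantName": "model-a",
            "ModelName": "example-model-a",
            "InitialInstanceCount": 1,
            "InstanceType": "ml.m5.large",
            "InitialVariantWeight": 2.0,  # ~2/3 of traffic
        },
        {
            "VariantName": "model-b",
            "ModelName": "example-model-b",
            "InitialInstanceCount": 1,
            "InstanceType": "ml.m5.large",
            "InitialVariantWeight": 1.0,  # ~1/3 of traffic
        },
    ],
)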

create_hyper_parameter_tuning_job

Starts a hyperparameter tuning job. A hyperparameter tuning job finds the best version of a model by running many training jobs on your dataset using the algorithm you choose and values for hyperparameters within ranges that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by an objective metric that you choose.

Parameters

$body

Type: object

{
  "WarmStartConfig" : {
    "ParentHyperParameterTuningJobs" : [ {
      "HyperParameterTuningJobName" : "The name of the hyperparameter tuning job to be used as a starting point for a new hyperparameter tuning job."
    } ],
    "WarmStartType" : "Specifies one of the following:  IDENTICAL_DATA_AND_ALGORITHM  \nThe new hyperparameter tuning job uses the same input data and training image as the parent tuning jobs. You can change the hyperparameter ranges to search and the maximum number of training jobs that the hyperparameter tuning job launches. You cannot use a new version of the training algorithm, unless the changes in the new version do not affect the algorithm itself. For example, changes that improve logging or adding support for a different data format are allowed. You can also change hyperparameters from tunable to static, and from static to tunable, but the total number of static plus tunable hyperparameters must remain the same as it is in all parent jobs. The objective metric for the new tuning job must be the same as for all parent jobs.  TRANSFER_LEARNING  \nThe new hyperparameter tuning job can include input data, hyperparameter ranges, maximum number of concurrent training jobs, and maximum number of training jobs that are different than those of its parent hyperparameter tuning jobs. The training image can also be a different version from the version used in the parent hyperparameter tuning job. You can also change hyperparameters from tunable to static, and from static to tunable, but the total number of static plus tunable hyperparameters must remain the same as it is in all parent jobs. The objective metric for the new tuning job must be the same as for all parent jobs."
  },
  "HyperParameterTuningJobName" : "The name of the tuning job. This name is the prefix for the names of all training jobs that this tuning job launches. The name must be unique within the same AWS account and AWS Region. The name must have { } to { } characters. Valid characters are a-z, A-Z, 0-9, and : + = @ _ % - (hyphen). The name is not case sensitive.",
  "TrainingJobDefinition" : {
    "EnableManagedSpotTraining" : "A Boolean indicating whether managed spot training is enabled (True) or not (False).",
    "EnableNetworkIsolation" : "Isolates the training container. No inbound or outbound network calls can be made, except for calls between peers within a training cluster for distributed training. If network isolation is used for training jobs that are configured to use a VPC, Amazon SageMaker downloads and uploads customer data and model artifacts through the specified VPC, but the training container does not have network access.  \nThe Semantic Segmentation built-in algorithm does not support network isolation.",
    "EnableInterContainerTrafficEncryption" : "To encrypt all communications between ML compute instances in distributed training, choose True. Encryption provides greater security for distributed training, but training might take longer. How long it takes depends on the amount of communication between compute instances, especially if you use a deep learning algorithm in distributed training.",
    "AlgorithmSpecification" : {
      "TrainingInputMode" : "The input mode that the algorithm supports: File or Pipe. In File input mode, Amazon SageMaker downloads the training data from Amazon S3 to the storage volume that is attached to the training instance and mounts the directory to the Docker volume for the training container. In Pipe input mode, Amazon SageMaker streams data directly from Amazon S3 to the container.  \nIf you specify File mode, make sure that you provision the storage volume that is attached to the training instance with enough capacity to accommodate the training data downloaded from Amazon S3, the model artifacts, and intermediate information. \n \nFor more information about input modes, see Algorithms. ",
      "MetricDefinitions" : [ {
        "Regex" : "A regular expression that searches the output of a training job and gets the value of the metric. For more information about using regular expressions to define metrics, see Defining Objective Metrics.",
        "Name" : "The name of the metric."
      } ],
      "TrainingImage" : " The registry path of the Docker image that contains the training algorithm. For information about Docker registry paths for built-in algorithms, see Algorithms Provided by Amazon SageMaker: Common Parameters. Amazon SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.",
      "AlgorithmName" : "The name of the resource algorithm to use for the hyperparameter tuning job. If you specify a value for this parameter, do not specify a value for TrainingImage."
    },
    "StoppingCondition" : {
      "MaxRuntimeInSeconds" : "The maximum length of time, in seconds, that the training or compilation job can run. If job does not complete during this time, Amazon SageMaker ends the job. If value is not specified, default value is 1 day. The maximum value is 28 days.",
      "MaxWaitTimeInSeconds" : "The maximum length of time, in seconds, how long you are willing to wait for a managed spot training job to complete. It is the amount of time spent waiting for Spot capacity plus the amount of time the training job runs. It must be equal to or greater than MaxRuntimeInSeconds. "
    },
    "VpcConfig" : {
      "Subnets" : [ "string" ],
      "SecurityGroupIds" : [ "string" ]
    },
    "OutputDataConfig" : {
      "KmsKeyId" : "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. The KmsKeyId can be any of the following formats:   \n // KMS Key ID  \"1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // Amazon Resource Name (ARN) of a KMS Key  \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // KMS Key Alias  \"alias/ExampleAlias\"   \n // Amazon Resource Name (ARN) of a KMS Key Alias  \"arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias\"    \nIf you use a KMS key ID or an alias of your master key, the Amazon SageMaker execution role must include permissions to call kms:Encrypt. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. Amazon SageMaker uses server-side encryption with KMS-managed keys for OutputDataConfig. If you use a bucket policy with an s3:PutObject permission that only allows objects with server-side encryption, set the condition key of s3:x-amz-server-side-encryption to \"aws:kms\". For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.  \nThe KMS key policy must grant permission to the IAM role that you specify in your CreateTrainingJob, CreateTransformJob, or CreateHyperParameterTuningJob requests. For more information, see Using Key Policies in AWS KMS in the AWS Key Management Service Developer Guide.",
      "S3OutputPath" : "Identifies the S3 path where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix. "
    },
    "CheckpointConfig" : {
      "S3Uri" : "Identifies the S3 path where you want Amazon SageMaker to store checkpoints. For example, s3://bucket-name/key-name-prefix.",
      "LocalPath" : "(Optional) The local directory where checkpoints are written. The default directory is /opt/ml/checkpoints/. "
    },
    "StaticHyperParameters" : "Specifies the values of hyperparameters that do not change for the tuning job.",
    "ResourceConfig" : {
      "InstanceCount" : "The number of ML compute instances to use. For distributed training, provide a value greater than 1. ",
      "VolumeSizeInGB" : "The size of the ML storage volume that you want to provision.  \nML storage volumes store model artifacts and incremental states. Training algorithms might also use the ML storage volume for scratch space. If you want to store the training data in the ML storage volume, choose File as the TrainingInputMode in the algorithm specification.  \nYou must specify sufficient ML storage for your scenario.   \n Amazon SageMaker supports only the General Purpose SSD (gp2) ML storage volume type. ",
      "VolumeKmsKeyId" : "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the training job. The VolumeKmsKeyId can be any of the following formats:  \n // KMS Key ID  \"1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // Amazon Resource Name (ARN) of a KMS Key  \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"  ",
      "InstanceType" : "The ML compute instance type. "
    },
    "InputDataConfig" : [ {
      "InputMode" : "(Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, Amazon SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in a AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode. \nTo use a model for incremental training, choose File input model.",
      "ChannelName" : "The name of the channel. ",
      "ContentType" : "The MIME type of the data.",
      "RecordWrapperType" : " \nSpecify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, Amazon SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO.  \nIn File mode, leave this field unset or set it to None.",
      "ShuffleConfig" : {
        "Seed" : "Determines the shuffling order in ShuffleConfig value."
      },
      "CompressionType" : "If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.",
      "DataSource" : {
        "FileSystemDataSource" : {
          "FileSystemAccessMode" : "The access mode of the mount of the directory associated with the channel. A directory can be mounted either in ro (read-only) or rw (read-write) mode.",
          "DirectoryPath" : "The full path to the directory to associate with the channel.",
          "FileSystemType" : "The file system type. ",
          "FileSystemId" : "The file system id."
        },
        "S3DataSource" : {
          "S3DataDistributionType" : "If you want Amazon SageMaker to replicate the entire dataset on each ML compute instance that is launched for model training, specify FullyReplicated.  \nIf you want Amazon SageMaker to replicate a subset of data on each ML compute instance that is launched for model training, specify ShardedByS3Key. If there are n ML compute instances launched for a training job, each instance gets approximately 1/n of the number of S3 objects. In this case, model training on each machine uses only the subset of training data.  \nDon't choose more ML compute instances for training than available S3 objects. If you do, some nodes won't get any data and you will pay for nodes that aren't getting any training data. This applies in both File and Pipe modes. Keep this in mind when developing algorithms.  \nIn distributed training, where you use multiple ML compute EC2 instances, you might choose ShardedByS3Key. If the algorithm requires copying training data to the ML storage volume (when TrainingInputMode is set to File), this copies 1/n of the number of objects. ",
          "S3Uri" : "Depending on the value specified for the S3DataType, identifies either a key name prefix or a manifest. For example:   \n  A key name prefix might look like this: s3://bucketname/exampleprefix.   \n  A manifest might look like this: s3://bucketname/example.manifest   The manifest is an S3 object which is a JSON file with the following format:   [    {\"prefix\": \"s3://customer_bucket/some/prefix/\"},    \"relative/path/to/custdata-1\",    \"relative/path/custdata-2\",    ...    ]   The preceding JSON matches the following s3Uris:   s3://customer_bucket/some/prefix/relative/path/to/custdata-1   s3://customer_bucket/some/prefix/relative/path/custdata-2   ...  The complete set of s3uris in this manifest is the input data for the channel for this datasource. The object that each s3uris points to must be readable by the IAM role that Amazon SageMaker uses to perform tasks on your behalf.  ",
          "AttributeNames" : [ "string" ],
          "S3DataType" : "If you choose S3Prefix, S3Uri identifies a key name prefix. Amazon SageMaker uses all objects that match the specified key name prefix for model training.  \nIf you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for model training.  \nIf you choose AugmentedManifestFile, S3Uri identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. AugmentedManifestFile can only be used if the Channel's input mode is Pipe."
        }
      }
    } ],
    "RoleArn" : "The Amazon Resource Name (ARN) of the IAM role associated with the training jobs that the tuning job launches."
  },
  "HyperParameterTuningJobConfig" : {
    "TrainingJobEarlyStoppingType" : "Specifies whether to use early stopping for training jobs launched by the hyperparameter tuning job. This can be one of the following values (the default value is OFF):  OFF  \nTraining jobs launched by the hyperparameter tuning job do not use early stopping.  AUTO  \nAmazon SageMaker stops training jobs launched by the hyperparameter tuning job when they are unlikely to perform better than previously completed training jobs. For more information, see Stop Training Jobs Early.",
    "HyperParameterTuningJobObjective" : {
      "MetricName" : "The name of the metric to use for the objective metric.",
      "Type" : "Whether to minimize or maximize the objective metric."
    },
    "ResourceLimits" : {
      "MaxParallelTrainingJobs" : "The maximum number of concurrent training jobs that a hyperparameter tuning job can launch.",
      "MaxNumberOfTrainingJobs" : "The maximum number of training jobs that a hyperparameter tuning job can launch."
    },
    "Strategy" : "Specifies how hyperparameter tuning chooses the combinations of hyperparameter values to use for the training job it launches. To use the Bayesian search stategy, set this to Bayesian. To randomly search, set it to Random. For information about search strategies, see How Hyperparameter Tuning Works.",
    "ParameterRanges" : {
      "CategoricalParameterRanges" : [ {
        "Values" : [ "string" ],
        "Name" : "The name of the categorical hyperparameter to tune."
      } ],
      "IntegerParameterRanges" : [ {
        "ScalingType" : "The scale that hyperparameter tuning uses to search the hyperparameter range. For information about choosing a hyperparameter scale, see Hyperparameter Scaling. One of the following values:  Auto  \nAmazon SageMaker hyperparameter tuning chooses the best scale for the hyperparameter.  Linear  \nHyperparameter tuning searches the values in the hyperparameter range by using a linear scale.  Logarithmic  \nHyperparemeter tuning searches the values in the hyperparameter range by using a logarithmic scale. \nLogarithmic scaling works only for ranges that have only values greater than 0.",
        "MinValue" : "The minimum value of the hyperparameter to search.",
        "MaxValue" : "The maximum value of the hyperparameter to search.",
        "Name" : "The name of the hyperparameter to search."
      } ],
      "ContinuousParameterRanges" : [ {
        "ScalingType" : "The scale that hyperparameter tuning uses to search the hyperparameter range. For information about choosing a hyperparameter scale, see Hyperparameter Scaling. One of the following values:  Auto  \nAmazon SageMaker hyperparameter tuning chooses the best scale for the hyperparameter.  Linear  \nHyperparameter tuning searches the values in the hyperparameter range by using a linear scale.  Logarithmic  \nHyperparameter tuning searches the values in the hyperparameter range by using a logarithmic scale. \nLogarithmic scaling works only for ranges that have only values greater than 0.  ReverseLogarithmic  \nHyperparemeter tuning searches the values in the hyperparameter range by using a reverse logarithmic scale. \nReverse logarithmic scaling works only for ranges that are entirely within the range 0<=x<1.0.",
        "MinValue" : "The minimum value for the hyperparameter. The tuning job uses floating-point values between this value and MaxValuefor tuning.",
        "MaxValue" : "The maximum value for the hyperparameter. The tuning job uses floating-point values between MinValue value and this value for tuning.",
        "Name" : "The name of the continuous hyperparameter to tune."
      } ]
    }
  },
  "Tags" : [ {
    "Value" : "The tag value.",
    "Key" : "The tag key."
  } ]
}
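
A condensed boto3 sketch follows, tuning a single continuous hyperparameter against a custom training image; the image URI, objective metric and regex, S3 paths, and role ARN are placeholder assumptions.

import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Placeholder image, metric, data locations, and role; adapt to your algorithm.
sm.create_hyper_parameter_tuning_job(
    HyperParameterTuningJobName="example-tuning-job",
    HyperParameterTuningJobConfig={
        "Strategy": "Bayesian",
        "HyperParameterTuningJobObjective": {"Type": "Minimize", "MetricName": "validation:rmse"},
        "ResourceLimits": {"MaxNumberOfTrainingJobs": 20, "MaxParallelTrainingJobs": 2},
        "ParameterRanges": {
            "ContinuousParameterRanges": [
                {"Name": "learning_rate", "MinValue": "0.001", "MaxValue": "0.1",
                 "ScalingType": "Logarithmic"}
            ]
        },
    },
    TrainingJobDefinition={
        "AlgorithmSpecification": {
            "TrainingImage": "111122223333.dkr.ecr.us-east-1.amazonaws.com/example-algo:latest",
            "TrainingInputMode": "File",
            # The objective metric must be emitted by the training container and
            # captured by this regex (placeholder pattern).
            "MetricDefinitions": [{"Name": "validation:rmse", "Regex": "validation-rmse=([0-9\\.]+)"}],
        },
        "RoleArn": "arn:aws:iam::111122223333:role/ExampleSageMakerRole",
        "StaticHyperParameters": {"epochs": "10"},
        "InputDataConfig": [
            {
                "ChannelName": "train",
                "ContentType": "text/csv",
                "DataSource": {
                    "S3DataSource": {
                        "S3DataType": "S3Prefix",
                        "S3Uri": "s3://example-bucket/train/",
                        "S3DataDistributionType": "FullyReplicated",
                    }
                },
            }
        ],
        "OutputDataConfig": {"S3OutputPath": "s3://example-bucket/tuning-output/"},
        "ResourceConfig": {"InstanceType": "ml.m5.xlarge", "InstanceCount": 1, "VolumeSizeInGB": 30},
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    },
    Tags=[{"Key": "project", "Value": "demo"}],
)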

create_labeling_job

Creates a job that uses workers to label the data objects in your input dataset. You can use the labeled data to train machine learning models. You can select your workforce from one of three providers:
A private workforce that you create. It can include employees, contractors, and outside experts. Use a private workforce when you want the data to stay within your organization or when a specific set of skills is required.
One or more vendors that you select from the AWS Marketplace. Vendors provide expertise in specific areas.
The Amazon Mechanical Turk workforce. This is the largest workforce, but it should only be used for public data or data that has been stripped of any personally identifiable information.
You can also use automated data labeling to reduce the number of data objects that need to be labeled by a human. Automated data labeling uses active learning to determine if a data object can be labeled by machine or if it needs to be sent to a human worker. For more information, see Using Automated Data Labeling. The data objects to be labeled are contained in an Amazon S3 bucket. You create a manifest file that describes the location of each object. For more information, see Using Input and Output Data. The output can be used as the manifest file for another labeling job or as training data for your machine learning models.

Parameters

$body

Type: object

{
  "LabelAttributeName" : "The attribute name to use for the label in the output manifest file. This is the key for the key/value pair formed with the label that a worker assigns to the object. The name can't end with \"-metadata\". If you are running a semantic segmentation labeling job, the attribute name must end with \"-ref\". If you are running any other kind of labeling job, the attribute name must not end with \"-ref\".",
  "LabelingJobName" : "The name of the labeling job. This name is used to identify the job in a list of labeling jobs.",
  "OutputConfig" : {
    "KmsKeyId" : "The AWS Key Management Service ID of the key used to encrypt the output data, if any. \nIf you use a KMS key ID or an alias of your master key, the Amazon SageMaker execution role must include permissions to call kms:Encrypt. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. Amazon SageMaker uses server-side encryption with KMS-managed keys for LabelingJobOutputConfig. If you use a bucket policy with an s3:PutObject permission that only allows objects with server-side encryption, set the condition key of s3:x-amz-server-side-encryption to \"aws:kms\". For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.  \nThe KMS key policy must grant permission to the IAM role that you specify in your CreateLabelingJob request. For more information, see Using Key Policies in AWS KMS in the AWS Key Management Service Developer Guide.",
    "S3OutputPath" : "The Amazon S3 location to write output data."
  },
  "InputConfig" : {
    "DataAttributes" : {
      "ContentClassifiers" : [ "string. Possible values: FreeOfPersonallyIdentifiableInformation | FreeOfAdultContent" ]
    },
    "DataSource" : {
      "S3DataSource" : {
        "ManifestS3Uri" : "The Amazon S3 location of the manifest file that describes the input data objects."
      }
    }
  },
  "HumanTaskConfig" : {
    "UiConfig" : {
      "UiTemplateS3Uri" : "The Amazon S3 bucket location of the UI template. For more information about the contents of a UI template, see  Creating Your Custom Labeling Task Template."
    },
    "WorkteamArn" : "The Amazon Resource Name (ARN) of the work team assigned to complete the tasks.",
    "MaxConcurrentTaskCount" : "Defines the maximum number of data objects that can be labeled by human workers at the same time. Each object may have more than one worker at one time.",
    "TaskDescription" : "A description of the task for your human workers.",
    "AnnotationConsolidationConfig" : {
      "AnnotationConsolidationLambdaArn" : "The Amazon Resource Name (ARN) of a Lambda function implements the logic for annotation consolidation. \nFor the built-in bounding box, image classification, semantic segmentation, and text classification task types, Amazon SageMaker Ground Truth provides the following Lambda functions:  \n  Bounding box - Finds the most similar boxes from different workers based on the Jaccard index of the boxes.  arn:aws:lambda:us-east-1:432418664414:function:ACS-BoundingBox   arn:aws:lambda:us-east-2:266458841044:function:ACS-BoundingBox   arn:aws:lambda:us-west-2:081040173940:function:ACS-BoundingBox   arn:aws:lambda:eu-west-1:568282634449:function:ACS-BoundingBox   arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-BoundingBox   arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-BoundingBox   arn:aws:lambda:ap-south-1:565803892007:function:ACS-BoundingBox   arn:aws:lambda:eu-central-1:203001061592:function:ACS-BoundingBox   arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-BoundingBox   arn:aws:lambda:eu-west-2:487402164563:function:ACS-BoundingBox   arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-BoundingBox   arn:aws:lambda:ca-central-1:918755190332:function:ACS-BoundingBox   \n  Image classification - Uses a variant of the Expectation Maximization approach to estimate the true class of an image based on annotations from individual workers.  arn:aws:lambda:us-east-1:432418664414:function:ACS-ImageMultiClass   arn:aws:lambda:us-east-2:266458841044:function:ACS-ImageMultiClass   arn:aws:lambda:us-west-2:081040173940:function:ACS-ImageMultiClass   arn:aws:lambda:eu-west-1:568282634449:function:ACS-ImageMultiClass   arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-ImageMultiClass   arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-ImageMultiClass   arn:aws:lambda:ap-south-1:565803892007:function:ACS-ImageMultiClass   arn:aws:lambda:eu-central-1:203001061592:function:ACS-ImageMultiClass   arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-ImageMultiClass   arn:aws:lambda:eu-west-2:487402164563:function:ACS-ImageMultiClass   arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-ImageMultiClass   arn:aws:lambda:ca-central-1:918755190332:function:ACS-ImageMultiClass   \n  Semantic segmentation - Treats each pixel in an image as a multi-class classification and treats pixel annotations from workers as \"votes\" for the correct label.  arn:aws:lambda:us-east-1:432418664414:function:ACS-SemanticSegmentation   arn:aws:lambda:us-east-2:266458841044:function:ACS-SemanticSegmentation   arn:aws:lambda:us-west-2:081040173940:function:ACS-SemanticSegmentation   arn:aws:lambda:eu-west-1:568282634449:function:ACS-SemanticSegmentation   arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-SemanticSegmentation   arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-SemanticSegmentation   arn:aws:lambda:ap-south-1:565803892007:function:ACS-SemanticSegmentation   arn:aws:lambda:eu-central-1:203001061592:function:ACS-SemanticSegmentation   arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-SemanticSegmentation   arn:aws:lambda:eu-west-2:487402164563:function:ACS-SemanticSegmentation   arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-SemanticSegmentation   arn:aws:lambda:ca-central-1:918755190332:function:ACS-SemanticSegmentation   \n  Text classification - Uses a variant of the Expectation Maximization approach to estimate the true class of text based on annotations from individual workers.  
arn:aws:lambda:us-east-1:432418664414:function:ACS-TextMultiClass   arn:aws:lambda:us-east-2:266458841044:function:ACS-TextMultiClass   arn:aws:lambda:us-west-2:081040173940:function:ACS-TextMultiClass   arn:aws:lambda:eu-west-1:568282634449:function:ACS-TextMultiClass   arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-TextMultiClass   arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-TextMultiClass   arn:aws:lambda:ap-south-1:565803892007:function:ACS-TextMultiClass   arn:aws:lambda:eu-central-1:203001061592:function:ACS-TextMultiClass   arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-TextMultiClass   arn:aws:lambda:eu-west-2:487402164563:function:ACS-TextMultiClass   arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-TextMultiClass   arn:aws:lambda:ca-central-1:918755190332:function:ACS-TextMultiClass   \n  Named entity eecognition - Groups similar selections and calculates aggregate boundaries, resolving to most-assigned label.  arn:aws:lambda:us-east-1:432418664414:function:ACS-NamedEntityRecognition   arn:aws:lambda:us-east-2:266458841044:function:ACS-NamedEntityRecognition   arn:aws:lambda:us-west-2:081040173940:function:ACS-NamedEntityRecognition   arn:aws:lambda:eu-west-1:568282634449:function:ACS-NamedEntityRecognition   arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-NamedEntityRecognition   arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-NamedEntityRecognition   arn:aws:lambda:ap-south-1:565803892007:function:ACS-NamedEntityRecognition   arn:aws:lambda:eu-central-1:203001061592:function:ACS-NamedEntityRecognition   arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-NamedEntityRecognition   arn:aws:lambda:eu-west-2:487402164563:function:ACS-NamedEntityRecognition   arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-NamedEntityRecognition   arn:aws:lambda:ca-central-1:918755190332:function:ACS-NamedEntityRecognition    \nFor more information, see Annotation Consolidation."
    },
    "PublicWorkforceTaskPrice" : {
      "AmountInUsd" : {
        "Dollars" : "The whole number of dollars in the amount.",
        "Cents" : "The fractional portion, in cents, of the amount. ",
        "TenthFractionsOfACent" : "Fractions of a cent, in tenths."
      }
    },
    "NumberOfHumanWorkersPerDataObject" : "The number of human workers that will label an object. ",
    "TaskTitle" : "A title for the task for your human workers.",
    "TaskAvailabilityLifetimeInSeconds" : "The length of time that a task remains available for labeling by human workers. If you choose the Amazon Mechanical Turk workforce, the maximum is 12 hours (43200). For private and vendor workforces, the maximum is as listed.",
    "PreHumanTaskLambdaArn" : "The Amazon Resource Name (ARN) of a Lambda function that is run before a data object is sent to a human worker. Use this function to provide input to a custom labeling job. \nFor the built-in bounding box, image classification, semantic segmentation, and text classification task types, Amazon SageMaker Ground Truth provides the following Lambda functions: \n US East (Northern Virginia) (us-east-1):   \n  arn:aws:lambda:us-east-1:432418664414:function:PRE-BoundingBox   \n  arn:aws:lambda:us-east-1:432418664414:function:PRE-ImageMultiClass   \n  arn:aws:lambda:us-east-1:432418664414:function:PRE-SemanticSegmentation   \n  arn:aws:lambda:us-east-1:432418664414:function:PRE-TextMultiClass   \n  arn:aws:lambda:us-east-1:432418664414:function:PRE-NamedEntityRecognition    \n US East (Ohio) (us-east-2):   \n  arn:aws:lambda:us-east-2:266458841044:function:PRE-BoundingBox   \n  arn:aws:lambda:us-east-2:266458841044:function:PRE-ImageMultiClass   \n  arn:aws:lambda:us-east-2:266458841044:function:PRE-SemanticSegmentation   \n  arn:aws:lambda:us-east-2:266458841044:function:PRE-TextMultiClass   \n  arn:aws:lambda:us-east-2:266458841044:function:PRE-NamedEntityRecognition    \n US West (Oregon) (us-west-2):   \n  arn:aws:lambda:us-west-2:081040173940:function:PRE-BoundingBox   \n  arn:aws:lambda:us-west-2:081040173940:function:PRE-ImageMultiClass   \n  arn:aws:lambda:us-west-2:081040173940:function:PRE-SemanticSegmentation   \n  arn:aws:lambda:us-west-2:081040173940:function:PRE-TextMultiClass   \n  arn:aws:lambda:us-west-2:081040173940:function:PRE-NamedEntityRecognition    \n Canada (Central) (ca-central-1):   \n  arn:awslambda:ca-central-1:918755190332:function:PRE-BoundingBox   \n  arn:awslambda:ca-central-1:918755190332:function:PRE-ImageMultiClass   \n  arn:awslambda:ca-central-1:918755190332:function:PRE-SemanticSegmentation   \n  arn:awslambda:ca-central-1:918755190332:function:PRE-TextMultiClass   \n  arn:awslambda:ca-central-1:918755190332:function:PRE-NamedEntityRecognition    \n EU (Ireland) (eu-west-1):   \n  arn:aws:lambda:eu-west-1:568282634449:function:PRE-BoundingBox   \n  arn:aws:lambda:eu-west-1:568282634449:function:PRE-ImageMultiClass   \n  arn:aws:lambda:eu-west-1:568282634449:function:PRE-SemanticSegmentation   \n  arn:aws:lambda:eu-west-1:568282634449:function:PRE-TextMultiClass   \n  arn:aws:lambda:eu-west-1:568282634449:function:PRE-NamedEntityRecognition    \n EU (London) (eu-west-2):   \n  arn:awslambda:eu-west-2:487402164563:function:PRE-BoundingBox   \n  arn:awslambda:eu-west-2:487402164563:function:PRE-ImageMultiClass   \n  arn:awslambda:eu-west-2:487402164563:function:PRE-SemanticSegmentation   \n  arn:awslambda:eu-west-2:487402164563:function:PRE-TextMultiClass   \n  arn:awslambda:eu-west-2:487402164563:function:PRE-NamedEntityRecognition    \n EU Frankfurt (eu-central-1):   \n  arn:awslambda:eu-central-1:203001061592:function:PRE-BoundingBox   \n  arn:awslambda:eu-central-1:203001061592:function:PRE-ImageMultiClass   \n  arn:awslambda:eu-central-1:203001061592:function:PRE-SemanticSegmentation   \n  arn:awslambda:eu-central-1:203001061592:function:PRE-TextMultiClass   \n  arn:awslambda:eu-central-1:203001061592:function:PRE-NamedEntityRecognition    \n Asia Pacific (Tokyo) (ap-northeast-1):   \n  arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-BoundingBox   \n  arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-ImageMultiClass   \n  arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-SemanticSegmentation   \n  
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-TextMultiClass   \n  arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-NamedEntityRecognition    \n Asia Pacific (Seoul) (ap-northeast-2):   \n  arn:awslambda:ap-northeast-2:845288260483:function:PRE-BoundingBox   \n  arn:awslambda:ap-northeast-2:845288260483:function:PRE-ImageMultiClass   \n  arn:awslambda:ap-northeast-2:845288260483:function:PRE-SemanticSegmentation   \n  arn:awslambda:ap-northeast-2:845288260483:function:PRE-TextMultiClass   \n  arn:awslambda:ap-northeast-2:845288260483:function:PRE-NamedEntityRecognition    \n Asia Pacific (Mumbai) (ap-south-1):   \n  arn:awslambda:ap-south-1:565803892007:function:PRE-BoundingBox   \n  arn:awslambda:ap-south-1:565803892007:function:PRE-ImageMultiClass   \n  arn:awslambda:ap-south-1:565803892007:function:PRE-SemanticSegmentation   \n  arn:awslambda:ap-south-1:565803892007:function:PRE-TextMultiClass   \n  arn:awslambda:ap-south-1:565803892007:function:PRE-NamedEntityRecognition    \n Asia Pacific (Singapore) (ap-southeast-1):   \n  arn:awslambda:ap-southeast-1:377565633583:function:PRE-BoundingBox   \n  arn:awslambda:ap-southeast-1:377565633583:function:PRE-ImageMultiClass   \n  arn:awslambda:ap-southeast-1:377565633583:function:PRE-SemanticSegmentation   \n  arn:awslambda:ap-southeast-1:377565633583:function:PRE-TextMultiClass   \n  arn:awslambda:ap-southeast-1:377565633583:function:PRE-NamedEntityRecognition    \n Asia Pacific (Sydney) (ap-southeast-2):   \n  arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-BoundingBox   \n  arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-ImageMultiClass   \n  arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-SemanticSegmentation   \n  arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-TextMultiClass   \n  arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-NamedEntityRecognition  ",
    "TaskKeywords" : [ "string" ],
    "TaskTimeLimitInSeconds" : "The amount of time that a worker has to complete a task."
  },
  "StoppingConditions" : {
    "MaxPercentageOfInputDatasetLabeled" : "The maximum number of input data objects that should be labeled.",
    "MaxHumanLabeledObjectCount" : "The maximum number of objects that can be labeled by human workers."
  },
  "LabelingJobAlgorithmsConfig" : {
    "LabelingJobAlgorithmSpecificationArn" : "Specifies the Amazon Resource Name (ARN) of the algorithm used for auto-labeling. You must select one of the following ARNs:  \n  Image classification   arn:aws:sagemaker:region:027400017018:labeling-job-algorithm-specification/image-classification   \n  Text classification   arn:aws:sagemaker:region:027400017018:labeling-job-algorithm-specification/text-classification   \n  Object detection   arn:aws:sagemaker:region:027400017018:labeling-job-algorithm-specification/object-detection   \n  Semantic Segmentation   arn:aws:sagemaker:region:027400017018:labeling-job-algorithm-specification/semantic-segmentation  ",
    "InitialActiveLearningModelArn" : "At the end of an auto-label job Amazon SageMaker Ground Truth sends the Amazon Resource Nam (ARN) of the final model used for auto-labeling. You can use this model as the starting point for subsequent similar jobs by providing the ARN of the model here. ",
    "LabelingJobResourceConfig" : {
      "VolumeKmsKeyId" : "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the training job. The VolumeKmsKeyId can be any of the following formats:  \n // KMS Key ID  \"1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // Amazon Resource Name (ARN) of a KMS Key  \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"  "
    }
  },
  "LabelCategoryConfigS3Uri" : "The S3 URL of the file that defines the categories used to label the data objects. \nThe file is a JSON structure in the following format: \n {  \n  \"document-version\": \"2018-11-28\"  \n  \"labels\": [  \n  {  \n  \"label\": \"label 1\"  \n  },  \n  {  \n  \"label\": \"label 2\"  \n  },  \n  ...  \n  {  \n  \"label\": \"label n\"  \n  }  \n  ]  \n } ",
  "RoleArn" : "The Amazon Resource Number (ARN) that Amazon SageMaker assumes to perform tasks on your behalf during data labeling. You must grant this role the necessary permissions so that Amazon SageMaker can successfully complete data labeling.",
  "Tags" : [ {
    "Value" : "The tag value.",
    "Key" : "The tag key."
  } ]
}
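
As a minimal sketch, the request body above can be sent with boto3, the AWS SDK for Python, whose sagemaker client exposes a create_labeling_job method of the same name. In the example below the job name, buckets, role ARN, and work team ARN are placeholders; the two Lambda ARNs are the built-in us-east-1 image classification functions listed above.

import boto3

# Hypothetical example: an image classification labeling job using a private work team.
# All names, buckets, and account-specific ARNs are placeholders.
sagemaker = boto3.client("sagemaker", region_name="us-east-1")

sagemaker.create_labeling_job(
    LabelingJobName="example-image-classification-job",
    LabelAttributeName="label",
    InputConfig={
        "DataSource": {
            "S3DataSource": {"ManifestS3Uri": "s3://example-bucket/input/dataset.manifest"}
        }
    },
    OutputConfig={"S3OutputPath": "s3://example-bucket/output/"},
    LabelCategoryConfigS3Uri="s3://example-bucket/labels.json",
    RoleArn="arn:aws:iam::111122223333:role/ExampleGroundTruthRole",
    HumanTaskConfig={
        "WorkteamArn": "arn:aws:sagemaker:us-east-1:111122223333:workteam/private-crowd/example-team",
        "UiConfig": {"UiTemplateS3Uri": "s3://example-bucket/templates/image-classification.liquid"},
        "PreHumanTaskLambdaArn": "arn:aws:lambda:us-east-1:432418664414:function:PRE-ImageMultiClass",
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn": "arn:aws:lambda:us-east-1:432418664414:function:ACS-ImageMultiClass"
        },
        "TaskTitle": "Classify the image",
        "TaskDescription": "Choose the label that best describes the image.",
        "NumberOfHumanWorkersPerDataObject": 3,
        "TaskTimeLimitInSeconds": 300,
    },
)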

create_model

Creates a model in Amazon SageMaker. In the request, you name the model and describe a primary container. For the primary container, you specify the docker image containing inference code, artifacts (from prior training), and custom environment map that the inference code uses when you deploy the model for predictions. Use this API to create a model if you want to use Amazon SageMaker hosting services or run a batch transform job. To host your model, you create an endpoint configuration with the CreateEndpointConfig API, and then create an endpoint with the CreateEndpoint API. Amazon SageMaker then deploys all of the containers that you defined for the model in the hosting environment.
To run a batch transform using your model, you start a job with the CreateTransformJob API. Amazon SageMaker uses your model and your dataset to get inferences, which are then saved to a specified S3 location. In the CreateModel request, you must define a container with the PrimaryContainer parameter. In the request, you also provide an IAM role that Amazon SageMaker can assume to access model artifacts and the docker image for deployment on ML compute hosting instances or for batch transform jobs. In addition, you also use the IAM role to manage permissions that the inference code needs. For example, if the inference code accesses any other AWS resources, you grant the necessary permissions via this role.

Parameters

$body

Type: object

{
  "ExecutionRoleArn" : "The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access model artifacts and docker image for deployment on ML compute instances or for batch transform jobs. Deploying on ML compute instances is part of model hosting. For more information, see Amazon SageMaker Roles.   \nTo be able to pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission.",
  "EnableNetworkIsolation" : "Isolates the model container. No inbound or outbound network calls can be made to or from the model container.  \nThe Semantic Segmentation built-in algorithm does not support network isolation.",
  "PrimaryContainer" : {
    "ContainerHostname" : "This parameter is ignored for models that contain only a PrimaryContainer. \nWhen a ContainerDefinition is part of an inference pipeline, the value of ths parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.",
    "ModelPackageName" : "The name or Amazon Resource Name (ARN) of the model package to use to create the model.",
    "ModelDataUrl" : "The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for Amazon SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.  \nIf you provide a value for this parameter, Amazon SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your IAM user account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.  \nIf you use a built-in algorithm to create a model, Amazon SageMaker requires that you provide a S3 path to the model artifacts in ModelDataUrl.",
    "Environment" : "The environment variables to set in the Docker container. Each key and value in the Environment string to string map can have length of up to 1024. We support up to 16 entries in the map. ",
    "Image" : "The Amazon EC2 Container Registry (Amazon ECR) path where inference code is stored. If you are using your own custom algorithm instead of an algorithm provided by Amazon SageMaker, the inference code must meet Amazon SageMaker requirements. Amazon SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker "
  },
  "ModelName" : "The name of the new model.",
  "VpcConfig" : {
    "Subnets" : [ "string" ],
    "SecurityGroupIds" : [ "string" ]
  },
  "Containers" : [ {
    "ContainerHostname" : "This parameter is ignored for models that contain only a PrimaryContainer. \nWhen a ContainerDefinition is part of an inference pipeline, the value of ths parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.",
    "ModelPackageName" : "The name or Amazon Resource Name (ARN) of the model package to use to create the model.",
    "ModelDataUrl" : "The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for Amazon SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.  \nIf you provide a value for this parameter, Amazon SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your IAM user account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.  \nIf you use a built-in algorithm to create a model, Amazon SageMaker requires that you provide a S3 path to the model artifacts in ModelDataUrl.",
    "Environment" : "The environment variables to set in the Docker container. Each key and value in the Environment string to string map can have length of up to 1024. We support up to 16 entries in the map. ",
    "Image" : "The Amazon EC2 Container Registry (Amazon ECR) path where inference code is stored. If you are using your own custom algorithm instead of an algorithm provided by Amazon SageMaker, the inference code must meet Amazon SageMaker requirements. Amazon SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker "
  } ],
  "Tags" : [ {
    "Value" : "The tag value.",
    "Key" : "The tag key."
  } ]
}
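
A minimal sketch of the same body sent through boto3's create_model follows; the model name, role ARN, ECR image path, and S3 model location are placeholders.

import boto3

# Hypothetical example: a single-container model. The role ARN, image URI,
# and bucket names are placeholders.
sagemaker = boto3.client("sagemaker")

sagemaker.create_model(
    ModelName="example-model",
    ExecutionRoleArn="arn:aws:iam::111122223333:role/ExampleSageMakerExecutionRole",
    PrimaryContainer={
        "Image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/example-inference:latest",
        "ModelDataUrl": "s3://example-bucket/models/example/model.tar.gz",
        "Environment": {"EXAMPLE_LOG_LEVEL": "INFO"},
    },
    Tags=[{"Key": "project", "Value": "example"}],
)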

create_model_package

Creates a model package that you can use to create Amazon SageMaker models or list on AWS Marketplace. Buyers can subscribe to model packages listed on AWS Marketplace to create models in Amazon SageMaker. To create a model package by specifying a Docker container that contains your inference code and the Amazon S3 location of your model artifacts, provide values for InferenceSpecification. To create a model from an algorithm resource that you created or subscribed to in AWS Marketplace, provide a value for SourceAlgorithmSpecification.

Parameters

$body

Type: object

{
  "ValidationSpecification" : {
    "ValidationRole" : "The IAM roles to be used for the validation of the model package.",
    "ValidationProfiles" : [ {
      "ProfileName" : "The name of the profile for the model package.",
      "TransformJobDefinition" : {
        "TransformResources" : {
          "InstanceCount" : "The number of ML compute instances to use in the transform job. For distributed transform jobs, specify a value greater than 1. The default value is 1.",
          "VolumeKmsKeyId" : "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the batch transform job. The VolumeKmsKeyId can be any of the following formats:  \n // KMS Key ID  \"1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // Amazon Resource Name (ARN) of a KMS Key  \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"  ",
          "InstanceType" : "The ML compute instance type for the transform job. If you are using built-in algorithms to transform moderately sized datasets, we recommend using ml.m4.xlarge or ml.m5.large instance types."
        },
        "MaxConcurrentTransforms" : "The maximum number of parallel requests that can be sent to each instance in a transform job. The default value is 1.",
        "MaxPayloadInMB" : "The maximum payload size allowed, in MB. A payload is the data portion of a record (without metadata).",
        "TransformOutput" : {
          "AssembleWith" : "Defines how to assemble the results of the transform job as a single S3 object. Choose a format that is most convenient to you. To concatenate the results in binary format, specify None. To add a newline character at the end of every transformed record, specify Line.",
          "Accept" : "The MIME type used to specify the output data. Amazon SageMaker uses the MIME type with each http call to transfer data from the transform job.",
          "KmsKeyId" : "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. The KmsKeyId can be any of the following formats:   \n // KMS Key ID  \"1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // Amazon Resource Name (ARN) of a KMS Key  \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // KMS Key Alias  \"alias/ExampleAlias\"   \n // Amazon Resource Name (ARN) of a KMS Key Alias  \"arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias\"    \nIf you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.  \nThe KMS key policy must grant permission to the IAM role that you specify in your CreateTramsformJob request. For more information, see Using Key Policies in AWS KMS in the AWS Key Management Service Developer Guide.",
          "S3OutputPath" : "The Amazon S3 path where you want Amazon SageMaker to store the results of the transform job. For example, s3://bucket-name/key-name-prefix. \nFor every S3 object used as input for the transform job, batch transform stores the transformed data with an .out suffix in a corresponding subfolder in the location in the output prefix. For example, for the input data stored at s3://bucket-name/input-name-prefix/dataset01/data.csv, batch transform stores the transformed data at s3://bucket-name/output-name-prefix/input-name-prefix/data.csv.out. Batch transform doesn't upload partially processed objects. For an input S3 object that contains multiple records, it creates an .out file only if the transform job succeeds on the entire file. When the input contains multiple S3 objects, the batch transform job processes the listed S3 objects and uploads only the output for successfully processed objects. If any object fails in the transform job batch transform marks the job as failed to prompt investigation."
        },
        "Environment" : "The environment variables to set in the Docker container. We support up to 16 key and values entries in the map.",
        "TransformInput" : {
          "ContentType" : "The multipurpose internet mail extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each http call to transfer data to the transform job.",
          "SplitType" : "The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. \nWhen splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.  \nSome data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord. \nFor more information about the RecordIO, see Data Format in the MXNet documentation. For more information about the TFRecord, see Consuming TFRecord data in the TensorFlow documentation.",
          "CompressionType" : "If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.",
          "DataSource" : {
            "S3DataSource" : {
              "S3Uri" : "Depending on the value specified for the S3DataType, identifies either a key name prefix or a manifest. For example:  \n  A key name prefix might look like this: s3://bucketname/exampleprefix.   \n  A manifest might look like this: s3://bucketname/example.manifest   The manifest is an S3 object which is a JSON file with the following format:   [    {\"prefix\": \"s3://customer_bucket/some/prefix/\"},    \"relative/path/to/custdata-1\",    \"relative/path/custdata-2\",    ...    ]   The preceding JSON matches the following S3Uris:   s3://customer_bucket/some/prefix/relative/path/to/custdata-1   s3://customer_bucket/some/prefix/relative/path/custdata-1   ...   The complete set of S3Uris in this manifest constitutes the input data for the channel for this datasource. The object that each S3Uris points to must be readable by the IAM role that Amazon SageMaker uses to perform tasks on your behalf. ",
              "S3DataType" : "If you choose S3Prefix, S3Uri identifies a key name prefix. Amazon SageMaker uses all objects with the specified key name prefix for batch transform.  \nIf you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for batch transform.  \nThe following values are compatible: ManifestFile, S3Prefix  \nThe following value is not compatible: AugmentedManifestFile "
            }
          }
        },
        "BatchStrategy" : "A string that determines the number of records included in a single mini-batch. \n SingleRecord means only one record is used per mini-batch. MultiRecord means a mini-batch is set to contain as many records that can fit within the MaxPayloadInMB limit."
      }
    } ]
  },
  "SourceAlgorithmSpecification" : {
    "SourceAlgorithms" : [ {
      "ModelDataUrl" : "The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).",
      "AlgorithmName" : "The name of an algorithm that was used to create the model package. The algorithm must be either an algorithm resource in your Amazon SageMaker account or an algorithm in AWS Marketplace that you are subscribed to."
    } ]
  },
  "ModelPackageName" : "The name of the model package. The name must have 1 to 63 characters. Valid characters are a-z, A-Z, 0-9, and - (hyphen).",
  "ModelPackageDescription" : "A description of the model package.",
  "InferenceSpecification" : {
    "SupportedContentTypes" : [ "string" ],
    "SupportedRealtimeInferenceInstanceTypes" : [ "string. Possible values: ml.t2.medium | ml.t2.large | ml.t2.xlarge | ml.t2.2xlarge | ml.m4.xlarge | ml.m4.2xlarge | ml.m4.4xlarge | ml.m4.10xlarge | ml.m4.16xlarge | ml.m5.large | ml.m5.xlarge | ml.m5.2xlarge | ml.m5.4xlarge | ml.m5.12xlarge | ml.m5.24xlarge | ml.m5d.large | ml.m5d.xlarge | ml.m5d.2xlarge | ml.m5d.4xlarge | ml.m5d.12xlarge | ml.m5d.24xlarge | ml.c4.large | ml.c4.xlarge | ml.c4.2xlarge | ml.c4.4xlarge | ml.c4.8xlarge | ml.p2.xlarge | ml.p2.8xlarge | ml.p2.16xlarge | ml.p3.2xlarge | ml.p3.8xlarge | ml.p3.16xlarge | ml.c5.large | ml.c5.xlarge | ml.c5.2xlarge | ml.c5.4xlarge | ml.c5.9xlarge | ml.c5.18xlarge | ml.c5d.large | ml.c5d.xlarge | ml.c5d.2xlarge | ml.c5d.4xlarge | ml.c5d.9xlarge | ml.c5d.18xlarge | ml.g4dn.xlarge | ml.g4dn.2xlarge | ml.g4dn.4xlarge | ml.g4dn.8xlarge | ml.g4dn.12xlarge | ml.g4dn.16xlarge | ml.r5.large | ml.r5.xlarge | ml.r5.2xlarge | ml.r5.4xlarge | ml.r5.12xlarge | ml.r5.24xlarge | ml.r5d.large | ml.r5d.xlarge | ml.r5d.2xlarge | ml.r5d.4xlarge | ml.r5d.12xlarge | ml.r5d.24xlarge" ],
    "Containers" : [ {
      "ContainerHostname" : "The DNS host name for the Docker container.",
      "ImageDigest" : "An MD5 hash of the training algorithm that identifies the Docker image used for training.",
      "ModelDataUrl" : "The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).",
      "ProductId" : "The AWS Marketplace product ID of the model package.",
      "Image" : "The Amazon EC2 Container Registry (Amazon ECR) path where inference code is stored. \nIf you are using your own custom algorithm instead of an algorithm provided by Amazon SageMaker, the inference code must meet Amazon SageMaker requirements. Amazon SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker."
    } ],
    "SupportedTransformInstanceTypes" : [ "string. Possible values: ml.m4.xlarge | ml.m4.2xlarge | ml.m4.4xlarge | ml.m4.10xlarge | ml.m4.16xlarge | ml.c4.xlarge | ml.c4.2xlarge | ml.c4.4xlarge | ml.c4.8xlarge | ml.p2.xlarge | ml.p2.8xlarge | ml.p2.16xlarge | ml.p3.2xlarge | ml.p3.8xlarge | ml.p3.16xlarge | ml.c5.xlarge | ml.c5.2xlarge | ml.c5.4xlarge | ml.c5.9xlarge | ml.c5.18xlarge | ml.m5.large | ml.m5.xlarge | ml.m5.2xlarge | ml.m5.4xlarge | ml.m5.12xlarge | ml.m5.24xlarge" ],
    "SupportedResponseMIMETypes" : [ "string" ]
  },
  "CertifyForMarketplace" : "Whether to certify the model package for listing on AWS Marketplace."
}
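
A minimal sketch with boto3's create_model_package, assuming a single inference container; the image path and model data URL are placeholders, and the instance types are taken from the supported lists above.

import boto3

# Hypothetical example: a model package described by an InferenceSpecification.
# The image URI and model data URL are placeholders.
sagemaker = boto3.client("sagemaker")

sagemaker.create_model_package(
    ModelPackageName="example-model-package",
    ModelPackageDescription="Example packaged model",
    InferenceSpecification={
        "Containers": [
            {
                "Image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/example-inference:latest",
                "ModelDataUrl": "s3://example-bucket/models/example/model.tar.gz",
            }
        ],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
        "SupportedRealtimeInferenceInstanceTypes": ["ml.m5.xlarge"],
        "SupportedTransformInstanceTypes": ["ml.m5.xlarge"],
    },
    CertifyForMarketplace=False,
)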

create_notebook_instance

Creates an Amazon SageMaker notebook instance. A notebook instance is a machine learning (ML) compute instance running the Jupyter Notebook App.
In a CreateNotebookInstance request, specify the type of ML compute instance that you want to run. Amazon SageMaker launches the instance, installs common libraries that you can use to explore datasets for model training, and attaches an ML storage volume to the notebook instance.
Amazon SageMaker also provides a set of example notebooks. Each notebook demonstrates how to use Amazon SageMaker with a specific algorithm or with a machine learning framework.
After receiving the request, Amazon SageMaker does the following:
Creates a network interface in the Amazon SageMaker VPC.
(Optional) If you specified SubnetId, Amazon SageMaker creates a network interface in your own VPC, which is inferred from the subnet ID that you provide in the input. When creating this network interface, Amazon SageMaker attaches the security group that you specified in the request to the network interface that it creates in your VPC.
Launches an EC2 instance of the type specified in the request in the Amazon SageMaker VPC. If you specified SubnetId of your VPC, Amazon SageMaker specifies both network interfaces when launching this instance. This enables inbound traffic from your own VPC to the notebook instance, assuming that the security groups allow it.
After creating the notebook instance, Amazon SageMaker returns its Amazon Resource Name (ARN). You can't change the name of a notebook instance after you create it. After Amazon SageMaker creates the notebook instance, you can connect to the Jupyter server and work in Jupyter notebooks. For example, you can write code to explore a dataset that you can use for model training, train a model, host models by creating Amazon SageMaker endpoints, and validate hosted models.
For more information, see How It Works.

Parameters

$body

Type: object

{
  "KmsKeyId" : "The Amazon Resource Name (ARN) of a AWS Key Management Service key that Amazon SageMaker uses to encrypt data on the storage volume attached to your notebook instance. The KMS key you provide must be enabled. For information, see Enabling and Disabling Keys in the AWS Key Management Service Developer Guide.",
  "VolumeSizeInGB" : "The size, in GB, of the ML storage volume to attach to the notebook instance. The default value is 5 GB.",
  "DirectInternetAccess" : "Sets whether Amazon SageMaker provides internet access to the notebook instance. If you set this to Disabled this notebook instance will be able to access resources only in your VPC, and will not be able to connect to Amazon SageMaker training and endpoint services unless your configure a NAT Gateway in your VPC. \nFor more information, see Notebook Instances Are Internet-Enabled by Default. You can set the value of this parameter to Disabled only if you set a value for the SubnetId parameter.",
  "DefaultCodeRepository" : "A Git repository to associate with the notebook instance as its default code repository. This can be either the name of a Git repository stored as a resource in your account, or the URL of a Git repository in AWS CodeCommit or in any other Git repository. When you open a notebook instance, it opens in the directory that contains this repository. For more information, see Associating Git Repositories with Amazon SageMaker Notebook Instances.",
  "AdditionalCodeRepositories" : [ "string" ],
  "SubnetId" : "The ID of the subnet in a VPC to which you would like to have a connectivity from your ML compute instance. ",
  "AcceleratorTypes" : [ "string. Possible values: ml.eia1.medium | ml.eia1.large | ml.eia1.xlarge | ml.eia2.medium | ml.eia2.large | ml.eia2.xlarge" ],
  "SecurityGroupIds" : [ "string" ],
  "RoleArn" : " When you send any requests to AWS resources from the notebook instance, Amazon SageMaker assumes this role to perform tasks on your behalf. You must grant this role necessary permissions so Amazon SageMaker can perform these tasks. The policy must allow the Amazon SageMaker service principal (sagemaker.amazonaws.com) permissionsto to assume this role. For more information, see Amazon SageMaker Roles.   \nTo be able to pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission.",
  "RootAccess" : "Whether root access is enabled or disabled for users of the notebook instance. The default value is Enabled.  \nLifecycle configurations need root access to be able to set up a notebook instance. Because of this, lifecycle configurations associated with a notebook instance always run with root access even if you disable root access for users.",
  "NotebookInstanceName" : "The name of the new notebook instance.",
  "InstanceType" : "The type of ML compute instance to launch for the notebook instance.",
  "LifecycleConfigName" : "The name of a lifecycle configuration to associate with the notebook instance. For information about lifestyle configurations, see Step 2.1: (Optional) Customize a Notebook Instance.",
  "Tags" : [ {
    "Value" : "The tag value.",
    "Key" : "The tag key."
  } ]
}
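
A minimal sketch with boto3's create_notebook_instance; the role ARN is a placeholder, and ml.t2.medium is used only as an example instance type.

import boto3

# Hypothetical example: a small notebook instance with the default 5 GB volume.
# The role ARN is a placeholder.
sagemaker = boto3.client("sagemaker")

sagemaker.create_notebook_instance(
    NotebookInstanceName="example-notebook",
    InstanceType="ml.t2.medium",
    RoleArn="arn:aws:iam::111122223333:role/ExampleSageMakerExecutionRole",
    VolumeSizeInGB=5,
    RootAccess="Enabled",
    Tags=[{"Key": "project", "Value": "example"}],
)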

create_notebook_instance_lifecycle_config

Creates a lifecycle configuration that you can associate with a notebook instance. A lifecycle configuration is a collection of shell scripts that run when you create or start a notebook instance. Each lifecycle configuration script has a limit of 16384 characters. The value of the $PATH environment variable that is available to both scripts is /sbin:/bin:/usr/sbin:/usr/bin. View CloudWatch Logs for notebook instance lifecycle configurations in log group /aws/sagemaker/NotebookInstances in log stream [notebook-instance-name]/[LifecycleConfigHook]. Lifecycle configuration scripts cannot run for longer than 5 minutes. If a script runs for longer than 5 minutes, it fails and the notebook instance is not created or started. For information about notebook instance lifecycle configurations, see Step 2.1: (Optional) Customize a Notebook Instance.

Parameters

$body

Type: object

{
  "OnStart" : [ {
    "Content" : "A base64-encoded string that contains a shell script for a notebook instance lifecycle configuration."
  } ],
  "NotebookInstanceLifecycleConfigName" : "The name of the lifecycle configuration.",
  "OnCreate" : [ {
    "Content" : "A base64-encoded string that contains a shell script for a notebook instance lifecycle configuration."
  } ]
}
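
Because each Content value is a base64-encoded shell script, a typical caller encodes the script before sending it. A minimal sketch with boto3 follows; the script body and configuration name are placeholders.

import base64

import boto3

# Hypothetical example: an OnStart script, base64-encoded as the API requires.
sagemaker = boto3.client("sagemaker")

on_start_script = "#!/bin/bash\nset -e\necho 'notebook instance starting'\n"  # placeholder script

sagemaker.create_notebook_instance_lifecycle_config(
    NotebookInstanceLifecycleConfigName="example-lifecycle-config",
    OnStart=[{"Content": base64.b64encode(on_start_script.encode("utf-8")).decode("utf-8")}],
)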

create_presigned_notebook_instance_url

Returns a URL that you can use to connect to the Jupyter server from a notebook instance. In the Amazon SageMaker console, when you choose Open next to a notebook instance, Amazon SageMaker opens a new tab showing the Jupyter server home page from the notebook instance. The console uses this API to get the URL and show the page. IAM authorization policies for this API are also enforced for every HTTP request and WebSocket frame that attempts to connect to the notebook instance. For example, you can restrict access to this API and to the URL that it returns to a list of IP addresses that you specify. Use the NotIpAddress condition operator and the aws:SourceIP condition context key to specify the list of IP addresses that you want to have access to the notebook instance. For more information, see Limit Access to a Notebook Instance by IP Address.
The URL that you get from a call to CreatePresignedNotebookInstanceUrl is valid only for 5 minutes. If you try to use the URL after the 5-minute limit expires, you are directed to the AWS console sign-in page.

Parameters

$body

Type: object

{
  "SessionExpirationDurationInSeconds" : "The duration of the session, in seconds. The default is 12 hours.",
  "NotebookInstanceName" : "The name of the notebook instance."
}
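
A minimal sketch with boto3's create_presigned_notebook_instance_url; the notebook instance name is a placeholder and the session is set to 30 minutes.

import boto3

# Hypothetical example: request a presigned Jupyter URL for an existing notebook instance.
sagemaker = boto3.client("sagemaker")

response = sagemaker.create_presigned_notebook_instance_url(
    NotebookInstanceName="example-notebook",
    SessionExpirationDurationInSeconds=1800,  # 30-minute session
)
print(response)  # the response contains the presigned URL, which is valid for 5 minutes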

create_training_job

Starts a model training job. After training completes, Amazon SageMaker saves the resulting model artifacts to an Amazon S3 location that you specify.
If you choose to host your model using Amazon SageMaker hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts in a machine learning service other than Amazon SageMaker, provided that you know how to use them for inferences.
In the request body, you provide the following:
AlgorithmSpecification - Identifies the training algorithm to use.
HyperParameters - Specify these algorithm-specific parameters to enable the estimation of model parameters during training. Hyperparameters can be tuned to optimize this learning process. For a list of hyperparameters for each training algorithm provided by Amazon SageMaker, see Algorithms.
InputDataConfig - Describes the training dataset and the Amazon S3, EFS, or FSx location where it is stored.
OutputDataConfig - Identifies the Amazon S3 bucket where you want Amazon SageMaker to save the results of model training.
ResourceConfig - Identifies the resources, ML compute instances, and ML storage volumes to deploy for model training. In distributed training, you specify more than one instance.
EnableManagedSpotTraining - Optimize the cost of training machine learning models by up to 80% by using Amazon EC2 Spot instances. For more information, see Managed Spot Training.
RoleARN - The Amazon Resource Name (ARN) that Amazon SageMaker assumes to perform tasks on your behalf during model training. You must grant this role the necessary permissions so that Amazon SageMaker can successfully complete model training.
StoppingCondition - To help cap training costs, use MaxRuntimeInSeconds to set a time limit for training. Use MaxWaitTimeInSeconds to specify how long you are willing to wait for a managed spot training job to complete.
For more information about Amazon SageMaker, see How It Works.

Parameters

$body

Type: object

{
  "EnableManagedSpotTraining" : "To train models using managed spot training, choose True. Managed spot training provides a fully managed and scalable infrastructure for training machine learning models. this option is useful when training jobs can be interrupted and when there is flexibility when the training job is run.  \nThe complete and intermediate results of jobs are stored in an Amazon S3 bucket, and can be used as a starting point to train models incrementally. Amazon SageMaker provides metrics and logs in CloudWatch. They can be used to see when managed spot training jobs are running, interrupted, resumed, or completed. ",
  "HyperParameters" : "Algorithm-specific parameters that influence the quality of the model. You set hyperparameters before you start the learning process. For a list of hyperparameters for each training algorithm provided by Amazon SageMaker, see Algorithms.  \nYou can specify a maximum of 100 hyperparameters. Each hyperparameter is a key-value pair. Each key and value is limited to 256 characters, as specified by the Length Constraint. ",
  "TrainingJobName" : "The name of the training job. The name must be unique within an AWS Region in an AWS account. ",
  "AlgorithmSpecification" : {
    "TrainingInputMode" : "The input mode that the algorithm supports. For the input modes that Amazon SageMaker algorithms support, see Algorithms. If an algorithm supports the File input mode, Amazon SageMaker downloads the training data from S3 to the provisioned ML storage Volume, and mounts the directory to docker volume for training container. If an algorithm supports the Pipe input mode, Amazon SageMaker streams data directly from S3 to the container.  \n In File mode, make sure you provision ML storage volume with sufficient capacity to accommodate the data download from S3. In addition to the training data, the ML storage volume also stores the output model. The algorithm container use ML storage volume to also store intermediate information, if any.  \n For distributed algorithms using File mode, training data is distributed uniformly, and your training duration is predictable if the input data objects size is approximately same. Amazon SageMaker does not split the files any further for model training. If the object sizes are skewed, training won't be optimal as the data distribution is also skewed where one host in a training cluster is overloaded, thus becoming bottleneck in training. ",
    "MetricDefinitions" : [ {
      "Regex" : "A regular expression that searches the output of a training job and gets the value of the metric. For more information about using regular expressions to define metrics, see Defining Objective Metrics.",
      "Name" : "The name of the metric."
    } ],
    "TrainingImage" : "The registry path of the Docker image that contains the training algorithm. For information about docker registry paths for built-in algorithms, see Algorithms Provided by Amazon SageMaker: Common Parameters. Amazon SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.",
    "AlgorithmName" : "The name of the algorithm resource to use for the training job. This must be an algorithm resource that you created or subscribe to on AWS Marketplace. If you specify a value for this parameter, you can't specify a value for TrainingImage."
  },
  "OutputDataConfig" : {
    "KmsKeyId" : "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. The KmsKeyId can be any of the following formats:   \n // KMS Key ID  \"1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // Amazon Resource Name (ARN) of a KMS Key  \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // KMS Key Alias  \"alias/ExampleAlias\"   \n // Amazon Resource Name (ARN) of a KMS Key Alias  \"arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias\"    \nIf you use a KMS key ID or an alias of your master key, the Amazon SageMaker execution role must include permissions to call kms:Encrypt. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. Amazon SageMaker uses server-side encryption with KMS-managed keys for OutputDataConfig. If you use a bucket policy with an s3:PutObject permission that only allows objects with server-side encryption, set the condition key of s3:x-amz-server-side-encryption to \"aws:kms\". For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.  \nThe KMS key policy must grant permission to the IAM role that you specify in your CreateTrainingJob, CreateTransformJob, or CreateHyperParameterTuningJob requests. For more information, see Using Key Policies in AWS KMS in the AWS Key Management Service Developer Guide.",
    "S3OutputPath" : "Identifies the S3 path where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix. "
  },
  "VpcConfig" : {
    "Subnets" : [ "string" ],
    "SecurityGroupIds" : [ "string" ]
  },
  "CheckpointConfig" : {
    "S3Uri" : "Identifies the S3 path where you want Amazon SageMaker to store checkpoints. For example, s3://bucket-name/key-name-prefix.",
    "LocalPath" : "(Optional) The local directory where checkpoints are written. The default directory is /opt/ml/checkpoints/. "
  },
  "InputDataConfig" : [ {
    "InputMode" : "(Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, Amazon SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in a AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode. \nTo use a model for incremental training, choose File input model.",
    "ChannelName" : "The name of the channel. ",
    "ContentType" : "The MIME type of the data.",
    "RecordWrapperType" : " \nSpecify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, Amazon SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO.  \nIn File mode, leave this field unset or set it to None.",
    "ShuffleConfig" : {
      "Seed" : "Determines the shuffling order in ShuffleConfig value."
    },
    "CompressionType" : "If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.",
    "DataSource" : {
      "FileSystemDataSource" : {
        "FileSystemAccessMode" : "The access mode of the mount of the directory associated with the channel. A directory can be mounted either in ro (read-only) or rw (read-write) mode.",
        "DirectoryPath" : "The full path to the directory to associate with the channel.",
        "FileSystemType" : "The file system type. ",
        "FileSystemId" : "The file system id."
      },
      "S3DataSource" : {
        "S3DataDistributionType" : "If you want Amazon SageMaker to replicate the entire dataset on each ML compute instance that is launched for model training, specify FullyReplicated.  \nIf you want Amazon SageMaker to replicate a subset of data on each ML compute instance that is launched for model training, specify ShardedByS3Key. If there are n ML compute instances launched for a training job, each instance gets approximately 1/n of the number of S3 objects. In this case, model training on each machine uses only the subset of training data.  \nDon't choose more ML compute instances for training than available S3 objects. If you do, some nodes won't get any data and you will pay for nodes that aren't getting any training data. This applies in both File and Pipe modes. Keep this in mind when developing algorithms.  \nIn distributed training, where you use multiple ML compute EC2 instances, you might choose ShardedByS3Key. If the algorithm requires copying training data to the ML storage volume (when TrainingInputMode is set to File), this copies 1/n of the number of objects. ",
        "S3Uri" : "Depending on the value specified for the S3DataType, identifies either a key name prefix or a manifest. For example:   \n  A key name prefix might look like this: s3://bucketname/exampleprefix.   \n  A manifest might look like this: s3://bucketname/example.manifest   The manifest is an S3 object which is a JSON file with the following format:   [    {\"prefix\": \"s3://customer_bucket/some/prefix/\"},    \"relative/path/to/custdata-1\",    \"relative/path/custdata-2\",    ...    ]   The preceding JSON matches the following s3Uris:   s3://customer_bucket/some/prefix/relative/path/to/custdata-1   s3://customer_bucket/some/prefix/relative/path/custdata-2   ...  The complete set of s3uris in this manifest is the input data for the channel for this datasource. The object that each s3uris points to must be readable by the IAM role that Amazon SageMaker uses to perform tasks on your behalf.  ",
        "AttributeNames" : [ "string" ],
        "S3DataType" : "If you choose S3Prefix, S3Uri identifies a key name prefix. Amazon SageMaker uses all objects that match the specified key name prefix for model training.  \nIf you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for model training.  \nIf you choose AugmentedManifestFile, S3Uri identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. AugmentedManifestFile can only be used if the Channel's input mode is Pipe."
      }
    }
  } ],
  "RoleArn" : "The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.  \nDuring model training, Amazon SageMaker needs your permission to read input data from an S3 bucket, download a Docker image that contains training code, write model artifacts to an S3 bucket, write logs to Amazon CloudWatch Logs, and publish metrics to Amazon CloudWatch. You grant permissions for all of these tasks to an IAM role. For more information, see Amazon SageMaker Roles.   \nTo be able to pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission.",
  "EnableNetworkIsolation" : "Isolates the training container. No inbound or outbound network calls can be made, except for calls between peers within a training cluster for distributed training. If you enable network isolation for training jobs that are configured to use a VPC, Amazon SageMaker downloads and uploads customer data and model artifacts through the specified VPC, but the training container does not have network access.  \nThe Semantic Segmentation built-in algorithm does not support network isolation.",
  "EnableInterContainerTrafficEncryption" : "To encrypt all communications between ML compute instances in distributed training, choose True. Encryption provides greater security for distributed training, but training might take longer. How long it takes depends on the amount of communication between compute instances, especially if you use a deep learning algorithm in distributed training. For more information, see Protect Communications Between ML Compute Instances in a Distributed Training Job.",
  "StoppingCondition" : {
    "MaxRuntimeInSeconds" : "The maximum length of time, in seconds, that the training or compilation job can run. If job does not complete during this time, Amazon SageMaker ends the job. If value is not specified, default value is 1 day. The maximum value is 28 days.",
    "MaxWaitTimeInSeconds" : "The maximum length of time, in seconds, how long you are willing to wait for a managed spot training job to complete. It is the amount of time spent waiting for Spot capacity plus the amount of time the training job runs. It must be equal to or greater than MaxRuntimeInSeconds. "
  },
  "ResourceConfig" : {
    "InstanceCount" : "The number of ML compute instances to use. For distributed training, provide a value greater than 1. ",
    "VolumeSizeInGB" : "The size of the ML storage volume that you want to provision.  \nML storage volumes store model artifacts and incremental states. Training algorithms might also use the ML storage volume for scratch space. If you want to store the training data in the ML storage volume, choose File as the TrainingInputMode in the algorithm specification.  \nYou must specify sufficient ML storage for your scenario.   \n Amazon SageMaker supports only the General Purpose SSD (gp2) ML storage volume type. ",
    "VolumeKmsKeyId" : "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the training job. The VolumeKmsKeyId can be any of the following formats:  \n // KMS Key ID  \"1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // Amazon Resource Name (ARN) of a KMS Key  \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"  ",
    "InstanceType" : "The ML compute instance type. "
  },
  "Tags" : [ {
    "Value" : "The tag value.",
    "Key" : "The tag key."
  } ]
}

create_transform_job

Starts a transform job. A transform job uses a trained model to get inferences on a dataset and saves these results to an Amazon S3 location that you specify. To perform batch transformations, you create a transform job and use the data that you have readily available. In the request body, you provide the following:
TransformJobName - Identifies the transform job. The name must be unique within an AWS Region in an AWS account.
ModelName - Identifies the model to use. ModelName must be the name of an existing Amazon SageMaker model in the same AWS Region and AWS account. For information on creating a model, see CreateModel.
TransformInput - Describes the dataset to be transformed and the Amazon S3 location where it is stored.
TransformOutput - Identifies the Amazon S3 location where you want Amazon SageMaker to save the results from the transform job.
TransformResources - Identifies the ML compute instances for the transform job.
For more information about how batch transformation works in Amazon SageMaker, see How It Works.

Parameters

$body

Type: object

{
  "TransformResources" : {
    "InstanceCount" : "The number of ML compute instances to use in the transform job. For distributed transform jobs, specify a value greater than 1. The default value is 1.",
    "VolumeKmsKeyId" : "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the batch transform job. The VolumeKmsKeyId can be any of the following formats:  \n // KMS Key ID  \"1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // Amazon Resource Name (ARN) of a KMS Key  \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"  ",
    "InstanceType" : "The ML compute instance type for the transform job. If you are using built-in algorithms to transform moderately sized datasets, we recommend using ml.m4.xlarge or ml.m5.large instance types."
  },
  "ModelName" : "The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an AWS Region in an AWS account.",
  "MaxConcurrentTransforms" : "The maximum number of parallel requests that can be sent to each instance in a transform job. If MaxConcurrentTransforms is set to 0 or left unset, Amazon SageMaker checks the optional execution-parameters to determine the optimal settings for your chosen algorithm. If the execution-parameters endpoint is not enabled, the default value is 1. For more information on execution-parameters, see How Containers Serve Requests. For built-in algorithms, you don't need to set a value for MaxConcurrentTransforms.",
  "MaxPayloadInMB" : "The maximum allowed size of the payload, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than, or equal to, the size of a single record. To estimate the size of a record in MB, divide the size of your dataset by the number of records. To ensure that the records fit within the maximum payload size, we recommend using a slightly larger value. The default value is 6 MB.  \nFor cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0. This feature works only in supported algorithms. Currently, Amazon SageMaker built-in algorithms do not support HTTP chunked encoding.",
  "TransformOutput" : {
    "AssembleWith" : "Defines how to assemble the results of the transform job as a single S3 object. Choose a format that is most convenient to you. To concatenate the results in binary format, specify None. To add a newline character at the end of every transformed record, specify Line.",
    "Accept" : "The MIME type used to specify the output data. Amazon SageMaker uses the MIME type with each http call to transfer data from the transform job.",
    "KmsKeyId" : "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. The KmsKeyId can be any of the following formats:   \n // KMS Key ID  \"1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // Amazon Resource Name (ARN) of a KMS Key  \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"   \n // KMS Key Alias  \"alias/ExampleAlias\"   \n // Amazon Resource Name (ARN) of a KMS Key Alias  \"arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias\"    \nIf you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.  \nThe KMS key policy must grant permission to the IAM role that you specify in your CreateTramsformJob request. For more information, see Using Key Policies in AWS KMS in the AWS Key Management Service Developer Guide.",
    "S3OutputPath" : "The Amazon S3 path where you want Amazon SageMaker to store the results of the transform job. For example, s3://bucket-name/key-name-prefix. \nFor every S3 object used as input for the transform job, batch transform stores the transformed data with an .out suffix in a corresponding subfolder in the location in the output prefix. For example, for the input data stored at s3://bucket-name/input-name-prefix/dataset01/data.csv, batch transform stores the transformed data at s3://bucket-name/output-name-prefix/input-name-prefix/data.csv.out. Batch transform doesn't upload partially processed objects. For an input S3 object that contains multiple records, it creates an .out file only if the transform job succeeds on the entire file. When the input contains multiple S3 objects, the batch transform job processes the listed S3 objects and uploads only the output for successfully processed objects. If any object fails in the transform job batch transform marks the job as failed to prompt investigation."
  },
  "Environment" : "The environment variables to set in the Docker container. We support up to 16 key and values entries in the map.",
  "TransformInput" : {
    "ContentType" : "The multipurpose internet mail extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each http call to transfer data to the transform job.",
    "SplitType" : "The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. \nWhen splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.  \nSome data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord. \nFor more information about the RecordIO, see Data Format in the MXNet documentation. For more information about the TFRecord, see Consuming TFRecord data in the TensorFlow documentation.",
    "CompressionType" : "If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.",
    "DataSource" : {
      "S3DataSource" : {
        "S3Uri" : "Depending on the value specified for the S3DataType, identifies either a key name prefix or a manifest. For example:  \n  A key name prefix might look like this: s3://bucketname/exampleprefix.   \n  A manifest might look like this: s3://bucketname/example.manifest   The manifest is an S3 object which is a JSON file with the following format:   [    {\"prefix\": \"s3://customer_bucket/some/prefix/\"},    \"relative/path/to/custdata-1\",    \"relative/path/custdata-2\",    ...    ]   The preceding JSON matches the following S3Uris:   s3://customer_bucket/some/prefix/relative/path/to/custdata-1   s3://customer_bucket/some/prefix/relative/path/custdata-1   ...   The complete set of S3Uris in this manifest constitutes the input data for the channel for this datasource. The object that each S3Uris points to must be readable by the IAM role that Amazon SageMaker uses to perform tasks on your behalf. ",
        "S3DataType" : "If you choose S3Prefix, S3Uri identifies a key name prefix. Amazon SageMaker uses all objects with the specified key name prefix for batch transform.  \nIf you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for batch transform.  \nThe following values are compatible: ManifestFile, S3Prefix  \nThe following value is not compatible: AugmentedManifestFile "
      }
    }
  },
  "TransformJobName" : "The name of the transform job. The name must be unique within an AWS Region in an AWS account. ",
  "BatchStrategy" : "Specifies the number of records to include in a mini-batch for an HTTP inference request. A record  is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.  \nTo enable the batch strategy, you must set SplitType to Line, RecordIO, or TFRecord. \nTo use only one record when making an HTTP invocation request to a container, set BatchStrategy to SingleRecord and SplitType to Line. \nTo fit as many records in a mini-batch as can fit within the MaxPayloadInMB limit, set BatchStrategy to MultiRecord and SplitType to Line.",
  "DataProcessing" : {
    "OutputFilter" : "A JSONPath expression used to select a portion of the joined dataset to save in the output file for a batch transform job. If you want Amazon SageMaker to store the entire input dataset in the output file, leave the default value, $. If you specify indexes that aren't within the dimension size of the joined dataset, you get an error. \nExamples: \"$\", \"$[0,5:]\", \"$['id','SageMakerOutput']\" ",
    "JoinSource" : "Specifies the source of the data to join with the transformed data. The valid values are None and Input The default value is None which specifies not to join the input with the transformed data. If you want the batch transform job to join the original input data with the transformed data, set JoinSource to Input.  \nFor JSON or JSONLines objects, such as a JSON array, Amazon SageMaker adds the transformed data to the input JSON object in an attribute called SageMakerOutput. The joined result for JSON must be a key-value pair object. If the input is not a key-value pair object, Amazon SageMaker creates a new JSON file. In the new JSON file, and the input data is stored under the SageMakerInput key and the results are stored in SageMakerOutput. \nFor CSV files, Amazon SageMaker combines the transformed data with the input data at the end of the input data and stores it in the output file. The joined data has the joined input data followed by the transformed data and the output is a CSV file. ",
    "InputFilter" : "A JSONPath expression used to select a portion of the input data to pass to the algorithm. Use the InputFilter parameter to exclude fields, such as an ID column, from the input. If you want Amazon SageMaker to pass the entire input dataset to the algorithm, accept the default value $. \nExamples: \"$\", \"$[1:]\", \"$.features\" "
  },
  "Tags" : [ {
    "Value" : "The tag value.",
    "Key" : "The tag key."
  } ]
}
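
For illustration, the following minimal sketch submits the same request body through the AWS SDK for Python (boto3), whose create_transform_job call takes the fields shown above as keyword arguments. This is not part of the connector; boto3, default credentials, and all resource names (bucket, model, job) are hypothetical assumptions.

import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical names; the model must already exist in the same account and Region.
sagemaker.create_transform_job(
    TransformJobName="example-batch-transform",
    ModelName="example-model",
    MaxPayloadInMB=6,
    BatchStrategy="MultiRecord",
    TransformInput={
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://example-bucket/input-prefix/",
            }
        },
        "ContentType": "text/csv",
        "SplitType": "Line",
        "CompressionType": "None",
    },
    TransformOutput={
        "S3OutputPath": "s3://example-bucket/output-prefix/",
        "AssembleWith": "Line",
        "Accept": "text/csv",
    },
    TransformResources={
        "InstanceType": "ml.m4.xlarge",
        "InstanceCount": 1,
    },
    # Join the original CSV input with the predictions in the output file.
    DataProcessing={
        "InputFilter": "$[1:]",   # drop the first column, for example an ID column
        "JoinSource": "Input",
        "OutputFilter": "$",
    },
)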

create_workteam

Creates a new work team for labeling your data. A work team is defined by one or more Amazon Cognito user pools. You must first create the user pools before you can create a work team. You cannot create more than 25 work teams in an account and region.

Parameters

$body

Type: object

{
  "Description" : "A description of the work team.",
  "NotificationConfiguration" : {
    "NotificationTopicArn" : "The ARN for the SNS topic to which notifications should be published."
  },
  "WorkteamName" : "The name of the work team. Use this name to identify the work team.",
  "MemberDefinitions" : [ {
    "CognitoMemberDefinition" : {
      "UserPool" : "An identifier for a user pool. The user pool must be in the same region as the service that you are calling.",
      "ClientId" : "An identifier for an application client. You must create the app client ID using Amazon Cognito.",
      "UserGroup" : "An identifier for a user group."
    }
  } ],
  "Tags" : [ {
    "Value" : "The tag value.",
    "Key" : "The tag key."
  } ]
}
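
As an illustration, the sketch below creates a private work team with boto3. The Cognito user pool, app client, user group, and SNS topic identifiers are hypothetical and must already exist in your account.

import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical Cognito and SNS identifiers.
sagemaker.create_workteam(
    WorkteamName="example-labeling-team",
    Description="Private work team for labeling example data",
    MemberDefinitions=[
        {
            "CognitoMemberDefinition": {
                "UserPool": "us-west-2_EXAMPLE",
                "ClientId": "EXAMPLECLIENTID",
                "UserGroup": "example-labelers",
            }
        }
    ],
    NotificationConfiguration={
        "NotificationTopicArn": "arn:aws:sns:us-west-2:111122223333:example-topic"
    },
)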

delete_algorithm

Removes the specified algorithm from your account.

Parameters

$body

Type: object

{
  "AlgorithmName" : "The name of the algorithm to delete."
}

delete_code_repository

Deletes the specified Git repository from your account.

Parameters

$body

Type: object

{
  "CodeRepositoryName" : "The name of the Git repository to delete."
}

delete_endpoint

Deletes an endpoint. Amazon SageMaker frees up all of the resources that were deployed when the endpoint was created.
Amazon SageMaker retires any custom KMS key grants associated with the endpoint, meaning you don't need to use the RevokeGrant API call.

Parameters

$body

Type: object

{
  "EndpointName" : "The name of the endpoint that you want to delete."
}

delete_endpoint_config

Deletes an endpoint configuration. The DeleteEndpointConfig API deletes only the specified configuration. It does not delete endpoints created using the configuration.

Parameters

$body

Type: object

{
  "EndpointConfigName" : "The name of the endpoint configuration that you want to delete."
}

delete_model

Deletes a model. The DeleteModel API deletes only the model entry that was created in Amazon SageMaker when you called the CreateModel API. It does not delete model artifacts, inference code, or the IAM role that you specified when creating the model.

Parameters

$body

Type: object

{
  "ModelName" : "The name of the model to delete."
}
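
Because deleting an endpoint removes neither its endpoint configuration nor the model, a common cleanup pattern is to delete all three in order. A minimal boto3 sketch, with hypothetical resource names:

import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical names. Each resource is removed explicitly because the
# delete operations do not cascade.
sagemaker.delete_endpoint(EndpointName="example-endpoint")
sagemaker.delete_endpoint_config(EndpointConfigName="example-endpoint-config")
sagemaker.delete_model(ModelName="example-model")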

delete_model_package

Deletes a model package. A model package is used to create Amazon SageMaker models or to list them on AWS Marketplace. Buyers can subscribe to model packages listed on AWS Marketplace to create models in Amazon SageMaker.

Parameters

$body

Type: object

{
  "ModelPackageName" : "The name of the model package. The name must have 1 to 63 characters. Valid characters are a-z, A-Z, 0-9, and - (hyphen)."
}

delete_notebook_instance

Deletes an Amazon SageMaker notebook instance. Before you can delete a notebook instance, you must call the StopNotebookInstance API.
When you delete a notebook instance, you lose all of your data. Amazon SageMaker removes the ML compute instance, and deletes the ML storage volume and the network interface associated with the notebook instance.

Parameters

$body

Type: object

{
  "NotebookInstanceName" : "The name of the Amazon SageMaker notebook instance to delete."
}

delete_notebook_instance_lifecycle_config

Deletes a notebook instance lifecycle configuration.

Parameters

$body

Type: object

{
  "NotebookInstanceLifecycleConfigName" : "The name of the lifecycle configuration to delete."
}

delete_tags

Deletes the specified tags from an Amazon SageMaker resource. To list a resource's tags, use the ListTags API.
When you call this API to delete tags from a hyperparameter tuning job, the deleted tags are not removed from training jobs that the hyperparameter tuning job launched before you called this API.

Parameters

$body

Type: object

{
  "ResourceArn" : "The Amazon Resource Name (ARN) of the resource whose tags you want to delete.",
  "TagKeys" : [ "string" ]
}
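
A minimal boto3 sketch that lists a resource's current tags (via list_tags, described later in this document) and then removes selected keys. The ARN and tag keys are hypothetical.

import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical ARN; remove only the tag keys you no longer want.
arn = "arn:aws:sagemaker:us-west-2:111122223333:training-job/example-training-job"

current = sagemaker.list_tags(ResourceArn=arn)
print([tag["Key"] for tag in current["Tags"]])

sagemaker.delete_tags(ResourceArn=arn, TagKeys=["team", "cost-center"])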

delete_workteam

Deletes an existing work team. This operation can't be undone.

Parameters

$body

Type: object

{
  "WorkteamName" : "The name of the work team to delete."
}

describe_algorithm

Returns a description of the specified algorithm that is in your account.

Parameters

$body

Type: object

{
  "AlgorithmName" : "The name of the algorithm to describe."
}

describe_code_repository

Gets details about the specified Git repository.

Parameters

$body

Type: object

{
  "CodeRepositoryName" : "The name of the Git repository to describe."
}

describe_compilation_job

Returns information about a model compilation job. To create a model compilation job, use CreateCompilationJob. To get information about multiple model compilation jobs, use ListCompilationJobs.

Parameters

$body

Type: object

{
  "CompilationJobName" : "The name of the model compilation job that you want information about."
}

describe_endpoint

Returns the description of an endpoint.

Parameters

$body

Type: object

{
  "EndpointName" : "The name of the endpoint."
}

describe_endpoint_config

Returns the description of an endpoint configuration created using the CreateEndpointConfig API.

Parameters

$body

Type: object

{
  "EndpointConfigName" : "The name of the endpoint configuration."
}

describe_hyper_parameter_tuning_job

Gets a description of a hyperparameter tuning job.

Parameters

$body

Type: object

{
  "HyperParameterTuningJobName" : "The name of the tuning job to describe."
}

describe_labeling_job

Gets information about a labeling job.

Parameters

$body

Type: object

{
  "LabelingJobName" : "The name of the labeling job to return information for."
}

describe_model

Describes a model that you created using the CreateModel API.

Parameters

$body

Type: object

{
  "ModelName" : "The name of the model."
}

describe_model_package

Returns a description of the specified model package, which is used to create Amazon SageMaker models or list them on AWS Marketplace. To create models in Amazon SageMaker, buyers can subscribe to model packages listed on AWS Marketplace.

Parameters

$body

Type: object

{
  "ModelPackageName" : "The name of the model package to describe."
}

describe_notebook_instance

Returns information about a notebook instance.

Parameters

$body

Type: object

{
  "NotebookInstanceName" : "The name of the notebook instance that you want information about."
}

describe_notebook_instance_lifecycle_config

Returns a description of a notebook instance lifecycle configuration. For information about notebook instance lifecycle configurations, see Step 2.1: (Optional) Customize a Notebook Instance.

Parameters

$body

Type: object

{
  "NotebookInstanceLifecycleConfigName" : "The name of the lifecycle configuration to describe."
}

describe_subscribed_workteam

Gets information about a work team provided by a vendor. It returns details about the subscription with a vendor in the AWS Marketplace.

Parameters

$body

Type: object

{
  "WorkteamArn" : "The Amazon Resource Name (ARN) of the subscribed work team to describe."
}

describe_training_job

Returns information about a training job.

Parameters

$body

Type: object

{
  "TrainingJobName" : "The name of the training job."
}

describe_transform_job

Returns information about a transform job.

Parameters

$body

Type: object

{
  "TransformJobName" : "The name of the transform job that you want to view details of."
}

describe_workteam

Gets information about a specific work team. You can see information such as the create date, the last updated date, membership information, and the work team's Amazon Resource Name (ARN).

Parameters

$body

Type: object

{
  "WorkteamName" : "The name of the work team to return a description of."
}

get_search_suggestions

An auto-complete API for the search functionality in the Amazon SageMaker console. It returns suggestions of possible matches for the property name to use in Search queries. Provides suggestions for HyperParameters, Tags, and Metrics.

Parameters

$body

Type: object

{
  "Resource" : "The name of the Amazon SageMaker resource to Search for. The only valid Resource value is TrainingJob.",
  "SuggestionQuery" : {
    "PropertyNameQuery" : {
      "PropertyNameHint" : "Text that is part of a property's name. The property names of hyperparameter, metric, and tag key names that begin with the specified text in the PropertyNameHint."
    }
  }
}
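
A minimal boto3 sketch that asks for property-name suggestions beginning with a hint string. The hint value is only an example.

import boto3

sagemaker = boto3.client("sagemaker")

# Suggest property names that start with "Learn" (for example, a learning-rate hyperparameter).
response = sagemaker.get_search_suggestions(
    Resource="TrainingJob",
    SuggestionQuery={"PropertyNameQuery": {"PropertyNameHint": "Learn"}},
)
for suggestion in response["PropertyNameSuggestions"]:
    print(suggestion["PropertyName"])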

list_algorithms

Lists the machine learning algorithms that have been created.

Parameters

$body

Type: object

{
  "CreationTimeBefore" : "A filter that returns only algorithms created before the specified time (timestamp).",
  "SortBy" : "The parameter by which to sort the results. The default is CreationTime.",
  "SortOrder" : "The sort order for the results. The default is Ascending.",
  "CreationTimeAfter" : "A filter that returns only algorithms created after the specified time (timestamp).",
  "NameContains" : "A string in the algorithm name. This filter returns only algorithms whose name contains the specified string."
}

list_code_repositories

Gets a list of the Git repositories in your account.

Parameters

$body

Type: object

{
  "CreationTimeBefore" : "A filter that returns only Git repositories that were created before the specified time.",
  "LastModifiedTimeBefore" : "A filter that returns only Git repositories that were last modified before the specified time.",
  "LastModifiedTimeAfter" : "A filter that returns only Git repositories that were last modified after the specified time.",
  "SortBy" : "The field to sort results by. The default is Name.",
  "SortOrder" : "The sort order for results. The default is Ascending.",
  "CreationTimeAfter" : "A filter that returns only Git repositories that were created after the specified time.",
  "NameContains" : "A string in the Git repositories name. This filter returns only repositories whose name contains the specified string."
}

list_compilation_jobs

Lists model compilation jobs that satisfy various filters. To create a model compilation job, use CreateCompilationJob. To get information about a particular model compilation job you have created, use DescribeCompilationJob.

Parameters

$body

Type: object

{
  "CreationTimeBefore" : "A filter that returns the model compilation jobs that were created before a specified time.",
  "StatusEquals" : "A filter that retrieves model compilation jobs with a specific DescribeCompilationJobResponse$CompilationJobStatus status.",
  "LastModifiedTimeBefore" : "A filter that returns the model compilation jobs that were modified before a specified time.",
  "LastModifiedTimeAfter" : "A filter that returns the model compilation jobs that were modified after a specified time.",
  "SortBy" : "The field by which to sort results. The default is CreationTime.",
  "SortOrder" : "The sort order for results. The default is Ascending.",
  "CreationTimeAfter" : "A filter that returns the model compilation jobs that were created after a specified time. ",
  "NameContains" : "A filter that returns the model compilation jobs whose name contains a specified string."
}

list_endpoint_configs

Lists endpoint configurations.

Parameters

$body

Type: object

{
  "CreationTimeBefore" : "A filter that returns only endpoint configurations created before the specified time (timestamp).",
  "SortBy" : "The field to sort results by. The default is CreationTime.",
  "SortOrder" : "The sort order for results. The default is Descending.",
  "CreationTimeAfter" : "A filter that returns only endpoint configurations with a creation time greater than or equal to the specified time (timestamp).",
  "NameContains" : "A string in the endpoint configuration name. This filter returns only endpoint configurations whose name contains the specified string. "
}

list_endpoints

Lists endpoints.

Parameters

$body

Type: object

{
  "CreationTimeBefore" : "A filter that returns only endpoints that were created before the specified time (timestamp).",
  "StatusEquals" : " A filter that returns only endpoints with the specified status.",
  "LastModifiedTimeBefore" : " A filter that returns only endpoints that were modified before the specified timestamp. ",
  "LastModifiedTimeAfter" : " A filter that returns only endpoints that were modified after the specified timestamp. ",
  "SortBy" : "Sorts the list of results. The default is CreationTime.",
  "SortOrder" : "The sort order for results. The default is Descending.",
  "CreationTimeAfter" : "A filter that returns only endpoints with a creation time greater than or equal to the specified time (timestamp).",
  "NameContains" : "A string in endpoint names. This filter returns only endpoints whose name contains the specified string."
}

list_hyper_parameter_tuning_jobs

Gets a list of HyperParameterTuningJobSummary objects that describe the hyperparameter tuning jobs launched in your account.

Parameters

$body

Type: object

{
  "CreationTimeBefore" : "A filter that returns only tuning jobs that were created before the specified time.",
  "StatusEquals" : "A filter that returns only tuning jobs with the specified status.",
  "LastModifiedTimeBefore" : "A filter that returns only tuning jobs that were modified before the specified time.",
  "LastModifiedTimeAfter" : "A filter that returns only tuning jobs that were modified after the specified time.",
  "SortBy" : "The field to sort results by. The default is Name.",
  "SortOrder" : "The sort order for results. The default is Ascending.",
  "CreationTimeAfter" : "A filter that returns only tuning jobs that were created after the specified time.",
  "NameContains" : "A string in the tuning job name. This filter returns only tuning jobs whose name contains the specified string."
}

list_labeling_jobs

Gets a list of labeling jobs.

Parameters

$body

Type: object

{
  "CreationTimeBefore" : "A filter that returns only labeling jobs created before the specified time (timestamp).",
  "StatusEquals" : "A filter that retrieves only labeling jobs with a specific status.",
  "LastModifiedTimeBefore" : "A filter that returns only labeling jobs modified before the specified time (timestamp).",
  "LastModifiedTimeAfter" : "A filter that returns only labeling jobs modified after the specified time (timestamp).",
  "SortBy" : "The field to sort results by. The default is CreationTime.",
  "SortOrder" : "The sort order for results. The default is Ascending.",
  "CreationTimeAfter" : "A filter that returns only labeling jobs created after the specified time (timestamp).",
  "NameContains" : "A string in the labeling job name. This filter returns only labeling jobs whose name contains the specified string."
}

list_labeling_jobs_for_workteam

Gets a list of labeling jobs assigned to a specified work team.

Parameters

$body

Type: object

{
  "CreationTimeBefore" : "A filter that returns only labeling jobs created before the specified time (timestamp).",
  "WorkteamArn" : "The Amazon Resource Name (ARN) of the work team for which you want to see labeling jobs for.",
  "JobReferenceCodeContains" : "A filter the limits jobs to only the ones whose job reference code contains the specified string.",
  "SortBy" : "The field to sort results by. The default is CreationTime.",
  "SortOrder" : "The sort order for results. The default is Ascending.",
  "CreationTimeAfter" : "A filter that returns only labeling jobs created after the specified time (timestamp)."
}

list_model_packages

Lists the model packages that have been created.

Parameters

$body

Type: object

{
  "CreationTimeBefore" : "A filter that returns only model packages created before the specified time (timestamp).",
  "SortBy" : "The parameter by which to sort the results. The default is CreationTime.",
  "SortOrder" : "The sort order for the results. The default is Ascending.",
  "CreationTimeAfter" : "A filter that returns only model packages created after the specified time (timestamp).",
  "NameContains" : "A string in the model package name. This filter returns only model packages whose name contains the specified string."
}

list_models

Lists models created with the CreateModel API.

Parameters

$body

Type: object

{
  "CreationTimeBefore" : "A filter that returns only models created before the specified time (timestamp).",
  "SortBy" : "Sorts the list of results. The default is CreationTime.",
  "SortOrder" : "The sort order for results. The default is Descending.",
  "CreationTimeAfter" : "A filter that returns only models with a creation time greater than or equal to the specified time (timestamp).",
  "NameContains" : "A string in the training job name. This filter returns only models in the training job whose name contains the specified string."
}

list_notebook_instance_lifecycle_configs

Lists notebook instance lifecycle configurations created with the CreateNotebookInstanceLifecycleConfig API.

Parameters

$body

Type: object

{
  "CreationTimeBefore" : "A filter that returns only lifecycle configurations that were created before the specified time (timestamp).",
  "LastModifiedTimeBefore" : "A filter that returns only lifecycle configurations that were modified before the specified time (timestamp).",
  "LastModifiedTimeAfter" : "A filter that returns only lifecycle configurations that were modified after the specified time (timestamp).",
  "SortBy" : "Sorts the list of results. The default is CreationTime.",
  "SortOrder" : "The sort order for results.",
  "CreationTimeAfter" : "A filter that returns only lifecycle configurations that were created after the specified time (timestamp).",
  "NameContains" : "A string in the lifecycle configuration name. This filter returns only lifecycle configurations whose name contains the specified string."
}

list_notebook_instances

Returns a list of the Amazon SageMaker notebook instances in the requester's account in an AWS Region.

Parameters

$body

Type: object

{
  "CreationTimeBefore" : "A filter that returns only notebook instances that were created before the specified time (timestamp). ",
  "StatusEquals" : "A filter that returns only notebook instances with the specified status.",
  "AdditionalCodeRepositoryEquals" : "A filter that returns only notebook instances with associated with the specified git repository.",
  "LastModifiedTimeAfter" : "A filter that returns only notebook instances that were modified after the specified time (timestamp).",
  "NotebookInstanceLifecycleConfigNameContains" : "A string in the name of a notebook instances lifecycle configuration associated with this notebook instance. This filter returns only notebook instances associated with a lifecycle configuration with a name that contains the specified string.",
  "SortBy" : "The field to sort results by. The default is Name.",
  "SortOrder" : "The sort order for results. ",
  "DefaultCodeRepositoryContains" : "A string in the name or URL of a Git repository associated with this notebook instance. This filter returns only notebook instances associated with a git repository with a name that contains the specified string.",
  "LastModifiedTimeBefore" : "A filter that returns only notebook instances that were modified before the specified time (timestamp).",
  "CreationTimeAfter" : "A filter that returns only notebook instances that were created after the specified time (timestamp).",
  "NameContains" : "A string in the notebook instances' name. This filter returns only notebook instances whose name contains the specified string."
}

list_subscribed_workteams

Gets a list of the work teams that you are subscribed to in the AWS Marketplace. The list may be empty if no work team satisfies the filter specified in the NameContains parameter.

Parameters

$body

Type: object

{
  "NameContains" : "A string in the work team name. This filter returns only work teams whose name contains the specified string."
}

list_tags

Returns the tags for the specified Amazon SageMaker resource.

Parameters

$body

Type: object

{
  "ResourceArn" : "The Amazon Resource Name (ARN) of the resource whose tags you want to retrieve."
}

list_training_jobs

Lists training jobs.

Parameters

$body

Type: object

{
  "CreationTimeBefore" : "A filter that returns only training jobs created before the specified time (timestamp).",
  "StatusEquals" : "A filter that retrieves only training jobs with a specific status.",
  "LastModifiedTimeBefore" : "A filter that returns only training jobs modified before the specified time (timestamp).",
  "LastModifiedTimeAfter" : "A filter that returns only training jobs modified after the specified time (timestamp).",
  "SortBy" : "The field to sort results by. The default is CreationTime.",
  "SortOrder" : "The sort order for results. The default is Ascending.",
  "CreationTimeAfter" : "A filter that returns only training jobs created after the specified time (timestamp).",
  "NameContains" : "A string in the training job name. This filter returns only training jobs whose name contains the specified string."
}
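
A minimal boto3 sketch that combines several of the filters above. The name substring and date are hypothetical.

import boto3
from datetime import datetime

sagemaker = boto3.client("sagemaker")

# List completed training jobs created after a given date whose name contains
# "xgboost", newest first. Parameter names match the request body shown above.
response = sagemaker.list_training_jobs(
    StatusEquals="Completed",
    CreationTimeAfter=datetime(2019, 1, 1),
    NameContains="xgboost",
    SortBy="CreationTime",
    SortOrder="Descending",
)
for job in response["TrainingJobSummaries"]:
    print(job["TrainingJobName"], job["TrainingJobStatus"])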

list_training_jobs_for_hyper_parameter_tuning_job

Gets a list of TrainingJobSummary objects that describe the training jobs that a hyperparameter tuning job launched.

Parameters

$body

Type: object

{
  "StatusEquals" : "A filter that returns only training jobs with the specified status.",
  "SortBy" : "The field to sort results by. The default is Name. \nIf the value of this field is FinalObjectiveMetricValue, any training jobs that did not return an objective metric are not listed.",
  "SortOrder" : "The sort order for results. The default is Ascending.",
  "HyperParameterTuningJobName" : "The name of the tuning job whose training jobs you want to list."
}

list_transform_jobs

Lists transform jobs.

Parameters

$body

Type: object

{
  "CreationTimeBefore" : "A filter that returns only transform jobs created before the specified time.",
  "StatusEquals" : "A filter that retrieves only transform jobs with a specific status.",
  "LastModifiedTimeBefore" : "A filter that returns only transform jobs modified before the specified time.",
  "LastModifiedTimeAfter" : "A filter that returns only transform jobs modified after the specified time.",
  "SortBy" : "The field to sort results by. The default is CreationTime.",
  "SortOrder" : "The sort order for results. The default is Descending.",
  "CreationTimeAfter" : "A filter that returns only transform jobs created after the specified time.",
  "NameContains" : "A string in the transform job name. This filter returns only transform jobs whose name contains the specified string."
}

list_workteams

Gets a list of work teams that you have defined in a region. The list may be empty if no work team satisfies the filter specified in the NameContains parameter.

Parameters

$body

Type: object

{
  "SortBy" : "The field to sort results by. The default is CreationTime.",
  "SortOrder" : "The sort order for results. The default is Ascending.",
  "NameContains" : "A string in the work team's name. This filter returns only work teams whose name contains the specified string."
}

render_ui_template

Renders the UI template so that you can preview the worker's experience.

Parameters

$body

Type: object

{
  "Task" : {
    "Input" : "A JSON object that contains values for the variables defined in the template. It is made available to the template under the substitution variable task.input. For example, if you define a variable task.input.text in your template, you can supply the variable in the JSON object as \"text\": \"sample text\"."
  },
  "UiTemplate" : {
    "Content" : "The content of the Liquid template for the worker user interface."
  },
  "RoleArn" : "The Amazon Resource Name (ARN) that has access to the S3 objects that are used by the template."
}
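
A minimal boto3 sketch that previews a worker UI. The Liquid template, task input JSON, and role ARN are hypothetical examples; the template references task.input.text as described above.

import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical template using the Crowd HTML Elements.
template = """
<crowd-form>
  <p>{{ task.input.text }}</p>
  <crowd-input name="annotation" placeholder="Type your answer" required></crowd-input>
</crowd-form>
"""

response = sagemaker.render_ui_template(
    UiTemplate={"Content": template},
    Task={"Input": '{"text": "sample text"}'},
    RoleArn="arn:aws:iam::111122223333:role/ExampleGroundTruthRole",
)
print(response["RenderedContent"])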

search

Finds Amazon SageMaker resources that match a search query. Matching resource objects are returned as a list of SearchResult objects in the response. You can sort the search results by any resource property in ascending or descending order. You can query against the following value types: numerical, text, Booleans, and timestamps.

Parameters

$body

Type: object

{
  "SortBy" : "The name of the resource property used to sort the SearchResults. The default is LastModifiedTime.",
  "SearchExpression" : {
    "NestedFilters" : [ {
      "Filters" : [ {
        "Operator" : "A Boolean binary operator that is used to evaluate the filter. The operator field contains one of the following values:  Equals  \nThe specified resource in Name equals the specified Value.  NotEquals  \nThe specified resource in Name does not equal the specified Value.  GreaterThan  \nThe specified resource in Name is greater than the specified Value. Not supported for text-based properties.  GreaterThanOrEqualTo  \nThe specified resource in Name is greater than or equal to the specified Value. Not supported for text-based properties.  LessThan  \nThe specified resource in Name is less than the specified Value. Not supported for text-based properties.  LessThanOrEqualTo  \nThe specified resource in Name is less than or equal to the specified Value. Not supported for text-based properties.  Contains  \nOnly supported for text-based properties. The word-list of the property contains the specified Value.   \nIf you have specified a filter Value, the default is Equals.",
        "Value" : "A value used with Resource and Operator to determine if objects satisfy the filter's condition. For numerical properties, Value must be an integer or floating-point decimal. For timestamp properties, Value must be an ISO 8601 date-time string of the following format: YYYY-mm-dd'T'HH:MM:SS.",
        "Name" : "A property name. For example, TrainingJobName. For the list of valid property names returned in a search result for each supported resource, see TrainingJob properties. You must specify a valid property name for the resource."
      } ],
      "NestedPropertyName" : "The name of the property to use in the nested filters. The value must match a listed property name, such as InputDataConfig ."
    } ],
    "Operator" : "A Boolean operator used to evaluate the search expression. If you want every conditional statement in all lists to be satisfied for the entire search expression to be true, specify And. If only a single conditional statement needs to be true for the entire search expression to be true, specify Or. The default value is And.",
    "Filters" : [ {
      "Operator" : "A Boolean binary operator that is used to evaluate the filter. The operator field contains one of the following values:  Equals  \nThe specified resource in Name equals the specified Value.  NotEquals  \nThe specified resource in Name does not equal the specified Value.  GreaterThan  \nThe specified resource in Name is greater than the specified Value. Not supported for text-based properties.  GreaterThanOrEqualTo  \nThe specified resource in Name is greater than or equal to the specified Value. Not supported for text-based properties.  LessThan  \nThe specified resource in Name is less than the specified Value. Not supported for text-based properties.  LessThanOrEqualTo  \nThe specified resource in Name is less than or equal to the specified Value. Not supported for text-based properties.  Contains  \nOnly supported for text-based properties. The word-list of the property contains the specified Value.   \nIf you have specified a filter Value, the default is Equals.",
      "Value" : "A value used with Resource and Operator to determine if objects satisfy the filter's condition. For numerical properties, Value must be an integer or floating-point decimal. For timestamp properties, Value must be an ISO 8601 date-time string of the following format: YYYY-mm-dd'T'HH:MM:SS.",
      "Name" : "A property name. For example, TrainingJobName. For the list of valid property names returned in a search result for each supported resource, see TrainingJob properties. You must specify a valid property name for the resource."
    } ],
    "SubExpressions" : [ {
      "NestedFilters" : [ {
        "Filters" : [ {
          "Operator" : "A Boolean binary operator that is used to evaluate the filter. The operator field contains one of the following values:  Equals  \nThe specified resource in Name equals the specified Value.  NotEquals  \nThe specified resource in Name does not equal the specified Value.  GreaterThan  \nThe specified resource in Name is greater than the specified Value. Not supported for text-based properties.  GreaterThanOrEqualTo  \nThe specified resource in Name is greater than or equal to the specified Value. Not supported for text-based properties.  LessThan  \nThe specified resource in Name is less than the specified Value. Not supported for text-based properties.  LessThanOrEqualTo  \nThe specified resource in Name is less than or equal to the specified Value. Not supported for text-based properties.  Contains  \nOnly supported for text-based properties. The word-list of the property contains the specified Value.   \nIf you have specified a filter Value, the default is Equals.",
          "Value" : "A value used with Resource and Operator to determine if objects satisfy the filter's condition. For numerical properties, Value must be an integer or floating-point decimal. For timestamp properties, Value must be an ISO 8601 date-time string of the following format: YYYY-mm-dd'T'HH:MM:SS.",
          "Name" : "A property name. For example, TrainingJobName. For the list of valid property names returned in a search result for each supported resource, see TrainingJob properties. You must specify a valid property name for the resource."
        } ],
        "NestedPropertyName" : "The name of the property to use in the nested filters. The value must match a listed property name, such as InputDataConfig ."
      } ],
      "Operator" : "A Boolean operator used to evaluate the search expression. If you want every conditional statement in all lists to be satisfied for the entire search expression to be true, specify And. If only a single conditional statement needs to be true for the entire search expression to be true, specify Or. The default value is And.",
      "Filters" : [ {
        "Operator" : "A Boolean binary operator that is used to evaluate the filter. The operator field contains one of the following values:  Equals  \nThe specified resource in Name equals the specified Value.  NotEquals  \nThe specified resource in Name does not equal the specified Value.  GreaterThan  \nThe specified resource in Name is greater than the specified Value. Not supported for text-based properties.  GreaterThanOrEqualTo  \nThe specified resource in Name is greater than or equal to the specified Value. Not supported for text-based properties.  LessThan  \nThe specified resource in Name is less than the specified Value. Not supported for text-based properties.  LessThanOrEqualTo  \nThe specified resource in Name is less than or equal to the specified Value. Not supported for text-based properties.  Contains  \nOnly supported for text-based properties. The word-list of the property contains the specified Value.   \nIf you have specified a filter Value, the default is Equals.",
        "Value" : "A value used with Resource and Operator to determine if objects satisfy the filter's condition. For numerical properties, Value must be an integer or floating-point decimal. For timestamp properties, Value must be an ISO 8601 date-time string of the following format: YYYY-mm-dd'T'HH:MM:SS.",
        "Name" : "A property name. For example, TrainingJobName. For the list of valid property names returned in a search result for each supported resource, see TrainingJob properties. You must specify a valid property name for the resource."
      } ],
      "SubExpressions" : "SearchExpressionList"
    } ]
  },
  "Resource" : "The name of the Amazon SageMaker resource to search for. Currently, the only valid Resource value is TrainingJob.",
  "SortOrder" : "How SearchResults are ordered. Valid values are Ascending or Descending. The default is Descending."
}
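
A minimal boto3 sketch of a search expression with two filters combined with And. The property names and values are illustrative examples of the TrainingJob properties mentioned above.

import boto3

sagemaker = boto3.client("sagemaker")

# Find completed training jobs whose name contains "xgboost", most recently modified first.
response = sagemaker.search(
    Resource="TrainingJob",
    SearchExpression={
        "Filters": [
            {"Name": "TrainingJobName", "Operator": "Contains", "Value": "xgboost"},
            {"Name": "TrainingJobStatus", "Operator": "Equals", "Value": "Completed"},
        ],
        "Operator": "And",
    },
    SortBy="LastModifiedTime",
    SortOrder="Descending",
)
for result in response["Results"]:
    print(result["TrainingJob"]["TrainingJobName"])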

start_notebook_instance

Launches an ML compute instance with the latest version of the libraries and attaches your ML storage volume. After configuring the notebook instance, Amazon SageMaker sets the notebook instance status to InService. A notebook instance's status must be InService before you can connect to your Jupyter notebook.

Parameters

$body

Type: object

{
  "NotebookInstanceName" : "The name of the notebook instance to start."
}

stop_compilation_job

Stops a model compilation job. To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal. This gracefully shuts the job down. If the job hasn't stopped, it sends the SIGKILL signal. When it receives a StopCompilationJob request, Amazon SageMaker changes the CompilationJobSummary$CompilationJobStatus of the job to Stopping. After Amazon SageMaker stops the job, it sets the CompilationJobSummary$CompilationJobStatus to Stopped.

Parameters

$body

Type: object

{
  "CompilationJobName" : "The name of the model compilation job to stop."
}

stop_hyper_parameter_tuning_job

Stops a running hyperparameter tuning job and all running training jobs that the tuning job launched. All model artifacts output from the training jobs are stored in Amazon Simple Storage Service (Amazon S3). All data that the training jobs write to Amazon CloudWatch Logs are still available in CloudWatch. After the tuning job moves to the Stopped state, it releases all reserved resources for the tuning job.

Parameters

$body

Type: object

{
  "HyperParameterTuningJobName" : "The name of the tuning job to stop."
}

stop_labeling_job

Stops a running labeling job. A job that is stopped cannot be restarted. Any results obtained before the job is stopped are placed in the Amazon S3 output bucket.

Parameters

$body

Type: object

{
  "LabelingJobName" : "The name of the labeling job to stop."
}

stop_notebook_instance

Terminates the ML compute instance. Before terminating the instance, Amazon SageMaker disconnects the ML storage volume from it. Amazon SageMaker preserves the ML storage volume. Amazon SageMaker stops charging you for the ML compute instance when you call StopNotebookInstance. To access data on the ML storage volume for a notebook instance that has been terminated, call the StartNotebookInstance API. StartNotebookInstance launches another ML compute instance, configures it, and attaches the preserved ML storage volume so you can continue your work.

Parameters

$body

Type: object

{
  "NotebookInstanceName" : "The name of the notebook instance to terminate."
}

stop_training_job

Stops a training job. To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms might use this 120-second window to save the model artifacts, so the results of training are not lost.
When it receives a StopTrainingJob request, Amazon SageMaker changes the status of the job to Stopping. After Amazon SageMaker stops the job, it sets the status to Stopped.

Parameters

$body

Type: object

{
  "TrainingJobName" : "The name of the training job to stop."
}
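
A minimal boto3 sketch that stops a job and then polls describe_training_job until the job leaves the Stopping state. The job name is hypothetical.

import time

import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical job name. The job moves to Stopping and then Stopped; the
# 120-second SIGTERM window gives the algorithm time to save model artifacts.
sagemaker.stop_training_job(TrainingJobName="example-training-job")

while True:
    status = sagemaker.describe_training_job(
        TrainingJobName="example-training-job"
    )["TrainingJobStatus"]
    if status in ("Stopped", "Completed", "Failed"):
        break
    time.sleep(30)
print(status)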

stop_transform_job

Stops a transform job. When Amazon SageMaker receives a StopTransformJob request, the status of the job changes to Stopping. After Amazon SageMaker stops the job, the status is set to Stopped. When you stop a transform job before it is completed, Amazon SageMaker doesn't store the job's output in Amazon S3.

Parameters

$body

Type: object

{
  "TransformJobName" : "The name of the transform job to stop."
}

update_code_repository

Updates the specified Git repository with the specified values.

Parameters

$body

Type: object

{
  "CodeRepositoryName" : "The name of the Git repository to update.",
  "GitConfig" : {
    "SecretArn" : "The Amazon Resource Name (ARN) of the AWS Secrets Manager secret that contains the credentials used to access the git repository. The secret must have a staging label of AWSCURRENT and must be in the following format: \n {\"username\": UserName, \"password\": Password} "
  }
}

update_endpoint

Deploys the new EndpointConfig specified in the request, switches to using the newly created endpoint, and then deletes resources provisioned for the endpoint using the previous EndpointConfig (there is no availability loss).
When Amazon SageMaker receives the request, it sets the endpoint status to Updating. After updating the endpoint, it sets the status to InService. To check the status of an endpoint, use the DescribeEndpoint API.
You must not delete an EndpointConfig in use by an endpoint that is live or while the UpdateEndpoint or CreateEndpoint operations are being performed on the endpoint. To update an endpoint, you must create a new EndpointConfig.

Parameters

$body

Type: object

{
  "EndpointName" : "The name of the endpoint whose configuration you want to update.",
  "EndpointConfigName" : "The name of the new endpoint configuration."
}
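
A minimal boto3 sketch that points an endpoint at a new configuration and waits for it to return to InService. Both names are hypothetical, and the new configuration must already exist and differ from the one currently in use.

import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical names; "example-endpoint-config-v2" was created beforehand.
sagemaker.update_endpoint(
    EndpointName="example-endpoint",
    EndpointConfigName="example-endpoint-config-v2",
)

# Poll until the update finishes and the endpoint returns to InService.
waiter = sagemaker.get_waiter("endpoint_in_service")
waiter.wait(EndpointName="example-endpoint")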

update_endpoint_weights_and_capacities

Updates variant weight of one or more variants associated with an existing endpoint, or capacity of one variant associated with an existing endpoint. When it receives the request, Amazon SageMaker sets the endpoint status to Updating. After updating the endpoint, it sets the status to InService. To check the status of an endpoint, use the DescribeEndpoint API.

Parameters

$body

Type: object

{
  "DesiredWeightsAndCapacities" : [ {
    "VariantName" : "The name of the variant to update.",
    "DesiredWeight" : "The variant's weight.",
    "DesiredInstanceCount" : "The variant's capacity."
  } ],
  "EndpointName" : "The name of an existing Amazon SageMaker endpoint."
}
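
A minimal boto3 sketch that shifts traffic between two hypothetical production variants and scales one of them out without redeploying the endpoint.

import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical endpoint with two variants; route most traffic to variant-b.
sagemaker.update_endpoint_weights_and_capacities(
    EndpointName="example-endpoint",
    DesiredWeightsAndCapacities=[
        {"VariantName": "variant-a", "DesiredWeight": 1.0, "DesiredInstanceCount": 1},
        {"VariantName": "variant-b", "DesiredWeight": 9.0, "DesiredInstanceCount": 3},
    ],
)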

update_notebook_instance

Updates a notebook instance. NotebookInstance updates include upgrading or downgrading the ML compute instance used for your notebook instance to accommodate changes in your workload requirements.

Parameters

$body

Type: object

{
  "DisassociateAdditionalCodeRepositories" : "A list of names or URLs of the default Git repositories to remove from this notebook instance. This operation is idempotent. If you specify a Git repository that is not associated with the notebook instance when you call this method, it does not throw an error.",
  "VolumeSizeInGB" : "The size, in GB, of the ML storage volume to attach to the notebook instance. The default value is 5 GB. ML storage volumes are encrypted, so Amazon SageMaker can't determine the amount of available free space on the volume. Because of this, you can increase the volume size when you update a notebook instance, but you can't decrease the volume size. If you want to decrease the size of the ML storage volume in use, create a new notebook instance with the desired size.",
  "DefaultCodeRepository" : "The Git repository to associate with the notebook instance as its default code repository. This can be either the name of a Git repository stored as a resource in your account, or the URL of a Git repository in AWS CodeCommit or in any other Git repository. When you open a notebook instance, it opens in the directory that contains this repository. For more information, see Associating Git Repositories with Amazon SageMaker Notebook Instances.",
  "AdditionalCodeRepositories" : [ "string" ],
  "AcceleratorTypes" : [ "string. Possible values: ml.eia1.medium | ml.eia1.large | ml.eia1.xlarge | ml.eia2.medium | ml.eia2.large | ml.eia2.xlarge" ],
  "DisassociateDefaultCodeRepository" : "The name or URL of the default Git repository to remove from this notebook instance. This operation is idempotent. If you specify a Git repository that is not associated with the notebook instance when you call this method, it does not throw an error.",
  "RoleArn" : "The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access the notebook instance. For more information, see Amazon SageMaker Roles.   \nTo be able to pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission.",
  "RootAccess" : "Whether root access is enabled or disabled for users of the notebook instance. The default value is Enabled.  \nIf you set this to Disabled, users don't have root access on the notebook instance, but lifecycle configuration scripts still run with root permissions.",
  "DisassociateAcceleratorTypes" : "A list of the Elastic Inference (EI) instance types to remove from this notebook instance. This operation is idempotent. If you specify an accelerator type that is not associated with the notebook instance when you call this method, it does not throw an error.",
  "NotebookInstanceName" : "The name of the notebook instance to update.",
  "InstanceType" : "The Amazon ML compute instance type.",
  "LifecycleConfigName" : "The name of a lifecycle configuration to associate with the notebook instance. For information about lifestyle configurations, see Step 2.1: (Optional) Customize a Notebook Instance.",
  "DisassociateLifecycleConfig" : "Set to true to remove the notebook instance lifecycle configuration currently associated with the notebook instance. This operation is idempotent. If you specify a lifecycle configuration that is not associated with the notebook instance when you call this method, it does not throw an error."
}

update_notebook_instance_lifecycle_config

Updates a notebook instance lifecycle configuration created with the CreateNotebookInstanceLifecycleConfig API.

Parameters

$body

Type: object

{
  "OnStart" : [ {
    "Content" : "A base64-encoded string that contains a shell script for a notebook instance lifecycle configuration."
  } ],
  "NotebookInstanceLifecycleConfigName" : "The name of the lifecycle configuration.",
  "OnCreate" : [ {
    "Content" : "A base64-encoded string that contains a shell script for a notebook instance lifecycle configuration."
  } ]
}
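
Because Content must be a base64-encoded shell script, a common pattern is to encode the script in code before calling the API. A minimal boto3 sketch with a hypothetical configuration name and script:

import base64

import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical script that runs every time the notebook instance starts.
on_start_script = "#!/bin/bash\nset -e\nsudo -u ec2-user -i pip install --upgrade pandas\n"

sagemaker.update_notebook_instance_lifecycle_config(
    NotebookInstanceLifecycleConfigName="example-lifecycle-config",
    OnStart=[
        {"Content": base64.b64encode(on_start_script.encode("utf-8")).decode("utf-8")}
    ],
)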

update_workteam

Updates an existing work team with new member definitions or description.

Parameters

$body

Type: object

{
  "Description" : "An updated description for the work team.",
  "NotificationConfiguration" : {
    "NotificationTopicArn" : "The ARN for the SNS topic to which notifications should be published."
  },
  "WorkteamName" : "The name of the work team to update.",
  "MemberDefinitions" : [ {
    "CognitoMemberDefinition" : {
      "UserPool" : "An identifier for a user pool. The user pool must be in the same region as the service that you are calling.",
      "ClientId" : "An identifier for an application client. You must create the app client ID using Amazon Cognito.",
      "UserGroup" : "An identifier for a user group."
    }
  } ]
}