Image Annotation Overview

This is the recommended task type for annotating images with vector geometric shapes. The available geometries are box, polygon, line, point, cuboid, and ellipse.
This endpoint creates an imageannotation task. Given an image, Scale will annotate the image with the geometries you specify.
The required parameters for this task are attachment and geometries.

Body Params

project (string)

The name of the project to associate this task with.

batch (string)

The name of the batch to associate this task with. Note that if a batch is specified, you need not specify the project, as the task will automatically be associated with the batch's project. For Scale Rapid projects, specifying a batch is required. See the Batches section for more details.

instruction (string)

A markdown-enabled string or iframe embedded Google Doc explaining how to do the task. You can use markdown to show example images, give structure to your instructions, and more. See our instruction best practices for more details. For Scale Rapid projects, DO NOT set this field unless you specifically want to override the project level instructions.

callback_url (string)

The full URL (including the scheme http:// or https://) or email address of the callback that will be used when the task is completed.

attachment (string, required)

A URL to the image you'd like to be annotated.

context_attachments (array of objects)

An array of objects in the form of {"attachment": "<link to actual attachment>"} to show to taskers as a reference. Context images themselves cannot be labeled; they appear in the UI as read-only references. You cannot use the task's attachment URL as a context attachment's URL.
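For concreteness, a context_attachments array might look like the following sketch (the URLs are placeholders, not real attachments):

```python
# Hypothetical context_attachments array: each entry is an object with a
# single "attachment" key pointing at a reference image.
context_attachments = [
    {"attachment": "https://example.com/reference-angle-1.png"},
    {"attachment": "https://example.com/reference-angle-2.png"},
]
```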

geometries (object, required)

This object is used to define which objects need to be annotated and which annotation geometries (box, polygon, line, point, cuboid, or ellipse) should be used for each annotation. Further description of each geometry can be found in each respective section below.
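As a sketch, a geometries object keyed by geometry type might look like this (the label names and size thresholds are illustrative, not required values):

```python
# Hypothetical geometries object: keys are geometry types, values configure
# which labels to annotate with that geometry and any geometry options.
geometries = {
    "box": {
        "objects_to_annotate": ["car", "pedestrian"],
        "min_height": 10,  # skip boxes shorter than 10 px
        "min_width": 10,   # skip boxes narrower than 10 px
    },
    "point": {
        "objects_to_annotate": ["traffic_light"],
    },
}
```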

annotation_attributes (object)

This field is used to add additional attributes that you would like to capture per annotation. See Annotation Attributes for more details about annotation attributes.
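A minimal sketch of an annotation_attributes object follows; the attribute name ("occlusion") and its choices are illustrative, and the exact field names should be confirmed against the Annotation Attributes section:

```python
# Hypothetical categorical attribute captured per annotation.
annotation_attributes = {
    "occlusion": {
        "type": "category",
        "description": "What percentage of the object is occluded?",
        "choices": ["0%", "25%", "50%", "75%"],
    }
}
```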

links (object)

Use this field to define links between annotations. See Links for more details about links.

hypothesis (object)

Editable annotations that a task should be initialized with. This is useful when you've run a model to prelabel the task and want annotators to refine those prelabels. Must contain the annotations field, which has the same format as the annotations field in the response.
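A hypothesis object could be sketched as below; the per-annotation fields (label, type, left, top, width, height) are assumptions based on the box geometry, so consult the response format for the authoritative shape:

```python
# Hypothetical hypothesis with one prelabeled box for annotators to refine.
hypothesis = {
    "annotations": [
        {
            "label": "car",
            "type": "box",
            "left": 120,
            "top": 80,
            "width": 60,
            "height": 40,
        }
    ]
}
```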

layer (object)

Read-only annotations to be pre-drawn on the task. See the Layers section for more details.

base_annotations (object)

Editable annotations, with the option to be "locked", that a task should be initialized with. This is useful when you've run a model to prelabel the task and want annotators to refine those prelabels. Must contain the annotations field, which has the same format as the annotations field in the response.

can_add_base_annotations (boolean)

Whether or not new annotations can be added to the task when base_annotations are used. If set to true, new annotations can be added in addition to base_annotations; if set to false, new annotations cannot be added.

can_edit_base_annotations (boolean)

Whether or not base_annotations can be edited in the task. If set to true, base_annotations can be edited by the tasker (position of annotation, attributes, etc). If set to false, all aspects of base_annotations will be locked.

can_edit_base_annotation_labels (boolean)

Whether or not base_annotations labels can be edited in the task. If set to true, the label of base_annotations can be edited by the tasker. If set to false, the label will be locked.

can_delete_base_annotations (boolean)

Whether or not base_annotations can be removed from the task. If set to true, base_annotations can be deleted from the task. If set to false, base_annotations cannot be deleted from the task.
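Taken together, the base_annotations flags let you lock specific aspects of prelabels. A hypothetical payload fragment (annotation contents omitted) allowing taskers to adjust or delete the prelabels, but not add new annotations or change labels:

```python
# Hypothetical flag combination for partially locked prelabels.
payload_fragment = {
    "base_annotations": {"annotations": []},  # same format as hypothesis
    "can_add_base_annotations": False,
    "can_edit_base_annotations": True,
    "can_edit_base_annotation_labels": False,
    "can_delete_base_annotations": True,
}
```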

image_metadata (object)

This field accepts specified image metadata; supported fields include:
- date_time - displays the date and time the image was taken
- resolution - configures the units of the ruler tools; resolution_ratio holds the number of resolution_units corresponding to one pixel. For example, {resolution_ratio: 3, resolution_unit: 'm'} means one pixel in the image corresponds to three meters in the real world.
- location - the real-world location where the image was captured, in the standard geographic coordinate system, e.g. {lat: 37.77, long: -122.43}
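Assembled into one object, image_metadata might look like the sketch below; the nesting of the resolution and location fields is inferred from the examples above, so verify against the live API:

```python
# Hypothetical image_metadata object combining all three supported fields.
image_metadata = {
    "date_time": "2024-01-15T09:30:00Z",
    "resolution": {"resolution_ratio": 3, "resolution_unit": "m"},
    "location": {"lat": 37.77, "long": -122.43},
}
```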

metadata (object)

A set of key/value pairs that you can attach to a task object. It can be useful for storing additional information about the task in a structured format. Max 10KB. See the Metadata section for more detail.
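For example, metadata can carry your own identifiers (the keys below are illustrative and have no meaning to the API):

```python
# Hypothetical metadata: arbitrary key/value pairs, kept under the 10KB limit.
metadata = {
    "scene_id": "scene-0042",
    "camera": "front_left",
    "capture_batch": 7,
}
```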

padding (integer)

The amount of padding in pixels added to all sides of the image.

paddingX (integer)

The amount of padding in pixels added to the left and right of the image. Overrides padding if set.

paddingY (integer)

The amount of padding in pixels added to the top and bottom of the image. Overrides padding if set.
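As a hypothetical illustration of the override behavior: the fragment below pads every side by 50 px, except horizontally, where paddingX overrides padding with 120 px.

```python
# paddingX overrides padding on the left/right; top/bottom keep padding.
payload_fragment = {"padding": 50, "paddingX": 120}
```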

priority (integer)

A value of 10, 20, or 30 that defines the priority of a task within a project. The higher the number, the higher the priority.

unique_id (string)

An arbitrary ID that you can assign to a task and then query for later. This ID must be unique across all projects under your account; otherwise the task submission will be rejected. See Avoiding Duplicate Tasks for more details.

clear_unique_id_on_error (boolean)

If set to true and a task errors out after being submitted, the unique_id on the task will be unset. This parameter enables workflows in which you can re-submit the same unique_id to recover from errors automatically.
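A sketch of the idempotent-submission pattern this enables (the unique_id naming scheme is an example, not a convention the API requires):

```python
# Hypothetical payload fragment: if submission errors, the unique_id is
# freed so the same payload can simply be submitted again.
payload_fragment = {
    "unique_id": "scene-0042-frame-0001",
    "clear_unique_id_on_error": True,
}
```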

tags (array of strings)

Arbitrary labels that you can assign to a task. At most 5 tags are allowed per task. You can query tasks with specific tags through the task retrieval API.

Request

POST /v1/task/imageannotation
import requests

url = "https://api.scale.com/v1/task/imageannotation"

payload = {
    "instruction": "**Instructions:** Please label all the things",
    "attachment": "https://i.imgur.com/iDZcXfS.png",
    "geometries": {
        "box": {
            "min_height": None,
            "min_width": None,
            "can_rotate": None,
            "integer_pixels": None
        },
        "polygon": {
            "min_vertices": None,
            "max_vertices": None
        },
        "line": {
            "min_vertices": None,
            "max_vertices": None
        },
        "cuboid": {
            "min_height": None,
            "min_width": None,
            "camera_intrinsics": {
                "fx": None,
                "fy": None,
                "cx": None,
                "cy": None,
                "skew": None,
                "scalefactor": None
            },
            "camera_rotation_quaternion": {
                "w": None,
                "x": None,
                "y": None,
                "z": None
            },
            "camera_height": None
        }
    },
    "padding": None,
    "paddingX": None,
    "paddingY": None,
    "priority": None
}
headers = {
    "accept": "application/json",
    "content-type": "application/json"
}

# Scale authenticates with HTTP Basic auth: pass your API key as the
# username and leave the password empty.
response = requests.post(url, json=payload, headers=headers,
                         auth=("YOUR_SCALE_API_KEY", ""))

print(response.text)

Response

{
  "task_id": "string",
  "created_at": "string",
  "type": "imageannotation",
  "status": "pending",
  "instruction": "string",
  "is_test": false,
  "urgency": "standard",
  "metadata": {},
  "project": "string",
  "callback_url": "string",
  "updated_at": "string",
  "work_started": false,
  "params": {
    "attachment_type": "image",
    "attachment": "http://i.imgur.com/3Cpje3l.jpg",
    "geometries": {
      "box": {
        "objects_to_annotate": [
          null
        ],
        "min_height": 5,
        "min_width": 5
      },
      "polygon": {
        "objects_to_annotate": [
          null
        ]
      },
      "point": {
        "objects_to_annotate": [
          null
        ]
      }
    },
    "annotation_attributes": {
      "additionalProp": {
        "type": "category",
        "description": "string",
        "choice": "string"
      }
    }
  }
}

Label Nesting and Options

There are often annotation tasks that have too many label choices for a tasker to efficiently sort through at once, or cases where you want to show one version of a label name to a tasker but receive another version in the response.

In those cases, you can utilize LabelDescription objects to support nested labels, where labels may have subcategories within them, as well as setting display values for the label.

When declaring objects_to_annotate in your task parameters, we accept a mixed array of strings and the more complex LabelDescription objects.


Definition: LabelDescription

A simple example is illustrated in the example JSON below, where objects_to_annotate can simply be a string, a nested label with choices and subchoices, or a nested label where the subchoices themselves are LabelDescription objects with a display value.

While there may be a large number of total labels, using subchoices a tasker can first categorize an object as a road, pedestrian, or vehicle, and based on that choice, further select the specific type of pedestrian or vehicle.

Nested labels may be specified both for the object labels (the objects_to_annotate array parameter), as well as in the choices array of a categorical annotation attribute. In both cases, you would specify a nested label by using a LabelDescription object instead of a string.

For example, for an objects_to_annotate array of ["Vehicle", "Pedestrian"], you could instead add a nested label by passing an array, like ["Vehicle", {"choice": "Pedestrian", "subchoices": ["Animal", "Adult", "Child"]}]. Then, if a tasker selected "Pedestrian" for an annotation, they would be further prompted to choose one of the corresponding subchoices for that annotation.

The LabelDescription object has the following structure:

Parameter

Type

Description

choice*

string

The name of the label. This should be singular and descriptive (e.g., car, background, pole).

When both a choice and subchoices are defined, the choice itself will not be selectable; it is used only for UX navigation. Only the "leaf" nodes will be returned in Scale's response.

subchoices

Array<LabelDescription | string>

Optional: Descriptions of the sub-labels to be shown under this parent label. Array can be a mix of LabelDescription objects or strings.

instance_label

boolean
default false

Optional: For Segmentation-based Tasks - Whether this label should be segmented on a per-instance basis. For example, if you set instance_label to true, each individual car would get a separate mask in the image, allowing you to distinguish between them.

display

string
default choice

Optional: The value to be shown to a Tasker for a given label. Visually overrides the choice field in the user experience, but does not affect the task response or conditionality.

LabelDescription Example

objects_to_annotate = [
  "Road",
  {
    "choice": "Vehicle",
    "subchoices": ["Car", "Truck", "Train", "Motorcycle"]
  },
  {
    "choice": "Pedestrian",
    "subchoices": [
      "Animal",
      {"choice": "Ped_HeightOverMeter", "display": "Adult"},
      {"choice": "Ped_HeightUnderMeter", "display": "Child"}
    ]
  }
]