The BigObject service allows users to import data from other sources through BigObject's import feature. For beginners, import makes a good guess at the data's schema and suggests how to create the object that will store the data.

POST /import

Supported File Types

All imports must start by obtaining a session.

If the service agrees, it will yield a callback URL for the user's import session.

curl -v -X POST 'http://bigobject_host/import'

The returned JSON document, if successful:

    {
        "callback_url": "/import/1qazWSF45?token=123cdksjd"
    }
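The `callback_url` embeds both the session id and the token. A minimal Python sketch of splitting it into its parts (`parse_callback_url` is a hypothetical helper, not part of the BigObject API):

```python
from urllib.parse import urlparse, parse_qs

def parse_callback_url(callback_url):
    """Split a callback_url such as "/import/1qazWSF45?token=123cdksjd"
    into the session id and the token query parameter."""
    parsed = urlparse(callback_url)
    session_id = parsed.path.rstrip("/").split("/")[-1]
    token = parse_qs(parsed.query)["token"][0]
    return session_id, token

session_id, token = parse_callback_url("/import/1qazWSF45?token=123cdksjd")
print(session_id, token)  # 1qazWSF45 123cdksjd
```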

POST /import/session_id/?token=secret

Once granted permission to create an import session, users can upload their data through the `callback_url` given above. Users must provide the MIME type appropriate for the data they want to import.

curl -v -F 'file=@./data.csv;type=text/csv' 'http://bigobject_host/import/session_id/?token=secret'
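Since the service requires a MIME type appropriate for the data, one way to derive it from the file name is Python's standard `mimetypes` module (the fallback type is a common convention, not a BigObject requirement):

```python
import mimetypes

def content_type_for(path):
    """Guess the MIME type to send with the upload; fall back to a
    generic binary type when the extension is unknown."""
    guessed, _ = mimetypes.guess_type(path)
    return guessed or "application/octet-stream"

print(content_type_for("./data.csv"))   # text/csv
print(content_type_for("./data.blob"))  # application/octet-stream
```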
Make the most of concurrent processing
Fetch the guessed schema with `GET`, or tell the BigObject import service to take action right away with a custom schema via `PUT`.

GET /import/session_id/?token=secret

Users may optionally ask the BigObject import service to report the guessed data schema. The request waits until the schema parser can determine a type, which means the upload should be in progress.


The response is a JSON document describing the data schema. `name` is the name of the object that will be created to store the data in transit. `columns` represents the data fields as a list, with each item as a map.

Users may optionally provide a `misc` field detailing specific instructions for handling the data they are importing. Refer to supported file types for more info.

    {
        "name": "object_name",
        "columns": [
            {
                "attr": "column_1_name",
                "type": "column_1_type",
                "key": true_or_false,
                "datefmt": "how_to_interpret_your_datetime",
                "default": value
            },
            {
                "attr": "column_2_name",
                "type": "column_2_type"
            }
        ],
        "misc": {
            // Contains file type specific info
        }
    }
  • attr for the field's name

  • type for the field's data type

  • key for whether this field is indexed; optional

  • datefmt for helping BigObject interpret the user's datetime format; optional

  • default for the value to use if none is present; optional
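Putting the fields above together, a sketch of building a schema document in Python before sending it back (the helper name and the idea of omitting unset optional fields are assumptions, not part of the API):

```python
def build_column(attr, type_, key=None, datefmt=None, default=None):
    """Build one entry of the "columns" list; optional fields are
    included only when given, matching the documented schema."""
    col = {"attr": attr, "type": type_}
    if key is not None:
        col["key"] = key
    if datefmt is not None:
        col["datefmt"] = datefmt
    if default is not None:
        col["default"] = default
    return col

schema = {
    "name": "object_name",
    "columns": [
        build_column("column_1_name", "column_1_type", key=True),
        build_column("column_2_name", "column_2_type"),
    ],
}
```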
Users may not need to fetch the guessed result
Users' data might be obscure to BigObject's schema parser. In that case, users can issue the `PUT` command with their custom schema and action.

PUT /import/session_id/?token=secret&action=action

Once users are confident with the schema for this data, they need to commit to importing data into the object.

curl -v -X PUT -d '{
    "name": "table_name",
    "columns": [
    {
        "attr": "col_1_name",
        "type": "col_1_type"
    },
    {
        "attr": "col_2_name",
        "type": "col_2_type"
    }]
}' 'http://bigobject_host/import/session_id/?token=secret&action=create'

The command above demonstrates an intent to create a table object with the schema in its body from the import session. The `action` parameter can be one of:

  • create: create a new object. All create restrictions apply.

  • append: import data into an existing object identified by the `name` field in the schema.

  • overwrite: truncate the object identified by the `name` field in the schema, keeping its schema.
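The `action` query parameter selects among these behaviours. A small sketch of composing the `PUT` URL, with the documented actions validated up front (the helper itself is hypothetical):

```python
from urllib.parse import urlencode

def import_url(session_id, token, action):
    """Compose the documented endpoint
    /import/session_id/?token=secret&action=action."""
    if action not in ("create", "append", "overwrite"):
        raise ValueError("unsupported action: %s" % action)
    query = urlencode({"token": token, "action": action})
    return "/import/%s/?%s" % (session_id, query)

print(import_url("1qazWSF45", "secret", "create"))
# /import/1qazWSF45/?token=secret&action=create
```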

The response indicates whether the action succeeded, and provides the callback URL for querying the status of the import.

    {
        "status": 0,
        "callback_url": "/import/status/1qazWSF45"
    }
We still need your token!!
Notice that the `callback_url` does not contain `token`. This is because **BigObject does not own the token**, only the hashed result of the user's `token`.
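How hashed-token verification could work on the server side, as an illustrative sketch only (the hash function shown is an assumption; the one BigObject actually uses is not documented):

```python
import hashlib

def hash_token(token):
    """Store only the digest, never the raw token."""
    return hashlib.sha256(token.encode("utf-8")).hexdigest()

def token_matches(presented, stored_digest):
    """The service can verify a presented token without owning it."""
    return hash_token(presented) == stored_digest

stored = hash_token("123cdksjd")
print(token_matches("123cdksjd", stored))  # True
print(token_matches("wrong", stored))      # False
```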

GET /import/status/session_id/?token=secret

curl -v 'http://bigobject_host/import/status/session_id/?token=secret'

Returns `done` when the job is completed.
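A client can poll this endpoint until the job finishes. In the sketch below the fetch function is injected so the example stays self-contained; in practice it would issue an HTTP GET against the status callback URL above:

```python
import time

def wait_until_done(fetch_status, poll_interval=1.0, max_polls=60):
    """Poll fetch_status() until it returns "done" or the poll budget
    runs out. fetch_status stands in for a GET on the status endpoint."""
    for _ in range(max_polls):
        if fetch_status() == "done":
            return True
        time.sleep(poll_interval)
    return False

# Example with a stub that finishes on the third poll.
responses = iter(["running", "running", "done"])
print(wait_until_done(lambda: next(responses), poll_interval=0))  # True
```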