Track 1 Results Submissions Instructions

Submission format
As part of Track 1 of the NVIDIA AI City Challenge, teams build models that can detect, localize, and classify objects in keyframes extracted from videos recorded at several intersections. Teams were given training and validation subsets from three datasets: aic480, aic540, and aic1080. A few days before the challenge results are due, test sets will be provided in the form of sets of keyframe images. For each image, teams will execute their prediction models and provide results in one of two formats:

- A zip archive (NOT tar.gz, .z, .rar, or any other type of archive) containing one file for each test image, with the same name as the image, except using the '.txt' extension. Text files should not be in a sub-directory. Each text file will have one line for each predicted bounding box, in the following format:

class xmin ymin xmax ymax confidence


- A JSON file containing a dictionary with elements for each image in the test set, in the following format:

{
  "great_neck_first_colonial_20140604_00016": [
    {
      "class": "Van",
      "confidence": 0.93,
      "xmax": 506.0,
      "xmin": 424.0,
      "ymax": 297.0,
      "ymin": 252.0
    },
    {
      "class": "SUV",
      "confidence": 0.24,
      "xmax": 281.0,
      "xmin": 179.0,
      "ymax": 348.0,
      "ymin": 293.0
    }
  ]
}

Note that image names in the JSON format do not include the file extension. Confidence scores are float values in the range [0, 1]. Bounding box coordinates are the number of pixels from the lower-left corner of the image. Class is the string value representing the class (e.g., Car). For derived datasets that have numeric classes, please note that the 0-indexed class list can be found in /datasets/aic<size>/custom_classes.txt, where <size> is the AIC version of the dataset (e.g., 2 -> SUV). An example of each submission format is included as an attachment.
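
As an illustration only, the following Python sketch writes a set of detections in both accepted formats. The detections dictionary, tuple layout, and output file names (results.zip, results.json) are assumptions made for this example, not part of the official tooling.

# Illustrative sketch: write detections in both accepted submission formats.
# The `detections` dictionary, output file names, and tuple layout below are
# assumptions for the example; substitute your model's actual output.
import json
import zipfile

detections = {
    "great_neck_first_colonial_20140604_00016": [
        # (class, xmin, ymin, xmax, ymax, confidence)
        ("Van", 424.0, 252.0, 506.0, 297.0, 0.93),
        ("SUV", 179.0, 293.0, 281.0, 348.0, 0.24),
    ],
}

# Format 1: a flat .zip archive with one .txt file per test image,
# each line formatted as "class xmin ymin xmax ymax confidence".
with zipfile.ZipFile("results.zip", "w") as zf:
    for image_name, boxes in detections.items():
        lines = ["%s %s %s %s %s %s" % (cls, xmin, ymin, xmax, ymax, conf)
                 for cls, xmin, ymin, xmax, ymax, conf in boxes]
        zf.writestr(image_name + ".txt", "\n".join(lines) + "\n")

# Format 2: a single JSON dictionary keyed by image name (no extension).
results = {}
for image_name, boxes in detections.items():
    results[image_name] = [
        {"class": cls, "xmin": xmin, "ymin": ymin,
         "xmax": xmax, "ymax": ymax, "confidence": conf}
        for cls, xmin, ymin, xmax, ymax, conf in boxes
    ]
with open("results.json", "w") as f:
    json.dump(results, f, indent=2, sort_keys=True)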

Submission site
The submission site can be found at: http://52.173.197.169:3000/nvidiacity/.

Submission deadline: August 3, 2017, 5:00 PM PST.

Teams can use any of the credentials they were given during the Annotation phase to log into the submission system. Once logged in, click Add Submission to add a new result. Enter the required information described below, attach your submission file, and click Submit.

- Submission Name: a short name that describes your submission, maximum 10 characters, alphanumeric or underscore characters only. These names will be used in charts to differentiate submissions.
- Description: a short description of your model, e.g., framework, layers, pre-training weights, data trained on (if more or other than the training set), and other distinctive modeling aspects. Maximum 1000 characters allowed.
- Dataset: the dataset on which the submitted results were produced: aic480, aic540, or aic1080.
- Environment: DGX or Other. If training was done in another environment, choose Other and include a description of the environment that was used for training: CPU(s), RAM, GPU(s), interconnect (if multiple nodes were used), etc.
- Training Iterations: the number of iterations the model was trained for.
- Training Time: the amount of time training took, in seconds.
- DGX Inference: average frames per second executing inference for this dataset on the DGX server (a timing sketch follows this list).
- TX2 Inference: average frames per second executing inference for this dataset on the Jetson TX2 edge device.
- Results File: the results .zip or .json file.
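
For the two inference fields, the sketch below shows one way to compute an average frames-per-second figure. The test-image directory and the run_inference() placeholder are assumptions; replace them with your own data path and model code.

# Illustrative sketch: measure the average inference FPS to report in the
# DGX Inference and TX2 Inference fields. The image directory below and the
# run_inference() placeholder are assumptions; substitute your own paths
# and model code.
import glob
import time

def run_inference(image_path):
    # Placeholder: load the image and run your model's forward pass here.
    pass

image_paths = sorted(glob.glob("/datasets/aic540/test/images/*"))

start = time.time()
for path in image_paths:
    run_inference(path)
elapsed = time.time() - start

print("Average inference FPS: %.2f" % (len(image_paths) / elapsed))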

After submitting a result file, you will be returned to the Submissions page, where you will see a summary of your submission. The status of your submission will be automatically updated every 15 seconds. If a problem is encountered with the submission, the status will display an error message. Otherwise, the message "Evaluation successful." will be displayed.

Please note that, for each test set (aic480, aic540, and aic1080), only 5 submissions are allowed for each team (NOT each user in each team). Submissions returning an error are not counted towards the total number of submissions.

For each AIC dataset, teams should only submit one result (their best result) for each algorithm/model they train. They should use the val dataset to tune parameters for the algorithm/model and should only execute inference on the test set using the model that performed best on the val dataset. The results from that inference should be submitted to our evaluation server. The evaluation server should not be used for parameter tuning.

Evaluation
An evaluation script (evaluate.py) has been provided in /datasets/scripts. Using this script, teams can test their model performance on their own training, test, or validation sets. For a given set of true and predicted labels in a dataset, the script computes several prediction performance scores (e.g., Average Precision, F1-score). We follow the Pascal VOC 2012 challenge in our methodology for computing these scores. Namely, a detected bounding box of class X is assigned to the true bounding box of class X with the highest Intersection over Union (IoU) score, as long as that true bounding box has not already been detected and their IoU score is at least 50%. Teams will be ranked based on the mean Average Precision (mAP) of the per-class object predictions and the Localization Average Precision (AP), where the Localization AP is computed by ignoring class assignments in all true and predicted bounding boxes. Given that some annotators did not annotate small (far away) objects in keyframes, while others did, we ignore small bounding boxes (smaller than 30x30 pixels) in both true and predicted labels.
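
As a rough illustration of the matching rule described above (not the actual evaluate.py implementation), the sketch below matches each predicted box, in descending confidence order as in the usual VOC procedure, to the unmatched same-class true box with the highest IoU, counting it as a true positive only when that IoU is at least 0.5. The 30x30-pixel filtering and the AP computation itself are omitted.

# Simplified illustration of the box-matching rule (not the official
# evaluate.py implementation). Boxes are (xmin, ymin, xmax, ymax) tuples.

def iou(a, b):
    """Intersection over Union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def count_true_positives(predictions, ground_truth, iou_threshold=0.5):
    """predictions: list of (class_name, box, confidence) tuples;
    ground_truth: list of (class_name, box) tuples.
    Each prediction may claim at most one unmatched true box of its class."""
    matched = set()
    true_positives = 0
    # Process predictions from most to least confident, as in Pascal VOC.
    for cls, box, conf in sorted(predictions, key=lambda p: -p[2]):
        candidates = [(iou(box, true_box), i)
                      for i, (true_cls, true_box) in enumerate(ground_truth)
                      if true_cls == cls and i not in matched]
        if not candidates:
            continue
        best_iou, best_idx = max(candidates)
        if best_iou >= iou_threshold:
            matched.add(best_idx)
            true_positives += 1
    return true_positives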

Evaluation script usage

python2 evaluate.py <true_labels> <predicted_labels>

The <true_labels> argument can be a path to the AIC dataset directory containing the labels of the images you are predicting, e.g., /datasets/aic540/val/labels for images in /datasets/aic540/val/images. Note that labels have not been and will not be provided for the test sets. Given a file predicted_aic540_val.json containing predictions for the AIC540 validation set, one can evaluate the predictions by executing:

python2 evaluate.py /datasets/aic540/val/labels predicted_aic540_val.json

In addition to the printed output, the script can be used to generate structured JSON output and even precision-recall graphs for the prediction results. Please see the script source code or execute evaluate.py --help for details. For example, the following execution will generate an interpolated precision-recall graph in graph.png and write a JSON output file to output.json:

python2 evaluate.py /datasets/aic540/val/labels predicted.json -g graph.png -i -o output.json

Reporting results
Teams should prepare a short report of 4 to 6 pages, in IEEE 2-column format, describing the modeling methods they employed and the results they obtained as part of the challenge. After the submission deadline, teams will be able to see the scores for their submissions, which they can include in the report. The deadline for the report submission has been extended to August 12, 2017. Additional details regarding the report submission site will be provided in due course.

Attachments: labels.json, labels.zip

Docker for Jetson TX2

This information is being made available to the NVIDIA AI City Challenge Participants courtesy of Chris Dye and team at IBM.
Challenge teams can pull prebuilt images from the Docker Hub repo, or build them themselves using the Dockerfiles and other supporting code for the Jetson TX1 and Jetson TX2.

After updating their TX2s to JetPack 3.1, teams can follow the guide at the wiki below, and docker.io will install correctly from apt. The darknet/YOLO Docker container image for the TX2 has been tested and performs as expected.

Repo: https://github.com/open-horizon/cogwerx-jetson-tx2/
Wiki: https://github.com/open-horizon/cogwerx-jetson-tx2/wiki


For any questions, please email nvidiaAICitychallenge@gmail.com.