TensorFlow Custom Object Tracking (locally/GCP) — Practical Recipe

Shahar Gino
Oct 31, 2018


A lot has been said and written about this topic.

This page adds my two cents, wrapped as a practical, how-to style cookbook for the complete flow.

Results:

(Video) Custom Object Detection (Lego), TensorFlow framework, SR300 depth camera

Prerequisites:

  1. Install Python
  2. Install TensorFlow
  3. Recommended: create a GCP account, install the gcloud CLI and enable the ML Engine API (required for the GCP training flow in Phase 2)

Steps:

Phase 1: Creating a dataset

  • Create an ‘images’ folder with a sub-folder per class, e.g. ‘images/my_obj_1’, ‘images/my_obj_2’, etc.
    Note: one way to build a dataset is to film a short video of the objects and extract frames from it (e.g. with ffmpeg):
% ffmpeg -i <*.mov> -vf fps=3 img%03d.jpg
(Image) Dataset generation: frames extracted from brief panoramic videos
  • Hand-label the objects in each image, e.g. with LabelImg, FIAT, etc., and save all the PASCAL VOC annotations (XMLs) in an ‘annotations’ folder.
  • Merge all the XML annotations into a single CSV file (edit xml_to_csv.py accordingly; a sketch follows the command) and place it in a ‘data’ folder:
% python xml_to_csv.py
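For reference, a minimal sketch of what such an xml_to_csv.py can look like, assuming PASCAL VOC XMLs as produced by LabelImg (the folder names and output path are illustrative):

import glob
import xml.etree.ElementTree as ET
import pandas as pd

def xml_to_csv(path):
    # Collect one CSV row per annotated object, across all XML files
    rows = []
    for xml_file in glob.glob(path + '/*.xml'):
        root = ET.parse(xml_file).getroot()
        for obj in root.findall('object'):
            bbox = obj.find('bndbox')
            rows.append((root.find('filename').text,
                         int(root.find('size/width').text),
                         int(root.find('size/height').text),
                         obj.find('name').text,
                         int(bbox.find('xmin').text),
                         int(bbox.find('ymin').text),
                         int(bbox.find('xmax').text),
                         int(bbox.find('ymax').text)))
    columns = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']
    return pd.DataFrame(rows, columns=columns)

if __name__ == '__main__':
    xml_to_csv('annotations').to_csv('data/train_labels.csv', index=False)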
  • Convert the CSV annotations into the TFRecord format (*.record) that TensorFlow expects, and place the output in the ‘data’ folder as well; run once per split, i.e. for both train_labels.csv and test_labels.csv:
% python generate_tfrecord.py --csv_input=data/test_labels.csv --output_path=data/test.record
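Note that generate_tfrecord.py must be edited so that its class-to-id mapping matches the classes declaration of the next step; a sketch of the relevant function, assuming the two example classes used throughout this page:

def class_text_to_int(row_label):
    # Keep this mapping in sync with data/object-detection.pbtxt
    if row_label == 'my_obj_1':
        return 1
    elif row_label == 'my_obj_2':
        return 2
    else:
        return None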
  • Add a classes declaration file at ‘data/object-detection.pbtxt’, e.g.:
item {
  id: 1
  name: 'my_obj_1'
}
item {
  id: 2
  name: 'my_obj_2'
}
...

Phase 2: Training
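
Before launching a run, the pipeline config (ssd_mobilenet_v1_pets.config in this flow) needs a few dataset-specific edits; a sketch of the relevant fields, assuming the two-class setup above (paths are illustrative):

model {
  ssd {
    num_classes: 2  # must match data/object-detection.pbtxt
    ...
  }
}
train_config {
  fine_tune_checkpoint: "pre-trained/model.ckpt"  # pre-trained checkpoint to fine-tune from
  ...
}
train_input_reader {
  tf_record_input_reader { input_path: "data/train.record" }
  label_map_path: "data/object-detection.pbtxt"
}
eval_input_reader {
  tf_record_input_reader { input_path: "data/test.record" }
  label_map_path: "data/object-detection.pbtxt"
}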

Locally:
% cd tensorflow/models/research
% python object_detection/model_main.py --pipeline_config_path=~/Projects/lego_dataset/training/ssd_mobilenet_v1_pets.config --model_dir=~/Projects/lego_dataset/training --num_train_steps=50000 --sample_1_of_n_eval_examples=1 --alsologtostderr
GCP:
% export YOUR_GCS_BUCKET=<my GCP bucket name>
% export GOOGLE_APPLICATION_CREDENTIALS={path to the credential json file you got after enabling the ML Engine API}
% cd tensorflow/models/research
% gcloud config set project [selected-project-id]
% gcloud ml-engine jobs submit training object_detection_`date +%m_%d_%Y_%H_%M_%S` --runtime-version 1.9 --job-dir=gs://${YOUR_GCS_BUCKET}/training --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz,/tmp/pycocotools/pycocotools-2.0.tar.gz --module-name object_detection.model_main --region us-central1 --config training/cloud.yml -- --model_dir=gs://${YOUR_GCS_BUCKET}/training --pipeline_config_path=gs://${YOUR_GCS_BUCKET}/training/ssd_mobilenet_v1_pets.config
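Two notes on the command above: the --packages tarballs are built beforehand from tensorflow/models/research (python setup.py sdist there and under slim/, plus the pycocotools package, per the Object Detection API docs), and training/cloud.yml describes the ML Engine machine fleet. A minimal sketch of such a cloud.yml (the tiers and worker counts are this example's assumption):

trainingInput:
  scaleTier: CUSTOM
  masterType: standard_gpu
  workerCount: 5
  workerType: standard_gpu
  parameterServerCount: 3
  parameterServerType: standard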
  • Track the training progress:
Locally:
% tensorboard --logdir=training
GCP:
% tensorboard --logdir=gs://${YOUR_GCS_BUCKET}

Phase 3: Export

  • Export the trained model into a frozen TensorFlow inference graph:
% cd tensorflow/models/research
% python object_detection/export_inference_graph.py --input_type=image_tensor --pipeline_config_path={path to pipeline config file} --trained_checkpoint_prefix={path to model.ckpt* files} --output_directory={path to folder that will be used for export}
  • For example (GCP):
% python object_detection/export_inference_graph.py --input_type=image_tensor --pipeline_config_path=gs://${YOUR_GCS_BUCKET}/training/ssd_mobilenet_v1_pets.config --trained_checkpoint_prefix=gs://${YOUR_GCS_BUCKET}/training/model.ckpt-326670 --output_directory=~/Projects/lego_dataset/data
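The export step writes, among other artifacts, a frozen_inference_graph.pb into the output directory. A minimal TF 1.x sketch for sanity-checking it on a single image (the image path and the 0.5 score threshold are illustrative):

import numpy as np
import tensorflow as tf
from PIL import Image

# Load the exported frozen graph
graph_def = tf.GraphDef()
with tf.gfile.GFile('data/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    # image_tensor expects a batch of uint8 images: [1, height, width, 3]
    image = np.expand_dims(np.array(Image.open('test.jpg')), axis=0)
    boxes, scores, classes = sess.run(
        ['detection_boxes:0', 'detection_scores:0', 'detection_classes:0'],
        feed_dict={'image_tensor:0': image})
    for box, score, cls in zip(boxes[0], scores[0], classes[0]):
        if score > 0.5:
            print('class %d, score %.2f, box %s' % (int(cls), score, box))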
