Modeling Camera Effects

[Introductory figure]

Modeling Camera Effects to Improve Deep Vision for Real and Synthetic Data

Alexandra Carlson, Katie Skinner, Ram Vasudevan, Matt Johnson-Roberson

Paper

https://arxiv.org/abs/1803.07721

Abstract

Recent work has focused on generating synthetic imagery and augmenting real imagery to increase the size and variability of training data for learning visual tasks in urban scenes. This includes increasing the occurrence of occlusions or varying environmental and weather effects. However, few have addressed modeling the variation in the sensor domain. Unfortunately, varying sensor effects can degrade performance and generalizability of results for visual tasks trained on human annotated datasets. This paper proposes an efficient, automated physically-based augmentation pipeline to vary sensor effects – specifically, chromatic aberration, blur, exposure, noise, and color cast – across both real and synthetic imagery. In particular, this paper illustrates that augmenting training datasets with the proposed pipeline improves the robustness and generalizability of object detection on a variety of benchmark vehicle datasets.
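To make the five sensor effects concrete, the sketch below applies each of them to an RGB image with NumPy. This is a minimal illustration, not the authors' implementation: the function name, the parameter ranges, and the simple box-filter blur and channel-shift aberration are all assumptions chosen for brevity, whereas the paper's pipeline is physically based.

```python
import numpy as np

def augment_sensor_effects(img, seed=None):
    """Apply illustrative sensor-effect augmentations to an RGB image.

    img: float array in [0, 1] with shape (H, W, 3).
    All parameter ranges below are illustrative, not the paper's values.
    """
    rng = np.random.default_rng(seed)
    out = img.astype(np.float64).copy()

    # Chromatic aberration: shift the R and B channels horizontally
    # in opposite directions by a small random offset.
    dx = int(rng.integers(-2, 3))
    out[..., 0] = np.roll(out[..., 0], dx, axis=1)
    out[..., 2] = np.roll(out[..., 2], -dx, axis=1)

    # Blur: blend with a 3x3 box-filtered copy (cheap stand-in for
    # a physically based blur kernel).
    blurred = sum(
        np.roll(np.roll(out, i, axis=0), j, axis=1)
        for i in (-1, 0, 1) for j in (-1, 0, 1)
    ) / 9.0
    alpha = rng.uniform(0.0, 1.0)
    out = (1.0 - alpha) * out + alpha * blurred

    # Exposure: gamma-like brightness change.
    out = out ** rng.uniform(0.7, 1.4)

    # Noise: additive Gaussian sensor noise.
    out = out + rng.normal(0.0, 0.02, size=out.shape)

    # Color cast: random per-channel gain.
    out = out * rng.uniform(0.9, 1.1, size=3)

    return np.clip(out, 0.0, 1.0)
```

In a training loop, such a function would be applied on the fly to each real or rendered image before it is fed to the detector, so the network sees many sensor-domain variants of every scene.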