This paper presents a novel approach for synthesizing intermediate or virtual viewpoints (VVs) of a 3D scene from a number of known reference viewpoints (RVs). The proposed approach directly estimates the pixel value (and corresponding depth) for each pixel in the VV. This is in contrast to the more traditional two-stage approach of first building a full 3D or 2.5D model of the scene and then synthesizing the desired VV. The advantage of working directly with the target virtual view is that it is potentially less susceptible to the propagation of errors from the depth estimation stage to the interpolation stage.
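To make the distinction concrete, the direct idea can be sketched as a per-pixel depth (disparity) sweep: for each virtual-view pixel, candidate disparities are tested, the one giving the best photo-consistency between the reference samples is kept, and those samples are blended into the output colour. This is a hypothetical toy illustration for two rectified 1D reference views, not the paper's actual algorithm; the function name, the absolute-difference cost, and the mean blend are all assumptions.

```python
import numpy as np

def synthesize_virtual_view(left, right, alpha=0.5, max_disp=8):
    """Toy sketch of direct virtual-view synthesis between two rectified
    1D reference views. For each virtual-view pixel x, a candidate
    disparity d places its samples at x + alpha*d in the left view and
    x - (1-alpha)*d in the right view; the disparity with the best
    photo-consistency wins, and the matched samples are blended.
    (Hypothetical illustration, not the method proposed in the paper.)"""
    w = left.shape[0]
    out = np.zeros(w)
    depth = np.zeros(w, dtype=int)
    for x in range(w):
        best_cost = np.inf
        for d in range(max_disp + 1):
            xl = x + int(round(alpha * d))        # sample position in left view
            xr = x - int(round((1 - alpha) * d))  # sample position in right view
            if xl >= w or xr < 0:
                continue                          # candidate falls outside a view
            cost = abs(left[xl] - right[xr])      # photo-consistency cost
            if cost < best_cost:                  # keep the best-matching depth
                best_cost = cost
                out[x] = 0.5 * (left[xl] + right[xr])
                depth[x] = d
    return out, depth
```

Note that the pixel value and its depth are estimated jointly per virtual-view pixel, so no intermediate full scene model is ever built; a two-stage pipeline would instead first compute a complete depth map and then warp the reference images through it.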
This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of the University of Bristol's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to email@example.com.
By choosing to view this document, you agree to all provisions of the copyright laws protecting it.
Name of Conference: International Conference on Image Processing
Keywords: depth estimation, direct viewpoint synthesis