Computer Science - The City College of New York
CSC I6716 - 3D Computer Vision - Spring 2012
Assignment 3: Camera Models and Camera Calibration (due March 22, Thursday, before class)
(Questions marked with * are optional, for extra credit.)
Note: All written answers must be submitted in hard copy. Please include your names and IDs (last four digits) in your submissions.
1. (Camera Models - 20 points) Prove that the vector from the viewpoint of a pinhole camera to the vanishing point (in the image plane) of a set of 3D parallel lines is parallel to those lines.
Hint: You can use either geometric reasoning or algebraic calculation.
If you choose geometric reasoning, you can use the fact that the projection of a 3D line in space is the intersection of its interpretation plane with the image plane. Here the interpretation plane (IP) is the plane passing through the 3D line and the center of projection (the viewpoint) of the camera. Also, the interpretation planes of two parallel lines intersect in a line passing through the viewpoint, and that intersection line is parallel to the parallel lines.
If you choose algebraic calculation, you may use the parametric representation of a 3D line: P = P0 + tV, where P = (X, Y, Z)^T is any point on the line (here ^T denotes transpose), P0 = (X0, Y0, Z0)^T is a given fixed point on the line, the vector V = (a, b, c)^T represents the direction of the line, and t is a scalar parameter that controls the signed distance between P and P0.
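As a starting point for the algebraic route, here is a sketch of the setup (assuming the standard perspective-projection equations x = f X/Z, y = f Y/Z; adjust the signs to the convention used in the lecture notes). The image of a point P(t) = P0 + tV on the line is

\[
x(t) = f\,\frac{X_0 + t\,a}{Z_0 + t\,c}, \qquad
y(t) = f\,\frac{Y_0 + t\,b}{Z_0 + t\,c},
\]

and the vanishing point is the limit of (x(t), y(t)) as t goes to infinity. Comparing the ray from the viewpoint through that limit point with the direction V = (a, b, c)^T completes the argument.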
2. (Camera Models - 20 points) Show that the relation between any image point (xim, yim)^T of a plane (written as a point in projective space) and its corresponding point (Xw, Yw, Zw)^T on the plane in 3D space can be represented by a 3x3 matrix. You should start from the general form of the camera model, (x1, x2, x3)^T = Mint Mext (Xw, Yw, Zw, 1)^T, in which the image center (ox, oy), the focal length f, the scaling factors (sx and sy), the rotation matrix R, and the translation vector T are all unknown. Note that in the course slides and the lecture notes I used a simplified model of the perspective projection, assuming ox and oy are known and sx = sy = 1, and only discussed some special cases, so you cannot directly copy those equations. Instead, you should use the general form of the projective matrix and the general form of a plane, nx Xw + ny Yw + nz Zw = d.
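For reference, one common way to write the general matrices is sketched below (the signs of the f/sx and f/sy entries depend on the axis conventions in the lecture notes, so treat the exact form as an assumption rather than the required one):

\[
M_{\mathrm{int}} =
\begin{bmatrix}
-f/s_x & 0 & o_x\\
0 & -f/s_y & o_y\\
0 & 0 & 1
\end{bmatrix},
\qquad
M_{\mathrm{ext}} =
\begin{bmatrix}
r_{11} & r_{12} & r_{13} & T_x\\
r_{21} & r_{22} & r_{23} & T_y\\
r_{31} & r_{32} & r_{33} & T_z
\end{bmatrix}.
\]

Using the plane equation nx Xw + ny Yw + nz Zw = d to eliminate one of the three world coordinates (equivalently, to parameterize points on the plane by two coordinates) is what collapses the 3x4 projection matrix into a 3x3 one.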
3. (Calibration - 20 points) Prove the following theorem (the orthocenter theorem) by geometric arguments: Let T be the triangle on the image plane defined by the three vanishing points of three mutually orthogonal sets of parallel lines in space. Then the image center is the orthocenter of the triangle T (i.e., the common intersection of the three altitudes).
(1) Basic proof: use the result of Question 1, assuming the aspect ratio of the camera is 1. (20 points)
(2) If you do not know the focal length of the camera, can you still find the image center (together with the focal length) using the orthocenter theorem? Show why or why not. (5 points)
(3) If you do not know the aspect ratio and the focal length of the camera, can you still find the image center using the orthocenter theorem? Show why or why not. (5 points)
4. Proof-reading (20 points). While you are reading the lecture notes on camera models, please use the Track Changes feature of Word to mark corrections for typos, unclear sentences, etc. You may also add comments, using the Comment tool, to help me further improve the writing of the document.
5. Proof-reading (20 points). While you are reading the lecture notes on camera calibration, please use the Track Changes feature of Word to mark corrections for typos, unclear sentences, etc. You may also add comments, using the Comment tool, to help me further improve the writing of the document.
6. *Calibration Programming Exercises (20 extra points): Implement the direct parameter calibration method in order to (1) learn how to use SVD to solve systems of linear equations; (2) understand the physical constraints of the camera parameters; and (3) understand important issues related to calibration, such as calibration pattern design, point localization accuracy, and robustness of the algorithms. Since calibrating a real camera involves a lot of work in calibration pattern design, image processing, and error analysis, as well as in solving the equations, we will mainly use simulated data to understand the algorithms. As a by-product, we will also learn how to generate 2D images from 3D models using a “virtual” pinhole camera. The exercise consists of the following steps (a)-(d).
(a) Calibration pattern “design”. Generate data for a “virtual” 3D cube similar to the one shown in Fig. 1 of the lecture notes on camera calibration. For example, you can hypothesize a 1x1x1 m^3 cube and pick up the coordinates of 3D points on the corners of each square in your world coordinate system. Make sure that your data are sufficient for the following calibration procedures. In order to check the correctness of your data, draw your cube (with the control points marked) using Matlab (or whatever tools you are selecting). I have provided a piece of starting code for you to use.
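A minimal Matlab sketch of this step (my own illustration, not the provided starting code; the face choice, grid spacing, and variable names such as Pw are assumptions):

% Generate control points on two visible faces of a 1x1x1 m cube
% (a 4x4 grid of square corners on each face -> 16 + 16 points).
step = 0.2;                          % grid spacing in meters (assumed)
[u, v] = meshgrid(0.2:step:0.8);     % 4x4 grid on each face
u = u(:);  v = v(:);
faceXZ = [u, zeros(size(u)), v];     % points on the Yw = 0 face
faceYZ = [zeros(size(u)), u, v];     % points on the Xw = 0 face
Pw = [faceXZ; faceYZ];               % 32x3 world points (meters)

% Draw the cube edges and the control points to check the data.
figure; hold on; axis equal; grid on;
plot3(Pw(:,1), Pw(:,2), Pw(:,3), 'r.', 'MarkerSize', 12);
C = [0 0 0; 1 0 0; 1 1 0; 0 1 0; 0 0 1; 1 0 1; 1 1 1; 0 1 1];  % cube corners
E = [1 2; 2 3; 3 4; 4 1; 5 6; 6 7; 7 8; 8 5; 1 5; 2 6; 3 7; 4 8];
for k = 1:size(E,1)
    plot3(C(E(k,:),1), C(E(k,:),2), C(E(k,:),3), 'b-');
end
xlabel('Xw (m)'); ylabel('Yw (m)'); zlabel('Zw (m)');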
(b) “Virtual” camera and images. Design a “virtual” camera with known intrinsic parameters, including the focal length f, the image center (ox, oy), and the pixel size (sx, sy). As an example, assume that the focal length is f = 16 mm, the image frame is 512x512 (pixels) with (ox, oy) = (256, 256), and the size of the image sensor inside the camera is 8.8 mm x 6.6 mm (so the pixel size is (sx, sy) = (8.8/512, 6.6/512) mm). Capture an image of your “virtual” calibration cube with your virtual camera in a given pose (R and T). For example, you can take the picture of the cube from 4 meters away and with a tilt angle of 30 degrees. Use three rotation angles alpha, beta, gamma to construct the rotation matrix R (refer to the lecture notes on camera models). You may need to try different poses in order to get a suitable image of your calibration cube.
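A minimal Matlab sketch of this step (the rotation order Rz*Ry*Rx, the pose, and the sign convention x = f X/Z are my assumptions; adjust them to match the lecture notes):

% Intrinsics from the example above (millimeters and pixels).
f  = 16;                         % focal length in mm
sx = 8.8/512;  sy = 6.6/512;     % pixel size in mm
ox = 256;  oy = 256;             % image center in pixels

% Extrinsics: rotation from three angles and a translation (assumed pose,
% chosen so the cube falls inside the 512x512 frame).
alpha = 30; beta = 0; gamma = 0;               % degrees
Rx = [1 0 0; 0 cosd(alpha) -sind(alpha); 0 sind(alpha) cosd(alpha)];
Ry = [cosd(beta) 0 sind(beta); 0 1 0; -sind(beta) 0 cosd(beta)];
Rz = [cosd(gamma) -sind(gamma) 0; sind(gamma) cosd(gamma) 0; 0 0 1];
R = Rz * Ry * Rx;
T = [-500; -200; 4000];                        % cube about 4 m in front (mm)

% Project the world points Pw from step (a) (converted to mm) to pixels.
Pc = R * (1000 * Pw') + T * ones(1, size(Pw,1));   % 3xN camera coordinates
x  = f * Pc(1,:) ./ Pc(3,:);    % image-plane coords in mm (flip sign if your
y  = f * Pc(2,:) ./ Pc(3,:);    % notes use x = -f X/Z)
xim = x / sx + ox;              % pixel coordinates
yim = y / sy + oy;
figure; plot(xim, yim, 'r.'); axis([0 512 0 512]); axis ij; title('virtual image');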
(c) Direct calibration method: Estimate the intrinsic parameters (fx, fy, the aspect ratio a, the image center) and the extrinsic parameters (R, T, and further alpha, beta, gamma). Use SVD to solve the homogeneous system and the least-squares problem, and to enforce the orthogonality constraint on the estimate of R. Use the accurately simulated data (both the 3D world coordinates and the 2D image coordinates) to do the calibration, and compare the results with the “ground truth” data (which are given in steps (a) and (b)). Remember you are practicing camera calibration, so you should pretend that you do not know the camera parameters (i.e., you cannot use the ground-truth data in the calibration process). However, in the direct calibration method, you may assume knowledge of the image center (when solving the homogeneous system for the extrinsic parameters) and of the aspect ratio (when using the orthocenter theorem method to find the image center). Study whether the unknown aspect ratio matters in finding the image center, and how the initial estimate of the image center affects the estimation of the remaining parameters. Give a solution to these problems, if any.
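A minimal Matlab sketch of the homogeneous step of the direct method (assuming the formulation x_i (R2·Pw_i + Ty) = a * y_i (R1·Pw_i + Tx), with image coordinates already referred to the known image center; the remaining steps, i.e. fixing the sign, estimating Tz and fx from a second least-squares system, and re-orthogonalizing R with another SVD, should follow the lecture notes):

% Inputs: Pw (Nx3 world points from step (a)) and pixel coords (xim, yim)
% from step (b); ox, oy, sx, sy are the assumed-known intrinsics.
x = (xim - ox) * sx;          % image coordinates relative to the image
y = (yim - oy) * sy;          % center, in mm
Xw = 1000 * Pw(:,1);  Yw = 1000 * Pw(:,2);  Zw = 1000 * Pw(:,3);
x = x(:);  y = y(:);

% One homogeneous equation per point:  A * v = 0,  where
% v = [r21 r22 r23 Ty  a*r11 a*r12 a*r13 a*Tx]'  and a = fx/fy.
A = [x.*Xw, x.*Yw, x.*Zw, x, -y.*Xw, -y.*Yw, -y.*Zw, -y];
[~, ~, V] = svd(A);
v = V(:, end);                % null-space direction (up to scale and sign)

scale = norm(v(1:3));         % |scale|, since (r21,r22,r23) is a unit vector
a_est = norm(v(5:7)) / scale;             % estimated aspect ratio
R2 = (v(1:3) / scale)';                   % second row of R (sign unresolved)
R1 = (v(5:7) / (a_est * scale))';         % first row of R (up to sign)
R3 = cross(R1, R2);                       % third row from orthogonality
% Next: pick the global sign so that all camera-frame depths are positive,
% enforce orthogonality of [R1; R2; R3] via SVD, then solve a small
% least-squares system for Tz and fx.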
(d) Accuracy issues. Add some random noise to the simulated data and run the calibration algorithms again. See how the “design tolerance” of the calibration target and the localization errors of the 2D image points affect the calibration accuracy. For example, you can add 0.1 mm random error to the 3D points and 0.5 pixel random error to the 2D points. Also analyze how sensitive the orthocenter method is to the extrinsic parameters in imaging the three sets of orthogonal parallel lines.
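A minimal sketch of the noise injection (the error magnitudes are the ones suggested above; variable names carry over from the earlier sketches):

% Perturb the 3D control points by ~0.1 mm and the 2D points by ~0.5 pixel,
% then rerun the calibration of step (c) on the noisy data and compare the
% recovered parameters against the ground-truth R, T, and f.
Pw_noisy  = Pw  + 1e-4 * randn(size(Pw));    % world points are in meters
xim_noisy = xim + 0.5  * randn(size(xim));
yim_noisy = yim + 0.5  * randn(size(yim));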
In all of the steps, present your results using tables, graphs, or both.
Figure: A 2D image of the “3D cube” with 16+16 control points.