Linear Determination of a Camera's Intrinsic Parameters Using Two Intersecting Circles

A linear approach for determining a camera's intrinsic parameters using two intersecting circles is proposed. Two intersecting coplanar circles have four intersection points in the projective plane: two real points and the two circular points. In the image plane, the images of these four points form a complete quadrangle, whose diagonal triangle is a self-polar triangle for the projected curves of both circles. Each vertex of the self-polar triangle spans the null space of a degenerate conic of the pencil through the four image points. By solving for the three vertices of the self-polar triangle using the image coordinates of the two real intersection points, the degenerate conics can be obtained, and the images of the two circular points are then computed from the intersection points of the degenerate conics. Given the images of the circular points from three images of the same planar pattern in different orientations, the intrinsic parameters can be determined linearly.


Introduction
In computer vision, camera calibration is a basic requirement for recovering three-dimensional (3D) geometric information from two-dimensional (2D) images, and it is an essential step in many visual tasks. Camera calibration techniques are used in a variety of scientific and technical fields, for example in intelligent transportation systems (ITSs) for video processing and analysis and for video-based road traffic monitoring [1-3]. Camera calibration is used for vehicle speed estimation [1], target classification and tracking [2], and recovering momentarily missed vehicles or distinguishing obscured vehicles [3].
Due to the importance of camera calibration, different approaches have been proposed for different scenarios. Traditional camera calibration approaches provide high precision but rely on elaborate calibration objects, making them difficult to operate. To solve this problem, Z. Zhang [4] proposed a calibration method that uses a planar pattern in place of the traditional calibration object. This method is simple, convenient, and low cost, and while it provides high precision it also requires the physical coordinates of the model points to be determined accurately. C. Yang et al. [5] further developed the method of Z. Zhang [4] by using corresponding conic curves between a planar pattern and its image to calibrate a camera, rather than point correspondences. Because a conic is a more compact and more global primitive, the calibration stability can be further improved. Calibration approaches that use curves have therefore been studied extensively [6-8]. A new, easy approach to camera calibration based on circular points has also been proposed [9]. This approach uses a circle and a pencil of lines passing through its centre as a calibration pattern, introducing circular points into camera calibration. Circular points now form part of the theoretical basis of camera self-calibration [10].
A circle is a special type of conic, and all circles in a plane pass through the plane's circular points. One widely studied camera calibration method uses a circle as the calibration pattern in combination with the theory of circular points [11-28]. Y. Wu et al. [12] proposed a calibration method using parallel circles, exploiting the fact that the intersections of parallel circles include the circular points; the calibration task is completed by computing the intersections of the parallel circles in the images. C. Han et al. [13] discuss the positional relationships of any two coplanar circles and how they may be used to compute the intersections of the conics in the image plane to obtain the image of the circular points. Camera calibration using circles as calibration patterns has been widely researched, and it is well suited to calibrating cameras that monitor roundabout traffic scenes. Reference [14] uses a single circle as a calibration pattern to compute a camera's intrinsic parameters. References [15-23] use one or multiple pairs of coplanar concentric circles for calibration. References [24-28] use two or more arbitrary coplanar non-concentric circles for camera calibration. The above-mentioned methods require multiple concentric or coplanar circle patterns. Building on this research, we use two intersecting circles as a calibration pattern to calculate the intrinsic camera parameters. Two intersecting circles meet at four points in the projective plane, and the images of these four points form a complete quadrangle in the image plane. Using the theory of projective geometry, the image coordinates of the two circular points can be solved for. By solving for the images of the circular points in three images of the same planar pattern taken from different orientations, the intrinsic parameters can be determined linearly. Our experimental results show that this approach is precise and effective.
The rest of this paper is organized as follows. Section 2 describes the pinhole camera model and the proposed camera calibration method using two intersecting circles. Section 3 presents the experimental results. Section 4 provides concluding remarks.

Notation and Basic Equations
In this section, we describe the pinhole camera model. We then describe the method for computing the image of the circular points from the two intersecting circles. The mapping $(X, Y, Z)^T \mapsto (fX/Z, fY/Z)^T$ describes the central projection from world to image coordinates. If the world and image points are represented by homogeneous vectors, the central projection is expressed simply as a linear mapping between their homogeneous coordinates, as follows:
$$\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \mapsto \begin{pmatrix} fX \\ fY \\ Z \end{pmatrix} = \begin{bmatrix} f & & & 0 \\ & f & & 0 \\ & & 1 & 0 \end{bmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} . \qquad (1)$$

Pinhole Camera Model
This can be written compactly as $m = PM$, which defines the camera matrix $P$ for the pinhole model of central projection. If one considers the principal point offset, equation (1) may be expressed in homogeneous coordinates as
$$\lambda m = K[\,I \mid 0\,]M , \quad K = \begin{bmatrix} f & 0 & p_x \\ 0 & f & p_y \\ 0 & 0 & 1 \end{bmatrix},$$
where $(p_x, p_y)^T$ are the coordinates of the principal point; the matrix $K$ is the camera calibration matrix. If one considers the camera rotation and translation, the points in space are expressed in a different Euclidean coordinate frame, known as the 'world coordinate frame'. The two coordinate frames are related through a rotation and a translation.
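The homogeneous central projection with a principal point offset can be sketched numerically as follows. This is an illustrative example only; the focal length, principal point, and the sample world point are invented values, not taken from the paper.

```python
import numpy as np

# Illustrative values only: focal length and principal point offset.
f, px, py = 2.0, 0.5, 0.25

K = np.array([[f, 0, px],
              [0, f, py],
              [0, 0, 1.0]])

# lambda * m = K [I | 0] M for a homogeneous world point M = (X, Y, Z, 1)^T
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
M = np.array([1.0, 2.0, 4.0, 1.0])

m = P @ M
m = m / m[2]          # dehomogenize: (fX/Z + px, fY/Z + py, 1)
```

With these values, `m` comes out as $(2 \cdot 1/4 + 0.5,\ 2 \cdot 2/4 + 0.25,\ 1) = (1, 1.25, 1)$, matching the inhomogeneous mapping above.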
, where C  represents the coordinates of the camera centre in the world coordinate frame and R is a This pinhole camera model assumes that the image coordinates are Euclidean coordinates with equal scales in both axial directions.In the case of CCD cameras, there is the additional possibility of having non-square pixels.Measuring the image coordinates in pixels has the effect of introducing unequal scale factors in each direction.In particular, if the numbers of pixels per unit distance in the image coordinates are x m and y m in the x and y directions, respectively, then the transformation from world coordinates to pixel coordinates is obtained by multiplying K on the left by ( , ,1) x y diag m m .Thus, the general form of the calibration matrix of a CCD camera is: For added generality, we can consider a calibration matrix of the following form: The added parameter s is referred to as the skew parameter.The skew parameter is zero for most normal cameras.However, in certain unusual instances, it can have non-zero values.
The projection relationship between a space point $M$ and its image point $m$ is
$$\lambda m = PM = K[\,R \mid t\,]M ,$$
where $\lambda$ is a non-zero scale factor, $P$ is the camera projection matrix, $(R, t)$ are the extrinsic rotation matrix and translation vector, and $K$ is the camera intrinsic matrix,
where the coordinates of the principal point are $(u_0, v_0)$, $f_u$ and $f_v$ are the scale factors of the $u$ and $v$ axes of the image, respectively, and $s$ is the skew factor.
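The full projection relationship $\lambda m = K[\,R \mid t\,]M$ can be sketched as below. All parameter values here are invented for illustration (identity rotation, unit translation along the optical axis); they are not the paper's settings.

```python
import numpy as np

# Assumed illustrative intrinsics (f_u, f_v, skew s, principal point u0, v0).
fu, fv, s, u0, v0 = 1800.0, 1650.0, 0.0, 320.0, 240.0
K = np.array([[fu,  s, u0],
              [ 0, fv, v0],
              [ 0,  0,  1]])

# Identity rotation and a small translation as the extrinsics (R, t).
R = np.eye(3)
t = np.array([[0.0], [0.0], [1.0]])
P = K @ np.hstack([R, t])           # 3x4 projection matrix

M = np.array([0.1, 0.2, 1.0, 1.0])  # homogeneous world point
m = P @ M
m = m / m[2]                        # (u, v, 1), the scale lambda removed
```

For this point, $[\,R \mid t\,]M = (0.1, 0.2, 2)^T$, so the pixel coordinates are $(410, 405)$.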

Computing the Image of Two Circular Points
Suppose there are two intersecting circles in a plane of the world coordinate system, with two real intersection points $A, B$ (Figure 2). Every circle in the plane passes through its two circular points $I, J$; hence the two intersecting circles have four intersection points in the extended Euclidean plane, denoted by $A, B, I, J$, respectively.
In Figure 3, any conic through the four image points can be described as a linear combination $C_\mu = C_1 + \mu C_2$, as shown in Figure 3. Based on the theory of projective geometry, $\Delta v_1 v_2 v_3$ is the self-polar triangle of the conic $C$ [29]. A member of the pencil degenerates while still passing through the four points, thereby satisfying the following equation:
$$\det(C_1 + \mu C_2) = 0 , \qquad (7)$$
where $C_1, C_2$ are obtained from the image plane. The constraint (7) is a cubic polynomial equation in $\mu$ that has three solutions, denoted by $\mu_i , i = 1, 2, 3$. Thus, the three corresponding degenerate conics $C_{\mu_i}$ can be obtained.
The three degenerate conics are of rank two. Therefore, they can be written as
$$C_{\mu_i} = l_i l_i'^T + l_i' l_i^T , \qquad (8)$$
where $l_i$ and $l_i'$ are three pairs of lines in the image plane, as shown in Figure 3.
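The pencil construction and the cubic constraint (7) can be sketched numerically. The two circles below are an invented example, not the paper's calibration pattern; for two real circles, one degenerate member of the pencil is real (the radical axis paired with the line at infinity) and the other two are complex conjugates.

```python
import numpy as np

# Two intersecting circles as 3x3 symmetric conic matrices (invented
# example): (x-1)^2 + y^2 = 4 and (x+1)^2 + y^2 = 4.
C1 = np.array([[ 1.0, 0.0, -1.0],
               [ 0.0, 1.0,  0.0],
               [-1.0, 0.0, -3.0]])
C2 = np.array([[ 1.0, 0.0,  1.0],
               [ 0.0, 1.0,  0.0],
               [ 1.0, 0.0, -3.0]])

# det(C1 + mu*C2) is a cubic in mu; recover its coefficients exactly by
# interpolating the determinant at four sample values of mu.
mus = np.array([0.0, 1.0, 2.0, 3.0])
vals = [np.linalg.det(C1 + mu * C2) for mu in mus]
coeffs = np.polyfit(mus, vals, 3)
roots = np.roots(coeffs)          # the three mu_i of equation (7)

# For this pair of circles the real root is mu = -1, whose degenerate
# member C1 - C2 factors into the radical axis and the line at infinity.
mu_real = roots[np.isclose(roots.imag, 0.0)].real[0]
```

Each $C_{\mu_i}$ has rank two, which can be checked by confirming that its smallest singular value vanishes.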

Proposition. If $C_{\mu_i}$ is obtained according to (8), the intersection point $v_i$ of the pair of lines $l_i$ and $l_i'$ can be computed from
$$C^*_{\mu_i} \sim v_i v_i^T , \qquad (9)$$
where $C^*_{\mu_i}$ denotes the adjoint matrix of $C_{\mu_i}$.

Proof. Let $C^*_{\mu_i}$ be the adjoint matrix of $C_{\mu_i}$. From (8), we know that
$$C_{\mu_i} C^*_{\mu_i} = \det(C_{\mu_i}) I = 0 ,$$
so every column of $C^*_{\mu_i}$ lies in the null space of $C_{\mu_i}$. In other words, $v_i$ spans the null space of $C_{\mu_i}$, since $l_i^T v_i = 0$ and $l_i'^T v_i = 0$. Because $I, J$ form a pair of conjugate points, under the perspective transformation $m_I, m_J$ also form a pair of conjugate points. Hence, the constraints of the two equations in (13) are identical.
A conic can be represented in the form of a symmetric matrix. Let
$$\omega = \begin{bmatrix} \omega_1 & \omega_2 & \omega_4 \\ \omega_2 & \omega_3 & \omega_5 \\ \omega_4 & \omega_5 & \omega_6 \end{bmatrix}.$$
Define a six-dimensional vector
$$c = (\omega_1, \omega_2, \omega_3, \omega_4, \omega_5, \omega_6)^T .$$
From (13), writing $m_I = (x, y, z)^T$, the constraint becomes
$$A c = 0 , \qquad (16)$$
with $A = (x^2, 2xy, y^2, 2xz, 2yz, z^2)$. Because $A$ is a complex vector and $c$ is a real vector, equation (16) is equivalent to the following two homogeneous equations:
$$\begin{bmatrix} \operatorname{Re}(A) \\ \operatorname{Im}(A) \end{bmatrix} c = 0 . \qquad (17)$$
If $n$ pictures of the model plane are taken in different orientations, stacking the $n$ constraints of the form (17) gives
$$V c = 0 , \qquad (18)$$
where $V$ is a $2n \times 6$ matrix. If $n \geq 3$, $c$ can be uniquely determined up to a scale factor. The solution of (18) is well known to be the eigenvector of $V^T V$ associated with the smallest eigenvalue. Once $c$ is obtained, we can form the symmetric positive definite matrix $\omega$. Then, $K^{-1}$ can be computed using the Cholesky factorization [10]. Summarizing the above discussion, the camera calibration algorithm proposed in this paper is outlined as follows:
Step 1. Make a calibration pattern, as in Figure 2.
Step 2. Take three or more images of the pattern in different orientations by moving either the pattern or the camera.
Step 3. For each image, detect the feature points [30], extract the edge pixels of the two intersecting circles from the images, and fit them with the conics $C_1$ and $C_2$ by least squares fitting [31].
Step 4. According to (9), solve for the sides of the complete quadrangle.The image of the circular points can be obtained by computing the intersection points of corresponding sides.
Step 5. Solve for the image of the circular points in the three images of the same planar pattern in different orientations.The camera's intrinsic parameters can be linearly determined, as shown in Section 2.3.
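The final stage of the algorithm, turning the images of the circular points into $K$, can be sketched as follows. This is not the authors' implementation; the function names `omega_row` and `calibrate` are our own, and the sketch assumes one circular-point image per view is supplied (its conjugate contributes the identical constraint, as noted above).

```python
import numpy as np

def omega_row(m):
    """Constraint row A such that A c = 0 encodes m^T * omega * m = 0,
    with c = (w1, w2, w3, w4, w5, w6) the entries of symmetric omega."""
    x, y, z = m
    return np.array([x * x, 2 * x * y, y * y, 2 * x * z, 2 * y * z, z * z])

def calibrate(circular_points):
    """circular_points: one complex homogeneous circular-point image
    per view. Returns the intrinsic matrix K (normalized, K[2,2] = 1)."""
    rows = []
    for m in circular_points:
        a = omega_row(m)
        rows.append(a.real)   # Re(m^T omega m) = 0
        rows.append(a.imag)   # Im(m^T omega m) = 0
    V = np.array(rows)
    # c is the right singular vector of V for the smallest singular value
    # (equivalently, the eigenvector of V^T V of smallest eigenvalue).
    _, _, Vt = np.linalg.svd(V)
    c = Vt[-1]
    w1, w2, w3, w4, w5, w6 = c
    omega = np.array([[w1, w2, w4],
                      [w2, w3, w5],
                      [w4, w5, w6]])
    if omega[0, 0] < 0:       # fix the overall sign so omega is pos. def.
        omega = -omega
    # omega = K^{-T} K^{-1}; Cholesky omega = L L^T gives K^{-1} = L^T.
    L = np.linalg.cholesky(omega)
    K = np.linalg.inv(L.T)
    return K / K[2, 2]
```

With noise-free synthetic circular points from a known camera, `calibrate` recovers that camera's intrinsic matrix up to numerical precision.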

Experiments
We performed a number of simulation and real-image experiments to verify that our approach is effective and practical.

Simulation Experiments
For the computer simulations, the camera's intrinsic parameters were set with a focal scale factor of 1800. Fifty points were distributed uniformly on every circle. Gaussian noise with a mean of 0 and a standard deviation σ was added to the projected image points. The noise level ranged from 0 to 6.8 pixels.
For each noise level, we performed 100 independent trials and report the average values. Under the same conditions, several previous approaches [4,9,12] were also simulated. The absolute errors for the five intrinsic parameters under different noise levels are shown in Figure 4. We can see from Figure 4 that the calibration results are comparable and that the errors grow almost linearly with the noise level. Even at high noise levels, the proposed approach remained robust, so good camera calibration results were obtained.
The second set of simulation experiments investigated performance with respect to the number of images. The settings for the first three images were the same as in the first experiment. For the fourth image, we randomly selected a rotation axis from a uniform sphere, with a rotation angle of 30 degrees. We varied the number of images from 3 to 10. For each image, Gaussian noise with a mean of 0 and a standard deviation of 1 pixel was added to the projected image points. Fifty trials with independent plane orientations were conducted. Several previous approaches [4,9,12] were also simulated under the same conditions. The results are shown in Figure 5. We can see that: (i) the errors decreased when more images were used, and (ii) the errors decreased significantly as the number of images increased from three to eight.

Experiments with Real Images
A series of real experiments verified our method. The real images were taken with a CCD digital camera at a resolution of 640×480. We used a calibration object consisting of two CDs, as shown in Figure 6(a). We captured three or more images of the template in different orientations by moving either the plane or the camera, and then selected the three best images as the experimental pictures. The edges were extracted using Canny's edge detector (Figure 6(b)), and the ellipses were fitted using a least squares ellipse-fitting algorithm (Figure 6(c)) (Step 3 of our algorithm). Once the equations of the two ellipses were obtained, the four intersection points were computed from the equations of the curves. For comparison, we also applied the approaches of references [4,9,12]. We took 10 images of each calibration object; some of the sample images are shown in Figures 7(a)-9(a). Stereo vision experiments and 3D reconstructions of the scenes were used to verify that the intrinsic parameters calibrated with the proposed approach are reasonable. Taking two pictures with the previously calibrated camera (Figures 10(a) and 10(b)), we found 132 matching points (indicated by crosses: 74 on the left and 58 on the right). Using the structure-from-motion method of Z. Zhang [32], the pattern was reconstructed with the calibrated intrinsic parameters presented above. The reconstructed patterns are shown in Figure 11, in which the reconstructed points are coplanar on each side. The results demonstrate that the intrinsic parameters of the calibrated camera are accurate and precise.
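The least squares ellipse-fitting step can be sketched with a generic direct conic fit, shown below on noise-free synthetic points. This is a plain SVD-based conic fit, not necessarily the specific algorithm of reference [31], and the sample circle is invented for illustration.

```python
import numpy as np

def fit_conic(pts):
    """Direct least squares conic fit: minimize ||D c|| subject to
    ||c|| = 1, where each row of D is (x^2, xy, y^2, x, y, 1)."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, f = Vt[-1]
    # Symmetric 3x3 conic matrix: p^T C p = 0 for homogeneous p = (x, y, 1).
    return np.array([[a,     b / 2, d / 2],
                     [b / 2, c,     e / 2],
                     [d / 2, e / 2, f    ]])

# Noise-free sample points on the circle x^2 + y^2 = 4.
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
C = fit_conic(np.column_stack([2.0 * np.cos(t), 2.0 * np.sin(t)]))
```

Any point on the sampled curve then satisfies $p^T C p \approx 0$, which is the form in which the two fitted ellipses enter the pencil computation.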
Finally, we used the intrinsic parameters shown in Table 1 to reconstruct the points in Figure 10. Using the intrinsic parameters of the four approaches, we computed the angle between the two reconstructed orthogonal sections on the left and right sides (Figure 10).

            Our approach   Zhang [4]   Meng [9]   Wu [12]
Angle (°)   89.67          89.44       89.08      89.21

Table 2. Angles of the two planes
From Table 2, we can see that all of the results are close to the ground truth of 90°. In Figure 10, we selected six rows of points.
After obtaining the reconstructed points, we fitted lines to the six rows of points using the least squares method, treated them as reconstructed parallel lines, computed the angles between every pair of them, and averaged the results (Table 3).
            Our approach   Zhang [4]   Meng [9]   Wu [12]
Angle (°)   0.000220       0.000301    0.000259   0.000437

Table 3. The angles between any two of the reconstructed parallel lines

All of these are close to the ground truth of 0°. These results validate the proposed approach.
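The line-fitting and angle computation used for Table 3 can be sketched as follows. The two rows of points are synthetic and nearly parallel, purely for illustration; the fit is a total least squares (principal axis) fit, which may differ in detail from the authors' least squares routine.

```python
import numpy as np

def fit_line_direction(pts):
    """Total least squares line fit: the direction is the principal
    axis of the centred point set."""
    centred = pts - pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(centred)
    return Vt[0]                      # unit direction vector

def angle_between(d1, d2):
    """Unsigned angle in degrees between two line directions."""
    c = abs(np.dot(d1, d2))           # lines have no orientation
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Two synthetic, nearly parallel rows of reconstructed points.
row_a = np.array([[0.0, 0.0], [1.0, 0.01], [2.0, 0.02], [3.0, 0.03]])
row_b = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0], [3.0, 1.0]])
angle = angle_between(fit_line_direction(row_a), fit_line_direction(row_b))
```

Here the two fitted lines have slopes 0.01 and 0, so the angle equals $\arctan(0.01)$ in degrees, about 0.57°; truly parallel reconstructions would give angles near 0°, as in Table 3.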

Conclusion
In this paper, we used the fact that two intersecting circles have four intersection points in the projective plane, together with the theory of projective geometry, to solve for the images of the two circular points, and then linearly calculated the intrinsic parameters. This approach requires no a priori information about the circles; it only requires the camera to take three images of a template plane in a few orientations, from which the intrinsic parameters are obtained linearly. Computer simulations and real data validated the new approach, and the experimental results show that it is precise and effective.

Figure 1. Pinhole camera geometry. C represents the camera centre and is placed at the coordinate origin; o is the principal point. Note that the image plane is placed in front of the camera centre.
If one considers the principal point offset, equation (1) may be expressed in homogeneous coordinates as $\lambda m = K[\,I \mid 0\,]M$.

$R$ is a $3 \times 3$ rotation matrix representing the orientation of the camera coordinate frame. Let $P = KR[\,I \mid -\tilde{C}\,]$. It is often convenient not to make the camera centre explicit; thus, the camera matrix is $P = K[\,R \mid t\,]$, where $t = -R\tilde{C}$.
Here, $f_u = f m_x$ and $f_v = f m_y$ represent the focal length of the camera in terms of pixel dimensions in the $x$ and $y$ directions, respectively. Similarly, $(u_0, v_0)$ is the principal point in terms of pixel dimensions, with coordinates $u_0 = m_x p_x$ and $v_0 = m_y p_y$.
In Figure 3, the points $m_A, m_B$ are the images of the intersections $A, B$; $m_I, m_J$ are the images of the circular points $I, J$; and $C_1, C_2$ are the projected curves of the two intersecting circles $Q_1, Q_2$. These curves still intersect in the image plane, with intersection points denoted by $m_A, m_B$. The conic $C$ passes through the four image points.

Solving for the three vertices $v_i$ of the self-polar triangle from (11), together with the image coordinates of $m_A, m_B$, yields the sides of the complete quadrangle.

Linearly Solving the Camera's Intrinsic Parameters
The circular points lie on the absolute conic $\Omega_\infty$. The image coordinates of the two circular points satisfy the following constraint equations:
$$m_I^T \omega m_I = 0 , \quad m_J^T \omega m_J = 0 , \qquad (13)$$
where $\omega = K^{-T} K^{-1}$ is the image of the absolute conic. In the Cholesky factorization $B = LL^T$, $B$ is an $n \times n$ symmetric positive definite matrix and $L$ is an $n \times n$ lower triangular matrix; $K$ is then obtained as the inverse of $K^{-1}$.
The image resolution was 2000×1600. In the experiments, we used three images, with rotation angles and translation vectors specified with respect to the three coordinate axes.

Figure 4. The absolute errors for the five intrinsic parameters (a) $f_u$, (b) $f_v$, (c) $u_0$, (d) $v_0$, and (e) $s$ for different noise levels.

The ellipses were obtained using a least squares ellipse-fitting algorithm, as shown in Figures 8(b)-9(b). The results of the five approaches are shown in Table 1. From Table 1, one can see that the calibration results obtained using these methods are similar.