
AUTOBEAM - AUtomatic real Time system tO test vehicle headlamp BEAMs

OBJECTIVE The goal of this industrial research is to develop an automatic system that exploits image analysis to characterize the photometric and geometric properties of vehicle headlamp beams projected into an optical chamber.

PARTNER This research has been carried out together with SIMPESFAIP SPA (CORGHI Group).

SUMMARY A vehicle's headlamp orientation and the luminous and geometric properties of its beams are strictly regulated by the European Commission for Transportation. To test the headlamps, the test system is first aligned (usually manually) to the vehicle; then a human operator gives the final judgment on the beam-related measurements, derived by looking at the reference points on the rear panel (Fig. 1, right) of the Optical Projection System (OPS) (Fig. 1, left and middle). The outcome of the project is an industrial prototype that performs both alignment and measurements automatically, where the operator's eyes are replaced with CCD cameras and real time image and video analysis algorithms [HP1].

Fig. 1: From left to right: the side and front views of the Optical Projection System (OPS), and the rear panel of the OPS with its reference points.

OUTLINE In the automotive field, vision-based technology (often exploiting 3D sensors) has become popular since its introduction to detect obstacles, reduce braking time or assist parking, just to cite a few examples. Nevertheless, the automatic and accurate testing of automotive equipment is increasingly stirring up industrial research and is stimulating new proposals even in regulation requirements, in order to improve safety standards.
Nowadays, automotive industries and European regulation agencies are interested in developing automated tools for measuring the geometric and photometric properties of car headlamp beams, in order to prevent drivers coming from the opposite direction from being dazzled by dipped headlights, as well as to check that main headlights direct their beam properly. Common practice requires that the test system first be aligned to the vehicle by the operator, who visually checks the optical collimation with some reference points on the front part of the vehicle. Subsequently, the geometric characterization is performed by looking into the Optical Projection System (OPS, Fig. 1, left and middle) and visually checking the displacement of some reference points of the beam profile with respect to the reference points on the rear panel (Fig. 1, right) of the OPS. This yields an operator-dependent alignment, most of the time inaccurate and yielding rough measurements, thus jeopardizing the reliability of the subsequent assessment of the headlamp's beam.
Different techniques and technologies are used to tackle the problem. The testing systems currently available on the market are either not automatic, requiring awkward set-up procedures and the continuous intervention of human operators, or complex and expensive, suitable only for end-of-line installations. Usually, in the latter case, a large number of sensors (photogoniometers, light meters, etc.), often arranged in complex and expensive systems, is necessary to reach the required accuracy. In addition, these systems do not specifically address the geometric aspect of the luminous profile as seen by the human operator.

METHOD Our research work [HP2][HP3] represents the first approach based on automatic video analysis to characterize headlamp beam profiles in an industrial prototype suitable for routine use in garages during periodic car tests. In particular, our automatic real time system exploits image analysis in 3D to perform accurate alignment and in 2D to measure the geometric and photometric regulation parameters of both driving and passing beam headlamps. The system is made of two subunits, the alignment unit and the beam characterization unit, which act sequentially. First, an algorithm based on a stereo camera pair recovers the 3D alignment parameters while the vehicle approaches the OPS and stops at about 1 m. These parameters are then passed to the control engine that aligns the OPS. A camera sensor looks toward the panel from a prefixed distance and inclination, so as not to interfere with the incident beam light (Fig. 1, left).

Alignment unit. In the first stage, the vehicle is aligned to the testing device. To this purpose, we have conceived an algorithm that estimates in real time the instantaneous 3D trajectory of a vehicle approaching the testing device, so as to derive the vehicle's 3D orientation. The alignment procedure relies on our 3D stereo-based technology [HP4]. The license plate of the vehicle has been chosen as the reference pattern to be tracked along the sequence, since it is rigidly integral with the vehicle and easy to detect automatically.
The accuracy of our 3D trajectory orientation algorithm has first been assessed using a license plate mounted on a mechanical support free to move with controlled velocity along a rigid linear guide placed at ground level. The deviation angle with respect to a reference line can be varied and measured with an accuracy of 0.02°. The left image of the stereo pair is shown in Fig. 2, left. In order to establish a ground truth in real conditions as well, a trace of the car trajectory has been obtained by fixing to the vehicle a rigid case releasing sand, thus drawing a colored track (the "sand track") on the floor (Fig. 2, middle). This track is processed with an accuracy of 0.05°.
Fig. 2, right, shows the angular error versus the ground-truth deviation angle for 14 test sequences. Most of the trials (9 out of 14) have been carried out for deviations smaller than 5° in absolute value, because small angles must be studied with the highest accuracy: while approaching the test system, the vehicle is supposed to be already roughly aligned ([−15°, +15°]). An absolute angular error lower than 0.1° proves the accuracy of the system.
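As a minimal illustration of how a deviation angle can be derived from the tracked plate, the following sketch (hypothetical names; it assumes the 3D plate centres have already been triangulated from the stereo pair, whereas the actual algorithm [HP4] is more elaborate) fits a straight line to the ground-plane trajectory and returns its angle with respect to the OPS optical axis:

    import numpy as np

    def deviation_angle(plate_centres):
        """Deviation (yaw) angle, in degrees, of a straight vehicle trajectory.

        plate_centres: (N, 3) array of triangulated 3D positions of the license
        plate centre, in a frame whose Z axis is the OPS optical axis and whose
        X axis points sideways (the ground plane is XZ).
        """
        pts = np.asarray(plate_centres, dtype=float)
        x, z = pts[:, 0], pts[:, 2]
        # Least-squares fit of the ground-plane trajectory x = a*z + b:
        # the slope a is the tangent of the deviation angle.
        a, _ = np.polyfit(z, x, deg=1)
        return np.degrees(np.arctan(a))

    # Example: a trajectory deviated by 5 degrees, with some triangulation noise.
    z = np.linspace(5.0, 1.0, 30)                       # distance to the OPS [m]
    x = np.tan(np.radians(5.0)) * z + 0.02 * np.random.randn(z.size)
    print(deviation_angle(np.stack([x, np.zeros_like(z), z], axis=1)))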

Fig. 2: Left rectified images of: the license plate moving along a known trajectory (left) and the vehicle moving toward the stereo rig, with the "sand track" of the vehicle's trajectory on the floor (middle); the absolute angular error yielded by our alignment procedure (right).


Beam characterization unit. After the vehicle has been properly aligned, our self-adaptive algorithm can compute reliable parameters describing the luminous profile of the beam projected onto the panel and acquired by a CCD camera. Excluding the instrumentation assessment algorithm, our characterization algorithm consists of different parts. Two of them are camera dependent and have to be performed once and for all, after choosing and installing the camera on the prototype.

Instrumentation assessment. In order to validate the characterization method, together with the industrial partner we have built a Numerical Control Unit (hereinafter, NCU), on which headlamp projectors can be mounted and oriented according to three degrees of freedom (roll, pitch and yaw). These movements are measured electronically with a resolution r = 0.06°.

Fig. 3: A schematic representation of the NCU (left) and an image of the real prototype with a headlamp mounted (right).

Since the aim of this work is to test systems and algorithms that must comply with strict regulations, before performing any measurement and algorithm assessment we thoroughly investigated the NCU accuracy [HP1]. An experimental procedure based on pattern recognition and image analysis methods has been devised to quantify the accuracy of the ground-truth measurements provided by the NCU. The aim of the procedure is to collect and process the angular variations returned by the NCU in response to a fixed angular displacement, whose magnitude is measured through pattern matching techniques using a pair of black-filled circular patterns. The yaw angle of the NCU is varied so that the camera moves toward the second pattern. The pattern recognition algorithm detects when the pattern centre is found at the same distance from the image centre as recorded in the first configuration, within a tolerance of Δd = 0.5 pix, and the second reference position is set. The statistics of the collected results show a standard deviation of about σ = 0.031°. Therefore, the accuracy of the measures provided by the NCU proves comparable with the resolution of the instrument in the 86.5% (2σ) confidence interval.
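A sketch of the kind of centre-detection test described above could look as follows; this is our illustrative OpenCV-based reconstruction (function names are hypothetical), not the exact procedure of [HP1]:

    import cv2
    import numpy as np

    DELTA_D = 0.5   # tolerance on the centre distance [pix], as in the text

    def pattern_centre(gray):
        """Sub-pixel centroid of the black-filled circular pattern, computed
        from the image moments of the largest dark blob."""
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        blob = max(contours, key=cv2.contourArea)
        m = cv2.moments(blob)
        return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

    def at_reference(gray, ref_distance):
        """True when the pattern centre lies at the recorded reference
        distance from the image centre, within DELTA_D pixels."""
        h, w = gray.shape
        d = np.linalg.norm(pattern_centre(gray) - np.array([w / 2.0, h / 2.0]))
        return abs(d - ref_distance) <= DELTA_D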

Analysis of the optical device. The difficulty in achieving radiometric measurements by analyzing the gray level values of image pixels lies in finding the relationship that binds the scene radiance to the image irradiance, that is, the "power of light" recorded by the vision sensor. This relationship is known as the Response Function (RF) of the camera and needs to be recovered. Fig. 4, right, shows the RF of the industrial B/W camera employed in this stage, recovered through our method (more details are given in [HP5]).
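Our robust reconstruction method is described in [HP5]; purely as an illustration of the concept, a standard alternative (Debevec-Malik, as wrapped by OpenCV's photo module) can recover an RF from a bracketed exposure stack of a static scene:

    import cv2
    import numpy as np

    def estimate_rf(frames, exposure_times):
        """Recover a camera response function from a bracketed exposure stack
        of a static scene (Debevec-Malik, via OpenCV's photo module).

        frames: list of 8-bit B/W images; exposure_times: seconds, one per frame.
        """
        # CalibrateDebevec expects 3-channel images: replicate the B/W channel.
        stack = [cv2.cvtColor(f, cv2.COLOR_GRAY2BGR) for f in frames]
        times = np.asarray(exposure_times, dtype=np.float32)
        response = cv2.createCalibrateDebevec().process(stack, times)
        return response[:, 0, 0]   # 256 samples: pixel value -> relative exposure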

System calibration. After the camera is fixed onto the OPS, it is necessary to compensate for the perspective effects caused by the inclination of the camera's optical axis with respect to the rear panel's plane, as can be seen in Fig. 1, left. Fig. 4 shows the calibration pattern before (left) and after (middle) correction [HP6].
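For illustration only (our method [HP6] is designed to work even with an incomplete set of chessboard markers), a basic homography-based correction with OpenCV might look like this; pattern size and pixel pitch are arbitrary assumptions:

    import cv2
    import numpy as np

    def correct_perspective(img, pattern_size=(9, 6), square_px=40):
        """Warp the panel image to a fronto-parallel view: detect the
        chessboard corners, fit the homography mapping them onto a regular
        grid of square_px-pixel squares, and apply it to the whole image."""
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if not found:
            raise RuntimeError("calibration pattern not detected")
        cols, rows = pattern_size
        ideal = np.array([[(j + 1) * square_px, (i + 1) * square_px]
                          for i in range(rows) for j in range(cols)], np.float32)
        H, _ = cv2.findHomography(corners.reshape(-1, 2), ideal, cv2.RANSAC)
        return cv2.warpPerspective(img, H, ((cols + 2) * square_px,
                                            (rows + 2) * square_px))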

Fig. 4: From left to right: the calibration pattern as seen by the CCD before (left) and after (middle) our correction; the recovered camera RF with the samples directly extracted using a calibration chart (right).

The remaining two parts of the algorithm are camera independent: whatever the chosen camera, they just exploit the previous results to cope with two basic issues, auto-exposure and eye-like profile segmentation.

Auto-exposure. The light of vehicle headlamp beams has an extremely wide dynamic range, whereas the CCD sensors on the market that are economically compatible with the commercial cost of the final diagnostic equipment have a limited dynamic range. To face this challenging problem, we have conceived an original algorithm capable of extracting all the useful information by adjusting the radiometric resolution of the CCD and preventing it from entering saturation [HP7]. To this purpose, our algorithm uses all the image pixels of the real time sequence to find the optimal exposure time, ensuring that no part of the image is saturated, even for such highly contrasted scenes. This permits the acquisition system to work with all possible light sources.
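The actual exposure control algorithm is more elaborate; as a toy sketch of the underlying idea (rescaling the exposure time so that the brightest pixel of the whole frame stays just below full scale, assuming a roughly linear sensor), one iteration of the control loop could be:

    SATURATION = 255    # full scale of an 8-bit sensor
    TARGET_MAX = 250    # keep the brightest pixel just below saturation

    def next_exposure(frame, t_current):
        """One step of a naive exposure control loop: rescale the exposure
        time so that the brightest pixel of the whole frame lands just below
        full scale (assuming a roughly linear sensor response)."""
        peak = float(frame.max())
        if peak >= SATURATION:
            return t_current * 0.5      # saturated: back off aggressively
        return t_current * TARGET_MAX / max(peak, 1.0)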

Profile segmentation. We now want to mimic the response of the human eye even in a highly contrasted and untextured scene such as the one generated by the beams projected onto a white panel. Therefore, instead of processing a synthetic image generated by tone mapping operators, we exploit the knowledge of the camera RF in a locally adaptive segmentation algorithm based on the gradient of the visual perception property, performed on the acquired non-saturated image. The automatic exposure algorithm only ensures that the radiometric content is preserved in the image; nevertheless, the difference between what human eyes perceive when looking at the panel directly and when watching an image of the panel can be relevant. We have conceived an accurate and automatic eye-like segmentation algorithm able to detect the line corresponding to the light-dark border of the projected beam as perceived by a human being, rather than as it appears in the image captured by the CCD [HP8]. The algorithm is based on a method devised to find suitable local thresholding values that fit spatial luminous variations and automatically adjust to different light intensities. We have taken into account that the non-linear response of the human visual system depends on the relationship between local variations and the surrounding luminance, rather than on the absolute luminance.
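The real algorithm is described in [HP8]; the toy sketch below (hypothetical parameters) only illustrates the core idea of thresholding on local, Weber-like contrast against the surround rather than on absolute luminance:

    import cv2
    import numpy as np

    def eye_like_border(img, win=31, weber_k=0.15):
        """Toy light-dark border detector: a pixel counts as 'lit' when its
        luminance exceeds the local surround mean by a fixed Weber fraction,
        i.e. the criterion is local contrast, not absolute luminance.
        Returns, for each column, the topmost lit row (the cut-off line)."""
        f = img.astype(np.float32)
        surround = cv2.blur(f, (win, win))        # local mean luminance
        lit = f > surround * (1.0 + weber_k)      # Weber-like contrast test
        # argmax returns the first True row per column (0 if the column is dark).
        return lit.argmax(axis=0)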

Fig. 5: Level sets and profiles (left); a detail (right).

Fig. 5, left, shows the level sets of a passing beam headlamp together with the profiles computed by our algorithm and those detected by human operators. While these are consistent with each other, they do not match any contour defined by the level sets (Fig. 5, right), because the human perception of what is being imaged is quite different.

Extraction of regulation points. Interesting points representing geometric references according to the current European regulations can now be identified by computing the first and second derivatives of this profile. In particular, for passing beam headlamps the "elbow" point, which corresponds to a strong change in the profile slope, can be extracted as the maximum of the second derivative of the cut-off line. In Fig. 6, from top to bottom, the first derivative signal and its smoothed version, here achieved using running mean filtering, are shown together with the second derivative signal and its smoothed version. In the same figure, the final cut-off profile is represented by two line segments, obtained by linear regression (according to the Least Squares Method, LSM) on the points of the profile pertaining to the left and right sides of the detected elbow. Thus, we can compute another important geometric parameter required by the regulations: the "deviation" angle between the two line segments. Finally, the algorithm can also provide the maximum peak of illumination (identified by the cross in Fig. 6).
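A compact numerical sketch of this derivative analysis (a hypothetical helper following the steps in the text: running-mean smoothing, elbow at the maximum of the smoothed second derivative, LSM line fits on both sides, and the deviation angle between them) could be:

    import numpy as np

    def analyse_cutoff(profile, smooth=9):
        """Elbow and deviation angle of a passing-beam cut-off profile.

        profile: 1D array with the cut-off row (pixels) for every column.
        """
        k = np.ones(smooth) / smooth
        p = np.convolve(profile, k, mode="same")
        d1 = np.convolve(np.gradient(p), k, mode="same")   # smoothed 1st derivative
        d2 = np.convolve(np.gradient(d1), k, mode="same")  # smoothed 2nd derivative
        elbow = int(np.argmax(d2))                         # strongest slope change
        x = np.arange(p.size)
        # LSM line segments on the two sides of the elbow.
        a_left, b_left = np.polyfit(x[:elbow + 1], p[:elbow + 1], 1)
        a_right, b_right = np.polyfit(x[elbow:], p[elbow:], 1)
        deviation = np.degrees(abs(np.arctan(a_right) - np.arctan(a_left)))
        return elbow, (a_left, b_left), (a_right, b_right), deviation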

Fig. 6: Profile derivative analysis for a passing headlamp beam: the extracted points of the light profile (thin red curve) and the segmented profile (blue line segments). From top to bottom: first derivative signal (1) and its smoothed version (2); second derivative signal (3) and its smoothed version (4).

RESULTS In Fig. 7, left, the raw image of a luminous profile projected by a passing beam headlamp is shown. Two of the most representative (according to their difference) perceived profiles are superimposed in Fig. 7, middle (dotted lines), together with the profile extracted by our algorithm (continuous green line). As one can see, the trend of the profile is "correctly" followed even in the last (right) part of Fig. 7, left, where the SNR of the displayed image is dramatically low and where a common level set method would fail (continuous blue line).

Fig. 7: The image of the luminous profile of a halogen passing headlamp (left); two significant perceived profiles (dotted lines) with the superimposed profiles extracted by our method (continuous green line) and by level set processing (continuous blue line) (middle); maximum distance and standard deviations (both in pixels) of 14 profiles, referring to the two most distant profiles seen by the human operators (right).

In Fig. 7, right, the average distances and standard deviations (in pixels) are reported for 14 different profiles, generated by halogen, lenticular and xenon passing headlamps, for the two most representative human operators: that is, for each test the two most distant profiles are taken for comparison. The average distance is about 6.8 pixels, with a standard deviation of about 2.9 pixels. Since the vertical resolution of the camera we used is about 0.16 mm/pixel, the average distance and standard deviation amount to about 1.1 mm and 0.47 mm, respectively. We can therefore conclude that this is an excellent result, since the accuracy of our method is comparable with the inter-operator standard deviation (about 0.3 mm).
Fig. 8 presents the results attained for yaw and pitch perturbations, referring to the elbow of the beam profile of a halogen passing headlamp equipped with a lenticular lens. The hatched boxes show that within the European regulation range [−1.5°, +1.5°] the accuracy fulfils the requirements for both pitch and yaw. In terms of precision, the standard deviation shown in Fig. 8 is very low, and even in the worst case (the yaw angle) it stays below 0.02°. More experiments on different kinds of headlamps are reported in [HP8].

Fig. 8: Alignment measurements (precision and accuracy) for a halogen passing beam headlamp equipped with a lenticular lens.

Finally, it is worth noticing that this is the only known automatic method able to characterize the beam profile of a vehicle headlamp with measurements compliant with the European regulations.

REFERENCES
[HP1] A. Bevilacqua, A. Gherardi, L. Carozza, An automatic system for the real time characterization of vehicle headlamp beams exploiting image analysis, to appear in IEEE Transactions on Instrumentation and Measurement, 2010
[HP2] A. Bevilacqua, A. Gherardi, L. Carozza, A fully automatic real time system for the characterization of automotive headlamps, 2009 IEEE International Instrumentation and Measurement Technology Conference (I2MTC 2009), Singapore, May 5-7, 2009, pp.36-39
[HP3] A. Bevilacqua, A. Gherardi, L. Carozza, An industrial vision-based technology system for the automatic test of vehicle beams, IEEE International Symposium on Industrial Electronics (ISIE 2009), Seoul, Korea, July 5-8, 2009, pp.2178-2183
[HP4] A. Bevilacqua, A. Gherardi, L. Carozza, High Accuracy Estimation of Vehicle Trajectory using Real Time Stereo Vision, IEEE International Symposium on Industrial Electronics (ISIE 2009), Seoul, Korea, July 5-8, 2009, pp.2230-2235
[HP5] A. Bevilacqua, A. Gherardi, L. Carozza, A robust approach to reconstruct experimentally the camera response function, 1st IEEE International Workshops on Image Processing Theory, Tools & Applications (IPTA08), Sousse, Tunisia, November 23-26, 2008, pp.340-345
[HP6] A. Bevilacqua, A. Gherardi, L. Carozza, Automatic perspective camera calibration based on an incomplete set of chessboard markers, 6th IEEE Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP 2008), Bhubaneswar, India, December 16-19, 2008, pp.126-133
[HP7] A. Bevilacqua, A. Gherardi, L. Carozza, Accurate eye-like segmentation in a heavily untextured contrasted scene, 1st IEEE International Workshops on Image Processing Theory, Tools & Applications (IPTA08), Sousse, Tunisia, November 23-26, 2008, pp.414-420
[HP8] A. Bevilacqua, A. Gherardi, L. Carozza, A visual perception approach for accurate segmentation of light profiles, IEEE International Conference on Image Analysis and Recognition (ICIAR 2009), Halifax, Canada, July 6-8, 2009, Vol.5627, pp.168-177
