Figure 3.1 California License Plate
This design includes five primary algorithms used for identifying the license plate characters:
- Image Processing – A simple algorithm that converts each pixel value to either 0 or 255 based on a set threshold. In essence, this step extracts the intensity information from the image and maps it to black (0) or white (255), analogous to a 1-bit-per-pixel representation in which 0 represents black and 1 represents white. This technique is most commonly known as image binarization.
- Defragmentation – This algorithm finds the registration number on the California license plate, segments it into individual characters, and stores the coordinates of each character.
- Resizing – Converts each individual character to a fixed size template. In this project, the plate registration characters/numbers are converted to fixed size templates of 64 × 128 pixels. This makes the system completely independent of the detected plate size.
- Four Quadrant Method – Each template is divided into four quadrants, which are used to generate four 16-bit vectors. In short, this technique condenses the large amount of data in a template into four 16-bit vectors.
- Template Matching – The four 16-bit vectors generated from the detected characters are compared bitwise with the vectors stored in the database. The image used to generate the database vectors is shown in Figure 3.2 below; the vectors are produced using the steps listed above (1 to 4). Based on the comparison, the best-matching character from the database is displayed.
Figure 3.2 Template for Database
3.3 Algorithm Description
3.3.1 Image Processing
This algorithm copies the pixel values into a two-dimensional array obtained from the frame buffer (DSP kit) or from a stored image (simulator). Based on the set threshold, it converts each pixel value to either 0 or 255, where '0' stands for a completely black pixel and '255' stands for a completely white pixel. In short, the whole image is converted to a binary image. This step is very important, as the performance of both the defragmentation and resizing algorithms depends on this conversion.
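The thresholding step above can be sketched in C as follows (the function name and the threshold value used in the test are illustrative, not taken from the project code):

```c
#include <stddef.h>

/* In-place binarization of an 8-bit grayscale buffer: pixels at or above
   the threshold become white (255), all others black (0). */
void binarize(unsigned char *pixels, size_t count, unsigned char threshold)
{
    size_t i;
    for (i = 0; i < count; i++)
        pixels[i] = (pixels[i] >= threshold) ? 255 : 0;
}
```

The project uses a fixed preset threshold; an adaptive threshold would be a possible refinement but is not used here.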
3.3.2 Defragmentation
Once the pixel values are converted into a binary format (0 or 255), the next step is to look for the characters present on the plate. A California license plate carries details such as the registration number, registration validity information, and state information. To defragment each individual character, this algorithm is therefore divided into two parts: horizontal defragmentation and vertical defragmentation. Horizontal defragmentation separates the registration number from the state information on the plate; this operation uses logic based on aspect ratio. Once the registration number region is obtained, vertical defragmentation separates the individual characters and stores each character's coordinate information.
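The chapter does not spell out the exact segmentation logic, but vertical defragmentation is commonly done with a column-projection scan. The sketch below (function name and interface are my own, not the project's) treats each run of columns containing black pixels as one character and records its start and end columns:

```c
/* Scan a binarized plate region (0 = black, 255 = white) column by
   column; each run of columns containing at least one black pixel is
   taken as one character. Start/end columns are stored per character. */
int find_char_columns(const unsigned char *img, int width, int height,
                      int *starts, int *ends, int max_chars)
{
    int x, y, n = 0, in_char = 0;
    for (x = 0; x < width; x++) {
        int has_black = 0;
        for (y = 0; y < height; y++)
            if (img[y * width + x] == 0) { has_black = 1; break; }
        if (has_black && !in_char) {
            if (n == max_chars)
                break;                 /* no room for another character */
            starts[n] = x;
            in_char = 1;
        } else if (!has_black && in_char) {
            ends[n++] = x - 1;
            in_char = 0;
        }
    }
    if (in_char)
        ends[n++] = width - 1;         /* character touches right edge */
    return n;                          /* number of characters found */
}
```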
3.3.3 Resizing
To make the design completely independent of the size of the license plate and to match each character against the fixed-size templates (64 × 128 pixels) in the database, a resizing algorithm is used. Its basic function is to take the coordinate information provided by the defragmentation algorithm and resize the character region to a fixed-size template. The major problem with resizing is the loss of data during the process. To ensure that only useful character information is replicated, the algorithm is applied to the binary image produced during the image-processing step, which converts the grayscale image to a binary format using a fixed threshold. This helps preserve useful data during resizing and is very efficient for this particular application. The algorithm works on the simple concept of adding or removing white and black pixels uniformly, depending on whether the image needs to grow or shrink. A basic graphical view of the resizing algorithm is shown in Figure 3.3.
Figure 3.3 Resizing Algorithm Basic Concept
To implement this algorithm, two one-dimensional arrays are required: template_y_coordinate of size 128 and template_x_coordinate of size 64. The array sizes depend on the size of the template used for matching. For this project, in order to generate four 16-bit vectors, the whole template needs to be divided into blocks of 8 × 16 pixels, which is the step that follows resizing. The template size was chosen based on this block size and the vector length (16 bits). Now assume that the character shown in Figure 3.4 has width X and height Y. To resize this character to the known size of 64 × 128 pixels (X′ × Y′), the divisional ratios should be divisor_x = X/X′ and divisor_y = Y/Y′.
Figure 3.4 Resizing Algorithm Requirement
The divisor values help maintain uniformity while adding or removing pixel values during image resizing. The pictorial representation of this algorithm is shown in Figure 3.4. Each divisor value is multiplied by each index of the respective one-dimensional array, and the resulting values are stored at the same index of that array. The complete sequence of this flow is shown in Figure 3.5.
Figure 3.5 Resizing Algorithm Graphical Representation
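The divisor-based mapping above can be sketched as a nearest-neighbour resize (an assumption that matches the uniform add/remove-pixels idea; names and the integer form of the divisors are illustrative):

```c
/* Nearest-neighbour resize of a binary character image of size
   src_w x src_h into a fixed 64 x 128 template, mirroring the
   divisor_x = X/X' and divisor_y = Y/Y' mapping described above. */
#define TPL_W 64
#define TPL_H 128

void resize_to_template(const unsigned char *src, int src_w, int src_h,
                        unsigned char dst[TPL_H][TPL_W])
{
    int x, y;
    for (y = 0; y < TPL_H; y++) {
        int sy = y * src_h / TPL_H;       /* integer form of y * divisor_y */
        for (x = 0; x < TPL_W; x++) {
            int sx = x * src_w / TPL_W;   /* integer form of x * divisor_x */
            dst[y][x] = src[sy * src_w + sx];
        }
    }
}
```

When the source is smaller than the template, pixels are replicated; when it is larger, pixels are dropped, which is the uniform add/remove behaviour the text describes.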
3.3.4 Four Quadrant Algorithm
Once the image is resized, the next step is to generate four 16-bit vectors. To do so, the resized image is divided into small blocks of 8 × 16 pixels. Each block is scanned for its pixel values, and based on the majority pixel color (black or white), the whole block is set to all 1's or all 0's. For example, out of the 128 pixels in a block, if 85 are black and 43 are white, then all pixels in the block are converted to black, with a binary value of 0. Once each block has been reduced to a 1 or a 0, this digitized format is used for vector generation. The complete sequence of this flow is shown in Figure 3.6. Figure 3.7 shows how the template is divided into four quadrants and how the four 16-bit vectors are generated from the whole template.
Figure 3.6 Four Quadrant Generation Basic Flow
Figure 3.7 16-bit Vector Generation using Four Quadrant Algorithm
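The block-majority reduction and quadrant packing can be sketched as follows (the bit ordering and quadrant numbering are assumptions for illustration; the thesis does not specify them):

```c
#include <stdint.h>

#define TPL_W 64
#define TPL_H 128
#define BLK_W 8
#define BLK_H 16

/* Reduce a 64 x 128 binary template (0 = black, 255 = white) to four
   16-bit vectors. The template is tiled into 8 x 16 blocks; each block
   contributes one bit, set when the majority of its 128 pixels are black.
   Each quadrant holds 4 x 4 blocks, packed into one 16-bit vector. */
void quad_vectors(unsigned char tpl[TPL_H][TPL_W], uint16_t vec[4])
{
    int q, bx, by, x, y;
    for (q = 0; q < 4; q++)
        vec[q] = 0;
    for (by = 0; by < TPL_H / BLK_H; by++) {          /* 8 block rows */
        for (bx = 0; bx < TPL_W / BLK_W; bx++) {      /* 8 block columns */
            int black = 0;
            for (y = 0; y < BLK_H; y++)
                for (x = 0; x < BLK_W; x++)
                    if (tpl[by * BLK_H + y][bx * BLK_W + x] == 0)
                        black++;
            if (black > BLK_W * BLK_H / 2) {          /* majority black */
                int quad = (by / 4) * 2 + (bx / 4);   /* quadrant index 0..3 */
                int pos  = (by % 4) * 4 + (bx % 4);   /* bit position 0..15 */
                vec[quad] |= (uint16_t)(1u << pos);
            }
        }
    }
}
```

Note how the template dimensions fall out of the design: 64 × 128 pixels in 8 × 16 blocks gives an 8 × 8 block grid, so each quadrant contains exactly 16 blocks, one per bit of its vector.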
3.3.5 Template Matching
This algorithm matches, bitwise, two sets of vectors generated from different templates. The first requirement for template matching is a database. To create it, the only thing required is an image containing the 36 characters in a particular font. In California, the font found on license plates is "Penitentiary Gothic" [3]. Based on this specification, the image shown in Figure 3.2 was selected. A database of four 16-bit vectors is created for each individual character in the image, and the generated vectors are stored in a text file, shown in Table 3.1 with additional details included for the reader's convenience. All database vectors are created using the same steps mentioned above. Using file I/O, the text file is copied into a two-dimensional array, and during template matching each vector is compared within its respective quadrant. The complete sequence of this flow is shown in Figure 3.8. The Template Matching algorithm has two parts. The first part calculates the matching percentile for the whole template of 8192 pixels: based on the number of matched pixels between two templates, it computes the overall matched percentile. The second part runs only if the value obtained in the first part is less than 85%; it calculates the percentile of matched pixels quadrant-wise and averages the result. The best match among all 36 templates is output as electronic data that the developer can use to implement various applications.
Figure 3.8 Template Matching Vector Comparison
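The two-part scoring described above can be sketched as follows (an illustrative pixel-level version; the function names and the quadrant layout are assumptions):

```c
#define TPL_W  64
#define TPL_H  128
#define QUAD_W 32
#define QUAD_H 64

/* Percentage of matching pixels between two binary templates over a
   rectangular region starting at (x0, y0). */
static double region_match(unsigned char a[TPL_H][TPL_W],
                           unsigned char b[TPL_H][TPL_W],
                           int x0, int y0, int w, int h)
{
    int x, y, matched = 0;
    for (y = y0; y < y0 + h; y++)
        for (x = x0; x < x0 + w; x++)
            if (a[y][x] == b[y][x])
                matched++;
    return 100.0 * matched / (w * h);
}

/* Two-part score: use the whole-template percentage (8192 pixels);
   if it falls below 85%, average the four quadrant-wise percentages. */
double match_score(unsigned char a[TPL_H][TPL_W],
                   unsigned char b[TPL_H][TPL_W])
{
    double overall = region_match(a, b, 0, 0, TPL_W, TPL_H);
    double sum = 0.0;
    int q;
    if (overall >= 85.0)
        return overall;
    for (q = 0; q < 4; q++)
        sum += region_match(a, b, (q % 2) * QUAD_W, (q / 2) * QUAD_H,
                            QUAD_W, QUAD_H);
    return sum / 4.0;
}
```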
Table 3.1 below displays four fields: character, quadrant, vector, and a reference number. Every character has its own unique reference number, which is used as its identification during template matching. The table also displays the value of each 16-bit vector in decimal format for each respective quadrant. These vectors were saved in a separate text file and, during template matching, were read and stored via file I/O into a two-dimensional array named "Quadrant".
Table 3.1 Vectors for 36 Templates (Characters + Numbers)
Chapter 4
TMS320DM6437 DVDP PLATFORM OVERVIEW
4.1 Introduction
To evaluate and develop various video applications, the EVM DM6437 provides a good platform with several onboard devices. Key features include a DM6437 processor operating at up to 600 MHz, a TVP5146M2 video decoder, ports for composite or S-Video, four video DAC outputs, 128 MB of DDR2 DRAM, 16 MB of non-volatile flash memory, 64 MB of NAND flash, 2 MB of SRAM, configurable boot-load options, and an embedded JTAG emulation interface. The EVM is designed to work with TI's Code Composer Studio development environment, which communicates with the board through the embedded emulator. A block diagram of this platform is shown in Figure 4.1.
Figure 4.1 Block Diagram of EVM DM6437 [6]
4.2 Video Interfaces on TMS320DM6437 DVDP
The Texas Instruments DM6437 processor is interfaced to various on-board peripherals through integrated device interfaces and an 8-bit-wide EMIF bus. The DM6437 EVM comprises input and output video ports that can support a variety of user applications. These interfaces are discussed in the two sections below [7].
- Input Video Port Interfaces:
The DM6437 EVM supports video capture through an S-Video or composite video input port, decoded by the TVP5146M2. Texas Instruments' TVP5146M2 is a high-quality digital video decoder that digitizes and decodes all popular analog video formats into digital component video.
- On Chip Video Output DAC:
The DM6437 incorporates four output DACs interfacing to various output standards. The DAC outputs are programmable to support composite video, component video, or RGB color format. The DACs can operate at a sampling rate of either 27 MHz or 54 MHz to support SDTV (interlaced) or EDTV (progressive) signals, respectively.
4.3 DM6437 Functional Overview
The functional block diagram of the DM6437 is shown in Figure 4.2. Only the Video Processing Subsystem is explained briefly in this project.
Figure 4.2 Functional Block Diagram of DM6437 [6]
The DM6437 device includes a Video Processing Subsystem (VPSS) with two configurable video/imaging peripherals [6].
1) Video Processing Front-End (VPFE)
2) Video Processing Back-End (VPBE)
The Video Processing Front-End (VPFE) is used to capture input video. It comprises the following modules:
- CCD Controller (CCDC) – The CCDC is responsible for accepting raw image/video data from a CMOS or CCD sensor. It can also accept YUV video data from video decoder devices. The raw data can be used to compute various statistics, which in turn control the image/video tuning parameters.
- Preview Engine (Previewer) - It is a real-time image-processing engine that takes raw image data from a CMOS sensor or CCD and converts it into YCrCb 422 format that is amenable for compression or display.
- Hardware 3A (H3A) - It provides statistical information on the raw color data, which can be further used to adjust various parameters for video or image processing.
- Resizer - It provides a means to resize the input image data to the desired display or video encoding resolution. It accepts image data for separate horizontal and vertical resizing from 1/4× to 4× in increments of 256/N, where N is between 64 and 1024.
The Video Processing Back-End (VPBE) provides an output interface for display devices. It comprises the following sub-modules:
- On-Screen Display Engine (OSD) - The primary function of this module is to gather and blend display windows with video data and pass it to the video encoder in YCrCb format. It is capable of handling 2 separate video windows and 2 separate OSD windows.
- Video Encoder (VENC) - It takes the display frame from the OSD and formats it into the output signals required to interface to display devices. It provides four analog DACs for composite video, S-Video, and/or component video output. The VENC also provides up to 24 bits of digital output to interface to RGB888 devices.
4.4 TMS320C64x+ DSP Cache
The TMS320C64x+ utilizes a highly efficient two-level real-time cache for internal program and data storage. The cache provides code and data to the CPU at the speed of the processor, thus reducing the CPU-to-memory bottleneck. The first level consists of dedicated program and data memories, each of which can be configured as SRAM and cache; the cache size is user configurable and can be set to 4K, 8K, 16K, or 32K bytes. The L1 memory is connected to a second level of on-chip memory called L2, which acts as a bridge between L1 and off-chip memory. L2 is also configurable and can be split into L2 SRAM and up to 256 KB of cache. Thus, the L2 memory can function as mapped SRAM, as a cache, or as a combination of both. The L2MODE field in the cache configuration register (CCFG) determines what portion of L2 is mapped as SRAM and what portion acts as cache. The mode with no L2 cache is referred to as 'ALL SRAM' mode [10].
4.5 Design Implementation
The main objective of this project was to create a complete model for the LPR system, and the EVM-DM6437 kit provided the base needed for its implementation. The kit provides board support libraries and pre-built software packages that can be used depending on the application. The "Video Preview" code example from Texas Instruments was used as a reference, and seven functions needed for character recognition were embedded in this code to achieve the required functionality. The board provided an interface where a camera was connected to the video input port, through which front- or rear-end car images were captured. The stored car images were used for license plate detection [9]. The goal of this project was to test the template-matching algorithm for character recognition on the detected license plate. All the algorithms described in Chapter 3 were divided into seven functions, discussed briefly below. For testing and debugging purposes, two functions, Reprocess_image and Process_image, were created; they can be completely ignored during performance analysis.
- Image_copy - This function simply copies the pixel values from the image after processing each one to either 0 or 255 based on comparison with the set threshold.
- Defragmentation - This function divides up the whole license plate and localizes the coordinates of its characters.
- Resizing_quad_gen - Once the coordinates of each character are known, the next step is to resize it to a fixed-size template and then convert this template into four 16-bit vectors. This function performs both tasks: first, it resizes the character to a fixed-size template of 64 × 128 pixels; second, it divides the resized image into blocks of 8 × 16 pixels and generates the four 16-bit vectors. This was explained in detail in Chapter 3.
- File_mapping - This function reads the data from the text file, stores it in an array, and makes it available for template matching. The data for all 36 templates (26 characters + 10 numbers) are stored in the text file in the form of four 16-bit vectors each, and are then used for matching the contents of the plate.
- Reprocess_image - This function is used to make the results visible on the TV screen. It clears all the image data from the screen, like a blank whiteboard. This is an extra function and can be ignored during performance analysis.
- Template_matching – It is used to match two templates, one from the database and one from the license plate.
- Process_image – Process_image displays all the matched characters on the TV screen. It was created not only to test the results but also to give the designer a convenient view during debugging. Like Reprocess_image, this function is ignored during performance analysis.
Chapter 5
SIMULATION AND IMPLEMENTATION RESULTS
5.1 Results and Performance Analysis
To obtain the best results from the plate recognition algorithm, the image acquisition system must provide a stable, balanced, good-quality image under all working conditions. However, due to technical shortcomings of the image acquisition system, the captured image may contain noise, blurriness, etc., which can affect the recognition results. Table 5.1 summarizes the simulation and implementation results for 20 randomly selected plates. The functionality and integrity of the recognition algorithm are tested under different criteria: forty percent of the test plates are images with high contrast, good sharpness, and good lighting; another forty percent are low-quality images captured under poor lighting; and the remaining twenty percent are blurry images.
Table 5.1 Simulation and Implementation Results
From the above table, it can be concluded that conditions such as poor lighting, noise, and blurriness, which alter the character shapes, are likely to produce false recognition results. Image results for the above-mentioned criteria are shown in the test cases below.
5.2 Simulation Results
To test the design algorithms during the initial phase of implementation, a visual environment was created in Microsoft Visual Studio using the OpenCV libraries. Using several built-in features of this library, three display windows were created that showed image results at several stages during simulation [4][5]. OpenCV is a powerful library for computer vision; its functions are highly optimized, can run in real time, and help the user develop sophisticated vision applications quickly.
5.2.1 Case 1: Clear Image and High Resolution
This type of image provides the best results and serves as the ideal case for character recognition. Both the clarity and the resolution of the picture depend heavily on the type of camera used to capture it. Figure 5.1 below shows the original plate used as the input image during simulation, the processed plate (the binary image), and the result of the resizing algorithm. The character recognition results stored in a file are shown in Table 5.2.
Figure 5.1 Simulation Results for Case 1
Table 5.2 Case 1 Simulation Results for Character Recognition
The above results provide the following information. Column 1 is the number field, which indicates the position of each character on the plate starting from the left-hand side: number zero indicates the first character, while number 6 indicates the last. Since numbering starts at zero, an index of six corresponds to the seventh and final character on the plate. The reference number field in column 2 indicates that the character on the plate matched the template stored at the nth location in the database. Finally, the matching ratio field in the last column shows the matching percentile between the two templates, i.e., the number of vector bits matched. The characters recognized are 4PKC592.
5.2.2 Case 2: Low Quality Image and Poor Lighting Condition
In this test case, images are captured from the camera under dark lighting and cloudy conditions. Images captured under such poor visibility have low contrast and brightness, yielding a low-quality image. It is very important to feed these types of images to the recognition algorithm in order to check the integrity of the identified characters. Figure 5.2 shows the original plate used as the input image during simulation, the processed plate (the binary image), and the result of the resizing algorithm. The character recognition results stored in the file are shown in Table 5.3.
Figure 5.2 Simulation Results for Case 2
Table 5.3 Case 2 Simulation Results for Character Recognition
The tabulated results above provide the following three fields. The number field indicates the position of each character on the plate starting from the left-hand side, i.e., number 0 indicates the first character while number 6 indicates the last. The reference number indicates that the character on the plate matched the template stored at the nth location in the database. The matching ratio field in the last column shows the matching percentile between the two templates, i.e., the number of vector bits matched during template matching. The characters recognized are 5EKR790.
5.2.3 Case 3: Blurry Image
This is another test case, in which the image is blurred due to the low shutter speed of the image acquisition system. This type of image loses the fine details of the characters, which may affect the recognition results. Figure 5.3 consists of the same three images shown in the previous two cases: the original plate, the binary image, and the result of the resizing algorithm. The character recognition results were stored in a file and are shown after Figure 5.3.
Figure 5.3 Simulation Results for Case 3
Table 5.4 Case 3 Simulation Results for Character Recognition
Including images of this type makes it possible to gauge how accurately the algorithm detects and recognizes the characters. In this test case, the characters recognized are 4NQE750.
5.2.4 Case Analysis
Three cases were analyzed to verify that all the algorithms work properly and that the required functionality is achieved. This project concentrates mainly on the recognition part; plate detection is not included in the above simulation. Each of the cases discussed consists of three images and the log-file results created and saved during simulation. The first image is the original image, the second is the processed image, and the third is the result of the resizing algorithm. The simulation results provide the following information. The Number column gives the matched result for the corresponding character on the plate. The Reference column is a look-up into the table of values for each template character (26 characters and 10 numbers), as listed in Table 3.1. This reference field is the confidence factor telling the user that this is the best match among all 36 characters, and the matching ratio gives the match between the two (plate character and template character) as a percentage. The results show that as long as the characters in the test image are not affected by environmental conditions, all the characters are perfectly recognized.
5.3 Implementation Results
5.3.1 Case 1: High Quality Image
The test cases used above for simulation are reconsidered for testing at the hardware level. Figure 5.4 shows the following four images displayed on the TV screen: the original plate (the input image captured from the camera), the processed binary image, the result of the resizing algorithm, and the result of the character recognition algorithm.
Figure 5.4 Implementation Results for Case 1
5.3.2 Case 2: Low Quality Image (Plate Detection + Recognition)
This test case, performed on a low-quality image, involves two parts, as illustrated in Figure 5.5. First, a license plate is detected from the rear-end picture of a car and displayed on the TV screen. Second, using the recognition algorithm, all the characters are identified from the detected plate [9] and the matched results are output on the TV screen.
Figure 5.5 Implementation Results for Case 2
5.3.3 Case 3: Bright Image (Plate Detection + Recognition)
This test case, performed on a bright image, also involves two parts, as illustrated in Figure 5.6. First, a license plate is detected from the rear-end picture of a car and displayed on the TV screen. Second, using the recognition algorithm, all the characters are identified from the detected plate [9] and the matched results are output on the TV screen.
Figure 5.6 Implementation Results for Case 3
5.3.4 Case Analysis
The three cases considered for testing all the algorithms on the DSP kit ensure that the required functionality is achieved. The first case illustrates the recognition algorithm results on an already detected license plate, while the remaining two cases cover both license plate detection [9] and character recognition. In Case 1, the first image is the original image, the second is the processed image, the third is the result of the resizing algorithm, and the fourth is the result obtained after template matching. The last image, generated from the template vectors, was created for verification purposes only; this feature is disabled when performance analysis is carried out. In Cases 2 and 3, the first image is the original image, the second is the detected plate [9], and the third is the result obtained after template matching. All the characters in the third image are perfectly recognized. However, in one particular scenario of Case 2, the character B appeared as the number eight on the TV screen. The character B and the number 8 are nearly identical in shape, and a slight variation in these characters can result in false recognition. Further analysis of the test picture confirmed that, along with low brightness and contrast, the plate also contained grain, noise, and dust. Hence, during plate detection, processing steps such as dilation and noise removal [9] affected the character orientation and led to false recognition.
5.4 Performance Analysis
Profiling with the CCS (Code Composer Studio) tool shows that, once the plate has been detected from an image of 720 × 480 pixels, a processor running at 600 MHz requires approximately 90 ms to recognize all the characters on the plate. The overall system, including both plate detection [9] and recognition, requires approximately 300 ms of execution time. Table 5.5 summarizes the average performance profile of the various character recognition algorithms over the 20 input images considered in the test cases. Of the seven functions used in the recognition algorithm, Image_copy, Defragmentation, Process_image, and Reprocess_image depend on the size of the plate. Hence, plate size is an important factor and must be considered during performance analysis.
Table 5.5 Performance Profile Summary
From Table 5.5, it is evident that the Template Matching algorithm takes only 7 ms to recognize all seven characters on the plate. This is reasonably fast, as it takes only about 1 ms to identify each character. The Process_image and Reprocess_image functions require the most computational time; they are optional features used to make the results visible on the TV screen and may be ignored during performance analysis. Hence, the overall performance estimated for the recognition part, using the remaining five functions, is approximately 40 ms. The performance data reported by Clemens Arth of Graz University of Technology in the IEEE paper "Real-Time License Plate Recognition on an Embedded DSP-Platform" shows that both plate detection and recognition take 52.11 ms on an image with a frame resolution of 355 × 288 [12]. Based on their performance measurements, summing the Image Acquisition, Segmentation, and Classification functions yields approximately 11 ms to recognize all the characters once the plate has been detected [12]. However, Clemens, Florian, and Horst performed their analysis for a fixed plate size of 90 × 30, with no resizing taken into consideration. The frame resolution in this project is roughly twice that of their paper, and since the size of the detected plate varies from 150 × 50 to 275 × 100, a rough estimate of 20 ms is set as an achievable target. The functions Image_copy, Defragmentation, Resizing_quad_gen, File_mapping, and Template_matching are the governing factors behind this estimate.
5.5 Optimization Techniques
To optimize the above design, it is necessary to analyze which part of the code consumes the most time. From Table 5.5, it is clear that the best way to optimize the code is to work on the Resizing_quad_gen algorithm. Several steps were taken during the design implementation to improve performance; some of the optimization steps applied during the design, as well as those planned for future use, are described briefly in the following sections.
5.5.1 Code Optimization
The most common method of code optimization is to use library routines wherever possible instead of redesigning code for the same functionality. Library routines are, to a large extent, highly optimized and can improve the performance of the code drastically. Profiling showed that the area of code needing the most optimization was the Resizing_quad_gen function. Careful analysis revealed that a certain piece of code could be replaced by the power function from the math library to generate the four 16-bit vectors, and this was used to optimize Resizing_quad_gen. Similarly, another technique was used to optimize the 'for' loop in the Template_matching function. Before diving into the optimization, consider the basic requirement of template matching: four 16-bit vectors are created from the detected characters, and these vectors are compared bitwise with the template vectors in their respective quadrants. The degree of mismatch is given by the number of 1's in the comparison result. This can be achieved as shown below.
for (j = 0; j < 4; j++)                        /* j selects the quadrant */
{
    a = quadrant1[tt][j] ^ quadrant[x][j];     /* XOR two 16-bit vectors:
                                                  tt selects one of the 36 templates,
                                                  x selects a detected character */
    for (k = 0; k < 16; k++)
    {
        if (((a >> k) & 0x0001) == 1)          /* test bit k of the XOR result */
            cnt = cnt + 1;                     /* count mismatched bits */
    }
}
The above code takes approximately 2300 iterations (16 × 4 × 36 templates) to find the number of mismatches in the four 16-bit vectors for all 36 templates (26 characters and 10 numbers). Multiplied by the seven characters on the plate, this results in roughly 16,000 iterations. The code was optimized in the following manner.
for (j = 0; j < 4; j ++)
{
a = ((quadrant1[tt][j])) ^ ((quadrant[x][j]));
result_match += ((a & 0x0008) == 8) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2) + ((a & 0x0001) == 1);
a = a >> 4;
result_match += ((a & 0x0008) == 8) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2) + ((a & 0x0001) == 1);
a = a >> 4;
result_match += ((a & 0x0008) == 8) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2) + ((a & 0x0001) == 1);
a = a >> 4;
result_match += ((a & 0x0008) == 8) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2) + ((a & 0x0001) == 1);
}
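Each comparison such as `(a & 0x0008) == 8` evaluates to 1 exactly when that bit is set, so every unrolled line adds the number of 1's in one nibble. A small host-side sketch (with hypothetical helper names, compacting the four explicitly written groups into a loop over nibbles) confirms that this scheme counts bits exactly like the original 16-step loop:

```c
/* Reference mismatch count: test each of the 16 bits in turn,
 * as in the unoptimized loop. */
static int count_bits_loop(unsigned short a)
{
    int cnt = 0, k;
    for (k = 0; k < 16; k++)
        if (((a >> k) & 0x0001) == 1)
            cnt = cnt + 1;
    return cnt;
}

/* Optimized count: sum the set bits of one nibble at a time, shifting
 * the word right by 4 between nibbles (the project code writes the
 * four groups out explicitly instead of looping). */
static int count_bits_nibbles(unsigned short a)
{
    int result_match = 0, n;
    for (n = 0; n < 4; n++) {
        result_match += ((a & 0x0008) == 8) + ((a & 0x0004) == 4)
                      + ((a & 0x0002) == 2) + ((a & 0x0001) == 1);
        a = (unsigned short)(a >> 4);
    }
    return result_match;
}
```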
For the same functionality, the optimized code now requires only about 1,000 iterations instead of the initial 16,000, i.e., it is 16 times faster. Another technique used for code optimization was loop combining: when two loops iterate over the same range and the variables in their bodies do not depend on each other, the bodies can be merged into a single loop. This technique was applied in the Defragmentation algorithm and improved its performance by 50%.
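The loop-combining idea can be illustrated with a minimal sketch; the function names and the size N below are illustrative, not taken from the Defragmentation code:

```c
#define N 480  /* illustrative row count */

/* Before combining: two separate loops traverse the same index range. */
static void two_passes(const int in[N], int out_a[N], int out_b[N])
{
    int i;
    for (i = 0; i < N; i++) out_a[i] = in[i] + 1;
    for (i = 0; i < N; i++) out_b[i] = in[i] * 2;
}

/* After combining: the bodies are merged into one loop. This is valid
 * because neither body reads what the other writes, and it halves the
 * loop overhead (index updates, comparisons, branches). */
static void one_pass(const int in[N], int out_a[N], int out_b[N])
{
    int i;
    for (i = 0; i < N; i++) {
        out_a[i] = in[i] + 1;
        out_b[i] = in[i] * 2;
    }
}
```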
5.5.2 Cache Optimization
Provided it is configured correctly, the cache plays a very important role in speeding up CPU execution. As mentioned in Chapter 4, the DSP board provides two levels of cache, L1 and L2. In L1, the program and data caches can be configured separately, while L2 may be configured as mapped SRAM, as cache, or as a combination of both. For the performance profile shown earlier in Table 5.2, L1 was configured with 32 KB each for program and data cache; since this project is a real-time application, allocating the maximum size helps execution speed. L2 was initially configured in "ALL SRAM" mode, meaning no part of it acted as cache. To configure L2 partly as cache and partly as RAM, one must modify the GEL file supplied with the video preview code provided by Texas Instruments. By changing the configuration bits, L2 was set to 768 KB RAM and 256 KB cache, the maximum portion of the L2 SRAM that can be configured as cache. The following table shows the performance profile of the recognition system with and without L2 cache.
Table 5.6 Performance Profile Summary After Cache Optimization
From Table 5.6 above, it can be seen clearly that there is hardly any performance improvement. This is expected, as L2 cache is more effective when external memory is used as an interface for storage and data transfer.
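For reference, the same L1/L2 sizing can also be requested programmatically through the DSP/BIOS BCACHE API (the bcache.h header is already included in the appendix code). The struct and enum names below follow the BCACHE documentation; treat this as a configuration sketch, not the project's actual GEL edits:

```c
#include <bcache.h>  /* DSP/BIOS cache API */

/* Sketch: request 32 KB L1P, 32 KB L1D, and 256 KB of L2 as cache;
 * the remaining 768 KB of L2 stays mapped as SRAM. */
static void configure_caches(void)
{
    BCACHE_Size size;
    size.l1psize = BCACHE_L1_32K;
    size.l1dsize = BCACHE_L1_32K;
    size.l2size  = BCACHE_L2_256K;
    BCACHE_setSize(&size);
}
```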
5.5.3 Compiler Optimization
In this step, the output of the compiler is tuned by trading off particular attributes of the program, namely code size versus speed, to achieve the required performance. Code Composer Studio provides several optimization options, available under the build options of the Project menu, that the user can tune to their requirements. Two options were used to improve performance: Opt Speed vs. Size and Opt Level. Opt Speed vs. Size tells the compiler whether the user is more interested in optimizing speed or in reducing code size; as improving performance was the primary goal, this option was set to −ms0, which favors maximum performance over code size. Opt Level sets the level of optimization performed on the program code [8]. Four Opt Levels are available in Code Composer Studio:
- O0 – Performs register-level optimization.
- O1 – In addition to register optimization, performs local optimization.
- O2 – In addition to the above, performs global optimization.
- O3 – In addition to all of the above, performs file-level optimization.
Of these four levels, and considering the tradeoff between memory usage and speed, the O1 level was chosen as the compiler optimization for this project.
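On the command line, the chosen Code Composer Studio settings correspond to C6000 compiler flags along the following lines; the source file name is illustrative, and the flag spellings follow the TMS320C6000 compiler documentation [8]:

```shell
# -o1  : level 1 (local) optimization, the level chosen for this project
# -ms0 : favor speed over code size
cl6x -o1 -ms0 template_matching.c
```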
5.5.4 Optimization Results and Summary
Table 5.7 summarizes the performance improvement achieved after applying all the optimization techniques explained above. Comparing the two sets of results shows a 50% improvement in performance.
Table 5.7 Performance Profile Summary Before and After Optimization with CPU of 600Mhz
Based on the approximations made in the Performance Analysis (Section 5.4), the achievable target was around 20 ms, in which the Process Image and Reprocess functions were ignored. After excluding those functions' execution times, the measured performance is 15 ms. It can therefore be concluded that the stated performance goal was successfully achieved.
Chapter 6
CONCLUSION
The License Plate Character Recognition system was designed, tested, and implemented successfully on the Texas Instruments EVM DM6437 DSP platform. The new approach of dividing each template into four 16-bit vectors and using them during Template Matching proved very beneficial: after optimization, recognizing the seven characters on the plate requires only approximately 3.5 ms. Moreover, the memory required to store the 36 templates in the database was negligible, further improving performance. The design was verified on several test cases, and as long as the Template Matching algorithm operates on a predefined standard, the results obtained are highly reliable. Conditions such as poor lighting, noise, or blur that cause a character to vary may produce uncertain results. Recognition of every character on the plate also depends heavily on its successful defragmentation; any variation or false segmentation is amplified by the resizing algorithm and results in false recognition.
The EVM DM6437 DSP platform also provides a wireless feature, which could be used with this application to send the recognized data over the Internet. This could in turn support various law-enforcement applications to fight crime and improve public safety. Several further enhancements can be made to this project using other features of the kit.
APPENDIX
Simulation and Implementation Code
1. SIMULATION CODE USING OPEN CV LIBRARIES
#include "stdafx.h"
#include <iostream>
#include <cstdlib>
#include <math.h>
#include <stdio.h>
#include <vector>
#include <fstream>
#include "cv.h"
#include "highgui.h"
#define image_row 302
#define image_column 85
#define template_size_x 64
#define template_size_y 128
using namespace std;
using namespace cv;
int main() {
int value;
int black;
int value1 [40];
int value2 [40];
int raw_count=0;
int total_nos = 0;
int width = 0;
int height = 0;
int row_no1 [5];
int row_no2 [5];
int total_row = 0;
int new_row_no1 [5];
int new_row_no2 [5];
int new_total_row = 0;
int max_difference = 0;
int row_difference[5];
double pixelvalue;
double pixel_sum;
double pixel_average; //[image_column];
int template_x_coordinate[template_size_x];
int template_y_coordinate[template_size_y];
float temp_size_x = template_size_x;
float temp_size_y = template_size_y;
int temporary_str;
cout << "************************************************************\n";
cout << " WELCOME TO THE WORLD OF IMAGE PROCESSING FOR CAR PLATE RECOGNITION \n";
cout << "*************************************************************\n";
// declare a new IplImage pointer
IplImage* myimage;
IplImage* mysample;
CvScalar p ;
p.val[0] = 0;
p.val[1] = 0;
p.val[2] = 0;
p.val[3] = 0;
// load an image
myimage = cvLoadImage("car4_plate2.jpg",0);//change the file name with your own image
mysample = cvLoadImage ("template.jpg",0);
if(!myimage)
cout << "Could not load image file \n";
else cout << "Sample image successfully loaded\n";
if(!mysample)
cout << "Could not load plain image file \n";
else cout << "Plain image successfully loaded\n";
for (int y=0; y < image_column; y++) // image_column
{
for (int x = 0; x < image_row; x++) // image_row
{
pixelvalue = (double)cvGet2D (myimage,y,x).val[0]; // threshold at 150: any value at or above 150 becomes 255 (white), below 150 becomes 0 (black)
//cout << "value of pixel" << y << "\t" << x << "\t" << pixelvalue << endl ;
if (pixelvalue < 150)
{
p.val[0] = 0;
p.val[1] = 0;
p.val[2] = 0;
p.val[3] = 0;
cvSet2D (myimage,y,x,p);
}
else
{
p.val[0] = 255;
cvSet2D (myimage,y,x,p);
}
}
}
//-----------------------------------------------------------------------------------------------
cout << "DEFRAGMENTING THE PLATE HORIZONTALLY" << endl;
for (int y = 0; y <image_column ; y++)
{
pixel_sum = 0;
for (int x=0; x < image_row; x++)
{
pixel_sum = pixel_sum + (double)cvGet2D (myimage,y,x).val[0];
}
pixel_average = image_row - (pixel_sum/255);
if (pixel_average < 4)
{
if ((raw_count > 5) || (y == (image_column - 1))) // record a row at a blank line or at the image boundary
{
row_no1[total_row] = y - raw_count;
row_no2[total_row] = y;
//cout << "pixel_value" << pixel_average_horizontal[y] << endl;
raw_count = 0;
total_row = total_row + 1;
}
else
{
raw_count = 0 ;
}
}
else
{
raw_count = raw_count + 1;
}
}
for (int x = 0; x < total_row ; x++)
{
cout <<"Number_sequence" << "\t" << x << "\t" << "value1" << "\t" << row_no1 [x]<<"\t" << "value2" << "\t" << row_no2[x] << endl;
row_difference[x] = row_no2[x]-row_no1[x];
if (max_difference < row_difference[x])
{ max_difference = row_difference[x];
}
else{}
}
for (int x = 0; x < total_row ; x++)
{
if ((max_difference - row_difference[x]) < 10)
{
new_row_no1[new_total_row] = row_no1[x];
new_row_no2[new_total_row] = row_no2[x];
new_total_row = new_total_row + 1 ;
}
else {}
}
//-------------------------------------------------------------------------------------------------------
// till here plate processed, divided horizontally and found the no of rows
//---------------------------------------------------------------------------------------------------------
cout << "max difference" << max_difference << endl ;
cout << "DEFRAGMENTING THE PLATE VERTICALLY" << endl;
raw_count = 0;
for (int x = 0; x < new_total_row ; x++)
{
cout <<"Number_sequence" << "\t" << x << "\t" << "value1" << "\t" << new_row_no1 [x]<<"\t" << "value2" << "\t" << new_row_no2[x] << endl;
for (int xx = 0; xx < image_row; xx++) // image_row check
{
pixel_sum = 0;
for (int y = new_row_no1[x]; y < new_row_no2[x] ; y++) // row row_no1[x]; y < row_no2[x] no pro
{
pixel_sum = pixel_sum + (double)cvGet2D (myimage,y,xx).val[0];
}
pixel_average = (new_row_no2[x]- new_row_no1[x]) - (pixel_sum/255);
if (pixel_average < 5)
{
if (raw_count > 10) //&&(raw_count < ((570*2)/7))) // false checkin
{
value1[total_nos] = xx - raw_count;
value2[total_nos] = xx;
raw_count = 0;
total_nos = total_nos + 1;
}
else
{
raw_count = 0 ;
}
}
else
{
raw_count = raw_count + 1;
}
}
}
cout << "ALGORITHM FOR RESIZING STARTS HERE" << endl;
cout << "total nos" << total_nos << endl;
int kk = 0;
int modified_value;
float divisor_x[40];
float divisor_y [40];
double bas = 2;
unsigned short int storage_vector;
unsigned short int quadrant [40][4];
for (int n = 0; n < (total_nos ) ; n ++)
{
divisor_x[n] = (value2[n]-value1[n])/ temp_size_x;
if ((n > 0) && (value2[n]-value2[n-1] < 0)) // guard n > 0 to avoid reading value2[-1]
{
kk = kk + 1;
divisor_y[n] = (new_row_no2[kk]- new_row_no1[kk]) / temp_size_y;
}
else
{
divisor_y[n] = (new_row_no2[kk]- new_row_no1[kk]) / temp_size_y;
}
for (int q = 0; q < temp_size_x; q++)
{
temporary_str = (divisor_x[n] * q) + value1[n];
template_x_coordinate[q] = temporary_str;
}
for (int q = 0; q < temp_size_y; q++)
{
temporary_str = (divisor_y[n] * q)+ new_row_no1[kk];
template_y_coordinate[q] = temporary_str;
}
for (int y=0; y < temp_size_y; y++) // image_column
{
for (int x = 0; x < temp_size_x; x++) // image_row
{
pixelvalue =(double)cvGet2D (myimage,template_y_coordinate[y],template_x_coordinate[x]).val[0];
if (pixelvalue == 0 )
{
p.val[0]= 0;
p.val[1]= 0;
p.val[2]= 0;
p.val[3]= 0;
cvSet2D (mysample,y,x,p);
}
else
{
p.val[0]= 255;
cvSet2D (mysample,y,x,p);
}
}
}
//--------------------------------------------------------------
// start defining here algorithm for four quadrant method
//
for (int m = 0; m < 4; m++)
{
value = 16;
storage_vector = 0;
for (int l = 0; l < 2; l++)
{
for (int k = 0; k < 8; k++)
{
black = 128;
value = value - 1 ;
for (int i = 0; i < 16; i++) // 8
{
for (int j = 0; j < 8; j++)
{
width = j + 8*k;
height = i + 16*l + 32*m; // m= 16 l = 8
pixelvalue = (double)cvGet2D (mysample,height,width).val[0]; // ht,wth
if (pixelvalue == 0)
{
black = black + 1;
}
else
{
black = black - 1;
}
}
}
if (black > 60) modified_value = 0; else modified_value = 1; // 0 - black; 1 - white
//use power function and assign the value
storage_vector = storage_vector + (pow (bas,value))* modified_value;
}
}
quadrant [n][m] = storage_vector;
}
}
// Algorithm for File Mapping
// code to Generate output vectors from the Template
//fstream file_op("file_quad1.txt",ios::out);
//for (int i = 0; i < total_nos; i ++)
//{
// for (int j = 0; j < 4; j ++)
// {
// file_op << quadrant [i][j] << endl;
// }
//}
//file_op.close();
char str [40];
unsigned short int quadrant1 [36][4]; // 36 templates: 26 letters + 10 digits
fstream file_in("file1.txt",ios::in);
for (int i = 0; i < 36; i ++)
{
for (int j = 0; j < 4; j ++)
{
file_in.getline(str,40);
quadrant1[i][j] = atoi (str);
}
}
file_in.close();
// Algorithm for Template Matching
unsigned short int result_match ;
unsigned short int a = 0;
int tt = 0;
unsigned short int match_percentage = 0 ;
unsigned short int match_percentage1 = 0 ;
unsigned short int match_percentage2= 0 ;
unsigned short int match_percentage3 = 0 ;
unsigned short int match_percentage4 = 0 ;
unsigned short int highest_percentage = 0 ;
unsigned short int mapping_quadrant[36];
unsigned short int mapping_percentile[36];
for (int x = 0 ; x < total_nos; x++)
{
tt=0;
highest_percentage = 0;
while (tt < 36)
{
a = 0;
result_match = 0 ;
match_percentage = 0 ;
for (int j = 0; j < 4; j ++)
{
a = ((quadrant1[tt][j])) ^ ((quadrant[x][j]));
cout << a << endl ;
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1)+ result_match ;
a = a>>4;
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1)+ result_match;
a = a>>4;
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1)+ result_match;
a = a>>4 ;
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1)+ result_match ;
}
match_percentage = ((64 - result_match)*100)/64 ;
if (highest_percentage < match_percentage)
{
highest_percentage = match_percentage;
mapping_quadrant[x] = tt;
mapping_percentile[x] = highest_percentage;
}
else {
}
tt= tt + 1;
}
if (mapping_percentile[x] < 75 )
{
tt=0;
highest_percentage = 0;
while (tt < 36)
{
a = 0;
result_match = 0 ;
match_percentage = 0 ;
for (int j = 0; j < 4; j ++)
{
a = ((quadrant1[tt][j])) ^ ((quadrant[x][j]));
cout << a << endl ;
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1) ;
match_percentage1 = ((16 - result_match)*100)/16;
result_match = 0;
a = a>>4;
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1);
match_percentage2 = ((16 - result_match)*100)/16;
result_match = 0;
a = a>>4;
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1);
match_percentage3 = ((16 - result_match)*100)/16 ;
result_match = 0;
a = a>>4 ;
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1) ;
match_percentage4 = ((16 - result_match)*100)/16 ;
}
match_percentage = (match_percentage1 + match_percentage2 + match_percentage3 + match_percentage4)/4;
if (highest_percentage < match_percentage)
{
highest_percentage = match_percentage;
mapping_quadrant[x] = tt;
mapping_percentile[x] = highest_percentage;
}
else {
}
tt = tt + 1;
}
}
}
for (int i = 0; i < total_nos; i ++)
{
cout << "mapping_quadrant" << "\t" << i << "\t" << mapping_quadrant[i] << "\t" << mapping_percentile[i]<< endl;
}
cvNamedWindow("Smile", 1);
cvMoveWindow("Smile", 10, 10);
cvShowImage("Smile", myimage);
//wait for key to close the window
cvWaitKey(0);
cvDestroyWindow( "Smile" );
cvReleaseImage( &myimage );
cvReleaseImage( &mysample );
return 0;
}
2. HARDWARE IMPLEMENTATION CODE
/*
 * The code below consists of three parts:
 * 1) Video Preview Code provided by Texas Instruments with the EVMDM6437
 * Evaluation Kit. Only the modified part of this code is shown here.
 * 2) Plate Detection - only the global variables for plate detection are shown
 * here, as they are used in plate recognition. The detection functions are not
 * shown below.
 * 3) Plate Recognition - consists of seven functions, all shown in this code.
 */
/* runtime include files */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdarg.h>
#include <math.h>
/* BIOS include files */
#include <std.h>
#include <gio.h>
#include <tsk.h>
#include <trc.h>
/* PSP include files */
#include <psp_i2c.h>
#include <psp_vpfe.h>
#include <psp_vpbe.h>
#include <fvid.h>
#include <psp_tvp5146_extVidDecoder.h>
#include <c6x.h>
/* CSL include files */
#include <soc.h>
#include <cslr_sysctl.h>
/* BSL include files */
#include <evmdm6437.h>
#include <evmdm6437_dip.h>
/* Video Params Defaults */
#include <vid_params_default.h>
#include <csl.h>
//#include <csl_cache.h>
#include <bcache.h>
// IMAGE PROCESSING HEADER FILES
/* This example supports either PAL or NTSC depending on position of JP1 */
#define STANDARD_PAL 0
#define STANDARD_NTSC 1
#define FRAME_BUFF_CNT 6
#define image_row1 720
#define image_column1 480
#define template_size_x 64
#define template_size_y 128
static int read_JP1(void);
static CSL_SysctlRegsOvly sysModuleRegs = (CSL_SysctlRegsOvly )CSL_SYS_0_REGS;
//*******************************
// Pramod global variables
//*******************************
unsigned char my_image[480][720];
int value1 [40];
int value2 [40];
int total_nos = 0;
int new_row_no1 [5];
int new_row_no2 [5];
int new_total_row = 0;
float divisor_x[40];
float divisor_y [40];
unsigned int check[8][8];
unsigned int template_gen[8][8]; // 16 8
char str [40];
FILE * myfile;
int template_x_coordinate [template_size_x];
int template_y_coordinate [template_size_y];
float temp_size_x = template_size_x;
float temp_size_y = template_size_y;
float div_x;
float div_y;
int pixel_sum;
int pixel_average; //[480];
int row_no1 [5];
int row_no2 [5];
int total_row = 0 ;
int max_difference = 0;
int row_difference[5];
int raw_count = 0;
unsigned short int storage_vector;
unsigned short int quadrant [40][4];
unsigned short int quadrant1 [36][4]; // 36 templates: 26 letters + 10 digits
int my_sample[128][64];
//*******************************************************
// USER DEFINED FUNCTIONS
//*******************************************************
...
...
... STATE USER DEFINED FUNCTION FOR DETECTION HERE
...
...
void image_copy (); // Pass image coordinate, size -- height and width of the detected // image to this function
void process_image(void * currentFrame);
void defragmentation (int image_column,int image_row);
void resizing_quad_gen ();
void template_matching (int temp_size_y, int temp_size_x);
void file_mapping();
void reprocess_image(int image_column, int image_row);
//*******************************************************
// VARIABLE ARRAYS
//*******************************************************
// Global variables used by detection algorithm
unsigned char I[480][720];
unsigned char I_temp[480][720];
int row_width, col_width;
int start, stop;
//*******************************************************
/*
* ======== main ========
*/
void main() {
printf("Video Preview Application\n");
fflush(stdout);
/* Initialize BSL library to read jumper switches: */
EVMDM6437_DIP_init();
sysModuleRegs -> PINMUX0 &= (0x005482A3u);
sysModuleRegs -> PINMUX0 |= (0x005482A3u);
sysModuleRegs -> PINMUX1 &= (0xFFFFFFFEu);
sysModuleRegs -> VPSSCLKCTL = (0x18u);
return;
}
/*
* ======== video_preview ========
*/
void video_preview(void) {
FVID_Frame *frameBuffTable[FRAME_BUFF_CNT];
FVID_Frame *frameBuffPtr;
GIO_Handle hGioVpfeCcdc;
GIO_Handle hGioVpbeVid0;
GIO_Handle hGioVpbeVenc;
int status = 0;
int result;
int i;
int standard;
int width;
int height;
int flag = 1;
...
...
...
... // Video Preview Code provided by Texas Instruments with their evaluation
... // board EVMDM6437 goes here
...
...
/* loop forever performing video capture and display */
while ( flag && status == 0 ) { // ADDED FLAG TO RUN THIS LOOP ONLY ONE TIME
/* grab a fresh video input frame */
FVID_exchange(hGioVpfeCcdc, &frameBuffPtr);
//*************************
// DETECTION PART
//*********************************
...........
..........
.........
.// FUNCTIONS FOR DETECTION GOES HERE
.........
........
//*********************************
// RECOGNITION PART
//*********************************
image_copy ();
defragmentation (row_width,col_width);
resizing_quad_gen ();
file_mapping();
reprocess_image(image_column1,image_row1);
template_matching (template_size_y,template_size_x);
process_image (frameBuffPtr->frame.frameBufferPtr);
//*******************************************************************
BCACHE_wbInv((void*)(frameBuffPtr->frame.frameBufferPtr), 480*720*2, 1);
/* display the video frame */
FVID_exchange(hGioVpbeVid0, &frameBuffPtr);
flag = 0;
}
}
/*
* ======== read_JP1 ========
* Read the PAL/NTSC jumper.
*
* Retry, as I2C sometimes fails:
*/
static int read_JP1(void)
{
int jp1 = -1;
while (jp1 == -1) {
jp1 = EVMDM6437_DIP_get(JP1_JUMPER);
TSK_sleep(1);
}
return(jp1);
}
//******************************************************
// FUNCTIONS FOR RECOGNITION ALGORITHM
//*****************************************************
//*****************************************************
// PROCESS IMAGE TO DIGITAL FORMAT
//*****************************************************
void image_copy ()
{
int x,y;
for (y=0; y < row_width; y++) // image_column
{
for ( x = 0; x < col_width; x++) // image_row
{
if(my_image[y][x] < 150)
{
my_image[y][x] = 0;
} // THRESHOLD
else
{
my_image[y][x]= 255;
}
}
}
}
//******************************************************
// FUNCTION TO DISPLAY RESULTS ON THE TV
//******************************************************
void process_image (void * currentFrame)
{
int x,y,m; // change y
m = 0;
for ( y=0; y < 480; y++) // image_column
{
x = 0 ;
for ( m = 0; m < (720 * 2); m=m+2) // image_row
{
* (((unsigned char * )currentFrame)+ (y * 720 * 2 ) + m) = 0x80;
* (((unsigned char * )currentFrame)+ (y * 720 * 2) + m+1) = my_image[y][x];
x = x + 1;
}
}
}
//*****************************************************
// FUNCTION TO FIND CHARACTERS ON THE PLATE
//*****************************************************
void defragmentation (int image_column,int image_row)
{
int y,x,xx,e;
for ( y = 0; y <image_column ; y++)
{
pixel_sum = 0;
for ( x=0; x < image_row; x++)
{
pixel_sum = pixel_sum + my_image[y][x];
}
pixel_average = (image_row - (pixel_sum/255));
if ((pixel_average < 1)||(y == (image_column -1))) // check a value
{
if (raw_count > 5) // false checkin made
{
row_no1[total_row] = y - raw_count;
row_no2[total_row] = y;
raw_count = 0;
total_row = total_row + 1;
}
else
{
raw_count = 0 ;
}
}
else
{
raw_count = raw_count + 1;
}
}
for ( e = 0; e < total_row ; e++)
{
row_difference[e] = row_no2[e]-row_no1[e];
if (max_difference < row_difference[e])
{
max_difference = row_difference[e];
}
else
{
}
}
for ( x = 0; x < total_row ; x++)
{
if ((max_difference - row_difference[x]) < 20)
{
new_row_no1[ new_total_row] = row_no1[x];
new_row_no2[ new_total_row] = row_no2[x];
new_total_row = new_total_row + 1;
}
else {}
}
//--------------------------------------------------------------------------------------------------
// Till here plate processed, divided horizontally and found the no of rows
//-------------------------------------------------------------------------------------------------
raw_count = 0 ;
for ( x = 0; x < ( new_total_row) ; x++)
{
for ( xx = 0; xx < image_row; xx++) // image_row check
{
pixel_sum = 0;
for ( y = new_row_no1[x]; y < new_row_no2[x] ; y++) //
{
pixel_sum = pixel_sum + my_image[y][xx]; // troubleshoot if not getting
}
pixel_average = (new_row_no2[x]-new_row_no1[x])- (pixel_sum/255);
if (pixel_average < 5)
{pixel_average = 0;} else {}
if ((pixel_average == 0 )||(xx == (image_row -1)))
{
if (raw_count > 20) //&&(raw_count < ((570*2)/7))) // false checking
{
value1[total_nos] = xx - raw_count;
value2[total_nos] = xx;
raw_count = 0;
total_nos = total_nos + 1;
}
else
{
raw_count = 0 ;
}
}
else
{
raw_count = raw_count + 1;
}
}
}
}
//*******************************************************************
// RESIZING AND FOUR QUADRANT ALGORITHM
//******************************************************************
void resizing_quad_gen ()
{
int n,pixel_value,it,j,k,l,m,x,y,q,black,height1,width1,modified_value,value3;
double bas = 2;
int kk = 0;
int temporary_str_x;
int temporary_str_y;
for ( n = 0; n < (total_nos) ; n ++)
{
divisor_x[n] = (value2[n]-value1[n])/ temp_size_x;
if ((n > 0) && (value2[n]-value2[n-1] < 0)) // guard n > 0 to avoid reading value2[-1]
{
kk = kk + 1;
divisor_y[n] = (new_row_no2[kk]- new_row_no1[kk]) / temp_size_y;
}
else
{
divisor_y[n] = (new_row_no2[kk]- new_row_no1[kk]) / temp_size_y;
}
for ( q = 0; q < temp_size_x; q++)
{
temporary_str_x = (divisor_x[n] * q) + value1[n];
template_x_coordinate[q] = temporary_str_x;
}
for ( q = 0; q < temp_size_y; q++)
{
temporary_str_y = (divisor_y[n] * q)+ new_row_no1[kk];
template_y_coordinate[q] = temporary_str_y;
}
for ( y=0; y < temp_size_y; y++) // image_column flaw----
{
for ( x = 0; x < temp_size_x; x++) // image_row
{
pixel_value = my_image[template_y_coordinate[y]][template_x_coordinate[x]]; // copy the already-binarized pixel (0 or 255) into the fixed-size template
if (pixel_value == 0 )
{
my_sample[y][x] = 0; // define my_sample
}
else
{
my_sample[y][x] = 255;
}
}
}
//-----------------------------------------
// Algorithm for four quadrant method
//-------------------------------------------------
for ( m = 0; m < 4; m++)
{
value3 = 16;
storage_vector = 0;
for ( l = 0; l < 2; l++)
{
for ( k = 0; k < 8; k++)
{
black = 128;
value3 = value3 - 1 ;
for ( it = 0; it < 16; it++) // 8
{
for ( j = 0; j < 8; j++)
{
width1 = j + 8*k;
height1 = it + 16*l + 32*m; // m= 16 l = 8
pixel_value = my_sample[height1][width1]; // ht,wth
if (pixel_value == 0)
{
black = black + 1;
}
else
{
black = black - 1;
}
}
}
if (black > 60){ modified_value = 0;} else {modified_value = 1;} // 0 - black; 1 - white
//use power function and assign the value
storage_vector = storage_vector + (pow(bas,value3))* modified_value;
}
}
quadrant [n][m] = storage_vector;
}
}
}
//*******************************************
// FUNCTION FOR FILE MAPPING
//*******************************************
void file_mapping()
{
char str [40];
int i,j;
myfile = fopen ("file1.txt","r");
for ( i = 0; i < 36; i ++)
{
for ( j = 0; j < 4; j ++)
{
fscanf (myfile, "%s", str);
quadrant1[i][j] = atoi (str);
}
}
fclose (myfile);
}
//********************************************************
// FUNCTION TO CLEAR IMAGE ON THE TV
//********************************************************
void reprocess_image(int image_column, int image_row)
{
int y,m;
for ( y=0; y < image_column; y++) // image_column
{
for ( m = 0; m < image_row; m++) // image_row
{
my_image[y][m] = 255;
}
}
}
//*************************************************
// TEMPLATE MATCHING ALGORITHM
//************************************************
void template_matching (int temp_size_y,int temp_size_x)
{
int x,y,j,i,rr,l,k,q,bit_mapp,pixel_value;
int temporary_str_x;
int temporary_str_y;
unsigned short int result_match ;
unsigned short int a = 0;
unsigned short int match_percentage = 0 ;
unsigned short int match_percentage1 = 0 ;
unsigned short int match_percentage2= 0 ;
unsigned short int match_percentage3 = 0 ;
unsigned short int match_percentage4 = 0 ;
unsigned short int highest_percentage = 0 ;
unsigned short int mapping_quadrant[36];
unsigned short int mapping_percentile[36];
int tt = 0;
int displace;
for ( x = 0 ; x < total_nos; x++)
{
tt=0;
highest_percentage = 0;
while (tt < 36)
{
a = 0;
result_match = 0 ;
match_percentage = 0 ;
for ( j = 0; j < 4; j ++)
{
a = ((quadrant1[tt][j])) ^ ((quadrant[x][j]));
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1)+ result_match ;
a = a>>4;
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1)+ result_match;
a = a>>4;
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1)+ result_match;
a = a>>4 ;
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1)+ result_match ;
}
match_percentage = ((64 - result_match)*100)/64 ;
if (highest_percentage < match_percentage)
{
highest_percentage = match_percentage;
mapping_quadrant[x] = tt;
mapping_percentile[x] = highest_percentage;
}
else {
}
tt= tt + 1;
}
}
for ( i = 0; i < total_nos; i ++)
{
if (mapping_percentile[i] < 75 )
{
tt=0;
highest_percentage = 0;
while (tt < 36)
{
a = 0;
result_match = 0 ;
match_percentage = 0 ;
for ( j = 0; j < 4; j ++)
{
a = ((quadrant1[tt][j])) ^ ((quadrant[i][j]));
//cout << a << endl ;
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1) ;
match_percentage1 = ((16 - result_match)*100)/16;
result_match = 0;
a = a>>4;
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1);
match_percentage2 = ((16 - result_match)*100)/16;
result_match = 0;
a = a>>4;
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1);
match_percentage3 = ((16 - result_match)*100)/16 ;
result_match = 0;
a = a>>4 ;
result_match = ((a & 0x0008) == 8 ) + ((a & 0x0004) == 4) + ((a & 0x0002) == 2 ) + ((a & 0x0001)== 1) ;
match_percentage4 = ((16 - result_match)*100)/16 ;
}
match_percentage = (match_percentage1 + match_percentage2 + match_percentage3 + match_percentage4)/4;
if (highest_percentage < match_percentage)
{
highest_percentage = match_percentage;
mapping_quadrant[i] = tt;
mapping_percentile[i] = highest_percentage;
}
else {
}
tt = tt + 1;
}
}
}
for ( i = 0; i < ( total_nos ); i ++)
{
rr = mapping_quadrant[i];
for ( l = 0; l < 4; l ++) // i 16
{
a = quadrant1[rr][l];
for ( j = 0; j < 2; j ++)
{
for ( k = 0; k < 8; k ++)
{
bit_mapp = (a & 0x8000);
if (bit_mapp == 0)
{
template_gen[j+2*l][k] = 0;
}
else
{
template_gen[j+2*l][k] = 255;
}
a = a<<1;
}
}
}
div_y = 0.0625; // value obtained by dividing 8 by 128 and 8 by 64
div_x = 0.125;
displace = 50;
for ( q = 0; q < temp_size_x; q++)
{
temporary_str_x = (div_x * q) ;
template_x_coordinate[q] = temporary_str_x;
}
for ( q = 0; q < temp_size_y; q++)
{
temporary_str_y = (div_y * q);
template_y_coordinate[q] = temporary_str_y;
}
for ( y=0; y < temp_size_y; y++) // image_column
{
for ( x = 0; x < temp_size_x; x++) // image_row
{
pixel_value = template_gen[template_y_coordinate[y]][template_x_coordinate[x]]; // read the reconstructed template pixel (already 0 or 255)
//printf ("pixel_value %d " , pixel_value);
if (pixel_value == 0 )
{
my_image[y+100][x + 70*i + displace ] = 0; // define my_sample
}
else
{
my_image[y+100][x + 70*i + displace] = 255;
}
}
}
}
}
REFERENCES
[1] Federal Signal Corporation, “Automatic License Plate Recognition - Investment Justification and Purchasing Guide”, pp. 1-7, August 2008.
[2] Xilinx Inc., “The Xilinx LogiCORE™ IP RGB to YCrCb Color-Space Converter”, pp. 1-5, July 2010.
[3] California Department of Motor Vehicles, License Plate Introduction. http://www.dmv.ca.gov/pubs/plates/platestitlepage.htm
[4] Gary Bradski and Adrian Kaehler, “Learning OpenCV”, O’Reilly Media Inc., First Edition, September 2008.
[5] Intel Corp., “Open Source Computer Vision Library”, Reference Manual, December 2000.
[6] Texas Instruments Inc., “TMS320DM6437 Digital Media Processor”, Texas, pp. 1-5, 211-234, June 2008.
[7] Texas Instruments Inc., “TMS320DM643x DMP Peripherals Overview Reference Guide”, pp. 15-17, June 2007.
[8] Texas Instruments Inc., “TMS320C6000 Programmer’s Guide”, Texas, pp. 37-84, March 2000.
[9] Naikur Gohil, “Car License Plate Detection”, Master’s Project Report, California State University, Sacramento, Fall 2010.
[10] Texas Instruments Inc., “TMS320C64x+ DSP Cache”, User’s Guide, pp. 14-26, February 2009.
[11] ITU-R Recommendation BT.601-5, International Telecommunication Union, 1995.
[12] Clemens Arth, Florian Limberger, and Horst Bischof, “Real-Time License Plate Recognition on an Embedded DSP-Platform”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, June 2007.