
Investigative Image Processing II (Proceedings Volume)



Table of Contents
Listed below are the papers found in this volume.

Automatic video analysis and compilation system AVACS
The analysis of video observation tapes can be tedious and tiring work. An analysis system can relieve this burden and create a compilation tape autonomously. Working unattended, AVACS creates a tape and a Compact Disc (CD) containing only the images of interest.

Sampling theory for digital video acquisition: the guide for the perplexed user
Recently, the law enforcement community with professional interests in applications of image/video processing technology has been exposed to scientifically flawed sales assertions regarding the advantages and disadvantages of various hardware image acquisition devices (video digitizing cards). These assertions claim that the SMPTE CCIR-601 standard must be used when digitizing NTSC composite video signals from surveillance videotapes, implying in particular that a 720×486 pixel sampling grid is absolutely required to capture all the video information encoded in the composite signal. Fortunately, such statements can be analyzed directly within the strict mathematical framework of Shannon's sampling theory. Here we apply the classical Shannon-Nyquist results to the process of digitizing composite analog video from videotapes and dispel these theoretically unfounded assertions.

Use of gait parameters of persons in video surveillance systems
The gait parameters of eleven subjects were evaluated to provide data for subject recognition. Video images of these subjects were acquired in frontal, transversal, and sagittal (a plane parallel to the median of the body) views. The subjects walked by at their usual walking speed. The measured parameters were the hip, knee, and ankle joint angles and their time-averaged values, the thigh, foot, and trunk angles, step length and width, cycle time, and walking speed.
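The Shannon-Nyquist argument in the sampling-theory paper above can be made concrete with textbook NTSC figures. This is a minimal sketch; the bandwidth and active-line values below are standard NTSC numbers, not taken from the paper itself:

```python
# Shannon-Nyquist check for digitizing NTSC composite video.
LUMA_BANDWIDTH_HZ = 4.2e6   # NTSC luminance bandwidth limit (standard figure)
ACTIVE_LINE_S = 52.6e-6     # active (visible) portion of one scan line

nyquist_rate_hz = 2 * LUMA_BANDWIDTH_HZ            # minimum sampling rate
min_samples_per_line = nyquist_rate_hz * ACTIVE_LINE_S

print(f"Nyquist rate: {nyquist_rate_hz / 1e6:.1f} MHz")
print(f"Minimum samples per active line: {min_samples_per_line:.0f}")
# A 720-sample line (CCIR-601 at 13.5 MHz) therefore exceeds the theoretical
# minimum for a 4.2 MHz-limited signal: sufficient, but not strictly required.
```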
Correlation coefficients within and between subjects for the hip, knee, and ankle rotation patterns in the sagittal view, and for the trunk rotation pattern in the transversal view, were similar. This implies that the intra- and inter-individual variances were equal; therefore, these gait parameters could not distinguish between subjects. A simple ANOVA with a follow-up test was used to detect significant differences in the mean hip, knee, and ankle joint angles, thigh angle, step length, step width, walking speed, cycle time, and foot angle. The number of significant differences between subjects defined the usefulness of a gait parameter. The parameter with the most significant differences between subjects was the foot angle (64%-73% of the maximal attainable significant differences), followed by the time-averaged hip joint angle (58%) and the step length (45%). The other parameters scored less than 25%, which is poor for recognition purposes. Based on this research, the use of gait for identification purposes is not yet possible.

Semi-automatic image segmentation and object tracking framework for investigative and surveillance-oriented applications
In this paper we discuss a combination of several image processing and computer vision components for semi-automatically delineating and tracking moving objects. First, we introduce our motion-based segmentation framework, which uses an improved watershed technique to obtain an image pre-segmentation and an improved block or segment matching technique to obtain an initial estimate of the motion field. The pre-segmentation and motion estimation results are then fed into an additional component that reduces the typical watershed oversegmentation until only a few coherently moving objects remain. Next, we discuss two tools that can be used to improve or correct the obtained segmentation results.
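The one-way ANOVA comparison used in the gait study above can be sketched as follows. The subjects and joint-angle values are synthetic, and SciPy's `f_oneway` stands in for whatever ANOVA implementation the authors used:

```python
# One-way ANOVA across subjects for a single gait parameter (e.g. foot angle).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Three hypothetical subjects, 10 gait cycles each; subject C's mean differs.
subject_a = rng.normal(loc=12.0, scale=1.5, size=10)   # degrees
subject_b = rng.normal(loc=12.5, scale=1.5, size=10)
subject_c = rng.normal(loc=18.0, scale=1.5, size=10)

f_stat, p_value = f_oneway(subject_a, subject_b, subject_c)
print(f"F = {f_stat:.1f}, p = {p_value:.2g}")
# A small p-value indicates that at least one subject's mean differs; a
# follow-up pairwise test would then locate the differing subject pairs.
```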
We also investigate a simple yet efficient object-oriented approach for tracking moving segments, and discuss the concept of truncated segment matching, which combines characteristics of both traditional block matching and feature-based motion estimation. Additionally, we use polynomial motion models to describe and predict the observed motion. The proposed segment matching approach is shown to allow controllable and relatively fast computation, which is illustrated with image segmentation and video tracking results. Finally, we briefly discuss the use of these techniques within the domain of investigative and surveillance-oriented applications.

CCD fingerprint method for digital still cameras
We have reported the Charge-Coupled Device (CCD) fingerprint method for the identification of digital still cameras. The method exploits the nonhomogeneous nature of dark currents in CCDs. In this study, we measured the CCD defect patterns of various digital still cameras, from professional to inexpensive models, at various resolutions and compression rates. A CCD defect pattern was detected from a single image in all cameras except one inexpensive low-resolution model. Changing the resolution mode of a camera generally did not affect the positions of the defect points, although in some cases the relative pixel intensity varied. Image compression did not affect the pixel positions in blank images at normal compression rates, but when there was light in the background, the pixel positions became blurred as the compression rate increased. In conclusion, the CCD fingerprint method can in principle be applied to digital still cameras; that is, individual camera identification can be achieved using images taken with the camera.
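The CCD fingerprint idea can be sketched as follows: anomalously bright (dark-current) pixels extracted from dark frames serve as a per-camera signature. The frames, defect positions, and threshold below are synthetic assumptions for illustration, not the paper's data:

```python
# Hot-pixel "fingerprint" extraction and comparison on simulated dark frames.
import numpy as np

def defect_pattern(dark_frame, threshold=50.0):
    """Return the set of pixel coordinates whose dark signal is anomalous."""
    ys, xs = np.nonzero(dark_frame > threshold)
    return set(zip(ys.tolist(), xs.tolist()))

rng = np.random.default_rng(1)

def simulate_dark_frame(defects, shape=(64, 64), noise=5.0):
    frame = rng.normal(0.0, noise, size=shape)   # readout noise
    for (y, x) in defects:
        frame[y, x] += 200.0                     # strong dark-current spike
    return frame

camera_a_defects = {(3, 7), (20, 41), (55, 12)}
camera_b_defects = {(9, 9), (30, 30)}

frame1 = simulate_dark_frame(camera_a_defects)   # camera A, image 1
frame2 = simulate_dark_frame(camera_a_defects)   # camera A, image 2
frame3 = simulate_dark_frame(camera_b_defects)   # camera B

print("A vs A match:", defect_pattern(frame1) == defect_pattern(frame2))
print("A vs B match:", defect_pattern(frame1) == defect_pattern(frame3))
```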
High-quality still images from video frame sequences
This article tackles the classic super-resolution (SR) problem [Elad99] of obtaining a high-resolution (HR) still image from a sequence of low-resolution (LR) images that have been warped and sub-sampled. The goal is to recover frequencies higher than the Nyquist frequency by merging the LR information. We focus on the critical step of the SR process that precedes any fusion technique: the registration of the LR images against an arbitrary reference image at sub-pixel accuracy. We propose a registration algorithm for color images, derived from the one described by Djamdji and Bijaoui in Ref. 2. This algorithm achieves automatic feature-based registration at sub-pixel accuracy and seeks to exploit the multi-band (RGB) information in a color image to improve robustness and accuracy over more usual greyscale registration. The fusion of the data from the LR images into a higher-resolution image is then carried out through thin-plate spline interpolation. The results show the algorithm's performance on simulated image sets, and the influence of several parameters on the registration algorithm is described.

Advancing the science of forensic data management
Many individual elements comprise a typical forensics process. Collecting evidence, analyzing it, and using the results to draw conclusions are all mutually distinct endeavors. Different physical locations and personnel are involved, juxtaposed against an acute need for security and data integrity. Using digital technologies and the Internet's ubiquity, these diverse elements can be conjoined with digital data as the common element. The result is a new data management process that can serve all elements of the community. The first step is recognition of a forensics lifecycle.
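Registration is the critical step in the super-resolution paper above. As an illustration only, the sketch below recovers a whole-pixel shift between two frames by phase correlation, a standard alternative to the feature-based method the authors actually use; sub-pixel accuracy would additionally require interpolating around the correlation peak:

```python
# Integer-pixel image registration by phase correlation (FFT-based).
import numpy as np

def phase_correlate(ref, moved):
    """Estimate the cyclic (dy, dx) shift with moved == roll(ref, (dy, dx))."""
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real          # delta peak at the shift
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(2)
ref = rng.random((32, 32))                   # stand-in for an LR frame
moved = np.roll(ref, shift=(5, 9), axis=(0, 1))
dy, dx = phase_correlate(ref, moved)
print(dy, dx)                                # recovered shift
```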
Evidence gathering, analysis, storage, and use in legal proceedings are actually distinct parts of a single end-to-end process; it is therefore hypothesized that a single data system can accommodate each constituent phase using common network and security protocols. This paper introduces the idea of a web-based Central Data Repository. Its cornerstone is anywhere, anytime Internet upload, viewing, and report distribution. Archives exist indefinitely after being created, and high-strength security and encryption protect the data and ensure that subsequent case file additions do not violate chain-of-custody or other handling provisions. Several legal precedents have been established for using digital information in courts of law, and in fact, effective prosecution of cyber crimes absolutely relies on its use. An example is a US Department of Agriculture division's use of digital images to back up its inspection process, with pictures and information retained on secure servers to enforce the Perishable Agricultural Commodities Act. Forensics is a cumulative process. Secure, web-based data management solutions, such as the Central Data Repository postulated here, can support each process step. Logically marrying digital technologies with Internet accessibility should help nurture a thought process that explores alternatives for making forensics data accessible to authorized individuals, whenever and wherever they need it.

Strategies for the automated recognition of marks in forensic science
To enable the efficient comparison of striation marks in forensic science, tools for the automated detection of similarities between them are necessary. Such marks show a groove-like texture which can be considered a fingerprint of the associated tool. Thus, reliable detection of connections between different toolmarks from the same tool can be established.
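The chain-of-custody property claimed for the Central Data Repository above can be illustrated with a minimal hash chain: each case-file addition stores the hash of the previous record, so any later alteration is detectable. The record format and the SHA-256 choice are assumptions for illustration, not the paper's actual protocol:

```python
# Tamper-evident log of successive case-file additions.
import hashlib

def add_record(chain, data: bytes):
    """Append a record whose hash covers the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256(prev_hash.encode() + data).hexdigest()
    chain.append({"data": data, "hash": digest})

def verify(chain):
    """Recompute every link; any altered record breaks the chain."""
    prev_hash = "0" * 64
    for rec in chain:
        if hashlib.sha256(prev_hash.encode() + rec["data"]).hexdigest() != rec["hash"]:
            return False
        prev_hash = rec["hash"]
    return True

custody_log = []
add_record(custody_log, b"image_001.jpg uploaded by examiner A")
add_record(custody_log, b"enhancement report added by examiner B")
print(verify(custody_log))                 # chain intact

custody_log[0]["data"] = b"image_001.jpg uploaded by examiner C"
print(verify(custody_log))                 # tampering detected
```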
To avoid the time-consuming visual inspection of toolmarks, automated approaches for the evaluation of marks are essential. Such approaches are commonly based on meaningful characteristics extracted from images of the marks to be examined. Besides a high recognition rate, the required computation time plays an important role in the design of an adequate comparison strategy. The cross-correlation function presented in this paper provides a faithful quantitative measure of the degree of similarity. It is shown that appropriate modeling of the signal characteristics considerably improves the performance of methods based on the cross-correlation function. A strategy for the quantitative assessment of comparison strategies is introduced; it is based on processing a test archive of marks and analyzing the comparison results statistically. For a convenient description of the assessment results, meaningful index numbers are discussed.

Automated comparison of striation marks with the system GE/2
In this paper, the newly developed system GE/2 for the automated identification of toolmarks and firearms is presented. It is based on a signal processing strategy that enables automated evaluation of pictures taken from striation patterns. To this end, a signal model suitable for describing the characteristics of groove textures is introduced. To obtain high-quality data, a powerful imaging approach based on fusion techniques is presented, applicable to both illumination and focusing issues. To ensure high reliability in the automated comparison, characteristic features, called signatures, are required; a newly developed strategy is proposed to straighten curved grooves by means of the signal model mentioned above. Based on the signatures obtained, an automated comparison using correlation techniques is applied.
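The correlation-based comparison used by both striation papers above can be sketched on synthetic 1-D profiles: the similarity score is the peak of the normalized cross-correlation, and candidate marks are then sorted in descending order of similarity. The profile model and the noise level are invented for illustration:

```python
# Normalized cross-correlation of 1-D striation profiles.
import numpy as np

def ncc_max(a, b):
    """Peak of the normalized cross-correlation of two equal-length profiles."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corr = np.correlate(a, b, mode="full") / len(a)
    return corr.max()

rng = np.random.default_rng(3)
tool_profile = rng.normal(size=400)        # groove-depth trace of one tool
same_tool = np.roll(tool_profile, 17) + 0.1 * rng.normal(size=400)
other_tool = rng.normal(size=400)

scores = {"mark A (same tool)": ncc_max(tool_profile, same_tool),
          "mark B (other tool)": ncc_max(tool_profile, other_tool)}
# Candidate marks in descending order of detected similarity:
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.2f}")
```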
As a result, the marks in question are sorted in descending order of detected similarity. Finally, the realized system GE/2 is presented, consisting of an Image Acquisition Station and a computer that performs the data processing. A benchmarking test applied to GE/2 demonstrates its suitability for forensic applications.

Data mining in forensic image databases
Forensic image databases appear in a wide variety. The oldest computerized database holds fingerprints; other examples are databases of shoeprints, handwriting, cartridge cases, toolmarks, drug tablets, and faces. In these databases, searches are conducted on shape, color, and other forensic features, and a wide variety of image search methods exists. The result is a list of candidates that must be compared manually. The challenge in forensic science is to combine the information acquired: combining the shape of a partial shoe print with information on a cartridge case can result in stronger evidence. It is expected that by searching these databases in combination with other databases (e.g., network traffic information), more crimes will be solved. Searching in image databases is still difficult, as databases of faces show: due to lighting conditions and the alteration of faces by aging, it is nearly impossible for an image search method to rank the right face from a database of one million faces in top position without using other information. Methods for data mining in image databases (e.g., the MPEG-7 framework) are discussed, and expectations for future developments are presented.

3D view on the crossing lines problem in document investigation
Optical examination, lifting techniques, and electron microscopy are the most widely used methods for determining the writing order of crossing texts.
Measuring the topography of the surface can provide additional information or determine the writing order where the common methods fail. Laser profilometry, as a non-contact technique, leaves the surface of the questioned documents unaltered. Several samples of crossing strokes, differing in writing pressure and support, were scanned. Depending on the type of support, a paper block or a metal surface, two phenomena were observed: the paper fibers are compressed, and the line shows through. Despite the complicating inhomogeneous structure of the paper, the writing sequence was determined unambiguously in the majority of the experiments. This makes laser profilometry a promising technique for fraud detection.

Forensic aspects of digital evidence: contributions and initiatives by the National Center for Forensic Science (NCFS)
Digital evidence is information of probative value that is stored or transmitted in digital form. It can exist as words (text), sound (audio), or images (video or still pictures). Law enforcement and forensic scientists are faced with collecting and analyzing these new forms of evidence, which previously existed on paper or on magnetic tape, and must apply both law and science to the processes they use. Extrapolating the old processes to the new formats has been proceeding since the 1980s. Regardless of the output format, all digital evidence has a certain commonality, so one would assume that the rules of evidence and the scientific approach would also share common characteristics; obviously, there is also divergence due to the differences in outputs. It is time to approach the issues regarding digital evidence in a more deliberate, organized, and scientific manner. The program outlined by the NCFS would explore these various formats, the features they share with traditional types of forensic evidence, their divergent features, and the scientific basis for the handling of digital evidence.
Our web site, www.ncfs.org, describes our programs.

Diffraction image method to measure shape distribution function of micrometer particles
The effect of particle shape on the measurement result is important when measuring particle size distributions; the distribution of particle shapes should therefore be taken into account in an advanced measuring method, whereas traditional methods ignore shape variation. Because a microscope is not suited to examining micrometer particles in great numbers, a diffraction image processing method is put forward: a sample board collects the moving particles to form a diffraction sample, which is irradiated by a laser; the resulting diffraction pattern is then processed numerically on a PC using several models. In this way, the shape distribution function can be obtained in a single measurement for a large number of particles.
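The diffraction method in the last paper rests on the Fraunhofer relation: the far-field pattern of a small particle is proportional to the squared magnitude of the Fourier transform of its silhouette, so particle shape leaves a measurable anisotropy in the pattern. A toy sketch of this relationship follows; the particle shapes and the anisotropy measure are invented for illustration:

```python
# Far-field diffraction patterns of model particle silhouettes via 2-D FFT.
import numpy as np

N = 128
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]

round_particle = ((x**2 + y**2) < 8**2).astype(float)            # disc
elongated = ((x**2 / 16**2 + y**2 / 4**2) < 1).astype(float)      # ellipse

def pattern(aperture):
    """Fraunhofer intensity pattern: |FFT of the silhouette|^2."""
    return np.abs(np.fft.fftshift(np.fft.fft2(aperture)))**2

def anisotropy(p):
    """Ratio of the pattern's second moments along the two frequency axes."""
    return (p * x**2).sum() / (p * y**2).sum()

a_round = anisotropy(pattern(round_particle))    # ~1: isotropic pattern
a_elong = anisotropy(pattern(elongated))         # <1: pattern compressed in x
print(f"round: {a_round:.2f}, elongated: {a_elong:.2f}")
```

A round particle yields an isotropic pattern, while an elongated one compresses the pattern along its long axis; inverting such anisotropy measures over many particles is the kind of model-based processing the abstract alludes to.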