Mapping data objects. Matching images based on "features"

Here is my commented solution in ES3 (details after the code):

Object.equals = function( x, y ) {
  if ( x === y ) return true;
    // if both x and y are null or undefined and exactly the same
  if ( ! ( x instanceof Object ) || ! ( y instanceof Object ) ) return false;
    // if they are not strictly equal, they both need to be Objects
  if ( x.constructor !== y.constructor ) return false;
    // they must have the exact same prototype chain, the closest we can do is
    // test their constructor.
  for ( var p in x ) {
    if ( ! x.hasOwnProperty( p ) ) continue;
      // other properties were tested using x.constructor === y.constructor
    if ( ! y.hasOwnProperty( p ) ) return false;
      // allows to compare x[ p ] and y[ p ] when set to undefined
    if ( x[ p ] === y[ p ] ) continue;
      // if they have the same strict value or identity then they are equal
    if ( typeof( x[ p ] ) !== "object" ) return false;
      // Numbers, Strings, Functions, Booleans must be strictly equal
    if ( ! Object.equals( x[ p ], y[ p ] ) ) return false;
      // Objects and Arrays must be tested recursively
  }
  for ( p in y ) {
    if ( y.hasOwnProperty( p ) && ! x.hasOwnProperty( p ) ) return false;
      // allows x[ p ] to be set to undefined
  }
  return true;
};

While developing this solution, I took a particularly close look at corner cases and efficiency, while trying to keep the solution simple and, hopefully, somewhat elegant. JavaScript allows both null and undefined properties, and objects have prototype chains, which can lead to very different behavior if not checked.

First, I decided to extend Object rather than Object.prototype, mainly because null could not be one of the compared objects otherwise, and I believe that null should be a valid value to compare with another object. There are also legitimate concerns, noted by others, about extending Object.prototype because of possible side effects on other code.

Extra care must be taken because JavaScript allows object properties to be set to undefined, i.e. there can be properties whose values are explicitly undefined. The solution above verifies that both objects have the same set of such properties before reporting equality. This can only be accomplished by checking for the existence of properties with Object.hasOwnProperty(property_name). Also note that JSON.stringify() removes properties that are set to undefined, so comparisons based on it will ignore properties with the value undefined.

Functions should only be considered equal if they share the same reference, not merely the same code, because that would not take the functions' prototypes into account; comparing their source code therefore does not guarantee that they share the same prototype object.

Both objects must have the same prototype chain, not merely the same properties. Cross-browser, this can only be approximated by comparing the constructor of both objects for strict equality. ECMAScript 5 lets you check the actual prototype with Object.getPrototypeOf(). Some web browsers also offer the __proto__ property, which does the same. A possible improvement to the above code would be to use one of these methods whenever available.

The use of strict comparison is paramount here, because 2 should not be considered equal to "2.0000", nor should false be considered equal to null, undefined, or 0.
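The difference is easy to see in the console; a minimal sketch (values chosen for illustration):

```javascript
// Loose equality (==) coerces types and would wrongly equate these values;
// strict equality (===) compares type and value, which is what we need here.
console.log(2 == "2.0000");   // true  - the string is coerced to the number 2
console.log(2 === "2.0000");  // false - different types
console.log(false == 0);      // true  - the boolean is coerced to a number
console.log(false === 0);     // false
console.log(false == null);   // false - null is only loosely equal to undefined
```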

Efficiency considerations led me to compare properties for strict equality as early as possible, and only if that fails to look at the typeof of those properties. The speed boost can be significant for large objects with many scalar properties.

No more than two loops are required: the first checks the properties of the left object, the second checks the properties of the right object, testing only for existence (not value) in order to catch properties that are defined with the value undefined.

Overall this code handles most corner cases in only 16 lines of code (no comments).
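A quick check of the corner cases discussed above; the function is restated compactly here (same logic as the listing) so the snippet runs on its own:

```javascript
// Compact restatement of the deep-equality function from the listing above.
Object.equals = function (x, y) {
  if (x === y) return true;
  if (!(x instanceof Object) || !(y instanceof Object)) return false;
  if (x.constructor !== y.constructor) return false;
  for (var p in x) {
    if (!x.hasOwnProperty(p)) continue;
    if (!y.hasOwnProperty(p)) return false;
    if (x[p] === y[p]) continue;
    if (typeof (x[p]) !== "object") return false;
    if (!Object.equals(x[p], y[p])) return false;
  }
  for (p in y) {
    if (y.hasOwnProperty(p) && !x.hasOwnProperty(p)) return false;
  }
  return true;
};

// Corner cases discussed in the text:
console.log(Object.equals({ a: undefined }, {}));             // false - property exists on one side only
console.log(Object.equals({ a: { b: 1 } }, { a: { b: 1 } })); // true  - recursive comparison
console.log(Object.equals([1, 2], { 0: 1, 1: 2 }));           // false - different constructors
console.log(Object.equals(null, null));                       // true  - same identity
```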

Update (8/13/2015). I have since implemented a more efficient version as the value_equals() function, which is faster, handles corner cases like NaN and 0 versus -0 correctly, optionally enforces object property ordering, and tests for circular references, backed by over 100 automated tests in the project's test suite.


Processing "Mapping and Fixing Objects"

The "Object Mapping and Fixing" processing is used when previously mapped objects have been changed in Distribution Management. In that case, a link for correcting documents that contain the changed data appears in the form for downloading data from the distributor (Fig. 7.16).

Processing can be started in two ways:

    from the "Distributor data" subsystem, where the mapping can be changed directly through the processing;

To select data for a distributor, enter the name of the distributor in the "Distributor" field and click the "" button to display the data loaded from the distributor's RS.

Let's take item matching as an example.

Fig. 7.13. Processing "Mapping and Fixing Objects"

The tabular section displays the directory element in the distributor's database and its correspondence in the manufacturer's database.

The match is specified from the "Nomenclature" catalog. This can be done manually or with the "Find similar" button. The search can be carried out by article number (if the "Display article" setting is enabled) or by name. If the search does not find a full match, an item with a partial match of the article number or name is substituted.

A modified but unsaved mapping is highlighted in bold.

Checkbox " Display SKU" controls the display of the product SKU of the distributor and the matched product.

The view mode of the list of elements in the tabular section is configured in the tabular section's "More" menu (Fig. 7.14). If the processing is started after making changes via "Downloading data from distributors" or the "Distributor nomenclature" catalog, the list of products is filtered by the "Only changed" checkbox.

To correct documents, you need:


Mechanism for data comparison when exchanging through a universal format

The data matching mechanism is designed to solve the problem of data synchronization between the source base and the receiver base during an exchange.

Internal object identifiers

Ideally, the data of synchronized applications could be matched by unique internal object identifiers (GUIDs). But this requires that the addition of data to be synchronized is carried out only in one application, and in another this data appears exclusively as a result of synchronization. In this case, the GUIDs in two applications for the same objects will be the same, and it will be possible to uniquely match the objects by them.

In practice, it is not always possible to comply with this requirement, especially in the case of setting up synchronization between applications that were operated independently. This is because two identical objects created in parallel in each application will have two different GUIDs.

In some cases, the data cannot be matched by a GUID because it does not exist (special cases not covered in this article).

Public object identifiers

To successfully match objects with different GUIDs, there must be a place to store information about their correspondence. That place is the information register Public Identifiers of Synchronized Objects (hereinafter FIR).

Fig. 1. Information register Public Identifiers of Synchronized Objects

The structure of the register is presented in the table:

To compare the data of two programs, BSP 2.3 provides the "Comparison of infobase objects" processing for direct use when synchronizing data.


Fig. 2. The main form of the "Comparison of infobase objects" processing

The list is opened by the Execute Mapping command on the Data mapping page of the interactive data synchronization assistant. It can also be opened by double-clicking a row that has data matching problems.

The list consists of two columns, each of which corresponds to the infobase involved in the exchange. The data is grouped by program objects (documents, lists). An information line is displayed at the bottom of the list: how many elements are matched, how many are not matched.

In the Output field you can choose which data to show in the list. Unmatched data is displayed by default.

Object Mapping

  • Click Match automatically (recommended) and select the fields to match using the checkboxes. Some fields are selected by the program by default. To confirm your choice, click Execute Mapping. After the search, the program displays the data it has matched for review. To confirm, click Apply.
  • After automatic matching, you can manually match the remaining objects or change an object's mapping. Select the required objects of the two bases; click Cancel match to undo a match, or click Set match to match objects manually.
  • To confirm, click Write and close.

Customizing Mapping Table Fields

  • Click Columns to add fields to the list columns. Additional fields can be selected using the checkboxes; to confirm, click Apply.

Getting data from another program

Object matching order

  • It is recommended to perform data matching and loading taking into account reference links. Especially if the field is used to match objects.
  • For example, in the configuration there is a directory of counterparty agreements, which is subordinate to the directory of counterparties. Comparison of counterparty agreements is carried out according to the directory-owner, i.e. according to the directory of counterparties. Therefore, in order to correctly compare data, you must first compare and download the directory of counterparties, and then the directory of counterparty agreements.
  • Otherwise, the fields of the mapping table may contain dummy links of the form:
    <Объект не найден> ("Object not found") (26:a0b9001b24e002fe11dfba347dd41412).
  • The dummy link points to an object in the current infobase that has not yet been loaded from the exchange message.

Entries in the FIR are also created on the sender's side when the correspondent confirms receipt of data through the acknowledgment mechanism. In the Identifier field of such records, the original identifier of the object is set. Registering such records is necessary so that, when other data is received from the correspondent, it is understood that this object should be excluded from the search procedure by fields and by unique identifier.

Options for identifying objects upon receipt

The procedure for automatic matching of objects upon receipt is contained in the Object Conversion Rules (OCRs) for data retrieval. The OCRs are located in the common module ExchangeManagerViaUniversalFormat.


Fig. 3. Sections of the common module ExchangeManagerViaUniversalFormat

Note that the common module ExchangeManagerViaUniversalFormat contains all the components (data processing rules, object conversion rules, etc.) that determine the applied logic of processing data while it is being received or sent. The module's program code is generated automatically by the Data Conversion 3.0 application based on the configured exchange rules. The code can also be written manually, but this requires great skill from the developer.

The option for automatic matching (identification) of objects upon receipt is set by the Identification Variant property of the OCR.


Fig. 4. Identification settings in the manager module

There are three options (three values) for object identification:

  1. ByUniqueId: identification by GUID;
  2. FirstByUniqueIdentifierThenBySearchFields: identification by GUID and then by search fields;
  3. ByFieldsSearch: identification by search fields.

Another property that defines the matching logic is the array of search fields, defined in the Search Fields property of the OCR.

Field search algorithm

The search options specified in the Search Fields property of the OCR used when loading the object are applied sequentially.

Limitation.
When matching at the data analysis stage, only the first search option, ByUniqueId, is applied.

The transition to the next option is carried out in two cases:

  1. The loaded object does not have any of the fields specified in the search option filled in.
  2. The search option did not return any results.

If the loaded object contains information about the original GUID and the identification option for the object is "By GUID" or "By GUID and search fields", then the search is performed among all objects of the specified type except those for which a match has already been recorded in the FIR.

In other cases, the search is performed among all infobase objects of the corresponding type.
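The fallback over search options can be sketched as follows (a simplified illustration; findMatch and lookup are hypothetical names, not part of the platform):

```javascript
// searchOptions: array of field-name arrays, tried in order.
// lookup(fields, obj): queries the receiving infobase, returns matching objects.
function findMatch(incoming, searchOptions, lookup) {
  for (var i = 0; i < searchOptions.length; i++) {
    var fields = searchOptions[i];
    var filled = fields.some(function (f) {
      return incoming[f] !== undefined && incoming[f] !== "";
    });
    if (!filled) continue;                 // case 1: none of the option's fields is filled
    var found = lookup(fields, incoming);  // query by this option's fields
    if (found.length > 0) return found[0]; // loading stage: one of the found objects is taken
    // case 2: no results - fall through to the next option
  }
  return null;                             // no match: treat as a new object
}
```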

Peculiarities.
When matching at the stage of data analysis, the loaded objects do not check the filling of the fields involved in the search.

At the data analysis stage, a match will be established only if one recipient object was found for one sender object.

At the data loading stage, a match will also be established in the case when several recipient objects were found for one sender object. In such a situation, a match will be established with one of them.

At the data loading stage, the Number + Date search option for documents works as follows: the number of the required document is checked for an exact match, and the date determines the interval within which the search by number is performed. The interval itself is defined as the period of uniqueness of document numbers that includes the specified date. For example, if document numbers are unique within a month and the date is December 10, 2001, the search is carried out in the interval from December 1 to December 31, 2001.
During the data analysis stage, this search option works as usual: both fields are checked for an exact match.
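As a rough sketch (assuming, as in the example, that document numbers are unique within a month; the function name is illustrative):

```javascript
// Find documents whose number matches exactly and whose date falls inside the
// number-uniqueness period (here: the calendar month) containing `date`.
function findByNumberAndDate(docs, number, date) {
  // JavaScript months are zero-based: December is month 11.
  var from = new Date(date.getFullYear(), date.getMonth(), 1);
  var to = new Date(date.getFullYear(), date.getMonth() + 1, 1);
  return docs.filter(function (d) {
    return d.number === number && d.date >= from && d.date < to;
  });
}
```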

Material from Technical Vision

When solving the problem of comparing images, the most important role is played by a hierarchical analysis of the "primary" features of images - the so-called "characteristic features". Such "features" can be used to compare the current and reference images in a variety of methods, such as hierarchical correlation processing, voting methods, or volumetric comparison schemes. In this case, special points, lines, regions and structures (groups of features) are used as image features. Let us briefly consider approaches based on the use of point and contour features.

Matching based on point features.

The main advantages of using feature points for detection tasks are the simplicity and speed of extraction (compared to other features used). In addition, it is not always possible to highlight other characteristic features in images (good and clear contours or areas), while local features can be distinguished in the vast majority of cases.

The task of detecting an object in an image is reduced to finding characteristic points and fixing their relative position. These procedures are performed first on the reference image, then on the studied image, often in a certain limited search area. The general scheme of the algorithm for finding the corresponding points consists of several stages:

Selection of point features in images;

Formation of feature vectors for the points;

Comparison of points in the feature space;

The selection and description of characteristic points in the image is the initial and key stage in the identification algorithm, on which the result of the entire algorithm depends. This step was discussed earlier in Section 4.1.

However, no matter how complex the invariants may be, they are still unable to uniquely characterize an object in 100% of cases. Ambiguities, that is, cases when different objects (points, areas) in the image are characterized by very similar parameters, can be associated with the imperfection of the chosen invariants, low resolution or noise in the image. Ambiguities also arise when there are repeating objects in the image. One way to resolve ambiguities is to develop better invariants or other descriptors; this direction is very relevant among researchers involved in machine vision. The parallel approach is to use spatial relationships between objects.

Algorithms based on spatial relations, which belong to a higher level of processing than raster algorithms, are characterized by a higher resistance to various geometric and radiometric distortions. One indicator of the "correctness" of a found pair can be the accumulation, around the points that form such a pair, of a large number of other correctly matched points. Another criterion by which incorrectly linked points can be weeded out is the arrangement of points relative to lines. This section discusses metric and topological filters that reject incorrect matches based on the relative position of objects in the image.

Figure: Keypoint distribution

Metric matching.

In order to check the correctness of pairing of candidates, additional information about the mutual spatial arrangement of points on the image plane is involved. In other words, the spatial arrangement of points on the right and left images should be similar in a certain sense. The spatial arrangement can be described as a distance matrix. Consider a set of points $A_1, A_2, \ldots, A_i, \ldots, A_N$ in the image plane (Fig. 8).

Distances between points can be written as a distance matrix $\Vert r_{ij} \Vert$ as follows:

\[
\begin{array}{c|cccccc}
 & A_1 & A_2 & \cdots & A_i & \cdots & A_N \\ \hline
A_1 & 0 & r_{12} & \cdots & r_{1i} & \cdots & r_{1N} \\
A_2 & & 0 & \cdots & r_{2i} & \cdots & r_{2N} \\
\vdots & & & \ddots & \vdots & & \vdots \\
A_i & & & & 0 & \cdots & r_{iN} \\
\vdots & & & & & \ddots & \vdots \\
A_N & & & & & & 0
\end{array}
\]

where $r_{ik} = \sqrt{(x_i - x_k)^2 + (y_i - y_k)^2}$ is the Euclidean distance between $A_i$ and $A_k$, and $x_i$, $y_i$ and $x_k$, $y_k$ are the coordinates of the points $A_i$ and $A_k$ on the image.
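Building such a distance matrix is straightforward; a sketch (assuming points are given as {x, y} objects):

```javascript
// Euclidean distance matrix r[i][j] for a set of points in the image plane.
function distanceMatrix(points) {
  var N = points.length;
  var r = [];
  for (var i = 0; i < N; i++) {
    r[i] = [];
    for (var j = 0; j < N; j++) {
      var dx = points[i].x - points[j].x;
      var dy = points[i].y - points[j].y;
      r[i][j] = Math.sqrt(dx * dx + dy * dy); // zero on the diagonal
    }
  }
  return r;
}
```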

To check the correctness of the formation of conjugate pairs of points, the distance matrices of the left $\Vert r_{ij}^{L} \Vert$ and right $\Vert r_{ij}^{R} \Vert$ images are compared. To quantify erroneous binding, the variable $\delta_{ij}$ is introduced:

$$ \delta_{ij} = r_{ij}^{R} - r_{ij}^{L}. $$

Analysis of the histogram of the $\delta_{ij}$ distribution makes it possible to estimate the value of the erroneous-pair rejection threshold $\Delta$ according to the criterion described below. Note that the point with number $i$ has $N-1$ connections, and the corresponding distances in the matrix $\Vert r_{ij} \Vert$ are $r_{1i}, r_{2i}, \ldots, r_{i-1,i}, r_{i,i+1}, \ldots, r_{i,N}$. Accordingly, the vector of distance differences associated with pair number $i$ is $$ \delta_i = \{ \delta_{1i}, \delta_{2i}, \ldots, \delta_{i-1,i}, \delta_{i,i+1}, \ldots, \delta_{i,N} \}, $$ where $$ \Vert \delta_i \Vert = \min \{ \delta_{1i}, \delta_{2i}, \ldots, \delta_{i-1,i}, \delta_{i,i+1}, \ldots, \delta_{i,N} \} $$ is the norm of the vector $\delta_i$.


Filtered Point Pairs

A pair of conjugate points is accepted if $\Vert \delta_i \Vert < \Delta$ and rejected otherwise. The verification procedure is performed for each $i$ from $1$ to $N$. Importantly, the proposed selection criterion, based on the analysis of matrix (5), is invariant to image rotation.

In order to make the algorithm more efficient, an image pyramid is used. The initial approximation for the points of interest is found at the top level of the pyramid and then refined at the subsequent levels using correlation. An example of the operation of the algorithm when comparing two test video frames is shown in Fig. 9.

Topological matching.

Consider a triple of objects $\langle R_1^1, R_1^2, R_1^3 \rangle$ on the image $V_1$ and the corresponding triple of objects $\langle R_2^1, R_2^2, R_2^3 \rangle$ on the image $V_2$. An object is a region of an image, for example, an "interesting point" (say, a corner or a local brightness extremum) and its surroundings, or a region of a more complex shape.

Let $c_v^i = \langle x_v^i, y_v^i \rangle$ be the center of the object (region) $R_v^i$. The function

$$ \begin{gather}\tag{8} \textrm{side}(R_v^1, R_v^2, R_v^3) = \textrm{sign}\left( \det \left[ \begin{array}{cc} x_v^3 - x_v^2 & x_v^1 - x_v^2 \\ y_v^3 - y_v^2 & y_v^1 - y_v^2 \end{array} \right] \right) \end{gather} $$

takes the value $-1$ if $c_v^1$ lies on the right side of the vector directed from $c_v^2$ to $c_v^3$, or the value $1$ if this point lies on its left side. So the equation

$$ \begin{gather}\tag{9} \textrm{side}(R_1^1, R_1^2, R_1^3) = \textrm{side}(R_2^1, R_2^2, R_2^3) \end{gather} $$ means that the point $c^1$ lies on the same side of the vector in both images. If equality (9) is not satisfied for some point, we will say that the point violates the side relation. This happens when at least

Figure: Sideness relation: the point $c^1$ must lie on the same side (here, on the left) of the directed segment from $c^2$ to $c^3$ in both images

at least one of the three objects is incorrectly matched to its counterpart in the other image, or the objects are not coplanar and there is a camera shift in a direction perpendicular to the 3D plane containing their centers. In the latter case, the point may move to the other side of the vector (that is, its parallax changes), but this happens only for a small number of triples. The points $R_v^1$, $R_v^2$, and $R_v^3$ satisfy equality (9) or violate it regardless of the order in which they appear in the triple; it is only necessary that in both images they be numbered in the same cyclic order (clockwise or counterclockwise). Fig. 10 shows triples of corresponding points that satisfy relation (9).
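The side predicate of formula (8) is a one-line determinant test; a sketch (points as {x, y} objects):

```javascript
// side(c1, c2, c3): on which side of the directed segment c2 -> c3 does c1 lie?
// Returns +1 for the left side, -1 for the right (the sign of the determinant
// in formula (8)); collinear points are treated as +1 here for simplicity.
function side(c1, c2, c3) {
  var det = (c3.x - c2.x) * (c1.y - c2.y) - (c3.y - c2.y) * (c1.x - c2.x);
  return det >= 0 ? 1 : -1;
}

console.log(side({ x: 0, y: 1 }, { x: 0, y: 0 }, { x: 1, y: 0 }));  // 1  (left of c2 -> c3)
console.log(side({ x: 0, y: -1 }, { x: 0, y: 0 }, { x: 1, y: 0 })); // -1 (right of c2 -> c3)
```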

When equality (9) is violated, one can conclude that one of the objects in the triple is incorrectly bound, but at this stage it is not clear which one. One triple is not enough for such a conclusion; however, by considering all possible triples, one can find objects that are more likely than others to be tied incorrectly. The main idea of the method proposed in is that mismatched objects violate the side relation more often.

Equality (9) is verified for all triples of regions $\langle R^i, R^j, R^k \rangle$, $R^i, R^j, R^k \in \Phi_{12}$, where $\Phi_{12}$ is the set of regions present both in the image $V_1$ and in the image $V_2$. Let $\Phi = \left\{ i \mid R^i \in \Phi_{12} \right\}$. At the beginning of the algorithm, the penalty $$ \begin{gather}\tag{10} h(i) = \sum\limits_{j,k \in \Phi \backslash i,\; j > k} \left| \textrm{side}(R_1^i, R_1^j, R_1^k) - \textrm{side}(R_2^i, R_2^j, R_2^k) \right| \end{gather} $$ is calculated, that is, the number of times the object $R^i$ violates the side relation (9), for all $i \in \Phi$. Then the penalty is normalized by the maximum number of all possible violations:

$$ \begin{gather}\tag{11} h_N(i) = \frac{h(i)}{(n-1)(n-2)}, \quad n = \left| \Phi \right|. \end{gather} $$

Based on (11), we get that $h_N(i) \in [0, 1]$. The threshold $t_{\textrm{topo}} \in [0, 1]$ is selected by the user. After analyzing the penalties of all objects, the object $R^w$ that violates relation (9) more often than the others is determined, where $w = \arg\max_i h_N(i)$. If $h_N(w) > t_{\textrm{topo}}$, then the object $R^w$ (i.e. the pair of objects $R_1^w, R_2^w$) is considered to be mislinked and is removed from the set $\Phi$. At each iteration, the penalty $h_N(i)$ is recalculated over the objects remaining in $\Phi$, and the pairs most frequently violating relation (9) are removed. The process continues as long as there are objects to delete, that is, until the maximum value of the penalty over the remaining objects becomes less than the threshold $t_{\textrm{topo}}$.
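The whole iterative filter can be sketched as follows. This is a simplified reading of the procedure above: each violation is counted once per triple (so the normalization divides by the number of (j, k) pairs, a counting variant of formula (11)), and the data layout (pairs of {left, right} points) is illustrative:

```javascript
// Sign of the determinant from formula (8): +1 if a lies left of b -> c, else -1.
function side(a, b, c) {
  var det = (c.x - b.x) * (a.y - b.y) - (c.y - b.y) * (a.x - b.x);
  return det >= 0 ? 1 : -1;
}

// pairs: array of candidate matches {left: {x, y}, right: {x, y}}.
// tTopo: user-selected threshold in [0, 1].
function topologicalFilter(pairs, tTopo) {
  var alive = pairs.slice();
  while (alive.length > 2) {
    var n = alive.length;
    var h = new Array(n);
    for (var i = 0; i < n; i++) h[i] = 0;
    // Count side-relation (9) violations of object i over all pairs (j, k).
    for (i = 0; i < n; i++) {
      for (var j = 0; j < n; j++) {
        if (j === i) continue;
        for (var k = j + 1; k < n; k++) {
          if (k === i) continue;
          var sL = side(alive[i].left, alive[j].left, alive[k].left);
          var sR = side(alive[i].right, alive[j].right, alive[k].right);
          if (sL !== sR) h[i]++;
        }
      }
    }
    // Normalize the worst offender's penalty to [0, 1].
    var worst = 0;
    for (i = 1; i < n; i++) if (h[i] > h[worst]) worst = i;
    var hN = h[worst] / ((n - 1) * (n - 2) / 2);
    if (hN <= tTopo) break;   // remaining pairs are consistent enough
    alive.splice(worst, 1);   // drop the most frequent violator and repeat
  }
  return alive;
}
```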

During the first iterations, while there are enough candidates for deletion in the $\Phi $ set, even correctly attached objects can have a high penalty value. However, for incorrectly linked objects, the penalty will be even higher. After removing the worst pair of objects, $h_N (i)$ for the remaining objects will decrease. When only properly anchored pairs of objects remain, small parallax changes will still result in non-zero penalty values.

The $t_{\textrm{topo}}$ threshold value affects the number of objects remaining after topological filtering. A zero threshold leads to a small number of remaining objects, but all of them completely satisfy the topological sideness relation. This choice of threshold is reasonable on relatively flat images with shallow depth. In most cases, however, one should keep in mind that a small threshold value leads to the undesirable effect of erroneously deleting a number of points/areas as incorrectly assigned. Based on numerous experiments with ground and aerial photographs, it is most desirable to choose the threshold $t_{\textrm{topo}}$ from the range $[0.03, 0.15]$.

Let us illustrate the operation of the algorithm with an example. Suppose 50 pairs of points have been found and tied to each other by some algorithm (Fig. 11). It can be seen by eye that a number of points are tied incorrectly, that is, points marked with the same number are in different places in the left and right images.

Now let us pass the coordinates of the pairs of points through the topological filter with $t_{\textrm{topo}} = 0.15$: 21 pairs of points remain (Fig. 12). If, however, a more stringent

Figure: 50 pairs of points found and tied to each other; approximately 2/3 of the matches are false

Figure: After applying the topological filter with $t_{\textrm{topo}} = 0.15$, 29 pairs of points are removed as false matches, leaving 21 pairs

Figure: After applying the topological filter with $t_{\textrm{topo}} = 0.05$, 34 pairs of points are removed as false matches, leaving 16 pairs

filtering is applied with $t_{\textrm{topo}} = 0.05$, then 16 pairs of points remain (Fig. 13), and all matches are correct. No valid matches were removed, and the method successfully filtered out 34 pairs, which means that 68% of the original matches were false.

As can be seen, the topological filtering method is not so sensitive to the exact spatial localization of points. The main emphasis in the method is on the relative position of the points on the image.

The computational complexity of the method depends on the number of incorrectly bound pairs and, to a greater extent, on the initial number of pairs of bound objects. The largest part of the calculations falls on the computation of the determinant in formula (8) to check all possible triples of objects. In the original set of $\left| \Phi \right| = n$ candidate pairs, it is necessary to check $C_n^3 = \frac{n(n-1)(n-2)}{6}$ triples, so the total complexity of the algorithm is $O(n^3)$, which is quite a lot; this is one of the disadvantages of the method. As objects are rejected, the number of possible triples decreases, and to speed up the work one may, in formula (10), not recalculate the penalties from scratch but compute only those terms that included the removed object and subtract them from the expression for $h(i)$.

It should be noted that this method does not cope well with situations where the image has a pronounced foreground and background. For example, if most of the areas are in the foreground, then the background areas will often violate the side relation (9) due to non-coplanarity with the foreground areas. Part of the correct areas in this case will be rejected.

Matching based on contour features.

The main disadvantage of point features is the instability to radiometric changes in the image. At the same time, this type of distortion is quite common in real images: glare, shadows, and other effects associated with changes in lighting conditions, time, or shooting season. Another disadvantage of point features is their instability to aspectual distortions. This type of distortion is also found in many problems of practical interest. Therefore, there is a need to involve information about the shape of the object itself as the most resistant to changes of this kind, to solve problems of coordinate-planning reference. The shape of an object is, of course, its most stable characteristic. One of the difficulties of the task is that in practice there are quite common cases of seasonal changes in the shape of natural (forests, water bodies) and artificial objects (roads) that are not associated with radiometric distortions. The lack of a priori information about the models of seasonal changes in the shapes of objects significantly complicates the solution of this problem.

From an intuitive point of view, the shape of an object is largely determined by its boundaries. In a flat image, the boundaries are contours. Psychological studies show that the human brain relies on contour information to the greatest extent when recognizing images. Contours are more resistant to changes in illumination, angle distortions, they are invariant to rotation and scale changes. The advantages of contour representation also include a significant reduction in the amount of information processed when comparing two or more images, due to the fact that contour points make up a small part of all points in the image.

In this section, contours are understood as sharp changes in brightness in images. In the process of using contour information for automatic matching (binding) of images, four main stages can be distinguished:

  1. selection of contour points;
  2. contour tracing;
  3. description of contours;
  4. comparison of contours in the selected feature space.

Methods for selecting contour points have already been discussed in detail in Section 3.4. The problems of tracing and description of contours were discussed in section 4.1. Consider now the problem of contour comparison.

One of the key problems when comparing contours in two digital images is the choice of attributes that determine the individual features of the contour. At the same time, several main types of features can be distinguished: metric (length, width, orientation, angle), analytical (parallelism, straightness, curvature), topological (nesting, neighborhood, intersection, adjacency, overlay). In practice, a fairly large number of contour attributes are used: length, curvature, area, perimeter, number and position of singular points, compactness index, position of the center of gravity. To create more reliable recognition algorithms, it is advisable to use combinations of features of various types.

Note also that it is not always possible to select a sufficient number of closed contours in real images. Therefore, for the problem of contour identification, it is better to use attributes that do not depend on the closedness properties of the contour.

Depending on the selected attributes, different methods of contour comparison are used.

Comparison of contours in natural representation.

Let the reference image contain $N$ different contours $i = 1, \ldots, N$; then $C_L^i$ is the $i$-th contour, of length $l_L^i$. The search area on the other image contains $M$ distinct contours $j = 1, \ldots, M$; then $C_R^j$ is the $j$-th contour of the search area, of length $l_R^j$. $C_L^i$ and $C_R^j$ are represented by curvature (inflection) functions $K_L(l)$ and $K_R(l)$, respectively.

To solve the problem, a procedure for comparing two contours can be used. Its essence is to sequentially slide the function $K_{\textrm{E}}(l)$ (the contour $C_{\textrm{E}}$) along the function $K_{\textrm{OP}}(l)$ (the contour $C_{\textrm{OP}}$), computing in each current position the value of the normalized correlation coefficient $$ k(m, C_{\textrm{E}}, C_{\textrm{OP}}) = \frac{\sum\limits_{i=1}^{l_{\textrm{E}}} \left( K_{\textrm{E}}(l_i) - \bar{K}_{\textrm{E}} \right) \left( K_{\textrm{OP}}(l_{i+m}) - \bar{K}_{\textrm{OP}}^m \right)}{\sqrt{\sum\limits_{i=1}^{l_{\textrm{E}}} \left( K_{\textrm{E}}(l_i) - \bar{K}_{\textrm{E}} \right)^2} \sqrt{\sum\limits_{i=1}^{l_{\textrm{E}}} \left( K_{\textrm{OP}}(l_{i+m}) - \bar{K}_{\textrm{OP}}^m \right)^2}}, $$ where $m = 1, \ldots, l_{\textrm{OP}} - l_{\textrm{E}}$; $K_{\textrm{E}}(l)$ is the curvature function of the contour $C_{\textrm{E}}$; $K_{\textrm{OP}}(l)$ is the curvature function of the contour $C_{\textrm{OP}}$; $\bar{K}_{\textrm{E}}$ and $\bar{K}_{\textrm{OP}}^m$ are the mean curvature values of the contour $C_{\textrm{E}}$ and of the corresponding fragment of the contour $C_{\textrm{OP}}$, respectively.

In this case, it is necessary that the condition $l_{\mathrm{E}} < l_{\mathrm{OP}}$ be satisfied: the reference contour must be shorter than the search-area contour, so that the range of shifts $m$ is nonempty.

The position at which the correlation coefficient reaches its maximum is fixed, and the pair of contours $C_{\mathrm{E}}$ and $C_{\mathrm{OP}}$ is assigned the value of the correlation coefficient in that position.

After the correlation coefficients have been found for all contours of the search area, it is necessary to select the pair of contours ($C_L^i$, $C_R^j$) for which the correlation coefficient takes the maximum value. However, the maximum value of the coefficient in a limited search area does not guarantee the reliability of the result, so additional information about the relative position of the contours must be used. Such information makes it possible to reject false identifications.

In this work, the distances between the centers of gravity of the contours were used to verify the reliability of the identification: the found pairs of contours ($C_L^i$, $C_R^j$) and ($C_L^l$, $C_R^m$) can be considered correct if $$ \left| L_{i,l} - L_{j,m} \right| \le \Delta, $$ where $L_{i,l}$ is the distance between the centers of gravity of the contours $C_L^i$ and $C_L^l$, and $L_{j,m}$ is the distance between the centers of gravity of the contours $C_R^j$ and $C_R^m$.
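A minimal sketch of this consistency check, assuming contours are given as point lists and centers of gravity are taken as centroids (the helper names are hypothetical):

```python
import numpy as np

def centroid(points):
    """Center of gravity of a contour given as an (n, 2) array of points."""
    return np.asarray(points, dtype=float).mean(axis=0)

def pairs_consistent(cl_i, cl_l, cr_j, cr_m, delta):
    """Check |L_{i,l} - L_{j,m}| <= delta for two matched contour pairs.
    L_{i,l}: distance between centroids of C_L^i and C_L^l;
    L_{j,m}: the same for C_R^j and C_R^m."""
    L_il = np.linalg.norm(centroid(cl_i) - centroid(cl_l))
    L_jm = np.linalg.norm(centroid(cr_j) - centroid(cr_m))
    return abs(L_il - L_jm) <= delta
```

If the two images are related by a (roughly) rigid motion, the centroid distances within each image are preserved, so a large discrepancy flags a false match.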

This curve identification scheme does not allow rectilinear segments of contours to be compared with one another, which is certainly a drawback of the method: when any two straight segments are compared, the correlation coefficient takes values close to unity. This feature of correlating curvature functions requires an additional filtering condition: all rectilinear segments should be excluded from the set of contours selected in the image.
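Such a filter might be sketched as follows, assuming a contour is rectilinear when its curvature function is identically zero up to a noise threshold (the threshold value is an assumption, not from the text):

```python
import numpy as np

def drop_straight_contours(curvatures, eps=1e-3):
    """Exclude contours whose curvature function is (almost) identically
    zero, i.e. rectilinear segments, before correlation-based comparison.
    eps is an assumed noise threshold."""
    return [k for k in curvatures if np.abs(np.asarray(k)).max() > eps]
```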

Comparison of the characteristic points of the contour.

Suppose that, by some method, $N_{\mathrm{E}}$ singular points have been found for the contour $C_{\mathrm{E}}^i$ of the reference image, and $N_{\mathrm{OP}}$ points have been found for the contour $C_{\mathrm{OP}}^j$ from the search area; the search area itself contains $N$ contours. Then any contour $C^i$ can be represented by a function $F^i(l)$ that takes nonzero values only at the found characteristic points of the contour. Moreover, if only the mutual arrangement of the points is used when comparing contours, the values of the function at the singular points can be set equal to one (Fig. 14).

Fig. 14. Representation of a contour as the function $F(l)$
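This indicator representation can be sketched directly, assuming singular-point positions are given as integer arc-length indices along the sampled contour (an illustrative helper, not from the original text):

```python
import numpy as np

def contour_indicator(length, singular_positions):
    """Represent a contour of given arc length (in samples) as F(l):
    F is 1 at the characteristic points and 0 elsewhere."""
    f = np.zeros(length)
    f[list(singular_positions)] = 1.0
    return f
```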

For each contour $C_L^i$ of the reference image, it is necessary to find the corresponding contours $C_R^j$ in the search area.

To solve the problem, a procedure for comparing two contours is used. Its essence is the sequential alignment of point $i$ of the contour $C_{\mathrm{E}}$ ($i = 1, \ldots, N_{\mathrm{E}}$) with point $j$ of the contour $C_{\mathrm{OP}}$ ($j = 1, \ldots, N_{\mathrm{OP}}$). In this case, it is necessary that the condition $l_{\mathrm{E}} < l_{\mathrm{OP}}$ be satisfied.

In each fixed position, the number of corresponding points is determined, i.e. points for which the condition

\begin{gather*} F_{\mathrm{E}}(l_{\mathrm{E}}^i + \Delta_m) = F_{\mathrm{OP}}(l_{\mathrm{OP}}^j + \Delta_m) \ne 0, \\ \Delta_m = l_{\mathrm{E}}^{i+m} - l_{\mathrm{E}}^i, \quad m = 1, \ldots, N_{\mathrm{E}} - i \end{gather*} is satisfied. As a result of performing $N$ contour comparison operations, it is necessary to select the contour $C_{\mathrm{OP}}^\ast$ containing the maximum number of corresponding points. However, to reduce the number of false identifications, the maximum number of corresponding points found must be bounded from below: the contours $C_{\mathrm{E}}^i$ and $C_{\mathrm{OP}}^\ast$ are considered to match if the number of points found exceeds a certain threshold $T$.
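The matching step above might look as follows, assuming singular points are stored as sorted arc-length positions and a tolerance stands in for exact equality (all names and the `tol` parameter are illustrative assumptions):

```python
import numpy as np

def count_corresponding_points(pos_e, pos_op, i, j, tol=0.0):
    """Align point i of the reference contour with point j of the search
    contour and count the points m whose offsets Delta_m also land on a
    singular point of the search contour (within tol)."""
    count = 0
    for m in range(1, len(pos_e) - i):
        delta_m = pos_e[i + m] - pos_e[i]      # Delta_m
        target = pos_op[j] + delta_m           # expected position on C_OP
        if np.any(np.abs(np.asarray(pos_op) - target) <= tol):
            count += 1
    return count

def match_contours(pos_e, candidates, threshold, tol=0.0):
    """Pick the search-area contour with the maximum number of
    corresponding points; accept only if the count exceeds threshold T."""
    best, best_count = None, -1
    for idx, pos_op in enumerate(candidates):
        for i in range(len(pos_e)):
            for j in range(len(pos_op)):
                c = count_corresponding_points(pos_e, pos_op, i, j, tol)
                if c > best_count:
                    best, best_count = idx, c
    return (best, best_count) if best_count > threshold else (None, best_count)
```

A contour whose singular points form the same arc-length pattern (merely shifted) yields the maximum count and is selected.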

This comparison method is one of the fastest and does not require the calculation of additional characteristics at the points, but its reliability is low. The instability of the algorithm is due to the fact that, for real data, the matching condition holds only up to an error: $$ F_{\mathrm{E}}(l_{\mathrm{E}}^i + \Delta_m) = F_{\mathrm{OP}}(l_{\mathrm{OP}}^j + \Delta_m \pm \Delta E_m) \ne 0, $$ where $\Delta E_m$ is the error caused by the discreteness of the initial data and the influence of various noises.

An alternative way to find corresponding points on two contours is a scheme in which the geometric features of the object, rather than brightness, are used for comparison, and all characteristics are computed not from the two-dimensional intensity function $I(x,y)$ but from the one-dimensional function $F(l)$. The algorithm for finding corresponding points consists of three main stages:

  1. selection of attributes;
  2. search for corresponding points in the multidimensional feature space;
  3. verification of the reliability of the identification using the relative position of the points in the image.

The following characteristics are used as point attributes: $M_0$, $D$, and the asymmetry (skewness) coefficient. The skewness coefficient is calculated by the formula $$ a = \frac{\bar{M}_3}{\sigma^3}, $$ where $\bar{M}_3$ is the central moment of the third order and $\sigma$ is the standard deviation.
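A direct sketch of this formula, assuming the attribute is computed over a sample of values along the contour (population statistics, as the text does not specify a bias correction):

```python
import numpy as np

def skewness(values):
    """Asymmetry (skewness) coefficient a = M3_bar / sigma^3, where M3_bar
    is the third-order central moment and sigma the standard deviation."""
    v = np.asarray(values, dtype=float)
    mu = v.mean()
    sigma = v.std()                   # population standard deviation
    m3 = ((v - mu) ** 3).mean()       # third central moment
    return m3 / sigma ** 3
```

Symmetric samples give a value near zero, while a long right tail gives a positive value.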

Unlike the previous method, the problem of identifying points is solved by a geometric search in a multidimensional feature space. For the specified attributes, the similarity measure of points in the feature space has the form $$ S_{ij} = \frac{\left| M_{0i}^{\mathrm{E}} - M_{0j}^{\mathrm{OP}} \right|}{M_{0\max} - M_{0\min}} + \frac{\left| D_i^{\mathrm{E}} - D_j^{\mathrm{OP}} \right|}{D_{\max} - D_{\min}} + \frac{\left| a_i^{\mathrm{E}} - a_j^{\mathrm{OP}} \right|}{a_{\max} - a_{\min}}. $$ The search for corresponding points consists in finding the pair of points $\langle i, j \rangle$, $i \in C_{\mathrm{E}}$, $j \in C_{\mathrm{OP}}$, for which $S_{ij}$ takes the smallest value within the contour search area.
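A brute-force version of this feature-space search can be sketched as follows, with each point described by its attribute vector $(M_0, D, a)$ and the attribute ranges supplied for normalization (the function names are illustrative):

```python
import numpy as np

def similarity(p, q, ranges):
    """Similarity measure S_ij between two points, given their attribute
    vectors (M0, D, a) and the (max - min) range of each attribute for
    normalization. Smaller values mean more similar points."""
    p, q, r = map(np.asarray, (p, q, ranges))
    return float(np.sum(np.abs(p - q) / r))

def best_pair(points_e, points_op, ranges):
    """Find the pair <i, j> minimizing S_ij over the two point sets."""
    best = None
    for i, p in enumerate(points_e):
        for j, q in enumerate(points_op):
            s = similarity(p, q, ranges)
            if best is None or s < best[2]:
                best = (i, j, s)
    return best
```

For larger point sets, the exhaustive double loop would typically be replaced by a spatial index (e.g. a k-d tree) over the normalized feature space.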

This point identification algorithm is more reliable, since the Euclidean distance between the points is additionally used to verify the identification.
