Approximate String Joins in a Database (Almost) for Free


Approximate String Joins in a Database (Almost) for Free Luis Gravano Columbia University Panagiotis G. Ipeirotis Columbia University H. V. Jagadish University of Michigan gravano@cs.columbia.edu pirot@cs.columbia.edu jag@eecs.umich.edu Nick Koudas AT&T Labs–Research S. Muthukrishnan AT&T Labs–Research Divesh Srivastava AT&T Labs–Research koudas@research.att.com muthu@research.att.com divesh@research.att.com Abstract String data is ubiquitous, and its management has taken on particular importance in the past few years. Approximate queries are very important on string data especially for more complex queries involving joins. This is due, for example, to the prevalence of typographical errors in data, and multiple conventions for recording attributes such as name and address. Commercial databases do not support approximate string joins directly, and it is a challenge to implement this functionality efficiently with user-defined functions (UDFs). In this paper, we develop a technique for building approximate string join capabilities on top of commercial databases by exploiting facilities already available in them. At the core, our technique relies on matching short substrings of length , called -grams, and taking into account both positions of individual matches and the total number of such matches. Our approach applies to both approximate full string matching and approximate substring matching, with a variety of possible edit distance functions. The approximate string match predicate, with a suitable edit distance threshold, can be mapped into a vanilla relational expression and optimized by conventional relational optimizers. We demonstrate experimentally the benefits of our technique over the direct use of UDFs, using commercial database systems and real data. To study the I/O and CPU behavior of approximate string join algorithms with variations in edit distance and -gram length, we also describe detailed experiments based on a prototype implementation. 1 Introduction String data is ubiquitous. To name only a few commonplace applications, consider product catalogs (for books, music, software, etc.), electronic white and yellow page directories, specialized information sources such as patent databases, and customer relationship management data. As a consequence, management of string data in databases has taken on particular importance in the past few years. Applications that collect and correlate data from independent data sources for warehousing, mining, and statistical analysis rely on efficient string matching to perform their tasks. Here, correlation between the data is typically based on joins between descriptive string attributes in the various sources. However, the quality of the string information residing in various databases can be degraded due to a variety of reasons, including human typing errors and flexibility in specifying string attributes. Hence the results of the joins based on exact matching of string attributes are often of lower quality than expected. The following example illustrates these problems: Example 1.1 [String Joins] Consider a corporation maintaining various customer databases. Requests for correlating data sources are very common in this context. A specific customer might be present in more than one database because the customer subscribes to multiple services that the corporation offers, and each service may have developed its database independently. In one database, a customer’s name may be recorded as John A. 
Smith, while in another database the name may be recorded as Smith, John. In a different database, due to a typing error, this name may be recorded as Jonh Smith. A request to correlate these databases and create a unified view of customers will fail to produce the desired output if exact string matching is used in the join.  Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 27th VLDB Conference, Roma, Italy, 2001 Unfortunately, commercial databases do not directly support approximate string processing functionality. Specialized tools, such as those available from Trillium Software1 , are useful for matching specific types of values such as addresses, but these tools are not integrated with 1 www.trillium.com databases. To use such tools for information stored in databases, one would either have to process data outside the database, or be able to use them as user-defined functions (UDFs) in an object-relational database. The former approach is undesirable in general. The latter approach is quite inefficient, especially for joins, because relational engines evaluate joins involving UDFs whose arguments include attributes belonging to multiple tables by essentially computing the cross-products and applying the UDFs in a post-processing fashion. To address such difficulties, we need techniques for efficiently identifying all pairs of approximately matching strings in a database of strings. Whenever one deals with matching in an approximate fashion, one has to specify the approximation metric. Several proposals exist for strings to capture the notion of “approximate equality.” Among those, the notion of edit distance between two strings is very popular. According to this notion, deletion, insertion, and substitution of a character are considered as unit cost operations and the edit distance between two strings is defined as the lowest cost sequence of operations that can transform one string to the other. Although there is a fair amount of work on the problem of approximately matching strings (see Section 6), we are not aware of work related to approximately matching all string pairs based on edit distance (or variants of it), as is needed in approximate string joins. Moreover, we are not aware of any work related to this problem in the context of a relational DBMS. In this paper, we present a technique for computing approximate string joins efficiently. At the core, our technique relies on matching short substrings of length of the database strings (also known as -grams). We show how a relational schema can be augmented to directly represent -grams of database strings in auxiliary tables within the database in a way that will enable use of traditional relational techniques and access methods for the calculation of approximate string joins. By taking into account the total number of such matches and the positions of individual -gram matches we guarantee no false dismissals under the edit distance metric, as well as variations of it, and the identification of a set of candidate pairs with a few false positives that can be later verified for correctness. 
Instead of trying to invent completely new join algorithms from scratch (which would be unlikely to be incorporated into existing commercial DBMSs), we opted for a design that would require minimal changes to existing database systems. We show how the approximate string match predicate, with a suitable edit distance threshold, can be mapped into a vanilla SQL expression and optimized by conventional optimizers. The immediate practical benefit of our technique is that approximate string processing can be widely and effectively deployed in commercial relational databases without extensive changes to the underlying database system. Furthermore, by not requiring any changes in the DBMS internals, we can re-use existing facilities, like the query optimizer, join ordering algorithms and selectivity estimation. The rest of the paper is organized as follows: In Section 2 we give the notation and the definitions that we will use. Then, in Section 3 we introduce formally the prob- lem of approximate string joins and we present our proposal. In Section 4 we present the results of an experimental study comparing the proposed approach to other applicable methods, demonstrating performance benefits and presenting performance trends for several parameters of interest. Finally, in Section 5 we describe how we can adapt our techniques to address further problems of interest. In particular we show how to incorporate an alternate string distance function, namely the block edit distance (where edit operations on contiguous substrings are inexpensive), and we address the problem of approximate substring joins. 2 Preliminaries 2.1 Notation  We use , possibly with subscripts, to denote tables, , possibly with subscripts, to denote attributes, and , possibly with subscripts, to denote records in tables. We use the notation to refer to attribute of table , and to refer to the value in attribute of record . Let be a finite alphabet of size . We use lowercase Greek symbols, such as , possibly with subscripts, to denote strings in . Let be a string of length . We use , , to denote a substring of of length starting at position .                                $# !" ! &%'    Definition 2.1 [Edit Distance] The edit distance between two strings is the minimum number of edit operations (i.e., insertions, deletions, and substitutions) of single characters needed to transform the first string into the second.  2.2 ( -grams: A Foundation for Approximate String Processing Below, we briefly review the notion of positional -grams from the literature, and we give the intuition behind their use for approximate string matching [16, 15, 13]. Given a string , its positional -grams are obtained by “sliding” a window of length over the characters of . Since -grams at the beginning and the end of the string can have fewer than characters from , we introduce new characters “#” and “$” not in , and conceptually extend the string by prefixing it with occurrences of “#” and suffixing it with occurrences of “$”. Thus, each -gram contains exactly characters, though some of these may not be from the alphabet .    #  #    *)+    % #    ,. /0% #      % Definition 2.2 [Positional -gram] A positional -gram , where of a string is a pair is the -gram of that starts at position , counting on the extended string. The set of all positional -grams of pairs constructed a string is the set of all the from all -grams of . 
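Although most of the symbols in Definitions 2.1 and 2.2 were lost in extraction, the construction itself is simple. The following minimal sketch (our illustration, not the authors' code) extracts the positional q-grams of a string; it reproduces Example 2.1 below for the string john smith with q = 3.

def positional_qgrams(s, q=3):
    # Conceptually extend the string with q-1 '#' characters in front and
    # q-1 '$' characters at the end, then slide a window of length q.
    padded = "#" * (q - 1) + s + "$" * (q - 1)
    # Positions are counted on the extended string, starting at 1.
    return [(i + 1, padded[i:i + q]) for i in range(len(padded) - q + 1)]

# positional_qgrams("john smith", 3) yields
# [(1, '##j'), (2, '#jo'), (3, 'joh'), ..., (11, 'th$'), (12, 'h$$')].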
#       The intuition behind the use of -grams as a foundation for approximate string processing is that when two strings and are within a small edit distance of each other, they share a large number of -grams in common [15, 13]. The following example illustrates this observation. 21 43 Example 2.1 [Positional -gram] The positional grams of length =3 for string john smith are (1,##j), (2,#jo), (3,joh), (4,ohn), (5,hn ), (6,n s), (7, sm), (8,smi), (9,mit), (10,ith), (11,th$), (12,h$$)  . Similarly, the positional grams of length =3 for string john a smith, which is at an edit distance of two from john smith, are (1,##j), (2,#jo), (3,joh), (4,ohn), (5,hn ), (6,n a), (7, a ), (8,a s), (9, sm), (10,smi), (11,mit), (12,ith), (13,th$), (14,h$$)  . If we ignore the position information, the two -gram sets have 11 -grams in common. Interestingly, only the first five positional -grams of the first string are also positional -grams of the second string. However, an additional six positional -grams in the two strings differ in their position by just two positions. This illustrates that, in general, the use of positional -grams for approximate string processing will involve comparing positions of “matching” grams within a certain “band.”  In the next section we describe how we exploit the concept of -grams to devise effective algorithms for approximate string joins (as opposed to the individual approximate string matches described above). 3 Approximate String Joins In the context of a relational database, we wish to study techniques and algorithms enabling efficient calculation of approximate string joins. More formally, we wish to address the following problem: 1  $ 3   )   1  $  +0) 3     1 Problem 1 (Approximate String Joins) Given tables and with string attributes and , and an  integer  , retrieve all pairs of records  such that edit distance(  )  . 3 1 3 Our techniques for approximate string processing in databases share a principle common in multimedia and spatial algorithms. First, a set of candidate answers is obtained using a cheap, approximate algorithm that guarantees no false dismissals. We achieve this by performing a join on the -grams along with some additional filters that are guaranteed not to eliminate any real approximate match. Then, as a second step, we use an expensive, in-memory algorithm to check the edit distance between each candidate string pair and we eliminate all false positives. In the rest of this section we describe in detail the algorithms used, and how they can be mapped into vanilla SQL expressions. More specifically, the rest of the section is organized as follows. In Section 3.1 we describe the naive solution, which involves the direct application of user-defined functions (UDFs) to address the problem. In Section 3.2 we describe how to augment a database with -gram information that is needed to run the approximate string joins. Finally, in Section 3.3 we describe a set of filters that we use to ensure a small set of candidates and we describe how to map these filters into SQL queries that can be subsequently optimized by regular query optimizers. 3.1 Exploiting User-Defined Functions Our problem can be expressed easily in any objectrelational database system that supports UDFs, such as Oracle or DB2. One could register with the database a ternary UDF edit distance(s1, s2, k) that returns true if its two string arguments are within edit distance of the integer argument  . 
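Such a predicate boils down to the textbook dynamic-programming edit-distance computation, cut off as soon as the distance is known to exceed k. The sketch below shows what such a UDF could compute; it is our illustration, not the UDF registered in the paper's experiments.

def edit_distance_at_most(s1, s2, k):
    # Returns True iff the unit-cost edit distance between s1 and s2 is <= k.
    if abs(len(s1) - len(s2)) > k:
        return False          # lengths alone already rule the pair out
    # Standard dynamic programming, kept as two rows of the DP table.
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, start=1):
        curr = [i] + [0] * len(s2)
        for j, c2 in enumerate(s2, start=1):
            curr[j] = min(prev[j] + 1,               # deletion
                          curr[j - 1] + 1,           # insertion
                          prev[j - 1] + (c1 != c2))  # substitution or match
        if min(curr) > k:
            return False      # every alignment already exceeds the threshold
        prev = curr
    return prev[-1] <= k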
Then, the approximate string join problem for edit distance  could be represented in SQL as: ) 1    ) 3  " 1, 3 Q1: SELECT FROM WHERE edit distance( 1  $*) 3   )  ) To evaluate this query, relational engines would essentially have to compute the cross-product of tables and , and apply the UDF comparison as a post-processing filter. However, the cross-products of large tables are huge and the UDF invocation, which is an expensive predicate, on every record in the cross-product makes the cost of the join operation prohibitive. For these reasons, we seek a better solution and we describe our approach next. 1 3 3.2 Augmenting a Database with Positional -Grams To enable approximate string processing in a database system through the use of -grams, we need a principled mechanism for augmenting the database with positional -grams corresponding to the original database strings. Let be a table with schema   , such is the key attribute that uniquely identifies records that in , and some attributes ,  , are string-valued. For each string attribute that we wish to consider for approximate string processing, we create an auxiliary ta  ble with three attributes. For a of a record of , its postring in attribute sitional -grams are represented as separate records in the table , where  identifies the position of the -gram contained in !" . These records all share the same value for the attribute  , which serves as the foreign key attribute to table . Since the auxiliary -gram tables are used only during the approximate join operation, they can be created on-thefly, when the database wants to execute such an operation, and deleted upon completion. In the experimental evaluation (Section 4) we will show that the time overhead is negligible compared to the cost of the actual join. The space overhead for the auxiliary -gram table for a string field of a relation with records is:     (  )   ( #   ( $ )(       (    ( (   )* 1 )  )  / % #   /*%  #   (       #   %&% %  %'%(*) + 1         #  1        where % is the size of the additional fields in the auxiliary -gram table (i.e., -, and ./ ). Since (0) + , for any reasonable value of , it follows # 1 2% (*) + that . Thus, the size of the auxiliary table is bounded by some linear function of times the size of the corresponding column in the original table. After creating an augmented database with the auxiliary tables for each of the string attributes of interest, we can  1         (  %  efficiently calculate approximate string joins using simple SQL queries. We describe the methods next. 3.3 Filtering Results Using -gram Properties In this section, we present our basic techniques for processing approximate string joins based on the edit distance metric. The key objective here is to efficiently identify candidate answers to our problems by taking advantage of the -grams in the auxiliary database tables and using features already available in database systems such as traditional access and join methods. For reasons of correctness and efficiency, we require no false dismissals and few false positives respectively. To achieve these objectives our technique takes advantage of three key properties of -grams, and uses the three filtering techniques described below. Count Filtering: The basic idea of C OUNT F ILTERING is to take advantage of the information conveyed by the sets and  of -grams of the strings and , ignoring positional information, in determining whether and are within edit distance  . 
The intuition here is that strings that are within a small edit distance of each other share a large number of -grams in common. This intuition has appeared in the literature earlier [14], and can be formalized as follows. Consider a string , and let be obtained by a substitution of a single character in . Then, the sets of -grams and  differ by at most (the length of the -gram). This is because -grams that do not overlap with the substituted character must be common to the two sets, and there are only -grams that can overlap with the substituted character. A similar observation holds true for single character insertions and deletions. and must have at least In other words, in these cases,   -grams $ in common. When the edit distance between and is  , the following lower bound on the number of matching -grams holds. 1 1 1 , - 1 3 ,-    1  )   3  % #  #  3 , 3 1 , - 3   1 )  3  #  1 1 3 3 ,    1  )  43  # Proposition 3.1 Consider strings and , of lengths and , respectively. If and are within an edit  distance of  , then the cardinality of  , ignoring positional information, must be at least   .  1   3   #  #  1 3 ,-  Position Filtering: While C OUNT F ILTERING is effective in improving the efficiency of approximate string processing, it does not take advantage of -gram position information. In general, the interaction between -gram match positions and the edit distance threshold is quite complex. Any given -gram in one string may not occur at all in the other string, and positions of successive -grams may be off due to insertions and deletions. Furthermore, as always, we must keep in mind the possibility of a -gram in one string occurring at multiple positions in the other string. in one string to We define a positional -gram correspond to a positional -gram in another string  )  1  ) 3 1    ,    ,    ,    ,  ,  ,     !"   AND #  ! "   AND  '(+6 * AND . $%" &')(+*,-" .!  /     "    0 1  2 4 3      "    0 1 2 AND . .5! 6 278'9;:<>=%  8?@3A27')9;:<@=B #? GROUP BY   C/ DC/  ECE#  6 HAVING COUNT(*) FG27')9;:<@=B 8?@3IH3!= 3-H?KJL AND 6 COUNT(*) FG27')9;:<@=B )?@3IH3!= 3IH?KJ L AND 6 edit distance(  ECE# #C ) SELECT FROM WHERE Figure 1: Query Q2: Expressing C OUNT F ILTERING, P O SITION F ILTERING , and L ENGTH F ILTERING as an SQL expression. 3 1 3  ) 1 , after the sequence of edit opera 1 to  3 , “becomes” -gram   ) 3 in the $ M if and tions that convert edited string. Example 3.1 [Corresponding -grams] Consider the $ONQP>NKR>NQP>NSNKP>N and &$ONKP>NSNQP>NTNQP>N . The strings edit distance between these strings is 1 (delete x to transform the first string to the second). Then /U SV  in corresponds to /W SV  in but not to /X SV  . 21 3  ) 3  )  ) 1  Notwithstanding the complexity of matching positional -grams in the presence of edit errors in strings, a useful filter can be devised based on the following observation [13]. 1 3 Proposition 3.2 If strings and are within an edit distance of  , then a positional -gram in one cannot correspond to a positional -gram in the other that differs from it by more than  positions.  Length Filtering: We finally observe that string length provides useful information to quickly prune strings that are not within the desired edit distance. 21 43 Proposition 3.3 If two strings and are within edit distance  , their lengths cannot differ by more than  .  
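The three propositions above translate directly into SQL, as discussed next under "SQL Expression and Evaluation." Since the text of query Q2 in Figure 1 did not survive the extraction, the sketch below gives a hedged reconstruction of a Q2-style self-join, driven from Python with sqlite3 purely for illustration; the auxiliary-table layout follows Section 3.2, the count bound follows Proposition 3.1, and all table, column, and function names are ours rather than the paper's.

import sqlite3

def qgrams(s, q):
    padded = "#" * (q - 1) + s + "$" * (q - 1)
    return [(i + 1, padded[i:i + q]) for i in range(len(padded) - q + 1)]

def within_k(s1, s2, k):
    # Plain dynamic-programming verification, as in the earlier UDF sketch.
    if abs(len(s1) - len(s2)) > k:
        return 0
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, 1):
        curr = [i] + [0] * len(s2)
        for j, c2 in enumerate(s2, 1):
            curr[j] = min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (c1 != c2))
        prev = curr
    return int(prev[-1] <= k)

def approximate_self_join(strings, q=3, k=2):
    conn = sqlite3.connect(":memory:")
    conn.create_function("edit_distance", 3, within_k)
    conn.execute("CREATE TABLE R1 (A0 INTEGER PRIMARY KEY, Ai TEXT)")
    conn.execute("CREATE TABLE R1_AiQ (A0 INTEGER, Pos INTEGER, Qgram TEXT)")
    for rid, s in enumerate(strings):
        conn.execute("INSERT INTO R1 VALUES (?, ?)", (rid, s))
        conn.executemany("INSERT INTO R1_AiQ VALUES (?, ?, ?)",
                         [(rid, p, g) for p, g in qgrams(s, q)])
    sql = """
        SELECT r1.A0, r2.A0, r1.Ai, r2.Ai
        FROM   R1 r1, R1 r2, R1_AiQ g1, R1_AiQ g2
        WHERE  r1.A0 = g1.A0 AND r2.A0 = g2.A0
          AND  r1.A0 < r2.A0                              -- report each pair once
          AND  g1.Qgram = g2.Qgram                        -- q-gram equijoin
          AND  ABS(g1.Pos - g2.Pos) <= :k                 -- position filtering
          AND  ABS(LENGTH(r1.Ai) - LENGTH(r2.Ai)) <= :k   -- length filtering
        GROUP BY r1.A0, r2.A0, r1.Ai, r2.Ai
        HAVING COUNT(*) >= LENGTH(r1.Ai) - 1 - (:k - 1) * :q   -- count filtering
           AND COUNT(*) >= LENGTH(r2.Ai) - 1 - (:k - 1) * :q
           AND edit_distance(r1.Ai, r2.Ai, :k)            -- verify survivors
    """
    return conn.execute(sql, {"k": k, "q": q}).fetchall()

# approximate_self_join(["john smith", "john a smith", "jonh smith"], q=3, k=2)
# returns the two pairs that are within edit distance 2 of "john smith".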
SQL Expression and Evaluation: What is particularly interesting is that C OUNT F ILTER ING , P OSITION F ILTERING , and L ENGTH F ILTERING can be naturally expressed as an SQL expression on the augmented database described in Section 3.2, and efficiently implemented by a commercial relational query engine. The SQL expression Q2, shown in Figure 1, modifies query Q1 in Section 3.1 to return the desired answers. Consequently, if a relational engine receives a request for an approximate string join, it can directly map it to a conventional SQL expression and optimize it as usual. (Of course,  and are constants that need to be instantiated before the query is evaluated.) Essentially, the above SQL query expression joins the auxiliary tables corresponding to the string-valued atand on their Qgram attributes, along tributes with the foreign-key/primary-key joins with the original 1   3  1 3 database tables and to retrieve the string pairs that need to be returned to the user. The P OSITION F ILTERING is implemented as a condition to the WHERE clause of the SQL expression above. The  WHERE clause will prune out any pair of strings in that share many -grams in common but that are such that the positions of the identical -grams differ substantially. Hence, such pairs of strings will be eliminated from consideration before the COUNT(*) conditions in the HAVING clause are tested. Furthermore, this filter reduces the size of the -gram join, hence it makes the computation of the query faster, since fewer pairs of -grams have to be examined by the GROUP BY and the HAVING clause. The simplicity of this check when coupled with the ability of relational engines to use techniques like band-join processing [6] makes this a worthwhile filter. The L ENGTH F ILTERING is implemented as an additional condition to the WHERE clause of the SQL expression above, which compares the lengths of the two strings. Again, like the P OSITION F ILTERING technique, this filter reduces the size of the -gram join, and subsequently the size of the candidate set. Finally the C OUNT F ILTERING is implemented mainly by the conditions in the HAVING clause. The string pairs that share only a few -grams (and not significantly many) will be eliminated by the COUNT(*) conditions in the that do not HAVING clause. Any string pairs in  share any -grams are eliminated by the conditions in the WHERE clause. However, even after the filtering steps the candidate set may still have false positives. Hence, the expensive UDF invocation edit distance(  ) still needs to be performed, but hopefully on just a small fraction of all possible string pairs. We have included all the three filtering mechanisms in Q2. Of course any one of these filtering mechanisms may be left out of query Q2, and resulting queries will still perform our task albeit perhaps less efficiently. In Section 4, we quantify the benefits of each of the filtering mechanisms individually. In Section 4, we quantify this performance difference using commercial database systems and real data sets. By examining the query evaluation plans generated by commercial database systems, under varying availability of access methods, we observed that relational engines make good use of traditional access methods and join methods in efficiently evaluating the above SQL expression. 1 1 3 3 1  $ ) 3   ) 4 Experimental Evaluation In this section we present the results of an experimental comparison analyzing various trends in the approximate string processing operations. 
We start in Section 4.1 by describing the data sets that we used in our experiments. Then, in Section 4.2 we discuss the baseline experiments that we conducted using a commercial DBMS to compare our approach for approximate string joins against an implementation that uses SQL extensions in a straightforward way. Finally, in Section 4.3 we report additional experimental results for our technique using a prototype relational system we developed. 4.1 Data Sets All data sets used in our experiments are real, with string attributes extracted by sampling from the AT&T WorldNet customer relation database. We have used three different data sets set1, set2, and set3 for our experiments with different distributional characteristics. Set1 consists of the first and last names of people. Set1 has approximately 40K tuples, each with an average length of 14 characters. The distribution of the string lengths in set1 is depicted in Figure 2(a): the lengths are mostly around the mean value, with small deviation. Set2 was constructed by concatenating three string attributes from the customer database. Set2 has approximately 30K tuples, each with an average length of 38 characters. The distribution of the string lengths in set2 is depicted in Figure 2(b): the lengths follow a close-to-Gaussian distribution, with an additional peak around 65 characters. Finally, set3 was constructed by concatenating two string attributes from the customer database. Set3 has approximately 30K tuples, each with an average length of 33 characters. The distribution of the string lengths in set3 is depicted in Figure 2(c): the length distribution is almost uniform up to a maximum string length of 67 characters. 4.2 DBMS Implementation The first experiment we performed was to compare our approach with a straightforward SQL formulation of the problem with a function to compute the edit distance of two strings as a UDF, and performing a join query by essentially using the UDF invocation as the join predicate. This is a baseline comparison to establish the benefits of our approach. We implemented the function to assess the edit distance of two strings as a UDF2 and we registered it in a commercial DBMS (Oracle 8i) running on a SUN 20 Enterprise Server. We started by issuing the Q1 query (see Section 3.1) to the DBMS, to evaluate a self-join on set1. As expected, the DBMS chose a nested loop join algorithm to evaluate the join. We tried to measure the execution times over this data set, but unfortunately the estimated time to finish the processing was extremely high (more than 3 days). Therefore, to compare our approach with the direct use of UDFs we decided to compare the methods for a random subset of set1 consisting of 1,000 strings. Hence, we issued the Q1 self-join query to determine string pairs in the small data set within edit distance of  . Moreover, to assess the utility of the proposed filters when applied as UDF functions, we registered an additional UDF that first applies the filtering techniques we proposed on pairs of strings supplied in the input, and if the string pair passes the filter, then determines if the strings are within distance  . Each of these queries took about 30 minutes to complete for this small data set. Applying filtering and edit distance computation within the UDF requires slightly longer time compared to Q1. Finally, we issued query Q2, which implements our technique (Section 3.3). The execution times in this case are in the order of one minute. 
The execution time increases as edit distance 6 2 We implemented the to decide whether =B< ? decision algorithm 6 two strings match or not within edit distance . Number of Strings Number of Strings 10000 8000 6000 4000 2000 0 1 6 11 16 21 2 Number of Strings 1200 12000 1000 800 600 400 200 31 0 1 9 String Length 17 25 33 41 49 57 65 String Length (a) 600 500 400 300 200 100 0 1 8 15 22 29 36 43 50 57 64 String Length (c) (b) Figure 2: Distribution of String Lengths for the (a) set1, (b) set2, and (c) set3 Data Sets. 10000 Response Time (sec) Q1 (UDF only) Q2 (Filtering) 1000 100 10 Q1 (UDF only) Q2 (Filtering) k=1 k=2 k=3 1954 2028 2044 48 68 91 Figure 3: Executing Queries Q1 and Q2 over the Sample Database. increases, since more strings are expected to be within the specified edit distance, and we have to verify more string pairs. The results are reported in Figure 3. It is evident that using our relational technique offers very large performance benefits, being more than 20 times faster than the straightforward UDF implementation. Using query Q2, we also experimented with various physical database organizations for the commercial DBMS and observed the plans generated. When there were no indexes available on the -gram tables, the joins are executed using hash-join algorithms and the group-by clause is executed using hashing. When there is an index on one or on both -gram tables joins use sort-merge-join algorithms and the group-by clause is executed using hashing. 4.3 Performance of Approximate String Processing Algorithms Based on the intuition obtained using the commercial DBMS, we developed a home-grown relational system prototype to conduct further experiments in a more controlled and flexible fashion, disassociating ourselves and our observations from component interactions between DBMS modules. We emphasize that our objective is to observe performance trends under the parameters that are associated with our problem (i.e., -gram size, number of errors allowed). These experiments are not meant to evaluate the relative performance of the join algorithms. Choosing which algorithm to use in each case is the task of the query optimizer and modern optimizers are effective for this task. We conducted experiments using our prototype and the data sets of Section 4.1. In our prototype, L ENGTH F IL TERING and P OSITION F ILTERING are applied before creating the join on the -gram relations. Then, C OUNT F IL TERING takes place using hashing on the output of the join operations between -gram relations. We used two performance metrics: the size of the candidate set and the total running time of the algorithm decomposed into processor time and I/O time. The processor time includes the time to validate the distance between candidate pairs, and the I/O time includes the time for querying the auxiliary tables. The results below do not include the time to generate and index the auxiliary tables. For all the data sets the time spent to generate the auxiliary tables was less than 100 seconds and the time to create a B-tree index on them, using bulk loading, was less than 200 seconds. Hence, it seems feasible to generate these tables on the fly before an approximate string join. We now analyze the performance of approximate string join algorithms under various parameters of interest. Effect of Filters In the worst case (like in query Q1), the cross product of the relations has to be tested for edit distance. The aim of introducing filters was to reduce the number of candidate pairs tested. 
The perfect filter would eliminate all the false positives, giving the exact answer that would need no further verification. To examine how effective each filter and each combination of filters is, we ran different queries, enabling different filters each time, and measured the size of the candidate set. Then, we compared its size against that of the cross product and against the size of the real answer with no false positives. We examined first the effectiveness of L ENGTH F IL TERING for the three data sets. As expected, L ENGTH F IL TERING was not so effective for set1, which has a limited spread of string lengths (Figure 2(a)). L ENGTH F ILTER ING gave a candidate set that was between 40% to 70% of the cross-product size (depending on the edit distance). On the other hand, L ENGTH F ILTERING was quite effective for set2 and set3, which have strings of broadly variable lengths (Figure 2(b) and (c)). The candidate set size was between 1.5% to 10% of the cross-product size. The detailed results are shown in Figure 4. Enabling C OUNT F ILTERING in conjunction with L ENGTH F ILTERING causes a dramatic reduction on the number of candidate pairs: on average (over the various combinations of  and tested) the reduction is more than 99% for all three data sets. On the other hand, enabling P OSITION F ILTERING with L ENGTH F ILTERING reduces the number of candidate pairs, but the difference is not so dramatic. On average it shrinks the size of the candidate set by 50%. Finally, enabling all the filters together worked best, as expected, with only 50% as many candidate pairs as those without P OSITION F ILTERING, confirming our previous measurement that position filtering reduces the candidate set by a factor of two. The comparative results for the three data sets are depicted in Figure 4. CP=Cross Product, L=Length Filtering, LP=Length and Position Filtering, LC=Length and Count Filtering, LPC=Length, Position, and Count Filtering, Real=Number of Real matches L LP LC LPC Real 1.E+10 1.E+09 1.E+08 1.E+07 1.E+06 CP L LP LC LPC 1.E+09 1.E+08 1.E+07 1.E+06 1.E+05 1.E+05 k=1 k=2 k=3 CP 1.E+10 Real Candidate Set Size CP Candidate Set Size Candidate Set Size 1.E+10 L LP LC LPC Real 1.E+09 1.E+08 1.E+07 1.E+06 1.E+05 k=1 (a) k=2 k=3 k=1 k=2 (b) k=3 (c) Figure 4: Candidate Set Size for Various Filter Combinations and the (a) set1, (b) set2, and (c) set3 Data Sets ( =2). 1.E+11 q=1 q=2 q=3 q=4 q=5 1.E+06 Q-gram Join Size Candidate Set Size 1.E+07 1.E+09 h 1.E+08 k=1 k=2 k=3 q=2 Equijoin 1.E+05 set1 set2 set3 (a) Edit Distance  =2 k=1 L LP k=2 k=3 q=3 (a) Size of the -gram Join for the set1 Data Set 1.E+11 q=1 q=2 q=3 q=4 q=5 1.E+07 Q-gram Join Size 1.E+08 Candidate Set Size 1.E+10 1.E+10 1.E+09 1.E+08 k=1 1.E+06 set1 set2 k=2 q=2 set3 k=3 Equijoin k=1 L LP k=2 k=3 q=3 (b) Edit Distance  =3 (b) Size of the -gram Join for the set2 Data Set Figure 5: Candidate Set Size for Various -gram Lengths with All Filters Enabled. Figure 6: Size of the -gram Joins for Various Filter Combinations. Our experiments indicate that a small value of tends to give better results. We observe that values of greater than three give consistently worse results compared to smaller values. This is due to the threshold for C OUNT F ILTER ING , which gets less tight for higher ’s. Furthermore, the value of =1 gives worse results than =2, because the value =1 does not allow for -gram overlap. When =2 or =3, the results are inconclusive. 
However, since a higher value of results in increased space overhead (Section 3.2), =2 seems preferable. The increased efficiency for confirms approximate theoretical estimations in [10] about the optimal value of for approximate string matching with    very long strings ( $  , where  is the length of the string). In Figure 5 we plot the results for  =2 and  =3. Finally, we examined the effect of L ENGTH F ILTERING and P OSITION F ILTERING on the size of the -gram join (i.e., the number of tuples in the join of the -gram tables before the application of the GROUP BY, HAVING clause). The effectiveness of these filters plays an important role in the execution time of the algorithm. If the filters are effective, the -gram join is small and the calculation of C OUNT F ILTERING is faster, because the GROUP BY, HAVING clauses have to examine fewer -gram pairs. Our measurements show that L ENGTH F ILTERING decreases the size of the -gram join by a factor of 2 to 12 compared to the naive equijoin on the -gram attribute (the decrease was higher for set2 and set3). Furthermore, P OSITION F IL TERING , combined with L ENGTH F ILTERING , gives even  better results, resulting in join sizes that are up to two orders of magnitude smaller than that from the equijoin. In Figure 6 we illustrate the effectiveness of the filters for set1 and set2 (the results for set3 were similar to the ones for set2). These results validate our intuition that P OSITION F ILTERING is a useful filter, especially in terms of time efficiency. Effect of Different Query Plans We first report the trends for algorithms that do not make use of indexes on the -gram relations, and then show the trends for the algorithms that use indexes on the -gram relations to perform the join operations. We describe our observations in the sequel. Due to space constraints, we present only results for self-joins using set1. The performance trends are similar for the other data sets and for joins that are not self-joins. No Index Available: In the absence of indexes on relations , the applicable algorithms are Nested Loops (NL), Hash Join (HJ), and Sort-Merge Join (SM). We omit NL from the plots, as this algorithm takes approximately 14 hours to complete for edit distance  =1. Figures 7(a)(b) show the results as the edit distance threshold is increased for -gram size of 3 (Figure 7(a)). We observe that as the distance threshold is increased, the overall execution time increases both in processor and I/O time. The trends match our expectations: I/O time increases as the number of candidate pairs increases, because more pairs are hashed  ( during the hash-based counting phase, for both algorithms. Processor time increases since the candidate set has more string pairs, thus more strings have to be tested. As  and increase, the overall time increases (Figure 7(b)). Both algorithms become heavily processor bound for  =3, =5 as C OUNT F ILTERING becomes less effective and large numbers of false positive candidates are generated that subsequently have to be verified. Indexes Available: We differentiate between two cases: (a) one of the two relations joined is indexed, and (b) both relations have B-tree indexes on them. In the first case, we present results for Indexed Nested Loops (INL) and SM. When both relations have indexes on them, SM is tested. When there is only one index, then SM performs much better than INL. Figures 7(c)(d) present the results for this case. INL performs multiple index probes and incurs a high I/O time. 
The performance trends are consistent with those observed above for the no index case, both for varying gram size and for varying the edit distance threshold. When the size of the -gram relations involved varies (e.g., when one relation consists only of a few strings), the trends are the same both for increasing the -gram size as well as for increasing  . In this case, however, INL performs fewer index probes and might be chosen by the optimizer. In practice, the DBMS picks INL as the algorithm of choice only when one of the -gram relations is very small compared to the other. For all the other cases, SM was the algorithm of choice and this is also confirmed by our measurements with the prototype implementation. Figures 7(e)(f) present the results for the case when indexes are available on both relations for increasing number of errors and two -gram sizes (Figure 7(e)) and increasing -gram size for two values of  (Figure 7(f)). The performance trends are consistent with those observed so far, both for varying -gram size and for varying the edit distance threshold. 5 Extensions We now illustrate the utility of our techniques for two extensions of our basic problems: (i) approximate substring joins, and (ii) approximate string joins when we allow block moves to be an inexpensive operation on strings. 5.1 Approximate Substring Joins The kinds of string matches that are of interest are often based on one string being a substring of another, possibly allowing for some errors. For example, an attribute CityState of one table may contain city and state information for every city in the United States, while another attribute CustAddress (of a different table) may contain addresses of customers. One might be interested in correlating information in the two tables based on values in the CityState attribute being substrings of the CustAddress attribute, allowing for errors based on an edit distance threshold. The formal statement of the approximate substring join problem is: 1   3  " Problem 2 (Approximate Substring Join) Given tables and with string attributes and ,  and an integer  , retrieve all pairs of records 1 3  )   1 3  1  $  + )  such that for some substring edit distance( )  .  3     of , In order to use our approach, we reexamine what filtering techniques can be applied for this problem. For a string to be within edit distance  of a substring of , it must be the case that and (and hence ) must have a certain minimum number of matching -grams. Additionally, the positions of these matches must be in the right order and cannot be too far apart. Clearly, L ENGTH F ILTERING is not applicable in this case. However, it follows from the first observation above that C OUNT F ILTERING is still applicable. Proposition 3.1 needs to be replaced by the following (weaker) proposition: 21 1  3  3 1 3 3 ., - , - #   1  %'  %  Proposition 5.1 Consider strings and . If has a substring such that and are within an edit distance  of  , then the cardinality of  , ignoring positional information, must be at least .  1    The applicability of P OSITION F ILTERING is complex. While it is true from the second observation above that the positions of the -gram matches cannot be too far apart, the -gram at position in may match at any arbitrary   . Hence, P OSITION position in and not just in F ILTERING is not directly applicable for approximate substring matching. 
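For completeness, a sketch of the corresponding count filter follows. Because the threshold printed in Proposition 5.1 was lost in extraction, the code uses a bound we re-derive from the same counting argument as Proposition 3.1; both the constant and the helper names should be read as ours, not the paper's.

from collections import Counter

def qgram_multiset(s, q):
    padded = "#" * (q - 1) + s + "$" * (q - 1)
    return Counter(padded[i:i + q] for i in range(len(padded) - q + 1))

def substring_count_filter(s1, s2, q, k):
    # Passes (s1, s2) through if s1 may match some substring of s2 within
    # edit distance k; positional information is ignored entirely.
    common = sum((qgram_multiset(s1, q) & qgram_multiset(s2, q)).values())
    # Hypothetical threshold len(s1) + 1 - (k + 1) * q: s1 has len(s1) - q + 1
    # q-grams free of its '#'/'$' padding; each of the k edit operations can
    # destroy at most q of them, and every surviving one occurs verbatim in
    # the matching substring of s2, hence in s2's q-gram set.
    return common >= len(s1) + 1 - (k + 1) * q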
The SQL query expression for computing an approximate substring join between and incorporating “substring style” can be easily devised from query Q2, if we remove the clauses that perform the position and length filtering and we replace the edit distance UDF with the appropriate one. The standard algorithm for determining all approximate occurrences of a string in is rather expensive, taking in the worst case. Here we develop an alternatime  tive filtering algorithm, called Substring Position Filtering (SPF), that is based on -grams and their relative positions, and quickly (in quadratic time) provides a check whether one string is an approximate substring of another string . We will briefly describe the SPF algorithm here; for and , SPF certifies to be a candidate given strings for approximate match of as an approximate substring in one or more places within threshold  . As before, this is a filter with no false dismissals. SPF works by finding any one place where potentially occurs in , if any. Let  be the  -gram in string ,  $ . Let .   be the set of positions in at which -gram  occurs; this set may be empty. The algorithm, shown in Figure 8, may be thought of as using standard dynamic programming for edit-distance computation, but savings are achieved by (i) applying the algorithm sparsely only at a subset of positions in guided by the occurrences of certain -grams (line 3 of SPF), and (ii) applying only part of the dynamic programming, again guided by certain -grams (line 5 of SPF). Algorithm SubMatch is the dynamic programming part, which is described here in a top-down recursive way where the table SubMatchArray is filled in as it is computed and read as needed (this is needed since which entries of the SubMatchArray will be computed depends on  3 1  1 1  3 3 1 3 1 1  3  3 3  1 3 3    1  % #   1   ) 3 5000 16000 Processor 4500 14000 IO 4000 I/O Response Time Response Time Processor 12000 3500 3000 2500 2000 1500 10000 8000 6000 4000 1000 500 2000 0 k=1 k=2 k=3 k=1 HJ k=2 0 k=3 q=2 q=3 SM q=4 q=5 q=2 q=3 HJ edit distance q=4 q=5 SM qgram size (a) Increasing Edit Distance  for -gram Length =3 (b) Increasing -gram Length for Edit Distance  =3 No Index Available. 14000 35000 Processor 12000 Processor 30000 I/O IO 25000 Response Time Response Time 10000 8000 6000 4000 2000 20000 15000 10000 5000 0 0 k=1 k=2 k=3 k=1 INL k=2 k=3 q=2 q=3 SM q=4 q=5 q=2 q=3 INL q=4 q=5 SM edit distance qgram size (c) Increasing Edit Distance  for -gram Length =3 (d) Increasing -gram Length for Edit Distance  =3 One Index Available. 16000 14000 14000 12000 Processor Processor I/O I/O 10000 10000 Response Time Response Time 12000 8000 6000 4000 2000 8000 6000 4000 2000 0 k=1 k=2 k=3 k=1 q=3 k=2 0 k=3 q=2 q=5 edit distance (e) Increasing Edit Distance  q=3 q=4 q=5 q=2 k=1 q=3 q=4 q=5 k=3 for -gram Length =3,5(f) Increasing -gram Length for Edit Distance  =1,3 Two Indexes Available. Figure 7: Response Time (in seconds) for Various Physical Database Organizations. 6  Algorithm SPF(  ,  , ) move operations as well. A natural example is in matchmissing = 0; 5 ing names of people; we would like to be able to match  ;  for (> H ;  ) “first-name last-name” with “last-name, first-name” using for 12)= EC ? an error metric that is independent of the length of firstcost = SubMatch( 5! C 6 HC  -H );  if ((missing cost) JDL ) name or last-name. It turns out that the -gram method is    ; return  well suited for this enhanced metric. For this purpose, we missing  ; begin by extending the definition of edit distance. 
return  ;   SubMatch( , ,  ) if ( ) return 0; if SubMatchArray  C already computed, return it. if ( 1 2 = EC ? ) return SubMatch( C -HC -H ); insertion cost = 1 SubMatch( C-HC! ); deletion cost = 1 SubMatch( CC! -H ); substitution cost = 1 SubMatch( C-HC"#H );  SubMatchArray  C$K&%('*) insertion cost, deletion cost, substitution cost  return SubMatchArray  C ;  Figure 8: Substring Position Filtering (SPF) Algorithm. relative occurrences of -grams, and cannot be ascertained a priori). 5.2 Allowing for Block Moves Traditional string edit distance computations are for single character insertions, deletions, and substitutions. However, in many applications we would like to allow block Definition 5.1 [Extended Edit Distance] The extended edit distance between two strings is the minimum cost of edit operations needed to transform the first string into the second. The operations allowed are single-character insertion, deletion, and substitution at unit cost, and the movement of a block of contiguous characters at a cost of + units.  3   1 and   1 )+ 3 " The extended edit distance between two strings is symmetric and  extended edit distance . -,   1 )  3  ,.1 , 3 Theorem 5.1 Let and  be the sets of -grams (of length ) for strings and in the database. If and are within an extended edit distance of  , then the cardinality of # positional information, is at. least  , ignoring /.    #0  1$2 &  +  , where +  $ 3+ . , - ,.   1  )   3  #  #  #  1 3  ) Intuitively, the bound arises from the fact that the block move operation can transform a string of the form 46587#9 . ;. to 4:7 589 , which can result in up to mismatching -grams. # Based on the above observations, it is easy to see that one can apply C OUNT F ILTERING (with a suitably modified threshold) and L ENGTH F ILTERING for approximate string processing with block moves. However, incorporating P OSITION F ILTERING is difficult as described earlier because block moves may end up moving -grams arbitrarily. Nevertheless, we can design an enhanced filtering mechanism (just as we did with the SPF algorithm in the previous section) and incorporate it together with count filtering into a SQL query as before. Due to space limitation we do not list the details. 6 Related Work A large body of work has been devoted to the development of efficient solutions to the approximate string matching problem. For two strings of length and  , available in main memory, there exists a folklore dynamic programming algorithm to compute the edit distance of the strings  time and space [12]. Improvements to the bain  sic algorithm have appeared, offering better average and worst case running times as well as graceful space behavior. Due to space limitations, we do not include a detailed survey here, but we refer the reader to [10] for an excellent overview of the work as well as additional references. Identifying strings approximately in secondary storage is a relatively new area. Indexes such as Glimpse [9] store a dictionary and use a main memory algorithm to obtain a set of words to retrieve. Exact text searching is applied thereafter. These approaches are rather limited in scope due to the static nature of the dictionary, and they are not suitable for dynamic environments or when the domain of possible strings is unbounded. Other approaches rely on suffix trees to guide the search for approximate string matches [4, 11]. In [1], Baeza-Yates and Gonnet solve the problem of exact substring joins, using suffix arrays and outside the context of a relational database. 
In the context of databases, several indexing techniques proposed for arbitrary metric spaces [3, 2] could be applied for the problem of approximately retrieving strings. However such structures have to be supported by the database management system. Cohen [5] presented a framework for the integration of heterogeneous databases based on textual similarity and proposed WHIRL, a logic that reasons explicitly about string similarity using TF-IDF term weighting, from the vector-space retrieval model, rather than the notions of edit distance on which we focus in this paper. Grossman et al. [7, 8] presented techniques for representing text documents and their associated term frequencies in relational tables, as well as for mapping boolean and vector-space queries into standard SQL queries. In this paper, we follow the same general approach of translating complex functionality not natively supported by a DBMS (approximate string queries in our case) into operations and queries that a DBMS can optimize and execute efficiently.   7 Conclusions String processing in databases is a very fertile and useful area of research, especially given the proliferation of web based information systems. The main contribution of this paper is an effective technique for supporting approximate string processing on top of a database system, by using the unmodified capabilities of the underlying system. We showed that significant performance benefits are to be had by using our techniques. Acknowledgments L. Gravano and P. Ipeirotis were funded in part by the National Science Foundation (NSF) under Grants No. IIS-9733880 and IIS-98-17434. P. Ipeirotis is also partially supported by Empeirikeio Foundation. The work of H.V. Jagadish was funded in part by NSF under Grant No. IIS00085945. References [1] R. Baeza-Yates and G. Gonnet. A fast algorithm on average for all-against-all sequence matching. In Proceedings of String Processing and Information Retrieval Symposium (SPIRE’99), pages 16–23, 1999. [2] T. Bozkaya and Z. M. Ozsoyoglu. Distance based indexing for high dimensional metric spaces. In Proceedings of the 1997 ACM SIGMOD Conference on Management of Data, pages 357–368,1997. [3] S. Brin. Near neighbor search in large metric spaces. In Proceedings of the 21st International Conference on Very Large Databases (VLDB’95), pages 574–584, 1995. [4] A. Cobbs. Fast approximate matching using suffix trees. In Combinatorial Pattern Matching, 6th Annual Symposium (CPM’95), pages 41–54, 1995. [5] W. Cohen. Integration of heterogeneous databases without common domains using queries based on textual similarity. In Proceedings of the 1998 ACM SIGMOD Conference on Management of Data, pages 201–212, 1998. [6] D. J. DeWitt, J. F. Naughton, and D. A. Schneider. An evaluation of non-equijoin algorithms. In Proceedings of the 17th International Conference on Very Large Databases (VLDB’91), pages 443–452, 1991. [7] D. A. Grossman, O. Frieder, D. O. Holmes, and D. C. Roberts. Integrating structured data and text: A relational approach. In Journal of the American Society for Information Science (JASIS), 48(2):122– 132, 1997. [8] C. Lundquist, O. Frieder, D. O. Holmes, and D. A. Grossman. A parallel relational database management system approach to relevance feedback in information retrieval. In Journal of the American Society for Information Science (JASIS), 50(5):413–426, 1999. [9] U. Manber and S. Wu. GLIMPSE: A tool to search through entire file systems. In Proceedings of USENIX Winter 1994 Technical Conference, pages 23–32, 1994. 
[10] G. Navarro. A guided tour to approximate string matching. To appear in ACM Computing Surveys, 2001. [11] S. Sahinalp and U. Vishkin. Efficient approximate and dynamic matching of patterns using a labeling paradigm (extended abstract). In 37th Annual Symposium on Foundations of Computer Science, pages 320–328, 1996. [12] T. F. Smith and M. S. Waterman. Identification of common molecular subsequences. In Journal of Molecular Biology, 147:195–197, 1981. [13] E. Sutinen and J. Tarhio. On using L -gram locations in approximate string matching. In Proceedings of Third Annual European Symposium (ESA’95), pages 327–340, 1995. [14] E. Sutinen and J. Tarhio. Filtration with L -samples in approximate string matching. In Combinatorial Pattern Matching, 7th Annual Symposium (CPM’96), pages 50–63, 1996. [15] E. Ukkonen. Approximate string matching with L -grams and maximal matches. In Theoretical Computer Science (TCS), 92(1):191– 211, 1992 [16] J. Ullman. A binary < -gram technique for automatic correction of substitution, deletion, insertion, and reversal errors in words. In The Computer Journal 20(2):141–147, 1977.