<chapter id="fields-and-charsets">
<!-- $Id: field-structure.xml,v 1.13 2007-12-19 09:30:29 adam Exp $ -->
<title>Field Structure and Character Sets
In order to provide a flexible approach to national character set
handling, &zebra; allows the administrator to configure the
system to handle any 8-bit character set — including sets that
require multi-octet diacritics or other multi-octet characters. The
definition of a character set includes a specification of the
permissible values, their sort order (this affects the display in the
SCAN function), and the relationships between upper- and lowercase
characters. Finally, the definition includes the specification of
space characters for the set.
The operator can define different character sets for different fields,
typical examples being standard text fields, numerical fields, and
special-purpose fields such as WWW-style linkages (URx).
The Zebra 1.3 and 2.0 series require that a field type be
a single character, e.g. <literal>w</literal> (for word) and
<literal>p</literal> (for phrase). Zebra 2.1 allows field types to
be any string. This allows for greater flexibility; in particular,
per-locale (language) fields can be defined.

Version 2.1 of Zebra can also be configured, per field, to use the
<ulink url="&url.icu;">ICU</ulink> library to perform tokenization and
normalization of strings. This is an alternative to the "charmap"
files, which have been part of Zebra since its first release.
<section id="default-idx-file">
<title>The default.idx file</title>

The field types, and hence character sets, are associated with data
elements by the indexing rules (say <literal>title:w</literal>) in the
various filters. Fields are defined in a field definition file which,
by default, is called <filename>default.idx</filename>.
This file provides the association between field type codes
and the character map files (with the .chr suffix). The format
of the .idx file is as follows:
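To give a first impression of the format, a minimal definition of a
word index and a sort index might look like this (a sketch only; the
shipped <filename>tab/default.idx</filename> is the authoritative
example):

<screen>
index w
completeness 0
charmap string.chr

sort s
charmap string.chr
</screen>

The individual directives are described below.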
<term>index <replaceable>field type code</replaceable></term>

This directive introduces a new search index code.
The argument is a one-character code to be used in the
.abs files to select this particular index type. An index, roughly,
corresponds to a particular structure attribute during search. Refer
to <xref linkend="zebrasrv-search"/>.

</listitem></varlistentry>
<term>sort <replaceable>field type code</replaceable></term>

This directive introduces a
sort index. The argument is a one-character code to be used in the
.abs file to select this particular index type. The corresponding
use attribute must be used in the sort request to refer to this
particular sort index. The corresponding character map (see below)
is used in the sort process.

</listitem></varlistentry>
<term>completeness <replaceable>boolean</replaceable></term>

This directive enables or disables complete field indexing.
The value of the <replaceable>boolean</replaceable> should be 0
(disable) or 1 (enable). If completeness is enabled, the index entry
will contain the complete contents of the field (up to a limit), with
words (non-space characters) separated by single space characters
(normalized to " " on display). When completeness is
disabled, each word is indexed as a separate entry. Complete subfield
indexing is most useful for fields which are typically browsed (e.g.
titles, authors, or subjects), or instances where a match on a
complete subfield is essential (e.g. exact title searching). For fields
where completeness is disabled, the search engine will interpret a
search containing space characters as a word proximity search.
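To illustrate the difference (assuming a character map that folds to
lowercase and splits on spaces), a field containing the text
"Zen and the Art" would be indexed as follows:

<screen>
completeness 1:   zen and the art         (one complete-field entry)
completeness 0:   zen / and / the / art   (four separate word entries)
</screen>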
</listitem></varlistentry>
<varlistentry id="default.idx.firstinfield">
<term>firstinfield <replaceable>boolean</replaceable></term>

This directive enables or disables first-in-field indexing.
The value of the <replaceable>boolean</replaceable> should be 0
(disable) or 1 (enable).

</listitem></varlistentry>

<varlistentry id="default.idx.alwaysmatches">
<term>alwaysmatches <replaceable>boolean</replaceable></term>

This directive enables or disables alwaysmatches indexing.
The value of the <replaceable>boolean</replaceable> should be 0
(disable) or 1 (enable).

</listitem></varlistentry>
<term>charmap <replaceable>filename</replaceable></term>

This is the filename of the character
map to be used for this field type.
See <xref linkend="character-map-files"/> for details.

</listitem></varlistentry>
<term>icuchain <replaceable>filename</replaceable></term>

Specifies the filename with ICU tokenization and
normalization rules.
See <xref linkend="icuchain-files"/> for details.
Using icuchain for a field type is an alternative to
charmap. It does not make sense to define both
icuchain and charmap for the same field type.

</listitem></varlistentry>
<title>Field types</title>

The following are three excerpts from the standard
<filename>tab/default.idx</filename> configuration file. Notice
that the <literal>index</literal> and <literal>sort</literal>
are grouping directives, which bind all other following directives

# Traditional word index
# Used if completeness is 'incomplete field' (@attr 6=1) and
# structure is word/phrase/word-list/free-form-text/document-text

# Null map index (no mapping at all)
# Used if structure=key (@attr 4=3)
<section id="character-map-files">
<title>Charmap Files</title>

The character map files are used to define the word tokenization
and character normalization performed before inserting text into
the inverted indexes. &zebra; ships with the predefined character map
files <filename>tab/*.chr</filename>. Users are allowed to add
and/or modify maps according to their needs.
<table id="character-map-table" frame="top">
<title>Character maps predefined in &zebra;</title>

<entry>File name</entry>
<entry>Intended type</entry>
<entry>Description</entry>

<entry><literal>numeric.chr</literal></entry>
<entry><literal>:n</literal></entry>
<entry>Numeric digit tokenization and normalization map. All
characters not in the set <literal>-{0-9}.,</literal> will be
suppressed. Note that floating point numbers are processed
fine, but scientific exponential numbers are trashed.</entry>

<entry><literal>scan.chr</literal></entry>
<entry><literal>:w or :p</literal></entry>
<entry>Word tokenization character map for Scandinavian
languages. This one resembles the generic word tokenization
character map <literal>tab/string.chr</literal>; the main
differences are the sorting of the special characters
<literal>üzæäøöå</literal> and equivalence maps according to
Scandinavian language rules.</entry>

<entry><literal>string.chr</literal></entry>
<entry><literal>:w or :p</literal></entry>
<entry>General word tokenization and normalization character
map, mostly useful for English texts. Use this as the basis
for deriving tokenization and normalization maps for your own
language.</entry>

<entry><literal>urx.chr</literal></entry>
<entry><literal>:u</literal></entry>
<entry>URL parsing and tokenization character map.</entry>

<entry><literal>@</literal></entry>
<entry><literal>:0</literal></entry>
<entry>Do-nothing character map used for literal binary
indexing. There is no existing file associated with it, and
no normalization or tokenization is performed at all.</entry>
The contents of the character map files are structured as follows:

<term>encoding <replaceable>encoding-name</replaceable></term>

This directive must be at the very beginning of the file, and it
specifies the character encoding used in the entire file. If
omitted, the encoding <literal>ISO-8859-1</literal> is assumed.

For example, one of the test files found at
<literal>test/rusmarc/tab/string.chr</literal> contains the following

<literal>test/charmap/string.utf8.chr</literal> is encoded

</listitem></varlistentry>
<term>lowercase <replaceable>value-set</replaceable></term>

This directive introduces the basic value set of the field type.
The format is an ordered list (without spaces) of the
characters which may occur in "words" of the given type.
The order of the entries in the list determines the
sort order of the index. In addition to single characters, the
following combinations are legal:

Backslashes may be used to introduce three-digit octal or
two-digit hex representations of single characters
(the latter preceded by <literal>x</literal>).
In addition, the combinations
\\r, \\n, \\t, \\s (space — remember that real
space characters may not occur in the value definition), and
\\ are recognized, with their usual interpretation.

Curly braces {} may be used to enclose ranges of single
characters (possibly using the escape convention described in the
preceding point), e.g. {a-z} to introduce the
standard range of ASCII characters.
Note that the interpretation of such a range depends on
the concrete representation in your local, physical character set.

Parentheses () may be used to enclose multi-byte characters,
e.g. diacritics or special national combinations (e.g. Spanish
"ll"). When found in the input stream (or a search term),
these characters are viewed and sorted as a single character, with a
sorting value depending on the position of the group in the value

For example, <literal>scan.chr</literal> contains the following
lowercase normalization and sorting order:

lowercase {0-9}{a-y}üzæäøöå

</listitem></varlistentry>
<term>uppercase <replaceable>value-set</replaceable></term>

This directive introduces the
upper-case equivalences to the value set (if any). The number and
order of the entries in the list should be the same as in the
<literal>lowercase</literal> directive.

For example, <literal>scan.chr</literal> contains the following
uppercase equivalent:

uppercase {0-9}{A-Y}ÜZÆÄØÖÅ

</listitem></varlistentry>
<term>space <replaceable>value-set</replaceable></term>

This directive introduces the characters
which separate words in the input stream. Depending on the
completeness mode of the field in question, these characters either
terminate an index entry, or delimit individual "words" in
the input stream. The order of the elements is not significant —
otherwise the representation is the same as for the
<literal>uppercase</literal> and <literal>lowercase</literal>

For example, <literal>scan.chr</literal> contains the following

space {\001-\040}!"#$%&'\()*+,-./:;<=>?@\[\\]^_`\{|}~

</listitem></varlistentry>
<term>map <replaceable>value-set</replaceable>
<replaceable>target</replaceable></term>

This directive introduces a mapping from each member of the
value-set on the left to the character on the
right. The character on the right must occur in the value
set (the <literal>lowercase</literal> directive) of the
character set, but it may be a parenthesis-enclosed
multi-octet character. This directive may be used to map
diacritics to their base characters, or to map HTML-style
character representations to their natural form, etc. The
map directive can also be used to ignore leading articles in
searching and/or sorting, and to perform other special

For example, <literal>scan.chr</literal> contains the following
map instructions, among others, to make sure that HTML entity
encoded Danish special characters are mapped to the
equivalent Latin-1 characters:
In addition to specifying sort orders, space (blank) handling,
and upper/lowercase folding, you can also use the character map
files to make &zebra; ignore leading articles in sorting records,
or when doing complete field searching.

This is done using the <literal>map</literal> directive in the
character map file. In a nutshell, what you do is map certain
sequences of characters, when they occur <emphasis>at the
beginning of a field</emphasis>, to a space. Assuming that the
character "@" is defined as a space character in your file, you

The effect of these directives is to map either 'the' or 'The',
followed by a space character, to a space. The hat ^ character
denotes beginning-of-field only when complete-subfield indexing
or sort indexing is taking place; otherwise, it is treated just
as any other character.
Because the <literal>default.idx</literal> file can be used to
associate different character maps with different indexing types
-- and you can create additional indexing types, should the need
arise -- it is possible to specify that leading articles should
be ignored either in sorting, in complete-field searching, or

If you ignore certain prefixes in sorting, then these will be
eliminated from the index, and sorting will take place as if
they weren't there. However, if you set the system up to ignore
certain prefixes in <emphasis>searching</emphasis>, then these
are deleted both from the indexes and from query terms, when the
client specifies complete-field searching. This has the effect
that searches for 'the science journal' and for 'science journal'
will produce the same results.
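As a sketch (the file names here are hypothetical), a configuration
that ignores leading articles in sorting but not in searching could
pair a word index using the standard map with a sort index using a
map that contains the article-stripping directives:

<screen>
index w
charmap string.chr

sort s
charmap string-sort.chr
</screen>

where <filename>string-sort.chr</filename> would be a copy of
<filename>string.chr</filename> with the appropriate map directives
added.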
</listitem></varlistentry>
<term>equivalent <replaceable>value-set</replaceable></term>

This directive introduces equivalence classes of characters
and/or strings for sorting purposes only. It resembles the map
directive, but it does not affect indexing for search and retrieval;
it affects only the sort order used when sort requests are processed.

For example, <literal>scan.chr</literal> contains the following
equivalent sorting instructions, which can be uncommented:

</listitem></varlistentry>
<section id="icuchain-files">
<title>ICU Chain Files</title>

The <ulink url="&url.icu;">ICU</ulink> chain files define a
<emphasis>chain</emphasis> of rules
which specify the conversion process to be carried out for each
record string for indexing.

Both searching and sorting are based on the <emphasis>sort</emphasis>
normalization that ICU provides. This means that scan and sort will
return terms in the sort order given by ICU.

Zebra uses the ICU wrapper of YAZ. Refer to the
<ulink url="&url.yaz.yaz-icu;">yaz-icu man page</ulink> for
documentation about the ICU chain rules.

Use the yaz-icu program to test your icuchain rules.
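For example, assuming your chain rules are in a file
<filename>greek.xml</filename>, you can feed sample text through
yaz-icu and inspect the resulting tokens:

<screen>
echo 'Γειά σου κόσμε' | yaz-icu -c greek.xml
</screen>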
<example><title>Indexing Greek text</title>

Consider a system where all "regular" text is to be indexed
as Greek (locale: EL).
We would have to change our index type file to read

The ICU chain file <filename>greek.xml</filename> could look

<icu_chain locale="el">
<transform rule="[:Control:] Any-Remove"/>
<transform rule="[[:WhiteSpace:][:Punctuation:]] Remove"/>

Zebra is shipped with a field types file <filename>icu.idx</filename>,
which is an ICU chain version of <filename>default.idx</filename>.
<example><title>MARCXML indexing using ICU</title>

The directory <filename>examples/marcxml</filename> includes
a complete sample with MARCXML records that are DOM XML indexed
using ICU chain rules. Study the
<filename>README</filename> in the <filename>marcxml</filename>
directory for details.
<!-- Keep this comment at the end of the file
sgml-minimize-attributes:nil
sgml-always-quote-attributes:t
sgml-parent-document: "zebra.xml"
sgml-local-catalogs: nil
sgml-namecase-general:t