:meta-keywords: cubrid strings, charset, codeset, collation, cubrid locale
:meta-description: Globalization includes internationalization and localization. Internationalization can be applied to various languages and regions. Localization fits the language and culture in a specific area as appending the language-specific components. CUBRID supports multilingual collations including Europe and Asia to facilitate the localization.

.. _globalization-overview:

An Overview of Globalization
============================

Character data
--------------

Character data (strings) may be stored as VARCHAR (STRING), CHAR or ENUM, and each of these types supports a charset and a collation.

A charset (character set, codeset) controls how characters are stored (on any type of storage) as bytes, and how a series of bytes forms a character. CUBRID supports the ISO-8859-1, UTF-8 and EUC-KR charsets. For UTF-8, only the Unicode characters up to codepoint 10FFFF are supported (encoded on up to four bytes). For instance, the character "Ç" is encoded in codeset ISO-8859-1 as a single byte (C7) and in UTF-8 as two bytes (C3 87), while in EUC-KR this character is not available at all.

A collation decides how strings compare. Most often, users require case-insensitive or case-sensitive comparison. For instance, the strings "ABC" and "abc" are equal under a case-insensitive collation, while under a case-sensitive collation they are not, and, depending on other collation settings, the relationship can be "ABC" < "abc" or "ABC" > "abc".

Collation means more than comparing character casing. A collation decides the relationship between two strings (greater, lower, equal), and is used in string matching (LIKE) and in computing boundaries in index scans.

In CUBRID, a collation implies a charset. For instance, the collations "utf8_en_ci" and "iso88591_en_ci" both perform case-insensitive comparison, but operate on different charsets.
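The byte-level behavior described above can be checked outside CUBRID with a small Python sketch (standard library only; this illustrates the charsets themselves, not CUBRID):

```python
# How one character maps to bytes under different charsets.
ch = "\u00C7"  # "Ç", LATIN CAPITAL LETTER C WITH CEDILLA

print(ch.encode("iso-8859-1").hex())  # "c7"   -> one byte in ISO-8859-1
print(ch.encode("utf-8").hex())       # "c387" -> two bytes in UTF-8

# EUC-KR simply has no code for this character:
try:
    ch.encode("euc-kr")
except UnicodeEncodeError:
    print("not representable in EUC-KR")
```
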
Although the results are similar for the ASCII range in these particular cases, comparison with "utf8_en_ci" is slower, since it has to work on a variable number of bytes (UTF-8 encoding).

* "'a' COLLATE iso88591_en_ci" indicates "_iso88591'a' COLLATE iso88591_en_ci".
* "'a' COLLATE utf8_en_ci" indicates "_utf8'a' COLLATE utf8_en_ci".

All string data types support precision. Special care is required with fixed-length characters (CHAR): values of this type are padded to fill up the precision. For instance, inserting "abc" into a CHAR(5) column results in storing "abc  " (2 padding spaces are added). Space (ASCII 32 decimal, Unicode 0020) is the padding character for most charsets; for the EUC-KR charset, however, the padding consists of one character which is stored with two bytes (A1 A1).

Related Terms
-------------

.. In the below, currency part is removed because MONETARY is deprecated.

* **Character set**: A group of encoded symbols (giving a specific number to a certain symbol).
* **Collation**: A set of rules for comparison of characters in the character set and for sorting data.
* **Locale**: A set of parameters that defines any special variant preferences such as number format, calendar format (month and day in characters), date/time format, and collation, depending on the operator's language and country. A locale defines the linguistic localization, and its character set defines how the month names and other data are encoded. A locale identifier consists of at least a language identifier and a region identifier, and is expressed as language[_territory][.codeset] (for example, Australian English using UTF-8 encoding is written as en_AU.UTF-8).
* **Unicode normalization**: The specification by the Unicode character encoding standard that some sequences of code points represent essentially the same character.
CUBRID uses Normalization Form C (NFC: codepoints are decomposed and then recomposed) for input and Normalization Form D (NFD: codepoints are decomposed) for output. However, as an exception, CUBRID does not apply the canonical equivalence rule. For example, under the general NFC rule canonical equivalence is applied, so codepoint U+212A (Kelvin sign "K") is converted to codepoint U+004B (ASCII uppercase "K"). Since CUBRID skips the canonical equivalence conversion to make the normalization algorithm quicker and simpler, it does not perform the reverse conversion either.

* **Canonical equivalence**: A basic equivalence between characters or sequences of characters which cannot be visually distinguished when they are correctly rendered. For example, consider 'Å' ('A' with a ring above). The angstrom sign 'Å' (Unicode U+212B) and the Latin letter 'Å' (Unicode U+00C5) look the same but have different codepoints; however, both decompose to 'A' followed by U+030A, so they are canonically equivalent.

* **Compatibility equivalence**: A weaker equivalence between characters or sequences of characters that represent the same abstract character. For example, consider the number '2' (Unicode U+0032) and the superscript '²' (Unicode U+00B2). '²' is a different form of the number '2'; it is visually distinguishable and has a different meaning, so it is not canonically equivalent to '2'. When normalizing '2²' with NFC, '2²' is kept as-is, since NFC uses only canonical equivalence. With NFKC, however, '²' is decomposed to '2' by compatibility equivalence, and the result is '22'. Unicode normalization in CUBRID does not apply the compatibility equivalence rule.

For an explanation of Unicode normalization in CUBRID, see :ref:`unicode-normalization`. For more details, see http://unicode.org/reports/tr15/.

The default values of the system parameters related to Unicode normalization are unicode_input_normalization=no and unicode_output_normalization=no.
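The difference between NFC/NFD and NFKC, including the Kelvin-sign conversion that CUBRID deliberately skips, can be observed with Python's unicodedata module, which follows the full Unicode normalization rules (a sketch for illustration only; CUBRID's own normalizer behaves as described above):

```python
import unicodedata

# Full NFC applies canonical equivalence, so the Kelvin sign U+212A
# becomes ASCII "K" (U+004B) -- the conversion CUBRID chooses to skip.
print(unicodedata.normalize("NFC", "\u212a"))  # "K"

# Canonical equivalence: the angstrom sign U+212B and "Å" (U+00C5) share
# the same decomposition "A" + U+030A (combining ring above).
nfd_angstrom = unicodedata.normalize("NFD", "\u212b")
nfd_a_ring = unicodedata.normalize("NFD", "\u00c5")
print(nfd_angstrom == nfd_a_ring)  # True

# Compatibility equivalence: NFC keeps "2" + superscript two as-is,
# while NFKC folds the superscript into a plain "2".
print(unicodedata.normalize("NFC", "2\u00b2"))   # "2²"
print(unicodedata.normalize("NFKC", "2\u00b2"))  # "22"
```
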
For a more detailed description of these parameters, see :ref:`stmt-type-parameters`.

Locale Attributes
-----------------

A locale is defined by the following attributes.

.. ", monetary currency" is removed from above: MONETARY is deprecated.

* **Charset (codeset)**: How bytes are interpreted into single characters (Unicode codepoints).
* **Collations**: Locale data may contain several collations. Among all collations defined in the locale's `LDML (UNICODE Locale Data Markup Language) <http://unicode.org/reports/tr35/>`_ file, the last one is the default collation.
* **Alphabet (casing rules)**: One locale may have up to two alphabets: one for identifiers and one for user data.
* **Calendar**: Names of weekdays, months and day periods (AM/PM).
* **Numbering settings**: Symbols for digit grouping.
* **Text conversion data**: Used for CSQL console conversion. Optional.
* **Unicode normalization data**: Data obtained by normalizing several characters with the same shape into one, based on a specified rule. After normalization, characters with the same shape have the same code value even when the locale differs. Each locale can activate or deactivate the normalization functionality.

.. note::

    Generally, a locale supports a variety of character sets. However, CUBRID locales support both the ISO and UTF-8 character sets only for English and Korean; the other, user-defined locales built from LDML files support the UTF-8 character set only.

.. _collation-properties:

Collation Properties
--------------------

A collation is an assembly of information which defines an order for characters and strings. In CUBRID, a collation has the following properties.

* **Strength**: A measure of how "different" basic comparable items (characters) are. This affects selectivity. In LDML files, collation strength is configurable and has four levels. For example, a case-insensitive collation should be set with strength = "secondary" (2) or "primary" (1).
* Whether it supports **expansions** and **contractions** or not.

Each column has a collation, so when applying the :func:`LOWER` and :func:`UPPER` functions, the casing rules of the locale which defines the collation's default language are used.

Depending on collation properties, some CUBRID optimizations may be disabled for some collations:

* **LIKE** rewrite: disabled for collations which map several different characters to the same weight (case-insensitive collations, for example) and for collations with expansions.
* Covering index scan: disabled for collations which map several different characters to the same weight (see :ref:`covering-index`).

For more information, see :ref:`collation-setting-impacted-features`.

.. _collation-naming-rules:

Collation Naming Rules
----------------------

Collation names in CUBRID follow the convention: ::

    <charset>_<lang>_<modifier1>_<modifier2>...

* <charset>: The full charset name as used by CUBRID: iso88591, utf8, euckr.
* <lang>: A region/language specifier. The language code is expected as two characters: en, de, es, fr, it, ja, km, ko, tr, vi, zh, ro. "gen" is used if the collation does not address a specific language, but a more general sorting rule.
* <modifier1>_<modifier2>...: Modifiers with the following meanings. Most of them apply only to LDML collations.

  * ci: case insensitive. In LDML, this can be obtained using the settings strength="secondary" caseLevel="off" caseFirst="off".
  * cs: case sensitive. By default, all collations are case sensitive. In LDML, this can be obtained using at least strength="tertiary".
  * bin: the sorting order under such a collation is almost the same as the order of codepoints; if memory (byte) comparison is used, almost the same result is obtained. The space character and the EUC double-byte padding character are always sorted as zero in a "bin" collation. No collations with this setting are currently configured in LDML (they are already available as built-in collations), but a similar one can be obtained using the maximum setting strength="quaternary" or strength="identical".
  * ai: accent insensitive; this means that 'Á' is sorted the same as 'A'. Due to particularities of the UCA-based algorithms, an accent-insensitive collation is also a case-insensitive collation. In LDML, this can be obtained using strength="primary".
  * uca: signals a UCA-based collation; this is used only to differentiate such collations from similar built-in variants. All LDML collations are based on UCA, but in order to keep names short, only two collations ('utf8_ko_cs_uca', 'utf8_tr_cs_uca') carry this marker, to differentiate them from the built-in 'utf8_ko_cs' and 'utf8_tr_cs' collations.
  * exp: these collations use a full-word matching/compare algorithm, contrary to the rest of the collations, which compare character by character. This algorithm is more complex, with multiple passes, and much slower, but may prove useful for alphabetical sorts. In LDML, the :ref:`expansion` needs to be enabled explicitly by adding CUBRIDExpansions="use".
  * ab: accent backwards; a particularity of French-Canadian sorting, where level 2 of UCA (used to store accent weights) is compared from the end of the string towards the beginning. This setting can be used only when the :ref:`expansion` setting is also activated. The "backwards" setting allows for the following sorting:

    * Normal accent ordering: cote < coté < côte < côté
    * Backward accent ordering: cote < côte < coté < côté

  * cbm: contraction boundary match; a particularity of collations with :ref:`expansion` and :ref:`contraction`, referring to the behavior of string matching when a :ref:`contraction` is found. Suppose the collation defines the contraction "ch"; then, normally, the pattern "bac" will not match the string "bachxxx". But when the collation is configured to allow "matching the characters starting a contraction", the above matching returns a positive result.
    Only one collation is configured in this manner ('utf8_ja_exp_cbm'), because Japanese sorting requires a lot of contractions.

Collation names are not dynamically generated. They are user defined (configured in LDML) and should reflect the settings of the collation.

The name of a collation influences its internal numeric id. For instance, in CUBRID only 256 collations are allowed, and the numeric ids are assigned as follows:

* 0-31: built-in collations (for these collations, the name and id are hard-coded)
* 32-46: LDML collations having "gen" as their "language" part
* 47-255: the rest of the LDML collations

If you want to include all locales which CUBRID provides into the database, first copy cubrid_locales.all.txt in the $CUBRID/conf directory to cubrid_locales.txt, and then run the make_locale script (the extension is .sh on Linux and .bat on Windows). For more details on the make_locale script, see :ref:`locale-compilation`.

If you want to include newly added locale information into an existing database, run "cubrid synccolldb <dbname>". For more information, see :ref:`synccolldb`.

If you include all locales defined in the LDML files, CUBRID has the following collations.
.. _cubrid-all-collation:

CUBRID Collation
^^^^^^^^^^^^^^^^

+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| Collation         | Locale for casing                                                     | Character range                           |
+===================+=======================================================================+===========================================+
| iso88591_bin      | en_US - English                                                       | ASCII + ISO88591 (C0-FE, except D7, F7)   |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| iso88591_en_cs    | en_US - English                                                       | ASCII + ISO88591 (C0-FE, except D7, F7)   |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| iso88591_en_ci    | en_US - English                                                       | ASCII + ISO88591 (C0-FE, except D7, F7)   |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_bin          | en_US - English                                                       | ASCII                                     |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| euckr_bin         | ko_KR - Korean, same as en_US - English                               | ASCII                                     |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_en_cs        | en_US - English                                                       | ASCII                                     |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_en_ci        | en_US - English                                                       | ASCII                                     |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_tr_cs        | tr_TR - Turkish                                                       | Turkish alphabet                          |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_ko_cs        | ko_KR - Korean, same as en_US - English                               | ASCII                                     |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_gen          | de_DE - German, generic Unicode casing customized with German rules   | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_gen_ai_ci    | de_DE - German, generic Unicode casing customized with German rules   | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_gen_ci       | de_DE - German, generic Unicode casing customized with German rules   | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_de_exp_ai_ci | de_DE - German, generic Unicode casing customized with German rules   | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_de_exp       | de_DE - German, generic Unicode casing customized with German rules   | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_ro_cs        | ro_RO - Romanian, same as generic Unicode casing                      | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_es_cs        | es_ES - Spanish, same as generic Unicode casing                       | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_fr_exp_ab    | fr_FR - French, same as generic Unicode casing                        | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_ja_exp       | ja_JP - Japanese, same as generic Unicode casing                      | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_ja_exp_cbm   | ja_JP - Japanese, same as generic Unicode casing                      | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_km_exp       | km_KH - Cambodian, same as generic Unicode casing                     | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_ko_cs_uca    | ko_KR - Korean, same as generic Unicode casing                        | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_tr_cs_uca    | tr_TR - Turkish, generic Unicode casing customized with Turkish rules | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_vi_cs        | vi_VN - Vietnamese, same as generic Unicode casing                    | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| binary            | none (invariant to casing operations)                                 | any byte value (zero is nul-terminator)   |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+

The Turkish casing rules change the casing of the characters i, I, ı and İ. The German casing rules change the casing of ß.

Among the above collations, 9 collations (iso88591_bin, iso88591_en_cs, iso88591_en_ci, utf8_bin, euckr_bin, utf8_en_cs, utf8_en_ci, utf8_tr_cs and utf8_ko_cs) are built into CUBRID before the make_locale script is run.

If a collation is included in more than one locale (.ldml) file, the locale for casing (the default locale of the collation) is the locale in which it is first included. The loading order is the order of locales in $CUBRID/conf/cubrid_locales.txt. The locale casing shown above for the collations utf8_gen, utf8_gen_ci and utf8_gen_ai_ci assumes the default (alphabetical) order in cubrid_locales.txt, so the default locale for all generic LDML collations is de_DE (German).

Files For Locale Setting
------------------------

CUBRID uses the following directories and files for locale settings.

* **$CUBRID/conf/cubrid_locales.txt** file: A configuration file containing the list of locales to be supported.
* **$CUBRID/conf/cubrid_locales.all.txt** file: A configuration file template with the same structure as **cubrid_locales.txt**. It contains the entire list of locales that the current version of CUBRID is capable of supporting without any effort from the end user's side.
* **$CUBRID/locales/data** directory: Contains the files required to generate locale data.
* **$CUBRID/locales/loclib** directory: Contains a C header file, **locale_lib_common.h**, and OS-dependent makefiles which are used in the process of creating/generating locale shared libraries.
* **$CUBRID/locales/data/ducet.txt** file: Text file containing the default universal collation information (codepoints, contractions and expansions, to be more specific) and their weights, as standardized by the Unicode Consortium; it is the starting point for the creation of collations. For more information, see http://unicode.org/reports/tr10/#Default_Unicode_Collation_Element_Table .
* **$CUBRID/locales/data/unicodedata.txt** file: Text file containing information about each Unicode codepoint regarding casing, decomposition, normalization, etc. CUBRID uses it to determine casing. For more information, see https://docs.python.org/3/library/unicodedata.html .
* **$CUBRID/locales/data/ldml** directory: Contains common_collations.xml and XML files named with the convention cubrid_<*locale_name*>.xml. The common_collations.xml file contains collation information shared by all locale files, and each cubrid_<*locale_name*>.xml file contains the locale information for one supported language.
* **$CUBRID/locales/data/codepages** directory: Contains codepage console conversion files for single-byte codepages (8859-1.txt, 8859-15.txt, 8859-9.txt) and for double-byte codepages (CP1258.txt, CP923.txt, CP936.txt, CP949.txt).
* **$CUBRID/bin/make_locale.sh** file or **%CUBRID%\\bin\\make_locale.bat** file: A script used to generate shared libraries for locale data.
* **$CUBRID/lib** directory: Shared libraries for generated locales are stored here.

.. _locale-setting:

Locale Setting
==============

When you want to use the charset and collation of a specific language, the charset must be identical with that of the database to be created. The supported CUBRID charsets are ISO-8859-1, EUC-KR and UTF-8, and the charset to be used is specified when creating a database.

For example, if you created a database with the locale ko_KR.utf8, you can use collations starting with "utf8\_", like utf8_ja_exp.
However, if you set the locale to ko_KR.euckr, you cannot use any collation related to another charset (see :ref:`cubrid-all-collation`).

The following is an example of using utf8_ja_exp after creating a database with en_US.utf8.

#. cd $CUBRID/conf
#. cp cubrid_locales.all.txt cubrid_locales.txt
#. make_locale.sh -t64  # 64 bit locale library creation
#. cubrid createdb testdb en_US.utf8
#. cubrid server start testdb
#. csql -u dba testdb
#. run the below query in csql

.. code-block:: sql

    SET NAMES utf8;
    CREATE TABLE t1 (i1 INT, s1 VARCHAR(20) COLLATE utf8_ja_exp, a INT, b VARCHAR(20) COLLATE utf8_ja_exp);
    INSERT INTO t1 VALUES (1, '¤¤¥¤»ù±P', 1, '¤¤¥¤ ÀO');

For more details, see the following steps.

.. _locale-selection:

Step 1: Selecting a Locale
--------------------------

Configure the locales to use in **$CUBRID/conf/cubrid_locales.txt**. You can select all or only some of the supported locales. CUBRID supports the following locales: en_US, de_DE, es_ES, fr_FR, it_IT, ja_JP, km_KH, ko_KR, tr_TR, vi_VN, zh_CN and ro_RO. The language and country for each locale are shown in the following table.

+-----------------+------------------------+
| Locale Name     | Language - Country     |
+=================+========================+
| en_US           | English - U.S.A.       |
+-----------------+------------------------+
| de_DE           | German - Germany       |
+-----------------+------------------------+
| es_ES           | Spanish - Spain        |
+-----------------+------------------------+
| fr_FR           | French - France        |
+-----------------+------------------------+
| it_IT           | Italian - Italy        |
+-----------------+------------------------+
| ja_JP           | Japanese - Japan       |
+-----------------+------------------------+
| km_KH           | Khmer - Cambodia       |
+-----------------+------------------------+
| ko_KR           | Korean - Korea         |
+-----------------+------------------------+
| tr_TR           | Turkish - Turkey       |
+-----------------+------------------------+
| vi_VN           | Vietnamese - Vietnam   |
+-----------------+------------------------+
| zh_CN           | Chinese - China        |
+-----------------+------------------------+
| ro_RO           | Romanian - Romania     |
+-----------------+------------------------+

.. note::

    The LDML files for the supported locales are named cubrid_<*locale_name*>.xml and can be found in the **$CUBRID/locales/data/ldml** directory. If only a subset of these locales is to be supported by CUBRID, make sure their corresponding LDML files are present in the **$CUBRID/locales/data/ldml** folder. A locale cannot be used by CUBRID unless it has an entry in the **cubrid_locales.txt** file and a corresponding cubrid_<*locale_name*>.xml file.

Locale libraries are generated according to the contents of the **$CUBRID/conf/cubrid_locales.txt** configuration file. This file contains the language codes of the wanted locales (all user-defined locales are generated with the UTF-8 charset). Optionally, the file paths for each locale's LDML file and library can also be configured in this file. ::

    ko_KR /home/CUBRID/locales/data/ldml/cubrid_ko_KR.xml /home/CUBRID/lib/libcubrid_ko_KR.so

By default, the LDML files are found in **$CUBRID/locales/data/ldml** and the locale libraries in **$CUBRID/lib**; the LDML filenames are formatted like cubrid_<*lang_name*>.xml.
The filenames for libraries are libcubrid_<*lang_name*>.dll (.so for Linux).

.. _locale-compilation:

Step 2: Compiling Locale
------------------------

Once the requirements described above are met, the locales can be compiled. The locales embedded in CUBRID can be used without compiling a user locale library, so for them step 2 can be skipped. There are, however, differences between the embedded locales and the library locales; regarding this, see :ref:`Built-in Locale and Library Locale <built-in-locale-limit>`.

To compile the locale libraries, use the **make_locale** utility script (**.bat** for Windows and **.sh** for Linux) from a command console. The file is delivered in the **$CUBRID/bin** folder, so it should be resolvable through the **$PATH** environment variable. Here, **$CUBRID** and **$PATH** are the environment variables on Linux; **%CUBRID%** and **%PATH%** are the corresponding environment variables on Windows.

.. note::

    To run the **make_locale** script on Windows, Visual C++ 2005, 2008 or 2010 is required.

Usage can be displayed by running **make_locale.sh -h** (**make_locale /h** on Windows). ::

    make_locale.sh [options] [locale]

    options ::= [-t 32|64 ] [-m debug|release]
    locale ::= [de_DE|es_ES|fr_FR|it_IT|ja_JP|km_KH|ko_KR|tr_TR|vi_VN|zh_CN|ro_RO]

* *options*

  * **-t**: Selects 32bit or 64bit (default value: **64**).
  * **-m**: Selects release or debug. In general, release is selected (default value: release). The debug mode is provided for developers who would like to write the locale library themselves.

* *locale*: The locale name of the library to build. If *locale* is not specified, the build includes data from all configured locales; in this case, the library file is stored in the **$CUBRID/lib** directory with the name **libcubrid_all_locales.so** (**.dll** for Windows).
To create user-defined locale shared libraries, two choices are available:

* Creating a single library with all locales to be supported. ::

    make_locale.sh -t64      # Build and pack all locales (64/release)

* Creating one library for each locale to be supported. ::

    make_locale.sh -t 64 -m release ko_KR

The first choice is recommended. In this scenario, some data may be shared among locales: a library supporting all locales takes less than 15 MB, while with the second choice each locale library takes from 1 MB to more than 5 MB. The first choice is also recommended because it avoids the runtime overhead of loading one library per locale when restarting the servers.

.. warning::

    **Limitations and Rules**

    * Do not change the contents of **$CUBRID/conf/cubrid_locales.txt** after the locales have been generated; once the locale libraries are generated, the contents of **$CUBRID/conf/cubrid_locales.txt** should not be changed (the order of languages within the file must also be preserved). During locale compiling, the generic collations use the first locale as the default locale; changing the order may produce different casing results for such collations (utf8_gen_*).
    * Do not change the contents of the **$CUBRID/locales/data/*.txt** files.

.. note::

    **Procedure of Executing the make_locale.sh(.bat) Script**

    The **make_locale.sh(.bat)** script:

    #. Reads the **.ldml** file corresponding to a language, along with some other installed common data files like **$CUBRID/locales/data/ducet.txt**, **$CUBRID/locales/data/unicodedata.txt**, and **$CUBRID/locales/data/codepages/*.txt**.
    #. After processing the raw data, writes C constant values and arrays consisting of the locale data into a temporary file, **$CUBRID/locales/loclib/locale.c**.
    #. Passes the temporary **locale.c** file to the platform compiler to build a **.dll/.so** file. This step assumes that the machine has an installed C/C++ compiler and linker.
       Currently, only the MS Visual Studio (Windows) and gcc (Linux) compilers are supported.

    #. Removes the temporary files.

Step 3: Setting CUBRID to Use a Specific Locale
-----------------------------------------------

Only one locale can be selected as the default locale when you create a database. In addition to specifying a default locale, you can override the default calendar settings with the calendar settings from another locale, using the **intl_date_lang** system parameter.

* The locale is given in the format <*locale_name*>.[**utf8** | **iso**] (e.g. tr_TR.utf8, en_EN.ISO, ko_KR.utf8).
* **intl_date_lang**: <*locale_name*>. The possible values for <*locale_name*> are listed in :ref:`locale-selection`.

.. note::

    **Setting the Month/Day in Characters, AM/PM, and Number Format**

    For functions that input and output dates/times, you can set the month/day names, AM/PM names, and number format by the locale in the **intl_date_lang** system parameter. Also, for functions that convert a string to numbers or numbers to a string, you can set the string format by the locale in the **intl_number_lang** system parameter.

.. _built-in-locale-limit:

Built-in Locale and Library Locale
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The locales embedded in CUBRID can be used without compiling a user locale library, so step 2 can be skipped for them. However, there are two differences between the embedded locales and the library locales.

* Embedded (built-in) locales (and collations) are not aware of Unicode data. For instance, casing (lower, upper) of (Á, á) is not available in embedded locales. The LDML locales provide data for Unicode codepoints up to 65535.
* The embedded collations deal only with the ASCII range or, in the case of 'utf8_tr_cs', only ASCII and letters from the Turkish alphabet. Embedded UTF-8 locales are not Unicode compatible, while compiled (LDML) locales are.
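What "Unicode-aware casing" means in practice can be sketched with Python's built-in str casing, which (like the compiled LDML locales, and unlike the embedded ones) uses full Unicode character data. This is an illustration only, not CUBRID code:

```python
# Casing outside the ASCII range requires Unicode character data.
print("\u00e1".upper())  # "Á" -- available with LDML (library) locales only
print("\u00c1".lower())  # "á"

# ASCII-range casing works with both embedded and library locales.
print("abc".upper())     # "ABC"
```
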
Currently, the built-in locales which can be set when creating a database are as follows:

* en_US.iso88591
* en_US.utf8
* ko_KR.utf8
* ko_KR.euckr
* ko_KR.iso88591: Uses Romanized Korean names for months and days.
* tr_TR.utf8
* tr_TR.iso88591: Uses Romanized Turkish names for months and days.

The order stated above is important: if no charset is specified when creating a database, the charset of the first matching locale in the list is used. For example, if the locale is set to ko_KR (e.g. cubrid createdb testdb ko_KR), the charset becomes ko_KR.utf8, the first ko_KR locale in the above list.

Locales of languages other than the built-in locales must end with **.utf8**. For example, specify the locale as de_DE.utf8 for German.

The names of months and days for ko_KR.iso88591 and tr_TR.iso88591 must be Romanized. For example, "일요일" in Korean (Sunday in English) is Romanized to "Iryoil". Only ISO-8859-1 characters may be used. For more information, see :ref:`romanized-names`.

.. _romanized-names:

The Month/Day in Korean and Turkish Characters for ISO-8859-1 Charset
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In Korean or Turkish locales with the UTF-8 charset, or in Korean locales with the EUC-KR charset, the month/day names and AM/PM are encoded according to the respective language. For the ISO-8859-1 charset, however, if the month/day names and AM/PM in Korean or Turkish were used in their original encoding, unexpected behavior could occur in the server process because of the complex encoding. Therefore, the names must be Romanized. The default charset of CUBRID is ISO-8859-1, and this charset can still be used with the Korean and Turkish locales.
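Why Romanization is required can be seen directly from the charsets: Hangul names cannot be represented in ISO-8859-1, while their Romanized forms can. A Python sketch for illustration (the Korean word below is 일요일, "Sunday"):

```python
# Hangul day names have no representation in ISO-8859-1 ...
try:
    "\uc77c\uc694\uc77c".encode("iso-8859-1")  # 일요일 (Sunday)
except UnicodeEncodeError:
    print("not representable in ISO-8859-1")

# ... while the Romanized name is plain ASCII and always encodable.
print("Iryoil".encode("iso-8859-1"))  # b'Iryoil'
```
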
The Romanized output format is as follows:

**Day in Characters**

+---------------------------------------+----------------------------------+----------------------------------+
| Day in Characters Long/Short Format   | Long/Short Romanized Korean      | Long/Short Romanized Turkish     |
+=======================================+==================================+==================================+
| Sunday / Sun                          | Iryoil / Il                      | Pazar / Pz                       |
+---------------------------------------+----------------------------------+----------------------------------+
| Monday / Mon                          | Woryoil / Wol                    | Pazartesi / Pt                   |
+---------------------------------------+----------------------------------+----------------------------------+
| Tuesday / Tue                         | Hwayoil / Hwa                    | Sali / Sa                        |
+---------------------------------------+----------------------------------+----------------------------------+
| Wednesday / Wed                       | Suyoil / Su                      | Carsamba / Ca                    |
+---------------------------------------+----------------------------------+----------------------------------+
| Thursday / Thu                        | Mogyoil / Mok                    | Persembe / Pe                    |
+---------------------------------------+----------------------------------+----------------------------------+
| Friday / Fri                          | Geumyoil / Geum                  | Cuma / Cu                        |
+---------------------------------------+----------------------------------+----------------------------------+
| Saturday / Sat                        | Toyoil / To                      | Cumartesi / Ct                   |
+---------------------------------------+----------------------------------+----------------------------------+

**Month in Characters**

+---------------------------------------+----------------------------------+----------------------------------+
| Month in Characters Long/Short Format | Long/Short Romanized Korean      | Long/Short Romanized Turkish     |
|                                       | (Not Classified)                 |                                  |
+=======================================+==================================+==================================+
| January / Jan                         | 1wol                             | Ocak / Ock                       |
+---------------------------------------+----------------------------------+----------------------------------+
| February / Feb                        | 2wol                             | Subat / Sbt                      |
+---------------------------------------+----------------------------------+----------------------------------+
| March / Mar                           | 3wol                             | Mart / Mrt                       |
+---------------------------------------+----------------------------------+----------------------------------+
| April / Apr                           | 4wol                             | Nisan / Nsn                      |
+---------------------------------------+----------------------------------+----------------------------------+
| May / May                             | 5wol                             | Mayis / Mys                      |
+---------------------------------------+----------------------------------+----------------------------------+
| June / Jun                            | 6wol                             | Haziran / Hzr                    |
+---------------------------------------+----------------------------------+----------------------------------+
| July / Jul                            | 7wol                             | Temmuz / Tmz                     |
+---------------------------------------+----------------------------------+----------------------------------+
| August / Aug                          | 8wol                             | Agustos / Ags                    |
+---------------------------------------+----------------------------------+----------------------------------+
| September / Sep                       | 9wol                             | Eylul / Eyl                      |
+---------------------------------------+----------------------------------+----------------------------------+
| October / Oct                         | 10wol                            | Ekim / Ekm                       |
+---------------------------------------+----------------------------------+----------------------------------+
| November / Nov                        | 11wol                            | Kasim / Ksm                      |
+---------------------------------------+----------------------------------+----------------------------------+
| December / Dec                        | 12wol                            | Aralik / Arl                     |
+---------------------------------------+----------------------------------+----------------------------------+

**AM/PM in Characters**

+---------------------------------------+----------------------------------+----------------------------------+
| AM/PM in Characters Long/Short Format | Romanized in Korean              | Romanized in Turkish             |
+=======================================+==================================+==================================+
| AM                                    | ojeon                            | AM                               |
+---------------------------------------+----------------------------------+----------------------------------+
| PM                                    | ohu                              | PM                               |
+---------------------------------------+----------------------------------+----------------------------------+

Step 4: Creating a Database with the Selected Locale Setting
------------------------------------------------------------

When issuing the command "**cubrid createdb** <*db_name*> <*locale_name.charset*>", a database will be created using the settings in the variables described above.

Once the database is created, the locale setting which was given to the database cannot be changed. The charset and locale name are stored in the "**db_root**" system catalog table.

.. _dumplocale:

Step 5 (optional): Manually Verifying the Locale File
-----------------------------------------------------

The contents of locale libraries may be displayed in human-readable form using the **dumplocale** CUBRID utility. Execute **cubrid dumplocale -h** to output the usage. The syntax is as follows. ::

    cubrid dumplocale [options] [language-string]

    options ::= [-i|--input-file <shared_lib>] [-d|--calendar] [-n|--numeric]
                [{-a|--alphabet} {l|lower|u|upper|both}] [-c|--codepoint-order]
                [-w|--weight-order] [{-s|--start-value} <codepoint>]
                [{-e|--end-value} <codepoint>] [-k] [-z]

    language-string ::= de_DE|es_ES|fr_FR|it_IT|ja_JP|km_KH|ko_KR|tr_TR|vi_VN|zh_CN|ro_RO

* **dumplocale**: A command which dumps the contents of a locale shared library previously generated from an LDML input file.
* *language-string*: One of de_DE, es_ES, fr_FR, it_IT, ja_JP, km_KH, ko_KR, tr_TR, vi_VN, zh_CN and ro_RO. Sets the locale language whose shared library is dumped. If it is not set, all languages configured in **cubrid_locales.txt** are dumped.

The following are [options] for **cubrid dumplocale**.

.. program:: dumplocale
.. option:: -i, --input-file=FILE

    The name of the locale shared library file (<*shared_lib*>) created previously. It includes the directory path.

.. option:: -d, --calendar

    Dumps the calendar and date/time data. Default value: No

.. option:: -n, --numeric

    Dumps the number data. Default value: No

.. option:: -a, --alphabet=l|lower|u|upper|both

    Dumps the alphabet and case data. Default value: No

.. option:: --identifier-alphabet=l|lower|u|upper

    Dumps the alphabet and case data for identifiers. Default value: No

.. option:: -c, --codepoint-order

    Dumps the collation data sorted by codepoint value. Default value: No (displayed data: cp, char, weight, next-cp, char and weight)

.. option:: -w, --weight-order

    Dumps the collation data sorted by weight value. Default value: No (displayed data: weight, cp, char)

.. option:: -s, --start-value=CODEPOINT

    Specifies the dump scope. Starting codepoint for the **-a, --identifier-alphabet, -c, -w** options. Default value: 0

.. option:: -e, --end-value=CODEPOINT

    Specifies the dump scope. Ending codepoint for the **-a, --identifier-alphabet, -c, -w** options. Default value: the maximum value read from the locale shared library.

.. option:: -k, --console-conversion

    Dumps the console conversion data. Default value: No

.. option:: -z, --normalization

    Dumps the normalization data. Default value: No

The following example shows how to dump the calendar, number formatting, alphabet and case data, identifier alphabet and case data, collation sorted by codepoint order, collation sorted by weight, and normalization data of the ko_KR locale into ko_KR_dump.txt: ::

    % cubrid dumplocale -d -n -a both -c -w -z ko_KR > ko_KR_dump.txt

It is highly recommended to redirect the console output to a file, since the output can be very large and difficult to search through otherwise.
Step 6: Starting CUBRID-Related Processes
-----------------------------------------

All CUBRID-related processes should be started with an identical environment setting. The CUBRID server, the broker, CAS, and CSQL should use locale binary files of the same version. CUBRID HA should also use the same setting: the master server, slave server and replica server should all use the same environment variable settings. There is no compatibility check between the locales used by the server and CAS (client) processes, so the user should make sure the same LDML files are used.

Locale library loading is one of the first steps in CUBRID start-up. Locale (collation) information is required for initializing database structures (indexes depend on collations). This step is performed by each CUBRID process which requires locale information: server, CAS, CSQL, createdb, copydb, unload DB, load DB.

The process of loading a locale library is as follows.

* If no library path is provided, CUBRID will try to load the $CUBRID/lib/libcubrid_<*lang_name*>.so file; if this file is not found, then CUBRID assumes all locales are found in a single library: **$CUBRID/lib/libcubrid_all_locales.so**.
* If a suitable locale library cannot be found, or any other error occurs during loading, the CUBRID process stops.
* If the collations of the database and the locale library differ, the CUBRID process cannot start. To include newly changed collations of the locale library, first synchronize the database collations with the system collations by running the **cubrid synccolldb** command, then alter the existing schemas and data to the wanted collations. For more details, see :ref:`synccolldb`.

.. _synccolldb:

Synchronization of Database Collations with System Collations
-------------------------------------------------------------

CUBRID's normal operation requires that the system collations and the database collations be the same.
The system locale means the locale built from the built-in locales and the library locales created through cubrid_locales.txt (see :ref:`locale-setting`); it includes the system collation information. The database collation information is stored in the **_db_collation** system catalog table.

The **cubrid synccolldb** utility checks whether the database collations are the same as the system collations, and synchronizes them with the system collations if they differ. Note, however, that this utility does not transform the data itself stored in the database.

This utility can be used when the existing database collations should be changed after the system locale has changed. However, there are operations which the user has to do manually, before the synchronization. These operations can be done by running CSQL with the cubrid_synccolldb_<*database_name*>.sql file, which is created by **cubrid synccolldb -c**:

* Change collations using the ALTER TABLE ... MODIFY statement.
* Remove any views, indexes, triggers or partitions containing the collation.

Run the synchronization with **cubrid synccolldb**. Afterwards, do the following:

* Recreate the views, indexes, triggers, or partitions
* Update application statements to use the new collations

This utility works only in offline mode. The **synccolldb** syntax is as follows. ::

    cubrid synccolldb [options] database_name

* **cubrid**: An integrated utility for CUBRID service and database management.
* **synccolldb**: A command to synchronize the collations of a database with the collations from the system (according to the contents of the locale libraries and $CUBRID/conf/cubrid_locales.txt).
* *database_name*: The name of the database whose collations are to be synchronized with the system.
If [options] is omitted, **synccolldb** checks the collation differences between the system and the database, synchronizes the database collations with the system collations, and creates the cubrid_synccolldb_<*database_name*>.sql file containing the statements for objects that must be dropped before the synchronization.

The following are the [options] which can be used with **cubrid synccolldb**.

.. program:: synccolldb

.. option:: -c, --check-only

    This option prints the collation information which differs between the database collations and the system collations.

.. option:: -f, --force-only

    This option does not ask for confirmation when updating the database collations with the system collations.

The following shows how it works when the system collations and the database collations are different.

First, build the locale library for the ko_KR locale. ::

    $ echo ko_KR > $CUBRID/conf/cubrid_locales.txt
    $ make_locale.sh -t 64

Next, create the database. ::

    $ cubrid createdb --db-volume-size=20M --log-volume-size=20M xdb en_US

Create a schema. At this time, specify the needed collation in each table. ::

    $ csql -S -u dba xdb -i in.sql

.. code-block:: sql

    CREATE TABLE dept(depname STRING PRIMARY KEY) COLLATE utf8_ko_cs_uca;
    CREATE TABLE emp(eid INT PRIMARY KEY, depname STRING, address STRING) COLLATE utf8_ko_cs_uca;
    ALTER TABLE emp ADD CONSTRAINT FOREIGN KEY (depname) REFERENCES dept(depname);

Change the locale setting of the system. If **cubrid_locales.txt** contains no values, the database considers that only built-in locales exist. ::

    $ echo "" > $CUBRID/conf/cubrid_locales.txt

Check the difference between the system and the database by running the **cubrid synccolldb -c** command. ::

    $ cubrid synccolldb -c xdb

    ----------------------------------------
    ----------------------------------------
    Collation 'utf8_ko_cs_uca' (Id: 133) not found in database or changed in new system configuration.
    ----------------------------------------
    ----------------------------------------
    Collation 'utf8_gen_ci' (Id: 44) not found in database or changed in new system configuration.
    ----------------------------------------
    ----------------------------------------
    Collation 'utf8_gen_ai_ci' (Id: 37) not found in database or changed in new system configuration.
    ----------------------------------------
    ----------------------------------------
    Collation 'utf8_gen' (Id: 32) not found in database or changed in new system configuration.
    ----------------------------------------
    ----------------------------------------

    There are 4 collations in database which are not configured or are changed compared to system collations.
    Synchronization of system collation into database is required.
    Run 'cubrid synccolldb -f xdb'

If indexes exist, you should first remove the indexes and change the collation of each table, then recreate the indexes. Removing the indexes and changing the collation of the tables can be done by using the cubrid_synccolldb_xdb.sql file which was created by the **synccolldb** command. In the example below, the foreign key is the index which you should recreate. ::

    $ cat cubrid_synccolldb_xdb.sql

    ALTER TABLE [dept] COLLATE utf8_bin;
    ALTER TABLE [emp] COLLATE utf8_bin;
    ALTER TABLE [emp] DROP FOREIGN KEY [fk_emp_depname];
    ALTER TABLE [dept] MODIFY [depname] VARCHAR(1073741823) COLLATE utf8_bin;
    ALTER TABLE [emp] MODIFY [address] VARCHAR(1073741823) COLLATE utf8_bin;
    ALTER TABLE [emp] MODIFY [depname] VARCHAR(1073741823) COLLATE utf8_bin;

    $ csql -S -u dba -i cubrid_synccolldb_xdb.sql xdb

Removing the obsolete collations by executing the cubrid_synccolldb_xdb.sql script above must be performed before forcing the synchronization of the system collations into the database.

Run the **cubrid synccolldb** command.
If the **-f** option is omitted, a message is shown asking for confirmation; if the **-f** option is given, the synchronization runs without asking. ::

    $ cubrid synccolldb xdb

    Updating system collations may cause corruption of database. Continue (y/n) ?

    Contents of '_db_collation' system table was updated with new system collations.

Recreate the dropped foreign key. ::

    $ csql -S -u dba xdb

    ALTER TABLE emp ADD CONSTRAINT FOREIGN KEY fk_emp_depname(depname) REFERENCES dept(depname);

.. note::

    In CUBRID, collations are identified by an ID number on the CUBRID server, in the range from 0 to 255. The LDML file is compiled into a shared library which provides the mapping between IDs and collations (name, attributes).

    * The system collations are the collations loaded from the locale library by the CUBRID server and the CAS module.
    * The database collations are the collations stored in the **_db_collation** system table.

.. _collation:

Collation
=========

A collation is an assembly of information which defines an order for characters and strings. One common type of collation is alphabetization.

If not explicitly set otherwise at column creation, the charset and collation of a column are the charset and collation of its table; these, in turn, default to the charset and collation of the database. If the result of an expression is of a character data type, its collation and charset are determined by collation inference from the operands of the expression.

.. note::

    In CUBRID, collations are supported for a number of languages, including European and Asian. In addition to the different alphabets, some of these languages may require the definition of expansions or contractions for some characters or character groups. Most of these aspects have been put together by the Unicode Consortium into The Unicode Standard (up to version 6.1.0 in 2012).
Most of the information is stored in the DUCET file `http://www.unicode.org/Public/UCA/latest/allkeys.txt <http://www.unicode.org/Public/UCA/latest/allkeys.txt>`_, which contains all characters required by most languages. Most of the codepoints represented in DUCET are in the range 0 - FFFF, but codepoints beyond this range are also included. However, CUBRID ignores the latter and uses only the codepoints in the range 0 - FFFF (or a lower value, if configured).

Each codepoint in DUCET has one or more 'collation elements' attached to it. A collation element is a set of four numeric values, representing weights for 4 levels of comparison. Weight values are in the range 0 - FFFF.

In DUCET, a character is represented on a single line, in the form: ::

    < codepoint_or_multiple_codepoints > ; [.W1.W2.W3.W4][....].... # < readable text explanation of the symbol/character >

The Korean character kiyeok is represented as follows: ::

    1100 ; [.313B.0020.0002.1100] # HANGUL CHOSEONG KIYEOK

Here, 1100 is the codepoint and [.313B.0020.0002.1100] is one collation element: 313B is the weight at level 1, 0020 is the weight at level 2, 0002 is the weight at level 3, and 1100 is the weight at level 4.

Expansion support, defined as a functional property, means supporting the interpretation of a composed character as the sequence of characters it is made of. A rather obvious example is interpreting the character "æ" in the same way as the two-character string "ae". This is an expansion. In DUCET, expansions are represented by attaching more than one collation element to a codepoint or contraction. By default, CUBRID has expansions disabled. Handling collations with expansions requires several passes (up to the collation strength/level) when comparing two strings.

.. _collation-charset-column:

Charset and Collation of Column
-------------------------------

Charset and collation apply to the string data types: **VARCHAR** (**STRING**), **CHAR** and **ENUM**.
By default, all string data types inherit the default database collation and character set, but CUBRID supports two modifiers which affect collation and character set.

Charset
^^^^^^^

The character set may be specified as a character string literal or as a non-quoted identifier. Supported character sets:

* ISO-8859-1
* UTF-8 (with a maximum of 4 bytes per character, which means it supports codepoints from 0 to 0x10FFFF)
* EUC-KR (support for this character set exists only for backward compatibility reasons; its usage is not recommended)

.. note::

    Versions of CUBRID prior to 9.0 supported EUC-KR characters when the ISO-8859-1 charset (the only one available) was set. From CUBRID 9.0 Beta, this is no longer the case. EUC-KR characters should be used only with the EUC-KR charset.

String Check
^^^^^^^^^^^^

By default, all input data is assumed to be in the server character set specified when creating the database. This may be overridden by **SET NAMES** or a charset introducer (or the **COLLATE** string literal modifier); for more information, see :ref:`collation-charset-string`. Invalid data may lead to undefined behavior or even crashes if string checking is disabled (it is disabled by default). String checking can be enabled with the **intl_check_input_string** system parameter. However, if you are sure that only valid data is input, you can obtain better performance by leaving string checking disabled. Only UTF-8 and EUC-KR text data is checked for valid encodings; since ISO-8859-1 is a single-byte encoding and all byte values are valid, there is no checking for this charset.

Charset Conversion
^^^^^^^^^^^^^^^^^^

When the **collation** / **charset** modifiers or normal collation inference require it, character conversion may occur. Conversions are not reversible. Generally, charset conversion is character transcoding (the bytes representing a character in one charset are replaced with other bytes representing the same character in the destination charset).

With any conversion, losses may occur.
If a character from the source charset cannot be encoded in the destination charset, it is replaced with a '?' character. This also applies to conversions from the binary charset to any other charset. The widest character support is with the UTF-8 charset, and since it encodes Unicode, one expects that all characters can be encoded. However, during conversion from ISO-8859-1 to UTF-8 some "losses" occur: the byte range 80-A0 does not contain valid ISO-8859-1 characters, but such bytes may appear in strings; after conversion to UTF-8, these characters are replaced with '?'.

Rules for conversion of values from one charset to another:

+------------------------+-----------------------------------+-----------------------------------+-------------------------------+-------------------------------+
| Source \\ Destination  | Binary                            | ISO-8859-1                        | UTF-8                         | EUC-KR                        |
+========================+===================================+===================================+===============================+===============================+
| **Binary**             | No change                         | No change.                        | No change.                    | No change.                    |
|                        |                                   | The byte size unchanged.          | Validation per character.     | Validation per character.     |
|                        |                                   | Character length unchanged.       | Invalid char replaced by '?'  | Invalid char replaced by '?'  |
+------------------------+-----------------------------------+-----------------------------------+-------------------------------+-------------------------------+
| **ISO-8859-1**         | No change                         | No change                         | Byte conversion.              | Byte conversion.              |
|                        |                                   |                                   | The byte size increases.      | The byte size increases.      |
|                        |                                   |                                   | No loss of useful characters. | No loss of useful characters. |
+------------------------+-----------------------------------+-----------------------------------+-------------------------------+-------------------------------+
| **UTF-8**              | No change.                        | Byte conversion.                  | No change                     | Byte conversion.              |
|                        | The byte size unchanged.          | Byte size may decrease.           |                               | Byte size may decrease.       |
|                        | Character length increases.       | Expect loss of characters.        |                               | Expect loss of characters.    |
+------------------------+-----------------------------------+-----------------------------------+-------------------------------+-------------------------------+
| **EUC-KR**             | No change.                        | Byte conversion.                  | Byte conversion.              | No change                     |
|                        | The byte size unchanged.          | Byte size may decrease.           | Byte size may increase.       |                               |
|                        | Character length increases.       | Expect loss of characters.        | No loss of useful characters. |                               |
+------------------------+-----------------------------------+-----------------------------------+-------------------------------+-------------------------------+

.. note::

    Previous versions of CUBRID (9.x) did not support the binary charset; the ISO-8859-1 charset played the role of the binary charset. Conversions from the UTF-8 and EUC-KR charsets to ISO-8859-1 were performed by reinterpreting the byte content of the source, not by character translation.

.. _collation-setting:

Collation
^^^^^^^^^

The collation may be specified as a character string literal or as a non-quoted identifier. The following is the result of the query ``SELECT * FROM db_collation WHERE is_builtin='Yes'`` on the **db_collation** system table. ::

    coll_id  coll_name         charset_name  is_builtin  has_expansions  contractions  uca_strength
    ================================================================================================
    0        'iso88591_bin'    'iso88591'    'Yes'       'No'            0             'Not applicable'
    1        'utf8_bin'        'utf8'        'Yes'       'No'            0             'Not applicable'
    2        'iso88591_en_cs'  'iso88591'    'Yes'       'No'            0             'Not applicable'
    3        'iso88591_en_ci'  'iso88591'    'Yes'       'No'            0             'Not applicable'
    4        'utf8_en_cs'      'utf8'        'Yes'       'No'            0             'Not applicable'
    5        'utf8_en_ci'      'utf8'        'Yes'       'No'            0             'Not applicable'
    6        'utf8_tr_cs'      'utf8'        'Yes'       'No'            0             'Not applicable'
    7        'utf8_ko_cs'      'utf8'        'Yes'       'No'            0             'Not applicable'
    8        'euckr_bin'       'euckr'       'Yes'       'No'            0             'Not applicable'
    9        'binary'          'binary'      'Yes'       'No'            0             'Not applicable'

Built-in collations are available without requiring additional user locale libraries.
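To illustrate what the _ci / _cs suffixes in the collation names above mean, here is a small Python sketch. It approximates case-insensitive comparison with casefolding and case-sensitive comparison with plain codepoint order; this is an illustration only — real CUBRID collations compare using weight tables, not Python string operations.

```python
def compare_ci(a, b):
    """Case-insensitive comparison in the spirit of *_ci collations
    (approximation: casefold both sides, then compare by codepoint)."""
    a, b = a.casefold(), b.casefold()
    return (a > b) - (a < b)   # -1, 0 or 1, like a comparator

def compare_cs(a, b):
    """Case-sensitive comparison in the spirit of *_cs and *_bin
    collations (plain codepoint order)."""
    return (a > b) - (a < b)

# 'ABC' and 'abc' are equal under a case-insensitive comparison...
print(compare_ci("ABC", "abc"))   # 0
# ...but not under a case-sensitive one ('A' < 'a' by codepoint)
print(compare_cs("ABC", "abc"))   # -1
```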
Each **collation** has an associated **charset**; for this reason, it is not allowed to pair a **character set** with an incompatible **collation**. When the **COLLATE** modifier is specified without a **CHARSET** modifier, the default charset of the collation is used. When a **CHARSET** modifier is specified without a **COLLATE** modifier, the default collation of the charset is used. The default collation of each character set is its bin collation:

* ISO-8859-1: iso88591_bin
* UTF-8: utf8_bin
* EUC-KR: euckr_bin
* Binary: binary

Binary is the name of both the collation and its associated charset.

For more information on how to determine the collation among expression parameters (operands) with different collations (and charsets), see :ref:`determine-collation-columns`.

.. _charset-collate-modifier:

CHARSET and COLLATE modifier
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

CUBRID supports two modifiers which affect collation and character set without following the default database collation and character set.

* **CHARACTER_SET** (alias **CHARSET**) changes the column's character set
* **COLLATE** changes the collation

::

    <column_definition> ::= <data_type> [<charset_modifier_clause>] [<collation_modifier_clause>]

    <charset_modifier_clause> ::= {CHARACTER_SET | CHARSET} {<char_string_literal> | <identifier>}

    <collation_modifier_clause> ::= COLLATE {<char_string_literal> | <identifier>}

The following example shows how to set the charset of a **VARCHAR** type column to UTF-8.

.. code-block:: sql

    CREATE TABLE t1 (s1 VARCHAR (100) CHARSET utf8);

The following example shows how to change the name of column s1 to c1 and its type to CHAR(10) with the collation utf8_en_cs (the charset is the default charset of the collation, UTF-8).

.. code-block:: sql

    ALTER TABLE t1 CHANGE s1 c1 CHAR(10) COLLATE utf8_en_cs;

In the following query, the value of the c1 column is cast to the VARCHAR(5) type with the collation iso88591_en_ci; this collation of the cast expression is then also used for sorting.
.. code-block:: sql

    SELECT CAST (c1 as VARCHAR(5) COLLATE 'iso88591_en_ci') FROM t1 ORDER BY 1;

The following query (with the same sorting) is similar to the one above, but the output column keeps its original value.

.. code-block:: sql

    SELECT c1 FROM t1 ORDER BY CAST (c1 as VARCHAR(5) COLLATE iso88591_en_ci);

.. _determine-collation-columns:

How to Determine Collation among Columns with Different Collation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: sql

    CREATE TABLE t (
        s1 STRING COLLATE utf8_en_cs,
        s2 STRING COLLATE utf8_tr_cs
    );
    -- insert values into both columns

    SELECT s1, s2 FROM t WHERE s1 > s2;

::

    ERROR: '>' requires arguments with compatible collations.

In the above example, column *s1* and column *s2* have different collations. Comparing *s1* with *s2* means comparing the strings to determine which column value is "larger" among the records of table t. In this case, an error occurs because a comparison between the collation utf8_en_cs and the collation utf8_tr_cs cannot be done.

The rules that determine the types of the arguments of an expression also apply to determining the collations:

#. A common collation and character set are determined by considering all arguments of the expression.
#. If an argument has a collation (and character set) different from the common collation (and character set) decided in step 1, it is converted to the common collation (and character set).
#. The :func:`CAST` operator can be used to change a collation explicitly.

Collation coercibility is used to determine the result collation of a comparison expression. It expresses how easily the collation can be converted to the collation of the opposite argument. High collation coercibility of one operand of an expression means that its collation can easily be converted to the collation of the opposite argument.
That is, an argument with high collation coercibility can be changed to the collation of an argument with lower collation coercibility. When an expression has various arguments with different collations, a common collation is computed based on each argument's collation and coercibility. The rules for collation inference are:

#. Arguments with higher coercibility are coerced (or cast) to the collation of arguments with lower coercibility.
#. When arguments have different collations but the same coercibility, the expression's collation cannot be resolved and an error is returned. However, when comparing two operands whose collation coercibility level is 11 (session variable, host variable) and whose charset is the same, if one of them has a bin collation (utf8_bin, iso88591_bin, euckr_bin), its collation is changed to the non-bin collation of the other. See :ref:`Converting Collation of Session Variable and/or Host Variable`.

The table below shows the collation coercibility of the arguments of an expression.

.. _collation-coercibility:

+------------------------+----------------------------------------------------------------------+
| Collation Coercibility | Arguments of the Expression (Operands)                               |
+========================+======================================================================+
| -1                     | An expression which has arguments with only host variables; the     |
|                        | coercibility cannot be determined before the execution step.        |
+------------------------+----------------------------------------------------------------------+
| 0                      | Operand having the **COLLATE** modifier                              |
+------------------------+----------------------------------------------------------------------+
| 1                      | **Columns** with non-binary and non-bin collation                    |
+------------------------+----------------------------------------------------------------------+
| 2                      | **Columns** with binary collation and binary charset                 |
+------------------------+----------------------------------------------------------------------+
| 3                      | **Columns** with bin collation (iso88591_bin, utf8_bin, euckr_bin)   |
+------------------------+----------------------------------------------------------------------+
| 4                      | **SELECT values**, **Expression** with non-binary and non-bin        |
|                        | collation                                                            |
+------------------------+----------------------------------------------------------------------+
| 5                      | **SELECT values**, **Expression** with binary collation and binary   |
|                        | charset                                                              |
+------------------------+----------------------------------------------------------------------+
| 6                      | **SELECT values**, **Expression** with bin collation (iso88591_bin,  |
|                        | utf8_bin, euckr_bin)                                                 |
+------------------------+----------------------------------------------------------------------+
| 7                      | **Special functions** (:func:`SYSTEM_USER`, :func:`DATABASE`,        |
|                        | :func:`SCHEMA`, :func:`VERSION`)                                     |
+------------------------+----------------------------------------------------------------------+
| 8                      | **Constants (string literals)** with non-binary and non-bin          |
|                        | collation                                                            |
+------------------------+----------------------------------------------------------------------+
| 9                      | **Constants (string literals)** with binary collation and binary     |
|                        | charset                                                              |
+------------------------+----------------------------------------------------------------------+
| 10                     | **Constants (string literals)** with bin collation (iso88591_bin,    |
|                        | utf8_bin, euckr_bin)                                                 |
+------------------------+----------------------------------------------------------------------+
| 11                     | Host variables, session variables                                    |
+------------------------+----------------------------------------------------------------------+

.. note::

    In 9.x versions, the binary collation was not available; the iso88591_bin collation played the role of the binary collation. Since version 10.0, the coercibility of columns with iso88591_bin was demoted from 2 to 3, that of expressions with iso88591_bin from 5 to 6, and that of constants with iso88591_bin from 9 to 10.

For an expression which has arguments with only host variables (e.g. UPPER(?) as below), the coercibility can be determined only at the execution step. That is, the coercibility of such an expression cannot be determined at the parsing step; therefore, the COERCIBILITY function returns -1.

.. code-block:: sql

    SET NAMES utf8
    PREPARE st FROM 'SELECT COLLATION(UPPER(?)) col1, COERCIBILITY(UPPER(?)) col2';
    EXECUTE st USING 'a', 'a';

::

      col1                  col2
    ===================================
      'utf8_bin'            -1

For expressions having all arguments with coercibility 11 and with different collations, the common collation is resolved at run-time (this is an exception from the coercibility value-based inference rule, which would otherwise raise an error).

.. code-block:: sql

    PREPARE st1 FROM 'SELECT INSERT(?,2,2,?)';
    EXECUTE st1 USING _utf8'abcd', _binary'ef';

::

      insert( ?:0 , 2, 2, ?:1 )
    ======================
      'aefd'

The following shows how two parameters with different collations are converted to one collation.
* **Converting into the Wanted Collation**

  The **SELECT** statement, which failed to execute in the above example, executes successfully when a collation is specified on one column using the **CAST** operator, as in the following query; the two operands then have the same collation.

  .. code-block:: sql

      SELECT s1, s2 FROM t WHERE s1 > CAST (s2 AS STRING COLLATE utf8_en_cs);

  Also, by using **CAST** to convert s2 to a bin collation, the coercibility of the CAST expression (6) is higher than the coercibility of s1 (1).

  .. code-block:: sql

      SELECT s1, s2 FROM t WHERE s1 > CAST (s2 AS STRING COLLATE utf8_bin);

  In the following query, the second operand "CAST (s2 AS STRING COLLATE utf8_tr_cs)" is a sub-expression. The sub-expression has higher coercibility than the column (s1), so "CAST (s2 AS STRING COLLATE utf8_tr_cs)" is converted to the collation of s1.

  .. code-block:: sql

      SELECT s1, s2 FROM t WHERE s1 > CAST (s2 AS STRING COLLATE utf8_tr_cs);

  Any expression has higher coercibility than any column, so "CONCAT (s2,'')" is converted to the collation of s1 in the following query and the query executes successfully.

  .. code-block:: sql

      SELECT s1, s2 FROM t WHERE s1 > CONCAT (s2,'');

* **Converting Collation of Constant and Column**

  In the following case, the comparison is made using the collation of s1.

  .. code-block:: sql

      SELECT s1, s2 FROM t WHERE s1 > 'abc';

* **When a Column is Created with Bin Collation**

  .. code-block:: sql

      CREATE TABLE t (
          s1 STRING COLLATE utf8_en_cs,
          s2 STRING COLLATE utf8_bin
      );
      SELECT s1, s2 FROM t WHERE s1 > s2;

  In this case, the coercibility of column s2 is 6 (bin collation) and s2 is "fully convertible" to the collation of s1; utf8_en_cs is used.

  .. code-block:: sql

      CREATE TABLE t (
          s1 STRING COLLATE utf8_en_cs,
          s2 STRING COLLATE iso88591_bin
      );
      SELECT s1, s2 FROM t WHERE s1 > s2;

  In this case, utf8_en_cs is used as the collation, too. However, some overhead occurs to convert the charset to UTF-8, since s2 uses the ISO charset.
  In the following query, the charset is not converted (the UTF-8 byte data in s2 is simply reinterpreted in the ISO-8859-1 charset), but character comparison is made using the iso88591_en_cs collation.

  .. code-block:: sql

      CREATE TABLE t (
          s1 STRING COLLATE iso88591_en_cs,
          s2 STRING COLLATE utf8_bin
      );
      SELECT s1, s2 FROM t WHERE s1 > s2;

* **Converting Collation of Sub-Expression and Column**

  .. code-block:: sql

      CREATE TABLE t (
          s1 STRING COLLATE utf8_en_cs,
          s2 STRING COLLATE utf8_tr_cs
      );
      SELECT s1, s2 FROM t WHERE s1 > s2 + 'abc';

  In this case, the second operand is an expression, so the collation of s1 is used.

  In the following example, an error occurs because the '+' operation is attempted on s2 and s3, whose collations differ.

  .. code-block:: sql

      CREATE TABLE t (
          s1 STRING COLLATE utf8_en_cs,
          s2 STRING COLLATE utf8_tr_cs,
          s3 STRING COLLATE utf8_en_ci
      );
      SELECT s1, s2 FROM t WHERE s1 > s2 + s3;

  ::

      ERROR: '+' requires arguments with compatible collations.

  In the following example, the collation of s2 and s3 is utf8_tr_cs, so the collation of the '+' expression is utf8_tr_cs, too. Expressions have higher coercibility than columns, so the comparison is made using the utf8_en_cs collation.

  .. code-block:: sql

      CREATE TABLE t (
          s1 STRING COLLATE utf8_en_cs,
          s2 STRING COLLATE utf8_tr_cs,
          s3 STRING COLLATE utf8_tr_cs
      );
      SELECT s1, s2 FROM t WHERE s1 > s2 + s3;

* **Converting Collation of Number, Date**

  A number or date constant that is convertible into a string during an operation is always coercible to the other string's collation.

.. _comparison-between-session-and-or-host-variables:

* **Converting Collation of Session Variable and/or Host Variable**

  When comparing two operands whose collation coercibility level is 11 (session variable, host variable) and whose charset is the same, one of their collations is changed to the non-bin collation.

  .. code-block:: sql

      SET NAMES utf8;
      SET @v1='a';
      PREPARE stmt FROM 'SELECT COERCIBILITY(?), COERCIBILITY(@v1), COLLATION(?), COLLATION(@v1), ? = @v1';
      SET NAMES utf8 COLLATE utf8_en_ci;
      EXECUTE stmt USING 'A', 'A', 'A';

  When comparing @v1 and 'A', the collation of @v1 is changed to utf8_en_ci, a non-bin collation; therefore, the value of @v1 and 'A' compare as equal, and the result of "? = @v1" is 1, as below.

  ::

      coercibility( ?:0 )   coercibility(@v1)   collation( ?:1 )   collation(@v1)   ?:2 =@v1
      ===================================================================================================
                       11                  11   'utf8_en_ci'       'utf8_bin'              1

  .. code-block:: sql

      SET NAMES utf8 COLLATE utf8_en_cs;
      EXECUTE stmt USING 'A', 'A', 'A';

  When comparing @v1 and 'A', the collation of @v1 is changed to utf8_en_cs, a non-bin collation; therefore, the value of @v1 and 'A' compare as different, and the result of "? = @v1" is 0, as below.

  ::

      coercibility( ?:0 )   coercibility(@v1)   collation( ?:1 )   collation(@v1)   ?:2 =@v1
      ===================================================================================================
                       11                  11   'utf8_en_cs'       'utf8_bin'              0

  However, if the collations of @v1 and 'A' differ, as below, an error occurs.

  .. code-block:: sql

      DEALLOCATE PREPARE stmt;
      SET NAMES utf8 COLLATE utf8_en_ci;
      SET @v1='a';
      PREPARE stmt FROM 'SELECT COERCIBILITY(?), COERCIBILITY(@v1), COLLATION(?), COLLATION(@v1), ? = @v1';
      SET NAMES utf8 COLLATE utf8_en_cs;
      EXECUTE stmt USING 'A', 'A', 'A';

  ::

      ERROR: Context requires compatible collations.

Charset and Collation of an ENUM type column
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The charset and collation of an ENUM type column follow the locale specified when creating the DB. For example, create the table below after creating a DB with en_US.iso88591.

.. code-block:: sql

    CREATE TABLE tbl (e ENUM (_utf8'a', _utf8'b'));

Column 'e' of the above table has the ISO88591 charset and iso88591_bin collation, even though the charset of its elements is defined as UTF8.
If the user wants to apply another charset or collation, it must be specified explicitly on the column of the table. Below is an example of specifying the collation on the column.

.. code-block:: sql

    CREATE TABLE t (e ENUM (_utf8'a', _utf8'b') COLLATE utf8_bin);
    CREATE TABLE t (e ENUM (_utf8'a', _utf8'b')) COLLATE utf8_bin;

.. _collation-charset-table:

Charset and Collation of Tables
-------------------------------

A charset and a collation can be specified after the table creation syntax.

::

    CREATE TABLE table_name () [CHARSET charset_name] [COLLATE collation_name]

If the charset and the collation of a column are omitted, the charset and the collation of the table are used. If the charset and the collation of a table are omitted, the system charset and collation are used.

The following shows how to specify the collation on a table.

.. code-block:: sql

    CREATE TABLE tbl(
        i1 INTEGER,
        s STRING
    ) CHARSET utf8 COLLATE utf8_en_cs;

If the charset of a column is specified and only the collation of the table is specified, the collation of this column becomes the default (bin) collation of this column's charset.

.. code-block:: sql

    CREATE TABLE tbl (col STRING CHARSET utf8) COLLATE utf8_en_ci;

In the above query, the collation of the column col becomes utf8_bin, the default collation for this column's charset.

::

    csql> ;sc tbl

    tbl                  COLLATE utf8_en_ci
        col              CHARACTER VARYING(1073741823) COLLATE utf8_bin

.. _collation-charset-string:

Charset and Collation of String Literals
----------------------------------------

The charset and the collation of a string literal are determined based on the following priority.

#. :ref:`charset-introducer` introducer or :ref:`COLLATE modifier ` of the string literal
#. The charset and the collation defined by the :ref:`set-names-stmt`
#. System charset and collation (the default collation of the locale specified when creating the DB)

.. _set-names-stmt:

SET NAMES Statement
^^^^^^^^^^^^^^^^^^^

The **SET NAMES** statement changes the default client charset and collation. After the statement is executed, all statements from that client use the specified charset and collation. The syntax is as follows.

::

    SET NAMES [ charset_name ] [ COLLATE collation_name ]

* *charset_name*: valid charset names are iso88591, utf8, euckr and binary.
* *collation_name*: the collation setting can be omitted, and any available collation can be set. The collation must be compatible with the charset; otherwise, an error occurs. To find the available collation names, look up the **db_collation** catalog VIEW (see :ref:`collation-charset-column`).

Specifying a collation with the **SET NAMES** statement is the same as setting the system parameter **intl_collation**; the following two statements behave identically.

.. code-block:: sql

    SET NAMES utf8;
    SET SYSTEM PARAMETERS 'intl_collation=utf8_bin';

The following example shows how to create a string literal with the default charset and collation.

.. code-block:: sql

    SELECT 'a';

The following example shows how to create a string literal with the utf8 charset and utf8_bin collation (the default collation is the bin collation of the charset).

.. code-block:: sql

    SET NAMES utf8;
    SELECT 'a';

.. _charset-introducer:

Charset Introducer
^^^^^^^^^^^^^^^^^^

A charset introducer and the **COLLATE** modifier can be placed in front of a constant string. The charset introducer is a charset name starting with an underscore (_), coming before the constant string. The syntax for specifying a charset introducer and the **COLLATE** modifier for a string is as follows.

::

    [charset_introducer]'constant-string' [ COLLATE collation_name ]

* *charset_introducer*: a charset name starting with an underscore (_); can be omitted. One of _utf8, _iso88591, _euckr and _binary can be entered.
* *constant-string*: a constant string value.
* *collation_name*: the name of a collation usable in the system; can be omitted.

The default charset and collation of a constant string are determined by the current database connection (the **SET NAMES** statement executed last, or the default values).

* When a charset introducer is specified and the **COLLATE** modifier is omitted, the collation is:

  * if the charset introducer is the same as the client charset (from a previous SET NAMES), the client collation is applied.
  * if the charset introducer does not match the client charset, the bin collation (one of euckr_bin, iso88591_bin and utf8_bin) corresponding to the charset introducer is applied.

* When the charset introducer is omitted and the **COLLATE** modifier is specified, the charset is determined based on the collation.

The following example shows how to specify the charset introducer and the **COLLATE** modifier.

.. code-block:: sql

    SELECT 'cubrid';
    SELECT _utf8'cubrid';
    SELECT _utf8'cubrid' COLLATE utf8_en_cs;

The following example shows how to create a string literal with utf8 charset and utf8_en_cs collation. The **COLLATE** modifier of the **SELECT** statement overrides the collation specified by the **SET NAMES** statement.

.. code-block:: sql

    SET NAMES utf8 COLLATE utf8_en_ci;
    SELECT 'a' COLLATE utf8_en_cs;

Charset and Collation of Expressions
------------------------------------

The charset and collation of an expression's result are inferred from the charset and collation of its arguments. Collation inference in CUBRID is based on coercibility. For more information, see :ref:`determine-collation-columns`.

All string matching functions (LIKE, REPLACE, INSTR, POSITION, LOCATE, SUBSTRING_INDEX, FIND_IN_SET, etc.) and comparison operators (<, >, =, etc.) take collation into account.

Charset and Collation of System Data
------------------------------------

The system charset is taken from the locale specified when creating the DB.
The system collation is always the bin collation (<*charset*>_bin) of the system charset. CUBRID supports three charsets (iso88591, euckr, utf8), and accordingly three system collations (iso88591_bin, euckr_bin, utf8_bin).

Impact of Charset Specified When Creating DB
--------------------------------------------

The locale specified when creating the DB affects the following.

* Characters supported in identifiers and casing rules (called the "alphabet")
* Default locale for date - string conversion functions
* Default locale for number - string conversion functions
* Console conversion in CSQL

.. _casing-and-identifiers:

Casing and identifiers
^^^^^^^^^^^^^^^^^^^^^^

In CUBRID, identifiers are case insensitive. Tables, columns, session variables, triggers and stored procedures are stored in lower case; authentication identifiers (user and group names) are stored in upper case.

The ISO-8859-1 charset contains only 255 characters, so the casing primitives are able to use built-in data. The EUC-KR charset is also built-in; only its ASCII-compatible characters are considered for casing (and handled in the code).

The UTF-8 charset is a special case: there are built-in variants of UTF-8 locales (like en_US.utf8, tr_TR.utf8 and ko_KR.utf8) and LDML locales. The built-in variants implement only the characters specific to the locale (ASCII characters for en_US.utf8 and ko_KR.utf8, ASCII + Turkish glyphs [#f1]_ for tr_TR.utf8). This means that while all UTF-8 characters encoded on at most four bytes are still supported and accepted as identifiers, most of them are not handled as letters and are treated as ordinary Unicode characters by the casing primitives. For instance, the character "È" (Unicode codepoint 00C8) is allowed, but an identifier containing it will not be normalized to "è" (lower case).

.. code-block:: sql

    CREATE TABLE ÈABC;

Therefore, after running the above query, the table name is stored as "Èabc" in the system table, **_db_class**.
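As a point of comparison, full Unicode casing, which an LDML locale enables for such characters, can be illustrated with Python's built-in string casing (a sketch of the Unicode behavior, not of CUBRID internals):

```python
# Illustration (Python, not CUBRID): full Unicode casing does lowercase 'È'
# (U+00C8) to 'è' (U+00E8); a CUBRID built-in UTF-8 locale only cases the
# letters of its own alphabet, so it would leave 'È' unchanged.
assert "È".lower() == "è"
assert "ÈABC".lower() == "èabc"

# ASCII casing, which the built-in locales do handle:
assert "AbC".lower() == "abc"
```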
Using an LDML locale (built-in variants can also be overridden with an LDML variant) extends the supported Unicode characters up to codepoint FFFF. For instance, if the locale is set to es_ES.utf8 when creating the DB and the corresponding locale library is loaded, the previous statement creates a table with the name "èabc".

As previously mentioned, a set of casing rules and supported characters (letters) forms an "alphabet" in CUBRID (this is actually a tag in LDML). Some locales, like tr_TR and de_DE, have specific casing rules:

- in Turkish: lower('I') = 'ı' (dotless lower i); upper('i') = 'İ' (capital I with dot).
- in German: upper('ß') = 'SS' (two capital S letters).

Because of this, such locales have two alphabets: one that applies to system data (identifiers) and one that applies to user data. The alphabet applying to user data includes the special rules, while the system (identifier) alphabet does not, making the system alphabets compatible between locales. This is required to avoid issues with identifiers (like in Turkish, where casing the group name "public" would give "PUBLİC" != "PUBLIC"). It also provides compatibility between databases with different locales (schema and data should be exportable and importable between them).

String literal input and output
-------------------------------

String literal data may be entered into CUBRID in various ways:

* API interface (CCI)
* language-dependent interfaces - JDBC, Perl driver, etc.
* CSQL - command line from the console or an input file

When receiving character data through drivers, CUBRID cannot detect the charset of those strings. All text data contained between quotes (string literals) are handled by CUBRID as raw bytes; the charset meta-information must be provided by the client. CUBRID provides a way for the client to declare which encoding it is using for its character data: the SET NAMES statement or a charset introducer.
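Why this meta-information matters can be illustrated outside CUBRID: the same raw bytes decode to different text under different charsets (a Python sketch; the byte values correspond to the UTF-8 encoding of 'È'):

```python
# Illustration (Python): identical raw bytes, two interpretations.
raw = b"\xc3\x88"                           # UTF-8 encoding of 'È' (U+00C8)

assert raw.decode("utf-8") == "\u00C8"      # one character: 'È'
assert len(raw.decode("utf-8")) == 1
assert len(raw.decode("iso-8859-1")) == 2   # same bytes read as two ISO-8859-1 characters
```

Without a declared charset, the server has no way to tell which of the two readings the client intended.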
Text Conversion for CSQL
^^^^^^^^^^^^^^^^^^^^^^^^

Text console conversion works in the CSQL console interface. Most locales have an associated character set (or codepage in Windows) which makes it easy to write non-ASCII characters from the console. For example, the LDML file for the tr_TR.utf8 locale contains a line associating the Windows codepage 28599 and the Linux charset iso88599 with the locale.

If the user sets the console to one of these settings (chcp 28599 in Windows, or export LANG=tr_TR.iso88599 in Linux), CUBRID assumes all input is encoded in the ISO-8859-9 charset and converts all data to UTF-8. Also, when printing results, CUBRID performs the reverse conversion (from UTF-8 to ISO-8859-9). In Linux, to avoid this transformation, using UTF-8 directly (e.g. export LANG=tr_TR.utf8) is recommended.

The setting is optional, in the sense that the XML tag is not required in the LDML locale file. For example, the locale km_KH.utf8 does not have an associated codepage.

**Example for configuring the French language and inputting French characters**

Enable fr_FR in cubrid_locales.txt, compile the locales (see :ref:`locale-setting`) and set fr_FR.utf8 when you create the DB.

In Linux:

* Set the console to receive UTF-8: set LANG=fr_FR.utf8 or en_US.utf8 (any locale with UTF-8). This setting allows input of any UTF-8 character (not only French-specific ones),
* or set the console to receive ISO-8859-15: set LANG=fr_FR.iso885915, and in the LDML tag set linux_charset="iso885915". This accepts only ISO-8859-15 characters, which are converted by CSQL to UTF-8 encoding.

In Windows:

* Set the Windows codepage to 28605 (chcp 28605 in a command prompt), and in the LDML tag set windows_codepage="28605". Codepage 28605 corresponds to the ISO-8859-15 charset.

**Example for configuring Romanian and inputting Romanian characters**

Enable ro_RO in cubrid_locales.txt, compile the locales (see :ref:`locale-setting`) and set ro_RO.utf8 when you create the DB.

In Linux:

* Set the console to receive UTF-8: set LANG=ro_RO.utf8 or en_US.utf8 (any locale with UTF-8). This setting allows input of any UTF-8 character (not only Romanian-specific ones),
* or set the console to receive ISO-8859-2: set LANG=ro_RO.iso88592, and in the LDML tag set linux_charset="iso88592". This accepts only ISO-8859-2 characters, which are converted by CSQL to UTF-8 encoding.

In Windows:

* Set the Windows codepage to 1250 (chcp 1250 in a command prompt), and in the LDML tag set windows_codepage="1250". Codepage 1250 corresponds to the ISO-8859-2 charset. Codepage 1250 contains characters specific to some Central and Eastern European languages, including Romanian. Please note that characters outside codepage 1250 will not be displayed properly. To use special characters of the Romanian alphabet (e.g. "S" and "T" with cedilla below), the Romanian legacy keyboard setting in the Windows "Control Panel" is required.
* ISO-8859-2 contains some characters which codepage 1250 does not have, so you cannot input or output all ISO-8859-2 characters with CSQL.

At input, the console conversion process takes all input (including statements) and performs the conversion, but only when required (when the input contains characters that need conversion). At output (printing results, error messages), CSQL is more selective and does not convert all text. For instance, printed numeric values are not filtered through console conversion (since number text contains only ASCII characters).

.. _unicode-normalization:

Unicode Normalization
^^^^^^^^^^^^^^^^^^^^^

Glyphs [#f1]_ can be written in various forms using Unicode characters/codepoints. The best known are the decomposed and composed forms. For instance, the glyph 'Ä' is written in composed form with a single codepoint, 00C4; in UTF-8 this takes two bytes: C3 84. In (fully) decomposed form, it is written with two codepoints, 0041 ('A') and 0308 (COMBINING DIAERESIS), and in UTF-8 it is encoded using three bytes: 41 CC 88. Most text editors are able to handle both forms, so both encodings appear as the same glyph: 'Ä'.
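The two forms can be inspected with Python's standard unicodedata module (an illustration of the Unicode forms themselves, not of CUBRID internals):

```python
import unicodedata

# The composed and decomposed forms of the glyph 'Ä' and their UTF-8 bytes.
composed = "\u00C4"       # single codepoint 00C4
decomposed = "A\u0308"    # 'A' + COMBINING DIAERESIS (0041 0308)

assert composed.encode("utf-8") == b"\xc3\x84"     # two bytes: C3 84
assert decomposed.encode("utf-8") == b"A\xcc\x88"  # three bytes: 41 CC 88
assert composed != decomposed                      # different codepoint sequences

# NFC produces the "fully composed" form, NFD the "fully decomposed" one.
assert unicodedata.normalize("NFC", decomposed) == composed
assert unicodedata.normalize("NFD", composed) == decomposed
```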
Internally, CUBRID "knows" how to work only with "fully composed" text. For clients working with "fully decomposed" text, CUBRID can be configured to convert such text to "fully composed" on input and serve it back as "fully decomposed" on output.

Normalization is not a locale-specific feature; it does not depend on the locale. The **unicode_input_normalization** system parameter controls the composition at system level. For more details, see :ref:`unicode_input_normalization`.

The main use case is with both parameters enabled (**unicode_input_normalization**, **unicode_output_normalization**): this ensures that a string from a client knowing only decomposed Unicode is still properly handled by CUBRID. A second use case is **unicode_input_normalization** = yes and **unicode_output_normalization** = no, for a client able to handle both types of Unicode writing.

.. _collation-cont-exp:

Contraction and Expansion of Collation
--------------------------------------

CUBRID supports contraction and expansion for collation. Contraction and expansion are available for UTF-8 charset collations. You can see the contraction and expansion of a collation in the collation settings in the LDML file. Using contraction and expansion affects the size of the locale data (shared library) and server performance.

.. _contraction:

Contraction
^^^^^^^^^^^

A contraction is a sequence consisting of two or more codepoints, considered a single letter in sorting. For example, in the traditional Spanish sorting order, "ch" is considered a single letter. All words that begin with "ch" sort after all other words beginning with "c", but before words starting with "d". Other examples of contractions are "ch" in Czech, which sorts after "h", and "lj" and "nj" in Croatian and Latin Serbian, which sort after "l" and "n" respectively. See http://userguide.icu-project.org/collation/concepts for additional information. There are also some contractions defined in the DUCET file, `http://www.unicode.org/Public/UCA/latest/allkeys.txt <http://www.unicode.org/Public/UCA/latest/allkeys.txt>`_.
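The traditional Spanish "ch" rule can be sketched with a contraction-aware sort key (a simplified Python illustration, not CUBRID's implementation):

```python
# Sketch (Python, illustrative only): sorting with the traditional Spanish
# contraction "ch", treated as a single letter that sorts after plain 'c'.
def spanish_key(word):
    """Tokenize a word so that 'ch' is one sort unit placed after plain 'c'."""
    units = []
    i = 0
    while i < len(word):
        if word[i:i + 2] == "ch":
            units.append(("c", 1))   # contraction unit: after every plain 'c'
            i += 2
        else:
            units.append((word[i], 0))
            i += 1
    return units

words = ["chocolate", "coco", "curioso", "dama"]
# Contraction-aware order: all c-words, then ch-words, then d-words.
assert sorted(words, key=spanish_key) == ["coco", "curioso", "chocolate", "dama"]
# Plain codepoint-by-codepoint order differs:
assert sorted(words) == ["chocolate", "coco", "curioso", "dama"]
```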
Contractions are supported in both collation variants: with expansions and without expansions. Contraction support requires changes in a significant number of key areas and involves storing a contraction table inside the collation data. The handling of contractions is controlled by the LDML parameters **DUCETContractions="ignore/use"** and **TailoringContractions="ignore/use"** in the collation definition. The first controls whether contractions in the DUCET file are loaded into the collation; the second controls whether contractions defined by rules in the LDML are ignored or not (an easier way than adding or deleting all the rules that introduce contractions).

.. _expansion:

Expansion
^^^^^^^^^

Expansions refer to codepoints that have more than one collation element. Enabling expansions in CUBRID radically changes the collation's behavior, as described below. The CUBRIDExpansions="use" parameter controls this behavior.

**Collation without Expansion**

In a collation without expansions, each codepoint is treated independently. Based on the strength of the collation, the alphabet may or may not be fully sorted. The collation algorithm sorts the codepoints by comparing the weights in a set of levels, and then generates a single value representing the weight of the codepoint. String comparison is then rather straightforward: comparing two strings in an expansion-free collation means comparing them codepoint by codepoint using the computed weight values.

**Collation with Expansion**

In a collation with expansions, some composed characters (codepoints) are to be interpreted as an ordered list of other characters (codepoints). For example, 'æ' might require being interpreted the same way as 'ae', or 'ä' as 'ae' or 'aa'. In DUCET, the collation element list of 'æ' is the concatenation of the collation element lists of both 'a' and 'e', in this order.
Deciding on a particular order for the codepoints is no longer possible, and neither is computing a new single weight value for each character/codepoint. In a collation with expansions, string comparison is done by concatenating the collation elements for the codepoints/contractions into two lists (one for each string) and then comparing the weights in those lists level by level.

**Example 1**

The purpose of these examples is to show that, under different collation settings (with or without expansion support), string comparison may yield different results. Here are the lines from DUCET corresponding to the subset of codepoints used for comparison in the examples below.

::

    0041  ; [.15A3.0020.0008.0041] # LATIN CAPITAL LETTER A
    0052  ; [.1770.0020.0008.0052] # LATIN CAPITAL LETTER R
    0061  ; [.15A3.0020.0002.0061] # LATIN SMALL LETTER A
    0072  ; [.1770.0020.0002.0072] # LATIN SMALL LETTER R
    00C4  ; [.15A3.0020.0008.0041][.0000.0047.0002.0308] # LATIN CAPITAL LETTER A WITH DIAERESIS;
    00E4  ; [.15A3.0020.0002.0061][.0000.0047.0002.0308] # LATIN SMALL LETTER A WITH DIAERESIS;

Three types of settings for the collation will be illustrated:

* Primary strength, no casing (level 1 only)
* Secondary strength, no casing (levels 1 and 2)
* Tertiary strength, uppercase first (levels 1, 2 and 3)

From now on, sorting of the strings "Ar" and "Är" will be attempted.

**Collation without Expansion Support**

When expansions are disabled, each codepoint is reassigned a new single-valued weight. Based on the algorithm described above, the weights for A, Ä, R and their lowercase correspondents are computed, and the order of these characters, for each of the collation settings above, is as follows.

* Primary strength: A = Ä < R = r
* Secondary strength: A < Ä < R = r
* Tertiary strength: A < Ä < R < r

The sort order for the chosen strings is easy to decide, since there are computed weights for each codepoint.
* Primary strength: "Ar" = "Är"
* Secondary strength: "Ar" < "Är"
* Tertiary strength: "Ar" < "Är"

**Collation with Expansion**

The sorting order changes for a collation with expansion. Based on DUCET, the concatenated lists of collation elements for the strings in our example are:

::

    Ar  [.15A3.0020.0008.0041][.1770.0020.0002.0072]
    Är  [.15A3.0020.0008.0041][.0000.0047.0002.0308][.1770.0020.0002.0072]

On the first pass, for level 1 weights, 0x15A3 is compared with 0x15A3. In the second iteration, the 0x0000 weight is skipped, and 0x1770 is compared with 0x1770. Since the strings are identical so far, the comparison continues on the level 2 weights, first comparing 0x0020 with 0x0020, then 0x0020 with 0x0047, yielding "Ar" < "Är".

The example above was meant to show how string comparison is done when using a collation with expansion support. Let us change the collation settings, and show how one may obtain a different order for the same strings when using a collation for German, where "Ä" is interpreted as the character group "AE". The codepoints and collation elements of the characters involved in this example are as follows.

::

    0041  ; [.15A3.0020.0008.0041] # LATIN CAPITAL LETTER A
    0045  ; [.15FF.0020.0008.0045] # LATIN CAPITAL LETTER E
    0072  ; [.1770.0020.0002.0072] # LATIN SMALL LETTER R
    00C4  ; [.15A3.0020.0008.0041][.15FF.0020.0008.0045] # LATIN CAPITAL LETTER A WITH DIAERESIS; EXPANSION

When comparing the strings "Är" and "Ar", the algorithm for string comparison with expansion support involves comparing the simulated concatenation of collation element lists for the characters in the two strings.
::

    Ar  [.15A3.0020.0008.0041][.1770.0020.0002.0072]
    Är  [.15A3.0020.0008.0041][.15FF.0020.0008.0045][.1770.0020.0002.0072]

On the first pass, when comparing level 1 weights, 0x15A3 is compared with 0x15A3, then 0x1770 with 0x15FF, where a difference is found. This comparison yields "Ar" > "Är", a result completely different from the previous example.

**Example 2**

In Canadian French sorting with an expansion-enabled collation, accents are compared from the end of the string towards the beginning.

* Normal accent ordering: cote < coté < côte < côté
* Backward accent ordering: cote < côte < coté < côté

.. _operations-charset-collation:

Operations Requiring Collation and Charset
------------------------------------------

Charset
^^^^^^^

Charset information is required by functions that use character primitives. There are exceptions: :func:`OCTET_LENGTH` and :func:`BIT_LENGTH` do not require the charset internally to return the length in bytes and bits. However, for the same glyph (character symbol) stored in different charsets, they return different values:

.. code-block:: sql

    CREATE TABLE t (s_iso STRING CHARSET iso88591, s_utf8 STRING CHARSET utf8);
    SET NAMES iso88591;
    INSERT INTO t VALUES ('È','È');

    -- the first returns 1, while the second returns 2
    SELECT OCTET_LENGTH(s_iso), OCTET_LENGTH(s_utf8) FROM t;

The previous example should be run from a console (or client) using the ISO-8859-1 charset.

Collation
^^^^^^^^^

Collation is required in functions and operators that compare or match two strings. These include functions like :func:`STRCMP` and :func:`POSITION`, the LIKE condition, and comparison operators (<, =, >=, etc.). Clauses like ORDER BY and GROUP BY and aggregates (:func:`MIN`, :func:`MAX`, :func:`GROUP_CONCAT`) also use collation.

Collation is also considered in the :func:`UPPER` and :func:`LOWER` functions, in the following manner:

* Each collation has a default (parent) locale.
* UPPER and LOWER functions are performed using the user alphabet of the default locale of the collation. For most collations, the default locale is obvious (is embedded in the name): * utf8_tr_cs ¡ú tr_TR.utf8 * iso88591_en_ci ¡ú en_US (ISO-8859-1 charset) The bin collations have the following default locales: * iso88591_bin ¡ú en_US (ISO-8859-1 charset) * utf8_bin (en_US.utf8 - built-in locale - and handles ASCII characters only) * euckr_bin (ko_KR.euckr - built-in locale - and handles ASCII characters only) There are some generic collations available in LDML. These collations have as default locale, the locale in which they are first found. The order of loading is the locales order from **$CUBRID/conf/cubrid_locales.txt**. Assuming the default order (alphabetical), the default locale for all generic LDML collations is de_DE (German). Charset conversion ^^^^^^^^^^^^^^^^^^ For the three charsets supported by CUBRID the conversion rules are: * General rules is that character transcoding occurs (representation of bytes is changed to the destination charset) - precision is kept, while byte size may change (for variable character data). When changing charset of a column with fixed precision (ALTER..CHANGE), the size in bytes always changes (size = precision x charset multiplier). * Exceptions are: utf8 and euckr to iso88591 - the precision is kept and data can be truncated. The following is an example that you run queries by changing the charset as utf8 in the database that the locale specified when creating DB is en_US(.iso88591). .. code-block:: sql SET NAMES utf8; CREATE TABLE t1(col1 CHAR(1)); INSERT INTO t1 VALUES ('?'); When you run above queries, the data of col1 is truncated because '?' is two bytes character and col1's size is one byte. The charset of database is iso88591, and the charset of input data is utf8; it converts utf8 to iso88591. .. 
.. _collation-setting-impacted-features:

Collation settings impacting CUBRID features
--------------------------------------------

LIKE Conditional Optimization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The **LIKE** conditional expression compares patterns between string data, and returns TRUE if a string whose pattern matches the search word is found. As already shown above, when using a "collation without expansion support", each codepoint receives a single integer value, representing its weight in the comparison process. This weight value is computed based on collation settings (strength, casing, etc.). Since characters can always be regarded as single entities, trying to match a string with a pattern using the **LIKE** predicate is equivalent to checking whether the string falls in a certain range of strings.

For example, in order to process a predicate such as ``s LIKE 'abc%'``, CUBRID first rewrites it as a range restriction for the string "s". "s LIKE 'abc%'" means that "s" must start with the string "abc". In terms of string comparison, this is equivalent, in expansion-free collations, to "s" being greater than or equal to "abc", but smaller than its successor (using the English alphabet, the successor of "abc" would be "abd").

::

    s LIKE 'abc%' → s ≥ 'abc' AND s < 'abd' (if using strictly the English alphabet)

This way, the actual interpretation of **LIKE** is replaced with simple comparisons. "Collations with expansion support" behave differently, however. To compare strings when using such a collation means comparing the concatenated lists of collation elements for each codepoint or expansion, level by level. For more information about comparing strings with a collation with expansion, see :ref:`expansion`. If the **LIKE** predicate rewrite method were kept the same as in a collation without expansion support, as in the above example, the comparison result could be wrong.
To ensure the right query result, the **LIKE** predicate rewrite method works differently, as in the example below. That is, the **LIKE** predicate is added as a filter, to exclude the wrong data which could be included in a collation with expansion.

::

    s LIKE 'abc%' → s ≥ 'abc' AND s < 'abd' AND s LIKE 'abc%' (if using strictly the English alphabet)

Index Covering
^^^^^^^^^^^^^^

Covering index scan is a query optimization in which all values in the query can be computed using only the values found in the index, without requiring an additional row lookup in the heap file. For more information, see :ref:`covering-index`.

In a collation without casing, for the two string values 'abc' and 'ABC', only one value is stored in the index (either 'abc' or 'ABC', depending on which one was inserted first). As a result, an incorrect result may be returned when at least two different strings produce the same sort key in a given collation. For this reason, for all UTF-8 collations with a strength level less than 4 (quaternary), the index covering query optimization is disabled.

This is controlled by the strength="tertiary"/"quaternary" setting in the 'settings' tag of the collation definition in LDML. Consider carefully before setting this level as the maximum strength, because the quaternary strength level requires not only more memory space and a bigger shared library file, but also more string-comparison time.

For more information about collations, see :ref:`collation`.
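The interaction between case-insensitive collations and index covering can be sketched as follows (the table and index names are only illustrative):

.. code-block:: sql

    -- Under a case insensitive collation, 'abc' and 'ABC' produce the
    -- same sort key, so the index stores a single key for both rows.
    CREATE TABLE t_ci (s STRING COLLATE utf8_en_ci);
    CREATE INDEX i_t_ci_s ON t_ci (s);

    INSERT INTO t_ci VALUES ('abc'), ('ABC');

    -- Both rows match, since 'abc' = 'ABC' in utf8_en_ci; the exact casing
    -- of each returned value can only be read from the heap, not from the
    -- index, which is why index covering is disabled for such collations.
    SELECT s FROM t_ci WHERE s = 'abc';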
Summary of CUBRID Features for Each Collation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+---------------------+---------------------------------------------+------------------------+
| Collation           | LIKE condition kept after rewrite to range  | Allows index covering  |
+=====================+=============================================+========================+
| iso88591_bin        | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| iso88591_en_cs      | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| iso88591_en_ci      | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_bin            | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| euckr_bin           | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_en_cs          | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_en_ci          | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_tr_cs          | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_ko_cs          | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_gen            | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_gen_ai_ci      | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_gen_ci         | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_de_exp_ai_ci   | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_de_exp         | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_ro_cs          | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_es_cs          | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_fr_exp_ab      | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_ja_exp         | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_ja_exp_cbm     | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_km_exp         | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_ko_cs_uca      | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_tr_cs_uca      | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_vi_cs          | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| binary              | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+

Viewing Collation Information
-----------------------------

To view the collation information, use the :func:`CHARSET`, :func:`COLLATION` and :func:`COERCIBILITY` functions. The information on the database collation can be shown with the db_collation system view or :ref:`show-collation-statement`.

Using i18n characters with JDBC
===============================

CUBRID JDBC stores string type values received from the server using String and CUBRIDBinaryString objects. String objects use UTF-16 internally to store each character; they should be used to store any string DB value except those having the binary charset.
CUBRIDBinaryString uses a byte array and should be used to store database string values of the binary charset. The data buffer of each string value received from the server is accompanied by the charset of the value; the charset is either the charset of the column or expression's result, or the system charset for any other string without a correspondent in the database.

.. note:: In previous versions, the connection charset was used to instantiate JDBC String objects from database values.

Create a table with one column having the UTF-8 charset and the other the binary charset:

.. code-block:: sql

    CREATE TABLE t1(col1 VARCHAR(10) CHARSET utf8, col2 VARCHAR(10) CHARSET binary);

Insert one row (the second column has random value bytes):

.. code-block:: java

    Connection conn = getConn(null);
    PreparedStatement st = conn.prepareStatement("insert into t1 values( ?, ? )");

    byte[] b = new byte[]{(byte)161, (byte)224};
    CUBRIDBinaryString cbs = new CUBRIDBinaryString(b);
    String utf8_str = new String("abc");

    st.setObject(1, utf8_str);
    st.setObject(2, cbs);
    st.executeUpdate();

Query the table and show its contents (for a binary string we display a hex dump, for other charsets the String value):
.. code-block:: java

    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery("select col1, col2 from t1;");
    ResultSetMetaData rsmd = rs.getMetaData();
    int numberofColumn = rsmd.getColumnCount();

    while (rs.next())
    {
        for (int j = 1; j <= numberofColumn; j++)
        {
            String columnTypeName = rsmd.getColumnTypeName(j);
            int columnType = rsmd.getColumnType(j);

            if (((CUBRIDResultSetMetaData) rsmd).getColumnCharset(j).equals("BINARY"))
            {
                // database string with binary charset
                Object res = rs.getObject(j);
                byte[] byte_array = ((CUBRIDBinaryString) res).getBytes();
                System.out.println(res.toString());
            }
            else
            {
                // database string with any other charset
                String res = rs.getString(j);
                System.out.print(res);
            }
        }
    }

Timezone Setting
================

Timezone can be set with system parameters: the **timezone** parameter, which applies to a session, and the **server_timezone** parameter, which applies to the database server. For details, see :ref:`timezone-parameters`.

The **timezone** parameter is a session parameter; its value is kept per session.

::

    SET SYSTEM PARAMETERS 'timezone=Asia/Seoul';

If this value is not set, it defaults to the **server_timezone** setting.

The **server_timezone** parameter is a database server parameter.

::

    SET SYSTEM PARAMETERS 'server_timezone=Asia/Seoul';

If this value is not set, it follows the operating system's setting.

To use timezone information, a timezone type should be used. For details, see :ref:`timezone-type`.

When a timezone is set by a region name, a separate timezone library is required. To use an updated library with changed timezone information, rather than the installed one, the timezone library should be recompiled after the timezone information is changed. The following is an example of compiling the timezone library after updating the timezone information from IANA (http://www.iana.org/time-zones). For details, see the following description.
.. _timezone-library:

Compiling Timezone Library
--------------------------

To use a timezone specified by region name, the timezone library is required. It is provided by default when CUBRID is installed. However, to update the timezone region information to the latest one, the timezone library should be recompiled after downloading the most recent code from IANA (http://www.iana.org/time-zones).

The following is the process of updating the timezone library to a recent one. First, stop the CUBRID service and perform the following steps.

Windows
^^^^^^^

1. Download the recent data from http://www.iana.org/time-zones . Download the file linked under "Latest version"'s "Time Zone Data".

2. Decompress the compressed file into the **%CUBRID%/timezones/tzdata** directory.

3. Run **%CUBRID%/bin/make_tz.bat**. **libcubrid_timezones.dll** is created in the **%CUBRID%/lib** directory.

   ::

       make_tz.bat

.. note:: To run the **make_tz** script in Windows, one of Visual C++ 2005, 2008 or 2010 should be installed.

Linux
^^^^^

1. Download the recent data from http://www.iana.org/time-zones . Download the file linked under "Latest version"'s "Time Zone Data".

2. Decompress the compressed file into the **$CUBRID/timezones/tzdata** directory.

3. Run **make_tz.sh**. **libcubrid_timezones.so** is created in the **$CUBRID/lib** directory.

   ::

       make_tz.sh

Timezone library and database compatibility
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The timezone library built by CUBRID includes an MD5 checksum of the timezone data it contains. This hash is stored in every database created with that library, in the **timezone_checksum** column of the **db_root** system table. If the timezone library is changed (recompiled with newer timezone data) and the checksum changes, neither the CUBRID server nor any other CUBRID tool can be started with the existing databases. To avoid such incompatibility, one option is to use the **extend** argument of the **make_tz** tool.
When this option is provided, the **timezone_checksum** value of the database is also updated to the MD5 checksum of the new timezone library.

The extend feature should be used when you decide to use a different version of the timezone library from the IANA site. It does two things:

- It generates a new library by merging the old timezone data with the new timezone data. After the merge, all the timezone regions that are not present in the new timezone database are kept, in order to ensure backward compatibility with the data in the database tables.

- It updates the timezone data in the tables in situations where backward compatibility could not be ensured in the first phase, when the new timezone library was generated. This can occur when an offset rule or a daylight saving rule changes.

When you run **make_tz** with the extend option, all the databases in your database directory file (**databases.txt**) are updated, together with the MD5 checksum.

There are some corner cases:

- When multiple users share the same **CUBRID** installation and an extend is done by one of them, and that user does not have access rights on the files that contain the databases of the other users, those databases are not updated. After that, if a different user for whom the update was not made tries to do an extend on his or her databases, it will fail, because the checksum of the library differs from the one in those databases.

- Also, if the **CUBRID_DATABASES** environment variable has a different value for some users, they will have different **databases.txt** files. In this situation, updating all the databases at once is currently not possible. There are two solutions for this with the current implementation:

  - Each user grants access to the folders that contain his or her databases, and the **CUBRID_DATABASES** variable must have the same value.
  - If this is not possible, then do the following for each user except the last one:

    - Back up the current timezone library and the **databases.txt** file
    - Delete from the **databases.txt** file all the databases except the ones of the current user
    - Run the extend
    - Restore the **databases.txt** file and the timezone library

    For the last user, the only difference is that just the **databases.txt** file should be backed up and restored.

In Linux:

::

    make_tz.sh -g extend

In Windows:

::

    make_tz.bat /extend

Usage of timezone data types with JDBC
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The CUBRID JDBC driver is completely dependent on the CUBRID server for timezone information. Although Java and CUBRID use the same primary source of timezone information (IANA), the names of regions and the timezone information should be considered incompatible.

All CUBRID data types having a timezone are mapped to **CUBRIDTimestamptz** Java objects.

Using JDBC to insert a value with timezone:

.. code-block:: java

    String datetime = "2000-01-01 01:02:03.123";
    String timezone = "Europe/Kiev";
    CUBRIDTimestamptz dt_tz = CUBRIDTimestamptz.valueOf(datetime, false, timezone);
    PreparedStatement pstmt = conn.prepareStatement("insert into t values(?)");
    pstmt.setObject(1, dt_tz);
    pstmt.executeUpdate();

Using JDBC to retrieve a value with timezone:

.. code-block:: java

    String sql = "select * from tz";
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery(sql);
    CUBRIDTimestamptz object1 = (CUBRIDTimestamptz) rs.getObject(1);
    System.out.println("object: " + object1.toString());

Internally, CUBRID JDBC stores the date/time parts of a **CUBRIDTimestamptz** object in a 'long' value (inherited through the Date object) which holds the number of milliseconds elapsed since 1 January 1970 (the Unix epoch). The internal encoding is performed in the UTC time reference, which differs from Timestamp objects, which use the local timezone.
For this reason, a **CUBRIDTimestamptz** object created with the same timezone as the Java local timezone will not hold the same internal epoch value as a corresponding Timestamp object. In order to obtain the Unix epoch, the **getUnixTime()** method may be used:

.. code-block:: java

    String datetime = "2000-01-01 01:02:03.123";
    String timezone = "Asia/Seoul";
    CUBRIDTimestamptz dt_tz = CUBRIDTimestamptz.valueOf(datetime, false, timezone);

    System.out.println("dt_tz.getTime: " + dt_tz.getTime());
    System.out.println("dt_tz.getUnixTime: " + dt_tz.getUnixTime());

.. _char-data-conf-guide:

Configuration Guide for Characters
==================================

Database designers should take character data properties into account when designing the database structure. The following is a summarized guide for configuring aspects related to CUBRID character data.

Locale
------

* By default, en_US gives the best performance. If you plan to use only English, this locale is recommended.
* Using a UTF-8 locale increases the storage requirement of fixed-length characters (CHAR) by 4 times; using EUC-KR increases it by 3 times.
* If user string literals have a different charset and collation from the system, query strings grow, as the string literals are decorated with them.
* If localized (non-ASCII) characters will be used for identifiers, use a .utf8 locale.
* Once the UTF-8 charset is established for the database, it is better to use an LDML locale (this ensures that identifier names containing most Unicode characters are correctly cased) than a system locale.
* Setting a locale also affects conversion functions (intl_date_lang, intl_number_lang).
* When you set the locale at database creation, there is no need to worry about the charset and collation of string literals or user table columns; all of them can be changed at run-time (with :func:`CAST` in queries) or with ALTER .. CHANGE for a permanent change.

CHAR and VARCHAR
----------------

* Generally, use VARCHAR if there are large variations in the actual number of characters in user data.
* CHAR is a fixed-length type. Therefore, even if you store only English characters in a CHAR column, each character requires 4 bytes of storage in UTF-8 and 3 bytes in EUC-KR.
* The precision of columns refers to the number of characters (glyphs).
* After choosing the precision, the charset and collation should be set according to the most used scenarios.

Choosing Charset
----------------

* Even if your text contains non-ASCII characters, use the utf8 or euckr charsets only if the application requires character counting, inserting or replacing.
* For CHAR data, the main concern should be the storage requirement (4x for utf8, 3x for euckr).
* For both CHAR and VARCHAR data, there is some overhead when inserting/updating data: counting the precision (number of characters) of each instance is more costly for non-ISO charsets.
* In queries, the charset of expressions may be converted using the :func:`CAST` operator.

Choosing Collation
------------------

* If no collation-dependent operations are performed (string searching, sorting, comparisons, casing), then choose the bin collation of that charset or the binary collation.
* Collation may easily be overridden using the :func:`CAST` operator, and the :ref:`COLLATE modifier ` (in version 9.1) if the charset is unchanged between the original charset of the expression and the new collation.
* Collation also controls the casing rules of strings.
* Collations with expansions are slower, but are more flexible and perform whole-word sorting.

Normalization
-------------

* If your client applications send text data to CUBRID in decomposed form, then configure **unicode_input_normalization** = yes, so that CUBRID re-composes it and handles it in composed form.
* If your client "knows" how to handle data only in decomposed form, then set **unicode_output_normalization** = yes, so that CUBRID always sends data in decomposed form.
* If the client "knows" both forms, then leave **unicode_output_normalization** = no.

CAST vs COLLATE
---------------

* When building statements, the :func:`CAST` operator is more costly than the :ref:`COLLATE modifier ` (even more so when charset conversion occurs).
* The :ref:`COLLATE modifier ` does not add an additional execution operator; using the :ref:`COLLATE modifier ` should enhance execution speed over using the :func:`CAST` operator.
* The :ref:`COLLATE modifier ` can be used only when the charset is not changed.

Remark
======

* Query plan printing: collation is not displayed in plans for results with late binding.
* Only the Unicode code-points in the range 0000-FFFF (Basic Multilingual Plane) are normalized.
* Some locales use the space character as a separator for digit grouping (thousands, millions, ...). Space is allowed, but does not work properly in some cases of localized conversion from string to number.

.. note::

    * In version 9.2 or lower, a user-defined variable cannot be changed to a collation different from the system collation. For example, the "SET @v1='a' COLLATE utf8_en_cs;" syntax cannot be executed when the system collation is iso88591.
    * In version 9.3 or higher, the above constraint no longer exists.

Guide for Adding Locales and Collations
=======================================

Most new locales and/or collations can be added by the user simply by adding a new (or changing an existing) LDML file. The LDML file format used by CUBRID is derived from the generic Unicode Locale Data Markup Language (http://www.unicode.org/reports/tr35/). The tags and attributes which are specific to CUBRID can be easily identified (they contain "cubrid" in their naming).

The best approach to adding a new locale is to copy an existing LDML file and tweak various settings until the desired results are obtained. The filename must be formatted like cubrid_<*language*>.xml and be placed in the folder **$CUBRID/locales/data/ldml**.
The <*language*> part should be an ASCII string (normally five characters) in IETF format (https://en.wikipedia.org/wiki/BCP_47). After creating the LDML file, the <*language*> part string must be added to the CUBRID configuration file **$CUBRID/conf/cubrid_locales.txt**. Note that the order in this file is the order of generating (compiling) the locale library and of loading locales at start-up.

The **make_locale** script must be used to compile the newly added locale and add its data to the CUBRID locales library (in **$CUBRID/lib/**). The LDML file is expected in UTF-8 encoding, and it is not possible to add more than one locale to the same LDML file.

Adding a new locale in an LDML file requires:

.. "and currency" is removed because MONETARY is deprecated.

* specifying calendar information (CUBRID date formats, names of months and week days in various forms, names for AM/PM day periods). CUBRID supports only the Gregorian calendar (generic LDML specifies other calendar types, which are not supported by CUBRID).
* specifying number settings (digit grouping symbols).
* providing an alphabet (a set of rules for how letters are upper-cased and lower-cased).
* optionally, some collations can be added.
* also optionally, console conversion rules for the Windows CSQL application can be defined.

LDML Calendar Information
-------------------------

* The first part consists of providing default CUBRID formats for **DATE**, **DATETIME**, **TIME**, **TIMESTAMP**, **DATETIME WITH TIME ZONE** and **TIMESTAMP WITH TIME ZONE** data type conversion to/from string. These formats are used by the functions :func:`TO_DATE`, :func:`TO_TIME`, :func:`TO_DATETIME`, :func:`TO_TIMESTAMP`, :func:`TO_CHAR`, :func:`TO_DATETIME_TZ` and :func:`TO_TIMESTAMP_TZ`. The format elements allowed depend on the data type and are the ones used for the :func:`TO_CHAR` function (:ref:`Date/Time Format 1 `). Only ASCII characters are allowed in the format strings.
  The allowed sizes are 30 bytes (characters) for **DATE** and **TIME** formats, 48 characters for **DATETIME** and **TIMESTAMP** formats, and 70 characters for **DATETIME WITH TIME ZONE** and **TIMESTAMP WITH TIME ZONE** formats.

* The month names element requires the names of the months in both long and abbreviated form. The allowed sizes are 15 characters (or 60 bytes) for the abbreviated form and 25 characters (or 100 bytes) for the normal form.

* The week day names element requires day names in both long and abbreviated form. The allowed sizes are 10 characters (or 40 bytes) for the abbreviated form and 15 characters (or 60 bytes) for the full day name.

* The AM/PM sub-tree requires defining the strings for the AM/PM format variants (according to the type attribute). The allowed size is 10 characters (or 40 bytes).

The month and week-day names (in both long and abbreviated form) must be specified in Camel case format (first letter upper case, the rest lower case). CUBRID checks only the maximum allowed size in bytes; the size in characters is computed only for full-width UTF-8 characters (4 bytes), so it would be possible to set a month name having 100 ASCII-only characters (the 25-character limit applies when each character of the month name is encoded on 4 bytes in UTF-8).

.. LDML Numbers and currency information
.. currency part will not be used because it is used only in MONETARY, and MONETARY will be deprecated.
.. keep the currency related info as comments for the future. (if the currency part is revised, its explanation can be used again.)

LDML Numbers information
------------------------

* The numeric symbols element defines the characters used as symbols for splitting the decimal part from the integer part in numbers, and for grouping digits. CUBRID expects only ASCII characters for these symbols. An empty or space character is not allowed. CUBRID performs grouping in groups of 3 digits.

.. below is commented out.
.. * The currency tag defines the three-letter ISO 4217 code (http://en.wikipedia.org/wiki/ISO_4217) for the default currency of the locale.
..
    Only one currency per locale is allowed. Setting the currency instructs CUBRID how to interpret conversions where the MONETARY data type is implied (:func:`TO_NUMBER`).

    CUBRID supports a limited number of currencies:

    +------------------------+----------------------+
    | Currency ISO symbol    | Meaning              |
    +========================+======================+
    | USD                    | dollar sign          |
    +------------------------+----------------------+
    | JPY                    | japanese yen         |
    +------------------------+----------------------+
    | GBP                    | british pound        |
    +------------------------+----------------------+
    | KRW                    | Korean won           |
    +------------------------+----------------------+
    | TRY                    | turkish lira         |
    +------------------------+----------------------+
    | KHR                    | cambodian riel       |
    +------------------------+----------------------+
    | CNY                    | chinese renminbi     |
    +------------------------+----------------------+
    | INR                    | indian rupee         |
    +------------------------+----------------------+
    | RUB                    | russian ruble        |
    +------------------------+----------------------+
    | AUD                    | australian dollar    |
    +------------------------+----------------------+
    | CAD                    | canadian dollar      |
    +------------------------+----------------------+
    | BRL                    | brasilian real       |
    +------------------------+----------------------+
    | RON                    | romanian leu         |
    +------------------------+----------------------+
    | EUR                    | euro                 |
    +------------------------+----------------------+
    | CHF                    | swiss franc          |
    +------------------------+----------------------+
    | DKK                    | danish krone         |
    +------------------------+----------------------+
    | NOK                    | norwegian krone      |
    +------------------------+----------------------+
    | BGN                    | bulgarian lev        |
    +------------------------+----------------------+
    | VND                    | vietnamese dong      |
    +------------------------+----------------------+
    | CZK                    | Czech koruna         |
    +------------------------+----------------------+
    | PLN                    | Polish zloty         |
    +------------------------+----------------------+
    | SEK                    | Swedish krona        |
    +------------------------+----------------------+
    | HRK                    | Croatian kuna        |
    +------------------------+----------------------+
    | RSD                    | serbian dinar        |
    +------------------------+----------------------+

LDML Alphabet
-------------

This section allows defining the casing rules for the alphabet of the locale. The 'CUBRIDAlphabetMode' attribute defines the primary source of data for characters. Normally, this should be set to "UNICODEDATAFILE", a value which instructs CUBRID to use the Unicode data file (**$CUBRID/locales/data/unicodedata.txt**). This file must not be modified; any customization for certain characters should be done in the LDML file. If this value is configured, all Unicode characters up to codepoint 65535 are loaded with their casing information.

The other allowed value for 'CUBRIDAlphabetMode' is "ASCII", which means that only ASCII characters can be lower-cased, upper-cased or compared case-insensitively in matching functions. This does not affect CUBRID's ability to support all UTF-8 4-byte encoded Unicode characters; it only limits the casing ability for the characters not included.

The casing rules are optional and apply on top of the primary source of character information (UNICODEDATAFILE or ASCII). CUBRID allows defining upper-casing rules (the <u> tag) and lower-casing rules (the <l> tag). Each set of upper and lower casing rules consists of pairs of source-destination strings (<s> = source, <d> = destination). For instance, the following defines a rule that each character "A" is lower-cased to "aa" (two characters "a"):

.. code-block:: xml

    <l>
        <s>A</s>
        <d>aa</d>
    </l>

LDML Console Conversion
-----------------------

In Windows, the console does not support UTF-8 encoding, so CUBRID allows translating characters from their UTF-8 encoding to a desired encoding. After configuring console conversion for a locale, the user must set the codepage of the console using the 'chcp' command prior to starting the CSQL application (the codepage argument must match the 'windows_codepage' attribute in LDML).
Conversion works bidirectionally (input and output in CSQL), but is limited to the Unicode characters which can be converted to the configured codepage.

The console conversion element is optional and instructs CSQL how to print (in which character encoding) the text of interactive commands. The 'type' attribute defines the conversion scheme. The allowed values are:

* ISO: a generic scheme in which the destination codepage is a single byte charset
* ISO88591: a predefined single byte scheme for the ISO-8859-1 charset (the 'file' attribute is not required and is ignored)
* ISO88599: a predefined single byte scheme for the ISO-8859-9 charset (the 'file' attribute is also not required)
* DBCS: Double Byte Code-Set; a generic scheme in which the destination codepage is a double byte charset

The 'windows_codepage' is the value of the Windows codepage for which CUBRID automatically activates console conversion. The 'linux_charset' is the corresponding value for the charset part of the **LANG** environment variable on UNIX systems. It is recommended to use the native CUBRID charset in a Linux console. The 'file' attribute is required only for the "ISO" and "DBCS" values of the 'type' attribute; it is the file containing the translation information (in **$CUBRID/locales/data/codepages/**).

LDML Collation
--------------

Configuring a collation is the most complex task of adding an LDML locale to CUBRID. Only collations having the UTF-8 codeset can be configured. CUBRID allows configuring most constructs specified by UCA - the Unicode Collation Algorithm (http://www.unicode.org/reports/tr10/) - including contractions and expansions, but the properties of the collation are mostly controlled via the 'settings' attribute.

An LDML file can contain multiple collations. Collations can be included from an external file using the 'include' tag. The 'validSubLocales' attribute of the 'collations' tag is a filter allowing control over locale compilation when external collations (from external files) are included.
Its values can be either a list of locales or "*", in which case the collations in the sub-tree are added to all locales from which the file is included.

One collation is defined using the 'collation' tag and its sub-tree. The 'type' attribute indicates the name of the collation as it will be added in CUBRID. The 'settings' tag defines the properties of the collation:

* 'id' is the (internal) numeric identifier used by CUBRID. It is an integer value in the range 32 - 255. It is optional, but it is strongly recommended to set an explicit unassigned value. See :ref:`collation-naming-rules`.

* 'strength' is a measure of how strings compare. See :ref:`collation-properties`. The allowed values are:

  * "quaternary": different graphic symbols of the same character compare differently, but different Unicode codepoints may compare as equal.
  * "tertiary": graphic symbols of the same character are equal; case-sensitive collation.
  * "secondary": case-insensitive collation; characters with accents compare as different.
  * "primary": accents are ignored; all characters compare as the base character.

* 'caseLevel': special setting to enable case-sensitive comparison for collations having strength < tertiary. Valid values are "on" or "off".

* 'caseFirst': order of casing. Valid values are "lower", "upper" and "off". The "upper" value means upper case letters are ordered before the corresponding lower case letters.

* 'CUBRIDMaxWeights': the number of codepoints (or last codepoint + 1) which are customized in the collation. The maximum value is 65536. Increasing this value increases the size of the collation data.

* 'DUCETContractions': valid values are "use" or "ignore". When "use", CUBRID uses in the collation the contractions defined by the DUCET file ($CUBRID/locales/data/ducet.txt); otherwise it ignores them.

* 'TailoringContractions': same as the previous setting, but refers to the contractions defined in or derived from explicit collation rules.
Enabling contractions leads to a more complex collation (slower string compares).

* 'CUBRIDExpansions': allowed values are "use" or "ignore" (default) and refers to the usage of collation expansions from both the DUCET file and tailoring rules. This has the most influence on collation properties. Enabling it results in a compare with multiple passes (up to the collation strength) when comparing strings. It also greatly increases the collation data, with the benefit of obtaining a more "natural" sort order. See :ref:`expansion`.
* 'backwards': "on" or "off"; used to obtain the "french" order by performing an end-to-start compare on the secondary level (for accents). It has effect only when 'CUBRIDExpansions' is enabled.
* 'MatchContractionBoundary': "true" or "false". This is used in collations having expansions and contractions to configure the behavior at string matching when a contraction is found.

The main data for a collation is loaded from the DUCET file. After this step, the collation may be customized using "tailoring rules". These are the '<rules>' (LDML) and '<cubridrules>' (CUBRID specific) tags.

The 'cubridrules' tag is optional and can be used to explicitly set weight values for a codepoint or a range of codepoints. The cubrid rules apply after loading the primary collation data from the DUCET file and before applying the UCA rules (from the '<rules>' tag). Each of these rules is enclosed in a '<set>' tag. If the rule refers to only one Unicode codepoint, then a '<cp>' tag is provided which contains the hexadecimal value of the codepoint.

All available CUBRID collations contain this cubrid-rule:

.. code-block:: xml

    <set>
      <cp>20</cp>
      <w>[0.0.0.0]</w>
    </set>

This rule says that the weight values (UCA defines four weight values per collation element) of the codepoint 20 (the ASCII space character) are all set to zero. Since there is no '<ecp>' tag, the only codepoint affected is 20. In CUBRID, the space character compares as zero.

The allowed tags inside a '<set>' rule are:

* '<cp>': rule to set the weights for a single codepoint.
* '<ch>': rule to set the weights for a single character. Similar to the previous one, but instead of a codepoint it expects a Unicode character (in UTF-8 encoding).
* '<scp>': rule to set the weights for a range of codepoints. This is the starting codepoint.
* '<sch>': rule to set the weights for a range of characters. This is the starting character. In this context, the order of characters is given by their Unicode codepoints.
* '<ecp>': end codepoint for a range rule.
* '<ech>': end character for a range rule.
* '<w>': weight values to set (single value). The weight values are expected in hexadecimal. Each collation element has four values which are delimited by points and enclosed by square brackets ([ ]). There can be up to 10 collation elements.
* '<wr>': starting weight values to set for a range. Optionally, there is a 'step' attribute of this tag, which sets the increasing step after each codepoint. By default the step is [0001.0000.0000.0000], which means that after setting the first weight values for the starting codepoint, one value is added to the primary level weight and set for the next codepoint in the range, and the process is repeated until the end codepoint.

Examples:

.. code-block:: xml

    <set>
      <scp>0</scp>
      <ecp>20</ecp>
      <w>[0.0.0.0]</w>
    </set>

    <set>
      <scp>30</scp>
      <ecp>39</ecp>
      <wr step="[1.0.0.0][1.0.0.0]">[30.0.0.0][30.0.0.0]</wr>
    </set>

Rule 1 sets, for codepoints ranging from 0 to 20 (inclusive), the weight values 0. Rule 2 sets, for codepoints ranging from 30 to 39 (the digits), a set of two collation elements with increasing weights; in this example, codepoint 39 (character "9") will have the weights with two collation elements [39.0.0.0][39.0.0.0].

The '<rules>' tag is also optional, and is according to the LDML and UCA specifications. The meanings of its sub-ordinate tags are:

* '<reset>': anchor collation element. It defines the reference to which subsequent rules (up to the next '<reset>') are tailored. It can be a single character or multiple characters, in which case it is either a contraction or an expansion.
By default, all the tailoring rules after the anchor sort "after" (the element from the first rule is after the anchor, the element from the second rule sorts after the element in the first rule); if the optional attribute "before" is present, then only the first rule after the '<reset>' sorts before the anchor, while the second and the following rules resume the normal "after" sort (the element in the second rule sorts after the element in the first rule).

* '<p>': the character comes after (or before, if the anchor had the "before" attribute) the previously tailored one at the primary level.
* '<s>': the character comes after (or before, if the anchor had the "before" attribute) the previously tailored one at the secondary level.
* '<t>': the character comes after (or before, if the anchor had the "before" attribute) the previously tailored one at the tertiary level.
* '<i>': the character sorts identically to the previous one.
* '<pc>', '<sc>', '<tc>', '<ic>': same as '<p>', '<s>', '<t>', '<i>' but applies to a range of characters.
* '<x>': specifies the expansion character.
* '<extend>': specifies the second character of expansion.
* '<context>': specifies the context in which a rule applies. A variant way to specify contractions and expansions.

For more information on UCA tailoring with LDML rules see http://www.unicode.org/reports/tr35/tr35-collation.html.

.. rubric:: Footnotes

.. [#f1] glyph: an element for the shape of a character; a graphic symbol which indicates a shape or a form for a character. Because a glyph specifies the shape which is shown, several glyphs for one character can exist.
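As an illustration of the tailoring constructs described above, here is a minimal sketch of a '<rules>' section. The content is a hypothetical example (treating "ch" as a distinct letter that sorts after "c", as in some European alphabets), not a rule taken from a shipped CUBRID locale:

.. code-block:: xml

    <rules>
      <!-- anchor: subsequent rules are tailored relative to "c" -->
      <reset>c</reset>
      <!-- the contraction "ch" sorts after "c" at the primary level -->
      <p>ch</p>
      <!-- its casing variants differ only at the tertiary level -->
      <t>cH</t>
      <t>Ch</t>
      <t>CH</t>
    </rules>

Because "ch" is a multi-character sequence, this tailoring introduces a contraction; as noted above, 'TailoringContractions' must be set to "use" for such rules to take effect.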
mmfuli 白毛衣电影高清在线播放 幼女吞精图片 WWW5500PPCOM www 黄大片日本阿v片在线播放免费 鸡巴硬插视频 影音先锋秒拍福利 成人性感动漫xxx 国外小黄片在线播放 黑大屁股男女性交视频 邪恶道 acg漫画大全漫画皇室 福利视频369导航 pr社区玩具福利 乱伦视频大全 福利屋在线影院 94福利社区会员 一级黄色录像夫妻性生活影片播放 91av视频在线播放 青娱乐全国楼凤验证 国产 大胸 在线 仔仔网福利视频在线观看 人体44444444 色戒 搜一搜 偷窥盜摄在线播放 国产户外露出在线观看 aaa催女郎福利 天堂鲁丝袜 综合网人 超碰920在线 2018AV淘宝在线 wapAV 东京强制系列免费观看 婷婷五 西瓜影音 亚洲最大成人色爱网站 av月亮 金艺贞百度网盘 北纪麻妃无码高清电影 播放器大全女女女女女 国产在线精品视频播放 肏师娘 操逼无码小视频 韩国成人主播 双飞 北京麻妃在线影视 波多野结衣 带狗脖套 北田优步GIGA 国产自拍偷拍下载 国语对白磁力链接 操少妇人妻视频插播 成人AV先锋资源网 日韩伦理 ftp wwwcao 热热av 电影天堂黄片 亚洲电影第32页先锋 TPPN-062汗尻鈴原エミリ 今永美久磁力链 宅男影院91 boafdoa黑人肛交毛片 午夜视频怡红院 艳婚野史观看 天天插天天添天天秀 杭州废继AV片下载 为救人被迫献身的视频 91另类视频 wwwx鈺硏69 亚洲第一狼人天堂网 五月婷婷资源站 sss日本xxx snis123百度云 草露脸白皙女神说做爱不舒服我把她弄疼了国语对白 超碰巨乳女友浴室洗澡 超碰自插自拍 twyav ppyy永久 videostv中国sexovideo幼女 操96大学生视频露脸 TW18mm 人妖另类gif动态图 h版欧美伦理在线播放 曰日干夜夜插 37caocaobb 北岛玲直播视频 超碰视频资源免费高清人妻 超碰风骚视屏 黄色做爱视频免费观看 超碰视频伦理三级 五五热在线视频 电影影音先峰男人站 国产Av自拍 20l8亚洲男人天堂在线 先锋影院色播无码 饭粒网 剧情图解 日本毛片高清免费视频回曰本 快播还能用吗6 无码在 老鸭网盘 自慰百度云 古图解墓丽影h版神马电影 李毅吧在线电影深夜福利 琪琪影院色啪啪 mp4 和老师啪啪啪视频 草泥马视频 magnet 福利视频全裸直播 日本www152 性交视频APP下载 美女操最裸黄播平台 午夜福利日本空姐飞机做爰视频 av黄片免费网站 国产福利在线观看网站 免费欧美三级 高难度软体无码av 性福谈 午夜老司机美日视频 国内自拍老司机福利视频 成人午夜高清 一道本东无码免费手机直接看 嘿嘿嘿迅雷 超频在线香蕉伊人 操美女喷水福利视频 波多野结衣末剪版在线观看 还看成人免费视频 爆乳美女主动让帅哥摸故事 爆乳自拍偷拍 韩国女主播在线观看87 韩国女主播福利第二期 国产色 比佐沙罗 国产在线afv 无码 BT 乳交 种子 久久爱免费福利在线 黑丝av女忧在线视频 免费天海翼视频在线观看 丝溜溜小视频6080 1024福利视频啪啪啪 操熟女逼视频 新奴隶岛20番号 女王调教sm视频全集 电影天堂利欧路 喝下春药女学生被强奸先锋电影 嗯嗯 流氓 影院 咪咪乱伦大片 韩国pvi视屏 青青草国产自拍偷拍在线视频 阿v在线免賛 美女浴足综合网·c0m 大屁股少妇电影 自拍偷拍一本道在线播放 欧美滴色色 鲁鲁鸟av,390,ss,com av高清无码磁力链 bt种子 神马午l夜福利51 91自拍 周琳琳下载 宇都宫紫苑 红秀 午夜影院男女激情 偷窥438日本 邪恶后入式动态图 亚洲日韩色在线高清 日韩女忧 国内自拍 欧美激情 卡通动漫 强奸乱伦 在线播放 巨乳在线AV视频网 国产av一级 狠狠摞网友自拍 GVG-225 青青草i在线视频 renrenganshipin 伊人久久五十路 卵蛋动态zdface 乱伦彪 涉黄视频免费在线播放 射插插插免费视频 美国十次大公鸡 美女 亚洲 无套 全身体罗同性恋视频 青苹果视频分类精品免费 强奸皮裤丝袜诱惑视频 簧銫片kk449_com 人妻野外大胆自拍福利 人人色啪啪视频 极度变态Sm调教女性奴磁力链接 magnet 今永纱奈 网盘 金瓶梅在线第五季 朴诗妍百度云 a片资源吧6岁 老熟女碰碰视频在线观看 麻美由真 恶父 宋飞和表嫂 我要涩涩 神马手机91zizi在线观 大香蕉在线观看119 秋弄韩 av福利资源 182tv-人人草-大香蕉-av淘宝av在线av福利av网站-搜索页 日本大鸡巴插进去图视频 伦理片自拍国产 唐山葬本子 幼女福利视频 干完下铺草上铺在线完整 69p69骑虎精品 福利视频0885 手机看片福利永久国产 蓝奏云最新资源福利 mobilevio 
秦先生女神琪琪在线观看 钟真最浪三级 阿v在线堂网老司机 丝瓜影视 小萝莉导航 很漂亮的美女口爆吞精视频 黄色美女操逼视频 狐狸视频下下载 欧美人与动物?交 荷兰丰满美女操b视频 很色的床上视频。 丝袜大胸美女自摸视频 最新番号 在线观看 AV在线视频天天看 757视频福利28集 国产尿尿手机观看 5手机在线福利 日本偷窥女性上厕所的视频黄 成人区色福利小视频 口活最好的番号排名 先锋俺射了 肛虐泥鳅在线看 大白兔在线视频 xinxin保坂 sanbangceshi 现在日照站街女 日本乱伦动漫 我的魅惑小娘姨子在线观看 自拍阴唇图片博彩 色欲亚洲在线 91 在线 桃谷绘里香 日本一本道官方网 最新欧美影院青青草 - 百度 美固一级片 ftp 欧美AV先锋影音xfyy 任你爽偷偷操 菲菲影视城中学生西瓜影音 男人av 织田真子tv 97夏同学改成什么了 361dy 欧美 亚洲美洲国产综合 五月丁香六月婷婷小说综合 爱上影院宅宅网 宅男影视第一页 www52jbcon 真人超大但在线视频 无码视频爱爱 韩国男女日币免费视频 内藤幸恵 国内真实偷拍 手机在线 国产自拍 日本有码 91 国语免费怡红院av 搜一部男女性交时全裸的视频看得一清二白和声音 日韩一级毛片大战 国产 丝袜 足交 伦理捆绑电景”, 淫小奴分享 日本熟女大白屁股 裸体抽插频频 fei插入式性交 透大屄 逼逼痒视频 动态做爱LOⅤE 黄色3d 肉体视频 找操屄视频 4388x全国人情免费网 日本动漫三级片 mp4 一本道官网原20岁在线播放 757深夜福视频在线观看 韩主播 M3U8 丝袜五分裤视频 伦理片草巴 微拍福利伦理片 国产自拍3p在线观 韩国三人以上性肛交视频播放 4438成人黄色视频 touch 女同 色色酒吧影院1 www,avtb008,ocm 水萝拉欲望保姆 电影唾液濃厚接吻在线观看 色五月天 婷婷 GVG-541 thunder 国语对白在线免费视频 马凡舒泳裤掉落视频2011 虎牙高管潜规则视频 狼窝影院32853 日子献精无码视频 苦力怕娘的污污视频 男人和美女人操逼 美女乳头高清图 西班牙成人色导航 承认美女图片 zhonghuaqingsewang 外国美女性感穴图 肏小屄图 小穴小穴内射6 此女子好娇嫩啊真想推到就肉p 老妇操小伙 亚洲色图偷偷撸小说 meilijianchengrenwang 日本性女学家人体模型 第四成人网白白发布 丝袜人体艺术鍒犻櫎 美国胖子性交 透明艺术操逼 WWW_YINBIAV_COM sv人体艺术 狂操美女0814熟女乱伦 qingsezipaitoupai 361美女人体 妈咪和子干逼 叔母小穴 亚洲操逼图片小说区 luanluncaobixiaoshuo 江苏徐州13岁学生美眉qq号 1级爿快播 www色美眉com 各种犬的性能力 肏艺术 女人与交狗视频 放荡少妇影片 huangse电影在线观看 我的动物情人狗与马 学妹淫色 国外好网站 爱色电影网图片 狠狠干丰满女儿 淫荡的叫床声下载 0女性裸体艺术图