
Latest Hive Functions: LanguageManual UDF - Hive Operators and User-Defined Functions (UDFs)


<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-HiveOperatorsandUser-DefinedFunctions(UDFs)">Hive Operators and User-Defined Functions (UDFs)</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-Built-inOperators">Built-in Operators</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-OperatorsprecedencesOperatorsPrecedencesOperatorsPrecedences">Operators Precedences</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-RelationalOperators">Relational Operators</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-ArithmeticOperators">Arithmetic Operators</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-LogicalOperators">Logical Operators</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-StringOperators">String Operators</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-ComplexTypeConstructors">Complex Type Constructors</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-OperatorsonComplexTypes">Operators on Complex Types</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-Built-inFunctions">Built-in Functions</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-CollectionFunctions">Collection Functions</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-TypeConversionFunctions">Type Conversion Functions</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DateFunctions">Date Functions</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-ConditionalFunctions">Conditional Functions</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-StringFunctions">String Functions</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DataMaskingFunctions">Data Masking Functions</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-Misc.Functions">Misc. Functions</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-xpath">xpath</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-get_json_object">get_json_object</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-Built-inAggregateFunctions(UDAF)">Built-in Aggregate Functions (UDAF)</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-Built-inTable-GeneratingFunctions(UDTF)">Built-in Table-Generating Functions (UDTF)</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-UsageExamples">Usage Examples</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-explode(array)">explode (array)</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-explode(map)">explode (map)</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-posexplode(array)">posexplode (array)</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-inline(arrayofstructs)">inline (array of structs)</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-stack(values)">stack (values)</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-explode">explode</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-posexplode">posexplode</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-json_tuple">json_tuple</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-parse_url_tuple">parse_url_tuple</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-GROUPingandSORTingonf(column)">GROUPing and SORTing on f(column)</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-UDFinternals">UDF internals</a>

<a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-CreatingCustomUDFs">Creating Custom UDFs</a>

Case-insensitive

All Hive keywords are case-insensitive, including the names of Hive operators and functions.

Note: there is a known bug with expression caching when a UDF is nested inside another UDF or function.

A[B] , A.identifier

bracket_op([]), dot(.)

element selector, dot

-A

unary(+), unary(-), unary(~)

unary prefix operators

A IS [NOT] (NULL|TRUE|FALSE)

IS NULL,IS NOT NULL, ...

unary suffix

A ^ B

bitwise xor(^)

bitwise xor

A * B

star(*), divide(/), mod(%), div(DIV)

multiplicative operators

A + B

plus(+), minus(-)

additive operators

A || B

string concatenate(||)

string concatenate

A &amp; B

bitwise and(&amp;)

bitwise and

A | B

bitwise or(|)

bitwise or

The following operators compare the passed operands and generate a TRUE or FALSE value depending on whether the comparison between the operands holds.

A = B

All primitive types

TRUE if expression A is equal to expression B otherwise FALSE.

A == B

Synonym for the = operator.

A &lt;=&gt; B

Returns the same result as the = operator for non-NULL operands, but returns TRUE if both are NULL and FALSE if only one is NULL (NULL-safe equality).

A &lt;&gt; B

NULL if A or B is NULL, TRUE if expression A is NOT equal to expression B, otherwise FALSE.

A != B

Synonym for the &lt;&gt; operator.
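The NULL handling of the equality operators can be illustrated with constant expressions (a sketch; results follow the semantics described above):

```sql
-- = and <> propagate NULL; <=> never returns NULL
SELECT 1 = 1,          -- TRUE
       1 = NULL,       -- NULL
       NULL <=> NULL,  -- TRUE
       1 <=> NULL;     -- FALSE
```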

A &lt; B

NULL if A or B is NULL, TRUE if expression A is less than expression B, otherwise FALSE.

A &lt;= B

NULL if A or B is NULL, TRUE if expression A is less than or equal to expression B, otherwise FALSE.

A &gt; B

NULL if A or B is NULL, TRUE if expression A is greater than expression B, otherwise FALSE.

A &gt;= B

NULL if A or B is NULL, TRUE if expression A is greater than or equal to expression B, otherwise FALSE.

A [NOT] BETWEEN B AND C

TRUE if A is greater than or equal to B and less than or equal to C, otherwise FALSE. This can be inverted by using the NOT keyword. NULL if any operand is NULL.

A IS NULL

All types

TRUE if expression A evaluates to NULL, otherwise FALSE.

A IS NOT NULL

FALSE if expression A evaluates to NULL, otherwise TRUE.

A IS [NOT] (TRUE|FALSE)

Boolean types

Note: NULL is UNKNOWN, and because of that (UNKNOWN IS TRUE) and (UNKNOWN IS FALSE) both evaluate to FALSE.

A [NOT] LIKE B

strings

NULL if A or B is NULL, TRUE if string A matches the SQL simple regular expression B, otherwise FALSE. The comparison is done character by character. The _ character in B matches any single character in A (similar to . in POSIX regular expressions) while the % character in B matches an arbitrary number of characters in A (similar to .* in POSIX regular expressions). For example, 'foobar' LIKE 'foo' evaluates to FALSE, whereas 'foobar' LIKE 'foo___' evaluates to TRUE and so does 'foobar' LIKE 'foo%'.

A RLIKE B

NULL if A or B is NULL, TRUE if any (possibly empty) substring of A matches the Java regular expression B, otherwise FALSE. For example, 'foobar' RLIKE 'foo' evaluates to TRUE and so does 'foobar' RLIKE '^f.*r$'.

A REGEXP B

Same as RLIKE.
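The key difference between LIKE and RLIKE is that LIKE must match the entire string with SQL wildcards, while RLIKE matches any substring against a Java regular expression. A sketch:

```sql
SELECT 'foobar' LIKE 'foo',      -- FALSE: LIKE must match the whole string
       'foobar' LIKE 'foo%',     -- TRUE
       'foobar' RLIKE 'foo',     -- TRUE: RLIKE matches any substring
       'foobar' RLIKE '^f.*r$';  -- TRUE: anchors constrain the whole string
```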

The following operators support various common arithmetic operations on the operands. All return number types; if any of the operands are NULL, then the result is also NULL.

All number types

Gives the result of adding A and B. The type of the result is the common parent (in the type hierarchy) of the types of the operands. For example, since every integer can be widened to a float, float is a containing type of integer, so the + operator on a float and an int results in a float.

A - B

Gives the result of subtracting B from A. The type of the result is the same as the common parent(in the type hierarchy) of the types of the operands.

Gives the result of multiplying A and B. The type of the result is the common parent (in the type hierarchy) of the types of the operands. Note that if the multiplication causes overflow, you will have to cast one of the operands to a type higher in the type hierarchy.

A / B

A DIV B

Integer types

Gives the integer part resulting from dividing A by B. E.g 17 div 3 results in 5.

A % B

Gives the remainder resulting from dividing A by B. The type of the result is the common parent (in the type hierarchy) of the types of the operands.

Gives the result of bitwise AND of A and B. The type of the result is the same as the common parent(in the type hierarchy) of the types of the operands.

Gives the result of bitwise OR of A and B. The type of the result is the same as the common parent(in the type hierarchy) of the types of the operands.

Gives the result of bitwise XOR of A and B. The type of the result is the same as the common parent(in the type hierarchy) of the types of the operands.

~A

Gives the result of bitwise NOT of A. The type of the result is the same as the type of A.

The following operators provide support for creating logical expressions. All of them return boolean TRUE, FALSE, or NULL depending upon the boolean values of the operands. NULL behaves as an "unknown" flag, so if the result depends on the state of an unknown, the result itself is unknown.

A AND B

boolean

TRUE if both A and B are TRUE, otherwise FALSE. NULL if A or B is NULL.

A OR B

TRUE if either A or B or both are TRUE; NULL if the result cannot be determined because an operand is NULL (e.g. FALSE OR NULL); otherwise FALSE.

NOT A

TRUE if A is FALSE; NULL if A is NULL; otherwise FALSE.

! A

Same as NOT A.
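The three-valued logic above can be seen with constant expressions (a sketch):

```sql
-- NULL acts as "unknown": it decides the result only when it must
SELECT TRUE  OR  NULL,         -- TRUE  (TRUE OR anything is TRUE)
       FALSE OR  NULL,         -- NULL  (could be TRUE or FALSE)
       FALSE AND NULL,         -- FALSE (FALSE AND anything is FALSE)
       NOT (1 = NULL);         -- NULL  (negation of unknown is unknown)
```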

A IN (val1, val2, ...)

TRUE if A is equal to any of the listed values, otherwise FALSE.

A NOT IN (val1, val2, ...)

TRUE if A is not equal to any of the listed values, otherwise FALSE.

[NOT] EXISTS (subquery)

TRUE if the subquery returns at least one row; NOT EXISTS inverts the result.


The following functions construct instances of complex types.

map

(key1, value1, key2, value2, ...)

Creates a map with the given key/value pairs.

struct

(val1, val2, val3, ...)

Creates a struct with the given field values. Struct field names will be col1, col2, ....

named_struct

(name1, val1, name2, val2, ...)

array

(val1, val2, ...)

Creates an array with the given elements.

create_union

(tag, val1, val2, ...)

Creates a union type with the value that is being pointed to by the tag parameter.
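The constructors above can be combined in a single query (a sketch; column aliases omitted):

```sql
SELECT array(1, 2, 3),
       map('a', 1, 'b', 2),
       struct(1, 'x'),                      -- fields named col1, col2
       named_struct('id', 1, 'name', 'x');  -- explicitly named fields
```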

The following operators provide mechanisms to access elements in Complex Types.

A[n]

A is an Array and n is an int

Returns the nth element in the array A. The first element has index 0. For example, if A is an array comprising ['foo', 'bar'] then A[0] returns 'foo' and A[1] returns 'bar'.

M[key]

M is a Map&lt;K, V&gt; and key has type K

Returns the value corresponding to the key in the map. For example, if M is a map comprising {'f' -&gt; 'foo', 'b' -&gt; 'bar', 'all' -&gt; 'foobar'} then M['all'] returns 'foobar'.

S.x

S is a struct

Returns the x field of S. For example for the struct foobar {int foo, int bar}, foobar.foo returns the integer stored in the foo field of the struct.
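Putting the three access operators together (a sketch; the table t and its columns are hypothetical):

```sql
-- assume t has columns a ARRAY<STRING>, m MAP<STRING,STRING>, s STRUCT<foo:INT>
SELECT a[0],      -- array indexing, zero-based
       m['all'],  -- map lookup by key
       s.foo      -- struct field access
FROM t;
```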

The following built-in mathematical functions are supported in Hive; most return NULL when the argument(s) are NULL:

DOUBLE

round(DOUBLE a)

Returns the rounded <code>BIGINT</code> value of <code>a</code>.

round(DOUBLE a, INT d)

Returns <code>a</code> rounded to <code>d</code> decimal places.

bround(DOUBLE a)

bround(DOUBLE a, INT d)

BIGINT

floor(DOUBLE a)

Returns the maximum <code>BIGINT</code> value that is equal to or less than <code>a</code>.

ceil(DOUBLE a), ceiling(DOUBLE a)

Returns the minimum BIGINT value that is equal to or greater than <code>a</code>.

rand(), rand(INT seed)

Returns a random number (that changes from row to row) that is distributed uniformly from 0 to 1. Specifying the seed will make sure the generated random number sequence is deterministic.

exp(DOUBLE a), exp(DECIMAL a)

ln(DOUBLE a), ln(DECIMAL a)

log10(DOUBLE a), log10(DECIMAL a)

log2(DOUBLE a), log2(DECIMAL a)

log(DOUBLE base, DOUBLE a)

log(DECIMAL base, DECIMAL a)

pow(DOUBLE a, DOUBLE p), power(DOUBLE a, DOUBLE p)

Returns <code>a</code> raised to the power <code>p</code>.

sqrt(DOUBLE a), sqrt(DECIMAL a)

STRING

bin(BIGINT a)

hex(BIGINT a) hex(STRING a) hex(BINARY a)

BINARY

unhex(STRING a)

conv(BIGINT num, INT from_base, INT to_base), conv(STRING num, INT from_base, INT to_base)

abs(DOUBLE a)

Returns the absolute value.

INT or DOUBLE

pmod(INT a, INT b), pmod(DOUBLE a, DOUBLE b)

Returns the positive value of <code>a mod b</code>.
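The difference from the % operator shows up with negative operands (a sketch):

```sql
-- % keeps the sign of the dividend; pmod is always non-negative
SELECT -7 % 3,       -- -1
       pmod(-7, 3);  -- 2
```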

sin(DOUBLE a), sin(DECIMAL a)

asin(DOUBLE a), asin(DECIMAL a)

cos(DOUBLE a), cos(DECIMAL a)

acos(DOUBLE a), acos(DECIMAL a)

tan(DOUBLE a), tan(DECIMAL a)

atan(DOUBLE a), atan(DECIMAL a)

degrees(DOUBLE a), degrees(DECIMAL a)

radians(DOUBLE a), radians(DECIMAL a)

positive(INT a), positive(DOUBLE a)

Returns <code>a</code>.

negative(INT a), negative(DOUBLE a)

Returns <code>-a</code>.

DOUBLE or INT

sign(DOUBLE a), sign(DECIMAL a)

e()

Returns the value of <code>e</code>.

pi()

Returns the value of <code>pi</code>.

factorial(INT a)

cbrt(DOUBLE a)

INT

shiftleft(TINYINT|SMALLINT|INT a, INT b)

shiftleft(BIGINT a, INT b)

Bitwise left shift; shifts <code>a</code> <code>b</code> positions to the left. Returns int for tinyint, smallint and int <code>a</code>. Returns bigint for bigint <code>a</code>.

shiftright(TINYINT|SMALLINT|INT a, INT b)

shiftright(BIGINT a, INT b)

shiftrightunsigned(TINYINT|SMALLINT|INT a, INT b),

shiftrightunsigned(BIGINT a, INT b)

T

greatest(T v1, T v2, ...)

least(T v1, T v2, ...)

width_bucket(NUMERIC expr, NUMERIC min_value, NUMERIC max_value, INT num_buckets)

Version

The following built-in collection functions are supported in Hive:

int

size(Map&lt;K.V&gt;)

Returns the number of elements in the map type.

size(Array&lt;T&gt;)

Returns the number of elements in the array type.

array&lt;K&gt;

map_keys(Map&lt;K.V&gt;)

Returns an unordered array containing the keys of the input map.

array&lt;V&gt;

map_values(Map&lt;K.V&gt;)

Returns an unordered array containing the values of the input map.

array_contains(Array&lt;T&gt;, value)

Returns TRUE if the array contains value.

array&lt;t&gt;

sort_array(Array&lt;T&gt;)

The following type conversion functions are supported in Hive:

binary

binary(string|binary)

Casts the parameter into a binary.


cast(expr as &lt;type&gt;)

Converts the result of the expression expr to &lt;type&gt;. For example, cast('1' as BIGINT) converts the string '1' to its integral representation. A NULL is returned if the conversion does not succeed. Note that cast(expr as BOOLEAN) returns TRUE for any non-empty string.
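The three behaviours of cast (success, failure, and the BOOLEAN special case) can be sketched as:

```sql
SELECT cast('1' as BIGINT),     -- 1
       cast('abc' as INT),      -- NULL: conversion failed
       cast('abc' as BOOLEAN);  -- TRUE: non-empty string
```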

The following built-in date functions are supported in Hive:

string

from_unixtime(bigint unixtime[, string format])

Converts the number of seconds from unix epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of that moment in the current system time zone in the format of "1970-01-01 00:00:00".

bigint

unix_timestamp()

Gets the current Unix timestamp in seconds. This function is not deterministic and its value is not fixed for the scope of a query execution, which prevents proper optimization of queries. It has been deprecated since Hive 2.0 in favour of the CURRENT_TIMESTAMP constant.

unix_timestamp(string date)

Converts a time string in the format <code>yyyy-MM-dd HH:mm:ss</code> to a Unix timestamp (in seconds), using the default timezone and the default locale; returns 0 on failure: unix_timestamp('2009-03-20 11:30:01') = 1237573801.

unix_timestamp(string date, string pattern)

pre 2.1.0: string

2.1.0 on: date

to_date(string timestamp)

Returns the date part of a timestamp string (pre-Hive 2.1.0): to_date("1970-01-01 00:00:00") = "1970-01-01". As of Hive 2.1.0, returns a date object.

year(string date)

Returns the year part of a date or a timestamp string: year("1970-01-01 00:00:00") = 1970, year("1970-01-01") = 1970.

quarter(date/timestamp/string)

month(string date)

Returns the month part of a date or a timestamp string: month("1970-11-01 00:00:00") = 11, month("1970-11-01") = 11.

day(string date) dayofmonth(date)

Returns the day part of a date or a timestamp string: day("1970-11-01 00:00:00") = 1, day("1970-11-01") = 1.

hour(string date)

Returns the hour of the timestamp: hour('2009-07-30 12:58:59') = 12, hour('12:58:59') = 12.

minute(string date)

Returns the minute of the timestamp.

second(string date)

Returns the second of the timestamp.

weekofyear(string date)

Returns the week number of a timestamp string: weekofyear("1970-11-01 00:00:00") = 44, weekofyear("1970-11-01") = 44.

extract(field FROM source)

Examples:

select extract(month from "2016-10-20") results in 10.

select extract(hour from "2016-10-20 05:06:07") results in 5.

select extract(dayofweek from "2016-10-20 05:06:07") results in 5.

select extract(month from interval '1-3' year to month) results in 3.

select extract(minute from interval '3 12:20:30' day to second) results in 20.

datediff(string enddate, string startdate)

Returns the number of days from startdate to enddate: datediff('2009-03-01', '2009-02-27') = 2.

date_add(date/timestamp/string startdate, tinyint/smallint/int days)

Adds a number of days to startdate: date_add('2008-12-31', 1) = '2009-01-01'.

date_sub(date/timestamp/string startdate, tinyint/smallint/int days)

Subtracts a number of days from startdate: date_sub('2008-12-31', 1) = '2008-12-30'.
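The day-arithmetic functions above can be combined in one query (a sketch using the examples already given):

```sql
SELECT datediff('2009-03-01', '2009-02-27'),  -- 2
       date_add('2008-12-31', 1),             -- '2009-01-01'
       date_sub('2008-12-31', 1);             -- '2008-12-30'
```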

timestamp

from_utc_timestamp({any primitive type}*, string timezone)

to_utc_timestamp({any primitive type} ts, string timezone)

date

current_date

current_timestamp

add_months(string start_date, int num_months)

last_day(string date)

next_day(string start_date, string day_of_week)

trunc(string date, string format)

double

months_between(date1, date2)

date_format(date/timestamp/string ts, string fmt)

date_format can be used to implement other UDFs, e.g.:

dayname(date) is date_format(date, 'EEEE')

dayofyear(date) is date_format(date, 'D')

if(boolean testCondition, T valueTrue, T valueFalseOrNull)

Returns valueTrue when testCondition is true, returns valueFalseOrNull otherwise.

isnull( a )

Returns true if a is NULL and false otherwise.

isnotnull ( a )

Returns true if a is not NULL and false otherwise.

nvl(T value, T default_value)

COALESCE(T v1, T v2, ...)

Returns the first v that is not NULL, or NULL if all v's are NULL.

CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END

When a = b, returns c; when a = d, returns e; else returns f.

CASE WHEN a THEN b [WHEN c THEN d]* [ELSE e] END

When a = true, returns b; when c = true, returns d; else returns e.

nullif( a, b )

Shorthand for: CASE WHEN a = b THEN NULL ELSE a END
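The conditional functions compose naturally; a sketch (the table t and its score column are hypothetical):

```sql
SELECT CASE WHEN score >= 90 THEN 'A'
            WHEN score >= 80 THEN 'B'
            ELSE 'C'
       END                AS grade,
       nvl(score, 0)      AS score_or_zero,  -- default when score is NULL
       nullif(score, 0)   AS nonzero_score   -- NULL when score = 0, else score
FROM t;
```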

void

assert_true(boolean condition)

The following built-in String functions are supported in Hive:

ascii(string str)

Returns the numeric value of the first character of str.

base64(binary bin)

character_length(string str)

chr(bigint|double A)

concat(string|binary A, string|binary B...)

Returns the string or bytes resulting from concatenating the strings or bytes passed in as parameters in order. For example, concat('foo', 'bar') results in 'foobar'. Note that this function can take any number of input strings.

array&lt;struct&lt;string,double&gt;&gt;

context_ngrams(array&lt;array&lt;string&gt;&gt;, array&lt;string&gt;, int K, int pf)

concat_ws(string SEP, string A, string B...)

Like concat() above, but with custom separator SEP.

concat_ws(string SEP, array&lt;string&gt;)

decode(binary bin, string charset)

elt(N int,str1 string,str2 string,str3 string,...)

Returns the string at index N. For example, elt(2,'hello','world') returns 'world'. Returns NULL if N is less than 1 or greater than the number of arguments.

encode(string src, string charset)

field(val T,val1 T,val2 T,val3 T,...)

Returns the index of val in the val1,val2,val3,... list or 0 if not found. For example field('world','say','hello','world') returns 3.

All primitive types are supported, arguments are compared using str.equals(x). If val is NULL, the return value is 0.

find_in_set(string str, string strList)

Returns the position of the first occurrence of str in strList, where strList is a comma-delimited string. Returns NULL if either argument is NULL. Returns 0 if the first argument contains any commas. For example, find_in_set('ab', 'abc,b,ab,c,def') returns 3.

format_number(number x, int d)

get_json_object(string json_string, string path)

Extracts json object from a json string based on json path specified, and returns json string of the extracted json object. It will return null if the input json string is invalid. NOTE: The json path can only have the characters [0-9a-z_], i.e., no upper-case or special characters. Also, the keys *cannot start with numbers.* This is due to restrictions on Hive column names.

in_file(string str, string filename)

Returns true if the string str appears as an entire line in filename.

instr(string str, string substr)

Returns the position of the first occurrence of <code>substr</code> in <code>str</code>. Returns <code>null</code> if either of the arguments are <code>null</code> and returns <code>0</code> if <code>substr</code> could not be found in <code>str</code>. Be aware that this is not zero based. The first character in <code>str</code> has index 1.

length(string A)

Returns the length of the string.

locate(string substr, string str[, int pos])

Returns the position of the first occurrence of substr in str after position pos.

lower(string A) lcase(string A)

Returns the string resulting from converting all characters of A to lower case. For example, lower('fOoBaR') results in 'foobar'.

lpad(string str, int len, string pad)

Returns str, left-padded with pad to a length of len. If str is longer than len, the return value is shortened to len characters. In case of empty pad string, the return value is null.

ltrim(string A)

Returns the string resulting from trimming spaces from the beginning(left hand side) of A. For example, ltrim(' foobar ') results in 'foobar '.

ngrams(array&lt;array&lt;string&gt;&gt;, int N, int K, int pf)

octet_length(string str)

parse_url(string urlString, string partToExtract [, string keyToExtract])

Returns the specified part from the URL. Valid values for partToExtract include HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, and USERINFO. For example, parse_url('http://facebook.com/path1/p.php?k1=v1&amp;k2=v2#Ref1', 'HOST') returns 'facebook.com'. Also a value of a particular key in QUERY can be extracted by providing the key as the third argument, for example, parse_url('http://facebook.com/path1/p.php?k1=v1&amp;k2=v2#Ref1', 'QUERY', 'k1') returns 'v1'.

printf(String format, Obj... args)

regexp_extract(string subject, string pattern, int index)

regexp_replace(string INITIAL_STRING, string PATTERN, string REPLACEMENT)

Returns the string resulting from replacing all substrings in INITIAL_STRING that match the Java regular expression syntax defined in PATTERN with instances of REPLACEMENT. For example, regexp_replace("foobar", "oo|ar", "") returns 'fb'. Note that some care is necessary in using predefined character classes: using '\s' as the second argument will match the letter s; '\\s' is necessary to match whitespace, etc.

repeat(string str, int n)

Repeats str n times.

replace(string A, string OLD, string NEW)

reverse(string A)

Returns the reversed string.

rpad(string str, int len, string pad)

Returns str, right-padded with pad to a length of len. If str is longer than len, the return value is shortened to len characters. In case of empty pad string, the return value is null.

rtrim(string A)

Returns the string resulting from trimming spaces from the end(right hand side) of A. For example, rtrim(' foobar ') results in ' foobar'.

array&lt;array&lt;string&gt;&gt;

sentences(string str, string lang, string locale)

Tokenizes a string of natural language text into words and sentences, where each sentence is broken at the appropriate sentence boundary and returned as an array of words. The 'lang' and 'locale' are optional arguments. For example, sentences('Hello there! How are you?') returns ( ("Hello", "there"), ("How", "are", "you") ).

space(int n)

Returns a string of n spaces.

split(string str, string pat)

Splits str around pat (pat is a regular expression).

map&lt;string,string&gt;

str_to_map(text[, delimiter1, delimiter2])

Splits text into key-value pairs using two delimiters. Delimiter1 separates text into K-V pairs, and Delimiter2 splits each K-V pair. Default delimiters are ',' for delimiter1 and ':' for delimiter2.
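Both the default and explicit delimiters can be sketched as:

```sql
SELECT str_to_map('a:1,b:2');            -- {"a":"1","b":"2"} with default delimiters
SELECT str_to_map('a=1;b=2', ';', '=');  -- {"a":"1","b":"2"} with explicit delimiters
```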

substr(string|binary A, int start) substring(string|binary A, int start)

substr(string|binary A, int start, int len) substring(string|binary A, int start, int len)

substring_index(string A, string delim, int count)

translate(string|char|varchar input, string|char|varchar from, string|char|varchar to)

trim(string A)

Returns the string resulting from trimming spaces from both ends of A. For example, trim(' foobar ') results in 'foobar'

unbase64(string str)

upper(string A) ucase(string A)

Returns the string resulting from converting all characters of A to upper case. For example, upper('fOoBaR') results in 'FOOBAR'.

initcap(string A)

levenshtein(string A, string B)

soundex(string A)

The following built-in data masking functions are supported in Hive:

mask(string str[, string upper[, string lower[, string number]]])

mask_first_n(string str[, int n])

mask_last_n(string str[, int n])

mask_show_first_n(string str[, int n])

mask_show_last_n(string str[, int n])

mask_hash(string|char|varchar str)

varies

java_method(class, method[, arg1[, arg2..]])

reflect(class, method[, arg1[, arg2..]])

hash(a1[, a2...])

Returns a hash value of the arguments. (As of Hive 0.4.)

current_user()

logged_in_user()

current_database()

md5(string/binary)

sha1(string/binary)

sha(string/binary)

crc32(string/binary)

sha2(string/binary, int)

aes_encrypt(input string/binary, key string/binary)

aes_decrypt(input binary, key string/binary)

version()

xpath, xpath_short, xpath_int, xpath_long, xpath_float, xpath_double, xpath_number, xpath_string

A limited version of JSONPath is supported:

$ : Root object

. : Child operator

[] : Subscript operator for array

* : Wildcard for []

Syntax not supported that's worth noticing:

'' : Zero-length string as key

.. : Recursive descent

@ : Current object/element

() : Script expression

?() : Filter (script) expression.

[,] : Union operator

[start:end:step] : Array slice operator

Example: src_json table is a single column (json), single row table:

<code>json</code>

<code>{"store":</code>

<code>  {"fruit":[{"weight":8,"type":"apple"},{"weight":9,"type":"pear"}],</code>

<code>   "bicycle":{"price":19.95,"color":"red"}</code>

<code>  },</code>

<code> "email":"amy@only_for_json_udf_test.net",</code>

<code> "owner":"amy"</code>

<code>}</code>

The fields of the json object can be extracted using these queries:

<code>hive&gt; SELECT get_json_object(src_json.json, '$.owner') FROM src_json;</code>

<code>amy</code>

<code>hive&gt; SELECT get_json_object(src_json.json, '$.store.fruit[0]') FROM src_json;</code>

<code>{"weight":8,"type":"apple"}</code>

<code>hive&gt; SELECT get_json_object(src_json.json, '$.non_exist_key') FROM src_json;</code>

<code>NULL</code>

The following built-in aggregate functions are supported in Hive:

count(*), count(expr), count(DISTINCT expr[, expr...])

count(*) - Returns the total number of retrieved rows, including rows containing NULL values.

count(expr) - Returns the number of rows for which the supplied expression is non-NULL.

sum(col), sum(DISTINCT col)

Returns the sum of the elements in the group or the sum of the distinct values of the column in the group.

avg(col), avg(DISTINCT col)

Returns the average of the elements in the group or the average of the distinct values of the column in the group.

min(col)

Returns the minimum of the column in the group.

max(col)

Returns the maximum value of the column in the group.

variance(col), var_pop(col)

Returns the variance of a numeric column in the group.

var_samp(col)

Returns the unbiased sample variance of a numeric column in the group.

stddev_pop(col)

Returns the standard deviation of a numeric column in the group.

stddev_samp(col)

Returns the unbiased sample standard deviation of a numeric column in the group.

covar_pop(col1, col2)

Returns the population covariance of a pair of numeric columns in the group.

covar_samp(col1, col2)

Returns the sample covariance of a pair of numeric columns in the group.

corr(col1, col2)

Returns the Pearson coefficient of correlation of a pair of numeric columns in the group.

percentile(BIGINT col, p)

Returns the exact pth percentile of a column in the group (does not work with floating point types). p must be between 0 and 1. NOTE: A true percentile can only be computed for integer values. Use PERCENTILE_APPROX if your input is non-integral.

array&lt;double&gt;

percentile(BIGINT col, array(p1 [, p2]...))

Returns the exact percentiles p1, p2, ... of a column in the group (does not work with floating point types). pi must be between 0 and 1. NOTE: A true percentile can only be computed for integer values. Use PERCENTILE_APPROX if your input is non-integral.

percentile_approx(DOUBLE col, p [, B])

Returns an approximate pth percentile of a numeric column (including floating point types) in the group. The B parameter controls approximation accuracy at the cost of memory. Higher values yield better approximations, and the default is 10,000. When the number of distinct values in col is smaller than B, this gives an exact percentile value.

percentile_approx(DOUBLE col, array(p1 [, p2]...) [, B])

Same as above, but accepts and returns an array of percentile values instead of a single one.

regr_avgx(independent, dependent)

regr_avgy(independent, dependent)

regr_count(independent, dependent)

regr_intercept(independent, dependent)

regr_r2(independent, dependent)

regr_slope(independent, dependent)

regr_sxx(independent, dependent)

regr_sxy(independent, dependent)

regr_syy(independent, dependent)

array&lt;struct {<code>'x','y'</code>}&gt;

histogram_numeric(col, b)

Computes a histogram of a numeric column in the group using b non-uniformly spaced bins. The output is an array of size b of double-valued (x,y) coordinates that represent the bin centers and heights.

collect_set(col)

Returns a set of objects with duplicate elements eliminated.

collect_list(col)

INTEGER

ntile(INTEGER x)

Normal user-defined functions, such as concat(), take in a single input row and output a single output row. In contrast, table-generating functions transform a single input row to multiple output rows.

explode(ARRAY&lt;T&gt; a)

Explodes an array to multiple rows. Returns a row-set with a single column (col), one row for each element from the array.

Tkey,Tvalue

explode(MAP&lt;Tkey,Tvalue&gt; m)

int,T

posexplode(ARRAY&lt;T&gt; a)

Explodes an array to multiple rows with additional positional column of int type (position of items in the original array, starting with 0). Returns a row-set with two columns (pos,val), one row for each element from the array.

T1,...,Tn

inline(ARRAY&lt;STRUCT&lt;f1:T1,...,fn:Tn&gt;&gt; a)

Explodes an array of structs to multiple rows. Returns a row-set with N columns (N = number of top-level elements in the struct), one row per struct from the array.

T1,...,Tn/r

stack(int r,T1 V1,...,Tn/r Vn)

Breaks up n values V1,...,Vn into r rows. Each row will have n/r columns. r must be constant.

string1,...,stringn

json_tuple(string jsonStr,string k1,...,string kn)

Takes JSON string and a set of n keys, and returns a tuple of n values. This is a more efficient version of the <code>get_json_object</code> UDF because it can get multiple keys with just one call.

string1,...,stringn

parse_url_tuple(string urlStr,string p1,...,string pn)

Takes URL string and a set of n URL parts, and returns a tuple of n values. This is similar to the <code>parse_url()</code> UDF but can extract multiple parts at once out of a URL. Valid part names are: HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, USERINFO, QUERY:&lt;KEY&gt;.

<code>select explode(array('A','B','C'));</code>

<code>select explode(array('A','B','C')) as col;</code>

<code>select tf.* from (select 0) t lateral view explode(array('A','B','C')) tf;</code>

<code>select tf.* from (select 0) t lateral view explode(array('A','B','C')) tf as col;</code>

A

B

C

<code>select explode(map('A',10,'B',20,'C',30));</code>

<code>select explode(map('A',10,'B',20,'C',30)) as (key,value);</code>

<code>select tf.* from (select 0) t lateral view explode(map('A',10,'B',20,'C',30)) tf;</code>

<code>select tf.* from (select 0) t lateral view explode(map('A',10,'B',20,'C',30)) tf as key,value;</code>

A 10

B 20

C 30

<code>select posexplode(array('A','B','C'));</code>

<code>select posexplode(array('A','B','C')) as (pos,val);</code>

<code>select tf.* from (select 0) t lateral view posexplode(array('A','B','C')) tf;</code>

<code>select tf.* from (select 0) t lateral view posexplode(array('A','B','C')) tf as pos,val;</code>

1 A

2 B

3 C

<code>select inline(array(struct('A',10,date '2015-01-01'),struct('B',20,date '2016-02-02')));</code>

<code>select inline(array(struct('A',10,date '2015-01-01'),struct('B',20,date '2016-02-02'))) as (col1,col2,col3);</code>

<code>select tf.* from (select 0) t lateral view inline(array(struct('A',10,date '2015-01-01'),struct('B',20,date '2016-02-02'))) tf;</code>

<code>select tf.* from (select 0) t lateral view inline(array(struct('A',10,date '2015-01-01'),struct('B',20,date '2016-02-02'))) tf as col1,col2,col3;</code>

A 10 2015-01-01

B 20 2016-02-02

<code>select stack(2,'A',10,date '2015-01-01','B',20,date '2016-01-01');</code>

<code>select stack(2,'A',10,date '2015-01-01','B',20,date '2016-01-01') as (col0,col1,col2);</code>

<code>select tf.* from (select 0) t lateral view stack(2,'A',10,date '2015-01-01','B',20,date '2016-01-01') tf;</code>

<code>select tf.* from (select 0) t lateral view stack(2,'A',10,date '2015-01-01','B',20,date '2016-01-01') tf as col0,col1,col2;</code>

A 10 2015-01-01

B 20 2016-01-01

Using the syntax "SELECT udtf(col) AS colAlias..." has a few limitations:

No other expressions are allowed in SELECT: SELECT pageid, explode(adid_list) AS myCol... is not supported.

UDTFs can't be nested: SELECT explode(explode(adid_list)) AS myCol... is not supported.

GROUP BY / CLUSTER BY / DISTRIBUTE BY / SORT BY is not supported: SELECT explode(adid_list) AS myCol ... GROUP BY myCol is not supported.
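These restrictions do not apply when the UDTF is used in a LATERAL VIEW, which joins each input row to the rows the UDTF produces. A sketch, assuming a hypothetical pageAds table:

```sql
-- Hypothetical table: pageAds(pageid STRING, adid_list ARRAY<INT>)

-- Extra SELECT expressions alongside the exploded column are allowed here:
SELECT pageid, adid
FROM pageAds LATERAL VIEW explode(adid_list) adTable AS adid;

-- GROUP BY on the exploded column is likewise allowed:
SELECT adid, count(1)
FROM pageAds LATERAL VIEW explode(adid_list) adTable AS adid
GROUP BY adid;
```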

<code>explode()</code> takes in an array (or a map) as an input and outputs the elements of the array (map) as separate rows. UDTFs can be used in the SELECT expression list and as a part of LATERAL VIEW.

As an example of using <code>explode()</code> in the SELECT expression list, consider a table named myTable that has a single column (myCol) and two rows:

[100,200,300]

[400,500,600]

Then running the query:

<code>SELECT explode(myCol) AS myNewCol FROM myTable;</code>

will produce:

100

200

300

400

500

600

The usage with Maps is similar:

<code>SELECT explode(myMap) AS (myMapKey, myMapValue) FROM myMapTable;</code>

<code>posexplode()</code> is similar to <code>explode</code> but instead of just returning the elements of the array it returns the element as well as its position in the original array.

As an example of using <code>posexplode()</code> in the SELECT expression list, consider a table named myTable that has a single column (myCol) and two rows:

<code>SELECT posexplode(myCol) AS pos, myNewCol FROM myTable;</code>

will produce one (pos, value) row per array element:

1 100

2 200

3 300

1 400

2 500

3 600

For example, a query that calls get_json_object multiple times on the same JSON document, like

<code>select a.timestamp, get_json_object(a.appevents, '$.eventid'), get_json_object(a.appevents, '$.eventname') from log a;</code>

should be changed to:

<code>select a.timestamp, b.*</code>

<code>from log a lateral view json_tuple(a.appevents, 'eventid', 'eventname') b as f1, f2;</code>

The parse_url_tuple() UDTF is similar to parse_url(), but can extract multiple parts of a given URL, returning the data in a tuple. Values for a particular key in QUERY can be extracted by appending a colon and the key to the partToExtract argument, for example, parse_url_tuple('http://facebook.com/path1/p.php?k1=v1&amp;k2=v2#Ref1', 'QUERY:k1', 'QUERY:k2') returns a tuple with values of 'v1','v2'. This is more efficient than calling parse_url() multiple times. All the input parameters and output column types are string.

<code>SELECT b.*</code>

<code>FROM src LATERAL VIEW parse_url_tuple(fullurl, 'HOST', 'PATH', 'QUERY', 'QUERY:id') b as host, path, query, query_id LIMIT 1;</code>

A typical OLAP pattern is that you have a timestamp column and you want to group by daily or other less granular date windows rather than by second. So you might want to select concat(year(dt),month(dt)) and then group on that concat(). But if you attempt to GROUP BY or SORT BY a column to which you've applied a function and an alias, like this:

<code>select f(col) as fc, count(*) from table_name group by fc;</code>

you will get an error:

<code>FAILED: Error in semantic analysis: line 1:69 Invalid Table Alias or Column Reference fc</code>

because you are not able to GROUP BY or SORT BY a column alias on which a function has been applied. There are two workarounds. First, you can reformulate this query with subqueries, which is somewhat complicated:

<code>select sq.fc,col1,col2,...,colN,count(*) from</code>

<code>  (select f(col) as fc,col1,col2,...,colN from table_name) sq</code>

<code> group by sq.fc,col1,col2,...,colN;</code>

Or you can make sure not to use a column alias, which is simpler:

<code>select f(col) as fc, count(*) from table_name group by f(col);</code>

Contact Tim Ellis (tellis) at RiotGames dot com if you would like to discuss this in further detail.

The context of a UDF's evaluate method is one row at a time. A simple invocation of a UDF like

<code>SELECT length(string_col) FROM table_name;</code>

would evaluate the length of each of string_col's values in the map portion of the job. The side effect of the UDF being evaluated on the map side is that you can't control the order of rows sent to the mapper; it is the same order in which the file split sent to the mapper gets deserialized. Any reduce-side operation (such as SORT BY, ORDER BY, or a regular JOIN) would apply to the UDF's output as if it were just another column of the table. This is fine since the context of the UDF's evaluate method is meant to be one row at a time.

<code>SELECT reducer_udf(my_col, distribute_col, sort_col) FROM</code>

<code>(SELECT my_col, distribute_col, sort_col FROM table_name DISTRIBUTE BY distribute_col SORT BY distribute_col, sort_col) t</code>
