Constructs this object with the specified parameters.
Separators for fields in the CSV stream.
Delimiter characters used to quote strings.
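As a rough illustration of what such a constructor implies, the sketch below shows one way a CSV word state could record its configuration: every character counts as part of a word except the field separators and quote symbols passed in. The class name, parameter names, and the isWordChar helper are assumptions for illustration, not the documented API.

```typescript
// Minimal sketch (assumed names: CsvWordStateSketch, fieldSeparators, quoteSymbols).
class CsvWordStateSketch {
    private excluded: Set<number>;

    constructor(fieldSeparators: number[], quoteSymbols: number[]) {
        // Treat every character as a word character except the
        // field separators and quote symbols.
        this.excluded = new Set<number>([...fieldSeparators, ...quoteSymbols]);
    }

    // Hypothetical helper: a character belongs to a word unless excluded.
    isWordChar(symbol: number): boolean {
        return !this.excluded.has(symbol);
    }
}

// Usage sketch: commas separate fields, double quotes delimit strings.
const state = new CsvWordStateSketch([','.charCodeAt(0)], ['"'.charCodeAt(0)]);
```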
Clears all definitions of word characters.
Ignores words (such as blanks and tabs) and returns the tokenizer's next token.
A textual string to be tokenized.
A tokenizer class that controls the process.
The next token from the top of the stream.
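The sketch below suggests how a word state's token-reading step might behave: consume characters from the input while they qualify as word characters, push back the character that ended the word, and return a single word token. The names (IScanner, Token, nextTokenSketch) are assumptions made for this example, not the documented API.

```typescript
// Assumed shapes for the input stream and the resulting token.
interface Token {
    type: 'Word';
    value: string;
}

interface IScanner {
    read(): number;   // next character code, or -1 at end of input
    unread(): void;   // push the last character back onto the stream
}

// Read a run of word characters and package it as one token.
function nextTokenSketch(scanner: IScanner, isWordChar: (ch: number) => boolean): Token {
    let value = '';
    let ch = scanner.read();
    while (ch >= 0 && isWordChar(ch)) {
        value += String.fromCharCode(ch);
        ch = scanner.read();
    }
    if (ch >= 0) {
        scanner.unread();   // return the non-word character to the stream
    }
    return { type: 'Word', value };
}
```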
Establishes characters in the given range as valid for the part of a word after the first character. Note that the tokenizer must determine which characters are valid as the beginning character of a word.
First character index of the interval.
Last character index of the interval.
true if this state should use characters in the given range.
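To make the range parameters concrete, here is a hedged sketch of how a word state might store and clear range-based word-character definitions. The method names setWordChars and clearWordChars follow the descriptions above but are assumptions; the internal Set representation is purely illustrative.

```typescript
// Sketch of range-based word-character configuration.
class WordCharsSketch {
    private wordChars = new Set<number>();

    // Enable or disable every character code in [fromSymbol, toSymbol].
    setWordChars(fromSymbol: number, toSymbol: number, enable: boolean): void {
        for (let ch = fromSymbol; ch <= toSymbol; ch++) {
            if (enable) this.wordChars.add(ch);
            else this.wordChars.delete(ch);
        }
    }

    // Drop all word-character definitions.
    clearWordChars(): void {
        this.wordChars.clear();
    }
}

// Usage sketch: mark lowercase ASCII letters as word characters.
const chars = new WordCharsSketch();
chars.setWordChars('a'.charCodeAt(0), 'z'.charCodeAt(0), true);
```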
Implements a word state to tokenize a CSV stream.