
Interface ITokenizer

A tokenizer divides a string into tokens. This interface is highly customizable with regard to exactly how this division occurs, but it also has defaults that are suitable for many languages. It assumes that the character values read from the string lie in the range 0-255. For example, the character code of a capital A is 65, so console.log(String.fromCharCode(65)); prints a capital A.

The behavior of a tokenizer depends on its character state table. This table is an array of 256 TokenizerState states. The state table decides which state to enter upon reading a character from the input string.

For example, by default, upon reading an 'A', a tokenizer will enter a "word" state. This means the tokenizer will ask a WordState object to consume the 'A', along with the characters after the 'A' that form a word. The state's responsibility is to consume characters and return a complete token.

The default table sets a SymbolState for every character from 0 to 255, and then overrides this with:

From    To      State
0       ' '     whitespaceState
'a'     'z'     wordState
'A'     'Z'     wordState
160     255     wordState
'0'     '9'     numberState
'-'     '-'     numberState
'.'     '.'     numberState
'"'     '"'     quoteState
'\''    '\''    quoteState
'/'     '/'     slashState
In addition to allowing modification of the state table, this interface makes each of the states above available. Some of these states are customizable. For example, wordState allows customization of which characters can be part of a word after the first character.
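
The table-driven dispatch described above can be sketched as follows. The state and tokenizer shapes here are simplified stand-ins, not the library's actual classes, and only the state names come from this page:

```typescript
// A sketch of character-state-table dispatch: each of the 256 character
// codes maps to a state, and the state consumes a whole token.
interface Tok { value: string; type: string; }
type State = (input: string, pos: number) => Tok;

// Consume a run of characters matching a predicate, starting at pos.
function run(input: string, pos: number, match: (c: string) => boolean): string {
  let end = pos;
  while (end < input.length && match(input[end])) end++;
  return input.slice(pos, end);
}

const wordState: State = (s, p) =>
  ({ value: run(s, p, c => /[A-Za-z]/.test(c)), type: "Word" });
const numberState: State = (s, p) =>
  ({ value: run(s, p, c => /[0-9.]/.test(c)), type: "Number" });
const whitespaceState: State = (s, p) =>
  ({ value: run(s, p, c => c.charCodeAt(0) <= 32), type: "Whitespace" });
const symbolState: State = (s, p) => ({ value: s[p], type: "Symbol" });

// Default table: symbolState for every code, then the overrides.
const table: State[] = new Array(256).fill(symbolState);
for (let c = 0; c <= 32; c++) table[c] = whitespaceState;   // 0..' '
for (let c = 0x41; c <= 0x5a; c++) table[c] = wordState;    // 'A'..'Z'
for (let c = 0x61; c <= 0x7a; c++) table[c] = wordState;    // 'a'..'z'
for (let c = 0x30; c <= 0x39; c++) table[c] = numberState;  // '0'..'9'

function tokenize(input: string): Tok[] {
  const tokens: Tok[] = [];
  let pos = 0;
  while (pos < input.length) {
    // Look up the state for the current character and let it consume.
    const state = table[input.charCodeAt(pos)] ?? symbolState;
    const token = state(input, pos);
    tokens.push(token);
    pos += Math.max(1, token.value.length);
  }
  return tokens;
}
```

With this sketch, tokenize("abc 3.14=x") produces the tokens "abc" (Word), " " (Whitespace), "3.14" (Number), "=" (Symbol), and "x" (Word).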

Hierarchy

  • ITokenizer

Implemented by

Index

Properties

commentState

commentState: ICommentState

A token state to process comments.

decodeStrings

decodeStrings: boolean

Decodes quoted strings.

mergeWhitespaces

mergeWhitespaces: boolean

Merges whitespaces.

numberState

numberState: INumberState

A token state to process numbers.

quoteState

quoteState: IQuoteState

A token state to process quoted strings.

scanner

scanner: IScanner

The stream scanner to tokenize.

skipComments

skipComments: boolean

Skips comments.

skipEof

skipEof: boolean

Skips the End-Of-File token at the end of the stream.

skipUnknown

skipUnknown: boolean

Skips unknown characters.

skipWhitespaces

skipWhitespaces: boolean

Skips whitespaces.

symbolState

symbolState: ISymbolState

A token state to process symbols (single-character like "=" or multi-character like "<>").

unifyNumbers

unifyNumbers: boolean

Unifies numbers: "Integers" and "Floats" are both reported as "Numbers".

whitespaceState

whitespaceState: IWhitespaceState

A token state to process white space delimiters.

wordState

wordState: IWordState

A token state to process words or identifiers.
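
The boolean options above post-process the raw token stream. The following sketch shows how skipWhitespaces and mergeWhitespaces could interact; MiniTokenizer is a hypothetical stand-in, not part of the library, and only the two property names come from this interface:

```typescript
// Hypothetical tokenizer illustrating the whitespace options.
interface Token { value: string; type: string; }

class MiniTokenizer {
  public skipWhitespaces = false;
  public mergeWhitespaces = false;

  tokenize(buffer: string): Token[] {
    // Crude scan: letter runs become Word tokens, each single space a
    // Whitespace token, anything else a Symbol token.
    const raw: Token[] = (buffer.match(/[A-Za-z]+|./g) ?? []).map(v => ({
      value: v,
      type: /^[A-Za-z]/.test(v) ? "Word" : v === " " ? "Whitespace" : "Symbol",
    }));

    let tokens = raw;
    if (this.mergeWhitespaces) {
      // Collapse adjacent Whitespace tokens into one.
      tokens = tokens.reduce<Token[]>((acc, t) => {
        const last = acc[acc.length - 1];
        if (last && last.type === "Whitespace" && t.type === "Whitespace") {
          last.value += t.value;
        } else {
          acc.push({ ...t });
        }
        return acc;
      }, []);
    }
    if (this.skipWhitespaces) {
      tokens = tokens.filter(t => t.type !== "Whitespace");
    }
    return tokens;
  }
}
```

With mergeWhitespaces enabled, "a  b" yields three tokens (the two spaces merged into one Whitespace token); enabling skipWhitespaces as well drops that token entirely.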

Methods

hasNextToken

  • hasNextToken(): boolean
  • Checks whether a next token exists.

    Returns boolean

    true if the scanner has a next token, false otherwise.

nextToken

  • nextToken(): Token
  • Gets the next token from the scanner.

    Returns Token

    The next token, or null if there are no more tokens left.
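
A typical pull-style loop over these two methods is sketched below with a tiny hypothetical in-memory tokenizer; ArrayTokenizer is not part of the library, and only the two method names come from this interface:

```typescript
// Minimal tokenizer over a fixed token list, just to illustrate the
// hasNextToken/nextToken contract.
interface Token { value: string; type: string; }

class ArrayTokenizer {
  private index = 0;
  constructor(private tokens: Token[]) {}

  // True while at least one token remains.
  hasNextToken(): boolean { return this.index < this.tokens.length; }

  // Returns the next token, or null once the input is exhausted.
  nextToken(): Token | null {
    return this.hasNextToken() ? this.tokens[this.index++] : null;
  }
}

const tokenizer = new ArrayTokenizer([
  { value: "x", type: "Word" },
  { value: "=", type: "Symbol" },
  { value: "1", type: "Number" },
]);

const values: string[] = [];
while (tokenizer.hasNextToken()) {
  values.push(tokenizer.nextToken()!.value);
}
// values is now ["x", "=", "1"]; a further nextToken() call returns null.
```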

tokenizeBuffer

  • tokenizeBuffer(buffer: string): Token[]
  • Tokenizes a string buffer into a list of token structures.

    Parameters

    • buffer: string

      A string buffer to be tokenized.

    Returns Token[]

    A list of token structures.

tokenizeBufferToStrings

  • tokenizeBufferToStrings(buffer: string): string[]
  • Tokenizes a string buffer into a list of strings.

    Parameters

    • buffer: string

      A string buffer to be tokenized.

    Returns string[]

    A list of token strings.
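
The relationship between the two buffer methods can be sketched as follows. The splitting regex below is a simplified stand-in for the real character state table, and only the two method names come from this interface:

```typescript
// Simplified stand-in for the buffer-tokenizing methods.
interface Token { value: string; type: string; }

function tokenizeBuffer(buffer: string): Token[] {
  // Letter runs -> Word, digit runs (with optional fraction) -> Number,
  // whitespace runs -> Whitespace, anything else -> one Symbol each.
  const pattern = /[A-Za-z]+|[0-9]+(?:\.[0-9]+)?|\s+|./g;
  return (buffer.match(pattern) ?? []).map(value => ({
    value,
    type: /^[A-Za-z]/.test(value) ? "Word"
        : /^[0-9]/.test(value) ? "Number"
        : /^\s/.test(value) ? "Whitespace"
        : "Symbol",
  }));
}

// tokenizeBufferToStrings keeps just the token values.
function tokenizeBufferToStrings(buffer: string): string[] {
  return tokenizeBuffer(buffer).map(t => t.value);
}
```

In this sketch, tokenizeBufferToStrings("a = 1") yields ["a", " ", "=", " ", "1"], while tokenizeBuffer returns the same values paired with their token types.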

tokenizeStream

  • tokenizeStream(scanner: IScanner): Token[]
  • Tokenizes a textual stream into a list of token structures.

    Parameters

    • scanner: IScanner

      A textual stream to be tokenized.

    Returns Token[]

    A list of token structures.

tokenizeStreamToStrings

  • tokenizeStreamToStrings(scanner: IScanner): string[]
  • Tokenizes a textual stream into a list of strings.

    Parameters

    • scanner: IScanner

      A textual stream to be tokenized.

    Returns string[]

    A list of token strings.

Generated using TypeDoc