Package shlex

import "github.com/google/shlex"
Overview

Package shlex implements a simple lexer which splits input into tokens using shell-style rules for quoting and commenting.

In the basic use case, the default ASCII lexer splits a string into sub-strings:

shlex.Split("one \"two three\" four") -> []string{"one", "two three", "four"}

To process a stream of strings:

l := NewLexer(os.Stdin)
for {
	token, err := l.Next()
	if err != nil {
		break // err is io.EOF once the input is exhausted
	}
	// process token
}

To access the raw token stream (which includes tokens for comments):

t := NewTokenizer(os.Stdin)
for {
	token, err := t.Next()
	if err != nil {
		break // err is io.EOF once the input is exhausted
	}
	// process token
}

func Split

func Split(s string) ([]string, error)

Split partitions a string into a slice of strings, applying the shell-style quoting and commenting rules described in the overview.
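
For example, quoted substrings survive as single words, and an unquoted # starts a comment that is dropped. A minimal runnable sketch (the input string is illustrative):

package main

import (
	"fmt"

	"github.com/google/shlex"
)

func main() {
	// The double quotes keep "my file.txt" together as one word;
	// everything after the unquoted # is treated as a comment.
	words, err := shlex.Split(`cp "my file.txt" /tmp # copy it`)
	if err != nil {
		panic(err) // e.g. an unterminated quote
	}
	fmt.Println(words) // [cp my file.txt /tmp]
}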

type Lexer

Lexer turns an input stream into a sequence of tokens. Whitespace and comments are skipped.

type Lexer Tokenizer

func NewLexer

func NewLexer(r io.Reader) *Lexer

NewLexer creates a new lexer from an input stream.

func (*Lexer) Next

func (l *Lexer) Next() (string, error)

Next returns the next word, or an error. If there are no more words, the error will be io.EOF.
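
A self-contained sketch of the word-reading loop, with strings.NewReader standing in for any io.Reader (the input is illustrative):

package main

import (
	"fmt"
	"io"
	"strings"

	"github.com/google/shlex"
)

func main() {
	l := shlex.NewLexer(strings.NewReader(`echo 'hello world' # greeting`))
	for {
		word, err := l.Next()
		if err == io.EOF {
			break // input exhausted
		}
		if err != nil {
			panic(err) // e.g. an unterminated quote
		}
		fmt.Println(word) // prints "echo", then "hello world"
	}
}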

type Token

Token is a (type, value) pair representing a lexical token.

type Token struct {
    // contains filtered or unexported fields
}

func (*Token) Equal

func (a *Token) Equal(b *Token) bool

Equal reports whether tokens a and b are equal. Two tokens are equal if both their types and values are equal. A nil token can never be equal to another token.
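
Because a Token's fields are unexported, tokens are obtained from a Tokenizer. A small sketch (error handling elided for brevity; inputs are illustrative):

package main

import (
	"fmt"
	"strings"

	"github.com/google/shlex"
)

func main() {
	// Each Next returns the first token of its stream.
	a, _ := shlex.NewTokenizer(strings.NewReader("foo bar")).Next()
	b, _ := shlex.NewTokenizer(strings.NewReader("foo baz")).Next()
	fmt.Println(a.Equal(b))   // true: both WordTokens with value "foo"
	fmt.Println(a.Equal(nil)) // false: nil never equals another token
}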

type TokenType

TokenType is a top-level token classification: a word, space, comment, or unknown.

type TokenType int

Classes of lexical token

const (
    UnknownToken TokenType = iota
    WordToken
    SpaceToken
    CommentToken
)

type Tokenizer

Tokenizer turns an input stream into a sequence of typed tokens.

type Tokenizer struct {
    // contains filtered or unexported fields
}

func NewTokenizer

func NewTokenizer(r io.Reader) *Tokenizer

NewTokenizer creates a new tokenizer from an input stream.

func (*Tokenizer) Next

func (t *Tokenizer) Next() (*Token, error)

Next returns the next token in the stream.
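
A sketch that dumps the raw token stream, including the comment token a Lexer would skip. It assumes io.EOF marks end of input, as with Lexer.Next, and leans on fmt to print each token's unexported (type, value) pair:

package main

import (
	"fmt"
	"io"
	"strings"

	"github.com/google/shlex"
)

func main() {
	t := shlex.NewTokenizer(strings.NewReader("ls -l # list files"))
	for {
		tok, err := t.Next()
		if err == io.EOF {
			break // stream exhausted
		}
		if err != nil {
			panic(err)
		}
		fmt.Println(tok) // e.g. &{1 ls}, &{1 -l}, then a CommentToken
	}
}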