Conversation

@norcalli

This makes it possible to get more accurate information when extracting tokens from the source.

I'm using this to implement dbg- and expect-style testing, where I can modify the source in place with snapshot tests.

@norcalli
Author

For instance:

return {
  name = "dbg";
  entrypoints = { "dbg" };
  keywords = {};
  expression = function(self, lex)
    lex:expect "dbg"
    local start_token = lex:cur()  -- first token of the wrapped expression
    local expfn = lex:luaexpr()
    local end_token = lex:cur()    -- first token *after* the expression
    local filename = lex.source
    -- terralib.printraw{"dbg", start_token, end_token}
    return function(env_fn)
      local env = env_fn()
      local data = assert(io.open(filename)):read "*a"
      -- offsets are 0-based; Lua's string.sub is 1-based and inclusive
      local start = start_token.start_offset + 1
      -- ws_start_offset marks where the whitespace/comments preceding
      -- end_token begin, i.e. just past the expression's last character
      local finish = end_token.ws_start_offset
      io.stderr:write(("WS:%q\n"):format(data:sub(end_token.ws_start_offset + 1, end_token.start_offset)))
      io.stderr:write(("CODE:%d:%d: %q\n"):format(start, finish, data:sub(start, finish)))
      return expfn(env)
    end
  end;
}

Without this change it would be impossible to find the end of a token: the only available boundary would be the next token's offset, which is recorded only after skipping whitespace and comments.
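To make the offset arithmetic concrete, here is a small sketch of the semantics described above, written in Python rather than Lua so it stands alone. The token tables, offset values, and source string are hypothetical illustrations, not the actual Terra lexer API; the 0-based `start_offset`/`ws_start_offset` convention and the `+1` adjustment mirror the patch.

```python
# Hypothetical source line: a dbg expression followed by a comment,
# then the next statement's first token ("next").
source = "dbg 1 + 2  -- trailing comment\nnext"

# Offsets a lexer might record while scanning `source` (0-based,
# illustrative values only):
#   ws_start_offset - where the whitespace/comment run before the token begins
#   start_offset    - where the token's own text begins
start_token = {"ws_start_offset": 3, "start_offset": 4}   # token "1"
end_token   = {"ws_start_offset": 9, "start_offset": 31}  # token "next"

# The expression text runs from the first token's start to where the
# whitespace preceding the *next* token begins.
start = start_token["start_offset"] + 1   # Lua's string.sub is 1-based
finish = end_token["ws_start_offset"]     # 1-based inclusive end
code = source[start - 1:finish]           # back to Python's 0-based slicing
print(code)   # -> "1 + 2"

# Without ws_start_offset, the only boundary would be the next token's
# start_offset, which drags in the whitespace and the comment:
naive = source[start - 1:end_token["start_offset"]]
print(repr(naive))   # -> "1 + 2  -- trailing comment\n"
```

The point of the patch is exactly the difference between `code` and `naive`: recording where the pre-token whitespace begins lets the extension recover the expression text verbatim.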

@elliottslaughter
Member

Is there an easy way to add a short test case to sanity check this is doing what you expect?
