---
output: github_document
editor_options:
chunk_output_type: console
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r, echo = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/README-"
)
```
[Travis-CI build status](https://travis-ci.org/systemincloud/rly)
[CRAN status](https://cran.r-project.org/package=rly)
[Codecov test coverage](https://codecov.io/gh/systemincloud/rly)
[Donate via PayPal](https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=UR288FRQUSYQE&item_name=rly+-+R+Lex+and+Yacc&currency_code=USD&source=url)
# rly
Tools to Create Formal Language Parsers
## Introduction
{rly} is a 100% R implementation of the common parsing tools [`lex` and `yacc`](http://dinosaur.compilertools.net/).
`lex` is a "lexical analyzer generator". Its core functionality is to split an input stream into more usable elements. You can think of it as a tool that helps identify the interesting components in a text file, such as `->` or `%>%` in an R script.
`yacc` is "yet another compiler-compiler"; its main task is to take the tokens `lex` provides and process them contextually.
Together, you can use them to:
- define what tokens a given language/input stream will accept
- define what R code should be executed as a language file/input stream (e.g. a program) is parsed.
This project is an R clone of [Ply](https://github.com/dabeaz/ply).
## Installation
```{r eval=FALSE}
devtools::install_github("systemincloud/rly")
```
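
The CRAN badges above suggest a released version is also available, so installing from CRAN should work as well:

```{r eval=FALSE}
install.packages("rly")
```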
## Usage
{rly} consists of two files: `lex.R` and `yacc.R`. There's quite a bit "going on" in those two files, and they may be helpful examples of how to structure {R6} classes.
```{r eval=TRUE}
library(rly)
```
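
At a high level, usage follows the same two-step pattern as Ply: describe your tokens and rules in an {R6} class, turn that class into a lexer (and, optionally, a parser) with `rly::lex()` / `rly::yacc()`, then feed it input. Here is a bare-bones sketch of that shape (the `WORD` token is made up purely for illustration; the Examples below are complete, runnable versions):

```{r eval=FALSE}
# A minimal, made-up lexer: one token type, defined by a bare regex string
MiniLexer <- R6::R6Class(
  classname = "Lexer",
  public = list(
    tokens = c("WORD"),
    t_WORD = "[A-Za-z]+",   # tokens can be plain regex strings...
    t_ignore = " \t",       # ...and whitespace can simply be ignored
    t_error = function(t) { # skip anything we don't recognize
      t$lexer$skip(1)
      return(t)
    }
  )
)

lexer <- rly::lex(MiniLexer)
lexer$input("hello rly world")
lexer$token() # first WORD token; returns NULL once the input is exhausted
```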
## Examples
The `demo` directory contains several different examples. Here are some more.
### Finding Tokens
We can build a "lexer" to pull out URLs from text input. Sure, we could just use `stringi::stri_match_all_regex()` for this particular example, but the intent is to work with a known, straightforward domain to see how "lexing" works:
```{r url-lex-01}
# Define what tokens we accept. In this case, just URLs
TOKENS <- c("URL")

# Build our "lexing" rules. This is an {R6} class.
# If you've never played with {R6} classes, head on
# over to <https://cran.rstudio.com/web/packages/R6/>
# to learn more about it and also take a look at the
# packages that depend on/use it (which may help you
# grok {R6} a bit better.)
URLexer <- R6::R6Class(
  classname = "Lexer",
  public = list(
    # tell it about the tokens we accept
    tokens = TOKENS,
    # we use the t_ prefix to identify that this is the
    # "matcher" for the token and then give it the regular
    # expression that goes with this token. The URL
    # regex is a toy one that says to match http or https
    # strings until it hits a space.
    #
    # The `t` parameter is the full context of the token
    # parser at the time it gets to this token.
    #
    # here, we're just printing a message out and continuing
    # but we could do anything we (programmatically) want
    t_URL = function(re = 'http[s]*://[^[:space:]]+', t) {
      message("Found URL: ", t$value) # Be verbose when we find a URL
      return(t) # we need to return the potentially modified token
    },
    # whenever a newline is encountered we increment a line
    # counter. This is useful when providing contextual errors
    t_newline = function(re = '\\n+', t) {
      t$lexer$lineno <- t$lexer$lineno + nchar(t$value)
      return(NULL)
    },
    # the goal of the lexer is to give us valid input
    # but we can ignore errors if we're just looking for
    # certain things (like URLs)
    t_error = function(t) {
      t$lexer$skip(1)
      return(t)
    }
  )
)

# Create our lexer
lexer <- rly::lex(URLexer)

# Feed it some data
lexer$input(s = "
http://google.com https://rstudio.org/
Not a URL https://rud.is/b Another non-URL
https://r-project.org/
https://one.more.url/with/some/extra/bits.html
")

# We'll put found URLs here (rly inefficient)
found_urls <- character(0)

# keep track of the invalid token info (also inefficient)
invalid <- list()

# Now, we'll iterate through the tokens we were given
repeat {
  tok <- lexer$token()    # get the next token
  if (is.null(tok)) break # no more tokens, done with lexing
  switch(
    tok$type,
    # Do this when we find a token identified as a `URL`
    URL = found_urls <- append(found_urls, tok$value),
    # Do this whenever we find an invalid token
    error = invalid <- append(invalid, list(data.frame(
      bad_thing = tok$value,
      stream_pos = tok$lexpos,
      line = tok$lineno,
      stringsAsFactors = FALSE
    )))
  )
}

invalid <- do.call(rbind.data.frame, invalid)

nrow(invalid)     # number of errors
head(invalid, 10) # it'll be clear we never told it about whitespace

found_urls # the good stuff
```
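For comparison, the `stringi::stri_match_all_regex()` route mentioned above would be a one-liner with the same toy regex (shown here as a sketch, not run):

```{r eval=FALSE}
# Plain regex extraction of the same toy URL pattern with {stringi}
stringi::stri_match_all_regex(
  "http://google.com and https://rud.is/b in the same line",
  "http[s]*://[^[:space:]]+"
)
```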
We can extend this to do things with different types of URIs (not necessarily "http"-ish URLs):
```{r url-lex-02}
# we'll define different token types for HTTP URLs, HTTPS URLs and
# MAILTO URLs
TOKENS <- c("HTTP_URL", "HTTPS_URL", "MAILTO_URL")

URLexer <- R6::R6Class(
  classname = "Lexer",
  public = list(
    tokens = TOKENS,
    # three different token regexes
    t_HTTPS_URL = function(re = 'https://[^[:space:]]+', t) {
      message("Found HTTPS URL: ", t$value)
      return(t)
    },
    t_MAILTO_URL = function(re = 'mailto:[^[:space:]]+', t) {
      message("Found MAILTO URL: ", t$value)
      return(t)
    },
    t_HTTP_URL = function(re = 'http://[^[:space:]]+', t) {
      message("Found HTTP URL: ", t$value)
      return(t)
    },
    t_error = function(t) {
      # if we don't do this the lexer will error out on tokens
      # we don't match (which is usually what we want)
      t$lexer$skip(1)
      return(t)
    }
  )
)

# Create our lexer
lexer <- rly::lex(URLexer)

# Feed it some data
lexer$input(s = "
http://google.com https://rstudio.org/
Not a URL https://rud.is/b Another non-URL mailto:fred@example.com?subject=Hello
https://r-project.org/
mailto:steve@example.com
https://one.more.url/with/some/extra/bits.html
")

http_urls <- character(0)
https_urls <- character(0)
mailto_urls <- character(0)

repeat {
  tok <- lexer$token()    # get the next token
  if (is.null(tok)) break # no more tokens, done with lexing
  switch(
    tok$type,
    HTTP_URL   = http_urls   <- append(http_urls, tok$value),
    HTTPS_URL  = https_urls  <- append(https_urls, tok$value),
    MAILTO_URL = mailto_urls <- append(mailto_urls, tok$value)
  )
}

http_urls
https_urls
mailto_urls
```
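Both examples drain the lexer with the same `repeat`/`lexer$token()` loop, so you could factor that into a small helper. `collect_tokens()` below is a hypothetical convenience function (not part of {rly}), sketched with the same token accessors used above:

```{r eval=FALSE}
# Hypothetical helper: run a lexer over some text and return every
# token (including `error` tokens) as one data frame
collect_tokens <- function(lexer, text) {
  lexer$input(text)
  out <- list()
  repeat {
    tok <- lexer$token()
    if (is.null(tok)) break
    out[[length(out) + 1L]] <- data.frame(
      type  = tok$type,
      value = tok$value,
      line  = tok$lineno,
      pos   = tok$lexpos,
      stringsAsFactors = FALSE
    )
  }
  do.call(rbind.data.frame, out)
}

collect_tokens(
  rly::lex(URLexer),
  "see https://r-project.org/ or mailto:steve@example.com"
)
```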
### Calculator Example
Here is an example showing a {rly} implementation of a calculator with variables.
```{r echo=FALSE}
Sys.setenv("__R_CHECK_LENGTH_1_CONDITION_"=FALSE)
Sys.setenv("__R_CHECK_LENGTH_1_LOGIC2_"=FALSE)
```
Let's bootstrap our tokenizer/lexer:
```{r eval=TRUE}
TOKENS = c('NAME', 'NUMBER')

# these are "LEXEMES"
# (ref: https://stackoverflow.com/questions/14954721/what-is-the-difference-between-a-token-and-a-lexeme)
LITERALS = c('=', '+', '-', '*', '/', '(', ')')

Lexer <- R6::R6Class(
  classname = "Lexer",
  public = list(
    tokens = TOKENS,
    literals = LITERALS,
    t_NAME = '[a-zA-Z_][a-zA-Z0-9_]*',
    t_NUMBER = function(re = '\\d+', t) {
      t$value <- strtoi(t$value)
      return(t)
    },
    t_ignore = " \t",
    t_newline = function(re = '\\n+', t) {
      t$lexer$lineno <- t$lexer$lineno + nchar(t$value)
      return(NULL)
    },
    t_error = function(t) {
      cat(sprintf("Illegal character '%s'", t$value[1]))
      t$lexer$skip(1)
      return(t)
    }
  )
)
```
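Before wiring up the parser, it can help to peek at what this lexer emits on its own. This is just the same token loop as in the earlier examples (in {rly}, as in Ply, literal characters such as `=` and `+` should come back with the character itself as the token type):

```{r eval=FALSE}
# Quick look at the raw token stream for a small expression
peek <- rly::lex(Lexer)
peek$input("x = 3 + 4 * 10")
repeat {
  tok <- peek$token()
  if (is.null(tok)) break
  cat(sprintf("%-8s %s\n", tok$type, tok$value))
}
```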
Now, we write our expression parser. Note that we use `TOKENS` and `LITERALS` from the lexer we just wrote, so the parser has some context for what the tokenizer will emit to it:
```{r eval=TRUE}
Parser <- R6::R6Class(
  classname = "Parser",
  public = list(
    tokens = TOKENS,
    literals = LITERALS,
    # Parsing rules
    precedence = list(
      c('left', '+', '-'),
      c('left', '*', '/'),
      c('right', 'UMINUS')
    ),
    # dictionary of names (can be inefficient but it's cool here)
    names = new.env(hash = TRUE),
    # One type of "statement" is NAME = expression
    p_statement_assign = function(doc = 'statement : NAME "=" expression', p) {
      self$names[[as.character(p$get(2))]] <- p$get(4)
    },
    # Another type of "statement" is just an expression
    p_statement_expr = function(doc = 'statement : expression', p) {
      cat(p$get(2))
      cat('\n')
    },
    # Classic simple definition of an expression
    p_expression_binop = function(doc = "expression : expression '+' expression
                                                    | expression '-' expression
                                                    | expression '*' expression
                                                    | expression '/' expression", p) {
      if (p$get(3) == '+')      p$set(1, p$get(2) + p$get(4))
      else if (p$get(3) == '-') p$set(1, p$get(2) - p$get(4))
      else if (p$get(3) == '*') p$set(1, p$get(2) * p$get(4))
      else if (p$get(3) == '/') p$set(1, p$get(2) / p$get(4))
    },
    # unary minus is a special case we need to handle
    # see https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.3.0/com.ibm.zos.v2r3.bpxa600/bpxa698.htm
    # for a %prec explanation
    # note that order does matter a bit in both lexer and parser rule specs
    p_expression_uminus = function(doc = "expression : '-' expression %prec UMINUS", p) {
      p$set(1, -p$get(3))
    },
    # parenthesized expression
    p_expression_group = function(doc = "expression : '(' expression ')'", p) {
      p$set(1, p$get(3))
    },
    p_expression_number = function(doc = 'expression : NUMBER', p) {
      p$set(1, p$get(2))
    },
    p_expression_name = function(doc = 'expression : NAME', p) {
      p$set(1, self$names[[as.character(p$get(2))]])
    },
    p_error = function(p) {
      if (is.null(p)) cat("Syntax error at EOF")
      else cat(sprintf("Syntax error at '%s'", p$value))
    }
  )
)
lexer <- lex(Lexer)
parser <- yacc(Parser)
# these will each end with `NULL` as that's how the `parser` signals it's done
parser$parse("3", lexer)
parser$parse("3 + 5", lexer)
parser$parse("3 + 5 * 10 - 100", lexer)
parser$parse("A + B * C - D", lexer) # valid lexical syntax but no data to work on; in a real calculator this wld error out
parser$parse("A + B * C - D = E", lexer) # invalid lexical syntax
parser$parse("A = 1 + 2", lexer) # valid syntax, still no output b/c we just did assignment
parser$parse("A", lexer)
invisible(parser$parse("B = 5", lexer)) # using invisible() only to suppress useless NULLs
invisible(parser$parse("C = 10", lexer))
invisible(parser$parse("D = 100", lexer))
parser$parse("A + B * C - D", lexer)
```
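If you want a slightly more calculator-ish feel, a thin wrapper around the parser/lexer pair defined above might look like this (`calc()` is just a hypothetical convenience, using `invisible()` as above to hide the trailing `NULL`s):

```{r eval=FALSE}
# Hypothetical convenience wrapper around the parser and lexer defined above
calc <- function(expr) {
  invisible(parser$parse(expr, lexer))
}

calc("E = (A + B) * 2") # assignment: nothing printed
calc("E / 4")           # expression: the result is cat()'d by p_statement_expr
```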