Read-Eval-Print Loop (REPL)
A REPL is an interactive programming environment. You type code, it immediately runs, you see the result. Then you type more code. This tight feedback loop makes REPLs perfect for learning, experimenting, and debugging.
You’ve probably used REPLs before:
- Python’s >>> prompt
- Node.js’s interactive mode
- Browser developer console
Now we’ll build one for our calculator! Even better, we’ll be able to switch between three execution backends and see how the same code runs through different compilation paths.
How a REPL Works
The name tells you everything:
- Read - Get a line of input from the user
- Eval - Parse and execute it
- Print - Show the result
- Loop - Go back to step 1
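In its barest form, that loop needs nothing beyond the standard library. Here is a stripped-down sketch (not our actual implementation) in which the Eval step just echoes the input back:

```rust
use std::io::{self, BufRead, Write};

fn main() {
    let stdin = io::stdin();
    loop {
        // Read: prompt and grab one line of input.
        print!(">> ");
        io::stdout().flush().unwrap();
        let mut line = String::new();
        if stdin.lock().read_line(&mut line).unwrap() == 0 {
            break; // 0 bytes read means EOF (Ctrl-D): stop looping.
        }
        // Eval: a real REPL parses and executes here; this sketch just echoes.
        let result = line.trim();
        // Print the result, then Loop back to Read.
        println!("{result}");
    }
}
```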
Here’s our implementation using rustyline, a library that provides readline-style line editing (arrow keys, history, etc.):
    use cfg_if::cfg_if;
    use rustyline::{error::ReadlineError, DefaultEditor};
    // `Engine`, `VM`, and the `Result` alias are provided by the calculator crate;
    // their imports are omitted here.

    fn main() -> Result<()> {
        let mut rl = DefaultEditor::new()?;
        println!("Calculator prompt. Expressions are line evaluated.");
        loop {
            let readline = rl.readline(">> ");
            match readline {
                Ok(line) => {
                    let line = line.trim();
                    if line.is_empty() {
                        continue;
                    }
                    cfg_if! {
                        if #[cfg(any(feature = "jit", feature = "interpreter"))] {
                            match Engine::from_source(line) {
                                Ok(result) => println!("{}", result),
                                Err(e) => eprintln!("{}", e),
                            };
                        } else if #[cfg(feature = "vm")] {
                            let byte_code = Engine::from_source(line);
                            println!("byte code: {:?}", byte_code);
                            let mut vm = VM::new(byte_code);
                            vm.run();
                            println!("{}", vm.pop_last());
                        }
                    }
                }
                Err(ReadlineError::Interrupted) => {
                    println!("CTRL-C");
                    break;
                }
                Err(ReadlineError::Eof) => {
                    println!("CTRL-D");
                    break;
                }
                Err(err) => {
                    println!("Error: {:?}", err);
                    break;
                }
            }
        }
        Ok(())
    }
The REPL is simple:
- Create a rustyline editor (handles input, history, etc.)
- Loop forever, reading lines
- For each line, compile and execute using the chosen backend
- Print the result
- On Ctrl-C or Ctrl-D, exit
Three Backends, One Interface
The same REPL works with three different backends, all controlled by Cargo feature flags:
| Backend | Description | Rust Version |
|---|---|---|
| Interpreter | Walks AST directly | Stable |
| VM | Compiles to bytecode | Stable |
| JIT | Compiles to native code via LLVM | Nightly |
You can compare how the same expression is handled by each backend. Let’s run through two examples with all three.
Interpreter Output Example
The interpreter walks the AST and computes results directly:
cargo run --bin repl --features interpreter
You see the AST structure and direct evaluation:
Calculator prompt. Expressions are line evaluated.
>> 1 + 2
Compiling the source: 1 + 2
[BinaryExpr { op: Plus, lhs: Int(1), rhs: Int(2) }]
3
The interpreter is the simplest backend. It parses the input into an AST (BinaryExpr with Plus operator, left-hand side Int(1), right-hand side Int(2)), then walks the tree and computes the result directly.
A more complex expression shows a nested AST:
>> (1 + 2) - (8 - 10)
Compiling the source: (1 + 2) - (8 - 10)
[BinaryExpr { op: Minus, lhs: BinaryExpr { op: Plus, lhs: Int(1), rhs: Int(2) }, rhs: BinaryExpr { op: Minus, lhs: Int(8), rhs: Int(10) } }]
5
The outer BinaryExpr has Minus as its operator, with two inner BinaryExpr nodes as children. The interpreter recursively evaluates each subtree: (1 + 2) = 3, (8 - 10) = -2, then 3 - (-2) = 5.
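To make that recursion concrete, here is a minimal tree-walking evaluator. The Node and Op shapes mirror the debug output above, but they are illustrative stand-ins rather than the calculator crate's real types:

```rust
// Toy AST mirroring the debug output above (Int, BinaryExpr, Plus, Minus).
// Illustrative types only, not the calculator crate's real definitions.
enum Op { Plus, Minus }

enum Node {
    Int(i64),
    BinaryExpr { op: Op, lhs: Box<Node>, rhs: Box<Node> },
}

// Evaluate a node by recursively evaluating its children first.
fn eval(node: &Node) -> i64 {
    match node {
        Node::Int(n) => *n,
        Node::BinaryExpr { op, lhs, rhs } => {
            let (l, r) = (eval(lhs), eval(rhs));
            match op {
                Op::Plus => l + r,
                Op::Minus => l - r,
            }
        }
    }
}

fn main() {
    // (1 + 2) - (8 - 10), built by hand instead of by the parser.
    let ast = Node::BinaryExpr {
        op: Op::Minus,
        lhs: Box::new(Node::BinaryExpr {
            op: Op::Plus,
            lhs: Box::new(Node::Int(1)),
            rhs: Box::new(Node::Int(2)),
        }),
        rhs: Box::new(Node::BinaryExpr {
            op: Op::Minus,
            lhs: Box::new(Node::Int(8)),
            rhs: Box::new(Node::Int(10)),
        }),
    };
    println!("{}", eval(&ast)); // prints 5
}
```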
VM Output Example
The VM compiles AST to bytecode, then executes it on a stack machine:
cargo run --bin repl --no-default-features --features vm
You see bytecode generation step by step:
Calculator prompt. Expressions are line evaluated.
>> 1 + 2
Compiling the source: 1 + 2
[BinaryExpr { op: Plus, lhs: Int(1), rhs: Int(2) }]
compiling node BinaryExpr { op: Plus, lhs: Int(1), rhs: Int(2) }
added instructions [1, 0, 0] from opcode OpConstant(0)
added instructions [1, 0, 0, 1, 0, 1] from opcode OpConstant(1)
added instructions [1, 0, 0, 1, 0, 1, 3] from opcode OpAdd
added instructions [1, 0, 0, 1, 0, 1, 3, 2] from opcode OpPop
byte code: Bytecode { instructions: [1, 0, 0, 1, 0, 1, 3, 2], constants: [Int(1), Int(2)] }
3
Instead of walking the tree directly, the VM compiles the AST to bytecode first. You can see each instruction being added:
- OpConstant(0) - Push constant at index 0 (which is 1)
- OpConstant(1) - Push constant at index 1 (which is 2)
- OpAdd - Pop two values, push their sum
- OpPop - Pop and return the result
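Reading the flat byte arrays gets easier with a small disassembler. Judging from the log, OpConstant is opcode byte 1 followed by a two-byte big-endian constant index, while OpPop, OpAdd, and OpSub are the single bytes 2, 3, and 4. The sketch below assumes exactly that encoding; it is not the crate's actual code:

```rust
// Hypothetical disassembler for the encoding visible in the log above:
// 1 = OpConstant (plus a two-byte big-endian operand), 2 = OpPop, 3 = OpAdd, 4 = OpSub.
fn disassemble(instructions: &[u8]) {
    let mut i = 0;
    while i < instructions.len() {
        match instructions[i] {
            1 => {
                let idx = u16::from_be_bytes([instructions[i + 1], instructions[i + 2]]);
                println!("{i:04} OpConstant({idx})");
                i += 3;
            }
            2 => { println!("{i:04} OpPop"); i += 1; }
            3 => { println!("{i:04} OpAdd"); i += 1; }
            4 => { println!("{i:04} OpSub"); i += 1; }
            other => { println!("{i:04} unknown opcode {other}"); i += 1; }
        }
    }
}

fn main() {
    // The bytecode produced for `1 + 2` in the session above.
    disassemble(&[1, 0, 0, 1, 0, 1, 3, 2]);
}
```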
A more complex expression shows more bytecode instructions being generated:
>> (1 + 2) - (8 - 10)
Compiling the source: (1 + 2) - (8 - 10)
[BinaryExpr { op: Minus, lhs: BinaryExpr { ... }, rhs: BinaryExpr { ... } }]
compiling node BinaryExpr { ... }
added instructions [1, 0, 0] from opcode OpConstant(0)
added instructions [1, 0, 0, 1, 0, 1] from opcode OpConstant(1)
added instructions [1, 0, 0, 1, 0, 1, 3] from opcode OpAdd
added instructions [1, 0, 0, 1, 0, 1, 3, 1, 0, 2] from opcode OpConstant(2)
added instructions [1, 0, 0, 1, 0, 1, 3, 1, 0, 2, 1, 0, 3] from opcode OpConstant(3)
added instructions [1, 0, 0, 1, 0, 1, 3, 1, 0, 2, 1, 0, 3, 4] from opcode OpSub
added instructions [1, 0, 0, 1, 0, 1, 3, 1, 0, 2, 1, 0, 3, 4, 4] from opcode OpSub
added instructions [1, 0, 0, 1, 0, 1, 3, 1, 0, 2, 1, 0, 3, 4, 4, 2] from opcode OpPop
byte code: Bytecode { instructions: [1, 0, 0, 1, 0, 1, 3, 1, 0, 2, 1, 0, 3, 4, 4, 2], constants: [Int(1), Int(2), Int(8), Int(10)] }
5
Four constants, multiple operations, all encoded in a flat byte array. The VM then executes this bytecode using a simple stack machine.
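To show what "a simple stack machine" means in practice, here is a rough sketch that executes the bytecode printed above. The opcode values are inferred from the log; the real VM handles errors and value types more carefully:

```rust
// Sketch of a stack machine for the bytecode shown above. Opcode values
// (1 = OpConstant, 2 = OpPop, 3 = OpAdd, 4 = OpSub) are inferred from the
// log; the calculator crate's real VM differs in detail.
fn run(instructions: &[u8], constants: &[i64]) -> i64 {
    let mut stack: Vec<i64> = Vec::new();
    let mut last_popped = 0;
    let mut ip = 0;
    while ip < instructions.len() {
        match instructions[ip] {
            1 => {
                // OpConstant: push the constant at the two-byte (big-endian) index.
                let idx = u16::from_be_bytes([instructions[ip + 1], instructions[ip + 2]]) as usize;
                stack.push(constants[idx]);
                ip += 3;
            }
            3 => {
                // OpAdd: pop two operands, push their sum.
                let (r, l) = (stack.pop().unwrap(), stack.pop().unwrap());
                stack.push(l + r);
                ip += 1;
            }
            4 => {
                // OpSub: pop two operands, push their difference.
                let (r, l) = (stack.pop().unwrap(), stack.pop().unwrap());
                stack.push(l - r);
                ip += 1;
            }
            2 => {
                // OpPop: pop the expression's result; the REPL prints it afterwards.
                last_popped = stack.pop().unwrap();
                ip += 1;
            }
            other => panic!("unknown opcode {other}"),
        }
    }
    last_popped
}

fn main() {
    // Bytecode and constants for `(1 + 2) - (8 - 10)` from the session above.
    let instructions = [1, 0, 0, 1, 0, 1, 3, 1, 0, 2, 1, 0, 3, 4, 4, 2];
    let constants = [1, 2, 8, 10];
    println!("{}", run(&instructions, &constants)); // prints 5
}
```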
JIT Output Example
The JIT compiles to native machine code via LLVM (requires nightly Rust):
rustup run nightly cargo run --bin repl --no-default-features --features jit
You see the generated LLVM IR:
Calculator prompt. Expressions are line evaluated.
>> 1 + 2
Compiling the source: 1 + 2
[BinaryExpr { op: Plus, lhs: Int(1), rhs: Int(2) }]
Generated LLVM IR: define i32 @jit() {
entry:
ret i32 3
}
3
Notice something interesting: the IR just says ret i32 3! LLVM computed 1 + 2 = 3 at compile time and baked the answer directly into the code. This is constant folding, one of LLVM’s many optimizations.
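LLVM performs this folding on its own IR, but the idea is easy to demonstrate on a calculator-style AST. The sketch below reuses the toy Node and Op types from the interpreter example; it illustrates the concept, not how LLVM (or our JIT) actually implements it:

```rust
// Constant folding on a toy AST: if both children of a binary node reduce to
// constants, replace the node with the computed constant. Illustrative only.
#[derive(Debug)]
enum Op { Plus, Minus }

#[derive(Debug)]
enum Node {
    Int(i64),
    BinaryExpr { op: Op, lhs: Box<Node>, rhs: Box<Node> },
}

fn fold(node: Node) -> Node {
    match node {
        Node::BinaryExpr { op, lhs, rhs } => match (fold(*lhs), fold(*rhs)) {
            // Both sides are constants: do the arithmetic now, at "compile time".
            (Node::Int(l), Node::Int(r)) => Node::Int(match op {
                Op::Plus => l + r,
                Op::Minus => l - r,
            }),
            // Otherwise keep the (partially folded) node.
            (lhs, rhs) => Node::BinaryExpr { op, lhs: Box::new(lhs), rhs: Box::new(rhs) },
        },
        leaf => leaf,
    }
}

fn main() {
    // 1 + 2 folds to a single constant, which is why the JIT's IR is just `ret i32 3`.
    let ast = Node::BinaryExpr {
        op: Op::Plus,
        lhs: Box::new(Node::Int(1)),
        rhs: Box::new(Node::Int(2)),
    };
    println!("{:?}", fold(ast)); // Int(3)
}
```

Because our calculator expressions contain nothing but constants, everything folds all the way down, which is why the generated functions reduce to a single ret instruction.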
Let’s try the same complex expression:
>> (1 + 2) - (8 - 10)
Compiling the source: (1 + 2) - (8 - 10)
[BinaryExpr { op: Minus, lhs: BinaryExpr { op: Plus, lhs: Int(1), rhs: Int(2) }, rhs: BinaryExpr { op: Minus, lhs: Int(8), rhs: Int(10) } }]
Generated LLVM IR: define i32 @jit() {
entry:
ret i32 5
}
5
Again, LLVM optimized the whole expression to just ret i32 5. The AST shows the full nested structure, but the compiled native code is minimal - just returning a constant!
Why Build a REPL?
Building a REPL teaches you:
- The edit-compile-run cycle - Even simpler than files
- Error handling - What happens when input is invalid?
- State management - In later chapters, we’ll maintain variables across lines
- Debugging - Print AST, bytecode, or IR to see what’s happening
Nearly every professional language implementation ships a REPL or an interactive mode. It’s one of the most useful tools for both language developers and language users.
Conclusion
This concludes our Calculator chapter. We took advantage of the simplicity of our Calc language to cover a lot of ground:
- Grammar and parsing - Converting text to structured data
- AST - Representing programs as trees
- Interpretation - Walking the tree and computing
- JIT compilation - Generating native code with LLVM
- Bytecode VMs - An intermediate approach
- REPL - Interactive programming
Note that our Calculator grammar is intentionally simple. It handles basic cases like negative first numbers (-1 + 2) and flexible whitespace, but it doesn’t have proper operator precedence. The expression 1 + 2 * 3 might not evaluate as you’d expect: with naive left-to-right grouping it would come out as (1 + 2) * 3 = 9 rather than 1 + (2 * 3) = 7. In the next chapter, we’ll see how Firstlang builds a more sophisticated grammar with proper operator precedence and multiple expression types.
Thanks for following along! In the next chapter, we’ll build Firstlang - a dynamically typed language with variables, functions, control flow, and recursion.