Charl Language Documentation
Reference manual for the Charl programming language v0.3.0
Contents
1. Getting Started
- Installation — Building and installing Charl on Linux, macOS, and Windows
- Quick Start — First program and basic concepts
- CLI Reference — Command-line interface documentation
2. Language Specification
- Type System — Primitive and compound types, type inference
- Variables and Constants — Variable declaration and scope
- Operators — Arithmetic, logical, and comparison operators
- Control Flow — Conditionals, loops, and pattern matching
- Functions — Function definition and calling conventions
- Arrays — Array operations and indexing
- Tuples — Heterogeneous tuples and destructuring
3. API Reference
- Tensor Operations — 14 functions for tensor creation and manipulation
- Automatic Differentiation — 4 functions for gradient tracking
- Neural Network Layers — 5 functions for layers and activations
- Loss Functions — 2 loss functions (MSE, Cross-Entropy)
- Optimizers — 3 optimization algorithms (SGD, Momentum, Adam)
- Backpropagation Helpers — 4 gradient computation functions
4. Examples and Tutorials
- Basic Optimization — Gradient descent example
- Neural Network Training — Complete training loop with backpropagation
- XOR Problem — Non-linear classification
Function Index
Complete alphabetical index of all 32 built-in functions:
Autograd
autograd_compute_linear_grad, autograd_compute_mse_grad, autograd_compute_relu_grad, autograd_compute_sigmoid_grad
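These four helpers compute the local gradients used by a hand-written backward pass. The sketch below is illustrative only: the function names come from this index, but the let-binding syntax, the // comment style, and every argument list are assumptions, so check the Backpropagation Helpers chapter for the real signatures.

```
// Hypothetical manual backward step for a linear layer under MSE loss.
// Argument orders here are guesses, not the documented signatures.
let pred = nn_linear(x, w, b)
let d_pred = autograd_compute_mse_grad(pred, target)  // dLoss/dPred
let d_w = autograd_compute_linear_grad(x, w, d_pred)  // dLoss/dW
```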
Loss Functions
loss_cross_entropy, loss_mse
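Both losses reduce a prediction/target pair to a scalar. A minimal sketch, assuming a loss_mse(predictions, targets) argument order and a tensor(data, shape) constructor; both assumptions should be verified against the Loss Functions chapter.

```
// Assumed: tensor(data, shape) builds a tensor, loss_mse returns a scalar.
let predictions = tensor([0.9, 0.1], [2])
let targets = tensor([1.0, 0.0], [2])
let error = loss_mse(predictions, targets)
```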
Neural Networks
nn_linear, nn_relu, nn_sigmoid, nn_softmax, nn_tanh
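nn_linear is the affine layer and the other four are activations. A sketch of a two-layer forward pass; the parameter names w1, b1, w2, b2 and the (input, weights, bias) argument order are assumptions.

```
// Two-layer forward pass; parameter tensors and argument order are assumed.
let hidden = nn_relu(nn_linear(x, w1, b1))
let output = nn_softmax(nn_linear(hidden, w2, b2))
```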
Optimizers
optim_adam_step, optim_sgd_momentum_step, optim_sgd_step
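The three step functions implement plain SGD, SGD with momentum, and Adam. One plain-SGD update might look like the sketch below; the (parameter, gradient, learning_rate) order, the rebinding of w, and the pairing with tensor_grad and tensor_zero_grad are all assumptions.

```
// One assumed SGD update: read the gradient, step, then clear it.
let lr = 0.01
let w = optim_sgd_step(w, tensor_grad(w), lr)
tensor_zero_grad(w)
```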
Tensor Operations
tensor, tensor_add, tensor_clip_grad, tensor_div, tensor_grad, tensor_matmul, tensor_mean, tensor_mul, tensor_ones, tensor_randn, tensor_requires_grad, tensor_reshape, tensor_set_grad, tensor_sub, tensor_sum, tensor_transpose, tensor_zero_grad, tensor_zeros
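Creation functions (tensor, tensor_zeros, tensor_ones, tensor_randn), element-wise arithmetic, and shape utilities compose directly. The sketch below infers signatures from the names alone, with shapes written as integer lists; the Tensor Operations chapter is authoritative.

```
// Signatures inferred from names: creation takes a shape, math takes tensors.
let a = tensor_randn([2, 3])     // 2x3 standard-normal samples
let b = tensor_ones([3, 2])      // 3x2 of ones
let c = tensor_matmul(a, b)      // 2x2 matrix product
let m = tensor_mean(c)           // scalar mean of all entries
```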