Introduction
This site contains the lecture material and practical work for 4SE02: Rust for Embedded Systems, taught by Guillaume Duc and Samuel Tardieu as part of the Embedded Systems program at Télécom Paris. Its use is reserved for students of the Institut Polytechnique de Paris.
An archive of the practical work statement is available for offline work: book.tar.xz.
Lecture notes are available by following this link.
ⓒ 2020-2025 Guillaume Duc and Samuel Tardieu – all rights reserved
Practical Work
Git Repository
❎ First of all, you need to request to join the 4SE02/2425
group on the Telecom GitLab using this link.
Once this is done, the instructors will be able to create your personal repository where you can store your practical work.
Installation of rustup
A utility to manage the installation of Rust on a computer is rustup. Once rustup is installed, it takes care of downloading and locally installing (without requiring any privileges) the various versions of the compiler and its companion executables, in particular for cross-compilation.
❎ Install rustup, either from your Linux distribution's package system or from the rustup.rs website. If you choose the second method, you will need to reload your environment after the installation of rustup so that the directories it uses are added to your PATH.
Note: The Rust ecosystem works not only on GNU/Linux but also on macOS and Windows. In this practical work, we will assume your installation is on GNU/Linux. You are free to use another operating system as long as you don't expect assistance with your system from us.
Note: If you reach the maximum quota assigned to you on the school's computers, you can use the directory /home/users/LOGIN (where LOGIN is to be replaced with your Unix account name), which is created locally when you log in. By setting the environment variable RUSTUP_HOME to /home/users/LOGIN/rustup in your initialization files, you can instruct rustup to store all its data in this directory rather than in ~/.rustup. Don't forget to delete the ~/.rustup directory to free up space. You will also need to repeat this operation if you log in to another computer in the TP rooms.
Installation of the "stable" compilation toolchain
Rust's compilation toolchain comes in three major families:
- stable: a tested and proven version, updated every six weeks
- beta: a test version intended to become the next stable version
- nightly: a development version, allowing testing of experimental features, some of which are intended to enter the next beta version
By default, rustup has installed the latest stable version:
$ rustup show
Default host: x86_64-unknown-linux-gnu
rustup home: /usr/local/rustup
stable-x86_64-unknown-linux-gnu (default)
rustc 1.84.1 (e71f9a9a9 2025-01-27)
On a development system, you may find several installed toolchains, targets, etc., for example:
$ rustup show
Default host: x86_64-unknown-linux-gnu
rustup home: /home/johndoe/.rustup
installed toolchains
--------------------
stable-x86_64-unknown-linux-gnu
beta-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu (default)
installed targets for active toolchain
--------------------------------------
armv7-unknown-linux-gnueabihf
thumbv6m-none-eabi
thumbv7em-none-eabi
thumbv7em-none-eabihf
thumbv7m-none-eabi
thumbv8m.base-none-eabi
thumbv8m.main-none-eabi
thumbv8m.main-none-eabihf
wasm32-unknown-unknown
x86_64-unknown-linux-gnu
active toolchain
----------------
nightly-x86_64-unknown-linux-gnu (default)
You can update all installed components with rustup update.
❎ Make sure, especially if you already had rustup installed, that you are using the latest stable version of Rust by running rustup update.
Each toolchain comes with:
- cargo: the all-purpose tool that knows how to call the others; it is almost the only one we will use directly, through its commands (cargo build, cargo doc, cargo fmt, cargo clippy, etc.)
- rustc: the Rust compiler
- rustdoc: the documentation generator
- rustfmt: the code formatter
- clippy: the code linter
Editor
You are free to use the code editor of your choice. For editing Rust, we recommend Visual Studio Code with the rust-analyzer extension, which allows you to analyze your code directly from your editor. The Error Lens extension also warns of errors as you write code but may seem a bit intrusive at times.
Formatting and style
Rust comes with a very handy linter tool named Clippy, which detects anti-patterns and weird or inefficient coding styles, and suggests ways of doing better using a more idiomatic Rust style.
⚠️ We expect you to regularly use cargo clippy to run Clippy on your code, and to apply the suggestions or change your code until Clippy does not complain anymore.
You may also:
- deactivate some Clippy lints with the #[allow(clippy::some_lint_name)] attribute applied to an item (be prepared to come up with a good justification for doing so);
- use Clippy in an even stricter mode, by running cargo clippy -- -D clippy::pedantic.
⚠️ We also expect you to format your code according to the Rust standard. Fortunately, this is easy to do using cargo fmt.
Getting Started: Fibonacci
To quickly get started with Rust, we'll begin with a classic: the Fibonacci sequence.
Project Creation
❎ Create a new project with the command cargo new fibo. This will create a new fibo directory with a corresponding binary project (you could have used --lib to create a library project). Navigate to this directory.
The project is organized as follows:
- At the root, Cargo.toml contains project information (name, author, etc.). This file will also include the dependencies used by this project.
- Also at the root, Cargo.lock will contain, after compilation, information about the exact versions of dependencies used for that compilation, allowing the exact compilation conditions to be reproduced if necessary.
- src/ contains the project code.
All these files and directories are intended to be added to your version control system (git in our case). By default, if you are not already in a git repository, cargo also creates an empty git repository in the newly created directory, along with a .gitignore file. You can change this behavior with the --vcs option (see cargo help new).
After compilation, a target directory will contain object files and binaries. It should be ignored by the version control system (it is listed in the .gitignore file created by cargo).
Compile the project using the cargo build command. By default, a debug version will be built; it is much slower but more easily debuggable, and can be found in target/debug/fibo. Run the program, and for now, observe that it displays "Hello, world!". This corresponds to the code in src/main.rs.
You can also compile and execute (in case of success) the program in a single command: cargo run.
Note: You can build or run release versions using cargo build --release and cargo run --release.
Recursive Implementation of the fibo Function
❎ Implement the fibo function recursively with the following prototype:
fn fibo(n: u32) -> u32 {
    // TODO
}
We should have fibo(0) = 0, fibo(1) = 1, and fibo(n) = fibo(n-1) + fibo(n-2). Remember that if expressions return a value; you don't need to use return explicitly in your code.
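A minimal sketch of one possible shape (the base cases follow the relations above):

```rust
fn fibo(n: u32) -> u32 {
    // The whole if/else chain is an expression, so its value is returned directly.
    if n < 2 {
        n
    } else {
        fibo(n - 1) + fibo(n - 2)
    }
}
```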
In the main function, create a loop from 0 to 42 (inclusive) that displays:
fibo(0) = 0
fibo(1) = 1
fibo(2) = 1
fibo(3) = 2
fibo(4) = 3
fibo(5) = 5
up to fibo(42) = 267914296.
You can compare execution speeds in debug and release modes.
Iterative Implementation
❎ Reimplement the fibo function with the same signature, but iteratively.
For this, you will probably need:
- to declare variables
- to declare mutable variables with mut
- to create a loop in which you do not use the loop index; you can use _ as the loop index name to avoid compiler warnings

You may also return early using return if, for example, the argument is smaller than 2.
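For reference, one possible iterative shape (a sketch; the variable names are arbitrary):

```rust
fn fibo(n: u32) -> u32 {
    if n < 2 {
        return n;
    }
    let mut previous = 0;
    let mut current = 1;
    // The loop index itself is not needed, hence the `_` name.
    for _ in 2..=n {
        let next = previous + current;
        previous = current;
        current = next;
    }
    current
}
```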
Checking Calculations
Change the maximum limit from 42 to 50. Notice what happens between fibo(47) and fibo(48). Do you understand what is happening?
We have several ways to fix this issue:
- increase the size of integers and use u64 instead of u32
- use saturated arithmetic, which ensures that in case of exceeding a boundary (lower or upper), the value of that boundary will be returned
- use checked arithmetic, which signals an error if the operation results in a value that does not fit into the targeted size
Saturated Arithmetic
In the documentation for the u32 type, look for the saturating_add method.
❎ Replace the addition in your code with saturated addition. Run the program and compare.
Note: Remember that you can specify the type of numeric literals by suffixing them with a predefined type, such as 1u32.
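A quick illustration of what saturated addition does (a minimal sketch, not part of the required code):

```rust
fn main() {
    // Normal case: the sum fits, so nothing changes.
    assert_eq!(200u32.saturating_add(100), 300);
    // Overflow case: the result is clamped to the upper boundary of u32.
    assert_eq!(u32::MAX.saturating_add(1), u32::MAX);
}
```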
The results are now monotonic but still not correct. They are limited by the maximum value of a u32, namely 2³² − 1.
Checked Arithmetic
In the u32 type documentation, look for the checked_add method.
❎ Replace the (saturated) addition with a call to checked_add followed by an unwrap() call to retrieve the value from the option. Run the program and observe the runtime error.
Although the program crashes, at least it no longer displays incorrect values!
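A quick illustration of checked_add and unwrap() (the final unwrap() panics, which is the kind of runtime error you should observe):

```rust
fn main() {
    // checked_add returns an Option: Some(sum) if it fits, None on overflow.
    assert_eq!(2u32.checked_add(3), Some(5));
    let overflowed = u32::MAX.checked_add(1);
    assert_eq!(overflowed, None);
    overflowed.unwrap(); // panics: called `Option::unwrap()` on a `None` value
}
```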
Displaying Only Correct Values
Change the fibo function prototype like this:
fn fibo(n: u32) -> Option<u32> {
    // TODO
}
Return None if it is not possible to represent the result as a u32, or Some(result) when it fits into a u32.
❎ After making the above modifications, modify the main program to exit the loop as soon as it is not possible to display the result.
You can advantageously use:
- match
- if let Some(…) = … { }
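One possible shape for the main loop, assuming your fibo now returns an Option<u32> (a sketch):

```rust
fn main() {
    for n in 0..=50 {
        if let Some(value) = fibo(n) {
            println!("fibo({n}) = {value}");
        } else {
            // No representable result: stop instead of printing wrong values.
            break;
        }
    }
}
```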
Use of Crates
A crate is a collection of functionalities. It can be of two kinds: binary or library. When you created your project, you created a binary crate with the project name (fibo), and its source code is in src/main.rs.
You can import crates to use the functionalities they offer, either from local projects or from remote sources. The Rust ecosystem provides a central repository, crates.io, that gathers many crates (but you can use other repositories). The cargo utility allows you to easily fetch these crates, use them in your projects, and track their updates. It also allows you to easily publish your own crates.
We propose using the clap crate to add the ability to pass arguments and options on the command line to our fibo program.
❎ Add the following two lines to the Cargo.toml file:
[dependencies]
clap = { version = "4.1.14", features = ["derive"] }
This indicates that our project requires the clap crate, in version 4.1.14 or later, up to version 5.0.0 exclusive (more information on how to specify these version numbers is available in cargo's documentation). It also indicates that we want to use the derive feature of clap, which is not enabled by default. This feature allows the use of #[derive(Parser)].
Now, in main.rs, import the items from the crate that you need to use:
use clap::Parser;
❎ Using the clap documentation, modify your application to work according to the following schema:
Compute Fibonacci suite values
Usage: fibo [OPTIONS] <VALUE>
Arguments:
<VALUE> The maximal number to print the fibo value of
Options:
-v, --verbose Print intermediate values
-m, --min <NUMBER> The minimum number to compute
-h, --help Print help
(You may not get exactly the same display depending on the version and features of clap; that's okay.)
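One way to obtain such an interface with clap's derive feature might look like this (a sketch; the field names and doc comments are illustrative, not imposed):

```rust
use clap::Parser;

/// Compute Fibonacci suite values
#[derive(Parser)]
struct Args {
    /// The maximal number to print the fibo value of
    value: u32,

    /// Print intermediate values
    #[arg(short, long)]
    verbose: bool,

    /// The minimum number to compute
    #[arg(short, long)]
    min: Option<u32>,
}

fn main() {
    let args = Args::parse();
    println!("up to {}, verbose: {}", args.value, args.verbose);
}
```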
Note that specifying that you are using clap in Cargo.toml automatically and transitively fetches clap's dependencies and compiles them when building the application.
You can check the exact versions used in the Cargo.lock file mentioned earlier, allowing other users to rebuild exactly the same version of the program that you have built yourself.
❎ Use cargo clippy to run Clippy on your code, and apply the suggestions or change your code until Clippy does not complain anymore.
❎ Use cargo fmt to reformat your code according to the common Rust formatting conventions.
(We expect you to execute those steps every time you commit your code to the main branch of your repository.)
A few simple problems
You can create a "problems" Rust project in your repository to try your solutions to those simple problems.
Lifetimes
The trim method on str (and thus on String thanks to Deref) removes the whitespace at the beginning and at the end of a string. Its signature is:
fn trim(&self) -> &str;
which is, thanks to lifetime elision, a shortcut for
fn trim<'a>(&'a self) -> &'a str;
Who is the owner?
The following code fails to compile:
fn ret_string() -> String {
    String::from(" A String object ")
}

fn main() {
    let s = ret_string().trim();
    assert_eq!(s, "A String object");
}
Why? Ask yourself: what is the lifetime of s? Who is the owner of the underlying string with spaces (every object has an owner)?
❎ Fix the code so that it compiles (and the s variable represents the trimmed string). Note that you can reuse the same variable name.
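One possible fix is to keep the String alive in its own variable so that the trimmed &str has an owner to borrow from (a sketch):

```rust
fn ret_string() -> String {
    String::from(" A String object ")
}

fn main() {
    // The String now lives in `s` for the whole function, so the &str produced
    // by trim() can safely borrow from it. The second `let s` shadows the first.
    let s = ret_string();
    let s = s.trim();
    assert_eq!(s, "A String object");
}
```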
Select between alternatives
❎ Add the most appropriate lifetimes to the following function:
fn choose_str(s1: &str, s2: &str, select_s1: bool) -> &str {
if select_s1 { s1 } else { s2 }
}
At call time, s1 and s2 may have different lifetimes, and we don't want any constraint between the lifetimes of those two strings.
Write an OOR (owned or ref) type
For this problem, do not look at the standard Cow type.
We want to create an OOR type which can store either a String or a &str, to avoid copying a string which already exists in the environment.
❎ Write an OOR enum with two alternatives: Owned, which stores a String, and Borrowed, which stores a &str.
It will require using a generic parameter. What does it represent?
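A possible shape (a sketch; here the generic parameter is a lifetime, the one of the borrowed string):

```rust
enum OOR<'a> {
    Owned(String),
    Borrowed(&'a str),
}
```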
❎ Implement the Deref trait for the OOR type so that it dereferences into a &str. What is the lifetime of the resulting &str (note that you have no choice there)? Why is that always appropriate?
❎ Check that you can now call &str methods directly on an arbitrary OOR object by writing some tests.
❎ Implement the DerefMut trait for the OOR type. If you have not stored a String, you will have to mutate your value and store a String before you can hand out a &mut str, because you can't transform your inner &str into a &mut str.
❎ Check that you can run the following test:
// Check Deref for both variants of OOR
let s1 = OOR::Owned(String::from(" Hello, world. "));
assert_eq!(s1.trim(), "Hello, world.");
let mut s2 = OOR::Borrowed(" Hello, world! ");
assert_eq!(s2.trim(), "Hello, world!");
// Check choose
let s = choose_str(&s1, &s2, true);
assert_eq!(s.trim(), "Hello, world.");
let s = choose_str(&s1, &s2, false);
assert_eq!(s.trim(), "Hello, world!");
// Check DerefMut, a borrowed string should become owned
assert!(matches!(s1, OOR::Owned(_)));
assert!(matches!(s2, OOR::Borrowed(_)));
unsafe {
for c in s2.as_bytes_mut() {
if *c == b'!' {
*c = b'?';
}
}
}
assert!(matches!(s2, OOR::Owned(_)));
assert_eq!(s2.trim(), "Hello, world?");
A virtual machine in Rust
The goal of this assignment is to write an interpreter for a virtual machine of our own. You have access to:
Machine architecture
Machine model
The machine model is simple:
- The memory contains both the program and the data.
- The memory contains 4096 bytes and is addressed from address 0 to address 4095.
- 32-bit registers are numbered from r0 to r15.
- Register r0 is the instruction pointer (IP): it contains the address of the next instruction to be executed.
- Reads from memory and writes to memory are 32-bit wide and do not need to be aligned.
- Data stored in memory uses little-endian ordering.
Execution model
A step of execution happens as follows:
- The instruction at IP is decoded. Its length depends on the instruction (in other words, instruction size is variable). Note that each element of the instruction (e.g., reg_a) is encoded on exactly one byte.
- IP is advanced to point after the decoded instruction and its arguments.
- The decoded instruction is executed.
Machine failure
Here are the reasons the machine can fail:
- The memory at IP does not contain a valid instruction.
- The instruction does not totally fit in memory.
- The instruction references an invalid register.
- The instruction references an invalid memory address.
A failure must cause the execution of the current step to return an error: the execution is not allowed to panic. The machine must no longer be used after an error.
Instruction set
| Instruction | Arguments | Effect |
|---|---|---|
| move if | 1 rᵢ rⱼ rₖ | if rₖ ≠ 0 then rᵢ ← rⱼ |
| store | 2 rᵢ rⱼ | mem[rᵢ] ← rⱼ |
| load | 3 rᵢ rⱼ | rᵢ ← mem[rⱼ] |
| loadimm | 4 rᵢ L H | rᵢ ← extend(signed(H L)) |
| sub | 5 rᵢ rⱼ rₖ | rᵢ ← rⱼ - rₖ |
| out | 6 rᵢ | output char(rᵢ) |
| exit | 7 | exit the program |
| out number | 8 rᵢ | output decimal(rᵢ) |
Detailed description
The number of instructions is very limited. We will give at least one example for every instruction. All examples assume that:
- register r1 contains 10
- register r2 contains 25
- register r3 contains 0x1234ABCD
- register r4 contains 0
- register r5 contains 65
All other registers are unused in the examples.
If the example contains 1 1 2 3, it means that the instruction is made of bytes 1, 1, 2 and 3 (4 bytes total), in this order.
move if
1 rᵢ rⱼ rₖ: if register rₖ contains a non-zero value, copy the content of register rⱼ into register rᵢ; otherwise do nothing.
Examples:
- 1 1 2 3: since register r3 contains a non-zero value (0x1234ABCD), register r1 is set to 25 (the value of register r2).
- 1 1 2 4: since register r4 contains a zero value, nothing happens.
store
2 rᵢ rⱼ: store the content of register rⱼ into memory, starting at the address pointed to by register rᵢ, using little-endian representation.
Example:
- 2 2 3: the content of register r3 (0x1234ABCD) will be stored at addresses [25, 26, 27, 28] since register r2 contains 25. 0xCD will be stored at address 25, 0xAB at address 26, 0x34 at address 27, and 0x12 at address 28.
load
3 rᵢ rⱼ: load the 32-bit content from memory at the address pointed to by register rⱼ into register rᵢ, using little-endian representation.
Example:
- 3 1 2: since register r2 contains 25, move the 32-bit value at addresses [25, 26, 27, 28] into register r1. In little-endian format, this means that if address 25 contains 0xCD, address 26 contains 0xAB, address 27 contains 0x34, and address 28 contains 0x12, the value loaded into register r1 will be 0x1234ABCD.
loadimm
4 rᵢ L H: interpret H and L respectively as the high-order and the low-order bytes of a 16-bit signed value, sign-extend it to 32 bits, and store it into register rᵢ.
Examples:
- 4 1 0x11 0x70: store 0x00007011 into register r1
- 4 1 0x11 0xd0: store 0xffffd011 into register r1
Note how sign extension transforms a positive 16-bit value (0x7011 == 28689) into a positive 32-bit value (0x00007011 == 28689) and a negative 16-bit value (0xd011 == -12271) into a negative 32-bit value (0xffffd011 == -12271).
sub
5 rᵢ rⱼ rₖ: store the content of register rⱼ minus the content of register rₖ into register rᵢ. Arithmetic wraps around in case of overflow. For example, 0 - 1 returns 0xffffffff, and 0 - 0xffffffff returns 1.
Examples:
- 5 10 2 1: store 15 into r10 (the value of register r2, 25, minus the value of register r1, 10).
- 5 10 4 1: store -10 (0xfffffff6) into r10 (the value of register r4, 0, minus the value of register r1, 10).
out
6 rᵢ: display, on the standard output, the character whose Unicode value is stored in the 8 low bits of register rᵢ.
Examples:
- 6 5: output "A" since the 8 low bits of register r5 contain 65, which is the Unicode codepoint for "A".
- 6 3: output "Í" since the 8 low bits of register r3 contain 0xCD, which is the Unicode codepoint for "Í".
Note: you have to convert the content into a char and display this char.
exit
7: exit the current program.
Example:
- 7: get out.
out number
8 rᵢ: output, in decimal, the signed number stored in register rᵢ.
Examples:
- 8 5: output "65" since register r5 contains 65.
- 8 3: output "305441741" since register r3 contains 0x1234ABCD.
Note
Note that some common operations are absent from this instruction set. For example, there is no add operation; however, a+b can be computed as a-(0-b). Also, there are no jump or conditional jump operations: those can be replaced by manipulating the value stored in register r0 (IP).
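For instance, assuming r1 and r2 hold the operands and r4 is free as a scratch register, an addition into r3 could be encoded as the following byte sequence (a sketch using the encoding above; register choices are arbitrary):

```rust
// loadimm r4, 0        -> r4 = 0
// sub     r4, r4, r2   -> r4 = 0 - r2 = -r2
// sub     r3, r1, r4   -> r3 = r1 - (-r2) = r1 + r2
const ADD_R3_R1_R2: [u8; 12] = [4, 4, 0, 0, 5, 4, 4, 2, 5, 3, 1, 4];
```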
Your program
Your program will contain both an application and a library:
- The library allows other programs to embed your virtual machine
- The application lets you run programs written for the virtual machine from the command line.
You are given an archive file which contains (in a vm project):
- Cargo.toml: the initial configuration file
- src/main.rs: the main program for the application, which loads a binary file with machine code and executes it
- src/lib.rs: the entry point for the interpreter library, which contains your implementation of the virtual machine
- src/tests/: a directory with many tests, ranging from individual instruction tests to complex tests
- src/examples/: some examples for the virtual machine that you can run when your interpreter is complete
⚠ The project uses Rust edition 2024 (released on Feb. 20, 2025, with Rust 1.85). Make sure your compiler is up-to-date by executing rustup update if needed.
Tests and examples are accompanied by their disassembled counterparts to help you understand what happens (*.bin is the program for the virtual machine, *.dis is the disassembly).
Start by adding the vm Cargo project to your repository and ensure that you can build the program, even though it doesn't do anything useful yet and compiling it will produce many warnings:
$ cargo build
You can see the tests fail (hopefully this is a temporary situation) by running:
$ cargo test
Program structure
At any time, make sure that the program and the tests compile, even if they don't pass successfully yet. In particular, you are not allowed to rename the Machine and Error types, although you will need to modify them to implement this assignment. Similarly, the already documented methods must be kept without modifying their signatures, because they will be used in automated tests.
❎ After creating a new interpreter through interpreter::Machine::new(), the following methods must be implemented:
- step_on(): takes a descriptor implementing Write (for the out and out number instructions), and executes just one instruction
- step(): similar to step_on(), but writes on the standard output
- run_on(): takes a Write-implementing descriptor and runs until the program terminates
- run(): similar to run_on(), but writes on the standard output
- memory() and regs(): return a reference on the current memory and registers content
- set_reg(): sets the value of a register
Do not hesitate to add values to the Error enumeration to ease debugging. Also, you can implement additional functions on Machine if it helps you divide the work.
As far as Machine::new() is concerned, you might be interested in looking at slice::copy_from_slice().
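For instance, loading a program into the machine memory could rely on copy_from_slice() roughly like this (a sketch; the load helper is hypothetical, the real constructor signature is the one in the provided skeleton):

```rust
fn load(memory: &mut [u8; 4096], program: &[u8]) {
    // copy_from_slice() requires both slices to have the same length,
    // hence the sub-slice of the destination.
    memory[..program.len()].copy_from_slice(program);
}
```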
Writing things to the user
For the out and out number opcodes, you will have to write things to a file descriptor (respectively a character and a number). This can be done with the write!() macro, which lets you write into any object whose type implements the Write trait.
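For example, a helper writing a single character to any Write implementor could look like this (a sketch; std::io::Write is one possible choice of trait here):

```rust
use std::io::Write;

fn output_char<W: Write>(out: &mut W, c: char) -> std::io::Result<()> {
    // write!() works with any type implementing Write.
    write!(out, "{c}")
}

fn main() -> std::io::Result<()> {
    let mut buffer: Vec<u8> = Vec::new();
    output_char(&mut buffer, 'A')?;
    assert_eq!(buffer, vec![b'A']);
    Ok(())
}
```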
Suggested work program
Several tests are provided in the tests directory:
- assignment.rs contains all the examples shown in the specification. You should try to concentrate on this one first and implement instructions in the same order as in the specification (and the test) until you pass this test. You can run only this test by using cargo test --test assignment.
- basic_operations.rs checks that all instructions are implemented correctly. For example, it will attempt to read and write past the virtual machine memory, or use an invalid register, and check that you do not allow it.
- complex_execution.rs will load binary images and execute them using your virtual machine.
How to debug more easily
In order to ease debugging, you can use two existing crates, log and pretty_env_logger.
log provides you with a set of macros letting you format debugging information with different severities:
- log::info!(…) is for regular information
- log::debug!(…) is for data you'd like to see when debugging
- log::trace!(…) is for more verbose cases
- …

See the documentation for complete information.
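For example (a sketch, with made-up messages):

```rust
fn main() {
    pretty_env_logger::init();
    log::info!("starting the virtual machine");
    log::debug!("initial registers: {:?}", [0u32; 16]);
    log::trace!("decoding instruction at IP = {}", 0);
}
```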
pretty_env_logger is a back-end for log which gives you nice colored messages and is configured through environment variables.
You can initialize it at the beginning of your main program by calling pretty_env_logger::init(). Then, you can set an environment variable to determine the severities you want to see:
$ RUST_LOG=debug cargo run mytest.bin
You'll then see all messages with severity debug and above. Once again, the documentation is online.
💡 Note on the Result type
You might notice a redefinition of the Result type:
type Result<T, E = Error> = std::result::Result<T, E>;
This defines a local Result type whose second generic parameter has a default value: your own Error type. It means that you can write Result<T> instead of Result<T, Error> for the return type of your functions. Also, a user of your library will be able to reference such a type as interpreter::Result<T> instead of interpreter::Result<T, interpreter::Error>.
This kind of shortcut is very common in Rust. For example, the std::io module defines:
type Result<T, E = std::io::Error> = std::result::Result<T, E>;
so that you can use std::io::Result<usize> for an I/O operation which returns a number of bytes instead of std::io::Result<usize, std::io::Error>.
Similarly, the std::fmt module goes even further and defines:
type Result<T = (), E = std::fmt::Error> = std::result::Result<T, E>;
so that you can use std::fmt::Result (without generic parameters) in a formatting operation instead of std::fmt::Result<(), std::fmt::Error>.
LED matrix lab
The goal of this lab is to duplicate, in Rust, what has been done in C in the 4SE07 bare-board programming lab (in French). We will use higher-level constructs and skip all the parts that are not strictly necessary.
Initial setup
Install the necessary components and tools
We will use the following tools and components; make sure that they are installed by following the provided instructions.
❎ Install the tools described below.
cargo-edit
cargo-edit provides the cargo rm subcommand, which lets you easily remove dependencies from your project if you don't need them anymore:
$ cargo install cargo-edit
cargo-binutils
cargo-binutils provides the cargo size subcommand and requires the llvm-tools-preview component:
$ rustup component add llvm-tools-preview
$ cargo install cargo-binutils
probe-rs
We will use the tools provided by this package later:
$ cargo install probe-rs-tools
💡 You might need to install the libudev-dev package on Debian and Ubuntu systems for probe-rs to work.
Create the project
❎ In your git repository, create a new library project named tp-led-matrix. Use cargo new --help if you are not sure of the arguments to pass to cargo new to create a library project.
At every step, you are expected to check that your project compiles fine and without any warning. It must be kept formatted at all times (using cargo fmt), and you can use cargo clippy to keep it tidy and get advice on what to change.
no_std
Since our program will run in an embedded context, we cannot use the standard library. We must declare in our src/lib.rs file that we do not use the standard library.
❎ Add the no_std inner attribute to your library.
Since this attribute applies to the whole library, it must be written as follows: #![no_std].
Visual data structures
We will start by building some useful types to manipulate pixel information.
❎ Start by creating a public image module in your project. The types in this section will be created in this module.
We will now build two data structures that we will later reexport from the library top-level module. Do not reexport anything yet.
Color
❎ Create an image::Color structure containing three unsigned bytes named after the primary colors used in the LED matrix: r, g and b.
❎ Since copying a Color (3 bytes) is cheap, make it automatically derive Copy (and Clone, which is needed for Copy).
❎ The default initialization of the structure would set all integer fields to 0, which is a perfect default for a color as it represents black. Make Color automatically derive Default.
❎ Implement three public constants Color::RED, Color::GREEN and Color::BLUE initialized to the correct values for those three primary colors.
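A possible sketch of the resulting type (assuming full intensity is 255 for each primary color):

```rust
#[derive(Clone, Copy, Default)]
pub struct Color {
    pub r: u8,
    pub g: u8,
    pub b: u8,
}

impl Color {
    pub const RED: Color = Color { r: 255, g: 0, b: 0 };
    pub const GREEN: Color = Color { r: 0, g: 255, b: 0 };
    pub const BLUE: Color = Color { r: 0, g: 0, b: 255 };
}
```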
⚠️ If you put the code for your image module inside a file named image.rs, do not use pub mod image { … } inside this file. Otherwise, you will end up with an image::image module, which is not what you want. The image.rs file must contain the content of the image module directly.
Gamma correction
The LED matrix requires some gamma correction to represent colors as our eyes perceive them. This gamma correction table works fine with our LED matrix.
❎ Add a gamma module to the project containing the above-mentioned gamma table and a function pub fn gamma_correct(x: u8) -> u8 which returns the corresponding value in the table.
❎ Implement a pub fn gamma_correct(&self) -> Self method on Color which applies the gamma::gamma_correct correction to all components of a color.
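The Color method can simply delegate to the gamma module, for example (a sketch):

```rust
impl Color {
    pub fn gamma_correct(&self) -> Self {
        Color {
            r: crate::gamma::gamma_correct(self.r),
            g: crate::gamma::gamma_correct(self.g),
            b: crate::gamma::gamma_correct(self.b),
        }
    }
}
```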
Color multiplication and division
We would like to be able to take a color and make it more or less vibrant by multiplying or dividing it by a floating point value. Since we do not have access to the standard library, we will implement traits coming directly from core::ops instead of importing them from std::ops.
However, in no_std mode we do not have access to some standard operations on floating point operands, such as f32::round(). We will have to use an external crate such as micromath to get those operations.
❎ Add the micromath crate to your project, and use micromath::F32Ext in your image module to get the common operations back.
❎ Implement the trait core::ops::Mul<f32> on Color and make it return another Color whose individual components are multiplied by the given floating point value. You might want to use a helper function to ensure that each component stays within the range of a u8. Make sure you properly round to the closest value. Also, f32::clamp() might be of some use.
❎ Implement the trait core::ops::Div<f32> on Color and make it return another Color whose individual components are divided by the given floating point value. Note that you can use the multiplication defined above to implement this more concisely.
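A possible sketch of both implementations (it assumes use micromath::F32Ext as _; is in scope so that round() is available in no_std):

```rust
impl core::ops::Mul<f32> for Color {
    type Output = Color;

    fn mul(self, rhs: f32) -> Color {
        // Scale one component, round to the nearest value, and clamp into the u8 range.
        let scale = |c: u8| (c as f32 * rhs).round().clamp(0.0, 255.0) as u8;
        Color { r: scale(self.r), g: scale(self.g), b: scale(self.b) }
    }
}

impl core::ops::Div<f32> for Color {
    type Output = Color;

    fn div(self, rhs: f32) -> Color {
        // Division is just a multiplication by the inverse.
        self * (1.0 / rhs)
    }
}
```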
That's it, our Color type is complete.
🦀 Note on traits, visibility, and namespace pollution
When you use micromath::F32Ext;, you bring the F32Ext trait into the current namespace. This trait declares methods such as round(), and it is implemented on the f32 type. So importing F32Ext into the current namespace makes the round() method available on f32 values.
But note that although you do not use the F32Ext name explicitly in your code, you still "pollute" your namespace with this name. To prevent this, you can import the F32Ext trait but bind it to no name by renaming it to _:
// Import the F32Ext trait without importing the F32Ext name
use micromath::F32Ext as _;
Image
We want to manipulate images as a whole. Images are a collection of 64 Color pixels.
❎ Create a public structure image::Image containing a unique unnamed field consisting of an array of 64 Color. Structures with unnamed fields are declared as follows, and their fields are accessed like tuple fields (.0 to access the first field, .1 to access the second field, …):
struct Image([Color; 64]);
Here, if im is of type Image, im.0 will designate the array contained in the Image object.
❎ Create a public function pub fn new_solid(color: Color) -> Self on Image which returns an image filled with the color given as an argument.
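A possible sketch:

```rust
impl Image {
    pub fn new_solid(color: Color) -> Self {
        // [color; 64] works because Color is Copy.
        Image([color; 64])
    }
}
```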
Default trait
Unfortunately, the trait Default cannot be derived automatically for Image because of a temporary technical limitation of the Rust language: arrays with more than 32 entries cannot have Default derived automatically. However, nothing prevents us from implementing the Default trait manually.
❎ Implement the Default trait for Image by making it return an image filled with the default color.
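A possible sketch, reusing new_solid():

```rust
impl Default for Image {
    fn default() -> Self {
        // The default Color is black, so the default Image is all black.
        Image::new_solid(Color::default())
    }
}
```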
Individual pixel access
We want to be able to access an individual pixel of an image by using an expression such as my_image[(row, column)]. For doing so, we want to implement two traits, Index and IndexMut, which allow indexing into our data structure. Fortunately, Rust lets us use any type as an index, so a (usize, usize) couple looks perfectly appropriate.
❎ Implement the core::ops::Index<(usize, usize)> trait on Image with output type Color.
❎ Implement the core::ops::IndexMut<(usize, usize)> trait on Image. Note that you do not specify the output type, as it is necessarily identical to the one defined in Index: IndexMut can only be implemented on types also implementing Index with the same index type.
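A possible sketch, assuming a row-major ordering of the 8×8 pixels:

```rust
impl core::ops::Index<(usize, usize)> for Image {
    type Output = Color;

    fn index(&self, (row, column): (usize, usize)) -> &Color {
        &self.0[row * 8 + column]
    }
}

impl core::ops::IndexMut<(usize, usize)> for Image {
    fn index_mut(&mut self, (row, column): (usize, usize)) -> &mut Color {
        &mut self.0[row * 8 + column]
    }
}
```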
Row access
Since we will display the image one row at a time on the LED matrix, it might be useful to have a method giving access to the content of one particular row.
❎ Add a pub fn row(&self, row: usize) -> &[Color] method on Image referencing the content of a particular row.
Note how the reference cannot stay valid longer than the image itself; this is why Rust lets us take a reference inside a structure without any risk of the reference becoming invalid after the image is destroyed.
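With the same row-major layout as above, the method could be (a sketch):

```rust
impl Image {
    pub fn row(&self, row: usize) -> &[Color] {
        // Each row is 8 consecutive pixels in the underlying array.
        &self.0[row * 8..(row + 1) * 8]
    }
}
```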
Gradient
For visual testing purposes, we would like to be able to build a gradient from a given color to black. Each pixel should receive the reference color divided by (1 + row * row + col).
❎ Using the image[(row, col)] utilities defined in the previous steps, implement a pub fn gradient(color: Color) -> Self function returning an image containing a gradient.
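A possible sketch, using the indexing and division defined above:

```rust
impl Image {
    pub fn gradient(color: Color) -> Self {
        let mut image = Image::default();
        for row in 0..8 {
            for col in 0..8 {
                // Divide the reference color by the factor given in the statement.
                image[(row, col)] = color / (1 + row * row + col) as f32;
            }
        }
        image
    }
}
```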
Image as an array of bytes
We already know from the 4SE07 lab that we will receive image bytes from the serial port, and that we will build the image byte by byte. It would be much easier if we could also see the image as a slice and access the individual bytes.
Remember that Rust is allowed to reorder, group, or otherwise rearrange the fields of a struct. It means that so far we have no idea how the Color type is organized in memory. Maybe each field is stored on 32 bits instead of 8, or maybe g is stored first, before r and b. We will use a representation clause to force Rust to make each field 8 bits wide, to give the structure a one-byte alignment, and to keep r, g and b in the order we have chosen.
❎ Force Rust to use a C-compatible representation for Color by using the appropriate repr(C) attribute. It will ensure all the properties mentioned above.
Concerning the Image type itself, we do not have much to do. We already know that Rust arrays are guaranteed to be laid out according to the size and alignment requirements of the element type. In our case, it means that the three bytes of the first pixel will be immediately followed by the three bytes of the second pixel, and so on.
However, to guarantee that Rust uses the same representation for Image as the one it uses for the inner array, we need to request that the Image type be transparent, i.e., that it uses the same representation as its unique non-zero-sized field.
❎ Add a repr(transparent) attribute on the Image type to ensure that it keeps the same representation as its unique element.
To see an image as an immutable array of bytes, we will implement the AsRef<[u8; 192]> trait. This way, using my_image.as_ref() will return a reference to an array of 192 (8 rows × 8 columns × 3 color bytes) individual bytes.
❎ Implement AsRef<[u8; 192]> for Image. You will need to use core::mem::transmute(), which is an unsafe function, in order to convert self to the desired return value.
❎ Since we know we will need a mutable reference to the individual bytes, implement AsMut<[u8; 192]> the same way.
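A possible sketch; the transmute is only sound because of the repr(C) and repr(transparent) attributes added above:

```rust
impl AsRef<[u8; 192]> for Image {
    fn as_ref(&self) -> &[u8; 192] {
        // 64 pixels × 3 bytes = 192 contiguous bytes.
        unsafe { core::mem::transmute(self) }
    }
}

impl AsMut<[u8; 192]> for Image {
    fn as_mut(&mut self) -> &mut [u8; 192] {
        unsafe { core::mem::transmute(self) }
    }
}
```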
Congratulations, you now have a rock-solid Image type which will make the rest of the job easier.
Reexporting types
Users of the library are likely to want to use our Color and Image types. We can make it easier for them by reexporting them in lib.rs.
❎ Using the appropriate pub use declarations, reexport Color and Image from the top of the library.
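Assuming the types live in the image module, this boils down to (a sketch):

```rust
pub use image::{Color, Image};
```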
From now on, users will be able to do:
use tp_led_matrix::{Color, Image};
Embedded mode
We will now configure our (yet non-existent) Rust program so that it generates code for our IoT-node board. This requires a few setup steps; afterwards, everything will be automated by Cargo.
We will need to:
- configure the toolchain so that we can build a first empty program;
- upload it to the board using the Segger JLink tools;
- make the program display something;
- optimize our development environment;
- configure the peripherals we want to use;
- light up the LED matrix.
Configuring the toolchain
We will ensure that we are able to compile and link an empty program, and upload it to the board.
Choosing the target for the library
Our board uses an STM32L475VGT6 microcontroller which contains a Cortex-M4F core. We need to download the corresponding target so that Rust can cross-compile for it.
❎ Add the thumbv7em-none-eabihf target using rustup:
$ rustup target add thumbv7em-none-eabihf
We will build our code for this target by default. This can be configured in .cargo/config.toml, as it is specific to our build process.
❎ Create .cargo/config.toml and configure the default target to use when compiling:
[build]
target = "thumbv7em-none-eabihf" # Cortex-M4F/M7F (with FPU)
❎ Check that Rust can cross-compile your current code by using cargo build. You will notice a target/thumbv7em-none-eabihf directory which contains the build artifacts.
Choosing the runtime support package to build an executable
We are now able to build a library, but we do not have an executable program yet. In order to get one, we will need to provide:
- a linker script
- the linker arguments
- a main program
- an implementation of the panic handler so that Rust knows what to do if there is a panic
Linker script and linker arguments
We could write a whole linker script as was done in the 4SE07 lab, but this is not necessary. The cortex-m-rt crate provides a linker script as well as an #[entry] attribute, and builds a complete program for Cortex-M based microcontrollers, including a vector table.
The linker script is named link.x and will be placed in the linker search path. However, this script includes a memory.x fragment which describes the memory regions, and we will have to provide this linker script fragment and place it at the right place.
❎ Add a dependency on the cortex-m-rt crate.
❎ Write a memory.x file, next to Cargo.toml, containing:
MEMORY
{
FLASH : ORIGIN = 0x08000000, LENGTH = 1M
RAM : ORIGIN = 0x20000000, LENGTH = 96K
}
We must tell the linker to use the link.x script provided by the cortex-m-rt crate when compiling for arm-none-… targets.
❎ Add the following conditional section to the .cargo/config.toml file:
[target.'cfg(all(target_arch = "arm", target_os = "none"))']
rustflags = ["-C", "link-arg=-Tlink.x"]
Peripheral access crate
The only thing still missing for the cortex-m-rt linker script is a vector table. Since this depends on our device, we must add one for the STM32L475VGT6 microcontroller by importing a peripheral access crate (PAC).
❎ Add embassy-stm32 as a dependency with the stm32l475vg feature.
❎ The embassy-stm32 crate requires a way to define a critical section. You can define one by adding the cortex-m crate as a dependency with the critical-section-single-core feature.
Main program
We want to have an executable program named tp-led-matrix, located in src/main.rs. While a crate may contain only one library, it may contain several executables. The TOML syntax to describe an element of a list is to use double brackets.
❎ Give the name of the executable to Cargo in your Cargo.toml:
[[bin]]
name = "tp-led-matrix"
Now, we can write our main program. We will run in no_std mode, and with no_main. We will use cortex-m-rt's entry attribute to define what our entry point should be. This entry point must never return, hence the use of the ! (never) type.
❎ Create src/main.rs with this code in it:
#![no_std]
#![no_main]
use cortex_m_rt::entry;
use embassy_stm32 as _; // Just to link it in the executable (it provides the vector table)
#[panic_handler]
fn panic_handler(_panic_info: &core::panic::PanicInfo) -> ! {
loop {}
}
#[entry]
fn main() -> ! {
panic!("The program stopped");
}
Note that we need to define a panic handler, because otherwise Rust doesn't know what to do in case of a panic. You can either define one yourself as we did here, or import a crate such as panic-halt which also does an infinite loop.
Program building
❎ Build the program in both debug and release modes using cargo build and cargo build --release.
❎ Look at the executable size with the arm-none-eabi-size program. The executables for the debug and release modes are stored respectively in target/thumbv7em-none-eabihf/debug/tp-led-matrix and target/thumbv7em-none-eabihf/release/tp-led-matrix.
❎ Stop hurting yourself trying to type long path names, and use cargo size and cargo size --release instead. This tool builds the right version (debug or release mode) and calls size on it.
Documentation
The PAC and HAL crates have been written with many microcontrollers in mind. For example, the embassy-stm32 crate supports several microcontrollers from the STM32L4 family, including our STM32L475VGT6. Feature flags cause macros to generate the various methods and modules appropriate for the targeted microcontroller.
However, this makes the online documentation unsuitable for proper use: only the generic methods and modules will be included, and it will be hard to find help on a functionality which is specific to our microcontroller. Fortunately, cargo doc can generate documentation according to our crate dependencies and their feature flags.
❎ Generate the documentation using cargo doc.
❎ Open the file target/thumbv7em-none-eabihf/doc/tp_led_matrix/index.html in your browser, for example by using firefox target/thumbv7em-none-eabihf/doc/tp_led_matrix/index.html, and search for a method (for example gamma_correct).
You should regenerate the documentation every time you update your dependencies and when you significantly update your code, by rerunning cargo doc. It will only regenerate the documentation for things that have changed.
Uploading the program to the board using Segger JLink tools
Even though this program does nothing, we want to upload it to the board. For this, we will use the Segger JLink tool suite, as explained in the 4SE07 lab.
❎ Ensure that you have either arm-none-eabi-gdb or gdb-multiarch installed on your system. If this is not the case, install one of them before proceeding.
❎ In a dedicated terminal, launch JLinkGDBServer -device STM32L475VG.
We need to configure gdb so that it connects to the JLinkGDBServer program and uploads the program.
❎ Create a jlink.gdb gdb script containing the commands to connect to JLinkGDBServer, then upload and run the debugged program:
target extended-remote :2331
load
mon reset
c
We would like cargo run to automatically launch gdb with the script we just wrote. Fortunately, the runner can be configured as well!
❎ In .cargo/config.toml, add the following to the conditional target section you created earlier:
runner = "arm-none-eabi-gdb -q -x jlink.gdb"
⚠ On some systems, one must use gdb-multiarch instead of arm-none-eabi-gdb; check which executable is available.
❎ Upload and run your program using cargo run while your board is connected. You should be able to interrupt gdb using ctrl-c and see that you are indeed looping in the panic handler function.
Congratulations: you are running your first embedded Rust program on a real board.
Displaying something
Now that we have a running program, we would like one which displays something. The Segger JLink tool suite can use the RTT protocol to exchange data between the host and the target. This protocol uses in-memory buffers that are scanned by the JLink debugging probe and transferred between the host and the target.
Using RTT on your board
Fortunately, several crates exist that implement the RTT protocol in Rust on the target side:
- rtt-target implements the RTT protocol and defines macros such as rprintln!() to send formatted data to the host
- panic-rtt-target implements a panic handler using RTT, so that you can see the full panic message on the host
❎ Add those two crates as dependencies.
❎ In src/main.rs, remove your panic handler and import panic_rtt_target so that its panic handler is linked in. Since we won't be using any symbol explicitly, we can import it silently:
use panic_rtt_target as _;
❎ Import the rtt_init_print and rprintln macros from rtt_target. Modify the main program so that it uses them:
#[entry]
fn main() -> ! {
rtt_init_print!();
rprintln!("Hello, world!");
panic!("The program stopped");
}
❎ In a terminal, launch JLinkRTTClient (or JLinkRTTClientExe, depending on your setup). This is a simple client that connects to a running JLinkGDBServer.
❎ Compile and run the program on the board using cargo run --release. You should be able to see the output from the program.
Debugging will be much easier this way!
Optimizing the setup
We will take some steps to ease our development process and save some time later.
Reduce binary size
Using cargo size and cargo size --release, we can see that the binary produced in release mode is much smaller than the one produced in debug mode. Note that size doesn't count the debug information, since it is never stored in the target memory.
We would like to use --release to keep an optimized binary, but we would also like to keep the debug information in case we need to use gdb, or to have a better backtrace in case of panic. Fortunately, we can do that with cargo by requiring that the release profile:
- keeps debug symbols;
- uses link-time optimization (LTO) to optimize the produced binary even further;
- generates objects one by one to get even better optimization.
❎ To do so, add the following section to your program's Cargo.toml:
[profile.release]
debug = true # symbols are nice and they don't increase the size on the target
lto = true # better optimizations
codegen-units = 1 # better optimizations
From now on, we will always use --release when building binaries; they will be fully optimized and will contain debugging symbols.
Make it simpler to run the program
Even though we have configured cargo run so that it runs gdb automatically and uploads our program, we still have to start JLinkGDBServer and JLinkRTTClient. Fortunately, the probe-rs and knurling-rs projects make it easy to develop embedded Rust programs:
- probe-rs lets you manipulate the probes connected to your computer, such as the probe located on your IoT-node board.
- defmt (for deferred formatting) is a logging library and a set of tools that let you log events from your embedded programs and transmit them in an efficient binary format. The formatting for developer consumption is done by tools running on the host rather than on the target.
- probe-rs run is able to get defmt traces over an RTT channel and decode and format them.
Many other programs, such as cargo flash or cargo embed, exist, but we will not need them here.
❎ Stop the Segger JLink tools. Using the probe-rs executable, check if the probe on your board is properly detected.
❎ Use probe-rs run with the appropriate parameters instead of gdb to upload your program onto the board and run it. Replace your runner in .cargo/config.toml with:
runner = "probe-rs run --chip stm32l475vgtx"
❎ Using cargo run --release, watch your program being compiled, uploaded and run on your board. You should see the messages sent over RTT on your screen.
⚠ You can use ctrl-c to quit probe-rs run.
Use defmt for logging
Instead of using RTT directly, we will use defmt to get a better and more efficient logging system.
❎ Remove the rtt-target and panic-rtt-target crates from your dependencies in Cargo.toml.
❎ Add the defmt and defmt-rtt dependencies to your Cargo.toml.
❎ Add the panic-probe dependency to your Cargo.toml with the print-defmt feature.
defmt-rtt is the RTT transport library for defmt. panic-probe with the print-defmt feature will provide probe-rs run with the panic message to display using defmt, and will tell it to stop in case of a panic.
❎ defmt uses a special section in your executable. In .cargo/config.toml, add the following to your existing rustflags in order to include the provided linker file fragment: "-C", "link-arg=-Tdefmt.x".
.
❎ Modify your code in src/main.rs to include the following changes:
- Write use panic_probe as _; instead of use panic_rtt_target as _; to use the panic-probe crate.
- Write use defmt_rtt as _; to link with the defmt-rtt library.
- Remove the use of rtt_target items.
- Remove rtt_init_print!(), and replace rprintln!() with defmt::info!() to print a message.
❎ Run your program using cargo run --release. Notice that you see the panic information, but you do not see the "Hello, world!" message.
By default, defmt only prints errors. The various log levels are trace, debug, info, warn, and error. If you want to see the messages of level info and above (info, warn, and error), you must set the DEFMT_LOG environment variable when building and when running the program. Only the appropriate information will be included at build time and displayed at run time.
❎ Build and run your program using DEFMT_LOG=info cargo run --release. You will see the "Hello, world!" message. Note that you could also use DEFMT_LOG=trace or DEFMT_LOG=debug if you want more verbose messages.
❎ Set the default log level by telling cargo to set the DEFMT_LOG environment variable when running cargo commands. You can do this by adding an [env] section to .cargo/config.toml:
[env]
DEFMT_LOG = "info"
⚠️ Changing the [env] section of .cargo/config.toml will not recompile the program with the new options. Make sure that you use cargo clean when you change the DEFMT_LOG variable.
🎉 Your environment is now fully set up in an efficient way. If needed, you can revert to using gdb and the Segger JLink tools, but that should be reserved for extreme cases.
Configuring the hardware
So far, we have not configured our hardware at all. The board is running with its initial setup on the 16MHz HSI clock, and only the debug interface is active. All other peripherals are still in their initial state.
Adding a PAC / a HAL
Several crates may be of interest to configure and access the hardware of the STM32L475VGT6 more easily:
- cortex-m provides access to functionalities common to all Cortex-M based microcontrollers.
- stm32-metapac is a peripheral access crate (PAC) for all STM32 microcontrollers. It provides very low-level access to the various peripherals of each microcontroller. It will be automatically depended on by the HAL (see below); you do not need to add it as an explicit dependency.
- embassy-stm32 is a hardware abstraction layer (HAL) using both crates mentioned above to provide higher-level functionalities; this is the one we will be using.
❎ Add the following imports to your main.rs since we will use them:
use embassy_stm32::rcc::*;
use embassy_stm32::Config;
❎ Use the following code as your main() function to configure the system clocks to run at 80MHz (the highest usable frequency) from the 16MHz HSI clock, by multiplying it by 10 and then dividing the result by 2 (using the PLL):
#[entry]
fn main() -> ! {
defmt::info!("defmt correctly initialized");
// Setup the clocks at 80MHz using HSI (by default since HSE/MSI
// are not configured): HSI(16MHz)×10/2=80MHz. The flash wait
// states will be configured accordingly.
let mut config = Config::default();
config.rcc.hsi = true;
config.rcc.pll = Some(Pll {
source: PllSource::HSI,
prediv: PllPreDiv::DIV1,
mul: PllMul::MUL10,
divp: None,
divq: None,
divr: Some(PllRDiv::DIV2), // 16 * 10 / 2 = 80MHz
});
config.rcc.sys = Sysclk::PLL1_R;
embassy_stm32::init(config);
panic!("Everything configured");
}
Here you can see how the HAL offers high-level features. The whole clock tree can be configured through the Config structure. This will automatically set up the right registers, configure the flash memory wait states, etc.
Congratulations, you have displayed the same thing as before, but with a system running at 80MHz. 👏
GPIO and the LED matrix
We will now configure and program our LED matrix. It uses 13 GPIOs on three different ports.
HAL and peripherals
The embassy_stm32::init() function that you have used earlier returns a value of type Peripherals. This is a large structure which contains every peripheral available on the microcontroller.
❎ Store the peripherals in a variable named p:
let p = embassy_stm32::init(config);
In this variable, you will find for example a field named PB0 (p.PB0). This field has type embassy_stm32::peripherals::PB0. Each pin has its own type, which means that you will not use one instead of another by mistake.
HAL and GPIO configuration
A pin is configured through types found in the embassy_stm32::gpio module. For example, you can configure pin PB0 as an output with an initial low state and a very high switching speed by doing:
// pin will be of type Output<'_>
let mut pin = Output::new(p.PB0, Level::Low, Speed::VeryHigh);
// Set output to high
pin.set_high();
// Set output to low
pin.set_low();
If pin is dropped, it will automatically be deconfigured and set back as an input.
🦀 The lifetime parameter 'a in Output<'a> represents the lifetime of the pin that we have configured as an output. In our case, the lifetime is 'static as we work directly with the pins themselves. But sometimes, you get the pin from a structure which has a limited lifetime, and this is reflected in 'a.
Matrix module
❎ Create a public matrix module.
❎ In the matrix module, import embassy_stm32::gpio::* as well as tp_led_matrix::{Color, Image} (from your library), and define the Matrix structure. It is given in full here to spare you a tedious manual copy, along with all the functions you will have to implement on a Matrix:
pub struct Matrix<'a> {
sb: Output<'a>,
lat: Output<'a>,
rst: Output<'a>,
sck: Output<'a>,
sda: Output<'a>,
rows: [Output<'a>; 8],
}
impl Matrix<'_> {
/// Create a new matrix from the control registers and the individual
/// unconfigured pins. SB and LAT will be set high by default, while
/// other pins will be set low. After 100ms, RST will be set high, and
/// the bank 0 will be initialized by calling `init_bank0()` on the
/// newly constructed structure.
/// The pins will be set to very high speed mode.
#[allow(clippy::too_many_arguments)] // Necessary to avoid a clippy warning
pub fn new(
pa2: PA2,
pa3: PA3,
pa4: PA4,
pa5: PA5,
pa6: PA6,
pa7: PA7,
pa15: PA15, // <Alternate<PushPull, 0>>,
pb0: PB0,
pb1: PB1,
pb2: PB2,
pc3: PC3,
pc4: PC4,
pc5: PC5,
) -> Self {
// Configure the pins, with the correct speed and their initial state
todo!()
}
/// Make a brief high pulse of the SCK pin
fn pulse_sck(&mut self) {
todo!()
}
/// Make a brief low pulse of the LAT pin
fn pulse_lat(&mut self) {
todo!()
}
/// Send a byte on SDA starting with the MSB and pulse SCK high after each bit
fn send_byte(&mut self, pixel: u8) {
todo!()
}
/// Send a full row of bytes in BGR order and pulse LAT low. Gamma correction
/// must be applied to every pixel before sending them. The previous row must
/// be deactivated and the new one activated.
pub fn send_row(&mut self, row: usize, pixels: &[Color]) {
todo!()
}
/// Initialize bank0 by temporarily setting SB to low and sending 144 one bits,
/// pulsing SCK high after each bit and pulsing LAT low at the end. SB is then
/// restored to high.
fn init_bank0(&mut self) {
todo!()
}
/// Display a full image, row by row, as fast as possible.
pub fn display_image(&mut self, image: &Image) {
// Do not forget that image.row(n) gives access to the content of row n,
// and that self.send_row() uses the same format.
todo!()
}
}
❎ Implement all those functions.
You can refer to 4SE07 notes for GPIO connections (in French) and the operation of the LED Matrix controller (in French).
Note that you need to maintain the reset signal low for 100ms. How can you do that? Keep reading.
Implementing a delay
Since you do not use an operating system (yet!), you need to do some looping to implement a delay. Fortunately, the embassy-time crate can be used for this. By cooperating with the embassy-stm32 crate, it will be able to provide you with some timing functionalities:
❎ Add the embassy-time crate as a dependency with the tick-hz-32_768 feature: this will configure a timer at a 32768Hz frequency, which will give you sub-millisecond precision. You will also have to enable the generic-queue-8 feature, since we don't use the full Embassy executor at this stage. Note that embassy-time knows nothing about the microcontroller you use; it needs a timer to run on.
❎ Add the time-driver-any feature to the embassy-stm32 dependency. This will tell the HAL to put a timer at the disposal of the embassy-time crate.
The Rust embedded working group has defined common traits to work on embedded systems. One of those traits is DelayNs, in the embedded-hal crate, which is implemented by the embassy_time::Delay singleton of embassy-time. You can use it as shown below:
❎ Add the embedded-hal dependency.
❎ Import the DelayNs trait in your matrix.rs, as well as the Delay singleton from embassy-time:
use embedded_hal::delay::DelayNs as _;
use embassy_time::Delay;
You can then use the following statement to wait for 100ms:
Delay.delay_ms(100);
🦀 Note on singletons
Delay is a singleton: this is a type which has only one value. Here, Delay is declared as:
struct Delay;
which means that the type Delay has only one value, which occupies 0 bytes in memory, also called Delay. Here, the Delay type is used to implement the DelayNs trait from the embedded-hal crate:
impl embedded_hal::delay::DelayNs for Delay {
    fn delay_ms(&mut self, ms: u32) { … }
    …
}
You might have noticed that self is not used in delay_ms, but the implementation has to conform to the way the trait has been defined. When you later write Delay.delay_ms(100), you create a new instance (which contains nothing) of the type Delay, on which you mutably call delay_ms(100).
Main program
❎ In your main program, build an image made of a gradient of blue and display it in a loop on the matrix. Since it is necessary for the display to go fast, do not forget to run your program in release mode, as we have been doing for a while now. Don't forget that Image values have a .row() method which can be handy here.
Are you seeing a nice gradient? If you do, congratulations, you have programmed your first peripheral in bare board mode with the help of a HAL. 👏
(If not, add traces using defmt.)
Real-time mode
Now that we are able to display something on the LED matrix as fast as possible, we would like to do it in a more controlled way. More precisely, we want to:
- start using the Embassy executor;
- display each line at a controlled pace to get a uniform 80 frames per second display;
- change the image with a timer;
- receive a new image from the serial port;
- use triple buffering to ensure smooth transitions.
Embassy executor
The Embassy framework and particularly its executor will help us decouple tasks and resources.
Add the Embassy executor as a dependency
❎ Add the embassy-executor
dependency to your Cargo.toml
file with the following features:
- arch-cortex-m in order to select the asynchronous executor adapted to our architecture
- executor-thread to enable the default executor ("thread mode" is the "normal" processor mode, as opposed to "interrupt mode")
- defmt to enable debugging messages
Since we now use the full executor, the generic-queue-8
feature can be removed from embassy-time
. The timers will use the features provided by the Embassy executor.
Embassy main program
❎ Add the embassy_executor::main
attribute to your main function (instead of the previous entry
attribute) and make it async
, as seen in class and in Embassy documentation. Check that you can still execute your code as you did before. The main()
function must take a Spawner
parameter, which will be used to create tasks.
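As a minimal sketch, the new entry point might look like this (it assumes the default HAL configuration; keep whatever clock configuration you already pass to embassy_stm32::init()):
use embassy_executor::Spawner;

#[embassy_executor::main]
async fn main(spawner: Spawner) {
    // Initialize the HAL and get the peripherals.
    let p = embassy_stm32::init(embassy_stm32::Config::default());
    // … configure the matrix from `p` and spawn tasks with `spawner` …
}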
❎ Modify the Matrix::new() method so that it becomes asynchronous. Replace the use of the blocking delay by a call to one of the Timer asynchronous functions.
For example you could use Timer::after() and give it an appropriate Duration, or use Timer::after_millis() directly.
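For instance, either of these calls (a sketch) waits asynchronously for 100ms:
use embassy_time::{Duration, Timer};

// Wait for 100ms without blocking the executor.
Timer::after(Duration::from_millis(100)).await;
// Equivalent shortcut:
Timer::after_millis(100).await;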
Check that your program works correctly, including after unplugging and replugging your board in order to deinitialize the led matrix.
Controlled line change
In this part, we will start using a periodic ticker to run some tasks at designated times. For example, we want to display frames at a pace of 80 FPS (frames per second), since refresh frequencies below 70 Hz are unpleasant for the eyes. Since each line of the matrix should get the same display time, we will call a display task 80×8=640 times per second. This display task will display the next line.
Blinking led
In order to check that you do not block the system, you want to create a new asynchronous task which will make the green led blink.
❎ Comment out your matrix display loop. You will reenable it later.
❎ Create a new task blinker
as an asynchronous function with attribute embassy_executor::task
. This function:
- receives the green led port (PB14) as an argument
- initializes the port as an output
- loops infinitely while displaying this pattern:
- three quick green flashes
- a longer pause
Don't forget that you can use asynchronous functions from Timer
as you did just before.
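As a sketch (the exact timings, and whether the led is lit when the pin is high, are assumptions to adapt), the task could look like:
use embassy_stm32::gpio::{Level, Output, Speed};
use embassy_stm32::peripherals::PB14;
use embassy_time::Timer;

#[embassy_executor::task]
async fn blinker(pb14: PB14) {
    // Assumes the green led is lit when the pin is driven high.
    let mut green = Output::new(pb14, Level::Low, Speed::Low);
    loop {
        for _ in 0..3 {
            green.set_high();
            Timer::after_millis(100).await;
            green.set_low();
            Timer::after_millis(100).await;
        }
        // Longer pause between the groups of three flashes.
        Timer::after_millis(700).await;
    }
}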
❎ Using the Spawner
object passed to your main program, spawn the blinker
task.
❎ Check that the green led displays the expected pattern repeatedly.
❎ Reenable the matrix display loop (after you have spawned the new task).
You should no longer see your green led blink: your matrix display loop never returns and never suspends itself as an asynchronous task would do while waiting for the time to switch to the next line. We will take care of that.
Controlled pace
We want to make an asynchronous task whose job is to take care of displaying the lines of the led matrix at the right pace in order to get a 80Hz smooth display. For this we will need to build the following elements:
- An asynchronous task that will be spawned from main().
- A Matrix instance to give to this task – we already have it!
- A Ticker object to maintain a steady 80Hz rate.
- A way to be able to modify the Image displayed on the matrix from other tasks, such as main(). We will need to use a Mutex from the crate embassy-sync to protect the Image being accessed from several places.
Let's build this incrementally.
Asynchronous display task
❎ Make a new display asynchronous task taking a Matrix instance as an argument, and copy the current display loop inside. Put an infinite loop around, as we do not want to leave the display task, ever! Add what is needed to make it work (such as a static Image). Spawn the display task from main().
Note that you have to supply a lifetime, as your Matrix type takes one. Fortunately, 'static will work, as this is the lifetime of the ports you configured from your Peripherals object.
Check that your program still works. Still, no green led blinking yet. Both the blinker
and display
asynchronous tasks run on the same executor, but the display
task never relinquishes control to the executor.
Ticking
❎ In your display task, create a Ticker object which will tick every time it should display a new line. 8 lines at 80 Hz: that gives 640 ticks per second. Don't hesitate to use the convenience methods such as Duration::from_hz().
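The ticker creation might look like this sketch:
use embassy_time::{Duration, Ticker};

// 8 rows × 80 frames per second = 640 ticks per second.
let mut ticker = Ticker::every(Duration::from_hz(640));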
You now want Matrix::display_image()
to use this ticker.
❎ Add a ticker
parameter to display_image()
. You just want to use it, not take ownership of it, so you need a reference. Since you note that the ticker's next()
method requires a &mut self
, you need to receive the ticker as a mutable reference as well.
❎ Make display_image() an asynchronous function, since it needs to wait for the ticker to tick.
❎ In display_image(), wait until the ticker ticks before displaying a row, so that rows are evenly spaced every 1/640th of a second.
❎ In display(), pass a mutable reference to the ticker to display_image().
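Put together, the signature and pacing could look like this sketch (the row transfer itself is still your send_row(), and row() is the method from your own Image type):
use embassy_time::Ticker;

pub async fn display_image(&mut self, image: &Image, ticker: &mut Ticker) {
    for row in 0..8 {
        // Wait for the next 1/640th of a second slot before sending the row.
        ticker.next().await;
        self.send_row(row, image.row(row));
    }
}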
If everything goes well, you should see both the image on your led matrix and the green led pattern. Neat, eh?
Image change
Right now, the display task does more than displaying something, as it takes care of the Image itself. It should only access it when needed, but creating and modifying the image should not be its responsibility. Let's fix that.
Sharing an Image between tasks
We will create a shared Image
, protected by a mutex. However, you have to understand how Embassy's mutexes work first.
Embassy asynchronous mutexes
Embassy's mutexes cannot use spin locks, as spin locks loop forever until they get the lock. If Embassy did this, it would block the current asynchronous task, and thus the whole executor.
Embassy's mutexes are asynchronous-friendly, and will yield when they cannot lock the resource immediately. However, to implement it, Embassy still needs a real mutex (which Embassy calls a "raw mutex", or "blocking mutex") for a very short critical section.
Since all our tasks are running on the same executor, they will never try to lock the raw mutex at the same time. It means that we can safely use the ThreadModeRawMutex
as raw mutex.
Creating the shared image object
So we want to create a global (static) Image object protected by a Mutex using internally a ThreadModeRawMutex.
❎ Import embassy_sync::mutex::Mutex
and embassy_sync::blocking_mutex::raw::ThreadModeRawMutex
.
❎ Declare a new global (static
) IMAGE
object of type Mutex<ThreadModeRawMutex, Image>
and initialize it… but with what?
Creating the initial image
Initialization of static variables is done before any code starts to execute. The compiler must know what data to put in the global IMAGE object.
We could try to use:
static IMAGE: Mutex<ThreadModeRawMutex, Image> = Mutex::new(Image::new_solid(Color::GREEN));
but the compiler will complain:
error[E0015]: cannot call non-const fn `tp_led_matrix::Image::new_solid` in statics
|
| static IMAGE: Mutex<ThreadModeRawMutex, Image> = Mutex::new(Image::new_solid(Color::GREEN));
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Indeed, it cannot execute the call to Image::new_solid() before any code starts to execute. However, there is an easy solution here! 💡
The code of Image::new_solid()
is likely simple (if it is not, fix it):
impl Image {
    pub fn new_solid(color: Color) -> Self {
        Image([color; 64])
    }
}
Indeed, this is so simple that this could be done at compilation time if the function were a const
one. const
functions, when given constant parameters, can be replaced by their result at compilation time.
By adding the const
keyword:
impl Image {
    pub const fn new_solid(color: Color) -> Self {
        Image([color; 64])
    }
}
the compiler will now be able to create the data structure for the mutex containing the image with the green constant at compilation time, and place it in the .data section.
Putting it together
❎ Add the const keyword to the Image::new_solid() function and initialize the IMAGE object. You may want to add a new Color constant, such as BLACK, although a visible color may be more useful at the beginning to check that something is displayed.
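With new_solid() now const, the declaration can be evaluated at compile time; for example (a sketch, assuming you added a BLACK constant to Color):
use embassy_sync::blocking_mutex::raw::ThreadModeRawMutex;
use embassy_sync::mutex::Mutex;
use tp_led_matrix::{Color, Image};

static IMAGE: Mutex<ThreadModeRawMutex, Image> = Mutex::new(Image::new_solid(Color::BLACK));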
❎ Modify the display
task so that, before being displayed, each image is copied locally in order not to keep the mutex locked for a long time.
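One way to do it (a sketch, assuming your Image type implements Clone):
// Hold the mutex only for the duration of the copy, not of the display.
let image = {
    let guard = IMAGE.lock().await;
    (*guard).clone()
};
matrix.display_image(&image, &mut ticker).await;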
Changing images dynamically
❎ Modify the main task so that the IMAGE object is replaced, every second or so, by another one.
Don't make things complicated. You should notice that your display changes every second, while being pleasant to look at. The green led should blink its pattern at the same time.
This is starting to look nice.
Serial port
As was done in the 4SE07 lab, we want to be able to receive image data from the serial port. We will configure the serial port, then write a decoding task to handle received bytes.
Fortunately, this will be much simpler to do using Rust and Embassy.
The procedure
Of course, we will create a serial_receiver asynchronous task. This task will:
- receive the peripherals needed to configure the serial port
- receive the image bytes
- update the shared image by copying the received bytes
- loop back to receiving the image bytes
Receiving the image efficiently
How can we receive the image most efficiently? How will we handle incomplete images, or extra bytes sent after the image?
The first byte sent is a marker (0xff); we must wait for it. Then we should receive 192 bytes, none of which should be a 0xff. We want to receive all bytes in one DMA (direct memory access) transaction. But what will happen if an image is incomplete?
In this case, another image will follow, starting with a 0xff. In our buffer, we will have:
<------------------------ 192 --------------------->
| o | … | o | o | o | o | 0xff | n | n | n | … | n |
<--------- P --------->        <------- N --------->
where o belongs to the original image, and n to the new image (N bytes received). In this case, we should rotate the data P+1 places to the left (or N places to the right, which is equivalent) so that the new image data n is put at the beginning of the buffer, in order to have
<------------------------ 192 --------------------->
| n | n | n | … | n | o | … | o | o | o | o | 0xff |
<------- N --------><--------- P --------->
We just need to receive the 192-N bytes starting after the N bytes, and check again that there is no 0xff in the buffer. If there is none, we have a full image; otherwise we rotate again, etc.
Note that the initial situation, after receiving the 0xff marker, is similar to having N equal to 0, so there is no need to special-case it.
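Expressed in code, the rotation step could look like this sketch (it assumes uart is an embassy-stm32 Uart configured with an RX DMA channel, so that read() fills the whole slice, and that the 0xff marker has already been consumed; updating the shared image is handled later):
let mut buffer = [0u8; 192];
let mut n = 0; // number of bytes of the current image already in the buffer
loop {
    // Receive the 192-N missing bytes starting at offset N.
    uart.read(&mut buffer[n..]).await.unwrap();
    match buffer.iter().rposition(|&b| b == 0xff) {
        // No marker inside: the buffer contains a full image.
        None => break,
        // A marker at index pos: the 191-pos bytes after it belong to the next
        // image. Rotate them to the front and receive the rest.
        Some(pos) => {
            buffer.rotate_left(pos + 1);
            n = 191 - pos;
        }
    }
}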
The task
❎ Create the serial_receiver
task. This task receives several peripherals: the USART1
peripheral, the serial port pins, and the DMA channel to use for the reception.
By looking at the figure 29 on page 339 of the STM32L4x5 reference manual, you will see that the DMA channel for transmission (TX) of USART1
is DMA1_CH4
, and the DMA channel for reception (RX) is DMA1_CH5
.
❎ Create the Uart
device. Also, don't forget to configure the baudrate to 38400.
Note that Uart::new() expects an _irq parameter. This is a convention for Embassy to ensure at compile time that you have properly declared that the corresponding IRQ is forwarded to the HAL using the bind_interrupts!() macro.
bind_interrupts!(struct Irqs {
    USART1 => usart::InterruptHandler<USART1>;
});
Irqs is the singleton that needs to be passed as the _irq parameter of Uart::new().
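A sketch of the device creation (check the exact argument order of Uart::new() against the embassy-stm32 documentation for your version; usart1, rx_pin, tx_pin, dma1_ch4 and dma1_ch5 stand for the peripherals and pins you passed to the task):
use embassy_stm32::usart::{self, Config, Uart};

let mut config = Config::default();
config.baudrate = 38400;
// Assumed order: peripheral, RX pin, TX pin, bound IRQs, TX DMA, RX DMA, config.
let mut uart = Uart::new(usart1, rx_pin, tx_pin, Irqs, dma1_ch4, dma1_ch5, config).unwrap();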
The logic
❎ Implement the reception logic, and update the shared image when the bytes for a full image have been received.
Some tips:
- Use the algorithm shown in "Receiving the image efficiently" above:
  1. Create a buffer to hold 192 bytes
  2. Wait for the 0xff marker — you have then received N=0 image bytes at this stage
  3. Receive the missing 192-N bytes starting at offset N of the buffer
  4. If, looking from the end, you find a 0xff in the buffer at position K:
     - Shift the buffer right by K positions
     - Set N to K and go back to step 3
     Otherwise, you have a full image: you can update the shared image and go back to step 2.
- To update the shared image from the received bytes, you can extract it from the static mutex-protected IMAGE object, then request the &mut [u8] view of the image with .as_mut(), since you have implemented AsMut<[u8; 192]> on Image. You can then use an assignment to update the image content from the buffer you have received.
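For this last tip, the update can be as short as the following sketch, which holds the lock only for the duration of the copy (buffer is the [u8; 192] array holding the complete image you just received):
let mut guard = IMAGE.lock().await;
guard.as_mut().copy_from_slice(&buffer);
drop(guard); // release the mutex immediately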
❎ Start the serial_receiver
task from main()
. Check that you can display data received from the serial port.
Congratulations, your project rocks!
Triple buffering
Our current handling of the image received on the serial port is not very satisfying. As soon as we have received a full image, we update the shared image: it means that the next rows to be displayed will come from the newer image while some rows on the LED matrix may have come from the older image.
⚠ You do not have to implement double-buffering. You have to understand how it works, but you only need to implement triple-buffering.
What is double-buffering?
In older computers, drawing something was performed directly in the screen buffer (also called the video RAM) as memory was tight. It meant that some artifacts could easily be perceived unless extreme caution was observed. For example, if an image was displayed by a beam going from the top to the bottom of the screen, drawing a shape starting from the bottom of the screen would make the bottom half of the shape appear before the top half does. On the other hand, drawing from the top to the bottom at the same pace as the refreshing beam would display consistent pictures.
As memory became more affordable, people started to draw the next image to display into a back buffer. This process lets software draw things in an order which is not correlated with the beam displaying the image (for example objects far away then nearer objects). Once the new image is complete, it can be transferred into the front buffer (the video RAM) while ensuring that the transfer does not cross the beam, which requires synchronization with the hardware. This way, only full images are displayed in a consistent way.
On some hardware, both buffers fit in video RAM. In this case, switching buffers is done by modifying a hardware register at the appropriate time.
Double-buffering in our project
We already implement part of the double-buffering method in our code: we prepare the next image in a separate buffer while the current one is being displayed in a loop. We could modify our code (⚠ again, you do not need to implement double-buffering, this is only an example, you'll implement triple-buffering) so that the image switching takes place at the appropriate time:
- Make the new image a shared resource next_image rather than a local resource.
- Add a shared boolean switch_requested to the Shared state, and set it in receive_byte when the new image is complete.
- Have the display task check the switch_requested boolean after displaying the last row of the current image, and swap the image and next_image if this is the case and reset switch_requested.
By locking next_image
and switch_requested
for the shortest possible time, the receive_byte
task would prevent the display
task from running for very short periods. However, we could still run into an issue in the following scenario:
- The last byte of the next image is received just as the current image starts displaying.
- We set switch_requested to request the image switch, but this will happen only after the whole current image has been displayed (roughly 1/80 second later, or about 12 ms).
- The speed of the serial port is 38400 bits per second, and a byte requires 10 symbols (start, 8 bits, stop).
- It means that while the current image is being displayed, about 48 bytes of the next-next image can be received.
Where can we store those bytes? If we store them in next_image, we will alter a buffer which has been fully drawn but not displayed yet, so we cannot do this. We obviously cannot store them in image either. There is nothing we can do there.
Triple buffering
We need a third buffer: one buffer is the one currently being displayed, one buffer is the next fully completed image ready to be displayed, and one buffer is the work area where we build the currently incomplete image.
In order to avoid copying whole images around, we would like to work with buffer references and switch those references. Should we use dynamic memory allocation? ☠ Certainly not.
The heapless crate
The heapless crate contains several data structures that can be used in environments where dynamic memory allocation is not available or not desirable:
- heapless::Vec<T> has an interface quite similar to std::vec::Vec<T>, except that those vectors have a fixed capacity, which means that the push operation returns a Result indicating if the operation succeeded or failed (in which case it returns the element we tried to push).
- Other structures such as BinaryHeap, IndexMap, IndexSet, String, etc. act closely like the standard library ones.
- heapless::pool is a module for defining lock-free memory pools which allocate and reclaim fixed size objects: this is the one we are interested in.
Using a pool
By using a static pool of Image
types named POOL
, we will be able to manipulate values of type Box<POOL>
: this type represents a reference to an image from the pool. Box<POOL>
implements Deref<Target = Image>
as well as DerefMut
, so we will be able to use such a type instead of a reference to an Image
. Also, we can easily swap two Box<POOL>
objects instead of exchanging whole image contents.
A pool is declared globally by using the heapless::box_pool!()
macro as described in the heapless::pool
documentation. The BoxBlock<Image>
represents the space occupied by an image and will be managed by the pool. Then the .alloc()
method can be used to retrieve some space to be used through a Box<POOL>
smart pointer. Dropping such a Box<POOL>
will return the space to the pool.
box_pool!(POOL: Image);
…
// Code to put in the main function:
// Statically reserve space for three `Image` objects, and let them
// be managed by the pool `POOL`.
unsafe {
    const BLOCK: BoxBlock<Image> = BoxBlock::new();
    static mut MEMORY: [BoxBlock<Image>; 3] = [BLOCK; 3];
    // By default, taking mutable references to static data is forbidden.
    // We want to allow it here.
    #[allow(static_mut_refs)]
    for block in &mut MEMORY {
        POOL.manage(block);
    }
}
- This pool can hand out Box<POOL> through POOL.alloc(model), which returns a Result<Box<POOL>, Image> initialized from model:
  - Either the pool could return an object (Ok(…)).
  - Or the pool had no free object, in which case the model is returned with the error: Err(model).
- When it is no longer used, a Box<POOL> can be returned to the pool just by dropping it.
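For example, drawing an image from the pool could look like this sketch (panicking if the pool is exhausted, which should not happen with three blocks):
let image = match POOL.alloc(Image::default()) {
    Ok(image) => image, // a Box<POOL> dereferencing to an Image
    Err(_model) => panic!("image pool exhausted"),
};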
We will build a pool containing the space for three images:
- When we receive a 0xff on the serial port to indicate a new image, we will draw an image from the pool and start filling its data until we have all the bytes.
- When an image is complete, the serial receiver will hand it to the display task.
- The display task starts by waiting for an image coming from the serial receiver and starts displaying it repeatedly.
- If a new image arrives from the serial receiver after the last line of the current image is displayed, the display task replaces the current image by the new one. This drops the image that was just displayed, and it is then automatically returned to the pool.
We see why, in the worst case, three images might coexist at the same time:
- The display task may be displaying image 1.
- The serial receiver has finished receiving image 2 and has stored it so that the display task can pick it up when it is done displaying image 1.
- The serial receiver has started the reception of image 3.
❎ Declare a pool named POOL
handing out Image
objects using the box_pool!()
macro.
❎ In the main()
function, before starting the display
or serial_receiver
task, reserve memory for 3 Image
(using the unsafe
block shown above) and hand those three areas to the pool to be managed.
Using Embassy's Signal
To pass an image from the serial receiver to the display task, we can use the Signal
data structure from the embassy_sync
crate. The Signal
structure is interesting:
- It acts like a queue with at most one item.
- Reading from the queue waits asynchronously until an item is available and returns it.
- Writing to the queue overwrites (and drops) the current item if there is one.
This is exactly the data structure we need to pass information from the serial receiver to the display task. We will make a global NEXT_IMAGE
static variable which will be a Signal
to exchange Box<POOL>
objects (each Box<POOL>
contains an Image
) between the serial_receiver
and the display
tasks.
A Signal
needs to use a raw mutex internally. Here, a ThreadModeRawMutex
similar to the one we used before can be used.
❎ Declare a NEXT_IMAGE
static object as described above.
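A sketch of the declaration (the Box type comes from heapless's boxed pool module; check the exact path for your heapless version):
use embassy_sync::blocking_mutex::raw::ThreadModeRawMutex;
use embassy_sync::signal::Signal;
use heapless::pool::boxed::Box;

static NEXT_IMAGE: Signal<ThreadModeRawMutex, Box<POOL>> = Signal::new();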
Displaying the image
You want to modify the display task so that:
- It waits until an image is available from NEXT_IMAGE and stores it into the local image variable.
- Then, in an infinite loop:
  - It displays the image it has received. image is of type Box<POOL>, but since Box<POOL> implements Deref<Target = Image>, &image can be used in a context where an &Image would be required.
  - If there is a new image available from NEXT_IMAGE, then image is replaced by it. This will drop the older Box<POOL> object, which will be made available to the pool again automatically.
NEXT_IMAGE.wait() returns a Future which will eventually return the next image available in NEXT_IMAGE:
- Awaiting this future using .await will block until an image is available. This might be handy to get the initial image.
- If you import futures::FutureExt into your scope, then you get additional methods on Future implementations. One of them is .now_or_never(), which returns an Option: either None if the Future does not resolve immediately (without waiting), or Some(…) if the result is available immediately. You could use this to check if a new image is available from NEXT_IMAGE, and if it is, replace the current image.
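Put together, the core of the new display task could look like this sketch (it uses the futures crate added in the next step):
use futures::FutureExt as _;

// Wait for the very first image before entering the display loop.
let mut image = NEXT_IMAGE.wait().await;
loop {
    matrix.display_image(&image, &mut ticker).await;
    // Between two frames, check (without waiting) whether a new image arrived.
    if let Some(new_image) = NEXT_IMAGE.wait().now_or_never() {
        // The previous Box<POOL> is dropped here and returns to the pool.
        image = new_image;
    }
}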
❎ Add the futures crate as a dependency in your Cargo.toml. By default, the futures crate requires std; you have to specify default-features = false when importing it, or add it using cargo add futures --no-default-features.
❎ Rewrite the display task to do what is described above.
You now want to check that it works by using an initial image before modifying the serial receiver. To do so, you will build an initial image and put it inside NEXT_IMAGE
so that it gets displayed.
❎ At the end of the main()
function, get an image from the pool, containing a red gradient, by using the POOL.alloc()
method.
❎ Send this image containing a gradient to the NEXT_IMAGE
queue by using the signal
method of the queue.
You should see the gradient on the screen.
❎ Now, check that new images are correctly displayed:
- Surround the code above with an infinite loop.
- Inside the loop, add an asynchronous delay of 1 second after sending the image to NEXT_IMAGE.
- Still inside the loop, repeat those three steps (get an image from the pool, send it to the display task through NEXT_IMAGE, and wait for one second) in another color.
If you see two images alternating every second, you have won: your display task is working, with proper synchronization. Time to modify the serial receiver.
Receiving new images
Only small modifications are needed to the serial receiver:
- When you receive the first 0xff indicating a new image, get an image from the pool (you can initialize it from the default image, Image::default()). You may panic if you don't get one, as we have shown that three image buffers should be enough for the program to work.
- Receive bytes directly in the image buffer, which you can access with image.as_mut() (remember, you implemented the AsMut trait on Image).
- When the image is complete, signal its existence to NEXT_IMAGE.
❎ Implement the steps above.
❎ Remove the static IMAGE
object which is not used anymore.
❎ Remove the image switching in main(), as we don't want to interfere with displaying the images received from the serial port. You may keep one initial image though, to display something before you receive the first image through the serial port.
❎ Check that you can display images coming from the serial port. Congratulations, you are now using triple buffering without copying large quantities of data around.
Bonus level
You are not required to do this part, and you might get the maximum grade for this project even without doing this part, provided everything else is perfect. However, tasks in this part may earn you additional points if you have not reached the maximum grade yet. And also they are fun.
The bonus level contains three parts:
- You can use a dedicated executor to ensure priority treatment for the display task and prevent glitches even when the system is busy.
- Your LED matrix really deserves a screen saver.
- What if this screensaver could display text?
Dedicated executor
Until now, we used only one executor in thread mode (the regular mode in which the processor runs, as opposed to interrupt mode). It means that Embassy's executor will execute one asynchronous task until it yields, then the other, then the other, and so on. If for any reason one task requires a bit more time than expected, you might delay other tasks such as the display task. In this case, you might notice a short glitch on the display.
To prevent this, we will use a dedicated interrupt executor to run our display
task. In this scenario, when it is time to display a new line on the display, an interrupt will be raised and the executor will resume the display
task while still in interrupt mode, interrupting the rest of the program.
You will have to choose an unused hardware interrupt, and:
- configure it to the priority you want to use, with regard to other interrupts in the system
- start the executor, telling it which interrupt its tasks should raise by software (pend the interrupt, as in make it pending) when they have progress to signal
- call the executor's on_interrupt() method in the ISR, so that the executor knows that it must poll its tasks
Those are three easy tasks. We will choose interrupt UART4
, and set it to priority level Priority::P6
:
❎ Add the executor-interrupt
feature to the embassy-executor
dependency in Cargo.toml
.
❎ Create a static DISPLAY_EXECUTOR
global variable, with type InterruptExecutor
.
❎ Choose an unused interrupt (pick UART4
), configure it with an arbitrary priority (use Priority::P6
). Start the DISPLAY_EXECUTOR
and associate it with this interrupt. Use the returned spawner to spawn the display
task.
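A sketch of those steps (exact module paths may differ slightly between embassy versions, and the arguments of display() are whatever your task takes):
use embassy_executor::InterruptExecutor;
use embassy_stm32::interrupt;
use embassy_stm32::interrupt::{InterruptExt as _, Priority};

static DISPLAY_EXECUTOR: InterruptExecutor = InterruptExecutor::new();

// In main(): give UART4 an arbitrary priority, start the executor on it,
// and use the returned spawner for the display task.
interrupt::UART4.set_priority(Priority::P6);
let display_spawner = DISPLAY_EXECUTOR.start(interrupt::UART4);
display_spawner.spawn(display(matrix)).unwrap();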
❎ Write an ISR for this interrupt, and redirect the event to the executor:
#[interrupt]
unsafe fn UART4() {
    DISPLAY_EXECUTOR.on_interrupt();
}
Note that ISRs are unsafe functions, as doing the wrong thing in an interrupt routine might lock up the system.
At this stage, you might notice that your code does not compile: the NEXT_IMAGE
data structure uses a ThreadModeRawMutex
as its internal mutex. Such a mutex, as its name indicates, can only be used to synchronize tasks running in thread mode, not in interrupt mode.
❎ Use a CriticalSectionRawMutex
as an internal mutex for NEXT_IMAGE
, because such a mutex is usable to synchronize code running in interrupt mode with code running in thread mode.
Your display should now be as beautiful as ever.
Screen saver
What should your led matrix do when you do not send anything on the serial port? Wouldn't it be great to have a screen saver, which automatically runs when nothing is sent, and does not get in the way otherwise?
You will have to create a new screensaver
task, which will trigger an image change when nothing is being received on the serial port for a while.
Recording image changes
You don't want the screen saver to run if data is being received. Let's record new image arrivals.
❎ Declare a static NEW_IMAGE_RECEIVED Signal object containing an Instant.
❎ When a new image is received in serial_receiver
, signal the current date to the NEW_IMAGE_RECEIVED
queue.
Implementing the screensaver task
❎ Implement a screensaver
task and start it on the thread-mode (regular) executor.
In this task, you may for example, in an infinite loop:
- Read the date of the last image received without waiting.
- If any image has been received, wait until one second after this date and continue the loop. This way, you effectively do not display anything until the serial port has been idle for one second.
- Display your screensaver image (get one from the pool and set it to NEXT_IMAGE).
- Wait for one second.
You can even be more creative and use alternating images every second.
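One possible shape for this loop (a sketch; it assumes the NEW_IMAGE_RECEIVED signal declared above and reuses now_or_never() as before):
use embassy_time::{Duration, Timer};
use futures::FutureExt as _;

loop {
    // Has an image been received recently on the serial port?
    if let Some(last) = NEW_IMAGE_RECEIVED.wait().now_or_never() {
        // Stay quiet until one second after the last reception.
        Timer::at(last + Duration::from_secs(1)).await;
        continue;
    }
    // … draw an image from the pool and signal it to NEXT_IMAGE …
    Timer::after_millis(1000).await;
}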
Note that both the serial port code and the screensaver run in thread mode, so NEW_IMAGE_RECEIVED only requires a ThreadModeRawMutex for its internal synchronization; check that you haven't used a CriticalSectionRawMutex where it is not needed.
Drawing things
The screensaver feature was nice, but the screensaver could be more entertaining. What if it could display scrolling text, such as "This Rust 4SE02 project will get me a good grade"?
Fortunately, one crate can help you do that: embedded-graphics. Provided you do the proper interfacing with your hardware, this crate will let you draw all kinds of shapes, and even display text.
Interfacing with your hardware: the embedded module
You have already decoupled the logical representation of your LED
matrix (the Image
type) from the physical one (the Matrix
type). This will make your job easier, as you will only have to
interface the Image
type with the embedded-graphics
crate: once
you have an Image, you can display it on your hardware by putting it
into NEXT_IMAGE.
❎ Create an embedded
module in your library. This module will
contain anything needed to interface the drawing primitives of the
embedded-graphics
crate with your Image
type.
First you'll have to choose a pixel representation that
embedded-graphics
can use and which is appropriate for your
display. Since you can already display RGB colors with 8 bits of data for
each component, the
Rgb888
color type seems appropriate.
❎ Implement From<Rgb888>
for your Color
type. That will be useful
when drawing on your Image
, to build a proper Color
value.
Now, you need to implement the
DrawTarget
trait for your Image
type. This trait is the one which does the real
drawing. You will only implement the minimal functionality and use the
provided defaults for the rest.
❎ Implement DrawTarget
for Image
:
- The Color type will be Rgb888.
- You can use Infallible as your Error type, because drawing into an Image never fails.
- When you implement draw_iter(), make sure that you only set the pixels whose coordinates belong to the image (x and y both in 0..8). This method can be called with a larger image, for example a large text, and you will only display a portion of it.
- If you need to convert a Rgb888 into a Color, do not forget that you can use .into() because you implemented From<Rgb888> for Color.
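A skeleton under those constraints could look like this sketch (the actual pixel write uses your own Image indexing and is left as a comment):
use core::convert::Infallible;
use embedded_graphics::pixelcolor::Rgb888;
use embedded_graphics::prelude::*;

impl OriginDimensions for Image {
    fn size(&self) -> Size {
        Size::new(8, 8)
    }
}

impl DrawTarget for Image {
    type Color = Rgb888;
    type Error = Infallible;

    fn draw_iter<I>(&mut self, pixels: I) -> Result<(), Self::Error>
    where
        I: IntoIterator<Item = Pixel<Self::Color>>,
    {
        for Pixel(point, color) in pixels {
            // Ignore pixels outside the 8×8 matrix.
            if (0..8).contains(&point.x) && (0..8).contains(&point.y) {
                // … store color.into() at (point.y, point.x) in self …
            }
        }
        Ok(())
    }
}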
Upgrading the screensaver
You can now use the drawing primitives of embedded-graphics
to
create images in your screensaver instead of using gradients.
❎ Modify your screensaver so that it creates interesting images using the drawing primitives.
For example, you could add another local variable in addition to the color index, such as a shape index, and draw a square, a triangle, a circle, and a solid color. Ideally, those color and shape indices would use cycle sizes which are coprime, to maximize the displayed combinations.
When this works, commit and push your code.
Drawing text
The next step is to display scrolling text from the screensaver. Yes, that means forgetting about the shapes that you just designed, they were used to familiarize yourself with the library.
A
Text
object represents some text that can later be drawn into anything
implementing DrawTarget
(such as an Image
). It uses a character
style, which can be built using
MonoTextStyle::new()
from a font and a color. And the
ibm437
crate provides a great
IBM437_8X8_REGULAR
font which will be perfect for your LED matrix.
The idea is to wait for 60ms (instead of one second) after you have
displayed an image to make some text scroll to the next position if no
new image has been received. To make the text scroll to the left, you
will position it at a negative x offset: since you display pixels
offset: since you display pixels
whose x
is in 0..8
, decreasing the x
position of the start of
the text will make it go left.
❎ Modify the screensaver
task so that it gets called every
60ms. You need a precise timing if you want the scrolling to be
pleasant.
❎ Modify the screensaver
task such that, when it wants to display something:
- A Text object is built with a text such as "Hello 4SE02", and placed at an x position whose value is kept in an offset local variable. You can use the color you want, or make the color cycle.
- The text is drawn into an image coming from the pool, and displayed through NEXT_IMAGE.
- Decrease the offset local variable, except if the end of the text has reached the 0 x coordinate, in which case offset must be reset to display the text again (find the appropriate value so that it is nice for the eyes). Note: the Text object has methods to check its bounding box (the smallest rectangle in which it fits).
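The first two steps could look like this sketch (the exact y coordinate depends on the font baseline, offset is the local variable described above, and the path of the font constant should be checked against the ibm437 crate documentation):
use embedded_graphics::mono_font::MonoTextStyle;
use embedded_graphics::pixelcolor::Rgb888;
use embedded_graphics::prelude::*;
use embedded_graphics::text::Text;
use ibm437::IBM437_8X8_REGULAR;

let style = MonoTextStyle::new(&IBM437_8X8_REGULAR, Rgb888::new(0, 255, 0));
let text = Text::new("Hello 4SE02", Point::new(offset, 7), style);
// `image` is a Box<POOL>; draw into the Image it points to, then display it.
text.draw(&mut *image).unwrap();
NEXT_IMAGE.signal(image);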
❎ Modify the screensaver
task so that if a new image has been
received on the serial port, the offset of the text is reset so that
next time the screensaver displays something it will start from the
beginning of the text.
Note: you might have to adapt your DrawTarget trait implementation for Image, for example if the text appears upside down.
Make it even prettier if you wish, commit, push.
🦀 Congratulations, you have reached the end of this lab! 🦀