How to write your own derive-macro in Rust

Derive macros are one of the three procedural macro types in Rust. Visit the Rust Reference to learn more about their differences and use cases.
Once implemented, they add extra functionality to your code without you having to write it, simply through adding an "annotation", if you will, to the source code. The compiler then creates the code (at compile time) in the way the author (we) have described it.

One derive macro rustaceans might be familiar with is #[derive(Debug)], which provides a default implementation of the Debug trait. When added to a struct or enum, a text representation gets implemented for that data structure, which can be used in debug logging, for example.
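As a quick refresher, here is the derived Debug implementation in action (the User struct and its values are made up for illustration):

```rust
// #[derive(Debug)] generates the Debug trait implementation for us.
#[derive(Debug)]
struct User {
    id: u32,
    name: String,
}

fn main() {
    let u = User { id: 7, name: "Ada".to_string() };
    // {:?} uses the derived implementation
    println!("{:?}", u);
    // prints: User { id: 7, name: "Ada" }
}
```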

And we are now writing our own derive macro!

The ToUrl macro

⚠️ This is not a macro for production! ⚠️ It is meant for training purposes only.
The entire code is available on GitHub.

ToUrl is supposed to work as follows:

When #[derive(ToUrl)] is added to a struct (only structs are supported), a
to_url(&self, base_url: String) -> String method gets implemented for that struct.
When called with a URL string (the base_url), this method will first add a ? (the start of a URL query section) to the given URL. It then iterates over all fields and adds them in the form field=value to the current URL string.
If the struct has more than one field, the pairs are concatenated with an & (ampersand).
Vecs are treated slightly specially (only one-dimensional Vecs are supported):
their values are all joined with a url-encoded space character (%20).
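Before looking at the macro itself, here is a hand-written sketch of the method that #[derive(ToUrl)] is supposed to generate; the Token struct, its fields, and the example URL are invented for this illustration:

```rust
// A hand-written version of what the macro should generate for this struct.
struct Token {
    kind: String,
    scope: Vec<String>,
}

impl Token {
    pub fn to_url(&self, base_url: String) -> String {
        format!("{}?", base_url)                           // start of the query section
            + &format!("kind={}&", self.kind)              // field=value, & between pairs
            + &format!("scope={}", self.scope.join("%20")) // Vec values joined with %20
    }
}

fn main() {
    let t = Token {
        kind: "code".to_string(),
        scope: vec!["openid".to_string(), "email".to_string()],
    };
    let url = t.to_url("https://example.com".to_string());
    assert_eq!(url, "https://example.com?kind=code&scope=openid%20email");
    println!("{}", url);
}
```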

Usage Example

// This example would be in a different crate than *to-url* (because proc_macros must be defined in their own crates)

// Annotate a struct with ToUrl
#[derive(ToUrl)]
pub struct Request {
    response_type: String,
    client_id: String,
    scope: Vec<String>,
    redirect_uri: String,
    state: String,
    nonce: String,
}
// Create an instance of that struct
let dummy_req = Request {
    response_type: "code".to_string(),
    client_id: "1234andSomeText".to_string(),
    scope: vec!["openid".to_string()],
    redirect_uri: "".to_string(),
    state: "security_token0815".to_string(),
    nonce: "4242-3531".to_string(),
};

// Calling the to_url-method on the instance in the following way...
let url = dummy_req.to_url("my-dummy-url".to_string());

// ...would create the following string:
// "my-dummy-url?response_type=code&client_id=1234andSomeText&scope=openid&redirect_uri=&state=security_token0815&nonce=4242-3531"

Implementation of the ToUrl derive macro

Create the library with cargo

// from the command line run
cargo new to-url --lib && cd to-url

configuration (Cargo.toml)

The dependencies and their versions are as follows:

[lib]
proc-macro = true

[dependencies]
syn = { version = "1.0", features = ["full", "extra-traits"] }
quote = "1.0"
proc-macro2 = "1.0"

Note that under [lib] we have specified that we want to create a procedural macro crate. As of this writing, those have to live in their own crates.

The function definition

Proc macros are functions that have to be annotated with #[proc_macro_derive(NameOfYourMacro)]. The function receives a TokenStream, which is an abstract token representation of the source code the macro has been added to. It's not quite the code that we wrote anymore, but it's not actual machine instructions yet either.
A procedural macro can modify that TokenStream in order to create new or different code, again in the form of a TokenStream.
In our case we call the macro ToUrl.

#[proc_macro_derive(ToUrl)]
pub fn to_url(tokens: TokenStream) -> TokenStream {
    /* implementation */
}

Parsing (syn) & Generation (quote)

Inside the function we will make use of two amazing crates: syn for parsing and quote for code generation. With parse_macro_input! the TokenStream gets transformed into a DeriveInput, which is helpful for walking the tree structure and provides additional helpful methods. To retrieve the name of the struct that we want to derive our macro for, we save the value of the ident field of the input.

pub fn to_url(tokens: TokenStream) -> TokenStream {
    let input = parse_macro_input!(tokens as DeriveInput);
    let name = input.ident;

    /* rest of the implementation */ 

To get to the fields I have chosen to match on the data field of the input. It looks a bit funky, but if you follow the types through the documentation, starting with the DeriveInput, you might notice that I walk the structure, only mentioning the parts that I am interested in. The part that we don’t care about is ignored with the double dots (..). The fields are of type Punctuated<Field, Comma> and live on an instance of type FieldsNamed.

    /* implementation before is skipped */

    let fields_punct = match input.data {
        Data::Struct(DataStruct {
            fields: Fields::Named(fields),
            ..
        }) => fields.named,
        _ => panic!("Only structs with named fields can be annotated with ToUrl"),
    };

    /* rest of the implementation */

Here comes the rest of the implementation inside our derive-macro function: the part where the code is generated. The modified code gets passed back to the compiler as a TokenStream.
The part that concatenates the fields and their values will be looked at further down. For now, it is abstracted away as a call to query_from_field_and_value(..). Just know that it gives us an Iterator over TokenStreams.

  /* implementation before is skipped */

    let query_parts = query_from_field_and_value(&fields_punct);

    let modified = quote! {
        impl #name {
            pub fn to_url(&self, base_url: String) -> String {
                let url = format!("{}?", base_url) #(#query_parts)*;
                url
            }
        }
    };

    TokenStream::from(modified)

quote! is a macro that lets us construct full TokenStreams, which the compiler understands, from the text we write inside of it. One must of course adhere to the rules of quote!, but it is much more convenient than constructing parsable trees by hand.
In the impl line, we use the name variable we defined at the very top of our function. Bindings that are in scope outside of the quote!-macro-call are referenced by prefixing them with the pound symbol (#). At compile time, #name contains the name of the struct our macro gets derived for; in the example that is Request. So the impl line actually gets expanded to:

impl Request {

Note: This version of the macro does not take into account that the struct might have lifetime annotations like <'a>. If we wanted to support structs both with and without lifetimes, they would have to be taken into account in the code generation.

Before we move on, let's dissect the line
let url = format!("{}?", base_url) #(#query_parts)*;
The left-hand side merely defines a binding with the name url, to which we assign the right-hand side. The first section of the expression on the right, format!("{}?", base_url), will evaluate to a String: base_url extended with the question mark, the beginning of the query section. In the example, this part would turn into "my-dummy-url?".
Now comes the fun part: #(#query_parts)*, which combines the field=value pairs into one String.
In quote! we have the possibility to repeat patterns by using the library's interpolation syntax. We are using the form #(#var)*, where #var will turn into some expression and the pattern is repeated until there are no more elements in #var. In our case (remember from above: query_from_field_and_value(..) returns an Iterator), the repetitions end when the Iterator is exhausted.
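Conceptually, after expansion the repetition is nothing more than chained String additions. This plain-Rust sketch (no macros involved, the part strings are stand-ins for what each #query_parts element expands to) mirrors what the expanded code does:

```rust
fn main() {
    // Stand-ins for the expanded field=value fragments.
    let query_parts = ["response_type=code&", "client_id=1234&", "nonce=4242-3531"];

    // format!("{}?", base_url) #(#query_parts)* behaves like this fold:
    let base_url = "my-dummy-url".to_string();
    let url = query_parts
        .iter()
        .fold(format!("{}?", base_url), |acc, part| acc + part);

    assert_eq!(url, "my-dummy-url?response_type=code&client_id=1234&nonce=4242-3531");
    println!("{}", url);
}
```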

Constructing the Url

How does the URL-String come together? Have a look at the beginning of the returned quote!-call in the else block of query_from_field_and_value(..).

fn query_from_field_and_value(
    fields: &Punctuated<Field, Comma>,
) -> impl Iterator<Item = proc_macro2::TokenStream> + '_ {
    fields.iter().enumerate().map(move |(i, field)| {
        let field_ident = field.ident.as_ref().unwrap();
        let delim = if i < fields.len() - 1 { "&" } else { "" }; // add an & between two field=value pairs
        if is_vec(field) {
            join_values(field_ident)
        } else {
            quote! { + &format!("{}={}{}", stringify!(#field_ident), self.#field_ident, #delim) }
        }
    })
}

The implicitly returned quote! section in the else case starts with the + operator. After that follows a call to the format! macro, which will evaluate to nothing more than a String. This means that on every repetition of the pattern #(#query_parts)*, we get + "field=value", which gets concatenated to the existing String by using +. No magic. Just adding Strings together.

// format!("{}?", base_url) #(#query_parts)* expands to something similar to this:
"my-dummy-url?" + "response_type=code&" + "client_id=1234andSomeText&" + ... + "nonce=4242-3531"

Three more things are worth mentioning.

  1. We use the stringify! macro. It turns the given token(s) into literal text, which means our field identifiers get turned into their String representation. If we used #field_ident.to_string() instead, the compiler would try to find a variable with that name (e.g. client_id) to call to_string() on its value. But it would of course complain that there is no such variable.
  2. We use self inside a function without having it as an argument! 😯 Pretty cool, right? At least it took me quite a while to notice that in some other examples. It sure gives a lot more flexibility to separate some logic into its own concise section.
  3. This example uses the format!-macro a lot and is tailored around String concatenation. This means field-types have to implement std::fmt::Display, the trait that provides the implementation of to_string() for a type. This makes ToUrl quite limited. Even worse, there is no error-handling around that fact. But like I said in the beginning: this is a training implementation.
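Points 1 and 3 can be illustrated with plain standard-library Rust; the identifier client_id here is just an example token:

```rust
fn main() {
    // stringify! turns tokens into a string literal at compile time.
    // No variable named client_id has to exist for this to compile.
    let field_name = stringify!(client_id);
    assert_eq!(field_name, "client_id");

    // to_string() comes via the Display implementation:
    // format!("{}", v) and v.to_string() produce the same text.
    let v = 42;
    assert_eq!(format!("{}", v), v.to_string());

    println!("{}={}", field_name, v);
    // prints: client_id=42
}
```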

And that's pretty much it.
There are still the is_vec() and join_values() helper functions. They don't do anything new regarding the derive-macro topic and they are pretty specific to this concrete example (which has questionable applicability in its current form).
Nevertheless, here is what those two look like (I have implemented them outside of the macro function but in the same file). Like mentioned above, the entire ToUrl macro code is also available on GitHub.

fn is_vec(field: &Field) -> bool {
    match &field.ty {
        Type::Path(TypePath {
            path: Path { segments, .. },
            ..
        }) => {
            // segments is of type syn::punctuated::Punctuated<PathSegment, _>
            if let Some(path_seg) = segments.first() {
                let ident = &path_seg.ident;
                return ident == "Vec";
            }
            false
        }
        _ => false,
    }
}

fn join_values(field_ident: &Ident) -> proc_macro2::TokenStream {
    let len = quote! { self.#field_ident.len() };
    let vec_values = quote! {
        self.#field_ident.iter().enumerate().fold(String::new(), |mut vals, (i, v)| {
            if i < #len - 1 {
                // every value but the last gets a url-encoded space appended
                vals.push_str(&format!("{}%20", v));
            }
            if i == #len - 1 {
                vals.push_str(&v.to_string());
            }
            vals
        })
    };
    quote! { + &format!("{}={}", stringify!(#field_ident), #vec_values) }
}


Proc macros in Rust, especially derive macros in my opinion, are a fantastic concept that lets developers extend the language in very versatile ways. They are heavily used in many crates and make our lives as library users much easier.
They are also a rather advanced topic and not necessarily geared towards Rust beginners. Nevertheless, there are extremely helpful crates like syn and quote with great documentation.
A precious resource is also the proc-macro-workshop by David Tolnay on GitHub.
And my favorite, the one that actually took away my fear of proc macros:
Procedural Macros in Rust Part 1 & Part 2 by Jon Gjengset, where he works through some of the exercises of the aforementioned proc-macro-workshop.

That’s it! I hope this was helpful or interesting to you. Thank you for reading.




Get HTTPS (SSL) for your WordPress Site

By the end of this tutorial your WordPress site/blog will have an SSL certificate, so that it can be served over the secure HTTPS protocol.

For which stack does this tutorial apply?

  • WordPress site/blog
  • Hosted on AWS (Amazon Web Services) EC2 instance
  • Bitnami stack (Ubuntu Linux + Bitnami specific software)

NOTE: The Bitnami WordPress setup (which I also use myself) is an "out-of-the-box" package, available on the AWS marketplace.
What this preconfigured stack doesn't provide from the get-go is an SSL certificate. So your site will be HTTP-only in the beginning and cannot be served via HTTPS. For some blogs that's probably fine. But users will certainly feel much more comfortable when their browser isn't shouting at them for accessing an insecure site.

This tutorial will mostly follow the Bitnami-tutorial, extended by some explanations or “gotchas” I encountered myself.

Let's get ourselves a certificate!

We will use SSH (secure shell) to connect to our server (EC2 instance). If you don't know anything about how to connect to a remote server, you might want to look at the slightly less alien/scary way (through an FTP client) first. I already wrote a tutorial on "Access Your WordPress EC2 Instance via SFTP", if you're interested.

First we need to connect to our server via SSH (if you're on a Mac, SSH is already available in your terminal/command line). You will need a key file to successfully connect to your server. It ends in .pem (e.g. myKeyPair.pem). You should have gotten that key file when you first launched your WordPress instance on AWS.

NOTE: If you don't know where you put your KeyPair file, that's very unfortunate. You might not get SSH access to that EC2 instance anymore. A workaround is to copy your instance as an AMI (Amazon Machine Image), initialize a new EC2 instance using that image, and disable the old instance. Here is a discussion on how to do that.

1. Activate Inbound Traffic for SSH

We need to make sure that SSH is enabled for inbound traffic, on our WordPress instance:

  • Log in to your AWS dashboard
  • Navigate to: Services > EC2 > running instances
  • Click on your running WordPress instance

You should see the main Menu of your instance now. Somewhere at the bottom you also see which keyPair is associated with your site.

aws EC2 instance overview

Check or activate SSH traffic. You can find the corresponding settings under:
NETWORK & SECURITY > Security Groups….

…click on the Inbound-tab.

SSH should be activated with a Port Range of 22.
And as Source it should show your IP address.

NOTE: If SSH is not there yet, you can add SSH by clicking on Edit -> Add Rule -> and choose SSH from the dropdown. The port defaults to 22 automatically.
For the Source you could use anywhere ( But this means every computer that has your KeyPair could access your server/instance remotely. This is not ideal. BETTER: choose My IP (some-ip-address). You don’t have to know your IP address. It gets filled out automatically. Now only your computer has access to the instance.

FANTASTIC! That was already quite essential.

2. Access your Instance through the Secure Shell

Open a terminal window.
Navigate to the place where your KeyPair-file resides.
It needs a certain set of permissions. Therefore run the following command:

chmod 600 myKeyPair.pem

This command defines that the file can be read from and written to only by the owner of the file.
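If you want to see what that permission set looks like, try it on a throwaway file first (demo.pem below is just a scratch file, not your real key):

```shell
# Create a scratch file and restrict it the same way as the key file
touch demo.pem
chmod 600 demo.pem
# The listing should show -rw------- : read/write for the owner, nothing for group/others
ls -l demo.pem
```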

Now connect to your server by executing the following ssh command in your terminal:

ssh -i myKeyPair.pem bitnami@YOUR-PUBLIC-IP

(Replace YOUR-PUBLIC-IP with your instance's public IP address; depending on your stack the user name may be bitnami or ubuntu.)

Your SSH client will most likely ask you to confirm the server's host key and add it to the cache before connecting. Accept this request by typing "yes".

You should be logged in to your server and see something like this in the terminal:

Bitnami server login welcome banner

NOTE: If this doesn't work, try resetting your "My IP" address in your SSH inbound traffic settings in the AWS dashboard (as described above). This has thrown me a couple of times, because it seems I have a dynamic IP address. And if it has changed, the server thinks someone else is trying to connect and will deny access.

3. Get your SSL Certificate

We assume you are logged in to your server now, via your terminal.
If your site/blog is based on the Bitnami-stack you should be on an Ubuntu Linux distribution. But if you like, you can double check that by typing:

cat /etc/os-release

This shows some details about your operating system.

Bitnami uses a tool called Lego to automate some of the SSL-certificate juggling. The program might already be installed under /opt/bitnami/letsencrypt/. In my case it wasn't. You can find out whether it is by typing:

sudo ls /opt/bitnami/letsencrypt

If you see lego listed you’re golden and you can jump forward to #4. If not – you guessed it – install Lego first.

>> Installing Lego

Change into the temp folder of your server by typing:

cd /tmp

Download Lego with this command:

curl -Ls | grep browser_download_url | grep linux_amd64 | cut -d '"' -f 4 | wget -i -

This will download the latest Lego-version from the repository. Type ls (you should still be in the /tmp directory) to see the exact name (version) of the file you downloaded.
Unpack the tar file with the following command:

tar xzf lego_vX.Y.Z_linux_amd64.tar.gz

(Substitute the exact file name you saw with ls.)

Create a directory for where lego will be put in:

sudo mkdir -p /opt/bitnami/letsencrypt

Move the unpacked lego binary into the just created folder by typing:

sudo mv lego /opt/bitnami/letsencrypt/lego

If you like, check whether lego gets listed now:

sudo ls /opt/bitnami/letsencrypt

The next thing sounds scary, but we are going to shut down the server.

BITNAMI-NOTE: Before proceeding with this step, ensure that your domain name points to the public IP address of the Bitnami application host.

To shut down all Bitnami services run the following command:

sudo /opt/bitnami/ stop

Now FINALLY get your certificate!!! You do so by running the following command. But for EMAIL-ADDRESS you enter your email address and for DOMAIN you enter your domain.

sudo lego --tls --email="EMAIL-ADDRESS" --domains="DOMAIN" --domains="www.DOMAIN" --path="/opt/bitnami/letsencrypt" run

NOTE: With the --domains flag you can specify several domains in one go, as you can see in the example command above (the flag is simply repeated once per domain).

You will have to agree to the terms of service.
After that, certificates will be generated in the /opt/bitnami/letsencrypt/certificates directory. This set includes the server certificate file DOMAIN.crt and the server certificate key file DOMAIN.key.
To really see how they’re named, check for the files by running the following command:

sudo ls /opt/bitnami/letsencrypt/certificates

4. Configure your Server

Figure out what server is used on your instance by running:

sudo /opt/bitnami/ status

If it is Apache, the following lines apply (Nginx below). The first three commands rename the old server certificate files by adding .old onto the end.

sudo mv /opt/bitnami/apache2/conf/server.crt /opt/bitnami/apache2/conf/server.crt.old
sudo mv /opt/bitnami/apache2/conf/server.key /opt/bitnami/apache2/conf/server.key.old
sudo mv /opt/bitnami/apache2/conf/server.csr /opt/bitnami/apache2/conf/server.csr.old

The next two commands will create symbolic links to your new SSL certificates. Make sure that for DOMAIN you enter your domain name including the top-level-domain part (for a domain like example.com, the key file would be example.com.key).

sudo ln -sf /opt/bitnami/letsencrypt/certificates/DOMAIN.key /opt/bitnami/apache2/conf/server.key
sudo ln -sf /opt/bitnami/letsencrypt/certificates/DOMAIN.crt /opt/bitnami/apache2/conf/server.crt

And the last two of those seven commands change the owner of the certificate files to the root user and modify the rights so that only the root user can read from or write to those files.

sudo chown root:root /opt/bitnami/apache2/conf/server*
sudo chmod 600 /opt/bitnami/apache2/conf/server*

If your server turned out to be Nginx, the principle is the same. But the commands from above will look like this:

sudo mv /opt/bitnami/nginx/conf/server.crt /opt/bitnami/nginx/conf/server.crt.old
sudo mv /opt/bitnami/nginx/conf/server.key /opt/bitnami/nginx/conf/server.key.old
sudo mv /opt/bitnami/nginx/conf/server.csr /opt/bitnami/nginx/conf/server.csr.old
sudo ln -sf /opt/bitnami/letsencrypt/certificates/DOMAIN.key /opt/bitnami/nginx/conf/server.key
sudo ln -sf /opt/bitnami/letsencrypt/certificates/DOMAIN.crt /opt/bitnami/nginx/conf/server.crt
sudo chown root:root /opt/bitnami/nginx/conf/server*
sudo chmod 600 /opt/bitnami/nginx/conf/server*

DONE! Restart your Engine

Here is the very last command which will restart the Bitnami services (server, database, etc.), with your new SSL certificates in place:

sudo /opt/bitnami/ start

Your WordPress site should now be servable via HTTPS. Try it out by entering your domain in your browser. Don't forget to put https:// in front of your domain.

That was really quite an effort, but you made it. CONGRATULATIONS!!!

There is one last thing you should know: your certificates do not last forever. To be more specific, they expire after 90 days.

NOTE: If you would like to check when your certificate expires, you can run the following OpenSSL command (while you’re on your server): sudo openssl x509 -text -noout -in server.crt

This will spit out a lot of data, including the validity period:

Certificate:
    ...
    Validity
        Not Before: Apr 19 08:38:40 2019 GMT
        Not After : Jul 18 08:38:40 2019 GMT

There is a way to renew them of course. And even a way to do this automatically by setting up a cron-job. Since I don’t have a tutorial on that topic, I point you to the original Bitnami tutorial (renewal starts at #5).

NOTE: If you want to redirect your HTTP version of your site/blog (without SSL) to your HTTPS version now: Follow the instructions on the Bitnami docs.

That's it for today. I really hope that by reading this you had less trouble setting up SSL/HTTPS for your WordPress site than I did.

If you liked this post or if you would like to comment on it, please feel free to share it on social media or send me an email.



server room (by Manuel Geissinger)

Access your WordPress EC2 Instance via SFTP

Three times now I have forgotten how to connect to my WordPress instance via an FTP client. So I need to write down how it works, while my memories are fresh.

What I have:

  • AWS (Amazon Web Services) account
  • EC2 instance on AWS
  • WordPress on that EC2 instance (bitnami)

What I want:

  • Connect to the server/instance via SFTP,
  • so that I can get access to the files on my server.

What I need:

  • access to my AWS-dashboard
  • a security user group (on AWS)
  • the access-key-pair-file (.pem) of my EC2 instance
  • the AMI (Amazon Machine Image) user-name
  • an FTP-client

Let’s begin!

Do I have an FTP-client like FileZilla?
Yes: Good! You’ll need it later.
NO: Get one. You'll need it later. FileZilla is free of charge and available for download from its official site.

Do I have access to my AWS dashboard in the browser?

  • YES: Login to the AWS-Console/Dashboard.
  • NO: Stop here and contact AWS support to regain access.

Do I have a running AWS instance with WordPress on it?

  • YES: On the AWS Console: Navigate to Services -> EC2 -> Running Instances. Click on the instance that runs the WordPress site.
  • NO: If you'd like to host a WordPress site, I suggest this tutorial. Make sure to either use an existing KeyPair when you launch the instance or create a new KeyPair. DOWNLOAD and SAVE the KeyPair.pem file AND REMEMBER WHERE YOU SAVED IT!!!

So you are looking at your instance summary. Check which KeyPair name is associated with your instance:

You know where the .pem file with that name (My_KeyPair_Name.pem) resides on your computer?

  • YES: Very good. Open your FTP-client (e.g. FileZilla).
  • NO: Very unfortunate. You might not get direct access to that EC2 instance anymore. A workaround is to copy your instance as an AMI (Amazon Machine Image), initialize a new EC2 instance using that image, and disable the old instance. Here is a discussion on how to do that. (This time: REALLY REMEMBER AND SAVE THE keyPair.pem FILE.)

In the AWS EC2 dashboard, navigate to NETWORK & SECURITY -> Security Groups

aws Security Groups dashboard

Check the Inbound tab. You need port 22 (SSH) activated. If it is not already there, you can add SSH by clicking on Edit -> Add Rule -> and choose SSH from the dropdown. The port defaults to 22 automatically. You have to choose a source. You could use Anywhere ( But this means every computer that has your KeyPair can access your server/instance remotely. This is not ideal. BETTER: choose My IP (some-ip-address). You don't have to know your IP address. It gets filled out automatically. Now only your computer has access to the instance. After doing the above, your inbound settings should look something like this:

aws security group settings for SSH

FANTASTIC! That was actually the most critical part.

Now open FileZilla on your computer. Navigate to Settings (a window pops up). Click on SFTP (under Connections). Which looks like this:

FileZilla SFTP Settings to add a keyfile

There is a button to add a KeyFile with. Remember that My_KeyPair_name.pem file? Good!
Click the Add keyfile… button, go to the directory where you stored the downloaded .pem file, and choose it. (There might be some conversion happening, but this is so far in the past that I don't remember. If so, let it convert the file.) And click OK.

Almost done! In the top left corner of the FileZilla user interface you can spot an icon that looks like three server-machines, wired together.

FileZilla site manager icon

That is the site manager. Click on the site-manager-icon. A new window pops open.

In the main settings (which should already be chosen by default):

  • Check that SFTP is chosen as protocol.
  • In the server-field: type your public IP address. You can find it in the AWS EC2 instance dashboard on the right of the instance summary.
aws EC2 instance summary public IP address
  • Leave port empty (it should default to 22 on its own).
  • As Logon Type choose Ask for password. (Once you do get asked for one, simply press OK without typing any password.)
  • The user name is based on the operating system's distribution, meaning it can be root, ec2-user, ubuntu, etc. A list of some very common SSH user names for connecting to EC2 can be found here. In my case, by the way, ubuntu works (the WordPress site runs on an Ubuntu server). But bitnami also works, which is the stack of the site and seems to be added as an alias for ubuntu.
  • Leave password empty. The window should look similar to this:
  • Click Connect. FileZilla should automatically find the right keyfile. And – like I already mentioned – if you get asked for a password, just click ok. FileZilla should now connect to your server, which means that on the right side of the FileZilla user interface a bunch of folders will magically appear.

Am I connected and do I see my server files on the right?

  • NO: Very sorry that this blog wasn't any help… I don't really have any suggestions other than Stack Overflow, Google, and patience.
  • YES: FANTABULOUS! We made it! I am very, very happy if this blog was helpful to you (which might be myself… again).

If you liked this post or want to hint something, please feel free to share it on social media or send me an email.