init_programs(self)

To start out, we need to think about how to split the implementation between the datapath and userspace. For AIMD, the main things we need to keep track of are ACKs, to increment our window, and losses, to cut our window. Although we could ask the datapath to give us this information on every ACK or loss detected, this wouldn't scale well. Since it takes one RTT for our algorithm to get feedback about a given packet on the network in steady state, it is pretty natural to update our state once per RTT (note: this is by no means fundamentally correct, so feel free to play around with different time scales!). A common pattern in CCP is to aggregate per-ACK statistics (such as the number of bytes ACKed) over a given time interval and then periodically report them to the userspace agent, which handles the logic of how to adjust the window or rate.

Although they work in tandem, it makes sense to think about the datapath program first, since the userspace agent reacts to events generated by the datapath. (For a detailed background on CCP datapath programs, read this.)

As mentioned above, for this algorithm we want to collect two statistics in the datapath: the number of packets ACKed and the number of packets lost. We'll define our Report structure as follows:

(def (Report
    (volatile packets_acked 0)
    (volatile packets_lost 0)
))

The 0 after each value sets the default value, and the volatile keyword tells the datapath to reset each value to its default (0) after each report command.

Next, we'll specify our event handlers. First, we'll use the when true event to update our counters on each ACK (the event handler is run on each ACK, represented by the Ack structure):

(when true
    (:= Report.packets_acked (+ Report.packets_acked Ack.packets_acked))
    (:= Report.packets_lost (+ Report.packets_lost Ack.lost_pkts_sample))
    (fallthrough)
)

The (fallthrough) statement tells the datapath to continue checking the rest of our event handlers. Without this statement, the datapath would stop here even if the other event conditions resolved to true.

The only other condition we need is a timer that sends a report once per RTT. This can be implemented using the Micros variable. This variable starts at 0 and represents the number of microseconds since it was last reset. (Flow contains some flow-level statistics, such as the datapath's current estimate of the RTT in microseconds, which comes in handy here):

(when (> Micros Flow.rtt_sample_us)
    (report)
    (:= Micros 0)
)

This condition resolves to true if Micros is greater than one RTT, and then resets Micros so that the condition can fire again on the next RTT. (NOTE: Micros is only reset by you. If you forgot to reset it, this condition would fire on every subsequent ACK, because the event handlers are evaluated on each ACK and Micros only grows over time.)

Although this is not absolutely necessary, when a loss happens, we should probably know about it right away. A loss (in the simplistic model assumed by this algorithm) indicates that we are putting packets into the network too quickly. Therefore, if we were to continue sending at this rate for up to 1 RTT after the first loss, we might introduce further losses. We can add another when clause to send a report immediately upon any loss:

(when (> Report.packets_lost 0)
    (report)
)

We can now write init_programs by putting this together into a string literal and giving our program a name ("default"):

def init_programs(self):
    return [
        ("default", """\
            (def (Report
                (volatile packets_acked 0)
                (volatile packets_lost 0)
            ))
            (when true
                (:= Report.packets_acked (+ Report.packets_acked Ack.packets_acked))
                (:= Report.packets_lost (+ Report.packets_lost Ack.lost_pkts_sample))
                (fallthrough)
            )
            (when (> Micros Flow.rtt_sample_us)
                (report)
                (:= Micros 0)
            )
            (when (> Report.packets_lost 0)
                (report)
            )
        """),
    ]

NOTE: If you don't return any programs here, there will be no logic to decide when your algorithm receives reports, and thus your algorithm won't receive any callbacks beyond the creation of each flow.
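With the datapath program in place, the userspace side only needs to react to each report. As a rough sketch of where this is headed (the class, method names, and MSS constant here are hypothetical illustrations, not the exact CCP Python API; the report fields match the Report definition above), the AIMD logic might look like:

```python
from collections import namedtuple

MSS = 1460  # assumed maximum segment size, in bytes

# Stand-in for the report object delivered by the datapath; the real
# object exposes the fields declared in the (def (Report ...)) block.
Report = namedtuple("Report", ["packets_acked", "packets_lost"])

class AIMDFlow:
    """Hypothetical per-flow state held by the userspace agent."""

    def __init__(self, init_cwnd=10 * MSS):
        self.cwnd = init_cwnd

    def on_report(self, report):
        if report.packets_lost > 0:
            # Multiplicative decrease: halve the window on any loss.
            self.cwnd = max(self.cwnd // 2, MSS)
        else:
            # Additive increase: reports arrive roughly once per RTT,
            # so grow the window by one MSS per report.
            self.cwnd += MSS
        return self.cwnd
```

Note that because the loss handler fires immediately rather than waiting for the per-RTT timer, the window is cut as soon as the first loss is reported.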