writing a tcp stack from scratch in rust
why would you do this
most engineers treat TCP as a black box. packets go in, data comes out. that abstraction is fine until it isn't — until you're debugging a production system dropping packets under load, or building a protocol that needs custom congestion control.
so i built one. from scratch. in rust.
the basics: what TCP actually does
TCP gives you three things: reliable delivery, ordering, and flow control. everything else — latency, throughput, fairness — is a consequence of how these three properties are implemented.
struct TcpHeader {
    src_port: u16,    // source port
    dst_port: u16,    // destination port
    seq_num: u32,     // sequence number of the first data byte
    ack_num: u32,     // next byte we expect from the peer
    data_offset: u8,  // header length in 32-bit words (upper 4 bits)
    flags: u8,        // SYN, ACK, FIN, RST, PSH, URG
    window: u16,      // receive window: this is flow control
    checksum: u16,    // covers header, payload, and a pseudo-header
    urgent_ptr: u16,  // part of the wire format, rarely used today
}
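the checksum field deserves a closer look. it's the internet checksum from RFC 1071: a one's-complement sum of 16-bit words, folded and inverted. here's a sketch; note that real TCP also sums a pseudo-header (src/dst IPs, protocol, segment length), which i've left out for brevity:

```rust
// one's-complement internet checksum (RFC 1071). the real TCP
// computation also covers a pseudo-header; this sketch sums only
// the bytes it's given.
fn internet_checksum(data: &[u8]) -> u16 {
    let mut sum: u32 = 0;
    for chunk in data.chunks(2) {
        let word = if chunk.len() == 2 {
            u16::from_be_bytes([chunk[0], chunk[1]])
        } else {
            u16::from_be_bytes([chunk[0], 0]) // pad an odd trailing byte
        };
        sum += u32::from(word);
    }
    // fold any carries back into the low 16 bits
    while sum > 0xffff {
        sum = (sum & 0xffff) + (sum >> 16);
    }
    !(sum as u16)
}
```

a nice property: if you write the computed checksum into the header and sum the whole thing again, the result comes out to zero, which is how receivers validate segments.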
the handshake
the three-way handshake is SYN → SYN-ACK → ACK. sounds simple. the implementation is not.
you need to track connection state across async events: the SYN might arrive while you're processing another connection. the ACK might be lost and need retransmission. the client might crash between SYN and ACK, leaving a half-open connection that you need to clean up with a timeout.
enum TcpState {
    Listen,
    SynSent,
    SynReceived,
    Established,
    FinWait1,
    FinWait2,
    Closing, // simultaneous close: both sides sent FIN at once
    TimeWait,
    CloseWait,
    LastAck,
    Closed,
}
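here's a sketch of how events drive the handshake portion of that state machine. the event names are mine, not from the RFC, and i've repeated only the setup-relevant states so the example stands alone; real code would also carry the segment's seq/ack numbers and actually emit the reply segments:

```rust
// setup-relevant subset of the connection state machine above
#[derive(Debug, Clone, Copy, PartialEq)]
enum HandshakeState {
    Closed,
    Listen,
    SynSent,
    SynReceived,
    Established,
}

// hypothetical event names for illustration
#[derive(Debug, Clone, Copy)]
enum Event {
    RecvSyn,    // peer's SYN arrived
    RecvSynAck, // peer's SYN-ACK arrived
    RecvAck,    // peer's ACK arrived
    Timeout,    // half-open cleanup timer fired
}

fn handshake_step(state: HandshakeState, event: Event) -> HandshakeState {
    use HandshakeState::*;
    match (state, event) {
        (Listen, Event::RecvSyn) => SynReceived,      // passive open: send SYN-ACK
        (SynSent, Event::RecvSynAck) => Established,  // active open: send ACK
        (SynReceived, Event::RecvAck) => Established, // handshake complete
        (SynReceived, Event::Timeout) => Closed,      // client died mid-handshake: tear down
        (s, _) => s, // anything unexpected: drop the segment, keep state
    }
}
```

the catch-all arm is the important bit: a correct stack has to decide what to do with every segment in every state, and "silently stay put" is only sometimes the right answer.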
congestion control
this is where it gets interesting. the linux kernel implements CUBIC by default. i implemented Reno — simpler, still teaches you everything.
congestion window starts small (typically 10 segments, per RFC 6928), grows exponentially during slow start, then linearly during congestion avoidance. Reno distinguishes two loss signals: three duplicate ACKs halve the window (fast retransmit), while a retransmission timeout collapses it to one segment and restarts slow start.
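the core of Reno fits in a few lines. this is a sketch at per-RTT granularity, counting whole segments; a real implementation updates per-ACK in bytes (roughly cwnd += MSS*MSS/cwnd during avoidance), but the shape is the same:

```rust
// simplified Reno congestion window, in whole segments per RTT.
// a sketch, not the byte-accurate per-ACK version.
struct RenoCwnd {
    cwnd: u32,     // congestion window, in segments
    ssthresh: u32, // slow-start threshold
}

impl RenoCwnd {
    fn new() -> Self {
        RenoCwnd { cwnd: 10, ssthresh: u32::MAX } // initial window per RFC 6928
    }

    // called once per RTT of successfully acked data
    fn on_rtt_acked(&mut self) {
        if self.cwnd < self.ssthresh {
            self.cwnd *= 2; // slow start: exponential growth
        } else {
            self.cwnd += 1; // congestion avoidance: linear growth
        }
    }

    // three duplicate ACKs: mild congestion, halve the window
    fn on_triple_dup_ack(&mut self) {
        self.ssthresh = (self.cwnd / 2).max(2);
        self.cwnd = self.ssthresh;
    }

    // retransmission timeout: severe congestion, start over
    fn on_timeout(&mut self) {
        self.ssthresh = (self.cwnd / 2).max(2);
        self.cwnd = 1;
    }
}
```

the asymmetry between the two loss handlers is the whole design: duplicate ACKs mean packets are still getting through, so you back off gently; a timeout means the pipe might be gone, so you start from scratch.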
what i learned
building TCP taught me more about distributed systems than any book. the protocol is a masterclass in handling partial failure, timeouts, and state machines. rust's ownership model caught at least three bugs at compile time that would have been nightmares to debug at runtime.