Deep Dive: The Microservices Migration
How a BBS emulator accidentally became a distributed system—gRPC services, NATS message bus, gateway routing, and the move from localStorage to real backends.
Here’s a thing that happened: I started with a browser-based BBS emulator, and somehow ended up with gRPC services, a NATS message bus, and gateway routing. Classic scope creep, but in this case, intentional scope creep.
The core problem was simple: BBS backends (bulletin boards, user directories, message systems) were storing data in browser localStorage. This was fine for single-user testing, but the moment two people wanted to see the same bulletin board? Chaos.
(And by “chaos” I mean “they’d each see completely different content.” Which is technically correct for localStorage, but not great for a shared BBS experience.)
The Architecture Decision
I had two choices:
1. Keep it client-side — Use WebSocket sync to replicate localStorage across clients. Simple, but meant every client had a full copy of everything.
2. Move to server-side — Real databases, real APIs, real services. More complex, but actually correct.
I chose option 2, partly because it’s the right answer, but mostly because I wanted to learn gRPC properly.
The Stack
The final architecture looks like this:
                                       ┌──────────────┐
                                       │  PostgreSQL  │
                                       │  (Supabase)  │
                                       └──────┬───────┘
                                              │
┌──────────┐      ┌─────────────┐      ┌──────┴───────┐
│ Browser  │─────►│   Gateway   │─────►│   Services   │
│ Terminal │      │   (Axum)    │      │    (gRPC)    │
└──────────┘      └──────┬──────┘      └──────┬───────┘
                         │                    │
                  ┌──────┴──────┐             │
                  │    NATS     │◄────────────┘
                  │  (pub/sub)  │
                  └─────────────┘
- Gateway: Axum HTTP server that routes REST requests to gRPC services
- Services: Individual Rust microservices (users, messaging, market data)
- NATS: Pub/sub for real-time events between services
- PostgreSQL: Actual persistence (via Supabase for the hosted instance)
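To make that wiring concrete, here is a rough sketch of what one service's startup looks like under this setup. This is illustrative, not the project's actual main.rs: the pool size, bind port, and generated module path are guesses.

// Sketch of a service entrypoint (illustrative wiring only).
// Imports of the tonic-generated `users_server` module and of `UsersService` omitted.
use sqlx::postgres::PgPoolOptions;
use tonic::transport::Server;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connection pool to Postgres (Supabase-hosted in the deployed setup).
    let db_url = std::env::var("DATABASE_URL")?;
    let pool = PgPoolOptions::new().max_connections(5).connect(&db_url).await?;

    // NATS client, used to publish real-time events when state changes.
    let nats_url = std::env::var("NATS_URL")?;
    let nats = async_nats::connect(nats_url.as_str()).await?;

    // Serve the gRPC API; `UsersServer` is the tonic-generated wrapper from
    // backend.proto, and the bind address here is a placeholder.
    Server::builder()
        .add_service(users_server::UsersServer::new(UsersService { pool, nats }))
        .serve("0.0.0.0:50051".parse()?)
        .await?;

    Ok(())
}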
Service: Users
The users service handles profiles, online status, and the user directory backend:
// server/services/users-service/src/service.rs
pub struct UsersService {
    pool: PgPool,
    nats: async_nats::Client,
}

#[tonic::async_trait]
impl Users for UsersService {
    async fn list_profiles(
        &self,
        request: Request<ListProfilesRequest>,
    ) -> Result<Response<ListProfilesResponse>, Status> {
        let req = request.into_inner();

        let profiles = sqlx::query_as!(
            ProfileRow,
            r#"
            SELECT id, handle, display_name, bio, created_at, last_seen
            FROM profiles
            ORDER BY last_seen DESC
            LIMIT $1 OFFSET $2
            "#,
            req.limit as i64,
            req.offset as i64
        )
        .fetch_all(&self.pool)
        .await
        .map_err(|e| Status::internal(e.to_string()))?;

        Ok(Response::new(ListProfilesResponse {
            profiles: profiles.into_iter().map(Into::into).collect(),
        }))
    }

    async fn set_online_status(
        &self,
        request: Request<SetOnlineStatusRequest>,
    ) -> Result<Response<SetOnlineStatusResponse>, Status> {
        let req = request.into_inner();

        // Update database
        sqlx::query!(
            "UPDATE profiles SET is_online = $1, last_seen = NOW() WHERE id = $2",
            req.online,
            req.user_id
        )
        .execute(&self.pool)
        .await
        .map_err(|e| Status::internal(e.to_string()))?;

        // Publish event to NATS for real-time subscribers
        self.nats
            .publish(
                "users.status",
                serde_json::to_vec(&UserStatusEvent {
                    user_id: req.user_id,
                    online: req.online,
                })
                .unwrap()
                .into(),
            )
            .await
            .map_err(|e| Status::internal(e.to_string()))?;

        Ok(Response::new(SetOnlineStatusResponse { success: true }))
    }
}
The set_online_status method shows the pattern: update the database, then publish to NATS. Any service interested in real-time status changes subscribes to users.status.
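The event itself is just a small serde struct. Roughly this shape, with the field types assumed from the publish call above rather than copied from the real codebase:

// Approximate shape of the event published on "users.status" (types assumed).
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
pub struct UserStatusEvent {
    pub user_id: String,
    pub online: bool,
}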
Service: Messaging
The messaging service handles the bulletin board and private messages:
// server/services/messaging-service/src/service.rs
pub struct MessagingService {
    pool: PgPool,
    nats: async_nats::Client,
}

#[tonic::async_trait]
impl Messaging for MessagingService {
    async fn post_message(
        &self,
        request: Request<PostMessageRequest>,
    ) -> Result<Response<PostMessageResponse>, Status> {
        let req = request.into_inner();

        // Verify user owns this auth token
        let user_id = self.verify_auth(&req.auth_token).await?;

        let message = sqlx::query_as!(
            MessageRow,
            r#"
            INSERT INTO messages (board_id, author_id, subject, body)
            VALUES ($1, $2, $3, $4)
            RETURNING id, board_id, author_id, subject, body, created_at
            "#,
            req.board_id,
            user_id,
            req.subject,
            req.body
        )
        .fetch_one(&self.pool)
        .await
        .map_err(|e| Status::internal(e.to_string()))?;

        // Notify subscribers
        self.nats
            .publish(
                format!("messages.board.{}", req.board_id),
                serde_json::to_vec(&NewMessageEvent::from(&message))
                    .unwrap()
                    .into(),
            )
            .await
            .ok(); // Fire and forget for pub/sub

        Ok(Response::new(PostMessageResponse {
            message: Some(message.into()),
        }))
    }
}
Board-specific NATS subjects (messages.board.{id}) allow clients to subscribe only to boards they’re viewing. No need to receive every message across the system.
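On the consuming side, that looks roughly like this. A sketch only: the function name, the i64 board id, and the forwarding step are placeholders for however the gateway actually fans events out to a terminal session.

// Sketch of a per-board subscriber; names and types here are illustrative.
use futures::StreamExt;

async fn watch_board(
    nats: async_nats::Client,
    board_id: i64,
) -> Result<(), Box<dyn std::error::Error>> {
    // Only this board's subject, so we never see traffic for boards nobody is viewing.
    let mut sub = nats.subscribe(format!("messages.board.{board_id}")).await?;

    while let Some(msg) = sub.next().await {
        if let Ok(event) = serde_json::from_slice::<serde_json::Value>(&msg.payload) {
            // Hand the event to whatever is watching this board (WebSocket, SSE, ...).
            println!("new message on board {board_id}: {event}");
        }
    }
    Ok(())
}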
The Gateway Layer
The gateway translates REST to gRPC and handles authentication:
// server/src/handlers/gateway.rs
pub async fn list_profiles(
    State(state): State<AppState>,
    Query(params): Query<ListProfilesParams>,
) -> Result<Json<Vec<Profile>>, AppError> {
    let mut client = state.users_client().await?;

    let response = client
        .list_profiles(ListProfilesRequest {
            limit: params.limit.unwrap_or(50),
            offset: params.offset.unwrap_or(0),
        })
        .await?;

    Ok(Json(response.into_inner().profiles))
}

pub async fn post_message(
    State(state): State<AppState>,
    AuthUser(user): AuthUser,
    Json(body): Json<PostMessageBody>,
) -> Result<Json<Message>, AppError> {
    let mut client = state.messaging_client().await?;

    let response = client
        .post_message(PostMessageRequest {
            auth_token: user.token,
            board_id: body.board_id,
            subject: body.subject,
            body: body.body,
        })
        .await?;

    Ok(Json(response.into_inner().message.unwrap()))
}
The AuthUser extractor handles JWT validation. Requests without valid tokens get a 401 before they ever reach the gRPC service.
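For the curious, an extractor like that is roughly this shape. This is a hedged sketch, not the project's actual code: the claim names, the AuthedUser struct, the shared-secret JWT validation via the jsonwebtoken crate, and the axum version details are all assumptions.

// Rough sketch of a bearer-token extractor (hypothetical; the real one may differ).
use async_trait::async_trait;
use axum::{extract::FromRequestParts, http::{request::Parts, StatusCode}};
use jsonwebtoken::{decode, DecodingKey, Validation};
use serde::Deserialize;

#[derive(Deserialize)]
struct Claims {
    sub: String, // user id
    exp: usize,
}

pub struct AuthedUser {
    pub user_id: String,
    pub token: String,
}

// Tuple struct so handlers can pattern-match: `AuthUser(user): AuthUser`.
pub struct AuthUser(pub AuthedUser);

#[async_trait] // on newer axum versions this attribute is no longer needed
impl<S: Send + Sync> FromRequestParts<S> for AuthUser {
    type Rejection = StatusCode;

    async fn from_request_parts(parts: &mut Parts, _state: &S) -> Result<Self, Self::Rejection> {
        // Pull the bearer token out of the Authorization header.
        let token = parts
            .headers
            .get("authorization")
            .and_then(|v| v.to_str().ok())
            .and_then(|v| v.strip_prefix("Bearer "))
            .ok_or(StatusCode::UNAUTHORIZED)?
            .to_owned();

        // Validate signature and expiry; anything invalid becomes a 401 before
        // the request ever reaches a gRPC service.
        let secret = std::env::var("JWT_SECRET").map_err(|_| StatusCode::UNAUTHORIZED)?;
        let data = decode::<Claims>(
            &token,
            &DecodingKey::from_secret(secret.as_bytes()),
            &Validation::default(),
        )
        .map_err(|_| StatusCode::UNAUTHORIZED)?;

        Ok(AuthUser(AuthedUser { user_id: data.claims.sub, token }))
    }
}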
NATS Pub/Sub
NATS gives us real-time event distribution without coupling services together:
// server/src/nats.rs
pub async fn setup_nats(config: &Config) -> Result<async_nats::Client> {
    let client = async_nats::connect(&config.nats_url).await?;
    info!("Connected to NATS at {}", config.nats_url);
    Ok(client)
}

// Subscribing to events (in a service that cares)
pub async fn subscribe_user_events(
    nats: async_nats::Client,
    handler: impl Fn(UserStatusEvent) + Send + Sync + 'static,
) -> Result<()> {
    let mut subscriber = nats.subscribe("users.status").await?;

    tokio::spawn(async move {
        while let Some(msg) = subscriber.next().await {
            if let Ok(event) = serde_json::from_slice(&msg.payload) {
                handler(event);
            }
        }
    });

    Ok(())
}
The pattern is simple: services publish events when state changes, other services subscribe if they care. No direct service-to-service calls needed for notifications.
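Wiring a subscriber up is then a one-liner at startup. Something like this hypothetical call site, where the closure body is whatever the consuming service actually needs to do:

// Hypothetical call site during service startup.
subscribe_user_events(nats.clone(), |event| {
    // React to the status change: update a presence cache, notify connected
    // terminal sessions, etc.
    println!(
        "user {} is now {}",
        event.user_id,
        if event.online { "online" } else { "offline" }
    );
})
.await?;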
Docker Compose Deployment
The whole thing runs in containers:
# docker-compose.yml
services:
  gateway:
    build: ./server
    ports:
      - "8080:8080"
    environment:
      - NATS_URL=nats://nats:4222
      - DATABASE_URL=postgres://user:pass@db:5432/emulator
    depends_on:
      - nats
      - db
      - users-service
      - messaging-service

  users-service:
    build:
      context: ./server
      dockerfile: services/users-service/Dockerfile
    environment:
      - NATS_URL=nats://nats:4222
      - DATABASE_URL=postgres://user:pass@db:5432/emulator

  messaging-service:
    build:
      context: ./server
      dockerfile: services/messaging-service/Dockerfile
    environment:
      - NATS_URL=nats://nats:4222
      - DATABASE_URL=postgres://user:pass@db:5432/emulator

  nats:
    image: nats:latest
    ports:
      - "4222:4222"

  db:
    image: postgres:15
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=emulator
docker compose up gives you the whole stack. For local development, I use Dragonfly as a Redis-compatible cache because it’s faster than Redis and lighter on memory.
The Proto Definitions
gRPC services are defined in Protocol Buffers:
// server/proto/backend.proto
syntax = "proto3";

package backend;

service Users {
  rpc ListProfiles(ListProfilesRequest) returns (ListProfilesResponse);
  rpc GetProfile(GetProfileRequest) returns (GetProfileResponse);
  rpc UpdateProfile(UpdateProfileRequest) returns (UpdateProfileResponse);
  rpc SetOnlineStatus(SetOnlineStatusRequest) returns (SetOnlineStatusResponse);
}

service Messaging {
  rpc ListBoards(ListBoardsRequest) returns (ListBoardsResponse);
  rpc GetMessages(GetMessagesRequest) returns (GetMessagesResponse);
  rpc PostMessage(PostMessageRequest) returns (PostMessageResponse);
  rpc DeleteMessage(DeleteMessageRequest) returns (DeleteMessageResponse);
}

message Profile {
  string id = 1;
  string handle = 2;
  string display_name = 3;
  string bio = 4;
  int64 created_at = 5;
  int64 last_seen = 6;
  bool is_online = 7;
}

// ... more messages
The backend-proto crate generates Rust code from these definitions at build time. Type-safe RPC without writing serialization code by hand.
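The build script behind that is tiny. Something along these lines, with the proto path being a guess based on the repo layout above:

// backend-proto/build.rs (sketch; the actual path and options may differ)
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Regenerate the Rust client/server stubs whenever the proto definition changes.
    println!("cargo:rerun-if-changed=../proto/backend.proto");
    tonic_build::compile_protos("../proto/backend.proto")?;
    Ok(())
}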
Lessons Learned
- gRPC is nice, actually — Strong typing, good tooling, and efficient binary serialization. The learning curve is worth it.
- NATS is delightfully simple — Compared to Kafka or RabbitMQ, NATS is almost trivially easy to operate. Publish, subscribe, done.
- Gateway pattern works — Keeping REST for the browser while using gRPC internally gives you the best of both worlds.
- Start with the database schema — I consolidated everything into an authoritative schema file early. Every service reads from the same source of truth.
- Docker Compose for local, Helm for prod — Same services, different orchestration. The 12-factor approach pays off.
Was It Overkill?
For a BBS emulator? Probably. But the architecture now supports:
- Multiple simultaneous users
- Real-time message updates
- Shared game state (for multiplayer text adventures)
- Proper authentication and authorization
- Horizontal scaling (if somehow thousands of people want to use a terminal BBS)
And I learned gRPC, which was the actual goal.
See also: Journey Day 10: Microservices & Market Data — when the services went live.
See also: Deep Dive: Passkey Authentication — for the auth side of this story.