Timestamps in API Design

This article tackles two common API design questions: how to model time fields and how to format them. It also introduces an input → process → output pattern for handling timestamps safely in code.


I’m writing this because what follows should be obvious to any seasoned developer—yet I’m still drawn into tedious debates with colleagues at AWS over two questions:

  1. Should time fields be modeled as a dedicated Timestamp type or as a primitive long?
  2. Should API time values be exposed as raw epoch integers or as RFC3339‑formatted strings?

The Answers

Timestamps should be modeled as a dedicated type, not a long.

Using a primitive like long or int64 to represent a timestamp might seem easy and performant—but in most cases, it’s a bad idea. It’s error-prone, opaque, and worst of all, that performance “gain” is rarely something your application actually needs.

A raw long gives no context. Is it seconds? Milliseconds? Nanoseconds? Since the Unix epoch? From some custom epoch? Why should you, or anyone else, have to keep asking these tedious questions? It’s a timestamp: use a type that makes that explicit!

There's also no type safety. It’s easy for developers to accidentally assign unrelated numeric values, or confuse durations with points in time.

And then there’s functionality: you can’t easily perform date/time operations on a long. A proper time type usually comes with everything you need—comparison, formatting, arithmetic, etc.

However, some still might argue, “I don’t need all that—I just want to get the difference in seconds between two time points.” Fair enough. But doesn’t a timestamp type give you that too, plus a lot more?
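In Python, for instance, the standard datetime type gives you that difference in seconds directly, and keeps durations distinct from points in time. A minimal sketch (the two instants are made up for illustration):

```python
from datetime import datetime, timezone

# Two hypothetical points in time as proper, timezone-aware objects
start = datetime(2023, 4, 17, 10, 0, 0, tzinfo=timezone.utc)
end = datetime(2023, 4, 17, 10, 30, 0, tzinfo=timezone.utc)

# Subtracting two instants yields a timedelta, which is a duration:
# a different type from a point in time
elapsed = end - start
print(elapsed.total_seconds())  # 1800.0

# The type system keeps the two concepts apart:
# `end + elapsed` is a valid instant, while `end + start` is a TypeError.
```

So the “just give me the difference in seconds” use case is a one-liner, and you keep the type safety on top.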

Sure, there are exceptions. For instance, if you're building a database or a low-level storage engine, modeling timestamps with long might make sense for efficiency. But outside of those rare cases, reach for a real timestamp type. Your future self—and your teammates—will thank you.

Time values in APIs should be exposed as RFC3339 strings, not epoch integers.

I’m not talking about binary protocols or compact serialization formats; that’s a separate discussion. I’m talking about APIs, which typically use text-based formats like JSON or XML.

Timestamps in APIs should be human-readable and unambiguous. That’s what RFC3339 gives us.

Epochs are not readable—1681724411 means nothing at a glance. They're also ambiguous: is that seconds? Milliseconds? Something else?
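To make the ambiguity concrete, here is a sketch in Python: the very same integer, read as seconds versus milliseconds, lands more than fifty years apart.

```python
from datetime import datetime, timezone

raw = 1681724411

# Interpreted as seconds since the Unix epoch: April 2023
as_seconds = datetime.fromtimestamp(raw, tz=timezone.utc)
print(as_seconds.isoformat())  # 2023-04-17T09:40:11+00:00

# Interpreted as milliseconds since the Unix epoch: January 1970
as_millis = datetime.fromtimestamp(raw / 1000, tz=timezone.utc)
print(as_millis.year)  # 1970
```

Nothing in the number itself tells you which reading is correct; that knowledge lives only in out-of-band documentation, if it exists at all.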

RFC3339 is a standard. It's a widely adopted format (2006-01-02T15:04:05Z), easily parsed by both humans and machines, and supported in almost every language and API toolkit.
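As a sketch, here is the round trip with Python's standard library (note: datetime.fromisoformat only accepts the trailing Z from Python 3.11 on, hence the replace call for older versions):

```python
from datetime import datetime

raw = "2006-01-02T15:04:05Z"

# Input: parse the RFC3339 string into a timezone-aware object
# (swap the Z suffix for an explicit offset to support Python < 3.11)
ts = datetime.fromisoformat(raw.replace("Z", "+00:00"))

# The epoch form of the same instant is opaque by comparison
print(int(ts.timestamp()))  # 1136214245

# Output: format back to RFC3339
print(ts.isoformat().replace("+00:00", "Z"))  # 2006-01-02T15:04:05Z
```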

If you're concerned about performance or payload size, optimize elsewhere. Don’t sacrifice clarity and interoperability.

Some might argue, “We don’t need to look at API responses directly. Who cares about readability? Epoch is more performant. Why optimize for clarity?”

Frankly, I don’t even want to engage with that line of reasoning. It misses the point entirely. APIs are not just for machines—they’re for developers too. Readability, debuggability, and long-term maintainability do matter.

Best Practices for Handling Timestamps in Your Program – The input → process → output Pattern

When dealing with timestamps, the handling should be divided into three phases.

  • Input phase: Parse raw timestamp values (e.g. from API requests or file inputs) into proper time objects—like java.time.Instant in Java or time.Time in Go. Additionally, time objects should be timezone-aware. Time types without timezone information should be forbidden—they're a breeding ground for bugs and confusion.
  • Processing phase: Use these time objects throughout your program logic. This gives you type safety, clarity, and access to useful date/time operations.
  • Output phase: When producing output (e.g. an API response), format the timestamp as needed. For text-based APIs, RFC3339 is the go-to choice for readability and interoperability.
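A minimal sketch of the three phases in Python (the function names and the TTL check are illustrative, not from any particular codebase):

```python
from datetime import datetime

def parse_timestamp(raw: str) -> datetime:
    """Input phase: parse an RFC3339 string into a timezone-aware object."""
    ts = datetime.fromisoformat(raw.replace("Z", "+00:00"))
    if ts.tzinfo is None:
        raise ValueError("timestamp must be timezone-aware")
    return ts

def is_expired(created_at: datetime, now: datetime, ttl_seconds: int) -> bool:
    """Processing phase: operate only on proper time objects."""
    return (now - created_at).total_seconds() > ttl_seconds

def format_timestamp(ts: datetime) -> str:
    """Output phase: render RFC3339 for the API response."""
    return ts.isoformat().replace("+00:00", "Z")

created = parse_timestamp("2023-04-17T10:00:11Z")
now = parse_timestamp("2023-04-17T11:00:11Z")
print(is_expired(created, now, ttl_seconds=1800))  # True
print(format_timestamp(created))  # 2023-04-17T10:00:11Z
```

Everything between parse and format works with real time objects; the raw strings exist only at the edges.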

This practice also applies to any other data type that involves conversion: URLs, string encodings, and so on.

I’ve lost count of how many times I’ve seen developers mess up URLs by treating them like plain strings. Here’s what I mean:

```python
base_url = "https://example.com/search"
query = "apple"
# Naive concatenation: breaks as soon as `query` contains
# characters that need escaping (spaces, &, =, #, ...)
url = base_url + "?q=" + query + "&sort=asc"
```

I’m not saying you should never build a URL by concatenating strings. In some simple, controlled cases, it’s perfectly fine. But it should be done with caution—and only when the structure is dead simple and the parameters are guaranteed to be safe. For anything dynamic or user-controlled, use proper URL utilities.

Following the pattern above, you should parse the raw URL string into a proper URL object as part of the input phase. From there on, treat it like what it is: a structured object. Use the provided methods to inspect or modify it—don’t fall back to string hacks.
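In Python, for example, urllib.parse provides that structured treatment. A sketch (the query values are made up):

```python
from urllib.parse import urlsplit, urlunsplit, urlencode

# Input phase: parse the raw string into a structured object
parts = urlsplit("https://example.com/search")

# Processing phase: build the query through the library,
# which escapes special characters for us
query = urlencode({"q": "green apple & pear", "sort": "asc"})

# Output phase: reassemble the final URL
url = urlunsplit(parts._replace(query=query))
print(url)  # https://example.com/search?q=green+apple+%26+pear&sort=asc
```

Note how the ampersand inside the search term is escaped to %26 automatically; the string-concatenation version above would have silently produced a URL with a phantom extra parameter.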

Why this "input → process → output" methodology matters

This approach works for any kind of data that needs conversion, like timestamps (Instant), encoded strings, file paths, and more.

By centralizing conversions into the input and output phases, you get these advantages:

  • Cleaner code: your main logic deals with proper types.
  • Fewer bugs: invalid data is caught early at the boundary, and the types' own functions handle the tricky conversions correctly.
  • No duplicate work: you parse once and reuse the result throughout the program.
  • Better separation: input/output is handled at the edge, business logic stays focused.

Closing Thoughts

These debates aren't about preference—they're about clarity, safety, and long-term maintainability. We should aim for conventions that prevent bugs and help other developers (and future you) understand what’s going on at a glance.

Let’s stop reinventing the wheel and adopt what the industry has already learned through decades of hard lessons.