Securing APIs in a Cloud Native Environment

Curity
Dec 17, 2019

Computer systems built today have very little in common with what we built only a few years ago. Systems have evolved from classic client-server solutions into distributed systems that span many data centers and geographic regions. DevOps teams can now build applications that scale up and down effortlessly, and even serverless applications that spin up a server just to serve a single request. It's pretty impressive.

However, one requirement has stayed the same over the years. The system still needs to know who the caller is, or at least know a little something about the caller. Only when the caller is known can the data be released. Identity is a prerequisite to authorization.

So how do we fulfill this requirement in ever-changing environments? One way is to adhere to the standards that are available. Following standards ensures interoperability, and well-written standards help with scalability. But which ones should we choose? There are hundreds of standards that could apply; for identity alone, there are roughly 50 that could be relevant depending on the use case.

The Power of Identity Standards: OAuth and OpenID Connect

There have been several attempts at standards to solve these kinds of problems, and there will likely be many more in the future. Protocols such as SAML, WS-* and the like have been around for many years and are still quite heavily deployed. They do solve a lot of the use cases, but for RESTful access control and identity management they are quite hard to use, simply because they were not designed for that. Instead, two others have taken the stage: OAuth 2.0 and OpenID Connect.

This should not come as a surprise to anyone; they have been around for a while now and have become de facto standards for digital identity and delegated access. Together, these two make up the core of a secure API platform.

A lot of people have glanced at the core OAuth 2.0 spec and thought to themselves: "I can implement this." And that's probably true, but there is a lot more to it than the core specification. OAuth and OpenID Connect are a whole family of specifications, and if we printed them all we would have close to a bookshelf full of documents to read. Because of this, I never recommend implementing the server part yourself. All those nuances are best left to experts.

So, if I install an OAuth/OpenID Connect server, am I done? No, but you're well on your way. There are still some measures to be taken, and I'll give you a few tips on how to avoid some of the pitfalls when deploying large-scale platforms.

Phantom Token — The base of a secure API platform

The Phantom Token flow is something that we designed to fulfill the need to hide token data from clients while still sharing all the data the APIs need to make their authorization decisions. Although it is not a standard in itself, it ties together several standards in a nice and comprehensible way. It's a pattern that we have deployed at all of our customers, with very good results.

The idea is that you let your OAuth server issue an opaque access token: a token that is merely a reference to the token data. The opaque token is a random string, so there is no way for a potential attacker to extract any data from it. This also means that the client receiving the token isn't able to read any data from it, which is fine, because the token is not really for the client; it's for the API. When the client uses the token to call an API, the API has to de-reference the data using the introspection capability of the OAuth server. This does not scale very well, since every API would have to do the same thing for every incoming request, which would more or less force each API to maintain its own cache. So instead, we introduce the API gateway.

With the API gateway in place, we can let it perform the introspection on behalf of the APIs. This means several things. First, it allows us to move the cache to the API gateway, which gives us control over it. Second, we can have the OAuth server respond with more than just a document stating whether the token is valid or not: it can also respond with the access token in the form of a JSON Web Token (JWT). The JWT is a JSON representation of the token, signed with the private key of the OAuth server. This JWT is what is passed on with the request to the API. The API can then validate the signature of the token using the public key of the OAuth server and base its authorization decision on the data in the token. This makes for a very scalable platform, since every API can make its own authorization decision without asking anyone else. All we need to distribute to the APIs is the public key.
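
As an illustration, here is a minimal sketch of the gateway side of this flow in Python. The endpoint URL, the gateway credentials, and the assumption that the OAuth server can return the JWT directly from its introspection endpoint (for example when asked for application/jwt) are all hypothetical; the exact mechanism depends on your OAuth server and gateway.

import requests

INTROSPECTION_URL = "https://oauth.example.com/oauth/v2/introspect"   # hypothetical
GATEWAY_CLIENT_ID = "api-gateway"                                     # hypothetical
GATEWAY_CLIENT_SECRET = "change-me"                                   # hypothetical

jwt_cache = {}   # simple in-memory cache: opaque token -> JWT

def exchange_opaque_for_jwt(opaque_token):
    """Introspect the opaque token and return the JWT to forward to the API."""
    if opaque_token in jwt_cache:
        return jwt_cache[opaque_token]
    response = requests.post(
        INTROSPECTION_URL,
        data={"token": opaque_token},                     # RFC 7662 introspection request
        auth=(GATEWAY_CLIENT_ID, GATEWAY_CLIENT_SECRET),  # the gateway authenticates itself
        headers={"Accept": "application/jwt"},            # ask for the JWT form, if supported
    )
    if response.status_code != 200 or not response.text:
        return None   # invalid token or introspection failure
    jwt_cache[opaque_token] = response.text
    return response.text

# The gateway then swaps the header before proxying the request:
#   Authorization: Bearer <returned JWT>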

But now consider a distributed environment where there are multiple instances of the API gateway. If you're unlucky, the API requests might hit a different gateway instance each time, so the benefits of caching would be lost. To mitigate this, the OAuth server can be allowed to warm up the cache for the gateway instances. Depending on the gateway, that could mean pushing the reference/value token pair to each gateway, or, in other cases, pushing it to some common cache.
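
For the common-cache variant, a minimal sketch could look like the following. The shared Redis cache, the host name, and the on_token_issued hook are all assumptions standing in for whatever event mechanism your OAuth server and deployment actually offer.

import redis   # assuming a Redis cache shared by all gateway instances

shared_cache = redis.Redis(host="cache.internal", port=6379)   # hypothetical host

def on_token_issued(opaque_token, jwt_value, expires_in):
    """Push the reference/value pair so any gateway instance can resolve it."""
    # Keep the cache entry no longer than the token itself is valid.
    shared_cache.setex(opaque_token, expires_in, jwt_value)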

Token validation

When using the Phantom Token flow, the API is able to validate tokens using the public key of the OAuth server. To obtain the key, it can use the metadata of the server. The metadata, and where to obtain it, is described in RFC 8414 and/or OpenID Connect Discovery, depending on the server. So if your OAuth server supports one of these, we can get the public keys using plain HTTP requests. The keys are represented as a JSON Web Key Set (JWKS) and look something like this:

{
  "keys": [
    {
      "kty": "RSA",
      "kid": "1555934847",
      "use": "sig",
      "alg": "RS256",
      "n": "rCwwj0H1f2Gl3W6…8QlB9R9M_DxcKRQ",
      "e": "AQAB"
    }
  ]
}

This document contains one key with id 1555934847. It could contain a full list of keys.
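Fetching this document can be as simple as two HTTP requests. A minimal sketch in Python, assuming the server publishes OpenID Connect Discovery metadata (RFC 8414 works the same way, just with a different well-known path), with a hypothetical issuer URL:

import requests

def fetch_jwks(issuer):
    """Resolve jwks_uri from the issuer's discovery document and download the key set."""
    discovery_url = issuer.rstrip("/") + "/.well-known/openid-configuration"
    metadata = requests.get(discovery_url).json()
    return requests.get(metadata["jwks_uri"]).json()   # {"keys": [...]} as shown above

# jwks = fetch_jwks("https://oauth.example.com")       # hypothetical issuer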

So let’s have a look at a token, and see how to validate it.

eyJraWQiOiIxNTU1OTM0ODQ3IiwieDV0IjoiOWdCOW9zRldSRHRSMkhtNGNmVnJnWTBGcmZRIiwiYWxnIjoiUlMyNTYifQ.eyJhdF9oYXNoIjoiV3RDYWN6N3hrNHBHZDE0Y29PeTM3dyIsImRlbGVnYXRpb25faWQiOiJiNWZmYjMyZC0zNDdiLTQyYWQtODQzMS03MGEzM2I0N2UwMjIiLCJhY3IiOiJ1cm46c2U6Y3VyaXR5OmF1dGhlbnRpY2F0aW9uOmh0bWwtZm9ybTpodG1sLXByaW1hcnkiLCJzX2hhc2giOiJraUdtTUN0YmNmUy1rZ2FUSTZXLWNRIiwidXBkYXRlZF9hdCI6MTU0MDE5NzU2NSwiYXpwIjoidG9vbHMiLCJhdXRoX3RpbWUiOjE1NTc3ODMxMjMsInByZWZlcnJlZF91c2VybmFtZSI6ImphY29iIiwiZ2l2ZW5fbmFtZSI6IkphY29iIiwiZmFtaWx5X25hbWUiOiJJZGVza29nIiwiZXhwIjoxNTU3Nzg2NzMzLCJuYmYiOjE1NTc3ODMxMzMsImp0aSI6IjExOGYyMDJkLTcyZjctNGI5Zi05MTk0LTU5MDZiYzAwNjQwMiIsImlzcyI6Imh0dHBzOi8vbm9yZGljYXBpcy5jdXJpdHkuaW8vfiIsImF1ZCI6InRvb2xzIiwic3ViIjoiamFjb2IiLCJpYXQiOjE1NTc3ODMxMzMsInB1cnBvc2UiOiJpZCJ9.DnY8tSaT2VoDfVUazp28JnKPnl1o0bOaCZRRx6nR31vebG8xkTQLGGD56piiwp6HroehRECtniOxOMuPi91w7NBqVky3jbxDNYRyfmbTMxz6TRk2k1M-Tc2d1UrQposSf-GNeMxchVB47pzArUAcnACM58vB83RpCzdsbv3VxdLcP9Bp8hGSU3bGKSLDJIEYlWYV9au2qYrwLA2Avzj-ZCv4qK6WxIlcbQdfHkw3hsF_JULTxxvMHFwE6EAzxEXu5DRiNVJqn57P_jc4wW5SLkxS0fhBXFG2LZ2tnSGaoNc3JZ5g6LnJ-7IXvg14NWtzLM6yPMv5Dw_KxC5bBIFjFw

This is a JWT. It consists of three parts separated by a period ('.') character: a header, a body, and a signature. The header and body are Base64URL-encoded JSON documents, and the signature is encoded binary data. If we decode the header, it looks like this:

{
  "kid": "1555934847",
  "alg": "RS256"
}
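
As a side note, decoding the header (or the body) requires nothing more than splitting on the periods and Base64URL-decoding. A minimal sketch in Python; note that this only reads the token, it does not verify anything:

import base64
import json

def decode_part(part):
    """Base64URL-decode one JWT segment into a JSON object."""
    padded = part + "=" * (-len(part) % 4)   # restore the stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

def peek_jwt(token):
    """Split a JWT into its parts and decode the two JSON ones (no verification)."""
    header_b64, body_b64, signature_b64 = token.split(".")
    return decode_part(header_b64), decode_part(body_b64)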

The "kid" (key ID) points to the key in the JWKS that was used to sign this JWT, and the "alg" (algorithm) describes how it was signed. So to validate the JWT, the API looks up the key with that ID in the JWKS and verifies that the signature is correct. If the validation passes, the data in the body can be trusted, and the API can base its authorization decision on it. Mission accomplished!
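
A minimal sketch of that validation, using the PyJWT library (one choice among many JOSE libraries; the expected_audience parameter is an assumption standing in for the API's own identifier):

import json
import jwt   # the PyJWT library: pip install pyjwt[crypto]

def validate_token(token, jwks, expected_audience):
    """Return the verified claims, or raise if the token cannot be trusted."""
    header = jwt.get_unverified_header(token)
    # Find the key in the JWKS whose "kid" matches the token header.
    matching = [k for k in jwks["keys"] if k.get("kid") == header.get("kid")]
    if not matching:
        raise LookupError("no key with kid " + str(header.get("kid")))
    public_key = jwt.algorithms.RSAAlgorithm.from_jwk(json.dumps(matching[0]))
    # decode() verifies the signature, the expiry (exp/nbf) and the audience for us.
    return jwt.decode(token, key=public_key, algorithms=["RS256"],
                      audience=expected_audience)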

That means to validate an incoming token, the API must do the following:

● Get server metadata

● Cache keys locally

● Validate the signature

It is important to note that if a token arrives at the API with a "kid" that is not recognized, it can mean one of two things: either the server has rolled its keys, or the token comes from an untrusted source. To be sure, the API must first refresh its keys, and if the kid still isn't found, the source is untrusted. This way, the server can roll its keys at any time without the API dropping requests.
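
Putting the pieces together, the key rollover handling could look like this, reusing the hypothetical fetch_jwks() and validate_token() helpers from the sketches above:

cached_jwks = None   # keys cached locally, as in the list above

def validate_with_rollover(token, issuer, audience):
    """Validate a token, refreshing the cached JWKS once if the kid is unknown."""
    global cached_jwks
    if cached_jwks is None:
        cached_jwks = fetch_jwks(issuer)
    try:
        return validate_token(token, cached_jwks, audience)
    except LookupError:
        # Unknown kid: the server may have rolled its keys, so refresh once.
        cached_jwks = fetch_jwks(issuer)
        # If the kid still isn't found, this raises again and the token is rejected.
        return validate_token(token, cached_jwks, audience)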

This works really well in all environments that can keep state, like traditional web servers, Docker containers, Kubernetes and so on. But for other things, like lambda functions, we need something else.

Browser model

For stateless functions, performing token validation in the way described above adds a lot of overhead: the function would need to fetch the metadata and keys for each request. So we obviously need something else. For these cases we can use the same model that browsers use to decide that websites are trusted over HTTPS. We let the OAuth server act as a Certificate Authority (CA) that issues sub-certificates used to sign the tokens. The CA certificate is then distributed with the functions, either compiled in or by some other means provided by the platform.

The OAuth server can now issue JWTs with a slight difference from before:

{
  "x5c": ["MIICojCCAYoC…xMjExMjJaFw0yNDAxMjcxM"],
  "alg": "RS256"
}

Instead of the "kid" we had before, we have an "x5c". The x5c header contains the certificate (chain) that corresponds to the key used to sign the JWT. So to validate the token, the API needs to extract the certificate, verify that it was issued by the CA, and then validate the JWT signature using the public key of that certificate.
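
A rough sketch of this validation in Python, using the PyJWT and cryptography libraries (assumptions, not mandated by the pattern). The ca_cert parameter is assumed to be the CA certificate bundled with the function and loaded elsewhere:

import base64
import jwt
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def validate_x5c_token(token, ca_cert, audience):
    """Validate a JWT signed under the 'browser model' using an embedded CA certificate."""
    header = jwt.get_unverified_header(token)
    x5c = header["x5c"]
    first = x5c[0] if isinstance(x5c, list) else x5c   # the signing certificate
    signing_cert = x509.load_der_x509_certificate(base64.b64decode(first))

    # 1. Verify that the signing certificate was issued by our embedded CA
    #    (assuming an RSA CA; other key types need a different verify call).
    ca_cert.public_key().verify(
        signing_cert.signature,
        signing_cert.tbs_certificate_bytes,
        padding.PKCS1v15(),
        signing_cert.signature_hash_algorithm,
    )
    # 2. Verify the JWT signature with the certificate's public key.
    return jwt.decode(token, key=signing_cert.public_key(),
                      algorithms=["RS256"], audience=audience)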

So we have enabled lambda functions to validate JWTs without the HTTP overhead. And the server can still roll its keys, simply by getting a new signing certificate issued by the CA.

Final Thoughts

By following these patterns in your platform, you allow all the components of the platform to be distributed or to scale dynamically. But maybe even more importantly, it allows you to enforce your access policies in both APIs and gateways. The policy enforcement can be done without calling out to a third party, since all the data needed is provided in the request.

What enables us to create these patterns is the use of standards. We separate the concerns of the components in the platform, and by tying them together with open standards we not only allow them to scale separately, but also make them replaceable. Since the glue between the components is standard protocols, it is much easier to swap a component out. All of this enables you to build a truly scalable platform.

Daniel Lindau, Identity Specialist, Curity


Curity

Curity is the leading supplier of API-driven identity management, providing unified security for digital services. Visit curity.io or contact info@curity.io