Pritunl okta
3/6/2023

In my experience, the best way to eliminate the VPN is to expose your various internal business services as websites with TLS 1.2 and multi-factor authentication. Obviously, this isn't practical for everything. But if the thing you were using the VPN for is already a web application, you are basically halfway there. Ideally, you just directly expose a secure web application to clients, but in some cases (i.e. very old legacy systems) you probably want to put an nginx box in front and then put the authentication at that level. Users are scoped directly to the system of concern rather than an entire network of hosts. You can take security to the next level with server-side rendering of web content, in order to avoid additional required channels of communication or revealing implementation secrets to the client. We are at the point of placing our actual application servers directly on the public internet (with TLS 1.2/MFA/ACLs/etc.). Hiding behind VPNs or layers of reverse proxies seems to cause more harm than good.

I think most open source stuff supports client certs pretty well, but the issue is getting them to end users. I personally use mTLS as a two-part authentication/authorization system: services prove their identity to each other with certificates, and humans prove their identity to a proxy server that generates a bearer token for the lifetime of the request. Then each application sees (source application, user) and can make an authorization decision. I personally use Envoy as the proxy and cert-manager to manage certificates internally.

1) At application installation time, a cert is provisioned via cert-manager. Each application gets a one-word subject alternative name that is its network identity. The application is configured to use this cert: requiring incoming connections to present a client certificate that validates against the CA, and making outgoing connections with its own certificate. This lets pure service-to-service communication securely validate the other side of the connection. (This integrates nicely with things like Postgres, which expect exactly this sort of setup.) This is nice because, in theory, I don't have to configure each application with a Postgres password; Postgres can just validate the client cert and grant privileges based on that. (I have not set this up yet, however.) I also like the ability to reliably detect misconfiguration: if you misconfigure a DNS record, instead of making requests to the wrong server, the connection just breaks. And, of course, if the NSA is wiretapping your internal network, they don't get to observe the actual traffic. (But they probably compromised your control plane too, so it's all pointless.)

2) The other half is letting things outside of the cluster make requests to things inside the cluster. I use an Envoy proxy in the middle; this terminates the end user's TLS connection and routes requests to the desired backend, like every HTTPS reverse proxy ever. I wrote a "control plane" that automates most of the mTLS stuff (it's production/ekglue in the repository; ekglue is an open-source project that is agnostic to mTLS, my configuration adds it for my setup). At this point, users outside of the cluster will see a valid cert, so they know they've gone to the right site, and applications inside the cluster will see that traffic is coming from the proxy and can decide how they want to trust that.

You can peruse my production environment config for my personal projects at (dunno if that's the real link, my ISP broke routes to GitHub tonight, but it's something like that).
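The provisioning step in 1) might look roughly like this as a cert-manager Certificate resource. The names (`myapp`, `cluster-ca`) are placeholders, not taken from the author's config:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myapp            # hypothetical application name
spec:
  secretName: myapp-tls  # cert-manager writes tls.crt / tls.key / ca.crt here
  issuerRef:
    name: cluster-ca     # an issuer backed by the internal cluster CA
    kind: ClusterIssuer
  commonName: myapp
  dnsNames:
    - myapp              # the one-word SAN that is the app's network identity
  usages:
    - server auth
    - client auth        # the same cert is presented on outgoing connections
```

Mounting `myapp-tls` into the pod gives the application both halves of its identity: the key pair it presents, and the CA bundle it validates peers against.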
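The Postgres integration mentioned above (client certs instead of passwords) is a standard Postgres feature. A minimal sketch, assuming the server has been pointed at the cluster CA:

```conf
# postgresql.conf: trust client certs signed by the internal CA
ssl = on
ssl_ca_file = 'ca.crt'

# pg_hba.conf: require TLS and authenticate by client certificate alone.
# With the "cert" method, the certificate's CN must match the role name,
# so an app whose cert identity is "myapp" logs in as role "myapp".
hostssl all all all cert
```

No password ever needs to be distributed to the application; revoking access is a matter of not renewing the cert.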
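The service-to-service half described above can be sketched with Python's standard-library ssl module. This is an illustration, not the author's actual stack (which is Envoy plus ekglue); the file paths are hypothetical stand-ins for the cert-manager-issued material, and `peer_identity` shows how a backend might read the one-word SAN out of a verified peer certificate.

```python
import ssl

def make_server_context(cert_file, key_file, ca_file):
    """Server side: require every inbound connection to present a client
    certificate that validates against the internal CA."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert_file, key_file)  # this service's own identity
    ctx.load_verify_locations(ca_file)        # trust only the internal CA
    ctx.verify_mode = ssl.CERT_REQUIRED       # reject clients without a valid cert
    return ctx

def make_client_context(cert_file, key_file, ca_file):
    """Client side: present our own cert on outgoing connections and validate
    the server against the CA. check_hostname is on by default here, which is
    what makes a misconfigured DNS record fail closed instead of silently
    talking to the wrong server."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_cert_chain(cert_file, key_file)
    ctx.load_verify_locations(ca_file)
    return ctx

def peer_identity(peercert):
    """Extract the one-word network identity from ssl.getpeercert() output:
    the first DNS entry in the subjectAltName."""
    for kind, value in peercert.get("subjectAltName", ()):
        if kind == "DNS":
            return value
    return None
```

A backend would call `peer_identity(conn.getpeercert())` after the handshake and use the result as the "source application" in its authorization decision.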
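The human half (the proxy mints a bearer token for the lifetime of the request, and each backend sees the user behind it) can be sketched with a signed token. The author doesn't specify a token format, so the HMAC scheme and shared key below are purely illustrative; a real deployment might use JWTs or similar.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical key shared between the proxy and the backends.
SECRET = b"shared-between-proxy-and-apps"

def mint_token(user, ttl=60):
    """Proxy side: sign (user, expiry) so backends can tell who made the request."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"user": user, "exp": time.time() + ttl}).encode())
    sig = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload, hashlib.sha256).digest())
    return (payload + b"." + sig).decode()

def verify_token(token):
    """Backend side: return the user if the token is authentic and unexpired,
    otherwise None."""
    payload, sig = token.encode().split(b".")
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None          # forged or corrupted token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time():
        return None          # request outlived the token
    return claims["user"]
```

Combined with the client-cert check, a backend ends up with exactly the (source application, user) pair the comment describes, and can authorize on both.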