Networking Security: Atlas / Private Endpoints
Imagine connecting to your MongoDB Atlas cluster from the applications on your virtual private cloud with pinpoint precision, targeting only the exact cluster you need while keeping your data completely off the public Internet.
That's the power of private endpoints.
They provide the most secure way to access your data, creating a direct private tunnel between your application and specific Atlas resources.
In this lesson, we'll explore what private endpoints are, how they work, and when to use them.
We'll explore how they are different from other connection methods, and walk through setting up a private endpoint using AWS.
By the end, you'll have another useful tool in your security toolbox for protecting your MongoDB data.
When we look at a network peering setup, we see two virtual networks connected: your cloud provider's VPC and the MongoDB Atlas VPC.
This connection allows resources in your VPC to communicate with your Atlas cluster since you've opened the necessary ports. This approach works well, but it's connecting two entire networks just to access a single resource, your Atlas cluster.
It's like building a bridge between two islands when you only need to visit one specific building.
Instead, what if you could create a direct private connection specifically to the Atlas cluster and nothing else? Private endpoints create a secure, resource-specific link to your Atlas cluster.
Rather than connecting entire networks, private endpoints let you restrict cluster access to your VPC without exposing the cluster to the public Internet. Here's how they work: a private endpoint maps a specific resource, like your Atlas cluster, to an IP address within your virtual private cloud, essentially making that resource appear as if it's part of your network.
Traffic to the resource stays completely within your private network and the cloud provider's backbone infrastructure.
It's important to note that private endpoints are unidirectional, meaning they allow applications to connect to Atlas, but not the other way around. This adds another layer of security by limiting the direction of communication.
MongoDB Atlas supports private endpoints for all major cloud providers: AWS PrivateLink, Azure Private Link, and Google Cloud Private Service Connect.
Each implementation has its own specific configuration, but they all achieve the same goal.
One limitation to be aware of: if you have a multi-region or multi-cloud cluster, you must configure private endpoints for each region or cloud provider you're using.
This ensures your application can connect to the nearest cluster node without going over the public Internet.
Let's walk through setting up a private endpoint using AWS.
The process follows similar principles across other cloud providers, though the specific commands might vary. First, we need to create a private endpoint service in Atlas using the Atlas CLI.
The "atlas privateEndpoints aws create" command tells Atlas to create a new AWS private endpoint service.
The region flag specifies us-west-1 as the AWS region where the Atlas cluster and VPC are located, and the project ID flag identifies the Atlas project.
When successful, Atlas responds with a simple confirmation that the endpoint service was created, and that provides the unique ID we'll need in subsequent steps.
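As a sketch, the command might look like this; the project ID below is a placeholder, so substitute your own:

```shell
# Create an AWS private endpoint service in Atlas.
# --region must match the region of your cluster and VPC.
# The project ID here is a placeholder example.
atlas privateEndpoints aws create \
  --region us-west-1 \
  --projectId 5e2211c17a3e5a48f5497de3
```

Atlas prints a confirmation containing the new endpoint service's ID; keep it handy for the next steps.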
With that complete, we can move on to retrieving the endpoint service name from Atlas. For this, we're using the "privateEndpoints aws describe" command to get details about our endpoint service.
We provide the private endpoint ID we received earlier, the same project ID, and we'll specify JSON for output.
Once the command has been successfully executed, the response will contain several useful fields.
The cloud provider confirms that we're using AWS.
The status field shows "available", indicating that the service is ready to be used. And most importantly, the endpoint service name is AWS's identifier for the Atlas service; we'll need this value in the next step.
The interface endpoints array is empty because we haven't connected to any VPC endpoints yet.
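A minimal sketch of the describe call, again with placeholder IDs (the positional argument is the endpoint service ID returned by the create command):

```shell
# Look up the endpoint service we just created.
# Both IDs below are placeholder examples.
atlas privateEndpoints aws describe 5f4fe14da2b47835a58c63a2 \
  --projectId 5e2211c17a3e5a48f5497de3 \
  --output json
```

In the JSON response, look for the fields describing the cloud provider, the status, the endpoint service name, and the (initially empty) list of interface endpoints.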
Next, we'll need to create an interface endpoint in our AWS VPC to connect to Atlas. The "aws ec2 create-vpc-endpoint" command creates the connection point in our VPC.
For this, we provide the VPC ID, which identifies the VPC where the endpoint will be created.
The VPC ID can be found in the AWS dashboard.
The region must match the region we specified when creating the Atlas endpoint.
For service name, we use the endpoint service name value from the previous step.
We specify the VPC endpoint type as interface, and then we provide the subnet IDs where the endpoint should be available.
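Putting those flags together, the AWS CLI call might look like the following sketch; the VPC ID, subnet ID, and service name are all placeholder values you would replace with your own:

```shell
# Create an interface VPC endpoint in AWS that points at the Atlas service.
# --service-name is the endpointServiceName value from the previous step.
# All identifiers below are placeholder examples.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0fd51100fe0b8b8c8 \
  --region us-west-1 \
  --service-name com.amazonaws.vpce.us-west-1.vpce-svc-0f7e3d5c9f4a1b2c3 \
  --vpc-endpoint-type Interface \
  --subnet-ids subnet-0c26c3a3b4f5d6e7f
```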
After running the command, we receive a detailed JSON object with information about the new VPC endpoint.
The most important part is the VPC endpoint ID, which looks like "vpce-" followed by a string of characters.
We'll need this ID in the next step. With our AWS endpoint created, we now tell Atlas about it. The "atlas privateEndpoints aws interfaces create" command associates our AWS VPC endpoint with the Atlas endpoint service.
We provide the endpoint service ID from our first step, the private endpoint ID that AWS generated when we created the VPC endpoint, and our project ID again.
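As a sketch with placeholder IDs, the association command might look like this:

```shell
# Tell Atlas about the AWS VPC endpoint.
# The positional argument is the Atlas endpoint service ID;
# --privateEndpointId is the vpce-... ID returned by AWS.
# All identifiers below are placeholder examples.
atlas privateEndpoints aws interfaces create 5f4fe14da2b47835a58c63a2 \
  --privateEndpointId vpce-0a1b2c3d4e5f67890 \
  --projectId 5e2211c17a3e5a48f5497de3
```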
After running the command, we receive some information about the connection.
For instance, the connection status initially shows pending acceptance because AWS and Atlas are establishing the connection. It will take a moment for Atlas and AWS to get everything set up. Once the connection is established, we can use the "privateEndpoints aws interfaces describe" command to check the status of the interface.
For this, we provide the interface endpoint ID, which is the same as the AWS VPC endpoint ID, the endpoint service ID, and our project ID.
After running the command, the response shows that the interface is now available.
The connection status value of available confirms that our private endpoint is fully set up and ready to use.
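The status check might look like the following sketch, again with placeholder IDs:

```shell
# Check the status of the interface endpoint.
# The positional argument is the AWS VPC endpoint ID (vpce-...).
# All identifiers below are placeholder examples.
atlas privateEndpoints aws interfaces describe vpce-0a1b2c3d4e5f67890 \
  --endpointServiceId 5f4fe14da2b47835a58c63a2 \
  --projectId 5e2211c17a3e5a48f5497de3
```

Re-run this command until the connection status reports available.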
We can further confirm that our private endpoint is available by visiting the Atlas cluster dashboard.
Our private endpoint is up and running, but you will still need to configure a security group within your AWS VPC to allow inbound traffic from your applications to the endpoint. To learn more about configuring security groups, visit the AWS documentation.
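As one hedged example, a security group rule could be added with the AWS CLI like this; the security group ID and CIDR are placeholders, and you should confirm the exact port range your Atlas deployment uses against the Atlas documentation before applying it:

```shell
# Allow inbound TCP traffic from the application subnet to the
# security group attached to the VPC endpoint.
# The group ID, port range, and CIDR below are placeholder examples;
# verify the required port range in the Atlas documentation.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 1024-65535 \
  --cidr 10.0.0.0/16
```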
Once that is complete, applications in your VPC can connect to your Atlas cluster through this private endpoint, with all traffic staying within AWS's private network infrastructure.
Great work. Let's take a moment to review what we learned in this lesson. First, we learned how private endpoints provide resource-specific access to your Atlas clusters, creating a direct private connection that keeps your data off the public Internet.
Unlike network peering, which connects entire networks, private endpoints give you precise control over which resources can be accessed.
After that, we set up our own private endpoint using AWS.
Finally, we confirmed that our private endpoint was operational on both Atlas and AWS.
