Sometimes you need a “jump box” you can SSH into for shell-level access to a production environment, whether for debugging, maintenance, or as part of a deploy process.
This comes with a tradeoff. If you are using a private AWS subnet (which you should, for security), you have to open up access for the jump box, making it a big target for attackers.
Ideally, you would get access to a production environment only when it’s needed, and as securely as possible. AWS Session Manager has pretty much all the ingredients, but on its own it connects you to existing, long-lived servers; we want to allow connections only temporarily, on a brand-new server.
That’s where ECS comes in: by combining the two, we can spin up a brand-new server, connect to it securely, and shut it down when we’re done.
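One prerequisite worth noting: ECS Exec (what `execute-command` uses under the hood) requires the task’s IAM role to allow the SSM messages actions. A minimal sketch, assuming hypothetical role and policy names:

```bash
# Hypothetical role and policy names; without these permissions on the
# task role, execute-command can't open a session channel to the task.
aws iam put-role-policy \
  --role-name my-ecs-task-role \
  --policy-name allow-ecs-exec \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    }]
  }'
```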
## How to create an ephemeral jump box with AWS ECS and Session Manager
- Make sure the Session Manager plugin for the AWS CLI is installed locally.
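One way to verify (running the plugin with no arguments prints a success message if it’s on your PATH):

```bash
session-manager-plugin
```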
- Create a new ECS task (a new server) using the same task definition as the production instance you want to access, or create a new definition just for bastion boxes:
```bash
aws ecs run-task \
  --cluster {YOUR CLUSTER NAME} \
  --task-definition {YOUR TASK DEFINITION NAME} \
  --count 1 \
  --launch-type FARGATE \
  --enable-execute-command \
  --network-configuration '{"awsvpcConfiguration": {"subnets": [{LIST OF SUBNETS YOU NEED ACCESS TO}], "assignPublicIp": "ENABLED"}}' \
  --overrides '{"containerOverrides": [{"name": "shell", "command": ["sleep", "{TIME IN SECONDS BEFORE SHUTTING DOWN}"]}]}'
```
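As a concrete sketch (the cluster, task definition, subnet, and timeout values below are hypothetical), the CLI’s global `--query` flag can capture the task ARN directly instead of copying it out of the JSON response:

```bash
# Hypothetical values; "shell" matches the container name in the overrides.
TASK_ARN=$(aws ecs run-task \
  --cluster production \
  --task-definition app \
  --count 1 \
  --launch-type FARGATE \
  --enable-execute-command \
  --network-configuration '{"awsvpcConfiguration": {"subnets": ["subnet-0123456789abcdef0"], "assignPublicIp": "ENABLED"}}' \
  --overrides '{"containerOverrides": [{"name": "shell", "command": ["sleep", "3600"]}]}' \
  --query 'tasks[0].taskArn' \
  --output text)
```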
- Grab the task ARN from the output (or the `TASK_ARN` variable above), wait for the task to start running, and connect:
```bash
aws ecs wait tasks-running --cluster {YOUR CLUSTER NAME} --tasks "{TASK ARN}"
aws ecs execute-command --cluster {YOUR CLUSTER NAME} --task "{TASK ARN}" --command "bash" --interactive 2>/dev/null
```
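If the task definition runs more than one container, `execute-command` also accepts a `--container` flag to choose which one to attach to:

```bash
# "shell" matches the container name used in the overrides above.
aws ecs execute-command \
  --cluster {YOUR CLUSTER NAME} \
  --task "{TASK ARN}" \
  --container shell \
  --command "bash" \
  --interactive
```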
- Stop the task when you’re done (or let the `sleep` expire):
```bash
aws ecs stop-task --cluster {YOUR CLUSTER NAME} --task "{TASK ARN}"
```
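If you wrap the whole flow in a script, a shell `trap` is a cheap way to guarantee the box never outlives your session, even if you disconnect or hit Ctrl-C. A sketch, assuming `TASK_ARN` was captured as above:

```bash
#!/usr/bin/env bash
set -euo pipefail

CLUSTER="{YOUR CLUSTER NAME}"

# Stop the task on any exit so the jump box is always cleaned up.
trap 'aws ecs stop-task --cluster "$CLUSTER" --task "$TASK_ARN" >/dev/null' EXIT

aws ecs wait tasks-running --cluster "$CLUSTER" --tasks "$TASK_ARN"
aws ecs execute-command --cluster "$CLUSTER" --task "$TASK_ARN" --command "bash" --interactive
```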