Credits: Inspired by Dan V.'s cloudshell-store project, which demonstrated the CloudShell API, I built an unofficial boto3 client by injecting a custom botocore service model. The result is a native boto3 experience: SigV4 signing, retries, pagination, and error handling.
⚠️ Undocumented API: AWS can break it at any time. For exploration and learning only.
🔗 https://github.com/guyon-it-consulting/cloudshell-boto3
AWS CloudShell Has a Hidden API. Here's How to Use It with boto3.
AWS CloudShell is a free, browser-based shell built into the AWS Console: pre-authenticated, with the AWS CLI, Python, Docker, and ~1 GB of persistent storage per region. No setup, no EC2, no cost. It even supports VPC mode, so it can reach your private resources directly.
But here's the thing: it has no public API. No SDK, no CLI, no CloudFormation support. Console only.
Or so it seems.
Discovering the API
If you open your browser's developer tools while launching CloudShell, you'll notice something interesting. The Console makes REST calls to https://cloudshell.<region>.amazonaws.com, with endpoints like /createEnvironment, /describeEnvironments, /createSession, and more. These requests are signed with standard AWS SigV4 authentication and use the cloudshell: IAM namespace. In other words, this is a full AWS API; it's just not documented.
Dan V. demonstrated that you could call these endpoints programmatically. Building on their work, I wanted to go further: what if we could use this API with the standard boto3 interface, complete with SigV4 signing, retries, pagination, and error handling?
That's exactly what cloudshell-boto3 does.
How it works
The trick is botocore's extensibility. Every AWS service in boto3 is described by a JSON service model, the same format AWS uses internally. If you provide your own model and inject it into botocore's loader, boto3 treats it as a standard service. You get a native client with proper request signing, serialization, and error handling.
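For reference, a botocore service model is roughly shaped like this. This is a trimmed sketch, not the actual model shipped in the repo; the apiVersion, uid, and shape names here are placeholders:

```json
{
  "metadata": {
    "apiVersion": "2018-04-02",
    "endpointPrefix": "cloudshell",
    "protocol": "rest-json",
    "serviceFullName": "AWS CloudShell",
    "signatureVersion": "v4",
    "uid": "cloudshell-2018-04-02"
  },
  "operations": {
    "DescribeEnvironments": {
      "name": "DescribeEnvironments",
      "http": {"method": "POST", "requestUri": "/describeEnvironments"},
      "output": {"shape": "DescribeEnvironmentsResponse"}
    }
  },
  "shapes": {
    "DescribeEnvironmentsResponse": {
      "type": "structure",
      "members": {"Environments": {"shape": "EnvironmentList"}}
    },
    "EnvironmentList": {"type": "list", "member": {"shape": "Environment"}},
    "Environment": {"type": "structure", "members": {}}
  }
}
```

The metadata block drives signing (signatureVersion, endpointPrefix); the operations and shapes drive request serialization and response parsing.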
Here's how you create the client:
import boto3
import botocore.session
# Inject the custom service model
# (the search path must contain cloudshell/<api-version>/service-2.json)
bc_session = botocore.session.get_session()
loader = bc_session.get_component('data_loader')
loader.search_paths.insert(0, './my-additional-models')
# Create the boto3 session and client
session = boto3.Session(botocore_session=bc_session)
client = session.client('cloudshell', region_name='eu-west-1')
The API surface
Once you have the client, you get access to the full CloudShell lifecycle. Let me walk you through the key operations.
List your environments
response = client.describe_environments()
print(response['Environments'])
Create an environment
For a public environment, call with no arguments. For a VPC environment, pass the network configuration:
# Public environment
response = client.create_environment()
env_id = response['EnvironmentId']

# VPC environment
response = client.create_environment(
    EnvironmentName='my-vpc-shell',
    VpcConfig={
        'VpcId': 'vpc-0123456789abcdef0',
        'SubnetIds': ['subnet-0123456789abcdef0'],
        'SecurityGroupIds': ['sg-0123456789abcdef0'],
    },
)
VPC environments require additional IAM permissions (ec2:CreateNetworkInterface, ec2:CreateTags, etc.). See the AWS docs.
Wait for it, then connect
Environments take a few seconds to start. Poll the status, then open an interactive session via SSM WebSocket:
import uuid
import time

# Wait for RUNNING
while client.get_environment_status(EnvironmentId=env_id)['Status'] != 'RUNNING':
    time.sleep(5)

# Open a session
sess = client.create_session(
    EnvironmentId=env_id,
    SessionType='TMUX',
    TabId=str(uuid.uuid4()),
    QCliDisabled=True,
)
print(sess['SessionId'])
print(sess['StreamUrl'])  # wss://ssmmessages.<region>.amazonaws.com/...
Use the StreamUrl, SessionId, and TokenValue with the session-manager-plugin to get an interactive shell:
import json, subprocess

payload = json.dumps({
    'SessionId': sess['SessionId'],
    'TokenValue': sess['TokenValue'],
    'StreamUrl': sess['StreamUrl'],
})
subprocess.run(['session-manager-plugin', payload, 'eu-west-1', 'StartSession'])
Upload and download files
The API provides S3 presigned URLs for file transfer β the same mechanism the Console uses behind the scenes:
import requests

resp = client.get_file_upload_urls(EnvironmentId=env_id)
with open('script.sh', 'rb') as f:
    requests.post(
        resp['FileUploadPresignedUrl'],
        data=resp['FileUploadPresignedFields'],
        files={'file': ('script.sh', f)},
    )
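Downloads work the same way in reverse via get_file_download_urls. The exact request parameter and response field names aren't documented; the sketch below assumes a FileDownloadPath parameter and a FileDownloadPresignedUrl field mirroring the upload call, so verify both against your browser's network tab:

```python
def get_download_url(client, env_id, remote_path):
    """Ask CloudShell for a presigned download URL for a file in the environment.

    FileDownloadPath (parameter) and FileDownloadPresignedUrl (response field)
    are assumptions mirroring the upload call; check the real names in your
    browser's developer tools.
    """
    resp = client.get_file_download_urls(
        EnvironmentId=env_id,
        FileDownloadPath=remote_path,  # hypothetical parameter name
    )
    return resp['FileDownloadPresignedUrl']  # hypothetical field name
```

Pass the returned URL to a plain requests.get to fetch the file contents.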
Keep it alive
CloudShell environments go to sleep after inactivity. Send heartbeats to prevent that:
import time
import threading

def heartbeat_loop(client, env_id, interval=300):
    while True:
        client.send_heart_beat(EnvironmentId=env_id)
        time.sleep(interval)

t = threading.Thread(target=heartbeat_loop, args=(client, env_id), daemon=True)
t.start()
Lifecycle management
You also get start_environment, stop_environment, delete_environment, and delete_session β the full lifecycle, from creation to cleanup.
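Putting the teardown together, a minimal cleanup helper might look like this. A sketch only: the operation names come from the service model, but the exact parameters of delete_session are an assumption:

```python
def cleanup(client, env_id, session_id=None):
    """Tear down a CloudShell environment: close the session if one is
    open, stop the environment, then delete it."""
    if session_id is not None:
        # parameter names for delete_session are an assumption
        client.delete_session(EnvironmentId=env_id, SessionId=session_id)
    client.stop_environment(EnvironmentId=env_id)
    client.delete_environment(EnvironmentId=env_id)
```

Deleting the session first avoids tearing down an environment with a live SSM WebSocket attached.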
The complete API reference
Here's every operation available in the reverse-engineered service model:
| Operation | Description |
|---|---|
| `describe_environments` | List all CloudShell environments for the current IAM principal |
| `create_environment` | Create a new public or VPC environment |
| `get_environment_status` | Get current status (CREATING, RUNNING, SUSPENDED, ...) |
| `start_environment` | Start a suspended environment |
| `stop_environment` | Stop a running environment |
| `delete_environment` | Delete an environment |
| `create_session` | Open an interactive SSM WebSocket session |
| `delete_session` | Close an active session |
| `send_heart_beat` | Keep an environment alive |
| `get_file_upload_urls` | Get S3 presigned URLs for file upload |
| `get_file_download_urls` | Get S3 presigned URLs for file download |
| `put_credentials` | Forward console credentials (console-only, not usable programmatically) |
IAM permissions
CloudShell uses the cloudshell: IAM namespace. Each API operation maps to a corresponding IAM action (cloudshell:CreateEnvironment, cloudshell:DescribeEnvironments, etc.). For VPC environments, IAM also supports condition keys to restrict which VPCs, subnets, and security groups can be used:
- `cloudshell:VpcIds`
- `cloudshell:SubnetIds`
- `cloudshell:SecurityGroupIds`
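As a sketch, a policy restricting VPC environments to a single subnet might look like this. The action and condition-key names come from the observed cloudshell: namespace, but since the API is undocumented, treat the exact condition semantics as unverified:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "cloudshell:CreateEnvironment",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "cloudshell:SubnetIds": "subnet-0123456789abcdef0"
        }
      }
    }
  ]
}
```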
Limits to keep in mind
- Max 2 VPC environments per IAM principal
- Max 5 security groups per VPC environment
- ~1 GB persistent storage per environment per region
- Environments sleep after inactivity (use heartbeats to prevent)
- Environments can be reclaimed by AWS at any time
Getting started
Clone the repo and try it yourself:
git clone https://github.com/guyon-it-consulting/cloudshell-boto3.git
cd cloudshell-boto3
pip install -r requirements.txt
AWS_PROFILE=my-profile python simple_example.py
The repo includes a complete example that creates an environment, connects via session-manager-plugin, injects credentials, and runs a command β the full lifecycle in one script.
– Jérôme