
AWS T-Pot Honeypot Guide
1. Overview of T-Pot 24.04.1
T-Pot is an all-in-one honeypot platform that bundles 20+ honeypots (Cowrie, Dionaea, Conpot, etc.), plus multiple security and analysis tools (Elastic Stack, Suricata, CyberChef, Spiderfoot, etc.) in Docker containers.
Key highlights from T-Pot 24.04.1:
- Multi-distribution support: Installs on many Linux distros (Ubuntu, Debian, Fedora, Rocky, Alma, openSUSE, etc.).
- LLM-based honeypots: Beelzebub and Galah (optional; they require either a GPU-backed Ollama instance or ChatGPT API access).
- Recommended system specs:
- Sensor: 8GB RAM & 128GB disk
- Hive (i.e., full T-Pot on one system): 16GB RAM & 256GB disk
- Default daily reboot and automatic updates, with a flexible `docker-compose.yml`.
- New default SSH port is `64295`.
- Blackhole Mode to stealthily drop known mass scanners.
Disclaimer: Deploy honeypots in isolated or dedicated environments. By design, honeypots attract malicious traffic.
2. AWS Prerequisites and Considerations
- Basic AWS, EC2, SSH, and Linux Commands: You should be familiar with launching EC2 instances, connecting via SSH, and basic Linux command usage.
- AWS Account: Must have permissions to launch/manage EC2, allocate Elastic IPs, manage Security Groups, etc.
- Instance Requirements:
- At minimum, an 8GB RAM instance for a “sensor” type T-Pot.
- If you want to enable more honeypots or the LLM-based honeypots (like Beelzebub/Galah), aim for 16GB+ RAM.
- t3.large (2 vCPU, 8GB RAM) is a bare minimum for a sensor. For heavier usage, consider t3.xlarge (4 vCPU, 16GB).
- Storage:
- T-Pot can accumulate large amounts of logs, especially if operated for several days.
- Start with 128GB EBS volume for a sensor, or 256GB for a full hive if storing everything locally.
- VPC & Networking:
- Choose a public subnet to expose T-Pot to the internet.
- Plan your Security Group inbound rules carefully (see Section 6 below).
- Consistent Public IP:
- Use an Elastic IP to avoid changes if you stop/restart the instance.
- Costs:
- Consider compute (CPU/RAM), EBS storage, and data transfer fees.
- Security:
- Honeypots are intentionally vulnerable and will be probed.
- Do not store sensitive data.
- Restrict management ports to your IP.
- Keep the OS and T-Pot updated; plan to remove the instance after your test.
3. Launching an EC2 Instance (Ubuntu 22.04)
T-Pot supports multiple distributions, but Ubuntu 22.04 LTS is a straightforward choice in AWS.
- Choose AMI:
- “Ubuntu Server 22.04 LTS (HVM), SSD Volume Type”
- Prefer the stable Canonical jammy image.
- Select Instance Type:
- For a minimal sensor: `t3.large` (8GB).
- For a heavier deployment/hive: `t3.xlarge` (16GB) or higher.
- Key pair:
- Use a `.pem` file for OpenSSH or a `.ppk` file for PuTTY.
- Select an existing key pair or create a new key pair.
- Used for SSH into your EC2 instance.
- Configure Network:
- Use a public subnet for direct internet access.
- If you prefer advanced setups, you can NAT/forward only certain ports; ensure T-Pot sees inbound connections.
- Add Storage:
- At least 128GB EBS.
- If you plan on high log volume, go with 256GB.
- In this example: 16GB (root volume) for OS + 256GB (EBS volume) for logs.
- Security Group:
- Inbound rule for SSH (port 22) to your IP for now.
- We’ll configure T-Pot’s ports (64295, 64297, etc.) in Section 6.
- Elastic IP (Optional):
- Allocate & attach an Elastic IP to keep a stable IP if you stop your instance.
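If you prefer the CLI, here is a minimal sketch of the same launch with the AWS CLI. All IDs below (AMI, key pair, subnet, security group, instance) are placeholders; look up the current Ubuntu 22.04 AMI ID for your region and substitute your own values.

```bash
# Launch a t3.xlarge with a 128GB gp3 root volume (all IDs are placeholders).
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.xlarge \
  --key-name my-tpot-key \
  --subnet-id subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":128,"VolumeType":"gp3"}}]' \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=tpot-sensor}]'

# (Optional) Allocate an Elastic IP and attach it to the new instance:
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0
```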
Figure 1: Example EC2 setup. Region us-east-2 (Ohio)
Figure 2: Example EC2 setup (continued)
4. Preparing the System (Post-Launch Setup)
SSH to the Instance:
Navigate to the directory with your `.pem` or `.ppk` (converted if using PuTTY). For OpenSSH:
chmod 400 your-key.pem
ssh -i /path/to/key.pem ubuntu@<Public-DNS>
- Use port 22 initially if T-Pot isn't installed yet. T-Pot later shifts SSH to `64295`.
- You can find the connect command in the EC2 console → Connect → SSH client.
Update & Upgrade:
sudo apt-get update && sudo apt-get upgrade -y
Install `curl` (if missing):
sudo apt-get install -y curl
(Optional) Additional Utilities:
sudo apt-get install -y git wget net-tools
5. Installing T-Pot 24.04.1
5.1 System Requirements Check
- Confirm at least 8GB RAM & 128GB disk.
- Ensure non-filtered, outgoing internet (no proxies).
- T-Pot listens on many honeypot ports; make sure your Security Group doesn't block the ones you want exposed to real attacks.
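A quick way to sanity-check these from the shell (standard Linux tools, nothing T-Pot-specific):

```bash
free -g       # Mem "total" should be >= 8 (GiB) for a sensor, 16 for a hive
df -h /       # root filesystem should have >= 128G for a sensor
curl -sI https://github.com | head -n 1   # prints an HTTP status line if outbound traffic works
```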
5.2 One-line Installer (Recommended)
The newest T-Pot release includes a simplified installation script. Run it as a non-root user in `$HOME`:
env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/master/install.sh)"
- Choose your installation type (`h` for Hive, `s` for Sensor, etc.).
- Make sure you have the recommended requirements (Hive = 16GB+).
- You'll be prompted for a `<WEB_USER>` and password (BasicAuth for the T-Pot WebUI).
What the installer does:
- Changes SSH port to `64295`.
- Installs Docker + Docker Compose plugin & recommended packages.
- Disables conflicting services (DNS stub).
- Sets SELinux to monitor mode (some distros).
- Adds a `tpot` system user & a daily reboot cron.
- Configures T-Pot to run at startup, then reboots.
TIP: Watch for port conflict messages if you have custom services.
5.3 Reboot
Once finished, reboot:
sudo reboot
When the instance is back, T-Pot is running. SSH now uses port 64295 by default.
6. AWS Security Group Configuration
T-Pot uses a wide range of ports. At minimum:
| Port | Purpose |
|---|---|
| 64295 (TCP) | SSH (T-Pot's new SSH port) |
| 64297 (TCP) | NGINX reverse proxy (Kibana, Attack Map, CyberChef, etc.) |
| 1–64000 (TCP/UDP) | (Optional) Full coverage; otherwise open only the ports you need. |
Best Practice:
- Restrict `64295` and `64297` to your IP only.
- If you only want specific honeypots, open those ports.
- In the setup below, we open ports 1–64000.
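If you manage Security Groups from the CLI, a minimal sketch of these rules (the group ID and source IP are placeholders; substitute your own):

```bash
# Management ports: your IP only.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 64295 --cidr 203.0.113.10/32
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 64297 --cidr 203.0.113.10/32
# Honeypot range: open to the world, TCP and UDP.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 1-64000 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol udp --port 1-64000 --cidr 0.0.0.0/0
```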
AWS Security Group Configuration for T-Pot
Note: If you don’t have a static IP, your IP lease may change. Update the inbound rules if your IP changes to regain SSH/WebUI access.
7. Post-Install Verification & Basic Usage
7.1 SSH on New Port
After the T-Pot reboot, SSH on port 64295:
ssh -p 64295 -i your-key.pem ubuntu@<Public-DNS>
Example:
ssh -p 64295 -i your-key.pem ubuntu@ec2-xx-xxx-xxx-xxx.us-east-2.compute.amazonaws.com
- Username: `ubuntu` (or your OS user).
7.2 Check T-Pot Service and Running Containers
systemctl status tpot
dps
- Should see multiple containers (Cowrie, Dionaea, Kibana, etc.).
- Status should indicate “Up.”
7.3 T-Pot Landing Page
Open your browser:
https://<AWS-Public-IPv4>:64297
Obtain the `AWS-Public-IPv4` from the EC2 instance page.
Input the `<WEB_USER>` credentials.
- This logs you into the T-Pot WebUI.
- From here, you can access:
- Kibana, CyberChef, Elasticvue, Spiderfoot
- Attack Map
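Optionally, a quick reachability check from your workstation (T-Pot serves the WebUI over a self-signed certificate, hence `-k`; a 401 means NGINX is up and expecting the BasicAuth credentials):

```bash
# Print only the HTTP status code of the WebUI endpoint.
curl -sk -o /dev/null -w "%{http_code}\n" https://<AWS-Public-IPv4>:64297
```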
7.4 Kibana (Main Log Analysis Interface)
- Click Kibana in the T-Pot Landing Page, or:
https://<AWS-Public-IPv4>:64297/kibana
- Explore the dashboards for each honeypot.
- You’ll see hits accumulating over time.
8. Daily Reboot & Cron Job
By default, T-Pot sets up a daily reboot around 2:42 AM (system's local time) via `crontab`.
This ensures:
- Containers restart cleanly.
- Unused Docker containers, images, and volumes are pruned.
- The risk of memory leaks or disk issues from long runs is reduced.
You can customize or remove this in the root crontab:
sudo crontab -e
# T-Pot daily reboot
42 2 * * * bash -c 'systemctl stop tpot.service && docker container prune -f; docker image prune -f; docker volume prune -f; /usr/sbin/shutdown -r +1 "T-Pot Daily Reboot"'
If you want uninterrupted operation, remove/comment out that line.
9. Gathering and Managing Logs
- Kibana:
- Real-time visualization (top attackers, geolocation, etc.).
- Export logs in Kibana (Stack Management → Saved Objects or Discover).
- Exporting Logs:
- T-Pot data is in Elasticsearch (Docker container).
- For offline analysis, you can:
- Export from Kibana as NDJSON/CSV.
- Copy data from `~/tpotce/data/...`.
- HPFeeds or third-party submission is also available.
- Log Retention:
- T-Pot sets 30-day index lifecycle policy by default. Adjust in Kibana if needed.
- Persistent Data:
- Volumes live under `~/tpotce/data`. For forensics, create an EBS snapshot or tarball at test end.
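For example, a minimal tarball sketch before teardown (assumes the default data path):

```bash
# Stop T-Pot so files aren't being written, archive the data directory, restart.
sudo systemctl stop tpot
sudo tar -czf ~/tpot-data-$(date +%F).tar.gz -C ~/tpotce data
sudo systemctl start tpot
# From your workstation, pull the archive down over T-Pot's SSH port:
# scp -P 64295 -i your-key.pem ubuntu@<Public-DNS>:tpot-data-*.tar.gz .
```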
10. Enhancing Security
- Restrict Management:
- Restrict Kibana (`64297`) and SSH (`64295`) to your IP.
- OS-Level Firewall:
- T-Pot modifies some firewall settings. Be sure it doesn’t block needed honeypot ports.
- Blackhole Mode: `TPOT_BLACKHOLE=ENABLED` in `~/tpotce/.env` blocks known mass scanners but reduces overall hits (see the sketch after this list).
- Avoid exposing T-Pot's Docker daemon or other non-honeypot services.
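A minimal sketch of toggling Blackhole Mode, assuming the stock `.env` already contains a `TPOT_BLACKHOLE` line:

```bash
# Flip the flag and restart T-Pot so the change takes effect.
sudo systemctl stop tpot
sudo sed -i 's/^TPOT_BLACKHOLE=.*/TPOT_BLACKHOLE=ENABLED/' ~/tpotce/.env
sudo systemctl start tpot
```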
11. Disabling Community Data Submission (Optional)
T-Pot sends anonymized data to Sicherheitstacho by default. To opt out:
sudo systemctl stop tpot
- Edit `~/tpotce/docker-compose.yml`.
- Remove/comment out the `ewsposter` block.
sudo systemctl start tpot
12. Day to Day Operation & Teardown
- Operation:
- Check Kibana daily.
- Watch EC2 usage in CloudWatch.
- T-Pot reboots daily unless disabled.
- Before Teardown:
- Export data from Kibana or copy `~/tpotce/data` for offline analysis.
- (Optional) EBS snapshot or AMI image for full backups.
- Terminate the EC2:
- Disassociate your Elastic IP if no longer needed.
- Terminate the instance once logs are gathered.
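If you use the CLI, a teardown sketch (the IDs are placeholders for your own):

```bash
# Release the Elastic IP and terminate the instance once logs are saved.
aws ec2 disassociate-address --association-id eipassoc-0123456789abcdef0
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```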
13. Quick Reference & Troubleshooting
- Check Container Health:
dpsw 2
- Review Logs:
docker logs -f <container_name>
cat ~/tpotce/data/tpotinit.log
- Stop T-Pot:
sudo systemctl stop tpot
- Start T-Pot:
sudo systemctl start tpot
- If You Lose SSH Access:
- Check AWS SG rules, daily reboot, and confirm port changes.
- Remember port `64295`.
- Low RAM/Disk:
- Elasticsearch/Logstash can crash if under-resourced.
- Monitor with `htop` or `docker stats`.
- Disk Space:
- Keep an eye on `df -h` to avoid filling up the partition.
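If the disk does fill up, a quick reclaim sketch using the same prunes the daily-reboot cron runs (Section 8):

```bash
# Stop T-Pot, prune unused Docker artifacts, restart.
sudo systemctl stop tpot
docker container prune -f
docker image prune -f
docker volume prune -f
sudo systemctl start tpot
```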
14. Add Second Storage Volume (Optional)
If you run out of space for logs, you can attach a second EBS volume. Many prefer keeping logs on a volume separate from the OS disk, using gp3 for cost savings. Create the volume in AWS, attach it, then follow these steps.
14.1 Format Your Drive
Create a Partition (optional, but recommended)
First confirm which device is the new volume with `lsblk`; on Nitro instances the root volume is usually `/dev/nvme0n1`, so a second attached volume typically shows up as `/dev/nvme1n1`. Adjust the device names below if yours differ.
sudo parted /dev/nvme1n1 -- mklabel gpt
sudo parted /dev/nvme1n1 -- mkpart primary ext4 0% 100%
Afterward, you should have `/dev/nvme1n1p1`.
Format (ext4):
sudo mkfs.ext4 /dev/nvme1n1p1
(Adjust if parted labeled it differently.)
14.2 Create a Mount Point and Mount It
sudo mkdir /data
sudo mount /dev/nvme1n1p1 /data
df -h
You should see `/data` with the new capacity (e.g., 256GB).
14.3 Add an Entry to /etc/fstab
So it auto-mounts on reboot:
echo "/dev/nvme1n1p1 /data ext4 defaults 0 2" | sudo tee -a /etc/fstab
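Since NVMe device names can shuffle between reboots, a UUID-based fstab entry is more robust. A minimal sketch (substitute the UUID that `blkid` prints for your volume):

```bash
# Look up the filesystem UUID and reference it in /etc/fstab instead of the device path.
sudo blkid /dev/nvme1n1p1
# Example fstab line (replace with your actual UUID):
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /data ext4 defaults 0 2
```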
14.4 Move T-Pot Logs to the New Drive
T-Pot stores data in `~/tpotce/data`. Let's relocate it:
Stop T-Pot:
sudo systemctl stop tpot
Move the Data Folder:
sudo mv ~/tpotce/data /data/tpot-data
Symlink it back:
sudo ln -s /data/tpot-data ~/tpotce/data
(The data directory is typically root-owned after T-Pot has run, hence `sudo`.)
Start T-Pot:
sudo systemctl start tpot
Confirm with:
df -h
Now T-Pot writes data to `/data/tpot-data`.
Final Thoughts & Further Enhancements
With T-Pot 24.04.1 on AWS:
- You have a single EC2 instance running a robust multi-honeypot environment (Hive or Sensor style).
- Data ingestion happens in real-time, visible via Kibana, Attack Map, and more.
- Fine-tune your honeypot by restricting or opening specific ports and adjusting T-Pot’s settings so it behaves more like a genuine production system. Attackers are adept at identifying honeypot signatures, so a realistic environment can attract deeper or more sophisticated attacks.
- Avoid storing sensitive data; remember that honeypots are by design open to malicious traffic.
- Plan your post-run analysis by exporting or snapshotting your logs, then safely terminate the EC2 to avoid further charges.
I hope this guide helps you successfully deploy and manage T-Pot on AWS. Feel free to experiment with new honeypot configurations, visualization dashboards, or additional cloud monitoring tools. If you have questions or want to share findings, reach out on my socials.
Happy Hacking!
Charlemagne (0xCD)