A Practical, Reproducible, and Enterprise-Ready Implementation Guide
AI coding tools such as OpenCode AI can significantly improve developer productivity. However, installing them directly on developer machines quickly becomes problematic in team or enterprise environments:
- Inconsistent setups
- Credentials scattered across laptops
- No governance or auditability
- Difficult upgrades and rollbacks
This guide shows how to run OpenCode AI inside a Docker container, following best practices for security, reproducibility, and enterprise governance.
1. Design Goals
Before implementation, define clear goals:
- OpenCode AI must not be installed directly on the host
- The runtime environment must be reproducible
- No API tokens or secrets baked into images
- Run as a non-root user
- Easy to rebuild, upgrade, and roll back
- Ready for team-wide or enterprise adoption
2. Project Structure
Create a clean project directory:
opencode-docker/
├── Dockerfile
├── build.sh
└── run.sh
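If you are starting from scratch, the layout above can be created with a few commands:

mkdir -p opencode-docker
cd opencode-docker
touch Dockerfile build.sh run.sh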
3. Dockerfile (Core Implementation)
Below is a production-ready Dockerfile aligned with security best practices.
# Pin the base image for reproducible builds
FROM ubuntu:24.04
ENV DEBIAN_FRONTEND=noninteractive
# Install required system tools
RUN apt-get update && apt-get install -y --no-install-recommends \
curl \
ca-certificates \
git \
openssh-client \
sudo \
&& rm -rf /var/lib/apt/lists/*
# Create a non-root user (recent Ubuntu images already ship an "ubuntu" user, so create it only if missing)
RUN id ubuntu >/dev/null 2>&1 || useradd -m -s /bin/bash ubuntu \
&& echo "ubuntu ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/ubuntu \
&& chmod 0440 /etc/sudoers.d/ubuntu
USER ubuntu
WORKDIR /home/ubuntu
# Prepare SSH configuration
RUN mkdir -p /home/ubuntu/.ssh \
&& chmod 700 /home/ubuntu/.ssh \
&& touch /home/ubuntu/.ssh/known_hosts
# Preload GitHub host keys (non-interactive Git usage)
RUN ssh-keyscan -T 5 github.com 2>/dev/null >> /home/ubuntu/.ssh/known_hosts || true
# Install OpenCode AI (official binary installer)
RUN curl -fsSL https://opencode.ai/install | bash
Key Security Notes
- No secrets in the image
- Runs as non-root
- Disposable container by design
4. Build the Image
Create build.sh:
#!/bin/bash
set -e
docker build -t opencode-ai:latest .
Run:
chmod +x build.sh
./build.sh
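To support the upgrade and rollback goals, it can also help to keep explicit version tags alongside latest. The commands below are a sketch; the version numbers are illustrative:

# Build with an explicit version tag in addition to latest
docker build -t opencode-ai:1.0.0 -t opencode-ai:latest .

# Roll back by pointing latest at a previously built version
docker tag opencode-ai:0.9.0 opencode-ai:latest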
5. Running the Container (Critical Step)
OpenCode AI requires authentication data.
Credentials must remain on the host and be mounted at runtime.
5.1 Host Preparation (One-Time)
After authenticating with OpenCode AI on the host, the following paths typically exist:
~/.local/share/opencode/auth.json
~/.config/opencode/
These must never be baked into the image.
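A quick sanity check on the host confirms these files exist before the first containerized run:

# Verify host-side credentials before mounting them into the container
ls -l ~/.local/share/opencode/auth.json
ls -ld ~/.config/opencode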
5.2 run.sh (Standard Usage)
#!/bin/bash
set -e

# Mount host-side credentials and the current project, then start opencode
docker run --rm -it \
-v "$HOME/.local/share/opencode:/home/ubuntu/.local/share/opencode" \
-v "$HOME/.config/opencode:/home/ubuntu/.config/opencode" \
-v "$PWD:/workspace" \
-w /workspace \
opencode-ai:latest \
opencode
Run:
chmod +x run.sh
./run.sh
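Depending on policy, the run command can be tightened further. The variant below is only a sketch: it mounts the configuration directory read-only and blocks privilege escalation. Note that no-new-privileges also disables the sudo access configured in the Dockerfile, and the :ro flag should be dropped if OpenCode AI needs to update its configuration.

#!/bin/bash
set -e

# Hardened variant (sketch): adjust to your organization's policy
docker run --rm -it \
--security-opt no-new-privileges \
-v "$HOME/.local/share/opencode:/home/ubuntu/.local/share/opencode" \
-v "$HOME/.config/opencode:/home/ubuntu/.config/opencode:ro" \
-v "$PWD:/workspace" \
-w /workspace \
opencode-ai:latest \
opencode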
6. What Happens at Runtime
Inside the container:
- You run opencode normally
- You operate on the current project directory
- You use your own credentials
- No state is preserved inside the container
This follows the cloud-native principle:
Containers are disposable; data and credentials are external.
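This disposability is easy to verify: anything written outside a mounted path disappears when the container exits, while files created under /workspace remain on the host. A minimal illustration:

# Written outside a mounted path: gone after the container exits
docker run --rm opencode-ai:latest bash -c 'echo scratch > /tmp/scratch'
docker run --rm opencode-ai:latest ls /tmp/scratch   # fails: new container, file does not exist

# Written under the mounted workspace: persists on the host
docker run --rm -v "$PWD:/workspace" -w /workspace opencode-ai:latest bash -c 'echo note > demo.txt'
cat demo.txt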
7. Why This Approach Works for Enterprises
✔ IT Operations
- Centralized image versioning
- Easy upgrades and rollbacks
- Compatible with security scanning tools (see the example at the end of this section)
✔ Security & Compliance
- Credentials never enter the image
- Non-root execution
- Reduced supply-chain risk
✔ Engineering Teams
- Identical environment for all users
- No host pollution
- Low onboarding cost
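As a concrete example of the scanning point above, the built image can be checked with any standard scanner; Trivy is used here purely as an illustration:

# Illustrative: scan the built image for known vulnerabilities
trivy image opencode-ai:latest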
8. Extensions and Next Steps
This setup can naturally evolve into:
- A shared enterprise OpenCode AI image
- Integration with CI/CD pipelines
- AI-assisted code review or refactoring workflows
- Internal RAG-augmented coding environments
Conclusion
The value of AI coding tools is not just speed—it’s safe, repeatable, and governed adoption.
By running OpenCode AI in Docker, organizations can turn a personal productivity tool into a managed, enterprise-grade capability.
This approach ensures AI adoption that is:
- Secure
- Scalable
- Sustainable