Running any AI code assistant inside Docker

PUBLISHED ON DEC 11, 2025 — CATEGORIES: utilities

(TL;DR)

Make sure Docker is installed and grab the OpenCode config file. Then download and run the DockerHub container:

# create a tmp folder to share with the container
SESSION_DIR=$(mktemp -d /tmp/opencode_session-XXXXXX) && chmod 1777 "$SESSION_DIR"

# download/run container
docker run --rm -it -p 1455:1456 \
  -v "/path/to/opencode_full_config.json:/workspace/opencode.json:ro" \
  -e OPENCODE_CONFIG=/workspace/opencode.json \
  -v "$SESSION_DIR:/tmp" \
  -v "$HOME/opencode_workspace:/opencode_workspace" \
  --name opencode-ubuntu-sandbox andresfr/opencode-ubuntu-sandbox:codex

# start AI assistant
opencode auth login   # optional: authentication
opencode

# log into container from a different terminal:
docker exec -it opencode-ubuntu-sandbox bash

Alternatively, this bash launcher script wraps the steps above for convenience. You can add it to your environment (e.g. .bash_aliases) and then launch the container via:

llmsandbox /path/to/opencode_full_config.json /path/to/proj1 /path/to/proj2 ...
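For reference, a minimal sketch of what such a launcher might look like. This is a hypothetical reconstruction, not the linked script itself: the per-project mount points under /opencode_workspace/<basename> and the minimal error handling are assumptions.

```shell
# llmsandbox CONFIG PROJ1 [PROJ2 ...]
# hypothetical launcher: first argument is the OpenCode config file,
# every remaining argument is a project folder to share with the container
llmsandbox() {
  local config="$1"; shift
  local session_dir
  # per-session shared /tmp, as in the snippet above
  session_dir=$(mktemp -d /tmp/opencode_session-XXXXXX) || return 1
  chmod 1777 "$session_dir"
  local args=(run --rm -it -p 1455:1456
    -v "$config:/workspace/opencode.json:ro"
    -e OPENCODE_CONFIG=/workspace/opencode.json
    -v "$session_dir:/tmp")
  local proj
  for proj in "$@"; do
    # mount each project under /opencode_workspace, keyed by its basename
    args+=(-v "$proj:/opencode_workspace/$(basename "$proj")")
  done
  args+=(--name opencode-ubuntu-sandbox andresfr/opencode-ubuntu-sandbox:codex)
  docker "${args[@]}"
}
```

The idea is simply to collect the fixed flags once and append one -v mount per project, so adding a project to the session is just one more argument.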

Vibe coding: the good, the bad and the OpenCode + Docker way

Let’s say we want to use a state-of-the-art AI to assist us with coding. Said AI should be able to access and modify our filesystem, and we interact with it via some sort of chat. Many vendors provide solutions for this (Claude, Copilot, Codex or Mistral, to name a few). Typically, we just need to install the corresponding tool/plugin and start cracking!

So what is the problem?
  • Privacy: Typically, these systems have broad access to files on your system, which may get uploaded to the AI vendor’s cloud, and maybe even used for training.
  • Fragmentation: Often we also code on different machines, with different operating systems or editors, and we may use different vendors for different tasks, which may require different logins. Setting this up, especially if we care about privacy, can be a big headache.

This post proposes one way to fix this. In a nutshell, the idea is to run OpenCode inside of a Docker image.

How does this fix it?
  • Safety: The container only sees locations that you have explicitly mounted; we inherit Docker’s isolation guarantees.
  • Simplicity: If you have Docker installed, just run the image and you are good to go! No need for plugins or text editors.
  • Flexibility: This works on any system with Docker enabled, and OpenCode supports virtually all vendors. Since it is terminal-based, it does not depend on any particular text editor.

All of this while keeping a nice user experience: both you and your AI assistant can access and modify the mounted files in parallel (with the text editor of your choice). The OpenCode interface is pretty smooth, and provides access to virtually every state-of-the-art AI assistant. As a teaser, this is what the end result looks like. If this seems interesting to you, keep reading!

On the right, a terminal is running OpenCode inside a Docker container. Using a GPT model, we asked it to create a hello world app inside /tmp, which is mirrored to /tmp/opencode_session-PW1ND9 on our computer. On the top left we can see (and we could open our text editor to read, modify…) the files that were generated in that location. On the bottom left we have installed and run the Python app. The whole thing took less than a minute.


Setup and usage example

👉 We assume you are able to run Docker images on your machine via the docker command. To get started with Docker, see here.

To set up, we just need to pull the Docker image that I already prepared at DockerHub:

# create a tmp folder to share with the container
SESSION_DIR=$(mktemp -d /tmp/opencode_session-XXXXXX) && chmod 1777 "$SESSION_DIR"
# download/run container
docker run --rm -it -p 1455:1456 \
  -v "/path/to/opencode_full_config.json:/workspace/opencode.json:ro" \
  -e OPENCODE_CONFIG=/workspace/opencode.json \
  -v "$SESSION_DIR:/tmp" \
  -v "$HOME/opencode_workspace:/opencode_workspace" \
  --name opencode-ubuntu-sandbox andresfr/opencode-ubuntu-sandbox:codex

That’s it! Your terminal then becomes a shell inside the container, awaiting your further instructions (the host is “your” machine, the one you ran docker run from). Here is an explanation of what’s going on:

  • The SESSION_DIR line creates a temporary folder with open permissions of the form /tmp/opencode_session-XXXXXX on your host. Inside the container, this folder is mounted as /tmp. Both you (the host) and the container can read, add, remove and modify files in that folder simultaneously.
  • In this case, we are also mounting the host folder $HOME/opencode_workspace inside the container. As before, both host and container can access and modify the contents of this folder.
  • Crucially, no other host contents can be seen or modified by the container. So whatever you want your AI assistant to see needs to be explicitly mounted via -v flags!
  • The /path/to/opencode_full_config.json is a JSON file such as this one that contains the OpenCode configuration. You can adjust it to your preferences, and it will be used by OpenCode inside the container. More info about the config here.
  • The -p 1455:1456 port forwarding is needed to perform browser authentication if you are using the Codex assistant from OpenAI. It is not needed otherwise.
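As a reference point, that config is plain JSON. A stripped-down sketch could look like the following; the $schema URL and field names follow my reading of the OpenCode documentation, and the model value is just a placeholder, so treat the linked opencode_full_config.json as the authoritative version:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "some-provider/some-model-id"
}
```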

Then, inside the container, simply run opencode to start the session! If you have a subscription that requires login, in particular Codex, run:

opencode auth login

You can also log into the same container from a different terminal using the following command:

docker exec -it opencode-ubuntu-sandbox bash
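Since the whole safety story rests on the mounts, it can be reassuring to double-check them from the host while the container is running. The standard Docker CLI can list them, querying the running container by the --name we gave it:

```shell
# list every host path shared with the running container and where it lands
docker inspect \
  -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' \
  opencode-ubuntu-sandbox
```

Anything not in that list is invisible to the AI assistant.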

Here is what the auth login currently looks like (the full list of supported vendors is much longer):

And some of the supported models:

Once you are set up, you can start chatting with your assistant! An example creating a "hello world" Python app from a single prompt is shown at the top of this post.


Advanced setup

Naturally, you may want to know what is going on in the Docker image I provided. Note that you can inspect the corresponding Dockerfile (such as this one) to see what’s actually contained there (I have commented it with some explanations). And of course, you can build it locally yourself via:

# run from the same location as the Dockerfile
docker build -t opencode-ubuntu-sandbox:codex .

You can then run this locally built image as before, simply removing the andresfr/ DockerHub prefix from the image name. Another advantage of this approach is that you can modify the Dockerfile at will before building it. This is particularly useful if you require different versions, have additional dependencies, or something isn’t quite working for you. I confess that I only tested the free agents (big-pickle and Grok Code Fast 1) and the subscription-based Codex via browser authentication. Let me know if there are any issues!
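Concretely, launching the locally built image only changes the final image reference in the docker run invocation; everything else is identical to the setup section:

```shell
# same flags as before; the image name just drops the andresfr/ prefix
SESSION_DIR=$(mktemp -d /tmp/opencode_session-XXXXXX) && chmod 1777 "$SESSION_DIR"
docker run --rm -it -p 1455:1456 \
  -v "/path/to/opencode_full_config.json:/workspace/opencode.json:ro" \
  -e OPENCODE_CONFIG=/workspace/opencode.json \
  -v "$SESSION_DIR:/tmp" \
  -v "$HOME/opencode_workspace:/opencode_workspace" \
  --name opencode-ubuntu-sandbox opencode-ubuntu-sandbox:codex
```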


As we can see, this setup is powerful, quite simple, provides strong control over filesystem access, and works across systems and vendors independently of any text editor. Hope you enjoy it as much as I do. Let’s bring that big Docker energy to your vibe coding setup! 🐳

TAGS: ai, ai coding assistant, coding, docker, llm, opencode, productivity, security