💬 frozen-crystal-harbor

v2.1.69 main
Session ID
c3d4e5f6-a7b8-9012-cdef-123456789012
Working directory /home/you/projects/webapp
Started 2026-02-20T16:45:03.456Z
Ended 2026-02-20T16:46:12.234Z
Messages 5 user / 6 assistant
Tool calls 3
Resume in Claude Code
cd /home/you/projects/webapp
claude --resume c3d4e5f6-a7b8-9012-cdef-123456789012
Human16:45:03
My frontend container keeps getting curl: (6) Could not resolve host: backend when it tries to call the backend. Both containers are up according to docker compose ps. What's going on?
🔧Read
Result: Read
🔧Edit
Result: Edit
Claude16:45:15in:1520 | out:152 tokens
Found the problem. Your frontend service had no networks key, so Docker Compose attached it to the project's default network rather than app-network. The backend and db services were on app-network, which provides DNS resolution by service name. That's why the backend hostname wasn't resolvable from the frontend container.

I've added a networks key listing app-network to the frontend service. Now all three services share the same network and can reach each other by service name.
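For reference, the relevant shape of the compose file after the fix looks roughly like this (image names and the port are illustrative; only the networks keys matter):

```yaml
services:
  frontend:
    image: webapp-frontend   # illustrative image name
    networks:
      - app-network          # the added entry: join the shared network
  backend:
    image: webapp-backend    # illustrative image name
    networks:
      - app-network
  db:
    image: postgres:16       # illustrative image name
    networks:
      - app-network

networks:
  app-network:
    driver: bridge
```

Once a service declares an explicit networks list, it is attached only to the networks named there, so every service that needs to communicate must share at least one entry.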

Recreate the containers to apply the change:
docker compose up -d --force-recreate frontend

You can verify all containers are on the same network with:
🔧Bash
Result: Bash
Claude16:45:19in:1810 | out:118 tokens
The network webapp_app-network exists and is using the bridge driver as expected. After you recreate the frontend container, you can confirm all three services are attached to it with:

docker network inspect webapp_app-network --format '{{range .Containers}}{{.Name}} {{end}}'

You should see webapp-frontend-1, webapp-backend-1, and webapp-db-1 all listed. At that point http://backend:8000 will resolve correctly from inside the frontend container.
Turn duration: 16.1s
Human16:46:08
That fixed it! All three containers show up in the network inspect output now. I can't believe such a small thing caused that — I should have caught it in review.
Claude16:46:12in:1960 | out:120 tokens
Easy to miss — Docker silently puts services with no networks key on a default network, and everything looks fine in docker compose ps because the containers are running. The connectivity failure only shows up at runtime.

A good way to catch this in future reviews: if any service defines explicit networks, do a quick scan to make sure every service that needs to talk to another is in a shared network. You could also add a simple smoke-test step to your CI that curls between containers after docker compose up, which would surface this kind of issue automatically before it reaches review.
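A minimal version of that CI smoke-test step might look like the sketch below. The service name and port come from this session; the script itself, the health-check endpoint, and the use of --wait are assumptions about your setup, not something from the conversation:

```shell
#!/usr/bin/env sh
# CI smoke test: verify inter-container DNS and connectivity after bringing
# the stack up. Exits non-zero (failing the CI job) on any error.
set -e

# Start the stack and wait for containers to be running/healthy.
docker compose up -d --wait

# Curl the backend by service name from inside the frontend container.
# A DNS failure (like curl exit code 6) or an HTTP error fails the job.
docker compose exec -T frontend \
  curl --fail --silent --max-time 5 http://backend:8000/ > /dev/null

echo "smoke test passed: frontend can reach backend"
```

Running it as a step after docker compose up in CI would have flagged the missing networks entry on the first push, well before the change reached a reviewer.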
Turn duration: 4.3s