Problem: Request to an allowed domain is being blocked

Solution:

- Check the domain spelling in `--allow-domains`
- Add subdomains if needed (e.g., `api.github.com` in addition to `github.com`)
- Enable debug logging to see Squid access logs:

  ```bash
  sudo awf \
    --allow-domains github.com \
    --log-level debug \
    'your-command'
  ```

- Check Squid logs for blocked requests:

  ```bash
  sudo grep "TCP_DENIED" /tmp/squid-logs-<timestamp>/access.log
  ```
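To show what that grep surfaces, here is a runnable sketch over two fabricated log lines. The lines follow Squid's default `squid` logformat (an assumption; your deployment may use a custom logformat), with `TCP_DENIED` marking a request the proxy refused:

```shell
# Write two fabricated access.log entries: one denied, one tunneled through.
printf '%s\n' \
  '1700000000.001      0 10.0.0.2 TCP_DENIED/403 3893 CONNECT evil.example:443 - HIER_NONE/- text/html' \
  '1700000000.002    120 10.0.0.2 TCP_TUNNEL/200 5120 CONNECT api.github.com:443 - HIER_DIRECT/140.82.112.6 -' \
  > access.log

# The grep keeps only the refused request.
grep "TCP_DENIED" access.log
```

Only the `evil.example` line survives the filter; allowed traffic (`TCP_TUNNEL`) is not printed.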
Problem: Docker Compose fails to start containers

Solution:

- Ensure Docker is running:

  ```bash
  docker ps
  ```

- Check for port conflicts (port 3128 must be available):

  ```bash
  netstat -tulpn | grep 3128
  ```

- Verify Docker Compose is installed:

  ```bash
  docker compose version
  ```

- Check for orphaned networks:

  ```bash
  docker network ls | grep awf
  ```

  If found, clean them up:

  ```bash
  docker network rm awf-net
  ```
Problem: Permission denied: iptables commands require root privileges

Solution:

- All commands MUST be run with `sudo` for host-level iptables manipulation
- Run:

  ```bash
  sudo awf --allow-domains ... 'your-command'
  ```

- In GitHub Actions, the runner already has root access (no `sudo` needed)
Problem: DOCKER-USER chain does not exist

Solution:

- Ensure Docker is properly installed and running
- Docker creates the DOCKER-USER chain automatically
- Verify the Docker version is recent (tested on 20.10+):

  ```bash
  docker version
  ```
Problem: GITHUB_TOKEN or other environment variables not available in container

Solution:

- Use `sudo -E` to preserve environment variables:

  ```bash
  sudo -E awf --allow-domains ... 'your-command'
  ```

- Verify variables are exported before running:

  ```bash
  export GITHUB_TOKEN="your-token"
  echo $GITHUB_TOKEN  # Should print the token
  ```
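To see why `-E` matters, a minimal sketch that simulates sudo's environment handling without needing sudo itself: plain `sudo` resets the environment much like `env -i`, while `sudo -E` passes it through like an ordinary subshell. The token value is a placeholder:

```shell
export GITHUB_TOKEN="example-token"   # placeholder value

# Like plain `sudo`: the child starts with a scrubbed environment.
env -i sh -c 'echo "scrubbed: ${GITHUB_TOKEN:-<unset>}"'

# Like `sudo -E`: the exported variable survives into the child.
sh -c 'echo "preserved: ${GITHUB_TOKEN:-<unset>}"'
```

The first command prints `scrubbed: <unset>`; the second prints `preserved: example-token`.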
Problem: Sensitive tokens (GITHUB_TOKEN, OPENAI_API_KEY, etc.) not being properly cached or cleared

Solution:

- Enable debug logging for the one-shot-token library:

  ```bash
  export AWF_ONE_SHOT_TOKEN_DEBUG=1
  sudo -E awf --allow-domains ... 'your-command'
  ```

- Check the debug output for:
  - `Initialized with N default token(s)`: library loaded successfully
  - `Token <NAME> accessed and cached`: token was read and cached
  - `INFO: Token <NAME> cleared from process environment`: token removed from /proc/environ
  - `WARNING: Token <NAME> still exposed`: token cleanup failed (security concern)
- If tokens are still exposed, check:
  - The token name is in the default protected list (see `containers/agent/one-shot-token/README.md`)
  - Or set `AWF_ONE_SHOT_TOKENS` to explicitly protect custom tokens:

    ```bash
    export AWF_ONE_SHOT_TOKENS="MY_CUSTOM_TOKEN,ANOTHER_TOKEN"
    export AWF_ONE_SHOT_TOKEN_DEBUG=1
    sudo -E awf --allow-domains ... 'your-command'
    ```

Note: Debug output goes to stderr. Use `2>&1 | tee debug.log` to capture it.
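A small, self-contained illustration of that note, with a stand-in function instead of a real `awf` run: without `2>&1`, `tee` only sees stdout and the debug lines (which go to stderr) never reach the log file:

```shell
# Stand-in command: one line to stdout, one debug line to stderr.
emit() { echo "normal output"; echo "DEBUG: token cached" >&2; }

# 2>&1 merges stderr into stdout, so tee captures both streams.
emit 2>&1 | tee debug.log

grep -c "DEBUG" debug.log   # prints 1: the stderr line was captured
```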
Problem: MCP server cannot reach external API

Solution:

- Add the MCP server's domain to `--allow-domains`
- Check if the MCP server uses a subdomain (e.g., `api.example.com`)
- Verify DNS resolution is working:

  ```bash
  sudo awf --allow-domains example.com \
    'nslookup api.example.com'
  ```

- Check Squid logs for blocked requests:

  ```bash
  sudo grep "api.example.com" /tmp/squid-logs-<timestamp>/access.log
  ```
Problem: MCP tools not showing up in Copilot CLI

Solution:

- Verify the MCP config has a `"tools": ["*"]` field:

  ```bash
  cat ~/.copilot/mcp-config.json
  ```

- Ensure the `--allow-tool` flag matches the MCP server name:

  ```bash
  # MCP config has "github" as server name
  copilot --allow-tool github --prompt "..."
  ```

- Check if built-in MCP is disabled:

  ```bash
  copilot --disable-builtin-mcps --prompt "..."
  ```

- Review agent logs for MCP connection errors:

  ```bash
  cat /tmp/awf-agent-logs-<timestamp>/*.log
  ```
AWF automatically sets `JAVA_TOOL_OPTIONS` with `-Dhttp.proxyHost`, `-Dhttp.proxyPort`, `-Dhttps.proxyHost`, `-Dhttps.proxyPort`, and `-Dhttp.nonProxyHosts` inside the agent container. This works for most Java tools that read standard JVM system properties, including Gradle and SBT.
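As a sketch of what that injected variable looks like, assembled by hand from the proxy coordinates that appear elsewhere in this guide (172.30.0.10:3128). The exact values, and the `nonProxyHosts` list AWF actually uses, are assumptions here:

```shell
SQUID_PROXY_HOST=172.30.0.10   # illustrative; AWF sets the real value
SQUID_PROXY_PORT=3128

# The nonProxyHosts value below is a guess; check the real variable in the container.
JAVA_TOOL_OPTIONS="-Dhttp.proxyHost=${SQUID_PROXY_HOST} -Dhttp.proxyPort=${SQUID_PROXY_PORT} -Dhttps.proxyHost=${SQUID_PROXY_HOST} -Dhttps.proxyPort=${SQUID_PROXY_PORT} -Dhttp.nonProxyHosts=localhost"
echo "$JAVA_TOOL_OPTIONS"
```

To see the value AWF actually injects, run `docker exec awf-agent printenv JAVA_TOOL_OPTIONS` while the containers are up.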
Problem: Maven builds fail with network errors even though the domain is in `--allow-domains`

Cause: Maven's HTTP transport (Apache HttpClient / Maven Resolver) ignores Java system properties for proxy configuration. Unlike Gradle and most other Java tools, Maven does not read `-Dhttp.proxyHost`/`-Dhttp.proxyPort` from `JAVA_TOOL_OPTIONS`.

Solution: Create `~/.m2/settings.xml` with proxy configuration before running Maven:
```bash
mkdir -p ~/.m2
cat > ~/.m2/settings.xml << EOF
<settings>
  <proxies>
    <proxy>
      <id>awf-http</id><active>true</active><protocol>http</protocol>
      <host>${SQUID_PROXY_HOST}</host><port>${SQUID_PROXY_PORT}</port>
    </proxy>
    <proxy>
      <id>awf-https</id><active>true</active><protocol>https</protocol>
      <host>${SQUID_PROXY_HOST}</host><port>${SQUID_PROXY_PORT}</port>
    </proxy>
  </proxies>
</settings>
EOF
```

The `SQUID_PROXY_HOST` and `SQUID_PROXY_PORT` environment variables are automatically set by AWF in the agent container.
For agentic workflows, add this as a setup step in the workflow .md file so the agent creates the file before running Maven commands.
Gradle reads JVM system properties via `ProxySelector.getDefault()`, so the `JAVA_TOOL_OPTIONS` environment variable set by AWF is sufficient. No extra configuration is needed for Gradle builds.
AWF uses a forward proxy (Squid) for HTTPS egress control rather than transparent interception. This means tools must be proxy-aware:
- Most tools: use `HTTP_PROXY`/`HTTPS_PROXY` environment variables (set automatically by AWF)
- Java tools: use `JAVA_TOOL_OPTIONS` with JVM system properties (set automatically by AWF)
- Maven: requires `~/.m2/settings.xml` (must be configured manually; see above)
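The `HTTP_PROXY`/`HTTPS_PROXY` convention can be made concrete with a small sketch. The address below mirrors the proxy endpoint seen elsewhere in this guide and is illustrative, not the authoritative value AWF sets:

```shell
# Inside the agent container, AWF exports proxy variables along these lines:
export HTTP_PROXY=http://172.30.0.10:3128
export HTTPS_PROXY=http://172.30.0.10:3128

# Most CLIs (curl, pip, npm, git) pick these up automatically; verify with:
env | grep -i '_proxy'
```

Inside a live container, `docker exec awf-agent printenv | grep -i proxy` shows the actual values.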
```bash
# View all blocked domains
sudo grep "TCP_DENIED" /tmp/squid-logs-<timestamp>/access.log | awk '{print $3}' | sort -u

# Count blocked attempts by domain
sudo grep "TCP_DENIED" /tmp/squid-logs-<timestamp>/access.log | awk '{print $3}' | sort | uniq -c | sort -rn
```

While containers are running (with `--keep-containers`):

```bash
docker logs awf-agent
docker logs awf-squid
```

After the command completes:

```bash
# Agent logs (includes GitHub Copilot CLI logs)
cat /tmp/awf-agent-logs-<timestamp>/*.log

# Squid logs (requires sudo)
sudo cat /tmp/squid-logs-<timestamp>/access.log
```

Blocked UDP and non-standard protocols are logged to kernel logs:

```bash
# From host (requires sudo)
sudo dmesg | grep FW_BLOCKED

# From within container
docker exec awf-agent dmesg | grep FW_BLOCKED
```

Problem: Domains cannot be resolved
Solution:

- Verify DNS is allowed in iptables rules (should be automatic)
- Test DNS resolution:

  ```bash
  sudo awf --allow-domains example.com \
    'nslookup example.com'
  ```

- Check which DNS servers are configured:

  ```bash
  sudo awf --allow-domains example.com \
    'cat /etc/resolv.conf'
  ```
Problem: Requests time out instead of being blocked

Solution:

- Check if the Squid proxy is running:

  ```bash
  docker ps | grep awf-squid
  ```

- Verify iptables rules are applied:

  ```bash
  docker exec awf-agent iptables -t nat -L -n -v
  ```

- Increase the timeout in your command:

  ```bash
  sudo awf --allow-domains github.com \
    'curl --max-time 30 https://api.github.com'
  ```
Problem: curl: (7) Failed to connect to 172.30.0.10 port 3128

Solution:

- Ensure the Squid container is healthy:

  ```bash
  docker ps --filter name=awf-squid  # Should show "healthy" status
  ```

- Check Squid logs for errors:

  ```bash
  sudo cat /tmp/squid-logs-<timestamp>/cache.log
  ```

- Verify network connectivity:

  ```bash
  docker exec awf-agent ping -c 3 172.30.0.10
  ```
Problem: Containers remain after command exits

Solution:

- Manually clean up containers:

  ```bash
  docker rm -f awf-agent awf-squid
  ```

- Clean up networks:

  ```bash
  docker network rm awf-net
  ```

- Use the cleanup script:

  ```bash
  ./scripts/ci/cleanup.sh
  ```
Problem: /tmp directory filling up with logs

Solution:

- Manually remove old logs:

  ```bash
  rm -rf /tmp/awf-agent-logs-*
  rm -rf /tmp/squid-logs-*
  rm -rf /tmp/awf-*
  ```

- Empty log directories are not preserved automatically
- Use `--keep-containers` only when needed for debugging
Problem: GitHub Actions workflow times out

Solution:

- Increase the timeout in the workflow:

  ```yaml
  timeout-minutes: 15
  ```

- Use the `timeout` command in the script:

  ```bash
  timeout 60s awf --allow-domains ... 'your-command'
  ```
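A runnable illustration of the `timeout` wrapper, with `sleep` standing in for a long-running `awf` invocation:

```shell
# timeout(1) kills the command once the limit elapses and exits with status 124.
timeout 1s sleep 5
echo "exit code: $?"   # prints: exit code: 124
```

Checking for exit code 124 in CI lets a script distinguish "command timed out" from "command failed on its own".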
Problem: Cleanup step not executing in workflow

Solution:

- Ensure the cleanup step has `if: always()`:

  ```yaml
  - name: Cleanup
    if: always()
    run: ./scripts/ci/cleanup.sh
  ```

- Add pre-test cleanup to prevent resource accumulation:

  ```yaml
  - name: Pre-test cleanup
    run: ./scripts/ci/cleanup.sh
  ```
Problem: Pool overlaps with other one on this address space

Solution:

- Run cleanup before tests:

  ```bash
  ./scripts/ci/cleanup.sh
  ```

- Add network pruning:

  ```bash
  docker network prune -f
  ```

- This is why pre-test cleanup is critical in CI/CD
Problem: Agent reports SSL/TLS certificate errors when `--ssl-bump` is enabled

Solution:

- Verify the CA was injected into the trust store:

  ```bash
  docker exec awf-agent ls -la /usr/local/share/ca-certificates/
  docker exec awf-agent cat /etc/ssl/certs/ca-certificates.crt | grep -A1 "AWF Session CA"
  ```

- Check if the application uses certificate pinning (incompatible with SSL Bump)
- For Node.js applications, verify `NODE_EXTRA_CA_CERTS` is not overriding the injected CA:

  ```bash
  docker exec awf-agent printenv | grep -i cert
  ```
Problem: Allowed URL patterns are being blocked with `--ssl-bump`

Solution:

- Enable debug logging to see pattern matching:

  ```bash
  sudo awf --log-level debug --ssl-bump --allow-urls "..." 'your-command'
  ```

- Check the exact URL format in Squid logs:

  ```bash
  sudo cat /tmp/squid-logs-*/access.log | grep your-domain
  ```

- Ensure patterns include the scheme:

  ```bash
  # ✗ Wrong:   github.com/myorg/*
  # ✓ Correct: https://github.com/myorg/*
  ```
Problem: Application refuses to connect due to certificate pinning

Solution:

- Applications with certificate pinning are incompatible with SSL Bump
- Use domain-only filtering without `--ssl-bump` for these applications:

  ```bash
  sudo awf --allow-domains github.com 'your-pinned-app'
  ```
If you're still experiencing issues:

- Enable debug logging:

  ```bash
  sudo awf --log-level debug --allow-domains ... 'your-command'
  ```

- Keep containers for inspection:

  ```bash
  sudo awf --keep-containers --allow-domains ... 'your-command'
  ```

- Review all logs:
  - Agent logs: `/tmp/awf-agent-logs-<timestamp>/`
  - Squid logs: `/tmp/squid-logs-<timestamp>/`
  - Container logs: `docker logs awf-agent`

- Check the documentation:
  - Architecture - Understand how the system works
  - Usage Guide - Detailed usage examples
  - SSL Bump - HTTPS content inspection and URL filtering
  - Logging Quick Reference - Log queries and monitoring