The Complete Claude Code Configuration Guide: From Setup to Skills
Author: CodeGateway team · Tested in May 2026
TL;DR: Getting Claude Code running takes one command. Getting Claude Code running like a senior team would use it — Skills, Hooks, sub-agents, multi-project workflows — takes a real understanding of every config knob. This guide is for engineers past the install step who want a thorough, production-grade configuration.
Table of Contents
- What Claude Code is and what it isn't
- Environment prep: terminal, Node.js, API key
- Base configuration: env vars, model selection, first run
- Skills: from built-in to custom
- Hooks: hand off the repetitive work
- Troubleshooting checklist
- Advanced: proxies, caching, multi-project isolation
- Real workflows: planning and code review
- Team configuration
- FAQ
- Related reading
What Claude Code is and what it isn't
Claude Code is Anthropic's command-line AI coding agent. It is not an in-IDE pair programmer like Cursor or Continue. The shape is intentionally different: the editor is your terminal plus your working directory, and the AI runs as an agent inside that environment — reading files, writing files, running commands, spawning sub-agents on its own initiative.
That shape is well-suited to long-horizon tasks that span many files and tools: cross-service migrations, end-to-end test scaffolding, refactors that touch documentation alongside code. Senior engineers evaluating Claude Code aren't asking "can it write code?" — they're asking different questions:
- Which model is the right ratio of cost to capability for this task?
- How do Skills let me encode my team's conventions?
- How do Hooks let me automate the linting / formatting / type-check loop without thinking about it?
- How do I keep different projects' contexts from leaking into each other?
The rest of this guide answers those.
Environment prep: terminal, Node.js, API key
Terminal sanity check
```shell
echo $SHELL    # confirm zsh / bash
node --version # require 18+; LTS 20.x recommended
npm --version
git --version  # Claude Code leans heavily on git
```
If Node is < 18, upgrade. On Windows, prefer WSL2 + Ubuntu over native PowerShell — the latter has ongoing reliability issues with stdin piping and long-running streams.
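If you bootstrap machines with a script, the version gate can be made portable. A minimal sketch; `node_major` and `require_node_18` are helper names of our own, not part of Claude Code or Node:

```shell
# Portable Node version gate. node_major extracts the major version from a
# `node --version` string such as "v20.11.1"; require_node_18 fails when the
# major is below 18. Both helpers are illustrative, not standard tooling.
node_major() {
  ver="${1#v}"       # drop the leading "v"
  echo "${ver%%.*}"  # keep everything before the first dot
}

require_node_18() {
  if [ "$(node_major "$1")" -lt 18 ]; then
    echo "Node $1 is too old; install 18+ (LTS 20.x recommended)" >&2
    return 1
  fi
  echo "Node $1 OK"
}

require_node_18 "v20.11.1"   # → Node v20.11.1 OK
```

In a real bootstrap you would call `require_node_18 "$(node --version)"`.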
Install Claude Code
```shell
npm install -g @anthropic-ai/claude-code
claude --version
```
Updates are managed by npm, so periodically run npm update -g @anthropic-ai/claude-code.
Get an API key
Direct to Anthropic works. Going through CodeGateway also works. We use the latter throughout this guide:
- Sign up at https://www.codegateway.dev. New accounts get a $2 starter credit (~440K Sonnet 4.6 input tokens at the 1.5x starter markup).
- Dashboard → API Keys → Create Key.
- The key starts with sk-cg- and is shown once. Save it to a password manager.
Base configuration: env vars, model selection, first run
Environment variables
```shell
# in ~/.zshrc or ~/.bashrc
export ANTHROPIC_BASE_URL="https://api.codegateway.dev"
export ANTHROPIC_API_KEY="sk-cg-xxxxxxxxxxxxxxxxxxxxxx"
# default model (overridable per project)
export ANTHROPIC_MODEL="claude-sonnet-4-6"
```
Then source your rc file for the changes to take effect.
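A quick way to confirm what a fresh shell actually sees, without pasting the key into your scrollback. `check_var` is a throwaway helper of ours, shown against example values; in practice pass the real `"$ANTHROPIC_API_KEY"` and friends:

```shell
# Print a variable's value, masking anything whose name contains KEY down to
# its first 8 characters. check_var is illustrative, not a Claude Code command.
check_var() {
  name="$1"; val="$2"
  if [ -z "$val" ]; then
    echo "$name: MISSING"
    return 1
  fi
  case "$name" in
    *KEY*) printf '%s: %.8s...\n' "$name" "$val" ;;  # first 8 chars only
    *)     printf '%s: %s\n' "$name" "$val" ;;
  esac
}

# Example values; in your shell, pass "$ANTHROPIC_BASE_URL" etc.
check_var ANTHROPIC_BASE_URL "https://api.codegateway.dev"
check_var ANTHROPIC_API_KEY  "sk-cg-xxxxxxxxxxxxxxxxxxxxxx"  # → ANTHROPIC_API_KEY: sk-cg-xx...
```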
Model selection
Anthropic's three primary models map onto three Claude Code workloads:

| Model | Strength | Where it shines | Cost orientation |
|---|---|---|---|
| Opus | Deepest reasoning | Architecture, cross-service design, hard debugging | Most expensive — use deliberately |
| Sonnet | Balanced + strong coding | Daily driver: refactors, generation, review | Default recommendation |
| Haiku | Fast and cheap | Tool-shaped sub-agents, batch lint, simple generation | Lowest input/output rates |
A useful default: main agent on Sonnet, frequent sub-agents on Haiku, one-shot architectural calls on Opus. Don't run Opus all day — it adds up.
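That split can be wired without touching any config file: export the daily driver once, and override per invocation for cheap batch work. In the sketch below, `sh -c` stands in for a `claude` invocation so the snippet is self-contained, and the Haiku model ID is illustrative — check your gateway's model list for the exact string:

```shell
# Exported default: picked up by every new session started from this shell.
export ANTHROPIC_MODEL="claude-sonnet-4-6"
sh -c 'echo "session model: $ANTHROPIC_MODEL"'

# One-off override for a cheap batch task, e.g.:
#   ANTHROPIC_MODEL="claude-haiku-4-5" claude
# The inline assignment wins for that single invocation only.
ANTHROPIC_MODEL="claude-haiku-4-5" sh -c 'echo "session model: $ANTHROPIC_MODEL"'
```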
First run
```shell
mkdir hello-claude && cd hello-claude
git init
echo "# Hello" > README.md
claude
```
Inside the interactive prompt:
```
Initialize this repo as a Python project using uv for dependency management,
add a minimal hello.py, and write the smallest possible pytest test.
```
Claude Code will run uv init, write the files, run the test, and stream output back the whole way.
Skills: from built-in to custom
What a Skill is
A Skill in Claude Code is a reusable behavioral specification. Each Skill is a directory with a SKILL.md describing trigger conditions, behavior constraints, and the tools it's allowed to use. Skills can live globally (~/.claude/skills/) or per project (.claude/skills/).
Common built-ins and ecosystem Skills include:
- `tdd-workflow` — enforce write-tests-first
- `python-patterns` — Pythonic idioms, PEP 8, type hints
- `golang-testing` — table-driven Go tests
- `e2e-testing` — Playwright Page Object Model
- `frontend-design` — design system conventions
Invoking a Skill
Inside the session:
```
/skill tdd-workflow
```
Or implicitly: when Claude Code recognizes a task that matches a Skill description, it auto-loads it.
Authoring a custom Skill
Create .claude/skills/my-team-react/SKILL.md:
```markdown
---
name: my-team-react
description: Team React component conventions. Triggered when adding React components: function components + TS, explicit prop types, CSS Modules, Vitest + RTL for tests.
---

# Team React conventions

## Component shape
- Function components + TypeScript
- Props interfaces named `<Component>Props`
- Default export, no `React.FC`

## Styles
- CSS Modules (`Component.module.css`), no styled-components
- Tokens always referenced from `tokens.css`

## Tests
- Vitest + React Testing Library
- One test per interactive state
- 80%+ unit coverage

## File organization
- `Component.tsx` + `Component.module.css` + `Component.test.tsx`
- `index.ts` re-export in the same directory
```
Now Claude Code follows that spec when writing React in this project. Commit it to your repo and every team member inherits the conventions automatically.
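The directory can also be scaffolded from the shell. A minimal sketch using the same names as above; the only structural requirement is that the YAML frontmatter opens the file:

```shell
# Create the Skill directory and a minimal SKILL.md. The frontmatter block
# leads the file; the body below it holds the conventions.
mkdir -p .claude/skills/my-team-react
cat > .claude/skills/my-team-react/SKILL.md <<'EOF'
---
name: my-team-react
description: Team React component conventions.
---
# Team React conventions
EOF

head -n 1 .claude/skills/my-team-react/SKILL.md   # → ---
```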
Recommended Skill stacks by role
| Role | Stack |
|---|---|
| Solo full-stack dev | `tdd-workflow` + `python-patterns` + `frontend-design` |
| Backend engineer | `tdd-workflow` + `python-patterns` or `golang-testing` |
| Frontend engineer | `frontend-design` + `e2e-testing` |
| Platform / DevOps | `tdd-workflow` + project-specific custom Skills |
Hooks: hand off the repetitive work
Hook types
Claude Code exposes three:
- PreToolUse — fires before a tool call (validate, mutate args, block).
- PostToolUse — fires after a tool call (auto-format, lint, test).
- Stop — fires at session end (final verification).
Example: auto-lint Python on write
.claude/settings.json:
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "command": "ruff check --fix \"$FILE_PATH\" && ruff format \"$FILE_PATH\"",
        "description": "Lint and format Python files after edits"
      }
    ]
  }
}
```
Every time Claude Code writes or edits a file, the hook formats and lints it. No more "fix everything at the end."
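The matcher is a pattern over tool names. A sketch of the gating, where `runs_for` is our illustration of the behavior, not how Claude Code implements it:

```shell
# Illustrate how a "Write|Edit" matcher gates a hook: the command runs only
# when the tool name matches the alternation. runs_for is illustrative only.
runs_for() {
  if echo "$1" | grep -qE '^(Write|Edit)$'; then
    echo "hook fires for $1"
  else
    echo "hook skipped for $1"
  fi
}

runs_for Write   # → hook fires for Write
runs_for Bash    # → hook skipped for Bash
```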
Example: end-of-session build check
```json
{
  "hooks": {
    "Stop": [
      {
        "command": "pnpm build",
        "description": "Verify a production build at session end"
      }
    ]
  }
}
```
Acts as a safety net before you walk away.
Example: refuse oversized writes
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write",
        "command": "node -e \"let d='';process.stdin.on('data',c=>d+=c);process.stdin.on('end',()=>{const i=JSON.parse(d);const c=i.tool_input?.content||'';const lines=c.split('\\n').length;if(lines>800){console.error('[Hook] BLOCKED: > 800 lines');process.exit(2)}console.log(d)})\"",
        "description": "Block writes exceeding 800 lines"
      }
    ]
  }
}
```
Troubleshooting checklist
| Symptom | What to check |
|---|---|
| 401 / 403 | Key correct? Base URL correct? Did the new shell pick up env vars? |
| 422 model not found | Model ID exact? Copy the full string, e.g. `claude-sonnet-4-6` |
| 429 too many requests | Lower concurrency, raise tier markup, or queue via sub-agents |
| Permission denied on writes | Project directory permissions — common in Docker bind mounts |
| Hook doesn't fire | Is `.claude/settings.json` valid JSON? Does the matcher cover the tool? |
| Skill not loading | Frontmatter present? Description specific enough to trigger? |
| Cost climbing | Use Haiku for sub-agents; lean on prompt caching |
| Mid-task disconnects | Middlebox idle timeout — see connection timeout guide |
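If you hit these repeatedly, a tiny triage helper keeps the first-response actions at hand. `triage` is a convenience of ours that mirrors the checklist, nothing more:

```shell
# Map an HTTP status code from the gateway to the checklist's first action.
triage() {
  case "$1" in
    401|403) echo "check the key, the base URL, and that this shell sourced your rc file" ;;
    422)     echo "check the exact model ID string" ;;
    429)     echo "lower concurrency or queue work through sub-agents" ;;
    *)       echo "see the error troubleshooting guide" ;;
  esac
}

triage 422   # → check the exact model ID string
```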
Advanced: proxies, caching, multi-project isolation
HTTP proxies
Behind a corporate proxy:
```shell
export HTTPS_PROXY=http://corp-proxy.example.com:8080
export HTTP_PROXY=http://corp-proxy.example.com:8080
export NO_PROXY=localhost,127.0.0.1
```
If the proxy uses a self-signed cert, also:
```shell
export NODE_EXTRA_CA_CERTS=/etc/ssl/corp-ca.pem
```
Cache management
Claude Code caches recent prompts and context indexes locally. To clear:
```shell
rm -rf ~/.claude/cache
```
Server-side prompt caching (applied transparently whether you go through Anthropic or CodeGateway) makes long-context reuse much cheaper. See Anthropic's prompt caching docs for the discount math.
Multi-project isolation
Different projects, different models or hooks? Use project-level .claude/settings.json to override the global one:
```
~/.claude/settings.json            # personal default
project-a/.claude/settings.json    # stricter hooks
project-b/.claude/settings.json    # different model
```
Claude Code finds the nearest .claude/settings.json and merges it over the global one. Project-level values always win.
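The precedence is easy to reason about if you picture the lookup as walking up the directory tree. A sketch; `find_settings` is our illustration only, and the real client merges project settings over the global ones rather than picking one file:

```shell
# Walk upward from a directory to the nearest .claude/settings.json,
# falling back to the global file. Illustration of the precedence only.
find_settings() {
  dir="$1"
  while [ -n "$dir" ] && [ "$dir" != "/" ]; do
    if [ -f "$dir/.claude/settings.json" ]; then
      echo "$dir/.claude/settings.json"
      return 0
    fi
    dir="$(dirname "$dir")"
  done
  echo "$HOME/.claude/settings.json"
}

find_settings "$PWD"
```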
Real workflows: planning and code review
Case 1: PRD to task breakdown
Drop the PRD (Markdown) into the project. Open with:
```
Read docs/PRD.md, break it into tasks per milestone. For each task,
output: acceptance criteria, files affected, time estimate. Write to
docs/tasks.md.
```
Claude Code will read, plan, and write the file. This is a planning workload — Opus pays off here on complex specs.
Case 2: code review on a diff
```
Run a review on the current git diff. Focus on:
1. Unhandled errors
2. Hardcoded secrets
3. Existing tests broken or missing
Output as CRITICAL / HIGH / MEDIUM / LOW with code snippets for each fix suggestion.
```
Claude Code runs git diff, reads the relevant files, and returns a structured review. Pair this with a code-reviewer Skill for even tighter output.
Case 3: cross-service refactor
Renaming a field across multiple repos is one of Claude Code's strongest plays. Use /agents to fan out:
- agent A — grep, list everything to change
- agent B — apply backend service edits
- agent C — apply frontend call-site edits
- agent D — run tests, collect failures
The main agent coordinates and summarizes. Use Haiku for sub-agents to keep cost manageable.
Team configuration
Shared Skills repository
Build a team-skills repo for shared Skills:
```
team-skills/
├── react-conventions/SKILL.md
├── pr-template/SKILL.md
├── changelog-style/SKILL.md
└── README.md
```
Reference it from each project via .claude/skills/ symlinks or git submodules.
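A minimal sketch of the symlink wiring, using the layout above. Paths are illustrative (in practice the shared repo lives wherever you clone it), and a git submodule achieves the same with version pinning:

```shell
# Link one shared Skill from a checked-out team-skills repo into the current
# project. The stub SKILL.md stands in for the real shared file.
mkdir -p team-skills/react-conventions .claude/skills
printf -- '---\nname: react-conventions\n---\n' > team-skills/react-conventions/SKILL.md

ln -sfn "$PWD/team-skills/react-conventions" .claude/skills/react-conventions

# The project now sees the shared SKILL.md through the link:
head -n 2 .claude/skills/react-conventions/SKILL.md
```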
Standardized hooks
The "lint passes locally but CI fails" pattern dies when lint / format / type-check hooks live in .claude/settings.json and are committed to the repo. Every team member's Claude Code session runs the same quality bar. CI failure rate drops sharply.
API key strategy
Don't share personal keys. On CodeGateway, every team member has their own account, and the markup tier is computed per-account on a 90-day rolling window — floor 1.2x. Sharing a single key creates problems:
- Usage attribution falls apart.
- Upstream rate limits halt the entire team at once.
- One leak forces full re-key everywhere.
Issue dedicated CI keys (ci-<repo>) and store them in GitHub Actions / GitLab CI secrets.
FAQ
Q: Claude Code or Cursor — which is "better"?
A: They aren't directly competing. Cursor is an IDE strong at interactive editing, inline completion, and visual feedback. Claude Code is a terminal agent strong at long-horizon tasks, automation, and direct integration with git and shell. For in-editor productivity, Cursor; for big automated tasks, Claude Code. Full breakdown in Claude Code vs Cursor vs GitHub Copilot.
Q: Skills vs Hooks — what's the difference?
A: Skills are behavioral specifications that influence how Claude Code thinks about and structures code. Hooks are automation hooks that force commands to run before or after tool calls. They compose well: Skill defines the rule, Hook enforces it.
Q: Do I have to use CodeGateway?
A: No. Claude Code talks to Anthropic direct natively. CodeGateway exists to compress onboarding cost, link stability, and billing flexibility into one place. See the trade-off table in the connection timeout guide.
Q: Sub-agent count limit?
A: No hard cap on the client. Practical limit is gateway RPM and concurrency. CodeGateway's defaults are looser than Anthropic direct, so normal use rarely hits them.
Q: Can I run Claude Code in an air-gapped environment?
A: No — model API access is mandatory. The usual workaround is an outbound proxy (see "HTTP proxies" above) or a self-hosted CodeGateway tier (contact for enterprise).
Q: Can I version-control my config?
A: Yes, please do. Commit .claude/settings.json, .claude/skills/, .claude/hooks/. Don't commit API keys.
Q: How do I see what I've spent?
A: Dashboard → Logs / Overview. Overview now has a Total Tokens card; Logs supports time-range filters (Today / 7d / 30d / 90d / All), and you can break out by key or model. Cost math is in the billing guide.
Q: Will Claude Code run dangerous commands without asking?
A: By default, destructive operations (rm -rf, git reset --hard, force push) prompt for confirmation. You can widen the auto-approve scope in settings, but don't run with --dangerously-skip-permissions — that hands the steering wheel to the AI.
Related reading
- Claude Code 5-minute setup
- Claude Code connection timeout troubleshooting
- Claude Code vs Cursor vs GitHub Copilot
- Top-up and billing guide
- Tier markup explainer
- Error troubleshooting
- Anthropic — Claude Code overview
- Anthropic — Skills
The list above looks long, but the bootstrap is short: install the CLI, set the key and default model, run a session. Skills and Hooks are things you accumulate as you go, not all at once. Commit the config to your repo and your team's setup converges automatically.
