Structure CLAUDE.md Files with Proper Hierarchy, Scoping, and Modularity
The Three-Level Configuration Hierarchy
Claude Code reads configuration instructions from CLAUDE.md files at three distinct levels, each serving a different purpose and audience. Understanding which level to place instructions at is fundamental to effective team workflows.
- User-level (~/.claude/CLAUDE.md) — Applies exclusively to a single developer. These settings are stored in the user's home directory and are not shared via version control. Use this for personal preferences like preferred coding style, editor shortcuts, or individual workflow habits.
- Project-level (.claude/CLAUDE.md or a root CLAUDE.md) — Applies to the entire project and is committed to version control. This is where team-wide coding standards, architecture decisions, and shared conventions belong. Every team member who clones the repository automatically inherits these instructions.
- Directory-level (a CLAUDE.md in any subdirectory) — Applies only when Claude Code is operating on files within that specific subdirectory. Useful for module-specific guidelines, like different linting rules for a legacy module versus a newly written module.
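Taken together, the three levels can be pictured as a layout like the following (the project and module names are illustrative):

```
~/.claude/CLAUDE.md          # user-level: personal preferences, not in version control
my-project/
├── CLAUDE.md                # project-level: team-wide standards, committed to the repo
├── src/
└── legacy/
    └── CLAUDE.md            # directory-level: applies only to files under legacy/
```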
Keeping Configuration Modular
As project configuration grows, a single monolithic CLAUDE.md becomes difficult to maintain. There are two primary strategies for modularity:
- @import syntax — Reference external files from within CLAUDE.md to break large instruction sets into logical groupings. For example, @import coding-standards.md can pull in a dedicated coding standards file, keeping the main CLAUDE.md clean and navigable.
- .claude/rules/ directory — Instead of one large CLAUDE.md, create topic-specific rule files inside this directory. Each file addresses a single concern (e.g., testing-conventions.md, api-design.md), making it easy for team members to find and update specific guidelines.
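As a sketch of the import strategy described above (the imported file names are hypothetical, and the syntax follows this article's description), a project-level CLAUDE.md might look like:

```markdown
# CLAUDE.md (project root)

General project context goes here; detailed conventions live in focused files:

@import coding-standards.md
@import api-design.md
```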
Verifying Active Configuration
Use the /memory command within a Claude Code session to inspect which memory and configuration files are currently loaded. This is invaluable for debugging when instructions seem to be missing or conflicting.
Claude Code operates on a three-level hierarchy — user, project, and directory — where each level has a distinct scope. User-level is personal and unshared; project-level is team-wide and version-controlled; directory-level is scoped to a specific folder.
Scenario: A new team member joins and doesn't receive the team's coding instructions. Why? The instructions were placed in ~/.claude/CLAUDE.md (user-level), which is local to the original developer's machine and invisible to everyone else. Fix: Move shared standards to project-level .claude/CLAUDE.md so they're distributed through version control.
Your team agrees on coding standards that every developer must follow when using Claude Code. Where should these standards be configured?
.claude/CLAUDE.md is the correct location because it is committed to version control and automatically shared with every team member upon cloning. User-level config is personal and invisible to others. Directory-level applies only to subdirectories and would require duplicating standards everywhere. Environment variables are not how CLAUDE.md configuration works.
Build and Configure Custom Slash Commands and Skills
Project-Scoped vs User-Scoped Commands
Custom slash commands extend Claude Code's functionality with reusable workflows. Their placement determines their sharing scope:
- Project-scoped commands (.claude/commands/) — Stored in the repository and shared with the entire team through version control. The command is available to every team member immediately after cloning or pulling. Ideal for standardized workflows like /review, /deploy-check, or /generate-tests.
- User-scoped commands (~/.claude/commands/) — Stored in the developer's home directory and available only to that individual. Use these for personal productivity shortcuts that don't need to be standardized across the team.
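A minimal sketch of a project-scoped /review command, assuming the common convention that the Markdown file's body serves as the reusable prompt (the checklist contents are illustrative):

```markdown
<!-- .claude/commands/review.md (invoked as /review) -->
Review the current changes against the team checklist:
1. Naming and style match the conventions in CLAUDE.md
2. New logic is covered by tests
3. No secrets or leftover debug output in the diff
```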
Skills and Their Configuration
Skills (.claude/skills/) provide more sophisticated, self-contained capabilities with a SKILL.md file that includes YAML frontmatter for configuration:
- context: fork — Runs the skill in an isolated sub-agent context. This prevents the skill's output from polluting the main session, which is critical for exploratory or verbose operations.
- allowed-tools — Restricts which tools the skill can access, creating a sandboxed execution environment for safety and focus.
- argument-hint — Provides a hint to the user about what argument the skill expects when invoked.
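Putting those frontmatter fields together, a SKILL.md might begin like this (the name and description fields, the specific tool names, and the prompt body are illustrative assumptions):

```markdown
---
name: explore-codebase
description: Investigate part of the codebase in an isolated context
context: fork
allowed-tools: Read, Grep, Glob
argument-hint: [area of the codebase to explore]
---

Explore the area named in the argument, then return a concise summary
of the relevant modules, their dependencies, and any risks.
```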
Commands vs Skills: When to Use Which
The fundamental distinction is execution context. Commands execute directly within the current session — they are always part of the ongoing conversation. Skills can be invoked on-demand and optionally run in isolated contexts, making them better suited for tasks that generate a lot of intermediate output or need tool restrictions.
Commands are always-in-session; they execute as part of the current conversation context. Skills are on-demand with isolation options; they can fork into a separate sub-agent context, preventing output pollution and enabling tool restrictions.
Mistake: Using a regular command for a task that requires context isolation (e.g., a deep codebase exploration that generates verbose output). The command's output floods the main session, degrading context quality for subsequent interactions. Better approach: Use a skill with context: fork so the exploration runs in an isolated sub-agent and only a clean summary returns to the main session.
You want to create a /review command that runs a standardized code review checklist. Every developer on the team should have access to it after cloning the repository. Where should the command definition file be placed?
Commands in .claude/commands/ are committed to version control and automatically available to all team members. User-scoped commands are personal and not shared. CLAUDE.md is for instructions, not command definitions. While a skill can provide similar functionality, the question specifically asks about a slash command for the team.
Apply Path-Specific Rules for Conditional Convention Loading
How Path-Scoped Rules Work
Files inside .claude/rules/ can include YAML frontmatter with a paths field containing glob patterns. When Claude Code operates on a file, it checks these patterns and loads only the rules whose globs match the current file path. This means rules are conditionally activated based on what you're editing.
Glob Pattern Matching
The paths field accepts standard glob patterns, enabling precise targeting:
- **/*.test.tsx — Matches all test files with a .test.tsx extension across all directories
- src/api/** — Matches everything under the API source directory
- **/*.{ts,tsx} — Matches all TypeScript and TSX files project-wide
When to Prefer Path Rules Over Directory-Level CLAUDE.md
Directory-level CLAUDE.md files are useful when conventions align neatly with folder boundaries. However, many conventions span across multiple directories — for example, testing patterns that apply to *.test.tsx files scattered throughout the entire project. In these cases, glob-based rules in .claude/rules/ are far more effective because a single rule file covers all matching files regardless of where they live in the directory tree.
```markdown
---
# .claude/rules/testing-conventions.md
paths:
  - "**/*.test.tsx"
  - "**/*.test.ts"
  - "**/*.spec.ts"
---
Use React Testing Library for component tests. Prefer userEvent over fireEvent for user interactions. Each test file must include at least one integration test.
```
Use glob patterns in .claude/rules/ for conventions that span multiple directories (e.g., **/*.test.tsx). Path-scoped rules activate only when editing matching files, keeping irrelevant instructions out of context and reducing noise.
Your project has test files spread across dozens of directories. You want all test files to follow a specific testing convention, regardless of where they are in the project. What is the most maintainable approach?
A glob-scoped rule file in .claude/rules/ is the most maintainable approach: one file covers all test files across every directory, and it loads only when editing matching files. Placing a CLAUDE.md in each directory creates a maintenance nightmare. Loading the convention globally pollutes the context when editing non-test files. A custom skill is overengineered for static convention loading.
Decide When to Use Plan Mode vs Direct Execution
When Plan Mode Is the Right Choice
Plan mode is a read-only collaborative mode designed for designing an approach before making any changes. Activate it when:
- Complex, multi-file tasks — Changes touch many files or modules and require understanding dependencies before editing
- Large-scale refactoring or restructuring — Architectural changes like splitting a monolith, migrating database layers, or reorganizing module boundaries
- Multiple valid approaches exist — The task has meaningful trade-offs between different solutions and you need to evaluate options before committing
- Architectural decisions are required — Choices about patterns, data flow, or system design that will have lasting impact
When Direct Execution Is More Efficient
For straightforward, well-scoped changes, direct execution avoids the overhead of a planning phase:
- Simple, clearly defined changes — Adding a single validation check, fixing a typo, updating a configuration value
- Scope is unambiguous — The change touches one or two files and the correct approach is obvious
- No meaningful alternatives — There is essentially one right way to implement the change
Using Explore Sub-Agents for Verbose Discovery
When you need to investigate a codebase thoroughly before planning, an explore sub-agent isolates the verbose discovery output from your main session. This keeps your primary context clean while still gathering the information needed to make informed decisions.
Plan mode is for architecture decisions — complex tasks with multiple valid approaches and significant trade-offs. Direct execution is for clear-scope changes — simple, unambiguous modifications where planning adds no value.
You've been assigned to restructure a monolithic application into microservices. This involves dozens of files, decisions about service boundaries, and dependency analysis. What approach should you use?
Plan mode is the right fit: the task is architectural, spans many files, and admits multiple valid service decompositions, so the trade-offs should be mapped out in a read-only planning phase before any edits are made.
Apply Iterative Refinement Techniques for Progressive Improvement
Concrete Examples Over Abstract Instructions
When communicating the desired transformation to Claude Code, concrete input/output examples are the most effective technique. Instead of describing what you want in abstract terms, show a before-and-after pair. This eliminates ambiguity and gives the model a precise reference for the expected behavior.
Test-Driven Iteration (TDD Pattern)
The TDD iteration pattern is one of the most powerful refinement techniques:
- Write a failing test suite first — Define the expected behavior through tests before any implementation exists
- Ask Claude Code to implement — Share the failing tests and let it write code to pass them
- Share specific failures — When tests fail, provide the exact failure output (not just "tests failed") so the model can target the precise issue
- Iterate and refine — Continue the loop: run tests, share failures, implement fixes, verify all tests still pass
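The loop above might look like this in practice (the test framework, file names, and failure message are illustrative):

```markdown
1. Commit failing tests first: transform.test.ts defines the expected behavior.
2. Prompt: "Implement transform() in transform.ts so every test in transform.test.ts passes."
3. Run the suite and paste the exact failure, e.g.:
   "handles empty input: expected [] but received null"
4. Prompt: "Fix only this failure and keep every other test green." Repeat until the suite passes.
```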
The Interview Pattern
For underspecified tasks, use the interview pattern: instruct Claude Code to ask clarifying questions before starting implementation. This surfaces hidden requirements and prevents costly mid-implementation pivots.
Batching Issues vs Sequential Fixing
Decide your iteration strategy based on issue interdependence:
- Batch together — When issues interact with each other (e.g., related validation failures), present them all at once so the model can address root causes holistically
- Fix sequentially — When issues are independent (e.g., unrelated bugs in different modules), address them one at a time to maintain focus and avoid context overload
TDD-based iteration is the highest-leverage refinement technique: write a failing test, implement to pass it, verify, and refine while keeping all existing tests green. Concrete test expectations communicate requirements far more precisely than verbal descriptions.
You're iterating with Claude Code on a data transformation function. The implementation passes most tests but fails on edge cases. What is the most effective way to communicate the needed improvement?
Share the exact failing test output together with concrete input/output pairs for the edge cases. Specific failures and before/after examples give the model a precise target, whereas an abstract description of the problem leaves it guessing.
Integrate Claude Code into CI/CD Pipelines
Non-Interactive Mode with -p
In CI/CD environments, Claude Code must run without human interaction. The -p (or --print) flag enables non-interactive mode, where Claude Code processes a prompt, produces output, and exits — no terminal input required. Without this flag, Claude Code will wait for interactive input, causing the CI pipeline to hang indefinitely.
Structured Output for Automation
CI/CD pipelines typically need machine-parseable output. Two flags enable this:
- --output-format json — Forces all output into a structured JSON format that downstream pipeline steps can parse reliably
- --json-schema — Constrains the JSON output to conform to a specific schema, ensuring consistent field names, types, and structure across runs
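Combined with -p, those flags might appear in a pipeline step like the following sketch (the workflow structure and prompt are illustrative, and passing a schema file path to --json-schema is an assumption; only the flags themselves come from the text above):

```yaml
# Hypothetical GitHub Actions step
- name: Automated standards check
  run: |
    claude -p "Check the changed files against our CLAUDE.md standards" \
      --output-format json \
      --json-schema review-schema.json > review.json
```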
Project Context in CI
When Claude Code runs in CI, it still reads CLAUDE.md files from the repository. This means your project-level CLAUDE.md provides essential context — coding standards, architectural guidelines, review criteria — even in automated environments. Ensure your CLAUDE.md is rich enough to guide CI-invoked Claude Code effectively.
Session Context Isolation: Generation vs Review
A critical architectural principle for CI/CD integration: the session that generated code should not be the same session that reviews it. When Claude Code generates code and then reviews it in the same session, it retains the reasoning context from the generation phase. This creates confirmation bias — the reviewer "remembers" why it made certain choices and is less likely to catch errors.
Always use separate sessions for code generation and code review. A fresh review session examines the code without the generator's reasoning baggage.
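In pipeline terms, this simply means two independent non-interactive invocations: the second starts fresh with no memory of the first (the step names and prompts are illustrative):

```yaml
# Hypothetical pipeline: generation and review are separate claude invocations,
# so the review session never sees the generator's reasoning.
- name: Generate
  run: claude -p "Implement the change described in TICKET.md" > gen.log
- name: Review (fresh session)
  run: claude -p "Review the resulting diff for defects" --output-format json > review.json
```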
Avoiding Duplicate Findings on Re-Runs
When re-running Claude Code for review (e.g., after a push to an open PR), include findings from prior review runs in the new prompt. This prevents Claude Code from flagging the same issues repeatedly, keeping review comments actionable and non-repetitive.
Use -p for non-interactive mode in CI/CD. Use separate sessions for code generation and code review to prevent the reviewer from inheriting the generator's confirmation bias. Combine with --output-format json and --json-schema for pipeline-friendly structured output.
Trap #1: Running Claude Code in interactive mode within a CI pipeline. Without the -p flag, it waits for terminal input and the pipeline hangs indefinitely.
Trap #2: Using the same session to generate code and then review it. The review inherits the generation context, creating confirmation bias — the model is less critical of its own reasoning and more likely to overlook genuine issues.
Your CI pipeline uses Claude Code to generate code in one step and then review it in the next step. Reviews rarely catch real issues. What is the most likely cause?
The review most likely runs in the same session (or with the same context) as the generation step, so it inherits the generator's reasoning and its confirmation bias. Running the review as a fresh, separate session restores its ability to catch genuine issues.