Amazon Q CLI
Welcome to the supplementary Amazon Q CLI developer documentation, which complements the primary Amazon Q CLI documentation.
These docs are experimental, a work in progress, and subject to change. They currently document the functionality of development builds rather than the latest stable release.
Agent Format
The agent configuration file for each agent is a JSON file. The filename (without the .json
extension) becomes the agent's name. It contains configuration needed to instantiate and run the agent.
[!TIP] We recommend using the `/agent generate` slash command within your active Q session to intelligently generate your agent configuration with the help of Q.
Every agent configuration file can include the following sections:

- `name` — The name of the agent (optional, derived from the filename if not specified).
- `description` — A description of the agent.
- `prompt` — High-level context for the agent.
- `mcpServers` — The MCP servers the agent has access to.
- `tools` — The tools available to the agent.
- `toolAliases` — Tool name remapping for handling naming collisions.
- `allowedTools` — Tools that can be used without prompting.
- `toolsSettings` — Configuration for specific tools.
- `resources` — Resources available to the agent.
- `hooks` — Commands run at specific trigger points.
- `useLegacyMcpJson` — Whether to include legacy MCP configuration.
- `model` — The model ID to use for this agent.
Name Field
The `name` field specifies the name of the agent. This is used for identification and display purposes.
{
"name": "aws-expert"
}
Description Field
The `description` field provides a description of what the agent does. This is primarily for human readability and helps users distinguish between different agents.
{
"description": "An agent specialized for AWS infrastructure tasks"
}
Prompt Field
The `prompt` field is intended to provide high-level context to the agent, similar to a system prompt. It supports both inline text and `file://` URIs that reference external files.
Inline Prompt
{
"prompt": "You are an expert AWS infrastructure specialist"
}
File URI Prompt
You can reference external files using `file://` URIs. This allows you to maintain long, complex prompts in separate files for better organization and version control, while keeping your agent configuration clean and readable.
{
"prompt": "file://./my-agent-prompt.md"
}
File URI Path Resolution
- Relative paths: Resolved relative to the agent configuration file's directory
  - `"file://./prompt.md"` → `prompt.md` in the same directory as the agent config
  - `"file://../shared/prompt.md"` → `prompt.md` in a parent directory
- Absolute paths: Used as-is
  - `"file:///home/user/prompts/agent.md"` → absolute path to the file
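A minimal sketch of these resolution rules (illustrative only; `resolve_prompt_uri` and its exact behavior are assumptions, not the CLI's actual implementation):

```python
from pathlib import Path

def resolve_prompt_uri(uri: str, agent_config_dir: str) -> Path:
    """Resolve a file:// prompt URI the way the rules above describe."""
    path = uri.removeprefix("file://")
    p = Path(path)
    if p.is_absolute():
        # file:///home/user/... is used as-is
        return p
    # Relative paths resolve against the agent configuration file's directory
    return (Path(agent_config_dir) / p).resolve()

print(resolve_prompt_uri("file:///home/user/prompts/agent.md", "/etc/agents"))
print(resolve_prompt_uri("file://./prompt.md", "/home/user/.aws/amazonq/cli-agents"))
```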
File URI Examples
{
"prompt": "file://./prompts/aws-expert.md"
}
{
"prompt": "file:///Users/developer/shared-prompts/rust-specialist.md"
}
McpServers Field
The `mcpServers` field specifies which Model Context Protocol (MCP) servers the agent has access to. Each server is defined with a command and optional arguments.
{
"mcpServers": {
"fetch": {
"command": "fetch3.1",
"args": []
},
"git": {
"command": "git-mcp",
"args": [],
"env": {
"GIT_CONFIG_GLOBAL": "/dev/null"
},
"timeout": 120000
}
}
}
Each MCP server configuration can include:

- `command` (required): The command to execute to start the MCP server
- `args` (optional): Arguments to pass to the command
- `env` (optional): Environment variables to set for the server
- `timeout` (optional): Timeout for each MCP request in milliseconds (default: 120000)
Tools Field
The `tools` field lists all tools that the agent can potentially use. Tools include built-in tools and tools from MCP servers.

- Built-in tools are specified by their name (e.g., `fs_read`, `execute_bash`)
- MCP server tools are prefixed with `@` followed by the server name (e.g., `@git`)
- To specify a specific tool from an MCP server, use `@server_name/tool_name`
- Use `*` as a special wildcard to include all available tools (both built-in and from MCP servers)
- Use `@builtin` to include all built-in tools
- Use `@server_name` to include all tools from a specific MCP server
{
"tools": [
"fs_read",
"fs_write",
"execute_bash",
"@git",
"@rust-analyzer/check_code"
]
}
To include all available tools, you can simply use:
{
"tools": ["*"]
}
ToolAliases Field
The `toolAliases` field is an advanced feature that allows you to remap tool names. It is primarily used to resolve naming collisions between tools from different MCP servers, or to create more intuitive names for specific tools.

For example, if both the `@github-mcp` and `@gitlab-mcp` servers provide a tool called `get_issues`, you would have a naming collision. You can use `toolAliases` to disambiguate them:
{
"toolAliases": {
"@github-mcp/get_issues": "github_issues",
"@gitlab-mcp/get_issues": "gitlab_issues"
}
}
With this configuration, the tools will be available to the agent as `github_issues` and `gitlab_issues` instead of colliding on `get_issues`.
You can also use aliases to create shorter or more intuitive names for frequently used tools:
{
"toolAliases": {
"@aws-cloud-formation/deploy_stack_with_parameters": "deploy_cf",
"@kubernetes-tools/get_pod_logs_with_namespace": "pod_logs"
}
}
The key is the original tool name (including server prefix for MCP tools), and the value is the new name to use.
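As a sketch, the remapping amounts to a simple key lookup over the agent's tool list (illustrative; `apply_aliases` is a hypothetical helper, not part of the CLI):

```python
def apply_aliases(tools: list[str], aliases: dict[str, str]) -> list[str]:
    """Replace each original tool name with its alias, if one is defined."""
    return [aliases.get(tool, tool) for tool in tools]

renamed = apply_aliases(
    ["@github-mcp/get_issues", "@gitlab-mcp/get_issues", "fs_read"],
    {
        "@github-mcp/get_issues": "github_issues",
        "@gitlab-mcp/get_issues": "gitlab_issues",
    },
)
print(renamed)  # ['github_issues', 'gitlab_issues', 'fs_read']
```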
AllowedTools Field
The `allowedTools` field specifies which tools can be used without prompting the user for permission. This is a security feature that helps prevent unauthorized tool usage.
{
"allowedTools": [
"fs_read",
"fs_*",
"@git/git_status",
"@server/read_*",
"@fetch"
]
}
You can allow tools using several patterns:
Exact Matches
- Built-in tools: `"fs_read"`, `"execute_bash"`, `"knowledge"`
- Specific MCP tools: `"@server_name/tool_name"` (e.g., `"@git/git_status"`)
- All tools from an MCP server: `"@server_name"` (e.g., `"@fetch"`)
Wildcard Patterns
The `allowedTools` field supports glob-style wildcard patterns using `*` and `?`:
Native Tool Patterns
- Prefix wildcard: `"fs_*"` → matches `fs_read`, `fs_write`, `fs_anything`
- Suffix wildcard: `"*_bash"` → matches `execute_bash`, `run_bash`
- Middle wildcard: `"fs_*_tool"` → matches `fs_read_tool`, `fs_write_tool`
- Single character: `"fs_?ead"` → matches `fs_read`, `fs_head` (but not `fs_write`)
MCP Tool Patterns
- Tool prefix: `"@server/read_*"` → matches `@server/read_file`, `@server/read_config`
- Tool suffix: `"@server/*_get"` → matches `@server/issue_get`, `@server/data_get`
- Server pattern: `"@*-mcp/read_*"` → matches `@git-mcp/read_file`, `@db-mcp/read_data`
- Any tool from matching servers: `"@git-*/*"` → matches any tool from servers matching `git-*`
Examples
{
"allowedTools": [
// Exact matches
"fs_read",
"knowledge",
"@server/specific_tool",
// Native tool wildcards
"fs_*", // All filesystem tools
"execute_*", // All execute tools
"*_test", // Any tool ending in _test
// MCP tool wildcards
"@server/api_*", // All API tools from server
"@server/read_*", // All read tools from server
"@git-server/get_*_info", // Tools like get_user_info, get_repo_info
"@*/status", // Status tool from any server
// Server-level permissions
"@fetch", // All tools from fetch server
"@git-*" // All tools from any git-* server
]
}
Pattern Matching Rules
- `*` matches any sequence of characters (including none)
- `?` matches exactly one character
- Exact matches take precedence over patterns
- Server-level permissions (`@server_name`) allow all tools from that server
- Matching is case-sensitive
Unlike the `tools` field, the `allowedTools` field does not support the `"*"` wildcard for allowing all tools. To allow tools, you must use specific patterns or server-level permissions.
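The matching rules can be approximated with Python's `fnmatch` module (a sketch under the rules above; `is_allowed` is hypothetical and not the CLI's actual matcher):

```python
from fnmatch import fnmatchcase

def is_allowed(tool: str, allowed: list[str]) -> bool:
    """Illustrative sketch of allowedTools matching."""
    # Exact matches take precedence over patterns
    if tool in allowed:
        return True
    for pattern in allowed:
        # "@server" alone is a server-level permission for every tool it provides
        if pattern.startswith("@") and "/" not in pattern and tool.startswith(pattern + "/"):
            return True
        # Case-sensitive glob matching with * and ?
        if fnmatchcase(tool, pattern):
            return True
    return False

allowed = ["fs_read", "fs_*", "@git/git_status", "@server/read_*", "@fetch"]
print(is_allowed("fs_write", allowed))          # True  (matches "fs_*")
print(is_allowed("@fetch/fetch_url", allowed))  # True  (server-level "@fetch")
print(is_allowed("execute_bash", allowed))      # False
```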
ToolsSettings Field
The `toolsSettings` field provides configuration for specific tools. Each tool can have its own unique configuration options.

Note that settings which restrict allowable patterns are overridden if the tool is also included in `allowedTools`.
{
"toolsSettings": {
"fs_write": {
"allowedPaths": ["~/**"]
},
"@git/git_status": {
"git_user": "$GIT_USER"
}
}
}
For built-in tool configuration options, please refer to the built-in tools documentation.
Resources Field
The `resources` field gives an agent access to local resources. Currently, only file resources are supported, and all resource paths must start with `file://`.
{
"resources": [
"file://AmazonQ.md",
"file://README.md",
"file://.amazonq/rules/**/*.md"
]
}
Resources can include:
- Specific files
- Glob patterns for multiple files
- Absolute or relative paths
Hooks Field
The `hooks` field defines commands to run at specific trigger points during the agent lifecycle and tool execution.
For detailed information about hook behavior, input/output formats, and examples, see the Hooks documentation.
{
"hooks": {
"agentSpawn": [
{
"command": "git status"
}
],
"userPromptSubmit": [
{
"command": "ls -la"
}
],
"preToolUse": [
{
"matcher": "execute_bash",
"command": "{ echo \"$(date) - Bash command:\"; cat; echo; } >> /tmp/bash_audit_log"
},
{
"matcher": "use_aws",
"command": "{ echo \"$(date) - AWS CLI call:\"; cat; echo; } >> /tmp/aws_audit_log"
}
],
"postToolUse": [
{
"matcher": "fs_write",
"command": "cargo fmt --all"
}
]
}
}
Each hook is defined with:

- `command` (required): The command to execute
- `matcher` (optional): Pattern to match tool names for `preToolUse` and `postToolUse` hooks. See the built-in tools documentation for available tool names.
Available hook triggers:

- `agentSpawn`: Triggered when the agent is initialized.
- `userPromptSubmit`: Triggered when the user submits a message.
- `preToolUse`: Triggered before a tool is executed. Can block the tool use.
- `postToolUse`: Triggered after a tool is executed.
- `stop`: Triggered when the assistant finishes responding.
UseLegacyMcpJson Field
The `useLegacyMcpJson` field determines whether to include MCP servers defined in the legacy MCP configuration files (`~/.aws/amazonq/mcp.json` for the global configuration and `.amazonq/mcp.json` in the current working directory for the workspace configuration).
{
"useLegacyMcpJson": true
}
When set to `true`, the agent will have access to all MCP servers defined in the global and local configurations, in addition to those defined in the agent's `mcpServers` field.
Model Field
The `model` field specifies the model ID to use for this agent. If not specified, the agent will use the default model.
{
"model": "claude-sonnet-4"
}
The model ID must match one of the available models returned by the Q CLI's model service. You can see available models by using the `/model` command in an active chat session.
If the specified model is not available, the agent will fall back to the default model and display a warning.
Complete Example
Here's a complete example of an agent configuration file:
{
"name": "aws-rust-agent",
"description": "A specialized agent for AWS and Rust development tasks",
"mcpServers": {
"fetch": {
"command": "fetch3.1",
"args": []
},
"git": {
"command": "git-mcp",
"args": []
}
},
"tools": [
"fs_read",
"fs_write",
"execute_bash",
"use_aws",
"@git",
"@fetch/fetch_url"
],
"toolAliases": {
"@git/git_status": "status",
"@fetch/fetch_url": "get"
},
"allowedTools": [
"fs_read",
"@git/git_status"
],
"toolsSettings": {
"fs_write": {
"allowedPaths": ["src/**", "tests/**", "Cargo.toml"]
},
"use_aws": {
"allowedServices": ["s3", "lambda"]
}
},
"resources": [
"file://README.md",
"file://docs/**/*.md"
],
"hooks": {
"agentSpawn": [
{
"command": "git status"
}
],
"userPromptSubmit": [
{
"command": "ls -la"
}
]
},
"useLegacyMcpJson": true,
"model": "claude-sonnet-4"
}
Built-in Tools
Amazon Q CLI includes several built-in tools that agents can use. This document describes each tool and its configuration options.
- `execute_bash` — Execute a shell command.
- `fs_read` — Read files, directories, and images.
- `fs_write` — Create and edit files.
- `introspect` — Provide information about Q CLI capabilities and documentation.
- `report_issue` — Open a GitHub issue template.
- `knowledge` — Store and retrieve information in a knowledge base.
- `thinking` — Internal reasoning mechanism.
- `todo_list` — Create and manage TODO lists for tracking multi-step tasks.
- `use_aws` — Make AWS CLI API calls.
Execute_bash Tool
Execute the specified bash command.
Configuration
{
"toolsSettings": {
"execute_bash": {
"allowedCommands": ["git status", "git fetch"],
"deniedCommands": ["git commit .*", "git push .*"],
"autoAllowReadonly": true
}
}
}
Configuration Options
Option | Type | Default | Description |
---|---|---|---|
allowedCommands | array of strings | [] | List of specific commands that are allowed without prompting. Supports regex patterns. Note that regexes are anchored with \A and \z |
deniedCommands | array of strings | [] | List of specific commands that are denied. Supports regex patterns. Note that regexes are anchored with \A and \z. Deny rules are evaluated before allow rules |
autoAllowReadonly | boolean | false | Whether to allow read-only commands without prompting |
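A sketch of how such anchored allow/deny rules behave (illustrative; Python's `re.fullmatch` stands in for the `\A`...`\z` anchoring, and `check_command` is not the actual implementation):

```python
import re

def check_command(cmd: str, allowed: list[str], denied: list[str]) -> str:
    """Evaluate a command against anchored allow/deny regex rules."""
    # Deny rules are evaluated before allow rules
    for pat in denied:
        if re.fullmatch(pat, cmd):  # fullmatch is equivalent to \A...\z anchoring
            return "deny"
    for pat in allowed:
        if re.fullmatch(pat, cmd):
            return "allow"
    return "prompt"  # fall back to asking the user

print(check_command("git status", ["git status", "git fetch"], ["git commit .*", "git push .*"]))  # allow
print(check_command("git push origin main", ["git status"], ["git push .*"]))                      # deny
print(check_command("git log", ["git status"], []))                                                # prompt
```

Because the patterns are anchored, `"git status"` does not allow `"git status && rm -rf /"`; the whole command line must match.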
Fs_read Tool
Tool for reading files, directories, and images.
Configuration
{
"toolsSettings": {
"fs_read": {
"allowedPaths": ["~/projects", "./src/**"],
"deniedPaths": ["/some/denied/path/", "/another/denied/path/**/file.txt"]
}
}
}
Configuration Options
Option | Type | Default | Description |
---|---|---|---|
allowedPaths | array of strings | [] | List of paths that can be read without prompting. Supports glob patterns. Glob patterns have the same behavior as gitignore. For example, ~/temp would match ~/temp/child and ~/temp/child/grandchild |
deniedPaths | array of strings | [] | List of paths that are denied. Supports glob patterns. Deny rules are evaluated before allow rules. Glob patterns have the same behavior as gitignore. For example, ~/temp would match ~/temp/child and ~/temp/child/grandchild |
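As an illustration of the directory-prefix behavior described above (a rough sketch, not the CLI's real matcher, which follows full gitignore semantics):

```python
from fnmatch import fnmatchcase

def path_matches(path: str, patterns: list[str]) -> bool:
    """Rough gitignore-style check: a pattern that names a directory
    also matches everything beneath it."""
    for pat in patterns:
        if fnmatchcase(path, pat):
            return True
        # "/home/u/temp" also matches "/home/u/temp/child/grandchild"
        if path.startswith(pat.rstrip("/") + "/"):
            return True
    return False

print(path_matches("/home/u/temp/child/grandchild", ["/home/u/temp"]))  # True
print(path_matches("/home/u/other/file.txt", ["/home/u/temp"]))         # False
```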
Fs_write Tool
Tool for creating and editing files.
Configuration
{
"toolsSettings": {
"fs_write": {
"allowedPaths": ["~/projects/output.txt", "./src/**"],
"deniedPaths": ["/some/denied/path/", "/another/denied/path/**/file.txt"]
}
}
}
Configuration Options
Option | Type | Default | Description |
---|---|---|---|
allowedPaths | array of strings | [] | List of paths that can be written to without prompting. Supports glob patterns. Glob patterns have the same behavior as gitignore. For example, ~/temp would match ~/temp/child and ~/temp/child/grandchild |
deniedPaths | array of strings | [] | List of paths that are denied. Supports glob patterns. Deny rules are evaluated before allow rules. Glob patterns have the same behavior as gitignore. For example, ~/temp would match ~/temp/child and ~/temp/child/grandchild |
Introspect Tool
Provide information about Q CLI capabilities, features, commands, and documentation. This tool accesses Q CLI's built-in documentation and help content to answer questions about the CLI's functionality.
Usage
The introspect tool is automatically used when you ask questions about Q CLI itself, such as:
- "What can you do?"
- "How do I save conversations?"
- "What commands are available?"
- "Do you have feature X?"
Behavior
- Tries to provide only information that is explicitly documented
- Accesses the README, built-in tools documentation, experiments, and settings information
- Automatically enters tangent mode when the `introspect.tangentMode` setting is set to `true`
Report_issue Tool
Opens the browser to a pre-filled GitHub issue template to report chat issues, bugs, or feature requests.
This tool has no configuration options.
Knowledge Tool (experimental)
Store and retrieve information in a knowledge base across chat sessions. Provides semantic search capabilities for files, directories, and text content.
This tool has no configuration options.
Thinking Tool (experimental)
An internal reasoning mechanism that improves the quality of complex tasks by breaking them down into atomic actions.
This tool has no configuration options.
TODO List Tool (experimental)
Create and manage TODO lists for tracking multi-step tasks. Lists are stored locally in `.amazonq/cli-todo-lists/`.
This tool has no configuration options.
Use_aws Tool
Make AWS CLI API calls with the specified service, operation, and parameters.
Configuration
{
"toolsSettings": {
"use_aws": {
"allowedServices": ["s3", "lambda", "ec2"],
"deniedServices": ["eks", "rds"],
"autoAllowReadonly": true
}
}
}
Configuration Options
Option | Type | Default | Description |
---|---|---|---|
allowedServices | array of strings | [] | List of AWS services that can be accessed without prompting |
deniedServices | array of strings | [] | List of AWS services to deny. Deny rules are evaluated before allow rules |
autoAllowReadonly | boolean | false | Whether to automatically allow read-only operations (get, describe, list, ls, search, batch_get) without prompting |
Using Tool Settings in Agent Configuration
Tool settings are specified in the `toolsSettings` section of the agent configuration file. Each tool's settings are specified using the tool's name as the key.

For MCP server tools, use the format `@server_name/tool_name` as the key:
{
"toolsSettings": {
"fs_write": {
"allowedPaths": ["~/projects"]
},
"@git/git_status": {
"git_user": "$GIT_USER"
}
}
}
Tool Permissions
Tools can be explicitly allowed in the `allowedTools` section of the agent configuration:
{
"allowedTools": [
"fs_read",
"knowledge",
"@git/git_status"
]
}
If a tool is not in the `allowedTools` list, the user will be prompted for permission when the tool is used, unless a matching `toolsSettings` configuration allows it.

Some tools have default permission behaviors:

- `fs_read` and `report_issue` are trusted by default
- `execute_bash`, `fs_write`, and `use_aws` prompt for permission by default, but can be configured to allow specific commands, paths, or services
Knowledge Management
The /knowledge command provides persistent knowledge base functionality for Amazon Q CLI, allowing you to store, search, and manage contextual information that persists across chat sessions.
[!NOTE] This is a beta feature that must be enabled before use.
Getting Started
Enable the Knowledge Feature
The knowledge feature is experimental and disabled by default. Enable it with:
q settings chat.enableKnowledge true
Basic Usage
Once enabled, you can use `/knowledge` commands within your chat session:
/knowledge add --name myproject --path /path/to/project
/knowledge show
Commands
/knowledge show
Display all entries in your knowledge base with detailed information including creation dates, item counts, and persistence status. Also shows any active background indexing operations with progress and ETA information.
This unified command replaces the previous separate `/knowledge status` command, providing a complete view of both your stored knowledge and ongoing operations in one place.
/knowledge add --name <name> --path <path> [--include pattern] [--exclude pattern] [--index-type Fast|Best]
Add files or directories to your knowledge base. The system will recursively index all supported files in directories.
Required Parameters:

- `--name` or `-n`: A descriptive name for the knowledge entry
- `--path` or `-p`: Path to the file or directory to index
Examples:
/knowledge add --name "project-docs" --path /path/to/documentation
/knowledge add -n "config-files" -p /path/to/config.json
/knowledge add --name "fast-search" --path /path/to/logs --index-type Fast
/knowledge add -n "semantic-search" -p /path/to/docs --index-type Best
Index Types
Choose the indexing approach that best fits your needs:
- `--index-type Fast` (Lexical - BM25):
  - ✅ Lightning-fast indexing - processes files quickly
  - ✅ Instant search - keyword-based search with immediate results
  - ✅ Low resource usage - minimal CPU and memory requirements
  - ✅ Perfect for logs, configs, and large codebases
  - ❌ Less intelligent - requires exact keyword matches
- `--index-type Best` (Semantic - all-MiniLM-L6-v2):
  - ✅ Intelligent search - understands context and meaning
  - ✅ Natural language queries - search with full sentences
  - ✅ Finds related concepts - even without exact keyword matches
  - ✅ Perfect for documentation, research, and complex content
  - ❌ Slower indexing - requires AI model processing
  - ❌ Higher resource usage - more CPU and memory intensive
When to Use Each Type:
Use Case | Recommended Type | Why |
---|---|---|
Log files, error messages | Fast | Quick keyword searches, large volumes |
Configuration files | Fast | Exact parameter/value lookups |
Large codebases | Fast | Fast symbol and function searches |
Documentation | Best | Natural language understanding |
Research papers | Best | Concept-based searching |
Mixed content | Best | Better overall search experience |
Default Behavior:

If you don't specify `--index-type`, the system uses your configured default:
# Set your preferred default
q settings knowledge.indexType Fast # or Best
# This will use your default setting
/knowledge add "my-project" /path/to/project
Default Pattern Behavior
When you don't specify `--include` or `--exclude` patterns, the system uses your configured default patterns:

- If no patterns are specified and no defaults are configured, all supported files are indexed
- Default include patterns apply when no `--include` is specified
- Default exclude patterns apply when no `--exclude` is specified
- Explicit patterns always override defaults
Example with defaults configured:
q settings knowledge.defaultIncludePatterns '["**/*.rs", "**/*.py"]'
q settings knowledge.defaultExcludePatterns '["target/**", "__pycache__/**"]'
# This will use the default patterns
/knowledge add "my-project" /path/to/project
# This will override defaults with explicit patterns
/knowledge add "docs-only" /path/to/project --include "**/*.md"
New: Pattern Filtering
You can now control which files are indexed using include and exclude patterns:
/knowledge add "rust-code" /path/to/project --include "*.rs" --exclude "target/**"
/knowledge add "docs" /path/to/project --include "**/*.md" --include "**/*.txt" --exclude "node_modules/**"
Pattern examples:
- `*.rs` - All Rust files in all directories recursively (equivalent to `**/*.rs`)
- `**/*.py` - All Python files recursively
- `target/**` - Everything in the target directory
- `node_modules/**` - Everything in the node_modules directory
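A sketch of how per-file include/exclude filtering could be evaluated (illustrative; `should_index` is hypothetical, and the real matcher's glob semantics may differ):

```python
from fnmatch import fnmatch

def should_index(rel_path: str, include: list[str], exclude: list[str]) -> bool:
    """Exclude patterns are checked first; include patterns (if any) must then match."""
    if any(fnmatch(rel_path, pat) for pat in exclude):
        return False
    if include:
        return any(fnmatch(rel_path, pat) for pat in include)
    # No include patterns: every supported file is indexed
    return True

print(should_index("src/main.rs", ["*.rs"], ["target/**"]))      # True
print(should_index("target/debug/app", ["*.rs"], ["target/**"])) # False
print(should_index("README.md", ["*.rs"], ["target/**"]))        # False
```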
Supported file types (expanded):
- Text files: .txt, .log, .rtf, .tex, .rst
- Markdown: .md, .markdown, .mdx (now supported!)
- JSON: .json (now treated as text for better searchability)
- Configuration: .ini, .conf, .cfg, .properties, .env
- Data files: .csv, .tsv
- Web formats: .svg (text-based)
- Code files: .rs, .py, .js, .jsx, .ts, .tsx, .java, .c, .cpp, .h, .hpp, .go, .rb, .php, .swift, .kt, .kts, .cs, .sh, .bash, .zsh, .html, .htm, .xml, .css, .scss, .sass, .less, .sql, .yaml, .yml, .toml
- Special files: Dockerfile, Makefile, LICENSE, CHANGELOG, README (files without extensions)
Important: Unsupported files are indexed without text content extraction.
/knowledge remove <identifier>
Remove entries from your knowledge base. You can remove by name, path, or context ID.
/knowledge remove "project-docs"        # Remove by name
/knowledge remove /path/to/old/project  # Remove by path
/knowledge update <path>
Update an existing knowledge base entry with new content from the specified path. The original include/exclude patterns are preserved during updates.
/knowledge update /path/to/updated/project
/knowledge clear
Remove all entries from your knowledge base. This action requires confirmation and cannot be undone.
You'll be prompted to confirm:
⚠️ This will remove ALL knowledge base entries. Are you sure? (y/N):
/knowledge cancel [operation_id]
Cancel background operations. You can cancel a specific operation by ID or all operations if no ID is provided.
/knowledge cancel abc12345 # Cancel specific operation
/knowledge cancel all # Cancel all operations
Configuration
Configure knowledge base behavior:
q settings knowledge.maxFiles 10000   # Maximum files per knowledge base
q settings knowledge.chunkSize 1024   # Text chunk size for processing
q settings knowledge.chunkOverlap 256 # Overlap between chunks
q settings knowledge.indexType Fast   # Default index type (Fast or Best)
q settings knowledge.defaultIncludePatterns '["**/*.rs", "**/*.md"]'           # Default include patterns
q settings knowledge.defaultExcludePatterns '["target/**", "node_modules/**"]' # Default exclude patterns
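To illustrate what the `chunkSize` and `chunkOverlap` settings control, here is a minimal sketch of overlapping fixed-size chunking (illustrative only; the real chunker may split on different boundaries):

```python
def chunk_text(text: str, chunk_size: int = 1024, overlap: int = 256) -> list[str]:
    """Split text into chunk_size-character chunks, where each chunk repeats
    the last `overlap` characters of the previous one (assumes overlap < chunk_size)."""
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

parts = chunk_text("x" * 2000)
print(len(parts))     # 3 chunks for 2000 characters
print(len(parts[0]))  # 1024
```

The overlap keeps content that falls on a chunk boundary searchable from both neighboring chunks.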
Agent-Specific Knowledge Bases
Isolated Knowledge Storage
Each agent maintains its own isolated knowledge base, ensuring that knowledge contexts are scoped to the specific agent you're working with. This provides better organization and prevents knowledge conflicts between different agents.
Folder Structure
Knowledge bases are stored in the following directory structure:
~/.aws/amazonq/knowledge_bases/
├── q_cli_default/ # Default agent knowledge base
│ ├── contexts.json # Metadata for all contexts
│ ├── context-id-1/ # Individual context storage
│ │ ├── data.json # Semantic search data
│ │ └── bm25_data.json # BM25 search data (if using Fast index)
│ └── context-id-2/
│ ├── data.json
│ └── bm25_data.json
├── my-custom-agent_<alphanumeric-code>/ # Custom agent knowledge base
│ ├── contexts.json
│ ├── context-id-3/
│ │ └── data.json
│ └── context-id-4/
│ └── data.json
└── another-agent_<alphanumeric-code>/ # Another agent's knowledge base
├── contexts.json
└── context-id-5/
└── data.json
How Agent Isolation Works
- Automatic Scoping: When you use `/knowledge` commands, they automatically operate on the current agent's knowledge base
- No Cross-Agent Access: Agent A cannot access or search knowledge contexts created by Agent B
- Independent Configuration: Each agent can have different knowledge base settings and contexts
- Migration Support: Legacy knowledge bases are automatically migrated to the default agent on first use
Agent Switching
When you switch between agents, your knowledge commands will automatically work with that agent's specific knowledge base:
# Working with the default agent
/knowledge add --name project-docs --path /path/to/docs

# Switch to a custom agent
q chat --agent my-custom-agent

# This creates a separate knowledge base for my-custom-agent
/knowledge add --name agent-specific-docs --path /path/to/agent/docs

# Switch back to the default agent
q chat

# Only sees the original project-docs, not agent-specific-docs
/knowledge show
How It Works
Indexing Process
When you add content to the knowledge base:
- Pattern Filtering: Files are filtered based on include/exclude patterns (if specified)
- File Discovery: The system recursively scans directories for supported file types
- Content Extraction: Text content is extracted from each supported file
- Chunking: Large files are split into smaller, searchable chunks
- Background Processing: Indexing happens asynchronously in the background
- Semantic Embedding: Content is processed for semantic search capabilities
Search Capabilities
The knowledge base uses semantic search, which means:
- You can search using natural language queries
- Results are ranked by relevance, not just keyword matching
- Related concepts are found even if exact words don't match
Persistence
- Persistent contexts: Survive across chat sessions and CLI restarts
- Context persistence is determined automatically based on usage patterns
- Include/exclude patterns are stored with each context and reused during updates
Best Practices
Organizing Your Knowledge Base
- Use descriptive names when adding contexts: "api-documentation" instead of "docs"
- Group related files in directories before adding them
- Use include/exclude patterns to focus on relevant files
- Regularly review and update outdated contexts
Effective Searching
- Use natural language queries: "how to handle authentication errors using the knowledge tool"
- Be specific about what you're looking for: "database connection configuration"
- Try different phrasings if initial searches don't return expected results
- Prompt Q to use the tool with prompts like "find database connection configuration using your knowledge bases" or "using your knowledge tools can you find how to replace your laptop"
Managing Large Projects
- Add project directories rather than individual files when possible
- Use include/exclude patterns to avoid indexing build artifacts: `--exclude "target/**" --exclude "node_modules/**"`
- Use /knowledge show to monitor indexing progress for large directories
- Consider breaking very large projects into logical sub-directories
Pattern Filtering Best Practices
- Be specific: Use precise patterns to avoid over-inclusion
- Exclude build artifacts: Always exclude directories like `target/**`, `node_modules/**`, and `.git/**`
- Include relevant extensions: Focus on file types you actually need to search
- Test patterns: Verify patterns match expected files before large indexing operations
Limitations
File Type Support
- Binary files are ignored during indexing
- Very large files may be chunked, potentially splitting related content
- Some specialized file formats may not extract content optimally
Performance Considerations
- Large directories may take significant time to index
- Background operations are limited by concurrent processing limits
- Search performance may vary based on knowledge base size and embedding engine
- Pattern filtering happens during file walking, improving performance for large directories
Storage and Persistence
- No explicit storage size limits, but practical limits apply
- No automatic cleanup of old or unused contexts
- Clear operations are irreversible with no backup functionality
Troubleshooting
Files Not Being Indexed
If your files aren't appearing in search results:
- Check patterns: Ensure your include patterns match the files you want
- Verify exclude patterns: Make sure exclude patterns aren't filtering out desired files
- Check file types: Ensure your files have supported extensions
- Monitor progress: Use /knowledge show to check if indexing is still in progress
- Verify paths: Ensure the paths you added actually exist and are accessible
- Check for errors: Look for error messages in the CLI output
Search Not Finding Expected Results
If searches aren't returning expected results:
- Wait for indexing: Use /knowledge show to ensure indexing is complete
- Try different queries: Use various phrasings and keywords
- Verify content: Use /knowledge show to confirm your content was added
- Check file types: Unsupported file types won't have searchable content
Performance Issues
If operations are slow:
- Check operations: Use /knowledge show to see operation progress and queue
- Cancel if needed: Use /knowledge cancel to stop problematic operations
- Add smaller chunks: Consider adding subdirectories instead of entire large projects
- Use better patterns: Exclude unnecessary files with exclude patterns
- Adjust settings: Consider lowering maxFiles or chunkSize for better performance
Pattern Issues
If patterns aren't working as expected:
- Test patterns: Use simple patterns first, then add complexity
- Check syntax: Ensure glob patterns use correct syntax (`**` for recursive matching, `*` for a single level)
- Verify paths: Make sure patterns match actual file paths in your project
- Use absolute patterns: Consider using full paths in patterns for precision
Migrating Profiles to Agents
All global profiles (created under `~/.aws/amazonq/profiles/`) support automatic migration to the global agents directory under `~/.aws/amazonq/cli-agents/` on initial startup.
If you have a local MCP configuration defined under `.amazonq/mcp.json`, you can optionally add this configuration to a global agent, or create a new workspace agent.
Creating a New Workspace Agent
Workspace agents are managed under `.amazonq/cli-agents/` inside the current working directory.

You can create a new workspace agent with `q agent create --name my-agent -d .`.
Global Context
Global context previously configured under `~/.aws/amazonq/global_context.json` is no longer supported. Global context will need to be manually added to agents (see the section below).
MCP Servers
The agent configuration supports the same MCP format as previously configured.
See the agent format documentation for more details.
Context Files
Context files are now file URIs and are configured under the `"resources"` field.
Example from profiles:
{
"paths": [
"~/my-files/**/*.txt"
]
}
Same example for agents:
{
"resources": [
"file://~/my-files/**/*.txt"
]
}
Hooks
Hook triggers have been updated:
- Hook names are no longer required
- `conversation_start` is now `agentSpawn`
- `per_prompt` is now `userPromptSubmit`
See the agent format documentation for more details.
Example from profiles:
{
"hooks": {
"sleep_conv_start": {
"trigger": "conversation_start",
"type": "inline",
"disabled": false,
"timeout_ms": 30000,
"max_output_size": 10240,
"cache_ttl_seconds": 0,
"command": "echo Conversation start hook"
},
"hello_world": {
"trigger": "per_prompt",
"type": "inline",
"disabled": false,
"timeout_ms": 30000,
"max_output_size": 10240,
"cache_ttl_seconds": 0,
"command": "echo Per prompt hook"
}
}
}
Same example for agents:
{
"hooks": {
"userPromptSubmit": [
{
"command": "echo Per prompt hook",
"timeout_ms": 30000,
"max_output_size": 10240,
"cache_ttl_seconds": 0
}
],
"agentSpawn": [
{
"command": "echo Conversation start hook",
"timeout_ms": 30000,
"max_output_size": 10240,
"cache_ttl_seconds": 0
}
]
}
}