This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
nov_chat is a multi-pane terminal user interface (TUI) for querying multiple LLMs simultaneously. Built with Rust, it allows users to compare responses from different models or configurations side-by-side in real-time.
```bash
# Build the project
cargo build

# Run the application
cargo run

# Run with release optimizations
cargo build --release
cargo run --release

# Run all tests
cargo test

# Run tests for a specific module
cargo test llm::
cargo test storage::

# Run a specific test
cargo test test_config_serialization

# Run tests with output
cargo test -- --nocapture

# Check code without building
cargo check

# Run clippy for lints
cargo clippy

# Format code
cargo fmt
```
The application uses an async message-passing architecture:
`app.rs` - Application state and business logic

- `App`: Main application state containing panes, input mode, and LLM client
- `Pane`: Individual chat pane with its own `ModelConfig` and message history
- `InputMode`: Enum for Normal/Editing/Config modes (vim-inspired interface)

`llm/` - LLM client abstraction

- `LlmClient`: OpenAI-compatible API client using reqwest (supports any OpenAI-compatible endpoint)
- `ModelConfig`: Model configuration (model name, temperature, max_tokens, top_p)
- Environment variables: `OPENAI_BASE_URL` (required), `OPENAI_API_KEY` (optional)

`storage/` - Persistence layer

- `Conversation`: Saves chat history as markdown files in the `conversations/` directory
- `Message`: Individual message with role, content, and timestamp
- `Settings`: Saves/loads pane configurations from `config.toml` in the project root

`ui.rs` - Terminal rendering with ratatui

`input.rs` - Not yet implemented; input handling currently lives in `main.rs`
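The relationships among these types can be sketched roughly as follows. This is a hypothetical reconstruction from the descriptions above, not the actual definitions in `app.rs` and `llm/`; field names not mentioned in this document (such as `active_pane`) and the placeholder model name are assumptions.

```rust
// Hypothetical sketch of the core state types described above; the real
// fields in app.rs and llm/ may differ.

#[derive(Clone, Debug)]
struct ModelConfig {
    model: String,
    temperature: f32,
    max_tokens: u32,
    top_p: f32,
}

#[derive(Debug)]
struct Message {
    role: String,    // e.g. "user" or "assistant"
    content: String,
}

#[derive(Debug)]
struct Pane {
    config: ModelConfig,   // each pane carries its own model configuration
    history: Vec<Message>, // per-pane message history
}

#[derive(Debug)]
enum InputMode {
    Normal,
    Editing,
    Config,
}

struct App {
    panes: Vec<Pane>,
    active_pane: usize,    // assumed field tracking the focused pane
    input_mode: InputMode,
}

fn main() {
    let pane = Pane {
        config: ModelConfig {
            model: "gpt-4o-mini".to_string(), // placeholder model name
            temperature: 0.7,
            max_tokens: 1024,
            top_p: 1.0,
        },
        history: Vec::new(),
    };
    let app = App {
        panes: vec![pane],
        active_pane: 0,
        input_mode: InputMode::Normal,
    };
    println!("{} pane(s), mode {:?}", app.panes.len(), app.input_mode);
}
```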
Parallel LLM Requests: When a prompt is submitted, the app spawns a separate tokio task for each pane (app.rs:260-271). This allows multiple models to be queried concurrently without blocking.
Channel-based Response Handling: LLM responses are sent through an unbounded mpsc channel. The main loop polls this channel on each iteration (app.rs:163, 277-295), allowing responses to appear as they arrive without blocking user input.
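The fan-out-then-poll pattern described in the two points above can be illustrated with a small sketch. The real app uses tokio tasks and an unbounded `tokio::sync::mpsc` channel; this standalone version substitutes std threads and `std::sync::mpsc` to show the same shape, and `fake_llm_request` is a stand-in for the per-pane HTTP call.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Stand-in for one pane's LLM call; the real app performs an async HTTP
// request per pane via reqwest inside a tokio task.
fn fake_llm_request(pane_id: usize, prompt: &str) -> String {
    format!("pane {pane_id} reply to '{prompt}'")
}

fn main() {
    let (tx, rx) = mpsc::channel::<(usize, String)>();

    // Fan out: one worker per pane, mirroring the spawn-per-pane design.
    for pane_id in 0..3 {
        let tx = tx.clone();
        thread::spawn(move || {
            let reply = fake_llm_request(pane_id, "hello");
            tx.send((pane_id, reply)).expect("main loop receiver dropped");
        });
    }
    drop(tx); // drop the original sender so the channel closes when workers finish

    // Main loop: poll without blocking, as the app does on each event-loop
    // iteration, so responses appear as they arrive while input stays live.
    let mut replies = Vec::new();
    loop {
        match rx.try_recv() {
            Ok(msg) => replies.push(msg),
            Err(mpsc::TryRecvError::Empty) => thread::sleep(Duration::from_millis(5)),
            Err(mpsc::TryRecvError::Disconnected) => break,
        }
    }
    assert_eq!(replies.len(), 3);
    println!("collected {} replies", replies.len());
}
```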
Vim-style Navigation: The interface uses vim-style keybindings (Ctrl+h/j/k/l) or Ctrl+arrow keys for pane navigation, with `i` to enter input mode and Esc to return to normal mode.
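The modal key handling can be sketched as a pure state transition. This is a simplified illustration: the real app reads crossterm key events in `main.rs`, and the `Key` enum and `handle_key` function here are hypothetical stand-ins.

```rust
// Minimal sketch of vim-style mode transitions; types are simplified
// stand-ins for crossterm's KeyEvent/KeyModifiers.
#[derive(Debug, PartialEq)]
enum InputMode {
    Normal,
    Editing,
}

#[derive(Debug)]
enum Key {
    Char(char),
    Esc,
    CtrlChar(char), // a character pressed with the Ctrl modifier
}

fn handle_key(mode: InputMode, key: Key) -> InputMode {
    match (mode, key) {
        // 'i' enters input (editing) mode from normal mode.
        (InputMode::Normal, Key::Char('i')) => InputMode::Editing,
        // Esc always returns to normal mode.
        (_, Key::Esc) => InputMode::Normal,
        // Ctrl+h/j/k/l would move pane focus; the mode itself is unchanged.
        (m, Key::CtrlChar('h' | 'j' | 'k' | 'l')) => m,
        // Everything else leaves the mode as-is in this sketch.
        (m, _) => m,
    }
}

fn main() {
    let m = handle_key(InputMode::Normal, Key::Char('i'));
    assert_eq!(m, InputMode::Editing);
    let m = handle_key(m, Key::Esc);
    assert_eq!(m, InputMode::Normal);
    println!("mode transitions ok");
}
```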
Configuration Persistence: On exit or manual save, the app writes the current pane configurations to `config.toml` so they can be restored on the next launch.

Create a `.env` file in the project root:

```
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_API_KEY=your_key_here  # Optional for some endpoints
```
The application loads this on startup via dotenvy (main.rs:25).
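The required/optional split can be expressed as a small startup check. This is a sketch, not the app's actual code: `load_llm_env` is a hypothetical helper, and it takes a lookup closure only so the logic is testable without touching the process environment.

```rust
use std::env;

// Hypothetical startup check implied above: OPENAI_BASE_URL is required,
// OPENAI_API_KEY is optional. The real app loads .env via dotenvy first.
fn load_llm_env(
    get: impl Fn(&str) -> Option<String>,
) -> Result<(String, Option<String>), String> {
    let base_url = get("OPENAI_BASE_URL")
        .ok_or_else(|| "OPENAI_BASE_URL must be set".to_string())?;
    // The key may be absent for endpoints that need no auth.
    Ok((base_url, get("OPENAI_API_KEY")))
}

fn main() {
    // Read from the real process environment at startup.
    match load_llm_env(|k| env::var(k).ok()) {
        Ok((base_url, key)) => println!("base_url={base_url}, key set: {}", key.is_some()),
        Err(e) => eprintln!("config error: {e}"),
    }
}
```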
Tests use standard Rust testing with:

- `#[cfg(test)]` modules
- mockito for HTTP mocking (dev dependency)
- tempfile for temporary file operations in storage tests

Most modules have basic serialization/deserialization tests. The LLM client has minimal tests because it would otherwise require live API access.
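A storage round-trip test might look like the following sketch. The real tests use the tempfile crate; here `std::env::temp_dir()` stands in so the snippet is self-contained, and `save_conversation` plus its markdown layout are assumptions for illustration, not nov_chat's actual format.

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Hypothetical writer mirroring the Conversation-as-markdown idea above.
fn save_conversation(dir: &Path, name: &str, messages: &[(&str, &str)]) -> PathBuf {
    let mut md = String::new();
    for (role, content) in messages {
        // Assumed layout: one heading per role, then the message body.
        md.push_str(&format!("## {role}\n\n{content}\n\n"));
    }
    let path = dir.join(format!("{name}.md"));
    fs::write(&path, md).expect("write conversation file");
    path
}

fn main() {
    let dir = std::env::temp_dir();
    let path = save_conversation(&dir, "demo", &[("user", "hi"), ("assistant", "hello")]);
    let loaded = fs::read_to_string(&path).expect("read back");
    assert!(loaded.contains("## user"));
    assert!(loaded.contains("hello"));
    fs::remove_file(&path).ok(); // clean up the temp file
    println!("round-trip ok");
}
```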