CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

Project Overview

nov_chat is a multi-pane terminal user interface (TUI) for querying multiple LLMs simultaneously. Built with Rust, it allows users to compare responses from different models or configurations side-by-side in real-time.

Development Commands

Building and Running

# Build the project
cargo build

# Run the application
cargo run

# Run with release optimizations
cargo build --release
cargo run --release

Testing

# Run all tests
cargo test

# Run tests for a specific module
cargo test llm::
cargo test storage::

# Run a specific test
cargo test test_config_serialization

# Run tests with output
cargo test -- --nocapture

Linting

# Check code without building
cargo check

# Run clippy for lints
cargo clippy

# Format code
cargo fmt

Architecture

Core Data Flow

The application uses an async message-passing architecture:

  1. User input is collected in the main event loop (main.rs:55-165)
  2. When a prompt is submitted, it's sent to ALL panes simultaneously (app.rs:244-274)
  3. Each pane spawns an independent tokio task to query its configured LLM
  4. Responses are sent back via an mpsc channel and processed asynchronously (app.rs:276-296)
  5. The UI is redrawn on each event-loop iteration, showing real-time updates
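The fan-out/fan-in shape of steps 2-4 can be sketched with std threads and a channel standing in for tokio tasks and the unbounded mpsc channel (the model names and `query_llm` function are illustrative, not the app's real API):

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for a real LLM call; each pane queries its own model.
fn query_llm(model: &str, prompt: &str) -> String {
    format!("[{model}] echo: {prompt}")
}

// Fan the prompt out to one worker per pane and collect responses as they arrive.
fn fan_out(prompt: &str, models: &[&str]) -> Vec<(usize, String)> {
    let (tx, rx) = mpsc::channel::<(usize, String)>();
    for (pane_id, model) in models.iter().enumerate() {
        let tx = tx.clone();
        let model = model.to_string();
        let prompt = prompt.to_string();
        // One independent worker per pane, mirroring the per-pane tokio tasks.
        thread::spawn(move || {
            tx.send((pane_id, query_llm(&model, &prompt))).unwrap();
        });
    }
    drop(tx); // close our sender so the receive loop terminates
    rx.into_iter().collect()
}

fn main() {
    for (pane_id, response) in fan_out("hello", &["model-a", "model-b"]) {
        println!("pane {pane_id}: {response}");
    }
}
```

In the real app the receive side is polled inside the event loop rather than drained to completion, so responses interleave with user input.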

Module Structure

app.rs - Application state and business logic

llm/ - LLM client abstraction

storage/ - Persistence layer

ui.rs - Terminal rendering with ratatui

input.rs - Not yet implemented; input handling currently lives in main.rs

Key Behaviors

Parallel LLM Requests: When a prompt is submitted, the app spawns a separate tokio task for each pane (app.rs:260-271). This allows multiple models to be queried concurrently without blocking.

Channel-based Response Handling: LLM responses are sent through an unbounded mpsc channel. The main loop polls this channel on each iteration (app.rs:163, 277-295), allowing responses to appear as they arrive without blocking user input.

Vim-style Navigation: The interface uses vim-style keybindings (Ctrl+h/j/k/l) or Ctrl+arrow keys for pane navigation, with 'i' to enter input mode and Esc to return to normal mode.
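A minimal sketch of that modal dispatch, using a plain char/bool pair in place of crossterm's KeyEvent (the Mode and Action names here are illustrative, not the app's real types):

```rust
// Illustrative key model; the real app matches on crossterm key events.
#[derive(Debug, PartialEq)]
enum Mode { Normal, Input }

#[derive(Debug, PartialEq)]
enum Action { FocusLeft, FocusDown, FocusUp, FocusRight, EnterInput, ExitInput, None }

// Map a (key, ctrl-held, mode) triple to an action, mirroring the bindings above.
fn dispatch(key: char, ctrl: bool, mode: &Mode) -> Action {
    match (mode, ctrl, key) {
        (Mode::Normal, true, 'h') => Action::FocusLeft,
        (Mode::Normal, true, 'j') => Action::FocusDown,
        (Mode::Normal, true, 'k') => Action::FocusUp,
        (Mode::Normal, true, 'l') => Action::FocusRight,
        (Mode::Normal, false, 'i') => Action::EnterInput,
        (Mode::Input, false, '\u{1b}') => Action::ExitInput, // Esc key
        _ => Action::None,
    }
}

fn main() {
    println!("{:?}", dispatch('j', true, &Mode::Normal));
}
```

Keeping the mode in the match means input-mode keystrokes fall through to `Action::None` here; in the real app they would be appended to the prompt buffer instead.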

Configuration Persistence: On exit or manual save, the app saves:

  1. Each pane's conversation as a markdown file with timestamp and model info
  2. Pane configurations to config.toml for restoration on next launch
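The shape of a saved config.toml can be sketched with a hand-rolled writer. The struct and field names below are assumptions for illustration; the app itself presumably serializes its own pane struct (e.g. via serde + toml):

```rust
use std::fmt::Write as _;

// Illustrative pane config; the real struct and field names may differ.
struct PaneConfig {
    model: String,
    system_prompt: String,
}

// Emit one [[panes]] table per pane. `{:?}` on a String produces a quoted,
// escaped literal, which doubles as a valid TOML basic string here.
fn to_config_toml(panes: &[PaneConfig]) -> String {
    let mut out = String::new();
    for pane in panes {
        writeln!(out, "[[panes]]").unwrap();
        writeln!(out, "model = {:?}", pane.model).unwrap();
        writeln!(out, "system_prompt = {:?}", pane.system_prompt).unwrap();
        writeln!(out).unwrap();
    }
    out
}

fn main() {
    let panes = vec![PaneConfig {
        model: "gpt-4o".into(),
        system_prompt: "You are concise.".into(),
    }];
    print!("{}", to_config_toml(&panes));
}
```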

Environment Setup

Create a .env file in the project root:

OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_API_KEY=your_key_here  # Optional for some endpoints

The application loads this on startup via dotenvy (main.rs:25).
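The loading step amounts to reading KEY=VALUE pairs; a minimal std-only sketch of that parsing is below. This is for illustration only: the app actually relies on the dotenvy crate, which additionally handles quoting, comments, and exporting the values into the process environment.

```rust
use std::collections::HashMap;

// Minimal .env-style parser: skips blanks and comment lines, splits on the
// first '=', and trims whitespace around keys and values.
fn parse_env(contents: &str) -> HashMap<String, String> {
    let mut vars = HashMap::new();
    for line in contents.lines() {
        let line = line.trim();
        if line.is_empty() || line.starts_with('#') {
            continue;
        }
        if let Some((key, value)) = line.split_once('=') {
            vars.insert(key.trim().to_string(), value.trim().to_string());
        }
    }
    vars
}

fn main() {
    let env = parse_env("OPENAI_BASE_URL=https://api.openai.com/v1\n# comment\n");
    println!("{:?}", env.get("OPENAI_BASE_URL"));
}
```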

Testing Strategy

Tests use standard Rust testing via cargo test. Most modules have basic serialization/deserialization tests. The LLM client has minimal coverage because meaningful tests would require live API access.