# AI Features
skrptiq integrates AI throughout the app for prompt testing, quality analysis, and general assistance.
## Provider Setup
Configure your AI provider in Settings (gear icon in the toolbar). Two options are available:
- Claude CLI (default) — uses the local `claude` command via your existing Anthropic subscription. No API key needed, no extra billing. Slightly slower than direct API access.
- Anthropic API — direct API access with separate usage-based billing. Requires an API key (`sk-ant-...`).
You can switch provider at any time. The change takes effect immediately for all AI features.
## AI Chat
The left panel includes a collapsible AI chat sidebar (280px wide).
The chat is context-aware: each message you send is accompanied by a graph summary — how many nodes of each type you have and the titles of up to 20 nodes. This means the AI can answer questions about your specific setup without you having to describe it.
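A graph summary of that shape — per-type node counts plus up to 20 titles — could be built roughly as follows. The node dictionary shape (`type` and `title` keys) is an assumption for illustration:

```python
from collections import Counter

def graph_summary(nodes: list[dict], max_titles: int = 20) -> str:
    """Summarise a graph as node-type counts plus up to max_titles node titles."""
    counts = Counter(n["type"] for n in nodes)
    count_parts = [f"{count} x {ntype}" for ntype, count in counts.most_common()]
    titles = [n["title"] for n in nodes[:max_titles]]
    return "Node counts: " + ", ".join(count_parts) + "\nTitles: " + "; ".join(titles)
```

A summary like this keeps the context payload small and bounded regardless of graph size, which matters when it is attached to every message.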
Chat history is kept for the current session only — closing the app clears the conversation.
On an empty chat, three suggestion prompts appear:
- “What’s missing from my graph?”
- “Suggest a workflow for code review”
- “How should I structure a RAG pipeline?”
Responses stream progressively — text appears word-by-word with a blinking green cursor while the AI is still generating.
## AI Modes (`/mode`)
The chat panel includes a mode switcher that changes how the AI frames its responses. Each mode corresponds to a persona with its own system prompt, shaping the AI’s tone, priorities, and frameworks.
### Switching modes
- Type `/mode` in the chat input, or
- Click the mode badge below the message list (it shows the current mode name with a colour dot)
A dropdown appears with all available modes:
| Mode | AI behaviour |
|---|---|
| Developer | Working code, technical patterns, security, performance |
| Content Creator | Tone, SEO, platform conventions, content repurposing |
| Researcher | Rigour, attribution, methodology, academic conventions |
| Product Manager | User-centric thinking, RICE/JTBD frameworks, evidence-based reasoning |
| Project Manager | Actionable outputs, RACI matrices, risk registers, delivery focus |
| Student | Progressive explanations, active learning, academic standards |
| Marketing | Audience segments, brand voice, persuasion frameworks, metrics |
When you switch modes, a confirmation message appears in the chat. The mode affects all subsequent AI chat messages until you switch again or restart the app.
Modes are ephemeral — they reset when you close the app. They are independent of the persona you chose during first-run setup, so you can freely switch between them. A product manager might use Developer mode when working on a technical task, then switch back to Product Manager mode for strategy work.
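The mode mechanism described above amounts to a registry of persona system prompts plus an in-memory current selection. A minimal sketch, with illustrative prompt text rather than skrptiq's actual system prompts:

```python
# Hypothetical mode registry; the persona prompts are paraphrased from
# the table above, not the app's real system prompts.
MODES = {
    "developer": "Prioritise working code, technical patterns, security, performance.",
    "content-creator": "Prioritise tone, SEO, platform conventions, repurposing.",
    "researcher": "Prioritise rigour, attribution, methodology.",
    "product-manager": "Prioritise user-centric thinking, RICE/JTBD, evidence.",
    "project-manager": "Prioritise actionable outputs, RACI, risk registers.",
    "student": "Prioritise progressive explanations, active learning.",
    "marketing": "Prioritise audience segments, brand voice, persuasion, metrics.",
}

current_mode = "developer"  # ephemeral: kept in memory, resets on restart

def switch_mode(name: str) -> str:
    """Change the active mode and return the chat confirmation message."""
    global current_mode
    if name not in MODES:
        raise ValueError(f"unknown mode: {name}")
    current_mode = name
    return f"Switched to {name} mode"

def system_prompt() -> str:
    return MODES[current_mode]
```

Keeping `current_mode` in memory only, rather than on disk, is what makes modes reset on restart while the first-run persona persists separately.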
## Prompt Analysis
Available via the Analysis button in the editor modal header (prompt nodes only). Clicking it opens a popover with the analysis results.
### Quality Score
A score from 0 to 100 with colour coding:
| Score | Colour | Meaning |
|---|---|---|
| 80–100 | Green | Strong prompt, well-structured |
| 60–79 | Amber | Decent but has gaps |
| 40–59 | Orange | Needs significant improvement |
| 0–39 | Red | Missing key elements |
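The banding in the table is a straightforward threshold mapping. A sketch (function name is illustrative):

```python
def score_band(score: int) -> tuple[str, str]:
    """Map a 0-100 quality score to its colour band and meaning."""
    if score >= 80:
        return ("green", "Strong prompt, well-structured")
    if score >= 60:
        return ("amber", "Decent but has gaps")
    if score >= 40:
        return ("orange", "Needs significant improvement")
    return ("red", "Missing key elements")
```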
### Best Practices Checklist
Each prompt is checked against:
- Role definition
- Constraints
- Examples
- Output format
- Edge cases
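As a rough illustration of what such a checklist could look for, here is a naive keyword heuristic for the five checks. The patterns are assumptions for demonstration only — a real analyser would likely use the LLM itself:

```python
import re

def checklist(prompt: str) -> dict[str, bool]:
    """Naive keyword heuristics for the five best-practice checks."""
    return {
        "role definition": bool(re.search(r"\byou are\b|\bact as\b", prompt, re.I)),
        "constraints": bool(re.search(r"\bmust\b|\bnever\b|\bonly\b|\blimit\b", prompt, re.I)),
        "examples": bool(re.search(r"\bexample\b|\be\.g\.\b|\bfor instance\b", prompt, re.I)),
        "output format": bool(re.search(r"\bformat\b|\bjson\b|\bmarkdown\b|\btable\b", prompt, re.I)),
        "edge cases": bool(re.search(r"\bedge case\b|\bempty\b|\bmissing\b", prompt, re.I)),
    }
```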
### Statistics
- Estimated tokens
- Word count
- Number of sections, instructions, constraints, and code blocks
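A subset of these statistics can be computed with simple text heuristics. The token estimate below uses the common rough heuristic of about four characters per token, not a real tokenizer; the section and instruction counting rules are assumptions:

```python
import re

def prompt_stats(text: str) -> dict[str, int]:
    """Rough prompt statistics: token estimate, words, sections, code blocks."""
    return {
        "estimated_tokens": max(1, len(text) // 4),  # ~4 chars/token heuristic
        "word_count": len(text.split()),
        "sections": len(re.findall(r"^#+\s", text, re.M)),  # markdown headings
        "code_blocks": text.count("```") // 2,  # pairs of fences
    }
```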
### Variable Detection
Lists all `{{VARIABLE}}` placeholders found in the prompt text, with duplicate counts where a variable appears more than once.
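Detecting placeholders with duplicate counts is a one-liner with a regular expression and a counter. The exact variable-name grammar (letters, digits, underscores) is an assumption:

```python
import re
from collections import Counter

def detect_variables(prompt: str) -> Counter:
    """Find {{VARIABLE}} placeholders, counting how often each appears."""
    return Counter(re.findall(r"\{\{\s*([A-Za-z0-9_]+)\s*\}\}", prompt))
```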
## Prompt Testing
Available in the Test tab of the editor modal.
- Create named test cases with sample input for each variable.
- Variable substitution — fill in values for each `{{VARIABLE}}` placeholder. The prompt is assembled with your values before being sent to the LLM.
- Run individual tests against the configured provider. Results show the response text alongside token usage (input and output counts).
- Auto-generate test case — the AI creates realistic input for your prompt based on its content.
- Save multiple test cases to compare outputs across different inputs.
- Persistence — test cases (including variable values and responses) are saved in the node’s metadata. They survive closing and reopening the editor.
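The substitution step — assembling the final prompt from a test case's variable values — could look like this. Leaving unknown placeholders intact (rather than failing) is a design assumption, not documented behaviour:

```python
import re

def assemble(prompt: str, values: dict[str, str]) -> str:
    """Substitute {{VARIABLE}} placeholders with test-case values.
    Placeholders with no value are left as-is."""
    def sub(m: re.Match) -> str:
        return values.get(m.group(1), m.group(0))
    return re.sub(r"\{\{\s*([A-Za-z0-9_]+)\s*\}\}", sub, prompt)
```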
## Prompt Review
Available in the Review tab of the editor modal.
Send your prompt to the AI for expert review. Optionally add notes describing what you want reviewed (e.g. “Is the output format clear enough?”). The AI returns structured feedback covering quality, completeness, and clarity. Responses stream progressively.
## Prompt Refine
Available in the Refine tab of the editor modal.
A conversational interface for improving your prompt iteratively:
- Describe what you want to change in plain language.
- The AI suggests specific edits based on its analysis and prompt engineering best practices.
- Responses stream progressively.
## Streaming
All AI responses across chat, review, testing, and refine use streaming output:
- Text appears word-by-word as it is generated.
- A blinking green cursor indicates the AI is still working.
- Under the hood, this uses Claude CLI’s `stream-json` output format.
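Consuming such a stream amounts to reading newline-delimited JSON events and extracting the text fragments. The event shape below (`{"type": "text", "text": ...}`) is a simplified assumption for illustration, not the exact Claude CLI event schema:

```python
import json

def stream_text(lines):
    """Yield text fragments from newline-delimited JSON events,
    skipping blank lines and non-text event types."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type") == "text":
            yield event["text"]
```

Yielding fragments as they arrive, rather than buffering the whole response, is what lets the UI render text word-by-word behind the blinking cursor.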