Building TokenCentric: A Desktop App for Managing AI Context Files

10 min read
tokencentric · electron · typescript · ai-tools · indie-hacking · open-source

The Problem Nobody Talks About

If you use AI coding assistants, you have context files. Maybe it's a CLAUDE.md for Claude Code. Maybe it's .cursorrules for Cursor. Maybe it's .github/copilot-instructions.md for GitHub Copilot. Maybe it's all of them.

Now multiply that by 15 projects. Or 30. Or the 8 products I'm juggling under Helsky Labs alone.

Each project has different conventions, different tech stacks, different rules the AI needs to follow. Some files are 200 tokens. Some are 15,000. You have no idea which ones are bloated, which ones are outdated, and whether that massive context file is actually hurting your AI's performance by eating into the response window.

I was managing all of this in VS Code, switching between projects, opening raw markdown files, copy-pasting sections between them. It was a mess. So I built something better.

What TokenCentric Does

TokenCentric is a desktop app purpose-built for one thing: managing the context files that AI coding assistants consume. It supports every major format:

  • CLAUDE.md for Claude Code
  • .cursorrules for Cursor
  • .github/copilot-instructions.md for GitHub Copilot
  • .windsurfrules for Windsurf
  • AGENTS.md for ChatGPT Codex

You point it at your projects directory, and it discovers all of these files automatically. Then you get a unified interface to view, edit, compare, and most importantly, understand the token cost of each one.
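Under the hood, that discovery pass is essentially a recursive directory walk against a list of known file names. A minimal sketch, with simplified suffix matching — the function and constant names are mine, not TokenCentric's actual internals:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// File names each assistant looks for (simplified: suffix match only).
const CONTEXT_FILES = [
  "CLAUDE.md",
  ".cursorrules",
  path.join(".github", "copilot-instructions.md"),
  ".windsurfrules",
  "AGENTS.md",
];

// Recursively collect context files under a root, skipping heavy directories.
export function discoverContextFiles(root: string): string[] {
  const found: string[] = [];
  const walk = (dir: string): void => {
    for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
      if (entry.name === "node_modules" || entry.name === ".git") continue;
      const full = path.join(dir, entry.name);
      if (entry.isDirectory()) {
        walk(full);
      } else if (CONTEXT_FILES.some((name) => full.endsWith(name))) {
        found.push(full);
      }
    }
  };
  walk(root);
  return found;
}
```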

The Tech Stack

I went with Electron 28, React 18, TypeScript, Vite, and Tailwind CSS. State management is Zustand. The editor is Monaco.

// The core app structure
import React from "react";

const App: React.FC = () => {
  const { projects, activeFile } = useProjectStore();
  const { tabs, activeTabId } = useTabStore();

  return (
    <AppShell>
      <Sidebar projects={projects} />
      <EditorArea>
        <TabBar tabs={tabs} activeId={activeTabId} />
        <MonacoEditor file={activeFile} />
        <TokenCounter file={activeFile} />
      </EditorArea>
    </AppShell>
  );
};

Some of these choices were obvious. Electron because I needed filesystem access and cross-platform desktop distribution. React because it's what I think in. TypeScript because I'm not a masochist. Vite because life is too short for slow builds.

Two decisions deserve deeper explanation.

Why Monaco Over CodeMirror

The editor was the most important component in the app. Users would spend 90% of their time in it. I evaluated both CodeMirror 6 and Monaco Editor.

CodeMirror 6 is lighter, more modular, and arguably better designed. It's the trendy choice. But Monaco is the engine behind VS Code, and that matters for a specific reason: familiarity.

The developers using TokenCentric are the same developers who live in VS Code eight hours a day. Their fingers know the keyboard shortcuts. Their eyes know the syntax highlighting. Their muscle memory expects certain behaviors when they hit Ctrl+D or Alt+Up.

Monaco gave me all of that for free. Multi-cursor editing, find and replace, code folding, bracket matching, minimap. Things that would take weeks to replicate in CodeMirror came out of the box.

import Editor from "@monaco-editor/react";

const ContextFileEditor: React.FC<EditorProps> = ({ file, onChange }) => {
  // Hooks belong at the top of the component, not inline in JSX
  const theme = useTheme();

  return (
    <Editor
      height="100%"
      language="markdown"
      theme={theme === "dark" ? "vs-dark" : "vs-light"}
      value={file.content}
      onChange={onChange}
      options={{
        wordWrap: "on",
        minimap: { enabled: true },
        fontSize: 14,
        lineNumbers: "on",
        renderWhitespace: "boundary",
        bracketPairColorization: { enabled: true },
      }}
    />
  );
};

The trade-off is bundle size. Monaco adds about 4MB to the app. In a web context, that's disqualifying. In an Electron app where the user already downloaded a 150MB binary, it's irrelevant.

Token Counting Done Right

This is the feature I'm most proud of, because getting it wrong would have made the entire app pointless.

Different AI providers use different tokenizers. A file that's 5,000 tokens for Claude might be 4,800 tokens for GPT-4. The difference matters when you're trying to stay under context limits.

TokenCentric uses the official tokenizers from each provider:

import { countTokens } from "@anthropic-ai/tokenizer";
import { encoding_for_model } from "tiktoken";

interface TokenCount {
  claude: number;
  openai: number;
}

function getTokenCounts(text: string): TokenCount {
  const claudeTokens = countTokens(text);

  const enc = encoding_for_model("gpt-4");
  const openaiTokens = enc.encode(text).length;
  enc.free();

  return {
    claude: claudeTokens,
    openai: openaiTokens,
  };
}

@anthropic-ai/tokenizer is Anthropic's official tokenizer for Claude models. tiktoken is OpenAI's tokenizer. Both run locally, no API calls needed.

The token count updates in real time as you type, with a color-coded indicator:

  • Green (under 5,000 tokens): You're in great shape. Minimal impact on the context window.
  • Yellow (5,000 to 20,000 tokens): Getting hefty. Consider whether everything in this file is necessary.
  • Red (over 20,000 tokens): This file is consuming a significant chunk of context. Time to audit.

These thresholds aren't arbitrary. Based on current model context windows (200k for Claude, 128k for GPT-4), a context file over 20k tokens is eating 10-15% of your available space before a single line of code is analyzed.
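The indicator logic is simple to encode. A sketch of how I'd express those thresholds — function names are mine, not TokenCentric's actual code:

```typescript
type TokenBadge = "green" | "yellow" | "red";

// Thresholds from above: under 5k green, 5k–20k yellow, over 20k red.
function tokenBadge(tokens: number): TokenBadge {
  if (tokens < 5_000) return "green";
  if (tokens <= 20_000) return "yellow";
  return "red";
}

// Percentage of a model's context window a file consumes,
// e.g. 20k tokens against Claude's 200k window is 10%.
function contextShare(tokens: number, windowTokens: number): number {
  return (tokens * 100) / windowTokens;
}
```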

Hierarchical Context Cost

Here's something that catches people off guard: context files inherit.

If you have a CLAUDE.md in your home directory, another in your workspace root, and another in a subdirectory, Claude Code reads all three. The tokens stack. A 3,000-token global file plus a 5,000-token workspace file plus a 2,000-token project file means 10,000 tokens of context before any code is loaded.

TokenCentric visualizes this inheritance chain. You can see the total accumulated cost at each level of the hierarchy and identify which layer is contributing the most weight.
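The accumulation itself is just a running sum over the chain, outermost layer first. A sketch using the numbers from the example above (types and names are my own, not the app's internals):

```typescript
interface ContextLayer {
  label: string;  // e.g. "global", "workspace", "project"
  tokens: number;
}

// Running total at each level of the inheritance chain.
function accumulate(
  layers: ContextLayer[]
): { label: string; cumulative: number }[] {
  let total = 0;
  return layers.map((layer) => ({
    label: layer.label,
    cumulative: (total += layer.tokens),
  }));
}
```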

Templates with Variable Substitution

After managing context files across a dozen projects, I noticed I was writing the same patterns repeatedly. Every project needs sections for tech stack, coding conventions, and file structure. The specifics change, but the skeleton is the same.

TokenCentric ships with 7 built-in templates. The key feature is variable substitution:

# {{PROJECT_NAME}} - AI Context

## Tech Stack
- Framework: {{FRAMEWORK}}
- Language: {{LANGUAGE}}
- Styling: {{STYLING}}

## Conventions
- Use {{LANGUAGE}} strict mode
- Follow {{FRAMEWORK}} best practices
- Components in `src/components/`

## File Structure
{{FILE_STRUCTURE}}

When you create a new file from a template, TokenCentric prompts you for each variable. It's a small feature, but it saves real time when onboarding a new project.
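The substitution step is a straightforward placeholder replace. A minimal sketch, not TokenCentric's actual implementation:

```typescript
// Replace each {{NAME}} placeholder with the user's answer;
// unanswered placeholders are left intact so they stay visible.
function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name: string) =>
    name in vars ? vars[name] : match
  );
}
```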

Split View and Multi-Tab Editing

Sometimes you need to compare two context files side by side. Maybe you're standardizing conventions across projects. Maybe you're debugging why the AI behaves differently in two similar repos.

TokenCentric supports split view in both horizontal and vertical orientations, with independent scrolling and editing in each pane. Multi-tab editing lets you keep several files open and switch between them without losing your place.

AI Provider Integration

This one surprised even me during development. TokenCentric integrates with Anthropic's Claude, OpenAI's API, and Ollama for local models. The use case: you can ask the AI to review and improve your context files.

It's meta in a way that makes my head spin. Using AI to optimize the instructions you give to AI. But it works. The AI can identify redundant sections, suggest clearer phrasing, and flag instructions that might confuse the model.

The Nightmare: macOS Code Signing and Notarization

Building the app took about 6 weeks. Getting it to run on other people's Macs took another 3 weeks. Apple's code signing and notarization process was the single hardest part of shipping TokenCentric.

If you've never dealt with this, here's the deal: macOS Gatekeeper blocks any app that isn't signed by a registered Apple developer and notarized through Apple's servers. Without it, users get the "this app is damaged and can't be opened" dialog. Or worse, the app silently refuses to launch.

Hardened Runtime

Electron apps need specific entitlements to function under Apple's Hardened Runtime. TokenCentric requires JIT compilation (for V8), unsigned executable memory allocation (also for V8), network access (for AI provider integration), and file access (its entire purpose).

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "...">
<plist version="1.0">
<dict>
    <key>com.apple.security.cs.allow-jit</key>
    <true/>
    <key>com.apple.security.cs.allow-unsigned-executable-memory</key>
    <true/>
    <key>com.apple.security.network.client</key>
    <true/>
    <key>com.apple.security.files.user-selected.read-write</key>
    <true/>
</dict>
</plist>

Each of these entitlements is a potential rejection reason during notarization. Getting the minimal set that actually works required trial and error across multiple submissions.

The Notarization Script

Apple's notarization API is asynchronous. You submit your app, get a request ID, then poll until it either passes or fails. The process can take anywhere from 30 seconds to 15 minutes.

I wrote a custom notarization script with a 15-minute timeout and exponential backoff retry logic, because electron-builder's built-in notarize integration was unreliable. It would sometimes fire twice, causing a double-notarization that Apple's servers would reject as a duplicate submission.

// afterSign hook for electron-builder
const path = require("path");
const { notarize } = require("@electron/notarize");

exports.default = async function notarizing(context) {
  if (context.electronPlatformName !== "darwin") return;

  // electron-builder's built-in notarize is disabled in config,
  // so this hook is the only submission path (no double-notarization)
  const appPath = path.join(
    context.appOutDir,
    `${context.packager.appInfo.productFilename}.app`
  );

  await notarize({
    appPath,
    appleId: process.env.APPLE_ID,
    appleIdPassword: process.env.APPLE_ID_PASSWORD,
    teamId: process.env.APPLE_TEAM_ID,
  });
};
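The timeout-plus-backoff behavior isn't visible in the hook itself. A simplified wrapper in the spirit of what the script does — parameter names and defaults here are mine:

```javascript
// Retry an async operation with exponential backoff: 30s, 60s, 120s, ...
// Gives up when attempts run out or the next wait would blow the time budget.
async function withRetry(fn, { attempts = 5, baseMs = 30_000, budgetMs = 900_000 } = {}) {
  const start = Date.now();
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      const delay = baseMs * 2 ** i;
      const outOfBudget = Date.now() - start + delay > budgetMs;
      if (i === attempts - 1 || outOfBudget) throw err;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```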

I also had to downgrade @electron/notarize from v3 to v2.4.0 for Node 20 compatibility. The v3 release introduced an ESM-only change that broke the CommonJS afterSign hook in electron-builder. Three days of debugging for a one-line version change in package.json.

CI/CD with GitHub Actions

The final piece was automated builds. The GitHub Actions workflow needs 5 Apple-specific secrets: the Developer ID certificate (as a base64-encoded .p12), the certificate password, the Apple ID, an app-specific password, and the Team ID.

Getting the certificate exported correctly, encoded without corruption, and decoded properly in CI took more attempts than I'd like to admit. The workflow now builds, signs, notarizes, and publishes DMG files for every tagged release.
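The encode/decode round-trip looks roughly like this — the paths and the secret name are illustrative, and the stand-in bytes below take the place of the real .p12 exported from Keychain:

```shell
# Stand-in for the Developer ID certificate exported from Keychain Access
printf 'p12-bytes' > /tmp/cert.p12

# Locally: encode the certificate for pasting into a GitHub Actions secret
# (the secret name MACOS_CERT_B64 is my own, not a convention)
base64 < /tmp/cert.p12 > /tmp/cert.p12.b64

# In CI: decode the secret back into a file the keychain can import
MACOS_CERT_B64="$(cat /tmp/cert.p12.b64)"
printf '%s\n' "$MACOS_CERT_B64" | base64 --decode > /tmp/cert-decoded.p12

cmp /tmp/cert.p12 /tmp/cert-decoded.p12 && echo "round-trip intact"
```

The usual corruption culprits are shells mangling newlines in the pasted secret and accidental double-encoding, which is why verifying the round-trip locally first saves CI iterations.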

By the Numbers

  • 10,609 lines of TypeScript across 35+ components
  • 3 releases in 2 months: v0.1.0 (editor + token counting), v0.2.0 (templates + split view), v1.0.0 (AI integration + polish)
  • 7 built-in templates covering common project types
  • 5 context file formats supported
  • 3 AI providers integrated (Anthropic, OpenAI, Ollama)
  • MIT licensed, free forever

What I Learned

Electron gets a bad rap, but it works. Yes, it ships Chromium. Yes, the binary is large. But for a desktop app that needs a rich text editor, filesystem access, and cross-platform distribution, the alternatives are worse. Tauri is promising but its webview inconsistencies would have been a nightmare for Monaco.

Official tokenizers matter. Early in development, I used a rough "4 characters per token" heuristic. The counts were off by 15-30% depending on the content. When the whole point of your app is accurate token counting, "close enough" isn't.

Apple's developer tooling is hostile. The documentation is scattered, outdated, and sometimes contradictory. Error messages from notarization are cryptic. The process assumes you're building a Swift app in Xcode, not an Electron app in a CI pipeline. Budget triple the time you think you'll need.

Context files are an underserved space. The response to TokenCentric told me this problem is bigger than my own workflow. Developers managing multiple AI tools across multiple projects all hit the same friction. The tooling just hasn't caught up yet.

What's Next

TokenCentric 1.0 is stable and does what I need. The roadmap includes:

  • Windows and Linux builds (the code is cross-platform, the packaging pipeline isn't yet)
  • Context file linting to catch common antipatterns
  • Diff view for tracking changes over time
  • Plugin system for custom file formats as new AI tools emerge

The project is open source and free: tokencentric.app

If you're managing AI context files across more than a couple of projects, give it a try. And if you find a bug or have a feature request, the GitHub issues are open.


Building developer tools? I'd love to hear what you're working on. Find me on GitHub.