writing-tools-mcp: Still Useful, Now With uvx

writing-tools-mcp is the MCP server I’ve written about the most, starting with the initial deep dive and then the MCPB packaging update. It’s also the one I use the most consistently. Every post on this blog goes through it.

This update is minor compared to the others. writing-tools-mcp was already in good shape: 144 tests passing, CI pipeline in place, solid feature set. The changes were mostly about consistency with the rest of my MCP servers.

What Changed

The main addition is a proper main() entry point so it works with uvx:

uvx --from git+https://github.com/wdm0006/writing-tools-mcp writing-tools-mcp

I also fixed the hatch build config (it was accidentally including the entire .venv directory in the built package, which is not great), added pre-commit hooks, and standardized the Makefile to match the other servers.
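Both fixes live in pyproject.toml. Roughly, they look like this (a sketch, not the repo's exact config; the module path writing_tools_mcp.server:main and the package name are assumptions):

```toml
[project.scripts]
# console entry point, so `uvx --from git+... writing-tools-mcp` resolves to main()
writing-tools-mcp = "writing_tools_mcp.server:main"

[tool.hatch.build.targets.wheel]
# list the package explicitly so hatch never sweeps stray
# directories like .venv into the built wheel
packages = ["writing_tools_mcp"]
```

With an explicit packages list, hatch only ships what you name, so the .venv problem cannot recur.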

What It Does (For the Uninitiated)

If you write anything, whether blog posts, documentation, or long-form content, this server gives your AI assistant a set of tools that are genuinely hard for models to replicate on their own:

  • Character and word counts that are actually accurate (models are notoriously bad at counting due to tokenization)
  • Readability scores: Flesch reading ease, Flesch-Kincaid grade level, Gunning Fog index
  • Reading time estimates at different analysis levels
  • Keyword density and frequency analysis
  • Passive voice detection
  • Spellchecking
  • AI content detection using GPT-2 perplexity analysis and stylometric baselines
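To make the readability numbers concrete: Flesch reading ease is a fixed formula over word, sentence, and syllable counts. A minimal sketch with a naive vowel-group syllable counter (an illustration of the formula, not the server's actual implementation):

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, minus a silent trailing 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / max(len(sentences), 1))
            - 84.6 * (syllables / max(len(words), 1)))
```

Higher scores mean easier reading; plain conversational prose typically lands in the 60–80 range.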

The AI detection tools are particularly interesting. The perplexity analysis measures how “surprising” your text is to GPT-2: unusually low perplexity means the text is close to what a language model would predict, which is a weak signal that a model wrote it. The stylometric analysis compares sentence structure, vocabulary diversity, and other features against known human baselines. Neither is perfect, but together they give you a reasonable signal about whether your writing sounds like it came from a model.
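The stylometric side can be sketched with a couple of classic features. This is an illustration of the idea only; the server's actual feature set and baselines may differ:

```python
import re
import statistics

def stylometric_features(text: str) -> dict:
    """Compute a few classic stylometric signals.

    Human prose tends to vary its sentence lengths ("burstiness");
    very uniform sentence lengths and low vocabulary diversity are
    weak hints of model-generated text.
    """
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    lengths = [len(re.findall(r"[a-z']+", s.lower())) for s in sentences]
    return {
        "mean_sentence_len": statistics.mean(lengths),
        # burstiness: how much sentence length varies across the text
        "sentence_len_stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
        # type-token ratio: unique words over total words
        "type_token_ratio": len(set(words)) / len(words),
    }
```

Comparing numbers like these against human baselines is the general shape of the check; the usefulness comes from having calibrated baselines, not from the features themselves.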

I use this mostly as a self-check. After working with Claude on a post, I’ll run the stylometric analysis to make sure the result still sounds like me. Sometimes it flags sections that need a more personal voice, and that’s useful feedback.


Links: