Legacy Database Migrations to PostgreSQL. Evolution, Not Revolution.

Database Solutions That Work

We specialize in PostgreSQL internals, custom extensions, and migrating legacy systems to modern architectures. Whether you need performance optimization, C/C++ extension development, or a path off your aging database platform, we can help.

Ready to modernize your data infrastructure?

Let's Talk

About dataStone

dataStone is a PostgreSQL and C/C++ consulting firm founded by Dave Sharpe. We help organizations solve complex database challenges, especially migrating legacy systems to modern PostgreSQL architectures.

Founder: Dave Sharpe

Dave brings 20+ years of C/C++ development experience with deep specialization in PostgreSQL backend development. His expertise includes:

  • PostgreSQL internals, extension development, and FDW (Foreign Data Wrapper) implementation
  • PostgreSQL hooks, callbacks, and PL/pgSQL development
  • Relational database theory and SQL semantics
  • AI theory and algorithms, with a focus on applying AI to development workflows

Recent Posts

  • Catching Spec-Kit Task Phantom Completions with /speckit.verify-tasks

    AI agents sometimes mark tasks [X] without doing the work. These phantom completions are rare (~0.36% in my data), but each one is a false claim you’ll either accept at face value or spend precious mental energy disproving. verify-tasks is a spec-kit community extension that runs a multilayer verification cascade against every [X] completion in your tasks.md and delivers a verdict on whether the work was actually done.

    In The [X] Problem, I documented phantom completions: tasks that AI agents mark [X] complete without doing the work. Across ~830 structured tasks spanning Claude Code /plan and spec-kit workflows, I found three phantom completions, about 0.36%. The preceding post introduced /verify-plan to catch these in Claude Code’s /plan workflow, but nothing equivalent existed for spec-kit’s task-based workflows, where hundreds of [X] marks in tasks.md go unchecked after /speckit.implement finishes.

  • The [X] Problem: Phantom Completions in AI-Assisted Development

    AI coding agents sometimes mark tasks as complete when the work was never done. The code compiles, the tests pass, and the agent moves on. But the specified file was never created, or the required modification was never applied. I call this failure mode a phantom completion: a false positive in the agent’s own task-tracking output, where the checkbox was marked [X] complete, but the code is missing or “wrong” (syntactically correct but not to spec).

  • Claude Code Said 'Done.' It Wasn't. So I Built a Skill to Catch Phantom Completions

    I spent hours refining a /plan in Claude Code. Six major change groups, over sixty discrete implementation items across multiple files. Type definitions, new methods, filter logic, wiring between upstream producers and downstream consumers. The plan was thorough because the feature was complex, and I had iterated it carefully before switching to implementation.

    Claude implemented the plan and reported it was complete. The code compiled. The structure looked right. No errors, no warnings, no hesitation from the agent.

View all posts →