About
Original analysis, benchmark data, and honest AI coverage for practitioners who build. Backed by AgentPulse, our independent, open-source model benchmarks.
About MakerPulse
MakerPulse covers frontier AI for the people building with it. Not ML engineers training models. Not general audiences reading headlines. Practitioners: the growing class of people using tools like Claude Code, Cursor, and Replit Agent to ship real products.
We publish original analysis backed by our own data, not rewritten press releases. When we make a claim, we show the numbers behind it.
How We Work
I pick every topic, define the angle, and bring the perspective that comes from building with these tools every day. Every article goes through a multi-stage editorial pipeline with automated quality gates, mandatory fact-check reports, and strict style enforcement before it goes live.
The bar is high on purpose. No filler. No hedged conclusions. If I can't back a claim with data or direct experience, it doesn't get published.
About AgentPulse
AgentPulse is our research arm: independent, open-source AI model benchmarks designed around the tasks practitioners actually do. We test models on real-world prompts (writing difficult emails, extracting structured data, planning under constraints) and score them using a triple-evaluator methodology with built-in bias detection.
Unlike leaderboard benchmarks that optimize for academic tasks, AgentPulse measures what matters when you're shipping a product: cost per run, latency, hallucination rates, and output quality on tasks you'd actually delegate to an AI.
The full dataset and methodology are available at data.makerpulse.ai and on GitHub.
Who's Behind This
MakerPulse is run by me, Michael Blickenstaff: an investor, AI researcher, and practitioner who builds with AI daily. I started MakerPulse because the AI coverage practitioners need didn't exist: rigorous, data-backed, and written for people who are actually shipping things with these tools.