
InnoRate: AI commercialization analysis with an evidence ledger

InnoRate takes a technology disclosure or a set of context documents and produces a full commercialization analysis: market, IP, competitive landscape, regulatory, ESG, licensees, risk, and an investment memo. Every claim in the report carries an evidence type and a confidence score. High-risk claims run through Chain-of-Verification against live web search before the report is delivered.

The situation

Technology commercialization analysis is expensive, slow, and inconsistent. A technology transfer office or a VC analyst reads a disclosure, googles around, reads a few market reports, writes a memo. The memo takes a week and it's stale the day it's filed. Different analysts produce different memos on the same technology, because the inputs and the reasoning aren't captured anywhere.

The question we wanted to answer: can we build an evaluation pipeline that produces a report of the same shape and quality every time, with a record of how it got there that a human can audit?

What we built

InnoRate is a Nuxt 3 application with a headless V1 API. A user uploads documents or pastes in a description. The system detects the innovations, then runs the generation pipeline:

  1. A research pool phase generates every web query for every section in a single LLM call, deduplicates them, batch-executes against Serper and Exa, and caches the results in Redis with a thirty-minute TTL.
  2. Parallel section generation across eleven section types (technology overview, development stage, IP status, commercialization strategy, competitive picture, market analysis, regulatory compliance, ESG impact, potential licensees, risk assessment, and an investment memo). Up to ten sections run concurrently.
  3. Each section runs its own self-refine loop with up to three iterations and early exit on quality.
  4. Inline evidence extraction. The LLM generates evidence records during generation, not after. Every claim gets typed (web_research, document_stated, analytical_estimate, industry_knowledge, logical_inference, unsupported) and classified (quantitative, qualitative, comparative, causal, predictive).
  5. Chain-of-Verification runs on high-risk claims. Analytical estimates, industry-knowledge claims, and low-confidence quantitatives get re-researched against the live web. Results come back as verified, corrected, contradicted, or unverified.
  6. Evidence ledger finalization. The report ships with a structured ledger capturing every claim, its type, its sources, and its verification status.

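The research-pool phase (step 1) can be sketched in a few lines. The production system caches in Redis with a thirty-minute TTL; this in-memory version only illustrates the dedupe-then-cache shape, and the function names are illustrative, not the real API.

```typescript
// Illustrative sketch of the research-pool dedupe + TTL cache.
type CachedResult = { fetchedAt: number; results: string[] };

const TTL_MS = 30 * 60 * 1000; // thirty-minute TTL, as in the real system
const cache = new Map<string, CachedResult>();

function dedupe(queries: string[]): string[] {
  // Normalize before deduplicating so near-identical queries collapse.
  const seen = new Set<string>();
  const out: string[] = [];
  for (const q of queries) {
    const key = q.trim().toLowerCase();
    if (!seen.has(key)) {
      seen.add(key);
      out.push(q.trim());
    }
  }
  return out;
}

function getCached(query: string, now = Date.now()): string[] | null {
  const hit = cache.get(query.trim().toLowerCase());
  if (!hit || now - hit.fetchedAt > TTL_MS) return null; // miss or expired
  return hit.results;
}

function putCached(query: string, results: string[], now = Date.now()): void {
  cache.set(query.trim().toLowerCase(), { fetchedAt: now, results });
}
```

Deduplicating before batch execution is what keeps one LLM call's worth of queries from turning into redundant Serper and Exa spend.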
The backend is a three-tier architecture: H3 handlers in server/api/, business logic in server/domain/ organized by subdomain (evaluation, report, research), and server/infrastructure/ for AI, cache, DI, and repositories. A typed error hierarchy runs through the whole stack.
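A minimal sketch of what a typed error hierarchy threaded through such a stack looks like. The class names and status codes here are assumptions for illustration, not InnoRate's actual classes; the point is that handlers branch on instanceof rather than string-matching error messages.

```typescript
// Base error carrying an HTTP status the API layer can map directly.
class AppError extends Error {
  constructor(message: string, readonly statusCode: number) {
    super(message);
    this.name = this.constructor.name;
  }
}

// Domain-level failures each get their own subtype (names hypothetical).
class ValidationError extends AppError {
  constructor(message: string) { super(message, 422); }
}
class ResearchError extends AppError {
  constructor(message: string) { super(message, 502); }
}
```

An H3 handler can then catch AppError once, return err.statusCode, and let anything else surface as a 500.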

How it's defensible

Inline evidence generation is the core move. Most AI report generators produce a report and then ask a second pass to "add citations." That second pass is a hallucination pipeline in a hat. The model invents citations that look plausible. InnoRate generates the evidence alongside the claim, from the same context, with a schema that forces the claim type and the confidence level. A claim without evidence is a schema violation, not a warning.
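The "schema violation, not a warning" stance can be shown concretely. InnoRate uses Zod for this; the plain-TypeScript guard below is a stand-in with assumed field names, kept dependency-free for illustration.

```typescript
// Evidence types and claim kinds, as listed in the pipeline description.
type EvidenceType =
  | "web_research" | "document_stated" | "analytical_estimate"
  | "industry_knowledge" | "logical_inference" | "unsupported";
type ClaimKind =
  | "quantitative" | "qualitative" | "comparative" | "causal" | "predictive";

interface Claim {
  text: string;
  kind: ClaimKind;
  evidence: { type: EvidenceType; sources: string[]; confidence: number };
}

// A claim without typed evidence never parses: the guard throws
// instead of emitting a warning or a templated fallback.
function parseClaim(raw: unknown): Claim {
  const c = raw as Partial<Claim>;
  if (!c || typeof c.text !== "string" || !c.evidence ||
      typeof c.evidence.confidence !== "number") {
    throw new Error("schema violation: claim missing typed evidence");
  }
  return c as Claim;
}
```

Because the evidence record is generated in the same LLM call as the claim, the guard is validating structure the model already produced, not citations bolted on afterward.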

Chain-of-Verification catches the claims the model might have been confident about for the wrong reasons. An industry-knowledge claim ("this market is growing at 12% annually") is exactly the kind of thing an LLM will state with false confidence. CoVe sends those claims back to the web, checks them, and flags the ones that come back corrected or contradicted. The user sees those flags in the report.
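The routing rule for which claims CoVe re-researches follows directly from the description above: analytical estimates, industry-knowledge claims, and low-confidence quantitatives. A sketch, with an assumed 0.7 confidence threshold (the real cutoff is not stated in this write-up):

```typescript
// The four outcomes a re-researched claim can come back with.
type VerificationStatus = "verified" | "corrected" | "contradicted" | "unverified";

interface LedgerClaim {
  kind: "quantitative" | "qualitative" | "comparative" | "causal" | "predictive";
  evidenceType:
    | "web_research" | "document_stated" | "analytical_estimate"
    | "industry_knowledge" | "logical_inference" | "unsupported";
  confidence: number;         // 0..1, assigned at generation time
  status?: VerificationStatus; // filled in by CoVe, if it runs
}

// High-risk selection: the claim categories an LLM states with
// false confidence are exactly the ones sent back to the live web.
function needsVerification(c: LedgerClaim, lowConfidence = 0.7): boolean {
  if (c.evidenceType === "analytical_estimate") return true;
  if (c.evidenceType === "industry_knowledge") return true;
  return c.kind === "quantitative" && c.confidence < lowConfidence;
}
```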

Every section ships with a structured schema enforced by Zod. The system refuses to return a report if any section failed schema validation. Silent fallbacks to templated prose are forbidden.
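The all-or-nothing gate is simple to express. The real system validates each section against its Zod schema; this sketch abstracts that to a boolean per section (names illustrative) to show the refusal behavior: one failed section rejects the whole report, with no templated fallback.

```typescript
interface SectionResult {
  name: string;  // e.g. "market_analysis" (illustrative)
  valid: boolean; // outcome of the section's schema validation
}

// Ship the report only if every section passed validation.
function finalizeReport(sections: SectionResult[]): SectionResult[] {
  const failed = sections.filter((s) => !s.valid).map((s) => s.name);
  if (failed.length > 0) {
    throw new Error(`report rejected, sections failed validation: ${failed.join(", ")}`);
  }
  return sections;
}
```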

What it replaced

A manual process that took a week per disclosure and produced inconsistent memos. Or a generic LLM chat that produced confident prose with invented citations.

What a similar engagement looks like

10 to 14 weeks to deploy InnoRate (or an InnoRate-shaped product) for a new domain. We need the domain's section taxonomy, reference examples of the reports you want to produce, access to any domain-specific data sources, and subject-matter-expert review time. You get the deployed platform, the headless API, the section prompts tuned for your vertical, and the evidence ledger schema.

It's a fit for tech transfer offices, VC evaluation teams, due-diligence shops, and any organization that produces standardized analytical reports at volume and needs to stand behind the numbers.

For internal champions

Making the case inside your organization?

We've written a two-page business case for this engagement shape. Executive summary, problem statement, deliverables, risks, success metrics, investment range. Read it in the browser or print it to PDF and forward.

Read the business case

Initiate Contact

Ready to transform your decision architecture?

Tell us about the decision you're trying to improve. We'll schedule a briefing with our principals to understand your environment and explore a potential fit.

Schedule a Briefing