GraphRAG-Bench Novel Answer Accuracy

Published May 1, 2026 · Updated May 1, 2026

TypeGraph scored 0.6265 ACC on all 2,010 GraphRAG-Bench Novel questions with semantic, BM25, and graph retrieval; observed search latency was 794ms p50 and 1.69s p95.

What this page shows

This is a TypeGraph Cloud answer-quality run on GraphRAG-Bench Novel, a benchmark that tests generated answers over public-domain books rather than only checking whether retrieval returned a known document ID.

The run used semantic, BM25, and graph retrieval, passed the SDK-native markdown prompt directly into a single tuned answer prompt, and scored answers with the GraphRAG-Bench LLM-as-judge ACC calculation.

Query latency
794ms p50
1.69s p95 end-to-end TypeGraph query latency

Latency is measured across the benchmark retrieval requests. Answer generation and judge calls are not included in these TypeGraph query latency percentiles.

Average ACC
0.6265
Answer correctness across all 2,010 GraphRAG-Bench Novel questions
Total Benchmark Time
64m 48s
Total ingest and eval time, including query and judge (running 20 evals concurrently)
Total TypeGraph Cost
$34.57
TypeGraph-metered ingest plus retrieval cost; answer generation and judge calls excluded

Executive Summary

GraphRAG-Bench Novel is split across direct fact retrieval, complex reasoning, contextual summarization, and creative generation questions. The overall score is the average answer correctness across all 2,010 questions, not a retrieval-only metric.

TypeGraph scored 0.6265 ACC overall. The strongest category was Contextual Summarize at 0.6446 ACC with 0.8482 coverage, followed closely by Fact Retrieval at 0.6351 ACC and Complex Reasoning at 0.6263 ACC.

Creative Generation remains the hardest category in this run at 0.4072 ACC. That category also exposes a different tradeoff: faithfulness was 0.6212, while coverage was 0.4047, suggesting the generated responses tended to stay grounded but often missed some requested creative or contextual elements.

Benchmark Dataset

The Novel split contains public-domain book passages and generated questions that exercise fact lookup, multi-hop reasoning, summarization, and creative generation grounded in retrieved context.

| Property | Value |
|---|---|
| Dataset | GraphRAG-Bench Novel |
| Category | Answer-quality GraphRAG benchmark |
| Corpus | 1,147 source documents |
| Indexed chunks | 512-token chunks with 64-token overlap |
| Queries | 2,010 questions |
| Qrels | Gold answers and question-type labels |
| Chunking | 512 tokens, 64-token overlap |
| Ingest time | 46m 18s |

Ingest time covers corpus indexing, chunking, graph extraction, and retrieval index construction.

Methodology

  1. Loaded the GraphRAG-Bench Novel queries and gold answers from the benchmark dataset.
  2. Searched the indexed TypeGraph corpus with semantic, BM25, and graph weights enabled.
  3. Requested SDK-native markdown context with chunk and fact sections and passed response.prompt directly into answer generation.
  4. Generated answers with openai/gpt-4o-mini.
  5. Scored answers with the GraphRAG-Bench ACC method: LLM-judged factuality plus embedding-based semantic similarity.

Detailed Metrics Overview

Before we dive into the leaderboard, here's a quick overview of the metrics, TypeGraph Cloud's scores, and how to read them:
| Metric | TypeGraph Score | How to read it |
|---|---|---|
| Overall ACC | 0.626541 | Primary GraphRAG-Bench answer-quality score across all 2,010 questions. Judges if the answer is factually equivalent to the gold answer. |
| Overall ROUGE-L | 0.377493 | Text overlap with the gold answer; useful but can underrate good paraphrases. |
| Fact Retrieval ACC | 0.635099 | 971 direct fact questions. Did you return the correct specific fact? Who killed X? In what year did Y happen? Easy to judge: the answer is a name, date, or short phrase. |
| Complex Reasoning ACC | 0.626275 | 610 reasoning questions. Did you correctly chain multiple facts together? Why did X betray Y? Judge checks the conclusion and often the linking steps. |
| Contextual Summarize ACC | 0.644626 | 362 summarization questions with coverage judging. Does your summary correctly cover the requested entities and relationships? Judge checks whether the key facts are present and accurate. |
| Creative Generation ACC | 0.407238 | 67 creative questions with faithfulness and coverage judging. Does the creative output stay faithful to the source while fulfilling the creative ask? Judge checks both grounding (no hallucinated facts) and form (did you actually write a scene, not a one-liner). |

How to read GraphRAG-Bench ACC

GraphRAG-Bench ACC is a continuous answer-quality score from 0 to 1. It is not exact match and it is not a BEIR retrieval metric like nDCG@10. The benchmark decomposes generated and gold answers into statements, judges factual overlap, and blends that with semantic similarity.
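The exact scorer ships with the GraphRAG-Bench repository; the sketch below only illustrates the shape of that calculation. judgeStatements and embed are hypothetical stand-ins for the LLM judge and embedding model, and the 0.5/0.5 blend is a placeholder, not the benchmark's actual weighting.

// Illustrative only; the real ACC implementation lives in the GraphRAG-Bench repo.
declare function judgeStatements(generated: string, gold: string): Promise<boolean[]>
declare function embed(text: string): Promise<number[]>

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0)
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0))
  return dot / (norm(a) * norm(b))
}

async function accScore(generated: string, gold: string): Promise<number> {
  // LLM-judged factuality: decompose the generated answer into statements
  // and check each one against the gold answer.
  const verdicts = await judgeStatements(generated, gold)
  const factuality = verdicts.filter(Boolean).length / Math.max(verdicts.length, 1)

  // Embedding-based semantic similarity between generated and gold answers.
  const [genVec, goldVec] = await Promise.all([embed(generated), embed(gold)])
  const similarity = cosine(genVec, goldVec)

  // Blend the two signals into a continuous 0-1 score (placeholder weighting).
  return 0.5 * factuality + 0.5 * similarity
}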

GraphRAG-Bench Novel Leaderboard Comparison

Published comparison rows come from the GraphRAG-Bench Novel leaderboard values. The highlighted TypeGraph row uses the same percentage scale.

(FR = Fact Retrieval, CR = Complex Reasoning, CS = Contextual Summarize, CG = Creative Generation; Cov = coverage, FS = faithfulness.)

| Rank | System | Avg ACC | FR ACC | FR ROUGE-L | CR ACC | CR ROUGE-L | CS ACC | CS Cov | CG ACC | CG FS | CG Cov |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | AutoPrunedRetriever-llm | 63.72% | 45.99 | 26.99 | 62.80 | 35.35 | 83.10 | 83.86 | 62.97 | 34.40 | 22.13 |
| 2 | TypeGraph Cloud | 62.65% | 63.51 | 45.49 | 62.63 | 30.46 | 64.46 | 84.82 | 40.72 | 62.12 | 40.47 |
| 3 | G-reasoner | 58.94% | 60.07 | 36.93 | 53.92 | 23.00 | 71.28 | 55.60 | 50.48 | 54.24 | 45.44 |
| 4 | HippoRAG2 | 56.48% | 60.14 | 31.35 | 53.38 | 33.42 | 64.10 | 70.84 | 48.28 | 49.84 | 30.95 |
| 5 | Fast-GraphRAG | 52.02% | 56.95 | 35.90 | 48.55 | 21.12 | 56.41 | 80.82 | 46.18 | 57.19 | 36.99 |
| 6 | MS-GraphRAG (local) | 50.93% | 49.29 | 26.11 | 50.93 | 24.09 | 64.40 | 75.58 | 39.10 | 55.44 | 35.65 |
| 7 | Lazy-GraphRAG | 50.59% | 51.65 | 36.97 | 49.22 | 23.48 | 58.29 | 76.94 | 43.23 | 50.69 | 39.74 |
| 8 | StructRAG | 49.13% | 53.84 | 26.73 | 46.27 | 23.49 | 54.28 | 63.56 | 42.16 | 52.68 | 36.75 |
| 9 | RAG (w rerank) | 48.35% | 60.92 | 36.08 | 42.93 | 15.39 | 51.30 | 83.64 | 38.26 | 49.21 | 40.04 |
| 10 | KGP | 48.01% | 54.15 | 24.73 | 46.31 | 16.91 | 51.21 | 64.34 | 40.37 | 52.55 | 34.65 |
| 11 | RAG (w/o rerank) | 47.93% | 58.76 | 37.35 | 41.35 | 15.12 | 50.08 | 82.53 | 41.52 | 47.46 | 37.84 |
| 12 | KET-RAG | 47.62% | 55.39 | 27.39 | 36.59 | 25.98 | 52.47 | 69.24 | 46.03 | 36.72 | 33.68 |
| 13 | LightRAG | 45.09% | 58.62 | 35.72 | 49.07 | 24.16 | 48.85 | 63.05 | 23.80 | 57.28 | 25.01 |
| 14 | HippoRAG | 44.75% | 52.93 | 26.65 | 38.52 | 11.16 | 48.70 | 85.55 | 38.85 | 71.53 | 38.97 |
| 15 | MS-GraphRAG (global) | 44.52% | 36.92 | 17.32 | 43.17 | 15.12 | 56.87 | 80.55 | 41.11 | 75.15 | 30.34 |
| 16 | RAPTOR | 43.24% | 49.25 | 23.74 | 38.59 | 11.66 | 47.10 | 82.33 | 38.01 | 70.85 | 35.88 |

Graph Footprint

| Metric | Value | How to read it |
|---|---|---|
| Documents | 1,147 | Source documents indexed for the benchmark. |
| Document groups | 20 | One group per source novel, used to scope each benchmark query. |
| Chunks / passage nodes | 3,416 | Indexed chunks at 512-token chunking with 64-token overlap; graph passage nodes mirror the chunks. |
| Semantic entities / graph nodes | 10,793 | Resolved graph entities extracted from the novel corpus. |
| Semantic edges | 11,652 | Stored relationship edges between semantic entities. |
| Entity chunk mentions | 53,401 | Entity mention rows linking extracted entities back to chunks. |
| Passage entity edges | 25,211 | Edges between passage nodes and entities for graph-anchored retrieval. |

Metered Cost

Ingest
$34.50
Eval
$0.067
Total
$34.57
| Meter | Usage | Rate | Cost |
|---|---|---|---|
| Ingest embeddings | 2,431,064 tokens | $0.12 / M tokens | $0.29 |
| Ingest LLM input | 24,665,606 tokens | $1.00 / M tokens | $24.66 |
| Ingest LLM output | 1,884,015 tokens | $3.00 / M tokens | $5.65 |
| Ingest compute | 26,987,113 ms | $0.52 / CPU-hour | $3.89 |
| Eval search embeddings | 60,678 tokens | $0.04 / M tokens | $0.0024 |
| Eval retrieval compute | 444,918 ms | $0.52 / CPU-hour | $0.0643 |

Storage, answer generation, and judge calls are excluded. Costs use TypeGraph metered usage only: ingest embeddings at $0.12/M tokens, search embeddings at $0.04/M tokens, LLM input at $1.00/M tokens, LLM output at $3.00/M tokens, and compute at $0.52/CPU-hour.
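As a sanity check, the ingest line items can be recomputed directly from the usage and rates above; small differences from the displayed figures are rounding.

// Recompute the ingest line items from the metered usage above.
const PER_MILLION = 1_000_000
const MS_PER_HOUR = 3_600_000

const ingestEmbeddings = (2_431_064 / PER_MILLION) * 0.12   // embeddings line
const ingestLlmInput = (24_665_606 / PER_MILLION) * 1.0     // LLM input line
const ingestLlmOutput = (1_884_015 / PER_MILLION) * 3.0     // LLM output line
const ingestCompute = (26_987_113 / MS_PER_HOUR) * 0.52     // compute line

const ingestTotal = ingestEmbeddings + ingestLlmInput + ingestLlmOutput + ingestCompute
console.log(ingestTotal.toFixed(2)) // ≈ 34.5, matching the $34.50 ingest total above within rounding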

Relevant Code

Create a graph-enabled bucket

Create a bucket with stable chunking, graph extraction enabled, and explicit embedding settings. Tenant isolation comes from the client tenantId; benchmark corpus separation is handled by bucket and graph selection.

import { typegraphInit } from '@typegraph-ai/sdk'

const typegraph = await typegraphInit({
  apiKey: process.env.TYPEGRAPH_API_KEY!,
  tenantId: process.env.TYPEGRAPH_TENANT_ID!,
})

const bucket = await typegraph.bucket.create({
  name: 'graphrag-bench-novel',
  indexDefaults: {
    chunkSize: 512,
    chunkOverlap: 64,
    graphExtraction: true,
    deduplicateBy: ['content'],
  }
})

console.log(bucket.id)

Ingest documents by corpus group

Use document metadata and bucket/graph selection so benchmark queries can retrieve from the same corpus as the gold answer. In our test, we ingested concurrent batches of 300 documents, since TypeGraph caps ingestion payloads at 3 MB per batch; a batching sketch follows the example below.

import { readFile } from 'node:fs/promises'
import { typegraphInit } from '@typegraph-ai/sdk'

type NovelDocument = {
  id: string
  name: string
  corpus: string
  text: string
  url?: string
  metadata?: Record<string, unknown>
}

const typegraph = await typegraphInit({
  apiKey: process.env.TYPEGRAPH_API_KEY!,
  tenantId: process.env.TYPEGRAPH_TENANT_ID!,
})

const bucketId = process.env.TYPEGRAPH_BUCKET_ID!
const corpus = JSON.parse(await readFile('./novels.json', 'utf8')) as NovelDocument[]

const byCorpus = new Map<string, NovelDocument[]>()

for (const documentRow of corpus) {
  byCorpus.set(documentRow.corpus, [...(byCorpus.get(documentRow.corpus) ?? []), documentRow])
}

for (const [corpusId, documents] of byCorpus) {
  await typegraph.document.ingest(
    documents.map((documentRow) => ({
      id: documentRow.id,
      name: documentRow.name,
      content: documentRow.text,
      url: documentRow.url,
      metadata: {
        ...(documentRow.metadata ?? {}),
        corpus: corpusId,
        stableDocumentId: documentRow.id,
      },
    })),
    {
      bucketId,
      graphExtraction: true,
      chunkSize: 512,
      chunkOverlap: 64,
    },
  )
}
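A minimal variant of the loop above that applies the 300-document batching mentioned earlier. The batch size is a run-specific choice made to stay under the 3 MB payload limit, not an SDK requirement, and the sketch reuses typegraph, byCorpus, and bucketId from the example above.

// Batch each corpus into groups of 300 documents and ingest the batches concurrently.
const BATCH_SIZE = 300

for (const [corpusId, documents] of byCorpus) {
  const batches: NovelDocument[][] = []
  for (let i = 0; i < documents.length; i += BATCH_SIZE) {
    batches.push(documents.slice(i, i + BATCH_SIZE))
  }

  await Promise.all(
    batches.map((batch) =>
      typegraph.document.ingest(
        batch.map((documentRow) => ({
          id: documentRow.id,
          name: documentRow.name,
          content: documentRow.text,
          url: documentRow.url,
          metadata: {
            ...(documentRow.metadata ?? {}),
            corpus: corpusId,
            stableDocumentId: documentRow.id,
          },
        })),
        { bucketId, graphExtraction: true, chunkSize: 512, chunkOverlap: 64 },
      ),
    ),
  )
}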

Run a corpus-scoped graph search

For each benchmark question, search the benchmark bucket/graph and pass the SDK-built markdown prompt downstream unchanged.

const response = await typegraph.search(question.text, {
  buckets: [process.env.TYPEGRAPH_BUCKET_ID!],
  graph: process.env.TYPEGRAPH_GRAPH_ID ?? 'public',
  resources: ['documents', 'facts', 'entities'],
  limit: 12,
  weights: {
    semantic: 1,
    bm25: 0.7,
    graph: 0.5,
    recency: 0.3,
  },
  promptBuilder: {
    format: 'markdown',
    sections: ['chunks', 'facts'],
    includeAttributes: false,
  },
})

// Use response.prompt as the full answer-generation context.
console.log(response.prompt)

Evaluation loop outline

The public pieces are corpus-scoped retrieval, SDK-built prompts, answer generation, and JSONL result logging. Use the paper scorer or your own judge for final metrics. In our test we ran 20 evals concurrently (one eval = search + judge); a concurrency sketch follows the loop below.

import { appendFile, readFile } from 'node:fs/promises'
import { typegraphInit } from '@typegraph-ai/sdk'

type Question = {
  id: string
  corpus: string
  text: string
  questionType: string
}

const typegraph = await typegraphInit({
  apiKey: process.env.TYPEGRAPH_API_KEY!,
  tenantId: process.env.TYPEGRAPH_TENANT_ID!,
})

const questions = JSON.parse(await readFile('./queries.json', 'utf8')) as Question[]

for (const question of questions) {
  const response = await typegraph.search(question.text, {
    buckets: [process.env.TYPEGRAPH_BUCKET_ID!],
    graph: process.env.TYPEGRAPH_GRAPH_ID ?? 'public',
    resources: ['documents', 'facts', 'entities'],
    limit: 12,
    weights: { semantic: 1, bm25: 0.7, graph: 0.5 },
    promptBuilder: { format: 'markdown', sections: ['chunks', 'facts'] },
  })

  const answer = await generateAnswer({
    question: question.text,
    context: response.prompt,
  })

  await appendFile(
    './results.jsonl',
    JSON.stringify({
      id: question.id,
      corpus: question.corpus,
      questionType: question.questionType,
      answer,
      prompt: response.prompt,
      retrieval: response.results,
    }) + '\n',
  )
}
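To reproduce the 20-way concurrency, the sequential loop above can be wrapped in a small worker pool. evalOne is a hypothetical helper holding the search, answer generation, and logging body shown above.

// Hypothetical evalOne wraps the body of the sequential loop above.
declare function evalOne(question: Question): Promise<void>

const CONCURRENCY = 20
let cursor = 0

async function worker(): Promise<void> {
  // cursor++ runs synchronously before the first await, so two workers
  // never pick up the same question.
  while (cursor < questions.length) {
    await evalOne(questions[cursor++])
  }
}

await Promise.all(Array.from({ length: CONCURRENCY }, () => worker()))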

Answer generation prompt used

The answer generation prompt used in the benchmark runner is shown below. One important thing to note is that our benchmark used gpt-4o-mini to generate answers, whereas the original benchmark paper used qwen2.5-14b-instruct. One effect of this is that gpt-4o-mini generates answers faster, which let us run 20 evals concurrently in the same time window. However, gpt-4o-mini also tends to be more verbose and creative, which hurts ACC scoring by adding prose and context that is irrelevant to the gold answer.

---Role---
You are a helpful assistant responding to user queries.

---Goal---
Generate a direct, concise answer based strictly on the provided Context.
Answer only what the Question asks. Do not restate the Question, explain your reasoning, or add background details.
Use one sentence when possible. For multi-part or creative requests, use the shortest complete answer that satisfies the Question.
If asked to summarize, summarize the relationships, effect, contrast, or implication asked about, not the whole passage.
Stay grounded in the Context and avoid unsupported specifics.
If the Context contains partial relevant evidence, synthesize the supported answer instead of refusing.
When the question asks for a chain, connection, relation, sequence, or how X links to Y, answer with only the explicit named relation chain from the context; do not explain themes, causes, motives, or background unless asked.
Respond in plain text without formatting.
Use the same language as the Question.
No markdown headings, no placeholder dates, do not refuse historical/fictional perspective tasks.
Default to 5-20 words; exceed 25 words only when the Question explicitly asks for a summary, comparison, explanation, or creative response.
If the answer can be expressed as a name, list, date, place, relation, or short clause, output only that.
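For completeness, here is a minimal sketch of the generateAnswer helper referenced in the evaluation loop, assuming the official OpenAI SDK with the prompt above stored in ANSWER_SYSTEM_PROMPT; the Context/Question message layout is illustrative, not the exact runner code.

import OpenAI from 'openai'

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! })

// ANSWER_SYSTEM_PROMPT holds the ---Role--- / ---Goal--- prompt shown above.
declare const ANSWER_SYSTEM_PROMPT: string

async function generateAnswer(input: { question: string; context: string }): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: ANSWER_SYSTEM_PROMPT },
      // response.prompt from the search call is passed through unchanged as the Context.
      { role: 'user', content: `Context:\n${input.context}\n\nQuestion: ${input.question}` },
    ],
  })
  return completion.choices[0]?.message?.content?.trim() ?? ''
}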
