
GraphRAG-Bench Medical Answer Accuracy

Published May 1, 2026 · Updated May 1, 2026

TypeGraph scored 0.6768 ACC across the 2,062 GraphRAG-Bench Medical questions (2,061 scored) using semantic, BM25, and graph retrieval; observed search latency was 290ms p50 and 1.39s p95.

What this page shows

This is a TypeGraph Cloud answer-quality run on GraphRAG-Bench Medical, a benchmark that tests generated answers over medical and healthcare source material rather than only checking whether retrieval returned a known document ID.

The run used semantic, BM25, and graph retrieval, passed the SDK-native markdown context (response.prompt) directly into a single tuned answer prompt, and scored answers with the GraphRAG-Bench LLM-as-judge ACC calculation.

Query latency
290ms p50
1.39s p95 end-to-end TypeGraph query latency

Latency is measured across the benchmark retrieval requests. Answer generation and judge calls are not included in these TypeGraph query latency percentiles.
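
The p50/p95 figures here can be reproduced from the per-request timings with a nearest-rank percentile helper. A minimal sketch, assuming one recorded millisecond value per benchmark search request:

// Nearest-rank percentile over recorded per-request latencies (ms).
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error('no samples recorded')
  const sorted = [...samples].sort((a, b) => a - b)
  const rank = Math.ceil((p / 100) * sorted.length)
  return sorted[Math.min(sorted.length, Math.max(1, rank)) - 1]
}

const latenciesMs: number[] = [] // push one Date.now() delta around each typegraph.search call
console.log('p50:', percentile(latenciesMs, 50), 'ms')
console.log('p95:', percentile(latenciesMs, 95), 'ms')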

Average ACC
0.6768
Answer correctness across 2,061 scored GraphRAG-Bench Medical questions
Total Benchmark Time
1h 31m 02s
Total corpus ingest and eval retrieval wall-clock time for the benchmark run
Total TypeGraph Cost
$6.55
TypeGraph-metered ingest plus retrieval cost; answer generation and judge calls excluded

Executive Summary

GraphRAG-Bench Medical is split across direct fact retrieval, complex reasoning, contextual summarization, and creative generation questions. The overall score is answer correctness across the benchmark, not a retrieval-only metric.

TypeGraph scored 0.6768 ACC overall. The strongest categories were Fact Retrieval at 0.7250 ACC and Complex Reasoning at 0.7246 ACC, with Contextual Summarize close behind at 0.6656 ACC.

Creative Generation remains the hardest category in this run at 0.2312 ACC. The faithfulness score was 0.5068 and coverage was 0.4265, which suggests responses often stayed partially grounded but did not fully satisfy the requested creative form and evidence coverage.

Benchmark Dataset

The Medical split contains healthcare and guideline-style source material with generated questions that exercise fact lookup, multi-hop reasoning, summarization, and creative generation grounded in retrieved context.

| Property | Value |
| --- | --- |
| Dataset | GraphRAG-Bench Medical |
| Category | Answer-quality GraphRAG benchmark |
| Corpus | 249 source documents |
| Indexed chunks | 745 (see Graph Footprint) |
| Queries | 2,062 questions |
| Qrels | Gold answers and question-type labels |
| Chunking | 512 tokens, 64-token overlap |
| Ingest time | 6m 36s |

Ingest time covers corpus indexing, chunking, graph extraction, and retrieval index construction.

Methodology

  1. Loaded the GraphRAG-Bench Medical queries and gold answers from the benchmark dataset.
  2. Searched the indexed TypeGraph corpus with semantic, BM25, and graph weights enabled.
  3. Requested SDK-native markdown context with chunk and fact sections and passed response.prompt directly into answer generation.
  4. Generated answers with openai/gpt-4o-mini.
  5. Scored answers with the GraphRAG-Bench ACC method: LLM-judged factuality plus embedding-based semantic similarity.

Detailed Metrics Overview

Before we dive into the leaderboard, here's a quick overview of the metrics, TypeGraph Cloud's scores, and how to read them:
| Metric | TypeGraph Score | How to read it |
| --- | --- | --- |
| Overall ACC | 0.676777 | Primary GraphRAG-Bench answer-quality score across 2,061 scored questions. Judges whether the answer is factually equivalent to the gold answer. |
| Overall ROUGE-L | 0.391737 | Text overlap with the gold answer; useful but can underrate good paraphrases. |
| Fact Retrieval ACC | 0.724956 | 1,097 direct fact questions. Did the system return the correct specific medical fact, name, risk factor, treatment, or short answer? |
| Complex Reasoning ACC | 0.724637 | 509 reasoning questions. Did the system correctly chain multiple medical facts together to reach the conclusion? |
| Contextual Summarize ACC | 0.665567 | 289 summarization questions with coverage judging. Does the response cover the requested clinical concepts and relationships without drifting? |
| Creative Generation ACC | 0.231160 | 166 creative questions with faithfulness and coverage judging. Does the response stay faithful to the source while satisfying the requested creative form? |

How to read GraphRAG-Bench ACC

GraphRAG-Bench ACC is a continuous answer-quality score from 0 to 1. It is not exact match and it is not a BEIR retrieval metric like nDCG@10. The benchmark decomposes generated and gold answers into statements, judges factual overlap, and blends that with semantic similarity.
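
As a rough illustration only (not the official scorer), the final number behaves like a weighted blend of a judged factuality score and an embedding similarity score, each on a 0-1 scale. The 0.5/0.5 weighting below is an assumption of this sketch, not the benchmark's published constant:

// Illustrative sketch of a blended ACC-style score. The real GraphRAG-Bench
// scorer decomposes generated and gold answers into statements and uses an
// LLM judge; judgedFactuality and semanticSimilarity stand in for those parts.
function blendedAcc(judgedFactuality: number, semanticSimilarity: number, weight = 0.5): number {
  return weight * judgedFactuality + (1 - weight) * semanticSimilarity
}

console.log(blendedAcc(0.8, 0.6)) // 0.7 — a continuous score, not exact match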

GraphRAG-Bench Medical Leaderboard Comparison

Comparison rows use the published values from the official GraphRAG-Bench Medical leaderboard. The highlighted TypeGraph row is inserted on the same percentage scale.

Column abbreviations: FR = Fact Retrieval, CR = Complex Reasoning, CS = Contextual Summarize, CG = Creative Generation; Cov = coverage, FS = faithfulness. A dash marks values the leaderboard does not report.

| Rank | System | Avg ACC | FR ACC | FR ROUGE-L | CR ACC | CR ROUGE-L | CS ACC | CS Cov | CG ACC | CG FS | CG Cov |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | G-reasoner | 73.30% | 68.84 | 44.73 | 75.17 | 29.10 | 77.23 | 60.64 | 72.04 | 53.65 | 48.31 |
| 2 | TypeGraph Cloud | 67.68% | 72.50 | 46.10 | 72.46 | 32.67 | 66.56 | 59.70 | 23.12 | 50.68 | 42.65 |
| 3 | AutoPrunedRetriever-llm | 67.00% | 61.25 | 34.69 | 71.59 | 31.11 | 70.14 | 40.59 | 65.02 | 33.06 | 28.62 |
| 4 | HippoRAG2 | 64.85% | 66.28 | 36.69 | 61.98 | 36.97 | 63.08 | 46.13 | 68.05 | 58.78 | 51.54 |
| 5 | Fast-GraphRAG | 64.12% | 60.93 | 31.04 | 61.73 | 21.37 | 67.88 | 52.07 | 65.93 | 56.07 | 44.73 |
| 6 | LightRAG | 62.59% | 63.32 | 37.19 | 61.32 | 24.98 | 63.14 | 51.16 | 67.91 | 78.76 | 51.58 |
| 7 | RAG (w rerank) | 62.43% | 64.73 | 30.75 | 58.64 | 15.57 | 65.75 | 78.54 | 60.61 | 36.74 | 58.72 |
| 8 | RAG (w/o rerank) | 61.00% | 63.72 | 29.21 | 57.61 | 13.98 | 63.72 | 77.34 | 58.94 | 35.88 | 57.87 |
| 9 | HippoRAG | 59.08% | 56.14 | 20.95 | 55.87 | 13.57 | 59.86 | 62.73 | 64.43 | 69.21 | 65.56 |
| 10 | StructRAG | 58.56% | 55.38 | 27.53 | 56.17 | 22.79 | 62.48 | 65.66 | 60.21 | 42.35 | 45.76 |
| 11 | RAPTOR | 57.10% | 54.07 | 17.93 | 53.20 | 11.73 | 58.73 | 78.28 | 62.38 | 58.98 | 63.63 |
| 12 | Lazy-GraphRAG | 56.89% | 60.25 | 31.66 | 47.82 | 22.68 | 57.28 | 55.92 | 62.22 | 30.95 | 43.79 |
| 13 | KGP | 56.33% | 55.53 | 21.34 | 51.53 | 11.69 | 54.51 | 62.40 | 63.77 | 45.25 | 35.55 |
| 14 | KET-RAG | 47.05% | 60.35 | 31.99 | 39.56 | 19.52 | 45.27 | 29.04 | 43.04 | 33.67 | 31.93 |
| 15 | MS-GraphRAG (local) | 45.16% | 38.63 | 26.80 | 47.04 | 21.99 | 41.87 | 22.98 | 53.11 | 32.65 | 39.42 |
| 16 | MS-GraphRAG (global) | 28.56% | 16.42 | 46.00 | 15.61 | 52.75 | 19.82 | - | 20.81 | - | 13.64 |

Graph Footprint

| Metric | Value | How to read it |
| --- | --- | --- |
| Documents | 249 | Source documents indexed for the benchmark. |
| Document groups | 1 | One corpus principal for the Medical corpus, used to scope benchmark queries. |
| Chunks / passage nodes | 745 | Indexed chunks at 512-token chunking with 64-token overlap; graph passage nodes mirror the chunks. |
| Semantic entities / graph nodes | 1,262 | Resolved graph entities extracted from the medical corpus. |
| Semantic edges | 739 | Stored relationship edges between semantic entities. |
| Fact records | 739 | Evidence-backed fact records used by graph retrieval and answer context assembly. |
| Entity chunk mentions | 9,649 | Entity mention rows linking extracted entities back to chunks. |
| Passage entity edges | 4,019 | Edges between passage nodes and entities for graph-anchored retrieval. |

Metered Cost

Ingest
$6.48
Eval
$0.074
Total
$6.55
| Meter | Usage | Rate | Cost |
| --- | --- | --- | --- |
| Ingest embeddings | 377,143 tokens | $0.12 / M tokens | $0.0453 |
| Ingest LLM input | 5,168,132 tokens | $1.00 / M tokens | $5.17 |
| Ingest LLM output | 234,583 tokens | $3.00 / M tokens | $0.70 |
| Ingest compute | 3,868,786 ms | $0.52 / CPU-hour | $0.56 |
| Eval search embeddings | 35,251 tokens | $0.04 / M tokens | $0.0014 |
| Eval retrieval compute | 502,611 ms | $0.52 / CPU-hour | $0.0726 |

Storage, answer generation, and judge calls are excluded. Costs use TypeGraph metered usage only: ingest embeddings at $0.12/M tokens, search embeddings at $0.04/M tokens, LLM input at $1.00/M tokens, LLM output at $3.00/M tokens, and compute at $0.52/CPU-hour.
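
As a sanity check, the totals follow directly from usage × rate. A minimal sketch of the arithmetic using the figures from the table above:

// Sanity-check of the metered-cost arithmetic (all rates in USD).
const PER_M = 1_000_000
const MS_PER_CPU_HOUR = 3_600_000

const ingestUsd =
  (377_143 / PER_M) * 0.12 +            // ingest embeddings
  (5_168_132 / PER_M) * 1.0 +           // ingest LLM input
  (234_583 / PER_M) * 3.0 +             // ingest LLM output
  (3_868_786 / MS_PER_CPU_HOUR) * 0.52  // ingest compute

const evalUsd =
  (35_251 / PER_M) * 0.04 +             // eval search embeddings
  (502_611 / MS_PER_CPU_HOUR) * 0.52    // eval retrieval compute

console.log(ingestUsd.toFixed(2), evalUsd.toFixed(3), (ingestUsd + evalUsd).toFixed(2))
// -> 6.48 0.074 6.55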

Relevant Code

Create a graph-enabled bucket

Create a bucket with stable chunking and graph extraction enabled. Tenant isolation comes from the client tenantId; benchmark corpus separation is handled by bucket and graph selection.

import { typegraphInit } from '@typegraph-ai/sdk'

const typegraph = await typegraphInit({
  apiKey: process.env.TYPEGRAPH_API_KEY!,
  tenantId: process.env.TYPEGRAPH_TENANT_ID!,
})

const bucket = await typegraph.bucket.create({
  name: 'graphrag-bench-medical',
  indexDefaults: {
    chunkSize: 512,
    chunkOverlap: 64,
    graphExtraction: true,
    deduplicateBy: ['content'],
  },
})

console.log(bucket.id)
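
Keep the printed bucket id: the ingest and search steps below read it from TYPEGRAPH_BUCKET_ID so every stage targets the same corpus.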

Ingest the medical corpus

Write the medical documents with stable corpus metadata so benchmark queries can read from the same corpus as the gold answer.

import { readFile } from 'node:fs/promises'
import { typegraphInit } from '@typegraph-ai/sdk'

type MedicalDocument = {
  id: string
  name: string
  corpus?: string
  text: string
  url?: string
  metadata?: Record<string, unknown>
}

const typegraph = await typegraphInit({
  apiKey: process.env.TYPEGRAPH_API_KEY!,
  tenantId: process.env.TYPEGRAPH_TENANT_ID!,
})

const bucketId = process.env.TYPEGRAPH_BUCKET_ID!
const corpus = JSON.parse(await readFile('./medical-corpus.json', 'utf8')) as MedicalDocument[]

await typegraph.document.ingest(
  corpus.map((documentRow) => ({
    id: documentRow.id,
    name: documentRow.name,
    content: documentRow.text,
    url: documentRow.url,
    metadata: {
      ...(documentRow.metadata ?? {}),
      corpus: documentRow.corpus ?? 'Medical',
      stableDocumentId: documentRow.id,
    },
  })),
  {
    bucketId,
    graphExtraction: true,
    chunkSize: 512,
    chunkOverlap: 64,
  },
)

Run a corpus-scoped graph search

For each benchmark question, search the medical benchmark bucket/graph and pass the SDK-built markdown prompt downstream unchanged.

const response = await typegraph.search(question.text, {
  buckets: [process.env.TYPEGRAPH_BUCKET_ID!],
  graph: process.env.TYPEGRAPH_GRAPH_ID ?? 'public',
  resources: ['documents', 'facts', 'entities'],
  limit: 12,
  weights: {
    semantic: 1,
    bm25: 0.7,
    graph: 0.5,
    recency: 0.3,
  },
  promptBuilder: {
    format: 'markdown',
    sections: ['chunks', 'facts'],
    includeAttributes: false,
  },
})

// Use response.prompt as the full answer-generation context.
console.log(response.prompt)

Evaluation loop outline

The public pieces are corpus-scoped retrieval, SDK-built prompts, answer generation, and JSONL result logging; a sketch of the generateAnswer helper follows the loop. Use the official GraphRAG-Bench scorer or your own judge for final metrics.

import { appendFile, readFile } from 'node:fs/promises'
import { typegraphInit } from '@typegraph-ai/sdk'

type Question = {
  id: string
  corpus?: string
  text: string
  questionType: string
}

const typegraph = await typegraphInit({
  apiKey: process.env.TYPEGRAPH_API_KEY!,
  tenantId: process.env.TYPEGRAPH_TENANT_ID!,
})

const questions = JSON.parse(await readFile('./medical-queries.json', 'utf8')) as Question[]

for (const question of questions) {
  const response = await typegraph.search(question.text, {
    buckets: [process.env.TYPEGRAPH_BUCKET_ID!],
    graph: process.env.TYPEGRAPH_GRAPH_ID ?? 'public',
    resources: ['documents', 'facts', 'entities'],
    limit: 12,
    weights: { semantic: 1, bm25: 0.7, graph: 0.5 },
    promptBuilder: { format: 'markdown', sections: ['chunks', 'facts'] },
  })

  const answer = await generateAnswer({
    question: question.text,
    context: response.prompt,
  })

  await appendFile(
    './results.jsonl',
    JSON.stringify({
      id: question.id,
      corpus: question.corpus ?? 'Medical',
      questionType: question.questionType,
      answer,
      prompt: response.prompt,
      retrieval: response.results,
    }) + '\n',
  )
}
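
The loop above calls a generateAnswer helper that is not shown. Below is a minimal sketch using the OpenAI Node SDK, assuming the answer prompt from the next section is used as the system message; the model matches the openai/gpt-4o-mini named in Methodology, while temperature 0 is this sketch's assumption rather than a documented run setting.

import OpenAI from 'openai'

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! })

// The answer-generation prompt shown in the next section.
const ANSWER_SYSTEM_PROMPT = '...' // paste the ---Role---/---Goal--- prompt here

async function generateAnswer(input: { question: string; context: string }): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    temperature: 0, // assumption: deterministic decoding for the benchmark run
    messages: [
      { role: 'system', content: ANSWER_SYSTEM_PROMPT },
      { role: 'user', content: `Context:\n${input.context}\n\nQuestion: ${input.question}` },
    ],
  })
  return completion.choices[0]?.message?.content?.trim() ?? ''
}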

Answer generation prompt used

Use one concise, context-grounded prompt across all question types and pass the retrieved context exactly as returned by the SDK.

---Role---
You are a helpful assistant responding to user queries.

---Goal---
Generate a direct, concise answer based strictly on the provided Context.
Answer only what the Question asks. Do not restate the Question, explain your reasoning, or add background details.
Use one sentence when possible. For multi-part or creative requests, use the shortest complete answer that satisfies the Question.
If asked to summarize, summarize the relationships, effect, contrast, or implication asked about, not the whole passage.
Stay grounded in the Context and avoid unsupported specifics.
If the Context contains partial relevant evidence, synthesize the supported answer instead of refusing.
Respond in plain text without formatting.
Use the same language as the Question.
Default to 5-20 words; exceed 25 words only when the Question explicitly asks for a summary, comparison, explanation, or creative response.
If the answer can be expressed as a name, list, date, place, relation, or short clause, output only that.
