Query your pathology archive using clinical language or image regions. PatchMatch leverages state-of-the-art foundation models like CONCH and TITAN to retrieve diagnostic-grade tissue matches across 70,000+ indexed patches in milliseconds.
BRACS retrieval results — query: IC region · CONCH v1 encoder
Hover over patches to reveal the tissue class · simulated from real benchmark data
Every search mode maps to a real clinical workflow. PatchMatch connects the way pathologists describe tissue to the way AI encodes it.
Find slides using natural language queries. Retrieve cases by typing diagnostic terms, clinical details from reports, descriptive tissue appearances, or even filter by cellular metrics like TIL ratio.
Select any region on a slide to perform a patch-by-patch comparison across the entire library. Find visually and morphologically similar regions, ranked by match quality, to identify recurrent patterns.
Visualize the spatial distribution of diagnostic categories across the entire slide. A color-coded overlay highlights specific tissue types like Normal, Benign, Atypical, and Carcinoma to easily identify high-risk regions.
Identify slides most similar to the current case based on overall tissue architecture and cellular patterns. Explore matching cases with optional similarity maps highlighting shared morphological characteristics.
Instantly detect and outline individual cells (Tumor, Immune, Connective). Automatically calculate key diagnostic indicators such as Tumor Cellularity, TIL ratio, and TSR (tumor-stroma ratio), supported by spatial heatmaps for cell density and circularity.
Click and inspect specific local regions to receive automated semantic descriptions of tissue morphology. Review predicted diagnostic class distributions, attach clinical notes, and export comprehensive PDF reports.
Modular ML Subsystem: fully decoupled online similarity search services powered by dedicated foundation models.
Clinical text and reports are encoded using CONCH v1 (CoCa). The Search Engine executes FAISS Inner-Product retrieval, applying IDF-weighted keyword boosts and clinical entity extraction (e.g., ER/PR statuses).
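The retrieval step described above amounts to exact maximum-inner-product search over L2-normalized embeddings. A minimal NumPy sketch of that operation (the production path uses FAISS's `IndexFlatIP`; the embedding dimension, corpus size, and all variable names here are illustrative assumptions, with random vectors standing in for CONCH embeddings):

```python
import numpy as np

d = 512                                   # embedding dimension (assumed)
rng = np.random.default_rng(0)

# Stand-in for pre-computed patch embeddings; L2-normalized so that
# inner product equals cosine similarity.
patch_embs = rng.standard_normal((1000, d)).astype("float32")
patch_embs /= np.linalg.norm(patch_embs, axis=1, keepdims=True)

# Stand-in for an encoded clinical text query (e.g., "invasive carcinoma, ER+").
query = rng.standard_normal(d).astype("float32")
query /= np.linalg.norm(query)

scores = patch_embs @ query               # inner products against the whole index
top_ids = np.argsort(-scores)[:10]        # top-10 patch indices, best first
top_scores = scores[top_ids]
```

IDF-weighted keyword boosts would then be added to `top_scores` before final ranking; that re-scoring step is omitted here.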
Dense ROI queries are executed using CONCH v1 image embeddings. The system performs batched k-NN per patch and aggregates results via max-pool-then-mean to generate slide-level rankings and spatial heatmaps.
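The max-pool-then-mean aggregation can be stated compactly: for each ROI query patch, keep its best similarity within a candidate slide, then average those maxima to score the slide. A small sketch under assumed shapes (the similarity matrix here is hand-made, not real CONCH output):

```python
import numpy as np

def slide_score(sim: np.ndarray) -> float:
    """sim[i, j] = similarity of ROI query patch i to candidate-slide patch j."""
    best_per_query_patch = sim.max(axis=1)     # max-pool over the slide's patches
    return float(best_per_query_patch.mean())  # then mean over the ROI's patches

# Two query patches vs. a slide with three patches.
sim = np.array([[0.9, 0.2, 0.4],
                [0.1, 0.6, 0.3]])
print(slide_score(sim))  # (0.9 + 0.6) / 2 = 0.75
```

Max-pooling first makes the score robust to slide size: a large slide is not rewarded for having many mediocre matches, only for containing at least one strong match per query patch.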
Whole-slide nearest-neighbor search is powered by CONCH v1.5 (ViT-L). It provides high-fidelity global retrieval combined with dense cross-slide patch similarity heatmaps.
Quantitative tissue metrics are generated by streaming pre-computed CellViT segmentations. The system dynamically calculates Tumor Cellularity, TIL ratio, and TSR for precise clinical assessment.
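For concreteness, the metrics named above reduce to simple ratios over per-cell counts and compartment areas. The definitions below are common forms from the literature, not necessarily PatchMatch's exact formulas:

```python
def tumor_cellularity(n_tumor: int, n_total_cells: int) -> float:
    """Fraction of all detected cells that are tumor cells."""
    return n_tumor / n_total_cells

def til_ratio(n_immune: int, n_tumor: int) -> float:
    """Immune cells relative to immune + tumor cells (one common definition)."""
    return n_immune / (n_immune + n_tumor)

def tsr(stroma_area: float, tumor_area: float) -> float:
    """Tumor-stroma ratio: stromal fraction of the combined compartment area."""
    return stroma_area / (stroma_area + tumor_area)

print(tumor_cellularity(30, 100))  # 0.3
print(til_ratio(20, 80))           # 0.2
```

Because these are pure functions of CellViT counts, they can be recomputed on the fly for any user-selected region without re-running segmentation.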
When inspecting a specific area, a BRACS-trained linear probe predicts diagnostic class probabilities (e.g., ADH, DCIS, IC) in real time, while a Quilt-1M semantic index retrieves contextual natural-language tissue descriptions.
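A linear probe in this sense is just a single linear layer plus softmax applied to a frozen region embedding. A hedged sketch with random placeholder weights (the embedding dimension and weight values are assumptions; the seven labels are the standard BRACS classes):

```python
import numpy as np

CLASSES = ["N", "PB", "UDH", "FEA", "ADH", "DCIS", "IC"]  # BRACS 7-class labels

def probe_predict(embedding: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Softmax over one linear layer applied to a frozen embedding."""
    logits = embedding @ W + b
    z = logits - logits.max()          # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

rng = np.random.default_rng(0)
emb = rng.standard_normal(768).astype("float32")              # region embedding (dim assumed)
W = rng.standard_normal((768, len(CLASSES))).astype("float32") * 0.01
b = np.zeros(len(CLASSES), dtype="float32")

probs = probe_predict(emb, W, b)       # one probability per diagnostic class
```

Keeping the probe linear means inference is a single matrix-vector product, cheap enough to run in real time on every inspected region.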
Search modalities are independently evaluated and powered by separate foundation models. Quantitative tissue metrics are completely decoupled and driven by localized CellViT data.
A short walkthrough of the application workflow, from exploring whole-slide images to retrieving visually similar regions.
A visual walkthrough of the application.
CS · Bilkent
Supervisor, expert advisor & field advisor
Chair, Department of Computer Engineering · Bilkent University
Supervisor
Pathologist · Hacettepe University
Field advisor
Medical AI Architect · SmartAlpha
Industry expert
Complete technical specification — system architecture, data pipelines, API design, scoring formulas, and all key engineering decisions. View PDF →
Clinical workflow analysis, user requirements, system constraints, and validation methodology against pathology standards. View PDF →
Comprehensive architectural analysis, module interactions, and system implementation details for the PatchMatch platform. View PDF →
Complete project summary, evaluation results, clinical validation, and future directions for the PatchMatch system. View PDF →