
LDBC SNB Business Intelligence (BI) workload implementations


Implementations for the BI workload of the LDBC Social Network Benchmark. See our VLDB 2023 paper and its presentation for details on the design and implementation of the benchmark.

To get started with the LDBC SNB benchmarks, visit the ldbcouncil.org site.

📜 If you wish to cite the LDBC SNB, please refer to the documentation repository (bib snippet).

Implementations

The repository contains implementations for several database systems, each in its own subdirectory (e.g. cypher, umbra) with a README describing its setup.

All implementations use Docker containers for ease of setup and execution. However, the setups can be adjusted to use a non-containerized DBMS.
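Since the scripts drive the systems through Docker, it is worth confirming up front that the Docker daemon is reachable (a minimal check; docker info exits with a non-zero status if the daemon is not running):

docker info > /dev/null && echo "Docker is up"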

Reproducing SNB BI experiments

Running an SNB BI experiment requires the following steps; a condensed sketch follows the list.

  1. Pick a system, e.g. Umbra. Make sure you have the required binaries and licenses available.

  2. Generate the data sets using the SNB Datagen according to the format described in the system's README.

  3. Generate the substitution parameters using the paramgen tool.

  4. Load the data set: set the required environment variables and run the implementation's scripts/load-in-one-step.sh script.

  5. Run the benchmark: set the required environment variables and run the implementation's scripts/benchmark.sh script.

  6. Collect the results from the implementation's output directory.
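As a condensed sketch of steps 4-6, using the umbra implementation as an example (the exact environment variables, such as the scale factor SF, are listed in each implementation's README):

export SF=10

cd umbra
scripts/load-in-one-step.sh   # step 4: load the data set
scripts/benchmark.sh          # step 5: run the benchmark
ls output/                    # step 6: inspect the results (directory name may vary by implementation)
cd ..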

⚠️ Note that deriving official LDBC results requires commissioning an audited benchmark, a more involved process that entails code review, cross-validation, etc. For details, see LDBC's auditing process, the specification's Auditing chapter, and the audit questionnaire.

Cross-validation

To cross-validate two implementations, load the data set into both systems and run the benchmark in validation mode for each, then compare the results, e.g. for the Cypher and Umbra implementations:

export SF=10

cd cypher
scripts/benchmark.sh --validate
cd ..

cd umbra
scripts/benchmark.sh --validate
cd ..

scripts/cross-validate.sh cypher umbra
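Equivalently, the two validation runs can be scripted in a loop; the subshells leave the working directory of the outer shell unchanged:

export SF=10

for impl in cypher umbra; do
  (cd "${impl}" && scripts/benchmark.sh --validate)
done
scripts/cross-validate.sh cypher umbra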

Usage

See .circleci/config.yml for an up-to-date example of how to use the projects in this repository.

Data sets

Pre-generated data sets and substitution parameters are available.

Scoring

To run the scoring on a full benchmark run, use the scripts/score-full.sh script, passing the implementation name and the scale factor, e.g.:

scripts/score-full.sh umbra 100

The script prints its summary to the standard output and saves the detailed output tables in the scoring directory (as .tex files).