LynxKite is a complete graph data science platform for very large graphs and other datasets. It seamlessly combines the benefits of a friendly graphical interface and a powerful Python API.
- Hundreds of scalable graph operations, including graph metrics like PageRank, embeddedness, and centrality; machine learning methods including GCNs; graph segmentations like modular clustering; and various transformation tools like aggregations on neighborhoods.
- The two main data types are graphs and relational tables. Switch back and forth between the two as needed to describe complex logical flows. Run SQL on both.
- A friendly web UI for building powerful pipelines of operation boxes. Define your own custom boxes to structure your logic.
- Tight integration with Python lets you implement custom transformations or create whole workflows through a simple API. (A short sketch follows this list.)
- Integrates with the Hadoop ecosystem. Import and export from CSV, JSON, Parquet, ORC, JDBC, Hive, or Neo4j.
- Fully documented.
- Proven in production on large clusters and real datasets.
- Fully configurable graph visualizations and statistical plots. Experimental 3D and ray-traced graph renderings.
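
For a taste of the SQL and Python integration mentioned in the list above, here is a minimal sketch. It is not taken verbatim from the docs: it assumes the `lynx.kite` package of the Python API is installed, that a LynxKite instance is reachable, and that box names map to camelCase methods (`createExampleGraph`, `computePageRank`) as in the documented API. Check the API reference for your version.

```python
# Minimal sketch of driving LynxKite from Python (method names follow the
# documented camelCase convention; verify against your version's API docs).
from lynx.kite import LynxKite

# Connection details are typically picked up from the environment,
# e.g. the LYNXKITE_ADDRESS variable.
lk = LynxKite()

# Build a tiny workflow: example graph -> PageRank -> SQL -> pandas DataFrame.
graph = lk.createExampleGraph().computePageRank()
df = graph.sql('SELECT name, page_rank FROM vertices ORDER BY page_rank DESC').df()
print(df)
```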
LynxKite is under active development. Check out our Roadmap to see what we have planned for future releases.
Quick try:

```bash
docker run --rm -p2200:2200 lynxkite/lynxkite
```
Setup with persistent data:

```bash
docker run \
  -p 2200:2200 \
  -v ~/lynxkite/meta:/metadata -v ~/lynxkite/data:/data \
  -e KITE_MASTER_MEMORY_MB=1024 \
  --name lynxkite lynxkite/lynxkite
```
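
Once the container is up, the web UI and the API are served on the mapped port (2200 above). The snippet below is a hedged sketch of pointing the Python API at it; it assumes the client reads the server address from the `LYNXKITE_ADDRESS` environment variable (or an explicit argument), as described in the API docs.

```python
# Hedged sketch: connect the Python API to the dockerized LynxKite above.
# Assumes the client honors the LYNXKITE_ADDRESS environment variable.
import os
from lynx.kite import LynxKite

os.environ.setdefault('LYNXKITE_ADDRESS', 'http://localhost:2200')
lk = LynxKite()  # Now usable as in the workflow sketch earlier in this README.
```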
If you find any bugs or have any questions, feature requests, or comments, please file an issue or email us at [email protected].
If you build LynxKite with Earthly, you don't have to install anything on your system except Earthly itself, and you get reliably reproducible builds.

- Install Earthly.
- Run `earthly +run` to build and run LynxKite. See the `Earthfile` for other targets.
Alternatively, you can install LynxKite's dependencies (Scala, Node.js, Go) locally with Conda.
Before the first build:
```bash
tools/git/setup.sh  # Sets up pre-commit hooks.
conda env create --name lk --file conda-env.yml
conda activate lk
cp conf/kiterc_template ~/.kiterc
```
We use `make` for building the whole project.

```bash
make
```
LynxKite can be run as a fat JAR started with `spark-submit`. See `run.sh` for an example of this.
We have test suites for the different parts of the system:
- Backend tests are unit tests for the Scala code. They can also be executed with Sphynx as the backend. If you run `make backend-test`, it will do both. Or you can start `sbt` and run `testOnly *SomethingTest` to run just one test. Run `./test_backend.sh -si` to start `sbt` with Sphynx as the backend.
- Frontend tests use Playwright to simulate a user's actions on the UI. `make frontend-test` will build everything, start a temporary LynxKite instance, and run the tests against that. If you already have a running LynxKite, run `npm test` in the `web` directory. You can start up a dev server that proxies backend requests to LynxKite with `npm start`.
- Python API tests are started with `make remote_api-test`. If you already have a running LynxKite that is okay to test on, run `python/remote_api/test.sh`. This script can also run a subset of the test suite: `python/remote_api/test.sh -p *something*`.
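
To give a feel for what these tests look like, here is a hedged sketch of a minimal Python API test in the same spirit. It is not taken from the real suite; it assumes a LynxKite instance that is okay to test on and the documented `lynx.kite` API.

```python
# Hedged sketch of a minimal Python API test (not from the real suite).
# Assumes LYNXKITE_ADDRESS points at an instance that is okay to test on.
import unittest
from lynx.kite import LynxKite


class ExampleGraphTest(unittest.TestCase):
    def test_vertex_count(self):
        lk = LynxKite()
        df = lk.createExampleGraph().sql('SELECT COUNT(*) AS cnt FROM vertices').df()
        # The built-in example graph has four vertices.
        self.assertEqual(int(df['cnt'][0]), 4)


if __name__ == '__main__':
    unittest.main()
```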