A high-performance, model-agnostic geospatial inference pipeline for gigapixel satellite imagery. GSIP enables seamless deep learning integration on standard hardware through a memory-efficient chunking engine, flexible batch processing, and a unified CLI/GUI ecosystem.
The gsip command provides access to four specialized tools designed for the full lifecycle of Earth Observation inference:
- 🚀 `gsip infer`: The core high-performance engine. Processes massive Sentinel-2/1 tiles using ERF-aware tiling and dynamic memory auto-configuration to prevent OOM errors on any hardware.
- 📊 `gsip suite`: A powerful batch orchestrator. Run complex experiments using Cartesian-product job generation (Multi-Model × Multi-Input) with hierarchical config overrides and automatic GPU cooldowns.
- 🎨 `gsip studio`: A native GTK4 dashboard. Features an interactive Visual Editor for batch runs, a System Info hub, and detailed post-run performance analysis (GPU/RAM usage charts).
- 🔧 `gsip manage`: A developer-centric CLI for extending the pipeline. Quickly scaffold and register new model adapters or output reporters.
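"ERF-aware" tiling generally means each tile is read with a margin equal to the model's effective receptive field, so predictions near tile borders still see full spatial context; only the unpadded core is kept when stitching outputs back together. A minimal sketch of that windowing logic (the function and parameter names are illustrative, not GSIP's actual API):

```python
def tile_windows(height, width, tile, halo):
    """Yield (read_window, keep_window) pairs for overlapped tiled inference.

    Each read window is the core tile expanded by `halo` pixels (the model's
    effective receptive field, clipped at image edges); only the core region
    is kept when stitching, so border artifacts never reach the output.
    Windows are (row0, col0, row1, col1) in pixel coordinates.
    """
    for y0 in range(0, height, tile):
        for x0 in range(0, width, tile):
            y1, x1 = min(y0 + tile, height), min(x0 + tile, width)
            read = (max(y0 - halo, 0), max(x0 - halo, 0),
                    min(y1 + halo, height), min(x1 + halo, width))
            keep = (y0, x0, y1, x1)
            yield read, keep
```

For a 10 m Sentinel-2 tile (10980 × 10980 px), `tile_windows(10980, 10980, 1024, 64)` enumerates overlapping read windows whose disjoint cores exactly cover the image; a memory auto-configuration step would pick `tile` to fit the available RAM/VRAM.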
## Installation

```bash
# Dependencies
sudo apt install libcairo2-dev libgirepository-2.0-dev

# Recommended: isolated install via pipx
pipx install .

# Or standard development install
pip install -e .
```

## Quick Start

```bash
# 1. Run inference on a single tile
gsip infer model=resnet_s2 input_path=/path/to/S2_tile.SAFE output_path=./out

# 2. Launch the GUI for analysis or batch building
gsip studio

# 3. Run a batch of jobs
gsip suite --config my_batch.json
```

## Documentation

| Guide | Description |
|---|---|
| Usage Guide | How to use the CLI tools, configure batch runs, and analyze results. |
| Extending GSIP | Advanced: How to write your own Model Adapters and Output Reporters. |
| Configuration | Detailed reference for all YAML settings and Hydra overrides. |
| Technical Reference | Deep dive into Tiling math, Memory management, and Architecture. |
| API Reference | Detailed documentation of the internal Python modules. |
| Project Structure | Overview of the codebase and file organization. |
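The quick-start `gsip suite` command takes a JSON batch file; the real schema lives in the Configuration guide. Purely to illustrate the Cartesian-product idea, a hypothetical batch file (every key name here is an assumption, not GSIP's actual format, and `vit_s2` is an invented model id) might pair every model with every input:

```json
{
  "models": ["resnet_s2", "vit_s2"],
  "inputs": ["/data/T32TMR.SAFE", "/data/T32TNS.SAFE"],
  "overrides": {"output_path": "./batch_out"}
}
```

Under that reading, the orchestrator would expand this into 2 × 2 = 4 jobs, applying the hierarchical overrides to each.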
GSIP is designed to be highly extensible. You can integrate any PyTorch-based model (CNNs, ViTs, Foundation Models) by writing a Model Adapter.
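The real adapter interface is defined in the Extending Guide; conceptually, an adapter just normalizes preprocessing, the forward pass, and postprocessing behind one contract. A hypothetical sketch (class and method names are assumptions, and a plain callable stands in for a PyTorch model to keep it self-contained):

```python
from typing import Callable, Sequence

class ModelAdapter:
    """Hypothetical adapter contract: wrap any model behind one interface."""

    def __init__(self, model: Callable[[Sequence[float]], Sequence[float]]):
        self.model = model

    def preprocess(self, raw: Sequence[float]) -> Sequence[float]:
        # Example normalization: scale Sentinel-2 digital numbers to reflectance.
        return [v / 10000.0 for v in raw]

    def postprocess(self, out: Sequence[float]) -> Sequence[float]:
        # Example cleanup: clamp scores into a valid [0, 1] range.
        return [min(max(v, 0.0), 1.0) for v in out]

    def predict(self, raw: Sequence[float]) -> Sequence[float]:
        return self.postprocess(self.model(self.preprocess(raw)))

# A trivial stand-in "model" (identity) to show the call path.
adapter = ModelAdapter(lambda xs: list(xs))
```

The pipeline only ever calls the adapter, never the model directly, which is what makes GSIP model-agnostic: swapping a CNN for a ViT or foundation model means writing a new adapter, not touching the tiling engine.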
👉 Check out the Extending Guide to learn how to add your own models.