Background removal that runs locally or through the API. Pick whichever fits your constraints.
Two modes: run the open source Focus model locally (free, private, works offline), or call the withoutBG Pro API (better quality, no GPU needed, pay per image). The code is the same either way; just swap the initializer.
```bash
pip install withoutbg
```

```python
from withoutbg import WithoutBG

model = WithoutBG.opensource()
model.remove_background("your-photo.jpg").save("result.png")
```

The local model loads ~320MB of weights into ~2GB of RAM. You pay that cost once per process, then process images for free indefinitely. The API skips all of that: send an image, get a result in 1-3 seconds, pay per call.
If you're building a product, the API is the right default. You don't want to manage 2GB of model weights in a production service, and the quality is better. The local model makes sense when you need offline or private processing, or when you're running a large batch job and don't want to pay per image.
- Need offline or private processing? → Local model
- Processing a large batch? → Local model (pay once, amortize over all images)
- Building a product? → withoutBG Pro (better quality, zero infra overhead)
- Occasional use, no setup tolerance? → withoutBG Pro
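The rules of thumb above can be condensed into a tiny helper. This is illustrative only; `choose_mode` is not part of the SDK, and the batch-size threshold is an arbitrary assumption:

```python
def choose_mode(offline: bool, batch_size: int = 1) -> str:
    """Pick a backend following the decision list above (illustrative)."""
    if offline:
        return "local"        # privacy / no-network requirement wins
    if batch_size > 100:      # large batch: pay the RAM cost once, amortize
        return "local"
    return "api"              # default for products and occasional use
```

For example, `choose_mode(offline=False, batch_size=3)` returns `"api"`.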
View Complete Dockerized Web App Documentation →
```bash
docker run -p 80:80 withoutbg/app:latest
open http://localhost
```

Runs on both amd64 and arm64.
View Complete Python SDK Documentation →
```bash
uv add withoutbg
# or: pip install withoutbg
```

Local model:
```python
from withoutbg import WithoutBG

model = WithoutBG.opensource()

result = model.remove_background("input.jpg")  # Returns PIL Image (RGBA)
result.save("output.png")
result.show()
result.resize((500, 500))
result.save("output.webp", quality=95)
```

withoutBG Pro:
```python
from withoutbg import WithoutBG

model = WithoutBG.api(api_key="sk_your_key")
result = model.remove_background("input.jpg")
result.save("output.png")

# Prefer the environment variable so the key stays out of your code:
# export WITHOUTBG_API_KEY=sk_your_key
```

CLI:
```bash
withoutbg photo.jpg
withoutbg ~/Photos/vacation/ --batch --output-dir ~/Photos/vacation-no-bg/

# Flatten to JPEG with white fill (for printing or upload)
withoutbg portrait.jpg --format jpg --quality 95

export WITHOUTBG_API_KEY=sk_your_key
withoutbg wedding-photo.jpg --use-api

withoutbg photo.jpg --verbose
```

Don't have uv? It's a faster drop-in replacement for pip. Get it at astral.sh/uv.
See More Focus Model Results →
Monorepo with three layers: packages (reusable libraries), apps (end-user deployments), and integrations (plugin targets):
```
withoutbg/
├── packages/          # Reusable packages
│   └── python/        # Core Python SDK (published to PyPI)
│
├── apps/              # End-user applications
│   └── web/           # Web application (React + FastAPI)
│
├── integrations/      # Third-party tool integrations
│   └── (future: GIMP, Photoshop, Figma plugins)
│
├── models/            # Shared ML model files
│   └── checkpoints/   # ONNX model files
│
├── docs/              # Documentation
└── scripts/           # Development scripts
```
Core library. Published to PyPI as `withoutbg`.

- Install: `uv add withoutbg` (or `pip install withoutbg`)
- Exposes: Python API + CLI
- Models: Focus v1.0.0 (local), withoutBG Pro (API)
Web interface with drag-and-drop, batch processing, and live preview.
- Stack: React 18 + FastAPI + Nginx
- Deploy: Docker Compose
- GIMP plugin
- Photoshop extension
- Figma plugin
- Blender addon
All methods return a PIL Image in RGBA mode, a standard Python image object with an alpha channel carrying the mask:
```python
from withoutbg import WithoutBG

model = WithoutBG.opensource()
result = model.remove_background("photo.jpg")  # PIL Image, RGBA

result.save("output.png")   # PNG preserves the alpha channel
result.save("output.webp")  # WebP also supports alpha
result.save("output.jpg", quality=95)  # JPEG drops alpha; you get a flat image
```

JPEG has no alpha channel. Saving to .jpg will discard the mask without an error:
```python
# This works but you lose the cutout:
result.save("output.jpg")

# Use PNG or WebP to keep it:
result.save("output.png")
result.save("output.webp")
```

On first call, WithoutBG.opensource() downloads ~320MB of ONNX weights from HuggingFace and caches them locally. This takes 5-10 seconds depending on your connection. Every subsequent run loads from cache and processes in 2-5 seconds per image.
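If you do need a JPEG (for printing or upload), flatten the RGBA cutout onto a solid background before saving, rather than saving it directly. A minimal sketch using plain Pillow; `flatten` is a hypothetical helper, not an SDK function:

```python
from PIL import Image

def flatten(cutout: Image.Image, color=(255, 255, 255)) -> Image.Image:
    """Composite an RGBA cutout onto a solid background, returning RGB."""
    background = Image.new("RGB", cutout.size, color)
    background.paste(cutout, mask=cutout.getchannel("A"))  # alpha as paste mask
    return background

# flatten(result).save("output.jpg", quality=95)
```

This is also what the CLI's `--format jpg` white fill does conceptually.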
```python
# First call: downloads 320MB, takes 5-10s
model = WithoutBG.opensource()

# Every call after that: loads from cache, fast
result = model.remove_background("photo.jpg")
```

The model weights stay in RAM as long as the model object is alive. If you create a new WithoutBG instance for every image, you're loading and unloading ~2GB of RAM each time. Don't do that:
```python
from withoutbg import WithoutBG

model = WithoutBG.opensource()

images = ["photo1.jpg", "photo2.jpg", "photo3.jpg"]
results = model.remove_background_batch(images, output_dir="results/")
```

- Single file: `photo.jpg` → `photo-withoutbg.png`
- Batch: `photo1-withoutbg.png`, `photo2-withoutbg.png`, etc.
- Override: `--output` (single) or `--output-dir` (batch)
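The single-file naming convention above can be reproduced in a few lines. A sketch that mirrors the documented default; `default_output` is not an SDK call:

```python
from pathlib import Path

def default_output(input_path: str) -> str:
    """photo.jpg -> photo-withoutbg.png, matching the documented default."""
    p = Path(input_path)
    return str(p.with_name(p.stem + "-withoutbg.png"))
```

Handy when you want to pre-compute output paths before kicking off a batch.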
```python
def show_progress(progress):
    print(f"Processing: {progress * 100:.0f}%")

model = WithoutBG.opensource()
result = model.remove_background("photo.jpg", progress_callback=show_progress)
```

```python
from withoutbg import WithoutBG, APIError, WithoutBGError

try:
    model = WithoutBG.api(api_key="sk_your_key")
    result = model.remove_background("photo.jpg")
    result.save("output.png")
except APIError as e:
    print(f"API error: {e}")
except WithoutBGError as e:
    print(f"Processing error: {e}")
```

| Metric | Local (CPU) | withoutBG Pro |
|---|---|---|
| First Run | 5-10s (~320MB download) | 1-3s |
| Per Image | 2-5s | 1-3s |
| Memory | ~2GB RAM | None |
| Disk Space | 320MB (one-time) | None |
| Setup | One-time download | API key only |
| Cost | Free forever | Pay per use |
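Since Pro calls go over the network, transient failures are possible. A generic retry-with-backoff wrapper (a sketch, not part of the SDK) pairs well with the error handling shown earlier:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                        # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)

# result = with_retries(lambda: model.remove_background("photo.jpg"))
```

In production you would likely narrow `except Exception` to the SDK's `APIError`.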
Model breakdown (cached after first download):
- ISNet segmentation: 177 MB
- Depth Anything V2: 99 MB
- Focus Matting: 27 MB
- Focus Refiner: 15 MB
- Total: ~320 MB
For batch jobs, keep the model object alive across all images. Reinitializing for each image reloads the weights every time, which is 10-100x slower than reusing a single instance.
Model download fails:
- Models are pulled from HuggingFace on first run (~320MB). Check your connection.
- To use locally cached or custom model files, see Configuration.
Out of memory:
- The local model uses ~2GB of RAM. Either reduce your batch size or switch to the API, which offloads inference entirely.
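One way to follow the reduce-batch-size advice on big jobs is to feed `remove_background_batch` in fixed-size chunks rather than all at once. A sketch; the chunk size is an arbitrary assumption:

```python
def chunked(items, size=25):
    """Yield successive fixed-size slices of a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# for batch in chunked(all_images, size=25):
#     model.remove_background_batch(batch, output_dir="results/")
```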
Import error or "module not found":
```bash
which python              # confirm you're in the right environment
pip list | grep withoutbg
source venv/bin/activate
pip install withoutbg
```

API key rejected:
- Get your key at withoutbg.com.
- Set it as an environment variable: `export WITHOUTBG_API_KEY=sk_your_key`
Slow on first run:
- Expected. The ~320MB weights are downloading. Subsequent runs use the local cache and take 2-5s per image.
- Python SDK Docs: API reference and examples
- Python SDK Documentation: Online documentation
- Web App Docs: Deployment and development guide
- Dockerized Web App Documentation: Online documentation
- withoutBG Pro API Results: Example outputs
- Focus Model Results: Example outputs
- Compare Focus vs Pro: Model comparison
```bash
cd packages/python
uv sync --extra dev
# or: pip install -e ".[dev]"

pytest
black src/ tests/
ruff check src/ tests/
```

```bash
docker-compose -f apps/web/docker-compose.yml up

# Or run the components separately:
cd apps/web/backend
uv sync
uvicorn app.main:app --reload

cd apps/web/frontend
npm install
npm run dev
```

The current open source model. Key improvements over prior versions:
- Better edge detail, particularly around hair and fur (the hardest case for matting models)
- More consistent generalization across image types
- Pipeline: ISNet segmentation → Depth Anything V2 guidance → Focus Matting → Focus Refiner
See sample-results/ for visual comparisons.
Apache License 2.0. See LICENSE.
- Depth Anything V2: Apache 2.0 License (vits model only)
- ISNet: Apache 2.0 License
- segmentation-models-pytorch: MIT License (used to train the matting and refiner models)
See THIRD_PARTY_LICENSES.md for complete attribution.
Read CONTRIBUTING.md, then open a PR with tests. Small, focused changes are easiest to review.
- Bugs: GitHub Issues
- Discussion: GitHub Discussions
- Commercial: contact@withoutbg.com