
Add support for FLUX.2 Klein 9B and 4B in single_file_utils#13237

Open
eliemichel wants to merge 1 commit into huggingface:main from eliemichel:patch-2

Conversation

@eliemichel

@eliemichel eliemichel commented Mar 9, 2026

What does this PR do?

It enables detection of FLUX.2 Klein models when loading the transformer from a single file, as in the following example:

```python
import torch
from diffusers import Flux2KleinPipeline, Flux2Transformer2DModel, GGUFQuantizationConfig

dtype = torch.bfloat16

# Load the GGUF-quantized Klein transformer from a single checkpoint file
ckpt_path = "https://huggingface.co/unsloth/FLUX.2-klein-4B-GGUF/blob/main/flux-2-klein-4b-Q2_K.gguf"
transformer = Flux2Transformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=dtype),
    torch_dtype=dtype,
)

# Plug the transformer into the Klein pipeline
pipe = Flux2KleinPipeline.from_pretrained(
    "black-forest-labs/FLUX.2-klein-4B",
    transformer=transformer,
    torch_dtype=dtype,
)
pipe.enable_model_cpu_offload()
```

Before this PR, Klein checkpoints were detected as flux-2-dev, so the loader expected the wrong tensor sizes and loading failed.
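For context, single-file detection in diffusers generally works by probing the checkpoint's state dict, using tensor names and shapes to pick the right model config. The sketch below illustrates the idea only; the probe key, widths, and variant names are hypothetical, not the actual FLUX.2 checkpoint layout or the code in `single_file_utils`:

```python
# Illustrative sketch of shape-based variant detection. The key name and
# the width->variant mapping below are made up for demonstration; they do
# not match the real FLUX.2 checkpoints.

def detect_flux2_variant(state_dict):
    """Guess the model variant from the width of a probe tensor."""
    probe_key = "transformer.x_embedder.weight"  # hypothetical key
    width_to_variant = {
        3072: "flux-2-klein-4b",  # hypothetical widths
        4096: "flux-2-klein-9b",
        6144: "flux-2-dev",
    }
    if probe_key not in state_dict:
        raise KeyError(f"cannot detect variant: missing {probe_key}")
    width = state_dict[probe_key].shape[0]
    try:
        return width_to_variant[width]
    except KeyError:
        raise ValueError(f"unknown hidden width {width}") from None
```

If the mapping only knows flux-2-dev, any Klein checkpoint falls through to the wrong variant, which is the failure mode this PR fixes by adding the Klein sizes to the detection table.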


Who can review?

@sayakpaul @yiyixuxu @DN6

@sayakpaul sayakpaul requested a review from DN6 March 10, 2026 02:21