Mirage · Image

Images, conjured from nothing.

Mirage is the image-generation sandbox — pick from a curated shelf of open-source diffusion models, write a prompt, and watch the Neural Engine render. The gallery lives in the app sandbox; nothing you prompt or produce ever leaves the phone.

iOS 17+ · iPhone & iPad
100% on-device
Diffusion · Core ML · Neural Engine
Curated open-source catalog
[App mockup: prompt "A lone cirrus cloud over a warm desert at golden hour, cinematic, 35mm, soft grain." — SD 1.5 Palettized, 512², 20 steps, seed 42, step 14 of 20, ~6 s]

Neural Engine diffusion

Diffusion models run on the Neural Engine via Core ML. The tooling (apple/ml-stable-diffusion, coremltools) is Apache-licensed; the models come from the open-source ecosystem — palettized weights keep memory sane, and the ANE keeps power sane.

Private gallery

Every render lands in a local gallery — long-press to multi-select, drag to extend, share through the iOS share sheet. No cloud, no accounts, no embedded watermarks.

Device-tiered catalog

The capability manifest hides anything that won't run on your chip. Small palettized models on low-end iPhone, mid-tier on the Pros, bigger transformer DiTs (FLUX, PixArt) on iPad Pro and Mac — all surfaced in one picker.

What is Mirage?

Mirage is Nimbus8's image-generation module — a full diffusion pipeline that runs on your iPhone or iPad. Write a prompt, tune steps or seed if you want, and the Neural Engine produces the image. Nothing is sent to a server, because there is no server.

The name is literal: an atmospheric mirage is an image appearing from nothing, which is exactly what a text-to-image model does. Mirage fits into the weather-phenomenon naming shared across Nimbus8 and sits alongside Gale, Cirrus, Mist, and the rest as a first-class module.

Diffusion pipeline

The Core ML runtime underneath Mirage uses apple/ml-stable-diffusion (Apache 2.0) as its execution shell — that's tooling, not a model. Models ship as .mlpackage bundles compiled for the Neural Engine and GPU; the pipeline is loaded once per session, then reused for every prompt until you switch models or quit the app.

The UNet, text encoder, and VAE all run as Core ML models. Steps dispatch to the ANE where the hardware supports it and fall back to the GPU for intermediate tensors that are faster there. Latency lives in memory and compute — not in network round-trips — so airplane mode is a supported configuration. Any Core ML–converted open diffusion model that matches the pipeline shape drops into the catalog; Mirage is not tied to any single architecture or brand.
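The load-once-reuse pattern can be sketched with the apple/ml-stable-diffusion Swift package. This is a hedged sketch, not Mirage's actual source: `resourcesURL` (the installed `.mlpackage` bundle location) is an assumption, and the exact initializer parameters vary slightly between package versions.

```swift
import CoreML
import StableDiffusion

// Prefer the Neural Engine; Core ML falls back per-op where the GPU is faster.
let mlConfig = MLModelConfiguration()
mlConfig.computeUnits = .cpuAndNeuralEngine

// `resourcesURL` is assumed to point at an installed, compiled model bundle.
let pipeline = try StableDiffusionPipeline(
    resourcesAt: resourcesURL,
    configuration: mlConfig,
    reduceMemory: true)          // friendlier to iPhone-tier memory budgets
try pipeline.loadResources()     // load once per session...

// ...then reuse the same pipeline for every prompt until you switch models.
var cfg = StableDiffusionPipeline.Configuration(
    prompt: "A lone cirrus cloud over a warm desert at golden hour")
cfg.stepCount = 20
cfg.seed = 42
let images = try pipeline.generateImages(configuration: cfg) { _ in
    true   // returning false here cancels mid-diffusion
}
```

Keeping one loaded pipeline per session is what makes the second and later renders fast; only the first render after a cold start pays the load cost.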

Supported models — open-source only

Every model Mirage surfaces is open-weight, with publicly downloadable weights and a published license. The catalog pulls from Hugging Face, Civitai, and curated GitHub Releases; anything requiring a cloud API or proprietary weights is excluded at the source layer. The license is shown alongside the name in the picker so you know what you're installing. The list is current as of 2026-04-22; the catalog updates on its own.

Permissive license (Apache 2.0 / MIT)

  • FLUX.1-schnell — Apache 2.0 — Black Forest Labs. 4-step sampling, sharpest prompt adherence of the open models; the iPad Pro / Mac-tier pick.
  • PixArt-α / Σ — Apache 2.0 — transformer DiT, excellent compositional scenes at modest budget.
  • Würstchen v2 — MIT — efficient cascaded model, iPhone-tier friendly.
  • Kandinsky 3 — Apache 2.0 — alternative aesthetic, strong on non-photo styles.
  • AuraFlow — Apache 2.0 — flow-matching DiT, more recent architecture.
  • Kolors — Apache 2.0 — Kuaishou's SDXL-derived model; multilingual prompts.

OpenRAIL-M family (research / community license)

  • SD 1.5 Palettized (6-bit) — CreativeML OpenRAIL-M — roughly 900 MB on disk, the iPhone-tier default. Broad LoRA ecosystem.
  • SDXL base 1.0 — SDXL OpenRAIL++-M — iPad-tier / iPhone 17 Pro, higher fidelity, heavier memory.
  • SDXL Turbo — Stability research license (non-commercial) — 1–4 step sampling, near-instant previews. Licensing gate shown in the picker.
  • Stable Diffusion 2.1 base — CreativeML OpenRAIL-M — alternate aesthetic, smaller than SDXL.
  • Stable Diffusion 3 Medium — Stability Community License — newest SD architecture; iPad Pro / Mac-tier.

Other open models

  • FLUX.1-dev — FLUX.1-dev license (non-commercial) — reference-quality FLUX at higher cost; gated in the picker.
  • Playground v2.5 — Playground Community License — strong stylistic output at SDXL scale.
  • Orchid, Cascade, and HF community ports — surfaced by the catalog as iOS-viable builds land.

Device fit is enforced at the picker. Anything that can't run on your chip is hidden by default — advanced users can flip a Settings toggle to show all. The tooling (apple/ml-stable-diffusion, coremltools) is Apache 2.0 and treated as execution shell, not brand. See device fit for how the filter works.
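The filter described above amounts to matching a capability manifest against the device. A minimal sketch, assuming a hypothetical manifest shape (the real schema is internal to Mirage — `DeviceProfile`, `CatalogEntry`, and their fields are illustrative):

```swift
// Hypothetical shapes — the real manifest schema is internal to Mirage.
struct DeviceProfile { let ramGB: Int; let hasANE: Bool }
struct CatalogEntry { let name: String; let minRAMGB: Int; let needsANE: Bool }

/// Hide anything the chip can't run; the Settings "show all" toggle bypasses the filter.
func visibleModels(_ catalog: [CatalogEntry],
                   on device: DeviceProfile,
                   showAll: Bool = false) -> [CatalogEntry] {
    guard !showAll else { return catalog }
    return catalog.filter { entry in
        entry.minRAMGB <= device.ramGB && (!entry.needsANE || device.hasANE)
    }
}
```

Filtering at the picker, rather than failing at generate time, is what keeps "everything you can tap will actually run" true.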

Performance

Actual latency on your device depends on steps, resolution, and thermal state. As rough anchors from calibration runs:

  • iPhone 15 Pro — SD 1.5 Palettized at 512², 20 steps: ~6 seconds. SDXL Turbo at 512², 4 steps: ~1.5 seconds.
  • iPhone 17 — SD 1.5 Palettized at 512², 20 steps: ~4 seconds. SDXL Turbo at 1024², 4 steps: ~3 seconds.
  • iPad Pro (M4) — SDXL full at 1024², 20 steps: ~8 seconds. SDXL Turbo drops that to under 2 seconds.

Mirage shows the current step and a short ETA while diffusing, so you can tell the difference between "thermals throttled, slower than normal" and "normal for this chip." The first render after a cold start is always slower — the pipeline has to load.

What leaves my device when I use Mirage?

Nothing, by default. Prompts, intermediate tensors, final images, and gallery metadata all live in the app sandbox. There is no analytics call on generate, no prompt logging, no server-side storage — there is no server. Mirage never requests microphone or camera access, and the network is used only for model downloads.

Downloading a new model is the one network event that can happen — and only when you tap to install one. After that, every render is a local operation. See the privacy policy for the full data flow.

FAQ

Does Mirage need an internet connection?

Only to download models the first time. Once a pipeline is on your device, Mirage generates fully offline; airplane mode is a supported configuration.

Can I use my own LoRAs or fine-tunes?

Yes, within reason: any Core ML–converted open checkpoint that matches a supported pipeline shape (SD 1.5, SDXL, PixArt, FLUX, etc.) drops into the catalog. LoRA merging on-device is on the roadmap, not in the first ship. Civitai LoRAs are surfaced in the browser where Core ML conversions exist — no cloud inference, no proxied downloads.

Is there a safety filter on outputs?

The default pipeline includes the upstream NSFW classifier; you can disable it in Mirage's settings with an explicit toggle and an iOS confirmation. Nothing is reported off-device either way.

What resolutions does Mirage support?

512² on the iPhone tier by default, 768² and 1024² on SDXL-capable devices. The picker hides resolutions that would blow past your device's peak memory budget — no "Generate" button that silently fails.
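The resolution gating can be sketched the same way as the model filter: estimate a per-resolution memory cost and hide anything over the device budget. Both the function and the cost table below are illustrative assumptions — real activation footprints depend on the model and pipeline:

```swift
/// Hypothetical sketch: offer only resolutions whose estimated peak
/// memory cost fits the device budget. The costs are illustrative only.
func offeredResolutions(peakBudgetMB: Int) -> [Int] {
    let estimatedCostMB = [512: 1_500, 768: 3_200, 1024: 5_800]  // illustrative
    return estimatedCostMB
        .filter { $0.value <= peakBudgetMB }
        .keys.sorted()
}
```

Gating at the picker is the same design choice as the model filter: there is never a Generate button that silently fails on memory.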

How many images can the gallery hold?

As many as your device has room for. Images are stored as lossless PNGs; the gallery reports its current on-disk footprint in Settings so you can prune when it gets large.

Can Mirage output to Photos or Files?

Yes. Share sheet from the detail view goes to Photos, Files, Mail, iMessage, anywhere. The gallery itself is separate from Photos by design — a render does not land in your camera roll unless you put it there.