[Image: PR_STYLE LoRA output, ivory cascading ruffle dress on nude mesh bodysuit, asymmetric ruffle placement]
Technique

Teaching an AI to Speak Pisces Rising

What happens when you stop describing what your clothes look like and start describing how they're built.

I've been thinking about what it means for an AI to understand a design vocabulary, not just an aesthetic. Every text-to-image model can generate "an avant-garde fashion image" or "a dramatic red carpet gown." What none of them can do, out of the box, is respond to the way I actually talk about garments: where the weight sits, how a seam holds one panel against another, what happens when tension releases into fall.


I'm not here to discuss whether training AI to design is right or wrong, and I don't particularly want to defend the efficiencies of synthetic brain power in a one-person operation. This experiment is about expanding the range of possibilities beyond current human limitations, and I use it to find opportunities within my own IP.


So I built a custom LoRA on my own collection and taught a text-to-image model to think in the construction language of Pisces Rising, my fashion label, where garments are built from visible structure and tension: a material vocabulary that doesn't exist in any generic image model. The whole thing cost about twenty dollars and took about a day.

A LoRA (Low-Rank Adaptation) is essentially a dialect patch for a language model. The base model already speaks fluent "image," and a LoRA teaches it a new accent. In this case, the accent is Pisces Rising: a specific relationship between body and garment, a preference for visible construction, and a material vocabulary built from jersey tulle, power mesh, and milliskin rather than "luxurious fabric."
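To make the "patch" concrete: a LoRA replaces a full weight update with two small low-rank matrices. A minimal sketch of the parameter math, using illustrative layer dimensions rather than Flux.1-dev's actual ones:

```python
# LoRA replaces a full weight update (d_out x d_in) with two low-rank
# factors: W' = W + B @ A, where B is (d_out x r) and A is (r x d_in).
# Dimensions are illustrative, not Flux.1-dev's real layer sizes.

d_out, d_in, r = 1024, 1024, 16  # rank 16, matching the training run below

full_update = d_out * d_in            # parameters a direct fine-tune would touch
lora_update = d_out * r + r * d_in    # parameters the LoRA trains instead

print(full_update)                    # 1048576
print(lora_update)                    # 32768
print(full_update / lora_update)      # 32.0 -- a 32x reduction at rank 16
```

That reduction is why a run like this costs twenty dollars instead of a fine-tuning budget: the dialect patch is tiny compared to the model it patches.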


I trained mine on Black Forest Labs' Flux.1-dev through Replicate, twenty images over two thousand training steps at LoRA rank 16. The model learns to associate a trigger word, PR_STYLE, with everything in the training data, so when I prompt it later, that single token activates the full visual vocabulary at once.
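For reference, kicking off a run like this through Replicate's Python client looks roughly like the sketch below. The trainer name, version id, image URL, and destination are placeholders, and the exact input key names depend on the trainer's schema, so treat everything here as an assumption to check before running:

```python
import os

try:
    import replicate  # Replicate's official Python client (pip install replicate)
except ImportError:
    replicate = None

# Hyperparameters from the run described above. Key names follow the
# convention of Replicate's Flux LoRA trainers, but verify them against
# the trainer's published input schema.
training_input = {
    "input_images": "https://example.com/arise-images.zip",  # placeholder URL for the 20 PNGs
    "trigger_word": "PR_STYLE",
    "steps": 2000,
    "lora_rank": 16,
}

if replicate and os.environ.get("REPLICATE_API_TOKEN"):
    training = replicate.trainings.create(
        version="ostris/flux-dev-lora-trainer:<version-id>",  # placeholder version id
        input=training_input,
        destination="your-username/pr-style-lora",  # placeholder destination model
    )
    print(training.status)
```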

[Images: three Arise training photos. Black Tether construction in 4-way power mesh and milliskin, cascading ruffles with visible anchor points; ivory Ribbon construction in rayon grosgrain, layered cascading volume; Ribbon construction variation with a different silhouette]

Three of the twenty training images from the Arise collection. Left: Tether construction in power mesh and milliskin. Center and right: Ribbon construction in rayon grosgrain.

Deborah Won, Arise (2023). Photography by Deborah Won.

I pulled twenty images from the Arise collection, all transparent-background PNGs at 1024 by 1536. They break into two categories, two different construction philosophies.


Ribbon covers the cascading, ruffled, layered pieces in organza: garments where the fabric does the structural work through volume and repetition. Tether covers the connected, suspended, anchored constructions in 4-way power mesh and 4-way milliskin, where tension between compression and release creates the silhouette.


Twenty images isn't a lot. That's deliberate. With LoRA training, a focused dataset teaches the model a coherent language. A large, varied one teaches it to average.

The project got interesting for me when I started writing the captions. My first round used the language you'd find in any fashion editorial: "organza ruffles," "avant-garde silhouette," "ethereal draping." The model learned them fine. It generated pretty fashion images. But when I prompted it with Pisces Rising's actual vocabulary, with "tethered construction" or "load-bearing seams" or "the compression of milliskin holding each ruffle's weight before releasing into the next fall," the model didn't know what I was talking about.


The problem wasn't the images. The problem was that I'd described them in someone else's language.

[Images: two v1 LoRA outputs. A dark, muddy black jacket with no recognizable Pisces Rising construction vocabulary; a generic white ruffled garment resembling a trench coat]

Two outputs from v1 captions. The model generated fashion images, but nothing that reads as Pisces Rising. Left: generic dark garment with no visible construction logic. Right: ruffled volume with no relationship to body or tension.

Generated by Deborah Won using PR_STYLE LoRA v1 on Flux.1-dev, 2026.

For version two, I rewrote all twenty captions using the way I actually discuss garments in my studio. Construction verbs replaced generic descriptors: cascading, tethered, anchored, suspended, gathered, released. Structural terms replaced aesthetic ones: load-bearing seams, tension, fall, hold, release, weight. And I named the actual fabrics, 4-way power mesh, 4-way milliskin, matte jersey, silk jersey, not "fabric" or "material."


I also varied the caption structure. Some lead with the construction method. Some lead with the fabric. Some lead with the silhouette. The variety keeps the model from learning a formula and teaches it a vocabulary instead.
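The rotation is simple enough to sketch in code. This is an illustration of the idea, not my actual captioning tool, and the component strings are invented:

```python
# Sketch of the caption-variation idea: same components, different lead,
# so the model learns a vocabulary rather than a sentence template.
TRIGGER = "PR_STYLE"

def build_caption(construction, fabric, silhouette, lead):
    """Assemble a caption with one chosen component promoted to the front."""
    parts = {"construction": construction, "fabric": fabric, "silhouette": silhouette}
    lead_part = parts.pop(lead)
    return ", ".join([f"{TRIGGER} {lead_part}", *parts.values()])

components = dict(
    construction="tethered construction with visible anchor points",
    fabric="black 4-way power mesh with milliskin ruffle tiers",
    silhouette="architectural silhouette, hip to floor",
)

# Rotate which component leads across the training set.
for lead in ("construction", "fabric", "silhouette"):
    print(build_caption(**components, lead=lead))
```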

PR_STYLE tethered construction connecting fitted bodice to skirt through suspended fabric bridges, black 4-way power mesh holding the body's silhouette while 4-way milliskin ruffle tiers cascade from hip to floor, the compression of milliskin holding each ruffle's weight before releasing into the next fall, architectural silhouette where every connection point is visible, dual figure composition, transparent background, dramatic directional lighting

Sample caption from the v2 training set, Tether category. The trigger word PR_STYLE opens every caption, binding the full vocabulary to a single token.

"The problem wasn't the images. The problem was that I'd described them in someone else's language."

[Images: six PR_STYLE LoRA outputs, including an ivory ruffle peplum jumpsuit with deep V neckline and tiered organza; a black off-shoulder gown with cascading organza ruffles and ribbon streamers; a black gown with sculptural ruffled shoulders and peplum construction; a navy sheer gown with satin ribbon winding through a mesh bodice; an ivory cascading ruffle dress on a nude mesh bodysuit; and a dark organza gown with sculptural ruffle collar and full skirt]

Six outputs from the PR_STYLE LoRA (v2A). Same trigger word, same model, different prompts. The construction variety comes from the vocabulary embedded in training.

Generated by Deborah Won using PR_STYLE LoRA on Flux.1-dev, 2026.

After two thousand steps, the LoRA does a few things well. It generates silhouettes that read as Pisces Rising, not generic fashion, and it responds to construction prompts. I can ask for "tethered construction with visible anchor points" and get something recognizably different from "cascading ruffle tiers." The poses and compositions are consistently editorial, and the overall brand alignment is strong. The images feel like they belong in the same world as Arise.


Where it falls short is fabric specificity. The model understands that power mesh and organza are different things, but it doesn't yet render their behaviors distinctly: the way mesh compresses against the body versus the way organza holds volume away from it. Seam lines show up as surface texture rather than true construction detail. These are the hardest things for any image generator to learn because they're about material physics, not appearance.


I scored the model on six criteria: garment accuracy, fabric rendering, construction detail, silhouette, pose and composition, and brand alignment. Four of the six improved from version one to version two, and the caption rewrite was the single biggest factor.

Garment Accuracy: Recognizable garment types match prompts consistently.
Fabric Rendering: Texture differentiation is present, but fabric behaviors are not yet distinct.
Construction Detail: Seam lines render as surface texture, not true construction detail.
Silhouette: Strong and fashion-forward; full-body proportions improved from v1.
Pose and Composition: Editorial poses are consistent and natural across outputs.
Brand Alignment: Consistently reads as Pisces Rising.

One thing I didn't expect: the LoRA's outputs had wildly inconsistent backgrounds, with corner pixel values ranging from near-black to light gray, even though I'd specified transparent backgrounds in every caption. I built a three-step standardization pipeline: generate with the LoRA, remove the background through Replicate's rembg model, and composite onto a uniform ground. Cost: about a penny per image, three to four seconds each. A small problem, but the kind of thing that matters if you're using these images in any professional context.
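The corner-pixel check that flagged the problem is easy to sketch. The code below is an illustration: the `background_is_uniform` helper and its tolerance are my assumptions, not a standard API, and the Replicate and compositing steps are indicated in comments since they need an API token and image files:

```python
# Step 1 of the pipeline produces an image; this check decides whether
# its background needs fixing before steps 2 and 3.

def corner_values(pixels):
    """Sample the four corner pixels of a 2D grayscale image (list of rows)."""
    top, bottom = pixels[0], pixels[-1]
    return [top[0], top[-1], bottom[0], bottom[-1]]

def background_is_uniform(pixels, tolerance=8):
    """True if all four corners sit within `tolerance` gray levels of each other."""
    corners = corner_values(pixels)
    return max(corners) - min(corners) <= tolerance

# A toy 3x3 "image": near-black corners on the left, light gray on the
# right -- exactly the inconsistency the pipeline was built to fix.
inconsistent = [
    [5, 0, 200],
    [0, 0, 0],
    [8, 0, 190],
]
print(background_is_uniform(inconsistent))  # False -> send through rembg + composite

# Step 2 (assumption: model name as listed on Replicate) would look like:
#   replicate.run("cjwbw/rembg", input={"image": open("out.png", "rb")})
# Step 3 composites the cutout onto a uniform ground, for example with
# Pillow's Image.alpha_composite over a flat background layer.
```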

The LoRA isn't replacing design work; I'm not generating garments and then sewing them. What it does is encode design knowledge in a form that's responsive to the same language I use in my studio. I can explore directions quickly: what would Pisces Rising look like in knitwear? In menswear? With a different relationship between body and structure? The model gives me something to react to, a starting point that already speaks my vocabulary, rather than a blank page or a Pinterest board of other people's references.


The part that surprised me is the captions themselves. Writing twenty precise descriptions of my own garments, in my own construction language, forced a clarity about what Pisces Rising actually is that no mood board or brand statement ever has. The LoRA training data isn't just training data; it's a design document for sketching.

Does this beat the traditional process of sketching? Not for me, not when the barriers between design, merchandising, and product development are nonexistent. I can work much faster and more creatively without adding a step in between, especially when that step is a tool I have to build in parallel to the design vocabulary itself. I won't rely on it anytime soon.


But I'll continue to experiment with LoRAs, because at some point it could be useful when I need to communicate ideas about my collection to someone else prior to design development. Until then, it's pencil and paper for me.


I'm curious what happens with a larger training set. Twenty images from one collection taught the model a specific moment in Pisces Rising's evolution. What would sixty images across three collections teach it? Would the model find through-lines I haven't articulated, or would the signal dilute?


I'm also curious whether this approach transfers to other designers. The caption methodology, embedding brand-specific construction vocabulary rather than generic descriptors, should work for anyone with a precise material language. But most fashion brands don't have one. They describe their work in the same borrowed editorial vocabulary my v1 captions used. The LoRA just makes that gap visible.

Platform

Replicate

Base Model

Flux.1-dev (Black Forest Labs)

Training Data

20 images, Arise collection

Steps

2,000 (v2A production)

LoRA Rank

16

Resolution

1024 × 1536 (dual)

Trigger Word

PR_STYLE

Cost

~$20 USD, ~1 day

Last edited: 03/26
