Stop Memorizing Prompts: MJ/Niji is Chemistry, Not a Fill-in-the-Blank Quiz

A chaotic neutral engineer's field manual for surviving the AI art black box

Who Am I and Why Should You Listen (Or Not)

Okay, anyone who knows me knows this: I love chaos.

I hate using the same prompt over and over. I hate sticking to one p-code forever. Not because they don't work; they probably work fine. I just find it boring as hell.

I'm the kind of person who breaks things just to see what happens. I switch prompts. I swap p-codes. I add random words. I delete half the prompt and see if it still works.

This chaotic approach made me stumble into patterns.

Patterns that might help you troubleshoot your MJ/Niji disasters. Or maybe not. I don't know your life.

Here's the deal:

If you want a "do this, get that" tutorial → This isn't it.

If you're okay with experimental notes from someone who's stepped on a LOT of landmines → Keep reading.

Some things I still don't fully understand. Like why certain word combos just summon hyperrealistic hell instead of semi-realistic art. Especially words related to real camera settings; those seem particularly cursed. My theory: the system associates camera jargon with actual photographs and goes full National Geographic on you. But I'm not 100% sure.

So yeah. This is a field manual written by a chaotic neutral engineer.

Take what's useful. Ignore the rest. Go break some stuff and learn for yourself.

Let's go. 🫑


Part 1: Prompts Are Chemistry, Not Mad Libs

The Fatal Mistake Most People Make

They treat prompts like a template:

[Subject] + [Action] + [Scene] + [Style] = Result

So they think:

"I'll just swap 'girl' for 'boy' and 'forest' for 'beach' and get a similar vibe!"

Nope.

Prompts aren't Mad Libs. They're chemistry.

Why Chemistry?

In chemistry: mixing two substances doesn't give you "A sitting next to B." It gives you a new compound with its own properties.

In MJ/Niji: combining words doesn't give you "word A plus word B." It gives you a new semantic field with its own vibe.

Every word reshapes the entire "flavor" of the output.

Think of it like cooking: Sugar alone = sweet. Salt alone = salty. Sugar + salt + random spice? Could be amazing. Could be inedible garbage.

You won't know until you mix it.

Real Example: The Swimsuit Context Trap

✅ woman in swimsuit, beach, summer vibes, candid atmosphere
💥 woman in swimsuit, bedroom, soft lighting, intimate mood

Same swimsuit. Same woman. Different context.

The combination of swimsuit + bedroom + intimate created a "suggestive semantic field."

The system doesn't judge individual words. It judges the vibe your entire prompt creates.

Context changes everything.

The Uncomfortable Truth

What works today might break tomorrow.

Why?

Model updates. Filter adjustments. Black-box probability shifts.

This isn't user error. This is just how the black box works.

You can't "solve" MJ/Niji once and be done. You have to keep adapting.


Part 2: Common Newbie Mistakes (That I Also Made)

Mistake 1: Over-Using Emotional Vocabulary

The problem:

People write:

melancholy woman, depressed, lost in contemplation, feeling uncertain

The system's response:

"Cool story bro. What does she LOOK like?"

MJ/Niji doesn't understand feelings. It understands visuals.

The fix:

Translate emotions into visual cues:

melancholy → downcast eyes, muted colors
depressed → slumped shoulders, dim lighting
lost in contemplation → thoughtful expression, distant gaze

Show, don't tell. It's not a novel. It's a visual prompt.
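If you build prompts in code, the translation is just a lookup table. A minimal sketch in Python; the emotion-to-cue pairs are my own illustrative picks, not an official vocabulary:

```python
# Hypothetical emotion-to-visual lookup. The pairs are illustrative picks,
# not an official vocabulary.
EMOTION_TO_VISUAL = {
    "melancholy": "downcast eyes, muted colors, rain on window",
    "depressed": "slumped shoulders, dim lighting",
    "lost in contemplation": "thoughtful expression, distant gaze",
}

def translate_emotions(prompt: str) -> str:
    """Swap emotion words for concrete visual cues where we have a mapping."""
    parts = [p.strip() for p in prompt.split(",")]
    return ", ".join(EMOTION_TO_VISUAL.get(p, p) for p in parts)

print(translate_emotions("melancholy, woman, depressed"))
```

Anything not in the table passes through untouched, so the rewrite stays reversible.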

Mistake 2: Overstacking Negative Prompts

The problem:

People panic and write:

--no young, youthful, teen, teenage, adolescent, immature, childish, 
child-like, juvenile, underage, minor, kid...

What happens:

The prompt just gets noisier and the system gets more confused. More --no ≠ better.

The fix:

Keep negative prompts short and surgical.

✅ --no young, teen, youthful
❌ --no [20 synonyms for young]

Focus your energy on writing a STRONG positive prompt. Use --no as a scalpel, not a sledgehammer.

Mistake 3: Ignoring Artist Name Bias

The problem:

Not all artist names are created equal. Some artist names carry strong stylistic or thematic associations that affect your output way more than you think.

Example from my own experiments:

✅ tempting smile, curvaceous build,
in the style of Wlop, Greg Rutkowski, Kuvshinov Ilya
💥 [same prompt]
in the style of Wlop

Only difference: Number of artists.

Why?

Wlop's style is heavily associated with sensual female characters + dreamy romantic lighting.

So when you write tempting + curvaceous + Wlop, the system sees: Risk level: HIGH → 💥 Banned

But when you write tempting + curvaceous + [Wlop + GR + KI], the risk gets diluted by mixing in more neutral artists.

Like mixing vodka with juice. Pure vodka = strong. Vodka + juice + ice = chill.

The lesson:

Some artists are "high-risk" in the system's eyes:

Wlop → sensual characters, dreamy romantic lighting

If you want to use them, mix them with neutral artists:

Wlop + Greg Rutkowski + Kuvshinov Ilya

Balance the risk.

Mistake 4: Age-Stacking Overkill

The problem:

mature adult woman, 35 years old, in her mid-thirties, 
with mature features, mature appearance, mature face, 
NOT young, NOT youthful, definitely NOT a teen...

Diminishing returns. Yes, the system has a youth bias. But drowning the prompt in repetitive constraints doesn't help.

The fix:

Use layered testing, not overkill:

Layer 1: mature adult woman, in her mid-thirties, 35 years old → Test
Layer 2 (if needed): --no young, teen, youthful → Test
Layer 3 (if still needed): fine lines, subtle crow's feet → Test

Stop when it works. Don't keep adding layers.


Part 3: Surviving the Content Filter (The Tricky Part)

The Core Problem

MJ/Niji's content filter is STRICT. Sometimes it feels stricter than Google. Seriously.

And here's the frustrating part: The filter doesn't look at individual words. It looks at the overall semantic fieldβ€”the "vibe" created by your entire prompt.

The "Legitimate Scene" Strategy (With Important Caveats)

I've noticed that grounding your subject in a work/professional context often helps.

Examples:

More likely to pass:
  • woman holding microphone on stage
  • doctor holding stethoscope in hospital
  • scientist in lab with test tubes
  • artist painting at easel
More likely to get flagged:
  • woman reading book at home
  • woman sitting on couch
  • woman standing in bedroom

This is NOT a magic formula.

It's not like: "Just add a microphone and you're safe!"

What's actually happening: The system is judging the overall context and intent.

When you write woman in bedroom, intimate lighting, soft atmosphere, the system might think: "Hmm, this seems... suggestive."

But when you write woman on stage with microphone, spotlight, performance energy, the system thinks: "Oh, this is a performance/work context. Probably fine."

It's about semantic dilution. You're adding professional/public context to reduce ambiguity.

Why "Reading a Book" Can Be Risky

This sounds insane, but I've noticed: Generic "passive" activities sometimes get flagged: reading, thinking, sitting, relaxing.

The system might interpret these as "posing" rather than "doing something specific." And "posing" + certain other elements (clothing, lighting, location) = potential flag.

The "Professional Props" Trick

Adding work-related props helps establish "this is a professional/public setting":

microphone, stethoscope, test tubes, easel and brushes

These signal: "This person is WORKING, not posing suggestively."

But Let's Be Honest...

MJ/Niji's filter is a black box. Sometimes it makes NO sense: the same prompt passes one day and gets banned the next; one harmless word swap triggers a flag.

There's randomness involved. You can't predict it 100%.

My General Strategy

When I'm worried a prompt might be borderline:

  1. Add public/work context: in a café, bright daylight, other people visible in background
  2. Use "activity" rather than "pose": ❌ "woman standing" → ✅ "woman giving presentation"
  3. Keep clothing/body descriptions neutral: ❌ "tight dress, revealing" → ✅ "casual blouse, professional attire"
  4. Avoid "intimate" lighting/atmosphere words: ❌ "soft warm lighting, intimate mood" → ✅ "dramatic lighting, candid atmosphere"

This is about managing the OVERALL SEMANTIC FIELD.

It's not: "Add microphone = instant pass"

It's: "Build a context that makes your subject's presence feel natural, non-suggestive, and purpose-driven."

You're engineering the vibe.
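If you generate prompts programmatically, the strategy above can become a pre-flight audit that flags known-risky word pairings before you submit. The pairings below are my own guesses from stepping on landmines; the real filter is a black box:

```python
# Pre-flight "semantic field" audit. The risky pairings are personal guesses
# from trial and error, not anything the platform documents.
RISKY_COMBOS = [
    ({"swimsuit"}, {"bedroom", "intimate"}),
    ({"curvaceous"}, {"lying down"}),
]

def audit(prompt: str) -> list:
    """Return every risky (left, right) pairing that co-occurs in the prompt."""
    text = prompt.lower()
    hits = []
    for left, right in RISKY_COMBOS:
        if any(w in text for w in left) and any(w in text for w in right):
            hits.append((sorted(left), sorted(right)))
    return hits

print(audit("woman in swimsuit, bedroom, soft lighting"))  # one flagged combo
print(audit("woman in swimsuit, beach, summer vibes"))     # nothing flagged
```

An empty result doesn't mean safe; it just means none of YOUR known landmines are present. Grow the list as you step on new ones.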


Part 4: P-codes Have Personalities (And Strong Opinions)

Not All P-codes Are Created Equal

Through chaotic experimentation, I've learned:

Example: The Close-Up Diva

p-code: abcd11 + bds111 (fake example code)

This combo insists on close-up portraits, no matter what framing I ask for.

Why? I have no idea.

My best guess: Each p-code was trained on different datasets with different dominant compositions. But the model doesn't tell you this. You just have to... find out by trial and error.

Less is More (Sometimes)

If your long, detailed prompt is producing garbage: Try deleting stuff.

What to cut:

  1. Psychological descriptions: ❌ "lost in melancholic contemplation of her uncertain future" → ✅ "thoughtful expression, distant gaze"
  2. Redundant atmosphere words: ❌ "warm cozy intimate comfortable soft gentle atmosphere" → ✅ "warm candid atmosphere"
  3. Over-specific actions: ❌ "gracefully walking while holding a coffee cup and thinking" → ✅ "walking, holding coffee"

Donburi-Style Prompts

For certain p-codes, short and punchy wins:

rainy street, neon lights, cigarette smoke,
noir mood, trench coat, 1940s detective,
dramatic shadows

No complete sentences. No flowery descriptions. Just visual ingredients thrown together.

Like ordering food: "Rice. Beef. Egg. Sauce. Done." Not: "I would like a carefully curated culinary experience..."

Simple. Direct. Visual.
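In code, donburi style is just a comma-join of visual ingredients, plus stripping the flowery filler. A sketch, where the filler list is my own picks:

```python
# Donburi-style prompt builder: comma-join visual ingredients, drop filler.
# The FILLER set is a personal list, not an official stopword list.
FILLER = {"gracefully", "carefully", "beautifully", "elegantly", "very"}

def donburi(*ingredients: str) -> str:
    """Rice. Beef. Egg. Sauce. Done."""
    cleaned = []
    for item in ingredients:
        words = [w for w in item.split() if w.lower() not in FILLER]
        cleaned.append(" ".join(words))
    return ", ".join(cleaned)

print(donburi("gracefully walking", "holding coffee"))
print(donburi("rainy street", "neon lights", "noir mood"))
```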


Part 5: The "Realistic" Death Trap

Words That Summon Hyperrealism

Some words are cursed incantations that trigger photo-realistic nightmares: f/2.8, ISO 400, 50mm lens, natural lighting, photorealistic.

Why Camera Jargon is Especially Dangerous

Words like f/2.8, ISO 400, 50mm lens are photography terms. In the training data, these terms probably appear in captions of actual photographs. So the system thinks: "Oh, user wants a PHOTO." And gives you uncanny valley hyperrealism instead of beautiful digital art.

I'm like 70% confident in this theory. What I DO know for certain: Every time I use camera-technical words, I get way more photo-realistic outputs. So I stopped using them unless I actually want that look.

The Fix: Style Inoculation

Put this at the VERY BEGINNING of your prompt:

Semi-realism, digital illustration, pseudorealistic character art,

Think of it as a vaccine. You're telling the system upfront: "Everything I say next? Interpret it as ART. Digital painting. NOT a photograph."

Then even if you use words like detailed skin texture, the system reads it as painted detail, not photo detail.

Example Comparison

Without inoculation:
woman, detailed face, natural lighting, realistic skin
With inoculation:
Semi-realism, digital illustration, pseudorealistic character art,
woman, detailed face, dramatic lighting, realistic skin

Also notice: natural lighting → dramatic lighting

"Natural" sounds photographic. "Dramatic" sounds artistic.

It's vibes. Vibes matter.
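If your prompts come out of a script, the vaccine is just an idempotent prepend. A minimal sketch using the inoculation string from this section:

```python
# Style inoculation as an idempotent prepend: the vaccine goes at the very
# beginning, and re-inoculating an already-vaccinated prompt is a no-op.
INOCULATION = "Semi-realism, digital illustration, pseudorealistic character art"

def inoculate(prompt: str) -> str:
    """Prepend the vaccine unless the prompt already starts with it."""
    if prompt.lower().startswith(INOCULATION.lower()):
        return prompt
    return f"{INOCULATION}, {prompt}"

print(inoculate("woman, detailed face, dramatic lighting, realistic skin"))
```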


Part 6: Negative Prompts Are Surgery, Not Carpet Bombs

When to Use --no

✅ Use when: one specific unwanted element keeps showing up (glasses, hats, extra accessories).

❌ Don't use when: you're trying to steer the overall style. That's your positive prompt's job.

The Pink Elephant Problem

If I say "DON'T think about a pink elephant," what happens?

You immediately picture a pink elephant.

Same with --no.

If you write --no photorealistic, hyperrealistic, realistic, you just made the system focus HARD on the concept of "realistic."

Sometimes this backfires. I've had prompts where adding --no realistic made things MORE realistic.

My Strategy: Surgical Precision

Keep it short. Target the exact problem.

✅ --no eyeglasses, glasses
❌ --no eyeglasses, glasses, spectacles, frames, optical devices, 
reading glasses, sunglasses, goggles, monocle...

More words = more chaos.
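You can even enforce the scalpel rule in tooling: a tiny helper that refuses to build a carpet-bomb --no. The three-term cap is my own rule of thumb, not a platform limit:

```python
def surgical_no(terms: list, cap: int = 3) -> str:
    """Build a --no flag; refuse to carpet-bomb past `cap` terms.

    The default three-term cap is a personal rule of thumb.
    """
    if len(terms) > cap:
        raise ValueError(
            f"{len(terms)} negative terms is a sledgehammer; "
            f"pick the {cap} that target the exact problem"
        )
    return "--no " + ", ".join(terms)

print(surgical_no(["eyeglasses", "glasses"]))
```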


Part 7: Semantic Field Landmines

The Brutal Truth

MJ/Niji doesn't judge individual words. It judges the overall vibe your words create together.

Real Example: Artist Name Risk

From my experiments:

✅ tempting smile, curvaceous build,
in the style of Wlop, Greg Rutkowski, Kuvshinov Ilya
💥 [exact same]
in the style of Wlop

Why?

Wlop = sensual female characters + romantic lighting.

The system's "risk calculator":

Version A: tempting (risk +20) + curvaceous (risk +15) + mixed styles (risk +15) = TOTAL: 50% → ✅ Pass

Version B: tempting (risk +20) + curvaceous (risk +15) + Wlop (risk +30) = TOTAL: 65% → 💥 Banned

It's dilution. Mix risky elements with neutral ones.
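The "risk calculator" above is obviously a cartoon, but it's easy to play with as a toy model. The weights and the ban threshold are invented to reproduce this example's numbers; the real filter is certainly not this simple:

```python
# Toy additive risk model. Weights and BAN_THRESHOLD are invented to match
# the example above; the real filter is a black box and far more complex.
RISK = {"tempting": 20, "curvaceous": 15, "wlop": 30, "mixed styles": 15}
BAN_THRESHOLD = 60

def risk_score(elements: list) -> int:
    """Sum the (made-up) risk weight of each element; unknowns count 0."""
    return sum(RISK.get(e.lower(), 0) for e in elements)

def banned(elements: list) -> bool:
    return risk_score(elements) >= BAN_THRESHOLD

print(risk_score(["tempting", "curvaceous", "mixed styles"]))  # 50 -> passes
print(risk_score(["tempting", "curvaceous", "Wlop"]))          # 65 -> banned
```

The useful intuition the toy captures: swapping one high-weight element (Wlop alone) for several low-weight ones (mixed styles) keeps the same prompt under the line.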

Other Landmine Combos

  1. Clothing + Location: ✅ swimsuit + beach | 💥 swimsuit + bedroom
  2. Body + Pose: ✅ athletic build + running | 💥 curvaceous + lying down
  3. Age + Clothing: ✅ mature woman + any outfit | 💥 young girl + ANY clothing description (instant ban, correct behavior)
  4. Mood + Lighting: ✅ dramatic mood + cinematic lighting | 💥 intimate mood + soft warm lighting

The Lesson

Individual words ≠ the problem.
Word combinations = semantic field = what gets judged.

Your prompt is a recipe. Individual ingredients are harmless. Combined wrong? Food poisoning.


Part 8: Age Control is War

The Problem

Even if you write mature adult woman, in her mid-thirties, 35 years old, the system gives you someone who looks 23.

Why? Training data is FLOODED with young characters. The model has a strong youth bias.

My Layered Defense

Don't use all layers at once. Build one at a time.

Layer 1: Triple-lock the age
mature adult woman, in her mid-thirties, 35 years old
Why three? Redundancy. Hard to ignore.
Layer 2: Negative prompt
--no young, teen, youthful
Layer 3: Aging details
fine lines around eyes, subtle crow's feet
Be subtle. Don't overdo it or you'll get someone who looks 60.
Layer 4: Celebrity reference
character reference: Cate Blanchett
Pick someone actually in their 30s-40s.
Layer 5: Blame the p-code
Some p-codes just have youth bias. Try a different one.
Add one layer. Test. Then add another if needed. Don't throw everything in at once.
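The layered protocol translates directly into code: add one layer, test, stop when it works. A sketch where `looks_right` stands in for you eyeballing the render (there is no API that reports apparent age):

```python
# Layered age-control: add one layer, test, stop when it works.
# `looks_right` is a stand-in for human judgment of the render.
LAYERS = [
    "mature adult woman, in her mid-thirties, 35 years old",
    "--no young, teen, youthful",
    "fine lines around eyes, subtle crow's feet",
    "character reference: Cate Blanchett",
]

def build_prompt(base: str, looks_right) -> str:
    prompt = base
    for layer in LAYERS:
        if looks_right(prompt):
            break  # stop when it works; don't keep adding layers
        sep = " " if layer.startswith("--") else ", "
        prompt = prompt + sep + layer
    return prompt

# Pretend the render only looks right once the negative prompt is in:
print(build_prompt("portrait", lambda p: "--no" in p))
```

Layer 5 (switching p-code) isn't in the list because it changes the generator, not the prompt.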

Part 9: Disaster Protocols

🔴 Disaster A: Hyperrealistic Hell

Symptom: Output looks like a photo. Lost all painterly beauty.

Step 1: Add style inoculation
Semi-realism, digital illustration, pseudorealistic character art,
Step 2: Remove realistic skin → Test
Step 3: Put it back, remove other words one at a time
Step 4: Change lighting: natural → dramatic
Step 5: Change camera: close-up → medium-shot
Step 6: Try different p-code or rewrite prompt
Only change ONE thing at a time.
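One-change-at-a-time debugging is easy to scaffold: generate variants where each candidate fix is applied alone, so you can tell exactly which word was summoning the hyperrealism. The fix list mirrors the steps above:

```python
# Single-variable variants for hyperrealism debugging: each candidate fix is
# applied alone, so the culprit word can be isolated.
def single_variable_variants(prompt: str) -> list:
    fixes = [
        ("realistic skin", ""),                     # remove the suspect word
        ("natural lighting", "dramatic lighting"),  # artistic, not photographic
        ("close-up", "medium-shot"),                # back the camera off
    ]
    variants = []
    for old, new in fixes:
        if old in prompt:
            variants.append(prompt.replace(old, new).strip(", "))
    return variants

for v in single_variable_variants("close-up, woman, natural lighting, realistic skin"):
    print(v)
```

Render all variants side by side; whichever one loses the photo look points at the guilty word.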

🔴 Disaster B: Unwanted Elements (Glasses, Hats, etc.)

Symptom: You didn't ask for glasses. System gives everyone glasses.

Parallel testing:

Test A: same prompt + --no eyeglasses, glasses
Test B: same prompt, different p-code

Test both simultaneously. Find out if it's prompt or p-code.

🔴 Disaster C: Wrong Age

Symptom: Asked for 35-year-old. Got teenager.

Layer 1: Strengthen main prompt → Test
Layer 2: Add --no young, teen → Test
Layer 3: Add aging cues → Test
Layer 4: Celebrity reference → Test
Layer 5: Try different p-code

🔴 Disaster D: Too Cartoon/Anime

Symptom: Wanted semi-realistic. Got anime.

Priority 1: Keep quality anchors (realistic skin, detailed face)
Priority 2: Change camera angle → Test
Priority 3: Simplify prompt → Test
Priority 4: Strengthen lighting → Test
Priority 5: Try different p-code

🔴 Disaster E: Content Ban

Symptom: Changed one word. Got banned.

It's a semantic field problem.

Solution A: Change the risky element
Solution B: Add safe context to scene
Solution C: Add third element (dilution)

Example: 💥 swimsuit + bedroom → ✅ swimsuit + bedroom + "unpacking beach bag, vacation prep"

Build a narrative that makes the combo logical and non-suggestive.


Part 10: Build Your Own Intuition

My "feeling" came from chaos, failure, experimentation. You can't download my intuition. But you can build your own.

How I Did It

  1. I failed a lot - Every failure taught my brain a pattern.
  2. I tested one variable at a time - Change one thing. Test. Learn which change did what.
  3. I tested hypotheses in parallel - Instead of waiting on one run at a time, I fired off several single-variable fixes at once and compared.
  4. I accepted randomness - Sometimes I don't know WHY it works. And that's okay.
  5. I learned when to retreat - Know when to change p-code, rewrite prompt, or try again tomorrow.

What You Should Do

🎯 Step on your own landmines - Build YOUR map. My landmines aren't yours.

🎯 Experiment fearlessly - Worst case? Bad image. Generate another one.

🎯 Build feeling through repetition - After enough attempts, your brain will whisper: "This prompt feels... off. Change that word." That's intuition forming.


Part 11: Community, Collaboration, and Witchcraft

The Uncomfortable Reality

No matter how disciplined you are, randomness plays a role.

Model updates. Platform changes. Black-box probability shifts.

What works today might not work tomorrow. And vice versa.

The Real Mastery

Learning to:

  1. Adapt when the model shifts under you
  2. Accept randomness without rage-quitting
  3. Know when to retreat: change p-code, rewrite, or try again tomorrow

Encourage Skepticism

Don't blindly trust ANY guide (including mine). Test everything yourself.

Ask: "Why does this work? Why doesn't that work?"

Community knowledge is built on confusion as much as success.

Share Your Experiments

Post your findings. Compare notes. Reverse-engineer successful prompts together.

Collective knowledge > individual genius.


Final Thoughts: This is a Map, Not the Territory

I'm not giving you "the solution." I'm giving you a map of where I've been.

But the terrain changes. Updates happen. Model behavior shifts.

Your journey will be different.

What I Hope You Take Away

Prompts are chemistry, not Mad Libs. The filter judges semantic fields, not individual words. P-codes have personalities. Change one variable at a time.

Most importantly:

These patterns can't be "installed." They grow through:

Repetition. Failure. Experimentation.

That's the game.

Closing Note: On Gurus and Formulas

Be skeptical of anyone claiming to have "the answer." Including me.

Even the best pattern, overused, becomes a trap.

The model evolves. Your tactics must too.

Stay curious. Stay adaptive. Stay ready to break your own rules.


Welcome to the chaos. ❀
Good luck. May your prompts render without bans. 🎨

P.S.
If you like neat, predictable systems → MJ/Niji will frustrate you.
If you like breaking things to see what happens → You'll have fun.
Either way: Experiment. Fail. Learn. Repeat.
PuppyJun
