
In a groundbreaking leap for AI-driven creativity, Midjourney unveiled its Omni-Reference feature on May 1, 2025, transforming how artists, designers, and storytellers craft visual worlds. This innovative tool, launched as part of Midjourney’s Version 7 (V7) platform, empowers users to seamlessly integrate specific elements—be it a character, object, or entire aesthetic—into their AI-generated images with unprecedented precision. Described as a way to say “put THIS in my image,” Omni-Reference is already sparking excitement across the creative community for its versatility and potential to streamline workflows. Let’s explore how this feature works, why it matters, and how you can use it to bring your visions to life.

A New Era of Creative Control

Midjourney has long been celebrated for turning text prompts into stunning visuals, but maintaining consistency across images—such as reusing a specific character or logo—has been a challenge. Omni-Reference addresses this by evolving the V6 Character Reference tool into a more robust system that supports a wide range of elements. Whether you’re designing a recurring character for a graphic novel, embedding a brand logo in a marketing campaign, or placing a vintage car in a futuristic scene, Omni-Reference ensures your chosen elements appear faithfully in the final image.

The feature supports an impressive array of reference types:

  • People and Creatures: Include human characters, animals, or fantastical beings with consistent features like hair, clothing, or expressions.
  • Objects and Props: Add specific items like weapons, jewelry, or furniture, ensuring they match your reference image.
  • Vehicles and Settings: Incorporate cars, spaceships, or even architectural elements with precise details.
  • Styles and Aesthetics: Apply entire character designs or visual themes, such as a cyberpunk vibe or watercolor texture, across multiple generations.

Unlike its predecessor, which was limited to character consistency, Omni-Reference is a “universal reference tool” that handles complex integrations, making it a game-changer for creators seeking cohesive visuals. Early adopters have called it “incredibly cool” and a “10x improvement” for consistent character design.

How Omni-Reference Works

Omni-Reference is designed for ease of use, whether you’re working on Midjourney’s web platform or Discord interface. Here’s a glimpse into its functionality:

  • Web Interface: Users update to V7 in the settings menu, then drag and drop a reference image (in .png, .jpg, .gif, .webp, or .jpeg format) into the “Omni-Reference” section of the prompt bar. A slider adjusts the “omni-weight” (--ow), which controls how closely the output adheres to the reference, from subtle influence (e.g., --ow 25 for style shifts like photo to anime) to strict replication (e.g., --ow 400 for preserving specific details like a character’s face).
  • Discord Interface: Add the --oref parameter followed by an image URL (e.g., --oref <image_url>) to your prompt. Adjust the influence with --ow, ranging from 0 to 1000 (default is 100). For example, a prompt like “a warrior holding a sword --oref <sword_url> --ow 400” ensures the exact sword appears in the scene.
  • Multiple Elements: While currently limited to one reference image, users can include multiple characters or objects within that image and specify them in the prompt (e.g., “a knight and a dragon --oref <knight_dragon_url>”). This allows for complex compositions, though intricate details like logos or freckles may not always transfer perfectly.
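To make the parameter mechanics above concrete, here is a minimal sketch of a helper that assembles a Discord-style prompt string. This is a hypothetical illustration, not part of any Midjourney API; it assumes only what the article states: `--oref` takes an image URL, and `--ow` ranges from 0 to 1000 with a default of 100.

```python
def build_omni_prompt(text: str, ref_url: str, ow: int = 100) -> str:
    """Assemble a Discord-style Midjourney prompt with an Omni-Reference.

    Hypothetical helper for illustration only. Clamps the omni-weight
    to the documented 0-1000 range (default 100) and targets V7.
    """
    ow = max(0, min(1000, ow))
    return f"{text} --oref {ref_url} --ow {ow} --v 7"

# Example: the warrior-and-sword prompt from the text, with a strict weight.
prompt = build_omni_prompt(
    "a warrior holding a sword",
    "https://example.com/sword.png",
    ow=400,
)
print(prompt)
```

Clamping the weight up front mirrors how the slider on the web interface bounds the value, so a typo like `--ow 5000` degrades gracefully instead of producing an invalid prompt.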

The feature integrates with other Midjourney tools, such as personalization profiles, style references, and mood boards, enabling creators to combine precise element placement with tailored aesthetics. However, it’s not compatible with Fast Mode, Draft Mode, or V6.1’s inpainting features, and it’s still in testing, with Midjourney encouraging user feedback to refine its capabilities.

Why Omni-Reference Matters

Omni-Reference arrives at a time when AI image generation is becoming integral to industries like gaming, film, advertising, and digital art. Maintaining visual consistency is critical for storytelling and branding, yet many AI tools struggle to preserve specific elements across outputs. Omni-Reference solves this by giving users granular control, reducing the need for extensive post-editing or repeated prompt tweaks.

For example, a game designer can use Omni-Reference to ensure a protagonist’s armor remains consistent across cutscenes, while a marketer can embed a company logo into diverse campaign visuals. The feature’s ability to handle non-human elements, like vehicles or mythical creatures, also opens doors for sci-fi and fantasy creators. One user noted, “Character consistency just got 10x easier!”

The release also reflects Midjourney’s response to competitive pressures. Rivals like Runway’s Gen-4 have introduced similar reference tools for video, prompting Midjourney to accelerate its feature rollout. While some users report that Omni-Reference isn’t as advanced as Runway’s offering, its integration with V7’s enhanced prompt accuracy and image quality makes it a formidable contender.

Tutorial: How to Use Omni-Reference in Midjourney V7

Ready to try Omni-Reference? Follow this step-by-step guide to incorporate specific elements into your AI-generated images. You’ll need a Midjourney subscription and access to V7, which requires setting up a V7 Global Personalization Profile by ranking 200 image pairs.

Step 1: Set Up Your Account

  • Visit midjourney.com or join the Midjourney Discord server.
  • Ensure your account is set to V7 (web: select V7 in settings; Discord: add --v 7 to prompts).
  • Complete your V7 personalization profile by ranking image pairs at midjourney.com/ideas to unlock V7 features.

Step 2: Prepare Your Reference Image

  • Choose or create a reference image (e.g., a character sketch, logo, or object) in .png, .jpg, .gif, .webp, or .jpeg format.
  • If using Discord, upload the image to a platform like Discord or Imgur to generate a public URL. For web users, have the image ready on your device.
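Before uploading, it can help to confirm your file is in one of the supported formats listed above. A small sketch of such a check (a local convenience, nothing Midjourney provides):

```python
from pathlib import Path

# The reference-image formats Omni-Reference accepts, per the step above.
SUPPORTED_FORMATS = {".png", ".jpg", ".jpeg", ".gif", ".webp"}

def is_supported_reference(filename: str) -> bool:
    """Return True if the file extension is one Omni-Reference accepts."""
    return Path(filename).suffix.lower() in SUPPORTED_FORMATS

print(is_supported_reference("knight_sketch.PNG"))  # True
print(is_supported_reference("logo.bmp"))           # False
```

Lower-casing the suffix means files saved as `.PNG` or `.Jpg` pass the check too.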

Step 3: Craft Your Prompt

  • Web: Open the “Imagine” bar, click the image icon, and upload your reference image. Drag it to the “Omni-Reference” section. Enter a text prompt describing the scene (e.g., “a knight in a forest holding a glowing sword”). Adjust the omni-weight slider (or type --ow <value>) to control adherence.
  • Discord: Write your prompt, adding --oref <image_url> and --ow <value> (e.g., “a knight in a forest holding a glowing sword --oref <sword_url> --ow 400 --v 7”). Ensure the URL is valid.
  • Include specific details in the prompt to reinforce the reference (e.g., “glowing blue sword”).
  • For style changes, use a lower --ow (e.g., --ow 25) and specify the style (e.g., “in the style of anime”).

Step 4: Generate and Refine

  • Submit the prompt to generate images. Review the outputs to ensure the reference element appears as intended.
  • If results aren’t accurate, adjust the --ow value, clarify the prompt, or simplify the reference image. For high stylization (--stylize >500), use a higher --ow to maintain details.
  • Save or upscale your favorite image. Share it via Midjourney’s community gallery or social media.

Pro Tip: Test with simple prompts first, like “a cat --oref <cat_url> --ow 100,” to understand how --ow affects output. Avoid overly complex reference images, as small details may not transfer perfectly.
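Following that tip, one low-effort way to learn the weight parameter is to generate a small batch of otherwise-identical prompts at several --ow values and compare the results side by side. A hypothetical sketch of such a sweep (again, just local string building, not a Midjourney feature):

```python
def ow_sweep(text: str, ref_url: str, weights=(25, 100, 400, 800)) -> list[str]:
    """Build one test prompt per omni-weight, clamped to the 0-1000 range."""
    return [
        f"{text} --oref {ref_url} --ow {max(0, min(1000, w))}"
        for w in weights
    ]

# Paste each line into Midjourney and compare how closely the cat is preserved.
for p in ow_sweep("a cat", "https://example.com/cat.png"):
    print(p)
```

Running the same scene at 25, 100, 400, and 800 makes it easy to see where subtle influence ends and strict replication begins for your particular reference image.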

The Road Ahead

Omni-Reference is still experimental, and Midjourney is actively seeking user feedback to enhance its functionality. Planned updates include support for multiple reference images and integration with upcoming features like video and real-time 3D generation. Some users have reported minor issues, such as inconsistent detail retention or moderation blocks for seemingly innocent prompts, but Midjourney’s rapid update cycle—promising tweaks every 1–2 weeks—suggests these will be addressed swiftly.

The feature’s launch underscores Midjourney’s commitment to empowering creators in a competitive AI landscape. As tools like Omni-Reference redefine what’s possible, they also raise ethical questions about copyright and responsible use, especially with real-world likenesses. Midjourney emphasizes joy, wonder, and respect in its community guidelines, urging users to wield this power thoughtfully.

For now, Omni-Reference invites artists to dream bigger, offering a canvas where a single image can anchor entire worlds. Whether you’re crafting a brand campaign or a fantasy epic, this tool makes consistency effortless and creativity boundless. So, fire up Midjourney V7, upload your reference, and tell the AI exactly what to put in your image—the possibilities are endless.

By Kenneth
