
{"id":136931,"date":"2026-02-09T11:12:53","date_gmt":"2026-02-09T03:12:53","guid":{"rendered":"https:\/\/vertu.com\/?post_type=aitools&#038;p=136931"},"modified":"2026-03-13T13:59:35","modified_gmt":"2026-03-13T05:59:35","slug":"seedance-2-0-complete-guide-bytedances-revolutionary-multimodal-ai-video-generator","status":"publish","type":"aitools","link":"https:\/\/legacy.vertu.com\/ar\/ai-tools\/seedance-2-0-complete-guide-bytedances-revolutionary-multimodal-ai-video-generator\/","title":{"rendered":"Seedance 2.0 Complete Guide: ByteDance&#8217;s Revolutionary Multimodal AI Video Generator"},"content":{"rendered":"<h1><img fetchpriority=\"high\" decoding=\"async\" class=\"alignnone size-full wp-image-136954\" src=\"https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Seedance-2.0-Complete-Guide.png\" alt=\"\" width=\"832\" height=\"474\" srcset=\"https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Seedance-2.0-Complete-Guide.png 832w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Seedance-2.0-Complete-Guide-300x171.png 300w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Seedance-2.0-Complete-Guide-768x438.png 768w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Seedance-2.0-Complete-Guide-18x10.png 18w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Seedance-2.0-Complete-Guide-600x342.png 600w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Seedance-2.0-Complete-Guide-64x36.png 64w\" sizes=\"(max-width: 832px) 100vw, 832px\" \/><\/h1>\n<h2>The Ultimate Tutorial: Master Image+Video+Audio+Text Input, @ Reference System, Character Consistency, Camera Replication, and Native Audio Generation<\/h2>\n<p>Seedance 2.0 represents a fundamental shift in AI video generation by accepting <strong>images, videos, audio, and text simultaneously<\/strong> as inputs\u2014enabling filmmaker-level control over every aspect of creation. 
<strong>The Multimodal Breakthrough<\/strong>: Upload up to <strong>9 images, 3 videos (15s max), 3 audio files (15s max), plus text prompts<\/strong> (12 files total per generation) using the <strong>@ mention reference system<\/strong> to explicitly control style, motion, camera work, rhythm, and narrative. <strong>The Quality Leap<\/strong>: Sharp <strong>2K resolution<\/strong> with enhanced colors, automatic lighting adjustment, smooth physics, fluid motion, precise instruction following, and style consistency throughout 4-15 second outputs. <strong>The Speed Advantage<\/strong>: <strong>30% faster generation<\/strong> than previous versions while supporting videos <strong>3x longer<\/strong>, maintaining professional quality without delays. <strong>The Character Consistency<\/strong>: Faces, product details, logos, text, environments, and visual styles remain accurate across all frames\u2014solving the identity drift problem of earlier AI video models. <strong>The Advanced Capabilities<\/strong>: Motion\/camera replication from reference videos (choreography, tracking shots, crane movements, Hitchcock zooms), creative template replication (ad formats, visual effects, film techniques), video extension, video editing (character replacement, element addition\/removal, plot subversion), audio-synchronized generation (lip-sync dialogue, sound effects, background music), beat-synced editing, and one-take continuity shots. <strong>The @ Reference Power<\/strong>: Natural language instructions like &#8220;@Image1 as first frame, reference @Video1 for camera movement, use @Audio1 for background music&#8221; give explicit control over each uploaded asset's contribution. <strong>The Applications<\/strong>: Advertising\/e-commerce product demos, content localization with multi-language lip-sync, storyboard-to-video conversion, template-based creation, music videos, cinematic sequences. 
<strong>Available Now<\/strong>: On WaveSpeedAI, ImagineArt and <a href=\"https:\/\/www.topview.ai\/seedance-2\" target=\"_blank\" rel=\"noopener\">Topview<\/a> platforms with free trials.<\/p>\n<h2>Part I: What Makes Seedance 2.0 Revolutionary<\/h2>\n<h3>The Fundamental Paradigm Shift<\/h3>\n<p><strong>Traditional AI Video Limitations<\/strong>:<\/p>\n<ul>\n<li>Text prompts only (abstract, imprecise)<\/li>\n<li>Single reference image maximum<\/li>\n<li>No audio input capability<\/li>\n<li>Limited control over specific elements<\/li>\n<li>Generic, unpredictable outputs<\/li>\n<\/ul>\n<p><strong>Seedance 2.0 Innovation<\/strong>:<\/p>\n<ul>\n<li><strong>Multimodal inputs<\/strong>: Images + videos + audio + text simultaneously<\/li>\n<li><strong>Explicit reference control<\/strong>: @ mention system for precise asset usage<\/li>\n<li><strong>Filmmaker-level direction<\/strong>: Control over style, motion, camera, audio separately<\/li>\n<li><strong>Predictable results<\/strong>: Natural language instructions for exact specifications<\/li>\n<li><strong>Professional outputs<\/strong>: Cinema-quality 2K resolution<\/li>\n<\/ul>\n<h3>The Technical Specifications<\/h3>\n<p><strong>Input Capabilities<\/strong>:<\/p>\n<table>\n<thead>\n<tr>\n<th>Input Type<\/th>\n<th>Maximum Capacity<\/th>\n<th>Details<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Images<\/strong><\/td>\n<td>Up to 9 images<\/td>\n<td>JPEG, PNG formats, style\/character reference<\/td>\n<\/tr>\n<tr>\n<td><strong>Videos<\/strong><\/td>\n<td>Up to 3 videos<\/td>\n<td>Max 15 seconds total, motion\/camera reference<\/td>\n<\/tr>\n<tr>\n<td><strong>Audio<\/strong><\/td>\n<td>Up to 3 MP3 files<\/td>\n<td>Max 15 seconds total, rhythm\/music reference<\/td>\n<\/tr>\n<tr>\n<td><strong>Text<\/strong><\/td>\n<td>Natural language prompts<\/td>\n<td>Unlimited length, narrative guidance<\/td>\n<\/tr>\n<tr>\n<td><strong>Total Files<\/strong><\/td>\n<td>12 files per generation<\/td>\n<td>Prioritize highest-impact 
assets<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><strong>Output Specifications<\/strong>:<\/p>\n<table>\n<thead>\n<tr>\n<th>Output Feature<\/th>\n<th>Specification<\/th>\n<th>Benefits<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Resolution<\/strong><\/td>\n<td>2K (2048\u00d71080)<\/td>\n<td>Sharp detail, professional quality<\/td>\n<\/tr>\n<tr>\n<td><strong>Duration<\/strong><\/td>\n<td>4-15 seconds<\/td>\n<td>User-selectable length<\/td>\n<\/tr>\n<tr>\n<td><strong>Audio<\/strong><\/td>\n<td>Native sound effects + music<\/td>\n<td>Fully synchronized<\/td>\n<\/tr>\n<tr>\n<td><strong>Frame Rate<\/strong><\/td>\n<td>Smooth motion<\/td>\n<td>Natural movement physics<\/td>\n<\/tr>\n<tr>\n<td><strong>Aspect Ratios<\/strong><\/td>\n<td>16:9, 1:1, others<\/td>\n<td>Platform-optimized<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>The @ Reference System<\/h3>\n<p><strong>How It Works<\/strong>: After uploading assets, reference them in prompts using <code>@<\/code> followed by the file identifier<\/p>\n<p><strong>Basic Syntax Example<\/strong>:<\/p>\n<pre><code>@Image1 as the first frame, reference @Video1 for camera movement,\r\nuse @Audio1 for background music\r\n<\/code><\/pre>\n<p><strong>Why It Matters<\/strong>: Explicit control eliminates guesswork\u2014you specify exactly what each file contributes<\/p>\n<p><strong>Natural Language Processing<\/strong>: Model understands context and intent<\/p>\n<h2>Part II: Core Capabilities in Depth<\/h2>\n<h3>1. 
Enhanced Base Quality<\/h3>\n<p><strong>Physics Accuracy<\/strong>:<\/p>\n<ul>\n<li>Objects fall, collide, interact according to real-world rules<\/li>\n<li>Proper gravity, momentum, inertia<\/li>\n<li>Realistic material behavior (fabric, liquids, solids)<\/li>\n<li>Natural environmental interactions<\/li>\n<\/ul>\n<p><strong>Example Prompt<\/strong>:<\/p>\n<pre><code>A girl elegantly hanging laundry, finishing one piece and reaching\r\ninto the basket for another, shaking it out firmly.\r\n<\/code><\/pre>\n<p><strong>Result<\/strong>: Continuous action with accurate fabric physics, natural body mechanics, smooth transitions\u2014no explicit physics instructions needed<\/p>\n<p><strong>Fluid Motion<\/strong>:<\/p>\n<ul>\n<li>Proper momentum and timing<\/li>\n<li>Smooth transitions between poses<\/li>\n<li>Natural acceleration\/deceleration<\/li>\n<li>Lifelike movement patterns<\/li>\n<\/ul>\n<p><strong>Precise Instruction Following<\/strong>:<\/p>\n<ul>\n<li>Complex multi-step prompts executed accurately<\/li>\n<li>Understands nuanced creative direction<\/li>\n<li>Maintains consistency with specifications<\/li>\n<li>Interprets filmmaker terminology correctly<\/li>\n<\/ul>\n<p><strong>Style Consistency<\/strong>:<\/p>\n<ul>\n<li>Visual coherence throughout entire video<\/li>\n<li>No style drift between frames<\/li>\n<li>Stable color palette<\/li>\n<li>Consistent lighting and atmosphere<\/li>\n<\/ul>\n<h3>2. 
The Multimodal Reference System<\/h3>\n<p><strong>What You Can Reference<\/strong>:<\/p>\n<p><strong>From Images<\/strong>:<\/p>\n<ul>\n<li>Character appearances and faces<\/li>\n<li>Product details and branding<\/li>\n<li>Visual style and aesthetics<\/li>\n<li>Color palettes and mood<\/li>\n<li>Architectural\/environmental elements<\/li>\n<li>Clothing and accessories<\/li>\n<\/ul>\n<p><strong>From Videos<\/strong>:<\/p>\n<ul>\n<li>Motion patterns and choreography<\/li>\n<li>Camera techniques and movements<\/li>\n<li>Editing rhythm and pacing<\/li>\n<li>Visual effects and transitions<\/li>\n<li>Action sequences<\/li>\n<li>Performance styles<\/li>\n<\/ul>\n<p><strong>From Audio<\/strong>:<\/p>\n<ul>\n<li>Background music and atmosphere<\/li>\n<li>Rhythm and beat synchronization<\/li>\n<li>Sound effect templates<\/li>\n<li>Dialogue and voice patterns<\/li>\n<li>Emotional tone<\/li>\n<\/ul>\n<p><strong>From Text<\/strong>:<\/p>\n<ul>\n<li>Narrative structure<\/li>\n<li>Scene descriptions<\/li>\n<li>Character motivations<\/li>\n<li>Technical specifications<\/li>\n<li>Creative direction<\/li>\n<\/ul>\n<p><strong>The Key Principle<\/strong>: Use natural language to describe what to extract from which file<\/p>\n<h3>3. 
Character and Object Consistency (The Identity Lock)<\/h3>\n<p><strong>The Previous Problem<\/strong>: AI video models struggle to maintain identity across frames\u2014faces morph, products change, details disappear<\/p>\n<p><strong>Seedance 2.0 Solution<\/strong>:<\/p>\n<p><strong>Face Consistency<\/strong>:<\/p>\n<ul>\n<li>Characters maintain exact appearance throughout<\/li>\n<li>Facial features stable across all angles<\/li>\n<li>Expression changes remain natural while preserving identity<\/li>\n<li>Multi-character scenes keep everyone distinct<\/li>\n<\/ul>\n<p><strong>Product Detail Preservation<\/strong>:<\/p>\n<ul>\n<li>Logos remain crisp and accurate<\/li>\n<li>Text legibility maintained<\/li>\n<li>Brand colors consistent<\/li>\n<li>Fine details (stitching, textures) preserved<\/li>\n<\/ul>\n<p><strong>Scene Coherence<\/strong>:<\/p>\n<ul>\n<li>Environments stable throughout<\/li>\n<li>Architecture consistent<\/li>\n<li>Props maintain appearance<\/li>\n<li>Background elements don't drift<\/li>\n<\/ul>\n<p><strong>Complex Example<\/strong>:<\/p>\n<pre><code>Man @Image1 comes home tired from work, walks down the hallway\r\nslowing his pace, stops at the front door. Close-up of his face\r\nas he takes a deep breath, adjusts his expression from stressed\r\nto relaxed. Close-up of him finding his keys, inserting them into\r\nthe lock. He enters and his daughter and pet dog run to greet him\r\nwith a hug. The interior is warm and cozy, with natural dialogue\r\nthroughout.\r\n<\/code><\/pre>\n<p><strong>Result<\/strong>: Man's face identical across all shots (long, medium, close-up), daughter and dog maintain appearances, interior consistent, emotional arc clear<\/p>\n<h3>4. 
Motion and Camera Replication<\/h3>\n<p><strong>What You Can Replicate<\/strong>:<\/p>\n<p><strong>Complex Choreography<\/strong>:<\/p>\n<ul>\n<li>Fighting sequences with multiple moves<\/li>\n<li>Dance routines and steps<\/li>\n<li>Action scenes with stunts<\/li>\n<li>Athletic performances<\/li>\n<li>Coordinated group movements<\/li>\n<\/ul>\n<p><strong>Camera Techniques<\/strong>:<\/p>\n<ul>\n<li><strong>Dolly shots<\/strong>: Smooth tracking on rails<\/li>\n<li><strong>Crane movements<\/strong>: Vertical and sweeping motions<\/li>\n<li><strong>Tracking shots<\/strong>: Following subject motion<\/li>\n<li><strong>Handheld feel<\/strong>: Documentary-style natural shake<\/li>\n<li><strong>Hitchcock zoom<\/strong>: Dolly zoom\/vertigo effect<\/li>\n<li><strong>Whip pans<\/strong>: Fast transitions between subjects<\/li>\n<li><strong>Orbit shots<\/strong>: 360\u00b0 circular camera movement<\/li>\n<\/ul>\n<p><strong>Editing Rhythm<\/strong>:<\/p>\n<ul>\n<li>Cut timing between shots<\/li>\n<li>Transition styles (hard cuts, fades, wipes)<\/li>\n<li>Pacing variations<\/li>\n<li>Montage sequences<\/li>\n<\/ul>\n<p><strong>Advanced Camera Example<\/strong>:<\/p>\n<pre><code>Reference @Image1 for the man's appearance in @Image2's elevator\r\nsetting. Fully replicate @Video1's camera movements and the\r\nprotagonist's facial expressions. Hitchcock zoom when startled,\r\nthen several orbit shots inside the elevator. Doors open, tracking\r\nshot following him out. Exterior scene references @Image3, man\r\nlooks around. Reference @Video1's mechanical arm multi-angle\r\nfollowing shots tracking his line of sight.\r\n<\/code><\/pre>\n<h3>5. 
Creative Template Replication<\/h3>\n<p><strong>Advertising Formats<\/strong>:<\/p>\n<ul>\n<li>Product reveal sequences<\/li>\n<li>Lifestyle montages<\/li>\n<li>Brand storytelling structures<\/li>\n<li>Call-to-action endings<\/li>\n<\/ul>\n<p><strong>Visual Effects<\/strong>:<\/p>\n<ul>\n<li>Particle systems (sparks, smoke, magic)<\/li>\n<li>Morphing and transformations<\/li>\n<li>Stylized transitions (light leaks, glitch effects)<\/li>\n<li>Text animations and kinetic typography<\/li>\n<\/ul>\n<p><strong>Film Techniques<\/strong>:<\/p>\n<ul>\n<li>Opening credit sequences<\/li>\n<li>Title card designs<\/li>\n<li>Dramatic reveals<\/li>\n<li>Scene transitions<\/li>\n<\/ul>\n<p><strong>Music Video Cuts<\/strong>:<\/p>\n<ul>\n<li>Beat-synced editing<\/li>\n<li>Performance montages<\/li>\n<li>Narrative intercuts<\/li>\n<li>Abstract visual sequences<\/li>\n<\/ul>\n<p><strong>Complex Template Example<\/strong>:<\/p>\n<pre><code>Replace the person in @Video1 with the girl in @Image1. Replace\r\nthe moon goddess CG with an angel referencing @Image2. When the\r\ngirl crouches, wings grow from her back. Wings sweep past camera\r\nfor transition. Reference @Video1's camera work and transitions.\r\nEnter the next scene through the angel's pupil, aerial shot of\r\nthe angel (spiraling wings match the pupil), camera descends\r\nfollowing the angel's face, pulls back on arm raise to reveal\r\nthe stone angel statues in background. One continuous shot\r\nthroughout.\r\n<\/code><\/pre>\n<h3>6. Video Extension (Seamless Continuity)<\/h3>\n<p><strong>Capability<\/strong>: Extend existing videos while maintaining narrative and visual coherence<\/p>\n<p><strong>Example Prompt<\/strong>:<\/p>\n<pre><code>Extend @Video1 by 15 seconds. Reference @Image1 and @Image2 for\r\nthe donkey-on-motorcycle character. 
Add a wild advertisement\r\nsequence:\r\n\r\nScene 1: Side shot, donkey bursts through fence on motorcycle,\r\nnearby chickens startled.\r\n\r\nScene 2: Donkey performs spinning stunts on sand, tire close-up\r\nthen aerial overhead shot of donkey doing circles, dust rising.\r\n\r\nScene 3: Mountain backdrop, donkey launches off slope, ad copy\r\nappears behind through masking effect (text revealed as donkey\r\npasses): \"Inspire Creativity, Enrich Life\". Final shot: motorcycle\r\npasses, dust cloud rises.\r\n<\/code><\/pre>\n<p><strong>Result<\/strong>: Original video seamlessly continues with new scenes matching style, character, motion quality, and narrative flow<\/p>\n<p><strong>Best Practice<\/strong>: Set generation duration to match extension length (extend by 5s = generate 5s)<\/p>\n<h3>7. Video Editing (Non-Destructive Modification)<\/h3>\n<p><strong>Character Replacement<\/strong>:<\/p>\n<ul>\n<li>Swap actors while keeping action identical<\/li>\n<li>Change protagonists in scenes<\/li>\n<li>Replace background characters<\/li>\n<\/ul>\n<p><strong>Element Addition\/Removal<\/strong>:<\/p>\n<ul>\n<li>Add objects to scenes<\/li>\n<li>Remove unwanted elements<\/li>\n<li>Modify environment details<\/li>\n<\/ul>\n<p><strong>Style Transfer<\/strong>:<\/p>\n<ul>\n<li>Apply new visual treatments<\/li>\n<li>Change color grading<\/li>\n<li>Modify lighting atmosphere<\/li>\n<\/ul>\n<p><strong>Narrative Changes<\/strong> (Plot Subversion):<\/p>\n<p><strong>Dramatic Example<\/strong>:<\/p>\n<pre><code>Subvert the plot of @Video1. The man's expression shifts instantly\r\nfrom tender to cold and ruthless. In the moment the woman least\r\nexpects it, he shoves her off the bridge into the water. The push\r\nis decisive, premeditated, without hesitation\u2014completely subverting\r\nthe romantic character setup. As she falls, no scream, only\r\ndisbelief in her eyes. 
She surfaces and shouts at him: \"You were\r\nlying to me from the start!\" He stands on the bridge with a cold\r\nsmile and says quietly: \"This is what your family owes mine.\"\r\n<\/code><\/pre>\n<p><strong>Result<\/strong>: Complete tonal shift from original\u2014romantic scene becomes thriller\/betrayal<\/p>\n<h3>8. Audio-Synchronized Generation<\/h3>\n<p><strong>Native Audio Capability<\/strong>: Seedance 2.0 generates videos with built-in sound\u2014not silent outputs requiring post-production<\/p>\n<p><strong>What's Generated<\/strong>:<\/p>\n<p><strong>Lip-Sync Dialogue<\/strong>:<\/p>\n<ul>\n<li>Multi-language support<\/li>\n<li>Natural mouth movements<\/li>\n<li>Proper timing and expression<\/li>\n<li>Emotional delivery<\/li>\n<\/ul>\n<p><strong>Sound Effects<\/strong>:<\/p>\n<ul>\n<li>Actions matched to visuals (footsteps, door creaks, impacts)<\/li>\n<li>Environmental sounds (wind, rain, ambient noise)<\/li>\n<li>Object interactions<\/li>\n<li>Natural acoustics<\/li>\n<\/ul>\n<p><strong>Background Music<\/strong>:<\/p>\n<ul>\n<li>Mood-appropriate scoring<\/li>\n<li>Rhythm matching visual pacing<\/li>\n<li>Dynamic intensity changes<\/li>\n<li>Professional composition<\/li>\n<\/ul>\n<p><strong>Voice Acting<\/strong>:<\/p>\n<ul>\n<li>Character-appropriate voices<\/li>\n<li>Emotional expression<\/li>\n<li>Proper enunciation<\/li>\n<li>Natural dialogue flow<\/li>\n<\/ul>\n<p><strong>Audio Reference Example<\/strong>:<\/p>\n<pre><code>Fixed shot. Fisheye lens looking down through circular opening.\r\nReference @Video1's fisheye effect. Make the horse from @Video2\r\nlook up at the fisheye lens. Reference @Video1's speaking motion.\r\nBackground audio references @Video3's sound effects.\r\n<\/code><\/pre>\n<h3>9. Beat-Synced Editing (Music Video Creation)<\/h3>\n<p><strong>Single Image Beat Sync<\/strong>:<\/p>\n<pre><code>The girl in the poster keeps changing outfits. Clothing styles\r\nreference @Image1 and @Image2. 
She holds the bag from @Image3.\r\nVideo rhythm references @Video1.\r\n<\/code><\/pre>\n<p><strong>Multiple Image Sequence<\/strong>:<\/p>\n<pre><code>Images @Image1 through @Image7 cut to the keyframe positions\r\nand overall rhythm of @Video1. Characters in frame are more\r\ndynamic. Overall style is more dreamlike. Strong visual impact.\r\nAdjust reference image framing as needed for music and visual\r\nflow. Add lighting changes between shots.\r\n<\/code><\/pre>\n<p><strong>Result<\/strong>: Professional music video with cuts hitting beats, dynamic lighting changes, dreamlike visuals, strong impact\u2014all automated from references<\/p>\n<h3>10. One-Take Continuity (Long Shots)<\/h3>\n<p><strong>The Challenge<\/strong>: Maintaining visual consistency and narrative flow in single unbroken shots<\/p>\n<p><strong>Seedance 2.0 Solution<\/strong>: Generates long tracking shots with perfect continuity<\/p>\n<p><strong>Simple Example<\/strong>:<\/p>\n<pre><code>@Image1 through @Image5, one continuous tracking shot following\r\na runner up stairs, through corridors, onto the roof, ending\r\nwith an overhead view of the city.\r\n<\/code><\/pre>\n<p><strong>Complex Spy Thriller Example<\/strong>:<\/p>\n<pre><code>Spy thriller style. @Image1 as first frame. Front-facing tracking\r\nshot of woman in red coat walking forward. Full shot following\r\nher. Pedestrians repeatedly block the frame. She reaches a corner,\r\nreference @Image2's corner architecture. Fixed shot as woman\r\nexits frame, disappears around corner. A masked girl lurks at\r\nthe corner watching maliciously, mask girl appearance references\r\n@Image3 (appearance only, she stands at the corner). Camera pans\r\nforward toward woman in red. She enters a mansion and disappears.\r\nMansion references @Image4. No cuts. 
One continuous take.\r\n<\/code><\/pre>\n<p><strong>Result<\/strong>: Cinematic one-take with multiple characters, location changes, camera movements, all seamlessly connected<\/p>\n<h2>Part III: How to Use Seedance 2.0 (Step-by-Step)<\/h2>\n<h3>Entry Point Selection<\/h3>\n<p><strong>First\/Last Frame Mode<\/strong>:<\/p>\n<ul>\n<li><strong>Use When<\/strong>: Simple projects needing starting image + text prompt<\/li>\n<li><strong>Process<\/strong>: Upload one image, write prompt describing desired action<\/li>\n<li><strong>Best For<\/strong>: Quick generations, straightforward animations<\/li>\n<\/ul>\n<p><strong>Universal Reference Mode<\/strong>:<\/p>\n<ul>\n<li><strong>Use When<\/strong>: Complex multimodal projects<\/li>\n<li><strong>Process<\/strong>: Upload multiple images\/videos\/audio, use @ syntax<\/li>\n<li><strong>Best For<\/strong>: Professional productions, template replication, advanced control<\/li>\n<\/ul>\n<h3>The @ Mention Workflow<\/h3>\n<p><strong>Step 1: Upload Your Assets<\/strong><\/p>\n<ul>\n<li>Drag and drop images, videos, audio files<\/li>\n<li>Verify file names\/numbers for @ referencing<\/li>\n<li>Maximum 12 files total per generation<\/li>\n<\/ul>\n<p><strong>Step 2: Write @ Reference Instructions<\/strong><\/p>\n<p><strong>Basic Pattern<\/strong>:<\/p>\n<pre><code>@[FileType][Number] [purpose\/instruction]\r\n<\/code><\/pre>\n<p><strong>Common Patterns<\/strong>:<\/p>\n<table>\n<thead>\n<tr>\n<th>Use Case<\/th>\n<th>Prompt Pattern<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Set first frame<\/strong><\/td>\n<td><code>@Image1 as the first frame<\/code><\/td>\n<\/tr>\n<tr>\n<td><strong>Reference motion<\/strong><\/td>\n<td><code>Reference @Video1 for the fighting choreography<\/code><\/td>\n<\/tr>\n<tr>\n<td><strong>Copy camera work<\/strong><\/td>\n<td><code>Follow @Video1's camera movements and transitions<\/code><\/td>\n<\/tr>\n<tr>\n<td><strong>Add music\/rhythm<\/strong><\/td>\n<td><code>Use @Audio1 for the background 
music<\/code><\/td>\n<\/tr>\n<tr>\n<td><strong>Extend video<\/strong><\/td>\n<td><code>Extend @Video1 by 5 seconds<\/code><\/td>\n<\/tr>\n<tr>\n<td><strong>Replace character<\/strong><\/td>\n<td><code>Replace the woman in @Video1 with @Image1<\/code><\/td>\n<\/tr>\n<tr>\n<td><strong>Apply style<\/strong><\/td>\n<td><code>Match @Image2's color palette and mood<\/code><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><strong>Step 3: Set Output Parameters<\/strong><\/p>\n<ul>\n<li><strong>Duration<\/strong>: 4-15 seconds (slider or dropdown)<\/li>\n<li><strong>Resolution<\/strong>: 720p, 1080p, 2K<\/li>\n<li><strong>Aspect Ratio<\/strong>: 16:9, 1:1, 9:16, or custom<\/li>\n<li><strong>Enhancement<\/strong>: Enable prompt enhancement if needed<\/li>\n<\/ul>\n<p><strong>Step 4: Generate and Review<\/strong><\/p>\n<ul>\n<li>Click &#8220;Generate&#8221; button<\/li>\n<li>Wait 30-120 seconds (depending on complexity)<\/li>\n<li>Review output video with sound<\/li>\n<li>Regenerate with adjusted prompt if needed<\/li>\n<\/ul>\n<h3>Platform-Specific Access<\/h3>\n<p><strong>On WaveSpeedAI<\/strong>:<\/p>\n<ol>\n<li>Visit wavespeed.ai<\/li>\n<li>Navigate to Models \u2192 Seedance 2.0<\/li>\n<li>Upload assets in Universal Reference mode<\/li>\n<li>Write @ reference prompts<\/li>\n<li>Configure settings and generate<\/li>\n<\/ol>\n<p><strong>On ImagineArt<\/strong>:<\/p>\n<ol>\n<li>Visit imagine.art\/video<\/li>\n<li>Select Seedance 2.0 model<\/li>\n<li>Choose text-to-video or image-to-video mode<\/li>\n<li>Upload assets and write prompts<\/li>\n<li>Select resolution and aspect ratio<\/li>\n<li>Generate and export<\/li>\n<\/ol>\n<h2>Part IV: Creative Applications<\/h2>\n<h3>Advertising and E-Commerce<\/h3>\n<p><strong>Product Demonstrations<\/strong>:<\/p>\n<ul>\n<li>Upload product images as @Image1<\/li>\n<li>Reference professional ad video for style<\/li>\n<li>Add synchronized narration via @Audio1<\/li>\n<li>Generate lifestyle shots automatically<\/li>\n<\/ul>\n<p><strong>Brand 
Storytelling<\/strong>:<\/p>\n<ul>\n<li>Upload brand assets (logos, colors, environments)<\/li>\n<li>Reference creative templates from successful campaigns<\/li>\n<li>Maintain brand consistency across all frames<\/li>\n<li>Generate multi-scene narratives<\/li>\n<\/ul>\n<p><strong>Marketing Content<\/strong>:<\/p>\n<ul>\n<li>Create platform-optimized videos (16:9, 1:1, 9:16)<\/li>\n<li>Beat-synced edits for social media<\/li>\n<li>Product reveals with cinematic camera work<\/li>\n<li>Call-to-action endings<\/li>\n<\/ul>\n<h3>Content Localization<\/h3>\n<p><strong>Multi-Language Adaptations<\/strong>:<\/p>\n<ul>\n<li>Reference original video for motion and timing<\/li>\n<li>Generate new lip-synced dialogue in target language<\/li>\n<li>Maintain visual consistency while changing audio<\/li>\n<li>Export multiple language versions from single template<\/li>\n<\/ul>\n<p><strong>Cultural Adaptation<\/strong>:<\/p>\n<ul>\n<li>Replace characters while keeping narrative<\/li>\n<li>Modify environmental elements for local relevance<\/li>\n<li>Adjust visual style for regional preferences<\/li>\n<\/ul>\n<h3>Storyboard to Video<\/h3>\n<p><strong>Animation Workflow<\/strong>:<\/p>\n<ul>\n<li>Upload storyboard panels as @Image1, @Image2, @Image3&#8230;<\/li>\n<li>Describe motion between panels in prompt<\/li>\n<li>Reference timing from animatic video if available<\/li>\n<li>Generate animated sequence matching boards<\/li>\n<\/ul>\n<p><strong>Pitching and Previz<\/strong>:<\/p>\n<ul>\n<li>Convert static concepts to moving previews<\/li>\n<li>Test camera angles and editing before production<\/li>\n<li>Client presentations with realistic motion<\/li>\n<li>Budget estimates based on generated complexity<\/li>\n<\/ul>\n<h3>Template-Based Creation<\/h3>\n<p><strong>Style Transfer Process<\/strong>:<\/p>\n<ol>\n<li>Find video style you admire<\/li>\n<li>Upload as @Video1 reference<\/li>\n<li>Upload your characters\/products as images<\/li>\n<li>Prompt: &#8220;Create video with @MyCharacter 
in style of @Video1&#8221;<\/li>\n<li>Generate content matching template aesthetics<\/li>\n<\/ol>\n<p><strong>Franchise Consistency<\/strong>:<\/p>\n<ul>\n<li>Maintain visual language across series<\/li>\n<li>Reference previous episodes for style lock<\/li>\n<li>Character consistency throughout seasons<\/li>\n<li>Brand identity preservation<\/li>\n<\/ul>\n<h3>Music Video Production<\/h3>\n<p><strong>Beat-Sync Workflow<\/strong>:<\/p>\n<ul>\n<li>Upload music track as @Audio1<\/li>\n<li>Upload visual concepts as images<\/li>\n<li>Reference rhythm from existing music video<\/li>\n<li>Prompt: &#8220;Cut images to @Audio1 beats, reference @Video1 pacing&#8221;<\/li>\n<\/ul>\n<p><strong>Performance Videos<\/strong>:<\/p>\n<ul>\n<li>Upload artist images<\/li>\n<li>Reference choreography from dance videos<\/li>\n<li>Sync lip movements to lyrics<\/li>\n<li>Generate dynamic camera movements<\/li>\n<\/ul>\n<h3>Cinematic Sequences<\/h3>\n<p><strong>Action Scenes<\/strong>:<\/p>\n<ul>\n<li>Reference stunt choreography from @Video1<\/li>\n<li>Apply to your characters from images<\/li>\n<li>Add Hitchcock zooms and orbit shots<\/li>\n<li>One-take continuous action<\/li>\n<\/ul>\n<p><strong>Dramatic Moments<\/strong>:<\/p>\n<ul>\n<li>Close-up character expressions<\/li>\n<li>Tracking shots through environments<\/li>\n<li>Slow-motion effects<\/li>\n<li>Emotional arc visualization<\/li>\n<\/ul>\n<h2>Part V: Best Practices and Pro Tips<\/h2>\n<h3>Maximizing Quality<\/h3>\n<p><strong>1. Be Explicit About References<\/strong>:<\/p>\n<p><strong>\u274c Weak<\/strong>: &#8220;Use the video&#8221;<\/p>\n<p><strong>\u2705 Strong<\/strong>: &#8220;Reference @Video1's camera movement and lighting, but keep @Image1's character design&#8221;<\/p>\n<p><strong>2. 
Prioritize Your 12-File Limit<\/strong>:<\/p>\n<ul>\n<li>Choose assets with greatest impact on final output<\/li>\n<li>One excellent reference video &gt; three mediocre images<\/li>\n<li>Audio crucial for rhythm\u2014don't skip if doing music sync<\/li>\n<\/ul>\n<p><strong>3. Double-Check @ Mentions<\/strong>:<\/p>\n<ul>\n<li>With multiple files, easy to confuse @Image1 vs @Image2<\/li>\n<li>Write list of files and purposes before prompting<\/li>\n<li>Verify each @ reference in prompt matches intended file<\/li>\n<\/ul>\n<p><strong>4. Specify Edit vs. Reference<\/strong>:<\/p>\n<p><strong>\u274c Ambiguous<\/strong>: &#8220;Use @Video1&#8221;<\/p>\n<p><strong>\u2705 Clear Edit<\/strong>: &#8220;Extend @Video1 by 5 seconds&#8221;<\/p>\n<p><strong>\u2705 Clear Reference<\/strong>: &#8220;Reference @Video1's camera work for new scene with @Image1 character&#8221;<\/p>\n<p><strong>5. Align Duration Settings<\/strong>:<\/p>\n<ul>\n<li>Extending 10s video by 5s \u2192 set generation to 5s duration<\/li>\n<li>Creating new video \u2192 choose 4-15s based on content needs<\/li>\n<li>Longer \u2260 better\u2014match duration to narrative requirements<\/li>\n<\/ul>\n<p><strong>6. Use Natural Language<\/strong>:<\/p>\n<ul>\n<li>Model understands filmmaker terminology<\/li>\n<li>&#8220;Hitchcock zoom when startled&#8221; works perfectly<\/li>\n<li>&#8220;Dolly tracking shot following the character&#8221; is clear<\/li>\n<li>&#8220;Orbit shot around the subject&#8221; interpreted correctly<\/li>\n<\/ul>\n<p><strong>7. 
Test Iteratively<\/strong>:<\/p>\n<ul>\n<li>Start simple with one reference type<\/li>\n<li>Add complexity gradually<\/li>\n<li>Regenerate with refined prompts<\/li>\n<li>Save successful prompt patterns<\/li>\n<\/ul>\n<h3>Common Pitfalls to Avoid<\/h3>\n<p><strong>\u274c Too Many Competing References<\/strong>:<\/p>\n<pre><code>Reference @Video1's motion, @Video2's camera, @Video3's lighting,\r\n@Image1's style, @Image2's colors, @Image3's mood...\r\n<\/code><\/pre>\n<p><strong>Result<\/strong>: Confused output pulling from too many sources<\/p>\n<p><strong>\u2705 Focused References<\/strong>:<\/p>\n<pre><code>Reference @Video1 for camera and motion. Apply @Image1's color\r\npalette and @Image2's character design.\r\n<\/code><\/pre>\n<p><strong>\u274c Vague Instructions<\/strong>:<\/p>\n<pre><code>Make it look cool with @Image1\r\n<\/code><\/pre>\n<p><strong>\u2705 Specific Direction<\/strong>:<\/p>\n<pre><code>@Image1 as first frame. Character performs backflip, landing\r\nin hero pose. Slow-motion on apex. 
Dramatic lighting from below.\r\n<\/code><\/pre>\n<p><strong>\u274c File Overload Without Purpose<\/strong>:<\/p>\n<ul>\n<li>Uploading 12 files just because you can<\/li>\n<li>Including redundant references<\/li>\n<li>Assets that don't contribute to vision<\/li>\n<\/ul>\n<p><strong>\u2705 Strategic Selection<\/strong>:<\/p>\n<ul>\n<li>2-4 carefully chosen high-impact assets<\/li>\n<li>Each file serving clear purpose<\/li>\n<li>Quality over quantity<\/li>\n<\/ul>\n<h3>Troubleshooting<\/h3>\n<p><strong>Issue: Generated video doesn't match reference<\/strong><\/p>\n<p><strong>Solutions<\/strong>:<\/p>\n<ul>\n<li>Make @ instructions more explicit<\/li>\n<li>Use stronger directive language (&#8220;exactly replicate&#8221;)<\/li>\n<li>Simplify prompt to isolate which reference isn't working<\/li>\n<li>Try different reference video if current one too complex<\/li>\n<\/ul>\n<p><strong>Issue: Character consistency fails<\/strong><\/p>\n<p><strong>Solutions<\/strong>:<\/p>\n<ul>\n<li>Upload higher quality reference images<\/li>\n<li>Specify &#8220;maintain @Image1 character appearance throughout&#8221;<\/li>\n<li>Use close-up reference for facial features<\/li>\n<li>Avoid extreme angles if face preservation critical<\/li>\n<\/ul>\n<p><strong>Issue: Audio sync off<\/strong><\/p>\n<p><strong>Solutions<\/strong>:<\/p>\n<ul>\n<li>Verify audio file duration matches video duration setting<\/li>\n<li>Use clearer dialogue reference if lip-sync needed<\/li>\n<li>Specify &#8220;sync lip movements to @Audio1 dialogue&#8221;<\/li>\n<li>Try shorter audio clips for better precision<\/li>\n<\/ul>\n<p><strong>Issue: Motion too subtle or exaggerated<\/strong><\/p>\n<p><strong>Solutions<\/strong>:<\/p>\n<ul>\n<li>Reference specific video with desired motion intensity<\/li>\n<li>Add descriptors: &#8220;subtle&#8221;, &#8220;dramatic&#8221;, &#8220;explosive&#8221;<\/li>\n<li>Specify speed: &#8220;slow-motion&#8221;, &#8220;fast-paced&#8221;, &#8220;normal speed&#8221;<\/li>\n<li>Provide 
comparison: &#8220;more energetic than @Video1&#8221;<\/li>\n<\/ul>\n<h2>Part VI: Technical Advantages<\/h2>\n<h3>2K Resolution Benefits<\/h3>\n<p><strong>Visual Sharpness<\/strong>:<\/p>\n<ul>\n<li>Every detail visible\u2014textures, patterns, fine print<\/li>\n<li>Professional quality suitable for commercial use<\/li>\n<li>Large screen display without quality loss<\/li>\n<li>Zoom capability maintaining clarity<\/li>\n<\/ul>\n<p><strong>Color Enhancement<\/strong>:<\/p>\n<ul>\n<li>Automatic color grading<\/li>\n<li>Balanced saturation<\/li>\n<li>Natural lighting adjustments<\/li>\n<li>Vivid but realistic palette<\/li>\n<\/ul>\n<p><strong>Texture Preservation<\/strong>:<\/p>\n<ul>\n<li>Fabric weaves visible<\/li>\n<li>Skin pores and details maintained<\/li>\n<li>Material properties distinguishable<\/li>\n<li>Depth and dimension enhanced<\/li>\n<\/ul>\n<h3>30% Speed Increase<\/h3>\n<p><strong>Production Efficiency<\/strong>:<\/p>\n<ul>\n<li>Faster iterations during creative process<\/li>\n<li>Quick A\/B testing of concepts<\/li>\n<li>Rapid client revisions<\/li>\n<li>Same-day project turnaround possible<\/li>\n<\/ul>\n<p><strong>Workflow Integration<\/strong>:<\/p>\n<ul>\n<li>Fits into tight production schedules<\/li>\n<li>Real-time creative direction adjustments<\/li>\n<li>Immediate feedback loops<\/li>\n<li>Batch processing multiple variations<\/li>\n<\/ul>\n<h3>3x Length Extension<\/h3>\n<p><strong>Longer Narratives<\/strong>:<\/p>\n<ul>\n<li>Complete story arcs in single generation<\/li>\n<li>Tutorial and educational content<\/li>\n<li>Product demonstrations with detail<\/li>\n<li>Character development sequences<\/li>\n<\/ul>\n<p><strong>Maintained Quality<\/strong>:<\/p>\n<ul>\n<li>No quality degradation in longer videos<\/li>\n<li>Consistent motion throughout<\/li>\n<li>Stable visual style end-to-end<\/li>\n<li>Professional output regardless of length<\/li>\n<\/ul>\n<h3>Platform Optimization<\/h3>\n<p><strong>Automatic 
Formatting<\/strong>:<\/p>\n<ul>\n<li>Right size for each platform (YouTube, TikTok, Instagram)<\/li>\n<li>Correct aspect ratio without manual cropping<\/li>\n<li>Resolution optimized for platform requirements<\/li>\n<li>Export ready for immediate upload<\/li>\n<\/ul>\n<p><strong>API Integration<\/strong>:<\/p>\n<ul>\n<li>Programmatic access for developers<\/li>\n<li>Batch processing capabilities<\/li>\n<li>Workflow automation potential<\/li>\n<li>Custom pipeline integration<\/li>\n<\/ul>\n<p><strong>Cross-Platform Consistency<\/strong>:<\/p>\n<ul>\n<li>Same visual quality across all formats<\/li>\n<li>Brand consistency maintained<\/li>\n<li>Future-proof for new platforms<\/li>\n<li>No rework needed for distribution<\/li>\n<\/ul>\n<h2>Conclusion: The Future of AI Video Is Multimodal<\/h2>\n<h3>What Seedance 2.0 Achieves<\/h3>\n<p><strong>Filmmaker-Level Control<\/strong>: @ reference system giving explicit direction over every element<\/p>\n<p><strong>Professional Quality<\/strong>: 2K resolution, accurate physics, smooth motion, style consistency<\/p>\n<p><strong>Speed and Scale<\/strong>: 30% faster, 3x longer, without quality compromise<\/p>\n<p><strong>Creative Flexibility<\/strong>: Images + videos + audio + text opening infinite possibilities<\/p>\n<p><strong>Character Consistency<\/strong>: Identity lock solving AI video's biggest previous weakness<\/p>\n<p><strong>Advanced Techniques<\/strong>: Camera replication, template matching, audio sync, beat editing, one-take shots<\/p>\n<h3>Who Benefits Most<\/h3>\n<p><strong>Content Creators<\/strong>: Rapid video production for social media, YouTube, streaming<\/p>\n<p><strong>Marketers<\/strong>: Product demos, brand stories, ad campaigns without expensive production<\/p>\n<p><strong>Filmmakers<\/strong>: Previz, storyboarding, concept testing before physical shoots<\/p>\n<p><strong>Educators<\/strong>: Tutorial videos, explainers, educational content at scale<\/p>\n<p><strong>E-Commerce<\/strong>: Product 
showcases, lifestyle integration, customer testimonials<\/p>\n<p><strong>Agencies<\/strong>: Client pitches, template libraries, multi-platform campaigns<\/p>\n<p><strong>Musicians<\/strong>: Music videos, lyric videos, performance clips<\/p>\n<p><strong>Indie Developers<\/strong>: Game trailers, cinematic sequences, promotional content<\/p>\n<h3>The Competitive Landscape<\/h3>\n<p><strong>Versus Sora 2<\/strong>: Seedance 2.0 offers multimodal input (Sora text-only)<\/p>\n<p><strong>Versus Kling 3.0<\/strong>: @ reference system provides more explicit control<\/p>\n<p><strong>Versus Veo 3.1<\/strong>: Native audio generation and beat-sync capabilities<\/p>\n<p><strong>Versus WAN 2.6<\/strong>: Superior character consistency and motion replication<\/p>\n<p><strong>Versus Runway Aleph<\/strong>: More accessible pricing and faster generation<\/p>\n<h3>Getting Started Today<\/h3>\n<p><strong>Free Trials Available<\/strong>:<\/p>\n<ul>\n<li>WaveSpeedAI: Sign up for free credits<\/li>\n<li>ImagineArt: Free tier with limited generations<\/li>\n<\/ul>\n<p><strong>Learning Curve<\/strong>: Moderate\u2014the @ syntax is intuitive and experiment-friendly<\/p>\n<p><strong>Community Resources<\/strong>:<\/p>\n<ul>\n<li>Tutorial videos<\/li>\n<li>Prompt libraries<\/li>\n<li>Discord communities<\/li>\n<li>Example galleries<\/li>\n<\/ul>\n<p><strong>Best First Projects<\/strong>:<\/p>\n<ul>\n<li>Simple product reveal (1 image + text)<\/li>\n<li>Character animation (3 images showing progression)<\/li>\n<li>Music video (1 audio + 3-5 images)<\/li>\n<li>Camera replication (1 reference video + your character image)<\/li>\n<\/ul>\n<hr \/>\n<p><strong>Ready to Create?<\/strong><\/p>\n<p><strong>Start on WaveSpeedAI<\/strong>: wavespeed.ai \u2192 Models \u2192 Seedance 2.0<\/p>\n<p><strong>Start on ImagineArt<\/strong>: imagine.art\/video \u2192 Select Seedance 2.0<\/p>\n<p><strong>Pro Tip<\/strong>: Begin with Universal Reference Mode and 2-3 carefully chosen assets\u2014you'll achieve better 
results than uploading the maximum 12 files without clear purpose.<\/p>\n<hr \/>\n<p><strong>The Bottom Line<\/strong>: Seedance 2.0's multimodal @ reference system (9 images + 3 videos + 3 audio + text) delivers filmmaker-level control over AI video generation at 2K resolution, 30% faster, 3x longer than predecessors, with groundbreaking character consistency, camera replication, native audio sync, and beat-matched editing\u2014making professional video creation accessible to anyone through natural language instructions on the WaveSpeedAI, ImagineArt, and Topview platforms. The future of video isn't text-to-video\u2014it's <strong>image+video+audio+text-to-cinema<\/strong>.<\/p>\n<p><strong>Stop limiting yourself to text prompts. Start directing with multimodal references.<\/strong><\/p>","protected":false},"excerpt":{"rendered":"<p>The Ultimate Tutorial: Master Image+Video+Audio+Text Input, @ Reference System, Character Consistency, Camera Replication, and Native Audio Generation Seedance 2.0 represents 
[&hellip;]<\/p>","protected":false},"author":11214,"featured_media":136954,"menu_order":0,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"default","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"set","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[468],"tags":[],"class_list":["post-136931","aitools","type-aitools","status-publish","format-standard","has-post-thumbnail","hentry","category-best-post"],"acf":[],"_links":{"self":[{"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/aitools\/136931","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/aitools"}],"about":[{"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/types\/aitools"}],"author":[{"embeddable":true,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/users\/11214"}],"version-history":[{"count":5,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/aitools\/136931\/revisions"}],"predecessor-version":[{"id":140938,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/aitools\/136931\/revisions\/140938"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/media\/136954"}],"wp:attachment":[{"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/media?parent=136931"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/categories?post=136931"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/tags?post=136931"}],"curies":[{"name":"\u0648\u0648\u0631\u062f\u0628\u0631\u064a\u0633","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}