
{"id":137920,"date":"2026-02-13T11:16:10","date_gmt":"2026-02-13T03:16:10","guid":{"rendered":"https:\/\/vertu.com\/?post_type=aitools&#038;p=137920"},"modified":"2026-02-13T11:16:10","modified_gmt":"2026-02-13T03:16:10","slug":"minimax-m2-5-officially-released-comprehensive-benchmarks-comparison","status":"publish","type":"aitools","link":"https:\/\/legacy.vertu.com\/ar\/ai-tools\/minimax-m2-5-officially-released-comprehensive-benchmarks-comparison\/","title":{"rendered":"MiniMax M2.5 Officially Released: Comprehensive Benchmarks, Comparison"},"content":{"rendered":"<h1 data-path-to-node=\"0\"><img fetchpriority=\"high\" decoding=\"async\" class=\"alignnone size-large wp-image-137953\" src=\"https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Minimax-M2.5-Officially-1024x493.webp\" alt=\"\" width=\"1024\" height=\"493\" srcset=\"https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Minimax-M2.5-Officially-1024x493.webp 1024w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Minimax-M2.5-Officially-300x144.webp 300w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Minimax-M2.5-Officially-768x370.webp 768w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Minimax-M2.5-Officially-18x9.webp 18w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Minimax-M2.5-Officially-600x289.webp 600w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Minimax-M2.5-Officially-64x31.webp 64w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Minimax-M2.5-Officially.webp 1080w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/h1>\n<p data-path-to-node=\"1\">This article provides a deep dive into the official release of MiniMax M2.5, analyzing its performance benchmarks against industry giants like GPT-5.2 and Claude 4.6. 
We explore its coding capabilities, multi-turn function calling, and its massive leap over the previous M2.1 iteration.<\/p>\n<h3 data-path-to-node=\"2\">Is MiniMax M2.5 the New State-of-the-Art?<\/h3>\n<p data-path-to-node=\"3\">MiniMax M2.5 is a frontier-class Large Language Model (LLM) that establishes itself as a top-tier competitor in the global AI landscape. Based on the latest verified benchmarks, MiniMax M2.5 delivers performance that rivals and occasionally exceeds speculative &#8220;next-gen&#8221; models such as GPT-5.2 and Claude 4.6. It is particularly dominant in <b data-path-to-node=\"3\" data-index-in-node=\"342\">BFCL (Berkeley Function Calling Leaderboard) multi-turn tasks<\/b> with a score of <b data-path-to-node=\"3\" data-index-in-node=\"420\">76.8<\/b>, and shows elite-level proficiency in <b data-path-to-node=\"3\" data-index-in-node=\"463\">SWE-Bench Verified<\/b> tasks (80.2), making it a powerhouse for software engineering, complex reasoning, and long-context interaction.<\/p>\n<hr data-path-to-node=\"4\" \/>\n<h2 data-path-to-node=\"5\">Introduction to the MiniMax M2.5 Era<\/h2>\n<p data-path-to-node=\"6\">The artificial intelligence industry has reached a new inflection point with the official unveiling of MiniMax M2.5. As the successor to the already capable M2.1, this new model signifies a massive architectural leap. In an era where &#8220;Pro&#8221; and &#8220;Ultra&#8221; models are the standard, MiniMax M2.5 aims to redefine efficiency and accuracy across coding, browsing, and multimodal reasoning.<\/p>\n<p data-path-to-node=\"7\">This report analyzes the data from the latest benchmark comparisons, highlighting how MiniMax M2.5 stacks up against the most powerful models currently available (or projected) in the market.<\/p>\n<hr data-path-to-node=\"8\" \/>\n<h2 data-path-to-node=\"9\">Detailed Performance Comparison: MiniMax M2.5 vs. 
The World<\/h2>\n<p data-path-to-node=\"10\">To understand the utility of MiniMax M2.5, we must look at the quantitative data. The following table compares MiniMax M2.5 against its predecessor and its primary competitors: Claude 4.5\/4.6, Gemini 3 Pro, and GPT-5.2.<\/p>\n<h3 data-path-to-node=\"11\">Benchmark Data Comparison Table<\/h3>\n<table data-path-to-node=\"12\">\n<thead>\n<tr>\n<td><strong>Benchmark Category<\/strong><\/td>\n<td><strong>MiniMax M2.1<\/strong><\/td>\n<td><strong>MiniMax M2.5<\/strong><\/td>\n<td><strong>Claude 4.5<\/strong><\/td>\n<td><strong>Claude 4.6<\/strong><\/td>\n<td><strong>Gemini 3 Pro<\/strong><\/td>\n<td><strong>GPT-5.2<\/strong><\/td>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><span data-path-to-node=\"12,1,0,0\"><b data-path-to-node=\"12,1,0,0\" data-index-in-node=\"0\">SWE-Bench Verified<\/b><\/span><\/td>\n<td><span data-path-to-node=\"12,1,1,0\">74.0<\/span><\/td>\n<td><span data-path-to-node=\"12,1,2,0\"><b data-path-to-node=\"12,1,2,0\" data-index-in-node=\"0\">80.2<\/b><\/span><\/td>\n<td><span data-path-to-node=\"12,1,3,0\">80.9<\/span><\/td>\n<td><span data-path-to-node=\"12,1,4,0\">80.8<\/span><\/td>\n<td><span data-path-to-node=\"12,1,5,0\">78.0<\/span><\/td>\n<td><span data-path-to-node=\"12,1,6,0\">80.0<\/span><\/td>\n<\/tr>\n<tr>\n<td><span data-path-to-node=\"12,2,0,0\"><b data-path-to-node=\"12,2,0,0\" data-index-in-node=\"0\">SWE-Bench Pro<\/b><\/span><\/td>\n<td><span data-path-to-node=\"12,2,1,0\">49.7<\/span><\/td>\n<td><span data-path-to-node=\"12,2,2,0\"><b data-path-to-node=\"12,2,2,0\" data-index-in-node=\"0\">55.4<\/b><\/span><\/td>\n<td><span data-path-to-node=\"12,2,3,0\">56.9<\/span><\/td>\n<td><span data-path-to-node=\"12,2,4,0\">55.4<\/span><\/td>\n<td><span data-path-to-node=\"12,2,5,0\">54.1<\/span><\/td>\n<td><span data-path-to-node=\"12,2,6,0\">55.6<\/span><\/td>\n<\/tr>\n<tr>\n<td><span data-path-to-node=\"12,3,0,0\"><b data-path-to-node=\"12,3,0,0\" 
data-index-in-node=\"0\">Multi-SWE-Bench<\/b><\/span><\/td>\n<td><span data-path-to-node=\"12,3,1,0\">47.2<\/span><\/td>\n<td><span data-path-to-node=\"12,3,2,0\"><b data-path-to-node=\"12,3,2,0\" data-index-in-node=\"0\">51.3<\/b><\/span><\/td>\n<td><span data-path-to-node=\"12,3,3,0\">50.0<\/span><\/td>\n<td><span data-path-to-node=\"12,3,4,0\">50.3<\/span><\/td>\n<td><span data-path-to-node=\"12,3,5,0\">N\/A<\/span><\/td>\n<td><span data-path-to-node=\"12,3,6,0\">42.7<\/span><\/td>\n<\/tr>\n<tr>\n<td><span data-path-to-node=\"12,4,0,0\"><b data-path-to-node=\"12,4,0,0\" data-index-in-node=\"0\">VIBE-Pro (AVG)<\/b><\/span><\/td>\n<td><span data-path-to-node=\"12,4,1,0\">42.4<\/span><\/td>\n<td><span data-path-to-node=\"12,4,2,0\"><b data-path-to-node=\"12,4,2,0\" data-index-in-node=\"0\">54.2<\/b><\/span><\/td>\n<td><span data-path-to-node=\"12,4,3,0\">55.2<\/span><\/td>\n<td><span data-path-to-node=\"12,4,4,0\">55.6<\/span><\/td>\n<td><span data-path-to-node=\"12,4,5,0\">N\/A<\/span><\/td>\n<td><span data-path-to-node=\"12,4,6,0\">36.9<\/span><\/td>\n<\/tr>\n<tr>\n<td><span data-path-to-node=\"12,5,0,0\"><b data-path-to-node=\"12,5,0,0\" data-index-in-node=\"0\">BrowseComp (w\/ctx)<\/b><\/span><\/td>\n<td><span data-path-to-node=\"12,5,1,0\">62.0<\/span><\/td>\n<td><span data-path-to-node=\"12,5,2,0\"><b data-path-to-node=\"12,5,2,0\" data-index-in-node=\"0\">76.3<\/b><\/span><\/td>\n<td><span data-path-to-node=\"12,5,3,0\">67.8<\/span><\/td>\n<td><span data-path-to-node=\"12,5,4,0\">84.0<\/span><\/td>\n<td><span data-path-to-node=\"12,5,5,0\">59.2<\/span><\/td>\n<td><span data-path-to-node=\"12,5,6,0\">65.8<\/span><\/td>\n<\/tr>\n<tr>\n<td><span data-path-to-node=\"12,6,0,0\"><b data-path-to-node=\"12,6,0,0\" data-index-in-node=\"0\">BFCL multi-turn<\/b><\/span><\/td>\n<td><span data-path-to-node=\"12,6,1,0\">37.4<\/span><\/td>\n<td><span data-path-to-node=\"12,6,2,0\"><b data-path-to-node=\"12,6,2,0\" data-index-in-node=\"0\">76.8<\/b><\/span><\/td>\n<td><span 
data-path-to-node=\"12,6,3,0\">68.0<\/span><\/td>\n<td><span data-path-to-node=\"12,6,4,0\">63.3<\/span><\/td>\n<td><span data-path-to-node=\"12,6,5,0\">61.0<\/span><\/td>\n<td><span data-path-to-node=\"12,6,6,0\">N\/A<\/span><\/td>\n<\/tr>\n<tr>\n<td><span data-path-to-node=\"12,7,0,0\"><b data-path-to-node=\"12,7,0,0\" data-index-in-node=\"0\">MEWC<\/b><\/span><\/td>\n<td><span data-path-to-node=\"12,7,1,0\">55.6<\/span><\/td>\n<td><span data-path-to-node=\"12,7,2,0\"><b data-path-to-node=\"12,7,2,0\" data-index-in-node=\"0\">74.4<\/b><\/span><\/td>\n<td><span data-path-to-node=\"12,7,3,0\">82.1<\/span><\/td>\n<td><span data-path-to-node=\"12,7,4,0\">89.8<\/span><\/td>\n<td><span data-path-to-node=\"12,7,5,0\">78.7<\/span><\/td>\n<td><span data-path-to-node=\"12,7,6,0\">41.3<\/span><\/td>\n<\/tr>\n<tr>\n<td><span data-path-to-node=\"12,8,0,0\"><b data-path-to-node=\"12,8,0,0\" data-index-in-node=\"0\">GDPval-MM<\/b><\/span><\/td>\n<td><span data-path-to-node=\"12,8,1,0\">24.6<\/span><\/td>\n<td><span data-path-to-node=\"12,8,2,0\"><b data-path-to-node=\"12,8,2,0\" data-index-in-node=\"0\">59.0<\/b><\/span><\/td>\n<td><span data-path-to-node=\"12,8,3,0\">61.1<\/span><\/td>\n<td><span data-path-to-node=\"12,8,4,0\">73.5<\/span><\/td>\n<td><span data-path-to-node=\"12,8,5,0\">28.1<\/span><\/td>\n<td><span data-path-to-node=\"12,8,6,0\">54.5<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<hr data-path-to-node=\"13\" \/>\n<h2 data-path-to-node=\"14\">Key Breakthroughs in MiniMax M2.5 Performance<\/h2>\n<h3 data-path-to-node=\"15\">1. 
Exceptional Software Engineering (SWE-Bench)<\/h3>\n<p data-path-to-node=\"16\">The SWE-Bench suite is designed to test an AI\u2019s ability to solve real-world GitHub issues.<\/p>\n<ul data-path-to-node=\"17\">\n<li>\n<p data-path-to-node=\"17,0,0\"><b data-path-to-node=\"17,0,0\" data-index-in-node=\"0\">Verified Performance:<\/b> MiniMax M2.5 achieved a score of <b data-path-to-node=\"17,0,0\" data-index-in-node=\"47\">80.2<\/b>, placing it neck-and-neck with Claude 4.6 (80.8) and GPT-5.2 (80.0). This indicates that the model is fully capable of autonomous coding, debugging, and software architecture planning.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"17,1,0\"><b data-path-to-node=\"17,1,0\" data-index-in-node=\"0\">Pro & Multi-Bench:<\/b> In more rigorous tests like Multi-SWE-Bench, M2.5 actually leads the pack with a score of <b data-path-to-node=\"17,1,0\" data-index-in-node=\"109\">51.3<\/b>, suggesting superior consistency across complex, multi-file software tasks compared to Claude 4.6 (50.3).<\/p>\n<\/li>\n<\/ul>\n<h3 data-path-to-node=\"18\">2. Industry-Leading Function Calling (BFCL)<\/h3>\n<p data-path-to-node=\"19\">Perhaps the most significant victory for MiniMax M2.5 is in the <b data-path-to-node=\"19\" data-index-in-node=\"64\">BFCL multi-turn benchmark<\/b>.<\/p>\n<ul data-path-to-node=\"20\">\n<li>\n<p data-path-to-node=\"20,0,0\">MiniMax M2.5 scored a staggering <b data-path-to-node=\"20,0,0\" data-index-in-node=\"33\">76.8<\/b>.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"20,1,0\">In comparison, Claude 4.5 scored 68.0, and Gemini 3 Pro scored 61.0.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"20,2,0\">This suggests that MiniMax M2.5 is among the most reliable models currently available for developers building agentic workflows that require multiple rounds of tool use and function calling without losing track of the user\u2019s intent.<\/p>\n<\/li>\n<\/ul>\n<h3 data-path-to-node=\"21\">3. 
Massive Generational Leap over M2.1<\/h3>\n<p data-path-to-node=\"22\">The upgrade from MiniMax M2.1 to M2.5 is not incremental; it is transformative.<\/p>\n<ul data-path-to-node=\"23\">\n<li>\n<p data-path-to-node=\"23,0,0\">In <b data-path-to-node=\"23,0,0\" data-index-in-node=\"3\">GDPval-MM (Multimodal Grounded Reasoning)<\/b>, the score jumped from <b data-path-to-node=\"23,0,0\" data-index-in-node=\"68\">24.6 to 59.0<\/b>.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"23,1,0\">In <b data-path-to-node=\"23,1,0\" data-index-in-node=\"3\">BFCL multi-turn<\/b>, the performance more than doubled (from <b data-path-to-node=\"23,1,0\" data-index-in-node=\"60\">37.4 to 76.8<\/b>).<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"23,2,0\">This indicates a fundamental change in the training methodology or architecture, likely involving significantly better synthetic data and reinforcement learning from human feedback (RLHF).<\/p>\n<\/li>\n<\/ul>\n<hr data-path-to-node=\"24\" \/>\n<h2 data-path-to-node=\"25\">Evaluating Technical Capabilities for Developers<\/h2>\n<p data-path-to-node=\"26\">For developers and enterprises deciding whether to integrate MiniMax M2.5, several technical factors stand out:<\/p>\n<h3 data-path-to-node=\"27\">Enhanced Browsing and Context Handling (BrowseComp)<\/h3>\n<p data-path-to-node=\"28\">The &#8220;BrowseComp (w\/ctx)&#8221; benchmark measures how well the model can navigate the web and process retrieved information within its context window. MiniMax M2.5 scored <b data-path-to-node=\"28\" data-index-in-node=\"165\">76.3<\/b>, a massive improvement over the 62.0 of its predecessor. 
While Claude 4.6 remains the leader here (84.0), M2.5 outperforms GPT-5.2 (65.8) and Gemini 3 Pro (59.2), making it a superior choice for RAG (Retrieval-Augmented Generation) applications.<\/p>\n<h3 data-path-to-node=\"29\">Multimodal Reasoning and VIBE-Pro<\/h3>\n<p data-path-to-node=\"30\">The VIBE-Pro benchmark focuses on the &#8220;vibe&#8221; or human-aligned quality of the model\u2019s outputs across various tasks. MiniMax M2.5\u2019s score of <b data-path-to-node=\"30\" data-index-in-node=\"139\">54.2<\/b> puts it in the elite tier, showing that the model is not just a &#8220;math and code&#8221; engine but is also highly capable of nuanced, high-quality content generation that rivals the best models from Anthropic.<\/p>\n<h3 data-path-to-node=\"31\">Why MEWC Scores Matter<\/h3>\n<p data-path-to-node=\"32\">MEWC (Multi-turn Evaluation of Web Capabilities) shows MiniMax M2.5 at <b data-path-to-node=\"32\" data-index-in-node=\"71\">74.4<\/b>. While it trails the Claude 4 series, it crushes the current GPT-5.2 projection (41.3). This suggests that for complex web-agent tasks\u2014such as booking flights, managing e-commerce platforms, or navigating complex UI\u2014MiniMax M2.5 is a top-three global contender.<\/p>\n<hr data-path-to-node=\"33\" \/>\n<h2 data-path-to-node=\"34\">How to Leverage MiniMax M2.5 in Your Workflow<\/h2>\n<ol start=\"1\" data-path-to-node=\"35\">\n<li>\n<p data-path-to-node=\"35,0,0\"><b data-path-to-node=\"35,0,0\" data-index-in-node=\"0\">Autonomous Coding Agents:<\/b> Use M2.5 for complex refactoring. 
Its high SWE-Bench scores mean it can handle large codebases with fewer errors.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"35,1,0\"><b data-path-to-node=\"35,1,0\" data-index-in-node=\"0\">Complex API Orchestration:<\/b> Given its BFCL dominance, M2.5 is the ideal backbone for AI agents that need to call dozens of different APIs in a specific sequence.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"35,2,0\"><b data-path-to-node=\"35,2,0\" data-index-in-node=\"0\">Data Analysis & Reasoned Grounding:<\/b> Use the GDPval-MM capabilities to process multimodal data where visual context is necessary for logical deduction.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"35,3,0\"><b data-path-to-node=\"35,3,0\" data-index-in-node=\"0\">Cost-Effective High Performance:<\/b> Historically, MiniMax has offered competitive pricing. M2.5 provides &#8220;Opus-level&#8221; performance, likely at a fraction of the token cost.<\/p>\n<\/li>\n<\/ol>\n<hr data-path-to-node=\"36\" \/>\n<h2 data-path-to-node=\"37\">E-E-A-T Analysis: Why These Benchmarks are Trustworthy<\/h2>\n<p data-path-to-node=\"38\">Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) are critical when evaluating AI claims. 
The data for MiniMax M2.5 comes from standardized, third-party benchmarks including:<\/p>\n<ul data-path-to-node=\"39\">\n<li>\n<p data-path-to-node=\"39,0,0\"><b data-path-to-node=\"39,0,0\" data-index-in-node=\"0\">Berkeley Function Calling Leaderboard:<\/b> A gold standard for tool-use evaluation.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"39,1,0\"><b data-path-to-node=\"39,1,0\" data-index-in-node=\"0\">SWE-Bench:<\/b> A rigorous, objective test of software engineering capability.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"39,2,0\"><b data-path-to-node=\"39,2,0\" data-index-in-node=\"0\">VIBE-Pro:<\/b> A community-driven evaluation of model alignment and output quality.<\/p>\n<\/li>\n<\/ul>\n<p data-path-to-node=\"40\">The transparency in these results allows developers to make data-driven decisions rather than relying on marketing hype.<\/p>\n<hr data-path-to-node=\"41\" \/>\n<h2 data-path-to-node=\"42\">Summary of Findings<\/h2>\n<p data-path-to-node=\"43\">MiniMax M2.5 is a &#8220;giant killer&#8221; in the LLM space. While the industry has been focused on the rivalry between OpenAI and Anthropic, MiniMax has quietly built a model that matches them in coding and beats them in complex, multi-turn tool interactions. For users seeking a model that balances web browsing, coding, and grounded reasoning, M2.5 is an essential addition to the AI toolkit.<\/p>\n<hr data-path-to-node=\"44\" \/>\n<h2 data-path-to-node=\"45\">Frequently Asked Questions (FAQ)<\/h2>\n<h3 data-path-to-node=\"46\">What is MiniMax M2.5?<\/h3>\n<p data-path-to-node=\"47\">MiniMax M2.5 is the latest flagship Large Language Model from MiniMax, designed for high-performance coding, function calling, and multimodal reasoning. It represents a significant upgrade over the previous M2.1 version.<\/p>\n<h3 data-path-to-node=\"48\">How does MiniMax M2.5 compare to GPT-5.2?<\/h3>\n<p data-path-to-node=\"49\">In the SWE-Bench Verified benchmark, MiniMax M2.5 (80.2) is virtually equal to GPT-5.2 (80.0). 
However, MiniMax M2.5 significantly outperforms GPT-5.2 in browsing-related tasks (BrowseComp) and web-based multi-turn interactions (MEWC).<\/p>\n<h3 data-path-to-node=\"50\">Is MiniMax M2.5 good for coding?<\/h3>\n<p data-path-to-node=\"51\">Yes, it is excellent. With an 80.2 score on SWE-Bench Verified and a 55.4 on SWE-Bench Pro, it is currently one of the highest-rated models for software engineering tasks in the world.<\/p>\n<h3 data-path-to-node=\"52\">What is the strongest feature of MiniMax M2.5?<\/h3>\n<p data-path-to-node=\"53\">Its strongest feature is <b data-path-to-node=\"53\" data-index-in-node=\"25\">Multi-turn Function Calling (BFCL)<\/b>. With a score of 76.8, it outperforms Claude 4.5, Claude 4.6, and Gemini 3 Pro, making it the best model for complex AI agent workflows.<\/p>\n<h3 data-path-to-node=\"54\">Where can I find the official benchmarks?<\/h3>\n<p data-path-to-node=\"55\">The official benchmarks are released by MiniMax and have been integrated into various leaderboards such as the Berkeley Function Calling Leaderboard and SWE-Bench.<\/p>","protected":false},"excerpt":{"rendered":"<p>This article provides a deep dive into the official release of MiniMax M2.5, analyzing its performance benchmarks against industry giants 
[&hellip;]<\/p>","protected":false},"author":11214,"featured_media":137953,"menu_order":0,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"default","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"set","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[468],"tags":[],"class_list":["post-137920","aitools","type-aitools","status-publish","format-standard","has-post-thumbnail","hentry","category-best-post"],"acf":[],"_links":{"self":[{"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/aitools\/137920","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/aitools"}],"about":[{"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/types\/aitools"}],"author":[{"embeddable":true,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/users\/11214"}],"version-history":[{"count":2,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/aitools\/137920\/revisions"}],"predecessor-version":[{"id":137955,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/aitools\/137920\/revisions\/137955"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/media\/137953"}],"wp:attachment":[{"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/media?parent=137920"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/categories?post=137920"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/tags?post=137920"}],"curies":[{"name":"\u0648\u0648\u0631\u062f\u0628\u0631\u064a\u0633","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}