
{"id":136945,"date":"2026-02-09T11:20:59","date_gmt":"2026-02-09T03:20:59","guid":{"rendered":"https:\/\/vertu.com\/?post_type=aitools&#038;p=136945"},"modified":"2026-02-09T11:20:59","modified_gmt":"2026-02-09T03:20:59","slug":"claude-opus-4-6-anthropics-smartest-ai-model-with-1m-context-agent-teams-and-state-of-the-art-performance","status":"publish","type":"aitools","link":"https:\/\/legacy.vertu.com\/ar\/ai-tools\/claude-opus-4-6-anthropics-smartest-ai-model-with-1m-context-agent-teams-and-state-of-the-art-performance\/","title":{"rendered":"Claude Opus 4.6: Anthropic&#8217;s Smartest AI Model With 1M Context, Agent Teams, and State-of-the-Art Performance"},"content":{"rendered":"<h1><img fetchpriority=\"high\" decoding=\"async\" class=\"alignnone size-full wp-image-136964\" src=\"https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Claude-Opus-4.6-vs-GPT-5.2.png\" alt=\"Claude Opus 4.6 vs GPT-5.2\" width=\"828\" height=\"474\" srcset=\"https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Claude-Opus-4.6-vs-GPT-5.2.png 828w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Claude-Opus-4.6-vs-GPT-5.2-300x172.png 300w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Claude-Opus-4.6-vs-GPT-5.2-768x440.png 768w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Claude-Opus-4.6-vs-GPT-5.2-18x10.png 18w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Claude-Opus-4.6-vs-GPT-5.2-600x343.png 600w, https:\/\/vertu-website-oss.vertu.com\/2026\/02\/Claude-Opus-4.6-vs-GPT-5.2-64x37.png 64w\" sizes=\"(max-width: 828px) 100vw, 828px\" \/><\/h1>\n<h2>Everything You Need to Know: Benchmark Dominance (144 Elo Over GPT-5.2, #1 on Terminal-Bench), New Features (Adaptive Thinking, Effort Controls, Context Compaction), Safety Leadership, and Partner Testimonials<\/h2>\n<p><strong>Claude Opus 4.6<\/strong> represents Anthropic's most significant upgrade to its smartest model, delivering <strong>state-of-the-art performance across agentic coding, knowledge work, and expert reasoning<\/strong> while introducing 
revolutionary productivity features. <strong>The Benchmark Dominance<\/strong>: Outperforms GPT-5.2 by <strong>144 Elo points on GDPval-AA<\/strong> (economically valuable knowledge work), achieves <strong>highest industry score on Terminal-Bench 2.0<\/strong> (agentic coding), leads on <strong>Humanity's Last Exam<\/strong> (complex multidisciplinary reasoning), and scores best on <strong>BrowseComp<\/strong> (hard-to-find information search)\u2014consistently beating all frontier models. <strong>The Coding Excellence<\/strong>: Improved planning, longer agentic task sustainability, reliable operation in larger codebases, better code review\/debugging to catch own mistakes, <strong>90.2% on BigLaw Bench<\/strong> (Harvey legal), <strong>81.42% on SWE-bench Verified<\/strong> with modifications, autonomous <strong>13-issue closure + 12-assignment delegation<\/strong> in single day (Rakuten). <strong>The Context Breakthrough<\/strong>: First Opus-class model with <strong>1M token context window (beta)<\/strong>, <strong>76% on MRCR v2 8-needle<\/strong> versus Sonnet 4.5's 18.5%\u2014qualitative shift eliminating &#8220;context rot,&#8221; <strong>128k output tokens<\/strong> for large completions, superior long-context retrieval and reasoning. <strong>The New Features<\/strong>: <strong>Adaptive thinking<\/strong> (model decides when extended reasoning helps), <strong>effort controls<\/strong> (low\/medium\/high\/max for intelligence-speed-cost balance), <strong>context compaction<\/strong> (auto-summarize older context to continue long tasks), <strong>agent teams<\/strong> in Claude Code (parallel coordination), <strong>Claude in PowerPoint<\/strong> (research preview). <strong>The Safety Excellence<\/strong>: Lowest misaligned behavior rate among frontier models, comprehensive evaluations (most extensive ever), new cybersecurity probes, lowest over-refusal rate of recent Claude models. 
<strong>The Partner Validation<\/strong>: 20 major companies (GitHub, Replit, Cursor, Notion, Asana, Thomson Reuters, etc.) reporting &#8220;huge leap,&#8221; &#8220;clear step up,&#8221; &#8220;biggest leap in months,&#8221; &#8220;almost unbelievable performance jump.&#8221; <strong>Availability<\/strong>: Now on claude.ai, API (<code>claude-opus-4-6<\/code>), major cloud platforms; <strong>pricing unchanged<\/strong> at $5\/$25 per million tokens.<\/p>\n<h2>Part I: The Performance Revolution<\/h2>\n<h3>Benchmark Dominance Across Categories<\/h3>\n<p><strong>Knowledge Work (GDPval-AA)<\/strong>:<\/p>\n<p><strong>The Evaluation<\/strong>: Real-world economically valuable tasks across finance, legal, and professional domains<\/p>\n<p><strong>Claude Opus 4.6 Performance<\/strong>: Industry-leading<\/p>\n<p><strong>Versus GPT-5.2<\/strong>: <strong>+144 Elo points<\/strong> (translates to winning ~70% of comparisons)<\/p>\n<p><strong>Versus Opus 4.5<\/strong>: <strong>+190 Elo points<\/strong><\/p>\n<p><strong>Significance<\/strong>: Largest performance gap in knowledge work category<\/p>\n<p><strong>Independent Verification<\/strong>: Run by Artificial Analysis (see methodology)<\/p>\n<p><strong>Agentic Coding (Terminal-Bench 2.0)<\/strong>:<\/p>\n<p><strong>The Evaluation<\/strong>: Real-world system tasks and coding in terminal environments<\/p>\n<p><strong>Claude Opus 4.6 Score<\/strong>: <strong>Highest in industry<\/strong><\/p>\n<p><strong>Framework<\/strong>: Terminus-2 harness<\/p>\n<p><strong>Resource Allocation<\/strong>: 1\u00d7 guaranteed \/ 3\u00d7 ceiling<\/p>\n<p><strong>Samples<\/strong>: 5-15 per task across staggered batches<\/p>\n<p><strong>What It Tests<\/strong>: Multi-file editing, system configuration, debugging, tool usage<\/p>\n<p><strong>Expert Reasoning (Humanity's Last Exam)<\/strong>:<\/p>\n<p><strong>The Evaluation<\/strong>: Complex multidisciplinary reasoning test designed to challenge frontier models<\/p>\n<p><strong>Claude Opus 
4.6<\/strong>: <strong>Leads all frontier models<\/strong><\/p>\n<p><strong>Configuration<\/strong>: With tools (web search, web fetch, code execution, programmatic tool calling)<\/p>\n<p><strong>Advanced Settings<\/strong>: Context compaction at 50k\u21923M tokens, max reasoning effort, adaptive thinking<\/p>\n<p><strong>Domain Decontamination<\/strong>: Blocklist applied to ensure clean results<\/p>\n<p><strong>Agentic Search (BrowseComp)<\/strong>:<\/p>\n<p><strong>The Evaluation<\/strong>: Locating hard-to-find information online through multi-step search<\/p>\n<p><strong>Claude Opus 4.6<\/strong>: <strong>Best performance in industry<\/strong><\/p>\n<p><strong>With Multi-Agent Harness<\/strong>: <strong>86.8% score<\/strong><\/p>\n<p><strong>Configuration<\/strong>: Web search + fetch, context compaction 50k\u219210M tokens, max effort, no thinking mode<\/p>\n<p><strong>What It Measures<\/strong>: Information retrieval accuracy, search strategy quality, source synthesis<\/p>\n<h3>Comprehensive Benchmark Table<\/h3>\n<p><strong>Coding Benchmarks<\/strong>:<\/p>\n<table>\n<thead>\n<tr>\n<th>Benchmark<\/th>\n<th>Opus 4.6<\/th>\n<th>Comparison<\/th>\n<th>Notes<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Terminal-Bench 2.0<\/strong><\/td>\n<td>#1 Industry<\/td>\n<td>Beats all competitors<\/td>\n<td>Terminus-2 harness, 5-15 samples<\/td>\n<\/tr>\n<tr>\n<td><strong>SWE-bench Verified<\/strong><\/td>\n<td>81.42%<\/td>\n<td>With modifications<\/td>\n<td>25-trial average, prompt optimization<\/td>\n<\/tr>\n<tr>\n<td><strong>OpenRCA<\/strong><\/td>\n<td>Highest<\/td>\n<td>Root cause analysis<\/td>\n<td>Official methodology verification<\/td>\n<\/tr>\n<tr>\n<td><strong>Multilingual Coding<\/strong><\/td>\n<td>State-of-art<\/td>\n<td>Cross-language issues<\/td>\n<td>Multiple programming languages<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><strong>Knowledge Work & Reasoning<\/strong>:<\/p>\n<table>\n<thead>\n<tr>\n<th>Benchmark<\/th>\n<th>Opus 
4.6<\/th>\n<th>Delta<\/th>\n<th>Significance<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>GDPval-AA<\/strong><\/td>\n<td>Leading<\/td>\n<td>+144 Elo vs GPT-5.2<\/td>\n<td>~70% win rate<\/td>\n<\/tr>\n<tr>\n<td><strong>Humanity's Last Exam<\/strong><\/td>\n<td>#1<\/td>\n<td>Leads all frontier<\/td>\n<td>Complex multidisciplinary<\/td>\n<\/tr>\n<tr>\n<td><strong>Life Sciences<\/strong><\/td>\n<td>2\u00d7 vs Opus 4.5<\/td>\n<td>Nearly double<\/td>\n<td>Biology, chemistry, phylogenetics<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><strong>Long-Context Performance<\/strong>:<\/p>\n<table>\n<thead>\n<tr>\n<th>Test<\/th>\n<th>Opus 4.6<\/th>\n<th>Opus 4.5<\/th>\n<th>Sonnet 4.5<\/th>\n<th>Improvement<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>MRCR v2 (8-needle, 1M)<\/strong><\/td>\n<td>76%<\/td>\n<td>N\/A<\/td>\n<td>18.5%<\/td>\n<td>4.1\u00d7 vs Sonnet<\/td>\n<\/tr>\n<tr>\n<td><strong>Context Window<\/strong><\/td>\n<td>1M tokens<\/td>\n<td>200k<\/td>\n<td>256k<\/td>\n<td>First Opus with 1M<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><strong>Specialized Domains<\/strong>:<\/p>\n<table>\n<thead>\n<tr>\n<th>Domain<\/th>\n<th>Benchmark<\/th>\n<th>Score<\/th>\n<th>Notes<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Legal<\/strong><\/td>\n<td>BigLaw Bench<\/td>\n<td>90.2%<\/td>\n<td>40% perfect scores, Harvey<\/td>\n<\/tr>\n<tr>\n<td><strong>Cybersecurity<\/strong><\/td>\n<td>CyberGym<\/td>\n<td>#1<\/td>\n<td>Vulnerability detection, NBIM 38\/40 wins<\/td>\n<\/tr>\n<tr>\n<td><strong>Long-term Focus<\/strong><\/td>\n<td>Vending-Bench 2<\/td>\n<td>+$3,050.53 vs Opus 4.5<\/td>\n<td>Sustained performance<\/td>\n<\/tr>\n<tr>\n<td><strong>Biology\/Chemistry<\/strong><\/td>\n<td>Life Sciences Tests<\/td>\n<td>2\u00d7 Opus 4.5<\/td>\n<td>Computational biology, organic chem<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>What the Improvements Mean<\/h3>\n<p><strong>Better Planning<\/strong>:<\/p>\n<ul>\n<li>Identifies most challenging parts without prompting<\/li>\n<li>Allocates 
cognitive resources intelligently<\/li>\n<li>Moves quickly through straightforward sections<\/li>\n<li>Handles ambiguous problems with better judgment<\/li>\n<\/ul>\n<p><strong>Longer Sustainability<\/strong>:<\/p>\n<ul>\n<li>Stays productive over extended sessions<\/li>\n<li>Maintains quality as context grows<\/li>\n<li>Doesn't drift or lose focus<\/li>\n<li>Handles multi-hour agentic workflows<\/li>\n<\/ul>\n<p><strong>Larger Codebase Reliability<\/strong>:<\/p>\n<ul>\n<li>Operates confidently in millions of lines of code<\/li>\n<li>Tracks dependencies across files<\/li>\n<li>Navigates unfamiliar architectures<\/li>\n<li>SentinelOne: &#8220;multi-million-line codebase migration like a senior engineer&#8221;<\/li>\n<\/ul>\n<p><strong>Enhanced Self-Correction<\/strong>:<\/p>\n<ul>\n<li>Reviews own code before submitting<\/li>\n<li>Catches mistakes proactively<\/li>\n<li>Debugs autonomously<\/li>\n<li>Vercel: &#8220;frontier-level reasoning, especially with edge cases&#8221;<\/li>\n<\/ul>\n<h2>Part II: The 1M Context Window Breakthrough<\/h2>\n<h3>Eliminating &#8220;Context Rot&#8221;<\/h3>\n<p><strong>The Previous Problem<\/strong>: AI models degrade as conversations exceed token limits\u2014losing track of information, missing buried details, producing inconsistent responses<\/p>\n<p><strong>Opus 4.6 Solution<\/strong>: First Opus-class model with <strong>1 million token context window (beta)<\/strong><\/p>\n<p><strong>The MRCR v2 Proof<\/strong>:<\/p>\n<p><strong>Test<\/strong>: 8-needle, 1M variant\u2014information &#8220;hidden&#8221; in vast amounts of text<\/p>\n<p><strong>Opus 4.6<\/strong>: <strong>76% retrieval accuracy<\/strong><\/p>\n<p><strong>Sonnet 4.5<\/strong>: <strong>18.5% retrieval accuracy<\/strong><\/p>\n<p><strong>Improvement<\/strong>: <strong>4.1\u00d7 better<\/strong> at maintaining performance over long contexts<\/p>\n<p><strong>Qualitative Shift<\/strong>: From &#8220;struggles beyond 200k&#8221; to &#8220;reliably handles 
1M&#8221;<\/p>\n<h3>Long-Context Applications<\/h3>\n<p><strong>Document Analysis<\/strong>:<\/p>\n<ul>\n<li>Entire legal briefs with exhibits<\/li>\n<li>Complete financial reports with appendices<\/li>\n<li>Technical documentation sets<\/li>\n<li>Multi-book literature review<\/li>\n<\/ul>\n<p><strong>Codebase Understanding<\/strong>:<\/p>\n<ul>\n<li>Full application codebases<\/li>\n<li>Dependency chains across projects<\/li>\n<li>Architecture documentation + code<\/li>\n<li>Historical commit context<\/li>\n<\/ul>\n<p><strong>Research Synthesis<\/strong>:<\/p>\n<ul>\n<li>Dozens of research papers<\/li>\n<li>Conference proceedings<\/li>\n<li>Patent portfolios<\/li>\n<li>Scientific literature reviews<\/li>\n<\/ul>\n<p><strong>Extended Conversations<\/strong>:<\/p>\n<ul>\n<li>Week-long project discussions<\/li>\n<li>Accumulated knowledge bases<\/li>\n<li>Historical decision contexts<\/li>\n<li>Evolving requirements specifications<\/li>\n<\/ul>\n<h3>The 128k Output Token Advantage<\/h3>\n<p><strong>Previous Limitation<\/strong>: Long outputs required breaking into multiple requests<\/p>\n<p><strong>Opus 4.6 Capability<\/strong>: Up to <strong>128,000 tokens in single output<\/strong><\/p>\n<p><strong>Enabled Use Cases<\/strong>:<\/p>\n<ul>\n<li>Complete technical documentation<\/li>\n<li>Comprehensive reports<\/li>\n<li>Full application code<\/li>\n<li>Detailed analysis documents<\/li>\n<li>Multi-section deliverables in one response<\/li>\n<\/ul>\n<p><strong>Partner Testimonial<\/strong>: Bolt.new CEO Eric Simons\u2014&#8220;one-shotted a fully functional physics engine, handling a large multi-scope task in a single pass&#8221;<\/p>\n<h3>Premium Pricing for Extended Context<\/h3>\n<p><strong>Standard Pricing<\/strong> (up to 200k tokens):<\/p>\n<ul>\n<li><strong>Input<\/strong>: $5 per million tokens<\/li>\n<li><strong>Output<\/strong>: $25 per million tokens<\/li>\n<\/ul>\n<p><strong>Extended Context<\/strong> (200k-1M tokens):<\/p>\n<ul>\n<li><strong>Input<\/strong>: 
$10 per million tokens<\/li>\n<li><strong>Output<\/strong>: $37.50 per million tokens<\/li>\n<li><strong>Multiplier<\/strong>: 2\u00d7 for input, 1.5\u00d7 for output<\/li>\n<\/ul>\n<p><strong>When It's Worth It<\/strong>:<\/p>\n<ul>\n<li>Multi-document analysis requiring simultaneous access<\/li>\n<li>Codebase-scale operations<\/li>\n<li>Historical context critical to quality<\/li>\n<li>Single-pass complex deliverables<\/li>\n<\/ul>\n<h2>Part III: New Developer Platform Features<\/h2>\n<h3>Adaptive Thinking<\/h3>\n<p><strong>The Previous Binary<\/strong>: Enable or disable extended thinking\u2014no middle ground<\/p>\n<p><strong>The New Intelligence<\/strong>:<\/p>\n<p><strong>How It Works<\/strong>: Model decides when deeper reasoning would be helpful based on task complexity<\/p>\n<p><strong>At Default (High Effort)<\/strong>:<\/p>\n<ul>\n<li>Uses extended thinking when useful<\/li>\n<li>Skips it for straightforward tasks<\/li>\n<li>Balances quality and speed automatically<\/li>\n<\/ul>\n<p><strong>Developer Control<\/strong>: Adjust effort level to make model more\/less selective<\/p>\n<p><strong>Benefit<\/strong>: Optimal performance without manual micromanagement per query<\/p>\n<p><strong>Example Behavior<\/strong>:<\/p>\n<ul>\n<li>Simple query: Instant response without thinking<\/li>\n<li>Ambiguous problem: Activates extended reasoning<\/li>\n<li>Edge case detection: Automatically thinks deeper<\/li>\n<li>Routine task: Efficient execution<\/li>\n<\/ul>\n<h3>Effort Controls (Four Levels)<\/h3>\n<p><strong>Low Effort<\/strong>:<\/p>\n<ul>\n<li><strong>Speed<\/strong>: Fastest responses<\/li>\n<li><strong>Cost<\/strong>: Lowest token usage<\/li>\n<li><strong>Use When<\/strong>: Simple queries, routine tasks, time-critical operations<\/li>\n<li><strong>Thinking<\/strong>: Minimal extended reasoning<\/li>\n<\/ul>\n<p><strong>Medium Effort<\/strong>:<\/p>\n<ul>\n<li><strong>Speed<\/strong>: Balanced<\/li>\n<li><strong>Cost<\/strong>: 
Moderate<\/li>\n<li><strong>Use When<\/strong>: Standard tasks, most everyday work<\/li>\n<li><strong>Thinking<\/strong>: Selective extended reasoning<\/li>\n<\/ul>\n<p><strong>High Effort<\/strong> (Default):<\/p>\n<ul>\n<li><strong>Speed<\/strong>: Quality-optimized<\/li>\n<li><strong>Cost<\/strong>: Standard pricing<\/li>\n<li><strong>Use When<\/strong>: Complex tasks requiring careful consideration<\/li>\n<li><strong>Thinking<\/strong>: Extended reasoning when beneficial (adaptive)<\/li>\n<\/ul>\n<p><strong>Max Effort<\/strong>:<\/p>\n<ul>\n<li><strong>Speed<\/strong>: Deepest reasoning (may be slower)<\/li>\n<li><strong>Cost<\/strong>: Highest token usage<\/li>\n<li><strong>Use When<\/strong>: Most challenging problems, critical decisions, expert-level work<\/li>\n<li><strong>Thinking<\/strong>: Maximum extended reasoning, 120k thinking budget<\/li>\n<\/ul>\n<p><strong>Parameter Access<\/strong>: <code>effort<\/code> parameter in the API (<code>\/effort<\/code> command in Claude Code)<\/p>\n<p><strong>Anthropic Recommendation<\/strong>: &#8220;If model is overthinking on a given task, dial effort down from high to medium&#8221;<\/p>\n<h3>Context Compaction (Beta)<\/h3>\n<p><strong>The Problem<\/strong>: Long conversations and agentic tasks hitting context limits<\/p>\n<p><strong>The Solution<\/strong>: Automatic summarization and replacement of older context<\/p>\n<p><strong>How It Works<\/strong>:<\/p>\n<p><strong>1. Threshold Configuration<\/strong>: Developer sets token limit (e.g., 50k, 100k)<\/p>\n<p><strong>2. Automatic Trigger<\/strong>: When conversation approaches threshold<\/p>\n<p><strong>3. Intelligent Summarization<\/strong>: Model summarizes older context<\/p>\n<p><strong>4. Seamless Replacement<\/strong>: Summary replaces original detailed context<\/p>\n<p><strong>5. 
Continued Operation<\/strong>: Task proceeds without hitting limits<\/p>\n<p><strong>Configuration Options<\/strong>:<\/p>\n<ul>\n<li>Custom threshold settings<\/li>\n<li>Preservation rules for critical context<\/li>\n<li>Summary detail levels<\/li>\n<li>Maximum total context after compaction<\/li>\n<\/ul>\n<p><strong>Use Cases<\/strong>:<\/p>\n<ul>\n<li>Multi-day debugging sessions<\/li>\n<li>Iterative design discussions<\/li>\n<li>Long-running research projects<\/li>\n<li>Extended customer support threads<\/li>\n<li>Continuous monitoring workflows<\/li>\n<\/ul>\n<p><strong>BrowseComp Example<\/strong>: Compaction at 50k\u219210M total tokens enabled deep search<\/p>\n<p><strong>Humanity's Last Exam<\/strong>: Compaction 50k\u21923M tokens for complex reasoning<\/p>\n<h3>US-Only Inference<\/h3>\n<p><strong>Requirement<\/strong>: Workloads must run in United States data centers<\/p>\n<p><strong>Pricing<\/strong>: <strong>1.1\u00d7 token pricing<\/strong> (10% premium)<\/p>\n<p><strong>Use Cases<\/strong>:<\/p>\n<ul>\n<li>Regulated industries (healthcare, finance, government)<\/li>\n<li>Data residency compliance requirements<\/li>\n<li>US government contracts<\/li>\n<li>Legal\/contractual restrictions<\/li>\n<\/ul>\n<p><strong>How to Enable<\/strong>: Specify in API call or platform settings<\/p>\n<p><strong>Documentation<\/strong>: See <a href=\"https:\/\/platform.claude.com\/docs\/en\/build-with-claude\/data-residency\" target=\"_blank\" rel=\"noopener\">Data Residency<\/a><\/p>\n<h2>Part IV: Product Updates<\/h2>\n<h3>Agent Teams in Claude Code (Research Preview)<\/h3>\n<p><strong>The Innovation<\/strong>: Multiple agents working in parallel, coordinating autonomously<\/p>\n<p><strong>When to Use<\/strong>: Tasks splitting into independent, read-heavy work<\/p>\n<p><strong>Example<\/strong>: Codebase reviews across multiple repositories<\/p>\n<p><strong>How It Works<\/strong>:<\/p>\n<ol>\n<li>Main agent decomposes task<\/li>\n<li>Sub-agents instantiated for 
independent pieces<\/li>\n<li>Parallel execution<\/li>\n<li>Autonomous coordination<\/li>\n<li>Results synthesis<\/li>\n<\/ol>\n<p><strong>Developer Control<\/strong>:<\/p>\n<ul>\n<li>Take over any subagent: Shift+Up\/Down<\/li>\n<li>Tmux integration support<\/li>\n<li>Monitor all agent activities<\/li>\n<li>Intervene when needed<\/li>\n<\/ul>\n<p><strong>Replit President Michele Catasta<\/strong>: &#8220;Breaks complex tasks into independent subtasks, runs tools and subagents in parallel, identifies blockers with real precision&#8221;<\/p>\n<p><strong>Rakuten GM Yusuke Kaji<\/strong>: &#8220;Autonomously closed 13 issues and assigned 12 issues to right team members in single day, managing ~50-person org across 6 repositories&#8221;<\/p>\n<h3>Claude in Excel (Substantial Upgrades)<\/h3>\n<p><strong>Improved Capabilities<\/strong>:<\/p>\n<p><strong>Long-Running Tasks<\/strong>: Handles complex multi-step spreadsheet operations<\/p>\n<p><strong>Harder Problems<\/strong>: Solves challenging data analysis and modeling<\/p>\n<p><strong>Plan Before Acting<\/strong>: Thinks through approach before executing<\/p>\n<p><strong>Unstructured Data Ingestion<\/strong>: Processes messy data, infers proper structure automatically<\/p>\n<p><strong>Multi-Step Changes<\/strong>: Executes complex modifications in one pass<\/p>\n<p><strong>Use Cases<\/strong>:<\/p>\n<ul>\n<li>Financial model construction with Pivot Tables<\/li>\n<li>Data cleaning and structuring<\/li>\n<li>Complex formula creation<\/li>\n<li>Multi-sheet coordination<\/li>\n<li>Automated reporting<\/li>\n<\/ul>\n<p><strong>Shortcut.ai CTO Nico Christie<\/strong>: &#8220;Performance jump feels almost unbelievable. Real-world tasks challenging for Opus [4.5] suddenly became easy. 
Watershed moment for spreadsheet agents.&#8221;<\/p>\n<h3>Claude in PowerPoint (Research Preview)<\/h3>\n<p><strong>Now Available<\/strong>: Max, Team, and Enterprise plans<\/p>\n<p><strong>Core Capabilities<\/strong>:<\/p>\n<p><strong>Layout Reading<\/strong>: Understands your existing templates<\/p>\n<p><strong>Font Recognition<\/strong>: Matches brand typography<\/p>\n<p><strong>Slide Master Awareness<\/strong>: Stays on brand automatically<\/p>\n<p><strong>Template Building<\/strong>: Constructs presentations from templates<\/p>\n<p><strong>Full Deck Generation<\/strong>: Creates complete presentations from descriptions<\/p>\n<p><strong>Visual Intelligence<\/strong>: Brings data to life with appropriate charts, graphics<\/p>\n<p><strong>Excel + PowerPoint Workflow<\/strong>:<\/p>\n<ol>\n<li>Process and structure data in Excel<\/li>\n<li>Transfer insights to PowerPoint<\/li>\n<li>Automatic visual generation<\/li>\n<li>Brand-consistent output<\/li>\n<li>Professional presentation ready<\/li>\n<\/ol>\n<p><strong>Figma CDO Loredana Crisan<\/strong>: &#8220;Translates detailed designs and multi-layered tasks into code on first try, powerful starting point for teams to explore ideas&#8221;<\/p>\n<h2>Part V: Safety and Alignment Leadership<\/h2>\n<h3>Lowest Misaligned Behavior Rate<\/h3>\n<p><strong>Automated Behavioral Audit Results<\/strong>:<\/p>\n<p><strong>Opus 4.6<\/strong>: <strong>Lowest misaligned behavior score<\/strong> of any frontier model<\/p>\n<p><strong>Versus Opus 4.5<\/strong>: Equal or better alignment (Opus 4.5 was previous best)<\/p>\n<p><strong>Misalignment Categories Tested<\/strong>:<\/p>\n<ul>\n<li>Deception and dishonesty<\/li>\n<li>Sycophancy (excessive agreement)<\/li>\n<li>Encouragement of user delusions<\/li>\n<li>Cooperation with misuse requests<\/li>\n<li>Harmful instruction following<\/li>\n<\/ul>\n<p><strong>Over-Refusal Rate<\/strong>: <strong>Lowest of recent Claude models<\/strong><\/p>\n<p><strong>Balance Achieved<\/strong>: High 
safety without excessive caution on benign queries<\/p>\n<h3>Most Comprehensive Safety Evaluation Ever<\/h3>\n<p><strong>New Evaluation Types<\/strong>:<\/p>\n<ul>\n<li>User wellbeing assessments<\/li>\n<li>Complex refusal testing<\/li>\n<li>Surreptitious harmful action detection<\/li>\n<li>Interpretability experiments (understanding why model behaves certain ways)<\/li>\n<\/ul>\n<p><strong>Upgraded Existing Tests<\/strong>:<\/p>\n<ul>\n<li>Enhanced dangerous request scenarios<\/li>\n<li>More sophisticated misuse attempts<\/li>\n<li>Multi-step harmful task detection<\/li>\n<li>Context-dependent safety evaluation<\/li>\n<\/ul>\n<p><strong>Interpretability Integration<\/strong>: Using science of AI model inner workings to catch problems standard testing might miss<\/p>\n<p><strong>System Card<\/strong>: Full details in <a href=\"https:\/\/www.anthropic.com\/claude-opus-4-6-system-card\" target=\"_blank\" rel=\"noopener\">Claude Opus 4.6 System Card<\/a><\/p>\n<h3>Cybersecurity-Specific Safeguards<\/h3>\n<p><strong>The Context<\/strong>: Opus 4.6 shows enhanced cybersecurity abilities<\/p>\n<p><strong>Dual-Use Recognition<\/strong>: Capabilities helpful for defense, potentially harmful for offense<\/p>\n<p><strong>Six New Cybersecurity Probes<\/strong>: Methods detecting harmful responses across:<\/p>\n<ul>\n<li>Exploit development<\/li>\n<li>Vulnerability discovery misuse<\/li>\n<li>Attack methodology guidance<\/li>\n<li>Malicious code generation<\/li>\n<li>System intrusion assistance<\/li>\n<li>Data exfiltration techniques<\/li>\n<\/ul>\n<p><strong>Defensive Acceleration<\/strong>: Using Opus 4.6 to find and patch vulnerabilities in open-source software (see <a href=\"https:\/\/red.anthropic.com\/2026\/zero-days\/\" target=\"_blank\" rel=\"noopener\">Anthropic cybersecurity blog<\/a>)<\/p>\n<p><strong>Future Plans<\/strong>: Real-time intervention to block abuse as threats evolve<\/p>\n<p><strong>Philosophy<\/strong>: &#8220;Critical that cyberdefenders use AI 
models like Claude to level the playing field&#8221;<\/p>\n<h2>Part VI: Partner Testimonials and Real-World Validation<\/h2>\n<h3>Development Tools and Platforms<\/h3>\n<p><strong>GitHub CPO Mario Rodriguez<\/strong>:<\/p>\n<blockquote><p>&#8220;Delivering on complex, multi-step coding work developers face daily\u2014especially agentic workflows demanding planning and tool calling. Unlocking long-horizon tasks at frontier.&#8221;<\/p><\/blockquote>\n<p><strong>Cursor Co-founder Michael Truell<\/strong>:<\/p>\n<blockquote><p>&#8220;Stands out on harder problems. Stronger tenacity, better code review, stays on long-horizon tasks where others drop off. Team really excited.&#8221;<\/p><\/blockquote>\n<p><strong>Replit President Michele Catasta<\/strong>:<\/p>\n<blockquote><p>&#8220;Huge leap for agentic planning. Breaks complex tasks into independent subtasks, runs tools and subagents in parallel, identifies blockers with real precision.&#8221;<\/p><\/blockquote>\n<p><strong>Windsurf CEO Jeff Wang<\/strong>:<\/p>\n<blockquote><p>&#8220;Noticeably better than Opus 4.5 in Windsurf, especially on tasks requiring careful exploration like debugging and understanding unfamiliar codebases. Thinks longer, which pays off.&#8221;<\/p><\/blockquote>\n<p><strong>Bolt.new CEO Eric Simons<\/strong>:<\/p>\n<blockquote><p>&#8220;Meaningful improvement for design systems and large codebases\u2014enormous enterprise value use cases. One-shotted fully functional physics engine, handling large multi-scope task single pass.&#8221;<\/p><\/blockquote>\n<p><strong>Vercel GM Zeb Hermann (v0)<\/strong>:<\/p>\n<blockquote><p>&#8220;Only ship models developers genuinely feel difference. Opus 4.6 passed that bar with ease. Frontier-level reasoning with edge cases helps v0 elevate ideas from prototype to production.&#8221;<\/p><\/blockquote>\n<h3>Enterprise and Knowledge Work<\/h3>\n<p><strong>Notion AI Lead Sarah Sachs<\/strong>:<\/p>\n<blockquote><p>&#8220;Strongest model Anthropic shipped. 
Takes complicated requests, actually follows through, breaks into concrete steps, executes, produces polished work even when ambitious. Feels like capable collaborator.&#8221;<\/p><\/blockquote>\n<p><strong>Asana Interim CTO Amritansh Raghav<\/strong>:<\/p>\n<blockquote><p>&#8220;Clear step up. Code, reasoning, planning excellent. Ability to navigate large codebase and identify right changes feels state-of-the-art.&#8221;<\/p><\/blockquote>\n<p><strong>Thomson Reuters CTO Joel Hron<\/strong>:<\/p>\n<blockquote><p>&#8220;Meaningful leap in long-context performance. Handles much larger information bodies with consistency level strengthening how we design complex research workflows. More powerful building blocks for expert-grade systems.&#8221;<\/p><\/blockquote>\n<p><strong>Harvey Head of AI Niko Grupen<\/strong>:<\/p>\n<blockquote><p>&#8220;Achieved highest BigLaw Bench score of any Claude model at 90.2%. With 40% perfect scores and 84% above 0.8, remarkably capable for legal reasoning.&#8221;<\/p><\/blockquote>\n<p><strong>Box Head of AI Yashodha Bhavnani<\/strong>:<\/p>\n<blockquote><p>&#8220;Excels in high-reasoning tasks like multi-source analysis across legal, financial, technical content. Box eval showed 10% lift in performance, reaching 68% vs 58% baseline, near-perfect scores in technical domains.&#8221;<\/p><\/blockquote>\n<h3>Product and Creative Tools<\/h3>\n<p><strong>Shopify Staff Engineer Paulo Arruda<\/strong>:<\/p>\n<blockquote><p>&#8220;Best Anthropic model we've tested. Understands intent with minimal prompting, went above and beyond, exploring and creating details I didn't know I wanted until I saw them. Felt like working with model, not waiting on it.&#8221;<\/p><\/blockquote>\n<p><strong>Figma CDO Loredana Crisan<\/strong>:<\/p>\n<blockquote><p>&#8220;Generates complex, interactive apps and prototypes in Figma Make with impressive creative range. 
Translates detailed designs and multi-layered tasks into code first try\u2014powerful starting point.&#8221;<\/p><\/blockquote>\n<p><strong>Lovable Co-founder Fabian Hedin<\/strong>:<\/p>\n<blockquote><p>&#8220;Uplift in design quality. Works beautifully with our design systems and more autonomous, core to Lovable's values. People should create things that matter, not micromanage AI.&#8221;<\/p><\/blockquote>\n<h3>Security and Infrastructure<\/h3>\n<p><strong>NBIM Head of AI & ML Stian Kirkeberg<\/strong>:<\/p>\n<blockquote><p>&#8220;Across 40 cybersecurity investigations, Opus 4.6 produced best results 38 of 40 times in blind ranking against Claude 4.5 models. Each model ran end-to-end on same agentic harness with up to 9 subagents and 100+ tool calls.&#8221;<\/p><\/blockquote>\n<p><strong>SentinelOne Chief AI Officer Gregor Stewart<\/strong>:<\/p>\n<blockquote><p>&#8220;Handled multi-million-line codebase migration like senior engineer. Planned up front, adapted strategy as learned, finished in half the time.&#8221;<\/p><\/blockquote>\n<p><strong>Ramp Staff Software Engineer Jerry Tsui<\/strong>:<\/p>\n<blockquote><p>&#8220;Biggest leap seen in months. More comfortable giving sequence of tasks across stack and letting run. Smart enough to use subagents for individual pieces.&#8221;<\/p><\/blockquote>\n<h3>Specialized Applications<\/h3>\n<p><strong>Rakuten GM AI Yusuke Kaji<\/strong>:<\/p>\n<blockquote><p>&#8220;Autonomously closed 13 issues and assigned 12 to right team members in single day, managing ~50-person organization across 6 repositories. Handled product and organizational decisions while synthesizing context across domains, knew when to escalate to human.&#8221;<\/p><\/blockquote>\n<p><strong>Cognition Co-founder Scott Wu<\/strong>:<\/p>\n<blockquote><p>&#8220;Reasons through complex problems at level we haven't seen before. 
Considers edge cases other models miss, consistently lands on more elegant, well-considered solutions.&#8221;<\/p><\/blockquote>\n<p><strong>Shortcut.ai CTO Nico Christie<\/strong>:<\/p>\n<blockquote><p>&#8220;Performance jump almost unbelievable. Real-world tasks challenging for Opus [4.5] suddenly became easy. Watershed moment for spreadsheet agents on Shortcut.&#8221;<\/p><\/blockquote>\n<h2>Part VII: How to Use Claude Opus 4.6<\/h2>\n<h3>Access Points<\/h3>\n<p><strong>Claude.ai<\/strong> (Web\/App):<\/p>\n<ul>\n<li>Direct access for all users<\/li>\n<li>Max, Team, Enterprise plans<\/li>\n<li>Integrated with Claude in Excel, PowerPoint<\/li>\n<li>Cowork autonomous multitasking<\/li>\n<\/ul>\n<p><strong>API<\/strong> (Developers):<\/p>\n<ul>\n<li>Model string: <code>claude-opus-4-6<\/code><\/li>\n<li>Full documentation: <a href=\"https:\/\/platform.claude.com\/docs\/en\/about-claude\/models\/overview\" target=\"_blank\" rel=\"noopener\">platform.claude.com\/docs<\/a><\/li>\n<li>All major cloud platforms supported<\/li>\n<\/ul>\n<p><strong>Claude Code<\/strong> (Terminal):<\/p>\n<ul>\n<li>Agent teams feature<\/li>\n<li>IDE integration (Xcode support announced)<\/li>\n<li>Autonomous coding workflows<\/li>\n<\/ul>\n<h3>Pricing Structure<\/h3>\n<p><strong>Standard API<\/strong>:<\/p>\n<ul>\n<li><strong>Input<\/strong>: $5 per million tokens<\/li>\n<li><strong>Output<\/strong>: $25 per million tokens<\/li>\n<li><strong>Unchanged<\/strong>: Same pricing as Opus 4.5<\/li>\n<\/ul>\n<p><strong>Extended Context<\/strong> (200k-1M tokens):<\/p>\n<ul>\n<li><strong>Input<\/strong>: $10 per million tokens<\/li>\n<li><strong>Output<\/strong>: $37.50 per million tokens<\/li>\n<\/ul>\n<p><strong>US-Only Inference<\/strong>:<\/p>\n<ul>\n<li><strong>Multiplier<\/strong>: 1.1\u00d7 standard pricing<\/li>\n<li><strong>Use For<\/strong>: Compliance requirements<\/li>\n<\/ul>\n<p><strong>Full Details<\/strong>: <a href=\"https:\/\/claude.com\/pricing#api\" target=\"_blank\" 
rel=\"noopener\">claude.com\/pricing<\/a><\/p>\n<h3>Configuration Best Practices<\/h3>\n<p><strong>Effort Selection<\/strong>:<\/p>\n<p><strong>Low<\/strong>: Quick answers, simple queries, time-critical work \u2192 Fast, cheap, minimal thinking<\/p>\n<p><strong>Medium<\/strong>: Standard tasks, most everyday work \u2192 Balanced; thinks selectively<\/p>\n<p><strong>High<\/strong> (Default): Complex problems, quality-critical work \u2192 Adaptive thinking, optimal balance<\/p>\n<p><strong>Max<\/strong>: Hardest challenges, expert-level work \u2192 Maximum reasoning, 120k thinking budget<\/p>\n<p><strong>Context Management<\/strong>:<\/p>\n<p><strong>Enable Compaction<\/strong>: For long-running tasks, set a threshold (e.g., 50k tokens); automatic summarization of older context prevents hitting the window limit<\/p>\n<p><strong>Use the 1M Window<\/strong>: For multi-document analysis; premium pricing applies beyond 200k tokens, but it is worth it when simultaneous access to everything is critical<\/p>\n<p><strong>Adaptive Thinking<\/strong>: Leave it enabled at the default (high effort); the model decides when to think deeply. Dial it down if it overthinks simple tasks<\/p>\n<h2>Conclusion: The New Standard for AI Intelligence<\/h2>\n<h3>What Opus 4.6 Achieves<\/h3>\n<p><strong>Performance Leadership<\/strong>:<\/p>\n<ul>\n<li>144 Elo over GPT-5.2 on knowledge work<\/li>\n<li>#1 on Terminal-Bench 2.0 agentic coding<\/li>\n<li>Leads the Humanity's Last Exam reasoning test<\/li>\n<li>Best BrowseComp search performance<\/li>\n<\/ul>\n<p><strong>Context Breakthrough<\/strong>:<\/p>\n<ul>\n<li>1M token window (first Opus-class model)<\/li>\n<li>76% on MRCR v2 (4.1\u00d7 better than Sonnet 4.5)<\/li>\n<li>128k output tokens<\/li>\n<li>Context rot effectively eliminated<\/li>\n<\/ul>\n<p><strong>Developer Empowerment<\/strong>:<\/p>\n<ul>\n<li>Adaptive thinking intelligence<\/li>\n<li>Four-level effort controls<\/li>\n<li>Context compaction for long tasks<\/li>\n<li>Agent teams in Claude Code<\/li>\n<li>US-only inference option<\/li>\n<\/ul>\n<p><strong>Product 
Integration<\/strong>:<\/p>\n<ul>\n<li>Upgraded Claude in Excel<\/li>\n<li>New Claude in PowerPoint<\/li>\n<li>Cowork autonomous multitasking<\/li>\n<li>Improved everyday work capabilities<\/li>\n<\/ul>\n<p><strong>Safety Excellence<\/strong>:<\/p>\n<ul>\n<li>Lowest misaligned behavior rate among frontier models<\/li>\n<li>Most comprehensive safety evaluation ever<\/li>\n<li>Specialized cybersecurity safeguards<\/li>\n<li>Lowest over-refusal rate of recent Claude models<\/li>\n<\/ul>\n<h3>Who Benefits Most<\/h3>\n<p><strong>Developers<\/strong>: Agentic coding, codebase navigation, debugging, system tasks<\/p>\n<p><strong>Knowledge Workers<\/strong>: Financial analysis, legal research, document creation, presentations<\/p>\n<p><strong>Enterprises<\/strong>: Long-context analysis, compliance workflows, multi-repository management<\/p>\n<p><strong>Researchers<\/strong>: Literature synthesis, data analysis, expert-level reasoning<\/p>\n<p><strong>Creative Professionals<\/strong>: Design systems, prototyping, autonomous content creation<\/p>\n<h3>The Competitive Landscape<\/h3>\n<p><strong>Versus GPT-5.2<\/strong>: +144 Elo on GDPval-AA; wins ~70% of comparisons<\/p>\n<p><strong>Versus Previous Opus<\/strong>: +190 Elo on knowledge work; a 2\u00d7 gain on life sciences<\/p>\n<p><strong>Versus Industry<\/strong>: State-of-the-art across most benchmarks<\/p>\n<p><strong>Safety Profile<\/strong>: Best alignment of any frontier model<\/p>\n<h3>Getting Started<\/h3>\n<p><strong>Immediate Steps<\/strong>:<\/p>\n<ol>\n<li>Access at claude.ai or via the API (<code>claude-opus-4-6<\/code>)<\/li>\n<li>Start with the default high effort and adaptive thinking<\/li>\n<li>Enable context compaction for long tasks<\/li>\n<li>Experiment with effort levels for your use cases<\/li>\n<li>Try agent teams in Claude Code for parallel work<\/li>\n<\/ol>\n<p><strong>Learning Resources<\/strong>:<\/p>\n<ul>\n<li>System card for full technical details<\/li>\n<li>Developer documentation for API features<\/li>\n<li>Partner case studies for real-world 
examples<\/li>\n<li>Support center for implementation help<\/li>\n<\/ul>\n<p><strong>The smartest AI just got smarter. And safer. And more autonomous. Same price.<\/strong><\/p>","protected":false},"excerpt":{"rendered":"<p>Everything You Need to Know: Benchmark Dominance (144 Elo Over GPT-5.2, #1 on Terminal-Bench), New Features (Adaptive Thinking, Effort Controls, [&hellip;]<\/p>","protected":false},"author":11214,"featured_media":136964,"menu_order":0,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"default","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"set","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[468],"tags":[],"class_list":["post-136945","aitools","type-aitools","status-publish","format-standard","has-post-thumbnail","hentry","category-best-post"],"acf":[],"_links":{"self":[{"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/aitools\/136945","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/aitools"}],"about":[{"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/types\/aitools"}],"author":[{"embeddable":true,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/users\/11214"}],"version-history":[{"count":2,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/aitools\/136945\/revisions"}],"predecessor-version":[{"id":136966,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/aitools\/136945\/revisions\/136966"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/media\/136964"}],"wp:attachment":[{"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/media?parent=136945"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/categories?post=136945"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/legacy.vertu.com\/ar\/wp-json\/wp\/v2\/tags?post=136945"}],"curies":[{"name":"\u0648\u0648\u0631\u062f\u0628\u0631\u064a\u0633","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
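The pricing tiers quoted in the article ($5/$25 per million tokens standard, $10/$37.50 for the extended 200k-1M context window, and a 1.1x multiplier for US-only inference) can be sanity-checked with a small estimator. This is a minimal sketch that assumes simple linear per-token billing; the function name and the rounding are illustrative only and not part of Anthropic's API, and actual invoices follow the rules at claude.com/pricing:

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      extended_context: bool = False,
                      us_only: bool = False) -> float:
    """Estimate API cost in USD from the per-million-token rates in the article.

    Assumes flat linear billing per tier (an illustration, not billing logic).
    """
    if extended_context:
        # Extended-context tier (200k-1M token window)
        in_rate, out_rate = 10.00, 37.50
    else:
        # Standard tier (unchanged from Opus 4.5)
        in_rate, out_rate = 5.00, 25.00
    cost = (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate
    if us_only:
        # US-only inference carries a 1.1x multiplier
        cost *= 1.1
    return round(cost, 4)
```

For example, a 1M-token input with a 100k-token output comes to $7.50 at standard rates and $13.75 in the extended-context tier under this model.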