
<h2>GPT-5.3 “Garlic”: Everything You Need to Know About OpenAI’s Rumored Next-Gen AI</h2>
<p><img src="https://vertu-website-oss.vertu.com/2026/01/GPT-5.3-Garlic.png" alt="GPT-5.3 Garlic" width="818" height="350" /></p>
<p><b>What is GPT-5.3 “Garlic”?</b></p>
<p>GPT-5.3, internally codenamed <b>“Garlic,”</b> is the rumored next-generation iteration of OpenAI’s flagship language-model series, positioned as a high-density, ultra-efficient successor to the GPT-5 and GPT-5.2 line.</p>
<p>Following an internal <b>“Code Red”</b> at OpenAI, triggered by the rapid advance of Google’s Gemini 3 and Anthropic’s Claude 4.5, “Garlic” represents a fundamental shift from “bigger is better” to <b>“smarter and denser.”</b></p>
<ul>
<li><p><b>Release Timeline:</b> Rumors point to a broad rollout in <b>early 2026</b>, following the limited enterprise launch of GPT-5.2 in late 2025.</p></li>
<li><p><b>Key Technical Edge:</b> It reportedly uses <b>Enhanced Pre-Training Efficiency (EPTE)</b> and high-density training to pack “GPT-6-level” reasoning into a smaller, faster architecture.</p></li>
<li><p><b>Performance Benchmarks:</b> Internal leaks suggest it outperforms competitors in <b>complex coding</b>, <b>multi-step logical reasoning</b>, and <b>long-context retention</b>.</p></li>
<li><p><b>Major Features:</b> Expect a <b>400,000-token context window</b>, native agentic reasoning tokens, and a drastically reduced hallucination rate.</p></li>
<li><p><b>The “Garlic” Vision:</b> The codename reflects a model that is “small but powerful”: a highly concentrated “flavor” of intelligence that can run more cost-effectively than previous massive models.</p></li>
</ul>
<hr />
<h3>The 2026 AI Arms Race: Why “Garlic” Matters Now</h3>
<p>The artificial-intelligence landscape in early 2026 is significantly more crowded than it was a year ago. With Google’s Gemini 3 dominating multimodal benchmarks and Anthropic’s Claude 4.5 becoming the darling of enterprise software development, OpenAI found itself on the defensive. Project “Garlic” is the strategic counter-move.</p>
<p>Unlike previous updates that focused primarily on increasing parameter counts, GPT-5.3 is built on a philosophy of <b>computational efficiency</b>. The goal is no longer just to build a bigger brain, but a more “wrinkled” one: increasing the density of connections and the quality of the data used during pre-training.</p>
<h3>Decoding the Codename: From “Strawberry” to “Garlic”</h3>
<p>OpenAI has a history of using food-based codenames for internal milestones. We saw “Strawberry” (which became the o1 reasoning series) and “Lemon.” The choice of “Garlic” is reportedly a nod to the model’s architecture.</p>
<ul>
<li><p><b>Concentrated Power:</b> Just as a small clove of garlic can define the flavor of an entire dish, the model is designed to deliver massive intelligence without the massive compute overhead of earlier models.</p></li>
<li><p><b>Efficiency First:</b> It signals a departure from the “Shallotpeat” project, rumored to be a larger, more cumbersome model that faced scaling challenges.</p></li>
<li><p><b>Versatility:</b> Garlic is intended to be the foundation for everything from mobile-integrated assistants to enterprise-grade autonomous agents.</p></li>
</ul>
<h3>High-Density Training: The Secret Sauce</h3>
<p>The most significant technical breakthrough associated with GPT-5.3 is <b>High-Density Training</b>.</p>
<p>In the past, LLMs were trained on trillions of tokens from the open web, much of it “noisy” or low-quality.</p>
<ul>
<li><p><b>Curated Data Pipelines:</b> Garlic was reportedly trained on a much smaller but far more sophisticated dataset of verified scientific papers, high-quality code repositories, and synthetic data generated by earlier reasoning models.</p></li>
<li><p><b>Parameter Efficiency:</b> By prioritizing data quality over quantity, OpenAI has reportedly achieved reasoning scores that exceed GPT-5 while potentially reducing inference cost.</p></li>
<li><p><b>Auto-Routing Reasoning:</b> The model features an internal “auto-router” that gauges the complexity of each prompt. For simple tasks it uses a lightning-fast “reflex” mode; for complex problems it automatically engages deeper reasoning tokens.</p></li>
</ul>
<h3>Benchmarking the Giant: Garlic vs. the Competition</h3>
<p>Leaked internal evaluations suggest that GPT-5.3 is reclaiming the lead in the “Big Three” AI war.</p>
<table>
<thead>
<tr><th>Metric</th><th>GPT-5.3 (Garlic)</th><th>Google Gemini 3</th><th>Claude 4.5</th></tr>
</thead>
<tbody>
<tr><td><b>Reasoning (GDPval)</b></td><td><b>70.9%</b></td><td>53.3%</td><td>59.6%</td></tr>
<tr><td><b>Coding (HumanEval+)</b></td><td><b>94.2%</b></td><td>89.1%</td><td>91.5%</td></tr>
<tr><td><b>Context Window</b></td><td><b>400K tokens</b></td><td>2M tokens</td><td>200K tokens</td></tr>
<tr><td><b>Inference Speed</b></td><td><b>Ultra-fast</b></td><td>Moderate</td><td>Fast</td></tr>
</tbody>
</table>
<p>While Google still holds the crown for raw context-window length, GPT-5.3 focuses on <b>contextual accuracy</b>: the ability to find a specific “needle in a haystack” anywhere in that 400K window with a reported 99.9% reliability.</p>
<h3>Key Features of GPT-5.3</h3>
<p>OpenAI has integrated several “agent-first” features into the Garlic architecture to move beyond simple chat interactions.</p>
<ul>
<li><p><b>Massive Output Capacity:</b> Rumors suggest a 128,000-token output limit, allowing the model to generate an entire software application or a 100-page technical manual in a single pass.</p></li>
<li><p><b>Native Tool-Calling:</b> Unlike previous versions that required external “wrappers,” Garlic has a native understanding of APIs and software environments, making it far more reliable at executing multi-step tasks.</p></li>
<li><p><b>Self-Verification Logic:</b> Before delivering a response, the model performs a “hidden” reasoning step to check its own logic for contradictions, significantly reducing hallucination-style errors.</p></li>
<li><p><b>Knowledge Cutoff:</b> The training data reportedly includes information up to <b>August 31, 2025</b>, making it the most current model in the GPT-5 family.</p></li>
</ul>
<h3>The Impact on Developers and Enterprises</h3>
<p>For the tech industry, GPT-5.3 represents a shift toward <b>autonomous coding</b>. With its improved reasoning and large context window, developers can feed the model an entire codebase for refactoring.</p>
<ul>
<li><p><b>Infrastructure Optimization:</b> Because the model is denser and more efficient, API costs are expected to drop for cached inputs, making it more viable for startups to build complex products.</p></li>
<li><p><b>CI/CD Integration:</b> The model is designed to sit inside deployment pipelines, automatically reviewing code, suggesting security patches, and writing documentation as code is committed.</p></li>
<li><p><b>Agentic Workflows:</b> Rather than just answering questions, Garlic is built to act as a “project manager” that can delegate sub-tasks to smaller models (such as o4-mini) and synthesize the results.</p></li>
</ul>
<h3>Community Reactions and Skepticism</h3>
<p>Despite the “Code Red” hype, the Reddit community (specifically r/OpenAI and r/singularity) remains divided.</p>
<ul>
<li><p><b>The Hype:</b> Many believe Garlic is the precursor to “true” agentic AI: software that can actually <i>do</i> work rather than just <i>talk</i> about it.</p></li>
<li><p><b>The Skepticism:</b> Some users point out that we have entered a phase of “diminishing returns,” where the jump from 5.2 to 5.3 may be imperceptible to the average user even if the benchmarks look impressive.</p></li>
<li><p><b>The “Fraud” Debate:</b> As always, some critics argue that these incremental updates are a marketing strategy to keep OpenAI in the news while it struggles toward a true “GPT-6” breakthrough.</p></li>
</ul>
<h3>How to Prepare for the GPT-5.3 Rollout</h3>
<p>If you are a business owner or developer, you don’t need to wait for the official release to start preparing.</p>
<ol>
<li><p><b>Structure Your Data:</b> Since Garlic thrives on high-density, structured input, begin organizing your internal documentation and codebases into RAG-friendly (Retrieval-Augmented Generation) formats.</p></li>
<li><p><b>Audit Your Current API Usage:</b> If you are using GPT-4o or 5.2 today, look for areas where latency is hurting your user experience. Garlic is designed specifically to address the “slowness” of deep reasoning.</p></li>
<li><p><b>Experiment with Agentic Frameworks:</b> Start building small “agent” prototypes using LangChain or OpenAI’s Assistants API.
The shift in 5.3 toward agentic tokens will make these prototypes far more powerful overnight.</p></li>
</ol>
<h3>Conclusion: Is Garlic the Final Step Toward AGI?</h3>
<p>While GPT-5.3 “Garlic” may not be Artificial General Intelligence (AGI) itself, it would represent the most significant step yet toward <b>autonomous functional intelligence</b>. By answering the “Code Red” crisis with a model that is both faster and smarter, OpenAI would prove that the future of AI isn’t just about scale; it’s about the quality of thought.</p>
<p>Whether it’s called GPT-5.3, 5.5, or simply “Garlic,” this model is set to redefine what we expect from a digital assistant in 2026. It’s no longer just a chatbot; it’s a teammate.</p>
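The “RAG-friendly” structuring advised in step 1 above can be prototyped today, with no GPT-5.3 access required. Below is a minimal, hypothetical sketch that splits internal markdown documentation into heading-tagged chunks suitable for a retrieval index; the function name, chunk size, and overlap are illustrative assumptions, not part of any OpenAI API.

```python
def chunk_markdown(text: str, max_chars: int = 800, overlap: int = 100) -> list[dict]:
    """Split a markdown document into retrieval-friendly chunks,
    tagging each chunk with the most recent section heading.

    This is a deliberately simple sketch: production pipelines would
    normally split on token counts and handle nested headings."""
    chunks = []
    heading = ""
    buf = ""
    for line in text.splitlines():
        if line.startswith("#"):
            # Track the current section so each chunk carries metadata.
            heading = line.lstrip("#").strip()
        buf += line + "\n"
        if len(buf) >= max_chars:
            chunks.append({"heading": heading, "text": buf.strip()})
            buf = buf[-overlap:]  # keep a tail of text for continuity
    if buf.strip():
        chunks.append({"heading": heading, "text": buf.strip()})
    return chunks
```

Chunks shaped like this can later be embedded and stored in whatever vector database your agent stack uses, so that a long-context model only ever sees dense, relevant passages.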
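The “auto-router” behavior described in the High-Density Training section can also be approximated client-side while 5.3 remains unreleased: send cheap prompts to a fast model and hard ones to a reasoning tier. The sketch below is a hedged illustration only; the tier names are placeholders and the keyword heuristic is our assumption, not OpenAI’s routing logic.

```python
def route_prompt(prompt: str) -> str:
    """Heuristically route a prompt to a fast 'reflex' tier or a
    slower 'deep reasoning' tier, mimicking the auto-router rumored
    for GPT-5.3. Tier names here are placeholders, not real models."""
    # Crude markers that a prompt likely needs multi-step reasoning.
    reasoning_markers = (
        "step by step", "debug", "refactor", "optimize",
        "why does", "compare", "analyze",
    )
    long_input = len(prompt) > 2000  # very long inputs get the deep tier
    needs_reasoning = any(m in prompt.lower() for m in reasoning_markers)
    return "deep-reasoning-tier" if (long_input or needs_reasoning) else "reflex-tier"
```

In a real application the returned tier name would map onto whichever model identifiers your provider actually exposes; the value of the pattern is that latency-sensitive traffic never pays the reasoning-token cost.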