{"id":15738,"date":"2025-08-10T10:00:10","date_gmt":"2025-08-10T10:00:10","guid":{"rendered":"https:\/\/dmsretail.com\/RetailNews\/an-ai-nerd-knob-every-network-engineer-should-know\/"},"modified":"2025-08-10T10:00:10","modified_gmt":"2025-08-10T10:00:10","slug":"an-ai-nerd-knob-every-network-engineer-should-know","status":"publish","type":"post","link":"https:\/\/dmsretail.com\/RetailNews\/an-ai-nerd-knob-every-network-engineer-should-know\/","title":{"rendered":"An AI &#8216;Nerd Knob&#8217; Every Network Engineer Should Know"},"content":{"rendered":"<div>\n<p>Alright, my friends, I\u2019m back with another post based on my learnings and exploration of AI and how it\u2019ll fit into our work as network engineers. In today\u2019s post, I want to share the first (of what will likely be many) \u201cnerd knobs\u201d that I think we all should be aware of and how they will impact our use of AI and AI tools. I can already sense the excitement in the room. After all, there\u2019s not much a network engineer likes more than <em>tweaking a nerd knob<\/em> in the network to fine-tune performance. And that\u2019s exactly what we\u2019ll be doing here. Fine-tuning our AI tools to help us be more effective.<\/p>\n<p>First up, the requisite disclaimer or two.<\/p>\n<ol>\n<li><strong>There are SO MANY nerd knobs in AI.<\/strong> (Shocker, I know.) 
So, if you all like this kind of blog post, I\u2019d be happy to return in other posts where we look at other \u201cknobs\u201d and settings in AI and how they work. Well, I\u2019d be happy to return once I understand them, at least. \ud83d\ude42<\/li>\n<li><strong>Changing <em>any of the settings<\/em> on your AI tools can have dramatic effects on results.<\/strong> This includes increasing the resource consumption of the AI model, as well as increasing hallucinations and decreasing the accuracy of the information that comes back from your prompts. Consider yourselves warned. As with all things AI, go forth and explore and experiment. But do so in a safe, lab environment.<\/li>\n<\/ol>\n<p>For today\u2019s experiment, I\u2019m once again using LMStudio running locally on my laptop rather than a public or cloud-hosted AI model. For more details on why I like LMStudio, check out my last blog, Creating a NetAI Playground for Agentic AI Experimentation.<\/p>\n<p>Enough of the setup, let\u2019s get into it!<\/p>\n<h2><strong>The impact of working memory size, a.k.a. \u201ccontext\u201d<\/strong><\/h2>\n<p>Let me set a scene for you.<\/p>\n<p>You\u2019re in the middle of troubleshooting a network issue. Someone reported, or noticed, instability at a point in your network, and you\u2019ve been assigned the joyful task of getting to the bottom of it. You captured some logs and relevant debug information, and the time has come to go through it all to figure out what it means. But you\u2019ve also been using AI tools to be more productive, 10x your work, impress your boss, you know <em>all the things<\/em> that are going on right now.<\/p>\n<p>So, you decide to see if AI can help you work through the data faster and get to the root of the issue.<\/p>\n<p>You fire up your local AI assistant. (Yes, local\u2014because <em>who knows<\/em> what\u2019s in the debug messages? 
Best to keep it all safe on your laptop.)<\/p>\n<p>You tell it what you\u2019re up to, and paste in the log messages.<\/p>\n<figure id=\"attachment_476324\" aria-describedby=\"caption-attachment-476324\" style=\"width: 800px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden size-full wp-image-476324\" data-lazy-type=\"image\" src=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-01.png\" alt=\"Asking an AI assistant to help debug a network issue.\" width=\"800\" height=\"608\" srcset=\"\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-476324\" src=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-01.png\" alt=\"Asking an AI assistant to help debug a network issue.\" width=\"800\" height=\"608\" srcset=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-01-300x228.png 300w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-01-768x584.png 768w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-01.png 800w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><\/noscript><figcaption id=\"caption-attachment-476324\" class=\"wp-caption-text\">Asking AI to assist with troubleshooting<\/figcaption><\/figure>\n<p>After getting 120 or so lines of logs into the chat, you hit enter, kick up your feet, reach for your Arnold Palmer for a refreshing drink, and wait for the AI magic to happen. 
But before you can take a sip of <em>that iced tea and lemonade goodness<\/em>, you see this has immediately popped up on the screen:<\/p>\n<figure id=\"attachment_476325\" aria-describedby=\"caption-attachment-476325\" style=\"width: 800px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden size-full wp-image-476325\" data-lazy-type=\"image\" src=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-02.png\" alt=\"AI Failure! Context length issue\" width=\"800\" height=\"272\" srcset=\"\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-476325\" src=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-02.png\" alt=\"AI Failure! Context length issue\" width=\"800\" height=\"272\" srcset=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/IL20250731203321-ai-knobs-context-02-300x102.png 300w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/IL20250731203313-ai-knobs-context-02-768x261.png 768w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/IL20250731203259-ai-knobs-context-02-2048x696.png 2048w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/IL20250731203303-ai-knobs-context-02-1536x522.png 1536w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/IL20250731203308-ai-knobs-context-02-1024x348.png 1024w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-02.png 800w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><\/noscript><figcaption id=\"caption-attachment-476325\" class=\"wp-caption-text\">AI Failure! \u201cThe AI has nothing to say\u201d<\/figcaption><\/figure>\n<p>Oh my.<\/p>\n<p><em>\u201cThe AI has nothing to say.\u201d<\/em>!?! 
How could that be?<\/p>\n<p>Did you find a question so difficult that AI can\u2019t handle it?<\/p>\n<p>No, that\u2019s not the problem. Check out the helpful error message that LMStudio has kicked back:<\/p>\n<p style=\"padding-left: 40px;\"><em>\u201cTrying to keep the first 4994 tokens when context the overflows. However, <strong>the model is loaded with context length of only 4096 tokens, which is not enough<\/strong>. Try to load the model with a larger context length, or provide shorter input.\u201d<\/em><\/p>\n<p>And we\u2019ve gotten to the root of this perfectly scripted storyline and demonstration. Every AI tool out there has a limit to how much \u201cworking memory\u201d it has. The technical term for this working memory is \u201ccontext length<em>.\u201d <\/em>If you try to send more data to an AI tool than can fit into the context length, you\u2019ll hit this error, or something like it.<\/p>\n<p>The error message indicates that the model was \u201cloaded with context length of only 4096 tokens.\u201d What is a \u201ctoken,\u201d you wonder? Answering that could be a topic of an entirely different blog post, but for now, just know that \u201ctokens\u201d are the unit of size for the context length. And the first thing that is done when you send a prompt to an AI tool is that the prompt is converted into \u201ctokens\u201d.<\/p>\n<p>So what do we do? Well, the message gives us two possible options: we can increase the context length of the model, or we can provide shorter input. Sometimes it isn\u2019t a big deal to provide shorter input. But other times, like when we are dealing with large log files, that option isn\u2019t practical\u2014all of the data is important.<\/p>\n<h3><strong>Time to turn the knob!<\/strong><\/h3>\n<p>It is that first option, to load the model with a larger context length, that is our nerd knob. 
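<\/p>\n<p>Before we turn it, it helps to have a feel for how quickly log data eats up tokens. A common rule of thumb (a rough heuristic only; the real count depends on the model\u2019s tokenizer) is about four characters per token of English text. A quick Python sketch:<\/p>

```python
# Rough token estimate: ~4 characters per token for typical English text.
# Heuristic only -- the true count depends on the model's tokenizer, and
# log text (timestamps, interface names) often tokenizes less efficiently.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

line = 'Aug 10 10:00:10 core-sw1 %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet0/1, changed state to down'
logs = '\n'.join([line] * 120)
print(f'~{estimate_tokens(logs)} tokens of logs, before any system prompt or chat history')
```

<p>Run a check like that against real logs, and you can see how 120 or so lines, plus the system prompt and chat history, can add up to nearly 5,000 tokens and overflow a 4096-token window.<\/p>\n<p>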
Let\u2019s turn it.<\/p>\n<p>From within LMStudio, head over to \u201cMy Models\u201d and click to open up the configuration settings interface for the model.<\/p>\n<figure id=\"attachment_476327\" aria-describedby=\"caption-attachment-476327\" style=\"width: 800px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden size-full wp-image-476327\" data-lazy-type=\"image\" src=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-03.png\" alt=\"Accessing Model Settings\" width=\"800\" height=\"198\" srcset=\"\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-476327\" src=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-03.png\" alt=\"Accessing Model Settings\" width=\"800\" height=\"198\" srcset=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-03-300x74.png 300w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-03-768x190.png 768w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-03.png 800w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><\/noscript><figcaption id=\"caption-attachment-476327\" class=\"wp-caption-text\">Accessing Model Settings<\/figcaption><\/figure>\n<p>You\u2019ll get a chance to view all the knobs that AI models have. 
And as I mentioned, there are a lot of them.<\/p>\n<figure id=\"attachment_476328\" aria-describedby=\"caption-attachment-476328\" style=\"width: 500px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden size-full wp-image-476328\" data-lazy-type=\"image\" src=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-04.png\" alt=\"Default configuration settings\" width=\"500\" height=\"538\" srcset=\"\" sizes=\"auto, (max-width: 500px) 100vw, 500px\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-476328\" src=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-04.png\" alt=\"Default configuration settings\" width=\"500\" height=\"538\" srcset=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-04-279x300.png 279w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-04.png 500w\" sizes=\"auto, (max-width: 500px) 100vw, 500px\"\/><\/noscript><figcaption id=\"caption-attachment-476328\" class=\"wp-caption-text\">Default configuration settings<\/figcaption><\/figure>\n<p>But the one we care about right now is the Context Length. We can see that the default length for this model is 4096 tokens. But it supports up to 8192 tokens. 
Let\u2019s max it out!<\/p>\n<figure id=\"attachment_476329\" aria-describedby=\"caption-attachment-476329\" style=\"width: 500px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden size-full wp-image-476329\" data-lazy-type=\"image\" src=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-05.png\" alt=\"Maxing out the Context Length\" width=\"500\" height=\"200\" srcset=\"\" sizes=\"auto, (max-width: 500px) 100vw, 500px\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-476329\" src=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-05.png\" alt=\"Maxing out the Context Length\" width=\"500\" height=\"200\" srcset=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-05-300x120.png 300w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-05.png 500w\" sizes=\"auto, (max-width: 500px) 100vw, 500px\"\/><\/noscript><figcaption id=\"caption-attachment-476329\" class=\"wp-caption-text\">Maxing out the Context Length<\/figcaption><\/figure>\n<p>LMStudio provides a helpful warning and probable reason for why the model doesn\u2019t default to the max. The context length takes memory and resources. And raising it to \u201ca high value\u201d can impact performance and usage. So if this model had a max length of 40,960 tokens (the Qwen3 model I use sometimes has that high of a max), you might not want to just max it out right away. Instead, increase it by a little at a time to find the sweet spot: a context length big enough for the job, but not oversized.<\/p>\n<p>As network engineers, we are used to fine-tuning knobs for timers, frame sizes, and so many other things. 
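<\/p>\n<p>And if the data still won\u2019t fit even at a model\u2019s maximum context length, the other end of this knob is on our side of the chat: split the input. Here is a minimal sketch (using the same rough four-characters-per-token estimate, so treat the numbers as approximate) that chunks a log to fit a given context length while reserving room for the prompt and the model\u2019s reply:<\/p>

```python
# Sketch: split a large log into chunks sized for a model's context window.
# 'reserved' holds back tokens for the system prompt and the model's reply.
# Token math uses the rough 4-characters-per-token heuristic.
def chunk_log(lines, context_len=8192, reserved=2048):
    budget_chars = (context_len - reserved) * 4  # characters we can spend per chunk
    chunks, current, size = [], [], 0
    for ln in lines:
        if current and size + len(ln) + 1 > budget_chars:
            chunks.append('\n'.join(current))
            current, size = [], 0
        current.append(ln)
        size += len(ln) + 1  # +1 for the newline
    if current:
        chunks.append('\n'.join(current))
    return chunks

logs = [f'{i:05d}: %OSPF-5-ADJCHG: neighbor state change' for i in range(2000)]
for i, chunk in enumerate(chunk_log(logs)):
    print(f'chunk {i}: ~{len(chunk) // 4} tokens')
```

<p>Chunking trades one holistic answer for several partial ones, so when the hardware allows it, raising the context length is usually the better first move.<\/p>\n<p>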
This is right up our alley!<\/p>\n<p>Once you\u2019ve updated your context length, you\u2019ll need to \u201cEject\u201d and \u201cReload\u201d the model for the setting to take effect. But once that\u2019s done, it\u2019s time to take advantage of the change we\u2019ve made!<\/p>\n<figure id=\"attachment_476335\" aria-describedby=\"caption-attachment-476335\" style=\"width: 800px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden size-full wp-image-476335\" data-lazy-type=\"image\" src=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-06.png\" alt=\"The extra context length allows the AI to analyze the data\" width=\"800\" height=\"1666\" srcset=\"\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-476335\" src=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-06.png\" alt=\"The extra context length allows the AI to analyze the data\" width=\"800\" height=\"1666\" srcset=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-06-144x300.png 144w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-06-492x1024.png 492w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-06-768x1599.png 768w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-06-738x1536.png 738w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-06.png 800w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><\/noscript><figcaption id=\"caption-attachment-476335\" class=\"wp-caption-text\">AI fully analyzes the logs<\/figcaption><\/figure>\n<p>And look at that, with the larger context window, the AI assistant was able to go through the logs and give us a nice write-up about 
what they show.<\/p>\n<p>I particularly like the shade it threw my way: <em>\u201c\u2026consider seeking assistance from \u2026 a qualified network engineer.\u201d <\/em>Well played, AI. Well played.<\/p>\n<p>But bruised ego aside, we can continue the AI assisted troubleshooting with something like this.<\/p>\n<figure id=\"attachment_476336\" aria-describedby=\"caption-attachment-476336\" style=\"width: 800px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden size-full wp-image-476336\" data-lazy-type=\"image\" src=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-07.png\" alt=\"AI helps put a timeline of the problem together\" width=\"800\" height=\"1246\" srcset=\"\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-476336\" src=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-07.png\" alt=\"AI helps put a timeline of the problem together\" width=\"800\" height=\"1246\" srcset=\"https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-07-193x300.png 193w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-07-657x1024.png 657w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-07-768x1196.png 768w, https:\/\/storage.googleapis.com\/blogs-images-new\/ciscoblogs\/1\/2025\/07\/ai-knobs-context-07.png 800w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><\/noscript><figcaption id=\"caption-attachment-476336\" class=\"wp-caption-text\">The AI Assistant puts a timeline together<\/figcaption><\/figure>\n<p>And we\u2019re off to the races. 
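<\/p>\n<p>If you\u2019d rather script this workflow than paste logs into the chat window, LMStudio can also expose a local OpenAI-compatible API server (when you enable it, it listens on http:\/\/localhost:1234\/v1 by default). Here\u2019s a sketch; the model name and prompts are purely illustrative:<\/p>

```python
import json
import urllib.request

# Sketch: drive the same troubleshooting flow against LMStudio's local
# OpenAI-compatible server. Assumes you've enabled the server in LMStudio;
# the model name below is illustrative -- use whatever model you've loaded.
def build_messages(log_text: str) -> list:
    return [
        {'role': 'system',
         'content': 'You are a network troubleshooting assistant. '
                    'Analyze these logs and summarize likely root causes.'},
        {'role': 'user', 'content': log_text},
    ]

def ask_local_model(log_text: str,
                    url: str = 'http://localhost:1234/v1/chat/completions') -> str:
    body = json.dumps({'model': 'local-model',
                       'messages': build_messages(log_text)}).encode()
    req = urllib.request.Request(url, data=body,
                                 headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)['choices'][0]['message']['content']

# Once the server is running, you could call, for example:
# print(ask_local_model(open('switch-logs.txt').read()))
```

<p>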
We\u2019ve been able to leverage our AI assistant to:<\/p>\n<ol>\n<li>Process a significant amount of log and debug data to identify possible issues<\/li>\n<li>Develop a timeline of the problem (one that will be super useful in the help desk ticket and root cause analysis documents)<\/li>\n<li>Identify next steps for our troubleshooting efforts<\/li>\n<\/ol>\n<h2><strong>All stories must end\u2026<\/strong><\/h2>\n<p>And there you have it, our first AI Nerd Knob\u2014Context Length. Let\u2019s review what we learned:<\/p>\n<ol>\n<li>AI models have a \u201cworking memory\u201d that is referred to as \u201ccontext length.\u201d<\/li>\n<li>Context length is measured in \u201ctokens.\u201d<\/li>\n<li>Oftentimes an AI model will support a higher context length than its default setting.<\/li>\n<li>Increasing the context length requires more resources, so raise it gradually; don\u2019t just max it out.<\/li>\n<\/ol>\n<p>Now, depending on what AI tool you\u2019re using, you may NOT be able to adjust the context length. If you\u2019re using a public AI like ChatGPT, Gemini, or Claude, the context length will depend on the subscription and models you have access to. However, there most definitely IS a context length that factors into how much \u201cworking memory\u201d the AI tool has. Being aware of that fact, and of its impact on how you can use AI, is important. Even if the knob in question is behind a lock and key. \ud83d\ude42<\/p>\n<p>If you enjoyed this look under the hood of AI and would like to learn about more options, please let me know in the comments: Do you have a favorite \u201cknob\u201d you like to turn? Share it with all of us. Until next time!<\/p>\n<p><em>PS\u2026 If you\u2019d like to learn more about using LMStudio, my buddy Jason Belk put together a free tutorial called Run Your Own LLM Locally For Free and with Ease that can get you started very quickly. 
Check it out!<\/em><\/p>\n<p style=\"text-align: center;\"><iframe class=\"lazy lazy-hidden\" loading=\"lazy\" title=\"How To Run a Large Language Model (LLM) Locally and with Ease! | Snack Minute Ep. 182\" width=\"640\" height=\"360\" data-lazy-type=\"iframe\" data-src=\"https:\/\/www.youtube-nocookie.com\/embed\/RfdeQmlT_fw?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen=\"\"><\/iframe><noscript><iframe loading=\"lazy\" title=\"How To Run a Large Language Model (LLM) Locally and with Ease! | Snack Minute Ep. 182\" width=\"640\" height=\"360\" src=\"https:\/\/www.youtube-nocookie.com\/embed\/RfdeQmlT_fw?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen=\"\"><\/iframe><\/noscript><\/p>\n<p>\u00a0<\/p>\n<p style=\"text-align: center;\" data-ttstextid=\"75\">Sign up for\u00a0Cisco U.\u00a0| Join the\u202f\u00a0Cisco Learning Network\u202ftoday for free.<\/p>\n<blockquote data-ttstextid=\"76\">\n<h2 style=\"text-align: center;\" data-ttstextid=\"77\"><strong>Learn with Cisco<\/strong><\/h2>\n<h3 style=\"text-align: center;\" data-ttstextid=\"78\"><strong><a href=\"https:\/\/twitter.com\/LearningatCisco\" target=\"_blank\" rel=\"noopener\">X<\/a>\u202f|\u202fThreads\u00a0|\u00a0Facebook\u202f|\u202fLinkedIn\u202f|\u202fInstagram<\/strong><strong>\u202f|\u202fYouTube<\/strong><\/h3>\n<\/blockquote>\n<p style=\"text-align: center;\" data-ttstextid=\"79\">Use\u202f\u00a0<strong>#CiscoU\u00a0<\/strong>and\u00a0<strong>#CiscoCert<\/strong>\u202fto join the conversation.<\/p>\n<p data-ttstextid=\"79\">Read next:<\/p>\n<blockquote class=\"wp-embedded-content\" data-secret=\"Tae1KFnedT\">\n<p>Creating a NetAI Playground for Agentic AI 
Experimentation<\/p>\n<\/blockquote>\n<p><iframe loading=\"lazy\" class=\"lazy lazy-hidden wp-embedded-content\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u201cCreating a NetAI Playground for Agentic AI Experimentation\u201d \u2014 Cisco Blogs\" data-lazy-type=\"iframe\" data-src=\"https:\/\/blogs.cisco.com\/learning\/creating-a-netai-playground-for-agentic-ai-experimentation\/embed#?secret=IhNnOcoeW4#?secret=Tae1KFnedT\" data-secret=\"Tae1KFnedT\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\"><\/iframe><noscript><iframe loading=\"lazy\" class=\"wp-embedded-content\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u201cCreating a NetAI Playground for Agentic AI Experimentation\u201d \u2014 Cisco Blogs\" src=\"https:\/\/blogs.cisco.com\/learning\/creating-a-netai-playground-for-agentic-ai-experimentation\/embed#?secret=IhNnOcoeW4#?secret=Tae1KFnedT\" data-secret=\"Tae1KFnedT\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\"><\/iframe><\/noscript><\/p>\n<blockquote class=\"wp-embedded-content\" data-secret=\"YfZpMITLpi\">\n<p>Take an AI Break and Let the Agent Heal the Network<\/p>\n<\/blockquote>\n<p><iframe loading=\"lazy\" class=\"lazy lazy-hidden wp-embedded-content\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u201cTake an AI Break and Let the Agent Heal the Network\u201d \u2014 Cisco Blogs\" data-lazy-type=\"iframe\" data-src=\"https:\/\/blogs.cisco.com\/learning\/let-the-agent-heal-the-network\/embed#?secret=JFqTnl3jyL#?secret=YfZpMITLpi\" data-secret=\"YfZpMITLpi\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\"><\/iframe><noscript><iframe loading=\"lazy\" class=\"wp-embedded-content\" sandbox=\"allow-scripts\" 
security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u201cTake an AI Break and Let the Agent Heal the Network\u201d \u2014 Cisco Blogs\" src=\"https:\/\/blogs.cisco.com\/learning\/let-the-agent-heal-the-network\/embed#?secret=JFqTnl3jyL#?secret=YfZpMITLpi\" data-secret=\"YfZpMITLpi\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\"><\/iframe><\/noscript><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Alright, my friends, I\u2019m back with another post based on my learnings and exploration of AI and how it\u2019ll fit into our work as network 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":15739,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5],"tags":[],"class_list":["post-15738","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-technology"],"_links":{"self":[{"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/posts\/15738","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/comments?post=15738"}],"version-history":[{"count":0,"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/posts\/15738\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/media\/15739"}],"wp:attachment":[{"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/media?parent=15738"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/categories?post=15738"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/tags?post=15738"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}