{"id":16402,"date":"2025-12-12T12:16:25","date_gmt":"2025-12-12T12:16:25","guid":{"rendered":"https:\/\/dmsretail.com\/RetailNews\/breaking-the-jar-hardening-pickle-file-scanners-with-structure-aware-fuzzing\/"},"modified":"2025-12-12T12:16:25","modified_gmt":"2025-12-12T12:16:25","slug":"breaking-the-jar-hardening-pickle-file-scanners-with-structure-aware-fuzzing","status":"publish","type":"post","link":"https:\/\/dmsretail.com\/RetailNews\/breaking-the-jar-hardening-pickle-file-scanners-with-structure-aware-fuzzing\/","title":{"rendered":"Breaking the Jar: Hardening Pickle File Scanners with Structure-Aware Fuzzing"},"content":{"rendered":"<p> <p><a href=\"https:\/\/dmsretail.com\/online-workshops-list\/\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-496\" src=\"https:\/\/dmsretail.com\/RetailNews\/wp-content\/uploads\/2022\/05\/RETAIL-ONLINE-TRAINING-728-X-90.png\" alt=\"Retail Online Training\" width=\"729\" height=\"91\" srcset=\"https:\/\/dmsretail.com\/RetailNews\/wp-content\/uploads\/2022\/05\/RETAIL-ONLINE-TRAINING-728-X-90.png 729w, https:\/\/dmsretail.com\/RetailNews\/wp-content\/uploads\/2022\/05\/RETAIL-ONLINE-TRAINING-728-X-90-300x37.png 300w\" sizes=\"auto, (max-width: 729px) 100vw, 729px\" \/><\/a><\/p><br \/>\n<\/p>\n<div>\n<p>Artificial intelligence and machine learning (AI\/ML) models are increasingly shared across organizations, fine-tuned, and deployed in production systems. Cisco\u2019s AI Defense offering includes a model file scanning tool designed to help organizations detect and mitigate risks in AI supply chains by verifying their integrity, scanning for malicious payloads, and ensuring compliance before deployment. 
Strengthening our ability to detect and neutralize these threats is critical for safeguarding both AI model integrity and operational security.<\/p>\n<p>Python pickle files comprise a large share of ML model files, but they introduce significant security risk: because pickles can execute arbitrary code when loaded, even a single untrusted file can compromise an entire inference environment. The security risk is compounded by the open and accessible nature of model files in the AI developer ecosystem, where users can download and execute model files from public repositories with minimal verification of their safety. To address this concern, developers have created security scanners like ModelScan, fickling, and picklescan to detect malicious pickle files before they\u2019re loaded. As security tool developers ourselves, we know that ensuring these tools are robust requires continuous testing and validation.<\/p>\n<p>That\u2019s harder to accomplish than it sounds. The problem is that many of the issues filed against pickle security tools involve detection bypasses (i.e., methods used by attackers to evade analysis). These adversarial samples exploit edge cases in scanner logic, and manual test creation can\u2019t match the breadth needed to surface them all.<\/p>\n<p>Today, we\u2019re unveiling and open sourcing pickle-fuzzer, a structure-aware fuzzer that generates adversarial pickle files to test scanner robustness. At Cisco, we\u2019re committed to uplifting the ML community and advancing AI security for everyone. Securing the AI supply chain is a critical part of this mission, ensuring that every model, dependency, and artifact in the ecosystem can be trusted. By openly sharing tools like pickle-fuzzer, we aim to strengthen the entire ecosystem of AI security defenses. When we find and fix these issues collaboratively, everyone who relies on pickle scanners benefits. 
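<\/p>
<p>As context for why a single malicious file is so dangerous: the __reduce__ protocol lets a pickle name any importable callable to invoke at load time. The benign sketch below records a call instead of running a shell command, but a real payload would point at something like os.system:<\/p>

```python
import pickle

# Benign stand-in for a malicious payload: __reduce__ returns a
# (callable, args) pair that pickle.loads invokes during deserialization.
calls = []

def record(message):
    calls.append(message)
    return message

class Payload:
    def __reduce__(self):
        # A real attack would name a dangerous callable here instead.
        return (record, ('executed at load time',))

blob = pickle.dumps(Payload())
pickle.loads(blob)   # deserializing the blob calls record(...)
print(calls)         # ['executed at load time']
```

<p>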
Our team believes the best way to improve AI security is through collaboration. This means openly sharing tools, testing approaches, and vulnerability findings across the ecosystem.<\/p>\n<h2><strong>Building robustness from within<\/strong><\/h2>\n<p>When developing AI Defense\u2019s model file scanning tool, one of our goals was to ensure that its pickle scanner could withstand real-world adversarial inputs. Traditional testing methods, such as using known malicious samples or carefully crafted test cases, only validate against threats we already understand. But attackers rarely follow known patterns. They probe the unknown, exploiting edge cases, malformed structures, and obscure opcode combinations that typical scanners were never designed to handle.<\/p>\n<p>To truly harden our system, we needed a way to automatically explore the entire landscape of possible pickle files, including the strange, malformed, and deliberately adversarial ones. That\u2019s when we decided to build a fuzzer!<\/p>\n<h2><strong>Building pickle-fuzzer<\/strong><\/h2>\n<p>Fuzzing is a software testing technique that involves generating random inputs to determine if they crash or cause other unexpected behavior in the target program. Originating in the late 1980s at the University of Wisconsin-Madison, fuzzing has become a proven technique for hardening software. For simple file formats, random byte mutations often suffice to find bugs. But pickle isn\u2019t a simple format. It\u2019s a stack-based virtual machine with 100+ opcodes across six protocol versions (0-5), plus a memo dictionary for tracking object references. Naive fuzzing approaches that flip random bits will produce mostly invalid pickle files that will fail validation during parsing, before exercising any interesting code paths.<\/p>\n<p>The challenge was finding a middle ground. 
We could hand-craft test cases, but that\u2019s exactly what we were trying to move beyond: it\u2019s slow, limited by our imagination, and can\u2019t easily explore the full input space. We could use traditional mutation-based fuzzing on existing pickle files, but mutations that don\u2019t understand pickle semantics would likely break the structural constraints and fail early. We needed an approach that understood pickle\u2019s internal state constraints. That left us with structure-aware fuzzing.<\/p>\n<p>Structure-aware fuzzing generates pickle files that respect the format\u2019s rules:<\/p>\n<ul>\n<li>Maintains a correct representation of the stack and memo dictionary;<\/li>\n<li>Respects protocol version constraints for opcodes; and<\/li>\n<li>Produces diverse and unexpected combinations despite these constraints<\/li>\n<\/ul>\n<p>We wanted to create adversarial inputs that were valid enough to reach deep into scanner logic, but weird enough to trigger edge cases. That\u2019s what pickle-fuzzer does.<\/p>\n<h2><strong>Inside pickle-fuzzer<\/strong><\/h2>\n<p>To generate valid pickles, pickle-fuzzer implements its own pickle virtual machine (PVM) with its own stack and memo dictionary. The generation process works like this:<\/p>\n<ul>\n<li>Build a list of valid opcodes based on the current protocol version, stack state, and memo state<\/li>\n<li>Randomly pick an opcode from that list<\/li>\n<li>Optionally mutate the opcode\u2019s arguments based on their type and PVM constraints<\/li>\n<li>Emit the opcode<\/li>\n<li>Update the stack and memo state based on the opcode\u2019s side effects<\/li>\n<li>Repeat until the desired pickle size is reached<\/li>\n<\/ul>\n<p>With 100% opcode coverage across all protocol versions, pickle-fuzzer can generate thousands of diverse pickle files per second, each one exercising different code paths in scanners. 
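<\/p>
<p>The generation process described above can be sketched in a few lines of Python. The toy generator below is a drastic simplification of pickle-fuzzer\u2019s Rust implementation (a handful of opcodes instead of 100+, protocol 2 only), but it shows the essential idea: track the stack and memo so that every emitted opcode is legal in the current PVM state:<\/p>

```python
import pickle
import random

# Toy structure-aware pickle generator: a tiny opcode subset, with stack
# depth and memo contents tracked so every emitted opcode's preconditions hold.
PUSH_OPS = [b'N', b'K\x05', b']', b'}']   # NONE, BININT1(5), EMPTY_LIST, EMPTY_DICT

def generate(seed, n_ops=25):
    rng = random.Random(seed)
    out = bytearray(b'\x80\x02')          # PROTO 2 header
    depth, memo_size = 0, 0
    for _ in range(n_ops):
        # 1. Build the list of opcodes valid in the current PVM state.
        candidates = [(1, op) for op in PUSH_OPS]
        if depth >= 1:
            candidates.append((-1, b'0'))                          # POP
            if memo_size < 256:
                candidates.append((0, b'q' + bytes([memo_size])))  # BINPUT
        if memo_size:
            candidates.append((1, b'h' + bytes([rng.randrange(memo_size)])))  # BINGET
        # 2. Pick one, 3. emit it, 4. update the stack and memo state.
        delta, op = rng.choice(candidates)
        if op.startswith(b'q'):
            memo_size += 1
        depth += delta
        out += op
    while depth > 1:                      # drain to one object for STOP
        out += b'0'
        depth -= 1
    if depth == 0:
        out += b'N'
    return bytes(out + b'.')              # STOP

# Every generated pickle is structurally valid and loads cleanly.
for seed in range(100):
    pickle.loads(generate(seed))
```

<p>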
We immediately put it to work.<\/p>\n<h2><strong>Hardening AI Defense\u2019s model file scanner<\/strong><\/h2>\n<p>We ran pickle-fuzzer against our model file scanning tool first. Very quickly, the fuzzer found edge cases in our memo handling and in our logic for unhashable byte arrays. Unusual but valid pickle files could crash the scanner or cause it to exit early before finishing its security analysis. Each bug was a potential way for attackers to bypass our analysis.<\/p>\n<p class=\"p1\" style=\"text-align: center;\">Figure 1 below shows the memo key validation sample that bypassed our detections before we hardened our scanner:<img fetchpriority=\"high\" decoding=\"async\" class=\"lazy lazy-hidden aligncenter wp-image-483131 \" data-lazy-type=\"image\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/Figure1.png\" alt=\"\" width=\"1022\" height=\"155\"\/><noscript><img fetchpriority=\"high\" decoding=\"async\" class=\"aligncenter wp-image-483131 \" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/Figure1.png\" alt=\"\" width=\"1022\" height=\"155\"\/><\/noscript><\/p>\n<p style=\"text-align: center;\">Figure 2 below shows the unhashable byte array confusion sample crashing our detections before we hardened our scanner:<br \/><img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden aligncenter wp-image-483160 size-large\" data-lazy-type=\"image\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/Fig2-1024x143.png\" alt=\"\" width=\"1024\" height=\"143\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-483160 size-large\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/Fig2-1024x143.png\" alt=\"\" width=\"1024\" height=\"143\"\/><\/noscript><\/p>\n<p style=\"text-align: center;\">We resolved both issues by adding proper validation and by ensuring the scanner continues processing even when it encounters unexpected input. 
This reinforced the need for our scanner to handle unusual data gracefully instead of failing. Figures 3 and 4 below demonstrate that the scanner now successfully detects both sample files.<br \/><img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden aligncenter wp-image-483161 size-medium_large\" data-lazy-type=\"image\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/Fig3-768x431.png\" alt=\"\" width=\"768\" height=\"431\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-483161 size-medium_large\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/Fig3-768x431.png\" alt=\"\" width=\"768\" height=\"431\"\/><\/noscript><span style=\"text-align: center;\">Figure 3. AI Defense\u2019s model file scan results for memo key error proof of concept<\/span><\/p>\n<p style=\"text-align: center;\"><img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden aligncenter size-medium_large wp-image-483165\" data-lazy-type=\"image\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/Fig4-1-768x431.png\" alt=\"\" width=\"768\" height=\"431\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium_large wp-image-483165\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/Fig4-1-768x431.png\" alt=\"\" width=\"768\" height=\"431\"\/><\/noscript>Figure 4. AI Defense\u2019s model file scan results for hashing error proof of concept<\/p>\n<h2><strong>Extending to the community<\/strong><\/h2>\n<p>After strengthening our internal tooling, we recognized that pickle-fuzzer could also help the broader AI\/ML security ecosystem. Popular open source scanners such as ModelScan, Fickling, and Picklescan are foundational to many organizations\u2019 pickle security workflows, including platforms like Hugging Face, which integrate third-party solutions. 
We ran our fuzzer against these scanners to uncover potential weaknesses and help improve their resilience.<\/p>\n<p>The fuzzer revealed that similar edge cases existed across the ecosystem, a pattern that highlights the inherent complexity of safely parsing pickle files. When multiple independent implementations encounter the same challenges, it points to areas where the problem space itself is difficult. After fuzzing and triage, we found that the scanners shared a few similar issues, centered on two related patterns:<\/p>\n<p><strong>Memo Key Validation<\/strong>: The scanners didn\u2019t check whether memo keys existed before accessing them. Referencing a non-existent memo key would cause the scanner to crash or exit before completing its security analysis.<\/p>\n<p><strong>Unhashable Bytearray Confusion<\/strong>: This technique exploits how a pickle scanner handles unhashable objects from the memo dictionary. When a BYTEARRAY8 opcode places a bytearray in the memo, some scanners later crash during STACK_GLOBAL processing because they try to add the unhashable bytearray to a Python set for later processing. This crash disrupts analysis and reveals a weakness in input validation.<\/p>\n<p>Using the proof-of-concept code shared in the appendix (Figures 10 and 11 below), we generated pickle samples and uploaded them to Hugging Face\u2019s repository for automated scanning.<\/p>\n<h2><strong>Hugging Face\u2019s scanner test results<\/strong><\/h2>\n<p>As shown in Figures 5 and 6 below, we observed that even industry-grade tools stayed \u201cQueued\u201d indefinitely, while ClamAV flagged the files as suspicious. 
This outcome highlights how our fuzzer-generated payloads can expose stability and detection gaps in existing AI model security pipelines, showing that even modern scanners can struggle with unconventional or adversarial pickle structures.<\/p>\n<p style=\"text-align: center;\"><strong>Sample1: key_error.pkl:<br \/><img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden aligncenter wp-image-483199 size-large\" data-lazy-type=\"image\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/Fig5-1024x509.png\" alt=\"\" width=\"1024\" height=\"509\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-483199 size-large\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/Fig5-1024x509.png\" alt=\"\" width=\"1024\" height=\"509\"\/><\/noscript><\/strong><\/p>\n<p style=\"text-align: center;\">Figure 5. Hugging Face scan results for the key error proof of concept<\/p>\n<p style=\"text-align: center;\"><strong>Sample2: unhash_byte.pkl:<br \/><img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden aligncenter wp-image-483200 size-large\" data-lazy-type=\"image\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/Fig6-1024x542.png\" alt=\"\" width=\"1024\" height=\"542\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-483200 size-large\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/Fig6-1024x542.png\" alt=\"\" width=\"1024\" height=\"542\"\/><\/noscript><\/strong>Figure 6. Hugging Face scan results for the hashing error proof of concept<\/p>\n<p>Armed with our findings and analysis, we reached out to the maintainers to report what we found. The response from the open source community was excellent! Two of the three teams were incredibly responsive and collaborative in addressing the issues.<\/p>\n<p>The issues have been fixed in both fickling and picklescan, and patched versions are now available. 
If you or your organization relies on either tool, we recommend updating to the unaffected versions below:<\/p>\n<ul>\n<li>fickling v0.1.5<\/li>\n<li>picklescan v0.0.32<\/li>\n<\/ul>\n<p>This collaborative approach strengthens the entire ML security ecosystem. When security tools are more robust, everyone benefits.<\/p>\n<h2><strong>Open-sourcing pickle-fuzzer<\/strong><\/h2>\n<p>Today, we\u2019re releasing pickle-fuzzer as an open source tool under the Apache 2.0 license. Our goal is to help the entire ML security community build more robust and secure tools.<\/p>\n<h3><strong>Getting started<\/strong><\/h3>\n<p>Installation is straightforward if you have Rust installed:\u00a0cargo install pickle-fuzzer. You can also build from source at https:\/\/github.com\/cisco-ai-defense\/pickle-fuzzer<\/p>\n<p>There are a few ways pickle-fuzzer can be used, depending on your needs. The command line interface generates its own pickles from scratch, while the Python and Rust APIs allow you to integrate it into popular coverage-guided fuzzers like Atheris. Both options are covered below.<\/p>\n<p><strong>Command line interface<\/strong><\/p>\n<p style=\"text-align: center;\">The command line interface also supports several options to control the generation process:<br \/><img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden aligncenter wp-image-483168 \" data-lazy-type=\"image\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/e2xbStxA-figure-7-1024x273.png\" alt=\"\" width=\"952\" height=\"254\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-483168 \" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/e2xbStxA-figure-7-1024x273.png\" alt=\"\" width=\"952\" height=\"254\"\/><\/noscript>Figure 7. 
pickle-fuzzer\u2019s command line interface<br \/>Pickle-fuzzer supports single pickle file generation and corpus generation with optional mutations and pickle complexity controls.<\/p>\n<p style=\"text-align: center;\"><img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden aligncenter wp-image-483169 \" data-lazy-type=\"image\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/KCJRp6vI-figure-8.png\" alt=\"\" width=\"952\" height=\"317\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-483169 \" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/KCJRp6vI-figure-8.png\" alt=\"\" width=\"952\" height=\"317\"\/><\/noscript><span style=\"text-align: center;\">Figure 8. example pickle-fuzzer execution for single-file and batch generation<\/span><\/p>\n<p><strong>Integrate with Atheris<\/strong><\/p>\n<p style=\"text-align: center;\">Pickle-fuzzer allows you to quickly start fuzzing your own scanners with minimal setup. The following example shows how to integrate pickle-fuzzer with Atheris, a popular coverage-guided fuzzer for Python:<img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden aligncenter size-full wp-image-483170\" data-lazy-type=\"image\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/GZHqNgxq-figure-9.png\" alt=\"\" width=\"618\" height=\"549\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-483170\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/GZHqNgxq-figure-9.png\" alt=\"\" width=\"618\" height=\"549\"\/><\/noscript><span style=\"text-align: center;\">Figure 9. basic example showing pickle-fuzzer integration with the Atheris fuzzing framework<\/span><\/p>\n<h2><strong>Key takeaways<\/strong><\/h2>\n<p>Building pickle-fuzzer taught us a few things about securing AI\/ML supply chains:<\/p>\n<ul>\n<li>Structure-aware fuzzing works. Random bit flipping produces quickly rejected input. 
Understanding the format and generating valid but unusual inputs exercises the deep logic where bugs hide.<\/li>\n<li>Shared challenges need shared tools. When we found similar bugs across multiple scanners, it confirmed that pickle parsing is difficult to get right. Open sourcing the fuzzer helps everyone tackle these challenges together.<\/li>\n<li>Security tools need testing too. Tools meant to catch attacks need to be as robust as possible in service of the systems they\u2019re protecting.<\/li>\n<\/ul>\n<h2><strong>Future work<\/strong><\/h2>\n<p>We\u2019re continuing to improve pickle-fuzzer based on what we learn from using it. Some areas for further research that we\u2019re exploring include:<\/p>\n<ul>\n<li>Expanding mutation strategies to target specific vulnerability classes<\/li>\n<li>Adding support for other serialization formats beyond pickle<\/li>\n<li>CI\/CD pipeline support for continuous fuzzing (here is how we do it for pickle-fuzzer using cargo-fuzz)<\/li>\n<\/ul>\n<p>We welcome contributions from the community. If you find bugs in pickle-fuzzer or have ideas for improvements, open an issue or PR on GitHub.<\/p>\n<h2><strong>Put pickle-fuzzer to work <\/strong><\/h2>\n<p>Pickle-fuzzer started as an internal tool to harden AI Defense\u2019s model file scanning tool. By open sourcing it, we\u2019re hoping it helps others build more robust pickle security tools. The AI\/ML supply chain has real security challenges, and we all benefit when the tools protecting it get stronger.<\/p>\n<p>If you\u2019re building or using pickle scanners, give pickle-fuzzer a try. 
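<\/p>
<p>A trial run needs very little scaffolding. The names in the sketch below are stand-ins: pickletools.dis from the standard library plays the scanner, and random bytes play the corpus; a real run would feed pickle-fuzzer\u2019s structure-aware output to your own scanner\u2019s entry point. Anything that raises instead of finishing its analysis is a candidate bug:<\/p>

```python
import io
import pickletools
import random

# Stand-in scanner: pickletools.dis walks the opcode stream much like a
# real static pickle scanner would.
def scan(blob):
    pickletools.dis(blob, out=io.StringIO())

rng = random.Random(0)
crashes = []
for _ in range(500):
    # Stand-in corpus: random byte strings (a real run would use
    # pickle-fuzzer's structure-aware output instead).
    blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 32)))
    try:
        scan(blob)
    except Exception as exc:
        crashes.append((blob, type(exc).__name__))

print(len(crashes), 'of 500 inputs aborted the scan')
```

<p>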
Run it against your tools, see what breaks, and fix those bugs before attackers find them.<\/p>\n<p>To explore how we apply these principles in production, check out AI Defense\u2019s model file scanning tool, part of our <strong>AI Defense platform<\/strong> built to detect and neutralize threats across the AI\/ML lifecycle, from poisoned datasets to malicious serialized models.<\/p>\n<h3 style=\"text-align: left;\"><strong>Appendix:<\/strong><\/h3>\n<h3 style=\"text-align: center;\"><strong>Unhashable ByteArray Proof of Concept:<img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden aligncenter size-full wp-image-483172\" data-lazy-type=\"image\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/yEwAOqhz-figure-10.png\" alt=\"\" width=\"737\" height=\"657\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-483172\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/yEwAOqhz-figure-10.png\" alt=\"\" width=\"737\" height=\"657\"\/><\/noscript><\/strong><span style=\"text-align: center;\">Figure 10. python code snippet to produce hashing error proof of concept<br \/><\/span><\/h3>\n<p style=\"text-align: center;\"><strong>Memo Key Validation Proof of Concept:<img loading=\"lazy\" decoding=\"async\" class=\"lazy lazy-hidden aligncenter size-full wp-image-483173\" data-lazy-type=\"image\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/qJN2vbFH-figure-11.png\" alt=\"\" width=\"737\" height=\"624\"\/><noscript><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-483173\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/qJN2vbFH-figure-11.png\" alt=\"\" width=\"737\" height=\"624\"\/><\/noscript><\/strong>Figure 11. 
Python code snippet to produce key error proof of concept<\/p>\n<\/p><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence and machine learning (AI\/ML) models are increasingly shared across organizations, fine-tuned, and deployed in production systems. Cisco\u2019s AI Defense offering includes a model [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":16403,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5],"tags":[],"class_list":["post-16402","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-technology"],"_links":{"self":[{"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/posts\/16402","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/comments?post=16402"}],"version-history":[{"count":0,"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/posts\/16402\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"htt
ps:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/media\/16403"}],"wp:attachment":[{"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/media?parent=16402"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/categories?post=16402"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dmsretail.com\/RetailNews\/wp-json\/wp\/v2\/tags?post=16402"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}