Blog

  • What Is MCP and Why Does It Make MyStorey Possible?

    MCP — the Model Context Protocol — is the technical foundation that makes MyStorey work. Understanding what it is and why it matters helps explain why AI-powered WordPress management is finally practical rather than just theoretical.

    What Is MCP?

    The Model Context Protocol is an open standard, originally developed by Anthropic and now supported across the AI ecosystem, that defines how AI assistants communicate with external tools and data sources. Think of it as a universal adapter: if a service speaks MCP, any compatible AI can use it as a tool.

    Before MCP, getting an AI to interact with external systems required custom integrations for every AI provider, every service, and every use case. MCP standardizes that interface so a tool built for Claude also works with ChatGPT, Cursor, or any other MCP-compatible assistant — with no additional development work.
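
    To make the "universal adapter" idea concrete, here is a minimal sketch of an MCP server exposing a single tool, assuming the official MCP Python SDK and its FastMCP interface. The tool itself is a toy example, not MyStorey's code; the point is that any MCP-compatible assistant that connects to this server can discover and call the tool without bespoke integration work.

    ```python
    # Minimal MCP server sketch (illustrative, not MyStorey's actual code),
    # assuming the official MCP Python SDK (pip install mcp).
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-server")

    @mcp.tool()
    def greet(name: str) -> str:
        """A toy tool. Any MCP-compatible assistant can discover and call it."""
        return f"Hello, {name}!"

    if __name__ == "__main__":
        # Runs the server; Claude, Cursor, or any other MCP client connects to it
        # the same way, with no per-client integration work.
        mcp.run()
    ```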

    How MyStorey Uses MCP

    MyStorey implements an MCP server that exposes WordPress content management capabilities as a set of structured tools. When you connect MyStorey to your AI assistant, the assistant gains access to tools like:

    • create_post — write and publish a post with full metadata
    • update_page — change the content of any WordPress page
    • import_image — download an image from a URL and add it to the media library
    • update_seo — set SEO title, meta description, and focus keyword
    • create_menu — build a navigation menu with typed items
    • update_site_settings — change the site title and tagline

    The AI decides which tools to call based on your instructions, assembles the right parameters, handles the API calls, and reports back. From your perspective, you’re just having a conversation. Under the hood, structured tool calls are being made against your WordPress site’s REST API via an authenticated MCP session.
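
    For a sense of what those structured tool calls look like at the protocol level, here is the approximate shape of a single MCP tools/call request, written as a Python dict. The create_post argument names are illustrative assumptions rather than MyStorey's documented schema.

    ```python
    # Approximate shape of one MCP tool invocation (a JSON-RPC 2.0 request).
    # The create_post argument names are illustrative assumptions.
    tool_call = {
        "jsonrpc": "2.0",
        "id": 42,
        "method": "tools/call",
        "params": {
            "name": "create_post",
            "arguments": {
                "title": "Hello from MCP",
                "content": "<p>Drafted and published by an AI assistant.</p>",
                "status": "publish",
            },
        },
    }
    ```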

    Why MCP Changes Things

    The significance of MCP isn’t just technical efficiency — it’s about where AI intelligence can be applied. Right now, most AI use in publishing looks like this: ask an AI to write content, copy the output, paste it into WP admin. The AI is a better typewriter, but the human is still the hands.

    With MCP, the AI’s hands extend into the tools themselves. It can write and publish, research and schedule, plan a content calendar and execute it. The human role shifts from doing to directing — which is where most of the editorial value actually sits.

    Security Model

    MyStorey uses WordPress Application Passwords for authentication — a native WordPress feature that creates scoped, revocable credentials separate from your main admin password. The MCP token issued by MyStorey is specific to your account and site. You can revoke access at any time from either your MyStorey dashboard or your WordPress user profile, without changing your admin credentials.

    The plugin communicates over HTTPS. No content or credentials are stored on MyStorey’s servers beyond what’s needed for authentication token management.
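
    For the curious, an Application Password is sent as ordinary HTTP Basic credentials against the WordPress REST API. A minimal sketch of an authenticated request, with placeholder site URL, username, and password:

    ```python
    # Authenticated WordPress REST API request using an Application Password.
    # The site URL, username, and password are placeholders.
    import requests

    SITE = "https://example.com"
    AUTH = ("editor-user", "abcd efgh ijkl mnop qrst uvwx")  # Application Password

    resp = requests.post(
        f"{SITE}/wp-json/wp/v2/posts",
        auth=AUTH,  # sent as HTTP Basic credentials over HTTPS
        json={"title": "Draft via API", "status": "draft"},
        timeout=30,
    )
    resp.raise_for_status()
    print("Created post ID:", resp.json()["id"])
    ```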

    MCP Is Growing Fast

    As of early 2026, MCP is supported natively by Claude.ai, by ChatGPT via its tools ecosystem, and by Cursor, Windsurf, and a growing list of development tools. The protocol is on track to become the standard interface layer between AI assistants and external services — similar to what REST APIs did for web services a decade ago.

    MyStorey’s bet is that WordPress — the platform running roughly 43% of the web — needs a first-class MCP integration. That’s what the plugin delivers today, and it’s why the Pro roadmap focuses on WooCommerce: because e-commerce is the next frontier where AI-directed action creates the most leverage.

    Get Started

    The MyStorey plugin is free to install. The Starter plan starts at $7/month for one site, with all tools included. No long-term commitment required.

    Download the plugin and get started →

  • How This Entire Site Was Built by Talking to an AI: A MyStorey Walkthrough

    This entire website — every post, every page, every image, every category, tag, SEO title, and navigation menu item — was built in a single session by talking to Claude. No WordPress admin. No clicking through menus. No uploading media manually. Just conversation.

    That’s what MyStorey makes possible. Here’s exactly how it was done.

    Step 1: Install the Plugin

    MyStorey is a standard WordPress plugin. Download it from mystorey-staging.docmet.systems/wordpress, upload it via WP admin (Plugins → Add New → Upload), and activate it. The plugin adds a new menu item in WP admin where you’ll find your MCP connection credentials.

    Step 2: Create a MyStorey Account

    Head to mystorey-staging.docmet.systems/register and create a free account. In your dashboard, connect your WordPress site by entering its URL and an Application Password (generated in WP admin under Users → Profile → Application Passwords). MyStorey verifies the connection and issues you an MCP token.
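
    If you want to sanity-check the Application Password before pasting it into MyStorey, a single authenticated request against the REST API will confirm it works. The URL and credentials below are placeholders.

    ```python
    # Confirm an Application Password authenticates against your site.
    # Replace the URL, username, and password with your own values.
    import requests

    resp = requests.get(
        "https://example.com/wp-json/wp/v2/users/me",
        auth=("your-username", "abcd efgh ijkl mnop qrst uvwx"),
        timeout=30,
    )
    print(resp.status_code, resp.json().get("name"))
    ```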

    Step 3: Connect to Claude or ChatGPT

    In Claude.ai, go to Settings → Integrations → Add MCP Server. Paste in the MCP URL from your MyStorey dashboard. That’s it — Claude now has direct access to your WordPress site as a tool it can use.

    For ChatGPT, the process is similar via the Plugins or Tools settings, depending on your plan. Any MCP-compatible tool works the same way.

    Step 4: Just Talk to Your Site

    Once connected, you can tell your AI assistant things like:

    • “Write a 600-word post about the latest OpenAI funding round and publish it with a featured image from Unsplash.”
    • “Update the homepage to explain what this site is about.”
    • “Create categories for AI Industry, AI Policy, and AI Research.”
    • “Add SEO metadata to all published posts.”
    • “Build a navigation menu with Home, About, Contact, and a link to the plugin page.”
    • “Rename the site to ‘The AI Dispatch’ and update the tagline.”

    Every single one of those instructions was used to build this site. The AI handles the API calls, error handling, sequencing, and content creation. You just describe what you want.

    What MyStorey Can Do Today

    The current version of MyStorey supports the full content management lifecycle for a typical WordPress publication:

    • Posts: Create, update, publish, draft — with categories, tags, featured images, and status
    • Pages: Create and update static pages with full HTML content
    • Media: Import images from any public URL directly into the media library (see the sketch after this list)
    • SEO: Set SEO title, meta description, and focus keyword via Yoast, RankMath, or All in One SEO
    • Menus: Create menus, add items (pages, posts, custom URLs), assign to theme locations
    • Taxonomy: Create and manage categories and tags
    • Site settings: Update site title and tagline
    • Themes: List and activate installed themes
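
    As a rough picture of what the media import does under the hood, here is one way to push a remote image into the WordPress media library via the REST API. This is a sketch with placeholder URL and credentials; the plugin's actual implementation may differ.

    ```python
    # Sketch: import a remote image into the WordPress media library via the REST API.
    # URLs and credentials are placeholders; MyStorey's own implementation may differ.
    import requests

    SITE = "https://example.com"
    AUTH = ("editor-user", "abcd efgh ijkl mnop qrst uvwx")

    image = requests.get("https://example.com/some-public-image.jpg", timeout=30)
    image.raise_for_status()

    resp = requests.post(
        f"{SITE}/wp-json/wp/v2/media",
        auth=AUTH,
        headers={
            "Content-Disposition": 'attachment; filename="featured.jpg"',
            "Content-Type": "image/jpeg",
        },
        data=image.content,
        timeout=60,
    )
    resp.raise_for_status()
    print("Media ID:", resp.json()["id"])
    ```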

    What’s Coming

    The Pro plan (coming soon at $24/month) will add support for up to 3 sites and early access to WooCommerce tools — meaning AI-controlled product management, pricing updates, and inventory management. If you’re running an e-commerce operation, that’s where things get genuinely transformative.

    The Bottom Line

    If you manage a WordPress site and you use Claude or ChatGPT regularly, MyStorey removes the context switch between “thinking about content” and “publishing content.” The friction of WP admin — finding the right menu, remembering where settings live, uploading images one at a time — disappears. Your AI becomes your content team.

    Get the MyStorey plugin → Starter plan from $7/month; free account to get started.

  • Small Language Models: The Enterprise AI Shift Nobody Predicted

    The AI industry spent 2023 and 2024 racing to build the biggest models. In 2026, the race that actually matters is for the smallest ones that still work. Small Language Models — SLMs — are quietly becoming the defining enterprise AI trend of the year.

    Why Bigger Stopped Meaning Better

    Large language models with hundreds of billions of parameters are remarkable generalists, but they carry significant operational costs: expensive inference, cloud dependency, latency that’s too high for real-time applications, and privacy concerns that make deployment in regulated industries extremely difficult. For enterprises that need reliable AI in specific domains — medical coding, legal review, financial analysis, customer support — a 7-billion-parameter model fine-tuned on domain data often outperforms a 200-billion-parameter generalist at a fraction of the cost.

    AT&T’s chief data officer put it plainly: fine-tuned SLMs will be the big trend of 2026 as cost and performance advantages drive usage over out-of-the-box large models. That’s a candid admission from a major enterprise deployer, and it reflects what practitioners are seeing across industries.

    The Architecture Question

    The dominance of the transformer architecture is under scrutiny for the first time since the landmark 2017 “Attention Is All You Need” paper. Key researchers, including OpenAI co-founder Ilya Sutskever, have acknowledged that gains from pretraining have plateaued and that genuinely better architectures will be needed to sustain progress. Yann LeCun’s departure from Meta to build a world model lab represents the highest-profile bet that the transformer paradigm has a ceiling, and that the next generation of AI will need fundamentally different underlying structures.

    On-Device AI Is Real Now

    SLMs aren’t just a cloud phenomenon. At CES 2026, AMD unveiled its Ryzen AI 400 series with upgraded Neural Processing Units specifically designed for on-device AI tasks. Samsung’s Galaxy S26 series runs AI features locally using the Snapdragon 8 Elite Gen 5. Apple’s upcoming iOS release will ship a fully rebuilt Siri powered by on-device AI with contextual awareness. The combination of better NPU hardware and smaller, more efficient models is enabling a category of AI applications that simply couldn’t exist two years ago: private, low-latency, offline-capable AI at the edge.

    Domain Specialists Are Winning

    The most commercially successful AI products emerging in 2026 are domain specialists. Coding-focused variants like GPT-5.3 Codex and Claude Code target developer workflows. Medical AI models fine-tuned on clinical data outperform general models on diagnostics and coding tasks. Legal AI trained on case law is being used by law firms for document review at scale. These specialized models are often built on open-weight foundations like Meta’s Llama family, fine-tuned internally, and deployed on private infrastructure — giving enterprises the control and compliance properties they need.

    What to Build On

    For teams evaluating SLM deployment in 2026: the open-weight ecosystem is the most practical starting point. Meta’s Llama, Mistral, and Chinese models like DeepSeek R1 offer genuinely competitive capabilities that can be fine-tuned on proprietary data and run on-premises. The tooling around quantization, LoRA fine-tuning, and inference optimization has matured to the point where a team of two or three engineers can take an open-weight base model to production-grade deployment in weeks. The question isn’t whether SLMs can work for your use case — it’s whether your organization has the internal data and domain expertise to make the fine-tuning worthwhile.
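
    As a rough illustration of how lightweight that tooling has become, here is a minimal LoRA setup using Hugging Face's transformers and peft libraries. The base model name and hyperparameters are placeholders, not a recommended recipe.

    ```python
    # Minimal LoRA fine-tuning setup sketch using Hugging Face transformers + peft.
    # The base model name and hyperparameters are illustrative placeholders.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "meta-llama/Llama-3.1-8B"  # any open-weight base model
    tokenizer = AutoTokenizer.from_pretrained(base)  # used to tokenize domain data
    model = AutoModelForCausalLM.from_pretrained(base)

    lora = LoraConfig(
        r=16,                                  # low-rank adapter dimension
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],   # attention projections to adapt
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # typically well under 1% of base weights
    # From here, train with your usual Trainer / SFT loop on proprietary domain data.
    ```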

  • AI Regulation in 2026: The Battleground Taking Shape

    The battle over who governs artificial intelligence is no longer theoretical. In the first weeks of March 2026, AI regulation is moving at legislative speed — with states, federal agencies, and foreign governments all pulling in different directions at once.

    The Federal vs. State Tug-of-War

    In December 2025, President Trump signed an executive order aimed at preempting state AI laws, arguing that a patchwork of 50 different regulatory frameworks would strangle innovation and cede ground to China. The order set up a direct confrontation with states that have been the most aggressive legislators on AI safety.

    That confrontation is now playing out in real time. New York has multiple active AI bills, including the Artificial Intelligence Training Data Transparency Act, which would require developers to publicly disclose the datasets used to train their models; the bill advanced to third reading in the state Senate on March 4. Florida’s Governor DeSantis is pushing his own “AI Bill of Rights,” a broad bill that passed the state Senate the same week. And on March 5, Vermont signed legislation on synthetic media in elections into law, becoming one of the first states to regulate AI-generated political content.

    What the Laws Actually Require

    Across the active state bills, several themes keep appearing. Transparency mandates require that AI-generated or AI-modified content carry provenance data so consumers can identify it as synthetic. Data disclosure requirements target training datasets, aiming to expose copyright and privacy concerns baked into foundation models. Liability frameworks attempt to assign responsibility when AI systems cause harm — a question courts are increasingly being asked to answer in the absence of clear statute. Healthcare and housing bills in multiple states limit how AI can be used in decisions affecting insurance, loan approvals, and rental housing access.

    The Global Dimension

    Europe’s AI Act has been in phased implementation since 2025, and regulators are watching closely how companies comply. The UK’s Information Commissioner’s Office and Ofcom recently issued a formal demand to Elon Musk’s xAI for information about the Grok model — one of the first major regulatory actions targeting a specific model’s behavior rather than a company’s data practices.

    AI Companies Are Not Passive

    The lobbying campaign by AI companies against state-level regulation is intense. The industry’s core argument is that inconsistent rules across states will make compliance impossible and push development offshore — framing deregulation as a national security imperative in the US-China AI competition. This narrative has found significant traction in Washington, even as the underlying safety concerns that motivated state-level action remain unresolved.

    The Pragmatic View

    For organizations deploying AI, the regulatory uncertainty is itself a risk factor. The safest approach is to build AI systems that would pass strict transparency and auditability requirements even if those aren’t yet legally mandatory — because in many jurisdictions, they likely will be within two to three years. Document training data sources. Log model decisions. Build explainability into workflows. Companies that treat compliance as an architectural property rather than a retroactive checklist will be far better positioned as the legal landscape solidifies.
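
    A minimal sketch of what "log model decisions" can mean in practice: one structured record per AI-influenced decision, appended to durable storage. The field names are assumptions, not a regulatory schema.

    ```python
    # Sketch of a structured decision-audit record. Field names are illustrative
    # assumptions, not a prescribed regulatory schema.
    import json
    import uuid
    from datetime import datetime, timezone

    def log_decision(model_id: str, inputs: dict, output: str, rationale: str,
                     path: str = "decisions.jsonl") -> None:
        record = {
            "id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_id": model_id,    # which model (and version) made the call
            "inputs": inputs,        # what it saw
            "output": output,        # what it decided
            "rationale": rationale,  # human-readable explanation for auditors
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_decision("claims-slm-v3", {"claim_id": "C-1029"}, "approve",
                 "Matches policy rules; no fraud indicators flagged.")
    ```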

  • The Rise of Agentic AI: From Demos to Real Workflows

    2026 is the year AI agents stopped being demos and started being coworkers. Agentic AI — systems that can plan multi-step tasks, use tools autonomously, and take real-world actions — is moving from research labs into production workflows at a pace that’s outrunning most organizations’ readiness.

    What Makes an AI “Agentic”?

    Unlike traditional language models that respond to a single prompt and stop, agentic systems operate in loops. They receive a goal, break it into steps, call external tools (APIs, browsers, databases, code interpreters), evaluate results, and continue until the task is complete — or they hit a wall. The architecture sounds simple, but the reliability and failure modes at production scale are anything but.
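
    Stripped to its skeleton, that loop looks something like the sketch below. The call_model and call_tool functions are stand-in names for a real LLM API and real tool handlers, so every identifier here is an assumption for illustration.

    ```python
    # Skeleton of an agentic loop. call_model and call_tool are stand-ins for a
    # real LLM API and real tool integrations; everything here is illustrative.
    MAX_STEPS = 10

    def call_model(goal: str, history: list) -> dict:
        """Ask the model for the next action: a tool call or a final answer."""
        raise NotImplementedError  # e.g. an LLM API call returning structured output

    def call_tool(name: str, args: dict) -> str:
        """Execute one tool (API, database query, code run) and return the result."""
        raise NotImplementedError

    def run_agent(goal: str) -> str:
        history = []
        for _ in range(MAX_STEPS):            # hard step cap guards against loops
            action = call_model(goal, history)
            if action["type"] == "final":     # model judges the task complete
                return action["answer"]
            result = call_tool(action["tool"], action["args"])
            history.append({"action": action, "result": result})
        return "Stopped: step budget exhausted"  # the 'hit a wall' case
    ```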

    Where Agents Are Actually Deployed

    The most successful agentic deployments in early 2026 share a common trait: they’re narrow and well-defined. Customer support agents that can query order databases, issue refunds, and escalate edge cases. Code review agents that run test suites, flag regressions, and propose fixes. Marketing research agents that aggregate competitor data, segment audiences, and draft briefs — a workflow now relied on by major advertising agencies using Claude’s enterprise tools.

    Broad, open-ended agents — the kind that promise “just give it a goal and walk away” — are still struggling with reliability. The failure modes are subtle: agents that confidently complete the wrong task, get stuck in loops, or take irreversible actions based on misunderstood instructions.

    The Trust Problem

    Trust is the central unsolved problem in agentic AI. Users need to know what an agent will and won’t do before they hand it access to their email, their database, or their customer records. Labs are responding with tool-use permissions, sandboxing, and audit logs — but there’s no industry standard yet. The organizations deploying agents successfully are those investing heavily in observability: logging every tool call, reviewing failure cases, and tightening scope over time.
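
    Much of that observability comes down to one discipline: every tool call an agent makes passes through a wrapper that records it. A minimal sketch, with an assumed JSONL log file as the destination:

    ```python
    # Sketch: route every agent tool call through a wrapper that records it.
    # The JSONL log destination and record fields are illustrative assumptions.
    import json
    import time

    def audited(tool_fn, log_path: str = "tool_calls.jsonl"):
        def wrapper(name: str, args: dict):
            entry = {"tool": name, "args": args, "started": time.time()}
            try:
                entry["result"] = tool_fn(name, args)
                entry["status"] = "ok"
                return entry["result"]
            except Exception as exc:          # failures are logged, never hidden
                entry["status"] = "error"
                entry["error"] = str(exc)
                raise
            finally:
                entry["finished"] = time.time()
                with open(log_path, "a") as f:
                    f.write(json.dumps(entry, default=str) + "\n")
        return wrapper

    # Usage: call_tool = audited(call_tool) before handing the loop to an agent.
    ```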

    World Models: The Next Layer

    One emerging direction that could dramatically improve agent reliability is world models — AI systems that develop an internal simulation of how things work in 3D space and time, rather than just predicting text. Researchers like Yann LeCun, who left Meta to start his own world model lab now seeking a $5 billion valuation, argue this is a prerequisite for truly robust agents. Google DeepMind’s Genie, Fei-Fei Li’s World Labs, and startup General Intuition are all pushing this frontier. If world models mature, agents will be able to reason about consequences before acting — a capability that current transformer-based systems fundamentally lack.

    What Organizations Should Do Now

    The pragmatic path: start with constrained agents in high-volume, low-stakes workflows. Build observability before you build capability. Define clear success and failure criteria. Treat the first six months as a data collection exercise, not a productivity gain. The organizations that will lead in agentic AI by 2027 are the ones quietly building trust infrastructure today — not the ones chasing the most impressive demo.

  • OpenAI’s $110 Billion Funding Round: What It Means for the AI Industry

    In March 2026, OpenAI closed what is being called one of the largest private investment rounds in history — a staggering $110 billion that has reshuffled the AI landscape and signaled to the world that the AI boom is far from over.

    The Numbers Behind the Deal

    The funding round was led by a trio of technology heavyweights: Amazon committed $50 billion, SoftBank pledged $30 billion, and Nvidia put in the remaining $30 billion. The investment values OpenAI at approximately $840 billion, making it one of the most valuable private companies on the planet. Alongside the funding came a landmark announcement: a $100 billion extension of OpenAI’s compute partnership with Amazon Web Services, ensuring the company can train ever-larger models faster and at lower cost.

    Scale That Defies Comparison

    To put the scale in context, ChatGPT and affiliated OpenAI tools now serve 900 million weekly active users. That’s not an experimental product — it’s core infrastructure for millions of developers, enterprises, and knowledge workers worldwide. The company also reports 50 million paid subscribers across its ChatGPT Pro, API, and enterprise tiers, generating substantial recurring revenue that gives some grounding to the sky-high valuation.

    Strategic Implications

    Each investor’s stake reflects a clear strategic play. Amazon is betting that deeper OpenAI integration will cement AWS’s position as the dominant cloud for AI workloads. Nvidia, already the GPU kingpin powering most AI training, is buying closer access to the largest model developer. SoftBank, known for moonshot bets, sees OpenAI as a foundational investment in the AI-defined economy it has long predicted.

    The AWS partnership extension is particularly noteworthy for enterprises. Developers already building on AWS may gain access to OpenAI capabilities optimized for cloud-native deployment — potentially lowering the barrier to enterprise AI adoption significantly.

    What This Means for Startups and Competitors

    The concentration of capital at this scale creates both opportunities and headwinds for the broader ecosystem. Smaller AI labs will find fundraising harder as investors consolidate around perceived winners. However, the continued rise of open-source models from DeepSeek, Meta’s Llama, and others means the frontier isn’t exclusively OpenAI’s to define. The gap between open-weight and proprietary models is narrowing — from months to weeks — ensuring that no single player holds a permanent lock on capability.

    For entrepreneurs, the message is pragmatic: you don’t need to build foundation models to win. OpenAI’s investment in making its tools accessible to non-experts means the leverage is in applications, vertical integration, and domain-specific deployment — not raw model development.

    The Road Ahead

    With $840 billion in implied value and compute partnerships locked in for years, OpenAI is positioning itself less like a startup and more like a utility-scale infrastructure provider. Whether the market ultimately validates that valuation will depend on how reliably agentic systems and enterprise deployments perform in the messy real world — beyond benchmark leaderboards and investor decks. The next 18 months will be a critical stress test.

  • The Future of AI-Powered Websites

    What if your website could write, adapt, and evolve on its own?

    Artificial intelligence is rapidly transforming how websites create and deliver content. Instead of relying entirely on manual writing workflows, businesses can now use AI to generate article drafts, refine messaging, and adapt copy for different audiences much faster. This makes publishing more efficient while still giving teams room to shape the final voice and strategy.

    AI is also changing content creation by making it more data-aware. Modern tools can analyze search intent, user behavior, and engagement patterns to suggest stronger headlines, more relevant topics, and clearer structure. As a result, websites are becoming more responsive to what visitors actually want to read, helping brands produce content that is both useful and timely.

    Looking ahead, AI-powered websites will likely become far more adaptive than traditional publishing platforms. Content may evolve in real time based on visitor interests, industry trends, or performance signals, allowing each page to feel more personalized and effective. For teams building the next generation of digital experiences, AI is not just a writing assistant anymore — it is becoming a core part of how websites communicate, grow, and stay relevant.