Model Driven Apps Are Now MCP Apps
I tested the new preview that lets Model Driven Apps surface inside Microsoft 365 Copilot through MCP Apps. The short version: the flow works, the experience is solid, and card rendering is working fine in my tests.
Why This Matters
This changes the interaction model for Dataverse apps.
The classic approach is: open the model-driven app, navigate forms and views, then act. The new approach is: ask Copilot in natural language, get a preview, open the record from there.
In practice, Copilot becomes the UI entry point and the model-driven app becomes the business layer behind it.
For the official announcement and feature framing, see: Public Preview: Your business apps, now part of every conversation.
What I Tested
I ran this in a US-hosted production Dataverse environment.
Setup I Used
My test setup follows the same activation flow described in the Microsoft announcement:
“Available now - get started today”
- “Activate your app’s MCP server in Power Apps.”
- “Download the app package generated by your app’s MCP.”
- “Deploy to Microsoft Teams or Microsoft 365.”
Source: Public Preview: Your business apps, now part of every conversation.
- Opened the environment in the maker portal and edited an app.
- In app settings, used the Upcoming tab and enabled Enable your app in Microsoft 365 Copilot (preview), then selected Download app package. Reference: Manage model-driven app settings in the app designer - Upcoming.
- Uploaded the generated package (`declarative-agent-<app name>.zip`) to Microsoft 365 so the app could be exposed as an agent.
- In the Microsoft 365 admin center Agent Registry, used the Publish flow to scope availability to selected users/groups for pilot rollout. Reference: Agent Registry - Publish agents.
First Query: “My Accounts”
I asked Copilot for “My accounts”.
What happened:
- Copilot returned an MCP-based grid.
- Clicking the row opened the Dataverse record in the Copilot Card.
- The interaction felt native.
I also tested contact retrieval with similar intent-based behavior.
The Important Shift
The fundamental change here is a complete inversion of how users interact with business data.
In the traditional model, the app shell is the starting point. Users navigate to the model-driven app, find a view, open a record, and work from there.
In the Copilot + MCP model, users talk to Copilot first. The app becomes the backend. There is no assumption that users will ever open the traditional app interface. For many workflows, they won’t need to.
Why this changes implementation choices
In the classic model-driven app experience, we often add JavaScript to control form events, command behavior, field visibility, and UX interactions. In the Copilot + MCP path, that rendering surface is not the primary surface anymore.
So the practical consequence is simple: JavaScript-heavy customizations become a weak foundation for this pattern.
If a process relies on custom client-side rendering logic or custom form scripts, it won’t work in this interface. Copilot does not reproduce your full browser runtime.
What to optimize for instead
For new implementations, the strategy is to keep frontend complexity low and push business behavior to layers that are channel-independent:
- Dataverse table design and relationships
- Business rules and validation that don’t depend on custom rendering
- Server-side logic (plugins, Power Automate, custom actions/APIs)
- Security roles
- Clean view metadata (with meaningful titles)
Use JavaScript only when you need UI-only polish in the traditional app shell. Do not make JavaScript the place where critical business behavior lives.
If the behavior must work in Copilot/MCP, design it as backend/domain logic first, then add optional UI enhancements on top.
MCP Apps: rendering UI inside Copilot chat
MCP Apps is an extension of MCP that allows server functions to declare a meta property on their tool definitions. That meta property carries UI rendering instructions — grids, forms, maps, dashboards — that Copilot can render inline or side by side in the chat canvas, inside a sandboxed iframe.
This is how the model-driven app surfaces grids and forms directly in Copilot: the MCP server functions exposed by the app include these rendering hints, so when Copilot calls view_data or view_record, it does not just get data back — it gets a renderable widget.
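To make the idea concrete, here is a minimal sketch of a tool definition that carries a rendering hint in its meta property. The key name `"ui/template"` and the `ui://` resource URI are illustrative assumptions for this sketch, not the official MCP Apps schema; only the general shape (a tool definition with an extension `_meta` field pointing at a UI template) reflects the pattern described above.

```typescript
// Sketch of an MCP tool definition whose _meta property carries a UI
// rendering hint. Key names and the URI scheme are illustrative.
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: object;
  _meta?: Record<string, unknown>; // extension point used by MCP Apps
}

const viewDataTool: ToolDefinition = {
  name: "view_data",
  description: "Return rows for a Dataverse view as a renderable grid.",
  inputSchema: {
    type: "object",
    properties: {
      table: { type: "string" },
      view_id: { type: "string" },
      fields: { type: "array", items: { type: "string" } },
    },
    required: ["table"],
  },
  // The rendering hint: points the host at a UI template it can load
  // into a sandboxed iframe alongside the tool's structured result.
  _meta: {
    "ui/template": "ui://view_data/grid.html", // illustrative URI
  },
};

console.log(String(viewDataTool._meta?.["ui/template"]));
```

When Copilot calls such a tool, it gets both the structured result and the hint telling it which widget to render, which is why the grid feels native rather than pasted-in.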
Right now, since the MCP server for model-driven apps is first-party and managed by Microsoft, the exposed surface is scoped to forms and grids. That is already useful, but it is a deliberate starting point, not the ceiling.
What comes next is the interesting part. The MCP Apps extension model allows functions exposed by an MCP server to carry rendering hints for richer UI — maps, dashboards, multi-step workflows, custom visualizations — all inline in Copilot chat. Nobody can say whether or when those richer visualizations will arrive, but the architecture is already there.
For the full picture on MCP Apps and how to extend M365 Copilot with this pattern, the reference article is: MCP Apps now available in Copilot chat. The official MCP Apps specification and extension model is documented here: MCP Apps overview.
The agent in the package
It is useful to understand what the generated package actually contains.
The package is basically a three-layer stack built around three JSON files:
- `manifest.json`: the outer package manifest that makes the solution installable and identifiable in Microsoft 365/Teams. This is the packaging layer.
- `declarativeAgent.json`: the agent behavior layer. It contains instructions, capabilities, conversation starters, and the reference to the action provider.
- `ai-plugin.json`: the tooling layer. It defines the MCP server function signatures — the callable operations the agent can invoke, their parameters, and the runtime endpoint that executes them. This is where the MCP surface lives.
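The three-layer stack can be sketched as plain objects to show how the files reference each other. All concrete values below (IDs, names, the endpoint URL) are illustrative placeholders, not the real generated content; only the layering and the cross-references follow the description above.

```typescript
// Packaging layer: identity that makes the package installable in
// Microsoft 365 / Teams. Values are illustrative placeholders.
const manifest = {
  id: "00000000-0000-0000-0000-000000000000",
  packageName: "declarative-agent-sample", // placeholder name
  copilotAgents: { declarativeAgents: [{ file: "declarativeAgent.json" }] },
};

// Behavior layer: instructions, conversation starters, and the
// reference to the action provider file.
const declarativeAgent = {
  instructions: "Always finish interactive requests with a widget.",
  conversation_starters: [{ text: "Show my accounts" }],
  actions: [{ file: "ai-plugin.json" }],
};

// Tooling layer: the MCP surface — callable operations and the
// runtime endpoint that executes them.
const aiPlugin = {
  functions: [
    { name: "view_data" },
    { name: "view_record" },
    { name: "create_record" },
  ],
  runtime: { url: "https://example.org/mcp" }, // illustrative endpoint
};

console.log(aiPlugin.functions.map(f => f.name).join(","));
```

The point of the sketch: the packaging layer points at the behavior layer, and the behavior layer points at the tooling layer, so each file can evolve independently.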
What is in the instructions
Structure Explained
- Policy layer answers “how the agent should behave”: it enforces guardrails like confirming destructive actions, avoiding unnecessary clarification questions, and always finishing interactive requests with a widget.
- Execution layer answers “which path to execute”: it routes requests to the right sequence, such as `view_data` for collections, Dataverse search plus `view_record`/`edit_record` for specific records, and `create_record` for new entries.
- Metadata layer answers “what context is already known”: Entity Reference acts as a preload of high-confidence tables, columns, and views so the agent can move quickly and call discovery tools only when needed.
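The execution layer's routing can be sketched as a simple intent-to-tool mapping. The tool names come from the package described above; the intent labels and the `dataverse_search` step name are illustrative assumptions (the article only says "Dataverse search" without naming a tool), and intent classification itself is out of scope here.

```typescript
// Sketch of the execution layer: route a classified user intent to
// the tool sequence described in the instructions.
type Intent =
  | "collection"            // e.g. "my accounts"
  | "specific_record_view"  // e.g. "show the Contoso account"
  | "specific_record_edit"  // e.g. "change the Contoso phone number"
  | "new_record";           // e.g. "create a contact for Jane Doe"

function routeIntent(intent: Intent): string[] {
  switch (intent) {
    case "collection":
      return ["view_data"];
    case "specific_record_view":
      // find the record first, then render it
      return ["dataverse_search", "view_record"]; // search step name assumed
    case "specific_record_edit":
      return ["dataverse_search", "edit_record"];
    case "new_record":
      return ["create_record"];
  }
}

console.log(routeIntent("specific_record_view").join(" -> "));
```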
Why the Entity Reference block is interesting
The Entity Reference block is included to preload high-value metadata (key tables, important columns, and default/app views) directly into the agent instructions.
Its purpose is to improve execution quality and speed:
- Faster decisions: the agent can often choose table, fields, and view without extra discovery calls.
- Fewer errors: it reduces wrong column/view selection by anchoring decisions to known metadata.
- Consistent behavior: the same default entities and views are reused across prompts.
- Better tool economy: `list_views` and `describe_app_table` are called only when truly needed (for example, missing columns, filter conflicts, or non-preloaded entities).
In practical terms, this section acts as an instruction-time cache that keeps the agent both efficient and reliable, while still allowing dynamic fallback to schema/view discovery when user requests go beyond the preloaded scope.
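The cache-with-fallback behavior can be sketched in a few lines. The preloaded entries and the stubbed `describeAppTable` return value are illustrative; the real tool returns full table metadata, and the real preload lives in the agent instructions rather than in code.

```typescript
// Sketch of the "instruction-time cache": preloaded entity metadata
// answers most lookups, and discovery runs only on a miss.
interface EntityReference {
  table: string;
  columns: string[];
  defaultView: string;
}

// Illustrative stand-in for the Entity Reference block in the instructions.
const preloaded: EntityReference[] = [
  { table: "account", columns: ["name", "revenue"], defaultView: "My Active Accounts" },
  { table: "contact", columns: ["fullname", "emailaddress1"], defaultView: "My Active Contacts" },
];

let discoveryCalls = 0;
function describeAppTable(table: string): EntityReference {
  discoveryCalls++; // counts how often dynamic discovery was needed
  return { table, columns: [], defaultView: "" }; // stub for the real tool
}

function resolveEntity(table: string): EntityReference {
  return preloaded.find(e => e.table === table) ?? describeAppTable(table);
}

resolveEntity("account");     // served from the preload, no tool call
resolveEntity("opportunity"); // not preloaded: falls back to discovery
console.log(discoveryCalls);  // only the second lookup paid for a call
```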
The command describe_app_table explained
describe_app_table plays a critical role in making the agent reliable.
- It lets the agent resolve ambiguous user language into real Dataverse column logical names.
- It is explicitly used when users ask to display specific columns and one or more are not already in preloaded entity reference metadata.
- It helps avoid forcing users to know technical field names up front.
In practice, this is one of the key enablers for metadata-driven forms and grids: the agent can infer the right fields at runtime instead of depending on a pre-modeled UI artifact.
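A small sketch of the resolution step this enables: mapping the user's display language to Dataverse logical column names. The metadata array below is an illustrative stand-in for what `describe_app_table` would return; the matching logic is deliberately naive (exact case-insensitive match) just to show the idea.

```typescript
// Sketch: resolve user-facing column language to logical names,
// using metadata of the kind describe_app_table provides.
const opportunityColumns = [
  { displayName: "Estimated Value", logicalName: "estimatedvalue" },
  { displayName: "Close Probability", logicalName: "closeprobability" },
];

function toLogicalName(userPhrase: string): string | undefined {
  const needle = userPhrase.toLowerCase();
  return opportunityColumns.find(
    c => c.displayName.toLowerCase() === needle
  )?.logicalName;
}

// The user says "estimated value" and never needs to know the
// technical field name up front.
console.log(toLogicalName("estimated value"));
```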
Concrete example: Grid vs Form
Grid workflow (data view approach):
- Detect target table (`opportunity`).
- Call `describe_app_table` to resolve field names if needed (for example, `estimatedvalue`, `closeprobability`).
- Select view (preloaded metadata or `list_views`).
- Call `view_data` with `fields`, `view_id`, and an optional `natural_language_filter`.
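The grid steps above can be sketched end to end. Tool names match the article; the call shapes, return values, and the view ID are illustrative stubs, since the real tools run on the managed MCP server and return renderable widgets rather than strings.

```typescript
// End-to-end sketch of the grid workflow with stubbed tools.
function describeAppTable(_table: string): string[] {
  return ["estimatedvalue", "closeprobability"]; // stubbed field metadata
}
function listViews(_table: string): string {
  return "open-opportunities-view-id"; // stubbed view resolution
}
function viewData(args: {
  table: string;
  view_id: string;
  fields: string[];
  natural_language_filter?: string;
}): string {
  return `grid(${args.table}:${args.fields.join("+")})`; // stubbed widget
}

// 1. target table  2. resolve fields  3. select view  4. render grid
const table = "opportunity";
const fields = describeAppTable(table);
const viewId = listViews(table);
console.log(
  viewData({ table, view_id: viewId, fields, natural_language_filter: "closing this quarter" })
);
```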
Form workflow (runtime form rendering approach):
- Detect form intent (`create_record`, `view_record`, or `edit_record`).
- Determine table and, if needed, record ID (found via Dataverse search).
- Optionally call `describe_app_table` when user-requested fields are ambiguous.
- Call the form tool:
  - `create_record` for a new form
  - `view_record` for a read/update-ready form
  - `edit_record` for update suggestions from user-provided text
- The form is generated at runtime from table metadata, not from a prebuilt form.
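The form workflow can be sketched the same way. The point the sketch makes is the last step above: the form layout derives from table metadata at runtime, not from a prebuilt form artifact. The metadata array and the rendered string are illustrative stubs.

```typescript
// Sketch of the form workflow: a runtime-generated form driven by
// table metadata rather than a saved form definition.
type FormIntent = "create_record" | "view_record" | "edit_record";

const contactMetadata = ["fullname", "emailaddress1", "telephone1"]; // stub

function renderRuntimeForm(intent: FormIntent, table: string, recordId?: string): string {
  // A real implementation would fetch metadata and record values; here
  // we only show that the layout comes from metadata, not a saved form.
  const mode = intent === "create_record" ? "blank" : `record:${recordId}`;
  return `${table} form (${mode}) fields=${contactMetadata.length}`;
}

console.log(renderRuntimeForm("view_record", "contact", "42"));
```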
What this means for UI design choices
The architecture is metadata-driven:
- Views come from existing Dataverse/model-driven app view definitions.
- Forms for create/view/edit are generated at runtime from entity metadata and context.
So a manually polished form authored in the maker portal (or created via the Dataverse Skill in GitHub Copilot) is still useful, but it is no longer a hard requirement for every Copilot interaction. The environment provides the view foundation, and metadata drives runtime form generation.
If you want Copilot to be a primary part of your users’ workflow, keep critical logic in the data model and server-side behavior, keep frontend customization lean, and treat JavaScript as optional.