From Idea to Dataverse Solution in One Session
I described a furniture management app in plain English. GitHub Copilot CLI built the tables, forms, views, a PCF component, server-side plugins, and ribbon customizations. My job was to think, test, and steer.
Why This Is Possible Now
This workflow is possible because of the Dataverse skills integration for coding agents announced by Microsoft in "Dataverse Skills: Your coding agent now speaks Dataverse."
For me, this is a breaking change in the way we build Dataverse solutions. The old baseline was maker-portal-first execution with code as support. The new baseline is intent-first development: describe the solution in natural language, let the agent implement it end to end, and keep the human focused on design decisions, validation, and quality.
The Premise
Building a Dataverse model-driven app the traditional way means hours in the maker portal: clicking through table designers, dragging form fields, configuring views, wiring up ribbon buttons, deploying PCF components, registering plugins. You know the drill: it is methodical, repetitive, and the cognitive load is split between what you want to build and how to navigate the tooling.
I wanted to flip that ratio. What if I could stay entirely in the ideation and testing lane, and let GitHub Copilot CLI handle the implementation?
Here is what happened.
What I Actually Typed
The entire solution was built through a series of natural-language prompts. Here is what my side of the conversation looked like - these are the actual instructions I gave:
“I need to build an app model driven based on account that helps managing the furniture of the customer. A furniture is made by a set of objects in a room and the customer has many rooms. I need it to be created with a new publisher BlueOakLabs.”
That was the starting prompt. From there, each follow-up was a short directive:
I asked Copilot to progressively harden the solution: enforce mandatory fields and lookups, keep users in context with side-panel creation from Room, clean up display names and visual identity (including custom SVG icons), add a ribbon action to count objects with quantity-aware logic, introduce a Color field with a PCF color picker, and implement server-side validation plugins to enforce room capacity rules on both Object create and Room size updates.
That is it. No XML. No C# scaffolding. No fiddling with metadata GUIDs. Each prompt took seconds to type. The implementation, including dead ends and retries, was handled by Copilot.
What Got Built
| Component | What Copilot Produced |
|---|---|
| Solution Foundation | Publisher, unmanaged solution, data model, relationships, and core columns |
| App Experience | Forms, quick create behavior, views, required fields, and display-name cleanup |
| UI Customization | Sitemap updates, custom icons, and ribbon button action |
| Extensibility | PCF color picker and C# plugin validations |
What I Spent My Time On
Here is the interesting part: where my actual time went.
1. Data Model Decisions (~2 minutes)
Copilot asked me to confirm the hierarchy: should it be Account -> Room -> Furniture -> Object or Account -> Room -> Object? I picked the simpler model. It asked about the publisher prefix. I chose boak. These are design decisions that require domain knowledge - exactly where a human should spend time.
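To make the "publisher with prefix boak" step concrete: creating a publisher is a single Web API POST against the `publishers` entity. Here is a minimal sketch of the request body a generated script might send; the `publisher_payload` helper is my own illustration, not the agent's actual code, though the column names (`uniquename`, `customizationprefix`, and so on) are the real Dataverse publisher columns.

```python
def publisher_payload(unique_name: str, friendly_name: str, prefix: str,
                      option_value_prefix: int = 10000) -> dict:
    """Build the JSON body for POST /api/data/v9.2/publishers.

    The keys are real Dataverse publisher columns; the helper itself
    is illustrative, not part of any SDK.
    """
    return {
        "uniquename": unique_name,
        "friendlyname": friendly_name,
        "customizationprefix": prefix,                     # e.g. "boak"
        "customizationoptionvalueprefix": option_value_prefix,
    }

payload = publisher_payload("BlueOakLabs", "Blue Oak Labs", "boak")
```

Every custom table and column in the solution then inherits the `boak_` prefix from this one decision, which is why it is worth a human pause.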
2. Testing and Feedback (~80% of the session)
This was the bulk of my work, and it is the right kind of work:
- Opened the form -> “nothing is mandatory” -> told Copilot to fix it
- Created an object -> “the form navigates away” -> asked for side-panel quick create
- Looked at the display names -> “I do not like boak_ showing” -> asked for cleanup
- Clicked the ribbon button -> “no icon showing” -> reported it
- Tried the color picker -> “clicking does nothing” -> reported the bug
- Checked button position -> “it is next to Delete, not Save” -> asked to move it
Every one of these is a quality gate: I was the tester and product owner, not the implementer. Copilot iterated on each fix, sometimes needing 2-3 attempts (the ribbon positioning was particularly stubborn), but the iteration cost was Copilot’s time, not mine.
3. Steering Around Platform Limitations (~5 minutes)
Some things Copilot discovered cannot be done programmatically:
- Model-driven app creation (the API is unreliable)
- Button repositioning in UCI (classic ribbon sequences are ignored)
In both cases, Copilot told me honestly what did not work and gave me the 30-second manual workaround. That is the right trade-off: automate 95%, handle the remaining 5% in the maker portal.
The Time Math
Let us be honest about the numbers.
| Task | Traditional (Maker Portal) | With Copilot CLI |
|---|---|---|
| Foundation setup (publisher, solution, tables, columns, relationships) | 30-45 min | ~3 min |
| App UX configuration (forms, views, quick create, required fields, display names, icons) | 1-2 hours | ~7 min |
| Advanced customization (ribbon button + JS) | 30-45 min | ~5 min |
| Code components (PCF + plugins, including scaffold/build/register) | 3-6 hours | ~20 min |
| Total | ~5-8 hours | ~35 min of human time |
The Copilot session ran about 2.5 hours wall-clock, but most of that was Copilot working (API calls, solution imports, retries) while I could have been doing something else. My active engagement was maybe 35-40 minutes of typing prompts and testing results.
What Matters for Technical Teams
It Is Not Just Faster - It Is a Different Workflow
The traditional Dataverse workflow is: learn the tool -> navigate the UI -> configure -> test -> repeat. With Copilot CLI, it becomes: describe what you want -> test -> give feedback -> iterate.
This matters because:
- Domain experts can drive development. You do not need to know that views require an `ObjectTypeCode` in the layout XML, or that `RequiredLevel` needs a PUT with MetadataId GUIDs. You say "make it mandatory" and the right API calls happen.
- Iteration is cheap. Changing a form layout, adding a field, modifying a plugin: each is a single prompt. There is no "let me find that screen in the maker portal" overhead.
- Everything is scripted. Behind the curtains, prompts are translated into executable commands: Dataverse customization tasks are generated as Python scripts that run through the Dataverse SDK for Python. In practice, most of the clicks you would make in the maker UI become code. Many of these scripts are one-off artifacts that you run once and discard, but the delivery pattern remains traceable and repeatable at the process level. The solution was exported and unpacked at every milestone.
How It Works Behind the Curtains
The important shift for technical teams is this: natural-language intent is converted into deterministic automation. You ask for a table, a form update, a required field, or a view change, and the agent produces Python scripts that call Dataverse APIs through the Python SDK layer. For code-first components like PCF controls and C# plugins, the scaffolding is generated through PAC CLI, then the implementation is filled in and wired back into the solution. In many cases these scripts are not meant to become long-lived reusable assets; they are execution scaffolding for that change. The key value is the pattern: your "UI click path" is translated into code-driven execution.
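As a flavor of what those generated scripts deal with: adding a required text column means POSTing a `StringAttributeMetadata` payload to the table's `Attributes` collection on the Web API. The sketch below builds that payload; the `string_column` helper is my own illustration, but the `@odata.type` values and metadata shape (`RequiredLevel`, localized `DisplayName` labels) are the real Dataverse Web API structures a script like this has to get right.

```python
def string_column(schema_name: str, display_name: str,
                  required: bool, max_length: int = 100) -> dict:
    """Metadata body for POST .../EntityDefinitions(LogicalName='...')/Attributes.

    Illustrative helper; the payload shape follows the Dataverse
    Web API metadata model for string attributes.
    """
    return {
        "@odata.type": "Microsoft.Dynamics.CRM.StringAttributeMetadata",
        "SchemaName": schema_name,                     # e.g. "boak_Category"
        "MaxLength": max_length,
        "RequiredLevel": {
            "Value": "ApplicationRequired" if required else "None",
        },
        "DisplayName": {
            "@odata.type": "Microsoft.Dynamics.CRM.Label",
            "LocalizedLabels": [{
                "@odata.type": "Microsoft.Dynamics.CRM.LocalizedLabel",
                "Label": display_name,
                "LanguageCode": 1033,                  # English (US)
            }],
        },
    }
```

This is exactly the kind of boilerplate you never want to hand-author, and exactly what "make it mandatory" compiles down to.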
What You Still Need to Know
Copilot CLI does not eliminate the need for Dataverse expertise - it changes where you apply it:
- Data modeling is still your job. The table hierarchy, relationship cardinality, column types: these are design decisions that require business context.
- Testing is critical. Copilot cannot see your screen. You are the eyes that catch “the icon is not showing” or “the button does not fire.” The faster you report issues with specifics, the faster Copilot converges.
- Platform limitations exist. UCI button positioning, model-driven app creation, and some metadata operations still require the maker portal. Knowing when to switch tools saves time.
- Prompt precision matters. “Add a plugin on pre-validation of object creation that checks if there is space left” gives Copilot everything it needs. Vague prompts like “add some validation” would require back-and-forth.
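That capacity prompt maps to one concrete rule. The actual plugin is C#, but the check it performs reduces to logic like this (a Python sketch of the rule, not the plugin code):

```python
def has_capacity(existing_quantities, room_size, new_quantity=1):
    """Capacity rule the pre-validation plugin enforces on Object create:
    the total items in the room (sum of quantities, treating missing
    quantities as 0) plus the new item must not exceed the room size."""
    current = sum(q or 0 for q in existing_quantities)
    return current + new_quantity <= room_size
```

A precise prompt names the pipeline stage (pre-validation), the message (create), the table (Object), and the rule; that is why Copilot could implement it without a clarification round-trip.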
Final Architecture
```
Account (Customer)
 +- Room (1:N)
     +- Name* (String)
     +- Customer* (Lookup -> Account)
     +- Room Type (String)
     +- Floor (Integer)
     +- Size (max items)* (Integer)
     +- Description (String)
     +- Object (1:N)
         +- Name* (String)
         +- Room* (Lookup -> Room)
         +- Category (String)
         +- Quantity (Integer)
         +- Color (String #RRGGBB - PCF ColorPicker)
         +- Description (String)
         +- Notes (String)

 * = Required
```
Plugins:
- PreValidation/Create/Object -> Capacity check (sum of quantities vs room size)
- PreValidation/Update/Room -> Size reduction guard (filtered: size column)
PCF: ColorPicker (React, native browser color dialog + hex input)
Ribbon: Count Objects (JS dialog showing distinct objects + total items)
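The second plugin mirrors the first from the Room side: a size reduction must be blocked if the room already holds more items than the new size allows. Sketched in Python (the real implementation is a C# pre-validation plugin that throws `InvalidPluginExecutionException`, registered with a filtering attribute on the size column so it only fires when size changes):

```python
def validate_room_resize(current_item_total: int, new_size: int) -> None:
    """Size reduction guard: raise if the room would shrink below the
    number of items it already contains -- the Python analogue of the
    plugin throwing InvalidPluginExecutionException."""
    if new_size < current_item_total:
        raise ValueError(
            f"Cannot reduce room size to {new_size}: "
            f"the room already contains {current_item_total} items."
        )
```

Registering on the filtered size column is the detail that keeps the plugin from running on every unrelated Room update.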
Conclusion
The power of this workflow is not that Copilot writes perfect code on the first try - it does not. The ribbon button took several iterations. The PCF form binding needed a specific XML pattern that is not documented.
The power is that the iteration cost is near zero for the human. I said “it does not work” and Copilot tried another approach. I said “still broken” and it tried a third. Each retry was Copilot’s time, not mine. My job was to describe what I wanted, verify the result, and course-correct, which is exactly where a human adds the most value.
For Dataverse professionals: this does not replace your expertise. It amplifies it. You spend your time on data modeling, business rules, and UX decisions. The XML, the API calls, the GUIDs, the solution packaging: that is the part that should have been automated years ago. Now it is.