Almost every week, I find myself explaining what solutions are and how to work with them efficiently. That's why I decided to write this blog, to describe in my own words what solutions are and how to make the best use of them. This way, I can refer people to this article when they need guidance.
Challenge Objectives
🎯 Learn about Solutions
🎯 Fully understand the difference between Managed & Unmanaged
🎯 Know how solution segmentation and layering can benefit you
Introduction
When you are just starting with the Power Platform, it's likely you've never heard of a solution before. You begin by creating a simple flow or a small app, and you gradually become more proficient. As more people start using your app and flows, you notice that the changes you make go live immediately, which can be risky. To mitigate this, you might start making copies of your apps or flows to test changes first, but this quickly becomes cumbersome. You start thinking, 'There must be a better way to handle this.' If this sounds familiar, then this blog is definitely for you. Solutions are an essential concept that may not be obvious at first, but they play a crucial role in structuring and managing your Power Platform projects effectively. In this blog, I want to demystify solutions for you.
What is a solution?
At its core, a solution in the Power Platform is a container that houses all components of your application. A solution is essentially a package that brings together these various Power Platform resources to address a specific business case.
Solutions can contain many types of components:
Apps: Model-driven or canvas apps.
Flows: Power Automate flows used to automate business processes.
Entities/Tables: Dataverse tables, along with columns, relationships, and forms.
Plugins: Custom code to extend the application.
Web Resources: Custom HTML, JavaScript, or images used in applications.
Environment Variables and Connection References: Used to abstract details that vary between environments (more on these later).
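Because solutions are stored as data in Dataverse (more on that later), you can even inspect them programmatically. Below is a minimal sketch in TypeScript against the Dataverse Web API that lists the solutions in an environment; the org URL and the token handling are assumptions, not working defaults.

```typescript
// A minimal sketch, assuming a Node 18+ runtime with global fetch.
// ORG_URL is a hypothetical environment URL and the bearer token is assumed
// to be acquired elsewhere (e.g. through Microsoft Entra ID with MSAL).
const ORG_URL = "https://yourorg.crm.dynamics.com";
const HEADERS = {
  Authorization: `Bearer ${process.env.DATAVERSE_TOKEN}`,
  Accept: "application/json",
  "OData-MaxVersion": "4.0",
  "OData-Version": "4.0",
};

// List the visible solutions in an environment, including whether each one
// is managed -- something we will get into in a moment.
async function listSolutions(): Promise<void> {
  const url =
    `${ORG_URL}/api/data/v9.2/solutions` +
    `?$select=uniquename,friendlyname,version,ismanaged` +
    `&$filter=isvisible eq true`;
  const res = await fetch(url, { headers: HEADERS });
  if (!res.ok) throw new Error(`Dataverse returned ${res.status}`);
  const body = await res.json();
  for (const s of body.value) {
    console.log(`${s.uniquename} v${s.version} (managed: ${s.ismanaged})`);
  }
}

listSolutions().catch(console.error);
```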
The most important reason to use solutions, especially when you're just getting started, is to manage the application lifecycle effectively. As your app grows and more people start using it, you want a controlled way to develop, test, and deploy changes without affecting live users.
Using solutions also enables better version control and a clear way to package updates. They help in managing dependencies between different components, which means changes to one part won't inadvertently break the entire application. For larger, enterprise-scale projects, solutions are key to ensuring stability and reusability of your Power Platform projects.
We need to talk about environments
As mentioned, a big part of where solutions come into play is what is called Application Lifecycle Management (ALM). I will not go into too much detail on that topic for now. The only thing I want to highlight is that you separate development and testing from the actual live 'version' of your solution (production). We do that using different environments. The base version of this is using a Development (DEV), Test (TST), and Production (PRD) environment.
You develop the solution in DEV and deploy it to TST to see if everything works as expected (both technically and functionally). This TST environment should be as similar to PRD as it can be. This way you will encounter potential errors in TST, which you can solve before you move to PRD. That's why I think there should be at least three environments your solution goes through.
You can go bigger from here. You might want multiple DEV environments, where each developer has their own environment to work on their part and pushes it into a federated DEV environment, before the solution moves to TST and PRD.
Another option is to add multiple testing stages. Some prefer to do some testing themselves before they bring the solution to a smaller group of business users for their test, which is called a User Acceptance Test (UAT). Others also like a dedicated pre-production stage (PRE) to make sure the environments are as much in line as possible. You can then end up with something like the image below.
Options 2 and 3 can also be combined, but this is quite complex and definitely not always required. That's why for now we stick to the first option. Still, it is good to know that there are options to grow when your solution or team requires it. Moving your work from one environment to the next happens with solutions, so all the more reason to learn more about them.
Unmanaged vs Managed
Now that we know what a solution is, we have to talk about the two different types of solutions: managed and unmanaged. Every now and then a debate erupts on how to use these two types. I am not planning on starting a debate here; I am just pointing out how I prefer to use the two types, which I guess is how the majority uses them 😉. It is also important to understand these two types, as it can cause some trouble if you don't.
I will start with some theory. We just need to go through these things in order to fully understand what the difference is and how we can use them.
Solution layers
The image above is copied from MS Learn, but contains almost all the information you need. Tell your brain to save it, or at least bookmark the link. I will talk you through what we can see on this diagram.
The top layer isn't actually a layer, but shows what the user sees when an app (or flow) runs. So we have to interpret the diagram from top to bottom. Although it sounds weird, I will describe the other layers from the bottom to the top.
The bottom layer shows the system solutions, which is a managed (by Microsoft) layer. This is the solution that contains all the tables and components for the platform to function. When you create a new environment, this is deployed to your environment, which is why it takes a few minutes for an environment to be ready to be used.
The next layer(s) are the managed solution layers. There can be multiple of these; we will talk about layering solutions later. For now, it is important to understand that you can have multiple managed layers stacked on top of each other.
Then we get to the unmanaged layer. This is always a single layer, and it is important to note that it sits on top of the managed layers. That is always the case. To be clear, the time of importing a solution has nothing to do with this: the unmanaged layer always sits on top.
Customizations
The layers explained in the previous section come into play when you want to make adjustments. For unmanaged solutions, you will be working in the unmanaged layer. You can just make your changes, and the new version will be active once published.
For managed solutions, you basically cannot make adjustments. When you are within your solution, you will not have the option to directly make changes. There are some ways around this. For instance, when you open a Power Automate flow, you will see the option to edit the flow. But here comes the tricky part.
You will not be adjusting your managed layer, as it is managed (a.k.a. locked). When you make adjustments, you will create an active customization in the unmanaged layer, which leads us to updating your solution. You can see if a component has active customizations by clicking Advanced and selecting See solution layers. In the example (again copied from MS Learn) you can see the system layer (which is managed) and the unmanaged layer on top.
I personally use an XrmToolBox tool called Solution Layer Explorer to see which components of my solution contain active (unmanaged) layers. In the next section you will learn why I check this regularly.
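If you prefer scripting over tooling, you can peek at the same information yourself. Here is a hedged sketch, assuming the msdyn_componentlayer virtual table that the See solution layers UI appears to use; the table and column names are assumptions, so verify them in your own environment before relying on this.

```typescript
// Hedged sketch: query the component layers of a single component. All
// msdyn_* names below are assumptions based on what the "See solution
// layers" UI appears to query -- verify before use.
const ORG_URL = "https://yourorg.crm.dynamics.com"; // hypothetical
const HEADERS = {
  Authorization: `Bearer ${process.env.DATAVERSE_TOKEN}`, // assumed token
  Accept: "application/json",
};

// List the layers of one component, e.g. a flow ("workflow") by its GUID.
async function listLayers(componentName: string, componentId: string) {
  const url =
    `${ORG_URL}/api/data/v9.2/msdyn_componentlayers` +
    `?$filter=msdyn_solutioncomponentname eq '${componentName}'` +
    ` and msdyn_componentid eq '${componentId}'`;
  const res = await fetch(url, { headers: HEADERS });
  if (!res.ok) throw new Error(`Dataverse returned ${res.status}`);
  const { value } = await res.json();
  // A row named "Active" on top of managed rows is exactly the unmanaged
  // layer we want to spot.
  for (const layer of value) {
    console.log(`${layer.msdyn_order}: ${layer.msdyn_solutionname}`);
  }
}

listLayers("workflow", "00000000-0000-0000-0000-000000000000")
  .catch(console.error);
```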
New versions
Your solution is stored in Dataverse tables. On import, Dataverse checks whether the solution is new or a new version of an existing one; there is no need for you to do anything here. The same goes for components: if a component does not exist, it will be created; if it does exist, it will be updated. But there are a few things to be aware of.
The first is when active customizations have been made. Let me elaborate. Imagine you imported version 1 of (managed) solution 2 into a target environment. After that, you made changes to the Power Automate flow. You now know that this creates an active, or unmanaged, layer, which always sits on top of the managed layers. Now you want to update (managed) solution 2 with version 2. The deployment goes well, but the changes made to your Power Automate flow are somehow not coming through. That is because there is still an active layer on that Power Automate flow, which takes precedence over version 2 of the managed solution. That's why you always have to be keen on unmanaged layers when working with managed solutions.
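When you run into this situation, you don't have to fix it by hand in the UI. Here is a hedged sketch using the Dataverse Web API's RemoveActiveCustomizations action, which deletes the unmanaged layer so the managed layer underneath (your freshly imported version 2) becomes active again; the component type name and GUID are hypothetical.

```typescript
// Hedged sketch: remove the unmanaged (active) layer from one component via
// the RemoveActiveCustomizations Web API action.
const ORG_URL = "https://yourorg.crm.dynamics.com"; // hypothetical
const HEADERS = {
  Authorization: `Bearer ${process.env.DATAVERSE_TOKEN}`, // assumed token
  "Content-Type": "application/json",
};

async function removeActiveLayer(componentName: string, componentId: string) {
  const res = await fetch(
    `${ORG_URL}/api/data/v9.2/RemoveActiveCustomizations`,
    {
      method: "POST",
      headers: HEADERS,
      body: JSON.stringify({
        SolutionComponentName: componentName, // e.g. "workflow" for a flow
        ComponentId: componentId,
      }),
    }
  );
  if (!res.ok) throw new Error(`Remove failed: ${res.status}`);
}

// After this, version 2 of the managed flow is what users actually run.
removeActiveLayer("workflow", "00000000-0000-0000-0000-000000000000")
  .catch(console.error);
```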
The second important aspect is how updates are treated with managed vs unmanaged solutions. With managed solutions there are actually three options:
| Option | Description |
| --- | --- |
| Upgrade | Upgrades your solution to the latest version. Any objects not present in the newest solution will be deleted. |
| Stage for upgrade | Upgrades your solution to the higher version, but defers the deletion of the previous version and any related patches until you apply an upgrade later. |
| Update | Replaces your older solution with this one. |
Upgrade is the default option, and in my opinion for a good reason. The big advantage of managed solutions is exactly that they will also remove any component that somehow isn't required anymore. This means fewer surprises, which is what we are after.
With unmanaged solutions, already existing components won't be removed. They will still be there, even after an update, which is why unmanaged solutions aren't really suited for versioning. The removal of components as a whole is a key difference between managed and unmanaged.
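To make the Stage for upgrade option concrete: under the hood this is the ImportSolution action with the HoldingSolution flag, followed later by DeleteAndPromote. Below is a hedged sketch; the file path and solution name are hypothetical, so verify the action parameters against your environment.

```typescript
// Hedged sketch: "stage for upgrade" via the Dataverse Web API.
import { readFileSync } from "node:fs";
import { randomUUID } from "node:crypto";

const ORG_URL = "https://yourorg.crm.dynamics.com"; // hypothetical
const HEADERS = {
  Authorization: `Bearer ${process.env.DATAVERSE_TOKEN}`, // assumed token
  "Content-Type": "application/json",
};

async function stageForUpgrade(zipPath: string, uniqueName: string) {
  // Step 1: import version 2 as a "holding" solution next to version 1.
  await fetch(`${ORG_URL}/api/data/v9.2/ImportSolution`, {
    method: "POST",
    headers: HEADERS,
    body: JSON.stringify({
      CustomizationFile: readFileSync(zipPath).toString("base64"),
      OverwriteUnmanagedCustomizations: false,
      PublishWorkflows: true,
      ImportJobId: randomUUID(),
      HoldingSolution: true, // this is what makes it "stage for upgrade"
    }),
  });
  // Step 2 (can be deferred): delete the old version and promote the staged
  // one. Running both steps back to back is equivalent to a plain Upgrade.
  await fetch(`${ORG_URL}/api/data/v9.2/DeleteAndPromote`, {
    method: "POST",
    headers: HEADERS,
    body: JSON.stringify({ UniqueName: uniqueName }),
  });
}

stageForUpgrade("MyBusinessApp_2_0_0_0_managed.zip", "MyBusinessApp")
  .catch(console.error);
```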
Delete operations
There is also a big difference when you want to delete components. An unmanaged solution is fully customizable. You can just remove a component from the solution. But there is a catch. When you are in your solution and select a component to delete, you will be given the following two options.
To understand this, I have to tell you about the Default Solution. Each environment has a Default Solution (it is always at the bottom), and all components created in the environment reside there. When you press Remove from this solution, the component will still exist in the Default Solution. Especially for Power Automate flows or (low-code) plugins that run automated processes, this is important to be aware of, as you might think something is removed when it's not. When you actually want to get rid of the component, you should select Delete from this environment, which will completely remove it. Remove from this solution is only of value when you want to split your solution into multiple solutions, which makes me wonder why this UX has been chosen.
When you delete an unmanaged solution, the same thing happens. The solution itself will be removed, but all the components it contains will still exist in the Default Solution. So when you want to fully remove an unmanaged solution, you will need to first remove all the components before removing the solution.
For managed solutions it is completely different. You basically cannot remove individual items. Your only option is to remove the complete solution, which will remove the solution and all the components it contains.
Import & Export
The last piece of theory, almost there! There is also a difference in importing and exporting solutions. You can only export unmanaged solutions. Once a managed solution is imported, it is locked and cannot be re-exported. With an unmanaged solution, you can export as many times as required.
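For completeness, here is what export looks like through the Web API's ExportSolution action. The same unmanaged solution in DEV can be exported both ways: unmanaged for source control or further development, managed for deployment. A hedged sketch, with a hypothetical solution name:

```typescript
// Hedged sketch: export one unmanaged solution twice -- once as unmanaged
// and once as managed -- using the ExportSolution Web API action.
import { writeFileSync } from "node:fs";

const ORG_URL = "https://yourorg.crm.dynamics.com"; // hypothetical
const HEADERS = {
  Authorization: `Bearer ${process.env.DATAVERSE_TOKEN}`, // assumed token
  "Content-Type": "application/json",
};

async function exportSolution(name: string, managed: boolean) {
  const res = await fetch(`${ORG_URL}/api/data/v9.2/ExportSolution`, {
    method: "POST",
    headers: HEADERS,
    body: JSON.stringify({ SolutionName: name, Managed: managed }),
  });
  if (!res.ok) throw new Error(`Export failed: ${res.status}`);
  // The response carries the solution zip as a base64 string.
  const { ExportSolutionFile } = await res.json();
  const suffix = managed ? "_managed" : "";
  writeFileSync(`${name}${suffix}.zip`, Buffer.from(ExportSolutionFile, "base64"));
}

async function main() {
  await exportSolution("MyBusinessApp", false); // keep developing / source control
  await exportSolution("MyBusinessApp", true);  // deploy this one to TST/PRD
}

main().catch(console.error);
```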
Below is an overview of the differences we discussed.
| Feature | Managed Solution | Unmanaged Solution |
| --- | --- | --- |
| Layering | Multiple layers possible | Always the top layer, overwrites existing |
| Modification | Not directly modifiable | Fully modifiable |
| New versions | Upgrade as default, which removes unused components | Not suited for versioning |
| Deleting components | Only by deleting the entire solution | Individual components, but be aware of the Default Solution |
| Import/export | Exported as managed, cannot re-export | Fully exportable and re-importable |
How to work with the two types
Now, let's see how we can use the managed and unmanaged solutions. My default setup, and in my opinion the best practice, is shown below.
We want full control over the production environment. So we don't want to keep components we remove from our solution. That's why I think it's a no-brainer to opt for managed solutions in the production environment. As your TST environment should be as close to PRD as can be, here it should be managed too. Only your DEV environment will contain an unmanaged solution.
As I've mentioned, the unmanaged or active layer is something that can cause trouble when upgrading your solutions. There is now a feature that lets you block unmanaged customizations. Although I really don't like active layers, I haven't turned this feature on, because unmanaged layers can come in handy.
Imagine you worked on a solution in the DEV environment and deployed it to TST for testing. You test the solution and everything works fine. But then a business user starts testing the solution and runs into some errors. In such a case it is very helpful to be able to quickly create some unmanaged layers as a sort of patch, so you can quickly test if the adjustment works. If not, you remove the unmanaged layer. If the adjustment works, you make the exact same change in DEV, in your unmanaged solution, deploy a new version to TST, and remove the active layers.
You have to stay keen on the unmanaged layers, that's for sure. You can always check the layers quickly with the Solution Layer Explorer in XrmToolBox.
Another way to use the unmanaged solution is when you are using source control. This is a more advanced skill, but when you store the unmanaged solution's source code in a repo, you can always recreate an unmanaged, or managed, solution from it. I will not discuss that in more detail here.
I hope this gives a bit of an overview of the differences between managed and unmanaged solutions, and all the little caveats that come with both options. It took me a while to understand it, so I hope this will help someone.
Community Contribution: After publishing this Challenge, Carina M. Claesson sent me a message. Her addition to this piece is that ideally you have just one unmanaged solution per environment. Development environments (not the dev solution, but the environment type development) are a great use case for this. This way you make sure that you will not mix in components from other unmanaged solutions, which can cause dependency issues.
These issues may arise directly when you export your unmanaged solution as managed and import it into your target environment. If there are items missing, you cannot import the solution; adding them is the quick fix. The issue can be more painful when you don't notice it at first. You will then only notice it when you want to remove a solution, and you will have to unwind the solutions, which is not what you want to do.
If creating a dev environment for each unmanaged solution is not an option for you, you have to be very wary of these dependencies. An option would be to reset the TST environment before you deploy your solution there (a feature available for sandbox environments). This way you can limit the number of environments. In this case you probably want your solutions in source control, so you can make the TST environment exactly how you need it to be.
Connection References & Environment Variables
As mentioned, solutions are great for moving across environments. But sometimes you need a little adaptability across the different environments. Connection references and environment variables allow you to separate out environment-specific details and maintain flexibility when promoting solutions through the different stages of the application lifecycle.
Environment Variables
An environment variable is essentially a placeholder that can be configured differently for each environment (e.g., DEV, TST, PRD). Instead of hardcoding values like API keys, URLs, or other configurations directly into your apps or flows, you use environment variables to make your solutions more flexible and easier to maintain.
Environment variables can be defined in the solution and referenced in your app or flow. When you deploy the solution to a new environment, you only need to update the value of the environment variable for that specific environment. This gives you more flexibility, without creating unmanaged layers.
To give you an example, consider a scenario where your Power Platform app needs to interact with an external API. The API endpoint may differ between development, testing, and production environments. Instead of hardcoding the endpoint URL, you can create an environment variable called API_Base_URL. This variable can be assigned different values based on the environment in which the solution is deployed, thereby eliminating the need to change and republish the app each time it moves to a new stage.
There are two Dataverse tables that facilitate this: Environment Variable Definition and Environment Variable Value. The definition is what is stored in your solution, and is thus managed and not adjustable. The Environment Variable Value is outside the solution (hence there is no unmanaged layer) and uses a lookup to link to the definition.
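To show how the two tables work together, here is a hedged sketch that resolves an environment variable the way you would expect: the current value if one exists in this environment, otherwise the default from the definition. The schema name new_API_Base_URL is hypothetical.

```typescript
// Hedged sketch: read an environment variable's effective value by joining
// the definition to its (optional) value row.
const ORG_URL = "https://yourorg.crm.dynamics.com"; // hypothetical
const HEADERS = {
  Authorization: `Bearer ${process.env.DATAVERSE_TOKEN}`, // assumed token
  Accept: "application/json",
};

async function getEnvironmentVariable(
  schemaName: string
): Promise<string | undefined> {
  const url =
    `${ORG_URL}/api/data/v9.2/environmentvariabledefinitions` +
    `?$select=schemaname,defaultvalue` +
    `&$filter=schemaname eq '${schemaName}'` +
    `&$expand=environmentvariabledefinition_environmentvariablevalue($select=value)`;
  const res = await fetch(url, { headers: HEADERS });
  if (!res.ok) throw new Error(`Dataverse returned ${res.status}`);
  const { value: definitions } = await res.json();
  if (definitions.length === 0) return undefined;
  const def = definitions[0];
  const values = def.environmentvariabledefinition_environmentvariablevalue;
  // A value row overrides the default; no value row means the default applies.
  return values.length > 0 ? values[0].value : def.defaultvalue;
}

getEnvironmentVariable("new_API_Base_URL")
  .then(console.log)
  .catch(console.error);
```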
Connection References
A connection reference takes practically the same approach, but in this case for connections. In the DEV environment the connections will run under your personal credentials, but in TST or PRD they will probably need other credentials. That is what connection references are for.
When you import the solution into the target environment, you will need to select a connection that the connection reference can use. Again, just like with environment variables.
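Since a connection reference is itself just a row in Dataverse, you can also re-point it programmatically after import. In real pipelines you would typically do this through a deployment settings file, but this hedged sketch shows the underlying idea; the logical name and connection ID are hypothetical.

```typescript
// Hedged sketch: point a connection reference at a different connection.
const ORG_URL = "https://yourorg.crm.dynamics.com"; // hypothetical
const HEADERS = {
  Authorization: `Bearer ${process.env.DATAVERSE_TOKEN}`, // assumed token
  Accept: "application/json",
};

async function setConnection(logicalName: string, connectionId: string) {
  // Find the connection reference row by its logical name.
  const find = await fetch(
    `${ORG_URL}/api/data/v9.2/connectionreferences` +
      `?$select=connectionreferenceid` +
      `&$filter=connectionreferencelogicalname eq '${logicalName}'`,
    { headers: HEADERS }
  );
  const { value } = await find.json();
  if (value.length === 0) throw new Error(`No connection reference ${logicalName}`);
  // Re-point it at the connection that should back it in this environment.
  await fetch(
    `${ORG_URL}/api/data/v9.2/connectionreferences(${value[0].connectionreferenceid})`,
    {
      method: "PATCH",
      headers: { ...HEADERS, "Content-Type": "application/json" },
      body: JSON.stringify({ connectionid: connectionId }),
    }
  );
}

setConnection("new_sharedsharepointonline_ref", "shared-sharepointonl-1234")
  .catch(console.error);
```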
As the addition of the connection reference can cause some linguistic juggling, I added a small list to show the differences between everything connect*.
| Name | Description |
| --- | --- |
| Connector | A connector represents the connection interface used by Power Apps or Power Automate to access services such as SharePoint, Dataverse, or any RESTful API. It is basically a low-code version of an API. |
| Connection | Stored credentials for the chosen connector, which authorize you to access the data source. |
| Connection Reference | A placeholder for the connection. A different connection can be linked to the connection reference across environments. |
These two items are far easier to understand than managed and unmanaged solutions, but they are just as critical to making your solutions work across different environments.
Pro tip: I personally combine environment variables with a custom Settings table. The environment variables are controlled by me as the developer, whereas the Settings table is managed by the business owner. I use the Configuration Migration Tool to make sure the record IDs are identical across the environments, and I reference these Dataverse records in my app or flow. With security roles I only allow the business owner to edit the records, which gives them some control. Think of email addresses that might change over time, or email bodies when sending out emails: they can control the content, instead of requesting a change every two months. For every variable that must be set, I ask myself: do I want to control it (environment variable), or do I want the business owner to control it (record in the Settings table)?
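Here is a hedged sketch of that Settings-table pattern; the table and column names (new_setting, new_name, new_value) are hypothetical placeholders for whatever your own table looks like.

```typescript
// Hedged sketch: read a business-owned setting from a hypothetical custom
// table at runtime, instead of hardcoding it in the app or flow.
const ORG_URL = "https://yourorg.crm.dynamics.com"; // hypothetical
const HEADERS = {
  Authorization: `Bearer ${process.env.DATAVERSE_TOKEN}`, // assumed token
  Accept: "application/json",
};

async function getSetting(name: string): Promise<string | undefined> {
  const url =
    `${ORG_URL}/api/data/v9.2/new_settings` +
    `?$select=new_value&$filter=new_name eq '${name}'`;
  const res = await fetch(url, { headers: HEADERS });
  if (!res.ok) throw new Error(`Dataverse returned ${res.status}`);
  const { value } = await res.json();
  return value[0]?.new_value;
}

// The business owner edits the row; the app just reads it at runtime.
getSetting("SupportMailbox").then(console.log).catch(console.error);
```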
Solution Segmentation and Layering
As you progress with your solution, it might become too big to properly manage. I recently encountered this with a solution that contained Power Pages components. Because of the vast number of components required for Power Pages, upgrading the solution takes quite some time. Chopping up the solution into more manageable pieces can improve deployment time and overall maintainability.
Some practical examples of segmentation are described below.
Core Solution: The core solution contains the key entities that your solution uses. This solution focuses on data structures, business rules, and relationships between those entities. It’s managed to ensure data integrity and consistency across environments.
User Interface Solution: Another solution focuses on the user interface (UI), with Power Apps that expose the core entities to end users. Segmentation of UI components ensures that changes to the interface can be done independently of core data structure modifications. For instance, changes in how forms are displayed or new dashboards can be deployed without impacting the underlying data.
Automation Solution: A third solution could include workflows and Power Automate flows. This segmentation allows the automation layer to evolve independently of core data or UI changes. For instance, if a new automated process is introduced, it can be added to the automation solution without altering the underlying data model or affecting the user interface.
Security and Permissions Solution: Segmentation can also be used to manage security aspects separately. A solution specifically focused on security roles and permission levels can be created and deployed independently. This is especially useful for ensuring that the right roles are applied in each environment, reducing the chances of inconsistencies in permissions.
In such a scenario you will end with a structure like shown below.
You could also create multiple UI solutions (Power Pages, MDA, Canvas Apps) for reasons described earlier.
Another option is to create a core solution and, on top of that, smaller solutions that are tailored to business users. Imagine an MDA for back-office processes, and a Canvas App or Power Pages site for different users. Segmentation based on personas can be of use too; in that case, you combine security roles and UI components together. Depending on your scenario, you can choose what fits best.
We are actually already layering solutions on top of one another here. The Core solution is your base (Solution 1), and all the solutions on top of it have a dependency on Solution 1. When you have a bigger team with different specialists, you could even split the solution responsibilities accordingly.
A big advantage of splitting your solution into smaller pieces is that a deployment only changes a single part of the whole, which makes testing a lot easier, as you only have to test that particular section. Another advantage is that when you use source control and implement code reviews before deployment, each code review is smaller.
The downside is that you add complexity. Your team should know how to work this way, and you need proper documentation of your solution.
There are plenty of examples of solution segmentation to be found. The CoE Starter Kit is a great one. Its Core solution (the Core naming can be found in many segmentations) contains everything required for collecting all the data. On top of the Core solution, different modules are created that can be deployed when you want to. So here a functional segmentation was chosen, which is also a nice approach.
Another example is the Creator Kit. This started as a single solution and was later split up into a Core solution, with a solution for the MDA template and one for the Canvas template on top.
When you really want to up your game, you could bundle your different solutions into a package. A package contains at least one solution, but can also include configuration data (remember the custom Settings table, for example?) and custom code. This should give you the ability to set environment settings (the Creator Kit does exactly this). I cannot tell you everything about this yet, as I am figuring it out myself at the moment. Maybe a topic for a next month?
Additional Information
You can learn a lot from how others do things. You can learn from open-source solutions, but in my experience you can definitely learn from the way Microsoft themselves package and deploy their solutions. Having thought out how you bring your solution to your end users will truly up your game and help you create better solutions overall.
Key Takeaways
👉🏻 The difference between managed and unmanaged is boring at first, but super important
👉🏻 Working with solutions means working with Connection References and Environment Variables
👉🏻 Using segmentation and layering can improve your deployments