Automate Application creation in ConfigMgr with Powershell

In my previous post I wrote about a convoluted way of hiding credentials wherever possible when working with Task Sequences. Fortunately, that entire solution became obsolete when Microsoft decided to offer masking of sensitive data in a Task Sequence with a mere checkbox: first in the 1804 preview, then in the 1806 release.

This time, I’d like to share something a little less situational: a PowerShell script to create Applications and (almost) everything that goes with them. There are plenty of similar scripts on TechNet and elsewhere, so you may wonder what makes this one so different. Honestly, probably not that much. If anything, it would be its flexibility and ease of use, as you can basically go through it with just a few mouse clicks. It evolved from simply automating repetitive tasks into a handy tool that I use at pretty much all my customers.


What does it do:

Depending on the specified parameters, it will:

  • Create an Application in an optional specific folder within the ConfigMgr console.
  • Create either a script-based or MSI-based Deployment Type for that Application, including its Detection rule.
  • Create an AD Group with a prefix based on your naming convention.
  • Link this Group to an ‘All Apps’ Group, so an admin or device in this group has access to all created apps in one go.
  • Create either a User or Device Collection in an optional specific folder within the ConfigMgr console.
  • Create a Membership rule for said Collection based on the AD Group created earlier.

Once executed (without any parameters), it will load the required modules and prompt you to browse to an installation file.
If you select an MSI file and you have an icon file in your source folder, the script will do everything else. If there’s no icon file, it will ask you to select one, though you can cancel the prompt and ConfigMgr will use the rather ugly default icon.
If you select a .cmd or .ps1 file, the script will prompt you for an uninstall file and a detection script file. And again an optional extra prompt for an icon file.


What doesn’t it do:

It does not create Deployments. When I get around to it, I’ll probably add a switch for that too. But since in most cases deployments need to be tested first, so far I’ve always preferred to create these manually.

It has no option to remove stuff. I may or may not integrate that into this script. For now, automating cleanup is something for a future blog post 🙂



What does it need:

  • you have your content share set up in a specific way:
    This is needed because the script fills in several fields such as Manufacturer, Name and Version based on this folder structure. These values are then also used to build the AD Group and Collection names.
  • you have the ConfigMgr Cmdlets available or installed,
  • you have the AD Cmdlets available or installed,
  • you prepare your (script-based) Application properly.
    In most cases, all you’ll need is an install, uninstall and detection script. For MSI-based installers you have the option to specify arguments or let the script handle everything for you.
    Ideally, you also have an icon file present that is, at the time of writing, no larger than 250x250px.

For example, you could have the following files;
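For instance, with a Manufacturer\Name\Version folder structure (share and file names here are hypothetical examples):

```
\\FileServer\Sources$\Contoso\ExampleApp\1.0\
    Install.cmd
    Uninstall.cmd
    Detect.ps1
    Icon.png
```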

Calling Setup.exe with some silent parameter from the same folder as the batch file:
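A minimal install wrapper along those lines might contain the following (the silent switch depends entirely on your installer):

```
@echo off
REM Run Setup.exe from the folder this batch file lives in
"%~dp0Setup.exe" /S
```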



Most properly built installers write their application info to the Uninstall key under HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\ so that they show up in Add/Remove Programs in Windows, so it’s fairly reliable for detecting a successful installation. You can make your detection as complex as you want; just make sure the script ends with exit code 0 and a non-error string. We only ever return a True and no False (or any other string), as doing so would be picked up by ConfigMgr as a failure. See this documentation over at Microsoft for valid return values.

Note that registry writes from 32-bit software on a 64-bit system are redirected to HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall.
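A simple registry-based detection script following these rules (the display name is a placeholder) could look like this:

```powershell
# Hypothetical detection: look for the app in both the native and WOW6432Node Uninstall keys
$paths = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
         'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
$app = Get-ItemProperty -Path $paths -ErrorAction SilentlyContinue |
       Where-Object { $_.DisplayName -eq 'Example App' }

# Output 'True' only when found; output nothing otherwise, and always exit 0
if ($app) { Write-Output 'True' }
exit 0
```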


The script:

For ease of use, you may want to check some parameter defaults and edit them to match your environment. Most notably the AppPath and GroupOU parameters.
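For reference, the relevant parameter block might look something like this sketch (the default values are placeholders for your own environment):

```powershell
param(
    # Hypothetical defaults - adjust to match your environment
    [string]$AppPath = '\\FileServer\Sources$\Apps',
    [string]$GroupOU = 'OU=ApplicationGroups,DC=contoso,DC=com'
)
```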

If you have any questions or comments, I’d be happy to hear them.

Supporting both Legacy and UEFI mode in your SCCM environment

If you still have devices in your environment that only support legacy PXE boot and you also want to support UEFI PXE boot with the same task sequence, this blog post is meant for you. I will also give you some additional options you can add to the partitioning steps in your Task Sequence (TS) which could come in handy.

Although I recommend using IP helpers over DHCP options, because IP helpers are much more reliable, below is a step-by-step guide to configure DHCP for legacy and UEFI PXE booting in your network.

Configure DHCP to support both legacy and UEFI mode

Step one: Create custom vendor classes to use with your DHCP Policy

Create custom Vendor Classes as described in the following steps; these will help the DHCP server determine how devices are requesting the boot image.

  • Open the DHCP Console and expand the IPv4 Node
  • Right-Click on ‘IPv4 Node’ and select ‘Define Vendor Classes’
  • Click ‘Add’
  • Enter the following information:
    • DisplayName: PXEClient (UEFI x64)
    • Description: PXEClient:Arch:00007
    • ASCII: PXEClient:Arch:00007
  • Click ‘OK’
  • Click ‘Add’
  • Enter the following information
    • DisplayName: PXEClient (UEFI x86)
    • Description: PXEClient:Arch:00006
    • ASCII: PXEClient:Arch:00006
  • Click ‘OK’
  • Click ‘Add’
  • Enter the following information
    • DisplayName: PXEClient (BIOS x86 & x64)
    • Description: PXEClient:Arch:00000
    • ASCII: PXEClient:Arch:00000
  • Click ‘OK’
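If you prefer PowerShell over the console, the same three vendor classes could be created with the DhcpServer module (a sketch; run on the DHCP server):

```powershell
# Create the three PXE vendor classes used by the policies below
Add-DhcpServerv4Class -Name 'PXEClient (UEFI x64)' -Type Vendor `
    -Data 'PXEClient:Arch:00007' -Description 'PXEClient:Arch:00007'
Add-DhcpServerv4Class -Name 'PXEClient (UEFI x86)' -Type Vendor `
    -Data 'PXEClient:Arch:00006' -Description 'PXEClient:Arch:00006'
Add-DhcpServerv4Class -Name 'PXEClient (BIOS x86 & x64)' -Type Vendor `
    -Data 'PXEClient:Arch:00000' -Description 'PXEClient:Arch:00000'
```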

Step two: Create the custom DHCP Policies

UEFI x86 DHCP Policy

  • Right-Click ‘Policies’ and click ‘New Policy’
  • Enter the following information:
    • PolicyName: PXEClient (UEFI x86)
    • Description: Bootfile for UEFI x86 devices
  • Click ‘Next’
  • On the ‘Configure Conditions for the policy’ page, click ‘Add’
  • In the ‘Value’ drop-down box, select the ‘PXEClient (UEFI x86)’ vendor class which we created in the previous steps
  • Be sure to check the box ‘Append wildcard(*)’
  • Select ‘Add’
  • Select ‘Ok’
  • Click ‘Next’
  • Configure the scope if you want to target the policy on a specific IP range or select ‘No’ and click ‘Next’
  • Be sure that on the ‘Configure settings for the policy’ page ‘DHCP Standard Options’ is selected.
  • Configure the following scope options:
    • Option 060: PXEClient
    • Option 066: IP Address of the SCCM or WDS Service
    • Option 067: smsboot\x86\wdsmgfw.efi
  • Click ‘Next’
  • Click ‘Finish’

UEFI x64 DHCP Policy

  • Right-Click ‘Policies’ and click ‘New Policy’
  • Enter the following information:
    • PolicyName: PXEClient (UEFI x64)
    • Description: Bootfile for UEFI x64 devices
  • Click ‘Next’
  • On the ‘Configure Conditions for the policy’ page, click ‘Add’
  • In the ‘Value’ drop-down box, select the ‘PXEClient (UEFI x64)’ vendor class which we created in the previous steps
  • Be sure to check the box ‘Append wildcard(*)’
  • Select ‘Add’
  • Select ‘Ok’
  • Click ‘Next’
  • Configure the scope if you want to target the policy on a specific IP range or select ‘No’ and click ‘Next’
  • Be sure that on the ‘Configure settings for the policy’ page ‘DHCP Standard Options’ is selected.
  • Configure the following scope options:
    • Option 060: PXEClient
    • Option 066: IP Address of the SCCM or WDS Service
    • Option 067: smsboot\x64\wdsmgfw.efi
  • Click ‘Next’
  • Click ‘Finish’

(Legacy) BIOS x86 & x64 DHCP Policy 

  • Right-Click ‘Policies’ and click ‘New Policy’
  • Enter the following information:
    • PolicyName: PXEClient (BIOS x86 & x64)
    • Description: Bootfile for BIOS devices
  • Click ‘Next’
  • On the ‘Configure Conditions for the policy’ page, click ‘Add’
  • In the ‘Value’ drop-down box, select the ‘PXEClient (BIOS x86 & x64)’ vendor class which we created in the previous steps
  • Be sure to check the box ‘Append wildcard(*)’
  • Select ‘Add’
  • Select ‘Ok’
  • Click ‘Next’
  • Configure the scope if you want to target the policy on a specific IP range or select ‘No’ and click ‘Next’
  • Be sure that on the ‘Configure settings for the policy’ page ‘DHCP Standard Options’ is selected.
  • Configure the following scope options:
    • Option 060: PXEClient
    • Option 066: IP Address of the SCCM or WDS Service
    • Option 067:
  • Click ‘Next’
  • Click ‘Finish’
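The GUI steps above can also be scripted. A sketch for the UEFI x64 policy with the DhcpServer module (the server IP and the BIOS bootfile value are assumptions; adjust them to your environment):

```powershell
# Create a policy matched on the vendor class, then set its options
Add-DhcpServerv4Policy -Name 'PXEClient (UEFI x64)' `
    -Description 'Bootfile for UEFI x64 devices' `
    -Condition OR -VendorClass EQ, 'PXEClient (UEFI x64)*'

# Option 60 may first need to be defined on the server (Add-DhcpServerv4OptionDefinition)
Set-DhcpServerv4OptionValue -PolicyName 'PXEClient (UEFI x64)' -OptionId 60 -Value 'PXEClient'
Set-DhcpServerv4OptionValue -PolicyName 'PXEClient (UEFI x64)' -OptionId 66 -Value '192.168.1.10'
Set-DhcpServerv4OptionValue -PolicyName 'PXEClient (UEFI x64)' -OptionId 67 -Value 'smsboot\x64\wdsmgfw.efi'
```

The x86 and BIOS policies follow the same pattern with their own vendor class and bootfile (for legacy BIOS, commonly smsboot\x86\wdsnbp.com).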

Final step:

Remove all the default scope options: 060, 066 and 067. The policies will now provide these instead.

Configure the task sequence to support both legacy and UEFI mode

Legacy and UEFI need different disk partitioning configurations. During the partitioning steps we can detect whether the device was PXE booted in legacy or UEFI mode by adding a condition on a task sequence variable to the partitioning step. To check if the device is PXE booted in UEFI mode, add the following condition:
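ConfigMgr exposes the built-in read-only variable _SMSTSBootUEFI for this. The condition on the UEFI partitioning step would be:

```
Task Sequence Variable  _SMSTSBootUEFI  equals  "true"
```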

Your UEFI partitioning step could look like this:

Or to check whether the device is PXE booted in legacy mode:
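Using the same built-in _SMSTSBootUEFI variable, the condition on the legacy partitioning step would be:

```
Task Sequence Variable  _SMSTSBootUEFI  equals  "false"
```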

Your legacy partitioning step could look like this:

This way you can select the disk partition configuration depending on the PXE mode (legacy/UEFI). Depending on your environment you can also create different partitioning steps within your TS for desktops, laptops and tablets, or depending on your disk size. If you would like to create a second disk partition, for example when the disk is larger than 128 GB, you can use the following WMI query:
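Such a WMI query (against the default root\cimv2 namespace; 128 GB expressed in bytes) could look like:

```
SELECT * FROM Win32_DiskDrive WHERE Size > 128849018880
```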

Change ‘>’ to ‘<’ to detect whether the disk is smaller and you only want one partition.

If you have any comments or questions about this blog post please post them below in the comment section and I will try to answer them as soon as possible.

Securely process credentials during OSD operations

This post has been made entirely obsolete with ConfigMgr 1806 where Microsoft introduced the option to mask sensitive data in Task Sequences. Some of the principles contained herein may still be applicable to protecting other data.


For a customer where I was performing a ConfigMgr implementation, I was also tasked with installing some 30 Read-Only Domain Controllers. In itself, that hardly takes any effort. But hey, we have ConfigMgr and a boring repetitive task to do, so let’s just automate the whole thing! The challenge came when figuring out how to hide the credentials used in the whole process.

Let’s start simple with a little script we can trigger from ConfigMgr to install and configure the role. Actually, when going through the wizard in Server Manager, it will do most of the scriptwriting for us, and with a few changes we end up with a script that takes parameter input and feeds all of it to the parameters of the Install-ADDSDomainController Cmdlet.
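Such a script might look like the following sketch (parameter names mirror those used later in this post; the site name and switches are illustrative):

```powershell
param(
    [string]$DomainAdminUserName,
    [string]$DomainAdminPassword,
    [string]$DomainNETBIOSName,
    [string]$DomainDNSName,
    [string]$SafeModeAdminPassword
)

Import-Module ADDSDeployment

# Build a PSCredential from the plaintext parameters (this is the naive version!)
$securePwd = ConvertTo-SecureString $DomainAdminPassword -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($DomainAdminUserName, $securePwd)

Install-ADDSDomainController `
    -DomainName $DomainDNSName `
    -ReadOnlyReplica:$true `
    -SiteName 'Default-First-Site-Name' `
    -InstallDns:$true `
    -Credential $cred `
    -SafeModeAdministratorPassword (ConvertTo-SecureString $SafeModeAdminPassword -AsPlainText -Force) `
    -NoRebootOnCompletion:$true `
    -Force:$true
```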

Note the -SafeModeAdministratorPassword and the -Credential parameters. The first takes a SecureString object and the latter needs a PSCredential object. These parameters have to be added to the script that the wizard generates. If you want to manually reboot the system afterwards, you may want to set the -NoRebootOnCompletion parameter. Also note that the user specified in the -DomainAdminUserName parameter needs to be a member of the BUILTIN\Administrators group within the domain -or- be granted the Enable Computer and User accounts to be trusted for delegation user right in the Default Domain Controller Policy GPO.

The script above will work. Great, let’s use it in an OSD Task Sequence!

Or….let’s not.

These are Domain Admin credentials and chances are you won’t be the only one with access to the Task Sequence. Not to mention that ConfigMgr creates detailed logfiles everywhere, including on the target system, that all sorts of people have access to.


The problem

When looking for ways to securely pass credentials to Powershell scripts in Task Sequences in System Center Configuration Manager, the interwebz is full of examples that use ServiceUI.exe (part of the Microsoft Deployment Toolkit) to show a Get-Credential popup, how to specify them using Collection and/or Task Sequence variables, hardcode them in a batch file or script, save a SecureString object to a file and/or they’re just entered in plaintext in the command line of the Task Sequence step.

None of these options were acceptable to me, as I had two requirements:

  • Aside from the fact that I don’t use MDT unless absolutely needed, the Task Sequence should just not halt to wait for user input.
  • The passwords should not be readable or retrievable by anyone with privileges below Domain Admin.

Hardcoding credentials in a script is just…wrong. So don’t do it. Ever.

SecureString objects in a file could work. But by default only the user that created the file can re-use it, unless a specific key was used. Anyone who obtains that key, which in this scenario would need to be hardcoded or passed as a parameter, would be able to decrypt the SecureString. Which brings us back to the root of the problem: the ConfigMgr logfiles.
In %Windir%\CCM\Logs\smsts.log, you can find the commands that the Task Sequence Engine fires, meaning a Run Command Line step would show the actual command being executed. The same is true for Collection Variables (hidden or not). When the Task Sequence loads them, they will show up in the log like this:

If we set a Run Command Line step to be executed as a specific user, ConfigMgr processes the credentials securely and will not log the actual credentials anywhere. This is what it looks like when a Run Command Line step is executed using a credential explicitly set in its step:



A solution

Use a certificate that can only be imported by Domain Admins to encrypt the passwords that we then use during OSD to promote a brand new system to RODC within an existing forest…or whatever else you want it to do.

Specifically, we’re going to use the Protect-CmsMessage and Unprotect-CmsMessage Cmdlets with a Self-Signed certificate.

In short, the steps to go through are as follows:

  • Using Powershell; create a self-signed certificate.
  • Using Powershell; export this certificate with ADDS Account Protection.
  • Using Powershell; encrypt the passwords using this certificate.
  • In ConfigMgr; create a package with the certificate so that we can install it.
  • In ConfigMgr; create another package with a script that will decrypt the password and configure the RODC.
  • Put it all together in a Task Sequence.

In this example, the domain’s DNS Name is ‘VMCorp.local’ and its NetBIOS Name is ‘VMCorp’.
There is a user called ‘admin’ that is a member of the ‘Domain Admins’ group. For simplicity, we’ll use this same user to perform all actions.

First, we need a certificate that uses ADDS Account Protection for the ‘Domain Admins’ group. This certificate needs to have Key Usage for Data Encipherment and Key Encipherment, and Enhanced Key Usage for Document Encryption (OID 1.3.6.1.4.1.311.80.1).

So let’s create one and store it in a temporary location:
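A sketch of both the creation and the ADDS-protected export (the subject name and file path are hypothetical):

```powershell
# Create a Document Encryption certificate in the current user's store
$cert = New-SelfSignedCertificate -Subject 'CN=OSD Password Encryption' `
    -Type DocumentEncryptionCert `
    -KeyUsage DataEncipherment, KeyEncipherment `
    -CertStoreLocation 'Cert:\CurrentUser\My'

# Export it with ADDS Account Protection: only members of Domain Admins can import it
Export-PfxCertificate -Cert $cert -FilePath 'C:\Temp\OSDCrypto.pfx' `
    -ProtectTo 'VMCorp\Domain Admins'
```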

Don’t delete it from your certificate store just yet, as we still need it to create our Cryptographic Message Syntax (CMS) strings. We can do that with nothing more than:
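Assuming the certificate’s subject is ‘CN=OSD Password Encryption’ (a hypothetical name), encrypting a password is a one-liner:

```powershell
# Encrypt the password to the Document Encryption certificate
$cms = Protect-CmsMessage -To 'CN=OSD Password Encryption' -Content 'P@ssw0rd123'
$cms   # the encrypted CMS block
```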

And we’ll then end up with something like this:
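The CMS string has this general shape (payload elided):

```
-----BEGIN CMS-----
<Base64-encoded encrypted payload>
-----END CMS-----
```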

This entire block, including the “-----BEGIN CMS-----” and “-----END CMS-----”, is what we will later use as the value for our password parameters. The only way to retrieve the password in plaintext from this string is with access to the certificate that was used to encrypt it. That certificate should be stored in a safe location, and even then, only Domain Admins can import it.

Decrypting the password again using the certificate is as simple as:
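Assuming the certificate (with its private key) is present in the store of the user running it, and $cms holds the CMS block:

```powershell
Unprotect-CmsMessage -Content $cms
```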


With that out of the way, we can create a package in ConfigMgr with a script that will import our certificate to the target machine. This script basically just needs one line:
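A one-line import script could look like this (the PFX filename is a hypothetical example; no password is needed because the PFX is protected to the Domain Admins group):

```powershell
# Import the ADDS-protected certificate into the machine store
Import-PfxCertificate -FilePath "$PSScriptRoot\OSDCrypto.pfx" -CertStoreLocation Cert:\LocalMachine\My
```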

Save it, together with the certificate we exported in the beginning, on the ConfigMgr content-share and create a package for it. It does not need a program.

Now for the new RODC installation script. Adapt it to incorporate the Unprotect-CmsMessage Cmdlet and it will start to look like this:
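A sketch of the adapted script, assuming the parameter names shown in the Task Sequence step later in this post (site name and switches remain illustrative):

```powershell
param(
    [string]$DomainAdminUserName,
    [string]$EncryptedDomainAdminPassword,
    [string]$DomainNETBIOSName,
    [string]$DomainDNSName,
    [string]$EncryptedSafeModeAdminPassword
)

Import-Module ADDSDeployment

# Decrypt the CMS strings; this only works after the certificate was imported
$daPwd = Unprotect-CmsMessage -Content $EncryptedDomainAdminPassword |
         ConvertTo-SecureString -AsPlainText -Force
$smPwd = Unprotect-CmsMessage -Content $EncryptedSafeModeAdminPassword |
         ConvertTo-SecureString -AsPlainText -Force

$cred = New-Object System.Management.Automation.PSCredential ($DomainAdminUserName, $daPwd)

Install-ADDSDomainController -DomainName $DomainDNSName -ReadOnlyReplica:$true `
    -SiteName 'Default-First-Site-Name' -InstallDns:$true `
    -Credential $cred -SafeModeAdministratorPassword $smPwd `
    -NoRebootOnCompletion:$true -Force:$true
```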

You can either modify it to add your own logging or look in %Windir%\Debug\ at the DCPROMO.log and DCPromoUI.log files for the result.
Create a package for this script too. And as before, it doesn’t need a program.


Putting it together in the Task Sequence, we first need to define the variables:

Start by adding a Set Task Sequence Variable step for the user name. In this case a Domain Admin, so I’ve named mine ‘DAUserName’ with a value of VMCorp\Admin:

Next we add another variable and here we enter the encrypted password.
Do the same for the Safe Mode Administrator password.

The system needs to be domain joined, because we can’t add the certificate to decrypt the password without using a Domain Admin account.
Since this is a lab environment, we’ll throw security best practices out of the window and use the same over-privileged account for the join.

During OSD, we don’t get policies yet, which means the Domain Admins group is not yet added to the local Administrators group. To work around this, we need to make our domain account a local administrator, because we will use this account in the next step to import the certificate to the LocalMachine certificate store, and only administrators can do that. Insert a Run Command Line step with a net localgroup administrators %DAUserName% /add command:

Up next comes the certificate. This is a simple Run Command Line step that is executed as a Domain Admin, thus having sufficient permissions to import this particular certificate; it uses Active Directory Account Protection after all. In this case the script is just executed directly from the share:

Now for the RODC installation script. Here we can just use a Run Powershell Script step, and set our parameters with the previously set variables; -DomainAdminUserName ‘%DAUserName%’ -EncryptedDomainAdminPassword ‘%DAPassword%’ -DomainNETBIOSName “VMCORP” -DomainDNSName “vmcorp.local” -EncryptedSafeModeAdminPassword ‘%SMPassword%’
Note the single quotes required to properly pass the encrypted password.

Finally, add a Restart Computer step as it is required to complete the configuration. It is highly recommended to add a step to remove the certificate again. This can be done with either a script or a command line:
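A removal sketch, matching the hypothetical subject used earlier:

```powershell
# Remove the certificate; -DeleteKey also removes the private key
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -eq 'CN=OSD Password Encryption' } |
    Remove-Item -DeleteKey
```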


When you look in the logs now, all you will find are entries like:

These are useless to an unauthorized user, as it takes a Domain Admin plus the certificate to decrypt what is, in this case, also a Domain Admin’s password.

First thoughts on Technical Preview 1702 for System Center Configuration Manager

A new month, a new technical preview and new thoughts!

It is probably needless to say but “Do NOT install technical previews in your production environments!!”

Send feedback:
Technical preview 1702 introduces a new option in SCCM to send feedback or make feature requests. The home ribbon has a feedback option, but you can also click on any object in the console. When clicking on feedback, a browser will open a link to the System Center Configuration Manager Feedback site. Does this add any value to SCCM? No, I do not think so, although it will make it a lot easier to send feedback to Microsoft. I just hope it will not be used as a place for bashing whenever things go wrong.

Updates and Servicing:
With 1702 they have simplified the Updates and Servicing view. When SCCM is two or more updates behind, ‘Updates and Servicing’ will only show the most recent version available. Every new update contains all previous updates, so in my opinion this is a great feature. Of course you will still be able to download previous versions, but you will get a warning that they are superseded by a newer version. The most recent update will be downloaded automatically when available, while older updates, even when not used, will be automatically deleted from the ‘EasySetupPayload’ folder.

Peer Cache improvements:
From now on, a peer cache source computer will reject a request for content when the peer cache source computer meets any of the following conditions:

  • Is in low battery mode.
  • CPU load exceeds 80% at the time the content is requested.
  • Disk I/O has an AvgDiskQueueLength that exceeds 10.
  • There are no more available connections to the computer.

I really like these new settings! They will give us more control over when devices are available for peer caching. You simply don’t want to encumber systems that are low on resources. This way you are more likely to use peer caching.

Use Azure Active Directory Domain Services to manage devices, users, and groups:
With this technical preview version you can manage devices that are joined to an Azure Active Directory (AD) Domain Services managed domain. You can also discover devices, users and groups in that domain with various Configuration Manager discovery methods. At the moment I am not using Azure AD in combination with SCCM, but this is a great feature for people who are working with Azure AD.

Conditional access device compliance policy improvements:
This feature only applies to iOS and Android devices and will help organizations mitigate data leakage through unsecured iOS or Android apps. You have to configure the apps in a non-compliant list yourself. It will block access to corporate resources that support conditional access until the user has removed the app. The downside is that you need to determine and configure the apps yourself; if you are not aware of the app that could be leaking data, this feature won’t help you much. But it will certainly help block certain apps which you don’t want installed on your corporate iOS or Android devices, for example when an app uses excessive data.

Antimalware client version alert:
When 20% (default) or more of your managed clients are using an outdated version of anti-malware (Windows Defender or Endpoint Protection client), Configuration Manager Endpoint Protection will generate an alert. A great feature when you are using SCEP or Windows Defender in your environment. I do wonder how this is measured and in what time frame a client will be marked as outdated.

Compliance assessment for Windows Update for Business updates:
I am not going to explain what ‘Windows Update for Business’ is; for that, I would like to point you to the following TechNet article. From this technical preview on, you can configure a compliance policy update rule to include a Windows Update for Business assessment result as part of the conditional access evaluation.

Important: You must have Windows 10 Insider Preview Build 15019 or later to use compliance assessment for Windows Update for Business updates.

Improvements to Software Center settings and notification messages for high-impact task sequences:
This release includes the following improvements to Software Center settings and notification messages for high-impact deployment task sequences:

  • In the properties for the task sequence, you can now configure any task sequence, including non-operating system task sequences, as a high-risk deployment. Any task sequence that meets certain conditions is automatically defined as high-impact. For details, see Manage high-risk deployments.
  • In the properties for the task sequence, you can choose to use the default notification message or create your own custom notification message for high-impact deployments.
  • In the properties for the task sequence, you can configure Software Center properties, which include make a restart required, the download size of the task sequence, and the estimated run time.
  • The default high-impact deployment message for in-place upgrades now states that your apps, data, and settings are automatically migrated. Previously, the default message for any operating system installation indicated that all apps, data, and settings would be lost, which was not true for an in-place upgrade.

This is simply awesome! I believe that user communication is a key feature for a successful deployment of software, applications and releases. For complex updates I always use the PowerShell App Deployment Toolkit and all of its nice features. But for more straightforward and simple deployments, which need less communication, I can use this new feature. Hopefully they will expand it with more possibilities in the near future.

Check for running executable files before installing an application:
Again, this is a great new feature; too bad it’s only for applications, as in some scenarios I still use packages. Nevertheless, it’s a great feature which I will be using on a frequent basis! I always had to use scripts or the PowerShell App Deployment Toolkit to achieve this, so this will save me a lot of work in the future. Hopefully they will expand this feature to packages and task sequences, and maybe add a message. A nice addition would be to let users decide themselves whether they want to close the process/executable before continuing, or delay the installation until a pre-defined deadline.

Well, these were my first thoughts on SCCM CB technical preview 1702, and I will be continuing my ‘first thoughts’ on all upcoming technical previews. If you have any thoughts yourself or any questions, please post them below in the comment area.


First thoughts on Technical Preview 1701 for System Center Configuration Manager

A few days ago Microsoft made technical preview 1701 for SCCM available for download. Here are my first thoughts on this technical preview (TP).

Boundary groups improvements for software update points
In CB 1610 Microsoft introduced important changes to boundary groups and how they work with Distribution Points. With TP 1701 they are taking it a step further by adding the Software Update Point role. With TP 1701 you will be able to manage which SUP a client can use and which SUPs it can use as fallback, depending on which DP it’s connected to.

Please note that the fallback time is not yet fully supported, therefore it can take up to 2 hours before a client will use its fallback SUP.

This feature will be more than welcome for a client I am working with at the moment. They’ve got multiple DPs across the country with slow WAN links. The possibility to decide which boundary group uses which SUP and fallback SUP will be a great addition.

Hardware inventory collects UEFI information
This new feature is enabled by default when TP 1701 is installed. A new inventory class (SMS_Firmware) and property (UEFI) will be populated. The UEFI property is set to TRUE when a computer is started in UEFI mode. This will probably be useful when you want to know whether one or more devices use UEFI or legacy BIOS to boot.

Improvements to OS deployment
Microsoft listened to the community for most of these improvements. Let’s see what they are and what they can do for us.

  • Support for more applications in the Install Application task sequence step:
    The number of applications you can add to this step has been increased from 9 to 99. I still prefer using packages for my task sequences, so as long as we can still use packages I probably won’t be using applications within my OSD task sequence, and I’ve never been in the situation that I needed to. That said, I believe it’s a good improvement for those who do, and maybe for future use.
  • Expire standalone media:
    It will be possible to optionally set start and expiration dates when you create standalone media. This is useful when you don’t want the media to be used before or after a certain date. I don’t use standalone media that much, but I can imagine it will be useful when, for example, deploying certain software or operating systems for a specific time frame and you don’t want the media used at a later stage or before a specific date.
  • Support for additional content in stand-alone media:
    It used to be only possible to add content referenced by the task sequence when creating standalone media. With TP 1701 it will be possible to add additional packages, driver packages and applications to the media. This could come in handy when you want to launch additional software and/or scripts after the task sequence has ended. I can imagine combining this with a script launched by the ‘SMSTSPostAction’ feature which was added in SCCM 2012 R2 a while ago. I wrote a blog post (link) about this variable, which you can set during your task sequence.
  • Configurable timeout for Auto Apply Drivers task sequence step:
    I almost never use the ‘Auto Apply Drivers’ step within a task sequence. I prefer using a tool from the hardware supplier for installing drivers; this way the drivers are installed the way the supplier intended. Most big hardware suppliers like Dell, HP and Fujitsu have their own SCCM or command-line tooling for installing their drivers. But if you don’t have a choice and/or you prefer to use this step, Microsoft added four variables to control its timeouts (values are in seconds):

    • SMSTSDriverRequestResolveTimeOut Default: 60
    • SMSTSDriverRequestConnectTimeOut Default: 60
    • SMSTSDriverRequestSendTimeOut Default: 60
    • SMSTSDriverRequestReceiveTimeOut Default: 480


  • Package ID is now displayed in the task sequence step:
    Any task sequence step that references a package, driver package, operating system image, boot image, or operating system upgrade package will now display the package ID of the referenced object. When a task sequence step references an application, it will display the object ID. This is a great feature; I really love this. It will make troubleshooting the task sequence easier, and it’s just a small change. You don’t have to search for the specific ID first before you search your logs. I see myself combining this with the variable ‘SMSTSErrorDialogTimeout’ set to 0 (forever), so I can quickly see which package/object ID is involved when my task sequence fails.
  • Windows 10 ADK tracked by build version:
    For example, if the site has Windows ADK for Windows 10, version 1607 installed, you won’t be able to edit boot images other than 10.0.14393 in the SCCM console. I can imagine that this will become less practical when you want to troubleshoot with different boot image versions.
  • Default boot image source path can no longer be changed:
    I always use custom boot images, and it will still be possible to adjust the source path for those. I see no problem with this change, and I think it’s a nice addition that you can always find your default boot images in a fixed location.

Host software updates on cloud-based distribution points
Since you can download software updates directly from Microsoft Update, this new feature isn’t that appealing yet. But I believe the feature set for cloud-based distribution points will grow in the near future, and it will become more practical to use them.

Validate device health attestation data via management points
“Beginning with this preview version, you can configure management points to validate health attestation reporting data for cloud or on-premises health attestation service. A new Advanced Options tab in the Management Point Component Properties dialog box lets you Add, Edit, or Remove the On-premises device health attestation service URL.”
I haven’t used DHA before; it was first introduced in Windows 10 version 1507. For more information about DHA I suggest reading the following Microsoft article (link).

Use the OMS connector for Microsoft Azure Government cloud
With this technical preview, you can now use the Microsoft Operations Management Suite (OMS) connector to connect to an OMS workspace that is on Microsoft Azure Government cloud. I love OMS, nothing more to add.

Android and iOS versions are no longer targetable in creation wizards for hybrid MDM
Beginning in this technical preview for hybrid mobile device management (MDM), you no longer need to target specific versions of Android and iOS when creating new policies and profiles for Intune-managed devices. Instead, you choose one of the following device types:

      • Android
      • Samsung KNOX Standard 4.0 and higher
      • iPhone
      • iPad

It’s always nice to see things get more simplified and this is one of them!

Source and more information: