I had a scenario on a client site where I needed to ensure the ‘Disable BitLocker’ action did not run for virtual machines. I did not have the MDT Toolkit running ‘In OS’, so I could not pull in the ‘IsVM’ variable, and therefore wanted to exclude based on attributes retrievable from WMI.
Instead of the normal ‘LIKE’ inclusion statements I used the negated queries below, which you could adapt:
SELECT * FROM Win32_ComputerSystem WHERE NOT Model LIKE "%VMware Virtual Platform%"
SELECT * FROM Win32_ComputerSystem WHERE NOT Model LIKE "%Virtual Machine%"
For me this covered both VMware and Hyper-V virtual machines.
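If you prefer a single condition rather than two separate queries, the same exclusions can be combined into one statement (a sketch using the model strings above; add further model strings for any other hypervisors you need to cover):

SELECT * FROM Win32_ComputerSystem WHERE NOT (Model LIKE "%VMware Virtual Platform%" OR Model LIKE "%Virtual Machine%")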
Recently I was working for a client on a Surface Pro 3 project with a very clean foundation platform, as I like to do for a dynamic build. One disadvantage (which could fill a blog post of opinions on its own) is the Office install, which can often take a long time.
I had a scenario whereby during this step my Surface Pros would drift off to sleep and simply ‘sit’ in the build process until a key press or mouse movement woke them, at which point they would carry on.
After a bit of digging I found that during the OSD process the Balanced power scheme is applied by default from Windows 8.1 onwards, and that scheme includes a 10-minute sleep timeout. As the Office install takes longer than this, it impacted the devices whether plugged in or not.
To combat this issue I placed a conditioned step in the Task Sequence to remedy the situation:
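In my case this was a simple Run Command Line step that switches the machine onto the built-in High Performance plan (a sketch; the GUID below is the well-known High Performance scheme, and you could condition the step on the hardware model so it only runs on the Surfaces):

powercfg.exe /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

Alternatively you could leave the scheme alone and simply zero out the AC sleep timeout for the duration of the build, e.g. powercfg.exe /change standby-timeout-ac 0.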
This allows the build process to continue uninterrupted, and your normal GPP power settings will still apply once you get into Windows, whatever your power policies are.
This is a second post in a matter of days, and although I like the new App model I must admit I am far from impressed with its current stability and apparent frailty when compared to regular Packages.
Aside from the dependency issues in and around Build & Captures, at present the revision history seems to play havoc inside Task Sequences in certain instances, as outlined below:
When building a machine from a Task Sequence, an application (or several) will fail and produce what appears to be an access denied error (0x80004005), even though the Network Access Account is configured correctly and all boundaries are in place.
The source of the issue can be traced to the Task Sequence referencing an incorrect revision of the application in its revision history, causing the install to fail.
This explanation is detailed in this excellent post
Although I like the new App model, I feel it has some way to go before we can move away from Packages altogether.
Since the release of ConfigMgr 2012, Microsoft has employed a more dynamic approach to application delivery in the form of Applications, as opposed to standard Packages. These work in a similar way but come into their own for more complex applications that require dependencies and detection of their presence on a machine, rather than relying on scripts etc.
Although I use these for a lot of customers, I must admit there are instances where I still prefer the older model: with Packages the advanced elements of troubleshooting make it easier to get to the root of an issue, and the stability of the platform still does not seem to have been ironed out. I have had countless instances where content from the same source works fine as a Package but produces unreliable results as an Application inside OSD.
Nice lead onto topic 🙂
For one client I decided to move the Build & Capture over to pure Applications where possible, as opposed to Packages, and stumbled over an instability when producing the build and capture: the Applications would not install.
Now I try to follow my own rule of thumb/best practice for Gold Deployment, which is as follows:
VM (Hyper-V)
Non-domain joined during sequence
This scenario works a treat for the Package version of the TS with no issues; however, once transitioned to Applications it failed as soon as it hit the first install.
So what options are available?
The cleanest option I have used to remediate this requires a couple of adjustments:
1 – On the Setup Windows and ConfigMgr step, add SMSMP=YourCMbox.FQDN to the installation properties (see the example after this list).
2 – While I was performing the Build and Capture, I added an IP address boundary into the boundary group where the content was located.
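As an illustration, with a hypothetical management point named CM01.contoso.com (substitute your own site server FQDN), the installation properties on that step would simply contain:

SMSMP=CM01.contoso.com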
This resolved the issue and allowed the Applications to install.
So why is this required?
Well… assuming you don't join the domain during the Build & Capture and then disjoin (which is an option; just place the machine in an OU with no policies!), the client is unable to query AD for the vital Management Point information. It therefore needs to be spoon-fed the information it would otherwise gather on its own, and without it the client cannot access the Applications.
This is not a problem for standard Packages, which is just one of a number of differences between the two.
I'll admit I'm yet to be completely sold on Applications, but I will endeavour to adopt the new practices wherever possible 🙂