A PowerShell Success Story

I’m supposed to be writing the second post in a series on performance (see Part 1), but I wanted to take a quick detour to discuss my recent trip to MS Ignite.  Despite working with MS products for around 20 years, I’d never gone to a big conference, so I really didn’t know what to expect.  Considering many MS on-prem products have been problematic of late, I envisioned Ignite as an opportunity for cathartic griping to various product groups.  While there was some of that, I’m happy to report Ignite was actually hugely productive for me.

It’s probably no surprise that I attended a few PowerShell-oriented sessions, like:

I got a lot out of the sessions, but what really made the whole trip worth it was the time I spent at the PowerShell booth with the team that actually builds and maintains the product.  Besides the fact that I was totally starstruck, I was truly impressed!  The team’s enthusiasm and dedication to the user community are unbelievable.

Left to right: Jaap Brasser (MVP – working for Rubrik); Danny Maertens, Sydney Smith & Jason Helmick, all PowerShell Program Managers with Microsoft; and me.

In a conversation with Jason Helmick I couldn’t resist telling a PowerShell success story I’m particularly proud of.  Jason actually encouraged me to blog about it, which is the real inspiration for this post.  So, without further blathering, here’s one of many PowerShell success stories.

When I was hired into my current position, the organization was transitioning from a Solaris Unix-centric environment to Windows & AD.  They had a long history in the Unix world and had spent decades perfecting their very own way of doing things.  And, as any IT veteran knows, people can get pretty attached to their work product.  It’s not at all surprising; after all, technologists work their butts off, first to think through tough problems, then to codify solutions.  Of course this is all to say that old systems die hard.  It’s often a protracted, bitter process, fraught with hazards.

OK, that sounds bad, but these circumstances were a real opportunity for me.  I wasn’t really hired for this sort of thing, but as big pieces of the infrastructure were transitioning, they were also disrupting old processes.  That put me in an ideal position to modernize, and PowerShell was the obvious tool to do it.

One series of events took place back in 2015.  Our new HR system had just gone live, and even though data was synchronizing back to the old database, a lot of our data flows broke.

I wasn’t even part of the HRIS project, but having written a few of the now-broken sync scripts, I suddenly found myself in the middle of a crisis.  By design, the new system was locking records during its onboarding process.  The data wasn’t being sent downstream, so new users weren’t getting set up properly.  User accounts couldn’t even be tested until the employee’s start date.  It was a huge and potentially embarrassing problem.

As the resident PoSh evangelist, I had been waiting for an opportunity like this.  Ultimately I proposed a rewrite of our provisioning programs using PowerShell.

The existing provisioning tools were hosted on Unix, where Tcl programs were used to telnet into Windows with privileged access.  Once in, still other scripts (regrettably, I wrote some of those too) would create accounts, etc.  It’s an understatement to say this was grotesquely complex and terribly insecure.

In the new system I wrote functions to wrap the user creation process and added them to a module I had already written for the support team.  The new functions leveraged Just Enough Administration (JEA), using a PowerShell constrained endpoint running under an alternate privileged account.  In this paradigm the module functions would call the endpoint when needed, rather than directly granting access to the operator.  This also securely stored the credentials with the endpoint configuration, which solved the old problem of storing and transmitting the password in clear text.
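
To give a feel for that pattern, here’s a minimal sketch of what one of those support-module wrappers might look like.  The server, configuration, and function names (ProvisioningServer, Support, Set-UserPassword, Reset-SupportUserPassword) are hypothetical stand-ins, not the real implementation:

Function Reset-SupportUserPassword
{
    Param( [String]$UserName )

    # No credentials are handled here; the endpoint's RunAs account does the privileged work.
    Invoke-Command -ComputerName ProvisioningServer -ConfigurationName Support -ScriptBlock {
        # Set-UserPassword is a hypothetical function exposed by the constrained endpoint.
        Set-UserPassword -UserName $Using:UserName
    }
}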

To solve the data flow issue, I wrote code to prepopulate the necessary data in the old HR database.  The RunAs account did need some access to the DB, but we used views & permissions to minimize that surface area as well.
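
I can’t show the real schema, but conceptually the prepopulation code was just a handful of inserts against views the RunAs account could reach.  A rough, hypothetical sketch only; the server, database, view, and column names are made up, the real environment may not even be SQL Server, and a production version should use parameterized queries:

# Requires the SqlServer module; all object names below are made up for illustration.
Import-Module SqlServer

$EmployeeId = '12345'
$GivenName  = 'Jane'
$Surname    = 'Doe'
$StartDate  = '2015-06-01'

$Query = @"
INSERT INTO dbo.vw_PendingHires (EmployeeId, GivenName, Surname, StartDate)
VALUES ('$EmployeeId', '$GivenName', '$Surname', '$StartDate')
"@

Invoke-Sqlcmd -ServerInstance 'OldHrDbServer' -Database 'HR' -Query $Query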

While it was a lot to bite off at the time, this was actually pretty easy to do.  I can’t share anything material to my organization, but I’ll try to demo a framework below.  This isn’t intended to be an exhaustive deep dive into PowerShell’s remoting features (perhaps I’ll cover that later), but it should be enough to get going.

The first thing I did was write a startup script:

# Define Functions:
Function Test-SupportEndPoint
{
	#Just to make sure we can connect to the raw end point...
	Write-Host "Connection Successful on : $($env:COMPUTERNAME)"
	Write-Host "Connected User           : $($PSSenderInfo.ConnectedUser)"
	Write-Host "RunAs User               : $($PSSenderInfo.RunAsUser)"
} #End Function Test-SupportEndPoint

# Add additional functions as needed!
# Define Visibility of Endpoint Functions:
[string[]]$ProxyFunctions =
@(
# As you add functions to your module list them here
'Test-SupportEndPoint'
'Get-Command'           # Only if you plan on interactive or implicit remoting.
'Measure-Object'        # Only if you plan on interactive or implicit remoting.
'Select-Object'         # Only if you plan on interactive or implicit remoting.
'Get-Help'              # Only if you plan on interactive or implicit remoting.
'Get-FormatData'        # Only if you plan on interactive or implicit remoting.
)

# Set visibility of commands
ForEach( $Command In (Get-Command -All) )
{
	If( $ProxyFunctions -notcontains $Command.Name )
		{ $Command.Visibility = 'Private' }
}

<#
There are some commands that must be present for different remoting scenarios.
This seems to be poorly documented, but through trial & error I came up with
the below for implicit and interactive remoting.

Command          Implicit   Interactive
-------          --------   -----------
Select-Object    Yes        Yes
Measure-Object   Yes        Yes
Out-File         No         No
Exit-PSSession   No         Yes
Get-FormatData   Yes        No
Out-Default      No         Yes

Note: Errors generated in some of the testing scenarios suggested that Get-Help
      is also used under the hood, but it isn't required.

For some reason these commands wouldn't activate properly from the array/loop
above, so I explicitly enabled them.  Uncomment the lines below according to
the desired remoting scenario.
#>

# Explicitly show these commands only for interactive & implicit remoting:
# ( Get-Command Measure-Object).Visibility = 'Public'
# ( Get-Command Select-Object ).Visibility = 'Public'
# ( Get-Command Get-FormatData).Visibility = 'Public'
# ( Get-Command Exit-PSSession).Visibility = 'Public'
# ( Get-Command Out-Default   ).Visibility = 'Public'

<#
Note: Available LanguageModes:
 FullLanguage
 RestrictedLanguage
 ConstrainedLanguage
 NoLanguage

https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_language_modes?view=powershell-6
#>

$ExecutionContext.SessionState.Applications.Clear()
$ExecutionContext.SessionState.Scripts.Clear()
$ExecutionContext.SessionState.LanguageMode = 'FullLanguage'

Now I just needed to register the new endpoint:

Register-PSSessionConfiguration -Name Support -RunAsCredential PrivUser@MyDomain.local -StartupScript C:\PSSessionConfigs\Support.Startup.ps1 -ShowSecurityDescriptorUI -Confirm:$false

A few things are going to happen after running the command:

  1. You’ll be prompted for the RunAs credentials via a typical credential dialog.
  2. After entering the cred, you’ll get a typical security dialog.
    If you need to grant access to the endpoint, the user or group will need Read & Execute permission.
  3. The registration completes and you’ll see the below warning.

    WARNING: Register-PSSessionConfiguration may need to restart the WinRM service if a configuration using this name has recently been unregistered, certain system data structures may still be cached.

    In that case, a restart of WinRM may be required. All WinRM sessions connected to Windows PowerShell session configurations, such as Microsoft.PowerShell and session configurations that are created with the Register-PSSessionConfiguration cmdlet, are disconnected.


    Assuming no concerns, I usually go ahead and restart the WinRM service.  Doing so saves some confusion when testing a new or changed endpoint, because it rules out a pending service restart as the cause of any odd behavior.
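
    The restart itself is just:

    Restart-Service WinRM -Confirm:$false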

And that’s it!  You can test the endpoint with:

Invoke-Command -ComputerName YourServer -ConfigurationName Support -ScriptBlock { Test-SupportEndPoint }
Connection Successful on : YourServer
Connected User           : TheCallingUser
RunAs User               : PrivUser

There are 2 other ways to access your new endpoint:

1) You can interactively enter the session:

$Session = New-PSSession -ComputerName YourServer -ConfigurationName Support
Enter-PSSession $Session

2) You can import the session locally; this is referred to as implicit remoting:

$Session = New-PSSession -ComputerName YourServer -ConfigurationName Support
Import-PSSession $Session

Implicit remoting is awesome!  PowerShell will generate proxy functions to mimic the functions in your endpoint, but they’ll be loaded in your local session.  When used, the local functions seamlessly call the endpoint functions but still return output locally.  More or less, this looks no different than using PowerShell locally.
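
If you’re worried about name collisions with local commands, Import-PSSession also supports a -Prefix parameter.  Here’s a quick sketch; the server name and prefix are arbitrary:

$Session = New-PSSession -ComputerName YourServer -ConfigurationName Support
Import-PSSession -Session $Session -Prefix Remote -AllowClobber

# The generated proxy runs remotely but behaves like a local command:
Test-RemoteSupportEndPoint

# Clean up when you're done:
Remove-PSSession $Session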

Notice that the startup script above includes comments describing which cmdlets must be available for these two remoting scenarios.  I added that to the demo code for some degree of completeness in this otherwise abridged tale.

In my case I also needed to incorporate commands from the Exchange Management Shell (EMS).  EMS is always an implicit remote session that’s imported locally, but it isn’t implemented like the rest of PowerShell’s remoting infrastructure.  That’s a long story for another time, but the gist is that it’s very difficult to incorporate EMS commands such that they can be imported with a custom endpoint.  In this case I used Exchange Role Based Access Control (RBAC) to create custom least-privilege roles for the support team.  I wrote my own front-end proxy functions to leverage commands coming from the implicit EMS session, then used Invoke-Command against the custom endpoint when needed.  One of these days I’ll get around to finding a fix for that so I can do the whole implementation through implicit remoting, but for now this works really well.
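
As a rough illustration of that pattern (the function, server, and parameter names here are hypothetical, and New-UserAccount stands in for whatever the endpoint actually exposes):

Function New-SupportUser
{
    Param( [String]$UserName )

    # Privileged AD work goes through the constrained endpoint...
    Invoke-Command -ComputerName ProvisioningServer -ConfigurationName Support -ScriptBlock {
        New-UserAccount -UserName $Using:UserName    # hypothetical endpoint function
    }

    # ...while the mailbox work uses a cmdlet from the implicit EMS session,
    # constrained by a custom RBAC role rather than by the endpoint.
    Enable-Mailbox -Identity $UserName
}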

The endpoint is secured in a few different ways:

  1. An ACL locks it down so only intended users can access it.
  2. The startup script stores the functions, but it also hides anything you don’t want to expose.  This is how we prevent the privileged account from being misused.
  3. Anything I couldn’t obscure, I checked for and prevented in code.  For example, I’d only allow an account deletion after checking the OU.  The endpoint couldn’t be used to delete domain admin, service, or other privileged accounts.

#3 deserves a little more explanation.  Let’s say I have a function in the endpoint to remove a user account:

	Function Remove-UserAccount
	{
	    Param( [String]$UserName )
	    Try {
	        Remove-ADUser $UserName -Confirm:$false -ErrorAction Stop
	        Write-Host -ForegroundColor Green "Successfully removed $UserName."
	    }
	    Catch {
	        Write-Host -ForegroundColor Red "An error occurred trying to remove $UserName"
	    }
	}

Obviously this is just an example, but it’s easy to see that by deploying this in an endpoint running under a privileged account, you’ve given the user the ability to remove any account.  I dealt with this by coding checks into risky functions, so the above might look something like:

	Function Remove-UserAccount
	{
	    Param( [String]$UserName )
		
		#Only Allow deletions from the below OU's
		$AllowedOUs = @("ou=regular users,dc=yourcompany,dc=com")
		$User = Get-ADUser $UserName
		
		$OU = $User.DistinguishedName.ToLower().Substring($User.DistinguishedName.IndexOf(",") + 1)
		
		#Exit if not allowed OU…
		If( $AllowedOUs -notcontains $OU) {
		    Write-Host -ForegroundColor Red "$UserName is not in an allowed OU; exiting function!"
		    Return
		}
		
	    Try {
	        Remove-ADUser $UserName -Confirm:$false -ErrorAction Stop
	        Write-Host -ForegroundColor Green "Successfully removed $UserName ."
	    }
	    Catch {
	        Write-Host -ForegroundColor Red "An error occurred trying to remove $UserName"
	    }
	}

One other point: in my organization I ended up creating the endpoint on quite a few systems to service different locations.  For security reasons we need to change the password on the RunAs account pretty frequently, and going to each host became a real chore.  I wasn’t able to make the change through remoting itself, but I did find one thing that makes it a little easier.

In theory you can use Set-PSSessionConfiguration -RunAsCredential <pre-created credential object>, but that’s never worked for me.  Instead I quickly unregister the endpoint and re-register it, but I use the SecurityDescriptorSddl string that already exists on the object to skip the security dialog.

$Sddl = (Get-PSSessionConfiguration Support).SecurityDescriptorSddl

Unregister-PSSessionConfiguration -Name Support -Confirm:$false

Register-PSSessionConfiguration -Name Support -RunAsCredential PrivUser@MyDomain.local -StartupScript C:\PSSessionConfigs\Support.Startup.ps1 -SecurityDescriptorSddl $sddl -Confirm:$false

Restart-Service WinRM -Confirm:$false

You’ll still get prompted for the password, but you won’t have to click around in the security dialog.

I should point out there are other ways to create constrained endpoints.  In particular, you can create a session configuration file using the New-PSSessionConfigurationFile cmdlet, which similarly limits visible commands.  I chose to go with the startup script because it offered more granular control; for example, a configuration file alone couldn’t lock down a command like Remove-ADUser the way I described earlier.
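
For comparison, here’s roughly what the configuration-file approach looks like.  This is only a sketch; the SupportTools module name is a hypothetical stand-in for wherever your functions actually live:

# Generate the session configuration file:
New-PSSessionConfigurationFile -Path C:\PSSessionConfigs\Support.pssc `
    -SessionType RestrictedRemoteServer `
    -ModulesToImport SupportTools `
    -VisibleFunctions 'Test-SupportEndPoint' `
    -LanguageMode NoLanguage

# Then register the endpoint from the file instead of a startup script:
Register-PSSessionConfiguration -Name SupportFromFile -Path C:\PSSessionConfigs\Support.pssc -RunAsCredential PrivUser@MyDomain.local -ShowSecurityDescriptorUI -Confirm:$false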

I’ll delve into endpoints more deeply in a future post.  The capabilities have evolved quite a bit since I first built this, but this story was basically to illustrate a real use case and implementation.  PowerShell had everything we needed to quickly and robustly solve a very serious problem.  Furthermore, it has worked so well it’s stood the test of time.  We implemented it more than 4 years ago and have only built upon the original work.  Many organizations end up buying 3rd-party products to integrate their provisioning processes, but we have no need for one.  This was truly a case where Anything PoSh-able | Everything PoSh-able.

Here are some more resources on PowerShell’s remoting features.

  • Introduction to PowerShell Endpoints – MVP Boe Prox guest blogging for the Scripting Guys.  This was the most useful reference I found at the time; it was invaluable.
  • Secrets of PowerShell Remoting – a small e-book from the DevOps Collective.  Don Jones and Tobias Weltner are the principal authors, with Dave Wyatt & Aleksandar Nikolić contributing.
  • Once again, I did this back in 2015, and I think the JEA terminology had just come into use.  Researching for this article, I found MS has continued developing these concepts, which definitely means I’ll have to write a follow-up.  At any rate, check out the JEA section of the PowerShell documentation.

Once again, this isn’t meant to be a comprehensive walk-through.  However, for the beginner, or another practitioner trying to address specific issues, I hope this demonstrates the utility PowerShell can bring to your skill set and, by extension, your organization.

I’m seriously hoping to get some comments on this one; I’m sure there are some tweaks or best practices I may have missed.  So, as always, I’d love to get some feedback.  Comment, click follow, or grab the RSS feed to get notifications of future posts.


PowerShell Performance Part 1: Introduction

Comparatively speaking, PowerShell’s concise and flexible syntax translates to faster development, but usually not faster programs.  From its earliest days, one of PowerShell’s most annoying drawbacks has been its comparatively poor performance.

I typically start script projects as a purist, leveraging native functionality: cmdlets, pipes, Where-Object clauses, etc., all the stuff that makes PowerShell code easy to write.  Frequently, the elation of quickly building a working prototype is overshadowed by unsatisfactory performance, and what follows is a time-consuming effort to refactor the code.  There may be better workflows to implement, but the issues usually boil down to PowerShell itself.  In turn, this forces a regressive break from the purist approach, and mining for performance results in a proliferation of increasingly exotic techniques within the code.

This kind of refactoring is rather unfortunate because it erodes some of PowerShell’s elegance.

Hazards of Refactoring for Performance (An incomplete list for sure):

  1. The resulting time deficit is the biggest problem, encompassing both the additional effort to improve performance and the generally more difficult maintenance.
  2. Code can become less readable.  The resulting code can end up so far from PoSh norms that it’s difficult even for the author to revisit.  Comments can help, but it’s difficult to relay the reasoning behind hard-fought code.
  3. It’s hard to foresee at the outset, but you may only yield so much improvement.  Hence you can get lured into a time-consuming yet relatively fruitless effort.

I’ve spent a lot of time on these types of problems; in fact, I’m a little obsessed with maximizing performance.  This is the first in a series of posts about various tips & tricks I use to make things a bit faster.

It goes without saying that to analyze performance we need a method of measuring it.  PowerShell provides the cmdlet Measure-Command for this.  Measure-Command makes it easy to gauge the performance of almost any block of code.  Here I’m going to focus on simple commands, but keep in mind you can do this on much bigger script blocks.  So if you’re testing a different code pattern as opposed to a simple command, Measure-Command is still quite useful.

Measure-Command { Get-Process Explorer }
Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 8
Ticks             : 81151
TotalDays         : 9.39247685185185E-08
TotalHours        : 2.25419444444444E-06
TotalMinutes      : 0.000135251666666667
TotalSeconds      : 0.0081151
TotalMilliseconds : 8.1151

Performance usually becomes a factor when processing large collections of data, where a relatively small performance difference can accumulate into a sub-optimal run duration.  As such, we’ll often be comparing granular pieces of code, but this isn’t as straightforward as Measure-Command would have us believe.  One thing to consider is the reliability of the measurements.  After extolling the awesomeness of Measure-Command, that seems like a stupid question, but there are circumstances where inconsistent results can be presented.  This should be easy to address: repeat the tests and compare averages; in other words, use a larger sample size.  Nevertheless, there are some anomalies I’ve never been comfortable with.

To get a larger sample size you can wrap a particular test in a loop, something like:

For($i = 1; $i -le 20; ++$i)
{ ( Measure-Command { Get-Service AtherosSvc } ).TotalMilliseconds }

I ran this test 3 times, but the results were questionable:

Note: Just to make sure, I repeated this with several different common cmdlets & sessions:

  1. Get-Service
  2. Get-Process
  3. Get-ChildItem
  4. Write-Host

Obviously something isn’t right here.  The first run is way slower than the others.  A few days ago at MS Ignite 2019, I spoke with PowerShell team members Jason Helmick & Tyler Leonhardt.  They confirmed my long-held suspicion that this is due to a caching mechanism.  That’s probably a good feature, but something to be aware of when you’re measuring performance.

This doesn’t really invalidate the test either.  If you are comparing two different commands, generally the first run of each will be predictive enough.  However, I always track the results over some number of executions.  Furthermore, this appears to be specific to the cmdlet (or, more likely, the underlying .NET classes), not the command.  Hence if you run Get-Service Service_1, then Get-Service Service_2, the second command should show the faster (already cached) result.

As of yet, I’m not sure how long the cmdlet stays cached, but considering this is a performance scenario centered on repetition, it’s not a big concern.  Granted, this lends itself to some ambiguity, but I’ll try to resolve that in a later post.
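
To deal with both issues (noisy samples and the first-run caching effect), the pattern I tend to use is a throwaway warm-up run followed by an averaged sample.  Here’s a sketch comparing two arbitrary techniques; the candidates themselves are just placeholders:

$Candidates = @{
    'Where-Object'   = { 1..10000 | Where-Object { $_ % 2 -eq 0 } }
    'Where() method' = { (1..10000).Where({ $_ % 2 -eq 0 }) }
}

ForEach( $Name In $Candidates.Keys )
{
    # Throwaway warm-up run so first-call caching doesn't skew the comparison.
    $null = Measure-Command -Expression $Candidates[$Name]

    # Average the next 20 runs.
    $Samples = For( $i = 1; $i -le 20; ++$i )
    {
        ( Measure-Command -Expression $Candidates[$Name] ).TotalMilliseconds
    }

    '{0,-15} : {1:N3} ms' -f $Name, ($Samples | Measure-Object -Average).Average
}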

Additional Guidelines I use when Evaluating Performance:

  1. Don’t look at performance in isolation.  Whether we’re talking about PowerShell or anything else, performance is based on a variety of environmental factors, including CPU, disk, and memory, which are important to consider.
  2. Always consider expected runtime conditions.  Say you choose a faster but more memory-intensive piece of code.  If you then run it on a memory-constrained system, you may get unexpected results.
  3. Stay current!  The PowerShell team is aware of performance issues and has been steadily improving the native cmdlets.  This is especially true since PowerShell went open source.  So, if you’ve been using a faster technique on an older version, you may want to recheck it.  It’s possible the issue you were working around has been resolved, and your code can remain a little more pure.
  4. Think before you refactor.  Just because something is faster doesn’t mean you should use it.  You have to decide on the specific tradeoffs.
  5. Always re-evaluate performance after implementation.  Sometimes, despite our best efforts, we still miss the mark.  That could mean anything, but don’t draw conclusions based purely on pre-testing.  Besides, it’s always a bit more satisfying to demonstrate improvements in the final product.

This first post on driving PowerShell performance should establish a reasonable, albeit informal, framework for testing and validating such improvements.

As always, I’d love to get some feedback.  Comment, click follow or grab the RSS feed to get notifications of future posts.