GPO Missing World Wide Web Services Inbound Rule

I’ve just completed the work needed to control the Windows 7 firewall through our corporate GPO. During that time, I ran across a display bug with the IIS firewall rules in GPMC that I couldn’t find anyone else documenting.

Here’s the problem. I enabled the rules for World Wide Web Services on the workstation I was using for the initial configuration. After importing those rules into the group policy, they didn’t show in the list.

I repeated the process a second time to confirm that I hadn’t missed them and that overwriting the existing policy wasn’t somehow causing a conflict – it wasn’t.

[Screenshot: GPMC rule list with IIS not installed]

Even though the line item was missing, the policy was still applied. I could disable the rule on the target laptops, apply the GPO, and it would take effect. It just wouldn’t display.
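
If you want to confirm that the rule really exists in the imported policy without relying on GPMC, the firewall cmdlets can read rules straight out of a GPO. A minimal sketch, run from a Windows 8/Server 2012+ management box – the domain and GPO names here are hypothetical:

    # Open the GPO as a policy store and list the World Wide Web Services rules
    $store = "contoso.com\Workstation Firewall Policy"
    Get-NetFirewallRule -PolicyStore $store |
        Where-Object { $_.DisplayGroup -like "*World Wide Web Services*" } |
        Select-Object DisplayName, Enabled, Profile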

I had a hunch, so I went ahead and installed IIS onto the domain controller that I was using to design this GPO.

Lo and behold, the rule was suddenly there, and it showed my chosen preferences (domain + private). Clearly the rule had existed all along; GPMC just wouldn’t display it.

[Screenshot: GPMC rule list with IIS installed]

It gets stranger. I uninstalled IIS, because who needs that hanging around if you aren’t using it. The rule continues to show up – but the other domain controller, which never had IIS installed, still exhibits the original display problem.

It appears that IIS has to be installed at least once for GPMC to display that rule. I’m unsure whether other items go “missing” in this fashion, but this is very clearly one of them.

Monitoring Your Lync 2013 Peak Call Capacity

Recently, my company wanted to analyze our SIP trunk usage with our vendor and determine if we had unnecessary capacity. When the vendor was only able to provide call detail records, I went looking for solutions on the web.

I found several, but the problem was that they relied on the LcsCDR database to determine which sessions were open. This behavior changed with Lync 2013: session detail is no longer written until the call ends, which means you cannot query for active/open sessions to determine usage.

I found this script by Rune Stoknes in his article Monitoring your Lync Peak Call Capacity. It wasn’t viable as written for Lync 2013 because the counter names changed.

I updated it for the 2013 counter names and made a few other adjustments to the loop to stabilize the script on our W2k12 Mediation Server.

Download here: Get-CallCounters

<# ---------------------------------------------------------------------
    .SYNOPSIS
    Get-CallCounters.ps1
    
    Script to retrieve the performance counters of ongoing inbound and
    outbound calls, and also calculate the combined usage.
    
    It will output the average and peak values for each hour for as long 
    as it runs.

    .NOTES
    Must be run on Mediation server.
    
    Made by Rune Stoknes - https://stoknes.wordpress.com/
    Adaptations by Duncan Bachen - http://eureka.greenhead.com

    v0.5 - Apr 15 2014 - first simple version, CSV output only
    v0.8 - Apr 16 2014 - added console output for current counters
    v0.9 - Feb 16 2015 - Altered the outbound and inbound call paths to match the Lync 2013 counter references [Duncan Bachen]
         - Feb 17 2015 - Changed the screen clear to not use the Console class, so the script can be tested in ISE and other shells [Duncan Bachen]
                       - Script kept hanging during long runs without explanation; changed the wait-for-keypress logic slightly.
                       - Added the loop start time and counter to the display so it's easier to confirm the script is progressing.
                       - Altered the loop time to 10 secs for finer counter granularity [Duncan Bachen]
   
   -------------------------------------------------------------------- #>

   # Let's define the actual counter paths we will be fetching.
   # Changed to the 2013 paths
<#
   $outboundCallPath = "\LS:MediationServer - 00 - Outbound Calls(_total)\- 000 - Current"
   $inboundCallPath = "\LS:MediationServer - 01 - Inbound Calls(_total)\- 000 - Current" #>
   
   
   $outboundCallPath = "\LS:MediationServer - Outbound Calls(_Total)\- Current"
   $inboundCallPath = "\LS:MediationServer - Inbound Calls(_Total)\- Current"

  
   # Create CSV file for output. It will be named \Users\<user>\Documents\Call counters DDMMYYYY.csv
   
   $outputPath = $env:HOMEPATH + "\Documents\Call counters " + (Get-Date -UFormat "%d%m%Y") + ".csv"
   
   try
   {
        "Date;Hour;Avg inbound;Peak inbound;Avg outbound;Peak outbound;Avg concurrent;Peak concurrent" | Out-File -FilePath $outputPath -Encoding default
   }
   catch
   {
        "Unable to create file " + $outputPath
        exit
   }

   # Create empty arrays to store counters. These will be reset every hour.

   [int[]]$inboundCalls = $null
   [int[]]$outboundCalls = $null
   [int[]]$concurrentCalls = $null
   [int]$counter = 0
   
   # Get the time
   $now = Get-Date

   # Now, every 10 seconds we will repeat this same procedure:
   # -> Fetch current call counter for inbound and outbound
   # -> Store data in array
   # -> if the hour changes, calculate average and peak counters and store to file

   $Stopped = $False

   do {

        # That was then, this is now
        $then = $now
        $counter += 1

        # Get the counters we want, and add them to the arrays
        [int]$currentInbound = (Get-Counter $inboundCallPath).CounterSamples[0].CookedValue
        [int]$currentOutbound = (Get-Counter $outboundCallPath).CounterSamples[0].CookedValue
                
        $inboundCalls += $currentInbound
        $outboundCalls += $currentOutbound
        $concurrentCalls += ($currentInbound + $currentOutbound)

        
        # Now let's get the time
        $now = Get-Date
        
        # Output to console
        # The Console class is not available in ISE and other shells, so don't call it directly
        # [System.Console]::Clear()
        Clear-Host
        "Current number of inbound calls: " + $currentInbound
        "Current number of outbound calls: " + $currentOutbound
        "Current number of concurrent calls: " + ($currentInbound + $currentOutbound)
	    "Current Hour Counter: " + $counter
        "Current Loop Start: " +  $Now

        
        # Has the hour changed since then?
        if ($now.Hour -ne $then.Hour)
        {
 
            # The peak and average value can be calculated and derived using Measure-Object
            $inbound = ($inboundCalls | Measure-Object -Maximum -Average)
            $outbound = ($outboundCalls | Measure-Object -Maximum -Average)
            $concurrent = ($concurrentCalls | Measure-Object -Maximum -Average)

            # Let's append this to the CSV file we created
            (Get-Date -Date $then -UFormat "%d %b %Y;") + $then.Hour + ";" + $inbound.Average + ";" + $inbound.Maximum +  ";" + $outbound.Average + ";" + $outbound.Maximum + ";" + $concurrent.Average + ";" + $concurrent.Maximum | Out-File -FilePath $outputPath -Encoding default -Append

            # Let's not forget to reset our arrays now that the hour has changed
            [int[]]$inboundCalls = $null
            [int[]]$outboundCalls = $null
            [int[]]$concurrentCalls = $null
            [int]$counter = 0
                        
        }

        # Let's hear if the user wants to end the script
        Write-Host "`n `nPress ESC to end the script on next 10s loop." -ForegroundColor Red

                
        if ($Host.UI.RawUI.KeyAvailable -and ($Host.UI.RawUI.ReadKey("IncludeKeyUp,NoEcho").VirtualKeyCode -eq 27))
        {
            Write-Host "`nExiting shortly...`nCheck output file for any historical data:`n" $outputPath "`n" -BackgroundColor DarkRed
            # Flag the loop to clean up and exit while pointing to the results file
            $Stopped = $True
        }

        Start-Sleep -Seconds 10

   }
   Until ($Stopped)
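
To use it, copy the script to the Mediation Server and launch it from a regular PowerShell console (after the Clear-Host change it also runs under ISE):

    .\Get-CallCounters.ps1

It collects until you press ESC, writing one summary row per hour to the CSV file in your Documents folder.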

Responsive ISA login form for TMG

My company uses Microsoft Forefront TMG in order to authenticate our external users trying to reach our internal applications.

Designed in an era before the proliferation of mobile devices, TMG’s default login pages are unwieldy. Users have to zoom in just to enter the username and password fields.

Additionally, my company uses Safenet Authentication Service to provide two-factor authentication for our external users. This adds one more field that users struggle to enter data into – and a time-sensitive one at that.

Scott Glew developed a set of TMG Responsive Auth Forms that partially solve this problem. While they are ideal for a company using only Name/Password, they don’t work if a company utilizes RADIUS OTP.

I’ve forked Scott’s project and submitted a revision to the file which adds support for a two-factor passcode.

Here’s the text I submitted as part of that patch, which summarizes what I did:

The original ISA responsive form is only for the page asking for username and password (usr_pwd.htm). If you elect to “Collect additional delegation credentials in the form” on the listener for use with RADIUS OTP, TMG instead uses usr_pwd_pcode.htm.

This version merges the code from Microsoft’s original non-responsive page into Scott’s responsive form.

We use this version with CryptoCard/BlackShield.

Unlike the original Microsoft form, I switched the order of the fields to Name/Password/Passcode instead of Name/Passcode/Password, since the natural flow for users is to enter their regular credentials first and then pause to add the two-factor passcode.

In the original order, since the passcode is time-sensitive, users often had it rejected as they fumbled with data entry and switching between fields.

Server Committed a Protocol Violation CR must be followed by LF

…or “Why something as simple as a space can break things”

Since early November, we’ve been getting hundreds of errors a day as a result of redirecting traffic to our sister website in the UK.  The timing corresponded to them releasing a new version of their website, but they were unhelpful in trying to determine the cause of the error.

The error was: System.Net.WebException: The server committed a protocol violation. Section=ResponseHeader Detail=CR must be followed by LF.

Every article on the subject indicated that the problem was on the server side – a response not adhering properly to the HTTP 1.1 protocol – but that the exception surfaces because of the default behavior of your .NET application and how it handles the error.

There appeared to be three possible solutions:

  1. Get the other developers to fix their website. Of course, this is unlikely to happen with a third party unless you have a relationship with them.
  2. Suppress the error in your application by setting useUnsafeHeaderParsing="true" in your web.config (see the snippet after this list). This isn’t a great solution because it affects the entire application and may expose you to other issues.
  3. Use reflection to programmatically set your config settings. Here is a good thread on Stack Overflow that showcases solutions #2 and #3. While this would solve the problem, we decided internally that we shouldn’t have to add code (nor did we have the time) to deal with a problem on their end.
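
For reference, here’s what option #2 looks like – a minimal web.config fragment. The httpWebRequest element under system.net/settings is the only part that matters; the rest of your configuration stays as-is:

    <configuration>
      <system.net>
        <settings>
          <!-- Relaxes .NET's strict response-header validation for the entire application -->
          <httpWebRequest useUnsafeHeaderParsing="true" />
        </settings>
      </system.net>
    </configuration>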

The majority of the articles tackled the problem at face value: the error says each response header line must be terminated with a CRLF, and this one seemingly only had a CR.

Mehdi El Gueddari has a great article on how he went through a variety of steps to debug this exact problem.

I tried those same steps myself and brought in one of our senior developers. Interestingly, we experienced the same problem with Fiddler “fixing” what was a bad response header. All of the headers were displayed, so nothing initially jumped out as the problem. However, we did see that the UK site was inserting a custom “Powered By:” header, and we focused our efforts there. The value was “Eggs and Ham”.

For security reasons, changing your response headers can be a good thing, and we believed that the UK made the change for that reason (and for a little humor).

Mehdi’s article put us on the right track, but it was his Attempt #4: Wireshark that finally helped us identify the true problem. In his case, he was looking for the missing CRLF (0D 0A hex).

When we used Wireshark, the hex view showed that the headers had the required CRLF, but in the decoded preview not all of the headers were parsed properly. This was the Aha! moment, bringing us full circle back to that custom header we’d seen in Fiddler.

The problem was that the key was called “Powered By” and not “PoweredBy” or “Powered-By”. A space is not valid in an HTTP response header field name.
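
Reconstructed from the Wireshark capture, the start of the response looked roughly like this (everything except the “Powered By” header is a placeholder):

    HTTP/1.1 200 OK
    Powered By: Eggs and Ham       <-- space in the field name; strict parsers stop here
    Content-Length: 512
    Date: Mon, 01 Dec 2014 12:00:00 GMT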

Fiddler showed the key, but it automatically fixed the problem and displayed ALL of the headers, so it wasn’t obvious.

Wireshark, on the other hand, was unable to parse the response headers because of that space, so we didn’t see the remaining headers (including valid ones like “Content-Length” and “Date”).

[Screenshot: Wireshark capture of the response headers]

Now that we had identified the true problem, we went back to the UK team with our findings. Presented with these details, they were able to quickly make the change on their end to prevent future errors.

Finding SQL Column Dependencies

I’m working on a project to rewrite an older SSIS package. As part of that rewrite, I noticed that there were some duplicate columns (with different names) being sent from the vendor. They corrected their end, and now I want to drop the extra columns from the database.

The problem is: how can you be sure those columns aren’t used in some other procedure or view?

I found this article on Different Ways to Find SQL Server Object Dependencies and started with Svetlana’s second example.

Unfortunately it didn’t quite give me what I wanted. Since we heavily use schemas in our database and we don’t use Hungarian notation, I wanted to make sure that I could more easily locate the objects I needed.

I revised the code to include the schema name and object type, as well as pulling the items out into variables for easy reading.

DECLARE @TableName AS VARCHAR(50) = 'Customers' -- Table that holds column you are dropping or renaming
DECLARE @ColumnName AS VARCHAR(50) = 'Cstomer'	-- Column you are looking for

SELECT
    OBJECT_SCHEMA_NAME(d.referencing_id) + '.' + OBJECT_NAME(d.referencing_id) AS referencing_name
,   o.type
,   d.referenced_database_name
,   d.referenced_schema_name
,   d.referenced_entity_name
FROM sys.sql_expression_dependencies d
INNER JOIN sys.objects o ON d.referencing_id = o.object_id
WHERE OBJECT_NAME(d.referenced_id) = @TableName
    AND OBJECT_DEFINITION(d.referencing_id) LIKE '%' + @ColumnName + '%'
ORDER BY OBJECT_SCHEMA_NAME(d.referencing_id), OBJECT_NAME(d.referencing_id)
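
Once the query comes back empty for a column, it should be safe to drop. A hypothetical example using the variable values above (assuming the table lives in the dbo schema):

    ALTER TABLE dbo.Customers DROP COLUMN Cstomer;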

Temporarily Preventing SSIS flow while debugging

When debugging an SSIS package, sometimes you need to execute only a small portion of the package. While you do have the option of disabling elements, this can prove time-consuming if you have a lot of them. And although you could remove the precedence constraint altogether, that won’t solve the problem, because the downstream tasks would then execute in parallel.

A quick solution is to set an expression on the precedence constraint which will always evaluate to false, such as 1 > 2.

This will prevent the package from continuing past the constraint and is very easy to undo.

[Screenshot: precedence constraint expression set to 1 > 2]

Faulting SandboxHost with Office Web Apps

On our new Office Web Apps installation, I was seeing errors in the event viewer repeating every few minutes, such as:

Faulting application name: SandboxHost.exe, version: 15.0.4502.1000, time stamp: 0x512d262b
Faulting module name: KERNELBASE.dll, version: 6.3.9600.17278, time stamp: 0x53eebf2e

I applied each of the April, May, June, and July cumulative updates, each time hoping they would solve the problem. They didn’t.

The key here was taking a closer look at the ULS logs, which have additional detail. By default, these logs can be found at C:\ProgramData\Microsoft\OfficeWebApps\Data\Logs\ULS

I found a corresponding entry which provided the hint to the real problem:

Failed to launch sandbox process, SE_ASSIGNPRIMARYTOKEN_NAME
and SE_INCREASE_QUOTA_NAME privileges are required

This error is typically seen in SharePoint installations, but the root cause is the same: the service account running the process doesn’t have the necessary permissions.

Office Web Apps uses the LocalSystem account by default. I had changed this to a specialized service account which had permissions into the SharePoint content databases, in order to isolate a different problem related to Web Apps not working on SharePoint personal sites.

As a result, it didn’t have the required permissions.

Utilizing this article at Al’s Tech Tips, I made the appropriate changes to the Local Security Policy, granting the service account the “Replace a process level token” (SE_ASSIGNPRIMARYTOKEN_NAME) and “Adjust memory quotas for a process” (SE_INCREASE_QUOTA_NAME) user rights. Whereas his article talks about the service account running the SharePoint User Code Host, the one we are concerned with for Office Web Apps Server is Office Web Apps (WACSM).
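
If you’re not sure which account the service is currently running under, a quick way to check is via WMI (WACSM is the service name Office Web Apps uses):

    # Show the logon account for the Office Web Apps service
    Get-WmiObject Win32_Service -Filter "Name='WACSM'" |
        Select-Object Name, StartName, State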

Office Web Apps Server Required SAN

I’m building out our WAC server, and although the server appeared to be working correctly, it was constantly in an unhealthy state.

This can be checked by issuing the command:

Get-OfficeWebAppsMachine

[Screenshot: Get-OfficeWebAppsMachine output showing Unhealthy]

I looked through several articles, including the often-referenced How to Get Office Web Apps Server 2013 to Report a Healthy Health Status and Office Web Apps Server 2013 – machines are always reported as Unhealthy. I’ve found that Wictor’s articles tend to have the level of detail I prefer when trying to solve a problem.

We were experiencing the exact issue that was reported with the Watchdog processes not establishing an SSL connection.

<HealthMessage>BroadcastServicesWatchdog_Wfe reported status for
BroadcastServices_Host in category '4'. Reported status:
Contacting Present_2_0.asmx failed with an exception:
Could not establish trust relationship for the SSL/TLS
secure channel with authority 'machinename.internaldomain.tld'.</HealthMessage>

Initially, we had a certificate issued by GlobalSign. Even though it was trusted, it still wouldn’t work. For troubleshooting, I switched to a certificate issued by our internal CA – one that I knew would work.

I issued the certificate with the CN=webapps.ourdomain.com and the SAN=machinename.internaldomain.com.

In the past, I’ve never repeated the CN as a SAN.

Both of the articles mentioned having the domain listed as a SAN, but they didn’t explicitly mention needing the name in both the CN and the SAN.

I went ahead and reissued my internal cert, and repeated webapps.ourdomain.com as both a CN and SAN. Lo and behold, the errors with SSL stopped.
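
If you want to reproduce this certificate layout for testing, here’s a minimal sketch using the names above – New-SelfSignedCertificate places the first DNS name in the CN and every name in the SAN (for production you’d still want the CA-issued equivalent):

    New-SelfSignedCertificate -DnsName "webapps.ourdomain.com", "machinename.internaldomain.com" `
        -CertStoreLocation Cert:\LocalMachine\My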

Upon further research into the necessity of having a domain name repeated in both the CN and SAN, I learned that it’s often done as a best practice for compatibility.

Viktor Varga summarized it very nicely in this Stack Exchange question: SSL – How do Common Names (CN) and Subject Alternative Names (SAN) work together?

Is Excel Missing from Import and Export Data?

Recently, a user was trying to import some data from a Microsoft Excel file using the Import and Export Data wizard. The dropdown choices did not include Excel as an option, but they knew they had used it in the past.

[Screenshot: 64-bit wizard data sources – no Excel]

The reason for this is that they were running the 64-bit wizard. Since there is no 64-bit version of the Excel driver, it wasn’t on the list.

Running the 32-bit version provided the data sources they were expecting.

[Screenshot: 32-bit wizard data sources – Excel available]

In addition to Excel, the 32-bit version provides drivers for:

  1. Microsoft Access (Microsoft Access Database Engine)
  2. Microsoft Access (Microsoft Jet Database Engine)
  3. Microsoft Office 15.0 Access Database Engine OLE DB Provider
  4. Microsoft OLE DB Provider for Analysis Servers 11.0
  5. Microsoft OLE DB Provider for Oracle

In our case, we are running SQL 2012, but we’ve installed the SQL 2014 tools in order to use the latest SSMS.

By default the 64-bit SQL Server 2014 Import and Export Data is located at C:\Program Files\Microsoft SQL Server\120\DTS\Binn\DTSWizard.exe.

The 32-bit version can be found at C:\Program Files (x86)\Microsoft SQL Server\120\DTS\Binn\DTSWizard.exe.