Azure DevOps Graph API and continuation tokens

I recently found out that the Azure DevOps Graph API documentation is somewhat confusing about when and where to expect continuation tokens when performing API calls.

As an example, let's say you call the groups API:

https://vssps.dev.azure.com/{account name}/_apis/graph/groups?api-version=4.1-preview.1

The documentation mentions that if the data cannot be “returned in a single page”, the “result set” will contain a continuation token. It turns out that the definitions of both a single page and a result set are not entirely intuitive.

To start with the latter: a result set in this case is not just the resulting JSON document, as you might expect, but also includes the response headers of the API call. To be precise, the x-ms-continuationtoken response header will contain the continuation token if one is needed to retrieve the next page.

The definition of a page in this API is also somewhat strange. In our account I received 495 results in the first page and 66 in the second (and last) page for a call to the above API without any filtering. When I apply filtering, however (for instance, I want only the AAD groups), I receive 33 items in the first page and 5 in the second (and again last) page.
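Putting the two observations together, the retrieval loop ends up looking something like this. This is a minimal C# sketch using HttpClient (usings for System, System.Linq, System.Net.Http, System.Net.Http.Headers and System.Text assumed); it assumes an async method, a personal access token in the hypothetical personalAccessToken variable, and it hands the token back via the continuationToken query parameter to request the next page. Error handling is omitted:

// The key point: the continuation token arrives in the
// x-ms-continuationtoken response header, not in the JSON body.
using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
        "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + personalAccessToken)));

    var baseUrl = "https://vssps.dev.azure.com/{account name}/_apis/graph/groups?api-version=4.1-preview.1";
    string continuationToken = null;

    do
    {
        var url = baseUrl + (continuationToken == null
            ? string.Empty
            : "&continuationToken=" + Uri.EscapeDataString(continuationToken));

        var response = await client.GetAsync(url);
        var json = await response.Content.ReadAsStringAsync();
        // ... process this page of groups from 'json' ...

        continuationToken = response.Headers.TryGetValues(
            "x-ms-continuationtoken", out var values) ? values.First() : null;
    } while (continuationToken != null);
}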

Lessons learned: look everywhere for that continuation token, even if the number of results doesn't lead you to believe you received a full page.

WCF services on an Azure website returning 502 Bad Gateway

So the other day I moved a web role containing WCF services over to an Azure website, which seemed like a breeze: after deployment I called up the .svc file in the browser and all seemed fine. However, when I tested with an actual client of the service, it received only 502 Bad Gateway responses.

Now there are lots of reasons 502 responses happen, especially in cloud environments where load balancers and whatnot sit between you and the site/service. However, after some research a pattern started to emerge in which infrastructure problems seemed an unlikely cause, and a few seemingly random questions on Stack Overflow made me consider: might the problem be caused by my own code or configuration?

You see, a regular website or service should usually not respond with a 502 Bad Gateway; this is mostly something proxies, load balancers, and the like do (as far as I know). In this case too, the error is returned by some intermediate device and not the web server itself. The intermediate device does this because the website severed the TCP connection abruptly, for instance because the application pool for the website was shut down unexpectedly. And in a .NET WCF service, what causes the application pool to shut down unexpectedly is usually something that brings the .NET application domain down: things like OutOfMemoryException, StackOverflowException, and the like.

If you don't catch these kinds of exceptions yourself (and indeed you usually should not, but that is another discussion entirely) and they bring down the application domain, no logging is done whatsoever (not as far as I could find, and I've searched for it quite a while). So the best way to find out what is really going on is remote debugging the Azure website. A good tutorial on that can be found here. Be sure to deploy a debug build of your website for easiest debugging.
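As an aside, if you want at least a fighting chance of finding a trace in your own logs, you can register a last-chance handler. This is a sketch of my own, not part of the original troubleshooting session; it cannot keep the application domain alive (and cannot catch StackOverflowException at all), but for many otherwise-silent fatal exceptions it leaves a trail:

// In Global.asax.cs (or wherever your application starts up).
protected void Application_Start(object sender, EventArgs e)
{
    AppDomain.CurrentDomain.UnhandledException += (s, args) =>
    {
        // Log to something that survives the crash (a file, table storage, ...).
        System.Diagnostics.Trace.TraceError(
            "Fatal unhandled exception: {0}", args.ExceptionObject);
    };
}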

So now you have that connected: hit the offending service with your client, and presto… you get a nice unhandled exception pop-up, which will make you google some more, find a solution for that problem, and rid yourself of that pesky 502 error. Except… in my case no unhandled exception popped up. I double-checked my exception handling settings (twice) to make sure I had them set correctly. So this means… it's not my code…

Back to the debugger. This time I turn off the 'Just My Code' feature in the debugger settings, hit the service again, and get presented with an actual unhandled exception. My particular problem was related to the one described in this Stack Overflow post.

I hope writing these steps down lets me (and maybe someone else) fix it considerably faster next time I hit this error. This was quite a long afternoon of headaches I’d love to get back.


“Windows 10 SMB Secure negotiation” or “Why will my network shares not work on Windows 10 anymore”

So, a couple of years ago I was the first person in the office to upgrade to Windows 8. I had the blessing of corporate IT, as long as I troubleshot my own problems if they were Windows 8 specific. And of course, if I encountered and fixed any errors, I would let them know what the problem was and how to fix it.

One of the first problems I encountered was connecting to our $50k SAN. After some digging it turned out that it did not support a new SMB feature, turned on by default in Windows 8, called Secure Negotiate, which basically wants to negotiate with the server about which encryption to use when transferring files. A solution was quickly found: turn off the feature.

This could be done by setting the following registry value:

HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\RequireSecureNegotiate=0

Everything worked as expected until I upgraded to Windows 10 when that came out. Microsoft had a very valid reason for removing the above workaround: you should not be able to bypass security features unless the server indicated during negotiation that it does not support them.

However, the SAN still didn't support the secure negotiate feature. So after some more research I found out that I could just tell the client to force secure transfer, without the need for negotiation. If you can't seem to access your SMB shares anymore since upgrading to Windows 10, open a PowerShell prompt as Administrator and run the following command:

Set-SmbClientConfiguration -RequireSecuritySignature $true

Please note that I am not an SMB protocol guru, so the above text may be a bit inaccurate in its details. If you want more info, however, someone at Microsoft who does know what he is talking about did a very detailed write-up about the feature. You can find it here:

https://blogs.msdn.microsoft.com/openspecification/2015/08/11/smb-3-1-1-pre-authentication-integrity-in-windows-10/

Azure ServiceBus Relay – 50200: Bad Gateway

<TL;DR> This error message is not always caused by proxy issues. After last week's updates, an old version of the Service Bus DLLs (2.2.3) on the relay server side caused this error on the client side when trying to call service operations. </TL;DR>

Last week I arrived at the office and was greeted by a status screen that contained a lot more red lights than when I had left the day before. That in itself wasn't too strange; we monitor customers' servers as well, and who knows what kind of update/reboot schedule these guys have. However, the fact that the only servers experiencing problems were the ones we host ourselves made me a bit suspicious.

After some investigation I noticed the error message from the title in our logging. Apparently it can be found in two variations: 50200: Bad Gateway, and of course 502: Bad Gateway. I had encountered this issue before at a customer using a proxy, and all the Google results led me to believe that this was indeed a proxy issue on our side as well. However, we don't have a proxy running in our network, and it had been working fine before.

After some digging I noticed that only the servers that had received updates and been rebooted the night before were experiencing issues; servers that had not been updated were fine. It turned out that one of the updates did not play well with the old (2.2.3) version of the Service Bus DLLs we were still using (the software had been running fine for 3 years, why update?). So after updating to the latest version that could still run on .NET 4 (2.8.0 if I remember correctly) and updating the software on the rebooted servers, we were back in business again.

MSBuild command line building a single project from a solution

I recently needed to build just one project (and its dependencies) from a solution. I quickly found the following MSDN article on exactly how to do this:

https://msdn.microsoft.com/en-us/library/ms171486.aspx

However, I couldn’t get it to work for the life of me. The command always complained along the lines of:

MySolution.sln.metaproj : error MSB4057: The target "My.Project:Clean" does not exist in the project. [MySolution.sln]

Luckily, during a search on the internet about troubleshooting MSBuild issues, I came across a way to save the intermediate project file MSBuild creates from a solution. As you might have noticed when looking at a .sln file, it's not even close to a regular MSBuild project file. MSBuild interprets the solution file and generates one big MSBuild project file from it, then builds that file.

This can be done by setting an environment variable before calling MSBuild for a solution. In a command prompt, type the following:

Set MSBuildEmitSolution=1

When you then build a solution, for instance with the following command:

msbuild MySolution.sln /t:Clean

MSBuild will perform a clean of the solution, but it will also save the entire generated MSBuild project in a file called MySolution.sln.metaproj.

I thought this was a good idea because the MSDN article above talks about targets, and usually targets in a project file are called Clean, or Rebuild, or something like that. Why would there be a target called “MyProjectName:Clean”? Well, because MSBuild generates that target in the aforementioned .metaproj file.

It turns out, however, that target names may not contain the . character, and MSBuild nicely works around this by replacing dots with _ characters. So to get my single project building I had to call:

msbuild MySolution.sln /t:My_Project:Rebuild

Hopefully this post saves someone else some time.

Microsoft Edge not starting after Windows 10 update (v1511)

I recently updated my work machine to the latest Windows 10 update (1511). After the update was finished, I noticed that I couldn't start Microsoft Edge anymore. I didn't think much of it at the time, since it is not my main browser. However, it started to annoy me a bit when it turned out it was my main PDF reader.

Rather than setting another app as the default PDF reader, I decided to try and fix the cause of the problem. This turned out to be harder than expected, though. I don't know why the problem reared its head after the latest update, but suffice it to say that after a reinstall Edge worked, and then after configuring my PC it didn't anymore.

Reinstalling again and then checking after each step revealed that things went wrong after connecting my work account with my PC. And by work account I don't mean my domain account, but rather my Office 365 organizational account (which you can connect using the Accounts settings page in Windows 10).

Things, however, did not return to normal after I had severed the connection, and I had to remove my profile and recreate it to get Edge working again. If you are using a roaming profile this might not work for you; also, do not take removing your profile lightly. It holds more of your settings and configuration than you might realize.

Generating and consuming JSON Web Tokens with .NET

Maybe you have read my previous blog post, in which I talked about token generation in OWIN. After the issues we had there with the machine key and OWIN versions, I decided to take a look at some alternatives.

After some research I decided JSON Web Tokens (or JWTs, which apparently should be pronounced as the English word 'jot') would fit the bill. They are small, they are an open standard, and they have a simple, URL-safe string representation. More info on the standard can be found in this draft.

After this research it should be easy to incorporate this into my solution, right? Well… not as easy as I thought. It turns out many samples just use an external STS to create and verify tokens, or use their own custom implementation that doesn't support all of the options. Let alone complete samples of generating a token in a WCF service and using it in a client to pass on to another service. So after a lot of searching and researching, I decided to make my own sample.

So here comes the first part, generating and consuming:

I will be using the “JSON Web Token Handler for the Microsoft .NET Framework 4.5” NuGet package, as it is called by its full name. It is also known as System.IdentityModel.Tokens.Jwt. In this post I'll just show you how to create a token from some claims and then how to turn the token back into claims again, all in a console application so we can more easily see what is going on.

I have created a new console application in Visual Studio 2015 and added the aforementioned NuGet package. At the time of writing, the latest stable version is 4.0.2.206221351. Don't forget to add a reference to the System.IdentityModel assembly as well; it has been part of the .NET Framework since v4.5.

First we will add the using directives we will need (System.Text is there for the Encoding class we use below):

using System.IdentityModel.Tokens;
using System.Security.Claims;
using System.Text;

Before we can sign a token, we need a secret to sign it with. There are multiple options, like certificates and whatnot; the easiest to use in this example, however, is just a normal shared secret text, which we will need to turn into a byte array before we can make it a secret key. We will also have to put it in a SigningCredentials object, together with the algorithms we will use for signing:

var plainTextSecurityKey = "This is my shared, not so secret, secret!";
var signingKey = new InMemorySymmetricSecurityKey(
    Encoding.UTF8.GetBytes(plainTextSecurityKey));
var signingCredentials = new SigningCredentials(signingKey, 
    SecurityAlgorithms.HmacSha256Signature, SecurityAlgorithms.Sha256Digest);

You can use a couple of different security algorithms, but you should specify one that ends in Signature for the first parameter and one that ends in Digest for the second. Some combinations will throw a NotSupportedException (because, well, not supported); HmacSha256Signature and Sha256Digest seem to be the default in most examples I have seen.

After that we will need a few claims to put in the token; otherwise, why would we need a token at all:

var claimsIdentity = new ClaimsIdentity(new List<Claim>()
{
    new Claim(ClaimTypes.NameIdentifier, "myemail@myprovider.com"),
    new Claim(ClaimTypes.Role, "Administrator"),
}, "Custom");

Now we can create the security token descriptor:

var securityTokenDescriptor = new SecurityTokenDescriptor()
{
    AppliesToAddress = "http://my.website.com",
    TokenIssuerName = "http://my.tokenissuer.com",
    Subject = claimsIdentity,
    SigningCredentials = signingCredentials,
};

Please note that the AppliesToAddress and TokenIssuerName must be valid URIs. Not in the sense that they should be resolvable, but they must be in a valid URI format (if you have accidentally read the v3.5 WIF documentation this can be confusing; it says that any string will do). The AppliesToAddress should contain the token's audience, meaning the website or application that will receive the token. The TokenIssuerName is, obviously, the application issuing the token.

This token descriptor can now be used with any WIF (Windows Identity Foundation) token handler (see the SecurityTokenHandler class on MSDN). The JwtSecurityTokenHandler we are going to use descends from that class (and implements the necessary abstract members).

Here is the code to create a token, then sign and encode it:

var tokenHandler = new JwtSecurityTokenHandler();
var plainToken = tokenHandler.CreateToken(securityTokenDescriptor);
var signedAndEncodedToken = tokenHandler.WriteToken(plainToken);

If you want, you can print everything to the screen now to see what was generated:

Console.WriteLine(plainToken.ToString());
Console.WriteLine(signedAndEncodedToken);
Console.ReadLine();

Now that we have an encoded token that is easily transportable, we might want some other application to validate it (to see that it was not tampered with). To do this, we first need an instance of the TokenValidationParameters class:

var tokenValidationParameters = new TokenValidationParameters()
{
    ValidAudiences = new string[]
    {
        "http://my.website.com",
        "http://my.otherwebsite.com"
    },
    ValidIssuers = new string[]
    {
        "http://my.tokenissuer.com",
        "http://my.othertokenissuer.com"
    },
    IssuerSigningKey = signingKey
};

As you can see, the TokenValidationParameters class allows us to specify multiple valid issuers and audiences. You will also need to specify the same signing key as when you created the token (obviously). We can now simply validate the token as follows:

SecurityToken validatedToken;
tokenHandler.ValidateToken(signedAndEncodedToken,
    tokenValidationParameters, out validatedToken);

Console.WriteLine(validatedToken.ToString());
Console.ReadLine();
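By the way, ValidateToken also returns the ClaimsPrincipal it reconstructed from the token, which the snippet above discards. Capturing it lets you inspect the claims that travelled inside the token:

var principal = tokenHandler.ValidateToken(signedAndEncodedToken,
    tokenValidationParameters, out validatedToken);

// Print every claim that was carried by the token.
foreach (var claim in principal.Claims)
{
    Console.WriteLine("{0}: {1}", claim.Type, claim.Value);
}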

You might be wondering how the token handler knows which signature and digest algorithms should be used. If you look carefully at the token, you will see that the algorithm name is encoded into its header (this encoding is simply Base64Url, not encryption).
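You can check this yourself by decoding the first segment of the token. A small sketch (Base64Url is just Base64 with '-' and '_' instead of '+' and '/', and with the padding stripped):

// Decode the JWT header segment to reveal the algorithm name.
var headerSegment = signedAndEncodedToken.Split('.')[0];
var base64 = headerSegment.Replace('-', '+').Replace('_', '/');
base64 = base64.PadRight(base64.Length + (4 - base64.Length % 4) % 4, '=');

// Prints something like: {"typ":"JWT","alg":"HS256"}
Console.WriteLine(Encoding.UTF8.GetString(Convert.FromBase64String(base64)));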

The source code to this sample can be found here.

Problems with OAuth access token encryption and decryption using Microsoft.Owin hosted in IIS

If you want to secure access to your WebAPI calls, one mechanism you can use is OAuth2 bearer tokens. These tokens are generated via a login call, for instance, and the website or mobile app holds on to the token to authenticate with the server. These tokens can be generated using Microsoft's OWIN implementation (also known as Katana).

These tokens have an expiration date; after that date you obviously won't accept the token anymore. However, there are also situations where the token can't even be decrypted.

First of all, the default way of encrypting the token when you host Owin/Katana in your own process (HttpListener or otherwise) is different from when it is hosted in IIS using the SystemWeb host (which is a separate NuGet package, by the way). The former uses DPAPI to protect the tokens, while the latter uses ASP.NET's machine key data protection. There is also the option of providing your own provider/format.
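For completeness, here is a sketch of that last option, since it sidesteps the host and version dependencies described below. MyJwtFormat is a hypothetical class name, and the Protect/Unprotect bodies are left as an exercise (you could implement them with the JWT handler from the previous post, for instance):

using System;
using Microsoft.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.OAuth;

// A custom token format: with this in place, token compatibility no longer
// depends on the host type, the machine key, or the Microsoft.Owin version.
public class MyJwtFormat : ISecureDataFormat<AuthenticationTicket>
{
    public string Protect(AuthenticationTicket data)
    {
        // Serialize and sign the ticket yourself (e.g. as a JWT).
        throw new NotImplementedException();
    }

    public AuthenticationTicket Unprotect(string protectedText)
    {
        // Validate the signature and rebuild the ticket.
        throw new NotImplementedException();
    }
}

// Wired up in the OWIN startup class's Configuration(IAppBuilder app):
app.UseOAuthAuthorizationServer(new OAuthAuthorizationServerOptions
{
    TokenEndpointPath = new PathString("/token"),
    Provider = new OAuthAuthorizationServerProvider(),
    AccessTokenFormat = new MyJwtFormat()
});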

I am currently only familiar with the SystemWeb host under IIS, and we recently ran into some problems after updating our software and moving it to another machine. See, we had these mobile devices that registered with our WebAPI service and stored a token which should never expire. However, after the update we found the tokens would not decrypt anymore, and our users were presented with a security error, which meant they had to re-register their device with our software.

We quickly found out that we had forgotten to set the machine key in our web.config, so encryption on the new server was different from the old one. However, even after configuring our web.config to use the same machine key as the old server, tokens were still not being decrypted.

After a lot of searching it turned out that Microsoft.Owin 3.0.1 will not decrypt tokens created by Microsoft.Owin 3.0.0. As soon as we downgraded all our Microsoft.Owin packages back to the 3.0.0 version, it worked again.

To make a long story short:

Make sure both the machine key and the Microsoft.Owin version stay the same if you want your tokens to keep working after an update of your software. Otherwise you'll find out the hard way why you should probably have used your own token encryption/decryption scheme in the first place. Our next order of business is finding a way to update our Microsoft.Owin version in the future without breaking our current users' device registrations.

NuGet package UI always indicates an update for some packages

Or, why you don’t get a nice reassuring green checkmark after an update of a package.

I recently noticed in a rather large solution with around 70 installed NuGet packages (don't ask) that updating some packages did not result in a green checkmark. When you open the update screen again, it again indicates there is an update for that package. And when you try to update again, it will not allow you to select projects to update, since it thinks (knows) that all your solution's projects already have the update.

I recreated the situation in a simple solution with two packages, rigging the Json.NET package to display the above behaviour. When I updated both packages, one got the green checkmark and the other didn't.

This is usually caused by a rogue copy of an old version of the package still sitting in the packages folder under your solution folder.

In my case there was a Newtonsoft.Json.6.0.2 folder next to the newer installation. There are probably many reasons why this can happen (for instance, the EntityFramework package asks for a restart of Visual Studio, and if you don't do that and just continue working on your solution, it might leave the old version behind). The solution is simple: just delete the old version's folder.

Afterwards it won’t show up in the packages that need updates anymore.

Please be aware that if you have a project under the solution root folder that is not in your solution but does contain NuGet package references, it can still reference the folder you just deleted (and promptly restore it again if you open the project).

Improving the 2nd way in knockout-winjs's two-way data binding

In the original version of the knockout-winjs sample I wrote a couple of days ago (see this post) there is a basic implementation of writing back to an observable when a control changes value. The part that does this can be found in the init method of the generic binding handler, and it looks like this:

// Add event handler that will kick off changes to the observable values
// For most controls this is the "change" event
if (eventName) {
    ko.utils.registerEventHandler(element, eventName, function changed(e) {
        // Iterate over the observable properties
        for (var property in value) {
            // Check to see if they exist
            if (value.hasOwnProperty(property)) {
                // Determine if that value is a writableObservable property
                if (ko.isWriteableObservable(value[property])) {
                    // Kickoff updates 
                    value[property](control[property]);
                }
            }
        }
    });
}

You might notice that if a control has a change event (the name is control dependent, and not all controls need one, but in this case the name is set in the eventName variable), we register an event handler with the element and update ALL writable observables bound to a property of the control.

This is of course not how we want to update our observables; I would prefer to update just the one that needs updating. So I introduced another field in the definition of each control's binding handler (I named it changedProperty). I also no longer bind to the element's event, but to the control's event property directly. This has one issue, however, if you also want to be able to bind to that event explicitly, so any existing handler is wrapped rather than replaced.

To accomplish this I changed the above code to the following:

// Add event handler that will kick off changes to the observable values
// For most controls this is the "change" event
if (eventName) {
    // If the change event is already bound we wrap the current handler with our update routine.
    var currentEventHandler = null;
    if (control[eventName]) {
        currentEventHandler = control[eventName];
    }

    control[eventName] = (eventInfo) => {
        if (value.hasOwnProperty(changedProperty)) {
            // Determine if that value is a writableObservable property
            if (ko.isWriteableObservable(value[changedProperty])) {
                // Kickoff updates 
                value[changedProperty](control[changedProperty]);
            }
        }

        if (currentEventHandler) {
            currentEventHandler(eventInfo);
        }
    };
}

I also found out that the current implementation of event binding has a bug if we want to bind two events on a single control. It read as follows:

// After the control is created we can bind the event handlers.
for (var property in value) {
    if (value.hasOwnProperty(property) && (property.toString().substr(0, 2) === "on")) {
        control[property] = (eventInfo) => {
            value[property].bind(viewModel, viewModel, eventInfo)();
        };
    }
}

It turns out, however, that the 'property' variable has changed by the time the actual event is fired: this is the classic closure-over-a-loop-variable problem, where every handler closes over the same variable, which holds the last key of the loop once any event actually fires. So we can't really use it inside the handler. I fixed it in the following fashion:

// After the control is created we can bind the event handlers.
for (var property in value) {
    if (value.hasOwnProperty(property) && (property.toString().substr(0, 2) === "on")) {
        control[property] = (eventInfo) => {
            // Must use eventInfo.type here because 'property' will
            // be changed by the time the actual event is fired.
            value["on" + eventInfo.type].bind(viewModel, viewModel, eventInfo)();
        };
    }
}

To watch all this in action, download this expanded example: ToDoApp4