Friday, August 18, 2017

Tuples in C#7

One of the nice features in C# 7 is the support for tuples as a lightweight data structure, available through the System.ValueTuple NuGet package. It simplifies code where you previously had to fall back on out parameters or arbitrary helper objects.

Let’s have a look at a simple example:

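The original code screenshot is gone, but based on the description the sample probably looked something like this (GetRange and its values are invented names, not the code from the lost screenshot):

```csharp
using System;

class Program
{
    // A method returning an unnamed tuple; on frameworks that don't ship
    // ValueTuple, this needs the System.ValueTuple NuGet package.
    public static (int, int) GetRange() => (1, 10);

    static void Main()
    {
        var range = GetRange();
        // Without element names you are stuck with Item1/Item2:
        Console.WriteLine($"Min: {range.Item1}, Max: {range.Item2}");
    }
}
```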

This sample shows how easy it is to return tuples from your methods. The only problem with this implementation is that when you access the result of the method call, you are stuck with the Item1 and Item2 properties, which are not really meaningful.

You don’t have to stop there, though: we can update the method signature with some extra metadata:

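This screenshot is also missing; a reconstruction of the same hypothetical method, now with named tuple elements:

```csharp
using System;

class Program
{
    // The same hypothetical method, now with named tuple elements.
    public static (int min, int max) GetRange() => (1, 10);

    static void Main()
    {
        var range = GetRange();
        // IntelliSense now offers the meaningful names instead of Item1/Item2:
        Console.WriteLine($"Min: {range.min}, Max: {range.max}");
    }
}
```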

If we now access the tuple values again, we see the meaningful element names instead of Item1 and Item2.

NOTE: The names associated with tuple elements are not runtime metadata: there is no property or field with that name on the actual ValueTuple instance. The underlying properties are still Item1, Item2, etc.; the element names exist only at design time and compile time. If we decompile the code with JustDecompile, we see that the compiler simply emitted a TupleElementNames attribute on the method.
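The decompiler screenshot is gone, but the same point can be verified with a small runnable sketch: the names are absent from the runtime type and only live in the compiler-emitted TupleElementNames attribute (GetRange is again a hypothetical method name):

```csharp
using System;
using System.Reflection;
using System.Runtime.CompilerServices;

class Demo
{
    public static (int min, int max) GetRange() => (1, 10);

    static void Main()
    {
        var range = Demo.GetRange();
        // The ValueTuple instance itself only has Item1/Item2 fields:
        Console.WriteLine(range.GetType().GetField("min") == null); // True

        // The element names live in a [TupleElementNames] attribute
        // that the compiler emits on the method's return type:
        var attr = typeof(Demo).GetMethod("GetRange")
            .ReturnParameter
            .GetCustomAttribute<TupleElementNamesAttribute>();
        Console.WriteLine(string.Join(", ", attr.TransformNames)); // min, max
    }
}
```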

This TupleElementNames attribute is picked up by Visual Studio and the compiler and provides the necessary IntelliSense. It also guarantees that the names keep working when you reference the compiled DLL from another project…

Thursday, August 17, 2017

Are your if statements not hidden sagas?

This video by Udi Dahan made me rethink all if statements in my code:

In the video Udi uses the following deceptively simple-looking requirement as an example:

“Disallow the user from buying products that are no longer available.”

Doh! This must be the easiest requirement I’ve ever seen. Let’s implement it…

OK, when the user goes to the products page and we show a list of products, let’s add an extra check that only shows the items that are not deleted from our product catalog:

if (item.State == States.Deleted)
    // Filter the item from the list

OK, perfect. Problem solved! But wait: what if the user leaves the page open for a while and in the meantime the product gets removed from the catalog? What happens if the user then tries to add this product to his shopping cart? OK, let’s add an extra check when the user tries to add an item to his cart:

if (item.State == States.Deleted)
    // Show a warning to the user that the product is no longer available

OK, perfect. Problem solved! But wait: what if the user adds some products to his cart, leaves the cart open for a while, and in the meantime the product gets removed from the catalog? What happens if the user then tries to check out his order? OK, let’s add an extra check when the user tries to check out his cart:

if (item.State == States.Deleted)
    // Show a warning to the user that the product is no longer available

OK, perfect. Problem solved! But wait: what if the user spends a few minutes searching for his credit card during the checkout process, and in the meantime the product gets removed from the catalog? What happens when the user pays for his order?

Wait! Stop! Let’s take a step back. It becomes obvious that there is always a moment where the if check is just too late.

The problem is that we end up with a business-oriented eventual consistency problem that is hard to solve. It turns out that these kinds of if statements are better removed and replaced by long-running processes that can impact the domain in multiple places.

To return to our example: the moment we set the IsDeleted flag to true for a product in our database, we start a long-running process that checks all active shopping carts, removes the deleted item from those carts and shows the user a message when he returns to the website and opens his shopping cart.

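The diagram is missing; below is a rough, hypothetical sketch of such a long-running process in C# (Cart, ProductDeletedSaga and the notification mechanism are all invented names, not code from the talk):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Cart
{
    public List<string> ProductIds { get; } = new List<string>();
    public List<string> Notifications { get; } = new List<string>();
}

// Hypothetical saga: reacts to a product deletion by cleaning up all
// active carts instead of sprinkling if checks throughout the flow.
class ProductDeletedSaga
{
    public void Handle(string deletedProductId, IEnumerable<Cart> activeCarts)
    {
        foreach (var cart in activeCarts.Where(c => c.ProductIds.Contains(deletedProductId)))
        {
            cart.ProductIds.Remove(deletedProductId);
            // The message is shown the next time the user opens his cart.
            cart.Notifications.Add($"Product {deletedProductId} is no longer available.");
        }
    }
}
```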

Wednesday, August 16, 2017

Chrome HTTPS error on localhost: NET::ERR_CERT_COMMON_NAME_INVALID

If you are a developer and are using a self-signed certificate for your HTTPS server, you may recently have seen Chrome block your site with a NET::ERR_CERT_COMMON_NAME_INVALID error (or a non-Dutch equivalent of it).


Starting from Chrome 58, an extra security check was introduced that requires certificates to specify the hostname(s) to which they apply in the subjectAltName field. When this change was first introduced, the error message was not very insightful, but today, if you take a look at the Advanced section of the error message or at the Security panel in the Developer Tools, you’ll get some more details pointing to the subjectAltName issue.


Create a new self-signed certificate

To fix it, we have to create a new self-signed certificate. We cannot use the good old makecert.exe utility, as it cannot set the subjectAltName field in certificates. Instead, we’ll use the New-SelfSignedCertificate cmdlet in PowerShell:

New-SelfSignedCertificate `
    -Subject localhost `
    -DnsName localhost `
    -KeyAlgorithm RSA `
    -KeyLength 2048 `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -FriendlyName "Localhost certificate"
Now you have a new certificate with a correct subject alternative name in your Personal certificate store.

The next step is to trust this certificate by adding it to the Trusted Root Certification Authorities store. You can either do this by hand using the certmgr tool in Windows or script it with PowerShell as well:
# set certificate password here
$pfxPassword = ConvertTo-SecureString -String "YourSecurePassword" -Force -AsPlainText
$pfxFilePath = "c:\tmp\localhost.pfx"
$cerFilePath = "c:\tmp\localhost.cer"

# find the certificate we created above in the Personal store
$certificatePath = Get-ChildItem Cert:\CurrentUser\My |
    Where-Object { $_.FriendlyName -eq "Localhost certificate" }

# create pfx certificate
Export-PfxCertificate -Cert $certificatePath -FilePath $pfxFilePath -Password $pfxPassword
Export-Certificate -Cert $certificatePath -FilePath $cerFilePath

# import the pfx certificate
Import-PfxCertificate -FilePath $pfxFilePath -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword -Exportable

# trust the certificate by importing the cer certificate into your trusted root
Import-Certificate -FilePath $cerFilePath -CertStoreLocation Cert:\CurrentUser\Root

Import it in IIS

OK, almost there. The last step to get it working in IIS is to import the pfx into IIS:

  • Open IIS using inetmgr.
  • Go to Server Certificates.


  • Click on the Import… action on the right. The Import certificate screen is shown.


  • Select the pfx, specify the password and click OK.
  • Now that the certificate is available in IIS, you can change the bindings to use it. Click on the Default Web Site (or any other site) on the left.
  • Click on the Bindings… action on the right. The Site Bindings screen is shown.


  • Click on the https item in the list and choose Edit…. The Edit Site Binding screen is shown.


  • Select the newly created SSL certificate from the list and click OK.

Monday, August 14, 2017

Using F# in Visual Studio Code

If you are interested in F# and want to start using it inside Visual Studio Code, I have a great tip for you:

Have a look at the F# with Visual Studio Code gitbook. It contains a short guide that explains step by step how to get your Visual Studio Code environment ready for your first lines of pure F# magic.


Happy coding!

Thursday, July 20, 2017

Caching your static files in ASP.NET Core

In ASP.NET Core, static files (images, css, …) are typically served using the static file middleware. The middleware can be configured by adding a dependency on the Microsoft.AspNetCore.StaticFiles package to your project and then calling the UseStaticFiles extension method from Startup.Configure:
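The code sample was lost from the post; a minimal version of this configuration, assuming a classic Startup class, looks roughly like this:

```csharp
using Microsoft.AspNetCore.Builder;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Serves files from wwwroot, without any caching headers by default.
        app.UseStaticFiles();
    }
}
```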

Unfortunately, this code will not do its job in the most efficient way. By default no caching is applied, meaning that the browser will request these files again and again, increasing the load on your server.

Luckily it’s not that hard to change the middleware configuration to introduce caching. In this example we set the caching to one day:
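That sample is also missing; here is a sketch of one way to do it, using StaticFileOptions.OnPrepareResponse to add a one-day Cache-Control header (the exact options used in the original post may have differed):

```csharp
using Microsoft.AspNetCore.Builder;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        app.UseStaticFiles(new StaticFileOptions
        {
            OnPrepareResponse = ctx =>
            {
                // Tell the browser it may cache static files for one day
                // (86400 seconds).
                ctx.Context.Response.Headers["Cache-Control"] = "public,max-age=86400";
            }
        });
    }
}
```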

Remark: An alternative approach would be to let your proxy server (IIS, …) handle the static file requests, as discussed here.

Wednesday, July 19, 2017

Guaranteeing “exactly once” semantics by using idempotency keys

A few weeks ago I had a discussion with a colleague about the importance of idempotency.

From http://www.restapitutorial.com/lessons/idempotency.html:

From a RESTful service standpoint, for an operation (or service call) to be idempotent, clients can make that same call repeatedly while producing the same result. In other words, making multiple identical requests has the same effect as making a single request. Note that while idempotent operations produce the same result on the server (no side effects), the response itself may not be the same (e.g. a resource's state may change between requests).

A good example of where you can get into trouble is an API that withdraws money from a customer account. If the user accidentally calls your API twice, the customer is double-charged, which I don’t think they’ll like very much…

A solution for this problem is the use of idempotency keys. The idea is that the client generates a unique key that is sent to the server along with the normal payload. The server captures the key and stores it together with the executed action. If a second request arrives with the same key, the server can recognize the key and take the necessary actions.

Which situations can occur?

  • Situation 1 – The request didn’t make it to the server; when the second request arrives, the server will not know the key and will just process the request normally.
  • Situation 2 – The request made it to the server, but the operation failed somewhere in between; when the second request arrives, the server should pick up the work where it previously failed. This behavior can of course vary from case to case.
  • Situation 3 – The request made it to the server and the operation succeeded, but the result didn’t reach the client; when the second request arrives, the server recognizes the key and returns the (cached) result of the succeeded operation.
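To make the idea concrete, here is a minimal in-memory sketch of idempotency keys (PaymentService and its members are hypothetical names; a real implementation would persist the key and the response together with the operation, e.g. in the same database transaction):

```csharp
using System;
using System.Collections.Concurrent;

class PaymentService
{
    // Maps an idempotency key to the response of the operation it triggered.
    private readonly ConcurrentDictionary<Guid, string> _processed =
        new ConcurrentDictionary<Guid, string>();

    private int _charges;

    public string Withdraw(Guid idempotencyKey, decimal amount)
    {
        // A repeated key returns the cached result instead of charging again.
        // Note: GetOrAdd may invoke the factory more than once under heavy
        // contention; a production implementation needs stronger guarantees.
        return _processed.GetOrAdd(idempotencyKey, _ => DoWithdraw(amount));
    }

    private string DoWithdraw(decimal amount)
    {
        _charges++;
        return $"Charged {amount} (charge #{_charges})";
    }
}
```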

Note: Idempotency keys become important when you are running systems that are not ACID compliant. If you are running an ACID transactional system, you can just re-execute the same operation, as the previous operation should have been rolled back (or at least that’s the theory).

Tuesday, July 18, 2017

Check compatibility between .NET versions

Compatibility is a very important goal of the .NET team at Microsoft. The team has always taken great care to guarantee that newer versions do not break existing functionality. However, sometimes breaking changes are unavoidable to address security issues, fix bugs or improve performance.

To understand the consequences, you have to distinguish between runtime changes and retargeting changes:

  • Runtime changes: Changes that occur when a new runtime is installed on a machine and the same binaries, when run, expose different behavior.
  • Retargeting changes: Changes that occur when an assembly that was targeting .NET Framework version x is set to target version y.

To help you identify possible compatibility issues, Microsoft created the .NET Compatibility Diagnostics, a set of Roslyn-based analyzers.

Here is how to use them:

  • First you have to choose whether you want to check for runtime changes or for retargeting changes.
  • Now you need to select the ‘From .NET Framework version’ and the ‘To .NET Framework version’.

  • After making your selection, you’ll get a list of all changes classified by their impact.