Sometimes Entity Framework can miss a concurrency check

Written by val. Posted in EntityFramework, Uncategorized

It wasn’t obvious to me that when an entity property is marked as an optimistic concurrency token, EF uses the original value of the property to perform the concurrency check. This behaviour is not explicitly mentioned on MSDN and does not interfere with the official concurrency handling example. However, it caused one obscure bug in my code.

My application is written in such a way that when a user invokes the “UpdateEntity” API method, the code loads the entity and then copies all of the passed values into it.
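The original snippet is not preserved, so here is a minimal sketch of the pattern, with a hypothetical MyDbContext, Product entity and ProductDto:

    public void UpdateEntity(ProductDto dto)
    {
        using (var context = new MyDbContext())
        {
            // the original values, including Version, are captured at this point
            var entity = context.Products.Find(dto.Id);

            entity.Name = dto.Name;
            entity.Price = dto.Price;
            entity.Version = dto.Version;   // <-- the highlighted line: this assignment only changes
                                            // the CURRENT value; the concurrency check still runs
                                            // against the ORIGINAL value loaded from the database

            context.SaveChanges();
        }
    }

Under these assumptions, the usual fix is to copy the incoming value into the entry's original values as well, for example context.Entry(entity).Property(e => e.Version).OriginalValue = dto.Version, so that the check runs against the version the client actually saw.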

Entity Framework ignores the Version value assigned on the highlighted line and performs the concurrency check against the Version value loaded from the database instead.

If you encounter the same problem, the tests below will hopefully save you some time.
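The original tests are not preserved; here is a minimal NUnit-style sketch of the kind of test that demonstrates the behaviour, assuming a Product entity with a concurrency-checked byte[] Version column (all names are hypothetical):

    [Test]
    public void SaveChanges_uses_the_original_version_value_for_the_concurrency_check()
    {
        using (var context = new MyDbContext())
        {
            var product = context.Products.First();

            // Assigning a stale value to the CURRENT property alone does not trigger a conflict;
            // the value EF puts into the WHERE clause is the ORIGINAL one, so we overwrite that.
            var staleVersion = new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 };
            context.Entry(product).Property(p => p.Version).OriginalValue = staleVersion;

            product.Name = "changed";

            Assert.Throws<DbUpdateConcurrencyException>(() => context.SaveChanges());
        }
    }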

 

Project structure for cross-platform Xamarin and MonoGame development

Written by val. Posted in Uncategorized, Xamarin

I have experimented a lot with project structures for cross-platform game development, and I want to share the one that works for me, to save you some trouble.

Here are rules that have helped me to organize my projects and source code:

  • For a shared source project use the following naming scheme:
    /[ProjectName]/[ProjectName].[PlatformName].csproj
  • For platform-specific source, create a separate Host project for every platform and put each one into its own directory:
    /[ProjectName].Host.[PlatformName]/[ProjectName].Host.[PlatformName].csproj
  • Put as much platform-specific code as possible into the Host project. Use the inversion of control pattern to inject it into the main project (see the sketch after this list).
  • Watch out for #if [PlatformName] blocks. This approach can easily get out of hand.
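As a minimal sketch of the inversion of control rule above (the interface, game and service names are hypothetical): the shared project depends only on an abstraction, and each Host project supplies the platform-specific implementation at startup.

    // Shared project: the game depends only on this abstraction.
    public interface IRatingService
    {
        void ShowRatingPrompt();
    }

    public class PicrossGame : Microsoft.Xna.Framework.Game
    {
        private readonly IRatingService _ratingService;

        public PicrossGame(IRatingService ratingService)
        {
            _ratingService = ratingService;
        }
    }

    // [ProjectName].Host.Android: the platform-specific implementation lives here
    // and is injected into the shared game class by the host's entry point.
    public class AndroidRatingService : IRatingService
    {
        public void ShowRatingPrompt()
        {
            // open the store page using Android APIs here
        }
    }

    // somewhere in the Android host's startup code:
    // var game = new PicrossGame(new AndroidRatingService());
    // game.Run();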

Here is a file structure of my current project (Picross Mania):
Cross Platform Project Files

Unfortunately, with this approach, it is tedious to synchronize source file links between platforms. Luckily, the MonoGame community has developed a very versatile tool named Protobuild to automate this task. This tool can create a snapshot of your current project, then detect changes and propagate them into other platform specific versions of the project.

For my needs it was overkill, and I ended up writing a simple set of PowerShell scripts that propagate new files from my master project to the platform-specific ones.

Happy coding!

Secure WCF Service with ASP.NET Identity

Written by val. Posted in Uncategorized

Microsoft has just released the new ASP.NET Identity framework to replace the ASP.NET Membership provider. Fortunately, ASP.NET Identity is built on the OWIN stack and can be configured to secure legacy WCF services.

To secure a WCF service hosted in IIS, you will need to configure Identity to run authentication as part of the IIS pipeline and to configure WCF to authorize through the HttpContext instance.

To configure Identity to run as part of the IIS pipeline, simply modify the application's Startup.cs file and add the app.UseStageMarker(PipelineStage.Authenticate) line after the authentication configuration.

Startup.cs
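The original file is not preserved; here is a minimal sketch assuming cookie-based Identity authentication (the namespace and the options are placeholders):

    using Microsoft.AspNet.Identity;
    using Microsoft.Owin;
    using Microsoft.Owin.Extensions;
    using Microsoft.Owin.Security.Cookies;
    using Owin;

    [assembly: OwinStartup(typeof(MyApp.Startup))]

    namespace MyApp
    {
        public class Startup
        {
            public void Configuration(IAppBuilder app)
            {
                // regular ASP.NET Identity authentication configuration
                app.UseCookieAuthentication(new CookieAuthenticationOptions
                {
                    AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
                    LoginPath = new PathString("/Account/Login")
                });

                // run the authentication middleware during the IIS Authenticate stage,
                // so HttpContext.Current.User is populated before WCF takes over
                app.UseStageMarker(PipelineStage.Authenticate);
            }
        }
    }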

To configure WCF to check for authorization data in the HttpContext object, create a custom authorization policy and enable it in your web.config file.

HttpContextAuthorizationPolicy.cs
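Again, the original file is not preserved; a minimal sketch of such a policy, which simply hands the principal established by OWIN over to WCF, could look like this:

    using System;
    using System.Collections.Generic;
    using System.IdentityModel.Claims;
    using System.IdentityModel.Policy;
    using System.Security.Principal;
    using System.Web;

    public class HttpContextAuthorizationPolicy : IAuthorizationPolicy
    {
        private readonly string _id = Guid.NewGuid().ToString();

        public string Id { get { return _id; } }

        public ClaimSet Issuer { get { return ClaimSet.System; } }

        public bool Evaluate(EvaluationContext evaluationContext, ref object state)
        {
            // the principal set by the OWIN/Identity middleware during the IIS Authenticate stage
            IPrincipal principal = HttpContext.Current != null ? HttpContext.Current.User : null;
            if (principal == null)
                return false;

            // hand the principal over to WCF so [PrincipalPermission] demands can evaluate it
            evaluationContext.Properties["Principal"] = principal;
            evaluationContext.Properties["Identities"] = new List<IIdentity> { principal.Identity };
            return true;
        }
    }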
 Web.config
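And the corresponding web.config fragment (the assembly and namespace names are placeholders); note that ASP.NET compatibility mode is needed so that HttpContext.Current is available inside WCF:

    <system.serviceModel>
      <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
      <behaviors>
        <serviceBehaviors>
          <behavior>
            <serviceAuthorization principalPermissionMode="Custom">
              <authorizationPolicies>
                <!-- fully qualified type name of the policy above, plus its assembly -->
                <add policyType="MyApp.HttpContextAuthorizationPolicy, MyApp" />
              </authorizationPolicies>
            </serviceAuthorization>
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>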

After that, every method marked with the [PrincipalPermission(SecurityAction.Demand)] attribute will be authorized through the OWIN middleware.

ISecureService.cs
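A minimal sketch of a service using the attribute (the names are illustrative):

    using System.Security.Permissions;
    using System.ServiceModel;
    using System.ServiceModel.Activation;

    [ServiceContract]
    public interface ISecureService
    {
        [OperationContract]
        string GetSecretData();
    }

    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
    public class SecureService : ISecureService
    {
        // demands an authenticated principal; add Role = "Admin" to demand a specific role
        [PrincipalPermission(SecurityAction.Demand)]
        public string GetSecretData()
        {
            return "visible to authenticated users only";
        }
    }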

 

Have a great coding day!

Scala for the C# guy

Written by val. Posted in Uncategorized

For fun, and to exercise my grey matter, I’ve decided to learn Scala. Being a fairly confident C# guy with solid experience in imperative programming and the .NET stack, I’ll have a lot to learn.

My first goal is to implement a B+ tree in Scala.

First, let’s start with environment configuration. It seems that only sbt is required, and I was really impressed by how seamless it is to control tool versions with the sbt build chain.

To get the necessary versions of sbt and Scala, it was enough to set scalaVersion in [root]/build.sbt and sbt.version in [root]/project/build.properties. That is it; sbt took care of downloading all the necessary binaries after that.

The next thing is to set up ScalaTest (the NUnit of Scala). Again, it only takes a single libraryDependencies line in [root]/build.sbt.
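The original snippets are not preserved; they look roughly like this (the version numbers are just placeholders):

    # [root]/project/build.properties
    sbt.version=0.13.9

    // [root]/build.sbt
    scalaVersion := "2.11.7"
    libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.6" % "test"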

As before, sbt takes care of downloading and installing the necessary libraries.

My first take on a B+ tree in Scala is located here: link.

 The good parts:

  • it compiles
  • it works

The bad parts:

  • the code is imperative
  • the code does not use any Scala specific features
  • the state is mutable

However, as long as it is covered by tests, it should be a fun and easy task to refactor it into a purely functional implementation and make it benefit from the language features.

 

Kick-ass database migration setup with RoundhousE and Sublime Text 2

Written by val. Posted in Uncategorized

Keeping your database under source control is a hard task. There is already a great series of posts by Jeff Atwood at Coding Horror. While those posts cover the methodology of database source control, there are no hints on which tools can be used for it.
I would like to share my experience of using the RoundhousE migration framework in conjunction with the Sublime Text 2 editor to manage scripts for a SQL Server database.
This set of tools works well for me:
  • RoundhousE is a database migration tool. The only thing it does is run the provided SQL scripts in a specific order. There is a lot more to it, of course, but all other features exist to support this main goal.
  • Sublime Text 2 is a text editor which supports a file tree and can be integrated with a build system.
  • Powerup is a DB schema export tool for RoundhousE.
The source code for all the database entities is stored in one separate folder (say, AdventureWorks). To deploy all the sources to the database, it is enough to invoke rh.exe (the RoundhousE executable) in the target folder. To speed things up, we can trigger deployment with the help of the Sublime build system. To make Sublime invoke deployment on Ctrl-B, the .sublime-project file has to be configured accordingly.
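The original snippet is not preserved; a minimal sketch of the relevant part of the .sublime-project file (the folder name follows the example above, the rest is arbitrary) could look like this:

    {
        "folders":
        [
            { "path": "AdventureWorks" }
        ],
        "build_systems":
        [
            {
                "name": "Deploy database",
                "cmd": ["rh.exe"],
                "working_dir": "${project_path}/AdventureWorks"
            }
        ]
    }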
As I am dealing with a legacy database, it is a bit harder to start using RoundhousE because the stored procedure scripts are missing. That is where the Powerup tool comes in very handy: running it against the database produces a set of scripts runnable by RoundhousE.
When executed, Powerup produces the following structure:
Due to unique database settings, I had a few issues with Powerup. Some views were created with schemabinding enabled, and Powerup didn’t export some database-specific settings (it was ANSI_NULLS in my case). So watch out for that.
I was aiming for frictionless modification of the stored procedures and database up scripts, with easy navigation from procedure to procedure and automated deployment of the changes to the development SQL Server. It turned out well:
  • Navigation between stored procedures is convenient using Ctrl-P shortcut
  • Changes can be quickly deployed with Ctrl-B shortcut

Deployment automation with Octopus deploy

Written by val. Posted in Uncategorized

I’ve just implemented a great deployment automation solution using the Octopus Deploy and TeamCity combination, and I would like to share the details of the implementation.

Overview

Benefits

  • QA can create and deploy a release from any version of the compiled sources
  • It is easy to assemble different packages into one releasable item
  • Built-in functionality to upload packages to remote machines, run deployment scripts, and report the results to the IT team

Advantages over TeamCity+NAnt combination

  • Octopus has a built-in mechanism to upload compiled binaries to the target servers
  • Visibility of the current versions across environments
  • Enforces a culture of packaging solutions into deployable packages
  • Easy to scale across multiple projects

Target QA Environment

  • 2 web servers, each of them running the Contoso.Web web site
  • A Backend1 server, which runs the Windows services named “Contoso.Backend1” and “Contoso.Backend2”
  • A Backend2 server, which runs the Windows services “Contoso.Backend2” and “Contoso.Backend3”
  • A database server

Contoso.App QA Environment

How it works

It is very easy to figure out how Octopus, TeamCity and the VCS work together just by following how a code change propagates to the QA environment. Here is an example of a commit propagating to the QA environment:
  1. Developer commits revision #101 to VCS
  2. TeamCity picks it up, compiles it and creates 5 packages using NuGet:
    1. Contoso.Web.101.nupkg
    2. Contoso.Backend.1.101.nupkg
    3. Contoso.Backend.2.101.nupkg
    4. Contoso.Backend.3.101.nupkg
    5. Contoso.DB.Update.101.nupkg
  3. Those 5 packages are stored in a private NuGet feed hosted by TeamCity
  4. The IT team creates a release in Octopus through the web console
  5. The IT team schedules the release for deployment to the selected environment (DEV or QA)
  6. Octopus distributes the packages to the machines according to their machine roles
  7. Each Octopus Tentacle unpacks its package, installs it into the target folder and executes custom scripts

Nuspec file for Octopus

Each deployable project (service, web site, DB scripts) is packaged using a .nuspec file. An example of a Windows service package is below. The .nuspec file specifies which binaries to take and which deployment script to use for this project.
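The original file is not preserved; a minimal sketch of such a .nuspec (the id, paths and script name are illustrative; Deploy.ps1 at the package root is the convention the Octopus Tentacle picks up) could look like this:

    <?xml version="1.0"?>
    <package>
      <metadata>
        <id>Contoso.Backend.1</id>
        <!-- the $version$ token is replaced by the -Version argument at pack time -->
        <version>$version$</version>
        <authors>Contoso</authors>
        <description>Contoso backend Windows service</description>
      </metadata>
      <files>
        <!-- service binaries -->
        <file src="bin\Release\*.*" target="" />
        <!-- post deployment script -->
        <file src="Deploy.ps1" target="" />
      </files>
    </package>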

Post deployment script

After the package has been uploaded and unpacked on a target machine, some configuration is usually required. In my case, that means a configuration file switch and a Windows service restart. Here is an example of a typical Windows service post deployment script. The script itself is a modified version of the stock Octopus script; the only addition is a switchConfig command. The switchConfig command is responsible for search-and-replacing every configSource property in a given file with the corresponding file for the current environment. For example, if we are deploying to QA it would replace <appSettings configSource="config/appSettings.dev.config"/> with <appSettings configSource="config/appSettings.qa.config"/>. While Octopus has built-in functionality for configuration management, I still prefer to keep all configuration settings under VCS.

Post deployment database script

The solution for database management is simple and effective. There is a plethora of tools and methods which can be used for database migrations, but I prefer to stick to a set of PowerShell scripts and plain .sql files. This setup allows more flexibility if something goes wrong. In this setup, the project contains a set of forward-only .sql migration scripts. Those scripts are packaged into a separate NuGet package and applied during the post deployment step.

Build file

I am using NAnt for scripting my TeamCity builds. Along with the typical MSBuild and NUnit code, there is a special target for packaging .nuspec files. The code below takes every .nuspec file it can find and assembles a package out of it. The latest Octopus build has a TeamCity integration plugin for building packages, but I still prefer to keep this as part of the NAnt build file.
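The original target is not preserved; a minimal NAnt sketch of that idea (the property names and paths are assumptions) could look like this:

    <target name="package">
      <foreach item="File" property="nuspec.file">
        <in>
          <items>
            <include name="**/*.nuspec" />
          </items>
        </in>
        <do>
          <!-- assemble a package next to the .nuspec and stamp it with the build number -->
          <exec program="tools/nuget/nuget.exe">
            <arg value="pack" />
            <arg value="${nuspec.file}" />
            <arg line="-Version ${build.number}" />
            <arg line="-OutputDirectory ${output.dir}" />
          </exec>
        </do>
      </foreach>
    </target>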

TeamCity configuration

In this deployment setup, TeamCity is responsible for hosting the NuGet feed that is consumed by Octopus. TeamCity has built-in support for NuGet feeds; the feature is implemented on top of TeamCity's artifact functionality. It is enough to specify which .nupkg packages to publish from a build to make it work.

Projects -- TeamCity

Octopus configuration

Projects and packages

There is a separate Octopus project for the Contoso application. The project consists of 5 packages (Web, Backend1, Backend2, Backend3, DBUpdate). Those packages are pointers to the NuGet feed in TeamCity. To determine which machine a package should be installed on, the package is tied to a machine role.
Steps

Projects and environments

An environment is required to group target machines together. For the Contoso project I have 3 environments defined.

Each environment has a set of machines that match the roles of the packages in the project.
Dashboard

Environments and machine roles

Each environment consists of a set of machines with roles assigned to them. The machine role is what selects which packages should be installed on a machine.

Environments

The concept of roles is very handy when you want to use a different number of servers for different environments. For example, there is only one backend server in the DEV environment, but there are 2 backend servers in the QA environment.

Deployment

For each deployment run, Octopus provides detailed console output from each Tentacle along with a nice overview of the packages.

DeploymentProcess

Results

By switching deployment automation to Octopus, I’ve mitigated the big-bang integration anti-pattern. Currently a QA deployment takes less than 10 minutes and can be done by QA team members using only a web browser. What I hadn’t foreseen is a nice side effect of good deployment infrastructure: the scalability of the deployment process greatly lowers the amount of effort required to roll out a brand new project to a production environment.

Commute Toronto

Written by val. Posted in Uncategorized

Recently I’ve been up to Windows Phone programming. Yes, I built a TTC app :)

That was an unexpectedly good start: reviews are great and within the first month there are already more than 200 users in Toronto. Although my goal was just to get a feel for the technology and produce a decent “Hello, world” thing, I’m glad it turned out to be practical.

Check it out on Windows Marketplace

How to handle ASP.NET Update Panel in WatiN

Written by val. Posted in Uncategorized

Handling ASP.NET update panels with WatiN is a pretty annoying thing: Click and WaitForLoad usually end in a timeout.

I have found a “just fine” solution that works for me.
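The original snippet is not preserved; a sketch of one way to do it is to click without WatiN's built-in page-load wait and then poll the MS AJAX PageRequestManager until the partial postback finishes (the helper name, timeout and polling interval are arbitrary):

    using System;
    using System.Threading;
    using WatiN.Core;

    public static class UpdatePanelExtensions
    {
        // Clicks an element and waits until the ASP.NET partial postback has finished.
        public static void ClickAndWaitForUpdatePanel(this Element element, Browser browser, int timeoutSeconds = 30)
        {
            // avoid WatiN's built-in page-load wait, which is what usually times out
            element.ClickNoWait();

            var deadline = DateTime.Now.AddSeconds(timeoutSeconds);
            while (DateTime.Now < deadline)
            {
                // ask the MS AJAX PageRequestManager whether an async postback is still running
                var inPostBack = browser.Eval(
                    "Sys.WebForms.PageRequestManager.getInstance().get_isInAsyncPostBack();");

                if (string.Equals(inPostBack, "false", StringComparison.OrdinalIgnoreCase))
                    return;

                Thread.Sleep(100);
            }

            throw new TimeoutException("The UpdatePanel did not finish its partial postback in time.");
        }
    }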

Usage
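For example (the URL and element id are placeholders):

    using (var browser = new IE("http://localhost/Orders.aspx"))
    {
        browser.Button(Find.ById("btnSave")).ClickAndWaitForUpdatePanel(browser);
    }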

Call stored procedure with output parameter using Builder pattern

Written by val. Posted in Uncategorized

My custom DAL requirements are:

  1. Call stored procedures for given business objects (BOs)
  2. Handle output parameters (updating the BO fields)
  3. Throw exceptions if an error code is returned from a stored procedure

Coding this functionality using raw ADO.NET is a bit annoying: the majority of the code consists of communication with the framework interfaces and does not explicitly express the developer’s intentions.
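The original example is not preserved; here is a sketch of the kind of code this refers to, using a hypothetical update_row procedure, the parameter names described further below, and a hypothetical Row business object:

    public void Update(Row row)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("update_row", connection))
        {
            command.CommandType = CommandType.StoredProcedure;

            command.Parameters.AddWithValue("@id", row.Id);
            command.Parameters.AddWithValue("@last_update_date", row.LastUpdateDate);
            command.Parameters.AddWithValue("@last_updated_by", row.LastUpdatedBy);

            var lastUpdateDateOut = command.Parameters.Add("@last_update_date_out", SqlDbType.DateTime);
            lastUpdateDateOut.Direction = ParameterDirection.Output;

            var errorCode = command.Parameters.Add("@err_code_out", SqlDbType.Int);
            errorCode.Direction = ParameterDirection.Output;

            connection.Open();
            command.ExecuteNonQuery();

            // the intent (error handling, output mapping) is buried in the plumbing
            if ((int)errorCode.Value == -1)
                throw new DBConcurrencyException("The row was modified by another user.");

            row.LastUpdateDate = (DateTime)lastUpdateDateOut.Value;
        }
    }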

Such annoying code can be refactored nicely using the Builder pattern. The target interface for me looks something like the call below.
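The original snippet is not preserved; reconstructed from the description that follows, with assumed builder and method names, the Update call might read roughly like this:

    StoredProcedureBuilder
        .Create("update_row")
        .ThrowConcurrencyExceptionWhen("err_code_out", -1)
        .WithParameter("id", row.Id)
        .WithParameter("last_update_date", row.LastUpdateDate)
        .WithParameter("last_updated_by", row.LastUpdatedBy)
        .WithOutputParameter<DateTime>("last_update_date_out", value => row.LastUpdateDate = value)
        .Execute(connection);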

This interface is versatile enough to express the required stored procedure call scenarios and explicit enough to be readable by a human. Basically, the Update call does the following:

  1. If the stored procedure returns -1 in the “err_code_out” parameter, a concurrency exception will be thrown.
  2. The parameters “id”, “last_update_date” and “last_updated_by” will be added to the stored procedure call.
  3. The parameter “last_update_date_out” will be treated as an output parameter of type DateTime and mapped to Row.LastUpdateDate.

IMPLEMENTATION

The idea of the Builder pattern is to hide complex construction logic behind a clear and explicit API. The builder’s responsibility is to collect the consumer’s “wishes” and return something the consumer can execute to make those “wishes” come true.

Here is the builder implementation:

After the consumer calls Create on the builder, it returns a command executor. The command executor is already configured to execute the desired stored procedure with its parameters and to map the output parameters after execution.

I have cheated a bit with ErrorCodeHandler: it is a builder and an executor at the same time (say hello to SRP), but it doesn’t cause much damage for now. The execution interface is hidden behind IErrorCodeHandlerBuilder.

The output parameter mapper is a bit tricky. It introduces two classes, Map and Map<T>, to get around the C# generic typing system and make polymorphism work.
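A minimal sketch of that trick (the member names are assumptions): the non-generic base class lets the command executor keep differently typed mappers in one collection, while the generic subclass keeps the actual assignment strongly typed.

    using System;

    // Non-generic base: the command executor can hold a List<Map> regardless of the value type.
    public abstract class Map
    {
        public string ParameterName { get; protected set; }

        public abstract void Apply(object parameterValue);
    }

    // Generic subclass: converts the raw output parameter value and assigns it to the business object.
    public class Map<T> : Map
    {
        private readonly Action<T> _assign;

        public Map(string parameterName, Action<T> assign)
        {
            ParameterName = parameterName;
            _assign = assign;
        }

        public override void Apply(object parameterValue)
        {
            _assign((T)Convert.ChangeType(parameterValue, typeof(T)));
        }
    }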

The command executor class, when called, ties all of that together: it executes the stored procedure, triggers the exception throwing and performs the output parameter mapping.

The idea of using the Builder pattern was shared by my colleague. Thanks, friend :)

What I discovered when I was writing a .NET connector for Redis

Written by val. Posted in Uncategorized

Redis is an in-memory key-value store. A connector is a protocol driver for storing and retrieving data from Redis. Recently I developed one, and I learned the following simple lessons from it:
SRP PRINCIPLE MAKES CODE BETTER PREPARED FOR CHANGES
During the development of the connector I stuck to the SRP and TDD principles. Later I realized that SRP really helps when you face unexpected code change requirements. In short, SRP says that you should have only one reason to change a class.
.NET CODE IS AS FAST AS NATIVE IMPLEMENTATION
Surprisingly, the native ANSI C implementation of the Redis benchmark is only about as fast as the .NET implementation of the connector on the same benchmark, even with all those patterns and enterprise stuff.
.NET BUFFERED STREAM IS A GOOD ABSTRACTION
The connector is written in a way that avoids copying the arguments (byte arrays) into one big array before sending them to Redis. Instead, it keeps references to all the arguments and writes them out in sequence. That caused serious performance problems, which I avoided just by wrapping the socket’s write stream with a buffered stream. It gave me a roughly 2x performance boost (about 2000 requests per second versus approximately 5000).
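A minimal sketch of the idea (the connection details and buffer size are arbitrary):

    using System.Collections.Generic;
    using System.IO;
    using System.Net.Sockets;

    public class RedisConnection
    {
        private readonly Stream _stream;

        public RedisConnection(string host, int port)
        {
            var client = new TcpClient(host, port);
            // Wrapping the raw NetworkStream in a BufferedStream coalesces the many small
            // argument writes below into a few larger socket writes.
            _stream = new BufferedStream(client.GetStream(), 16 * 1024);
        }

        public void Send(IEnumerable<byte[]> arguments)
        {
            foreach (var argument in arguments)
                _stream.Write(argument, 0, argument.Length);

            _stream.Flush();   // the coalesced bytes hit the socket here
        }
    }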
COMMAND PIPELINING IS A GOOD IDEA
Command pipelining is a feature where you push all of your requests through a single socket without waiting for each reply. For example, if you send three GETs in a row for the “foo”, “bar” and “baz” keys, you will receive the command results in the same order. It gave me good results over a 100 Mbit network.
MULTITHREADED CODE IS VERY HARD TO DEBUG
During the implementation of pipelining I ran into a race condition: the old pipelining algorithm allowed a race while parsing the response from Redis. Microsoft CHESS helped me a lot to test and localize this problem. It is a special tool that runs test code under worst-case thread-switching scenarios.
FREE PROFILERS EXIST (NPROF AND SLIMTUNE)
Earlier, I had no idea how to profile an application at low cost (the JetBrains and ANTS profilers are expensive). Recently I found NProf and SlimTune. They helped me a lot in finding bottlenecks.