Blog

Trace Listener for Elmah (ASP / MVC Exception Logger)

Elmah is an awesome tool: you can make it log most of your exceptions in your ASP.NET app just by installing the NuGet package (and with a little work you can change "most" to "all").

Install-Package elmah

Once you have it installed, any unhandled exception will be logged by Elmah and will show in its web interface. Just navigate to <yoursite>/elmah.axd and you will see your errors.

If you have ever used log4net you have probably found its trace listener very useful; you just need code like:

 
System.Diagnostics.Trace.TraceInformation("Hello, this is some INFO");

 

And this will make its way to your log4net log. To be honest, I always use Trace: if I use the code in a console app I add a console trace listener, and if I run the code from MSTest the logs end up in the MSTest results. At any point I can plug a log4net listener in and change the logging destination. For me that just feels right.
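
As a quick illustration of that point (a minimal sketch, nothing Elmah-specific yet), in a console app you can just add the built-in ConsoleTraceListener and the same Trace calls appear on the console:

// Sketch only: wire up a console listener so existing Trace calls show on stdout.
System.Diagnostics.Trace.Listeners.Add(new System.Diagnostics.ConsoleTraceListener());
System.Diagnostics.Trace.TraceInformation("Hello, this is some INFO");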

I could not find a TraceListener for Elmah, so I thought I would code one up.

Firstly, Elmah logging requires an Exception instance, and the exception's class name is used in the log. So a typical log looks like:

log display showing category as class name

In order to show a type that indicates the log level, e.g. Warning, Information or Error, we create a class to represent each of these:

 
internal class TraceInformation : Exception
{
  internal TraceInformation(string message) : base(message){}
}
internal class TraceError: Exception
{
  internal TraceError(string message) : base(message) { }
}
internal class TraceWarning : Exception
{
  internal TraceWarning(string message) : base(message) { }
}
internal class TraceWrite : Exception
{
  internal TraceWrite(string message) : base(message) { }
} 

As you can see, we also have a TraceWrite class; this is for simple Trace.Write calls.

Now we need to create a listener; this is simple and the code looks like this:

    internal class ElmahListener : System.Diagnostics.TraceListener
    {
        public override void TraceEvent(TraceEventCache eventCache, string source, TraceEventType eventType, int id, string format, params object[] args)
        {
            TraceEvent(eventCache, source, eventType, id, string.Format(format, args));
        }
        public override void TraceEvent(TraceEventCache eventCache, string source, TraceEventType eventType, int id, string message)
        {
            Exception exception;
            switch (eventType)
            {
                case TraceEventType.Information:
                    exception = new TraceInformation(message);
                    break;
                case TraceEventType.Error:
                    exception = new TraceError(message);
                    break;
                case TraceEventType.Warning:
                    exception = new TraceWarning(message);
                    break;
                default:
                    exception = new TraceWrite(message);
                    break;
            }
            if (HttpContext.Current == null || HttpContext.Current.Session == null)
            {
                ErrorLog.GetDefault(null).Log(new Error(exception));
            }
            else
            {
                ErrorSignal.FromCurrentContext().Raise(exception);
            }
        }
        public override void TraceTransfer(TraceEventCache eventCache, string source, int id, string message, Guid relatedActivityId)
        {
            base.TraceTransfer(eventCache, source, id, message, relatedActivityId);
        }
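        // Write/WriteLine are left empty, so plain Trace.Write output is not sent to Elmah; only TraceEvent calls are.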
        public override void Write(string message)
        {
        }
        public override void WriteLine(string message)
        {
        }
    }

As mentioned earlier, we need to create an exception to hold our message as this is just how Elmah works. You need to pass it an exception and it uses the type name for the Type column in the website. If we just used Exception then the type logged would be Exception for all our trace logs.

As we are not throwing or catching these exceptions we don't take any performance hit; we are just creating them to hold the trace message and type.

You will notice that we check HttpContext.Current.Session; this will be null if we have no request object on the context. We sometimes have a context but no request (for example, in Application_Start we have a Context but no Request), and that will cause Elmah to error.
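
For completeness, the listener still needs to be registered with System.Diagnostics. The registration isn't shown in the original code, and it could equally be done in the <system.diagnostics> section of web.config; a minimal sketch is to add it programmatically at application start:

protected void Application_Start()
{
    // Sketch only: hook the Elmah listener up so Trace/TraceEvent calls end up in the Elmah log.
    System.Diagnostics.Trace.Listeners.Add(new ElmahListener());
}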

An example of this in action:

LogExample

Source code: https://github.com/TrueNorthIT/Elmah/tree/master/TrueNorth.Elmah

Adding a custom Http header to BizTalk WCF Http messages

We recently had a requirement to add a custom http header to messages being sent on a BizTalk basicHttpBinding send port.

If you want to add SOAP headers, this is supported by the BizTalk adapters. You simply set the WCF.OutboundCustomHeaders context property on your outgoing message and they get added in.

I was expecting a similar context property to be available for the Http headers, WCF.HttpHeaders being a good candidate, or maybe Http.UserHttpHeaders.

After much testing and frustration, neither of those appeared to work.

Implementing an IClientMessageInspector seemed to be the recommended injection point for this sort of thing:

https://social.msdn.microsoft.com/Forums/en-US/96f24b06-76c7-4604-b946-8f3aa96f3b17/how-to-add-custom-http-header-using-wcfbasichttp-adpater-from-custom-send-pipeline?forum=biztalkgeneral

However, I needed to set the HTTP header to a session cookie that came from an earlier message call, and this was only available in a context property of the message, not in the message itself. I didn't think the BizTalk context properties would be available to the IClientMessageInspector, but after looking at this blog on protocol transition:

http://blogs.msdn.com/b/paolos/archive/2009/01/20/biztalk-server-and-protocol-transition.aspx

I found that BizTalk does helpfully write all the context properties to the WCF message properties.

So, let me summarise how to do this.

First, we need to implement an IClientMessageInspector to add in any headers defined in the WCF.HttpHeaders context property:

    public class AddHttpHeaderInspector : IClientMessageInspector
    {
        public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
        {
        }

        public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, IClientChannel channel)
        {
            //Adds any headers in the WCF.HttpHeaders context property to the HTTP Headers
            //expects headers in the form "Header1: Value1, Header2: Value2"

            const string httpHeadersKey = "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties#HttpHeaders";
            if (request.Properties.ContainsKey(httpHeadersKey))
            {
				//LINQ rocks
                var headers = ((string) request.Properties[httpHeadersKey])
                    .Split(',')
                    .Select(str => str.Split(':').Select(str2 => str2.Trim()).ToArray())
                    .Where(header => header.Length == 2)
                    .Select(header => Tuple.Create(header[0], header[1]));

                HttpRequestMessageProperty httpRequestMessage;
                object httpRequestMessageObject;
                if (request.Properties.TryGetValue(HttpRequestMessageProperty.Name, out httpRequestMessageObject))
                {
                    httpRequestMessage = httpRequestMessageObject as HttpRequestMessageProperty;
                }
                else
                {
                    httpRequestMessage = new HttpRequestMessageProperty();
                    request.Properties.Add(HttpRequestMessageProperty.Name, httpRequestMessage);
                }

                foreach (var header in headers)
                {
                    httpRequestMessage.Headers[header.Item1] = header.Item2;
                }
            }

            return null;
        }
    }

Then we need an IEndpointBehaviour and BehaviorExtensionElement implementation to allow us to hook this inspector into the pipeline and configure it:

    public class AddHttpHeaderBehavior : IEndpointBehavior
    {

        public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }

        public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
        {
            AddHttpHeaderInspector headerInspector = new AddHttpHeaderInspector();
            clientRuntime.MessageInspectors.Add(headerInspector);
        }

        public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }

        public void Validate(ServiceEndpoint endpoint) { }
    }

    public class AddHttpHeaderBehaviorExtensionElement : BehaviorExtensionElement
    {
        protected override object CreateBehavior()
        {
            return new AddHttpHeaderBehavior();
        }

        public override Type BehaviorType
        {
            get { return typeof(AddHttpHeaderBehavior); }
        }
    }

Now, strong-name the assembly, GAC it and register it in machine.config (C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\machine.config for 32-bit hosts, and the Framework64 version for 64-bit hosts):

	<behaviorExtensions>
                <!--snip-->
		<add name="biztalkAddHttpHeader" type="TrueNorth.BizTalk.AddHttpHeaderBehaviorExtensionElement, TrueNorth.BizTalk.AddHttpHeader, Version=1.0.0.0, Culture=neutral, PublicKeyToken=e44caa3cea47e2cf"/>
	</behaviorExtensions>

And lastly, restart BizTalk Admin Console, change your port to a WCF-Custom one and on the Behavior tab you should be able to add the new behavior:

bizTalkAddHttpHeader

If you want to add any configuration elements to this screen, just add some [ConfigurationProperty] properties to the BehaviorExtensionElement class above.
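
As a rough sketch (the "defaultHeaders" name here is purely illustrative and not part of the code above), such a property might look like this; the value can then be read in CreateBehavior and passed through to the behavior and inspector:

// Hypothetical example: expose a "defaultHeaders" setting on the behavior extension element.
[ConfigurationProperty("defaultHeaders", DefaultValue = "")]
public string DefaultHeaders
{
    get { return (string)this["defaultHeaders"]; }
    set { this["defaultHeaders"] = value; }
}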

MVC Authentication Module (Part 1)

Here at TrueNorth, like most developers, we make use of free open source components. Even if you intend something to be reusable when you start creating it, you cannot just open source it and expect it to be useful to someone else. Just being on GitHub or CodePlex does not necessarily mean people can reuse it without much help.

Open sourcing modules and components does take time.

We have written an authentication module for MVC sites that allows you to store user credentials in CRM. I have seen this done before in many projects, but not in a reusable manner and not CRM centric.

CRM centric? What does that mean?

Well, all data is stored in CRM: user details, passwords and security roles.
When the user requests a password reset, a “tn_passwordResetRequest” entity record is created (this name might be shortened; no entity name will be harmed during this process). This record contains the reset URL. This allows you to… well, do whatever you want! It’s CRM, so you can easily trigger a workflow. We expect the main usage would be to send an email to the user in question. (This reset URL will only work ONCE, and if the user record is changed the URL will no longer be valid.)

Passwords are hashed in CRM. If you wish to change a user’s password you don’t need to jump through hoops: just enter a new password in the PASSWORD field within CRM and, don’t worry, it will be hashed and stored securely. Alternatively, the user can reset their own password via the MVC application; in this case it is hashed on the web server and written directly to CRM.

We hash with a salt and a configurable number of iterations, following best practices for password storage.
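
We haven't published the hashing code itself yet, but the idea is the standard salted PBKDF2 pattern; here is a minimal sketch using .NET's Rfc2898DeriveBytes (the sizes and storage format are illustrative, not the module's actual values):

public static string HashPassword(string password, int iterations)
{
    // Sketch only: random salt + PBKDF2 with a configurable iteration count.
    var salt = new byte[16];
    using (var rng = new System.Security.Cryptography.RNGCryptoServiceProvider())
    {
        rng.GetBytes(salt);
    }

    using (var pbkdf2 = new System.Security.Cryptography.Rfc2898DeriveBytes(password, salt, iterations))
    {
        var hash = pbkdf2.GetBytes(32);
        // Store the iteration count, salt and hash together so the password can be verified later.
        return iterations + ":" + Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
    }
}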

Hashed Password

We don’t do anything clever to make all this happen, really we don’t… the only clever thing we have done is build this on top of the OWIN MVC authentication standards.

What’s next to make this open source ready? We intend to document it, package up the CRM bits into a free solution for the marketplace, and add the code to GitHub or CodePlex.

Granular named item locking in C#

A couple of times recently we have had a requirement to lock on a named item. We have not required a distributed lock (either between app domains or between servers), simply an in memory lock on a specific item.

There is no standard way to achieve this in .NET, and getting the details correct is surprisingly difficult (as with all multi-threaded code), so I will share the two versions we came up with.

Version 1 uses a standard Dictionary of lock objects. This dictionary stores the lock object being used for the key in question, as well as a reference count, i.e. the number of threads attempting to take the lock. This is important: without storing a count of references you cannot ensure the lock object is removed cleanly when the last reference is released. Access to the dictionary is controlled by a standard global lock object, but this should see very low contention as it is only taken whilst the dictionary is updated. If the body of work done inside the lock is very small, then this is probably not an appropriate solution and techniques such as lock striping should be considered.

Firstly some shared code between the two versions:

    public interface ILock<T>
    {
        IDisposable Enter(T id);
    }

    public class ActionDisposable : IDisposable
    {
        private readonly Action action;

        public ActionDisposable(Action action)
        {
            if (action == null)
            {
                throw new ArgumentNullException("action");
            }

            this.action = action;
        }

        public void Dispose()
        {
            action();
        }
    }

Version 1:


    public class NamedItemLock<T> : ILock<T>
    {

        private readonly Dictionary<T, Tuple<object, int>> locks = new Dictionary<T, Tuple<object, int>>();
        private readonly object globalLock = new object();

        public IDisposable Enter(T id)
        {
            object lockHandle;

            lock (globalLock)
            {
                if (!locks.ContainsKey(id))
                {
                    locks[id] = Tuple.Create(new object(),1);
                }
                else
                {
                    var lockCount = locks[id];
                    locks[id] = Tuple.Create(lockCount.Item1, lockCount.Item2 + 1);
                }
                lockHandle = locks[id].Item1;
            }

            //Any exceptions dealing with the dictionary up to this point are fine as the lock has not been taken
            //we need to be more careful after this point

            bool lockTaken = false;
            try
            {
                //Monitor.Enter can throw an exception AND take the lock
                //we need to catch this and exit the lock, making sure we rethrow the exception as well
                Monitor.Enter(lockHandle, ref lockTaken);
            }
            catch
            {
                if (lockTaken)
                {
                    Monitor.Exit(lockHandle);
                }
                throw;
            }

            try
            {
                //don't think this call should ever fail but lets make sure we release the lock if it has
                return new ActionDisposable(() => exit(id, lockHandle));
            }
            catch
            {
                Monitor.Exit(lockHandle);
                throw;
            }
        }

        private void exit(T id, object lockHandle)
        {
            //We need to be careful here that we actually release the lock
            //since the dictionary might blow up, we actually pass the lockHandle to this method as well
            //any exception occuring before we have managed to update the dictionary
            //is caught and the lock released anyway. 
            //The worst case scenario is we will end up with the dictionary saying there are outstanding locks for a certain key
            //Whereas there are actually none
            //This won't cause a deadlock, but may cause a minor memory leak

            lock (globalLock)
            {
                Tuple<object,int> lockCount = default(Tuple<object,int>);
                try
                {
                    lockCount = locks[id];
                    if (lockCount.Item2 == 1)
                    {
                        locks.Remove(id);
                    }
                    else
                    {
                        locks[id] = Tuple.Create(lockCount.Item1, lockCount.Item2 - 1);
                    }
                }
                catch
                {
                    Monitor.Exit(lockHandle);
                    throw;
                }
                Monitor.Exit(lockHandle);
            }
        }
    }

The second version (courtesy of Steve) uses the ConcurrentDictionary class and simply spins until we successfully add the new item; anything else trying to take the lock will spin until this thread releases it.

    public class NamedItemLockSpin<T> : ILock<T>
    {

        private readonly ConcurrentDictionary<T, object> locks = new ConcurrentDictionary<T, object>();

        private readonly int spinWait;

        public NamedItemLockSpin(int spinWait)
        {
            this.spinWait = spinWait;
        }

        public IDisposable Enter(T id)
        {
            while(!locks.TryAdd(id, new object()))
            {
                Thread.SpinWait(spinWait);
            }

            return new ActionDisposable(() => exit(id));
        }

        private void exit(T id)
        {
            object obj;
            locks.TryRemove(id, out obj);
        }
    }

I expected the spin lock version to be slightly slower when the lock needs to be held for a long time (i.e. high contention and long waits); however, in testing there is not much between the two.

Usage is fairly straightforward:

            var namedLocks = new NamedItemLock<int>();

            using (var sync = namedLocks.Enter(10))
            {
                //do something for item 10
            }

Unit Testing BizTalk Maps – External functoids

We have been busy with BizTalk 2013 recently and, like all good programmers, we like to test, test, test!

BizTalk 2010 made this quite a bit easier as you can enable unit testing on your schemas and maps. However, we hit a problem with the supplied TestableMapBase class whilst testing maps with external function calls. I will run through the process from start to finish to demonstrate the issue and the fix.

We have a very simple schema and map setup:

bizTalkBlog1

For which we enable unit testing in the Deployment tab of the Project properties:

biztalkBlog2

Now we create a unit test project and add references to our schema / map projects and the following BizTalk DLLs (ignore the Fakes assemblies for the moment):

bizTalkBlog3

Once this is setup, we can test our map in code as follows:

        [TestMethod]
        [DeploymentItem("Input.xml")]
        public void NoLookup()
        {
            var target = new Test();
            var map = new NoLookup();
            var output = new Test();
            var source = "Input.xml";
            var outFile = "Output.xml";

            Assert.IsTrue(target.ValidateInstance(source, OutputInstanceType.XML));
            map.TestMap(source, InputInstanceType.Xml, outFile, OutputInstanceType.XML);
            Assert.IsTrue(output.ValidateInstance(outFile, OutputInstanceType.XML));
            TestContext.AddResultFile(outFile);
        }

Note we have to add the output file to the test context, otherwise it will be deleted when the test ends. See http://msdn.microsoft.com/en-us/library/ms404699(v=vs.80).aspx

Ok, that is one map tested. Let us try a different map that uses an external function call, specifically the GetCommonValue functoid. This looks up BizTalk Xref data from the management database:

BizTalkblog4

If we now run the equivalent code for this map, we get a rather unhelpful exception:

Microsoft.BizTalk.TestTools.BizTalkTestAssertFailException: Transform Failure
Result StackTrace:
at Microsoft.BizTalk.TestTools.Mapper.TestableMapBase.PerformTransform(String inputXmlFile, String outputXmlFile)
at Microsoft.BizTalk.TestTools.Mapper.TestableMapBase.TestMap(String inputInstanceFilename, InputInstanceType inputType, String outputInstanceFilename, OutputInstanceType outputType)
at TrueNorth.BizTalk.Blog.Testing.MapTest.Lookup()

It doesn’t like our external lookup functoid. With some digging we uncovered the ‘real’ exception:

System.Xml.Xsl.XslTransformException: Cannot find a script or an extension object associated with namespace ‘http://schemas.microsoft.com/BizTalk/2003/ScriptNS0’.

The issue is with the Microsoft.BizTalk.TestTools.Mapper.TestableMapBase class, specifically with the PerformTransform method:

    private void PerformTransform(string inputXmlFile, string outputXmlFile)
    {
      XmlReader input = (XmlReader) null;
      XmlWriter results = (XmlWriter) null;
      try
      {
        XslCompiledTransform compiledTransform = new XslCompiledTransform();
        XmlReader stylesheet = (XmlReader) new XmlTextReader((TextReader) new StringReader(this.XmlContent));
        XsltSettings settings = new XsltSettings(true, true);
        input = XmlReader.Create(inputXmlFile);
        int num = (int) input.MoveToContent();
        results = XmlWriter.Create(outputXmlFile, new XmlWriterSettings()
        {
          Indent = true,
          IndentChars = "\t"
        });
        compiledTransform.Load(stylesheet, settings, (XmlResolver) null);
        compiledTransform.Transform(input, results);
      }
	  /* snip */
    }

The instance object for each external function should be associated with the correct namespace in an XsltArgumentList object. This should then be passed to the Transform call on the last line; the overload

compiledTransform.Transform(input, this.TransformArgs, results);

works correctly.

So now we know the problem, how do we fix this? Well, the easiest way is to ‘hack’ the XslCompiledTransform method call to do what we want for the purposes of the test. We can do this using Microsoft Fakes (http://msdn.microsoft.com/en-us/library/hh549175.aspx).

First add a reference to System.Xml. Then right-click the assembly and select ‘Add Fakes…’

bizTalkBlog5

We can now inject code into any of the method calls for the faked assembly. The magic code for us is:

            using (ShimsContext.Create())
            {
                var target = new Test();
                var map = new Lookup();
                var output = new Test();
                var source = "Input.xml";
                var outFile = "Output.xml";

                System.Xml.Xsl.Fakes.ShimXslCompiledTransform.AllInstances.TransformXmlReaderXmlWriter = (xslCompiledTransform, xmlReader, xmlWriter) =>
                {
                    xslCompiledTransform.Transform(xmlReader, map.TransformArgs, xmlWriter);
                };

                /* rest of test */
            }

Here we are rerouting the Transform call made by the TestableMapBase class to the appropriate overload which takes an XsltArgumentList. We capture the argument list for the map we are testing in a closure.

Success!

BizTalkBLog6

CRM Email Router – Emails stuck at Pending Send

I have just been setting up some new workflow generated emails in CRM 2011. This was largely an exercise in frustration as the rich text editor is pretty flaky. Getting your fonts and spacing correct is very hit and miss.

Happily, this area is getting an overhaul in CRM 2015:

Microsoft Dynamics CRM 2015 Release Preview Guide

The new Email editor provides marketers with the ability to select from pre-defined templates or create an Email from scratch using an interactive drag and drop build process or an advanced editor for the CSS & HTML experts.

Anyway, after moving on to testing, we noticed that whilst most of the emails were being picked up by the email router, a couple of the messages weren’t and were just sat at ‘Pending Send’.

The obvious difference between the stuck and non-stuck emails was that the former were created in a two-stage process: first they were created with a ‘Draft’ status, and at some point later they had their status changed to ‘Pending Send’.

The latter were created with the ‘Send Email’ workflow activity which creates and sends them in one action.

A mockup of both types is shown here:

emailBlog1

We obtained the email router query via a SQL trace; a slightly tidied version is:

select 
top 5 "email0".Subject as "subject"
, "email0".Description as "description"
, "email0".PriorityCode as "prioritycode"
, "email0".ActivityId as "activityid"
, "email0".ModifiedOn as "modifiedon"
, "email0".StateCode as "statecode"
, "email0".StatusCode as "statuscode"
, "email0".DeliveryAttempts as "deliveryattempts" 
from
Email as "email0" (NOLOCK)  join ActivityParty as "activityparty1" (NOLOCK)  
 on ("email0".ActivityId  =  "activityparty1".ActivityId 
      and ("activityparty1".ParticipationTypeMask = @ParticipationTypeMask0 
      and ("activityparty1".PartyId in (@PartyId0
                  , @PartyId1
                  , @PartyId2
                  , @PartyId3
                  , @PartyId4)))) 
where
      (("email0".StateCode = @StateCode0 
      and ("email0".StatusCode != @StatusCode0 or "email0".StatusCode is null) 
      and "email0".DirectionCode = @DirectionCode0 
      and ("email0".DeliveryAttempts = @DeliveryAttempts0))) order by
            "email0".ActualEnd asc

The problem for us was the ‘Delivery Attempts’ filter. The router will never try and pick up an email if the Delivery Attempts field is null.

So, a simple change to the ‘Create’ workflow activity to set this to 0 initially fixed our problem:

emailBlog2

Dynamics CRM Paging Part II – Lazy Paging

Following on from Dynamics CRM Paging Cookies – Some gotchas!, here is a helper class we use for all our paging needs.

An example of its usage:

const string fetchXml = @"
    <fetch mapping='logical' count='5000' version='1.0' page='{0}' {1}>
	    <entity name='account'>
		    <attribute name='name' />
	    </entity>
    </fetch>";

var accounts = new PagedRetriever<Tuple<Guid, string>>(
                    entity => Tuple.Create( entity.Id, 
                                            (string)entity.Attributes["name"]), 
                    (page, cookie) => String.Format(fetchXml, page, cookie))
                .GetData(service);

The utility class has a couple of nice features:

    Laziness – Extra pages are only retrieved as they are enumerated (see the snippet after this list)
    Memory efficient – Entities are converted on the fly, saving a huge amount of memory if a large enumeration is converted to a list
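
To illustrate the laziness (a hypothetical snippet reusing the fetchXml and service from the example above), only the pages actually needed to satisfy the enumeration are requested:

// Only the first page of 5000 accounts is fetched here, because enumeration stops after 10 items.
var firstTen = new PagedRetriever<Tuple<Guid, string>>(
                    entity => Tuple.Create(entity.Id, (string)entity.Attributes["name"]),
                    (page, cookie) => String.Format(fetchXml, page, cookie))
                .GetData(service)
                .Take(10)
                .ToList();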

Here is the complete code for the Paged Retriever class:

public class PagedRetriever<T>
{
    private readonly Func<Entity, T> converter;
    private readonly Func<int, string, string> pagedFetchXml;
    private readonly bool usePagingCookie;

    /// <summary>
    /// Utility class to retrieve paged results from CRM
    /// </summary>
    /// <param name="converter">Converts the returned entity to a wrapper type</param>
    /// <param name="pagedFetchXml">pagedFetchXml takes the pagenumber and the pagingCookie and returns the fetchXml</param>
    /// <param name="usePagingCookie">Whether the paging cookie should be used. Set to yes for performance if the primary entity id is unique in the resultset.</param>
    public PagedRetriever(Func<Entity, T> converter, Func<int, string, string> pagedFetchXml, bool usePagingCookie = true)
    {
        this.converter = converter;
        this.pagedFetchXml = pagedFetchXml;
        this.usePagingCookie = usePagingCookie;
    }


    /// <summary>
    /// Get all entities using the standard RetrieveMultiple call
    /// </summary>
    /// <param name="service"></param>
    /// <returns></returns>
    public IEnumerable<T> GetData(IOrganizationService service)
    {
        return GetData(query => service.RetrieveMultiple(query));
    }

    /// <summary>
    /// Gets all entities using a custom collection producer (eg. a RetrieveByResourcesServiceRequest)
    /// </summary>
    /// <param name="retriever"></param>
    /// <returns></returns>
    public IEnumerable<T> GetData(Func<QueryBase, EntityCollection> retriever)
    {
        int page = 1;
        string pagingCookie = String.Empty;
        while (true)
        {
            var pagedXml = pagedFetchXml(page, pagingCookie);

            var entityCollection = retriever(new FetchExpression(pagedXml));
            if (entityCollection == null)
            {
                break;
            }

            foreach (var convertedEntity in entityCollection.Entities.Select(converter))
            {
                yield return convertedEntity;
            }

            if (!entityCollection.MoreRecords)
            {
                yield break;
            }

            if (usePagingCookie && !String.IsNullOrEmpty(entityCollection.PagingCookie))
            {
                pagingCookie = "paging-cookie='" + System.Web.HttpUtility.HtmlEncode(entityCollection.PagingCookie) + "'";
            }

            page++;
        }
    }
}

Word redundancy factors

“Studiously” – 369,000 google hits
“Studiously ignored” – 85,500 google hits
“Studiously ignores” – 59,700 google hits

Here at TrueNorth we like words almost as much as programming, well I do anyway. I have been looking for a good example of this phenomenon for a while, where a regular English word becomes so attached to its partner that it may as well not exist on its own.

I think ‘bated’ is one of the best:

“bated” – 1,230,000
“bated breath” – 401,000

That gives a word redundancy factor (yes I am making this up) of 0.33.

I opened this up on FB recently and got some very nice examples:

“Earth shattering” – 0.30
“Casting aspersions” – 0.40
“Backhanded compliment” – 0.40 (backhanded criticism is a mere 6,000 hits)
“Foregone conclusion” – 0.49
“Extenuating circumstances” – 0.11

The current record holder seems a bit cheaty, as it is not an idiom but a snippet of a larger cliche:

“Let bygones” – an incredible wrf of 0.65

A few that work because they use proper nouns:

“Midas touch” – 0.22
“Poseidon Adventure” – 0.57

And one that works because this usage swamps the common usages of the individual words:

“Star Trek” – 0.46

Some have gone beyond the 1.0 extinction limit and the words have simply merged:

“Ruth-less”
“Over-whelm”

So, the ‘Official’ rules of the game are:

1) Pick a two word pairing
2) Google one of the words in quotes and record the number of hits
3) Google the pairing in quotes and record the number of hits
4) Divide 3) by 2) to give the wrf
5) Spellings are exact but must all be regular English words
6) No proper nouns

Can you beat ‘let bygones’?

The importance of code reviews – shared responsibility

Just wanted to share a story from university that taught me early on the importance of peer reviews and your responsibilities as a reviewer.

The situation was a group practical during which we were designing a toy 4-bit microcontroller. The whole thing was mainly built out of MOSFETs if my memory is correct (my electronics is definitely rusty ~15 years down the line!). These were plugged together into basic logic gates, logic gates into half-adders and latches and so on. Each member of the group had responsibility for part of the final microcontroller: memory, ALU, clock, registers, control unit etc.

The whole thing was to be simulated on some industrial software that let you draw the gates in a CAD type interface. It then determined the layout and consequent masks that would be required to actually burn the wafer in a fab plant. It was all exciting stuff with NDAs being signed to use the software (I think it was probably a couple of generations old by this point, probably a 600nm process or something). And, just like punch card computing of yesteryear, the simulation of the design to check whether it will actually work when burned took a nightly run, no REPL for this badboy.

We had two shots to get it running over a 4 day practical. As well as designing your own part, everyone in the group of about 8 was assigned someone else’s part to check. So, there I was reviewing one of my peer’s work, and by god was it boring. My eyes were going square (probably not helped by the extremely cheap Queen’s College Bar that I frequented too often) as basically I was staring at a long truth table cross-checking the wiring of a mass of gates.

You can probably see where this is going, I didn’t do my due diligence. After checking about 60% of the table and finding no mistakes I thought sod this, surely there won’t be a mistake in the rest of it and signed it off.

We all arrived the next morning eager to see the results of the overnight simulation in which the microcontroller was put through its paces with a simple program. Needless to say, it had bombed out with all sorts of wacky signals going on. Much head scratching and debugging work followed until the fault was found. Yep, you guessed it: a pesky 1 where there should have been a 0 in one single line of the 40% of the truth table of one component that yours truly was supposed to have reviewed!

At the time, I was annoyed at my peer. How could he have been so slack in his own work?!

However, looking back with the wisdom of maturity (although I don’t think my peers needed much looking back to arrive at this conclusion), I can see that most of the fault was with me.

The moral of this rather lengthy story is: you share responsibility for a piece of work if you have reviewed it and signed it off. If you can’t review it properly and have some confidence that it will work, then speak up loud and clear. Too often I see programmers treat peer reviews as a box-ticking exercise, rubber stamping stuff that goes out the door and comes straight back in again when straightforward mistakes are missed. We are all human and mistakes get made; make it a mission to find at least one on every review you do!

Oh, and in case you were wondering, it did work next time 🙂

Don’t burn your free Azure credits, or max out the corporate card.

Please note, this can be scheduled with Azure Automation.

To set up credentials to connect, see Authenticating to Azure using Azure Active Directory.

At TrueNorth we try to get the most out of Azure. We use it for development, pre-sales, testing infrastructure architectures, performance testing and more. It is so easy to forget to turn a VM off, and then you find yourself trying to navigate the portal on your mobile to deallocate your VM.

So, we have created a PowerShell script that can be triggered at a set time; the script enumerates all your VMs and powers them down.

As we use our BizSpark subscription for non-customer VMs, we can filter with:
Select-AzureSubscription BizSpark

workflow ShutDown-AllVMs {
    param (
        [parameter(Mandatory=$true)]
        [String] $VMCredentialName = "YOUR ASSET NAME"
    )

        $Credential = Get-AutomationPSCredential -Name $VMCredentialName 

	    if ($Credential -eq $null)  {
            throw "Could not retrieve '$VMCredentialName' credential asset. Check that you created this asset in the Automation service."
        }     

        Add-AzureAccount -Credential $Credential

        Select-AzureSubscription BizSpark

        InlineScript {            
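            # "StoppedVM" means stopped but still allocated (and still incurring compute charges); stopping it again deallocates it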
            Get-azurevm | ? { $_.Status -eq "StoppedVM"}  | Stop-AzureVM -Force
            Get-azurevm | ? { $_.Status -ne "StoppedDeallocated"}  | Stop-AzureVM -Force

            # catch anything we missed; note this will not pick up VMs that are still starting up
            Get-azurevm | ? { $_.Status -eq "ReadyRole"}  | Stop-AzureVM -Force 
        }
}

This is scheduled with the Windows Task Scheduler; it runs at 8 PM and will wake up the laptop if needed (it will go back to sleep afterwards). We could use the Azure Scheduler feature, but for the purposes of this post we are just going to trigger the PowerShell script.

We are also looking into creating an Azure add-on that will just deallocate your VMs when they are not in use, after or at certain times.