Usefulness of the Underscore.js functional library

I usually ignore a new framework or library until I’ve seen it referenced more than a dozen times, and Underscore has reached that threshold. Recently I needed some more powerful tools for working with collections, and due to time constraints I decided not to write additional functions myself but instead give this library a try. Many of the functions in Underscore are already present in modern browsers, and when a native implementation is found the library defers to it, which was a huge selling point for me.

To keep this post brief, I’ll jump straight to a few of the functions I found useful recently.

First up is _.pluck:

_.pluck(_productCollection.Items, "Id")

It transforms the Items array of this object:

_productCollection = {
    CriteriaType: "Products",
    Items: [{
        Facility: null,
        FacilityId: null,
        Name: "",
        Id: ""
    }]
}

into an array of Ids for additional processing:

[Id,Id,Id]

This was useful to me when I had to compare those Ids against a new value to see whether it was already contained in the original collection.

!_.any(_.pluck(productCollection.Items, "Id"), function (Id) {
    return Id === product.Id;
})

Another useful function was _.reject, which I used to remove an item from a collection:

productCollection.Items = _.reject(productCollection.Items, function (prod) {
    return prod.Id === productId;
});
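
Putting those pieces together, here’s a rough sketch (the push call and surrounding flow are just for illustration, not from my actual code) of adding a product only when its Id isn’t already in the collection, and later removing it by Id:

// Sketch: assumes product has an Id and productCollection.Items is an array of similar objects.
var existingIds = _.pluck(productCollection.Items, "Id");

if (!_.any(existingIds, function (id) { return id === product.Id; })) {
    productCollection.Items.push(product); // not there yet, so add it
}

// Remove it again by Id.
productCollection.Items = _.reject(productCollection.Items, function (prod) {
    return prod.Id === product.Id;
});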

This is definitely a useful library.

An email I received today.

I seriously just received this email from a DBA at work today.

Developers,

I’m sure that a lot of you know that reading one or more large dataset(s) returned from the SQL Server and generating HTML line-by-line to display the items on the web page is terribly slow. Several years ago, I developed a set of stored procedures which – to my knowledge – are still not in general use today… and that is a shame. Let me be clear: Calling the stored procedures I wrote to write HTML for you will speed up your web pages… in some cases (e.g. parsing thousands of lines to display on the web page) by a factor of 10 or more because generating the HTML on the SQL Server side results in *many* less rows being returned to the ASP on top of the fact that the ASP doesn’t have to cycle through those lines adding the tags.

When displaying the page for Allina, for example, the result set being returned to the web page goes from 25339 rows of raw data that the ASP must process one line at a time down to 811 lines of pre-generated HTML code; this is a 97% reduction in the number of rows returned. PLUS – this is on top of the fact that the HTML has already been generated and all the ASP has to do is paste it into the page being displayed.

If you would like assistance converting your web pages over to using these stored procedures, or for more details, please contact me.

Adding additional Git remote for your rails code

In a previous post I showed how I was able to easily deploy my code out to Heroku. But the problem that remains is that Heroku is the deployment destination of your web app, not the actual source repository.

I took a look at GitHub, but hated the fact that my code would be public unless I moved to a paid plan. Normally this wouldn’t matter, but the fact that this is a small app I’d one day like to charge money for took GitHub out of the running for me. So… I went back to my ole friend Unfuddle.

I quickly set up a Git repo with them and ran into several problems when trying to add my Unfuddle repo as a Git remote. Then I found a link in Unfuddle, under Repositories > [Name of my repository], that gave me a very specific one-page tutorial on setting up my environment to work with Unfuddle. Before that I had copied my local SSH public key into my settings in Unfuddle… and two minutes later I was up and running.
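
For reference, adding a second remote alongside Heroku boils down to a couple of commands like these (the repository URL below is just a made-up example; Unfuddle shows the exact one on your repository page):

git remote add unfuddle git@myaccount.unfuddle.com:myaccount/myapp.git

git push unfuddle master

git remote -v

That last command simply lists your remotes, so you should see both heroku and unfuddle in the output.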

(By the way, I’m using this spectacular book to learn about RoR, freely available here)

voila.

Introduction to Aspect Oriented programming with DynamicProxy

So what’s AOP yo!?

Several months ago I read Bob Martin’s Clean Code (which, by the way, is a phenomenal read if you take the time to understand the concepts, and even better if you challenge yourself to apply them :-p) and was intrigued by his review of aspect-oriented programming. Most of our applications have a core function that is usually entangled with other concerns (also called cross-cutting concerns or aspects) such as security, caching, exception handling, logging, etc. With AOP you use a few tools to write things like caching once in your application and reuse that code all over the place without the handy ctrl-c + ctrl-v combo. In this brief post I’ll demonstrate how you can implement AOP in your application using StructureMap (I’m assuming you already know how to use an IoC container) and the Castle project’s DynamicProxy.

Where do I start?

(Warning: I’m assuming you’re familiar with StructureMap (or your tool of choice), so I won’t get into the guts of its setup.)

StructureMap has a neat feature called “EnrichWith” that you can use when you’re wiring up your dependencies. Basically, it allows you to decorate a class with a wrapper of your choice. You can create the wrapper manually, or, as in our example, let DynamicProxy do the grunt work for us. Here’s our problem: we want to cache several methods in our service layer or in a repository, but we don’t want to copy/paste our 30 lines of caching code into 20 different methods.

  1. We need to create a custom attribute, which the interceptor will use to decide whether or not to run our custom caching code.
  2. We implement IInterceptor, an interface from the DynamicProxy library. This guy will intercept the method calls; we’ll see its value later.
  3. Finally, we need to enrich our service class with a proxy that uses this new interceptor, and that’s it!

Here’s my custom attribute:

[AttributeUsage(AttributeTargets.Method)]
public class CacheMethodAttribute : Attribute
{
    public CacheMethodAttribute()
    {
        // default cache duration, in seconds
        SecondsToCache = 10;
    }

    public double SecondsToCache { get; set; }
}

Next, here’s our Caching Interceptor:

using System;
using System.Linq;
using System.Reflection;
using System.Web;
using Castle.DynamicProxy; // IInterceptor/IInvocation (Castle.Core.Interceptor in older Castle versions)

public class CacheInterceptor : IInterceptor
{
    private static readonly object lockObject = new object();

    public void Intercept(IInvocation invocation)
    {
        CacheMethod(invocation);
    }

    private void CacheMethod(IInvocation invocation)
    {
        // Methods without the [CacheMethod] attribute pass straight through.
        if (!IsMethodMarkedForCache(invocation.Method))
        {
            invocation.Proceed();
            return;
        }

        var cacheDuration =
            (invocation.Method.GetCustomAttributes(typeof(CacheMethodAttribute), true).First() as
             CacheMethodAttribute ?? new CacheMethodAttribute()).SecondsToCache;

        var cacheKey = GetCacheKey(invocation);

        var cache = HttpRuntime.Cache;
        var cachedResult = cache.Get(cacheKey);

        if (cachedResult == null)
        {
            lock (lockObject)
            {
                // Re-check inside the lock in case another thread cached the value while we waited.
                cachedResult = cache.Get(cacheKey);
                if (cachedResult == null)
                {
                    invocation.Proceed();

                    // Don't cache null results.
                    if (invocation.ReturnValue == null) return;

                    cache.Insert(cacheKey, invocation.ReturnValue, null,
                                 DateTime.Now.AddSeconds(cacheDuration), TimeSpan.Zero);
                    return;
                }
            }
        }

        // The value was already cached: skip the real call and hand back the cached result.
        invocation.ReturnValue = cachedResult;
    }

    private bool IsMethodMarkedForCache(MethodInfo methodInfo)
    {
        return methodInfo.GetCustomAttributes(typeof(CacheMethodAttribute), true).Any();
    }

    private string GetCacheKey(IInvocation invocation)
    {
        // Key = method name plus each argument's ToString(), so reference-type
        // arguments should override ToString() for this to be useful.
        var cacheKey = invocation.Method.Name;

        foreach (var argument in invocation.Arguments)
        {
            cacheKey += ":" + argument;
        }

        return cacheKey;
    }
}

This is where all the magic happens. When a decorated method is called, it first hits this interceptor, which has a hook into the “invocation”. That invocation variable holds the details of the method call: arguments, method name, attributes on the method, and much more. This is where we place our caching code, and if we don’t actually find the value in cache we call

invocation.Proceed();

which sends control to the wrapped method; it makes its db call, and when it returns, control comes back to this interceptor, so the next statement executed is

if (invocation.ReturnValue == null)...

If we did find the value in the cache and didn’t have to make a call back to the db, then we simply put that cached value into ReturnValue, and that’s it.
And finally, let’s wire up this puppy:

var dynamicProxy = new ProxyGenerator();

Scan(assemblyScanner =>
    {
        assemblyScanner.TheCallingAssembly();
        assemblyScanner.WithDefaultConventions();

        // IJobService/JobService are placeholder names; the generic type arguments
        // are your service interface and its concrete implementation. Note the proxy
        // is enriched with the CacheInterceptor we defined above.
        ForRequestedType<IJobService>()
            .EnrichWith(z => dynamicProxy.CreateInterfaceProxyWithTarget(z, new CacheInterceptor()))
            .TheDefault.Is.OfConcreteType<JobService>();
    });

So now that we have this code in place, all we have to do is decorate any methods we want with our custom attribute and make sure their classes are enriched in our IoC wire-up, and we get caching for free!

[CacheMethod(SecondsToCache = 60)]
public IList SearchJobs(Criteria criteria, int page, int pageSize)
{
    var results = _myService.SearchJobs(criteria, page, pageSize);
    return results;
}

That’s it!  That’s all our method that needs caching has to do.  You’ll notice that we no longer have to copy/paste our “caching” routine into every single method that requires caching, but instead we just use this simple attribute.
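
To see the payoff from the calling side, here’s a rough sketch (the type and variable names are placeholders, not from my actual app) of resolving the enriched service from StructureMap and calling it twice:

// The container hands back the DynamicProxy wrapper, not the raw class.
var jobService = ObjectFactory.GetInstance<IJobService>();

var first = jobService.SearchJobs(criteria, 1, 20);   // cache miss: hits the db, result gets cached
var second = jobService.SearchJobs(criteria, 1, 20);  // same arguments: served from HttpRuntime.Cache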

So what are the Cons?

  • There is a slight performance hit due to the use of reflection, but in my opinion developer time is far more expensive than extra cycles on a machine. Remember, that Dell 2300x (fake) server doesn’t ask for vacation, get moody or require health insurance.
  • There is certainly a learning curve for newbies. In fact, it looks plain scary at first, but I promise that once you use it this new tool will become quite handy.
  • I’d recommend using this on most (if not all) enterprise apps (console, web, etc.), but I’d stay away from it in code that has to crunch things incredibly fast, like parsers.

First deployment to Heroku

I’ve decided to use the Heroku Ruby on Rails hosting service due to its simplicity and integration with source control (Git). You can read more about it here. There are many other services available, but I was attracted by the free account and the fact that deployment is done from the terminal like this:

 git push heroku master 

And creating a Heroku app is done with this command:

heroku create tinyfireant

(Note: tinyfireant is the name of the site that you’re creating with Heroku)

Now, one problem that I ran into was deployment; I kept getting this error:

git push heroku master

Permission denied (publickey).
fatal: The remote end hung up unexpectedly

After scouring the web for a few hours I finally gave up at 1am yesterday. But I woke up this morning and couldn’t go to work without at least giving it another chance. And wouldn’t you know it, 10 minutes later my site was deployed. Thanks to this post here I discovered that I hadn’t uploaded my SSH keys (basically a key pair that lets the terminal communicate with Heroku without my having to log in each time I interact with the service).

So I regenerated my keys and uploaded them to Heroku like this:

cd ~/.ssh

ssh-keygen -t rsa -C "myEmailAddressRegisteredWithHeroku.com"

heroku keys:add

git push heroku master

(Note: I didn’t enter a different file name, and left my passphrases empty.)

Then I saw this lovely message:

Counting objects: 73, done.
Delta compression using 2 threads.
Compressing objects: 100% (63/63), done.
Writing objects: 100% (73/73), 80.86 KiB, done.
Total 73 (delta 10), reused 0 (delta 0)

-----> Heroku receiving push
-----> Rails app detected
Compiled slug size is 80K
-----> Launching......... done
http://tinyfireant.heroku.com deployed to Heroku

To git@heroku.com:tinyfireant.git
* [new branch]      master -> master

Success! Now that I have deployment set up, I can start learning some more and deploying with one command. Hey Microsoft, you listening? :0)

Learning Ruby on Rails from a .net developer perspective: Part I of N

I played with Ruby on Rails exactly one year ago and was very impressed with the thoughtful design, architecture and tooling that the platform provided. I used to think that ASP.NET MVC 2 would really hit the mark in filling the gaps between MVC 1 and RoR. Well, I’ve been using MVC 2 on a project at work and I can truly say that it’s not as well thought out as it needs to be. Couple this with the lackluster experience I’ve had using BizSpark and interacting with their team (found here and here), and I decided to once again begin learning Ruby on Rails.

This is going to be an impromptu series that just highlights little tidbits that I found useful and feel other .NET developers may benefit from. I guess you could say that this is a series highlighting my ignorance, and it will make a few of the Ruby guys chuckle from time to time.
