Friday, August 3, 2012

Adding PostgreSQL as a DBProvider

Recently I've been working with Mono, PostgreSQL, and Massive (a lightweight 'ORM' written by Rob Conery), trying to see how much of my day-to-day job I can move outside of the Microsoft realm. While trying to use Massive with PostgreSQL I ran into a problem with the providerName attribute of the connectionStrings add tag. Today's post will walk you through how to reproduce the error and, more importantly, how to fix it.

The Setup

For this project I am using PostgreSQL for the backend, with my hockeydb database. If you don't have PostgreSQL installed and you are using a Mac, you can use the handy Postgres.app utility, which is brought to you by the good folks at Heroku. For the non-Mac crowd, visit the PostgreSQL site and get the appropriate install for your machine. After you have PostgreSQL installed you can grab a copy of the hockeydb database here. After setting up Postgres and loading the database it's time to check your Mono environment setup.

For more information about the hockeydb database and how to restore the PostgreSQL backup, check out my previous post, Hockey Databank Databases.

If you already have Mono and MonoDevelop set up you can skip this paragraph; if not, read on for a quick overview of how to get everything installed. First, go to the Mono Project site and follow the download and install instructions. Once you have Mono installed, go to the MonoDevelop download page and follow its download and install instructions.

The final setup step is to download the PostgreSQL .NET driver. Download the Npgsql2 driver and unzip it into a directory of your choosing.

Now we are ready to start!

Creating the Solution and Adding Massive

After you have started MonoDevelop, click on the 'Start New Solution…' link. In the 'New Solution' dialog select 'C#' and then 'Console Project'. Name the solution HockeyDbConsole and click the 'Forward' button. We won't be creating a GUI, so ensure that the GTK# Support checkbox is NOT checked and click the 'OK' button. You should now see the HockeyDbConsole project under a solution with the same name.

My next step is to download the Massive code for PostgreSQL and add it to my project. Grab it from Massive.PostgreSQL.cs. You will also need to add the following references for Massive's dependencies:

  • Npgsql (the assembly that you downloaded during setup)
  • System.Configuration
  • System.Data
  • Microsoft.CSharp

After adding my references there is one more file I need to add to the project: an app.config file. Unlike Visual Studio, MonoDevelop doesn't offer a way to add an app.config directly, but it does have a way to add an empty XML file. When you add the empty file, make sure to name it app.config. Once the file is in the project, update its contents to declare your connection string.
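A minimal version looks something like this (the host, user, password, and database values are placeholders for your own setup):

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <connectionStrings>
        <!-- providerName is the part that will cause trouble below -->
        <add name="hockeydb"
             connectionString="Server=localhost;Port=5432;User Id=postgres;Password=password;Database=hockeydb;"
             providerName="Npgsql" />
      </connectionStrings>
    </configuration>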

Go ahead and build the project; when the build is done you should have no errors.

Getting Ready to Query the abbrev Table

To keep the output simple I will query the abbrev table. It contains a key for all the abbreviations used throughout the database. One way to query a table using Massive is to create a class that maps to the table you wish to query. I will create a class named Abbrev; that way it's easy to figure out which table I'm working with. I built my Abbrev class by following the Massive README file (https://github.com/robconery/massive/).
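A minimal sketch of the class, assuming the connection string in app.config is named hockeydb:

    using Massive;

    public class Abbrev : DynamicModel
    {
        // "hockeydb" is the connectionStrings name from app.config,
        // "abbrev" is the table this class maps to
        public Abbrev() : base("hockeydb", "abbrev") { }
    }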

Now that I have the Abbrev class I'm ready to run a query. I will add code to retrieve all the records in the abbrev table and list the results after the "Hello World" message. First I need to new up an Abbrev object and call it table. I will then make a call to table.All() and loop through the results, writing the records to STDOUT.
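Here's roughly what the updated Main method looks like (the column names on the dynamic record are assumptions; substitute the real columns from the abbrev table):

    using System;

    namespace HockeyDbConsole
    {
        class MainClass
        {
            public static void Main(string[] args)
            {
                Console.WriteLine("Hello World!");

                // fetch every record in the abbrev table
                var table = new Abbrev();
                foreach (var record in table.All())
                {
                    // 'Code' and 'Fullname' are hypothetical column names
                    Console.WriteLine("{0} - {1}", record.Code, record.Fullname);
                }
            }
        }
    }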

OK, now that everything is in place it's time to run it. When I do, I see the following error; the first line is the most important part:

Unhandled Exception: System.Configuration.ConfigurationErrorsException: Failed to find or load the registered .Net Framework Data Provider 'Npgsql'.
at System.Data.Common.DbProviderFactories.GetFactory (System.String providerInvariantName) [0x00026] in /private/tmp/monobuild/build/BUILD/mono-2.10.9/mcs/class/System.Data/System.Data.Common/DbProviderFactories.cs:80 

What does the error mean? It's telling me that .NET doesn't know what providerName="Npgsql" in my connection string maps to. After some head scratching and a few Google searches I was able to get past this issue by declaring Npgsql as a data provider.

Registering Npgsql as a Data Provider

To register Npgsql as a data provider I need to add a few lines to my app.config file.  Here's what the app.config file looks like now:
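Something along these lines; check the Version and PublicKeyToken values against your own Npgsql.dll (the Assembly Browser trick described below shows the assembly's full name):

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <system.data>
        <DbProviderFactories>
          <!-- the invariant value is what providerName refers to -->
          <add name="Npgsql Data Provider"
               invariant="Npgsql"
               description=".Net Framework Data Provider for PostgreSQL"
               type="Npgsql.NpgsqlFactory, Npgsql, Version=2.0.11.0, Culture=neutral, PublicKeyToken=5d8b90d52f46fda7" />
        </DbProviderFactories>
      </system.data>
      <connectionStrings>
        <add name="hockeydb"
             connectionString="Server=localhost;Port=5432;User Id=postgres;Password=password;Database=hockeydb;"
             providerName="Npgsql" />
      </connectionStrings>
    </configuration>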

As you can see, I had to add the system.data tag. Within it I added the Npgsql information under the DbProviderFactories tag. I set the invariant value to the base namespace, and it is what is used in the providerName attribute. I was able to gather the information for the type attribute using the Assembly Browser tool within MonoDevelop: right-click on the Npgsql.dll reference and select the 'Open' option. Once the browser opens, expand the Npgsql node on the left, then search for Factory and the NpgsqlFactory class becomes visible.

Now that I have the app.config file updated I will re-run the app.  This time I see the results of my query.

For the demo I added the system.data information to the app.config file. Since I'm going to be using this on multiple projects, I've also added the add tag to the system.data > DbProviderFactories section of my machine.config file.

Summary

By adding just a few lines to my app.config file I was able to use a PostgreSQL connectionString with Massive to run a simple query. I hope this post helps you get through this issue faster than I did.


Sunday, July 29, 2012

Hockey Databank Databases

Shortly after each NHL season, a ZIP file of all stats from the previous season, plus all other NHL seasons, shows up on the Hockey Databank Yahoo group. The ZIP file contains CSV stats files with just about any hockey stat you can think of. These files are a great gift to those of us who are stats junkies, but there isn't an easy way to query them. With that in mind I decided to create a database to store the stats. I wrote a quick Clojure app to load the CSV files into a PostgreSQL database. When I was about halfway through the loading process I realized it wouldn't take any additional Clojure code (other than connection information) to create and load MySQL and SQLite databases, so I did just that.

The Database Schema

The database was created so that each table maps to a CSV file. The tables have the same columns as the CSV files, except for the master table, where I added an id column as a primary key. The create_db.sql files are available for all three database flavors here: MySQL, PostgreSQL, and SQLite. These files will give you an empty database with all the necessary tables. If you want the pre-loaded database you can download one of the database dumps below.
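As a hypothetical illustration of the pattern, here's what a PostgreSQL version of the master table might start out like; the real column names mirror the headers in master.csv, so everything but id below is made up:

    CREATE TABLE master (
        id         SERIAL PRIMARY KEY, -- surrogate key I added; not in the CSV
        player_id  TEXT,               -- hypothetical columns; the real ones
        first_name TEXT,               -- mirror the CSV headers
        last_name  TEXT
    );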

The Database Dumps

You can download the database backup files here:

If you aren't familiar with how to restore the backups for the MySQL or PostgreSQL databases, here's a quick 'how to' for restoring from the command line. The example steps work on my Mac and should also work from a Linux command line. The SQLite file is the actual database file, so after 'un-tarring' or unzipping the file you are ready to go.

Restoring the MySQL Dump

1. Create the database by running:  

     mysql -u <username> -p<password>

     At the mysql prompt enter:  

     create database hockeydb;

     Exit out of mysql and return to the prompt.

2. 'Un-tar' the file by running:  

  tar zxvf hockeydb-6-23-12-mysql.tgz 

3. Restore the database by running the following command:

 mysql -u <username> -p<password> hockeydb < hockeydb.sql

That's all there is to it. You should now have a MySQL database loaded with the data from the Hockey Databank files.

Restoring the PostgreSQL Dump

1. Create the database by running:  

     createdb hockeydb

2. 'Un-tar' the file by running:  

  tar zxvf hockeydb-6-23-12-postgres.tgz 

3. Restore the database by running the following command:

 psql hockeydb < hockeydb.backup

That's all there is to it. You should now have a PostgreSQL database loaded with the data from the Hockey Databank files.

Summary

Hopefully these databases will help others work with the Hockey Databank data. I would like to thank Hockey Databank group member dsreyn and everyone else who helped put this data together; without their hard work these databases wouldn't be possible.

My future plan for this data is to create data access libraries for Clojure, C#, and Ruby. When that will happen I am not sure. When the libraries do become available I will announce them through this blog.

If you have any questions, concerns or suggestions please feel free to leave a comment!

Resources

Hockey Databank Yahoo group

MySQL Downloads

PostgreSQL

SQLite

Wednesday, April 11, 2012

Creating a TFS Work Item from IronRuby

At my day job we use Team Foundation Server 2008 (TFS) for our automated builds, iteration management, and source control. TFS may not be the ideal way to manage these processes, but in our MS environment it has helped us communicate with our non-technical team members and customers. In order to enhance our feedback loop we've been looking into ways to add bugs automatically when automated tests fail or when errors occur in our production applications. (See also Creating a TFS Work Item from ClojureCLR.)

This morning I had a little time to do some research on how to programmatically create a new bug work item. My goal was to write code that would create a new bug work item with an image attached to it. Why an image? When our automated tests fail, we capture what the browser looked like at the moment of failure. Before we get into the code, let me describe a TFS Work Item.

A TFS Bug Work Item

In TFS work items are a way to track work that needs to be done. There are five different types of work items available but in our projects we typically only use three: Task, Scenario, and Bug. Each work item type has its own UI with different fields.  Since I am creating bugs in this example I thought I’d show you what the UI looks like for a Bug Work Item.

[Screenshot: an empty Bug work item form in TFS]

In this example we will create a new bug and enter text into the highlighted fields plus attach an image file. In order for me to create the bug I need to do a little setup.

The Setup

There are three assemblies needed to create a TFS Bug Work Item. They are: Microsoft.TeamFoundation.dll, Microsoft.TeamFoundation.Client.dll, and Microsoft.TeamFoundation.WorkItemTracking.Client.dll. All three of these DLLs can be found in the C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies directory. I copied the three DLLs into the project's libs directory.

The Code

The first step is to load the TFS assemblies from the libs directory using require statements. Once the assemblies have been loaded I can start working on the main method. The main method's sole purpose is to drive the WorkItem creation process. In the main method, the first few lines are responsible for creating the necessary objects for WorkItem creation.
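The requires end up looking something like this (the paths assume the DLLs sit in a libs directory next to the script):

    # load the TFS assemblies copied into libs
    require 'libs/Microsoft.TeamFoundation.dll'
    require 'libs/Microsoft.TeamFoundation.Client.dll'
    require 'libs/Microsoft.TeamFoundation.WorkItemTracking.Client.dll'

    include Microsoft::TeamFoundation::Client
    include Microsoft::TeamFoundation::WorkItemTracking::Client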

First I need a TFS server object, which is created by calling the static method TeamFoundationServerFactory.GetServer. GetServer takes a single parameter: the name of the server I want to work with. After the server object is created I can use it to create a WorkItemStore object. The WorkItemStore encapsulates the data store that contains all work items on a particular server. The next line grabs the particular Project object that I want to work with by using a method I wrote called find_item. The last setup line uses the Project object to retrieve the WorkItemType object that represents a Bug WorkItem. After that call the prep work is done and I am ready to create the new Bug WorkItem.
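Here's a sketch of that setup; the server and project names are placeholders, and find_item is my guess at a simple helper:

    # a guess at the find_item helper: scan a TFS collection for a name match
    def find_item(collection, name)
      collection.find { |item| item.Name == name }
    end

    tfs      = TeamFoundationServerFactory.GetServer('our-tfs-server') # placeholder
    store    = WorkItemStore.new(tfs)
    project  = find_item(store.Projects, 'MyProject')                  # placeholder
    bug_type = project.WorkItemTypes['Bug']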

Creating the Bug WorkItem

The create_work_item method is responsible for creating a new 'bare bones' WorkItem object. Creating a minimal WorkItem object allows for more flexibility down the line: sometimes I may want to add a new bug with an attachment and sometimes I may not. The method is straightforward: create the WorkItem object by passing in the WorkItemType object, then set the Title, Description, AreaPath, and IterationPath properties. When everything has been set, the new WorkItem object is returned.
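A sketch of what that method might look like (the parameter names are mine):

    # build a minimal 'bare bones' Bug WorkItem
    def create_work_item(type, title, desc, area, iteration)
      wi = WorkItem.new(type)
      wi.Title         = title
      wi.Description   = desc
      wi.AreaPath      = area
      wi.IterationPath = iteration
      wi
    end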

Adding the Attachment

Now that I have my WorkItem object it's time to add my image file attachment. I wrote another small method to create the Attachment object and add it to the WorkItem's Attachments collection. It takes three parameters: the WorkItem object, the path to the file to be attached, and a description of the file. First, I use the path and desc parameters to create the Attachment object. After the Attachment object has been created I add it to the WorkItem.Attachments collection by calling its Add method, passing in the Attachment object.
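Again, a sketch, with my own method and parameter names:

    # attach a file, such as a browser screenshot, to the work item
    def add_attachment(work_item, path, desc)
      attachment = Attachment.new(path, desc)
      work_item.Attachments.Add(attachment)
      work_item
    end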

At this point I have a WorkItem object with an Attachment in memory. That's nice and all, but the WorkItem has not been stored in the WorkItem data store yet. In order to add it to the store I need to save the object. Surprisingly enough, all I need to do is call the WorkItem's Save method.

work_item.Save

Now when I look at the Pending Bugs report in TFS I will see my newly created bug in the list.

[Screenshot: the Pending Bugs report listing the newly created bug]

Summary

In this post I showed you how easy it is to use .NET assemblies in an IronRuby script, and walked you through the simple process of creating a Bug WorkItem in TFS. Having the ability to programmatically create and report bugs from our IronRuby scripts will help us close the loop on bug reporting in our automated testing environment. Having an image attached to the bug will help us figure out why a test failed, speeding up bug-fix times.

In The Future

We long ago switched away from the MSBuild approach to our builds in favor of using Rake and Albacore. The switch removed some of the built-in integration with TFS that MSBuild provided. I will be investigating the TFS build and source control APIs to see if we can enhance our Rake code, in hopes of totally removing XML files from our build kickoffs. As my investigation progresses I will write posts about it.

Resources

TFS API: http://msdn.microsoft.com/en-us/library/bb130146(v=vs.90).aspx

My Source (this blog’s code is the 0-Create-TFS-Work-Item): https://github.com/rippinrobr/My-Programming-Adventure-Blog/zipball/master

Not really related to this post but mentioned and worth looking into: Rake and Albacore

Monday, February 13, 2012

Book Review: Node for Front End Developers by Garann Means (O’Reilly Media)

In my previous post I mentioned that I needed to come up to speed with CoffeeScript for my next project. That same project will be using Node on the server side. Since I liked the CoffeeScript book, I thought I would give Node for Front-End Developers a read. Once again, O'Reilly has published a book that is a quick read but gives you enough information that you can put the book down and start writing code. This book gave me enough information to start building my first Node-based application.

The Review

The book starts off by walking the reader through how to set up a Node environment. It walks you through installing Node and its package management system, npm. Once you have your environment set up, you are introduced to the REPL (read, evaluate, print, loop) with a few short code snippets. The chapter ends with a discussion of how to declare which modules your application depends on using the package.json file.

Chapter two walks the reader down the path of serving up resources, be it a string of HTML or static resources like HTML, CSS, and/or JS files. The first example shows you how to write a server that serves up HTML 'by hand', which is quickly followed by showing how much easier it is to serve static pages when you use npm modules like connect.
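For flavor, the 'by hand' approach boils down to something like this (my sketch, not the book's exact code):

    var http = require('http');

    // serve a hard-coded string of HTML for every request
    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end('<h1>Hello from Node</h1>');
    }).listen(8000);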

After learning how to serve static files, the author shows you how to interact with the client by processing HTTP GET requests. The first example uses the querystring module to process GETs with parameters; it reminded me of the early days of the web. Thankfully, after another example of processing URL parameters with querystring, the author shows you how to do the same thing in a more concise manner using the connect module.

The next topic was how to process HTTP POST requests. The book follows the same pattern here: learn how to 'roll your own' POST processing, followed by an example of how to do it with the connect module. I liked the approach the author takes throughout the book: show you how to do it yourself first, then introduce a module that can do the same thing in a less verbose manner.

After a brief overview of how to handle JSONP requests, the discussion moves on to real-time communication using socket.io. Having just finished a project that uses SignalR (a .NET open source project with similar functionality), I found this subject very interesting. The example made socket.io seem simple and straightforward. In fact, after I finished reading this chapter, I started a spike to redo that project with socket.io, just to see if it was any easier.

Chapter four introduces server-side templates. It walks you through how to use mustache to lay out templates for your application, showing you how to use templates and sub-templates to promote re-use on the UI side. Towards the end of the chapter the author discusses best practices for grouping your code, separating out code that handles a certain task to promote code re-use and separation of concerns.

The next topic of discussion is data access and application flow. The first part of the chapter uses Redis to show the reader how to work data access into your Node applications. I hadn't worked with Redis before, but after that part of the book I am now looking into incorporating it into a few projects currently underway. After the Redis discussion, application flow is covered with a pub/sub example built on events.

Up to this point in the book I found the flow of each chapter easy to follow. I appreciated the process of doing it by hand and then doing it again with a pre-existing module. It's a great way to show you how something works while exposing you to the Node module ecosystem. It seems like there's a module for just about anything you may want to do.

The last chapter varies from the previous ones by jumping into a big chunk of code right away. The code is an example of how to create an MVC application with Node. On the plus side, this chapter introduces the express module, which was inspired by Ruby's Sinatra web framework and is used by many other Node modules. Express has a tool that will create a directory structure for your app, and it has a view engine, Jade, for creating HTML views. Jade's syntax takes a little while to get used to, but once you do, it makes HTML views easier to create and much easier to read. Overall the chapter wasn't bad; it was just a little code-heavy compared to the previous chapters.

My Thoughts

I came to this book with very little Node.js experience; I had created a small app for my personal use but nothing huge. Now that I have read this book I feel comfortable enough to use Node in a few projects I have on the horizon. I liked the methodology the author took for the first five chapters: do it the hard way first, then show the reader the easier way to do it with available modules.

On a side note, this is the second short book by O'Reilly I've read. I hope they continue these types of short-but-sweet introductions to new(er) technologies. They certainly help us come up to speed quickly.

Monday, February 6, 2012

Book Review: The Little Book on CoffeeScript by Alex MacCaw (O’Reilly Media)

When I started reading The Little Book on CoffeeScript I had zero experience with CoffeeScript. I was looking for a book to quickly bring me up to speed, since my next project will rely heavily on CoffeeScript. At 60 pages, I thought the book would give me enough information to start writing CoffeeScript code. After reading it I can say that I made the right choice: this book has given me enough knowledge to get started.

The Review

The book starts off with a nice, concise chapter on CoffeeScript syntax. It covers functions, loops, arrays, and CoffeeScript-specific operators and aliases. There were plenty of examples, with just enough text to explain what was going on in the code.

The next chapter discusses classes, specifically how to declare and use them. While discussing class properties, the author points out a shortcut for setting a class property from a constructor argument that will save you typing. Let's say you have a class called Animal with a name property that you want to set by passing a value to the class's constructor. Here's the 'long hand' code for that:

class Animal
  constructor: (name) ->
    @name = name

Not a lot of typing, but the author shows you how to do it in fewer lines. Here's the 'short hand' way:

class Animal
  constructor: (@name) ->

It doesn’t seem like much but over the long haul I appreciate the short hand method.  There were a few other places in the book that the author shared shortcuts like this with the reader. 

Following the classes chapter, the next topic for discussion is CoffeeScript idioms. Here the author points out that the English words 'and' instead of && and 'or' instead of || are the preferred way to do logical ands and ors. Most of the chapter is dedicated to showing the reader how to accomplish things like 'each', 'select', 'map', and other such functionality in the language. The text is a nice way to show a person who is new to CoffeeScript how to do the typical programming tasks.
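A couple of quick examples of those idioms (my own snippets, not the book's):

    # 'and' compiles to &&, 'or' to ||, 'is' to ===, 'isnt' to !==
    name   = 'admin'
    active = true
    console.log 'valid user' if name is 'admin' and active

    # a list comprehension stands in for 'map'
    squares = (x * x for x in [1, 2, 3])   # [1, 4, 9]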

The next chapter gives a quick overview of how you can use CoffeeScript in conjunction with Node and Node packages to create an application. Overall, I found this chapter to be a nice introduction to creating an application, but there were a few problems I ran into while following along. When I went to run the app I was missing five modules: underscore, async, connect, qs, and mime. Thankfully the error messages were straightforward, and fixing the problem was as easy as running 'npm install <module name here>' for each module. The last bit of the chapter walks the reader through how to deploy the application to Heroku; it was much easier than I thought it would be.

After our hello world’ish app was created the author switched over to discuss how CoffeeScript can fix some of the JavaScript warts and how it can’t ‘fix’ some of the other JavaScript warts. The chapter is broken down into unfixed and fixed sections.  The unfixed section the shows you how and why the JavaScript typeof functionality is broken and then follows the explanation up with how you can ‘fix’ typeof by writing your own function to do it.  As an example of how CoffeeScript can ‘fix’ a JavaScript wart, the author informs the reader that CoffeeScript uses the strong equality check for all equality checks.

The book is summarized in a chapter written by Jeremy Ashkenas, the creator of CoffeeScript. In it he discusses the philosophy behind CoffeeScript, which the quote below sums up.

“express core JavaScript concepts in as simple and minimal a syntax as we can find for them.” – Jeremy Ashkenas

My Thoughts

When I started this book I had zero experience with CoffeeScript, and I was hoping that after reading it I would feel comfortable enough to write my own code. The book delivered: I now feel comfortable enough to start writing CoffeeScript for my next project.

You can find out more by checking out The Little Book on CoffeeScript over at http://oreilly.com/.

Saturday, February 4, 2012

Let Me Introduce Myself

Welcome to my new blog, My Programming Adventure. I am a .NET developer by day who has recently been drawn back into the open source world. In my time away from the Linux/Unix world there has been a HUGE explosion in solid, freely available tooling and languages. Just about anything you need can be found for free, and more than likely the free 'stuff' will be better than anything you could pay for to do the same job.

One of the languages that I have really become fond of is Clojure. While I was learning Clojure I found out about ClojureCLR, a version of Clojure that runs on the .NET stack. Since I work in a .NET environment I wanted to use it in our production environment at work. While I was learning the ins and outs of ClojureCLR I found there was very little out there in the way of 'How-To' examples, so I started a blog called My Clojure Adventure. There I give small examples of how to connect to a database and interact with .NET libraries, among other things.

While I enjoy writing posts for My Clojure Adventure, there are times when I have topics I'd like to blog about that have little or nothing to do with Clojure. That's where My Programming Adventure comes in. On this blog I will be writing about technologies, languages, books, or whatever other programming-related subject catches my eye. A few topics I'll be writing about shortly are CoffeeScript, Node, and setting up Emacs for writing C# code.

In the next few days I will have a book review of The Little Book on CoffeeScript. Now that I've introduced myself, it's time for me to go work on that review! Come back and let me know what you think.