My Programming Adventure, by Rob
Learning Go with Martini - Working with MongoDB (2014-03-04)
<style>
blockquote {
margin: 0;
padding: 0;
}
blockquote {
padding: 13px 13px 21px 15px;
margin-bottom: 18px;
font-family:georgia,serif;
font-style: italic;
}
blockquote:before {
content:"\201C";
font-size:40px;
margin-left:-10px;
font-family:georgia,serif;
color:#eee;
}
blockquote p {
font-size: 14px;
font-weight: 300;
line-height: 18px;
margin-bottom: 0;
font-style: italic;
}
code, pre {
font-family: Monaco, Andale Mono, Courier New, monospace;
}
code {
background-color: #fee9cc;
color: rgba(0, 0, 0, 0.75);
padding: 1px 3px;
font-size: 12px;
-webkit-border-radius: 3px;
-moz-border-radius: 3px;
border-radius: 3px;
}
pre {
display: block;
padding: 14px;
margin: 0 0 18px;
line-height: 16px;
font-size: 11px;
border: 1px solid #d9d9d9;
white-space: pre-wrap;
word-wrap: break-word;
}
pre code {
background-color: #fff;
color:#737373;
font-size: 11px;
padding: 0;
}
</style>
<p><em>This is the second post in a series on creating Go-based web applications/APIs. If you missed the first post, <a href="http://progadventure.blogspot.com/2014/02/learning-go-with-martini-basics.html">Learning Go with Martini - The Basics</a>, go check it out and then come back to this one.</em></p>
<h2>Intro</h2>
<p>Now that I understand how to handle GET requests it's time to add some database interaction. In this post I will walk you through adding database connectivity middleware to each request. Once I have that in place I will convert /attributes/:resource into a 'real' GET, one that reads from a database and returns the results of the query. I will also add a POST route, storing the data that was 'POSTed' to the app in a MongoDB collection.</p>
<h2>Setup</h2>
<p>Before we get started you will need to have MongoDB installed or have access to an instance of MongoDB. To install it locally, visit <a href="https://www.mongodb.org/downloads">MongoDB's download page</a>. If you'd like to use MongoDB in the 'cloud', check out <a href="https://mongolab.com/welcome/">MongoLab</a>.</p>
<p>Also, if you haven't already done so, you'll need a functional Go language environment. Visit <a href="http://golang.org/doc/install#install">golang.org's install page</a> and follow the instructions for your platform of choice.</p>
<p>Lastly, the code for the beginning of this post is stored as a Gist and can be found <a href="https://gist.github.com/rippinrobr/9122011">here</a>.</p>
<h2>Creating the Middleware</h2>
<p>Before I start working with the database I need a way to ensure that each handler function has access to the database connection. To do that I'm going to create a middleware function that will make a database connection available to each request handler. Before jumping right into that piece of middleware, though, I'll start off with a 'Hello Middleware' function.</p>
<h3><em>Step 0. The Hello Middleware Edition</em></h3>
<p>To make sure I have the handler working correctly I’m going to start off easy. I’m going to create a handler function that simply writes out “Someday I will be a MongoDB connection”. Here’s what the skeletal function looks like.</p>
<pre><code>func Mongo() martini.Handler {
    return func(c martini.Context) {
        fmt.Println("Someday I will be a MongoDB connection")
        c.Next()
    }
}
</code></pre>
<p>The middleware returns a martini.Handler as an anonymous function. All my function does is print a string and call c.Next(). The Next call yields until all of the other handlers have executed.</p>
<p>To add the function into the request stack I need to add the line <code>m.Use( Mongo() )</code> after the <code>martini.Classic()</code> call in the main function.</p>
<p>Now, each and every request will make a call to the Mongo() handler. If you download the gist above and add the function and the <code>m.Use</code> call to it you’ll be able to build and run the code. Do a few requests, valid and invalid. In the window where you started the service you should see the output of the middleware for each and every request.</p>
<h3><em>Step 1. Setting up for MongoDB</em></h3>
<p>For the MongoDB connection I will be using the <a href="http://labix.org/mgo">mgo library</a>. Before you can install it you'll need to have the Bazaar tool installed, so if you don't already have it, go to the project's <a href="http://golang.org/s/gogetcmd">download page</a>. If you are on a Mac and have Homebrew installed you can install both by running the following:</p>
<pre><code>brew install bazaar
go get labix.org/v2/mgo
</code></pre>
<p>Now that I have mgo installed I'm going to change the Mongo function so that it adds a database session to each request. First I need to add the mgo package, labix.org/v2/mgo, to my imports list. After updating the imports I changed the function so that it now looks like this:</p>
<pre><code>func Mongo() martini.Handler {
    session, err := mgo.Dial("localhost/goattrs")
    if err != nil {
        panic(err)
    }
    return func(c martini.Context) {
        reqSession := session.Clone()
        c.Map(reqSession.DB("goattrs"))
        defer reqSession.Close()
        c.Next()
    }
}
</code></pre>
<p>It looks quite a bit different than the 'Hello' version. The first line in the function uses the <code>mgo.Dial</code> function to create the MongoDB session; you can think of it as a database connection. The <code>Dial</code> function has two return values: the session object and an error object. If an error occurs we 'panic', since there is no sense in continuing if we have no database connection. If there was no error, an anonymous function is created with a single parameter, a <code>martini.Context</code>. The context object is what I will use to make the database session available to the handlers. Before that I create a clone of the original session so that each request can close its own database connection. The <code>defer</code> call puts the <code>reqSession.Close()</code> call 'on hold' until the end of the handler function. Once the function has ended, reqSession.Close() is called and the request's database connection is closed.</p>
<h2>Converting the GET /attributes/:resource Handler</h2>
<p>Now that the Mongo() middleware is in place I can convert the handler over to use it. First, I need to add the parameter <code>db *mgo.Database</code> to the getAttributes function. Now that I have a pointer to the database connection I can look for attributes for the given resource. I am going to change the if statement so that if the results of my query are not nil, I will return a 200 along with a JSON version of the results. If the query results are nil, I will return a 404 and an errorMsg JSON object. Here's the updated function.</p>
<pre><code>func getAttributes(params martini.Params, writer http.ResponseWriter, db *mgo.Database) (int, string) {
    resource := strings.ToLower(params["resource"])
    writer.Header().Set("Content-Type", "application/json")
    var attrs []resourceAttributes
    db.C("resource_attributes").Find(bson.M{"resource": resource}).All(&attrs)
    if attrs != nil {
        return http.StatusOK, jsonString(attrs)
    } else {
        return http.StatusNotFound, jsonString(errorMsg{"No attributes found for the resource: " + resource})
    }
}
</code></pre>
<p>The meat of the changes are the two lines that handle the database query. First, the <code>var attrs []resourceAttributes</code> line declares the slice that will contain the returned data, if any is found. The next line is where all the magic happens. The <code>db.C("resource_attributes")</code> call tells the driver that we want to work with the <code>resource_attributes</code> collection. If you aren't familiar with MongoDB, think of a collection like a table in a relational database. The Find call, similar to a select in the RDBMS world, takes a map as its parameter. Here I'm looking for an object that has a resource property equal to the lower-case version of the resource that was passed in. The <code>.All</code> call will take the results of my query and store them in the attrs slice. If no matches are found then attrs will be nil. Now when I make the /attributes/tv call I get an empty result set back. To fix that, I'm going to add a POST handler to create attributes.</p>
<h2>Creating the POST /attributes/:resource Handler</h2>
<p>I’m going to start off by setting up a new route in the attr-server.go file. The route will call a second function in the attribute-routes.go file that will handle the heavy lifting of creating a new attribute. Here’s what the new route looks like:</p>
<pre><code>m.Post("/attributes/:resource", addAttribute )
</code></pre>
<p>Not much going on here: any POST to the <code>/attributes/:resource</code> URL will now be handled by the <code>addAttribute</code> function. The first version of the function will be a simple placeholder, one that shows me that I have the POST support wired up correctly. Here's the function:</p>
<pre><code>func addAttribute(params martini.Params, writer http.ResponseWriter, db *mgo.Database) (int, string) {
    resource := strings.ToLower(params["resource"])
    writer.Header().Set("Content-Type", "application/json")
    return http.StatusOK, "POST placeholder " + resource
}
</code></pre>
<p>Nothing new happening here other than the fact it handles a POST instead of a GET. After I recompiled I used the following curl command to test the code:</p>
<pre><code>curl -X POST http://localhost:3000/attributes/tv
</code></pre>
<p>which returned <code>POST placeholder tv</code>. Now that I know the route and handler are wired up correctly, it's time to add code to process the JSON data.</p>
<h3><em>Adding JSON Support</em></h3>
<p>In order to support the JSON POST I'm going to use another package, this one from the martini-contrib ecosystem: the <a href="https://github.com/martini-contrib/binding">martini-contrib/binding</a> package. The binding package will allow me to tell martini to take the submitted JSON object and magically load it into an attribute struct. To add the support I need to modify the route definition in the <code>main()</code> function. Specifically, I need to tell the binding package which struct to bind the incoming JSON to. The updated definition looks like this:</p>
<pre><code>m.Post("/attributes/:resource", binding.Json( attribute{} ), addAttribute )
</code></pre>
<p>I also need to update the handler function to support the new attribute parameter.</p>
<pre><code>func addAttribute(attr attribute, params martini.Params, writer http.ResponseWriter, db *mgo.Database) (int, string)
</code></pre>
<p>Now the handler can access the posted JSON data through the attr parameter. Running the curl POST command with JSON data will return a JSON string that represents the submitted data.</p>
<pre><code>curl -X POST -d '{
    "name":"location",
    "type":"string",
    "description":"Where the TV is located, which area of a facility is it loaded in",
    "required":"true"}' \
    -H "Content-Type: application/json" \
    http://localhost:3000/attributes/tv
</code></pre>
<p>Returns the following:</p>
<pre><code>{"name":"location","type":"string","description":"Where the TV is located, which area of a facility is it loaded in","required":false}
</code></pre>
<p>The code is now parsing the JSON and converting it into something I can use. That's great, but what happens if I run this curl command instead?</p>
<pre><code>curl -XPOST http://localhost:3000/attributes/tv
</code></pre>
<p>The curl command returns an empty attribute struct. I don't want that to happen: I want the name field to be required on all POSTs, and if nothing is supplied for the type field I want it to default to 'string'. Of course I could do this after the new attribute struct is handed to the handler function, but I want it done outside of the handler code. Thankfully, the binding package has that functionality built in.</p>
<h4><em>Adding Validation</em></h4>
<p>The martini-contrib/binding package defines a Validator interface that contains one method declaration, <code>Validate(*Errors, *http.Request)</code>. In order to add my validation requirements I need to implement that method on the attribute struct.</p>
<pre><code>func (attr *attribute) Validate(errors *binding.Errors, req *http.Request) {
    if attr.Name == "" {
        errors.Overall["missing-requirement"] = "name is a required field"
    }
    if attr.DataType == "" {
        attr.DataType = "string"
    }
}
</code></pre>
<p>In addition to the interface, the package also has an Errors struct that contains two maps, Overall and Fields. I'm using the Overall map to report validation errors. Specifically, I'm using "missing-requirement" as the key for the error message when the name is empty. While I'm checking for the required field I also check what data type was provided in the JSON object's type property. If nothing was provided I default it to string.</p>
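<p>The same validation logic can be exercised on its own. In this sketch (my own, for illustration) a plain map stands in for the Overall field of binding.Errors, and the attribute struct is abbreviated:</p>

```go
package main

import "fmt"

// an abbreviated attribute struct for this sketch
type attribute struct {
	Name     string
	DataType string
}

// validate applies the same rules as the Validate method above;
// the overall map stands in for binding.Errors' Overall field
func validate(attr *attribute, overall map[string]string) {
	if attr.Name == "" {
		overall["missing-requirement"] = "name is a required field"
	}
	if attr.DataType == "" {
		attr.DataType = "string"
	}
}

func main() {
	errs := map[string]string{}
	attr := attribute{}
	validate(&attr, errs)
	fmt.Println(errs["missing-requirement"], attr.DataType) // name is a required field string
}
```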
<p>After my validation is complete I can check whether there were any validation errors by calling errors.Count(). If that is greater than zero then I know a validation error occurred. To let the client know, I will send a response with a 409 status and a JSON string representation of the errorMsg struct. Otherwise, I will return a 200 and a JSON string representation of the new attribute. Here's the latest version of the POST handler.</p>
<pre><code>func addAttribute(attr attribute, err binding.Errors, params martini.Params, writer http.ResponseWriter, db *mgo.Database) (int, string) {
    writer.Header().Set("Content-Type", "application/json")
    if err.Count() > 0 {
        return http.StatusConflict, jsonString(errorMsg{err.Overall["missing-requirement"]})
    }
    return http.StatusOK, jsonString(attr)
}
</code></pre>
<p>You may be wondering why I chose to return HTTP status code 409. I chose it after reading <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10">what the W3C had to say</a>. Basically it boiled down to this line:</p>
<blockquote><p>This code is only allowed in situations where it is expected that the user might be able to resolve the conflict and resubmit the request. The response body SHOULD include enough information for the user to recognize the source of the conflict</p></blockquote>
<p>After grabbing the value of the resource parameter I create a query that will be used to find the document. Once the query is defined I create an <a href="http://godoc.org/labix.org/v2/mgo#Change">mgo.Change</a> struct. This is how I will be able to modify the data if it exists or insert it if it doesn't (the latter is done by setting Upsert to true). The Update field is what does the actual changing of the document. If you aren't familiar with MongoDB, what my statement says is: if a matching document is found, add the attribute to the attributes array only if it isn't already in there.</p>
<p>Now that I have my query and change objects in place it's time to run the database update. The Apply call returns two results, a <a href="http://godoc.org/labix.org/v2/mgo#ChangeInfo">ChangeInfo</a> and a Go error value. Since I don't care about the ChangeInfo struct, I put a <code>_</code> in its place. Obviously I'm interested in any errors that occur, so I grab that value and check it. If an error does occur I send an errorMsg struct back to the caller. If no error occurs then I send back a 200 status and an empty object.</p>
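<p>The upsert code described in the two paragraphs above isn't shown; based on the description, a hypothetical reconstruction might look like the following. The field names and the <code>jsonString</code>/<code>errorMsg</code> helpers are assumptions carried over from the earlier snippets, and this fragment belongs inside the addAttribute handler rather than standing on its own:</p>

```go
// Hypothetical reconstruction of the upsert described above.
query := bson.M{"resource": resource}

change := mgo.Change{
	// $addToSet only appends attr if it is not already in the array
	Update: bson.M{"$addToSet": bson.M{"attributes": attr}},
	// insert the document if no match is found
	Upsert: true,
}

// Apply returns (*mgo.ChangeInfo, error); the ChangeInfo is discarded
var result resourceAttributes
_, err := db.C("resource_attributes").Find(query).Apply(change, &result)
if err != nil {
	return http.StatusInternalServerError, jsonString(errorMsg{err.Error()})
}
return http.StatusOK, "{}"
```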
<h2>The Finishing Touches</h2>
<p>Now that I'm storing data I've noticed that GET /attributes/:resource returns a JSON array when it should return an object. I tweaked the section of getAttributes that returns the results of the query to look like this:</p>
<pre><code>attrs := resourceAttributes{}
err := db.C("resource_attributes").Find(bson.M{"resource": resource}).One(&attrs)
if err == nil {
</code></pre>
<p>I changed the attrs declaration to be a single object instead of an array; you'll see why on the next line. I also switched from the <code>All()</code> function to the <code>One()</code> call, since only one document will be returned. If a record isn't found, <code>One()</code> returns an error, so I've changed the if statement to use the presence of an error to determine which JSON string to return.</p>
<h2>Summary</h2>
<p>I've changed the attributes service to read from a MongoDB database. I did that by adding the <code>Mongo()</code> middleware function so that the handler functions can interact with the database. I added a POST route that illustrated how to use the martini-contrib/binding package to bind the incoming JSON to the attribute struct. After finishing up the POST method I went back to the GET handler to change the returned data from an array to a single object.</p>
<p>In my next post I'm going to create my first Go package. The package will be used to interact with <a href="https://github.com/coreos/etcd">etcd</a> to retrieve the database configuration information.</p>
<h2>Resources</h2>
<h4>Packages</h4>
<ul>
<li><a href="https://github.com/martini-contrib/binding">martini-contrib/binding</a></li>
<li><a href="http://godoc.org/labix.org/v2/mgo">mgo</a></li>
</ul>
<h4>Other Links</h4>
<ul>
<li><a href="http://golang.org/doc/install#install">Go Install</a></li>
<li><a href="https://www.mongodb.org/downloads">MongoDB</a></li>
<li><a href="https://mongolab.com/welcome/">MongoLabs</a></li>
<li><a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10">W3C Status Codes</a></li>
</ul>
<h4>Github and Gists</h4>
<ul>
<li><a href="https://github.com/rippinrobr/learning-go-with-martini/tree/2nd-post">This Posts code in github (branch 2nd-post)</a></li>
<li><a href="https://gist.github.com/rippinrobr/9122011">The Starting Gist</a></li>
</ul>
<h4>Previous Posts in the Series</h4>
<ul>
<li><a href="http://progadventure.blogspot.com/2014/02/learning-go-with-martini-basics.html">#1 Learning Go with Martini - The Basics</a></li>
</ul>
Learning Go with a Martini - The Basics (2014-02-26)
<h2>Intro</h2>
<p>I'm working with my son to build a system to manage all sorts of data, files, and devices. Our apps will need to run on OSX, Windows, Linux, and possibly Raspberry Pi machines. In addition to multiple platform support, our apps need to be fast; we will eventually be working with near real-time data. So I started looking for a language that would fit the bill, and Go caught my eye. Why? For one, it compiles, I mean it <em>really</em> compiles down to an actual executable, not byte code. Another reason is that some of the speed benchmarks I saw said Go was second only to C. Also, if I can't get Go to be fast enough I can load C libraries too. Lastly, Go appears to have a vibrant third-party package ecosystem. So I thought I'd give Go a try. Since most of my day job work is in web development, I thought I'd learn Go while I build out a web service and share my experience with others who may be curious about Go.</p>
<h2>Goal</h2>
<p>In this post we will start building out an API to manage attributes. Every resource in our system will have attributes, and these attributes will differ for each type of resource. The API will provide the necessary functionality to assign, remove, and list the attributes assigned to a particular resource. In this post we will set up the Go development environment and create the basics of the GET /attributes/:resource route.</p>
<h2>Setup</h2>
<h4><em>Installing and Configuring Go</em></h4>
<p>Installing Go is pretty straightforward; just follow the instructions on golang.org's install page. To ensure Go is set up correctly in your development environment, run <code>go version</code> and you should see output similar to this: <code>go version go1.1.2 darwin/386</code></p>
<p>After installing Go I need to do a few tasks to set up my Go workspace. The Go tools are built to work with a certain directory structure, mainly a 'home' directory that contains three subdirectories: src, where my source files will go; pkg, where any third-party packages I install will live; and bin, where any executables I install will live. Once the directories have been created, a GOPATH environment variable should be set to that 'home' directory. In my environment my <code>$GOPATH</code> variable is set to <code>~/src/go</code>. For a more detailed overview of the workspace layout, read the <a href="http://golang.org/doc/code.html#GOPATH">"How to Write Go Code"</a> page on the golang.org site.</p>
<h4><em>Editor Setup</em></h4>
<p>I use emacs to write the vast majority of my code. I would imagine that my editor choice is not the norm for most of you reading this post so I’m going to add a few other editors that have Go support.</p>
<table>
<thead>
<tr>
<th> Editor </th>
<th> Go Support </th>
</tr>
</thead>
<tbody>
<tr>
<td> Eclipse </td>
<td> <a href="https://github.com/sesteel/goclipse">goclipse</a> </td>
</tr>
<tr>
<td> Emacs </td>
<td> <a href="http://dominik.honnef.co/posts/2013/03/writing_go_in_emacs/">go-mode.el</a> </td>
</tr>
<tr>
<td> Sublime </td>
<td> <a href="https://github.com/DisposaBoy/GoSublime">go plugin</a> </td>
</tr>
<tr>
<td> Vim </td>
<td> <a href="http://golang.org/misc/vim/readme.txt">go language vim support</a> </td>
</tr>
</tbody>
</table>
<p>Follow the instructions and your editing environment will be ready to ‘Go’.</p>
<h4><em>Installing Third Party Packages</em></h4>
<p>The last step before we start writing code is to install the lone third-party package that we will use to create the attributes API. Go makes it easy to install packages by providing the go get tool. As you can see below, you just add the package you wish to install after go get. When the install completes, the package can be found by running <code>ls $GOPATH/pkg</code> in a terminal window.</p>
<pre><code>go get github.com/codegangsta/martini
</code></pre>
<p>Now that I have Go installed, my work environment set up including my editor, and Martini installed, I'm ready to start coding. I will be writing the code for this blog series in the <code>$GOPATH/src/github.com/rippinrobr/martini-to-go-posts</code> directory.</p>
<h2>Writing the Code</h2>
<h4><em>A Basic HTTP Server</em></h4>
<p>One of the reasons why I chose to get familiar with Martini is that it allows you to get a functional web server up and running with about 10 lines of code. So, to make sure I have everything in place and can respond to an HTTP GET / request, I'm going to start with a very basic app. This app will respond to the HTTP GET / request by returning a string that reads "Where are the attributes?!?!".
Here's the code</p>
<pre><code>package main

// loading in the Martini package
import "github.com/codegangsta/martini"

func main() {
    // if you are new to Go, := is a short variable declaration
    m := martini.Classic()

    // the func() call creates an anonymous function that returns a string
    m.Get("/", func() string {
        return "Where are the attributes?!?!"
    })

    m.Run()
}
</code></pre>
<p>Before you run it, let's go over the code. The first line defines the main package; all Go code must be in a package. This code belongs to the main package, which is a special package in Go: it is where the main function must live for all executable Go projects. Once I've defined the package I need to let Go know what packages I want to import. In this code I am only importing the martini package. This statement makes the martini functions, structs, and interfaces available to my code. To call anything in this package I need to preface the call with <code>martini</code>. In this example I'm only using one function from martini, <code>martini.Classic()</code>.</p>
<p>The <code>martini.Classic()</code> call creates the classic martini object that I will use to declare the supported routes and start the service. This particular line makes use of the 'short variable declaration' syntax. The := operator determines the type of the value on the right side and creates a variable of that type on the left side. The := syntax can only be used within the body of a function.</p>
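<p>A tiny illustration of the short variable declaration, with names I made up for the example:</p>

```go
package main

import "fmt"

// describe uses := declarations; the types (int and string) are
// inferred from the right-hand side, and := only works inside a function
func describe() string {
	port := 3000
	route := "/"
	return fmt.Sprintf("GET %s on port %d", route, port)
}

func main() {
	fmt.Println(describe()) // GET / on port 3000
}
```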
<p>Now that I have created my martini object I can start setting up to handle our HTTP GET / request. Adding a route to handle is pretty straightforward. For this simple example I declare the route I want to respond to, "/", and use an anonymous function to handle the requests. If you are new to Go, the <code>string</code> that follows func() is the return type of the function.</p>
<pre><code>m.Get("/", func() string {
    return "Where are the attributes?!?!"
})
</code></pre>
<p>Since this is the only route I’ve declared any other route sent to the service will result in a 404 error. The last bit of code is the Run() call which starts the HTTP server. By default the server will listen on port 3000, if you want to change that port set the PORT environment variable to the new value and restart the server. Martini automatically looks for the PORT variable.</p>
<p>The next step is to actually see the code in action. The easiest way to do that is by calling go run.</p>
<pre><code>go run attr-server.go
</code></pre>
<p>The go run command will compile and run the application. If there are no errors you should see a message <code>[martini] listening on host:port :8000</code>. <em>My server is running on port 8000 because I've set my PORT environment variable to 8000</em>.</p>
<p>Now, to make sure that the response is what I expect. I'm going to run:</p>
<pre><code>curl http://localhost:8000/
</code></pre>
<p>And I should see the <code>Where are the attributes?!?!</code> string returned. As you can see, it is pretty simple to get a basic HTTP server up and running.</p>
<h4><em>GET /attributes/:resource</em></h4>
<p>Ok, now that I've shown you the basics of Martini, it's time to build out our first 'real' route. Remember, the goal of this service is to track a resource's attributes. Resources can be anything: a TV, a scoreboard, ad boards, etc. Each of these will have its own set of attributes. For this blog post the /attributes/:resource route needs to do the following:</p>
<ol>
<li>If the resource requested is a TV then we will return a JSON object with all the attributes assigned to a TV and the HTTP Status code of 200</li>
<li>If the resource is not a TV then we will return a JSON error object that we will define and a status code of 404.</li>
</ol>
<h5><em>New Packages</em></h5>
<p>I am going to need to include a few more packages to the code to meet my needs. The first import is the <code>net/http</code> package. I’m importing this package so I can use <code>http.StatusOK</code> instead of the number 200, it will make the code a little more readable. The next new package is the <code>strings</code> package. I’m using this package so I can convert the requested resource to lower case so that I can ensure my string comparison is comparing the input in the same case as my test string.</p>
<pre><code>import (
    "net/http" // this will allow us to use http.StatusOK and http.StatusNotFound instead of 200 and 404
    "strings"  // I'm adding this so I can ensure that we are comparing lower case strings.

    "github.com/codegangsta/martini"
)
</code></pre>
<p>Notice that the import call has changed. When there are multiple packages to import you can group them together as I have above or you could use an import call for each one. Either way works but I believe the way I have it here is the more idiomatic Go way.</p>
<h5><em>The New GET Handler</em></h5>
<p>The next change is that I’ve replaced the m.Get call we had previously with this one:</p>
<pre><code>m.Get("/attributes/:resource", func(params martini.Params) (int, string) {
    resource := strings.ToLower(params["resource"])
    if resource == "tv" {
        return http.StatusOK, "a TV attributes object will be returned here"
    } else {
        return http.StatusNotFound, "JSON Object here"
    }
})
</code></pre>
<p>The new m.Get call has a bunch of new parts. The first, <code>/attributes/:resource</code>, tells martini what route to look for. The :resource indicates to martini that whatever value appears there should be stored in the params map. The value will be stored under the key 'resource'; notice that the key does not have the leading colon. This handler's function has one parameter, params, which will contain all route parameters. Next is the return value declaration. This version of the handler returns two values: the HTTP status code and a string.</p>
<p>The guts of the function determine whether the resource being requested is a TV or not. If it is, return OK; if not, return a not-found error. Right now the code has placeholders in it, but soon the strings will be string representations of a JSON object. If you are new to Go like me, the <code>if</code> statement looks a little naked: there are no parentheses around the test portion. The return statements are a little different than what I'm used to seeing also. Remember that this function has two return values, and on the return lines the values are separated by a comma.</p>
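<p>Multiple return values work the same way in any plain function, not just martini handlers. Here is a standalone version of the same decision; the <code>status</code> function name is my own, for illustration:</p>

```go
package main

import "fmt"

// status mirrors the handler's two return values: an HTTP status
// code and a body string (the function name is made up for this sketch)
func status(resource string) (int, string) {
	if resource == "tv" {
		return 200, "a TV attributes object will be returned here"
	}
	return 404, "JSON Object here"
}

func main() {
	code, body := status("tvs")
	fmt.Println(code, body) // 404 JSON Object here
}
```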
<p>Now that we've talked it to death, if you want to see it in action, download the code from <a href="https://gist.github.com/rippinrobr/9084362">https://gist.github.com/rippinrobr/9084362</a> and run it using:</p>
<pre><code>go run attr-server.go
</code></pre>
<p>In a separate terminal run the following curl command and you should see similar output.</p>
<pre><code>curl -v http://localhost:3000/attributes/tvs
* Adding handle: conn: 0x7f911c004400
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7f911c004400) send_pipe: 1, recv_pipe: 0
* About to connect() to localhost port 3000 (#0)
* Trying ::1...
* Connected to localhost (::1) port 3000 (#0)
> GET /attributes/tvs HTTP/1.1
> User-Agent: curl/7.30.0
> Host: localhost:3000
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Content-Type: text/plain; charset=utf-8
< Content-Length: 16
< Date: Wed, 19 Feb 2014 01:27:00 GMT
<
* Connection #0 to host localhost left intact
JSON Object here
</code></pre>
<p>The <code>curl -v</code> displayed enough output so that you can see just about everything that happened during the request. I’m using it here so you can see that the call above does in fact return a 404 code in addition to the error message. To see what happens when you pass it TV rerun the command after removing the trailing s from tvs. The HTTP/1.1 status should now be 200.</p>
<h4><em>Sending the JSON Object</em></h4>
<p>Now that I have the basic logic in place it's time to start building out the infrastructure to support resource attributes. To help model the resource-to-attributes relationship I am introducing two new structs, Attribute and ResourceAttributes.</p>
<pre><code>type Attribute struct {
    Name        string `json:"name"`
    DataType    string `json:"type"`
    Description string `json:"description"`
    Required    bool   `json:"required"`
}

type ResourceAttributes struct {
    ResourceName string      `json:"resourceName"`
    Attributes   []Attribute `json:"attributes"`
}
</code></pre>
<p>The Attribute struct contains all of the information I want to store about each attribute. The ResourceAttributes struct is used to represent the relationship between a resource and its attributes. Since these structs will be converted to JSON, and I want the field names to follow proper JSON naming conventions, I'm using the field's tag value to convert the names to lower case during the JSON conversion process.</p>
<p>The updated handler will be returning a string representation of the JSON object, and in order to do that I need to create a String method for each of the types I've declared. Since I want to send all of my structs back to the client as JSON objects, I need to create two String() methods. The method below is the one for the ResourceAttributes struct.</p>
<pre><code>func (ra ResourceAttributes) String() (s string) {
    jsonObj, err := json.Marshal(ra)
    if err != nil {
        s = ""
    } else {
        s = string( jsonObj )
    }
    return
}
</code></pre>
<p>There are two differences between this method declaration and the functions I declared earlier. The first is what looks like a parameter declaration right after the <code>func</code> keyword: it declares the 'receiver' type for the method, which means any ResourceAttributes value can call the <code>String()</code> method. The second difference is the way the return value is declared. This method makes use of Go's named return values: whatever the value of the variable <code>s</code> is at the time the method returns is the value the method returns.</p>
<p>First, the method converts the receiver into JSON. If there are no errors returned during the conversion process then the JSON representation is converted to a string and stored in <code>s</code>. If an error occurs then s is set to an empty string.</p>
<p>To be able to send all of the structs I declared as JSON I would have to create a <code>String()</code> method for each type. The methods would be exactly the same except for the receiver, which is not exactly keeping the code DRY. Thankfully, shortly after writing the code for this part of the blog I reached a section on interfaces in <a href="http://www.amazon.com/Programming-Language-Phrasebook-Developers-Library/dp/0321817141/ref=tmm_pap_title_0?ie=UTF8&qid=1393344859&sr=8-1">The Go Programming Language Phrasebook</a>, and I was happy to see that using interfaces would let me DRY up the String() methods.</p>
<p>A Go interface is a set of methods; any struct that has all of the methods in the interface declaration is said to implement the interface. An interface can have one method, ten methods, or none at all, so I decided to try an empty interface declaration that would stay within my main package.</p>
<pre><code>type jsonConvertible interface { }
</code></pre>
<p>Since the name of this type starts with a lower-case character, it is only visible within the package it was declared in. Now every struct I declare in the <code>main</code> package implements the jsonConvertible interface. After creating the interface I moved away from methods and back to a normal function, JsonString, which takes a single parameter of type jsonConvertible. Now I have one function that converts any of my structs into a JSON string.</p>
<pre><code>func JsonString( obj jsonConvertible ) (s string) {
    jsonObj, err := json.Marshal( obj )

    if err != nil {
        s = ""
    } else {
        s = string( jsonObj )
    }
    return
}
</code></pre>
<p>If you want to see this version of the code in action you can grab it in this <a href="https://gist.github.com/rippinrobr/9114673">gist</a> and <code>go run</code> it.</p>
<h4><em>Setting the Content-type to application/json</em></h4>
<p>When you run the latest and greatest you’ll see that the server does send back a JSON string, but if you look at the headers you can see that the Content-Type is set to text/plain. I want the Content-Type to be application/json, which means I need to set it before sending the response. Luckily, Martini makes this as easy as adding a new parameter, writer http.ResponseWriter, to my handler function. I can use the new parameter to set the correct Content-Type.</p>
<pre><code>writer.Header().Set("Content-Type", "application/json")
</code></pre>
<p>Now whichever object is returned, it will have the correct Content-Type set. To see for yourself, <a href="https://github.com/rippinrobr/learning-go-with-martini/tree/1st-post">clone the repository, checkout the 1st-post branch and run it</a>. You’ll see in the headers that the correct Content-Type is now set.</p>
<h2>Summary</h2>
<p>With that, I’ve completed everything that I set out to do by the end of this post. I showed you where to get Go and how to set up your environment. I walked you through how to create structs, respond to HTTP GET calls, and return a JSON string. In addition, I introduced you to interfaces in Go.</p>
<p>In my next post, I will add CRUD functionality using MongoDB, showing you how to add middleware to pass along database connectivity to the request handlers. By the end of the second post the attr-server will retrieve all available attributes assigned to a TV from the database using our /attributes/:resource route.</p>
<h2>Resources</h2>
<p><em>Editors</em></p>
<table>
<thead>
<tr>
<th> Editor </th>
<th> Plugin </th>
</tr>
</thead>
<tbody>
<tr>
<td> Eclipse </td>
<td> <a href="https://github.com/sesteel/goclipse">goclipse</a> </td>
</tr>
<tr>
<td> Emacs </td>
<td> <a href="http://dominik.honnef.co/posts/2013/03/writing_go_in_emacs/">go-mode.el</a> </td>
</tr>
<tr>
<td> Sublime </td>
<td> <a href="https://github.com/DisposaBoy/GoSublime">go plugin</a> </td>
</tr>
<tr>
<td> Vim </td>
<td> <a href="http://golang.org/misc/vim/readme.txt">go language vim support</a> </td>
</tr>
</tbody>
</table>
<p><em>Go Language</em></p>
<ul>
<li><a href="http://golang.org/doc/install#install">golang.org’s install page</a></li>
<li><a href="http://golang.org/doc/code.html#GOPATH">How to Write Go Code</a></li>
<li><a href="http://martini.codegangsta.io/">The Martini Project</a></li>
</ul>
<p><em>GitHub & Gists</em></p>
<ul>
<li><a href="https://github.com/rippinrobr/learning-go-with-martini">My Github project</a> - each post will have its own branch with the master branch being the latest and greatest.</li>
<li><a href="https://gist.github.com/rippinrobr/9025157">Where are the attributes</a></li>
<li><a href="https://gist.github.com/rippinrobr/9084362">The New GET Handler</a></li>
<li><a href="https://gist.github.com/rippinrobr/9114673">The DRY Conversion</a></li>
</ul>
<p><em>Blogs & Books</em></p>
<ul>
<li><a href="http://www.amazon.com/Programming-Language-Phrasebook-Developers-Library/dp/0321817141/ref=tmm_pap_title_0?ie=UTF8&qid=1393344859&sr=8-1">The Go Programming Language Phrasebook</a></li>
</ul>Robhttp://www.blogger.com/profile/04054382840301560558noreply@blogger.com26tag:blogger.com,1999:blog-2763068378786653633.post-53157747878040859422013-11-14T09:44:00.000-08:002013-11-14T09:44:33.038-08:00Intro to the MEAN Stack - Part 1 - The Data<p>I recently changed jobs to join a startup. One of the many reasons why I took the job is the fact that the software is built with the <a href="http://blog.mongodb.org/post/49262866911/the-mean-stack-mongodb-expressjs-angularjs-and">MEAN stack</a> stack – MongoDb, Express.js, AngularjS and Node.js. Prior to joining the company I had dabbled in each of the parts of the stack but I hadn’t used any of them on a ‘real’ project. So to reinforce what I’m learning during the day by using the MEAN stack to build an app to view the stats for the New York/San Francisco Giants. I thought I would share my experiences hopefully to help someone else who's learning the MEAN stack. This is the first post in a three part series that will walk you through building my app. The planned posts are:</p>
<p><strong>Part 1 – The Data:</strong>
Converting the MySQL data model to a model suitable for MongoDB.</p>
<p><strong>Part 2 – The API:</strong>
Building out a Node.js-based API that will allow us to retrieve the stats.</p>
<p><strong>Part 3 – The UI:</strong>
Building out an AngularJS UI.</p>
<h2>Goal</h2>
<p>The goal for this post is to have the data modeled and loaded into a MongoDB database so we can use it in the next post.</p>
<h2>Setup</h2>
<p>If you want to 'follow along' with this post you will need to <a href="http://www.mongodb.org/downloads">download and install MongoDB</a> and <a href="http://www.nodejs.org/download">Node.js</a>.
Once you have installed mongo and node you can <a href="https://github.com/rippinrobr/mean-blog-series">download the part 1 code</a>. Keep in mind that before you run the load scripts you will need to install the node packages. To do that, simply change into the &lt;YOUR CODE DIR&gt;/post-1-the-data/scripts directory and run:</p>
<pre>npm install</pre>
<p>npm is the Node package manager. The install option tells npm to read the package.json file and install any of the requirements that have not already been installed. If you plan on loading the Giants data I have already parsed out, you are good to go. However, if you want to start from the beginning yourself, you will need to download the data from the <a href="https://github.com/chadwickbureau/baseballdatabank/tree/2012update">Baseball Databank project</a>. The most up-to-date branch is the 2012update branch. After cloning the repository, read the <a href="https://github.com/rippinrobr/mean-blog-series/blob/master/post-1-the-data/scripts/README.md">scripts/README.md</a> and you will be ready to generate your own Giants data, or any other team’s data.</p>
<h2>My Two Second MongoDB Introduction</h2>
<p>Before I dive into the meat of the post I want to give you a <em>very</em> brief MongoDB intro. We will be storing data in collections, which are analogous to tables. Each collection contains documents, which you can think of as rows in a relational database. As the NoSQL term implies, we will not be using SQL to retrieve the data; instead, we will use JavaScript.</p>
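<p>To make the "collections are tables, documents are rows" analogy concrete, here is a rough sketch in plain JavaScript (the documents are illustrative, not taken from the real data set): a collection behaves like an array of documents, and a query is conceptually just a filter over them.</p>

```javascript
// A "collection" is like a table; each object in it is a "document" (a row).
const managers = [
  { _id: 'hornsro01', nameLast: 'Hornsby', nameFirst: 'Rogers' },
  { _id: 'mcgrajo01', nameLast: 'McGraw',  nameFirst: 'John' }
];

// db.managers.find({ nameLast: 'Hornsby' }) behaves conceptually like a filter.
const matches = managers.filter(doc => doc.nameLast === 'Hornsby');
console.log(matches.length);       // 1
console.log(matches[0].nameFirst); // Rogers
```

<p>Unlike rows, documents are free-form JSON, which is what lets two documents in the same collection carry different properties.</p>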
<h2>The Data</h2>
<p>As a kid who grew up reading box scores every morning while eating breakfast, I am very thankful that there is an open source project dedicated to providing the statistics for Major League Baseball. The data is distributed as 24 CSV files; each file maps to the MySQL table that was used to generate it. All 24 data files describe a manager, a player, or a team, so as we build out the database we will create and load three collections: managers, players, and seasons.</p>
<p>Each collection will house documents designed so that one document represents one player, one manager, or one season. This will make development of the API much easier: for the most part, one call should retrieve everything we need. To give you an idea of what type of data the documents will contain, I’ve created a map between the collections and the files. Remember, each file is a table in the Baseball Databank database. In some cases you’d have to write some pretty complicated joins to get this data; in Mongo, our queries will be straightforward.</p>
<p><em>Managers Collection (4 tables) => AwardsManagers, Managers, ManagersHalf, Master</em></p>
<p><em>The Players Collection (11 tables) => AllStars, Appearances, AwardsPlayers, Batting, BattingPost, Fielding, FieldingPost, Master, Pitching, PitchingPost, Salaries</em></p>
<p><em>The Seasons Collection (3 tables) => SeriesPost, Teams, TeamsHalf</em></p>
<p>If you’d like a description of the tables checkout the <a href="https://github.com/chadwickbureau/baseballdatabank/blob/2012update/official/readme59.txt">Baseball Databank README</a></p>
<h2>The MongoDB Side</h2>
<p>For the rest of this post I am going to walk you through an example of a document that is stored in each collection. The description will also have examples of how to retrieve the data from the mongo console app. We will start with the simplest of the collections, the managers.</p>
<hr>
<h4>Managers</h4>
<p>Any manager who has managed at least one game for either the New York or San Francisco Giants will have a document in this collection. A manager's document has demographic information and the managerial record, plus any awards they may have won. Our example managerial document is Rogers Hornsby’s. He managed the New York Giants in 1927.</p>
<script src="https://gist.github.com/rippinrobr/7466490.js"></script>
<p>The first property of the document is the _id property. By default each document gets an _id field containing a randomly generated ObjectID created by MongoDB during the insert. Here's what an ObjectID looks like:
<pre>ObjectId("528398bb3b06760000000004")</pre>
In some cases that may work fine, but for this collection I am using the Baseball Databank managerID value. This lets me take advantage of the built-in unique constraint on the _id index. The document's properties are self-explanatory, except for the record property, which deserves some discussion.</p>
<p>The record property is an array of JSON objects. Each entry in the array represents a full or partial season with the Giants. Since Hornsby only managed part of one season for the Giants, he has only one item in the array. You can think of the entries in the record array as rows in a record table in a relational database. Using an array lets you keep all data related to a manager in one document, making it easy to retrieve all of the manager's data when needed. We will make use of arrays in all of our documents. If the Giants had made it to the playoffs, or if Hornsby had won any managerial awards, his document would have two more array properties: playoffs and awards.</p>
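<p>As a plain-JavaScript sketch (field names modeled on the document above, but the win/loss numbers purely illustrative), the embedded record array replaces what would be child rows in a relational record table:</p>

```javascript
// One manager document; the record array holds one entry per full or
// partial season, where a relational design would use one row per season.
const hornsby = {
  _id: 'hornsro01',
  nameLast: 'Hornsby',
  record: [
    { yearID: 1927, teamID: 'NY1', inseason: 2, W: 22, L: 10 } // values illustrative
  ]
};

// All of the manager's data travels together, so no join is needed.
const stint = hornsby.record[0];
console.log(stint.yearID); // 1927
```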
<p>You may not have caught that last bit: documents in the same collection do not have to have the same properties. In all of our collections the documents will have about 95% of their properties in common. I will show you how to check for the existence of a property when querying the database in a bit. Stay tuned.</p>
<p>You might be wondering how I retrieved the Hornsby document. Let me walk you through the queries I used.</p>
<p>At a command prompt fire up the mongo client by running:</p>
<pre>
mongo giants
</pre>
<p>This will connect you to a local instance of MongoDB and switch you into the giants database. You could also run just mongo to connect and then run use giants at the mongo prompt to do the same thing. For more options, read the <a href="http://docs.mongodb.org/v2.2/mongo/">mongo shell documentation</a>.</p>
<p>Now that I'm in the giants database I can retrieve the Hornsby document by running:</p>
<pre>
db.managers.find( { nameLast: 'Hornsby' }).pretty()
</pre>
<p>This query says: in the current database, search the managers collection for documents whose nameLast property equals 'Hornsby'. The <a href="http://docs.mongodb.org/manual/reference/method/db.collection.find/">find</a> call returns a cursor over all of the documents that match the query. The pretty function formats the results of the find in a more human-readable fashion; try the find call with and without pretty and you will see what I mean.</p>
<p>If you look closely at the Hornsby document you will see that his only entry in the record array has the inseason property set to 2. This means there were at least two managers for the 1927 Giants. To find out who the other managers were, we can run the following query:</p>
<pre>
db.managers.find( { 'record.yearID' : 1927} )
</pre>
<p>This will return two documents: one for John McGraw and one for Hornsby. Look closely at the query and you will see that I'm using 'dot notation' to find the managers for the 1927 season. Like the previous find, it returns the matching documents. Unlike the previous query, this one looks inside the record array: each entry in the record array has its yearID property compared to 1927, and if a document has at least one entry with that yearID then the document is returned. Going back to my analogy of each entry in the record array being a row in a relational database table, you can almost think of the dot notation as a join. One thing to keep in mind: whenever you use dot notation you must quote the property, as I have done. Failing to do so will cause an error from MongoDB.</p>
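<p>Conceptually, here is what the dot-notation query is doing, sketched in plain JavaScript (illustrative documents): a document matches 'record.yearID' : 1927 if <em>any</em> entry in its record array has that yearID.</p>

```javascript
const managers = [
  { nameLast: 'Hornsby', record: [{ yearID: 1927, inseason: 2 }] },
  { nameLast: 'McGraw',  record: [{ yearID: 1926 }, { yearID: 1927, inseason: 1 }] },
  { nameLast: 'Terry',   record: [{ yearID: 1933 }] }
];

// { 'record.yearID': 1927 } reaches inside each document's array.
const in1927 = managers.filter(doc =>
  doc.record.some(entry => entry.yearID === 1927));

console.log(in1927.map(doc => doc.nameLast)); // [ 'Hornsby', 'McGraw' ]
```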
<p>
Remember I said I would show you how to test for the existence of a property? We will get a count of all the managers who have the playoffs property in their document.
</p>
<pre>
db.managers.find({playoffs: {$exists:true}}).count()
</pre>
<p>
This query simply says: find all the documents in managers that have the playoffs property. I am using <a href="http://docs.mongodb.org/manual/reference/operator/query/exists/">$exists</a>, which is one of the built-in query operators.
To see what other operators are available, check out the <a href="http://docs.mongodb.org/manual/reference/operator/">MongoDB Operators</a> page.
</p>
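<p>In plain-JavaScript terms (illustrative documents again), $exists is simply a check for the presence of a property, independent of its value:</p>

```javascript
const managers = [
  { nameLast: 'Hornsby' },                               // no playoffs property
  { nameLast: 'Durocher', playoffs: [{ yearID: 1951 }] } // has a playoffs property
];

// { playoffs: { $exists: true } } keeps documents that HAVE the property.
const count = managers.filter(doc => 'playoffs' in doc).length;
console.log(count); // 1
```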
<p>The last bit of work I need to do is to set up the indexes on the managers collection. Since most of the queries I run will be on either the last name or a particular season, I will add indexes on nameLast and record.yearID. Here's how to create an index in mongo:</p>
<pre>
db.managers.ensureIndex({ nameLast: 1 })
db.managers.ensureIndex({ 'record.yearID': 1 })
</pre>
<p>These two calls create two separate indexes, on the nameLast and record.yearID properties. Notice that I can use dot notation when declaring an index, too. The 1 indicates that we want the index created in ascending order; to create an index that uses descending order, swap the 1 for a -1. Now our managers collection has three indexes: one on the _id property, one on the nameLast property, and one on the record.yearID property. To see what indexes are on a collection you can run:</p>
<pre>
db.managers.getIndexes();
</pre>
<p>For more information on ensureIndex and getIndexes visit:
<a href="http://docs.mongodb.org/manual/reference/method/db.collection.ensureIndex/">http://docs.mongodb.org/manual/reference/method/db.collection.ensureIndex/</a>
<a href="http://docs.mongodb.org/manual/reference/method/db.collection.getIndexes/">http://docs.mongodb.org/manual/reference/method/db.collection.getIndexes/</a></p>
<hr />
<h4>Players</h4>
<p>Just like the managers, every player who has stepped onto the diamond in a New York or San Francisco Giants uniform will have a document in this collection. A player's document contains demographics, statistics, and appearances. If the player has been an All-Star, won an award, or been inducted into the Hall of Fame, his document will have additional properties. Below is the document for Eddie ‘Hotshot’ Mayo, who played for the New York Giants in 1936.</p>
<script src="https://gist.github.com/rippinrobr/7466671.js"></script>
<p>The players document is considerably larger than the managers document. The reason I chose this design is that it allows me to retrieve a player's Giants history with a single query. Even though I've chosen a player-centric design it is still relatively easy to find roster-related information. As an example, let's say we want to see who else played third for the Giants during the 1936 season. I could run the following query:</p>
<script src="https://gist.github.com/rippinrobr/7466781.js"></script>
<p>The query returns a total of four documents, four full player documents, which makes it a little hard to read the names of the players. All I really want to see is the nameLast, nameFirst, and fieldingStats.G values for the players who played third. I can limit the output to just those values by using a projection. I am also changing the names of the properties to something I find a little nicer to read. Now when I run the query I get four much easier to read results. The updated query and results are below.</p>
<script src="https://gist.github.com/rippinrobr/7466861.js"></script>
<p>Now that we have the data in a readable layout, I would like to sort the players so that the man who played the most games at third is listed first. Sorting is as easy as adding the $sort operator. Here's what the query looks like with the sort call added:</p>
<script src="https://gist.github.com/rippinrobr/7466968.js"></script>
<p>Notice that I used the new name that I created in the $project call. The -1 indicates that we want to sort the games in descending order. The results of the updated query are below.</p>
<script src="https://gist.github.com/rippinrobr/7467203.js"></script>
<p>If you’ve been paying attention, you noticed that I was using a function called <a href="http://docs.mongodb.org/manual/reference/aggregation/">aggregate</a> instead of find. The <a href="http://docs.mongodb.org/manual/reference/aggregation/">aggregate</a> function allows us to chain commands together; we can use the aggregation pipeline to ‘filter’ our data. It works by passing the results from one stage to the next, as illustrated by the $project and $sort calls: I used $project to rename the fieldingStats.G property to just games, and then used the new name, games, to sort by. Let’s walk through the last query to get a better picture of what's going on.</p>
<p><em>$unwind</em></p>
<pre>{ $unwind : "$fieldingStats" }, </pre>
<p>What <a href="http://docs.mongodb.org/manual/reference/operator/aggregation/unwind/">$unwind</a> does is create a new document for each member of an array. A copy of the demographics is put together with each entry in the fieldingStats array, so if a player has 10 entries there will be 10 documents with the same demographic information, each containing a single entry in the fieldingStats property. I chose the fieldingStats property to $unwind on because I am only interested in third basemen. Notice that fieldingStats has a $ in front of it; remember, that means you want to use the value of the fieldingStats property in the command. If I executed the query now with only the $unwind call I would receive the following message:</p>
<pre>
aggregation result exceeds maximum document size (16MB)
</pre>
<p>The message brings up one thing I haven't mentioned yet: all documents must be less than 16MB in size. Remember that $unwind creates many new documents; the players collection has 1675 documents in it, and if each player has 5 years' worth of stats at 3 different positions you can see how the size of the result set grows. Thankfully, in my case I'm filtering the results of the $unwind call down, so the 16MB limit is not a problem. In my three months of working with MongoDB I have yet to have the size limit cause any issues.</p>
<p><em>$match</em></p>
<pre>{ $match : {"fieldingStats.POS": "3B",
"fieldingStats.yearID" : 1936 }},</pre>
<p>The output of the $unwind call is passed to the <a href="http://docs.mongodb.org/manual/reference/operator/aggregation/match/">$match</a> call as input. $match searches the input documents, looking for documents that match the given criteria; in this case, anyone who played third base during the 1936 season will be returned. The number of documents has gone from the thousands down to four, and those four complete player documents are passed to the $project operator.</p>
<p><em>$project</em></p>
<pre>
{ $project : { _id : "$_id",
lastName : "$nameLast",
firstName : "$nameFirst",
games : "$fieldingStats.G"}}
</pre>
<p>I've already gone over what the <a href="http://docs.mongodb.org/manual/reference/operator/aggregation/project/">$project</a>
call does so I won't go into it again.</p>
<p><em>$sort</em></p>
<pre>{ $sort : { games: -1 } } </pre>
<p>Since I have already gone over the $sort call, I won't do it again here.</p>
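<p>To tie the four stages together, here is a plain-JavaScript sketch of the whole pipeline using two illustrative player documents (the names and game counts are made up for the example). Each stage consumes the previous stage's output, which is the heart of the aggregation pipeline:</p>

```javascript
const players = [
  { nameLast: 'Mayo', nameFirst: 'Eddie',
    fieldingStats: [{ POS: '3B', yearID: 1936, G: 32 }] },
  { nameLast: 'Jackson', nameFirst: 'Travis',
    fieldingStats: [{ POS: 'SS', yearID: 1935, G: 128 },
                    { POS: '3B', yearID: 1936, G: 107 }] }
];

// $unwind: one output document per fieldingStats entry.
const unwound = players.flatMap(p =>
  p.fieldingStats.map(fs => ({ ...p, fieldingStats: fs })));

// $match: keep only 1936 third basemen.
const matched = unwound.filter(d =>
  d.fieldingStats.POS === '3B' && d.fieldingStats.yearID === 1936);

// $project: rename and reshape the fields we care about.
const projected = matched.map(d => ({
  lastName: d.nameLast, firstName: d.nameFirst, games: d.fieldingStats.G }));

// $sort: { games: -1 } means descending by games.
projected.sort((a, b) => b.games - a.games);

console.log(projected.map(p => p.lastName + ': ' + p.games));
// [ 'Jackson: 107', 'Mayo: 32' ]
```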
<h4>Indexes</h4>
<p>There will be a few more player API calls so I am going to create a few more ensureIndex calls. Here are the players collection indexes.</p>
<script src="https://gist.github.com/rippinrobr/7467442.js"></script>
<hr />
<h4>Seasons</h4>
<p>Each season the New York/San Francisco Giants have played in professional baseball is represented by a document in the seasons collection. Each document has the team’s regular season and playoff records, team statistics, the roster, and the list of managers. The document below represents the 2012 season, when the Giants won their second World Series championship in three years.</p>
<script src="https://gist.github.com/rippinrobr/7467508.js"></script>
<p>To get the 2012 season document I used another <em>select</em> function, <a href="http://docs.mongodb.org/manual/reference/method/db.collection.findOne">findOne</a>. It is similar to find, but it returns only a single document. In cases where more than one document matches, findOne returns the first one found in ‘natural order’, that is, the first one stored on disk.</p>
<p>The seasons document is similar to the players and managers documents in that it has a ‘core’ set of data that pertains to the team’s season, plus arrays that store information about the players who were on the team that year, as well as the managers.</p>
<p>For the seasons collection I will add the indexes below. The API will make use of these
indexes as you’ll see in the next post.</p>
<script src="https://gist.github.com/rippinrobr/7467621.js"></script>
<h3>Summary</h3>
<p>I have taken the data from 18 database tables and stored it in three collections in my MongoDB database. The new schema will allow us to make the fewest possible calls to the database when retrieving player, managerial, or season-related data. Throughout the post I showed you how to run <em>select</em> statements in the mongo client using find, findOne, and the aggregation pipeline. I hope this post helped illustrate some of the ways MongoDB can store data so that the data is easier to use.</p>
<h3>Resources</h3>
<ul>
<li>MongoDB: <a href="http://www.mongodb.org/downloads">http://www.mongodb.org/downloads</a></li>
<li>Node.js: <a href="http://nodejs.org/">http://nodejs.org/</a></li>
<li>MySQL: <a href="http://dev.mysql.com/downloads/">http://dev.mysql.com/downloads/</a></li>
<li>Data: <a href="https://github.com/chadwickbureau/baseballdatabank">https://github.com/chadwickbureau/baseballdatabank</a> (I’m using the 2012update branch)</li>
<li>My Code: <a href="https://github.com/rippinrobr/mean-blog-series">https://github.com/rippinrobr/mean-blog-series</a> (includes loading scripts and mongodb ‘dump’)</li>
</ul>
<h4>MongoDB Doc Links</h4>
<ul>
<li><a href="http://docs.mongodb.org/manual/reference/aggregation/">http://docs.mongodb.org/manual/reference/aggregation/</a></li>
<li><a href="http://docs.mongodb.org/manual/reference/method/db.collection.ensureIndex/">http://docs.mongodb.org/manual/reference/method/db.collection.ensureIndex/</a></li>
<li><a href="http://docs.mongodb.org/manual/reference/method/db.collection.getIndexes/">http://docs.mongodb.org/manual/reference/method/db.collection.getIndexes/</a></li>
<li><a href="http://docs.mongodb.org/manual/reference/method/db.collection.find/">http://docs.mongodb.org/manual/reference/method/db.collection.find/</a></li>
<li><a href="http://docs.mongodb.org/manual/reference/method/db.collection.findOne/">http://docs.mongodb.org/manual/reference/method/db.collection.findOne/</a></li>
<li><a href="http://docs.mongodb.org/manual/reference/operator/aggregation/match/">http://docs.mongodb.org/manual/reference/operator/aggregation/match/</a></li>
<li><a href="http://docs.mongodb.org/manual/reference/operator/aggregation/project/">http://docs.mongodb.org/manual/reference/operator/aggregation/project/</a></li>
<li><a href="http://docs.mongodb.org/manual/reference/operator/aggregation/unwind/">http://docs.mongodb.org/manual/reference/operator/aggregation/unwind/</a></li>
</ul>
<h4>Baseball Sites</h4>
<ul>
<li><a href="http://www.baseball-reference.com/">http://www.baseball-reference.com/</a></li>
<li><a href="http://retrosheet.org/">http://retrosheet.org/</a></li>
</ul>
Robhttp://www.blogger.com/profile/04054382840301560558noreply@blogger.com73tag:blogger.com,1999:blog-2763068378786653633.post-47808092458215288112012-08-03T06:21:00.000-07:002012-08-03T06:21:02.959-07:00Adding PostgreSQL as a DBProvider<p>Recently I've been working with Mono, PostgreSQL and Massive (a lightweight 'ORM' written by Rob Connery), trying to see how much of my day-to-day job I can move outside of the Microsoft realm. While trying to use Massive with PostgreSQL I ran into a problem with the connectionStrings add tag's providerName attribute. Today's post will walk you through how to reproduce the error message and, more importantly, how to fix the problem.</p>
<h2>The Setup</h2>
<p>For this project I am using PostgreSQL for the backend with my hockeydb database. If you don't have PostgreSQL installed and you are using a Mac, you can use the handy <a href="http://postgresapp.com/">Postgres.app</a> utility, brought to you by the good folks at Heroku. For the non-Mac crowd, visit the <a href="http://www.postgresql.org/download/">PostgreSQL site</a> and get the appropriate installer for your machine. After you have PostgreSQL installed you can grab a copy of the hockeydb database <a href="https://github.com/rippinrobr/hockeydb/downloads">here</a>. After setting up Postgres and loading the database it's time to check your Mono environment setup.</p>
<em>For more information about the hockeydb and how to restore the PostgreSQL backup check out my previous post <a href="http://progadventure.blogspot.com/2012/07/hockey-databank-databases.html">Hockey Databank Database</a>. </em>
<p>If you already have Mono and MonoDevelop set up you can skip this paragraph; if not, read on for a quick overview. First, go to the <a href="http://www.go-mono.com/mono-downloads/download.html">Mono Project Site</a> and follow the download and install instructions. Once you have Mono installed, go to the <a href="http://monodevelop.com/Download">MonoDevelop Download page</a> and follow the download and install instructions there.</p>
<p>The final setup step is to download the PostgreSQL .NET driver. Download the <a href="http://pgfoundry.org/frs/?group_id=1000140&release_id=1960">Npgsql2 driver</a> and unzip it into a directory of your choosing.</p>
<p>Now we are ready to start!</p>
<h2>Creating the Solution and Adding Massive</h2>
<p>After you have started MonoDevelop, click on the 'Start New Solution…' link. In the 'New Solution' dialog select 'C#' and then 'Console Project'. Name the solution HockeyDbConsole and click the 'Forward' button. We won't be creating a GUI, so ensure that the GTK# Support checkbox is NOT checked, then click the 'OK' button. You should now see the HockeyDbConsole project under a solution with the same name.</p>
<p>My next step is to download the Massive code for PostgreSQL and add it to my project. Grab it from here <a href="https://raw.github.com/robconery/massive/master/Massive.PostgreSQL.cs">Massive.PostgreSQL.cs</a>. You will also need to add the following references for Massive's dependencies:</p>
<ul>
<li>Npgsql (the assembly that you downloaded during setup)</li>
<li>System.Configuration</li>
<li>System.Data</li>
<li>Microsoft.CSharp</li>
</ul>
<p>After adding my references there is one more file I need to add to the project: an app.config file. Unlike Visual Studio, MonoDevelop doesn't offer a way to add an app.config, but it does have a way to add an empty XML file. When you add the empty file, make sure to name it app.config. Once the file is in the project, update it so its contents look like this:</p>
<script src="https://gist.github.com/3240724.js?file=gistfile1.xml"></script>
<p>Go ahead and build the project; when the build is done you should have no errors.</p>
<h2>Getting Ready to Query the abbrev Table</h2>
<p>To keep the output simple I will query the abbrev table. It contains a key for all the abbreviations used throughout the database. One way to query a table using Massive is to create a class that maps to the table you wish to query. I will create a class named Abbrev; that way it's easy to tell which table I'm working with. Following the Massive README file (<a href="https://github.com/robconery/massive/">https://github.com/robconery/massive/</a>), my Abbrev class looks like this:</p>
<script src="https://gist.github.com/3240874.js?file=Abbrev.cs"></script>
<p>Now that I have the Abbrev class I'm ready to run a query. I will add code to retrieve all the records in the abbrev table and list the results after the "Hello World" message. First I need to new up an Abbrev object and call it table. I will then call table.All() and loop through the results, writing the records to STDOUT. Here's what the updated Main method looks like:</p>
<script src="https://gist.github.com/3241637.js?file=main.cs"></script>
<p>OK, it looks like everything is in place, so it's time to run it. When I do I see this error (I've highlighted the most important part in bold red):</p>
<p>Unhandled Exception: System.Configuration.ConfigurationErrorsException: <span style="font-size: 13px; color: #c32123;"><strong>Failed to find or load the registered .Net Framework Data Provider 'Npgsql'.</strong></span><br /> at System.Data.Common.DbProviderFactories.GetFactory (System.String providerInvariantName) [0x00026] in /private/tmp/monobuild/build/BUILD/mono-2.10.9/mcs/class/System.Data/System.Data.Common/DbProviderFactories.cs:80 </p>
<p>What does the error mean? It's telling me that .NET doesn't know what providerName="Npgsql" in my connection string maps to. After some head scratching and a few Google searches I was able to get past this issue by registering Npgsql as a data provider.</p>
<h2>Registering Npgsql as a Data Provider</h2>
<p>To register Npgsql as a data provider I need to add a few lines to my app.config file. Here's what the app.config file looks like now:</p>
<script src="https://gist.github.com/3241701.js?file=gistfile1.xml"></script>
<p>As you can see I had to add the system.data tag. Within it I added the Npgsql information under the DbProviderFactories tag. I set the invariant value to the base namespace, Npgsql, which is the value the connection string's providerName attribute must match. I gathered the information for the type attribute using the Assembly Browser tool within MonoDevelop: right-click on the Npgsql.dll reference and select the 'Open' option. Once the browser opens, expand the Npgsql node on the left and search for 'Factory' to find the NpgsqlFactory class. </p>
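<p>In outline, the registration amounts to the following fragment inside app.config. The Version and PublicKeyToken values here are assumptions that depend on the Npgsql build you installed; use whatever the Assembly Browser shows for NpgsqlFactory:</p>

```xml
<!-- Sketch of the system.data registration; Version and PublicKeyToken
     are placeholders that vary with the Npgsql build you installed. -->
<system.data>
  <DbProviderFactories>
    <add name="Npgsql Data Provider"
         invariant="Npgsql"
         description=".NET Data Provider for PostgreSQL"
         type="Npgsql.NpgsqlFactory, Npgsql, Version=2.0.11.0, Culture=neutral, PublicKeyToken=5d8b90d52f46fda7" />
  </DbProviderFactories>
</system.data>
```

<p>The invariant attribute is what ties the registration back to the connection string: it must match the providerName value exactly.</p>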
<p>Now that I have the app.config file updated I will re-run the app. This time I see
the results of my query. </p>
<p><em>For the demo I added the system.data information to the app.config file. Since I'm going to be using this on multiple projects I've also added the add tag to my machine.config file's system.data > DbProviderFactories section.</em></p>
<h2>Summary</h2>
<p>By adding just a few lines to my app.config file I was able to use a PostgreSQL connectionString with Massive to run a simple query. I hope this post helps you get through this issue faster than I did.</p>
<h2>Resources</h2>
<ul>
<li><a href="http://www.go-mono.com/mono-downloads/download.html">Mono</a></li>
<li><a href="http://monodevelop.com/Download">MonoDevelop</a></li>
<li><a href="http://postgresapp.com/">PostgreSQL (Mac)</a></li>
<li><a href="http://www.postgresql.org/download/">PostgreSQL (Non-Mac)</a></li>
<li><a href="http://pgfoundry.org/frs/?group_id=1000140&release_id=1960">Npgsql2 (.NET PostgreSQL driver)</a></li>
<li><a href="https://raw.github.com/robconery/massive/master/Massive.PostgreSQL.cs">Massive.PostgreSQL.cs</a></li>
<li><a href="https://github.com/rippinrobr/hockeydb/downloads">PostgreSQL version of hockeydb</a> For more information on the database you can read my previous blog post: <em><a href="http://progadventure.blogspot.com/2012/07/hockey-databank-databases.html">Hockey Databank Databases</a>.</em></li>
</ul>Robhttp://www.blogger.com/profile/04054382840301560558noreply@blogger.com1tag:blogger.com,1999:blog-2763068378786653633.post-69447909274746467942012-07-29T11:56:00.001-07:002012-07-29T11:56:42.186-07:00Hockey Databank Databases<p>Shortly after each NHL season a ZIP file of all stats from the previous season plus all other NHL seasons shows up on the <a href="http://sports.groups.yahoo.com/group/hockey-databank/">Hockey Databank Yahoo group</a>. The ZIP file contains CSV stats files with just about any hockey stat you can think of. These files are a great gift to those of us who are stats junkies but there isn't an easy way to query them. With that in mind I decided to create a database to store the stats. I wrote a quick Clojure app to load the CSV files into a PostgreSQL database. When I was about halfway through the loading process I realized it wouldn't take any additional Clojure code (other than connection information) to create and load MySQL and SQLite databases, so I did just that.</p>
<h2>The Database Schema</h2>
<p>The database was created so that each table maps to a CSV file. The tables have the same columns as the CSV files, except for the master table, where I added an id as a primary key. The create_db.sql files are available for all three database flavors here: <a href="https://raw.github.com/rippinrobr/hockeydb/master/databases/mysql/create_db.sql?login=rippinrobr&token=f92ef3b12e87a25e5c8058493a752fbe">MySQL</a>, <a href="https://raw.github.com/rippinrobr/hockeydb/master/databases/postgres/create_db.sql?login=rippinrobr&token=5c59efcd36d6e42a1116784d52713111">PostgreSQL</a>, and <a href="https://raw.github.com/rippinrobr/hockeydb/master/databases/sqlite/create_db.sql?login=rippinrobr&token=78aea161014d0f4a466a3b3c90ae00c5">SQLite</a>. These files will give you an empty database with all the necessary tables. If you want the pre-loaded database you can download one of the database dumps below.</p>
<h2>The Database Dumps</h2>
<p>You can download the database backup files here:</p>
<ul>
<li>MySQL (<a href="https://github.com/downloads/rippinrobr/hockeydb/hockeydb-6-23-12-mysql.tgz">tgz</a> or <a href="https://github.com/downloads/rippinrobr/hockeydb/hockeydb-6-23-12-mysql.zip">zip</a>)</li>
<li>PostgreSQL (<a href="https://github.com/downloads/rippinrobr/hockeydb/hockeydb-6-23-12-postgres.tgz">tgz</a> or <a href="https://github.com/downloads/rippinrobr/hockeydb/hockeydb-6-23-12-postgres.zip">zip</a>)</li>
<li>SQLite (<a href="https://github.com/downloads/rippinrobr/hockeydb/hockeydb-6-23-12-sqlite.tgz">tgz</a> or <a href="https://github.com/downloads/rippinrobr/hockeydb/hockeydb-6-23-12-sqlite.zip">zip</a>)</li>
</ul>
<p>If you aren't familiar with how to restore the MySQL or PostgreSQL backups, here's a quick how-to from the command line. The example steps work on my Mac and should also work from a Linux command line. The SQLite file is the actual database file, so after 'un-tarring' or unzipping the file you are ready to go. </p>
<h3>Restoring the MySQL Dump</h3>
<p>1. Create the database by running: </p>
<p> <span style="font-family: 'Letter Gothic Std'; font-size: 14px;">mysql -u &lt;username&gt; -p&lt;password&gt;</span></p>
<p> At the mysql prompt enter: </p>
<p> <span style="font-family: 'Letter Gothic Std'; font-size: 14px;">create database hockeydb;</span></p>
<p> Exit out of mysql and return to the prompt.</p>
<p>2. 'Un-tar' the file by running: </p>
<p><span style="font-family: 'Letter Gothic Std'; font-size: 14px;"> tar zxvf hockeydb-6-23-12-mysql.tgz</span> </p>
<p>3. Restore the database by running the following command:</p>
<p> mysql -u &lt;username&gt; -p&lt;password&gt; hockeydb &lt; hockeydb.sql</p>
<p>That's all there is to it. You should now have a MySQL database loaded with the data from the Hockey Databank files. </p>
<h3>Restoring the PostgreSQL Dump</h3>
<p>1. Create the database by running: </p>
<p> <span style="font-family: 'Letter Gothic Std'; font-size: 14px;">createdb hockeydb</span></p>
<p>2. 'Un-tar' the file by running: </p>
<p><span style="font-family: 'Letter Gothic Std'; font-size: 14px;"> tar zxvf hockeydb-6-23-12-postgres.tgz</span> </p>
<p>3. Restore the database by running the following command:</p>
<p> psql hockeydb &lt; hockeydb.backup</p>
<p>That's all there is to it. You should now have a PostgreSQL database loaded with the data from the Hockey Databank files. </p>
<h2>Summary</h2>
<p>Hopefully these databases will help others work with the Hockey Databank data. I would like to thank the Hockey Databank Group member dsreyn and everyone else who helped to put this data together, without their hard work these databases wouldn't be possible.</p>
<p>My future plan for this data is to create data access libraries for Clojure, C#, and Ruby. I am not sure when that will happen, but when the libraries do become available I will announce it through this blog. </p>
<p>If you have any questions, concerns or suggestions please feel free to leave a comment!</p>
<h2>Resources</h2>
<p><a href="http://sports.groups.yahoo.com/group/hockey-databank/">Hockey Databank Yahoo group</a></p>
<p><a href="http://dev.mysql.com/downloads/">MySQL Downloads</a></p>
<p><a href="http://www.postgresql.org/download/">PostgreSQL</a></p>
<p><a href="http://sqlite.org/download.html">SQLite</a></p>Robhttp://www.blogger.com/profile/04054382840301560558noreply@blogger.com0tag:blogger.com,1999:blog-2763068378786653633.post-38951695136567069922012-04-11T08:19:00.000-07:002012-04-11T13:07:05.701-07:00Creating a TFS Work Item from IronRuby<p>At my day job we use Team Foundation Server 2008 (TFS) for our automated builds, iteration management, and source control. TFS may not be the ideal way to manage these processes but in our MS environment it has helped us communicate with our non-technical team members and customers. In order to enhance our feedback loop we’ve been looking into ways to add bugs automatically when automated tests fail or when errors occur in our production applications. (see <a href="http://www.myclojureadventure.com/2012/04/creating-tfs-work-item-from-clojureclr.html">Creating a TFS Work Item from ClojureCLR</a>)</p>
<p>This morning I had a little time to do some research on how to programmatically create a new bug work item. My goal was to write code that would create a new Bug work item with an image attached to it. Why an image? When our automated tests fail we capture what the browser looked like at the moment of failure. Before we get into the code let me describe a TFS Work Item.</p> <h4>A TFS Bug Work Item</h4> <p>In TFS, work items are a way to track work that needs to be done. There are five different types of work items available but in our projects we typically only use three: Task, Scenario, and Bug. Each work item type has its own UI with different fields. Since I am creating bugs in this example I thought I’d show you what the UI looks like for a Bug Work Item.</p>
<p><a href="http://lh5.ggpht.com/-AhuNET2wW18/T4WUv5vI4NI/AAAAAAAAAr0/QzraQSypI-s/s1600-h/empty-bug-wi-tfs3.png"><img style="background-image: none; border-right-width: 0px; padding-left: 0px; padding-right: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px; padding-top: 0px" title="empty-bug-wi-tfs" border="0" alt="empty-bug-wi-tfs" src="http://lh5.ggpht.com/-1jmZnrCZvWs/T4WUwnxh-RI/AAAAAAAAAr8/VT3fxBrwDWc/empty-bug-wi-tfs_thumb1.png?imgmax=800" width="585" height="384"></a></p>
<p>In this example we will create a new bug and enter text into the highlighted fields plus attach an image file. In order for me to create the bug I need to do a little setup. </p> <h4>The Setup</h4> <p>There are three assemblies needed to create a TFS Bug Work Item. They are: Microsoft.TeamFoundation.dll, Microsoft.TeamFoundation.Client.dll, and Microsoft.TeamFoundation.WorkItemTracking.Client.dll. All three of these DLLs can be found in the C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies directory. I copied the three DLLs into the project’s libs directory. </p>
<h4>The Code</h4>
<script src="https://gist.github.com/2354470.js?file=gistfile1.rb"></script>
<p>The first step is to load the TFS assemblies from the libs directory using require statements. Once the assemblies have been loaded I can start working on the main method, whose sole purpose is to drive the WorkItem creation process. In the main method the first few lines are responsible for creating the necessary objects for WorkItem creation.</p>
<script src="https://gist.github.com/2354646.js?file=gistfile1.rb"></script>
<p>First I need a TFS server object, which is created by calling the static method TeamFoundationServerFactory.GetServer. GetServer takes a single parameter: the name of the server I want to work with. After the server object is created I can use it to create a WorkItemStore object. The WorkItemStore encapsulates the data store that contains all work items on a particular server. The next line grabs the particular Project object that I want to work with by using a method I wrote called find_item. The last setup line uses the Project object to retrieve the WorkItemType object that represents a Bug WorkItem. After that call the prep work is done and I am ready to create the new Bug WorkItem.</p>
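<p>The post doesn't show find_item itself, so here's a hypothetical sketch of what a helper like it could look like, in plain Ruby. It assumes the collection's items expose a Name property, the way TFS Project objects do:</p>

```ruby
# Hypothetical reconstruction of the find_item helper mentioned above:
# walk an enumerable of items (e.g. work_item_store.Projects) and return
# the first one whose Name matches the name we're looking for.
def find_item(items, name)
  items.find { |item| item.Name == name }
end
```

<p>With the TFS objects in place it would be called with something like <code>project = find_item(work_item_store.Projects, 'MyProject')</code>, where the project name is whatever your team project is called.</p>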
<h5>Creating the Bug WorkItem</h5>
<script src="https://gist.github.com/2354721.js?file=gistfile1.rb"></script>
<p>The create_work_item method is responsible for creating a new 'bare bones' WorkItem object. Creating a minimal WorkItem object allows for more flexibility down the line: sometimes I may want to add a new bug with an attachment and sometimes I may not. The method is straightforward: create the WorkItem object by passing in the WorkItemType object, then set the Title, Description, AreaPath, and IterationPath properties. When everything has been set the new WorkItem object is returned. </p>
<h5>Adding the Attachment</h5>
<script src="https://gist.github.com/2354728.js?file=gistfile1.rb"></script>
<p>Now that I have my WorkItem object it's time to add my image file attachment. I wrote another small method to create the Attachment object and add it to the WorkItem’s Attachments collection. It takes three parameters: the WorkItem object, the path to the file to be attached, and a description of the file. First, I use the path and desc parameters to create the Attachment object. After the Attachment object has been created I add it to the WorkItem.Attachments collection by calling its Add method, passing in the Attachment object.</p>
<p>At this point I have a WorkItem object with an Attachment in memory. That’s nice and all but the WorkItem has not been stored in the WorkItem data store yet. In order to add it to the data store I need to save the object. Surprisingly enough, all I need to do is call the WorkItem’s Save method.</p>
<code>work_item.Save</code>
<p>Now when I look at the Pending Bugs report in TFS I will see my newly created bug in the list.</p>
<p><a href="http://lh4.ggpht.com/-4G2tJNZQNNA/T4WUxD0u4hI/AAAAAAAAAsE/cg8mazWWdNs/s1600-h/pending-bugs-clj-ir5.png"><img style="background-image: none; border-right-width: 0px; padding-left: 0px; padding-right: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px; padding-top: 0px" title="pending-bugs-clj-ir" border="0" alt="pending-bugs-clj-ir" src="http://lh6.ggpht.com/-WhSJmJ42LJc/T4WUxv6GGzI/AAAAAAAAAsM/brCNHQGwiS4/pending-bugs-clj-ir_thumb3.png?imgmax=800" width="565" height="185"></a></p>
<h4>Summary</h4>
<p>In this post I showed you how easy it is to use .NET assemblies in an IronRuby script. I walked you through the simple process of creating a Bug WorkItem in TFS. Having the ability to programmatically create and report bugs from our IronRuby scripts will help us close the loop on bug reporting in our automated testing environment. Having an image attached to the bug will help us figure out why a test failed, speeding up bug-fix times.</p>
<h4>In The Future</h4>
<p>We have long since switched away from the MSBuild approach to our builds in favor of Rake and Albacore. The switch removed some of the built-in integration with TFS that MSBuild provided. I will be investigating the TFS build and source control APIs to see if we can enhance our Rake code in hopes of totally removing XML files from our build kick-offs. As my investigation progresses I will be writing posts about it.</p>
<h4>Resources</h4>
<p>TFS API: <a title="http://msdn.microsoft.com/en-us/library/bb130146(v=vs.90).aspx" href="http://msdn.microsoft.com/en-us/library/bb130146(v=vs.90).aspx">http://msdn.microsoft.com/en-us/library/bb130146(v=vs.90).aspx</a></p>
<p>My Source (this blog’s code is the 0-Create-TFS-Work-Item): <a title="https://github.com/rippinrobr/My-Programming-Adventure-Blog/zipball/master" href="https://github.com/rippinrobr/My-Programming-Adventure-Blog/zipball/master">https://github.com/rippinrobr/My-Programming-Adventure-Blog/zipball/master</a></p>
<p>Not really related to this post but mentioned and worth looking into: <a href="http://docs.rubyrake.org/tutorial/index.html">Rake</a> and <a href="http://albacorebuild.net/">Albacore</a></p>Robhttp://www.blogger.com/profile/04054382840301560558noreply@blogger.com7tag:blogger.com,1999:blog-2763068378786653633.post-22066165202980468842012-02-13T15:34:00.000-08:002012-02-13T15:34:56.961-08:00Book Review: Node for Front End Developers by Garann Means (O’Reilly Media)<p>In my previous post I mentioned that I needed to come up to speed with CoffeeScript for my next project. That same project will be using Node on the server side. Since I liked the CoffeeScript book I thought I would give <a href="http://shop.oreilly.com/product/0636920023258.do">Node for Front-End Developers</a> a read. Once again, O’Reilly has published a book that is a quick read but gives you enough information so that you can put the book down and start writing code. I found this book gave me enough information to start building my first node-based application. </p> <h5>The Review</h5> <p>The book starts off by walking the reader through setting up a node environment: installing node and its package management system, npm. Once you have your environment set up, you are introduced to <a href="http://lh5.ggpht.com/-37GRZKc6HaQ/TzmA4Unyc4I/AAAAAAAAAq8/YftpUjBnC0k/s1600-h/node-for-front-end-devs%25255B2%25255D.gif"><img style="background-image: none; border-bottom: 0px; border-left: 0px; margin: 5px 10px 0px 0px; padding-left: 0px; padding-right: 0px; display: inline; float: left; border-top: 0px; border-right: 0px; padding-top: 0px" title="node-for-front-end-devs" border="0" alt="node-for-front-end-devs" align="left" src="http://lh3.ggpht.com/-jT7KHjych9Y/TzmA4uizoVI/AAAAAAAAArE/yfGYjKgfnnw/node-for-front-end-devs_thumb.gif?imgmax=800" width="149" height="194"></a>the REPL (read – evaluate – print – loop) with a few short code snippets. 
The chapter ends with a discussion of how to declare which modules your application depends on by using the package.json file. </p> <p>Chapter two walks the reader down the path of serving up resources, be it a string of HTML or static resources like HTML, CSS, and/or JS files. The first example shows you how to write a server that serves up HTML ‘by hand’, which is quickly followed by how much easier it is to serve static pages when you use npm modules like connect. </p> <p>After learning how to serve static files the author shows you how to interact with the client by processing HTTP GET requests. The first example uses the querystring module to process GETs with parameters. It reminded me of the early days of the web. Thankfully, after another example of processing parameters on the URL with querystring, the author shows you how to do the same thing in a more concise manner using the connect module.</p> <p>The next topic was how to process HTTP POST requests. We followed the same pattern here: learn how to ‘roll your own’ POST handler, followed by an example of how to do it using the connect module. I liked the approach the author takes throughout the book: show you how to do it yourself first, then introduce a module that can do the same thing in a less verbose manner.</p> <p>After a brief overview of how to handle JSONP requests the discussion moved on to real-time communication using socket.io. Having just finished a project that uses <a href="https://github.com/SignalR/SignalR">SignalR</a> (a .NET open source project with similar functionality) I found this subject very interesting. The example made socket.io seem simple and straightforward. In fact, after I finished reading this chapter, I started a spike to redo that project with socket.io just to see if it was any easier.</p> <p>Chapter four introduced the topic of server-side templates. 
It walks you through how to use mustache to lay out templates for your application, showing you how to use templates and sub-templates to promote re-use on the UI side. Towards the end of the chapter the author starts to discuss best practices for grouping your code, separating out code that handles a certain task to promote code re-use and separation of concerns. </p>
<p>The next topic of discussion was data access and application flow. The first part of the chapter uses Redis to show the reader how to work data access into your node applications. I hadn’t worked with Redis before but after that part of the book I am now looking into incorporating it into a few projects currently underway. After the Redis discussion was complete, workflow was covered with a pub/sub example built on events.</p> <p>Up to this point in the book I found the flow of each chapter easy to follow. I appreciated the process of doing it by hand and then doing it again with a pre-existing module. It’s a great way to show you how something works while exposing you to the node module ecosystem. It seems like there’s a module for just about anything you may want to do.</p> <p>The last chapter varied from the previous ones by jumping into a big chunk of code right away. The code is an example of how to create an MVC application with node. On the plus side, this chapter introduces the express module, which was inspired by Ruby’s Sinatra web framework and is used by many other node modules. Express has a tool that will create a directory structure for your app and has a view engine, Jade, to create HTML views. Jade’s syntax takes a little bit to get used to, but once you do it makes HTML views easier to create and much easier to read. Overall the chapter wasn’t bad; it was just a little code-heavy compared to the previous chapters.</p> <h5>My Thoughts</h5> <p>I came to this book with very little experience with Node.js: I had created a small app for my personal use but nothing huge. Now that I have read this book I feel comfortable enough to use it in a few projects I have on the horizon. I liked the methodology the author took for the first five chapters of doing it the hard way first and then showing the reader the easier way to do it with available modules. </p> <p>On a side note, this is the second short book I’ve read by O’Reilly. 
I hope they continue this type of short-but-sweet introduction to new(er) technologies. It certainly helps us come up to speed quickly. </p>Robhttp://www.blogger.com/profile/04054382840301560558noreply@blogger.com1tag:blogger.com,1999:blog-2763068378786653633.post-54426024330411461702012-02-06T08:00:00.000-08:002012-02-06T08:00:53.143-08:00Book Review: The Little Book on CoffeeScript by Alex MacCaw (O’Reilly Media)<p>When I started reading <a href="http://shop.oreilly.com/product/0636920024309.do">The Little Book on CoffeeScript</a> I had zero experience with CoffeeScript. I was looking for a book to quickly bring me up to speed since my next project will rely heavily on CoffeeScript. At 60 pages I thought the book <a href="http://lh5.ggpht.com/-x60NhfC6pDQ/Ty2hrx28SeI/AAAAAAAAAqs/6eeO3zSf9KQ/s1600-h/the_little_coffeescript_book%25255B5%25255D.gif"><img style="background-image: none; border-right-width: 0px; margin: 5px 10px 5px 0px; padding-left: 0px; padding-right: 0px; display: inline; float: left; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px; padding-top: 0px" title="the_little_coffeescript_book" border="0" alt="the_little_coffeescript_book" align="left" src="http://lh4.ggpht.com/-dmdirkgnzb0/Ty2hsP-GFoI/AAAAAAAAAq0/4bWhT1hjySs/the_little_coffeescript_book_thumb%25255B1%25255D.gif?imgmax=800" width="149" height="194"></a>would give me enough information to start writing CoffeeScript code. After reading the book I can say that I made the right choice. This book has given me enough knowledge to get started writing CoffeeScript code. </p> <h5>The Review</h5> <p>The book starts off with a chapter on CoffeeScript syntax in a nice, concise manner. It covers functions, loops, arrays and CoffeeScript-specific operators and aliases. There were plenty of examples with just enough text to explain what was going on in the code. </p> <p>The next chapter discussed classes, specifically how to declare and use them. 
While discussing class properties the author pointed out a shortcut for setting a class property that will save you typing. Let's say you have a class called Animal with a Name property that you want to set by passing a value to the class’s constructor. Here’s the ‘long hand’ code for that: </p><pre>class Animal
  constructor: (name) ->
    @name = name</pre><p>Not a lot of typing but the author shows you how you can do it in fewer lines. The ‘short hand’ way is here:</p><pre>class Animal
  constructor: (@name) -></pre><p>It doesn’t seem like much but over the long haul I appreciate the short hand method. There were a few other places in the book where the author shared shortcuts like this with the reader. </p><p>Following the classes chapter, the next topic for discussion was CoffeeScript idioms. Here the author points out that using the English words and instead of && and or instead of || is the preferred way to do logical <font face="Courier New">‘ands’</font> and <font face="Courier New">‘ors’</font>. Most of the chapter is dedicated to showing the reader how to perform ‘each’, ‘select’, ‘map’ and other common operations in the language. The text had a nice way of showing a person who is new to CoffeeScript how to do the ‘typical’ programming tasks.</p><p>The next chapter gives a quick overview of how you can use CoffeeScript in conjunction with Node and node packages to create an application. Overall, I found this chapter to be a nice introduction to creating an application but there were a few problems I ran into while following along. The issue I found was that when I went to run the app I was missing five modules: underscore, async, connect, qs and mime. Thankfully the error messages were straightforward and fixing the problem was as easy as running ‘npm install &lt;module name here&gt;’ for each module. 
The last bit of the chapter walked the reader through how to deploy our application to Heroku. It was much easier than I thought it would be.</p><p>After our hello world’ish app was created the author switched over to discuss which JavaScript warts CoffeeScript can ‘fix’ and which ones it can’t. The chapter is broken down into unfixed and fixed sections. The unfixed section shows you how and why the JavaScript typeof functionality is broken and then follows the explanation up with how you can ‘fix’ typeof by writing your own function to do it. As an example of how CoffeeScript can ‘fix’ a JavaScript wart, the author informs the reader that CoffeeScript uses the strict equality check for all equality checks.</p><p>The book is summarized in a chapter written by Jeremy Ashkenas, the creator of CoffeeScript. In it he discusses the philosophy behind CoffeeScript, which the quote below sums up.</p><blockquote><p>“express core JavaScript concepts in as simple and minimal a syntax as we can find for them.” – Jeremy Ashkenas </p></blockquote><h5>My Thoughts</h5><p>When I started this book I had zero experience with CoffeeScript and I was hoping that after reading it I would feel comfortable enough to write my own code. I would say that this book has shown me enough CoffeeScript that I feel comfortable enough to start writing code for my next project. </p><p>You can find out more by checking out <a href="http://shop.oreilly.com/product/0636920024309.do">The Little Book on CoffeeScript</a> over at <a href="http://oreilly.com/">http://oreilly.com/</a>.</p>Robhttp://www.blogger.com/profile/04054382840301560558noreply@blogger.com0tag:blogger.com,1999:blog-2763068378786653633.post-48637551676947002912012-02-04T11:42:00.001-08:002012-02-04T11:59:27.480-08:00Let Me Introduce Myself<p>Welcome to my new blog, My Programming Adventure. I am a .NET developer by day who has recently been drawn back into the open source world. 
In my time away from the Linux/Unix world there has been a HUGE explosion in solid tooling and languages available for free. Literally, anything you need you can find for free and more than likely the free ‘stuff’ will be better than anything you can pay for to do the same job.</p> <p>One of the languages that I have really become fond of is Clojure. While I was learning Clojure I found out about ClojureCLR, a version of Clojure that runs on the .NET stack. Since I work in a .NET environment I wanted to use it in our production environment at work. While I was learning the ins and outs of ClojureCLR I found out that there was very little out there in the way of ‘How-To’ examples so I started a blog called <a href="http://www.myclojureadventure.com/">My Clojure Adventure</a>. There I give small examples on how to connect to a database, interact with .NET libraries, and more. </p> <p>While I enjoy writing posts for <a href="http://www.myclojureadventure.com/">My Clojure Adventure</a> there are times when I have other topics I’d like to blog about that have very little or nothing to do with Clojure. That’s where <a href="http://progadventure.blogspot.com/">My Programming Adventure</a> comes in. On this blog I will be writing about technologies, languages, books, or whatever other programming-related subject catches my eye. A few of the topics I’ll be writing about shortly are CoffeeScript, Node, and setting up Emacs for writing C# code.</p> <p>In the next few days I will have a book review of <a href="http://shop.oreilly.com/product/0636920024309.do">The Little Book on CoffeeScript</a>. Now that I’ve introduced myself it’s time for me to go and work on my book review! Come back and let me know what you think.</p> Robhttp://www.blogger.com/profile/04054382840301560558noreply@blogger.com1