Friday, July 10, 2009

A CouchDB Store PUTter

‹prev | My Chain | next›

Let's see, where was I?
cstrom@jaynestown:~/repos/couch_design_docs$ spec ./spec/
couch_design_docs_spec.rb spec_helper.rb
cstrom@jaynestown:~/repos/couch_design_docs$ spec ./spec/couch_design_docs_spec.rb
...*

Pending:

CouchDesignDocs::Directory a valid directory should assemble all documents into a single docs structure
(you can do a better job with deep hash merging than that)
./spec/couch_design_docs_spec.rb:39

Finished in 0.006632 seconds

4 examples, 0 failures, 1 pending
Yeah, yeah. Very funny, yesterday self. Still, I do need to get a little better.

I am converting this directory structure into a hash:
fixtures/a/b/c.js
fixtures/a/b/d.js
The two .js files generate these two hashes that need to be merged:
# fixtures/a/b/c.js =>
{
  'a' => {
    'b' => {
      'c' => 'function(doc) { return true; }'
    }
  }
}

# fixtures/a/b/d.js =>
{
  'a' => {
    'b' => {
      'd' => 'function(doc) { return true; }'
    }
  }
}
I have all of this working. The conversion of the directory/file structure to a hash works well. The merging of the two hashes, not so well. I am using this recursive method to merge things:
def deep_hash_merge(h1, h2)
  h2.each_key do |k|
    if h1.key? k
      deep_hash_merge(h1[k], h2[k])
    else
      h1[k] = h2[k]
    end
  end
  h1
end
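To see that side effect concretely, here is the recursive method run against two small fixture-style hashes (the 'fn-c' and 'fn-d' values are placeholders of my own, not from the gem):

```ruby
# The recursive merge from above.
def deep_hash_merge(h1, h2)
  h2.each_key do |k|
    if h1.key? k
      deep_hash_merge(h1[k], h2[k])
    else
      h1[k] = h2[k]
    end
  end
  h1
end

h1 = { 'a' => { 'b' => { 'c' => 'fn-c' } } }
h2 = { 'a' => { 'b' => { 'd' => 'fn-d' } } }

merged = deep_hash_merge(h1, h2)
# The merged result is correct...
merged['a']['b'].keys.sort   # => ["c", "d"]
# ...but h1 has been modified in place along the way:
h1['a']['b'].keys.sort       # => ["c", "d"]
```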
Mostly, I do not like the line h1[k] = h2[k]: updating the original hash in place is an unnecessary side effect. After noodling through the problem on my own, I Google a bit to find out how Rails has solved this. I use a scaled-down version of its approach:
class Hash
  def deep_merge(other)
    self.merge(other) do |key, oldval, newval|
      oldval.deep_merge(newval)
    end
  end
end
The new Hash#deep_merge method makes use of the optional block parameter of the Ruby core Hash#merge method. The block is evaluated only when the two source hashes have conflicting keys, which is exactly what I need. Specifically, I keep working down the two hash trees (e.g. when merging the two 'a' keys) until no conflicts occur, at which point the normal merge takes place and the block is ignored. I am exploiting the fact that my data structures will always be pure hashes, so there is no need to resort to type checking as the Rails code does.
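As a sanity check, here is the scaled-down deep_merge run against the two fixture hashes from above (a standalone sketch, not part of the gem's specs):

```ruby
class Hash
  # Merge, descending into sub-hashes whenever both sides share a key.
  def deep_merge(other)
    merge(other) { |key, oldval, newval| oldval.deep_merge(newval) }
  end
end

c = { 'a' => { 'b' => { 'c' => 'function(doc) { return true; }' } } }
d = { 'a' => { 'b' => { 'd' => 'function(doc) { return true; }' } } }

merged = c.deep_merge(d)
# => {"a"=>{"b"=>{"c"=>"function(doc) { return true; }",
#            "d"=>"function(doc) { return true; }"}}}

# And, unlike the recursive version, the receiver is left untouched:
c['a']['b'].keys  # => ["c"]
```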

With that, I am much happier and am ready to move onto the next class in my gem. Now that I can generate hashes from javascript files and the directory structure in which they are stored on the filesystem, it is time to get them loaded in the CouchDB store.

The first thing I need for that is a URL:

it "should require a CouchDB URL Root for instantiation" do
  lambda { Store.new }.
    should raise_error

  lambda { Store.new("uri") }.
    should_not raise_error
end
I plan to stub out the integration points between my gem and CouchDB, so I am not even bothering to try a real URL in the example. If the URL is bad, RestClient will fail for me. To get these examples to pass, I implement this code:
module CouchDesignDocs
  class Store
    attr_accessor :url

    def initialize(url)
      @url = url
    end
  end
end
(The CouchDesignDocs namespace is not needed in the example thanks to an include in spec_helper.rb, as mentioned yesterday.)

Next, given a valid Store object and a document hash:
context "a valid store" do
  before(:each) do
    @it = Store.new("uri")

    @hash = {
      'a' => {
        'b' => {
          'c' => 'function(doc) { return true; }'
        }
      }
    }
  end
  ...
end
I want to create design documents. Again, I am stubbing the interface to CouchDB in the gem. Thankfully, I have plenty of experience using RestClient to update CouchDB. When uploading the hash from the preconditions, I expect a RestClient.put call like this:
it "should be able to load a hash into design docs" do
  RestClient.
    should_receive(:put).
    with("uri/_design/a",
         '{"b":{"c":"function(doc) { return true; }"}}',
         :content_type => 'application/json')
  @it.load(@hash)
end
And, to make that example pass I write a Store#load method:
def load(h)
  h.each_pair do |document_name, doc|
    RestClient.put "#{url}/_design/#{document_name}",
                   doc.to_json,
                   :content_type => 'application/json'
  end
end
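To see the whole thing end to end, here is a standalone sketch of Store#load with a hand-rolled stand-in for RestClient that simply records PUT calls (the recorder module is my own test double, not the real rest-client gem):

```ruby
require 'json'

# Minimal stand-in for RestClient that records each PUT instead of
# talking to CouchDB (a test double, not the rest-client gem).
module RestClient
  def self.puts_received
    @puts_received ||= []
  end

  def self.put(url, payload, headers)
    puts_received << [url, payload, headers]
  end
end

class Store
  attr_accessor :url

  def initialize(url)
    @url = url
  end

  def load(h)
    h.each_pair do |document_name, doc|
      RestClient.put "#{url}/_design/#{document_name}",
                     doc.to_json,
                     :content_type => 'application/json'
    end
  end
end

store = Store.new("uri")
store.load('a' => { 'b' => { 'c' => 'function(doc) { return true; }' } })

RestClient.puts_received.first
# => ["uri/_design/a",
#     "{\"b\":{\"c\":\"function(doc) { return true; }\"}}",
#     {:content_type=>"application/json"}]
```

Each top-level key in the hash becomes its own _design document, with the nested hash serialized as the JSON body.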
With the Directory and Store classes done, I have reached a good stopping point for the day. I will pick up tomorrow putting it all together.
