Tuesday, December 30, 2008

The Wire tour of Baltimore

A few people on Twitter asked me for the itinerary of The Wire tour I did last week with an out-of-town friend who's a fan of the show. Here's a quick and dirty version. Mild spoiler alert!
  1. Start at Penn Station.  This is where Marlo Stanfield intentionally gets himself arrested (I think in season 4), thus confirming the wiretap that Prop Joe warned him about.
  2. Drive south to City Hall.  Several scenes throughout the series take place outside of City Hall, including a brief sequence in the opening.  Across from City Hall, you'll see the veterans' memorial steps where Stringer Bell and Prop Joe convene so Joe can warn Stringer about police surveillance (after Cheese is picked up following the dogfight). 
  3. Drive south to the Inner Harbor. This is where Omar meets Stringer Bell at the end of season 1, trying to get Stringer Bell to talk about the Barksdale organization on tape.
  4. Head east to the Broadway Market, where McNulty's kids follow Stringer Bell.
  5. Head north on Broadway to see The Ritz, which in season one is Orlando's strip bar.
  6. Head east to Canton Waterfront Park, home of the Baltimore Police Marine unit, where McNulty "rides the boat" in season 2.
  7. Head east on Boston Street to see the Port of Baltimore, the focus of season 2.
  8. Head northwest to Patterson Park, where Prop Joe meets with members of The Greek's organization.
  9. Head east to Collington Square Park, and commence following the City Paper's Wire Tour, which covers several awesome sites from seasons 1-4, including Hamsterdam.  My favorite stop is #5, Marlo's outdoor meeting spot, but seeing the rim store where Marlo hangs out is pretty awesome also.
  10. After leaving stop #7 on the City Paper tour, also visit Greenmount Cemetery, site of numerous meetups and interesting scenes.
This tour is very light on west-side locations.  If anyone wants to expand this itinerary, I would look at these sites:

Wednesday, December 24, 2008

SproutCore and the Future of Web Apps at DCRUG

A few weeks ago I presented an updated version of my SproutCore demo at DCRUG. Below is a video of it. Since you can't see the slides too well in the video, I also posted an updated copy of the slides on slideshare.



I borrow a lot from Charles Jolley's writings on the SproutCore blog for the introductory material about the uncanny valley and so on.

Saturday, November 15, 2008

random_data v1.5 Released

The random_data gem provides methods for generating random test data including names, mailing addresses, dates, phone numbers, e-mail addresses, and text. v1.5 includes a primitive Markov text generator and an array "roulette" function.

Thanks to Hugh Sasse for contributing code for the new features!

Wednesday, November 5, 2008

SocialDevCampEast #2 recap

SocialDevCampEast #2, held in Baltimore on November 1st, was another great tech event in Baltimore.  The biggest value for me was making and reinforcing weak ties by connecting with people I don't directly work with or socialize with on a daily basis (especially people I only converse with via social media).  For example, I got a chance to meet Adam Boalt and hear him talk about the success of RushMyPassport.com, which is the exact kind of business I'm interested in building in the future (something with online and offline components, and something that is not considered 'sexy' by most of the cognoscenti). 

As far as presentations went, I gave a short version of my SproutCore demo and also led a session about cloud infrastructure (and what's not so great about it).  The best sessions I attended were:
Jimmy Gardner posted some great photos that convey the feeling of the event.  Congratulations to Dave Troy and Ann Bernard for another success!

Sunday, October 19, 2008

SproutCore and the Future of Web Apps

Bryan Liles took a great (but long) video of my recent presentation on SproutCore at B'More on Rails.  A lot of my introductory material is adapted from the SproutCore blog, especially this post by Charles Jolley.

The slides themselves are on slideshare.

This is a great technology that I'm having a lot of fun learning.  Stand by for more blog posts about it!

Wednesday, October 15, 2008

Ignite Baltimore and SocialDevCampEast

Two really cool events that everyone should check out are going down here in Baltimore in the next couple of weeks.

Ignite Baltimore is an event I'm helping to organize where 16 people each get five minutes on stage to talk about something they are passionate about.  We've got hackers, bloggers, painters, reverends, entrepreneurs, green builders, and so on all coming together to share what they know. The event debuts Thursday, October 16th at 6 pm at The Windup Space (12 W. North Ave).

The second SocialDevCampEast conference, taking place on November 1st, is a great gathering of the smartest tech innovators on the East coast.  As Dave Troy writes:
SocialDevCamp is a real user-powered unconference with no commercial agenda -- the entire purpose is to CONNECT the leaders who will shape the next wave of tech innovation on the east coast.  Come see how dynamic a truly user-powered conference can be!
I'm really interested in these events because they are a great way to connect with others interested in technology and digital culture, and learn new things while I'm doing it.  See you there!

Friday, September 19, 2008

My Rails TakeFive interview

Now that OtherInbox has launched, we can talk publicly about what we're doing, what technologies we're using, and why.  I was honored to be interviewed for the FiveRuns "Take Five" feature. 

Tuesday, September 16, 2008

Video of my talk at Lone Star Ruby Conference

Confreaks has posted my entire Ruby in the Cloud talk from Lone Star Ruby Conference.  Check it out and let me know what you think!

Here's a 30 second synopsis filmed by Gregg Pollack:


Monday, September 8, 2008

Today we launch OtherInbox!

Josh and I are in San Francisco at Techcrunch50.  Sometime between 1 and 5 pm Pacific time we'll be launching OtherInbox!  You can see the demo live at otherinbox.com.

What a great year it's been!  Thanks for everyone's help and support.

Sunday, September 7, 2008

My talk at Lone Star Ruby Conference


I had a great time at Lone Star Ruby Conference, meeting fellow Rubyists and hanging out with the entire OtherInbox dev team.  I had the privilege of speaking about all the benefits we've gained from using as much Ruby as possible in running our service.  Confreaks will be posting the video soon, but here are my slides:

Ruby in the Cloud (PDF, 5.8 MB)

I really enjoyed sharing our experiences with the conference, but the highlight of the event for me was getting to hear the inventor of Ruby speak, and also getting to meet him.  You will not meet a more generous, noble, joyful person than Matz, and I can't think of anyone I don't know personally who has had more of an impact on my professional life than him.


Tuesday, August 19, 2008

Ignite Baltimore Call for Participants

Hey everyone, we're organizing Ignite Baltimore, and we're looking for interesting topics.  Check out the site for more details, or visit these other city Ignite pages for inspiration:
If you're a Twitter person, follow @ignitebaltimore.

Tuesday, August 12, 2008

Humane SproutCore Server Development Environment

Recently I've started to build a new, rich user interface for OtherInbox using SproutCore, which has been very enjoyable.  One thing that was not enjoyable was properly configuring my development environment.  

In production, you'll usually deploy your SproutCore app as a static file, so all you have to do is arrange for your users to hit that URL (which out of the box is configured as /static, but could be anything).

In development mode, though, you want to be regenerating your client on the fly by serving it dynamically from sc-server.  To use your app, you talk to http://localhost:4020, and if you want your client to communicate with a backend server, you configure the "proxy" setting in sc-config.  Thus when the SproutCore server gets a request for "/gadgets", it proxies that request to your local development server.

For some kinds of apps, this works well.  For OtherInbox, though, everything the SproutCore app does requires you to be signed in and have an active session with the Rails application server.  This caused all kinds of cookie problems, probably because of the same-origin policy (e.g. my Rails app running at otherinbox.dev was issuing cookies that were somehow getting mangled by the proxying process).

Here's how I solved the problem.  Check out my local Apache setup for details about the whole stack:


<VirtualHost *:80>
  ProxyPass /app http://localhost:4020/other_inbox/
  ProxyPassReverse /app http://localhost:4020/other_inbox

  ProxyPass /static http://localhost:4020/static/
  ProxyPassReverse /static http://localhost:4020/static

  ProxyPass / http://localhost:3000/
  ProxyPassReverse / http://localhost:3000

  ProxyPreserveHost on
</VirtualHost>
See how that works?  When I load http://otherinbox.dev/app in the browser, Apache proxies that request to sc-server, which dynamically generates my SproutCore client app.  

When components of that app make requests for other parts of SproutCore, using the /static URL, Apache also proxies those back to sc-server.  

When the app makes requests for anything else, those requests get proxied by Apache to the Mongrel I have running the Rails code.  Because my SproutCore app makes REST calls to the backend, this ensures that anything it asks for from my server gets proxied properly, in this case to localhost:3000.

As soon as I did this, all of the cookie issues were gone.  You'll also have to add some application-specific code to force logins if the user is not already signed in. In our code, I just check for a logged-in cookie, and if it's not there, we open the URL for a sign-in window.

Wednesday, July 30, 2008

Cool Medialets Micro-App

My good friend Paul Barry and I have been helping our friends at Medialets out with a neat little Rails micro-app that culls iPhone App Store data from Apple's endless array of plists, and makes a chart, a dynamically-generated Gruff graph, and a bunch of RSS feeds.  

My favorite thing about this app is the New Apps RSS feed which lets me keep up with the newest time-wasters/productivity enhancers for the iPhone.  Let me take this opportunity to say that the world does not need any more iPhone tip calculators or fortune-telling games.

I wrote a bit more about the latest features over at the Medialets blog.

Friday, July 18, 2008

My iPhone app screens

I got stuck for 90 minutes at the dentist's office, so I decided to organize all my iPhone apps. The first screen (left) is essentials, stuff I use every day. The second screen (right) is references and tools.


Third screen is "communications and entertainment". Fourth screen is the graveyard, for things I don't use regularly or can't get rid of.
I can think of worse ways to kill 90 minutes.

Wednesday, July 2, 2008

Contributing to Rails is easier than you think

At OtherInbox we love open source and are looking for ways to share some of our labors with the community. Today I came across a great opportunity to contribute something to Ruby on Rails core development. I'm posting it here so everyone can see how easy it is to contribute.

I was building a JSON API to enable some new awesome features we're working on. Following the JSON request specification, I had the client setting its MIME type to "application/jsonrequest". But this was not causing Rails to recognize the request as JSON and thus the request body was not properly parsed. After doing some digging, I realized that Rails only looks for MIME type "application/json".

Fortunately, MIME type processing is implemented really humanely in Rails, so I whipped up a little patch that adds "application/jsonrequest" as a synonym for the JSON MIME type. First I wrote a test to prove that this was a problem. Once I had a failing test, I added the MIME type, and got my test passing. I followed the git patch instructions on lighthouse, then jumped into IRC #rails-contrib to garner support for it.
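If you ever need to register extra MIME synonyms in your own app, the same call works from an initializer. Paraphrasing from memory (this isn't the literal diff), the registration in actionpack's mime_types.rb ends up looking like this:
Mime::Type.register "application/json", :json, %w( text/x-json application/jsonrequest )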

I happened to see that Rick Olson, the author of the existing JSON parsing code, was in the chat, so I pinged him with the lighthouse ticket. He tested it and applied it, and now our one line of code is a part of Rails!

Hopefully this will save some future JSON implementer a bit of pain.

Cross-posted from the OtherInbox blog

Tuesday, June 10, 2008

Speaking at Lone Star Ruby Conference

I will be speaking at the Lone Star Ruby Conference in September about how we use Ruby to deploy, monitor, and manage a cluster of servers running in the Amazon Web Services virtual cloud.   Below is a summary of what I'll be talking about.

In OtherInbox, almost every system administration task imaginable is carried out using Ruby, meaning we as developers can enjoy all of Ruby's expressive benefits and spend less time scripting the shell, writing cron tasks, or using other languages. Because we make fewer context switches from thinking in Ruby to thinking in other languages, we also reap a big productivity benefit.

Using Ruby throughout our cloud also means that porting the application to run in different production environments is a trivial task: Ruby is the glue connecting all the components together, so all we need to deploy is a Ruby interpreter.

Two key Ruby technologies have matured in the past 18 months, making Ruby ideal for almost every layer of managing a cluster of servers:
  • god.rb allows fine-grained process monitoring and daemon control (a la monit)
  • rufus-scheduler enables Ruby-based scheduling (replacing cron, and providing a great facility for work that must be executed on a recurring basis)
Armed with these Ruby workhorses, developers can spend much more of their time writing Ruby code and less time struggling with the vagaries of their production environment.

The talk will also include a discussion of using several different AWS gems to make cloud computing simple, illustrating the use of Amazon's S3 and SQS services to distribute asynchronous work and handle communication between servers.
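To give a flavor of the scheduling side, here's a minimal rufus-scheduler sketch (the jobs are made up for illustration, and the exact API differs a bit between versions):
require 'rubygems'
require 'rufus/scheduler'

scheduler = Rufus::Scheduler.start_new

# recurring maintenance that would otherwise be a cron entry
scheduler.every '10m' do
  puts "pruning expired messages at #{Time.now}"
end

# cron-style syntax is available too
scheduler.cron '0 3 * * *' do
  puts "rotating logs at #{Time.now}"
end

scheduler.join  # keep the process alive as a lightweight scheduling daemon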

(cross-posted from the OtherInbox blog)

Friday, June 6, 2008

random_data v1.3.1 released

random_data v1.3.1 is out.  Courtesy of stalwart contributor Hugh Sasse, this release includes more first names and two new methods: Random.firstname_male and Random.firstname_female.

Install it via:
sudo gem install random_data
or get it manually.
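Once it's installed, the new methods hang off Random like everything else in the gem. Here's a quick illustrative session (assuming the usual require of the gem's main file; the output is random, so the values below are just examples):
require 'rubygems'
require 'random_data'

Random.firstname_male    # => "Roger", for example
Random.firstname_female  # => "Alice", for example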

Monday, June 2, 2008

RailsConf 2008 Recap

I wrote up a few articles on the OtherInbox blog about my experiences at RailsConf 2008:
The best blow-by-blow coverage so far is from Drew Blas.  Of the talks I attended or heard the best feedback about, I most strongly recommend looking at the slides for these: 

Wednesday, May 28, 2008

See you at Railsconf 2008

I'll be leading two Birds of a Feather sessions at RailsConf 2008 that I hope everyone will attend (or flock to):
I was also invited to sign copies of the book Advanced Rails Recipes (to which I contributed a couple of recipes) at Powell's books in Portland on May 30th at 12:30 pm.  Hope to see you there!

I'll be there with the full OtherInbox contingent, so if you're looking for an awesome startup to join, come and track us down.

Sunday, May 25, 2008

random_data v1.3.0 released

random_data is a testing and seed data gem I wrote a few years back to help get Ruby projects up and running with semi-realistic fake data (the faker gem provides similar functionality).

I just released version 1.3.0 which includes a bunch of RDoc enhancements as well as some new features contributed by the tireless (and patient!) Hugh Sasse:
  • Added RandomData::Grammar, which lets you create simple random sentences from a grammar supplied as a hash, like so:
    Random::grammatical_construct({:story => [:man, " bites ", :dog], :man => { :bob => "Bob"}, :dog => {:a =>"Rex", :b =>"Rover"}}, :story)
    ==> "Bob bites Rex"
  • Added Random.bit and Random.bits
  • Added Random.uk_post_code
  • Bug fix: zipcodes should be strings, not integers
Thanks Hugh!  Open source is awesome!

Using Ruby's Autoload Method To Configure Your App Just-in-Time

Reading The Ruby Programming Language was a great experience — like revisiting a country I thought I knew intimately, but with expert tour guides who showed me whole new landscapes. It's also a good primer on what's changing in Ruby 1.9.

One of my favorite discoveries was Ruby's autoload method. Using autoload, you can associate a constant with a filename to be loaded the first time that constant is referenced, like so:
autoload :BCrypt, 'bcrypt'
autoload :Digest, 'digest/sha2'
The first time the interpreter encounters the constant BCrypt, it will load the file 'bcrypt' from Ruby's current load path, which it assumes will contain the definition of that constant. (Note that autoload takes the name of the constant, in symbol form, not the constant itself).

Here's an example of how useful it can be. OtherInbox uses beanstalkd in a few places where we haven't yet migrated to SQS. I was loading the beanstalk client with a Rails initializer, 'config/initializers/beanstalk.rb':
require 'beanstalk-client'
BEANSTALK = Beanstalk::Pool.new(['localhost:11300'])
Making this initial connection on our production server took five seconds or more each time I restarted the app or dropped into the console. That doesn't sound like much, but when you're doing it a few times a day, it starts to add up. So I moved the beanstalk code out of the initializer and into 'lib/etc/load_beanstalk.rb'. I placed all of my autoloads in a single initializer, 'config/initializers/autoload.rb'. For beanstalk, the statement is:
autoload :BEANSTALK, 'etc/load_beanstalk'
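The moved file, 'lib/etc/load_beanstalk.rb', is just the same two lines that used to live in the initializer, so referencing the constant for the first time triggers the connection:
require 'beanstalk-client'
BEANSTALK = Beanstalk::Pool.new(['localhost:11300'])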
Now, the app starts more quickly, and even better, this library doesn't get loaded into memory by parts of the app that don't need it.

Wednesday, May 21, 2008

ORDER BY null kills MySQL filesorts dead

I spent some time today optimizing OtherInbox.  As our private beta expands, we are starting to see heavier usage, and so it's time to revisit some of my beloved SQL queries.

I used the MySQL slow query log to find out which queries were taking the most time -- that's been a wonderful tool that I plan to blog about later.  I will note that the output is much easier to make sense of if you parse it first with this script.  (It's really old, so I had to modify it to look at Query_time instead of Time.)

Using the EXPLAIN command on each of the slow queries identified in the above log, I found one that was especially disturbing.   According to EXPLAIN's "extra" column, MySQL was resolving this query with two fairly expensive operations: 
Using where; Using temporary; Using filesort
Fixing the "Using temporary" required some nimble manipulation of indices, but filesort was really perplexing me.  I wasn't using an ORDER BY clause, so as I read the docs, there was no reason to be doing a filesort.  But then I remembered that MySQL automatically does ordering based on GROUP BY clauses.  All I had to do was add "ORDER BY null" to the end of my query, and that did the trick:
Using where; Using temporary
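If you're building the query with ActiveRecord rather than raw SQL, you can pass the literal straight through the :order option. Here's a sketch with a made-up model (not the actual OtherInbox query):
# hypothetical example: count messages per sender without the implicit GROUP BY sort
Message.find(:all,
             :select => "sender_id, COUNT(*) AS message_count",
             :group  => "sender_id",
             :order  => "NULL")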
If only all optimizations were so simple.

Wednesday, May 14, 2008

Applications due for Baltimore Improv Festival 2008

We have a formal application process for this year's Baltimore Improv Festival taking place in August.  Submissions are due May 23rd.  We'd like to get as diverse a pool of performers from all over the country as we can.

On a related note, BIG has been accepted into Artscape to perform on the Saturday night show (7/19)  along with the Mimehunters.  See you there!

Monday, May 12, 2008

SocialDevCampEast recap

SocialDevCampEast on Saturday was a blast.  Lots of talk about building up the "Amtrak corridor" running from DC to Boston as one unified, tech-ified region.  The talks people gave were interesting (I even got to do one on my experiences with Amazon Web Services), but for me the most valuable experiences took place in between sessions and at the after-party held at Brewer's Art.  

I met several cool DC, MD, and VA entrepreneurs, which was really invigorating.  I found out about some new blogs that I need to be reading, including eastcoastblogging.com, written by the creator of the newly-launched MyDropBin.com.

I also saw first-hand the value of Twitter, which I'm very late to the party on.  I didn't know many people in my own circle who were using it, but after seeing how it fostered collaboration among people at the conference, I'm sold, especially if you have something client-side that checks tweets for you (my friend Brian Lyles aka Smarticus recommended Twitterrific, which works great).  So you can now catch me @subelsky.

Thanks to the sponsors and organizers for a great event; I'll be at the next one for sure.

Tuesday, May 6, 2008

Startup community in Baltimore

I just found out about a very interesting BarCamp called SocialDevCamp East happening in Baltimore on Saturday, just a few miles south of my house. It's very heartening to see enthusiasm building for a tech community here on the East Coast, and it has gotten me thinking about the Baltimore startup community in general. Below are some notes on startup life in Baltimore.

  • Here are some Baltimore startups I know about:

    I'm sure there are more; send me some and I'll add them to the list.

  • I've talked to many other Baltimore entrepreneurs with ideas in various stages of development, and there are lots of hackers here working remotely (or commuting to DC and Northern Virginia) on startups. We're also the home of advertising.com, which employs a lot of smart people and brings a lot of talent to the area.

  • I meet a lot of cool hackers through the local Ruby on Rails meetup.

  • One of my favorite blogs is written by a Baltimorean, Paul Barry. He's a Ruby on Rails expert, but has in no way drunk the Kool-Aid. He's got plenty of love for Java and Scala and whatever else gets the job done.

  • I've been dreaming for a while about organizing a "Baltimore Demo Night" where all of us could gather and show off our wares, get feedback, and so on. Who's down for that?

Advanced Rails Recipes out now

I contributed two recipes to the newly-released Advanced Rails Recipes which I highly recommend.  It's got 84 very eye-opening solutions to problems faced by a lot of Rails programmers.  

I feel lucky to be working with cool, open technology that's yielded an opportunity to be part of a project like this.

Thursday, May 1, 2008

Looking for Rails developers (who isn't?)

For the past seven months I've been building a cool new consumer web app, otherinbox.com, with some folks in Austin, TX, and we're looking for experienced Ruby on Rails developers. We're a small agile team led by Steven Smith, founder of FiveRuns.

The whole site is Rails-based, makes extensive use of Amazon Web Services (EC2, S3, and SQS), and there's an awesome new challenge every day. More info on our jobs page.

Sunday, March 23, 2008

Great explanation of Rails auto-loading

I thought Mark Bush did a great job of explaining Rails' auto-loading behavior on the Rails-talk mailing list.  I'm posting it here to help others find it, and so I remember where to find it next time I'm struggling with it.  I've known that Rails automatically requires and loads classes on the fly using standard conventions for mapping constant names to file names, but I had a hard time grasping how it worked.  Mark explains it very concisely.
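The one-sentence version, as I understand it: when Ruby hits an unknown constant, Rails' const_missing hook underscores the constant's name and tries to load a matching file from its load paths. You can see the mapping with ActiveSupport's underscore method:
"Admin::UsersController".underscore  # => "admin/users_controller"
# so the first reference to Admin::UsersController makes Rails load
# app/controllers/admin/users_controller.rb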

The Rails list has a low signal-to-noise ratio but it's still worth paying attention to for gems like this.

Saturday, March 15, 2008

Using stunnel to wrap Ruby network operations on the fly

In my current project, we need to be able to connect to POP3 servers. Some POP3 servers, such as Gmail, only allow SSL connections. Unfortunately, the Ruby 1.8.x net/pop library doesn't support SSL (the 1.9 library does, but 1.9 was not an option for this project).

The usual answer here is to wrap your connection using stunnel, which acts as an SSL proxy for whatever traffic you want to send over it. Usually you run stunnel as a separate service pointing at the server, but since we'll be connecting to many different POP servers, I needed to be able to set up and tear down stunnels on the fly. The first attempt looked something like this:
system("echo -e 'foreground = yes\npid =\n[mail]\nclient = yes\n \
accept = 127.0.0.1:2000\nconnect = #{server}:#{port}\n' \
| stunnel -fd 0")
Since stunnel doesn't accept command-line options, you have to pipe options to it. The "-fd 0" tells stunnel to read its configuration from file descriptor 0, better known as STDIN.

Since I need to run that command in a child process, then have the parent resume and make use of the child service, I embarked on a fun foray into Ruby's forking and threading capabilities.

First, I tried forking, replacing the child process with a call to exec instead of system, then detaching the parent and killing the child process when the POP session was done. This partially worked, but I couldn't figure out how to kill the child process, so I'd end up with multiple copies of stunnel running after the script ran, or the parent process itself would hang.

Looking through the Pickaxe chapter on threads and processes, I discovered IO.popen, which works perfectly. I can pipe input to STDIN, avoiding the ugliness of the "echo -e" above, and I can more easily kill the child process when I'm done.

This is what the final method looks like:
def stunnel_wrap(server, port)
  stunnel = IO.popen("stunnel -fd 0", 'w+')
  stunnel.puts("foreground = yes\npid =\n[mail]\nclient = yes\n \
accept = 127.0.0.1:2000\nconnect = #{server}:#{port}\n")
  stunnel.close_write
  Kernel.sleep(1)
  yield
ensure
  Process.kill(9, stunnel.pid)
end
I handle exceptions at a higher layer in this class, so here all I do is make sure the stunnel process gets killed no matter what. I'm not sure if the sleep call is needed, but when I was testing this with Gmail it seemed to help to wait one second for the tunnel to activate before trying to use it.

To make the above example work, you just need to point your POP client at the stunnel (in this case 127.0.0.1 port 2000) and you'll be talking SSL to the server.
stunnel_wrap('pop.gmail.com', 995) do
  Net::POP3.start('127.0.0.1', 2000, account, password) do |pop|
    # pop securely
  end
end

Monday, February 18, 2008

ActiveRecord Double Validation Errors in RSpec

I had a strange error occur in one of my rspec model unit tests today, and I wanted to document it here because my solution (which is a bit of a hack) is the opposite of what worked for other people.

I have a bunch of tests that check to make sure I'm validating various properties of a model. All of a sudden, I started having tests fail because the validations seemed to be adding the same error twice.
'User creation should require domain names to be unique' FAILED
expected 1 error on :domain, got 2
This error only occurred when my full suite of tests ran. If I ran the unit test by itself (or even if I ran only the model tests), it didn't happen.  Unfortunately I didn't notice what I did to introduce this error, so I couldn't just reverse it.

Several other people have encountered this problem, so I know it's because my tests are leaking state in some way.  Somehow I am doing something during my tests that rspec isn't able to clean up.  Usually it's because the tests are doing something weird with extra require or load statements, which causes multiple copies of the class to be loaded.  Removing these statements usually works.  

I had no such statements and so spent a while trying to sort this out.  On a whim, I added a require statement to the top of the failing User model spec, and that fixed it:
require 'user'
I wish I had more time to investigate, because it makes me think I don't know enough about Rails' autoloading behavior, or Ruby's loading behavior. 

If you're running into this same problem, I found these mailing list threads to be useful:

Tuesday, February 12, 2008

Unobtrusive Firefox Plugin Click-to-Install

I've been working on a really cool new project, to be announced soon, where I've built a Rails-based web app with two interfaces: one for humans to use inside a browser, and a RESTful API for browser plugins to talk to via GETs and POSTs.  We want people to be able to interact with our site while visiting other sites.


I had seen other Firefox plugins that were click-to-install, but I had a hard time figuring out how to make it work for our plugin. Firefox users had to "Save Link As..." and open the downloaded .xpi file manually.  Very old-fashioned.  So here's a quick note to help future Mozilla or Firefox developers who need to create a click-to-install plugin:


1) It's all done through Javascript, so anyone without Javascript will have to install your plugin the old-fashioned way.  The Mozilla site documents the API call you need to make.


2) I'm a huge proponent of unobtrusive javascript (UJS), which I learned by using Dan Webb's excellent LowPro framework. Thus I wanted to make sure that the click-to-install javascript was offered as a progressive enhancement to the normal HTML links we provided.  That way, everyone could have a link to the plugin file itself for manual installation, but people with Javascript could enjoy click-to-install.


In this part of the site, we weren't using any other Javascript libraries, so it seemed like overkill to include Prototype and LowPro just for this one effect.  So it was a great chance to learn how to roll my own UJS without library support.  I whipped up a quick UJS click-to-install technique following inspiration from this presentation.  Here's what I came up with:

<script defer="defer" type="text/javascript">
//<![CDATA[
function doXPITrigger() {
  if (!document.getElementsByTagName) return false;
  var links = document.getElementsByTagName("a");
  for (var i = 0; i < links.length; i++) {
    if (links[i].className.match("firefox")) {
      links[i].onclick = function() {
        var xpi = {'Awesome New Project Toolbar': '/downloads/awesome_project.xpi'};
        InstallTrigger.install(xpi);
        return false;
      };
    }
  }
}

window.onload = doXPITrigger;
//]]>
</script>

3) I've seen other advice recommending you configure your web server to recognize the .xpi mimetype appropriately.  I did this but it didn't make much difference in my case.  Still, it's probably worth doing.  I added this line to our Apache config:


AddType application/x-xpinstall .xpi

Wednesday, January 23, 2008

Shout out to Rails Envy & More Autotest Love

Thanks to the Rails Envy Podcast for mentioning my note on autotest configuration. Another aspect I enjoy about using the verbose flag is that autotest tells you exactly what has triggered the re-test. Kind of a fun way to look under the hood of your testing system:
[["app/controllers/admin/users_controller.rb", Wed Jan 23 14:06:10 -0600 2008]]
/usr/local/bin/ruby -S script/spec -O spec/spec.opts spec/controllers/other_inboxes_controller_spec.rb spec/controllers/admin/users_controller_spec.rb
If you like that, you might also be interested in the autotest timestamp plugin, which stamps autotest runs like so:
# Waiting at 2008-01-23 14:06:17
All you have to do is uncomment this line in your ~/.autotest file:
require 'autotest/timestamp'

Wednesday, January 16, 2008

Autotest with verbose flag on

I followed a tip from David Chelimsky's blog and began running autotest with the -v flag for verbosity.  When you first run it, you get a bunch of lines like this:
Dunno! spec/other_inbox_spec_helpers.rb
Dunno! app/views/layouts/main/_footer.erb
These are all the files for which autotest doesn't have a mapping.  With ZenTest 3.8.0 out, it's easy to add mappings (which tell autotest which tests to run when a matching file changes) and exceptions (which tell autotest which files to ignore).  You can also set up .autotest files for particular projects, or for your whole development machine (~/.autotest).

Here's an example of a per-project .autotest I use, sitting in the Rails root directory.  I've got a custom spec helper and I want to rerun all my tests if this file ever changes:
Autotest.add_hook :initialize do |at|
  %w{ domain_regexp perfdata coverage reports }.each { |exception| at.add_exception(exception) }

  at.add_mapping(/spec\/app_spec_helper.rb/) do |_, m|
    at.files_matching %r%^spec/(controllers|helpers|lib|models|views)/.*\.rb$%
  end
end
And here's part of my site-wide .autotest file, mostly cribbed from David's blog, where I'm ignoring other kinds of cruft that pile up in projects.  Also note the mapping for spec/defaults.rb, a file I commonly set up in my specs containing default parameters for different models.
Autotest.add_hook :initialize do |at|
  %w{ .hg .git .svn stories tmtags Rakefile Capfile README spec/spec.opts spec/rcov.opts vendor/gems autotest svn-commit .DS_Store }.each { |exception| at.add_exception(exception) }

  at.add_mapping(/spec\/defaults.rb/) do |f, _|
    at.files_matching %r%^spec/(controllers|helpers|lib|models|views)/.*\.rb$%
  end
end

Tuesday, January 8, 2008

Presenter classes help with Rails complexity

A few weeks back I gave a talk at a Bmore on Rails meeting all about Presenter classes, which I learned about from Jay Fields.  I can't claim any authorship of this idea, but it's been very helpful to me, so I thought I'd share some of my examples here plus a bit of extra code I wrote to make Presenter integration with ActiveRecord a bit more fun.

Presenters allow you to extract the logic needed for complex views (especially views that require the use of more than one model) into a separate, easily testable class.  This helps you write clean code and skinny controllers, among other benefits. 
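To make that concrete, here's a stripped-down, hypothetical presenter in the spirit of what I showed at the meeting (all of the model and field names here are invented for illustration; the real code does more):
class PreferencePresenter
  attr_reader :user, :account

  def initialize(params = {})
    @user    = params[:user]    || User.new
    @account = params[:account] || Account.new
  end

  # The view only talks to the presenter, so it doesn't care which
  # underlying model a given field lives on.
  def email
    user.email
  end

  def timezone
    account.timezone
  end

  # Combine error messages from both models for display in the view.
  def error_messages
    user.errors.full_messages + account.errors.full_messages
  end

  # Save both models atomically, returning true/false like ActiveRecord does.
  def save
    User.transaction do
      user.save!
      account.save!
    end
    true
  rescue ActiveRecord::RecordInvalid
    false
  end
end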

1) The key background material is here:


2) I extended Jay Fields' code by adding methods to combine error messages from different models:


3) An example Presenter, combining a User object and an Account object into a Preference presenter, is here:


4) An example controller, using the Preference presenter, is here:


Also, Jay wrote an excellent recipe for Advanced Rails Recipes that covers this technique.

Thursday, January 3, 2008

Dynamic Content Caching Recipe

Hello everyone,

Pragmatic Programmers just released an updated beta version of Advanced Rails Recipes which contains another recipe that I contributed, this one about dynamic content caching. I've built a few sites that had a lot of static content, with only a bit of dynamic content on each page (usually a signout button or an admin link). In these cases, I was able to use page caching with a little bit of Javascript that looks for a cookie in the client browser and alters the page accordingly. The book is shaping up great and I'm learning a lot by reading the other recipes, so definitely check it out!
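Here's a rough sketch of the shape of that technique (the controller and cookie names are invented for illustration, and the actual recipe in the book is more complete): cache the whole page, and have sign-in drop a cookie that client-side JavaScript can read to reveal the signout button or admin link.
class PagesController < ApplicationController
  caches_page :show  # the full response gets written to public/ and served statically

  def show
    # mostly static content; user-specific bits get toggled client-side
  end
end

class SessionsController < ApplicationController
  def create
    # ... authenticate the user ...
    cookies[:signed_in] = "1"  # readable by JavaScript on the cached page
    redirect_to "/"
  end
end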

Wednesday, January 2, 2008

Improv and personal transformation

In summer 2007 Towson Unitarian Universalist Church asked me to speak at one of their services about my experiences as an improv theater director, teacher, and performer.  Rather than tell them how great improv is, I decided to share what I think improv reveals about the human condition.  They were nice enough to tape the talk and give me a copy, which I present here for all eternity, in MP3 format (it's about 19 minutes long).

UPDATE: I had this speech transcribed by escriptionist.com. The transcript wants some editing and translation, but if you just feel like skimming instead of listening to the audio, it's pretty good (PDF file).