<p>The Business of the Web: making money on the web, one customer at a time (<a href="https://www.bencurtis.com/atom.xml">feed</a>)</p>
<h1>Data retention with the Serverless Framework, DynamoDB, and Go</h1>
<p><em>2018-04-12</em> · <a href="https://www.bencurtis.com/2018/04/data-retention-with-dynamodb-and-golang">permalink</a></p>
<p>At <a href="https://www.honeybadger.io/">Honeybadger</a> we have standard retention periods from which our customers can choose. Based on the subscription plan they choose, we’ll store their error data for up to 180 days. Some customers, though, need a custom retention period. For compliance or other reasons, they may want to enforce a data retention period of 30 days even though they subscribe to a plan that offers a longer retention period. We allow our customers to configure this custom retention period on a per-project basis, and we then delete each error notification based on the schedule they have set. Since we store customer error data on S3, we need to keep track of every S3 object we create and when it should expire so that we can delete it at the right time. This blog post describes how we use S3, DynamoDB, Lambda, and the <a href="https://serverless.com/">Serverless Framework</a> to accomplish this task.</p>
<h2 id="keeping-track-of-the-s3-objects-we-create">Keeping track of the S3 objects we create</h2>
<p>As our processing pipeline receives and processes error notifications, we store the payload from each notification in an S3 object. We also create objects in a separate S3 bucket that contain a batched list of the resulting S3 keys and the expiration time for each of those keys. These objects are just JSON arrays that look like this:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="s2">"key"</span><span class="p">:</span><span class="w"> </span><span class="s2">"pu3We2Ie/ea40b606-1b48-40cb-942f-a046755c7a0f"</span><span class="p">,</span><span class="w">
</span><span class="s2">"expire_at"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2017-03-03T23:59:58Z"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="s2">"key"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Ieb0ieVu/fe3c6b8a-76d7-48d8-ab71-ea8a4f3bce08"</span><span class="p">,</span><span class="w">
</span><span class="s2">"expire_at"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2017-07-31T23:59:58Z"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">]</span><span class="w">
</span></code></pre></div></div>
<p>The S3 bucket that is used to store these lists of keys sends a message to an SNS topic for each PUT request. We have a Lambda function that is subscribed to this topic to process each object as it is created, configured in our <code class="highlighter-rouge">serverless.yml</code> as the send-to-dynamo function:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">functions</span><span class="pi">:</span>
<span class="na">send-to-dynamo</span><span class="pi">:</span>
<span class="na">handler</span><span class="pi">:</span> <span class="s">bin/send-to-dynamo</span>
<span class="na">events</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">sns</span><span class="pi">:</span> <span class="s2">"</span><span class="s">arn:aws:sns:us-east-1:*:expirations-${opt:stage,</span><span class="nv"> </span><span class="s">self:provider.stage}"</span>
</code></pre></div></div>
<p>Tip: If you start your Serverless project with <code class="highlighter-rouge">serverless create --template aws-go</code>, you will get a project layout that puts the code for each function in a <code class="highlighter-rouge">main.go</code> file with its own subdirectory and a Makefile to help with compiling the code before deploying. Handy!</p>
<p>Ok, back to our configuration… Since this function needs to read the contents of the S3 objects referenced by the SNS notification, we have these permissions defined in <code class="highlighter-rouge">serverless.yml</code>:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">provider</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">aws</span>
<span class="na">runtime</span><span class="pi">:</span> <span class="s">go1.x</span>
<span class="na">environment</span><span class="pi">:</span>
<span class="na">STAGE</span><span class="pi">:</span> <span class="s2">"</span><span class="s">${opt:stage,</span><span class="nv"> </span><span class="s">self:provider.stage}"</span>
<span class="na">iamRoleStatements</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">Effect</span><span class="pi">:</span> <span class="s2">"</span><span class="s">Allow"</span>
<span class="na">Action</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">s3:GetObject"</span>
<span class="na">Resource</span><span class="pi">:</span> <span class="s2">"</span><span class="s">arn:aws:s3:::expirations-bucket-${opt:stage,</span><span class="nv"> </span><span class="s">self:provider.stage}/*"</span>
</code></pre></div></div>
<p>That STAGE environment variable is used in the Go code to know which DynamoDB table to use, since we create a table per serverless environment that we deploy. Here’s how we define that table and give the Lambda function permission to write to it in our config:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">provider</span><span class="pi">:</span>
<span class="c1"># ...</span>
<span class="na">iamRoleStatements</span><span class="pi">:</span>
<span class="c1"># ...</span>
<span class="pi">-</span> <span class="na">Effect</span><span class="pi">:</span> <span class="s2">"</span><span class="s">Allow"</span>
<span class="na">Action</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">dynamodb:PutItem"</span>
<span class="na">Resource</span><span class="pi">:</span>
<span class="s">Fn::GetAtt</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">expirationsTable</span>
<span class="pi">-</span> <span class="s">Arn</span>
<span class="na">resources</span><span class="pi">:</span>
<span class="na">Resources</span><span class="pi">:</span>
<span class="na">expirationsTable</span><span class="pi">:</span>
<span class="na">Type</span><span class="pi">:</span> <span class="s">AWS::DynamoDB::Table</span>
<span class="na">Properties</span><span class="pi">:</span>
<span class="na">TableName</span><span class="pi">:</span> <span class="s2">"</span><span class="s">expirations-${opt:stage,</span><span class="nv"> </span><span class="s">self:provider.stage}"</span>
<span class="na">AttributeDefinitions</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">AttributeName</span><span class="pi">:</span> <span class="s">ID</span>
<span class="na">AttributeType</span><span class="pi">:</span> <span class="s">S</span>
<span class="pi">-</span> <span class="na">AttributeName</span><span class="pi">:</span> <span class="s">ExpireAt</span>
<span class="na">AttributeType</span><span class="pi">:</span> <span class="s">N</span>
<span class="na">KeySchema</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">AttributeName</span><span class="pi">:</span> <span class="s">ID</span>
<span class="na">KeyType</span><span class="pi">:</span> <span class="s">HASH</span>
<span class="pi">-</span> <span class="na">AttributeName</span><span class="pi">:</span> <span class="s">ExpireAt</span>
<span class="na">KeyType</span><span class="pi">:</span> <span class="s">RANGE</span>
<span class="na">StreamSpecification</span><span class="pi">:</span>
<span class="na">StreamViewType</span><span class="pi">:</span> <span class="s">OLD_IMAGE</span>
<span class="na">TimeToLiveSpecification</span><span class="pi">:</span>
<span class="na">AttributeName</span><span class="pi">:</span> <span class="s">ExpireAt</span>
<span class="na">Enabled</span><span class="pi">:</span> <span class="no">true</span>
<span class="c1"># ...</span>
</code></pre></div></div>
<p>You’ll notice that the DynamoDB table has been configured with a TTL field called ExpireAt, and that it will emit a stream of events. Since we only care about the TTL events, we use OLD_IMAGE as the StreamViewType, since that will give us the contents of the fields in the DynamoDB row when it is deleted.</p>
<p>Since we have a lot of keys arriving with the same expiration time, the Lambda function groups the list of keys by expiration time and creates one record per group in DynamoDB to reduce the write throughput. This results in one DynamoDB record per expiration time (down to the second), each containing the list of S3 keys that are to be deleted at that expiration time.</p>
<script src="https://gist.github.com/7cb3d82cdc8bde17299c2e77b7301d83.js?file=send-to-dynamo-main.go"> </script>
<p>The handler function receives the SNS event, looks for S3 records in the event data, then calls the <code class="highlighter-rouge">getItems</code> function to load each of those S3 objects and return lists of S3 keys to be deleted, grouped by expiration time. Each of those groups gets inserted as one DynamoDB record.</p>
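<p>The grouping step can be sketched like this (the struct and function names here are hypothetical, not the actual Honeybadger code; the real handler would then write each group to DynamoDB with one PutItem call):</p>

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Entry mirrors one element of the JSON arrays stored in the
// expirations bucket, as shown earlier in the post.
type Entry struct {
	Key      string `json:"key"`
	ExpireAt string `json:"expire_at"`
}

// groupByExpiration collapses a batch of entries into one list of S3
// keys per expiration timestamp, so each group can become a single
// DynamoDB record instead of one write per key.
func groupByExpiration(raw []byte) (map[string][]string, error) {
	var entries []Entry
	if err := json.Unmarshal(raw, &entries); err != nil {
		return nil, err
	}
	groups := make(map[string][]string)
	for _, e := range entries {
		groups[e.ExpireAt] = append(groups[e.ExpireAt], e.Key)
	}
	return groups, nil
}

func main() {
	raw := []byte(`[
		{"key": "pu3We2Ie/ea40b606", "expire_at": "2017-03-03T23:59:58Z"},
		{"key": "Ieb0ieVu/fe3c6b8a", "expire_at": "2017-03-03T23:59:58Z"},
		{"key": "Aiqu4ahx/1c2d3e4f", "expire_at": "2017-07-31T23:59:58Z"}
	]`)
	groups, err := groupByExpiration(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d groups\n", len(groups)) // prints "2 groups"
}
```

<p>With this shape, a burst of keys sharing the same expiration second costs a single DynamoDB write rather than one write per key.</p>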
<h2 id="using-dynamodb-streams-to-know-when-to-delete">Using DynamoDB streams to know when to delete</h2>
<p>Now that we have the triggers and code in place to store the keys of the objects that are to be deleted, we need to finish the job by watching for TTL events in the DynamoDB stream and delete the S3 objects. The <code class="highlighter-rouge">purge-from-s3</code> function does this. Here’s the configuration from <code class="highlighter-rouge">serverless.yml</code>:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">provider</span><span class="pi">:</span>
<span class="c1"># ...</span>
<span class="na">iamRoleStatements</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">Effect</span><span class="pi">:</span> <span class="s2">"</span><span class="s">Allow"</span>
<span class="na">Action</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">s3:DeleteObject"</span>
<span class="na">Resource</span><span class="pi">:</span> <span class="s2">"</span><span class="s">arn:aws:s3:::data-bucket-${opt:stage,</span><span class="nv"> </span><span class="s">self:provider.stage}/*"</span>
<span class="na">functions</span><span class="pi">:</span>
<span class="c1"># ...</span>
<span class="na">purge-from-s3</span><span class="pi">:</span>
<span class="na">handler</span><span class="pi">:</span> <span class="s">bin/purge-from-s3</span>
<span class="na">events</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">stream</span><span class="pi">:</span>
<span class="na">type</span><span class="pi">:</span> <span class="s">dynamodb</span>
<span class="na">arn</span><span class="pi">:</span>
<span class="s">Fn::GetAtt</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">expirationsTable</span>
<span class="pi">-</span> <span class="s">StreamArn</span>
</code></pre></div></div>
<p>We have granted the Lambda function the permission to delete objects from the bucket that stores the error payloads, and we have configured that function to be triggered by the events in the stream from our DynamoDB table.</p>
<p>This function is simpler than the first one. When it receives a <code class="highlighter-rouge">REMOVE</code> event from DynamoDB, signifying that a record has been deleted due to the TTL, it iterates over the list of keys stored in the record, deleting each one from the bucket.</p>
<script src="https://gist.github.com/7cb3d82cdc8bde17299c2e77b7301d83.js?file=purge-from-s3-main.go"> </script>
<p>Unfortunately, this Lambda function will get invoked for every event in the DynamoDB stream, which includes all the inserts that we did in the <code class="highlighter-rouge">send-to-dynamo</code> function. We don’t care about those records, so we just ignore them in this function. I have filed a feature request with Amazon to add the ability to filter the types of stream events that trigger a function, and you’re more than welcome to do the same. :)</p>
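<p>A minimal sketch of that filter-and-delete logic might look like this (the types here are simplified stand-ins for the aws-lambda-go event structs, and the actual handler would call s3.DeleteObject for each key rather than printing it):</p>

```go
package main

import "fmt"

// StreamRecord is a pared-down stand-in for the DynamoDB stream
// records delivered to the Lambda function.
type StreamRecord struct {
	EventName string   // "INSERT", "MODIFY", or "REMOVE"
	OldKeys   []string // S3 keys from the OLD_IMAGE of the deleted row
}

// keysToPurge keeps only the keys from REMOVE events (the TTL
// deletions); the INSERT events generated by send-to-dynamo are
// skipped.
func keysToPurge(records []StreamRecord) []string {
	var keys []string
	for _, r := range records {
		if r.EventName != "REMOVE" {
			continue
		}
		keys = append(keys, r.OldKeys...)
	}
	return keys
}

func main() {
	records := []StreamRecord{
		{EventName: "INSERT"},
		{EventName: "REMOVE", OldKeys: []string{"pu3We2Ie/ea40b606", "Ieb0ieVu/fe3c6b8a"}},
	}
	for _, key := range keysToPurge(records) {
		fmt.Println("would delete", key)
	}
}
```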
<p>I hope you have found this post to be a useful example of how to work with S3 and DynamoDB streams using the Serverless Framework, Lambda, and Go!</p>
<h1>2017 in review</h1>
<p><em>2018-01-10</em> · <a href="https://www.bencurtis.com/2018/01/2017-in-review">permalink</a></p>
<p>I can’t recall having done a year-in-review type of blog post before,
but when <a href="https://twitter.com/patio11/status/946910434887471104">Patrick suggested it recently</a>, it seemed like a great idea, so
I thought I’d give it a shot.</p>
<p>In short, 2017 was a great year! :) I moved all of our servers from a
colocation facility to AWS in January, which helped me sleep a lot
better at night. Over the course of the year I continued to improve our
infrastructure, and we now have a very reliable and self-healing system.
Nearly everything we do (application, search, and database servers) is
self-managed, so it’s been fun to level-up my distributed system skills.
In December I put my AWS skills (literally) to the test by passing my
first AWS certification exam: AWS Certified Developer - Associate.</p>
<p>I also spent some time working on developing my marketing skills by
taking <a href="https://themarketingseminar.com">The Marketing Seminar</a> by Seth Godin. The seminar was
excellent, and I’m very glad I spent the time on it. Having spent a few
years as a freelancer, I had to get good at one-to-one sales, but trying
to market a SaaS business to a world of customers is a different kettle
of fish, and I picked up lots of helpful info from Seth’s seminar. My
favorite part of his philosophy is that if you believe you are bringing a
great product or service to the world (and why would you be doing it if
you didn’t believe that?), then you <em>owe</em> it to the world to be a
champion for your product in a way that will attract people who will be
best served by it. Of course I’m convinced that Honeybadger <em>is</em> a
great addition to the world, so Seth’s approach resonates with me. I’m
hoping that I’ll be able to apply those learnings and improve my
marketing skills in 2018.</p>
<p>During the summer I had two small freelancing projects fall in my lap
out of the blue. I hadn’t done any freelancing in several years, so it
was fun to do some work on a side project and shift the mental gears a
little bit from my day-to-day work at Honeybadger. One of the projects
was <a href="https://www.tvtattle.com">TVTattle</a>, which was a blast to work on. It’s a pretty simple
content-management Rails app, but it was fun to play with caching plus a
CDN to make it super fast.</p>
<p>So that’s my year – a lot of ops work with sprinklings of dev work
along the way. I’m still learning all the time, and business is great,
so what more could I want? :)</p>
<h1>Writing Again</h1>
<p><em>2017-06-02</em> · <a href="https://www.bencurtis.com/2017/06/writing-again">permalink</a></p>
<p>A while back I changed the stack I used for publishing this blog with
the hope that I would write more because it would be easier to publish.
Looking back at how little I’ve written since I made that change, I
can see that that didn’t work out so well. :) I’m not going to change
my stack again (just yet), but I am going to try and write a bit more.
Hopefully it won’t be another year before my next post.</p>
<p>One cause of my lack of writing is how busy I have been running my
<a href="https://www.honeybadger.io">error tracking service</a>. It has been a lot
of fun, but it has also been a lot of work. The good news is that the
business continues to grow, and we just passed the five-year mark. My
co-founders and I are definitely living the bootstrapper’s dream, doing
work that we love, and making our customers happy.</p>
<p>Work will fill the time allotted to it, though, so I’m going to try and
carve out more time for writing. We’ll see how it goes. :)</p>
<h1>Solr Recovery</h1>
<p><em>2016-03-02</em> · <a href="https://www.bencurtis.com/2016/03/solr-recovery">permalink</a></p>
<p>At <a href="https://www.honeybadger.io/">Honeybadger</a> this morning we had a
failure of our SolrCloud cluster (of three servers). Each of the three
servers has a replica of the eight shards of our <em>notices</em> collection.
Theoretically this means that two of the three servers can go away and
we can still serve search traffic. Sadly, reality doesn’t always match
the theory.</p>
<p>What happened to us this morning is that some of the shards became
leaderless when one of the servers ran out of disk space and started
throwing errors. In other words, we kept seeing this error in the logs:
<strong><em>No registered leader was found</em></strong>. As a result, the two remaining
servers refused to update the index, which brought a halt to
search-related operations. Since I’m relatively new to Solr, I had to
bang my head against the wall for a bit before I stumbled upon the
solution.</p>
<p>Simply bringing down one of the two remaining good servers didn’t solve
the problem. The last remaining server refused to become the leader for
four of the shards. To fix this, I had to unload each of the stubborn
shards and load them again. This was accomplished easily enough via the
admin UI, and, once completed, our search functionality was restored.</p>
<p>Once that was done I brought the other good server back up, and it
quickly caught up with the one server that was now the leader for all the
shards. Easy-peasy. Sadly, bringing the last of the servers up – the
culprit with the full disk – took a bit more work. Since its data
directory was about twice the size of the directory on the other two
servers, despite all three supposedly having the same documents, I
decided to just blow away all the data on the third server and replace
it with a copy of the snapshots from the leader. The process was
basically this:</p>
<ol>
<li>Take a fresh snapshot of each shard from the leader</li>
<li>Copy the eight snapshots from the leader to the damaged server</li>
<li>Move the index data in place, renaming the shards</li>
<li>Unload and load the cores on the damaged server</li>
</ol>
<p>Here’s a <a href="https://gist.github.com/stympy/29673e824e7f619bb7c6#file-solr-backup-rb">ruby script for step 1</a> and a <a href="https://gist.github.com/stympy/29673e824e7f619bb7c6#file-reload-cores-sh">shell script for the other steps</a>.</p>
<p>With that, the damaged server quickly got each of the shards back in
sync (since I had just taken snapshots on the leader), and everything
was back to normal.</p>
<h1>Steppin’ up</h1>
<p><em>2015-03-27</em> · <a href="https://www.bencurtis.com/2015/03/steppin-up">permalink</a></p>
<p>Rob Walling wrote a <a href="http://www.softwarebyrob.com/2015/03/26/the-stairstep-approach-to-bootstrapping/">great post yesterday</a>
about building up your bootstrapped business over time by taking on
smaller projects before diving into big ones. His post reminded me of
Amy Hoy’s <a href="https://unicornfree.com/stacking-the-bricks">Stacking the Bricks</a> philosophy, and I
think that taking the approach of learning to walk before learning to
run makes sense. Rob’s post made me reflect on my experience building
products that have gone from producing no income, to putting some change
in my pocket, to providing a nice income for my family, and I thought it
would be fun to share.</p>
<p>My day job has always been building web apps, so my first side projects
were also web apps: first, a community site, and later, a SaaS app for
managing test plan execution for software testers.
Those were fun, but never amounted to much.</p>
<p>The first side project I did with the goal of making money was a
self-published ebook about building e-commerce sites with Rails. This
was in 2006, when Rails was young, and that $12 ebook sold pretty well.</p>
<p>In 2007 I started freelancing full time, and I decided that I needed a
product with recurring revenue to help even out my cash flow, so I
started on <a href="http://www.catchthebest.com/">Catch the Best</a>, a SaaS app
that scratched my own itch. I launched that in October of 2007 (working
on it part-time while working on client projects), and it got some
paying customers from day one. The revenue from that app has never been
large on an MRR basis, but it has been consistent, so I’m pretty happy
having that as a cash machine.</p>
<p>In 2008 I was building a SaaS billing system in Rails for the third
time. The first time was for Catch the Best, and the other two times
were for clients who had engaged me to build SaaS apps for them.
It occurred to me that other developers might be
interested in buying what I had built so that they could save
themselves the time of building their own. So I cleaned up the code I
had written and launched <a href="https://www.railskits.com">RailsKits</a> to sell
that billing code to other Rails developers. I priced it starting at $250,
and it was a hit. It effectively replaced my freelance income for a
while, and while it doesn’t make as much as it used to (since other
options for implementing billing have become available), it is still a consistent revenue stream
for me.</p>
<p>After RailsKits, I knew I wanted to do another SaaS project, and in 2012
I found the right one: <a href="https://www.honeybadger.io">Honeybadger</a> – an
application health monitoring service for Ruby developers. It has been
an incredibly fun project with awesome co-founders. Since its launch in
the fall of 2012 it has grown consistently, allowing me to cut back and
eventually eliminate my freelancing business.</p>
<p>I didn’t set out with a plan to start with an ebook, then move to a
larger product, then move to a recurring revenue product, but after
having considered Rob’s Stairstep Approach and having reflected on the
past decade of my own experience, I can certainly recommend going that
route. It’s not the only way to go, but it does give you a variety of
opportunities to learn how to find customers and sell something to them,
and it can be a whole lot of fun.</p>
<h1>Inject Your App Data Into Help Scout</h1>
<p><em>2014-04-18</em> · <a href="https://www.bencurtis.com/2014/04/inject-your-app-data-into-help-scout">permalink</a></p>
<p>At <a href="https://www.honeybadger.io/">Honeybadger</a> we use <a href="http://www.helpscout.net/">Help Scout</a>
to manage our customer support, and
that has worked out well for us. One thing I’ve wanted for quite a
while is more integration between Help Scout, our internal dashboard,
and the Stripe dashboard. After taking a mini-vacation to attend
<a href="http://www.microconf.com/">MicroConf</a> this week, I decided it was time to make my dreams come true.
:)</p>
<p>Help Scout allows you to plug “apps” into their UI, and you can build
your own apps to populate the sidebar when looking at a help ticket.
All you have to do is provide a URL that Help Scout can hit which
returns a blob of HTML to be rendered on the page. Your app receives a
signed POST request where the payload is some information about the
support ticket you are viewing, which includes the email address of the
person who created the ticket. Here’s a Rails controller that receives
the request, verifies the signature, and returns some HTML for the user
found by email address:</p>
<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">require</span> <span class="s1">'base64'</span>
<span class="nb">require</span> <span class="s1">'hmac-sha1'</span>
<span class="k">class</span> <span class="nc">HelpscoutController</span> <span class="o"><</span> <span class="no">ApplicationController</span>
<span class="n">skip_before_filter</span> <span class="ss">:verify_authenticity_token</span>
<span class="n">before_filter</span> <span class="ss">:verify_signature</span>
<span class="k">def</span> <span class="nf">user</span>
<span class="n">payload</span> <span class="o">=</span> <span class="no">JSON</span><span class="p">.</span><span class="nf">parse</span><span class="p">(</span><span class="n">request</span><span class="p">.</span><span class="nf">raw_post</span><span class="p">)</span>
<span class="k">if</span> <span class="n">payload</span><span class="p">[</span><span class="s1">'customer'</span><span class="p">]</span> <span class="o">&&</span> <span class="n">payload</span><span class="p">[</span><span class="s1">'customer'</span><span class="p">][</span><span class="s1">'email'</span><span class="p">]</span> <span class="o">&&</span> <span class="vi">@user</span> <span class="o">=</span> <span class="no">User</span><span class="p">.</span><span class="nf">where</span><span class="p">(</span><span class="ss">email: </span><span class="n">payload</span><span class="p">[</span><span class="s1">'customer'</span><span class="p">][</span><span class="s1">'email'</span><span class="p">]).</span><span class="nf">first</span>
<span class="n">render</span> <span class="ss">json: </span><span class="p">{</span> <span class="ss">html: </span><span class="n">render_to_string</span><span class="p">(</span><span class="ss">action: :user</span><span class="p">,</span> <span class="ss">layout: </span><span class="kp">false</span><span class="p">)</span> <span class="p">}</span>
<span class="k">else</span>
<span class="n">render</span> <span class="ss">json: </span><span class="p">{</span> <span class="ss">html: </span><span class="s2">"User not found"</span> <span class="p">}</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="kp">protected</span>
<span class="k">def</span> <span class="nf">verify_signature</span>
<span class="n">bail</span> <span class="n">and</span> <span class="k">return</span> <span class="kp">false</span> <span class="k">unless</span> <span class="p">(</span><span class="n">sig</span> <span class="o">=</span> <span class="n">request</span><span class="p">.</span><span class="nf">headers</span><span class="p">[</span><span class="s1">'X-Helpscout-Signature'</span><span class="p">]).</span><span class="nf">present?</span>
<span class="p">(</span><span class="n">hmac</span> <span class="o">=</span> <span class="no">HMAC</span><span class="o">::</span><span class="no">SHA1</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="s2">"secret-that-you-enter-in-helpscout's-ui"</span><span class="p">)).</span><span class="nf">update</span><span class="p">(</span><span class="n">request</span><span class="p">.</span><span class="nf">raw_post</span><span class="p">)</span>
<span class="n">bail</span> <span class="n">and</span> <span class="k">return</span> <span class="kp">false</span> <span class="k">unless</span> <span class="n">sig</span><span class="p">.</span><span class="nf">strip</span> <span class="o">==</span> <span class="no">Base64</span><span class="p">.</span><span class="nf">encode64</span><span class="p">(</span><span class="n">hmac</span><span class="p">.</span><span class="nf">digest</span><span class="p">).</span><span class="nf">strip</span>
<span class="k">end</span>
<span class="k">def</span> <span class="nf">bail</span>
<span class="n">render</span> <span class="ss">json: </span><span class="p">{</span> <span class="ss">html: </span><span class="s2">"Bad signature"</span> <span class="p">},</span> <span class="ss">status: </span><span class="mi">403</span>
<span class="k">end</span>
<span class="k">end</span>
</code></pre></div></div>
<p>After fetching the user record, it returns a blob of HTML via a HAML
view:</p>
<div class="language-haml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt">%ul</span>
<span class="nt">%li</span> Created on <span class="si">#{</span><span class="n">l</span><span class="p">(</span><span class="vi">@user</span><span class="p">.</span><span class="nf">created_at</span><span class="p">.</span><span class="nf">to_date</span><span class="p">,</span> <span class="ss">format: :long</span><span class="p">)</span><span class="si">}</span>
</code></pre></div></div>
<p>Then you’re done! Now when you view a ticket in Help Scout you’ll see
info from your database about that user in the sidebar.</p>
<h1>Default a Postgres column to the current date in a Rails migration</h1>
<p><em>2014-02-28</em> · <a href="https://www.bencurtis.com/2014/02/default-a-postgres-column-to-the-current-date-in-a-rails-migration">permalink</a></p>
<p>If you want a Postgres column (aside from created_at) to be populated with the current date when no other date is specified, you may be tempted to create a migration like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>add_column :invoices, :paid_on, :date, default: 'now()'
</code></pre></div></div>
<p>That will look like it works – you create a record, it gets populated with today’s date, and all is good. However, if you look at your schema, you will notice that the new field has a default of a literal date (the date the migration ran) instead of now(). Oops. :)</p>
<p>You might try to create the column with the recommendation from the Postgres documentation:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>add_column :invoices, :paid_on, :date, default: 'CURRENT_DATE'
</code></pre></div></div>
<p>But that fails because Rails tries to quote that ‘CURRENT_DATE’ for you before it goes to Postgres, which blows up. Now what?</p>
<p>Here’s how to do what you want:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>add_column :invoices, :paid_on, :date, default: { expr: "('now'::text)::date" }
</code></pre></div></div>
<p>This avoids the quoting problem (by using <code class="highlighter-rouge">expr</code>) and avoids baking the migration’s run date into the schema (by using the default expression <code class="highlighter-rouge">('now'::text)::date</code>, which is effectively the same as <code class="highlighter-rouge">CURRENT_DATE</code>).</p>
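<p>If you’d rather verify or apply the same default directly in Postgres, the equivalent raw SQL looks something like this (a sketch assuming an <code class="highlighter-rouge">invoices</code> table already exists):</p>

```sql
-- Equivalent to the migration: defer evaluation of 'now' to insert time
ALTER TABLE invoices ALTER COLUMN paid_on SET DEFAULT ('now'::text)::date;

-- Confirm the stored default expression
SELECT column_default
  FROM information_schema.columns
 WHERE table_name = 'invoices' AND column_name = 'paid_on';
```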
<p>And <strong>now</strong> when you insert a record without specifying a value for that field, you get the date of the insertion, rather than the date the column was created. :)</p>If you want to have a Postgres column (aside from created_at) that you want to be populated with the current date if no other date is specified, you may be tempted to create a migration like this:Searchlight and CanCan2013-12-03T11:34:00+00:002013-12-03T11:34:00+00:00https://www.bencurtis.com/2013/12/searchlight-and-cancan<p>I’m currently working on a client project where site administrators use
the same UI that site users do, so there are permissions checks in the
views and controllers to ensure the current user has the right to do or
see certain things. <a href="https://github.com/ryanb/cancan/">CanCan</a> provides the access control, which takes
care of most of the issues with a simple <code class="highlighter-rouge">can?</code> check or
<code class="highlighter-rouge">load_and_authorize_resource</code>.</p>
<p>In one case I wanted to provide search on a list of items (the index
action) to admins so they could search through all items in the database, but users
should only be able to search their own items. I’m using <a href="https://github.com/nathanl/searchlight">Searchlight</a>
(highly recommended) for search, which returns results as an
<code class="highlighter-rouge">ActiveRecord::Relation</code>, so it’s easily chainable via CanCan, like so:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class InvoicesController < ApplicationController
def index
@search = InvoiceSearch.new(params[:search])
@invoices = @search.results.accessible_by(current_ability, :index)
end
end
</code></pre></div></div>
<p>Searchlight is also smart enough to return all results if no
search params are provided, so this also works as a typical index action
that lists all items the user can see. If you’re curious about the
<code class="highlighter-rouge">@search</code> instance variable, that is used in the search form in the
index view.</p>
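<p>For context, the <code class="highlighter-rouge">InvoiceSearch</code> class referenced above might look something like this. This is a hypothetical sketch based on the Searchlight 1.x DSL (<code class="highlighter-rouge">search_on</code> / <code class="highlighter-rouge">searches</code>); the option names are made up, and newer Searchlight versions use a different API:</p>

```ruby
# A hypothetical Searchlight search class for invoices.
class InvoiceSearch < Searchlight::Search
  # Start from all invoices; accessible_by narrows this in the controller.
  search_on Invoice.all

  # Whitelist the params this search accepts.
  searches :number, :paid

  # Each search_* method refines the relation for its option.
  def search_number
    search.where(number: number)
  end

  def search_paid
    search.where.not(paid_on: nil)
  end
end
```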
<p>So, if you need search with access control, use Searchlight and
CanCan… they are a great combo!</p>I’m currently working on a client project where site administrators use the same UI that site users do, so there are permissions checks in the views and controllers to ensure the current user has the right to do or see certain things. CanCan provides the access control, which takes care of most of the issues with a simple can? check or load_and_authorize_resource.Installing Ruby 2.02013-08-10T04:34:00+00:002013-08-10T04:34:00+00:00https://www.bencurtis.com/2013/08/installing-ruby-2-dot-0<p>I had a bit of an adventure this morning getting Ruby 2.0 installed on
my Mac with Mountain Lion, so I thought I’d share the tale with you in
case it can help save you some time on doing the same. Up until now
I’ve been developing my Rails apps with 1.9.3, but it was time to
upgrade and experience all the new hotness of the latest Ruby. I had
tried to install Ruby 2.0 before, but I had been stymied by an openssl
error when building. Today was the day to get that sorted.</p>
<p>I’m using rbenv to manage the different Ruby versions on my machine, so
the first step was to update ruby-build, which I have installed via
homebrew, so that I could fetch and build the latest Ruby. Sadly, I had
some weirdness with my homebrew installation that prevented me from
getting the latest homebrew, which prevented me from getting the latest
ruby-build, which prevented me from being able to install the latest
Ruby (2.0.0-p247):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ brew update
error: Your local changes to the following files would be overwritten by merge:
...
</code></pre></div></div>
<p>I was pretty sure I hadn’t changed anything in homebrew myself, and I
found some guidance in the github issues list for homebrew that I should
just blow away my local changes with a git reset, which didn’t initially
work because apparently some permissions had changed in /usr/local:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd /usr/local/Library
$ git reset --hard FETCH_HEAD
error: unable to unlink old 'README.md' (Permission denied)
error: unable to unlink old 'SUPPORTERS.md' (Permission denied)
fatal: Could not reset index file to revision 'FETCH_HEAD'.
$ sudo chown -R `whoami` /usr/local
$ git stash && git clean -d -f
$ brew update
</code></pre></div></div>
<p>Now I was in business. Next up I upgraded ruby-build, and since I had
already installed openssl via homebrew previously, I could use that
while compiling:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ brew upgrade ruby-build
$ env CONFIGURE_OPTS='--with-openssl-dir=/usr/local/opt/openssl' rbenv install 2.0.0-p247
</code></pre></div></div>
<p>Boom! Ruby 2.0 was finally installed. But then I hit a snag while
trying to install gems for one of my Rails projects:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bundle
Could not verify the SSL certificate for https://code.stripe.com/.
There is a chance you are experiencing a man-in-the-middle attack, but most likely your system doesn't have the CA certificates needed for verification. For information about
OpenSSL certificates, see bit.ly/ruby-ssl. To connect without using SSL, edit your Gemfile sources and change 'https' to 'http'.
</code></pre></div></div>
<p>That was an especially useful error message, since that link provided a
tip on easily getting some updated ssl certificates:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ brew install curl-ca-bundle
</code></pre></div></div>
<p>And the output of that install tells you to</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ export SSL_CERT_FILE=/usr/local/opt/curl-ca-bundle/share/ca-bundle.crt
</code></pre></div></div>
<p>And now everything works. Woohoo!</p>
<p><em>Update - September, 2014:</em> According to <a href="https://github.com/Homebrew/homebrew/issues/32065">this issue</a>, something has changed in homebrew that makes this change break things. So try leaving out that step.</p>I had a bit of an adventure this morning getting Ruby 2.0 installed on my Mac with Mountain Lion, so I thought I’d share the tale with you in case it can help save you some time on doing the same. Up until now I’ve been developing my Rails apps with 1.9.3, but it was time to upgrade and experience all the new hotness of the latest Ruby. I had tried to install Ruby 2.0 before, but I had been stymied by an openssl error when building. Today was the day to get that sorted.Proven: Customer interviews save you time and money2013-05-01T11:01:00+00:002013-05-01T11:01:00+00:00https://www.bencurtis.com/2013/05/proven-customer-interviews-save-you-time-and-money<p>I’m on my way home from <a href="http://www.microconf.com/">MicroConf 2013</a>,
having learned a lot and having had a lot of fun. This was the third
year Rob and Mike have put on the conference, and the third year that it
has been an <a href="http://twitter.com/TheDaveCollins">awesome</a> experience. As
with the previous years, the speakers and the attendees were bright,
informative, friendly, motivated, and motivating. If you run a
bootstrapped biz, or are thinking about running one, MicroConf is the
place to be to really increase your motivation for and your knowledge
about running your business.</p>
<p>As an aside, I’m looking forward to
<a href="http://www.baconbizconf.com/">BaconBizConf</a> at the end of
this month for the same reasons – I have no doubt it will also be a
great place to be for those who are interested in getting better at
making money. :)</p>
<p>I could write several blog posts about terrific takeaways from this
conference, but in this one I want to focus on the urging (especially by
<a href="http://twitter.com/hnshah">Hiten Shah</a> and <a href="http://www.twitter.com/SingleFounder">Mike Taber</a>)
to focus on customer development when starting and growing your business.
This topic had a particular impact on me at this time because it’s been
on my mind the last few weeks.</p>
<p>A couple of weeks ago I had the privilege of attending
the Switch Workshop put on by the <a href="http://www.therewiredgroup.com/">Rewired Group</a> and hosted by Jason
Fried of 37 Signals. That one-day workshop (which I also highly
recommend) was entirely focused on
how people make purchasing decisions, and how understanding that process
can help you find the right customers and give them the value for which
they are searching. Going through that workshop really got
me in the mindset of putting myself in my customer’s shoes (and head),
and letting that drive the decisions I make about my business. I had
been interested in customer development before, but that workshop really
sold me on just how effective it is to talk to customers about how and
why they’ve made the decisions they have made.</p>
<p>I work with entrepreneurs quite a bit in my consulting business (in
fact, I work almost exclusively with people who have an idea and are
wondering how they can convert that idea into a business), and all too
often I see a lack of diligent customer research cause time and money to
be wasted, building a product or service that not enough people really
want. That’s the bad news. But the good news is that this particular
problem is actually easy to overcome! And, even better, it doesn’t have
to take a lot of time or money! And, best of all, it’s even fun! That
workshop, and the emphasis that Hiten and Mike placed on really
understanding your customer, gave me the information and motivation
I needed to implement a process to solve this problem for my own
business and help others who face this very common problem.</p>
<p>The super-simplified, fun-to-follow, and obscenely effective process
can be broken down into two steps. First, follow Hiten’s advice, and
start with a hypothesis, which I’ll paraphrase a bit:</p>
<blockquote>
<p>We believe that (some type of customer / customer segment) has a
problem (with something or doing something)</p>
</blockquote>
<p>Once you fill in the blanks with a very specific description of who you
think your ideal customer is and of the job they want done, you are
ready to move on to step two (and this is where it gets fun): talking to
those customers. But here’s the
trick: Don’t talk about you! Talk about them! Talk about who they are
and what they do (thus making sure they really are your target customer,
and/or helping you learn about who your target customer is). Then, talk
about what they are doing today to solve the problem that you are
thinking of addressing. Ask them how they came to the solution that
they are using, or how they confronted the problem on which you are
focused. If all goes well, you’ll end up talking very little about the
product or service idea in your head, and instead you’ll end up hearing
a lot about what they do and how they do it, which will give you such
good insights that you’ll find yourself smiling as they are talking.</p>
<p>Of course this process isn’t limited to just new ideas, products, or
services. You can use this same process when working on a new feature
for your current product, or when trying to find out why what you are
currently offering isn’t quite as successful as you’d like it to be.</p>
<p>Now, you might be wondering how I know this process is worthwhile and
fun. It’s because I’ve done it myself, of course! :) And I can
certainly attest that it works wonders. In fact, I got to practice the
process while still at MicroConf, since some of the attendees are
ideal customers for my Rails application monitoring service,
<a href="https://www.honeybadger.io/">Honeybadger</a>. There’s a new feature I’ve
been thinking about the past few weeks, and I wanted to make sure I was
on the right track with it. I had a hypothesis that Rails developers
would like a better way to do activity X than the way they are currently
doing it. So I found a few Rails developers that I had already been
talking to during the conference, and I asked them one question: When
do you do activity X and why? From that one simple question, and a short
conversation that followed it, I realized that I was indeed on to
something with my planned feature. More importantly, what I heard from
my customers let me know that my planned approach was actually a bit
flawed. And better yet, I realized that the approach that would
<em>better</em> solve the problem (as they saw the problem) would actually be
<em>easier</em> for me to implement than my initial plan. I was thrilled!</p>
<p>I hope my small and simple experience of doing customer development –
and particularly interviewing customers – has helped persuade you to get
out there and do it yourself, if you aren’t already doing it.</p>
<p>If you think this sounds interesting, and you’d like some help saving time
and money while using this process to help build your product or your next
feature, do <a href="mailto:ben@bencurtis.com">get in touch</a>. I’ll be happy to
answer your questions or provide whatever help I can.</p>I’m on my way home from MicroConf 2013, having learned a lot and having had a lot of fun. This was the third year Rob and Mike have put on the conference, and the third year that it has been an awesome experience. As with the previous years, the speakers and the attendees were bright, informative, friendly, motivated, and motivating. If you run a bootstrapped biz, or are thinking about running one, MicroConf is the place to be to really increase your motivation for and your knowledge about running your business.