<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Ben Curtis</title>
  <subtitle>Fractional CTO &amp; Co-founder of Honeybadger</subtitle>
  <link href="https://www.bencurtis.com/atom.xml" rel="self" type="application/atom+xml"/>
  <link href="https://www.bencurtis.com/" rel="alternate" type="text/html"/>
  <id>https://www.bencurtis.com/</id>
  <updated>2026-01-30T17:00:00.000Z</updated>
  <author>
    <name>Ben Curtis</name>
    <email>ben@bencurtis.com</email>
  </author>
  <entry>
    <title>Building Breakwater with AI</title>
    <link href="https://www.bencurtis.com/2026/01/building-breakwater-with-ai/" rel="alternate" type="text/html"/>
    <id>https://www.bencurtis.com/2026/01/building-breakwater-with-ai/</id>
    <published>2026-01-30T17:00:00.000Z</published>
    <updated>2026-01-30T17:00:00.000Z</updated>
    <author>
      <name>Ben Curtis</name>
    </author>
    <content type="html"><![CDATA[<p>Recently I&#39;ve been blown away by how good the AI models have become at helping me with my work. Over the past year, starting with ChatGPT conversations, then switching to Cursor as my primary editor, then diving into Claude Code, AI has become more and more useful in my day-to-day work. But something changed in November-December... the models got much better at building web applications, and it has changed how I work.</p>
<p>Late last year, I started thinking about how we could better manage the distribution of Docker images from a private repository to <a href="https://www.honeybadger.io/">Honeybadger</a> customers who were interested in self-hosting. I looked at deploying our own instance of <a href="https://goharbor.io/">Harbor</a>, an open source container registry with authentication, but then it started to feel like a capital-p project. I had plenty of other things to do, so I put it on the back burner. But as I thought more about it, I realized that while it felt like a less-than-exciting work project, it could be a fun side project if I turned it into a product. Because <em>of course</em> other people will have this problem and go look for a solution, right?</p>
<p><a href="https://www.breakwaterapp.com/">Breakwater</a>, an app that helps software creators license their Docker images, is that product. You set up your products, associate them with your private repository, and create customer licenses for them. Customers can then use those licenses to pull your images the same way they would from Docker Hub or any other registry. If you want a way to sell your app as a self-hosted product, and you can package it as a Docker image, Breakwater is for you. In this post I&#39;m going to walk you through how I built it with AI, and share some observations on how AI is changing the game for SaaS creators.</p>
<h2>From idea to plan</h2>
<p>Over the Christmas break I sat down with ChatGPT to hash out what the product would do and what features would be interesting. I find myself going to ChatGPT first when I have an idea to flesh out, or a question about how to approach a problem, etc. I think that&#39;s primarily because ChatGPT was the first AI chatbot I started using, but it helps that its icon lives in my menu bar. I&#39;m sure Claude would be just fine for this, too.</p>
<p>But anyway, I planned to build a front-end for Harbor, and layer on concepts like Customers and Licenses — the things you need when you&#39;re distributing software commercially rather than hosting public images that are freely available to pull. Here&#39;s my initial prompt:</p>
<blockquote>
<p>I&#39;m interested in having a docker registry for customers where they use a web app to get credentials that are then used when pulling images. The goal is to sell licensed software that is delivered as docker containers, and once a customer purchases a license, they will have access to pull the containers. What technologies do you recommend I use for this?</p>
</blockquote>
<p>I didn&#39;t say anything about Harbor, as I wanted to leave ChatGPT open to suggest anything. Naturally, Harbor got priority placement in its recommendations. Here&#39;s the initial response:</p>
<blockquote>
<p>Short version:
Use a standard OCI registry with token-based auth in front of it, and have your web app mint per-customer pull tokens/robot credentials based on license state. Don’t reinvent the registry itself; glue existing pieces together.</p>
</blockquote>
<p>After presenting that and an overall architecture design, it offered three options:</p>
<ol>
<li>Use Harbor as the registry and implement licensing in the web app, using the Harbor API to manage robot accounts and projects.</li>
<li>Use a standard OCI registry like <a href="https://github.com/distribution/distribution">distribution</a> and implement token-based authentication using a custom web app.</li>
<li>Use a managed registry service like Amazon ECR, Google Artifact Registry, or Azure Container Registry.</li>
</ol>
<p>I decided to go with option 1, since it seemed like the fastest path to a working solution. I also liked the idea of being able to use its features for pulling images from upstream sources (my future customers&#39; own registries) and enforcing quotas. When I selected option 1, ChatGPT then generated a high-level implementation plan which seemed reasonable. </p>
<p>Then, knowing that I love to dive head-first into building stuff without doing much research, I asked about what solutions already existed for this problem:</p>
<blockquote>
<p>Before we get too far into this, is there a SaaS that already offers this kind of functionality?</p>
</blockquote>
<p>It gave me info about <a href="https://quay.io/">Quay.io</a> and <a href="https://jfrog.com/artifactory/">JFrog Artifactory</a> as well as providing suggestions for making <a href="https://github.com/features/packages">GHCR</a> or <a href="https://aws.amazon.com/ecr/">AWS ECR</a> do what I wanted. None of those solutions were exactly what I was looking for, so I decided to plow ahead:</p>
<blockquote>
<p>I think I want to build my own thing rather than relying on a SaaS. Can you help me come up with a project plan? I&#39;m thinking I&#39;d build a Rails app for this.</p>
</blockquote>
<p>It gave me a reasonable-looking plan, so I decided to go with it, and I asked it to give me the plan as a markdown download so I could save it to my new Rails project and use it with Claude Code.</p>
<h2>Brainstorming the brand</h2>
<p>Once I had settled on what I wanted to build, I decided to start a new chat session to brainstorm a name. It gave me names emphasizing licensing / entitlement, names themed around harbors / registries / containers, names focused on keys / access / authentication, names centered on delivery of software artifacts, etc., ending with a top-10 list of recommended names. After a bit of back-and-forth, and my suggestion to focus on &quot;terms related to defending a harbor&quot;, Breakwater was the top choice. As we wrapped up the name selection process, ChatGPT suggested creating a brand board, so I had it do that.</p>
<p>It provided three options, and each of them had a vibe (<em>Strong, crisp, premium, enterprise SaaS. Feels like Cloudflare, HashiCorp, or Stripe.</em>), color palette, typography, logo/mark ideas, iconography, and a UI direction. After I picked the option &quot;Modern Industrial Security&quot;, it produced a detailed brand board with all the elements mentioned above, and then suggested building a landing page, logo concepts (in SVG format), etc., and I asked it to create the landing page using Tailwind. The current home page is still largely what ChatGPT created, though I made some changes while workshopping with Claude as the project progressed.</p>
<p>I wrapped up that chat by asking for a final deliverable before getting to work:</p>
<blockquote>
<p>Can you give me the brand board, visual design instructions, etc., in a markdown file I can download to be used to guide an AI while building HTML and CSS for my Rails app?</p>
</blockquote>
<h2>Claude gets to work</h2>
<p>Once I had the brand and vision in place, I switched to Claude for the actual development work. Claude quickly built me a Rails 8 app using Tailwind, following the color scheme in the brand board doc that I downloaded from the conversation with ChatGPT. It took just a few hours to get the models in place, get Harbor running, get the Rails app working with the Harbor API, and have authenticated pushes and pulls working. All along the way, I had Claude updating the project plan document to keep track of progress and to provide context for new work sessions. Every phase (Harbor infrastructure, Rails app skeleton, Harbor integration, etc.) was a new Claude session, and at the end of each session I would have Claude update the project plan document and generate a new document summarizing the decisions made and the work done in that phase.</p>
<p>As I played with the app I realized that using Harbor was adding friction — my models weren&#39;t lining up well with Harbor&#39;s structure — so I revisited my first chat with ChatGPT, picking up where we left off with this:</p>
<blockquote>
<p>I&#39;ve been going down this path with Harbor, and I&#39;m not sure it&#39;s working for me... I&#39;m feeling a bit of friction between my app and Harbor. I&#39;m not sure I&#39;m ready to abandon that approach yet, but I would like to explore option B: providing an auth token layer for plain registry. Can you dive into that option a bit more?</p>
</blockquote>
<p>And of course ChatGPT was happy to help. After a bit of back and forth and clarifying questions on both sides, I was ready to point Claude in the new direction, and I asked for an updated plan from ChatGPT:</p>
<blockquote>
<p>I&#39;d be interested in seeing a detailed implementation plan that includes the info you mentioned, with the plan written as downloadable markdown so I can use it to drive an AI in my project to build the task list.</p>
</blockquote>
<p>With the new plan in hand, and with an hour or two of work time per day, Claude and I made good progress. Claude was <em>much</em> faster than I would have been at building the auth token support in my Rails app that the registry needed for handling Docker authentication. After about ten hours of work, I had a working Rails app that was deployed and ready for alpha testing.</p>
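<p>To make that concrete: in the registry token-auth flow, the registry answers an unauthenticated pull with a 401 pointing at a token endpoint, and the client then requests a token from that endpoint, passing a <code>scope</code> parameter describing what it wants to do. Here&#39;s a minimal sketch in Go of parsing that scope. The types and simplified format are my own for illustration (not Breakwater&#39;s actual code), and it ignores edge cases like registries on a custom port:</p>

```go
package main

import (
	"fmt"
	"strings"
)

// Access mirrors one access entry in a registry token request: what
// kind of resource, which resource, and which actions are requested.
type Access struct {
	Type    string   // e.g. "repository"
	Name    string   // e.g. "acme/app"
	Actions []string // e.g. ["pull"] or ["pull", "push"]
}

// parseScope parses a token-auth scope such as
// "repository:acme/app:pull,push" into an Access entry. The app can
// then check the customer license before granting those actions.
func parseScope(scope string) (Access, error) {
	parts := strings.SplitN(scope, ":", 3)
	if len(parts) != 3 {
		return Access{}, fmt.Errorf("malformed scope: %q", scope)
	}
	return Access{
		Type:    parts[0],
		Name:    parts[1],
		Actions: strings.Split(parts[2], ","),
	}, nil
}

func main() {
	a, _ := parseScope("repository:acme/app:pull")
	fmt.Println(a.Type, a.Name, a.Actions) // repository acme/app [pull]
}
```

<p>The real work, of course, is deciding whether the license behind the credentials covers those actions, and then signing a token the registry will trust.</p>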
<h2>When plans change</h2>
<p>Eventually I realized that my registry + authentication setup wouldn&#39;t support a key feature I wanted: being able to grant or deny access to images based on tag versions. The registry didn&#39;t pass that information along to the Rails app when it received a pull request, so I couldn&#39;t implement version-based licensing. I was pretty bummed by this, and, packing it up for the night, I explained my disappointment to my wife. I knew what needed to be done — build an authenticating proxy in front of the registry — but it felt like a lot of work, and I wasn&#39;t sure it would be worth it. I considered dropping the feature, but then my wife said:</p>
<blockquote>
<p>Why don&#39;t you just ask Claude to help you build it?</p>
</blockquote>
<p>Well, duh.</p>
<p>So the next morning I had Claude help me build an authenticating proxy in Go that would sit in front of the registry and pass the information to the Rails app that I needed. Again, it went much, much faster with Claude than if I had tried to do it on my own. The initial code session for that took less than an hour, and within three hours or so, over the course of a few days, it was done. </p>
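<p>A proxy like this mostly just forwards bytes, but it finally gets to see the manifest URL, which is where the tag lives: manifest pulls look like <code>GET /v2/&lt;name&gt;/manifests/&lt;reference&gt;</code>. Here&#39;s a hedged sketch in Go of the core idea (simplified names of my own, not the actual Breakwater proxy): extract the repository and tag, check them, and forward everything else untouched:</p>

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// manifestRef extracts the repository name and reference (a tag or a
// digest) from a manifest request path such as
// /v2/acme/app/manifests/1.2.0. ok is false for non-manifest paths.
func manifestRef(path string) (name, ref string, ok bool) {
	if !strings.HasPrefix(path, "/v2/") {
		return "", "", false
	}
	rest := strings.TrimPrefix(path, "/v2/")
	i := strings.Index(rest, "/manifests/")
	if i < 0 {
		return "", "", false
	}
	return rest[:i], rest[i+len("/manifests/"):], true
}

// newProxy forwards requests to the upstream registry, rejecting
// manifest requests whose tag the license check turns down.
func newProxy(upstream *url.URL, licensed func(name, ref string) bool) http.Handler {
	rp := httputil.NewSingleHostReverseProxy(upstream)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if name, ref, ok := manifestRef(r.URL.Path); ok && !licensed(name, ref) {
			http.Error(w, "tag not covered by license", http.StatusForbidden)
			return
		}
		rp.ServeHTTP(w, r)
	})
}

func main() {
	name, ref, ok := manifestRef("/v2/acme/app/manifests/1.2.0")
	fmt.Println(name, ref, ok) // acme/app 1.2.0 true
}
```

<p>An actual deployment would layer in credential handling and caching of the license check, but the essential move is a reverse proxy with one gate in front of the manifest route.</p>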
<p>It was around this time that I decided to start having Codex review Claude&#39;s work, and that has been awesome. I&#39;ll ask Codex to review the current changes, then copy-paste its findings into my Claude Code session and have it implement the fixes. Then when I push a PR to GitHub, Copilot will review the changes and provide feedback, which I&#39;ll then have Claude address. My solo project has turned into a team project, and it&#39;s been a blast.</p>
<h2>Why this works so well</h2>
<p>Claude is amazing when it&#39;s helping you with work that you already know how to do. When an expert web developer and operator (19 years of running SaaS apps and counting!) uses AI to build a new web app, the work goes crazy fast for a few reasons:</p>
<ol>
<li><p><strong>Claude has been trained on a <em>lot</em> of web app code.</strong> Granted, not all of that code is awesome, so Claude doesn&#39;t always generate the exact code I would write, but it&#39;s usually pretty good. Having Claude generate code much faster than I could write it, even if it&#39;s not perfect, is a win. </p>
</li>
<li><p><strong>I know what features I need.</strong> &quot;Claude, build vendor onboarding. Claude, build an audit log.&quot; I can describe the features at a high level because I know the domain well. And when I don&#39;t know exactly what I want, Claude can often suggest relevant features or improvements. I can literally type, &quot;what&#39;s this app missing&quot; and Claude will offer suggestions that are usually very good. Or I can type, &quot;I&#39;m thinking of building X, please help me create a plan to build it and ask me questions to flesh out the details&quot;, which is surprisingly helpful in finding blind spots and ironing out implementation details.</p>
</li>
<li><p><strong>I know good output when I see it.</strong> I can validate that Claude is not just producing something, but is producing the <em>right</em> thing. When Claude&#39;s implementation doesn&#39;t quite fit my needs, a simple prompt gets it back on track. AI in the hands of a novice can be dangerous... if I asked ChatGPT a question about a medical condition or a legal issue, I&#39;d have no idea whether it was accurate or not. But when I ask Claude to build a feature, I know what it should do and how it should work. I know how to evaluate the generated code for potential security and performance issues.</p>
</li>
<li><p><strong>I don&#39;t have to focus 100% on the work.</strong> Getting in the zone while writing code is awesome. I love experiencing flow while working on a project. But having work get done while I&#39;m doing something else is also awesome. I can spin up a task with Claude, then switch to doing something else, and come back to find a working feature. I can prompt Claude Code Web to do something from my phone when I&#39;m out and about, and then when I get back to my computer, I can review and ship what it built for me.</p>
</li>
<li><p><strong>It&#39;s easier to get started on a task.</strong> Activation energy is a thing. Starting with a blank slate, whether it be prose or code, can feel daunting, but starting with a prompt is low-effort. For example, I&#39;m usually not excited to work on frontend code. Typing something like &quot;I want the product page to be updated when a repository is linked to it&quot; can get me 80% or 90% of the way there, making it much easier to dive in.</p>
</li>
</ol>
<p>I&#39;m sure there are more benefits I could list, but you get the point. I&#39;m so much faster at delivering features with AI than I am without it. I&#39;ve lost track of how many times I&#39;ve sat down to build out the next phase of the plan and it&#39;s done within ten minutes. Build an admin UI? Done. Add a multi-step vendor onboarding workflow? Done. Create a documentation portal? API? Self-hosted analytics? Done, done, and done.</p>
<p>I&#39;m not spending time in the editor, typing code. Claude is typing the code. I&#39;m directing the work, thinking about the bigger picture, and shipping faster than ever before. It&#39;s amazing.</p>
<h2>Looking ahead</h2>
<p>B2B SaaS in 2026 is going to be wild thanks to Claude and other AI coding assistants. It&#39;s scary, exciting, and mind-blowing all at the same time. The barrier to building new products has dropped dramatically. Solo founders and small teams can move at a pace that wasn&#39;t possible before, and be more ambitious in what they can build. The bottleneck is no longer &quot;can I build this?&quot; Instead, it&#39;s &quot;should I build this?&quot; and &quot;do people want this?&quot; and &quot;how will I get this in front of people?&quot;</p>
<p>I&#39;m excited to see where <a href="https://www.breakwaterapp.com/">Breakwater</a> goes, and I&#39;m even more excited about what this new era of AI-assisted development means for indie hackers and bootstrappers everywhere.</p>
]]></content>
  </entry>
  <entry>
    <title>Still At It</title>
    <link href="https://www.bencurtis.com/2024/11/still-at-it/" rel="alternate" type="text/html"/>
    <id>https://www.bencurtis.com/2024/11/still-at-it/</id>
    <published>2024-11-19T14:56:00.000Z</published>
    <updated>2024-11-19T14:56:00.000Z</updated>
    <author>
      <name>Ben Curtis</name>
    </author>
    <content type="html"><![CDATA[<p>With the rise of <a href="https://bsky.social/">Bluesky</a> and setting <a href="https://bsky.app/profile/bencurtis.com">my profile</a> to point to my domain here, I was motivated to write a blog post for the first time in 6 years. 😀</p>
<p>In that time I picked up motorcycle riding as a hobby, I started a <a href="https://www.founderquestpodcast.com/">podcast</a> with my business partners, and I&#39;ve continued to plug away at <a href="https://www.honeybadger.io">Honeybadger</a>. Honeybadger has been my day job for more than a decade, and it&#39;s still fun to show up every day and solve problems. I continue to be focused on building the product and scaling it — but now with the help of three more developers. It&#39;s so nice to be able to take a little more time off than I could in years past, and I&#39;m not on-call 24/7 these days. 😅</p>
<p>In short, life is good!</p>
]]></content>
  </entry>
  <entry>
    <title>Data retention with the Serverless Framework, DynamoDB, and Go</title>
    <link href="https://www.bencurtis.com/2018/04/data-retention-with-dynamodb-and-golang/" rel="alternate" type="text/html"/>
    <id>https://www.bencurtis.com/2018/04/data-retention-with-dynamodb-and-golang/</id>
    <published>2018-04-12T07:32:00.000Z</published>
    <updated>2018-04-12T07:32:00.000Z</updated>
    <author>
      <name>Ben Curtis</name>
    </author>
    <content type="html"><![CDATA[<p>At <a href="https://www.honeybadger.io/">Honeybadger</a> we offer standard data retention periods from which our customers can choose.  Depending on their subscription plan, we&#39;ll store their error data for up to 180 days.  Some customers, though, need a custom retention period.  Due to compliance or other reasons, they may want to enforce a retention period of 30 days even though they subscribe to a plan that offers a longer one.  We allow our customers to configure this custom retention period on a per-project basis, and we then delete each error notification based on the schedule that they have set.  Since we store customer error data on S3, we need to keep track of every S3 object we create and when it should be expired so that we can delete it at the right time.  This blog post describes how we use S3, DynamoDB, Lambda, and the <a href="https://serverless.com/">Serverless Framework</a> to accomplish this task.</p>
<h2>Keeping track of the S3 objects we create</h2>
<p>As our processing pipeline receives and processes error notifications, we store the payload from each notification in an S3 object.  We also create objects in a separate S3 bucket that contain a batched list of the resulting S3 keys and the expiration time for each of those keys.  These objects are just JSON arrays that look like this:</p>
<pre><code class="language-json">[
  {
    &quot;key&quot;: &quot;pu3We2Ie/ea40b606-1b48-40cb-942f-a046755c7a0f&quot;,
    &quot;expire_at&quot;: &quot;2017-03-03T23:59:58Z&quot;
  },
  {
    &quot;key&quot;: &quot;Ieb0ieVu/fe3c6b8a-76d7-48d8-ab71-ea8a4f3bce08&quot;,
    &quot;expire_at&quot;: &quot;2017-07-31T23:59:58Z&quot;
  }
]
</code></pre>
<p>The S3 bucket that is used to store these lists of keys sends a message to an SNS topic for each PUT request.  We have a Lambda function that is subscribed to this topic to process each object as it is created, configured in our <code>serverless.yml</code> as the send-to-dynamo function:</p>
<pre><code class="language-yaml">functions:
  send-to-dynamo:
    handler: bin/send-to-dynamo
    events:
      - sns: &quot;arn:aws:sns:us-east-1:*:expirations-${opt:stage, self:provider.stage}&quot;
</code></pre>
<p>Tip: If you start your Serverless project with <code>serverless create --template aws-go</code>, you will get a project layout that puts the code for each function in a <code>main.go</code> file with its own subdirectory and a Makefile to help with compiling the code before deploying. Handy!</p>
<p>Ok, back to our configuration... Since this function needs to read the contents of the S3 objects referenced by the SNS notification, we have these permissions defined in <code>serverless.yml</code>:</p>
<pre><code class="language-yaml">provider:
  name: aws
  runtime: go1.x
  environment:
    STAGE: &quot;${opt:stage, self:provider.stage}&quot;
  iamRoleStatements:
   - Effect: &quot;Allow&quot;
     Action:
       - &quot;s3:GetObject&quot;
     Resource: &quot;arn:aws:s3:::expirations-bucket-${opt:stage, self:provider.stage}/*&quot;
</code></pre>
<p>That STAGE environment variable is used in the Go code to know which DynamoDB table to use, since we create a table per serverless environment that we deploy.  Here&#39;s how we define that table and give the Lambda function permission to write to it in our config:</p>
<pre><code class="language-yaml">provider:
  # ...
  iamRoleStatements:
    # ...
    - Effect: &quot;Allow&quot;
      Action:
        - &quot;dynamodb:PutItem&quot;
      Resource:
        Fn::GetAtt:
          - expirationsTable
          - Arn

resources:
  Resources:
    expirationsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: &quot;expirations-${opt:stage, self:provider.stage}&quot;
        AttributeDefinitions:
          - AttributeName: ID
            AttributeType: S
          - AttributeName: ExpireAt
            AttributeType: N
        KeySchema:
          - AttributeName: ID
            KeyType: HASH
          - AttributeName: ExpireAt
            KeyType: RANGE
        StreamSpecification:
          StreamViewType: OLD_IMAGE
        TimeToLiveSpecification:
          AttributeName: ExpireAt
          Enabled: true
        # ...
</code></pre>
<p>You&#39;ll notice that the DynamoDB table has been configured with a TTL field called ExpireAt, and that it will emit a stream of events.  Since we only care about the TTL events, we use OLD_IMAGE as the StreamViewType, which gives us the contents of the fields in the DynamoDB row when it is deleted.</p>
<p>Since we have a lot of keys arriving with the same expiration time, the Lambda function groups the list of keys by expiration time and creates one record per group in DynamoDB to reduce the write throughput.  This results in one DynamoDB record per expiration time (down to the second) which contains a list of S3 keys that are to be deleted at the expiration time.</p>
<p>{% gist 7cb3d82cdc8bde17299c2e77b7301d83 send-to-dynamo-main.go %}</p>
<p>The handler function receives the SNS event, looks for S3 records in the event data, then calls the <code>getItems</code> function to load each of those S3 objects and return lists of S3 keys to be deleted, grouped by expiration time.  Each of those groups gets inserted as one DynamoDB record.</p>
<h2>Using DynamoDB streams to know when to delete</h2>
<p>Now that we have the triggers and code in place to store the keys of the objects that are to be deleted, we need to finish the job by watching for TTL events in the DynamoDB stream and delete the S3 objects.  The <code>purge-from-s3</code> function does this.  Here&#39;s the configuration from <code>serverless.yml</code>:</p>
<pre><code class="language-yaml">provider:
  # ...
  iamRoleStatements:
   - Effect: &quot;Allow&quot;
     Action:
       - &quot;s3:DeleteObject&quot;
     Resource: &quot;arn:aws:s3:::data-bucket-${opt:stage, self:provider.stage}/*&quot;

functions:
  # ...
  purge-from-s3:
    handler: bin/purge-from-s3
    events:
      - stream:
          type: dynamodb
          arn:
            Fn::GetAtt:
              - expirationsTable
              - StreamArn
</code></pre>
<p>We have granted the Lambda function the permission to delete objects from the bucket that stores the error payloads, and we have configured that function to be triggered by the events in the stream from our DynamoDB table.</p>
<p>This function is simpler than the first one.  When it receives a <code>REMOVE</code> event from DynamoDB, signifying that a record has been deleted due to the TTL, it iterates over the list of keys stored in the record, deleting each one from the bucket.</p>
<p>{% gist 7cb3d82cdc8bde17299c2e77b7301d83 purge-from-s3-main.go %}</p>
<p>Unfortunately, this Lambda function will get invoked for every event in the DynamoDB stream, which includes all the inserts that we did in the <code>send-to-dynamo</code> function.  We don&#39;t care about those records, so we just ignore them in this function.  I have filed a feature request with Amazon to add the ability to filter the types of stream events that trigger a function, and you&#39;re more than welcome to do the same. :)</p>
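<p>For illustration, that filtering boils down to something like this sketch, with a pared-down record type of my own (the real handler works with the DynamoDB stream event types from aws-lambda-go):</p>

```go
package main

import "fmt"

// streamRecord is a simplified stand-in for a DynamoDB stream record;
// the real handler uses the event types from aws-lambda-go.
type streamRecord struct {
	EventName string   // "INSERT", "MODIFY", or "REMOVE"
	Keys      []string // S3 keys stored in the record's old image
}

// keysToDelete collects S3 keys only from REMOVE events (the TTL
// deletions), ignoring the INSERT events generated by our own writes.
func keysToDelete(records []streamRecord) []string {
	var keys []string
	for _, r := range records {
		if r.EventName != "REMOVE" {
			continue
		}
		keys = append(keys, r.Keys...)
	}
	return keys
}

func main() {
	recs := []streamRecord{
		{EventName: "INSERT", Keys: []string{"x"}},
		{EventName: "REMOVE", Keys: []string{"a", "b"}},
	}
	fmt.Println(keysToDelete(recs)) // [a b]
}
```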
<p>I hope you have found this post to be a useful example of how to work with S3 and DynamoDB streams using the Serverless Framework, Lambda, and Go!</p>
]]></content>
  </entry>
  <entry>
    <title>2017 in review</title>
    <link href="https://www.bencurtis.com/2018/01/2017-in-review/" rel="alternate" type="text/html"/>
    <id>https://www.bencurtis.com/2018/01/2017-in-review/</id>
    <published>2018-01-10T16:23:00.000Z</published>
    <updated>2018-01-10T16:23:00.000Z</updated>
    <author>
      <name>Ben Curtis</name>
    </author>
    <content type="html"><![CDATA[<p>I can&#39;t recall having done a year-in-review type of blog post before,
but when <a href="https://twitter.com/patio11/status/946910434887471104">Patrick suggested it recently</a>, it seemed like a great idea, so
I thought I&#39;d give it a shot.</p>
<p>In short, 2017 was a great year! :) I moved all of our servers from a
colocation facility to AWS in January, which helped me sleep a lot
better at night. Over the course of the year I continued to improve our
infrastructure, and we now have a very reliable and self-healing system.
Nearly everything we do (application, search, and database servers) is
self-managed, so it&#39;s been fun to level-up my distributed system skills.
In December I put my AWS skills (literally) to the test by earning my
first AWS certification, passing the AWS Certified Developer - Associate exam.</p>
<p>I also spent some time working on developing my marketing skills by
taking <a href="https://themarketingseminar.com">The Marketing Seminar</a> by Seth Godin.  The seminar was
excellent, and I&#39;m very glad I spent the time on it.  Having spent a few
years as a freelancer, I had to get good at one-to-one sales, but trying
to market a SaaS business to a world of customers is a different kettle
of fish, and I picked up lots of helpful info from Seth&#39;s seminar.  My
favorite part of his philosophy is that if you believe you are bringing a
great product or service to the world (and why would you be doing it if
you didn&#39;t believe that?), then you <em>owe</em> it to the world to be a
champion for your product in a way that will attract people who will be
best served by it.  Of course I&#39;m convinced that Honeybadger <em>is</em> a
great addition to the world, so Seth&#39;s approach resonates with me.  I&#39;m
hoping that I&#39;ll be able to apply those learnings and improve my
marketing skills in 2018.</p>
<p>During the summer I had two small freelancing projects fall in my lap
out of the blue.  I hadn&#39;t done any freelancing in several years, so it
was fun to do some work on a side project and shift the mental gears a
little bit from my day-to-day work at Honeybadger.  One of the projects
was <a href="https://www.tvtattle.com">TVTattle</a>, which was a blast to work on.  It&#39;s a pretty simple
content-management Rails app, but it was fun to play with caching plus a
CDN to make it super fast.</p>
<p>So that&#39;s my year -- a lot of ops work with sprinklings of dev work
along the way.  I&#39;m still learning all the time, and business is great,
so what more could I want? :)</p>
]]></content>
  </entry>
  <entry>
    <title>Writing Again</title>
    <link href="https://www.bencurtis.com/2017/06/writing-again/" rel="alternate" type="text/html"/>
    <id>https://www.bencurtis.com/2017/06/writing-again/</id>
    <published>2017-06-02T16:24:46.000Z</published>
    <updated>2017-06-02T16:24:46.000Z</updated>
    <author>
      <name>Ben Curtis</name>
    </author>
    <content type="html"><![CDATA[<p>A while back I changed the stack I used for publishing this blog with
the hope that I would write more because it would be easier to publish.
Looking back at how little I&#39;ve written since I&#39;ve made that change, I
can see that that didn&#39;t work out so well. :)  I&#39;m not going to change
my stack again (just yet), but I am going to try and write a bit more.
Hopefully it won&#39;t be another year before my next post.</p>
<p>One cause of the lack of my writing is how busy I have been running my
<a href="https://www.honeybadger.io">error tracking service</a>.  It has been a lot
of fun, but it has also been a lot of work.  The good news is that the
business continues to grow, and we just passed the five-year mark.  My
co-founders and I are definitely living the bootstrapper&#39;s dream, doing
work that we love, and making our customers happy.</p>
<p>Work will fill the time allotted to it, though, so I&#39;m going to try and
carve out more time for writing.  We&#39;ll see how it goes. :)</p>
]]></content>
  </entry>
  <entry>
    <title>Solr Recovery</title>
    <link href="https://www.bencurtis.com/2016/03/solr-recovery/" rel="alternate" type="text/html"/>
    <id>https://www.bencurtis.com/2016/03/solr-recovery/</id>
    <published>2016-03-02T19:53:00.000Z</published>
    <updated>2016-03-02T19:53:00.000Z</updated>
    <author>
      <name>Ben Curtis</name>
    </author>
    <content type="html"><![CDATA[<p>At <a href="https://www.honeybadger.io/">Honeybadger</a> this morning we had a
failure of our SolrCloud cluster (of three servers).  Each of the three
servers has a replica of the eight shards of our <em>notices</em> collection.
Theoretically this means that two of the three servers can go away and
we can still serve search traffic.  Sadly, reality doesn&#39;t always match
the theory.</p>
<p>What happened to us this morning is that some of the shards became
leaderless when one of the servers ran out of disk space and started
throwing errors.  Specifically, we kept seeing this error in the logs:
<em><strong>No registered leader was found</strong></em>.  As a result, the two remaining
servers refused to update the index, which brought a halt to
search-related operations.  Since I&#39;m relatively new to Solr, I had to
bang my head against the wall for a bit before I stumbled upon the
solution.</p>
<p>Simply bringing down one of the two remaining good servers didn&#39;t solve
the problem.  The last remaining server refused to become the leader for
four of the shards.  To fix this, I had to unload each of the stubborn
shards and load them again.  This was accomplished easily enough via the
admin UI, and, once completed, our search functionality was restored.</p>
<p>Once that was done I brought the other good server back up, and it
quickly caught up with the server that was now the leader for all the
shards.  Easy-peasy.  Sadly, bringing the last of the servers up -- the
culprit with the full disk -- took a bit more work.  Since its data
directory was about twice the size of the directory on the other two
servers, despite all three supposedly having the same documents, I
decided to just blow away all the data on the third server and replace
it with a copy of the snapshots from the leader.  The process was
basically this:</p>
<ol>
<li>Take a fresh snapshot of each shard from the leader</li>
<li>Copy the eight snapshots from the leader to the damaged server</li>
<li>Move the index data in place, renaming the shards</li>
<li>Unload and load the cores on the damaged server</li>
</ol>
<p>Here&#39;s a <a href="https://gist.github.com/stympy/29673e824e7f619bb7c6#file-solr-backup-rb">Ruby script for step 1</a> and a <a href="https://gist.github.com/stympy/29673e824e7f619bb7c6#file-reload-cores-sh">shell script for the other steps</a>.</p>
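<p>For step 4, the unload/load can also be driven by Solr&#39;s CoreAdmin API instead of the admin UI.  Here&#39;s a minimal sketch -- the host and core names are hypothetical, and the CREATE parameters will vary with your Solr version and directory layout:</p>
<pre><code class="language-ruby">require 'net/http'
require 'uri'

# Build a CoreAdmin API URL for the given action and parameters.
# Solr's CoreAdmin handler lives at /solr/admin/cores.
def core_admin_url(host, action, params = {})
  query = URI.encode_www_form({ action: action, wt: 'json' }.merge(params))
  URI("http://#{host}:8983/solr/admin/cores?#{query}")
end

# Unload and re-create each stubborn core (core names here are made up).
%w[notices_shard1_replica3 notices_shard2_replica3].each do |core|
  unload = core_admin_url('solr3.example.com', 'UNLOAD', core: core)
  create = core_admin_url('solr3.example.com', 'CREATE', name: core, collection: 'notices')
  puts unload
  puts create
  # Net::HTTP.get_response(unload)  # uncomment to actually hit the API
  # Net::HTTP.get_response(create)
end
</code></pre>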
<p>With that, the damaged server quickly got each of the shards back in
sync (since I had just taken snapshots on the leader), and everything
was back to normal.</p>
]]></content>
  </entry>
  <entry>
    <title>Steppin&apos; up</title>
    <link href="https://www.bencurtis.com/2015/03/steppin-up/" rel="alternate" type="text/html"/>
    <id>https://www.bencurtis.com/2015/03/steppin-up/</id>
    <published>2015-03-27T14:01:43.000Z</published>
    <updated>2015-03-27T14:01:43.000Z</updated>
    <author>
      <name>Ben Curtis</name>
    </author>
    <content type="html"><![CDATA[<p>Rob Walling wrote a <a href="http://www.softwarebyrob.com/2015/03/26/the-stairstep-approach-to-bootstrapping/">great post yesterday</a>
about building up your bootstrapped business over time by taking on
smaller projects before diving into big ones.  His post reminded me of
Amy Hoy&#39;s <a href="https://unicornfree.com/stacking-the-bricks">Stacking the Bricks</a> philosophy, and I
think that taking the approach of learning to walk before learning to
run makes sense.  Rob&#39;s post made me reflect on my experience building
products that have gone from producing no income, to putting some change
in my pocket, to providing a nice income for my family, and I thought it
would be fun to share.</p>
<p>My day job has always been building web apps, so my first side projects
were also web apps: first, a community site, and later, a SaaS app for
managing test plan execution for software testers.
Those were fun, but never amounted to much.</p>
<p>The first side project I did with the goal of making money was a
self-published ebook about building e-commerce sites with Rails.  This
was in 2006, when Rails was young, and that $12 ebook sold pretty well.</p>
<p>In 2007 I started freelancing full time, and I decided that I needed a
product with recurring revenue to help even out my cash flow, so I
started on <a href="http://www.catchthebest.com/">Catch the Best</a>, a SaaS app
that scratched my own itch.  I launched that in October of 2007 (working
on in it part-time while working on client projects), and it got some
paying customers from day one.  The revenue from that app has never been
large on an MRR basis, but it has been consistent, so I&#39;m pretty happy
having that as a cash machine.</p>
<p>In 2008 I was building a SaaS billing system in Rails for the third
time.  The first time was for Catch the Best, and the next two were
for clients who had engaged me to build SaaS apps for them.  It
occurred to me that other developers might be interested in buying what
I had built so that they could save
themselves the time of building their own.  So I cleaned up the code I
had written and launched <a href="https://www.railskits.com">RailsKits</a> to sell
that billing code to other Rails developers.  I priced it starting at $250,
and it was a hit.  It effectively replaced my freelance income for a
while, and while it doesn&#39;t make as much as it used to (since other
options for implementing billing have become available), it is still a consistent revenue stream
for me.</p>
<p>After RailsKits, I knew I wanted to do another SaaS project, and in 2012
I found the right one: <a href="https://www.honeybadger.io">Honeybadger</a> -- an
application health monitoring service for Ruby developers.  It has been
an incredibly fun project with awesome co-founders. Since its launch in
the fall of 2012 it has grown consistently, allowing me to cut back and
eventually eliminate my freelancing business.</p>
<p>I didn&#39;t set out with a plan to start with an ebook, then move to a
larger product, then move to a recurring revenue product, but after
having considered Rob&#39;s Stairstep Approach and having reflected on the
past decade of my own experience, I can certainly recommend going that
route.  It&#39;s not the only way to go, but it does give you a variety of
opportunities to learn how to find customers and sell something to them,
and it can be a whole lot of fun.</p>
]]></content>
  </entry>
  <entry>
    <title>Inject Your App Data Into Help Scout</title>
    <link href="https://www.bencurtis.com/2014/04/inject-your-app-data-into-help-scout/" rel="alternate" type="text/html"/>
    <id>https://www.bencurtis.com/2014/04/inject-your-app-data-into-help-scout/</id>
    <published>2014-04-18T06:57:00.000Z</published>
    <updated>2014-04-18T06:57:00.000Z</updated>
    <author>
      <name>Ben Curtis</name>
    </author>
    <content type="html"><![CDATA[<p>At <a href="https://www.honeybadger.io/">Honeybadger</a> we use <a href="http://www.helpscout.net/">Help Scout</a>
to manage our customer support, and
that has worked out well for us.  One thing I&#39;ve wanted for quite a
while is more integration between Help Scout, our internal dashboard,
and the Stripe dashboard.  After taking a mini-vacation to attend
<a href="http://www.microconf.com/">MicroConf</a> this week, I decided it was time to make my dreams come true.
:)</p>
<p>Help Scout allows you to plug &quot;apps&quot; into their UI, and you can build
your own apps to populate the sidebar when looking at a help ticket.
All you have to do is provide a URL that Help Scout can hit which
returns a blob of HTML to be rendered on the page.  Your app receives a
signed POST request where the payload is some information about the
support ticket you are viewing, which includes the email address of the
person who created the ticket.  Here&#39;s a Rails controller that receives
the request, verifies the signature, and returns some HTML for the user
found by email address:</p>
<pre><code class="language-ruby">require &#39;base64&#39;
require &#39;hmac-sha1&#39;

class HelpscoutController &lt; ApplicationController
  skip_before_filter :verify_authenticity_token
  before_filter :verify_signature

  def user
    payload = JSON.parse(request.raw_post)
    if payload[&#39;customer&#39;] &amp;&amp; payload[&#39;customer&#39;][&#39;email&#39;] &amp;&amp; @user = User.where(email: payload[&#39;customer&#39;][&#39;email&#39;]).first
      render json: { html: render_to_string(action: :user, layout: false) }
    else
      render json: { html: &quot;User not found&quot; }
    end

  end

  protected

    def verify_signature
      # Reject the request unless Help Scout sent its signature header
      bail and return false unless (sig = request.headers[&#39;X-Helpscout-Signature&#39;]).present?

      # Compute the HMAC-SHA1 of the raw POST body using the shared secret
      (hmac = HMAC::SHA1.new(&quot;secret-that-you-enter-in-helpscout&#39;s-ui&quot;)).update(request.raw_post)

      # The header must match the Base64-encoded digest
      bail and return false unless sig.strip == Base64.encode64(hmac.digest).strip
    end

    def bail
      render json: { html: &quot;Bad signature&quot; }, status: 403
    end
end
</code></pre>
<p>After fetching the user record, it returns a blob of HTML via a HAML
view:</p>
<pre><code class="language-haml">%ul
  %li Created on #{l(@user.created_at.to_date, format: :long)}
</code></pre>
<p>Then you&#39;re done!  Now when you view a ticket in Help Scout you&#39;ll see
info from your database about that user in the sidebar.</p>
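<p>Incidentally, if you&#39;d rather not depend on the hmac-sha1 gem, the same signature check can be done with Ruby&#39;s bundled OpenSSL library.  A standalone sketch (the secret is whatever you entered in Help Scout&#39;s UI):</p>
<pre><code class="language-ruby">require 'openssl'
require 'base64'

# Compute the Base64-encoded HMAC-SHA1 of the raw POST body,
# keyed with the shared secret from Help Scout's app settings.
def helpscout_signature(secret, raw_post)
  Base64.strict_encode64(OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), secret, raw_post))
end

# Compare against the X-Helpscout-Signature header value.
def valid_signature?(secret, raw_post, header_value)
  helpscout_signature(secret, raw_post) == header_value.to_s.strip
end
</code></pre>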
]]></content>
  </entry>
  <entry>
    <title>Default a Postgres column to the current date in a Rails migration</title>
    <link href="https://www.bencurtis.com/2014/02/default-a-postgres-column-to-the-current-date-in-a-rails-migration/" rel="alternate" type="text/html"/>
    <id>https://www.bencurtis.com/2014/02/default-a-postgres-column-to-the-current-date-in-a-rails-migration/</id>
    <published>2014-02-28T13:38:00.000Z</published>
    <updated>2014-02-28T13:38:00.000Z</updated>
    <author>
      <name>Ben Curtis</name>
    </author>
<content type="html"><![CDATA[<p>If you want a Postgres column (aside from created_at) to be populated with the current date when no other date is specified, you may be tempted to create a migration like this:</p>
<pre><code>add_column :invoices, :paid_on, :date, default: &#39;now()&#39;
</code></pre>
<p>That will look like it works -- you create a record, it gets populated with today&#39;s date, and all is good.  However, if you look at your schema, you will notice that the new field has a default of today&#39;s date instead of now().  Oops. :)</p>
<p>You might try to create the column with the recommendation from the Postgres documentation:</p>
<pre><code>add_column :invoices, :paid_on, :date, default: &#39;CURRENT_DATE&#39;
</code></pre>
<p>But that fails because Rails tries to quote that &#39;CURRENT_DATE&#39; for you before it goes to Postgres, which blows up.  Now what?</p>
<p>Here&#39;s how to do what you want:</p>
<pre><code>add_column :invoices, :paid_on, :date, default: { expr: &quot;(&#39;now&#39;::text)::date&quot; }
</code></pre>
<p>This avoids the quoting problem (by using expr) and avoids the problem of always defaulting to the migration&#39;s run date (by using the default expression (&#39;now&#39;::text)::date, which is effectively the same as CURRENT_DATE).</p>
<p>And <strong>now</strong> when you insert a record without specifying a value for that field, you get the date of the insertion, rather than the date of the field being created. :)</p>
]]></content>
  </entry>
  <entry>
    <title>Searchlight and CanCan</title>
    <link href="https://www.bencurtis.com/2013/12/searchlight-and-cancan/" rel="alternate" type="text/html"/>
    <id>https://www.bencurtis.com/2013/12/searchlight-and-cancan/</id>
    <published>2013-12-03T11:34:00.000Z</published>
    <updated>2013-12-03T11:34:00.000Z</updated>
    <author>
      <name>Ben Curtis</name>
    </author>
    <content type="html"><![CDATA[<p>I&#39;m currently working on a client project where site adminstrators use
the same UI that site users do, so there are permissions checks in the
views and controllers to ensure the current user has the right to do or
see certain things.  <a href="https://github.com/ryanb/cancan/">CanCan</a> provides the access control, which takes
care of most of the issues with a simple <code>can?</code> check or
<code>load_and_authorize_resource</code>.</p>
<p>In one case I wanted to provide search on a list of items (the index
action) so that admins could search through all items in the database,
while users could search only their own items.  I&#39;m using <a href="https://github.com/nathanl/searchlight">Searchlight</a>
(highly recommended) for search, which returns results as an
<code>ActiveRecord::Relation</code>, so it&#39;s easily chainable via CanCan, like so:</p>
<pre><code>class InvoicesController &lt; ApplicationController
  def index
    @search = InvoiceSearch.new(params[:search])
    @invoices = @search.results.accessible_by(current_ability, :index)
  end
end
</code></pre>
<p>Searchlight is also smart enough to return all results if no search
params are provided, so this also works as a typical index action
that lists all items the user can see.  If you&#39;re curious about the
<code>@search</code> instance variable, that is used in the search form in the
index view.</p>
<p>So, if you need search with access control, use Searchlight and
CanCan... they are a great combo!</p>
]]></content>
  </entry>
</feed>