I’m a Systems/Software Engineer in the San Francisco Bay Area. I moved from Columbus, Ohio in 2007 after getting a B.S. in Physics from the Ohio State University. I'm married, and we have dogs.

Under my github account (https://github.com/addumb): I open-sourced python-aliyun in 2014, I have an outdated python 2 project starter template at python-example, and I have a pretty handy “sshec2” command and some others in tools.

The Cost of 4K 60fps video storage, a dad's perspective

February 20, 2021

I’m a dad, now! Yay! My son is awesome. Being a dad during the COVID-19 pandemic is awful in myriad ways. It has silver linings, but this post isn’t about that.

I ran out of iCloud storage

I’ve been taking 4K 60fps videos of my son doing things which I find earth-shattering: looking at me, sneezing, babbling, laughing, eating, even bathing and peeing LOL. I’m taking these in 4k 60fps on my iPhone whatever (11 pro, midnight green for sure) which offers a simple set of toggles to do so. Obviously, recording video at 4x the resolution and 2x the framerate will likely cost in the ballpark of 8x in storage according to napkin math. That’s the same ballpark as an order-of-magnitude increase in storage for video. I didn’t do that napkin math at the time, but my instinct should have informed me.

These toggles are, to my new dad perspective, future-proofing my memories. Back in the days before the pandemic, people would have high school graduation parties. At my own, my parents lovingly trawled through the family archives to unearth the most dubiously endearing but truthfully embarrassing photos and videos they could find. Great success on their part, if memory serves me well.

I want a trove of ancient videos to share with the world when my son hits these coming-of-age milestones. I want to show him how I’m such an idiot that I sacrificed future flexibility on my part, fitting in better on his part, and generally not being embarrassed on all parts. I’m already so proud of him, I cannot imagine how proud I’ll be if he has one of these shindigs to celebrate a major achievement among his peers. Him taking a crap is a major achievement in my book, so I’m just gonna be constantly gushing about how awesome he is. I have to keep that discount for myself.

So, I ran out of iCloud storage, of course. Who tf doesn’t? That’s the entire point of iCloud storage: to run out and pay Apple for more “backups.” This got me thinking: how am I gonna save these 4k 60fps videos? How am I going to archive them and then retrieve them at a later point? Well, let’s start with Apple’s estimates of the storage costs of these videos. The iPhone Settings app lists estimates under the camera details:

A minute of video will be approximately:

  • 40 MB with 720p HD [editorial: LOL @ HD] at 30 fps (space saver)
  • 60 MB with 1080p HD at 30 fps (default)
  • 90 MB with 1080p HD at 60 fps (smoother)
  • 135 MB with 4k [editorial: why no “UHD”?] at 24 fps (film style) 🙄
  • 170 MB with 4k at 30 fps (higher resolution)
  • 400 MB with 4k at 60 fps (higher resolution, smoother)

1080p @30fps is 60MB per minute, or 1MB/s. 4k @60fps is 400MB per minute, or about 6.67MB/s. Let’s invert it so we don’t have repeating decimals (they’re annoying in text): 1080p @30fps is 0.15 times the size of 4k @60fps. Cool, 8x (0.125 inverted) was pretty close! Let’s just blame the difference on a rounding compromise at Apple to keep the camera settings nice and neat, though imprecise. We’re still gonna use their numbers for now. My napkin math is supported by only one scientist.
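If you don’t trust my arithmetic either, the ratios fall out of Apple’s table in a few lines of Python:

```python
# Sanity-check Apple's per-minute estimates against the napkin math.
mb_per_min_1080p30 = 60    # from the settings table
mb_per_min_4k60 = 400

ratio = mb_per_min_4k60 / mb_per_min_1080p30    # how much bigger 4k60 is
inverse = mb_per_min_1080p30 / mb_per_min_4k60  # 0.15
napkin = 4 * 2   # 4x the pixels, 2x the frames

print(f"actual: {ratio:.2f}x bigger, napkin guess: {napkin}x, inverse: {inverse}")
```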

How much video do I take?

Well this is highly variable, but if there’s one thing I learned to get my Physics degree, it’s that you can approximate the shape of a cow as roughly a sphere. So let’s approximate. Let’s see how much I’ve taken at 4k 60fps so far:

$ ls -1 /mnt/lilstorage/Videos/Other/iPhone/2021-02-20/ | wc -l

Ok, 310 videos. Let’s see how to get their duration. (googling…) looks like mediainfo will do it, so sudo apt-get install mediainfo or whatever, then away we go:

$ for f in *; do mediainfo "$f" | grep ^Duration | head -1; done
... snip
Duration                                 : 1 min 4 s
Duration                                 : 50 s 277 ms
... snip

Great, the tool unhelpfully assumes that I, a human, am not a computer. Turns out humans can compute, so that’s annoying. Oh, but it supports * args and has a JSON output option:

$ mediainfo --Output=JSON * | jq .
... snip uhhhh on second thought, you don't want to see this

Ok, so it has some structure. For each file, it outputs 3 “tracks”: general, video, and audio. But it already looks a bit off. I need to compare a 30fps and 60fps file, a 4k and a 1080p file, and maybe even the full cross to understand this tool. One 4k 60fps file I have is named IMG_4165.MOV, and a 1080p 30fps file is IMG_3603.MOV. Let’s compare their outputs:

$ mediainfo --Output=JSON IMG_4165.MOV IMG_3603.MOV | jq .
... snip 
... snip

Yeah, I crapped out, but that’s the point at least: it’s getting a bit more complicated just to tally up the duration of 4k vs 1080p videos I’ve taken. I’ll plop this out to a file and read it into ipython:

$ mediainfo --Output=JSON * | jq '.[].media.track[1] | "\(.Width) \(.Height) \(.FrameRate) \(.Duration)"' | sed 's/"//g' > ~/derp.txt

jq cares about quotes, I don’t.

Then let’s load it up and group things together:

#!/usr/bin/env python3
derp = open("derp.txt", "r").readlines()

from collections import namedtuple, defaultdict

vidfile = namedtuple("vidfile", "width height framerate duration")
vidfiles = [vidfile(*l.strip().split(" ")) for l in derp]

def framebucket(fps):
    fps = float(fps)
    if fps < 20:
        return "timelapse"
    elif fps < 26:
        return "24"
    elif fps < 35:
        return "30"
    elif fps < 65:
        return "60"
    else:
        return "slowmo"

vid = namedtuple("vid", "res framebucket duration")
vids = [
    vid(int(v.width) * int(v.height), framebucket(v.framerate), v.duration)
    for v in vidfiles
]

summary = defaultdict(int)
for v in vids:
    summary[f"{v.res} @ {v.framebucket}"] += float(v.duration)

for k, v in sorted(summary.items()):
    print(k, v)

This outputs that I take about 5x more 1080p @ 30fps video than 4k @ 60fps video:

1701120 @ 24 15.255
1701120 @ 60 42.568999999999996
2073600 @ 24 896.58
2073600 @ 30 10020.490000000009
2073600 @ 60 313.47099999999995
2073600 @ slowmo 18.997
2764800 @ 30 1.8079999999999998
8294400 @ 30 233.902
8294400 @ 60 2065.2369999999996
921600 @ 30 632.647

Ignore the rounding errors; you’ll see 10020.5 and 2065.237 for each, respectively.

How much video will I take?

I confess: I take a bunch of very short videos of specific interactions. I’m trying to take longer videos, but it’s hard to do that while the subject tries their best to get into trouble.

Well, my son is 9 months old (minus 3 days) and I’ve taken lots. My phone broke and I had to swap up when he was only 1 month old, and that’s what these files cover. So: 10020 seconds of 1080p 30fps video and 2065 seconds of 4k 60fps video in 8 months, or on rough average 0.347917 hours per month of 1080p 30fps and only 0.07170138 hours per month of 4k 60fps.

If we pretend I keep my habit as-is, by the time he’s 18, I’ll have 75 hours of ancient 1080p 30fps video and a measly 15.5 hours of less-ancient 4k 60fps video. Decent to choose from for a simple carousel style loop.
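The projection is just rate × remaining months; here’s that napkin math as Python, using my measured seconds from above:

```python
# Project total video at age 18 from the 8 months of files I actually have.
sec_1080p30 = 10020.49
sec_4k60 = 2065.237
months_observed = 8
months_at_18 = 18 * 12   # 216

def hours_at_18(seconds):
    return seconds / months_observed * months_at_18 / 3600

print(f"1080p30: {hours_at_18(sec_1080p30):.1f} hours")  # ~75 hours
print(f"4k60: {hours_at_18(sec_4k60):.1f} hours")        # ~15.5 hours
```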

Phew! I was worried I’d need to factor in how 1080p -> 4k -> 8k etc progression will happen and estimate filesize growth over time.

By Apple’s estimates, my 15.5 hours of 4k 60fps video will take only about 370GB to store (15.5 hours × 60 min × 400MB/min). Easy peasy!

Technology progression costs more storage!

But wait, I do need to estimate the progression! I don’t have the energy to estimate, so I’ll over-estimate by supposing my entire archive is best approximated as being 400MB per minute (nullifying the fun I just had): 2TB. Big whoop. I guess I won’t get that NAS and will just shove them into S3 glacier for now.
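To put a dollar figure on shoving it in Glacier, here’s a back-of-envelope sketch. The $/GB-month figure below is my assumption (roughly Glacier’s 2021 list price), so check current pricing yourself:

```python
# Back-of-envelope archive cost; the price per GB-month is an ASSUMPTION,
# not gospel -- look up the current S3 Glacier pricing before believing it.
archive_gb = 2000                 # the ~2TB over-estimate from above
glacier_usd_per_gb_month = 0.004  # assumed ballpark price
monthly = archive_gb * glacier_usd_per_gb_month
print(f"~${monthly:.2f}/month")   # a few bucks a month, cheaper than a NAS
```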

Mobile App Operability

July 15, 2020

Mobile apps often get a bad rap from backend infrastructure people. Rightly so, when comparing operability between backend infrastructure services and mobile applications. First, so that we’re on the same page: operability is the trait of a software system which makes detection, remediation, and anticipation of errors low-effort. What is this for mobile applications? Well… it’s the trait of a mobile application which makes detection, remediation, and anticipation of errors low-effort, of course. Mobile apps are software systems. They’re usually best modeled as trivially parallelized distributed software systems which just happen to run on mobile devices rather than on infrastructure you own.

Mobile operability is the trait of mobile apps which makes detection, diagnosis, remediation, and anticipation of errors low-cost and low-effort.

Errors here can be anything unwanted: crashes, freezes, error messages, lacking correct responses to interactions, and even having incorrect responses to interactions.


Detection

To consider what good operability is for mobile apps, we need a way to detect these errors. To detect errors, we have to record them and deliver that record to a detection system. That system does not have to be outside of the app! In fact, it’s best if your app has a self-diagnostic mechanism built in to help understand errors in a way that respects your users’ privacy. Common detection mechanisms are Crashlytics, the Google Play console Vitals reports, and app reviews and star ratings. The app usage analytics system should also be used for collecting errors, though you should use a separate analytics aggregation/processing method here to ensure your error data has as few user attributes as possible. Why not use the same one? Well, with normal app analytics, you’re subject to GDPR, deletion requests, and heightened security expectations. Errors shouldn’t be like this: errors should be pared down so far that you can safely and securely retain them forever.

Detection is focused on efficiently detecting occurrences of errors, not making them easy to debug.

Each error category may have different and multiple avenues of diagnosis. Some likely have none at all, which is why I’m writing this. There are three general mechanisms of delivering errors:

  1. 3rd party: things which you don’t own, manage, or control like Google Play console Vitals, app store reviews, star ratings, etc.

  2. Out-of-band: your app aggregates, interprets, and potentially sends error details to a separate backend system than what your app primarily interacts with. Common examples are analytics systems, custom crash interpreters, and local error aggregation.

  3. In-band: your app includes some details of errors within the primary client/server RPC; your server may include some global error information within the primary RPCs as well.


Diagnosis

There is a middle ground between zero context and debug-level context. You need to know the stack trace, view hierarchy, and some information about the device and user which stands out to help diagnose a problem. You can quickly detect that a crash happened, but diagnosing why may require far more insight. The middle ground is in the aggregable context of an error: stack trace, activity or view, device type, etc. You should not require a full core dump to understand that the app crashed; that’s for understanding why.

Diagnosis requires great context for the error. This context can come from a few different places:

  1. The app itself: it can submit an error report to you.

  2. The backend(s): maybe your web servers sent an HTTP 500 response which caused the user-visible error.

  3. The session, user, or customer: if you can determine which account experienced a specific error, you may be able to reconstruct context from their user data.

Ever-increasing context is not the path to perfect diagnosis; there should always be a trade-off between increasing context and value to the user. If you want to collect full core dumps, remember that transmitting 4GB+ of data over a cell network can bankrupt some people, so don’t do it. Adding the device manufacturer or even device model to the error report may give you critical context for some problems.

Every error may be different. There’s no magical set of fields to supply in an error to give sufficient context to efficiently diagnose. You, as the app developer, should have the best idea of what pieces of context will be most useful in each subsystem of your app. For example, if you’re implementing the password change flow, you may want to include metadata about the account’s password: how old was it, how many times has this person changed it recently, or even how they arrived at the screen if you have multiple paths like an email deep link.
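As a sketch of that password-change example (every name here is hypothetical), each subsystem can layer its own context onto a shared base report:

```python
import platform
import traceback

def base_error_report(exc):
    """Aggregable context every error gets: type, stack, coarse device info."""
    return {
        "error_type": type(exc).__name__,
        "stack": "".join(traceback.format_exception_only(type(exc), exc)),
        "device_model": platform.machine(),  # stand-in for the real device model
    }

def password_change_error_report(exc, password_age_days, recent_changes, entry_path):
    """The password-change flow layers on the context only it knows about."""
    report = base_error_report(exc)
    report.update({
        "password_age_days": password_age_days,
        "recent_changes": recent_changes,
        "entry_path": entry_path,  # e.g. "settings_screen" vs "email_deep_link"
    })
    return report
```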


Anticipation

There are many scenarios where a quantity in the application may be increasing or decreasing toward a critical threshold: resident set size, battery power, CPU cycles, cloud storage quota, etc. There are a few strategies to know if these are about to produce an error:

  • High/low water marks

  • Quotas

  • Correlational estimates: flow complexity, runtime, user sophistication, and others I have yet to learn or hear about.

High/low watermarks: if your application has a finite amount of a resource, you should implement it so that it operates optimally while using only a small amount of that resource. Once it reaches a threshold of resource usage, it can start disabling or degrading functionality. It’s common to implement at least two thresholds here: high and low watermarks. If the low watermark is reached, the app can start to refuse new allocations into the resource pool, or delay unnecessary work. The app should still generally be functioning as expected, though maybe a bit slower. Upon reaching the high water mark, the app should outright disable functionality and evict low-priority resource usage.

Example: Let’s say an app takes, uploads, and views images. One of the resources it’s sure to use is local disk: to cache captured images, cache or even synchronize downloaded images, and to cache different image resolutions or image effect applications. If we set a low watermark of 1GB local storage, the app can switch to a mode where it does all processing on only previously allocated image files. This prevents some increase in storage used. However, if we hit a high watermark, we could have the app actively de-allocate images in storage and force re-fetching, in-memory processing, or even decreased resolution or effect support.
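A sketch of those watermarks in Python (the class and the numbers are invented for illustration):

```python
class DiskCache:
    """Sketch of high/low watermark behavior for an image cache."""
    LOW_WATERMARK = 1 * 1024**3    # 1GB: stop growing, work with what we have
    HIGH_WATERMARK = 2 * 1024**3   # 2GB: actively evict

    def __init__(self):
        self.used = 0
        self.entries = {}   # name -> size, insertion-ordered (oldest first)

    def put(self, name, size):
        if self.used + size > self.HIGH_WATERMARK:
            self.evict_low_priority()
        if self.used + size > self.LOW_WATERMARK:
            return False   # refuse new allocations; caller processes in-memory
        self.entries[name] = size
        self.used += size
        return True

    def evict_low_priority(self):
        # Drop oldest entries until we're back under the low watermark.
        while self.used > self.LOW_WATERMARK and self.entries:
            name = next(iter(self.entries))
            self.used -= self.entries.pop(name)
```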

Quotas: these follow a similar concept as watermarks, but they have an inverse usage expectation. Nothing should change in behavior until the quota is met, at which point there can be a user flow for increasing the quota or decreasing the usage.

Correlational estimates: try not to use these. If a critical error occurs, once the dust settles and everything is recovered, it’s common to wonder if you could have seen this coming. One common question in the blameless post-mortem is “Was there a leading indicator which could have predicted this issue?” This is grasping at straws. If your application is so complex that there is no direct indicator you can add, nor bug you can fix, you’re already in too deep without proper operability. Push back against adding an alert on a correlational leading indicator. Instead, try to reduce the complexity of the app around the failure.


Remediation

If we’ve detected and diagnosed an error, how do we fix it? This is where things get fun :) The general idea is to use everything at your disposal! The well-trod areas are:

  • Backend remediation

  • In-app feature flags

  • Hotfix

I’d like to urge you to consider another: a failover app wrapper. The failover wrapper is not just a blanket exception handler; it’s as separated as possible from the icky, risky, and most problematic part of your app: your code (mine included!). There are many ways to go about this and none of them are sufficient, otherwise it would be a fail-safe. Nothing in your application’s logic should be able to get into such a bad state that the best recourse available is to harm the core user experience. You could implement this as a stale view of data in a separate process, or as a babysitter process which spawns your main application’s surfaces; the best available options all use a separate process. Within your main application’s logic, you can have a mechanism of “crashing” which only exits the child process, not the wrapper, which is still able to provide some useful functionality to the user. The IPC between the processes to synchronize the user’s state could be done all sorts of ways, but again it needs to be overly simple so that the failover wrapper doesn’t collect the same bugs as the main app.

Backend remediation is what we end up with when we don’t plan for emergencies as we write the app. The backend sends some special configuration telling the app to disable something, or sends a response which was carefully crafted to side-step the problematic app code. You’ll end up with some of these, and that’s a good thing.

In-app feature flags deserve a whole separate post. I’ll also split them out into two categories: killswitches and dials. The general idea here is to implement graceful degradation within the app. Similar to the idea above about high/low watermarks, if the app encounters some critical error, the app can have a mechanism to reduce or remove functionality. One simple example is to crash when detecting a privacy error. Rather than show any activity with a privacy problem, just exit the app. This is the bare minimum, though, because how can you fix the privacy problem and stop the app from crashing?

Killswitches: these are simple boolean indicators which the app checks any time it exercises some specific functionality. The functionality is skipped if the killswitch is on. Your implementation may vary in the use of true vs. false to indicate on or off, but make it consistent. These parameters should be delivered through an out-of-band RPC to configure your application, ideally changing while the app is running. Distributing killswitches is a fun distributed systems problem with many complex solutions, just be sure that you’re comfortable with the coverage and timeframe of your solution, be it polling, pushing, swarming, or other fancier options I need to learn about.
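A minimal killswitch check might look like this sketch (class and flag names are made up; the delivery mechanism is whatever out-of-band RPC your app uses):

```python
class Killswitches:
    """Sketch of a killswitch registry fed by out-of-band config delivery."""
    def __init__(self, flags=None):
        self._flags = dict(flags or {})

    def update(self, flags):
        """Called by the config-delivery layer (poll, push, etc.)."""
        self._flags.update(flags)

    def is_killed(self, feature):
        # Unknown features default to NOT killed, so a config outage
        # doesn't silently disable the whole app.
        return self._flags.get(feature, False)

switches = Killswitches()
switches.update({"image_upload": True})   # ops flips the switch remotely

if not switches.is_killed("image_upload"):
    pass  # ...run the upload flow...
```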

Warning about killswitches: if you have killswitches for very small pieces of your application, you will end up exercising code which is behind tens or hundreds of conditional killswitch checks. Nobody ever tests every combination of killswitches, because they are killswitches. Use A/B testing to control the on/off choice of these smaller, less disaster-prone surfaces of your application. Killswitches should be reserved for disabling large pieces of functionality.

Dials: some functionality in your application can scale up and down in demand or resource usage, both on the client and on the backend. In the example of watermarks above, image resolution could be a dial from the client to reduce resource usage on the client or on the server. Video resolution, time between polls, push batching, and staleness of data are all examples of dials which you can use to remediate huge categories of errors if you support dialing to 0.
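Here’s a sketch of one dial, with made-up names: a server-delivered scale factor on polling, where dialing to 0 disables polling entirely:

```python
from typing import Optional

def polling_interval_seconds(dial: float, base: float = 30.0) -> Optional[float]:
    """dial=1.0 is normal; >1 backs off server load; 0 means stop polling."""
    if dial <= 0:
        return None   # dialed to 0: remediate by disabling the feature
    return base * dial

assert polling_interval_seconds(1.0) == 30.0
assert polling_interval_seconds(2.0) == 60.0  # halve the poll rate under load
assert polling_interval_seconds(0) is None
```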

Hotfix: you can always ship a new build of your app with the offending bug fixed. This takes a long time and may take a whole team of people to produce. Even then, you’re at the mercy of the app stores, which may take days or even weeks to finally push your update. Just be sure only to do a true hotfix: fix the code from the very commit used to build the affected version.

The Perfect App

Remember: “The perfect is the enemy of the good.” Having a product is more important than having a perfectly operable product. The flip-side also holds: having an operable product is more important than having a perfect product. Don’t go all-in on either, exercise the balance with moderation. Learn from your mistakes. Just as a competitor is about to eat your lunch by getting to market sooner, another competitor may swoop in when you inevitably have a disaster and don’t have the operational capability to recover.

I don't write

October 30, 2019

I don’t write, and I often get feedback that it’s hurting my career in software engineering.

Disclaimer: I’m butthurt about struggling to advance in my career.

The following is what happens when I try to write anything (this included):

  • I think I have a good idea
  • I flesh it out into fairly broad strokes
  • I start writing
  • I doubt my idea
  • I fear being bullied
  • I delete what I wrote

Why does this hurt my career? Because I’m a software engineer. We conflate visibility and productivity in this industry, thinking the engineer who writes Medium articles all the time must know their shit because they seem to have quite a following.

I’m not saying I shouldn’t have to broadly communicate. There’s obviously value in broadly communicating. It’s why the fucking printing press was a world-changing invention. I’m not talking about spreading the Good News here, though. I’m talking about rehashing architecture choices, workflow recommendations, production operations plans, and shit as banal as unit test coverage. Why would I bore the world or my peers with things that are trivial to search for? How could that possibly be valuable? How is that required for me to “level up”?

I’d like to explore why I don’t write, particularly for work. So let’s look at my steps of abandonment.

How to Tank Any Technical Career by “Not Communicating Enough”

I think I have a good idea

We all have good ideas. We’re humans. Each of us has a unique experience and can share ideas which others haven’t had. Stupid simple. I don’t have confidence to call many of my ideas good, but I figure some probably are good.

I flesh it out into fairly broad strokes

This is actually a pretty quick process: trim it down to the bare essentials, don’t demean anybody’s work, have a strong recommendation or request of the coworker who reads it. Easy peasy.

I start writing

For larger documents or plans, where more context and guiding is needed, I go off the rails pretty easily. I get bogged down in minutiae. I start trying to cover every angle of technical attack this thing may face.

I doubt my idea

As I’m writing a defense without an attack, I start to convince myself that it’s not worth writing. That it contradicts something else I hadn’t seen. That it’s obvious, not novel. That I’m missing some piece of context that makes the whole thing moot.

I fear being bullied

I can hear it now: “well you probably just can’t take feedback very well if you always think it’s bullying.” Sure, fine. There’s nothing I can say to that. It’s just shutting down a conversation. If that’s what you’re thinking, then this isn’t for you. Otherwise, you may have experienced this: the Super Sr. Hon. Principled Engineer XIV stepping into the idea you had and shitting all over it. Not because they have a better idea, but because they can shoot mine down without bothering to consider it. Because one of the keywords I used matched a thing they own. Because I recommended using a system they didn’t write. Because they can.

I delete what I wrote

Not this time!

That’s all, I mostly needed to vent. But in the spirit of not deleting what I wrote, here are snippets from earlier versions of what I thought I wanted to say…

I’m talking about advancing a career through sheer confidence. Faking it until you make it seems to be a requirement in software engineering career progression.

I do this personally, for nerd stuff or regular-ass personal stuff. I have a handful of partially drafted things I’ve written out. One weird thing about the personal ones is the audience. The audience is always me. I can’t write advice since I’ll just doubt the value of it. I mean hey, it didn’t help me the first time around, right? So I write to straighten my thoughts, to categorize and reinterpret.

I do this abandonment process at work almost every day. Let’s say I have an idea, or a direction shift, which I need a lot of people to get on-board with before the wheels come off of the product/app/website/project/whatever. So, I say it in a chat and a couple people agree, then I say it more broadly and get complete and utter silence. I could bring it up in a meeting, but I usually talk myself down from that by pointing out what happens every time I do: people say that’s a good idea, pretend it takes 10x the engineering time it does, and proceed to politely tell me to go fuck myself by saying that this next deadline is more important.

This has trained me not to share my ideas widely. It has taught me that my ideas are bad.

So I instead have 1:1 or small group conversations about technical direction, driving everybody to a higher bar and generally getting people excited to work on the most boring stuff possible: reliability. I do this a lot at work, pointing out subtle tactical changes and creating a new vision of whatever it is I’m working with.

I moved addumb.com into GitHub pages

March 07, 2016

It was kind of fun :) Believe it or not, I actually have a couple other pages under addumb.com aside from shitty blog posts:

Those were all really easy to move. Moving my shitty blog from wordpress.com into GitHub pages was a little more complicated than all the guides make it seem. At a high level, you’ll do these three big steps:

  • Do the normal shit for GitHub pages (make a new repo, setup a jekyll site)
  • Import your Wordpress blog
  • Fix all the stuff that just broke all over the place

GitHub Pages

The GitHub docs on this are pretty straightforward. You should end up with a site with a directory structure like this:

$ tree -L 2
├── CNAME
├── Gemfile
├── README.md
├── _config.yml
├── _includes
│   ├── footer.html
│   ├── head.html
│   ├── header.html
│   └── sidebar.html
├── _layouts
│   └── default.html
├── derp.html
├── index.md
├── posts.md
├── sitemap.xml
└── vendor
    └── bundle

The contents of CNAME here is just “addumb.com” for me. Don’t do that for yours. Gemfile only contains 2 lines right now: “source ‘https://rubygems.org’” and “gem ‘github-pages’”. README.md is just a regular GitHub readme. _config.yml has some annoying pieces you may need to dig up, particularly this:

markdown: kramdown
kramdown:
  input: GFM
  hard_wrap: false

Then the _includes are just your site template pieces, _layouts just has a default for now which is basically empty:

<!DOCTYPE html>
<html>
{% include head.html %}
<body>
  {% include header.html %}
  {{ content }}
  {% include footer.html %}
</body>
</html>

You should be able to run this to get your jekyll site running locally:

$ bundle exec jekyll serve

The Wordpress.com Import Stuff

Now that you have a fancy new GitHub Pages site up and running, it’s time to move your shitty blog over to it from Wordpress.com. The steps are very similar for other Wordpress setups, so try to read between the lines.

Warning: Make a new branch in your repo, you’re going to totally fuck shit up.

Okay, then follow along here…

  1. Login to your wordpress.com account, then your “site”, then export it all as a single XML file.
  2. Add the jekyll-import dependency to your Gemfile where you should already have github-pages:

    source 'https://rubygems.org'
    gem 'jekyll-import'
    gem 'github-pages'

    Don’t try to add this at the outset. Install github-pages, and then jekyll-import. They have a fake version conflict.

  3. Now run this to get your import started:

    bundle exec jekyll import wordpressdotcom --source-file ~/Downloads/addumb.wordpress*.xml

    This will spew all kinds of unhelpful gem dependency failures for a while. One by one, you’ll need to fix them. This step is total bullshit.

    Whoops! Looks like you need to install 'hpricot' before you can use this importer.
    If you're using bundler:
      1. Add 'gem "hpricot"' to your Gemfile
      2. Run 'bundle install'
    If you're not using bundler:
      1. Run 'gem install hpricot'.

    This is pretty straightforward:

    echo "gem 'hpricot'" >> Gemfile
    bundle install
    bundle exec jekyll import wordpressdotcom --source-file ~/Downloads/addumb.wordpress*.xml

    and repeat until you don’t get a Gem error.

  4. Okay, now you have a steaming pile of malformatted “.html” files under _posts. Each one has “frontmatter” at the top of the page, which is put between triple-dash markers: ---. The frontmatter is just YAML describing some specifics about each post. Go through each one and clean up the garbage left over; when you’re done it should look something like this post’s frontmatter:

     ---
     layout: post
     title: I moved addumb.com into GitHub pages
     date: 2016-03-07 11:26:00.000000000 -07:00
     type: post
     published: true
     status: publish
     description: Moving a blog from wordpress and website from AWS to GitHub Pages.
     keywords: github pages, aws migration, wordpress export
     categories:
     - Tech
     - Tip
     tags:
     - Tech
     author:
       login: addumb
       email: [email protected]
       display_name: addumb
       first_name: 'Adam'
       last_name: 'Gray'
     ---

    You should have removed garbage like this:

     _publicize_pending: '1'
     _edit_last: '16162427'
     _wp_old_slug: '1'
     original_post_id: '1'

The Layout(s)

The import created a bunch of _posts which reference a layout called post. What is that and how do you create it and make it somewhat useful? (I cannot give any advice on making it truly useful.)

You might want to make the post layout so that your posts aren’t rendered plainly, like this. Here’s a starter, just plop this in _layouts/post.html:

<!DOCTYPE html>
<html>
  <head>
    <title>
      {% if page.title %}{{ page.title }}
      {% else %}{{ site.title }}
      {% endif %}
    </title>
  </head>
  <body>
    <h1>{{ page.title }}</h1>
    {{ content }}
  </body>
</html>

That will just make a web page with your post’s title in the title bar of the browser along with a big header up top, then your unadulterated goodness underneath.

Next, you may want to list your posts in a few different places. Here’s how I made the sidebar/bottombar list of posts here. I made a file in _includes named sidebar.html and it’s this:

Other posts...
<ul>
  {% for post in site.posts %}
  {% if page.url == post.url %}
  <li>&raquo; {{post.title}}</li>
  {% else %}
  <li><a href="{{post.url}}">{{post.title}}</a></li>
  {% endif %}
  {% endfor %}
</ul>

Then you can update your post.html layout to include this on the right-hand side however you’d like.


SEO

Ha! I have no idea what I’m talking about here, but generally do these things:

  1. Make a sitemap.xml.
  2. Add descriptions and keywords to your site and to each post. Check the head layout on this site: _includes/head.html
  3. Add some jQuery, that helps, right??
  4. Bootstrap something, is that still fashionable?

Walk Away

That’s it! That’s how I made this thing. It was fun :)

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 United States License. :wq