Who is this feature for?

In software development, when designing new features or trying to solve existing issues, it’s important to ask early: who is this feature for?

That applies to tools intended for your developer colleagues, web application features for customers, or anything in between.

I’ve found that the result only tends to be good if the feature or solution was actually designed for a specific person doing a specific task, and if it lets them improve the way they do that task considerably.

For example, when deciding on a Content Management System for a website, developers often jump straight to evaluating the various offerings out there based on features, APIs and supported languages, instead of first identifying the person who is going to work with the system and establishing what their workflow will be: what’s important to them and what isn’t. Only based on that should they look for a solution, or build a custom one.

Tools that are built for a very specific need, and for a real person who can tell you whether they help or not, tend to end up simpler and more pleasurable to use; they last longer, require less maintenance and generally achieve the goal set out for them at the beginning.

Don’t forget about the X-Forwarded-Host header

Recently I was working on a Rails application on Heroku, living behind a reverse proxy. The application serves requests coming to a specific folder on the target domain. For it to correctly generate full URLs, you have to somehow tell the app which hostname to use. In Rails, you can configure a hostname in the environment config file, but that’s a static value which has to be maintained and changed per environment. It also doesn’t work well if you want to access the application from multiple domains.
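For reference, the static approach is a one-liner in the environment config; the hostname below is illustrative:

```ruby
# config/environments/production.rb (hostname is illustrative)
# Sets the default host Rails uses when generating full URLs.
Rails.application.routes.default_url_options[:host] = "www.example.com"
```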

It’s much better to be able to set something up on the proxy itself. This is exactly why the X-Forwarded-Host HTTP header exists, and Rails, being a good web citizen, supports it out of the box.

Before I learned about this header, I had even implemented my own middleware with a custom header to deal with this issue. I was able to drop that extra code once I stumbled upon X-Forwarded-Host.
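For the curious, such a custom middleware boils down to very little code. This is a minimal sketch, not my original implementation; the class name is illustrative:

```ruby
# A minimal Rack middleware that rewrites the request host from a forwarded
# header, roughly the kind of code that native X-Forwarded-Host support in
# Rails makes unnecessary.
class ForwardedHostRewriter
  def initialize(app)
    @app = app
  end

  def call(env)
    # Rack exposes the X-Forwarded-Host header as HTTP_X_FORWARDED_HOST.
    if (forwarded = env["HTTP_X_FORWARDED_HOST"])
      env["HTTP_HOST"] = forwarded
    end
    @app.call(env)
  end
end
```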

Web technology apps taking over?

It’s interesting to see that the 3 desktop apps I use most in my day-to-day computer work are all built on web technology:

  • Atom – My code/text editor.
  • Slack – Communication with my team.
  • Chrome – of course

The last time I was working on a desktop app, I used NW.js, which is a platform for building desktop apps using web technologies.

From my perspective, it’s no coincidence. HTML, CSS and JS are great tools for building lots of types of apps. I like this trend.

curl request and return headers only

The UNIX command line is a set of tools that just keeps on giving. In web development I often find myself wanting to quickly debug a URL: to see whether it’s alive, or what the response is. Often I don’t want to download the whole content (a large file, for example). Before I learned the following, I would use Chrome’s Developer Tools, until I learned how to do it more efficiently and quickly with good old curl:

curl -I https://klevo.sk

Which returns something like:

HTTP/1.1 200 OK
Server: cloudflare-nginx
Date: Sat, 27 Jun 2015 17:27:17 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive

It’s especially handy when setting up and testing temporary or permanent HTTP redirects. Doing that in a browser can be cumbersome due to caching.

Faster SSH workflow with multiplexing

I was reading The Art of Command Line (great stuff) and tried the SSH configuration tips. With the config below I noticed a considerable speedup in various SSH and Git related workflows. My ~/.ssh/config now includes:

Host *
  ControlMaster auto
  ControlPath /tmp/%r@%h:%p
  ControlPersist yes

Speed improvements I noticed:

  • I push my code to the remote often. Thanks to the keep-alive options, the connection stays open and subsequent pushes do not incur the penalty of establishing a new connection.
  • The same applies to server provisioning and maintenance. Once the initial connection is established, it is kept alive, and sessions opened in a new terminal tab or window begin instantly.


Speeding up bundle install with in-memory file system

On some of the servers I work with, due to cheap hard drives in a software RAID configuration, I’ve found that bundle install can be extremely slow (taking half an hour to complete). This obviously became unacceptable during deploys.

I thought that it might have something to do with how bundler writes a lot of small files during the installation of the gems. So I decided to try putting the deploy bundle directory (where all the gems are being installed) onto the in-memory filesystem. On Ubuntu this is /dev/shm.

It works flawlessly: the install time improved from half an hour down to a few seconds. After the bundle install is complete, however, we do not want to leave the installed gems in memory, as they would be purged on restart. So we just copy the directory back to the disk. Strangely enough, copying the whole directory from /dev/shm does not thrash the disk that much, and it only takes up to a minute for a few hundred MB of gems.

It’s cool to be able to find and utilize such a useful and simple part of Linux to work around a slow-hardware problem, while the server remains perfectly usable and more than capable for everything else it does.

Here’s my Capistrano 3 lib I use in my deploys that integrates this speedup:

namespace :bundler_speedup do
  task :symlink_to_shm do
    on roles(:all) do
      bundle_shm_path = fetch(:bundle_shm_path)
      # Make sure bundle dir exists
      execute "if [ ! -d #{shared_path}/bundle ]; then mkdir #{shared_path}/bundle; fi"

      # TODO: what if #{shared_path}/bundle is a symlink, meaning an interrupted install from a previous run?

      cmds = []
      # Copy the bundle dir to /dev/shm/
      cmds << "cp -r #{shared_path}/bundle #{bundle_shm_path}"
      # Move the shared bundle dir aside and symlink the shm dir in its place
      cmds << "mv #{shared_path}/bundle #{shared_path}/bundle.old"
      cmds << "ln -s #{bundle_shm_path} #{shared_path}/bundle"
      # We're ready to do a fast in-memory bundle install now...
      execute cmds.join(' && ')
      info "shared/bundle was copied to /dev/shm for in-memory bundle install"
    end
  end

  task :remove_from_shm do
    on roles(:all) do
      bundle_shm_path = fetch(:bundle_shm_path)
      cmds = []
      # Copy the shm bundle back to shared
      cmds << "cp -r #{bundle_shm_path} #{shared_path}/bundle.new"
      # Remove the symlink and move the on-disk dir into place
      cmds << "rm #{shared_path}/bundle"
      cmds << "mv #{shared_path}/bundle.new #{shared_path}/bundle"
      # Remove the in-memory bundle
      cmds << "rm -rf #{bundle_shm_path}"
      cmds << "rm -rf #{shared_path}/bundle.old"
      # Bundle is persisted and in place
      execute cmds.join(' && ')
      info "shared/bundle was restored from bundle install within /dev/shm"
    end
  end

  before 'bundler:install', 'bundler_speedup:symlink_to_shm'
  after 'bundler:install', 'bundler_speedup:remove_from_shm'
end

namespace :load do
  task :defaults do
    set :bundle_shm_path, -> { "/dev/shm/#{fetch(:application).gsub(' ', '_').downcase}_bundle" }
  end
end
In a Rails project, place it in lib/capistrano/tasks/bundler_speedup.rake. Capistrano should auto-load this for you.
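Because the tasks read the target path with fetch(:bundle_shm_path), the /dev/shm location can also be overridden per project if the generated default doesn’t suit you; the path below is illustrative:

```ruby
# config/deploy.rb (path is illustrative)
# Overrides the default "/dev/shm/<application>_bundle" location.
set :bundle_shm_path, "/dev/shm/myapp_bundle"
```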

This code is released under the MIT license.

Effort – Personal To-do and Project manager

I have open sourced a Rails app that I’ve been personally using for years. The code is available on GitHub under the MIT license. From the README:

I’ve modeled this app for my own personal use, note keeping and personal project management loosely after Basecamp. The single most important point for me is to have To-do lists that work in a particular way – that’s why I’ve built this for myself.

I am open-sourcing it to see if somebody finds it useful and can maybe build on it. Let’s see what happens.

This is a standard Rails 4 app, built the “Rails way”. Test coverage is minimal, just enough for the purposes of this app at this stage.

To-do lists – the most important and the most used part of this project.

Installing Windows 8.1 on 2009 Mac mini

Today I was busy refurbishing an old 2009 Mac mini, software-wise. It’s such a nice device and it’s still running well, apart from the dead DVD drive. Until now, it was running Windows XP, which is no longer supported by Microsoft, so it was time to upgrade. I bought a fresh copy of Windows 8.1 and did a native install, without Boot Camp. It took quite a few tries to figure out which combination of disk formatting and architecture (x86/x64) this old Mac mini can handle.

I followed this guide, but figured out through trial and error what works for this particular machine. The biggest difference is that I ended up installing the x86 (32-bit) version of Windows on an MBR-type disk partition scheme. The other combinations mentioned in the article resulted in the machine not being able to boot.

My guide

  1. Power on the Mac, hold down Alt to be able to select the startup disk.
  2. The Windows installation DVD should be in your Mac’s drive or an external DVD drive (both will work).
  3. Select this DVD to boot from, do not select any UEFI options.
  4. Once the installer starts, fire up the command line (Shift+F10), run the diskpart utility and issue the following commands (warning: clean wipes the entire disk):
diskpart
select disk 0
clean
convert mbr

This is where we diverge from the guide linked above: we’re converting the disk to the older MBR partition scheme, as this is what our 32-bit Windows needs to work on this Mac. Once this is done, you can exit the command line and continue with the installer as normal.

The only thing that did not work for me after Windows was installed is the built-in sound card. I ended up using an external one that was lying around.

Windows 8.1 is surprisingly snappy on just 2GB of RAM that this Mac mini has and overall the machine is a joy to use for some office work, which is its purpose.

Disclaimer: follow this guide at your own risk.

Sketch replaced Photoshop in my web-design workflow

I am really happy that I stumbled on an article (can’t find it now) comparing Sketch to Photoshop. It convinced me to give Sketch a try for the web-design part of a project I was working on. I downloaded the trial version, went through a few tutorials and quickly saw the potential to greatly improve my workflow. Sketch is clearly a tool that was developed with the web in mind from the start. It paid off: once I started designing the website, its UI, logo and typography, I was impressed at how much faster I was able to accomplish things compared to Photoshop.

Better tool opens up time for experimentation

What I did not necessarily expect at the beginning is that the time and effort Sketch saves me when putting an idea or vision into the computer as a graphic design is time I now spend on experimentation and fine-tuning of my work. It’s so much easier to select all the elements of the same type on a webpage design and fine-tune the border, shadow or size in one go (just one of Sketch’s many features). The result is work I’m more satisfied with.

Vectors are everywhere

In Sketch, everything is a vector. Vector graphics is not something I had much experience with. In the past, I used Illustrator only on a few occasions, never becoming comfortable enough with it to make it my first stop when I needed to create a logo or similar graphic asset for a project. So this time, I gave myself a specific task to finish, unrelated to any work project I was busy with. The whole point of this task was to push myself to improve my vector skills and produce something tangible, not just play aimlessly with shapes and colors. I created this African mask illustration and made it available on Envato (which is also something I had wanted to try for a while).

African Mask


Learning this new tool, and the faster, more effortless web-design workflow that came with it, is something I now enjoy thanks to Sketch.