Making a curl request that returns headers only

The UNIX command line is the gift that just keeps on giving. In web development I often want to quickly debug a URL: see whether it’s alive, or what the response is, without downloading the whole content (a large file, for example). For a long time I used Chrome’s Developer Tools for this, until I learned how to do it more quickly with good old curl:

curl -I https://klevo.sk

The -I flag makes curl issue a HEAD request, so the server returns only the status line and headers, something like:

HTTP/1.1 200 OK
Server: cloudflare-nginx
Date: Sat, 27 Jun 2015 17:27:17 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive

It’s especially handy when setting up and testing temporary or permanent HTTP redirects. Doing that in a browser can be cumbersome due to caching.
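
For example, to test a redirect (placeholder URLs here), point curl -I at the old address:

curl -I http://example.com/old-page

A working redirect answers with a 3xx status and a Location header:

HTTP/1.1 301 Moved Permanently
Location: http://example.com/new-page

Adding -L makes curl follow the whole redirect chain and print each hop’s headers:

curl -IL http://example.com/old-page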

Faster SSH workflow with multiplexing

I was reading The Art of Command Line (great stuff) and tried the SSH configuration tips. With the config below I noticed a considerable speedup in various SSH and Git related workflows. My ~/.ssh/config now includes:

Host *
  TCPKeepAlive yes
  ServerAliveInterval 15
  ServerAliveCountMax 6
  Compression yes
  ControlMaster auto
  ControlPath /tmp/%r@%h:%p
  ControlPersist yes

Speed improvements I noticed:

  • I push my code to the remote often. Thanks to the keep-alive and multiplexing options, the connection stays open and subsequent pushes do not pay the penalty of establishing a new connection.
  • The same applies to server provisioning and maintenance. Once the initial connection is established it is kept alive, and sessions opened in a new terminal tab or window begin instantly (you can verify this as shown below).
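
To check that multiplexing is actually in effect, you can ask the control master directly. A quick sketch, with a placeholder host:

ssh user@example.com
# leave that session open, then from another terminal:
ssh -O check user@example.com
# tear the shared connection down once you're done:
ssh -O exit user@example.com

If the master is running, the check prints something like “Master running” and returns immediately.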

The Art of Command Line and the ssh_config man page explain these options in more depth.

Speeding up bundle install with in-memory file system

On some of the servers I work with, cheap hard drives in a software RAID configuration make bundle install extremely slow (it can take half an hour to complete). This obviously became unacceptable during deploys.

I thought it might have something to do with bundler writing a lot of small files while installing the gems. So I decided to try putting the deploy’s bundle directory (where all the gems are installed) onto an in-memory filesystem; on Ubuntu, /dev/shm is one.
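
Before relying on it, it’s worth confirming that /dev/shm really is a tmpfs mount (i.e. backed by RAM) and has enough room; it is typically sized at half of the machine’s RAM:

df -h /dev/shm
mount | grep /dev/shm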

It works flawlessly: the install time dropped from half an hour to a few seconds. After the bundle install completes, however, we do not want to leave the installed gems in memory, as a restart would purge them. So we copy the directory back to disk. Interestingly, copying the whole directory out of /dev/shm does not thrash the disk nearly as much, and it takes at most a minute for a few hundred MB of gems.
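
In manual terms the trick boils down to the following sequence; the paths here are placeholders, and the Capistrano lib below automates exactly these steps:

cp -r /srv/app/shared/bundle /dev/shm/app_bundle
mv /srv/app/shared/bundle /srv/app/shared/bundle.old
ln -s /dev/shm/app_bundle /srv/app/shared/bundle

# run the fast in-memory bundle install at this point, then persist the result:

cp -r /dev/shm/app_bundle /srv/app/shared/bundle.new
rm /srv/app/shared/bundle            # removes just the symlink
mv /srv/app/shared/bundle.new /srv/app/shared/bundle
rm -rf /dev/shm/app_bundle /srv/app/shared/bundle.old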

It’s cool to find such a simple part of Linux and use it to work around a slow hardware problem, while for everything else the server does, the hardware remains perfectly usable and more than capable.

Here’s the Capistrano 3 lib I use in my deploys to integrate this speedup:

namespace :bundler_speedup do
  task :symlink_to_shm do
    on roles(:all) do
      bundle_shm_path = fetch(:bundle_shm_path)
    
      # Ensure the shared bundle dir exists
      execute "mkdir -p #{shared_path}/bundle"

      # TODO: what if #{shared_path}/bundle is already a symlink, i.e. left over from an interrupted install?

      cmds = []
      # Copy the bundle dir to /dev/shm/
      cmds << "cp -r #{shared_path}/bundle #{bundle_shm_path}"
      # Remove the shared bundle dir and symlink the shm dir instead
      cmds << "mv #{shared_path}/bundle #{shared_path}/bundle.old"
      cmds << "ln -s #{bundle_shm_path} #{shared_path}/bundle"
      # We're ready to do a fast in-memory bundle install now...
      execute cmds.join(' && ')
      
      info "shared/bundle was copied to /dev/shm for in-memory bundle install"
    end
  end

  task :remove_from_shm do
    on roles(:all) do
      bundle_shm_path = fetch(:bundle_shm_path)
      cmds = []
      # Copy the shm bundle to shared
      cmds << "cp -r #{bundle_shm_path} #{shared_path}/bundle.new"
      # Remove the symlink and move in the dir on disk
      cmds << "rm #{shared_path}/bundle"
      cmds << "mv #{shared_path}/bundle.new #{shared_path}/bundle"
      # Remove the in memory bundle
      cmds << "rm -rf #{bundle_shm_path}"
      cmds << "rm -rf #{shared_path}/bundle.old"
      # Bundle is persisted and in place
      execute cmds.join(' && ')
      
      info "shared/bundle was restored from bundle install within /dev/shm"
    end
  end
  
  before 'bundler:install', 'bundler_speedup:symlink_to_shm'
  after 'bundler:install', 'bundler_speedup:remove_from_shm'
end

namespace :load do
  task :defaults do
    set :bundle_shm_path, -> { "/dev/shm/#{fetch(:application).gsub(' ', '_').downcase}_bundle" }
  end
end

In a Rails project, place it in lib/capistrano/tasks/bundler_speedup.rake. Capistrano should auto-load it via the Dir.glob line in the default generated Capfile.

This code is released under the MIT license.

Some useful command line tools

It’s about time I got familiar with some of the core UNIX command line tools. No matter how good GUI applications look and work, the command line alternatives get things done faster and are less prone to error.

I am talking here especially about FTPing files around. I’ve been using Cyberduck so far, but a recent release had some bugs, and from speed comparisons I’ve read it is also slower than the command line tools.

So I’ve learned to use rsync and lftp. It’s pure awesomeness. I can’t imagine going back to dragging files around in the GUI.
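
For instance, pushing a site to a server over SSH with rsync, or over plain FTP with lftp, looks like this (hosts, credentials and paths are placeholders):

# only transfer what changed, compressed, preserving permissions and timestamps
rsync -avz ./public/ user@example.com:/var/www/site/

# reverse-mirror a local directory onto an FTP server
lftp -u user -e 'mirror -R ./public /htdocs; quit' ftp.example.com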

I’ve also learned some basic Vim. It’s the fastest way to do a quick .htaccess edit, toggle CakePHP’s debug mode on a server, and similar small jobs. I realize some programmers use it full time for their work, but for me to go that far, TextMate would have to stop existing.