<p>kintoandar · A place where knowledge is Free and Open Source prevails · Joel Bastos</p>
<h1>Building healthier containers</h1>
<p>2018-01-01 · https://blog.kintoandar.com/2018/01/Building-healthier-containers</p>
<div style="text-align:center">
<p><img src="/images/containers_not_vms.jpg" alt="lightweightvm" /></p>
</div>
<h2 id="intro">Intro</h2>
<p>Containers are <b>nothing</b> like virtual machines!<br />
<br />
Now that we’ve cleared that up, this post will try to shed some light regarding:</p>
<ul>
<li>How containers, as we know them, came to exist</li>
<li>Major differences between containers and virtual machines</li>
<li>Examples of how to build minimal containers</li>
<li>Demystifying the <code class="language-plaintext highlighter-rouge">scratch</code> container</li>
<li>Examples of how to debug running containers using other containers</li>
<li>Benefits of minimal containers</li>
<li>Tools that help build minimal containers</li>
</ul>
<h3 id="disclaimer">Disclaimer</h3>
<p>I’ll be using Docker throughout this post, as it’s the most widely used, but these concepts should apply to other container runtimes such as rkt, LXD, or containerd.</p>
<h2 id="its-all-about-abstraction">It’s all about abstraction</h2>
<p>When virtual machine <a href="https://en.wikipedia.org/wiki/Hypervisor">hypervisors</a> started their rise, they provided full virtualization or paravirtualization: fancy names for virtualizing everything, or for using special drivers on the guest to improve interaction with the real machine (the host). Both guest and host had a full operating system copy, including their own kernel, libraries, tools and so on.<br />
<br />
With containers (jails, zones, etc.), the host and the “guest” share the same kernel to achieve process isolation. Eventually, a set of nifty new Linux kernel features called <a href="http://man7.org/linux/man-pages/man7/cgroups.7.html">cgroups(7)</a> (CPU, memory, disk I/O, network, etc.) and <a href="http://man7.org/linux/man-pages/man7/namespaces.7.html">namespaces(7)</a> (mnt, pid, net, ipc, uts and user) appeared to better restrict and enforce that isolation.</p>
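<p>Namespaces are easy to observe from userspace. As a minimal sketch, assuming a Linux machine with /proc mounted, you can list the namespaces a process belongs to and verify that a child process shares them:</p>

```shell
# Each namespace shows up as a symlink under /proc/<pid>/ns;
# processes showing the same inode share that namespace.
readlink /proc/$$/ns/net
# A child inherits its parent's namespaces:
sleep 3 &
CHILD=$!
[ "$(readlink /proc/$$/ns/net)" = "$(readlink /proc/$CHILD/ns/net)" ] && echo "same net namespace"
kill "$CHILD"
```

<p>Container runtimes create <em>new</em> namespaces instead of inheriting them, which is what gives a containerized process its own isolated view of the system.</p>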
<div style="text-align:center">
<p><img src="/images/lxc.png" alt="lxc" /></p>
</div>
<p>It used to be a very daunting task to manage those kernel features, so tools were created to abstract that complexity. <a href="https://linuxcontainers.org/">LXC</a> was the first I used and spent more time with. It wasn’t very user friendly, but it got the job done.<br />
<br />
Apparently, it was so cool that some folks created an abstraction layer over it to make it trivial to anyone. I first saw that abstraction, back in 2013, showcased on this <a href="https://www.youtube.com/watch?v=wW9CAH9nSLs">talk by Solomon Hykes</a>, an engineer working for a company called dotCloud, nowadays known as Docker.</p>
<div style="text-align:center">
<p><img src="/images/docker_2018.png" alt="docker" /></p>
</div>
<p>And the rest is history. Eventually Docker dropped the need for LXC; it now handles the kernel feature abstraction directly (libcontainer) and has an entire ecosystem for container management.</p>
<h2 id="but-it-still-looks-like-a-virtual-machine-to-me">But it still looks like a Virtual Machine to me</h2>
<p>I can understand why we compare containers to virtual machines. They “feel” the same, and that’s great.
But keep in mind virtual machines need their own kernel, init system, drivers, etc., and containers just use the host’s kernel to isolate processes (preferably, one process per container).<br /></p>
<blockquote>
<p>So, why are people shipping an entire kernel and system tooling inside a container, generating massive images with stuff that will never be used?<br /></p>
</blockquote>
<p>The container runtime provides the basic filesystem and kernel features for your application to run, which means you can focus on your application and benefit from the advantages of a minimal container.<br />
<br />
I’ve prepared a few examples to help materialize these concepts.</p>
<h2 id="meet-busybox">Meet busybox</h2>
<div style="text-align:center">
<p><img src="/images/busybox.png" alt="busybox" /></p>
</div>
<p><a href="https://busybox.net/about.html">Busybox</a> is a very handy binary. It performs several functions depending on how it’s called. We’ll use it as our example application:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">mkdir</span> <span class="nt">-p</span> /tmp/container/bin
<span class="nb">cd</span> /tmp/container/bin
curl <span class="nt">-LO</span> https://busybox.net/downloads/binaries/1.27.1-i686/busybox
<span class="nb">chmod</span> +x ./busybox
<span class="c"># If you want all the things</span>
<span class="c"># for T in $(./busybox --list); do ln -s busybox "$T"; done</span>
<span class="nb">ln</span> <span class="nt">-s</span> busybox <span class="nb">ls
ln</span> <span class="nt">-s</span> busybox <span class="nb">sleep
cd</span> /tmp/container
<span class="nb">tar</span> <span class="nt">-cvf</span> /tmp/container.tar .</code></pre></figure>
<p>And now that we have a shiny new tar file (container image) with a binary, a couple of symlinks, and no kernel or extra junk, it’s time to import it:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># myapp is just a tag</span>
docker import /tmp/container.tar myapp</code></pre></figure>
<p>At this time, things are starting to get interesting. Let’s try running our <code class="language-plaintext highlighter-rouge">myapp</code> container and do a simple <code class="language-plaintext highlighter-rouge">ls</code>:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">docker run <span class="nt">-ti</span> myapp /bin/ls <span class="nt">-lah</span>
total 16
drwxr-xr-x 1 0 0 4.0K Dec 28 15:41 <span class="nb">.</span>
drwxr-xr-x 1 0 0 4.0K Dec 28 15:41 ..
<span class="nt">-rwxr-xr-x</span> 1 0 0 0 Dec 28 15:41 .dockerenv
drwxr-xr-x 2 501 0 4.0K Dec 28 15:39 bin
drwxr-xr-x 5 0 0 360 Dec 28 15:41 dev
drwxr-xr-x 2 0 0 4.0K Dec 28 15:41 etc
dr-xr-xr-x 133 0 0 0 Dec 28 15:41 proc
dr-xr-xr-x 13 0 0 0 Dec 28 15:41 sys</code></pre></figure>
<p>Where did all that stuff come from? Shouldn’t it only have <code class="language-plaintext highlighter-rouge">/bin</code>?<br />
<br />
There are differences between a container image and that same image during runtime. The <a href="https://github.com/opencontainers/runc/blob/master/libcontainer/SPEC.md">Open Container Initiative (OCI) libcontainer spec</a> explains it quite nicely.</p>
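<p>To make that distinction concrete, here’s a minimal sketch using nothing but tar (the paths under /tmp are arbitrary): the image contains only what you archive, while /proc, /sys, /dev and .dockerenv are injected by the runtime at container start.</p>

```shell
# Build a tiny "image": just a tar of a root filesystem.
mkdir -p /tmp/demo-rootfs/bin
printf 'hello' > /tmp/demo-rootfs/bin/greeting
tar -C /tmp/demo-rootfs -cf /tmp/demo-image.tar .
# The archive holds only what we put in it; no /proc, /sys or /dev:
tar -tf /tmp/demo-image.tar
```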
<h2 id="thats-sourcery-i-want-my-dockerfile-back">That’s sourcery, I want my Dockerfile back</h2>
<p>Sure, whatever floats your boat.<br />
You probably heard about the <code class="language-plaintext highlighter-rouge">scratch</code> container. Let’s build our own and call it <code class="language-plaintext highlighter-rouge">zero</code>:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd</span> /tmp/container
<span class="c"># Create and import an empty container image, just like scratch</span>
<span class="nb">touch </span>zero.tar
docker import zero.tar zero
<span class="c"># Generate a Dockerfile for our app</span>
<span class="nb">cat</span> <span class="o"><<</span><span class="no">EOF</span><span class="sh">> Dockerfile
FROM zero
COPY bin/ /bin/
</span><span class="no">EOF
</span><span class="c"># Build and run our app</span>
docker build <span class="nb">.</span> <span class="nt">-t</span> myapp-zero
docker run <span class="nt">-ti</span> myapp-zero /bin/ls <span class="nt">-lah</span></code></pre></figure>
<h3 id="spoiler-alert">Spoiler alert</h3>
<p>If we’re on the same page, you’re probably realizing that all this fuss around containers should instead be around tar files, right?</p>
<p>🤔</p>
<h2 id="i-demand-a-shell">I demand a shell</h2>
<p>One could argue that a shell is mandatory for debugging. Obviously strace has to be present, but what if I need to copy files to/from the container? Maybe use an SSH daemon?<br />
<br />
Well, let me put this crystal clear: <b>You don’t!</b><br />
<br />
Since namespaces are one of the underlying building blocks of containers, you can use <a href="http://man7.org/linux/man-pages/man1/nsenter.1.html">nsenter(1)</a> to run programs inside the namespaces of other processes.<br />
<br />
If that’s so, why don’t we use the same PID/NET namespace between containers, effectively sharing those resources?<br />
<br />
For instance, you could build a toolkit container with all the tools one could ever need and attach it to a container that doesn’t even have a shell.<br />
<br />
I, for one, <a href="https://github.com/kintoandar/dockerfiles#toolkit">did exactly that</a>. And we’ll be using it in this example:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># Instantiate myapp using sleep to keep it up</span>
docker run <span class="nt">-tid</span> myapp /bin/sleep 600
<span class="c"># Get the hash of the running myapp</span>
<span class="nv">CONTAINER_HASH</span><span class="o">=</span><span class="si">$(</span>docker ps | <span class="nb">grep </span>myapp | <span class="nb">grep </span>Up | <span class="nb">awk</span> <span class="s1">'{print $1}'</span><span class="si">)</span>
<span class="c"># Copy something to myapp from the host</span>
<span class="nb">touch </span>SOMETHING
docker <span class="nb">cp </span>SOMETHING <span class="nv">$CONTAINER_HASH</span>:/bin
<span class="c"># Attach a toolkit container to myapp</span>
docker run <span class="nt">-it</span> <span class="se">\</span>
<span class="nt">--pid</span><span class="o">=</span>container:<span class="nv">$CONTAINER_HASH</span> <span class="se">\</span>
<span class="nt">--net</span><span class="o">=</span>container:<span class="nv">$CONTAINER_HASH</span> <span class="se">\</span>
<span class="nt">--cap-add</span> sys_admin <span class="se">\</span>
kintoandar/toolkit</code></pre></figure>
<p>Now we’re on a bash shell in the <code class="language-plaintext highlighter-rouge">toolkit</code> container attached to the running <code class="language-plaintext highlighter-rouge">myapp</code>. Let’s look around.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@d10ba9eb50c7:/# ps aux
PID USER TIME COMMAND
1 root 0:00 /bin/sleep 600
6 root 0:00 bash
15 root 0:00 ps aux</code></pre></figure>
<p>We can see the <code class="language-plaintext highlighter-rouge">sleep</code> process is running as PID 1, but where’s the <code class="language-plaintext highlighter-rouge">myapp</code> filesystem?</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@d10ba9eb50c7:/# <span class="nb">ls</span> <span class="nt">-lah</span> /proc/1/root/bin/
total 908K
drwxr-xr-x 1 501 root 4.0K Dec 29 19:13 <span class="nb">.</span>
drwxr-xr-x 1 root root 4.0K Dec 29 19:13 ..
<span class="nt">-rw-r--r--</span> 1 501 root 0 Dec 29 19:13 SOMETHING
<span class="nt">-rwxr-xr-x</span> 1 501 root 900K Dec 29 18:43 busybox
lrwxrwxrwx 1 501 root 7 Dec 29 18:43 <span class="nb">ls</span> -> busybox
lrwxrwxrwx 1 501 root 7 Dec 29 18:43 <span class="nb">sleep</span> -> busybox</code></pre></figure>
<p>So, do you still think someone needs a shell and all those tools on the <code class="language-plaintext highlighter-rouge">myapp</code> container?<br />
<br />
I can argue that there’s someone who would if, for example, a remote code execution vulnerability were found in the application. In that case, a malicious someone would love to have a shell lying around and maybe some useful tools like curl/wget.<br />
<br />
With that said, let’s then strive to restrict the attack surface on our containers and, as a bonus, you’ll get:</p>
<ul>
<li>Less network bandwidth required to move container images around</li>
<li>Less storage requirements due to image size</li>
<li>Less IOPS needed due to image size</li>
<li>Less software means fewer vulnerabilities to scan, manage, patch, upgrade…</li>
<li>Faster build times</li>
<li>Faster ship times</li>
</ul>
<h2 id="dependency-hell">Dependency hell</h2>
<p>I get it, it’s hard to manage all the dependencies of a real application and completely detach it from the operating system where it was built, but rest assured, there are more people who feel the same and the community is here to help.<br />
<br />
These are some tools to make things less painful:</p>
<ul>
<li><a href="https://docs.docker.com/engine/userguide/eng-image/multistage-build/">docker multi-stage build</a></li>
<li><a href="http://dnf.readthedocs.io/en/latest/command_ref.html">dnf using installroot</a></li>
<li><a href="https://wiki.debian.org/Debootstrap">Debootstrap</a></li>
</ul>
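<p>As an illustration of the first item, a multi-stage build compiles inside a full-featured image and copies only the resulting artifact onto a minimal base. A hypothetical sketch for a Go binary (the image tag and paths are made up), written to a temporary file here just to show the shape:</p>

```shell
# Stage 1 builds the binary; stage 2 ships it on an empty base.
cat > /tmp/Dockerfile.multistage <<'EOF'
FROM golang:1.9 AS build
COPY . /src
RUN cd /src && CGO_ENABLED=0 go build -o /myapp

FROM scratch
COPY --from=build /myapp /myapp
ENTRYPOINT ["/myapp"]
EOF
grep -c '^FROM' /tmp/Dockerfile.multistage
```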
<p>If you want to have complete control of what’s inside your container and not depend on prebuilt packages (rpm, deb, etc.), just use <a href="https://buildroot.org/">buildroot</a>.</p>
<div style="text-align:center">
<p><img src="/images/buildroot.png" alt="buildroot" /></p>
</div>
<p>For more buzzwordy tools I recommend <a href="https://www.youtube.com/watch?v=5D_SqLv92V8">this talk</a> from Michael Ducy.</p>
<h2 id="thats-it-thats-all">That’s it, that’s all</h2>
<p>Well, not quite, this is just the beginning. There are a lot of standards/implementations evolving and being adopted (OCI, CNI, CRI, etc.).</p>
<div style="text-align:center">
<p><img src="/images/oci.png" alt="oci" /></p>
</div>
<p>All of them improve the ecosystem around containerization allowing everyone to step in and contribute.<br />
<br />
Containers are here to stay and understanding what makes them tick is no longer optional.</p>
<p>Joel Bastos · Deconstructing containers and examples to better understand the technology</p>
<h1>Baking delicious cloud instances</h1>
<p>2017-06-13 · https://blog.kintoandar.com/2017/06/Baking-delicious-cloud-instances</p>
<div style="text-align:center">
<p><img src="/images/cake.jpg" alt="cake" /></p>
</div>
<p>Nowadays, configuration management is taking a back seat on the cloud infrastructure ride.<br />
<br />
With immutable and phoenix servers rising in demand, we tend to see more and more shell scripts taking the spotlight on instance configuration, where Dockerfiles are the supreme example. However, this post is not about containers.</p>
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">periodic reminder. Immutable means can't change not don't change</p>— Gareth Rushgrove (@garethr) <a href="https://twitter.com/garethr/status/872440195714621440">June 7, 2017</a></blockquote>
<script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<p><br />
Let’s put immutable servers aside for a moment and focus on phoenix servers and how to deal with their configuration at boot time. I started thinking there must be a simpler, more efficient way to manage their configurations.</p>
<h2 id="amazon-web-services">Amazon Web Services</h2>
<div style="text-align:center">
<p><img src="/images/amazon_aws.png" alt="amazon_aws" /></p>
</div>
<p>During my career I’ve worked with bare metal servers, XEN and KVM virtualization, LXC containers and private cloud environments like VMware vCloud and Openstack. This year, I’ve been looking more closely into Amazon Web Services (AWS), so that’s what I’ll be addressing here.<br />
<br />
So far I’ve seen a couple of patterns:</p>
<ul>
<li>Bake an Amazon Machine Image (AMI) adding a complex shell script to take care of the final provisioning at boot time.</li>
<li>Ship the AMI with all the environment configurations and, at boot time, just load the correct one, depending on the environment where it’s spun up.</li>
</ul>
<p>No matter which one you choose, you’ll be left with the same problem: the need to build the AMI baking process and, afterwards, getting the final configuration to run at boot.<br />
<br />
One could argue that service discovery could address the latter, but you would still need to take care of interfacing it with your service. Most of the services out there don’t support service discovery… yet. So I won’t dwell into that right now.</p>
<h2 id="configuration-management">Configuration Management</h2>
<p>For the past few years I’ve been using Chef and Ansible in complex environments, authored several Chef cookbooks, recipes and providers as well as Ansible playbooks, roles and modules. As such, I have strong opinions regarding each one of these tools. In a nutshell:</p>
<div style="text-align:center">
<p><img src="/images/Chef.png" alt="chef" /></p>
</div>
<ul>
<li>Chef has a steep learning curve and it’s hard to get your head around it, but it allows easy management of complex scenarios by having ruby exposed alongside the domain specific language.</li>
</ul>
<div style="text-align:center">
<p><img src="/images/ansible.png" alt="ansible" /></p>
</div>
<ul>
<li>Ansible is dead simple to learn <strong>and teach</strong>, but quickly becomes problematic when complexity needs to be addressed. “Coding” YAML is a pain and Jinja templates are not a walk in the park either. On the modules’ side python helps quite a lot.
<br /><br /></li>
</ul>
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">OH: "Principal YAML Engineer"</p>— Joel (@kintoandar) <a href="https://twitter.com/kintoandar/status/873489931871694854">June 10, 2017</a></blockquote>
<script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<p><br />
Don’t get me wrong, I really appreciate these tools and most of the deployment orchestration I’ve helped build ended up using both, taking advantage of the strengths of each.<br />
<br />
Keep in mind that choosing a technology shouldn’t be taken lightly and you must consider the impact on the organization, so truly scrutinize what best suits your requirements. Chasing the next shiny tool or hype, the language you feel most comfortable with, or the agile framework <em>‘du jour’</em> is meaningless in comparison with the overall company growth you seek, or should aspire to.</p>
<blockquote>
<p>Regardless, nothing beats a shell script, right?</p>
</blockquote>
<p>If you’re the only one managing the system, sure, but if your goal is to provide the tooling for everyone to use, maybe not so much.<br />
<br />
Becoming someone like Brent (<a href="https://www.goodreads.com/book/show/17255186-the-phoenix-project">The Phoenix Project</a> IT ninja and all-around enterprise bottleneck) is not an option. In order to support the systems’ growth and therefore the business prosperity, we need to ensure technical knowledge is distributed and that everyone is able to modify, improve and use the tooling we build.</p>
<h2 id="infrastructure-management">Infrastructure Management</h2>
<div style="text-align:center">
<p><img src="/images/terraform.png" alt="terraform" /></p>
</div>
<p>Tools like Terraform try to fill the gap on infrastructure orchestration as a whole, but I wouldn’t call it configuration management per se, at least regarding the instances. Assuming your instances reside inside a private subnet, the interaction provided amounts to the <a href="https://www.terraform.io/docs/providers/aws/r/instance.html#user_data">user_data</a> configuration, injecting a <a href="https://ahmet.im/blog/cloud-instance-provisioning/#cloudinit">cloud-init</a> script.</p>
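<p>For reference, that user_data is usually just a small cloud-config document. A hypothetical minimal example (the command inside is made up), written to a file here purely for illustration:</p>

```shell
# Minimal cloud-config: the "#cloud-config" header line is mandatory.
cat > /tmp/user-data.conf <<'EOF'
#cloud-config
runcmd:
  - echo 'final provisioning runs here'
EOF
head -n 1 /tmp/user-data.conf
```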
<h2 id="cooking-up-a-plan">Cooking up a plan</h2>
<p>All this got me wondering… <em>What do I want out of this?</em></p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself">DRY</a> up the process</li>
<li>Anyone should understand and feel empowered to modify the baking scripts and the final provision ones</li>
<li>Improve code reusability</li>
<li>Environment parity</li>
</ul>
<p>When I got to this list, I started by figuring out how I could meet all of the above and ended up designing a workflow from scratch to achieve all requirements. The following sequence diagram illustrates the workflow:</p>
<div style="text-align:center">
<p><img src="/images/bakery_sequence.jpg" alt="bakery_sequence" /></p>
</div>
<p><strong>Disclaimer:</strong> Being totally honest, my preference would have been Chef instead of Ansible. But since I want more people involved and contributing, I’ve looked for a way to lower the bar on the required configuration management knowledge. As described above, I opted for Ansible, trying to keep things simple and straightforward to ease the onboarding of other collaborators.</p>
<h2 id="components">Components</h2>
<p>This workflow has a few core components, each one with a set of tasks and inherent responsibilities.</p>
<h3 id="orchestrator">Orchestrator</h3>
<p>Where all the tasks are triggered, so pick your poison:</p>
<ul>
<li>Jenkins</li>
<li>Travis</li>
<li>Thoughtworks Go</li>
<li>…</li>
</ul>
<h3 id="local-cache">Local Cache</h3>
<p>Where the project repository is checked out and all dependencies are stored. This path will be backed up and sent into the AMI at <code class="language-plaintext highlighter-rouge">/root/bakery</code> by <a href="https://github.com/kintoandar/bakery/">our Ansible playbook</a>.<br />
<br />
When the AMI is instantiated using our cloud-init file, the Ansible playbook will be run again, now locally, but since the <code class="language-plaintext highlighter-rouge">cloud_init</code> override will be enabled, a different flow will be performed.</p>
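<p>The override itself can be as small as a JSON file with the flag flipped. A sketch of what such a file might look like (written under /tmp here; the real one lands in /root/bakery, and the exact keys depend on the playbook):</p>

```shell
# The cloud-init flow drops a JSON override before re-running ansible.
mkdir -p /tmp/bakery
cat > /tmp/bakery/cloud-init.json <<'EOF'
{
  "cloud_init": true
}
EOF
cat /tmp/bakery/cloud-init.json
```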
<h3 id="ansible-galaxy">Ansible-galaxy</h3>
<p>If you’re familiar with the awesome <a href="https://docs.chef.io/berkshelf.html">Berkshelf</a> and <code class="language-plaintext highlighter-rouge">berks vendor</code>, ansible-galaxy serves a similar purpose: it resolves Ansible role dependencies and creates a local copy of them.<br />
<br />
You can even use private repos instead of publicly available roles, if that’s your thing. Here’s an example:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">~ <span class="nv">$ </span><span class="nb">cat </span>requirements.yml
<span class="nt">---</span>
- src: git@github.com:PrivateCompany/awesome-role.git
scm: git
version: v1.0.0
name: awesome-role</code></pre></figure>
<p>You can download all the required roles using:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">ansible-galaxy <span class="nb">install</span> <span class="nt">-r</span> ./requirements.yml <span class="nt">-p</span> ./roles</code></pre></figure>
<h3 id="ansible">Ansible</h3>
<p>Improves code reusability through roles. Makes configuration templating much easier than cryptic <code class="language-plaintext highlighter-rouge">sed</code> one-liners (even though I ❤️ regex).<br />
<br />
If the roles have concerns separated per task file, it’s possible to call the templating features separately from the installation ones. This truly helps split the code flow between baking and the final configuration at boot.</p>
<h3 id="packer">Packer</h3>
<div style="text-align:center">
<p><img src="/images/packer.png" alt="packer" /></p>
</div>
<p>More about it on a <a href="/2015/01/veewee-packer-kickstarting-vms-into-gear.html">previous post</a>.</p>
<h2 id="tasting-some-baked-goods">Tasting some baked goods</h2>
<p>With this approach, the code used for the AMI’s bake is the same as the one used on the final provision during boot up. The only difference is the code flow on the playbook between the two steps.<br />
<br />
I’ve built a <a href="https://github.com/kintoandar/bakery">small example</a> to demonstrate how everything comes together so you can try it out. Obviously, this is far, far away from production ready but you’ll get the gist and it’ll make the entire workflow much clearer.<br />
<br />
In this example we want to spin up an instance with <a href="https://prometheus.io/docs/introduction/overview/">Prometheus</a> using a separate data volume, useful for instance failures and service upgrades. The flow is something like this:</p>
<div style="text-align:center">
<p><img src="/images/bakery_flow.jpg" alt="bakery_flow" /></p>
</div>
<h3 id="requirements">Requirements</h3>
<p>This demo has the following requirements:</p>
<ul>
<li>AWS API access tokens</li>
<li>Packer >= 1.0.0</li>
<li>Ansible >= 2.3.0.0</li>
</ul>
<h3 id="mixing-the-ingredients">Mixing the ingredients</h3>
<p>For generating an AMI to use in this workflow you just need to run:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># Get the repo</span>
git clone https://github.com/kintoandar/bakery.git
<span class="nb">cd </span>bakery
<span class="c"># Download ansible roles dependencies</span>
ansible-galaxy <span class="nb">install</span> <span class="nt">-r</span> ./requirements.yml <span class="nt">-p</span> ./roles
<span class="c"># Use packer to build the AMI (using ansible)</span>
<span class="nb">export </span><span class="nv">AWS_SECRET_ACCESS_KEY</span><span class="o">=</span>XXXXXXXXXXXXXXXXXXXXXXXX
<span class="nb">export </span><span class="nv">AWS_ACCESS_KEY_ID</span><span class="o">=</span>XXXXXXXXXXXXXXXXXX
packer build ./packer.json</code></pre></figure>
<p>Now, to test the new AMI, just launch it with an extra EBS volume and use the provided <a href="https://raw.githubusercontent.com/kintoandar/bakery/master/cloud-init-example.conf">cloud-init configuration example</a> as the <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html">user-data</a>. After the boot, you can check the cloud-init log in the instance:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">less /var/log/cloud-init-output.log</code></pre></figure>
<p>Be sure that <code class="language-plaintext highlighter-rouge">/root/bakery/cloud-init.json</code> was created with all the required overrides for Ansible, especially <code class="language-plaintext highlighter-rouge">cloud_init=true</code>.<br /></p>
<h3 id="profit">Profit!</h3>
<p>You may destroy an instance and attach a new one to the separate data volume, and Ansible will take care of the logic of dealing with it.<br />
<br />
Using Terraform to manage those instances becomes easier: you only need to take care of templating the cloud-init file and attaching the volume to the instance. You may find <a href="https://raw.githubusercontent.com/kintoandar/bakery/master/cloud-init-example.conf">here</a> an example of a cloud-init configuration that you can use as a template.<br />
<br />
Here’s an example of a Terraform script:</p>
<script src="https://gist.github.com/583a209474e05bbc0cd6738bf909a9a1.js?file=terraform_ebs_volume_attach_example.tf"> </script>
<h2 id="pro-tips">Pro tips</h2>
<ul>
<li>You could pass Packer a json file with overrides for Ansible to use during the bake process, taking advantage of <a href="https://www.packer.io/docs/provisioners/ansible.html#extra_arguments">--extra-vars</a>.<br />
<br /></li>
<li>Terraform <a href="https://www.terraform.io/docs/providers/aws/r/ebs_volume.html">aws_ebs_volume</a> allows using a snapshot as the base for the data volume, which is quite useful in case of a major problem that forces you to rollback the data volume to a consistent state.</li>
</ul>
<h2 id="ready-to-be-served">Ready to be served</h2>
<p>I wouldn’t say this is the best workflow ever, but it does satisfy all the requirements I had. I’ll keep you posted on how it goes.<br />
<br />
Happy baking!<br />
<br />
🍰</p>
<p>Joel Bastos · A workflow for cloud instances baking and provisioning using Ansible and Terraform</p>
<h1>Survival guide to tech conferences</h1>
<p>2017-01-17 · https://blog.kintoandar.com/2017/01/Survival-guide-to-tech-conferences</p>
<div style="text-align:center">
<p><img src="/images/laptop.jpg" alt="laptop" /></p>
</div>
<p>Everyone who is fortunate enough to attend tech conferences has different perspectives, and expects different outcomes and benefits from these types of events. I commonly see three types of participants; there might be more, or even combinations of these:</p>
<ul>
<li>There are the ones who just want to watch the talks, even if it’s on a TV outside the room where the presentation is happening;</li>
<li>Others use conferences as an excuse to party hard with teammates and don’t even stop by the venue;</li>
<li>Lastly, the ones who aim to meet others with similar interests and grab the opportunity to discuss problems/solutions with different experts.</li>
</ul>
<p>If you are the type of participant who is exclusively interested in the talks themselves, you should know that nowadays most of the biggest events publish at least the most renowned presentations online, usually with better quality than the live ones, and with a richer viewing experience if you’re unfortunate enough to get a bad seat at the venue. Taking this into consideration will save you time and money: you no longer have to take care of trip and accommodation details, and it gives you the option to speed up playback and maximize the intake of information.<br />
<br />
If you just want to have fun or get to know a different city using a conference as an excuse, there’s not much I can tell you, except that you should be calling it a team activity or a vacation instead of training.</p>
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">High Mojito Availability<br /><br />Cause sometimes all you need is some downtime...<a href="https://twitter.com/hashtag/oreilly?src=hash">#oreilly</a> <a href="https://twitter.com/hashtag/sre?src=hash">#sre</a> <a href="https://twitter.com/hashtag/devops?src=hash">#devops</a> <a href="https://twitter.com/hashtag/vacation?src=hash">#vacation</a> <a href="https://t.co/otrtbfyhtg">pic.twitter.com/otrtbfyhtg</a></p>— Joel (@kintoandar) <a href="https://twitter.com/kintoandar/status/740072341984677888">June 7, 2016</a></blockquote>
<script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<p><br />
On the contrary, if your interest is to appreciate conferences as a networking opportunity, a chance to get answers from others who went through similar problems, or to meet the rockstars in your field, then you stumbled upon the right post. Here, I hope to crystallize <em>my</em> current views on this subject and give you some guidelines on how to prepare yourself and find your way around conferences.</p>
<h2 id="step-1---choosing-the-conferences">Step 1 - Choosing the conference(s)</h2>
<p>From my own experience, what I value the most are open source, operations, culture and community oriented conferences. As such, I usually end up hearing about them through friends, Twitter, my personally curated RSS feeds or searching on <a href="https://lanyrd.com/conferences/">Lanyrd</a>. Try to find the keywords that define your interests, they’ll be useful when searching or even tweeting. Some of mine are:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">devops</code> (not as in job title)</li>
<li><code class="language-plaintext highlighter-rouge">SRE</code></li>
<li><code class="language-plaintext highlighter-rouge">opensource</code></li>
<li><code class="language-plaintext highlighter-rouge">culture</code></li>
</ul>
<p>When choosing where to spend your time and money, you should try to find out who will be speaking, if there are any presentations that you are eager to watch, if there are any companies attending that you want to get acquainted with, and even what type of audience the event has. If we’re talking about a conference that you’ve never heard of, you should also try to do a background check (see previous editions, confirm the feedback on social media, that sort of thing).</p>
<h2 id="step-2---how-to-get-there">Step 2 - How to get there</h2>
<p>There are various options on how to attend conferences, like being a speaker, a participant, a sponsor, a volunteer, or even an organizer. Since my experience is mostly in the first two categories, I will focus only on them. It’s not always easy to be a speaker at an event (and I will explain more later on) but, if you want to get out there, you can start by applying to lightning talks or submitting papers to the events you’re most fond of.<br />
<br />
Then there’s the question of financing your participation. Being a speaker might help, since some conferences cover speakers’ expenses. Paying out of your own pocket is always an option and might make you value the experience even more. Alternatively, your company may have a budget set aside for training, covering online courses, certifications and/or conferences, among others. I’ve never seen much value in certifications (that could be an entire post altogether), but I do see numerous benefits in attending conferences.</p>
<h2 id="step-3---call-for-papers">Step 3 - Call for papers</h2>
<p>Not everyone is comfortable giving a speech to hundreds. Heck, I bet most people aren’t. Being completely honest, public speaking still makes me physically ill, but that doesn’t stop me from stepping outside my comfort zone and enduring the pain, either for a very good reason or to prove I can do it (and no, I don’t think it becomes easier with practice). Still, there are some pretty good benefits to talking to a broader audience. Here are some highlights from my personal experience:</p>
<ul>
<li>Back in 2012 I became an Ansible supporter and, as such, gave several talks and workshops about the tool. This helped me introduce the technology at two different companies and become more knowledgeable about it. And guess what? Both companies are now heavily using it and even contributing to the project.
<blockquote class="twitter-tweet" data-conversation="none" data-lang="en"><p lang="en" dir="ltr"><a href="https://twitter.com/darkflib">@darkflib</a> I've never heard of <a href="https://twitter.com/hashtag/ansible?src=hash">#ansible</a> but if it's from <a href="https://twitter.com/hashtag/cobbler?src=hash">#cobbler</a> creator, it's worth a try. Thanks for the tip ;)</p>— Joel (@kintoandar) <a href="https://twitter.com/kintoandar/status/190527811897933825">April 12, 2012</a></blockquote>
<script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<p><br /></p>
</li>
<li>During a Config Management Camp, the Chef community, more precisely <a href="https://twitter.com/mfdii">Michael Ducy</a> and <a href="https://twitter.com/nathenharvey">Nathen Harvey</a>, invited its members to give lightning talks about anything Chef related. I jumped onto the stage and, even with my voice trembling, shared some key lessons from my professional experience, proving to myself I was capable of it. I felt terrified and exhilarated, but it ended up being unforgettable for the best of reasons - I was giving something back to the community, and that feeling is priceless.
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Achievement Unlock: Give a micro lightning talk to the <a href="https://twitter.com/hashtag/chef?src=hash">#chef</a> community on <a href="https://twitter.com/hashtag/cfgmgmtcamp?src=hash">#cfgmgmtcamp</a>! Thanks <a href="https://twitter.com/nathenharvey">@nathenharvey</a></p>— Joel (@kintoandar) <a href="https://twitter.com/kintoandar/status/430732856462430208">February 4, 2014</a></blockquote>
<script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<blockquote class="twitter-tweet" data-conversation="none" data-lang="en"><p lang="en" dir="ltr"><a href="https://twitter.com/kintoandar">@kintoandar</a> nice job</p>— Michael Ducy (@mfdii) <a href="https://twitter.com/mfdii/status/430733452640784384">February 4, 2014</a></blockquote>
<script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<blockquote class="twitter-tweet" data-conversation="none" data-lang="en"><p lang="en" dir="ltr"><a href="https://twitter.com/kintoandar">@kintoandar</a> you rocked it!</p>— nathenharvey (@nathenharvey) <a href="https://twitter.com/nathenharvey/status/430733727854231554">February 4, 2014</a></blockquote>
<script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<p><br /></p>
</li>
<li>At one time, I was keen on treating the operating system as an artifact of the delivery pipeline, just like any other application running on it, providing a consistent, versioned and automated way of creating machines. I took the opportunity of another presentation I was giving to introduce this subject as well, and it was eventually implemented and embraced by my colleagues across the company.</li>
<li>At one point I got tired of recurrently dealing with Public Key Infrastructure (PKI) issues, so I prepared a presentation for my colleagues about the fundamentals of PKI. I’ve given this talk three times so far (once in Portugal, once in Romania, and once at a security event) and the outcome couldn’t be better - it helped reduce the toil of dealing with PKI related issues, which showed me that sometimes you really have to put yourself out there.</li>
</ul>
<div style="text-align:center">
<p><img src="/images/sec_poster.gif" alt="poster" /></p>
</div>
<p>Had I not made these and other presentations, I probably wouldn’t have had the opportunity to spread my views on these subjects, to give something extra back to the community and to reach others that I otherwise wouldn’t.<br />
<br />
Another outcome of being a speaker is the questions you’ll receive, which will open even more opportunities to exchange perspectives, to be challenged on your assumptions and to ignite meaningful discussions. I just love talking about technology; it truly makes me happy, so getting interesting questions/problems makes my day! All of this will help you be a better professional and a better person.</p>
<h2 id="step-4---the-corridor-approach">Step 4 - The corridor approach</h2>
<p>When attending an event, really appreciate the corridors. You can always watch the stage talks later on video in the comfort of your living room (if the organization provides recordings), but you rarely get the same opportunity to be with the people around you.<br />
<br />
Almost everybody likes to meet new people, but sometimes it can be frightening, depending on who you’re talking to, or even on your own personality. In either case, it’s once again important to get out of your comfort zone and, for me, the key element to have in mind when approaching others is <em>empathy</em>.<br />
<br />
You should always do your homework so you know who is present at the event and whom you want to meet. It’s also important to have a sense of when you should introduce yourself and how you can engage in conversation and make a good first impression.</p>
<h3 id="do-your-homework-but-dont-be-a-stalker">Do your homework (but don’t be a stalker)</h3>
<p>Really study the conference schedule. Try to find out more about the speakers, especially if there are multiple tracks simultaneously. Who and what captures your attention? Usually, most conferences help you by providing some insightful information on the speaker and an abstract of the talk. Do your best to choose wisely.</p>
<h3 id="introducing-yourself">Introducing yourself</h3>
<p>Understand who might be at the conference. Depending on the type of conference and even who the supporting/sponsoring entities are, you can easily get an idea of who might be present. If someone interesting is (or might be) attending and you have specific questions for them, you can try to schedule a quick chat beforehand. This also helps eliminate most of the awkwardness of an incognito corridor approach.</p>
<blockquote class="twitter-tweet" data-conversation="none" data-lang="en"><p lang="en" dir="ltr"><a href="https://twitter.com/niallm">@niallm</a> so, apparently will be attending your presentation at <a href="https://twitter.com/hashtag/SREcon16Europe?src=hash">#SREcon16Europe</a> :D<br /><br />Any chance to talk a bit offline during the week?</p>— Joel (@kintoandar) <a href="https://twitter.com/kintoandar/status/751791895542628352">July 9, 2016</a></blockquote>
<script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<blockquote class="twitter-tweet" data-conversation="none" data-lang="en"><p lang="en" dir="ltr"><a href="https://twitter.com/kintoandar">@kintoandar</a> Certainly. We can take it ad-hoc or DM me for email</p>— Niall Murphy (@niallm) <a href="https://twitter.com/niallm/status/751806667533193216">July 9, 2016</a></blockquote>
<script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<p><br />
If you are on site and just spotted someone you’d like to meet, or if a speaker caught your attention and you want to continue the conversation, be aware that there might never be a perfect time for you to introduce yourself. So how do you choose the timing?<br />
<br />
Here is where empathy is first appreciated. Ask yourself if the person seems free and approachable. If the person seems to be working or is having an interesting conversation with someone else, maybe it’s not a good time to go for it. Think of it as the working environment in an office: if a colleague is “in the zone” you wouldn’t interrupt them, would you?<br />
<br />
Now that you’ve come up with the perfect timing, are you ready to deal with the outcome? For all you know, this person could be a total jerk… But take my advice: I’ve yet to meet someone like that or find myself in an awkward situation.<br />
<br />
Social interaction takes its toll on your energy and, depending on whom you are approaching, they may also be tired. Be aware of their body posture and their mood, read the signs, know when you are disturbing them and, above all, be empathetic. If it’s a bad time, ask to chat another day - the hardest part is done anyway!</p>
<h3 id="social-interaction">Social Interaction</h3>
<p>When engaging in a conversation, be sure to have great questions to ask. At an event it gets tiresome to receive the same questions over and over again, which happens most often to the rockstars/speakers. Again, be empathetic and come up with questions that are interesting, help you make a great first impression and mark you as someone worth talking with.<br />
<br />
In my humble opinion, a good question is one that cannot be easily answered or simply researched: it requires experience to answer.<br />
<br />
Something I just love asking, if the person has the time for it, is “what’s your life story, how did you get where you currently are?”. Once I asked <a href="https://twitter.com/botchagalupe">John Willis</a> about it and he shared an awesome life story with me. Such a short question, such an engaging answer. I feel that this question is a great way to show that you are truly interested in getting to know the other person and make a connection.<br />
<br />
To illustrate what I mean, here are some of the most memorable questions I’ve had the opportunity to ask during conferences:</p>
<ul>
<li>I’ll never forget when I questioned <a href="https://twitter.com/jonlives">Jon Cowie</a> on what, in his perspective, defines a Senior Professional (title apart, obviously).</li>
<li>Or when I asked <a href="https://www.linkedin.com/in/kurta1">Kurt Andersen</a> how LinkedIn maintains coherence in software and procedures with multiple SRE teams instead of a single one.</li>
<li>Another time, I asked <a href="https://twitter.com/kelseyhightower">Kelsey Hightower</a> about his vision for the next 5 years in the tech world and whether all the fuss around unikernels was justified.
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Achievement Unlocked: Get a signed copy of "<a href="https://twitter.com/hashtag/Kubernetes?src=hash">#Kubernetes</a>: Up and Running" <a href="https://t.co/cq3Hwh21n3">pic.twitter.com/cq3Hwh21n3</a></p>— Joel (@kintoandar) <a href="https://twitter.com/kintoandar/status/660516032767852544">October 31, 2015</a></blockquote>
<script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<p><br /></p>
</li>
<li>I also had the opportunity to question <a href="https://twitter.com/garethr">Gareth Rushgrove</a> on how to go about testing the infrastructure that is always changing and evolving.</li>
<li>On one occasion, I was lucky enough to ask <a href="https://twitter.com/patrickdebois">Patrick Debois</a> about Disaster Recovery planning in complex infrastructures.
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Post-It extravaganza, sponsored by <a href="https://twitter.com/patrickdebois">@patrickdebois</a> <a href="https://t.co/iHrqRPn7p3">pic.twitter.com/iHrqRPn7p3</a></p>— Joel (@kintoandar) <a href="https://twitter.com/kintoandar/status/480359971960672257">June 21, 2014</a></blockquote>
<script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<p><br /></p>
</li>
<li>Once I was talking with <a href="https://twitter.com/niallm">Niall Murphy</a> about management issues, particularly how a team changes from a project-oriented mentality to a task-oriented one, with all the loss of visibility, responsibility and ownership that ensues (and no, I’m not a manager). Not only did he give me his vision on the subject, he also pointed me to <a href="https://twitter.com/lizthegrey">Liz Fong-Jones</a>. I was blown away! How had I not heard of her sooner?
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">If there's a talk about management you should watch, this is it!<a href="https://t.co/QjSTeBB28a">https://t.co/QjSTeBB28a</a> <a href="https://t.co/VAz8nU8CdE">https://t.co/VAz8nU8CdE</a></p>— Joel (@kintoandar) <a href="https://twitter.com/kintoandar/status/780704828435693568">September 27, 2016</a></blockquote>
<script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<p><br /></p>
</li>
<li>So, during a lunch break, I politely asked if I could join her. Liz welcomed me to the table and I asked her what the true benefits of having a manager were. Her answer, together with Niall’s, helped me make a very tough decision.</li>
</ul>
<p>I truly value these opportunities and, as I do see value in good questions and engaging in great conversations, I always hope the person in question will also feel the same.</p>
<h3 id="getting-around">Getting around</h3>
<p>While getting around, you are also likely to be approached by others. And I believe you should make an effort to show you are approachable. People don’t like leaving their comfort zone, I sure don’t, but if you stay in a closed group, others won’t try to interact with you.<br />
<br />
Use the sponsors’ booths to learn more about their companies/products and dig deep into the themes you’re interested in. I’ve had very enlightening conversations about infrastructure stacks, automation techniques and procedures at booths that weren’t even related to those kinds of topics. At one point, I had a noteworthy discussion with a Google recruiter at their booth about how recruiters are rated in performance reviews when there’s a conflict of interest: their goal is to increase candidate applications, while the teams are concerned with the quality of new hires and with not wasting time on meaningless recruitment attempts.<br />
<br />
Even though I don’t consider myself an extrovert, at conferences I always try to be hyper-social in order to guarantee that I meet exactly whom I want to and others alike, not giving a chance to second thoughts or hesitations.</p>
<h2 id="step-5---timeline-of-events">Step 5 - Timeline of events</h2>
<p>As part of the process of attending a conference, we are usually asked to write post-event reports on what we’ve seen. We’ve all been there, struggling to remember every talk, or even the highlights of a speech. The tool I find to be the best note-taking app for tech conferences is Twitter. Not only does it help keep track of the most relevant talks and moments (with photos and comments added), which you can later use to write the reports, it turns out to have several other perks, such as:</p>
<ul>
<li>By using the conference hashtag, you help the event gather traction</li>
<li>By referring a speaker, you help him/her reach a broader audience</li>
<li>By maintaining a timeline of a topic, you also allow the discussion to go beyond the presentation itself</li>
<li>You deliver information on what’s going on to the community</li>
<li>You show what the highlights are (in your opinion, obviously)</li>
<li>You delineate your twitter profile and make new friends in the process</li>
</ul>
<p>I’ve also found that this approach helped me meet other participants on site with similar interests, eager to further discuss those topics in person. For example:</p>
<ul>
<li>It pushed me to get in touch with <a href="https://twitter.com/flameeyes">Diego Pettenò</a> and get a better understanding of on-call in a Google SRE team.</li>
<li><a href="https://twitter.com/superdealloc">André Medeiros</a> spotted one of my tweets and promptly invited me to Shopify’s booth, where he was presenting, and we had an insightful conversation about the company.</li>
<li>Or that time when <a href="https://twitter.com/acaciocruz">Acácio Cruz</a>, former Google SRE Director, invited us, along with a group of Portuguese from Zalando, for some pints at the bar.</li>
</ul>
<p>These kinds of casual encounters are prone to happen when using the conference hashtag.<br />
<br />
If you’re not familiar with Twitter, start by following the conference you are attending and observe the dynamic. Most tech events nowadays have a tweet wall, and when you use their hashtag your post instantly shows up on it, which also earns you more visibility at the event.<br />
<br />
I started this method of keeping track of things almost by accident, but now that I understand the benefits it brings, I recommend you give it a try too.<br />
<br />
Here are some of my previous timelines if you want to take a look:</p>
<ul>
<li><a href="https://twitter.com/search?q=from%3Akintoandar%20OR%20to%3Akintoandar%20OR%20%40kintoandar%20-morfeox%20-jorgemsrs%20-balhau%20AND%20since%3A2016-07-09%20until%3A2016-07-14&src=typd">SREcon Europe</a></li>
<li><a href="https://twitter.com/search?q=from%3Akintoandar%20OR%20to%3Akintoandar%20OR%20%40kintoandar%20-morfeox%20-jamlvs%20-balhau%20-denisjackman%20AND%20since%3A2015-10-26%20until%3A2015-11-01&src=typd">Velocity Conference</a></li>
<li><a href="https://twitter.com/search?q=from%3Akintoandar%20OR%20to%3Akintoandar%20OR%20%40kintoandar%20-nocturnvs%20-simps0n%20AND%20since%3A2014-06-19%20until%3A2014-06-22&src=typd">Devopsdays</a></li>
<li><a href="https://twitter.com/search?q=from%3Akintoandar%20OR%20to%3Akintoandar%20OR%20%40kintoandar%20-morfeox%20-nosuchuser%20-rfgamaral%20AND%20since%3A2014-02-01%20until%3A2014-02-05&src=typd">Config Management Camp</a></li>
</ul>
<h2 id="disclaimer">Disclaimer</h2>
<p>All opinions are exclusively my own. I wouldn’t say this approach is perfect, but it has worked for me for a very long time, not only at conferences but at meetups alike.<br />
<br />
Obviously, if you see me around please feel free to say hi and bring in questions if you have them!</p>
<h2 id="to-all-that-made-this-post-possible">To all that made this post possible</h2>
<p>Thank you all for having a minute to spare with someone that is curious by nature and relentless in finding answers; I deeply appreciate your time.</p>Joel BastosStep by step guidelines on how to prepare and improve your experience at tech conferencesfwd - The little forwarder that could2016-08-30T00:00:00+01:002016-08-30T00:00:00+01:00https://blog.kintoandar.com/2016/08/fwd-the-little-forwarder-that-could<p>🚂</p>
<h2 id="about">About</h2>
<p><a href="https://github.com/kintoandar/fwd">fwd</a> is a network port forwarder written in golang.<br />
It’s cross platform, supports multiple architectures and it’s dead simple to use.<br />
<br />
In this post I’ll talk about this tool and my approach to the process of automating the build of an application in golang.<br /></p>
<h2 id="motivation">Motivation</h2>
<p>I’ve increasingly been hearing great things about the Go language and thought it might be worth digging into.
As usual, there’s nothing better than creating a project with the right motivation to learn the language while maintaining a high level of interest.<br /></p>
<div style="text-align:center">
<p><img src="/images/github.png" alt="github_logo" /></p>
</div>
<p>Thus I have some projects on my github that were conceived purely to get my interest going on a particular technology, some of them developed with no clear issue to overcome but with lots of know-how to squeeze out. However, from time to time I get the feeling I might be solving a real problem, not only mine but the community’s as well, and <a href="https://github.com/kintoandar/git-hooks">git-hooks</a> is one of those projects.<br />
<br />
As <code class="language-plaintext highlighter-rouge">fwd</code> currently fills a gap in my <em>tools-of-the-trade-belt</em>, I suppose this project might be useful to other users too.<br />
<br />
I won’t go into the technical implementation details but if you want to dig deeper into TCP/IP and writing network apps I recommend <a href="http://www.cubrid.org/blog/dev-platform/understanding-tcp-ip-network-stack/">this post</a>.</p>
<h2 id="scenarios">Scenarios</h2>
<p>Two scenarios to keep in mind.</p>
<blockquote>
<p>Every now and then, I get a friend asking for advice or help on a project/configuration problem on their infrastructure, and not everyone is running a public-facing <code class="language-plaintext highlighter-rouge">OpenSSH-Server</code> that I can connect through. There are some tools that can help you with that, like the awesome <code class="language-plaintext highlighter-rouge">ngrok</code>, but what happens if the server you want to connect to is not local to your friend’s box, like a home router? For the purpose of this scenario let’s try to avoid full remote desktop apps (just too messy imho).</p>
</blockquote>
<blockquote>
<p>You have an app server running on a non-privileged port and, for the sake of testing, you need to access it through a standard HTTP/S port without restarting the service. Once again you might use <code class="language-plaintext highlighter-rouge">nginx</code>, <code class="language-plaintext highlighter-rouge">xinetd</code> or some other trick to get that setup, but it’s too much of a hassle just for a quick test.</p>
</blockquote>
<h2 id="requirements">Requirements</h2>
<p>I decided to use golang to build a tool to solve these scenarios, having <code class="language-plaintext highlighter-rouge">Keep It Simple</code>™ as my motto.<br />
These were the requirements for the application:</p>
<ul>
<li>Rich command line experience</li>
<li>Be able to pass arguments as environment variables (calling the app name should suffice to run it)</li>
<li>Get acquainted with golang’s <em>Package net</em> (which is awesome by the way)</li>
<li>Get a feel for <em>goroutines</em> and <em>channels</em></li>
<li>Automatically build and distribute binaries for most platforms/architectures</li>
</ul>
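<p>The second requirement can be sketched with nothing but the standard library. Below is a hypothetical illustration of the env-var-as-argument pattern, not <code class="language-plaintext highlighter-rouge">fwd</code>’s actual code (which builds its CLI with urfave/cli); the <code class="language-plaintext highlighter-rouge">FWD_FROM</code>/<code class="language-plaintext highlighter-rouge">FWD_TO</code> variable names are invented for the example:</p>

<figure class="highlight"><pre><code class="language-go" data-lang="go">package main

import (
	"flag"
	"fmt"
	"os"
)

// envOr returns the value of the environment variable key if it is set,
// otherwise fallback. Wiring it into the flag defaults means a call like
// `FWD_FROM=:8000 FWD_TO=host:80 fwd` works with no arguments at all.
func envOr(key, fallback string) string {
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	return fallback
}

func main() {
	from := flag.String("from", envOr("FWD_FROM", ":8000"), "local address to listen on")
	to := flag.String("to", envOr("FWD_TO", "127.0.0.1:80"), "remote address to forward to")
	flag.Parse()
	fmt.Printf("would forward %s -&gt; %s\n", *from, *to)
}</code></pre></figure>

<p>Command-line flags still win when both are given, since <code class="language-plaintext highlighter-rouge">flag.Parse</code> overrides the env-derived defaults.</p>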
<h2 id="use-cases">Use Cases</h2>
<p>So what is <code class="language-plaintext highlighter-rouge">fwd</code> useful for? Here are a couple of use cases which demonstrate the application features.</p>
<h3 id="simple-forwarder">Simple Forwarder</h3>
<p>Forwarding a local port to a remote port on a different network:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"> +-----+ +-----+
192.168.1.99:8000 | | 172.28.128.3:80 | |
curl +-----------------> | fwd | +-------------------> | web |
| | 172.28.128.1 | |
+-----+ +-----+</code></pre></figure>
<p><img src="https://docs.google.com/uc?id=0B-SEc73VBiUwN0RheHVYQ3RlbW8" alt="demo" /></p>
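<p>The core of such a forwarder boils down to the <em>net</em> package plus a pair of goroutines per connection. Here is a minimal, self-contained sketch of the idea (my own illustration, not <code class="language-plaintext highlighter-rouge">fwd</code>’s actual code) that wires a client through a forwarder to a toy echo server, all in one process:</p>

<figure class="highlight"><pre><code class="language-go" data-lang="go">package main

import (
	"bufio"
	"fmt"
	"io"
	"log"
	"net"
)

// forward splices two TCP connections together, copying bytes in both
// directions until either side closes.
func forward(client, upstream net.Conn) {
	defer client.Close()
	defer upstream.Close()
	done := make(chan struct{}, 2)
	go func() { io.Copy(upstream, client); done &lt;- struct{}{} }()
	go func() { io.Copy(client, upstream); done &lt;- struct{}{} }()
	&lt;-done // first direction to finish tears down both connections
}

// serve accepts connections on ln and forwards each one to target.
func serve(ln net.Listener, target string) {
	for {
		c, err := ln.Accept()
		if err != nil {
			return
		}
		up, err := net.Dial("tcp", target)
		if err != nil {
			c.Close()
			continue
		}
		go forward(c, up)
	}
}

func main() {
	// Stand-in upstream: a toy echo server on an ephemeral port.
	echo, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		for {
			c, err := echo.Accept()
			if err != nil {
				return
			}
			go io.Copy(c, c) // echo everything back
		}
	}()

	// The forwarder: listen locally, relay to the echo server.
	local, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	go serve(local, echo.Addr().String())

	// Client: talk to the echo server *through* the forwarder.
	conn, err := net.Dial("tcp", local.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Fprintln(conn, "hello through the forwarder")
	reply, _ := bufio.NewReader(conn).ReadString('\n')
	fmt.Print(reply)
}</code></pre></figure>

<p>Each accepted connection costs two goroutines and a channel: the first <code class="language-plaintext highlighter-rouge">io.Copy</code> to return signals that one side hung up, and closing both connections unblocks the other copy.</p>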
<h3 id="fwd-️-ngrok">fwd ♥️ ngrok</h3>
<p>I must admit <code class="language-plaintext highlighter-rouge">ngrok</code> was a huge inspiration for <code class="language-plaintext highlighter-rouge">fwd</code>. If you don’t know the tool you should definitely check out <a href="https://www.youtube.com/watch?v=F_xNOVY96Ng">this talk</a> from <a href="https://twitter.com/inconshreveable">@inconshreveable</a>.<br />
<br />
This tool combo (fwd + ngrok) allows some wicked mischief, like taking <a href="https://en.wikipedia.org/wiki/Hole_punching_(networking)">firewall hole punching</a> to another level! And the setup is trivial.<br />
<br />
<code class="language-plaintext highlighter-rouge">ngrok</code> lets you expose a local port on a public endpoint and <code class="language-plaintext highlighter-rouge">fwd</code> lets you connect a local port to a remote endpoint. You can see where I’m heading with this… With both tools you can connect a public endpoint to a remote port, as long as you have access to it.<br />
<br />
Here’s how it works:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"> +-----+ +-----+
:9000 | | 172.28.128.3:22 | |
Internet +-------------> | fwd | +-------------------> | ssh |
tcp.ngrok.io:1234 | | 172.28.128.1 | |
+-----+ +-----+</code></pre></figure>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># get a public endpoint, ex: tcp.ngrok.io:1234</span>
ngrok tcp 9000
<span class="c"># forward connections on :9000 to 172.28.128.3:22</span>
fwd <span class="nt">--from</span> :9000 <span class="nt">--to</span> 172.28.128.3:22
<span class="c"># get a shell on 172.28.128.3 via a public endpoint</span>
ssh tcp.ngrok.io <span class="nt">-p</span> 1234</code></pre></figure>
<blockquote>
<p>With great power comes great responsibility.</p>
<p><cite>Ben Parker</cite></p>
</blockquote>
<h2 id="community-packages">Community Packages</h2>
<p>To make this project a reality I’ve used some great golang packages built by the community, standing on the shoulders of giants and all.<br />
<br />
If you don’t know them, you should probably check them out, they made my life much easier:</p>
<div style="text-align:center">
<p><img src="/images/golang.png" alt="golang_logo" /></p>
</div>
<ul>
<li><a href="https://github.com/urfave/cli">urfave/cli</a>: <em>“cli is a simple, fast, and fun package for building command line apps in Go. The goal is to enable developers to write fast and distributable command line applications in an expressive way.”</em></li>
<li><a href="https://github.com/fatih/color">fatih/color</a>: <em>“Color lets you use colorized outputs in terms of ANSI Escape Codes in Go (Golang). It has support for Windows too! The API can be used in several ways, pick one that suits you.”</em></li>
<li><a href="https://github.com/mitchellh/gox">mitchellh/gox</a>: <em>“Gox is a simple, no-frills tool for Go cross compilation that behaves a lot like standard go build. Gox will parallelize builds for multiple platforms. Gox will also build the cross-compilation toolchain for you.”</em></li>
</ul>
<h3 id="build-pipeline">Build Pipeline</h3>
<p>In order to take a project seriously, you need to have some automation on the build and distribution of your artifacts.<br />
<br />
Next I’ll show you how I’ve set up the build system for this application.</p>
<h3 id="travis">Travis</h3>
<p><em>“Travis CI is a hosted continuous integration and deployment system.”</em><br />
<br />
It is such a delight to use, it simply doesn’t get in your way, it’s trivial to set up and, above all, there’s a huge community backing it, so rest assured you’ll always find an answer if you ever hit an issue.<br /></p>
<div style="text-align:center">
<p><img src="/images/travis.png" alt="travis_logo" /></p>
</div>
<p>I wanted a build to run on every commit, deploy on tags and publish artifacts somewhere. Also, after a <code class="language-plaintext highlighter-rouge">git push</code>, no manual intervention should be required. Eventually I ended up with this configuration:</p>
<script src="https://gist.github.com/b4f1b55dee0d8880c697157b860af977.js?file=.travis.yml"> </script>
<p>The following keys are probably worth mentioning:</p>
<ul>
<li><strong>tip</strong>: Latest Go version</li>
<li><strong>file</strong>: Configuration file for bintray</li>
<li><strong>secure</strong>: Generated by <code class="language-plaintext highlighter-rouge">travis encrypt BINTRAY_API_KEY --add deploy.key</code></li>
<li><strong>tags</strong>: Run the deploy step on tags only</li>
</ul>
<p>You can find a successful build and deploy log <a href="https://travis-ci.org/kintoandar/fwd/builds/155752647">here</a>.</p>
<h3 id="bintray">Bintray</h3>
<p>I needed a place to store and distribute the build artifacts generated by <code class="language-plaintext highlighter-rouge">gox</code>. Since the upload had to be automated, bintray seemed like a good option.<br /></p>
<div style="text-align:center">
<p><img src="/images/bintray.png" alt="github_logo" /></p>
</div>
<p>The travis + bintray integration was very simple to configure and, even though I’m currently using the service as little more than a web server (with an upload API), it gets the job done.<br />
<br />
You can find at <a href="https://dl.bintray.com/kintoandar/fwd/">my bintray endpoint</a> all application versions, alongside with the hash of every binary.<br />
<br />
The configuration I’ve used can be found <a href="https://github.com/kintoandar/fwd/blob/master/.bintray.json">here</a>, nothing exotic about it I’m afraid, just tailored the provided template.</p>
<h3 id="bumpversion">Bumpversion</h3>
<p>I always follow semantic versioning on my projects and <code class="language-plaintext highlighter-rouge">fwd</code> is no exception.<br />
<br />
On my python applications I got used to <a href="https://github.com/peritus/bumpversion">bumpversion</a>; the funny thing is I never saw it as a language-independent tool, which it certainly is.<br />
<br />
My use case was pretty simple:</p>
<ol>
<li>Bump major, minor or patch version on several files</li>
<li>Create a new tag with the new version</li>
</ol>
<script src="https://gist.github.com/b4f1b55dee0d8880c697157b860af977.js?file=.bumpversion.cfg"> </script>
<p>The following keys are probably worth mentioning:</p>
<ul>
<li><strong>current_version</strong>: Keeping state of the current version</li>
<li><strong>commit</strong>: Commit when bumping a version</li>
<li><strong>tag</strong>: Create a tag when bumping a version</li>
<li><strong>bumpversion:file</strong>: Where to look for version numbers</li>
</ul>
<p>Simple usage example:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># validate config</span>
bumpversion patch <span class="nt">--dry-run</span> <span class="nt">--verbose</span>
<span class="c"># automatically bump version</span>
bumpversion patch</code></pre></figure>
<h2 id="thats-it-thats-all">That’s it, that’s all</h2>
<p><code class="language-plaintext highlighter-rouge">fwd</code> code is available at <a href="https://github.com/kintoandar/fwd">github</a>, so check it out (pun intended).<br />
<br />
Happy forwarding!</p>Joel BastosBuilding a network port forwarder in golangVault: PKI Made Easy2015-11-15T00:00:00+00:002015-11-15T00:00:00+00:00https://blog.kintoandar.com/2015/11/vault-PKI-made-easy<h2 id="disclamer">Disclaimer</h2>
<p>Well, not quite <strong>PKI Made Easy</strong>, but definitely a bit more <strong>fun</strong>!<br />
I’ve done all this work on OSX, but I believe the Linux setup should be very similar.<br />
<a href="https://hashicorp.com/blog/vault-0.3.html">Vault 0.3</a> was the version used.</p>
<h2 id="containerize-all-the-things">Containerize all the things</h2>
<p>Last week I was tinkering with Docker and wanted to get Hashicorp Vault running on a container, this was mainly a plan to trick myself into learning more about Vault.<br />
<br />
The Docker stuff went pretty well, and there’s a public container available to prove it. Check it out at:
<br /><br />
<a href="https://hub.docker.com/r/kintoandar/hashicorp-vault/">hashicorp-vault on a container</a>
<br /><br />
Regarding the plan, it worked flawlessly and I got really interested in the application.</p>
<h2 id="so-whats-vault">So, what’s Vault?</h2>
<div style="text-align:center">
<p><img src="/images/vault.png" alt="vault_logo" /></p>
</div>
<blockquote>
<p>Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing. Vault handles leasing, key revocation, key rolling, and auditing. Vault presents a unified API to access multiple backends: HSMs, AWS IAM, SQL databases, raw key/value, and more.
<a href="https://vaultproject.io/">(source)</a></p>
</blockquote>
<p>I’m not going into depth about how the application works and all the features it provides, firstly because I just started playing with it and secondly because the <a href="https://vaultproject.io/docs/index.html">documentation</a> does a very good job of that.
Instead, I’ll talk about what I’ve learned regarding the PKI backend configuration and usage.
<br /><br />
These are the points I’ll cover:</p>
<ul>
<li>Install Vault</li>
<li>Configure Vault</li>
<li>Generate a Root Certification Authority (CA) and Intermediate CA</li>
<li>Create a PKI backend</li>
<li>Configure the PKI backend</li>
<li>Issue a couple of server certificates</li>
<li>Issue a Certificate Revocation List (CRL) on Vault</li>
<li>Revoke a certificate</li>
<li>Vault using TLS</li>
</ul>
<h2 id="setup">Setup</h2>
<p>Create the following <code class="language-plaintext highlighter-rouge">vault.conf</code> file:
<script src="https://gist.github.com/3677aba5a14249ac499a.js?file=vault.conf"> </script>
<br />
Create and run the following setup script on the same path as the vault.conf file:</p>
<script src="https://gist.github.com/3677aba5a14249ac499a.js?file=setup.sh"> </script>
<p><br />
You should now have a running instance of Vault using the <code class="language-plaintext highlighter-rouge">/tmp/vault/vault.conf</code> configuration.</p>
<h2 id="initialize-vault">Initialize Vault</h2>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd</span> /tmp/vault
vault init <span class="o">></span> credentials.txt
<span class="c"># check if initialized</span>
curl http://127.0.0.1:8200/v1/sys/init
<span class="c"># keep your credentials safe</span>
<span class="nb">cat </span>credentials.txt</code></pre></figure>
<h2 id="unseal-vault">Unseal Vault</h2>
<p>Vault is protected by an M-of-N key scheme, so you’ll need to run the unseal command three times, using a different key each time, to open it.</p>
<blockquote>
<p>The M of N feature provides a means by which organizations employing cryptographic modules for sensitive operations can enforce multi-person control over access to the cryptographic module. <a href="http://cloudhsm-safenet-docs.s3.amazonaws.com/007-011136-002_lunasa_5-1_webhelp_rev-a/Content/concepts/mofn_about.htm">(source)</a></p>
</blockquote>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">vault unseal</code></pre></figure>
<h2 id="export-the-root-token">Export the Root Token</h2>
<p>This will authenticate your vault client against the Vault server.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">export </span><span class="nv">VAULT_TOKEN</span><span class="o">=</span>use-your-generated-root-token</code></pre></figure>
<h2 id="check-the-current-mount-points">Check the current mount points</h2>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">vault mounts</code></pre></figure>
<h2 id="mount-the-pki-backend">Mount the PKI backend</h2>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">vault mount pki
vault mounts
vault path-help pki</code></pre></figure>
<h2 id="get-your-hands-on-a-ca-certificate">Get your hands on a CA certificate</h2>
<p>You’ll need a CA for the next steps. Don’t have one?<br />
Here you go (thank me later):
<br /><br />
<a href="https://github.com/kintoandar/dummy_ca">dummy_ca</a></p>
<blockquote>
<p>You should never use a Root CA to issue client/server certificates; if it’s compromised you’re screwed! Instead, generate an intermediate CA, and if that one is compromised just revoke it and issue a new one, keeping the Root CA offline.</p>
</blockquote>
<p>With your certificates generated, build a certificate bundle containing the Intermediate CA certificate and the Intermediate CA key.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">export </span><span class="nv">DUMMY_CA</span><span class="o">=</span>/PATH/TO/dummy_ca
<span class="nb">cat</span> <span class="nv">$DUMMY_CA</span>/pki/intermediate/certs/intermediate.pem <span class="o">></span> <span class="se">\</span>
/tmp/vault/ca_bundle.pem
<span class="c"># vault does not accept encrypted keys</span>
openssl rsa <span class="nt">-in</span> <span class="nv">$DUMMY_CA</span>/pki/intermediate/private/intermediate.key <span class="o">>></span> <span class="se">\</span>
/tmp/vault/ca_bundle.pem</code></pre></figure>
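<p>Before handing the bundle to Vault, a quick sanity check is to confirm the certificate and the key in it actually belong together, by comparing their moduli:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"># the two digests must be identical, or the bundle is broken
openssl x509 -in /tmp/vault/ca_bundle.pem -noout -modulus | openssl md5
openssl rsa -in /tmp/vault/ca_bundle.pem -noout -modulus | openssl md5</code></pre></figure>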
<h2 id="configure-the-pki-backend">Configure the PKI backend</h2>
<p>Carefully read the documentation regarding the API endpoints <a href="https://vaultproject.io/docs/secrets/pki/index.html"><code class="language-plaintext highlighter-rouge">/pki/config/</code></a>, <a href="https://vaultproject.io/docs/secrets/pki/index.html"><code class="language-plaintext highlighter-rouge">/pki/roles</code></a> and <a href="https://vaultproject.io/docs/secrets/pki/index.html"><code class="language-plaintext highlighter-rouge">/pki/issue/</code></a></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">vault write pki/config/ca <span class="nv">pem_bundle</span><span class="o">=</span><span class="s2">"@/tmp/vault/ca_bundle.pem"</span>
vault write pki/roles/test-dot-local <span class="nv">allow_any_name</span><span class="o">=</span><span class="s2">"true"</span> <span class="se">\</span>
<span class="nv">allow_subdomains</span><span class="o">=</span><span class="s2">"true"</span> <span class="nv">allow_ip_sans</span><span class="o">=</span><span class="s2">"true"</span> <span class="nv">max_ttl</span><span class="o">=</span><span class="s2">"420h"</span> <span class="se">\</span>
<span class="nv">allow_localhost</span><span class="o">=</span><span class="s2">"true"</span>
vault write pki/issue/test-dot-local <span class="nv">common_name</span><span class="o">=</span>localhost <span class="se">\</span>
<span class="nv">alt_names</span><span class="o">=</span><span class="s2">"vault.test.local,*.vault.test.local"</span> <span class="se">\</span>
<span class="nv">ip_sans</span><span class="o">=</span><span class="s2">"127.0.0.1,192.168.1.77"</span> <span class="o">></span> /tmp/vault/localhost.certs
vault write pki/issue/test-dot-local <span class="se">\</span>
<span class="nv">common_name</span><span class="o">=</span>sheep.test.local <span class="o">></span> /tmp/vault/sheep.certs</code></pre></figure>
<p>Split the <code class="language-plaintext highlighter-rouge">localhost.certs</code> into a separated key and certificate files:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">localhost.pem</code></li>
<li><code class="language-plaintext highlighter-rouge">localhost.key</code></li>
</ul>
<p>Split the <code class="language-plaintext highlighter-rouge">sheep.certs</code> into a separated key and certificate files:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">sheep.pem</code></li>
<li><code class="language-plaintext highlighter-rouge">sheep.key</code></li>
</ul>
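<p>The split itself can be done in a text editor, or scripted; assuming the PEM blocks appear verbatim in the <code class="language-plaintext highlighter-rouge">.certs</code> output (they did for me), a sketch like this does the job:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"># hypothetical helper: pull the private key and the certificate
# block(s) out of the combined output
awk '/BEGIN RSA PRIVATE KEY/,/END RSA PRIVATE KEY/' /tmp/vault/localhost.certs > /tmp/vault/localhost.key
awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' /tmp/vault/localhost.certs > /tmp/vault/localhost.pem</code></pre></figure>
<p>Repeat the same two commands for <code class="language-plaintext highlighter-rouge">sheep.certs</code>.</p>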
<h2 id="test-the-crl">Test the CRL</h2>
<p>This shouldn’t return any revoked certificates yet.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">curl <span class="nt">-v</span> http://127.0.0.1:8200/v1/pki/crl/pem
<span class="k">*</span> Trying 127.0.0.1...
<span class="k">*</span> Connected to 127.0.0.1 <span class="o">(</span>127.0.0.1<span class="o">)</span> port 8200 <span class="o">(</span><span class="c">#0)</span>
<span class="o">></span> GET /v1/pki/crl/pem HTTP/1.1
<span class="o">></span> Host: 127.0.0.1:8200
<span class="o">></span> User-Agent: curl/7.43.0
<span class="o">></span> Accept: <span class="k">*</span>/<span class="k">*</span>
<span class="o">></span>
< HTTP/1.1 200 OK
< Content-Type: application/pkix-crl
< Date: Sun, 15 Nov 2015 12:14:40 GMT
< Content-Length: 0
<
<span class="k">*</span> Connection <span class="c">#0 to host 127.0.0.1 left intact</span></code></pre></figure>
<h2 id="revoking-a-certicate">Revoking a certificate</h2>
<p>To revoke a certificate you first need its Serial Number.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">export </span><span class="nv">SHEEP_SN</span><span class="o">=</span><span class="si">$(</span>openssl x509 <span class="nt">-in</span> /tmp/vault/sheep.pem <span class="nt">-text</span> | <span class="se">\</span>
<span class="nb">grep</span> <span class="nt">-A1</span> <span class="s2">"Serial Number"</span> | <span class="nb">grep</span> <span class="nt">-v</span> <span class="s2">"Serial Number"</span> | <span class="se">\</span>
<span class="nb">awk</span> <span class="o">{</span><span class="s1">'print $1'</span><span class="o">}</span><span class="si">)</span>
curl <span class="nt">-v</span> <span class="nt">-X</span> POST http://127.0.0.1:8200/v1/pki/revoke <span class="se">\</span>
<span class="nt">-H</span> <span class="s2">"X-Vault-Token: </span><span class="nv">$VAULT_TOKEN</span><span class="s2">"</span> <span class="se">\</span>
<span class="nt">-d</span> <span class="s1">'{"serial_number":"'</span><span class="nv">$SHEEP_SN</span><span class="s1">'"}'</span></code></pre></figure>
<h2 id="test-the-crl-1">Test the CRL</h2>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">curl <span class="nt">-v</span> http://127.0.0.1:8200/v1/pki/crl/pem <span class="o">></span> <span class="se">\</span>
/tmp/vault/crl.pem
openssl crl <span class="nt">-inform</span> PEM <span class="nt">-in</span> /tmp/vault/crl.pem <span class="nt">-text</span></code></pre></figure>
<p>You should see the revoked Serial Number.</p>
<h2 id="vault-with-tls">Vault with TLS</h2>
<p>This bit took me quite a while to figure out.
<br /><br />
The documentation doesn’t mention how to do it.
The Vault server doesn’t send the Intermediate CA certificate along with the leaf certificate to the vault client, so you can’t just trust the Root CA; you’ll need to trust the Intermediate one… <code class="language-plaintext highlighter-rouge">¯\_(ツ)_/¯</code>
<br /><br />
I even tried providing a ca_bundle with the Root CA certificate in it, but no luck.
Then there was the issue of finding out how to provide a truststore to the vault client…</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># enable the truststore</span>
<span class="nb">export </span><span class="nv">VAULT_CAPATH</span><span class="o">=</span><span class="nv">$DUMMY_CA</span>/pki/intermediate/certs/intermediate.pem</code></pre></figure>
<p>Uncomment the lines <code class="language-plaintext highlighter-rouge">tls_*_file</code> and comment out <code class="language-plaintext highlighter-rouge">tls_disable</code> on <code class="language-plaintext highlighter-rouge">vault.conf</code></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">pkill vault
vault server <span class="nt">-config</span><span class="o">=</span><span class="s2">"/tmp/vault/vault.conf"</span> &
<span class="nb">export </span><span class="nv">VAULT_ADDR</span><span class="o">=</span>https://127.0.0.1:8200
vault unseal</code></pre></figure>
<p>If it doesn’t give you a TLS error, you’re golden!
You can check the certificate the server is using, and the chain if it sends one, with:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">openssl s_client <span class="nt">-showcerts</span> <span class="nt">-connect</span> 127.0.0.1:8200</code></pre></figure>
<h2 id="thats-it-thats-all">That’s it, that’s all</h2>
<p>This blog post only scratches the surface of what Vault is capable of.
I’m currently looking into High Availability and there are still many other backends to try out, but I hope I’ve piqued your curiosity.</p>
<p>Big thanks to <a href="https://hashicorp.com/">Hashicorp</a> for releasing such amazing open source products.</p>Joel BastosA test drive of Hashicorp Vault PKI backendVeewee, Packer and Kickstarting VMs Into Gear2015-01-18T00:00:00+00:002015-01-18T00:00:00+00:00https://blog.kintoandar.com/2015/01/veewee-packer-kickstarting-vms-into-gear<p>In a world where the Operating System (OS) installation is almost a thing of the past, with all the hosting providers giving you base boxes to use,
some of us still have the privilege to tackle this task.</p>
<div style="text-align:center">
<p><img src="/images/automation_cloud.jpg" alt="automation_cloud" /></p>
</div>
<p>Don’t get me wrong, OS installation <strong>is not</strong> a lost art and it’s vital as the underlying cornerstone of an infrastructure.</p>
<blockquote>
<p>How would you guarantee important OS security/service upgrade in production will play nice with your application?</p>
</blockquote>
<blockquote>
<p>How would you be certain the development environment Virtual Machines (VMs) are a match to the production ones?</p>
</blockquote>
<p>These questions are easily answered when using the OS as an artefact of the delivery pipeline, just like the application running on it.</p>
<p>But for that you’ll need a consistent, versioned and automated way of creating VMs…</p>
<p><strong>Meet <a href="https://github.com/jedi4ever/veewee" title="Veewee">Veewee</a> and <a href="https://www.packer.io/" title="Packer">Packer</a></strong>!</p>
<blockquote>
<p>Remember when you first started using kickstart files, first on floppies, then USB pens and finally using the HTTP method?</p>
</blockquote>
<blockquote>
<p>Remember trying to find a Web Server to serve the kickstart file to the soon-to-be-installed box, then using <code class="language-plaintext highlighter-rouge">python -m SimpleHTTPServer</code> and feeling like a hacker?</p>
</blockquote>
<p>Well, this is the next step on the evolution and, as I was living under a rock regarding all things related to base VM build automation,
I just started yesterday playing around with these awesome tools (yeah… it was a rainy weekend).</p>
<h2 id="the-plan-was">The plan was</h2>
<ul>
<li>Build a minimal Centos 6.6 x86_64 VirtualBox VM</li>
<li>Use my own kickstart files</li>
<li>Import into Vagrant and use it as a base box</li>
<li>Get all the configs on version control</li>
<li>Guarantee it’s a fully automated process</li>
<li>Lean back and enjoy the show
<ul>
<li>Grab some popcorn while it builds</li>
</ul>
</li>
<li>Give Veewee and Packer a decent test-drive</li>
</ul>
<p>So, the next steps were taken to achieve the above requirements (popcorn not included).</p>
<h2 id="veewee">Veewee</h2>
<p><a href="https://github.com/jedi4ever/veewee/blob/master/doc/basics.md" title="RTFM">RTFM</a></p>
<h3 id="get-it">Get it</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">gem <span class="nb">install </span>veewee</code></pre></figure>
<h3 id="grab-an-example">Grab an example</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">git clone https://github.com/kintoandar/veewee.git definitions</code></pre></figure>
<h3 id="use-it">Use it</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># validate config</span>
veewee vbox validate <span class="s1">'centos-6.6-x86_64'</span>
<span class="c"># start build process</span>
veewee vbox build <span class="s1">'centos-6.6-x86_64'</span></code></pre></figure>
<h3 id="share-it">Share it</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">veewee vbox <span class="nb">export</span> <span class="s1">'centos-6.6-x86_64'</span></code></pre></figure>
<h3 id="import-it">Import it</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">vagrant box add <span class="s1">'centos-6.6-x86_64'</span> <span class="s1">'./centos-6.6-x86_64.box'</span>
vagrant init <span class="s1">'centos-6.6-x86_64'</span>
vagrant up
vagrant ssh</code></pre></figure>
<h3 id="watch-the-magic-happen">Watch the magic happen</h3>
<iframe width="420" height="315" src="//www.youtube.com/embed/6vuqs51xiJ0" frameborder="0" allowfullscreen=""></iframe>
<p><br /></p>
<h2 id="packer">Packer</h2>
<p><a href="https://www.packer.io/docs" title="RTFM">RTFM</a></p>
<h3 id="get-it-1">Get it</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">brew tap homebrew/binary
brew <span class="nb">install </span>packer</code></pre></figure>
<h3 id="migrating-from-veewee">Migrating from Veewee?</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># AKA was too lazy to build a packer template</span>
gem <span class="nb">install </span>veewee-to-packer</code></pre></figure>
<h3 id="grab-an-example-1">Grab an example</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">git clone https://github.com/kintoandar/packer.git</code></pre></figure>
<h3 id="use-it-1">Use it</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd </span>packer/centos-6.6-x86_64
packer validate template.json
packer build template.json</code></pre></figure>
<h3 id="import-it-1">Import it</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">vagrant box add <span class="s1">'centos-6.6-x86_64'</span> <span class="s1">'./centos-6.6-x86_64.box'</span>
vagrant init <span class="s1">'centos-6.6-x86_64'</span>
vagrant up
vagrant ssh</code></pre></figure>
<h3 id="watch-the-magic-happen-1">Watch the magic happen</h3>
<iframe width="420" height="315" src="//www.youtube.com/embed/Etcmywy0JHs" frameborder="0" allowfullscreen=""></iframe>
<p><br /></p>
<h2 id="thats-it-thats-all">That’s it, that’s all</h2>
<p>And this is how, in a couple of hours, I became a fan of Veewee and Packer, <code class="language-plaintext highlighter-rouge">#truestory</code>.</p>Joel BastosIn a world where the Operating System installation is almost a thing of the past, with all the hosting providers giving you base boxes to use, some of us still have the privilege to tackle this task.Cooking with Containers2014-11-22T00:00:00+00:002014-11-22T00:00:00+00:00https://blog.kintoandar.com/2014/11/cooking-with-containers<p>If you follow my blog you probably already know I’ve been playing around with docker and CoreOS for some time now.
Even though I have several KVM instances of CoreOS running on my home server, I felt the need to have a VM on my mac to learn more stuff on the go.</p>
<p>I spun up a CoreOS vagrant box and started having some fun.</p>
<div style="text-align:center">
<p><img src="/images/coreos.png" alt="coreos" /></p>
<p><i class="fa fa-plus"></i></p>
<p><img src="/images/Docker.png" alt="Docker" /></p>
</div>
<p>Yeah, yeah, I know there’s <strong>boot2docker</strong>, which abstracts everything into an easy install, so why go through all the fuss of getting CoreOS up and running?
Because I believe CoreOS will be <strong>the</strong> building block of the future of containerisation. And the time for learning about it is now!</p>
<p>I started by building my first docker image from scratch. Things escalated quite quickly and I ended up with an awesome chef cookbook testing setup, almost by accident :p</p>
<p>Hoping you might find my setup useful as it’s been for me, here’s a blog post explaining how to get it up and running.</p>
<h2 id="software-spec">Software spec</h2>
<p>For comparison purposes, these were my software versions when I wrote this post:</p>
<table>
<thead>
<tr>
<th><strong>Package</strong></th>
<th><strong>Version</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>Mac OS X</td>
<td>10.9.5</td>
</tr>
<tr>
<td>Virtualbox</td>
<td>4.3.18</td>
</tr>
<tr>
<td>Vagrant</td>
<td>1.6.5</td>
</tr>
<tr>
<td>CoreOS</td>
<td>505.1.0</td>
</tr>
<tr>
<td>Docker</td>
<td>1.3.0</td>
</tr>
<tr>
<td>ChefDK</td>
<td>0.3.5</td>
</tr>
<tr>
<td>test-kitchen</td>
<td>1.2.1</td>
</tr>
<tr>
<td>kitchen-docker</td>
<td>1.5.0</td>
</tr>
</tbody>
</table>
<p><br /></p>
<div style="text-align:center">
<p><img src="/images/vagrant.png" alt="vagrant" /></p>
<p><i class="fa fa-plus"></i></p>
<p><img src="/images/virtualbox.png" alt="virtualbox" /></p>
</div>
<h2 id="lets-get-down-to-business">Lets get down to business</h2>
<p>Download and install the following packages:</p>
<ul>
<li><a href="http://brew.sh/">Homebrew</a></li>
<li><a href="https://www.virtualbox.org/wiki/Downloads">Virtualbox</a></li>
<li><a href="https://www.vagrantup.com/downloads.html">Vagrant</a></li>
<li><a href="https://downloads.getchef.com/chef-dk/">Chef Development Kit (chefdk)</a></li>
</ul>
<h3 id="install-the-test-kitchen-gem-and-its-docker-driver">Install the test-kitchen gem and its docker driver</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">chef gem <span class="nb">install </span>test-kitchen
chef gem <span class="nb">install </span>kitchen-docker</code></pre></figure>
<h3 id="install-docker">Install docker</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">brew update
brew <span class="nb">install </span>docker</code></pre></figure>
<h3 id="clone-coreos-vagrant-config-and-spin-it-up">Clone CoreOS vagrant config and spin it up</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">git clone https://github.com/coreos/coreos-vagrant.git
<span class="nb">cd </span>coreos-vagrant
vagrant up
vagrant ssh</code></pre></figure>
<h3 id="enable-remote-api-for-docker">Enable remote API for docker</h3>
<p>By default, CoreOS has the docker API listening on a local socket.
As we’re going to manage containers remotely we’ll need to make docker available on a TCP socket (more info about this <a href="https://coreos.com/docs/launching-containers/building/customizing-docker/">here</a>).</p>
<p>On the CoreOS box create the following file <code class="language-plaintext highlighter-rouge">/etc/systemd/system/docker-tcp.socket</code> and add this:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="o">[</span>Unit]
<span class="nv">Description</span><span class="o">=</span>Docker Socket <span class="k">for </span>the API
<span class="o">[</span>Socket]
<span class="nv">ListenStream</span><span class="o">=</span>2375
<span class="nv">BindIPv6Only</span><span class="o">=</span>both
<span class="nv">Service</span><span class="o">=</span>docker.service
<span class="o">[</span>Install]
<span class="nv">WantedBy</span><span class="o">=</span>sockets.target</code></pre></figure>
<p>Then enable the new socket:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">sudo </span>su -
systemctl <span class="nb">enable </span>docker-tcp.socket
systemctl stop docker
systemctl start docker-tcp.socket
systemctl start docker</code></pre></figure>
<p>And <strong>logout</strong> from the CoreOS box.</p>
<h3 id="adding-a-friendly-name">Adding a friendly name</h3>
<p>On your host, add a friendly hostname for your CoreOS instance. Use <code class="language-plaintext highlighter-rouge">tee</code> here, as a plain <code class="language-plaintext highlighter-rouge">sudo echo ... >> /etc/hosts</code> would run the redirection as your unprivileged user and fail:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">echo</span> <span class="s2">"172.17.8.101 coreos01"</span> | <span class="nb">sudo </span>tee <span class="nt">-a</span> /etc/hosts</code></pre></figure>
<h3 id="export-the-new-docker-endpoint-and-test-it-out">Export the new docker endpoint and test it out</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">export </span><span class="nv">DOCKER_HOST</span><span class="o">=</span>tcp://coreos01:2375
docker ps <span class="nt">-a</span></code></pre></figure>
<p>You should see something like this:</p>
<p><img src="/images/docker_ps.png" alt="docker_ps" />
<br /></p>
<p class="notice--danger"><strong>Note:</strong> If you can’t reach the coreos guest via 172.17.8.101 it might be related to an overlapping route on your host.
You’ll need to add a new route, here’s an example:
<code class="language-plaintext highlighter-rouge">route -vn add -net 172.17.8.0/24 -interface vboxnet1</code></p>
<h2 id="thats-it-let-the-cooking-begin">That’s it, let the cooking begin</h2>
<div style="text-align:center">
<p><img src="/images/Chef.png" alt="Chef" /></p>
<p><i class="fa fa-plus"></i></p>
<p><img src="/images/kitchen.png" alt="kitchen" /></p>
</div>
<p>I’ve made available on github an example so you can start testing your setup right away.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">git clone https://github.com/kintoandar/cooking_with_containers.git
<span class="nb">cd </span>cooking_with_containers
kitchen converge</code></pre></figure>
<p>This will download a docker image I’ve built from the public docker hub, start a new container, push an example cookbook into it, generate a runlist and do a chef-solo run with that runlist, all <strong>like magic</strong>.</p>
<p>If all went according to plan, you just converged your first container testing a <em>useless</em> cookbook. So give yourself a pat on the back, good job!</p>
<p>Now you can go on and build <em>awesome</em> cookbooks, fully tested on your new shiny setup, <strong>enjoy</strong>!</p>
<p class="notice--info"><strong>Pro-tip:</strong> vim + syntastic + rubocop + foodcritic = another #epic combo!</p>Joel BastosCreating testing workflows with containers using docker, coreos and test-kitchenCertificates Mumbo Jumbo2014-11-10T00:00:00+00:002014-11-10T00:00:00+00:00https://blog.kintoandar.com/2014/11/certificates-mumbo-jumbo<h2 id="everybody-hates-pki">Everybody hates PKI</h2>
<p>Yeah, everybody hates <strong>Public Key Infrastructure (PKI)</strong> and I get that, it increases the complexity of a project and when you have to deal with SSL most of your troubleshooting options go down the drain.</p>
<p>No wonder there’s an old <strong>“MIT Curse”</strong> about casting PKI on an enemy team project, because when you use it to solve a problem you’ll end up having many more.</p>
<h2 id="secure-all-the-things">Secure all the things</h2>
<p>However, when <strong>done right</strong>, PKI is currently the best option for securing data exchanges over hostile environments (ex: internet, compromised networks).</p>
<blockquote>
<p>So, why did I mention <strong>“done right”</strong> in the previous sentence?</p>
</blockquote>
<p>Because most of the time this happens:</p>
<div style="text-align:center">
<p><img src="/images/no_idea.jpg" alt="no idea" /></p>
</div>
<p>If you’re reading this post/rant you’re probably using client side SSL on your app, so…</p>
<h2 id="who-can-you-trust">Who can you trust?</h2>
<blockquote>
<p>Do you know which Certification Authorities you currently have in your truststore, and what <strong>certification chain</strong> is needed for the target certificate to validate successfully?</p>
</blockquote>
<p>If the answer is <strong>NO</strong>, you’re doing it wrong! You should only use Root Certification Authorities (CA) you trust and nothing else. Why maintain several CAs that you don’t care about and that could end up being compromised?
Yeah, I’m talking about the standard Java <code class="language-plaintext highlighter-rouge">cacerts</code> file and this message is for all of you who use old Java versions and, with it, old <code class="language-plaintext highlighter-rouge">cacerts</code> truststores. Do yourself a favor and create your own truststores, specifically for your project.</p>
<h2 id="chaining-things-together">Chaining things together</h2>
<blockquote>
<p>What da hell is a certificate chain and why should I care?</p>
</blockquote>
<p>When you buy a digital certificate, usually it’s not signed directly by a Root CA. Instead, it’s signed by an intermediate CA, which by its turn is signed by a higher CA, and so on, until it reaches the Root one. This is called the certificate chain. Here’s an example:</p>
<div style="text-align:center">
<p><img src="/images/google_chain.png" alt="ssl" /></p>
</div>
<p>As you can see, the wildcard certificate for google.com has two more intermediate CAs before the Root CA. So, answering the question above, if your truststore doesn’t have all the intermediate CAs you won’t be able to build the certificate chain and the certificate won’t be successfully validated.</p>
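<p>You can see this mechanism in action with <code class="language-plaintext highlighter-rouge">openssl verify</code>: trust only the Root CA and hand over the intermediates separately (the file names here are hypothetical):</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"># succeeds only if the full chain can be built up to the Root CA
openssl verify -CAfile root_ca.pem -untrusted intermediates.pem leaf.pem</code></pre></figure>
<p>Drop the <code class="language-plaintext highlighter-rouge">-untrusted</code> file and the validation fails, which is exactly what happens inside an app with an incomplete truststore.</p>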
<h2 id="one-in-a-million">One in a million</h2>
<blockquote>
<p>How to guarantee that a certificate from a certain CA is exactly the one I’m expecting and not a random one from the same CA?</p>
</blockquote>
<p>Well, a certificate has several fields that you can check against, for example, the fingerprint or the Common Name (CN). Be advised that when you use the fingerprint, if the certificate is renewed, you’ll need to update your validation scheme.</p>
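<p>Both fields are easy to extract with openssl (<code class="language-plaintext highlighter-rouge">cert.pem</code> being whatever certificate you want to inspect):</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"># the SHA-256 fingerprint of the certificate
openssl x509 -in cert.pem -noout -fingerprint -sha256
# the subject, including the Common Name
openssl x509 -in cert.pem -noout -subject</code></pre></figure>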
<h2 id="check-it-out">Check it out</h2>
<blockquote>
<p>How do I check if a certificate has been revoked by a CA?</p>
</blockquote>
<p>There are two mechanisms for that, <strong>Certificate Revocation Lists</strong> (CRL) and <strong>Online Certificate Status Protocol</strong> (OCSP):</p>
<ul>
<li><strong>CRL</strong> - File with a list of all revoked certificates issued by the CA. You can find a link to download it in the CA certificate.</li>
<li><strong>OCSP</strong> - Service to check if a certain certificate is still valid. If implemented by the CA, you can query the status of a single certificate without downloading the full revocation list.</li>
</ul>
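<p>Both endpoints are usually advertised inside the certificate itself, so you can extract them with openssl (<code class="language-plaintext highlighter-rouge">cert.pem</code> is again a placeholder):</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"># print the OCSP responder URL, if the CA set one
openssl x509 -in cert.pem -noout -ocsp_uri
# show the CRL download location(s)
openssl x509 -in cert.pem -noout -text | grep -A4 'CRL Distribution'</code></pre></figure>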
<h2 id="bigger-is-not-always-better">Bigger is not always better</h2>
<blockquote>
<p>A higher key size, like 4096 bits, is always better, right?</p>
</blockquote>
<p>Security wise, definitely! But there’s a price to pay: it will have an impact on the performance of your server due to the cryptographic operations using that key. So, a key size of 2048 bits is your best choice right now; anything lower will be refused by some servers.</p>
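<p>If you want numbers instead of my word for it, openssl ships a benchmark that shows the cost of private key operations on your own hardware:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"># compare the sign/s column between the two key sizes
openssl speed rsa2048 rsa4096</code></pre></figure>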
<p><strong>As a heads up…</strong></p>
<p class="notice--danger">Please keep in mind that you should <strong>never, ever, <u>ever</u> share your private key!</strong> I’ve heard the most ludicrous reasons for getting a hold on a private key, and the answer should always be: <strong>NO WAY IN HELL!</strong></p>
<h2 id="command-line-primer">Command line primer</h2>
<p>Time for some useful commands, but before proceeding any further check out the following definitions:</p>
<ul>
<li><strong>x509</strong> - Public Key Infrastructure standards</li>
<li><strong><code class="language-plaintext highlighter-rouge">.csr</code></strong> - Certificate Signing Request (also known as a pkcs10)</li>
<li><strong><code class="language-plaintext highlighter-rouge">.der</code></strong> - Certificate container binary encoded</li>
<li><strong><code class="language-plaintext highlighter-rouge">.pem</code></strong> - Certificate container base64 encoded</li>
<li><strong><code class="language-plaintext highlighter-rouge">.key</code></strong> - It’s just a PEM encoded file containing only the private key</li>
<li><strong><code class="language-plaintext highlighter-rouge">.p12</code></strong> - May contain a certificate, a key and/or multiple certificates like CAs/chains (also known as .pfx or pkcs12)</li>
<li><strong><code class="language-plaintext highlighter-rouge">.jks</code></strong> - Java keystore/trustore may contain a certificate, a key and/or multiple certificates like CAs/chains</li>
</ul>
<h3 id="choose-the-common-name">Choose the Common Name</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">export </span><span class="nv">DOMAIN</span><span class="o">=</span><span class="s2">"www.example.com"</span></code></pre></figure>
<h3 id="generate-a-key">Generate a .key</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">openssl genrsa <span class="nt">-out</span> <span class="nv">$DOMAIN</span>.key 2048</code></pre></figure>
<h3 id="generate-a-csr">Generate a .csr</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">openssl req <span class="nt">-new</span> <span class="nt">-key</span> <span class="nv">$DOMAIN</span>.key <span class="nt">-subj</span> <span class="s2">"/C=PT/L=Porto/ST=Portugal/O=Epic Organization/OU=Department of Awesomeness/CN=</span><span class="nv">$DOMAIN</span><span class="s2">"</span> <span class="nt">-out</span> <span class="nv">$DOMAIN</span>.csr
<span class="c"># show the .csr</span>
openssl req <span class="nt">-text</span> <span class="nt">-noout</span> <span class="nt">-verify</span> <span class="nt">-in</span> <span class="nv">$DOMAIN</span>.csr</code></pre></figure>
<p>After sending the <code class="language-plaintext highlighter-rouge">.csr</code> to a CA so it may generate a new certificate, you’ll eventually receive a <code class="language-plaintext highlighter-rouge">.pem</code> or <code class="language-plaintext highlighter-rouge">.der</code> file.</p>
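<p>If you only need a certificate for testing, you can skip the CA round-trip entirely and self-sign the request with the same key (a sketch, reusing the <code class="language-plaintext highlighter-rouge">.key</code> and <code class="language-plaintext highlighter-rouge">.csr</code> generated above; no browser will trust the result):</p>

```shell
# Self-sign the CSR, producing a PEM certificate valid for one year
openssl x509 -req -in $DOMAIN.csr -signkey $DOMAIN.key -days 365 -out $DOMAIN.pem
```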
<h3 id="print-pem">Print .pem</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">openssl x509 <span class="nt">-in</span> <span class="nv">$DOMAIN</span>.pem <span class="nt">-text</span></code></pre></figure>
<h3 id="convert-der-to-pem">Convert .der to .pem</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">openssl x509 <span class="nt">-in</span> <span class="nv">$DOMAIN</span>.der <span class="nt">-inform</span> DER <span class="nt">-out</span> <span class="nv">$DOMAIN</span>.pem <span class="nt">-outform</span> PEM</code></pre></figure>
<h3 id="convert-pem-to-der">Convert .pem to .der</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">openssl x509 <span class="nt">-in</span> <span class="nv">$DOMAIN</span>.pem <span class="nt">-out</span> <span class="nv">$DOMAIN</span>.der <span class="nt">-outform</span> DER</code></pre></figure>
<h3 id="check-key-pem-and-csr">Check .key, .pem and .csr</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># all must share the same hash</span>
openssl rsa <span class="nt">-noout</span> <span class="nt">-modulus</span> <span class="nt">-in</span> <span class="nv">$DOMAIN</span>.key | openssl md5
openssl x509 <span class="nt">-noout</span> <span class="nt">-modulus</span> <span class="nt">-in</span> <span class="nv">$DOMAIN</span>.pem | openssl md5
openssl req <span class="nt">-noout</span> <span class="nt">-modulus</span> <span class="nt">-in</span> <span class="nv">$DOMAIN</span>.csr | openssl md5</code></pre></figure>
<h3 id="generate-p12">Generate .p12</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">openssl pkcs12 <span class="nt">-export</span> <span class="nt">-inkey</span> <span class="nv">$DOMAIN</span>.key <span class="nt">-in</span> <span class="nv">$DOMAIN</span>.pem <span class="nt">-out</span> <span class="nv">$DOMAIN</span>.p12</code></pre></figure>
<h3 id="extract-key-and-pem-from-p12">Extract .key and .pem from .p12</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># .pem</span>
openssl pkcs12 <span class="nt">-in</span> <span class="nv">$DOMAIN</span>.p12 <span class="nt">-out</span> <span class="nv">$DOMAIN</span>.pem <span class="nt">-nokeys</span> <span class="nt">-clcerts</span>
<span class="c"># .key</span>
openssl pkcs12 <span class="nt">-in</span> <span class="nv">$DOMAIN</span>.p12 <span class="nt">-out</span> <span class="nv">$DOMAIN</span>.key <span class="nt">-nocerts</span></code></pre></figure>
<h3 id="convert-p12-to-jks">Convert .p12 to .jks</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">keytool <span class="nt">-importkeystore</span> <span class="nt">-deststorepass</span> <span class="s2">"VeryStrongPass"</span> <span class="nt">-destkeypass</span> <span class="s2">"VeryStrongPass"</span> <span class="nt">-destkeystore</span> <span class="nv">$DOMAIN</span>.jks <span class="nt">-srckeystore</span> <span class="nv">$DOMAIN</span>.p12 <span class="nt">-srcstoretype</span> PKCS12 <span class="nt">-srcstorepass</span> <span class="s2">"VeryStrongPass"</span> <span class="nt">-alias</span> 1</code></pre></figure>
<h3 id="convert-jks-to-p12">Convert .jks to .p12</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">keytool <span class="nt">-importkeystore</span> <span class="nt">-srckeystore</span> <span class="nv">$DOMAIN</span>.jks <span class="nt">-destkeystore</span> <span class="nv">$DOMAIN</span>.p12 <span class="nt">-srcstoretype</span> JKS <span class="nt">-deststoretype</span> PKCS12 <span class="nt">-srcstorepass</span> <span class="s2">"VeryStrongPass"</span> <span class="nt">-deststorepass</span> <span class="s2">"VeryStrongPass"</span> <span class="nt">-srcalias</span> 1 <span class="nt">-destalias</span> 1 <span class="nt">-srckeypass</span> <span class="s2">"VeryStrongPass"</span> <span class="nt">-destkeypass</span> <span class="s2">"VeryStrongPass"</span> <span class="nt">-noprompt</span></code></pre></figure>
<h3 id="list-p12-certificates">List .p12 certificates</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">openssl pkcs12 <span class="nt">-info</span> <span class="nt">-in</span> <span class="nv">$DOMAIN</span>.p12</code></pre></figure>
<h3 id="list-jks-certificates">List .jks certificates</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">keytool <span class="nt">-list</span> <span class="nt">-v</span> <span class="nt">-keystore</span> <span class="nv">$DOMAIN</span>.jks</code></pre></figure>
<h3 id="get-certificate-chain-from-a-webserver">Get certificate chain from a webserver</h3>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">openssl s_client <span class="nt">-showcerts</span> <span class="nt">-connect</span> <span class="nv">$DOMAIN</span>:443</code></pre></figure>
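<p>Piping the leaf certificate straight back into <code class="language-plaintext highlighter-rouge">openssl x509</code> answers the most common question, when does it expire, without reading the whole dump (a sketch; the empty <code class="language-plaintext highlighter-rouge">echo</code> closes the connection right after the handshake, and <code class="language-plaintext highlighter-rouge">-servername</code> sends SNI, which virtual-hosted servers need):</p>

```shell
# Fetch the server certificate and print only its validity window
echo | openssl s_client -servername $DOMAIN -connect $DOMAIN:443 2>/dev/null \
  | openssl x509 -noout -dates
```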
<h3 id="curl-commands">Curl commands</h3>
<h4 id="client-side-test">Client side test</h4>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">curl <span class="nt">-v</span> https://<span class="nv">$DOMAIN</span> <span class="nt">--cert</span> ./public.pem <span class="nt">--key</span> ./private.key</code></pre></figure>
<h4 id="specifying-a-truststore">Specifying a truststore</h4>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># Add the Root CA to an empty file called truststore.pem</span>
curl <span class="nt">-v</span> https://<span class="nv">$DOMAIN</span> <span class="nt">--cacert</span> truststore.pem</code></pre></figure>
<h3 id="bash-functions">Bash functions</h3>
<p>I’ve created a few helper functions that you might find useful, they’re <a href="https://github.com/kintoandar/dotfiles/blob/master/.bashrc.d/functions#L67">available here</a>.</p>
<h2 id="may-the-pki-be-with-you">May the PKI be with you</h2>
<p>I’m just a <em>Padawan</em> in the ways of PKI and obviously this post is <strong>not</strong> <em>“everything you wanted to know about PKI and were afraid to ask”</em>, but I sincerely hope it kicks you down the rabbit hole.</p>
<p>Have fun!</p>Joel BastosEverybody hates Public Key Infrastructure (PKI)Home is Where the Blog is2014-11-07T00:00:00+00:002014-11-07T00:00:00+00:00https://blog.kintoandar.com/2014/11/home-is-where-the-blog-is<p>And the day came to migrate <a href="http://kintoandar.blogspot.com">my old blog</a>. I know it’s a common thing to do, but I’d never done it before.</p>
<p>My previous blog survived since 2008 on Blogger and, apart from some cosmetic tinkering, it was fine for me.
Time went by and I grew tired of the WYSIWYG bugs, which always ended with me messing around with the posts’ HTML.</p>
<p><img src="/images/jekyll_github.png" alt="jekyll github" /></p>
<p>Out with old, in with the new. The markdown approach for blogging was so appealing I couldn’t resist, besides, <a href="http://jekyllrb.com/">Jekyll</a> + <a href="https://github.com/kintoandar">Github</a> combo was just what the doctor ordered.</p>
<p>Importing the old posts while preserving the URLs (so you don’t lose the page rank) was dead simple, but unfortunately all of them are saved as a single line per file, so <code class="language-plaintext highlighter-rouge"><irony></code> good luck editing them in the future <code class="language-plaintext highlighter-rouge"></irony></code>.</p>
<p>Have user comments on your posts and don’t want to lose them? No problem, <a href="https://disqus.com/">Disqus</a> has you covered and the integration with Jekyll is trivial.</p>
<p>After finding a decent theme (thanks to <a href="https://mademistakes.com/">Michael Rose</a>), it was just a matter of learning the Jekyll workflow and starting to type. It’s all pretty straightforward and, if you already use GitHub personal pages, publishing the new blog is just a git push away.</p>
<p>So, if you ever need:</p>
<ul>
<li>Easy migration from other platforms (e.g. Blogger, WordPress)
<ul>
<li>Posts</li>
<li>Comments</li>
<li>URL path</li>
</ul>
</li>
<li>Markdown Posts</li>
<li>Quick integration with Google Analytics and Disqus</li>
<li>Version control</li>
<li>Fast content preview cycle (using <code class="language-plaintext highlighter-rouge">jekyll serve --watch</code>)</li>
<li>Power/freedom to do whatever you want</li>
</ul>
<p>This kind of setup is for you too!</p>
<p class="notice--danger"><strong>Heads up</strong>, if you’ve subscribed to my old blog feeds, you might have to <a href="http://blog.kintoandar.com/feed.xml">update your reader settings</a>.</p>Joel BastosA blog workflow using github-pages and jekyllBeyond the HTPC2014-08-23T00:00:00+01:002014-08-23T00:00:00+01:00https://blog.kintoandar.com/2014/08/beyond-htpc_23<p><img src="/images/simply.png" alt="simply" class="align-center" /></p>
<p>When I got my first job as a Linux sysadmin, I built a small PC to practice as much as I could on all the cool technologies I wanted to learn, not just the stuff needed for the job I had at that moment. That mindset escalated quickly, and the PC became a full-blown server running multiple VMs, services and different network segments, with a pretty decent uptime (almost 99.0%, even without a UPS).</p>
<p>I must admit it crossed my mind to host a box in the so-called “cloud”, but I was too proud of my personal infrastructure, and having too much fun with it, to go down that road. Besides, as a major plus, the server was sitting in my living room, tucked away in my TV cabinet, and doubled as an HTPC.</p>
<p>After a while, requests rolled in to host several websites for friends and family, which made me get things a little more professional and dig into DNS (pun intended), service tuning, availability and scalability.</p>
<p>After 7 years, the time came to seriously upgrade the machine, so I started searching for the best balance between power consumption, performance, noise, longevity and price. This was the end result:</p>
<table>
<tbody>
<tr>
<td>Case</td>
<td>Cooler Master Elite 110 Mini-ITX</td>
</tr>
<tr>
<td>Motherboard</td>
<td>Asus H81I-PLUS Mini-ITX</td>
</tr>
<tr>
<td>Processor</td>
<td>Intel Core i3 4340 (3.60 GHz)</td>
</tr>
<tr>
<td>Heatsink</td>
<td>Zalman CNPS8900 Quiet</td>
</tr>
<tr>
<td>RAM</td>
<td>Dimm 8GB DDR3 Crucial CL9 1600Mhz Ballistix</td>
</tr>
<tr>
<td>Disk</td>
<td>Western Digital SATAIII 1TB 7200rpm 64Mb Black 6Gb/s</td>
</tr>
<tr>
<td>PSU</td>
<td>Silverstone SFX 300W 80 Plus Bronze</td>
</tr>
</tbody>
</table>
<p>All the hardware assembly went without any issues; the case is very versatile for its size, with plenty of space to add more storage. I had never used an SFX format PSU before and was happily surprised by its size, leaving a lot more room inside the case while delivering all the required power very quietly.</p>
<figure class="half ">
<a href="/images/gambit1.jpg">
<img src="/images/gambit1.jpg" alt="gambit server 1" />
</a>
<a href="/images/gambit2.jpg">
<img src="/images/gambit2.jpg" alt="gambit server 2" />
</a>
</figure>
<p>The new Haswell processor architecture provides a major improvement in the balance between performance and power consumption, and the integrated GPU is more than enough for 1080p playback. Hey, if you are wondering whether gaming is possible on a rig like this, the answer is <em>hell yeah</em>; it counts as my own version of a Steam box.</p>
<figure class="half ">
<a href="/images/gambit3.jpg">
<img src="/images/gambit3.jpg" alt="gambit server 3" />
</a>
<a href="/images/gambit4.jpg">
<img src="/images/gambit4.jpg" alt="gambit server 4" />
</a>
</figure>
<p>Regarding the software, there are two main purposes to consider, HTPC and server. I’ll give a quick overview of the most important bits of each.</p>
<figure class="half ">
<a href="/images/gambit5.jpg">
<img src="/images/gambit5.jpg" alt="gambit server 5" />
</a>
<a href="/images/gambit6.jpg">
<img src="/images/gambit6.jpg" alt="gambit server 6" />
</a>
</figure>
<h2 id="the-htpc">The HTPC</h2>
<p>On the HTPC side I went with <a href="http://www.linuxmint.com/">Linux Mint</a> as the default operating system. It ships with <a href="http://cinnamon.linuxmint.com/">Cinnamon</a> by default, it’s completely compatible with the hardware I bought, and with all the <a href="https://help.launchpad.net/Packaging/PPA">PPAs</a> to choose from, package installation was a breeze.</p>
<p>Obviously, <a href="http://xbmc.org/">XBMC</a> has to be running. It’s the best media center out there and I’ve been using it since it was only on the xbox. A while back I got a <a href="http://lightpack.tv/">lightpack</a> strapped to my TV. With <a href="http://code.google.com/p/boblight/wiki/boblightd">boblightd</a> and XBMC working together I’ve achieved an astounding ambilight experience.</p>
<p>Last but not least, <a href="http://store.steampowered.com/">Steam</a> got installed because gaming is awesome.</p>
<h2 id="the-server">The Server</h2>
<p>As for the server, pretty much every service runs inside a virtual machine; <a href="http://www.linux-kvm.org/">KVM</a> has been my hypervisor of choice for some years now. It manages several <a href="http://www.centos.org/">CentOS</a> boxes, organized by function. Recently I’ve also started using <a href="https://coreos.com/">CoreOS</a> to get the hang of <a href="http://www.docker.com/">Docker</a> and to spin up environments for testing.</p>
<p><a href="http://www.nagios.org/">Nagios</a> remains my faithful watchdog and keeps an eye on things for me, while <a href="http://www.ossec.net/">OSSEC</a> enforces the security policy in place and integrates with Nagios to alert when something looks fishy.</p>
<p><a href="https://wiki.archlinux.org/index.php/GNU_Screen">Screen</a> is always on, so I keep an instance of <a href="http://www.irssi.org/">IRSSI</a> connected to <a href="https://freenode.net/">#freenode</a> and another of <a href="http://libtorrent.rakshasa.no/">rtorrent</a>, just for legal downloads obviously.</p>
<p>To get all of my DNS A records updated, as I don’t have public static IP addresses, <a href="http://sourceforge.net/p/ddclient/wiki/Home/">ddclient</a> does its magic.</p>
<p>No decent home server would be finished without providing a NAS, and <a href="http://www.samba.org/">Samba</a> takes care of that quite nicely.</p>
<hr />
<p>Well, there you go: the reasons I went through all this trouble and an insight into the most important stuff running in my new box. Yeah, I could have a NAS + Raspberry Pi + a couple of AWS/DigitalOcean/whatever instances, but:</p>
<blockquote>
<p>What would be the fun in that?</p>
</blockquote>Joel BastosAn overview of my home server and HTPC