Scott Lowe

Technology Short Take 93

Scott Lowe's Blog - Fri, 01/19/2018 - 01:00

Welcome to Technology Short Take 93! Today I have another collection of data center technology links, articles, thoughts, and rants. Here’s hoping you find something useful!

Networking

Servers/Hardware

Nothing this time around. Feel free to hit me up on Twitter if you have links you think I should include next time!

Security

Cloud Computing/Cloud Management
  • The VMware Cloud-Native Apps group has a breakdown of some of the new features/functionality found in the 1.9 release of Kubernetes.
  • Via Maish Saidel-Keesing, I saw this post about 10 open source Kubernetes tools for highly effective SRE and Ops teams.
  • Richard Boswell has a series going on VMware Integrated OpenStack (VIO) and resource schedulers that is (so far) pretty good. Check out part 1 and part 2.
  • Daniel Sanche has a “Kubernetes 101” post that might be worth reviewing if you’re (somewhat) new to Kubernetes.
  • And while we’re on the topic of getting started with Kubernetes, you might find Chris Short’s post on that topic helpful as well.
  • Robin Vasan (via The New Stack) talks a bit about how various forces are shaping the software industry. There are a number of useful observations and takeaways from this article; one that jumps to mind is Vasan’s recommendation that “architects should actively consider finding abstractions that give them portability” when it comes to selecting/utilizing cloud services. (I could insert a rant here about lock-in, but I’ll abstain…)
  • Kynan Rilee has a nice post on node affinity in Kubernetes scheduling.
Operating Systems/Applications
  • PowerShell Core 6.0 is now GA and supported by Microsoft.
  • Consul is a distributed key-value store that I’ve discussed a few times here on the site. One issue that has been difficult to address is bootstrapping Consul in a cloud environment; this slightly older article addresses how new functionality (as of v0.7.1) can use cloud metadata to handle bootstrapping a Consul cluster—and help address scaling that cluster up or down. And while I’m on the topic of Consul, you may also find this article on using Consul with a service mesh to be somewhat helpful as well (although this one isn’t as in-depth as the previous article).
  • Ariya Hidayat shows how to run Debian under WSL (Windows Subsystem for Linux).
  • This is a tool I’d like to take for a spin as soon as I get a chance.
  • I’m glad to see someone tackle the “microservices are the answer” mindset that seems to be going around these days. Check out Dave Kerr’s post. (He also includes links to more in-depth reading on related topics.)
  • Mark Peek talks about Dispatch, the open source serverless framework VMware recently released.
Storage

Virtualization

Career/Soft Skills
  • I really appreciated Jérôme Petazzoni’s frank look at how he reclaimed his productivity in the face of various challenges. There are some good tips here.
  • I found this article on five trends to avoid when founding a startup (no, I’m not planning to found a startup!) interesting reading. The article helps me see companies in a different light and judge/evaluate them by different criteria, which (IMHO) is generally pretty helpful.

OK, that’s all I have this time around. Check back in a couple weeks for the next Technology Short Take!

Categories: Scott Lowe

Experimenting with Azure

Scott Lowe's Blog - Tue, 01/16/2018 - 09:00

I’ve been experimenting with Microsoft Azure recently, and I thought it might be useful to share a quick post on using some of my favorite tools with Azure. I’ve found it useful to try to leverage existing tools whenever I can, and so as I’ve been experimenting with Azure I’ve been leveraging familiar tools like Docker Machine and Vagrant.

The information here isn’t revolutionary or unique, but hopefully it will still be useful to others, even if only as a “quick reference”-type of post.

Launching an Instance on Azure Using Docker Machine

To launch an instance on Azure and provision it with Docker using docker-machine:

docker-machine create -d azure \
  --azure-subscription-id $(az account show --query "id" -o tsv) \
  --azure-ssh-user azureuser \
  --azure-size "Standard_B1ms" \
  azure-test

The first time you run this you’ll probably need to allow Docker Machine access to your Azure subscription (you’ll get prompted to log in via a browser and allow access). This will create a service principal that is visible via az ad sp list. Note that you may be prompted to authenticate again on future runs, although Docker Machine will re-use the existing service principal once it has been created.
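Once the instance is up, the standard Docker Machine workflow applies. For example, to verify the new machine and point your local Docker client at it:

docker-machine ls
eval $(docker-machine env azure-test)
docker info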

Launching an Instance Using the Azure Provider for Vagrant

See this page for complete details on using the Azure provider for Vagrant. Basically, it boils down to these four steps:

  1. Install the Azure provider using vagrant plugin install vagrant-azure.
  2. Add a “dummy” box (similar to how you use Vagrant with AWS; see this post).
  3. Set up an Azure service principal for Vagrant to use to connect to Azure.
  4. Run vagrant up and you’re off to the races.

A more detailed post on using Vagrant with Azure is available here; it provides a bit more information on the above steps.
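For quick reference, the commands behind the first three steps look something like this (these same commands are covered in more detail in the Vagrant-with-Azure post below):

vagrant plugin install vagrant-azure
vagrant box add azure-dummy https://github.com/azure/vagrant-azure/raw/v2.0/dummy.box --provider azure
az ad sp create-for-rbac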

Launching an Instance using the Azure CLI

OK, maybe the Azure CLI isn’t exactly an “existing tool,” but given my affinity for CLI-based tools I think it’s probably reasonable to include it here. To launch an instance using the Azure CLI, it would look something like this:

az vm create -n vm-name -g group-name --image UbuntuLTS --size Standard_B1ms --no-wait

Of course, this assumes a pre-existing resource group. More details are available here.
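If you don’t already have a resource group, creating one is a single command (the group name and location here are just placeholders):

az group create -n group-name -l eastus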

If you need to install the Azure CLI, see here or here for some additional information.

Happy experimenting!

Categories: Scott Lowe

Issue with VMware-Formatted Cumulus VX Vagrant Box

Scott Lowe's Blog - Thu, 01/11/2018 - 23:00

I recently had a need to revisit the use of Cumulus VX (the Cumulus Networks virtual appliance running Cumulus Linux) in a Vagrant environment, and I wanted to be sure to test what I was doing on multiple virtualization platforms. Via Vagrant Cloud, Cumulus distributes VirtualBox and Libvirt versions of Cumulus VX, and there is a slightly older version that also provides a VMware-formatted box. Unfortunately, there’s a simple error in the VMware-formatted box that prevents it from working. Here’s the fix.

The latest version (as of this writing) of Cumulus VX is 3.5.0, and for this version both VirtualBox-formatted and Libvirt-formatted boxes are provided. For a VMware-formatted box, the latest version is 3.2.0, which you can install with this command:

vagrant box add CumulusCommunity/cumulus-vx --box-version 3.2.0

When this Vagrant box is installed using the above command, what actually happens is something like this (at a high level):

  1. The *.box file for the specific box, platform, and version is downloaded. This .box file is nothing more than a TAR archive with specific files included (see here for more details).

  2. The *.box file is expanded into the ~/.vagrant.d/boxes directory on your system. A directory tree is built that helps Vagrant support multiple versions of the same box along with multiple formats of the same box (for example, having version 3.2.0 of a VMware-formatted box alongside version 3.5.0 of a VirtualBox-formatted box on the same system).

In this case, when you install version 3.2.0 of the VMware-formatted Cumulus VX Vagrant box, you’ll end up with a set of files found in ~/.vagrant.d/boxes/CumulusCommunity-VAGRANTSLASH-cumulus-vx/3.2.0/vmware_desktop. In this directory, you’ll find all the files that would describe a VM to a product like VMware Fusion or VMware Workstation: the VMX file, one or more VMDK files, etc.

What you’ll also find for this particular box is something you don’t want: a lock file, in the form of a directory named cumulus-linux-3.2.0.vmx.lck. This lock file is normally created by a VMware desktop virtualization product to indicate that the VM is running and therefore the files are locked and can’t be accessed. Unfortunately, the presence of this directory means that the Vagrant box will not work.

If you try to run vagrant up on a Vagrant environment with this box, you’ll get an error indicating the files are locked, and the vagrant up command will fail.

So how does one fix this?

Simple: just delete the cumulus-linux-3.2.0.vmx.lck directory and its contents.
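For example, using the path described above:

rm -rf ~/.vagrant.d/boxes/CumulusCommunity-VAGRANTSLASH-cumulus-vx/3.2.0/vmware_desktop/cumulus-linux-3.2.0.vmx.lck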

Once you’ve deleted that directory, then using vagrant up to instantiate a Vagrant environment based on this box will work as expected.

(Side note: If you are planning to use version 3.2.0 of the VMware-formatted Cumulus VX box, there’s one additional oddity. When you use vagrant box add as outlined above to download and install the box, you’ll be prompted with a set of options for which provider to use. Be sure to use option 4—the one labeled “vmware_desktop”—and not option 5, labeled “vmware_fusion”. The latter reports an error after downloading the box and the command fails.)

Hopefully Cumulus Networks will release an updated version of the Cumulus VX Vagrant box for VMware products that addresses these issues.

Categories: Scott Lowe

Technology Short Take 92

Scott Lowe's Blog - Fri, 01/05/2018 - 13:00

Welcome to Technology Short Take 92, the first Technology Short Take of 2018. This one was supposed to be the last Tech Short Take of 2017, but I didn’t get it published in time (I decided to spend time with my family instead—some things are just more important). In any case, hopefully the delay of one additional week hasn’t caused any undue stress—let’s jump right in!

Networking
  • Lindsay Hill walks through using Telegraf, InfluxDB, and Grafana to monitor network statistics.
  • Via Ivan Pepelnjak, I found this article by Diane Patton at Cumulus Networks talking about container network designs. The article is a bit heavy on pushing the Host Pack (a Cumulus thing), but otherwise provides a good overview of several different possible container network designs, along with some of the criteria that might lead to each design.
  • Erik Hinderer takes a stab (based on his field experience) at estimating how long it takes to upgrade VMware NSX. Erik’s figures are just estimates, of course; actual values will be determined based on each customer’s specific environment.
  • This post is a bit older, but covers a challenge faced by cloud-native darling Netflix—how does one, exactly, identify which application used which IP address at a given point in time? When you’re operating at the scale at which Netflix operates, this is no trivial feat.
Servers/Hardware

Security
  • The CPU architecture flaws involving speculative execution (Meltdown and Spectre) have been garnering a great deal of attention (see here, here, here, and here). Also, here’s Google Project Zero’s write-up (along with a support FAQ from Google on mitigation). There’s lots more coverage, obviously, but this should be enough to get you started.
Cloud Computing/Cloud Management
  • Kevin Carter has a detailed write-up on efforts around leveraging systemd-nspawn for deploying OpenStack via OpenStack Ansible. systemd-nspawn is an interesting technology I’ve been watching since early this year, and it will be cool (in my opinion) to see a project using it in this fashion.
  • The vSphere provider for Terraform (did you know there was one?) recently hit 1.0, and HashiCorp has a blog post (re-)introducing the provider. I thought I also saw a VMware blog post on the provider as well, but couldn’t find any link (guess I was mistaken).
  • Oh, and speaking of Terraform: check out this post on the release of Terraform 0.11.
  • Tim Nolet reviews some differences between Azure Container Instances and AWS Fargate (recently announced at AWS re:Invent 2017). Tim’s review of each of the offerings is pretty balanced (thanks for that), and I’d recommend reading this post to get a better idea of how each of them works.
  • Jorge Salamero Sanz (on behalf of Sysdig) provides a similar comparison, this time looking at ECS, Fargate, and EKS. Jorge’s explanation of Fargate as “managed ECS/EKS instances” is probably the most useful explanation of Fargate I’ve seen so far.
  • Michael Gasch digs relatively deep to address the question of how Kubernetes reconciles allocatable resources and requested resources in order to satisfy QoS. Good information here, in my opinion. Thanks Michael!
  • Running distributed systems such as etcd, Kubernetes, Linkerd, etc., to support applications means making a conscious decision to embrace a certain level of complexity in exchange for the benefits these systems offer. Read this post-mortem on an outage to gain a better idea of some of the challenges this additional complexity might present when it comes to troubleshooting.
  • Tim Hinrichs provides some details on Rego, the policy language behind the Open Policy Agent project.
  • Paul Czarkowski walks you through creating your first Helm chart.
Operating Systems/Applications
  • I came across this mention of Mitogen, a project whose goal—as described by the creator—is to “make it childsplay [sic] to run Python code on remote machines”.
  • From the “interesting-but-not-practically-useful” department, Nick Janetakis shows how to use Docker to run a PDP-11 simulator. The magic here, in my opinion, is in the simulator (not in Docker), but it’s still an interesting look at how one might use Docker.
  • Also from Nick, here’s an attempt to answer the question, “Do I learn Docker Swarm or Kubernetes?”
  • I debated on adding this link because I wasn’t sure how useful it might be to readers, but decided to include it anyway. Apache Guacamole describes itself as “a clientless remote desktop gateway” supporting standard protocols like SSH, VNC, and RDP.
  • Tamás Török has a quite lengthy post on transforming your system into microservices. It’s nice to see some of the challenges—which aren’t all technical—mentioned as well, as sometimes today’s tech writers only seem to see microservices through rose-colored glasses.
  • This is an awesome collection of patched fonts.
  • OpenSSH on Windows—what a time to be alive! It almost makes me want to add a Windows 10 machine to my collection…
  • I enjoyed this developer-centric comparison of Kubernetes and Pivotal Cloud Foundry.
Storage
  • Tony Bourke has a two-part series on ZFS and Linux and encryption (part 1, part 2).
Virtualization

Career/Soft Skills
  • Although targeted at “creatives,” I think there are some tips and ideas in this post that are equally applicable to IT professionals.

That’s it for this time around. Look for the next Technology Short Take in a couple of weeks, where I’ll have another curated collection of links and articles for you. Until then, enjoy!

Categories: Scott Lowe

Looking Back: 2017 Project Report Card

Scott Lowe's Blog - Wed, 01/03/2018 - 13:00

As has become my custom for the past several years, I wanted to take a look at how well I fared on my 2017 project list. Normally I’d publish this before the end of 2017, but during this past holiday season I decided to more fully “unplug” and focus on the truly important things in life (like my family). So, here’s a look back at my 2017 projects and a report card on my progress (or lack thereof, in some cases).

For reference, here’s the list of projects I set out for myself in 2017:

  1. Finish the network automation book.
  2. Launch an open source book project.
  3. Produce some video content.
  4. Get the Full Stack Journey podcast back on track.
  5. Complete a “wildcard project.”

So, how did I do with each of these projects?

  1. Finish the network automation book: I’m happy to report that all the content for the network automation book I’ve been writing with Jason Edelman and Matt Oswalt is done, and the book is currently in production (and should be available to order from O’Reilly very soon). I had hoped to get the content done in time for the book to be available for order before the end of 2017, so I’m marking myself down just a bit on this one. Grade: B

  2. Launch an open source book project: I launched The Open vSwitch Cookbook in late February, and then canceled the project in late March. Why? Basically, it boils down to my effort detracting from the effort to include high-quality documentation with Open vSwitch itself, and I decided it was better to support the efforts of the OVS project than put forth my own (competitive) project. Because I learned something from this project—how to better align my own efforts with the efforts of open source projects I want to support—I don’t consider this to be a total failure. Grade: D

  3. Produce some video content: I intended to start producing some video content, such as demos of a project or a video “how-to” for a certain technology. Unfortunately, I simply didn’t make it. Grade: F

  4. Get the Full Stack Journey podcast back on track: I was aiming for nine episodes in 2017, and I managed to publish six. More importantly, though, was that I was able to join forces with Packet Pushers, giving me a support network to make the podcast even better (in the long run). Grade: B

  5. Complete a “wildcard” project: I’ve had this on my list for the last few years, as a way of trying to account for unforeseen changes in the industry that may pull me in a direction different than what I had anticipated. I did lots of interesting things this past year, but the only “project” that stands out was the migration of this web site from Jekyll on GitHub Pages to Hugo on Amazon S3/CloudFront (more information is available here). This migration was very smooth and has—I think—resulted in a better site with better performance. It’s also pushed me in some new directions, which I think is a good thing. Grade: A

Overall, my progress was reasonable—not stellar, but not awful. (That’s an improvement over last year, at least!) Over the next few weeks, I’ll be evaluating the projects I want to tackle in 2018. Once I have that list ready to share, I’ll publish it here as I have in the past.

Have some feedback for me? Feel free to hit me up on Twitter, or drop me an email (my address is here on the site). Thanks!

Categories: Scott Lowe

Installing XMind 8 on Fedora 27

Scott Lowe's Blog - Fri, 12/15/2017 - 13:00

XMind is a well-known cross-platform mind mapping application. Installing the latest version of XMind (version 8) on Linux is, unfortunately, more complicated than it should be. In this post, I’ll show how to get XMind 8 running on Fedora 27.

So why is installing XMind more complicated than it should be? For reasons unknown, the makers of XMind stopped using well-known Linux package mechanisms with this version of the software, providing only a ZIP archive to download and extract. (Previous versions at least provided a Debian package.) While the ZIP archive includes a very simplistic “setup script”, the script does nothing more than install a few packages and some fonts, and it was written expressly for Debian-based systems. If you extract the archive and place the files outside of your home directory (as would be typical for installing an application on most desktop Linux distributions), you’ll run into problems with permissions. Finally, the application itself is extraordinarily brittle with regard to file locations and such; it’s easy to break it by simply moving the wrong file.

Through some research and some trial-and-error, I finally arrived at a configuration for XMind 8 on Fedora 27 that satisfies a couple criteria:

  1. The application should reside outside the user’s home directory in a location that is typical for third-party applications (for example, in the /opt directory).

  2. All user-specific directories and information would reside in the user’s home directory so as to eliminate the need for overly-permissive file/group permissions.

Here are the steps you can follow to get XMind 8 installed on Fedora 27 in a way that satisfies these criteria.

First, you’ll need to install the “java-1.8.0-openjdk” package using dnf install. XMind has a few different prerequisite packages (lame, webkitgtk, and glibc), but in my tests on a Fedora 27 system this was the only package that wasn’t already installed. Note that if you’re trying to replicate these instructions on a different Linux distribution, this step is where you’ll need to locate distribution-specific package names (most of the rest of the steps are applicable to any Linux distribution).

Next, download the XMind 8 ZIP archive, and extract it into an xmind directory:

unzip xmind-8-update6-linux.zip -d xmind

For simplicity’s sake, I chose to work within my own ~/Downloads directory, but you should be able to work from within any directory where you have write permissions.

Third, create a user-specific area for XMind to store information:

mkdir -p ~/.config/xmind/workspace

I chose the ~/.config directory since it was already present and utilized for application-specific information. You can use a different directory, but the rest of the instructions will assume this path was used.

Fourth, go ahead and remove the 32-bit version of the XMind executable; it’s pretty likely you won’t need it:

rm -rf xmind/XMind_i386

Fifth, copy over two directories into the user-specific area you created earlier:

cp -a xmind/XMind_amd64/configuration ~/.config/xmind/
cp -a xmind/XMind_amd64/p2 ~/.config/xmind/

Based on my testing, this step should be the only step needed to make XMind work for additional users on your Fedora system (i.e., you’ll want to run this step for other users on the system as well in order for them to be able to run XMind).

Next, you’ll need to update XMind.ini to tell XMind the new user-specific locations. Edit this file (found in the XMind_amd64 subdirectory), and make the following changes:

  • On line 2, change ./configuration to @user.home/.config/xmind/configuration (the @user.home refers to the user’s home directory; note that you can’t use the tilde shortcut here as it won’t work)
  • On line 4, change ../workspace to @user.home/.config/xmind/workspace
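XMind.ini uses the Eclipse-style launcher format, where each flag is followed by its value on the next line. After making these changes, the relevant portion of the file should look something like this (the flag lines are my reconstruction; only the two path lines change):

-configuration
@user.home/.config/xmind/configuration
-data
@user.home/.config/xmind/workspace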

You’re almost done! In the fonts subdirectory of the xmind directory in which you stored the extracted files, you’ll find some fonts that are distributed with XMind. Install these on your system (copy them to ~/.local/share/fonts or /usr/share/fonts, as you desire), and then remove the fonts subdirectory. In my particular case, some of the fonts—like the Roboto family—were already installed.
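For the per-user approach, something like this should work (assuming the font files sit directly under the fonts subdirectory):

mkdir -p ~/.local/share/fonts
cp -r xmind/fonts/* ~/.local/share/fonts/
fc-cache -f
rm -rf xmind/fonts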

Next, move the xmind directory to its final location and lock down the permissions:

sudo mv xmind /opt/
sudo chown -R root:root /opt/xmind

The final step is to create a desktop entry file so that XMind is accessible via the launcher. Here’s a sample file:

[Desktop Entry]
Comment=Create and share mind maps
Exec=/opt/xmind/XMind_amd64/XMind %F
Path=/opt/xmind/XMind_amd64
Name=XMind
Terminal=false
Type=Application
Categories=Office;Productivity
Icon=xmind

This desktop file can go into /usr/share/applications or ~/.local/share/applications, though—for reasons I’ll share shortly—the latter may be a better choice.

That’s it! You should be able to launch XMind from the GNOME Activities screen.

Unfortunately, there are some caveats and limitations:

  • XMind apparently stores its application icon buried deep in the ~/.config/xmind/configuration directory structure. This is why you may prefer to use ~/.local/share/applications for the desktop file. If you do use this location, you’ll need to perform this step for other users of the system as well. (The Icon=xmind line in the sample desktop file above won’t actually work.)
  • This manual installation doesn’t create a MIME type for XMind documents so that you can open an XMind document from within the Nautilus file manager. (The simplistic shell script supplied in the XMind download doesn’t either.) I’ve done a bit of work on this, but haven’t come to a workable solution yet.

So there you have it: how to get XMind 8 installed and running in some reasonable fashion on Fedora 27. If you have questions, comments, or corrections, feel free to hit me up on Twitter.

Categories: Scott Lowe

Installing the VMware Horizon Client on Fedora 27

Scott Lowe's Blog - Wed, 12/13/2017 - 22:30

In this post, I’ll outline the steps necessary to install the VMware Horizon client for Linux on Fedora 27. Although VMware provides an “install bundle,” the bundle does not, unfortunately, address any of the prerequisites that are necessary in order for the Horizon client to work. Fortunately, some other folks shared their experiences, and building on their knowledge I was able to make it work. I hope that this post will, in turn, help others who may find themselves in the same situation.

Based on information found here and here, I took the following steps before attempting to install the VMware Horizon client for Linux:

  1. First, I installed the libpng12 package using sudo dnf install libpng12.

  2. I then created a symbolic link for the libudev.so.0 library that the Horizon client requires:

    sudo ln -s /usr/lib64/libudev.so.1 /usr/lib64/libudev.so.0
  3. I created a symbolic link for the libffi.so.5 library the Horizon client expects to have available:

    sudo ln -s /usr/lib64/libffi.so.6 /usr/lib64/libffi.so.5

With these packages and symbolic links in place, I proceeded to install the VMware Horizon client using the install bundle downloaded from the public VMware web site (for version 4.6.0 of the client). Per the guidelines in this GitHub gist, I deselected most of the options (Smart Card, Real-Time Audio-Video, and Multimedia Redirection were all deselected; if I recall correctly, only Virtual Printing, Client Drive Redirection, and USB Redirection were left selected). The installation proceeded without incident, and the scan at the end reported success on all fronts.

Once the installation was complete, I was able to launch the Horizon client and proceed without further issues.

Here’s hoping this information helps others who may be looking to use the Horizon Client for Linux. If you have questions, feel free to hit me up on Twitter.

Categories: Scott Lowe

Using Vagrant with Azure

Scott Lowe's Blog - Tue, 12/12/2017 - 00:00

In this post, I’ll describe how to use Vagrant with Azure. You can consider this article an extension of some of my earlier Vagrant articles; namely, the posts on using Vagrant with AWS and using Vagrant with OpenStack. The theme across all these posts is examining how one might use Vagrant to simplify/streamline the consumption of resources from a provider using the familiar Vagrant workflow.

If you aren’t already familiar with Vagrant, I’d highly recommend first taking a look at my introduction to Vagrant, which provides an overview of the tool and how it’s used.

Prerequisites

Naturally, you’ll need to first ensure that you have Vagrant installed. This is really well-documented already, so I won’t go over it here. Next, you’ll need to install the Azure provider for Vagrant, which you can handle using this command:

vagrant plugin install vagrant-azure

You’ll also (generally) want to have the Azure CLI installed. (You’ll need it for a one-time configuration task I’ll mention shortly.) I’ve published a couple posts on installing the Azure CLI; see here or here.

Once you’ve installed the vagrant-azure plugin and the Azure CLI, you’ll next need to install a box that Vagrant can use. Here, the use of Vagrant with Azure is different than the use of Vagrant with a provider like VirtualBox or VMware Fusion/VMware Workstation. Like when using Vagrant with AWS, when you’re using Vagrant with Azure the box is a “dummy box” that doesn’t really do anything; instead, you’re relying on Azure VM images. You can install a “dummy box” for Azure with this command:

vagrant box add azure-dummy https://github.com/azure/vagrant-azure/raw/v2.0/dummy.box --provider azure

You’ll then reference this “dummy box” in your Vagrantfile, as I’ll illustrate shortly.

The final step is to create an Azure Active Directory (AD) service principal for Vagrant to use when connecting to Azure. This command will create one for you:

az ad sp create-for-rbac

Make note of the values returned in the command’s JSON response; you’ll need them later. You’ll also want to know your Azure subscription ID, which you can obtain by running az account list --query '[?isDefault].id' -o tsv.
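The JSON response will look something like this (values redacted; the exact set of keys may vary slightly with the CLI version):

{
  "appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "displayName": "azure-cli-2017-12-12-00-00-00",
  "name": "http://azure-cli-2017-12-12-00-00-00",
  "password": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}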

Now that you’ve installed Vagrant, installed the vagrant-azure plugin, downloaded the dummy Azure box, and have created the Azure AD service principal, you’re ready to start spawning some Azure VMs with Vagrant.

Launching Azure VMs via Vagrant

When I use Vagrant, I prefer to keep the Vagrant configuration (stored in the file named Vagrantfile) as clean as possible, and separate details into a separate data file (typically using YAML). I outlined this approach here and here. For the purposes of this post, however, I’ll just embed the details directly into the Vagrant configuration to make it easier to understand. I have examples of using a YAML data file with Vagrant and Azure in the “Additional Resources” section below.

Here’s a snippet of a Vagrantfile you could use to spin up Azure VMs using Vagrant:

# Require the Azure provider plugin
require 'vagrant-azure'

# Create and configure the Azure VMs
Vagrant.configure('2') do |config|
  # Use dummy Azure box
  config.vm.box = 'azure-dummy'

  # Specify SSH key
  config.ssh.private_key_path = '~/.ssh/id_rsa'

  # Configure the Azure provider
  config.vm.provider 'azure' do |az, override|
    # Pull Azure AD service principal information from environment variables
    az.tenant_id = ENV['AZURE_TENANT_ID']
    az.client_id = ENV['AZURE_CLIENT_ID']
    az.client_secret = ENV['AZURE_CLIENT_SECRET']
    az.subscription_id = ENV['AZURE_SUBSCRIPTION_ID']

    # Specify VM parameters
    az.vm_name = 'aztest'
    az.vm_size = 'Standard_B1s'
    az.vm_image_urn = 'Canonical:UbuntuServer:16.04-LTS:latest'
    az.resource_group_name = 'vagrant'
  end # config.vm.provider 'azure'
end # Vagrant.configure

Naturally, this is a very generic configuration, so you’d need to supply the specific details you want to use. In particular, you’d need to supply the following details:

  • The SSH keypair you want to use (via the config.ssh.private_key_path setting)
  • The Azure VM size you’d like to use (via the az.vm_size setting)
  • The Azure VM image you want to use (via the az.vm_image_urn setting)
  • The name of the Azure resource group you’d like to use (via the az.resource_group_name setting; this group should not already exist)

Also, you’ll note that the Vagrantfile assumes you’ve set some environment variables that will allow Vagrant to communicate with Azure. These values are taken from the JSON output of creating the Azure AD service principal:

  • The AZURE_TENANT_ID maps to the “tenant” key of the JSON output
  • The AZURE_CLIENT_ID maps to the “appId” key of the JSON output
  • The AZURE_CLIENT_SECRET maps to the “password” key of the JSON output
  • The AZURE_SUBSCRIPTION_ID is your Azure subscription ID, as shown when running az account show

Set these environment variables (you can use the export command) before running vagrant up, or you’ll get an error.
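For example (the first three values come from the JSON output of az ad sp create-for-rbac, as described above; the placeholders are mine):

export AZURE_TENANT_ID="<tenant value>"
export AZURE_CLIENT_ID="<appId value>"
export AZURE_CLIENT_SECRET="<password value>"
export AZURE_SUBSCRIPTION_ID="$(az account show --query id -o tsv)"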

With the right configuration details in place, simply run vagrant up from the same directory where the Vagrant configuration (in Vagrantfile) is stored. Vagrant will communicate with Azure (using the values in the corresponding environment variables) and create/launch the Azure VMs per the details provided.

Once the instances are up, the “standard” Vagrant workflow applies:

  • Use vagrant ssh <name> to log into one of the VMs.
  • Use vagrant provision to apply any provisioning instructions (i.e., to copy files across or run a configuration management tool).
  • Use vagrant destroy to kill/terminate the VMs.

One nice aspect of using Vagrant with Azure in this way is that you get the same workflow and commands to work with Azure VMs as you do with AWS instances, VirtualBox VMs, or VMware Fusion/Workstation VMs.

Why Use Vagrant with Azure?

As I stated in my post on using Vagrant with AWS, using Vagrant with Azure in this fashion is a great fit for the creation/destruction of temporary environments used for testing, software development, etc. However, in situations where you are creating more “permanent” infrastructure—such as deploying production applications onto Azure—then I would say that Vagrant is not the right fit. In those cases, using a tool like Terraform (see my introductory post) would be a better fit.

Additional Resources

To help make using Vagrant with Azure easier, I’ve created a couple learning environments that are part of my GitHub “learning-tools” repository. Specifically, see the vagrant/azure directory for a sample Vagrant configuration that uses an external YAML data file for instance details.

Additionally, see the documentation for the vagrant-azure plugin on GitHub for more information.

Categories: Scott Lowe

Technology Short Take 91

Scott Lowe's Blog - Fri, 12/08/2017 - 13:00

Welcome to Technology Short Take 91! It’s been a bit longer than usual since the last Tech Short Take (partly due to the US Thanksgiving holiday, partly due to vacation time, and partly due to business travel), so apologies for that. Still, there’s a great collection of links and articles here for you, so dig in and enjoy.

Networking
  • Amanpreet Singh has a two-part series on Kubernetes networking (part 1, part 2).
  • Anthony Spiteri has a brief look at NSX-T 2.1, which recently launched with support for Pivotal Container Service (PKS) and Pivotal Cloud Foundry, further extending the reach of NSX into new areas.
  • Jon Benedict has a brief article on OVN and its integration into Red Hat Virtualization; if you’re unfamiliar with OVN, it might be worth having a look.
  • sFlow is a networking technology that I find quite interesting, but I never seem to have the time to really dig into it. For example, I was recently browsing the sFlow blog and came across two really neat articles. The first was on RESTful control of Cumulus Linux ACLs (this one isn’t actually sFlow-related); the second was on combining sFlow telemetry and RESTful APIs for visibility and control in campus networks.
  • David Gee’s “network automation engineer persona” content continues; this time he tackles some thoughts around proof-of-concepts (PoCs).
Servers/Hardware
  • Frank Denneman (with an admittedly vSphere-focused lens) takes a look at the Intel Xeon Scalable Family in a two-part (so far) series. Part 1 covers the CPUs themselves; part 2 discusses the memory subsystem. Both articles are worth reviewing if hardware selection is an important aspect of your role.
  • Kevin Houston provides some details on blade server options for VMware vSAN Ready Nodes.
Security

Cloud Computing/Cloud Management
  • The Cloud-Native Computing Foundation (CNCF) and the Kubernetes community introduced the Certified Kubernetes Conformance Program, and the first announcements of certification have started rolling in. First, here’s Google’s announcement of renaming Google Container Engine to Google Kubernetes Engine (making the GKE acronym much more applicable) as a result of its certification. Next, here’s an announcement on the certification of PKS (Pivotal Container Service).
  • Henrik Schmidt writes about the kube-node project, an effort to allow Kubernetes to manage worker nodes in a cluster.
  • Helm is a great way to deploy applications onto (into?) a Kubernetes cluster, but there are some ways you can improve Helm’s security. Check out this article from Matt Butcher on securing Helm.
  • This site is a good collection of “lessons learned from the trenches” on running Kubernetes on AWS in production.
  • I have to be honest: this blog post on using OpenStack Helm to install OpenStack on Kubernetes with Rook sounds like a massive science experiment. That’s a lot of moving pieces!
  • User “sysadmin1138” (I couldn’t find a mapping to a real name, perhaps that’s intentional) has a great write-up on her/his experience with Terraform in production. There’s some great information here for those of you thinking of (or currently) using Terraform to manage production workloads/configurations.
Operating Systems/Applications
  • Michael Crosby outlines multi-client support in containerD.
  • Speaking of containerD, it just recently hit 1.0.
  • This is a slightly older post by Alex Ellis on attachable networks, which (as I understand it) enable interoperability between declarative workloads (deployed via docker stack deploy) and imperative workloads (launched via docker run).
Storage

Virtualization

Career/Soft Skills
  • Pat Bowden discusses the idea of learning styles, and how combining learning styles (or multiple senses) can typically contribute to more successful learning.
  • I also found some useful tidbits on learning over at The Art of Learning project website.

That’s all for now (but I think that should be enough to keep you busy for a little while, at least!). I’ll have another Tech Short Take in 2 weeks, though given the holiday season is nigh upon us it might be a bit light on content. Until then!

Categories: Scott Lowe

Installing the Azure CLI on Fedora 27

Scott Lowe's Blog - Thu, 12/07/2017 - 22:00

This post is a follow-up to a post from earlier this year on manually installing the Azure CLI on Fedora 25. I encourage you to refer back to that post for a bit of background. I’m writing this post because the procedure for manually installing the Azure CLI on Fedora 27 is slightly different than the procedure for Fedora 25.

Here are the steps to install the Azure CLI into a Python virtual environment on Fedora 27. Even though they are almost identical to the Fedora 25 instructions (one additional package is required), I’m including all the information here for the sake of completeness.

  1. Make sure that the “gcc”, “libffi-devel”, “python-devel”, “openssl-devel”, “python-pip”, and “redhat-rpm-config” packages are installed (you can use dnf to take care of this). Some of these packages may already be present; during my testing with a Fedora 27 Cloud Base Vagrant image, the rest needed to be installed. (The change from Fedora 25 is the addition of the “redhat-rpm-config” package.)

  2. Install virtualenv either with pip install virtualenv or dnf install python2-virtualenv. I used dnf, but I don’t think the method you use here will have any material effects.

  3. Create a new Python virtual environment with virtualenv azure-cli (feel free to use a different name).

  4. Activate the new virtual environment (typically accomplished by sourcing the azure-cli/bin/activate script; substitute the name you used when creating the virtual environment if you didn’t name it azure-cli).

  5. Install the Azure CLI with pip install azure-cli. Once this command completes, you should be ready to roll.
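Taken together, the whole procedure boils down to something like this:

sudo dnf install gcc libffi-devel python-devel openssl-devel python-pip redhat-rpm-config
sudo dnf install python2-virtualenv
virtualenv azure-cli
source azure-cli/bin/activate
pip install azure-cli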

That’s it!

Categories: Scott Lowe

Using Vagrant with Libvirt on Fedora 27

Scott Lowe's Blog - Wed, 12/06/2017 - 17:00

In this post, I’m going to show you how to use Vagrant with Libvirt via the vagrant-libvirt provider when running on Fedora 27. Both Vagrant and Libvirt are topics I’ve covered more than a few times here on this site, but this is the first time I’ve discussed combining the two projects.

If you’re unfamiliar with Vagrant, I recommend you start first with my quick introduction to Vagrant, after which you can browse all the “Vagrant”-tagged articles on my site for a bit more information. If you’re unfamiliar with Libvirt, you can browse all my “Libvirt”-tagged articles; I don’t have an introductory post for Libvirt.

Background

I first experimented with the Libvirt provider for Vagrant quite some time ago, but at that time I was using the Libvirt provider to communicate with a remote Libvirt daemon (the use case was using Vagrant to create and destroy KVM guest domains via Libvirt on a remote Linux host). I found this setup to be problematic and error-prone, and discarded it after only a short while.

Recently, I revisited using the Libvirt provider for Vagrant on my Fedora laptop (which I rebuilt with Fedora 27). As I mentioned in this post, installing VirtualBox on Fedora isn’t exactly straightforward. Further, what I didn’t mention in that post is that the VirtualBox kernel modules aren’t signed; this means you must turn off Secure Boot in order to run VirtualBox on Fedora. I was loath to turn off Secure Boot, so I thought I’d try the Vagrant+Libvirt combination again—this time using Libvirt to talk to the local Libvirt daemon (which is installed by default on Fedora in order to support the GNOME Boxes application, a GUI virtual machine tool). Hence, this blog post.

Prerequisites

Obviously, you’ll need Vagrant installed; I chose to install Vagrant from the Fedora repositories using dnf install vagrant. At the time of this writing, that installed version 1.9.8 of Vagrant. You’ll also need the Libvirt plugin, which is available via dnf:

dnf install vagrant-libvirt vagrant-libvirt-doc

At the time of writing, this installed version 0.40.0 of the Libvirt plugin, which is the latest version. You could also install the plugin via vagrant plugin install vagrant-libvirt, though I didn’t test this approach. (In theory, it should work fine.)

As with most other providers (the AWS and OpenStack providers being the exceptions), you’ll also need one or more Vagrant boxes formatted for the Libvirt provider. I found a number of Libvirt-formatted boxes on Vagrant Cloud, easily installable via vagrant box add. For the purposes of this post, I’ll use the “fedora/27-cloud-base” Vagrant box with the Libvirt provider.

Finally, because Vagrant is orchestrating Libvirt on the back-end, I also found it helpful to have the Libvirt client tools (like virsh) installed. This lets you see what Vagrant is doing behind the scenes, which can be helpful at times. Just run dnf install libvirt-client.

Using Libvirt with Vagrant

Once all the necessary prerequisites are satisfied, you’re ready to start managing Libvirt guest domains (VMs) with Vagrant. For a really quick start:

  1. cd into a directory of your choice
  2. Run vagrant init fedora/27-cloud-base to create a sample Vagrantfile
  3. Boot the VM with vagrant up
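In command form, the quick start looks something like this (adding the box first if it isn’t already installed; the directory name is just an example):

vagrant box add fedora/27-cloud-base --provider libvirt
mkdir fedora-test && cd fedora-test
vagrant init fedora/27-cloud-base
vagrant up --provider libvirt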

For more fine-grained control over the VM and its settings, you’ll want to customize the Vagrantfile with some additional settings. Here’s a sample Vagrantfile that shows a few (there are many!) of the ways you could customize the VM Vagrant creates:

Vagrant.configure("2") do |config|
  # Define the Vagrant box to use
  config.vm.box = "fedora/27-cloud-base"

  # Disable automatic box update checking
  config.vm.box_check_update = false

  # Set the VM hostname
  config.vm.hostname = "fedora27"

  # Attach to an additional private network
  config.vm.network "private_network", ip: "192.168.100.101"

  # Modify some provider settings
  config.vm.provider "libvirt" do |lv|
    lv.memory = "1024"
  end # config.vm.provider
end # Vagrant.configure

For a more complete reference, see the GitHub repository for the vagrant-libvirt provider. Note, however, that I did run into a few oddities, particularly around networking. For example, I wasn’t able to create a new private Libvirt network using the libvirt__network_address setting; it always reverted to the default network address. However, using the syntax shown above, I was able to create a new private Libvirt network with the desired network address. I was also able to manually create a new Libvirt network (using virsh net-create) and then attach the VM to that network using the libvirt__network_name setting in the Vagrantfile. Some experimentation may be necessary to get precisely the results you’re seeking.

Once you’ve instantiated the VM using vagrant up, then the standard Vagrant workflow applies:

  • Use vagrant ssh <name> to log into the VM via SSH.
  • Use vagrant provision to apply any provisioning instructions, such as running a shell script, copying files into the VM, or applying an Ansible playbook.
  • Use vagrant destroy to terminate and delete the VM.

There is one potential “gotcha” of which to be aware: when you use vagrant box remove to remove a Vagrant box and you’ve created at least one VM from that box, then there is an additional step required to fully remove the box. When you run vagrant up with a particular box for the very first time, the Libvirt provider uploads the box into a Libvirt storage pool (the pool named “default”, by default). Running vagrant box remove only removes the files from the ~/.vagrant.d directory, and does not remove any files from the Libvirt storage pool.

To remove the files from the Libvirt storage pool, run virsh pool-edit default to get the filesystem path where the storage pool is found (if no changes have been made, the “default” pool should be located at /var/lib/libvirt/images). Navigate to that directory and remove the appropriate files in order to complete the removal of a particular box (or a specific version of a box).
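For example (virsh pool-dumpxml shows the pool definition without opening an editor; the exact image filename below is an assumption, so check the directory listing first):

virsh pool-dumpxml default
ls /var/lib/libvirt/images/
sudo rm /var/lib/libvirt/images/fedora-VAGRANTSLASH-27-cloud-base_vagrant_box_image*.img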

So far—though my testing has been fairly limited—I’m reasonably pleased with the Libvirt provider when running against a local Libvirt daemon. The performance is good, and I haven’t had to “jump through hoops” to make the virtualization provider work (as I did with VirtualBox on Fedora).

If you have any questions or feedback, hit me up on Twitter. Thanks!

Categories: Scott Lowe

AWS re:Invent 2017 Keynote with Andy Jassy

Scott Lowe's Blog - Wed, 11/29/2017 - 11:30

This is a liveblog of the re:Invent 2017 keynote with Andy Jassy, taking place on Wednesday at the Venetian. As fully expected given the long queues and massive crowds, even arriving an hour early to the keynote isn’t soon enough; there’s already a huge crowd gathered to make it into the venue. Fortunately, I did make it in and scored a reasonable seat from which to write this liveblog.

The pre-keynote time is filled with catchy dance music arranged by a live DJ (same live DJ as last year, if I’m not mistaken). There have already been quite a few announcements this year even before today’s keynote: Amazon Sumerian (AR/VR service), new regions and availability zones (AZs), and new bare metal instances, just to name a few of the big ones. There’s been a great deal of speculation regarding what will be announced in today’s keynote, but there’s no doubt there will be a ton of announcements around service enhancements and new services. Rumors are flying about a managed Kubernetes offering; we shall see.

Promptly at 8am, the keynote starts with a brief video, and Andy Jassy, CEO of AWS, takes the stage. Jassy welcomes attendees to the sixth annual conference, and confirms that the attendance at the event is over 43,000 people—wow!

Jassy starts with a quick update on the AWS business:

  • $18B revenue run rate
  • 42% growth rate (if I captured that correctly)
  • Millions of customers with a pretty varied customer base (lots of technology startups, enterprise customers from pretty much every vertical, and public sector users)
  • Thousands of system integrators who’ve built their business on AWS consulting

Jassy reviews the latest “Magic Quadrant,” showing AWS with a strong lead over all other competitors, and shows a study that gives AWS 44% of the public cloud market share (more than all other competitors combined).

Moving out of the business update, Jassy begins to lay the framework for the rest of the keynote. He compares people building technology solutions (“builders”) to musicians, who want the freedom to choose the technology building blocks (the “instruments”) to create the solution (the “song”). According to Jassy, AWS radically changes what’s possible for builders by giving them unprecedented choice and flexibility. To help with the keynote, a band is going to play five different songs, each of which captures some aspect of how AWS enables builders to build incredibly new and powerful solutions.

The first song is “Everything is Everything,” by Lauryn Hill. Jassy explains that “everything is everything” applies to technology because the choice of platform/provider is incredibly important, and builders shouldn’t have to settle for less than everything. AWS has more than any other provider, says Jassy, meaning they have the “everything” that builders need/want, leading him into a lengthy rant (in a good way) outlining the breadth of AWS’ services (including, notably, a mention of VMware Cloud on AWS).

Jassy mentions that the pace of innovation is also continuing to expand, with an expected 1,300+ service announcements over the course of 2017.

At this point, Jassy brings out Mark Okerstrom, President and CEO of Expedia. Okerstrom talks about the technology challenges that a company operating at Expedia’s scale (600M+ site visits monthly, greater than 750 million searches per day) experiences. Expedia has committed to move 80% of mission critical applications to AWS within the next 2-3 years. Why? Resiliency, optimization, and performance, says Okerstrom. Okerstrom wraps up his portion with a quote by Mark Twain (on how travel is fatal to bigotry), and Jassy returns to the stage.

Jassy turns his attention to AWS’ compute offerings. Jassy outlines the range of compute instance types (such as the new M5, H1, and I3m [bare metal] instances), and then moves to talk about containers. He positions ECS (Elastic Container Service) as something that AWS built “back when there was no predominant orchestration system,” and outlines some of the advantages that ECS offers (deep integration with other AWS services, better scale, and service integrations at the container level).

All that being said, Jassy recognizes that Kubernetes has emerged as a leading container orchestration platform, and that customers who want to run Kubernetes on AWS have some complexities to manage. Jassy announces Amazon Elastic Container Service for Kubernetes (EKS), a managed Kubernetes service running on top of AWS. EKS has a number of features that Jassy outlines:

  • Hybrid cloud compatible
  • Highly available (masters deployed across multiple AZs, for example)
  • Automated upgrades and patches

This gives AWS two different managed container offerings: ECS and EKS. However, Jassy says that customers want more—they want to run containers without having to manage servers and clusters. This leads to an announcement of AWS Fargate, which allows customers to run containers without managing servers, clusters, or instances. Just package your application into a container, upload it to Fargate, and AWS takes care of the rest (says Jassy). Fargate will support ECS immediately, and will support EKS in 2018. (Although at this point it’s unclear exactly what “supporting” ECS or EKS means.)

Next, Jassy moves on to discussing serverless (Functions as a Service, or FaaS). AWS Lambda has already gathered hundreds of thousands of customers. Jassy points out that FaaS really needs to be more than just code execution; you also need event-driven services (like Lambda and Step Functions), lots of event sources (all the various triggers from AWS services), and the ability to execute functions at the edge as well as in the cloud (like Lambda@Edge and Greengrass).

This brings Jassy back to the “everything is everything” mantra, and how the broad range of compute offerings that AWS supplies satisfies customers’ demands for “everything is everything.”

Changing direction slightly, Jassy talks about what “freedom” means to him and to AWS. This leads him back to the house band, who plays “Freedom” by George Michael.

The “freedom” discussion leads Jassy to a discussion about databases, and a number of not-very-subtle attacks against Oracle. Customers want open database engines, and this demand is what led AWS to create Amazon Aurora. Aurora is MySQL- and PostgreSQL-compatible but offers the scale and performance that users demand from commercial databases. Jassy states that Aurora is the fastest-growing service in the history of AWS.

Aurora offers the ability to scale out for reads, but customers wanted scale-out write support. Jassy announces a preview of Aurora Multi-Master, which supports multiple read/write Aurora instances across multiple AZs (with multi-region support coming in 2018). The preview for single region/multi-master is open today.

Next, Jassy announces Aurora Serverless—on-demand, auto-scaling Amazon Aurora. This service eliminates the need to provision instances, automatically scales up/down, and starts up and shuts down automatically.

However, relational databases aren’t the only solution out there; sometimes a different type of solution is needed. Sometimes a key-value datastore is a better solution, leading Jassy to talk about DynamoDB and ElastiCache (which currently supports Redis and Memcached). To expand the functionality and utility of DynamoDB, Jassy announces DynamoDB Global Tables. DynamoDB Global Tables is the first fully-managed, multi-master, multi-region database. DynamoDB Global Tables enables low-latency reads and writes to locally available tables. It’s generally available today.

Jassy next announces DynamoDB Backup and Restore, to simplify the process of backing up and restoring data from/to DynamoDB databases. This new offering will enable customers to back up hundreds of terabytes of data with no interruption or performance impact. This offering is generally available today, with point-in-time restore coming in 2018.

To better enable using data across multiple databases, Jassy announces the launch of Amazon Neptune, a fully-managed graph database. Neptune supports multiple graph models, is fast and scalable, enables greater reliability with multiple replicas across AZs, and is easy to use with support for multiple graph query languages.

This leads to the third song, “Congregation” by the Foo Fighters. Jassy calls out the apparent contradiction in the song about having blind faith but not false hope, and compares that to the conviction that builders have when building out great ideas even when they’re not sure it will work. Getting feedback from customers is one way to help with this, and Jassy says that analytics are the answer here. Naturally, AWS has great analytics, so Jassy talks about the various solutions that AWS has to offer.

In the realm of data lakes, Jassy calls out S3 as the most popular choice for data lakes today, and takes a few minutes to talk about the advantages of S3 (he again refers to the Gartner Magic Quadrant to show S3 in a strong leadership position). S3’s position is further strengthened by ties to things like Amazon Athena, Amazon EMR, Amazon Redshift, Amazon Elasticsearch Service, Amazon Kinesis, Amazon QuickSight, and AWS Glue.

At this point, Jassy brings out Roy Joseph, Managing Director at Goldman Sachs, to talk about how Goldman uses analytics on AWS. Joseph stresses Goldman’s position as a source of innovation; 25% of Goldman’s employees are engineers who have written 1.5B lines of code across more than 7K applications. In order to compete effectively, Joseph says that Goldman Sachs needs strong engineering, risk management, and distribution. Three examples shared by Joseph include Marcus (consumer retail loans), Marquee (access to risk and pricing platform), and Symphony (secure messaging and collaboration; originally internal-only but now seeing growth as an inter-bank platform). So why public cloud? According to Joseph, a greater demand for risk management drives a need for more calculations, which in turn means more compute capacity—and the public cloud was the best way to satisfy that need. That being said, Joseph outlined some concerns that Goldman had to overcome: extending an internally-built management framework and ensuring data privacy. To help ensure data privacy, Goldman worked with AWS to create a “BYOK” (Bring Your Own Key) solution for key management.

Jassy returns to the stage to continue the discussion around analytics. To help customers perform analytics on the correct subset of data that might be stored in S3, Jassy announces S3 Select, the ability to use standard SQL statements to “filter” out or select the correct subset of S3 data. Jassy shares some TPC-DS benchmarks on Presto queries (8 seconds without S3 Select, 1.8 seconds [4.5x faster] with S3 Select).

Jassy next announces Glacier Select, which allows you to run queries directly against data stored in Amazon Glacier. This is generally available today.

Shifting focus slightly, Jassy takes the conversation toward machine learning, and asks the house band to play another song. This time it’s “Let it Rain” by Eric Clapton, and Jassy says the lyrics of the song reflect the desire of builders for machine learning to be easier to use and embrace than it is right now.

Jassy says that Amazon has been doing machine learning for 20 years, and points to things like Amazon’s personalized recommendations, or Alexa’s natural language understanding, or the pick paths Amazon uses for the robots in the warehouse. This makes AWS well-positioned to make machine learning easier to use and consume.

According to Jassy, there are three layers to machine learning.

The bottom layer is for expert ML practitioners who deeply understand learning models and frameworks, and Jassy reiterates AWS’ support for all the various major frameworks and interfaces customers want to use.

The middle layer is for everyday developers who aren’t ML experts, but the tooling at this layer is still too complicated for most users. To help with these challenges, Jassy introduces Amazon SageMaker (which leverages the open source Jupyter project). SageMaker provides built-in, high performance algorithms, but doesn’t prevent users from bringing their own algorithms and frameworks. SageMaker also greatly simplifies training and tuning, and helps automate the deployment/operation of machine learning in production.

To further help get machine learning into the hands of developers, Jassy announces DeepLens, the world’s first HD video camera with built-in machine learning support. Jassy brings out Dr. Matt Wood to talk more about DeepLens and SageMaker. After talking for a few minutes, Wood does a demo of DeepLens performing album identification and facial expression recognition.

The top layer, according to Jassy, is a set of application services that leverage machine learning. Examples here are Lex, Polly, and Rekognition. Jassy announces Rekognition Video, which provides both real-time and batch video analysis (like what Rekognition does for photos). To help get video/audio data into AWS, Jassy announces Amazon Kinesis Video Streams. Rekognition Video is deeply integrated with Kinesis Video Streams.

On the language side (as opposed to video), Jassy announces Amazon Transcribe to convert speech into accurate, grammatically correct text (initially available with English and Spanish). In the near future, Transcribe will support multiple speakers and custom dictionaries.

Jassy also announces Amazon Translate, which does real-time language translation as well as batch translation. It will support automatic language detection in the near future.

Next, Jassy announces Amazon Comprehend, a fully-managed natural language processing service. It analyzes information in text and identifies things like entities (people, places, things), key phrases, sentiment, and the language of the content. Comprehend can not only identify information in a single document, but can also be used to perform topic modeling across large numbers of documents.
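As a quick illustration of how a developer might call Comprehend once it’s available (the text below is arbitrary, and the exact CLI syntax is an assumption based on the service as described):

aws comprehend detect-sentiment \
    --language-code en \
    --text "The keynote announcements were fantastic."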

To talk a bit about how the NFL is using Amazon and machine learning, Jassy brings out Michelle McKenna-Doyle, SVP and CIO of the NFL. McKenna-Doyle shares some details on Next Gen Stats (NGS), which spans AWS services like Lambda, CloudFront, DynamoDB, EC2, S3, EMR, and the Amazon API Gateway (among others). NGS generates 3TB of data for every week of NFL games. McKenna-Doyle also talks briefly about future plans for incorporating machine learning and artificial intelligence into the NFL’s NGS plans (to do things like formation detection, route detection, and key event identification).

As McKenna-Doyle leaves the stage, the house band kicks up again with another song (remember there are five songs, as outlined by Jassy). This one is “The Waiting” by Tom Petty, and Jassy connects the lyrics to IoT and edge devices.

In order to get out of the keynote in a timely fashion, I’m wrapping up the liveblog here (sorry for the abbreviated coverage).

Categories: Scott Lowe

Liveblog: Deep Dive on Amazon Elastic File System

Scott Lowe's Blog - Tue, 11/28/2017 - 18:00

This is a liveblog of the AWS re:Invent 2017 session titled “Deep Dive on Amazon Elastic File System (EFS).” The presenters are Edward Naim and Darryl Osborne, both with AWS. This is my last session of day 2 of re:Invent; thus far, most of my time has been spent in hands-on workshops with only a few breakout sessions today. EFS is a topic I’ve watched, but haven’t had time to really dig into, so I’m looking forward to this session.

Naim kicks off the session by looking at the four phases users go through when they are choosing/adopting a storage solution:

  1. Choosing the right storage solution
  2. Testing and optimizing
  3. Ingest (loading data)
  4. Running it (operating it in production)

Starting with Phase 1, Naim outlines the three main things people think about: storage type, features and performance, and economics (how much it costs). Diving into storage type in a bit more detail, Naim talks about file storage, block storage, and object storage, and the characteristics of each approach. Having covered these approaches, Naim returns to file storage (naturally) and talks about why it’s popular:

  • Works natively with operating systems
  • Provides shared access while providing consistency guarantees and locking functionality
  • Provides a hierarchical namespace

Generally speaking, file storage hits the “sweet spot” between latency and throughput compared to block and object storage.

According to Naim, the key features of EFS are:

  • Simple (easy to use, operate, consume)
  • Elastic (no problem growing to accommodate capacity)
  • Scalable (consistent low latencies, thousands of concurrent connections)
  • Highly available and durable (all file system objects stored in multiple AZs)

Next, Naim shows the typical “customer logo” slide to show how widely adopted EFS is by the AWS customer base and some of the use cases seen. Although the presenter said he wasn’t going to go through all the logos, he spends more than a few minutes going through almost all of them.

With regards to security, EFS offers a number of security-related features. Network access to EFS is controlled via security groups and NACLs. File and directory access is controlled via POSIX permissions; administrative access is managed via IAM. Encryption is also supported, with key storage in KMS.

Naim next goes through some pricing comparisons showing how EFS is much cheaper than DIY storage solutions using EC2 instances and EBS volumes.

EFS completes the “trifecta” of storage solutions that AWS offers, which cover the whole range of storage types (EFS for file, EBS for block, S3 for object).

A new feature announced last week is EFS File Sync, which is designed to get data from on-premises file systems into EFS and operates up to 5x faster than traditional Linux copy tools. Naim indicates that Osborne will discuss this in more detail later in the session.

Next Naim discusses some architectural aspects of EFS, and how NFS clients on EC2 instances across various AZs might access EFS. EFS is POSIX-compliant and supports NFS v4.0 and v4.1. Of course, Naim points out that you should always test to verify that everything works as expected.

Now Osborne steps up to talk specifically about performance. When creating an EFS instance, you can select either “general purpose” (the default) or “max I/O” (designed for very large scale-out workloads, at the cost of slightly higher latencies). GP may have lower latencies, but has a ceiling of 7K operations/second. For GP file systems, AWS does expose a CloudWatch metric that allows users to see where they fall within the 7K ops/sec limit.

Osborne next compares EFS and EBS PIOPS (Provisioned IOPS). Again, there are trade-offs (there are always trade-offs with technology decisions). As with any file system, throughput is a function of I/O size, meaning that larger I/O sizes yield more throughput. To really take advantage of EFS, Osborne indicates that parallelism is the key.

Changing gears slightly, Osborne reviews the mount options for using EFS with Linux instances on EC2. Linux kernel 4.0 or higher is recommended, as is NFS v4.1. More details are available in the documentation, according to Osborne.
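As a rough sketch (the file system ID, region, and mount point below are placeholders; consult the EFS documentation for the current recommended options), mounting an EFS file system over NFS v4.1 looks something like this:

sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs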

Overall throughput is tied to file system capacity; sustained throughput is 50 MBps per TB of storage, with bursts up to 100 MBps. A 10 TB file system, for example, sustains 500 MBps.

Osborne shifts focus now to talk about ingest, i.e., getting data into EFS. He reviews a couple of different options (connecting on-premises servers to EFS via Direct Connect or using a third-party VPN solution). Both of these options can, according to Osborne, be used not only for migration but also for bursting or backup/disaster recovery. In order to optimize data copy/ingest, parallelism is again the key. Osborne reviews a few standard Linux copy tools (like rsync, cp, fpsync, or mcp). According to Osborne, rsync offers relatively poor performance. fpsync is essentially multi-threaded rsync; mcp is a drop-in replacement for cp developed by NASA; both of these tools offer better performance than their single-threaded counterparts. The best performance comes from combining tools with GNU Parallel.
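To make the parallelism point concrete, here’s a hedged sketch comparing a single-threaded copy against two multi-threaded approaches (the paths and job counts are arbitrary examples; fpsync ships as part of the fpart package):

# single-threaded baseline
rsync -a /data/ /mnt/efs/data/

# multi-threaded: fpsync runs multiple concurrent rsync workers
fpsync -n 16 /data/ /mnt/efs/data/

# or drive cp with GNU Parallel, one job per top-level directory
ls /data | parallel -j 16 cp -r /data/{} /mnt/efs/data/{}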

This dicussion of ingest performance and tools brings Osborne back to the topic of EFS File Sync, which is a multi-threaded tool that uses parallelism to maximize throughput, and offers encrypted transfers to EFS for security. EFS File Sync can be used to transfer from on-premises to EFS, between EFS instances in different regions, or from DIY shared storage solutions to EFS.

Next, Osborne shows a recorded video of using EFS File Sync to copy data between two different EFS instances. The demo shows copying roughly 20GB of data in about 4 minutes.

Naturally, you can move objects from Amazon S3 into EFS; this would involve using an EC2 instance that accesses S3 (perhaps via the AWS CLI) and an NFS mount backed by EFS. Osborne recommends maximizing parallelism to get the best possible ingest performance. GNU Parallel comes up here again.
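A hedged sketch of that approach from an EC2 instance (the bucket name and paths are hypothetical):

# simple, single-process copy from S3 to an EFS-backed mount
aws s3 sync s3://my-bucket/data /mnt/efs/data

# parallelize across known prefixes for better throughput
printf '2015\n2016\n2017\n' | parallel -j 8 aws s3 sync s3://my-bucket/data/{} /mnt/efs/data/{}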

Naim steps up to take over to talk about operations. EFS exposes a number of CloudWatch metrics, and all EFS API calls can be logged to CloudTrail.

Osborne comes back up to talk about some reference architectures using EFS. The first example is a highly-available WordPress architecture, followed by similar architectures for Drupal and Magento. Another example architecture is an EFS backup solution implemented via a CloudFormation template.

Wrapping up the session, Naim reviews some additional resources available via the Amazon web site, and announces a new series of storage-focused training classes that provide more in-depth training on S3, EFS, and EBS. At this point, Naim closes out the session.

Categories: Scott Lowe

Liveblog: IPv6 in the Cloud - Protocol and Service Overview

Scott Lowe's Blog - Tue, 11/28/2017 - 13:00

This is a liveblog of an AWS re:Invent 2017 breakout session titled “IPv6 in the Cloud: Protocol and Service Overview.” The presenter’s name is Alan Halachmi, who is a Senior Manager of Solutions Architecture at AWS. As with so many of the other breakout sessions and workshops here at re:Invent this year, the queues to get into the session are long and it’s expected that the session will be completely full.

Halachmi starts the session promptly at 11:30am (the scheduled start time) by reviewing the current state of IPv4 exhaustion, then quickly moves to a “state of the state” regarding IPv6 adoption on the Internet. Global IPv6 adoption is currently around 22%, and is expected to hit 25% by the end of the year. Mobile and Internet of Things (IoT) are driving most of the growth, according to Halachmi. T-Mobile, for example, now has 89% of their infrastructure running on IPv6.

Transitioning again rather quickly, Halachmi moves into an overview of the IPv6 protocol itself. IPv4 uses a 32-bit address space; IPv6 uses a 128-bit address space (29 orders of magnitude larger than IPv4). IPv4 uses dotted decimal with CIDR (Classless Interdomain Routing) notation; IPv6 uses colon-separated hextet notation with CIDR. To help keep IPv6 addresses more human-readable, we can omit leading zeroes. You can also use the double-colon notation to represent any contiguous group of zeroes.
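For example, the fully expanded address 2001:0db8:0000:0000:0000:0000:0000:0001 can be written as 2001:db8::1, with the leading zeroes dropped from each hextet and the contiguous run of all-zero hextets collapsed to a double colon.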

IPv6 also doesn’t use broadcast addresses, instead using multicast (focusing on the ff00::/8 range). IPv6 also has some special unicast addresses:

  • The unspecified address (all zeroes) indicates the absence of an address; this is primarily used in Duplicate Address Detection (DAD)
  • The loopback address is used to communicate with itself (the IPv6 address is ::1, equivalent to IPv4 127.0.0.1)
  • The link local address (LLA) is scoped to a specific link (usable only on the local link; equivalent to the IPv4 169.254.0.0/16 range). In IPv4, the link local address is typically ephemeral; in IPv6, it’s required. The typical LLA prefix is fe80::/64, and the Interface Identifier (IID) is generated a) manually, b) systematically using modified EUI-64, or c) randomly. DAD is used in almost all cases.
  • The unique local address (ULA) can be roughly compared to RFC 1918 in the IPv4 world. However, the ULA is designed to be globally unique, and comes out of the fc00::/7 range. Halachmi is not a fan of the ULA, and doesn’t go into any additional detail.
  • The Global Unicast Address (GUA), is a globally-unique address that allows IPv6-equipped workloads to communicate end-to-end without any network address translation (NAT). GUAs have both a prefix (64-bit) and an IID (also 64 bits; EUI-64 has fallen out of favor for generating the IID portion of the GUA). There are several ways to assign GUAs: manually, systematically (via Router Advertisements [RAs] and SLAAC [Stateless Address Autoconfiguration]), dynamically via DHCPv6 (materially different than DHCPv4), or randomly. DAD is almost always used to prevent duplicate addresses.

EUI-64 takes a MAC (Media Access Control) address, which is 48 bits long, and converts it to a 64-bit address. First, invert the 7th most significant bit; then insert ff:fe into the middle of the address (after the first 24 bits and before the second 24 bits).
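A quick worked example (the MAC address is made up): starting from 00:25:96:12:34:56, inserting ff:fe into the middle yields 00:25:96:ff:fe:12:34:56, and inverting the 7th most significant bit of the first octet (00000000 becomes 00000010, so 00 becomes 02) produces the IID 0225:96ff:fe12:3456, giving a link local address of fe80::225:96ff:fe12:3456.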

So how do IPv6 nodes communicate across the network? Halachmi quickly reviews IPv4 and ARP, then moves into the equivalent in IPv6. IPv6 doesn’t use ARP; it uses solicited-node multicast, using ff02::1:ff00:0/104 and adding the least significant 24 bits from the target IPv6 node address. At layer 2, this translates into an Ethernet address in the 33:33:00:00:00:00 range (again using some of the least significant bits from the target node’s IPv6 address).
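As a worked example (again with a made-up address): a target node with the address fe80::2aa:ff:fe28:9c5a has low-order 24 bits of 28:9c:5a, so its solicited-node multicast address is ff02::1:ff28:9c5a, which in turn maps to the Ethernet multicast address 33:33:ff:28:9c:5a at layer 2.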

Halachmi now moves on to discussing how to access resources on an IPv6 network. The first example uses dual-stack configurations (both IPv4 and IPv6 addresses on a node) and relies on DNS A (IPv4) or AAAA (IPv6) record resolution.

A few other notable differences about IPv6:

  • End-to-end philosophy (no NAT, privacy is not equal to security, public IP is not equal to reachability)
  • Intermediate nodes may not fragment packets (path MTU discovery using ICMPv6)
  • Maximum payload per packet jumps to about 4GB
  • Options extensibility with extension header chaining

Having now covered the IPv6 protocol, Halachmi moves on to discussing IPv6 at AWS. The first IPv6-enabled service at AWS was the IoT service. The IoT data plane is IPv4 and IPv6; the control plane is IPv4 only. S3 is also IPv6- and IPv4-enabled; just add .dualstack in the fully-qualified domain name (same goes for S3 Transfer Acceleration). CloudFront supports IPv6; just enable it in the distribution. (Halachmi does point out a couple considerations regarding the use of cookies to limit access, and points attendees to the documentation for CloudFront.)
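For example, a bucket normally addressed as my-bucket.s3.amazonaws.com can be reached over IPv6 via the dual-stack endpoint my-bucket.s3.dualstack.us-east-1.amazonaws.com (the bucket name and region here are placeholders).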

Continuing, Halachmi points out that CloudFront will support IPv6 from clients, but talks IPv4 to the origin. This can help with transitions from IPv4 to IPv6.

WAF also supports IPv6 (has supported it since launch). Route 53 supports quad-A records (for returning an IPv6 address), and for the last year or so has also supported DNS queries over IPv6.

Coming to VPCs, Halachmi points out that VPC does support IPv6, and takes some time to drill down into a few details/tenets of the implementation. These tenets include end-to-end connectivity to deliver on the IPv6 promise, offering a dual-stack solution, and a couple others I couldn’t capture. IPv6 in VPCs doesn’t support ULAs. GUAs come from a fixed /56 block, and subnets are of a fixed /64 size. IPv6 in VPCs also doesn’t support privacy/temporary addresses.

What about assignment of addresses? Halachmi points out that you can do manual/static assignment, but it isn’t recommended. SLAAC was investigated, but this didn’t work well for the AWS VPC use case. Instead, AWS uses DHCPv6 (the actual implementation is stateless and derived from the topology database).

Regarding security considerations, IPv6 is not on by default and must be enabled per interface. Even if the OS in an instance is IPv6-enabled and ends up with an LLA, the VPC won’t pass IPv6 traffic until you assign a GUA (i.e., link-local connectivity doesn’t work without a GUA). Route tables are not auto-updated, except for the GUA CIDR. AWS won’t modify security groups or NACLs to allow IPv6 traffic unless the security group/NACL is unmodified/left at default settings.

To provide a semantically-similar NAT mechanism for IPv6, AWS introduced the EIGW (Egress-only Internet Gateway).

Direct Connect also supports IPv6, as do Virtual Private Gateways. Direct Connect Gateway supports IPv6 as well.

Amazon WorkSpaces supports IPv6-based communications, as does AppStream 2.0. The AWS Application Load Balancer (ALB) supports IPv6, but this must be enabled at the time of creation. ALB can also be helpful in IPv4-to-IPv6 transitions, like CloudFront.

Halachmi points out that there is still lots of work to do, and welcomes feedback from customers on how AWS should prioritize the implementation of IPv6 across their APIs and services.

At this point, Halachmi ends the session and opens up for questions.

Categories: Scott Lowe

A Sample Makefile for Creating Blog Articles

Scott Lowe's Blog - Mon, 11/27/2017 - 13:00

In October of this year, I published a blog post talking about a sample Makefile for publishing blog articles. That post focused on the use of make and a Makefile for automating the process of publishing a blog post. This post is a companion to that post, and focuses on the use of a Makefile for automating the creation of blog posts.

Since early 2015, this site has been running as a static site with the content created using Markdown. In its first iteration as a static site, the HTML was generated using Jekyll and hosted on GitHub Pages. In the current iteration, the HTML is generated using Hugo, hosted on Amazon S3, and served via Amazon CloudFront. In both cases, the use of Markdown as the content format also required specific front-matter to instruct the static site generator how to create the HTML. For Hugo, the front-matter looks something like this (I use YAML, but other formats are supported):

---
author: slowe
categories: Explanation
date: 2017-11-27T12:00:00Z
tags:
- Writing
- Blogging
- Productivity
title: Sample Blog Post title
url: /2017/11/27/sample-blog-post-title/
---

There are obviously a lot of different ways to automate the creation of this front-matter (text expansion utilities, text editor snippets, macros, etc.). I chose to tackle this using a Makefile so that I had some level of independence from the specific editor or operating system I was using (make is easily accessible on both macOS and Linux). After some trial and error, I finally arrived at this Makefile:

POST_TEMPLATE := ~/Templates/blog-post-template.md
editor ?= /usr/local/bin/subl -n
name ?= new-blog-post
offset ?= +0d
DDATE := $(shell date -v $(offset) +%Y-%m-%d)
SDATE := $(shell date -v $(offset) +%Y/%m/%d)

new:
	@cat $(POST_TEMPLATE) | \
	sed "s/YYYY-MM-DD/$(DDATE)/g" | \
	sed "s|YYYY/MM/DD|$(SDATE)|g" | \
	sed "s|/title|/$(name)/|g" >> $(DDATE)-$(name).md
	@$(editor) $(DDATE)-$(name).md

I’m no expert with make, so there may be ways to optimize this (feel free to hit me up on Twitter with suggestions). This Makefile leverages two user-supplied values (default values are supplied in case the user doesn’t provide these values on the command line):

  • The first is name, which is used for both the filename and in the “url” value in the front-matter
  • The second is offset, which is an offset (typically in days, like “+1d”) from the current date (more on this in a moment)

With these two values, what the “new” target in this Makefile does is this:

  1. Sends the content of the file referenced by $(POST_TEMPLATE) to STDOUT.
  2. STDOUT is piped into the first sed command, which replaces the templated “YYYY-MM-DD” value for the “date” YAML front-matter with the current date, adjusted by the user-supplied offset variable (which is set to “+0d” if the user doesn’t supply a value). This date value is taken from the DDATE variable defined at the top of the file.
  3. The output of the first sed command is piped into the second sed command, which performs the same date manipulation, but this time on the templated value of the “url” YAML front-matter entry. (The value is taken from the SDATE variable defined at the top of the file.)
  4. The output of the second sed command is piped into the third sed command, which replaces the final section of the templated “url” front-matter with the value of the name variable passed in by the user (or the default value, if the user doesn’t supply a value).
  5. The final content is redirected into a file whose filename is constructed from the date (taken from the DDATE variable) and the value of name.

Using this Makefile is pretty straightforward:

  • If I type make or make new (no additional values), then it will take the current date and the default value of “new-blog-post”. For example, if I typed this today (Monday, 27 November), I’d get a filename of 2017-11-27-new-blog-post.md, and the YAML front-matter would have the appropriate values inserted. (In case you weren’t aware, typing just make works because “new” is the first target.)
  • If I type make new name=specific-name-for-post, then the resulting file will have the current date in the content and the filename, but the rest of the filename and the “url” YAML front-matter will use the value of name as provided on the command line. If I typed this today, I’d get a filename of 2017-11-27-specific-name-for-post.md.
  • If I type make new name=another-specific-name offset=+1d, then the generated filename will use the value of name (as will the “url” YAML front-matter entry), but the date will be one day in the future—as specified by the value of offset. If I typed this today, the file generated would be named 2017-11-28-another-specific-name.md.

In all these examples, the file is immediately opened in Sublime Text, as instructed by the default value of editor. If I wanted to change that, I can just type make new name=a-third-specific-name editor=atom and it would open in Atom instead. This provides some additional flexibility if needed.

I’ll continue to refine this Makefile as I learn more, and readers are encouraged to hit me on Twitter with suggestions for improvement. I hope this information proves useful to other bloggers using static site generators.

Categories: Scott Lowe

Installing MultiMarkdown 6 on Fedora 27

Scott Lowe's Blog - Sat, 11/25/2017 - 13:00

Long-time readers are probably aware that I’m a big fan of Markdown. Specifically, I prefer the MultiMarkdown variant that adds some additional extensions beyond “standard” Markdown. As such, I’ve long used Fletcher Penny’s MultiMarkdown processor (the latest version, version 6, is available on GitHub). While Fletcher offers binary builds for Windows and macOS, the Linux binary has to be compiled from source. In this post, I’ll provide the steps I followed to compile a MultiMarkdown binary for Fedora 27.

The “How to Compile” page on the MMD-6 Wiki is quite sparse, so a fair amount of trial-and-error was needed. To keep my main Fedora installation as clean as possible, I used Vagrant with the Libvirt provider to create a “build VM” based on the “fedora/27-cloud-base” box.

Once the VM was running, I installed the necessary packages to compile the source code. It turns out only the following packages were necessary:

sudo dnf install gcc make cmake gcc-c++

Then I downloaded the source code for MMD-6:

curl -LO https://github.com/fletcher/MultiMarkdown-6/archive/6.2.3.tar.gz

Unpacking the archive with tar created a MultiMarkdown-6-6.2.3 directory. Changing into that directory, then the instructions from the Wiki page worked as expected:

make
cd build
make

I did not run make test, though perhaps I should have, to ensure the build worked as expected. In any case, once the second make command was done, I was left with a multimarkdown binary that I copied out to my Fedora 27 host system via scp. Done!

Categories: Scott Lowe

Using Docker Machine with KVM and Libvirt

Scott Lowe's Blog - Sat, 11/25/2017 - 00:00

Docker Machine is, in my opinion, a useful and underrated tool. I’ve written before about using Docker Machine with various services/providers; for example, see this article on using Docker Machine with AWS, or this article on using Docker Machine with OpenStack. Docker Machine also supports local hypervisors, such as VMware Fusion or VirtualBox. In this post, I’ll show you how to use Docker Machine with KVM and Libvirt on a Linux host (I’m using Fedora 27 as an example).

Docker Machine ships with a bunch of different providers, but the KVM/Libvirt provider must be obtained separately (you can find it here on GitHub). Download a binary release (make sure it is named docker-machine-driver-kvm), mark it as executable, and place it somewhere in your PATH. Fedora 27 comes with KVM and the Libvirt daemon installed by default (in order to support the Boxes GUI virtualization app), but I found it helpful to also install the client-side tools:

sudo dnf install libvirt-client

This will make the virsh tool available, which is useful for viewing Libvirt-related resources. Once you have both the KVM/Libvirt driver and the Libvirt client tools installed, you can launch a VM:

docker-machine create -d kvm --kvm-network "docker-machines" machine1

It appears that, by default, the KVM/Libvirt provider uses a network named “docker-machines”, hence the need for the --kvm-network flag. (If you omit this flag, the provider will look for the “default” network, which may or may not exist on your system. It didn’t exist on my Fedora laptop. This is where having virsh available is really handy.)

(Note that if your user account isn’t a member of the “libvirt” group, you’ll get prompted for authentication for most every docker-machine command, even just listing machines.)

There are a few other flags that might be useful as well when creating a Docker-equipped VM with Docker Machine and the KVM/Libvirt driver:

  • Use the --kvm-cpu-count parameter to specify the number of virtual CPUs.
  • Use the --kvm-memory parameter to control the amount of RAM allocated to the VM.
  • Customize the name of the user used to log into the system (the default is “docker”) by using the --kvm-ssh-user parameter.
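Putting these flags together, a hedged example (the values are arbitrary) might look like this:

docker-machine create -d kvm \
    --kvm-network "docker-machines" \
    --kvm-cpu-count 2 \
    --kvm-memory 2048 \
    --kvm-ssh-user docker \
    machine2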

You can view all the parameters by running docker-machine create -d kvm --help.

Once the VM is up and running, you can use all the standard Docker Machine commands to work with this VM, just like with any other provider:

  • Use docker-machine ssh <name> to connect to the VM via SSH.
  • Use docker-machine env <name> to show the information needed to connect to the Docker Engine on the VM. (Use eval $(docker-machine env <name>) to load that information into the shell environment.)
  • Use docker-machine stop <name> to stop the VM, and docker-machine rm <name> to remove/delete the VM.

You might be wondering if there’s any value in using Docker Machine to run a VM on Linux when you can just run Docker directly on the Linux host. I haven’t fully tested the idea yet, but my initial thought is that it would enable users to run various different versions of the Docker Engine (using the --engine-install-url option, as I outlined here). If you have other ideas where this sort of arrangement might have value, I’d love to hear them; hit me up on Twitter.

Categories: Scott Lowe

Happy Thanksgiving 2017

Scott Lowe's Blog - Thu, 11/23/2017 - 13:00

In the US, today (Thursday, November 23) is Thanksgiving. I’d like to take a moment to reflect on the meaning of Thanksgiving.

Thanksgiving means different things to different people:

  • To folks outside the US, it’s often just a day with drastically reduced email volume and no interruptions from US-based coworkers. (Enjoy!)
  • To folks in the US, it’s a holiday filled with food (turkey, anyone?). There may also be family gatherings, football (American football, of course), and possibly some shopping. (There will most certainly be shopping tomorrow.)
  • To many people, it’s also a time to be thankful or grateful for the good things in their lives.
  • To Christians, like myself, it’s often a time to reflect on the blessings that God placed in your life. I know that I am quite blessed—blessed with a great family, an amazing wife, and the opportunity to work in a fast-paced industry (among many many other blessings).

Whatever Thanksgiving means to you, I hope that you enjoy the holiday. Happy Thanksgiving!

Categories: Scott Lowe

Installing Older Docker Client Binaries on Fedora

Scott Lowe's Blog - Wed, 11/22/2017 - 13:00

Sometimes there’s a need to have different versions of the Docker client binary available. On Linux this can be a bit challenging because you don’t want to install a “full” Docker package (which would also include the Docker daemon); you only need the binary. In this article, I’ll outline a process I followed to get multiple (older) versions of the Docker client binary on my Fedora 27 laptop.

The process has two steps:

  1. Download the RPMs for the older releases from the Docker Yum repository (for Fedora, that repository is here).
  2. Extract the files from the RPM without actually installing the RPM.

For step 1, you can use the curl program to download specific RPMs. For example, to download version 1.12.6 of the Docker client binary, you’d download the appropriate RPM like this:

curl -LO https://yum.dockerproject.org/repo/main/fedora/24/Packages/docker-engine-1.12.6-1.fc24.x86_64.rpm

You’ll note that the URL above appears to be tied to a particular Fedora version (24, in this case). However, that’s only significant/applicable for the entire RPM package; once you extract the specific binaries, you should have no issues running the binaries on a different version (I was able to run older versions of Docker—ranging from 1.9.1 to 1.13.1—on Fedora 27 with no issues).

Once you have the RPM, you can use rpm2cpio and cpio (as outlined in this article) to extract the files inside the RPM. For example, to extract the files from the RPM downloaded above, you’d use a command like this:

rpm2cpio docker-engine-1.12.6-1.fc24.x86_64.rpm | cpio -idmv

This will extract all the individual files in the RPM package. In this specific example, you’ll see two directories created: a usr directory and an etc directory. Digging into the usr directory, you’ll find the docker client binary in a bin subdirectory. This client binary is really the only thing we need from the package, so we can copy it out:

cp usr/bin/docker ./docker-1.12.6

You’ll note that I “versioned” the file in the name, so as to make it easier for multiple versions of the Docker client binary to co-exist on the same Linux system.

With the Docker client binary extracted, we can remove the extracted files and the downloaded RPM package:

rm -rf etc
rm -rf usr
rm docker-engine-1.12.6-1.fc24.x86_64.rpm

(Important warning: you’ll note that my references to usr and etc lack a preceding forward slash, meaning I’m referencing directories named usr and etc in the current directory. Be sure you do not include the leading slash, or you’ll be in a world of hurt.)

You can place the extracted binary wherever you’d like (I like to use /opt/docker/bin), and repeat the process for a different version. Whenever you need to run a particular version of the Docker client, you have (at least) three options:

  1. Somewhere in your PATH, create a symbolic link named docker that points to the version you want to run. Then, just run docker like you normally would.
  2. Specify the full path to the client binary you want to run.
  3. Temporarily alias docker to the particular binary version you want.

If you plan on having a “full” Docker package installed on your Linux system (quite handy, by the way), then option #1 may be a bit more complicated; you may prefer option #2 or option #3. Option #3 probably provides the best balance between ease-of-use and flexibility. Your mileage may vary, of course.
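For example, assuming the binary lives in /opt/docker/bin as suggested above, option #1 and option #3 would look something like this:

# option 1: create a symbolic link somewhere in your PATH
sudo ln -s /opt/docker/bin/docker-1.12.6 /usr/local/bin/docker

# option 3: temporarily alias docker for the current shell session
alias docker='/opt/docker/bin/docker-1.12.6'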

Here’s hoping others find this information useful as well!

Categories: Scott Lowe

Installing Postman on Fedora 27

Scott Lowe's Blog - Tue, 11/21/2017 - 20:00

I recently had a need to install the Postman native app on Fedora 27. The Postman site itself only provides a link to the download and a rather generic set of instructions for installing the Postman native app (a link to these instructions for Ubuntu 16.04 is also provided). There were not, however, any directions for Fedora. Hence, I’m posting the steps I took to set up the Postman native app on my Fedora 27 laptop.

(Note that these instructions will probably work with other versions of Fedora as well, but I’ve only used them on Fedora 27.)

Here are the steps I followed:

  1. Download the installation tarball, either via your browser of choice or via the command line. If you’d prefer to use the command line, this command should take care of you:

    curl -L https://www.getpostman.com/app/download/linux64 -o postman-linux-x64.tar.gz
  2. Unpack the tarball into the directory of your choice. I prefer to put third-party applications such as this into the /opt directory; you can (obviously) put it wherever you prefer. This command should do the trick:

    sudo tar xvzf postman-linux-x64.tar.gz -C /opt

    If you prefer a directory other than /opt, specify the appropriate directory in the command above.

  3. In my particular case, tar created a directory with an uppercase character (/opt/Postman) and some odd permissions (a strange user and group for ownership). I fixed those with mv and chown, respectively. You may or may not need to do anything.

  4. Create a symbolic link in a directory included in your PATH. In this example, I’m creating the symbolic link in /usr/local/bin, but you could use any directory included in your PATH:

    sudo ln -s /opt/postman/Postman /usr/local/bin/postman
  5. At this point, you should be able to launch Postman by just running postman from the terminal. However, ideally you’ll want to be able to use a graphical launcher. To do that, you need to create a “desktop launcher.” Create a file named postman.desktop in ~/.local/share/applications with these contents:

    [Desktop Entry]
    Name=Postman
    GenericName=API Client
    X-GNOME-FullName=Postman API Client
    Comment=Make and view REST API calls and responses
    Keywords=api;
    Exec=/opt/postman/Postman
    Terminal=false
    Type=Application
    Icon=/opt/postman/resources/app/assets/icon.png
    Categories=Development;Utilities;
  6. Log out and log back in, and after a few minutes you should be able to see a Postman icon in your list of applications. You can now launch Postman either by using the graphical launcher or by running postman in the terminal.

Enjoy!

Categories: Scott Lowe