Quick Start Guide

This guide will help you get started with Continuuity Loom. In this section, you will learn to provision a cluster using one of the preset templates.

Installing Continuuity Loom

Please follow the steps found in the Installation Guide. Once Loom is successfully installed, start all of its components: the Loom server, the provisioners, and the UI.

Getting Started

Open the Loom UI in a browser at http://<loom-host>:<loom-ui-port>/ and log in as an administrator. The default password is ‘admin’.

Login as an administrator

This will take you to the administrator home screen. The page, shown below, displays metrics for clusters currently running on the system. Note that the ‘All Nodes’ metric counts every node provisioned since the system was installed; it is a cumulative historical total that includes deleted nodes. This page also shows the ‘Catalog’, a list of ‘templates’ for provisioning clusters. Several default templates are available out of the box.

Administrator home screen

Configuring a Provider

To start provisioning machines, you must first specify an IaaS provider on which the clusters will be created. Click on the ‘Providers’ icon on the sidebar to the left. Several defaults should already be available on this page, namely OpenStack, Rackspace, and Joyent. Choose the provider you want to use for this tutorial, then click on its name to navigate to its edit screen.

Each provider type has fields specific to your own provider and account. These inputs may include settings such as a username and API key, which can be obtained through the provider’s own system. If you do not already have an account with the provider, you can register for one on the provider’s website. Next, we go through how to set up each of the three default providers. You only need to set up the provider you are using.

Rackspace

An API key, username, and region are required to use Rackspace (for more information on how to obtain your API key, see this page).

Configuring a Rackspace provider

Enter the necessary fields and click on ‘Save’ to persist them.

Joyent

Joyent requires a region, key file, key name, user, and API version. The key file must be present on all machines running the Provisioner, must be owned by the user running Continuuity Loom, and must be readable only by that user (0400 permissions).
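The key file requirement above can be checked from a shell on each provisioner host. The following is a sketch that uses a temporary file as a stand-in for your real key file path:

```shell
# Sketch: lock down a provider key file and verify its permissions.
# mktemp stands in for the real key file path on your provisioner host.
KEYFILE=$(mktemp)
printf 'fake key material\n' > "$KEYFILE"   # in practice, copy your private key here

chmod 0400 "$KEYFILE"       # readable only by the owning user
stat -c '%a' "$KEYFILE"     # prints 400 with GNU stat
```

Repeat this check on every machine running a Provisioner; the key file must also be owned by the user running Continuuity Loom.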

Configuring a Joyent provider

Enter the necessary fields and click on ‘Save’ to persist them.

OpenStack

OpenStack support has been extensively tested on Havana, and Grizzly is also supported out of the box. OpenStack support has some limitations that are described here. Several of these limitations will be eliminated in future releases of Continuuity Loom. The first step is to configure the OpenStack provider to use your credentials. OpenStack requires a key file, auth URL, password, key name, tenant, and user. The key file must be present on all machines running the Provisioner, must be owned by the user running Continuuity Loom, and must be readable only by that user (0400 permissions).

Configuring an OpenStack provider

Next, we need to configure the default hardware types so they can be used with your instance of OpenStack. Navigate to the Hardware tab at the top of the screen and edit each hardware type in the list (small, medium, and large). You will notice that joyent and rackspace are already configured for each hardware type with their corresponding flavors; this is because their flavors are public and unchanging, whereas your OpenStack instance may use its own. Click on the ‘Add Provider’ button, change the provider to openstack, and enter your OpenStack instance’s flavor identifier for the corresponding hardware type. You may need to contact your OpenStack administrator to obtain this information.

Configuring an OpenStack hardware type

Next, we need to configure the default image types. Navigate to the Images tab at the top of the screen and edit each image type in the list (centos6 and ubuntu12). Click on the ‘Add Provider’ button, change the provider to openstack, and enter your OpenStack instance’s image identifier for the corresponding image type. You may need to contact your OpenStack administrator to obtain this information.

Configuring an OpenStack image type

Provisioning your First Cluster

Click on the ‘Clusters’ icon, the right-most icon on the top bar. This page lists all provisioned clusters that are accessible to the logged-in user.

Creating a cluster

Click on the ‘Create’ button at the top right to enter the cluster creation page. In the ‘Name’ field, enter ‘loom-quickstart-01’ as the name of the cluster to create. The ‘Template’ field specifies which template in the catalog to use for this cluster. For this tutorial, let’s create a distributed Hadoop and HBase cluster.

Select ‘hadoop-hbase-distributed’ from the ‘Template’ drop-down box. Enter the number of nodes you want your cluster to have (for example, 5) in the field labeled ‘Number of machines’.
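For reference, the UI form submits a request equivalent to a small JSON body sent to the Loom server’s cluster-creation endpoint. The field names below are assumptions based on Loom’s REST conventions, so check your server’s API documentation before scripting against it:

```json
{
  "name": "loom-quickstart-01",
  "clusterTemplate": "hadoop-hbase-distributed",
  "numMachines": 5
}
```

A request like this would typically be POSTed with your user credentials supplied in the request headers.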

Display the advanced settings menu by clicking on the small triangle next to the label ‘Advanced’. This lists the default settings for the ‘hadoop-hbase-distributed’ template. If you chose a provider other than Rackspace in the previous section, click on the drop down menu labeled ‘Provider’ to select the provider you want.

Advanced settings

To start provisioning, click on ‘Create’ at the bottom of the page (not shown in the image above). This operation takes you back to the Clusters home screen, where you can monitor the progress and status of your cluster. Creating a cluster may take several minutes.

Creation running

Accessing the Cluster

Once creation is complete, the cluster is ready for use.

For more information on your cluster, click on the name ‘loom-quickstart-01’ on the Clusters home screen. On the cluster description screen, nodes are grouped by the set of services available on them. To see node details, click on the white triangle next to a service set to expand it. The expanded view shows the attributes of each node.

Cluster description and details