
Building Your First Certified Kubernetes Cluster On-Premises, Part 2 – iSCSI Support


In my first post, I discussed how to build your first certified on-premises Kubernetes cluster using Docker Enterprise 3.0. In this post, I will explain how to build and configure a Kubernetes external storage provisioner so that iSCSI storage works with DKS (the Docker Kubernetes Service).

According to a 2018 IDC survey on containers, 85% of organizations that have adopted containers use them for production apps, and 83% use containers on multiple public clouds (3.7 clouds on average). Interestingly, 55% of them run containers on-premises, compared to 45% that run containers in the public cloud. This shows that containers are now even more common on-premises than they are in the cloud.

There are several reasons why on-premises remains the primary focus of Enterprise IT:

  • Direct access to the underlying hardware and OS.
  • Full control over the host OS and container environment.
  • Ability to use certain hardware features, such as storage or processor-specific operations.
  • Legacy applications that still run directly on hardware or can’t easily be moved to the cloud.

Organizations that run Kubernetes on-premises need to connect it to local storage. In this post, I’ll explain how to use iSCSI storage with Kubernetes.

iSCSI Support for Kubernetes

Some of you are steeped in storage and dream in iSCSI, NFS and block. If that’s you, skip this intro section. If you’re just learning about storage, I’ve included a helpful primer on iSCSI in Kubernetes below.

iSCSI (Internet Small Computer System Interface) is an industry standard that allows the SCSI block I/O protocol to be sent over the network, using a TCP/IP-based protocol to establish and manage connections between IP-based storage devices, hosts and clients. iSCSI SAN solutions (often called IP SANs) consist of iSCSI initiators (software drivers or adapters) in the application servers, connected to iSCSI arrays via standard Gigabit Ethernet switches, routers and cables.

One of the major advantages of iSCSI technology is the minimal investment it requires. iSCSI allows businesses to deploy and scale storage without changing their existing network infrastructure. iSCSI creates IP-based SANs, which lets organizations capitalize on their existing IP infrastructure by delivering block-based storage across an IP network. Organizations do not have to invest in a new storage-only infrastructure, such as Fibre Channel (FC), which can be costly. iSCSI also allows companies to capitalize on existing in-house network expertise to build IP-based SANs.

New to Kubernetes? Check out Musical TechSeries of KubeLabs.


Docker Enterprise 3.0, Kubernetes and iSCSI

Docker Enterprise 3.0 brings iSCSI support for Kubernetes for the first time. iSCSI support in Docker UCP enables Kubernetes workloads to consume persistent storage from iSCSI targets. As of today, iSCSI support in Docker Enterprise is enabled for Linux workloads, but not yet enabled for Windows.


Configuring iSCSI in Kubernetes via UCP

You’ll need to cover these prerequisites to get started with iSCSI for Kubernetes:

  • Configuring an iSCSI target 
  • Setting up UCP nodes as iSCSI initiators
  • Setting up external provisioner & Kubernetes objects
  • Configuring UCP

To demonstrate iSCSI support for Kubernetes under Docker Enterprise 3.0, I will be using Dell iSCSI SAN storage.

Configuring an iSCSI Target

An iSCSI target refers to a server that shares storage and receives iSCSI commands from an initiator. An iSCSI target can run on dedicated/stand-alone hardware, or can be configured on a hyper-converged system to run alongside container workloads on UCP nodes. To provide access to the storage device, each target is configured with one or more logical unit numbers (LUNs). Note that the steps here are specific to DellEMC iSCSI storage. They will vary based on the storage platform you’re using, but the workflow should be similar for any iSCSI device.

Step 1 – Configuring the iSCSI Storage Array

First, we configure the DellEMC iSCSI SAN storage as an iSCSI target. Before we use the Storage Manager software to manage the storage arrays, we need to use the DellEMC Modular Disk Configuration Utility (MDCU) to configure iSCSI on each host connected to the storage array. Run the MDCU utility on a Windows system and click the “Auto Discover” option to discover the storage array automatically.

You should now be able to discover your storage arrays successfully.

Next, launch Dell MD Storage Manager. If this is the first storage array to be set up, the Add New Storage Array window appears. Choose Automatic and click OK. It may take several minutes for the discovery process to complete; closing the discovery status window before it finishes will cancel the discovery. After discovery is complete, a confirmation screen appears. Click Close to dismiss it.

When discovery is complete, the name of the first storage array found appears under the Summary tab in MD Storage Manager. Click the Initial Setup Tasks option to see links to the remaining post-installation tasks.

Step 2 – Configuring the iSCSI ports on the Storage Array

To configure the iSCSI ports on the storage array, open MD Storage Manager and click the iSCSI tab under the Storage Array section to configure the iSCSI host ports.

Step 3 – Perform Target Discovery from the iSCSI Initiator

The iSCSI server must be accessible to all UCP worker nodes so that workloads using iSCSI volumes can be scheduled anywhere on the cluster. If your UCP worker nodes run Ubuntu, you can install the open-iscsi package using apt as shown below:

$ sudo apt install open-iscsi


If you are running CentOS or RHEL, use the command below:

sudo yum install -y iscsi-initiator-utils
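
Kubernetes performs the actual iSCSI login when a workload mounts a volume, but if you want to sanity-check connectivity from a UCP node, you can run a manual SendTargets discovery against the array with iscsiadm. This is purely an optional verification step; <target-portal-ip> below is a placeholder for your array’s iSCSI portal address.

# Optional sanity check – replace <target-portal-ip> with your array's iSCSI portal address
$ sudo iscsiadm -m discovery -t sendtargets -p <target-portal-ip>:3260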


Set up UCP nodes as iSCSI initiators

Open /etc/iscsi/initiatorname.iscsi on each of the UCP nodes and configure an initiator name for each node, as shown below:

$ sudo cat /etc/iscsi/initiatorname.iscsi
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator.  The InitiatorName must be unique
## for each iSCSI initiator.  Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:343822ee8898

Alternatively, you can use the command below to add the entry directly. The IQN must be in the following format: iqn.YYYY-MM.reverse.domain.name:OptionalIdentifier.

 sudo sh -c 'echo "InitiatorName=iqn.<1993-08.org.debian>:<uniqueID>" > /etc/iscsi/initiatorname.iscsi'


Restart the iSCSI service

 sudo systemctl restart iscsid


Step 4 – Configuring Host Access

To specify which host systems will access virtual disks on the storage array, follow the steps below.

Launch MD Storage Manager and click the “Configure” option to set up Host Access. Enter the hostname and select the appropriate host type from the drop-down menu. Click “Next” to see the known iSCSI initiators, then click Next again and enter the iSCSI initiator name to complete the process.

This completes the configuration of the iSCSI target system. 

Configuring UCP Nodes as iSCSI Initiators

To configure the UCP nodes as iSCSI initiators, we need to make changes to the UCP configuration file. There are two ways to configure UCP – through the web interface, or by exporting and importing the UCP configuration as a TOML file. We will use the latter option.

For this demonstration, we will use the config-toml API to export the current settings and write them to a file. I assume you have a two-node Docker Enterprise 3.0 cluster with Kubernetes already configured. Open the UCP master terminal and run the commands below.

Get an authtoken

$ AUTHTOKEN=$(curl --silent --insecure --data '{"username":"ajeetraina","password":"XXXXX"}' https://100.98.26.115/auth/login | jq --raw-output .auth_token)


Download the config file

curl --silent --insecure -X GET "https://100.98.26.115/api/ucp/config-toml" -H  "accept: application/toml" -H  "Authorization: Bearer $AUTHTOKEN" > ucp-config.toml


Editing the ucp-config.toml file

We only need to focus on three entries in the ucp-config.toml file for iSCSI configuration (see the snippet after this list):

  • storage-iscsi = true – enables iSCSI-based persistent volumes in Kubernetes.
  • iscsiadm-path = <path> – specifies the absolute path of the iscsiadm binary on the host. The default value is “/usr/sbin/iscsiadm”.
  • iscsidb-path = <path> – specifies the path of the iSCSI database on the host. The default value is “/etc/iscsi”.
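
After editing, the relevant lines of ucp-config.toml should look roughly like the snippet below (shown here with the default paths; in the exported file these keys sit under the [cluster_config] section):

[cluster_config]
  # ...existing settings remain unchanged...
  storage-iscsi = true
  iscsiadm-path = "/usr/sbin/iscsiadm"
  iscsidb-path = "/etc/iscsi"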

Once these changes are made, import the updated configuration, which can be achieved with the command below:

Upload the Config file

curl --insecure -X PUT -H  "accept: application/toml" -H "Authorization: Bearer $AUTHTOKEN" --upload-file 'ucp-config.toml' https://100.98.26.115/api/ucp/config-toml

{"message":"Your UCP config has been set. It may take a moment for some config changes to propagate throughout the cluster."}

This completes the configuration to implement the UCP nodes as iSCSI initiators.

Setting up External Provisioner

An external provisioner is a piece of software, running out of process from Kubernetes, that is responsible for creating and deleting persistent volumes. It allows a cluster user to order storage with a pre-defined type and configuration. External provisioners watch the Kubernetes API server for PersistentVolumeClaims and create PersistentVolumes accordingly. This is different from the in-tree dynamic provisioners that run as part of the Kubernetes controller manager. Check out the GitHub repository that provides a library for writing external provisioners.

Kubernetes External Provisioner for Dell iSCSI SAN

I will leverage the Kubernetes external provisioner for Dell iSCSI SAN, forked from the nmaupu repository on GitHub, and demonstrate how to configure it as an external storage provisioner. Dell-provisioner is a Kubernetes external provisioner: it creates and deletes volumes and associates LUNs using the Dell smcli command-line tool on a remote SAN whenever a PersistentVolumeClaim appears on the cluster.

Building the Storage Provisioner

In this section, we will see how to build our own external Dell provisioner and then use it with Kubernetes resources.

Before you begin

I assume that you have followed the blog up to this point and already have the DellEMC iSCSI SAN configured, up and running. Ensure that both hosts – the UCP master and the worker node – are mapped to the target storage.

  • Install Ubuntu 18.04 either on a VM or on a bare-metal system
  • Install make:
sudo apt-get install make
  • Install glide:
sudo add-apt-repository ppa:masterminds/glide && sudo apt-get update
sudo apt-get install glide
  • Install go (see the note on PATH right after this list):
sudo wget https://dl.google.com/go/go1.13.1.linux-amd64.tar.gz
sudo tar xvf go1.13.1.linux-amd64.tar.gz
sudo mv go /home/dell/
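
Since the Go toolchain was extracted to /home/dell/go above rather than the usual /usr/local/go, you will most likely also need to point your shell at it before running make. A minimal sketch, assuming that location:

# Assumes Go was moved to /home/dell/go as in the step above
export GOROOT=/home/dell/go
export PATH=$PATH:$GOROOT/bin
go version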

Completing the iSCSI Setup

Congrats! You’re done with the prerequisites. Now we can move on to completing the iSCSI setup.

Cloning the Repository

sudo git clone https://github.com/collabnix/dell-provisioner

Building the dell-provisioner

sudo make vendor && make

Once this command succeeds, the binary is available at bin/dell-provisioner-linux-amd64.

Provisioning the Dell Provisioner

To use dell-provisioner, you need a Docker image with the Dell smcli tool available. The Dockerfile is available at https://github.com/collabnix/dell-provisioner/blob/master/Dockerfile

Building Dell Provisioner Docker Image

Run the command below to build an SMcli-based Docker image:

$ docker build -t ajeetraina/dellsmcli-docker .

Testing the Dell Provisioner Docker Image

$ sudo docker run -itd -v /tmp:/tmp ajeetraina/dellsmcli-docker 100.98.26.154 -p <password> -c "show storageArray profile;" -o /tmp/storageprofile.txt
$ cat /tmp/storageprofile.txt
PROFILE FOR STORAGE ARRAY: mykube (Sun Oct 20 04:44:13 UTC 2019)
SUMMARY
------------------------------
Number of RAID controller modules:              2
High performance tier RAID controller modules:  Disabled
Number of disk groups:                          1
RAID 6:                                         Enabled
...

The output above shows that the Dell Provisioner Docker image works and can fetch storage system information from the array.

Dynamic Provisioning

To manage storage, Kubernetes provides two API resources – PersistentVolume (PV) and PersistentVolumeClaim (PVC). A PV is a piece of storage in the cluster that can be dynamically provisioned using Storage Classes (SC).

To dynamically provision persistent storage, you need to define the type and configuration of the storage you want. An SC abstracts the underlying storage platform so that you don’t have to know all the details (supported sizes, IOPS, etc.) to provision persistent storage in a cluster. A PVC is a request to provision persistent storage of a specific type and configuration.

We will first create a storage class, which determines the type of storage that is provisioned and the allowed ranges for sizes. Then we will create a PVC that specifies the storage type, storage class, size in gigabytes, and so on. Creating a PVC in the cluster automatically triggers the storage plug-in for the requested type of storage to provision it with the given specification. The PVC and PV are then automatically bound to each other, and the status of both changes to Bound. We can then use the PVC to mount persistent storage into our app.

YAML Spec for using Pod to deploy Dell Provisioner

Before you deploy Dell Provisioner, RBAC needs to be configured. RBAC (role-based access control) is a method of regulating access to resources based on the roles of individual users.

Copy the full content from here and paste it into the Object YAML section under the Kubernetes option in the UCP UI. Click “Create” to bring up the RBAC objects.
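
If you just want to see the shape of those objects, the sketch below shows a ServiceAccount, ClusterRole and ClusterRoleBinding with the permissions a typical external provisioner needs. The names and rules here are assumptions, so treat the file in the repository as the source of truth.

# Sketch only – the authoritative RBAC manifest is in the dell-provisioner repository
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dell-provisioner
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dell-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dell-provisioner-binding
subjects:
  - kind: ServiceAccount
    name: dell-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: dell-provisioner-runner
  apiGroup: rbac.authorization.k8s.io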

Next, you need to use the YAML file under the root directory of the GitHub repository, which allows us to deploy dell-provisioner. The content of the YAML file will look roughly like this:
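
Here is a minimal sketch of such a Pod spec. The Pod name, namespace and image match the kubectl describe output later in this post, while the service account and the SAN address/credential settings are assumptions, so use the file from the repository as-is.

# Sketch only – see the YAML file in the dell-provisioner repository for the real spec
apiVersion: v1
kind: Pod
metadata:
  name: dell-provisioner
  namespace: kube-system
spec:
  serviceAccountName: dell-provisioner    # assumed; must match the RBAC objects above
  containers:
    - name: dell-provisioner
      image: ajeetraina/dellsmcli-docker
      # The provisioner needs to reach the SAN management interface;
      # the address and credential below are placeholders.
      env:
        - name: SAN_ADDRESS
          value: "100.98.26.154"
        - name: SAN_PASSWORD
          value: "<password>"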

Click “Create” to bring up the dell-provisioner Pod. Alternatively, you can run the command below in the UCP terminal to bring up the storage provisioner.

$ kubectl apply -f dell-provisioner.yaml

This will deploy Dell Provisioner and give the Kubernetes cluster access to the Dell iSCSI storage system.

Next, we can verify that the Pod is running successfully:

cse@node1-linux:~/dell-provisioner$ kubectl describe po dell-provisioner -n kube-system
Name:               dell-provisioner
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               node2-linux.dell.com/100.98.26.116
Start Time:         Sun, 20 Oct 2019 01:28:23 -0400
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"dell-provisioner","namespace":"kube-system"},"spec":{"containers":[{"...
                    kubernetes.io/psp: privileged
Status:             Running
IP:                 192.168.215.5
Containers:
  dell-provisioner:
    Container ID:   docker://d503b94bfb6c41be2155226cef91cdc7092a31b1b1eb7784928868dd09213cbd
    Image:          ajeetraina/dellsmcli-docker
    Image ID:       docker-pullable://ajeetraina/dellsmcli-docker@sha256:e0eadfc725b3a411b4eb76d1035eeb96bdde6268c4c71a9222b80145aa59a24e
    Port:           <none>
    Host Port:      <none>

Specifying the Storage Class 

We first need a storage class, which tells Kubernetes which provisioner to use and how to use it. You can use the sc.yaml file under the root of the GitHub repository to bring up the storage class.

Go to “Kubernetes” section under UCP UI and click on “+Create”. Copy the YAML from this location.
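
For reference, a minimal sketch of that storage class is shown below; the name and provisioner match the kubectl get sc output later in this section, and any SAN-specific parameters are omitted here, so take them from sc.yaml in the repository.

# Sketch only – use sc.yaml from the repository for the full definition
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dell-provisioner-sc
provisioner: ajeetraina/dell-provisioner
# parameters: SAN-specific settings (omitted here)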

Click on “Create” to bring up a storageClass “dell-provisioner-sc” as shown below:

You can also run the CLI command below to view storage class from the master node: 

$ kubectl get sc -n kube-system
NAME                  PROVISIONER                   AGE
dell-provisioner-sc   ajeetraina/dell-provisioner   2d16h


YAML Spec for Persistent Volume Claim
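
A minimal sketch of such a claim is shown below; the claim name and storage class match the kubectl get pvc output later in this section, while the access mode and requested size are just example values.

# Sketch only – the access mode and size below are example values
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dell-provisioner-test-pvc
spec:
  storageClassName: dell-provisioner-sc
  accessModes:
    - ReadWriteOnce        # assumed
  resources:
    requests:
      storage: 1Gi         # assumed size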

Click “Create” to bring up the PersistentVolumeClaim as shown below:

You can also use the ‘kubectl get’ CLI on your UCP node to check the status of the PVC as shown below:

$ kubectl get pvc
NAME                        STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          AGE
dell-provisioner-test-pvc   Bound                                       dell-provisioner-sc   2d16h

YAML Spec for leveraging the claim on a Testing Pod 

To validate that your application Pod can leverage the PVC as well as the PV, use the YAML file below:
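
A minimal sketch of such a test Pod follows; the claim name matches the PVC created above, while the Pod name, container image and mount path are just example values.

# Sketch only – the Pod name, image and mount path are example values
apiVersion: v1
kind: Pod
metadata:
  name: dell-provisioner-test-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: dell-provisioner-test-pvc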

When you apply the above YAML, the underlying volume is created and associated with your Pod.

Coming up Next

So far, I have walked you through deploying an on-premises certified Kubernetes cluster and demonstrated iSCSI support for Kubernetes under Docker Enterprise. We saw that it is possible to provision your Kubernetes cluster with persistent storage using iSCSI. In the next post in this series, I will show how to implement Kubernetes cluster ingress on-premises using Docker Enterprise: how to install cluster ingress on a UCP cluster, deploy a sample application with Ingress rules, and much more.

Have Queries? Join https://launchpass.com/collabnix

Ajeet Singh Raina is a former Docker Captain, Community Leader and Arm Ambassador. He is the founder of the Collabnix blogging site and has authored more than 570 blogs on Docker, Kubernetes and cloud-native technology. He runs a community Slack with 8900+ members and a Discord server with close to 2200 members. You can follow him on Twitter (@ajeetsraina).