
VMAX Cinder Driver
Copyright (c) 2014 EMC Corporation. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
- The driver in the master branch supports Kilo and Juno.
- For the driver that supports Icehouse and Havana, go to the havana_icehouse branch.
VMAX Driver (FC and iSCSI)
Overview
This package consists of two drivers:
- EMCVMAXFCDriver, based on the Cinder FibreChannelDriver
- EMCVMAXISCSIDriver, based on the Cinder ISCSIDriver
These drivers support the use of EMC VMAX storage arrays under OpenStack Cinder block management. They provide equivalent functions and differ only in their respective host attachment methods. The drivers perform volume operations through the EMC SMI-S Provider, which is packaged with Solutions Enabler. The SMI-S Provider implements the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management. The EMC Cinder drivers also require PyWBEM, a client library written in Python that communicates with the SMI-S Provider over HTTP.
OpenStack Release Support
This driver package supports the Juno and Kilo releases. Compared to previously released versions, enhancements include:
- Support for consistency groups
- Support for live migration
- Use of the lookup service in FC auto zoning
Supported Operations
The following operations are supported on VMAX arrays:
- Create volume
- Delete volume
- Extend volume
- Attach volume
- Detach volume
- Create snapshot
- Delete snapshot
- Create volume from snapshot
- Create cloned volume
- Copy image to volume
- Copy volume to image
- Create consistency group
- Delete consistency group
- Create cgsnapshot (snapshot of a consistency group)
- Delete cgsnapshot
Required Software Packages
Install SMI-S Provider with Solutions Enabler
- Required version: EMC SMI-S Provider 4.6.2.28 or higher (Solutions Enabler 7.6.2.28 or higher)
- SMI-S Provider is available from EMC’s support website
- The SMI-S Provider with Solutions Enabler can be installed as a vApp, or on a Windows or Linux host
Install PyWBEM
- Required version: PyWBEM 0.7
- Available from SourceForge, or install using one of the following commands:
- Install for Ubuntu:
# apt-get install python-pywbem
- Install on openSUSE:
# zypper install python-pywbem
- Install on Fedora:
# yum install pywbem
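To confirm PyWBEM is importable by the Python interpreter that runs cinder-volume, a quick sanity check (Python 2 syntax, which these releases target):
# python -c "import pywbem; print pywbem.__file__"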
Verify the EMC VMAX Cinder driver files
The EMC VMAX drivers provided in the installer package consist of nine Python files:
emc_vmax_fc.py
emc_vmax_iscsi.py
emc_vmax_common.py
emc_vmax_masking.py
emc_vmax_fast.py
emc_vmax_provision.py
emc_vmax_provision_v3.py
emc_vmax_https.py
emc_vmax_utils.py
These files are located in the ../cinder/volume/drivers/emc/ directory of the OpenStack node(s) where cinder-volume is running.
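To verify the files are in place, list the driver directory. The path below is an assumption for a typical Python 2.7 site-packages installation and may differ by distribution:
# ls /usr/lib/python2.7/site-packages/cinder/volume/drivers/emc/emc_vmax_*.py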
Cinder Backend Configuration
The EMC VMAX drivers are written to support multiple types of storage, as configured by the OpenStack Cinder administrator. Each storage type is implemented by configuring one or more Cinder backends mapped to that type. If multiple storage types are desired, multi-backend support must be enabled in the cinder.conf file, as shown:
[DEFAULT]
enabled_backends=CONF_GROUP_ISCSI, CONF_GROUP_FC
[CONF_GROUP_ISCSI]
iscsi_ip_address = 10.10.0.50
volume_driver=cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
cinder_emc_config_file=/etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml
volume_backend_name=ISCSI_backend
[CONF_GROUP_FC]
volume_driver=cinder.volume.drivers.emc.emc_vmax_fc.EMCVMAXFCDriver
cinder_emc_config_file=/etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml
volume_backend_name=FC_backend
NOTE: iscsi_ip_address is required in an iSCSI configuration. This is the IP address of the VMAX iSCSI target.
In this example, two backend configuration groups are enabled: CONF_GROUP_ISCSI and CONF_GROUP_FC. Each configuration group has a section describing unique parameters for connections, drivers, the volume_backend_name, and the name of the EMC-specific configuration file containing additional settings. Note that the file name follows the format /etc/cinder/cinder_emc_config_[confGroup].xml, where [confGroup] matches the backend configuration group name. See the section below for a description of the file contents.
Once the cinder.conf and EMC-specific configuration files have been created, cinder commands need to be issued in order to create and associate OpenStack volume types with the declared volume_backend_names:
# cinder type-create VMAX_ISCSI
# cinder type-key VMAX_ISCSI set volume_backend_name=ISCSI_backend
# cinder type-create VMAX_FC
# cinder type-key VMAX_FC set volume_backend_name=FC_backend
By issuing these commands, the Cinder volume type VMAX_ISCSI is associated with ISCSI_backend, and the type VMAX_FC is associated with FC_backend.
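As an end-to-end check, a volume can now be created against either type; the volume name and size (in GB) below are illustrative:
# cinder create --volume-type VMAX_ISCSI --display-name vmax_iscsi_vol 10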
For more details on multi-backend configuration, see the OpenStack Administration Guide.
EMC-specific Configuration Files
Each enabled backend is configured via parameters contained in an EMC-specific configuration file. The default EMC configuration file is named /etc/cinder/cinder_emc_config.xml, and is configured for the iSCSI driver by default. When multiple backends are configured in cinder.conf, the name of each configuration group’s file is explicitly provided in the cinder_emc_config_file parameter. Here is an example and description of the contents:
<?xml version="1.0" encoding="UTF-8"?>
<EMC>
  <EcomServerIp>10.108.246.202</EcomServerIp>
  <EcomServerPort>5988</EcomServerPort>
  <EcomUserName>admin</EcomUserName>
  <EcomPassword>#1Password</EcomPassword>
  <PortGroups>
    <PortGroup>OS-PORTGROUP1-PG</PortGroup>
    <PortGroup>OS-PORTGROUP2-PG</PortGroup>
  </PortGroups>
  <Array>000198700439</Array>
  <Pool>FC_GOLD1</Pool>
  <FastPolicy>GOLD1</FastPolicy>
</EMC>
EcomServerIp, EcomServerPort, EcomUserName and EcomPassword identify the ECOM (EMC SMI-S Provider) server to be used, and provide logon credentials.
PortGroups supplies the names of VMAX port groups that have been pre-configured to expose volumes managed by this backend. Each supplied port group should have a sufficient number and distribution of ports (across directors and switches) to ensure adequate bandwidth and failure protection for the volume connections. PortGroups can contain one or more port groups of either iSCSI or FC ports. When a dynamic masking view is created by the VMAX driver, the port group is chosen randomly from the list above to evenly distribute load across the set of groups provided.
NOTE: Make sure that the PortGroups set contains either all FC or all iSCSI port groups (for a given backend), as appropriate for the configured driver (iSCSI or FC).
The Array tag holds the unique VMAX array serial number.
The Pool tag holds the unique pool name within a given array.
NOTE: For this version of the driver, oversubscription of pools is not supported. Creating a pool with max_subs_percent greater than 100 is not recommended.
For backends not using FAST automated tiering, the pool is a single pool that has been created by the admin.
For backends exposing FAST policy automated tiering, the pool name is the bind pool to be used with the FAST policy.
The FastPolicy tag conveys the name of the FAST Policy to be used. By including this tag, volumes managed by this backend are treated as under FAST control. Omitting the FastPolicy tag means FAST is not enabled on the provided storage pool.
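For example, a sketch of a configuration file for a backend without FAST simply omits the FastPolicy tag (the server address, credentials, and group names here are placeholders reused from the example above):
<?xml version="1.0" encoding="UTF-8"?>
<EMC>
  <EcomServerIp>10.108.246.202</EcomServerIp>
  <EcomServerPort>5988</EcomServerPort>
  <EcomUserName>admin</EcomUserName>
  <EcomPassword>#1Password</EcomPassword>
  <PortGroups>
    <PortGroup>OS-PORTGROUP1-PG</PortGroup>
  </PortGroups>
  <Array>000198700439</Array>
  <Pool>GOLD1</Pool>
</EMC>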
Configuring Connectivity
FC Zoning with VMAX
With the Icehouse release of OpenStack, a Zone Manager has been added to automate Fibre Channel zone management. Havana does not support this functionality. It is recommended to upgrade to the Juno release if you require FC zoning.
iSCSI with VMAX
- Make sure the “iscsi-initiator-utils” package is installed on the host (use apt-get, zypper, or yum, depending on the Linux flavor)
- Verify the host is able to ping the VMAX iSCSI target ports (see the example commands below)
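A quick connectivity check from the host; the target IP is a placeholder and should match the iscsi_ip_address configured in cinder.conf:
# ping -c 2 10.10.0.50
# iscsiadm -m discovery -t sendtargets -p 10.10.0.50:3260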
VMAX Masking View and Group Naming Info
Masking View Names
Masking views for the VMAX FC and iSCSI drivers are now dynamically created by the VMAX Cinder driver using the following naming conventions:
OS-[shortHostName]-[poolName]-I-MV (for masking views using iSCSI)
OS-[shortHostName]-[poolName]-F-MV (for masking views using FC)
Initiator Group Names
For each host that is attached to VMAX volumes using the Cinder drivers, an initiator group is created or re-used (per attachment type). All initiators of the appropriate type known for that host are included in the group. At each new attach volume operation, the VMAX driver retrieves the initiators (either WWNNs or IQNs) from OpenStack and adds or updates the contents of the initiator group as required. Names are of the format:
OS-[shortHostName]-I-IG (for iSCSI initiators)
OS-[shortHostName]-F-IG (for Fibre Channel initiators)
Note: Hosts attaching to VMAX storage managed by the OpenStack environment cannot also be attached to storage on the same VMAX that is not managed by OpenStack. This is due to limitations on VMAX initiator group membership.
FA Port Groups
VMAX array FA ports to be used in a new masking view are chosen from the list provided in the EMC configuration file (see EMC-specific Configuration Files above).
Storage Group Names
As volumes are attached to a host, they are either added to an existing storage group (if one exists) or a new storage group is created and the volume is added to it. Storage groups contain volumes created from a pool (either single-pool or FAST-controlled), attached to a single host, over a single connection type (iSCSI or FC). Names are formed:
OS-[shortHostName]-[poolName]-I-SG (attached over iSCSI)
OS-[shortHostName]-[poolName]-F-SG (attached over Fibre Channel)
Concatenated/Striped volumes
In order to support later expansion of created volumes, the VMAX Cinder drivers create concatenated volumes as the default layout. If later expansion is not required, users can opt to create striped volumes in order to optimize I/O performance. Below is an example of how to create striped volumes. First, create a volume type. Then define the extra spec for the volume type: storagetype:stripecount represents the number of stripes that make up your volume. The example below means that all volumes created under the GoldStriped volume type will be striped and made up of 4 stripes.
# cinder type-create GoldStriped
# cinder type-key GoldStriped set volume_backend_name=GOLD_BACKEND
# cinder type-key GoldStriped set storagetype:stripecount=4
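A volume created under this type will then be laid out across the four stripe members; the volume name and size below are illustrative:
# cinder create --volume-type GoldStriped --display-name striped_vol 100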