OpenSDS Launches ZEALAND Release, Providing Unified SDS Controller Framework and API

By Announcement

OpenSDS Launches ZEALAND Release, Providing Unified SDS Controller Framework and API

In collaboration with other global open-source communities, OpenSDS has launched the first release of its software, codenamed “ZEALAND,” which provides users and developers with a service-oriented, software-defined storage (SDS) controller framework and standard APIs.

SAN FRANCISCO, January 5, 2018 — The OpenSDS community has announced the release of the first OpenSDS software, codenamed “ZEALAND,” which provides a unified framework and APIs for end-to-end software-defined storage controller solutions. OpenSDS ZEALAND comprises two internal projects: Hotpot, the Controller Project, and Sushi, the Northbound Plugin Project. The Hotpot project provides users and developers with a block storage service that supports basic lifecycle management of volumes, snapshots, and attachments; discovery and registration of different types of storage systems; scheduling to storage backends by customer-defined profiles; and more. The Sushi project provides the dynamic provisioner and FlexVolume plug-ins for Kubernetes.

“In the past, storage systems were difficult to manage, and users needed to rely on highly specialized storage administrators,” noted Steven Tan, chair of the OpenSDS TSC and Chief Architect at Huawei responsible for SDS planning and strategies. “This is finally changing, thanks to continuing efforts by storage vendors to make storage systems easier to use. However, the entire storage industry now faces new challenges brought on by cloud-enabled, service-driven development. To address these challenges, OpenSDS is improving the usability of existing storage by abstracting complex storage features so that they integrate seamlessly into cloud-native frameworks such as Kubernetes. This benefits our customers as advanced storage services can be easily enabled in a variety of cloud scenarios, regardless of whether they are using private, hybrid, or public clouds.”

What’s Next for OpenSDS

The OpenSDS community is preparing to start work on its next version, OpenSDS ARUBA. The current release reflects the real-life requirements of the core members of the OpenSDS EUAC (End-User Advisory Committee). OpenSDS will also continue to work with other projects, such as OpenStack Cinder and Manila, as well as the CNCF Container Storage Interface (CSI), to launch joint solutions. OpenSDS ARUBA is expected to be released in June 2018. It will provide a number of key storage service capabilities such as array- and host-based replication, group snapshots and replication, and more plug-ins to support the northbound ecosystem.

“OpenSDS brings policy-based, self-service storage provisioning and orchestration, which is ideal not only for cloud-native applications, but also fits well with traditional data centers and clouds,” said Rakesh Jain, an Architect and Researcher with IBM and vice-chair of the OpenSDS TSC. “We are working with various large users to identify their pain points and address them in an open manner. The Zealand release is just a small step; we are well on our way to executing on the published roadmap to make storage as pain-free as possible, in coordination with other open source projects.”

“The movement toward self-managed scalable infrastructure is firmly established in the industry and provides substantial improvements in operational efficiency for adopters. Achieving this vision requires revisiting nearly all aspects of data-center operations. The cross-vendor and cross-environment management capabilities provided by OpenSDS overcome major barriers to adoption in existing data centers by allowing the unification of today’s virtual machine-oriented operating environments while anticipating the next generation of container-oriented operating environments,” remarked Allen Samuels, R&D Engineering Fellow at Western Digital and member of the OpenSDS TSC.

About the OpenSDS Community

OpenSDS is the world’s first open source community centered on software-defined storage. It is dedicated to providing a unified SDS controller framework and APIs for cross-cloud workloads. Currently, OpenSDS is supported by a number of leading storage vendors and carriers, including Huawei, IBM, Hitachi, Dell EMC, Fujitsu, Western Digital, Vodafone, Yahoo! Japan, and NTT Communications.

The OpenSDS community welcomes anyone who is interested in helping to build the open standard for software-defined storage. It embraces suggestions and proposals from open community members and developers, who help build comprehensive SDS solutions and API standards.

For more information, visit https://www.opensds.io/ and https://github.com/opensds/.

OSS EU OpenSDS Mini Summit Agenda

By Announcement

OpenSDS EUAC Meeting, Oct 24, 1000-1200hrs (Steven, Cosimo, Yusuke, Kei) + Howard (2hrs)

  1. OpenSDS Requirements and Roadmap Discussion
  2. OpenSDS API
  3. Participation Discussions
  4. 2018 Event Planning
  5. Other topics?

Lunch 1200-1400hrs

OpenSDS Mini-Summit, Oct 24, 1400-1800hrs (4hrs)

  1. Welcome and Self-Introductions – (15mins)
  2. OpenSDS Introduction – Steven (30mins)
  3. OpenSDS Projects Development and Demo Presentation – Howard (1hr)
  4. Break – 15mins
  5. OpenSDS End-User Presentations:
  6. Vodafone – Cosimo Rossetti (30mins)
  7. Yahoo Japan – Yusuke Sato (30mins)
  8. NTT Communications – Kei Kusunoki (30mins)
  9. CSI Introduction – Yu Jie? Saad? (1hr)
  10. Open Discussions

DellEMC To Join Open Source OpenSDS Project To Advance Storage Interoperability

By Announcement

Dell EMC to Join Open Source OpenSDS Project to Advance Storage Interoperability, Contribute Code

 

SAN FRANCISCO – December 19, 2016 – The Linux Foundation, the nonprofit advancing professional open source management for mass collaboration, today welcomed Dell EMC to the OpenSDS Project. The OpenSDS community is forming to address software-defined storage integration challenges with the goal of driving enterprise adoption of open standards.

As part of its support for the open source project, Dell EMC is also contributing its first project to OpenSDS, the CoprHD SouthBound SDK (SB SDK), to help promote storage interoperability and compatibility. The SB SDK allows developers to build drivers and other tools with the assurance that they will be compatible with a wide variety of enterprise-class storage products.

The OpenSDS Project comprises storage users and vendors, including Fujitsu, Hitachi Data Systems, Huawei, Oregon State University and Vodafone. The project will also seek to collaborate with other upstream open source communities such as Cloud Native Computing Foundation, Docker, OpenStack and Open Container Initiative.

“Dell EMC is a welcome addition to the OpenSDS Project and we look forward to its input,” said Cameron Bahar, Senior Vice President and Global Chief Technology Officer of storage at Huawei. “We invite other vendors and enterprise customers to follow Dell EMC’s lead, and to join us in creating an open storage controller solution across cloud, containerized, virtualized and other environments, and make storage-as-a-service a reality.”

Join Dell EMC on Tuesday, December 20 at 1 p.m. EST/10 a.m. PST/18:00 GMT for an introduction and architectural tour of the SB SDK, to be webcast live on YouTube. Dell EMC will demonstrate the current capabilities of the SB SDK and provide an overview of the project’s roadmap.

“As the first storage vendor to open source a software-defined storage controller, we’re very excited to join OpenSDS,” said John Mark Walker, Director of Product Management for Dell EMC. “We look forward to collaborating with customers, partners and other vendors to create open source tools and standards for storage interoperability.”

The OpenSDS technical community hosts discussions on a dedicated mailing list: tech-discuss@opensds.io. For more information about OpenSDS, please email info@opensds.io.

About The Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Linux Foundation Creates Open Controller For Software-Defined Storage

By Announcement

The Linux Foundation Creates Open Controller for Software-Defined Storage

 

Launches OpenSDS Project to address storage management challenges and drive enterprise adoption

SAN FRANCISCO – November 8, 2016 – The Linux Foundation, the nonprofit advancing professional open source management for mass collaboration, today is announcing it will host OpenSDS, a new open source project to address software-defined storage integration challenges and ultimately help drive enterprise adoption.

Storage management today is often overly complex and duplicative, with an assortment of plug-ins and competing software-defined storage controllers for each compute framework. The OpenSDS Project aims to radically simplify the state of storage by creating a common, open controller solution across cloud, containerized, virtualized and other environments.

The OpenSDS Project comprises storage users and vendors, including Fujitsu, Hitachi Data Systems, Huawei, Oregon State University, Vodafone and Western Digital. The project will also help unite open source communities of interest such as Cloud Native Computing Foundation, Docker, OpenStack, and Open Container Initiative.

An initial prototype release is expected to be available in Q2 2017, with a beta release by Q3 2017. OpenSDS will leverage open source technologies, such as Cinder and Manila from the OpenStack community, to best enable support across a wide range of storage products. More details on the technical roadmap and release cadence will be available in the coming months.

Supporting Comments

Fujitsu
“The OpenSDS Project will be a driving force for a revolution of software-defined storage and its enterprise adoption. Fujitsu is looking forward to industry-wide collaboration in OpenSDS.”
Yoshiya Eto, VP of Linux Development Div., Fujitsu

Hitachi Data Systems
“Hitachi has a long and productive history of supporting the open source community and believes it’s good for the storage community to have an open SDS controller to manage mixed storage environments across virtualized, containerized and bare-metal environments. We look forward to engaging with the OpenSDS initiative via our deep OpenStack Community collaboration, support and customer deployments.”
Michael Hay, VP and Chief Engineer, Hitachi Data Systems

Huawei
“This is a key milestone in the storage industry, with major vendors coming together for the common good of our collective customers. OpenSDS will make it easier to utilize storage from any vendor using the same SDS control architecture across different environments. Our goal is to work with the open source community to deliver value to customers with an open SDS controller that simplifies management, promotes interoperability, and delivers Storage-as-a-Service (STaaS).”
Steven Tan, Chief Architect, SDS Management, Huawei

Western Digital
“Western Digital is committed to engaging with other industry leaders to simplify the complexities of deploying and managing infrastructure built on software defined storage architectures. We are delighted to join the OpenSDS project to bring popular virtualized, containerized and cloud technologies together under a common storage controller to help users focus more on their business and less on overcoming the complexities of managing infrastructure.”
Dave Tang, senior vice president and general manager for Western Digital’s Data Center Systems business unit

The OpenSDS technical community will host discussions on a dedicated mailing list: tech-discuss@opensds.io. For more information about OpenSDS or to learn how to participate, please email info@opensds.io.

 

About The Linux Foundation
The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage.

Linux is a registered trademark of Linus Torvalds.

OpenSDS CNCF Cluster Integration Testing

By Blog

OpenSDS CNCF Cluster Integration Testing Summary

 

0. Introduction

The OpenSDS community was established around the end of 2016, and the team has been busy developing the code from the ground up ever since. We have applied a “discuss, develop, verify” approach: architects from member companies discuss the high-level architecture, developers build a prototype implementation, and the prototype is verified through testing. Near the end of March we finished stage 1 of our PoC development and applied for a CNCF Cluster environment to test the PoC code.

1. CNCF Cluster Application:

Proposal at https://github.com/cncf/cluster/issues/30

1.1 Key Propositions:

What existing problem or community challenge does this work address? (Please include any past experience or lessons learned.)

Storage is an important part of Kubernetes functionality, and there are currently multiple in-tree and out-of-tree volume plugins developed for Kubernetes, but it is hard to know whether the integration with a storage provider is functioning as it should without testing it. Moreover, for storage provided by a cloud provider (e.g. OpenStack), there should be functional testing to prove the integration actually works.

OpenSDS is a fellow Linux Foundation collaborative project and aims to provide a unified storage service experience for Kubernetes. OpenSDS can provide various types of storage resources for Kubernetes (e.g. OpenStack block and file services, bare-metal storage devices), and we want to use the CNCF Cluster to:
a) Verify the integration of OpenSDS and Kubernetes
b) Verify the performance of the integration via metrics such as IOPS and latency
c) Utilize the Intel S3610 400GB SSDs and 2TB NL-SAS HDDs for storage provisioning

1.2 Important specs for the granted CNCF Cluster resource:

Bonding: All hosts have 4x 10Gb ports connected with LACP bonding
Transit network (Internet): static assignment on interface bond0.7
Transit gateway: 10.2.0.1
Transit DNS servers: 10.1.8.3
VLAN range for internal usage: 300 – 319 (created on bond0 interface)
Untagged VLAN (for custom PXE): 300
OS version: Ubuntu
Compute node spec: 2x Intel E5-2680v3 12-core, 256GB RAM, 2x Intel S3610 400GB SSD, 1x Intel P3700 800GB NVMe PCIe SSD, 1x QP Intel X710
Storage node spec: 2x Intel E5-2680v3 12-core, 128GB RAM, 2x Intel S3610 400GB SSD, 10x Intel 2TB NL-SAS HDD, 1x QP Intel X710

 

2. OpenSDS Integration Testing:

2.1 OpenSDS Architecture Introduction


OpenSDS consists of three layers:
● API Layer: Provides a general RESTful API for storage resource management/orchestration operations (a minimal request sketch follows this list).
● Controller Layer: Provides scheduling/management/orchestration capabilities such as discovery, pooling, lifecycle, data protection and so forth.
● Hub Layer: Provides mechanisms to connect with various storage backends.
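
To make the API layer more concrete, here is a minimal Go sketch of a client creating a volume through such a REST API. This is an illustrative example rather than the official OpenSDS client: the endpoint path, port, and request fields are assumptions made for this post.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// VolumeRequest mirrors the kind of payload a block-storage create call would
// carry; the field set here is a hypothetical example, not the exact OpenSDS schema.
type VolumeRequest struct {
	Name    string `json:"name"`
	Size    int64  `json:"size"`    // requested size in GB
	Profile string `json:"profile"` // customer-defined profile used for scheduling
}

func main() {
	body, _ := json.Marshal(VolumeRequest{Name: "vol-demo", Size: 1, Profile: "default"})

	// The URL below is an assumed local API-layer endpoint, not an official path.
	resp, err := http.Post("http://127.0.0.1:50040/v1alpha/block/volumes",
		"application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("create volume failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("create volume status:", resp.Status)
}
```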

2.2 Basic Deployment Architecture:



For deployments, OpenSDS can usually be compiled into two separate modules: the Control module and the Hub module. OpenSDS Control contains the API and Controller functionalities, while OpenSDS Hub contains the southbound abstraction layer (which we call the "dock") and all the necessary adaptors for the storage backends.

A total of five nodes are used in the testing, of which one serves as the master node and three others serve as worker nodes. The master node runs on the compute node provided by the cluster, whereas the worker nodes run on the storage nodes.
OpenSDS's Control module is deployed on the master node together with Kubernetes' control module, and OpenSDS's Hub module is deployed on each storage node together with Kubernetes' kubelet.

OpenStack Cinder (with LVM and Ceph as backends), OpenStack Manila (NFS) and CoprHD (with a simulator of an EMC storage device) are deployed as the storage resources for OpenSDS.

The advantage of deploying the Hub module in a distributed way on each storage node is that it helps OpenSDS scale without needing multiple controller instances everywhere, which would inevitably introduce the problems of state syncing and split-brain. Since the Hub communicates with Control via gRPC, it can be scaled out to a rather large size. State is maintained in the Control module, whereas the Hub module is stateless. When the compute platform (Kubernetes, Mesos, OpenStack, ...) queries OpenSDS for state, it gets the necessary information from the central controller, and the central controller syncs up with all the underlying hubs via a simple heartbeat implementation.
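
To illustrate the heartbeat idea described above, here is a minimal Go sketch, not the actual OpenSDS implementation: the controller records the last heartbeat received from each dock and treats a dock as down once it has been silent for longer than a timeout. The HeartbeatTracker type and its methods are hypothetical names used only for this example; in OpenSDS the equivalent traffic runs over gRPC.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// HeartbeatTracker is a simplified, hypothetical stand-in for the controller-side
// bookkeeping: it records when each hub (dock) last reported in.
type HeartbeatTracker struct {
	mu       sync.Mutex
	lastSeen map[string]time.Time
	timeout  time.Duration
}

func NewHeartbeatTracker(timeout time.Duration) *HeartbeatTracker {
	return &HeartbeatTracker{lastSeen: make(map[string]time.Time), timeout: timeout}
}

// Beat would be called by the gRPC handler each time a hub sends a heartbeat.
func (t *HeartbeatTracker) Beat(dockID string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.lastSeen[dockID] = time.Now()
}

// Alive reports whether a hub has been heard from within the timeout window.
func (t *HeartbeatTracker) Alive(dockID string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	seen, ok := t.lastSeen[dockID]
	return ok && time.Since(seen) < t.timeout
}

func main() {
	tracker := NewHeartbeatTracker(10 * time.Second)
	tracker.Beat("dock-storage-node-1") // simulate one heartbeat arriving
	fmt.Println("dock-storage-node-1 alive:", tracker.Alive("dock-storage-node-1"))
	fmt.Println("dock-storage-node-2 alive:", tracker.Alive("dock-storage-node-2"))
}
```

The design choice mirrors the text above: liveness state lives only on the Control side, so the hubs themselves stay stateless.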

2.3 Scenario 1: Multiple backends support and providing PV for Kubernetes Pods


In this scenario, we successfully tested that the OpenSDS southbound adaptors for OpenStack Cinder, OpenStack Manila and CoprHD function properly. We demonstrated CRUD operations for volumes and shared volumes (files) on OpenStack, and CRUD operations for volumes on CoprHD. This verifies that by integrating with various backends such as OpenStack and CoprHD, OpenSDS can indeed provide a unified management framework.

We also successfully tested that, via the OpenSDS FlexVolume plugin, OpenSDS can provide a Ceph volume managed by Cinder to Kubernetes Pods. Together with the multi-backend support, this verifies that by integrating OpenSDS with Kubernetes, storage resources from various backends can be provided to Kubernetes through a unified management plane.

2.4 Scenario 2: OpenSDS storage support for Kubernetes auto-scaling


In this scenario, we have the OpenSDS Hub deployed on three storage nodes with Cinder and Ceph as the backend. Volumes are created on the backends of all three storage nodes.

We first have Kubernetes create Pod1 on 10.2.1.233 and attach the volume provided by OpenSDS; then, through the pre-configured replication controller, Kubernetes scales out Pod1 by creating Pod2 and Pod3 with the same application container (Nginx in this case) on 10.2.1.234 and 10.2.1.235. Via the YAML config file, Kubernetes notifies OpenSDS about the volume attachments for the newly created Pods. With the help of the distributed hubs on each storage node, OpenSDS is able to fulfill this task. The testing verifies that OpenSDS can provide storage support for auto-scaling in Kubernetes.

2.5 Scenario 3: OpenSDS storage support for Kubernetes Failover


In this scenario, we have the OpenSDS Hub deployed on two storage nodes with Cinder and Ceph as the backend. Volumes are created on the backends of all three storage nodes.

We first have Kubernetes create Pod1 on 10.2.1.233 and attach the volume provided by OpenSDS, and then we kill Pod1. With the pre-configured replication controller, Kubernetes replicates Pod1 on 10.2.1.234. Via the YAML config file, Kubernetes notifies OpenSDS about attaching the original volume to the migrated Pod. With the help of the distributed hubs on each storage node, as well as the replication support provided by Ceph, OpenSDS is able to fulfill this task. The testing verifies that OpenSDS can provide storage support for fast failover in Kubernetes.

 

3. OpenSDS Integration Testing Conclusions

3.1 Summary

With the help of the CNCF Cluster facilities, we successfully performed integration testing of OpenSDS with Kubernetes, as well as with the OpenStack storage modules and CoprHD. The testing verifies the OpenSDS PoC prototype's capability to provide storage support for Kubernetes across multiple storage backends, including auto-scaling and fast failover.

3.2 Lessons Learned

3.2.1 OpenSDS – Kubernetes

1. The FlexVolume mechanism reads the plugin's result from stdout; therefore, if you add logging to the plugin itself that prints to stdout, the calls will be reported as errors (a minimal sketch follows this list).
2. The message type used in protobuf is int32; therefore, extra care needs to be taken with int-to-int32 conversions when using gRPC.
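
Both points can be illustrated with a short Go sketch. This is not the OpenSDS plugin code, and the helper names are made up for the example: the FlexVolume driver reserves stdout for its JSON result and sends all logging to stderr, and values of Go's native int type are range-checked before being narrowed to the int32 used in the protobuf messages.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"math"
	"os"
)

// flexResult is the kind of JSON a FlexVolume driver prints on stdout;
// Kubernetes parses this output, so nothing else may be written there.
type flexResult struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// toInt32 narrows an int to int32, guarding against silent overflow when
// filling int32 fields of a protobuf message.
func toInt32(v int) (int32, error) {
	if v > math.MaxInt32 || v < math.MinInt32 {
		return 0, fmt.Errorf("value %d does not fit in int32", v)
	}
	return int32(v), nil
}

func main() {
	// Logs go to stderr so they never corrupt the JSON result on stdout.
	logger := log.New(os.Stderr, "flexvolume: ", log.LstdFlags)
	logger.Println("attach called")

	sizeGB, err := toInt32(1)
	if err != nil {
		logger.Println(err)
		json.NewEncoder(os.Stdout).Encode(flexResult{Status: "Failure", Message: err.Error()})
		os.Exit(1)
	}
	logger.Printf("requesting %d GB volume", sizeGB)

	// Only the JSON result is written to stdout.
	json.NewEncoder(os.Stdout).Encode(flexResult{Status: "Success"})
}
```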

3.2.2 OpenSDS – OpenStack

1. The mountpoint parameter is labeled as optional in the OpenStack API documentation; however, in testing we found that Cinder requires this parameter.
2. The pool_name parameter for Ceph has to be set in the Cinder config file.
3. OpenStack's Go client (used for the OpenSDS Cinder and Manila adaptors) currently only supports Keystone v2, which has been deprecated.

3.2.3 OpenSDS – CoprHD

1. CoprHD only supports openSUSE; however, our cluster host OS is Ubuntu, so we had to deploy it in an openSUSE VM.
2. CoprHD's CLI is not as friendly as its UI, which makes it harder to deploy.
3. CoprHD's default backends are VMAX and VNX; although we have simulators, the volume could not be listed after creation.

3.2.4 CNCF Cluster:

Unfulfilled goals: We did not accomplish the second and third goals of our cluster application, mainly due to time constraints. We spent about one week setting up the environment and another week doing the integration testing. A longer testing period would be preferred in the future, if possible.