DevConf.CZ 2019 has ended


Storage / Ceph / Gluster
Saturday, January 26
 

1:00pm CET

Benchmark testing
Over the years we've accumulated in-depth knowledge about what to test, how to test it, and especially how to interpret the results of various benchmark tests. These tests range from classic software-defined storage throughput and latency measurements to VM density testing, and all the little bits and pieces that come with them.
Since we've jumped through all those unavoidable hoops, we want to share what we learned and prevent others from making the same mistakes we did.
We will cover system metrics and Gluster metrics, as well as an approach to testing how many VMs a virtualization environment can actually run.
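As an illustration of the kind of throughput-versus-latency testing discussed here, a run of the common fio benchmark might look like the sketch below. This is not the speakers' methodology, just a minimal example; the file path, sizes, and job counts are placeholders to adapt.

```shell
# Hypothetical fio run: 4 KiB random writes with direct I/O, reporting
# both bandwidth and latency percentiles. Point --filename at a scratch
# file or test device you can afford to overwrite.
fio --name=randwrite-test \
    --filename=/mnt/gluster/testfile \
    --rw=randwrite --bs=4k --size=1G \
    --ioengine=libaio --direct=1 \
    --iodepth=32 --numjobs=4 --group_reporting \
    --lat_percentiles=1
```

Watching the high latency percentiles (p99 and above) rather than just average throughput is usually what separates useful results from misleading ones.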

Speakers

Dustin Black

Principal Architect, Red Hat
Dustin Black is a Storage product architect at Red Hat, primarily focused on automation and performance optimization of Gluster software-defined storage. He is the creator and maintainer of the gluster-colonizer project, a deployment orchestration toolset that leverages the power…

Marko Karg

Product Marketing Manager, Red Hat



Saturday January 26, 2019 1:00pm - 1:50pm CET
G202

3:00pm CET

Ceph data services in a hybrid cloud world
IT organizations are faced with managing infrastructure that spans multiple private data centers and public clouds. Emerging tools and operational patterns like Kubernetes and microservices are easing the process of deploying across multiple environments, but the problem remains that most applications require lots of state in databases, object stores, or file systems. Unlike stateless microservices, state is hard to move.

Ceph is known for scale-out file, block, and object within a single cluster, but it also includes multi-cluster federation capabilities. This talk will cover how Ceph's underlying multi-site capabilities complement and enable portability across clouds and how a multi-cloud perspective has shifted our roadmap, especially for Ceph object storage.
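For context on the multi-cluster federation mentioned above, Ceph object storage multi-site setups are built from realm/zonegroup/zone constructs. The sketch below shows the rough shape of configuring a master site; all names and endpoints are placeholders, and the full procedure in the Ceph multisite documentation involves more steps (keys, secondary zones, gateway restarts).

```shell
# Hypothetical sketch of RGW multi-site federation on the master site.
# Realm, zonegroup, zone names and endpoints are placeholders.
radosgw-admin realm create --rgw-realm=example-realm --default
radosgw-admin zonegroup create --rgw-zonegroup=us \
    --endpoints=http://rgw1.example.com:8080 --master --default
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
    --endpoints=http://rgw1.example.com:8080 --master --default
# Commit the configuration change as a new period.
radosgw-admin period update --commit
```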

Speakers

Sage Weil

Ceph Project Leader, Red Hat
Sage helped build the initial prototype of Ceph at the University of California, Santa Cruz as part of his graduate thesis. Since then he has led the open source project with the goal of bringing a reliable, robust, scalable, and high performance storage system to the free software…


Saturday January 26, 2019 3:00pm - 3:50pm CET
G202

4:00pm CET

Ceph Management and Monitoring with the Dashboard
The Ceph Manager Dashboard gives Ceph administrators an easy-to-use interface to manage and monitor various aspects of their cluster without having to use the CLI or any third-party utilities.

It is based on the original Ceph Dashboard as well as the concepts and architecture of the standalone open source Ceph management framework openATTIC. The development of this new component is driven and coordinated by the openATTIC team at SUSE as well as engineers from Red Hat and other members of the Ceph community.

Features include monitoring the cluster health status and managing OSDs, pools, RBD images, and the Object Gateway (RGW). Performance graphs for each component and service are provided by embedding Grafana dashboards into the Ceph Manager Dashboard UI.
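Getting the dashboard up on a cluster is roughly a matter of enabling the manager module, as sketched below. The exact credential-setup commands have varied between Ceph releases (the Mimic-era form is shown); the password is of course a placeholder.

```shell
# Hypothetical sketch: enable the Ceph Manager Dashboard module.
ceph mgr module enable dashboard
# Generate a self-signed TLS certificate for the web UI.
ceph dashboard create-self-signed-cert
# Mimic-era credential setup; later releases changed this command.
ceph dashboard set-login-credentials admin mypassword
# Show the URL the active manager is serving the dashboard on.
ceph mgr services
```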

Speakers

Lenz Grimmer

Engineering Team Lead, SUSE Linux GmbH



Saturday January 26, 2019 4:00pm - 4:50pm CET
G202

5:00pm CET

Active/Active NFS Serving over CephFS
While there have been NFS gateways over CephFS for a long time, scaling that service across multiple nodes has always been a challenge. Recently, a new recovery backend was merged into the nfs-ganesha userland NFS server that allows a cluster of NFS servers to coordinate their recovery periods using a shared RADOS object, allowing us to scale out a cluster of NFS servers in a loosely-coupled fashion on top of CephFS.

This presentation will cover some basics about NFS recovery, how we solved the problem of coordinating the recovery across a cluster of NFS servers, and some practical deployment scenarios.
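A minimal sketch of what the shared-RADOS recovery configuration looks like in ganesha.conf follows; the pool, namespace, and nodeid values are placeholders (each server in the cluster needs a distinct nodeid).

```
# Hypothetical ganesha.conf fragment: store NFSv4 client recovery
# state in a shared RADOS object instead of the local filesystem.
NFSv4 {
    RecoveryBackend = rados_kv;
}

RADOS_KV {
    # Placeholder pool/namespace names.
    pool = "nfs-ganesha";
    namespace = "grace";
    # Unique per NFS server so recovery databases don't collide.
    nodeid = "ganesha-node-a";
}
```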

Speakers

Jeff Layton

Principal Software Engineer, Red Hat
Jeff Layton is a longtime Linux kernel developer specializing in network file systems. He has made significant contributions to the kernel's NFS client and server, the CIFS client, and the kernel's VFS layer. Recently, he has taken an interest in Ceph, in particular as a backend for…


Saturday January 26, 2019 5:00pm - 5:25pm CET
G202
 
Sunday, January 27
 

9:30am CET

Online disk reencryption with LUKS2
This session will focus on the new LUKS2 cryptsetup reencryption, designed to provide better resilience in the event of a crash. The LUKS2 implementation also provides an option to reencrypt live (mounted) devices, better suiting HA systems that emphasise minimal downtime. Both requirements were significant milestones on the road to getting LUKS2 reencryption deployed in future enterprise environments.

In the talk we'll go through the features of the new reencryption, with a description of the data protection methods implemented as safeguards against data corruption in the event of a crash. We'll demonstrate the new reencryption tool on basic use cases, including an example of automatic crash recovery after a simulated system crash.
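The online reencryption described here is exposed through the `cryptsetup reencrypt` command (cryptsetup 2.2+). A rough sketch, with the device name and mapping name as placeholders:

```shell
# Hypothetical invocation: reencrypt an already-open LUKS2 device in
# place. With --active-name the device can stay mapped (and mounted)
# while the reencryption runs.
cryptsetup reencrypt --active-name home

# After a crash, re-running reencrypt on the device resumes from the
# checkpoint recorded in the LUKS2 metadata.
cryptsetup reencrypt /dev/vg/home --resume-only
```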

Resources:
- https://gitlab.com/cryptsetup/cryptsetup
- https://gitlab.com/cryptsetup/LUKS2-docs
- https://okozina.fedorapeople.org/online-disk-reencryption-with-luks2.pdf

Speakers

Ondrej Kozina

Software Engineer, Red Hat
I'm a software engineer working for Red Hat on the storage/LVM team, and the RHEL cryptsetup maintainer. You can discuss cryptsetup, LUKS2, and reencryption with me.



Sunday January 27, 2019 9:30am - 9:55am CET
G202

10:00am CET

Introducing Storage Instantiation Daemon
Setting up the Linux storage stack correctly has never been more complicated! Mirroring, RAID, multipath, thin provisioning, caching, compression, encryption, LVM... Much of the burden of activating devices on a Linux system today falls upon udev, but today's increasing level of complexity was probably never envisaged.

The new Storage Instantiation Daemon (SID) aims to control and report upon the identification, grouping and activation of the disparate storage layers from a single location. It works in partnership with udev, and tries to improve the handling of awkward configurations.

We will look at the new SID architecture. We will also summarize the problems that led to the introduction of this new and modular infrastructure, and how it addresses them.

Slides: https://redhat.slides.com/prajnoha/sid-intro?token=wfeEy9l8

Speakers

Peter Rajnoha

Senior Software Engineer, Red Hat



Sunday January 27, 2019 10:00am - 10:25am CET
G202

10:30am CET

lvm2 and VDO: will it blend?
lvm2 is starting to support VDO-type devices. This session will present the lvm2 interface for creating and maintaining VDO devices within the lvm2 world. Basic knowledge of lvm2 is expected of attendees.
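As a sketch of the interface in question: lvm2 models VDO as a pool LV backing a virtual LV. The volume group name and sizes below are placeholders, and this assumes an lvm2 build with VDO support plus the kernel VDO module.

```shell
# Hypothetical: create a VDO pool backed by 10 GiB of physical space,
# exposing a 100 GiB deduplicated/compressed virtual volume.
lvcreate --type vdo -n vdo_lv -L 10G -V 100G vg00/vdopool0

# The virtual LV is then used like any other logical volume.
mkfs.xfs /dev/vg00/vdo_lv
```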

Speakers

Zdenek Kabelac

Red Hat
Senior software engineer working for Red Hat. Member of lvm2 development team.



Sunday January 27, 2019 10:30am - 10:55am CET
G202