DevConf.CZ 2019

Storage / Ceph / Gluster
Saturday, January 26
 

1:00pm CET

Benchmark testing
Over the years we've accumulated in-depth knowledge about what to test, how to test it, and especially how to interpret the results we see from various benchmarks. These tests range from classic (software-defined) storage throughput and latency testing to VM density testing, and all the little bits and pieces that come with it.
Since we've jumped through all those unavoidable hoops, we want to share what we learned and prevent others from making the same mistakes we did.
We will cover system metrics and Gluster metrics, as well as an approach to testing how many VMs a virtualization environment can actually run.
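As a concrete starting point, a minimal fio job file of the kind such throughput-versus-latency tests are often built on is sketched below; the mount path, sizes, and job names are illustrative assumptions, not the speakers' actual test suite.

    [global]
    ; asynchronous direct I/O against an illustrative Gluster mount
    ioengine=libaio
    direct=1
    time_based=1
    runtime=60
    filename=/mnt/gluster/fio-test

    [seq-throughput]
    ; large sequential writes to measure bandwidth
    rw=write
    bs=1M
    iodepth=16
    size=10G

    [rand-latency]
    ; small random reads at queue depth 1 to expose latency;
    ; stonewall serializes this job after the previous one
    stonewall
    rw=randread
    bs=4k
    iodepth=1
    size=10G

Run it with "fio bench.fio" and compare the reported bandwidth against the completion-latency percentiles.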

Speakers

Dustin Black

Principal Architect, Red Hat
Dustin Black is a storage product architect at Red Hat, primarily focused on automation and performance optimization of Gluster software-defined storage. He is the creator and maintainer of the gluster-colonizer project, a deployment orchestration toolset that leverages the power...

Marko Karg

Product Marketing Manager, Red Hat



Saturday January 26, 2019 1:00pm - 1:50pm CET
G202

2:00pm CET

Intro to Ceph, the Distributed Storage System
Ceph is an open source distributed object store, network block device, and file system designed for reliability, performance, and scalability. With an advanced placement algorithm, active storage nodes, and peer-to-peer gossip protocols, Ceph is software-defined storage for scaling from terabytes to exabytes with no single point of failure. Powerful features like instantaneous snapshotting and copy-on-write clones, along with self-management and automatic healing, make Ceph friendly to administrators and users. This talk describes the Ceph architecture, from its bottom-level RADOS object store to the CephFS distributed filesystem, the RADOS Block Device, and the S3- and Swift-compatible RADOS Gateway, and will also discuss major new and upcoming features.
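For a flavor of that layering, the sketch below pokes at the bottom RADOS layer directly and then carves a block device out of the same cluster; the pool, object, and image names are invented for illustration, and a fresh pool may additionally want an "rbd pool init" before use.

    # Store and fetch one object in bare RADOS, then layer RBD on top.
    ceph osd pool create demo 32            # pool with 32 placement groups
    echo "hello ceph" > hello.txt
    rados -p demo put greeting hello.txt    # write a named object
    rados -p demo get greeting out.txt      # read it back
    rbd create demo/vol1 --size 1024        # a 1 GiB network block device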

Speakers

Gregory Farnum

Principal Software Engineer, Red Hat
Greg Farnum has been in the core Ceph development group since 2009. Now a Red Hat employee, Greg has done major work on all components of the Ceph ecosystem, previously served as the CephFS tech lead, and currently works as an individual contributor focused on the core RADOS system.



Saturday January 26, 2019 2:00pm - 2:50pm CET
G202

3:00pm CET

Ceph data services in a hybrid cloud world
IT organizations are faced with managing infrastructure that spans multiple private data centers and public clouds. Emerging tools and operational patterns like Kubernetes and microservices are easing the process of deploying across multiple environments, but a problem remains: most applications require lots of state in databases, object stores, or file systems. Unlike stateless microservices, state is hard to move.

Ceph is known for scale-out file, block, and object storage within a single cluster, but it also includes multi-cluster federation capabilities. This talk will cover how Ceph's underlying multi-site capabilities complement and enable portability across clouds, and how a multi-cloud perspective has shifted our roadmap, especially for Ceph object storage.
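To make "multi-cluster federation" concrete, the commands below sketch how an RGW multi-site realm is typically bootstrapped on the master side; the realm, zonegroup, zone, and endpoint names are invented for illustration.

    radosgw-admin realm create --rgw-realm=movies --default
    radosgw-admin zonegroup create --rgw-zonegroup=us --rgw-realm=movies \
        --endpoints=http://rgw1:80 --master --default
    radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
        --endpoints=http://rgw1:80 --master --default
    radosgw-admin period update --commit    # publish the new configuration

A second cluster then pulls the realm and registers a secondary zone, after which the gateways replicate objects between sites asynchronously.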

Speakers

Sage Weil

Ceph Project Leader, Red Hat
Sage helped build the initial prototype of Ceph at the University of California, Santa Cruz as part of his graduate thesis. Since then he has led the open source project with the goal of bringing a reliable, robust, scalable, and high-performance storage system to the free software...


Saturday January 26, 2019 3:00pm - 3:50pm CET
G202

4:00pm CET

Ceph Management and Monitoring with the Dashboard
The Ceph Manager Dashboard gives Ceph administrators an easy-to-use interface to manage and monitor various aspects of their cluster without having to use the CLI or any third-party utilities.

It is based on the original Ceph Dashboard as well as the concepts and architecture of the standalone open source Ceph management framework openATTIC. The development of this new component is driven and coordinated by the openATTIC team at SUSE as well as engineers from Red Hat and other members of the Ceph community.

Features include monitoring the cluster's health status and managing OSDs, pools, RBD images, and the Object Gateway (RGW). Performance graphs for each component and service are provided by embedding Grafana dashboards into the Ceph Manager Dashboard UI.
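Bringing the dashboard up is a short exercise; a minimal sketch for a Nautilus-era cluster follows (the username and password are placeholders, and later releases changed how credentials are supplied):

    ceph mgr module enable dashboard           # turn the mgr module on
    ceph dashboard create-self-signed-cert     # HTTPS with a throwaway cert
    ceph dashboard set-login-credentials admin s3cr3t
    ceph mgr services                          # prints the dashboard URL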

Speakers

Lenz Grimmer

Engineering Team Lead, SUSE Linux GmbH



Saturday January 26, 2019 4:00pm - 4:50pm CET
G202

5:00pm CET

Active/Active NFS Serving over CephFS
While there have been NFS gateways over CephFS for a long time, scaling that service across multiple nodes has always been a challenge. Recently, a new recovery backend was merged into the nfs-ganesha userland NFS server that allows a cluster of NFS servers to coordinate their recovery periods using a shared RADOS object, allowing us to scale out a cluster of NFS servers in a loosely coupled fashion on top of CephFS.

This presentation will cover some basics about NFS recovery, how we solved the problem of coordinating the recovery across a cluster of NFS servers, and some practical deployment scenarios.
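For orientation, a ganesha.conf fragment along the following lines is what ties one ganesha head into the shared-RADOS recovery scheme; the pool, namespace, nodeid, and export values are illustrative guesses rather than a tested configuration.

    NFSv4 {
        # grace periods coordinated cluster-wide through RADOS objects
        RecoveryBackend = rados_cluster;
    }
    RADOS_KV {
        pool = "nfs-ganesha";
        namespace = "grace";
        nodeid = "ganesha-1";    # must be unique per NFS server head
    }
    EXPORT {
        Export_ID = 100;
        Path = "/";
        Pseudo = "/cephfs";
        Access_Type = RW;
        FSAL { Name = CEPH; }
    }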

Speakers

Jeff Layton

Principal Software Engineer, Red Hat
Jeff Layton is a long-time Linux kernel developer specializing in network file systems. He has made significant contributions to the kernel's NFS client and server, the CIFS client, and the kernel's VFS layer. Recently, he has taken an interest in Ceph, in particular as a backend for...


Saturday January 26, 2019 5:00pm - 5:25pm CET
G202
 
Sunday, January 27
 

9:00am CET

How GlusterFS achieves high availability
Nowadays, with the increase in usage of applications and our dependency on them, any disruption to these applications is undesirable. High availability has become a vital feature. Have you ever wondered how high availability is implemented in distributed systems?

GlusterFS is a scale-out, open source distributed file system. By attending this talk, you will understand the implementation of the Automatic File Replication (AFR) feature, which provides high availability in GlusterFS.
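As a taste of AFR in practice, the sketch below creates a 3-way replicated volume (the hostnames and brick paths are invented); AFR keeps the three bricks in sync and self-heals them after an outage.

    gluster volume create gv0 replica 3 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
    gluster volume start gv0
    gluster volume heal gv0 info               # pending self-heal entries
    mount -t glusterfs server1:/gv0 /mnt/gv0   # any server works as mount target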

Resource Links:
https://docs.gluster.org/en/latest/

Speakers

Varsha Rao

Red Hat
I am a recent computer science graduate. Currently, I am working on the GlusterFS project at Red Hat. I was an Outreachy Round 14 intern on the Linux kernel nftables project.



Sunday January 27, 2019 9:00am - 9:25am CET
G202

9:30am CET

Online disk reencryption with LUKS2
This session will focus on the new LUKS2 cryptsetup reencryption, designed with the goal of providing better resilience when dealing with a crash event. The LUKS2 implementation also provides the option to reencrypt live (mounted) devices and better suits HA systems by emphasizing minimal downtime. Both requirements were significant milestones on the road to getting LUKS2 reencryption deployed in future enterprise environments.

In the talk we'll go through the features of the new reencryption, with a description of the data protection methods implemented as safeguards against data corruption on a crash event. We'll demonstrate the new reencryption tool on basic use cases, including an example of automatic crash recovery after a simulated system crash.
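For reference, the command-line interface that eventually shipped for this work looks roughly like the sketch below; the device name is a placeholder, and exact flags may differ between cryptsetup versions.

    # Reencrypt (re-key) a LUKS2 device in place, even while mounted:
    cryptsetup reencrypt /dev/sdb1
    # Encrypt an existing plaintext device in place; data is shifted
    # to make room for the LUKS2 header:
    cryptsetup reencrypt --encrypt --reduce-device-size 32M /dev/sdb1
    # After a crash, rerunning the command resumes from the last
    # checkpoint recorded in the LUKS2 metadata.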

Resources:
- https://gitlab.com/cryptsetup/cryptsetup
- https://gitlab.com/cryptsetup/LUKS2-docs
- https://okozina.fedorapeople.org/online-disk-reencryption-with-luks2.pdf

Speakers

Ondrej Kozina

Software Engineer, Red Hat
I'm a software engineer working for Red Hat on the storage/LVM team, and also the RHEL cryptsetup maintainer. You can discuss cryptsetup, LUKS2, and reencryption with me.



Sunday January 27, 2019 9:30am - 9:55am CET
G202

10:00am CET

Introducing Storage Instantiation Daemon
Setting up the Linux storage stack correctly has never been more complicated! Mirroring, RAID, multipath, thin provisioning, caching, compression, encryption, LVM... Much of the burden of activating devices on a Linux system today falls upon udev, but today's increasing level of complexity was probably never envisaged.

The new Storage Instantiation Daemon (SID) aims to control and report on the identification, grouping, and activation of the disparate storage layers from a single location. It works in partnership with udev and tries to improve the handling of awkward configurations.

We will look at the new SID architecture. We will also summarize the problems that led to the introduction of this new, modular infrastructure and how it addresses them.
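For context on the status quo SID wants to tame: device activation today is typically triggered by per-subsystem udev rules along these lines (a simplified illustration loosely modeled on LVM's autoactivation rule, not a verbatim copy).

    ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="LVM2_member", \
        RUN+="/usr/sbin/lvm pvscan --cache --activate ay $env{DEVNAME}"

Each layer ships its own such rules, and udev's one-event-at-a-time view makes cross-layer decisions hard; SID's single daemon with a global view of the stack is meant to address exactly that.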

Slides: https://redhat.slides.com/prajnoha/sid-intro?token=wfeEy9l8

Speakers

Peter Rajnoha

Senior Software Engineer, Red Hat



Sunday January 27, 2019 10:00am - 10:25am CET
G202

10:30am CET

lvm2 and VDO: will it blend?
lvm2 is starting to support VDO-type devices. The session will present the lvm2 interface for creating and maintaining VDO devices within the lvm2 world. Basic knowledge of lvm2 is expected of session visitors.
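As a preview, creating a VDO volume through lvm2 looks roughly like the following (the volume group and LV names are illustrative, and exact syntax may vary between lvm2 versions):

    # One VDO pool plus a virtual LV on top of it, in a single step:
    lvcreate --type vdo --name vdo_lv --size 100G --virtualsize 1T vg0
    mkfs.xfs -K /dev/vg0/vdo_lv    # -K skips discards on the new volume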

Speakers

Zdenek Kabelac

Red Hat
Senior software engineer working for Red Hat and a member of the lvm2 development team.



Sunday January 27, 2019 10:30am - 10:55am CET
G202

11:00am CET

Advanced block storage test devices
Storage tests can involve hard-to-reproduce scenarios with complex sequences of events, often complicated by the differing behavior characteristics of different types of block devices. Linux has various block test devices: scsi_debug, dm-flakey, dm-delay, and so on. But what if a test requires a scenario not covered by the existing devices? See examples of test devices created to simulate the behavior of real-world storage devices in complex support cases and during the development of Virtual Data Optimizer (now in RHEL 7.5).
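Two of the stock test devices mentioned above can be exercised like this (device names and timings are placeholders):

    # A fake 1 GiB SCSI disk that adds a small delay to every command:
    modprobe scsi_debug dev_size_mb=1024 delay=1
    # Wrap a real device in dm-flakey: pass I/O for 5 s, then fail it for 1 s:
    SZ=$(blockdev --getsz /dev/sdX)
    dmsetup create flaky --table "0 $SZ flakey /dev/sdX 0 5 1"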

Speakers

Bryan Gurney

Senior Software Engineer, Red Hat
I'm a software engineer on Virtual Data Optimizer, a Linux kernel module that provides block-level deduplication and compression. I specialize in testing, performance, and advanced support of VDO, as well as hardware performance and behavior.



Sunday January 27, 2019 11:00am - 11:25am CET
G202

11:00am CET

Ceph Community BoF
Come meet with Ceph developers and users to discuss the current state of the project and user experiences. We’ll open with a short presentation on new features in the Nautilus release and then turn to group discussions. Bring your questions, your war stories, and your feature needs!

Speakers

Gregory Farnum

Principal Software Engineer, Red Hat
Greg Farnum has been in the core Ceph development group since 2009. Now a Red Hat employee, Greg has done major work on all components of the Ceph ecosystem, previously served as the CephFS tech lead, and currently works as an individual contributor focused on the core RADOS system.

Sage Weil

Ceph Project Leader, Red Hat
Sage helped build the initial prototype of Ceph at the University of California, Santa Cruz as part of his graduate thesis. Since then he has led the open source project with the goal of bringing a reliable, robust, scalable, and high-performance storage system to the free software...


Sunday January 27, 2019 11:00am - 12:50pm CET
R211 - Students Club