Now that Machine Learning has moved from the esoteric realm into the world of daily application, what steps are required to prepare existing data to become ML training data sets? What constitutes a good use case? What is the actual cost in effort and time to prepare the data? How much data is required, and what accessible techniques can be applied to reach that minimum? This session explores the practical steps non-experts can take to prepare actual issue-tracking data for consumption by ML training algorithms, leveraging available community and Red Hat resources.
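As a rough illustration of the kind of preparation the session discusses, the sketch below turns a hypothetical JSON export of issue-tracker records into a small labeled training set (text plus label) with a simple train/test split. The file name "issues.json" and the field names ("summary", "description", "component") are assumptions for the example, not the format or method demonstrated in the talk.

```python
# Illustrative sketch only: convert an assumed issue-tracker JSON export
# into labeled CSV files suitable as a starting point for ML training.
import csv
import json
import random


def load_issues(path):
    """Load issues from a hypothetical JSON export (a list of dicts)."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)


def to_example(issue):
    """Join free-text fields into one input string; use the component as the label."""
    text = " ".join(
        (issue.get("summary") or "", issue.get("description") or "")
    )
    label = issue.get("component") or "unlabeled"
    # Collapse whitespace so each example is a single clean line of text.
    return {"text": " ".join(text.split()), "label": label}


def main():
    issues = load_issues("issues.json")  # assumed export file name
    examples = [to_example(i) for i in issues if i.get("summary")]

    random.seed(42)
    random.shuffle(examples)
    split = int(0.8 * len(examples))  # simple 80/20 train/test split

    for name, rows in (("train.csv", examples[:split]),
                       ("test.csv", examples[split:])):
        with open(name, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=["text", "label"])
            writer.writeheader()
            writer.writerows(rows)


if __name__ == "__main__":
    main()
```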
Here is a link to the associated video:
https://www.youtube.com/watch?v=T1sYLbuHBZU