Recently I read an article titled Train sklearn 100x faster, about an open-source Python module named sk-dist. The module implements a "distributed scikit-learn" by extending its built-in parallelisation of meta-estimators, such as ensemble.BaggingClassifier, using Spark.
It was 1 AM. Wise men and women have told me not to stay up late using computers. However, my life is too sedentary to sleep early, I am too bored with Netflix and chill, and I am too sober to dream about the next big thing since TikTok. So, I did the next best thing…
A deep dive into Spark transformations and actions is essential for writing effective Spark code. This article provides a brief overview of both.
For simplicity, this article focuses on PySpark and the DataFrame API. The concepts apply similarly to the other languages in the Spark framework. Furthermore, understanding the following concepts is necessary to grasp the rest of the material.
Resilient Distributed Dataset: Spark jobs are typically executed against Resilient Distributed Datasets (RDDs), which are fault-tolerant partitions of records that can be operated on concurrently. …
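The split between lazy transformations and eager actions is the core idea behind effective Spark code. As a conceptual sketch only (plain Python, not Spark; `TinyRDD` is an invented toy class, not part of any Spark API), transformations merely record a plan, and nothing executes until an action such as `collect()` is called:

```python
# Toy illustration of Spark's lazy-evaluation model (no Spark required).
# Transformations (map, filter) only append to a plan; the action
# (collect) is what actually walks the data.
class TinyRDD:
    def __init__(self, data, plan=None):
        self._data = list(data)
        self._plan = plan or []          # recorded transformations

    def map(self, fn):                   # transformation: lazy
        return TinyRDD(self._data, self._plan + [("map", fn)])

    def filter(self, fn):                # transformation: lazy
        return TinyRDD(self._data, self._plan + [("filter", fn)])

    def collect(self):                   # action: triggers execution
        out = self._data
        for kind, fn in self._plan:
            if kind == "map":
                out = [fn(x) for x in out]
            else:
                out = [x for x in out if fn(x)]
        return out

rdd = TinyRDD(range(6)).map(lambda x: x * x).filter(lambda x: x % 2 == 0)
# No work has happened yet; only the action evaluates the plan:
print(rdd.collect())  # → [0, 4, 16]
```

In real PySpark the same shape holds: chaining `df.select(...).filter(...)` builds a query plan, and only an action like `collect()` or `count()` triggers a job.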
People change teams all the time, for many reasons: changing jobs, internal migration, personal time off, and so on. Gone are the days when people stayed with one company for a long time, let alone with one team. Embracing this fact and preparing for it makes a team robust to change. A big part of that preparation is a solid onboarding plan. Machine Learning (ML) teams differ from typical software product teams in that they span many techniques and skill dimensions, so onboarding in such teams brings some unique challenges. With this article, we show an onboarding process as part of handling…
If you do not have the time to read the full article, consider reading the 30-second version.
If you have Machine Learning (ML) pipelines in production, you have to worry about the backward compatibility of changes made to the pipeline. It may be tempting to increase test coverage, but high test coverage alone cannot guarantee that your recent changes have not broken the pipeline or produced low-quality results. For that, you need end-to-end tests that can be executed as part of your continuous integration pipelines. Developing such a test requires sampling the dataset that powers the…
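As a hedged illustration of what such an end-to-end check might look like (every name here, including `run_pipeline` and the fixture data, is a hypothetical placeholder rather than the article's actual code), the test runs the pipeline on a small fixed sample and compares the outputs against a recorded baseline:

```python
# Sketch of a backward-compatibility check for a CI pipeline: run the
# current pipeline on a frozen data sample and diff the result against
# a baseline recorded from a known-good version.

def run_pipeline(rows):
    # Placeholder for the real pipeline under test.
    return [{"id": r["id"], "score": round(r["x"] * 0.5, 4)} for r in rows]

def check_backward_compatibility(sample_rows, baseline, tol=1e-6):
    """Return the (new, old) pairs whose scores diverge beyond tol."""
    current = run_pipeline(sample_rows)
    return [
        (new, old)
        for new, old in zip(current, baseline)
        if new["id"] != old["id"] or abs(new["score"] - old["score"]) > tol
    ]

# Frozen sample and the baseline captured from the last released version:
sample = [{"id": 1, "x": 2.0}, {"id": 2, "x": 3.0}]
baseline = [{"id": 1, "score": 1.0}, {"id": 2, "score": 1.5}]
assert check_backward_compatibility(sample, baseline) == []
```

Wired into CI, a non-empty mismatch list fails the build, which catches regressions that unit tests on individual components can miss.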
If you do not have time, here is the 30-second version:
Engineering Manager: AI, Analytics and Data @H&M. Opening little boxes, one at a time