Blog | luminousmen

The ultimate Python style guidelines

Coding guidelines help engineering teams write consistent code that is easy for all team members to read and understand. Python has an excellent style guide called PEP 8. It covers most of the situations you will run into while writing Python. I like PEP 8, I believe there has been much...

Introduction to Pyspark join types

This article is written to visualize the various types of joins, a cheat sheet so that all join types are listed in the same place with examples and without stupid circles. Aaaah, circles! I'm tired of these explanations of joins with intersecting sets and circles. It seems to be both clear and...
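PySpark expresses all of these through `DataFrame.join(other, on=..., how=...)`, where `how` takes values like `inner`, `left`, and `outer`. As a language-agnostic toy illustration of what two of those join types actually mean (the table names and data here are invented, and these helpers are not PySpark code), here is a pure-Python sketch:

```python
# Two toy "tables" as lists of (key, value) rows. Keys "a".."d" and the
# numbers are made-up example data.
left = [("a", 1), ("b", 2), ("c", 3)]
right = [("b", 20), ("c", 30), ("d", 40)]

def inner_join(l, r):
    # Keep only keys present on BOTH sides.
    rk = dict(r)
    return [(k, v, rk[k]) for k, v in l if k in rk]

def left_join(l, r):
    # Keep every left row; fill missing right values with None (null).
    rk = dict(r)
    return [(k, v, rk.get(k)) for k, v in l]

print(inner_join(left, right))  # [('b', 2, 20), ('c', 3, 30)]
print(left_join(left, right))   # [('a', 1, None), ('b', 2, 20), ('c', 3, 30)]
```

Right and full outer joins follow the same pattern with the roles of the two sides extended symmetrically.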

The 5-minute guide to using bucketing in Pyspark

There are many different tools in the world, each of which solves a range of problems. Many of them are judged by how well and how correctly they solve a given problem, but there are tools that you just like and want to use. They are properly designed and fit well in your hand, you do not...

Spark tips. Don't collect data on driver


How to not leap in time using Python

If you want to display the time to a user of your application, you query the time of day. However, if your application needs to measure elapsed time, you need a timer that will give the right answer even if the user changes the time on the system clock. The system clock, which tells the time of...

Spark History Server and monitoring jobs performance

Imagine a situation: you wrote a Spark job to process a huge amount of data, and it took 2 days for this job to finish. It happens. Actually, it happens regularly. To tune these jobs, engineers need information. It can be obtained from Spark events (if you run something on a cluster in Spark...
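A job only appears in the History Server if event logging was enabled while it ran. A minimal `spark-defaults.conf` sketch using Spark's standard event-log settings (the HDFS path below is only an example; point it at storage your cluster can reach):

```
# Write Spark events to a shared log directory...
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs:///spark-logs

# ...and tell the History Server where to read them from.
spark.history.fs.logDirectory    hdfs:///spark-logs
```

With this in place, finished applications can be inspected in the History Server UI long after the driver has exited.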

Spark tips. DataFrame API


Schema-on-Read vs Schema-on-Write

When we talk about working with data, we are usually doing it in a system that belongs to one of two types. The first of them is schema-on-write. Probably many of you have already worked with relational databases, and you understand that the first step to working with a relational...
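The difference between the two approaches is *when* the schema is enforced. As a toy pure-Python illustration (the data, field names, and helper functions here are all invented for the example): schema-on-write validates rows before they enter storage, while schema-on-read stores the raw text untouched and applies the schema only when querying.

```python
import csv
import io

raw = "id,age\n1,42\n2,not-a-number\n"

def write_validated(text):
    # Schema-on-write: coerce and validate rows BEFORE storing them;
    # the bad row never makes it into "storage" (like a failing INSERT).
    stored = []
    for row in csv.DictReader(io.StringIO(text)):
        try:
            stored.append({"id": int(row["id"]), "age": int(row["age"])})
        except ValueError:
            pass  # reject the row at write time
    return stored

def read_with_schema(text):
    # Schema-on-read: the raw text was stored as-is; the schema is applied
    # only now, at query time, and each reader can decide how to handle
    # rows that don't fit.
    for row in csv.DictReader(io.StringIO(text)):
        try:
            yield {"id": int(row["id"]), "age": int(row["age"])}
        except ValueError:
            continue

print(write_validated(raw))
print(list(read_with_schema(raw)))
```

Relational databases are the classic schema-on-write systems; data lakes and tools like Spark reading raw files are schema-on-read.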

Demystifying hypothesis testing

There are a lot of engineers who have never been involved in statistics or data science. So, when building data science pipelines or rewriting code produced by data scientists into adequate, easily maintained code, many nuances and misunderstandings arise on the engineering side. For these Data/ML...
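One hypothesis-testing idea that translates directly into plain code is the permutation test: shuffle the group labels many times and count how often chance alone produces a difference in means at least as large as the one observed. A small sketch with invented data:

```python
import random

# Made-up measurements for two groups; group a is visibly higher than b.
a = [2.1, 2.5, 2.8, 3.0, 2.7]
b = [1.2, 1.5, 1.1, 1.6, 1.4]

def perm_test(x, y, n=5000, seed=0):
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = x + y
    hits = 0
    for _ in range(n):
        # Under the null hypothesis the labels are exchangeable, so any
        # relabelling of the pooled data is equally likely.
        rng.shuffle(pooled)
        gx, gy = pooled[:len(x)], pooled[len(x):]
        if abs(sum(gx) / len(gx) - sum(gy) / len(gy)) >= observed:
            hits += 1
    # Fraction of shuffles beating the observed difference: the p-value.
    return hits / n

p = perm_test(a, b)
print(f"p-value: {p}")
```

A small p-value means the observed difference is hard to explain by random labelling alone, which is exactly the intuition behind rejecting a null hypothesis.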

Big Data file formats

Apache Spark supports many different data formats, such as the ubiquitous CSV format and the web-friendly JSON format. Common formats used primarily for big data analytics are Apache Parquet and Apache Avro. In this post, we're going to cover the properties of these four formats: CSV, JSON,...
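The two text formats are easy to poke at with the standard library alone (Parquet and Avro need third-party packages such as `pyarrow` or `fastavro`); the rows below are invented example data:

```python
import csv
import io
import json

rows = [{"name": "alice", "score": 10}, {"name": "bob", "score": 7}]

# CSV: flat and row-oriented; the only "schema" is the header line,
# and every value comes back as a string on read.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "score"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())

# JSON: allows nesting and keeps basic types (numbers stay numbers),
# so this round-trip is lossless.
text = json.dumps(rows)
print(json.loads(text) == rows)  # True
```

Parquet (columnar) and Avro (row-oriented, schema embedded with the data) trade this human readability for compression and much faster analytical scans.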