Data Scientist | https://zerowithdot.com | makes data make sense

It’s been quite some time since we wrote on any “engineering-like” topic. As we all want to stay efficient and productive, it is a good time to revisit Google Colaboratory.

Google Colaboratory, or Colab for short, has been a great platform for data scientists and machine-learning enthusiasts in general. It offers free GPU and TPU instances for a limited time, and it serves a prettified version of a Jupyter notebook. It is a great combination for various small and mid-size projects.

Unfortunately, it comes with certain limitations. The biggest ones are the lack of storage persistence, as well…

Rarely do I get a feeling that there is not enough advice to go around. It’s quite the opposite. Advice is often free and accessible, despite not always being useful or applicable.

In this article, I would like to share some thoughts on what I think is useful. It is the product of conclusions I have been testing continuously for the last four years. It works. It helped me, and I believe it will help you too.

2016 was a turning year for my family. We lost our place to live, both got unemployed, and if that was…

Writing a custom implementation of a popular algorithm can be compared to playing a musical standard. As long as the code faithfully reflects the equations, the functionality remains unchanged. It is, indeed, just like playing from notes. However, it lets you master your tools and practice your ability to hear and think.

In this post, we are going to re-play the classic Multi-Layer Perceptron. Most importantly, we will play the solo called **backpropagation**, which is, indeed, one of the machine-learning standards.

As usual, we are going to show how the *math translates into code*. In other words, we will…
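To give a flavor of how the math translates into code, here is a minimal sketch of backpropagation in a one-hidden-layer perceptron, written with numpy. The architecture (2-4-1, sigmoid activations, squared-error loss) and all hyperparameters are assumptions chosen for illustration, not the article's actual setup.

```python
import numpy as np

# Minimal MLP trained with backpropagation on a toy XOR-like task.
# Network shape (2-4-1), loss, and learning rate are illustrative choices.
rng = np.random.default_rng(42)

X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # XOR-like labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# parameters of a 2-4-1 network
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

lr = 0.5
losses = []
for epoch in range(2000):
    # forward pass
    h = sigmoid(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2)          # output
    loss = np.mean((p - y) ** 2)      # mean squared error
    losses.append(loss)

    # backward pass: the chain rule applied layer by layer
    dp = 2 * (p - y) / len(X)         # dL/dp
    dz2 = dp * p * (1 - p)            # through the output sigmoid
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * h * (1 - h)            # through the hidden sigmoid
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)

    # gradient-descent step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"initial loss: {losses[0]:.4f}, final loss: {losses[-1]:.4f}")
```

The point of the exercise is exactly the one the teaser makes: each line of the backward pass corresponds to one application of the chain rule from the equations.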

I promised myself not to write about Covid-19.

However, with my recent inclination to go back to fundamentals and revisit some of the more interesting topics in **mathematics**, I thought it would be fun and useful to explain why **forecasting a time series** (e.g. a disease progression) is so challenging. More precisely, I want to explain why such simulations can be genuinely hard by showing how things work at the fundamental level.

We will start with some basic equations and discuss the main challenges that relate to **data** and building **models**. Then, we will move on to…
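As a concrete stand-in for the kind of basic equations involved, here is a sketch of the classic SIR compartmental model integrated with a simple Euler scheme. The parameter values are assumed for demonstration only, and this is not necessarily the model the article builds.

```python
import numpy as np

# Classic SIR model: susceptible (S), infected (I), recovered (R)
# fractions of a population. Parameters are illustrative assumptions.
beta, gamma = 0.3, 0.1        # infection and recovery rates (assumed)
S, I, R = 0.99, 0.01, 0.0     # initial fractions
dt, steps = 0.1, 2000         # Euler step size and number of steps

history = []
for _ in range(steps):
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    S += dS * dt; I += dI * dt; R += dR * dt
    history.append((S, I, R))

# S + I + R should remain (numerically) equal to 1 throughout
print(f"final S: {history[-1][0]:.3f}")
```

Even in this tiny model, the outcome is highly sensitive to `beta` and `gamma`, which hints at why fitting such parameters from noisy, incomplete data makes forecasting so hard.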

This is the second part of the work that attempts to find a recipe for **financial independence** — a stage where you no longer need to work to support yourself.

In the previous article, we tried to formulate the problem of personal finance as a system of ordinary differential equations (ODEs), which we later solved numerically using Python. Given a set of input parameters, our numerical model was able to determine your financial condition [Github].

In this article, we bring it to the next stage. We add **randomness** to the equation to account for life’s unpredictability. …
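To illustrate the idea of adding randomness to such an equation, here is a hypothetical sketch (not the article's actual model): a wealth balance `dW/dt = r*W + s` with a multiplicative shock term, stepped with an Euler–Maruyama-style update. All parameter values are assumed.

```python
import numpy as np

# Toy wealth model with randomness: deterministic growth r*W plus
# savings s, perturbed by a random shock each step. Values assumed.
rng = np.random.default_rng(0)

r, s, sigma = 0.05, 12_000.0, 0.15   # return rate, yearly savings, volatility
dt, years = 1 / 12, 30               # monthly steps over 30 years
n = int(years / dt)

def simulate():
    w = 10_000.0                     # starting capital (assumed)
    for _ in range(n):
        shock = sigma * w * rng.normal() * np.sqrt(dt)
        w += (r * w + s) * dt + shock
    return w

# run many random trajectories and summarize the spread of outcomes
outcomes = np.array([simulate() for _ in range(500)])
print(f"median: {np.median(outcomes):,.0f}, "
      f"10th pct: {np.percentile(outcomes, 10):,.0f}")
```

The spread between the percentiles is the whole point: with randomness in the equation, a single trajectory says little, and conclusions come from the distribution of outcomes.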

Polynomial regression is one of the most fundamental concepts used in data analysis and prediction. Not only can any (infinitely differentiable) function be expressed as a polynomial through its Taylor series, at least within a certain interval; it is also one of the first problems a beginner in machine learning is confronted with. It is used across various disciplines such as financial analysis, signal processing, medical statistics, and more.

While polynomial regression is a relatively simple concept, it became a sort of “hello world” problem in machine-learning as it touches upon many core concepts. …
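A minimal version of that "hello world" problem can be sketched in a few lines with numpy: fit a low-degree polynomial to noisy samples of a smooth function. The function, degree, and noise level below are arbitrary choices for illustration.

```python
import numpy as np

# Fit a cubic polynomial to noisy samples of sin(x) on [0, 2].
rng = np.random.default_rng(1)
x = np.linspace(0, 2, 50)
y = np.sin(x) + rng.normal(scale=0.05, size=x.size)

coeffs = np.polyfit(x, y, deg=3)     # least-squares cubic fit
y_hat = np.polyval(coeffs, x)

# compare the fit against the noise-free function
rmse = np.sqrt(np.mean((y_hat - np.sin(x)) ** 2))
print(f"RMSE against the true function: {rmse:.4f}")
```

Small as it is, the example already touches the core concepts the teaser alludes to: a hypothesis class (polynomials of degree 3), a loss (least squares), and generalization (error against the true function rather than the noisy samples).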

Imagine one day you wake up and know you are free to do whatever you like for the rest of your life… and money is no longer a problem. You have become truly *financially independent* and no longer need to work to make it through the next year. Does that sound appealing?

While it may sound appealing, the path towards that goal is certainly not easy (unlike what YouTube ads say about it). There are many factors to take into consideration when dealing with your finances, and reasoning is often obscured by the complexity.

In this article, we are…

In this article, we present an example of an (im-)practical application of the Hidden Markov Model (HMM). It is an artificially constructed problem, where we create a case for a model, rather than applying a model to a particular case… although, maybe a bit of both.

Here, we will rely on the code we developed earlier (see the repo) and discussed in the article “Hidden Markov Model — Implementation from scratch”, including the mathematical notation. Feel free to take a look. …

The Internet is full of good articles that explain the theory behind the Hidden Markov Model (HMM) well (e.g. 1, 2, 3 and 4). However, many of these works contain a fair amount of rather advanced mathematical equations. While equations are necessary if one wants to explain the theory, we decided to take it a step further and create a **gentle, step-by-step practical implementation** to complement the good work of others.

In this short series of *two articles*, we will focus on translating all of the complicated mathematics into code. Our starting point is the document written…
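As a taste of what "translating the mathematics into code" looks like for an HMM, here is a sketch of the forward algorithm — the standard alpha-pass for computing the likelihood of an observation sequence. The transition, emission, and initial-state matrices below are toy values for illustration, not taken from the article or its repo.

```python
import numpy as np

# Toy discrete HMM with 2 hidden states and 2 observable symbols.
A = np.array([[0.7, 0.3],            # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],            # emission probabilities per state
              [0.2, 0.8]])
pi = np.array([0.6, 0.4])            # initial state distribution

def forward(observations):
    """Return P(observations | model) via the alpha recursion."""
    alpha = pi * B[:, observations[0]]      # initialization
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]       # induction step
    return alpha.sum()                      # termination

likelihood = forward([0, 1, 0, 0])
print(f"sequence likelihood: {likelihood:.6f}")
```

Each line of `forward` maps one-to-one onto a step of the textbook derivation (initialization, induction, termination), which is the spirit of the series.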

One of the fields where the WKMC algorithm can be applied is demographics. Imagine a situation in which you would like to see how people would group if all administrative divisions and historical conflicts disappeared, and ethnic, national, or tribal identity no longer mattered. How would people then go about forming communities?

In this post, we will use the WKMC algorithm to find out how people would group based only on their present geographical distribution. For this reason, we will look at two *parameters*:

- Geographical coordinates,
- Population density at a specific location.

As this is a curiosity-driven simulation, it…
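To make the two parameters concrete, here is a minimal weighted k-means sketch: points are coordinate pairs, and each carries a weight standing in for population density. This is a generic illustration under those assumptions; the article's actual WKMC implementation may differ.

```python
import numpy as np

# Toy "settlements": two well-separated blobs of coordinates, each
# point weighted by an assumed population density.
rng = np.random.default_rng(7)
pts = np.vstack([rng.normal([10, 10], 1, (50, 2)),
                 rng.normal([30, 30], 1, (50, 2))])
w = rng.uniform(1, 100, size=len(pts))   # population-density weights

def weighted_kmeans(pts, w, k=2, iters=50):
    # deterministic init: first and last point as starting centers
    centers = pts[[0, -1]].copy()
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(pts[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the weight-weighted mean of its points
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = np.average(pts[mask], axis=0, weights=w[mask])
    return centers, labels

centers, labels = weighted_kmeans(pts, w)
```

The weights pull each cluster center towards the densest locations, which is exactly what lets population density, and not just raw geography, shape the resulting "communities".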