Photo by Lukas on Pexels

Over the last few months, I have probably run the t-test dozens of times, but I recently realized that I did not fully understand some of its concepts, such as why it is not possible to accept the null hypothesis or where the numbers in the t-tables come from. After doing some research, I found that several articles answer these questions, but not many gather all of the information in one place.

Therefore, I decided to write this article to explain the t-test step by step, so anyone can use it as a reference whenever they have to run the test or review the underlying concepts.
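As a quick taste of what running the test looks like in practice, here is a minimal sketch using SciPy; the data and variable names are made up for illustration and are not from the article itself.

```python
import numpy as np
from scipy import stats

# Two made-up samples, e.g. scores of a control and a treatment group
rng = np.random.default_rng(42)
control = rng.normal(loc=5.0, scale=1.0, size=30)
treatment = rng.normal(loc=5.5, scale=1.0, size=30)

# Independent two-sample t-test
t_stat, p_value = stats.ttest_ind(control, treatment)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# If p < alpha we reject the null hypothesis; otherwise we fail to reject it
# (we never "accept" it), which is one of the points discussed in the article.
```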

Depending on…


Hands-on Tutorials

Implementing MultiHead and CBAM attention modules in PyTorch

Photo by Negative Space on Pexels

Ever since the introduction of the Transformer in the paper “Attention Is All You Need”, there has been a shift in the field of NLP towards replacing Recurrent Neural Networks (RNNs) with attention-based networks. In the current literature, there are already many great articles describing this architecture. Here are two of the best ones I found during my review: The Annotated Transformer and Transformers explained Visually.

However, after researching how to implement attention in computer vision (the best articles I found: Understanding Attention Modules, CBAM, Papers with Code-Attention, Self-Attention, Self-Attention and Conv), I noticed that only a few of them clearly describe the…
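For orientation, here is a minimal sketch of calling multi-head attention through PyTorch’s built-in nn.MultiheadAttention rather than the custom modules built in the article; the dimensions are illustrative only.

```python
import torch
import torch.nn as nn

embed_dim, num_heads = 64, 8
attention = nn.MultiheadAttention(embed_dim, num_heads)

# By default nn.MultiheadAttention expects (seq_len, batch, embed_dim)
seq_len, batch = 10, 4
x = torch.randn(seq_len, batch, embed_dim)

# Self-attention: query, key, and value are the same tensor
out, weights = attention(x, x, x)
print(out.shape)      # torch.Size([10, 4, 64])
print(weights.shape)  # torch.Size([4, 10, 10]) - weights averaged over heads
```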


Implementing basic LSTM, LSTM-Linear, and CNN-LSTM-Linear

Photo by Negative Space on Pexels

Last week, I had to reimplement an LSTM-based neural network. After checking the PyTorch documentation, I once again had to spend some time reading and understanding all the input parameters.

Therefore, this time I decided to write this article summarizing how to implement some basic LSTM-based neural networks. Here is the structure of the article:

  • Basic LSTM
  • LSTM-Linear neural network
  • CNN-LSTM-Linear neural network

1. Basic LSTM

  • input: (seq_len, batch, input_size)
  • h_0: (num_layers * num_directions, batch, hidden_size)
  • c_0: (num_layers * num_directions, batch, hidden_size)

Important notes:

  • The seq_len parameter corresponds to the length of your input, not the number…
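To make the shapes above concrete, here is a minimal sketch of a basic nn.LSTM call with made-up dimensions; note that these shapes assume the default batch_first=False.

```python
import torch
import torch.nn as nn

input_size, hidden_size, num_layers = 8, 16, 2
num_directions = 1  # 2 if bidirectional=True

lstm = nn.LSTM(input_size, hidden_size, num_layers)

seq_len, batch = 5, 3
x = torch.randn(seq_len, batch, input_size)  # (seq_len, batch, input_size)
h_0 = torch.zeros(num_layers * num_directions, batch, hidden_size)
c_0 = torch.zeros(num_layers * num_directions, batch, hidden_size)

output, (h_n, c_n) = lstm(x, (h_0, c_0))
print(output.shape)  # torch.Size([5, 3, 16]) -> (seq_len, batch, num_directions * hidden_size)
print(h_n.shape)     # torch.Size([2, 3, 16]) -> (num_layers * num_directions, batch, hidden_size)
```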


Photo by Karolina Grabowska on Pexels

In 1966, the term Cellular Automaton was introduced in the posthumously published work of John von Neumann, considered one of the top mathematicians of the 20th century, who defined it as a dynamical system that evolves in discrete steps. Four years later, John Conway created the well-known Game of Life, whose main feature is that its evolution is determined entirely by its initial state and requires no further input.
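To illustrate how little input the Game of Life needs beyond its initial state, here is a minimal NumPy sketch of one evolution step; the rules are the standard ones, but the implementation is my own illustration rather than the article’s code.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One step of Conway's Game of Life on a toroidal (wrap-around) grid."""
    # Count the eight neighbours of every cell using periodic shifts
    neighbours = sum(
        np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbours; a dead cell is born with exactly 3
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A "glider" as the initial state: everything that follows is determined by it
grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
```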


How and when to implement lockdown policies

Photo on Pexels

Although over the past months we have finally started to receive vaccines for COVID-19, the leading approach during the last year to what is commonly known as “flattening the curve” of infections has been social distancing. However, there has been a clear lack of consensus among governments around the globe, and one could dedicate a whole series of articles to the different measures implemented in each country.

Instead of describing those measures and assessing the optimal ones, we decided to leave the questions about when and how to implement lockdowns to a model that could optimize…


Comparing and assessing Conv1D and Conv2D

Photo by Negative Space on Pexels

Most of the people reading this article have probably already implemented some CNN-based neural networks and wondered whether to use Conv1D or Conv2D when doing time series analysis. In this article, I will explain and compare both types of convolutions and answer the following question: Why am I saying that Conv1D is a subclass of Conv2D?

If you cannot easily answer this question, I think you will find this article interesting. As always, any questions or comments are welcome.
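As a quick illustration of the subclass claim, here is a sketch I put together (not the article’s own code): a Conv1d can be reproduced by a Conv2d whose kernel has height 1, once the weights are copied over.

```python
import torch
import torch.nn as nn

in_ch, out_ch, k = 3, 6, 5
conv1d = nn.Conv1d(in_ch, out_ch, kernel_size=k)
conv2d = nn.Conv2d(in_ch, out_ch, kernel_size=(1, k))

# Copy the 1D weights into the 2D layer: (out, in, k) -> (out, in, 1, k)
with torch.no_grad():
    conv2d.weight.copy_(conv1d.weight.unsqueeze(2))
    conv2d.bias.copy_(conv1d.bias)

x = torch.randn(2, in_ch, 100)                    # (batch, channels, length)
out_1d = conv1d(x)
out_2d = conv2d(x.unsqueeze(2)).squeeze(2)        # add/remove a dummy height dimension

print(torch.allclose(out_1d, out_2d, atol=1e-6))  # True
```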

Several months ago, I was asked to create a neural network for a time series-based challenge. …


Thoughts and Theory

A new normalization method for training deep neural networks studied on an EEG-based emotion classification dataset.

Photo by Negative Space on Pexels

A few months ago, I started researching how to classify evoked emotions using EEG recordings, and I quickly faced one of the most challenging problems in brain imaging methods: the poor homogeneity of EEG activity across participants. This problem can be easily illustrated with Figure 1, where the plots were obtained by first extracting features from each EEG recording for each video and participant and then applying the dimensionality reduction tool UMAP to embed the data. On the left, colours indicate emotions; on the right, colours indicate which participant the data corresponds to. …
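For reference, producing that kind of embedding typically looks like the following minimal sketch with the umap-learn package; the feature extraction is assumed to have already happened, and the array sizes are placeholders rather than the article’s actual data.

```python
import numpy as np
import umap

# Placeholder: one feature vector per EEG recording (participant x video)
features = np.random.rand(400, 128)  # e.g. 400 recordings, 128 features each

# Embed the features into 2D for visualization, as in Figure 1
reducer = umap.UMAP(n_components=2, random_state=42)
embedding = reducer.fit_transform(features)
print(embedding.shape)  # (400, 2)
```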


Photo by Lukas on Pexels

After running several statistical tests to assess my models, I decided to dig deeper into the theory and ask myself questions such as why the number of samples matters for a statistical test, why the standard error divides the standard deviation by the square root of the sample size, or why statisticians distinguish between the Z- and the t-distribution.

Since I did not find a blog post that answered all of these questions, I decided to run some simulations in Python and share the results in this article for anyone interested.

1. Central Limit Theorem

The central limit theorem states that if you draw sufficiently many random samples from a population…
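The following is a minimal sketch of the kind of simulation referred to above (my own illustration, not the article’s code): sample means of a clearly non-normal population end up approximately normal, with a spread that shrinks with the square root of the sample size.

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)  # skewed, non-normal population

for n in (5, 30, 200):
    # Draw many random samples of size n and keep their means
    sample_means = population[rng.integers(0, population.size, size=(10_000, n))].mean(axis=1)
    print(f"n={n:>3}  mean of means={sample_means.mean():.3f}  "
          f"std of means={sample_means.std():.3f}  (theory: {population.std() / np.sqrt(n):.3f})")
```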


Building an autoencoder to reconstruct images using the Fashion-MNIST dataset

Photo by Negative Space on Pexels

This article is written for people who want to build a basic autoencoder using PyTorch. The dataset on which this article is based is the Fashion-MNIST dataset.

The article is divided into the following sections:

  1. Introduction to autoencoders.
  2. Loading the dataset.
  3. Building the neural network architecture.
  4. Training the neural network.
  5. Visualization in TensorBoard.

1. Introduction to autoencoders

As defined in Wikipedia:

An autoencoder is a type of neural network used to learn efficient data codings in an unsupervised manner.

In other words, the aim of an autoencoder is to learn a lower-dimensional representation of a set of data, which is useful for feature…
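As a preview of the architecture section, here is a minimal sketch of what such an autoencoder can look like in PyTorch; the layer sizes are illustrative and not necessarily the ones used later in the article.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Compress a flattened 28x28 Fashion-MNIST image into a small code and reconstruct it."""

    def __init__(self, code_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, 28 * 28), nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = AutoEncoder()
batch = torch.rand(16, 28 * 28)  # 16 flattened images
reconstruction = model(batch)
loss = nn.functional.mse_loss(reconstruction, batch)
```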


Dataset: Fashion-MNIST

Photo by Negative Space on Pexels

This article is written for people who want to learn or review how to build a basic Convolutional Neural Network in Keras. The dataset on which this article is based is the Fashion-MNIST dataset.

Along with this article, we will explain how:

  1. To build a basic CNN in PyTorch.
  2. To run the neural networks.
  3. To save and load checkpoints.

Dataset description

Fashion-MNIST is a dataset of Zalando’s article images — consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST…
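For completeness, loading the dataset is only a few lines with torchvision; this is a minimal sketch assuming the PyTorch route listed above, with hyperparameters chosen purely for illustration.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.ToTensor()  # 28x28 grayscale images -> tensors in [0, 1]

train_set = datasets.FashionMNIST(root="data", train=True, download=True, transform=transform)
test_set = datasets.FashionMNIST(root="data", train=False, download=True, transform=transform)

train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
images, labels = next(iter(train_loader))
print(images.shape, labels.shape)  # torch.Size([64, 1, 28, 28]) torch.Size([64])
```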

Javier Fernandez

Artificial Intelligence researcher and developer
