Leaky ReLU is an activation function that modifies ReLU by adding a small slope on the negative side, so that neurons with negative inputs still pass a small signal and stay alive.
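In formula form (0.01 is the commonly used negative-side slope; some frameworks expose it as a tunable parameter, often called alpha):

f(x) = x         if x >= 0
f(x) = 0.01 * x  if x < 0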
x: the input data points (a list of values)
Although Leaky ReLU does not bring a dramatic improvement on its own, it is preferable to ReLU because it addresses the problem of dead neurons.
# Leaky ReLU
def leaky_relu(x):
    result = []              # collect the transformed values
    for i in x:
        if i < 0:
            i = 0.01 * i     # apply a small slope instead of zeroing out negatives
        result.append(i)
    return result

y = leaky_relu(x)
plot_graph(x, y, 'Leaky ReLU')
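As a self-contained alternative sketch (assuming NumPy and Matplotlib are available; the linspace range and plt.plot stand in for the article's x and plot_graph helpers), the same function can be written in vectorized form:

import numpy as np
import matplotlib.pyplot as plt

def leaky_relu_np(x, alpha=0.01):
    # np.where applies the slope alpha only to the negative entries
    return np.where(x < 0, alpha * x, x)

x = np.linspace(-10, 10, 100)   # illustrative input range
y = leaky_relu_np(x)

plt.plot(x, y)
plt.title('Leaky ReLU')
plt.show()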
Leaky ReLU is used in feedforward neural networks, and it also finds applications in Automatic Speech Recognition systems.
Addresses the problem of dying neurons by introducing a small negative slope, while keeping the same boundedness and unboundedness properties as ReLU (see the gradient sketch at the end of this section).
No significant improvement in results, apart from sparsity and dispersion.
Although the outputs form a zero-centered distribution, the one-sided saturation that drives convergence is not achieved as effectively.
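To make the dying-neuron point concrete, here is a minimal sketch (the function names and the input value are illustrative, not from the article) comparing the gradient that flows through ReLU and Leaky ReLU for a negative input:

# Gradients on the negative side: ReLU blocks learning, Leaky ReLU does not
def relu_grad(x):
    return 1.0 if x > 0 else 0.0

def leaky_relu_grad(x, alpha=0.01):
    return 1.0 if x > 0 else alpha

print(relu_grad(-2.0))        # 0.0  -> the neuron receives no update ("dies")
print(leaky_relu_grad(-2.0))  # 0.01 -> a small gradient still flows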