Scaled ELU Activation Function

The Scaled Exponential Linear Unit (SELU) activation function is a modified version of the ELU.

SELU(x) = λ · x                 if x > 0
SELU(x) = λ · α · (exp(x) − 1)  if x ≤ 0

where:

x: an input data point
α (alpha): the scale parameter applied to negative inputs
λ (lambda): the scaling factor applied to the output

In general α ≈ 1.6733 and λ ≈ 1.0507.

Range: (−λα, +∞), i.e. approximately (−1.758, +∞)

# Scaled Exponential Linear Unit
import numpy as np

def s_elu(x, a=1.6733, t=1.0507):
    result = []
    for i in x:
        if i < 0:
            # negative inputs follow a scaled exponential curve that saturates at -t*a
            i = t * (a * (np.exp(i) - 1))
        else:
            # positive inputs are also scaled by t (lambda)
            i = t * i
        result.append(i)
    return result

# x (the input values) and plot_graph are assumed to be defined earlier in the article
y = s_elu(x)
plot_graph(x, y, 'Scaled Exponential Linear Unit')
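As a quick sanity check (a minimal sketch; the sample points below are arbitrary, not from the original article), the function can be evaluated on a handful of values to confirm the behaviour on both sides of zero:

# Evaluate s_elu at a few arbitrary points (illustrative only)
sample = [-3.0, -1.0, 0.0, 1.0, 3.0]
print(s_elu(sample))
# Negative inputs approach the lower bound -t*a (about -1.758);
# positive inputs are simply scaled by t (about 1.0507).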

Uses:

Used in hidden layers, as an alternative to ReLU.
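In a framework such as TensorFlow/Keras this typically means selecting the built-in 'selu' activation together with LeCun-normal weight initialisation for the hidden layers. The following is a minimal sketch; the model and layer sizes are arbitrary and not part of the original article:

# Minimal Keras sketch: SELU as the hidden-layer activation (arbitrary layer sizes)
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation='selu', kernel_initializer='lecun_normal'),
    tf.keras.layers.Dense(64, activation='selu', kernel_initializer='lecun_normal'),
    tf.keras.layers.Dense(1),  # plain linear output layer
])
model.summary()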

 

Pros:

SELU induces a self-normalizing property in neural networks: neuron activations converge towards zero mean and unit variance, as sketched below.

 

It isn’t affected by vanishing and exploding gradient problems.
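A rough way to see the self-normalizing behaviour is to push standard-normal inputs through a stack of linear layers with LeCun-normal weights followed by SELU and print the activation statistics at each depth. This is a minimal numpy sketch; the vectorised selu_np helper and the layer sizes are illustrative, not from the original article:

# Minimal sketch of self-normalization: with LeCun-normal weights, activation
# statistics stay close to zero mean and unit variance across layers.
import numpy as np

def selu_np(x, a=1.6733, t=1.0507):
    # vectorised equivalent of s_elu above
    return np.where(x > 0, t * x, t * a * (np.exp(x) - 1))

rng = np.random.default_rng(0)
h = rng.standard_normal((1000, 256))                            # standard-normal inputs
for layer in range(5):
    fan_in = h.shape[1]
    w = rng.normal(0.0, np.sqrt(1.0 / fan_in), (fan_in, 256))   # LeCun-normal weights
    h = selu_np(h @ w)
    print(f"layer {layer}: mean={h.mean():+.3f}, std={h.std():.3f}")

Swapping selu_np for a plain ReLU in the same loop typically lets these statistics drift as depth grows, which is exactly what the self-normalizing property avoids.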

Cons:

Needs more computation power while training the network.
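The extra cost comes mainly from the exponential on the negative side. One rough way to gauge it is a numpy timing sketch like the one below; the array size is arbitrary and absolute numbers depend entirely on hardware:

# Rough element-wise timing comparison between ReLU and SELU (illustrative only)
import timeit
import numpy as np

x = np.random.standard_normal(1_000_000)
relu_t = timeit.timeit(lambda: np.maximum(x, 0.0), number=100)
selu_t = timeit.timeit(lambda: np.where(x > 0, 1.0507 * x, 1.0507 * 1.6733 * (np.exp(x) - 1)), number=100)
print(f"ReLU: {relu_t:.3f}s  SELU: {selu_t:.3f}s")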
