I know these activations differ in their definition. However, when reading ReLU's documentation, I see that it takes an alpha parameter (with 0 as the default) and says:
relu

relu(x, alpha=0.0, max_value=None)

Rectified Linear Unit.

Arguments

    x: Input tensor.
    alpha: Slope of the negative part. Defaults to zero.
    max_value: Maximum value for the output.

Returns

The (leaky) rectified linear unit activation: x if x > 0, alpha * x if x < 0. If max_value is defined, the result is truncated to this value.
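Based on that signature, my guess was that alpha can be passed by wrapping relu in a small custom function and using that callable as the activation, roughly like the sketch below (the name relu_with_alpha, the 0.001 slope, and the layer sizes are just placeholders, and the imports may depend on the Keras version):

from keras import backend as K
from keras.models import Sequential
from keras.layers.core import Dense

# wrap the documented relu(x, alpha=...) call with a fixed negative slope
def relu_with_alpha(x):
    return K.relu(x, alpha=0.001)

model = Sequential()
model.add(Dense(512, input_dim=512, activation=relu_with_alpha))

If that is legitimate, I don't understand what LeakyReLU adds beyond being packaged as a layer, which is part of what I'm asking.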
There is also a LeakyReLU with similar documentation, but it lives in a different module (advanced activations).
Is there a difference between them? And how should I import relu so that I can instantiate it with alpha? This is what I am currently doing:
from keras.layers.advanced_activations import LeakyReLU
..
..
model.add(Dense(512, 512, activation='linear'))
model.add(LeakyReLU(alpha=.001)) # using LeakyReLU instead of a plain ReLU
Note that when using LeakyReLU this way, I get the following error:
AttributeError: 'LeakyReLU' object has no attribute '__name__'
But when I use ReLU instead, it works:
model.add(Activation('relu')) # this works correctly, but I can't set alpha
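To check whether the two actually compute the same thing, this is the comparison I would run, again only a sketch (the input values, the 0.001 slope, and the input_shape are arbitrary, and the keyword names may need adjusting for other Keras versions):

import numpy as np
from keras.models import Sequential
from keras.layers.core import Activation
from keras.layers.advanced_activations import LeakyReLU

x = np.array([[-2.0, -1.0, 0.0, 1.0, 2.0]])

# LeakyReLU as its own layer
leaky_model = Sequential()
leaky_model.add(LeakyReLU(alpha=0.001, input_shape=(5,)))

# plain 'relu' by name, with alpha stuck at its default of 0
relu_model = Sequential()
relu_model.add(Activation('relu', input_shape=(5,)))

print(leaky_model.predict(x))  # I expect negatives to be scaled by 0.001
print(relu_model.predict(x))   # I expect negatives to be clipped to 0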
To sum up: what are the differences, and how can I import ReLU so that I can pass alpha to it?