Keras layers API.
Module <code>tf.compat.v1.keras.layers</code>
<code>class AbstractRNNCell</code>: Abstract object representing an RNN cell.
<code>class Activation</code>: Applies an activation function to an output.
<code>class ActivityRegularization</code>: Layer that applies an update to the cost function based on input activity.
<code>class Add</code>: Layer that adds a list of inputs.
<code>class AdditiveAttention</code>: Additive attention layer, a.k.a. Bahdanau-style attention.
<code>class AlphaDropout</code>: Applies Alpha Dropout to the input.
<code>class Attention</code>: Dot-product attention layer, a.k.a. Luong-style attention.
<code>class Average</code>: Layer that averages a list of inputs.
<code>class AveragePooling1D</code>: Average pooling for temporal data.
<code>class AveragePooling2D</code>: Average pooling operation for spatial data.
<code>class AveragePooling3D</code>: Average pooling operation for 3D data (spatial or spatio-temporal).
<code>class AvgPool1D</code>: Average pooling for temporal data.
<code>class AvgPool2D</code>: Average pooling operation for spatial data.
<code>class AvgPool3D</code>: Average pooling operation for 3D data (spatial or spatio-temporal).
<code>class BatchNormalization</code>: Base class of Batch normalization layer (Ioffe and Szegedy, 2015).
<code>class Bidirectional</code>: Bidirectional wrapper for RNNs.
<code>class Concatenate</code>: Layer that concatenates a list of inputs.
<code>class Conv1D</code>: 1D convolution layer (e.g. temporal convolution).
<code>class Conv2D</code>: 2D convolution layer (e.g. spatial convolution over images).
<code>class Conv2DTranspose</code>: Transposed convolution layer (sometimes called Deconvolution).
<code>class Conv3D</code>: 3D convolution layer (e.g. spatial convolution over volumes).
<code>class Conv3DTranspose</code>: Transposed convolution layer (sometimes called Deconvolution).
<code>class ConvLSTM2D</code>: Convolutional LSTM.
<code>class Convolution1D</code>: 1D convolution layer (e.g. temporal convolution).
<code>class Convolution2D</code>: 2D convolution layer (e.g. spatial convolution over images).
<code>class Convolution2DTranspose</code>: Transposed convolution layer (sometimes called Deconvolution).
<code>class Convolution3D</code>: 3D convolution layer (e.g. spatial convolution over volumes).
<code>class Convolution3DTranspose</code>: Transposed convolution layer (sometimes called Deconvolution).
<code>class Cropping1D</code>: Cropping layer for 1D input (e.g. temporal sequence).
<code>class Cropping2D</code>: Cropping layer for 2D input (e.g. picture).
<code>class Cropping3D</code>: Cropping layer for 3D data (e.g. spatial or spatio-temporal).
<code>class CuDNNGRU</code>: Fast GRU implementation backed by cuDNN.
<code>class CuDNNLSTM</code>: Fast LSTM implementation backed by cuDNN.
<code>class Dense</code>: Just your regular densely-connected NN layer.
<code>class DenseFeatures</code>: A layer that produces a dense <code>Tensor</code> based on given <code>feature_columns</code>.
<code>class DepthwiseConv2D</code>: Depthwise 2D convolution.
<code>class Dot</code>: Layer that computes a dot product between samples in two tensors.
<code>class Dropout</code>: Applies Dropout to the input.
<code>class ELU</code>: Exponential Linear Unit.
<code>class Embedding</code>: Turns positive integers (indices) into dense vectors of fixed size.
<code>class Flatten</code>: Flattens the input. Does not affect the batch size.
<code>class GRU</code>: Gated Recurrent Unit - Cho et al. 2014.
<code>class GRUCell</code>: Cell class for the GRU layer.
<code>class GaussianDropout</code>: Apply multiplicative 1-centered Gaussian noise.
<code>class GaussianNoise</code>: Apply additive zero-centered Gaussian noise.
<code>class GlobalAveragePooling1D</code>: Global average pooling operation for temporal data.
<code>class GlobalAveragePooling2D</code>: Global average pooling operation for spatial data.
<code>class GlobalAveragePooling3D</code>: Global average pooling operation for 3D data.
<code>class GlobalAvgPool1D</code>: Global average pooling operation for temporal data.
<code>class GlobalAvgPool2D</code>: Global average pooling operation for spatial data.
<code>class GlobalAvgPool3D</code>: Global average pooling operation for 3D data.
<code>class GlobalMaxPool1D</code>: Global max pooling operation for temporal data.
<code>class GlobalMaxPool2D</code>: Global max pooling operation for spatial data.
<code>class GlobalMaxPool3D</code>: Global max pooling operation for 3D data.
<code>class GlobalMaxPooling1D</code>: Global max pooling operation for temporal data.
<code>class GlobalMaxPooling2D</code>: Global max pooling operation for spatial data.
<code>class GlobalMaxPooling3D</code>: Global max pooling operation for 3D data.
<code>class InputLayer</code>: Layer to be used as an entry point into a Network (a graph of layers).
<code>class InputSpec</code>: Specifies the ndim, dtype and shape of every input to a layer.
<code>class LSTM</code>: Long Short-Term Memory layer - Hochreiter and Schmidhuber, 1997.
<code>class LSTMCell</code>: Cell class for the LSTM layer.
<code>class Lambda</code>: Wraps arbitrary expressions as a <code>Layer</code> object.
<code>class Layer</code>: Base layer class.
<code>class LayerNormalization</code>: Layer normalization layer (Ba et al., 2016).
<code>class LeakyReLU</code>: Leaky version of a Rectified Linear Unit.
<code>class LocallyConnected1D</code>: Locally-connected layer for 1D inputs.
<code>class LocallyConnected2D</code>: Locally-connected layer for 2D inputs.
<code>class Masking</code>: Masks a sequence by using a mask value to skip timesteps.
<code>class MaxPool1D</code>: Max pooling operation for temporal data.
<code>class MaxPool2D</code>: Max pooling operation for spatial data.
<code>class MaxPool3D</code>: Max pooling operation for 3D data (spatial or spatio-temporal).
<code>class MaxPooling1D</code>: Max pooling operation for temporal data.
<code>class MaxPooling2D</code>: Max pooling operation for spatial data.
<code>class MaxPooling3D</code>: Max pooling operation for 3D data (spatial or spatio-temporal).
<code>class Maximum</code>: Layer that computes the maximum (element-wise) of a list of inputs.
<code>class Minimum</code>: Layer that computes the minimum (element-wise) of a list of inputs.
<code>class Multiply</code>: Layer that multiplies (element-wise) a list of inputs.
<code>class PReLU</code>: Parametric Rectified Linear Unit.
<code>class Permute</code>: Permutes the dimensions of the input according to a given pattern.
<code>class RNN</code>: Base class for recurrent layers.
<code>class ReLU</code>: Rectified Linear Unit activation function.
<code>class RepeatVector</code>: Repeats the input n times.
<code>class Reshape</code>: Reshapes an output to a certain shape.
<code>class SeparableConv1D</code>: Depthwise separable 1D convolution.
<code>class SeparableConv2D</code>: Depthwise separable 2D convolution.
<code>class SeparableConvolution1D</code>: Depthwise separable 1D convolution.
<code>class SeparableConvolution2D</code>: Depthwise separable 2D convolution.
<code>class SimpleRNN</code>: Fully-connected RNN where the output is to be fed back to the input.
<code>class SimpleRNNCell</code>: Cell class for SimpleRNN.
<code>class Softmax</code>: Softmax activation function.
<code>class SpatialDropout1D</code>: Spatial 1D version of Dropout.
<code>class SpatialDropout2D</code>: Spatial 2D version of Dropout.
<code>class SpatialDropout3D</code>: Spatial 3D version of Dropout.
<code>class StackedRNNCells</code>: Wrapper allowing a stack of RNN cells to behave as a single cell.
<code>class Subtract</code>: Layer that subtracts two inputs.
<code>class ThresholdedReLU</code>: Thresholded Rectified Linear Unit.
<code>class TimeDistributed</code>: Wrapper that applies a layer to every temporal slice of an input.
<code>class UpSampling1D</code>: Upsampling layer for 1D inputs.
<code>class UpSampling2D</code>: Upsampling layer for 2D inputs.
<code>class UpSampling3D</code>: Upsampling layer for 3D inputs.
<code>class Wrapper</code>: Abstract wrapper base class.
<code>class ZeroPadding1D</code>: Zero-padding layer for 1D input (e.g. temporal sequence).
<code>class ZeroPadding2D</code>: Zero-padding layer for 2D input (e.g. picture).
<code>class ZeroPadding3D</code>: Zero-padding layer for 3D data (spatial or spatio-temporal).
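The layer classes above are typically composed into a model. A minimal sketch, assuming a TensorFlow installation where <code>tf.compat.v1.keras</code> is available; the layer widths and input shape here are arbitrary illustration values, not defaults from this API:

```python
import tensorflow as tf

layers = tf.compat.v1.keras.layers

# Stack a few of the layers listed above into a Sequential model:
# a Dense hidden layer, Dropout for regularization, and a softmax head.
model = tf.compat.v1.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,)),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])

print(model.output_shape)  # (None, 10)
```

The same classes work identically inside the functional API; the Sequential container is just the simplest way to chain them.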
<code>Input(...)</code>: <code>Input()</code> is used to instantiate a Keras tensor.
<code>add(...)</code>: Functional interface to the <code>Add</code> layer.
<code>average(...)</code>: Functional interface to the <code>Average</code> layer.
<code>concatenate(...)</code>: Functional interface to the <code>Concatenate</code> layer.
<code>deserialize(...)</code>: Instantiates a layer from a config dictionary.
<code>dot(...)</code>: Functional interface to the <code>Dot</code> layer.
<code>maximum(...)</code>: Functional interface to the <code>Maximum</code> layer that computes the element-wise maximum of a list of inputs.
<code>minimum(...)</code>: Functional interface to the <code>Minimum</code> layer.
<code>multiply(...)</code>: Functional interface to the <code>Multiply</code> layer.
<code>serialize(...)</code>: Serializes a <code>Layer</code> object into a config dictionary.
<code>subtract(...)</code>: Functional interface to the <code>Subtract</code> layer.
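The functional interfaces above pair with <code>Input()</code> to build graph-of-layers models. A minimal sketch, assuming <code>tf.compat.v1.keras</code> is available; the feature size of 8 is an arbitrary illustration value:

```python
import tensorflow as tf

layers = tf.compat.v1.keras.layers

# Two symbolic inputs of the same shape.
a = layers.Input(shape=(8,))
b = layers.Input(shape=(8,))

# Merge them with the functional interfaces listed above.
summed = layers.add([a, b])          # element-wise sum, keeps shape (None, 8)
merged = layers.concatenate([a, b])  # joins features, shape (None, 16)

# Feed both merge results into a final Dense layer.
out = layers.Dense(1)(layers.concatenate([summed, merged]))
model = tf.compat.v1.keras.Model(inputs=[a, b], outputs=out)
```

Each functional interface (<code>add</code>, <code>concatenate</code>, <code>maximum</code>, ...) is shorthand for instantiating the corresponding merge layer class and calling it on the list of tensors.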