Other NN Layers

Customized Layers

GramMatrix

class encoding.nn.GramMatrix[source]

Computes the Gram matrix of a mini-batch of 4D convolutional feature maps.

\[\mathcal{G} = \sum_{h=1}^{H_i}\sum_{w=1}^{W_i} \mathcal{F}_{h,w}\mathcal{F}_{h,w}^T\]
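
Here \(\mathcal{F}_{h,w}\) denotes the \(C\)-dimensional feature vector at spatial position \((h, w)\), so each Gram matrix is \(C \times C\). A minimal sketch of the same computation via a batched matrix product (an illustration of the formula, not necessarily the library's exact implementation):

>>> x = autograd.Variable(torch.randn(2, 64, 8, 8))    # (B, C, H, W) feature maps
>>> f = x.view(2, 64, -1)                              # flatten spatial dims to (B, C, H*W)
>>> gram = torch.bmm(f, f.transpose(1, 2))             # (B, C, C): sum of outer products
>>> # nn.GramMatrix()(x) computes the same quantity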

Normalize

class encoding.nn.Normalize(p=2, dim=1)[source]

Performs \(L_p\) normalization of inputs over specified dimension.

Does:

\[v = \frac{v}{\max(\lVert v \rVert_p, \epsilon)}\]

for each subtensor v over dimension dim of input. Each subtensor is flattened into a vector, i.e. \(\lVert v \rVert_p\) is not a matrix norm.

With the default arguments, it normalizes over the second dimension using the Euclidean norm.

Parameters:
  • p (float) – the exponent value in the norm formulation. Default: 2
  • dim (int) – the dimension to reduce. Default: 1
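
As a quick sanity check, the layer's output should match dividing each subtensor by its (clamped) norm. A minimal sketch with the default arguments, written in this documentation's Variable-era API:

>>> x = autograd.Variable(torch.randn(4, 8))
>>> m = nn.Normalize()                                  # p=2 over dim=1
>>> manual = x / x.norm(2, 1, keepdim=True).clamp(min=1e-12)  # clamp plays the role of epsilon
>>> # m(x) and manual agree elementwise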

View

class encoding.nn.View(*args)[source]

Reshapes the input to a different size; an in-place operator that supports encoding.parallel.SelfDataParallel mode.
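
A hypothetical usage sketch, assuming the constructor arguments give the target size, as with Tensor.view (the sizes here are illustrative):

>>> m = nn.View(-1, 64 * 8 * 8)                        # flatten conv features per example
>>> input = autograd.Variable(torch.randn(4, 64, 8, 8))
>>> output = m(input)                                  # same result as input.view(-1, 64 * 8 * 8)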

Standard Layers

Standard layers as in PyTorch, but operating in encoding.parallel.SelfDataParallel mode. Use them together with SyncBN.

Conv1d

class encoding.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[source]

Applies a 1D convolution over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size \((N, C_{in}, L)\) and output \((N, C_{out}, L_{out})\) can be precisely described as:

\[\begin{array}{ll} out(N_i, C_{out_j}) = bias(C_{out_j}) + \sum_{{k}=0}^{C_{in}-1} weight(C_{out_j}, k) \star input(N_i, k) \end{array}\]

where \(\star\) is the valid cross-correlation operator.

  • stride controls the stride for the cross-correlation.
  • padding: if non-zero, the input is implicitly zero-padded on both sides by padding points.
  • dilation controls the spacing between the kernel points; this is also known as the à trous algorithm. It is harder to describe in words, but the dilated-convolution visualizations at https://github.com/vdumoulin/conv_arithmetic illustrate what dilation does.
  • groups controls the connections between inputs and outputs; in_channels and out_channels must both be divisible by groups. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to two conv layers side by side, each seeing half the input channels and producing half the output channels, with the two outputs subsequently concatenated. At groups=in_channels, each input channel is convolved with its own set of filters (of size out_channels // in_channels); see the sketch after this list.
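
The weight layout reflects the groups semantics: each group's filters see only in_channels / groups input channels. A quick check (a sketch; as in the examples below, nn refers to this package's layers):

>>> m = nn.Conv1d(16, 32, 3, groups=2)  # two side-by-side convs, each 8 in -> 16 out
>>> m.weight.size()                     # per-filter input channels = in_channels / groups
torch.Size([32, 8, 3])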
Parameters:
  • in_channels (int) – Number of channels in the input image
  • out_channels (int) – Number of channels produced by the convolution
  • kernel_size (int or tuple) – Size of the convolving kernel
  • stride (int or tuple, optional) – Stride of the convolution. Default: 1
  • padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Shape:
  • Input: \((N, C_{in}, L_{in})\)
  • Output: \((N, C_{out}, L_{out})\) where \(L_{out} = floor((L_{in} + 2 * padding - dilation * (kernel\_size - 1) - 1) / stride + 1)\)
Variables:
  • weight (Tensor) – the learnable weights of the module, of shape (out_channels, in_channels // groups, kernel_size)
  • bias (Tensor) – the learnable bias of the module of shape (out_channels)
Examples:
>>> m = nn.Conv1d(16, 33, 3, stride=2)
>>> input = autograd.Variable(torch.randn(20, 16, 50))
>>> output = m(input)
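
With the shape formula above, \(L_{out} = floor((50 - 1 \cdot (3 - 1) - 1) / 2 + 1) = 24\), so output here has size (20, 33, 24).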

Conv2d

class encoding.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[source]

Applies a 2D convolution over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size \((N, C_{in}, H, W)\) and output \((N, C_{out}, H_{out}, W_{out})\) can be precisely described as:

\[\begin{array}{ll} out(N_i, C_{out_j}) = bias(C_{out_j}) + \sum_{{k}=0}^{C_{in}-1} weight(C_{out_j}, k) \star input(N_i, k) \end{array}\]

where \(\star\) is the valid 2D cross-correlation operator.

  • stride controls the stride for the cross-correlation.
  • padding: if non-zero, the input is implicitly zero-padded on both sides by padding points.
  • dilation controls the spacing between the kernel points; this is also known as the à trous algorithm. It is harder to describe in words, but the dilated-convolution visualizations at https://github.com/vdumoulin/conv_arithmetic illustrate what dilation does.
  • groups controls the connections between inputs and outputs; in_channels and out_channels must both be divisible by groups. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to two conv layers side by side, each seeing half the input channels and producing half the output channels, with the two outputs subsequently concatenated. At groups=in_channels, each input channel is convolved with its own set of filters (of size out_channels // in_channels).
The parameters kernel_size, stride, padding, dilation can either be:
  • a single int – in which case the same value is used for the height and width dimension
  • a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Parameters:
  • in_channels (int) – Number of channels in the input image
  • out_channels (int) – Number of channels produced by the convolution
  • kernel_size (int or tuple) – Size of the convolving kernel
  • stride (int or tuple, optional) – Stride of the convolution. Default: 1
  • padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Shape:
  • Input: \((N, C_{in}, H_{in}, W_{in})\)
  • Output: \((N, C_{out}, H_{out}, W_{out})\), where
    \(H_{out} = floor((H_{in} + 2 * padding[0] - dilation[0] * (kernel\_size[0] - 1) - 1) / stride[0] + 1)\)
    \(W_{out} = floor((W_{in} + 2 * padding[1] - dilation[1] * (kernel\_size[1] - 1) - 1) / stride[1] + 1)\)
Variables:
  • weight (Tensor) – the learnable weights of the module, of shape (out_channels, in_channels // groups, kernel_size[0], kernel_size[1])
  • bias (Tensor) – the learnable bias of the module of shape (out_channels)
Examples:
>>> # With square kernels and equal stride
>>> m = nn.Conv2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
>>> input = autograd.Variable(torch.randn(20, 16, 50, 100))
>>> output = m(input)
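
For the last module above, the shape formula gives \(H_{out} = floor((50 + 2 \cdot 4 - 3 \cdot (3 - 1) - 1) / 2 + 1) = 26\) and \(W_{out} = floor((100 + 2 \cdot 2 - 1 \cdot (5 - 1) - 1) / 1 + 1) = 100\), so output has size (20, 33, 26, 100).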

ConvTranspose2d

class encoding.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)[source]

Applies a 2D transposed convolution operator over an input image composed of several input planes. This module can be seen as the gradient of Conv2d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation).

  • stride controls the stride for the cross-correlation.
  • padding: if non-zero, the input is implicitly zero-padded on both sides by padding points.
  • output_padding: if non-zero, the output is implicitly zero-padded on one side by output_padding points.
  • dilation controls the spacing between the kernel points; this is also known as the à trous algorithm. It is harder to describe in words, but the dilated-convolution visualizations at https://github.com/vdumoulin/conv_arithmetic illustrate what dilation does.
  • groups controls the connections between inputs and outputs; in_channels and out_channels must both be divisible by groups. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to two conv layers side by side, each seeing half the input channels and producing half the output channels, with the two outputs subsequently concatenated. At groups=in_channels, each input channel is convolved with its own set of filters (of size out_channels // in_channels).
The parameters kernel_size, stride, padding, output_padding can either be:
  • a single int – in which case the same value is used for the height and width dimensions
  • a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Parameters:
  • in_channels (int) – Number of channels in the input image
  • out_channels (int) – Number of channels produced by the convolution
  • kernel_size (int or tuple) – Size of the convolving kernel
  • stride (int or tuple, optional) – Stride of the convolution. Default: 1
  • padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
  • output_padding (int or tuple, optional) – Zero-padding added to one side of the output. Default: 0
  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
Shape:
  • Input: \((N, C_{in}, H_{in}, W_{in})\)
  • Output: \((N, C_{out}, H_{out}, W_{out})\), where
    \(H_{out} = (H_{in} - 1) * stride[0] - 2 * padding[0] + kernel\_size[0] + output\_padding[0]\)
    \(W_{out} = (W_{in} - 1) * stride[1] - 2 * padding[1] + kernel\_size[1] + output\_padding[1]\)
Variables:
  • weight (Tensor) – the learnable weights of the module, of shape (in_channels, out_channels // groups, kernel_size[0], kernel_size[1])
  • bias (Tensor) – the learnable bias of the module of shape (out_channels)
Examples:
>>> # With square kernels and equal stride
>>> m = nn.ConvTranspose2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.ConvTranspose2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> input = autograd.Variable(torch.randn(20, 16, 50, 100))
>>> output = m(input)
>>> # the exact output size can also be specified as an argument
>>> input = autograd.Variable(torch.randn(1, 16, 12, 12))
>>> downsample = nn.Conv2d(16, 16, 3, stride=2, padding=1)
>>> upsample = nn.ConvTranspose2d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(input)
>>> h.size()
torch.Size([1, 16, 6, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12, 12])
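
Passing output_size resolves the ambiguity in the shape formula above: with the default output_padding, \(H_{out} = (6 - 1) \cdot 2 - 2 \cdot 1 + 3 = 11\), so the module implicitly applies an output_padding of 1 in each spatial dimension to reach the requested 12 x 12 size.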

ReLU

class encoding.nn.ReLU(inplace=False)[source]

Applies the rectified linear unit function element-wise: \(ReLU(x) = \max(0, x)\)

Parameters:
  • inplace – can optionally do the operation in-place. Default: False

Shape:
  • Input: \((N, *)\) where * means any number of additional dimensions
  • Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.ReLU()
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))

Sigmoid

class encoding.nn.Sigmoid[source]

Applies the element-wise function \(f(x) = 1 / (1 + \exp(-x))\)

Shape:
  • Input: \((N, *)\) where * means any number of additional dimensions
  • Output: \((N, *)\), same shape as the input

Examples:

>>> m = nn.Sigmoid()
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))

MaxPool2d

class encoding.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)[source]

Applies a 2D max pooling over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size \((N, C, H, W)\), output \((N, C, H_{out}, W_{out})\) and kernel_size \((kH, kW)\) can be precisely described as:

\[\begin{array}{ll} out(N_i, C_j, h, w) = \max_{{m}=0}^{kH-1} \max_{{n}=0}^{kW-1} input(N_i, C_j, stride[0] * h + m, stride[1] * w + n) \end{array}\]
  • If padding is non-zero, the input is implicitly zero-padded on both sides by padding points.
  • dilation controls the spacing between the kernel points. It is harder to describe in words, but the dilated-convolution visualizations at https://github.com/vdumoulin/conv_arithmetic illustrate what dilation does.
The parameters kernel_size, stride, padding, dilation can either be:
  • a single int – in which case the same value is used for the height and width dimension
  • a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Parameters:
  • kernel_size – the size of the window to take a max over
  • stride – the stride of the window. Default value is kernel_size
  • padding – implicit zero padding to be added on both sides
  • dilation – a parameter that controls the stride of elements in the window
  • return_indices – if True, will return the max indices along with the outputs. Useful when Unpooling later
  • ceil_mode – when True, will use ceil instead of floor to compute the output shape
Shape:
  • Input: \((N, C, H_{in}, W_{in})\)
  • Output: \((N, C, H_{out}, W_{out})\), where
    \(H_{out} = floor((H_{in} + 2 * padding[0] - dilation[0] * (kernel\_size[0] - 1) - 1) / stride[0] + 1)\)
    \(W_{out} = floor((W_{in} + 2 * padding[1] - dilation[1] * (kernel\_size[1] - 1) - 1) / stride[1] + 1)\)
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.MaxPool2d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.MaxPool2d((3, 2), stride=(2, 1))
>>> input = autograd.Variable(torch.randn(20, 16, 50, 32))
>>> output = m(input)
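
For the non-square window above, the shape formula gives \(H_{out} = floor((50 - 1 \cdot (3 - 1) - 1) / 2 + 1) = 24\) and \(W_{out} = floor((32 - 1 \cdot (2 - 1) - 1) / 1 + 1) = 31\), so output has size (20, 16, 24, 31).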

AvgPool2d

class encoding.nn.AvgPool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[source]

Applies a 2D average pooling over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size \((N, C, H, W)\), output \((N, C, H_{out}, W_{out})\) and kernel_size \((kH, kW)\) can be precisely described as:

\[\begin{array}{ll} out(N_i, C_j, h, w) = 1 / (kH * kW) * \sum_{{m}=0}^{kH-1} \sum_{{n}=0}^{kW-1} input(N_i, C_j, stride[0] * h + m, stride[1] * w + n) \end{array}\]
If padding is non-zero, the input is implicitly zero-padded on both sides by padding points.
The parameters kernel_size, stride, padding can either be:
  • a single int – in which case the same value is used for the height and width dimension
  • a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Parameters:
  • kernel_size – the size of the window
  • stride – the stride of the window. Default value is kernel_size
  • padding – implicit zero padding to be added on both sides
  • ceil_mode – when True, will use ceil instead of floor to compute the output shape
  • count_include_pad – when True, will include the zero-padding in the averaging calculation
Shape:
  • Input: \((N, C, H_{in}, W_{in})\)
  • Output: \((N, C, H_{out}, W_{out})\), where
    \(H_{out} = floor((H_{in} + 2 * padding[0] - kernel\_size[0]) / stride[0] + 1)\)
    \(W_{out} = floor((W_{in} + 2 * padding[1] - kernel\_size[1]) / stride[1] + 1)\)
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.AvgPool2d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.AvgPool2d((3, 2), stride=(2, 1))
>>> input = autograd.Variable(torch.randn(20, 16, 50, 32))
>>> output = m(input)
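
For the non-square window above, the shape formula gives \(H_{out} = floor((50 - 3) / 2 + 1) = 24\) and \(W_{out} = floor((32 - 2) / 1 + 1) = 31\), so output has size (20, 16, 24, 31).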

AdaptiveAvgPool2d

class encoding.nn.AdaptiveAvgPool2d(output_size)[source]

Applies a 2D adaptive average pooling over an input signal composed of several input planes. The output is of size H x W, for any input size. The number of output features is equal to the number of input planes.

Parameters:
  • output_size – the target output size of the image, of the form H x W. Can be a tuple (H, W) or a single number H for a square image H x H.

Examples:

>>> # target output size of 5x7
>>> m = nn.AdaptiveAvgPool2d((5,7))
>>> input = autograd.Variable(torch.randn(1, 64, 8, 9))
>>> output = m(input)
>>> # target output size of 7x7 (square)
>>> m = nn.AdaptiveAvgPool2d(7)
>>> input = autograd.Variable(torch.randn(1, 64, 10, 9))
>>> output = m(input)
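
No kernel_size or stride is specified because both are derived from the input and output sizes: output position \(i\) typically pools over input rows \(floor(i \cdot H_{in} / H_{out})\) up to \(ceil((i + 1) \cdot H_{in} / H_{out})\) (and likewise for columns), so adjacent windows may overlap when the sizes do not divide evenly. This describes the usual adaptive-pooling binning; treat the exact window choice as an implementation detail.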

Dropout2d

class encoding.nn.Dropout2d(p=0.5, inplace=False)[source]

Randomly zeroes whole channels of the input tensor. The channels to zero out are randomized on every forward call. Usually the input comes from Conv2d modules.

As described in the paper Efficient Object Localization Using Convolutional Networks, if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers), then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, nn.Dropout2d() will help promote independence between feature maps and should be used instead.

Parameters:
  • p (float, optional) – probability of a channel to be zeroed. Default: 0.5
  • inplace (bool, optional) – If set to True, will do this operation in-place
Shape:
  • Input: \((N, C, H, W)\)
  • Output: \((N, C, H, W)\) (same shape as input)
Examples:
>>> m = nn.Dropout2d(p=0.2)
>>> input = autograd.Variable(torch.randn(20, 16, 32, 32))
>>> output = m(input)

Linear

class encoding.nn.Linear(in_features, out_features, bias=True)[source]

Applies a linear transformation to the incoming data: \(y = Ax + b\)

Parameters:
  • in_features – size of each input sample
  • out_features – size of each output sample
  • bias – If set to False, the layer will not learn an additive bias. Default: True
Shape:
  • Input: \((N, *, in\_features)\) where * means any number of additional dimensions
  • Output: \((N, *, out\_features)\) where all but the last dimension are the same shape as the input.
Variables:
  • weight – the learnable weights of the module of shape (out_features x in_features)
  • bias – the learnable bias of the module of shape (out_features)
Examples:
>>> m = nn.Linear(20, 30)
>>> input = autograd.Variable(torch.randn(128, 20))
>>> output = m(input)
>>> print(output.size())
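
Since weight has shape (out_features x in_features), the forward pass can be cross-checked against an explicit matrix product (a sanity-check sketch, not the implementation itself):

>>> manual = input.mm(m.weight.t()) + m.bias           # same as m(input)
>>> print(manual.size())
torch.Size([128, 30])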