I am very interested in your work. When I run this program (https://github.com/cics-nd/ar-pde-cnn/blob/master/1D-KS-SWAG/nn/denseEDcirc.py), the output is as follows:
DenseED(
(features): Sequential(
(In_conv): Conv1d(1, 48, kernel_size=(7,), stride=(2,), padding=(6,), bias=False)
(EncBlock1): _DenseBlock(
(denselayer1): _DenseLayer(
(norm1): BatchNorm1d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu1): ReLU(inplace)
(conv1): Conv1d(48, 16, kernel_size=(3,), stride=(1,), padding=(2,), bias=False)
)
(denselayer2): _DenseLayer(
(norm1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu1): ReLU(inplace)
(conv1): Conv1d(64, 16, kernel_size=(3,), stride=(1,), padding=(2,), bias=False)
)
(denselayer3): _DenseLayer(
(norm1): BatchNorm1d(80, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu1): ReLU(inplace)
(conv1): Conv1d(80, 16, kernel_size=(3,), stride=(1,), padding=(2,), bias=False)
)
)
(TransDown1): _Transition(
(norm1): BatchNorm1d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu1): ReLU(inplace)
(conv1): Conv1d(96, 48, kernel_size=(1,), stride=(1,), bias=False)
(norm2): BatchNorm1d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu2): ReLU(inplace)
(conv2): Conv1d(48, 48, kernel_size=(4,), stride=(2,), padding=(2,), bias=False)
)
(DecBlock1): _DenseBlock(
(denselayer1): _DenseLayer(
(norm1): BatchNorm1d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu1): ReLU(inplace)
(conv1): Conv1d(48, 16, kernel_size=(3,), stride=(1,), padding=(2,), bias=False)
)
(denselayer2): _DenseLayer(
(norm1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu1): ReLU(inplace)
(conv1): Conv1d(64, 16, kernel_size=(3,), stride=(1,), padding=(2,), bias=False)
)
(denselayer3): _DenseLayer(
(norm1): BatchNorm1d(80, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu1): ReLU(inplace)
(conv1): Conv1d(80, 16, kernel_size=(3,), stride=(1,), padding=(2,), bias=False)
)
(denselayer4): _DenseLayer(
(norm1): BatchNorm1d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu1): ReLU(inplace)
(conv1): Conv1d(96, 16, kernel_size=(3,), stride=(1,), padding=(2,), bias=False)
)
)
(TransUp1): _Transition(
(norm1): BatchNorm1d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu1): ReLU(inplace)
(conv1): Conv1d(112, 56, kernel_size=(1,), stride=(1,), bias=False)
(norm2): BatchNorm1d(56, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu2): ReLU(inplace)
(upsample): UpsamplingNearest1d()
(conv2): Conv1d(56, 56, kernel_size=(3,), stride=(1,), padding=(2,), bias=False)
)
(DecBlock2): _DenseBlock(
(denselayer1): _DenseLayer(
(norm1): BatchNorm1d(56, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu1): ReLU(inplace)
(conv1): Conv1d(56, 16, kernel_size=(3,), stride=(1,), padding=(2,), bias=False)
)
(denselayer2): _DenseLayer(
(norm1): BatchNorm1d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu1): ReLU(inplace)
(conv1): Conv1d(72, 16, kernel_size=(3,), stride=(1,), padding=(2,), bias=False)
)
(denselayer3): _DenseLayer(
(norm1): BatchNorm1d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu1): ReLU(inplace)
(conv1): Conv1d(88, 16, kernel_size=(3,), stride=(1,), padding=(2,), bias=False)
)
)
(LastTransUp): Sequential(
(norm1): BatchNorm1d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu1): ReLU(inplace)
(conv1): Conv1d(104, 52, kernel_size=(1,), stride=(1,), bias=False)
(norm2): BatchNorm1d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu2): ReLU(inplace)
(upsample): UpsamplingNearest1d()
(conv2): Conv1d(52, 26, kernel_size=(3,), stride=(1,), padding=(2,), bias=False)
(norm3): BatchNorm1d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu3): ReLU(inplace)
(conv3): Conv1d(26, 1, kernel_size=(5,), stride=(1,), padding=(4,), bias=False)
)
(Tanh): Tanh()
)
)
input: torch.Size([16, 1, 512])
In_conv: torch.Size([16, 48, 256])
EncBlock1: torch.Size([16, 96, 256])
TransDown1: torch.Size([16, 48, 128])
DecBlock1: torch.Size([16, 112, 128])
TransUp1: torch.Size([16, 56, 256])
DecBlock2: torch.Size([16, 104, 256])
LastTransUp: torch.Size([16, 1, 512])
Tanh: torch.Size([16, 1, 512])
(75222, 18)
Here I have a small point of confusion. After the In_conv layer, the tensor has shape torch.Size([16, 48, 256]), but after the EncBlock1 layer its shape becomes torch.Size([16, 96, 256]).
In other words, why does the number of channels change from 48 to 96?
Based on my understanding of convolutional layers alone, I cannot see how this happens.
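My guess is that each _DenseLayer concatenates its 16-channel output onto its input along the channel dimension, so the channel count grows 48 → 64 → 80 → 96 across the three layers (which would also match the BatchNorm1d sizes 48, 64, 80 printed above). Is that correct? Here is a minimal sketch of that idea (this is my own simplified code, not the repo's _DenseBlock; I use padding=1 instead of the circular padding, and omit the BatchNorm/ReLU, just to check the shapes):

```python
import torch
import torch.nn as nn

class TinyDenseBlock(nn.Module):
    """Hypothetical simplified dense block: each layer produces
    growth_rate channels and concatenates them onto its input."""
    def __init__(self, in_channels=48, growth_rate=16, num_layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_channels
        for _ in range(num_layers):
            self.convs.append(
                nn.Conv1d(ch, growth_rate, kernel_size=3, padding=1, bias=False))
            ch += growth_rate  # input to the next layer grows by growth_rate

    def forward(self, x):
        for conv in self.convs:
            y = conv(x)                   # (N, 16, L)
            x = torch.cat([x, y], dim=1)  # channels: 48 -> 64 -> 80 -> 96
        return x

x = torch.randn(16, 48, 256)          # shape after In_conv
out = TinyDenseBlock()(x)
print(out.shape)                      # torch.Size([16, 96, 256])
```

If this is right, the channel growth comes from concatenation, not from the convolutions themselves.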
Thank you very much for taking the time out of your busy schedule to answer my questions.