Comments (8)

G-U-N commented on July 28, 2024

Hi, thanks for your interest. Could you please attach the detailed training/error log here?

tnalyahya commented on July 28, 2024

Hi, thanks for the quick reply!

Here is a test with memory size 180; it produces the same error. To trace it, I printed vectors, S, and mu_p from the _construct_exemplar method in the base model.

The dataset has 18 classes, each with 100 samples. I may add more samples later, so I would like to know how to determine the required memory size.
2023-05-09 18:07:52,260 [trainer.py] => config: ./exps/bic.json
2023-05-09 18:07:52,260 [trainer.py] => prefix: reproduce
2023-05-09 18:07:52,260 [trainer.py] => dataset: Mydataset
2023-05-09 18:07:52,260 [trainer.py] => memory_size: 180
2023-05-09 18:07:52,260 [trainer.py] => memory_per_class: 20
2023-05-09 18:07:52,260 [trainer.py] => fixed_memory: False
2023-05-09 18:07:52,260 [trainer.py] => shuffle: True
2023-05-09 18:07:52,260 [trainer.py] => init_cls: 2
2023-05-09 18:07:52,260 [trainer.py] => increment: 2
2023-05-09 18:07:52,260 [trainer.py] => model_name: bic
2023-05-09 18:07:52,260 [trainer.py] => convnet_type: resnet32
2023-05-09 18:07:52,260 [trainer.py] => device: [device(type='cpu')]
2023-05-09 18:07:52,260 [trainer.py] => seed: 1993

2023-05-09 18:07:52,394 [trainer.py] => All params: 464154
2023-05-09 18:07:52,395 [trainer.py] => Trainable params: 464154
2023-05-09 18:07:52,395 [bic.py] => Learning on 0-2
2023-05-09 18:07:52,396 [bic.py] => Parameters of bias layer:
2023-05-09 18:07:52,396 [bic.py] => 0 => 1.000, 0.000
2023-05-09 18:08:08,732 [bic.py] => training => Task 0, Epoch 1/10 => Loss 6.796, Train_accy 79.120, Test_accy 90.320
2023-05-09 18:08:26,310 [bic.py] => training => Task 0, Epoch 2/10 => Loss 0.841, Train_accy 79.120, Test_accy 90.320
2023-05-09 18:08:42,263 [bic.py] => training => Task 0, Epoch 3/10 => Loss 0.078, Train_accy 79.120, Test_accy 90.320
2023-05-09 18:09:02,221 [bic.py] => training => Task 0, Epoch 4/10 => Loss 0.000, Train_accy 80.220, Test_accy 90.320
2023-05-09 18:09:17,682 [bic.py] => training => Task 0, Epoch 5/10 => Loss 0.000, Train_accy 19.780, Test_accy 9.680
2023-05-09 18:09:37,056 [bic.py] => training => Task 0, Epoch 6/10 => Loss 0.000, Train_accy 19.780, Test_accy 9.680
2023-05-09 18:09:53,528 [bic.py] => training => Task 0, Epoch 7/10 => Loss 0.000, Train_accy 24.180, Test_accy 19.350
2023-05-09 18:10:12,367 [bic.py] => training => Task 0, Epoch 8/10 => Loss 0.000, Train_accy 100.000, Test_accy 100.000
2023-05-09 18:10:29,918 [bic.py] => training => Task 0, Epoch 9/10 => Loss 0.000, Train_accy 100.000, Test_accy 100.000
2023-05-09 18:10:47,447 [bic.py] => training => Task 0, Epoch 10/10 => Loss 0.000, Train_accy 100.000, Test_accy 100.000
2023-05-09 18:10:47,448 [base.py] => Reducing exemplars...(90 per classes)
2023-05-09 18:10:47,448 [base.py] => Constructing exemplars...(90 per classes)
vectors shape L213: (19, 64)
vectors shape L214: (19, 64)
S shape L221: ()
mu_p shape L224: (19, 64)
S shape L221: (64,)
mu_p shape L224: (18, 64)
[... the S/mu_p pair repeats, mu_p shrinking by one row per iteration ...]
S shape L221: (64,)
mu_p shape L224: (1, 64)
S shape L221: (64,)
mu_p shape L224: (0, 64)
Traceback (most recent call last):
File "main.py", line 31, in
main()
File "main.py", line 12, in main
train(args)
File "/home/iot/PyCIL/trainer.py", line 18, in train
_train(args)
File "/home/iot/PyCIL/trainer.py", line 65, in _train
model.incremental_train(data_manager)
File "/home/iot/PyCIL/models/bic.py", line 89, in incremental_train
self.build_rehearsal_memory(data_manager, self.samples_per_class)
File "/home/iot/PyCIL/models/base.py", line 57, in build_rehearsal_memory
self._construct_exemplar(data_manager, per_class)
File "/home/iot/PyCIL/models/base.py", line 229, in _construct_exemplar
i = np.argmin(np.sqrt(np.sum((class_mean - mu_p) ** 2, axis=1)))
File "<array_function internals>", line 180, in argmin
File "/home/iot/anaconda3/envs/torchenv/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 1312, in argmin
return _wrapfunc(a, 'argmin', axis=axis, out=out, **kwds)
File "/home/iot/anaconda3/envs/torchenv/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 57, in _wrapfunc
return bound(*args, **kwds)
ValueError: attempt to get argmin of an empty sequence

G-U-N commented on July 28, 2024

Since you said each class has 100 samples, the shape of mu_p should be (100, 64) at the very beginning. But your log reveals that only 19 samples were loaded for that class.
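
To see why the selection loop eventually fails, here is a minimal sketch of the herding step in _construct_exemplar (paraphrased from models/base.py, using random data with the shapes from your log; illustrative, not the verbatim implementation):

import numpy as np

rng = np.random.default_rng(0)
vectors = rng.normal(size=(19, 64))   # only 19 feature vectors loaded for this class
class_mean = vectors.mean(axis=0)
m = 90                                # exemplars requested per class (memory 180 / 2 classes)

selected_vectors = []
for k in range(1, m + 1):
    S = np.sum(selected_vectors, axis=0)   # shape () on the first pass, (64,) afterwards
    mu_p = (vectors + S) / k               # candidate means, one row per remaining sample
    # After 19 iterations, vectors is (0, 64), so mu_p is empty and argmin
    # raises "attempt to get argmin of an empty sequence".
    i = np.argmin(np.sqrt(np.sum((class_mean - mu_p) ** 2, axis=1)))
    selected_vectors.append(vectors[i])
    vectors = np.delete(vectors, i, axis=0)  # one fewer candidate each iteration

Herding can only pick as many exemplars as the class has samples, so the per-class budget must not exceed the smallest class size.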

tnalyahya commented on July 28, 2024

I found that strange: it reads from the loader, but the per-class count has never been 100, not even when I tested with only 6 classes, which showed no errors.

2023-05-08 18:43:36,651 [trainer.py] => config: ./exps/bic.json
2023-05-08 18:43:36,651 [trainer.py] => prefix: reproduce
2023-05-08 18:43:36,651 [trainer.py] => dataset: Mydataset
2023-05-08 18:43:36,651 [trainer.py] => memory_size: 120
2023-05-08 18:43:36,651 [trainer.py] => memory_per_class: 20
2023-05-08 18:43:36,652 [trainer.py] => fixed_memory: False
2023-05-08 18:43:36,652 [trainer.py] => shuffle: True
2023-05-08 18:43:36,652 [trainer.py] => init_cls: 2
2023-05-08 18:43:36,652 [trainer.py] => increment: 2
2023-05-08 18:43:36,652 [trainer.py] => model_name: bic
2023-05-08 18:43:36,652 [trainer.py] => convnet_type: resnet32
2023-05-08 18:43:36,652 [trainer.py] => device: [device(type='cpu')]
2023-05-08 18:43:36,652 [trainer.py] => seed: 1993

2023-05-08 18:43:36,730 [data_manager.py] => [0, 2, 3, 4, 5, 1]
2023-05-08 18:43:36,749 [trainer.py] => All params: 464154
2023-05-08 18:43:36,749 [trainer.py] => Trainable params: 464154
2023-05-08 18:43:36,750 [bic.py] => Learning on 0-2
2023-05-08 18:43:36,750 [bic.py] => Parameters of bias layer:
2023-05-08 18:43:36,750 [bic.py] => 0 => 1.000, 0.000
2023-05-08 18:44:05,156 [bic.py] => training => Task 0, Epoch 1/10 => Loss 1.092, Train_accy 49.320, Test_accy 44.230
2023-05-08 18:44:41,376 [bic.py] => training => Task 0, Epoch 2/10 => Loss 0.272, Train_accy 50.680, Test_accy 48.080
2023-05-08 18:45:13,448 [bic.py] => training => Task 0, Epoch 3/10 => Loss 0.460, Train_accy 49.320, Test_accy 44.230
2023-05-08 18:45:49,218 [bic.py] => training => Task 0, Epoch 4/10 => Loss 0.720, Train_accy 98.650, Test_accy 96.150
2023-05-08 18:46:22,451 [bic.py] => training => Task 0, Epoch 5/10 => Loss 1.455, Train_accy 49.320, Test_accy 44.230
2023-05-08 18:46:55,399 [bic.py] => training => Task 0, Epoch 6/10 => Loss 3.391, Train_accy 50.680, Test_accy 48.080
2023-05-08 18:47:27,371 [bic.py] => training => Task 0, Epoch 7/10 => Loss 0.641, Train_accy 50.680, Test_accy 48.080
2023-05-08 18:47:58,513 [bic.py] => training => Task 0, Epoch 8/10 => Loss 0.352, Train_accy 50.680, Test_accy 48.080
2023-05-08 18:48:28,024 [bic.py] => training => Task 0, Epoch 9/10 => Loss 0.277, Train_accy 50.680, Test_accy 48.080
2023-05-08 18:48:59,313 [bic.py] => training => Task 0, Epoch 10/10 => Loss 0.183, Train_accy 50.680, Test_accy 48.080
2023-05-08 18:48:59,316 [base.py] => Reducing exemplars...(60 per classes)
2023-05-08 18:48:59,316 [base.py] => Constructing exemplars...(60 per classes)
vectors shape L213: (75, 64)
vectors shape L214: (75, 64)
S shape L221: ()
mu_p shape L224: (75, 64)
S shape L221: (64,)
mu_p shape L224: (74, 64)
[... the S/mu_p pair repeats, mu_p shrinking by one row per iteration ...]
S shape L221: (64,)
mu_p shape L224: (16, 64)
vectors shape L213: (73, 64)
vectors shape L214: (73, 64)
S shape L221: ()
mu_p shape L224: (73, 64)
S shape L221: (64,)
mu_p shape L224: (72, 64)
[... the S/mu_p pair repeats, mu_p shrinking by one row per iteration ...]
S shape L221: (64,)
mu_p shape L224: (14, 64)
2023-05-08 18:49:01,517 [bic.py] => Parameters of bias layer:
2023-05-08 18:49:01,517 [bic.py] => 0 => 1.000, 0.000
2023-05-08 18:49:05,115 [bic.py] => Exemplar size: 120
2023-05-08 18:49:05,115 [trainer.py] => CNN: {'total': 48.08, '00-09': 48.08, 'old': 0, 'new': 48.08}
2023-05-08 18:49:05,115 [trainer.py] => NME: {'total': 80.77, '00-09': 80.77, 'old': 0, 'new': 80.77}
2023-05-08 18:49:05,115 [trainer.py] => CNN top1 curve: [48.08]
2023-05-08 18:49:05,115 [trainer.py] => CNN top2 curve: [100.0]
2023-05-08 18:49:05,115 [trainer.py] => NME top1 curve: [80.77]
2023-05-08 18:49:05,115 [trainer.py] => NME top2 curve: [100.0]

2023-05-08 18:49:05,115 [trainer.py] => All params: 464286
2023-05-08 18:49:05,116 [trainer.py] => Trainable params: 464286
2023-05-08 18:49:05,116 [bic.py] => Learning on 2-4
Size train_data: 253
2023-05-08 18:49:05,117 [bic.py] => Stage1 dset: 253, Stage2 dset: 24
2023-05-08 18:49:05,117 [bic.py] => Lambda: 0.500
2023-05-08 18:49:05,118 [bic.py] => Parameters of bias layer:
2023-05-08 18:49:05,118 [bic.py] => 0 => 1.000, 0.000
2023-05-08 18:49:05,118 [bic.py] => 1 => 1.000, 0.000
2023-05-08 18:50:16,701 [bic.py] => training => Task 1, Epoch 1/10 => Loss 2.795, Train_accy 31.230, Test_accy 20.000
2023-05-08 18:51:31,241 [bic.py] => training => Task 1, Epoch 2/10 => Loss 2.289, Train_accy 51.380, Test_accy 43.160
2023-05-08 18:52:43,481 [bic.py] => training => Task 1, Epoch 3/10 => Loss 0.801, Train_accy 57.310, Test_accy 53.680
2023-05-08 18:53:52,888 [bic.py] => training => Task 1, Epoch 4/10 => Loss 0.774, Train_accy 57.710, Test_accy 51.580
2023-05-08 18:55:02,622 [bic.py] => training => Task 1, Epoch 5/10 => Loss 0.609, Train_accy 56.920, Test_accy 51.580
2023-05-08 18:56:10,273 [bic.py] => training => Task 1, Epoch 6/10 => Loss 0.501, Train_accy 91.700, Test_accy 86.320
2023-05-08 18:57:27,345 [bic.py] => training => Task 1, Epoch 7/10 => Loss 0.396, Train_accy 94.470, Test_accy 94.740
2023-05-08 18:58:36,977 [bic.py] => training => Task 1, Epoch 8/10 => Loss 0.320, Train_accy 90.510, Test_accy 84.210
2023-05-08 18:59:46,209 [bic.py] => training => Task 1, Epoch 9/10 => Loss 0.264, Train_accy 91.700, Test_accy 90.530
2023-05-08 19:00:50,349 [bic.py] => training => Task 1, Epoch 10/10 => Loss 0.217, Train_accy 94.470, Test_accy 90.530
2023-05-08 19:00:58,284 [bic.py] => bias_correction => Task 1, Epoch 1/10 => Loss 0.889, Train_accy 95.830, Test_accy 94.740
2023-05-08 19:01:08,106 [bic.py] => bias_correction => Task 1, Epoch 2/10 => Loss 0.831, Train_accy 95.830, Test_accy 94.740
2023-05-08 19:01:18,964 [bic.py] => bias_correction => Task 1, Epoch 3/10 => Loss 0.909, Train_accy 100.000, Test_accy 97.890
2023-05-08 19:01:27,776 [bic.py] => bias_correction => Task 1, Epoch 4/10 => Loss 0.803, Train_accy 100.000, Test_accy 97.890
2023-05-08 19:01:38,451 [bic.py] => bias_correction => Task 1, Epoch 5/10 => Loss 0.805, Train_accy 100.000, Test_accy 97.890
2023-05-08 19:01:47,668 [bic.py] => bias_correction => Task 1, Epoch 6/10 => Loss 0.806, Train_accy 100.000, Test_accy 97.890
2023-05-08 19:01:57,343 [bic.py] => bias_correction => Task 1, Epoch 7/10 => Loss 0.801, Train_accy 100.000, Test_accy 97.890
2023-05-08 19:02:08,864 [bic.py] => bias_correction => Task 1, Epoch 8/10 => Loss 0.885, Train_accy 100.000, Test_accy 96.840
2023-05-08 19:02:18,724 [bic.py] => bias_correction => Task 1, Epoch 9/10 => Loss 0.854, Train_accy 100.000, Test_accy 96.840
2023-05-08 19:02:28,829 [bic.py] => bias_correction => Task 1, Epoch 10/10 => Loss 0.836, Train_accy 100.000, Test_accy 96.840
2023-05-08 19:02:28,830 [base.py] => Reducing exemplars...(30 per classes)
2023-05-08 19:02:31,283 [base.py] => Constructing exemplars...(30 per classes)
vectors shape L213: (82, 64)
vectors shape L214: (82, 64)
S shape L221: ()
mu_p shape L224: (82, 64)
S shape L221: (64,)
mu_p shape L224: (81, 64)
[... the S/mu_p pair repeats, mu_p shrinking by one row per iteration ...]
S shape L221: (64,)
mu_p shape L224: (53, 64)
vectors shape L213: (75, 64)
vectors shape L214: (75, 64)
S shape L221: ()
mu_p shape L224: (75, 64)
S shape L221: (64,)
mu_p shape L224: (74, 64)
[... the S/mu_p pair repeats, mu_p shrinking by one row per iteration ...]
S shape L221: (64,)
mu_p shape L224: (46, 64)
2023-05-08 19:02:35,484 [bic.py] => Parameters of bias layer:
2023-05-08 19:02:35,484 [bic.py] => 0 => 1.000, 0.000
2023-05-08 19:02:35,484 [bic.py] => 1 => 1.279, -0.087
2023-05-08 19:02:46,191 [bic.py] => Exemplar size: 120
2023-05-08 19:02:46,191 [trainer.py] => CNN: {'total': 96.84, '00-09': 96.84, 'old': 94.23, 'new': 100.0}
2023-05-08 19:02:46,191 [trainer.py] => NME: {'total': 97.89, '00-09': 97.89, 'old': 96.15, 'new': 100.0}
2023-05-08 19:02:46,191 [trainer.py] => CNN top1 curve: [48.08, 96.84]
2023-05-08 19:02:46,192 [trainer.py] => CNN top2 curve: [100.0, 100.0]
2023-05-08 19:02:46,192 [trainer.py] => NME top1 curve: [80.77, 97.89]
2023-05-08 19:02:46,192 [trainer.py] => NME top2 curve: [100.0, 98.95]

2023-05-08 19:02:46,192 [trainer.py] => All params: 464418
2023-05-08 19:02:46,192 [trainer.py] => Trainable params: 464418
2023-05-08 19:02:46,193 [bic.py] => Learning on 4-6
Size train_data: 247
2023-05-08 19:02:46,194 [bic.py] => Stage1 dset: 247, Stage2 dset: 18
2023-05-08 19:02:46,194 [bic.py] => Lambda: 0.667
2023-05-08 19:02:46,195 [bic.py] => Parameters of bias layer:
2023-05-08 19:02:46,195 [bic.py] => 0 => 1.000, 0.000
2023-05-08 19:02:46,195 [bic.py] => 1 => 1.279, -0.087
2023-05-08 19:02:46,195 [bic.py] => 2 => 1.000, 0.000
2023-05-08 19:04:01,054 [bic.py] => training => Task 2, Epoch 1/10 => Loss 1.201, Train_accy 58.300, Test_accy 48.670
2023-05-08 19:05:17,190 [bic.py] => training => Task 2, Epoch 2/10 => Loss 1.000, Train_accy 58.700, Test_accy 49.330
2023-05-08 19:06:35,572 [bic.py] => training => Task 2, Epoch 3/10 => Loss 0.826, Train_accy 50.200, Test_accy 58.670
2023-05-08 19:07:54,688 [bic.py] => training => Task 2, Epoch 4/10 => Loss 0.736, Train_accy 80.570, Test_accy 72.670
2023-05-08 19:09:12,495 [bic.py] => training => Task 2, Epoch 5/10 => Loss 0.709, Train_accy 90.690, Test_accy 88.670
2023-05-08 19:10:29,218 [bic.py] => training => Task 2, Epoch 6/10 => Loss 0.646, Train_accy 88.660, Test_accy 84.000
2023-05-08 19:11:47,301 [bic.py] => training => Task 2, Epoch 7/10 => Loss 0.620, Train_accy 83.810, Test_accy 74.000
2023-05-08 19:12:53,948 [bic.py] => training => Task 2, Epoch 8/10 => Loss 0.573, Train_accy 93.120, Test_accy 83.330
2023-05-08 19:13:52,325 [bic.py] => training => Task 2, Epoch 9/10 => Loss 0.555, Train_accy 96.360, Test_accy 89.330
2023-05-08 19:14:56,310 [bic.py] => training => Task 2, Epoch 10/10 => Loss 0.521, Train_accy 97.980, Test_accy 94.000
2023-05-08 19:15:11,720 [bic.py] => bias_correction => Task 2, Epoch 1/10 => Loss 1.230, Train_accy 88.890, Test_accy 95.330
2023-05-08 19:15:25,464 [bic.py] => bias_correction => Task 2, Epoch 2/10 => Loss 1.230, Train_accy 88.890, Test_accy 96.000
2023-05-08 19:15:36,529 [bic.py] => bias_correction => Task 2, Epoch 3/10 => Loss 1.229, Train_accy 88.890, Test_accy 96.000
2023-05-08 19:15:50,971 [bic.py] => bias_correction => Task 2, Epoch 4/10 => Loss 1.228, Train_accy 88.890, Test_accy 96.000
2023-05-08 19:16:07,220 [bic.py] => bias_correction => Task 2, Epoch 5/10 => Loss 1.227, Train_accy 88.890, Test_accy 96.000
2023-05-08 19:16:21,684 [bic.py] => bias_correction => Task 2, Epoch 6/10 => Loss 1.226, Train_accy 88.890, Test_accy 96.000
2023-05-08 19:16:35,563 [bic.py] => bias_correction => Task 2, Epoch 7/10 => Loss 1.225, Train_accy 88.890, Test_accy 96.000
2023-05-08 19:16:51,270 [bic.py] => bias_correction => Task 2, Epoch 8/10 => Loss 1.224, Train_accy 88.890, Test_accy 96.000
2023-05-08 19:17:05,477 [bic.py] => bias_correction => Task 2, Epoch 9/10 => Loss 1.224, Train_accy 88.890, Test_accy 96.000
2023-05-08 19:17:19,908 [bic.py] => bias_correction => Task 2, Epoch 10/10 => Loss 1.223, Train_accy 88.890, Test_accy 96.000
2023-05-08 19:17:19,909 [base.py] => Reducing exemplars...(20 per classes)
2023-05-08 19:17:25,590 [base.py] => Constructing exemplars...(20 per classes)
vectors shape L213: (71, 64)
vectors shape L214: (71, 64)
S shape L221: ()
mu_p shape L224: (71, 64)
S shape L221: (64,)
mu_p shape L224: (70, 64)
[... the S/mu_p pair repeats, mu_p shrinking by one row per iteration ...]
S shape L221: (64,)
mu_p shape L224: (52, 64)
vectors shape L213: (74, 64)
vectors shape L214: (74, 64)
S shape L221: ()
mu_p shape L224: (74, 64)
S shape L221: (64,)
mu_p shape L224: (73, 64)
[... the S/mu_p pair repeats, mu_p shrinking by one row per iteration ...]
S shape L221: (64,)
mu_p shape L224: (55, 64)
2023-05-08 19:17:30,071 [bic.py] => Parameters of bias layer:
2023-05-08 19:17:30,074 [bic.py] => 0 => 1.000, 0.000
2023-05-08 19:17:30,074 [bic.py] => 1 => 1.279, -0.087
2023-05-08 19:17:30,074 [bic.py] => 2 => 0.905, -0.175
2023-05-08 19:17:45,153 [bic.py] => Exemplar size: 120
2023-05-08 19:17:45,158 [trainer.py] => CNN: {'total': 96.0, '00-09': 96.0, 'old': 95.79, 'new': 96.36}
2023-05-08 19:17:45,158 [trainer.py] => NME: {'total': 96.67, '00-09': 96.67, 'old': 97.89, 'new': 94.55}
2023-05-08 19:17:45,158 [trainer.py] => CNN top1 curve: [48.08, 96.84, 96.0]
2023-05-08 19:17:45,158 [trainer.py] => CNN top2 curve: [100.0, 100.0, 98.67]
2023-05-08 19:17:45,158 [trainer.py] => NME top1 curve: [80.77, 97.89, 96.67]
2023-05-08 19:17:45,158 [trainer.py] => NME top2 curve: [100.0, 98.95, 97.33]

tnalyahya commented on July 28, 2024

Any suggestions as to why I'm getting the wrong shape for mu_p? Do I have to change the memory size in bic.json to make it work?
Note that the image size is 32x32.

G-U-N commented on July 28, 2024

I think the error might occur when you load your dataset. If you are adding a new dataset following our code, add the following lines inside def download_data(self): in data.py

print(self.train_data.shape)
print(self.train_targets)

to check whether your dataset is loaded correctly.
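
With 18 classes of 100 samples each, a slightly fuller sanity check could look like this (a sketch meant to be pasted into download_data; it assumes your loader fills the same train_data and train_targets attributes used above):

import numpy as np

print(self.train_data.shape)     # e.g. (1800, 32, 32, 3) if all samples land in the training split
print(self.train_targets.shape)  # e.g. (1800,)
labels, counts = np.unique(self.train_targets, return_counts=True)
print(dict(zip(labels.tolist(), counts.tolist())))  # per-class counts; each should match what you expect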

zhoudw-zdw commented on July 28, 2024

Try setting fixed_memory to True.

This problem sometimes occurs when the minimum number of instances per class is lower than the average number of exemplars per class.
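
For context, this is roughly how PyCIL picks the per-class exemplar budget (a paraphrase of the samples_per_class logic in models/base.py; illustrative, not verbatim):

def samples_per_class(memory_size, memory_per_class, total_classes, fixed_memory):
    if fixed_memory:
        return memory_per_class           # constant budget, e.g. 20 per class
    return memory_size // total_classes   # large per-class demand when few classes are seen

# With memory_size=180 and only the first 2 classes learned, the flexible
# budget asks herding for 90 exemplars per class, which fails for any class
# with fewer than 90 loaded samples.
print(samples_per_class(180, 20, 2, fixed_memory=False))  # 90
print(samples_per_class(180, 20, 2, fixed_memory=True))   # 20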

tnalyahya commented on July 28, 2024

Thanks all!
It worked :)
