
lightgcn-pytorch's Introduction

Update

2020-09:

  • Changed the per-epoch print format
  • Added a C++ extension in code/sources/ for negative sampling. To use it, please install pybind11 and cppimport in your environment

LightGCN-pytorch

This is the Pytorch implementation for our SIGIR 2020 paper:

Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, Meng Wang (2020). LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. SIGIR 2020. Paper in arXiv.

Author: Prof. Xiangnan He (staff.ustc.edu.cn/~hexn/)

(Also see the TensorFlow implementation.)

Introduction

In this work, we aim to simplify the design of GCN to make it more concise and appropriate for recommendation. We propose a new model named LightGCN, which includes only the most essential component of GCN for collaborative filtering: neighborhood aggregation.

Environment Requirement

pip install -r requirements.txt

Dataset

We provide three processed datasets, Gowalla, Yelp2018, and Amazon-Book, plus one small dataset, LastFM.

See dataloader.py for more details.

An example of running a 3-layer LightGCN

Run LightGCN on the Gowalla dataset:

  • command

cd code && python main.py --decay=1e-4 --lr=0.001 --layer=3 --seed=2020 --dataset="gowalla" --topks="[20]" --recdim=64

  • log output
...
======================
EPOCH[5/1000]
BPR[sample time][16.2=15.84+0.42]
[saved][[BPR[aver loss1.128e-01]]
[TEST]
{'precision': array([0.03315359]), 'recall': array([0.10711388]), 'ndcg': array([0.08940792])}
[TOTAL TIME] 35.9975962638855
...
======================
EPOCH[116/1000]
BPR[sample time][16.9=16.60+0.45]
[saved][[BPR[aver loss2.056e-02]]
[TOTAL TIME] 30.99874997138977
...

NOTE:

  1. Even though we provide code to split the user-item matrix for matrix multiplication, we strongly suggest you do not enable it, since it drastically slows down training.
  2. If the test process feels slow, try increasing testbatch and enabling multicore (Windows systems may encounter problems with the multicore option enabled).
  3. The tensorboard option is useful; we recommend enabling it.
  4. Since we fix the seed (--seed=2020) of numpy and torch at the beginning, running the command above should reproduce the exact same output log, apart from the running times (check your output at epoch 5 and epoch 116); a minimal seeding sketch is shown after this list.
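The sketch below illustrates the kind of seeding note 4 refers to (the function name and placement are illustrative, not the repository's actual code):

import numpy as np
import torch

def set_seed(seed: int = 2020) -> None:
    # Fix the NumPy and PyTorch RNGs so that negative sampling and
    # weight initialization are reproducible across runs.
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)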

Extend:

  • If you want to run LightGCN on your own dataset, go to dataloader.py and implement a dataloader that inherits from BasicDataset, then register it in register.py (a skeleton is sketched after this list).
  • If you want to run your own models on the datasets we provide, go to model.py and implement a model that inherits from BasicModel, then register it in register.py.
  • If you want to run your own sampling methods on the datasets and models we provide, go to Procedure.py and implement a sampling function, then modify the corresponding code in main.py.
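A hypothetical skeleton of such a custom dataset (class and file names are illustrative; the exact abstract interface to implement is defined by BasicDataset in dataloader.py):

from dataloader import BasicDataset

class MyDataset(BasicDataset):
    """Illustrative custom dataset: implement the abstract properties and
    methods declared by BasicDataset (see dataloader.py for the full list)."""
    def __init__(self, path="../data/mydata"):
        super().__init__()
        # Parse your own train/test interaction files here and build the
        # structures that the BasicDataset interface exposes.
        ...

# Then, in register.py (illustrative):
# dataset = dataloader.MyDataset(path="../data/mydata")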

Results

All metrics are computed at top-20.

PyTorch version results (stopped at 1000 epochs, seed = 2020):

  • gowalla:

              Recall    NDCG      Precision
    layer=1   0.1687    0.1417    0.05106
    layer=2   0.1786    0.1524    0.05456
    layer=3   0.1824    0.1547    0.05589
    layer=4   0.1825    0.1537    0.05576
  • yelp2018:

              Recall    NDCG      Precision
    layer=1   0.05604   0.04557   0.02519
    layer=2   0.05988   0.04956   0.0271
    layer=3   0.06347   0.05238   0.0285
    layer=4   0.06515   0.05325   0.02917

lightgcn-pytorch's People

Contributors

dependabot[bot], fengh16, gabyustc, gusye1234, hozrifai, tekrus


lightgcn-pytorch's Issues

LightGCN-L1-L

Thanks for your work. How do I configure the model to use L1 normalization on the left side only (i.e., LightGCN-L1-L)?

About Layer Weights αk

Hi, your paper mentions setting the weight of each layer's embedding to 1/(K+1), but this does not seem to appear in the code. Could you explain why?

layer 1 results different from paper on Yelp2018

I trained the model on yelp2018 with 1 layer and was able to reach recall@20 = 0.056, which is similar to what is reported on this GitHub page, but the paper reports 0.0631 for 1 layer. How is this possible?

How to run inference on users without any interactions?

Hi,
In the example datasets (Amazon, Gowalla, ...), every user in the test data has interacted with at least one item.
However, I am trying to apply this to a problem where we recommend only one item to each user.
Therefore, there would reasonably be many users without a single interaction in the test data, and we would still want to provide recommendations for them.
I am wondering whether this implementation can be applied to such a problem?

Statistics of the datasets

I downloaded the 10-core Amazon-Book dataset, but I get different statistics: item_num = 128939, user_num = 158650, sample = 4701968. Could you please share the preprocessing code for the Amazon-Book dataset? It is really important to me.

αK

Thank you for your work. Where do you set αK?

Lack of validation data

Hi! In the paper, you mentioned, "The early stopping and validation strategies are the same as NGCF". However, I cannot find any validation set in the provided code. Am I missing anything? Thanks!

Data set

Are there any references on how to generate the datasets? Thanks a lot!

Hello, an issue about the forward function

Hi, I read through all of the code but could not find where the forward function is called. What is the purpose of the forward function in model.py?

Hi, an issue about the hyper-parameter alpha

Hi, in the LightGCN paper (SIGIR 2020), Equation (4) multiplies the k-th layer embedding by a weight 1/(k+1) before summation, but in your code (model.py, function computer) the layer embeddings are simply averaged, so every layer shares the same weight. Is there anything wrong?

Why didn't you directly modify the code on NGCF? I did so, but the result was not that ideal.

Following the descriptions in the paper, I tried to implement LightGCN directly on top of the NGCF model as follows:


def forward(self, users, pos_items, neg_items, drop_flag=True):

    A_hat = self.sparse_mean_adj

    ego_embeddings = torch.cat([self.embedding_dict['user_emb'], self.embedding_dict['item_emb']], 0)

    all_embeddings = [ego_embeddings]

    for k in range(len(self.layers)):
        ego_embeddings = torch.sparse.mm(A_hat, ego_embeddings)

        all_embeddings.append(ego_embeddings)

    all_embeddings = torch.stack(all_embeddings, dim=1)
    all_embeddings = torch.mean(all_embeddings, dim=1)

    u_g_embeddings = all_embeddings[:self.n_user, :]
    i_g_embeddings = all_embeddings[self.n_user:, :]

    """
    look up
    """
    u_g_embeddings = u_g_embeddings[users, :]
    pos_i_g_embeddings = i_g_embeddings[pos_items, :]
    neg_i_g_embeddings = i_g_embeddings[neg_items, :]

    return u_g_embeddings, pos_i_g_embeddings, neg_i_g_embeddings

I have removed both the feature transformation matrices and the non-linear activation function. Additionally, I replaced norm_adj with mean_adj. The other hyper-parameter settings are: lr = 0.001, lamda = 1e-4, K = 3, epoch = 1000. But the results do not seem ideal:

/home/lwy/anaconda3/envs/KGAT-Pytorch/bin/python /home/lwy/NGCF-LightGCN2/NGCF/main.py
n_users=29858, n_items=40981
n_interactions=1027370
n_train=810128, n_test=217242, sparsity=0.00084
already load adj matrix (70839, 70839) 0.12331271171569824
Epoch 0 [60.2s]: train==[266.12308=265.91809 + 0.20498]
Epoch 1 [60.2s]: train==[103.31055=102.82377 + 0.48685]
Epoch 2 [61.3s]: train==[76.84050=76.21890 + 0.62169]
Epoch 3 [61.8s]: train==[65.20748=64.50112 + 0.70636]
Epoch 4 [61.0s]: train==[57.59498=56.82709 + 0.76793]
Epoch 5 [60.6s]: train==[52.46275=51.64690 + 0.81580]
Epoch 6 [61.2s]: train==[48.68194=47.82713 + 0.85480]
Epoch 7 [60.6s]: train==[45.87556=44.98744 + 0.88810]
Epoch 8 [60.9s]: train==[43.57308=42.65576 + 0.91729]
Epoch 9 [60.6s + 86.3s]: train==[40.85046=39.90646 + 0.94400], recall=[0.12405, 0.26871], precision=[0.03890, 0.01713], hit=[0.47357, 0.70631], ndcg=[0.10771, 0.15002]
Epoch 10 [60.1s]: train==[39.45785=38.49084 + 0.96697]
Epoch 11 [60.5s]: train==[37.43333=36.44530 + 0.98802]
Epoch 12 [60.8s]: train==[36.24932=35.24067 + 1.00868]
Epoch 13 [60.2s]: train==[34.43538=33.40791 + 1.02747]
Epoch 14 [60.5s]: train==[33.88741=32.84295 + 1.04452]
Epoch 15 [60.8s]: train==[32.50555=31.44544 + 1.06010]
Epoch 16 [60.8s]: train==[31.84496=30.77052 + 1.07444]
Epoch 17 [60.4s]: train==[31.18940=30.10144 + 1.08792]
Epoch 18 [60.2s]: train==[30.06595=28.96548 + 1.10049]
Epoch 19 [61.0s + 86.2s]: train==[29.07747=27.96435 + 1.11312], recall=[0.13514, 0.29390], precision=[0.04178, 0.01860], hit=[0.49873, 0.73354], ndcg=[0.11629, 0.16292]
Epoch 20 [60.1s]: train==[28.85291=27.72777 + 1.12516]
Epoch 21 [60.6s]: train==[27.99307=26.85705 + 1.13602]
Epoch 22 [60.5s]: train==[27.20474=26.05751 + 1.14725]
Epoch 23 [60.2s]: train==[26.73112=25.57269 + 1.15840]
Epoch 24 [60.5s]: train==[26.02621=24.85734 + 1.16887]
Epoch 25 [60.9s]: train==[25.75082=24.57274 + 1.17807]
Epoch 26 [61.5s]: train==[25.21010=24.02402 + 1.18605]
Epoch 27 [60.2s]: train==[24.87771=23.68341 + 1.19428]
Epoch 28 [60.6s]: train==[24.10084=22.89890 + 1.20193]
Epoch 29 [60.8s + 86.1s]: train==[23.83304=22.62252 + 1.21055], recall=[0.13947, 0.30446], precision=[0.04313, 0.01924], hit=[0.50670, 0.74134], ndcg=[0.12042, 0.16890]
Epoch 30 [60.3s]: train==[23.28778=22.07078 + 1.21701]
Epoch 31 [60.9s]: train==[23.03534=21.81107 + 1.22426]
Epoch 32 [60.6s]: train==[22.75390=21.52212 + 1.23175]
Epoch 33 [60.1s]: train==[22.24394=21.00479 + 1.23913]
Epoch 34 [60.1s]: train==[21.83437=20.58843 + 1.24596]
Epoch 35 [60.1s]: train==[21.71244=20.45982 + 1.25263]
Epoch 36 [60.3s]: train==[21.40071=20.14179 + 1.25890]
Epoch 37 [60.7s]: train==[20.95679=19.69266 + 1.26413]
Epoch 38 [60.2s]: train==[20.55039=19.28064 + 1.26975]
Epoch 39 [60.7s + 85.8s]: train==[20.44785=19.17182 + 1.27603], recall=[0.14338, 0.31395], precision=[0.04432, 0.01979], hit=[0.51477, 0.74921], ndcg=[0.12412, 0.17426]
Epoch 40 [60.4s]: train==[20.17955=18.89828 + 1.28127]
Epoch 41 [60.6s]: train==[19.87287=18.58629 + 1.28659]
Epoch 42 [60.5s]: train==[19.71936=18.42737 + 1.29199]
Epoch 43 [60.4s]: train==[19.13259=17.83528 + 1.29731]
Epoch 44 [60.4s]: train==[18.95424=17.65184 + 1.30243]
Epoch 45 [60.6s]: train==[18.72514=17.41759 + 1.30756]
Epoch 46 [61.6s]: train==[18.39277=17.07915 + 1.31360]
Epoch 47 [62.3s]: train==[18.07647=16.75833 + 1.31813]
Epoch 48 [61.0s]: train==[18.08525=16.76234 + 1.32290]
Epoch 49 [60.8s + 86.7s]: train==[17.87114=16.54349 + 1.32764], recall=[0.14837, 0.32109], precision=[0.04568, 0.02022], hit=[0.52385, 0.75511], ndcg=[0.12718, 0.17785]
Epoch 50 [60.1s]: train==[17.42008=16.08800 + 1.33209]
Epoch 51 [60.9s]: train==[17.52630=16.18950 + 1.33681]
Epoch 52 [60.8s]: train==[17.27466=15.93404 + 1.34060]
Epoch 53 [60.5s]: train==[16.96308=15.61821 + 1.34486]
Epoch 54 [60.7s]: train==[17.04684=15.69842 + 1.34843]
Epoch 55 [60.8s]: train==[16.48394=15.13171 + 1.35224]
Epoch 56 [60.3s]: train==[16.37258=15.01538 + 1.35720]
Epoch 57 [60.6s]: train==[15.99654=14.63284 + 1.36370]
Epoch 58 [60.5s]: train==[15.85061=14.48335 + 1.36725]
Epoch 59 [60.4s + 85.9s]: train==[15.79825=14.42683 + 1.37142], recall=[0.15054, 0.32554], precision=[0.04628, 0.02053], hit=[0.52740, 0.76080], ndcg=[0.12876, 0.18028]
Epoch 60 [60.0s]: train==[15.53743=14.16208 + 1.37533]
Epoch 61 [60.2s]: train==[15.39389=14.01473 + 1.37916]
Epoch 62 [60.2s]: train==[15.08164=13.69826 + 1.38338]
Epoch 63 [60.2s]: train==[15.31049=13.92367 + 1.38683]
Epoch 64 [60.3s]: train==[14.80666=13.41616 + 1.39048]
Epoch 65 [60.4s]: train==[14.61778=13.22313 + 1.39465]
Epoch 66 [60.7s]: train==[14.59712=13.19897 + 1.39816]
Epoch 67 [60.7s]: train==[14.48054=13.07812 + 1.40242]
Epoch 68 [61.8s]: train==[14.11763=12.71134 + 1.40630]
Epoch 69 [60.7s + 85.6s]: train==[14.25008=12.84063 + 1.40946], recall=[0.15324, 0.33087], precision=[0.04716, 0.02085], hit=[0.53262, 0.76445], ndcg=[0.13133, 0.18351]
Epoch 70 [60.4s]: train==[13.70188=12.28905 + 1.41283]
Epoch 71 [60.7s]: train==[13.67261=12.25485 + 1.41777]
Epoch 72 [60.4s]: train==[13.45830=12.03676 + 1.42153]
Epoch 73 [60.5s]: train==[13.66729=12.24264 + 1.42466]
Epoch 74 [60.4s]: train==[13.48132=12.05340 + 1.42792]
Epoch 75 [60.4s]: train==[13.29481=11.86464 + 1.43017]
Epoch 76 [60.9s]: train==[13.20742=11.77449 + 1.43294]
Epoch 77 [60.9s]: train==[13.15398=11.71758 + 1.43639]
Epoch 78 [60.8s]: train==[12.93547=11.49618 + 1.43930]
Epoch 79 [60.9s + 85.7s]: train==[12.78802=11.34579 + 1.44223], recall=[0.15535, 0.33372], precision=[0.04765, 0.02104], hit=[0.53654, 0.76814], ndcg=[0.13218, 0.18466]
Epoch 80 [60.2s]: train==[12.58874=11.14276 + 1.44596]
Epoch 81 [60.4s]: train==[12.19576=10.74619 + 1.44958]
Epoch 82 [60.9s]: train==[12.24909=10.79602 + 1.45308]
Epoch 83 [60.6s]: train==[12.22942=10.77261 + 1.45682]
Epoch 84 [60.5s]: train==[12.16876=10.70818 + 1.46059]
Epoch 85 [61.1s]: train==[12.11418=10.65182 + 1.46235]
Epoch 86 [60.6s]: train==[11.83784=10.37170 + 1.46614]
Epoch 87 [61.2s]: train==[11.71266=10.24333 + 1.46933]
Epoch 88 [60.2s]: train==[11.87091=10.39799 + 1.47292]
Epoch 89 [60.2s + 86.7s]: train==[11.65144=10.17511 + 1.47632], recall=[0.15660, 0.33638], precision=[0.04813, 0.02123], hit=[0.53831, 0.76998], ndcg=[0.13369, 0.18662]
Epoch 90 [60.1s]: train==[11.23581=9.75627 + 1.47954]
Epoch 91 [60.7s]: train==[11.44357=9.96125 + 1.48233]
Epoch 92 [60.6s]: train==[11.35559=9.87024 + 1.48535]
Epoch 93 [60.4s]: train==[11.14640=9.65725 + 1.48915]
Epoch 94 [60.4s]: train==[11.21134=9.71909 + 1.49225]
Epoch 95 [60.5s]: train==[11.02477=9.52928 + 1.49550]
Epoch 96 [60.6s]: train==[10.91048=9.41239 + 1.49809]
Epoch 97 [60.6s]: train==[10.83567=9.33497 + 1.50071]
Epoch 98 [61.2s]: train==[10.92937=9.42558 + 1.50379]
Epoch 99 [61.7s + 86.4s]: train==[10.41902=8.91162 + 1.50739], recall=[0.15799, 0.33832], precision=[0.04857, 0.02137], hit=[0.53962, 0.77108], ndcg=[0.13481, 0.18793]
Epoch 100 [60.7s]: train==[10.42737=8.91670 + 1.51068]
Epoch 101 [60.4s]: train==[10.44687=8.93280 + 1.51407]
Epoch 102 [60.5s]: train==[10.41840=8.90180 + 1.51661]
Epoch 103 [60.9s]: train==[10.34340=8.82457 + 1.51884]
Epoch 104 [60.5s]: train==[10.08196=8.55952 + 1.52245]
Epoch 105 [60.9s]: train==[9.96486=8.43928 + 1.52557]
Epoch 106 [60.3s]: train==[10.10532=8.57640 + 1.52892]
Epoch 107 [60.5s]: train==[9.91326=8.38158 + 1.53169]
Epoch 108 [61.8s]: train==[9.93930=8.40537 + 1.53394]
Epoch 109 [60.2s + 85.8s]: train==[9.82825=8.29161 + 1.53665], recall=[0.15889, 0.33996], precision=[0.04881, 0.02149], hit=[0.54093, 0.77296], ndcg=[0.13505, 0.18846]
Epoch 110 [60.1s]: train==[9.89731=8.35803 + 1.53927]
Epoch 111 [60.2s]: train==[9.68567=8.14511 + 1.54055]
Epoch 112 [60.7s]: train==[9.62892=8.08452 + 1.54440]
Epoch 113 [60.3s]: train==[9.61257=8.06528 + 1.54729]
Epoch 114 [60.3s]: train==[9.61021=8.06060 + 1.54960]
Epoch 115 [60.3s]: train==[9.36545=7.81435 + 1.55110]
Epoch 116 [60.3s]: train==[9.41847=7.86497 + 1.55349]
Epoch 117 [60.5s]: train==[9.20634=7.64965 + 1.55669]
Epoch 118 [61.0s]: train==[9.24341=7.68390 + 1.55952]
Epoch 119 [60.4s + 86.0s]: train==[8.96627=7.40440 + 1.56186], recall=[0.16035, 0.34069], precision=[0.04906, 0.02153], hit=[0.54334, 0.77490], ndcg=[0.13590, 0.18912]
Epoch 120 [60.3s]: train==[9.02170=7.45705 + 1.56466]
Epoch 121 [60.7s]: train==[8.90828=7.34105 + 1.56723]
Epoch 122 [60.1s]: train==[8.87648=7.30680 + 1.56967]
Epoch 123 [60.7s]: train==[8.80425=7.23266 + 1.57158]
Epoch 124 [60.8s]: train==[8.85096=7.27670 + 1.57424]
Epoch 125 [60.3s]: train==[8.72812=7.15078 + 1.57735]
Epoch 126 [60.9s]: train==[8.77514=7.19571 + 1.57942]
Epoch 127 [60.5s]: train==[8.60509=7.02251 + 1.58259]
Epoch 128 [60.8s]: train==[8.67968=7.09404 + 1.58564]
Epoch 129 [60.3s + 86.6s]: train==[8.42672=6.83914 + 1.58760], recall=[0.15989, 0.34315], precision=[0.04903, 0.02163], hit=[0.54153, 0.77624], ndcg=[0.13554, 0.18959]
Epoch 130 [60.1s]: train==[8.59887=7.00849 + 1.59037]
Epoch 131 [60.2s]: train==[8.47767=6.88626 + 1.59140]
Epoch 132 [59.9s]: train==[8.39836=6.80522 + 1.59314]
Epoch 133 [59.5s]: train==[8.37008=6.77470 + 1.59538]
Epoch 134 [60.0s]: train==[8.28425=6.68692 + 1.59732]
Epoch 135 [59.7s]: train==[8.49748=6.89880 + 1.59868]
Epoch 136 [59.7s]: train==[8.17713=6.57651 + 1.60061]
Epoch 137 [59.6s]: train==[8.08149=6.47830 + 1.60320]
Epoch 138 [59.7s]: train==[8.13093=6.52483 + 1.60609]
Epoch 139 [59.7s + 74.5s]: train==[8.00626=6.39854 + 1.60772], recall=[0.16084, 0.34412], precision=[0.04935, 0.02170], hit=[0.54260, 0.77755], ndcg=[0.13590, 0.18991]
Epoch 140 [59.1s]: train==[8.02596=6.41510 + 1.61087]
Epoch 141 [59.2s]: train==[8.03624=6.42338 + 1.61286]
Epoch 142 [59.3s]: train==[7.76434=6.14867 + 1.61567]
Epoch 143 [59.3s]: train==[7.74384=6.12575 + 1.61810]
Epoch 144 [59.2s]: train==[7.86404=6.24304 + 1.62100]
Epoch 145 [59.3s]: train==[7.65367=6.03018 + 1.62350]
Epoch 146 [59.3s]: train==[7.79558=6.17091 + 1.62467]
Epoch 147 [59.4s]: train==[7.64348=6.01648 + 1.62700]
Epoch 148 [59.4s]: train==[7.60244=5.97293 + 1.62950]
Epoch 149 [59.4s + 74.4s]: train==[7.64775=6.01733 + 1.63042], recall=[0.16036, 0.34484], precision=[0.04915, 0.02173], hit=[0.54119, 0.77818], ndcg=[0.13561, 0.19003]
Epoch 150 [59.5s]: train==[7.54225=5.90988 + 1.63237]
Epoch 151 [59.6s]: train==[7.40085=5.76583 + 1.63502]
Epoch 152 [59.7s]: train==[7.42210=5.78523 + 1.63688]
Epoch 153 [59.5s]: train==[7.48156=5.84386 + 1.63770]
Epoch 154 [59.7s]: train==[7.39741=5.75809 + 1.63932]
Epoch 155 [59.7s]: train==[7.22317=5.58149 + 1.64167]
Epoch 156 [59.7s]: train==[7.23868=5.59447 + 1.64421]
Epoch 157 [59.7s]: train==[7.12774=5.48157 + 1.64617]
Epoch 158 [59.7s]: train==[7.17159=5.52278 + 1.64881]
Epoch 159 [59.7s + 74.6s]: train==[7.27608=5.62471 + 1.65135], recall=[0.16046, 0.34535], precision=[0.04927, 0.02178], hit=[0.53868, 0.77725], ndcg=[0.13563, 0.19018]
Epoch 160 [59.4s]: train==[7.00549=5.35218 + 1.65331]
Epoch 161 [59.2s]: train==[6.99893=5.34303 + 1.65590]
Epoch 162 [59.3s]: train==[7.12252=5.46511 + 1.65740]
Epoch 163 [59.3s]: train==[7.23824=5.57958 + 1.65866]
Epoch 164 [59.4s]: train==[6.91963=5.25922 + 1.66041]
Epoch 165 [59.4s]: train==[6.99961=5.33781 + 1.66181]
Epoch 166 [59.3s]: train==[6.99266=5.32886 + 1.66381]
Epoch 167 [59.3s]: train==[6.99133=5.32623 + 1.66510]
Epoch 168 [59.4s]: train==[6.81604=5.14932 + 1.66672]
Epoch 169 [59.4s + 74.4s]: train==[6.89832=5.22984 + 1.66848], recall=[0.15889, 0.34526], precision=[0.04879, 0.02180], hit=[0.53584, 0.77758], ndcg=[0.13417, 0.18929]
Epoch 170 [59.1s]: train==[6.86331=5.19311 + 1.67020]
Epoch 171 [59.5s]: train==[6.74868=5.07659 + 1.67209]
Epoch 172 [59.5s]: train==[6.77525=5.10093 + 1.67432]
Epoch 173 [59.5s]: train==[6.68779=5.01278 + 1.67501]
Epoch 174 [59.5s]: train==[6.59584=4.91946 + 1.67637]
Epoch 175 [59.5s]: train==[6.66248=4.98471 + 1.67777]
Epoch 176 [59.5s]: train==[6.57267=4.89164 + 1.68103]
Epoch 177 [59.5s]: train==[6.44634=4.76320 + 1.68314]
Epoch 178 [59.5s]: train==[6.47096=4.78654 + 1.68442]
Epoch 179 [59.6s + 74.9s]: train==[6.61471=4.92859 + 1.68611], recall=[0.15950, 0.34521], precision=[0.04889, 0.02179], hit=[0.53637, 0.77865], ndcg=[0.13456, 0.18948]
Epoch 180 [59.5s]: train==[6.42855=4.74134 + 1.68721]
Epoch 181 [59.6s]: train==[6.56521=4.87680 + 1.68842]
Epoch 182 [59.7s]: train==[6.58104=4.89148 + 1.68957]
Epoch 183 [59.7s]: train==[6.27660=4.58523 + 1.69138]
Epoch 184 [59.3s]: train==[6.28665=4.59311 + 1.69353]
Epoch 185 [59.5s]: train==[6.29950=4.60468 + 1.69482]
Epoch 186 [59.4s]: train==[6.34056=4.64325 + 1.69731]
Epoch 187 [59.6s]: train==[6.29903=4.60059 + 1.69844]
Epoch 188 [59.6s]: train==[6.39128=4.69142 + 1.69986]
Epoch 189 [59.6s + 74.3s]: train==[6.14241=4.44064 + 1.70176], recall=[0.15999, 0.34552], precision=[0.04917, 0.02180], hit=[0.53704, 0.77818], ndcg=[0.13511, 0.18988]
Early stopping is trigger at step: 5 log:0.15999063147947018
Best Iter=[13]@[13017.9] recall=[0.16084 0.22535 0.27380 0.31192 0.34412], precision=[0.04935 0.03497 0.02847 0.02448 0.02170], hit=[0.54260 0.64884 0.70755 0.74684 0.77755], ndcg=[0.13590 0.15620 0.17059 0.18131 0.18991]

Process finished with exit code 0

So I sincerely ask: why didn't you directly modify the NGCF code, and why is my result not ideal? Thank you very much!

The normalization coefficient p_ui used during neighborhood aggregation in NGCF is not the symmetric normalization used in LightGCN.

For the Gowalla dataset, the normalized adjacency matrix mean_adj in NGCF and the normalized adjacency matrix pre_adj_mat in LightGCN are not the same: in NGCF, the non-zero entries in a given row are all identical (the row-normalized value), whereas in LightGCN's pre_adj_mat the entries within a row differ.

I suspect this is because NGCF's aggregation uses plain (row) normalization, while LightGCN uses symmetric normalization. Is that the reason?
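For reference, here is an illustrative comparison of the two normalizations discussed above (a sketch in scipy, not code from either repository; A is the combined (n_users + n_items) x (n_users + n_items) adjacency matrix):

import numpy as np
import scipy.sparse as sp

def row_normalize(A):
    # Plain (row) normalization, D^{-1} A: every non-zero entry in a row
    # becomes 1/degree, so all entries in a row are identical.
    deg = np.asarray(A.sum(axis=1)).flatten()
    d_inv = np.zeros_like(deg, dtype=float)
    d_inv[deg > 0] = 1.0 / deg[deg > 0]
    return sp.diags(d_inv) @ A

def sym_normalize(A):
    # Symmetric normalization, D^{-1/2} A D^{-1/2}: each entry becomes
    # 1 / sqrt(deg(u) * deg(i)), so entries within a row differ.
    deg = np.asarray(A.sum(axis=1)).flatten()
    d_inv_sqrt = np.zeros_like(deg, dtype=float)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    d_mat = sp.diags(d_inv_sqrt)
    return d_mat @ A @ d_mat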

A question

Why embed only the IDs, which are supposed to have no real meaning?

recall and precision

def RecallPrecision_ATk(test_data, r, k):
    """
    test_data should be a list? cause users may have different amount of pos items. shape (test_batch, k)
    pred_data : shape (test_batch, k) NOTE: pred_data should be pre-sorted
    k : top-k
    """
    right_pred = r[:, :k].sum(1)
    precis_n = k
    recall_n = np.array([len(test_data[i]) for i in range(len(test_data))])
    recall = np.sum(right_pred / recall_n)
    precis = np.sum(right_pred) / precis_n
    return {'recall': recall, 'precision': precis}

Computed this way, the recall may be greater than 1. Is there a mistake?

BPR loss sigmoid vs softplus

Hello, thanks for your implementation in the first place.
I notice that in the BPR loss you used softplus instead of sigmoid, which is different from the original paper.
Could you please explain why you did so? Thanks!
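For reference, the two formulations are mathematically equivalent, since -log(sigmoid(x)) = softplus(-x); PyTorch's softplus implementation is also numerically stable for large arguments. A quick check:

import torch
import torch.nn.functional as F

pos_scores, neg_scores = torch.randn(5), torch.randn(5)
# -log(sigmoid(pos - neg)) is the paper's form; softplus(neg - pos) is the code's form.
paper_form = -torch.log(torch.sigmoid(pos_scores - neg_scores))
code_form = F.softplus(neg_scores - pos_scores)
print(torch.allclose(paper_form, code_form, atol=1e-6))  # True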

error in function

In line 61 of utils.py, UniformSample_original_python is called with two arguments, but the function only accepts one.

Not sure if it was worth submitting a whole PR just for this, so I'll just leave a note here for people trying to replicate the tests.

Amazon Dataset

Hi, thanks for providing the code.
I have questions about the Amazon-Book dataset. What is the train.txt file? Does it correspond to a test dataset in machine learning? If so, I guess both the Amazon train and test datasets are actually customer-evaluated data.

It is difficult for me to understand how the theory maps to the code, even though I have read the related papers. In particular, it is hard to understand the code that turns the data into graph nodes. I'd appreciate recommendations for reference materials.

When loading trained model

When setting --pretrain to 1 (to load a trained model such as 'lgn-gowalla-3-64.pth.tar'), initialization of the lgn model fails with the following error:
Traceback (most recent call last):
  File "main.py", line 18, in <module>
    Recmodel = register.MODELS[world.model_name](world.config, dataset)
  File "./LightGCN/code/model.py", line 90, in __init__
    self.__init_weight()
  File "./LightGCN/code/model.py", line 112, in __init_weight
    self.embedding_user.weight.data.copy_(torch.from_numpy(self.config['user_emb']))
KeyError: 'user_emb'

It seems that config['user_emb'] and config['item_emb'] are never set in world.py or anywhere else.
Could you kindly tell me how to fix this, and how to load a trained model correctly?
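A possible workaround, sketched under the assumption that the saved checkpoint is a model state dict (as the training log's save path suggests), is to load it directly instead of going through --pretrain:

import torch
import world
import register
from register import dataset

weight_file = "./checkpoints/lgn-gowalla-3-64.pth.tar"
# Build the model the same way main.py does, then restore the saved weights.
Recmodel = register.MODELS[world.model_name](world.config, dataset)
Recmodel.load_state_dict(torch.load(weight_file, map_location=torch.device("cpu")))
Recmodel.eval()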

The L2 regularization

During mini-batch training, the L2 regularization term does not cover all model parameters; it only includes the embeddings of the users and items involved in the current batch. Is this a deliberate trick in the experiment?

    def bpr_loss(self, users, pos, neg):
        (users_emb, pos_emb, neg_emb, 
        userEmb0,  posEmb0, negEmb0) = self.getEmbedding(users.long(), pos.long(), neg.long())
        reg_loss = (1/2)*(userEmb0.norm(2).pow(2) + 
                         posEmb0.norm(2).pow(2)  +
                         negEmb0.norm(2).pow(2))/float(len(users))
        pos_scores = torch.mul(users_emb, pos_emb)
        pos_scores = torch.sum(pos_scores, dim=1)
        neg_scores = torch.mul(users_emb, neg_emb)
        neg_scores = torch.sum(neg_scores, dim=1)
        
        loss = torch.mean(torch.nn.functional.softplus(neg_scores - pos_scores))
        
        return loss, reg_loss

Something wrong with 'amazon-book' dataset

Everything is fine when running on the gowalla dataset.
I got an error while parsing the Amazon dataset, specifically in dataloader.py:

ValueError: invalid literal for int() with base 10: ''

RAM problem while generating adjacency matrix

Hello,

I wanted to try this solution on my own dataset, which consists of around 1 million users, about 300k items, and about 42 million interactions for training. Unfortunately, when I prepared the data and started the script, the process was killed for exceeding 240 GiB of RAM. It happens while generating the adjacency matrix in the dataloader, at this line:
adj_mat[:self.n_users, self.n_users:] = R

Is there a way to do this differently, or is my dataset simply too big for LightGCN?

Best regards
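One way to avoid the expensive lil_matrix slice assignment is to assemble the bipartite adjacency directly in sparse form. The following is an illustrative sketch (not the repository's code), assuming R is a scipy sparse user-item interaction matrix:

import numpy as np
import scipy.sparse as sp

def build_norm_adj(R):
    # Assemble A = [[0, R], [R^T, 0]] without a dense block assignment,
    # then apply the symmetric normalization D^{-1/2} A D^{-1/2}.
    adj = sp.bmat([[None, R], [R.T, None]], format="csr")
    deg = np.asarray(adj.sum(axis=1)).flatten()
    d_inv_sqrt = np.zeros_like(deg, dtype=float)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    d_mat = sp.diags(d_inv_sqrt)
    return (d_mat @ adj @ d_mat).tocsr()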

could not find a version torch==1.4.0

I used pip install -r requirements.txt in an anaconda environment, but an error occurred:

ERROR: Could not find a version that satisfies the requirement torch==1.4.0 (from -r requirements.txt (line 1)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)

ERROR: No matching distribution found for torch==1.4.0 (from -r requirements.txt (line 1))

I tried Python 3.8, 3.7, and 3.6 and looked it up on Google, but it still didn't work.

trustnetwork

Can you tell me how the trustnetwork.txt file is generated?

Graph creation is not scalable

Hello!

Thank you for posting the code of LightGCN. I would like to suggest an improvement. I found that the following line of code can cause a MemoryError in the case of a large number of users or items:

dense = self.Graph.to_dense()

To avoid using .to_dense(), I changed these lines of code in my local version:

first_sub = torch.stack([user_dim, item_dim + self.n_users])
second_sub = torch.stack([item_dim+self.n_users, user_dim])
index = torch.cat([first_sub, second_sub], dim=1)
data = torch.ones(index.size(-1)).int()
self.Graph = torch.sparse.IntTensor(index, data, torch.Size([self.n_users+self.m_items, self.n_users+self.m_items]))
dense = self.Graph.to_dense()
D = torch.sum(dense, dim=1).float()
D[D==0.] = 1.
D_sqrt = torch.sqrt(D).unsqueeze(dim=0)
dense = dense/D_sqrt
dense = dense/D_sqrt.t()
index = dense.nonzero()
data = dense[dense >= 1e-9]
assert len(index) == len(data)
self.Graph = torch.sparse.FloatTensor(index.t(), data, torch.Size([self.n_users+self.m_items, self.n_users+self.m_items]))
self.Graph = self.Graph.coalesce().to(world.device)

To these:

first_sub = np.vstack((trainUser,trainItem+n_users))
second_sub = np.vstack((item_dim+n_users, user_dim))
index = np.concatenate([first_sub, second_sub],1)
data = np.ones(index.shape[-1]).astype(np.int32)
Graph = sps.csr_matrix((data,index))
D = Graph.sum(1)
D[D==0] = 1.
D_sqrt = sps.diags(1/np.sqrt(D.A.ravel()))
Graph = D_sqrt * Graph * D_sqrt
index = torch.Tensor(Graph.nonzero()).long().T
data = torch.Tensor(Graph[Graph.nonzero()]).squeeze(0)
Graph = torch.sparse.FloatTensor(index.t(),
                         data,
                         torch.Size([n_users+n_items, n_users+n_items]))
Graph = Graph.coalesce().to('cpu')

They seem to compute the same thing. However, the second option does not create a dense matrix of size (N + M) ** 2, so the model can be run on large datasets.

Hi! Can I use word embeddings for pretrained user&item weight?

I notice that embedding_user and embedding_item are initialized with torch.nn.init.normal_, and that there is an option to use pretrained weights instead.
In my dataset, the recommendation results are strongly correlated with the item names, and I want to use BERT to get word embeddings so that similar item names have similar embedding vectors.
So can I use word embeddings as pretrained user & item weights to get better performance?
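Externally computed vectors of the right shape can be copied into an embedding table before training. The sketch below is illustrative (function and variable names are hypothetical), assuming the BERT vectors have already been projected down to the model's latent dimension:

import numpy as np
import torch
import torch.nn as nn

def init_item_embedding(item_vectors: np.ndarray) -> nn.Embedding:
    # item_vectors: (num_items, latent_dim) array, e.g. projected BERT
    # embeddings of the item names.
    num_items, latent_dim = item_vectors.shape
    emb = nn.Embedding(num_items, latent_dim)
    with torch.no_grad():
        emb.weight.copy_(torch.from_numpy(item_vectors).float())
    return emb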

An error when running on amazon-book

Hi! There's an error when running on the amazon-book dataset.

The Python version is 3.8.12.

cd code && python main.py --decay=1e-3 --lr=0.001 --layer=2 --seed=2020 --dataset="amazon-book" --topks="[20, 100, 300]" --recdim=64
Cpp extension not loaded
>>SEED: 2020
loading [../data/amazon-book]
Traceback (most recent call last):
  File "main.py", line 14, in <module>
    import register
  File "/home/sxr/code/LightGCN/code/register.py", line 8, in <module>
    dataset = dataloader.Loader(path="../data/"+world.dataset)
  File "/home/sxr/code/LightGCN/code/dataloader.py", line 261, in __init__
    items = [int(i) for i in l[1:]]
  File "/home/sxr/code/LightGCN/code/dataloader.py", line 261, in <listcomp>
    items = [int(i) for i in l[1:]]
ValueError: invalid literal for int() with base 10: '' 
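A possible workaround (an illustrative sketch, assuming the failure comes from empty tokens produced by trailing spaces or blank lines in the "user item item ..." files):

def parse_interaction_line(line):
    # Split "user item item ..." and drop empty tokens, which otherwise
    # raise: ValueError: invalid literal for int() with base 10: ''
    tokens = line.strip().split(' ')
    user = int(tokens[0])
    items = [int(t) for t in tokens[1:] if t]
    return user, items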

The implementation of BPR loss is different from that stated in the paper

The code implementation of BPR loss:

def bpr_loss(self, users, pos, neg):
        (users_emb, pos_emb, neg_emb, 
        userEmb0,  posEmb0, negEmb0) = self.getEmbedding(users.long(), pos.long(), neg.long())
        reg_loss = (1/2)*(userEmb0.norm(2).pow(2) + 
                         posEmb0.norm(2).pow(2)  +
                         negEmb0.norm(2).pow(2))/float(len(users))
        pos_scores = torch.mul(users_emb, pos_emb)
        pos_scores = torch.sum(pos_scores, dim=1)
        neg_scores = torch.mul(users_emb, neg_emb)
        neg_scores = torch.sum(neg_scores, dim=1)
    
        loss = torch.mean(torch.nn.functional.softplus(neg_scores - pos_scores))
            
        return loss, reg_loss

The formula stated in the paper:
(screenshot of the formula omitted)
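For reference, a hedged reconstruction of the BPR objective stated in the paper (see the paper for the exact formula):

L_{BPR} = -\sum_{u=1}^{M} \sum_{i \in \mathcal{N}_u} \sum_{j \notin \mathcal{N}_u} \ln \sigma\left(\hat{y}_{ui} - \hat{y}_{uj}\right) + \lambda \left\lVert \mathbf{E}^{(0)} \right\rVert^{2}

Here \sigma is the sigmoid function and \mathbf{E}^{(0)} denotes the 0-th-layer embeddings, the model's only trainable parameters. Since -\ln \sigma(\hat{y}_{ui} - \hat{y}_{uj}) = \mathrm{softplus}(\hat{y}_{uj} - \hat{y}_{ui}), the softplus form in the code is equivalent to the paper's formulation.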

Adjacency graph taking too long to generate

On a new dataset of about 1.3 million interactions, it takes almost 1 hour to create the adjacency graph. Is this correct? Is there a way to make graph creation faster?

Thank you.

Update:

I have found that it's the following step that takes an excessive amount of time:

adj_mat[:self.n_users, self.n_users:] = R

Is there any explanation for this?

feedback

Your code style is ugly, and the variable names are too difficult to understand.

Numpy not installed error when running main.py

I'm trying to run main.py as instructed in the README, but I keep getting the error below claiming that numpy/pandas is not installed, even though I have already installed the dependencies in the conda environment using requirements.txt.
numpy 1.22.0 is not actually available to me; I have numpy 1.21.5.

Is there something in the code that is causing this weird error?

(screenshot of the error omitted)

The loss results are not the same as yours

hi,

I ran LightGCN using the command you provided in the README file: 'cd code && python main.py --decay=1e-4 --lr=0.001 --layer=3 --seed=2020 --dataset="gowalla" --topks="[20]" --recdim=64'

However, my loss results at epochs 5 and 116 are not the same as yours.

The log of mine:
(deeplearning-pytorch) yandeMacBook-Pro:LightGCN-PyTorch-master yan$ cd code && python main.py --decay=1e-4 --lr=0.001 --layer=3 --seed=2020 --dataset="gowalla" --topks="[20]" --recdim=64
Cpp extension not loaded

SEED: 2020
loading [../data/gowalla]
810128 interactions for training
217242 interactions for testing
gowalla Sparsity : 0.0008396216228570436
gowalla is ready to go
===========config================
{'A_n_fold': 100,
'A_split': False,
'bigdata': False,
'bpr_batch_size': 2048,
'decay': 0.0001,
'dropout': 0,
'keep_prob': 0.6,
'latent_dim_rec': 64,
'lightGCN_n_layers': 3,
'lr': 0.001,
'multicore': 0,
'pretrain': 0,
'test_u_batch_size': 100}
cores for test: 6
comment: lgn
tensorboard: 1
LOAD: 0
Weight path: ./checkpoints
Test Topks: [20]
using bpr loss
===========end===================
use NORMAL distribution initilizer
loading adjacency matrix
successfully loaded...
don't split the matrix
lgn is already to go(dropout:0)
load and save to /Users/yan/PycharmProjects/LightGCN-PyTorch-master/code/checkpoints/lgn-gowalla-3-64.pth.tar
[TEST]
{'precision': array([0.00018755]), 'recall': array([0.00053749]), 'ndcg': array([0.00040836])}
EPOCH[1/1000] loss0.545-|Sample:10.23|
^Z
[1]+ Stopped python main.py --decay=1e-4 --lr=0.001 --layer=3 --seed=2020 --dataset="gowalla" --topks="[20]" --recdim=64
(deeplearning-pytorch) yandeMacBook-Pro:code yan$ cd code && python main.py --decay=1e-4 --lr=0.001 --layer=3 --seed=2020 --dataset="gowalla" --topks="[20]" --recdim=64
bash: cd: code: No such file or directory
(deeplearning-pytorch) yandeMacBook-Pro:code yan$ python main.py --decay=1e-4 --lr=0.001 --layer=3 --seed=2020 --dataset="gowalla" --topks="[20]" --recdim=64
xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun
Cpp extension not loaded
SEED: 2020
loading [../data/gowalla]
810128 interactions for training
217242 interactions for testing
gowalla Sparsity : 0.0008396216228570436
gowalla is ready to go
===========config================
{'A_n_fold': 100,
'A_split': False,
'bigdata': False,
'bpr_batch_size': 2048,
'decay': 0.0001,
'dropout': 0,
'keep_prob': 0.6,
'latent_dim_rec': 64,
'lightGCN_n_layers': 3,
'lr': 0.001,
'multicore': 0,
'pretrain': 0,
'test_u_batch_size': 100}
cores for test: 6
comment: lgn
tensorboard: 1
LOAD: 0
Weight path: ./checkpoints
Test Topks: [20]
using bpr loss
===========end===================
use NORMAL distribution initilizer
loading adjacency matrix
successfully loaded...
don't split the matrix
lgn is already to go(dropout:0)
load and save to /Users/yan/PycharmProjects/LightGCN-PyTorch-master/code/checkpoints/lgn-gowalla-3-64.pth.tar
[TEST]
{'precision': array([0.00018755]), 'recall': array([0.00053749]), 'ndcg': array([0.00040836])}
EPOCH[1/1000] loss0.545-|Sample:11.30|
EPOCH[2/1000] loss0.240-|Sample:9.95|
EPOCH[3/1000] loss0.163-|Sample:10.90|
EPOCH[4/1000] loss0.131-|Sample:9.84|
EPOCH[5/1000] loss0.112-|Sample:9.75|
EPOCH[6/1000] loss0.099-|Sample:9.67|
EPOCH[7/1000] loss0.090-|Sample:9.56|
EPOCH[8/1000] loss0.084-|Sample:9.70|
EPOCH[9/1000] loss0.078-|Sample:9.62|
EPOCH[10/1000] loss0.074-|Sample:9.80|
[TEST]
{'precision': array([0.03665852]), 'recall': array([0.12015017]), 'ndcg': array([0.10065857])}
EPOCH[11/1000] loss0.071-|Sample:9.76|
EPOCH[12/1000] loss0.068-|Sample:9.65|
EPOCH[13/1000] loss0.065-|Sample:9.86|
EPOCH[14/1000] loss0.064-|Sample:9.80|
EPOCH[15/1000] loss0.061-|Sample:9.60|
EPOCH[16/1000] loss0.059-|Sample:9.76|
EPOCH[17/1000] loss0.057-|Sample:9.61|
EPOCH[18/1000] loss0.055-|Sample:9.71|
EPOCH[19/1000] loss0.054-|Sample:9.69|
EPOCH[20/1000] loss0.052-|Sample:9.68|
[TEST]
{'precision': array([0.03968451]), 'recall': array([0.13136514]), 'ndcg': array([0.10890214])}
EPOCH[21/1000] loss0.052-|Sample:9.80|
EPOCH[22/1000] loss0.050-|Sample:9.57|
EPOCH[23/1000] loss0.049-|Sample:9.58|
EPOCH[24/1000] loss0.048-|Sample:9.65|
EPOCH[25/1000] loss0.047-|Sample:9.64|
EPOCH[26/1000] loss0.046-|Sample:9.71|
EPOCH[27/1000] loss0.045-|Sample:9.51|
EPOCH[28/1000] loss0.044-|Sample:9.67|
EPOCH[29/1000] loss0.043-|Sample:9.55|
EPOCH[30/1000] loss0.042-|Sample:9.68|
[TEST]
{'precision': array([0.04201554]), 'recall': array([0.13925258]), 'ndcg': array([0.1155325])}
EPOCH[31/1000] loss0.042-|Sample:9.78|
EPOCH[32/1000] loss0.041-|Sample:9.52|
EPOCH[33/1000] loss0.040-|Sample:9.69|
EPOCH[34/1000] loss0.039-|Sample:9.62|
EPOCH[35/1000] loss0.039-|Sample:9.78|
EPOCH[36/1000] loss0.038-|Sample:9.61|
EPOCH[37/1000] loss0.037-|Sample:9.61|
EPOCH[38/1000] loss0.037-|Sample:9.65|
EPOCH[39/1000] loss0.036-|Sample:9.71|
EPOCH[40/1000] loss0.036-|Sample:9.70|
[TEST]
{'precision': array([0.04349923]), 'recall': array([0.14439921]), 'ndcg': array([0.12029571])}
EPOCH[41/1000] loss0.035-|Sample:9.65|
EPOCH[42/1000] loss0.035-|Sample:9.66|
EPOCH[43/1000] loss0.034-|Sample:9.59|
EPOCH[44/1000] loss0.034-|Sample:9.80|
EPOCH[45/1000] loss0.033-|Sample:9.55|
EPOCH[46/1000] loss0.033-|Sample:9.63|
EPOCH[47/1000] loss0.032-|Sample:9.67|
EPOCH[48/1000] loss0.032-|Sample:9.68|
EPOCH[49/1000] loss0.032-|Sample:9.68|
EPOCH[50/1000] loss0.031-|Sample:9.54|
[TEST]
{'precision': array([0.04473173]), 'recall': array([0.14867354]), 'ndcg': array([0.1240188])}
EPOCH[51/1000] loss0.031-|Sample:9.90|
EPOCH[52/1000] loss0.030-|Sample:9.55|
EPOCH[53/1000] loss0.030-|Sample:9.66|
EPOCH[54/1000] loss0.030-|Sample:9.58|
EPOCH[55/1000] loss0.030-|Sample:9.71|
EPOCH[56/1000] loss0.029-|Sample:9.63|
EPOCH[57/1000] loss0.030-|Sample:9.71|
EPOCH[58/1000] loss0.028-|Sample:9.70|
EPOCH[59/1000] loss0.029-|Sample:9.51|
EPOCH[60/1000] loss0.028-|Sample:9.84|
[TEST]
{'precision': array([0.04583194]), 'recall': array([0.15272959]), 'ndcg': array([0.12772477])}
EPOCH[61/1000] loss0.028-|Sample:9.78|
EPOCH[62/1000] loss0.028-|Sample:9.89|
EPOCH[63/1000] loss0.027-|Sample:9.51|
EPOCH[64/1000] loss0.027-|Sample:9.66|
EPOCH[65/1000] loss0.027-|Sample:9.62|
EPOCH[66/1000] loss0.027-|Sample:9.57|
EPOCH[67/1000] loss0.026-|Sample:9.66|
EPOCH[68/1000] loss0.026-|Sample:9.48|
EPOCH[69/1000] loss0.026-|Sample:9.66|
EPOCH[70/1000] loss0.026-|Sample:9.59|
[TEST]
{'precision': array([0.04668598]), 'recall': array([0.15544668]), 'ndcg': array([0.13033168])}
EPOCH[71/1000] loss0.026-|Sample:9.80|
EPOCH[72/1000] loss0.025-|Sample:9.57|
EPOCH[73/1000] loss0.025-|Sample:9.68|
EPOCH[74/1000] loss0.025-|Sample:9.62|
EPOCH[75/1000] loss0.024-|Sample:9.68|
EPOCH[76/1000] loss0.024-|Sample:9.60|
EPOCH[77/1000] loss0.024-|Sample:9.53|
EPOCH[78/1000] loss0.023-|Sample:9.73|
EPOCH[79/1000] loss0.023-|Sample:9.55|
EPOCH[80/1000] loss0.023-|Sample:9.76|
[TEST]
{'precision': array([0.0476472]), 'recall': array([0.15882603]), 'ndcg': array([0.13296691])}
EPOCH[81/1000] loss0.023-|Sample:9.70|
EPOCH[82/1000] loss0.023-|Sample:9.70|
EPOCH[83/1000] loss0.023-|Sample:9.74|
EPOCH[84/1000] loss0.023-|Sample:9.70|
EPOCH[85/1000] loss0.023-|Sample:9.55|
EPOCH[86/1000] loss0.022-|Sample:9.67|
EPOCH[87/1000] loss0.022-|Sample:9.59|
EPOCH[88/1000] loss0.022-|Sample:9.79|
EPOCH[89/1000] loss0.022-|Sample:9.60|
EPOCH[90/1000] loss0.022-|Sample:9.64|
[TEST]
{'precision': array([0.04831536]), 'recall': array([0.16129594]), 'ndcg': array([0.13489544])}
EPOCH[91/1000] loss0.022-|Sample:9.84|
EPOCH[92/1000] loss0.021-|Sample:9.64|
EPOCH[93/1000] loss0.021-|Sample:9.52|
EPOCH[94/1000] loss0.021-|Sample:9.63|
EPOCH[95/1000] loss0.021-|Sample:9.58|
EPOCH[96/1000] loss0.021-|Sample:9.63|
EPOCH[97/1000] loss0.021-|Sample:9.48|
EPOCH[98/1000] loss0.021-|Sample:9.70|
EPOCH[99/1000] loss0.021-|Sample:9.55|
EPOCH[100/1000] loss0.021-|Sample:9.62|
[TEST]
{'precision': array([0.04904716]), 'recall': array([0.16339545]), 'ndcg': array([0.13703003])}
EPOCH[101/1000] loss0.020-|Sample:9.81|
EPOCH[102/1000] loss0.020-|Sample:9.93|
EPOCH[103/1000] loss0.020-|Sample:9.67|
EPOCH[104/1000] loss0.020-|Sample:9.55|
EPOCH[105/1000] loss0.020-|Sample:9.79|
EPOCH[106/1000] loss0.020-|Sample:9.56|
EPOCH[107/1000] loss0.020-|Sample:9.69|
EPOCH[108/1000] loss0.019-|Sample:9.65|
EPOCH[109/1000] loss0.019-|Sample:9.70|
EPOCH[110/1000] loss0.019-|Sample:9.69|
[TEST]
{'precision': array([0.04963829]), 'recall': array([0.16556552]), 'ndcg': array([0.13885787])}
EPOCH[111/1000] loss0.019-|Sample:9.71|
EPOCH[112/1000] loss0.019-|Sample:9.70|
EPOCH[113/1000] loss0.019-|Sample:9.61|
EPOCH[114/1000] loss0.019-|Sample:9.73|
EPOCH[115/1000] loss0.019-|Sample:9.61|
EPOCH[116/1000] loss0.019-|Sample:9.62|
EPOCH[117/1000] loss0.019-|Sample:9.68|
EPOCH[118/1000] loss0.018-|Sample:9.61|
EPOCH[119/1000] loss0.018-|Sample:9.62|
EPOCH[120/1000] loss0.018-|Sample:9.41|
[TEST]
{'precision': array([0.05002344]), 'recall': array([0.1664573]), 'ndcg': array([0.13995462])}
EPOCH[121/1000] loss0.018-|Sample:9.87|
EPOCH[122/1000] loss0.018-|Sample:9.54|
EPOCH[123/1000] loss0.018-|Sample:9.70|
EPOCH[124/1000] loss0.018-|Sample:9.57|
EPOCH[125/1000] loss0.018-|Sample:9.70|
EPOCH[126/1000] loss0.018-|Sample:9.67|
EPOCH[127/1000] loss0.018-|Sample:9.55|
EPOCH[128/1000] loss0.018-|Sample:9.63|
EPOCH[129/1000] loss0.018-|Sample:9.50|
EPOCH[130/1000] loss0.018-|Sample:9.69|
[TEST]
{'precision': array([0.05054257]), 'recall': array([0.16798868]), 'ndcg': array([0.14152368])}
EPOCH[131/1000] loss0.017-|Sample:9.71|
EPOCH[132/1000] loss0.017-|Sample:9.63|
EPOCH[133/1000] loss0.017-|Sample:9.55|
EPOCH[134/1000] loss0.017-|Sample:9.62|
EPOCH[135/1000] loss0.017-|Sample:9.67|
EPOCH[136/1000] loss0.017-|Sample:9.62|
EPOCH[137/1000] loss0.017-|Sample:9.68|
EPOCH[138/1000] loss0.017-|Sample:9.45|
EPOCH[139/1000] loss0.017-|Sample:9.64|
EPOCH[140/1000] loss0.017-|Sample:9.52|
...
