
Comments (4)

lrjconan commented on May 23, 2024
  • use_eigen_decomp=False will use Lanczos; otherwise a standard eigen solver will be used. Sorry for not documenting this well. (A generic illustration of the difference is sketched after this list.)
  • Good to know that a larger k doesn't stabilize the model. The variance of the Lanczos algorithm is an interesting phenomenon; I will look into it once I have time.
  • Interesting to know that AdaLanczosNet works better; thanks for the information.
  • I updated the known issues of the citation networks with the correct paper link, in case you are interested.
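For readers unfamiliar with the distinction behind that flag, here is a generic, self-contained illustration (not the repository's code; the function names are SciPy's, not LanczosNet's) of a dense eigen solver versus an iterative Lanczos-type solver on the same normalized graph Laplacian. SciPy's eigsh wraps ARPACK, which uses a restarted Lanczos iteration.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    # Toy symmetric adjacency matrix: a ring of 6 nodes.
    N = 6
    adj = sp.lil_matrix((N, N))
    for i in range(N):
        adj[i, (i + 1) % N] = 1.0
        adj[(i + 1) % N, i] = 1.0
    adj = adj.tocsr()

    # Symmetrically normalized Laplacian: I - D^{-1/2} A D^{-1/2}.
    deg = np.asarray(adj.sum(axis=1)).flatten()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    lap = sp.eye(N) - d_inv_sqrt @ adj @ d_inv_sqrt

    # Analogue of use_eigen_decomp=True: dense, exact eigendecomposition.
    w_full, _ = np.linalg.eigh(lap.toarray())

    # Analogue of use_eigen_decomp=False: Lanczos-type solver for k pairs only.
    k = 3
    w_lanczos, _ = eigsh(lap, k=k, which='LM')

    print(np.sort(w_full)[-k:])   # three largest exact eigenvalues
    print(np.sort(w_lanczos))     # should match closely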


lrjconan commented on May 23, 2024

Thanks for your interest in our work!

Regarding your implementation, I think it is important to make sure:

  • When you call the get_graph_laplacian_eigs function, the argument graph_laplacian_type is set to L4, as it is empirically better than L1.
  • How do you handle the shape of L? Is it of shape 1 x N x N x 1?
  • One way to possibly reduce the variance of Lanczos is to run get_graph_laplacian_eigs with a slightly larger value of k, e.g., 40, and only use the first 20 (a sketch of this follows the list).
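A minimal sketch of that last suggestion, reusing the call felixgwu posts below. Note that the import location and the (D, V, adj) return convention are assumptions taken from this thread, not from the repository documentation.

    # from utils.data_helper import get_graph_laplacian_eigs  # assumed import path
    k_use = 20   # number of eigenpairs the model actually consumes
    k_run = 40   # run Lanczos with a larger k to reduce variance at the top

    D, V, adj = get_graph_laplacian_eigs(
        adj, k=k_run, graph_laplacian_type='L4',
        use_eigen_decomp=False, is_sym=True)

    # Keep only the first k_use eigenvalues/eigenvectors for the model.
    D = D[:k_use]
    V = V[:, :k_use]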

Also, as an FYI, there are some known issues with these small citation datasets, as they are fairly easy to overfit. Performance is typically very sensitive to hyperparameters like dropout. Therefore, they are not ideal and are less conclusive for benchmarking.

Let me know if this helps to solve your problem.


felixgwu commented on May 23, 2024

Thank you for your quick reply!

  • Yes, I use L4. I use this call to get the Laplacian; hopefully it's correct: D, V, adj = get_graph_laplacian_eigs(adj, k=20, graph_laplacian_type='L4', use_eigen_decomp=True, is_sym=True)
  • L is of shape 1 x N x N. Since there is only one type of edge, I just drop the last dimension and do state = torch.bmm(L, state) in my code (a toy version of this step is sketched after this list).
  • I'll try larger values. Thanks!
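For concreteness, a self-contained toy version of that propagation step, with the shapes described above; the tensor names and sizes here are illustrative, not the repository's.

    import torch

    N, d = 5, 16                  # toy graph size and feature dimension

    # Laplacian with a trailing edge-type dimension, as in 1 x N x N x 1.
    L = torch.randn(1, N, N, 1)
    state = torch.randn(1, N, d)  # node features, shape 1 x N x d

    # With a single edge type, drop the trailing dimension so L is 1 x N x N ...
    L = L.squeeze(-1)

    # ... and propagate node states with a batched matrix multiply.
    state = torch.bmm(L, state)   # result has shape 1 x N x d
    print(state.shape)            # torch.Size([1, 5, 16])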


felixgwu commented on May 23, 2024

I realized that I should use use_eigen_decomp=False when decomposing the Laplacian.
Unfortunately, using a larger k doesn't stabilize the model. I tried up to 200.
AdaLanczosNet also has the same problem.
However, I don't think this is a big deal as long as people select models based on the validation score.
Additionally, in my experiments, I find that AdaLanczosNet works even better than LanczosNet.
Thank you for sharing this great work!

