
ys-zong commented on June 27, 2024

Hi,

Thanks for your interest. Can you tell me which dataset you are using, and what dimension you have changed it to? It looks a bit strange that the ARI after major training is even lower than after pretraining.


frickyinn commented on June 27, 2024

I was using spatialLIBD/151673, just as conST_cluster.ipynb does. I cropped the tissue image myself according to the spatial positions and extracted the features with run_mae_extract_feature.py, roughly as sketched below.
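Roughly, the cropping step looked like this (a minimal sketch; the paths, patch size, and coordinate source are placeholders rather than the exact inputs of run_mae_extract_feature.py):

import os
import scanpy as sc
from PIL import Image

Image.MAX_IMAGE_PIXELS = None                        # full-resolution histology images are large
adata = sc.read_visium("spatialLIBD/151673")         # hypothetical path to the Visium folder
img = Image.open("151673_full_image.tif")            # hypothetical full-resolution image
patch_size = 112                                     # assumed patch size
os.makedirs("patches", exist_ok=True)

for i, (x, y) in enumerate(adata.obsm["spatial"]):   # (x, y) pixel position of each spot
    half = patch_size // 2
    patch = img.crop((int(x) - half, int(y) - half, int(x) + half, int(y) + half))
    patch.save(f"patches/{i}.png")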

Because I tried to train the model with MAE features, the input dimensions of several layers differ from those used when training on gene expression alone, including:

  • self.latent_dim
  • self.cluster_layer
  • self.fc1
  • self.fc2
  • self.disc_c
  • self.disc

For details, please see: compare. Basically, this is because z = torch.cat((feat_x, gnn_z), 1) changes to z = torch.cat((feat_x, gnn_z, feat_img), 1) when the image is used, and the dimensions of the following layers change accordingly.
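To make the dimension change concrete, here is a small standalone sketch (the sizes below are made up, not conST's actual dimensions):

import torch
import torch.nn as nn

feat_dim, gnn_dim, img_dim = 20, 8, 748           # hypothetical sizes of feat_x, gnn_z, feat_img
feat_x = torch.randn(4, feat_dim)
gnn_z = torch.randn(4, gnn_dim)
feat_img = torch.randn(4, img_dim)

z_no_img = torch.cat((feat_x, gnn_z), 1)          # latent_dim = feat_dim + gnn_dim
z_img = torch.cat((feat_x, gnn_z, feat_img), 1)   # latent_dim = feat_dim + gnn_dim + img_dim

# Every layer that consumes z (e.g. self.fc1, self.disc) must be built with the enlarged size:
fc1 = nn.Linear(z_img.shape[1], 64)
print(z_no_img.shape, z_img.shape, fc1(z_img).shape)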

Instead of using eval_resolution = res_search_fixed_clus(adata_conST, n_clusters) to detect the best resolution in every epoch, I kept the resolution fixed at 1, so the results were a little lower.
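For context, a resolution search like res_search_fixed_clus does roughly the following (a minimal sketch assuming scanpy's Leiden clustering, not the repo's exact implementation):

import scanpy as sc

def search_res_for_n_clusters(adata, n_clusters, start=0.1, stop=2.0, step=0.05):
    # Assumes sc.pp.neighbors(adata) has already been run on the latent embedding.
    res = start
    while res <= stop:
        sc.tl.leiden(adata, resolution=res, key_added="leiden_tmp")
        if adata.obs["leiden_tmp"].nunique() == n_clusters:
            return res
        res += step
    return None  # no resolution in the scanned range yields n_clusters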
But the problem of a lower ARI after major training does exist. When I ran conST_cluster.ipynb with use_pretrained=False and commented out the major training line:

conST_net.pretraining()
# conST_net.major_training()

to compare the results, I found that pretraining alone gave an ARI of 0.499, which was higher than the 0.439 obtained with both pretraining and major training.

I am wondering if I have set something wrong.

Thank you!


ys-zong commented on June 27, 2024

It's a bit difficult for me to tell from this code directly, but I suggest trying to debug it in two steps.

First, can you obtain similar results without using image features? For the spatialLIBD/151673 data, there is a slight improvement when using image features from MAE, but not much, because, as you can see from the histology images, the image patches look similar across spots. So, if you can't obtain similar results in this step, there is probably something wrong with your initial settings, environment, etc. From your result, I suspect this step is where the problem lies, because I have not come across a situation where performance after major training is worse than after pretraining.

If the first step is okay, then you can try reducing the input dimension of the image features from 748 to a lower dimension, e.g. 100 with PCA (or even smaller), and see how the performance goes. By reducing the proportion of image features, you can check whether the image features are being successfully extracted by MAE.
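For example, a minimal sketch of this reduction (assuming the MAE features are available as a NumPy array; the file name is a placeholder):

import numpy as np
from sklearn.decomposition import PCA

feat_img = np.load("mae_features.npy")         # hypothetical file holding the (n_spots, 748) MAE features
pca = PCA(n_components=100)                    # try 100, or even smaller
feat_img_pca = pca.fit_transform(feat_img)     # -> (n_spots, 100), fed to the model instead of feat_img
print(feat_img.shape, "->", feat_img_pca.shape)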

P.S. Note that you may not get exactly the same results every time you run the experiment, due to the CUDA non-deterministic behavior of PyTorch Geometric.
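If you want to reduce (though not fully eliminate) that run-to-run variation, the usual seeding calls look like this (a generic sketch, not an option conST exposes):

import random
import numpy as np
import torch

def seed_everything(seed=0):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True   # CuDNN determinism; some PyG scatter ops remain non-deterministic
    torch.backends.cudnn.benchmark = False

seed_everything(42)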

