google-deepmind / clrs
License: Apache License 2.0
Hi,
What is the reason you chose to use undirected graphs for the algorithms mentioned in the title? As far as I can see, they should all be able to support directed graphs as well.
Thanks!
Hi, I think there is a problem with generating undirected ER graphs. First, the probability of a non-self-loop edge is p^2, not p. In addition, the distribution of weights on undirected edges deviates slightly from uniform (it behaves like the geometric mean of two uniformly distributed variables).
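The probability issue is easy to see empirically. A minimal sketch, assuming the structure is symmetrized by an elementwise product of the sampled matrix and its transpose, as this issue describes (not necessarily the exact code path in the repo):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 0.5

# Sample a directed ER adjacency matrix with independent Bernoulli(p) entries.
mat = rng.binomial(1, p, size=(n, n))

# Symmetrizing with an elementwise product keeps an undirected edge only
# when BOTH directions were sampled, so the effective edge probability
# becomes p**2, not p.
sym = mat * mat.T

off_diag = ~np.eye(n, dtype=bool)
print(sym[off_diag].mean())  # empirically close to p**2 = 0.25, not p = 0.5
```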
Hi,
In the attached screenshot I have printed the first 30 hints (pred_h) for an insertion sort problem, and at the very bottom of the screenshot is the pred. I have some questions about this, please.
I also see similar repetition behaviour when generating BFS samples, so I have no doubt this is a misunderstanding on my end, but I would like to understand it better. Specifically, if I were to take the dataset and use it outside this context, how would I correctly extract the input/output pairs (without hints)?
Thank you!
I am using this code to generate the screenshot:
for i, feedback in enumerate(train_ds.as_numpy_iterator()):
    print(feedback.features.hints[2].data[:30])  # pred_h hint, first 30 steps
    print("------------")
    print("feedback key is ", feedback.features.inputs[0].data)
    print("feedback pos is ", feedback.features.inputs[1].data)
    print("outputs is ", feedback.outputs[0].data)
Thanks to the authors for constructing this benchmark.
I'm having trouble reproducing some of the test scores reported in the paper, in Table 2. Comparing my runs against the paper results (averaging across 3 seeds: 42, 43, and 44):
Graham Scan task:
MPNN: 0.6355 vs. 0.9104 published
PGN: 0.3622 vs. 0.5687 published
Binary Search task:
MPNN: 0.2026 vs. 0.3683 published
PGN: 0.4390 vs. 0.7695 published
Here are the values I used for my reproduction experiments:
Values for batch size, number of train items, learning rate, and hint teacher-forcing noise were taken from Sections 4.1 and 4.2 of the paper. Values for eval_every, dropout, use_ln, and use_lstm (which I could not find in the paper) were the defaults in the provided run file. Additionally, I used processor type "pgn_mask" for the PGN experiments.
What setting should I use to more accurately reproduce the paper results? Were there hyperparameter settings unspecified in the paper (or specified) that I am getting wrong?
Finally, I noticed the most recent commit fixes the axis for mean reduction in PGN. Would that cause PGN to perform differently than reported in the paper, and perhaps explain the discrepancy in my results?
Consider

import numpy as np
from clrs._src.algorithms import graphs

DIRECTED = np.array([
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
])
# out, probes = graphs.bfs(DIRECTED, 0)
out, probes = graphs.dfs(DIRECTED)
probes
# From probes, for both the dfs and bfs algorithms:
'output': {'node': {'pi': {'data': array([0, 0, 0, 1, 1, 2]),
'type_': 'pointer'}},
How can I evaluate my DL model on the BFS or DFS algorithms if both algorithms provide the same output in probes? Is this a mistake in how the algorithms are implemented in the CLRS benchmark, or am I missing something? Of course, there are hints (pi_h) that show the trajectory of the visited nodes, but the final outputs are still the same for BFS and DFS. What if I want to train a DL model without hints?
Proposed solutions:
Represent the outputs of bfs/dfs as the indexes of the visited nodes in order of visiting; for the example above this would be [0,1,2,3,4,5] for bfs and [0,1,3,4,2,5] for dfs, which uniquely identifies both algorithms.
Looking at the insertion sort example in https://arxiv.org/pdf/2205.15659, I can see you want the input and output of the DL model to be graphs of the same size and use edges to indicate the algorithm's output. So why not use pointers to the previously visited node? If a node has no previously visited node, keep the node's own index; then:
bfs: [0,0,0,1,1,2]
dfs: [0,0,1,3,4,2]
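To illustrate that the visit orders really do differ on the example above, here is a plain-Python sketch (my own reference implementations of the traversals, not the CLRS ones):

```python
from collections import deque
import numpy as np

A = np.array([
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
])

def bfs_order(A, s=0):
    """Order in which BFS from s discovers nodes."""
    seen, order, queue = {s}, [], deque([s])
    while queue:
        u = queue.popleft()
        order.append(int(u))
        for v in np.nonzero(A[u])[0]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

def dfs_order(A, s=0):
    """Order in which DFS from s discovers nodes."""
    seen, order = set(), []
    def visit(u):
        seen.add(u)
        order.append(int(u))
        for v in np.nonzero(A[u])[0]:
            if v not in seen:
                visit(v)
    visit(s)
    return order

print(bfs_order(A))  # [0, 1, 2, 3, 4, 5]
print(dfs_order(A))  # [0, 1, 3, 4, 2, 5]
```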
PS. It took me some time to figure out the meaning of the output for the BFS/DFS algorithms. Now I know these are the predecessor nodes in the graph, but only for nodes that were visited during the algorithm run; if a node was not visited, it points to itself (what motivates this representation?). This is clear when looking at the hints for pi:
'pi_h': {'data': array([[0, 1, 2, 3, 4, 5],
[0, 0, 0, 3, 4, 5],
[0, 0, 0, 1, 1, 2]]),
How about adding better documentation for the bfs/dfs algorithms to explain what the output is? Currently there is no doc at all in the code: https://github.com/google-deepmind/clrs/blob/master/clrs/_src/algorithms/graphs_test.py
Hi, in some parts of the code you increase the input signals for a few algorithms for evaluation. But the generated dataset on Google storage does not seem to contain them (i.e., it contains only 32 trajectories of each). Will this change be reflected in future versions?
Hi,
thank you for the quite extensive work in putting CLRS together.
Do you have a reference or reasoning about which hints you included? For example, can you elaborate on why you included the hint A_t in strongly connected components?
Thanks!
Hi, I think there is a bug in the KMP implementation. It does not find a match for some inputs in the dataset (i.e., it outputs the end of the string).
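One way to triage this kind of bug is to cross-check the benchmark's matcher output against a trivially correct reference. A sketch; the convention of returning len(text) when there is no match is taken from the failure mode described above, and the helper name is mine:

```python
def reference_match(text: str, pattern: str) -> int:
    """Index of the first occurrence of pattern in text, or len(text)
    if there is no match (mirroring the "end of the string" output
    the issue describes)."""
    idx = text.find(pattern)
    return idx if idx != -1 else len(text)

print(reference_match("ababcab", "abc"))  # 2: a genuine match exists
print(reference_match("aaaa", "b"))       # 4: no match -> end of string
```

Comparing this against the CLRS matcher on the failing dataset inputs should pinpoint which trajectories are mishandled.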
Is it possible to add Tarjan's strongly connected component algorithm?
Thank you for the work and making the source public!
I just wanted to check whether the package version on PyPI has been updated yet.
After digging for some time for research purposes, I found some inconsistencies in some methods.
For example, functionalities in 2df2d48 are either not available or have different usage/implementations in the PyPI version.
Is there a plan to update the PyPI package to make it consistent? Looking forward to hearing from you!
Any comments on using 'tensorflow-macos' and 'tensorflow-metal' in the 'clrs' ecosystem?
I was able to install tensorflow-macos and tensorflow-metal in the clrs virtual environment.
My AMD GPU is being recognized, but clrs is looking for: 'tpu_driver', 'cuda', 'tpu'.
Any ideas?
% python3 -m clrs.examples.run
I0605 17:03:39.042836 4560762368 run.py:196] Using CLRS30 spec: {'train': {'num_samples': 1000, 'length': 16, 'seed': 1}, 'val': {'num_samples': 32, 'length': 16, 'seed': 2}, 'test': {'num_samples': 32, 'length': 64, 'seed': 3}}
I0605 17:03:39.044355 4560762368 run.py:180] Dataset found at /tmp/CLRS30/CLRS30_v1.0.0. Skipping download.
Metal device set to: AMD Radeon Pro 5700 XT
systemMemory: 128.00 GB
maxCacheSize: 7.99 GB
...
devices = jax.devices()
logging.info(devices)
I0605 17:03:40.532486 4560762368 xla_bridge.py:330] Unable to initialize backend 'tpu_driver': NOT_FOUND: Unable to find driver in registry given worker:
I0605 17:03:40.534332 4560762368 xla_bridge.py:330] Unable to initialize backend 'gpu': NOT_FOUND: Could not find registered platform with name: "cuda". Available platform names are: Interpreter Host
I0605 17:03:40.536365 4560762368 xla_bridge.py:330] Unable to initialize backend 'tpu': INVALID_ARGUMENT: TpuPlatform is not available.
W0605 17:03:40.536445 4560762368 xla_bridge.py:335] No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
I installed the required libraries with pip install -r requirements.txt. CUDA works well and the GPU is found by TensorFlow. However, when I try to run the code, an error occurs:
"Unable to initialize backend 'cuda'": module 'jaxlib.xla_extension' has no attribute 'GpuAllocatorConfig'
I searched on the web and found it might be caused by the version of the libraries.
Could you please share the versions of those packages you used with me? Thank you very much.
import numpy as np
from clrs._src.algorithms import graphs
DIRECTED = np.array(
[
[0, 1, 0, 1, 0, 0],
[0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 1, 1],
[0, 1, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 1],
]
)
out, _ = graphs.dfs(DIRECTED)
print(out) # [0 0 2 4 1 2]
What does this output mean?
If DFS can produce multiple valid results, how do I judge whether a given result is right or wrong?
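As I understand it, the output is a predecessor (parent) pointer per node, with tree roots pointing to themselves. A partial answer to the "right or wrong" question is that any valid parent pointer must follow an actual edge of the graph. A small sanity check on the snippet's own values (a sketch, using the matrix and output printed above):

```python
import numpy as np

DIRECTED = np.array([
    [0, 1, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 1],
])
pi = [0, 0, 2, 4, 1, 2]  # the printed dfs output

for v, p in enumerate(pi):
    if p == v:
        print(f"node {v}: root of its DFS tree (points to itself)")
    else:
        # A valid parent pointer must follow an actual edge p -> v.
        assert DIRECTED[p, v] == 1
        print(f"node {v}: reached via edge {p} -> {v}")
```

This only checks edge validity; distinguishing between the multiple legal DFS forests would additionally require fixing the neighbor visiting order (CLRS visits neighbors in index order, making the result deterministic).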
Thank you for the benchmark on neural algorithmic reasoning. I love the throwback to the classical CLRS textbook!
I remember bumping into the PDF of the paper online, but I cannot seem to access it anymore. Will the paper associated with this repo be available soon?
Hi, I think there is an issue in sampling the undirected weighted graphs. The sampled graph is first symmetrized and then weights are sampled, which makes the weights of each direction different. For algorithms that are capable of handling directed graphs, this might not cause any disruption in algorithm behavior. But, for the ones that output undirected edges (e.g., MST Kruskal), this would not be the true behavior of the algorithm.
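The asymmetry is easy to demonstrate, along with one possible fix. A minimal sketch, assuming the order of operations described above (structure symmetrized first, weights sampled afterwards); this is illustrative, not the repo's actual sampler code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# A symmetric edge structure (symmetrized first, as the issue describes).
edges = rng.binomial(1, 0.5, size=(n, n))
edges = np.maximum(edges, edges.T)
edges[0, 1] = edges[1, 0] = 1  # force one undirected edge for the demo

# Sampling weights AFTER symmetrizing gives the two directions of an
# undirected edge independent, hence different, weights:
w = rng.uniform(size=(n, n)) * edges
print(w[0, 1] == w[1, 0])  # False: the "undirected" edge disagrees with itself

# One way to get truly undirected weights: sample an upper triangle and mirror.
upper = np.triu(rng.uniform(size=(n, n)), k=1)
w_sym = (upper + upper.T) * edges
print(np.allclose(w_sym, w_sym.T))  # True
```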