
clrs's Introduction

The CLRS Algorithmic Reasoning Benchmark

Learning representations of algorithms is an emerging area of machine learning, seeking to bridge concepts from neural networks with classical algorithms. The CLRS Algorithmic Reasoning Benchmark (CLRS) consolidates and extends previous work toward evaluating algorithmic reasoning by providing a suite of implementations of classical algorithms. These algorithms have been selected from the third edition of the standard Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein.

Getting started

The CLRS Algorithmic Reasoning Benchmark can be installed with pip, either from PyPI:

pip install dm-clrs

or directly from GitHub (updated more frequently):

pip install git+https://github.com/deepmind/clrs.git

You may prefer to install it in a virtual environment if any requirements clash with your Python installation:

python3 -m venv clrs_env
source clrs_env/bin/activate
pip install git+https://github.com/deepmind/clrs.git

Once installed you can run our example baseline model:

python3 -m clrs.examples.run

If this is the first run of the example, the dataset will be downloaded and stored in --dataset_path (default '/tmp/CLRS30'). Alternatively, you can download and extract https://storage.googleapis.com/dm-clrs/CLRS30_v1.0.0.tar.gz yourself.
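For example, a manual download and extraction could look like this (the target directory is an assumption; point --dataset_path at wherever you extract it):

mkdir -p /tmp/CLRS30
curl -O https://storage.googleapis.com/dm-clrs/CLRS30_v1.0.0.tar.gz
tar -xzf CLRS30_v1.0.0.tar.gz -C /tmp/CLRS30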

Algorithms as graphs

CLRS implements the selected algorithms in an idiomatic way, aligning as closely as possible with the original CLRS 3ed pseudocode. By controlling the input data distribution to conform to the preconditions, we are able to automatically generate input/output pairs. We additionally provide trajectories of "hints" that expose the internal state of each algorithm, both to optionally simplify the learning challenge and to distinguish between different algorithms that solve the same overall task (e.g. sorting).

In the most generic sense, algorithms can be seen as manipulating sets of objects, along with any relations between them (which can themselves be decomposed into binary relations). Accordingly, we study all of the algorithms in this benchmark using a graph representation. In the event that objects obey a stricter ordered structure (e.g. arrays or rooted trees), we impose this ordering through the inclusion of predecessor links.

How it works

For each algorithm, we provide a canonical set of train, eval and test trajectories for benchmarking out-of-distribution generalization.

Split   Trajectories      Problem Size
Train   1000              16
Eval    32 x multiplier   16
Test    32 x multiplier   64

Here, "problem size" refers to e.g. the length of an array or number of nodes in a graph, depending on the algorithm. "multiplier" is an algorithm-specific factor that increases the number of available eval and test trajectories to compensate for paucity of evaluation signals. "multiplier" is 1 for all algorithms except:

  • Maximum subarray (Kadane), for which "multiplier" is 32.
  • Quick select, minimum, binary search, string matchers (both naive and KMP), and segment intersection, for which "multiplier" is 64.

The trajectories can be used like so:

train_ds, num_samples, spec = clrs.create_dataset(
      folder='/tmp/CLRS30', algorithm='bfs',
      split='train', batch_size=32)

for i, feedback in enumerate(train_ds.as_numpy_iterator()):
  if i == 0:
    model.init(feedback.features, initial_seed)
  loss = model.feedback(rng_key, feedback)

Here, feedback is a namedtuple with the following structure:

Feedback = collections.namedtuple('Feedback', ['features', 'outputs'])
Features = collections.namedtuple('Features', ['inputs', 'hints', 'lengths'])

where the content of Features can be used for training and outputs is reserved for evaluation. Each field of the tuple is an ndarray with a leading batch dimension. Because hints are provided for the full algorithm trajectory, these contain an additional time dimension padded up to the maximum length max(T) of any trajectory within the dataset. The lengths field specifies the true length t <= max(T) for each trajectory, which can be used e.g. for loss masking.
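As a minimal sketch (not part of the library) of how lengths could be used to mask a per-step hint loss, assuming time-major hint arrays of shape max(T) x batch_size x ...:

import jax.numpy as jnp

def valid_step_mask(lengths, max_t):
  # True for time steps that fall within each trajectory's true length.
  steps = jnp.arange(max_t)[:, None]   # (max_t, 1)
  return steps < lengths[None, :]      # (max_t, batch_size)

# e.g. average a per-step hint loss over valid steps only:
# mask = valid_step_mask(feedback.features.lengths, per_step_loss.shape[0])
# loss = jnp.sum(per_step_loss * mask) / jnp.maximum(jnp.sum(mask), 1.0)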

The examples directory contains a full working Graph Neural Network (GNN) example using JAX and the DeepMind JAX Ecosystem of libraries. It allows training of multiple algorithms on a single processor, as described in "A Generalist Neural Algorithmic Learner".

What we provide

Algorithms

Our initial CLRS-30 benchmark includes the following 30 algorithms. We aim to support more algorithms in the future.

  • Sorting
    • Insertion sort
    • Bubble sort
    • Heapsort (Williams, 1964)
    • Quicksort (Hoare, 1962)
  • Searching
    • Minimum
    • Binary search
    • Quickselect (Hoare, 1961)
  • Divide and conquer
    • Maximum subarray (Kadane's variant) (Bentley, 1984)
  • Greedy
    • Activity selection (Gavril, 1972)
    • Task scheduling (Lawler, 1985)
  • Dynamic programming
    • Matrix chain multiplication
    • Longest common subsequence
    • Optimal binary search tree (Aho et al., 1974)
  • Graphs
    • Depth-first search (Moore, 1959)
    • Breadth-first search (Moore, 1959)
    • Topological sorting (Knuth, 1973)
    • Articulation points
    • Bridges
    • Kosaraju's strongly connected components algorithm (Aho et al., 1974)
    • Kruskal's minimum spanning tree algorithm (Kruskal, 1956)
    • Prim's minimum spanning tree algorithm (Prim, 1957)
    • Bellman-Ford algorithm for single-source shortest paths (Bellman, 1958)
    • Dijkstra's algorithm for single-source shortest paths (Dijkstra, 1959)
    • Directed acyclic graph single-source shortest paths
    • Floyd-Warshall algorithm for all-pairs shortest-paths (Floyd, 1962)
  • Strings
    • Naïve string matching
    • Knuth-Morris-Pratt (KMP) string matcher (Knuth et al., 1977)
  • Geometry
    • Segment intersection
    • Graham scan convex hull algorithm (Graham, 1972)
    • Jarvis' march convex hull algorithm (Jarvis, 1973)

Baselines

Models consist of a processor and a number of encoders and decoders. We provide JAX implementations of the following GNN baseline processors:

  • Deep Sets (Zaheer et al., NIPS 2017)
  • End-to-End Memory Networks (Sukhbaatar et al., NIPS 2015)
  • Graph Attention Networks (Veličković et al., ICLR 2018)
  • Graph Attention Networks v2 (Brody et al., ICLR 2022)
  • Message-Passing Neural Networks (Gilmer et al., ICML 2017)
  • Pointer Graph Networks (Veličković et al., NeurIPS 2020)

If you want to implement a new processor, the easiest way is to add it in the processors.py file and make it available through the get_processor_factory method there. A processor should have a __call__ method like this:

__call__(self,
         node_fts, edge_fts, graph_fts,
         adj_mat, hidden,
         nb_nodes, batch_size)

where node_fts, edge_fts and graph_fts are float arrays of shape batch_size x nb_nodes x H, batch_size x nb_nodes x nb_nodes x H, and batch_size x H, holding the encoded features for nodes, edges and the graph respectively; adj_mat is a batch_size x nb_nodes x nb_nodes boolean array of connectivity built from hints and inputs; and hidden is a batch_size x nb_nodes x H float array with the previous-step outputs of the processor. The method should return a batch_size x nb_nodes x H float array.
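As an illustration only (this is a sketch, not one of the provided baselines), a minimal Haiku module respecting this interface could look as follows; the class name, layer sizes and ReLU nonlinearity are arbitrary choices:

import haiku as hk
import jax
import jax.numpy as jnp

class MaskedMeanProcessor(hk.Module):
  """Sketch of a processor: masked mean aggregation over neighbours."""

  def __init__(self, out_size, name='masked_mean_processor'):
    super().__init__(name=name)
    self.out_size = out_size

  def __call__(self, node_fts, edge_fts, graph_fts,
               adj_mat, hidden, nb_nodes, batch_size):
    # Combine current node features with the previous processor state.
    z = jnp.concatenate([node_fts, hidden], axis=-1)             # B x N x 2H
    # Messages from each neighbour, modulated by edge features.
    msgs = (hk.Linear(self.out_size)(z)[:, None, :, :] +
            hk.Linear(self.out_size)(edge_fts))                  # B x N x N x H'
    # Zero out non-edges and mean-aggregate over neighbours.
    mask = adj_mat[..., None].astype(msgs.dtype)                 # B x N x N x 1
    agg = jnp.sum(msgs * mask, axis=2) / jnp.maximum(
        jnp.sum(mask, axis=2), 1.0)                              # B x N x H'
    # Mix in the graph-level features for every node.
    g = jnp.broadcast_to(graph_fts[:, None, :],
                         (batch_size, nb_nodes, graph_fts.shape[-1]))
    out = hk.Linear(self.out_size)(jnp.concatenate([z, agg, g], axis=-1))
    return jax.nn.relu(out)                                      # B x N x H'

Like any Haiku module, a processor written this way would have to be instantiated and called inside a function wrapped with hk.transform.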

For more fundamentally different baselines, it is necessary to create a new class that extends the Model API (as found within clrs/_src/model.py). clrs/_src/baselines.py provides one example of how this can be done.

Creating your own dataset

We provide a tensorflow_datasets (tfds) generator class in dataset.py. This file can be modified to generate different versions of the available algorithms, and it can be built by using tfds build after following the installation instructions at https://www.tensorflow.org/datasets.

Alternatively, you can generate samples without going through tfds by instantiating samplers with the build_sampler method in clrs/_src/samplers.py, like so:

sampler, spec = clrs.build_sampler(
    name='bfs',
    seed=42,
    num_samples=1000,
    length=16)

def _iterate_sampler(batch_size):
  while True:
    yield sampler.next(batch_size)

for feedback in _iterate_sampler(batch_size=32):
  ...

Adding new algorithms

Adding a new algorithm to the task suite requires the following steps:

  1. Determine the input/hint/output specification of your algorithm, and include it within the SPECS dictionary of clrs/_src/specs.py.
  2. Implement the desired algorithm in an abstractified form. Examples of this can be found throughout the clrs/_src/algorithms/ folder.
     • Choose appropriate moments within the algorithm's execution to create probes that capture the inputs, outputs and all intermediate state (using the probing.push function).
     • Once generated, probes must be formatted using the probing.finalize method, and should be returned together with the algorithm output.
  3. Implement an appropriate input data sampler for your algorithm, and include it in the SAMPLERS dictionary within clrs/_src/samplers.py.

Once the algorithm has been added in this way, it can be accessed with the build_sampler method, and will also be incorporated into the dataset if it is regenerated with the generator class in dataset.py, as described above.
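As a rough sketch of what such an abstractified algorithm can look like (the algorithm itself, the 'countdown' spec entry and the probe names are purely hypothetical, and the use of probing.initialize with a SPECS entry is an assumption modelled on the existing algorithms):

import numpy as np
from clrs._src import probing
from clrs._src import specs

def countdown(n):
  """Hypothetical algorithm that counts down from n, with probes attached."""
  # 'countdown' would need a matching entry in the SPECS dictionary.
  probes = probing.initialize(specs.SPECS['countdown'])

  # 1) Capture the inputs once, before execution begins.
  probing.push(probes, specs.Stage.INPUT, next_probe={'n': np.array(n)})

  cur = n
  while cur > 0:
    # 2) Capture intermediate state (hints) at each step of the algorithm.
    probing.push(probes, specs.Stage.HINT, next_probe={'cur': np.array(cur)})
    cur -= 1

  # 3) Capture the outputs, then finalize and return the probes.
  probing.push(probes, specs.Stage.OUTPUT, next_probe={'result': np.array(cur)})
  probing.finalize(probes)
  return cur, probes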

Citation

To cite the CLRS Algorithmic Reasoning Benchmark:

@article{deepmind2022clrs,
  title={The CLRS Algorithmic Reasoning Benchmark},
  author={Petar Veli\v{c}kovi\'{c} and Adri\`{a} Puigdom\`{e}nech Badia and
    David Budden and Razvan Pascanu and Andrea Banino and Misha Dashevskiy and
    Raia Hadsell and Charles Blundell},
  journal={arXiv preprint arXiv:2205.15659},
  year={2022}
}

clrs's People

Contributors

adria-p, avlife, dbudden, hawkinsp, hbq1, petarv-, rchen152, rerrayne, sigeisler, sinopalnikov, superbobry


clrs's Issues

Repetition of indexes in pred

Hi,

In the attached screenshot I have printed the first 30 hints (pred_h) for an insertion sort problem and at the very bottom of the screenshot is the pred. I have some questions about this please.

  1. Is there a reason pred_h starts with two 0 indexes and no 15 index?
  2. On line 16 of the pred_h's, the two 0's swap to being two 7's; is there a reason behind this behaviour?
  3. In the first 30 pred_h's there is no index 15, yet in pred there is an index 15; is there a reason for this?
  4. Here it states that pred should be a permutation, but the printed output is not a permutation as far as I can see, since it has two 7's. What does SHOULD_BE_PERMUTATION mean, please?

I also see similar repetition behaviour when generating BFS samples, so I have no doubt this is a misunderstanding on my end, but I would like to understand more about this. Specifically, if I were to take the dataset and use it outside this context, how would I extract the input/output pairs (without hints) correctly?

Thank you!

[screenshot: first 30 pred_h hints and the pred output]

I am using this code to generate the screenshot:

for i, feedback in enumerate(train_ds.as_numpy_iterator()):
    print(feedback.features.hints[2].data[:30])
    print("------------")
    print("feedback key is ", feedback.features.inputs[0].data)
    print("feedback pos is ", feedback.features.inputs[1].data)
    print("outputs is ", feedback.outputs[0].data)

What does DFS output result mean ?

import numpy as np

from clrs._src.algorithms import graphs
import networkx as nx

DIRECTED = np.array(
    [
        [0, 1, 0, 1, 0, 0],
        [0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 1, 1],
        [0, 1, 0, 0, 0, 0],
        [0, 0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0, 1],
    ]
)
out, _ = graphs.dfs(DIRECTED)
print(out)  # [0 0 2 4 1 2]

What does the result of executing the code output mean?
If there are multiple results in DFS, how to judge whether it is right or wrong?

Problems with jax

I installed the required libraries with pip install -r requirements.txt. CUDA works well and the GPU can be found by TensorFlow. However, when I try to run the code, an error occurs.

"Unable to initialize backend 'cuda'": module 'jaxlib.xla_extension' has no attribute 'GpuAllocatorConfig'

I searched on the web and found it might be caused by the version of the libraries.

Could you please share the versions of those packages you used with me? Thank you very much.

Issue with distribution of undirected graphs

Hi, I think there is a problem with generating undirected ER graphs. First, the probability of edges that are not a loop is p^2, not p. In addition, the distribution of weights on undirected edges is slightly different from the uniform one (as the geometric mean of two uniformly distributed variables).
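A quick numerical check of the reported effect, under the assumption that the sampler draws each direction independently with probability p and keeps an undirected edge only where both directions were drawn:

import numpy as np

rng = np.random.default_rng(0)
p, trials = 0.5, 200_000
hits = 0
for _ in range(trials):
  a = rng.random((2, 2)) < p   # directed ER adjacency for a 2-node graph
  sym = a & a.T                # symmetrize by intersection
  hits += int(sym[0, 1])
print(hits / trials)           # ~0.25 == p**2, rather than 0.5 == p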

Why the outputs of bfs and dfs algorithms are the same

Consider

DIRECTED = np.array([
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
])

#out, probes = graphs.bfs(DIRECTED, 0)
out, probes = graphs.dfs(DIRECTED)
probes
# From probes for both dfs and bfs algorithms:
'output': {'node': {'pi': {'data': array([0, 0, 0, 1, 1, 2]),
    'type_': 'pointer'}},

How can I evaluate my DL model on conducting the BFS or DFS algorithm if both algorithms provide the same output in probes? Is it a mistake in how these algorithms are implemented in the CLRS benchmark, or am I missing something? Of course, there are hints (pi_h) that show the trajectory of the visited nodes, but the final outputs are still the same for BFS and DFS. What if I want to train a DL model without hints?

Proposed solutions:

  1. Represent the outputs of bfs/dfs with the indexes of visited nodes in the order of visiting, so for the example above it would be: [0,1,2,3,4,5] for bfs and [0,1,3,4,2,5] for dfs - this would uniquely identify both algorithms.

  2. Looking at the insertion sort example in https://arxiv.org/pdf/2205.15659, I can see you want to keep the input and output of the DL model as a graph of the same size and use edges to indicate the output of an algorithm. So why not use the indexes of the previously visited predecessor nodes? If a node has no previously visited predecessor, keep the node's own index, giving:
    bfs: [0,0,0,1,1,2]
    dfs: [0,0,1,3,4,2]

PS. It took me some time to figure out the meaning of the output for the BFS/DFS algorithms. Now I know these are the predecessor nodes in the graph, but only for those nodes that were visited during the algorithm run; if a node was not visited, it points to itself (what motivates this representation?). This is clear when looking at the hints for pi:

'pi_h': {'data': array([[0, 1, 2, 3, 4, 5],
           [0, 0, 0, 3, 4, 5],
           [0, 0, 0, 1, 1, 2]]),

How about adding better documentation for the bfs/dfs algorithms to explain what the output is? Currently there is no documentation at all in the code: https://github.com/google-deepmind/clrs/blob/master/clrs/_src/algorithms/graphs_test.py

tensorflow-macos and tensorflow-metal

Any comments on using 'tensorflow-macos' and 'tensorflow-metal' in the 'clrs' ecosystem?

I was able to install tensorflow-macos and tensorflow-metal in the clrs virtual environment.
My AMD GPU is being recognized ... but 'clrs' is looking for: 'tpu_driver', 'cuda', 'tpu'.
Any ideas?

% python3 -m clrs.examples.run                                  
I0605 17:03:39.042836 4560762368 run.py:196] Using CLRS30 spec: {'train': {'num_samples': 1000, 'length': 16, 'seed': 1}, 'val': {'num_samples': 32, 'length': 16, 'seed': 2}, 'test': {'num_samples': 32, 'length': 64, 'seed': 3}}
I0605 17:03:39.044355 4560762368 run.py:180] Dataset found at /tmp/CLRS30/CLRS30_v1.0.0. Skipping download.
Metal device set to: AMD Radeon Pro 5700 XT

systemMemory: 128.00 GB
maxCacheSize: 7.99 GB
...
  devices = jax.devices()
  logging.info(devices)

I0605 17:03:40.532486 4560762368 xla_bridge.py:330] Unable to initialize backend 'tpu_driver': NOT_FOUND: Unable to find driver in registry given worker: 
I0605 17:03:40.534332 4560762368 xla_bridge.py:330] Unable to initialize backend 'gpu': NOT_FOUND: Could not find registered platform with name: "cuda". Available platform names are: Interpreter Host
I0605 17:03:40.536365 4560762368 xla_bridge.py:330] Unable to initialize backend 'tpu': INVALID_ARGUMENT: TpuPlatform is not available.
W0605 17:03:40.536445 4560762368 xla_bridge.py:335] No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)

google/jax#7163

Update of PyPI version

Thank you for the work and making the source public!

I just wanted to clarify whether the package version on PyPI has been updated yet.
After digging for some time for research purposes, I found some inconsistencies in some methods.
For example, functionalities in 2df2d48 either are not available or have different usage/implementations in the PyPI version.

Is there a plan to update the PyPI package to make it consistent? Looking forward to hearing from you!

More input signals for evaluation

Hi, in some parts of the code you increase input signals for a few algorithms for evaluation. But, the generated dataset on google storage seems to not contain them (i.e., it contains only 32 trajectories of each). Is the change going to be reflected in future versions?

Sampling bug on undirected weighted graphs

Hi, I think there is an issue in sampling the undirected weighted graphs. The sampled graph is first symmetrized and then weights are sampled, which makes the weights of each direction different. For algorithms that are capable of handling directed graphs, this might not cause any disruption in algorithm behavior. But, for the ones that output undirected edges (e.g., MST Kruskal), this would not be the true behavior of the algorithm.

Inability to reproduce paper results

Thanks to the authors for constructing this benchmark.

I'm having trouble reproducing some of the test scores reported in the paper, in Table 2. Comparing my runs against the paper results (averaging across 3 seeds: 42, 43, and 44):

Graham Scan task:
MPNN: 0.6355 vs. 0.9104 published
PGN: 0.3622 vs. 0.5687 published

Binary Search task:
MPNN: 0.2026 vs. 0.3683 published
PGN: 0.4390 vs. 0.7695 published

Here are the values I used for my reproduction experiments:
[screenshot: hyperparameter values used for the reproduction runs]

Values for batch size, train items, learning rate, and hint teacher forcing noise were obtained from sections 4.1 and 4.2 of the paper. Values for eval_every, dropout, use_ln, and use_lstm (which were not found in the paper) were the default values in the provided run file. Additionally, I used processor type "pgn_mask" for the PGN experiments.

What setting should I use to more accurately reproduce the paper results? Were there hyperparameter settings unspecified in the paper (or specified) that I am getting wrong?

Finally, I noticed the most recent commit, fixing the axis for mean reduction in PGN. Would that cause PGN to perform differently than reported in the paper, and perhaps explain the discrepancy in the results I obtained?

Hint `A_t` in SCC

Hi,

thank you for the quite extensive work in putting CLRS together.

Do you have a reference or reasoning about what hints you included? For example, can you elaborate on why you included hint A_t in strongly connected components?

Thanks!

Bug in KMP implementation

Hi, I think there is a bug in the KMP implementation. It does not find a match for some inputs in the dataset (i.e., outputs the end of the string).

Is the paper still available?

Thank you for the benchmark on neural algorithmic reasoning. I love the throwback to the classical CLRS textbook!

I remember bumping into the PDF of the paper online, but cannot seem to access it anymore. Will the paper associated with the repo be available again soon?
