HOGA is an attention-based model for scalable and generalizable learning on circuits. By applying a novel gated attention module to hop-wise features, HOGA not only outperforms prior graph learning models on challenging circuit problems, but is also amenable to distributed training, since it avoids the communication overhead caused by graph dependencies. This makes HOGA applicable to industrial-scale circuit problems. More details are available in our paper.
Figure 1: An overview of HOGA and the gated attention module.
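To make the idea concrete, below is a minimal NumPy sketch of gated attention over hop-wise features: each node carries a stack of K feature vectors (one per hop of aggregation), self-attention mixes information across hops, and a learned sigmoid gate weights the attended hop features before they are summed into a per-node embedding. This is an illustrative sketch only; the function name, weight shapes, and the single-head formulation are assumptions, and the exact module is defined in the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hop_wise_gated_attention(X, Wq, Wk, Wv, Wg):
    """Sketch of gated attention over hop-wise features.

    X:  (num_nodes, K, d) — row k of each node holds features
        aggregated from its k-hop neighborhood (precomputed, so no
        message passing is needed at training time).
    Wq, Wk, Wv, Wg: (d, d) projection weights (hypothetical shapes).
    Returns: (num_nodes, d) per-node embeddings.
    """
    Q, K_, V = X @ Wq, X @ Wk, X @ Wv
    # Attention across the K hops, computed independently per node.
    scores = Q @ K_.transpose(0, 2, 1) / np.sqrt(Q.shape[-1])
    A = softmax(scores, axis=-1)          # (num_nodes, K, K)
    H = A @ V                             # attended hop features
    gate = 1.0 / (1.0 + np.exp(-(X @ Wg)))  # sigmoid gate per hop
    return (gate * H).sum(axis=1)         # gated summary per node
```

Because the hop-wise features are precomputed, each node is processed independently, which is what removes the cross-device graph dependencies during distributed training.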
Requirements:
- Python 3.9
- PyTorch 1.12 (CUDA 11.3)
- torch_geometric 2.1
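One possible way to set up the environment above (the environment name and the CUDA wheel suffix are assumptions; adjust them to your system):

```shell
# Hypothetical setup sketch for the listed requirements.
conda create -n hoga python=3.9 -y
conda activate hoga
# PyTorch 1.12 built against CUDA 11.3.
pip install torch==1.12.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
pip install torch_geometric==2.1.0
```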
CSA multiplier dataset (from Gamora): https://huggingface.co/datasets/yucx0626/Gamora-CSA-Multiplier/tree/main
Additional dataset: https://zenodo.org/records/6399454#.YkTglzwpA5k
More experiments and details will be provided soon.