Privacy-Preserving-Computation-PaperList

An up-to-date list of papers and related material on privacy-preserving computation techniques.

Table of Contents

  • Survey
  • Code
  • Dataset
  • Federated Learning
  • Differential Privacy
  • Secure Multi-Party Computation
  • Trusted Execution Environment
  • Ethics
  • Fairness
  • Robustness
  • Security
  • Wellbeing

Survey

Code

Dataset

  • [Benchmarking Bias Mitigation Algorithms in Representation Learning through Fairness Metrics], NeurIPS Datasets and Benchmarks, 2021

Federated Learning

  • [Accelerated Federated Learning with Decoupled Adaptive Optimization], ICML, 2022
  • [Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization], ICML, 2020
  • [Acceleration of Federated Learning with Alleviated Forgetting in Local Training], ICLR, 2022
  • [Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning], ICLR, 2021
  • [Adaptive Federated Optimization], ICLR, 2021
  • [Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning], NeurIPS, 2021
  • [Agnostic Federated Learning], ICML, 2019
  • [An Agnostic Approach to Federated Learning with Class Imbalance], ICLR, 2022
  • [An Efficient Framework for Clustered Federated Learning], NeurIPS, 2020
  • [Analyzing Federated Learning through an Adversarial Lens], ICML, 2019
  • [Anarchic Federated Learning], ICML, 2022
  • [Architecture Agnostic Federated Learning for Neural Networks], ICML, 2022
  • [Attack of the Tails: Yes, You Really Can Backdoor Federated Learning], NeurIPS, 2020
  • [Bayesian Nonparametric Federated Learning of Neural Networks], ICML, 2019
  • [Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning], ICML, 2021
  • [Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization], ICML, 2022
  • [Blockchain-Based Federated Learning Against End-Point Adversarial Data Corruption], ICMLA, 2020
  • [Breaking the centralized barrier for cross-device federated learning], NeurIPS, 2021
  • [Catastrophic Data Leakage in Vertical Federated Learning], NeurIPS, 2021
  • [Clustered Sampling: Low-Variance and Improved Representativity for Clients Selection in Federated Learning], ICML, 2021
  • [Communication-Efficient Adaptive Federated Learning], ICML, 2022
  • [CRFL: Certifiably Robust Federated Learning against Backdoor Attacks], ICML, 2021
  • [DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning], ICML, 2022
  • [Data-Free Knowledge Distillation for Heterogeneous Federated Learning], ICML, 2021
  • [DBA: Distributed Backdoor Attacks against Federated Learning], ICLR, 2020
  • [Debiasing Model Updates for Improving Personalized Federated Training], ICML, 2021
  • [Deep Neural Network Fusion via Graph Matching with Applications to Model Ensemble and Federated Learning], ICML, 2022
  • [DeepReduce: A Sparse-tensor Communication Framework for Federated Deep Learning], NeurIPS, 2021
  • [Delayed Gradient Averaging: Tolerate the Communication Latency for Federated Learning], NeurIPS, 2021
  • [Differentially Private Federated Bayesian Optimization with Distributed Exploration], NeurIPS, 2021
  • [Differentially-Private Federated Linear Bandits], NeurIPS, 2020
  • [Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring], ICML, 2022
  • [DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training], ICML, 2022
  • [Distributionally Robust Federated Averaging], NeurIPS, 2020
  • [Ditto: Fair and Robust Federated Learning Through Personalization], ICML, 2021
  • [Diurnal or Nocturnal? Federated Learning of Multi-branch Networks from Periodically Shifting Distributions], ICLR, 2022
  • [Divergence-aware Federated Self-Supervised Learning], ICLR, 2022
  • [Diverse Client Selection for Federated Learning via Submodular Maximization], ICLR, 2022
  • [EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning], ICML, 2022
  • [Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization], ICLR, 2022
  • [Ensemble Distillation for Robust Model Fusion in Federated Learning], NeurIPS, 2020
  • [Evaluating Gradient Inversion Attacks and Defenses in Federated Learning], NeurIPS, 2021
  • [Exploiting Shared Representations for Personalized Federated Learning], ICML, 2021
  • [Fair Resource Allocation in Federated Learning], ICLR, 2020
  • [Fast Composite Optimization and Statistical Recovery in Federated Learning], ICML, 2022
  • [Fast Federated Learning in the Presence of Arbitrary Device Unavailability], NeurIPS, 2021
  • [Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee], NeurIPS, 2021
  • [FedAR: Activity and Resource-Aware Federated Learning Model for Distributed Mobile Robots], ICMLA, 2020
  • [FedBABU: Toward Enhanced Representation for Federated Image Classification], ICLR, 2022
  • [FedBE: Making Bayesian Model Ensemble Applicable to Federated Learning], ICLR, 2021
  • [FedBN: Federated Learning on Non-IID Features via Local Batch Normalization], ICLR, 2021
  • [FedBoost: A Communication-Efficient Algorithm for Federated Learning], ICML, 2020
  • [FedChain: Chained Algorithms for Near-optimal Communication Cost in Federated Learning], ICLR, 2022
  • [FedDR - Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization], NeurIPS, 2021
  • [Federated Accelerated Stochastic Gradient Descent], NeurIPS, 2020
  • [Federated Adversarial Domain Adaptation], ICLR, 2020
  • [Federated Bayesian Optimization via Thompson Sampling], NeurIPS, 2020
  • [Federated Composite Optimization], ICML, 2021
  • [Federated Continual Learning with Weighted Inter-client Transfer], ICML, 2021
  • [Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity], ICML, 2021
  • [Federated Graph Classification over Non-IID Graphs], NeurIPS, 2021
  • [Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing], NeurIPS, 2021
  • [Federated Learning Based on Dynamic Regularization], ICLR, 2021
  • [Federated Learning from Only Unlabeled Data with Class-conditional-sharing Clients], ICLR, 2022
  • [Federated Learning of Neural Network Models with Heterogeneous Structures], ICMLA, 2020
  • [Federated Learning of User Verification Models Without Sharing Embeddings], ICML, 2021
  • [Federated Learning under Arbitrary Communication Patterns], ICML, 2021
  • [Federated Learning via Posterior Averaging: A New Perspective and Practical Algorithms], ICLR, 2021
  • [Federated Learning with Label Distribution Skew via Logits Calibration], ICML, 2022
  • [Federated Learning with Matched Averaging], ICLR, 2020
  • [Federated Learning with Only Positive Labels], ICML, 2020
  • [Federated Learning with Partial Model Personalization], ICML, 2022
  • [Federated Learning with Positive and Unlabeled Data], ICML, 2022
  • [Federated Linear Contextual Bandits], NeurIPS, 2021
  • [Federated Minimax Optimization: Improved Convergence Analyses and Algorithms], ICML, 2022
  • [Federated Multi-Task Learning under a Mixture of Distributions], NeurIPS, 2021
  • [Federated Principal Component Analysis], NeurIPS, 2020
  • [Federated Reconstruction: Partially Local Federated Learning], NeurIPS, 2021
  • [Federated Reinforcement Learning: Linear Speedup Under Markovian Sampling], ICML, 2022
  • [Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning], ICLR, 2021
  • [Federated Split Task-Agnostic Vision Transformer for COVID-19 CXR Diagnosis], NeurIPS, 2021
  • [Federated-EM with heterogeneity mitigation and variance reduction], NeurIPS, 2021
  • [FedMix: Approximation of Mixup under Mean Augmented Federated Learning], ICLR, 2021
  • [FedNest: Federated Bilevel, Minimax, and Compositional Optimization], ICML, 2022
  • [FedNew: A Communication-Efficient and Privacy-Preserving Newton-Type Method for Federated Learning], ICML, 2022
  • [FedNL: Making Newton-Type Methods Applicable to Federated Learning], ICML, 2022
  • [FedPara: Low-rank Hadamard Product for Communication-Efficient Federated Learning], ICLR, 2022
  • [FedScale: Benchmarking Model and System Performance of Federated Learning at Scale], ICML, 2022
  • [FedSplit: an algorithmic framework for fast federated optimization], NeurIPS, 2020
  • [FetchSGD: Communication-Efficient Federated Learning with Sketching], ICML, 2020
  • [Few-Round Learning for Federated Learning], NeurIPS, 2021
  • [Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification], ICML, 2022
  • [FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout], NeurIPS, 2021
  • [FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Analysis], ICML, 2021
  • [FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective], NeurIPS, 2021
  • [From Local SGD to Local Fixed-Point Methods for Federated Learning], ICML, 2020
  • [Generalized Federated Learning via Sharpness Aware Minimization], ICML, 2022
  • [Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix], ICML, 2021
  • [Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge], NeurIPS, 2020
  • [HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients], ICLR, 2021
  • [Heterogeneity for the Win: One-Shot Federated Clustering], ICML, 2021
  • [Hybrid Local SGD for Federated Learning with Heterogeneous Communications], ICLR, 2022
  • [Improving Availability Applying Intelligent Replication in Federated Cloud Storage Based on Log Analysis], ICMLSC, 2017
  • [Improving Federated Learning Face Recognition via Privacy-Agnostic Clusters], ICLR, 2022
  • [Inverting Gradients - How easy is it to break privacy in federated learning?], NeurIPS, 2020
  • [Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients], NeurIPS, 2021
  • [Lower Bounds and Optimal Algorithms for Personalized Federated Learning], NeurIPS, 2020
  • [Multi-Level Branched Regularization for Federated Learning], ICML, 2022
  • [Multi-query Optimization in Federated Databases Using Evolutionary Algorithm], ICMLA, 2015
  • [Neural Tangent Kernel Empowered Federated Learning], ICML, 2022
  • [Neurotoxin: Durable Backdoors in Federated Learning], ICML, 2022
  • [No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data], NeurIPS, 2021
  • [On Bridging Generic and Personalized Federated Learning for Image Classification], ICLR, 2022
  • [On Large-Cohort Training for Federated Learning], NeurIPS, 2021
  • [One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning], ICML, 2021
  • [Optimality and Stability in Federated Learning: A Game-theoretic Approach], NeurIPS, 2021
  • [Orchestra: Unsupervised Federated Learning via Globally Consistent Clustering], ICML, 2022
  • [Parameterized Knowledge Transfer for Personalized Federated Learning], NeurIPS, 2021
  • [PartialFed: Cross-Domain Personalized Federated Learning via Partial Initialization], NeurIPS, 2021
  • [Personalization Improves Privacy-Accuracy Tradeoffs in Federated Learning], ICML, 2022
  • [Personalized Federated Learning through Local Memorization], ICML, 2022
  • [Personalized Federated Learning using Hypernetworks], ICML, 2021
  • [Personalized Federated Learning via Variational Bayesian Inference], ICML, 2022
  • [Personalized Federated Learning with First Order Model Optimization], ICLR, 2021
  • [Personalized Federated Learning With Gaussian Processes], NeurIPS, 2021
  • [Personalized Federated Learning with Moreau Envelopes], NeurIPS, 2020
  • [Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach], NeurIPS, 2020
  • [ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training], ICML, 2022
  • [Proximal and Federated Random Reshuffling], ICML, 2022
  • [QSFL: A Two-Level Uplink Communication Optimization Framework for Federated Learning], ICML, 2022
  • [QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning], NeurIPS, 2021
  • [Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank?], ICLR, 2022
  • [Resilient and Communication Efficient Learning for Heterogeneous Federated Systems], ICML, 2022
  • [Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models], ICLR, 2022
  • [Robust Federated Learning: The Case of Affine Distribution Shifts], NeurIPS, 2020
  • [Sageflow: Robust Federated Learning against Both Stragglers and Adversaries], NeurIPS, 2021
  • [SCAFFOLD: Stochastic Controlled Averaging for Federated Learning], ICML, 2020
  • [STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning], NeurIPS, 2021
  • [Subgraph Federated Learning with Missing Neighbor Generation], NeurIPS, 2021
  • [Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization], NeurIPS, 2020
  • [The Distributed Discrete Gaussian Mechanism for Federated Learning with Secure Aggregation], ICML, 2021
  • [The Fundamental Price of Secure Aggregation in Differentially Private Federated Learning], ICML, 2022
  • [The Poisson Binomial Mechanism for Unbiased Federated Learning with Secure Aggregation], ICML, 2022
  • [The Skellam Mechanism for Differentially Private Federated Learning], NeurIPS, 2021
  • [Throughput-Optimal Topology Design for Cross-Silo Federated Learning], NeurIPS, 2020
  • [Towards Model Agnostic Federated Learning Using Knowledge Distillation], ICLR, 2022
  • [Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy], ICML, 2022
  • [Virtual Homogeneity Learning: Defending against Data Heterogeneity in Federated Learning], ICML, 2022
  • [What Do We Mean by Generalization in Federated Learning?], ICLR, 2022
  • [ZeroFL: Efficient On-Device Training for Federated Learning with Local Sparsity], ICLR, 2022
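
Many of the optimization papers above build on the FedAvg template: each participating client runs a few local gradient steps on its own data, and the server averages the resulting models with weights proportional to the clients' dataset sizes. Below is a minimal NumPy sketch of that loop; the linear-regression model, learning rate, and toy non-IID client data are illustrative assumptions, not taken from any particular paper.

```python
import numpy as np

def local_train(w, X, y, lr=0.1, epochs=5):
    """A few epochs of (full-batch) gradient descent on one client,
    standing in for the local SGD step of FedAvg."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """One FedAvg round: broadcast the model, train locally, average by dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_train(w_global, X, y))
        sizes.append(len(y))
    weights = np.array(sizes) / sum(sizes)
    return sum(wk * uk for wk, uk in zip(weights, updates))

# Toy non-IID setup: three clients whose labels are shifted differently.
rng = np.random.default_rng(0)
clients = []
for shift in (0.0, 1.0, 2.0):
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + shift + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(3)
for _ in range(20):
    w = fedavg_round(w, clients)
print("federated model:", w)
```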

Differential Privacy

  • [Accuracy, Interpretability, and Differential Privacy via Explainable Boosting], ICML, 2021
  • [Adversarially Robust Streaming Algorithms via Differential Privacy], NeurIPS, 2020
  • [Antipodes of Label Differential Privacy: PATE and ALIBI], NeurIPS, 2021
  • [Bayesian Differential Privacy for Machine Learning], ICML, 2020
  • [Bounding User Contributions: A Bias-Variance Trade-off in Differential Privacy], ICML, 2019
  • [Capacity Bounded Differential Privacy], NeurIPS, 2019
  • [Context Aware Local Differential Privacy], ICML, 2020
  • [Data-dependent PAC-Bayes priors via differential privacy], NeurIPS, 2018
  • [Deep Learning with Label Differential Privacy], NeurIPS, 2021
  • [Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent], NeurIPS, 2021
  • [Differential Privacy for Growing Databases], NeurIPS, 2018
  • [Differential Privacy Has Disparate Impact on Model Accuracy], NeurIPS, 2019
  • [Differential Privacy Over Riemannian Manifolds], NeurIPS, 2021
  • [Differentially Private Bagging: Improved utility and cheaper privacy than subsample-and-aggregate], NeurIPS, 2019
  • [Efficiently Estimating Erdos-Renyi Graphs with Node Differential Privacy], NeurIPS, 2019
  • [Elliptical Perturbations for Differential Privacy], NeurIPS, 2019
  • [Empirical Risk Minimization in Non-interactive Local Differential Privacy Revisited], NeurIPS, 2018
  • [Facility Location Problem in Differential Privacy Model Revisited], NeurIPS, 2019
  • [Generalized Linear Bandits with Local Differential Privacy], NeurIPS, 2021
  • [Graphical-model based estimation and inference for differential privacy], ICML, 2019
  • [Hyperparameter Tuning with Renyi Differential Privacy], ICLR, 2022
  • [Improving Sparse Vector Technique with Renyi Differential Privacy], NeurIPS, 2020
  • [Improving the Gaussian Mechanism for Differential Privacy: Analytical Calibration and Optimal Denoising], ICML, 2018
  • [Instance-optimal Mean Estimation Under Differential Privacy], NeurIPS, 2021
  • [Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms], NeurIPS, 2020
  • [Local Differential Privacy for Evolving Data], NeurIPS, 2018
  • [Local Differential Privacy for Regret Minimization in Reinforcement Learning], NeurIPS, 2021
  • [Minimax Optimal Estimation of Approximate Differential Privacy on Neighboring Databases], NeurIPS, 2019
  • [Mitigating Bias in Adaptive Data Gathering via Differential Privacy], ICML, 2018
  • [Numerical Composition of Differential Privacy], NeurIPS, 2021
  • [On Sparse Linear Regression in the Local Differential Privacy Model], ICML, 2019
  • [Online Learning via the Differential Privacy Lens], NeurIPS, 2019
  • [Optimal Algorithms for Mean Estimation under Local Differential Privacy], ICML, 2022
  • [Optimal Differential Privacy Composition for Exponential Mechanisms], ICML, 2020
  • [Pain-Free Random Differential Privacy with Sensitivity Sampling], ICML, 2017
  • [PATE-GAN: Generating Synthetic Data with Differential Privacy Guarantees], ICLR, 2019
  • [Photonic Differential Privacy with Direct Feedback Alignment], NeurIPS, 2021
  • [Poisson Subsampled Renyi Differential Privacy], ICML, 2019
  • [Renyi Differential Privacy of The Subsampled Shuffle Model In Distributed Learning], NeurIPS, 2021
  • [Robin Hood and Matthew Effects: Differential Privacy Has Disparate Impact on Synthetic Data], ICML, 2022
  • [Robust anomaly detection and backdoor attack detection via differential privacy], ICLR, 2020
  • [Scalable Differential Privacy with Certified Robustness in Adversarial Learning], ICML, 2020
  • [Sharp Composition Bounds for Gaussian Differential Privacy via Edgeworth Expansion], ICML, 2020
  • [Smoothly Bounding User Contributions in Differential Privacy], NeurIPS, 2020
  • [The Composition Theorem for Differential Privacy], ICML, 2015
  • [The Discrete Gaussian for Differential Privacy], NeurIPS, 2020
  • [The Flajolet-Martin Sketch Itself Preserves Differential Privacy: Private Counting with Minimal Space], NeurIPS, 2020
  • [The Price of Differential Privacy for Online Learning], ICML, 2017
  • [Transfer Learning In Differential Privacy's Hybrid-Model], ICML, 2022
  • [Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy], ICML, 2022
  • [User-Specified Local Differential Privacy in Unconstrained Adaptive Online Learning], NeurIPS, 2019
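
A large fraction of the entries above analyze or refine the basic output-perturbation mechanisms: release a query answer plus noise calibrated to the query's sensitivity and the privacy budget. The sketch below shows the Laplace mechanism (pure ε-DP) and the classical Gaussian mechanism ((ε, δ)-DP, valid for ε < 1) applied to a bounded-mean query; the dataset, budget values, and sensitivity bound are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """epsilon-DP: add Laplace noise with scale = sensitivity / epsilon."""
    return true_value + rng.laplace(scale=sensitivity / epsilon)

def gaussian_mechanism(true_value, sensitivity, epsilon, delta):
    """(epsilon, delta)-DP for epsilon < 1 via the classical calibration
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon."""
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return true_value + rng.normal(scale=sigma)

# Example: privately release the mean of n values known to lie in [0, 1].
# Changing one record changes the mean by at most 1/n, so sensitivity = 1/n.
data = rng.uniform(size=1000)
n = len(data)
true_mean = data.mean()
print("true mean          :", true_mean)
print("Laplace  (eps=0.5) :", laplace_mechanism(true_mean, 1 / n, epsilon=0.5))
print("Gaussian (eps=0.5) :", gaussian_mechanism(true_mean, 1 / n, epsilon=0.5, delta=1e-5))
```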

Secure Multi-Party Computation
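
A core building block behind most secure multi-party computation protocols is additive secret sharing: each input is split into random shares that individually reveal nothing, yet linear functions can be computed share-wise and only the final result is reconstructed. Below is a minimal sketch assuming three honest-but-curious parties and arithmetic modulo a fixed prime; both choices are illustrative.

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is modulo P

def share(secret, n_parties=3):
    """Split a secret into n additive shares that sum to the secret mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Secure addition: each party adds its two shares locally;
# no single party ever sees either input in the clear.
x_shares = share(42)
y_shares = share(100)
z_shares = [(xs + ys) % P for xs, ys in zip(x_shares, y_shares)]
assert reconstruct(z_shares) == 142
print("x + y =", reconstruct(z_shares))
```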

Trusted Execution Environment

Ethics

Fairness

  • [A Unified Approach to Fair Online Learning via Blackwell Approachability], NeurIPS, 2021
  • [Adaptive Droplet Routing in Digital Microfluidic Biochips Using Deep Reinforcement Learning], ICML, 2020
  • [An Axiomatic Theory of Provably-Fair Welfare-Centric Machine Learning], NeurIPS, 2021
  • [Are My Deep Learning Systems Fair? An Empirical Study of Fixed-Seed Training], NeurIPS, 2021
  • [Conditional Learning of Fair Representations], ICLR, 2020
  • [Delayed Impact of Fair Machine Learning], ICML, 2018
  • [Differentially Private Fair Learning], ICML, 2019
  • [Ditto: Fair and Robust Federated Learning Through Personalization], ICML, 2021
  • [Exploiting MMD and Sinkhorn Divergences for Fair and Transferable Representation Learning], NeurIPS, 2020
  • [Fair Learning with Private Demographic Data], ICML, 2020
  • [Fair Representation Learning through Implicit Path Alignment], ICML, 2022
  • [Fair Resource Allocation in Federated Learning], ICLR, 2020
  • [Fair Sequential Selection Using Supervised Learning Models], NeurIPS, 2021
  • [Fairness in Reinforcement Learning], ICML, 2017
  • [Fairness without Demographics through Adversarially Reweighted Learning], NeurIPS, 2020
  • [Fairness-Aware Learning for Continuous Attributes and Treatments], ICML, 2019
  • [FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout], NeurIPS, 2021
  • [Flexibly Fair Representation Learning by Disentanglement], ICML, 2019
  • [Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning], NeurIPS, 2021
  • [Group Retention when Using Machine Learning in Sequential Decision Making: the Interplay between User Dynamics and Fairness], NeurIPS, 2019
  • [Inherent Tradeoffs in Learning Fair Representations], NeurIPS, 2019
  • [Is Fairness Only Metric Deep? Evaluating and Addressing Subgroup Gaps in Deep Metric Learning], ICLR, 2022
  • [Learning Adversarially Fair and Transferable Representations], ICML, 2018
  • [Learning Certified Individually Fair Representations], NeurIPS, 2020
  • [Learning Fair Policies in Decentralized Cooperative Multi-Agent Reinforcement Learning], ICML, 2021
  • [Learning Fair Policies in Multi-Objective (Deep) Reinforcement Learning with Average and Discounted Rewards], ICML, 2020
  • [Learning fair representation with a parametric integral probability metric], ICML, 2022
  • [Learning Fair Representations], ICML, 2013
  • [Learning Fairness in Multi-Agent Systems], NeurIPS, 2019
  • [Learning Optimal Fair Policies], ICML, 2019
  • [Measuring Non-Expert Comprehension of Machine Learning Fairness Metrics], ICML, 2020
  • [Metric-Fair Active Learning], ICML, 2022
  • [Metric-Free Individual Fairness in Online Learning], NeurIPS, 2020
  • [My Fair Bandit: Distributed Learning of Max-Min Fairness with Multi-player Bandits], ICML, 2020
  • [Online Learning with an Unknown Fairness Metric], NeurIPS, 2018
  • [Paradoxes in Fair Machine Learning], NeurIPS, 2019
  • [Policy Learning for Fairness in Ranking], NeurIPS, 2019
  • [Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer], NeurIPS, 2018
  • [Predict Responsibly: Increasing Fairness by Learning to Defer], ICLR, 2018
  • [Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness], ICML, 2018
  • [Probably Approximately Metric-Fair Learning], ICML, 2018
  • [Residual Unfairness in Fair Machine Learning from Prejudiced Data], ICML, 2018
  • [Retiring Adult: New Datasets for Fair Machine Learning], NeurIPS, 2021
  • [The Implicit Fairness Criterion of Unconstrained Learning], ICML, 2019
  • [Two Simple Ways to Learn Individual Fairness Metrics from Data], ICML, 2020
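
Several of the papers above define or optimize group fairness criteria, most commonly demographic parity (equal positive-prediction rates across groups) and equalized odds (equal true- and false-positive rates). The sketch below shows how these gaps are typically measured for a binary classifier and a binary sensitive attribute; the predictions and group labels are synthetic placeholders.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rate between the two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive and false-positive rates between the two groups."""
    gaps = []
    for label in (0, 1):  # label 1 -> TPR gap, label 0 -> FPR gap
        rates = [y_pred[(group == g) & (y_true == label)].mean()
                 for g in np.unique(group)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Toy example: a predictor whose positive rate depends on the sensitive attribute.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = (rng.uniform(size=1000) < 0.4 + 0.2 * group).astype(int)

print("demographic parity gap:", demographic_parity_gap(y_pred, group))
print("equalized odds gap    :", equalized_odds_gap(y_true, y_pred, group))
```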

Robustness

Security

Wellbeing
