
dlordinal's Introduction

Deep learning utilities library

dlordinal is an open-source Python toolkit focused on deep learning with ordinal methodologies.

Overview
CI/CD: codecov · docs · Python versions
Code: PyPI · Binder · black (formatting) · Ruff (linting)


⚙️ Installation

dlordinal v2.0.0 is the last version that supports Python 3.8, 3.9 and 3.10.

The easiest way to install dlordinal is via pip:

pip install dlordinal
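
To verify the installation, the following minimal sketch instantiates one of the ordinal losses on random data; the exact import path and constructor signature are assumptions and may differ between versions:

import torch

# Import path assumed from the package layout; adjust if it differs.
from dlordinal.losses import TriangularCrossEntropyLoss

num_classes = 5
loss_fn = TriangularCrossEntropyLoss(num_classes=num_classes)

logits = torch.randn(8, num_classes)           # raw model outputs for a batch of 8
targets = torch.randint(0, num_classes, (8,))  # ordinal class indices
print(loss_fn(logits, targets))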

📖 Documentation

Sphinx is a documentation generator commonly used in the Python ecosystem. It allows developers to write documentation in the reStructuredText (reST) markup language and to generate HTML, PDF, and other formats from it, providing a powerful and flexible way to produce comprehensive, user-friendly project documentation.

To build the dlordinal documentation, first install all documentation dependencies:

pip install -e '.[docs]'

Then access the docs/ directory:

docs/
↳ api.rst
↳ conf.py
↳ distributions.rst
↳ references.bib
↳ ...

If a new module is created in the software project, the api.rst file must be modified to include the name of the new module:

.. _api:

=============
API Reference
=============

This is the API for the **dlordinal** package.

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   losses
   datasets
   distributions
   layers
   metrics
   sklearn_integration
   ***NEW_MODULE***

Afterwards, a new .rst file associated with the new module must be created. It should use the automodule directive to pull in the documentation from the module's docstring-annotated files, and include the bibliography directive if citations appear in any of them:

docs/
↳ api.rst
↳ conf.py
↳ distributions.rst
↳ new_module.rst
↳ references.bib
↳ ...

.. _new_module:

New Module
==========

.. automodule:: dlordinal.new_module
    :members:

.. footbibliography::

Finally, if any new bibliographic citations have been added, they should be included in the references.bib file.

Collaborating

Code contributions to the dlordinal project are welcome via pull requests. Please contact the maintainers (for example, by opening an issue) before doing any work to make sure that your contributions align with the project.

Guidelines for code contributions

  • You can clone the repository and then install the library from the local repository folder:
git clone git@github.com:ayrna/dlordinal.git
pip install ./dlordinal
  • In order to set up the environment for development, install the project in editable mode and include the optional dev requirements:
pip install -e '.[dev]'
  • Install the pre-commit hooks before starting to make any modifications:
pre-commit install
  • Write code that is compatible with all supported versions of Python listed in the pyproject.toml file.
  • Create tests that cover the common cases and the corner cases of the code.
  • Preserve backwards-compatibility whenever possible, and make clear if something must change.
  • Document any portions of the code that might be less clear to others, especially to new developers.
  • Write API documentation as docstrings.
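
For example, a hypothetical helper documented with a NumPy-style docstring (the concrete docstring convention used by the project is an assumption; check existing modules before contributing):

def count_classes(targets):
    """Return the number of distinct ordinal classes in ``targets``.

    This is a hypothetical helper used only to illustrate the docstring
    format expected for the API documentation.

    Parameters
    ----------
    targets : list of int
        Class labels of the training samples.

    Returns
    -------
    int
        Number of distinct classes.
    """
    return len(set(targets))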

dlordinal's People

Contributors

franberchez, victormvy, javierbg

Stargazers

Juan Carlos Fernández Caballero, Valentina Corbetta, Javier Sánchez, Marcos, Rafael Ayllón Gavilán, David Guijo-Rubio

Watchers

Antonio Manuel Durán Rosal, Juan Carlos Fernández Caballero, María Pérez Ortiz

Forkers

rafaaygar

dlordinal's Issues

Avoid default value for `num_classes` parameter in Unimodal Loss Functions

The num_classes parameter in unimodal loss functions currently has a default value. However, the correct number of classes must be specified explicitly in every case; relying on a default value may lead to errors that are difficult to diagnose.

Recommendation:

  • Remove the default value for the num_classes parameter in BetaCrossEntropyLoss, BinomialCrossEntropyLoss, ExponentialRegularisedCrossEntropyLoss, GeneralTriangularCrossEntropyLoss, PoissonCrossEntropyLoss, and TriangularCrossEntropyLoss
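
Until the default value is removed, callers should always pass the parameter explicitly. A minimal sketch, assuming the losses accept num_classes as a keyword argument:

from dlordinal.losses import BetaCrossEntropyLoss, PoissonCrossEntropyLoss

# State the number of ordinal classes explicitly instead of relying on the default.
beta_loss = BetaCrossEntropyLoss(num_classes=6)
poisson_loss = PoissonCrossEntropyLoss(num_classes=6)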

[BUG] Numerical instability in CLM activation layer

Describe the bug

The CLM layer with cloglog and logit link functions has a numerical instability in the computation of the z3 variable. It uses torch.exp(-z3), so when z3 is approximately above 15 it returns infinity.

Steps/Code to reproduce the bug

.

Expected results

.

Actual results

.
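
As a general illustration of the overflow behaviour described above (a sketch of the float32 limits of torch.exp, not the CLM internals): exponentiating a large argument produces infinity, and clamping the exponent is one common mitigation.

import torch

z = torch.tensor([10.0, 100.0])
print(torch.exp(z))                         # tensor([2.2026e+04, inf]): overflow in float32

# One common mitigation: clamp the exponent before exponentiating.
print(torch.exp(torch.clamp(z, max=80.0)))  # finite values only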

Warnings in tutorials

The tutorials require an update to work with the latest versions of torch and torchvision. Currently, the model creation section produces the following warnings:

UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
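
A sketch of the updated model creation the warnings point to, for torchvision >= 0.13 (the surrounding tutorial code is not shown here):

from torchvision.models import resnet18, ResNet18_Weights

# Replaces the deprecated `pretrained=True` argument.
model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)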

[MNT] `label_smoothing` parameter of `CrossEntropyLoss` should not be exposed in soft labelling loss functions

Describe the issue

The label_smoothing parameter of CrossEntropyLoss applies label smoothing by mixing the one-hot targets with a uniform distribution. However, in soft labelling loss functions it makes no sense to mix an already soft label encoding with a uniform distribution.

Suggest a potential alternative/fix

The label_smoothing parameter should be removed from

  • PoissonCrossEntropyLoss
  • BinomialCrossEntropyLoss
  • ExponentialCrossEntropyLoss
  • BetaCrossEntropyLoss
  • TriangularCrossEntropyLoss
  • GeneralTriangularCrossEntropyLoss

Then, the label_smoothing value passed to CrossEntropyLoss when initialising the ce_loss attribute should be fixed to 0.
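
A minimal sketch of the suggested change (an illustrative fragment only, not the actual class body of the dlordinal losses):

import torch.nn as nn

class SoftLabellingLoss(nn.Module):  # hypothetical name, illustrative only
    def __init__(self, num_classes):
        super().__init__()
        self.num_classes = num_classes
        # Fixed to 0: soft labels should not be mixed with a uniform distribution.
        self.ce_loss = nn.CrossEntropyLoss(label_smoothing=0.0)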

Additional context

Thank you!

[ENH, DOC] Displaying descriptions of the class attributes.

Hello team!
Thank you for this very useful tool.
I've been working with it for a few days and I've detected that there is a problem with displaying the descriptions of the class attributes of the datasets module in the software documentation.

Could you please fix it?

Thank you again!

[API, MNT] `PytorchEstimator` deprecation

Describe the issue

PytorchEstimator currently offers a very basic classifier with the scikit-learn interface. However, it lacks numerous essential functionalities. Some Python packages, such as skorch, already provide implementations of these missing features. Since dlordinal elements integrate seamlessly with such packages, having an estimator class in this package seems unnecessary and beyond its intended scope.

Suggest a potential alternative/fix

The PytorchEstimator class should be deprecated and subsequently removed from this package. Instead, users should be encouraged to utilize third-party packages that already incorporate a PyTorch estimator with a scikit-learn interface. To facilitate this transition, comprehensive tutorials should be provided, describing how to seamlessly integrate dlordinal with these third-party alternatives.
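
For instance, a hedged sketch of how a dlordinal loss could be plugged into skorch's NeuralNetClassifier (the dlordinal import path and loss signature are assumptions):

from skorch import NeuralNetClassifier
from torch import nn
from dlordinal.losses import TriangularCrossEntropyLoss  # import path assumed

# Any PyTorch module producing raw logits; a toy network for illustration.
module = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))

net = NeuralNetClassifier(
    module,
    criterion=TriangularCrossEntropyLoss,
    criterion__num_classes=5,  # forwarded by skorch to the criterion constructor
    max_epochs=10,
    lr=0.01,
)
# net.fit(X, y) and net.predict(X) then follow the scikit-learn interface.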

Additional context

No response

PytorchEstimator predict and predict_proba interface changes

The predict and predict_proba methods within the PytorchEstimator class should be modified to return numpy arrays instead of Tensors. This adjustment is necessary to align with the interface conventions of scikit-learn estimators, ensuring seamless integration and consistency across frameworks.

Also, a verbose parameter should be included to enable or disable the messages which are printed in the current version.
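
One common way to perform that conversion inside predict and predict_proba, sketched here without the rest of the estimator:

import torch

def to_numpy(t: torch.Tensor):
    # Detach from the autograd graph, move to CPU and convert to numpy,
    # matching the return type expected from scikit-learn estimators.
    return t.detach().cpu().numpy()

probs = to_numpy(torch.softmax(torch.randn(4, 5), dim=1))
print(type(probs))  # <class 'numpy.ndarray'>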

[ENH, API] PytorchEstimator verbosity

Currently, the PytorchEstimator lacks flexibility in managing verbosity during the training phase. It uniformly prints progress updates on each epoch, displaying only the current epoch and the total number of epochs. However, there are scenarios where users might prefer to customize this output. Some may seek to remove this message, while others might find it beneficial to include additional information such as the loss value per epoch.

Is it possible to add a verbose parameter to achieve this? Thank you!
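
A minimal sketch of the kind of flag being requested (not the actual PytorchEstimator code):

def log_epoch(epoch, num_epochs, loss_value, verbose=1):
    # verbose=0 silences output, verbose=1 keeps the current message,
    # verbose>=2 also reports the loss value per epoch.
    if verbose == 1:
        print(f"Epoch {epoch}/{num_epochs}")
    elif verbose >= 2:
        print(f"Epoch {epoch}/{num_epochs} - loss: {loss_value:.4f}")

log_epoch(3, 10, 0.4821, verbose=2)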

[API] `distributions` module should be renamed to `soft_labelling`

The distributions module does not implement probability distributions. Instead, it uses different probability distributions to determine soft labels for a given number of splits. Therefore, the whole module should be renamed to soft_labelling and the functions it contains should also be renamed as follows:

  • get_beta_probabilities -> get_beta_softlabels
  • get_binomial_probabilities -> get_binomial_softlabels
  • get_exponential_probabilities -> get_exponential_softlabels
  • get_triangular_probabilities -> get_triangular_softlabels
  • get_general_triangular_probabilities -> get_general_triangular_softlabels
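
For reference, a usage sketch under the current names (the exact signatures are assumptions; the issue only states that the functions take a number of splits):

from dlordinal.distributions import get_binomial_probabilities  # current name

# Soft label matrix for a 5-class ordinal problem (one row per true class),
# which is what these functions actually produce, hence the proposed rename.
soft_labels = get_binomial_probabilities(5)
print(soft_labels)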
