Home Page: https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html

License: Apache License 2.0

Transformer Engine

Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper GPUs, to provide better performance with lower memory utilization in both training and inference. TE provides a collection of highly optimized building blocks for popular Transformer architectures and an automatic mixed precision-like API that can be used seamlessly with your own framework-specific code. TE also includes a framework agnostic C++ API that can be integrated with other deep learning libraries to enable FP8 support for Transformers.

As the number of parameters in Transformer models continues to grow, training and inference for architectures such as BERT, GPT and T5 become very memory and compute intensive. Most deep learning frameworks train with FP32 by default, but this is not essential to achieve full accuracy for many deep learning models. Mixed-precision training, which combines single-precision (FP32) with a lower-precision format (e.g. FP16), yields significant speedups with minimal differences in accuracy compared to FP32 training. The Hopper GPU architecture introduced FP8 precision, which offers improved performance over FP16 with no degradation in accuracy. Although all major deep learning frameworks support FP16, native FP8 support is not available in them today.

TE addresses the problem of FP8 support by providing APIs that integrate with popular Large Language Model (LLM) libraries. It provides a Python layer consisting of modules to easily build Transformer layers, as well as a framework-agnostic C++ library including the structs and kernels needed for FP8 support. The modules provided by TE internally maintain the scaling factors and other values needed for FP8 training, greatly simplifying mixed precision training for users.

Examples

PyTorch

import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Set dimensions.
in_features = 768
out_features = 3072
hidden_size = 2048

# Initialize model and inputs.
model = te.Linear(in_features, out_features, bias=True)
inp = torch.randn(hidden_size, in_features, device="cuda")

# Create an FP8 recipe. Note: All input args are optional.
fp8_recipe = recipe.DelayedScaling(margin=0, interval=1, fp8_format=recipe.Format.E4M3)

# Enable autocasting for the forward pass
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)

loss = out.sum()
loss.backward()
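
The snippet above performs a single forward and backward pass. A minimal training-step sketch around it, using a standard PyTorch optimizer (the optimizer choice, learning rate and step count below are illustrative additions, not part of the original example):

# te.Linear is a torch.nn.Module, so a regular PyTorch optimizer works on its parameters.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for _ in range(10):
    optimizer.zero_grad()

    # Only the forward pass runs under the FP8 autocast context; backward stays outside.
    with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
        out = model(inp)

    loss = out.sum()
    loss.backward()
    optimizer.step()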

JAX

import jax
import jax.numpy as jnp
import transformer_engine.jax as te
from transformer_engine.common import recipe

BATCH = 32
SEQLEN = 128
HIDDEN = 1024

# Initialize RNG and inputs.
rng = jax.random.PRNGKey(0)
init_rng, data_rng = jax.random.split(rng)
inp = jax.random.normal(data_rng, [BATCH, SEQLEN, HIDDEN], jnp.float32)

# Create an FP8 recipe. Note: All input args are optional.
fp8_recipe = recipe.DelayedScaling(margin=0, interval=1, fp8_format=recipe.Format.HYBRID)

# Enable autocasting for the forward pass
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    model = te.DenseGeneral(features=HIDDEN)

    def loss_fn(params, other_vars, inp):
      out = model.apply({'params':params, **other_vars}, inp)
      return jnp.mean(out)

    # Initialize models.
    variables = model.init(init_rng, inp)
    other_variables, params = variables.pop('params')

    # Construct the forward and backward function
    fwd_bwd_fn = jax.value_and_grad(loss_fn, argnums=(0, 1))

    for _ in range(10):
      loss, (param_grads, other_grads) = fwd_bwd_fn(params, other_variables, inp)
      # Update FP8 metas
      other_variables = te.update_fp8_metas(other_grads)
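
The loop above only refreshes the FP8 metadata each step; in a real training loop the parameters would be updated as well. A minimal sketch of that step using optax (optax and the plain-SGD settings are illustrative assumptions, not part of the original example):

import optax

# Plain SGD; the optimizer and learning rate are illustrative.
tx = optax.sgd(learning_rate=1e-3)
opt_state = tx.init(params)

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    for _ in range(10):
        loss, (param_grads, other_grads) = fwd_bwd_fn(params, other_variables, inp)
        # Apply the parameter update.
        updates, opt_state = tx.update(param_grads, opt_state, params)
        params = optax.apply_updates(params, updates)
        # Update FP8 metas, exactly as in the example above.
        other_variables = te.update_fp8_metas(other_grads)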

Highlights

  • Easy-to-use modules for building Transformer layers with FP8 support on H100 GPUs.
  • Optimizations (e.g. fused kernels) for Transformer models across all precisions and NVIDIA GPU architectures.

Installation

In the NGC container

Transformer Engine comes preinstalled in the PyTorch container on NVIDIA GPU Cloud (NGC), versions 22.09 and later.

From source

Clone the repository and, inside it, run one of:

NVTE_FRAMEWORK=all pip install .     # Build with all frameworks.
NVTE_FRAMEWORK=pytorch pip install . # Build with PyTorch only.
NVTE_FRAMEWORK=jax pip install .     # Build with JAX only.
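
A quick way to verify afterwards that the build is importable from Python (a minimal sanity check, not part of the original instructions):

# Should print the te.Linear class if the PyTorch bindings were built and installed correctly.
import transformer_engine.pytorch as te
print(te.Linear)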

User Guide

For examples, tutorials, and the API reference, please refer to the User Guide.

Transformer Architectures

While the more granular modules in Transformer Engine allow building any Transformer architecture, its TransformerLayer API is flexible enough to build multiple major variations of Transformers.

NOTE: For simplicity, we only show PyTorch examples below. For TransformerLayer usage in all supported frameworks, refer to the examples.

GPT

The GPT architecture has LayerNorm on the input side (before the QKV GEMM), and the residual connection is taken from the input of that LayerNorm. In TE, this can be achieved by setting the following arguments in the TransformerLayer API.

transformer_engine.pytorch.TransformerLayer(
        ...,
        ...,
        apply_residual_connection_post_layernorm=False,
        output_layernorm=False,
        layer_type="encoder",
)
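
For reference, a complete instantiation of such a layer might look as follows. This is a minimal sketch: the hidden size, FFN size, number of heads and input shape are illustrative assumptions, and only the last three arguments come from the snippet above.

import torch
import transformer_engine.pytorch as te

gpt_layer = te.TransformerLayer(
        hidden_size=1024,
        ffn_hidden_size=4096,
        num_attention_heads=16,
        apply_residual_connection_post_layernorm=False,
        output_layernorm=False,
        layer_type="encoder",
).cuda()

# With default settings the input layout is [sequence_length, batch_size, hidden_size].
x = torch.randn(128, 4, 1024, device="cuda")
y = gpt_layer(x)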

BERT

The BERT architecture has LayerNorm on the output side (after the final BiasDropoutAdd), and the residual connection is taken from the output of that LayerNorm. In TE, this can be achieved by setting the following arguments in the TransformerLayer API.

transformer_engine.pytorch.TransformerLayer(
        ...,
        ...,
        apply_residual_connection_post_layernorm=True,
        output_layernorm=True,
        layer_type="encoder",
)

T5

The T5 architecture has an additional cross-attention + BiasDropoutAdd + LayerNorm block before the MLP layer. In TE, this can be added by setting layer_type to "decoder" in the TransformerLayer API.

transformer_engine.pytorch.TransformerLayer(
        ...,
        ...,
        layer_type="decoder",
)
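
With layer_type="decoder", the layer's forward pass also consumes the encoder output for cross-attention. A minimal sketch (the dimensions, the hidden_size/ffn_hidden_size/num_attention_heads arguments and the encoder_output parameter name are assumptions to be checked against the API reference):

import torch
import transformer_engine.pytorch as te

decoder_layer = te.TransformerLayer(
        hidden_size=1024,
        ffn_hidden_size=4096,
        num_attention_heads=16,
        layer_type="decoder",
).cuda()

# Decoder input and encoder output, both [sequence_length, batch_size, hidden_size].
dec_inp = torch.randn(128, 4, 1024, device="cuda")
enc_out = torch.randn(128, 4, 1024, device="cuda")

out = decoder_layer(dec_inp, encoder_output=enc_out)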

Contributing to Transformer Engine

We welcome contributions to Transformer Engine. To contribute to TE and make pull requests, follow the guidelines outlined in the CONTRIBUTING.rst document.
