
tenet.jl's Introduction

Tenǝʇ.jl


Important

The code for quantum tensor networks has been moved to the new Qrochet library.

A Julia library for Tensor Networks. Tenet can be executed both in local environments and on large supercomputers. Its goals are:

  • Expressiveness: Simple to use. 👶
  • Flexibility: Extend it to your needs. 🔧
  • Performance: Goes brr... fast. 🏎️

Features

  • Optimized Tensor Network contraction order, powered by EinExprs
  • Tensor Network slicing/cutting
  • Automatic Differentiation of TN contraction
  • Distributed contraction
  • Local Tensor Network transformations
    • Hyperindex converter
    • Rank simplification
    • Diagonal reduction
    • Anti-diagonal gauging
    • Column reduction
    • Split simplification
  • 2D & 3D visualization of large networks, powered by Makie

Preview

A video of its presentation at JuliaCon 2023 can be seen here:

Watch the video

tenet.jl's People

Contributors

arturgs, emapuljak, github-actions[bot], jofrevalles, mofeing, todorbsc


tenet.jl's Issues

Inconsistent `node_attr` behavior for invisible "ghost" nodes in `plot`

Summary

The graphplot function does not support plotting graphs with dangling edges, i.e. nodes that have edges with no node at the other end. To manage this, "ghost" nodes with zero size are internally created at the free end of such edges. While these "ghost" nodes are invisible in the plot, they introduce inconsistencies when using node_attr keyword arguments such as node_size, node_color, etc., in the plot function.

Specifically, when a user passes a Vector for some node_attr, the function also expects values for these invisible "ghost" nodes, which I believe is counter-intuitive to the user. Users would naturally expect to specify attributes only for the "real" nodes, and have the "ghost" nodes' attributes be handled internally by the plot function. Requiring users to pass in extra attribute values for "ghost" nodes complicates the interface and can lead to errors.

Example

julia> using Makie; using CairoMakie; using Tenet

julia> fig = Figure()

julia> t_lm = Tensor(rand(2, 2), (:l, :m))
2×2 Tensor{Float64, 2, Matrix{Float64}}: ...

julia> t_ilm = Tensor(rand(2, 2, 2), (:i, :l, :m))
2×2×2 Tensor{Float64, 3, Array{Float64, 3}}: ...

julia> tn = TensorNetwork([t_ilm, t_lm])
TensorNetwork{Arbitrary, NamedTuple{(), Tuple{}}}(#tensors=2, #inds=3)

julia> plot(tn; node_color=[:red, :green])
Error showing value of type Makie.FigureAxisPlot:
ERROR: All non scalars need same length, Found lengths for each argument: (3, 2, 3, 1, 1, 3, 3, 1), (Vector{Point{2, Float32}}, Vector{ColorTypes.RGBA{Float32}}, Vector{Real}, ColorTypes.RGBA{Float32}, Float32, Vector{BezierPath}, Vector{Vec{2, Float32}}, Quaternionf)
Stacktrace:
...

Consider integration with `MetaGraphs.jl`

A MetaGraph is a concrete subtype of the Graphs.jl AbstractGraph abstract type that allows metadata on all of its components.
https://github.com/JuliaGraphs/MetaGraphs.jl

In this sense, it looks like a TensorNetwork would be a particular case of MetaGraph.
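
For concreteness, here is a minimal sketch of how two connected tensors could be stored in a MetaGraph, with the tensor as vertex metadata and the shared index as edge metadata. Note that hyperindices do not map cleanly onto simple edges, which motivates the questions below:

using Graphs, MetaGraphs, Tenet

A = Tensor(rand(2, 2), (:i, :j))
B = Tensor(rand(2, 2), (:j, :k))

mg = MetaGraph(SimpleGraph(2))      # one vertex per tensor
set_prop!(mg, 1, :tensor, A)        # vertex metadata: the tensor itself
set_prop!(mg, 2, :tensor, B)
add_edge!(mg, 1, 2)
set_prop!(mg, 1, 2, :index, :j)     # edge metadata: the shared index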

Some questions:

  • How can we store hyperindices?
  • How can we force edge-index and vertex-tensor relations?
  • Does functionality improve?
  • How can we dispatch on Ansatz type?

Add `replace!` function for replacing a `Tensor` from a `TensorNetwork`

Summary

Add a replace! function that allows users to replace a specific Tensor within a TensorNetwork. The function should ensure that the new Tensor maintains the same connections with respect to the labels as the original Tensor.

The replace! function should have the following input parameters and expected behavior:

  • Input: TensorNetwork, index of the Tensor to be replaced, new Tensor
  • Output: Updated TensorNetwork with the specified Tensor replaced
  • Checks: Verify that the new Tensor is compatible with the existing TensorNetwork, maintaining the same label connections

Example

function replace!(state::TensorNetwork, index::Int, new_tensor::Tensor)
    # Perform checks and update the state
    ...
end
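
A minimal sketch of the intended checks, assuming the existing tensors and labels accessors (the in-place update is only illustrative):

function replace!(tn::TensorNetwork, index::Int, new_tensor::Tensor)
    old_tensor = tensors(tn)[index]
    # the replacement must keep the same index labels so its connections are preserved
    issetequal(labels(new_tensor), labels(old_tensor)) ||
        throw(ArgumentError("the new tensor must have the same labels as the one it replaces"))
    tensors(tn)[index] = new_tensor   # illustrative; the real update touches internal fields
    return tn
end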

Try `Dagger` on multinode

Summary

Try the integration of our Tensor methods with Dagger in a multi-node execution. See issue #95 for more information.

Test plotting on CI

From #28, we found out that there might be crashes in the plotting code. We should not aim to test deterministic behaviour, just check that the code does not crash.

The problem with writing a test that calls Makie.plot is that it will fail on CI runs because they do not have a graphical user interface. I recall that there was a package for creating a fake graphics server to be used in these cases. I don't remember the name, but it is probably used by GraphMakie or some Makie backend.

Testing `contract` on complex numbers fails

When running ChainRulesTestUtils.test_frule and ChainRulesTestUtils.test_rrule on the product of complex numbers using the contract function, the result is not the expected one. Usually the actual result is the negative or the conjugate of the expected result.

Currently, testing of products of complex numbers is disabled, but we should strive to find a solution.

`DimensionMismatch` error not raised for inconsistent dimensions in `TensorNetwork`

Summary

In the current implementation of the TensorNetwork constructor, inconsistent dimensions of the same tensor index are not checked and handled. The issue manifests when two tensors in the same TensorNetwork share an index label but have different dimensions for it.

Example

julia> using Tenet

julia> A = Tensor(rand(3, 3), (:i, :j))
3×3 Tensor{Float64, 2, Matrix{Float64}}:
 0.397522  0.872019   0.541417
 0.463644  0.0310421  0.479054
 0.429812  0.923011   0.628436

julia> B = Tensor(rand(2, 2), (:j, :k))
2×2 Tensor{Float64, 2, Matrix{Float64}}:
 0.383957  0.154286
 0.454446  0.608856

julia> tn = TensorNetwork([A, B]) # No error is raised here
TensorNetwork{Arbitrary, NamedTuple{(), Tuple{}}}(#tensors=2, #labels=3)

In the above example, despite the :j index having dimension 3 in Tensor A and 2 in Tensor B, no DimensionMismatch error is raised, which is counter-intuitive and can potentially lead to incorrect results.

Interestingly, the DimensionMismatch error is correctly thrown when using the push! function:

julia> push!(tn, Tensor(rand(3, 3), (:k, :l))) # Raises DimensionMismatch error
ERROR: DimensionMismatch: size(tensor,k)=3 but should be equal to size(tn,k)=2
Stacktrace:
 [1] push!(tn::TensorNetwork{Arbitrary, NamedTuple{(), Tuple{}}}, tensor::Tensor{Float64, 2, Matrix{Float64}})
   @ Tenet ~/git/Tenet.jl/src/TensorNetwork.jl:167
 [2] top-level scope
   @ REPL[22]:1

This suggests that the error handling mechanism might be correctly implemented in push! but is not properly functioning in the TensorNetwork constructor function.
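
One possible fix is to route the constructor through push!, which already performs the size check. A minimal sketch, assuming an empty TensorNetwork() constructor exists:

function TensorNetwork(ts::Vector{<:Tensor})
    tn = TensorNetwork()
    for tensor in ts
        push!(tn, tensor)   # push! already raises DimensionMismatch on inconsistent index sizes
    end
    return tn
end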

Transform hyperindices to COPY-tensors on `plot(::TensorNetwork)`

Our plot function is currently not capable of plotting hyperedges (which should be shown as "bags" or "subsets" containing the connected nodes). One solution is to do as quimb does and show hyperindices as tensors connecting the involved tensors.

In order to do that, I created the HyperindConverter transformation, which replaces hyperindices with COPY-tensors. It should be as easy to use as:

tn_plot = transform(tn, HyperindConverter)

Inserted COPY-tensors have a metadata parameter called dual that contains the label of the replaced hyperindex. When plotting, the newly created edges of the COPY-tensor should not have their labels shown when labels=true. Instead, the label in the dual metadata should be used.

Tip This code will return you the list of the newly created COPY-tensors that are dual of hyperindices:

copytensors = filter(Base.Fix2(haskey, :dual), tensors(tn_plot))

Extra point if the label of the hyperindex is not plotted on every edge of the COPY-tensor, but only once per COPY-tensor.
Extra point if COPY-tensors are plotted in a different way from the rest of the tensors. I was thinking of just small black dots (like now) for COPY-tensors and a white background with a black circumference for the rest of the tensors. It would be awesome if COPY-tensors could be represented with rhombus or diamond shapes.

Consider accepting three-dimensional edge `Tensor`s for `MatrixProductState`

We could relax the current requirements that disallow creating a MatrixProductState in which all the tensors are three-dimensional. In that case, we would just check that the dimension of the left and right index of the first and last Tensor, respectively, is one.

I think this could allow an easier implementation of the algorithms where MatrixProductStates are used, since the code is easier to generalize and we wouldn't have to worry about the boundary Tensors being different.
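
For illustration, the relaxed requirement could reduce to a check like the following sketch (the :l/:r index names, the mps_tensors vector, and the size(tensor, label) method are assumptions):

# boundary tensors may be rank-3 as long as their dangling virtual bond is trivial
size(first(mps_tensors), :l) == 1 ||
    throw(DimensionMismatch("left index of the first tensor must have dimension 1"))
size(last(mps_tensors), :r) == 1 ||
    throw(DimensionMismatch("right index of the last tensor must have dimension 1"))
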
What do you think?

Incorporate `labels` representation for physical indices of each `Tensor` within a `State`

Summary

Enhance the meta field of a Tensor within a State by including a representation of the labels associated with each index, specifically emphasizing their physical representation (e.g. left, right). This can be implemented using a dictionary where each index corresponds to its respective label. This feature will be useful for manipulating the Tensors of a State in a more general fashion.

For instance, in a MatrixProductState, a typical Tensor has three indices: left, right, and physical (represented by :l, :r, and :p, respectively). We could represent the physical labels using a dictionary like :l => :label1, :r => :label2, :p => :label3, where :label1, :label2, and :label3 correspond to the labels of the left, right, and physical indices, respectively.

Consider treating `State`s and `Operator`s as Functors

Due to the "directionality" of quantum tensor networks, it might be beneficial to consider adding support to "call" tensor network states and operators, turning them into functors.

Quantum Operators are... (you guessed it) operators, so they are functions that map functions to functions. Quantum States are complex-valued functions, but their duals are functionals that map States to complex numbers. This is exactly the behavior seen in Tensor Networks (and QM), where...

  • An operator $U$ acting on a function $\ket{\psi}$ leads to a function $\ket{\psi'}$ $$U \ket{\psi} = \ket{\psi'}$$
  • A functional $\bra{\psi}$ acting on an operator $U$ leads to a functional $\bra{\psi'}$ $$\bra{\psi} U = \bra{\psi'}$$
  • An operator $U$ acting on another operator $V$ leads to a third operator $W$ $$U V = W$$
  • A functional $\bra{\psi}$ acting on a function $\ket{\phi}$ leads to a scalar $\alpha$ $$\braket{\psi \mid \phi} = \alpha$$

Proposal

function (tn::TensorNetwork{A})(x) where {...}
    ...
end
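
For illustration, a method for an operator acting on a state could look like the following sketch (the Operator/State plugs as type parameters and the merge of the two networks are assumptions, not settled API):

function (op::TensorNetwork{<:Operator})(ψ::TensorNetwork{<:State})
    # join both networks over their shared physical indices and contract the result
    contract(merge(op, ψ))
end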

Things to decide

  • What is the behavior for Propertys (i.e. TNs with no open physical indices)? IMO the most intuitive thing would be to perform scalar multiplication ⇒ outer product in TN
  • Can dynamic dispatch be avoided? Plug type should be sufficient but it's unclear how to do type dispatch on it.
    • Maybe @generated functions?
    • Maybe refactor Quantum to account for this information?

Accelerate dependency precompilation in CI workflow

The CI workflow takes between 5 and 10 minutes to run, but it takes only ~5 s on my laptop (60-120x slower on GitHub). This time difference is due to precompilation of dependency packages, specifically precompilation of some particular packages (Makie, GraphMakie, CUDA, ...).

Looks like the Julia caching Action is not caching the precompiled packages.

Implement Matrix Product Operator (MPO)

It needs to implement the following ansatz,

abstract type MatrixProductOperator{B<:Bounds} <: Operator{B} end

Define the MPOSampler (check MPSSampler as an example) and implement the methods:

MatrixProductOperator{Open}(arrays; ...)
MatrixProductOperator{Closed}(arrays; ...)

Base.eltype(::MPOSampler{B}) where {B<:Bounds}
Base.rand(::Type{MatrixProductOperator{B}}, ...)

function Base.rand(rng::Random.AbstractRNG, sampler::MPOSampler{Open,T}) where {T}
function Base.rand(rng::Random.AbstractRNG, sampler::MPOSampler{Closed,T}) where {T}

We will also need to implement the contractpath methods between a MatrixProductOperator and a MatrixProductState, and between two MatrixProductOperators, but I'm still figuring that out with just MatrixProductStates.

Reduce compilation time

Storing index labels as parameters of NamedDimsArray structs is excessive for our use cases. The reason is that different parameters for NamedDimsArray imply different final structs and thus a compilation pass for each combination of parameters.

Although runtime performance is optimal, the compilation time per simulation is excessive.

The solution is to replace NamedDimsArray with some Tensor-like type where index labels are stored dynamically per instance and not per type.
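
A minimal sketch of such a type, where the labels are a field of the instance rather than a type parameter (names illustrative):

struct Tensor{T,N,A<:AbstractArray{T,N}} <: AbstractArray{T,N}
    data::A
    labels::NTuple{N,Symbol}   # labels live in the value, so different labels share one concrete type
end

Base.size(t::Tensor) = size(t.data)
Base.getindex(t::Tensor, i::Vararg{Int}) = t.data[i...]
labels(t::Tensor) = t.labels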

"Multiplicity" parameter to unambiguously select `Tensor`s with same indices

There is an unsolved semantic issue with the refactor: there can be (and actually are) Tensors within a TensorNetwork that are connected by the same indices. This case always appears in the contraction of a closed TN, since the last 2 tensors will necessarily have the same indices.

In other cases this behavior is stranger, but it can still happen.

The way mathematicians have solved it in graph theory is by adding a "multiplicity" parameter to the edges. Since a TensorNetwork can represent both the nominal graph of a TN and its inverse graph, it is mathematically easy to add a multiplicity parameter to the functions that access Tensors.

In Tenet, calling select with "multiplicity" $i$ should select the tensors that match all the indices and choose the $i$-th element in the list, as sketched below. But does this function always compute the list in the same order? There should be some order, but I'm still undecided about what that order should be.
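
A hedged sketch of what a multiplicity-aware select could look like (the ordering of the candidate list is exactly the undecided point):

function select(tn::TensorNetwork, inds, mult::Int)
    # tensors whose labels contain all the requested indices
    candidates = filter(t -> issubset(inds, labels(t)), tensors(tn))
    # the order of `candidates` must be deterministic for `mult` to be meaningful
    return candidates[mult]
end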

@bsc-quantic/software @bsc-quantic/tensor-networks I'm invoking you to give your opinion on this.

Open indices are not shown on plots

Open indices are not shown on plots. This is due to the generation of the graph in the plot! method, because Graphs.SimpleGraph does not support open (dangling) edges.

One solution is to introduce mock nodes with size 0 so that these edges can be created and represented in the plots, as sketched below.
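
A hedged sketch of the mock-node idea on top of Graphs.jl (the openinds iterator and the owner lookup from an open index to the vertex of its tensor are assumptions):

using Graphs

g = SimpleGraph(length(tensors(tn)))   # one vertex per tensor
ghosts = Int[]
for ind in openinds                     # open indices of the network
    add_vertex!(g)                      # invisible mock vertex
    push!(ghosts, nv(g))
    add_edge!(g, owner[ind], nv(g))     # connect it to the tensor that owns the index
end
# at plot time, vertices in `ghosts` get node_size = 0 so they remain invisible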

Testing automatic differentiation of `contract` on `TensorNetwork`s fails

Summary

When running ChainRulesTestUtils.test_frule and ChainRulesTestUtils.test_rrule on the contract function for a TensorNetwork, the test fails. Currently, testing of the contraction of TensorNetworks is disabled.

Example

julia> using Tenet
julia> tn = TensorNetwork([
           Tensor(rand(Complex{Float32}, 2, 2), (:y, :z)),
           Tensor(rand(Complex{Float32}, 2, 2), (:x, :y)),
           Tensor(rand(Complex{Float32}, 2, 2),(:x, :z))
           ])
TensorNetwork{Arbitrary}(#tensors=3, #inds=3)
julia> rrule(contract, tn)
(fill(-3.8262873f0 + 1.4557784f0im), Tenet.var"#contract_pullback#186"{Arbitrary, TensorNetwork{Arbitrary}}(TensorNetwork{Arbitrary}(#tensors=3, #inds=3)))
julia> test_rrule(contract, tn)
test_rrule: contract on TensorNetwork{Arbitrary}: Error During Test at /home/jofrevalles/.julia/packages/ChainRulesTestUtils/lERVj/src/testers.jl:202
  Got exception outside of a @test
  UndefRefError: access to undefined reference
  Stacktrace:
    ...
Test Summary:                                    | Error  Total  Time
test_rrule: contract on TensorNetwork{Arbitrary} |     1      1  0.0s
ERROR: Some tests did not pass: 0 passed, 0 failed, 1 errored, 0 broken.

Enhance `replace!(tn::TensorNetwork, ...)` function for more flexible `Tensor` replacements

Summary

Currently, the replace! function can replace a Tensor within a TensorNetwork, but only under the condition that the new Tensor has the same labels as the original one. This restriction imposes a considerable limitation on our usage of this function, especially in scenarios where we might want to replace a tensor with another one that differs in size and, consequently, in labels.

At the moment, this implementation could potentially introduce problems. Specifically, the TensorNetwork has an inds field that links indices to tensors. In its current form, we cannot add a new index to a TensorNetwork because of the immutable nature of this struct.

The recent refactor presented in PR #55 removes the inds field, allowing a more generalized replacement of tensors like the one just explained.

Implement `tensors` function for `Operator` types

Currently the tensors function is not implemented for Operator types. @mofeing argues that it is not easy to implement tensors for Operators when the input and output sites do not match (e.g. SpacedMatrixProductOperator). I think that currently both MPO and PEPO only support the same input/output sites, so maybe we could just implement that (as #78 intended).

Refactor TN `Ansatz`es to traits

Using type-parameters for encoding TN ansatz is not going to scale well.

  1. Composite incurs a lot of compilation.
  2. Runtime dispatch is more costly than conditionals.

Proposal

The idea is to replace MatrixProduct, ProjectedEntangledPair, ... with Holy traits.

abstract type MatrixProduct{P,B} <: Quantum where {P<:Plug,B<:Boundary} end

would be rewritten as

abstract type Ansatz end
struct MatrixProduct{P<:Plug,B<:Boundary} <: Ansatz end

Also, the current Ansatz type would be renamed, because it would no longer express what we mean by it (i.e. the form of the graph). I suggest renaming it to Domain because that's what its fields are going to vary on.
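
As a hedged illustration of the dispatch pattern this enables (all names below are hypothetical stand-ins, not the actual Tenet types):

# stand-ins for the existing Plug and Boundary types
abstract type Plug end
struct State <: Plug end
abstract type Boundary end
struct Open <: Boundary end

abstract type Ansatz end
struct MatrixProduct{P<:Plug,B<:Boundary} <: Ansatz end

struct MyMPS end                                  # some concrete tensor-network type
ansatz(::MyMPS) = MatrixProduct{State,Open}()     # the Holy-trait accessor

# methods dispatch on the trait value instead of on a type parameter of the network
contract(tn) = contract(ansatz(tn), tn)
contract(::MatrixProduct{State,Open}, tn) = "MPS-specific contraction"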

Links

https://invenia.github.io/blog/2019/11/06/julialang-features-part-2/
https://www.ahsmart.com/pub/holy-traits-design-patterns-and-best-practice-book/

CI workflow fails on Julia nightly

Looks like there is some method ambiguity introduced in Julia nightly. Specifically, the problem is in the == function with arguments Tensor and SparseArrays.ReadOnly.

I decided to temporarily disable method ambiguity checks in efe0f35.

Reduce package load time by lazily loading Makie

Currently, Tenet takes about 11 s to load on my laptop (Apple M1 Pro). Julia 1.9 should provide some improvements on native code caching. Meanwhile, we could lazily load packages to avoid unnecessary load overhead.

An analysis on the package load time using InteractiveUtils.@time_imports (see below) shows that the Makie package (used for visualization of tensor networks) takes around half of the time.

Since visualization is not critical and Makie is a popular package that people will most probably load in their sessions anyway, we could use Requires.jl to lazily load the Makie-dependent code. Furthermore, Package Extensions should land in Julia 1.9, so Requires would be a dependency only for Julia 1.8.
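
A hedged sketch of how the lazy load could look with Requires.jl (the file name is illustrative and the UUID must match the one in Makie's Project.toml); the full @time_imports breakdown follows below:

using Requires

function __init__()
    @require Makie="ee78f7c6-11fb-53f2-987a-cfe4a2b5a57a" begin
        include("visualization.jl")   # Makie-dependent plotting code, loaded only when Makie is
    end
end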

julia> using InteractiveUtils
julia> @time_imports using Tenet
     16.4 ms  MacroTools
      1.2 ms  SimpleTraits
      2.0 ms  StaticArraysCore
    396.0 ms  StaticArrays
      3.1 ms  ArnoldiMethod
      0.5 ms  Compat
      0.4 ms  Inflate
     56.2 ms  DataStructures
     31.4 ms  Graphs
      1.2 ms  Contour
      0.6 ms  LaTeXStrings
     13.1 ms  AbstractTrees
     35.0 ms  SIMD
      0.7 ms  ScanByte
      1.3 ms  TranscodingStreams
      4.3 ms  Automa
     19.5 ms  Preferences
      0.3 ms  JLLWrappers
    158.5 ms  Bzip2_jll 99.58% compilation time (100% recompilation)
      0.8 ms  Zlib_jll
      0.5 ms  FreeType2_jll
      2.9 ms  CEnum
      3.9 ms  FreeType
     60.6 ms  FixedPointNumbers
      0.1 ms  Reexport
     55.6 ms  ColorTypes 6.46% compilation time
    220.6 ms  Colors
      4.0 ms  IrrationalConstants
     75.4 ms  ChainRulesCore
      2.4 ms  DocStringExtensions 54.33% compilation time
      9.9 ms  ChangesOfVariables
      1.2 ms  InverseFunctions
      0.5 ms  LogExpFunctions
      0.8 ms  OpenLibm_jll
      6.3 ms  CompilerSupportLibraries_jll
      0.5 ms  OpenSpecFun_jll
     12.1 ms  SpecialFunctions
      0.2 ms  TensorCore
     97.2 ms  ColorVectorSpace 3.21% compilation time
      0.2 ms  DataValueInterfaces
      1.2 ms  DataAPI
      0.1 ms  IteratorInterfaceExtensions
      0.1 ms  TableTraits
     12.8 ms  Tables
      0.3 ms  Adapt
      2.6 ms  GPUArraysCore
     19.1 ms  StructArrays
     21.9 ms  IterTools
      2.1 ms  Extents
      0.8 ms  GeoInterface
      1.4 ms  EarCut_jll
    419.7 ms  GeometryBasics
     68.7 ms  FreeTypeAbstraction 11.94% compilation time
      0.9 ms  UnicodeFun
      0.2 ms  Scratch
      0.3 ms  RelocatableFolders
     84.4 ms  MathTeXEngine
      1.2 ms  FriBidi_jll
      1.0 ms  Libiconv_jll
      0.8 ms  Libffi_jll
      0.8 ms  XML2_jll
      1.1 ms  Gettext_jll
      0.7 ms  PCRE2_jll
      4.4 ms  Glib_jll
      0.5 ms  Pixman_jll
      0.3 ms  libpng_jll
      0.1 ms  Libuuid_jll
      0.3 ms  Expat_jll
      2.9 ms  Fontconfig_jll 69.27% compilation time
      0.5 ms  LZO_jll
      1.0 ms  Cairo_jll
      0.3 ms  Graphite2_jll
      0.7 ms  HarfBuzz_jll
      0.6 ms  libass_jll
      0.3 ms  libfdk_aac_jll
      0.5 ms  LAME_jll
      0.6 ms  Ogg_jll
      0.6 ms  libvorbis_jll
      0.5 ms  libaom_jll
      0.5 ms  x264_jll
      0.6 ms  x265_jll
      1.0 ms  OpenSSL_jll
      0.3 ms  Opus_jll
      4.8 ms  FFMPEG_jll
      0.2 ms  FFMPEG
     11.5 ms  Observables
      0.1 ms  SnoopPrecompile
     13.6 ms  ColorSchemes
    391.7 ms  PlotUtils
     48.7 ms  Parsers 8.67% compilation time
     19.0 ms  JSON
      0.2 ms  ColorBrewer
      3.5 ms  Packing
      0.1 ms  SignedDistanceFields
      7.6 ms  MakieCore
     68.2 ms  OffsetArrays
      0.5 ms  SortingAlgorithms
      7.1 ms  Missings
      0.3 ms  StatsAPI
     17.8 ms  StatsBase
     17.5 ms  PDMats
      0.6 ms  Rmath_jll
     45.7 ms  Rmath 84.48% compilation time
      0.2 ms  NaNMath
      1.8 ms  Calculus
     49.0 ms  DualNumbers
      0.8 ms  HypergeometricFunctions
      4.3 ms  StatsFuns
      2.4 ms  QuadGK
    143.5 ms  FillArrays
      1.1 ms  DensityInterface
    180.1 ms  Distributions
      6.6 ms  WoodburyMatrices
      0.3 ms  Requires
    215.5 ms  Ratios 93.28% compilation time (50% recompilation)
      0.2 ms  AxisAlgorithms
     35.5 ms  Interpolations 13.22% compilation time (100% recompilation)
      8.5 ms  AbstractFFTs
      0.9 ms  FFTW_jll
    392.9 ms  FFTW 3.15% compilation time
      2.0 ms  KernelDensity
      0.5 ms  isoband_jll
      0.1 ms  Isoband
      0.1 ms  PolygonOps
    105.1 ms  GridLayoutBase
    658.2 ms  FileIO 0.64% compilation time (100% recompilation)
      2.7 ms  IndirectArrays
      4.4 ms  LazyModules 68.02% compilation time
      0.4 ms  ImageIO
      1.7 ms  TriplotBase
      1.1 ms  Qhull_jll
      0.7 ms  QhullMiniWrapper_jll
      0.1 ms  MiniQhull
     25.1 ms  IntervalSets
      0.2 ms  Showoff
      0.5 ms  Formatting
      0.3 ms  Match
      3.5 ms  Animations
   5660.4 ms  Makie 0.17% compilation time
     21.7 ms  NetworkLayout 60.69% compilation time (44% recompilation)
    164.2 ms  GraphMakie
      5.8 ms  Combinatorics
    142.3 ms  OptimizedEinsum
      1.0 ms  CovarianceEstimation
     71.8 ms  NamedDims 5.64% compilation time (100% recompilation)
      1.5 ms  TupleTools
      0.2 ms  BatchedRoutines
      0.2 ms  BetterExp
      0.3 ms  Suppressor
     14.1 ms  OMEinsumContractionOrders 30.41% compilation time (100% recompilation)
     14.5 ms  OMEinsum 30.14% compilation time (100% recompilation)
     21.2 ms  SimpleWeightedGraphs
      5.9 ms  Pango_jll
      2.4 ms  Graphics
      3.0 ms  Cairo
      3.7 ms  Media 54.67% compilation time
    260.5 ms  Juno
      2.8 ms  JpegTurbo_jll
      1.6 ms  LERC_jll
      1.6 ms  Zstd_jll
      1.3 ms  Libtiff_jll
      1.5 ms  gdk_pixbuf_jll
      4.9 ms  Librsvg_jll
      3.1 ms  Rsvg 72.57% compilation time
    510.6 ms  Luxor 35.01% compilation time (32% recompilation)
     11.2 ms  Quac
    190.8 ms  Tenet

Adjoint `Tensor` contraction produces incoherent results

Summary

The contraction of an adjoint Tensor produces results that are inconsistent with those of a normal Tensor. Specifically, it returns data of Array type instead of Tensor type. I believe that is also why PR #39 is failing on some tests.

Example

julia> using Tenet
julia> b = Tensor(rand(2, 2), (:i, :j))
2×2 Tensor{Float64, 2, Matrix{Float64}}:
 0.604068  0.472015
 0.732286  0.0951967
julia> contract(5,b)
2×2 Tensor{Float64, 2, Matrix{Float64}}:
 3.02034  2.36008
 3.66143  0.475984
julia> contract(5,b')
2×2 Matrix{Float64}:
 3.02034  3.66143
 2.36008  0.475984

I believe that this is a problem with the adjoint Tensor not being a Tensor type:

julia> using Tenet
julia> b = Tensor(rand(2, 2), (:i, :j))
julia> b' isa Tensor
false

Regression on 3D visualization

Plotting directly in 3D using Spring(dim=3) from NetworkLayout no longer works (the network is no longer rendered in 3D).

using Tenet
using Makie
using GLMakie
using NetworkLayout

tn = rand(GenericTensorNetwork, 60, 3)

plot(tn, layout=Spring(dim=3))
