akg's People

Contributors

aitianshi, aling0, anyrenwei, brandimarte, gaoxiong-1, harenome, it-is-a-robot, jiaoy1224, lnellos, mindspore-bot, mondayyuan, n00344539, nicholasyanghaoran, sijiayang, spinech0, xsmq, yuzhu-2019, zhengzuohe, zhiqwang


akg's Issues

using tvm.hybrid.script as front end

Hi, the following is my code (it is actually generated by my own frontend, but that is irrelevant to my problem):

import akg
from akg import tvm
from akg.tvm.hybrid import script
import numpy as np
@script
def Conv2D(data,conv1_weight):
  #s=0.000000
  PaddedData = output_tensor((1,6,14,14),"float32")
  conv1 = output_tensor((1,4,12,12),"float32")
  for n in range(1):
    for i0 in range(6):
      for i1 in range(14):
        for i2 in range(14):
          PaddedData[n,i0,i1,i2] = 0.000000 if i1<1 or 1+12<=i1 or i2<1 or 1+12<=i2 else data[n,i0,i1-1,i2-1]
  for n in range(1):
    for c in range(4):
      for i in range(12):
        for j in range(12):
          conv1[n,c,i,j] = 0.000000
          for ric in range(6):
            for rkh in range(3):
              for rkw in range(3):
                #s = 0.000000 if i+rkh<1 or 1+12<=i+rkh or j+rkw<1 or 1+12<=j+rkw else data[n,ric,i+rkh-1,j+rkw-1]
                #conv1[n,c,i,j] = conv1[n,c,i,j]+ s*conv1_weight[c,ric,rkh,rkw]
                conv1[n,c,i,j] = conv1[n,c,i,j]+ PaddedData[n,ric,i*1+rkh,j*1+rkw]*conv1_weight[c,ric,rkh,rkw]
  return conv1
data = tvm.placeholder(shape=(1,6,12,12),name="data",dtype="float32")
conv1_weight = tvm.placeholder(shape=(4,6,3,3),name="conv1_weight",dtype="float32")
res = Conv2D(data,conv1_weight)
sch = akg.tvm.create_schedule(res.op)
mod = akg.build(sch,(data,conv1_weight,res),'cuda',[], name='myfunc', attrs={}, polyhedral=True, binds=None)

Basically, this is a 2-D convolution written with tvm.hybrid.script. Parameters: data (1,6,12,12) (NCHW), conv1_weight (4,6,3,3),
stride=1, padding=1. And an error occurred:

[ERROR] AKG:2021-04-26-18:41:22.497.033 [scop_info.cc:1207] [poly] Hoist footprint of tensor PaddedData has no buffer definition

I'm a little confused about how to write it correctly, because I'm new to this and am learning from the hybrid tutorial at docs.tvm.org.
Of course, if I inline PaddedData into the main loop, as the comments in the code above do, it runs normally. But I noticed that AKG can do AutoInline and AutoFuse, so I would rather not implement AutoInline in my own frontend.
Some other errors also occur while I'm making adjustments to this function, such as:

[ERROR] AKG:2021-04-26-18:24:03.033.999 [storage_flatten.cc:133] [pass] Check failed: it != buf_map_.end(): Cannot find allocated buffer for placeholder(Conv2D_v1, 0x3804790)
(when trying to return both PaddedData and conv1)

[ERROR] AKG:2021-04-26-17:38:50.631.723 [tiling_utils.cc:98] [tiling] L0 value of axis 0_11 has not been tiled.
free(): invalid next size (fast)
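For what it's worth, the separate-PaddedData formulation and the inlined form shown in the comments compute the same thing. A plain-Python sketch of both (not AKG or hybrid-script code; the toy shapes and helper names are made up for illustration):

```python
# Two equivalent ways to express a padded 3x3 convolution (stride=1, padding=1),
# mirroring the two variants in the hybrid script above.

def conv_staged(data, weight, c_in, c_out, h, k=3):
    # Stage 1: build an explicitly padded buffer of shape (c_in, h+2, h+2),
    # like the PaddedData tensor in the hybrid script.
    p = h + 2
    padded = [[[data[c][i - 1][j - 1] if 1 <= i <= h and 1 <= j <= h else 0.0
                for j in range(p)] for i in range(p)] for c in range(c_in)]
    # Stage 2: the reduction reads from the padded buffer at (i+kh, j+kw).
    return [[[sum(padded[rc][i + kh][j + kw] * weight[co][rc][kh][kw]
                  for rc in range(c_in) for kh in range(k) for kw in range(k))
              for j in range(h)] for i in range(h)] for co in range(c_out)]

def conv_inlined(data, weight, c_in, c_out, h, k=3):
    # Padding folded into the reduction: out-of-range reads contribute 0,
    # like the commented-out version in the hybrid script.
    def at(c, i, j):
        return data[c][i][j] if 0 <= i < h and 0 <= j < h else 0.0
    return [[[sum(at(rc, i + kh - 1, j + kw - 1) * weight[co][rc][kh][kw]
                  for rc in range(c_in) for kh in range(k) for kw in range(k))
              for j in range(h)] for i in range(h)] for co in range(c_out)]
```

Both return identical results for any input, which is why auto-inlining the padding stage should be a semantics-preserving transformation.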


Thanks!

Replace ScheduleTree with my own Schedule Tree

Hi, I'm implementing a frontend which uses its own APIs to generate a ScheduleTree in isl, and I want to use AKG as my backend to generate GPU or Ascend code. So I want to run the "AutoPoly" stage with my own schedule tree, which is identical to AKG's generated ScheduleTree, and then run the subsequent passes as AKG does. I replaced

MakeScheduleTree()

in scop_builder.cc with my ScheduleTree string. But I noticed that its original implementation does a lot to initialize scop_info, which is critical to Transform() and GenHalide() in the AutoPoly stage. (I learned that by reading the source code.)
And I don't want to generate the Stmt via schedule.ScheduleOps, because if I did that, there would be no need to generate the ScheduleTree on my own.

Furthermore, the original ScheduleTree generation, including the scop_info assignment, depends on the Stmt, so I'm a little confused about how to skip the Stmt and still assign scop_info. Is there a possible way to assign scop_info? If not, are there any other tips for implementing my idea? Or what should I learn in order to do that?
Thanks a lot.

fail to run python test_all.py all

Environment

Hardware Environment (GPU): RTX1060

Software Environment:

  • AKG version: source
  • Python version: 3.7.5
  • OS platform and distribution: Linux Ubuntu 18.04
  • GCC/Compiler version: gcc 7.5.0
  • cmake version: 3.19.6

Describe the current behavior

Operator: fused_relu_grad
Time of auto schedule:
func_time_required func:random_gaussian, running:16.934264 seconds
func_time_required func:random_gaussian, running:17.178366 seconds
func_time_required func:random_gaussian, running:17.372735 seconds
[ERROR] AKG:2021-03-15-13:49:27.480.610 [unify_loop_vars.cc:111] [pass] found undefined variable: threadIdx.x
Stack trace:
[bt] (0) /akg/build/libakg.so(akg::ir::UnifyLoopVarsMutator::Mutate_(air::Variable const*, air::Expr const&)+0x3f7) [0x7f03ee75a027]
[bt] (1) /akg/build/libakg.so(+0xed5bde) [0x7f03edcc0bde]
[bt] (2) /akg/build/libakg.so(air::NodeFunctor<air::Expr (air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*)>::operator()(air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*) const+0x62) [0x7f03ed4d60e2]
[bt] (3) /akg/build/libakg.so(air::ir::IRMutator::Mutate(air::Expr)+0x5d) [0x7f03ed4d625d]
[bt] (4) /akg/build/libakg.so(air::ir::IRMutator::Mutate_(air::ir::Add const*, air::Expr const&)+0x88) [0x7f03edcc3d68]
[bt] (5) /akg/build/libakg.so(+0xed5d1e) [0x7f03edcc0d1e]
[bt] (6) /akg/build/libakg.so(air::NodeFunctor<air::Expr (air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*)>::operator()(air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*) const+0x62) [0x7f03ed4d60e2]
[bt] (7) /akg/build/libakg.so(air::ir::IRMutator::Mutate(air::Expr)+0x5d) [0x7f03ed4d625d]
[bt] (8) /akg/build/libakg.so(air::ir::IRMutator::Mutate_(air::ir::Load const*, air::Expr const&)+0x51) [0x7f03edcc27e1]

Operator: fused_bn_update_grad
Time of auto schedule:
[ERROR] AKG:2021-03-15-13:49:27.718.533 [unify_loop_vars.cc:111] [pass] found undefined variable: threadIdx.x
Stack trace:
[bt] (0) /akg/build/libakg.so(akg::ir::UnifyLoopVarsMutator::Mutate_(air::Variable const*, air::Expr const&)+0x3f7) [0x7f03ee75a027]
[bt] (1) /akg/build/libakg.so(+0xed5bde) [0x7f03edcc0bde]
[bt] (2) /akg/build/libakg.so(air::NodeFunctor<air::Expr (air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*)>::operator()(air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*) const+0x62) [0x7f03ed4d60e2]
[bt] (3) /akg/build/libakg.so(air::ir::IRMutator::Mutate(air::Expr)+0x5d) [0x7f03ed4d625d]
[bt] (4) /akg/build/libakg.so(air::ir::IRMutator::Mutate_(air::ir::Add const*, air::Expr const&)+0x88) [0x7f03edcc3d68]
[bt] (5) /akg/build/libakg.so(+0xed5d1e) [0x7f03edcc0d1e]
[bt] (6) /akg/build/libakg.so(air::NodeFunctor<air::Expr (air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*)>::operator()(air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*) const+0x62) [0x7f03ed4d60e2]
[bt] (7) /akg/build/libakg.so(air::ir::IRMutator::Mutate(air::Expr)+0x5d) [0x7f03ed4d625d]
[bt] (8) /akg/build/libakg.so(air::ir::IRMutator::Mutate_(air::ir::Load const*, air::Expr const&)+0x51) [0x7f03edcc27e1]

Operator: fused_mul_div_rsqrt_mul_isfinite_red
Time of auto schedule:
func_time_required func:random_gaussian, running:0.140863 seconds
func_time_required func:random_gaussian, running:0.150357 seconds
[ERROR] AKG:2021-03-15-13:49:28.102.035 [unify_loop_vars.cc:111] [pass] found undefined variable: threadIdx.x
Stack trace:
[bt] (0) /akg/build/libakg.so(akg::ir::UnifyLoopVarsMutator::Mutate_(air::Variable const*, air::Expr const&)+0x3f7) [0x7f03ee75a027]
[bt] (1) /akg/build/libakg.so(+0xed5bde) [0x7f03edcc0bde]
[bt] (2) /akg/build/libakg.so(air::NodeFunctor<air::Expr (air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*)>::operator()(air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*) const+0x62) [0x7f03ed4d60e2]
[bt] (3) /akg/build/libakg.so(air::ir::IRMutator::Mutate(air::Expr)+0x5d) [0x7f03ed4d625d]
[bt] (4) /akg/build/libakg.so(air::ir::IRMutator::Mutate_(air::ir::Load const*, air::Expr const&)+0x51) [0x7f03edcc27e1]
[bt] (5) /akg/build/libakg.so(+0xed5c2e) [0x7f03edcc0c2e]
[bt] (6) /akg/build/libakg.so(air::NodeFunctor<air::Expr (air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*)>::operator()(air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*) const+0x62) [0x7f03ed4d60e2]
[bt] (7) /akg/build/libakg.so(air::ir::IRMutator::Mutate(air::Expr)+0x5d) [0x7f03ed4d625d]
[bt] (8) /akg/build/libakg.so(air::ir::IRMutator::Mutate_(air::ir::Mul const*, air::Expr const&)+0x53) [0x7f03edcc40f3]

Run op abs error
using auto schedule:
Traceback (most recent call last):
File "test_all.py", line 529, in <module>
op(poly_sch=True, fuzz_shape=fuzz_shape)
File "test_all.py", line 274, in abs
test_ms_abs((1024, 1024), "float32", poly_sch=poly_sch)
File "/akg/tests/operators/gpu/test_ms_abs.py", line 30, in test_ms_abs
mod = utils.op_build_test(abs_data, [shape], [dtype], attrs={"target": "cuda"}, kernel_name="abs")
File "/akg/python/akg/utils/kernel_exec.py", line 96, in wrapper
result = func_name(*args, **kwargs)
File "/akg/python/akg/utils/kernel_exec.py", line 622, in op_build_test
polyhedral, tuning)
File "/akg/python/akg/utils/kernel_exec.py", line 1012, in op_build
dump_code, tuning)
File "/akg/python/akg/utils/kernel_exec.py", line 913, in create_gpu_mod
binds=binds)
File "/akg/python/akg/utils/validation_check.py", line 135, in in_wrapper
return func(*args, **kwargs)
File "/akg/python/akg/build_module.py", line 142, in build
attrs=attrs, polyhedral=polyhedral, target=target)
File "/akg/python/akg/utils/validation_check.py", line 135, in in_wrapper
return func(*args, **kwargs)
File "/akg/python/akg/build_module.py", line 135, in build_to_func
polyhedral, target, cfg)
File "/akg/third_party/incubator-tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /akg/build/libakg.so(air::ir::IRMutator::Mutate_(air::ir::Load const*, air::Expr const&)+0x51) [0x7f03edcc27e1]
[bt] (7) /akg/build/libakg.so(air::ir::IRMutator::Mutate(air::Expr)+0x5d) [0x7f03ed4d625d]
[bt] (6) /akg/build/libakg.so(air::NodeFunctor<air::Expr (air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*)>::operator()(air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*) const+0x62) [0x7f03ed4d60e2]
[bt] (5) /akg/build/libakg.so(+0xed5d1e) [0x7f03edcc0d1e]
[bt] (4) /akg/build/libakg.so(air::ir::IRMutator::Mutate_(air::ir::Add const*, air::Expr const&)+0x88) [0x7f03edcc3d68]
[bt] (3) /akg/build/libakg.so(air::ir::IRMutator::Mutate(air::Expr)+0x5d) [0x7f03ed4d625d]
[bt] (2) /akg/build/libakg.so(air::NodeFunctor<air::Expr (air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*)>::operator()(air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*) const+0x62) [0x7f03ed4d60e2]
[bt] (1) /akg/build/libakg.so(+0xed5bde) [0x7f03edcc0bde]
[bt] (0) /akg/build/libakg.so(akg::ir::UnifyLoopVarsMutator::Mutate_(air::Variable const*, air::Expr const&)+0x3f7) [0x7f03ee75a027]
File "/home/xh/projects/akg-binary/src/pass/unify_loop_vars.cc", line 111
TVMError: found undefined variable: threadIdx.x

Run op add error
using auto schedule:
Traceback (most recent call last):
File "test_all.py", line 529, in <module>
op(poly_sch=True, fuzz_shape=fuzz_shape)
File "test_all.py", line 78, in add
test_ms_add((1, 1024), (1, 1024), 'float32', poly_sch=poly_sch)
File "/akg/tests/operators/gpu/test_ms_add.py", line 31, in test_ms_add
mod = utils.op_build_test(add, (shape1, shape2), (dtype, dtype), kernel_name="add", attrs={"target": "cuda"})
File "/akg/python/akg/utils/kernel_exec.py", line 96, in wrapper
result = func_name(*args, **kwargs)
File "/akg/python/akg/utils/kernel_exec.py", line 622, in op_build_test
polyhedral, tuning)
File "/akg/python/akg/utils/kernel_exec.py", line 1012, in op_build
dump_code, tuning)
File "/akg/python/akg/utils/kernel_exec.py", line 913, in create_gpu_mod
binds=binds)
File "/akg/python/akg/utils/validation_check.py", line 135, in in_wrapper
return func(*args, **kwargs)
File "/akg/python/akg/build_module.py", line 142, in build
attrs=attrs, polyhedral=polyhedral, target=target)
File "/akg/python/akg/utils/validation_check.py", line 135, in in_wrapper
return func(*args, **kwargs)
File "/akg/python/akg/build_module.py", line 135, in build_to_func
polyhedral, target, cfg)
File "/akg/third_party/incubator-tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /akg/build/libakg.so(air::ir::IRMutator::Mutate_(air::ir::Add const*, air::Expr const&)+0x53) [0x7f03edcc3d33]
[bt] (7) /akg/build/libakg.so(air::ir::IRMutator::Mutate(air::Expr)+0x5d) [0x7f03ed4d625d]
[bt] (6) /akg/build/libakg.so(air::NodeFunctor<air::Expr (air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*)>::operator()(air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*) const+0x62) [0x7f03ed4d60e2]
[bt] (5) /akg/build/libakg.so(+0xed5c2e) [0x7f03edcc0c2e]
[bt] (4) /akg/build/libakg.so(air::ir::IRMutator::Mutate_(air::ir::Load const*, air::Expr const&)+0x51) [0x7f03edcc27e1]
[bt] (3) /akg/build/libakg.so(air::ir::IRMutator::Mutate(air::Expr)+0x5d) [0x7f03ed4d625d]
[bt] (2) /akg/build/libakg.so(air::NodeFunctor<air::Expr (air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*)>::operator()(air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*) const+0x62) [0x7f03ed4d60e2]
[bt] (1) /akg/build/libakg.so(+0xed5bde) [0x7f03edcc0bde]
[bt] (0) /akg/build/libakg.so(akg::ir::UnifyLoopVarsMutator::Mutate_(air::Variable const*, air::Expr const&)+0x3f7) [0x7f03ee75a027]
File "/home/xh/projects/akg-binary/src/pass/unify_loop_vars.cc", line 111
TVMError: found undefined variable: threadIdx.x

Steps to reproduce the issue

  1. get docker 1.1.2
  2. git clone akg repo
  3. build akg
  4. cd /akg/tests/operators/gpu
  5. python test_all.py all

Related log / screenshot

tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /akg/build/libakg.so(air::ir::IRMutator::Mutate_(air::ir::Add const*, air::Expr const&)+0x53) [0x7f03edcc3d33]
[bt] (7) /akg/build/libakg.so(air::ir::IRMutator::Mutate(air::Expr)+0x5d) [0x7f03ed4d625d]
[bt] (6) /akg/build/libakg.so(air::NodeFunctor<air::Expr (air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*)>::operator()(air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*) const+0x62) [0x7f03ed4d60e2]
[bt] (5) /akg/build/libakg.so(+0xed5c2e) [0x7f03edcc0c2e]
[bt] (4) /akg/build/libakg.so(air::ir::IRMutator::Mutate_(air::ir::Load const*, air::Expr const&)+0x51) [0x7f03edcc27e1]
[bt] (3) /akg/build/libakg.so(air::ir::IRMutator::Mutate(air::Expr)+0x5d) [0x7f03ed4d625d]
[bt] (2) /akg/build/libakg.so(air::NodeFunctor<air::Expr (air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*)>::operator()(air::runtime::ObjectRef const&, air::Expr const&, air::ir::IRMutator*) const+0x62) [0x7f03ed4d60e2]
[bt] (1) /akg/build/libakg.so(+0xed5bde) [0x7f03edcc0bde]
[bt] (0) /akg/build/libakg.so(akg::ir::UnifyLoopVarsMutator::Mutate_(air::Variable const*, air::Expr const&)+0x3f7) [0x7f03ee75a027]
File "/home/xh/projects/akg-binary/src/pass/unify_loop_vars.cc", line 111
TVMError: found undefined variable: threadIdx.x

Build script doesn't work with GPU backend

Environment

Hardware Environment (Ascend/GPU/CPU): GPU

Software Environment:

  • AKG version (source or binary): master branch
  • Python version (e.g., Python 3.7.5):
  • OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • GCC/Compiler version (if compiled from source): gcc (GCC) 7.3.0

Describe the current behavior

When I tried to build the akg package from source, the build script didn't work when I ran bash ./build.sh -t gpu following the installation guidelines. Here is the error log:

mkdir /root/workspace/akg/build
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Error at CMakeLists.txt:32 (message):
  Please export CMAKE_INCLUDE_PATH to directory where gmp.h locates at.

Describe the expected behavior

I want to know how to build from source successfully.
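Not from the AKG docs, just a workaround sketch: the CMake error above means gmp.h was not found, so installing GMP's development headers and exporting the paths before rebuilding may help. The paths below are assumptions for a typical Ubuntu x86-64 machine; adjust them to wherever gmp.h actually lives on your system.

```shell
# Assumed Ubuntu paths; adjust if gmp.h is installed elsewhere.
sudo apt-get install -y libgmp-dev   # provides /usr/include/gmp.h on Ubuntu

# Tell CMake where gmp.h (and the library) are, as the error message asks.
export CMAKE_INCLUDE_PATH=/usr/include:${CMAKE_INCLUDE_PATH}
export CMAKE_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:${CMAKE_LIBRARY_PATH}

# Re-run the build from a clean build directory.
rm -rf build
bash ./build.sh -t gpu
```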


cce runtime error:errno=145 segmentation fault

Hi everyone. I'm now running my demo code with the cce target, but an error occurred like this:

[ERROR] RUNTIME(4472)kernal task happen error, error code=0x26, [aicore exception].
[ERROR] RUNTIME(4472)aicore kernel execute failed, device_id=0, stream_id=1, task_id=0, fault kernel_name=myfunc_kernel0, func_name=myfunc_kernel0
[ERROR] AKG:2021-05-05-12:21:33.569.843 [cce_module.cc:232] [cce] Check failed: e == RT_ERROR_NONE: Cce runtime error: errno=145, info=Unknow cce error code
Stack trace:
  [bt] (0) /home/HwHiAiUser/akg/build/libakg.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x58) [0xfffef59e1cb4]
  [bt] (1) /home/HwHiAiUser/akg/build/libakg.so(air::runtime::CceWrappedFunc::operator()(air::runtime::TVMArgs, air::runtime::TVMRetValue*, void**, long*, unsigned long) const+0x610) [0xfffef6f061f4]
  [bt] (2) /home/HwHiAiUser/akg/build/libakg.so(air::runtime::detail::PackFuncVoidAddr_<4, air::runtime::CceWrappedFunc>(air::runtime::CceWrappedFunc, std::vector<air::runtime::detail::ArgConvertCode, std::allocator<air::runtime::detail::ArgConvertCode> > const&, int)::{lambda(air::runtime::TVMArgs, air::runtime::TVMRetValue*)#1}::operator()(air::runtime::TVMArgs, air::runtime::TVMRetValue*) const+0x2d0) [0xfffef6f0885c]
  [bt] (3) /home/HwHiAiUser/akg/build/libakg.so(std::_Function_handler<void (air::runtime::TVMArgs, air::runtime::TVMRetValue*), air::runtime::detail::PackFuncVoidAddr_<4, air::runtime::CceWrappedFunc>(air::runtime::CceWrappedFunc, std::vector<air::runtime::detail::ArgConvertCode, std::allocator<air::runtime::detail::ArgConvertCode> > const&, int)::{lambda(air::runtime::TVMArgs, air::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, air::runtime::TVMArgs&&, air::runtime::TVMRetValue*&&)+0x7c) [0xfffef6f0d8bc]
  [bt] (4) /home/HwHiAiUser/akg/build/libakg.so(std::function<void (air::runtime::TVMArgs, air::runtime::TVMRetValue*)>::operator()(air::runtime::TVMArgs, air::runtime::TVMRetValue*) const+0x78) [0xfffef5a2e6b8]
  [bt] (5) /home/HwHiAiUser/akg/build/libakg.so(air::runtime::PackedFunc::CallPacked(air::runtime::TVMArgs, air::runtime::TVMRetValue*) const+0x5c) [0xfffef5af9200]
  [bt] (6) /home/HwHiAiUser/akg/build/libakg.so(air::runtime::StackVM::Run(air::runtime::StackVM::State*) const+0x14d0) [0xfffef7727b30]
  [bt] (7) /home/HwHiAiUser/akg/build/libakg.so(air::runtime::StackVM::Run(air::runtime::TVMArgs const&, air::runtime::ModuleNode*) const+0x108) [0xfffef7726168]
  [bt] (8) /home/HwHiAiUser/akg/build/libakg.so(air::runtime::StackVMModuleNode::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, air::runtime::ObjectPtr<air::runtime::Object> const&)::{lambda(air::runtime::TVMArgs, air::runtime::TVMRetValue*)#1}::operator()(air::runtime::TVMArgs, air::runtime::TVMRetValue*) const+0x48) [0xfffef772c1b4]

Traceback (most recent call last):

  File "vector_add.py", line 30, in <module>
    mod(a, b, c)

  File "/home/HwHiAiUser/akg/third_party/incubator-tvm/python/tvm/_ffi/function.py", line 144, in __call__
    return f(*args)

  File "/home/HwHiAiUser/akg/third_party/incubator-tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
    raise get_last_ffi_error()

tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) /home/HwHiAiUser/akg/build/libakg.so(air::runtime::StackVMModuleNode::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, air::runtime::ObjectPtr<air::runtime::Object> const&)::{lambda(air::runtime::TVMArgs, air::runtime::TVMRetValue*)#1}::operator()(air::runtime::TVMArgs, air::runtime::TVMRetValue*) const+0x48) [0xfffef772c1b4]
  [bt] (7) /home/HwHiAiUser/akg/build/libakg.so(air::runtime::StackVM::Run(air::runtime::TVMArgs const&, air::runtime::ModuleNode*) const+0x108) [0xfffef7726168]
  [bt] (6) /home/HwHiAiUser/akg/build/libakg.so(air::runtime::StackVM::Run(air::runtime::StackVM::State*) const+0x14d0) [0xfffef7727b30]
  [bt] (5) /home/HwHiAiUser/akg/build/libakg.so(air::runtime::PackedFunc::CallPacked(air::runtime::TVMArgs, air::runtime::TVMRetValue*) const+0x5c) [0xfffef5af9200]
  [bt] (4) /home/HwHiAiUser/akg/build/libakg.so(std::function<void (air::runtime::TVMArgs, air::runtime::TVMRetValue*)>::operator()(air::runtime::TVMArgs, air::runtime::TVMRetValue*) const+0x78) [0xfffef5a2e6b8]
  [bt] (3) /home/HwHiAiUser/akg/build/libakg.so(std::_Function_handler<void (air::runtime::TVMArgs, air::runtime::TVMRetValue*), air::runtime::detail::PackFuncVoidAddr_<4, air::runtime::CceWrappedFunc>(air::runtime::CceWrappedFunc, std::vector<air::runtime::detail::ArgConvertCode, std::allocator<air::runtime::detail::ArgConvertCode> > const&, int)::{lambda(air::runtime::TVMArgs, air::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, air::runtime::TVMArgs&&, air::runtime::TVMRetValue*&&)+0x7c) [0xfffef6f0d8bc]
  [bt] (2) /home/HwHiAiUser/akg/build/libakg.so(air::runtime::detail::PackFuncVoidAddr_<4, air::runtime::CceWrappedFunc>(air::runtime::CceWrappedFunc, std::vector<air::runtime::detail::ArgConvertCode, std::allocator<air::runtime::detail::ArgConvertCode> > const&, int)::{lambda(air::runtime::TVMArgs, air::runtime::TVMRetValue*)#1}::operator()(air::runtime::TVMArgs, air::runtime::TVMRetValue*) const+0x2d0) [0xfffef6f0885c]
  [bt] (1) /home/HwHiAiUser/akg/build/libakg.so(air::runtime::CceWrappedFunc::operator()(air::runtime::TVMArgs, air::runtime::TVMRetValue*, void**, long*, unsigned long) const+0x610) [0xfffef6f061f4]
  [bt] (0) /home/HwHiAiUser/akg/build/libakg.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x58) [0xfffef59e1cb4]
  File "/home/HwHiAiUser/akg/third_party/incubator-tvm/src/runtime/cce/cce_module.cc", line 232
TVMError: Check failed: e == RT_ERROR_NONE: Cce runtime error: errno=145, info=Unknow cce error code

Segmentation fault

Here is demo snippet:

import akg
from akg import tvm
import numpy as np

n = 5

a = tvm.placeholder([n], name='a')
b = tvm.placeholder([n], name='b')
c = tvm.compute([n], lambda i: a[i] + b[n - i - 1])

s = tvm.create_schedule(c.op)

mod = akg.build(s, (a, b, c), 'cce', [], name='myfunc', attrs={}, polyhedral=True, binds=None)

print(mod.imported_modules[0].get_source())

a_np = np.random.random([n]).astype(a.dtype)
b_np = np.random.random([n]).astype(b.dtype)

print(a_np, b_np)
ctx = tvm.context('cce')
a = tvm.nd.array(a_np, ctx)
b = tvm.nd.array(b_np, ctx)
c = tvm.nd.array(np.zeros([n], dtype=a_np.dtype), ctx)
mod(a, b, c)
ctx.sync()  
print(c)
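As a plain-Python sanity reference (not AKG code) for what this kernel is meant to compute, assuming the intended expression is a[i] + b[n - 1 - i]:

```python
# Reference semantics of the demo kernel: c[i] = a[i] + b[n - 1 - i],
# i.e. element-wise add with the second operand read in reverse order.
def add_reversed(a, b):
    n = len(a)
    return [a[i] + b[n - 1 - i] for i in range(n)]
```

Comparing the device result c against this reference would confirm whether the generated kernel is functionally correct once the runtime error is resolved.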

I'm confused about this; it's such a simple operator that it makes no sense that akg cannot run it.
Thanks!

akg build error: Invalid Schedule

Hi, I'm working through the Dive into Deep Learning Compiler tutorial and replacing tvm.build with akg.build(sch, (X, Y), 'cuda', [], name='myfunc', attrs={}, polyhedral=True, binds=None).
When trying the AvgPooling operator, I wanted to do some scheduling to merge the stages of avgpooling, as autoInlineInjective does. But after I merged the PoolSum stage and the PoolAvg stage using PoolSum = Y.op.input_tensors[0]; sch[PoolSum].compute_at(sch[Y], sch[Y].op.axis[2]), an error occurred:
[ERROR] AKG:2021-04-05-17:43:31.410.549 [graph.cc:223] [schedule] Check failed: start_attach: Invalid Schedule: cannot find attach point iter_var(h, range(min=0, ext=12)) in the schedule of compute(PoolAvg, 0x3126cc0)
Stack trace:
[bt] (0) /home/sun/gitDownload/akg/mybuild/libakg.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x4f) [0x7fd326aa5fcf]
[bt] (1) /home/sun/gitDownload/akg/mybuild/libakg.so(air::schedule::CreateAttachPath(air::Schedule)+0x5d4) [0x7fd32789e654]
[bt] (2) /home/sun/gitDownload/akg/mybuild/libakg.so(air::schedule::InferBound(air::Schedule const&)+0xda4) [0x7fd327899ad4]
[bt] (3) /home/sun/gitDownload/akg/mybuild/libakg.so(akg::LowerStmt(air::Schedule, air::Array<air::NodeRef, void> const&, air::Array<air::NodeRef, void> const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::Map<air::Tensor, air::Buffer, void, void> const&, air::Map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, air::NodeRef, void, void> const&, bool, bool, bool, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::BuildConfig const&, air::Array<air::NodeRef, void>, air::Array<air::NodeRef, void>, air::Map<air::Tensor, air::Buffer, void, void>, air::Map<air::Tensor, air::Buffer, void, void>, bool)+0x384) [0x7fd326af3b34]
[bt] (4) /home/sun/gitDownload/akg/mybuild/libakg.so(akg::Lower(air::Schedule, air::Array<air::NodeRef, void> const&, air::Array<air::NodeRef, void> const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::Map<air::Tensor, air::Buffer, void, void> const&, air::Map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, air::NodeRef, void, void> const&, bool, bool, bool, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::BuildConfig const&)+0x166) [0x7fd326af67f6]
[bt] (5) /home/sun/gitDownload/akg/mybuild/libakg.so(akg::BuildToFunc(air::Schedule const&, air::Array<air::NodeRef, void> const&, air::Array<air::NodeRef, void> const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::Map<air::Tensor, air::Buffer, void, void> const&, air::Map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, air::NodeRef, void, void> const&, bool, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::BuildConfig const&)+0x24f) [0x7fd326b00dbf]
[bt] (6) /home/sun/gitDownload/akg/mybuild/libakg.so(void air::runtime::detail::unpack_call_dispatcher<akg::BuildRst, 0, 9, akg::BuildRst ()(air::Schedule const&, air::Array<air::NodeRef, void> const&, air::Array<air::NodeRef, void> const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::Map<air::Tensor, air::Buffer, void, void> const&, air::Map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, air::NodeRef, void, void> const&, bool, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::BuildConfig const&)>::run<air::runtime::TVMArgValue, air::runtime::TVMArgValue, air::runtime::TVMArgValue, air::runtime::TVMArgValue, air::runtime::TVMArgValue, air::runtime::TVMArgValue, air::runtime::TVMArgValue, air::runtime::TVMArgValue, air::runtime::TVMArgValue>(akg::BuildRst ( const&)(air::Schedule const&, air::Array<air::NodeRef, void> const&, air::Array<air::NodeRef, void> const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::Map<air::Tensor, air::Buffer, void, void> const&, air::Map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, air::NodeRef, void, void> const&, bool, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::BuildConfig const&), air::runtime::TVMArgs const&, air::runtime::TVMRetValue*, air::runtime::TVMArgValue&&, air::runtime::TVMArgValue&&, air::runtime::TVMArgValue&&, air::runtime::TVMArgValue&&, air::runtime::TVMArgValue&&, air::runtime::TVMArgValue&&, air::runtime::TVM
Traceback (most recent call last):

File "pooling.py", line 70, in <module>
mod = akg.build(sch, (X,Y), 'cuda', [], name='myfunc', attrs={}, polyhedral=True, binds=None)

File "/home/sun/gitDownload/akg/python/akg/utils/validation_check.py", line 135, in in_wrapper
return func(*args, **kwargs)

File "/home/sun/gitDownload/akg/python/akg/build_module.py", line 141, in build
tmp_rst = build_to_func(inputs, args, shape_params=shape_params, name=name, binds=binds,

File "/home/sun/gitDownload/akg/python/akg/utils/validation_check.py", line 135, in in_wrapper
return func(*args, **kwargs)

File "/home/sun/gitDownload/akg/python/akg/build_module.py", line 134, in build_to_func
return _api_internal._BuildToFunc(inputs, args, shape_params, name, tmp_binds, tmp_attrs,

File "/home/sun/gitDownload/akg/third_party/incubator-tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
raise get_last_ffi_error()

tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/sun/gitDownload/akg/mybuild/libakg.so(TVMFuncCall+0x65) [0x7fd32780e305]
[bt] (7) /home/sun/gitDownload/akg/mybuild/libakg.so(std::_Function_handler<void (air::runtime::TVMArgs, air::runtime::TVMRetValue*), air::runtime::TypedPackedFunc<akg::BuildRst (air::Schedule const&, air::Array<air::NodeRef, void> const&, air::Array<air::NodeRef, void> const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::Map<air::Tensor, air::Buffer, void, void> const&, air::Map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, air::NodeRef, void, void> const&, bool, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::BuildConfig const&)>::AssignTypedLambda<akg::BuildRst ()(air::Schedule const&, air::Array<air::NodeRef, void> const&, air::Array<air::NodeRef, void> const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::Map<air::Tensor, air::Buffer, void, void> const&, air::Map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, air::NodeRef, void, void> const&, bool, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::BuildConfig const&)>(akg::BuildRst ()(air::Schedule const&, air::Array<air::NodeRef, void> const&, air::Array<air::NodeRef, void> const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::Map<air::Tensor, air::Buffer, void, void> const&, air::Map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, air::NodeRef, void, void> const&, bool, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::BuildConfig const&))::{lambda(air::runtime::TVMArgs const&, air::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, air::runtime::TVMArgs&&, air::runtime::TVMRetValue*&&)+0x13a) [0x7fd326b1003a]
[bt] (6) /home/sun/gitDownload/akg/mybuild/libakg.so(void air::runtime::detail::unpack_call_dispatcher<akg::BuildRst, 0, 9, akg::BuildRst ()(air::Schedule const&, air::Array<air::NodeRef, void> const&, air::Array<air::NodeRef, void> const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::Map<air::Tensor, air::Buffer, void, void> const&, air::Map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, air::NodeRef, void, void> const&, bool, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::BuildConfig const&)>::run<air::runtime::TVMArgValue, air::runtime::TVMArgValue, air::runtime::TVMArgValue, air::runtime::TVMArgValue, air::runtime::TVMArgValue, air::runtime::TVMArgValue, air::runtime::TVMArgValue, air::runtime::TVMArgValue, air::runtime::TVMArgValue>(akg::BuildRst ( const&)(air::Schedule const&, air::Array<air::NodeRef, void> const&, air::Array<air::NodeRef, void> const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::Map<air::Tensor, air::Buffer, void, void> const&, air::Map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, air::NodeRef, void, void> const&, bool, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::BuildConfig const&), air::runtime::TVMArgs const&, air::runtime::TVMRetValue*, air::runtime::TVMArgValue&&, air::runtime::TVMArgValue&&, air::runtime::TVMArgValue&&, air::runtime::TVMArgValue&&, air::runtime::TVMArgValue&&, air::runtime::TVMArgValue&&, air::runtime::TVMArgValue&&, air::runtime::TVMArgValue&&, air::runtime::TVMArgValue&&)+0x176) [0x7fd326b0fcd6]
[bt] (5) /home/sun/gitDownload/akg/mybuild/libakg.so(akg::BuildToFunc(air::Schedule const&, air::Array<air::NodeRef, void> const&, air::Array<air::NodeRef, void> const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::Map<air::Tensor, air::Buffer, void, void> const&, air::Map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, air::NodeRef, void, void> const&, bool, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::BuildConfig const&)+0x24f) [0x7fd326b00dbf]
[bt] (4) /home/sun/gitDownload/akg/mybuild/libakg.so(akg::Lower(air::Schedule, air::Array<air::NodeRef, void> const&, air::Array<air::NodeRef, void> const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::Map<air::Tensor, air::Buffer, void, void> const&, air::Map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, air::NodeRef, void, void> const&, bool, bool, bool, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::BuildConfig const&)+0x166) [0x7fd326af67f6]
[bt] (3) /home/sun/gitDownload/akg/mybuild/libakg.so(akg::LowerStmt(air::Schedule, air::Array<air::NodeRef, void> const&, air::Array<air::NodeRef, void> const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::Map<air::Tensor, air::Buffer, void, void> const&, air::Map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, air::NodeRef, void, void> const&, bool, bool, bool, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, air::BuildConfig const&, air::Array<air::NodeRef, void>, air::Array<air::NodeRef, void>, air::Map<air::Tensor, air::Buffer, void, void>, air::Map<air::Tensor, air::Buffer, void, void>, bool)+0x384) [0x7fd326af3b34]
[bt] (2) /home/sun/gitDownload/akg/mybuild/libakg.so(air::schedule::InferBound(air::Schedule const&)+0xda4) [0x7fd327899ad4]
[bt] (1) /home/sun/gitDownload/akg/mybuild/libakg.so(air::schedule::CreateAttachPath(air::Schedule)+0x5d4) [0x7fd32789e654]
[bt] (0) /home/sun/gitDownload/akg/mybuild/libakg.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x4f) [0x7fd326aa5fcf]
File "/home/sun/gitDownload/akg/third_party/incubator-tvm/src/schedule/graph.cc", line 223
TVMError: Check failed: start_attach: Invalid Schedule: cannot find attach point iter_var(h, range(min=0, ext=12)) in the schedule of compute(PoolAvg, 0x3126cc0)

Here is my source code:

import akg
from akg import tvm

import numpy as np

def padding(X, ph, pw, val=0):
    """Pad X with the given value in 2-D

    ph, pw : height and width padding
    val : padding value, default 0
    """
    assert len(X.shape) >= 2
    nh, nw = X.shape[-2], X.shape[-1]
    return tvm.compute(
        (*X.shape[0:-2], nh + ph * 2, nw + pw * 2),
        lambda *i: tvm.if_then_else(
            tvm.any(i[-2] < ph, i[-2] >= nh + ph, i[-1] < pw, i[-1] >= nw + pw),
            val, X[i[:-2] + (i[-2] - ph, i[-1] - pw)]),
        name='PaddedX')
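For reference, the same zero-padding semantics can be checked with a small NumPy analogue (my own sketch, independent of tvm; `padding_np` is a hypothetical name):

```python
import numpy as np

def padding_np(x, ph, pw, val=0):
    # zero-pad only the last two axes by (ph, pw), mirroring the tvm compute above
    widths = [(0, 0)] * (x.ndim - 2) + [(ph, ph), (pw, pw)]
    return np.pad(x, widths, mode='constant', constant_values=val)

x = np.ones((2, 3, 3), dtype='float32')
y = padding_np(x, 1, 1)
assert y.shape == (2, 5, 5)       # each spatial dim grows by 2*p
assert y[0, 0, 0] == 0.0          # border is the padding value
assert y[0, 1, 1] == 1.0          # interior is untouched
```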

# Save to the d2ltvm package.

def conv_out_size(n, k, p, s):
    """Compute the output size given input size n (width or height),
    kernel size k, padding p, and stride s.
    Return the output size (width or height).
    """
    return (n - k + 2 * p) // s + 1
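A quick sanity check of the output-size formula, re-stated here so the snippet stands alone:

```python
# output size = (n - k + 2*p) // s + 1
def conv_out_size(n, k, p, s):
    return (n - k + 2 * p) // s + 1

assert conv_out_size(12, 3, 1, 1) == 12  # same-size conv, as in the driver below
assert conv_out_size(14, 3, 0, 1) == 12  # matches the hybrid Conv2D shapes (14 -> 12)
```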

def get_conv_data(oc, ic, n, k, p=0, s=1, constructor=None, ctx=tvm.gpu(0), conv_type='direct'):
    """Return a random 3-D data tensor, a kernel tensor and an empty 3-D output
    tensor with the shapes specified by the input arguments.

    oc, ic : output and input channels
    n : input width and height
    k : kernel width and height
    p : padding size, default 0
    s : stride, default 1
    constructor : user-defined tensor constructor
    """
    np.random.seed(0)
    data = np.random.normal(size=(ic, n, n)).astype('float32')
    ic_weight = ic
    if conv_type == 'depthwise':
        ic_weight = 1
    weight = np.random.normal(size=(oc, ic_weight, k, k)).astype('float32')
    # data = np.ones(shape=(ic, n, n)).astype('float32')
    # weight = np.ones(shape=(oc, ic, k, k)).astype('float32')
    on = conv_out_size(n, k, p, s)
    out = np.empty((oc, on, on), dtype='float32')
    if constructor:
        data, weight, out = (constructor(x, ctx) for x in [data, weight, out])
    return data, weight, out
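With `constructor=None` this helper is plain NumPy; a minimal self-contained sketch for checking the shapes (the depthwise branch is omitted, and `get_conv_data_np` is a hypothetical name):

```python
import numpy as np

def get_conv_data_np(oc, ic, n, k, p=0, s=1):
    # numpy-only reduction of get_conv_data (no tvm constructor, no depthwise)
    np.random.seed(0)
    data = np.random.normal(size=(ic, n, n)).astype('float32')
    weight = np.random.normal(size=(oc, ic, k, k)).astype('float32')
    on = (n - k + 2 * p) // s + 1
    out = np.empty((oc, on, on), dtype='float32')
    return data, weight, out

d, w, o = get_conv_data_np(4, 4, 12, 3, p=1, s=1)
assert d.shape == (4, 12, 12)
assert w.shape == (4, 4, 3, 3)
assert o.shape == (4, 12, 12)   # p=1, s=1, k=3 keeps the spatial size
```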

def pool(pool_type, c, nh, nw, kh, kw, ph=0, pw=0, sh=1, sw=1):
    rkh = tvm.reduce_axis((0, kh), name='rkh')
    rkw = tvm.reduce_axis((0, kw), name='rkw')

    oh = conv_out_size(nh, kh, ph, sh)
    ow = conv_out_size(nw, kw, pw, sw)

    X = tvm.placeholder((c, nh, nw), name='X')
    if pool_type == 'max':
        PaddedX = padding(X, ph, pw, val=tvm.min_value(X.dtype)) if ph * pw != 0 else X
        Y = tvm.compute(
            (c, oh, ow),
            lambda c, h, w: tvm.max(PaddedX[c, h * sh + rkh, w * sw + rkw], axis=[rkh, rkw]),
            tag='pool_max', name='PoolMax')
    elif pool_type == 'avg':
        PaddedX = padding(X, ph, pw) if ph * pw != 0 else X
        tsum = tvm.compute(
            (c, oh, ow),
            lambda c, h, w: tvm.sum(PaddedX[c, h * sh + rkh, w * sw + rkw], axis=[rkh, rkw]),
            tag='pool_avg1', name='PoolSum')
        Y = tvm.compute(
            (c, oh, ow),
            lambda c, h, w: tsum[c, h, w] / (kh * kw),
            tag='pool_avg2', name='PoolAvg')
    else:
        raise ValueError("Pool type should be 'avg' or 'max'.")
    return X, Y, PaddedX
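The two-stage structure of the `avg` branch (window sum, then divide) can be mirrored in plain NumPy to check the expected numerics (a sketch of the semantics, not the tvm path; `avg_pool2d` is a hypothetical name):

```python
import numpy as np

def avg_pool2d(x, k, p=0, s=1):
    """Average pooling as in pool('avg', ...): PoolSum stage, then divide."""
    c, n, _ = x.shape
    on = (n - k + 2 * p) // s + 1
    xp = np.zeros((c, n + 2 * p, n + 2 * p), dtype=x.dtype)
    xp[:, p:p + n, p:p + n] = x                      # zero padding
    out = np.empty((c, on, on), dtype=x.dtype)
    for h in range(on):
        for w in range(on):
            window = xp[:, h * s:h * s + k, w * s:w * s + k]
            out[:, h, w] = window.sum(axis=(1, 2)) / (k * k)  # sum stage, then divide
    return out

y = avg_pool2d(np.ones((4, 12, 12), dtype='float32'), k=3, p=1, s=1)
assert y.shape == (4, 12, 12)
# interior windows see all ones; corner windows see 4 ones out of 9 cells
assert abs(float(y[0, 5, 5]) - 1.0) < 1e-6
assert abs(float(y[0, 0, 0]) - 4.0 / 9.0) < 1e-6
```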

c, n, k, p, s = 4, 12, 3, 1, 1
X, Y, PaddedX = pool('avg', c, n, n, k, k, p, p, s, s)
sch = tvm.create_schedule(Y.op)
tvm.schedule.AutoInlineInjective(sch)
PoolSum = Y.op.input_tensors[0]
sch[PoolSum].compute_at(sch[Y], sch[Y].op.axis[2])

print(tvm.lower(sch, [X, Y], simple_mode=True))
mod = akg.build(sch, (X, Y), 'cuda', [], name='myfunc', attrs={}, polyhedral=True, binds=None)

ctx = tvm.context('cuda')
data, _, out_max = get_conv_data(c, c, n, k, p, s, tvm.nd.array, ctx)

mod(data, out_max)
ctx.sync()

/device gpu
The IR printed by tvm.lower() looks normal, so something goes wrong inside akg.build.
Does akg create a default schedule internally, so that I cannot schedule in the usual tvm way? Any tips on how to merge the two stages of the average pooling?
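One possible workaround, sketched below but not verified against akg: fold the division into the reduction so that the average pooling is expressed as a single compute stage, leaving nothing for compute_at to attach (all names reuse the definitions inside pool above):

```
# untested sketch: replace the PoolSum + PoolAvg pair with one stage
Y = tvm.compute(
    (c, oh, ow),
    lambda c, h, w: tvm.sum(PaddedX[c, h * sh + rkh, w * sw + rkw] / (kh * kw),
                            axis=[rkh, rkw]),
    name='PoolAvg')
```

Since akg's polyhedral build takes over scheduling, a single-stage formulation sidesteps the manual compute_at that triggers the attach-point check.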

Can't build AKG under Ubuntu 20.04 for cpu

Environment

Hardware Environment(Ascend/GPU/CPU):

Uncomment only one /device <> line, hit enter to put that in a new line, and remove leading whitespaces from that line:

/device ascend

/device gpu

/device cpu

I am building the CPU version.

Software Environment:

  • AKG version (source or binary): source
  • Python version (e.g., Python 3.7.5):python 3.8.10
  • OS platform and distribution (e.g., Linux Ubuntu 16.04): 20.04
  • GCC/Compiler version (if compiled from source): GCC 9.4.0

Describe the current behavior

Describe the expected behavior

Steps to reproduce the issue

  1. git clone https://gitee.com/mindspore/akg.git
  2. cd akg
  3. bash build.sh -e cpu -j8

Related log / screenshot

/akg/third_party/incubator-tvm/include/tvm/packed_func_ext.h:143:10: error: no matching function for call to ‘std::basic_string_view<char>::basic_string_view(air::runtime::ObjectPtr<air::runtime::Object>)’
143 | return TObjectRef(ObjectPtr<Object>(ptr));

Special notes for this issue

Build failed using source code with branch r2.1

Environment

Hardware Environment(Ascend/GPU/CPU):

Uncomment only one /device <> line, hit enter to put that in a new line, and remove leading whitespaces from that line:

/device cpu

Software Environment:

  • AKG version (r2.1):
  • Python version (Python 3.10):
  • OS platform and distribution (Linux Ubuntu 22.04):
  • GCC/Compiler version (gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0):

Describe the current behavior

build failed

Describe the expected behavior

build success

Steps to reproduce the issue

  1. git clone https://gitee.com/mindspore/akg.git
  2. cd akg
  3. bash build.sh -e cpu -j4

Related log / screenshot

(build-error screenshot attached in the original issue; not reproduced here)

Special notes for this issue

Can akg describe a whole network model?

Hi, I noticed that we can pass compute/hybrid or autodiff kernels to akg, but how do we describe a whole network model that contains many operators? Should we combine all operators into one tvm.compute and then call akg.build(schedule, args, ...)?
I'm confused about that.
Thanks a lot.
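Conceptually, a model is a chain of stages: each tvm.compute consumes the tensors produced by the previous one, and the build is invoked once on the schedule of the final stage. A minimal NumPy analogue of chaining stages (my own illustration, with hypothetical shapes; not the akg API):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def dense(x, w):
    return x @ w

# chain stages the way tvm.compute stages would be chained:
# each stage's output feeds the next; only the final result is materialized
x = np.random.randn(2, 8).astype('float32')
w1 = np.random.randn(8, 4).astype('float32')
w2 = np.random.randn(4, 3).astype('float32')
y = dense(relu(dense(x, w1)), w2)
assert y.shape == (2, 3)
```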
