neoml-lib / neoml
Machine learning framework for both deep learning and traditional algorithms
Home Page: https://www.abbyy.com/neoml/
License: Apache License 2.0
[jw@cn05 build]$ cmake --build . --target install
...
In file included from /data/jw/neoml/NeoMathEngine/src/CPU/CpuMathEngineDnn3dConv.cpp:25:
/data/jw/neoml/NeoMathEngine/src/CPU/CpuMathEnginePrivate.h:29:10: fatal error: CpuArm.h: No such file or directory
29 | #include <CpuArm.h>
| ^~~~~~~~~~
compilation terminated.
[12/83] Building CXX object NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngineDnnTimeConv.cpp.o
FAILED: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngineDnnTimeConv.cpp.o
/usr/bin/c++ -DBUILD_NEOMATHENGINE -DNEOML_USE_AVX -DNEOML_USE_OMP -DNeoMathEngine_EXPORTS -D_FINAL -D_LINUX -I/data/jw/neoml/NeoMathEngine/src/../include -I/data/jw/neoml/NeoMathEngine/src -I/data/jw/neoml/NeoMathEngine/src/CPU -I/data/jw/neoml/NeoMathEngine/src/CPU/x86 -O2 -DNDEBUG -fPIC -fvisibility=hidden -fopenmp -Wall -Wextra -Wpedantic -Wno-deprecated-declarations -Wno-unused-value -Wno-unknown-pragmas -Wno-strict-overflow -MD -MT NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngineDnnTimeConv.cpp.o -MF NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngineDnnTimeConv.cpp.o.d -o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngineDnnTimeConv.cpp.o -c /data/jw/neoml/NeoMathEngine/src/CPU/CpuMathEngineDnnTimeConv.cpp
In file included from /data/jw/neoml/NeoMathEngine/src/CPU/CpuMathEngineDnnTimeConv.cpp:24:
/data/jw/neoml/NeoMathEngine/src/CPU/CpuMathEnginePrivate.h:29:10: fatal error: CpuArm.h: No such file or directory
29 | #include <CpuArm.h>
| ^~~~~~~~~~
compilation terminated.
[40/83] Building CXX object _deps/googletest-build/googletest/CMakeFiles/gtest.dir/src/gtest-all.cc.o
ninja: build stopped: subcommand failed.
[jw@cn05 build]$
// MathEngine blob data types
enum TBlobType {
	CT_Invalid = 0,
	CT_Float,
	CT_Int,
};
Suggestion: rename the CT_* constants to BT_*, so the prefix matches the TBlobType name.
It would be better to put the FineDebugBreak function body under #ifdef _DEBUG.
NeoOnnxCheck calls FineDebugBreak, so if the expression is false, __debugbreak fires even in a Release build.
Since you don't accept PRs, I'll file this as an issue.
Add something like this to the NeoML namespace
and stop writing multi-level comparisons:
#if __cplusplus >= 201703L
using std::clamp;
#else
template<typename T, typename Compare = std::less<T>>
constexpr const T& clamp( const T& v, const T& lo, const T& hi, Compare comp = Compare() )
{
	return comp( v, lo ) ? lo : comp( hi, v ) ? hi : v;
}
#endif
BatchCalculateLossAndGradient( int batchSize, NeoML::CConstFloatHandle data,
	int vectorSize, NeoML::CConstFloatHandle label, int labelSize, NeoML::CFloatHandle lossValue, NeoML::CFloatHandle lossGradient )
{
	assert( isInitialized );
	assert( vectorSize == 1 );
	assert( vectorSize == labelSize );
	const int totalSize = batchSize * vectorSize;
	MathEngine().VectorSub( data, label, lossValue, totalSize );
	if( !lossGradient.IsNull() ) {
		CFloatHandleStackVar onesVector( MathEngine(), totalSize );
		MathEngine().VectorFill( onesVector, 1.0f, totalSize );
		MathEngine().VectorAbsDiff( lossValue, onesVector, lossGradient, totalSize );
	}
	MathEngine().VectorAbs( lossValue, lossValue, totalSize );
}
Because of channelCount = lookupCount.
/usr/bin/ld: /usr/lib/x86_64-linux-gnu/libprotobuf.a(arena.o): relocation R_X86_64_TPOFF32 against symbol `_ZN6google8protobuf5Arena13thread_cache_E' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: nonrepresentable section on output
collect2: error: ld returned 1 exit status
NeoOnnx/src/CMakeFiles/NeoOnnx.dir/build.make:581: recipe for target 'NeoOnnx/src/libNeoOnnx.so' failed
make[2]: *** [NeoOnnx/src/libNeoOnnx.so] Error 1
CMakeFiles/Makefile2:266: recipe for target 'NeoOnnx/src/CMakeFiles/NeoOnnx.dir/all' failed
make[1]: *** [NeoOnnx/src/CMakeFiles/NeoOnnx.dir/all] Error 2
Makefile:160: recipe for target 'all' failed
make: *** [all] Error 2
When running the code from the documentation on a GPU, the program aborts with a RuntimeError that has no description. The same code works with neoml.MathEngine.CpuMathEngine.
OS: Windows 10 (10.0.19043)
Python 3.9.7
GPU: Nvidia GeForce 1070
CDllLoader::loadedDlls is not updated on a Free call, so it can get out of sync with the IsLoaded return value.
In the CVulkanMathEngine::MultiplyMatrixByDiagMatrix method, the const variable toAdd is used as a condition in a ternary operator (twice).
Is there any chance of supporting .NET Core?
For C++ integration, C++/CLI can wrap the C++ code and make all the classes available in the .NET environment.
There are still very few good ML libraries for .NET, even though it has long been cross-platform.
For some inputs CMatchingGenerator generates suboptimal matchings: certain objects are effectively left unmatched, even though they have a match in the other set.
Here is a test I wrote that should pass but currently fails. I found a few "bad" examples; this is one of the smallest. The first two checks pass, but the last one fails: EXPECT_EQ( 7, FindMatchForLeft( 5 ) ). Instead, left #5 is matched to a different element with a score of 0.
TEST( CMatchingGenerator, UniqueMatches6x8 )
{
	struct CMyPair {
		typedef int Quality;
		int LeftIndex = -1;
		int RightIndex = -1;
		Quality Score = 0;
		Quality Penalty() const { return 1 - Score; }
	};
	const int numLeft = 6;
	const int numRight = 8;
	const int pairScores[numRight][numLeft] = {
		{0, 0, 0, 0, 0, 0},
		{0, 0, 0, 0, 0, 0},
		{0, 0, 0, 0, 0, 0},
		{0, 0, 0, 0, 0, 0},
		{0, 0, 0, 0, 0, 0},
		{0, 0, 0, 1, 0, 0}, // Left #3 matches right #5
		{0, 0, 0, 0, 1, 0}, // Left #4 matches right #6
		{0, 0, 0, 0, 0, 1}  // Left #5 matches right #7
	};
	NeoML::CMatchingGenerator<CMyPair> generator( numLeft, numRight, 0, INT_MAX );
	for( int leftInd = 0; leftInd < numLeft; ++leftInd ) {
		for( int rightInd = 0; rightInd < numRight; ++rightInd ) {
			CMyPair& pair = generator.PairMatrix()( leftInd, rightInd );
			pair.LeftIndex = leftInd;
			pair.RightIndex = rightInd;
			pair.Score = pairScores[rightInd][leftInd];
		}
	}
	generator.Build();
	CArray<CMyPair> matching;
	generator.GetNextMatching( matching );
	const auto FindMatchForLeft = [&matching]( int leftInd )
	{
		for( const CMyPair& pair : matching ) {
			if( pair.LeftIndex == leftInd ) {
				return pair.RightIndex;
			}
		}
		return NotFound;
	};
	EXPECT_EQ( 5, FindMatchForLeft( 3 ) );
	EXPECT_EQ( 6, FindMatchForLeft( 4 ) );
	EXPECT_EQ( 7, FindMatchForLeft( 5 ) );
}
This assert in deleteTinyClusters seems incorrect: why should the number of features (matrix.Width) affect the size threshold for deleting a cluster? According to the classic ML algorithm, it shouldn't. Please fix it.
// Deletes the clusters that are too small
void CFirstComeClustering::deleteTinyClusters( const CSparseFloatMatrixDesc& matrix, const CArray<double>& weights,
	CObjectArray<CCommonCluster>& clusters )
{
	int threshold = Round( init.MinClusterSizeRatio * matrix.Width );
	NeoAssert( threshold <= matrix.Height ); // a cluster may not have more than the total number of elements
	...
}
Hello.
I am building the library on Windows 10 with MSVC 2019 x64 and protobuf 3.18.1. I get two errors:
onnx.pb.obj : error LNK2019: unresolved external symbol "class google::protobuf::internal::ExplicitlyConstructed<class std::basic_string<char,struct std::char_traits,class std::allocator > > google::protobuf::internal::fixed_address_empty_string" (?fixed_address_empty_string@internal@protobuf@google@@3V?$ExplicitlyConstructed@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@123@A) in function "public: virtual class onnx::ValueInfoProto* __cdecl onnx::ValueInfoProto::New(void)const " (?New@ValueInfoProto@onnx@@UEBAPEAV12@XZ). [H:\libs\neoml2.0.22.0\Build\NeoOnnx\src\NeoOnnx.vcxproj]
onnx.pb.obj : error LNK2019: unresolved external symbol "struct std::atomic google::protobuf::internal::init_protobuf_defaults_state" (?init_protobuf_defaults_state@internal@protobuf@google@@3U?$atomic@_N@std@@A) in function "public: virtual unsigned char * __cdecl onnx::TensorShapeProto_Dimension::_InternalSerialize(unsigned char *,class google::protobuf::io::EpsCopyOutputStream *)const " (?_InternalSerialize@TensorShapeProto_Dimension@onnx@@UEBAPEAEPEAEPEAVEpsCopyOutputStream@io@protobuf@google@@@Z). [H:\libs\neoml2.0.22.0\Build\NeoOnnx\src\NeoOnnx.vcxproj]
H:\libs\neoml2.0.22.0\Build\NeoOnnx\src\Release\NeoOnnx.dll : fatal error LNK1120: 2 unresolved externals [H:\libs\neoml2.0.22.0\Build\NeoOnnx\src\NeoOnnx.vcxproj]
How can I fix it?
Thank you!
Very often the Serialize() method of a layer class is overridden only to set SerializeVersion on the archive. It would be better to add a virtual method for setting up the correct SerializeVersion. In some other cases you override it to append suffix data, but a working mechanism for that already exists in CCompositeLayer; the serializationHook idea could be extended to the whole layer hierarchy.
virtual void BlobChannelwiseConvolution( const CChannelwiseConvolutionDesc& desc, const CConstFloatHandle& source,
const CConstFloatHandle& filter, const CConstFloatHandle* freeTerm, const CFloatHandle& result ) = 0;
virtual void BlobConvolution( const CConvolutionDesc& desc, const CFloatHandle& source,
const CFloatHandle& filter, const CFloatHandle* freeTerm, const CFloatHandle& result ) = 0;
All the layers live in Dnn/Layers/... and are named CDnnConvLayer, CDnnPoolingLayer, and so on. But the Source layer lives in Dnn/Dnn.h and is named CSourceLayer. Not cool.
Please implement the calculation of positivesCorrect, positivesTotal, negativesCorrect, and negativesTotal on the device side.
neoml/NeoML/src/Dnn/Layers/PrecisionRecallLayer.cpp
Lines 65 to 91 in 7b76536
NeoML and NeoMathEngine both contain identical Platforms.h files. It would be better to keep a single instance.
NeoML shows a linker error. In Qt it looks like this:
:-1: error: LNK1104: cannot open file 'NeoMathEngine.x64.Debug.lib'
And the built libraries contain no NeoMathEngine.x64.Debug.lib anywhere.
When I'm trying with Visual studio, I'm getting following output:
Severity Code Description Project File Line Suppression State
Error C3861 'NOT_FOUND': identifier not found NeoMLTest2 K:\NeoML\build64\debug\include\NeoML\TraditionalML\LdGraph.h 438
The code snippet where the error occurs looks like some kind of assertion, but I can't tell what's wrong:
inline void CLdGraph<Arc>::DetachArc( Arc* arc )
{
	// Delete the arcs from the starting node
	CLdGraphVertex* initial = vertices[arc->InitialCoord() - begin];
	NeoPresume( initial != 0 );
	int i = initial->OutgoingArcs.Find( arc );
	NeoAssert( i != NOT_FOUND ); // PROBLEM IS HERE
	initial->OutgoingArcs.DeleteAt( i );
	// Delete the hanging node
	if( initial->OutgoingArcs.Size() == 0
		&& initial->IncomingArcs.Size() == 0 )
	{
		delete initial;
		vertices[arc->InitialCoord() - begin] = 0;
	}
}
I have 4 of those errors.
I have a bunch of MKL-related linker errors when compiling NeoMathEngine. Could someone help me and explain what this is about, since I can't look inside the MKL DLL? NeoML was built with the FineObjects option turned off (if it makes any difference).
Severity Code Description Project File Line Suppression State
Error LNK2038 mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in common.obj NeoMathEngine K:\NeoML\build\NeoMathEngine\src\mkl_core.lib(_avx512_jit_destroy.obj) 1
Error LNK2038 mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in common.obj NeoMathEngine K:\NeoML\build\NeoMathEngine\src\mkl_core.lib(_avx512_jit_destroy.obj) 1
Error LNK2038 mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in common.obj NeoMathEngine K:\NeoML\build\NeoMathEngine\src\mkl_core.lib(_avx2_jit_destroy.obj) 1
Error LNK2038 mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in common.obj NeoMathEngine K:\NeoML\build\NeoMathEngine\src\mkl_core.lib(_avx2_jit_destroy.obj) 1
Error LNK2038 mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in common.obj NeoMathEngine K:\NeoML\build\NeoMathEngine\src\mkl_core.lib(_avx_jit_destroy.obj) 1
Error LNK2038 mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in common.obj NeoMathEngine K:\NeoML\build\NeoMathEngine\src\mkl_core.lib(_avx_jit_destroy.obj) 1
This code is not overflow safe because of e^(-y*r):
Overflow-safe code:
// calculates log(1 + e^x)
void CalculateSoftPlus( const CConstFloatHandle& firstHandle,
	const CFloatHandle& resultHandle, int batchSize ) const
{
	// log(1 + e^x) = log( 1 + e^-|x| ) + max( 0, x )
	CFloatHandleStackVar temp( MathEngine(), batchSize );
	// |x|
	MathEngine().VectorAbs( firstHandle, temp, batchSize );
	// -|x|
	CFloatHandleStackVar one( MathEngine() );
	one.SetValue( 1.f );
	MathEngine().VectorNegMultiply( temp, temp, batchSize, one );
	// e^(-|x|)
	MathEngine().VectorExp( temp, temp, batchSize );
	CFloatHandleStackVar onesVector( MathEngine(), batchSize );
	MathEngine().VectorFill( onesVector, 1.0f, batchSize );
	// 1 + e^(-|x|)
	MathEngine().VectorAdd( onesVector, temp, temp, batchSize );
	// log(1 + e^(-|x|))
	MathEngine().VectorLog( temp, temp, batchSize );
	CFloatHandleStackVar zero( MathEngine() );
	zero.SetValue( 0.f );
	// max( 0, x )
	MathEngine().VectorReLU( firstHandle, resultHandle, batchSize, zero );
	// log(1 + e^x) = log( 1 + e^-|x| ) + max( 0, x )
	MathEngine().VectorAdd( temp, resultHandle, resultHandle, batchSize );
}
Because of channelCount = lookupCount.
Please support Deformable convolution layer.
Possibly an #else should be used here.
neoml/NeoMathEngine/src/CPU/CPUInfo.h
Line 141 in 1ecaae7
I see no difference between VectorMultiplyAndAdd.comp and VectorMultiplyAndSub.comp. Is this correct?
Hello! I'm trying to recreate the simple net example, but I'm stuck here:
for( int epoch = 1; epoch < 15; ++epoch ) {
	float epochLoss = 0; // total loss for the epoch
	for( int iter = 0; iter < iterationPerEpoch; ++iter ) {
		// trainData methods are used to transmit the data into the blob
		trainData.GetSamples( iter * batchSize, dataBlob );
		trainData.GetLabels( iter * batchSize, labelBlob );
		net.RunAndLearnOnce(); // run the learning iteration
		epochLoss += loss->GetLastLoss(); // add the loss value on the last step
	}
	::printf( "Epoch #%02d avg loss: %f\n", epoch, epochLoss / iterationPerEpoch );
	trainData.ReShuffle( random ); // reshuffle the data
}
I don't understand what class trainData (and testData) is, and I can't find the GetSamples and GetLabels functions on my own. Please help.
Platforms.h (both instances)
It would be better to use the predefined macro '__ANDROID__', because '_ANDROID' is less reliable (not sure about the past, but it seems to be absent in the latest NDKs).
It seems that in the backward stage the sign of inputDiffBlobs[1] must be negative.
neoml/NeoML/src/Dnn/Layers/EltwiseLayer.cpp
Lines 131 to 137 in 7b76536
Hello.
I'm trying to load an ONNX model produced by the tf2onnx.convert converter, and the program aborts with the error:
"Not supported by NeoOnnx: operator Transpose".
I use ONNX opset version 9. The same error occurs when loading any model from https://github.com/onnx/models
How do I solve this?
Hello
The CCrossValidationSubProblem::GetMatrix() code returns the matrix of the original problem.
Code:
This leads to problems while training a gradient boosting classifier, because it builds IProblem wrappers internally (those wrappers call GetMatrix during construction).
It can probably be fixed by 'return matrix'.
The CStratifiedCrossValidationSubProblem::GetVector virtual method is not a member of the interface, which is bad practice. It is unused, so it can be removed from the class entirely.
In the CVulkanShaderLoader::GetShaderData method, the object member layoutInfo.pPushConstantRanges points to a local variable pushConstantRange that goes out of scope.
Hi. I attempted a build on Ubuntu 20 x86_64 with Clang 11.0 RC2, Ninja 1.10.0, and CMake 3.18.2:
$ cmake -G Ninja -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DMKL_CORE_LIB="/media/ubuntu/4d5fa4ce-dc9b-4cb0-934c-72533ffc1586/intel_mkl_install/mkl/lib" -DMKL_INCLUDE_DIR="/media/ubuntu/4d5fa4ce-dc9b-4cb0-934c-72533ffc1586/intel_mkl_install/mkl/include" -DMKL_SEQUENTIAL_LIB="/media/ubuntu/4d5fa4ce-dc9b-4cb0-934c-72533ffc1586/intel_mkl_install/mkl/lib" -DMKL_INTEL_LIB="/media/ubuntu/4d5fa4ce-dc9b-4cb0-934c-72533ffc1586/intel_mkl_install/mkl/lib" ../NeoML
-- The CXX compiler identification is Clang 11.0.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/clang++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - NOTFOUND
-- No CUDA support.
-- Found OpenMP_CXX: -fopenmp=libomp (found version "5.0")
-- Found OpenMP: TRUE (found version "5.0")
-- Found MKL: /media/ubuntu/4d5fa4ce-dc9b-4cb0-934c-72533ffc1586/intel_mkl_install/mkl/include
-- Looking for C++ include pthread.h
-- Looking for C++ include pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE
-- Found Protobuf: /usr/lib/x86_64-linux-gnu/libprotobuf.a;-pthread (found version "3.6.1")
-- Found protoc compiler: /usr/bin/protoc
-- Configuring done
WARNING: Target "NeoMathEngine" requests linking to directory "/media/ubuntu/4d5fa4ce-dc9b-4cb0-934c-72533ffc1586/intel_mkl_install/mkl/lib". Targets may link only to libraries. CMake is dropping the item.
-- Generating done
-- Build files have been written to: /media/ubuntu/4d5fa4ce-dc9b-4cb0-934c-72533ffc1586/neoml-master/build1
$ninja
[1/36] Linking CXX shared library NeoMathEngine/src/libNeoMathEngine.so
FAILED: NeoMathEngine/src/libNeoMathEngine.so
: && /usr/bin/clang++ -fPIC -Wl,--no-undefined -shared -Wl,-soname,libNeoMathEngine.so -o NeoMathEngine/src/libNeoMathEngine.so NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/common.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngineBlas.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngineDnn3dConv.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngineDnnConv.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngineDnnChannelwiseConv.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngineDnnDropout.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngineDnn.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngineDnnPooling.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngineDnnRleConv.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngineDnnTimeConv.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngine.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngineVectorMath.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CrtAllocatedObject.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/DllLoader.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/MathEngineDeviceStackAllocator.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/MathEngineDnnDropout.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/MathEngine.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/MathEngineHostStackAllocator.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/MemoryPool.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/PerformanceCountersCpuLinux.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineBlas.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineBlasMkl.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineDnn.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineVectorMath.cpp.o 
NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineVectorMathMkl.cpp.o NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineDnn3dConv.cpp.o -Wl,-rpath,:::::::::::::::::::::: /usr/lib/libomp.so /usr/lib/x86_64-linux-gnu/libpthread.so -Wl,--start-group -Wl,--end-group -pthread -ldl && :
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngine.cpp.o: in function `.omp_outlined.': CpuMathEngine.cpp:(.text+0x251): undefined reference to `MKL_Thread_Free_Buffers'
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/CpuMathEngine.cpp.o: in function `NeoML::CCpuMathEngine::~CCpuMathEngine()': CpuMathEngine.cpp:(.text+0x11ee): undefined reference to `MKL_Free_Buffers'
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineBlasMkl.cpp.o: in function `NeoML::CCpuMathEngine::multiplyMatrixByMatrix(NeoML::CTypedMemoryHandle<float const> const&, int, int, int, NeoML::CTypedMemoryHandle<float const> const&, int, int, NeoML::CTypedMemoryHandle<float> const&, int, int)': CpuX86MathEngineBlasMkl.cpp:(.text+0x3a6): undefined reference to `cblas_sgemm'
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineBlasMkl.cpp.o: in function `NeoML::CCpuMathEngine::multiplyMatrixByMatrixAndAdd(NeoML::CTypedMemoryHandle<float const> const&, int, int, int, NeoML::CTypedMemoryHandle<float const> const&, int, int, NeoML::CTypedMemoryHandle<float> const&, int, int)': CpuX86MathEngineBlasMkl.cpp:(.text+0x6f7): undefined reference to `cblas_sgemm'
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineBlasMkl.cpp.o: in function `NeoML::CCpuMathEngine::multiplyMatrixByTransposedMatrix(NeoML::CTypedMemoryHandle<float const> const&, int, int, int, NeoML::CTypedMemoryHandle<float const> const&, int, int, NeoML::CTypedMemoryHandle<float> const&, int, int)': CpuX86MathEngineBlasMkl.cpp:(.text+0xa05): undefined reference to `cblas_sgemm'
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineBlasMkl.cpp.o: in function `NeoML::CCpuMathEngine::multiplyMatrixByTransposedMatrixAndAdd(float const*, int, int, int, float const*, int, int, float*, int)': CpuX86MathEngineBlasMkl.cpp:(.text+0xaea): undefined reference to `cblas_sgemm'
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineBlasMkl.cpp.o: in function `NeoML::CCpuMathEngine::MultiplySparseMatrixByTransposedMatrix(int, int, int, NeoML::CSparseMatrixDesc const&, NeoML::CTypedMemoryHandle<float const> const&, NeoML::CTypedMemoryHandle<float> const&)': CpuX86MathEngineBlasMkl.cpp:(.text+0xfbb): undefined reference to `mkl_sparse_s_create_csr'
/usr/bin/ld: CpuX86MathEngineBlasMkl.cpp:(.text+0x1133): undefined reference to `mkl_sparse_s_mm'
/usr/bin/ld: CpuX86MathEngineBlasMkl.cpp:(.text+0x1219): undefined reference to `MKL_Simatcopy'
/usr/bin/ld: CpuX86MathEngineBlasMkl.cpp:(.text+0x1225): undefined reference to `mkl_sparse_destroy'
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineBlasMkl.cpp.o: in function `NeoML::CCpuMathEngine::MultiplyTransposedMatrixBySparseMatrixAndAdd(int, int, int, NeoML::CTypedMemoryHandle const&, NeoML::CSparseMatrixDesc const&, NeoML::CTypedMemoryHandle const&)': CpuX86MathEngineBlasMkl.cpp:(.text+0x1844): undefined reference to `MKL_Somatcopy'
/usr/bin/ld: CpuX86MathEngineBlasMkl.cpp:(.text+0x1894): undefined reference to `mkl_sparse_s_create_csr'
/usr/bin/ld: CpuX86MathEngineBlasMkl.cpp:(.text+0x1a82): undefined reference to `mkl_sparse_s_mm'
/usr/bin/ld: CpuX86MathEngineBlasMkl.cpp:(.text+0x1b74): undefined reference to `mkl_sparse_destroy'
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineBlasMkl.cpp.o: in function `NeoML::CCpuMathEngine::multiplyTransposedMatrixByMatrix(NeoML::CTypedMemoryHandle<float const> const&, int, int, NeoML::CTypedMemoryHandle<float const> const&, int, NeoML::CTypedMemoryHandle<float> const&, int)': CpuX86MathEngineBlasMkl.cpp:(.text+0x1e07): undefined reference to `cblas_sgemm'
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineBlasMkl.cpp.o: in function `NeoML::CCpuMathEngine::multiplyTransposedMatrixByMatrixAndAdd(NeoML::CTypedMemoryHandle<float const> const&, int, int, int, NeoML::CTypedMemoryHandle<float const> const&, int, int, NeoML::CTypedMemoryHandle<float> const&, int, int)': CpuX86MathEngineBlasMkl.cpp:(.text+0x21ff): undefined reference to `cblas_sgemm'
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineVectorMathMkl.cpp.o: in function `NeoML::CCpuMathEngine::VectorExp(NeoML::CTypedMemoryHandle<float const> const&, NeoML::CTypedMemoryHandle<float> const&, int)': CpuX86MathEngineVectorMathMkl.cpp:(.text+0x31e): undefined reference to `vsExp'
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineVectorMathMkl.cpp.o: in function `NeoML::CCpuMathEngine::VectorLog(NeoML::CTypedMemoryHandle<float const> const&, NeoML::CTypedMemoryHandle<float> const&, int)': CpuX86MathEngineVectorMathMkl.cpp:(.text+0x6be): undefined reference to `vsLn'
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineVectorMathMkl.cpp.o: in function `NeoML::CCpuMathEngine::VectorMultiplyAndAdd(NeoML::CTypedMemoryHandle<float const> const&, NeoML::CTypedMemoryHandle<float const> const&, NeoML::CTypedMemoryHandle<float> const&, int, NeoML::CTypedMemoryHandle<float const> const&)': CpuX86MathEngineVectorMathMkl.cpp:(.text+0xae1): undefined reference to `cblas_saxpy'
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineVectorMathMkl.cpp.o: in function `NeoML::CCpuMathEngine::VectorTanh(NeoML::CTypedMemoryHandle<float const> const&, NeoML::CTypedMemoryHandle<float> const&, int)': CpuX86MathEngineVectorMathMkl.cpp:(.text+0xcd2): undefined reference to `vsTanh'
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineVectorMathMkl.cpp.o: in function `NeoML::CCpuMathEngine::VectorPower(float, NeoML::CTypedMemoryHandle<float const> const&, NeoML::CTypedMemoryHandle<float> const&, int)': CpuX86MathEngineVectorMathMkl.cpp:(.text+0xeb9): undefined reference to `vsPowx'
/usr/bin/ld: NeoMathEngine/src/CMakeFiles/NeoMathEngine.dir/CPU/x86/CpuX86MathEngineVectorMathMkl.cpp.o: in function `NeoML::CCpuMathEngine::VectorEltwiseLogSumExp(NeoML::CTypedMemoryHandle<float const> const&, NeoML::CTypedMemoryHandle<float const> const&, NeoML::CTypedMemoryHandle<float> const&, int)': CpuX86MathEngineVectorMathMkl.cpp:(.text+0x1706): undefined reference to `vsLog1p'
clang-11: error: linker command failed with exit code 1 (use -v to see invocation)
ninja: build stopped: subcommand failed.
neoml/NeoMathEngine/src/MemoryPool.cpp
Line 174 in e791a45
Why do you use both the 'virtual' and 'override' keywords in method definitions? It's redundant and hard to read, because the two keywords have different semantics.
There is a bug in CMemoryProblem::SetClass and CMemoryProblem::SetVectorWeight.
Instead of NeoAssert( 0 <= index && index < featureCount ), something like NeoAssert( 0 <= index && index < classes.Size() ) should be used: featureCount gives the number of features, not the number of objects in the dataset.
Please add online hard example mining (OHEM) based on BCE, L1, L2 and other losses for binary classification.
Loss = SumOfPositiveClassLosses / <number of positive class examples> + alpha * SumOfNegativeClassLosses / <number of negative class examples> + beta * SumOfHardNegativeClassLosses / <number of hard negative class examples>
Hard negatives are the top-K negative class elements with the highest loss values.
alpha, beta, and K are external parameters.
Do you have a plan to support the Space2Depth and Depth2Space operations?
The Main.bld file contains these lines:
alias;sln;vs14
alias;vcproj;vsproj14
But there is no description of this command in the "# BLD file syntax" part of the file.
**CFullyConnectedLayer:**
// The dimensions of the blob are NumOfElements * InputHeight * InputWidth * InputChannelsCount
CPtr<CDnnBlob> GetWeightsData() const;
// The free term blob should be of NumOfElements size
CPtr<CDnnBlob> GetFreeTermData() const;
**CBaseConvLayer:**
// A filter blob has the FilterCount * FilterHeight * FilterWidth * FilterDepth * InputChannelsCount dimensions
// (or InputChannelsCount * FilterHeight * FilterWidth * FilterDepth * FilterCount dimensions for transposed filters)
virtual CPtr<CDnnBlob> GetFilterData() const;
// The blob should be of FilterCount size
virtual CPtr<CDnnBlob> GetFreeTermData() const;
CDnnBlob has 7 dimensions, and there is no clue which dimension is FilterCount, FilterWidth, FilterHeight, etc.
Please clarify this by adding the unit dimensions explicitly, for example:
// A filter blob has the 1 * FilterCount * FilterHeight * FilterWidth * FilterDepth * 1 * InputChannelsCount dimensions
PS: Why does CFullyConnectedLayer have Weights, while CBaseConvLayer has Filter?
Hello.
I want to ask you to move CCrossValidationSubProblem into the public interfaces (from https://github.com/neoml-lib/neoml/blob/master/NeoML/src/TraditionalML/CrossValidationSubProblem.h).
It would be useful for custom cross-validation algorithm implementations.
For now, I keep a copy-paste of that code in another repo.
The CDnn::DeleteLayerImpl method must be marked 'final': if a derived class overrides it, it will behave incorrectly when called from the CDnn destructor. The same issue applies to CCompositeLayer::DeleteLayerImpl.