Comments (34)

SHOUshou0426 commented on July 4, 2024

@Linaom1214

Linaom1214 commented on July 4, 2024

Hello, author. My environment is Ubuntu 18.04, CUDA 10.2, cuDNN 8.1.1, TensorRT 7.2.3.4. Running inference with norm/yolo.cpp on the official YOLOv8s model converted to a TensorRT engine, the output image shows garbled boxes. Could you help explain this? (screenshot attached)

Please share the command you used to export the model.

SHOUshou0426 commented on July 4, 2024

Hello, I wrote my own ONNX-to-TensorRT conversion; please take a look:

// Convert a YOLOv8 ONNX model to a TensorRT engine (TensorRT 7.x API)
void Yolo::onnxYoloEngine(const nvinfer1::DataType dataType) {
  if (fileExists(m_EnginePath)) return;

  // check that the ONNX model exists and is readable
  std::ifstream onnxFile(m_WtsFilePath.c_str(), std::ios::binary | std::ios::in);
  if (!onnxFile.is_open()) {
    std::cerr << "Error: Failed to open ONNX file: " << m_WtsFilePath << std::endl;
    return;
  }

  // create builder, explicit-batch network and ONNX parser
  auto builder = nvinfer1::createInferBuilder(gLogger);
  const auto explicitBatch =
      1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
  auto network = builder->createNetworkV2(explicitBatch);
  nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);

  if (!parser->parseFromFile(m_WtsFilePath.c_str(), static_cast<int>(Logger::Severity::kINFO))) {
    for (int i = 0; i < parser->getNbErrors(); ++i) {
      std::cout << parser->getError(i)->desc() << std::endl;
    }
    return;
  }
  std::cout << "Successfully loaded the ONNX model" << std::endl;

  // configure the builder
  builder->setMaxBatchSize(batchSize);
  nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();
  config->setMaxWorkspaceSize(1 << 24); // 16 MB
  // precision: FP32 by default, FP16 on request (INT8 would additionally need a calibrator)
  if (dataType == nvinfer1::DataType::kHALF) {
    config->setFlag(nvinfer1::BuilderFlag::kFP16);
  }

  std::cout << "Building the TensorRT Engine..." << std::endl;
  engine = builder->buildEngineWithConfig(*network, *config);
  assert(engine && "Failed to build the TensorRT engine");

  // serialize the engine to disk
  BuildOnnxEngine();

  // destroy builder-side objects; the engine stays alive for inference
  config->destroy();
  parser->destroy();
  network->destroy();
  builder->destroy();
}

void Yolo::BuildOnnxEngine() {
  std::cout << "Serializing the TensorRT Engine..." << std::endl;
  assert(engine && "Invalid TensorRT Engine");
  assert(!m_EnginePath.empty() && "Engine path is empty");

  trtModelStream = engine->serialize();
  assert(trtModelStream && "Unable to serialize engine");

  // write the serialized plan to the engine file
  std::ofstream outFile(m_EnginePath, std::ios::binary);
  outFile.write(static_cast<const char*>(trtModelStream->data()), trtModelStream->size());
  outFile.close();

  std::cout << "Serialized plan file cached at location: " << m_EnginePath << std::endl;
}

Linaom1214 commented on July 4, 2024

Hello, I wrote my own ONNX-to-TensorRT conversion; please take a look.

First verify whether direct Python inference gives correct results; the model itself may be the problem.

SHOUshou0426 commented on July 4, 2024

So on your side, converting the official model to TensorRT does not produce garbled boxes, right?

Linaom1214 commented on July 4, 2024

So on your side, converting the official model to TensorRT does not produce garbled boxes, right?

Yes, but make sure the exported ONNX model is full precision.
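
For reference, assuming the official ultralytics package is installed, the default export already produces a full-precision (FP32) ONNX model, for example:

yolo export model=yolov8s.pt format=onnx

FP16 only comes into play if half=True is passed at export time, or if --fp16 is used when building the engine with trtexec.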

SHOUshou0426 commented on July 4, 2024

So on your side, converting the official model to TensorRT does not produce garbled boxes, right?

Yes, but make sure the exported ONNX model is full precision.

You mean FP32 precision?

Linaom1214 commented on July 4, 2024

So on your side, converting the official model to TensorRT does not produce garbled boxes, right?

Yes, but make sure the exported ONNX model is full precision.

You mean FP32 precision?

Yes.

SHOUshou0426 commented on July 4, 2024

So on your side, converting the official model to TensorRT does not produce garbled boxes, right?

Yes, but make sure the exported ONNX model is full precision.

You mean FP32 precision?

Yes.

OK, thank you for the answer. I'll check whether it is a problem with the model; if anything else comes up, I'd appreciate your advice.

SHOUshou0426 commented on July 4, 2024

So on your side, converting the official model to TensorRT does not produce garbled boxes, right?

Yes, but make sure the exported ONNX model is full precision.

You mean FP32 precision?

Yes.

Hello, I tried converting with trtexec and running inference, and the same problem still appears:
trtexec --onnx=/workspace/robot/ultralytics/yolov8s.onnx --saveEngine=yolov8s.engine --explicitBatch

SHOUshou0426 commented on July 4, 2024

Hello, I wrote my own ONNX-to-TensorRT conversion; please take a look.

First verify whether direct Python inference gives correct results; the model itself may be the problem.

Could you tell me the command you use to convert the model? I'd like to try your conversion method and verify it.

Linaom1214 commented on July 4, 2024

Hello, I wrote my own ONNX-to-TensorRT conversion; please take a look.

First verify whether direct Python inference gives correct results; the model itself may be the problem.

Could you tell me the command you use to convert the model? I'd like to try your conversion method and verify it.

Just use the conversion provided in this repo, or use trtexec directly.

SHOUshou0426 commented on July 4, 2024

Hello, I wrote my own ONNX-to-TensorRT conversion; please take a look.

First verify whether direct Python inference gives correct results; the model itself may be the problem.

Could you tell me the command you use to convert the model? I'd like to try your conversion method and verify it.

Just use the conversion provided in this repo, or use trtexec directly.

What I used was trtexec: trtexec --onnx=/workspace/robot/ultralytics/yolov8s.onnx --saveEngine=yolov8s.engine --explicitBatch

SHOUshou0426 commented on July 4, 2024

Hello, I wrote my own ONNX-to-TensorRT conversion; please take a look.

First verify whether direct Python inference gives correct results; the model itself may be the problem.

Could you tell me the command you use to convert the model? I'd like to try your conversion method and verify it.

Just use the conversion provided in this repo, or use trtexec directly.

What I used was trtexec: trtexec --onnx=/workspace/robot/ultralytics/yolov8s.onnx --saveEngine=yolov8s.engine --explicitBatch

I re-converted the model and the same garbled-box problem appeared.

SHOUshou0426 commented on July 4, 2024

Hello, I wrote my own ONNX-to-TensorRT conversion; please take a look.

First verify whether direct Python inference gives correct results; the model itself may be the problem.

Could you tell me the command you use to convert the model? I'd like to try your conversion method and verify it.

Just use the conversion provided in this repo, or use trtexec directly.

Converting with this repo also produces garbled boxes.

Linaom1214 commented on July 4, 2024

@SHOUshou0426 For v8, this repo only supports end2end.

SHOUshou0426 commented on July 4, 2024

@SHOUshou0426 For v8, this repo only supports end2end.

I'm using the code that still needs NMS, norm/yolo.cpp; that one is not end2end.

SHOUshou0426 commented on July 4, 2024

@SHOUshou0426 For v8, this repo only supports end2end.

I looked at the two folders in the cpp code: end2end has no NMS step, norm does have NMS, and I'm using the code in the norm folder.

Linaom1214 commented on July 4, 2024

@SHOUshou0426 For v8, this repo only supports end2end.

I looked at the two folders in the cpp code: end2end has no NMS step, norm does have NMS, and I'm using the code in the norm folder.

OK, I'll find some time to test it later.

SHOUshou0426 commented on July 4, 2024

@SHOUshou0426 For v8, this repo only supports end2end.

I looked at the two folders in the cpp code: end2end has no NMS step, norm does have NMS, and I'm using the code in the norm folder.

OK, I'll find some time to test it later.

I'll look at the post-processing tonight; if I fix it, I'll let you know.

Linaom1214 commented on July 4, 2024

@SHOUshou0426 For v8, this repo only supports end2end.

I looked at the two folders in the cpp code: end2end has no NMS step, norm does have NMS, and I'm using the code in the norm folder.

OK, I'll find some time to test it later.

I'll look at the post-processing tonight; if I fix it, I'll let you know.

OK.

Leoyed commented on July 4, 2024

Hello, has this problem been solved? I also get garbled boxes when running inference with the norm code. How should this be handled?

SHOUshou0426 commented on July 4, 2024

Hello, has this problem been solved? I also get garbled boxes when running inference with the norm code. How should this be handled?

I'll send you my modified version.

SHOUshou0426 commented on July 4, 2024

@Leoyed Give me an email address and I'll send it to you.

Leoyed commented on July 4, 2024

@Leoyed Give me an email address and I'll send it to you.

Thank you very much. Here is my email: [email protected]

Leoyed commented on July 4, 2024

Hello, has this problem been solved? I also get garbled boxes when running inference with the norm code. How should this be handled?

I'll send you my modified version.

Could you tell me which part is wrong? I've been reading the code these past two days and the post-processing doesn't look quite right to me, but I'm not sure.

Linaom1214 commented on July 4, 2024

Hello, has this problem been solved? I also get garbled boxes when running inference with the norm code. How should this be handled?

I'll send you my modified version.

You could open a PR, and I'll fix it on my side.

SHOUshou0426 commented on July 4, 2024

@Linaom1214 Let @Leoyed open the PR; I have some work of my own to deal with.

wyq-aki commented on July 4, 2024

Hello, has this problem been solved? I also get garbled boxes when running inference with the norm code. How should this be handled?

I'll send you my modified version.

Could you tell me which part is wrong? I've been reading the code these past two days and the post-processing doesn't look quite right to me, but I'm not sure.

Hello, is the issue here in the post-processing?

leayz-888 commented on July 4, 2024

Hello, a question: in the v8 post-processing under norm, the generate_yolo_proposals function contains this code:

float box_objectness = feat_blob[basic_pos + 4];
// std::cout << feat_blob << std::endl;
for (int class_idx = 0; class_idx < num_class; class_idx++)
{
    float box_cls_score = feat_blob[basic_pos + 5 + class_idx];
    float box_prob = box_objectness * box_cls_score;
    if (box_prob > prob_threshold) { ...... }
}

Here feat_blob[basic_pos + 4] is read as the objectness confidence, but v8 has no objectness output, so this actually picks up the first class's probability. Filtering low-confidence boxes with box_objectness * box_cls_score afterwards is therefore wrong, and this is probably what causes the garbled boxes with v8 in the norm code.
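
For anyone hitting this, here is a minimal sketch of decoding v8-style proposals without an objectness term, assuming the raw output is laid out as [4 + num_class, num_anchors] (the default Ultralytics YOLOv8 export, e.g. 84 x 8400), with rows 0..3 holding cx, cy, w, h and the remaining rows holding per-class scores. The Det struct and function name are illustrative, not the repository's exact code:

#include <vector>

// Minimal detection record for the sketch; the repo's own Object struct can be substituted.
struct Det { float x, y, w, h, prob; int label; };

// Decode YOLOv8 raw output: no objectness, class scores start right after the 4 box values,
// and the tensor is stored row-major as [4 + num_class][num_anchors].
static void generate_yolov8_proposals(const float* feat_blob, int num_anchors, int num_class,
                                      float prob_threshold, std::vector<Det>& dets)
{
    for (int anchor_idx = 0; anchor_idx < num_anchors; anchor_idx++)
    {
        // pick the best class score for this anchor (nothing to multiply by)
        int best_class = -1;
        float best_score = -1.f;
        for (int class_idx = 0; class_idx < num_class; class_idx++)
        {
            float score = feat_blob[(4 + class_idx) * num_anchors + anchor_idx];
            if (score > best_score) { best_score = score; best_class = class_idx; }
        }
        if (best_score < prob_threshold) continue;

        // box is stored as center x, center y, width, height
        float cx = feat_blob[0 * num_anchors + anchor_idx];
        float cy = feat_blob[1 * num_anchors + anchor_idx];
        float w  = feat_blob[2 * num_anchors + anchor_idx];
        float h  = feat_blob[3 * num_anchors + anchor_idx];

        dets.push_back({cx - 0.5f * w, cy - 0.5f * h, w, h, best_score, best_class});
    }
}

NMS over dets is still needed afterwards, exactly as in the existing code.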

wyq-aki commented on July 4, 2024

Here feat_blob[basic_pos + 4] is read as the objectness confidence, but v8 has no objectness output... this is probably what causes the garbled boxes with v8 in the norm code.

Yes. In the author's trt.py, when end2end=False, the reshape of the model output in utils.py is wrong. I changed it to predictions = np.reshape(data, (int(4 + self.n_classes), -1)).T, set scores = predictions[:, 4:] in postprocess(), and removed the *ratio line in vis(); after that the predicted boxes are correct.

Leoyed commented on July 4, 2024

Hello, has this problem been solved? I also get garbled boxes when running inference with the norm code. How should this be handled?

I'll send you my modified version.

Could you tell me which part is wrong? I've been reading the code these past two days and the post-processing doesn't look quite right to me, but I'm not sure.

Hello, is the issue here in the post-processing?

Right, the post-processing method in this code is wrong. Also, the pre-processing here runs on the CPU, which takes quite a long time.

Linaom1214 commented on July 4, 2024

Here feat_blob[basic_pos + 4] is read as the objectness confidence, but v8 has no objectness output... this is probably what causes the garbled boxes with v8 in the norm code.

Yes. In the author's trt.py, when end2end=False, the reshape of the model output in utils.py is wrong. I changed it to predictions = np.reshape(data, (int(4 + self.n_classes), -1)).T, set scores = predictions[:, 4:] in postprocess(), and removed the *ratio line in vis(); after that the predicted boxes are correct.

The current code only supports end2end for v8, and the C++ code is the same. Earlier, when I debugged with a full-precision ONNX model, the results were fine. @Leoyed do you have another solution?

Linaom1214 commented on July 4, 2024

@Leoyed @leayz-888 @wyq-aki @SHOUshou0426 The accuracy loss was probably a pre-processing issue; it has now been fixed. I am currently adapting v10; v9 is already adapted, and an update is expected today.
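
For completeness, the pre-processing that usually matters here is the letterbox resize (keep aspect ratio, pad, then build a CHW float blob). A minimal OpenCV sketch of that idea, not necessarily the exact fix that landed in the repo:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

// Resize keeping aspect ratio and pad to input_w x input_h; return the scale ratio
// so detections can be mapped back to the original image.
static cv::Mat letterbox(const cv::Mat& img, int input_w, int input_h, float& ratio)
{
    ratio = std::min(input_w / (float)img.cols, input_h / (float)img.rows);
    int new_w = (int)std::round(img.cols * ratio);
    int new_h = (int)std::round(img.rows * ratio);

    cv::Mat resized;
    cv::resize(img, resized, cv::Size(new_w, new_h));

    cv::Mat padded(input_h, input_w, CV_8UC3, cv::Scalar(114, 114, 114)); // gray padding, as Ultralytics does
    resized.copyTo(padded(cv::Rect(0, 0, new_w, new_h)));
    return padded;
}

// Convert the padded image to the CHW float blob the engine expects (BGR->RGB, scale to [0, 1]).
static void blob_from_image(const cv::Mat& img, float* blob)
{
    const int h = img.rows, w = img.cols;
    for (int c = 0; c < 3; c++)
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                blob[c * h * w + y * w + x] = img.at<cv::Vec3b>(y, x)[2 - c] / 255.f;
}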
