Comments (21)

qu-ngx commented on June 9, 2024

Can you please copy and paste the full log instead of just taking a screenshot of your IDE?

muamal2 commented on June 9, 2024

Sure :)

Launching lib/main.dart on 22101320G in debug mode...
Running Gradle task 'assembleDebug'...
✓ Built build/app/outputs/flutter-apk/app-debug.apk.
Installing build/app/outputs/flutter-apk/app-debug.apk...
Debug service listening on ws://127.0.0.1:49474/eIPV7vQ946U=/ws
Syncing files to device 22101320G...
I/CameraManagerGlobal( 8569): Connecting to camera service
D/CameraExtStub( 8569): init android.hardware.camera2.impl.CameraExtImplXiaoMi
D/CameraInjector( 8569): updateCloudCameraControllerInfoAsync: E
D/CameraInjector( 8569): updateCloudCameraControllerInfoAsync: X
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): [soar.cts] ignore the status update of camera: 2
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): [soar.cts] ignore the status update of camera: 3
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): [soar.cts] ignore the status update of camera: 4
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): [soar.cts] ignore the status update of camera: 5
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): [soar.cts] ignore the status update of camera: 6
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): [soar.cts] ignore the status update of camera: 7
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): ignore the torch status update of camera: 2
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): ignore the torch status update of camera: 3
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): ignore the torch status update of camera: 4
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): ignore the torch status update of camera: 5
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): ignore the torch status update of camera: 6
E/libEGL ( 8569): pre_cache appList: ,,
W/libc ( 8569): Access denied finding property "ro.hardware.chipname"
I/tflite ( 8569): Created TensorFlow Lite XNNPACK delegate for CPU.
I/tflite ( 8569): Initialized TensorFlow Lite runtime.
I/tflite ( 8569): Replacing 71 node(s) with delegate (TfLiteXNNPackDelegate) node, yielding 1 partitions.
E/LB ( 8569): fail to open node: No such file or directory
D/CompatibilityChangeReporter( 8569): Compat change id reported: 206033068; UID 10180; state: ENABLED
D/VRIMainActivity: vri.Setup new sync id=0 syncSeqId=0
D/n_ssd_mobilenet( 8569): MiuiProcessManagerServiceStub setSchedFifo
I/MiuiProcessManagerImpl( 8569): setSchedFifo pid:8569, mode:3
E/perf_hint( 8569): Session creation failed, mPreferredRateNanos: -1
D/VRIMainActivity: vri.reportDrawFinished syncSeqId=0 android.view.ViewRootImpl.lambda$createSyncIfNeeded$4$android-view-ViewRootImpl:3852 android.view.ViewRootImpl$$ExternalSyntheticLambda1.run:6 android.os.Handler.handleCallback:942 android.os.Handler.dispatchMessage:99 android.os.Looper.loopOnce:211
W/Camera ( 8569): The selected imageFormatGroup is not supported by Android. Defaulting to yuv420
W/libc ( 8569): Access denied finding property "persist.vendor.camera.privapp.list"
D/CameraExtImplXiaoMi( 8569): initCameraDevice: 0
D/MiuiMultiWindowUtils( 8569): freeform resolution args raw data:{ "liuqin": { "freeform_args": { "vertical_portrait":{"aspect_ratio":0.5625, "original_ratio":0.3334,"original_scale":0.835,"top_margin":0.049,"left_margin":0.2775}, "horizontal_portrait":{"aspect_ratio":0.5625, "original_ratio":0.3334,"original_scale":0.835,"top_margin":-1,"left_margin":0.6525}, "vertical_landscape":{"aspect_ratio":-1, "original_ratio":1,"original_scale":0.468,"top_margin":0.049,"left_margin":-1}, "horizontal_landscape":{"aspect_ratio":-1, "original_ratio":1,"original_scale":0.468,"top_margin":-1,"left_margin":0.4976} }, "mini_freeform_args":{ "vertical_portrait":{"aspect_ratio":0.5625, "original_ratio":0.1278}, "horizontal_portrait":{"aspect_ratio":0.5625, "original_ratio":0.1278}, "vertical_landscape":{"aspect_ratio":-1, "original_ratio":0.1778}, "horizontal_landscape":{"aspect_ratio":-1, "original_ratio":0.1778} } }, "pipa": { "freeform_args": { "vertical_portrait":{"aspect_ratio":0.5625, "original_ratio":0.3334,"original_scale":0.835,"top_margin":0.049,"left_margin":0.2775}, "horizontal_portrait":{"aspect_ratio":0.5625, "original_ratio":0.3334,"original_scale":0.835,"top_margin":-1,"left_margin":0.6525}, "vertical_landscape":{"aspect_ratio":-1, "original_ratio":1,"original_scale":0.468,"top_margin":0.049,"left_margin":-1}, "horizontal_landscape":{"aspect_ratio":-1, "original_ratio":1,"original_scale":0.468,"top_margin":-1,"left_margin":0.4976} }, "mini_freeform_args":{ "vertical_portrait":{"aspect_ratio":0.5625, "original_ratio":0.1278}, "horizontal_portrait":{"aspect_ratio":0.5625, "original_ratio":0.1278}, "vertical_landscape":{"aspect_ratio":-1, "original_ratio":0.1778}, "horizontal_landscape":{"aspect_ratio":-1, "original_ratio":0.1778} } }, "yunluo": { "freeform_args": { "vertical_portrait":{"aspect_ratio":0.5625, "original_ratio":0.48,"original_scale":0.6133,"top_margin":0.049,"left_margin":0.2775}, "horizontal_portrait":{"aspect_ratio":0.5625, "original_ratio":0.48,"original_scale":0.6133,"top_margin":-1,"left_margin":0.6525}, "vertical_landscape":{"aspect_ratio":-1, "original_ratio":1,"original_scale":0.468,"top_margin":0.049,"left_margin":-1}, "horizontal_landscape":{"aspect_ratio":-1, "original_ratio":1,"original_scale":0.468,"top_margin":-1,"left_margin":0.4976} }, "mini_freeform_args":{ "vertical_portrait":{"aspect_ratio":0.5625, "original_ratio":0.1917}, "horizontal_portrait":{"aspect_ratio":0.5625, "original_ratio":0.1917}, "vertical_landscape":{"aspect_ratio":-1, "original_ratio":0.2667}, "horizontal_landscape":{"aspect_ratio":-1, "original_ratio":0.2667} } }, "zizhan":{ "freeform_args": { "inner":{ "vertical_portrait":{"aspect_ratio":0.626, "original_ratio":0.5643,"original_scale":0.74,"top_margin":0.168,"left_margin":0.484}, "horizontal_portrait":{"aspect_ratio":0.626, "original_ratio":0.5643,"original_scale":0.74,"top_margin":0.1222,"left_margin":0.59745}, "vertical_landscape":{"aspect_ratio":1.6, "original_ratio":1,"original_scale":0.604,"top_margin":0.2596,"left_margin":0.2624}, "horizontal_landscape":{"aspect_ratio":1.6, "original_ratio":1,"original_scale":0.604,"top_margin":0.213,"left_margin":0.3758} }, "outer":{ "vertical_portrait":{"aspect_ratio":0.626, "original_ratio":1,"original_scale":0.74,"top_margin":0.0753,"left_margin":-1}, "horizontal_portrait":{"aspect_ratio":0.626, "original_ratio":1,"original_scale":0.5756,"top_margin":-1,"left_margin":0.0753}, "vertical_landscape":{"aspect_ratio":1.6, 
"original_ratio":0.6847,"original_scale":0.587,"top_margin":0.0753,"left_margin":-1}, "horizontal_landscape":{"aspect_ratio":1.6, "original_ratio":0.6847,"original_scale":0.587,"top_margin":-1,"left_margin":0.0753}} }, "mini_freeform_args":{ "inner":{ "vertical_portrait":{"aspect_ratio":0.626, "original_ratio":0.147}, "horizontal_portrait":{
E/MiuiMultiWindowUtils( 8569): initFreeFormResolutionArgs failed, device is redwood
D/IS_CTS_MODE( 8569): false
D/MULTI_WINDOW_SWITCH_ENABLED( 8569): false
E/CameraManagerGlobal( 8569): Camera 4 is not available. Ignore physical camera status change
E/CameraManagerGlobal( 8569): Camera 5 is not available. Ignore physical camera status change
E/CameraManagerGlobal( 8569): Camera 6 is not available. Ignore physical camera status change
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): ignore the torch status update of camera: 2
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): ignore the torch status update of camera: 3
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): ignore the torch status update of camera: 4
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): ignore the torch status update of camera: 5
W/libc ( 8569): Access denied finding property "vendor.camera.aux.packagelist"
W/CameraManagerGlobal( 8569): ignore the torch status update of camera: 6
I/Camera ( 8569): startPreview
D/DecorView: onWindowFocusChanged hasWindowFocus true
I/HandWritingStubImpl( 8569): refreshLastKeyboardType: 1
I/HandWritingStubImpl( 8569): getCurrentKeyboardType: 1
D/CameraDevice-JV-0( 8569): waitUntilIdle: E. id = 0
D/CameraDevice-JV-0( 8569): waitUntilIdle: X
I/HandWritingStubImpl( 8569): getCurrentKeyboardType: 1
I/Camera ( 8569): CameraCaptureSession onConfigured
I/Camera ( 8569): Updating builder settings
D/Camera ( 8569): Updating builder with feature: ExposureLockFeature
D/Camera ( 8569): Updating builder with feature: ExposurePointFeature
D/Camera ( 8569): Updating builder with feature: ZoomLevelFeature
D/Camera ( 8569): Updating builder with feature: AutoFocusFeature
D/Camera ( 8569): Updating builder with feature: NoiseReductionFeature
I/Camera ( 8569): updateNoiseReduction | currentSetting: fast
D/Camera ( 8569): Updating builder with feature: FocusPointFeature
D/Camera ( 8569): Updating builder with feature: ResolutionFeature
D/Camera ( 8569): Updating builder with feature: SensorOrientationFeature
D/Camera ( 8569): Updating builder with feature: FlashFeature
D/Camera ( 8569): Updating builder with feature: ExposureOffsetFeature
D/Camera ( 8569): Updating builder with feature: FpsRangeFeature
I/Camera ( 8569): refreshPreviewCaptureSession
D/CameraDevice-JV-0( 8569): waitUntilIdle: E. id = 0
W/BpBinder( 8569): PerfMonitor binderTransact : time=289ms interface=android.hardware.camera2.ICameraDeviceUser code=14
D/CameraDevice-JV-0( 8569): waitUntilIdle: X
I/Camera ( 8569): startPreviewWithImageStream
W/n_ssd_mobilenet( 8569): Long monitor contention with owner main (8569) at void android.hardware.camera2.impl.CameraDeviceImpl.waitUntilIdle()(CameraDeviceImpl.java:1445) waiters=0 in void android.hardware.camera2.impl.CameraDeviceImpl$4.run() for 451ms
I/Camera ( 8569): CameraCaptureSession onConfigured
I/Camera ( 8569): Updating builder settings
W/n_ssd_mobilenet( 8569): Long monitor contention with owner main (8569) at void android.hardware.camera2.impl.CameraDeviceImpl.waitUntilIdle()(CameraDeviceImpl.java:1445) waiters=1 in void android.hardware.camera2.impl.CameraDeviceImpl$CameraDeviceCallbacks.onResultReceived(android.hardware.camera2.impl.CameraMetadataNative, android.hardware.camera2.impl.CaptureResultExtras, android.hardware.camera2.impl.PhysicalCaptureResultInfo[]) for 337ms
D/Camera ( 8569): Updating builder with feature: ExposureLockFeature
D/Camera ( 8569): Updating builder with feature: ExposurePointFeature
D/Camera ( 8569): Updating builder with feature: ZoomLevelFeature
D/Camera ( 8569): Updating builder with feature: AutoFocusFeature
D/Camera ( 8569): Updating builder with feature: NoiseReductionFeature
I/n_ssd_mobilenet( 8569): createIfNeeded: Recreate new EGLImage since dataspace changed
I/Camera ( 8569): updateNoiseReduction | currentSetting: fast
I/Camera ( 8569): CameraCaptureSession onClosed
D/Camera ( 8569): Updating builder with feature: FocusPointFeature
D/Camera ( 8569): Updating builder with feature: ResolutionFeature
D/Camera ( 8569): Updating builder with feature: SensorOrientationFeature
D/Camera ( 8569): Updating builder with feature: FlashFeature
D/Camera ( 8569): Updating builder with feature: ExposureOffsetFeature
D/Camera ( 8569): Updating builder with feature: FpsRangeFeature
I/Camera ( 8569): refreshPreviewCaptureSession
I/n_ssd_mobilenet( 8569): createIfNeeded: Recreate new EGLImage since dataspace changed
I/n_ssd_mobilenet( 8569): createIfNeeded: Recreate new EGLImage since dataspace changed
I/n_ssd_mobilenet( 8569): createIfNeeded: Recreate new EGLImage since dataspace changed
I/n_ssd_mobilenet( 8569): createIfNeeded: Recreate new EGLImage since dataspace changed
I/n_ssd_mobilenet( 8569): createIfNeeded: Recreate new EGLImage since dataspace changed
I/n_ssd_mobilenet( 8569): createIfNeeded: Recreate new EGLImage since dataspace changed
I/n_ssd_mobilenet( 8569): createIfNeeded: Recreate new EGLImage since dataspace changed
I/n_ssd_mobilenet( 8569): createIfNeeded: Recreate new EGLImage since dataspace changed
I/n_ssd_mobilenet( 8569): createIfNeeded: Recreate new EGLImage since dataspace changed
I/tflite ( 8569): Replacing 71 node(s) with delegate (TfLiteXNNPackDelegate) node, yielding 1 partitions.
I/n_ssd_mobilenet( 8569): createIfNeeded: Recreate new EGLImage since dataspace changed
I/n_ssd_mobilenet( 8569): Background concurrent copying GC freed 67811(4589KB) AllocSpace objects, 113(61MB) LOS objects, 75% free, 3668KB/14MB, paused 42us,17us total 139.197ms
W/System ( 8569): A resource failed to call Surface.release.
E/flutter ( 8569): [ERROR:flutter/runtime/dart_isolate.cc(1097)] Unhandled exception:
E/flutter ( 8569): Invalid argument(s): Output object shape mismatch, interpreter returned output of shape: [1, 2] while shape of output provided as argument in run is: [1, 10, 4]
E/flutter ( 8569): #0 Tensor._duplicateList (package:tflite_flutter/src/tensor.dart:233:7)
E/flutter ( 8569): #1 Tensor.copyTo (package:tflite_flutter/src/tensor.dart:203:7)
E/flutter ( 8569): #2 Interpreter.runForMultipleInputs (package:tflite_flutter/src/interpreter.dart:183:24)
E/flutter ( 8569): #3 _DetectorServer._runInference (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:260:19)
E/flutter ( 8569): #4 _DetectorServer.analyseImage (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:196:20)
E/flutter ( 8569): #5 _DetectorServer._convertCameraImage (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:162:23)
E/flutter ( 8569): #6 _DetectorServer._handleCommand (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:145:9)
E/flutter ( 8569): #7 _DetectorServer._run. (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:129:20)
E/flutter ( 8569): #8 _RootZone.runUnaryGuarded (dart:async/zone.dart:1594:10)
E/flutter ( 8569): #9 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:339:11)
E/flutter ( 8569): #10 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:271:7)
E/flutter ( 8569): #11 _SyncStreamControllerDispatch._sendData (dart:async/stream_controller.dart:784:19)
E/flutter ( 8569): #12 _StreamController._add (dart:async/stream_controller.dart:658:7)
E/flutter ( 8569): #13 _StreamController.add (dart:async/stream_controller.dart:606:5)
E/flutter ( 8569): #14 _RawReceivePort._handleMessage (dart:isolate-patch/isolate_patch.dart:189:12)
I/n_ssd_mobilenet( 8569): createIfNeeded: Recreate new EGLImage since dataspace changed
D/ProfileInstaller( 8569): Installing profile for com.example.live_object_detection_ssd_mobilenet
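
The decisive line in that log is the shape mismatch: the interpreter returned an output of shape [1, 2] (two class scores), while the example code supplied SSD-style buffers such as [1, 10, 4]. A minimal sketch of the constraint, assuming the tflite_flutter API (the asset path and input here are placeholders, not this project's exact ones):

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

Future<void> runClassifier(List<Object> input) async {
  final interpreter =
      await Interpreter.fromAsset('assets/models/model.tflite');
  // runForMultipleInputs needs one pre-allocated buffer per output tensor,
  // shaped exactly like that tensor. A [1, 2] classifier output needs:
  final output = {0: [List<double>.filled(2, 0.0)]};
  // Handing it an SSD-style buffer instead (e.g. [1, 10, 4] boxes) is
  // what raises "Output object shape mismatch".
  interpreter.runForMultipleInputs([input], output);
  print(output[0]); // [[scoreA, scoreB]]
  interpreter.close();
}
```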

muamal2 commented on June 9, 2024

This only happens when I add a model different from the one downloaded by sh ./scripts/download_model.sh

Note: I am training the model with Teachable Machine.

qu-ngx commented on June 9, 2024

@muamal2 Your model won't work with the code as-is, since the example was implemented specifically for that model. You have to modify the example code to match your model's output tensor. Read the following line:

E/flutter ( 8569): Invalid argument(s): Output object shape mismatch, interpreter returned output of shape: [1, 2] while shape of output provided as argument in run is: [1, 10, 4]

I have never used Teachable Machine, so I don't know what kind of model it produces. Upload your model to Netron and check your output tensor's shape carefully, then modify the code so the output shape matches your model.
If you want me to walk you through every step, please provide me your tflite file.
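
Alongside Netron, you can also print the tensor shapes at runtime. A small sketch, assuming the tflite_flutter API; the asset path is a placeholder:

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

// Prints every input/output tensor's name, shape, and type so the
// Dart-side buffers can be sized to match.
Future<void> inspectModel() async {
  final interpreter =
      await Interpreter.fromAsset('assets/models/model.tflite');
  for (final t in interpreter.getInputTensors()) {
    print('input  ${t.name}: shape=${t.shape} type=${t.type}');
  }
  for (final t in interpreter.getOutputTensors()) {
    print('output ${t.name}: shape=${t.shape} type=${t.type}');
  }
  interpreter.close();
}
```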

muamal2 commented on June 9, 2024

Yes, please guide me. This is a preliminary file; I trained it on only a few images.

converted_tflite (1).zip

qu-ngx commented on June 9, 2024

@muamal2 Since your model only returns class labels, change the output of your model in detector_service.dart and remove all the lines that do not relate to labels. See the following changes:

class _DetectorServer {
  /// Input size of image (height = width = 224)
  static const int mlModelInputSize = 224;

  /// Result confidence threshold
  static const double confidence = 0.5;
  Interpreter? _interpreter;
  List<String>? _labels;

  _DetectorServer(this._sendPort);

  final SendPort _sendPort;

  // ----------------------------------------------------------------------
  // Here the plugin is used from the background isolate.
  // ----------------------------------------------------------------------

  /// The main entrypoint for the background isolate sent to [Isolate.spawn].
  static void _run(SendPort sendPort) {
    ReceivePort receivePort = ReceivePort();
    final _DetectorServer server = _DetectorServer(sendPort);
    receivePort.listen((message) async {
      final _Command command = message as _Command;
      await server._handleCommand(command);
    });
    // receivePort.sendPort - used by UI isolate to send commands to the service receiverPort
    sendPort.send(_Command(_Codes.init, args: [receivePort.sendPort]));
  }

  /// Handle the [command] received from the [ReceivePort].
  Future<void> _handleCommand(_Command command) async {
    switch (command.code) {
      case _Codes.init:
        // ----------------------------------------------------------------------
        // The [RootIsolateToken] is required for
        // [BackgroundIsolateBinaryMessenger.ensureInitialized] and must be
        // obtained on the root isolate and passed into the background isolate via
        // a [SendPort].
        // ----------------------------------------------------------------------
        RootIsolateToken rootIsolateToken =
            command.args?[0] as RootIsolateToken;
        // ----------------------------------------------------------------------
        // [BackgroundIsolateBinaryMessenger.ensureInitialized] for each
        // background isolate that will use plugins. This sets up the
        // [BinaryMessenger] that the Platform Channels will communicate with on
        // the background isolate.
        // ----------------------------------------------------------------------
        BackgroundIsolateBinaryMessenger.ensureInitialized(rootIsolateToken);
        _interpreter = Interpreter.fromAddress(command.args?[1] as int);
        _labels = command.args?[2] as List<String>;
        _sendPort.send(const _Command(_Codes.ready));
      case _Codes.detect:
        _sendPort.send(const _Command(_Codes.busy));
        _convertCameraImage(command.args?[0] as CameraImage);
      default:
        debugPrint('_DetectorService unrecognized command ${command.code}');
    }
  }

  void _convertCameraImage(CameraImage cameraImage) {
    var preConversionTime = DateTime.now().millisecondsSinceEpoch;

    convertCameraImageToImage(cameraImage).then((image) {
      if (image != null) {
        if (Platform.isAndroid) {
          image = image_lib.copyRotate(image, angle: 90);
        }

        final results = analyseImage(image, preConversionTime);
        _sendPort.send(_Command(_Codes.result, args: [results]));
      }
    });
  }

  Map<String, dynamic> analyseImage(
      image_lib.Image? image, int preConversionTime) {
    var conversionElapsedTime =
        DateTime.now().millisecondsSinceEpoch - preConversionTime;

    var preProcessStart = DateTime.now().millisecondsSinceEpoch;

    /// Pre-process the image
    /// Resizing image for model [224, 224]
    final imageInput = image_lib.copyResize(
      image!,
      width: mlModelInputSize,
      height: mlModelInputSize,
    );

    // Creating matrix representation, [224, 224, 3]
    final imageMatrix = List.generate(
      imageInput.height,
      (y) => List.generate(
        imageInput.width,
        (x) {
          final pixel = imageInput.getPixel(x, y);
          return [pixel.r, pixel.g, pixel.b];
        },
      ),
    );

    var preProcessElapsedTime =
        DateTime.now().millisecondsSinceEpoch - preProcessStart;

    var inferenceTimeStart = DateTime.now().millisecondsSinceEpoch;

    final output = _runInference(imageMatrix);

    // Classes
    final classesRaw = output.elementAt(0).first as List<double>;
    final classes = classesRaw.map((value) => value.toString()).toList();

    // Number of detections
    final numberOfDetectionsRaw = output.last.first as double;
    final numberOfDetections = numberOfDetectionsRaw.toInt();

    final List<String> classification = [];
    for (var i = 0; i < numberOfDetections; i++) {
      classification.add(_labels![classes[i]]);
    }

    /// Generate recognitions
    List<Recognition> recognitions = [];
    for (int i = 0; i < numberOfDetections; i++) {
      // Label string
      var label = classification[i];

      if (score > confidence) {
        recognitions.add(
          Recognition(label),
        );
      }
    }

    var inferenceElapsedTime =
        DateTime.now().millisecondsSinceEpoch - inferenceTimeStart;

    var totalElapsedTime =
        DateTime.now().millisecondsSinceEpoch - preConversionTime;

    return {
      "recognitions": recognitions,
      "stats": <String, String>{
        'Conversion time:': conversionElapsedTime.toString(),
        'Pre-processing time:': preProcessElapsedTime.toString(),
        'Inference time:': inferenceElapsedTime.toString(),
        'Total prediction time:': totalElapsedTime.toString(),
        'Frame': '${image.width} X ${image.height}',
      },
    };
  }

  /// Object detection main function
  List<List<Object>> _runInference(
    List<List<List<num>>> imageMatrix,
  ) {
    // Set input tensor [1, 224, 224, 3]
    final input = [imageMatrix];

    // Set output tensor
    // Classes: [1,2],

    final output = {
      0: [List<num>.filled(2, 0)],
    };

    _interpreter!.runForMultipleInputs([input], output);
    return output.values.toList();
  }
}

I did not look at the other files, so please let me know if something does not run as expected.

muamal2 commented on June 9, 2024

Sorry to bother you with my problem, but I ran into the issues shown in the screenshot below.

[Screenshot 2024-02-18 at 3:14:04 AM]

I use the solution from #137 to improve resolution and accuracy, and it works perfectly.

qu-ngx commented on June 9, 2024

@muamal2 I am sorry for the 3rd, 4th, and 5th bugs; they are my mistakes. Unfortunately, I am afraid this is an image classification model being used for real-time detection, so I need to look at it more carefully. I will come back later with a better solution. I am sorry if I can't reply to you soon.

muamal2 commented on June 9, 2024

Don't worry, I'm waiting for you. Please keep in mind that I applied the change from #137.

muamal2 commented on June 9, 2024

@qu-ngx Hello, I hope you are well. Did you find a solution?

qu-ngx commented on June 9, 2024

@muamal2 I am sorry, I was busy last week. Did you figure it out, or do you still need my help with it?

muamal2 commented on June 9, 2024

@qu-ngx I did not find a solution. Please help me; I am confused.

qu-ngx commented on June 9, 2024

Can you please let me know what you are trying to predict with the model, or simply provide me an image so I can test your model? Try the code below and see if it works. I still lack context; can I have your debug log so I can see all the other issues?

class _DetectorServer {
  /// Input size of image (height = width = 224)
  static const int mlModelInputSize = 224;

  /// Result confidence threshold
  static const double confidence = 0.5;
  Interpreter? _interpreter;
  List<String>? _labels;

  _DetectorServer(this._sendPort);

  final SendPort _sendPort;

  // ----------------------------------------------------------------------
  // Here the plugin is used from the background isolate.
  // ----------------------------------------------------------------------

  /// The main entrypoint for the background isolate sent to [Isolate.spawn].
  static void _run(SendPort sendPort) {
    ReceivePort receivePort = ReceivePort();
    final _DetectorServer server = _DetectorServer(sendPort);
    receivePort.listen((message) async {
      final _Command command = message as _Command;
      await server._handleCommand(command);
    });
    // receivePort.sendPort - used by UI isolate to send commands to the service receiverPort
    sendPort.send(_Command(_Codes.init, args: [receivePort.sendPort]));
  }

  /// Handle the [command] received from the [ReceivePort].
  Future<void> _handleCommand(_Command command) async {
    switch (command.code) {
      case _Codes.init:
        // ----------------------------------------------------------------------
        // The [RootIsolateToken] is required for
        // [BackgroundIsolateBinaryMessenger.ensureInitialized] and must be
        // obtained on the root isolate and passed into the background isolate via
        // a [SendPort].
        // ----------------------------------------------------------------------
        RootIsolateToken rootIsolateToken =
            command.args?[0] as RootIsolateToken;
        // ----------------------------------------------------------------------
        // [BackgroundIsolateBinaryMessenger.ensureInitialized] for each
        // background isolate that will use plugins. This sets up the
        // [BinaryMessenger] that the Platform Channels will communicate with on
        // the background isolate.
        // ----------------------------------------------------------------------
        BackgroundIsolateBinaryMessenger.ensureInitialized(rootIsolateToken);
        _interpreter = Interpreter.fromAddress(command.args?[1] as int);
        _labels = command.args?[2] as List<String>;
        _sendPort.send(const _Command(_Codes.ready));
      case _Codes.detect:
        _sendPort.send(const _Command(_Codes.busy));
        _convertCameraImage(command.args?[0] as CameraImage);
      default:
        debugPrint('_DetectorService unrecognized command ${command.code}');
    }
  }

  void _convertCameraImage(CameraImage cameraImage) {
    var preConversionTime = DateTime.now().millisecondsSinceEpoch;

    convertCameraImageToImage(cameraImage).then((image) {
      if (image != null) {
        if (Platform.isAndroid) {
          image = image_lib.copyRotate(image, angle: 90);
        }

        final results = analyseImage(image, preConversionTime);
        _sendPort.send(_Command(_Codes.result, args: [results]));
      }
    });
  }

  Map<String, dynamic> analyseImage(
      image_lib.Image? image, int preConversionTime) {
    var conversionElapsedTime =
        DateTime.now().millisecondsSinceEpoch - preConversionTime;

    var preProcessStart = DateTime.now().millisecondsSinceEpoch;

    /// Pre-process the image
    /// Resizing image for model [224, 224]
    final imageInput = image_lib.copyResize(
      image!,
      width: mlModelInputSize,
      height: mlModelInputSize,
    );

    // Creating matrix representation, [224, 224, 3]
    final imageMatrix = List.generate(
      imageInput.height,
      (y) => List.generate(
        imageInput.width,
        (x) {
          final pixel = imageInput.getPixel(x, y);
          return [pixel.r, pixel.g, pixel.b];
        },
      ),
    );

    var preProcessElapsedTime =
        DateTime.now().millisecondsSinceEpoch - preProcessStart;

    var inferenceTimeStart = DateTime.now().millisecondsSinceEpoch;

    final output = _runInference(imageMatrix);

    // Classes
    final classesRaw = output.elementAt(0).first as List<double>;
    final classes = classesRaw.map((value) => value.toString()).toList();

    // Number of detections
    final numberOfDetectionsRaw = output.last.first as double;
    final numberOfDetections = numberOfDetectionsRaw.toInt();

    final List<String> classification = [];
    for (var i = 0; i < numberOfDetections; i++) {
      classification.add(_labels![classes[i]]);
    }

    /// Generate recognitions
    List<Recognition> recognitions = [];
    for (int i = 0; i < numberOfDetections; i++) {
      // Label string
      var label = classification[i];

      recognitions.add(
        Recognition(label),
      );
    }

    var inferenceElapsedTime =
        DateTime.now().millisecondsSinceEpoch - inferenceTimeStart;

    var totalElapsedTime =
        DateTime.now().millisecondsSinceEpoch - preConversionTime;

    return {
      "recognitions": recognitions,
      "stats": <String, String>{
        "Conversion time:": conversionElapsedTime.toString(),
        "Pre-processing time:": preProcessElapsedTime.toString(),
        "Inference time:": inferenceElapsedTime.toString(),
        "Total prediction time:": totalElapsedTime.toString(),
        "Frame": '${image.width} X ${image.height}',
      },
    };
  }

  /// Object detection main function
  List<List<Object>> _runInference(
    List<List<List<num>>> imageMatrix,
  ) {
    // Set input tensor [1, 224, 224, 3]
    final input = [imageMatrix];

    // Set output tensor
    // Classes: [1,2],

    final output = {
      0: [List<num>.filled(2, 0)],
    };

    _interpreter!.runForMultipleInputs([input], output);
    return output.values.toList();
  }
}

muamal2 commented on June 9, 2024

[Screenshot 2024-02-25 at 9:18:33 PM]

These are the problems that stand out to me

qu-ngx commented on June 9, 2024

I'm not sure about lines 291 and 301. Can you paste the code where the error appears, or take a screenshot of where it says a String cannot be assigned to an int?

muamal2 commented on June 9, 2024
291 => `classification.add(_labels![classes[i]]);`
301 => `Recognition(label),`
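
For context: the error on line 291 comes from indexing `_labels!` (a `List<String>`, which takes an `int` index) with `classes[i]`, a `String` built from a score. For a [1, 2] classifier output, one common pattern (a sketch, not necessarily this project's final shape) is to take the argmax of the scores and index the labels with that:

```dart
/// Sketch: map a two-class score vector to its label via argmax.
String argmaxLabel(List<num> scores, List<String> labels) {
  var best = 0;
  for (var i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  return labels[best]; // labels is indexed with an int, not a String
}
```

For example, `argmaxLabel([0.1, 0.9], ['cat', 'dog'])` returns `'dog'`.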

qu-ngx commented on June 9, 2024

Gotcha. Give me a little bit.

muamal2 commented on June 9, 2024

I'm waiting, thanks.

qu-ngx commented on June 9, 2024

class _DetectorServer {
  /// Input size of image (height = width = 224)
  static const int mlModelInputSize = 224;

  /// Result confidence threshold
  static const double confidence = 0.5;
  Interpreter? _interpreter;
  List<String>? _labels;

  _DetectorServer(this._sendPort);

  final SendPort _sendPort;

  // ----------------------------------------------------------------------
  // Here the plugin is used from the background isolate.
  // ----------------------------------------------------------------------

  /// The main entrypoint for the background isolate sent to [Isolate.spawn].
  static void _run(SendPort sendPort) {
    ReceivePort receivePort = ReceivePort();
    final _DetectorServer server = _DetectorServer(sendPort);
    receivePort.listen((message) async {
      final _Command command = message as _Command;
      await server._handleCommand(command);
    });
    // receivePort.sendPort - used by UI isolate to send commands to the service receiverPort
    sendPort.send(_Command(_Codes.init, args: [receivePort.sendPort]));
  }

  /// Handle the [command] received from the [ReceivePort].
  Future<void> _handleCommand(_Command command) async {
    switch (command.code) {
      case _Codes.init:
        // ----------------------------------------------------------------------
        // The [RootIsolateToken] is required for
        // [BackgroundIsolateBinaryMessenger.ensureInitialized] and must be
        // obtained on the root isolate and passed into the background isolate via
        // a [SendPort].
        // ----------------------------------------------------------------------
        RootIsolateToken rootIsolateToken =
            command.args?[0] as RootIsolateToken;
        // ----------------------------------------------------------------------
        // [BackgroundIsolateBinaryMessenger.ensureInitialized] for each
        // background isolate that will use plugins. This sets up the
        // [BinaryMessenger] that the Platform Channels will communicate with on
        // the background isolate.
        // ----------------------------------------------------------------------
        BackgroundIsolateBinaryMessenger.ensureInitialized(rootIsolateToken);
        _interpreter = Interpreter.fromAddress(command.args?[1] as int);
        _labels = command.args?[2] as List<String>;
        _sendPort.send(const _Command(_Codes.ready));
      case _Codes.detect:
        _sendPort.send(const _Command(_Codes.busy));
        _convertCameraImage(command.args?[0] as CameraImage);
      default:
        debugPrint('_DetectorService unrecognized command ${command.code}');
    }
  }

  void _convertCameraImage(CameraImage cameraImage) {
    var preConversionTime = DateTime.now().millisecondsSinceEpoch;

    convertCameraImageToImage(cameraImage).then((image) {
      if (image != null) {
        if (Platform.isAndroid) {
          image = image_lib.copyRotate(image, angle: 90);
        }

        final results = analyseImage(image, preConversionTime);
        _sendPort.send(_Command(_Codes.result, args: [results]));
      }
    });
  }

  Map<String, dynamic> analyseImage(
      image_lib.Image? image, int preConversionTime) {
    var conversionElapsedTime =
        DateTime.now().millisecondsSinceEpoch - preConversionTime;

    var preProcessStart = DateTime.now().millisecondsSinceEpoch;

    /// Pre-process the image
    /// Resizing image for model [224, 224]
    final imageInput = image_lib.copyResize(
      image!,
      width: mlModelInputSize,
      height: mlModelInputSize,
    );

    // Creating matrix representation, [224, 224, 3]
    final imageMatrix = List.generate(
      imageInput.height,
      (y) => List.generate(
        imageInput.width,
        (x) {
          final pixel = imageInput.getPixel(x, y);
          return [pixel.r, pixel.g, pixel.b];
        },
      ),
    );

    var preProcessElapsedTime =
        DateTime.now().millisecondsSinceEpoch - preProcessStart;

    var inferenceTimeStart = DateTime.now().millisecondsSinceEpoch;

    final output = _runInference(imageMatrix);

    // Get the first (and only) output tensor, shape [1, 2]: one score per label.
    final scores = (output.first.first as List).cast<num>();
    // Sum of the raw scores, used to normalize each one into [0, 1].
    final totalScore = scores.reduce((a, b) => a + b);

    List<Recognition> recognitions = [];
    // Classification map {label: normalized score}
    var classification = <String, double>{};
    for (var i = 0; i < scores.length; i++) {
      if (scores[i] != 0) {
        // Store label: normalized score
        classification[_labels![i]] =
            scores[i].toDouble() / totalScore.toDouble();
        recognitions.add(Recognition(_labels![i]));
      }
    }

    var inferenceElapsedTime =
        DateTime.now().millisecondsSinceEpoch - inferenceTimeStart;

    var totalElapsedTime =
        DateTime.now().millisecondsSinceEpoch - preConversionTime;

    return {
      "recognitions": recognitions,
      "stats": <String, String>{
        "Conversion time:": conversionElapsedTime.toString(),
        "Pre-processing time:": preProcessElapsedTime.toString(),
        "Inference time:": inferenceElapsedTime.toString(),
        "Total prediction time:": totalElapsedTime.toString(),
        "Frame": '${image.width} X ${image.height}',
      },
    };
  }

  /// Object detection main function
  List<List<Object>> _runInference(
    List<List<List<num>>> imageMatrix,
  ) {
    // Set input tensor [1, 224, 224, 3]
    final input = [imageMatrix];

    // Set output tensor
    // Classes: [1,2],

    final output = {
      0: [List<num>.filled(2, 0)],
    };

    _interpreter!.runForMultipleInputs([input], output);
    return output.values.toList();
  }
}
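
A note on the loop above: `scores.reduce((a, b) => a + b)` is the sum of the raw scores, so each entry in `classification` is that label's share of the total. If the Teachable Machine export already ends in a softmax (an assumption; worth checking in Netron), the scores are already probabilities and the division only renormalizes them. A standalone sanity check with made-up values:

```dart
// Standalone check of the normalization step; the labels and scores
// here are hypothetical.
void main() {
  final labels = ['class_a', 'class_b'];
  final scores = <num>[0.12, 0.88];
  final total = scores.reduce((a, b) => a + b);
  for (var i = 0; i < scores.length; i++) {
    print('${labels[i]}: ${(scores[i] / total).toStringAsFixed(2)}');
  }
}
```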

muamal2 commented on June 9, 2024

[Screenshot 2024-02-25 at 11:01:27 PM]

zyrridian commented on June 9, 2024

@muamal2 Have you figured it out? I can't use the model I made in Teachable Machine either.
