YOLO batch inference on the Axon board

it worked man
thankssss

also, I see you have exported the dynamic model using Ultralytics. The ONNX model exported that way has a DFL layer, which computes the floating-point bounding-box coordinates as the expected value over a learned distribution, implemented with a pointwise conv. That layer must not be quantized, or the output will be wrong. So before converting the ONNX model to a quantized RKNN model, you must remove the DFL layer from the ONNX graph. For a reference ONNX graph, check the YOLO11 ONNX models provided in rknn_model_zoo.
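for example, something like this with plain onnx, using onnx.utils.extract_model instead of graphsurgeon (a rough sketch; the tensor names below follow the usual YOLO11 export naming, where model.23 is the Detect head, so check yours in Netron first):

import onnx.utils

# Tensor names of the six per-scale head outputs *before* the DFL/decode tail.
# Hypothetical names -- verify the exact ones for your export in Netron.
pre_dfl_outputs = [
    "/model.23/cv2.0/cv2.0.2/Conv_output_0",  # box distributions, stride 8
    "/model.23/cv3.0/cv3.0.2/Conv_output_0",  # class scores,      stride 8
    "/model.23/cv2.1/cv2.1.2/Conv_output_0",  # box distributions, stride 16
    "/model.23/cv3.1/cv3.1.2/Conv_output_0",  # class scores,      stride 16
    "/model.23/cv2.2/cv2.2.2/Conv_output_0",  # box distributions, stride 32
    "/model.23/cv3.2/cv3.2.2/Conv_output_0",  # class scores,      stride 32
]

# Keep only the subgraph from the input up to those tensors; everything
# downstream (DFL, Sigmoid, Concat decode) is dropped.
onnx.utils.extract_model(
    "yolo11n.onnx", "yolo11n_nodfl.onnx",
    input_names=["images"], output_names=pre_dfl_outputs,
)

note that this also drops the final Sigmoid, so the class scores come out as raw logits; the model-zoo graph instead keeps a Sigmoid on each class branch and adds a small ReduceSum "score sum" output for faster filtering.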

I don’t understand what you are trying to ask here!

do you have a script to remove the DFL layer?

when I do INT8, it doesn't detect; when I do FP16, it detects.

the example code in the above link converts the regular ONNX model into an ONNX model similar to what rknn_model_zoo provides. Be careful about version mismatches between the different ONNX libraries while using that code.
I tested the code in a freshly created Python 3.10 virtual environment, installed the requirements with pip install -r requirements.txt, and it worked.

But if you use that ONNX model, make sure you do the DFL step in your postprocessing; an example can be found in the rknn_model_zoo GitHub repo's examples/yolo11 folder.
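the DFL step itself is small; in NumPy it is roughly this (assuming the Ultralytics default reg_max=16, i.e. 16 bins per box side):

import numpy as np

def dfl(box_dist, reg_max=16):
    # box_dist: raw box-branch output, shape (N, 4*reg_max, H, W).
    # Softmax over the reg_max bins, then the expected bin index --
    # the same computation the removed pointwise-conv DFL layer did.
    n, c, h, w = box_dist.shape
    x = box_dist.reshape(n, 4, reg_max, h, w)
    x = np.exp(x - x.max(axis=2, keepdims=True))   # numerically stable softmax
    x /= x.sum(axis=2, keepdims=True)
    bins = np.arange(reg_max, dtype=np.float32).reshape(1, 1, reg_max, 1, 1)
    return (x * bins).sum(axis=2)  # (N, 4, H, W): l, t, r, b in stride units

after this you still add the anchor-grid offsets and multiply by the stride to get pixel boxes, the same as the model-zoo postprocess does.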

what I did: I didn't export the RKNN from yolo_modifier, I just made the ONNX and built the dynamic RKNN from my own script. will it work?
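the script is basically this, if it helps (a rough sketch of it; filenames are placeholders):

from rknn.api import RKNN

rknn = RKNN()
# normalize 0-255 uint8 input to 0-1; target_platform assumes the Axon's
# RK3588 -- adjust if your board differs
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform="rk3588")
rknn.load_onnx(model="yolo11n_nodfl.onnx")
# dataset.txt lists calibration image paths, needed for INT8 quantization;
# rknn_batch_size=8 bakes the batch dimension into the compiled model
rknn.build(do_quantization=True, dataset="./dataset.txt", rknn_batch_size=8)
rknn.export_rknn("yolo11n_b8.rknn")
rknn.release()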

yes, the only necessary part is removing the DFL layer, if you need quantization or want to speed up inference; the rest is just pre-written code provided for convenience. When you export the ONNX, just check the graph in Netron and match it against the ONNX from rknn_model_zoo; a different input/output shape won't change the graph structure.
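to compare quickly without opening both graphs, you can also just print each model's output tensors (filenames are placeholders):

import onnx

def show_outputs(path):
    # Print every graph output with its (possibly symbolic) shape.
    m = onnx.load(path)
    print(path)
    for out in m.graph.output:
        dims = [d.dim_param or d.dim_value for d in out.type.tensor_type.shape.dim]
        print("  ", out.name, dims)

show_outputs("yolo11n_nodfl.onnx")  # your export
show_outputs("yolo11.onnx")         # reference model from rknn_model_zoo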

File "rknn/api/session.py", line 131, in rknn.api.session.Session.sess_build
File "/home/vicharak/miniforge3/envs/rknn/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 485, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/home/vicharak/miniforge3/envs/rknn/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 575, in _create_inference_session
sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : This is an invalid model. In Node, ("/model.23/cv3.0/cv3.0.2/Conv_output_0_reducesum_node", ReduceSum, "", -1) : ("/model.23/cv3.0/cv3.0.2/Conv_output_0_sigmoid": tensor(float),) -> ("/model.23/cv3.0/cv3.0.2/Conv_output_0_reducesum": tensor(float),) , Error Unrecognized attribute: axes for operator ReduceSum

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/vicharak/conv2.py", line 39, in <module>
ret = rknn.build(
File "/home/vicharak/miniforge3/envs/rknn/lib/python3.10/site-packages/rknn/api/rknn.py", line 198, in build
return self.rknn_base.build(do_quantization=do_quantization, dataset=dataset, expand_batch_size=rknn_batch_size, auto_hybrid=auto_hybrid)
File "rknn/api/rknn_log.py", line 349, in rknn.api.rknn_log.error_catch_decorator.error_catch_wrapper
File "rknn/api/rknn_log.py", line 95, in rknn.api.rknn_log.RKNNLog.e
ValueError: Traceback (most recent call last):
File "rknn/api/rknn_log.py", line 344, in rknn.api.rknn_log.error_catch_decorator.error_catch_wrapper
File "rknn/api/rknn_base.py", line 1990, in rknn.api.rknn_base.RKNNBase.build
File "rknn/api/graph_optimizer.py", line 942, in rknn.api.graph_optimizer.GraphOptimizer.fold_constant
File "rknn/api/session.py", line 34, in rknn.api.session.Session.__init__
File "rknn/api/session.py", line 131, in rknn.api.session.Session.sess_build
File "/home/vicharak/miniforge3/envs/rknn/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 485, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/home/vicharak/miniforge3/envs/rknn/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 575, in _create_inference_session
sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : This is an invalid model. In Node, ("/model.23/cv3.0/cv3.0.2/Conv_output_0_reducesum_node", ReduceSum, "", -1) : ("/model.23/cv3.0/cv3.0.2/Conv_output_0_sigmoid": tensor(float),) -> ("/model.23/cv3.0/cv3.0.2/Conv_output_0_reducesum": tensor(float),) , Error Unrecognized attribute: axes for operator ReduceSum

this happens when I try to convert ONNX to RKNN using your yolo_modifier

it's YOLOv11

and when I convert:
nn_lite.py:41: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
import pkg_resources
W rknn-toolkit-lite2 version: 2.3.2
E Catch exception when init runtime!
E Traceback (most recent call last):
File "/home/vicharak/miniforge3/envs/rknn/lib/python3.10/site-packages/rknnlite/api/rknn_lite.py", line 148, in init_runtime
self.rknn_runtime = RKNNRuntime(root_dir=self.root_dir, target=target, device_id=device_id,
File "rknnlite/api/rknn_runtime.py", line 363, in rknnlite.api.rknn_runtime.RKNNRuntime.__init__
File "rknnlite/api/rknn_runtime.py", line 607, in rknnlite.api.rknn_runtime.RKNNRuntime._load_library
File "rknnlite/api/rknn_runtime.py", line 602, in rknnlite.api.rknn_runtime.RKNNRuntime._get_rknn_api_lib_path
Exception: Unsupported run platform: Linux aarch64

DEBUG: calling inference with shape (8, 640, 640, 3) dtype uint8
E Runtime environment is not inited, please call init_runtime to init it first!
SUCCESS: inference returned, outputs keys: o

this is due to different definitions of operators across ONNX opset / Ultralytics versions: the ReduceSum node is declared differently in newer ONNX opsets (from opset 13 onward, axes is an input rather than an attribute). I mentioned above that a version mismatch may cause issues; that's why it is better to use onnx_graphsurgeon and write your own code to remove the last layer, which is the DFL layer. Or make a fresh new Python 3.10 virtual environment, install the requirements in it, and export the model using my code.
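if you'd rather patch the already-exported model than re-export, moving the axes attribute into an input is enough; a rough sketch with plain onnx (filename is a placeholder):

import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load("yolo11n_modified.onnx")

for node in model.graph.node:
    if node.op_type == "ReduceSum":
        axes_attrs = [a for a in node.attribute if a.name == "axes"]
        if axes_attrs:
            # opset >= 13 takes axes as a second input, not an attribute,
            # so move the attribute's values into an initializer input
            axes = numpy_helper.from_array(
                np.array(axes_attrs[0].ints, dtype=np.int64),
                node.name + "_axes")
            model.graph.initializer.append(axes)
            node.input.append(axes.name)
            node.attribute.remove(axes_attrs[0])

onnx.checker.check_model(model)
onnx.save(model, "yolo11n_fixed.onnx")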