"The input[0] need 4dims input, but 3dims input buffer feed" error on Axon NPU

python inference_mnpu.py -i file2 -f 3.mp4 -y 8
W rknn-toolkit-lite2 version: 2.3.0
I RKNN: [16:52:55.251] RKNN Runtime Information, librknnrt version: 2.3.0 (c949ad889d@2024-11-07T11:35:33)
I RKNN: [16:52:55.251] RKNN Driver Information, version: 0.9.2
I RKNN: [16:52:55.251] RKNN Model Information, version: 6, toolkit version: 1.6.0+81f21f4d(compiler version: 1.6.0 (585b3edcf@2023-12-11T07:42:56)), target: RKNPU v2, target platform: rk3588, framework name: ONNX, framework layout: NCHW, model inference type: static_shape
W RKNN: [16:52:55.263] query RKNN_QUERY_INPUT_DYNAMIC_RANGE error, rknn model is static shape type, please export rknn with dynamic_shapes
W Query dynamic range failed. Ret code: RKNN_ERR_MODEL_INVALID. (If it is a static shape RKNN model, please ignore the above warning message.)
models/yolov8n.rknn done
W rknn-toolkit-lite2 version: 2.3.0
I RKNN: [16:52:55.331] RKNN Runtime Information, librknnrt version: 2.3.0 (c949ad889d@2024-11-07T11:35:33)
I RKNN: [16:52:55.331] RKNN Driver Information, version: 0.9.2
I RKNN: [16:52:55.332] RKNN Model Information, version: 6, toolkit version: 1.6.0+81f21f4d(compiler version: 1.6.0 (585b3edcf@2023-12-11T07:42:56)), target: RKNPU v2, target platform: rk3588, framework name: ONNX, framework layout: NCHW, model inference type: static_shape
W RKNN: [16:52:55.343] query RKNN_QUERY_INPUT_DYNAMIC_RANGE error, rknn model is static shape type, please export rknn with dynamic_shapes
W Query dynamic range failed. Ret code: RKNN_ERR_MODEL_INVALID. (If it is a static shape RKNN model, please ignore the above warning message.)
models/yolov8n.rknn done
W rknn-toolkit-lite2 version: 2.3.0
I RKNN: [16:52:55.409] RKNN Runtime Information, librknnrt version: 2.3.0 (c949ad889d@2024-11-07T11:35:33)
I RKNN: [16:52:55.409] RKNN Driver Information, version: 0.9.2
I RKNN: [16:52:55.409] RKNN Model Information, version: 6, toolkit version: 1.6.0+81f21f4d(compiler version: 1.6.0 (585b3edcf@2023-12-11T07:42:56)), target: RKNPU v2, target platform: rk3588, framework name: ONNX, framework layout: NCHW, model inference type: static_shape
W RKNN: [16:52:55.420] query RKNN_QUERY_INPUT_DYNAMIC_RANGE error, rknn model is static shape type, please export rknn with dynamic_shapes
W Query dynamic range failed. Ret code: RKNN_ERR_MODEL_INVALID. (If it is a static shape RKNN model, please ignore the above warning message.)
models/yolov8n.rknn done
E Catch exception when setting inputs.
E Traceback (most recent call last):
File "/home/vicharak/.local/lib/python3.10/site-packages/rknnlite/api/rknn_lite.py", line 212, in inference
self.rknn_runtime.set_inputs(inputs, data_type, data_format, inputs_pass_through=inputs_pass_through)
File "rknnlite/api/rknn_runtime.py", line 1082, in rknnlite.api.rknn_runtime.RKNNRuntime.set_inputs
Exception: The input[0] need 4dims input, but 3dims input buffer feed.

(the same exception is raised four more times, once per worker thread)

Traceback (most recent call last):
File "/home/vicharak/Downloads/YOLOv8-RK3588-Python-main/inference_mnpu.py", line 243, in <module>
frame, flag = pool.get()
File "/home/vicharak/Downloads/YOLOv8-RK3588-Python-main/lib/rknnpool.py", line 54, in get
return fut.result(), True
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/vicharak/Downloads/YOLOv8-RK3588-Python-main/inference_mnpu.py", line 162, in YoloFunc
boxes, classes, scores = post_process(outputs)
File "/home/vicharak/Downloads/YOLOv8-RK3588-Python-main/lib/postprocess.py", line 237, in post_process
pair_per_branch = len(input_data)//defualt_branch
TypeError: object of type 'NoneType' has no len()

Okay, we will get back to you tomorrow.

Check whether the batch dimension is also included in the input.

The image input should be 4-dimensional, e.g. 1x3x640x640 (batch x channels x height x width) for YOLOv8.
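To illustrate (a minimal NumPy sketch, assuming a standard cv2-style HWC frame): the runtime error occurs because a decoded frame has only 3 dimensions, and the model expects a leading batch dimension on top of that.

```python
import numpy as np

# A decoded video frame is 3-dimensional: (height, width, channels).
frame = np.zeros((640, 640, 3), dtype=np.uint8)
assert frame.ndim == 3  # this is what triggers "need 4dims input, but 3dims input buffer feed"

# Adding a batch dimension yields the 4-dim buffer the model expects.
batched = np.expand_dims(frame, 0)
print(batched.shape)  # (1, 640, 640, 3)
```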

hey. can you elaborate. i am using this github. GitHub - cluangar/YOLOv8-RK3588-Python: Base on previous repository YOLOv5-RK3588-Python

Make sure you preprocess the right way and send the image as a list containing a single array with 4 dimensions (the batch dimension included).

If you read the image using cv2, an example preprocess in Python is:

def preprocess(self, frame):
        # cv2 decodes frames as BGR; the model expects RGB
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # input width/height are generally 640*640 for yolov8;
        # note that cv2.resize takes dsize as (width, height)
        img = cv2.resize(img, (self.input_width, self.input_height),
                         interpolation=cv2.INTER_AREA)
        # add the batch dimension: (H, W, C) -> (1, H, W, C)
        img = np.expand_dims(img, 0)
        input = [img]
        return input

# then pass the input to the RKNNLite model as
inference_result = rknn_lite_model.inference(inputs=input)

# here input is a list containing a single np.ndarray, which is a 4-d np.ndarray

0 AVR FPS: 58.80066002251478 Frame
30 AVR FPS: 65.76759098053093 Frame
30 AVR FPS: 61.32818386788931 Frame
30 AVR FPS: 61.89889203012371 Frame
30 AVR FPS: 61.75732045073392 Frame
30 AVR FPS: 65.15977020184498 Frame
30 AVR FPS: 64.15663240332844 Frame
30 AVR FPS: 64.55357527626435 Frame
30 AVR FPS: 65.0525443979095 Frame
30 AVR FPS: 55.70884387955507 Frame
30 AVR FPS: 64.25518007712922 Frame
30 AVR FPS: 64.90741227920854 Frame
30 AVR FPS: 66.15162354663083 Frame
30 AVR FPS: 53.09319496853754 Frame
30 AVR FPS: 64.12034305107548 Frame
30 AVR FPS: 62.34315382362952 Frame
30 AVR FPS: 56.861535791882524 Frame
30 AVR FPS: 59.786443591621 Frame
30 AVR FPS: 64.71689311480064 Frame
30 AVR FPS: 66.18871727756954 Frame

Thanks, got 30 FPS avg.

If you are using yolov8n, you should get about 30 fps per NPU core, and since there are 3 NPU cores, you should get a total of about 90 fps when utilizing all 3 cores in parallel threads, unless the camera or input device only provides 30 fps of input frames for processing.
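The 3-core pattern can be sketched in pure Python (the RKNNLite-specific parts, such as binding each instance to a core via init_runtime's core_mask, are replaced here with a placeholder inference function; the worker/queue structure mirrors the round-robin pool idea used by the repo's rknnpool.py):

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

NUM_CORES = 3  # the RK3588 has 3 NPU cores

def make_worker(core_id):
    # In a real pool each worker would hold its own RKNNLite instance,
    # initialized to run on one specific NPU core.
    def infer(frame):
        # placeholder for rknn.inference(inputs=[frame])
        return (core_id, frame)
    return infer

workers = [make_worker(i) for i in range(NUM_CORES)]
pool = ThreadPoolExecutor(max_workers=NUM_CORES)
futures = Queue()

def put(frame, n):
    # round-robin dispatch: frame n goes to core n % NUM_CORES
    futures.put(pool.submit(workers[n % NUM_CORES], frame))

def get():
    # results come back in submission order, so frame order is preserved
    return futures.get().result()

for n in range(6):
    put(f"frame{n}", n)
results = [get() for _ in range(6)]
print(results)
```

Because completed futures are drained in submission order, output frames stay ordered even though the three workers run concurrently.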

Which USB camera should I use? Kindly suggest one.

Well, I have tried a USB camera, a MIPI CSI camera, and a phone camera (through IP Webcam). For me the phone camera worked best (of course there could be better USB cameras on the market than the one I currently have). To use a phone camera, download the "IP Webcam" app on the phone, start the server from the app, and capture frames with cv2 as cap = cv2.VideoCapture("http://<ip_of_phone>:<port_mentioned_in_app>/video"), provided that the phone and the Axon are on the same network (Wi-Fi).
The Axon has dedicated MIPI CSI ports for cameras, and a driver for the OV5647 is currently supported. If you want to connect any other camera through the MIPI CSI port, you can ask for kernel support if it is not supported yet, and support would soon be delivered.