Hello everyone!
I am using a simple script to measure the time between a camera trigger and when the image reaches my Python script. The camera I am using is a CS-MIPI-SC-132 by VEYE (https://www.veye.cc/en/product/cs-mipi-sc132/). I am using the V4L2 driver and GStreamer to get the images captured by the camera into the Raspberry Pi's RAM.
The script is as follows:
Code:
from gpiozero import LED
from time import sleep
import cv2

# GStreamer pipeline: V4L2 source -> raw UYVY 640x480 -> appsink
sc = 'v4l2src io-mode=dmabuf device=/dev/video0 ! video/x-raw, format=(string)UYVY, width=(int)640, height=(int)480 ! appsink'

sleep(1)
led = LED(2)

print("Starting camera...")
cap = cv2.VideoCapture(sc, cv2.CAP_GSTREAMER)
if cap.isOpened():
    while True:
        ret_val, img = cap.read()
        # Pulse GPIO 2 as soon as the frame is stored in the NumPy array
        led.on()
        led.off()
else:
    print("MainProcess: Camera Failed!")
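As a software-side complement to the oscilloscope measurement, logging the inter-frame arrival intervals around the read call can help separate throughput from latency. This is only a sketch: it uses a stand-in frame source so it runs without the camera; with the real pipeline you would pass a wrapper around cap.read() instead.

```python
from time import monotonic_ns, sleep

def log_arrival_intervals(read_frame, n_frames=10):
    """Call read_frame() n_frames times and return the
    inter-frame arrival intervals in milliseconds."""
    stamps = []
    for _ in range(n_frames):
        read_frame()              # with the real camera: cap.read()
        stamps.append(monotonic_ns())
    return [(b - a) / 1e6 for a, b in zip(stamps, stamps[1:])]

# Stand-in frame source: pretends a frame arrives every ~8.3 ms (120 FPS).
def fake_read():
    sleep(1 / 120)

intervals = log_arrival_intervals(fake_read)
print(intervals)
```

Steady ~8.3 ms intervals would confirm that throughput is fine even while end-to-end latency is high, which points at buffering somewhere in the capture path rather than a slow camera.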
https://imgur.com/kXpfNhN
Yellow: Trigger Signal
Blue: Pin turned on by the Python script on the Raspberry Pi, after the image is stored in a NumPy array.
Purple: Strobe signal from the camera
We see that there is around a 40 ms delay between the camera's exposure and the data becoming available in the script.
Sending multiple triggers, I see what appears to be an overlapping effect.
https://imgur.com/biR6lOs
Yellow: Trigger Signal
Blue: Pin turned on by the Python script on the Raspberry Pi, after the image is stored in a NumPy array.
Purple: Strobe signal from the camera
It is important to note that the FPS is fine: we are producing around 120 FPS as advertised. But there is significant delay, as demonstrated.
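A quick sanity check on those numbers, assuming the advertised 120 FPS, suggests the 40 ms corresponds to several whole frame periods, which could indicate a few frames of buffering in the capture path rather than exposure or readout time:

```python
fps = 120
frame_period_ms = 1000 / fps          # ~8.33 ms between frames at 120 FPS
measured_delay_ms = 40                # delay observed on the scope
frames_in_flight = measured_delay_ms / frame_period_ms
print(round(frame_period_ms, 2), round(frames_in_flight, 1))  # 8.33 4.8
```

Roughly 4 to 5 frame periods of delay at full frame rate is the classic signature of queued buffers somewhere between the sensor and the application.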
Does anyone have experience reducing this specific delay? If so, any suggestion is welcome. I am also open to changing the development platform (maybe using an FPGA if it comes to that). I am targeting a 3 ms delay because the data to be analysed is really time-critical (high-speed mobile robotics).
Statistics: Posted by lgcs2500 — Fri Feb 02, 2024 8:19 am