This document is most helpful for IT staff and on-site installation personnel.
Pink or purplish hue in the image
The left image has a defective IR-cut filter, while the right image has a functioning IR-cut filter.
This is due to the IR-cut filter being out of place, which can happen during shipping. The mechanical IR-cut filter is moved back and forth by a small metal hook, and that hook can jump out of its eyelet during transportation. You can try enabling and disabling the IR-cut filter a few times under Camera Settings. Typically, though, the only resolution is to open an RMA for the camera.
When the IR-cut filter is in the wrong position, it produces an image with a pink hue. This is the reason why we switch to Black & White when the filter is in the off position.
Purple Fringing
The areas marked in blue in the image show purple fringing, also known as chromatic aberration, which can be described as "an out-of-focus purple ghost image around parts of a digital picture with high contrast". It is caused mainly by the optics, but the sensor and processing also contribute.
You could try setting the minimum shutter to a higher value, e.g. 1/100 (or experiment with different values). This is done in Camera settings > Exposure priority: choose Motion or Low noise (it doesn't matter in this case) and change the Shutter min value. This should force the iris to step down. Using a P-iris reduces this phenomenon but does not eliminate it.
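If your camera exposes its settings over an HTTP parameter interface, the same change can also be scripted. The sketch below is only an illustration: the endpoint path and parameter names (`/settings/exposure`, `ExposurePriority`, `ShutterMin`) are placeholders, not a documented API, so check your camera's own documentation for the real ones.

```python
import requests

CAMERA = "http://192.168.0.90"   # example camera address
AUTH = ("root", "password")      # example credentials

# Hypothetical endpoint and parameter names -- consult your camera's API
# documentation for the real ones. The intent is simply to raise the minimum
# shutter limit (e.g. to 1/100 s) so the iris is forced to step down.
params = {
    "ExposurePriority": "motion",   # "motion" or "lownoise"; either works here
    "ShutterMin": "1/100",          # try different values and compare the results
}

resp = requests.get(f"{CAMERA}/settings/exposure", params=params, auth=AUTH, timeout=5)
resp.raise_for_status()
print("Shutter minimum updated:", resp.status_code)
```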
Please note that when choosing the appropriate lens, our priorities are:
- Minimizing the image artifacts that affect the video surveillance
- Sharpness
- Good light sensitivity
- Object detection and recognition
This purple fringing does not affect the scene in a way that makes it more difficult to identify persons or objects.
Motion Blur (Ghosting)
IR light from the camera can reflect off the dome and into the lens. This results in a lower-quality image than our customers expect.
First, don't choose a camera with a low-cost sensor. Even a low-cost sensor can perform fairly well during the day; the difference shows in night-vision capability. Pick a camera with at least a 1/3-inch sensor: the larger the sensor, the better the night vision, much as a person with better eyesight sees more clearly in low light.
Disable the slow shutter. This will show how well your camera really performs at night: once the slow shutter setting has been disabled, the image becomes darker at higher frame rates (fps).
Add extra illuminators, either infrared or visible-light lamps. Infrared reaches farther than visible light, but you have to sacrifice the image's color detail, because infrared affects the camera's color reproduction and makes the image look purple; most cameras will drop color and switch to black and white when they detect infrared. A PIR-activated (motion-triggered) lamp is another option, especially where street lights already exist. It switches on unexpectedly and creates deterrence when criminals approach.
Network camera latency issues
In the video world, latency is the amount of time between the instant a frame is captured and the instant that frame is displayed. Low latency is a design goal for any system where there is real-time interaction with the video content, such as video live view or casting.
But the meaning of “low latency” can vary, and the methods for achieving low latency aren’t always obvious. Latency (a measure of the time delay experienced by a system) depends on several factors, including the network environment and the applications used. Essentially, latency builds up when the system takes a long time to process the data, which can be caused by system overload, network congestion, or weak or outdated components on the decoding client.
Characterizing Video System Latency
There are several stages of processing required to make the pixels captured by a camera visible on a video display. The delays contributed by each of these processing steps—as well as the time required for transmitting the compressed video stream—together produce the total delay, which is sometimes called end-to-end latency.
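As a rough back-of-the-envelope sketch, end-to-end latency can be modeled as the sum of the per-stage delays. The stage names and millisecond figures below are illustrative assumptions, not measurements from any particular camera, network, or client.

```python
# Illustrative per-stage delays in milliseconds (assumed values, not measurements).
stages_ms = {
    "sensor capture / exposure": 17,
    "image processing": 10,
    "compression (encode)": 33,
    "network transmission": 15,
    "receive buffer": 33,
    "decompression (decode)": 16,
    "display refresh": 17,
}

for stage, ms in stages_ms.items():
    print(f"{stage:28s} {ms:4d} ms")
print(f"{'end-to-end latency':28s} {sum(stages_ms.values()):4d} ms")
```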
Measuring Video Latency
Latency is commonly expressed in time units, e.g., seconds or milliseconds (ms). The biggest contributors to video latency are the processing stages that require temporary storage of data, i.e., short-term buffering in some form of memory. Because of this, video system engineers tend to measure latency in terms of the buffered video data, for example, a latency of two frames or eight horizontal lines.
Converting from frames to time depends on the video’s frame rate. For example, a delay of one frame in 30 frames-per-second (fps) video corresponds to 1/30th of a second (33.3ms) of latency.
Figure 1: Representing latency in a 1080p / 30 FPS video stream.
Converting from video lines to time requires both the frame rate and the frame size or resolution. A 720p HD video frame has 720 horizontal lines, so a latency of one line at 30fps is 1/(30*720) = 0.046ms of latency. In 1080p at 30fps, that same one-line latency is even shorter: 1/(30*1080) = 0.031ms.
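A minimal sketch of these conversions, using only the frame rates and line counts given above:

```python
def frame_latency_ms(frames: float, fps: float) -> float:
    """Latency contributed by buffering a given number of whole frames."""
    return frames / fps * 1000.0

def line_latency_ms(lines: float, fps: float, lines_per_frame: int) -> float:
    """Latency contributed by buffering a given number of video lines."""
    return lines / (fps * lines_per_frame) * 1000.0

print(frame_latency_ms(1, 30))        # ~33.3 ms  -- one frame at 30 fps
print(line_latency_ms(1, 30, 720))    # ~0.046 ms -- one line of 720p at 30 fps
print(line_latency_ms(1, 30, 1080))   # ~0.031 ms -- one line of 1080p at 30 fps
```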
Defining “Low Latency”
There is no universal absolute value that defines low latency. Instead, what is considered acceptable low latency varies by application.
When humans interact with video in a live video conference or when playing a game, latency lower than 100ms is considered to be low, because most humans don’t perceive a delay that small. But in an application where a machine interacts with video—as is common in many automotive, industrial, and medical systems—then latency requirements can be much lower: 30ms, 10ms, or even under a millisecond, depending on the requirements of the system.
You will also see the term ultra-low latency applied to video processing functions and IP cores. This is a marketing description, not a technical definition, and yes, it just means “really, really low latency” for the given application.
Designing for Low Latency in a Video Streaming Application
Because it is commonplace in today’s connected, visual world, let’s examine latency in systems that stream video from a camera (or server) to a display over a network.
As with most system design goals, achieving suitably low latency for a streaming system requires tradeoffs, and success comes in achieving the optimum balance of hardware, processing speed, transmission speed, and video quality. As previously mentioned, any temporary storage of video data (uncompressed or compressed) increases latency, so reducing buffering is a good primary goal.
Video data buffering is imposed whenever processing must wait until some specific amount of data is available. The amount of data buffering required can vary from a few pixels, to several video lines, or even to a number of whole frames. With a target maximum acceptable latency in mind, we can easily calculate the amount of data buffering the system can tolerate, and hence to what level—pixel, line, or frame—one should focus on when budgeting and optimizing for latency.
For example, with our human viewer’s requirement of 100ms maximum latency for a streaming system using 1080p/ 30 FPS video, we can calculate the maximum allowable buffering through the processing pipeline as follows:
100ms/(33.3ms per frame) = 3 frames, or
1080 lines per frame x 3 frames = 3240 lines, or
1920 pixels per line x 3240 lines = 6.2 million pixels
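The same budget can be checked with a few lines of code; the 100ms target, frame rate, and resolution are those from the example above.

```python
# Buffering budget for 1080p / 30 fps video with a 100 ms end-to-end target.
target_ms = 100.0
fps = 30
lines_per_frame = 1080
pixels_per_line = 1920

frame_time_ms = 1000.0 / fps              # 33.3 ms per frame
max_frames = target_ms / frame_time_ms    # ~3 frames
max_lines = max_frames * lines_per_frame  # 3240 lines
max_pixels = max_lines * pixels_per_line  # ~6.2 million pixels

print(f"{max_frames:.1f} frames, {max_lines:.0f} lines, {max_pixels / 1e6:.1f} million pixels")
```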
Therefore, if you have a low-bandwidth network and you wish to set all your cameras to 1080p and 30 FPS, you will need the following bandwidth multiplied by the number of cameras.
One camera set at 1080p/30 FPS consumes approximately 4.2 Mbps of bandwidth, as shown below.
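As a quick sizing sketch, the aggregate bandwidth scales linearly with the number of cameras; the 4.2 Mbps per-stream figure is the one quoted above and will vary in practice with codec, scene complexity, and compression settings.

```python
def required_bandwidth_mbps(num_cameras: int, per_camera_mbps: float = 4.2) -> float:
    """Aggregate bandwidth for identical 1080p / 30 fps streams."""
    return num_cameras * per_camera_mbps

for n in (1, 4, 8, 16):
    print(f"{n:2d} cameras -> {required_bandwidth_mbps(n):5.1f} Mbps")
```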
In conclusion, when designing a system to meet low-latency goals, keep these points in mind:
- Achieving low latency will require some trade-off of decreased video quality or a higher transmission bit rate (or both).
- Identify your latency contributors throughout the system, and eliminate any unnecessary buffering. Focus on the granularity level (frame, line, pixel) that matters most in your system.
We hope this article was useful to you, please leave us a comment or feedback as it will help us improve our customer support center.