If I understand the pipeline correctly, one of the first steps is to locate plate candidates in the input image. Is the training done in such a way that you could still get a result if the plate itself is blank or missing? I am thinking of cases where the sun reflects off the plate directly into the camera and overexposes it, where the plate is covered in snow, or where it has simply been removed.
Is there a way to still run the vehicle classification algorithm even if the plate is unreadable? I realize it may be difficult to group the results into a single detection without any plate information.
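To make the question concrete, here is a minimal sketch of the fallback behavior I have in mind. All names (`read_plate`, `classify_vehicle`, `Detection`) are hypothetical stand-ins for the real pipeline stages, not your actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    vehicle_class: str
    plate_text: Optional[str]  # None when the plate is blank, unreadable, or missing

def read_plate(image: dict) -> Optional[str]:
    # Hypothetical plate-reading stage: returns None when no readable plate is found
    # (overexposed, snow-covered, or removed).
    return image.get("plate")

def classify_vehicle(image: dict) -> str:
    # Hypothetical vehicle classifier that does not depend on the plate at all.
    return image.get("class", "unknown")

def process(image: dict) -> Detection:
    # Run classification regardless of whether a plate was read, so an
    # unreadable plate still yields a detection, just without plate text.
    plate = read_plate(image)
    return Detection(vehicle_class=classify_vehicle(image), plate_text=plate)
```

For example, `process({"class": "truck"})` would still return a detection with `vehicle_class="truck"` and `plate_text=None`, even though no plate was found; the open question is then how such plate-less detections get grouped.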