ABSTRACTS OF ARTICLES OF THE JOURNAL "INFORMATION TECHNOLOGIES".
No. 3. Vol. 29. 2023

DOI: 10.17587/it.29.156-161

S. V. Kulikov, CTO, LLC "Kulikov Vision", Penza, Russian Federation

Using Digital Twins and Synthetic Data for Training Neural Networks in Quality Control Computer Vision Systems

The paper explores the use of synthetic data and digital twins for training the neural network models employed in quality control computer vision systems. A method for generating synthetic images of good and defective products is presented. Experiments were carried out in which neural networks were trained on images generated under various configurations of the synthetic environment. The applicability and effectiveness of domain randomization, a technique that deliberately violates the photorealism of training images, are considered, and the impact of this technique on the accuracy of the resulting neural network used for product defect segmentation is evaluated.
Keywords: neural networks, quality control, synthetic dataset, digital twins, 3D model, rendering, semantic segmentation, data augmentation, domain randomization

P. 156–161
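
The domain randomization technique mentioned in the abstract can be illustrated with a short sketch. The Python snippet below is a hypothetical, minimal example of photometric randomization applied to a synthetic render before it enters the training set; all function names and parameter ranges are illustrative assumptions and do not reproduce the author's pipeline. Because the perturbations are purely photometric, the paired ground-truth segmentation mask of the rendered defect remains valid without modification.

    # A minimal, hypothetical sketch of domain randomization for synthetic
    # training images. Parameter ranges are illustrative assumptions, not
    # the configuration used in the article.
    import random

    import numpy as np
    from PIL import Image, ImageEnhance

    def randomize_domain(render: Image.Image) -> Image.Image:
        """Deliberately break the photorealism of a synthetic render."""
        # Photometric jitter well beyond realistic ranges (assumed values).
        render = ImageEnhance.Brightness(render).enhance(random.uniform(0.4, 1.8))
        render = ImageEnhance.Contrast(render).enhance(random.uniform(0.5, 1.6))
        render = ImageEnhance.Color(render).enhance(random.uniform(0.0, 2.0))

        arr = np.asarray(render).astype(np.float32)
        # Random per-channel color cast to decorrelate hue from the 3D scene.
        arr += np.random.uniform(-40.0, 40.0, size=(1, 1, 3))
        # Additive Gaussian noise to break up overly smooth synthetic surfaces.
        arr += np.random.normal(0.0, 10.0, size=arr.shape)
        return Image.fromarray(np.clip(arr, 0.0, 255.0).astype(np.uint8))

    if __name__ == "__main__":
        # Usage: randomize a rendered product image before adding it to the
        # training set; the paired segmentation mask needs no change because
        # only pixel colors, not geometry, are perturbed.
        synthetic = Image.new("RGB", (256, 256), color=(180, 180, 180))
        randomize_domain(synthetic).save("randomized_render.png")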

References

  1. Dosovitskiy A., Fischer P., Ilg E., Hausser P., Hazirbas C., Golkov V., Smagt P., Cremers D., Brox T. FlowNet: Learning optical flow with convolutional networks, IEEE International Conference on Computer Vision (ICCV), 2015, pp. 2758–2766.
  2. Mayer N., Ilg E., Hausser P., Fischer P., Cremers D., Dosovitskiy A., Brox T. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation, 2015, available at: https://arxiv.org/abs/1512.02134 (date of access: 18.06.2022).
  3. Butler D. J., Wulff J., Stanley G. B., Black M. J. A naturalistic open source movie for optical flow evaluation, European Conference on Computer Vision (ECCV), 2012, pp. 611–625.
  4. Qiu W., Yuille A. UnrealCV: Connecting computer vision to Unreal Engine, 2016, available at: https://arxiv.org/abs/1609.01326 (date of access: 18.06.2022).
  5. Zhang Y., Qiu W., Chen Q., Hu X., Yuille A. UnrealStereo: A synthetic dataset for analyzing stereo vision, 2016, available at: https://arxiv.org/abs/1612.04647 (date of access: 18.06.2022).
  6. Handa A., Patraucean V., Badrinarayanan V., Stent S., Cipolla R. SceneNet: Understanding real world indoor scenes with synthetic data, 2015, available at: https://arxiv.org/abs/1511.07041 (date of access: 18.06.2022).
  7. McCormac J., Handa A., Leutenegger S. SceneNet RGB-D: 5M photorealistic images of synthetic indoor trajectories with ground truth, 2016, available at: https://arxiv.org/abs/1612.05079 (date of access: 18.06.2022).
  8. Ros G., Sellart L., Materzynska J., Vazquez D., Lopez A. The SYNTHIA dataset: A large collection of synthetic images for semantic segmentation of urban scenes, Computer Vision and Pattern Recognition Conference (CVPR), 2016, pp. 3234–3243.
  9. Richter S. R., Vineet V., Roth S., Koltun V. Playing for data: Ground truth from computer games, European Conference on Computer Vision (ECCV), 2016, pp. 102–118.
  10. Mueller M., Casser V., Lahoud J., Smith N., Ghanem B. Sim4CV: A photo-realistic simulator for computer vision applications, 2017, available at: https://arxiv.org/abs/1708.05869 (date of access: 18.06.2022).
  11. Gaidon A., Wang Q., Cabon Y., Vig E. Virtual worlds as proxy for multi-object tracking analysis, Computer Vision and Pattern Recognition Conference (CVPR), 2016, pp. 4340–4349.
  12. Tobin J., Fong R., Ray A., Schneider J., Zaremba W., Abbeel P. Domain randomization for transferring deep neural networks from simulation to the real world, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 23–30.
  13. Fawaz H. I., Forestier G., Weber J., Idoumghar L., Muller P.-A. Data augmentation using synthetic data for time series classification with deep residual networks, 2018, available at: https://arxiv.org/abs/1808.02455.
  14. Forestier G., Petitjean F., Dau H. A., Webb G. I., Keogh E. J. Generating synthetic time series to augment sparse datasets, IEEE International Conference on Data Mining (ICDM), 2017, pp. 865–870.
  15. Khan A., Hwang H., Kim H. S. Synthetic data augmentation and deep learning for the fault diagnosis of rotating machines, Mathematics, 2021, vol. 9, no. 18, p. 2336.
  16. Leturiondo U., Salgado O., Galar D. Validation of a physics-based model of a rotating machine for synthetic data generation in hybrid diagnosis, Structural Health Monitoring, 2016, vol. 16, pp. 458–470.
  17. Johnson-Roberson M., Barto C., Mehta R., Sridhar S. N., Rosaen K., Vasudevan R. Driving in the matrix: Can virtual worlds replace human-generated annotations for real world tasks?, IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 746–753.
  18. Hinterstoisser S., Lepetit V., Wohlhart P., Konolige K. On pre-trained image features and synthetic images for deep learning, 2017, available at: https://arxiv.org/abs/1710.10710 (date of access: 18.06.2022).
  19. Sadeghi F., Levine S. CAD2RL: Real single-image flight without a single real image, Robotics: Science and Systems (RSS), 2017, available at: https://arxiv.org/abs/1611.04201.
  20. Matthews O., Ryu K., Srivastava T. Creating a large-scale synthetic dataset for human activity recognition, 2020, available at: https://arxiv.org/pdf/2007.11118.pdf (date of access: 18.06.2022).
  21. Cicek O., Abdulkadir A., Lienkamp S. S., Brox T., Ronneberger O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2016, available at: https://arxiv.org/pdf/1606.06650.pdf (date of access: 18.06.2022).
