“Computer Vision Systems and Methods for Vehicle Damage Detection with Reinforcement Learning” in Patent Application Approval Process (USPTO 20210342997): Insurance Services Office Inc. – Insurance News Net

2021 NOV 19 (NewsRx) — By a News Reporter-Staff News Editor at Insurance Daily News — A patent application by the inventors Gupta, Abhinav (Pittsburgh, PA, US); Jujjavarapu, Sashank (Sunnyvale, CA, US); Malreddy, Siddarth (Sunnyvale, CA, US); Patel, Yash (Prague, CZ); Singh, Maneesh Kumar (Princeton, NJ, US); Wang, Shengze (Champaign, IL, US), filed on December 16, 2020, was made available online on November 4, 2021, according to news reporting originating from Washington, D.C., by NewsRx correspondents.
This patent application is assigned to Insurance Services Office Inc. (Jersey City, New Jersey, United States).
The following quote was obtained by the news editors from the background information supplied by the inventors:
“Technical Field
“The present disclosure relates generally to the field of computer vision technology. More specifically, the present disclosure relates to computer vision systems and methods for vehicle damage detection and classification with reinforcement learning.
“Related Art
“Vehicle damage detection refers to detecting damage of a detected vehicle in an image. In the vehicle damage detection field, increasingly sophisticated software-based systems are being developed for automatically detecting damage of a detected vehicle present in an image. Such systems have wide applicability, including but not limited to, insurance (e.g., title insurance and claims processing), re-insurance, banking (e.g., underwriting auto loans), and the used vehicle market (e.g., vehicle appraisal).
“Conventional vehicle damage detection systems and methods suffer from several challenges that can adversely impact the accuracy of such systems and methods including, but not limited to, lighting, reflections, vehicle curvature, a variety of exterior paint colors and finishes, a lack of image databases, and criteria for false negatives and false positives. Additionally, conventional vehicle damage detection systems and methods are limited to merely detecting vehicle damage (i.e., whether a vehicle is damaged or not) and cannot determine a location of the detected vehicle damage nor an extent of the detected vehicle damage.
“There is currently significant interest in developing systems that automatically detect vehicle damage, determine a location of the detected vehicle damage, and determine an extent of the detected and localized vehicle damage of a vehicle present in an image requiring no (or, minimal) user involvement, and with a high degree of accuracy. For example, it would be highly beneficial to develop systems that can automatically generate vehicle insurance claims based on images submitted by a user. Accordingly, the system of the present disclosure addresses these and other needs.”
In addition to the background information obtained for this patent application, NewsRx journalists also obtained the inventors’ summary information for this patent application: “The present disclosure relates to computer vision systems and methods for vehicle damage detection and classification with reinforcement learning. An embodiment of the system generates a dataset, which can include digital images of actual vehicles or simulated (e.g., computer-generated) vehicles, and trains a neural network with a plurality of images of the dataset to learn to detect damage to a vehicle present in an image of the dataset and to classify a location of the detected damage and a severity of the detected damage utilizing segmentation processing. The system can detect the damage to the vehicle and classify the location of the detected damage and the severity of the detected damage by the trained neural network where the location of the detected damage is at least one of a front, a rear or a side of the vehicle and the severity of the detected damage is based on predetermined damage sub-classes. In addition, an embodiment of the system utilizes a neural network to reconstruct a vehicle from one or more digital images.”
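The summary above describes a train-then-infer flow: generate a dataset of real or simulated vehicle images, train a network to detect damage, then classify the damage location (front, rear, or side) and severity (predetermined sub-classes). The detection half of that flow can be sketched framework-free; the class names, the threshold-based "training," and the grayscale stand-in for photographs below are all illustrative assumptions, not the patent's actual CNN/FCN implementation:

```python
from dataclasses import dataclass
from typing import List

LOCATIONS = ("front", "rear", "side")
SEVERITIES = ("minor", "moderate", "severe")  # hypothetical sub-classes


def mean(pixels: List[List[float]]) -> float:
    flat = [v for row in pixels for v in row]
    return sum(flat) / len(flat)


@dataclass
class LabeledImage:
    pixels: List[List[float]]  # grayscale stand-in for a vehicle photo
    damaged: bool
    location: str              # one of LOCATIONS
    severity: str              # one of SEVERITIES


class DamageDetector:
    """Stand-in for a trained network; one threshold plays the role of weights."""

    def __init__(self) -> None:
        self.threshold = 0.5

    def train(self, dataset: List[LabeledImage]) -> None:
        # "Training": place the threshold between damaged and undamaged means.
        damaged = [mean(img.pixels) for img in dataset if img.damaged]
        intact = [mean(img.pixels) for img in dataset if not img.damaged]
        if damaged and intact:
            self.threshold = (min(damaged) + max(intact)) / 2

    def detect(self, pixels: List[List[float]]) -> bool:
        return mean(pixels) > self.threshold
```

A real system would replace the threshold with a convolutional network and add the location/severity classification heads described in the summary; the sketch only shows the dataset-label-train-detect control flow.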
The claims supplied by the inventors are:
“1. A computer vision system for vehicle damage detection comprising: a memory; and a processor in communication with the memory, the processor: generating a dataset, training a neural network with a plurality of images of the dataset to learn to detect an attribute of a vehicle present in an image of the dataset and to classify at least one feature of the detected attribute, and detecting the attribute of the vehicle and classifying the at least one feature of the detected attribute by the trained neural network.
“2. The system of claim 1, wherein the processor generates a real dataset based on labeled digital images, each labeled digital image being indicative of an undamaged vehicle or a damaged vehicle.
“3. The system of claim 1, wherein the processor generates a simulated dataset by: generating components of a simulated vehicle, linking each component to generate a simulated vehicle, simulating an external force on the simulated vehicle to generate damage to the simulated vehicle, identifying and labeling the generated damage to the simulated vehicle, and storing the damaged simulated vehicle as an image of the simulated dataset.
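Claim 3's simulated-dataset pipeline is a five-step sequence: generate components, link them into a vehicle, apply an external force, identify and label the resulting damage, and store the image. A schematic sketch of that sequence; every stand-in function here is hypothetical (a real pipeline would use a physics and rendering engine):

```python
def build_simulated_dataset(n_vehicles: int, force: float) -> list:
    dataset = []
    for _ in range(n_vehicles):
        parts = generate_components()                    # step 1: components
        vehicle = link_components(parts)                 # step 2: assemble
        damaged = apply_external_force(vehicle, force)   # step 3: simulate damage
        labels = label_damage(damaged)                   # step 4: identify + label
        dataset.append({"image": render(damaged), "labels": labels})  # step 5: store
    return dataset


# Toy stand-ins so the flow is runnable end to end.
def generate_components() -> list:
    return ["hood", "door", "bumper"]


def link_components(parts: list) -> dict:
    return {"parts": parts, "deformation": {p: 0.0 for p in parts}}


def apply_external_force(vehicle: dict, force: float) -> dict:
    vehicle["deformation"]["bumper"] = force  # assume a frontal impact
    return vehicle


def label_damage(vehicle: dict) -> list:
    return [p for p, d in vehicle["deformation"].items() if d > 0.1]


def render(vehicle: dict) -> str:
    return f"image_of_{len(vehicle['parts'])}_parts"
```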
“4. The system of claim 1, wherein the neural network is a convolutional neural network (CNN) or a fully convolutional network (FCN).
“5. The system of claim 1, wherein the processor generates a simulated dataset including a plurality of images of a reconstructed damaged vehicle based on a plurality of digital images of the damaged vehicle by: selecting digital images indicative of a fewest number of viewpoints from the plurality of digital images of the damaged vehicle, transforming the digital images by an encoder to generate two-dimensional dense feature maps utilizing a second neural network, generating a plurality of three-dimensional feature grids based on the two-dimensional dense feature maps utilizing an unprojection model, generating a three-dimensional fused feature grid by fusing the plurality of three-dimensional feature grids utilizing a recurrent fusion model, generating a three-dimensional final grid based on prior constraints and determined features utilizing the second neural network, and displaying the three-dimensional final grid as the reconstructed damaged vehicle.
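Claim 5's reconstruction pipeline is, at its core, a chain of tensor-shape transformations: each selected view is encoded into a 2-D dense feature map, unprojected into a 3-D feature grid, the per-view grids are fused recurrently into one grid, and the fused grid is refined and displayed. A NumPy sketch that tracks only those shapes; the "encoder," "unprojection," "fusion," and "projection" bodies below are placeholders, not the patent's models:

```python
import numpy as np

H, W, C, D = 8, 8, 4, 8  # feature-map height/width, channels, grid depth


def encode(image) -> np.ndarray:
    """Encoder stand-in: image -> 2-D dense feature map of shape (H, W, C)."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(H, W, C))


def unproject(feature_map: np.ndarray) -> np.ndarray:
    """Unprojection stand-in: replicate 2-D features along depth -> (D, H, W, C)."""
    return np.repeat(feature_map[None, :, :, :], D, axis=0)


def fuse(grids: list) -> np.ndarray:
    """Recurrent-fusion stand-in: incremental mean over per-view grids."""
    fused = np.zeros_like(grids[0])
    for t, g in enumerate(grids, start=1):
        fused += (g - fused) / t  # fold in one view at a time
    return fused


def project_depth(grid: np.ndarray) -> np.ndarray:
    """Projection stand-in (cf. claim 7): collapse the grid to a depth map (H, W)."""
    return grid.mean(axis=(0, 3))


views = [encode(None) for _ in range(3)]   # fewest-viewpoints selection assumed done
grids = [unproject(v) for v in views]
fused = fuse(grids)
depth = project_depth(fused)
```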
“6. The system of claim 5, wherein the reconstructed damaged vehicle is one of a computer aided design (CAD) model or a voxel occupancy grid.
“7. The system of claim 5, wherein the processor generates one or more depth maps based on the three-dimensional final grid utilizing a projection model, and displays the one or more depth maps as the reconstructed damaged vehicle.
“8. The system of claim 5, wherein the second neural network is a convolutional neural network (CNN) or a liquid state machine (LSM).
“9. The system of claim 1, wherein the vehicle is one of an automobile, a truck, a bus, a motorcycle, an all-terrain vehicle, an airplane, a ship, a boat, a personal water craft, or a train.
“10. The system of claim 1, wherein the processor trains the neural network to detect damage to the vehicle present in the image and to classify a location of the detected damage and a severity of the detected damage, the damage being at least one of a scratch, a scrape, a crack, a paint chip, a puncture, a dent, a deployed airbag, a deformation, a broken axle, a twisted frame or a bent frame.
“11. The system of claim 10, wherein the location of the detected damage is at least one of a front, a rear or a side of the vehicle and the severity of the detected damage is based on predetermined damage sub-classes.
“12. The system of claim 10, wherein the processor trains the neural network to learn to detect damage to the vehicle present in the image and to classify the location of the detected damage and the severity of the detected damage by: segmenting components of the vehicle, and detecting at least one segmented component of the vehicle indicative of damage.
“13. The system of claim 10, wherein the processor trains the neural network to learn to detect damage to the vehicle present in the image and to classify the location of the detected damage and the severity of the detected damage by: segmenting regions of the image based on saliency visualization data, and detecting at least one segmented region of the image indicative of damage to the vehicle.
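Claims 12 and 13 describe two segmentation routes to localizing damage: segmenting the vehicle's components, or segmenting image regions from saliency visualization data. A toy illustration of the saliency route, thresholding a saliency map and binning flagged pixels into front/side/rear by horizontal position; the threshold, the thirds-based binning, and the assumption of a side-on photo with the front at the left are all illustrative, not the patent's method:

```python
def segment_damage(saliency, threshold=0.5):
    """Count salient (possibly damaged) pixels per coarse location.

    `saliency` is a 2-D list of scores in [0, 1]; columns are binned into
    front / side / rear thirds, assuming the vehicle front is at the left.
    """
    width = len(saliency[0])
    counts = {"front": 0, "side": 0, "rear": 0}
    for row in saliency:
        for x, score in enumerate(row):
            if score > threshold:
                if x < width // 3:
                    counts["front"] += 1
                elif x < 2 * width // 3:
                    counts["side"] += 1
                else:
                    counts["rear"] += 1
    return counts


def classify_location(saliency, threshold=0.5):
    """Return the location with the most salient pixels."""
    counts = segment_damage(saliency, threshold)
    return max(counts, key=counts.get)
```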
“14. A method for vehicle damage detection by a computer vision system, comprising the steps of: generating a dataset, training a neural network with a plurality of images of the dataset to learn to detect an attribute of a vehicle present in an image of the dataset and to classify at least one feature of the detected attribute, and detecting the attribute of the vehicle and classifying the at least one feature of the detected attribute by the trained neural network.
“15. The method of claim 14, further comprising the step of generating a real dataset based on labeled digital images, each labeled digital image being indicative of an undamaged vehicle or a damaged vehicle.
“16. The method of claim 14, further comprising the steps of generating a simulated dataset by: generating components of a simulated vehicle, linking each component to generate a simulated vehicle, simulating an external force on the simulated vehicle to generate damage to the simulated vehicle, identifying and labeling the generated damage to the simulated vehicle, and storing the damaged simulated vehicle as an image of the simulated dataset.
“17. The method of claim 14, wherein the neural network is a convolutional neural network (CNN) or a fully convolutional network (FCN).
“18. The method of claim 14, further comprising the steps of generating a simulated dataset including a plurality of images of a reconstructed damaged vehicle based on a plurality of digital images of the damaged vehicle by: selecting digital images indicative of a fewest number of viewpoints from the plurality of digital images of the damaged vehicle, transforming the digital images by an encoder to generate two-dimensional dense feature maps utilizing a second neural network, generating a plurality of three-dimensional feature grids based on the two-dimensional dense feature maps utilizing an unprojection model, generating a three-dimensional fused feature grid by fusing the plurality of three-dimensional feature grids utilizing a recurrent fusion model, generating a three-dimensional final grid based on prior constraints and determined features utilizing the second neural network, and displaying the three-dimensional final grid as the reconstructed damaged vehicle.
“19. The method of claim 18, wherein the reconstructed damaged vehicle is one of a computer aided design model or a voxel occupancy grid.
“20. The method of claim 18, further comprising the steps of: generating one or more depth maps based on the three-dimensional final grid utilizing a projection model, and displaying the one or more depth maps as the reconstructed damaged vehicle.
“21. The method of claim 18, wherein the second neural network is a convolutional neural network (CNN) or a liquid state machine (LSM).
“22. The method of claim 14, wherein the vehicle is one of an automobile, a truck, a bus, a motorcycle, an all-terrain vehicle, an airplane, a ship, a boat, a personal water craft, or a train.
“23. The method of claim 14, further comprising the steps of training the neural network to detect damage to the vehicle present in the image and to classify a location of the detected damage and a severity of the detected damage, the damage being at least one of a scratch, a scrape, a crack, a paint chip, a puncture, a dent, a deployed airbag, a deformation, a broken axle, a twisted frame or a bent frame.
“24. The method of claim 23, wherein the location of the detected damage is at least one of a front, a rear or a side of the vehicle and the severity of the detected damage is based on predetermined damage sub-classes.
“25. The method of claim 23, further comprising the steps of training the neural network to detect damage to the vehicle present in the image and to classify the location of the detected damage and the severity of the detected damage by: segmenting components of the vehicle, and detecting at least one segmented component of the vehicle indicative of damage.
“26. The method of claim 23, further comprising the steps of training the neural network to detect damage to the vehicle present in the image and to classify the location of the detected damage and the severity of the detected damage by: segmenting regions of the image based on saliency visualization data, and detecting at least one segmented region of the image indicative of damage to the vehicle.
“27. A non-transitory computer readable medium having instructions stored thereon for vehicle damage detection by a computer vision system which, when executed by a processor, causes the processor to carry out the steps of: generating a dataset, training a neural network with a plurality of images of the dataset to learn to detect damage to a vehicle present in an image of the dataset and to classify a location of the detected damage and a severity of the detected damage utilizing segmentation processing, and detecting the damage to the vehicle and classifying the location of the detected damage and the severity of the detected damage by the trained neural network, wherein the location of the detected damage is at least one of a front, a rear or a side of the vehicle and the severity of the detected damage is based on predetermined damage sub-classes.
“28. The non-transitory computer readable medium of claim 27, the processor further carrying out the step of generating a real dataset based on labeled digital images, each labeled digital image being indicative of an undamaged vehicle or a damaged vehicle.”
There are additional claims. Please visit the full patent to read further.
For the URL and more information on this patent application, see: Gupta, Abhinav; Jujjavarapu, Sashank; Malreddy, Siddarth; Patel, Yash; Singh, Maneesh Kumar; Wang, Shengze. Computer Vision Systems and Methods for Vehicle Damage Detection with Reinforcement Learning. Filed December 16, 2020 and posted November 4, 2021. Patent URL: https://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=1&f=G&l=50&s1=%2220210342997%22.PGNR.&OS=DN/20210342997&RS=DN/20210342997
(Our reports deliver fact-based news of research and discoveries from around the world.)