Dr. Yulia Gryaditskaya suggested that tools for understanding sketches could lead to more powerful human-computer interaction and more efficient design workflows. Potential applications include searching for or creating images simply by sketching them.
Researchers from the University of Surrey and Stanford University have developed a new method for teaching AI to understand human line drawings. The model is trained on a combination of sketches and written descriptions, and it learns to group a sketch's pixels and match each group against one of the object categories in the description.
The resulting AI displays a much richer, more human-like understanding of these drawings than previous approaches, performing close to human levels in recognizing scene sketches. It correctly identified and labelled objects in complex scenes with 85% accuracy, outperforming other models that relied on labelled pixels, and it works well with informal sketches drawn by non-artists as well as with drawings of objects it was not explicitly trained on.
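To illustrate the general idea of grouping pixels and matching each group against a category from a written description, here is a minimal sketch. It uses random stand-in encoders rather than the researchers' actual models, and simply assigns each pixel group to the category whose embedding is most similar, as a hedged, simplified picture of how such matching could work.

```python
import numpy as np

# Minimal, self-contained sketch of the idea: group a sketch's pixels and
# assign each group to the closest category mentioned in a description.
# The encoders below are random stand-ins, NOT the models used in the
# Surrey/Stanford research.

rng = np.random.default_rng(0)
EMBED_DIM = 64

def encode_text(categories):
    """Stand-in text encoder: one embedding per category word."""
    return {c: rng.normal(size=EMBED_DIM) for c in categories}

def encode_pixel_groups(num_groups):
    """Stand-in visual encoder: one embedding per pixel group (e.g. a cluster of strokes)."""
    return rng.normal(size=(num_groups, EMBED_DIM))

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def label_groups(group_embeddings, category_embeddings):
    """Match each pixel group to the category with the most similar embedding."""
    labels = []
    for g in group_embeddings:
        best = max(category_embeddings, key=lambda c: cosine(g, category_embeddings[c]))
        labels.append(best)
    return labels

# Hypothetical scene description: "a giraffe standing next to a tree under the sun"
categories = ["giraffe", "tree", "sun"]
category_embeddings = encode_text(categories)
group_embeddings = encode_pixel_groups(num_groups=5)

print(label_groups(group_embeddings, category_embeddings))
```

In practice the text and pixel-group embeddings would come from trained encoders so that the similarity scores are meaningful; the snippet only shows the matching step, not the training.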