Understanding how objects are positioned and related to each other in space is an important part of visual perception. Although people recognize such relationships naturally, teaching artificial intelligence to do the same is not easy, and it becomes even harder when only a few examples are available for each type of spatial relation within a given domain.

This dataset was created to support research on how AI can learn basic spatial concepts in 2D, especially with few-shot and meta-learning methods. It covers simple but fundamental geometric concepts: a shape being 'alone', two shapes being 'close' to or 'far' from each other, and two shapes that 'overlap'. The dataset offers 2D images of various object types arranged in different spatial relationships; each image contains one or two objects placed according to a predefined spatial concept, with variation in shape, size, and positioning. It consists of sub-datasets in three main categories: (1) regular geometric shapes (such as circles, rectangles, and triangles), (2) semantically segmented objects from real-world images (such as birds and dogs), and (3) artificially generated representations of domain-specific structural anomalies such as cracks and voids. The third category relates to an application in which microscopic cross-section images of solder joints in electronic components are analyzed.

The data is generated by a Python script, which can also be used to extend the dataset. The dataset is designed to be lightweight and modular, making it suitable for rapid experimentation. It allows a model to be pre-trained on simple geometric concepts, improving its ability to generalize to novel classes from a minimal number of labeled examples. The dataset is useful in any domain where geometric relationships between possibly complex shapes are important but only a handful of annotated samples are available per class.
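To make the generation idea concrete, the following is a minimal sketch of how images for the 'alone', 'close', 'far', and 'overlap' concepts could be produced. It is not the dataset's actual generation script: the function names, the use of circles only, the image size, and the distance thresholds that separate the concepts are all illustrative assumptions.

```python
import numpy as np

def draw_circle(img, cx, cy, r):
    # Set all pixels inside the circle to 1 (foreground).
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    img[(xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2] = 1

def make_example(concept, size=64, rng=None):
    """Generate one binary image illustrating a spatial concept:
    'alone', 'close', 'far', or 'overlap'.

    The centre-distance ranges below are hypothetical choices,
    not the thresholds used in the real dataset."""
    if rng is None:
        rng = np.random.default_rng()
    img = np.zeros((size, size), dtype=np.uint8)
    r = int(rng.integers(5, 9))                 # random radius
    cx, cy = rng.integers(r, size - r, size=2)  # first object, fully inside
    draw_circle(img, cx, cy, r)
    if concept == "alone":
        return img
    # Choose a centre distance that realizes the requested relation.
    if concept == "overlap":
        d = rng.uniform(0.5 * r, 1.5 * r)   # circles intersect
    elif concept == "close":
        d = rng.uniform(2.2 * r, 2.8 * r)   # disjoint but nearby
    elif concept == "far":
        d = rng.uniform(3.2 * r, 3.8 * r)   # clearly separated
    else:
        raise ValueError(f"unknown concept: {concept!r}")
    # Try random directions until the second circle fits in the frame.
    for _ in range(1000):
        ang = rng.uniform(0, 2 * np.pi)
        cx2 = int(cx + d * np.cos(ang))
        cy2 = int(cy + d * np.sin(ang))
        if r <= cx2 < size - r and r <= cy2 < size - r:
            draw_circle(img, cx2, cy2, r)
            return img
    raise RuntimeError("could not place second object")
```

Generating many such images per concept, and varying shapes and sizes as the dataset does, yields lightweight episodes for few-shot training where each concept acts as a class.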