HUM-CARD: A human crowded annotated real dataset

- Objective & Motivation
  - The researchers collected and annotated images of dense crowds in urban spaces to analyze crowd-related environmental impacts and to support related computer vision tasks.
  - A major goal was to create a dataset that helps researchers develop algorithms for crowd analysis and environmental monitoring.
- Dataset Composition
  - Includes thousands of real-world urban images capturing a range of crowd densities and scenes (e.g., streets, transportation hubs).
- Annotation Process & Quality Control
  - Multiple annotators labeled the humans in each scene to ensure high accuracy.
  - Labels were refined through consensus mechanisms to improve consistency; a minimal agreement-scoring sketch follows this list.
- Applications & Benchmarking
  - Beyond serving as a standalone dataset, it is used to benchmark crowd-counting and density-estimation models; a density-map sketch of that evaluation target also follows the list.
  - Its greater scene complexity and realism make it a more demanding benchmark than many existing crowd datasets.
- Environmental Insights
  - The analysis explores relationships between crowd density, urban-space usage, and environmental stressors, offering richer context for environmental-management applications.
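The summary above does not spell out the consensus mechanism at code level. As a hedged illustration, inter-annotator agreement on a crowded image can be scored by greedily matching two annotators' bounding boxes on IoU, a common building block of consensus workflows; the 0.5 threshold below is an illustrative assumption, not a value from the dataset.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def agreement(boxes_a, boxes_b, thr=0.5):
    """Fraction of boxes matched between two annotators at IoU >= thr.

    Greedy one-to-one matching; any unmatched box from either annotator
    counts as a disagreement, so identical label sets score 1.0.
    """
    unmatched_b = list(boxes_b)
    matches = 0
    for a in boxes_a:
        best = max(unmatched_b, key=lambda b: iou(a, b), default=None)
        if best is not None and iou(a, best) >= thr:
            unmatched_b.remove(best)
            matches += 1
    total = len(boxes_a) + len(boxes_b) - matches  # union of detections
    return matches / total if total else 1.0
```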
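Crowd-counting benchmarks of the kind mentioned above typically convert per-person point annotations into density maps whose integral equals the crowd count. Below is a minimal sketch of that standard preprocessing step, assuming head-point annotations; the kernel width `sigma` is an illustrative choice, not a value specified by HUM-CARD.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(points, height, width, sigma=4.0):
    """Build a crowd-density map from per-person point annotations.

    points: iterable of (x, y) pixel coordinates, one per person.
    The map sums to the number of annotated people, so a counting
    model can be trained or evaluated against it.
    """
    dm = np.zeros((height, width), dtype=np.float32)
    for x, y in points:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < height and 0 <= xi < width:
            dm[yi, xi] += 1.0
    # Spread each unit impulse with a Gaussian kernel; the default
    # 'reflect' boundary mode preserves total mass, hence the count.
    return gaussian_filter(dm, sigma=sigma)

# Example: three annotated people in a 480x640 frame.
dm = density_map([(100, 50), (102, 52), (400, 300)], 480, 640)
print(round(dm.sum()))  # -> 3
```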
How Labelbox Was Used
The authors employed Labelbox, a collaborative data‑labeling platform, to efficiently and accurately annotate their dataset:
- They used Labelbox to build the annotation ontology: defining labeling categories (e.g., "main object," sub-classifications, contextual options) that structure the annotation task. A sketch of a comparable ontology in the Labelbox Python SDK follows this list.
- The platform's editor supported image-segmentation workflows on video frames, which streamlined annotators' work and kept labels consistent across the dataset.
- Key Labelbox features used:
  - Custom ontologies for complex scene interpretation.
  - Support for bounding boxes, segmentation masks, and multi-label annotations, tailored to dense crowd imagery.
  - Built-in quality control and consensus-based workflows to validate annotator agreement.
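HUM-CARD's exact ontology is not reproduced here; the snippet below is a minimal sketch of how a comparable ontology could be declared with the Labelbox Python SDK. The tool and class names ("person", "crowd_region", "scene_type") and the project name are illustrative placeholders, and exact SDK names can vary across `labelbox` package versions.

```python
import labelbox as lb

# Connect with an API key (placeholder value).
client = lb.Client(api_key="LB_API_KEY")

# Illustrative ontology for dense crowd scenes: a bounding-box tool and a
# segmentation tool for people, plus a scene-level radio classification.
ontology_builder = lb.OntologyBuilder(
    tools=[
        lb.Tool(tool=lb.Tool.Type.BBOX, name="person"),
        lb.Tool(tool=lb.Tool.Type.SEGMENTATION, name="crowd_region"),
    ],
    classifications=[
        lb.Classification(
            class_type=lb.Classification.Type.RADIO,
            name="scene_type",
            options=[lb.Option(value="street"), lb.Option(value="transport_hub")],
        )
    ],
)

# Register the ontology so it can be attached to a labeling project.
ontology = client.create_ontology(
    "hum-card-crowds",  # hypothetical ontology name
    ontology_builder.asdict(),
    media_type=lb.MediaType.Image,
)
```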
The HUM‑CARD dataset provides a rich resource for understanding urban crowding and its environmental dynamics. By leveraging Labelbox’s ontology creation tools, annotation workflows, and quality‑control mechanisms, the authors delivered a high‑quality, densely annotated dataset that supports both environmental studies and crowd‑analysis computer vision research.