Abstract

Researcher evaluation remains a central challenge in scientometrics, where reliable, transparent, and context-sensitive methods are required for decisions on recruitment, promotion, and awards. Traditional assessment approaches rely heavily on bibliometric indices such as the h-index and its variants; however, these measures often neglect publication venue quality and provide limited discriminatory power. This study proposes an integrated framework that combines retrospective bibliometric analysis, venue-aware modeling, and Genetic Programming (GP)-based symbolic regression for interpretable researcher evaluation. A balanced dataset of 1,200 computer science authors, equally divided between awardees and non-awardees, was constructed using Google Scholar and Publish or Perish. First, sixty-four bibliometric indices were computed to establish a retrospective ranking baseline, in which the h2-upper index achieved the strongest performance, identifying 73% of awardees within the top-ranked positions. Second, a venue-aware dataset covering journals, conferences, books, and patents was developed, and neural-network-estimated venue contributions indicated that journals were the dominant factor (84%). Finally, GP-based symbolic regression was applied to evolve interpretable closed-form equations integrating venue-level features. The best GP-derived model identified 91% of awardees, outperforming both the retrospective baseline and the venue-aware linear model while remaining interpretable. These findings demonstrate that combining venue-aware modeling with interpretable evolutionary computation yields a more accurate, transparent, and equitable framework for researcher evaluation, with practical implications for academic institutions and research policy design.
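Since the retrospective baseline rests on indices derived from per-paper citation counts, a brief sketch of the core computation may help. The h-index below follows its standard definition; the h2-upper value is included only as an assumed reading of the "h2 upper" variant (the share of total citations contributed by citations in excess of h within the h-core), and the citation list is illustrative rather than drawn from the study's dataset.

```python
# Minimal sketch: h-index and an assumed h2-upper share from a citation list.
# The citation counts below are illustrative, not from the study's dataset.

def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def h2_upper_share(citations):
    """Assumed definition: fraction of all citations lying above the h-core
    square, i.e. citations beyond h for each paper inside the h-core."""
    ranked = sorted(citations, reverse=True)
    h = h_index(ranked)
    total = sum(ranked)
    excess = sum(c - h for c in ranked[:h])
    return excess / total if total else 0.0

papers = [45, 33, 30, 12, 9, 7, 4, 2, 1, 0]   # hypothetical author profile
print(h_index(papers))                         # -> 6
print(round(h2_upper_share(papers), 2))        # -> 0.7
```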
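As an illustration of the symbolic-regression step, the sketch below uses gplearn, a common open-source GP library that evolves closed-form expressions; the features, synthetic target, and hyperparameters are placeholders rather than the study's actual configuration, and the evolved expression will differ from the paper's model.

```python
# Hedged sketch of GP-based symbolic regression over venue-level features,
# using gplearn (one common choice; not necessarily the library used here).
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)

# Placeholder venue-level features per author:
# [journal papers, conference papers, books, patents]
X = rng.integers(0, 60, size=(200, 4)).astype(float)

# Placeholder target: a synthetic "award score" dominated by journal output,
# standing in for the awardee/non-awardee signal described in the abstract.
y = 0.84 * X[:, 0] + 0.10 * X[:, 1] + 0.04 * X[:, 2] + 0.02 * X[:, 3]

gp = SymbolicRegressor(
    population_size=500,
    generations=20,
    function_set=("add", "sub", "mul", "div"),
    parsimony_coefficient=0.01,   # penalize bloated expressions for readability
    random_state=0,
)
gp.fit(X, y)

# The evolved program is a closed-form expression over the input features,
# which is what makes a GP-derived model interpretable.
print(gp._program)
```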