Use data on widely studied benchmarks and assessment regimes to analyze which core technical metrics are attracting the most research activity within a field. This gives some indication of where capabilities are advancing. For example, benchmarks such as ImageNet and SuperGLUE can be used to monitor progress in computer vision and natural language tasks respectively. Monitoring these areas over the past few years could have alerted governments to the likelihood of increased commercial application of these capabilities, which in turn could have prompted earlier investigations into potential sources of bias (e.g. by auditing systems prior to deployment) and into other areas of societal impact (e.g. by funding earlier research into the impacts and risks of facial recognition). Other factors driving research attention and capability advancement include computational costs, data availability, research networks, and funding.
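As a minimal sketch of what such benchmark monitoring could look like in practice, the snippet below tracks top scores on two benchmarks over time and flags those improving faster than a chosen threshold. The yearly scores, the alert threshold, and the dictionary keys are illustrative assumptions for demonstration, not real leaderboard figures.

```python
# Illustrative sketch: flag benchmarks whose reported top scores are improving
# quickly, as a rough signal of where capabilities are advancing.
# The scores and threshold below are hypothetical placeholders, not real data.

from typing import Dict, List, Tuple

# Hypothetical yearly top scores (in percentage points) for two benchmarks.
benchmark_history: Dict[str, List[Tuple[int, float]]] = {
    "ImageNet (top-1 accuracy)": [(2016, 78.0), (2018, 84.0), (2020, 88.0)],
    "SuperGLUE (overall score)": [(2019, 71.0), (2020, 89.0), (2021, 90.0)],
}

# Assumed threshold: average points of improvement per year worth flagging.
ALERT_THRESHOLD = 3.0


def average_yearly_gain(history: List[Tuple[int, float]]) -> float:
    """Average score improvement per year between the first and last entries."""
    (first_year, first_score), (last_year, last_score) = history[0], history[-1]
    years = max(last_year - first_year, 1)
    return (last_score - first_score) / years


for name, history in benchmark_history.items():
    gain = average_yearly_gain(history)
    status = "watch: rapid progress" if gain >= ALERT_THRESHOLD else "steady"
    print(f"{name}: {gain:.1f} pts/year ({status})")
```

A monitoring effort of this kind would, in practice, draw on curated leaderboard data and more robust trend estimates; the point of the sketch is only that sustained, above-threshold gains on a benchmark can serve as an early signal that the corresponding capability may soon see wider commercial application.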