According to the report (lead author Yves Younan, Senior Research Engineer at Sourcefire):
We leveraged two well-respected data sources for our research. First, our classifications of vulnerabilities are based on the Common Vulnerabilities and Exposures (CVE) database which is used today as an international standard for vulnerability numbering or identification. The database provides 25 years of information on vulnerabilities to assess, spanning 1988 to current.
Next, we used information hosted in the National Vulnerability Database (NVD) at the National Institute of Standards and Technology (NIST). We did some normalization to the data with respect to vulnerability categorization to be able to provide more complete statistics.
Not wishing to steal all of the report's thunder, we will summarise only a few of the findings; the full report is available here (free registration is required).
Figure 1: Total Vulnerabilities by Year
Figure 2: High Severity Vulnerabilities by Year
Figure 3: High Severity Vulnerabilities as a Percentage of Total by Year
When the report turned its attention to the vulnerabilities themselves, independently of the products, it found that Cross-Site Scripting (XSS) vulnerabilities were very high in frequency. However, when the analysis was narrowed to critical errors only, that category almost completely vanished and buffer overflows became the force to be reckoned with. As the report puts it: "we believe it is now safe to declare the buffer overflow the vulnerability of the quarter-century."
Figure 4: Critical Vulnerabilities as a percentage by type
Researchers (and 'hackers') also appear to have a "flavour of the year" when it comes to discovered and reported issues.