Want to ask a question?

If you would like to propose a question for the panel yourself, please send an email to [email protected]. To be considered, the question must be posed as an agree/disagree proposition.

Votes and Comments 

Participant

Institution/Country

Vote

Confidence level

Comment

Annu Kotiranta

Research Institute of the Finnish Economy/Finland

Disagree

4

To my understanding, there is wide consensus on the benefits and limitations of the traditional indicators. The use of complementary indicators is also common, especially in the drafting of innovation policy.

Carlos Aguirre

National Secretariat for Science, Technology and Innovation/Panama

Agree

8

This is the case for developing economies, where R&I policies must take into account many considerations beyond the traditional indicators, notably robust social indicators (which themselves need to be developed further).

Catalina Martínez García

CSIC Institute of Public Goods and Policies/Spain

Did Not Answer

Chan-Yuan Wong

University of Malaya/Malaysia

Agree

10

The indicators are important for providing "snapshots" of innovation performance. Misguided research policies arise when policy makers are unaware of their limitations and (over)use them as the key KPIs of an organization/economy. But it should be noted that these indicators have been instrumental for many policy makers in assessing the dynamics of technologies/sectors. One should not dismiss their usefulness.

Charles Edquist

Lund University/Sweden

Did Not Answer

David Teece

University of California Berkeley/United States

Did Not Answer

Dirk Meissner

National Research University Higher School of Economics/Russia

Did Not Answer

Dominique Foray

École Polytechnique Fédérale de Lausanne/Switzerland

Did Not Answer

Ebrahim Souzanchi Kashani

Sharif University of Technology/Iran

Strongly Agree

8

It is common for policy makers to think in terms of rankings, and therefore wrong indicators will mislead them. I have seen many cases from my own country in which policy makers take into account the rankings rather than the real system.

Frédérique Sachwald

Observatoire des sciences et des techniques (OST)/France

Disagree

10

The problem is not the use or even dominance of those indicators, but rather their MIS-use. For example, R&D intensity being used without taking the industrial/sectoral structure into account, and a 3% target being set for very diverse European countries, or even for the EU as an average.

Ganesh Rasagam

World Bank

Did Not Answer

Ian Hughes

Department of Jobs, Enterprise and Innovation/Ireland

Disagree

8

The traditional indicators for research and innovation have been carefully established by the OECD based on a sound understanding of the systemic nature of innovation. Such indicators cover a wide range of activities, from research excellence to business R&D expenditure, and serve a vital purpose. It is acknowledged, however, that these traditional indicators now need to be supplemented with additional indicators that monitor the transition of existing unsustainable systems, such as energy, health and transport, to more sustainable systems.

Jaideep Prabhu

University of Cambridge/United Kingdom/India

Strongly Agree

7

Governments and firms overemphasize the importance of metrics such as R&D spending and publications, partly because these are relatively easy to measure objectively. However, R&D spending and publications are at best inputs into the innovation process and are neither necessary nor sufficient for innovation outputs (new products, processes and business models) that create value and drive growth. My own research has shown that, in firms across nations, a far better predictor of innovation output and value creation is a culture of innovation (rather than R&D). Of course, the right culture is hard to measure and even harder to create and foster. Nevertheless, it is precisely because the right culture is hard to create that it is important to pursue, and it is what separates the most innovative firms from the rest.

Jan Wessels

VDI/VDE Innovation + Technik/Germany

Disagree

6

New indicators are needed, but traditional indicators are still helpful for innovation policy. What is necessary now is to diversify, not to change the whole system.

Jan Youtie

Georgia Tech/United States

Agree

7

Johannes Gadner

Council for Research and Technology Development/Austria

Uncertain

5

There definitely is a prevailing trend towards rationalization and indicator-based measurement. However, I am not sure whether there is any evidence suggesting that the use of indicators has exerted substantial influence on political decisions.

Juan Mateos-Garcia

National Endowment for Science, Technology and the Arts (NESTA)/United Kingdom

Did Not Answer

Kaye Fealing

Georgia Tech/United States

Agree

8

What we measure always carries the possibility of miscommunication, misappropriation, and misunderstanding. In addition, these “indicators” can lead to changed behavior in ways that diminish the validity of the measure. Merely measuring outputs such as papers and patents underrepresents the importance of elements that mean more in terms of public value. Furthermore, those measures can be gamed to show progress even when fundamental advancement has not occurred.

Keun Lee

Seoul National University/Korea

Did Not Answer

Luc Soete

UNU-Merit/Netherlands

Did Not Answer

Luis Sanz-Menendez

CSIC Institute of Public Goods and Policies/Spain

Agree

9

In some cases we may find misguided policies, but in other cases their use produces positive effects in the system.

Luiz Martins de Melo

Funding Authority for Studies and Projects (FINEP)/Brazil

Agree

10

Innovation is systemic. It is very difficult for traditional indicators to capture this feature.

Magnus Gulbrandsen

University of Oslo/Norway

Did Not Answer

Margaret Kyle

MINES ParisTech/France

Uncertain

8

R&D expenditures are easily measured inputs, but sometimes treated as something policy should maximize. Counts of papers and patents are easily measured proxies for innovation. As institutions adopted them as part of pay-for-performance, it’s clear that often the proxies are maximized rather than true innovation. However, on balance, the use of these metrics has probably encouraged better policy and innovation incentives than would be the case if we abandoned them, in the absence of better alternatives.

Mari Jose Aranguren

Basque Institute of Competitiveness/Spain

Agree

10

-

Marina Yue Zhang

University of New South Wales/Australia

Did Not Answer

Mark Dodgson

University of Queensland/Australia

Agree

9

The misuse of these data lies in their being used individually rather than seen as piecemeal indicators that have to be combined with other insights to give a fuller picture. Some are dangerously used as targets to be achieved. The old adage that what gets measured gets managed indicates how policy focus can be directed towards what is easily recorded rather than what is actually important.

Maryann Feldman

University of North Carolina/United States

Did Not Answer

Masaru Yarime

City University of Hong Kong

Disagree

7

Understanding the limitations of the traditional indicators, we can still make use of them for policy making.

Melissa Ardanche

Comisión Sectorial de Investigación Científica/Uruguay

Strongly Agree

10

Often, policies don't consider indicators in light of the specific characteristics of the National Innovation System (NIS). Despite the persistent controversy about traditional indicators and their construction, the main problem isn't their existence itself but the uncritical use of indicators and the absence of adequate ones. That kind of use results in unplanned and unintended effects on the NIS.

Mohamed Ramadan

Academy of Scientific Research and Technology/Egypt

Agree

7

Sometimes more analysis is needed for certain actions, and new indicators should be taken into consideration, such as prototypes, products, and their impact on the economy.

Oliver Gassmann

University of St. Gallen/Switzerland

Agree

9

Too often we no longer judge the content. Science is on the way to becoming a number-crunching business, with the danger of losing track. We need to move more from insight to impact, and that calls for output-oriented KPIs.

Paola Giuri

University of Bologna/Italy

Did Not Answer

Patries Boekholt

Technopolis/Netherlands

Agree

8

In particular, research funding on the basis of the number of publications has stimulated perverse behaviour, is putting additional stress on researchers and, just as importantly, has steered the research community away from high-impact and societally relevant research. In addition, if #publications is used without considering the context of research domains, it is biased in favour of some disciplines at the expense of others. Using patent applications as the sole indicator of ‘economic impact’ could lead to a lot of patents that remain unused on ‘university shelves’, wasting a lot of public funding and effort. Business R&D expenditure is a useful indicator for guiding policy and is unlikely to create perverse behaviour in the private sector.

Reinhilde Veugelers

KU Leuven/Belgium

Did Not Answer

Robert Atkinson

Information Technology and Innovation Foundation/United States

Agree

8

These conventional innovation indicators bias policy towards scientific research and away from equally, if not more, important areas of services innovation, process innovation, and business model innovation. In addition, by not focusing on actual innovation outcomes, policy is often poorly linked to outcomes and performance.

Sami Mahroum

INSEAD/United Arab Emirates

Did Not Answer

Sonja Radas

The Institute of Economics, Zagreb/Croatia

Agree

8

The economic and social landscape changes over time, so indicators need to evolve as well in order to accurately assess the state of research and innovation. There is also the question of measurement, which may affect conclusions. Context is very important in interpretation.

Stefan Kuhlmann

University of Twente/Netherlands

Did Not Answer

Susana Borrás

Copenhagen Business School/Denmark

Did Not Answer

Sylvia Schwaag Serger

Vinnova/Sweden

Agree

8

The above-mentioned indicators have shaped the behavior of researchers, companies, universities and policymakers in the sense that they have led them to seek to meet quantitative targets (e.g., publications in peer-reviewed journals, patent applications) rather than to strengthen the quality and impact of research and innovation.

Uri Gabai

Israel Innovation Authority

Did Not Answer

Uwe Cantner

Friedrich Schiller University Jena/Germany

Uncertain

6

We don't know the counterfactual. But I have become aware that interpretations of innovation-related dynamics go wrong when the argument is based on the traditional indicators, e.g., diagnosing underperformance in the development of a renewable energy system entirely on the basis of patent applications, neglecting the immense learning-by-doing effects in the large-scale business.

Wolfgang Polt

Joanneum Research/Austria

Did Not Answer

Yasunori Baba

University of Tokyo/Japan

Did Not Answer

Copyright Notice: The panel data are copyrighted. Data can be reproduced for non-commercial teaching or research purposes without infringing copyright, providing that “the Confab Club” is sufficiently acknowledged.

The Confab Club is a non-profit project and received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

How would you have voted? Let us know in the comment section below.

  • I studied this question many years ago (e.g., the difference between quantitative and qualitative indicators). Each has strengths and weaknesses, and you have to combine both to arrive at an intelligent measure of the value of R&D. Blind adherence to any single measure will lead you astray.

  • The indicators mentioned in the question posed are useful only within a very limited scope. Extrapolation and interpretation by decision-makers based on this reduced set of indicators can be misleading. In particular, the Global Innovation Index, which is used by several decision-makers, combines in one index a set of subjective indicators collected through annual polls, showing great variability from one year to another in the performance of individual countries.
    Experts on innovation analysis usually use other, more specific indicators based on innovation surveys (see, for instance, the Oslo Manual by the OECD or the Bogotá Manual by RICYT). UNESCO has recently made a comprehensive analysis of all the available innovation surveys around the world (see http://uis.unesco.org/en/topic/innovation-data and the publications there). On the other hand, in order to analyze STI policies, UNESCO has recently developed a new methodological approach to measure not only the explicit STI policies and their instruments (legal framework, organizational chart and operational policy instruments like tax incentives, competitive funds, scholarships, etc.) but also the contextual factors of each individual country (governance and political stability, demography, educational and cultural factors, industrialization and trade policies, geopolitics, etc.). The project, known as the Global Observatory of STI Policy Instruments or GO-SPIN, has published seven country profiles of around 300 pages each (Botswana, Zimbabwe, Malawi, Rwanda, Israel, Guatemala and Laos) and will launch in April 2018 a comprehensive online platform with similar information on around 50 developing countries (https://en.unesco.org/go-spin).

  • The inadequacy of the standard suite of science and engineering indicators for policy analysis and policy making, especially regarding innovation and competitiveness, has been understood for literally decades. It is of no great moment that the OECD has endorsed this approach; in this, they are nearly as misguided as NSF and the U.S. Congress have been. I say “nearly” because the OECD has at least followed the lead of a number of European nations in seeking to enrich its indicators basket by making use of results from the so-called “European innovation survey.” The US has steadfastly refused to implement that somewhat modest set of improvements. More than 35 years ago, I was the P.I. on an NSF-funded project to develop new candidate indicators of innovation. John Hansen and I and our colleagues ran two trial surveys that suggested that gathering new indicators could be both feasible and fruitful. NSF had a contractor run a more ambitious test that apparently yielded substandard response rates, and that led to abandonment of the lines we had suggested. In the ensuing decades a few very small steps have been taken to improve the situation at NSF, but the improvements have been very modest, largely owing to budget shortfalls and ideological opposition to such data collection at OMB and by other White House interests. As a result, what we observe is periodic rehashing of the same tired arguments and observations about the sorry state of innovation indicators, most recently in a major study conducted by the U.S. National Academies. What is needed is not more reviews of indicators and their limits but instead a healthy program of financial support from one or more US agencies, or agencies of other concerned governments, to mount a concerted effort to develop, test and validate new indicators that are more relevant to 21st-century industry and society. I won’t hold my breath waiting for this to happen.

  • Indicators defined as “traditional” in the introduction of this debate are not usually used in isolation to evaluate innovation. I would therefore vote “Uncertain”.