China Daily, 30/4/2014

Real purpose of academic research is to seek truth

Simon Ho

The purpose of academic research is to seek truth and new knowledge that enhance social development. Such research is one of the integral responsibilities of a faculty member working in an academic institution, and one of the key aspects of their job performance. Faculty members who are not active in research probably use outdated teaching materials that may not meet the needs of our fast-changing society.

It is understandable that some faculty members also pursue research to bolster their reputations and achieve promotion and tenure. Collectively, faculty members' research performance (in terms of funding received, publications, awards, etc) affects their academic institution's resources, performance, reputation and ranking. However, these should be by-products, not the main purposes, of research.

Unfortunately, the misuse of resources on so-called "research activities" in many publicly funded research institutions has attracted public attention in recent years. As academic institutions increasingly promote the use of quantitative performance indicators in research, many faculty members' work comes to serve the purpose of chasing indicators, a dynamic that has numerous negative effects. Following this trend, researchers are often inclined to select agendas or topics favored by the indicator system. This utilitarian approach discourages more meaningful or valuable research.

Overall, funding bodies and institutions should emphasize the quality of research, grounded in social needs and in researchers' personal interests, expertise and curiosity. Quality research should contribute to the advancement of new knowledge, be relevant (that is, problem-based) and contribute to the betterment of society. Research conducted merely to publish more articles in so-called "high-impact" or SCI/SSCI journals is a waste of time and energy, particularly if those publications do not try to help solve the world's pressing problems.

For many years, a journal's "impact factor", a problematic measure of how frequently the journal's articles are cited in other journals, has mistakenly dominated faculty members' publication choices and institutions' assessment methods. The importance of an individual article cannot be judged by its citation performance. Even in journals with the highest impact factors, some articles are seldom cited by other researchers, and citation counts are always affected by the popularity of specific topics. Some international research funding and assessment bodies have stopped using journal-based metrics such as impact factors as a surrogate measure of the quality of specific research articles.

Research is also highly valued as a process. Nobel Laureate Professor Myron Scholes put it this way in a speech in Hong Kong in late 2013: "Persistence means that even if we fail, we learn from our failure along the way to our goals. We applaud success, but we also applaud failures that lead to success. Great academics are persistent and willing to fail in an attempt to succeed. Research and development are never easy. The great researchers 'search' for new ideas that break the tyranny of the 'data mining', and build new models and gain new insights from the information set. This is the critical difference between successful and unsuccessful hunters."

Mechanical assessment systems often discourage scholars from pursuing riskier but potentially innovative projects, because it may be years before the first research articles from such projects are published. Research projects are intellectual, innovative and enterprising processes. Just as taking part in the Olympic Games is lauded because we treasure the participants' sportsmanship as much as their results, so dedicated participation in research should be respected. Quick results that do little more than echo others' views have little value, regardless of where they are finally published.

Present-day faculty members are inclined to pursue more research and less teaching to satisfy research assessment requirements and compete for grants. Some universities' overemphasis on research and rankings has been criticized for diminishing the importance of teaching and students' personal development. These institutions typically look at candidates' research records rather than their teaching performance and contribution to student development when making recruitment, promotion and tenure decisions. Universities are not pure research facilities. Institution leaders should encourage and support research activities that reinforce outstanding teaching and learning.

All these factors have implications for how research efforts are assessed. Assessing researchers' achievements should amount to much more than counting their publications and the impact factors of the journals in which they appear. Evaluating a researcher's contributions requires that some of his or her selected publications be read and analyzed, a task that must not be passed automatically to journal editors or replaced by blind reliance on journal classifications, citation indices, impact factors and the like. Many academic institutions and funding bodies now ask candidates to identify their five best articles, which makes it easier for reviewers to evaluate the selected publications in detail. Reviewers should read each representative publication carefully to make a comprehensive and fair assessment.

In short, quality research is characterized by at least four factors: innovation, relevance, impact and the ability to enhance teaching. Ultimately, top-rated institutions will focus on these factors and pay little heed to quantitative research requirements.

The author is a senior university leader and professor.