Article Review on Research Methodology:

Assessing Software Review Meetings

 

Prepared for DCS891C

Summer 2000

Last Updated: August 27, 2000

Presented July 21, 2000

 

Report POC: Stephen Parshley

Office: 845-938-4165

E-mail: cs4463@usma.edu

 

 

Abstract

 

This paper responds to a DCS891C research course requirement to read one article, find a related article, and answer questions on both.  Through examination of a research paper, the student identifies one or more research methodologies, at least one related research paper, and possible directions for future research on the topics covered in the papers.  The focus of this paper is an article by Adam Porter and Philip Johnson entitled Assessing Software Review Meetings: Results of a Comparative Analysis of Two Experimental Studies.  The authors report the results of a comparative analysis of two experimental studies, both designed to measure the value of software review meetings.  The authors assert that their work takes a novel tack between classical literature review and statistical meta-analysis, producing exceptionally valuable results.  They conclude that their analytical methodology has value for future research.


Contents

 

1      Overview

2      Requirements Part 1 – What the paper is about

2.1   What is the essence of the research?

2.2   What is the problem being investigated?

2.3   What is the technical background of the problem and the context of the work?

3      Requirements Part 2 – Research Methodology

4      Requirements Part 3 – Related Work

5      Requirements Part 4 – Related Research Problems

6      References

 


1         Overview

 

Adam A. Porter and Philip M. Johnson conducted a comparative analysis of two experimental studies.  Both experimental studies “assess the contribution of meetings to software review.”[1]  The purpose of the authors’ comparative analysis is to demonstrate the value of an eclectic research methodology “to obtain a higher scientific ‘return on investment’ form this (sic) valuable raw data.”[2]   Porter and Johnson suggest that standard formal research methodologies have limitations that their methodology overcomes.  They note that both classical literature review and strict statistical meta-analysis fail to extract information that comparative review can mine from experimental studies.  Porter and Johnson attempt to combine the best of each methodology to glean results not obtainable by either alone, yet retain the objective credibility of standard research methods.  Their methodology seeks to benefit from disparate independent studies by normalizing data wherever possible.  The authors follow The Reconciliation Process[3], consisting of standardizing independent and dependent variables, developing common hypotheses, analyzing data separately, and comparing results.  They apply their methodology to two independent experimental studies.  The first study, an experiment conducted at the University of Hawaii, compared real and nominal groups applying the standard meeting-based review method to source code.  The second study, an experiment conducted at the University of Maryland, compared individual versus collective performance of a group of students conducting software assessment of requirements documents.  The goal of each independent study was to determine the value of meetings in the software review process. 
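
The reconciliation process lends itself to illustration as a small pipeline.  The sketch below is a minimal rendering of its four steps; all function and field names (normalize_variables, meeting_rates, and so on) are hypothetical, since Porter and Johnson describe the process in prose rather than code.

    # A minimal sketch of the four-step reconciliation process described
    # by Porter and Johnson.  All names and data structures here are
    # hypothetical illustrations, not the authors' notation.

    def mean(xs):
        return sum(xs) / len(xs)

    def normalize_variables(study):
        # Step 1: restate each study's variables in shared terms, e.g.,
        # defects found per reviewer-hour, for meeting and non-meeting review.
        return {"meeting": study["meeting_rates"],
                "no_meeting": study["no_meeting_rates"]}

    def test_common_hypothesis(normalized):
        # Steps 2 and 3: evaluate the same hypothesis -- "meeting-based
        # review finds more defects" -- separately within each study.
        return mean(normalized["meeting"]) > mean(normalized["no_meeting"])

    def reconcile(hawaii, maryland):
        # Step 4: compare the per-study outcomes rather than pooling raw data.
        outcomes = [test_common_hypothesis(normalize_variables(s))
                    for s in (hawaii, maryland)]
        return {"agree": outcomes[0] == outcomes[1], "outcomes": outcomes}

The essential design point is that each study is analyzed separately (step 3) and only the outcomes are compared (step 4), which avoids the stronger homogeneity assumptions a pooled statistical meta-analysis would require.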

 

2         Requirements Part 1 – What the paper is about

 

2.1        What is the essence of the research?

 

Porter and Johnson seek to derive as much credible, objective information as possible from existing research by applying what they purport to be a unique research methodology, one combining elements of both literature review and statistical meta-analysis.

 

2.2   What is the problem being investigated?

 

The authors investigate whether their research methodology can extract more information from existing data than typical forms of comparative analysis can.  The specific question they address is: Are software review meetings effective and efficient, compared with review processes that omit meetings?

The authors claim that their findings transcended the limited original published findings of each independent study, and that the gain is attributable to their research methodology.  They conclude that the evidence shows only limited value for software review meetings and that more research is necessary.

 

2.3   What is the technical background of the problem and the context of the work? 

 

Porter and Johnson review a variety of software review assessment techniques involving meetings.  They note that while individual studies have interesting results, “no single study gives unequivocal results.”[4]  The context of their research is existing studies, each with specific formal, standardized techniques for ensuring objectivity.  In each of the studies in their article, the authors note that the experimental hypotheses challenge the traditional wisdom on meetings: that they are valuable, essential parts of the software review process, whose primary benefits are synergy, clarification, defect discovery, establishment of milestones, and education for participants.  The acknowledged costs of meetings are time, management, preparation, and cooperation, with additional overhead evident in scheduling and development costs.  Ultimately, the studies’ aims are reducible to cost-benefit analyses.
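
Because the studies reduce to cost-benefit analyses, the underlying arithmetic can be made explicit.  The following sketch uses invented figures; neither study reports these numbers, and the dollar values are placeholders.

    # Hypothetical cost-benefit comparison of review with and without a
    # meeting.  Every number here is invented for illustration.

    HOURLY_COST = 60.0    # assumed fully loaded cost per reviewer-hour
    DEFECT_VALUE = 400.0  # assumed value of catching one defect early

    def net_value(defects_found, reviewer_hours):
        # Benefit of the defects found minus the labor spent finding them.
        return defects_found * DEFECT_VALUE - reviewer_hours * HOURLY_COST

    # A meeting adds preparation, scheduling, and attendance hours, so it
    # pays off only if it surfaces enough additional defects.
    with_meeting = net_value(defects_found=12, reviewer_hours=24)
    without_meeting = net_value(defects_found=11, reviewer_hours=12)
    print(with_meeting, without_meeting)  # 3360.0 versus 3680.0

In this toy case the one extra defect found by meeting does not repay the doubled labor, which is exactly the kind of trade-off the studies attempt to measure empirically.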

 

3         Requirements Part 2 – Research Methodology

 

The authors’ research methodology is comparative analysis of existing studies.  They apply a reconciliation process that lets data from separate but related experiments be interpreted together.  Central to that process, and most applicable to this article, is the development of common hypotheses[5].  The common hypotheses represent the area of meaningful overlap between the independent studies’ data.  Another way of viewing the hypotheses is to say that they represent the questions whose answers may be derivable through the application of Porter and Johnson’s comparative analysis methodology.
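
That “meaningful overlap” can be pictured as a set intersection: a common hypothesis may mention only variables that both studies measured.  The variable names below are hypothetical and chosen only to suggest the two experiments’ differing designs.

    # Hypothetical illustration of common-hypothesis development: a jointly
    # testable hypothesis may use only variables both studies measured.

    hawaii_vars = {"defects_found", "review_hours", "group_type",
                   "artifact_type"}
    maryland_vars = {"defects_found", "review_hours", "group_type",
                     "artifact_type", "reviewer_experience"}

    shared = hawaii_vars & maryland_vars

    def jointly_testable(hypothesis_vars):
        # True only if every variable the hypothesis mentions lies in the
        # overlap between the two studies.
        return hypothesis_vars <= shared

    print(jointly_testable({"defects_found", "group_type"}))           # True
    print(jointly_testable({"defects_found", "reviewer_experience"}))  # False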

 

4         Requirements Part 3 – Related Work

 

Porter and Johnson cite Lawrence G. Votta’s work as one of the seminal formal studies on the problem of whether meetings have value in the software assessment process.  Votta uses statistical analysis to determine the relative value of meetings by examining specific benefits and associated costs.  The results of his study suggest that the overhead of scheduling and preparing for meetings is nonlinearly related to the number of participants.  Furthermore, the training value of such meetings is “dubious at best.”[6]  He ultimately suggests that large formal meetings be reduced to depositions of two or three reviewers, from which useful data may still be gleaned.  He notes that such a process might provide the majority of the benefits of meetings while avoiding the majority of their costs.  He concludes that additional studies are warranted.  The relevance of Votta’s work to Porter and Johnson is twofold: 1) Votta acknowledges the limited value of his results without additional studies to confirm the findings; 2) Votta asks questions that Porter and Johnson hope to answer.  The authors credit Votta with a significant contribution to the field, but note that such reports may have synergistic value when taken together with other related reports within a comparative analysis.  Of course, that is precisely what Porter and Johnson do.
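
Votta’s claim that meeting overhead grows nonlinearly with the number of participants can be illustrated with a toy scheduling model: if any pair of attendees may produce a calendar conflict, potential conflicts grow quadratically.  The model only illustrates the shape of the claim; it is not Votta’s actual analysis.

    # Toy model of nonlinear meeting overhead: potential calendar
    # conflicts grow with the number of participant pairs.  Illustrative
    # only; this is not the model from Votta's study.

    def potential_conflicts(n):
        # Each pair of participants is one possible scheduling clash.
        return n * (n - 1) // 2

    for n in (2, 3, 5, 10):
        print(n, potential_conflicts(n))
    # 2 -> 1, 3 -> 3, 5 -> 10, 10 -> 45

Doubling attendance roughly quadruples the chances of a conflict, which is consistent with Votta’s suggestion that depositions of two or three reviewers are far easier to schedule than large formal meetings.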

 

5         Requirements Part 4 – Related Research Problems

 

Porter and Johnson specify several areas for future research.  First, they suggest their comparative analysis might be valuable to other software engineering researchers who seek to avoid the cost of independent studies but who also seek to glean new value from existing data.  Second, the authors recommend that researchers conduct additional comparative studies to determine the validity of both their results and their methodology.  Third, and specific to the topic of their article, the authors cite a need for further investigation into reviewer specialization.  Should reviewers receive more training along narrower lines of expertise?  Can such reviewers improve defect detection both individually and as members of groups assessing software?  Finally, much work remains in determining which classes of defects are more likely to be found by meetings than by individual review.  Each of these related research problems is an outgrowth of the authors’ work.

The authors’ work is certainly no substitute for well-designed independent experimental studies, and comparative analysis appears to have limitations beyond those the authors cite.  However, their approach can be added to an arsenal of research methodologies.  Prudently chosen, each methodology may provide results more effectively and efficiently than the alternatives.

 

6         References

 

  1. Porter, Adam A., and Philip M. Johnson.  Assessing Software Review Meetings: Results of a Comparative Analysis of Two Experimental Studies.  IEEE Transactions on Software Engineering, Vol. 23, No. 3, March 1997.
  2. Votta, Lawrence G., Jr.  Does Every Inspection Need a Meeting?  ACM SIGSOFT Software Engineering Notes, Vol. 18, No. 5, December 1993.  The paper appears in the Proceedings of the First ACM SIGSOFT Symposium on the Foundations of Software Engineering (SIGSOFT ’93).


[1] Porter and Johnson, page 143. 

[2] Ibid., page 144.

[3] Ibid., page 140.

[4] Porter and Johnson, page 131.

[5] Ibid., page 140.

[6] Votta, page 114.