



Daniel L. Moody

Using Quality Function Deployment To Improve The Quality Of Data Models and Database Designs

Track: J6


Abstract:
Data modelling is a method for defining information requirements independently of how the information will be physically stored (Hull and King, 1987; ISO, 1987). It is widely used in practice to define user information requirements as part of the systems development process (Hitchman, 1995). The resulting data model can be transformed into a physical database design in a relatively straightforward way (Batini et al., 1994; Teorey, 1994). Although data modelling represents only a small proportion of the total systems development effort (estimated to be about 2%), its impact on the quality of the final system is probably greater than that of any other phase (Moody and Shanks, 1998; Witt and Simsion, 2000). The data model is a major determinant of system development costs (ASMA, 1996), system flexibility (Gartner Research, 1992), integration with other systems (Moody and Simsion, 1995) and the ability of the system to meet user requirements (Banker and Kauffman, 1991). The combination of low cost and high impact suggests that the data modelling phase represents a high-leverage point for improving software development quality and productivity.

The traditional thrust of software quality assurance has been to use "brute force" testing at the end of development (van Vliet, 1993). However, empirical studies show that more than half of the errors that occur in the systems development process result from inaccurate or incomplete requirements (Martin, 1989; Lauesen and Vinter, 2000). Moreover, the most common reason for the failure of systems development projects is incomplete requirements (Standish Group, 1995; 1996). This suggests that substantially more effort should be spent during the early development phases to catch defects when they occur, or to prevent them from occurring altogether (Zultner, 1992). According to Boehm (1981), relative to removing a defect discovered during the requirements stage, removing the same defect costs on average 3.5 times as much during design, 50 times as much at the implementation stage, and 170 times as much after delivery. Empirical studies have shown that moving quality assurance effort up to the early phases of development can be 33 times more cost-effective than testing at the end of development (Walrad and Moss, 1993).

However, it is during analysis that the notion of software development as a craft rather than an engineering discipline is strongest, and quality is therefore most difficult to assess. There are relatively few guidelines for evaluating the quality of data models, and little agreement even among experts as to what makes a "good" data model. As a result, the quality of data models produced in practice is almost entirely dependent on the competence of the designer (Moody and Shanks, 1994; Krogstie et al., 1995).
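The cost multipliers cited above can be illustrated with a small worked example. The following Python sketch applies the quoted Boehm (1981) multipliers to two hypothetical defect profiles; the defect counts and the baseline cost unit are assumptions made purely for illustration, not data from the paper.

# Illustrative only: the relative defect-removal cost multipliers quoted in the
# abstract (Boehm, 1981), applied to two hypothetical profiles of 100 defects
# that all originate in the requirements phase. Defect counts and the baseline
# cost unit are assumed for this sketch.

RELATIVE_COST = {
    "requirements": 1.0,      # baseline: defect caught in the requirements phase
    "design": 3.5,
    "implementation": 50.0,
    "after delivery": 170.0,
}

def total_removal_cost(defects_found, unit_cost=1.0):
    """Sum the removal cost over the phases in which the defects are caught."""
    return sum(count * RELATIVE_COST[phase] * unit_cost
               for phase, count in defects_found.items())

# Scenario A: most requirements defects slip through to implementation and delivery.
late = {"requirements": 20, "design": 20, "implementation": 40, "after delivery": 20}
# Scenario B: quality assurance effort is shifted to the early phases.
early = {"requirements": 70, "design": 25, "implementation": 5, "after delivery": 0}

print(total_removal_cost(late))   # 5490.0
print(total_removal_cost(early))  # 407.5  (roughly 13 times cheaper)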

Objectives Of This Paper:
This paper defines an approach to improving the quality of data models that incorporates the principles of Quality Function Deployment (QFD):

  • It defines a set of quality factors for evaluating the quality of data models. These correspond to technical requirements of the data model. The quality factors have been empirically validated in practice using the technique of action research (Moody and Shanks, 1998).
  • These quality factors are then mapped to external software quality factors (user-defined dimensions of quality), as defined by Fitzpatrick and Higgins (1998). These correspond to customer requirements of the system. The mapping between data model quality factors and external quality factors forms the first House of Quality as defined in QFD (Evans and Lindsay, 2002); a simplified sketch is given after this list.
  • It defines interactions between the data model quality factors (technical requirements). These are derived from theory and empirical studies of data modelling.
  • It shows how characteristics of the data model can be mapped to characteristics of the physical database design (corresponding to component characteristics) to form the second House of Quality. Together these Houses of Quality provide the basis for designing a database that more effectively meets user requirements.
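A minimal sketch of the first House of Quality described above is shown below. The factor names, importance weights and relationship strengths are illustrative assumptions only; the actual quality factors come from Moody and Shanks (1998) and Fitzpatrick and Higgins (1998), and the relationships between them are established in the paper itself.

# A minimal House of Quality sketch in Python. All factor names, weights and
# relationship strengths below are illustrative assumptions, not the values
# used in the paper.

# Customer requirements (external software quality factors) with importance weights (1-5).
customer_requirements = {
    "suitability": 5,
    "maintainability": 4,
    "efficiency": 3,
}

# Technical requirements (data model quality factors).
technical_requirements = ["completeness", "flexibility", "understandability", "implementability"]

# Relationship matrix: strength of the link between each customer requirement and
# each technical requirement, using the conventional QFD 0/1/3/9 scale.
relationships = {
    "suitability":     {"completeness": 9, "flexibility": 3, "understandability": 3, "implementability": 1},
    "maintainability": {"completeness": 3, "flexibility": 9, "understandability": 9, "implementability": 3},
    "efficiency":      {"completeness": 1, "flexibility": 1, "understandability": 0, "implementability": 9},
}

# Technical importance = sum over customer requirements of (importance weight x relationship strength).
technical_importance = {
    tech: sum(weight * relationships[cust][tech]
              for cust, weight in customer_requirements.items())
    for tech in technical_requirements
}

for tech, score in sorted(technical_importance.items(), key=lambda kv: -kv[1]):
    print(f"{tech:18s} {score}")

The second House of Quality has the same structure, with the data model quality factors taking the role of the customer requirements and the characteristics of the physical database design taking the role of the technical requirements.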

Author:
Daniel Moody is an Associate Professor in the Department of Computer and Information Science at the Norwegian University of Science and Technology (visiting from the School of Business Systems, Monash University). He is the Australian President of the Data Management Association (DAMA) and Australian World-Wide Representative for the Information Resource Management Association (IRMA). Daniel has held senior data management positions in some of Australia's largest commercial organisations, and has worked as a consultant for IBM Australia and Simsion Bowles & Associates (an Australian-based data management consultancy). He has consulted to a wide range of organisations, both in Australia and overseas, including Singapore, Hong Kong, Indonesia, Taiwan and South Korea. He has held academic positions at a number of Australia's leading universities, including the University of Melbourne, the University of New South Wales and the University of Queensland. His research interests include data modelling, information resource management, information economics, data warehousing and knowledge management. He has published over 50 papers in the IS field, in both practitioner and academic forums, and has chaired a number of national and international conferences.


