May 29, 2015 // By Michelle Miller
Database-specific testing can identify issues that normal functional testing misses. Because it focuses attention on the data itself and how it is controlled, it can surface issues early that might otherwise not appear until the application has been in production for some time.
There are many approaches to testing the database, and ownership of that testing can vary by project depending on the resources and skill sets available. A good test strategy introduces testing early in the project life cycle and uses methods that can be automated and reused throughout the project. Considering which areas other functional testing already covers, and augmenting those that aren't covered, avoids redundancy and ensures good test coverage. We will touch on each of these in this series.
When applications hinge on extensive databases, testing those databases throughout development becomes a necessity. However, the extent of database testing varies widely across the industry. In fact, there are multiple facets of a database that need to be tested to confirm its reliability.
There are four basic categories that database testing falls into – Database designs, Data, Interfaces, and Business Rules/Functionality.
Database designs should be tested early and are often tested through reviews. Identifying design issues early in a project can reduce defects and rework significantly; it is much easier to change a design before it has been implemented, even partially. Factors to consider during the reviews include field/property/attribute definitions, naming conventions, and normalization vs. de-normalization. This requires understanding the kind of data the tables will contain and how it will be used, as well as understanding the organization and how it operates, including whether there are standards or best practices that should be followed.
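As a sketch, a design review might walk through a schema like the following (the customer/order tables and all names here are hypothetical), checking field definitions, naming conventions, and normalization decisions while changes are still cheap:

```python
import sqlite3

# Hypothetical schema sketch: consistent naming (snake_case, singular table
# names), explicit field definitions, and a normalized customer/order split
# so customer data is not repeated on every order.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT    NOT NULL,
        email       TEXT    UNIQUE
    );
    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        order_total REAL    NOT NULL CHECK (order_total >= 0)
    );
""")
# A review walks through definitions like these before any data is loaded.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # → ['customer', 'customer_order']
```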
Data testing particularly applies to migrations or data conversions. Reviewing the actual data that has been stored in the database can uncover issues in the original design, business requirements, interface programs, etc. When reviewing data, testers take into account three aspects of the data: Data Integrity, Data Quality, and Relational Integrity.
Data Integrity – ACID test
- Atomicity – a transactional unit of work either completes entirely or is entirely rolled back
- Consistency – only data which is valid according to defined rules gets saved, (i.e. constraints)
- Isolation – the order in which concurrent transactions execute does not affect the end result; each transaction produces the same result as if it had run serially
- Durability – (Persistence) – committed transactions remain committed even if the database crashes or power failures occur
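The atomicity and consistency properties above can be demonstrated with a small sketch (a hypothetical account table in SQLite): when one statement in a transaction violates a constraint, the whole unit of work is rolled back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, "
             "balance REAL NOT NULL CHECK (balance >= 0))")
conn.execute("INSERT INTO account VALUES (1, 100.0), (2, 50.0)")
conn.commit()

# A transfer that would overdraw account 1 violates the CHECK constraint
# (consistency), so the entire transaction rolls back (atomicity) and
# neither balance changes -- including the credit that already ran.
try:
    with conn:  # the context manager commits or rolls back as a unit
        conn.execute("UPDATE account SET balance = balance + 200 WHERE id = 2")
        conn.execute("UPDATE account SET balance = balance - 200 WHERE id = 1")
except sqlite3.IntegrityError:
    pass

balances = dict(conn.execute("SELECT id, balance FROM account"))
print(balances)  # → {1: 100.0, 2: 50.0}
```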
Data Quality
- Invariants – records whose values fall outside the expected bounds
- Precision – a related check, where an attribute's size or precision is smaller or larger than the expected values
- Defaults – are there nulls in fields where values are expected; is the number of records carrying default values outside the expected norms
Relational Integrity
- All expected relationships between records are present
Interface testing covers how data comes into the database and how it goes out. Interfaces (EDI, ETL, DTS, UX, HTML, XML, etc.) may come "out of the box" with an application or may be custom built. Data that is manipulated via data maps and database views needs to be validated to confirm it lands in the database appropriately. If not, you run the risk of unexpected outputs.
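A minimal sketch of this kind of interface validation, assuming a hypothetical inbound feed loaded through a simple data map, reconciles record counts and control totals between the source and what actually landed in the database:

```python
import sqlite3

# Hypothetical inbound feed (e.g. rows parsed from an EDI or CSV extract).
feed = [("101", "19.99"), ("102", "5.00"), ("103", "12.50")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoice (invoice_id INTEGER PRIMARY KEY, amount REAL)")
# The "data map": cast each inbound string field to the target column type.
conn.executemany("INSERT INTO invoice VALUES (?, ?)",
                 [(int(i), float(a)) for i, a in feed])
conn.commit()

# Reconcile counts and control totals between source feed and target table.
count, total = conn.execute("SELECT COUNT(*), SUM(amount) FROM invoice").fetchone()
assert count == len(feed)
assert round(total, 2) == round(sum(float(a) for _, a in feed), 2)
print(count, round(total, 2))  # → 3 37.49
```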
Business Rules/Functionality testing includes calculations and data transformations that are triggered by specific events or derived from values within the transaction. They may also be derived from related database fields or tables. These rules are typically coded in triggers or stored procedures, and they include constraints.
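As a sketch, a business rule coded in a trigger can be tested by asserting the derived value itself, not just that the insert succeeded (the order_line table and the rule here are hypothetical, using SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE order_line (line_id INTEGER PRIMARY KEY,
                             qty INTEGER NOT NULL,
                             unit_price REAL NOT NULL,
                             line_total REAL);
    -- Business rule coded in a trigger: derive line_total from the
    -- values within the transaction.
    CREATE TRIGGER calc_line_total AFTER INSERT ON order_line
    BEGIN
        UPDATE order_line SET line_total = NEW.qty * NEW.unit_price
        WHERE line_id = NEW.line_id;
    END;
    INSERT INTO order_line (line_id, qty, unit_price) VALUES (1, 3, 2.50);
""")
# The test asserts the derived value the rule was supposed to produce.
total = conn.execute(
    "SELECT line_total FROM order_line WHERE line_id = 1").fetchone()[0]
print(total)  # → 7.5
```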
When complex databases are implemented, their testing must be both comprehensive in scope and in-depth in execution. We will explore this in our next post.
If you’d like to contact us directly, email us or call us at 877-277-1044.