Direct and Indirect Measures, Reliability


Direct and Indirect Measures

A direct measure is obtained by applying measurement rules directly to the phenomenon of interest. For example, using specified counting rules, a software program's "Lines of Code" (LOC) can be measured directly.

An indirect measure is obtained by combining direct measures. For example, the number of "Function Points" is an indirect measure determined by counting a system's inputs, outputs, queries, files, and interfaces.
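The distinction can be sketched in code: the five counts below are direct measures, and the function point total is the indirect measure combined from them. The counts are hypothetical, and the average complexity weights follow the commonly cited IFPUG scheme (real counts weight each item as low, average, or high).

```python
# Unadjusted function point (UFP) count built from five direct measures.
# The weights below are the commonly cited IFPUG "average" weights;
# a full count classifies each item as low/average/high complexity.
AVERAGE_WEIGHTS = {
    "inputs": 4,      # external inputs
    "outputs": 5,     # external outputs
    "queries": 4,     # external inquiries
    "files": 10,      # internal logical files
    "interfaces": 7,  # external interface files
}

def unadjusted_function_points(counts: dict) -> int:
    """Combine the direct counts into the indirect UFP measure."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

print(unadjusted_function_points(
    {"inputs": 6, "outputs": 4, "queries": 3, "files": 2, "interfaces": 1}
))  # 6*4 + 4*5 + 3*4 + 2*10 + 1*7 = 83
```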

Software Reliability

  • IEEE 982.1-1988 defines software reliability as “The ability of a system or component to perform its required functions under stated conditions for a specified period of time.”
  • It is the probability that a system will operate without failure under given environmental conditions for a specified period of time.
  • Aims at fault-free performance of software systems
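The "probability of operating without failure for a specified period" can be made concrete under the simplest assumption used in basic reliability work: a constant failure intensity λ, which gives R(t) = e^(−λt). This exponential model is a sketch, not the only model SRE uses.

```python
import math

def reliability(failure_intensity: float, t: float) -> float:
    """P(no failure in [0, t]) assuming a constant failure intensity
    (exponential model, the usual simplification in basic SRE)."""
    return math.exp(-failure_intensity * t)

# Example: 0.02 failures/hour over a 10-hour period of operation.
r = reliability(0.02, 10.0)
print(round(r, 4))  # exp(-0.2) ≈ 0.8187
```

Note how the same failure intensity yields a lower reliability for a longer mission, which is why the definition must fix the time period.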

Software reliability engineering

  • Engineering (applied science) discipline
  • Measure, predict, manage reliability
  • Statistical modeling
  • Customer perspective:
      • failures vs. faults
      • meaningful time vs. development days
      • customer operational profile
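The customer operational profile mentioned above can be sketched as a table of operations with occurrence probabilities, used to drive test selection in proportion to real usage. The operation names and probabilities here are hypothetical.

```python
import random

# A hypothetical operational profile: operations and their occurrence
# probabilities (summing to 1), as observed from customer usage.
PROFILE = {
    "check_balance": 0.50,
    "transfer_funds": 0.30,
    "update_account": 0.15,
    "close_account": 0.05,
}

def select_operations(profile: dict, n: int, seed: int = 0) -> list:
    """Pick n operations to test, in proportion to customer usage."""
    rng = random.Random(seed)
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return rng.choices(ops, weights=weights, k=n)

sample = select_operations(PROFILE, 1000)
print(sample.count("check_balance"))  # roughly 500 of 1000 draws
```

Testing this way spends effort where customers actually spend time, which is what makes the resulting failure data meaningful from the customer perspective.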

SRE Process

  1. List Associated Systems
  • List all systems associated with the product that must be tested independently. They fall into two types: the base product and its variations, and super-systems.
  2. Develop Operational Profiles
  • An operational profile is the complete set of operations (commands) with their probabilities of occurrence.
  • System testers and system engineers take part in developing it.
  3. Define "Just Right" Reliability
  • Define what "failure" means for the product. A failure is any departure of system behavior in execution from user needs; failure intensity is the number of failures per unit time.
  • Choose a common measure for all failure intensities: either failures per natural unit or failures per hour.
  • Set the total system failure intensity objective (FIO) for each associated system, using field data such as customer satisfaction surveys related to measured failure intensity, or an analysis of competing products that balances the major quality characteristics users need.
  • Derive a failure intensity objective for any software you develop, and choose among software reliability strategies to meet that developed-software FIO.
  4. Prepare for Test
  • Specify new test cases for the new operations of the base product and its variations for the current release, based on the operational profile.
  • Prepare test procedures to execute load tests.
  5. Execute Test
  • Allocate test time among feature test, load test, and regression test; invoke the tests and identify system failures along with when they occur.
  • Feature tests exercise operations with the interactions and effects of the field environment minimized.
  • Load tests execute test cases simultaneously, with full interactions and all the effects of the field environment.
  • Regression tests execute some or all feature tests and are designed to reveal failures caused by faults introduced by program changes.
  6. Guide Test
  • Involves guiding the product's system test phase and release.
  • Failure data is interpreted differently for software we develop and software we acquire. We attempt to remove the faults that are causing failures.
  • For developed software, we estimate the FI/FIO ratio from the times of failure events or the number of failures per time interval, using reliability estimation programs such as CASRE (Computer Aided Software Reliability Estimation).
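A minimal sketch of the FI/FIO comparison in the last step: tools such as CASRE fit full software reliability growth models, but the idea can be illustrated with a rough sliding-window estimate of current failure intensity from failure times. The failure times, window length, and FIO value below are hypothetical.

```python
def failure_intensity(failure_times, window):
    """Estimate current failure intensity as the number of failures
    observed in the most recent `window` hours of test, divided by the
    window length. (Reliability estimation tools fit growth models to
    the full failure history instead; this is only a rough estimate.)"""
    end = max(failure_times)
    recent = [t for t in failure_times if t > end - window]
    return len(recent) / window

# Hypothetical failure times (cumulative hours of test); failures are
# spacing out over time, which suggests reliability growth.
times = [2.0, 5.0, 9.0, 20.0, 38.0, 60.0, 95.0]
fi = failure_intensity(times, window=50.0)  # 2 failures / 50 h = 0.04

fio = 0.1  # failure intensity objective: 0.1 failures/hour
print(round(fi / fio, 2))  # FI/FIO ratio; a ratio at or below 1 means
                           # the objective is currently being met
```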
