Automated Test System

Over the last several decades, Automatic Test Equipment (ATE), as applied in Automated Test Systems (ATS), has experienced tremendous growth, characterized by the development of a wide variety of general and special purpose equipment. With advancing technology and increasingly complex electronic systems, ATE has become a solution to the multifaceted problem of varying production and maintenance test strategies.

ATS are application-specific systems that increase the reliability and productivity of testing activities at different stages of product development, manufacturing, and maintenance. Different test strategies are necessary to validate a product’s performance throughout its lifecycle. These strategies are instantiated in the hardware and software associated with the Test Program Set (TPS).

The significant difference between ATS and manual testing is that, with ATS, the test control and decision logic coded in the computer program frees the test technician to perform other tasks. However, the introduction of automated test does not eliminate the need for an intelligent test technician or for analysis of the Unit Under Test (UUT). In many ways it complicates matters, because the analysis and decision logic for the test must be determined while the ATS and TPS are being developed. An automated test process cannot rely on the intelligent reaction to circumstances that the technician has always provided in the manual test process. The advantages of the ATS are the speed, precision, and repeatability possible only with a computer.

In this site, the design process for ATS is presented using standard systems engineering principles. No prior knowledge of ATE, ATS or TPS development is assumed on the part of the reader. A general familiarity with electronic testing techniques is a prerequisite for a full understanding of the material presented.

This site presents a series of reference designs as a starting point for ATS and TPS development. The proposed design stages are explained in view of the typical division between hardware and software in the ATS. A process in which design recommendations are made can ease the development of future systems, helping the design team consider the important aspects of decisions taken along the development process and keeping those decisions linked to the requirements defined in the initial development stages. Through the use of the proposed process, shorter development cycles can be expected.

Introduction to Automated Test Systems

Manual testing requires highly skilled technicians using standard test equipment to verify functionality and isolate faults within the UUT. This is accomplished by comparing known acceptable values with measured values. Technicians may compensate for insufficiencies and errors in test specifications by creating ways to circumvent test incompatibilities. Decisions on marginal measurements may vary between technicians, leading to inconsistency in testing. When the human element is introduced, the success of the testing operation is influenced by the intelligence and experience of the technician. Another consideration in manual test is the difficult task of training new test technicians when an experienced technician transfers out of the department, is promoted, or otherwise leaves. Even when the technician is highly experienced and capable of performing the required analysis and repair, manual test carries the burden of excessively long test times.

With increasing complexity and technological advances in electronics, manual test techniques have become increasingly obsolete. The ATS has solved many of the problems associated with manual test. With the advent of embedded computers, not only are traditional instruments becoming smarter, but entirely new types of instruments are appearing with far more capability, versatility, and accuracy. The ATS reduces the chance of human error by minimizing the need for manual intervention. Under automatic control, both test performance and fault isolation can be achieved with considerable speed and precision. In addition, the ATS provides consistency in the test operation, alleviates the impact of variations made to the UUT, and mitigates the training problem.

Basic Automated Test System

The basic ATS consists of the following: a control device or computer, stimulus source devices, response measurement devices, input/output switching devices, an input/output control device, and the computer program. The figure illustrates a simplified block diagram of a typical ATS. The connection to the UUT is through an Interface Device (ID).

Simplified ATS Block Diagram

The test operator loads the TPS onto the ATS. This can be accomplished through several methods. The test program can be loaded from a memory device such as a legacy floppy disk, compact disk (CD) or flash memory stick. It could also be loaded from a computer network or server. Finally it could be resident on the ATS local hard drive.

The UUT is connected to the ATS, usually through an interface device. The ATS computer directly controls the test process. It selects the stimuli (power supplies, signal generators, digital signals, etc.) from the various stimulus devices and sets them to the required levels. The stimulus devices are routed to the UUT via an automatically controlled switching device. Simultaneously, the responses from the UUT are conditioned in the ID and routed through the automatic switching device to the appropriate measurement devices. Measurement scales and calibration factors are applied where appropriate; for example, signal amplitudes may need to be scaled, and path loss through the ID and ATS cabling may require compensation. Finally, the measurement comparisons are made against preprogrammed limits. The operator monitors the test sequence. All results, errors, or operator intervention messages (adjustments, connections, or manual observations) are displayed on the computer monitor. Supplemental instructions may be contained in a Test Program Instruction (TPI) manual.
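
As a concrete illustration of the measurement step described above, the sketch below scales a raw reading, compensates for path loss through the ID and cabling, and compares the corrected value against preprogrammed limits. The function, parameter names, and numbers are illustrative assumptions, not part of any particular ATS.

```python
# Hypothetical sketch of the limit-comparison step described above; function
# and parameter names are invented for illustration.

def evaluate_measurement(raw_reading, scale=1.0, path_loss_db=0.0,
                         lower_limit=None, upper_limit=None):
    """Scale a raw reading, compensate for cabling path loss (in dB),
    and compare the corrected value against preprogrammed limits."""
    compensation = 10 ** (path_loss_db / 20.0)  # dB -> linear voltage gain
    value = raw_reading * scale * compensation
    passed = ((lower_limit is None or value >= lower_limit) and
              (upper_limit is None or value <= upper_limit))
    return value, passed

# Example: a 0.5 V raw reading with 6.02 dB of cable loss corrects to ~1.0 V,
# which falls within the programmed 0.9-1.1 V window.
value, ok = evaluate_measurement(0.5, path_loss_db=6.02,
                                 lower_limit=0.9, upper_limit=1.1)
```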

Test Program Set

The generation of an efficient TPS to fully test and evaluate a UUT is not always easily achieved. It depends on many factors even before the initial test plan is developed. The TPS consists of those items necessary to test the UUT on the ATS.

Elements of a Test Program

There are four basic elements to a TPS: the test program, interface device, test program instructions, and support documentation.

Test Program

The test program (TP) contains the coded test sequence. When executed on the ATS, the TP provides the system a set of instructions sufficient to automatically determine the condition of the UUT. For diagnostic programs, the sequence will automatically isolate the fault.
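
In miniature, the coded sequence can be pictured as an ordered list of test steps that the ATS executes until one fails. This is a hypothetical sketch; the step names and pass/fail stubs are invented, and a real diagnostic program would branch from the failing step to isolate the fault.

```python
# Hypothetical miniature of a TP: an ordered sequence of named test steps,
# executed until one fails. Step names and stub results are invented; a real
# diagnostic program would branch from the failing step to isolate the fault.

def run_test_program(steps):
    """Run (name, test) pairs in order; stop at the first failure."""
    for name, test in steps:
        if not test():
            return "FAIL", name  # diagnostic isolation would begin here
    return "PASS", None

steps = [
    ("power_supply_check", lambda: True),
    ("continuity_check",   lambda: True),
    ("gain_check",         lambda: False),  # simulated failing measurement
]
status, failed_at = run_test_program(steps)
```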

Interface Device

The interface device (ID) provides the mechanical and electrical connection and signal conditioning, if required, between the ATS and the UUT. The ID may also include cables and mounting fixtures.

Test Program Instructions

The test program instructions (TPI) provide the information needed for testing which cannot be conveniently provided or displayed by the ATS under test program control.

Support Documentation

The support documentation consists of information, specifications, schematics, and logic diagrams necessary for analysis of the TP, ID and UUT in the event a problem or anomaly occurs during the testing process.

Test Program Set Development

Once the UUT test requirements have been defined, ATS and TPS development can commence. The development can be defined in four main phases. Each phase will be discussed in more detail in later chapters.

Test Program and Interface Design

The TP designer lays the groundwork for the entire test program and ATS. An initial test plan should be developed; this will spawn the development of test procedures. The output of the TP design phase is a complete UUT test procedure with associated TPI, ATS, and ID design requirements. A thorough design is required to ensure total compatibility between the UUT and the TPS, ID, and ATS.

Test Program Development or Coding

In this phase of development the test program is coded into the specified test language. In addition, ID hardware diagrams, schematics, and documentation are finalized. The pre-production ID is manufactured and made ready for the next phase.

Program Integration

In this phase, the process of debugging the operation and performance of the TPS is accomplished. Care should be taken when making changes to any element of the TPS. Any one change may adversely affect numerous tests.

TPS Acceptance

This is the final phase of the TPS development process. In this phase the finished product is demonstrated and must meet all specifications and requirements. All information, documentation, schematics and drawings necessary for support of the TPS are included in the final package as Support Documentation.

Relationship between TPS and ATS

To be fully effective, the ATS and TPS must be a completely integrated system. Many factors are involved when a TPS is to be generated for a specific UUT. If the ATS requirements have been properly defined the TPS development cost can be significantly reduced.

The number and type of signal I/O devices and the programming environment must be identified early in the ATS development cycle. Tradeoffs must be made between costs to be increased in the ATS and costs that will affect any future TPS development. These tradeoffs should be based on careful cost-performance analysis. Hardware and software compromises made to justify the cost of an ATS may greatly increase future TPS development costs. In addition, ATS testing capability enhancements not included or identified in the development stage may be difficult to justify later. Typically, the characteristics of an ATS remain stable once the system is released for production. Consequently, desirable characteristics must be considered and included in the ATS development process if they are to be included at all.

Some things to consider early in the ATS development process are throughput, fault detection, ATS maintenance and calibration, and reconfigurability.

ATS Throughput

ATS throughput is defined as the test rate. The faster the test system, the fewer test systems are required to meet the production volume requirements. Test yield also forms part of this equation. A reduction in the number of systems invariably drives down costs in other areas, such as personnel and manufacturing space.

Fault Detection

In the manufacturing process, the earlier a fault is detected, the more economical it is to correct. For this reason, test and inspection should always be placed at the front of the manufacturing process. Tests at the front end should be developed to detect initial production failures, including wiring errors, opens, shorts, improperly installed components, improperly polarized components, wrong components, bad components, and solder splashes.

Accurate diagnostics are tied in with fault detection. The better the level of fault identification, the higher the possibility of successful rework. Although some processes discard faulty product as too expensive to rework, there is clearly a financial cost to this approach. Given throughput issues, repair and retest is often attractive. In a maintenance scenario it is required.

Maintenance and Calibration

Time for routine maintenance and calibration has to be planned in advance. Adequate training and backup for the test equipment are vital. A major problem in test can be catastrophic, as it will invariably halt production.

Reconfigurability

Manufacturing lines must be capable of manufacturing multiple products with as short a changeover time as possible. In a typical scenario these changeovers are often outside the manufacturer’s control, but there must be methods for updating and reconfiguring the ATS as new products are introduced.

Program Responsibilities

The delegation of task responsibilities at each stage of ATS development depends primarily on contract definition and the contractor’s internal organization. Figures 2-4 depict the flow diagram of a typical process encountered in development of the ATS.

[Figures 2-4: Flow diagrams of the typical ATS development process]

Development of an ATS requires a combination of talents from a variety of personnel. To effectively coordinate the development process and personnel, one person (the Program Manager) should be in control. The Program Manager should possess management expertise along with familiarity with the ATS and its hardware and software characteristics. The table contains a list of the various tasks involved and recommendations for individual participation and responsibility. Support requirements necessary for each task are also delineated. This does not imply that the TPS design task is to be subdivided and allocated among various personnel. Rather, it indicates that the single point of responsibility, the test program designer, requires a variety of supporting personnel and functional groups to effectively accomplish the TPS design task. Organization, communications, and controls that ensure the required support is available when necessary are crucial if an efficient TPS effort is to be accomplished.

Test Programming Responsibilities

ID | Task | Data Source | TYAD Principal Responsibility | Support
1 | UUT Documentation | Customer (Program/Item Manager) | Commodity Engineer | Technical Library
2 | Requirements and Policy | ISO Procedures | TPS Program Management (PM) | Customer, QA and TPS Engineer
3 | Test Requirements Document and Updates | Customer (Program/Item Manager) | Commodity Engineer | Technical Library
4 | TPS Development Process Control | ISO Procedures | TPS-PM | QA, Administrative Staff
5 | Concept Review | TPS Development Artifacts | Senior TPS Engineer | TPS Engineer, TPS-PM, Customer, QA, Mission Software, ATE Support
6 | Detailed Test Design | TPS Development Artifacts | TPS Engineer | ATE Support, Senior TPS Engineer
7 | Interface Device Design | | TPS Engineer | ATE Support, Senior TPS Engineer
8 | Preliminary Design Review | TPS Development Artifacts | Senior TPS Engineer | TPS Engineer, TPS-PM, Customer, QA, Mission Software, ATE Support
9 | TP Coding | | TPS Engineer | Senior TPS Engineer
10 | Interim Design Review | TPS Development Artifacts | Senior TPS Engineer | TPS Engineer, TPS-PM, Customer, QA, Mission Software, ATE Support
11 | ID Fabrication | | Engineering Lab Technician | TPS Engineer
12 | TPS Documentation | | TPS Engineer | Senior TPS Engineer
13 | Integration | | TPS Engineer | Senior TPS Engineer
14 | QA Desk Audit | TPS Development Artifacts | Quality Assurance | ATE Support
15 | Acceptance Testing | | Quality Assurance | ATE Support, TPS Engineer
16 | Delivery to Repository | TPS Development Artifacts | TPS Engineer | TPS-PM

Design Objectives and Requirements

Requirements Gathering

Before any ATS or TPS development can begin, the requirements for the ATS and each TPS must be clearly established. Early identification of the requirements that influence system performance parameters and system configuration from a support standpoint will ensure a complete and effective test strategy with minimal cost of ownership. This further requires the development of an optimum diagnostic concept that considers various degrees of Built-In-Test/Built-In-Test-Equipment (BIT/BITE), Built-In-Diagnostics (BID), Automatic Test Equipment (ATE) with associated TPS, and manual test. The process must implement a design approach that ensures the electronic systems and equipment are testable.

Testing of electronic circuits has historically been an activity not considered until the end of the system design or prototype phase; the emphasis has been that testing is a post-design activity. In the past this was acceptable because the complexity of electronic circuits was manageable, particularly from the point of view of ‘observability’ of component behavior. Very Large Scale Integrated circuit (VLSI) technology has changed that perspective. Because of high costs and the inability to adequately test complex components, it is imperative for the designer to consider testability at the early conceptual design stages in order to avoid unsupportable designs. The term testability addresses the extent to which a system or subsystem supports fault detection in a confident, timely, and cost-effective manner. The incorporation of adequate testability, including BIT, requires early and systematic management attention to testability requirements, design, and measurement. This concept prescribes a uniform approach to testability (including BIT) requirements; testability analysis, prediction, and evaluation; and preparation of testability documents.

 UUT Source Documentation

The following UUT documentation items are necessary to initiate ATS and TPS development. These items are supplied by the UUT developer to the test developer.

  1. UUT Requirements
    1. Current configurations of the UUTs must be made available; a minimum of two new UUTs.
    2. These UUTs will be Customer Furnished Equipment (CFE) to the ATS or TPS developer.
    3. After Acceptance Testing, one of the UUTs should be saved as a ‘golden’ UUT.
    4. The second UUT should go to the ATS/TPS fielding team.
    5. It is preferable that one UUT be modified for fault insertion (i.e., no conformal coating, use of sockets to replace/remove ICs, etc.).
  2. UUT Technical Data Package (TDP)
    1. The UUT TDP should include, at a minimum:
      1. UUT product specification
      2. Testing requirements
      3. Schematics
      4. Assembly drawings
      5. Parts list
      6. System software documentation as required
  3. UUT Failure Mode, Effects, and Criticality Analysis (FMECA)
    1. The FMECA supports preparation of the TPS test strategy, ensuring that the most likely faults are detected and isolated first. It will also be used in selecting a realistic set of UUT failure modes to be inserted during development and acceptance.
  4. Testability Analysis Report

ATS and TPS Documentation

The ATS and TPS documentation should consist of those items necessary to fully develop, document, deliver, operate, maintain, and modify all elements of an ATS or TPS. The documentation should: act as a communication medium between members of the development team; be a system information repository to be used by maintenance engineers; provide information for management to help them plan, budget and schedule the ATS development process; and tell users how to use and administer the system. Satisfying these requirements requires different types of document from informal working documents through to professionally produced user manuals. Test developers are usually responsible for producing most of this documentation.

At this time it is not possible to define a specific document set that should be required; this depends on the customer for the system, its expected lifetime, and the expected development schedule. Generally, the documentation produced falls into two classes: process and product.

Process Documentation

These documents record the process of development and maintenance. Plans, schedules, process quality documents and organizational and project standards are process documentation.

Effective management requires the process being managed to be visible. Because a TPS is intangible, and the software process involves apparently similar cognitive tasks rather than obviously different physical tasks, the only way this visibility can be achieved is through the use of process documentation.

Process documentation falls into a number of categories:

  1. Plans, estimates and schedules: These are documents produced by managers which are used to predict and to control the software process.
  2. Reports: These are documents which report how resources were used during the process of development.
  3. Standards: These are documents which set out how the process is to be implemented. These may be developed from organizational, national or international standards.
  4. Engineering Notebooks: These are often the principal technical communication documents in a project. They record the ideas and thoughts of the engineers working on the project; are interim versions of product documentation; describe implementation strategies; and set out problems which have been identified. They often, implicitly, record the rationale for design decisions.
  5. Memos and electronic mail messages: These record the details of everyday communications between managers and development engineers.

Product Documentation

This documentation describes the product that is being developed. Product documentation describes the product from the point of view of the test developer developing and maintaining the ATS or TPS; user documentation provides a product description that is oriented towards ATS users.

Product documentation is concerned with describing the delivered ATS and TPS. Unlike most process documentation, it has a relatively long life. It must evolve in step with the ATS. Product documentation includes user documentation which tells users how to use the system. System documentation is principally intended for developers and system maintainers. This documentation will describe how to develop a TPS and how to maintain and support the system.

User Documentation

Users of a system are not all the same. The producer of documentation must structure it to cater for different user tasks and different levels of expertise and experience. It is particularly important to distinguish between end-users and developers:

End-users use the software to assist with operating the test system. They want to know how the ATS and TPS can help them. They are not interested in administration details.

Developers are responsible for writing the TPS and for ATS support. This may involve acting as an operator, as a network manager, or as a technical consultant who fixes end-users’ hardware and software problems and liaises between users and other developers.

To cater for these different classes of user and different levels of user expertise, there are at least five documents (or chapters in a single document) which should be delivered with the ATS.

The functional description of the ATS outlines the system requirements and briefly describes the services provided. This document should provide an overview of the system. Users should be able to read this document and an introductory manual and decide whether the system is what they need.

The system installation document is intended for maintainers and developers. It should provide details of how to install the system in a particular environment. It should contain a description of the files making up the system and the minimal hardware configuration required. The permanent files which must be established, how to start the system, and the configuration-dependent files which must be changed to tailor the system to a particular host should also be described.

The introductory manual should present an informal introduction to the system, describing its ‘normal’ usage. It should describe how to get started and how end-users might make use of the common system facilities. It should be liberally illustrated with examples. Inevitably beginners, whatever their background and experience, will make mistakes. Easily discovered information on how to recover from these mistakes and restart useful work should be an integral part of this document.

The system reference manual should describe the system facilities and their usage, should provide a complete listing of error messages and should describe how to recover from detected errors. It should be complete. Formal descriptive techniques may be used. The style of the reference manual should not be unnecessarily pedantic and turgid, but completeness is more important than readability.

System maintenance and developer’s guide should be provided. This should describe the messages generated when the system interacts with other systems and how to react to these messages. If system hardware is involved, it might also explain the operator’s task in maintaining that hardware. For example, it might describe how to clear faults, how to connect new instruments, etc.

A quick reference card listing the available software facilities and how to use them is convenient for experienced system users. On-line help, containing brief information about the system, can save the user time spent consulting manuals, although it should not be seen as a replacement for more comprehensive documentation.

System Documentation

System documentation includes all of the documents describing the system itself from the requirements specification to the final acceptance test plan. Documents describing the design, implementation and testing of a system are essential if the program is to be understood and maintained. Like user documentation, it is important that system documentation is structured, with overviews leading the reader into more formal and detailed descriptions of each aspect of the system.

The system documentation should include:

  • Test system requirements document and an associated rationale.
  • A document describing the system architecture.
  • For each program in the system, a description of the architecture of that program.
  • For each component in the system, a description of its functionality and interfaces.
  • Program source code listings. These should be commented, with comments that explain complex sections of code and provide a rationale for the coding method used. If meaningful names and a good, structured programming style are used, much of the code should be self-documenting without additional comments.
  • Validation documents describing how the system is validated and how the validation information relates to the requirements.
  • A system maintenance guide that describes: known problems with the system; which parts of the system are hardware and software dependent; and which describes how evolution of the system has been taken into account in its design.

Test Strategy

Testing is done to validate a product in the three basic phases of the product’s life cycle:

  • Development
  • Manufacturing
  • Maintenance

Each of the three phases requires its own unique test plan, methods, procedures, and techniques. Each can vary significantly in equipment, complexity, time, and expense.

For example, consider validating design uncertainty versus validating the manufacturing process for correct product assembly. Validating the manufacturing process requires a relatively simple and repetitive test setup and little knowledge of the product’s design, whereas validating design uncertainty requires significant design knowledge and non-repetitive, complex testing.

Functional testing in a manufacturing environment is usually performed at room ambient and should not, if possible, require any special environmental conditioning, such as warm-up time, or temperature conditioning.

During functional testing on the production line there may be multiple initial production failures, including wiring errors, opens, shorts, improperly installed components, improperly polarized components, wrong components, bad components, and solder splashes. After the device is placed into service, only one component is expected to fail at a time.

Automatic testing is almost always used in the production environment for the following reasons:

  1. Modern systems are complex and manual test procedures would take excessive time considering the quantity of tests required.
  2. Automatic testing minimizes operator judgment and standardizes results.
  3. High production can only be maintained using automatic testers.

Production testing does not qualify a design; it is used to verify that the device has been assembled correctly and that all the components are in proper working order. Production testing is used to detect manufacturing defects. The number of tests should be minimized and performed as quickly as possible to increase production rate. Statistical analyses should provide the basis for the determination of test limits in order to optimize the detection of faulty hardware.
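
One common way to realize the statistical basis mentioned above is to measure a sample of known-good units and set limits at the mean plus or minus k standard deviations. The readings and the choice of k = 3 below are illustrative assumptions, not values from any production program.

```python
import statistics

# A common way to derive limits statistically, as suggested above: measure a
# sample of known-good units and set limits at the mean +/- k standard
# deviations. The readings and the choice k = 3 are illustrative assumptions.

def statistical_limits(measurements, k=3.0):
    mu = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)  # sample standard deviation
    return mu - k * sigma, mu + k * sigma

good_units = [4.98, 5.01, 5.00, 4.99, 5.02, 5.00, 4.97, 5.03]  # volts
lo, hi = statistical_limits(good_units)
```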

The testing philosophy should be structured to detect failures at the lowest possible level, since the cost to remedy a failure increases substantially at higher test levels. The PVT philosophy is usually driven by contract requirements and program direction. The following tests should be considered at the earliest possible time when planning any manufacturing effort:

  1. One Hundred Percent Test of All Bare Printed Wiring Boards, Cables and Wiring Harnesses – This testing should use high voltage to detect shorts or leakage paths and current to detect open signal paths. Signal paths that normally conduct low levels of current or signal paths that normally conduct high levels of current should be tested with levels similar to those they will see in actual use.
  2. Board Level Test of All CCAs
  3. Functional Test of All Assemblies and Subsystems Including CCAs
  4. Functional Test of the System
  5. Off-Line Diagnostic Test Stations
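
Item 1 above can be sketched as a simple screen: isolation paths must show high resistance under high-voltage stimulus, and signal paths must show low resistance under current stimulus. The thresholds below are illustrative assumptions, not values drawn from any standard.

```python
# Hypothetical screen for bare boards, cables, and harnesses: isolation paths
# must read high resistance under high-voltage stimulus; signal paths must
# read low resistance under current stimulus. Thresholds are illustrative.

HIPOT_MIN_OHMS = 100e6      # minimum acceptable isolation resistance
CONTINUITY_MAX_OHMS = 1.0   # maximum acceptable signal-path resistance

def screen_path(measured_ohms, path_type):
    if path_type == "isolation":
        return measured_ohms >= HIPOT_MIN_OHMS
    if path_type == "continuity":
        return measured_ohms <= CONTINUITY_MAX_OHMS
    raise ValueError(f"unknown path type: {path_type}")

results = [
    screen_path(5e8, "isolation"),    # good isolation path
    screen_path(0.3, "continuity"),   # good signal path
    screen_path(25.0, "continuity"),  # open or bad joint: fails
]
```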

In general, the test philosophy at each level of PVT assumes that the previous level of test has been performed successfully, and duplication of tests is discouraged since it decreases production rate and increases cost. Multiple failures will occur during production, and the PVT philosophy should be structured to detect them. It is desirable to detect all failures in a given UUT during one pass across the test set. However, this is usually not possible during functional testing and may not always be possible during in-circuit testing, depending on the type of test used. Multiple failures may have to be detected and isolated by sequential test runs.

PVT sometimes defers testing to higher levels for one reason or another. Usually, a test is elevated to a higher level only because it is difficult or impossible to perform at a lower level, an undesirable situation. It is preferred that tests be performed at the lowest possible level. Do not overlook the provisioning for spares when making these decisions. For example, if a test is omitted at the shop replaceable unit (SRU) level but performed at a higher level, critical SRU functions may not be verified at the lower level. This is not a problem if the SRU is installed into the next higher assembly and the missing tests are performed at the higher level. However, if the same SRU test is used for acceptance of a spare SRU, the critical test will not be performed. The solution is to have a separate test requirement for spares or to include the critical test for all hardware.

Vertical commonality is certainly a design goal for hardware. It is a very useful philosophy but the tests need to be tailored for use at each level. It provides uniformity in test results at all levels.

The advantages of vertical commonality are:

  1. Hardware and Software are Common at the Various Levels – This makes changes easier to implement since only one change has to be engineered. The changed test hardware or software is then distributed to the various sites.
  2. Test Results at All Levels are Repeatable and Correlatable

The disadvantages of vertical commonality are:

  1. It May Be Too Costly for Small Production Quantities – For small production quantities, where expensive test tools are required and are not already available, requiring the use of these expensive testers for the sake of commonality may not be cost effective. It may be more cost effective to use manual or semiautomatic testers.
  2. Compromise May Be Necessary – Because of the different test philosophies between Production and field tests, compromise may be necessary at one or both test levels.

From a designer’s viewpoint, production test is very similar to field test. What succeeds in the field will succeed in production. There are two significant differences:

  1. Multiple failures may exist in production and the independence of circuitry is important. If a single failure makes multiple circuits fail to operate, all the components in those circuits will be removed and the cost of the assembly goes up accordingly.
  2. The level of accuracy is higher in production since components should only reflect purchased tolerances. This may make noise levels much more critical during Production test.

When creating a Test Requirements Document (TRD), the designer should always consider the accuracy of measurements in the presence of noise. A 10-bit ADC is not considered extremely accurate: with +5 and −5 volt references, the least significant bit represents about 10 millivolts. If you expect 100 millivolts of noise in your circuit, requiring accuracy to the 3rd LSB (i.e., bit 3, about 80 millivolts) might be a mistake. If you are going to use digital filtering, auto-calibration, or multiple references to eliminate noise and tolerance build-up, throw the test engineers a bone and include a description of these processes in your TRD.
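The arithmetic behind that warning can be sketched as follows. This is an illustrative calculation only; the ADC parameters and noise figure are taken from the example above, not from any particular test requirement:

```python
# Resolution of an ideal ADC given its bit width and reference span.
def adc_lsb_volts(bits: int, v_ref_low: float, v_ref_high: float) -> float:
    """Voltage represented by one least significant bit."""
    return (v_ref_high - v_ref_low) / (2 ** bits)

# 10-bit converter with +/-5 V references: one LSB is ~9.77 mV,
# commonly rounded to the 10 mV figure used in the text.
lsb = adc_lsb_volts(10, -5.0, 5.0)
print(f"1 LSB = {lsb * 1000:.2f} mV")

# "Accuracy to the 3rd LSB" corresponds to bit 3, i.e. 2**3 = 8 LSBs:
tolerance = 8 * lsb
print(f"Bit 3 = {tolerance * 1000:.1f} mV")

# With 100 mV of expected circuit noise, an ~80 mV tolerance
# cannot be met reliably without filtering or averaging:
noise = 0.100
print("Tolerance exceeds noise floor:", tolerance > noise)
```

Running this shows why the requirement is questionable: the requested tolerance (~78 mV) is smaller than the expected noise (100 mV), so a single raw reading cannot guarantee the measurement.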

Test Requirements

UUT test and performance requirements must be defined and documented prior to detailed test design. The TPS developer, no matter how talented or experienced, cannot effectively generate an accurate and complete TPS without some guidance. Without accurate UUT specifications and failure mode data upon which to develop fault isolation criteria, the actual support requirements of the UUT may never be fully appreciated. Although a test design may appear perfectly good and comprehensive, the test may well fall short of UUT fault detection objectives. Similarly, testing based on inadequate data may be too stringent and result in an unduly high unit rejection rate, may be an inadequate test of performance, or may result in a low probability of mission success.

It is possible to perform test design using minimal data; however, this process is extremely costly and time consuming. It is also error-prone due to mistakes that can occur during the reverse engineering that such an approach requires.

Bench Testing

Bench testing of a UUT has been a valuable tool for the test programmer. Physical inspection and bench test results can be very useful in discovering and resolving documentation errors and as a means for verifying and/or supplementing UUT source documentation.

Unfortunately, some modern circuitry does not lend itself to bench testing as easily as circuits in the past. This is due to the increasing use of VLSI, hybrid ICs, and microprocessors in ball grid array packaging and high pin count surface mount devices. The number of functions performed by each replaceable, non-repairable component and the number of tests and interconnections per component have all increased, although the number of I/O pins on the UUT may not have increased proportionately. Therefore, manually exercising these UUTs with bench test techniques does not always simplify the understanding of the UUT and may become a problem in itself.

The detailed analysis performed by the test programmer will almost always uncover errors in UUT documentation. Even if it were possible to get 100 percent accurate documentation there would still be unanswered questions that will require bench testing.

Requirements of a Test Program Set

Once the UUT is understood and test requirements have been defined, ATS and TPS development can commence. The development can be defined in four main phases.

Test Program and Interface Design

The TP designer lays the groundwork for the entire test program and ATS. An initial test plan should be developed; this will spawn the development of test procedures. The output of the TP design phase is a complete UUT test procedure with associated TPI, ATS and ID design requirements. A thorough design is required to ensure total compatibility between the UUT and the TPS, ID and ATS.

Test Program Development or Coding

In this phase of development the test program is coded into the specified test language. In addition, ID hardware diagrams, schematics, and documentation are finalized. The pre-production ID is manufactured and made ready for the next phase.

Test Program Integration

In this phase, the process of debugging the operation and performance of the TPS is accomplished. Care should be taken when making changes to any element of the TPS. Any one change may adversely affect numerous tests.

TPS Acceptance

This is the final phase of the TPS development process. In this phase the finished product is demonstrated and must meet all specifications and requirements. All information, documentation, schematics and drawings necessary for support of the TPS are included in the final package as Support Documentation.

Test Program Development Process

A TPS is a collection of software test routines that serve two purposes: the first is to prove the unit under test (UUT) is operational, and the second is to isolate failures if the first set of tests detects a failure. These tests are linked through program jumps, function calls or subroutine calls. In order to understand the differences between traditional diagnostic development and diagnostic development using a fault model, it is necessary to first examine the process used for writing a test program set.
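The two-purpose structure described above can be sketched in miniature. The test names, pass criteria, and suspect-component lists below are hypothetical placeholders, not a real TPS language or an actual UUT:

```python
# Minimal sketch of TPS control flow: a go/no-go chain proves the UUT
# operational; a detected failure branches into fault-isolation logic.
# All tests and component names here are illustrative assumptions.

def go_nogo_tests(uut):
    """Run performance tests in order; return the first failing test name."""
    tests = [
        ("power_supply", lambda u: u["vcc"] == 5.0),
        ("clock",        lambda u: u["clock_mhz"] > 0),
        ("output_level", lambda u: 0.9 <= u["gain"] <= 1.1),
    ]
    for name, check in tests:
        if not check(uut):
            return name          # first failure triggers isolation
    return None                  # all tests pass: UUT operational

def fault_isolate(failed_test):
    """Map a failed functional test to a list of suspect components."""
    suspects = {
        "power_supply": ["U1 regulator", "C3"],
        "clock":        ["Y1 crystal", "U2 oscillator"],
        "output_level": ["U5 amplifier", "R12"],
    }
    return suspects.get(failed_test, ["unknown"])

uut = {"vcc": 5.0, "clock_mhz": 0, "gain": 1.0}   # simulated readings
failed = go_nogo_tests(uut)
if failed:
    print("Failed:", failed, "-> suspects:", fault_isolate(failed))
else:
    print("UUT operational")
```

The linking of tests "through program jumps, function calls or subroutine calls" appears here as the ordered list of checks and the branch into `fault_isolate`; a real TPS adds measurement setup, safe-to-turn-on checks, and operator interaction around the same skeleton.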

The TPS development process is the core of the test system. The figure below illustrates the high-level process flow.

High Level TPS Development Process

As with hardware, there are software issues to consider:

  • Should the project support a rewrite of the test code into a more modern language or try to use existing code? Can the existing code be converted?
  • What programming assumptions were made in the old software code?
  • Elements of legacy code were probably based upon common understandings of software developers at the time they created the legacy code. These assumptions may not be documented, and they may not be obvious today.
  • Supporting data and documentation may be tied to the legacy code explicitly, and changing the code can result in significant costs to update and manage the associated data.
  • The ability to interchange instruments combines hardware and software issues.