Some predictive genetic tests become available without adequate assessment of their benefits and risks. When this happens, providers and consumers cannot make a fully informed decision about whether or not to use them. Although extensive use has eventually proved most tests to be of benefit, a few have not proved helpful and were discarded or modified. In the meantime, people were wrongly classified as at risk and subjected to treatments that, in their case, proved unnecessary or sometimes harmful. Others, who could have benefited from treatment, were classified as "normal" and not treated. Harmful effects can be avoided or at least reduced if systematic, well-designed studies to assess a test's safety and effectiveness are undertaken before tests become routinely available and after they are significantly modified. In this chapter, we present criteria for assessing genetic tests prior to routine use, policies for ensuring that the necessary data are collected and, finally, recommendations for review of the data before tests are routinely used.
The Task Force strongly holds that the clinical use of a genetic test must be based on evidence that the gene being examined is associated with the disease in question, that the test itself has analytical and clinical validity, and that the test results will be useful to the people being tested. In this section, we first describe these criteria and then consider how adherence to them can be ensured.
In developing genetic tests, scientists must first be confident that the DNA segments under investigation play a role in the disease in question. These segments might be apparently functionless markers that appear to be spatially linked on a chromosome to a disease-related gene. Linkage is demonstrated when, within families, one form of the marker is found in those with the disease more often than in blood relatives in whom the disease is absent. Because such associations might be due to chance, as was the case for the linkage claimed between bipolar affective disorder and markers on chromosome 11, and between schizophrenia and markers on chromosome 5,1,2 stringent statistical standards must be satisfied before accepting linkage,3 and the findings must be confirmed in additional families with the disease. The method has proved successful in locating disease-related genes for Huntington disease, cystic fibrosis, breast cancer, and other disorders.
Further research leads scientists from the linked, functionless marker to a nearby gene suspected of being causally related to the disease in question. The proof depends on finding mutations in the gene that are only present (in gene dosage sufficient to cause disease) in family members with the disease.a Further proof that a gene is causally related to disease comes from demonstrating that the protein encoded by the gene is absent, not synthesized in adequate amounts, or manifests a structural or functional aberration that plausibly accounts for symptoms and signs of the disease.
Another approach to identifying a disease-related gene does not depend on linkage but on suspecting that a gene that has been previously identified ("candidate" gene) plays a role in a specific disease. Here too, mutations (in gene dosage sufficient to cause disease) must be found only in those with the disease.
The DNA segments associated with a disease might be functional, common, polymorphic gene variants. Recently, attention has been given to the association between the apolipoprotein E polymorphism and Alzheimer disease (AD).7 A higher proportion of people with apoE4 will develop AD than those with other forms of the polymorphism. Some people with AD, however, will not inherit apoE4 and others with apoE4 will never develop AD;8 the polymorphism is neither a necessary nor sufficient cause for the disease. It is not clear whether polymorphic variants themselves predispose to the disease, whether the association is spurious (unlikely in the case of apoE4 and AD), or whether a marker linked to both the polymorphic gene and the disease-related gene is responsible.b The following criterion must be satisfied before either linked markers or putative disease-related mutations are used as the basis of a genetic test. The genotypes to be detected by a genetic test must be shown by scientifically valid methods to be associated with the occurrence of a disease. The observations must be independently replicated and subject to peer review.
For DNA-based tests, analytical validity requires establishing the probability that a test will be positive when a particular sequence (analyte) is present (analytical sensitivity) and the probability that the test will be negative when the sequence is absent (analytical specificity).c In contrast to DNA-based tests, enzyme and metabolite assays measure continuous variables (enzyme activity or metabolite concentration). One key measure of their analytical validity is accuracy, or the probability that the measured value will be within a predefined range of the true activity or concentration. Another measure of analytical validity is reliability, or the probability of repeatedly getting the same result.
Analytical validation of a new genetic test includes comparing it to the most definitive or "gold standard" method. The first genetic test to be used clinically might, however, be the gold standard; for example, a test that employs sequencing to detect disease-related mutations. In either case, validation includes performing replicate determinations to ensure that a single observation is not spurious, and "blind" testing of coded positive samples (from patients with the disease in whom the alteration is known to be present) and negative samples (from controls). Organizations engaged in new test development should have access to a sufficient number of patient samples to have statistical confidence in the validation. In validating a new test analytically, the laboratory techniques should be as similar as possible to those used when the test will be performed clinically once it is validated.
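The arithmetic of analytical validation is simple; the harder question is whether the panel of coded samples is large enough for statistical confidence. The following sketch, using entirely hypothetical panel counts (no actual test's data), computes analytical sensitivity and specificity from a blind panel and attaches a Wilson score confidence interval to show how panel size bounds the certainty of the estimate.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical blind-panel results: 98 of 100 coded positive samples
# detected, and 199 of 200 coded negative samples correctly negative.
sens = 98 / 100    # analytical sensitivity
spec = 199 / 200   # analytical specificity
lo, hi = wilson_interval(98, 100)

# With only 100 positive samples, a point estimate of 98% sensitivity
# is still compatible with true sensitivity in roughly the 93-99% range.
print(f"sensitivity {sens:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

A panel several times larger would be needed to narrow that interval appreciably, which is why access to a sufficient number of patient samples matters.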
Analytical sensitivity and specificity of a genetic test must be determined before it is made available in clinical practice.
Clinical validation involves establishing several measures of clinical performance, including (1) the probability that the test will be positive in people with the disease (clinical sensitivity), (2) the probability that the test will be negative in people without the disease (clinical specificity), and (3) the probability that people with positive test results will get the disease (positive predictive value, or PPV) and that people with negative results will not get the disease (negative predictive value). Predictive value depends on the prevalence of the disease in the group or population being studied, as well as on the clinical sensitivity and specificity of the test.
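The dependence of predictive value on prevalence follows directly from Bayes' rule. The sketch below uses invented performance figures, not those of any actual genetic test, to show how the same test can have a high PPV in high-risk families yet a low PPV in general-population screening.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Derive PPV and NPV from clinical sensitivity, clinical
    specificity, and disease prevalence via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Invented example: a test with 90% clinical sensitivity and
# 99% clinical specificity.
# Among members of high-risk families (prevalence ~50%), nearly
# every positive result is a true positive ...
ppv_family, _ = predictive_values(0.90, 0.99, 0.50)   # ~0.99
# ... but in population screening (prevalence 0.1%), most positive
# results are false positives despite identical test performance.
ppv_screen, _ = predictive_values(0.90, 0.99, 0.001)  # ~0.08
print(f"family PPV {ppv_family:.3f}, screening PPV {ppv_screen:.3f}")
```

This is why validation studies must be conducted in a group representative of the intended clinical population: sensitivity and specificity alone do not determine what a positive result means.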
Two intrinsic features of genetic diseases, heterogeneity and penetrance, affect clinical validity.
Heterogeneity. The same genetic disease might result from the presence (in the necessary gene dosage) of any of several different variants (alleles) of the same gene (allelic diversity) or of different genes (locus heterogeneity). With current technology, not all disease-related alleles can be identified, particularly when there are many of them, which is often the case. This failure to detect all disease-related mutations reduces a test's clinical sensitivity.
Penetrance. The probability that disease will appear when a disease-related genotype is present is the penetrance of the genotype. When penetrance is incomplete, PPV is reduced. Penetrance is incomplete when other genetic or environmental factors must be present. In high-risk breast cancer families, 10 to 15 percent of women with inherited susceptibility mutations of the BRCA1 gene will never develop breast cancer. Environmental factors and possibly other inherited factors are required as well. In women without a family history of breast cancer, the penetrance of a BRCA1 or BRCA2 mutation is even lower.10 Alleles at other gene loci and similar environments are more likely to be shared by relatives than by people in the general population.
Sensitivity can be estimated by determining the proportion of all known (symptomatic) patients with the disease in whom the test is positive. For direct DNA tests for inherited mutations whose causal role has been established, the mutation is not an effect of the disease. Therefore, determining the sensitivity in symptomatic people is a valid measure of its sensitivity among asymptomatic people. This might not be the case for tests of enzyme activity or metabolite concentration, however. They might be "effects" rather than "causes." Moreover, substances might interfere with their detection. Consequently, validation entails performing the test in healthy individuals. This can be accomplished in pilot screening programs discussed further in Chapter 3.
PPV can be estimated by comparing the frequency of positive test results in healthy people younger than the age at which the disease first manifests to their frequency in healthy people who exceed the age by which the disease usually appears. Subtracting the second frequency from the first gives a crude estimate of penetrance. This method does not take into consideration differences in mortality rates from competing causes. A more definitive but time-consuming method is prospective followup of people tested in a pilot study. Having a treatment available that might prevent symptoms of the disease complicates such a study. If all people with positive test results are treated, it will be impossible to determine whether the failure of the disease to manifest is due to incomplete penetrance or the effects of the intervention. A randomized controlled trial, in which only half of the subjects at risk are treated, can help establish the efficacy of the intervention and the penetrance of the inherited mutation.
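A rough numerical illustration of the age-comparison method follows, with invented frequencies. One added step beyond the description above: dividing the drop in test-positive frequency by the younger-group frequency expresses it as a proportion of carriers, which is the quantity penetrance describes. This is a sketch of the logic, not a formula endorsed in the text, and it ignores competing mortality.

```python
def crude_penetrance(freq_young_healthy, freq_old_healthy):
    """Crude penetrance estimate from test-positive frequencies
    among healthy people below vs. above the usual age of onset.
    Carriers who developed disease have left the healthy older
    group, so the drop in frequency, normalized by the younger
    group's frequency, approximates the fraction of carriers in
    whom disease appeared. Ignores competing causes of death."""
    drop = freq_young_healthy - freq_old_healthy
    return drop / freq_young_healthy

# Invented illustration: 1.0% of healthy young adults test positive,
# but only 0.2% of healthy people past the usual onset age do.
print(crude_penetrance(0.010, 0.002))  # 0.8, i.e., penetrance ~80%
```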
Prospective studies can take years. If widespread use of a genetic test is withheld until PPV is fully determined, manufacturers and commercial laboratories could be inhibited from developing tests and, consequently, people denied the benefits that might accrue as a result of being tested. Later in this chapter we discuss solutions to this problem.
Parameters of clinical validity will depend in part on the group or population in which the test will be used. For instance, the frequency of disease-related alleles might differ between ethnic groups, making it difficult if not impossible to extrapolate test sensitivity from one group to another. This is the case for cystic fibrosis and breast cancer, in which certain alleles can predominate in one ethnic group or geographical area but not in others.11,12 Penetrance can also differ among ethnic groups. The prevalence of disease-related alleles will have a marked effect on PPV; the greater the prevalence, the higher the PPV. Age will also affect allele prevalence; in a population older than the age at which the disease usually causes death, the allele frequency will be lower than in a younger population. For all these reasons, validation studies should be conducted in a group representative of the one in which the test is intended for clinical use.
When tests developed for one purpose are used for another, there is no assurance that the sensitivity or PPV will be the same. The maternal serum alpha-fetoprotein (MSAFP) test was formally validated and approved by the Food and Drug Administration (FDA) as a screening test for open fetal neural tube defects. When it was subsequently discovered that a low MSAFP could predict an increased probability of Down syndrome in the fetus, it quickly was used for this purpose without systematic formal validation. The sensitivity and PPV of the MSAFP test for Down syndrome and other chromosome abnormalities are lower than for neural tube defects.13 Data on a particular intended use of a test is needed before that use becomes generally accepted clinical practice.d
The three following criteria help ensure that appropriate data on the clinical validity of genetic tests will be collected during the developmental stages.
The development of tests to predict future disease often precedes the development of interventions to prevent, ameliorate, or cure that disease in those born with genotypes that increase the risk of disease. Even during this therapeutic gap, benefits might accrue from testing as discussed in Chapter 1, such as the ability to avoid the conception or birth of an affected child, reduction of uncertainty and, in those with negative results, escape from frequent monitoring for signs of disease or prophylactic surgery and fear of insurance or employment discrimination. In the absence, however, of definitive interventions for improving outcomes in those with positive test results, the benefits will be limited and not everyone will want to be tested. To improve the benefits of testing, efforts must be made as tests are developed to investigate the safety and effectiveness of new interventions. In the absence of such interventions, studies must be mounted to ensure that testing is beneficial and, particularly, does not inflict psychological harm. The balance of benefits to risks will sometimes depend on how the information is presented and who presents it. These issues are candidates for study. The effect of testing on people with negative, as well as positive results, is important to assess. In high-risk families, people with negative results might have assumed they would be affected and are unprepared to cope with a negative result. They might feel guilt for not having the problem afflicting their affected relatives.14 For genetic susceptibility testing, people with negative results might gain the false impression that they have no chance of getting the disease and persist in or undertake unhealthful behaviors possibly to their future detriment. Ways should be sought to present information and explanations to minimize inappropriate or erroneous interpretations (see Chapter 4). 
Learning why people who are offered testing decide not to be tested might also help improve understanding of people's perceptions of genetic testing.
The scientists and laboratories developing genetic tests might not have the expertise to explore a number of issues related to communication and counseling. Collaboration with clinical geneticists, genetic counselors, and psychologists can improve the quality of studies looking into these aspects of test development.
Before a genetic test can be generally accepted in clinical practice, data must be collected to demonstrate the benefits and risks that accrue from both positive and negative results.
Because of the length of time it can take to establish the appropriateness of a test for clinical use, it is all the more important to ensure the collection of data on safety and effectiveness in the course of test development. At present, no government policy requires the collection of data on clinical validity and utility for all predictive genetic tests under development. Under the Clinical Laboratory Improvement Amendments of 1988 (CLIA), any laboratory providing tests on which clinical decisions are based must demonstrate the tests' analytical validity to outside surveyors, but CLIA has no provision for review of clinical validity or utility. Under the Medical Device Amendments to the Food, Drug, and Cosmetic Act, the safety and effectiveness or substantial equivalence (to devices marketed prior to passage of the Medical Device Amendments in 1976) of clinical diagnostic testing devices, which include genetic testing devices,e must be demonstrated prior to marketing. FDA considers clinical validity in assessing safety and effectiveness of clinical laboratory testing devices, but generally not data on followup interventions. The FDA's requirements for demonstrating safety and effectiveness are limited to developers who plan to market genetic testing kits.f The FDA has acknowledged to the Task Force that it has the authority to regulate genetic tests marketed as services but is not doing so. (Personal communication from D. Bruce Burlington, M.D., Director, Center for Devices and Radiological Health, FDA, April 3, 1996.) Organizations applying for Federal grants to develop genetic tests must submit their research proposals to peer review "study sections." Institutional review boards (IRBs) must also approve protocols submitted to study sections for Federal funding. Many genetic tests, particularly for common disorders, are being developed without Federal funds for research and are not, therefore, subject to peer review.
Under FDA regulations, organizations developing new medical devices must have their investigational protocols approved by an IRB. If test results are reported for clinical use and there is no confirmatory test available, the developer must comply with FDA's Investigational Device Exemption regulations. The FDA has not enforced this regulation for developers planning to market tests as services. A number of organizations developing or offering genetic tests, including those who market their own tests (home brews), have never submitted a protocol to an IRB or contacted FDA.
Considering the structures for external review of research in the U.S. today, the Task Force is of the opinion that IRBs are the most appropriate organizations to consider whether the scientific merit of protocols for the development of genetic tests warrants the risk, however minimal, to subjects participating in the research.
Protocols for the development of genetic tests that can be used predictively must receive the approval of an institutional review board (IRB) when subject identifiers are retained and when the intention is to make the test readily available for clinical use, i.e., to market the test. IRB review should consider the adequacy of the protocol for: (a) the protection of human subjects involved in the study, and (b) the collection of data on analytic and clinical validity, and data on the test's utility for individuals who are tested. IRB review is not needed for minor changes in tests (e.g., detection of additional mutations) as long as the original test was reviewed by an IRB. IRBs may request notification of such changes, however.
Tests under development must be conducted in CLIA-certified laboratories if the results will be reported to patients or their providers.
Health department laboratories or other public agencies developing new genetic tests that satisfy these conditions must also submit protocols to properly-constituted IRBs.
In the early stages of test development, analytical validity and clinical sensitivity can be established using specimens from which identifiers have been removed. (For clinical sensitivity, it need only be known whether the specimen came from someone with disease; identity need not be known.) Using specimens stripped of identifiers prevents contacting subjects. In this case, IRB approval is not needed, although some IRBs might want to know of such studies.15 It would be more problematic to remove identifiers (anonymizing) in an attempt to estimate PPV. Positive test results on specimens from people who were healthy at the time the specimen was collected need to be followed up to see if disease subsequently appeared. Plans to contact the people or examine their medical records require IRB review and approval. Although recontact might be needed to establish PPV, it might not be appropriate to inform people of results. Informing of results would be appropriate only at a stage when the clinical validity of the test has been fairly well established and when some benefit accrues to the subject from knowing the result. Protocols should spell out what subjects will be told when they are invited to participate in the study, if and under what circumstances they will be recontacted and how recontact will be made, and under what circumstances they will be given results.
The Task Force recognizes that the development of genetic tests is an iterative process; methodological changes to improve sensitivity and, perhaps, specificity, will be made. As already indicated, such changes do not require submission of new protocols to an IRB. Changes in the population or group being tested in the developmental stage, or in the purposes of testing should be submitted for IRB review, with appropriate justification, as an amendment to the original protocol.
Institutional review boards were established to protect human subjects from the risks of participating in research.g Genetic test development entails a quest for information in order to advance medical practice and clearly falls under the rubric of research. Any research involving humans entails some risk. Even for research in which the risk to subjects is minimal, the risk should not be taken unless the research has scientific merit. OPRR has commented that "if a research project is so methodologically flawed that little or no reliable information will result, it is unethical to put subjects at risk or even to inconvenience them through participation in such a study" (emphasis added).18 (p. 4-1) As part of their duty to protect, IRBs must assess the scientific merit of protocols. Most protocols for the development of genetic tests will have scientific merit if they satisfy the criteria enumerated above. In order to protect human subjects in the development of genetic tests, IRBs must recognize the risks posed by genetic test development and determine that investigators have taken adequate steps to apprise subjects of these risks and reduce the chance of harm from those risks.19,20
The Task Force recognizes that assistance to IRBs in assessing genetic testing protocols would be helpful. After receiving considerable comment, the Task Force rejected creation of a National Genetics Board (NGB) that could review protocols requiring stringent scrutiny or set general guidelines for IRB review and provide consultation to IRBs on request.21 An NGB would add another layer of bureaucracy and further delay approval of research protocols.
The Task Force recommends that the Office for Protection from Research Risks (OPRR) develop guidelines to assist IRBs in reviewing genetic testing protocols. The proposed Secretary's Advisory Committee should work with OPRR to accomplish this task. OPRR and the Advisory Committee should consider how they can be kept apprised of protocols being submitted, in order for them to formulate relevant advice. One possibility is that IRBs submit a one-page summary of each genetic testing protocol to OPRR or the group that is developing guidance. The information could include the name of the investigator and his/her institution, the disease for which the test is being developed, intended use, method proposed, and population being studied. Based on these brief reports, the group developing guidance could request protocols for further study but would have no authority to interfere with local IRB review. The protocols would help the group develop general guidance criteria for local IRBs in future reviews.
In developing guidelines for IRBs, OPRR should focus first on tests under development that require stringent scrutiny. The proposed Secretary's Advisory Committee or its designate, in cooperation with OPRR, should establish criteria for stringent scrutiny. In addition to the three criteria mentioned in Chapter 1 - (1) tests that have the ability to predict future disease in healthy or apparently healthy people; (2) tests that are likely to be used for predictive purposes; and (3) tests for which no independent confirmation is available - others should be considered. These criteria include: (4) tests likely to have low sensitivity (due to genetic heterogeneity) and low positive predictive value (due to incomplete penetrance); (5) tests for which no intervention is available or proven to be effective in those with positive test results; (6) tests for disorders of high prevalence; (7) tests likely to be used for screening; and (8) tests likely to be used selectively in ethnic groups with higher incidence or prevalence of the disorder.h
The Task Force recommends strenuous efforts by all IRBs (commercial and academic) to avoid conflicts of interest, or the appearance of conflicts of interest, when reviewing specific protocols for genetic testing. OPRR should consider more stringent standards for all types of IRBs for avoiding conflict of interest situations. Situations in which a close colleague of the investigator is also the local expert on genetic testing pose a difficult problem for university IRBs. Such colleagues should recuse themselves and, if necessary, the IRB should obtain outside consultation. Another difficult situation arises in small companies in which development of a test is crucial to the company's success. Companies should consider using independent IRBs to avoid the appearance of a conflict of interest.
As previously mentioned, organizations that are developing genetic test kits would be expected to submit their investigative protocols to IRBs. FDA can decline to consider applications containing data from clinical investigations that have not been approved by an IRB. Organizations using Federal research funds for genetic test development are also required to obtain IRB approval. Tests developed without Federal funds, either commercially, in academic clinical laboratories, or in some health departments, are not, at the moment, in legal jeopardy if they do not obtain IRB approval.i Testing organizations should comply voluntarily with obtaining IRB approval of genetic test protocols. Other options the Task Force considered for enforcing the requirement for IRB approval included ensuring that: (1) the FDA use its authority to require all test developers, regardless of whether they plan to market tests as services or kits, to submit protocols to IRBs, (2) third-party payers refuse to reimburse for a genetic test unless the developer can show that it conducted validation/utility studies under an IRB-approved protocol,j (3) clinical laboratory surveyors (see Chapter 3) confirm that laboratories have received IRB approval of the new genetic tests they developed, and (4) Congress enacts legislation requiring submission of all research protocols, regardless of support, to an IRB.
Investigators given IRB approval for their genetic test protocols have the primary responsibility for data collection under the protocols. To expedite data collection, collaborative efforts will often be needed. For uncommon diseases, a single investigator will seldom have a sufficient number of specimens that contain all or most possible disease-related mutations. Collaboration with investigators who can provide independent sets of specimens or patients increases the likelihood that more mutations will be represented and lends greater statistical confidence to assessments of validity. In assessing tests for susceptibility mutations, having a wider range of patients of various ages obtained from different sources will shorten the time to getting a reliable estimate of PPV. Collaboration will also expedite assessing the safety and effectiveness of interventions in people with positive test results that might be included in protocols to measure test validity.
In other research fields, collaborative research has sometimes been delayed by the necessity of obtaining the approval of each collaborating institution's IRB under current regulations.22 OPRR, with input from the proposed Secretary's Advisory Committee on genetic testing, should streamline the requirements for IRB review of multicenter collaborative protocols for genetic test development in order to reduce costs and get the studies quickly underway. The Task Force calls on Federal agencies, particularly NIH and the Centers for Disease Control and Prevention (CDC), to support consortia and other collaborative efforts to facilitate collection of data on the safety and effectiveness of new genetic tests. CDC should play a coordinating role in data gathering and should be allocated sufficient funds for this purpose. In any sharing or pooling of data, confidentiality of the subject source of the data must be strictly maintained. There is, for instance, no reason why a central coordinating agency needs to know the names of subjects with positive or negative test results.
Because it has programs in place, CDC's role is particularly suited to collecting data in healthy populations (e.g., on disease-related allele frequencies). CDC could also establish procedures for tracking healthy individuals with positive test results, as well as those diagnosed with inherited disorders, to learn more about test validity, the natural history of such disorders, and the safety and effectiveness of interventions. The collection of this data should be undertaken in cooperation with test developers, health care providers, and consultants in genetics and other relevant specialties.
CDC could also function as a repository of data submitted to it by organizations competing in the development of a specific test who might not want to collaborate and share data. Respecting proprietary rights, CDC could periodically and confidentially assess the pooled data for validity and utility of the test, providing feedback to the participants on the overall findings.
The Task Force welcomes recent CDC initiatives to expand its population-based surveillance systems in order to provide data on the validity of genetic tests and post-test interventions, and to conduct epidemiologic studies to learn more about test validity, the natural history of genetic disorders, and the safety and effectiveness of interventions. These efforts should be in collaboration with other Federal and State agencies and private organizations.
Compliance with all of the criteria for assessment of genetic test validity and utility might be difficult. It can take years to determine whether a disease will appear in healthy people with positive test results or to establish whether an intervention is safe and effective in preventing or ameliorating the disease in question. The Task Force is concerned that the requirements for prolonged data collection might inhibit test development, especially if commercial firms cannot secure a profit until a test is recognized as being suitable for clinical use.k Adoption of the recommendations in the following section would facilitate rapid introduction with collection of the data necessary for assessing validity and utility.
The Task Force recognizes that assessing the validity and utility of some genetic tests will take a long time. When preliminary data indicate a test is likely to have validity and utility, the test should be approved for marketing (see below) but developers must continue to collect data until more definitive answers are obtained. Options for encouraging collection of the requisite data include the following.
Although IRBs receive a final report of investigative studies they have approved, they have no responsibility to assess the quality of the data or whether it supports the conclusions of the investigators. Considering the potential widespread use of some genetic tests and their importance, test developers must submit their validation and clinical utility data to internal as well as independent external review. In addition, test developers should provide information to professional organizations and others in order to permit informed decisions about routine use. External review should take place after data have been collected and near the point when developers believe their tests are ready for clinical use, not exclusively under investigative protocols. The Task Force recognizes that not all new genetic tests are in need of such review. The proposed Secretary's Advisory Committee should suggest criteria for external review, and recommend means of ensuring that review of tests requiring stringent scrutiny will take place. To accomplish the latter, the cooperation of various government and nongovernment groups to conduct reviews must be secured, as well as funds to support the reviews. Review should take place at the local, as well as national, level. A wide range of stakeholders should participate in reviews.
Review panels could become enmeshed in endless debate if they attempt to set cutpoints for sensitivity and PPV; these should vary depending on the particular test, its use, options for treatment, and other factors. Even for a particular test, reasonable people will differ on how much test uncertainty they can tolerate.24,25 It is more important for external reviewers to ensure that the data have been appropriately collected and analyzed than to attempt to set cutpoints. They should also review proposed informational material to make sure the data are interpreted correctly and that test limitations (such as imperfect sensitivity and PPV) are indicated. Review panels could suggest those groups that should consider using the test and those that should not.
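The point that cutpoints cannot sensibly be fixed across tests can be illustrated numerically: positive predictive value depends not only on a test's sensitivity and specificity but also on the prevalence of the condition in the group being tested. The sketch below uses hypothetical test characteristics (the 0.90 sensitivity and 0.95 specificity figures are illustrative assumptions, not values from any actual genetic test) to show how the same test yields very different PPVs in populations with different prevalences.

```python
# Illustration of why a single PPV cutpoint cannot apply to all tests:
# with sensitivity and specificity held constant, PPV is driven by the
# prevalence of the condition in the tested population.

def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value computed via Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Hypothetical test characteristics (illustrative only).
SENS, SPEC = 0.90, 0.95

for prev in (0.001, 0.01, 0.10):
    print(f"prevalence {prev:5.3f}: PPV = {ppv(SENS, SPEC, prev):.3f}")
```

At a prevalence of 1 in 1,000, fewer than 2% of positive results are true positives; at a prevalence of 10%, about two-thirds are. This is why the same test can be reasonable for targeted use in high-risk families yet perform poorly as a population screen, and why reviewers are better placed to verify the data than to legislate a universal cutpoint.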
The iterative nature of test development makes it likely that methodological improvements will be made in predictive genetic tests. If such changes are made prior to external review, developers can use the data collected before the changes as a "baseline" to demonstrate the improvements, e.g., in test sensitivity. If a test has already been externally reviewed, and the methodological changes alter the target groups, the purposes of testing, or other significant aspects, re-review should be considered by the proposed Secretary's Advisory Committee or other organizations.
The first level of local review is by the clinical laboratory that plans to make the test available for clinical use (see Chapter 3). In addition, independent local review is needed, particularly to assess clinical validity and utility. The Task Force strongly suggests that any organization in which tests are developed conduct a structured review of the analytic and clinical validity and utility of new genetic tests before marketing them or otherwise making them available for clinical use. This structured review should be conducted by people not actually involved in developing the test and collecting the data. Some medical centers have standing committees that review tests proposed for the institution's clinical laboratories; such committees could serve this function. In commercial organizations, a unit within the company, but independent of the laboratory that is actually developing the test, should review the data.
Current legal requirements that genetic tests be reviewed prior to their clinical use apply only to tests marketed as kits, which require premarket approval by FDA. Even if FDA were to include in its purview genetic tests marketed as services, its review would not address all issues of concern to the Task Force. First, FDA does not generally assess the safety and effectiveness of a laboratory test in terms of its ability to improve outcomes in those undergoing testing. Second, FDA generally limits its review to the intended uses of a test claimed by the test's sponsor in its premarket notification. Except when it restricts use of a test to specified purposes, which it has the authority to do, FDA does not exert its power to prevent a test marketed for one intended use from being used for other purposes. This is one reason why the Task Force urges developers to undertake formal validation for each intended use of a genetic test.
To improve FDA perspectives on genetic testing and related issues, the Task Force recommends that FDA bring together consultants on genetic testing, either drawn from existing panels or by convening a new panel, to provide guidance to FDA on the classification levels needed for genetic testing devices with single or multiple intended uses. Not all devices may require comparable types of review. In conjunction with the proposed Secretary's Advisory Committee's consideration of stringent scrutiny of genetic tests, these consultants should identify aspects of genetic testing that affect the classification level.
Although no other legally required mechanisms currently exist, other reviews can have a profound influence on providers' decisions to use, or not use, new medical technologies. Examples are statements of professional societies, consensus development panels, and ratings by the U.S. Preventive Services Task Force.26 Health insurers' decisions on whether a specific genetic test will be included in their benefits or reimbursement packages can also influence use; these decisions will be based on the insurers' own reviews (M Schoonmaker, submitted for publication) or on other external reviews. A recent consensus development panel on cystic fibrosis carrier screening provides an example of national external review.4
Review organizations could select the tests in the greatest need of review by using the criteria for stringent scrutiny to be developed by the proposed Secretary's Advisory Committee. The reviews would be based primarily on data collected during the test development stage or during the proposed conditional premarket approval stage. Depending on how interest in a test expands, on technological changes, and on other considerations, reviewers could periodically reassess the test as the Preventive Services Task Force does for the interventions it reviews.26
Last Reviewed: August 2005