Because of the highly interactive nature of the program, answers are relevant to all three funding announcements (RFA-HG-21-029, RFA-HG-21-030, and RFA-HG-21-031).

Genes and Alleles

  1. Should each application propose 1000 genes?

    We want the program as a whole to tackle 1000 protein-coding genes over the 5-year Phase 1. However, we anticipate planning for some overlap for validation or complementary work.
  2. Given that final gene priorities will be decided after funding, how much detail about prioritization needs to be in the application?

    Please propose a justification and a rationale, with examples. We want to hear your opinions on how to do this. But please recognize that, once the grants are funded, we will convene a discussion on how to come up with priorities for Phase 1.
  3. Can we propose genes that are all in one biological pathway?

    The larger point of Phase 1 is to sample enough different classes of genes that the effort will inform how to design a program that can assess all genes. Your rationale for prioritization should reflect how your choices will contribute to that aspect of “generalizability.”
  4. May I propose work on genes affecting a single tissue/disease?

    The key is showing how the approach would be generalizable — is the approach applicable to multiple cell types? Will it teach us lessons across multiple classes of genes/proteins? Others? This may be possible with a small number of (multi-cell type) tissues, or a disease with complex origins (e.g., involving multiple genes or tissues), but may be more challenging than other approaches.
  5. In selecting the genes, should one have in mind that they should be functionally tested in different types of organoids, such as cortical organoids, liver organoids, etc.?

    For each application, applicants should prioritize genes with their own proposed assays in mind. However, genes/alleles that lend themselves to being tested in different systems are potentially an advantage for generalizability or scale of approach, an important aspect of Phase 1. In addition, there are likely to be advantages for the ability to do validation.
  6. Do you really mean null genetic alleles only?

    Alleles should generally be null (or equivalent), with the choice justified by, e.g., stability, reproducibility, or strength of effect. The FOA also envisages that other alleles may be justified in cases where nulls may not yield informative phenotypes due to cell lethality.
  7. How well-defined does a null have to be? Can there be a mixture of frameshift/early stop codon from CRISPR editing versus defined and sequenced editing?

    The FOA says that it is important that the systems used be reproducible and validated. Although the FOA is not more specific on this point, it is my personal opinion that reviewers will evaluate how reproducible and reliable the proposed experiments will be, and how well characterized they are, as this is an important consideration in assessing which approaches will scale.
  8. Can one create/test multiple null alleles of a gene?

    I am not sure — if the reason is for testing of multiple isoforms per locus, then this seems likely to be cost-prohibitive. In general, the FOA was meant to test fully null alleles.

Samples and Assays

  1. What types of assay does this program seek?

    It seeks molecular (including multi-omic), cellular (including imaging), and other measurements. It seeks assays that a) are informative and useful in interpreting biology, including how well they relate to anatomical and physiological phenotypes arising from nulls or other alleles; b) have some potential for scale; and c) yield generalizable approaches.

    Applications will inevitably need to consider multiple tradeoffs, for example between assay specificity/informativeness; assay complexity; cost/number of assays; number of samples; number of tissue or cell types assayed; etc. Applicants should consider and justify their choices in making these tradeoffs. We anticipate that consideration of these constraints will stimulate ideas about how to surmount them, both in applications, and during the course of the program.
  2. How many assays/cell types/organ systems should I propose?

    See “generalizability” (Q3, Q4) above. Probably multiple, but it is not that simple — for example, a single organ may be fine if it has multiple cell types and offers other lessons for generalizability. The larger point is: how does what you propose inform the design of a potential Phase 2, which would assess all genes?
  3. The FOA states: “complex multicellular systems preferred.” May I propose single cell-type cultures?

    There is a preference for, but not a requirement for, complex multicellular systems (e.g., organoids or similar systems). This preference is because organoids have increasingly high potential to provide rich and informative phenotypes that are more faithful to organismal phenotypes. Over the coming 5 years of Phase 1, the methods for organoids will likely improve. However, organoid methods are not uniformly developed across all tissues. In addition, in some cases organoids may have disadvantages (e.g., cost, throughput, non-feasibility or non-relevance for certain tissues). The FOA therefore allows single cell-type cultures, where they are well justified. Please justify your choices in the context of the rest of the FOA.
  4. Does the program expect stable null cell lines as an output of the project versus pooled KO experiments where the null cells only exist during the assay period?

    There are advantages to having stable cell lines, including allowing replication, and as a resource developed from the program. The FOA does not exclude other approaches. However, the applicant will need to justify their choices in terms of the larger goals of the FOA, including scalability, reproducibility/consistency, informativeness/interpretability, ability to generalize, etc.
  5. Do the samples/cell lines/data derived from them need to be sharable? 

    Yes — products of the research should be shared. This is an explicit part of the Resource Sharing plan of this FOA, and can be considered in the score. This is also standard NIH policy. It is important that samples be derived from individuals who originally consented to broad sharing; otherwise the data cannot be incorporated into a resource. But see the next question (Q6).
  6. Am I required to share cell lines (e.g., iPSC lines) with the community?

    Although this is not specifically addressed in the FOA, completely open sharing of many cell lines to many people is likely to be cost-prohibitive. In your Resource Sharing Plan please make a reasonable case for how the samples could be shared in the context of the scientific system you propose, in a practical way. Cell lines should at least be shared among grantees in the MorPhiC consortium, to start. NHGRI may need to consider how to make such physical resources practically available (for example through a cell repository).
  7. How important is sample diversity? (i.e., population origin, sex, life stage of subjects from which samples will be derived)

    This is important, but we recognize that it may be cost-prohibitive to include enough diverse samples to enable well-powered inferences about biological variability. However, diverse samples should be used to the extent possible, so that Phase 1 yields some information about biological variation to enable us to think about better designs.

Overall Approach, Responsiveness, Format and Budget

  1. How well-defined does the strategy for functional assays need to be in the proposal? Should one group focus on a defined proven strategy or evaluate feasibility/scalability between multiple strategies and present so in the proposal?

    This FOA asks each applicant to submit an integrated approach (gene prioritization through assay data production). We expect some diversity of approach across different applications. While the FOA excludes technology development, we expect that there is room for methods optimization to improve efficiency, and that applications will be a mix of well-defined approaches that work toward the overall program goals, with some elements that are still being tested/evaluated. When discussing a part of a process that is still being evaluated, the justification should include an explanation of what the expected improvement would be, and what your criteria are for adopting or rejecting a change.
  2. Would it be responsive for one application to focus on creating stable null cell lines, and another application to focus on molecular assays of the cell lines, in a consortium manner?

    For this FOA, we ask each application to take an integrated approach from gene selection through assays. First, in Phase 1, we want to encourage some diversity of approaches. Second, we anticipate that there may be dependencies between different elements (e.g., between alleles and assays). Therefore, applications proposing just a single element will not be responsive.
  3. How much can I spend on tech dev?

    New tech dev is not part of this program. But optimization of assays to scale, or adoption of new tech, would be OK.
  4. Can we analyze our own data?

    Yes, but resources are limited. Prioritize analyses designed to characterize the quality and utility of the data for downstream applications (e.g., consistency, biological and technical variability). Analyses that include, for example, looking for correlations between assay data types, or comparisons integrating outside data (e.g., KOMP, other "perturbation × phenotype" datasets), are acceptable as long as they are mainly used to help characterize the performance of the system or demonstrate utility.
  5. It would make sense that each production center designs their own analysis, because it is likely to depend on what their assays are.

    Yes, this makes sense. Note that there are different types of analyses. There are ones required to make basic sense of the data — data processing, some validations and QC, etc. Once the data are processed, other types of analyses can be done with those derived data, including analysis by the DAV and community.

Review and Funding

  1. If multiple applications are focused on the analysis of the same types of assays, would only one be selected?

    After review, NHGRI may select among well-scored applications to achieve “program balance”. In some circumstances this may mean that we will not fund two very similar applications. Please see RFA section V.2.

Other Questions

  1. For the DAVs, given the lack of Year 1 data, what data should they start with?

    There are publicly available datasets that follow experimental design principles potentially similar to MorPhiC's — knockouts of genes in complex cellular environments followed by readouts. Modeling such data can potentially teach a lot about the challenges the MorPhiC program will face, and such datasets are likely good candidates to analyze (at scale) during Year 1.
  2. How do you see prioritization of data analysis applications in terms of their complementarity vs quality?

    Interpreting this as ‘what happens after peer review?’: as the FOA (Section V.2) makes clear, a number of factors beyond scientific merit go into the making of an award.
  3. Can you provide any more details on what / how much external data to integrate, and whether that integration is expected to be done by copying and integrating the data (extract, transform, load (ETL) from the source), or whether other datasets should not be duplicated but instead, for example, cross-queried via the portal?

    We do not anticipate that the DRACC will be required to make ETL-type copies of large-scale non-MorPhiC datasets. It is expected that datasets of relevance to MorPhiC that are used by the DAVs in Year 1 might be made available by the DRACC, provided this is possible under copyright and other laws.
  4. Is the processing and analysis of the data expected to be done by the DRACC?

    It is expected that the DRACC will bring specific expertise on handling the processing of common assays. It is also expected that the Data Production Research and Development Centers will propose to bring analytical expertise.
  5. Should we expect consortium-wide portals/channels for disseminating algorithms/tools/results, or is the expectation that each DAV will have individual dissemination channels?

    The MorPhiC program intends to have one portal to access all resources from the program. The DRACC will be responsible for coordinating, building, and maintaining this portal. However, by the very nature of the activity, this will be a collaborative effort among all three components of the Consortium.
  6. It would make sense that each production center designs their own data analysis, depending on the assays.

    Yes, it is expected that each production center will have the expertise to undertake analytical methods for their own datasets. The DRACC and DAVs can provide software and engineering expertise for deployment and scaling of such software. There are many details like this that the Consortium is expected to work out.
  7. Will new algorithm development be considered responsive to this FOA?

    Yes, but please justify scientifically.

Last updated: September 22, 2021