Methodological considerations
Over the past decades, the field of computational neuroscience has developed a number of tools that allow better insight into the genetics of the human brain. These include packages to process and analyze imaging data, such as FSL1,2, FreeSurfer, and FieldTrip, as well as tools that allow for genetic analysis beyond simple GWASs, such as LDSC3, FUMA4, pleioFDR5, MOSTest6, MiXeR7, and several others. Each of these tools was developed with a different background and for different aims. Combining them to answer one or more questions within a particular field can therefore cause some friction.
For example, the main objective of MOSTest is to boost discovery of novel individual SNPs associated with a trait, while MiXeR aims to quantify the proportion of SNPs influencing a given trait as a measure of polygenicity in its univariate stream, or to quantify the proportion of SNPs shared between two traits in its cross-trait analysis. MOSTest therefore focuses mainly on individual SNPs, whereas MiXeR focuses on patterns across the genome. These differences mean that the outputs of these tools cannot necessarily be linked directly. The different aims and their corresponding assumptions about the input data need to be carefully considered before combining tools in a given analysis.
In the work described here, careful choices were made to include or exclude certain tools to ensure that the assumptions about the input data for each tool matched the objectives for which the tool was developed. Fortunately, helper tools (such as the python_convert toolbox) can wrangle data to match a particular tool’s expectations, but fundamental assumptions (such as a focus on individual SNPs versus genome-wide patterns) can hardly be adjusted for within the input data.
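As a minimal sketch of the kind of reformatting such helper tools perform (the column and file names below are hypothetical examples, not the actual python_convert interface), summary statistics from one tool can be renamed and reordered into the layout another tool expects:

```python
# Sketch of reformatting GWAS summary statistics to a downstream tool's
# expected column layout. Column and file names are hypothetical.
import pandas as pd

# Load summary statistics produced by one tool (whitespace-delimited here).
sumstats = pd.read_csv("sumstats_raw.txt", sep=r"\s+")

# Rename columns to the header names the downstream tool expects.
sumstats = sumstats.rename(columns={
    "rsid": "SNP",
    "chrom": "CHR",
    "pos": "BP",
    "effect_allele": "A1",
    "other_allele": "A2",
    "pval": "P",
})

# Keep only the required columns, in the required order, and write tab-delimited.
sumstats = sumstats[["SNP", "CHR", "BP", "A1", "A2", "P"]]
sumstats.to_csv("sumstats_formatted.txt", sep="\t", index=False)
```

Reformatting of this kind only changes the presentation of the data; it cannot change what the numbers mean, which is why fundamental mismatches in assumptions remain.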
The distribution of input data on mental health-related symptoms in a healthy population is heavily skewed and biased towards milder symptoms associated with sub-clinical presentations. This also holds for the component loadings in the first article, as shown in the Supplementary Material of Roelfs et al.8. In particular, IC2 (reflecting items related to psychosis) showed a very narrow distribution with long tails. This was not entirely unexpected, since our analysis removed individuals with a diagnosed psychiatric disorder and psychotic symptoms are usually serious enough to warrant a clinical diagnosis. As a result, the number of individuals in our dataset who reported experiencing psychosis symptoms was very low, reducing the variance in that component. Conversely, IC3 (capturing items related to anxiety, depression, and mental distress) had a much more balanced distribution with fewer outliers. This is likely because a larger proportion of the population experiences some depression, anxiety, or distress in their lifetime, and these individuals do not differ markedly from the rest of the population, reducing their effect on the range of values for this component.
More importantly, since we used a data-driven decomposition method rather than a pre-defined framework for clustering symptoms and symptom domains, we ended up with a set of components that best described the data but that may not best serve the constructs we were interested in studying. This is best exemplified by IC1 (capturing items related to a history of sexual abuse), IC6 (capturing items related to traumatic experiences), and IC12 (capturing items related to emotional abuse). Even with expert input from clinicians, we did not anticipate that these symptoms would warrant separate components. Although these items are clearly very relevant for mental health, issues arise when deriving a genetic fingerprint for symptom profiles such as these, and one needs to be very cautious not to overinterpret their genetic associations and correlations. The (significant) genetic correlations of these profiles do not suggest that individuals are genetically predisposed to be subjected to these traumatic experiences, but rather imply that confounders and second-order effects are driving the associations and correlations.
One of the main constraints that repeatedly limited the types of analyses available for our dataset was the absence of effect directions in the MOSTest output. While a standard univariate GWAS returns an effect size for each SNP with a direction denoting whether that SNP is positively or negatively associated with a trait, the multivariate MOSTest GWAS mainly boosts discovery of SNPs associated with a trait, regardless of direction. This is because a SNP can be positively associated with some of the univariate traits that make up the composite measure and negatively associated with others; rather than cancelling each other out, these effects are aggregated into an effect size estimate without a sign. This causes a number of limitations for follow-up analyses.
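To illustrate why the sign is lost, consider a toy example that uses a simplified sum-of-squared-z-scores aggregation as a stand-in for the multivariate statistic (this is not the exact MOSTest statistic, which additionally accounts for correlations between the phenotypes):

```python
# Toy illustration of why a multivariate test statistic carries no sign.
# A simplified chi-square-style aggregation is used as a stand-in; the actual
# MOSTest statistic also accounts for correlations between phenotypes.
import numpy as np

# Hypothetical univariate z-scores of one SNP across the traits in the
# composite measure: positive for some traits, negative for others.
z = np.array([2.1, -1.9, 2.4, -2.2])

naive_sum = z.sum()          # ~0.4: opposite signs nearly cancel out
chi2_like = (z ** 2).sum()   # ~18.6: every association contributes, but the
                             # resulting statistic has no direction
print(naive_sum, chi2_like)
```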
Because of these missing effect directions in the MOSTest output, a number of analyses that may have yielded interesting results could not be run, due to the mismatch between the MOSTest output and the input assumptions of those tools. Active development to add effect directions to MOSTest is ongoing, but this will come with its own drawbacks, such as diminished power. This is discussed in further detail below.
Working within the limitations of the available tools is a common challenge for any researcher in a multidisciplinary field. One major advantage is that the computational neuroscience field, and particularly the subfield focusing on imaging genetics, is (at the moment) rather small, so the threshold for reaching out to developers and asking for input on how best to meet a certain assumption of a toolbox is low. It becomes even simpler when the maintainers of several broadly used tools are already co-authors on the manuscripts. This made the process of linking tools considerably easier.
Another obstacle to linking analyses of different origin was the gene mapping of the conjunctional FDR results. The FUMA toolbox expects p-values, but the conjunctional FDR functionality returns q-values, which are closely related to p-values but cannot be mapped onto them one-to-one. A manual adjustment was therefore necessary to ensure that the pleioFDR output files matched the FUMA assumptions for input data. These modifications were discussed by collaborators with the creators of FUMA, who confirmed that the results would be valid with the proposed adjustments, but this nonetheless meant that the tool was used for analyses it was not originally built for.
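In practice, the adjustment amounts to reshaping the conjunctional FDR output into the column layout FUMA reads. A sketch of what such a step could look like is shown below; the file and column names are hypothetical, and the conjFDR q-values are placed in the column FUMA parses as p-values purely so the file is accepted, not because they become true p-values:

```python
# Sketch of reshaping conjunctional FDR output into a FUMA-style input file.
# File and column names are hypothetical. The conjFDR q-values are written to
# the column FUMA reads as p-values only so the file parses; they are not
# converted into true p-values, and interpretation must account for that.
import pandas as pd

conjfdr = pd.read_csv("conj_result.csv")  # hypothetical pleioFDR output file

fuma_input = pd.DataFrame({
    "SNP": conjfdr["snp"],
    "CHR": conjfdr["chr"],
    "BP": conjfdr["bp"],
    "P": conjfdr["conjfdr"],  # q-values standing in for the p-value column
})
fuma_input.to_csv("fuma_input.txt", sep="\t", index=False)
```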
All analysis pipelines described in this work revolve around a set of uni- or multivariate GWASs. Since SNP discovery was not the main aim of this body of work, the main insights were derived from subsequent analyses linking the SNPs to genes, and the genes to psychiatric disorders or processes in the brain, or relating the genetic architecture identified by the GWASs to the known genetic structure of psychiatric disorders and traits associated with mental health.
As mentioned, the main tools providing GWAS functionality were PLINK for univariate analyses and MOSTest for multivariate analyses. PLINK is probably the most widely used tool for univariate GWAS, while MOSTest is a relatively novel tool allowing for both univariate and multivariate analyses, the latter being its main novel development. The main drawback of MOSTest is the lack of effect direction in the summary statistics from the multivariate analysis. To obtain a measure of directionality, we could infer the overall effect direction from the effect directions in each of the univariate analyses. This analysis, dubbed the “sign”-test, involves extracting the sign of the effect for a SNP in each univariate measure that comprised the multivariate GWAS and calculating the proportion of signs that are positive or negative. If the vast majority of signs from the univariate analyses are negative, one could infer that the global effect is likely also negative, and vice versa. However, this makes some broad assumptions about the effect on a trait in isolation compared to the aggregated trait in a multivariate analysis. These assumptions are easily violated, and the sign-test is therefore not an officially recommended way to infer effect direction in MOSTest. A more robust test for effect direction in MOSTest is under development, but is not yet ready for use.
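A minimal sketch of this sign-test is shown below, assuming the univariate summary statistics have already been collected into a table of signed per-SNP z-scores, one column per univariate measure (file and column names, as well as the majority thresholds, are hypothetical and purely illustrative):

```python
# Sketch of the "sign"-test described above: for each SNP, compute the fraction
# of univariate effect directions that are positive across the measures that
# entered the multivariate GWAS. File names, column names, and thresholds are
# hypothetical choices for illustration.
import numpy as np
import pandas as pd

# One row per SNP, one column per univariate measure, values are signed
# z-scores (or betas) from the corresponding univariate GWAS.
z = pd.read_csv("univariate_zscores.tsv", sep="\t", index_col="SNP")

prop_positive = (np.sign(z) > 0).mean(axis=1)

# A crude inferred direction: positive if most univariate effects are positive,
# negative if most are negative, undetermined otherwise.
inferred = pd.cut(prop_positive, bins=[0, 0.2, 0.8, 1.0],
                  labels=["negative", "undetermined", "positive"],
                  include_lowest=True)
print(inferred.value_counts())
```

As noted above, a majority of univariate signs does not guarantee that the aggregated multivariate effect points the same way, which is why this heuristic is not an officially recommended substitute for a proper directional statistic.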