DOI: 10.1002/alz.074788 ISSN: 1552-5260

Brain imaging endophenotypes by unsupervised deep learning

Degui Zhi

Abstract

Background

Understanding the genetic architecture of brain structure and Alzheimer’s disease is challenging, partly due to the difficulty of designing robust, unbiased descriptors of the brain.

Method

We present approaches to derive robust brain imaging phenotypes using unsupervised deep representation learning. Training a 3‐D convolutional autoencoder model with reconstruction loss, we derived a vector representation (termed endophenotype) that captures rich morphological information of the brain. Further, we developed a perturbation‐based decoder interpretation approach that can highlight brain regions that are most relevant to individual endophenotypes.
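The perturbation-based interpretation described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: a tiny linear decoder over a handful of "regions" stands in for the trained 3-D convolutional decoder, and all names, weights, and dimensions are hypothetical.

```python
from typing import List

# Toy linear decoder standing in for the trained 3-D convolutional decoder.
# In the real model, z would be a 128-D endophenotype vector and the output
# a full MRI-sized 3-D volume rather than 4 scalar "regions".
WEIGHTS = [
    [1.0, 0.0, 0.0, 0.5],   # latent dim 0 drives regions 0 and 3
    [0.0, 2.0, 0.0, 0.0],   # latent dim 1 drives region 1 strongly
    [0.0, 0.0, 0.5, 0.5],   # latent dim 2 drives regions 2 and 3
]

def decode(z: List[float]) -> List[float]:
    """Linear decoder: output[r] = sum_j WEIGHTS[j][r] * z[j]."""
    n_regions = len(WEIGHTS[0])
    return [sum(WEIGHTS[j][r] * z[j] for j in range(len(z)))
            for r in range(n_regions)]

def saliency_map(z: List[float], dim: int, eps: float = 1e-3) -> List[float]:
    """Perturb one latent dimension and measure the per-region output change.

    Regions with a large change are the ones most relevant to that
    endophenotype dimension.
    """
    base = decode(z)
    z_pert = list(z)
    z_pert[dim] += eps
    perturbed = decode(z_pert)
    return [abs(p - b) / eps for p, b in zip(perturbed, base)]

z = [0.2, -0.1, 0.7]
print(saliency_map(z, 1))  # region 1 responds most to latent dim 1
```

Perturbing each of the 128 dimensions in turn and overlaying the resulting change maps on the brain volume would highlight, per endophenotype, the regions it encodes.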

Result

Using UK Biobank (UKBB) data, we trained the model on over 6,000 participants’ T1 or T2‐FLAIR (T2) brain MRIs and used it to derive 128 endophenotypes. The endophenotypes have a mean heritability of 0.3, and their GWAS identified 43 independent loci in the held‐out UKBB dataset (discovery n = 22,962; replication n = 12,848), 13 of which have not been previously reported by the UK Biobank Big40 study.

Conclusion

Using UK Biobank data, we show that, compared to traditional brain image‐derived phenotypes (IDPs), our endophenotypes are more heritable and have higher power for genetic discovery.
