Enhancing the Protein Tertiary Structure Prediction by Multiple Sequence Alignment Generation
Le Zhang, S. Sun, Yu Li, Jiayang Chen, Tao Shen