Journal Article Details
Journal of Cheminformatics
DrugEx v2: de novo design of drug molecules by Pareto-based multi-objective reinforcement learning in polypharmacology
Xuhan Liu1  Gerard J. P. van Westen1  Adriaan P. IJzerman1  Herman W. T. van Vlijmen2  Michael T. M. Emmerich3  Kai Ye4 
[1] Drug Discovery and Safety, Leiden Academic Centre for Drug Research, Einsteinweg 55, 2333 CC, Leiden, The Netherlands; [2] Janssen Pharmaceutica NV, Turnhoutseweg 30, 2340, Beerse, Belgium; [3] Leiden Institute of Advanced Computer Science, Niels Bohrweg 1, 2333 CA, Leiden, The Netherlands; [4] School of Electronics and Information Engineering, Xi’an Jiaotong University, 28 Xianning W Rd, Xi’an, China
Keywords: Deep learning; Adenosine receptors; Cheminformatics; Reinforcement learning; Multi-objective optimization; Exploration strategy
DOI: 10.1186/s13321-021-00561-9
Source: Springer
【 Abstract 】

In polypharmacology, drugs are required to bind to multiple specific targets, for example to enhance efficacy or to reduce resistance formation. Although deep learning has achieved breakthroughs in de novo design for drug discovery, most applications focus on a single drug target when generating drug-like active molecules. In reality, however, drug molecules often interact with more than one target, which can have desired (polypharmacology) or undesired (toxicity) effects. In a previous study we proposed a method named DrugEx that integrates an exploration strategy into RNN-based reinforcement learning to improve the diversity of the generated molecules. Here, we extend the DrugEx algorithm with multi-objective optimization to generate drug-like molecules towards multiple targets, or towards one specific target while avoiding off-targets (in this study, the two adenosine receptors A1AR and A2AAR and the potassium ion channel hERG). In our model, an RNN serves as the agent and machine learning predictors serve as the environment. Both the agent and the environment are pre-trained in advance and then interact under a reinforcement learning framework. The concept of evolutionary algorithms is merged into our method such that crossover and mutation operations are implemented by the same deep learning model as the agent. During each training loop, the agent generates a batch of SMILES-based molecules. Subsequently, the scores for all objectives provided by the environment are used to construct Pareto ranks of the generated molecules, applying a non-dominated sorting algorithm and a Tanimoto-based crowding distance algorithm computed on chemical fingerprints. We adopted GPU acceleration to speed up the Pareto optimization. The final reward of each molecule is calculated from its Pareto rank with a ranking selection algorithm.
The agent is trained under the guidance of this reward to ensure that, after convergence of the training process, it can generate the desired molecules. Altogether, we demonstrate the generation of compounds with diverse predicted selectivity profiles towards multiple targets, offering the potential of high efficacy and low toxicity.
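The Pareto ranking described in the abstract combines two standard ingredients: non-dominated sorting of multi-objective scores, and a diversity measure within each front based on Tanimoto distance between fingerprints. The following is a minimal illustrative sketch, not the authors' DrugEx implementation: the function names are hypothetical, toy bit-sets stand in for chemical fingerprints (the paper uses real chemical fingerprints, e.g. via RDKit), and the crowding measure here is a simple mean Tanimoto distance within a front.

```python
# Illustrative sketch (NOT the DrugEx code) of Pareto ranking with
# non-dominated sorting plus a Tanimoto-based crowding measure.
# Fingerprints are toy Python bit-sets standing in for chemical fingerprints.

def dominates(a, b):
    """a dominates b if a >= b in every objective and > in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated_sort(scores):
    """Return Pareto fronts as lists of indices (front 0 = non-dominated set)."""
    n = len(scores)
    dominated_by = [set() for _ in range(n)]  # i -> indices that i dominates
    dom_count = [0] * n                       # how many molecules dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(scores[i], scores[j]):
                dominated_by[i].add(j)
            elif dominates(scores[j], scores[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]  # drop the trailing empty front

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprint bit-sets."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 1.0

def crowding(front, fps):
    """Mean Tanimoto distance of each molecule to the rest of its front;
    a larger value means a more isolated (chemically distinct) molecule,
    which is preferred among molecules of the same Pareto rank."""
    out = {}
    for i in front:
        others = [j for j in front if j != i]
        out[i] = (sum(1.0 - tanimoto(fps[i], fps[j]) for j in others) / len(others)
                  if others else 1.0)
    return out
```

For example, with scores `[(0.9, 0.1), (0.8, 0.8), (0.2, 0.9), (0.5, 0.5), (0.1, 0.1)]`, the first three molecules are mutually non-dominated (front 0), `(0.5, 0.5)` is dominated only by `(0.8, 0.8)` (front 1), and `(0.1, 0.1)` is dominated by all others (front 2). A rank-based reward (the abstract's "ranking selection") would then assign higher reward to lower-numbered fronts, with crowding used to break ties within a front.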

【 License 】

CC BY   

【 Preview 】
Attachment list
Files Size Format View
RO202203040216755ZK.pdf 6503KB PDF download
Document metrics
Downloads: 6    Views: 3