AI TIME welcomes every AI enthusiast to join us!
June 9, 7:30-9:00 PM
AI TIME has invited three outstanding speakers to open ICLR Session 6 together with you!
Bilibili livestream
Scan the QR code to follow the official AI TIME Bilibili account and watch the livestream
Link: https://live.bilibili.com/21813994
★ Invited Speakers ★
Minkai Xu: currently a graduate student at the Montreal Institute for Learning Algorithms (Mila). His research interests include probabilistic modeling, learning, and inference (e.g., deep generative models and deep unsupervised learning), as well as their applications in the fundamental natural sciences (e.g., computational chemistry, materials science, and biology). He received his bachelor's degree from Shanghai Jiao Tong University (SJTU) and interned at the ByteDance (TikTok) AI Lab. Minkai has published multiple papers at top machine learning conferences including ICML, ICLR, AAAI, and AAMAS, and has served on the program committees of ICML, NeurIPS, AAAI, and other conferences.
(Homepage: https://minkaixu.com/)
Talk title:
Neural Generative Dynamics for Molecular Conformation Prediction
Abstract:
We study how to generate molecule conformations (i.e., 3D structures) from a molecular graph. Traditional methods, such as molecular dynamics, sample conformations via computationally expensive simulations. Recently, machine learning methods have shown great potential by training on a large collection of conformation data. Challenges arise from the limited model capacity for capturing complex distributions of conformations and the difficulty in modeling long-range dependencies between atoms. Inspired by the recent progress in deep generative models, in this paper, we propose a novel probabilistic framework to generate valid and diverse conformations given a molecular graph. We propose a method combining the advantages of both flow-based and energy-based models, enjoying: (1) a high model capacity to estimate the multimodal conformation distribution; (2) explicitly capturing the complex long-range dependencies between atoms in the observation space. Extensive experiments demonstrate the superior performance of the proposed method on several benchmarks, including conformation generation and distance modeling tasks, with a significant improvement over existing generative models for molecular conformation sampling.
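As a loose illustration of the two-stage idea in the abstract (a high-capacity sampler proposes candidates, then energy-based refinement captures complex dependencies in the observation space), here is a toy NumPy sketch in which both learned models are replaced by placeholders: a simple Gaussian proposal stands in for the flow-based model, and Langevin dynamics on a hand-written 1D double-well energy stands in for the learned energy model. Every function and parameter here is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def toy_energy_grad(x):
    """Gradient of the toy double-well energy E(x) = (x^2 - 1)^2,
    standing in for a learned energy over interatomic distances."""
    return 4.0 * x * (x ** 2 - 1.0)

def sample_two_stage(n, n_steps=500, step=1e-2, seed=0):
    """Stage 1: a cheap proposal sampler (placeholder for the flow model).
    Stage 2: Langevin dynamics refines samples under the energy model."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, size=n)  # flow-like proposal (placeholder)
    for _ in range(n_steps):          # energy-based refinement
        noise = rng.normal(size=n)
        x = x - step * toy_energy_grad(x) + np.sqrt(2.0 * step) * noise
    return x

samples = sample_two_stage(1000)
# Samples concentrate near the two low-energy modes x = -1 and x = +1.
```

The refinement stage is what lets an energy-based component reshape the proposal toward the multimodal target, which is the division of labor the abstract describes.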
Paper title:
Learning Neural Generative Dynamics for Molecular Conformation Generation
Paper link:
https://openreview.net/forum?id=pAbm1qfheGk
Jiaming Song: a fifth-year PhD student in the Computer Science Department at Stanford University, advised by Stefano Ermon. His research focuses on learning and inference in deep probabilistic models, with applications to unsupervised learning, generative models, and reinforcement learning.
(Homepage: http://tsong.me/)
Talk title:
Diffusion Generative Models via Non-Adversarial Deep Training
Abstract:
Generative models based on diffusion processes (diffusion models) are a new focus of deep unsupervised learning. Diffusion models require no adversarial training and have recently surpassed generative adversarial networks (GANs) in image generation quality; however, they generate images far more slowly than GANs. In this talk, I will briefly introduce the principles and limitations of diffusion models, the work we did in this paper to accelerate them, and an outlook on where diffusion models are headed. Empirically, our method speeds up image generation of conventional diffusion models by 20 to 50 times without any additional training; it has also been adopted in OpenAI's latest work (arXiv:2105.05233). The paper is co-authored with Chenlin Meng and Stefano Ermon.
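As a minimal sketch of the deterministic update at the heart of DDIM-style sampling: each step converts the network's noise prediction into an estimate of the clean sample and then jumps directly to a less noisy timestep without injecting fresh noise, which is what permits sampling over a short sub-sequence of timesteps. The NumPy code below is a toy illustration with hand-picked noise levels and a known noise vector in place of a trained network, not the authors' implementation.

```python
import numpy as np

def ddim_step(x_t, eps_pred, alpha_t, alpha_prev):
    """One deterministic DDIM-style update (no fresh noise injected).

    alpha_t / alpha_prev are cumulative signal levels at the current and
    previous timesteps; eps_pred is the network's noise prediction.
    """
    # Predicted clean sample x_0 recovered from the noise estimate.
    x0_pred = (x_t - np.sqrt(1.0 - alpha_t) * eps_pred) / np.sqrt(alpha_t)
    # Re-noise the prediction to the previous (less noisy) level.
    return np.sqrt(alpha_prev) * x0_pred + np.sqrt(1.0 - alpha_prev) * eps_pred

# Sanity check: when eps_pred equals the true noise used to form x_t,
# the update lands exactly on the less-noisy version of x0.
rng = np.random.default_rng(0)
x0 = rng.normal(size=4)
eps = rng.normal(size=4)
alpha_t, alpha_prev = 0.5, 0.8
x_t = np.sqrt(alpha_t) * x0 + np.sqrt(1.0 - alpha_t) * eps
x_prev = ddim_step(x_t, eps, alpha_t, alpha_prev)
```

Because no new noise enters the update, consecutive steps can be collapsed onto a sparse subset of timesteps, which is where the reported 20-50x speed-up comes from.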
Paper title:
Denoising Diffusion Implicit Models
Paper link:
https://arxiv.org/abs/2010.02502
Chenlin Meng: a first-year PhD student at Stanford University advised by Stefano Ermon, with a bachelor's degree also from Stanford. Research interests include machine learning, unsupervised learning, and generative models. (Homepage: https://cs.stanford.edu/~chenlin/)
Talk title:
Improved Autoregressive Modeling with Distribution Smoothing
Abstract:
While autoregressive models excel at image compression, their sample quality is often lacking. Although not realistic, generated images often have high likelihood according to the model, resembling the case of adversarial examples. Inspired by a successful adversarial defense method, we incorporate randomized smoothing into autoregressive generative modeling. We first model a smoothed version of the data distribution, and then reverse the smoothing process to recover the original data distribution. This procedure drastically improves the sample quality of existing autoregressive models on several synthetic and real-world image datasets while obtaining competitive likelihoods on synthetic datasets.
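A toy sketch of the two-stage procedure the abstract describes, under loudly stated assumptions: real data is first convolved with Gaussian noise (producing the smoothed distribution the first model would fit), and the reversal step, which in the paper is itself a learned model, is replaced here by a nearest-mode denoiser on a two-point toy distribution. Names, the noise scale, and the denoiser are illustrative placeholders.

```python
import numpy as np

SIGMA = 0.3  # smoothing noise scale (a hyperparameter in such a setup)

def smooth(x, rng):
    """Stage 1 target: convolve the data distribution with Gaussian noise."""
    return x + SIGMA * rng.normal(size=x.shape)

def denoise(x_noisy, modes):
    """Stage 2 stand-in: map each smoothed sample to the nearest data mode.
    In the paper, this reversal is a learned conditional model."""
    return modes[np.argmin(np.abs(x_noisy[:, None] - modes[None, :]), axis=1)]

rng = np.random.default_rng(0)
modes = np.array([-1.0, 1.0])          # toy discrete data distribution
data = rng.choice(modes, size=1000)
noisy = smooth(data, rng)              # what the first model would fit
recovered = denoise(noisy, modes)
accuracy = np.mean(recovered == data)  # most samples map back to their mode
```

The point of the decomposition is that the smoothed distribution is easier to model well, while the reversal stage restores the sharp structure of the original data.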
Paper title:
Improved Autoregressive Modeling with Distribution Smoothing
Paper link:
https://arxiv.org/abs/2103.15089
After the livestream, we will invite the speakers to answer questions and chat with everyone in a WeChat group. Please add "AI TIME Assistant" (WeChat ID: AITIME_HY) and reply "iclr" to be added to the "ICLR Discussion Group"!
AI TIME WeChat Assistant
Organizers: AI TIME, AMiner
Partners: 智谱·AI, 中国工程院知领直播, 学堂在线, 学术头条, biendata, 数据派, Ever链动
Media partner: 学术头条
AI TIME welcomes submissions from scholars in the AI field; we look forward to your analyses of the discipline's history and frontier technologies. For hot topics, we will invite experts to debate them together. We are also recruiting high-quality contributors on an ongoing basis; a top platform needs top talent like you.
Please send your résumé and other information to yun.he@aminer.cn!
WeChat contact: AITIME_HY
AI TIME is a community founded by a group of young scholars in the Department of Computer Science at Tsinghua University who care about the development of artificial intelligence and share a commitment to intellectual inquiry. It aims to promote the spirit of scientific debate by inviting people from all fields to explore the fundamental questions of AI theory, algorithms, scenarios, and applications, fostering the exchange of ideas and building a hub for knowledge sharing.