Academic Exchange

Variational deep neural networks for a class of inverse problems – models, algorithms and applications

Date: 2022-11-11

Title: Variational deep neural networks for a class of inverse problems – models, algorithms and applications (short course)

Speaker: Prof. Yunmei Chen (陈韵梅)

Affiliation: University of Florida

Time: 9:30 on November 8, November 11, and November 15

Tencent Meeting ID: 233-993-541

Abstract:

Interpretability and generalizability of deep neural network learning methods are the main concerns and limitations when solving inverse problems in real-world applications. Despite numerous empirical successes in recent years, deep-learning-based methods are generally difficult to interpret, prone to overfitting, lack convergence guarantees, and can be extremely data demanding. In this short course I would like to share our recent work addressing those challenges.

In the first talk, I will present our exact and inexact learned descent algorithms (LDAs), which induce efficient network architectures for solving a class of inverse problems. These LDAs incorporate a residual learning architecture into descent algorithms for the nonconvex, nonsmooth optimization problems arising from the inverse problems. We show that LDAs yield highly interpretable neural network architectures and retain convergence guarantees, while achieving improved efficiency and accuracy in sparse natural image, MRI, and low-dose CT image reconstruction problems.
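As a rough illustration of the unrolling idea described above, the sketch below unfolds a fixed number of descent steps for a problem of the form min_x ½||Ax − b||² + r_θ(x), with the gradient of the learned regularizer replaced by a small residual CNN. The class names (ResBlock, LDANet), the number of phases, and the learned per-phase step sizes are illustrative assumptions, not details taken from the talk.

```python
# Minimal PyTorch sketch of an unrolled "learned descent" network (illustrative only).
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Small residual CNN standing in for the gradient of a learned regularizer r_theta."""
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)  # interpreted as grad r_theta(x)

class LDANet(nn.Module):
    """Unrolled descent: each phase performs one gradient-type update."""
    def __init__(self, n_phases=10):
        super().__init__()
        self.phases = nn.ModuleList([ResBlock() for _ in range(n_phases)])
        self.step = nn.Parameter(torch.full((n_phases,), 0.1))  # learned step sizes

    def forward(self, b, A, At):
        x = At(b)                                      # initial estimate A^T b
        for k, reg in enumerate(self.phases):
            grad_f = At(A(x) - b)                      # data-fidelity gradient A^T(Ax - b)
            x = x - self.step[k] * (grad_f + reg(x))   # descent step with learned regularizer term
        return x
```

Here A and At are callables for the forward operator and its adjoint (e.g., a subsampled Fourier transform for MRI); training would compare the network output against ground-truth images with a standard reconstruction loss.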

In the second talk, I will present our generalizable MRI reconstruction method trained on diverse datasets, which tackles the task-specific and extremely data-demanding nature of deep-learning-based methods. The proposed variational model and the LDA-induced network presented in the first talk can adaptively learn a regularizer that is able to encode both task-invariant and task-specific features. The network is trained by a bilevel optimization algorithm to prevent overfitting and improve generalizability. A series of experimental results on heterogeneous MRI data sets indicates that the proposed method generalizes well to reconstruction problems whose under-sampling rates and trajectories are not present during training.
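A minimal sketch of how such bilevel training might be organized, assuming the hypothetical LDANet above: the lower level fits most of the parameters on a training split, while the upper level updates the learned step sizes (standing in here for the task-invariant hyperparameters) on a held-out split. This is a one-step alternating approximation, not the exact algorithm from the talk.

```python
# Schematic one-step bilevel update (illustrative only).
import torch
import torch.nn.functional as F

def bilevel_step(net, train_batch, val_batch, A, At, lr_lower=1e-4, lr_upper=1e-5):
    b_tr, x_tr = train_batch   # training measurements and ground-truth images
    b_va, x_va = val_batch     # held-out measurements and ground-truth images

    # Lower level: update everything except the step sizes on the training loss.
    lower_params = [p for name, p in net.named_parameters() if name != "step"]
    lower_opt = torch.optim.SGD(lower_params, lr=lr_lower)
    loss_tr = F.mse_loss(net(b_tr, A, At), x_tr)
    lower_opt.zero_grad()
    loss_tr.backward()
    lower_opt.step()

    # Upper level: update the step sizes on the held-out validation loss.
    upper_opt = torch.optim.SGD([net.step], lr=lr_upper)
    loss_va = F.mse_loss(net(b_va, A, At), x_va)
    upper_opt.zero_grad()
    loss_va.backward()
    upper_opt.step()

    return loss_tr.item(), loss_va.item()
```

Alternating the two levels over batches drawn from heterogeneous tasks is what discourages the shared parameters from overfitting to any single sampling pattern.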

In the last talk, I will discuss how to use multi-source/multi-domain complementary information to improve the performance of deep neural networks. Several applications of multi-source/multi-domain learning to image processing will be presented. I will also present a novel (block) coordinate descent (alternating minimization) algorithm for solving the nonconvex, nonsmooth variational multi-source/multi-domain learning model, together with the generation of a network whose architecture exactly follows the proposed algorithm. Our preliminary results on sparse-view CT reconstruction and joint multimodal MR image reconstruction and synthesis indicate the effectiveness of the proposed approach.
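To make the alternating-minimization idea concrete, below is a schematic block-coordinate loop for a generic two-source model min over (x1, x2, z) of F1(x1, z; b1) + F2(x2, z; b2) + r(z), where z carries the shared cross-domain information. The objective, variable names, and simple gradient-step block updates are placeholders, not the exact multi-source model or updates from the talk.

```python
# Schematic block-coordinate (alternating minimization) loop (illustrative only).
import torch

def alternating_minimization(b1, b2, F1, F2, reg, n_iter=50, lr=0.1):
    """Cycle through the blocks, taking one gradient step per block while the others are fixed."""
    x1 = torch.zeros_like(b1, requires_grad=True)
    x2 = torch.zeros_like(b2, requires_grad=True)
    z  = torch.zeros_like(b1, requires_grad=True)

    for _ in range(n_iter):
        blocks = (
            (x1, lambda: F1(x1, z, b1)),
            (x2, lambda: F2(x2, z, b2)),
            (z,  lambda: F1(x1, z, b1) + F2(x2, z, b2) + reg(z)),
        )
        for var, objective in blocks:
            loss = objective()
            grad, = torch.autograd.grad(loss, var)
            with torch.no_grad():
                var -= lr * grad        # one gradient step on this block
    return x1.detach(), x2.detach(), z.detach()
```

Unrolling a fixed number of such block updates, as in the first talk, is one way a network architecture can be made to follow the algorithm step by step.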

Speaker biography:

Yunmei Chen is a Distinguished Professor Emeritus and doctoral advisor at the University of Florida. Her research spans the interdisciplinary areas of mathematics, image processing, and machine learning, covering the development of mathematical models and numerical optimization methods for medical image analysis as well as in-depth study of the underlying mathematical theory. She has received the Third Prize of the China State Natural Science Award and the First Prize of the Ministry of Education Award for Science and Technology Progress, holds 9 international invention patents, has led more than 30 national-level research projects, and has published more than 200 papers in journals such as Inventiones Mathematicae and SIAM Journal on Imaging Sciences.


