ZHANG Junyu

Assistant Professor

Department of Industrial Systems Engineering and Management

Research Interests:

  • Saddle point problems: algorithm design and analysis, complexity lower bounds

  • Stochastic optimization: variance reduction methods, sample complexity analysis

  • Optimization theory of reinforcement learning

  • Composite optimization: prox-linear methods, stochastic composite optimization

  • Riemannian optimization

Selected Journal Papers:

  • Zhang, J., Hong, M. and Zhang, S., 2021. On lower iteration complexity bounds for the saddle point problems. Mathematical Programming. Accepted, to appear. [paper]

  • Zhang, J., & Xiao, L., 2021. Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization. Mathematical Programming. Accepted, to appear soon. [paper]

  • Davis, D., Drusvyatskiy, D., Xiao, L. and Zhang, J., 2021. From Low Probability to High Confidence in Stochastic Convex Optimization. Journal of Machine Learning Research, 22(49). [paper]

  • Zhang, J. and Xiao, L., 2021. MultiLevel Composite Stochastic Optimization via Nested Variance Reduction. SIAM Journal on Optimization, 31(2), pp.1131-1157. [paper]

  • Zhang, J., Ma, S. and Zhang, S., 2020. Primal-dual optimization algorithms over Riemannian manifolds: an iteration complexity analysis. Mathematical Programming, 184(1), pp.445-490. [paper]

  • Zhang, J., Liu, H., Wen, Z. and Zhang, S., 2018. A sparse completely positive relaxation of the modularity maximization for community detection. SIAM Journal on Scientific Computing, 40(5), pp.A3091-A3120. [paper]

Selected Conference Proceedings:

  • Zhang, J., Ni, C., Yu, Z., Szepesvari, C. and Wang, M., 2021. On the convergence and sample efficiency of variance-reduced policy gradient method. Advances in Neural Information Processing Systems (NeurIPS). To appear. [paper] (selected as spotlight paper)

  • Zhang, J., Hong, M., Wang, M. and Zhang, S., 2021, March. Generalization bounds for stochastic saddle point problems. International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 568-576. PMLR. [paper]

  • Zhang, J., Koppel, A., Bedi, A.S., Szepesvari, C. and Wang, M., 2020. Variational policy gradient method for reinforcement learning with general utilities. Advances in Neural Information Processing Systems (NeurIPS), 33, pp.4572-4583. [paper] (selected as spotlight paper)

  • Zhang, J. and Xiao, L., 2019, May. A composite randomized incremental gradient method. International Conference on Machine Learning (ICML), pp. 7454-7462. PMLR. [paper]

  • Zhang, J. and Xiao, L., 2019. A stochastic composite gradient method with incremental variance reduction. Advances in Neural Information Processing Systems (NeurIPS), 32, pp.9078-9088. [paper]