
Joint Accelerator Conferences Website (JACoW)

JACoW is a publisher based in Geneva, Switzerland, that publishes the proceedings of accelerator conferences held around the world through an international collaboration of editors.


BiBTeX citation export for TU3B1: Machine Learning Applications for Performance Improvement and Developing Future Storage Ring Light Sources

@unpublished{leemann:fls2023-tu3b1,
  author       = {S.C. Leemann},
  title        = {{Machine Learning Applications for Performance Improvement and Developing Future Storage Ring Light Sources}},
% booktitle    = {Proc. FLS'23},
  booktitle    = {Proc. ICFA Adv. Beam Dyn. Workshop (FLS'23)},
  eventdate    = {2023-08-27/2023-09-01},
  language     = {english},
  intype       = {presented at the},
  series       = {ICFA Advanced Beam Dynamics Workshop},
  number       = {67},
  venue        = {Luzern, Switzerland},
  publisher    = {JACoW Publishing, Geneva, Switzerland},
  month        = {01},
  year         = {2024},
  note         = {presented at FLS'23 in Luzern, Switzerland, unpublished},
  abstract     = {{This presentation will focus on two recent applications of Machine Learning (ML) to storage ring-based synchrotron light sources. The first example highlights improvement of storage ring performance by use of ML to stabilize the electron beam size at the source points against perturbations from insertion device (ID) motion*. The stability of the source size is improved by roughly one order of magnitude through a neural network-based feed-forward that compensates, in a model-independent manner, for ID-induced source size changes before they can occur. In the second example, ML is used to replace many-turn particle tracking in multi-objective genetic algorithms (MOGA) for the design of lattices for demanding future storage rings**. By training neural networks to give accurate predictions of nonlinear lattice properties such as dynamic aperture and momentum aperture, the overall MOGA optimization process can be substantially accelerated. Including overhead from training and iterative retraining, MOGA optimization can be accelerated through ML by up to two orders of magnitude, thereby dropping overall optimization campaign runtime even on large clusters from weeks to just hours.}},
}
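
The second application described in the abstract, replacing many-turn particle tracking with a neural-network surrogate inside a MOGA run, can be illustrated with a minimal sketch. Everything below is an assumption made for illustration only: the four lattice "knobs", the synthetic stand-in for tracking, and the scikit-learn network size are not taken from the presented work, which trains on tracked dynamic- and momentum-aperture data and retrains iteratively during the optimization campaign.

# Minimal sketch (not the author's code): a neural-network surrogate standing in
# for expensive many-turn tracking when ranking candidate lattices.
# All names, knob counts, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def tracked_dynamic_aperture(knobs):
    """Stand-in for many-turn tracking: maps sextupole-like knob settings
    to a scalar dynamic-aperture figure of merit (purely synthetic)."""
    return np.exp(-np.sum(knobs**2, axis=1)) + 0.01 * rng.standard_normal(len(knobs))

# 1) Build a modest training set from "tracked" candidates.
X_train = rng.uniform(-1.0, 1.0, size=(500, 4))   # 4 hypothetical lattice knobs
y_train = tracked_dynamic_aperture(X_train)

# 2) Train the surrogate once up front (iterative retraining is omitted here).
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

# 3) Inside the optimizer, score a large candidate population with the surrogate
#    instead of tracking each one; only the best few would be re-verified by tracking.
candidates = rng.uniform(-1.0, 1.0, size=(10000, 4))
scores = surrogate.predict(candidates)
best = candidates[np.argsort(scores)[-5:]]
print("Top surrogate-ranked candidates:\n", best)

In this sketch the saving comes from step 3: the surrogate scores thousands of candidates in a fraction of a second, so full tracking would be reserved for verifying the handful of survivors, which is the mechanism the abstract credits with the up-to-two-orders-of-magnitude speed-up.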