Paper | Title | Other Keywords | Page |
---|---|---|---|
MO4IOPK02 | Highly Scalable Numerical Methods for Simulation of Space Charge Dominated Beams | space-charge, damping, plasma, proton | 12 |
|
|||
We are developing highly scalable solvers for space-charge-dominated beams based on both Particle-In-Cell (PIC) and direct Vlasov models. For the PIC model, particles are distributed evenly across processors and the space-charge effect is accounted for by solving Poisson's equation on a finite mesh. Several Poisson solvers have been developed using Fourier, Spectral Element (SEM), and wavelet methods. Domain decomposition (DD) has been used to parallelize these solvers, and all of them have been implemented in the PTRACK code. PTRACK is now widely used for large-scale beam dynamics simulations in linear accelerators. For the Vlasov model, a semi-Lagrangian method and a time-splitting scheme have been employed to solve the Vlasov equation directly in 1P1V and 2P2V phase spaces. 1D and 2D Poisson solvers have been developed with SEM. Similarly, DD has been used to parallelize the Poisson and Vlasov solvers. New efforts on developing Vlasov and Poisson solvers on unstructured meshes will also be reported. |
|||
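As a rough illustration of the particle-mesh step described in the abstract above (deposit macro-particles on a grid, then solve Poisson's equation with a Fourier method), a minimal 2D sketch follows. It is generic NumPy code, not PTRACK; the mesh size, particle count, and nearest-grid-point deposition are arbitrary choices made for brevity.

```python
# Minimal 2D PIC-style space-charge sketch: deposit macro-particles on a
# periodic grid and solve Poisson's equation with FFTs.  Not PTRACK itself.
import numpy as np

def solve_poisson_fft(rho, dx):
    """Solve laplacian(phi) = -rho/eps0 on a periodic grid via FFT."""
    eps0 = 8.854e-12
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    rho_hat = np.fft.fft2(rho)
    phi_hat = np.zeros_like(rho_hat)
    nonzero = k2 > 0
    phi_hat[nonzero] = rho_hat[nonzero] / (eps0 * k2[nonzero])  # zero-mean potential
    return np.real(np.fft.ifft2(phi_hat))

def deposit_ngp(x, y, q, n, dx):
    """Nearest-grid-point charge deposition of macro-particles."""
    rho = np.zeros((n, n))
    ix = (np.floor(x / dx).astype(int)) % n
    iy = (np.floor(y / dx).astype(int)) % n
    np.add.at(rho, (ix, iy), q / dx**2)
    return rho

if __name__ == "__main__":
    n, dx = 64, 1e-3                       # 64x64 mesh, 1 mm spacing (arbitrary)
    rng = np.random.default_rng(0)
    x = rng.normal(n * dx / 2, 5 * dx, 10000) % (n * dx)
    y = rng.normal(n * dx / 2, 5 * dx, 10000) % (n * dx)
    rho = deposit_ngp(x, y, q=1e-15, n=n, dx=dx)
    phi = solve_poisson_fft(rho, dx)
    print("peak potential [V]:", phi.max())
```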
MO4IOPK04 | Overview of (Some) Computational Approaches in Spin Studies | proton, dipole, lattice, focusing | 18 |
|
|||
In the proposed electric dipole moment (EDM) experiment, with an estimated spin coherence time of 1000 s, the spin precession due to an EDM of 10⁻²⁹ e·cm will produce a change in the vertical spin component of approximately 10 μrad during the storage time. Such high sensitivity requires an extremely accurate and reliable simulation of the beam and spin behavior during the storage time. Therefore, several spin-related accelerator programs have been considered and investigated. The paper surveys the computational algorithms of these approaches and provides a comprehensive analysis from multiple perspectives: accuracy, performance, extensibility, and scope of potential applications. |
|||
MO4IOPK05 | An Efficient 3D Space Charge Routine with Self-Adaptive Discretization | space-charge, FEL, electron, cathode | 23 |
|
|||
Precise and fast 3D space-charge calculations for bunches of charged particles are of growing importance in recent accelerator designs. A widespread approach is the particle-mesh method, which computes the potential of a bunch in its rest frame by means of Poisson's equation. Whereas an adaptive discretization of a bunch is often required for efficient space-charge calculations in practice, such a technique is not implemented in many computer codes; for instance, the often-applied FFT Poisson solver allows only an equidistant mesh. An adaptive discretization following the particle density is implemented in the GPT tracking code (General Particle Tracer, Pulsar Physics). The disadvantage of this approach is that jumps in the distribution of particles are not taken into account. In this paper we present a new approach to adaptive discretization which is based on the multigrid technique. The goal is that the error estimator needed for the adaptive distribution of mesh lines can be calculated directly from the multigrid procedure. The algorithm will be investigated for several particle distributions and compared to the adaptive discretization method implemented in GPT. |
|||
MO3IODN01 | Impedance Estimation by Parabolic Partial Differential Equation for Rectangular Taper | impedance, wakefield, synchrotron, vacuum | 27 |
|
|||
Recently, the calculation of wake fields and impedances has become more important. They are usually calculated numerically using a mesh. It will be shown here that a mesh calculation based on the paraxial approximation can be much faster than ordinary methods when the bunch is very short. There are two advantages. One is that the longitudinal mesh size can be chosen independently of the bunch length. The other is that the problem can be solved as an initial-value problem even though the calculation is carried out in the frequency domain. |
|||
MO4IODN02 | Applying an hp-Adaptive Discontinuous Galerkin Scheme to Beam Dynamics Simulations | electromagnetic-fields, electron, space-charge, FEL | 30 |
|
|||
The problem of self-consistent simulations of short relativistic particle bunches in long accelerator structures exhibits a pronounced multi-scale character. The adequate resolution of the THz space charge fields excited by short ultra-relativistic bunches requires mesh spacings in the micrometer range. On the other hand, the discretization of complete accelerator sections using such fine meshes results in a vast number of degrees of freedom. Due to the spatial concentration of the particles and the excited space charge fields, the application of time-adaptive mesh refinement is an emerging idea. We reported on the implementation of time-adaptive mesh refinement for the Finite Integration Technique (FIT)*. Based on this work, we implemented an hp-adaptive discontinuous Galerkin (DG) code. The twofold refinement mechanisms of the hp-adaptive DG method offer maximum modeling freedom. We present details of the h- and p-adaptations for the DG method on Cartesian grids. Special emphasis is put on the stability and efficiency of the adaptation techniques. |
|||
|
|||
MO4IODN03 | Portable High Performance Computing for Microwave Simulation by FDTD/FIT Machines | target, radiation, electromagnetic-fields, electron | 35 |
|
|||
In addition to standard high-performance computing technologies such as supercomputers and grid computing, dedicated computers have been explored as a way to build portable high-performance computing environments in the vicinity of an office PC. Dedicated computers have also been applied to electromagnetic field simulations, mainly as linear-algebra equation solvers for general electromagnetic field analysis and as FDTD solvers for microwave simulations. In this paper, attempts at an FDTD/FIT dedicated computer (FDTD/FIT machine) are introduced*. The basic scheme of the FDTD/FIT method itself is very simple and well suited to implementation as hardware circuits. In addition, it is essential to realize many other functions such as the imposition of boundary conditions, the treatment of non-uniform materials, power input, etc. Moreover, to fully bring out the advantage of a dedicated computer, the computer architecture should be designed to achieve efficient computation of the entire FDTD/FIT scheme, including the boundary-condition setting. In particular, various efforts to minimize memory-access overhead are discussed in this paper. |
|||
|
|||
TU1IOPK01 | Computational Beam Dynamics for a High Intensity Ring: Benchmarking with Experiment in the SNS | kicker, extraction, injection, impedance | 42 |
|
|||
As SNS continues to ramp toward full intensity, we are acquiring a wealth of experimental data. Much effort is being applied to understand the details of the beam accumulation process under a variety of experimental conditions. An important part of this effort is the computational benchmarking of the experimental observations. In order to obtain quantitative agreement between the calculations and the observations, and hence a full understanding of the machine, a great deal of care must be taken to incorporate all the relevant experimental parameters into the calculation. These vary from case to case, depending upon what is being studied. In some of these cases, the benchmarks have been critical in unearthing flaws in the machine and in guiding their mitigation. In this paper we present the results of benchmarks with a variety of experiments, including coupling in beam distributions at low intensities, space charge effects at higher intensities, and a transverse instability driven by the impedance of the ring extraction kickers. |
|||
TU1IOPK02 | Comparison of Different Simulation Codes with UNILAC Measurements for High Beam Currents | DTL, emittance, quadrupole, focusing | 48 |
|
|||
The GSI Universal Linear Accelerator UNILAC can accelerate all ion species from protons to uranium. Hence its DTL section is equipped with electromagnetic quadrupoles allowing for a wide range of field strengths along the section. During the last years, various campaigns on the quality of high-current beams at the DTL exit as a function of the applied transverse focusing have been performed. Measurements were compared with up to four different high-intensity beam dynamics codes. Those comparisons triggered significant improvement of the final beam quality. The codes were used to prepare an ambitious and successful beam experiment on the first observation of a space-charge-driven octupolar resonance in a linear accelerator. |
|||
|
|||
TU1IOPK04 | Benchmarking Different Codes for the High Frequency RF Calculation | cavity, superconducting-cavity, radiation, electromagnetic-fields | 53 |
|
|||
In this paper, we present benchmarking results for state-of-the-art 3D electromagnetic (EM) codes used in designing RF cavities today. These codes include Omega3P [1], VORPAL [2], CST Microwave Studio [3], Ansoft HFSS [4], and ANSYS [5]. Two spherical cavities are selected as the benchmark models. We have compared not only the accuracy of resonant frequencies, but also that of surface EM fields, which are critical for superconducting RF cavities. By removing degenerate modes, we calculate all the resonant modes up to 10 GHz with similar mesh densities, so that the geometry approximation and field interpolation errors related to the wavelength can be observed. |
|||
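The spherical cavities used as benchmark models have analytic eigenfrequencies, which is what makes them useful references. A small sketch of how such reference values can be computed, assuming the standard characteristic equations for a perfectly conducting spherical cavity (TE: j_n(ka) = 0; TM: d/dx[x j_n(x)] = 0 at x = ka); the 10 cm radius is an arbitrary example, not one of the paper's models.

```python
# Analytic reference eigenfrequencies of a PEC spherical cavity of radius A.
# TE modes: j_n(ka) = 0;  TM modes: d/dx [x j_n(x)] = 0 at x = ka.
import numpy as np
from scipy.special import spherical_jn
from scipy.optimize import brentq

C0 = 299792458.0            # speed of light [m/s]
A = 0.10                    # cavity radius [m] (assumed example value)

def te_char(x, n):          # characteristic function for TE_n modes
    return spherical_jn(n, x)

def tm_char(x, n):          # d/dx [x j_n(x)] for TM_n modes
    return spherical_jn(n, x) + x * spherical_jn(n, x, derivative=True)

def first_roots(f, n, xmax=20.0, num=2000):
    """Bracket sign changes of f(x, n) on (0, xmax] and refine with brentq."""
    xs = np.linspace(1e-3, xmax, num)
    vals = f(xs, n)
    roots = []
    for i in range(len(xs) - 1):
        if vals[i] * vals[i + 1] < 0:
            roots.append(brentq(f, xs[i], xs[i + 1], args=(n,)))
    return roots

if __name__ == "__main__":
    for n in (1, 2):
        for label, f in (("TM", tm_char), ("TE", te_char)):
            for x in first_roots(f, n)[:2]:
                print(f"{label}_{n}: ka = {x:.4f}, f = {x * C0 / (2*np.pi*A)/1e9:.4f} GHz")
```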
TU2IOPK02 | Simulation Studies & Code Validation For The Head-Tail Instability With Space Charge | space-charge, wakefield, betatron, impedance | 58 |
|
|||
The head-tail instability represents a potential intensity limitation for bunched beams in the synchrotrons of the FAIR project. Parametric studies with numerical simulations over very long time scales are necessary in order to understand the effects of direct space charge, nonlinear synchrotron oscillations, and image charges, which are all important in the FAIR synchrotrons. Existing analytic approaches either neglect space charge or describe simplified models, which require numerical or experimental validation. For our simulation studies we use two different computer codes, HEADTAIL and PATRIC. In this work we verify the models for wake-field kicks and space-charge effects using the analytic solution for head-tail mode frequencies and growth rates from the barrier airbag model. |
|||
TU4IOPK02 | Novel Methods for Simulating Relativistic Systems Using an Optimal Boosted Frame | electron, laser, plasma, free-electron-laser | 73 |
|
|||
It was shown recently that it may be computationally advantageous to perform computer simulations in a Lorentz boosted frame for a certain class of particle acceleration devices or problems such as: free electron laser, laser-plasma accelerator, and particle beams interacting with electron clouds*. However, even if the computer model relies on a covariant set of equations, it was pointed out that algorithmic difficulties related to discretization errors may have to be overcome in order to take full advantage of the potential speedup**. Further complications arise from the need to transform input and output data between the laboratory frame and the frame of calculation, but can be overcome at low additional computational cost***. We will present the theory behind the speed-up of numerical simulation in a boosted frame, our latest developments of numerical methods, and examples of application to the modeling of the above-cited problems and others if applicable. |
|||
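A back-of-the-envelope sketch of the scale-separation argument behind the boosted-frame speed-up follows. It is only an order-of-magnitude estimate; the achievable speed-up in a real code depends on the discretization issues and input/output transformations mentioned above, and the wavelength and stage length below are assumed example values.

```python
# Rough estimate of the scale-separation argument behind boosted-frame
# simulations (order of magnitude only; not a statement about any specific code).
import math

def boosted_frame_scale_ratio(lambda_lab, length_lab, gamma):
    """Ratio of longest to shortest length scale in a frame boosted by gamma.

    In the boosted frame the plasma column contracts by ~gamma while the
    laser wavelength dilates by ~(1 + beta) * gamma, so the disparity of
    scales (and hence the rough cost of resolving both) drops by
    ~(1 + beta) * gamma**2.
    """
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    lam_boost = lambda_lab * (1.0 + beta) * gamma      # dilated laser wavelength
    len_boost = length_lab / gamma                      # contracted plasma length
    return len_boost / lam_boost

if __name__ == "__main__":
    lam = 0.8e-6          # laser wavelength [m] (assumed)
    L = 1.0               # plasma/stage length [m] (assumed 10 GeV-class stage)
    lab_ratio = L / lam
    for gamma in (1.0, 5.0, 20.0, 100.0):
        r = boosted_frame_scale_ratio(lam, L, gamma)
        print(f"gamma={gamma:6.1f}  scale ratio={r:12.1f}  est. reduction={lab_ratio/r:10.1f}x")
```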
|
|||
TU3IODN03 | Modeling Techniques for Design and Analysis of Superconducting Accelerator Magnets | quadrupole, superconducting-magnet, status, magnet-design | 77 |
|
|||
Superconducting magnets for particle accelerators are complex devices requiring the use of sophisticated modeling techniques to predict their performance. A complete description of the magnet behavior can only be obtained through a multi-physics approach which combines magnetic, mechanical, and electrical-thermal models. This approach is essential in particular for the next generation of magnets, which will likely implement strain-sensitive conductors like Nb3Sn and will handle forces significantly larger than in the present LHC dipoles. The design of high-field superconducting magnets has benefited from the integration between CAD, magnetic, and structural analysis tools, allowing a precise reproduction of the magnet 3D geometry and a detailed analysis of the three-dimensional strain in the superconductor. In addition, electrical and thermal models have made it possible to investigate the quench initiation process and the thermal and stress conditions of the coil during the propagation of a quench. We present in this paper an overview of the integrated design approach and we report on simulation techniques aimed at predicting and reproducing magnet behavior from assembly to quench. |
|||
TU3IODN05 | Transient, Large-Scale 3D Finite Element Simulations of the SIS100 Magnet | dipole, synchrotron, acceleration, ion | 83 |
|
|||
Numerical simulations are frequently used in the design, optimization, and commissioning phase of accelerator components. Strict requirements on the accuracy as well as the complex structure of such devices lead to challenges regarding numerical simulations in 3D. In order to capture all relevant details of the geometry and possibly strongly localized electromagnetic effects, large numerical models are often unavoidable. The use of parallelization strategies in combination with higher-order finite-element methods offers a possibility to handle such large numerical models while maintaining moderate simulation times as well as high accuracy. Using this approach, the magnetic properties of the SIS100 magnets designated to operate within the Facility for Antiproton and Ion Research (FAIR) at the GSI Helmholtzzentrum für Schwerionenforschung GmbH (GSI) in Darmstadt are calculated. Results for eddy-current losses under time-varying operating conditions as well as field-quality considerations are reported. |
|||
TU4IODN01 | A Parallel Hybrid Linear Solver for Accelerator Cavity Design | linear-collider, collider, cavity | 89 |
|
|||
Numerical simulations to design high-energy particle accelerators give rise to large-scale ill-conditioned highly indefinite linear systems of equations that are becoming increasingly difficult to solve using either a direct solver or a preconditioned iterative solver alone. In this paper, we describe our current effort to develop a parallel hybrid linear solver that balances the robustness of a direct solver with the efficiency of a preconditioned iterative solver. We demonstrate that our hybrid solver is more robust and efficient than the existing state-of-the-art software to solve these linear systems on a large number of processors. |
|||
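The abstract does not give the solver's internals, so the following is only a much-simplified stand-in for the general idea of combining a factorization (direct-solver robustness) with a Krylov iteration (iterative-solver efficiency): an ILU-preconditioned GMRES solve of an indefinite shifted Laplacian in SciPy. The real hybrid solver is domain-decomposition based and far more sophisticated.

```python
# Simplified illustration: pair an incomplete factorization with a Krylov
# iteration on an indefinite sparse system.  Not the authors' hybrid solver.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def shifted_laplacian_2d(n, sigma):
    """Sparse 5-point Laplacian on an n x n grid, shifted to be indefinite."""
    lap1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    eye = sp.identity(n)
    lap2d = sp.kron(eye, lap1d) + sp.kron(lap1d, eye)
    return (lap2d - sigma * sp.identity(n * n)).tocsc()

if __name__ == "__main__":
    A = shifted_laplacian_2d(n=60, sigma=0.5)
    b = np.ones(A.shape[0])
    ilu = spla.spilu(A, drop_tol=1e-5, fill_factor=30)    # the "direct" ingredient
    M = spla.LinearOperator(A.shape, ilu.solve)           # used as a preconditioner
    x, info = spla.gmres(A, b, M=M, maxiter=500)          # the "iterative" ingredient
    print("converged:", info == 0, " residual:", np.linalg.norm(b - A @ x))
```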
WE2IOPK01 | Hard- and Software-based Acceleration Techniques for Field Computation | acceleration, linac, status, ion | 93 |
|
|||
Due to the high demand for more realistic graphics rendering in computer games and professional applications, commercial off-the-shelf graphics processing units (GPUs) have increased their functionality over time. Recently, special application programming interfaces (APIs) have made it possible to program these devices for general-purpose computing. This talk will discuss the advantages of this hardware platform for time-domain simulations using the Finite Integration Technique (FIT). Examples will demonstrate typical accelerations over conventional central processing units (CPUs). In addition to these hardware-based accelerations, software-based accelerations are also discussed. A distributed computing scheme can be used to accelerate multiple independent simulation runs. For memory-intensive simulations, the established Message Passing Interface (MPI) protocol enables distribution of one simulation to a compute cluster with distributed memory. Finally, the FIT framework also allows special algorithmic improvements for the treatment of curved shapes using the perfect boundary approximation (PBA), which speeds up simulations. |
|||
WE2IOPK03 | Graphical Processing Unit-Based Particle-In-Cell Simulations | plasma, target, controls, acceleration | 96 |
|
|||
New emerging multi-core technologies can achieve high performance, but algorithms often need to be redesigned to make effective use of these processors. We will describe a new approach to Particle-in-Cell (PIC) codes and discuss its application to Graphical Processing Units (GPUs). We will conclude with lessons learned that can be applied to other problems. Some of these lessons will be familiar to those who have programmed vector processors in the past, others will be new. |
|||
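One lesson commonly cited for GPU-oriented PIC is to keep particles ordered by cell so that charge deposition becomes a local reduction rather than scattered writes. The sketch below illustrates only that data-layout idea, in plain NumPy on a CPU; it is not the authors' GPU implementation.

```python
# CPU/NumPy stand-in for one idea behind GPU-friendly PIC: keep particles
# ordered by cell so charge deposition becomes a collision-free reduction
# (bincount) instead of random scattered writes.  Illustrative only.
import numpy as np

def deposit_sorted(x, q, nx, dx):
    """Linear (cloud-in-cell) charge deposition after sorting particles by cell."""
    cell = np.clip((x / dx).astype(int), 0, nx - 1)
    order = np.argsort(cell, kind="stable")            # particle reordering step
    x, cell = x[order], cell[order]
    frac = x / dx - cell                                # weight given to the right-hand node
    rho = np.bincount(cell, weights=q * (1.0 - frac), minlength=nx + 1) \
        + np.bincount(cell + 1, weights=q * frac, minlength=nx + 1)
    rho[0] += rho[nx]                                   # periodic wrap of the last node
    return rho[:nx] / dx

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    nx, dx = 128, 1.0
    x = rng.uniform(0.0, nx * dx, size=1_000_000)
    rho = deposit_sorted(x, q=1.0, nx=nx, dx=dx)
    print("total charge recovered:", rho.sum() * dx)    # should equal the particle count
```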
WE2IOPK05 | VizSchema - A Standard Approach for Visualization of Computational Accelerator Physics Data | plasma, cavity, laser, acceleration | 101 |
|
|||
Even if common, self-described data formats are used, data organization (e.g., the structure and names of groups, datasets, and attributes) differs between applications. This makes development of uniform visualization tools problematic and comparison of simulation results difficult. VizSchema is an effort to standardize the metadata of the HDF5 format so that the entities needed to visualize the data can be identified and interpreted by visualization tools. This approach allowed us to develop a powerful standard visualization tool, based on VisIt, that handles data of various kinds (fields, particles, meshes) and enables 3D visualization of large-scale data from the COMPASS suite (VORPAL and Synergia) for SRF cavities and laser-plasma acceleration. |
|||
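In spirit, the standardization amounts to agreeing on a small set of HDF5 attributes that mark datasets as meshes, fields, or particles. A sketch with h5py follows; the attribute names are modeled on the VizSchema pattern but should be checked against the actual specification before use.

```python
# Sketch of the general idea: annotate HDF5 datasets with a few agreed-upon
# attributes so a visualization tool can discover meshes and fields without
# code-specific knowledge.  Attribute names are indicative, not normative.
import numpy as np
import h5py

with h5py.File("example_vs.h5", "w") as f:
    # a uniform mesh described purely by metadata
    mesh = f.create_group("mesh")
    mesh.attrs["vsType"] = "mesh"
    mesh.attrs["vsKind"] = "uniform"
    mesh.attrs["vsLowerBounds"] = np.array([0.0, 0.0, 0.0])
    mesh.attrs["vsUpperBounds"] = np.array([0.1, 0.1, 0.3])
    mesh.attrs["vsNumCells"] = np.array([32, 32, 96], dtype=np.int32)

    # a field defined on that mesh
    ez = f.create_dataset("Ez", data=np.zeros((33, 33, 97)))
    ez.attrs["vsType"] = "variable"
    ez.attrs["vsMesh"] = "mesh"

    # a particle dataset (one row per particle: x, y, z, px, py, pz)
    pts = f.create_dataset("electrons", data=np.zeros((1000, 6)))
    pts.attrs["vsType"] = "variableWithMesh"
    pts.attrs["vsNumSpatialDims"] = 3
```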
|
|||
WE3IOPK01 | The Object Oriented Parallel Accelerator Library (OPAL), Design, Implementation and Application | space-charge, scattering, wakefield, gun | 107 |
|
|||
OPAL (Object Oriented Parallel Accelerator Library) is a tool for charged-particle optics calculations in accelerator structures and beam lines, including 3D space charge, short-range wake fields, and 1D coherent synchrotron radiation. Built from first principles as a parallel application, OPAL admits simulations of any scale, from the laptop to the largest High Performance Computing (HPC) clusters available today. Simulations, in particular HPC simulations, form the third pillar of science, complementing theory and experiment. In this paper we present a fast FFT-based direct solver and an iterative solver, namely a solver based on an algebraic multigrid preconditioned conjugate gradient method able to handle exact boundary conditions on complex geometries efficiently. We present timings for runs on up to several thousand cores. The application of OPAL to the PSI-XFEL project as well as to the ongoing high-power cyclotron upgrade will demonstrate OPAL's versatile capabilities. Plans for future developments towards a 3D finite element time domain Maxwell solver for large structures and simulation capabilities for 3D synchrotron radiation will be discussed. |
|||
WE3IOPK02 | Recent Progress and Plans for the Code ELEGANT | storage-ring, lattice, linac, synchrotron | 111 |
|
|||
ELEGANT is an open-source accelerator code that has been under development for approximately two decades. In that time, it has evolved from a graduate student project with a narrow purpose to a general code for the design and modeling of linacs and storage rings. ELEGANT continues to evolve, thanks in no small part to suggestions from users. ELEGANT has seen extensive application to modeling of linacs, particularly for applications related to free-electron lasers and energy recovery linacs. Recent developments have emphasized both linac and storage-ring-related enhancements, along with parallelization. In this paper, we briefly review the features of ELEGANT and its program suite. We then describe some of the recent progress made in the ongoing development of ELEGANT. We also discuss several noteworthy applications and directions for future work. |
|||
WE4IOPK02 | High-Fidelity Injector Modeling with Parallel Finite Element 3D Electromagnetic PIC Code Pic3P | gun, space-charge, cavity, electron | 122 |
|
|||
SLAC's Advanced Computations Department (ACD) has developed the parallel Finite Element 3D electromagnetic code suite ACE3P for modeling of complex accelerator structures. The Particle-In-Cell module Pic3P was designed for simulations of beam-cavity interactions dominated by space charge effects. Pic3P solves the complete set of Maxwell-Lorentz equations self-consistently and includes space-charge, retardation and boundary effects from first principles. In addition to using conformal, unstructured meshes in combination with higher-order Finite Element methods, Pic3P also uses causal moving window techniques and dynamic load balancing for highly efficient use of computational resources. Operating on workstations and on leadership-class supercomputing facilities, Pic3P allows large-scale modeling of photoinjectors with unprecedented accuracy, aiding the design and operation of next-generation accelerator facilities. Applications include the LCLS RF gun and the BNL polarized SRF gun. |
|||
|
|||
WE4IOPK04 | Beam Dynamics In The Low Energy Part Of The Low Emittance Gun (LEG) | solenoid, cavity, emittance, laser | 125 |
|
|||
One possible electron source for the PSI-XFEL is the Low Emittance Gun (LEG), which is currently under development at PSI. It consists of a pulsed DC gun, which operates at 500 keV and has the option of using either a photo cathode or a field emitter array. The gun is followed by a pulsed in-vacuum solenoid and a two-frequency cavity, used not only to accelerate the beam but also to create the highly linear energy correlation required for ballistic bunching. All components are rotationally symmetric, so a full particle-in-cell simulation of the setup using 2 1/2 D MAFIA, including space charge, wake fields, and beam loading effects, shows the baseline performance. The low-emittance beam, which propagates through a large part of the setup at relatively low energies of around 500 keV, is rather sensitive to small perturbations in the fields. We therefore also investigated the effect of mechanical misalignments on the beam quality using the 3D in-house code CAPONE. |
|||
|
|||
WE3IODN01 | The XAL Infrastructure for High Level Control Room Applications | EPICS, controls, lattice, dipole | 131 |
|
|||
XAL is a Java programming framework for building high-level control applications related to accelerator physics. The core of XAL consists of a GUI framework to provide common “look and feel” and functionality for all XAL applications, a hardware representation of the machine for connectivity and control, and a beam simulation model termed the "online model" for model reference and comparison to the hardware operation. The structure, details of implementation, and interaction between these components, auxiliary XAL packages, and applications are discussed. A general overview of applications created for the SNS project and based on XAL is presented. |
|||
WE3IODN03 | Improvement Plans for the RHIC/AGS On-Line Model Environments | controls, lattice, booster, ion | 137 |
|
|||
The on-line models for RHIC and the RHIC pre-injectors (the AGS and the AGS Booster) can be thought of as containing our best collective knowledge of these accelerators. As we improve these on-line models we are building the framework to have a sophisticated model-based controls system. Currently the RHIC on-line model is an integral part of the controls system, providing the interface for tune control, chromaticity control, and non-linear chromaticity control. What we will discuss in this paper is our vision of the future of the on-line model environment for RHIC and the RHIC pre-injectors. Although primarily these on-line models are used as Courant-Snyder parameter calculators using live machine settings, we envision expanding these environments to encompass many other problem domains. We will also discuss the importance of the modeling infrastructure and organization as well as interfacing to controls, power supply, and magnetic measurement infrastructure and organizations. The model engines themselves will be discussed and our own evolution toward incorporating more sophisticated simulation filters, such as PTC and UAL, into the on-line model infrastructure. |
|||
WE4IODN01 | Beam-Beam Simulations for KEKB and Super-B Factories | luminosity, coupling, emittance, impedance | 141 |
|
|||
Recent progress of KEKB and the nano-beam scheme adopted for the KEKB upgrade are discussed. For the present KEKB, the focus is on chromatic x-y coupling, which was the key parameter for improving the luminosity. Beam-beam simulations with weak-strong and strong-strong models for the nano-beam scheme are presented. A weak-strong simulation was done in the presence of the longitudinal microwave instability. Finally, the status of beam-beam simulations on the KEK supercomputers is presented. |
|||
WE4IODN03 | Recent Advances of Beam-Beam Simulation in BEPCII | luminosity, resonance, betatron, background | 147 |
|
|||
The luminosity of BEPCII (the upgrade project of the Beijing electron-positron collider) reached 3.0×10³² cm⁻²s⁻¹ at 1.89 GeV in May 2009. In this paper we compare beam-beam simulation results with the real machine. When the single-bunch current is below 8 mA, the simulation coincides well with the measurements. Some phenomena related to synchro-betatron resonances observed during machine tuning and in simulation are shown. Moving the tune close to the half integer helps us increase the luminosity; however, the detector background increases at the same time. It is believed that the beam-beam dynamic effect results in a drop of the dynamic aperture. We also study the possible luminosity contribution from the crab-waist scheme in BEPCII. |
|||
|
|||
TH1IOPK04 | Developing the Physics Design for NDCX-II, a Unique Pulse-Compressing Ion Accelerator | ion, space-charge, solenoid, target | 157 |
|
|||
The near-term mission of the Heavy Ion Fusion Science Virtual National Laboratory (a collaboration of LBNL, LLNL, and PPPL) is to study "warm dense matter" at ~1 eV heated by ion beams; a longer-term topic is ion-driven target physics for inertial fusion energy. Beam bunch compression factors exceeding 50x have been achieved on the Neutralized Drift Compression Experiment (NDCX) at LBNL, enabling rapid target heating; however, to meet our goals an improved platform, NDCX-II, is required. Using refurbished induction cells from the decommissioned Advanced Test Accelerator at LLNL, NDCX-II will compress a ~500 ns pulse of Li+ ions to ~1 ns while accelerating it to 3-4 MeV (a spatial compression of 100-150x) over ~15 m. Non-relativistic ions exhibit complex dynamics; the beam manipulations in NDCX-II are actually enabled by strong longitudinal space charge forces. We are using analysis, an interactive 1D PIC code (ASP) with optimizing capabilities and a centroid-offset model, and both (r,z) and 3D Warp-code simulations, to develop the NDCX-II accelerator. Both Warp and LSP are used for plasma neutralization studies. This talk describes the methods used and the resulting physics design. |
|||
TH2IOPK01 | Self Field of Sheet Bunch: A Search for Improved Methods | shielding, electron, multipole | 163 |
|
|||
We consider a 2D bunch represented by N simulation particles moving on arbitrary planar orbits. The mean field of the bunch is computed from Maxwell's equations in the lab frame with a smoothed charge/current density, using retarded potentials. The particles are tracked in the beam frame, thus requiring a transformation of densities from the lab to the beam frame. We seek improvements in speed and practicality in two directions: (a) choice of integration variables and quadrature rules for the field calculation; and (b) finding smooth densities from scattered data. For item (a) we compare a singularity-free formula with the retarded time as integration variable, which we used previously, with a formula based on Frenet-Serret coordinates. The latter suggests good approximations in different regions of the retardation distance, for instance a multipole expansion which could save both time and storage. For item (b) we compare various ideas from mathematical statistics and numerical analysis, e.g., quasi-random vs. pseudo-random sampling, Fourier vs. kernel smoothing, etc. Implementations in a parallel code with N up to a billion will be given, for a chicane bunch compressor. |
|||
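For item (b), one of the smoothing options mentioned is kernel smoothing of scattered particle data. A minimal 2D example with SciPy's Gaussian kernel density estimator is sketched below; the sample distribution is synthetic, and the bandwidth selection and quasi-random sampling comparisons from the paper are not reproduced.

```python
# Kernel smoothing of scattered simulation-particle data into a smooth
# density, as one of the options compared under item (b).  Toy data only.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
n_particles = 20000
# scattered (s, x) data standing in for a 2D sheet-bunch distribution
samples = np.vstack([rng.normal(0.0, 1.0e-3, n_particles),    # longitudinal s [m]
                     rng.normal(0.0, 0.2e-3, n_particles)])   # transverse  x [m]

kde = gaussian_kde(samples)                     # Gaussian kernel smoother
s_grid = np.linspace(-4e-3, 4e-3, 201)
density_on_axis = kde(np.vstack([s_grid, np.zeros_like(s_grid)]))
print("peak smoothed density on axis:", density_on_axis.max())
```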
TH2IOPK02 | Simulation of Microwave Instability in LER of KEKB And SuperKEKB | impedance, luminosity, synchrotron, vacuum | 169 |
|
|||
Two methods were investigated to study the microwave instability in the LER of KEKB and SuperKEKB. One is a macroparticle tracking code based on the PIC method; the other solves the Vlasov-Fokker-Planck (VFP) equation directly. First we compare the two methods using a resonator impedance model of the KEKB LER. Then we use the calculated impedance, including CSR, to study the beam instability in the LER of KEKB and SuperKEKB. Convergence properties of the two methods in the presence of numerical noise are discussed. |
|||
|
|||
TH2IOPK04 | Study of Beam-Scattering Effects for a Proposed APS ERL Upgrade | scattering, beam-losses, linac, electron | 173 |
|
|||
Beam scattering effects, including intra-beam scattering (IBS) and Touschek scattering, may become an issue for linac-based 4th-generation light sources, such as X-ray free-electron lasers (FELs) and energy recovery linacs (ERLs), as the electron density inside the bunch is very high. In this paper, we describe simulation tools for modeling beam-scattering effects that were recently developed at the Advanced Photon Source (APS). We also demonstrate their application to a possible ERL-based APS upgrade. The beam loss issue due to the Touschek scattering effect is addressed through momentum aperture optimization. The consequences of IBS for brightness, FEL gain, and other figures of merit are also discussed. Calculations are performed using a particle distribution generated by an optimized high-brightness injector simulation. |
|||
TH3IOPK01 | The Simulation of the Electron Cloud Instability in BEPCII and CSNS/RCS | electron, proton, positron, vacuum | 179 |
|
|||
An Electron Cloud Instability (ECI) may take place in any circular accelerator of positively charged particles, especially in positron and proton storage rings. This instability has been confirmed to be a serious restriction on beam stability. The physical model of the formation of the electron cloud in various kinds of magnetic fields is introduced in the first section of the paper. The transverse and longitudinal wake-field models used to represent the interaction between the electron cloud and the beam are introduced in another section. As examples, the electron cloud densities and the beam instabilities caused by the accumulated electrons are simulated for the positron storage ring of BEPCII and the RCS of CSNS. |
|||
|
|||
TH3IOPK04 | Using Geant4-based Tools to Simulate a Proton Extraction and Transfer Line | cyclotron, proton, dipole, extraction | 190 |
|
|||
The simulation toolkit GEANT4 has been used to create high-level tools for specific user groups, such as SPENVIS in space physics and GATE in medical imaging. In Accelerator Physics, comparable efforts are being devoted to develop general-purpose programs for simulating beamlines and accelerators, allowing access to Geant4's facilities for 3D geometry, tracking, and interactions in matter without the need for specialised programming techniques. In this study we investigate the use of two high-level tools based on Geant4, G4BEAMLINE and BDSIM, to model a 65-meter beam line supplying protons from the TRIUMF cyclotron to the ISAC RIB facility. We outline the rather different approaches to defining the beamline geometry (including cyclotron extraction foil and exit region) in each code. Their diagnostic and visualisation features are also compared. Due to its ability to model some important aspects such as rectangular dipoles and magnetic fringe fields, G4beamline was utilized for a series of simulations presented here, investigating the distribution of losses in the beamline, the role of scattering in the cyclotron extraction foil, and the sensitivity of losses to tuning parameters. |
|||
|
|||
TH4IOPK03 | Aperture and Beam-Tube Models for Accelerator Magnets | coupling, dipole, impedance, sextupole | 202 |
|
|||
The modeling of eddy-current phenomena in superconducting accelerator magnets is challenging because of the large differences in geometrical dimensions (skin depth vs. magnet size) and time constants (ramping time vs. relaxation time). The paper addresses modeling issues such as the ferromagnetic saturation of the iron yoke, the eddy-current losses in the yoke end parts, the eddy-current losses in the beam tube, and possible eddy-current losses in the windings. Heavy saturation, small skin depths, and small time constants make simulations of this kind challenging. The simulation approach is used in combination with an optimization procedure involving both continuous and integer-valued parameters. |
|||
TH1IODN01 | A Fast and Universal Vlasov Solver for Beam Dynamics Simulations in 3D | space-charge, multipole, cavity, dipole | 208 |
|
|||
The Vlasov equation describes the evolution of a particle density under the effects of electromagnetic fields. It is derived from the fact that the volume occupied by a given number of particles in the 6D phase space remains constant when only long-range interactions, such as Coulomb forces, are relevant and other particle collisions can be neglected. Because this is the case for typical charged-particle beams in accelerators, the Vlasov equation can be used to describe their evolution along the whole beam line. This equation is a partial differential equation in 6D and is thus very expensive to solve with classical methods. A more efficient approach consists in representing the particle distribution function by a discrete set of characteristic moments. For each moment a time-evolution equation can be stated. These ordinary differential equations can then be evaluated efficiently by means of time-integration methods if all considered forces and a proper initial condition are known. The beam dynamics simulation tool V-Code implemented at TEMF utilizes this approach. In this paper the numerical model, main features, and designated use cases of the V-Code will be presented. |
|||
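A toy example of the moment approach: for a purely linear focusing force the second-order moments already form a small closed ODE system that can be integrated instead of tracking particles. This is only meant to make the idea concrete; the V-Code itself evolves many more moments under realistic fields.

```python
# Moment-method toy: for x' = p, p' = -K x the second moments obey a closed
# 3-equation ODE system.  Compare the moment ODE against direct tracking.
import numpy as np
from scipy.integrate import solve_ivp

K = 4.0  # focusing strength (toy units)

def moment_rhs(t, m):
    """m = (<x^2>, <x p>, <p^2>) for x' = p, p' = -K x."""
    xx, xp, pp = m
    return [2.0 * xp, pp - K * xx, -2.0 * K * xp]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = rng.normal(0.0, 1.0, 100000)
    p = rng.normal(0.0, 0.5, 100000)
    m0 = [np.mean(x * x), np.mean(x * p), np.mean(p * p)]

    sol = solve_ivp(moment_rhs, (0.0, 2.0), m0, rtol=1e-10, atol=1e-12)

    # reference: track the particles themselves through the same linear motion
    w, t = np.sqrt(K), 2.0
    xt = x * np.cos(w * t) + (p / w) * np.sin(w * t)
    print("moment ODE  <x^2> =", sol.y[0, -1])
    print("tracking    <x^2> =", np.mean(xt * xt))
```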
TH1IODN04 | Discretizing Transient Current Densities in the Maxwell Equations | target, laser, vacuum, electron | 212 |
|
|||
The Finite Difference Time Domain (FDTD) method and the related Time Domain Finite Element Method (TDFEM) are routinely used for simulation of RF and microwave structures. In traditional FDTD and TDFEM algorithms the electric field E is associated with the mesh edges, and the magnetic flux density B is associated with mesh faces. It can be shown that when using this traditional discretization, projection of an arbitrary current density J(x,t) onto the computational mesh can be problematic. We developed and tested a new discretization that uses electric flux density D and magnetic field H as the fundamental quantities, with the D-field on mesh faces and the H-field on mesh edges. The electric current density J is associated with mesh faces, and charge is associated with mesh elements. When combined with the Particle In Cell (PIC) approach of representing J(x,t) by discrete macroparticles that transport through the mesh, the resulting algorithm conserves charge in the discrete sense, exactly, independent of the mesh resolution h. This new algorithm has been applied to unstructured mesh simulations of charged particle transport in laser target chambers with great success. |
|||
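The charge-conservation property can be illustrated in one dimension: with charge on cells and current on the faces between them, depositing exactly the charge carried across each face makes the discrete continuity equation hold to machine precision, independent of the mesh spacing. The sketch below is a nearest-grid-point toy model, not the paper's 3D unstructured D/H formulation.

```python
# 1D toy check of the discrete continuity equation:
# (rho_new - rho_old)/dt + div J = 0, cell by cell, exactly.
import numpy as np

def cell_charge(x, q, nc, dx):
    idx = np.clip((x / dx).astype(int), 0, nc - 1)
    return np.bincount(idx, weights=np.full_like(x, q), minlength=nc)

def face_current(x_old, x_new, q, nc, dx, dt):
    """Current on faces 0..nc (face f sits between cell f-1 and cell f)."""
    j = np.zeros(nc + 1)
    i_old = np.clip((x_old / dx).astype(int), 0, nc - 1)
    i_new = np.clip((x_new / dx).astype(int), 0, nc - 1)
    for a, b in zip(i_old, i_new):
        step = 1 if b > a else -1
        for f in range(a + (step > 0), b + (step > 0), step):
            j[f] += step * q / dt          # charge crossing face f per unit time
    return j

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    nc, dx, dt, q = 32, 1.0, 0.5, 1.0
    x_old = rng.uniform(0, nc * dx, 5000)
    x_new = np.clip(x_old + rng.uniform(-0.9, 0.9, x_old.size) * dx, 0, nc * dx - 1e-9)

    rho_old = cell_charge(x_old, q, nc, dx)
    rho_new = cell_charge(x_new, q, nc, dx)
    j = face_current(x_old, x_new, q, nc, dx, dt)
    div_j = j[1:] - j[:-1]                  # net current out of each cell
    residual = (rho_new - rho_old) / dt + div_j
    print("max |continuity residual|:", np.abs(residual).max())
```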
TH2IODN01 | Simulation and Commissioning of J-PARC Linac Using the IMPACT Code | DTL, linac, emittance, beam-losses | 218 |
|
|||
The beam commissioning of J-PARC linac has been performed since November 2006, and we are now in a transitional phase from an initial commissioning stage to a stage where we seek more stable operation with higher beam power. In the beam commissioning, the modeling is important to understand the underlying physics of the experimental data obtained by beam monitors. As the J-PARC is a high-intensity proton accelerator facility, the beam is subject to strong space-charge effects. In addition, mitigation of the beam loss is critically important to avoid excess radio-activation of the accelerator components. Therefore, an accurate Particle-In-Cell simulation code plays an essential role in the beam commissioning, especially in mapping out our course in the beam commissioning planning. For this purpose, we have been using IMPACT code in J-PARC linac. In this paper, we review the simulation studies performed for J-PARC linac trying to understand the experimental results in the course of the beam commissioning efforts. |
|||
TH2IODN04 | Physics Problem Study For A 100 MeV, 500 Microamp H- Beam Compact Cyclotron | cyclotron, space-charge, extraction, beam-losses | 224 |
|
|||
A high-intensity compact cyclotron, CYCIAE-100, has been selected as the driving accelerator for the Beijing Radioactive Ion-beam Facility (BRIF). At present the physics design of this machine has been completed. This paper gives a brief review of the general design of this machine. For a further intensity upgrade of this compact machine in the future, it is crucial to carry out an in-depth study of the self-field effects, including the contributions of single-bunch space charge and the interaction of many radially neighboring bunches. In order to include the neighboring-bunch effects fully self-consistently in compact cyclotrons, a new physical model is established for the first time and implemented in the parallel PIC code OPAL-CYCL. The impact of single-bunch space charge and neighboring bunches on the beam dynamics in CYCIAE-100 at different intensity levels is then studied in simulations using the new model. |
|||
TH3IODN02 | Space Charge Simulations for ISIS | space-charge, injection, resonance, synchrotron | 229 |
|
|||
The ISIS Facility at the Rutherford Appleton Laboratory in the UK produces intense neutron and muon beams for condensed matter research. It is based on a 50 Hz proton synchrotron which accelerates ~3×10¹³ protons per pulse (ppp) from 70 to 800 MeV, corresponding to beam powers of ~0.2 MW. Studies are under way for major upgrades in the megawatt regime. Underpinning this programme of operations and upgrades is a study of the high-intensity effects that impose limitations on beam power. The behaviour of the beam in the 50 Hz rapid cycling synchrotron (RCS) is largely characterised by high space-charge levels and the effects of fast ramping acceleration. High-intensity effects are of particular importance as they drive beam loss, but are not fully understood, with only limited analytical models available. This paper reviews several methods by which these effects are explored numerically on ISIS, and compares them where possible with experimental or analytical results. In particular we outline development of a new space-charge code, Set, which is designed to address key issues on ISIS and similar RCS machines. |
|||
TH4IODN04 | The Study on the Space Charge Effects of RCS/CSNS | emittance, space-charge, lattice, injection | 239 |
|
|||
The China Spallation Neutron Source (CSNS) is now in the design stage. Many simulations have been done for the RCS/CSNS, including the space charge induced emittance growth and beam loss, the combined effects of space charge and magnet errors, the dependence of space charge effects on the lattice structures, etc. |
|||
|
|||
FR1IOPK01 | Optimization Algorithms for Accelerator Physics Problems | linac, emittance, ion, ion-source | 245 |
|
|||
Optimization tools are needed in every step of an accelerator project, from design to commissioning to operations. However, different phases have different optimization needs that may require different optimization algorithms. For example, a global optimizer is more appropriate in the design phase to map the whole parameter space, whereas a local optimizer with a shorter path to solution is more adequate during operations to find the next best operating point. Different optimization algorithms are being used in accelerator physics; we mention in particular standard algorithms based on least-squares minimization and evolutionary algorithms such as genetic optimizers. Over the years, we have developed several optimization tools for beam tracking codes to include 3D fields and space-charge effects. Including particle tracking in the optimization process calls for parallel computing. We will review the different algorithms and their implementation and present a few highlight applications. |
|||
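A toy contrast between the two families mentioned above, using SciPy: a global evolutionary optimizer (differential evolution) to map a deliberately multimodal parameter space, followed by a local least-squares refinement. Real accelerator objectives would wrap a tracking code; the test function here is invented for illustration.

```python
# Global evolutionary search vs. local least-squares refinement on a
# multimodal toy objective (stand-in for a design vs. operations workflow).
import numpy as np
from scipy.optimize import differential_evolution, least_squares

def residuals(k):
    """Pretend 'measured minus target' residuals with several local minima."""
    k1, k2 = k
    return np.array([np.sin(3 * k1) + 0.1 * (k1 - 1.0),
                     np.cos(2 * k2) + 0.1 * (k2 + 0.5)])

def scalar_objective(k):
    r = residuals(k)
    return float(r @ r)

if __name__ == "__main__":
    # design phase: global search over the whole parameter range
    glob = differential_evolution(scalar_objective, bounds=[(-5, 5), (-5, 5)], seed=0)
    # operations phase: quick local refinement from a nearby starting point
    loc = least_squares(residuals, x0=glob.x + 0.2)
    print("global :", glob.x, glob.fun)
    print("local  :", loc.x, 2 * loc.cost)   # cost = 0.5 * sum(residuals**2)
```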
|
|||
THPSC003 | RadTrack: A User-Friendly, Modular Code to Calculate the Emission Processes from High-Brightness Electron Beams | radiation, diagnostics, lattice, controls | 259 |
|
|||
The development of the code RadTrack is based on the need to model accelerator system diagnostics. The code is built using a modular approach with a strong emphasis on intuitive user interface. The operations of trajectory calculation and radiation field solving are segregated; currently the tracking is handled by Q-Tracker and the field solving is executed by a modified version of QUINDI. Additionally, the RadTrack user interface allows for seamless start-to-end stitching of I/O exchange between certain codes, and the visualization canvas reinforces user directives in a near-real-time environment. |
|||
THPSC006 | Particle-In-Cell Simulation of Electron-Helium Plasma in Cyclotron Gas Stopper | ion, electron, space-charge, extraction | 266 |
|
|||
The cyclotron gas stopper is a newly proposed device to stop energetic ions in high-pressure helium gas and to transport them in a singly charged state with a gas jet to a vacuum region. Ions are injected into a region with a vertical magnetic field, where they first pass a degrader and then move through the helium gas. Due to multiple scattering, the radioactive ions lose their energy, and the process is accompanied by ionization of the helium. Externally applied voltages remove electrons and singly charged helium ions from the box. Above a certain incoming particle rate, the amount of ionized charge becomes so large that it cannot be removed completely. As a result, a neutralized plasma accumulates in the center of the box and new incoming particles cannot be extracted from the field-shielded area. The present study focuses on a detailed understanding of space-charge effects in the central ion-extraction region. Particle-in-cell simulations of the electron-helium plasma are based on self-consistent particle tracking in a field obtained from the solution of Poisson's equation for particles interacting via Coulomb forces. The paper analyzes the process and estimates the maximum possible incoming particle rate. |
|||
THPSC013 | Design of 10 GeV Laser Wakefield Accelerator Stages with Shaped Laser Modes | laser, plasma, electron, focusing | 281 |
|
|||
Laser-plasma-generated wakefields sustain accelerating gradients a thousand times higher than conventional accelerators, allowing acceleration of electron beams to high energy over short distances. Recently, experiments have demonstrated the production of high-quality electron bunches at 1 GeV within only a few centimeters. We present simulations, with the VORPAL framework, of the next generation of experiments, likely to use externally injected beams and accelerate them in a meter-long 10 GeV laser-plasma accelerator stage, which will operate in the quasi-linear regime where the acceleration of electrons and positrons is nearly symmetric. We will show that by using scaling of the physical parameters it is possible to perform fully consistent particle-in-cell simulations at a reasonable cost. These simulations are used to design efficient stages. In particular, we will show that we can use higher-order laser modes to tailor the focusing forces, which play an important role in determining the beam quality. This makes it possible to increase the matched electron beam radius and hence the total charge in the bunch while preserving the low bunch emittance required for applications. |
|||
THPSC019 | COSY Extensions for Beam-Material Interactions | target, ion, heavy-ion, emittance | 292 |
|
|||
While COSY INFINITY provides powerful DA methods for the simulation of fragment-separator beam dynamics, the master version of COSY does not currently take into account beam-material interactions. These interactions are key for accurately simulating the dynamics of heavy-ion fragmentation and fission. In order to model the interaction with materials such as the target or absorber, substantial code development was needed. Four auxiliary codes were implemented in COSY for the simulation of beam-material interactions. These include EPAX, which returns the cross sections of isotopes produced by fragmentation, and MCNPX, which provides the cross sections of isotopes produced by the fission and fragmentation of a ²³⁸U beam. ATIMA is implemented to calculate energy loss and energy and angular straggling. GLOBAL returns the charge state. The extended version can be run in map mode or hybrid map-Monte Carlo mode, providing an integrated beam dynamics-nuclear processes design optimization and simulation framework that is efficient and accurate. The code, its applications, and plans for large-scale computational runs for optimization of the separation purity of rare isotopes at FRIB will be presented. |
|||
THPSC020 | Optimizing SRF Gun Cavity Profiles in a Genetic Algorithm Framework | cavity, gun, emittance, cathode | 296 |
|
|||
Automation of DC photoinjector designs using a genetic algorithm (GA) based optimization is an accepted practice in accelerator physics. Allowing the gun cavity field profile shape to be varied can extend the utility of this optimization methodology to superconducting and normal conducting radio frequency (SRF/RF) gun based injectors. Finding optimal field and cavity geometry configurations can provide guidance for cavity design choices and verify existing designs. We have considered two approaches for varying the electric field profile. The first is to determine the optimal field profile shape that should be used independent of the cavity geometry, and the other is to vary the geometry of the gun cavity structure to produce an optimal field profile. The first method can provide a theoretical optimal and can illuminate where possible gains can be made in field shaping. The second method can produce more realistically achievable designs that can be compared to existing designs. In this paper, we discuss the design and implementation for these two methods for generating field profiles for SRF/RF guns in a GA based injector optimization scheme and provide preliminary results. |
|||
THPSC021 | Computational Models for Micro-Channel Plate Simulations | electron, cathode, ion, feedback | 300 |
|
|||
Many measurements in particle and accelerator physics are limited by time resolution. This includes particle identification via time-of-flight in major experiments like CDF at Fermilab and ATLAS and CMS at the LHC. Large-scale systems could be significantly improved by large-area photo-detectors. A new method of making MCPs has been invented that promises to yield better resolution and to be considerably less expensive than current techniques. Two different models for MCP simulations are suggested. The semi-analytical approach is a powerful tool for the design of static image amplifiers. Monte Carlo simulations can be successfully used for large-area photo-detectors in the micron and picosecond resolution range. Both approaches were implemented in the codes MCPS and MCS. The results of computer modeling are presented. References: 1. V. Ivanov, Z. Insepov, Pico-Second Workshop VII, The Development of Large-Area Pico-second Photo-Devices, Feb. 26-28, 2009, ANL. 2. V. Ivanov, The Code "Micro Channel Plate Simulator", User's Guide, Muons, Inc., 2009. |
|||
THPSC022 | Recent Improvement of Tracking Code BBSIMC | electron, dynamic-aperture, proton, luminosity | 304 |
|
|||
The beam-beam simulation code BBSIMC is an incoherent multiparticle tracking code for modeling the nonlinear effects arising from beam-beam interactions and their compensation using an electromagnetic lens. It implements models for short-range transverse and longitudinal wakefields, dipole noise to mimic emittance growth from gas scattering, beam transfer functions, and wire compensation. In this paper, we report on recent improvements to BBSIMC, including a beam-beam compensation model using a low-energy electron beam and an interpolation scheme for the beam-beam forces. Some applications are presented for the Relativistic Heavy Ion Collider (RHIC) electron lens. |
|||
THPSC026 | RF-Kick Caused by the Couplers in the ILC Acceleration Structure | HOM, cavity, linac, emittance | 311 |
|
|||
In this paper, results are presented for the calculation of the transverse wake and the RF kick from the power and HOM couplers of the ILC acceleration structure. The RF kick was calculated with the HFSS code, while the wake was calculated with GdfidL. The calculation precision and convergence for both cases are discussed. The beam emittance dilution caused by the couplers is calculated for the main linac and the bunch compressor of the ILC. |
|||
THPSC028 | Computation of a Two Variable Wake Field Induced by an Electron Cloud | electron, wakefield, single-bunch, space-charge | 314 |
|
|||
A single-bunch instability caused by an electron cloud has been studied using analytical and semi-analytical methods based on the wake field. The wake field in these cases was computed in the classical sense as the excited electromagnetic field that transversely deflects the parts of the bunch trailing a given transverse offset in the leading part of the same bunch. The transverse wake force in this case depends only on the longitudinal distance between the leading part of the bunch producing the wake force and the trailing parts of the bunch feeling it. However, during the passage of the bunch through the electron cloud, the density of the electron cloud near the beam axis changes rapidly, which does not allow the single-variable approximation for the wake field. In this paper, pursuing the idea of K. Ohmi, we numerically compute the wake force as a two-variable function of the position of the leading part of the bunch and the position of the bunch parts trailing the leading offset. |
|||
THPSC031 | PteqHI Development and Code Comparing | rfq, space-charge, multipole, quadrupole | 322 |
|
|||
For the development of high-energy, high-duty-cycle RFQs, accurate particle dynamics simulation tools are important for optimizing designs, especially in high-current applications. To describe the external fields in RFQs, the Poisson equation has to be solved taking the boundary conditions into account. In PteqHI this is now done using a finite difference method on a grid. This method will be described, and simulation results will be compared with those of different RFQ particle dynamics codes. |
|||
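A generic illustration of the step described above (finite-difference solution of Poisson's equation on a grid with fixed boundary values), using a plain 2D five-point stencil and a sparse direct solve; this is not the PteqHI implementation or the RFQ vane geometry.

```python
# Generic 2D finite-difference Poisson solve with homogeneous Dirichlet
# boundaries: -laplacian(phi) = rho/eps0, five-point stencil, sparse solve.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_poisson_dirichlet(rho, dx, eps0=8.854e-12):
    """Solve -laplacian(phi) = rho/eps0 with phi = 0 on the boundary."""
    n = rho.shape[0]                      # interior points per side
    lap1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / dx**2
    eye = sp.identity(n)
    A = sp.kron(eye, lap1d) + sp.kron(lap1d, eye)       # 5-point stencil
    phi = spla.spsolve(A.tocsr(), rho.ravel() / eps0)
    return phi.reshape(n, n)

if __name__ == "__main__":
    n, dx = 63, 1e-3
    rho = np.zeros((n, n))
    rho[n // 2, n // 2] = 1e-9 / dx**2    # point-like charge in the middle
    phi = solve_poisson_dirichlet(rho, dx)
    print("potential at the charge location [V]:", phi[n // 2, n // 2])
```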
THPSC035 | Tracy# | lattice, controls, emittance, dynamic-aperture | 326 |
|
|||
Tracy is an accelerator modeling and simulation code originally developed at LBNL in Pascal two decades ago*. Tracy evolved to Tracy2** which served as the basis for several derivative codes at other synchrotron light sources, including PSI, SSRL and Soleil. In most of these cases, the accelerator physics library was extracted and translated in C. At the ALS the library was re-written in C++ (Goemon***) in an object-oriented manner. Later this version was converted to C# with some effort spent on optimizing its performance****. Tracy# is the latest C# version upgraded to take advantage of the new features of the .NET Framework 3.5 and 4.0. It efficiently uses the modern language features of the C# and the standardized libraries of the .NET Framework for database, XML and networking. It also works with other .NET languages, such as IronPython and F# for interactive scripting. Although it is developed on Windows, MONO makes it portable to other operating systems including Linux. |
|||
THPSC037 | Possibility of Round Beam Formation in RIBF Cyclotrons | ion, cyclotron, space-charge, heavy-ion | 333 |
|
|||
Since 1997 the RIKEN Nishina Center has been constructing a next-generation exotic beam facility, the RI Beam Factory (RIBF), based on a powerful heavy-ion driver accelerator. Its accelerator complex was successfully commissioned at the end of 2006 and started supplying heavy-ion beams in 2007. The four ring cyclotrons (RRC, fRC, IRC and SRC) connected in series accelerate the heavy-ion beams up to 400 MeV/u for lighter ions such as argon and 345 MeV/u for heavier ions such as uranium. Intensity upgrade plans are under way, including the construction of a new 28 GHz superconducting ECR ion source. The new ECR source will take all the succeeding accelerators and beam transport lines into a space-charge-dominated regime, which must be carefully reconsidered to avoid emittance growth due to space-charge forces. Beam dynamics in the low-energy cyclotron RRC were studied with OPAL-cycl, a flavour of OPAL. The simulation results clearly show vortex motion in the isochronous field, resulting in round beam formation within the first 10 turns after the injection point. The possible increase of beam loss at extraction will also be discussed in this paper. |
|||
THPSC041 | Set Code Development and Space Charge Studies on ISIS | space-charge, beam-losses, synchrotron, closed-orbit | 337 |
|
|||
ISIS is the spallation neutron source at the Rutherford Appleton Laboratory in the UK. Presently, it runs at beam powers of ~0.2 MW, with upgrades in place to supply increased powers for the new Second Target Station. Studies are also under way for major upgrades in the megawatt regime. Underpinning this programme of operations and upgrades is a study of the high-intensity effects that impose the limitations on beam power. Spallation is driven by a 50 Hz rapid cycling synchrotron, characterized by high space charge and fast ramping acceleration. High-intensity effects are of particular importance as they drive beam loss, but are poorly understood analytically. This paper reviews development of the space-charge code Set. |
|||
THPSC047 | Complete RF Design of the HINS RFQ with CST MWS and HFSS | rfq, linac, quadrupole, radio-frequency | 340 |
|
|||
Similar to many other linear accelerators, the High Intensity Neutrino Source (HINS) requires an RFQ for initial acceleration and formation of the bunched beam structure. The RFQ design includes two main tasks: a) the beam dynamics design, resulting in a vane-tip modulation table for machining, and b) the resonator electromagnetic design, resulting in the final dimensions of the resonator. The focus of this paper is on the second task, including simulation of high-power operation of the RFQ. We report complete and detailed RF modeling of the HINS RFQ resonator using the simulation codes CST Microwave Studio (MWS) and Ansoft High Frequency Structure Simulator (HFSS). All details of the resonator, such as the input and output radial matchers and the end cut-backs, have been precisely determined. For the first time, a full-size RFQ model with modulated vane tips, the power couplers, and all tuners installed has been built, and a complete simulation of the RFQ tuning has been performed. Finally, some aspects of high-power operation of the RFQ have been investigated. Comparison of the simulation results with experimental measurements demonstrated excellent agreement. |
|||
THPSC049 | H5PartRoot - A Visualization And Post-Processing Tool For Accelerator Simulations | emittance, feedback, extraction, collider | 343 |
|
|||
Modern particle tracking codes with their parallel processing capabilities generate data files of the order of 100 Gigabytes. Thus they make very high demands on file formats and post-processing software. H5PartROOT is a versatile and powerful tool addressing this issue. Based on ROOT, CERN's object-oriented data analysis framework developed for the requirements of the LHC era, and the HDF5 hierarchical data format, supplemented by an accelerator-specific interface called H5Part, H5PartROOT combines the statistical and graphical capabilities of ROOT with the versatility and performance of the HDF5 technology suite to meet the needs of the accelerator community. Providing the user with both a graphical user interface (data browser) and a shared library to be used in an interactive or batch ROOT session, H5PartROOT passes on the full power of ROOT without presupposing any knowledge about the intricacies of either ROOT or C++. |
|||
|
|||
THPSC050 | Parallel SDDS: A Scientific High-Performance I/O Interface | photon, HOM, controls, cavity | 347 |
|
|||
Use of SDDS, the Self-Describing Data Sets file protocol and toolkit, has been a great benefit to development of several accelerator simulation codes. However, the serial nature of SDDS was found to be a bottleneck for SDDS-compliant simulation programs such as parallel elegant. A parallel version of SDDS would be expected to yield significant dividends for runs involving large numbers of simulation particles. In this paper, we present a parallel interface for reading and writing SDDS files. This interface is derived from serial SDDS with minimal changes, but defines semantics for parallel access and is tailored for high performance. The underlying parallel IO is built on MPI-IO. The performance of parallel SDDS and parallel HDF5 are studied and compared. Our tests indicate better scalability of parallel SDDS compared to HDF5. We see significant I/O performance improvement with this parallel SDDS interface. |
|||
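The underlying MPI-IO pattern, independent of the SDDS layout, is that each rank computes its offset and writes its block with a collective call. A minimal mpi4py sketch follows; the file name, block sizes, and bare binary layout are illustrative only and unrelated to the actual parallel-SDDS format.

```python
# Minimal mpi4py sketch of the MPI-IO pattern: each rank writes its block of
# float64 "particle" data at a precomputed byte offset with a collective call.
# Run with e.g.:  mpiexec -n 4 python write_blocks.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n_local = 1000 + 100 * rank                    # deliberately uneven block sizes
local = np.full(n_local, float(rank))          # this rank's data block

offset_elems = comm.exscan(n_local) or 0       # exclusive prefix sum (None -> 0 on rank 0)
total = comm.allreduce(n_local)                # total element count (collective on all ranks)

fh = MPI.File.Open(comm, "particles.bin", MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at_all(offset_elems * local.itemsize, local)   # collective write at byte offset
fh.Close()

if rank == 0:
    print("wrote", total, "float64 values to particles.bin")
```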
THPSC052 | The Python Shell for the ORBIT Code | lattice, space-charge, laser, status | 351 |
|
|||
The development of a Python driving shell for the ORBIT simulation code is presented. The original ORBIT code uses the Super Code shell to organize accelerator-related simulations; it is outdated, unsupported, and an obstacle to future code development. The necessity of replacing the old shell language and the consequences are discussed. A set of modules that are currently in the core of the pyORBIT code, together with its extensions, is presented. They include particle containers, parsers for MAD and SAD lattice files, a Python wrapper for MPI libraries, space-charge calculators, TEAPOT trackers, and a laser-stripping extension module. |
|||
THPSC054 | Recent Progress on Parallel ELEGANT | dynamic-aperture, linac, storage-ring, damping | 355 |
|
|||
The electron accelerator simulation software elegant is being parallelized in a multi-year effort. Recent developments include parallelization of input/output (I/O), frequency map analysis, and position-dependent momentum aperture determination. Parallel frequency map and momentum aperture analysis provide rapid turnaround for two important determinants of storage ring performance. Recent development of parallel Self-Describing Data Sets file (SDDS) I/O based on MPI-IO made it possible for parallel elegant (Pelegant) to take advantage of parallel I/O. Compared with previous versions of Pelegant with serial I/O, the new version not only enhances the I/O throughput with a good scalability, but also provides a feasible way to run simulations with a very large number of particles (e.g., 1 billion particles) by eliminating the memory bottleneck on the master with serial I/O. Another benefit of using parallel I/O is reducing the communication overhead significantly for the tracking of diagnostic optical elements, where the particle information has to be gathered to the master for serial I/O. |
|||
THPSC056 | Beam Fields in an Integrated Cavity, Coupler, and Window Configuration | cavity, storage-ring, resonance, factory | 359 |
|
|||
In a multi-bunch, high-current storage ring, beam-generated fields couple strongly into the RF cavity coupler structure when the beam arrival times are in resonance with the cavity fields. In this study the integrated effect of beam fields over several thousand RF periods is simulated for the complete cavity, coupler, window, and waveguide system of the PEP-II B-factory storage ring collider. We show that the beam-generated fields at frequencies corresponding to several bunch spacings give rise to high field strengths near the ceramic window and could limit the performance of future high-current storage rings such as PEP-X or Super B-factories. |
|||
THPSC057 | BPM Breakdown Potential in the PEP-II B-factory Storage Ring Collider | vacuum, factory, storage-ring, impedance | 363 |
|
|||
High current B-Factory BPM designs incorporate a button type electrode which introduces a small gap between the button and the beam chamber. For achievable currents and bunch lengths, simulations indicate that potentials can be induced in this gap which are comparable to the breakdown voltage. This study characterizes beam induced voltages in the existing PEP-II storage ring collider BPM as a function of bunch length and beam current. |
|||
THPSC059 | Array Based Truncated Power Series Package | | 371 |
|
|||
I present a new package for fast Truncated Power Series (TPS) calculations with no limit on the order or number of variables. This package has been used by PTC/FPP and has been integrated into MAD-X. |
|||
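For a single variable the array-based idea is easy to sketch: a truncated power series is its coefficient array, and multiplication is a convolution cut at the truncation order. The example below is only that one-variable toy; the package itself handles an arbitrary number of variables, which requires multi-index bookkeeping not shown here.

```python
# One-variable truncated power series as a coefficient array; multiplication
# is a convolution truncated at ORDER.  Check: exp(x) * exp(-x) = 1 + O(x^(ORDER+1)).
import numpy as np
from math import factorial

ORDER = 8   # highest retained power

def tps_mul(a, b):
    """Product of two truncated power series (coefficient arrays)."""
    return np.convolve(a, b)[:ORDER + 1]

def tps_exp_of_x(sign=+1.0):
    """Coefficients of exp(sign * x) truncated at ORDER."""
    return np.array([sign**n / factorial(n) for n in range(ORDER + 1)])

if __name__ == "__main__":
    product = tps_mul(tps_exp_of_x(+1.0), tps_exp_of_x(-1.0))
    print(np.round(product, 12))    # expected: [1, 0, 0, ..., 0]
```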
THPSC061 | Molecular Dynamics Simulation of Crystalline Beams Extracted from a Storage Ring | extraction, emittance, lattice, ion | 374 |
|
|||
It is well-known that a charged-particle beam is Coulomb crystallized in the low-temperature limit. The feasibility of beam crystallization has been raised by the recent progress in beam cooling techniques and in understanding of the behavior of crystalline beams. To go a step further, we explore the dynamic behaviors of crystalline ion beams extracted from a storage ring, employing the molecular dynamics simulation technique. The effect of an extraction device and the following transport line on various crystalline beams has been investigated for extraction and transport of crystalline beams without collapse of the ordered structure. |