Keyword: extraction
Paper Title Other Keywords Page
MOPKN007 LHC Dipole Magnet Splice Resistance from SM18 Data Mining dipole, electronics, operation, database 98
 
  • H. Reymond, O.O. Andreassen, C. Charrondière, G. Lehmann Miotto, A. Rijllart, D. Scannicchio
    CERN, Geneva, Switzerland
 
  The splice incident which happened during commissioning of the LHC on the 19th of September 2008 caused damage to several magnets and adjacent equipment. This raised questions not only about how it happened, but also about the state of all the other splices. The inter-magnet splices were studied very soon afterwards with new measurements, but the internal magnet splices were also a concern. At the Chamonix meeting in January 2009, the CERN management decided to create a working group to analyse the provoked-quench data from the magnet acceptance tests and to look for indications of bad splices in the main dipoles. This resulted in a data mining project that took about one year to complete. This presentation describes how the data was stored, extracted and analysed by reusing existing LabVIEW™-based tools. We also present the difficulties encountered and the importance of combining measured data with operator notes in the logbook.
Poster MOPKN007 [5.013 MB]
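
A minimal sketch of the last point of the abstract, combining measured data with operator logbook notes; it is not the paper's LabVIEW tooling. File names, column names and the resistance threshold are illustrative assumptions.

```python
# Attach the most recent operator logbook note to each provoked-quench
# measurement by timestamp, so suspect splices can be reviewed in context.
import pandas as pd

measurements = pd.read_csv("quench_tests.csv", parse_dates=["time"])   # hypothetical export
logbook = pd.read_csv("operator_logbook.csv", parse_dates=["time"])    # hypothetical export

measurements = measurements.sort_values("time")
logbook = logbook.sort_values("time")

# For each measurement, take the latest logbook entry written before it.
combined = pd.merge_asof(measurements, logbook, on="time", direction="backward")

# Flag candidate bad splices for manual review (threshold purely illustrative).
suspect = combined[combined["splice_resistance_nohm"] > 20]
print(suspect[["time", "magnet_id", "splice_resistance_nohm", "note"]])
```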
 
MOPKN009 The CERN Accelerator Measurement Database: On the Road to Federation database, controls, data-management, status 102
 
  • C. Roderick, R. Billen, M. Gourber-Pace, N. Hoibian, M. Peryt
    CERN, Geneva, Switzerland
 
  The Measurement database, acting as short-term central persistence and front-end of the CERN accelerator Logging Service, receives billions of time-series data per day for more than 200,000 signals. A variety of data acquisition systems on hundreds of front-end computers publish source data that eventually end up being logged in the Measurement database. As part of a federated approach to data management, information about source devices is defined in a Configuration database, whilst the signals to be logged are defined in the Measurement database. A mapping, which is often complex and subject to change and extension, is therefore required in order to subscribe to the source devices and write the published data to the corresponding named signals. Since 2005 this mapping was maintained by hand in dozens of XML files edited by multiple persons, resulting in an error-prone configuration. In 2010 the configuration was improved such that it became fully centralized in the Measurement database, significantly reducing the complexity and the number of actors in the process. Furthermore, logging processes immediately pick up modified configurations via JMS-based notifications sent directly from the database, allowing targeted device subscription updates rather than the full process restart that was required previously. This paper describes the architecture and the benefits of the current implementation, as well as the next steps on the road to a fully federated solution.
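
A minimal sketch of the targeted subscription update idea described above: on a configuration-change notification, only the affected device subscriptions are touched instead of restarting the whole logging process. The mapping structure and the callback names are assumptions, not the actual logging-process code, and the JMS transport itself is left out.

```python
# Apply a new device -> signals mapping by diffing it against the current one.
from typing import Dict, Set

def apply_config_update(current: Dict[str, Set[str]],
                        updated: Dict[str, Set[str]],
                        subscribe, unsubscribe) -> None:
    """current/updated map a device name to the set of signals to log for it."""
    for device in updated.keys() - current.keys():
        subscribe(device, updated[device])          # newly configured device
    for device in current.keys() - updated.keys():
        unsubscribe(device)                         # device no longer logged
    for device in updated.keys() & current.keys():
        if updated[device] != current[device]:
            unsubscribe(device)                     # signal list changed:
            subscribe(device, updated[device])      # re-subscribe with the new list

# Usage with stand-in callbacks:
apply_config_update(
    {"RF.KLYSTRON.1": {"voltage"}},
    {"RF.KLYSTRON.1": {"voltage", "current"}, "BPM.7L2": {"position"}},
    subscribe=lambda d, s: print("subscribe", d, sorted(s)),
    unsubscribe=lambda d: print("unsubscribe", d),
)
```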
 
MOPKS013 Beam Spill Structure Feedback Test in HIRFL-CSR feedback, controls, power-supply, FPGA 186
 
  • R.S. Mao, P. Li, L.Z. Ma, J.X. Wu, J.W. Xia, J.C. Yang, Y.J. Yuan, T.C. Zhao, Z.Z. Zhou
    IMP, Lanzhou, People's Republic of China
 
  The slow-extraction beam from HIRFL-CSR is used for nuclear physics experiments and heavy-ion therapy. A 50 Hz ripple and its harmonics are observed in the beam spill. To improve the spill structure, a first control system, consisting of a fast Q-magnet and an FPGA-based feedback device, was developed and installed in 2010, and spill structure feedback tests have been started. The commissioning results with the spill feedback system are presented in this paper.
Poster MOPKS013 [0.268 MB]
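
As an illustration only of how 50 Hz ripple and its harmonics can be identified in a digitised spill-intensity signal, the sketch below runs an FFT on a synthetic signal; the sampling rate, amplitudes and noise level are assumptions, not HIRFL-CSR data.

```python
# Detect 50 Hz ripple and harmonics in a (synthetic) spill signal via an FFT.
import numpy as np

fs = 10_000                        # sampling rate [Hz], assumed
t = np.arange(0, 1.0, 1 / fs)      # 1 s of spill
spill = (1.0
         + 0.2 * np.sin(2 * np.pi * 50 * t)     # 50 Hz ripple
         + 0.1 * np.sin(2 * np.pi * 150 * t)    # 3rd harmonic
         + 0.02 * np.random.randn(t.size))      # noise

spectrum = np.abs(np.fft.rfft(spill - spill.mean())) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f in (50, 100, 150, 200):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:>3} Hz component: {spectrum[idx]:.3f}")
```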
 
WEPMS015 NSLS-II Booster Timing System injection, booster, timing, storage-ring 1003
 
  • P.B. Cheblakov, S.E. Karnaev
    BINP SB RAS, Novosibirsk, Russia
  • J.H. De Long
    BNL, Upton, Long Island, New York, USA
 
  The NSLS-II light source includes the main storage ring with beam lines and an injection part consisting of a 200 MeV linac, a 3 GeV booster synchrotron and two transport lines. The booster timing system is a part of the NSLS-II timing system, which is based on the Event Generator (EVG) and Event Receivers (EVRs) from µResearch Finland. The booster timing is based on external events coming from the NSLS-II EVG: "Pre-Injection", "Injection", "Pre-Extraction" and "Extraction". These events are referenced to the specified bunch of the Storage Ring and correspond to the first bunch of the booster. The EVRs provide two time scales for triggering both the injection and the extraction pulsed devices. The first scale triggers the pulsed septa and the bump magnets in the millisecond range and uses the TTL outputs of the EVR; the second scale triggers the kickers in the microsecond range and uses the CML outputs. The EVRs also provide the timing of the booster cycle operation, events for cycle-to-cycle updates of pulsed and ramping parameters, and the synchronization of the booster beam instrumentation. This paper describes the final design of the booster timing system. The timing system functional and block diagrams are presented.
Poster WEPMS015 [0.799 MB]
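
A schematic sketch of the timing configuration described above. The event names and the TTL/CML split come from the abstract; the delays, channel numbers and data layout are illustrative assumptions, not the deployed EVR configuration.

```python
# Map incoming EVG events to EVR output triggers on the two time scales:
# TTL outputs for millisecond-scale septa/bumps, CML outputs for microsecond-scale kickers.
TIMING_TABLE = {
    "Pre-Injection": [
        {"output": "TTL", "channel": 0, "device": "injection septum", "delay_ms": 5.0},
        {"output": "TTL", "channel": 1, "device": "injection bump magnets", "delay_ms": 7.5},
    ],
    "Injection": [
        {"output": "CML", "channel": 0, "device": "injection kicker", "delay_us": 2.0},
    ],
    "Pre-Extraction": [
        {"output": "TTL", "channel": 2, "device": "extraction septum", "delay_ms": 4.0},
    ],
    "Extraction": [
        {"output": "CML", "channel": 1, "device": "extraction kicker", "delay_us": 1.5},
    ],
}

def triggers_for(event: str):
    """Return the list of triggers armed by an incoming EVG event."""
    return TIMING_TABLE.get(event, [])

for trig in triggers_for("Extraction"):
    print(trig)
```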
 
WEPMS020 NSLS-II Booster Power Supplies Control booster, controls, operation, injection 1018
 
  • P.B. Cheblakov, S.E. Karnaev, S.S. Serednyakov
    BINP SB RAS, Novosibirsk, Russia
  • W. Louie, Y. Tian
    BNL, Upton, Long Island, New York, USA
 
  The NSLS-II booster Power Supplies (PSs) [1] are divided into two groups: ramping PSs, which provide passage of the beam during the booster ramp from 200 MeV up to 3 GeV over a 300 ms interval, and pulsed PSs, which provide beam injection from the linac and extraction to the Storage Ring. A special set of devices was developed at BNL for the control of the NSLS-II magnet system PSs: the Power Supply Controller (PSC) and the Power Supply Interface (PSI). The PSI has one or two precision 18-bit DACs, nine ADC channels for each DAC and digital inputs/outputs. It is capable of detecting the status-change sequence of the digital inputs with 10 ns resolution. The PSI is placed close to the current regulators and is connected to the PSC via a 50 Mbps fibre-optic data link. The PSC communicates with an EPICS IOC through a 100 Mbps Ethernet port. The main functions of the IOC include ramp curve upload, ADC waveform data download, and control of various process variables. The 256 Mb DDR2 memory on the PSC provides storage for up to 16 ramping tables for both DACs and a 20 second waveform recorder for all the ADC channels. The 100 Mbps Ethernet port enables real-time display of 4 ADC waveforms. This paper describes the NSLS-II booster PS control project. Characteristic features of the ramping magnet control and of the pulsed magnet control in the double-injection mode of operation are considered. First results of the control at the PS test stands are presented.
[1] Y. Tian, W. Louie, J. Ricciardelli, L.R. Dalesio, G. Ganetis, "Power Supply Control System of NSLS-II", ICALEPCS 2009, Japan.
 
Poster WEPMS020 [1.818 MB]
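
A minimal sketch of the IOC-side functionality listed in the abstract (ramp curve upload, ADC waveform download). The class, method names and transport stand-ins are hypothetical; only the figures quoted in the abstract (up to 16 ramping tables per DAC, 9 ADC channels per DAC) are reused.

```python
# Stand-in for the IOC-side view of one PSC/PSI channel.
from typing import List, Sequence

class BoosterPSChannel:
    MAX_RAMP_TABLES = 16          # per DAC, from the abstract
    ADC_CHANNELS_PER_DAC = 9      # from the abstract

    def __init__(self, send, receive):
        self._send = send          # stand-in for the PSC Ethernet link
        self._receive = receive
        self._ramp_tables: List[Sequence[float]] = []

    def upload_ramp_table(self, table: Sequence[float]) -> int:
        """Store a ramp curve on the PSC; return the table slot used."""
        if len(self._ramp_tables) >= self.MAX_RAMP_TABLES:
            raise RuntimeError("all ramp table slots in use")
        self._ramp_tables.append(tuple(table))
        self._send("LOAD_RAMP", table)
        return len(self._ramp_tables) - 1

    def read_adc_waveform(self, adc_channel: int) -> Sequence[float]:
        """Download the recorded waveform of one ADC channel."""
        if not 0 <= adc_channel < self.ADC_CHANNELS_PER_DAC:
            raise ValueError("invalid ADC channel")
        return self._receive("READ_WAVEFORM", adc_channel)
```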
 
WEPMU006 Architecture for Interlock Systems: Reliability Analysis with Regard to Safety and Availability simulation, operation, superconducting-magnet, detector 1058
 
  • S. Wagner, A. Apollonio, R. Schmidt, M. Zerlauth
    CERN, Geneva, Switzerland
  • A. Vergara-Fernandez
    ITER Organization, St. Paul lez Durance, France
 
  For accelerators (e.g. the LHC) and other large experimental physics facilities (e.g. ITER), machine protection relies on complex interlock systems. In the design of interlock loops, the choice of hardware architecture impacts both machine safety and availability. While high machine safety is an inherent requirement, the constraints in terms of availability may differ from one facility to another. For the interlock loops protecting the LHC superconducting magnet circuits, reduced machine availability can be tolerated, since shutdowns do not affect the longevity of the equipment. In ITER's case, on the other hand, high availability is required, since fast shutdowns cause significant magnet aging. A reliability analysis of various interlock loop architectures has been performed. The analysis, based on an analytical model, compares a 1oo3 (one-out-of-three) and a 2oo3 architecture with a single loop. It yields the probabilities of four scenarios: (1) completed mission (e.g. a physics fill in the LHC or a pulse in ITER without a triggered shutdown), (2) shutdown because of a failure in the interlock loop, (3) emergency shutdown (e.g. after a magnet quench), and (4) missed emergency shutdown (a shutdown is required but the interlock loop fails, possibly leading to severe damage of the facility). Scenario 4 relates to machine safety and, together with scenarios 2 and 3, defines the machine availability reflected by scenario 1. This paper presents the results of the analysis of the properties of the different architectures with regard to machine safety and availability.
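
The sketch below is a simplified numerical illustration of how the four scenario probabilities can be compared across a single loop, a 1oo3 and a 2oo3 architecture. The per-loop failure probabilities, the demand probability and the independence assumption are purely illustrative; this is not the analytical model used in the paper.

```python
# Compare scenario probabilities (1)-(4) for three interlock loop architectures,
# assuming independent, identical loops.
p_d = 1e-4    # probability that one loop fails to trip on demand (assumed)
p_s = 1e-2    # probability that one loop trips spuriously during a mission (assumed)
p_q = 1e-1    # probability that an emergency shutdown is demanded (assumed)

def architecture(missed_given_demand, false_trip):
    """Return the four scenario probabilities from the abstract."""
    p4 = p_q * missed_given_demand            # (4) missed emergency shutdown
    p3 = p_q * (1 - missed_given_demand)      # (3) executed emergency shutdown
    p2 = (1 - p_q) * false_trip               # (2) shutdown caused by the interlock loop
    p1 = (1 - p_q) * (1 - false_trip)         # (1) completed mission
    return p1, p2, p3, p4

archs = {
    "single loop": architecture(p_d, p_s),
    "1oo3 (any loop trips)": architecture(p_d**3, 1 - (1 - p_s)**3),
    "2oo3 (two loops must agree)": architecture(p_d**3 + 3 * p_d**2 * (1 - p_d),
                                                p_s**3 + 3 * p_s**2 * (1 - p_s)),
}
for name, (p1, p2, p3, p4) in archs.items():
    print(f"{name:28s} completed={p1:.4f}  false-trip={p2:.2e}  missed={p4:.2e}")
```

With these numbers the 1oo3 loop is the safest but trips spuriously most often, while 2oo3 regains availability at a small cost in safety, which is the trade-off the abstract discusses.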
 
WEPMU012 First Experiences of Beam Presence Detection Based on Dedicated Beam Position Monitors operation, injection, pick-up, instrumentation 1081
 
  • A. Jalal, S. Gabourin, M. Gasior, B. Todd
    CERN, Geneva, Switzerland
 
  High-intensity particle beam injection into the LHC is only permitted when a low-intensity pilot beam is already circulating in the LHC. This requirement addresses some of the risks associated with high-intensity injection, and is enforced by a so-called Beam Presence Flag (BPF) system, which is part of the interlock chain between the LHC and its injector complex. For the 2010 LHC run, the detection of the presence of this pilot beam was implemented using the LHC Fast Beam Current Transformer (FBCT) system. However, the primary function of the FBCTs, that is, the reliable measurement of beam currents, did not allow the BPF system to satisfy all the quality requirements of the LHC Machine Protection System (MPS). The safety requirements associated with high-intensity injection triggered the development of a dedicated system based on Beam Position Monitors (BPMs). This system was meant to work first in parallel with the FBCT BPF system and eventually replace it. At the end of 2010 and in 2011, this new BPM-based BPF implementation was designed, built, tested and deployed. This paper reviews both the FBCT and the BPM implementations of the BPF system, outlining the changes during the transition period. The paper briefly describes the testing methods, focuses on the results obtained from the tests performed at the end of the 2010 LHC run, and shows the changes made for the deployment of the BPM BPF system in the LHC in 2011.
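
A schematic sketch of the beam-presence decision only: the threshold, the required number of concordant readings and the function names are illustrative assumptions, not the deployed BPM BPF logic.

```python
# Derive a Beam Presence Flag from dedicated BPM readings and gate injection on it.
def beam_presence_flag(bpm_amplitudes, threshold=0.1, min_readings=2):
    """True only if enough BPM readings see a circulating (pilot) beam."""
    readings_with_beam = sum(1 for a in bpm_amplitudes if a > threshold)
    return readings_with_beam >= min_readings

def high_intensity_injection_permit(bpf: bool) -> bool:
    """High-intensity injection is only permitted while the BPF is set."""
    return bpf

print(high_intensity_injection_permit(beam_presence_flag([0.02, 0.31, 0.29])))
```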
 
WEPMU023 External Post-Operational Checks for the LHC Beam Dumping System controls, kicker, operation, injection 1111
 
  • N. Magnin, V. Baggiolini, E. Carlier, B. Goddard, R. Gorbonosov, D. Khasbulatov, J.A. Uythoven, M. Zerlauth
    CERN, Geneva, Switzerland
 
  The LHC Beam Dumping System (LBDS) is a critical part of the LHC machine protection system. After every LHC beam dump action, the various signals and transient data recordings of the beam dumping control systems and of the beam instrumentation measurements are automatically analysed by the eXternal Post-Operational Checks (XPOC) system to verify the correct execution of the dump action and the integrity of the related equipment. This software system complements the LHC machine protection hardware, and has to ascertain that the beam dumping system is 'as good as new' before the start of the next operational cycle. This is the only way by which the stringent reliability requirements can be met. The XPOC system has been developed within the framework of the LHC "Post-Mortem" system, allowing highly dependable data acquisition, data archiving, live analysis of acquired data and replay of previously recorded events. It is composed of various analysis modules, each one dedicated to the analysis of measurements coming from specific equipment. This paper describes the global architecture of the XPOC system and gives examples of the analyses performed by some of the most important analysis modules. It explains the integration of the XPOC into the LHC control infrastructure, along with its integration into the decision chain that allows proceeding with beam operation. Finally, it discusses the operational experience with the XPOC system acquired during the first years of LHC operation, and illustrates examples of internal system faults or abnormal beam dump executions which it has detected.
Poster WEPMU023 [1.768 MB]
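
A structural sketch of the modular analysis idea described above: one analysis module per equipment system, each returning a verdict on the dump data, with beam operation allowed only if all modules accept. The module names, data format and acceptance criteria are assumptions, not the real XPOC modules.

```python
# One analysis module per equipment system; the overall verdict gates beam operation.
from abc import ABC, abstractmethod

class AnalysisModule(ABC):
    name: str = "module"

    @abstractmethod
    def analyse(self, dump_data: dict) -> bool:
        """Return True if this equipment behaved as expected for the dump."""

class KickerWaveformCheck(AnalysisModule):
    name = "extraction kicker waveforms"
    def analyse(self, dump_data):
        return all(abs(r) < 0.05 for r in dump_data.get("kicker_residuals", []))

class BeamInstrumentationCheck(AnalysisModule):
    name = "beam instrumentation"
    def analyse(self, dump_data):
        return dump_data.get("dump_block_losses_ok", False)

def xpoc_verdict(dump_data, modules):
    """Beam operation may only proceed if every module accepts the dump."""
    results = {m.name: m.analyse(dump_data) for m in modules}
    return all(results.values()), results

ok, details = xpoc_verdict(
    {"kicker_residuals": [0.01, -0.02], "dump_block_losses_ok": True},
    [KickerWaveformCheck(), BeamInstrumentationCheck()],
)
print(ok, details)
```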
 
THCHAUST06 Instrumentation of the CERN Accelerator Logging Service: Ensuring Performance, Scalability, Maintenance and Diagnostics instrumentation, database, distributed, framework 1232
 
  • C. Roderick, R. Billen, D.D. Teixeira
    CERN, Geneva, Switzerland
 
  The CERN accelerator Logging Service currently holds more than 90 terabytes of data online, and processes approximately 450 gigabytes per day via hundreds of data loading processes and data extraction requests. This service is mission-critical for day-to-day operations, especially with respect to the tracking of live data from the LHC beam and equipment. In order to effectively manage any service, the service provider's goals should include knowing how the underlying systems are being used, in terms of: "Who is doing what, from where, using which applications and methods, and how long each action takes". Armed with such information, it is then possible to analyze and tune system performance over time, plan for scalability ahead of time, assess the impact of maintenance operations and infrastructure upgrades, and diagnose past, on-going or recurring problems. The Logging Service is based on Oracle DBMS and Application Servers and on Java technology, and comprises several layered and multi-tiered systems. These systems have all been heavily instrumented to capture data about system usage, using technologies such as JMX. The success of the Logging Service and its proven ability to cope with ever-growing demands can be directly linked to the instrumentation in place. This paper describes the instrumentation that has been developed, and demonstrates how the instrumentation data are used to achieve the goals outlined above.
Slides THCHAUST06 [5.459 MB]
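
The sketch below is not the JMX instrumentation of the Java service itself; it is only a compact illustration of the "who is doing what, from where, and how long each action takes" idea, using a decorator that records usage metadata around each data-extraction call. The recorded field names and the example signal are assumptions.

```python
# Record who called which action, from where, and how long it took.
import functools, getpass, socket, time

USAGE_LOG = []

def instrumented(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            USAGE_LOG.append({
                "action": func.__name__,
                "user": getpass.getuser(),
                "host": socket.gethostname(),
                "duration_s": time.perf_counter() - start,
            })
    return wrapper

@instrumented
def extract_signal(signal_name, t_from, t_to):
    time.sleep(0.01)               # stand-in for the real data extraction
    return []

extract_signal("EXAMPLE.SIGNAL:INTENSITY", "2011-06-01", "2011-06-02")
print(USAGE_LOG[-1])
```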