Keyword: data-analysis
Paper | Title | Other Keywords | Page
WEPKS012 | Intuitionistic Fuzzy (IF) Evaluations of Multidimensional Model | operation, software, lattice, fuzzy set | 805
 
  • I.D. Valova
    ICER, Sofia, Bulgaria
 
  There are various logical methods for structuring data, but none of them is perfect. The multidimensional model presents data either as a cube (referred to as an infocube or hypercube) or as a "star"-type scheme (referred to as a multidimensional scheme), using an F-structure (Facts) and a set of D-structures (Dimensions), based on the notion of a hierarchy of D-structures. The data analysed in a specific multidimensional model is located in a Cartesian space bounded by the D-structures. In practice the data is either dispersed or "concentrated", so the data cells are not distributed evenly within this space. The moment of occurrence of an event is difficult to predict, and data clusters around time periods, locations of events, etc. Processing such dispersed or concentrated data requires various technical strategies. Intuitionistic fuzzy evaluations (IFE) offer new possibilities for the alternative presentation and processing of the data analysed in any OLAP application. Applying IFE to the evaluation of multidimensional models has the following advantages: analysts have more complete information for processing and analysing the data; managers can take more effective final decisions; and more functional multidimensional schemes can be designed. The purpose of this work is to apply intuitionistic fuzzy evaluations to a multidimensional model of data.
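  As a minimal sketch of the underlying data structure (hypothetical names, in Python): each occupied cell of a sparse hypercube, keyed by its D-structure coordinates, carries an intuitionistic fuzzy evaluation <mu, nu> with mu + nu <= 1; the hesitancy pi = 1 - mu - nu is the extra information an analyst gains over an ordinary fuzzy degree.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class IFE:
        """Intuitionistic fuzzy evaluation: membership mu, non-membership nu."""
        mu: float
        nu: float

        def __post_init__(self):
            if not (self.mu >= 0.0 and self.nu >= 0.0 and self.mu + self.nu <= 1.0):
                raise ValueError("IFE requires mu >= 0, nu >= 0 and mu + nu <= 1")

        @property
        def pi(self) -> float:
            # Hesitancy: the degree of uncertainty left in the evaluation.
            return 1.0 - self.mu - self.nu

    # Sparse cube: only occupied cells are stored, keyed by dimension coordinates.
    cube: dict[tuple, IFE] = {
        ("2011-Q3", "Sofia", "product A"): IFE(mu=0.7, nu=0.2),
        ("2011-Q4", "Sofia", "product A"): IFE(mu=0.4, nu=0.5),
    }

    for coords, e in cube.items():
        print(coords, f"mu={e.mu}, nu={e.nu}, hesitancy={e.pi:.2f}")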
 
WEPKS019 | Data Analysis Workbench | experiment, interface, TANGO, synchrotron | 823
 
  • A. Götz, M.W. Gerring, O. Svensson
    ESRF, Grenoble, France
  • S. Brockhauser
    EMBL, Heidelberg, Germany
 
  Funding: ESRF
Data Analysis Workbench [1] is a new software tool developed in collaboration by the ESRF, Soleil and Diamond. It provides data visualization and workflow algorithm design for data analysis in combination with data collection. The workbench uses Passerelle as its workflow engine and EDNA plugins for data analysis. Actors talking to Tango are used to send a limited set of commands to hardware and to start existing data collection algorithms. Scripting interfaces to SPEC and Python are provided. At the ESRF the software is currently at the prototype stage.
[1] http://www.dawb.org
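The abstract does not show the Passerelle actor API, but the Tango-facing part of an actor can be sketched with PyTango (the device name below is hypothetical):

    import tango

    def send_limited_command(device_name: str, command: str, argument=None):
        """Connect to a Tango device and fire a single command, as a
        DAWB-style actor might when starting a data collection step."""
        proxy = tango.DeviceProxy(device_name)
        # command_inout sends the command and returns the device's reply.
        return proxy.command_inout(command, argument)

    if __name__ == "__main__":
        # "State" is a command every Tango device implements.
        print(send_limited_command("id00/collection/1", "State"))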
 
Poster WEPKS019 [2.249 MB]
 
WEPKS021 | EPICS V4 in Python | EPICS, software, controls, status | 830
 
  • G. Shen, M.A. Davidsaver, M.R. Kraimer
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported under auspices of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC, and in part by the DOE Contract DE-AC02-76SF00515
A novel design and implementation of EPICS version 4 in Python is under way. EPICS V4 defines an efficient way to describe a complex data structure and a data protocol. The existing C++ and Java implementations each had to reinvent a representation of this data structure; in Python it can be handled more efficiently by mapping the data structure onto a NumPy array. This paper presents performance benchmarks, a comparison between the language implementations, and the current status.
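The mapping idea can be sketched with a toy stand-in (this is not the actual pvData API): the bulk array field of a structured value is held directly as a NumPy array, so vectorised operations run on the buffer without per-element conversion.

    import numpy as np

    class NTScalarArray:
        """Toy stand-in for an EPICS V4 normative-type array structure."""
        def __init__(self, value: np.ndarray, units: str):
            self.value = value   # bulk data: a NumPy buffer, no copying
            self.units = units   # scalar metadata field

    pv = NTScalarArray(np.arange(1_000_000, dtype=np.float64), units="mA")

    # Vectorised work happens directly on the mapped buffer, which is where
    # the efficiency relative to element-wise wrapping comes from.
    print(pv.value.mean(), pv.units)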
 
 
THCHAUST03 | Common Data Model; A Unified Layer to Access Data from Data Analysis Point of View | detector, framework, synchrotron, neutron | 1220
 
  • N. Hauser, T.K. Lam, N. Xiong
    ANSTO, Menai, Australia
  • A. Buteau, M. Ounsy, S. Poirier
    SOLEIL, Gif-sur-Yvette, France
  • C. Rodriguez
    ALTEN, Boulogne-Billancourt, France
 
  For almost 20 years, the scientific community at neutron and synchrotron facilities has dreamed of using a common data format so that experimental results, and the applications that analyse them, can be exchanged. While using HDF5 as a physical container for data quickly gained a broad consensus, the big issue is the standardisation of data organisation. By introducing a new level of indirection for data access, the CommonDataModel (CDM) framework offers a solution and allows development efforts and responsibilities to be split between institutes. The CDM consists of a core API that accesses data through a data-format plugin mechanism, together with scientific application definitions (i.e. sets of logically organized keywords defined by scientists for each experimental technique). Using an innovative "mapping" system between application definitions and physical data organisations, the CDM makes it possible to develop data reduction applications regardless of data file formats AND organisations. Each institute then has to develop data access plugins for its own file formats, along with the mapping between the application definitions and its own data file organisation. Data reduction applications can thus be developed from a strictly scientific point of view and are natively able to process data coming from several institutes. A concrete example of a SAXS data reduction application accessing NeXus and EDF (ESRF Data Format) files will be presented.
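  The indirection can be sketched in Python (keyword, path and class names here are hypothetical, not the real CDM API): an application-definition keyword is resolved through a per-institute mapping to a physical path, which a data-format plugin then reads.

    import h5py

    # Per-institute mapping: scientific keyword -> physical path in the file.
    NEXUS_MAP = {"wavelength": "entry1/instrument/monochromator/wavelength"}

    class NexusPlugin:
        """Data-format plugin: reads a value at a given physical path."""
        def read(self, filename: str, path: str):
            with h5py.File(filename, "r") as f:
                return f[path][()]

    def get(keyword: str, filename: str, mapping: dict, plugin) -> object:
        """Application code asks for a keyword; the mapping and the plugin
        hide the file format and the institute's data organisation."""
        return plugin.read(filename, mapping[keyword])

    # A SAXS reduction would then read, format-independently:
    # wavelength = get("wavelength", "scan_0042.nxs", NEXUS_MAP, NexusPlugin())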
Slides THCHAUST03 [36.889 MB]