GAUDI User Guide

Chapter 14
Accessing SICB facilities

14.1 Overview

To ease the transition towards C++ based code, we have added a number of features to the Gaudi framework whose purpose is to allow access to facilities which exist within SICB but do not yet exist within the framework itself. Gaudi can also read data produced with SICBMC or SICBDST.

In this chapter we cover: staging data from DSTs, converting SICB data for use by Gaudi algorithms, accessing the magnetic field map, accessing the SICB geometry description, the use of the SUINIT, SUANAL and SULAST routines from within the Gaudi framework and handling event pileup in Gaudi.

When using the geometry and magnetic field descriptions described here, you should not forget that they are a temporary solution and that they will disappear at some point in the future. Gaudi already includes the machinery to provide any algorithm with the detector data stored in XML format. Investing some time now to describe your detector in XML may be, in many cases, more convenient than using the old SICB routines to access detector description data. If, for any reason, you have to use the old tools, use them only to populate some pre-defined class which can then be used in your algorithms. In this way, when you decide to move to the new detector description tools in Gaudi, the required changes in your code will be confined to the parts which access the geometry.

14.2 Reading tapes

There are three ways to specify a SICB input data file in Gaudi:

  1. Read one or more files on disk by setting the appropriate option of the Event Selector in the job options file:

     EventSelector.Input = {"FILE = '$SITEROOT/lhcb/data/mc/sicb_mbias_v233_10ev.dst2, $SITEROOT/lhcb/data/mc/sicb_bpipi_v233_100ev.dst1'"};

  2. Specify a data set by giving the job identification numbers in the book-keeping database, as was done in SICB. The framework will query the ORACLE database in order to find the corresponding tape and then stage it in. The database is accessed through the oraweb01 web server at CERN, as for the mcstagein script.

     The job IDs are specified in the job options file as follows:

     EventSelector.Input = { "JOBID='16434, 16435'" };

  3. Read one or more tape files, even if they are not in the LHCb production book-keeping database, for example test beam data. The user should tell the Event Selector which tape(s) and file(s) to read in the following way:

     EventSelector.Input = {"TAPE='Y21221-7, Y21223-24'"};

     The format is <Volume serial number of the tape>-<File sequence number>.

When a Gaudi job needs to stage more than one tape, the program waits for the first tape to be staged; the remaining tapes are staged while the first file is processed. Once the program finishes reading the first file, it checks whether the next file is ready before continuing. If the tape is not yet staged, the program writes a message and waits until the file is in the staging disk pools.

Skipping events

When reading SICB data, one may wish to skip some events. This can be done by setting the property EventSelector.FirstEvent. For example, to start processing at event 109, add the following job option:

EventSelector.FirstEvent = 109;

The algorithms will run only after reading that event. Note that Gaudi will take a few seconds to reach the requested event, as it has to read all the records in the Zebra file preceding the one requested.

14.3 Populating the GAUDI transient data store: SICB Converters

14.3.1 General considerations

Access to the SICB data sets is basically via wrappers to the FORTRAN code. A complete event is read into the Zebra common block, and then the conversion to transient objects is done on request by specialised converters.

As mentioned in Chapter 13, a converter must implement the IConverter interface, by deriving from a specific base class. In this way any actions which are in common to all converters of a specific technology may be implemented in a single place.

In the following section we give detailed instructions on how to implement converters within the SicbCnv package. These are intended primarily for Gaudi developers themselves.

14.3.2 Implementing converters in the SicbCnv package

SICB converters are available for reading most SICB banks; the full list of converted banks is available at http://cern.ch/lhcb-comp/Support/html/ConvertedSICBBanks.htm. Writing back into persistent storage (ZEBRA files) is under development for a few banks.

Typically, GAUDI DataObjects can be of two types:

Template files exist in the directory $LHCBSOFT/SicbCnv/<version>/doc for both types of converters, to ease the creation of user converters:

If you intend to write your own SICB converter, follow the instructions below:

14.4 Access to the Magnetic Field

The magnetic field map will be accessible in the future via the transient detector store. For the time being, as this is not implemented and as access to the magnetic field has been requested, we have provided a magnetic field service. Again this is effectively just a wrapper which uses SICB routines to read the information from a .cdf file.

The location of the field.cdf file is provided by the standard.stream file, which is read in by a SICB routine called from Gaudi. This file is in the data base area, $LHCBDBASE/standard.stream in AFS. For every data version, the file used in the production is read in. The location of the standard.stream file will be taken from an environment variable as per normal SICB operation.

To use the Magnetic field service one should modify the jobOptions.txt file to include the following:

ApplicationMgr.ExtSvc = { "MagneticFieldSvc"};

Any algorithm which requires the use of the service makes a request via the service() method of the Algorithm base class:

IMagneticFieldSvc* pIMF= 0;
StatusCode sc = service("MagneticFieldSvc", pIMF );

The service provides a method:

StatusCode fieldVector(HepPoint3D& Pos, HepVector3D& field)

which gives a magnetic field vector at a given point in space, for example:

HepPoint3D P(10.*cm, 10.*cm, 120.*cm);
HepVector3D B;
pIMF->fieldVector( P, B );

The magnetic field service uses a new version of the SICB routine GUFLD. In this new version the best possible description of the magnet geometry and the field are assumed, in order to eliminate dependencies on other parts of SICB. Technically in SICB this corresponds to:

IMAGLEV = IUVERS('GEOM','MAGN') = 4
IFLDLEV = IUVERS('GEOM','MFLD') = 4

These two parameters have been fixed to 4 in the production since a few months before the Technical Proposal: version 111 of SICB. Thus this seems to be a reasonable restriction until the field map is provided in the detector description database.

For an example of the use of this service see the sub-algorithm readMagField in the example FieldGeom distributed with the release.

14.5 Accessing the SICB detector geometry from Gaudi

As discussed previously, the detector geometry will be included along with the field map in the XML detector description database. Currently only a part of the LHCb detector is in the new database. However, the detector geometry used in the production of the data can be accessed by calling a function in the SicbFortran name space (this function is just a wrapper for the FORTRAN function of similar name):

void SicbFortran::utdget(const std::string& a1, const std::string& a2,
                         int& nParam, int* data);

nParam should be set to the number of data words required and, on return from the function, will contain the number of data words actually copied into the data array. The first string contains the name of the sub-detector whose geometry is being requested and the second string is a list of options:

'V' - version;
'G' - geometry description parameters (default);
'C' - calculated geometry (not in *.cdf);
'H' - hit description parameters;
'D' - digitization parameters;
'R' - reconstruction parameters;
'F' - floating point parameters (default);
'I' - integer parameters;
'N' - take parameters from the *.cdf file;
'L' - ZEBRA pointer to the beginning of the parameters storage is returned in IARRAY(1).

An algorithm requiring this access should include the header file:

#include "SicbCnv/TopLevel/SicbFortran.h"

and call it so:

int mpar = 300;
float rpar[300];
SicbFortran::utdget("WDRF", "D", mpar, (int *) rpar);
for( int j = 0; j < mpar; j++ ) {
  log << MSG::INFO << " wdpar(" << j << ") = " << rpar[j] << endreq;
}

Note that the data returned by the function are written into an integer array. However, floating point numbers can also be read, as in the code fragment above, by passing a float array cast to int*.

One should notice that the geometry returned by this function is that which was used in the production of the data and not that which is in the current version of the .cdf files. Only if the option 'N' is specified are the .cdf files read in from the standard location. In order to be able to use the array of parameters returned one has to know in advance the organization of these data in the cdf file since the data are stored in an array and not in a common block with named variables.

The sub-algorithm readTrackerGeom in the example FieldGeom extracts and writes out some digitization and geometry parameters of the outer tracker.

14.6 Using FORTRAN code in Gaudi

Existing FORTRAN code can be used within a Gaudi application by using the FortranAlgorithm class. This is a standard Gaudi algorithm which calls the FORTRAN routines: SUINIT in the initialize() method, SUANAL in the execute() method for each event, and SULAST in finalize(). Implementing these three routines allows you to write code in FORTRAN and have it called from within Gaudi, in particular to import routines already written in SICB.

Note, however, that there are some points that should be kept in mind when importing SICB code into Gaudi. The following list is far from complete, but we hope that it will help anyone using the FortranAlgorithm. It may be updated in the future with users' experiences and our own findings.

14.7 Handling pile-up in Gaudi

In this section we explain the pile-up structure implemented in the Gaudi framework (see Figure 30).

Pile-up in Gaudi is performed by a pile-up algorithm. An example (PileUpAlg) can be found in the package SicbCnv. The pile-up algorithm creates a second instance of the event selector which has to be configured in the job options file. The leading event will be read in from the EventSelector created by the ApplicationMgr. The piled-up events are read in from the second instance created by the pile-up algorithm. There are two different iterators, one for each EventSelector instance, which loop over these two data streams.

When reading the ZEBRA files produced by SICB the events are merged at the level of the ZEBRA common blocks by a call to the Futio routine REMERGE. Every C++ algorithm requesting data from one of the converted banks will get the merged data. Every FORTRAN algorithm using the SICB banks will also read the merged data as SICBDST does.

PileUpAlg must be the first algorithm called; other algorithms will access the merged events when they retrieve some data from the store. The pile-up algorithm controls the number of events which have to be piled-up onto every leading event. PileUpAlg uses a C++ version of the SICB routine RELUMI to get the number of pile-up events as a function of the luminosity and some other beam parameters. Those parameters are currently read in from the beam.cdf file. The C++ version of RELUMI uses the random number service in Gaudi. Other implementations of the algorithm, for instance to return a fixed number of pile-up events every time, may be implemented if they are needed.

Figure 30 Pile-up in Gaudi

 

In the future, the two event selectors will use two different data services to access different data stores. There could be several different pile-up algorithms using a pile-up service provided in the framework, including all the common functionality needed by the different pile-up algorithms. The current implementation is very dependent on SICB and does not use any pile-up service.

The following job options are necessary to instantiate the current implementation of the pile-up structure. First, tell the ApplicationMgr to create a second SicbEventSelector, called PileUpSelector:

ApplicationMgr.ExtSvc += { "SicbEventSelector/PileUpSelector" };

The application manager must be told to run the pile-up algorithm, and the user has to configure how this algorithm works. That configuration depends on the concrete implementation of the algorithm. In the current PileUpAlg the user can select between two different implementations to get the number of pile-up events, for signal and minimum bias leading events. To select pile-up for signal or minimum bias one can use the pile-up algorithm property "PileUpMode", which can be set to LUMISIGNAL or LUMIMINBIAS. In both cases the number of pile-up events depends on the luminosity.

ApplicationMgr.TopAlg = { "PileUpAlg", "Alg1", "Alg2", ... };
PileUpAlg.PileUpMode = "LUMISIGNAL";

Finally, the second event selector should be configured: the input data source for the pile-up events has to be defined with the PileUpSelector.JobInput property. This property accepts JOBIDs, FILE names or TAPEs (note that the format of this option differs from the format explained in section 14.2 for the property of the "main" event selector):

PileUpSelector.JobInput = "JOBID 12933";