Chapter 18
Accessing SICB facilities
18.1 Overview

To ease the transition towards C++ based code, a number of features have been implemented in the Gaudi framework to give access to certain SICB facilities. In particular, Gaudi can read data from SICB ZEBRA banks.

In this chapter we cover: staging data from tapes, converting SICB data for use by Gaudi algorithms, accessing the magnetic field map, accessing the SICB geometry description, the use of the SUINIT, SUANAL and SULAST routines from within the Gaudi framework and handling event pileup and spillover in Gaudi. We also explain how to modify the size of the ZEBRA COMMON block.

When using the geometry and magnetic field descriptions described here, remember that they are a temporary solution for backwards compatibility with SICB. Gaudi includes the machinery to provide any algorithm with the detector data stored in XML format. Investing some time now to describe your detector in XML may, in many cases, be more convenient than using the old SICB routines to access detector description data. If you do use the old tools, use them only to populate a pre-defined class which can then be used in your algorithms. In this way, when you move to the new detector description tools in Gaudi, the required changes to your code will be confined to the parts which access the geometry.

18.2 Reading tapes

There are three ways to specify a SICB input data file in the Gaudi job options:

1. Specify one or more file names. The name can be either the complete path on disk, or a "Castor file name" as returned by the book-keeping database. Using the Castor file name is the recommended method:
EventSelector.Input = {"FILE = 'C:/data/mydata.dst,  /castor/cern.ch/lhcb/mc/L251367'"};

2. Specify a data set by giving the job identification numbers (JOBIDs), as was done in SICB. The framework will make an http query to the book-keeping database in order to find the corresponding dataset, and then issue a stage request. This database query can be avoided by using Castor names instead of JOBIDs, as recommended above:
EventSelector.Input = { "JOBID='16434, 16435'" };

3. Specify explicitly a tape and file sequence number. This option is available only for completeness: tapes should never be accessed directly; use Castor instead.
EventSelector.Input = {"TAPE='Y21221-7, Y21223-24'"};

The format is <Volume serial number of the tape>-<File sequence number>

When a Gaudi job requires more than one file to be staged, the program waits only for the first file. The remaining files are staged while the first file is processed. When the program finishes reading the first file, it checks whether the next file is ready before continuing. If the file is not yet staged, the program prints a message and waits until the file is in the staging disk pools.

Skipping events

When reading SICB data, one may wish to skip some events. This can be done by setting the property EventSelector.FirstEvent. For example, with the following job option, the algorithms will only start processing at event 109:
EventSelector.FirstEvent = 109;

Note that Gaudi will take a few seconds to reach the requested event, as it has to read all the records in the ZEBRA file preceding the one requested.

18.3 Populating the GAUDI transient data store: SICB Converters
18.3.1 General considerations

Access to the SICB data sets is basically via wrappers to the FORTRAN code. A complete event is read into the Zebra common block, and then the conversion to transient objects is done on request by specialised converters.

As mentioned in Chapter 13, a converter must implement the IConverter interface, by deriving from a specific base class. In this way any actions which are in common to all converters of a specific technology may be implemented in a single place.

In the following section we give detailed instructions on how to implement converters within the SicbCnv package. These are intended primarily for Gaudi developers themselves.

18.3.2 Implementing converters in the SicbCnv package

SICB converters are available for reading most SICB banks, the full list of converted banks is available at http://cern.ch/lhcb-comp/Support/html/ConvertedSICBBanks.htm. Writing back into persistent storage (ZEBRA files) is possible for a few banks.

GAUDI DataObjects are typically of two types:

· Simple classes, which contain the data of a single SICB bank. These classes are of type DataObject. An example is the Event class containing the data of the PASS bank.
· Container classes, which contain data from multiple SICB banks. An example is the ObjectVector<MCParticle>, which contains Monte-Carlo particles with data from the ATMC bank.

Template files exist in the directory $LHCBSOFT/SicbCnv/<version>/doc for both types of converters, to ease the creation of user converters:

· SicbCnv.Class.Template.cpp and SicbCnv.Class.Template.h to be used when writing a converter for a single class.
· SicbCnv.Container.Template.cpp and SicbCnv.Container.Template.h to be used when writing a converter for an object container.

If you intend to write your own SICB converter, follow the instructions below:

· Copy SicbCnv.xxxx.Template.h to Sicb<your-class>Cnv.h, where <your-class> is the name of your persistent class.
· Copy SicbCnv.xxxx.Template.cpp to Sicb<your-class>Cnv.cpp
· Now customize the header and the implementation file
· Follow TODO instructions in Sicb<your-class>Cnv.h
· Follow TODO instructions in Sicb<your-class>Cnv.cpp
· The converter factory must be made known to the system. This in fact depends on the linking mechanism: If the converter is linked into the executable as an object file, no action is necessary. However, usually the converter code resides in a shared or archive library. In this case the library must have an initialisation routine which creates an artificial reference to the created converter and forces the linker to include the code in the executable. An example of creating such a reference can be found in the file $LHCBSOFT/SicbCnv/<version>/SicbCnv/SicbCnvDll/SicbCnv_load.cpp. The convention for these initialization files is the following: for any other package replace the string "SicbCnv" with "OtherPackage".
· Compile, link, debug.
· Once the converter works, remove unnecessary TODO comments.
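The "artificial reference" trick mentioned in the linking step above can be sketched generically. The classes below (ConverterBase, the registry, SicbXYZCnv for a hypothetical XYZ bank) are illustrative stand-ins, not the actual Gaudi converter factory API:

```cpp
#include <map>
#include <memory>
#include <string>

// Minimal stand-in for a converter base class.
struct ConverterBase {
    virtual ~ConverterBase() = default;
    virtual std::string bankName() const = 0;
};

// A registry mapping SICB bank names to factory functions.
using Factory = std::unique_ptr<ConverterBase> (*)();
std::map<std::string, Factory>& registry() {
    static std::map<std::string, Factory> r;
    return r;
}

// A converter for the (hypothetical) XYZ bank.
struct SicbXYZCnv : ConverterBase {
    std::string bankName() const override { return "XYZ"; }
};

// The self-registration trick: a static object whose constructor
// registers the factory. Referencing this object from a "_load" file
// forces the linker to pull this object file out of the library.
struct RegisterXYZ {
    RegisterXYZ() {
        registry()["XYZ"] = [] {
            return std::unique_ptr<ConverterBase>(new SicbXYZCnv);
        };
    }
};
static RegisterXYZ xyzLoader;
```

The real SicbCnv_load.cpp file plays the role of the reference to xyzLoader: without it, a converter living in an archive library would simply never be linked in.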
18.3.3 Back Converters

In some cases it may be necessary to convert event data from the Gaudi Transient Event Data store back to SICB banks in the ZEBRA COMMON block, to make this data available to FORTRAN algorithms. This "back" conversion is done by dedicated converter algorithms. An example is the CaloDigitToSicbConverter algorithm in the SICB/CaloSicbCnv package.

18.4 Access to the Magnetic Field

The magnetic field map will be accessible in the future via the transient detector store. For the time being, as this is not implemented and as access to the magnetic field has been requested, we have provided a magnetic field service. Again this is effectively just a wrapper which uses SICB routines to read the information from a .cdf file.

The location of the field.cdf file is provided by the standard.stream file, which is read in by a SICB routine called from Gaudi. This file is in the database area ($LHCBDBASE/standard.stream in AFS). For each version, the file used in the production is read in. The location of the standard.stream file is taken from an environment variable, as in normal SICB operation.

To use the Magnetic field service one should modify the jobOptions.txt file to include the following:
ApplicationMgr.ExtSvc += { "MagneticFieldSvc"};

Any algorithm which requires the use of the service makes a request via the service() method of the Algorithm base class:
IMagneticFieldSvc* pIMF= 0;
StatusCode sc = service("MagneticFieldSvc", pIMF );

The service provides a method:
StatusCode fieldVector(HepPoint3D& Pos, HepVector3D& field)

which gives a magnetic field vector at a given point in space, for example:
HepPoint3D P(10.*cm, 10.*cm, 120.*cm);
HepVector3D B;
pIMF->fieldVector( P, B );

The magnetic field service uses a new version of the SICB routine GUFLD. In this new version the best available description of the magnet geometry and of the field is assumed, in order to eliminate dependencies on other parts of SICB. Technically, in SICB this corresponds to:
IMAGLEV = IUVERS('GEOM','MAGN') = 4
IFLDLEV = IUVERS('GEOM','MFLD') = 4

These two parameters have been fixed to 4 in the production since a few months before the Technical Proposal: version 111 of SICB. Thus this seems to be a reasonable restriction until the field map is provided in the detector description database.

For an example of the use of this service see the sub-algorithm readMagField in the example FieldGeom distributed with the release.

18.5 Accessing the SICB detector Geometry from Gaudi

As discussed previously, the detector geometry will be included along with the field map in the XML detector description database. Currently only a part of the LHCb detector is in the new database. However, the detector geometry used in the production of the data can be accessed by calling a function in the SicbFortran namespace (this function is just a wrapper for the FORTRAN function of similar name):
void SicbFortran::utdget(const std::string& a1, const std::string& a2,
                         int& nParam, int* data);

nParam should be set to the number of data words required; on return from the function it contains the number of data words actually copied into the data array. The first string contains the name of the sub-detector whose geometry is being requested; the second string is a list of options:

'V' - version;
'G' - geometry description parameters (default);
'C' - calculated geometry (not in *.cdf);
'H' - hit description parameters;
'D' - digitization parameters;
'R' - reconstruction parameters;
'F' - floating point parameters (default);
'I' - integer parameters;
'N' - take parameters from the *.cdf file;
'L' - ZEBRA pointer to the beginning of the parameter storage is returned in IARRAY(1).

An algorithm requiring this access should include the header file:
#include "SicbCnv/TopLevel/SicbFortran.h"

and call it so:
int nPar = 300;
float rpar[300];

SicbFortran::utdget("WDRF","D",nPar, (int *) rpar);
for( int j = 0; j < nPar; j++ ) {
  log << MSG::INFO << " wdpar(" << j << ") = " << rpar[j] << endreq;
}

Note that the data returned by the function is written into an integer array. However, floating point numbers can also be read, as in the code fragment above, by passing a float array cast to int*.
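The cast works because the Fortran side fills the buffer bit-for-bit, so each 32-bit word can be reinterpreted as a float. A safer, strict-aliasing-clean way to do the reinterpretation (a standalone sketch, independent of SICB) is:

```cpp
#include <cstring>

// Reinterpret one 32-bit word of the returned buffer as a float.
// std::memcpy performs the same bit-for-bit reinterpretation as the
// (int *) cast above, without the strict-aliasing pitfalls of a
// direct pointer cast.
float wordAsFloat(const int* data, int i) {
    float f;
    std::memcpy(&f, &data[i], sizeof f);  // copy the raw bits
    return f;
}
```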

One should notice that the geometry returned by this function is that which was used in the production of the data and not that which is in the current version of the .cdf files. Only if the option 'N' is specified are the .cdf files read in from the standard location. In order to be able to use the array of parameters returned one has to know in advance the organization of these data in the cdf file since the data are stored in an array and not in a common block with named variables.

The sub-algorithm readTrackerGeom in the example FieldGeom extracts and writes out some digitization and geometry parameters of the outer tracker.

18.6 Using FORTRAN code in Gaudi

Existing FORTRAN code can be used within a Gaudi application by using the FortranAlgorithm class. This is a standard Gaudi algorithm which calls the FORTRAN routines SUINIT in the initialize() method, SUANAL in the execute() method for each event, and SULAST in finalize(). Implementing these three routines allows you to write code in FORTRAN and have it called from within Gaudi, and in particular to import routines already written for SICB.

Note, however, that there are some points to keep in mind when importing SICB code into Gaudi. The following list is far from complete, but we hope it will help anyone using the FortranAlgorithm. It may be updated in the future with users' experiences and our own findings.

· Gaudi is linked with only two sub-packages of SICB: Finclude and Futio. This means that taking some code from SICB and running it successfully in Gaudi will not always be straightforward. Every case will have to be studied separately. In some cases you may find that the effort required to make the code compatible with Gaudi is comparable to writing it directly in C++. For some pieces of code the integration into Gaudi may be very simple. The difficulties will come mainly from dependencies of the code you want to include on other parts of SICB. For instance, there may be common blocks which your code needs but which are never initialized.
· As most of SICB is not executed, you will have either to include and execute the required initialization or to try to eliminate those dependencies.
· Gaudi uses SICB to access the data in the DST's, to extract the information from the banks and to convert the data into the appropriate classes. This needs some initialization but not every SICB initialization step is followed. The standard.stream file and setup.cdf are read in from the standard locations. The program will have access to the data in the cdf files which were used to produce the DST.
· Sicb.dat is not read in. If you need to change the running conditions of your program, do it using the jobOptions.txt file, or write your own cards file and read it in SUINIT.
· In Finclude you will find all the NAME_BANK.INC and NAME_FUNC.INC include files. This means that you have access to the bank data with the usual utilities provided by SICB. For example, to access the momentum of the reconstructed track one can use (as in SICB):
P = AXTK$P(NOBJ)

· Futio includes most of the UT* and UB routines which can therefore be used by FORTRAN code within Gaudi.
· Initialize the histogram files yourself in SUINIT. Gaudi initializes the histogram service but this can be accessed only from C++ code.
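Regarding the Sicb.dat point above: a running condition that SICB would have taken from a cards file can instead be set in jobOptions.txt. A sketch, where MyFortranWrapper and its BeamEnergy property are hypothetical names:

```
// MyFortranWrapper and BeamEnergy are hypothetical; declare the property
// with declareProperty() in your C++ wrapper algorithm and pass the value
// to the FORTRAN side (e.g. through a common block) before calling SUINIT
MyFortranWrapper.BeamEnergy = 7000.0;
```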
18.7 Handling pile-up in Gaudi

In this section we explain the pile-up structure implemented in the Gaudi framework (see Figure 18.1). Please note that this possibility has been rendered somewhat redundant by the implementation of pile-up at generator level in SICBMC v244.

Pile-up in Gaudi is performed by a pile-up algorithm. An example (PileUpAlg) can be found in the package SicbCnv. The pile-up algorithm creates a second instance of the event selector which has to be configured in the job options file. The leading event will be read in from the EventSelector created by the ApplicationMgr. The piled-up events are read in from the second instance created by the pile-up algorithm. There are two different iterators, one for each EventSelector instance, which loop over these two data streams.

When reading the ZEBRA files produced by SICB the events are merged at the level of the ZEBRA common blocks by a call to the Futio routine REMERGE. Every C++ algorithm requesting data from one of the converted banks will get the merged data. Every FORTRAN algorithm using the SICB banks will also read the merged data as SICBDST does.

PileUpAlg must be the first algorithm called (unless spillover is also enabled, in which case it should be the second algorithm, after the spillover algorithm); other algorithms will access the merged events when they retrieve data from the store. The pile-up algorithm controls the number of events to be piled up on each leading event. PileUpAlg uses a C++ version of the SICB routine RELUMI to get the number of pile-up events as a function of the luminosity and some other beam parameters. These parameters are currently read in from the beam.cdf file. The C++ version of RELUMI uses the random number service in Gaudi. Other implementations of the algorithm, for instance returning a fixed number of pile-up events every time, may be provided if needed.
Figure 18.1 Pile-up in Gaudi
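Schematically, the two-iterator merge described above works as follows. Event and the merge step here are simplified stand-ins for the ZEBRA banks and the REMERGE call, not Gaudi types:

```cpp
#include <string>
#include <vector>

// Stand-in for an event read from one of the two data streams: just a
// list of (named) banks.
struct Event { std::vector<std::string> banks; };

// Merge nPileUp events from the pile-up stream into the leading event,
// advancing the pile-up iterator as REMERGE consumes events. The leading
// event comes from the EventSelector stream, the others from the
// PileUpSelector stream.
Event mergeEvent(Event leading,
                 std::vector<Event>::const_iterator& pileUpIt,
                 int nPileUp) {
    for (int i = 0; i < nPileUp; ++i, ++pileUpIt) {
        leading.banks.insert(leading.banks.end(),
                             pileUpIt->banks.begin(), pileUpIt->banks.end());
    }
    return leading;
}
```

Algorithms downstream of the merge then see a single event containing the banks of the leading event plus those of all piled-up events, which is what both C++ and FORTRAN algorithms observe after REMERGE.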

In the future, the two event selectors will use two different data services to access different data stores. There could be several different pile-up algorithms using a pile-up service provided in the framework, including all the common functionality needed by the different pile-up algorithms. The current implementation is very dependent on SICB and does not use any pile-up service.

The following job options are necessary to instantiate the pile-up structure. First, tell the ApplicationMgr to create a second EventSelector, called PileUpSelector:
ApplicationMgr.ExtSvc += { "EventSelector/PileUpSelector" };

The ApplicationMgr must also be told to run the pile-up algorithm, and the user has to configure how this algorithm works. That configuration depends on the concrete implementation of the algorithm. In the current PileUpAlg the user can select between two different implementations of the calculation of the number of pile-up events, for signal and for minimum bias leading events, via the pile-up algorithm property "PileUpMode", which can be set to LUMISIGNAL or LUMIMINBIAS. In both cases the number of pile-up events depends on the luminosity.
ApplicationMgr.TopAlg = { "PileUpAlg", "Alg1", "Alg2", ... };
PileUpAlg.PileUpMode = "LUMISIGNAL";

Finally, the property PileUpSelector.Input defines the input data source for the pile-up events, in the same format as EventSelector.Input, described in section 18.2:
PileUpSelector.Input = {"JOBID='12933'"};

18.8 Handling SpillOver in Gaudi

It is also possible to read in additional (spillover) events. In order to switch on spillover you should include in your job options the lines shown in Listing 18.1
Listing 18.1 Job options for spillover
1: ApplicationMgr.TopAlg = { "SpillOverAlg" };
2: SicbEventCnvSvc.enableSpillover = true;
3: SpillOverAlg.SpillOverMode = "LUMI";
4: // Enable next two lines to modify number of spillover events.
5: // Default is Prev = 1, Next = 0; Maximum is Prev=2, Next=2
6: // SpillOverAlg.SpillOverPrev = 2;
7: // SpillOverAlg.SpillOverNext = 1;
8: SpillOverSelector.Input = {"JOBID='12933'"};
9: // Data to be loaded
10: SpillOverAlg.SpillOverData = {
11: "MCOuterTrackerHits", "MCInnerTrackerHits",
12: "Prs/Signals", "Prs/SummedSignals",
13: "Spd/Signals", "Spd/SummedSignals",
14: "Ecal/Signals", "Ecal/SummedSignals",
15: "Hcal/Signals", "Hcal/SummedSignals" };
Note that SpillOverAlg must be executed ahead of any other algorithm (including ahead of pile-up), so line 1 must occur in the job options file ahead of any other TopAlg declaration.

In the example shown, spillover is included and enabled, and only one additional event is read in by default, corresponding to the previous beam crossing. Up to a total of 4 spillover events can be read in by changing the values of the SpillOverAlg.SpillOverPrev and SpillOverAlg.SpillOverNext job options. The spillover events are read into additional branches of the LHCb data model, as described in reference [8]. Note that the events are merely made available in the transient event data store of Gaudi. None of the event information is modified; in particular, the time of flight information of the hits is not changed: the labels Prev, Next etc. are for convenience only, and it is up to the algorithms using this information to add appropriate timing offsets when required. Only the data specified by the SpillOverAlg.SpillOverData job option (lines 10 to 15) is actually loaded.

The input data source for the spillover events has to be defined with the SpillOverSelector.Input property, see line 8. This property has the same format as EventSelector.Input, described in section 18.2.
18.8.1 Limitations

The current implementation of spillover has the following limitations:

· In order to determine the probability of interactions in previous and subsequent bunch crossings, the spillover algorithm takes the instantaneous luminosity from the current event (as used to generate PileUp). Based on this probability, it uses a random number generator to simulate the actual number of interactions in each of the bunch crossings (let's call this number num_inter). If, in a given bunch crossing, num_inter is greater than zero, then an event is read into the SpillOver transient event structure from the SpillOver input file. This approach is an approximation: if num_inter > 0, the SpillOver algorithm always reads one (and only one) event from the input file, regardless of the value of num_inter. In theory the event read from the input file should contain num_inter piled up events; in practice one just reads the next event. It would be possible to do the correct thing by skipping events until one with the right PileUp multiplicity is found, or to open several input files, each containing events with a fixed PileUp multiplicity. Neither of these possibilities is currently implemented.
· Since SpillOver events will be combined by digitisers into a single Raw event, it is only foreseen to provide the /MC part of the event. Furthermore, only those parts of the /MC subevent whose converter foresees SpillOver are available. Listing 18.2 for example shows how the SICB MCParticles converter has been extended to allow spillover. Any SICB converter can be extended in a similar way if required.
Listing 18.2 Adding provision for spillover to a SICB converter
1: // Next line declares path for main event
2: declareObject("/Event/MC/MCParticles", objType(), "ATMC");
3: // Next 4 lines declare path for spillover events
4: declareObject("/Event/Prev/MC/MCParticles", objType(), "ATMC");
5: declareObject("/Event/Next/MC/MCParticles", objType(), "ATMC");
6: declareObject("/Event/PrevPrev/MC/MCParticles", objType(), "ATMC");
7: declareObject("/Event/NextNext/MC/MCParticles", objType(), "ATMC");
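The approximation described in the first limitation above can be sketched as follows. This is a simplified stand-in, not the actual SpillOverAlg code; the Poisson mean stands in for the luminosity-dependent value computed by the real algorithm:

```cpp
#include <random>

// Decide how many events to read from the spillover input file for one
// bunch crossing. A Poisson draw simulates the actual number of
// interactions (num_inter); the approximation is that at most one event
// is read whenever num_inter > 0, regardless of its value.
int eventsToRead(double meanInteractions, std::mt19937& rng) {
    std::poisson_distribution<int> dist(meanInteractions);
    int numInter = dist(rng);         // simulated interactions in this crossing
    return numInter > 0 ? 1 : 0;      // read one (and only one) event, or none
}
```

Doing the "correct thing" would mean returning an event whose pile-up multiplicity matches numInter, by skipping events in the input file or by selecting among several input files; as noted above, neither is implemented.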

18.9 Increasing the size of the ZEBRA COMMON block

The default Gaudi implementation initializes the ZEBRA COMMON block to a rather small size (3M words), in order to avoid a large memory overhead for applications that need to access SICB facilities but not large events. This limited size is, however, insufficient to read all but the simplest events. Data processing applications need to override this size. One way to do this is to provide a private version of the GETZEBRASIZE routine, to be linked ahead of the one in the SicbCnv library. An example is shown in Listing 18.3, where the ZEBRA COMMON size is set to 12M words.
Listing 18.3 Code to redefine the size of the ZEBRA COMMON block
1: SUBROUTINE GETZEBRASIZE(size)
2:
3: IMPLICIT NONE
4: #define NWPAW 1000000
5: #define NWGEAN 12000000
6: REAL GSTORE
7: COMMON / GCBANK / GSTORE( NWGEAN )
8: REAL HPAW
9: COMMON / PAWC / HPAW(NWPAW)
10:
11: INTEGER size
12: size = NWGEAN
13:
14: RETURN
15: END


