ARC Accounting Technical Details

New in version 6.4.

Changed in version 6.12.

Warning

Information in this chapter is relevant only for 6.4+ ARC releases.

Moreover, ARC 6.12 received accounting changes to address the APEL move to the ARGO Messaging Service (AMS) protocol. If you are publishing to APEL you must update to an ARC 6.12+ release.

Note

If you are looking for the technical details of the legacy accounting subsystem in ARC releases 6.0-6.3, please read Legacy JURA Accounting Technical Details. However, it is highly recommended to update to a recent release.

General accounting configuration and operation flows are described in Accounting Subsystem. This section contains more technical details about the implementation of each component of the accounting subsystem.

Job accounting information processing workflow

Fig. 21 Technical details of ARC CE accounting workflow: information collection, AAR creation, querying and publishing

Collecting the accounting information

The A-REX Accounting subsystem is part of the core A-REX functionality starting from the 6.4 release. Its main task is to write accounting data to the local SQLite accounting database on every job state change.

The data sources of the Accounting data are per-job files in the control directory:

  • .local file contains general information associated with the job. All IDs, ownership and authtokenattributes are taken from this file. The data in .local are written and updated by the A-REX JobControl modules.
  • .statistics file is a dedicated file written by the DTR data transfer framework that contains data transfer measurements.
  • .diag file is written by the LRMS scripts: initially by submit-<lrms>-job.sh, then by the JobScript during job execution on the worker node, and finally by scan-<lrms>-job.sh, which adds an extract from the LRMS accounting data. It contains, among other things, resource usage and worker node environment data (see the sketch after this list).
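
As an illustration of how these sources look, the sketch below reads a .diag-style key=value file into a dictionary. It is not A-REX code: the file name and the key names used in the example (nodename, exitcode, WallTime) are assumptions for demonstration only, since the exact set of keys depends on the LRMS back-end:

    # Illustrative sketch (not A-REX code): read a .diag-style key=value file.
    # Key names such as "nodename", "exitcode" or "WallTime" are examples only;
    # the actual keys depend on the LRMS back-end scripts.
    def read_diag(path):
        """Parse key=value lines into a dict, keeping the last value of repeated keys."""
        data = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith('#') or '=' not in line:
                    continue
                key, _, value = line.partition('=')
                data[key.strip()] = value.strip()
        return data

    if __name__ == '__main__':
        diag = read_diag('job.00001.diag')  # hypothetical file name
        print(diag.get('nodename'), diag.get('exitcode'), diag.get('WallTime'))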

The local SQLite accounting database contains all the A-REX Accounting Record (AAR) data for every ARC CE job.

The initial record about the job is created based on the first ACCEPTED job event. The ID, ownership and submission time are recorded during this step and the accounting job status is marked as in-progress.

Any subsequent job event triggers event data recording in the database, which allows tracking of data staging time, LRMS queueing time, etc.

When the FINISHED job event occurs (execution is completed), the A-REX Accounting subsystem updates all AAR metrics in the database, storing resource usage, endtime, etc. Such a state is indicated by status={completed|failed|aborted}.
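
The lifecycle described above can be condensed into the following illustrative sketch. It is not the actual A-REX implementation (which writes to the SQLite database); the in-memory dictionary only visualizes how job events drive the record through its states:

    # Illustrative, in-memory condensation of the AAR lifecycle (not A-REX code).
    aars = {}  # jobid -> accounting record

    def on_job_event(jobid, event, timestamp, **final_metrics):
        if event == 'ACCEPTED':
            # first event: create the record with IDs/ownership and mark it in-progress
            aars[jobid] = {'submissiontime': timestamp, 'status': 'in-progress', 'events': []}
        elif event == 'FINISHED':
            # terminal event: store resource usage and one of completed/failed/aborted
            aars[jobid].update(final_metrics)
            aars[jobid]['endtime'] = timestamp
            aars[jobid]['status'] = final_metrics.get('status', 'completed')
        else:
            # intermediate events allow tracking of data staging and LRMS queueing times
            aars[jobid]['events'].append((event, timestamp))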

Using the local accounting database

Using the accounting data for statistics lookup and/or publishing to external services is accomplished via the arc.control Python modules.

The AccountingDBSQLite module is responsible for all low-level database operations, hiding the SQL queries behind an API used by the other workflows.

In particular, the accounting subsystem of the ARC Control Tool provides a command line interface to the typical queries, so you can retrieve the accounting data in a flexible manner.

Record publishing is carried out by the AccountingPublishing Python module, which includes:

  • classes for generating usage records in OGF.98 UR, EMI CAR 1.2, APEL Summaries and APEL Sync formats
  • classes that handle POST-ing of the records to the SGAS endpoint
  • classes that handle sending of the records to APEL via the AMS protocol
  • general wrapping classes to handle regular publishing and republishing of the data

Both the arcctl accounting republish tool and the jura-ng tool (which is run regularly by A-REX) use the same AccountingPublishing Python module.

The regular publishing process stores the endtime of the last published record in a dedicated publishing database. The next round of regular publishing reads the stored time and queries the records generated since then.
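
A minimal sketch of this incremental logic is shown below, assuming a numeric endtime column and an AAR table simply called aar. The state file stands in for the dedicated publishing database, and the table/column names should be checked against the ARC Accounting Database Schema:

    # Minimal sketch of incremental publishing (illustrative names and layout).
    import sqlite3

    STATE_FILE = 'last_published'        # stand-in for the dedicated publishing database

    def load_last_published():
        try:
            with open(STATE_FILE) as f:
                return int(f.read().strip())
        except (FileNotFoundError, ValueError):
            return 0

    def publish_new_records(db_path, send_batch):
        last = load_last_published()
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            'SELECT recordid, endtime FROM aar WHERE endtime > ? ORDER BY endtime',
            (last,)).fetchall()
        conn.close()
        if rows:
            send_batch(rows)             # hand the selected records to the SGAS/APEL sender
            with open(STATE_FILE, 'w') as f:
                f.write(str(rows[-1][1]))  # newest endtime becomes the new starting point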

Accounting data publishing details

Reporting to SGAS

SGAS has a simple custom web service interface loosely based on WS-ResourceProperties.

The AccountingPublishing Python module uses the insertion method of this interface to report URs directly, relying on the Python httplib library with SSL context wrapping for the communication.

To increase communication efficiency the AccountingPublishing SGASSender class sends URs in batches; SGAS accepts a batch of URs in a single request. The batch is an XML element called UsageRecords, containing elements representing the URs. The maximal number of URs in a batch can be set with the urbatchsize configuration parameter of the SGAS target.
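
The batching can be illustrated with the sketch below, which wraps already-built UR elements into a UsageRecords element and POSTs the batch over HTTPS using an SSL context with the host credentials. The endpoint path, element namespaces and the content of the individual URs are placeholders, not the exact values used by the ARC module:

    # Illustrative sketch of batched UR publishing over HTTPS with client certificate
    # authentication. Endpoint path and UR content are placeholders.
    import http.client
    import ssl
    import xml.etree.ElementTree as ET

    def post_ur_batch(host, path, ur_elements, hostcert, hostkey, cacerts):
        batch = ET.Element('UsageRecords')          # the batch wrapper element
        for ur in ur_elements:                      # ur_elements: pre-built UR XML elements
            batch.append(ur)
        body = ET.tostring(batch, encoding='utf-8')

        ctx = ssl.create_default_context(cafile=cacerts)
        ctx.load_cert_chain(certfile=hostcert, keyfile=hostkey)   # X.509 host credentials

        conn = http.client.HTTPSConnection(host, 443, context=ctx)
        conn.request('POST', path, body, headers={'Content-Type': 'text/xml'})
        resp = conn.getresponse()
        return resp.status, resp.read()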

Reporting to APEL

Changed in version 6.12.

APEL currently uses the AMS REST protocol for sending records.

The AccountingPublishing APELAMSDirectSender class implements the AMS REST communication without external dependencies.

Communication code relies on the same Python httplib library with SSL context wrapping.

It connects to the AMS endpoint using a valid SSL context with client certificate authentication and obtains an AMS authentication token.

Messages sent to APEL are S/MIME signed using the openssl binary tool and then sent to the endpoint using the AMS authentication token.

Reporting to APEL also supports sending records in batches. The default urbatchsize value is 500, following APEL recommendations, but it can be lowered if you run into message size issues (e.g. when sending large individual records).
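
A rough sketch of the signing and sending steps is given below. The openssl invocation follows the S/MIME signing described above, while the AMS URL layout, the key query parameter and the JSON message structure are assumptions made for illustration and should be checked against the ARGO Messaging Service documentation:

    # Rough sketch: S/MIME-sign a record batch with the openssl binary and publish it
    # to an ARGO AMS topic. URL layout and JSON structure are assumptions.
    import base64
    import json
    import subprocess
    import urllib.request

    def smime_sign(message, hostcert, hostkey):
        """Sign the message with the host certificate using the openssl binary."""
        return subprocess.run(
            ['openssl', 'smime', '-sign', '-signer', hostcert, '-inkey', hostkey, '-text'],
            input=message.encode(), stdout=subprocess.PIPE, check=True).stdout

    def publish_to_ams(ams_host, project, topic, token, signed_message):
        # assumed Pub/Sub-like payload: base64-encoded message data
        payload = json.dumps(
            {'messages': [{'data': base64.b64encode(signed_message).decode()}]}).encode()
        url = 'https://{0}/v1/projects/{1}/topics/{2}:publish?key={3}'.format(
            ams_host, project, topic, token)
        req = urllib.request.Request(
            url, data=payload, headers={'Content-Type': 'application/json'})
        return urllib.request.urlopen(req).read()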

Republishing

Republishing simply triggers the same AccountingPublishing classes for the timeframe defined on the command line.

All records are regenerated from accounting database data and sent to the target.

Security

The accounting directory <controldir>/accounting is by default accessible only by the user running A-REX (root in most cases).

All usage records are submitted using the X.509 credentials specified by the x509_ set of configuration options in arc.conf. No proxies are used for communication with accounting services.

The only access restriction made by an SGAS service is matching the Distinguished Name of the client (in this context the ARC CE) against a set of trusted DNs. When access is granted, policies are then applied by SGAS, granting publishing and/or querying rights. Clients with publishing rights can insert any UR, regardless of content. By default, querying rights only allow retrieving URs pertaining to jobs submitted by the querying entity.

Publishing records to APEL requires a glite-APEL endpoint defined for the grid site in the GOCDB. The ARC CE certificate DN should be added to that glite-APEL endpoint.

Third-party accounting queries

The ARC Control Tool accounting stats interface is powerful enough to get custom information from the accounting database, as shown in the examples.

However, if you want to generate a specific report or integrate the ARC accounting database with third-party software, you can of course use SQLite directly.

The SQLite database file location is: <controldir>/accounting/accounting.db.

It is worth being aware of the ARC Accounting Database Schema when developing third-party queries.
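
As an example, the sketch below aggregates wall time per queue directly from the SQLite file. The table and column names (aar, queue, usedwalltime) are illustrative and must be checked against the actual ARC Accounting Database Schema; in the real schema some of these values may be normalized into separate tables:

    # Illustrative third-party query against the local accounting database.
    # Table and column names are examples; verify them against the schema.
    import sqlite3

    DB_PATH = '/var/spool/arc/jobstatus/accounting/accounting.db'  # your <controldir>/accounting/accounting.db

    def walltime_per_queue(db_path=DB_PATH):
        conn = sqlite3.connect(db_path)
        conn.row_factory = sqlite3.Row
        rows = conn.execute(
            'SELECT queue, COUNT(*) AS jobs, SUM(usedwalltime) AS walltime '
            'FROM aar GROUP BY queue').fetchall()
        conn.close()
        return [(r['queue'], r['jobs'], r['walltime']) for r in rows]

    if __name__ == '__main__':
        for queue, jobs, walltime in walltime_per_queue():
            print(queue, jobs, walltime)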

Definition of the A-REX Accounting Record including attribute mappings to SGAS and APEL

ARC CE measures and collects a large amount of accounting information, including but not limited to the data required by the common aggregated accounting services SGAS and APEL.

All accounting information stored about a job is defined by what we call the A-REX Accounting Record (AAR).

AARs have a representation inside the local accounting database according to the database schema, as well as representations inside the A-REX and Python modules.

Local statistics are generated from the stored AAR information and provide a way to analyse on-site CE operations.

The following tables contain a flat list of the properties (NOT the database rendering) included in the AAR:

Table 4 Attributes used in the current implementation
A-REX Accounting Record (AAR) | SGAS OGF-UR | APEL CAR | Content description
jobid | JobIdentity.GlobalJobId; RecordIdentity is composed of jobid and hostname taken from the endpointurl | JobIdentity.GlobalJobId; RecordIdentity is composed of jobid and hostname taken from the endpointurl | The globally unique job ID assigned by A-REX
localid | JobIdentity.LocalJobId | JobIdentity.LocalJobId | LRMS job ID
jobname | JobName | JobName | User-specified job name
endpointurl | MachineName | MachineName, SubmitHost, Site | The A-REX job submission endpoint URL used for this job
endpointtype | not used | not used | The A-REX job submission endpoint type used for this job
lrms | not used | Infrastructure (used as a part of it) | The LRMS behind A-REX
queue | Queue | Queue | The name of the LRMS queue of the job
nodename | Host | Host | WN name(s) as given by the LRMS, separated by ':'
clienthost | SubmitHost (port removed) | not used | Client connection socket from the client to A-REX
usersn | UserIdentity.GlobalUserName | UserIdentity.GlobalUserName | The global user identity; at the moment it is the SN from the certificate
localuser | UserIdentity.LocalUserId | UserIdentity.LocalUserId | The mapped local user ID
authtokenattributes | UserIdentity.VO and child structures | UserIdentity.Group and UserIdentity.GroupAttribute | Contains the attributes of the auth token (VOMS FQANs in the current implementation)
projectname | ProjectName | UserIdentity.GroupAttribute | User-defined name of the project the job belongs to
status | Status | Status | The terminal state of an A-REX job: aborted, failed, completed
exitcode | not used | ExitStatus | The exit code of the payload in the LRMS
submissiontime | StartTime | StartTime | The timestamp of job acceptance at A-REX
endtime | EndTime | EndTime | The timestamp when the job reached the terminal state in A-REX
nodecount | NodeCount | NodeCount | Number of allocated worker nodes
inputfile | FileTransfers | not used | Details of each downloaded input file: url, size, transfer start, transfer end, downloaded from cache
outputfile | FileTransfers | not used | Details of each uploaded output file: url, size, transfer start, transfer end
usedmemory | Memory | Memory | Maximum virtual memory used by the job
usedmaxresident | Memory | Memory | Maximum resident memory used by the job
usedaverageresident | Memory | Memory | To be dropped from the AAR schema
usedwalltime | WallDuration | WallDuration | The measured clock time elapsed during the execution of the job in the LRMS, regardless of how many cores, processors or nodes the job ran on
usedcputime | CpuDuration | CpuDuration (with type all) | The total CPU time consumed by the job. If the job ran on many cores/processors/nodes, all separate consumptions are aggregated in this value
usedusercputime | CpuDuration (with type user, but should not be there) | CpuDuration (with type user) | The user part of the usedcputime
usedkernelcputime | CpuDuration (with type system, but should not be there) | CpuDuration (with type system) | The kernel part of the usedcputime
cores | Processors | Processors | The number of cores allocated to the job
usedscratchspace | StorageUsageBlock |  | The used size of the scratch directory at job termination in the LRMS
systemsoftware |  |  | The type and version of the system software (e.g. opsys, glibc, compiler, or the entire container wrapping the system software)
wninstance | ServiceLevel |  | Coarse-grain characterization tag for the worker node, e.g. BigMemory or t2.micro (aka Amazon instance type)
RTEs |  |  | List of used RTEs, including default ones
data-stagein-volume | Network class has something similar |  | The total volume of downloaded job input data in GB
data-stagein-time |  |  | The time spent by the DTR system to download input data for the job
data-stageout-volume | Network class has something similar |  | The total volume of uploaded job output data in GB
data-stageout-time |  |  | The time spent by the DTR system to upload output data of the job
lrms-submission-time |  |  | The timestamp when the job was handed over to the LRMS
lrmstarttime |  |  | The timestamp when the payload started in the LRMS
lrmsendtime |  |  | The timestamp when the payload completed in the LRMS
benchmark | Benchmark | ServiceLevel | The type and the corresponding benchmark value of the assigned WN
Table 5 NOT USED SGAS or APEL attributes
SGAS OGF-UR | APEL CAR
ProcessID |
Charge | Charge
Swap | Swap