Advanced Resource Connector (ARC) version 0.8 is the third stable
release of this grid middleware. Two and a half years after the previous
stable release (0.6), we are proud to announce a number of features
that significantly improve performance and usability. This release also
contains a technology preview of the new ARC job management service,
A-REX. Moreover, release 0.8 sees a license change from GPL v2 to
Apache v2.0, opening new perspectives for the use of ARC in conjunction
with other products.
Although ARC release 0.8 is based on essentially the same technology
as 0.6 and is a direct upgrade from v0.6.5, upgrading requires certain
changes, as described in the upgrade instructions.
The Advanced Resource Connector as of version 0.8 is an open source
software solution that enables production quality computational grids
for high throughput computing, encompassing a wide range of size and
purpose. The middleware integrates computing resources (typically,
computing clusters managed by a batch system) and, to a lesser extent,
storage services, making them available via a common secure grid
layer. The middleware builds upon standard open source solutions such
as OpenLDAP and OpenSSL, as well as some Globus Toolkit 4 pre-WS
libraries. It relies on well-tested pre-OGSA grid technologies in
creating unique ARC-specific services and tools. ARC developers strive
to achieve simplicity, non-invasiveness, high performance, stability
and reliability. With release 0.8 comes enhanced scalability and
improved performance of computing and information services. ARC
middleware is officially supported on all major Linux flavours, and is
known to operate smoothly on other Linux systems, with a variety of
batch job management systems.
ARC provides a reliable implementation of fundamental grid services
optimized for high throughput computing, such as information services,
resource discovery and monitoring, job submission and management,
brokering, and basic data and resource management. Most of these
services are provided through the security layer of GSI.
ARC implements several services and tools essential for a production
grid:
- Computing service: two alternative implementations are available in
  this release, the traditional combination of the ARC-specific Grid
  Manager and GridFTP server, and the new standards-compliant A-REX
  service (technology preview).
- Information system, implementing the NorduGrid information model
  and providers. This release features a new Globus-independent
  implementation of the information services.
- Client tool implemented as a CLI with integrated resource
  discovery, matchmaking and brokering, making use of the ARC xRSL as
  well as JSDL languages for job description; it also offers basic
  data management capabilities.
- Storage service integrated with a data indexing system (Smart
  Storage Element).
- Persistent computing resource usage logging system.
- Real-time monitoring system that relies entirely on the information
  system.
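To illustrate the job description languages mentioned above, here is a
minimal xRSL job description. The attribute names follow the xRSL
manual; the file names and values are invented for the example:

```
&(executable="run.sh")
 (arguments="input.dat")
 (inputFiles=("run.sh" "")("input.dat" ""))
 (outputFiles=("result.dat" ""))
 (stdout="out.txt")
 (stderr="err.txt")
 (cpuTime="60")
 (jobName="xrsl-example")
```

Such a description would typically be submitted to a resource with the
ngsub client command.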
ARC main features are:
- Acknowledged simplicity in deployment and maintenance on a wide
  range of computing resources of various configurations: successful
  deployment has been reported on all major Linux flavours.
- Highly sophisticated grid gatekeepers for computing resources (Grid
  Manager or A-REX), capable of the most complex job and data
  management operations, with comprehensive security features and
  extensibility.
- Clear separation between local resources and the grid layer; the
  following batch systems are supported via grid plug-ins: all PBS
  flavours, SGE, Condor, LSF, SLURM, LoadLeveler, as well as simple
  fork.
- High efficiency of computing resource utilization: input/output
  grid data staging and other grid-specific procedures are handled
  exclusively by the grid front-end. This also allows for parallel
  deployment with other grid middlewares.
- No centralized workload management service.
- Efficient information system that accurately reflects up-to-date
  system status, suitable for both resource discovery and monitoring.
- Easy-to-install, small and yet very powerful client tool making
  intelligent use of the distributed information. The client tool is
  based on a powerful API and is available in several human
  languages, facilitating development of third-party
  application-specific tools and portals.
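The batch system plug-in mentioned above is selected in the ARC
configuration file. As a hedged sketch, a hypothetical arc.conf
fragment for a PBS-managed cluster could look like the following; the
section layout follows the ARC configuration template, but the option
values are invented and the exact option set may differ between
releases:

```
[common]
lrms="pbs"
pbs_bin_path="/usr/bin"

[cluster]
cluster_alias="Example cluster"

[queue/default]
# queue-specific options go here
```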
Changes since ARC v0.6.5
- License changed from GPL v2 to Apache v2.0.
- Preview of the A-REX job execution service is included. It is a
  re-write of the Grid Manager, adapting it to a service-oriented
  architecture; in the near future it will replace the old Grid
  Manager.
- Information system components (Globus-based GRIS and GIIS) are
  replaced by native OpenLDAP implementations: ARC GIIS and ARC
  BDII. This improves stability and maintainability of the system.
New functionalities and features:
- Implemented caching of authentication results for cached files.
- Added support for multiple session directories.
- Added support for a dynamic list of output files (Bug #1315).
- Create required directories when SRM_INVALID_PATH is returned from
  prepareToPut (Bug #1382).
- Fix for an error in RTE resulting in an empty logfile (Bug #1385).
- Added the rerunable state to the job info print-out (Bug #1395).
- Cache clean: use the correct block size (disk blocks rather than
  I/O blocks).
- Many fixes in the config_parser function used by the job control
  scripts: multi-valued options support, fixed subsection listing,
  more explicit messages, optional quoting, no complaints about the
  'all' command, fixed handling of single quotes in values.
- Fix to process VO entries in the right order in nordugridmap.
- Resolve file stat only if long_list is requested, according to the
  description in the header (Bug #1407).
- Set ownership of a renewed proxy to that of the mapped user.
- Fix for ownership being set to uninitialized values while renewing
  a proxy.
- Configuration: added a VO-based configuration example.
- Simplified handling of the cache configuration.
- Report errors that occur while processing a user's access control
  for a job.
- Enhanced the authorization part of the configuration template.
- Added a check for libraries located in lib64 for LCAS and LCMAPS.
- CA_utils: added support for multiple URLs in *.crl_url.
- Automatic selection of the type of delegated proxy (Bug #1410).
- Fix for dynamic output files breaking when used in combination with
  the scratchdir option (Bug #1414).
- The SRM client respects the estimated wait time returned by the
  server.
- Added a check for Globus GSSAPI and set configuration variables in
  packaging, needed for the LCAS plugin.
- Fix for the default job memory being too high in submit-lsf-job.
- Replaced stat with a more portable Perl implementation for getting
  the file owner.
- Fixed the nordugrid-job-reqwalltime bug in the LoadLeveler back-end.
- Deal gracefully with zero cputime/walltime in the information
  system for killed jobs (Bug #1237).
- Avoid a warning with older versions of the 'find' utility.
- Don't request disk space from Condor when not using local scratch
  (Bug #1329).
- Allow for variable substitution in the executable name and
  arguments; don't pollute the environment of the worker node with
  unrelated variables.
- Fix a problem with Condor 7.3 when queues are empty.
- Check that maxcputime is numeric.
- More portable collection of load averages, CPU info and process
  info.
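The cache-clean fix concerning block sizes rests on a distinction
worth making explicit: the space a file occupies on disk (POSIX
st_blocks, counted in 512-byte units) can differ from its logical
size (st_size). A minimal Python sketch of the two measurements,
not taken from the ARC sources:

```python
import os

def disk_usage_bytes(path):
    """Space the file actually occupies on disk, in bytes.
    POSIX st_blocks counts 512-byte units, which can differ from
    st_size for sparse files or files padded to a full filesystem
    block."""
    return os.stat(path).st_blocks * 512

def apparent_size_bytes(path):
    """Logical file size as reported by st_size."""
    return os.stat(path).st_size
```

A cache cleaner that sums apparent sizes can badly over- or
under-estimate real disk occupancy, which is why the disk-block figure
is the right one to use.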
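The SRM wait-time change above can be sketched as a simple retry loop
that prefers the server's own estimate over a fixed polling interval.
The callable interface below is hypothetical and only illustrates the
behaviour, not the actual ARC SRM client API:

```python
import time

def put_with_retry(try_put, max_attempts=5, default_wait=10.0):
    """Retry a prepareToPut-style request until the server reports
    the transfer slot is ready.  try_put is a hypothetical callable
    returning (status, wait_hint_seconds); when the server supplies
    a wait estimate, sleep for that long instead of a fixed
    interval."""
    for _ in range(max_attempts):
        status, wait_hint = try_put()
        if status == "ready":
            return True
        time.sleep(wait_hint if wait_hint is not None else default_wait)
    return False
```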
The middleware is tested on a variety of Linux systems. While it
should work on other Unix-like systems, this release has not been
tested on them.
- For a computing service, you will need a Linux cluster running a
  Local Resource Management System, or a standalone Linux box
  configured with "fork" job submission. It can be shared with other
  services.
- For a storage service, you will need a conventional disk array with
  a Linux front-end, or simply a Linux box with some storage
  capacity. It can be shared with other services.
- For all the optional services, a shared Linux box is sufficient.
- For a client, any Linux machine will do; no administrator
  privileges are needed.
The middleware is free to be deployed anywhere by anybody. Pre-built
binary releases for a dozen Linux platforms can be downloaded from the
NorduGrid download area or via the NorduGrid package repositories.
The software is released under the Apache License v2.0.
The NorduGrid repository hosts the source code and provides all the
essential external software that is not part of a standard Linux
distribution.
ARC packages fall into two main kinds: the server and the client
parts. Server parts are typically installed on a computing resource by
a system administrator, while the client can be set up anywhere and
needs no system administrator privileges. Detailed installation
instructions are distributed with the middleware documentation and are
available at the Web site:
Known issues:
- Due to a bug in SLURM < 1.3.15, scan-SLURM-jobs will kill jobs when
  slurmctld is not available. Solution: upgrade SLURM to version
  1.3.15 or later.
- When deployed with A-REX, SRM links for input and output files
  cannot be used (Bug #1562).
User support and site installation assistance are provided via the
request tracking system available at .
In addition, NorduGrid runs several mailing lists, among which the
nordugrid-discuss mailing list is a general forum for all kinds of
issues related to the ARC middleware.