Quickstart ARC: towards distributed computing in a few minutes - x509 edition
Scared of distributed computing complexities?
With ARC7 you can set up a Computing Element and try common distributed computing workflows in just a few minutes!
ARC7 comes with so-called zero configuration included and works out of the box without any manual configuration. It has a pre-installed x509 host certificate signed by a Test-CA.
If you want to test your ARC-CE with token submission, two extra steps need to be performed in order to set up a Test JWT issuer and allow the client (remote, or on the ARC-CE itself) to trust tokens from this issuer.
If you want to test job submission from a remote ARC client, the client must trust the ARC-CE host certificate, which is issued by the Test-CA, and you must therefore apply the extra step for both the token and the x509 user case. These are described in Step 5c and Step 5b respectively.
You can try ARC using the legacy x509 user certificate, or with the newer JSON Web Token capability. The procedure below splits into x509 versus token at Step 5. The two require slightly different configuration options on the ARC server, and different procedures to acquire the authentication document (certificate or token).
The ARC server can be set up to accept both user x509 certificates and user tokens in parallel, or just one of the two. This is up to you.
Step 0. Prerequisites
The zero-configured A-REX comes with the REST interface enabled. It runs by default on port 443, so make sure this port is not firewalled.
If you are testing ARC with a remote client: either register your ARC-CE on a DNS server, or add the ARC-CE host name to the /etc/hosts file on the client host.
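For example, on a CE running firewalld and a client host without DNS registration, the two prerequisites could be handled as sketched below (the host name arc.example.org and the address 192.0.2.10 are placeholders, not part of the zero-conf setup):
[root@server]# firewall-cmd --permanent --add-port=443/tcp && firewall-cmd --reload
[user@client]$ echo "192.0.2.10  arc.example.org" | sudo tee -a /etc/hosts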
Step 1. Enable NorduGrid ARC7 repos
Repository security
The NorduGrid RPM packages and DEB repositories are signed, and in order for the repository tools APT and YUM to verify them you must install the NorduGrid GPG key:
For rpm-based distributions like Red Hat Enterprise Linux and Fedora:
[root ~]# rpm --import http://download.nordugrid.org/RPM-GPG-KEY-nordugrid-7
For Ubuntu distributions with sudo:
[user ~]$ wget -q http://download.nordugrid.org/DEB-GPG-KEY-nordugrid-7.asc -O- | sudo apt-key add -
For Debian without sudo:
[root ~]# wget -q http://download.nordugrid.org/DEB-GPG-KEY-nordugrid-7.asc -O- | apt-key add -
Repository configuration
The NorduGrid ARC repositories for the Red Hat Enterprise Linux / CentOS packaging utility dnf can be configured through /etc/yum.repos.d/nordugrid.repo.
The repository configuration can be set up automatically with dnf by installing the nordugrid-release package or by creating the configuration file manually.
The easiest way to configure DNF to use the NorduGrid repository for Red Hat Enterprise Linux, CentOS and similar distributions is to install the nordugrid-release package, which can be found in the NorduGrid package repository for the appropriate RHEL/EPEL release. Links to the release packages are available on the NorduGrid download site (download.nordugrid.org).
Install with dnf (Fedora, CentOS Stream, Rocky Linux, CentOS Linux 8+9) by copying the appropriate link from above:
[root ~]# dnf install <rhel-repo link>
This creates the appropriate repo files in /etc/yum.repos.d/.
For manual YUM repository setup, create a file /etc/yum.repos.d/nordugrid.repo with the following contents (here using Rocky as an example; if you are on Fedora, replace rocky with fedora). If you are installing an alpha, beta or release candidate, please set nordugrid-testing to enabled=1.
[nordugrid]
name=NorduGrid - $basearch - base
baseurl=http://download.nordugrid.org/repos/7/rocky/$releasever/$basearch/base
enabled=1
gpgcheck=1
gpgkey=http://download.nordugrid.org/RPM-GPG-KEY-nordugrid-7

[nordugrid-updates]
name=NorduGrid - $basearch - updates
baseurl=http://download.nordugrid.org/repos/7/rocky/$releasever/$basearch/updates
enabled=1
gpgcheck=1
gpgkey=http://download.nordugrid.org/RPM-GPG-KEY-nordugrid-7

[nordugrid-testing]
name=NorduGrid - $basearch - testing
baseurl=http://download.nordugrid.org/repos/7/rocky/$releasever/$basearch/testing
enabled=0
gpgcheck=1
gpgkey=http://download.nordugrid.org/RPM-GPG-KEY-nordugrid-7
The NorduGrid ARC repositories for the Ubuntu packaging utility APT can be configured through /etc/apt/sources.list or, when supported, through a repository-specific file /etc/apt/sources.list.d/nordugrid.list.
The repository configuration can be set up automatically by installing the nordugrid-release package or by creating the configuration file manually.
Ubuntu version names:
Ubuntu release | Code name
---|---
24.04 | noble
23.10 | mantic
22.04 | jammy
20.04 | focal
The examples below give you the link for the most recent Ubuntu releases. Packages are shown for the amd64 architecture; replace amd64 with i386 if required for your architecture.
Install the source file with dpkg, example shown for Ubuntu 24.04:
[root ~]# wget -q https://download.nordugrid.org/packages/nordugrid-release/releases/7/ubuntu/24.04/amd64/nordugrid-release_7~noble1_all.deb
[root ~]# dpkg -i nordugrid-release_7~noble1_all.deb
For a different version of Ubuntu, change the version names appropriately.
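As an illustration, assuming the same URL pattern holds for older releases, Ubuntu 22.04 (jammy) would look like the following (the exact file name is not verified here; check the download server for the actual package name):
[root ~]# wget -q https://download.nordugrid.org/packages/nordugrid-release/releases/7/ubuntu/22.04/amd64/nordugrid-release_7~jammy1_all.deb
[root ~]# dpkg -i nordugrid-release_7~jammy1_all.deb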
For manual APT repository setup for Ubuntu, the APT sources file should contain the following (here shown for Ubuntu 22.04 Jammy).
The configurations for the various APT-based distributions can be found in the following sections.
To enable a specific repository, remove the "#" from the beginning of the line, before the "deb", as shown for the Base channel.
# Base channel - must be enabled
deb http://download.nordugrid.org/repos/7/ubuntu/ jammy main
deb-src http://download.nordugrid.org/repos/7/ubuntu/ jammy main
# Updates to the base release - should be enabled
deb http://download.nordugrid.org/repos/7/ubuntu/ jammy-updates main
deb-src http://download.nordugrid.org/repos/7/ubuntu/ jammy-updates main
# Scheduled package updates - optional
#deb http://download.nordugrid.org/repos/7/ubuntu/ jammy-experimental main
#deb-src http://download.nordugrid.org/repos/7/ubuntu/ jammy-experimental main
For a different release version, change the version name accordingly.
The NorduGrid ARC repositories for the Debian packaging utility APT can be configured through /etc/apt/sources.list or, when supported, through a repository-specific file /etc/apt/sources.list.d/nordugrid.list.
The repository configuration can be set up automatically by installing the nordugrid-release package or by creating the configuration file manually.
Debian version names:
Debian release | Code name
---|---
12 | bookworm
11 | bullseye
10 | buster
9 | stretch
The examples below give you the link for the most recent Debian releases. Packages are shown for the amd64 architecture; replace amd64 with i386 if required for your architecture.
Install the source file with dpkg, example shown for Debian 12:
[root ~]# wget -q https://download.nordugrid.org/packages/nordugrid-release/releases/7/debian/12/amd64/nordugrid-release_7~bpo12+1_all.deb
[root ~]# dpkg -i nordugrid-release_7~bpo12+1_all.deb
For a different version of Debian, change the version names appropriately.
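Similarly, assuming the same URL pattern, Debian 11 (bullseye) would look like the following (the exact file name is not verified here; check the download server for the actual package name):
[root ~]# wget -q https://download.nordugrid.org/packages/nordugrid-release/releases/7/debian/11/amd64/nordugrid-release_7~bpo11+1_all.deb
[root ~]# dpkg -i nordugrid-release_7~bpo11+1_all.deb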
For manual APT repository setup for Debian, the APT sources file should contain the following (here shown for Debian 12 Bookworm).
The configurations for the various APT-based distributions can be found in the following sections.
To enable a specific repository, remove the "#" from the beginning of the line, before the "deb", as shown for the Base channel.
# Base channel - must be enabled
deb http://download.nordugrid.org/repos/7/debian/ bookworm main
deb-src http://download.nordugrid.org/repos/7/debian/ bookworm main
# Updates to the base release - should be enabled
deb http://download.nordugrid.org/repos/7/debian/ bookworm-updates main
deb-src http://download.nordugrid.org/repos/7/debian/ bookworm-updates main
# Scheduled package updates - optional
#deb http://download.nordugrid.org/repos/7/debian/ bookworm-experimental main
#deb-src http://download.nordugrid.org/repos/7/debian/ bookworm-experimental main
For a different release version, change the version name accordingly.
Set up dependency repositories
On RHEL 8 based distributions (e.g. Rocky Linux 8, CentOS Stream 8), enable the PowerTools repository:
dnf config-manager --set-enabled powertools
dnf makecache
On RHEL 9 based distributions (e.g. Rocky Linux 9, CentOS Stream 9), enable the CRB repository:
dnf config-manager --set-enabled crb
dnf makecache
On Debian and Ubuntu, refresh the package index instead:
[root~]# apt-get update
Step 2. Install A-REX
ARC Resource-coupled EXecution service (A-REX) is a core component that manages authentication, authorization and job life cycle. It is enough to have A-REX installed to have a minimal computing element:
[root ~]# dnf -y install nordugrid-arc-arex
[root ~]# apt-get -y install nordugrid-arc-arex
Step 3. Run A-REX
To start ARC services just run:
[root ~]# arcctl service start --as-configured
You can check if A-REX is running with:
[root ~]# arcctl service list
arc-arex (Installed, Disabled, Running)
arc-arex-ws (Installed, Disabled, Running)
arc-datadelivery-service (Not installed, Disabled, Stopped)
arc-infosys-ldap (Not installed, Disabled, Stopped)
Step 4. Install the ARC client
Install ARC client tools on the client host
Note
In the zero-conf setup we install the client and the ARC control tool on the same server as the ARC-CE, so client and host are the same machine. Typically you would install the client on another (remote) machine.
[root@server]# dnf -y install nordugrid-arc-client nordugrid-arc-arcctl
[root@server]# apt-get -y install nordugrid-arc-client nordugrid-arc-arcctl
Step 5. Install and enable autocompletion (optional)
The arcctl tool automates many ARC CE operations and is designed with bash-completion in mind.
If you would like to use ARC in production it is advised to have completion enabled:
[root ~]# dnf install -y bash-completion python-argcomplete
[root ~]# activate-global-python-argcomplete
[root ~]# apt-get install -y bash-completion python-argcomplete
[root ~]# activate-global-python-argcomplete
Step 6. Generate a user x509 certificate and key for testing
Authentication of grid services and users relies heavily on cryptography and uses a certificate/key pair for each entity. ARC7 comes with a Test Certificate Authority on board that can easily issue test user certificates.
The ARC7 zero configuration implements a closed-by-default approach, defining a special authorization object called an authgroup.
During test-user certificate generation, arcctl test-ca will automatically add the issued certificate subject to the testCA.allowed-subjects file, transparently opening the possibility of job submission to the test user. The testCA.allowed-subjects file can be found in your /etc/grid-security folder.
No other subject will be able to submit to your system before you change the authgroup settings in arc.conf, which you will do once you configure ARC for production use.
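To see which certificate subjects are currently authorized, you can simply inspect this file; the subject shown below is only an example of the kind of DN the Test-CA issues:
[root@server]# cat /etc/grid-security/testCA.allowed-subjects
/DC=org/DC=nordugrid/DC=ARC/O=TestCA/CN=Test User 50350053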
You can test submission from the host running A-REX or from any other machine by following the instructions below.
To generate a test certificate/key and install it to standard location inside a local user’s home directory, run:
[root@server]# arcctl test-ca usercert --install-user user01
User certificate and key are installed to default /home/user01/.globus location for user user01.
Note
Replace user01 with the actual username you want to submit jobs as. While it is technically possible to submit jobs as the root user, we strongly discourage that.
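If that account does not yet exist on the CE host, create it before running the command above; a plain local account is sufficient for this test (user01 is a placeholder name):
[root@server]# useradd -m user01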
To generate a test certificate/key for a remote client, two steps are needed.
First create a tar-ball containing usercert, ca-certs and setup script using the TestCA automatically setup on your ARC-CE server:
[root@server]# arcctl test-ca usercert -t
[2025-08-06 18:44:20,340] [ARCCTL.TestCA] [INFO] [53878] [Certificate and key for user Test User 80404674 are exported to usercert-Test-User-80404674.tar.gz]
[2025-08-06 18:44:20,341] [ARCCTL.TestCA] [INFO] [53878] [Printing usage instructions for tarball]
tar xzf usercert-Test-User-80404674.tar.gz
source arc-testca-usercert/setenv.sh
In addition it adds the newly created x509 user certificate subject to the file /etc/grid-security/testCA.allowed-subjects, which ensures that jobs and other requests issued by a user with this certificate are accepted.
Next copy the tar-ball over to your client machine by a method of your choice (e.g. scp), and run the two commands as per instructions above:
[user@client]$ tar -xzvf usercert-Test-User-80404674.tar.gz
[user@client]$ source arc-testca-usercert/setenv.sh
This sets up the necessary environment variables and the trust between the ARC-CE and the ARC client machine.
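If you want to check what the script configured, you can print the certificate-related environment variables; the exact set exported by setenv.sh may differ, but X509_USER_CERT, X509_USER_KEY and X509_CERT_DIR are the conventional names:
[user@client]$ env | grep X509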
Step 7. Get a proxy certificate
To submit jobs or perform any other action towards the ARC-CE you must authenticate yourself. We will do this with a proxy certificate, which acts as a single sign-on credential in a distributed grid infrastructure.
To generate a proxy certificate do:
[user ~]$ arcproxy
Your identity: /DC=org/DC=nordugrid/DC=ARC/O=TestCA/CN=Test User 50350053
Proxy generation succeeded
Your proxy is valid until: 2023-06-03 01:10:38
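To inspect the proxy you just created (identity, type and remaining validity), arcproxy can print information about it; the -I option shown below is assumed here, so consult arcproxy --help for the exact flag in your client version:
[user ~]$ arcproxy -I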
Step 8. Restart A-REX
On the ARC-CE, restart the A-REX services to activate the configuration changes:
[root ~]# arcctl service restart -a
Step 9. Check all is ok
You can run the client commands (arcinfo, arcsub etc.) from the host running A-REX or from any other machine where you have installed the ARC client as described in Steps 4 and 5.
You can start with the information query about your newly installed ARC computing element:
[user ~]$ arcinfo -C https://arc.example.org/arex
Computing service:
Information endpoint: https://arc.example.org:443/arex
Submission endpoint: https://arc.example.org:443/arex (status: ok, interface: org.nordugrid.arcrest)
This means that all is well: the ARC client received information from the ARC-CE confirming that the information and submission endpoints are available and ok.
Note
The examples use arc.example.org as the domain name of the A-REX host. See Step 0. Prerequisites for more information.
Tip: In the zero-conf setup with a local client, you can use $(hostname) instead of typing the hostname for these tests. For example:
arcinfo -C $(hostname)
Warning
It can take a few minutes after the setup for everything to settle, so if you see status: critical, wait about a minute and check again.
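If you would rather poll than re-run the command by hand, a simple sketch (assuming the watch utility is available, and using the placeholder hostname from above) is:
[user ~]$ watch -n 30 'arcinfo -C https://arc.example.org/arex'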
Step 10. Submit a job and check that it is running
A simple job can be submitted with the arctest tool:
[user ~]$ arctest -J 2 -C https://arc.example.org/arex
Job submitted with jobid: https://arc.example.org:443/arex/rest/1.0/jobs/f77b3d1b1efb
The job status can be checked with the arcstat tool:
[user ~]$ arcstat https://arc.example.org:443/arex/rest/1.0/jobs/f77b3d1b1efb
Job: https://arc.example.org:443/arex/rest/1.0/jobs/f77b3d1b1efb
Name: arctest2
State: Running
Status of 1 jobs was queried, 1 jobs returned information
To fetch the job’s stdout, run the arccat tool:
[user ~]$ arccat https://arc.example.org:443/arex/rest/1.0/jobs/f77b3d1b1efb
HOSTNAME=arc.example.org
GRID_GLOBAL_JOBURL=https://arc.example.org:443/arex/f77b3d1b1efb
MALLOC_ARENA_MAX=2
PWD=/var/spool/arc/sessiondir/f77b3d1b1efb
SYSTEMD_EXEC_PID=374194
<output_omitted>
Step 11. Play more with the ARC Computing Element
As an admin you might frequently need to extract information from the logs and directories that the ARC computing element uses. A brief list of the relevant paths can be obtained from:
[root ~]# arcctl config brief
ARC Storage Areas:
Control directory:
/var/spool/arc/jobstatus
Session directories:
/var/spool/arc/sessiondir
Scratch directory on Worker Node:
Not configured
Additional user-defined RTE directories:
Not configured
ARC Log Files:
A-REX Service log:
/var/log/arc/arex.log
A-REX Jobs log:
/var/log/arc/arex-jobs.log
A-REX Helpers log:
/var/log/arc/job.helper.errors
A-REX WS Interface log:
/var/log/arc/ws-interface.log
Infosys Infoproviders log:
/var/log/arc/infoprovider.log
To get information about and manage jobs on the A-REX server, arcctl job is useful.
Operations include, but are not limited to:
Listing jobs:
[root ~]# arcctl job list
f5ab040cdc51
f617259d58ec
<output omitted>
[root ~]# arcctl job list --long
f5ab040cdc51 FINISHED arctest2 https://wlcg.cloud.cnaf.infn.it//b9f1e5e2-a8f0-4332-bd9d-58bd63898cc6
f617259d58ec FINISHED arctest2 https://wlcg.cloud.cnaf.infn.it//b9f1e5e2-a8f0-4332-bd9d-58bd63898cc6
<output omitted>
Job general information:
[root ~]# arcctl job info f77b3d1b1efb
Name : arctest2
Owner : https://wlcg.cloud.cnaf.infn.it//b9f1e5e2-a8f0-4332-bd9d-58bd63898cc6
State : FINISHED
LRMS ID : 376176
Modified : 2023-06-02 16:07:05
Job log:
[root ~]# arcctl job log f77b3d1b1efb
2023-06-02T14:06:51Z Job state change UNDEFINED -> ACCEPTED Reason: (Re)Accepting new job
2023-06-02T14:06:51Z Job state change ACCEPTED -> PREPARING Reason: Starting job processing
2023-06-02T14:06:51Z Job state change PREPARING -> SUBMIT Reason: Pre-staging finished, passing job to LRMS
----- exiting submit_fork_job -----
2023-06-02T14:06:53Z Job state change SUBMIT -> INLRMS Reason: Job is passed to LRMS
---------- Output of the job wrapper script -----------
Detecting resource accounting method available for the job.
Looking for /usr/bin/time tool for accounting measurements
GNU time found and will be used for job accounting.
------------------------- End of output -------------------------
2023-06-02T14:07:05Z Job state change INLRMS -> FINISHING Reason: Job finished executing in LRMS
2023-06-02T14:07:05Z Job state change FINISHING -> FINISHED Reason: Stage-out finished.
A-REX logs that mention the job:
[root ~]# arcctl job log f77b3d1b1efb --service
### /var/log/arc/arex.log:
[2023-06-02 16:06:51] [Arc] [INFO] [374270/3] f77b3d1b1efb: State: ACCEPTED: parsing job description
[2023-06-02 16:06:51] [Arc] [INFO] [374270/3] f77b3d1b1efb: State: ACCEPTED: moving to PREPARING
[2023-06-02 16:06:51] [Arc] [INFO] [374270/3] f77b3d1b1efb: State: PREPARING from ACCEPTED
[2023-06-02 16:06:51] [Arc] [INFO] [374270/3] f77b3d1b1efb: State: SUBMIT from PREPARING
[2023-06-02 16:06:51] [Arc] [INFO] [374270/3] f77b3d1b1efb: state SUBMIT: starting child: /usr/share/arc/submit-fork-job
[2023-06-02 16:06:53] [Arc] [INFO] [374270/3] f77b3d1b1efb: state SUBMIT: child exited with code 0
[2023-06-02 16:06:53] [Arc] [INFO] [374270/3] f77b3d1b1efb: State: INLRMS from SUBMIT
[2023-06-02 16:07:05] [Arc] [INFO] [374270/3] f77b3d1b1efb: Job finished
[2023-06-02 16:07:05] [Arc] [INFO] [374270/3] f77b3d1b1efb: State: FINISHING from INLRMS
[2023-06-02 16:07:05] [Arc] [INFO] [374270/3] f77b3d1b1efb: State: FINISHED from FINISHING
### /var/log/arc/ws-interface.log:
Getting job attributes:
[root ~]# arcctl job attr f77b3d1b1efb jobname
arctest2
Get production ready
Now you are ready to Install production ARC7 Computing Element!