Quickstart ARC: towards distributed computing in a few minutes - token edition
Scared of distributed computing complexities?
With ARC7 you can set up a Computing Element and try common distributed computing workflows in just a few minutes!
ARC7 ships with a so-called zero configuration and works out of the box without any manual configuration. It has a pre-installed x509 host certificate signed by a Test-CA.
If you want to test your ARC-CE with token submission, two extra steps are needed: set up a Test JWT issuer and allow the client (a remote client, or the ARC-CE itself) to trust tokens from this issuer.
If you want to test job submission from a remote ARC client, the client must also trust the ARC-CE host certificate, which is issued by the Test-CA. The token trust setup is described in Step 6, and the Test-CA trust setup in the section For a remote client: Setup trust of the A-REX server below.
You can try ARC using the legacy x509 user certificate, or with the newer JSON Web Token (JWT) capability. This token edition follows the token path, starting at Step 6. The two approaches require slightly different configuration options on the ARC server, and different procedures to acquire the authentication credential (certificate or token).
The ARC server can be set up to accept both user x509 certificates and user tokens in parallel, or just one of the two. This is up to you.
Step 0. Prerequisites
The zero-configured A-REX comes with the REST interface enabled. It runs by default on port 443, so make sure it is not firewalled.
If you are testing ARC with a remote client: either register your ARC-CE on a DNS server, or add the ARC-CE host name to the /etc/hosts file on the client host.
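For example, on a client host without DNS registration you can add a hosts entry and probe the REST port like this (the IP address 192.0.2.10 and the host name arc.example.org are placeholders for your own values):
[root@client]# echo "192.0.2.10 arc.example.org" >> /etc/hosts
[user@client]$ curl -sk -o /dev/null -w '%{http_code}\n' https://arc.example.org:443/
Any HTTP status code printed (rather than a connection error) shows that port 443 is reachable.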
Step 1. Enable NorduGrid ARC7 repos
Repository security
The NorduGrid RPM and DEB repositories are signed, and in order for the repository tools APT and YUM/DNF to verify them you must install the NorduGrid GPG key:
For rpm-based distributions like Red Hat Enterprise Linux and Fedora:
[root ~]# rpm --import http://download.nordugrid.org/RPM-GPG-KEY-nordugrid-7
For Ubuntu distributions with sudo:
[user ~]$ wget -q http://download.nordugrid.org/DEB-GPG-KEY-nordugrid-7.asc -O- | sudo apt-key add -
For Debian without sudo:
[root ~]# wget -q http://download.nordugrid.org/DEB-GPG-KEY-nordugrid-7.asc -O- | apt-key add -
Repository configuration
The NorduGrid ARC repositories for the Red Hat Enterprise Linux / CentOS packaging utility dnf can be configured through /etc/yum.repos.d/nordugrid.repo.
The repository configuration can be set up automatically by installing the nordugrid-release package, or by creating the configuration file manually.
The easiest way to configure DNF to use the NorduGrid repository for Red Hat Enterprise Linux, CentOS and similar distributions is to install the nordugrid-release package, which can be found in the NorduGrid package repository for the appropriate RHEL/EPEL release. Links to the release packages:
Install with dnf (Fedora, CentOS Stream, Rocky Linux, CentOS Linux 8+9) by copying the appropriate link from above:
[root ~]# dnf install <rhel-repo link>
This creates the appropriate repo files in /etc/yum.repos.d/.
For manual YUM repository setup, create a file /etc/yum.repos.d/nordugrid.repo with the following contents (here using Rocky as an example; if you are on Fedora, replace rocky with fedora).
If you are installing an alpha, beta or release candidate, please set the nordugrid-testing repository to enabled=1.
[nordugrid]
name=NorduGrid - $basearch - base
baseurl=http://download.nordugrid.org/repos/7/rocky/$releasever/$basearch/base
enabled=1
gpgcheck=1
gpgkey=http://download.nordugrid.org/RPM-GPG-KEY-nordugrid-7

[nordugrid-updates]
name=NorduGrid - $basearch - updates
baseurl=http://download.nordugrid.org/repos/7/rocky/$releasever/$basearch/updates
enabled=1
gpgcheck=1
gpgkey=http://download.nordugrid.org/RPM-GPG-KEY-nordugrid-7

[nordugrid-testing]
name=NorduGrid - $basearch - testing
baseurl=http://download.nordugrid.org/repos/7/rocky/$releasever/$basearch/testing
enabled=0
gpgcheck=1
gpgkey=http://download.nordugrid.org/RPM-GPG-KEY-nordugrid-7
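After saving the repo file you can check that the repositories are visible to DNF (the exact output depends on your distribution and architecture; the lines below are illustrative):
[root ~]# dnf repolist | grep -i nordugrid
nordugrid                 NorduGrid - x86_64 - base
nordugrid-updates         NorduGrid - x86_64 - updates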
The NorduGrid ARC repositories for the Ubuntu packaging utility APT can be configured through /etc/apt/sources.list, or, where supported, through a repository-specific file /etc/apt/sources.list.d/nordugrid.list.
The repository configuration can be set up automatically by installing the nordugrid-release package, or by creating the configuration file manually.
Ubuntu version names:

Ubuntu release | Code name
---------------|----------
24.04          | noble
23.10          | mantic
22.04          | jammy
20.04          | focal
The examples below give you the link for the most recent Ubuntu releases. Packages are shown for the amd64 architecture. Replace amd64 with i386 if required for your architecture.
Install the repository package with dpkg, example shown for Ubuntu 24.04 Noble:
[root ~]# wget -q https://download.nordugrid.org/packages/nordugrid-release/releases/7/ubuntu/24.04/amd64/nordugrid-release_7~noble1_all.deb
[root ~]# dpkg -i nordugrid-release_7~noble1_all.deb
For a different version of Ubuntu, change the version names appropriately.
For manual APT repository setup for Ubuntu, the APT sources file should contain the following (here shown for Ubuntu 22.04 Jammy). The configurations for the various APT-based distributions can be found in the following sections.
To enable a specific repository, remove the # from the beginning of the deb line, as shown for the Base channel.
# Base channel - must be enabled
deb http://download.nordugrid.org/repos/7/ubuntu/ jammy main
deb-src http://download.nordugrid.org/repos/7/ubuntu/ jammy main
# Updates to the base release - should be enabled
deb http://download.nordugrid.org/repos/7/ubuntu/ jammy-updates main
deb-src http://download.nordugrid.org/repos/7/ubuntu/ jammy-updates main
# Scheduled package updates - optional
#deb http://download.nordugrid.org/repos/7/ubuntu/ jammy-experimental main
#deb-src http://download.nordugrid.org/repos/7/ubuntu/ jammy-experimental main
For a different release version, change the version name accordingly.
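To confirm the repository is set up correctly, refresh the package index and check that the ARC packages resolve from the NorduGrid repository (a quick sanity check; version numbers in the output will differ per release):
[root ~]# apt-get update
[root ~]# apt-cache policy nordugrid-arc-arex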
The NorduGrid ARC repositories for the Debian packaging utility APT can be configured through /etc/apt/sources.list, or, where supported, through a repository-specific file /etc/apt/sources.list.d/nordugrid.list.
The repository configuration can be set up automatically by installing the nordugrid-release package, or by creating the configuration file manually.
Debian version names:

Debian release | Code name
---------------|----------
12             | bookworm
11             | bullseye
10             | buster
9              | stretch
The examples below give you the link for the most recent Debian releases. Packages are shown for the amd64 architecture. Replace amd64 with i386 if required for your architecture.
Install the repository package with dpkg, example shown for Debian 12 Bookworm:
[root ~]# wget -q https://download.nordugrid.org/packages/nordugrid-release/releases/7/debian/12/amd64/nordugrid-release_7~bpo12+1_all.deb
[root ~]# dpkg -i nordugrid-release_7~bpo12+1_all.deb
For a different version of Debian, change the version names appropriately.
For manual APT repository setup for Debian, the APT sources file should contain the following (here shown for Debian 12 Bookworm). The configurations for the various APT-based distributions can be found in the following sections.
To enable a specific repository, remove the # from the beginning of the deb line, as shown for the Base channel.
# Base channel - must be enabled
deb http://download.nordugrid.org/repos/7/debian/ bookworm main
deb-src http://download.nordugrid.org/repos/7/debian/ bookworm main
# Updates to the base release - should be enabled
deb http://download.nordugrid.org/repos/7/debian/ bookworm-updates main
deb-src http://download.nordugrid.org/repos/7/debian/ bookworm-updates main
# Scheduled package updates - optional
#deb http://download.nordugrid.org/repos/7/debian/ bookworm-experimental main
#deb-src http://download.nordugrid.org/repos/7/debian/ bookworm-experimental main
For a different release version, change the version name accordingly.
Set up dependency repositories
On EL8-compatible distributions (e.g. CentOS Stream 8), enable the PowerTools repository:
[root ~]# dnf config-manager --set-enabled powertools
[root ~]# dnf makecache
On EL9-compatible distributions (e.g. Rocky Linux 9), the equivalent repository is named CRB:
[root ~]# dnf config-manager --set-enabled crb
[root ~]# dnf makecache
On Debian and Ubuntu, just refresh the package index:
[root~]# apt-get update
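On RHEL-compatible systems, some ARC dependencies are commonly provided by EPEL. If EPEL is not already enabled on your host, it can be added as follows (an assumption based on typical EL setups; consult the full installation documentation for your release):
[root ~]# dnf install -y epel-release
[root ~]# dnf makecache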
Step 2. Install A-REX
The ARC Resource-coupled EXecution service (A-REX) is the core component that manages authentication, authorization and the job life cycle. Installing A-REX alone is enough for a minimal computing element:
[root ~]# dnf -y install nordugrid-arc-arex
[root ~]# apt-get -y install nordugrid-arc-arex
Step 3. Run A-REX
To start ARC services just run:
[root ~]# arcctl service start --as-configured
You can check if A-REX is running with:
[root ~]# arcctl service list
arc-arex (Installed, Disabled, Running)
arc-arex-ws (Installed, Disabled, Running)
arc-datadelivery-service (Not installed, Disabled, Stopped)
arc-infosys-ldap (Not installed, Disabled, Stopped)
Step 4. Install the ARC client
Install the ARC client tools on the client host.
Note
In the zero-conf setup we install the client and the ARC control tool (arcctl) on the same server as the ARC-CE, so client and host are the same machine. Typically you would install the client on another (remote) machine.
[root@server]# dnf -y install nordugrid-arc-client nordugrid-arc-arcctl
[root@server]# apt-get -y install nordugrid-arc-client nordugrid-arc-arcctl
Step 5. Install and enable autocompletion (optional)
The arcctl tool automates many ARC CE operations and is designed with bash-completion in mind. If you would like to use ARC in production, it is advised to have completion enabled:
[root ~]# dnf install -y bash-completion python-argcomplete
[root ~]# activate-global-python-argcomplete
[root ~]# apt-get install -y bash-completion python-argcomplete
[root ~]# activate-global-python-argcomplete
Step 6. Set up the Test JWT issuer and trust
If your ARC-CE and ARC client are the same machine, you can use the combined command below, which both initializes the Test JWT issuer and sets up the trust in one step.
[root@server]# $(arcctl test-jwt init --force | tail -n 1)
Step 6a. On the ARC client - initialize the Test JWT issuer
For the zero-conf setup we will use ARC’s built-in test-token issuer to obtain tokens for submission. It is run on the ARC client machine. In our case this is the same machine as the ARC-CE server, but it can equally well be a remote ARC client.
[user@server] arcctl test-jwt init
This will output an arcctl deploy command that needs to be run on the ARC-CE server. Copy it to your clipboard.
Example output from arcctl test-jwt init:
Issuer URL: https://arc.example.org/arc/testjwt/8b7baf79
JWKS:
{
"keys": [
{
"e": "AQAB",
"kid": "testjwt",
"kty": "RSA",
"n": "r0nMfmRfhJFiyCPRUc8m9K7yl0qksmIRIQeiMNEi3_Und6WVNhLpERrzwb6jTHu5wr_Tk408ve-ig1udpqEZ5PUcV6K25MohYu1b6ifrYDo6go-bQ0cEaEyZRYGm1scOUb_gWCAYOLe-hv7hZGnQ3rojLZ2BJwUwBVOj5Hp_ROPUdbifKfNkBiujhGPJAegrPrKgsskQNA2GkXWACeS85WPKIQ54bkUiASsmz3_b0Ik9jQaQnHsU0znM3G-EpjnLB-1PS7FC1tIMaXcJ2BJZuFfkDyIv1Ymn8vKf9WeQjQ80L08k78pzTGOerZLcc5BQ2ZWEUhADWRWzkqHmEDymIw",
"use": "sig"
}
]
}
Run the following command on the ARC CE to trust the Test JWT issuer:
arcctl deploy jwt-issuer --deploy-conf test-jwt://H4sIAOzQRGYC/73T3XKiMBQA4HfheqmgRbB3qFTxF7CUys4OAyFIiCQ0CTLY8d03brezT9C9yjlJznwnmeRDqaFI81SkytOHgjhvIVOelFKIhj8NBikDAnKhqQQiMHxU+blltQrPE1WumCqA+gOAjKj3TQ8touBM2/yB0Hvh4D5ZdWJgZWaWFuZE+aEIiiFJIMkbioj4Lmjwh5FcyyFDpKDfL35JEpUMo7yBQCD6H876j5N22oqSMnRN/4/9xUFJVx3mScvQt2F3QLl9QvfHimEvx58fCpSk7dtT2QRGuUz+lt5z0cs8ONgyJjJiGtkWdVCUq2fUz7wgBFY9WZv9WXvHvHYD14dou3PQKAlJPo5ed+WmcQJ27bJx9bJsjY4lL/hRsy5QRSe9zZt3Jza8ELyO10NjS8tjq2djVLDjnI5PVM18DTip08fBcVHrHOzDLDlFM/u430C1vJhlvCD+iNFqEw+nqy7spq/7ylg2SbD3wjxDxbrY4Slqq3LhrWx4Yh5bnzjH/s4eLvBbZM/gwTIib+36xmOGQ2QfeH0dJZnm4knlpz5Z8lC7ku1ooTpNRTZTVfcO5vNMF+42fQMrqcbtc4HnvXvRjzWxLutiEkG/8i1to1nYtJrry2IPWbwBwJj6wzhywtKeR0F0xe/L2pn3tdt9/jR5vRydlNuv2+03J4GqklYEAAA=
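If your ARC client and ARC-CE are separate hosts, the two sub-steps can also be combined by forwarding the generated deploy command over SSH, for example (a sketch, assuming root SSH access to the CE host arc.example.org):
[user@client]$ ssh root@arc.example.org "$(arcctl test-jwt init | tail -n 1)"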
Step 6b. On the A-REX server - set up trust of the client's Test JWT issuer
Run the arcctl deploy jwt-issuer command from Step 6a on the A-REX server.
Example:
[root@server]# arcctl deploy jwt-issuer --deploy-conf test-jwt://H4sIAOzQRGYC/73T3XKiMBQA4HfheqmgRbB3qFTxF7CUys4OAyFIiCQ0CTLY8d03brezT9C9yjlJznwnmeRDqaFI81SkytOHgjhvIVOelFKIhj8NBikDAnKhqQQiMHxU+blltQrPE1WumCqA+gOAjKj3TQ8touBM2/yB0Hvh4D5ZdWJgZWaWFuZE+aEIiiFJIMkbioj4Lmjwh5FcyyFDpKDfL35JEpUMo7yBQCD6H876j5N22oqSMnRN/4/9xUFJVx3mScvQt2F3QLl9QvfHimEvx58fCpSk7dtT2QRGuUz+lt5z0cs8ONgyJjJiGtkWdVCUq2fUz7wgBFY9WZv9WXvHvHYD14dou3PQKAlJPo5ed+WmcQJ27bJx9bJsjY4lL/hRsy5QRSe9zZt3Jza8ELyO10NjS8tjq2djVLDjnI5PVM18DTip08fBcVHrHOzDLDlFM/u430C1vJhlvCD+iNFqEw+nqy7spq/7ylg2SbD3wjxDxbrY4Slqq3LhrWx4Yh5bnzjH/s4eLvBbZM/gwTIib+36xmOGQ2QfeH0dJZnm4knlpz5Z8lC7ku1ooTpNRTZTVfcO5vNMF+42fQMrqcbtc4HnvXvRjzWxLutiEkG/8i1to1nYtJrry2IPWbwBwJj6wzhywtKeR0F0xe/L2pn3tdt9/jR5vRydlNuv2+03J4GqklYEAAA=
ARC CE now trust JWT signatures of https://arc.example.org/arc/testjwt/8b7baf79 issuer.
Auth configuration for issuer tokens has been written to /etc/arc.conf.d/10-jwt-a7374e17.conf
ARC restart is needed to apply configuration.
This command does two things:
1) It creates a tokenissuers folder in your control directory. The control directory now contains a folder tokenissuers/a7374e17 where the issuer URL, key and metadata are stored:
[root@server]# ls /var/spool/arc/jobstatus/tokenissuers/a7374e17
issuer keys metadata
2) It automatically sets up the ARC configuration for token authentication for tokens issued by this Test JWT issuer.
The token authentication file 10-jwt-a7374e17.conf produced by the arcctl deploy jwt-issuer command looks like this in our example:
[root@server]# cat /etc/arc.conf.d/10-jwt-a7374e17.conf
[authgroup:testjwt-a7374e17]
authtokens = * https://arc.example.org/arc/testjwt/8b7baf79 arc * *
[mapping]
map_to_user = testjwt-a7374e17 nobody:nobody
[arex/ws/jobs]
allowaccess = testjwt-a7374e17
Here we see that a separate authgroup has been created automatically for tokens issued by this Test JWT issuer. The authgroup is mapped to the nobody user (the only user we assume for zero-conf), and job submission access is enabled for jobs authenticated with a token from this Test JWT issuer.
Note
When you set up your production-ready service later on, you will remove the test-jwt authgroup and add your real token issuers as described in the [authgroup] authtokens reference.
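For orientation only, a production block might look like the following sketch. The issuer URL is the WLCG IAM issuer that appears in this guide’s later example output, and the mapped local account gridjobs is purely hypothetical; adapt both to your setup:
[authgroup:myvo]
authtokens = * https://wlcg.cloud.cnaf.infn.it/ * * *

[mapping]
map_to_user = myvo gridjobs:gridjobs

[arex/ws/jobs]
allowaccess = myvo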
For a remote client: Setup trust of the A-REX server
With the zero-conf setup, A-REX is pre-installed with an x509 host certificate issued by the Test-CA. A remote client will need to trust this Test-CA, and therefore the following steps are needed.
On the A-REX host print out the Test-CA certificate:
[user@server] arcctl test-ca info -o ca-cert
-----BEGIN CERTIFICATE-----
MIIFyTCCA7GgAwIBAgIUeLkSbksS9r3raPvkT2rR0ep06X8wDQYJKoZIhvcNAQEM
BQAwdDETMBEGCgmSJomT8ixkARkWA29yZzEZMBcGCgmSJomT8ixkARkWCW5vcmR1
<output omitted>
TJ9f0I8ktHACLvLvJE9SIDWs2zPo8o4cmvLBAtxe+jaijn22THtpLLUSXt1ozexS
ZHGFtsUBuIoNzXoRXxJwkGBA1ZpLBbOpjyp6PzNcTPYFG51+EHTUMPkbfyQ5
-----END CERTIFICATE-----
Copy this output to your clipboard, and then on the ARC client machine do:
[user@client]$ arcctl deploy ca-cert --x509-cert-dir ~/.globus
[2024-10-25 21:40:50,327] [ARCCTL.ThirdParty.Deploy] [INFO] [726706] [Deploying CA Certificate to /etc/grid-security/certificates/ARCTestCAfdb0a5e3.pem]
[2024-10-25 21:40:50,328] [ARCCTL.ThirdParty.Deploy] [INFO] [726706] [Reading CA Certificate PEM data from stdin]
-----BEGIN CERTIFICATE-----
MIIFyTCCA7GgAwIBAgIUeLkSbksS9r3raPvkT2rR0ep06X8wDQYJKoZIhvcNAQEM
BQAwdDETMBEGCgmSJomT8ixkARkWA29yZzEZMBcGCgmSJomT8ixkARkWCW5vcmR1
<output omitted>
TJ9f0I8ktHACLvLvJE9SIDWs2zPo8o4cmvLBAtxe+jaijn22THtpLLUSXt1ozexS
ZHGFtsUBuIoNzXoRXxJwkGBA1ZpLBbOpjyp6PzNcTPYFG51+EHTUMPkbfyQ5
-----END CERTIFICATE-----
[2024-10-25 21:40:54,173] [ARCCTL.ThirdParty.Deploy] [INFO] [726706] [CA Certificate for /DC=org/DC=nordugrid/DC=ARC/O=TestCA/CN=ARC TestCA fdb0a5e3 is deployed successfully to /etc/grid-security/certificates/ARCTestCAfdb0a5e3.pem]
This will create all necessary files in your x509_cert_dir and allow your remote client to trust the ARC-CE.
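You can double-check the deployed CA file with openssl; the file name below is taken from the log output above and will differ on your host:
[user@client]$ openssl x509 -in /etc/grid-security/certificates/ARCTestCAfdb0a5e3.pem -noout -subject
subject=DC = org, DC = nordugrid, DC = ARC, O = TestCA, CN = ARC TestCA fdb0a5e3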
Step 7. Get a submission token
To submit jobs or perform any other action towards the ARC-CE you must authenticate yourself. We will do this using a token issued by the Test JWT issuer.
To generate a token do:
[user ~]$ export BEARER_TOKEN=$(arcctl test-jwt token)
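If you are curious what the token contains, you can decode its payload (the middle, base64url-encoded part of the JWT) with standard shell tools; a padding complaint from base64 can be ignored. The iss claim should match the Issuer URL printed in Step 6a:
[user ~]$ echo "$BEARER_TOKEN" | cut -d. -f2 | tr '_-' '/+' | base64 -d 2>/dev/null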
Step 8. Restart A-REX
On the ARC-CE, restart the A-REX services to activate the configuration changes:
[root ~]# arcctl service restart -a
Step 9. Check all is ok
You can run the client commands (arcinfo, arcsub etc.) from the host running A-REX, or from any other machine with the ARC client installed as in Steps 4 and 5.
You can start with the information query about your newly installed ARC computing element:
[user ~]$ arcinfo -C https://arc.example.org/arex
Computing service:
Information endpoint: https://arc.example.org:443/arex
Submission endpoint: https://arc.example.org:443/arex (status: ok, interface: org.nordugrid.arcrest)
This means that all is ok: the ARC client got back information from the ARC-CE, and the information and submission endpoints are available and ok.
Note
The examples use arc.example.org as the domain name of the A-REX host. See Step 0. Prerequisites for more information.
Tip: In your zero-conf setup with a local client, you can use $(hostname) instead of typing the host name for these tests. For example:
arcinfo -C $(hostname)
Warning
It can take a few minutes after the setup for everything to come up, so if you see status: critical, wait about a minute and check again.
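If you prefer to poll automatically until the endpoint reports ok, a simple shell loop works (a sketch for the local zero-conf case):
[user ~]$ until arcinfo -C $(hostname) 2>/dev/null | grep -q 'status: ok'; do sleep 30; done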
Step 10. Submit a job and check that it is running
A simple job can be submitted with the arctest tool:
[user ~]$ arctest -J 2 -C https://arc.example.org/arex
Job submitted with jobid: https://arc.example.org:443/arex/rest/1.0/jobs/f77b3d1b1efb
The job status can be checked with the arcstat tool:
[user ~]$ arcstat https://arc.example.org:443/arex/rest/1.0/jobs/f77b3d1b1efb
Job: https://arc.example.org:443/arex/rest/1.0/jobs/f77b3d1b1efb
Name: arctest2
State: Running
Status of 1 jobs was queried, 1 jobs returned information
To fetch the job’s stdout, run the arccat tool:
[user ~]$ arccat https://arc.example.org:443/arex/rest/1.0/jobs/f77b3d1b1efb
HOSTNAME=arc.example.org
GRID_GLOBAL_JOBURL=https://arc.example.org:443/arex/f77b3d1b1efb
MALLOC_ARENA_MAX=2
PWD=/var/spool/arc/sessiondir/f77b3d1b1efb
SYSTEMD_EXEC_PID=374194
<output_omitted>
Step 11. Play more with the ARC Computing Element
As an admin you might frequently need to extract information from the logs and directories that the ARC computing element uses. A brief list of the relevant paths can be obtained with:
[root ~]# arcctl config brief
ARC Storage Areas:
Control directory:
/var/spool/arc/jobstatus
Session directories:
/var/spool/arc/sessiondir
Scratch directory on Worker Node:
Not configured
Additional user-defined RTE directories:
Not configured
ARC Log Files:
A-REX Service log:
/var/log/arc/arex.log
A-REX Jobs log:
/var/log/arc/arex-jobs.log
A-REX Helpers log:
/var/log/arc/job.helper.errors
A-REX WS Interface log:
/var/log/arc/ws-interface.log
Infosys Infoproviders log:
/var/log/arc/infoprovider.log
To get information about and manage jobs on the A-REX server, arcctl job is useful. Operations include, but are not limited to:
Listing jobs:
[root ~]# arcctl job list
f5ab040cdc51
f617259d58ec
<output omitted>
[root ~]# arcctl job list --long
f5ab040cdc51 FINISHED arctest2 https://wlcg.cloud.cnaf.infn.it//b9f1e5e2-a8f0-4332-bd9d-58bd63898cc6
f617259d58ec FINISHED arctest2 https://wlcg.cloud.cnaf.infn.it//b9f1e5e2-a8f0-4332-bd9d-58bd63898cc6
<output omitted>
Job general information:
[root ~]# arcctl job info f77b3d1b1efb
Name : arctest2
Owner : https://wlcg.cloud.cnaf.infn.it//b9f1e5e2-a8f0-4332-bd9d-58bd63898cc6
State : FINISHED
LRMS ID : 376176
Modified : 2023-06-02 16:07:05
Job log:
[root ~]# arcctl job log f77b3d1b1efb
2023-06-02T14:06:51Z Job state change UNDEFINED -> ACCEPTED Reason: (Re)Accepting new job
2023-06-02T14:06:51Z Job state change ACCEPTED -> PREPARING Reason: Starting job processing
2023-06-02T14:06:51Z Job state change PREPARING -> SUBMIT Reason: Pre-staging finished, passing job to LRMS
----- exiting submit_fork_job -----
2023-06-02T14:06:53Z Job state change SUBMIT -> INLRMS Reason: Job is passed to LRMS
---------- Output of the job wrapper script -----------
Detecting resource accounting method available for the job.
Looking for /usr/bin/time tool for accounting measurements
GNU time found and will be used for job accounting.
------------------------- End of output -------------------------
2023-06-02T14:07:05Z Job state change INLRMS -> FINISHING Reason: Job finished executing in LRMS
2023-06-02T14:07:05Z Job state change FINISHING -> FINISHED Reason: Stage-out finished.
A-REX logs that mention the job:
[root ~]# arcctl job log f77b3d1b1efb --service
### /var/log/arc/arex.log:
[2023-06-02 16:06:51] [Arc] [INFO] [374270/3] f77b3d1b1efb: State: ACCEPTED: parsing job description
[2023-06-02 16:06:51] [Arc] [INFO] [374270/3] f77b3d1b1efb: State: ACCEPTED: moving to PREPARING
[2023-06-02 16:06:51] [Arc] [INFO] [374270/3] f77b3d1b1efb: State: PREPARING from ACCEPTED
[2023-06-02 16:06:51] [Arc] [INFO] [374270/3] f77b3d1b1efb: State: SUBMIT from PREPARING
[2023-06-02 16:06:51] [Arc] [INFO] [374270/3] f77b3d1b1efb: state SUBMIT: starting child: /usr/share/arc/submit-fork-job
[2023-06-02 16:06:53] [Arc] [INFO] [374270/3] f77b3d1b1efb: state SUBMIT: child exited with code 0
[2023-06-02 16:06:53] [Arc] [INFO] [374270/3] f77b3d1b1efb: State: INLRMS from SUBMIT
[2023-06-02 16:07:05] [Arc] [INFO] [374270/3] f77b3d1b1efb: Job finished
[2023-06-02 16:07:05] [Arc] [INFO] [374270/3] f77b3d1b1efb: State: FINISHING from INLRMS
[2023-06-02 16:07:05] [Arc] [INFO] [374270/3] f77b3d1b1efb: State: FINISHED from FINISHING
### /var/log/arc/ws-interface.log:
Getting job attributes:
[root ~]# arcctl job attr f77b3d1b1efb jobname
arctest2
Get production ready
Now you are ready to Install production ARC7 Computing Element!