Status Report, September 1, 2001
The purpose of the project is to create a grid computing infrastructure in the Nordic countries. Project participants include universities and research centers in Denmark, Sweden, Finland and Norway. The active phase of the project started in May 2001 and involves the Niels Bohr Institute (Denmark), Lund and Uppsala Universities (Sweden), the University of Oslo (Norway) and the Helsinki Institute of Physics (Finland). From the very beginning, the NorduGrid testbed has been an integral part of the EU DataGrid project, initiated at CERN and aimed at creating the computing infrastructure for future high-energy physics experiments. This report reviews the status of the NorduGrid project as of September 1, 2001.
Due to the considerable geographical spread of the project member sites, the organization and management rely heavily on the Internet and teleconferences. The project Web site opened in December 2000 and is regularly updated with detailed information.
Prior to the project's start, a steering panel was appointed to provide adequate management. It consists of the coordinator, John Renner Hansen, representing the Niels Bohr Institute, Denmark; Tord Ekelöf, Paula Eerola and Sten Hellman, representing Uppsala, Lund and Stockholm Universities, Sweden; Farid Ould-Saada from the University of Oslo, Norway; and Matti Heikkurinen, representing the Helsinki Institute of Physics, Finland. The group holds regular phone meetings.
Three new postdoctoral positions, funded by the project, were created. Each employed researcher is responsible for one of the NorduGrid nodes (in Lund, Uppsala and Oslo) and provides support and assistance to other sites in the area. The positions were announced in December 2000, and the candidates chosen by the steering panel were:
Name | Location | Start |
Balázs Kónya | Lund | May 1, 2001 |
Mattias Ellert | Uppsala | May 15, 2001 |
Aleksandr Konstantinov | Oslo | June 20, 2001 |
For fast evaluation of existing demands and problems, a technical working group was formed, consisting of the three above-mentioned postdocs and the people responsible for cooperation with the EU DataGrid (see Section 5): Anders Wäänänen (Testbed and Demonstrators Work Package) and Oxana Smirnova (HEP Applications Work Package). The group meets bi-monthly to discuss the status of the project, relations with the EU DataGrid and various problems, and provides immediate short-term planning.
The first (introductory) meeting took place at NBI on June 20-22. Present were B.K., M.E., A.K., A.W., J.R.H. and F.O.-S. Several important decisions were made:
The second meeting, a follow-up, was held on August 21 in Lund. The issues discussed covered the acquired experience, the resource specification for the DataGrid Testbed1 (see Section 5), user policy, the Globus toolkit upgrade and future steps.
Apart from the steering panel and technical working group meetings, a few general workshops are foreseen.
The inaugural 1st NorduGrid Workshop took place in Lund, February 5-6, 2001. Present were the coordinators of Grid-related activities in the participating institutions, as well as researchers otherwise involved in the project. The purpose of the workshop was a general presentation of the project and its planned activities, a review of the situation in the Nordic countries, and a discussion of organizational issues. During the workshop, two parallel sessions were held: one covering testbed installation issues (chaired by A.W.) and another discussing applications (chaired by O.S.). A presentation of the business aspects and industrial applications of the Grid was made by M.H.
The 2nd Workshop will take place in Oslo, November 1-2. The major issues to be discussed are the first hands-on experience with installation and usage of the hardware and middleware, and the expansion of activities to other participating institutions.
From the project start, a part of the existing resources was assigned to test installations. The LSCF cluster at NBI (ca. 30 heterogeneous processors) is used for middleware and resource management tests. In Lund, a mini-farm (2 PCs) was set up at the Particle Physics department, and a stand-alone machine was configured for Grid tests at the Cosmic and Subatomic Physics department. At Uppsala University and the Helsinki Institute of Physics, stand-alone PCs were used for the initial tests.
In addition to the existing resources, three new dedicated clusters were set up, in Lund, Uppsala and Oslo. The Lund and Uppsala clusters upgraded and replaced the previously used local test facilities. The installations took place during July 2001.
The computing resources are fairly heterogeneous, both hardware- and software-wise. Most of the processors are various types of Intel Pentium. The operating systems are various flavors of Linux: different distributions of Red Hat, Mandrake, Slackware and Debian. On the test clusters, resource management is currently performed via OpenPBS.
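On such clusters, a job reaches the computing nodes through a short batch script with OpenPBS directives. A minimal sketch is shown below; the job name and resource limits are hypothetical.

```shell
#!/bin/sh
# Minimal OpenPBS job script sketch (job name and limits are hypothetical).
#PBS -N nordugrid-test        # job name
#PBS -l nodes=1:ppn=1         # one processor on one node
#PBS -l walltime=00:10:00     # ten-minute wall-clock limit
msg="Running on $(hostname)"  # the actual payload would start here
echo "$msg"
```

The script would be submitted on the front-end machine with qsub; the #PBS lines are directives to the batch system and are ignored when the script runs as plain shell.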
The Grid middleware was chosen to be uniform, and the test sites were set up with the Globus toolkit version 1.1.3b14.
Detailed specifications for each cluster are shown below.
Name | Lund Grid Cluster | Contact person | Balázs Kónya | ||
Address | Elementary Particle Physics Department of Lund University | ||||
HARDWARE | |||||
Nodes | Quantity | CPU | RAM | Disks | Other notes |
1 | PIII (Coppermine) 1GHz 256KB cache | 512 MB | 40 & 60 GB | Front-end machine, Powerware 5115 UPS | |
2 | Dual PIII (Coppermine) 1GHz 256KB cache | 512 MB | 30 GB | Dual processor computing nodes | |
1 | PIII (Coppermine) 1GHz 256KB cache | 256 MB | 16 GB | Single processor computing node | |
Network | 100 Mb/s private network with a 3Com OfficeConnect 16 switch | ||||
Mass storage | |||||
SOFTWARE | |||||
OS | Linux Mandrake 8.0 distribution, kernel-2.4.3 | ||||
Resource manager | OpenPBS 2.3.12 | ||||
File system | All the nodes have local disks and share the NFS mounted /scratch and /home area of the front-end machine | ||||
Databases | |||||
MIDDLEWARE | |||||
Gatekeeper | grid.quark.lu.se, port 2119 | ||||
Globus | Globus version 1.1.3b14 | ||||
NETWORK | |||||
Configuration | All the nodes are on a closed private network behind the front-end machine. | ||||
COMMENTS | |||||
The Grid cluster of the Elementary Particle Physics Department of Lund University, dedicated to the NorduGrid project, contains six Intel Pentium III 1 GHz processors with 256 MB RAM per processor. It consists of four Linux-based PCs: a single-processor front-end machine and three computing nodes (one single- and two dual-processor machines), giving five processors available for computation. The computing nodes are connected to the front-end machine via a private network and can therefore only be accessed through the front-end computer. The front-end machine (grid.quark.lu.se) runs PBS as the local resource management system. The front-end node is dedicated to code development (editing, compilation, etc.), while the back-end nodes (node1, node2, node3) are used only for code execution. |
Name | grid.tsl.uu.se | Contact person | Mattias Ellert | ||
Address | Uppsala University, Department of Radiation Sciences, Uppsala, Sweden | ||||
HARDWARE | |||||
Nodes | Quantity | CPU | RAM | Disks | Other notes |
1 | P-III 866 MHz | 512 MB | 40 GB + 60 GB | Gatekeeper | |
2 | 2 × P-III 866 MHz | 512 MB | 40 GB | Computing nodes | |
Network | 100 Mbit/s | ||||
Mass storage | None | ||||
SOFTWARE | |||||
OS | RedHat Linux 7.1, kernel 2.4.3-12 on the gatekeeper and kernel 2.4.3-12smp on the computing nodes | ||||
Resource manager | OpenPBS 2.3pl2 | ||||
File system | /home and /usr/local shared through NFS | ||||
Databases | None | ||||
MIDDLEWARE | |||||
Gatekeeper | /O=Grid/O=NorduGrid/CN=grid.tsl.uu.se/jobmanager, /O=Grid/O=NorduGrid/CN=grid.tsl.uu.se/jobmanager-pbs | ||||
Globus | Globus 1.1.3b14 | ||||
NETWORK | |||||
Configuration | Private network | ||||
COMMENTS | |||||
The computers are on a private network. Only the gatekeeper is directly accessible from the outside; the computing nodes can, however, make outbound connections to the outside. |
Name | Oslo University NorduGrid gatekeeper | Contact person | Aleksandr Konstantinov | ||
Address | University of Oslo, Department of Physics | ||||
HARDWARE | |||||
Nodes | Quantity | CPU | RAM | Disks | Other notes |
1 | 2 x Intel PIII 1 GHz | 256MB | 39266MB | computing node | |
1 | Intel PIII 1 GHz | 256MB | 39266MB | computing node | |
1 | Intel PIII 870 MHz | 128MB | 2 x 41174MB | gatekeeper | |
Network | 100Mbps ethernet cards EtherExpress Pro100 | ||||
Mass storage | NA | ||||
SOFTWARE |||||
OS | Gatekeeper: RedHat 7.1 distribution, kernel 2.4.2-2 (i686), libc 2.2.2 | ||||
Nodes: Slackware 7.1 distribution, kernel 2.2.19 (i686) (SMP & UP), libc 2.2.3 | |||||
Resource manager | OpenPBS 2.3.12 | ||||
File system | ext2fs | ||||
Databases | NA | ||||
MIDDLEWARE |||||
Gatekeeper | grid.uio.no, port 2119 | ||||
Globus | Globus 1.1.3b14 | ||||
NETWORK |||||
Configuration | The nodes are on a private, physically isolated network, connected at 100 Mbps through an Allied Telesyn FS708 network switch. | ||||
COMMENTS |||||
To provide the proper functionality of a computational grid, several services must be enabled. The installation and maintenance of such services is an essential part of a testbed set-up. The present status of the NorduGrid services is described below.
User authentication is one of the key issues in a Grid environment. The Globus toolkit uses personal certificates, issued by a recognized certification authority, to identify each user. The NorduGrid Certification Authority (CA) is set up at NBI. It provides X.509 certificates for identification and authentication purposes. Its scope is limited to people from the Nordic countries involved in Grid-related projects: primarily NorduGrid and the EU DataGrid, as well as DKGRID (Denmark). Unlike most certification authorities worldwide, the NorduGrid CA serves not a national, but a transnational virtual organization.
The certificates are meant to be used with the Globus toolkit to provide user authentication. They are recognized not only by the sites participating in the NorduGrid project, but also by the EU DataGrid, in the framework of the DataGrid Testbed Work Package.
The Globus toolkit provides the means of querying resources on a computational grid for their current configuration, capabilities and status. Such information is essential for the proper distribution of workload. The corresponding database server runs at NBI, providing information on the known resources. A browsable index is accessible via WWW.
The NorduGrid project has adopted a common naming convention for identifying its resources and users. The agreed namespace represents the resources of the project as part of a virtual organization (O=NorduGrid). The distinguished name of a resource has the form "/O=Grid/O=NorduGrid/CN=grid.quark.lu.se", where the CN field is the name of the computing resource. A NorduGrid user is identified by a string of the form "/O=Grid/O=NorduGrid/OU=domain.name/CN=User Name", where the OU field is the domain name used by the user's home institute and the CN field contains the user's real name.
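Since both resources and users are identified by such distinguished-name strings, the individual fields can be picked out with ordinary text tools. A minimal sketch follows; the user name and domain in the example DN are hypothetical.

```shell
#!/bin/sh
# Sketch: split a NorduGrid user DN into its fields (example DN is hypothetical).
dn="/O=Grid/O=NorduGrid/OU=quark.lu.se/CN=Jane Doe"
ou=$(echo "$dn" | sed -n 's|.*/OU=\([^/]*\).*|\1|p')  # home domain
cn=$(echo "$dn" | sed -n 's|.*/CN=\([^/]*\).*|\1|p')  # real name
echo "domain: $ou"
echo "user: $cn"
```

The same pattern applies to resource DNs, where only the CN field is present and holds the host name.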
The present status of the Globus toolkit allows only for simple tests, checking connectivity and basic functionality. Following the Globus installation at all the sites, these basic tests were performed, and inter-connectivity was successfully verified between NBI, Lund, Uppsala, Oslo and Helsinki.
For further tests, more advanced applications relying on realistic physics analysis cases are being prepared. Below is the list of tasks that have been preliminarily tested for local batch submission and for rudimentary remote submission via Globus.
Task description | Study of the pion source elongation in Z decays. Uses Jetset/Pythia to generate e+e- hadronic events. |
People | Raluca Muresan (NBI), Oxana Smirnova (Lund) |
Executable | File size: 1.8 MB, occupied memory: 16 MB |
Input | ASCII input cards (40 B) |
Output | Binary HBOOK file (0.5 to 1 MB) |
Specific requirements | CERNLIB and Jetset libraries needed for compilation |
Task description | Study of hadronization corrections to the helicity components of the fragmentation function in hadronic decays of Z boson. Uses Jetset/Pythia to generate e+e- hadronic events. |
People | Oxana Smirnova, Christina Zacharatou Jarlskog (Lund) |
Executable | File size: 1.2 MB, occupied memory: 2 MB |
Input | ASCII input cards (40 B) |
Output | Binary HBOOK file (12 KB) |
Specific requirements | CERNLIB and Jetset libraries needed for compilation |
Task description | Study of identical kaons production in the Lund string model. Uses Jetset/Pythia to generate e+e- hadronic events. |
People | Oxana Smirnova (Lund) |
Executable | File size: 0.8 MB, occupied memory: 2 MB |
Input | - |
Output | Binary HBOOK file (360 KB) |
Specific requirements | CERNLIB and Jetset libraries needed for compilation |
Task description | Monte-Carlo generation (PYTHIA) + ATLFAST (for ATLAS project) |
People | Børge Kile Gjelsten (Oslo) |
Executable | File size: 16 MB, occupied memory: 6 MB |
Input | Text file |
Output | Text and HBOOK files |
Specific requirements | CERNLIB and PDFLIB libraries needed for compilation. A statically linked binary can be produced |
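To illustrate how such a task could be submitted remotely, the sketch below shows a Globus RSL job description of the kind accepted by the toolkit version in use; the executable and file names are hypothetical.

```
& (executable = pion_study)
  (arguments  = input.cards)
  (stdout     = pion_study.out)
  (count      = 1)
```

A description like this could then be passed to a site's gatekeeper with a command such as globusrun -r grid.quark.lu.se -f job.rsl.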
The NorduGrid project participates in the EU DataGrid activities along two directions:
All the NorduGrid sites successfully took part in Testbed0 (June 2001), the main goal of which was the installation of the Globus toolkit and basic services. The next phase, Testbed1, starts in September 2001 and will involve the execution of test use-cases. NorduGrid is an integral part of the Testbed, and the NorduGrid CA is one of the 11 officially recognized by the DataGrid.
Participation in the Applications Work Package of the DataGrid proceeds via the ATLAS experiment, which distributes the so-called ATLAS Toolkit, containing three physics use-cases. The toolkit was successfully installed at the Uppsala and Oslo sites.
As part of the DataGrid, the NorduGrid sites are meant to be used not only for physics applications, but also for other tasks, e.g., in biology. In this respect, a cooperation is ongoing with biologists in Lyon, France, who are testing job submission to Lund.
To enable close cooperation with the DataGrid, representatives of NorduGrid regularly attend the corresponding meetings and workshops. Presentations of NorduGrid activities were made at the First DataGrid Workshop in Amsterdam (March 2001) and the Second DataGrid Workshop in Oxford (July 2001).
During the covered period, much progress has been made, and all the planned milestones were met. The basic nodes of the grid infrastructure are set up and are being constantly upgraded to meet the needs of the testbed. Invaluable experience is being acquired and documented on the project's Web site.
The underlying structure of a trans-Nordic computational grid is set up, with most basic services enabled. However, full-scale functionality is not yet achievable, due to two main obstacles:
In this respect, the NorduGrid project relies on developments by the EU DataGrid. With the first release of the DataGrid tools in September 2001, some of these problems may be solved.
For the further development of the NorduGrid testbed, the following major steps are therefore foreseen: