This is the former website of the Centro de Supercomputación de Galicia.

Our new website has been available since 18 July 2011 at
https://www.cesga.es

Please update your links and bookmarks.


This site is maintained only as an archive of published news, courses, job offers, etc., and/or documentation.

Data storage
The data storage service provided by CESGA is intended for processing and storing information that requires high performance, very large volumes, or high availability, and for accessing that information from any computer connected to the Internet.

In order to use the data storage service it is necessary to have a user account on CESGA's servers, so those who do not have one must fill in and send the user registration form.

The next step is to fill in the data storage application form, for which you must determine how much information you want to store and what its characteristics are. To guide you in this classification, we recommend reading the next section on CESGA's classification scheme and, in case of doubt, contacting CESGA's Systems Department by phone at 981569810 or by e-mail at sistemas@cesga.es.

Data Storage Application Form

Once the form has been sent, CESGA will contact you to explain the characteristics of the storage system placed at your disposal and how you can start using it.


Criteria for the classification of the information in the storage system

In order to respond to the growing demand for quantity and quality in storage services, as well as to the different storage options available on the market, it is necessary to classify the kinds of data so that the different storage services can be matched to the specific needs of each group of information. This classification can follow criteria such as the amount of information, the required availability level, security and access control, and so on. Bearing in mind the variety of data to which the Supercomputing Center of Galicia provides service, we establish the following list of main criteria for classifying the information (a minimal sketch of how these criteria could be modeled follows the list):

  • Availability and fault-tolerance level: indicates how critical the data are, identifying those that must be "always available" at one end of the scale and "occasionally available" at the other. "Always available" identifies data that are critical for the 24/7/365 operation of the services, while "occasionally available" identifies data that only need to be accessed on demand. Between both extremes there are situations that tolerate windows of data unavailability (4 hours, 8 hours, ...). Note that by "unavailability" we do not mean the speed of access to the data, but their robustness against any kind of problem that may appear in the system (what in computing terms is known as fault tolerance, which ultimately determines the maximum allowed number of SPOFs, or single points of failure). Within this classification we could, for example, establish a high level (with multiple access paths to the data and RAID-type data redundancy), a medium level (with RAID solutions, but without component redundancy) and a low level (without any kind of RAID or component redundancy).
  • Backup periodicity: determined, to a large extent, by how frequently the data are modified. Backups can be made daily or weekly, on demand when new information is introduced, or not at all in those cases where the stored data are themselves already a backup copy.
  • Connectivity: defined by at least two performance parameters, the access bandwidth and the latency, as well as by the medium used (for example, whether it can be shared or immediately connected to new servers) and the distance it can reach. The actual connection interfaces (optical fibre, the different SCSI buses, or connections over local or wide area networks using the NFS or CIFS protocols) largely define these parameters, but the classification should not be restricted to them (for instance, with SCSI interfaces it is possible to widen the bandwidth by using multiple HBAs to access the same information volume).
  • Storage capacity: identifies the amount of storage that the data may require. Absolute values are not representative for this parameter: nowadays a small amount of information can mean some tens of megabytes, whereas the same amount represented a very high volume of information only ten years ago. As a result, we express this parameter as a percentage of the maximum capacity available at any given moment.
  • Sharing: depends on whether the data must be accessed from different hosts and/or different user communities, inside or outside the center itself.
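
Purely as an illustration, the criteria above can be captured in a small data model. The following Python sketch is not part of CESGA's systems: the class names, the enumeration values and the example figures (pool size, per-HBA bandwidth) are hypothetical, chosen only to make the criteria concrete.

```python
from dataclasses import dataclass
from enum import Enum


class Availability(Enum):
    LOW = "occasionally available"
    MEDIUM = "unavailability windows tolerated (e.g. 4-8 hours)"
    HIGH = "always available (24/7/365)"


class BackupPolicy(Enum):
    NONE = "no backups (the data are themselves a backup)"
    ON_DEMAND = "on demand, when new information is introduced"
    DAILY = "daily"


@dataclass
class StorageProfile:
    """Hypothetical description of one class of information."""
    availability: Availability
    backups: BackupPolicy
    bandwidth_gbps: float   # access bandwidth
    latency_ms: float       # access latency
    capacity_pct: float     # percentage of the maximum capacity available
    shared: bool            # accessed from several hosts/communities?

    def capacity_bytes(self, total_capacity_bytes: int) -> int:
        """Capacity is expressed as a percentage of the pool available at
        any given moment, so the absolute value is derived from it."""
        return int(total_capacity_bytes * self.capacity_pct / 100)


def aggregate_bandwidth(per_hba_gbps: float, n_hbas: int) -> float:
    """Toy illustration of widening bandwidth with multiple HBAs accessing
    the same volume (ideal scaling, no contention assumed)."""
    return per_hba_gbps * n_hbas


if __name__ == "__main__":
    scratch = StorageProfile(Availability.LOW, BackupPolicy.NONE,
                             bandwidth_gbps=8.0, latency_ms=0.5,
                             capacity_pct=20.0, shared=False)
    # 20% of a hypothetical 100 TB pool:
    print(scratch.capacity_bytes(100 * 10**12) / 10**12, "TB")
    # Two 4 Gbps HBAs on the same volume, assuming ideal scaling:
    print(aggregate_bandwidth(4.0, 2), "Gbps")
```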


Considering the parameters above, one may rightly think that the specification of each of them conditions the others to a large extent (that is, they do not form a strictly orthogonal set). Nevertheless, it must be borne in mind that what we are trying to do at this stage is to separate the storage needs from the available technologies so that, once these requirements have been specified, we can look for the technology that best fulfils them at any given moment. For example, some years ago it was necessary to connect storage directly to the systems that were going to use it in order to obtain high bandwidths, whereas nowadays this is no longer necessary thanks to the deployment of broadband networks (even in WAN environments).

In addition to these criteria we could introduce others, such as the temporality of the data (in other words, whether the data should be present permanently or, on the contrary, are continually replaced), security and confidentiality of the information, etc. These can be really important, but they would also increase the number of classes excessively. Since they tend to be secondary factors, subcategories could later be established within certain types of data for the most significant cases.


Information classification at CESGA

According to these criteria, we classify the information held in the computing and storage servers into five types, summarized below; an illustrative matching sketch follows the summary:

  • Type 1 or SCRATCH: very high performance (very low latency and maximum bandwidth), since it affects the performance of the center's computing systems, and medium capacity (depending on the number of simultaneous jobs it must support), since the data are stored only while a computation is running. Its availability may be low (the data are temporary) and it is not necessary to make daily backups.
  • Type 2 or home directories: contain data that can be analyzed and modified at any time, as well as critical data, since the performance of the center's computing services depends on their availability. Their priority should therefore be availability (maximum), with a proper balance between capacity (medium, depending on the number of users) and performance (medium); backups are made daily.
  • Type 3 or massive storage system (MSS): used to store databases and experiment results. Its content does not normally change (it tends to be WORM-type) and its access speed is not normally critical, although a high bandwidth to the servers is required, since this is where experiment results may be stored. Backups can be made on demand, because the content is only modified sporadically. Examples of this type are the results of the daily weather forecast or the databases used in genomics.
  • Type 4 or backups (internal and external) to disk: copies that users make of their own servers or personal computers on CESGA's storage systems, in order to have a safety copy of their data. It is therefore not necessary to back them up (they are themselves the backup), and the availability of the service may be low. The service is offered over the network (internal or external), so the connection does not require high performance (the bottleneck lies in the interconnection between the end user and the storage). The capacity may be low or medium, depending on the number of users or centers to which the service is provided.
  • Type 5 or PARALLEL SCRATCH: very high performance (very low latency and maximum bandwidth), similar to Type 1 but with the additional feature that the scratch data are shared among all the cluster nodes and distributed across them, which improves both the aggregate bandwidth for file access and the total scratch capacity compared with the local disks. Its availability may be very low, because it depends on many non-redundant components, and no backups are made of these data.


Summary of the five types according to the classification criteria:

  • Type 1 (Scratch): availability: low; backups: none; connectivity: low latency, maximum bandwidth; capacity: medium (20%); sharing: none.
  • Type 2 (Home directories): availability: maximum (the performance of the system depends on it); backups: daily; connectivity: medium (standard architectures, FC); capacity: medium (30%); sharing: among all the nodes of the same cluster or system.
  • Type 3 (MSS): availability: medium; backups: on demand; connectivity: intranet network or FC, to reach maximum sharing with high internal bandwidths; capacity: maximum (90%); sharing: high, internal within the center and sporadically external.
  • Type 4 (Backups): availability: low; backups: none; connectivity: network, intranet and Internet, with medium bandwidths; capacity: low (10%); sharing: maximum, includes internal and external systems.
  • Type 5 (Parallel Scratch): availability: low; backups: none; connectivity: low latency, maximum bandwidth; capacity: high (50%); sharing: none.
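
As a purely illustrative aid (not a CESGA tool), the sketch below encodes the summary above and shows one way a user's requirements could be matched against the five profiles; the field names, thresholds and matching rules are assumptions made for the example.

```python
# Hypothetical helper that suggests a storage type from the summary above.
# The profiles and the matching rules are illustrative assumptions only.

PROFILES = {
    "Type 1 (Scratch)":          {"availability": "low",     "backups": "none",
                                  "capacity_pct": 20, "shared": False},
    "Type 2 (Home directories)": {"availability": "maximum", "backups": "daily",
                                  "capacity_pct": 30, "shared": True},
    "Type 3 (MSS)":              {"availability": "medium",  "backups": "on demand",
                                  "capacity_pct": 90, "shared": True},
    "Type 4 (Backups)":          {"availability": "low",     "backups": "none",
                                  "capacity_pct": 10, "shared": True},
    "Type 5 (Parallel Scratch)": {"availability": "low",     "backups": "none",
                                  "capacity_pct": 50, "shared": False},
}


def suggest_type(needs_backup: bool, needs_sharing: bool, availability: str) -> list[str]:
    """Return the types whose profile is compatible with the stated needs."""
    matches = []
    for name, profile in PROFILES.items():
        if needs_backup and profile["backups"] == "none":
            continue
        if needs_sharing and not profile["shared"]:
            continue
        if availability == "maximum" and profile["availability"] != "maximum":
            continue
        matches.append(name)
    return matches


if __name__ == "__main__":
    # A user who needs backed-up, shared, always-available storage:
    print(suggest_type(needs_backup=True, needs_sharing=True, availability="maximum"))
    # -> ['Type 2 (Home directories)']
```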



Last updated: 12.04.2010