e-SCIENCE @ IFIC - Valencia

ACTIVE PROJECTS
   
EGEE-III
 

Enabling Grids for E-sciencE (EGEE) is Europe's leading grid computing project, providing a computing support infrastructure for over 10,000 researchers worldwide, from fields as diverse as high-energy physics and the earth and life sciences.

In 2009 EGEE is focused on transitioning to a sustainable operational model, while maintaining reliable services for its users. The resources currently coordinated by EGEE will be managed through the European Grid Initiative (EGI) as of 2010. In EGI each country's grid infrastructure will be run by National Grid Initiatives. The adoption of this model will enable the next leap forward in research infrastructures to support collaborative scientific discoveries. EGI will ensure abundant, high-quality computing support for the European and global research community for many years to come.

www.eu-egee.org

EGI
The main foundations of EGI are the National Grid Initiatives (NGI), which operate the grid infrastructures in each country. EGI will link existing NGIs and will actively support the set-up and initiation of new NGIs.
Red de e-Ciencia  
 

The Red Nacional de e-Ciencia (Spanish National e-Science Network) aims to coordinate and promote the development of scientific activity in Spain through the collaborative use of geographically distributed resources interconnected via the Internet. The network brings together users and application experts from diverse scientific disciplines (biocomputing, medical imaging, computational chemistry, fusion, meteorology, etc.), researchers in the ICT field, and resource-providing centres, so that all the actors of e-Science are represented.

www.e-ciencia.es

   
ATLAS  
 

ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. Starting in late 2009/2010, the ATLAS detector will search for new discoveries in the head-on collisions of protons of extraordinarily high energy. ATLAS will learn about the basic forces that have shaped our Universe since the beginning of time and that will determine its fate. Among the possible unknowns are the origin of mass, extra dimensions of space, microscopic black holes, and evidence for dark matter candidates in the Universe.

Computing for the ATLAS experiment at the LHC proton-proton collider at CERN will pose a number of new, as yet unsolved, technological and organisational challenges:

  • The CPU power required would currently not be affordable. For the offline processing, it is estimated that 2.5×10^5 SPECint95 (10^7 MIPS) will be needed; assembling this computing power today would take 50,000 of the most powerful PCs (see the back-of-the-envelope check after this list). We count on the continuing improvement of the price-performance ratio of CPU power and of the corresponding disks and memory.
  • The data volume produced by the experiment, about 1 Pbyte (10^15 bytes) per year, requires new methods for data reduction, data selection, and data access for physics analysis. The basic consideration is that every physicist in ATLAS must have the best possible access to the data necessary for the analysis, irrespective of his/her location. The proposed scheme consists of archiving the raw data (1 Pbyte/year) selected by the Level-3 trigger system. A first reconstruction of all events will be performed at CERN within a few hours of data taking; for this processing, basic calibration and alignment have to be available. The purpose of this processing is to determine physics quantities for use in analysis and to allow event classification according to physics channels. The data produced in the processing have to be accessible at the event level, and even below that at the physics-object level; we are considering an object-oriented database system for this purpose. One copy of the data will be held at CERN, and we also consider replicating some or all of the data at a small number of regional centres (a schematic sketch of this scheme follows the list).
  • The world-wide collaboration will require performant wide-area networks for data access and physics analysis. Big improvements in the price-performance evolution of networks are hoped for in view of the de-regulation of the PTTs, the evolution of the Internet, and the widespread use of networks for multimedia applications such as video on demand.
  • The effort required to develop and maintain the ATLAS software will be enormous. Because the success of the whole experiment depends on it, and because of the project's long lifetime of about 20 years, the software quality requirements will have to be very high. For the whole ATLAS software development, up to 1,000 person-years will be required. It appears that the overall manpower is available within the collaboration. A complication is that the workforce is very much spread geographically and that many developers will be students who can spend only a few years in the project.
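
The figures in the first item are mutually consistent; as a rough back-of-the-envelope check of the quoted numbers:

\[
2.5 \times 10^{5}\ \text{SPECint95} \approx 10^{7}\ \text{MIPS}
\quad\Rightarrow\quad 1\ \text{SPECint95} \approx 40\ \text{MIPS},
\]
\[
\frac{10^{7}\ \text{MIPS}}{5 \times 10^{4}\ \text{PCs}} \approx 200\ \text{MIPS per PC},
\]

i.e. each of the 50,000 PCs is assumed to deliver about 5 SPECint95 (roughly 200 MIPS).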
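
To make the proposed data-access scheme concrete, here is a minimal sketch in Python. All names in it (PhysicsObject, Event, EventStore, replicate_to) are hypothetical illustrations, not ATLAS software; the sketch only mirrors the structure described above: events holding analysis-level physics objects, classification by physics channel, event-level access, one master store at CERN, and partial replicas at regional centres.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class PhysicsObject:
        # An analysis-level quantity produced by the first reconstruction.
        kind: str            # e.g. "electron", "jet"
        momentum_gev: float

    @dataclass
    class Event:
        # One event, classified by physics channel during the first processing.
        event_id: int
        channel: str
        objects: List[PhysicsObject] = field(default_factory=list)

    class EventStore:
        # In-memory stand-in for the object-oriented database under consideration.
        def __init__(self, site: str):
            self.site = site
            self._events: Dict[int, Event] = {}

        def add(self, event: Event) -> None:
            self._events[event.event_id] = event

        def get(self, event_id: int) -> Event:
            # Event-level access: fetch a single event, not a whole run.
            return self._events[event_id]

        def replicate_to(self, other: "EventStore", channels: List[str]) -> None:
            # Partial replication: a regional centre may hold only some channels.
            for ev in self._events.values():
                if ev.channel in channels:
                    other.add(ev)

    # One master copy at CERN, a partial replica at a regional centre.
    cern = EventStore("CERN")
    cern.add(Event(1, "dielectron", [PhysicsObject("electron", 45.0),
                                     PhysicsObject("electron", 38.2)]))
    cern.add(Event(2, "dijet", [PhysicsObject("jet", 120.5)]))

    regional = EventStore("Regional Centre")
    cern.replicate_to(regional, channels=["dielectron"])
    print(regional.get(1).objects[0].kind)    # -> electron

In the real system the store would of course be a persistent, distributed database rather than an in-memory dictionary; the sketch is only meant to show the event-level and physics-object-level granularity of access.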

http://atlas.ch/

       
PARTNER
     

A Particle Training Network for European Radiotherapy (PARTNER) has been established in response to the critical need for reinforcing research in ion therapy and the training of professionals in the rapidly emerging field of hadron therapy.

This is an interdisciplinary, multinational initiative whose primary goal is to train researchers who will help to improve the overall efficiency of ion therapy in cancer treatment and to promote clinical, biological and technical developments at a pan-European level, for the benefit of all European inhabitants.

cern.ch/partner

       
Metacentro
Metacentro is a project that brings together actors from the Valencian Region with a common interest in grid technologies. The founding members are the University of Valencia, CSIC, and the Polytechnic University of Valencia.