Where is everything

  • Behind the VPN:
    • 1PB Hitachi pseudo-parallel-NFS file system, usually mounted as /pod/pstore (400TB) and /pod/podstore
    • Ceph file system: object store, currently 400 TB (soon >1PB), roughly 60% full, accessed through the S3 API with the Python boto module (see the sketch after this list)
    • stacker: 1.5TB RAM, 160 cores, general use, /pod/pstore is mounted
    • bazaar: 256GB RAM, 32 cores, runs Ubuntu, general use
    • OpenStack cluster (see the sketch after this list)
      • managed via http://podcloud.pod/
      • 2240 cores (70 nodes x 32 cores), 256 GB RAM per node; /pod is not mounted, but OpenStack block storage is available
      • each node has 10 TB of local storage
      • CIRM-01: 64GB RAM, 24 cores, /pod/pstore is mounted as NFS, only accessible to the CIRM group
    • traditional clusters:
      • parasol cluster: 19 nodes x 32 cores = 608 cores, headnode "podk", 256 GB RAM per node, 2TB local hard disk per node (unverified), /pod/pstore mounted as NFS; jobs are submitted with the para tool (see the sketch after this list)
      • SGI cluster: 50 nodes x 32 cores = 1600 cores, headnode "podk", 256 GB RAM per node, local disk size unknown, /pod/pstore mounted as NFS
  • Before the VPN:
    • GPFS massively parallel file system: 1.3PB, 30 file servers, mounted as /hive everywhere
    • hgwdev: 1TB RAM, 64 cores, ~1TB hard disk on /scratch, small NFS share mounted as /cluster/software
    • parasol "ku" cluster: 960 cores (30 nodes x 32 cores), 256 GB RAM per node, 2TB of local /scratch per node
    • kolossus and juggernaut: like hgwdev, 1TB RAM and 64 cores each
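
Since the Ceph object store is accessed through its S3 API, a minimal Python sketch using the boto module could look like the following. The gateway hostname, credentials, and bucket name are placeholders for illustration, not real pod values:

  import boto
  import boto.s3.connection

  # Placeholder credentials and endpoint; use the values handed out
  # for the Ceph S3 gateway instead.
  conn = boto.connect_s3(
      aws_access_key_id='MY_ACCESS_KEY',
      aws_secret_access_key='MY_SECRET_KEY',
      host='ceph-gateway.pod',  # assumed hostname, for illustration only
      is_secure=False,
      calling_format=boto.s3.connection.OrdinaryCallingFormat(),
  )

  # Create a bucket, upload one object, then list the bucket contents.
  bucket = conn.create_bucket('my-test-bucket')
  key = bucket.new_key('hello.txt')
  key.set_contents_from_string('Hello, Ceph object store')
  for k in bucket.list():
      print(k.name, k.size)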
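
The OpenStack cluster is normally managed through the dashboard at http://podcloud.pod/, but the same operations can be scripted. Below is a hedged sketch using the openstacksdk Python module; the auth URL, project, and credentials are assumptions for illustration:

  import openstack

  # Placeholder credentials; the real Keystone endpoint, project, and
  # account come from the podcloud administrators.
  conn = openstack.connect(
      auth_url='http://podcloud.pod:5000/v3',  # assumed endpoint
      project_name='my-project',
      username='my-user',
      password='my-password',
      user_domain_name='Default',
      project_domain_name='Default',
  )

  # List running instances and block-store volumes.
  for server in conn.compute.servers():
      print(server.name, server.status)
  for volume in conn.block_storage.volumes():
      print(volume.name, volume.size)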
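
Batches on the parasol cluster are driven from the headnode "podk" with a jobList file that holds one command line per job. Here is a minimal sketch that writes such a file from Python and hands it to the para tool; the input files and the myTool command are made up for illustration:

  import subprocess

  # Hypothetical batch: run myTool once per input file on the cluster.
  inputs = ['chr1.fa', 'chr2.fa', 'chr3.fa']
  with open('jobList', 'w') as fh:
      for name in inputs:
          # {check out line+ file} asks parasol to sanity-check the output
          fh.write('myTool %s {check out line+ %s.out}\n' % (name, name))

  # Standard para workflow: create the batch, push it, check progress.
  subprocess.check_call(['para', 'create', 'jobList'])
  subprocess.check_call(['para', 'push'])
  subprocess.check_call(['para', 'check'])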