LeoFS


Case Studies

UCSF HPC Cluster

  • Nearly 50% savings over the BeeGFS solution
  • Cluster of 4 nodes, 1.4 PB, $100K
  • 80% capacity utilization

Media Production

  • Outperforms the EMC Isilon S200 series
  • 60% more capacity
  • 50% higher throughput

Satellite Imagery

  • Same number of drives, 144
  • Dual 10 GbE vs. InfiniBand
  • No buddy mirroring

University of California

UCSF Wynton HPC Center

  • 1.2 PB storage, estimated cost $192K USD
  • Hardware: 4 nodes of 60-bay servers, 2 nodes of metadata servers
  • Software: ZFS and BeeGFS

Case Link

Competitive LeoFS Cluster

  • 1.4 PB storage, only $100K USD
  • Throughput: read 6.8 GB/s, write 10 GB/s (asynchronous)
  • Hardware: 4 nodes of 36-bay servers
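
For a rough sense of where the "nearly 50% savings" headline comes from, the quoted prices and capacities can be compared directly. A back-of-the-envelope sketch using only the figures above (no power, support, or networking costs are modeled):

    # Back-of-the-envelope comparison of the two quoted systems.
    beegfs_cost, beegfs_tb = 192_000, 1_200   # UCSF Wynton: $192K, 1.2 PB
    leofs_cost,  leofs_tb  = 100_000, 1_400   # LeoFS:       $100K, 1.4 PB

    print(f"absolute saving: {1 - leofs_cost / beegfs_cost:.0%}")   # ~48%
    print(f"$/TB: BeeGFS {beegfs_cost / beegfs_tb:.0f}, LeoFS {leofs_cost / leofs_tb:.0f}")  # 160 vs 71
    print(f"per-TB saving: {1 - (leofs_cost / leofs_tb) / (beegfs_cost / beegfs_tb):.0%}")   # ~55%

The absolute saving is just under 50%; per terabyte it is closer to 55%, since the cheaper cluster also holds more.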

2 Metadata + Storage nodes

  • CPU: Intel Xeon E5-2630 v4 x 2
  • Motherboard: Supermicro X10DRL-i
  • HBA: LSI SAS 9300-8i
  • System Disk Drive: 480 GB SSD x 2 + 240 GB SSD x 2
  • Storage Disk Drive: 10 TB HDD x 34
  • RAM: 128 GB
  • Network Port: 4 x 10 GbE


2 Storage-only nodes

  • CPU: Intel Xeon E5-2630 v4 x 2
  • Motherboard: Supermicro X10DRL-i
  • HBA: LSI SAS 9300-8i
  • System Disk Drive: 240 GB SSD x 2
  • Storage Disk Drive: 10 TB HDD x 36
  • RAM: 64 GB
  • Network Port: 4 x 10 GbE
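
The 1.4 PB raw figure is consistent with the drive counts in the two node types above. A quick sanity check, counting only the 10 TB data drives (system SSDs excluded):

    # Raw capacity from the bills of materials above (data drives only).
    metadata_storage = 2 * 34 * 10   # 2 nodes x 34 drives x 10 TB = 680 TB
    storage_only     = 2 * 36 * 10   # 2 nodes x 36 drives x 10 TB = 720 TB
    raw_tb = metadata_storage + storage_only
    print(raw_tb)                    # 1400 TB = 1.4 PB raw
    print(raw_tb * 0.8)              # 1120 TB usable at the quoted 80% utilization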

3D Animation

EMC Isilon Case Study

  • Ten nodes of S200 series
  • Raw capacity of 600 TB
  • I/O throughput 8 GB/s

Competitive LeoFS Cluster

  • Ten nodes of 4U 24-bay storage servers
  • Raw capacity of 960 TB
  • I/O throughput over 12 GB/s
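
The percentages on the Media Production summary card above follow directly from these two spec lists:

    # Source of the "60% more capacity" and "50% higher throughput" claims.
    isilon_tb,  leofs_tb  = 600, 960
    isilon_gbs, leofs_gbs = 8, 12
    print(f"capacity:   +{leofs_tb / isilon_tb - 1:.0%}")     # +60%
    print(f"throughput: +{leofs_gbs / isilon_gbs - 1:.0%}")   # +50%, conservative: LeoFS is quoted as "over 12 GB/s"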

Geneva Observatory

BeeGFS Case Study

  • 4 storage nodes and 2 metadata servers
  • 144 drives
  • InfiniBand
  • Effective 800 TB
  • I/O 5-8 GB/s

Case Link

Competitive LeoFS Cluster

  • 4 nodes of 4U 36-bay storage servers, 2 metadata servers
  • 144 drives
  • Dual 10 GbE
  • Usable 1 PB
  • I/O 8-11 GB/s
  • No buddy mirroring, 80% capacity utilization
  • No single point of failure from drives, nodes, or network
  • File-level RAID, faster data recovery
  • Better ROI
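
The utilization gap between the two clusters comes down to the redundancy scheme. Buddy mirroring keeps two full copies of each chunk, capping utilization at 50%, while file-level RAID (erasure coding) with k data and m parity shards achieves k / (k + m). The sketch below assumes an 8+2 layout, which is one layout consistent with the quoted 80% figure; the actual scheme is not stated on this page.

    # Capacity utilization: mirroring vs. file-level RAID (erasure coding).
    # The 8+2 layout is an assumption; only the 80% figure is quoted above.
    def utilization(data_shards: int, parity_shards: int) -> float:
        return data_shards / (data_shards + parity_shards)

    print(utilization(1, 1))   # 0.50 -> buddy mirroring (two full copies)
    print(utilization(8, 2))   # 0.80 -> e.g. 8 data + 2 parity shards

That difference largely explains why the same 144 drives yield roughly 1 PB usable here versus 800 TB effective in the BeeGFS deployment.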
