Posted by Jonathan, Apr 12, 2024, in Computing. Tags: Ceph, homelab series, Kubernetes, NVMe, Rook, storage. Part 4 of this series was …

I just ran some benchmarks on my Kubernetes/Ceph cluster with 1 client, 2 data chunks, and 1 coding chunk. Each node has an SMR drive with bcache on a cheap (~$30) SATA SSD over gigabit. My understanding is that Ceph performs better on gigabit when using erasure coding, as there is less data going over the network. With 3 Ceph nodes …
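As a rough illustration of the 2+1 layout described above, here is a minimal sketch of creating an erasure-coded profile and pool by shelling out to the standard ceph CLI from Python. The profile name, pool name, and PG count are placeholders for illustration, not values from the post.

import subprocess

def ceph(*args):
    # Run a ceph CLI command on an admin/mon node and return its stdout.
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

PROFILE = "ec-2-1"   # hypothetical profile name
POOL = "bench-ec"    # hypothetical pool name

# k=2 data chunks + m=1 coding chunk, matching the benchmark setup above;
# failure domain "host" spreads the chunks across the three nodes.
ceph("osd", "erasure-code-profile", "set", PROFILE,
     "k=2", "m=1", "crush-failure-domain=host")

# Create the erasure-coded pool (32 PGs is only a small-cluster guess).
ceph("osd", "pool", "create", POOL, "32", "32", "erasure", PROFILE)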
Ceph cluster for a noob guide? : homelab - reddit.com
These are my two Dell OptiPlex 7020s that run a Ceph cluster together. The nodes have identical specs: i5-4590, 8 GB RAM, 120 GB + 240 GB SSD. They are both running Proxmox with Ceph installed, using the 240 GB SSD as an OSD. This lets the cluster run in HA as well as migrate containers and VMs with …

Anyone getting acceptable performance with 3x Ceph nodes in their homelab with WD Reds? I run three commodity-hardware Proxmox nodes: two i7-4770Ks (32 GB RAM each) and a Ryzen 3950X (64 GB), all hooked up at 10G. Right now I have three OSDs on 10 TB WD Reds (5400 RPM), configured in a 3/2 replicated pool using BlueStore.
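For context on the "3/2 replicated pool" setting, here is a small sketch of how such a pool might be created, again wrapping the ceph CLI; the pool name and PG count are assumptions for illustration.

import subprocess

def ceph(*args):
    # Thin wrapper around the ceph CLI; assumes admin keyring access.
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

POOL = "vm-storage"  # hypothetical pool name

# Replicated pool; 128 PGs is a rough guess for three spinning OSDs.
ceph("osd", "pool", "create", POOL, "128", "128", "replicated")

# size=3 / min_size=2 is the "3/2" setting: three copies of every object,
# and the pool stays writable as long as two copies are available.
ceph("osd", "pool", "set", POOL, "size", "3")
ceph("osd", "pool", "set", POOL, "min_size", "2")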
40Gb networking on a budget : r/homelab - reddit
New Cluster Design Advice? 4 Nodes, 10GbE, Ceph, Homelab. I'm preparing to spin up a new cluster and was hoping to run a few things past the community for advice on setup and best practice. I have 4 identical server nodes, each with: 2x 10Gb network connections, 2x 1Gb network connections, and 2x 1 TB SSDs for local Ceph storage.

The current test temporarily uses 36 OSDs; the final Ceph cluster will have 87 OSDs in total, with 624 TB of raw HDD capacity, 20 NVMe drives, and 63 TB of raw NVMe capacity.

The ECC-recovered counts are growing at a rate of 80k per second per drive with 10 Mbit/s of writes to Ceph. That would probably explain the average disk latency for those drives. The good drives are running at around 40 ms latency per one-second interval; the drives reporting ECC-recovered errors are sitting at around 750 ms.
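To surface per-OSD latency figures like the 40 ms vs. 750 ms numbers above, ceph osd perf reports commit/apply latency for each OSD. A small sketch follows, assuming the JSON field names used by recent Ceph releases (these have shifted between versions, so treat the key lookups as a best guess).

import json
import subprocess

# Pull per-OSD latency from "ceph osd perf" and flag the slow ones.
out = subprocess.run(["ceph", "osd", "perf", "-f", "json"],
                     check=True, capture_output=True, text=True).stdout
data = json.loads(out)

# Depending on the release, the list lives at the top level or under "osdstats".
infos = data.get("osdstats", data).get("osd_perf_infos", [])
for info in infos:
    lat = info["perf_stats"]["commit_latency_ms"]
    flag = "  <-- suspicious" if lat > 100 else ""
    print(f"osd.{info['id']}: commit latency {lat} ms{flag}")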