Servers for rent!

  • I'm looking to rent space on my servers!

    I would charge 2 USD per TB per month. You would simply tell me how much space you need, and I would plot your files onto the HDDs. I would allow you to run TeamViewer in read-only mode so you can in fact see that your files are both being plotted to your specifications and mining via Blago in AVX2 mode.

    If someone wanted to rent either a half cluster (2 nodes with 384 TB of HDD space) or a full cluster (4 nodes with 768 TB of HDD space), I would allow you full control over the server via TeamViewer or the screen-sharing software of your choosing. You could install and do whatever you like on your server. I would also offer a minor discount for people who pay for this much storage.

    The 3 nodes currently running hold 576 TB (another node will be online in the next week), with plans to double that over the next month. Due to the nature of my work, I can procure HDDs extremely cheaply, so I thought this would be perfect for Burst.

    Each server will run 4 nodes in total (itself plus 3 others), with each node holding 24 × 8 TB WD drives (non-SMR, to ensure maximum read/write speeds) controlled by a single overclocked 7700K. The 4 nodes per server constitute 768 TB (8 × 24 × 4) and make up a cluster. I plan to expand this to at least 2 PB over the next 2-3 months. The servers/nodes use enterprise SAS controllers and high-speed 6 Gb/s SAS interconnects linking each node and cluster, ensuring each optimized drive will read at well over 50 MB/s and each cluster at around 5 GB/s in aggregate, giving the best and fastest DLs (deadlines) possible.
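
    The capacity and throughput figures above can be sanity-checked with quick arithmetic (a sketch using the post's assumed numbers: 24 drives per node, 4 nodes per cluster, a conservative 50 MB/s per drive; these are not measured values):

```python
# Sanity check of the cluster figures quoted above.
# Assumed numbers from the post, not measurements.
DRIVE_TB = 8            # TB per drive
DRIVES_PER_NODE = 24
NODES_PER_CLUSTER = 4
DRIVE_READ_MB_S = 50    # conservative per-drive sequential read

drives = DRIVES_PER_NODE * NODES_PER_CLUSTER       # 96 drives per cluster
capacity_tb = DRIVE_TB * drives                    # 768 TB, matching 8 x 24 x 4
aggregate_gb_s = drives * DRIVE_READ_MB_S / 1000   # ~4.8 GB/s per cluster

print(drives, capacity_tb, aggregate_gb_s)
```

    So the quoted ~5 GB/s per cluster is consistent with 96 drives each sustaining around 50 MB/s.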

    There will be one 1050 Ti per node, so 4 in each cluster, giving about 80-100K nonces per cluster.
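
    For context on nonce counts: a Burst nonce occupies 4096 scoops of 64 bytes each, i.e. 256 KiB on disk, so plotted capacity translates directly into a number of nonces (a sketch, using decimal terabytes):

```python
# A Burst plot is made of nonces; each nonce is
# 4096 scoops x 64 bytes = 262144 bytes (256 KiB).
SCOOPS_PER_NONCE = 4096
SCOOP_BYTES = 64
NONCE_BYTES = SCOOPS_PER_NONCE * SCOOP_BYTES  # 262144

def nonces_for_tb(tb: float) -> int:
    """Number of whole nonces that fit in `tb` decimal terabytes."""
    return int(tb * 1e12 // NONCE_BYTES)

print(nonces_for_tb(8))  # one 8 TB drive -> ~30.5 million nonces
```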

    The servers will be connected to a 48-port fiber-optic switch with 2 internet connections from two different ISPs linked to the switch via a load-balancing router, ensuring zero internet downtime.

    Both internet connections will go through firewall/DDoS-mitigation servers which are capable
    of blocking and rerouting thousands of IP addresses per second.

    The servers will have 2-way SAS redundancy per cluster, meaning that if one cluster were ever to fail for some reason (like a motherboard failure), the impact would not be felt, as another cluster would take over the drive reads.

    Battery backup will consist of 4 AGM 150 Ah batteries linked to an inverter for power redundancy. The battery bank will be expanded as necessary to ensure at least an 8-hour reserve of power.
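
    A rough runtime estimate for that bank (a sketch; the 12 V rating, 50% usable depth of discharge, and 90% inverter efficiency are my assumptions, not figures from the post):

```python
# Rough runtime estimate for the battery bank described above.
# Assumptions (not stated in the post): 12 V AGM batteries,
# 50% usable depth of discharge, 90% inverter efficiency.
BATTERIES = 4
AH_EACH = 150
VOLTS = 12
DOD = 0.5
INVERTER_EFF = 0.9

usable_wh = BATTERIES * AH_EACH * VOLTS * DOD * INVERTER_EFF  # 3240 Wh

def runtime_hours(load_watts: float) -> float:
    """Hours of reserve at a given sustained load."""
    return usable_wh / load_watts

print(round(runtime_hours(400), 1))  # ~8.1 h at a 400 W load
```

    Under these assumptions the 8-hour reserve holds only for loads around 400 W, which is why the post notes the bank would be expanded as needed.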

    This is a great option for people who don't want to spend the time, or the large upfront cost, of buying HDDs and other hardware. The servers will run at the fastest possible speeds, with the highest levels of redundancy, and no messing around.

    Payment would be in Burst 😉, due on the day your first TB is plotted and exactly a month thereafter.
    You would not be charged anything until your plot file is 100% online and running in Blago.

    If you're interested or want further info on renting options, feel free to PM me!

    On a side note to the admins: the registration panel did not work for me. I tried with 2 computers and 3 different browsers.
    I could not create an account as I was "forbidden" and had to register via Twitter (hence the terrible username). Maybe someone could look into that?



  • These are the current nodes. I plan to more than double this over the next 1-2 months.

    (image: the current nodes)

  • admin

    @WarrenDobson2 Damn nice hardware - I like.

    Try the registration again now. I've rebooted the server and moved it to a less loaded proxy. If Chrome still gives you issues, try a different browser.

  • I appreciate your approval!

    Due to the interest I've received (and the new 192 TB shipment of HDDs I got yesterday), I'm lowering the price to 1.50 USD per TB per month! It is a fixed rate and not subject to the price of Burst; 20 TB would cost 30 USD a month, for example. Some people have asked my motive for leasing space as opposed to mining Burst myself: I figure I can make just as much leasing as mining.
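
    The arithmetic at the fixed rate (a trivial check of the example above):

```python
# Monthly cost at the fixed rate quoted above.
RATE_USD_PER_TB = 1.5   # fixed, not pegged to the Burst price

def monthly_cost(tb: float) -> float:
    return tb * RATE_USD_PER_TB

print(monthly_cost(20))  # 30.0 USD/month, matching the example above
print(monthly_cost(4))   # 6.0 USD/month at the 4 TB minimum order
```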

    This is my way of giving back to the community: letting people mine Burst without the technical knowledge required to set it all up, while also saving them the headache and cost of spending hundreds or thousands of dollars on HDDs, only to realize they will never get their return back as the difficulty goes up.

    Edit: I should also mention that the drives plotted on this server can be linked to the pool of your choice. You just need to tell me what starting nonce you're at, and these plots will integrate seamlessly with your existing plots!
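
    A sketch of why the starting nonce matters: Burst plot files are conventionally named accountID_startNonce_nonceCount_stagger, and a new plot should begin at the first nonce after the highest existing one so that ranges never overlap (the filenames below are hypothetical examples):

```python
# To extend existing plots without overlap, start the new plot at the
# first nonce after the highest existing plot ends. Burst plot files
# are conventionally named accountID_startNonce_nonceCount_stagger.
def next_start_nonce(filenames):
    """Return the first free nonce after the highest existing plot."""
    end = 0
    for name in filenames:
        parts = name.split("_")
        start, count = int(parts[1]), int(parts[2])
        end = max(end, start + count)
    return end

# Hypothetical example filenames:
plots = ["12345_0_1000000_8192", "12345_1000000_500000_8192"]
print(next_start_nonce(plots))  # 1500000
```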

    It's also just fun to play data center 🙂

  • @WarrenDobson2 hit me up on msg

  • @WarrenDobson2 hit me up too in pm, thanks

  • Really interested but a quick question - how are the drives presenting in terms of volumes to the underlying OS - a logical volume comprising multiple disks or does each disk directly represent one volume? The reason I ask is about contention and how it will be managed - particularly if you end up with multiple users with <8TB per person.

    The main thing I'm concerned about is contention when it comes to seek times - if each physical disk is shared (whether it be one disk, one volume, multiple customers to that volume or multiple disks per volume - even just as a RAID0, shared between customers) even if everybody has optimized plots then the seeking will be crazy during a new block - everyone will be reading different parts of the disk (massively different parts of the disk at that) at the same time. This could give much less than 50MBps regardless of interconnects or drives - the bottleneck becomes the seek.

    Just wondered how you were structuring the volumes to deal with this? If it's as simple as "if you buy in 8TB chunks you can get exclusive use of a single disk" then that's fine for me - but if it's a case of each cluster (or half cluster) combines all the disks together into a giant volume then it could result in significant decrease in performance depending on how your tenants are structured. If everyone's data happens to end up on a single disk that's great, but if everyone's data crosses disk boundaries it could be terrible for performance.

    Not trying to be negative here at all just want to know how much needs to be purchased to guarantee exclusivity on a single physical disk 🙂

    Also depending on the interest you get (and whether you want to because the SAN you've got here is much more fun to manage than this...) I know I'd be willing to sacrifice redundancy for double the capacity - especially if disks are provided single rather than in an array and can be provided split across your physical servers - means the individual loss of a single machine or disk is unlikely to impact more than 8TB of plots at a time. 2TB for 1.5USD would be an insanely good deal.
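
    For a sense of scale behind this concern: each block, a Burst miner reads one scoop out of 4096 from every nonce, i.e. roughly 1/4096 of the plot, so the effective read rate determines scan time (a sketch; the 10 MB/s seek-degraded figure is illustrative, not measured):

```python
# Per-block read volume: Burst mining reads one scoop (of 4096) per nonce,
# so about 1/4096 of the plotted data is scanned every block.
SCOOP_FRACTION = 1 / 4096

def scan_seconds(plot_tb: float, effective_mb_s: float) -> float:
    """Time to scan a plot each block at a given effective read rate."""
    bytes_read = plot_tb * 1e12 * SCOOP_FRACTION
    return bytes_read / (effective_mb_s * 1e6)

print(round(scan_seconds(8, 50), 1))   # ~39.1 s at full sequential speed
print(round(scan_seconds(8, 10), 1))   # ~195.3 s if seek thrash cuts it to 10 MB/s
```

    At Burst's roughly 4-minute block time, an 8 TB disk at 50 MB/s finishes comfortably, but heavy seek contention on a shared disk could push the scan past the deadline, which is exactly the risk raised above.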



  • @guytp

    Short answer: I thought about this and realized from the start that you cannot have more than 1 plot per drive. There will be no drive sharing.

    Each drive is configured as its own single-drive RAID 0 volume on the controller. The drives are not combined into one giant logical disk! That would also be a disaster, as if 1 drive failed...

    The main reason I went SAS was not for RAID, but because I can get over 96 drives running off 1 server and avoid any SATA bottlenecks, as the SAS controllers use PCIe 2.0 x8 bandwidth. SATA cannot offer that.

    There is no data redundancy. What I meant by SAS redundancy is that each SAS controller is jacked into 2 servers with 2 OSes, so if 1 ever went down due to a BSOD or hardware failure, no one would be affected. Everything has a backup except the HDDs themselves (due to the insane cost of RAID 5 or 6 on this many drives) 🙂

    Edit: One thing I should mention is that plots MUST be in multiples of 4 TB. I only have 4 and 8 TB drives, so sizes must come in 4 TB increments, and there is a minimum order of 4 TB.

  • @WarrenDobson2 Excellent - sent you a PM 🙂

  • Wow. When I started this, I thought maybe I'd get a few dozen TBs' worth of orders. But as it stands, I have orders for close to 400 TB! I'll be upgrading the GPUs and adding 2 GTX 1060s per server to keep up with demand!

  • @WarrenDobson2 sweet i sent you a PM