Hi,

I thought I’d post my latest project. I use a number of Raspberry Pi compute modules as servers and decided to build myself a custom blade server to host them. It replaces a bunch of old Intel rack-mount servers on my home network and is a lot less power hungry! It’s been through a few iterations and is now working really well. This is the server:

https://preview.redd.it/4eff1iwi5i1c1.jpg?width=5442&format=pjpg&auto=webp&s=f91eebef92053a9698f74588df2a8ef3cd29462b

It’s a 2U rack-mountable unit in an off-the-shelf ABS case with some custom 3D-printed parts. The server takes up to 10 of these blades:

https://preview.redd.it/zi84q19k5i1c1.jpg?width=5472&format=pjpg&auto=webp&s=7b5e757c0f054ab96a97cf4be5b1ce9f4c49ff7f

Each blade has gigabit Ethernet, USB-A and HDMI on the front, plus an NVMe SSD slot on the board, an SD card slot and a battery-backed real-time clock. There’s a little OLED on the front displaying information about the blade, including its name and IP address, to make it easy to identify during maintenance. There’s also an RP2040 on the board for management.
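
For illustration, a script along these lines is roughly all a blade needs to put its name and IP on the display; the SSD1306 panel, the 0x3C address and the luma.oled library here are just a sketch, not necessarily what the actual blade uses.

```python
# Sketch: show a blade's hostname and IP on a small I2C OLED.
# Assumes an SSD1306 panel on I2C bus 1 at 0x3C and the luma.oled
# library -- illustrative only, not necessarily the real parts.
import socket

from luma.core.interface.serial import i2c
from luma.core.render import canvas
from luma.oled.device import ssd1306


def primary_ip() -> str:
    """Find the address used for outbound traffic (no packets are sent)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]


def main() -> None:
    device = ssd1306(i2c(port=1, address=0x3C), width=128, height=32)
    with canvas(device) as draw:
        draw.text((0, 0), socket.gethostname(), fill="white")
        draw.text((0, 16), primary_ip(), fill="white")


if __name__ == "__main__":
    main()
```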

The blades plug into a custom backplane which provides power and centralised management. There’s an LCD front panel with basic controls for powering blades on and off and for showing status information, and another compute module which acts as a management web server. It can be used to upload flash images to the blades via the backplane, and it provides serial console access to the blades through the web interface.
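
To give a rough idea of the shape of that management interface, here’s a minimal sketch of a per-blade power and status API; Flask, the endpoint paths and the placeholder helpers are hypothetical stand-ins for illustration, not the actual software running on the management module.

```python
# Sketch of a management-module web API: per-blade power control and
# status. Flask, the endpoint paths and the blade_power()/blade_status()
# helpers are hypothetical stand-ins, not the real backplane interface.
from flask import Flask, jsonify

app = Flask(__name__)

NUM_SLOTS = 10  # blade slots in the 2U chassis


def blade_power(slot: int, on: bool) -> None:
    """Placeholder: would toggle the slot's power rail via the backplane."""
    print(f"slot {slot}: power {'on' if on else 'off'}")


def blade_status(slot: int) -> dict:
    """Placeholder: would query the blade's RP2040 for name, IP, temperature."""
    return {"slot": slot, "name": f"blade-{slot}", "state": "unknown"}


@app.route("/blades/<int:slot>/power/<state>", methods=["POST"])
def power(slot: int, state: str):
    if not 1 <= slot <= NUM_SLOTS or state not in ("on", "off"):
        return jsonify(error="bad request"), 400
    blade_power(slot, state == "on")
    return jsonify(ok=True)


@app.route("/blades/<int:slot>", methods=["GET"])
def status(slot: int):
    return jsonify(blade_status(slot))


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```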

I’ve been using this for a while now and was wondering if other folks out there are interested in it? It would be quite quick and easy for me to turn this into a product for sale if there was a market out there for it.

Please let me know any comments or suggestions you have, any feedback is appreciated!

Alastair

  • TheGuyDanish@alien.top

    I’m glad to see someone got around to this. I had a similar idea a while ago and even got through a first and second revision that I managed to bring up, but my motivation kinda ran out on the backplane board and on figuring out how to keep a respectable transfer speed.

    https://i.postimg.cc/D0qRK4jh/PXL-20210804-120104488.jpg

    Really interested in seeing where this goes and I’d be very happy to chat about details and concerns if you need a feedback chamber. :)

  • tenekev@alien.top

    Kudos to you, sir. I’m usually the first to argue against RPis in homelab posts, but this is on a whole other level. I think everyone would love a detailed write-up on it.

    The Compute Blade comes to mind and I’m drawing parallels between them. AFAIR, the Compute Blade does power and management over the front Ethernet port, which requires the PoE circuitry to be on every blade too. Does your backplane simplify the boards (and make the project cheaper)?

    • allyg79@alien.top (OP)

      Thanks, that’s very kind. I’ve added some more detail in other replies and I think I’ll do a full blog post in the next couple of days.

      There are definitely parallels with the Compute Blade project, but there are a few differences. My blades are a bit simpler: they don’t have the TPM that the Compute Blade does, as I didn’t have any real need for it. The CB also packs more blades into the 19" width. That was another design decision on my part: I quite liked the short-depth case keeping the unit small, and I wanted to make sure there was plenty of airflow for cooling (tbh it needed less than I allowed for!)

      My unit is more focused on being like a traditional server, as that’s what my use case was: centralised power, centralised management and provisioning, etc. You’re correct, the Compute Blade uses PoE, whereas I did it through the backplane. My preference was for central management rather than per-blade, so that meant a backplane, and it all flowed from there. It lets you feed the USB and serial console into the management server, which is great for provisioning and debugging. The displays are also born out of my days as a network infrastructure guy, where being able to see a server’s name and IP address on the physical unit would have been a godsend when doing maintenance! So I guess the design differences between this and the Compute Blade come down to my focus on a server use case rather than a general-purpose compute module carrier.

      I’d say it’s probably a bit cheaper using a backplane than PoE. The PoE circuitry adds a bit to the cost of each blade, which soon multiplies up, plus there’s the additional cost of a PoE switch over a non-PoE one. I’m using an off-the-shelf ATX PSU, and these are made in such huge quantities that the price per watt is difficult to beat.

  • ztasifak@alien.top

    What is the benefit of using Raspberry Pis for your use case? Low power usually comes with low performance, or am I missing something? If I invested the same amount of money in mini PCs (used, on eBay or similar), wouldn’t I get more compute power for the money?

    • allyg79@alien.top (OP)

      Yeah, this isn’t useful for many things, but as others have mentioned there are situations where it is. My original use case, the thing which prompted me to build this (other than just the fun of seeing if I could do it!), was to replace a whole load of low-complexity VMs. I’m a freelance programmer and I do a bunch of hosting for both myself and some clients out of my home office. I’ve got a small rack set up in my attic with a UPS, and I have redundant fibre connections. It’s obvs nowhere near datacentre quality but it works well for my purposes.

      I’d previously been using VMs running on some second-hand enterprise x64 kit that I bought. Whilst this works great, the electricity bill is rather higher than I’d like! When I analysed what all the VMs were doing, I realised it’d be perfectly possible to do it on a Pi. In the dim and distant past I was a network infrastructure guy, so I started looking into “proper” server Pi solutions and before I knew it I was down this rabbit hole!

      It works really well for low-power server applications. It’s not in the same league as the big-iron ARM mega-core servers (or indeed Xeon servers) for performance, but then it’s nowhere near that league for price either. I haven’t figured out an exact price if I were to sell it commercially, but it’d likely be in the $800 US range without CMs. If you were to max that out with 8GB Pis it’d end up around $1500, which’d give you 40 cores of pretty decent performance and 80GB of RAM. The Gigabyte and Ampere Altra servers I’ve seen are awesome and way more powerful than this, but they’re several times more expensive.

      • ztasifak@alien.top

        Indeed. But for 1500 USD I can build a brand-new small-form-factor PC with 96GB of RAM and lots of compute power. Well, if it works for you, great. It certainly looks cool.

    • allyg79@alien.top (OP)

      Thanks. There’s some more info in other replies, and I’ll do a proper blog write-up in the next few days.

  • EvanH123@alien.top

    I really wanted to do something like this about a year ago but ran into the great Pi shortage, and I ultimately gave up after all I could find cheap enough were 3Bs.

    Also, I mainly use my server for storage and I don’t think the Pi’s USB 3 is really adequate.

    • allyg79@alien.top (OP)

      The Pi shortage was definitely a nightmare. I wouldn’t have bothered with this even at the start of the year as you just couldn’t find the kit. It does seem to have eased now, though. Digi-Key and Farnell have had good stock levels of CM4s for the last couple of months, and the CM5 can’t be far away now. I’ve found the performance of CM4s with NVMe SSDs is pretty good, certainly enough for my use cases. Blades like this aren’t much use for storage servers though, as there aren’t really enough storage options. I wouldn’t rule out doing a purpose-built storage server at some point though; my home network has a couple of big NAS boxes with 20-odd SSDs and I’d love to replace those!

  • Jacksaur@alien.top

    I would legit start learning Kubernetes just to give myself an excuse to use this. It looks so damn cool!

    Are you planning to make this available at all?

    • allyg79@alien.top (OP)

      Yeah, at the very least I’ll hand-build a few units with the spares I’ve got here and make those available. If there’s enough demand I’ll potentially do a full production run. I’ll open source the designs too so folks can have a proper poke about in it :-)

  • Beard_o_Bees@alien.top

    The backplane sounds interesting.

    I’m guessing it doesn’t facilitate card-to-card data transfer (using something like a virtual NIC), since each blade goes to its own switch port (also a guess on my part)?

    It seems like, if you can use the backplane’s bus to update firmware, it might be able to move other data too?

    No matter how you slice it, this is pretty cool.

    • allyg79@alien.top (OP)

      Thank you! Yes, you’re correct on both guesses. There’s blade-to-backplane/management-server comms, but no direct blade-to-blade comms. As I’ve mentioned in a couple of other replies, it’s definitely possible to do a version of this where the Ethernet goes from the blade to the backplane over the PCIe connector and into a switch on the backplane, so that all the switching is done on board with a single uplink port. It’s a much more complicated project though, so it's not something I’ve tackled yet.

      The blades use PCIe card-edge connectors because they’re cheap, and I route UART0 (GPIO 14/15) and the USB from the compute module onto them. There’s a USB switch IC on each blade which can route the CM’s USB either to the host port on the front of the blade or through the backplane. The UARTs and USB are connected through switches on the backplane into the management module. Each blade also has an RP2040 connected to various pins on the compute module, and it sits on I2C buses to both the CM and the backplane’s management module. The management module can talk to it to do things like restart the CM into provisioning mode, and Linux on the blade can use it to exchange status information with the management module - that’s how I get out status, temperature and so on. There’s no reason this couldn’t be used for other things too, and in theory it could carry inter-blade data at I2C data rates.
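
      Roughly, the Linux side of that status exchange could look like the sketch below; the 0x42 address, the simple register/length framing and the smbus2 calls are illustrative assumptions rather than the actual protocol on the blade.

      ```python
      # Sketch: blade-side daemon pushing hostname, IP and CPU temperature to
      # the on-blade RP2040 over I2C. The 0x42 address, the one-byte "register"
      # ids and the framing are illustrative, not the actual wire protocol.
      import socket
      import time

      from smbus2 import SMBus, i2c_msg

      RP2040_ADDR = 0x42   # assumed 7-bit address of the RP2040 I2C peripheral
      REG_HOSTNAME = 0x01  # assumed register ids
      REG_IP = 0x02
      REG_TEMP = 0x03


      def cpu_temp_c() -> float:
          with open("/sys/class/thermal/thermal_zone0/temp") as f:
              return int(f.read().strip()) / 1000.0


      def primary_ip() -> str:
          with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
              s.connect(("8.8.8.8", 80))
              return s.getsockname()[0]


      def write_field(bus: SMBus, reg: int, payload: bytes) -> None:
          # One write transaction: register id, length byte, then the payload.
          msg = i2c_msg.write(RP2040_ADDR, bytes([reg, len(payload)]) + payload)
          bus.i2c_rdwr(msg)


      def main() -> None:
          with SMBus(1) as bus:  # I2C bus 1 on the compute module
              while True:
                  write_field(bus, REG_HOSTNAME, socket.gethostname().encode())
                  write_field(bus, REG_IP, primary_ip().encode())
                  write_field(bus, REG_TEMP, f"{cpu_temp_c():.1f}".encode())
                  time.sleep(5)


      if __name__ == "__main__":
          main()
      ```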

      The connector also brings out the RP2040’s UART and SWD, which I use to flash firmware onto the RP2040. I haven’t routed those through the backplane switches, but in theory they could be too.

    • allyg79@alien.top (OP)

      Yeah, I know about that one. I looked at it when I first started thinking about using Pis for the server stuff I wanted, but I couldn’t actually buy one at the time, so I built my own :-) As I mentioned in another reply, there are a few differences around my focus on using this as a simple server system.

  • jace_garza@alien.top

    I don’t think I’d ever need one of these in my home but this looks so friggin cool I’d want one just to have one 🤣. Helluva job! 10/10

  • KittensInc@alien.top

    Wow, that looks amazing!

    Have you also considered routing networking through the backplane? That would essentially get rid of all per-node wiring.

    • allyg79@alien.top (OP)

      Thank you! Yes, I’d really like to do this in a future version; I’ve gone into a bit more detail in some other replies. It’ll be quite an expensive thing to develop, which is why I went with this approach for now. It’d certainly make for a much tidier setup.

    • allyg79@alien.top (OP)

      The feedback’s been good, so I’ll hopefully get this available to buy soon. I’m thinking somewhere in the $800-ish range without the CMs, which hopefully is close to home-use territory.

  • engineerfromhell@alien.top

    Saving this post, would be very curious to see this project grow. I wonder what kind of density could be achieved with some custom thermal control solution. Amazing work, I hope you can make it in to a product.

    • allyg79@alien.top (OP)

      If you were to lose the front-panel sockets and do the switching on board, it’d be possible to pack quite a few into a 19" width. There’s also space to fit two compute modules per blade, so in theory it could get quite dense. Cooling would definitely be the issue; I suspect with powerful enough case fans it should be workable.