Two weeks ago, Tim came to visit me and we discussed a lot. Much has changed since the last status update, and we wanted to share what is happening behind the scenes.
Our financial situation has improved. We’ll share some news about this in the coming months, but we finally have enough to pay Tim full time and to pay a freelancer from time to time to help with the infrastructure.
We are still rationalizing our infrastructure, and we’ll contact you on a case-by-case basis to migrate you to the new infrastructure. This will start happening in December.
We also want to clarify what IndieHosters is and what it offers.
There has always been a tension in IndieHosters between wanting you to be a member who decides everything, and wanting to be a worker-owned cooperative where we, the workers, are the only ones deciding.
There was also a tension between being a not-for-profit and the fact that, well, we want to pay our rent with our services.
We want to make hosting cheaper so that more people can quit Google, while still providing a nice service.
We want to be able to support people.
As you can tell, not all of this is compatible. But we are starting to have an idea that we feel comfortable sharing.
As a first point, we have different kinds of users: some would like to be members of an association and like that idea, while others are more like clients.
We came up with the idea of a virtuous triangle; here are its three corners, which reinforce each other:
- IndieHosters - the association
- oxalis - the cooperative that Tim is part of
- libre.sh - the free software project that powers our infrastructure
The association already exists, and the idea is to transition most of our users there.
We’ll keep a fair price; support will officially happen only on the forum, as community support.
And this is where our pro-bono activity will also be.
We want you to be more involved there, and we’ll contact you in the coming months to see how to do that. But if you already have ideas, feel free to share them here, or privately by email.
The 1st of February is our 5th anniversary, and we’ll run a crowdfunding campaign to bring in more members there too. We also want to release a shiny new platform (more on that later).
The draft presentation of the association is already here. Feel free to edit it or ask questions here.
About your debt (the bills we never sent you): we’ll send the bills in January to prefill the crowdfunding. We’ll send you a reminder, but don’t worry if you don’t receive a bill; we’ll keep running your service and renewing your domains (until the crowdfunding, hehe).
We feel it is in our DNA, and we want to keep doing it. It is also a great place to experiment with new things together with you, and see what works and what doesn’t.
If you need a professional service (high availability, SLAs, phone support, consulting, customization, training…), this is where you go. We are leaving the name to the association; the split is not clear yet, currently still blurry, but we give ourselves one year to find a new brand for these activities.
The association would actually be one client of the cooperative. So if the governance of the association changes and we are no longer happy with each other, the association would be free to find a new hoster (we hope not, but if we let the members decide, this can happen too).
This is the part that currently pays the rent, obviously, and we hope to grow: our goal is to be 5 people by next year, but we’ll see.
That said, we don’t want to grow indefinitely: the plan is that if we reach 14 people, we split the structure in two.
We hope that the split is quite clear. If you have any feedback, please feel free to comment here.
We are working on a new version of the platform, and we want to release a β on the 1st of February (our 5th anniversary and the crowdfunding).
The idea is nothing crazy; it would include:
- A cloud - with Nextcloud obviously
- A collaborative note taking app - CodiMD
- This very forum
- Our chat
- maybe a fediverse app (Mastodon, PeerTube, or Funkwhale); we have to see what we can do, and what you want
All of it would sit behind a unified experience, with single sign-on and some sort of dashboard; the details are still blurry. We don’t plan to migrate the current shared Nextcloud, as it would be too complex, but we’ll keep it updated. We’ll also change the domain name; we are not so happy with .host.
Everything will be deployed in a highly available fashion on our new infrastructure. The platform is also easily scalable, so we can scale to infinity and beyond! We’ll probably be able to offer more storage for a lower price, but only if we have a bit of volume.
This β platform is also meant to be a demonstration for potential clients of the cooperative who would like to deploy such a tool suite in their own environment, under their domain name, with their logo and their integrations (LDAP, already existing tools…).
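Since the single sign-on design is still blurry, here is only a hedged sketch of what wiring one app of the suite to a shared identity provider could look like. Everything below is an assumption for illustration: the provider (Keycloak is one common OIDC option, not a confirmed choice), the client name, and the URLs are all invented placeholders.

```yaml
# Hypothetical OIDC client registration for one app of the suite.
# All names and URLs are placeholders, not real IndieHosters endpoints.
oidc_client:
  client_id: nextcloud                # hypothetical client name
  client_secret: "<generated-secret>"
  issuer: https://login.example.org/realms/suite   # hypothetical provider URL
  redirect_uris:
    - https://cloud.example.org/oidc/callback      # hypothetical app callback
  scopes: [openid, profile, email]
```

Each app (Nextcloud, CodiMD, the forum, the chat) would carry a registration like this pointing at the same issuer, which is what makes the login feel unified across the suite.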
For the 1st of February, we also want to release libre.sh as an α version.
libre.sh is the platform we have been working on for a year. This very forum is already running on top of it, and half of the infrastructure is already there. We are now working hard to release it as free software. You can follow development here (not much yet, but it will come).
The idea of this platform is to collaborate on how to operate hosted free software with high availability, scaling, backups, restore, and self-service for the end user, with as little effort as possible for the admins. This has been our dream since the beginning, and we see the building blocks coming together nicely.
If you are interested in that, follow this forum.
We hope you enjoyed reading, and we are really curious about your feedback! We know it is ambitious, but we think we can do it with you, and it is our duty to make free software move faster!
PS: Here is the status of our infrastructure
- old docker-compose infrastructure
  - 402 containers serving 104 applications on 4 servers
- test kubernetes cluster
  - 38 containers serving 3 applications on 3 servers
  - with a test ceph cluster
- prod kubernetes cluster
  - 381 containers serving 152 applications on 6 servers
  - backed by a ceph cluster of 30 TB HDD and 1 TB NVMe
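Adding the three environments up (a quick arithmetic check on the numbers above):

```shell
# Totals across the three environments listed above
old=402; test=38; prod=381
containers=$((old + test + prod))
applications=$((104 + 3 + 152))
servers=$((4 + 3 + 6))
echo "$containers containers, $applications applications, $servers servers"
# → 821 containers, 259 applications, 13 servers
```

So roughly a third of the applications still live on the old docker-compose setup, which is what the case-by-case migrations starting in December will gradually move over.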