
Update on Downtime / F.A.Q

This blog post will serve as an update on the downtime as of 16/04/2014, and as an F.A.Q covering this and other general downtime-related issues.

The Downtime

Initially, sites were down due to high load on the web server, which caused the 503 errors you saw. Earlier that day, it turned out that our Varnish caching software had crashed, resulting in about 10 hours of downtime overnight. The downtime you are now seeing is different. Due to problems at our host/datacentre, the hardware that the database server runs on began having problems, and our host migrated the server to a new “node” without shutting it down. There appear to have been complications with this that resulted in a sizeable amount of data loss and some database corruption – an outcome that seemed very unlikely and was not predicted when we initially went down.

Our Data Is Gone?

That title is what I assumed might be going through your heads as you read the last section. The answer isn’t a simple yes or no. Our host has a backup from about 12 hours prior to the migration of the node, which we believe was taken during the Varnish crash. In addition to that, the current version of our database server is accessible, and we should be able to export some wikis from it. It is very important that you read the following section thoroughly before clicking the link.

We keep logs of all the actions we use to fight vandalism and spam, and using these we’ve mostly determined the potential data loss. These “loggable actions” are not only edits: they range from deletions and protections to upload and patrol log entries. (Note: files are not stored on the same server and are unaffected, but affected wikis will lose their relevant File/Image namespace pages.)

Link to list of affected wikis
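For the technically curious, here’s a rough sketch of how a check like that works. It reads MediaWiki’s standard logging table for anything recorded after a cutoff; the host, credentials and database name below are made up for illustration, and this isn’t our exact script.

```python
# Illustrative only: list loggable actions recorded after a cutoff,
# using MediaWiki's standard `logging` table. Connection details are
# hypothetical.
import pymysql

# MediaWiki stores timestamps in UTC as 14-character strings;
# 3AM EDT on 14 April 2014 is 07:00 UTC.
CUTOFF = "20140414070000"

conn = pymysql.connect(host="db.example.org", user="reader",
                       password="secret", database="somewiki")

def text(value):
    # These columns are VARBINARY in the stock schema, so decode them.
    return value.decode("utf-8", "replace") if isinstance(value, bytes) else str(value)

with conn.cursor() as cur:
    cur.execute(
        "SELECT log_timestamp, log_type, log_action, log_title "
        "FROM logging WHERE log_timestamp > %s ORDER BY log_timestamp",
        (CUTOFF,),
    )
    for row in cur.fetchall():
        print("  ".join(text(col) for col in row))

conn.close()
```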

The Plan

This part of the post isn’t confirmed yet – we’re still waiting on a reply from our host – but we needed to let you know the situation and calm the panic.

As it stands, our host has offered to put up a second instance of our database server running from the backup they took. What we then intend to do is extract as many of the affected wikis’ databases as are not corrupt from the primary instance and import them into the backed-up server. This should mitigate as much of the loss as possible.
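To make that concrete, the sketch below shows the general shape of such a migration using the standard MySQL client tools. The hostnames, credentials and wiki names are placeholders, not our real setup.

```python
# Sketch of the recovery plan: dump each uncorrupted wiki database from
# the damaged primary instance and load it into the restored backup
# instance. All connection details and wiki names are placeholders.
import subprocess

PRIMARY = ["-h", "db-primary.example.org", "-u", "root", "-psecret"]
BACKUP = ["-h", "db-backup.example.org", "-u", "root", "-psecret"]

# Only wikis whose tables pass an integrity check should be exported
# from the damaged primary.
exportable_wikis = ["wiki_alpha", "wiki_beta"]

for db in exportable_wikis:
    # Dump from the primary...
    dump = subprocess.run(
        ["mysqldump", *PRIMARY, "--single-transaction", db],
        check=True, capture_output=True,
    )
    # ...and load into the backed-up server.
    subprocess.run(["mysql", *BACKUP, db], input=dump.stdout, check=True)
    print(f"migrated {db}")
```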

F.A.Q

When will this be done?

We can’t give an E.T.A at the moment, as we have to wait for our host to confirm the plans and get things set up. The database server backup is a large file and will take time to transfer. Have you ever performed an action on your computer where the estimated time keeps changing? This is why we can’t tell you for certain, but we’ll be working as hard as we can to get the wikis back up as soon as possible.
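As a back-of-the-envelope illustration of why the transfer alone takes a while (the numbers below are made up, not our actual backup size or link speed):

```python
# Rough transfer-time estimate for a large database backup. The 500 GB
# size and 100 Mbit/s link speed are illustrative guesses only.
size_gb = 500
link_mbit_per_s = 100

size_bits = size_gb * 8 * 10**9               # decimal GB -> bits
seconds = size_bits / (link_mbit_per_s * 10**6)
print(f"~{seconds / 3600:.1f} hours")         # ~11.1 hours, at best
```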

Is this the same reason you don’t tell us about other downtimes?

Partially. There are always different factors in each type of error, and sometimes it’s down to external factors.

Am I affected?

Unless you contributed to any of the wikis listed after 3AM EDT on Monday the 14th of April, it is very unlikely. However, preference changes are not logged, so if you changed a setting you may have lost that change.

So I should back up my own wiki then / turn DumpsOnDemand back on?

No. It is very, very rare that we see data loss, and the reason we’ve disabled these features is that they are a key cause of the 503 memory issues: they put high load on the server, which ultimately results in downtime. It is understandable to see this as the perfect excuse to call for DumpsOnDemand, but it causes downtime, and we get into a vicious cycle of people panicking and backing up their wikis, stopping the service for everyone else.

Why don’t you just buy more servers?

Simple answer: we can’t afford to. A former staff member controls a handful of our servers on his personal account, as he gets better deals than our current staff can, and this lets us provide the best quality we can for the little money we have.


The Current Situation: As it Stands.

As many of you have noticed, as of late ShoutWiki has been displaying more than a few HTTP 503 errors (“Service Unavailable”).

This blog post is going to outline what these issues are and why they occur, what we are doing about them, and why this post has been published only now. Let’s start with the latter. The reason we’ve left it so long is that we had originally planned to be able to give you all the facts, in particular a timeline for the solution. Unfortunately we are unable to give you those dates at present – although as soon as we know them, we will let you know – and we feel it is wrong to keep our users (you) in the dark any longer.

To keep it simple, the errors are being thrown by our Varnish caching server because the Apache server behind it (which serves you the pages and images) has been unavailable. In short, the web server has been using excessive amounts of CPU and RAM, and the whole server has been left without memory.
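If you’re wondering where the 503 itself comes from: when a caching front end can’t reach its backend, the only honest answer it can give is “Service Unavailable”. The toy proxy below shows the mechanism in miniature; it is not our Varnish configuration, and the backend address is made up.

```python
# Toy front end that mimics the failure mode: proxy requests to a
# backend, and return HTTP 503 when the backend is unreachable.
# This is an illustration, not ShoutWiki's actual Varnish setup.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BACKEND = "http://127.0.0.1:8080"  # stand-in for the Apache server

class FrontEnd(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            with urlopen(BACKEND + self.path, timeout=5) as resp:
                body = resp.read()
                self.send_response(resp.status)
        except OSError:
            # Backend starved of memory / not responding: all the
            # front end can do is report "Service Unavailable".
            body = b"503 Service Unavailable"
            self.send_response(503)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), FrontEnd).serve_forever()
```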

What have we done so far? We’ve temporarily disabled various high-CPU processes (unfortunately this includes export functions) in an effort to preserve and extend the site’s uptime. In addition, we’ve taken steps to make the infrastructure more scalable: we’ve made it easier to add additional web servers to our setup by moving non-wiki sites to their own server, and we’re looking into moving images to the cloud. Obviously this takes a fair amount of planning due to the financial and technical implications.

And finally, what we’re going to do. Simply put, we’re going to reinstall the operating system on the server, as we’re fairly certain there is a problem with the template the web server VPS uses. While we do this, we’re also going to switch from Apache to Nginx in an attempt to improve performance. When will this downtime be, and how long will it take? We’re not entirely sure – which is what we’ve been waiting on. Our web server VPS is attached to the account of a former staff member, mainly for financial reasons, but also due to the connections this former staff member has with our provider. Unfortunately, we’ve been unable to get hold of him recently (as soon as he is around again, we’ll be able to get the Apache server back online!), and we are still waiting on a reply about when he is available to reinstall the operating system. The length of the downtime will be determined by how much data has to be shuffled around; images currently take up a large proportion of the disk space, which is a big reason why we want to relocate them.

ShoutWiki is a for-profit organisation; however, none of our staff members are paid for their work. They are all volunteers, and all our income is reinvested into servers – and this will remain the case until we can establish a fast and stable service.

Again, we apologise for the downtime, and thank you for your continued patience. We understand this is a very frustrating time, and we look forward to resolving the issues as soon as possible and allowing you to get back to editing your wikis.
On behalf of ShoutWiki staff,

— Lewis Cawte
Chief Technical Officer, ShoutWiki.
