Robowiki talk:About
Contents
| Thread title | Replies | Last modified |
|---|---|---|
| RoboWiki:Copyright page | 1 | 17:30, 6 May 2019 |
| Robowiki mirroring | 1 | 16:27, 14 June 2018 |
| Finally Recovered!!! | 0 | 17:34, 9 October 2017 |
It seems that
http://robowiki.net/wiki/RoboWiki:Copyrights
is not gone; it was deleted by an admin instead.
Is it possible to recover a page deleted by an admin?
Or maybe the original content still exists under another name, and the deleted page was actually recreated by spam.
Hi
There was nothing useful on that page; it was created by a spammer.
Hi mates,
I am becoming convinced that we need a mirror of RoboWiki. I am not suggesting we migrate away, but that we keep a read-only version available during the infrequent glitches. The last one took two months to fix, and the scary part is that RoboWiki was not available even for browsing, i.e. we would not even have been able to contact the admins, since the contact info was unavailable.
I am investigating backup options:
- dumpgenerator from WikiTeam
  - pros:
    - seems to work and pulls a full archive with the history of revisions
  - cons:
    - redownloads the whole wiki every time (about a 700 MB dump)
    - it is not clear what to do with this dump; I cannot find a reasonable howto for converting it into a useful web-accessible instance, i.e. it is only good for local searches. Am I right about that?
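To make the "only good for local searches" point concrete, here is a minimal sketch of what a local search over such a dump could look like. It assumes the standard MediaWiki XML export format that dumpgenerator produces; the sample content below is made up for illustration, standing in for the real ~700 MB file.

```python
# Minimal local full-text search over a MediaWiki XML dump.
import xml.etree.ElementTree as ET

def local_name(tag):
    # MediaWiki dumps put every element in an XML namespace;
    # strip it so we can match on plain tag names.
    return tag.rsplit('}', 1)[-1]

def search_dump(xml_text, term):
    """Return titles of pages whose revision text contains term."""
    hits = []
    root = ET.fromstring(xml_text)
    for page in root.iter():
        if local_name(page.tag) != 'page':
            continue
        title, text = None, ''
        for elem in page.iter():
            name = local_name(elem.tag)
            if name == 'title':
                title = elem.text or ''
            elif name == 'text':
                text = elem.text or ''  # later <text> wins = newer revision
        if title is not None and term.lower() in text.lower():
            hits.append(title)
    return hits

# Tiny hand-made dump for illustration:
SAMPLE = """<mediawiki xmlns="http://www.mediawiki.org/xml/export-0.10/">
  <page>
    <title>Wave Surfing</title>
    <revision><text>Dodge enemy waves by predicting bullet positions.</text></revision>
  </page>
  <page>
    <title>Talk:Main Page</title>
    <revision><text>Welcome to the wiki.</text></revision>
  </page>
</mediawiki>"""

print(search_dump(SAMPLE, 'waves'))  # ['Wave Surfing']
```

For a real dump you would stream it with `ET.iterparse` instead of loading it all into memory, but the principle is the same: grep-style lookups work, yet nothing here gives you a browsable wiki.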
- git-mediawiki
  - pros:
    - no need to redownload the whole wiki, thanks to Git's incremental changes
    - apparently it can export entries to another MediaWiki-hosted instance
    - it can be used for local editing of the wiki (this is super cool)
  - cons:
    - I cannot reliably make a replica of RoboWiki: it downloads about 1600 pages and then just gives the appearance of working. I think it is a wiki API limit or something similar, because sometimes it gets further. The main problem is the discussion threads, which are full of useful info, but apparently we have more than 10000 of them and git-mediawiki fails here.
- distributed web projects (it would be super cool to do something along these lines)
  - IPFS
  - Dat Project; then we could even host the mirror from our browsers, e.g. with Beaker
The distributed web seems to be good for read-only access, but I am not sure how to push updates in; IPFS seems to be designed for static content.
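Even without git-mediawiki, the incremental idea itself is simple: ask the MediaWiki API which pages changed since the last sync and re-fetch only those. Below is a rough sketch of that step. The API path is RoboWiki's standard one, but the surrounding glue (the injectable `fetch` function, the missing `rccontinue` pagination and state persistence) is hypothetical, not a finished mirror.

```python
# Sketch: list pages changed since a timestamp via the MediaWiki
# recentchanges API, so a mirror only re-fetches what moved.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = 'http://robowiki.net/w/api.php'

def recent_changes(since, fetch=None):
    """Return titles changed since the ISO-8601 timestamp `since`."""
    params = {
        'action': 'query',
        'list': 'recentchanges',
        'rcstart': since,
        'rcdir': 'newer',      # walk forward in time from `since`
        'rclimit': 'max',
        'format': 'json',
    }
    url = API + '?' + urlencode(params)
    if fetch is None:          # real HTTP call by default
        fetch = lambda u: urlopen(u).read().decode('utf-8')
    data = json.loads(fetch(url))
    changes = data['query']['recentchanges']
    # Deduplicate while preserving order, so each page is fetched once.
    seen, titles = set(), []
    for ch in changes:
        if ch['title'] not in seen:
            seen.add(ch['title'])
            titles.append(ch['title'])
    return titles
```

A mirror job would persist the newest timestamp it has seen and pass it as `since` on the next run; that sidesteps the "redownload 700 MB every time" problem, though it still leaves the question of serving the result as a browsable read-only wiki.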