It would be nice to have something like this more decentralized.
I was perusing some recent discussions on sources with interest. It seems that Wikipedia's intelligentsia have managed to "blacklist" (deprecate or declare "generally unreliable") practically every prominent source of news in the US that is not centrist or leftist.
I kid you not; through a process of attrition they've attacked the very reliability and reputation of every source, including Fox News and the like, and they've told editors sitewide that they simply can't be cited as a "Reliable Secondary Source", like at all.
I am not sure if that is an accurate assessment of the situation on the ground for mainstream media, but it certainly exposes some real systemic bias.
And this is the highest-order and most enduring method of ingraining systemic bias in the project: by weeding out sources with unfavorable viewpoints and perspectives, saying they publish lies and untruth, and being able to prohibit them globally from any use.
And I was pondering this state of affairs, thinking about Karoline Leavitt's press room, and wondering what the landscape will be if there is precious little intersection between press outlets that may be favorable or deferent to the present administration and those that are allowed to be cited on Wikipedia. Ouch!
And you know, I wouldn't be surprised if people hurling those accusations somehow believe that the lies and misinformation are one-sided and partisan. As if leftism has some sort of monopoly on Truth and Goodness bestowed from above.
It's really been sickening to see the media outlets just lay down thick trails of bullshit that is designed to distract us, to instill fear, uncertainty, and doubt, to make us hate one another, to keep us hanging on that channel or that subscription for the next tidbit. It's disgusting and manipulative, and the Right has absolutely no monopoly on those tactics.
Wikipedia is simply a microcosm of the prevailing zeitgeist, so they are as likely to cure systemic bias as a leopard can change its spots.
Your point is understandable regarding source bias, but in Musk's case, the statements "they" mentioned are simply true. While you definitely have a valid point about the risks of systemic bias in excluding certain outlets, relativizing factual accuracy could inadvertently lead to a situation where every lie becomes just another "valid opinion." A viewpoint can indeed be an opinion, but misinformation remains misinformation. Wikipedia should not become a space for free interpretation of reality.
Just because one side happens to produce more misinformation doesn't mean these facts should be omitted. Consider this analogy: Stalin killed millions and was undeniably a tyrant, and even though the current Russian establishment might push a different narrative, it doesn't erase historical reality. Similarly, accurately documenting Musk's misleading statements isn't bias—it's factual reporting.
Anyone who wants to have access while offline, for whatever reason. This can range from something as simple as saving costs, through more complicated cases like accessing content from regions with spotty and/or expensive connectivity (you're on a ship out of reach of shore-based mobile networks and without access to Starlink or something similar, you're deep in the jungle, deep underground, etc.), to some prepper scenario where connectivity ends at the cave entry because the 'net has ceased to exist.
I would like to have a less politically biased online encyclopedia for the latter scenario, it would be a shame to start a new society based on the same bad ideas which brought down the previous one. If ever a politically neutral LLM becomes available that'd be one of the first tasks I'd put it to: point out bias - any bias - in articles, encyclopedias and other 'sources' (yes, I know, WP is not an original source but for this purpose it is) of knowledge.
You don't need to be deep in the jungle. You might just not want to pay for mobile data. If your phone has an SD card slot, you can put in 1 TB of storage and have wikipedia, a lifetime of music, tons of books, an atlas of your country for GPS navigation, and plenty of room for taking photos/videos. Storage is cheap enough that mobile data should be basically pointless.
I suspect "politically neutral" is a meaningless phrase. It's just a way for people to tar their political opponents by inference.
The problem is: even if you report only facts, there is an editorial function in choosing which facts to report, because it is physically impossible to report all facts. So someone can always point to some sort of bias on choosing which facts to report.
There are no politically neutral humans but there can be politically neutral publications. All you have to do to be politically neutral is treat all legal political ideologies the same without favouring one over the others. Wikipedia does not achieve this goal, not by far.
This is not kindergarten, so let's not go down this path. Asking for a politically neutral (see my explanation elsewhere in this thread if you don't understand what that means) source of information is not 'bad politics' but intended to avoid bad politics. I suspect that you 'identify' as either 'liberal' or 'progressive', so I assume you'd be less than thrilled if Wikipedia had a conservative bias. The same goes for conservatives and (traditional) capital-L Liberals, who are less than thrilled to see Wikipedia having a 'left-wing' or 'progressive' bias. It just makes WP end up lumped together with the legacy media, known to be untrustworthy where it counts, and that is a shame for a site which in many ways is still a valuable resource, as long as you avoid any and all subjects that have been pulled into the polarised political discourse.
Genuine question, can you provide multiple explicit examples of such bias? I heard a lot of people railing against bias in Wikipedia, but no one provides any blatant examples of it.
A genuine answer: how about looking up some studies on this subject? Not those done by Wikipedia, of course; they claim to be politically neutral, after all.
Six studies, including two from Harvard researchers, have found a left-wing bias at Wikipedia:
A 2024 analysis [1] by researcher David Rozado that used AllSides Media Bias Ratings [2] found Wikipedia associates right-of-center public figures with more negative sentiment than left-wing figures, and tends to associate left-leaning news organizations with more positive sentiment than right-leaning ones.
A Harvard study [3] found Wikipedia articles are more left-wing than Encyclopedia Britannica.
Another paper [4] from the same Harvard researchers found left-wing editors are more active and partisan on the site.
A 2018 analysis [5] found top-cited news outlets on Wikipedia are mainly left-wing.
Another analysis [6] using AllSides Media Bias Ratings found that pages on American politicians cite mostly left-wing news outlets.
American academics found [7] conservative editors are 6 times more likely to be sanctioned in Wikipedia policy enforcement.
There are far more sources out there.
If I show examples of biased pages - the one on Antifa is a good example - this will just devolve into a quibble about this or that sentence.
> based on the same bad ideas which brought down the previous one
I don’t think that’s fair. Not that Wikipedia is without bias, but that their ivory tower biases are worlds apart from the lying brutal animalistic Hollywood signals herding the masses in “our democracy”.
I wonder how easy it would be to make a practically indestructible, everlasting Wikipedia reader.
Something using solar for power, with a rugged and water-resistant enclosure, made of extremely high-quality components that won't break for hundreds of years at least. Maybe add an IRDA port for good measure, to make it possible to transfer all the data out somewhat quickly.
You could make hundreds of these and put them in hard-to-reach locations around the world, to make sure at least one survives whatever calamity might befall us in the future.
Kiwix has created pretty polished software for this: https://kiwix.org/
My last download of English Wikipedia was ~110 GB and includes images! It's impressively small for the volume of information available.
You could even make it radiation tolerant by printing it.
Be certain to use acid-free paper [1]. The typical cheap bright-white paper of today will have a hard time staying in good condition for 100-200 years. Ideally, go for the ultra-durable cotton-rag paper used e.g. for paper money.
[1]: https://en.wikipedia.org/wiki/Acid-free_paper
Then figure out how and where to store the thousand-plus volumes of 1,200 pages each.
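A quick back-of-envelope calculation gives a feel for the scale; the total word count here is a rough assumption, not a measured figure:

```python
# Back-of-envelope estimate of the printed size of English Wikipedia.
# All inputs are rough assumptions, not measured values.
words_total = 4_500_000_000   # assumed total words of article text
words_per_page = 500          # a dense, small-print page
pages_per_volume = 1200       # as in the comment above

pages = words_total // words_per_page
volumes = pages // pages_per_volume
print(pages, volumes)  # millions of pages, thousands of volumes
```

With these assumptions you land in the high thousands of volumes, which squares with "thousand-plus" above.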
From "Plenty of Room at the Bottom"[1]:

> What would happen if I print all this down at the scale we have been discussing? How much space would it take? It would take, of course, the area of about a million pinheads ... All of the information which all of mankind has ever recorded in books can be carried around in a pamphlet in your hand — and not written in code, but a simple reproduction of the original pictures, engravings, and everything else on a small scale without loss of resolution.

Need a good magnifying glass, though (:

[1] https://web.pa.msu.edu/people/yang/RFeynman_plentySpace.pdf
Relevant XKCD: https://what-if.xkcd.com/59/
Make sure it has the words "DON'T PANIC" inscribed in large friendly letters on the cover.
Somewhat related, the Long Now Foundation has a Rosetta Project, but it is for archiving languages: https://en.wikipedia.org/wiki/Rosetta_Project
https://en.wikipedia.org/wiki/WikiReader (remember Openmoko Neo 1973 and FreeRunner?)
Aard2 for Android has existed since at least 2015:
https://f-droid.org/packages/itkach.aard2
I have many current and old dumps and can switch between a few years. Very nice in case of deleted articles, or to check old time-stamped versions. It also supports more than just Wikipedia, like Wikiquote, Wikivoyage, or the cooking wiki. You can compile your own MediaWikis too.
https://en.wikipedia.org/wiki/Voyager_Golden_Record
Some thoughts about making it possible for individual humans to access Wikipedia, in a way that is robust to calamities that are within the sphere of human agency.
Seems like you would want it to be stored digitally. Ideally, people would have the ability to access it remotely, in case their local copy is somehow corrupted. For that, you would need a physical network by which the data can be transmitted. Economies of scale would seem to suggest that there would be one or a few entities that would “serve” the content to individuals who request it. Of course, you would want those individuals to be able to access this information without having detailed technical knowledge and ability. I guess they would have pre-packaged software “browsers” they could use to access the network.
In order to maintain this arrangement, you would want enough political stability to allow for the physical upkeep of this infrastructure, including human infrastructure (feeding the engineers who make it all possible). In order to make it worthwhile, you would need people who want to access the information too. I suspect political stability, a sufficient abundance of the necessities for human life, and the political will to make sure that everyone’s needs are met so that they can safely be curious about the world would help here too.
All of this requires sources of power. I suspect that a combination of nuclear power, solar/batteries, and geothermal energy would be sufficient and would avoid the problem of running out of fossil fuels at some point in the future. A nice side-effect here is reducing the impact of calamities exacerbated by the greenhouse effect.
For the information to continue being relevant, you would have to update it with new knowledge, and correct inaccuracies. How best to accomplish this? Well, I guess you would need a systematic way to interrogate the causes behind the various effects we observe in the world. I would propose a system where people create hypotheses, and perform experiments that exclude the influence of as many factors as possible external to the phenomenon being studied. People would then share their findings, and I guess would critique each other’s arguments in a sort of “peer review” to try to come to a consensus. You would have to feed and provide for these people at a certain basic level to make sure they are comfortable and safe enough to continue doing this work. I guess you would want to encourage the value systems compatible with this method of interrogating the world.
Just my 2 cents.
You got me imagining a project where the entire wiki db gets laser-etched on thin stone tablets/metal plates
Microfiche is probably the best option here.
Might also be worthwhile to download a pre ~2023 dump, because Low-background steel.
That's a good point. I'll add older Wikipedia dumps to my https://lowbackgroundsteel.ai site.
There's a Tumblr for everything
Just in case anyone is as obtuse as I am: I believe the joke here is that the contents of Wikipedia might be contaminated with AI-generated content starting around 2023. You can probably look up low-background steel to complete the analogy.
The last kiwix zim dump of English Wikipedia before the release of ChatGPT is from May 2022. The Internet Archive still preserves the torrent[0]. To host this, or browse it locally you can use `kiwix-serve -p 8888 wikipedia_en_all_maxi_2022-05.zim` from kiwix-tools[1].
[0]: <https://web.archive.org/web/20221007114937/https://download....>
[1]: <https://github.com/kiwix/kiwix-tools>
LOL that is an amazing analogy, thank you.
My go-to as well. Pre-war steel.
It's not about pre-war; it's about pre-Trinity nuclear tests, which means uncontaminated by atmospheric radioactive isotopes. That happened at the end of WWII, but that is not the point.
Yes, however it's also an accepted name for it
> Low-background steel, also known as pre-war steel and pre-atomic steel, is any steel produced prior to the detonation of the first nuclear bombs in the 1940s and 1950s.
https://en.wikipedia.org/wiki/Low-background_steel
It's an important distinction because a lot of ships were sunk during WWII.
> download a pre ~2023 dump, because Low-background steel.
Military A.I. was likely in use earlier, and since PSYOPS are the most used and most effective weapon in the U.S. Military's arsenal, you absolutely know it was used. It ain't a war crime the first time...
How much do we know about military AI’s capabilities? As in, is there any evidence that the government/military was ahead of big tech on the AI research front?
Seconded. Sometimes when someone says XYZ was likely used it's because they've read something from a credible source, or maybe are a subject matter expert, or have grasped some other similarly solid chain of evidence.
But sometimes, they mean "likely" in the more colloquial sense of a guesstimation, which can range anywhere from informed guess to low effort fan-fiction. I default toward the latter unless otherwise specified.
"Please summarize the maintenance procedure for a tomahawk missile"
boom
It is also unlikely that, if such AI was used, it would have been used to edit a billion articles about obscure species of plants and insects.
You can also get it as a .zim file for easy offline browsing with Kiwix.
The whole enchilada: https://download.kiwix.org/zim/wikipedia/wikipedia_en_all_ma...
Other versions: https://library.kiwix.org/#lang=eng&category=wikipedia
It is useful software for offline use and emergencies. For those who may not know, apart from wikis, they also offer offline documentation (Linux distro wikis like the ArchWiki, libraries, etc.), medical libraries (MedlinePlus, etc.), and Stack Exchange.
Will the 2025 zim be available as well?
Main Kiwix dev in charge of scrapers (tools to create ZIM files, even if we do not really scrape technically speaking) here.
We are working hard toward upgrading the Wikipedia ZIMs, but it is far from an easy feat. I'm mostly solo on this, and far from dedicating 100% of my time to it, so it does not move very fast. We are quite close to reaching the goal, however; probably only a matter of weeks now.
Bonus: the tool is now getting pretty good at making a ZIM of any MediaWiki, not only Wikimedia ones. We expect, for instance, to work on all Fandom wikis sometime this year, since there is significant knowledge over there.
I am wondering the same thing. I have Jan 2021, Jan 2024... I want to keep a snapshot each year and I wonder why a new one hasn't been generated.
I haven't looked for documentation on creating my own zim file.
I looked into it once; I think the script or system that built the larger dumps broke and no one fixed it. I started working on it, but other stuff got in the way.
I tried this kiwix the other day, it has like a 300mb "essentials" text version that was interesting.
This comment was downvoted; wouldn't it instead merit a reply explaining why it wasn't contributing to the discussion?
> I tried this kiwix the other day, it has like a 300mb "essentials" text version that was interesting.
I didn't downvote the comment, but it's not an incredibly deep contribution, is it?
If you really wish to contribute, perhaps you can say what the "'essentials' text version" contained and why you found it interesting?
If you've got some spare bandwidth & storage then seeding some of the torrents here is a cheap and fun way of helping Wikipedia out. I've served around 20TB of these dumps in the past year.
https://meta.wikimedia.org/wiki/Data_dump_torrents
Do you happen to know why wikipedia didn't embrace torrents as the default download method?
Speculating: because torrents are not especially good at dealing with small modifications?
Most people probably won't seed many versions, so it's a losing effort, and you need to allocate a huge chunk of space for each version.
Deduplicating filesystems are sadly not in vogue.
Most people don't use torrents.
I think it is a nice use case for IPFS
A bit of warning to people who use the database download: the dumps don’t reflect a consistent state of the database and contain broken and missing data.
https://meta.wikimedia.org/wiki/Data_dumps/What_the_dumps_ar...
I more or less do this every year — grab the latest Kiwix, English version (about 100 GB or so). I keep the older ones as well.
Is there a RAG for Wikipedia?
I may not be using the term correctly here. In short, I would love a local LLM + Wikipedia snapshot so that I can have an offline, self-hosted ... Hitchhiker's Guide to Earth.
Huggingface has a few datasets of Wikipedia embeddings.
Here’s a few results: https://huggingface.co/search/full-text?q=Wikipedia+embeddin...
And the first result, which is probably what you’ll want to use: https://huggingface.co/datasets/Upstash/wikipedia-2024-06-bg...
I recommend you go for pgvector or a similar self hosted solution to calculate the similarities instead of a service like Vector.
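As a minimal sketch of the retrieval half of such a pipeline, using only the standard library: the "articles" below and the bag-of-words cosine similarity are stand-ins for real Wikipedia snippets and a proper embedding model plus vector store (e.g. the pgvector setup mentioned above).

```python
import math
from collections import Counter

# Toy retrieval step of a RAG pipeline over a few sample "articles".
# A real setup would embed passages with a model and store them in a
# vector database; bag-of-words cosine similarity stands in here.
docs = {
    "Tower Bridge": "bridge in London crossing the River Thames",
    "Golden Gate": "suspension bridge spanning the Golden Gate strait",
    "Python": "high level programming language",
}

def vectorize(text):
    # Crude "embedding": term counts of lowercased whitespace tokens.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank articles by similarity to the query; top-k snippets would be
    # prepended to the local LLM's prompt as context.
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(docs[d])),
                    reverse=True)
    return ranked[:k]

print(retrieve("famous bridge over the thames"))  # ['Tower Bridge']
```

Swap in real embeddings and a snapshot of article text, and the same retrieve-then-prompt shape gives you the offline Hitchhiker's Guide.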
There are non English versions of Wikipedia also.
Can anyone please point to information on how we can download a copy of one specific language version?
You can find plenty of other languages here [1] for example. These are the Kiwix versions.
[1] https://dumps.wikimedia.org/other/kiwix/zim/wikipedia/
It's literally described there on the page if you open it...
the linked page tells you where those are available already
First time I visited this page was in January 2025
Okay, that looked a bit ridiculous in the pre-AI era (who needs to download the whole Wikipedia?), but now I can see the sense in it.
Too bad the AI scrapers don't care, and are melting Wikipedia's production servers anyway.
https://arstechnica.com/information-technology/2025/04/ai-bo...
I would like companies to start aggressively pushing back against AI scrapers using things like Anubis[0]. If you can't be a good steward of the internet or respectful to other peoples' resources, then people have the right to deny them to you.
[0] https://github.com/TecharoHQ/anubis
I bet someone like Cloudflare could pull the dataset each day and serve up a plain text/Markdown version of Wikipedia for rounding error levels of spend. I just loaded a random Wikipedia page and it had a weight of 1.5MB in all for what I worked out would be about 30KB of Markdown (i.e. 50x less bandwidth).
Of course, the problem then is getting all these scrapers and bots to actually use the alternative, but Wikimedia could potentially redirect suspected clients in that direction..
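As a rough illustration of where that ~50x comes from, here is a minimal (and deliberately naive) HTML-to-Markdown-ish stripper using only the standard library; real converters handle far more cases:

```python
from html.parser import HTMLParser

# Sketch: strip a page down to Markdown-ish text, the kind of
# lightweight representation that could be served to crawlers.
class ToMarkdown(HTMLParser):
    SKIP = {"script", "style"}  # drop JS/CSS entirely

    def __init__(self):
        super().__init__()
        self.out = []
        self.skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1
        elif tag in ("h1", "h2", "h3"):
            self.out.append("\n" + "#" * int(tag[1]) + " ")
        elif tag == "p":
            self.out.append("\n")

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

html = "<h1>Tower Bridge</h1><script>var x=1;</script><p>A bridge in London.</p>"
parser = ToMarkdown()
parser.feed(html)
markdown = "".join(parser.out).strip()
print(markdown)
```

Text survives, markup and scripts don't; on a script- and style-heavy page that is where most of the weight goes.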
Someone suggested to me applying a filter that serves .md or .txt to bots/AI scrapers instead of the regular website. Seems smart if it works, but I hate it when I get captchas, and this could end up similarly misdetecting non-bots as bots.
Maybe a "view full website" link loaded via JS so bots don't see it? I don't know.
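A sketch of that filter idea; the User-Agent token list is purely illustrative, and this naive matching is exactly the part that risks misclassifying real users:

```python
# Serve Markdown to clients that identify as bots, HTML to everyone
# else. The token list is illustrative, not exhaustive, and honest
# User-Agent strings are the best case; evasive scrapers won't match.
BOT_TOKENS = ("bot", "crawler", "spider", "gptbot", "ccbot")

def pick_response(user_agent):
    ua = user_agent.lower()
    if any(token in ua for token in BOT_TOKENS):
        return ("text/markdown", "# Tower Bridge\nA bridge in London.")
    return ("text/html", "<h1>Tower Bridge</h1><p>A bridge in London.</p>")

print(pick_response("GPTBot/1.0")[0])                    # text/markdown
print(pick_response("Mozilla/5.0 (Windows NT 10.0)")[0])  # text/html
```

In a real deployment this decision would sit in the web server or CDN layer rather than application code.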
I would love to see most sites serve me Markdown. I'd happily install a browser extension to mask me as an AI bot scraper if it means I can just get the text without all the noise.
Someone built a service for AI bots called pure.md. It's been a godsend for curling websites as Markdown; it occasionally doesn't work the first time, but it works great for occasional use with the free tier.
lol
me too tbh
Someone pointed out you can enable reader mode by default in Safari under settings, but even then not all websites' pages are served as reader-mode-enabled pages.
I wonder if Wikipedia's recent switch to client-side rendering has hurt their performance too. Serving a prerendered page might have helped this situation. I don't know the details of their new system, though.
Tragedy of the commons. And that’s why we can’t have nice things.
Because people are people. And will always prioritize egotism over respect for the common good.
But we have nice things. Wikipedia can deal with it just fine.
No - when fragile resources are abused by one endpoint out of a hundred thousand others, and the abuse is a hundred thousand times greater, how is that a condemnation of the "ways" of "all people"? What is justice?
Oh no, even now there is plenty of use for it outside of AI training. Just think of all the schools in villages around the world that don't have access to the internet or have a very limited connection. I've worked with folks who would set up local "Wikipedia servers" for schools so that kids could access Wikipedia via a local network connection. In other setups they just download all of Wikipedia to a set of laptops and use one of the offline readers to browse it.
This is essentially the modern version of having a library of encyclopedias.
There's already a project to serve the use case you're describing (school in a disconnected village): Internet in a Box
https://internet-in-a-box.org/
They provide offline access to Wikipedia, OpenStreetMap, Project Gutenberg, and many other resources.
I'm thinking less about AI training and more about having a source of (reasonably) reliable information from the net, in case AI generated fake images and generated cross referenced texts start making it too difficult to discern real history from malicious rewrites. It's bad enough now, but can get much worse with the proliferation of AI agents.
Pre-2022 Wikipedia dumps will be analyzed by future historians.
And banned by future governments.
Replicating that knowledge helps build data resilience and protect it against all sorts of disasters. I used to seed their monthly data dump torrent for a while.
Also helps save Wikipedia if it gets shut down - which might happen!
True. Musk, for example, is publicly attacking it for spreading "left-wing lies" because his Wikipedia page contains statements like "He has been criticized for making unscientific and misleading statements, including COVID-19 misinformation and promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments," which are just plain facts.
It would be nice to have something like this more decentralized.
I was perusing some recent discussions on sources with interest. It seems that Wikipedia's intelligentsia have managed to "blacklist" (deprecate or declare "generally unreliable") practically every prominent source of news in the US that is not centrist or leftist.
I kid you not; through a process of attrition they've attacked the very reliability and reputation of every source, including Fox News and the like, and they've told editors sitewide that they simply can't be cited as a "Reliable Secondary Source", like at all.
I am not sure if that is an accurate assessment of the situation on the ground for mainstream media, but it certainly exposes some real systemic bias.
And this is the highest-order and most enduring method of ingraining systemic bias in the project: by weeding out sources with unfavorable viewpoints and perspectives, saying they publish lies and untruth, and being able to prohibit them globally from any use.
And I was pondering this state of affairs, thinking about Karoline Leavitt's press room, and wondering what the landscape will be if there is precious little intersection between press outlets that may be favorable or deferent to the present administration and those that are allowed to be cited on Wikipedia. Ouch!
If you still think Fox News is a reliable and reputable source of information I have a bridge to sell you.
It's not Wikipedia's fault that the vast majority of right wing media consists of pure propaganda, disinformation, and lies.
[citation needed]
And you know, I wouldn't be surprised if people hurling those accusations somehow believe that the lies and misinformation are one-sided and partisan. As if leftism has some sort of monopoly on Truth and Goodness bestowed from above.
It's really been sickening to see the media outlets just lay down thick trails of bullshit that is designed to distract us, to instill fear, uncertainty, and doubt, to make us hate one another, to keep us hanging on that channel or that subscription for the next tidbit. It's disgusting and manipulative, and the Right has absolutely no monopoly on those tactics.
Wikipedia is simply a microcosm of the prevailing zeitgeist, so they are as likely to cure systemic bias as a leopard can change its spots.
Wait, why just leftism? What happened to centrism? Where'd the goalposts go?
Your point is understandable regarding source bias, but in Musk's case, the statements "they" mentioned are simply true. While you definitely have a valid point about the risks of systemic bias in excluding certain outlets, relativizing factual accuracy could inadvertently lead to a situation where every lie becomes just another "valid opinion." A viewpoint can indeed be an opinion, but misinformation remains misinformation. Wikipedia should not become a space for free interpretation of reality.
Just because one side happens to produce more misinformation doesn't mean these facts should be omitted. Consider this analogy: Stalin killed millions and was undeniably a tyrant, and even though the current Russian establishment might push a different narrative, it doesn't erase historical reality. Similarly, accurately documenting Musk's misleading statements isn't bias—it's factual reporting.
No idea when your pre-AI era began, but I was much more excited to host Wikipedia locally 15 years ago than I am now.
> who needs to download the whole Wikipedia
Anyone who wants to have access while offline, for whatever reason. This can range from something as simple as saving costs, through more complicated cases like accessing content from regions with spotty and/or expensive connectivity (you're on a ship out of reach of shore-based mobile networks, you don't have access to Starlink or something similar, you're deep in the jungle, deep underground, etc.), to some prepper scenario where connectivity ends at the cave entrance because the 'net has ceased to exist.
I would like to have a less politically biased encyclopedia for the latter scenario; it would be a shame to start a new society based on the same bad ideas which brought down the previous one. If ever a politically neutral LLM becomes available, that'd be one of the first tasks I'd put it to: point out bias - any bias - in articles, encyclopedias and other 'sources' (yes, I know, WP is not an original source, but for this purpose it is) of knowledge.
You don't need to be deep in the jungle. You might just not want to pay for mobile data. If your phone has an SD card slot, you can put in 1 TB of storage and have Wikipedia, a lifetime of music, tons of books, an atlas of your country for GPS navigation, and plenty of room for taking photos/videos. Storage is cheap enough that mobile data should be basically pointless.
Is there a "politically neutral" human? And if there was, what could that person reasonably say about politics?
I suspect "politically neutral" is a meaningless phrase. It's just a way for people to tar their political opponents by inference.
The problem is: even if you report only facts, there is an editorial function in choosing which facts to report, because it is physically impossible to report all facts. So someone can always point to some sort of bias on choosing which facts to report.
And when editors have big ad spenders, you bet they won't criticize the hand that feeds them, most of the time
There are no politically neutral humans but there can be politically neutral publications. All you have to do to be politically neutral is treat all legal political ideologies the same without favouring one over the others. Wikipedia does not achieve this goal, not by far.
You have bad politics. This is bad politics.
No, you have bad politics.
This is not kindergarten, so let's not go down this path. Asking for a politically neutral (see my explanation elsewhere in this thread if you don't understand what that means) source of information is not 'bad politics' but intended to avoid bad politics. I suspect that you 'identify' as either 'liberal' or 'progressive', so I assume you'd be less than thrilled if Wikipedia had a conservative bias. The same goes for conservatives and (traditional) capital-L Liberals, who are less than thrilled to see Wikipedia having a 'left-wing' or 'progressive' bias. It just makes WP end up being lumped together with the legacy media, known to be untrustworthy where it counts, and that is a shame for a site which in many ways is still a valuable resource, as long as you avoid any and all subjects which have been pulled into the polarised political discourse.
Genuine question, can you provide multiple explicit examples of such bias? I heard a lot of people railing against bias in Wikipedia, but no one provides any blatant examples of it.
A genuine answer, how about looking up some studies on this subject? Not those done by Wikipedia of course, they claim to be politically neutral after all.
Here's a few, from https://www.allsides.com/blog/wikipedia-biased
Six studies, including two from Harvard researchers, have found a left-wing bias at Wikipedia:
A 2024 analysis [1] by researcher David Rozado that used AllSides Media Bias Ratings [2] found Wikipedia associates right-of-center public figures with more negative sentiment than left-wing figures, and tends to associate left-leaning news organizations with more positive sentiment than right-leaning ones.
A Harvard study [3] found Wikipedia articles are more left-wing than Encyclopedia Britannica.
Another paper [4] from the same Harvard researchers found left-wing editors are more active and partisan on the site.
A 2018 analysis [5] found top-cited news outlets on Wikipedia are mainly left-wing.
Another analysis [6] using AllSides Media Bias Ratings found that pages on American politicians cite mostly left-wing news outlets.
American academics found [7] conservative editors are 6 times more likely to be sanctioned in Wikipedia policy enforcement.
There are far more sources out there.
If I show examples of biased pages - the one on Antifa is a good example - this will just devolve into a quibble about this or that sentence.
[1] https://davidrozado.substack.com/p/is-wikipedia-politically-...
[2] https://www.allsides.com/media-bias/ratings
[3] https://www.semanticscholar.org/paper/Do-Experts-or-Collecti...
[4] https://www.hbs.edu/faculty/Publication%20Files/17-028_e7788...
[5] https://archive.md/v4TFn
[6] https://archive.is/dDr7X
[7] https://thecritic.co.uk/the-left-wing-bias-of-wikipedia/
> based on the same bad ideas which brought down the previous one
I don’t think that’s fair. Not that Wikipedia is without bias, but that their ivory tower biases are worlds apart from the lying brutal animalistic Hollywood signals herding the masses in “our democracy”.