On Wed, 11 Mar 2026 20:45:45 +0100,
"Peter G." <freebsd@disroot.org> wrote:
>
> I was rushing a bit earlier; this is to clear things up.
>
> On 11/03/2026 13:03, Peter G. wrote:
> > On 10/03/2026 21:23, Constantine A. Murenin wrote:
> >
> > this is how very IO-intensive systems usually work: you preload all
> > caches and serve only from them. sometimes preloading runs for a
> > while, simply filling the caches. after that, all content is served
> > effortlessly, without much load.
> >
> > but you can keep the old URLs in place and simply display different
> > results. again, prepare the compliance once, profit forever.
>
> the point was: you need a web component for the version control system
> that does heavy caching; then it won't matter much whether bots scrape
> or not, since you'll serve everything from caches anyway.
>
> for example, cgit running on git
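
fwiw, cgit ships with a built-in page cache that can be tuned in cgitrc;
a minimal sketch along those lines (paths and TTL values below are
illustrative, not recommendations):

```
# where cgit stores its cache entries (illustrative path)
cache-root=/var/cache/cgit

# max number of cache entries before old ones are evicted
cache-size=1000

# minutes to cache a repo's summary page
cache-repo-ttl=5

# minutes to cache dynamic pages (logs, diffs on mutable refs)
cache-dynamic-ttl=5

# pages keyed by a fixed SHA-1 never change; cache them forever
cache-static-ttl=-1
```

with that in place behind a reverse proxy, most scraper traffic never
has to touch git itself.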
>
See: https://github.com/robots.txt
It reads quite the opposite, and they do have a CDN and caching.
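
for comparison, their robots.txt broadly tells crawlers to stay out of
the expensive dynamic views; a file in that spirit (illustrative rules,
not their actual file) would look like:

```
# keep well-behaved crawlers away from heavy dynamic endpoints
User-agent: *
Disallow: /*/search
Disallow: /*/commits
Disallow: /*/blame
```

so even with a CDN and caching in front, they still prefer that bots not
hit those endpoints at all.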
--
wbr, Kirill