Nick Owens:
> mischief@beast.home.arpa:~/src/openbsd $ git grep -Er '^(int |^)main\(' | cut -d: -f3 | sort | uniq -c | sort -n -k1 -r | sed 10q
> 742 main(int argc, char *argv[])
> 525 int main() {
> 350 main(int argc, char **argv)
> 159 main()
> 151 main(void)
> 145 int main(int argc, char **argv) {
> 123 int main()
> 51 int main(int argc, char *argv[]) {
> 39 int main(int argc, char** argv) {
> 26 int main(void)
That contains a lot of chaff from configure scripts.
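One way to cut that chaff is to restrict the pathspec to C sources, so configure scripts (which embed their own main() probes) never enter the count. This is a sketch, not part of the original thread; the throwaway repo, file names, and the tidied pattern '^(int )?main\(' are all invented here to keep the example self-contained and runnable.

```shell
# Build a disposable repo with one real C file and one configure-style
# script, then run the counting pipeline limited to '*.c'.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
printf 'int\nmain(int argc, char *argv[])\n{\n\treturn 0;\n}\n' > true.c
printf 'int main() { return 0; }\n' > configure
git add .
git -c user.name=demo -c user.email=demo@example.org commit -qm demo
# The '-- *.c' pathspec drops the configure hit from the results:
out=$(git grep -E '^(int )?main\(' -- '*.c' | cut -d: -f2 | sort | uniq -c | sort -rn)
echo "$out"
```

The same `-- '*.c'` trailing pathspec works on a real checkout; git grep applies it before matching, so no post-filtering with grep -v is needed.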
--
Christian "naddy" Weisgerber naddy@mips.inka.de
Monday, March 09, 2026
Re: cvsweb news
On Mon, 9 Mar 2026 at 13:27, Nick Holland <nick@holland-consulting.net> wrote:
On 3/8/26 4:38 PM, Constantine A. Murenin wrote:
> These are all still broken:
>
> http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/
> http://cvsweb.openbsd.org/cgi-bin/cvsweb/ports/
> http://cvsweb.openbsd.org/cgi-bin/cvsweb/www/
>
> Nick, can we have that fixed, please?
>
> These have been spattered throughout the mailing list archives, research papers etc etc, and cannot be fixed upstream.
What if we switch from CVS to GOT or some other version control system?
Why would there be any issue with the cvsweb links continuing to work with GOT or another version control system?
These URLs are semantic and deterministic. Intentionally breaking them for no real reason makes little sense.
The "301 Moved Permanently" redirects are free.
After 43 years working in IT, one thing I know: change happens. And the
longer you try to retain unending backwards compatibility, the more painful
it is /WHEN/ change must happen anyway. Change of tools should be part of
every plan.
Supporting these old URLs has to have negligible cost.
It can be done by a 3-line nginx.conf, after all, which wouldn't be spawning any new processes to process the requests.
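For illustration only, a minimal sketch of such a rule, assuming nginx is the front-end; the /got/ target prefix is made up for the example and not something the thread confirms:

```nginx
# Hypothetical: permanently redirect old cvsweb URLs to a new viewer.
# The /got/ destination prefix is invented for this sketch.
location /cgi-bin/cvsweb/ {
    rewrite ^/cgi-bin/cvsweb/(.*)$ /got/$1 permanent;
}
```

The `permanent` flag makes nginx answer with 301 Moved Permanently directly, so no CGI process is spawned for these requests.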
When you click a link, you want it to work, period.
It's even still CVS, it's still the web, and neither the file path nor the filename has changed.
Why should anyone even care that your underlying implementation has changed?
Why should the user care if it's Perl or Go, or even CVS or Git, when the server knows exactly what the requester wants?
There is zero reason for the links to be broken when the service itself is still backwards compatible, widely known, and widely used.
I'm not terribly interested in backwards compatibility. Keep in mind, the
alternative to this wonderful application Ken has written was likely to
become complete unavailability of the info via the web. Ultimately, though,
Ken wrote the code, he'll be the one to make the call on this, but he and I
are currently on the same page, it seems. The change in URLs is successfully
breaking a lot of the traffic that would be hitting us -- over 99% of the
traffic hitting us now is based on the old cvsweb URLs. Our paths had already
been "mapped". The new system has fewer paths directly exposed to the
Internet, I'm hoping this will result in a sustained reduction in traffic.
You're presenting the wonderful rewrite as the reason the URLs are broken, but the broken URLs have nothing to do with the wonderful rewrite. They're a misconfiguration of the web server.
Security by obscurity has limited utility. It would be a better test of the rewrite if it could actually handle all that traffic without issue. (And it probably could, I'm not the one implying that it couldn't.)
If you refuse to test that it can actually process all those requests from all the AI bots, then how do you even know it actually works for the original overload that prompted the rewrite in the first place? (And if it does work, then what excuse is there to not serve the content under the old URLs, too?)
It may only be a matter of time until the AI regroups, and the thing breaks again.
From personal experience, many bots don't actually handle redirects correctly. Using redirects is therefore a great way to ensure that legitimate users still get served the content, while the bots end up with a redirect they don't necessarily know how to follow.
It's unfortunate that URLs change. But the old URLs give a good indication
what it is that a link was pointing at, so a user who might understand what
they were being pointed to can probably figure out how to get there. And
perhaps, a lot of "communication by URL" should be replaced by full
sentences, anyway. We will probably be changing the 404 error to help
guide humans to the desired location, but brains will have to be involved.
It may be an indication to you, but it doesn't give such an indication to someone who's not familiar with OpenBSD.
> As a reminder, Cool URIs don't change. www.w3.org/Provider/Style/URI
I've rarely been accused of being cool.
OK, I'll reword it. Breaking URLs for zero purpose is corporate BS and the very definition of passing the buck: people not caring about the quality of the code they write or of the service they maintain.
Intentionally serving "404 Not Found" when you know 100% what the user wants is just wrong.
I thought OpenBSD was about correctness. The correct answer to these requests would be 300 Multiple Choices (with links to both CVS and Git, for example), 301 Moved Permanently, or 302 Found.
C.