Tuesday, March 10, 2026

Re: cvsweb news

On 3/10/26 2:39 AM, Otto Moerbeek wrote:
> On Tue, Mar 10, 2026 at 01:05:48PM +0700, hahahahacker2009 wrote:
>
>> Vào Thứ 7, 7 thg 3, 2026 vào lúc 06:08 James Jerkins
>> <james@jamesjerkinscomputer.com> đã viết:
>> >
>> > On 03/05/26, Nick Holland wrote:
>> > > that problem was malformed URLs in the links, if you removed the duplicate
>> > > part of the URL, it worked as expected. krw@ had already found and fixed
>> > > the problem, but I hadn't updated it in the running code (oops!). This
>> > > has been fixed now.
>> > >
>> >
>> > Thanks, that fixed the duplicate URLs!
>> >
>> > Every https request to the new cvsweb from Lynx displays an error:
>> >
>> > "SSL error:unable to get local issuer certificate"
>> >
>> > Checking the certs returned via the command line shows certs for cvsweb
>> > and the intermediate.
>> >
>> > CONNECTED(00000003)
>> > depth=0 CN = cvsweb.openbsd.org
>> > verify error:num=20:unable to get local issuer certificate
>> > verify return:1
>> > ---
>> > Certificate chain
>> > 0 s:/CN=cvsweb.openbsd.org
>> > i:/C=US/O=Let's Encrypt/CN=R12
>> > -----BEGIN CERTIFICATE-----
>> >
>> > Can the new cvsweb send the full certificate chain or is this a
>> > Lynx problem?
>> >
>>
>> curl also fails. I think certificates should always be read using
>> a function that reads the full certificate chain?
>>
>> > I love the new bike shed and the paint color. Job well done to Ken and Nick!
>> > Thank you both for improving cvsweb.
>> >
>> >
>>
>
> Looking with openssl s_client, this seems more like the server only
> sends its own certificate and not the full chain. Some clients do not
> like that.
>
> -Otto
>

This should be fixed now.
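For anyone debugging a similar "unable to get local issuer certificate" error on their own server: Let's Encrypt clients typically write out both a leaf-only `cert.pem` and a `fullchain.pem` (leaf plus intermediate), and handing the web server the wrong one produces exactly the symptom Lynx and curl reported above. As a rough local sanity check (file names and contents here are illustrative assumptions, not the actual cvsweb configuration), you can count the certificate blocks in the PEM bundle the server is configured to serve:

```python
# Sketch: a leaf-only PEM has one certificate block; a full chain
# (what the server should present) has at least two.

def count_certs(pem_text: str) -> int:
    """Count certificate blocks in a PEM bundle."""
    return pem_text.count("-----BEGIN CERTIFICATE-----")

# Dummy placeholder contents; real files hold base64 DER between the markers.
leaf_only = (
    "-----BEGIN CERTIFICATE-----\n...leaf...\n-----END CERTIFICATE-----\n"
)
full_chain = leaf_only + (
    "-----BEGIN CERTIFICATE-----\n...intermediate...\n-----END CERTIFICATE-----\n"
)

assert count_certs(leaf_only) == 1   # clients like Lynx/curl will fail verification
assert count_certs(full_chain) == 2  # leaf + intermediate: what should be served
```

On the client side, `openssl s_client -connect host:443 -showcerts` will print every certificate the server actually sends, which is how the missing intermediate was spotted here.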

Nick.

Re: cvsweb news

On 3/10/26 11:37 AM, Constantine A. Murenin wrote:
[... blah ... blah ... blah ...]

You made arguments for why it is "easy".
You gave no consideration to the CONSEQUENCES of your "easy"
solutions.

The old cvsweb app made it trivial to scrape not only every single file, but
every single version of every single file, every single incremental diff of
every file and /every single diff between every two versions of every file/.
So... for a file in CVS with N commits, there are N*(N-1) diffs possible.
And the old version of cvsweb exposed all of those...for every single file in
CVS. They already have a list of all these possible diffs, and they are
attempting to use them.
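The combinatorics above can be made concrete: with N revisions of a file, there are N*(N-1) ordered pairs of distinct revisions, and each pair is a separately scrapeable diff URL. A small illustrative sketch (not cvsweb code):

```python
def possible_diffs(n_revisions: int) -> int:
    """Ordered pairs of distinct revisions, each a distinct diff URL."""
    return n_revisions * (n_revisions - 1)

# Growth is quadratic: a file with a long commit history exposes a
# disproportionately large number of scrapeable diff pages.
print(possible_diffs(10))    # 90
print(possible_diffs(100))   # 9900
```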

Every diff requested requires firing up external applications. That's a lot
of load, even for a significantly more efficient application.

Those requests are still coming in. Yesterday, well over 90% of the queries
we got were URLs from the old application. So by returning a 404 instead of
firing up cvs, co, and/or rcsdiff every time a bot query comes in, we save a
LOT of load on the system. That's a HUGE win.
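The idea can be sketched in a few lines: recognize old-style URLs up front and answer 404 before any external process is spawned. The path patterns below are hypothetical examples for illustration, not the actual rules deployed on cvsweb.openbsd.org:

```python
# Hypothetical sketch: short-circuit requests for old-style cvsweb URLs
# with a 404 instead of forking cvs/co/rcsdiff for each one.
OLD_URL_MARKERS = ("/cgi-bin/cvsweb", "~checkout~", ".diff?r1=")

def is_old_url(path_and_query: str) -> bool:
    """Return True if the request looks like an old-application URL."""
    return any(marker in path_and_query for marker in OLD_URL_MARKERS)

# A bot replaying an old diff link gets a cheap 404...
assert is_old_url("/cgi-bin/cvsweb/src/Makefile.diff?r1=1.1&r2=1.2")
# ...while new-style requests pass through to the application.
assert not is_old_url("/src/Makefile")
```

The win is that the rejected requests cost one string match instead of several forked processes each.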

This win has enabled me to remove the IP filters, and I've removed much of
the malicious-request handling, which was all justifiably highly unpopular
and (unfortunately) hurting some of the legitimate users. I hope to soon
return the systems this application runs on to a fully redundant CARP pair
(due to the load, I had to split cvsweb off to its own machine, losing the
redundancy). Lots of "win" here.

So...unless the OpenBSD developers request otherwise, we will not be
worrying about -- and will in fact be actively discouraging -- the old URLs.
I get it: less than optimal. But this whole problem has been a gigantic
"less than optimal" that shouldn't exist, yet it does, and we deal with it
as best we can with the resources we have available. As far as Ken and I are
concerned at this point, the discussion of supporting the old URLs is over.

(And special thanks to my employer for laying me off at an opportune time where
I could devote a fair chunk of time to work with Ken at getting his new solution
up and running!)

Nick.