Wednesday, December 30, 2020

The most f*cked up piece of software: "Leaky abstractions? Never heard of it"

Note: if you came here looking for a solution to the particular problems mentioned in this post, you will be disappointed: there is no solution, sorry. I advise you to read the post anyway: you might still find it useful. You can also skip to the section below where I describe some possible actions you can try: who knows, you might be lucky.

The Law of Leaky Abstractions describes a very real cause of problems in software products. saaj-ri was never shy about leaking abstractions in quite a spectacular way. The fact that it managed to have two different classes implementing the DOM Document interface is really telling: you could start with a Document instance, and then invoking getDocumentElement().getOwnerDocument() would give you a different Document instance. Bugs related to this "feature" can be very entertaining to track.

But this is nothing compared to what the project achieved in just half a year in 2017...

 

After being hit by the bug described in the previous post I thought I would not have to deal with saaj-ri for quite some time. I thought I would just file a bug and that would be it. (After finding which one of the myriad repositories for this particular code is the real owner.)

I was wrong...

For a new project we had to make sure that our software runs in a particular environment. We had never tried that environment before, so we expected some problems along the way. No big deal.

After we resolved the most obvious problems we eventually noticed that the warnings we had added to detect the saaj-ri bug described in the previous post were present in the logs in the new environment.

This was strange... I looked closer at the new environment and noticed that it included some saaj-ri jars. Thanks to the magic of the service loader we were getting classes from this saaj-ri jar and not from the JDK. The environment's saaj-ri version was 1.4.1 while JDK 8 ships 1.3.<something>. Nice, we got some bug fixes along the way for free. We could now remove the JDK patch and just patch the saaj-ri version packaged with the new environment. That looked a bit cleaner.

But before making a patch I switched our codebase to the same saaj-ri 1.4.1 version and fired up the tests...

Call me naïve but I expected only one failing test: the test for the bug with namespace prefixes.

But I was not disappointed, oh no. Seeing the test results made me laugh. About a third of the tests for a particular piece of functionality were failing: NPEs in third-party code, failed assertions because something expected was not found in the test messages, errors like "org.w3c.dom.DOMException: WRONG_DOCUMENT_ERR: A node is used in a different document than the one that created it", you name it.

So much for "some bug fixes along the way"!

Sit comfortably and let me tell you a fairytale:

Once upon a time there was a project, just a little project. It implemented the SAAJ specification, and to minimize the amount of code it relied on the internal DOM classes of the JDK. That was probably a reasonable decision back in those days, given that the little project was also bound to be included in the JDK.

And so it became part of the JDK core functionality. That was a big mistake. It was added to the JDK and mostly forgotten: for years there were literally no changes.

But then JPMS came along (as you name the yacht, so shall it sail...). The little project suddenly had a big problem: the dependency on the internal DOM classes became a liability.

Instead of saying "screw the JPMS, we will just open one internal module of the JDK to the other" somebody decided to do "the right thing". A much, much bigger mistake! Especially given where this stuff ended up in the latest JDK versions.

Well, the fairytale did last long, but it came to an abrupt end. The project maintainers had to do a lot of work and probably did not have a lot of time for it.

So how do you do it properly? You are not allowed to use the classes from the other module directly. You cannot inherit from them. So you decide to wrap.

This is probably the only correct decision the project maintainers made, and I guess it happened only by accident: there was no other choice. But the execution of this decision resulted in a mess and an utter nightmare.

If you implement an API by wrapping code that implements the same or a similar API, it is important not to let wrapped instances leak into the code that uses the wrappers. If you allow this, it becomes just too easy to pass a wrapper instance into a wrapped instance's method: a SOAPElement into DOM Node.appendChild(), for example.
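The failure mode is easy to reproduce even without saaj: plain JDK DOM throws the same WRONG_DOCUMENT_ERR whenever a node created by one Document is handed to another. A minimal demonstration (plain DOM only, no saaj involved):

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.DOMException;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class WrongDocumentDemo {
    // Appends an element created by one Document to another Document
    // and reports the resulting DOMException code.
    static short appendAcrossDocuments() throws Exception {
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document docA = builder.newDocument();
        Document docB = builder.newDocument();
        Element createdByA = docA.createElement("leaked");
        try {
            docB.appendChild(createdByA); // node belongs to docA, not docB
            return -1; // no exception would mean no problem
        } catch (DOMException e) {
            return e.code; // 4 == DOMException.WRONG_DOCUMENT_ERR
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(appendAcrossDocuments() == DOMException.WRONG_DOCUMENT_ERR); // true
    }
}
```

With saaj-ri's wrappers the same error surfaces whenever a wrapper instance, backed by one document, reaches a method of the wrapped document.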

So the correct decision is: wrap everything, and never let a wrapped instance be visible to the external code.

I would estimate that for saaj-ri this is about two weeks of work, with another week or two to implement comprehensive unit tests that prove no wrapped instance is ever leaked outside. Tests are really important here because the DOM API offers myriad different ways to obtain one instance from another.

There is also that pesky DOMImplementation interface which would probably require some additional work but let's forget about it for the moment. It is forgotten in saaj-ri anyway.

So after those three to four weeks one could be reasonably sure the job is done and done correctly. There would of course be a thing or two in some corner of the API that had been forgotten, but for the rest the code would be ready.

I do not know what the project maintainers decided to do, but what they did was minimal wrapping, probably stopping once they made sure the existing unit tests passed.

This created an enormous abstraction leak: the new saaj-ri implementation would hand you wrapper instances and wrapped instances without any thought. Sometimes you would get a wrapper instance from one method and a wrapped instance from another, quite similar, method (like the NS vs non-NS variants).
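The discipline that was skipped can be sketched in a few lines. The interface below is a made-up two-method stand-in for the much larger DOM Node, purely to show the shape: every returned instance is re-wrapped, every incoming argument is unwrapped, so the two worlds never mix.

```java
// Made-up minimal interface standing in for org.w3c.dom.Node.
interface TreeNode {
    TreeNode getParent();
    void append(TreeNode child);
}

// A plain implementation playing the role of the "wrapped" (internal) class.
class InternalNode implements TreeNode {
    TreeNode parent;
    TreeNode lastAppended;
    public TreeNode getParent() { return parent; }
    public void append(TreeNode child) {
        lastAppended = child;
        ((InternalNode) child).parent = this;
    }
}

// The wrapper: the only kind of instance external code should ever see.
final class NodeWrapper implements TreeNode {
    final TreeNode wrapped;
    NodeWrapper(TreeNode wrapped) { this.wrapped = wrapped; }

    static TreeNode wrap(TreeNode n) {
        return (n == null || n instanceof NodeWrapper) ? n : new NodeWrapper(n);
    }
    static TreeNode unwrap(TreeNode n) {
        return (n instanceof NodeWrapper) ? ((NodeWrapper) n).wrapped : n;
    }

    // Every return value is re-wrapped: a wrapped instance never escapes.
    public TreeNode getParent() { return wrap(wrapped.getParent()); }
    // Every argument is unwrapped: a wrapper never enters the wrapped world.
    public void append(TreeNode child) { wrapped.append(unwrap(child)); }

    public static void main(String[] args) {
        InternalNode parent = new InternalNode();
        InternalNode child = new InternalNode();
        TreeNode wp = wrap(parent);
        TreeNode wc = wrap(child);
        wp.append(wc); // a wrapper goes in, the internal node is what gets stored
        System.out.println(parent.lastAppended == child);          // true
        System.out.println(wc.getParent() instanceof NodeWrapper); // true
    }
}
```

The real job is doing exactly this for every method of Node, Element, Document and friends, plus caching wrappers per node so that wrap() returns the same wrapper for the same node (otherwise identity comparisons break). Tedious, but mechanical.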

The project's history is really telling: the first commit for JDK 9 was made on 20 January 2017, and throughout February and March there were sporadic fixes in different parts of the wrapping code, mostly adding more wrapper classes. And it looks like this crap also survived reviews from the JDK team, and nobody raised any concerns.

There were even more fixes in that area in May, again adding more wrapper classes. The fixes clearly indicate that the whole "wrap only if it is really necessary" approach was broken, but nobody gave a damn.

Meanwhile this crap was released as part of JDK 9, and people started to notice, filing issues like WSS-605. Other projects were forced to implement workarounds which looked, honestly, quite terrible. But I cannot blame the maintainers who had to come up with such hacks: they thought (and rightly so) that they had to do whatever they could to make sure their projects could be used with JDK 9.

Somewhere in June and July 2017 there were more fixes in the saaj-ri wrapping code. Again nobody said stop to this madness.

So after five months of "development" the result was still a mess. This big mess was released as saaj-ri 1.4.1. This is what I was dealing with.

Just for kicks I decided to spend some time trying to make our tests work. Of course I could have gone the way WSS4J went to fix WSS-605, but as I said, the fix they implemented looked terrible. Instead I decided to patch saaj-ri wherever necessary. With just a couple of changes I plugged some abstraction-leaking holes and got rid of the WRONG_DOCUMENT_ERR failures and the NPEs.

But I still had failing assertions in our tests. It took me some time to debug them, but eventually I found the reason: a wrapped instance escaped through yet another hole, and some library we used navigated the wrapped document but used the wrapping element in the condition that decided whether it had found what it needed.

Some googling took me to SANTUARIO-516 where a similar problem is discussed. Following the links I came to this issue. They "have fixed" it here.

The fix? Yet another class was wrapped. I rather enjoyed the comment on the fix:

This change fixes the delegation so that element types, including attributes and text nodes, correctly identify as being part of the same owning document.
Note that whilst this fixes the consistency of some cloned elements, a proper fix is beyond what can be achieved within this change and requires a comprehensive review.

Wow! After three and a half years they finally started to suspect something was wrong. Or maybe the comment is just about the newly added wrapper class.

Of course they did not bother to set up that "comprehensive review". Or maybe they did bother and came up with nothing.

Another problem with their fix is that it was applied only to the saaj-ri version based on the new "jakarta" SAAJ interfaces (jakarta.xml.soap.*). Tough luck if you need it for the previous version (javax.xml.soap.*).

I reapplied this fix to the 1.4.1 version where I had made my changes so far. It fixed some of our tests, but some others that were OK before this change started to fail with WRONG_DOCUMENT_ERR. Splendid work: they fixed one place where it leaked and introduced another leak.

This "fix" yet again demonstrates how f*cked up this project is.

I had to track down and change more places where they did not bother with proper wrapping or unwrapping. Our tests finally passed, except for the one I was expecting to fail from the start, for the issue described in the previous post.

Phew...

I even went as far as patching WSS4J and removing their WSS-605 fix, and the tests were still OK. This just proves that the workarounds other projects implemented for saaj-ri issues would not have been necessary had the saaj-ri developers done their job properly in the first place.

After all this work to make the tests pass we are still faced with the question: what must we do with our project w.r.t. saaj-ri?

The answer is clear: get rid of it, and now!

We will not use my fixes in our project. It is clear that saaj-ri 1.4+ is so broken that we have no guarantee it does not fail in some other use case which we just happen not to test, or even in the tested scenarios but with a slightly different SOAP message. Given the way some of these errors manifest themselves, it would be a nightmare to debug them in production.

But getting rid of saaj-ri will take some time, so for now we are just switching back to the saaj-ri 1.3 packaged with JDK 8: we still have this possibility in the new environment.

 

If you came here looking for a solution to a problem with saaj-ri, well, I can only say you have a big problem. The best solution is to get rid of the SAAJ dependency. I understand it is not always possible, but this is definitely the way to go.

If you are still on JDK 8 and got the saaj-ri 1.4 dependency by accident, try to switch to the JDK's saaj-ri version.

You can try to find another saaj implementation and hope it does not have any quirks.

In some cases you can work around the problems by staying "pure DOM" as long as possible and converting your DOM to SOAP at the very end. Or the other way around: get hold of the wrapped DOM as soon as possible and just continue with DOM.
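A sketch of the "pure DOM until the very end" route: build the whole envelope with the DOM API and only hand the serialized bytes to SAAJ at the last moment. The final MessageFactory hand-off is shown commented out because it needs a SAAJ implementation on the classpath; everything else is plain JDK.

```java
import java.io.ByteArrayOutputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class DomFirst {
    static final String SOAP12 = "http://www.w3.org/2003/05/soap-envelope";

    // Builds a SOAP 1.2 envelope with plain DOM and returns the serialized bytes.
    static byte[] buildEnvelope() throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().newDocument();
        Element envelope = doc.createElementNS(SOAP12, "Envelope"); // default ns, no prefix
        doc.appendChild(envelope);
        envelope.appendChild(doc.createElementNS(SOAP12, "Body"));

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = buildEnvelope();
        // Only at the very end hand the bytes over to SAAJ (requires an implementation):
        // SOAPMessage msg = MessageFactory.newInstance(SOAPConstants.SOAP_1_2_PROTOCOL)
        //         .createMessage(new MimeHeaders(), new ByteArrayInputStream(bytes));
        System.out.println(new String(bytes, "UTF-8"));
    }
}
```

This keeps saaj-ri's wrappers out of the picture while the document is being built and manipulated, which is where the leaks bite.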

If your code manipulates SOAP documents via SAAJ, try to change it to use the DOM API and then apply what I have written above.

But if you have some third-party dependency that uses saaj and not DOM then, I am sorry, you are doomed.

I am not going to create tickets for the problems I have found in saaj-ri. It is clear that the project maintainers are changing the code to hide the symptoms and are not going to fix the whole thing once and for all. Nor am I going to create a separate ticket for each problem I have identified: there are just too many of them.

There is no way I am going to submit my changes back to saaj-ri. Again, these changes have fixed some issues, but I have no idea what is still not fixed. I do not have the time to do it properly. Last but not least, I do not want my name associated with this pile of crap.

I am not going to create a ticket for the problem with namespace prefixes described in the previous post: why bother?

In fact the only ticket I would like to create is one for closing the saaj-ri project for good, but no doubt such a ticket would not be accepted. And that is probably a good thing: this project is a chance to keep some very talented people busy at least part of their time. This way they have less time to apply their expertise to other projects. Every little thing counts...

Sunday, December 27, 2020

The most f*cked up piece of software: "no worries, we know better how you want your SOAP message parsed"

Well, as I wrote at the beginning of the previous post: at the time I encountered the problem I did not think it would be really interesting to blog about. So I did not.

But about a month ago we got a new business partner. We use SOAP (and in our case: still saaj-ri) to communicate with this partner. Messages received from that partner were failing some complex preprocessing step. Needless to say, we did not have any problems with messages sent by other partners.

At first we ignored the problem because this was a new partner and their messages were not compliant with some other part of the specification we relied on. We even had to make some changes to adapt our code to detect and accept their peculiar view of the specification. And since the business partner did not send a lot of messages at first, we did not even notice the problem.

Eventually we noticed that all messages from that partner were failing a particular preprocessing step. When I looked at the problem I could not understand the reason for the failure. Basically, the message had several parts, each part went through the same preprocessing step, and all but one part were preprocessed successfully. One always failed.

I dumped the message before the preprocessing step and then preprocessed it separately, and it failed exactly the same way as in our product.

I was prepared to blame the business partner. After all, we did have a problem with them not following the specification. So I thought this was just another place where they were doing something incorrectly.

But I knew that the preprocessing library we use also has its issues. I had found and reported one some time ago, and it was confirmed and fixed.

So I started playing directly with this library trying to make it accept the message, but nothing helped.

Then I decided to look at the message as we receive it. I dumped it before it went to the SOAP parser and tried to preprocess it separately. Again, it failed the same way.

OK, case closed: the business partner is to blame?

But then I looked at the dumped messages. I mean, really looked, with my own eyes. And I could not believe what I saw. I added a unit test that just parsed the bytes as dumped before the saaj-ri SOAP parser and immediately serialized the parsed message. The parsed-then-serialized message differed from the original one.

This is the original message:

<Envelope xmlns="http://www.w3.org/2003/05/soap-envelope">
<Header ...>
...
<Body ...>
...

And this is the parsed-then-serialized message:

<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
<env:Header ...>
...
<env:Body ...>
...

See those "env:" namespace prefixes? When I saw this ... let's put it this way ... I expressed my deep concerns about the state of software development in general and in this project in particular. I wished all the best to the developer responsible for this particular behavior, and to all their relatives going back 10 generations. I was amazed at the scale of cognitive problems the said developer must have suffered throughout his or her life.

Needless to say, parsing the same raw data with a DOM parser and then preprocessing it made the problem disappear.

I found the code responsible for this fubar almost immediately. This is it: NameImpl.java

Just look at it! Does it not call for even more best wishes to the developer who is responsible for this ... pile? It certainly forced me to do it.

The code looks like it was created by some junior developer who had just come across this wonderful feature of Java: static nested classes. And the developer went full throttle using this feature.

Do not get me wrong: I do not have anything against junior developers. Of course I went through this stage myself. In many ways I am probably still a junior developer. But this is why we have things like code reviews and people like mentors or team leads.

This code should have been cleaned up right in the next commit after it was added.

Actually, I digress. Normally I would not even blink at this kind of code in a third-party dependency: life is too short for that. But having wasted quite some time on this problem, I thought I had every right to rant about it.

The gem of this problem lies in all those wonderful classes like SOAP1_1Name which explicitly check for an empty namespace prefix and "helpfully" replace it with some hardcoded predefined value.

What on earth was the person who added these checks thinking?! "Oh look, an empty prefix. It is probably some kind of mistake, so let's fix it."? Did this person expect others to thank him? How about a medal? I think something like a slap on the hands with an industrial-grade keyboard would be more appropriate.
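To make concrete what this "helpfulness" amounts to, here is a hypothetical reconstruction (not the actual NameImpl source): qualify() stands in for whatever saaj-ri does internally, and "env" for its hardcoded prefix.

```java
public class PrefixFixup {
    // Hypothetical stand-in for the behavior described above: an empty
    // prefix is silently replaced with a hardcoded one.
    static String qualify(String prefix, String localName) {
        if (prefix == null || prefix.isEmpty()) {
            prefix = "env"; // "helpful" hardcoded default, as in SOAP1_1Name & co.
        }
        return prefix + ":" + localName;
    }

    public static void main(String[] args) {
        // An element that used the default namespace (empty prefix) comes
        // back with a prefix it never had:
        System.out.println(qualify("", "Envelope")); // env:Envelope
    }
}
```

Semantically the message is unchanged (the namespace URI is the same), but any consumer that compares raw bytes, canonicalizes, or verifies signatures over the original serialization now sees a different document, which is exactly how our preprocessing step broke.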

It looks like we have to give more priority to the task of moving away from saaj-ri.

It is just bizarre how much low-quality code ends up in the core tools...

For the moment we have just patched this class. Since we still use JDK 8 in most cases, this means we had to patch the version packaged with the JDK and use the -Xbootclasspath/p JVM option.
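For reference, the patching mechanics look roughly like this (the jar and class names below are hypothetical): on JDK 8, classes prepended to the boot class path win over the JDK's own copies, which is what makes patching a JDK class possible at all.

```shell
# JDK 8 only: -Xbootclasspath/p was removed in JDK 9 along with the
# monolithic boot class path. 'saaj-patch.jar' (hypothetical name) contains
# only the recompiled, fixed classes under their original package paths.
java -Xbootclasspath/p:saaj-patch.jar -cp app.jar com.example.Main
```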

And we have added a startup warning to detect when we are running in an environment without the patch active.

Saturday, December 26, 2020

The most f*cked up piece of software ...

... that I have seen so far. Surprisingly it is not JBoss.

I know I have not blogged for a long time. It is not like I did not have any fun with Oracle, JBoss, the JDK, or some other woefully broken code. It is just that most problems I come across look pretty mundane. Or they require quite an in-depth write-up, and I feel it is stupid to spend a lot of time describing something after I have already wasted hours, if not days, understanding and fixing it.

Case in point: I first ran into a problem with a particular piece of functionality almost three years ago. The problem was reproducible only under some specific circumstances, so it took some time to find the root cause. When I found it I did not think it would be really interesting to blog about. After all, I had already encountered much more interesting problems in software coming from the same group of developers. So I came up with a workaround which helped in a lot of cases, and I decided that we needed to get rid of this dependency altogether as soon as possible.

Unfortunately other tasks with higher priority prevented me from completing that task, and we still have this dependency. On the other hand, had I completed it, I would not have recently had the "pleasure" of wasting literally days tracking down some other problems caused by the same software.

I decided that the sheer variety of problems in one not particularly complicated project is really worth blogging about.

Meet the "most fucked up piece of software du jour": the SAAJ reference implementation. I did not really trace its pedigree, as I did not care from which particular shithole "incubator of promising projects" it crept out, but it is present under the prefix "metro" in a lot of code repositories. The names of some authors in the saaj-ri source code match the names of authors in other metro-related projects like WSIT. So I am not amused.

What made all the problems worse is that Sun started to include metro subprojects in the JDK as core functionality. (As if the JDK code was perfect and needed some low-quality code for balance.) As a result, a lot of broken code ended up being used by a lot of developers in a lot of projects. This also made fixing bugs more difficult: even when they were fixed as part of the metro projects, the fixes were seldom included in the JDK.

Oracle continued this tradition, at least up to JDK 11, when they finally removed a bunch of stuff, including SAAJ, from the JDK. But before that they made some extremely bad choices which resulted in saaj-ri being broken beyond anything imaginable.

saaj-ri is now part of Eclipse EE4J and looking at its git repository I can only conclude that the project decay is evident. Not that I care.

But saaj-ri is still included in a lot of places and a lot of developers are still using it, directly or indirectly. I can only wish good luck to them.

Well, enough ranting. Time to show the code. Let's start with the problem I encountered 3 years ago.

Part of our application was processing SOAP messages using the SAAJ implementation packaged with the JDK. We noticed that under some circumstances the processing of several SOAP messages in parallel was almost grinding to a halt. After taking some thread dumps and analyzing them, we found this wonderful piece of code in the saaj classes.

This is the code from saaj-ri. The code in the JDK was under a different package name, but otherwise it is the same or very close. See line 91:

private static ParserPool parserPool = new ParserPool(5);

You can enjoy the source code of ParserPool here, although it is not interesting.

In the current JDK8 EnvelopeFactory is a bit different: it has a classloader-bound pool. What a relief!

What does this mean? You can parse at most 5 SOAP messages in parallel [from code loaded by the same classloader]. If for some reason the parsing takes a long time, you have a serious problem.

This code raises many questions, like: why the hell do they need the pool at all? OK, it might have been important back when saaj-ri was initially developed, but why is this code still present now? And why only 5 instances?

To add insult to injury, this code also has a resource leak. In some circumstances you might end up with an empty parser pool: some exceptions can be thrown after a parser was retrieved from the pool, and in that case the parser is never returned to the pool.
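The failure mode can be sketched in a few lines (this illustrates the pattern, it is not the saaj-ri source): a fixed-size pool whose take is not paired with a put on the exception path loses one slot per failure, until nothing is left.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PoolLeakDemo {
    static final int POOL_SIZE = 5;
    static final BlockingQueue<Object> pool = new ArrayBlockingQueue<>(POOL_SIZE);
    static {
        for (int i = 0; i < POOL_SIZE; i++) pool.add(new Object()); // the "parsers"
    }

    // The buggy shape: if parse() throws, the parser is never returned.
    static void parseLeaky() throws Exception {
        Object parser = pool.take();
        parse(parser);    // throws -> this parser is lost forever
        pool.put(parser);
    }

    // The fixed shape: the parser goes back no matter what.
    static void parseSafe() throws Exception {
        Object parser = pool.take();
        try {
            parse(parser);
        } finally {
            pool.put(parser);
        }
    }

    static void parse(Object parser) throws Exception {
        throw new Exception("malformed message"); // simulate a parse failure
    }

    public static void main(String[] args) {
        try { parseLeaky(); } catch (Exception ignored) { }
        System.out.println("after leaky failure: " + pool.size()); // 4
        try { parseSafe(); } catch (Exception ignored) { }
        System.out.println("after safe failure:  " + pool.size()); // still 4
    }
}
```

Every malformed message that hits the leaky path permanently shrinks the pool, so a handful of bad inputs is enough to make all subsequent take() calls block forever.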

But do not despair! The good developers of saaj-ri are ready to combat the problems. They actually fixed the resource leak somewhere in a 1.4.x version.

And they added a way to increase the pool size! Wow, now it is configurable. (As usual, with a JVM system property, once and for everybody, but who am I to point to a better way?)

Monday, September 28, 2015

WTF collection: how not to add an API

Some time ago we decided to move one of our projects to JDK 1.8. After all, JDK 1.7 had reached EOL. While we did not really want to start using all those JDK 1.8 features, the security updates were important for us. In addition, the newest versions of popular browsers refuse to talk HTTPS to web servers running on JDK 1.7, and our project has a web UI.

The move was not an easy exercise. Yes, most of the problems were traced to our code, although for some of them I would have expected better diagnostics from the tools or frameworks.

But one problem stood out.

The project communicates over HTTPS with some remote web services. And we started getting errors for one of them. The web service is a bit different than others: the web service provider insisted on using the SSL/TLS Server Name Indication (SNI).

We had had some problems when we initially started communicating with this web service while still running JDK 1.7. And now the errors with JDK 1.8 were remarkably similar to the errors we had had back then. It was immediately clear who the primary suspect was.

After all, I knew that JDK 1.8 includes an API to explicitly control SNI behavior. But I hoped that the JDK would do the right thing if the SNI settings were not explicitly controlled. Our code did not control them.

Let's look closer at it. This is what -Djavax.net.debug=all is for.

First surprise: after setting the property, I could not log in to our web-based UI! We got errors in browsers saying that the HTTPS connection could not be established. Removing the property fixed the UI. How is it even possible to release a product where enabling some debugging output breaks functionality?! Yes, a lot of developers, me included, have made similar mistakes, but leaving such a bug in a released version goes too far, in my opinion.

And how the hell am I supposed to debug the suspected SNI problem? Let's try another JDK. OpenJDK 1.8.0_51 out, Oracle JDK 1.8.0_60 in. Result: the problem with the debugging property and HTTPS in the web UI was gone.

Hope dies last ... the problem with the web service is still there. Of course it would have been too easy if this problem were also solved.

But at least I could now look at the SSL debugging output. And indeed, exactly what I thought: SNI is not sent.

I also knew about the jsse.enableSNIExtension system property. We started the JVM without this property, so the default behavior must have been used. Needless to say, explicitly setting the property to true did not change a thing.

The rest was just the tedious work of creating a reproduction scenario and some googling. A simple program with some basic HttpURLConnection manipulations did not reproduce the problem: the JDK was sending the SNI info. Time to look at the JDK source code and do more debugging.

From my point of view the authors of that part of Java reserved themselves, a long time ago, a permanent place in developer's hell. This code is a mess and a bloody nightmare. Yes, I have seen code that was much worse. But somehow I expected the JDK code to be of better quality...

After many cycles of modifying the test program, debugging, and studying the mess they call "source code", I came to this beauty:

HttpsClient.java, starting from line 430, method public void afterConnect():
[430]    public void afterConnect()  throws IOException, UnknownHostException {
...
[439]                    s = (SSLSocket)serverSocket;
[440]                    if (s instanceof SSLSocketImpl) {
[441]                       ((SSLSocketImpl)s).setHost(host);
[442]                    }
...
[470]            // We have two hostname verification approaches. One is in
[471]            // SSL/TLS socket layer, where the algorithm is configured with
            ...
            The rest of the very long and insightful comment is stripped
[518]             boolean needToCheckSpoofing = true;
[519]             String identification =
[520]                s.getSSLParameters().getEndpointIdentificationAlgorithm();
[521]             if (identification != null && identification.length() != 0) {
[522]                 if (identification.equalsIgnoreCase("HTTPS")) {
[523]                    // Do not check server identity again out of SSLSocket,
[524]                    // the endpoint will be identified during TLS handshaking
[525]                    // in SSLSocket.
[526]                    needToCheckSpoofing = false;
[527]                }   // else, we don't understand the identification algorithm,
[528]                    // need to check URL spoofing here.
[529]            } else {
[530]                 boolean isDefaultHostnameVerifier = false;
...
[535]                 if (hv != null) {
[536]                    String canonicalName = hv.getClass().getCanonicalName();
[537]                     if (canonicalName != null &&
[538]                    canonicalName.equalsIgnoreCase(defaultHVCanonicalName)) {
[539]                        isDefaultHostnameVerifier = true;
[540]                    }
[541]                }  else {
...
[545]                    isDefaultHostnameVerifier = true;
[546]                }

[548]                 if (isDefaultHostnameVerifier) {
[549]                    // If the HNV is the default from HttpsURLConnection, we
[550]                    // will do the spoof checks in SSLSocket.
[551]                    SSLParameters paramaters = s.getSSLParameters();
[552]                    paramaters.setEndpointIdentificationAlgorithm("HTTPS");
[553]                    s.setSSLParameters(paramaters);

[555]                    needToCheckSpoofing = false;
[556]                }
[557]            }

[559]            s.startHandshake();
...
[581]    }
There are so many things wrong here. Let's go through them:
  1. Lines 440 - 442: the hostname is passed to the SSL socket via a non-public API. This basically prevents you from providing your own SSL socket factory with your own SSL socket delegates: your sockets will not get the hostname info. And the hostname is used by the default trust verification mechanism invoked from the default SSL socket implementation.

  2. The biggest "wrong" in this code starts on line 470. The authors probably wasted all their energy on those 50 lines of comments and had nothing left to properly implement the logic. Basically, the SNI information is sent only if the method SSLSocketImpl.setSSLParameters() is invoked. If it is not invoked, no SNI is sent. And the code above shows that setSSLParameters() is invoked in one case only: if no endpoint identification algorithm was specified and the default hostname verifier was in effect. Our code had a custom hostname verifier, and oops: SNI was not sent.

    The funny thing about it: if one bothers to explicitly specify an endpoint identification algorithm, even the default one, SNI is not sent either.

    There is actually a bug JDK-8072464 about the "non-default hostname verifier", but it does not mention an explicitly specified endpoint identification algorithm. And it looks like they do not plan to fix it in 1.8.

  3. There is another bug lurking in the API and the implementation: there is no easy way to disable SNI, or to customize it for a particular connection. Yes, one can disable sending SNI by setting the jsse.enableSNIExtension system property to false, but it is a JVM-wide setting. Don't you hate it when the only way to get some functionality is to use a system property? I hate that kind of "all or nothing" approach. And the JDK is full of it. One of the worst offenders is JavaMail: it gives you a way to specify per-connection settings and still relies on JVM system properties in some cases. Really clever technique!

    Back to SNI: you see, to explicitly specify SNI you have to implement an SSL socket factory, which is already quite a step. There you can use setSSLParameters() to customize the SNI, or provide an empty list if you do not want SNI sent. So far so good, but the socket factory is the only place where you are in control of the socket. And it is too early, because HttpsClient.afterConnect() is invoked much later. Say no endpoint identification algorithm is specified and the default hostname verifier is in effect. Or just imagine bug JDK-8072464 is actually fixed. In this case the default SNI behavior kicks in and overwrites whatever you specified in the socket factory. Remember that little setHost() on line 441? This is where the hostname gets into the SSL parameters. And then the code on lines 551 - 553 overrides your precious SNI with the one that was set on line 441.

    So in reality you have to implement both an SSL socket factory and an SSLSocket, so that you can do the additional SNI manipulation in the startHandshake() method. But then your sockets will not get the hostname set, because lines 440 - 442 are not executed for your custom SSLSocket.

A small detour: go read what they call a "JDK Enhancement Proposal" about SNI, especially the section about testing.
Need to verify that the implementation doesn't break backward
compatibility in unexpected ways.
A noble goal, is it not?

Just imagine how much time they have spent on that JEP, on the API, on the implementation. Net result? Puff, zilch, and JDK-8072464.

Of course all this is applicable only if your code relies on the JDK's HTTP support. This is probably another very good reason to move to libraries like Apache HttpComponents. I do not know if it properly supports SNI and gives you enough rope, but at least it can be patched much more easily if needed.

Since we still have to rely on the JDK's HTTP support, I had to resort to a custom SSL socket factory, a custom SSL socket, and things like "instanceof SSLSocketImpl" and typecasts. Too much code for my liking just to work around a silly bug. But at least we can now send messages to that web service.
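For illustration, the skeleton of such a workaround is a delegating factory that post-processes every socket it hands out. This is a sketch only; the startHandshake() and "instanceof SSLSocketImpl" acrobatics mentioned above are omitted because they depend on JDK internals:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.util.Collections;
import java.util.List;
import javax.net.ssl.SNIServerName;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Wraps the default factory and strips SNI from every socket it creates.
public class NoSniSocketFactory extends SSLSocketFactory {
    private final SSLSocketFactory delegate =
            (SSLSocketFactory) SSLSocketFactory.getDefault();

    private Socket clearSni(Socket s) {
        SSLSocket ssl = (SSLSocket) s;
        SSLParameters p = ssl.getSSLParameters();
        p.setServerNames(Collections.emptyList()); // empty list = no SNI
        ssl.setSSLParameters(p);
        return ssl;
    }

    @Override public Socket createSocket() throws IOException {
        return clearSni(delegate.createSocket());
    }
    @Override public Socket createSocket(String host, int port) throws IOException {
        return clearSni(delegate.createSocket(host, port));
    }
    @Override public Socket createSocket(String host, int port,
            InetAddress localHost, int localPort) throws IOException {
        return clearSni(delegate.createSocket(host, port, localHost, localPort));
    }
    @Override public Socket createSocket(InetAddress host, int port) throws IOException {
        return clearSni(delegate.createSocket(host, port));
    }
    @Override public Socket createSocket(InetAddress address, int port,
            InetAddress localAddress, int localPort) throws IOException {
        return clearSni(delegate.createSocket(address, port, localAddress, localPort));
    }
    @Override public Socket createSocket(Socket s, String host, int port,
            boolean autoClose) throws IOException {
        return clearSni(delegate.createSocket(s, host, port, autoClose));
    }
    @Override public String[] getDefaultCipherSuites() {
        return delegate.getDefaultCipherSuites();
    }
    @Override public String[] getSupportedCipherSuites() {
        return delegate.getSupportedCipherSuites();
    }

    public static void main(String[] args) throws IOException {
        try (SSLSocket s = (SSLSocket) new NoSniSocketFactory().createSocket()) {
            List<SNIServerName> names = s.getSSLParameters().getServerNames();
            System.out.println(names == null || names.isEmpty()); // no SNI configured
        }
    }
}
```

Such a factory can then be installed via HttpsURLConnection.setSSLSocketFactory(). And remember: per the discussion above, the JDK can still override what the factory set.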

And, by the way, there is another problem with the JDK's SNI handling. From my point of view it is also a bug, but this time it falls into the somewhat grey area of definition ambiguity. The code in question is Utilities.rawToSNIHostName().

The SNI RFC section 3.1 prohibits use of IP addresses in SNI, and the JDK behaves correctly if hostname in the above code is an IP address.

But they also ignore hostnames without a '.' character. This is wrong. I guess they are trying to follow the RFC where it says:
"HostName" contains the fully qualified DNS hostname of the server,
as understood by the client.

There are two problems with the JDK's behavior. First of all, the spec says "as understood by the client". If a hostname, with or without a '.' character, is correctly resolved to an IP address, it is as good as "fully qualified as understood by the client". So the JDK incorrectly excludes hostnames without a '.' character.

On the other hand, if the JDK follows some other specification of a "fully qualified DNS hostname", then the mere presence of a '.' character in a hostname does not make it fully qualified. It is at most "a qualified name". Unless of course the JDK authors have somewhere a specification that says "a hostname is fully qualified if it has at least one '.' character". But I bet they just got tired after writing all those specs, JEPs, APIs, and comments in the code.
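To make the complaint concrete, here is my own paraphrase of that heuristic, not the JDK code (the real Utilities.rawToSNIHostName() additionally rejects IP address literals, as mentioned above):

```java
public class SniEligibility {
    // Paraphrase of the JDK's "fully qualified" heuristic: the name must
    // contain a '.' that is neither the first nor the last character.
    static boolean qualifiesForSni(String hostname) {
        if (hostname == null || hostname.isEmpty()) {
            return false;
        }
        return hostname.indexOf('.') > 0 && !hostname.endsWith(".");
    }

    public static void main(String[] args) {
        System.out.println(qualifiesForSni("service.example.com")); // true
        System.out.println(qualifiesForSni("myhost"));   // false: no '.' character
        System.out.println(qualifiesForSni("myhost."));  // false: trailing '.'
    }
}
```

So a perfectly resolvable single-label name like "myhost" never gets SNI, which is exactly the problem described above.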

Sunday, August 2, 2015

WTF collection: JDK and broken expectations

Imagine: you have a piece of code that has been working fine for a long time. The code connects to some web server, sends some data, gets the result, etc. Nothing fancy here.

One day the very same piece of software was used to send data to yet another web server. This time it was different: we got long delays and a lot of errors like "Unexpected end of stream". Time to investigate.

The packet capture revealed one particular HTTP request header that just screamed "You want some problems? Just use me!" The header was Expect: 100-continue

The code in question uses the java.net.HttpURLConnection class. (Yes, I know about Apache commons HTTP, but that is beside the point.) I just hoped that this header was not set by the JDK code automatically. It is always a great PITA to change HTTP headers that the JDK babysits. Just try to set the HTTP request header "Host", for example! Fortunately the "Expect" header is not one of those restricted by the JDK. It was set by the application itself based on some conditions. A couple of changes and this header is not sent any more. No more delays, no more "Unexpected end of stream" errors, everything works.

By now I was curious enough to find out what was going on. Even the fact that I had to go through some JDK code did not stop me. Almost every time I have had to look into the JDK code I got the feeling I could easily guess where the authors of that code grew up...

Yep, no surprises here, the same style, see the sun.net.www.protocol.http.HttpURLConnection class. Here is the relevant piece of "logic":

private void expect100Continue() throws IOException {
            // Expect: 100-Continue was set, so check the return code for
            // Acceptance
            int oldTimeout = http.getReadTimeout();
            boolean enforceTimeOut = false;
            boolean timedOut = false;
            if (oldTimeout <= 0) {
                // 5s read timeout in case the server doesn't understand
                // Expect: 100-Continue
                http.setReadTimeout(5000);
                enforceTimeOut = true;
            }

            try {
                http.parseHTTP(responses, pi, this);
            } catch (SocketTimeoutException se) {
                if (!enforceTimeOut) {
                    throw se;
                }
                timedOut = true;
                http.setIgnoreContinue(true);
            }
            if (!timedOut) {
                // Can't use getResponseCode() yet
                String resp = responses.getValue(0);
                // Parse the response which is of the form:
                // HTTP/1.1 417 Expectation Failed
                // HTTP/1.1 100 Continue
                if (resp != null && resp.startsWith("HTTP/")) {
                    String [] sa = resp.split("\\s+");
                    responseCode = -1;
                    try {
                        // Response code is 2nd token on the line
                        if (sa.length > 1)
                            responseCode = Integer.parseInt(sa[1]);
                    } catch (NumberFormatException numberFormatException) {
                    }
                }
                if (responseCode != 100) {
                    throw new ProtocolException("Server rejected operation");
                }
            }

            http.setReadTimeout(oldTimeout);

            responseCode = -1;
            responses.reset();
            // Proceed
    }

Nice thing: they decided to work around some broken HTTP servers that may send a 100 HTTP response even if the request does not have the "Expect: 100-continue" header. See those http.setIgnoreContinue() calls here and there?

This is actually the only nice thing I can say about this code.

The authors also decided to work around another possible misbehavior with respect to the "Expect" header. See that comment "5s read timeout in case the server doesn't understand Expect: 100-Continue" on line 1185? Except that all this happens only if the connection does not have its own read timeout set, see line 1184.

But if you decided to protect your application against servers that take too long to respond by setting the read timeout on an HttpURLConnection to some reasonable value like 30 or 60 seconds, you are screwed. The JDK starts waiting for a 100 HTTP response, and if the server does not send one, the JDK times out only after waiting as long as your read timeout setting (30 or 60 or whatever seconds!). In that case the JDK does not bother sending the request body to the server, and your application gets a SocketTimeoutException. Nice work, guys!
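Reduced to a single function, the gotcha in the quoted code is this (my restatement of the oldTimeout check, not JDK code):

```java
public class ExpectTimeout {
    // How long HttpURLConnection waits for the "100 Continue" response:
    // the 5 second safety net applies only when NO read timeout is set.
    static int effectiveWaitMillis(int configuredReadTimeout) {
        return configuredReadTimeout <= 0 ? 5000 : configuredReadTimeout;
    }

    public static void main(String[] args) {
        System.out.println(effectiveWaitMillis(0));     // no timeout configured: 5000
        System.out.println(effectiveWaitMillis(30000)); // your "safety" timeout: 30000
    }
}
```

Worse: per the quoted code, in the second case the SocketTimeoutException is rethrown instead of being swallowed, so the request body is never sent at all.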

And that is not all. Another very interesting piece of logic starts on line 1200. If some response was read from the server, the code verifies the response code. A completely useless and extremely harmful piece of code: if the server responded with anything other than 100, the JDK reports a "Server rejected operation" error.

Now go back to RFC 2616 and read the very first requirement for HTTP/1.1 origin servers (emphasis mine):

- Upon receiving a request which includes an Expect request-header
field with the "100-continue" expectation, an origin server MUST
either respond with 100 (Continue) status and continue to read
from the input stream, or respond with a final status code. The
origin server MUST NOT wait for the request body before sending
the 100 (Continue) response. If it responds with a final status
code, it MAY close the transport connection or it MAY continue
to read and discard the rest of the request.  It MUST NOT
perform the requested method if it returns a final status code.

See that "or respond with a final status code" part? Say the server implements that part of the specification correctly. It receives a request with the "Expect: 100-continue" header and decides that it cannot handle the request because, for example, it does not recognize the specified context path. The server immediately sends a 404 HTTP response. But the JDK knows better, and instead of the real error your application gets a stupid and meaningless "Server rejected operation" error. Good luck finding out what went wrong.

Be warned and do not use the "Expect: 100-continue" HTTP request header if you have to rely on the java.net.HttpURLConnection class. And do not think the JDK code is somehow of better quality than the rest of the code out there. In fact it is normally on par, or even worse.
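You can see the safe path without any external service by using the JDK's built-in com.sun.net.httpserver in a throwaway test; the client below simply never sets the header, so none of the expect100Continue() machinery runs:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class NoExpectDemo {
    public static void main(String[] args) throws Exception {
        // Throwaway local server that echoes the request body back.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/echo", exchange -> {
            byte[] body = exchange.getRequestBody().readAllBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        try {
            URL url = new URL("http://localhost:"
                    + server.getAddress().getPort() + "/echo");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            // Deliberately NOT calling:
            // conn.setRequestProperty("Expect", "100-continue");
            try (OutputStream out = conn.getOutputStream()) {
                out.write("hello".getBytes());
            }
            System.out.println(conn.getResponseCode()); // 200
        } finally {
            server.stop(0);
        }
    }
}
```

The request body goes out immediately and the real response code comes back, with no 5 second (or 30 second!) detour.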

Thursday, February 6, 2014

Oracle

I have had my share of irritations with Oracle's software. Oracle buys companies and software products, good or bad, rebrands them, makes them worse over the years, and still manages to sell that crap. In the end poor developers have to deal with enormous behemoths of the software world, not really understanding what it is all about and how the hell these things are supposed to work. And even if developers understand all of that, the crap from Oracle does not work as documented or expected anyway.

In my mind there was one exception to this: the Oracle database. Probably because it is what the company started with.

Yes, the Oracle database has its share of strange features, non-standard implementations, etc. Yes, many argue that Oracle is the worst database out there. No, I am not going to join holy wars about which database is better.

Note that I wrote 'was one exception'. Recently my opinion has changed.

I am not a DBA. I do not have really deep knowledge of relational databases. I know enough to write SQL if necessary. And if I can, I try to offload some work from my applications to the database. This keeps the processing close to the data being processed. As a result some of my SQL statements end up being quite complicated, but the application code is much cleaner. Sometimes I run into strange behavior that turns out to be a feature of the database product. It is not a big deal if it happens every now and then.

Unfortunately, for the last week or two I had to write a lot of SQL statements. Some of them looked complicated, at least to me. I tested them against PostgreSQL and Oracle. I did not have any problems with PostgreSQL.

But Oracle... Well, it turned out my SQL was complicated for Oracle as well. Constructs that looked obvious just did not work with Oracle, resulting either in error messages, sometimes very strange ones, or in downright wrong results.

Did you know that Oracle treats an empty string as null? Now I do. If you have a not null column of, say, VARCHAR2 type, you cannot insert an empty string into it. How clever is that?! Now what? Insert a string containing a single space??? Brr. Worst of all: there are people out there who defend Oracle on this issue.

Or how about not having a boolean type? You cannot have a column of boolean type in a table or in a query result. You have to go with 1/0, or 'Y'/'N', or whatever. Wow, a real database shines through.

Another thing I ran into is the limitations of updatable result sets. Sure, these limitations do not have anything to do with the database itself. They are part of the JDBC driver. But that is just playing with words. I do not care how great the database is if its JDBC driver is lousy.

Oracle documents support for updatable result sets. It is interesting to look at the difference between the versions: Oracle8i and Oracle9i. There are also more recent versions, for example, Oracle 11g, but they are quite similar to Oracle9i.

There is one interesting limitation that is mentioned in Oracle8i documentation and is not there in Oracle9i: "A query cannot use ORDER BY"

One might draw the conclusion that Oracle lifted this limitation a long time ago. Ha-ha, gotcha! It is still there, even in driver version 11.2.0.2.0.

But the funniest thing is not the documentation, but why the limitation is there in the first place. It turns out the driver parses the SQL statements passed to it, looking for quite a number of SQL fragments. One such fragment is ORDER BY. And when an application uses methods like ResultSet.updateRow(), the driver takes the original statement, truncates it right before the ORDER BY, and then appends the result after some UPDATE ... SET ... fragment it has generated. Now imagine what that does to a statement that has analytic functions like ROW_NUMBER() OVER (ORDER BY ...). Bloody clowns!
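For illustration, here is a toy reconstruction of that truncate-at-ORDER-BY idea (my own code, not the driver's), showing why analytic functions break it:

```java
public class NaiveOrderByCut {
    // Toy version of the driver's alleged approach: chop the statement
    // at the first "ORDER BY", whatever the context.
    static String stripOrderBy(String sql) {
        int i = sql.toUpperCase().indexOf("ORDER BY");
        return i < 0 ? sql : sql.substring(0, i).trim();
    }

    public static void main(String[] args) {
        // Fine for a plain statement:
        System.out.println(stripOrderBy("select * from t order by id"));
        // But it mangles an analytic function beyond repair:
        System.out.println(stripOrderBy(
                "select row_number() over (order by id) rn from t"));
    }
}
```

The second result is not even valid SQL any more, which is presumably why the "no ORDER BY" limitation never went away.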

The next Oracle "feature" hit me hard. I had quite a complicated INSERT from SELECT query that did some grouping, reducing a 1M row table to a result set of about 70 rows. It worked without a hitch in PostgreSQL. But in Oracle I was getting one of those "unable to extend segment ..." errors. Looking into what happened and why, I discovered that Oracle did not apply the grouping at all: the result set contained 1M rows! WTF?!

The grouping was done "per date": the original data contained event timestamps, and the query should have produced "so many events of a particular type per day". To convert a timestamp to a date I used cast(timestamp as DATE). The documentation says: "CAST converts one built-in datatype ... value into another built-in datatype ... value." To hell it does. It probably just sets the type tag on the value without any conversion. So yes, if you run something like
select
    cast(systimestamp as date) d1,
    cast(systimestamp + interval '1' second as date) d2
  from dual;
you see the same two values in the output and think "yep, cast() works". But if you run
select d1, d2 from (
    select
        cast(systimestamp as date) d1,
        cast(systimestamp + interval '1' second as date) d2
      from dual
) where d1 = d2;
you get no rows back! On the other hand, this works as expected, returning a single row:
select d1, d2 from (
    select
        trunc(systimestamp) d1,
        trunc(systimestamp + interval '1' second) d2
      from dual
) where d1 = d2;

Now my SQL produces the expected 70 rows in both PostgreSQL and Oracle. But I still wonder: how many such "small details" will hit me tomorrow?

The next one is a bit questionable: ORA-00904: XXX: invalid identifier. It is described in quite some detail, for example, at "Ask Tom".

Why is it questionable? Well, Tom claims Oracle follows the standard here:
ANSI SQL has table references (correlation names) scoped to just one level deep.

This might very well be the case. The language used in all those standards is usually quite incomprehensible. It is really difficult to understand what the authors meant to say. And everyone who tries to follow a standard usually understands things slightly differently. I found the SQL92 text on the web. As I expected, it is completely unclear whether there is such a limitation in the standard. Actually, I will say it more strongly: if I did not know about Oracle's interpretation, I would not even think there is a limitation.

Now imagine Java had the same scoping rules:
void a() {
    int  c = 0;
    while (c < 10) {  // OK, visible
        ...
        if (c == 5) { // OK, visible
            int  k = c; // Oops, c is not visible here
            ...
        }
    }
}
But let's give Oracle the benefit of the doubt on this one.

Time for my favorite: error ORA-32036. Just go and read its description. And read it again ... and again ... A wonderful piece of English prose, is it not? I guess if you cannot appreciate its beauty you cannot be a real DBA. Now google it. It turns out the error depends on how a JDBC connection is made. The error happens if the statement is executed on an XA connection. But everything works OK on a non-XA connection. And it is not only JDBC; it is the same in the .NET world.

I could have added much more but it is getting late...

Only one thing bothers me: I can explain all the issues I mentioned above. Stupid decisions like not having a boolean type, the empty string behavior, bugs in parsing, [mis-]interpretation of the standard, etc. Yes, I can see how they could have happened. But that last one (ORA-32036)?! Damn, I am losing my imagination.

Friday, September 13, 2013

SQL: Upsert in PostgreSQL

Upsert is a very useful SQL statement. Unfortunately not every database supports it. The product I am working on can use several different databases. One of them is PostgreSQL, which does not support upserts.

It is not a big deal: googling for it turns up several solutions, including triggers, functions, and a "Writeable CTE". I find the Writeable CTE solution quite elegant. Unfortunately it has one ... well, feature that sometimes might be completely unimportant, just a nuisance in some cases, or a real problem in other situations.

For me it was the latter.

If you execute the example from the "Writeable CTE" page you will get the following results before executing the upsert:
1   12   CURR
2   13   NEW
3   15   CURR
After upsert the results are (changes are in bold):
1   12   CURR
2   37   CURR
3   15   OBS
4   42   NEW
So rows with ids 2 and 3 were updated and a new row with id 4 was inserted. But if you paid close attention to the messages in pgAdmin or psql, you might have noticed the following:
Query returned successfully: 1 row affected 

The query did its job, but reported only the inserted rows! Imagine the query results only in updates. It will then report
Query returned successfully: 0 row affected

By the way, Oracle reports the correct result: the combined number of affected rows. In the example above it says
3 rows merged

Is it important? After all the query did what it was asked to do.

For me it is important. My real query could do one of three things: update a single row, insert a single row, or do nothing. And I need to know which way it went. Actually, all I need to know is whether a row was affected or not. With Oracle I know. With PostgreSQL I know only if a row was inserted. Sure, I can always go to the database and ask, but that means another query, another roundtrip...

But who says my upsert query has to stop at only one CTE? Meet the beauty:
WITH
upsert as
(update mytable2 m
    set sales = m.sales + d.sales,
        status = d.status
   from mytable d where m.pid = d.pid
 returning m.*),
ins as
(insert into mytable2
 select a.pid, a.sales, 'NEW' from mytable a
  where a.pid not in (select b.pid from upsert b)
 returning *)
select (select count(*) from ins) inserted,
       (select count(*) from upsert) updated;

If you repeat the example, but run this query instead of the original upsert, you get the job done and you also get the following result:
inserted   updated
1          2

You immediately know the answer. And it is better than Oracle, because in Oracle you cannot differentiate between inserted and updated rows!

You can tweak the select any way you want. Need only the "total number of affected rows"? Use:
select (select count(*) from ins) + (select count(*) from upsert);

I ended up with something like:
select 'i' kind from ins
union
select 'u' kind from upsert

Since there is at most one affected row in my case, I get either an empty result set, or a result set with a single row and column having the value 'u' or 'i'. And I do not really need to know whether the row was inserted or updated, so my Java code is really simple:
boolean  isChanged = stmt.executeQuery().next();

Nice and simple.