Friday, September 13, 2013

SQL: Upsert in PostgreSQL

Upsert is a very useful SQL statement. Unfortunately not every database supports it. The product I am working on can use several different databases. One of them is PostgreSQL, which does not support upserts.

It is not a big deal: googling for it turns up several solutions, including triggers, functions, and "Writeable CTE". I find the Writeable CTE solution quite elegant. Unfortunately it has one ... well, feature that might be completely unimportant, just a nuisance in some cases, or a real problem in other situations.

For me it was the latter.

If you execute the example from the "Writeable CTE" page you will get the following results before executing the upsert:
pid  sales  status
1    12     CURR
2    13     NEW
3    15     CURR
After the upsert the results are (changed values marked with *):
pid  sales  status
1    12     CURR
2    37     CURR  *
3    15     OBS   *
4    42     NEW   *
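For the record, here is a setup that reproduces the data above. This is my own reconstruction; the tables on the original "Writeable CTE" page may differ in details:

create table mytable2 (pid integer primary key, sales integer, status varchar(8));
insert into mytable2 values (1, 12, 'CURR'), (2, 13, 'NEW'), (3, 15, 'CURR');

-- the "source" table the upsert pulls its changes from
create table mytable (pid integer primary key, sales integer, status varchar(8));
insert into mytable values (2, 24, 'CURR'), (3, 0, 'OBS'), (4, 42, 'NEW');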
So rows with ids 2 and 3 were updated and a new row with id 4 was inserted. But if you paid close attention to the messages in pgAdmin or psql, you might have noticed the following:
Query returned successfully: 1 row affected 

The query did its job, but reported only the inserted rows! Imagine the query results only in updates. It will then report
Query returned successfully: 0 row affected

By the way, Oracle reports the correct result: the combined number of affected rows. In the example above it says
3 rows merged

Is it important? After all the query did what it was asked to do.

For me it is important. My real query could do 3 things: update a single row, insert a single row, or do nothing. And I need to know which way it went. Actually all I need to know is whether a row was affected or not. With Oracle I know. With PostgreSQL I know only if a row was inserted. Sure, I can always go to the database and ask, but that means another query, another roundtrip...

But who says my upsert query has to stop at only one CTE? Meet the beauty:
WITH
upsert as
(update mytable2 m
    set sales = m.sales + d.sales,
        status = d.status
   from mytable d where m.pid = d.pid
 returning m.*),
ins as
(insert into mytable2
 select a.pid, a.sales, 'NEW' from mytable a
  where a.pid not in (select b.pid from upsert b)
 returning *)
select (select count(*) from ins) inserted,
       (select count(*) from upsert) updated;

If you repeat the example, but run this query instead of the original upsert, you get the job done and you also get the following result:
inserted   updated
1          2

You immediately know the answer. And it is better than Oracle because in Oracle you cannot differentiate between inserted and updated rows!

You can tweak the select the way you want. Need only "total number of affected rows"? Use:
select (select count(*) from ins) + (select count(*) from upsert);

I ended up with something like:
select 'i' kind from ins
union
select 'u' kind from upsert

Since there is at most one affected row in my case, I get either an empty result set, or a result set with a single row and a single column whose value is 'u' or 'i'. And I do not really need to know whether a row was inserted or updated, so my java code looks really simple:
boolean  isChanged = stmt.executeQuery().next();

Nice and simple.
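Spelled out a bit more, the surrounding JDBC code looks roughly like this (just a sketch: the method and variable names are mine, and the real query of course has its own parameters):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// upsertSql holds the WITH ... select 'i' ... union select 'u' ... query above
static boolean upsertChangedRow(Connection conn, String upsertSql) throws SQLException {
    try (PreparedStatement stmt = conn.prepareStatement(upsertSql);
         ResultSet rs = stmt.executeQuery()) {
        // at most one row comes back: 'i' if a row was inserted,
        // 'u' if a row was updated, an empty result set if nothing was touched
        return rs.next();
    }
}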

Wednesday, June 19, 2013

Aaaaaah! Metro, you made my day!

As I have mentioned before, WS-security unit tests in metro are using SAAJ to handle SOAP messages. A typical test goes like this:
  1. A SOAP message is created and populated with some elements by the test code.
  2. A WS-Policy-based configuration that defines what to do with the message is created.
  3. A security operation is performed.
  4. The resulting SOAP message is written to a file.
  5. A new instance of SOAP messages is created from the file.
  6. A new WS-Policy-based configuration that defines how to validate the message is created.
  7. Validation is performed.
I decided to add some JAX-WS code to one of the tests. Originally the test was doing both signing and encryption, but I removed encryption: it is much easier to see what is going on. Then I verified that the test was still green, and that it failed if I modified the file the test generates before the file is read back in. Just in case: you can never be too careful.

I have added the following steps at the end of the test:
  1. The file with the message, created after the security operation, is read in the streaming mode as a JAX-WS message.
  2. A new WS-Policy-based validation configuration is created. It is actually exactly the same code as in SAAJ SOAP message case.
  3. Validation is performed. This is done by a different set of classes than in the SAAJ case, although the class names are similar. Here metro shines again: although the operations are similar, the way validation is invoked is different. Worse, all the parameters of the JAX-WS way of validating are type-compatible with the SAAJ validation. Trying to use the SAAJ validation code with a JAX-WS message compiles but fails with an NPE.

After some failed attempts I got a version that not only compiled but also went quite deep into metro code, and then failed with a signature validation error. It was "Reference #idxxx: signature digest values mismatch" or something like that.

This was ... interesting. The same message was OK if validated as a SOAPMessage instance and failed if validated as a JAX-WS message. Something fishy was going on. Metro has not only provided multiple implementations of XML-Security, they have also managed to make them incompatible. Remind me, what does that "WSIT" stand for?

Of course the problem might have been in the way I had set up the JAX-WS-based validation, but I was quite sure it was another genuine "feature" of metro.

In order to understand what the error message means it is necessary to understand what the signed message looks like. It is a SOAP message with a lot of additional stuff added to the SOAP Header (a lot of details omitted):
<Envelope ...
   <Header>
       <Header1 wsu:Id="id_h1".../>
...
          <ds:Signature >
          <ds:SignedInfo>
...
              <ds:Reference URI="#id_h1">
                  <ds:DigestValue>some base64</ds:DigestValue>
              </ds:Reference>
              <ds:Reference URI="#id_body">... <ds:Reference>
...
          </ds:SignedInfo>
          <ds:SignatureValue>some base64</ds:SignatureValue>
          </ds:Signature>
       <HeaderN wsu:Id="id_hN".../>
   </Header>
   <Body wsu:Id="id_body">...</Body/>
</Envelope>

Each <ds:Reference> element "describes" a particular thing that is signed, typically some element from the same message, as well as how that thing has to be preprocessed before calculating the digest, what algorithm is used to produce the message digest, and the message digest value itself. The URI attribute of <ds:Reference> specifies which element is digested.

<ds:SignedInfo> can contain a lot of such <ds:Reference> elements. And then comes <ds:SignatureValue>, which is actually a digitally signed message digest of the <ds:SignedInfo> element.

The order of the signed header elements and <ds:Signature> is not important. Some say the signature must come after the to-be-signed header elements to facilitate streaming, but this is a moot point. More often than not the SOAP Body also has to be signed, so you can kiss that nice streaming theory goodbye.

Anyway, the error I was getting, "Reference #idxxx: signature digest values mismatch", was about the very first <ds:Reference> in the message. It meant that the verifying code looked at the URI attribute, URI="#id_h1" in this case, found the corresponding header element by its id, <Header1 wsu:Id="id_h1".../>, and calculated its digest. And the calculated digest did not match the <ds:DigestValue> of the <ds:Reference>.

I switched on the logging and repeated the test several times, with the same result. I was not sure what I wanted to see. The logging did not show anything exciting. But then I noticed a pattern. The output contained the calculated digest value and the expected one taken from the <ds:DigestValue> of the <ds:Reference>. The values were unreadable because a digest is just byte[], and the metro guys did not bother with encoding/decoding or pretty-printing them. While the expected digest was clearly changing from run to run, the calculated digest looked the same. This was clearly wrong because the digest should have been calculated over the header element including all its attributes. While most of the things remained unchanged, the wsu:Id attribute differed from run to run. So the calculated digest had to be different as well.

Checking the verification code under the debugger confirmed this: the calculated digest was exactly the same every time the test was executed. So what exactly was metro using as the source for the digest calculation? It turned out: in this particular case, nothing.

Yeap, nothing. So the calculated "digest" is probably some fixed initial state of the message digest implementation.
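In other words, the calculated value is most likely just the digest of zero bytes of input, which is a constant. A small illustration with plain JDK classes (nothing metro-specific):

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

static byte[] digestOfNothing() throws NoSuchAlgorithmException {
    // SHA-1 over zero bytes of input is always the same value:
    // da39a3ee5e6b4b0d3255bfef95601890afd80709
    return MessageDigest.getInstance("SHA-1").digest();
}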

The problem had nothing to do with how I used the metro API. The real reason was the signed message itself. Time to show the relevant <ds:Reference> in its full glory:
<ds:Reference URI="#_5002">
    <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
    <ds:DigestValue>dKof/iss1y+eaCxi5xQGzXZw8RQ=</ds:DigestValue>
</ds:Reference>

The key to the problem is not what is there, but rather what is absent. Some important piece is missing, namely, the "instructions" on how the data has to be preprocessed before calculating the digest. Normally a <ds:Reference> looks like this:
<ds:Reference URI="#_5006">
    <ds:Transforms>
        <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
            <exc14n:InclusiveNamespaces PrefixList="S"/>
        </ds:Transform>
    </ds:Transforms>
    <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
    <ds:DigestValue>JWq3aJtzUP98fkiThJ0WYtcrWCY=</ds:DigestValue>
</ds:Reference>
<ds:Transforms> and its child <ds:Transform> elements are such "instructions".
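For comparison, this is roughly how such a reference is built with the JSR 105 API (javax.xml.crypto.dsig) that ships with the JDK; just a sketch, not metro's actual code, reusing the id from the example above:

import java.util.Collections;
import javax.xml.crypto.dsig.CanonicalizationMethod;
import javax.xml.crypto.dsig.DigestMethod;
import javax.xml.crypto.dsig.Reference;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.spec.TransformParameterSpec;

static Reference headerReference(XMLSignatureFactory fac) throws Exception {
    // digest the element with Id "_5006" using SHA-1, but only after running it
    // through exclusive canonicalization -- the <ds:Transform> "instruction"
    return fac.newReference(
            "#_5006",
            fac.newDigestMethod(DigestMethod.SHA1, null),
            Collections.singletonList(
                    fac.newTransform(CanonicalizationMethod.EXCLUSIVE,
                                     (TransformParameterSpec) null)),
            null, null);
}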

The rest was easy: knowing that there were no <ds:Transform> elements I looked at what metro does in this case with the referenced element. Well, nothing. No data is digested.

Some questions remained though:
  1. Why did the signing code produce a message without <ds:Transform> elements?
  2. Why was the SAAJ message successfully validated?
  3. What is the expected behavior according to the specification when there are no <ds:Transform> elements?

Let's start with the last question. The answer is: I do not care. It might be valid, it might not be valid, but in any case it is definitely not a "signature digest values mismatch". This is actually an answer to the first question as well. Why did the signing code produce a message without <ds:Transform> elements? It does not matter, because metro might need to process such messages anyway, no matter how they were created.

Why was the SAAJ message successfully validated? Well, the validation was performed by the same code as the signing. For SAAJ messages metro delegates all the work to a JSR 105 "Java XML Digital Signature API Specification" implementation, which is now part of the JDK. Basically it is some older version of Apache Santuario, repackaged by Sun. I checked the Apache Santuario source and found some remarkable similarities with the metro code. Except that Santuario's code does not have this particular bug, because after applying all the transforms it checks whether there is any data left to be processed, and processes it. And metro does not perform this check. The check has existed in Santuario's code for ages, from the first check-in of JSR 105 support in 2005. I guess metro had "borrowed" some even older version of that code. As a result metro fails if there are no <ds:Transform> elements, and might also fail if there are <ds:Transform> elements. I did not check the logic there completely, but it looks like some combinations of <ds:Transform> elements might result in the same error. The "I" in WSIT looks more and more like a joke.

By the way, why did the signing code produce a message without <ds:Transform> elements? The transforms to use come from WS-Policy, but not directly. At least when I tested some metro samples, the generated messages had <ds:Transform> elements, yet the WS-Policy declarations used by the samples did not mention transforms explicitly. The metro runtime probably adds this during parsing of the WS-Policy. The signing code in the unit test uses a combination of a WS-Policy file and some runtime policy manipulation to create the final policy for the signature process. What exactly has to be signed is defined by java code, so this is probably why the final policy ended up having no transforms specified. Sure enough, after I found out how to add this info to the java-created policy and modified the code, the signed messages were produced with <ds:Transform> in <ds:Reference>. And the JAX-WS way of verifying the message went OK as well.


What can I add? At least interoperability-wise metro really shines.

Monday, June 17, 2013

Web services: in search of interoperability

I have some experience with web services, SOAP, SAAJ, what not, but until recently I did not have the "pleasure" of working with all the rest of the WS-* standards. WS-Addressing, WS-Crap, WS-Security, WS-whatever.

Not that long ago however I ran out of luck.

The product I am working on can operate as a WS service or a WS client. And it works. But there are situations when "it works" is not good enough. Usually it has to do with artificial barriers created to protect certain markets. This is all accompanied by some "specification" to which a product must conform. As a rule such a specification is a badly edited collection of copy-pasted fragments from other standards documents from well-known authorities like W3C, OASIS, etc., often with contradictory statements. Nobody cares.

Recently we needed to prove that our product conforms to one such spec. We were lucky. Not only was there a specification, there was also a "compliancy service" provided. The "compliancy service" could accept SOAP requests and thus validate whether the sender conforms to the specification, or it could send SOAP requests and check the SOAP responses to validate whether the receiver conforms to the specification.

Last year I had to deal with another "specification" and "compliancy service" from the same people. Do you know what one needs to have a good compliancy service? No, you do not have to conform to well-known standards, or even to your own specification. Monopoly is good enough. Add some crappy software and you are set.

For example, the software they used then (and still use) could not handle some erroneous HTTP requests. Instead of returning any kind of response, the "compliancy service" did nothing. Literally. The connection was kept open, but not a single byte of response was sent back. Eventually the client timed out trying to read a response. It took us more than a month of collecting data and e-mailing them before they agreed that the problem was on their side. The problem is still not fixed.

So I knew I had a lot of fun ahead, I just did not know how much.

This time everything revolved around WS-Addressing and optionally WS-Security. How exactly the WS-* stuff had to be applied was specified in an "interoperability standard" document. The document was unclear on a couple of dozen points, but it was a "standard", so our product had to be "standard"-compliant.

The "compliancy service" found no problem in our requests and responses in case no WS-Security had to be applied. Adding XML signature to the picture changed everything.

First, the "compliancy service" did not like what request elements our product signed. It complained we were signing more than needed. Turned out it was the case of "do what I do and not what I say". The "standard" defined some elements as mandatory and allowed some optional elements to be present. In the section that described what has to be signed it said that all mandatory and optional (if present) elements must be signed. But "compliancy service" did not like that our requests had optional elements that were signed. OK, no optional stuff then. And no more complains from the "compliancy service".

But when I started testing our product as a web service provider all hell broke loose. No matter what I did the "compliancy service" said "signature verification failed". Just that.

Since then I have learned what JWSDP, XWSS, WSIT, Metro, you name it, mean. I have seen monsters much worse than in JBoss code.

And I found out that by 2008 there were still XML parsers in the wild that would reject valid XML documents as invalid. And that in 2013 somebody would still use such a parser. Ok, ok, granted, I am not really sure if that XML parsing problem is a feature of the parser itself. It might very well be that the parser was improved as part of the "compliancy service" development. But still... failing to parse XML because there is a character entity representing a whitespace character between some XML elements? Like this:

<Envelope …>
<Header>
<header1 …/>
<header2 …/>&#x20;
</Header>&#x20;
<Body …/>
</Envelope>

Remove either of these two entities and the problem goes away, even if it is added anywhere else. +100 to the "compliance level". Grrr.
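For what it is worth, such a document is perfectly well-formed: a character reference like &#x20; is just an ordinary space. A stock JAXP parser swallows it without blinking (a quick standalone check with simplified element names, obviously not the parser the "compliancy service" uses):

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class EntityWhitespace {
    public static void main(String[] args) throws Exception {
        String xml = "<Envelope><Header><header1/><header2/>&#x20;</Header>&#x20;<Body/></Envelope>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        // parses without a single complaint; prints "Envelope"
        System.out.println(doc.getDocumentElement().getNodeName());
    }
}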

After a lot of experiments and quite some test code to generate XML signatures, I found out that the "compliancy service" did not like newlines in our response messages. Only after I produced a signed response that did not contain newline characters did the "compliancy service" give up and accept the response.

This was really strange because request messages with newlines did not cause any trouble. Submitting a bug report to them was not really an option. We did not have another month.

I found out that the "compliancy service" uses some WS-* toolkit from Sun, not sure of the exact name and version of the toolkit. Nowadays it goes under name "Metro". Or is it WSIT? Beats me. Anyway, based on some stack traces I have seen it was a version from around 2008. Oh, Sun! I had some pleasures debugging JAX-WS RI some time ago. Fine experience, unforgettable.

So I decided to download the latest version of that toolkit to play with it. The decision opened up a bright world of project code names and their relationships. Googling classes from the stack trace turned up XWSS, JWSDP, WSIT, with XWSS being the primary suspect. Project migrations, consolidations, broken download links, and Oracle buying Sun added even more fun.

All the roads led to metro and WSIT. The latest version is 2.3, so be it.

Setting it up and running some samples went mostly flawlessly, but when I started experimenting with soapUI, I immediately ran into an issue. The sample I was using was a SOAP 1.2 web service, but I sent it a SOAP 1.1 request. Granted, it was a faulty request, but a SOAPFault with a NullPointerException and nothing more is quite an extreme way to say "I do not support SOAP 1.1 here".

By the way do you know what WSIT stands for? Web Services Interoperability Technologies. Yeap, "Interoperability".

I also tested how character entities are parsed. I could not reproduce the problem. At least this one is solved. Or it was not a problem of the toolkit at all.

The real fun began when I started sending signed requests from soapUI. First I got bitten by the fact that soapUI "helpfully" modified my messages.

The next problem I ran into was much more serious: some of the signed messages my test code produced were happily accepted by metro, and some were rejected with an error that sounded like "Signature verification of SOAP Body failed".

Some of the messages accepted by metro had newline characters, so again the problem we had with the "compliancy service", if it was a problem of the toolkit at all, was solved. Needless to say, when I generated response messages with the exact same formatting, they were still rejected by the "compliancy service".

And what about the test messages that metro rejected? I actually found the cause quite quickly. Under some circumstances metro chops off whitespace characters, and probably also comments, that are direct child nodes of the SOAP Body. They probably do it in order to promote the "I" ("-nteroperability"). What else can the reason be? And of course whitespace is significant for an XML digital signature.

For example, this:
<Envelope>
…
<Body>
    <elementX .../>
</Body>
</Envelope>

is treated by metro as if it were
<Envelope>
…
<Body><elementX .../></Body>
</Envelope>
But not always. Who said life is easy?
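The effect itself is easy to demonstrate outside of any WS stack: XML canonicalization preserves whitespace inside element content, so the two serializations of the Body digest to different values. A standalone sketch (real signatures digest the canonicalized form, but the point is the same):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class WhitespaceDigest {
    public static void main(String[] args) throws Exception {
        String pretty  = "<Body>\n    <elementX/>\n</Body>";
        String chopped = "<Body><elementX/></Body>";
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] d1 = sha1.digest(pretty.getBytes(StandardCharsets.UTF_8));
        byte[] d2 = sha1.digest(chopped.getBytes(StandardCharsets.UTF_8));
        // prints false: the digests differ, so a signature computed over one
        // form cannot be verified against the other
        System.out.println(Arrays.equals(d1, d2));
    }
}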

Looking back I know that I was lucky when I tested our product as a WS client sending data to the "compliancy service". Purely by chance the request messages did not have any whitespace characters in the Body.

I should say the source code of metro is a ... well, I do not know. Saying "mess" would not do it justice. It would be more like a huge compliment. Classes with the same name in multiple packages, maybe doing the same thing? Or maybe not the same? Methods longer than a couple of thousand lines? Copy-paste? You name it, and it is there.

I also found that the problem had been reported to them, maybe even multiple times. It is always easy when you know exactly what you are looking for. And of course it was reported as fixed. Ha! This is another thing I do not understand: you have found a problem, you have fixed it. Is it so much work to fire up "find usages" in your IDE? One of the remaining places is in the file right next to the one you have just modified! To me it says a lot about the quality of the project and the people working on it.

The problem is in the JAX-WS integration code of WSIT, but given the complexity of metro, JAX-WS is probably the only way metro is used, so the problem affects everybody who uses metro with XML-Security. And people are still running into this problem. The answer from the "Interoperability" specialists is of course "it is your problem". Unfortunately that is true.

Another "stamp of quality" is their unit tests. WS-Security subproject has only 11 tests that do something about WS-Security. Compare that with 27 tests around WS-Policy parsing. Even more interesting fact is that their WS-Security tests do not test JAX-WS code paths. Metro web site claims that they use XML streaming to improve performance. And part of their code is using XMLStreamReader. Whether it improves performance I do not know since they like to copy data into XMLStreamBuffer objects to use them in other places as XMLStreamReader. But their unit tests are using SAAJ to read SOAP messages, and not the streaming code. As a result the code that is actually used by a metro-based WS client or server is not tested.

I should probably not even mention the possibility of having some unit tests for testing interoperability with other toolkits. I doubt they would understand the concept.

Anyway, knowing the problem and the fix, I repeated my failing tests, this time fixing the data as needed under the debugger. Sure enough, no more signature errors.

Net result: our product is compliant with the WS-Security standard. I know what we need to do to make a particular configuration of the latest version of "the great interoperability toolkit of all time" accept our messages. Given the complexity of metro I have no idea whether some other configuration would be OK with our messages.

I still have no idea why the "compliancy service" did not like our responses with newlines in them. Needless to say I tried again, making sure there was no whitespace in the Body, but the error was still the same.

If you are thinking of using metro for your projects, do not. Even if you do not need WS-Security. If they manage to screw things up during parsing I do not want to think what they can do in more complicated cases.

If you are unfortunate enough to be using metro now, especially with WS-Security... Well, if you have metro on both sides, you will not be bitten by this bug, because normally metro generates the SOAP Body without whitespace characters.

If you have some interoperability issues with XML signatures using metro and some other toolkit, check the messages. Maybe you are lucky and all your issues are caused by whitespaces in SOAP Body.

If you are a metro developer... let me say no more.

Tuesday, June 11, 2013

Watch out what soapUI is sending out

I recently had to debug an interoperability issue in how two software products exchange SOAP messages. soapUI is quite handy in such cases, but it can be too helpful and change messages for you.

In my case the messages I was sending had some WS-Addressing elements in SOAP header. I needed these elements to have some specific values, so my messages were prepared accordingly. Nonetheless the receiving side seemed to get confused during processing of these messages.

I wasted some time poking around the receiving side's configuration and logging before I did what I should have done from the beginning: look very closely at the "Raw request" tab in soapUI. It turned out soapUI had "helpfully" modified my carefully crafted messages.

The soapUI project was created from a WSDL that contained some WS-Addressing. soapUI detected it and automatically modified every message. The thing is: my messages already had all the necessary and correct WSA elements, but soapUI went ahead anyway and replaced my wsa:Action element with its own, with the same text value. It also added some namespace declarations here and there. Why could it not leave my wsa:Action element alone? It was correct from the start. soapUI did not just insert its own wsa:Action, it replaced the original one, so it could at least have checked whether the original was valid. Probably that was too much work for them.

And it is not that easy to spot such a small change in all that XML mess. Yeah, it is a silly excuse, but it is true.

Sunday, March 24, 2013

WTF collection: soapUI and HTTP redirects

Countless times I have come across quite astonishing things in software. I cannot even imagine what the creators of the software were thinking: the end result is so obviously broken.

Authors of even otherwise very useful products have their moments.

Take soapUI for example: a very powerful and useful tool. But it has a number of quirks or downright stupid "features". One of them is the topic of this post.

Many webservice frameworks expose webservice WSDLs online: if a webservice is accessible at some <ws_endpoint_url>, then quite often the relevant WSDL is available at <ws_endpoint_url?wsdl>. By the way, I could not find out whether this is just a useful convention or some kind of a standard.

And soapUI can load WSDL files from a filesystem location or from some URL.

With these things combined it is quite easy to create a soapUI project targeting an available webservice. Just use <ws_endpoint_url?wsdl> as "Initial WSDL" in the "New soapUI project" dialog, and you are in business.

This works like a charm for simple webservices where all the definitions are held in a single WSDL file. But the world is tough, and real men use things like wsdl:import or include. And this causes endless pain.

You see, quite often the server responds to <ws_endpoint_url?wsdl> not with the content of the WSDL file but with an HTTP redirect to the real location of the WSDL. And soapUI is intelligent enough to follow this redirect automatically. Then it parses the WSDL and tries to load any "imported" or "included" WSDLs and XSDs.

Here its intelligence stops. soapUI does not realize that the "base url" has changed as a result of the redirect. It keeps resolving all relative locations from the WSDL file using the original <ws_endpoint_url> as the "base url". This results in soapUI using URLs that point to non-existing resources, or even completely invalid URLs. WTF?
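Getting this right is not rocket science; even plain java.net.URI resolves relative references correctly once you hand it the post-redirect URL. A sketch of the resolution logic only, with made-up URLs (this is not soapUI's code):

import java.net.URI;

public class WsdlBaseUrl {
    public static void main(String[] args) {
        URI requested  = URI.create("http://example.com/ws/service?wsdl");
        URI redirected = URI.create("http://example.com/wsdl/service.wsdl");

        // a relative location taken from a wsdl:import / xsd:import in the WSDL
        String relative = "types/common.xsd";

        // wrong: resolving against the originally requested URL
        System.out.println(requested.resolve(relative));
        // -> http://example.com/ws/types/common.xsd (does not exist)

        // right: resolving against the URL the redirect pointed to
        System.out.println(redirected.resolve(relative));
        // -> http://example.com/wsdl/types/common.xsd
    }
}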