Recently I happened to work on three projects that shared a similar aspect: testing a number of interfaces for conformance with certain specifications. In this post I present some of my findings from this exercise.
The first project was an evaluation of the level of conformance with the INSPIRE regulations of a number of WMS and WFS endpoints of a big data provider in the Netherlands. This provider has a nice setup of WMS and WFS endpoints based on the deegree project. Compared to other implementations in the Netherlands their setup is quite mature: they provide GML 3.2.1 over WFS 2.0, harmonised according to the INSPIRE data models. Unfortunately the implementation also has some weaknesses, mainly missing required metadata elements in the service capabilities, on which the conformance test tools reported errors. And that's a pity; due to these minor errors the fancy framework appeared quite a bit less fancy. So I strongly suggested that they integrate some of these test tools into their acceptance and monitoring procedures (and/or have the framework limit the possibility of such errors by requiring fields to be filled in or by taking sensible defaults). Quite a list of test frameworks is available these days (http://bit.do/inspire-test); some of them are quite comprehensive and compete to become the European default test implementation. For these tests I used ETF, the Geoportal validator and some separate Geonovum tools.
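As an illustration, a check along these lines could run in an acceptance or monitoring procedure to catch missing capabilities elements before a formal validator does. This is a minimal sketch in Python (requests + lxml); the endpoint URL is hypothetical and the short XPath list only hints at the elements the INSPIRE Technical Guidelines actually require.

```python
# Minimal capabilities sanity check; endpoint and element list are illustrative.
import requests
from lxml import etree

ENDPOINT = "https://example.org/wms"  # hypothetical WMS endpoint
NS = {
    "wms": "http://www.opengis.net/wms",
    "inspire_vs": "http://inspire.ec.europa.eu/schemas/inspire_vs/1.0",
    "inspire_common": "http://inspire.ec.europa.eu/schemas/common/1.0",
}

resp = requests.get(ENDPOINT, params={
    "service": "WMS", "request": "GetCapabilities", "version": "1.3.0",
})
doc = etree.fromstring(resp.content)

# A few INSPIRE-relevant elements; a real check would follow the TG annexes.
checks = {
    "service abstract": "//wms:Service/wms:Abstract",
    "contact e-mail": "//wms:ContactElectronicMailAddress",
    "INSPIRE extended capabilities": "//inspire_vs:ExtendedCapabilities",
    "metadata URL in extended caps": "//inspire_common:MetadataUrl",
}
for label, xpath in checks.items():
    found = doc.xpath(xpath, namespaces=NS)
    print(f"{'OK     ' if found else 'MISSING'} {label}")
```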
Quite interesting in this project was being able to do some usability tests using common WMS/WFS/CSW clients. I performed tests using a web client (GeoNetwork/OpenLayers), QGIS and ArcGIS Desktop. It was nice to see that searching for the services in a catalogue using the QGIS MetaSearch client is very convenient. Adding the services to the QGIS map works fine (at least for the main Dutch projection; WGS84 gave some challenges, probably related to axis order issues, as illustrated below). The WFS could also be added to the map, although QGIS uses WFS 1.1 for that. The QGIS plugin for WFS 2.0 works great on the WFS services directly, but not when you try to add them via MetaSearch. A little note on the WFS 2.0 plugin: it seems to 'flatten' the data before adding it to the map, which is a very practical approach, but you also lose some of the original richness of the data (it also occasionally causes QGIS to add the data as a table instead of a feature layer).
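To make the axis order issue concrete: WMS 1.1.1 always expects the BBOX as lon,lat, while WMS 1.3.0 combined with EPSG:4326 expects lat,lon. A client that mixes these up typically gets an empty or misplaced map. A small sketch (the endpoint and layer name are hypothetical):

```python
# WGS84 axis-order pitfall: the same area encoded for WMS 1.1.1 vs 1.3.0.
from urllib.parse import urlencode

base = "https://example.org/wms?"  # hypothetical endpoint
lon_min, lat_min, lon_max, lat_max = 3.2, 50.7, 7.3, 53.6  # roughly NL

params_111 = {  # WMS 1.1.1: BBOX is minx(lon),miny(lat),maxx(lon),maxy(lat)
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "some_layer", "SRS": "EPSG:4326",
    "BBOX": f"{lon_min},{lat_min},{lon_max},{lat_max}",
    "WIDTH": 800, "HEIGHT": 600, "FORMAT": "image/png",
}
params_130 = {  # WMS 1.3.0: EPSG:4326 is lat,lon, so the BBOX is swapped
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
    "LAYERS": "some_layer", "CRS": "EPSG:4326",
    "BBOX": f"{lat_min},{lon_min},{lat_max},{lon_max}",
    "WIDTH": 800, "HEIGHT": 600, "FORMAT": "image/png",
}
print(base + urlencode(params_111))
print(base + urlencode(params_130))
```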
ArcGIS Desktop could display some of the WMS services, but some others failed and I was not able to determine why (axis order?). For displaying WFS services in ArcGIS Desktop one needs the 'interoperability extension', which I didn't have available, so I haven't been able to test that. GeoNetwork uses OpenLayers to visualise WMS and offers a 'download wizard' to download features from a WFS. Both worked as expected on the tested services, although it would have been nice if the services had provided some other output encodings than just GML.
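For what it's worth, a WFS advertises its supported output encodings in its capabilities document, so a client (or a test) can discover whether anything besides GML is on offer. A minimal sketch, assuming a hypothetical endpoint and type name:

```python
# List the advertised GetFeature output formats and request GeoJSON if offered.
import requests
from lxml import etree

ENDPOINT = "https://example.org/wfs"  # hypothetical WFS 2.0 endpoint
NS = {"ows": "http://www.opengis.net/ows/1.1"}

caps = etree.fromstring(requests.get(ENDPOINT, params={
    "service": "WFS", "request": "GetCapabilities"}).content)

formats = caps.xpath(
    "//ows:Operation[@name='GetFeature']"
    "/ows:Parameter[@name='outputFormat']//ows:Value/text()",
    namespaces=NS)
print("advertised output formats:", formats)

if any("json" in f.lower() for f in formats):
    r = requests.get(ENDPOINT, params={
        "service": "WFS", "version": "2.0.0", "request": "GetFeature",
        "typeNames": "ns:SomeFeatureType",  # hypothetical type name
        "outputFormat": "application/json", "count": 10,
    })
    print(r.text[:200])
```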
The second project involved work on a set of Abstract Test Suites (ATSs) for INSPIRE conformance testing in the scope of MIWP-5, which will ultimately result in the selection and implementation of a new INSPIRE test suite at the European level. I presented some of our work on these ATSs to the CITE Team Engine team at the OGC TC meeting in Barcelona in March 2015. It was a good experience to share our experiences with them and to confirm that they are struggling with challenges similar to the ones we faced.

An important issue, for example, is how to manage versions of the ATS/ETS. Each ATS/ETS should be tightly bound to a specific version of a specification (in INSPIRE: the Implementing Rules and Technical Guidelines). So when a new version of the specification arrives (or a corrigendum is accepted), the ATS and ETS are updated. However, the previous ATS/ETS should remain available, because existing implementations will want to verify that they still comply with the previous version of the test implementation (to be able to tell whether a validation failure is caused by the update of the test tooling or by a bug in the implementation itself). The specification could indicate the timespan that service providers have to update their implementation to the new version (corrigendum).

Another shared issue is the linkage between specifications. For example, when testing a WFS implementation, some GML is returned; should a WFS test trigger a GML test on that returned GML, or does it only test the WFS itself? The same applies to the GML: does the GML validator only test the GML against the schema, or should it go into the geometry and test whether the geometry is topologically correct? In INSPIRE we have similar issues; for example, when a value is selected from a codelist, should the metadata validator resolve the codelist and check whether the indicated value is actually part of that codelist or thesaurus? From a user perspective: yes, yes, yes, because it helps in getting your work done. However, there are some ownership aspects: the service or metadata provider might not be the maintainer of the exposed data or the referenced codelist, and the result of such a test is also likely to change over time (data changed, service offline, etc.). That's why we suggested moving these typical tests that validate things across multiple 'domains' to a separate test suite, for which the test result is less final and more an indicator of to-do's for the service and data providers.

Overall an interesting discussion; I hope OGC and INSPIRE will (continue to) join efforts in setting up good validation tools for conformance testing. Team Engine seems a sensible technology.
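To make the codelist example concrete, such a cross-domain check could look roughly like the sketch below. Note that the URL shape and JSON layout of the INSPIRE registry are assumptions here, not a documented contract, and the dependency on a live registry is exactly why this kind of check fits better in a separate, less final test suite.

```python
# Sketch: resolve a codelist from a registry and test membership of a value.
# The ".en.json" representation and the JSON structure are assumptions.
import requests

def value_in_codelist(codelist_uri: str, value: str) -> bool:
    # Assumed: the registry offers a JSON representation at <uri>.en.json
    resp = requests.get(codelist_uri + ".en.json", timeout=10)
    resp.raise_for_status()
    # Assumed JSON layout; a real validator would handle variations and
    # registry downtime gracefully.
    items = resp.json()["codelist"]["containeditems"]
    ids = {item["value"]["id"].rsplit("/", 1)[-1] for item in items}
    return value in ids

# Hypothetical check of a metadata value against a codelist:
print(value_in_codelist(
    "http://inspire.ec.europa.eu/codelist/SpatialDataServiceType", "view"))
```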
The third project involves a research project in which we're working with a couple of partners to deliver a software framework to facilitate citizen science. The framework involves mobile apps and sensors for data creation; a synchronisation component that accepts data from sensors and phones; validation and conflation components that validate the data and relate it to other sources; publication components that expose the data to consumers (CSW/SOS/WMS/WFS/RDF); and a portal website where citizens can register and view, add, review and extract data from the citizen science surveys. As usual in this type of project, defining and implementing the machine-to-machine interfaces between these components, although standardised, still takes quite some effort. That's why at some point we decided to create integration tests that run over multiple components and test a full workflow of data through the system. Two approaches are used. One is a system test using CasperJS/PhantomJS that mimics a browser session in which buttons are automatically triggered to walk through a full workflow (the mobile app runs in an Android emulator). The other approach tests each of the interfaces individually with specific API requests, using mock objects (see the sketch below). For the latter we were looking for a platform to build on; an obvious choice would be Team Engine, since most of the interfaces in the framework are OGC. The framework uses SAML/XACML for authentication (SSO) and authorisation, which might require some fixes in Team Engine. An added value of having a test framework available for these interfaces is that it will also facilitate a foreseen discussion on citizen science best practices with some of the other 'citizen observatories' (related European research projects) at the GWF/INSPIRE conference in Lisbon next May.
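The mock-based approach could look roughly like this sketch (Python, pytest style, using the 'responses' library to stub the HTTP backend); the SOS endpoint and the canned capabilities payload are illustrative assumptions, not the project's actual test code:

```python
# Exercise one interface in isolation against a mocked backend.
import requests
import responses

ENDPOINT = "https://example.org/sos"  # hypothetical SOS endpoint

@responses.activate
def test_get_capabilities_advertises_sos_2():
    # Register a canned capabilities response for the mocked endpoint.
    responses.add(
        responses.GET, ENDPOINT,
        body='<sos:Capabilities xmlns:sos="http://www.opengis.net/sos/2.0" '
             'version="2.0.0"/>',
        content_type="application/xml", status=200)

    r = requests.get(ENDPOINT, params={
        "service": "SOS", "request": "GetCapabilities"})
    assert r.status_code == 200
    assert 'version="2.0.0"' in r.text

test_get_capabilities_advertises_sos_2()
```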