I recently attended the INSPIRE Conference in historic Edinburgh. Like last year, it was a great gathering of folks working to deliver a world-leading Spatial Data Infrastructure across Europe. However, this year's focus was different from previous years', with implementation deadlines looming large.
The focus of this year’s conference had more to do with solutions and getting things done, rather than theoretical discussions on topics like “How are we going to build the semantic web?” or “How are we going to build tools around RIF?”. While these are interesting and seductive questions, I don’t see them helping cash-starved governments meet their INSPIRE requirements.
Current Status of the INSPIRE Directive
As with any large and ambitious project there’s plenty to worry about – for example, some member states are further along toward compliance than others. Regardless, one can also see good progress being made as software vendors (both open source and traditional) have been putting great effort into improving their tools to make them “INSPIRE-Ready” – ourselves included.
The Challenges of the INSPIRE Mandate
In a nutshell, the INSPIRE Directive is all about data custodians delivering their data in standard GML datasets through OGC Web Services such as WMS and WFS. It sounds simple enough, so what is the challenge?
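To make the delivery side concrete, here is a minimal sketch of what a WFS GetFeature request looks like on the wire. The endpoint URL and the feature type name below are hypothetical placeholders, not a real INSPIRE service:

```python
# Build an illustrative WFS 2.0 GetFeature request URL.
# The endpoint and type name are made-up placeholders.
from urllib.parse import urlencode

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "ps:ProtectedSite",  # e.g. an INSPIRE Protected Sites feature type
    "count": 10,                      # limit the response to 10 features
}
url = "https://example.org/inspire/wfs?" + urlencode(params)
print(url)
```

A client issues a URL like this over HTTP and the service responds with a GML feature collection; WMS works the same way but returns rendered map images instead of data.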
Schemas, Data Models, and the Importance of Spatial ETL
One of the challenges has been making it easy for organizations to get their data into the INSPIRE-specified GML schemas. As you know, GML – being an object-oriented data model – doesn’t map well to the relational databases that are prevalent today. Mapping from the object model of the GML schemas to relational models is not a trivial challenge.
At the conference, there were a number of presentations that described this challenge, and the term “Spatial ETL” was commonly used – regardless of vendor – to describe the process of moving data from one data model to another. The good news here is that the approach to mapping data from one data model to another is known, and solutions such as FME exist.
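To illustrate the flavor of the problem, here is a toy sketch of one Spatial ETL step: flattening a nested, GML-like object into a flat relational row. The feature structure and field names are invented for illustration; real tools such as FME handle vastly more complexity (multiplicity, inheritance, geometry types, code lists):

```python
# Illustrative only: flatten a nested, GML-like feature (object model)
# into a single flat row (relational model).

# A nested feature, as it might arrive from an object-oriented GML model.
feature = {
    "id": "ps1",
    "siteName": {"text": "Holyrood Park", "language": "eng"},
    "geometry": {"type": "Point", "coords": (-3.16, 55.94)},
}

def to_relational(f):
    """Flatten one nested feature into a flat dict of column values."""
    return {
        "id": f["id"],
        "site_name": f["siteName"]["text"],
        "site_name_lang": f["siteName"]["language"],
        "x": f["geometry"]["coords"][0],
        "y": f["geometry"]["coords"][1],
    }

row = to_relational(feature)
print(row)
```

The reverse direction – assembling deeply nested GML objects back out of flat tables – is where most of the real effort in INSPIRE data harmonisation goes.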
Working with GML and XML
Another challenge of INSPIRE is just working with the GML (or XML) that is required. In the past this was difficult, but now there are tools that enable organizations to read and write the GML dictated by INSPIRE. While the prospect of reading and writing GML still fills some with fear, there are now good solutions and success stories showing how this too can be done.
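For readers who have never touched GML, here is a minimal sketch of reading a GML point using only Python’s standard library. The snippet is a bare GML 3.2 fragment invented for illustration, not an INSPIRE application schema:

```python
# Parse a small GML 3.2 point with the standard-library XML parser.
import xml.etree.ElementTree as ET

gml = """
<gml:Point xmlns:gml="http://www.opengis.net/gml/3.2" srsName="EPSG:4326">
  <gml:pos>55.94 -3.16</gml:pos>
</gml:Point>
"""

# Map the prefix to the GML namespace so find() can locate elements.
ns = {"gml": "http://www.opengis.net/gml/3.2"}
root = ET.fromstring(gml)
lat, lon = (float(v) for v in root.find("gml:pos", ns).text.split())
print(lat, lon)
```

Real INSPIRE datasets are far richer than this, of course – which is exactly why schema-aware tools matter – but the basic mechanics of reading GML are no longer a dark art.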
If you find yourself fighting with XML (or GML) reading or writing challenges, take the XML Challenge! Since I launched the challenge earlier this spring, I’ve received some very interesting submissions and will discuss them in a future post. Suffice it to say it has been fun!
[As you all know, XML is one of my passions and I am always looking for others who are excited about XML. Having said that, I am still hesitant to host a “party for those who love XML” as I am afraid that I might find myself alone! 😉 Googling “XML lover” is also interesting as it makes it clear that society is not yet ready for those two terms to be used together.]
Is INSPIRE a Success?
At this point it is too early to tell if the INSPIRE Directive, in and of itself, is a success. To me the definition of success isn’t about data delivery at all: INSPIRE will be a success when the data is being used to help organizations – both large and small – make better decisions. Like any infrastructure, INSPIRE is built in order to improve standards of living and to benefit all aspects of society.
I look forward to future INSPIRE conferences where the focus will shift from “delivery of INSPIRE compliant data services” to “consuming INSPIRE data to make better decisions”. It’s when we start seeing real ROI stories, fueled by INSPIRE data services, that we’ll know that INSPIRE is a success.
What’s your take on the current status of the INSPIRE Directive? How are you coping with its challenges? Whether you are an organization that is working to share your data through INSPIRE compliant services, or if you are looking to consume INSPIRE data in your own workflows – I would love to hear from you.
Shameless Plug – INSPIRE Webinar on July 21st
If you want to learn more about overcoming INSPIRE’s data challenges, then I’d like to invite you to join me for a webinar I’m presenting on Thursday (July 21) called “Harmonise Your Spatial Data for INSPIRE with FME”.
Don Murray
Don is the co-founder and President of Safe Software. Safe Software was originally founded doing work for the BC Government on a project sharing spatial data with the forestry industry. During that project Don and fellow co-founder Dale Lutz realized the need for a data integration platform like FME. When Don’s not raving about how much he loves XML, you can find him working with the team at Safe to take the FME product to the next level. You will also find him on the road talking with customers and partners to learn more about what new FME features they’d like to see.
Unfortunately I couldn’t make the conference, but lots of videos have been posted online. INSPIRE is a political success for sure, we can share our data 🙂 Sadly, I think the technical implementation and the metadata-driven, rather than data-driven, search will fall short of the mark and ultimately fail in providing a platform for geospatial decision making. It would have been fine in 2000, but I’m not so sure come 2020. With this in mind, it’s great to see SAFE putting energy into this space. I can see a big role for FME Server in providing the view and download services, but then more – for example, you already have the ability to stream data via a KML network link. Hopefully government agencies will go this extra mile.
Your recent blog post http://gisconsultancy.com/blog/geoweb/moving-beyond-inspire is definitely worth a read by anyone who is interested in INSPIRE. Your view lines up with my reaction last year when there was a presentation that suggested that RIF should be used to do “transformation” within INSPIRE.
Your point about “Open Source”, “Open Data”, and “Open Standards” is also an excellent one. There is a lot of confusion about that. At the end of the day the best system is the one that gets data into the hands of the decision maker. More correctly, getting the data to the decision maker isn’t enough. Data is merely the fuel that great applications need. The best system is the one that delivers data, in the format and data model they need, to the applications that decision makers actually use!
For us, as data (Open or not) “slingers”, we work to support any and all applications (Open Source or not) in all protocols/formats (Open Standards or not).