Thursday 21 June 2012

Conclusions from #XCRI #CourseData Development Day

A good day was spent examining the major elements of technical implementation at the XCRI-CAP development day in London yesterday. Here are my collected thoughts on the main themes.



One of the major themes I picked up on was the number of institutions still grappling with determining where they are going to source a lot of the descriptive, marketing-style information that XCRI-CAP requires. A fair few of them are in the situation where their marketing information lives in static web pages or CMS silos that aren't readily re-usable for this purpose.

The core elements demonstrated were the feed validator and feed aggregator, both nicely designed and useful for assisting with, and then publishing, the coding effort.

XCRI-CAP feed validator


The validator can be found here and it offers an unusually nice interface (compared to a lot of XML validation offerings, that is!) for validating the XML output of an XCRI-CAP feed.

Things to note include replication of error reports (i.e. if you have 500 instances of the same error then you'll get 500 listed) and the fact that, whilst for aggregation purposes the order of specific elements within a parent is irrelevant, for validation it matters. In other words, your feed won't validate through this route if your elements aren't in 'as specified' schema order. I imagine (I haven't checked the source code for the validator to confirm this) that this is because the validator just stamps the schema over the XML. Perhaps a more elegant solution would be to read the XML into an object-level entity, spit it back out as order-enforced XML and check for validation issues on that basis, but to be honest it's not exactly difficult to get the order right, just annoying to have to ;-)
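
To give a flavour of the schema-stamping approach I suspect is in play, here's a minimal Python sketch using lxml. The file names are illustrative assumptions, not anything from the validator itself; you'd point it at a local copy of the XCRI-CAP 1.2 XSD and your own feed.

```python
# Minimal sketch: validate an XCRI-CAP feed against the schema with lxml.
# 'xcri_cap_1_2.xsd' and 'my_xcri_feed.xml' are placeholder file names.
from lxml import etree

schema = etree.XMLSchema(etree.parse("xcri_cap_1_2.xsd"))
feed = etree.parse("my_xcri_feed.xml")

if schema.validate(feed):
    print("Feed is valid")
else:
    # The error log records every occurrence, so 500 instances of the
    # same mistake really will give you 500 entries, much as above.
    for error in schema.error_log:
        print(f"Line {error.line}: {error.message}")
```

Note that element order failures fall out of this naturally: an XSD sequence defines order, so a plain schema validation of the raw XML will flag out-of-order elements with no extra effort on the validator's part.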


XCRI-CAP aggregator


The aggregator can be found here. It has a 'status' page for the aggregated feeds as its front interface and a useful search facility built on top of Elasticsearch (which in itself is extremely groovy and worthy of exploration), which can also be queried directly via JSON. Again, full source code is available, which I imagine could save some third-party aggregators a job come implementation.
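
For illustration, querying an Elasticsearch-style JSON search endpoint from a script is pleasingly simple. A minimal Python sketch follows; the URL and the 'title' field are hypothetical stand-ins, so check the aggregator's documentation or source for the real endpoint and field names.

```python
# Minimal sketch: POST an Elasticsearch-style JSON query to a search endpoint.
# The URL and field name below are hypothetical placeholders.
import json
import urllib.request

query = {"query": {"match": {"title": "chemistry"}}}
request = urllib.request.Request(
    "http://example-aggregator.example/courses/_search",
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    results = json.load(response)

# Standard Elasticsearch responses put matching documents under hits.hits.
for hit in results["hits"]["hits"]:
    print(hit["_source"].get("title"))
```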

Seeing the aggregator in operation and being able to experiment with searching feeds through it is extremely useful and will doubtless assist me in my discussions with our marketing folks to determine just how much of the reasonably rich XCRI-CAP standard we want to fill out in amongst the 'preferred' and 'optional' elements.

A slight nag point here: I really don't see the value in the preferred/optional distinction. Either it's up to us whether we publish something (it's optional), or it's not (it's mandatory). Saying something is 'preferred' is inherently subjective and doesn't tell information providers why they should populate it beyond 'someone thinks it's nice to have this'. The problem is not knowing the basis for that preference or how others might interpret that preferred status. Is an external aggregator more likely to refuse to include information from feeds that don't include it? Will feeds that include that information look better or worse in these representations (rather dependent on layout choices by said external aggregators...)? Anyway, just a minor niggle and I'm sure it's probably just me!

External aggregators


Whilst on the subject of external aggregators, I posed a question to Rob Englebright (@rob_work) regarding JISC engagement with external stakeholders in terms of who is actually going to use these feeds once they're all produced. The central problem, at present, is that I am essentially asking my institution's marketing team to invest in a different approach to managing course advertising information because XCRI-CAP has been developed according to the European standards model (which is completely right, imho). However, the pay-off marketing needs to see is some tangible benefit, both in terms of reduced effort (a single point of entry broadcasting to multiple internal and external consumers) and increased availability of accurate data (if external stakeholders get our updated data as soon as it changes in our feed, then we don't have lag across multiple stakeholders whilst waiting for manual data updates).

At present there appears to be a very low level of engagement by the 'big players', such as UCAS, in respect of their take-up and approach to this data. I don't think I'm giving the game away by saying that the programme people are frustrated by this (almost as much as I am, I think!). Rob made a good point that in many ways the real benefit accrued by institutions as part of this programme relates to firming up, overhauling and wholesale improvement of the collection, storage and updating of course information across institutions, rather than to the creation of an XCRI-CAP feed, though the feed is a very useful measuring tool for JISC to determine the success of individual institutions with regard to programme outcomes.

I would agree with the first part of this, but would suggest that, at least potentially, the feed offers a way for institutions to pay lip-service to the first part whilst still achieving the second. After all, a 'feed' can be a piece of static XML; it doesn't have to be an all-singing, all-dancing, dynamic, database-driven JSON extravaganza.
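
To labour the point, a 'feed' at its most basic is just a file on a web server, regenerated whenever the source data changes. A minimal Python sketch of producing one is below; the namespaces reflect my reading of the XCRI-CAP 1.2 spec (check them against the schema before relying on them), and this is nowhere near a complete record.

```python
# Minimal sketch: emit a static XCRI-CAP catalog file.
# Namespaces are as I understand the 1.2 spec; verify against the schema.
# Institution and course names are made-up placeholders.
import xml.etree.ElementTree as ET

XCRI = "http://xcri.org/profiles/1.2/catalog"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("", XCRI)
ET.register_namespace("dc", DC)

catalog = ET.Element(f"{{{XCRI}}}catalog")
provider = ET.SubElement(catalog, f"{{{XCRI}}}provider")
ET.SubElement(provider, f"{{{DC}}}title").text = "Example University"
course = ET.SubElement(provider, f"{{{XCRI}}}course")
ET.SubElement(course, f"{{{DC}}}title").text = "BSc Example Studies"

# Write the result somewhere your web server can see it; that's a feed.
ET.ElementTree(catalog).write(
    "xcri-cap.xml", encoding="utf-8", xml_declaration=True
)
```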


Elevator pitch - XCRI-CAP mobile app


We also discussed the elevator pitch in terms of the kinds of functionality we'd like to see from the search and results interfaces. I started off in my usual annoying fashion by questioning the virtue of using a platform-targeted app in this situation. Personally I see no value in developing a 'mobile app' where you are essentially querying and mashing up blocks of data, unless:
  • The data you are using has to be sourced through the device (accelerometer stuff, etc.)
  • You want to render that data using incredible 3D graphics
  • The dataset is non-volatile and you want users to access it 'offline'
For me, none of the XCRI-CAP data fits into any of those categories. We can do location-based stuff through a web front end, and a properly implemented responsive design will cope with a much wider variety of device end-points than any individual app. This has obvious advantages when it comes to app maintenance and future enhancements, as it's write once, rather than write once, re-write, re-write again and then deploy (yes, PhoneGap etc. can help here, but they aren't a total solution, more a platform fudge).

In an environment like the one this data is being surfaced in, users are going to expect live data and will want to hop about from XCRI to other data points rapidly, not be stuck within the confines of an app, even an institutionally branded one.

This isn't to say that an HTML5 responsive web design front end for XCRI-CAP data wouldn't be useful; it absolutely would, and it would fit in perfectly under the elevator pitch remit (for me at least). It would also be far easier for individual institutions to pick up and rebrand or modify further down the line. Not many institutions have in-house expertise for modifying and supporting mobile apps across multiple platforms.


So...


A good day, a useful day and lots to take away and think about. We rounded things out with a demonstration of the Dev8D competition winner for mashed-up XCRI data. I was really impressed with the winner, which included such vital search factors as 'pubs within 1 mile' and 'minimum average temperature'!

- rob
