From Bodington Wiki


Back to TReCX



These notes have been cobbled together from various email exchanges.


It would be good to build in hooks so we can add secure messaging at a later date. Selwyn from Phosphorix has done much work in this area as part of ioNodes. They have developed ioAgents (interoperability Agents), which use VPI (Virtual Private Internet) to securely pass messages around.

They have a system based on ebXML which is part of the Learning Matrix. They plan to split off the secure messaging into its own WAR file, and they also plan to replace the infrastructure with WSRM (reliable messaging). This will be called ioAtom, and will be useful for TReCX.

Would it be an idea to build in a BPEL 'sequencer' so that we can schedule other services that get called before a message is posted, things like encryption or txt2html conversion?
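The 'sequencer' idea above could be sketched in plain Java as an ordered chain of message transforms run before posting. This is purely illustrative - a real BPEL engine would define the sequence declaratively, and the class and method names here are invented for the sketch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical 'sequencer': an ordered chain of transforms (encryption,
// txt2html conversion, etc.) applied to a message body before it is posted.
class MessageSequencer {
    private final List<UnaryOperator<String>> steps = new ArrayList<>();

    // Register a transform to run, in order, before posting.
    MessageSequencer then(UnaryOperator<String> step) {
        steps.add(step);
        return this;
    }

    // Run every registered step over the message body, in registration order.
    String process(String message) {
        for (UnaryOperator<String> step : steps) {
            message = step.apply(message);
        }
        return message;
    }
}
```

For example, `new MessageSequencer().then(String::trim).then(m -> "<p>" + m + "</p>")` would trim and then HTML-wrap a message before it is handed to the posting service.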

Alistair Young

tracking / logging / notification:

Two active (tracking, logging), one passive (notification). I.e. given a user id or opaque identifier, a VLE could ask for tracking/logging info on a user via a WS. For notification, the VLE registers its endpoint with mvnforum, and mvnforum uses WS-CallBack to notify the VLE.
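The notification half of this could look something like the following in-memory stand-in: the VLE registers an endpoint (here reduced to a plain callback rather than a real WS-CallBack endpoint URL), and the forum pushes events to every registered endpoint. All names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// In-memory stand-in for the WS-CallBack pattern: a VLE registers its
// notification endpoint, and the forum notifies every registered VLE.
class NotificationHub {
    private final List<Consumer<String>> endpoints = new ArrayList<>();

    // A VLE registers its notification endpoint.
    void register(Consumer<String> endpoint) {
        endpoints.add(endpoint);
    }

    // The forum publishes an event; every registered VLE receives it.
    void publish(String event) {
        for (Consumer<String> e : endpoints) {
            e.accept(event);
        }
    }
}
```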

Equally, a VLE could install the toolkit to let another VLE query it for tracking/logging etc., i.e. a tutor at instA uses Guanxi to let a student access resources at instB's VLE. The tutor then wants to assess the student on what they've been doing at instB's VLE. Guanxi works the levers to get tracking/logging info back.

XACML-wrap the endpoints to provide a shibbed toolkit!

Matthew and Peter C slug it out

Matthew: I'd like the tracking stuff to be as easy as possible to implement, and having to write listeners on each app that does tracking event generation seems to raise the bar for implementors.

Peter: Also configuration. If I have n apps co-operating, why should I have to configure n(n-1) listeners when I can configure n?

Matthew: I prefer the UNIX syslog style, where everything is accepted and apps send their events in themselves (possibly doing queuing and bulk transfers).

Peter: This is accepted in many large event tracking systems - a central event sink which provides storage, notification and similar services. See, for example, OpenView and other network management systems; syslog as already described; MS Operations Manager [a godsend as Windows has so many event sources and no syslog equivalent]; Nagios. It is far easier to configure; it is far easier to amend that configuration as the number of co-operating systems increases; and it is far easier to query and report on the entire system.

Now Peter C and Alistair get going

Alistair: I must say I prefer the pub/sub model as it's more efficient than continually looking in some (possibly huge) event repository for an event that may only be triggered once a day.

Peter: This could be fixed by having a meta-notification level that notifies subscribers when an event in which they're interested arrives at the event repository. Subscription should already provide a filtering mechanism to describe the events in which the subscriber is interested; this merely adds another potential filter field of the source app (or possibly not even that).
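Peter's subscription filter, with the extra source-app field he mentions, could be sketched like this. The field names and matching rules are assumptions for illustration only.

```java
// Sketch of the subscription filter described above: a subscriber states the
// event type it is interested in, optionally narrowed by source application.
class Subscription {
    final String eventType;
    final String sourceApp; // null means "any source app"

    Subscription(String eventType, String sourceApp) {
        this.eventType = eventType;
        this.sourceApp = sourceApp;
    }

    // Does an incoming event match this subscription?
    boolean matches(String type, String source) {
        return eventType.equals(type)
            && (sourceApp == null || sourceApp.equals(source));
    }
}
```

When an event of a matching type arrives at the repository, the meta-notification layer would walk the subscriptions and notify each subscriber whose filter matches.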

Alistair: Implementation of listeners is as easy or as hard as you care to make it. Ditto for installation.

Peter: Having a back history of notifications is useful for monitoring and debugging, however. It may also be relevant for applications if they can rely on it, for example by providing the user with a history of what they did when.

Alistair: If the repo stores the back history then that would complicate the interface - there would have to be a query interface, i.e. "get me all events of type...". Would it not be better for the app to store them in a format that makes sense to it? Presumably it already stores internal events?

Peter: To be honest, I don't know. It rather depends whether we're talking about app-as-monolith or app-as-part-of-cooperating-services. In the former case, the app probably already stores the events in a format that makes sense. As we move towards the latter case, I think we need to evaluate whether it *is* better for the app to store them. I'm not convinced either way as I haven't yet had the spare brainpower to think through the ramifications.

Matthew: I think you need both: a way of sending tracking events to a remote store, for those applications that don't store tracking events already (a Sink API), and a way of exposing internal tracking events (e.g. Bodington events) to an external application (a Search API). The problem with sending events to a remote store is that you lose control of them, so if, for example, you want to restrict access to the events, it is harder to do this once the events have left the originating system. The remote store should implement the Search API so as to allow reporting on collected events.
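One possible Java shape for Matthew's two APIs, with the remote store implementing both as he suggests. The interface names come from the text; the signatures and the two-field event layout are guesses.

```java
import java.util.ArrayList;
import java.util.List;

// Sink API: apps push tracking events out to a remote store.
interface EventSink {
    void accept(String type, String detail);
}

// Search API: expose stored events to external applications.
interface EventSearch {
    List<String> byType(String type);
}

// The remote store implements both, so collected events can be reported on.
class RemoteEventStore implements EventSink, EventSearch {
    private final List<String[]> events = new ArrayList<>();

    public void accept(String type, String detail) {
        events.add(new String[] { type, detail });
    }

    public List<String> byType(String type) {
        List<String> out = new ArrayList<>();
        for (String[] e : events) {
            if (e[0].equals(type)) out.add(e[1]);
        }
        return out;
    }
}
```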

Brian charges over the hill

I think this reflects an important point - events making sense to the service - creating tracking methods should be a design time pastime - these should appear in the service WSDL (or keep them internal, of course - then it has nothing to do with Tetra). Interested parties can then create the appropriate client software. This removes the need to find any over-arching XML standard format, although best practices will no doubt emerge.

Here is an example of two different kinds of tracking. First, there is the "X reads Y" event, which might be recorded appropriately in some textual format.
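A minimal textual form for the "X reads Y" event might look like this. The pipe-delimited layout and field order are illustrative only - as the text says, the designers would pin the real format down at design time.

```java
// One possible textual record for the "X reads Y" style of event:
// timestamp|read|who|what
class ReadEvent {
    static String record(String who, String what, String whenIso) {
        return whenIso + "|read|" + who + "|" + what;
    }
}
```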

Second, there is the Java event itself. In a web service that contains a Swing application, for example, it is bang-bustingly useful to simply record the Java event objects, ship them to your own machine, and clone them into another JVM running the same Swing application - and you have an exact replay of the user performance. (This might be extended to more general cases involving non-Swing methods with creative extensions of Java event classes.....)

Now there will be other different (and complex, perhaps) formats of tracking events depending on the arena... again, let the designers and subject experts design this in through the WSDL.

I still suggest that the tracking messages are sent to a JMS queue to which interested parties can subscribe (in a loose fashion, of course). The JMS server can then arrange to call a service method on any registered service. I suggest JMS because you already have all the topic queue, registration, etc. gubbins there already. You'd have to have a little adapter that feeds a JMS message to the JMS server from the HTTP/SOAP message. The service processing - a message-driven bean, even - can do what it wants on the way back. The response can be immediate or some batch process.
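The topic-subscription pattern being suggested can be shown with a plain-Java stand-in broker. A real deployment would use the `javax.jms` API against an actual JMS provider; this sketch only shows the fan-out behaviour the adapter would feed into, and all names are invented.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Plain-Java stand-in for a JMS topic: interested parties subscribe to a
// named topic, and the adapter publishes each incoming SOAP message to the
// broker, which fans it out to every subscriber of that topic.
class TopicBroker {
    private final Map<String, List<Consumer<String>>> topics = new HashMap<>();

    void subscribe(String topic, Consumer<String> listener) {
        topics.computeIfAbsent(topic, t -> new ArrayList<>()).add(listener);
    }

    void publish(String topic, String message) {
        for (Consumer<String> l : topics.getOrDefault(topic, List.of())) {
            l.accept(message);
        }
    }
}
```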


The first pub/sub we looked at was OpenSIF - well worth a play just to get your head round what pub/sub is, and the potential use cases.

The next thing we tried was BizTalk by MS - again, well worth a play.

When we developed the reliable messaging system for the SHELL project, now embedded in ioNodes, we used agents and hubs.

Agents tell the hub where to send the data, so the agent needs to know who its subscribers are, that's all. [he says]

To make ioNodes' reliable messaging [currently based on ebXML] of better service to the community, we actually want to ruthlessly refactor the main ioNode source and throw out everything that is not messaging... The code name for this refactoring is ioNodeMX, shorthand for interoperability node messaging exchange...

It's worth doing, I think, because we have had to deal with lots of stuff already in this code base, such as HTTPS and HTTP key exchange, and more importantly reliability and full traceability of messages.

ioAtom, on the other hand, doesn't use SOAP; currently we use SCP to move XML. Once again we have built reliability and full traceability into the methodology, so we took the lessons learnt from the ebXML reliability / traceability recommendations and applied them in a much cleaner and simpler way.

We want to port both ioNodeMX and ioAtom to be .war-based deployments, as we are doing now with all our new stuff. We think this is best for the community approach.

We haven't got the budget to do that plus a ref model / web service at the moment, so I am suggesting that this could be the basis of some collaboration with Adam et al.

We also started a message tracker for ioNodes; it works, but we never released it fully as there are still bugs to iron out.

We figured message tracking was a very important way to visualise what's going on with our middleware for management purposes.

This tracking may be an entirely different thing to what Adam is talking about though :)

Anyhow, the web services could be putIn / getOut, register node [institution], register subscriber [to a service], getMessageTraceLog() - that kind of thing.
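The operations listed above could be given a Java shape along these lines. The operation names are taken from the list; the signatures, and the `SimpleNode` stand-in that records a trace of everything it does, are guesses for illustration.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Guessed interface for the operations listed: message in/out, node and
// subscriber registration, and full traceability via a message trace log.
interface MessagingNode {
    void putIn(String message);               // deliver a message to the node
    String getOut();                          // collect the next outgoing message
    void registerNode(String institutionId);  // register a node [institution]
    void registerSubscriber(String serviceId);// register a subscriber [to a service]
    List<String> getMessageTraceLog();        // full traceability of messages
}

// Minimal in-memory stand-in that records every operation in the trace log.
class SimpleNode implements MessagingNode {
    private final ArrayDeque<String> queue = new ArrayDeque<>();
    private final List<String> trace = new ArrayList<>();

    public void putIn(String m) { queue.add(m); trace.add("in:" + m); }
    public String getOut() { String m = queue.poll(); trace.add("out:" + m); return m; }
    public void registerNode(String id) { trace.add("node:" + id); }
    public void registerSubscriber(String id) { trace.add("sub:" + id); }
    public List<String> getMessageTraceLog() { return trace; }
}
```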

IMO the stack for reliable and fully traceable messaging is very similar in concept to SMTP and NNTP. I had a real problem at the outset of developing ioNodes, saying things like "why are we reinventing SMTP and NNTP?"

Nevertheless we went ahead :)

The advantages with the systems we have implemented are:

1. being able to use fairly standard ports [80, 443], and thus getting over / through firewall issues
2. no spam or other messaging abuse, thus getting over mailserver filtering issues
3. full control of the where-to and where-from, thus getting over trust issues

While implementing HTTPS / SOAP we went on to set up node-to-node VPNs. This essentially opens up all ports between an agent and a hub, for example, meaning we were able to take advantage of other forms of messaging, hence ioAtom [SCP via 22]. Currently we can and do use SMTP, SOAP and SCP for reliable and traceable messaging.

We cut our design based on the ebXML recommendations...


My simple understanding of TrecX is that you wish to track stuff on the enterprise / VLE and envisage some kind of tracking web service, which may need some business orchestration element.

This is guess work... business orchestration executes the process[es], TrecX logs them, and TrecX provides a tracking web service which can be used by all sorts of other apps as part of a business process... I am having to use my imagination here; I'm guessing that you may wish to execute a number of processes in a sequence, with measures for failure or success at any given point in the sequence. Thus TrecX logs or tables may need monitoring by another business process in order to reliably complete the said process.
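The guessed-at pattern above - a sequence of processes with a success/failure measure at each point, logged for a monitor to pick up - can be sketched as follows. Everything here (class, method and step names) is invented for the sketch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Sketch of the orchestration pattern guessed at above: run named steps in
// order, log success or failure of each to the tracking log, and stop the
// sequence at the first failure so a monitoring process can pick it up.
class TrackedSequence {
    final List<String> log = new ArrayList<>();

    // Each step reports success (true) or failure (false).
    boolean run(List<String> names, List<Supplier<Boolean>> steps) {
        for (int i = 0; i < steps.size(); i++) {
            boolean ok = steps.get(i).get();
            log.add(names.get(i) + ":" + (ok ? "success" : "failure"));
            if (!ok) return false; // leave the failure on record for monitoring
        }
        return true;
    }
}
```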

Potential caveats from the guess work... You would also need to consider what constitutes success or failure, whether to use sync / async for a given scenario, and whether to push or pull data in response to success or failure.

So I need to know more really - have the assumptions confirmed - and then I can illustrate and compare the issues we have had to overcome, solve and implement against.

ioNodes were developed in response to the SHELL project's requirement to electronically track the lifelong learner's progress over a long period of time, across the WAN or the internet and via multiple institutions, all the time recording a cumulative record of learning achievements as an IMS LIP record.

The Java development side of ioNode got bogged down in security early [a classic mistake], politics [unavoidable] and all manner of distractions; then, most unfortunately, design control went 'west' [for my little girl's cancer treatment] and the reliable messaging core code became embedded with bespoke business processes - quite out of control, too many chiefs.

However, the hardware and operating system development led us into uncharted territory with all sorts of potential, which is still to be documented and presented to the community.

We found ourselves getting sucked into the world of dedicated internets and grids - very exciting to us, but with nobody to fund it... well, at least I got sucked into it. This could go on, so I'll wind up now.

In conclusion... here and now, our new design for ioNodeMX keeps business process services and the transport mechanism distinct, and thus easier to use. This may be very obvious; apologies if anyone on the list is offended by that.

On a separate thread: the other cool stuff ioNodes do is being made much simpler and distinct, and hopefully much more available to the community. We will announce to the list first, if that's OK with you all, as we are looking for some peer reviewing of alpha releases in order to go to beta.
