Demonstrators

This category contains 11 posts

Semantic Web meets RSS meets students

Over the summer I wrote some code as part of a project with Dr. Louise Platt and Jackie Fealey here at LJMU. We were fortunate to be able to draw on the talents of a very enthusiastic and hard-working student intern by the name of Daniel Blunt. Danny — with help from Louise and Jackie … Continue reading

The Final Package – Linked Data Approaches to OERs

In our project plan we outlined three phases of work. The preparation phase (April-June) was based on the creation of initial demonstrators and early investigations of digital literacies and reuse issues. Initial participatory design workshops with staff in the Faculty of Education, Community and Leisure (documented here) identified a desire for the type … Continue reading

Xtranormal Presentation, as an Autokitty

If you saw the presentation video made with Xtranormal a couple of postings back, you'll perhaps be interested to know we used it as a guinea pig for authoring AutoKitty semantic presentations. The screenshots are above, but if you want to see some example results, check the links below: AutoKitty Presentation Initial demo using the They … Continue reading

Prototype #7: Confused? Hopefully This Will Help!

One of the problems with describing this project is that, given it's predominantly an HTML5 web application, until a substantial portion of the UI is constructed one finds oneself waving arms around frantically and using hyperbole to explain what it should do, or could do, or what you hope it will do! And obviously arm … Continue reading

Prototype #6: Finally, Some Linked Data!

The latest version of the tool now has a basic interface for attaching facets to the video. Remember, the tool associates content (through groups of tags, known as facets) with passages in your chosen YouTube video. Until now it had been possible to mark out passages, but not to assign any facets. An example: if one had … Continue reading
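Purely as an illustration of the idea (the tool's real data model isn't shown in the post, and every name below is my own assumption), the facet-to-passage association might be modelled along these lines:

```typescript
// Illustrative sketch only: type and field names are assumptions,
// not the tool's actual schema.

/** A facet is a named group of tags describing some aspect of the content. */
interface Facet {
  label: string;    // e.g. "Key concepts"
  tags: string[];   // e.g. ["linked data", "semantic web"]
}

/** A passage is a time-bounded slice of the chosen YouTube video. */
interface Passage {
  startSeconds: number;
  endSeconds: number;
  facets: Facet[];  // facets attached to this passage
}

/** A project ties passages to a single YouTube video via its unique ID. */
interface VideoProject {
  videoId: string;  // the ID YouTube uses to identify the video
  passages: Passage[];
}

// Example: one passage from 0:30 to 1:45 with a single facet attached.
const example: VideoProject = {
  videoId: "exampleVideoId",
  passages: [
    {
      startSeconds: 30,
      endSeconds: 105,
      facets: [{ label: "Key concepts", tags: ["linked data", "semantic web"] }],
    },
  ],
};
```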

Prototype #5: Finding YouTube Videos

The current design of the tool supports only YouTube videos as the media from which resources can be hung. The first step in any new project, therefore, is to source a suitable video on the YouTube site (specifically, the tool needs to get at the unique ID used by YouTube to identify each video). … Continue reading
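The post doesn't show how that ID is obtained, but as a hedged sketch (the helper name and the URL handling below are my assumptions, not the tool's code), pulling the ID out of the common YouTube URL forms could look like this:

```typescript
// Illustrative helper (not from the tool's source): extract the unique video ID
// from common YouTube URL forms such as
//   https://www.youtube.com/watch?v=VIDEOID   and   https://youtu.be/VIDEOID
function extractYouTubeId(url: string): string | null {
  try {
    const parsed = new URL(url);
    if (parsed.hostname === "youtu.be") {
      // Short links carry the ID as the path: /VIDEOID
      return parsed.pathname.slice(1) || null;
    }
    if (parsed.hostname.endsWith("youtube.com")) {
      // Standard watch URLs carry the ID in the "v" query parameter.
      return parsed.searchParams.get("v");
    }
    return null;
  } catch {
    return null; // not a valid URL at all
  }
}

// Example usage:
// extractYouTubeId("https://www.youtube.com/watch?v=abc123")  returns "abc123"
// extractYouTubeId("https://youtu.be/abc123")                 returns "abc123"
```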

Prototype #4: More UI Fun!

This video demonstrates a more advanced version of the HTML5-based, offline editor-style user interface. Note how the scale can be changed, zooming out for an overview or in close for accurate, to-the-second editing.
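Changing the scale essentially means changing how many pixels stand for one second of video and re-mapping between screen position and time. A minimal sketch of that mapping (my own illustration, not the editor's actual code) is below:

```typescript
// Illustration only: a minimal pixels <-> seconds mapping for a zoomable timeline.
// The editor's real implementation is not shown in the post; names are assumptions.
class TimelineScale {
  constructor(private pixelsPerSecond: number) {}

  /** Zoom in or out by changing how many pixels represent one second of video. */
  setZoom(pixelsPerSecond: number): void {
    this.pixelsPerSecond = pixelsPerSecond;
  }

  /** Convert a horizontal pixel offset on the timeline to a time in seconds. */
  pixelToSeconds(px: number): number {
    return px / this.pixelsPerSecond;
  }

  /** Convert a time in seconds to a horizontal pixel offset on the timeline. */
  secondsToPixel(seconds: number): number {
    return seconds * this.pixelsPerSecond;
  }
}

// Zoomed out (overview): 1 px per second; zoomed in (to-the-second editing): 40 px per second.
const scale = new TimelineScale(1);
scale.setZoom(40);
console.log(scale.pixelToSeconds(80)); // 2 seconds
```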

Prototype #3: Early UI Test

This is a very early user interface test, intended to rough out the display layout and styles (colleagues have already objected to the green). One of the goals of the project is to create a very simple and intuitive way of creating semantic OERs, and this prototype trials ideas for achieving that goal. The … Continue reading

Prototype #2 (part two)

Couldn’t resist this extra post. The above video demonstrates the same animation facet code as the Prototype #2 posting, but wired to MIT’s standard Nobelists Exhibit (it took only about a minute to add the animation facet!). Halfway through the video I switch to the Simile Timeline view of the Exhibit, and it’s quite fun … Continue reading

Prototype #2

This second version of our Simile Exhibit animation facet has undergone a major overhaul under the hood, resulting in two significant enhancements. First, the facet can now filter either as a list or as a range, and second, the user interface is now decoupled from the rest of the code to form its own plug-in-able module. Permitting … Continue reading
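As a rough sketch of what those two enhancements amount to (the types and interfaces below are illustrative assumptions, not the actual facet's code), the filter can be either a discrete list of values or a numeric range, and the UI plugs in behind a small interface rather than being baked into the facet:

```typescript
// Sketch of the two enhancements described above. Names and interfaces are
// illustrative assumptions, not the actual Simile Exhibit plug-in code.

/** A filter is either an explicit list of allowed values... */
type ListFilter = { kind: "list"; values: string[] };
/** ...or a numeric range (e.g. a span of years). */
type RangeFilter = { kind: "range"; min: number; max: number };
type FacetFilter = ListFilter | RangeFilter;

function matches(filter: FacetFilter, value: string | number): boolean {
  if (filter.kind === "list") {
    return filter.values.includes(String(value));
  }
  const n = Number(value);
  return n >= filter.min && n <= filter.max;
}

/** The UI is decoupled: any module implementing this interface can be plugged in. */
interface FacetUI {
  render(container: HTMLElement, onChange: (filter: FacetFilter) => void): void;
}

/** The facet core only knows about the FacetUI interface, not a concrete widget. */
class AnimationFacet {
  private filter: FacetFilter = { kind: "list", values: [] };

  constructor(ui: FacetUI, container: HTMLElement) {
    ui.render(container, (f) => { this.filter = f; });
  }

  itemVisible(value: string | number): boolean {
    return matches(this.filter, value);
  }
}
```

Keeping the facet core behind the small FacetUI interface is what makes the module "plug-in-able": a different widget (say, a slider for ranges and checkboxes for lists) can be swapped in without touching the filtering logic.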

Previous Posts