Uniform Interface

In the sense of constraint 4 of REST.
A Uniform Interface is a way to parameterise the space of hypermedia interfaces, such that the media can explain, through the client, to the reader, what information is present and what the possible controls are.
HTML is an example of a uniform interface. A web browser is able to render any website because it provides a uniform interface which web-style hypermedia can target.

idea: A Uniform Interface to generalise AV

Here is an attempt to approach it.

Dimension

A Song is 1 Dimensional
    Temporal. A player needs to be able to scrub forwards and backwards. References need to be able to point to:
      "time codes" or temporal points in the song,
      and "sections" or contiguous temporal regions in the song
A Film is 3 Dimensional:
    Temporal, as before, therefore References need access to
      Temporal Points
      Temporal Regions
    Screen Area, a two-dimensional surface on which the video is shown, References need access to
      Spatial Points and
      Spatial Regions
It should be noted that these different aspects multiply, rather than add. We need to consider spatial points extended over temporal regions and vice versa. A hypermedia referencing and transclusion system should be able to handle both.
It should also be noted that Film actually has an audio channel, and audio might have left, right, and bass channels. These should also be considered part of the internal structure.
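The points and regions above, and the way the aspects multiply, can be sketched as types. This is a hypothetical encoding (the names TimeCode, ScreenRect, FilmRef are illustrative, not from any existing system):

```typescript
// Hypothetical types for addressing into AV media.
// Points and regions on the temporal axis:
type TimeCode = { kind: "point"; seconds: number };
type TimeSpan = { kind: "span"; start: number; end: number };

// Points and regions on the screen surface:
type ScreenPoint = { kind: "point"; x: number; y: number };
type ScreenRect = { kind: "rect"; x: number; y: number; w: number; h: number };

// The aspects multiply rather than add: a reference may pair a
// spatial region with a temporal region (a rectangle held over a
// scene), a spatial point with a time span, and so on.
type FilmRef = {
  time: TimeCode | TimeSpan;
  area?: ScreenPoint | ScreenRect; // absent for audio-only references
};

// e.g. "this rectangle, from 61.5s to 74s":
const ref: FilmRef = {
  time: { kind: "span", start: 61.5, end: 74 },
  area: { kind: "rect", x: 0.1, y: 0.2, w: 0.3, h: 0.3 },
};
```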

Signature

The dimension of an artifact could be expressed in a signature. To hallucinate the signature of a film:
surface = Time x (Sound + Screen)
Now a client can read that and know: I need a time scrub bar along the bottom, I need audio output, and I need a screen. Knowing this it could, without even looking at the contents of the artefact, link comments, documents, transclusions etc. correctly.
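One way to make this concrete is to treat the signature as data the client walks before fetching any content. A minimal sketch, assuming a hypothetical Axis type where x becomes "product" and + becomes "sum":

```typescript
// Hypothetical encoding of a surface signature.
// Time x (Sound + Screen) is a product of a time axis
// with a sum of output channels.
type Axis =
  | { op: "time" }
  | { op: "sound" }
  | { op: "screen" }
  | { op: "doc" }
  | { op: "product"; of: Axis[] }
  | { op: "sum"; of: Axis[] };

const film: Axis = {
  op: "product",
  of: [{ op: "time" }, { op: "sum", of: [{ op: "sound" }, { op: "screen" }] }],
};

// From the signature alone, without looking at the artefact's
// contents, decide which controls to render:
function controls(sig: Axis): string[] {
  switch (sig.op) {
    case "time":    return ["scrub bar"];
    case "sound":   return ["audio output"];
    case "screen":  return ["screen"];
    case "doc":     return ["document pane"];
    case "product":
    case "sum":     return sig.of.flatMap(controls);
  }
}

// controls(film) → ["scrub bar", "audio output", "screen"]
```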

AV with transcripts

Consider a lecture, with a transcript, references and diagrams. An appropriate signature might be something like this:
surface = Time x (Sound + 2*Screen + Doc)
The whole lecture takes place over time, so that's out front.
The two screens represent the face shot and the presentation; each changes over time.
Throughout the talk there is a transcript of the speaker: text with additional structure, just like a normal document. It should be tied to the same time axis as the sound and the screens.
Referencing, insertion and transclusion should all be able to operate on points and regions in this combined space.
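Tying the transcript to the shared time axis could look like the following sketch, where each segment carries both text and the time span it covers, so one reference lands in the words and the moment at once (the segment data here is invented for illustration):

```typescript
// Hypothetical transcript anchored to the lecture's time axis.
type Segment = { span: { start: number; end: number }; text: string };

const transcript: Segment[] = [
  { span: { start: 0, end: 12 },
    text: "Welcome; today we cover uniform interfaces." },
  { span: { start: 12, end: 30 },
    text: "First, the notion of a surface signature." },
];

// Resolve a time code to the transcript segment it falls within,
// so a citation of the text can scrub the video, and vice versa:
function segmentAt(t: number): Segment | undefined {
  return transcript.find((s) => s.span.start <= t && t < s.span.end);
}

// segmentAt(15) → the second segment
```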

Audio Book

Audio books are currently a mess. Most of them are distributed only through Amazon's proprietary player. When they escape that prison, they are often available only as one massive mp3, or as one mp3 per chapter. One really wants to be able to write in the margins, cross-reference etc. I want audio-books as hypermedia.
The inner structure of an audio book is like that of a document: parts, chapters, sections, paragraphs. Only they take place within an audio file.
Assimilating audio in this way would make working with audio books a far richer experience.
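The document-like inner structure over an audio file can be sketched as a tree where every node carries the time span it occupies, so a margin note on "Section 1.1" resolves to a playable span (the book data and spanOf helper are hypothetical):

```typescript
// Hypothetical audio-book structure: document-style nesting
// where each node carries its time span within the audio.
type BookNode = {
  title: string;
  span: { start: number; end: number }; // seconds into the audio
  children: BookNode[];
};

const book: BookNode = {
  title: "Book",
  span: { start: 0, end: 7200 },
  children: [
    {
      title: "Chapter 1",
      span: { start: 0, end: 1800 },
      children: [
        { title: "Section 1.1", span: { start: 0, end: 600 }, children: [] },
      ],
    },
  ],
};

// A margin note references a structural path; resolving it
// yields the span of audio to play or transclude:
function spanOf(
  path: string[],
  node: BookNode,
): { start: number; end: number } | undefined {
  if (path.length === 0) return node.span;
  const next = node.children.find((c) => c.title === path[0]);
  return next && spanOf(path.slice(1), next);
}
```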

Other possibilities

    Animation and simulation. Tell me that this video doesn't want to be a hypertext document with included animations.
    Ingesting strange files from dead systems, and assimilating them into a hypertext system by writing something that interacts with the uniform interface
    Journalism is often audio, and it should be able to reference and be referenced by other videos and important documents.
    Data Visualisations