Monthly Archives: January 2012


Nanostudio for iPad and iPhone is the best thing for portable
music-making since Bhaji’s Loops on the Palm five years ago. It gets
away from pattern-based song-generation apps like Tabletop and Rhythm,
which, although lovely and fun toys, still make hard work of
joining instrument and drum patterns into a decent track. Their history
lies in 80s sequencers and drum machines, and after the misty eyes have
cleared, we remind ourselves that we’re in a new world of fancy new touch
screens and UI and can transcend historical limitations. Along those lines,
Garage Band is very impressive, but its instrument UIs seem to count for
more with most folks than the sheer sonic potential I’m looking for.


With Nanostudio on iPad and iPhone, and Nanosync on the Mac to upload
samples and download the tracks, I’m ready to do some creative battle.
Serious creativity requires discipline, which for me means deadlines and
constraints, so my Linchpin-style plan is this: finish one short track
every week, and ship it to the appropriately-titled Nanoscope blog.

The initial rules are:

  • The track must be created and mastered on Nanostudio, any version, either
    on iPad or iPhone

  • Tracks must be no more than three minutes long. Sketches, vignettes,
    impressions, not epics.

  • Any genre, any samples, any sounds, any bpm.

  • A track must be posted each week by the end of Monday, local (Melbourne)
    time.

  • Tracks will be named after one of the newest colour schemes on kuler.

Short tracks align better with our attention-deficit culture. They also mean
I can explore more, place a number of smaller bets, and see what pays off.
And, let’s face it, if I don’t have much time, I can quickly throw any old
shit together and call it a conceptual sketch.

There might be some other arbitrary rules or random elements I come up with
over time to enhance the creative process and keep it interesting. And at
some point I’ll stop doing this. We’ll have that discussion then.

The first track is up, just to grease the wheels.

Enterprise APIs

Below are some highlights from articles on enterprise API trends from O’Reilly, Programmable Web, and ZDNet.

  • Enterprise APIs are apparently becoming mainstream as organisations open their silos of data for internal consumption.
  • Enterprise APIs need to align with the business strategy. Most enterprise APIs are now “owned” by the business, not IT.
  • There are increasing numbers of third-party API providers, such as Datafiniti, whose success depends on fostering a developer community around their API, and offering other value-added services.
  • The load on enterprise APIs is unpredictable, so the service implementation needs to be elastic.
  • REST and JSON are already the majority, with SOAP and XML declining.
  • OAuth, both 1.0 and 2.0, is the default for securing APIs, especially for so-called “three-legged” authentication scenarios. Where it competes, OpenID is on the way out.
  • One quick win for implementing internal enterprise APIs is analytics, including the tactical sort I talked about before.
  • SOA, cloud, and enterprise APIs will effectively merge as concepts, and become “the way we do things”.
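To make the REST-and-JSON-with-OAuth trend above concrete, here is a minimal sketch of how a client call against such an API is typically assembled. The base URL, resource name, parameters, and token are all invented for illustration; nothing here names a real provider.

```python
# Sketch: building a REST + JSON request secured with an OAuth 2.0
# bearer token. Endpoint, parameters, and token are hypothetical.
from urllib.parse import urlencode

def build_request(base_url, resource, params, access_token):
    """Return (url, headers) for a JSON REST call with a bearer token."""
    url = f"{base_url}/{resource}?{urlencode(params)}"
    headers = {
        "Accept": "application/json",               # JSON, not SOAP/XML
        "Authorization": f"Bearer {access_token}",  # OAuth 2.0 bearer scheme
    }
    return url, headers

url, headers = build_request(
    "https://api.example.com/v1", "customers",
    {"region": "AU", "limit": 10}, "token-123")
print(url)       # https://api.example.com/v1/customers?region=AU&limit=10
print(headers["Authorization"])  # Bearer token-123
```

The point of the sketch is how little ceremony is involved compared with a SOAP envelope: a URL, a JSON content negotiation header, and a token.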

My thoughts on some of this:

Externally-accessible enterprise APIs make customers do the work, avoiding second-guessing of the functionality customers need, and any subsequent delay in deployment. By doing so, companies also reduce the cost of doing business and increase their transparency. More strategically, an API can encourage customers to invest in building against it, increasing “stickiness”. Monitoring the use of those APIs (via analytics) can provide a significant source of aggregate and individual customer information.
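The per-customer analytics signal mentioned above can start very simply: aggregate the access log by the caller's identity. A toy sketch, with an invented log format (customer, method, path, status):

```python
# Sketch: per-customer API usage counts from access-log lines.
# The log format and customer names are made up for illustration.
from collections import Counter

log_lines = [
    "acme GET /v1/orders 200",
    "acme GET /v1/orders 200",
    "initech POST /v1/invoices 201",
    "acme GET /v1/customers 200",
]

def usage_by_customer(lines):
    """Count API calls per customer (first field of each log line)."""
    return Counter(line.split()[0] for line in lines)

print(dict(usage_by_customer(log_lines)))  # {'acme': 3, 'initech': 1}
```

Even this crude aggregate tells you which customers are building against the API at all, which is the individual-level signal the paragraph above is after.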

Among the tradeoffs of opening up enterprise data is of course data security. Another risk is business model security if, for example, substantial IP is visible through the API design and data structures.

Stickiness from implementation investment implies some amount of coupling. SOA and enterprise APIs still require developers to design with them, and generally do some amount of coding. Crucially, they require developers to bind to the API-defined data at design time unless the API is either carefully designed or very simple.

Ideally, an enterprise API should be standardised, with consistent request and response protocols that can be hard-coded across alternate providers, or dynamically discoverable by software agents, either at build or run-time. Even with standard REST approaches such as OData, dynamic or late binding without human intervention requires a level of discoverable semantic knowledge beyond a WSDL-like syntactic description. This is one of the reasons that the Semantic Web was developed, but it seems that mainstream developers are finding it overly complicated. Perhaps. However, for looser coupling and more agile use of enterprise data, automated selection and use of APIs will require a semantic “understanding”, and the significant existing semantic web work will be used and extended.
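One way to picture the late binding described above: instead of hard-coding a provider's URL, the client consults a machine-readable catalogue in which each endpoint declares the semantic concept it serves, and selects a provider by concept at run time. The catalogue structure, endpoints, and concept URIs below are all invented placeholders for terms from a shared ontology:

```python
# Sketch: run-time provider selection by declared semantic concept,
# rather than a design-time-bound URL. All URIs here are hypothetical.
CATALOGUE = [
    {"endpoint": "https://a.example.com/stock",
     "concept": "http://example.org/onto/InventoryLevel"},
    {"endpoint": "https://b.example.com/prices",
     "concept": "http://example.org/onto/ProductPrice"},
]

def discover(concept_uri, catalogue=CATALOGUE):
    """Return endpoints whose declared concept matches the one we need."""
    return [e["endpoint"] for e in catalogue if e["concept"] == concept_uri]

print(discover("http://example.org/onto/ProductPrice"))
# ['https://b.example.com/prices']
```

The hard part, of course, is not the lookup but agreeing on the ontology the `concept` field points into, which is exactly where the semantic web work applies.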

For example, a CSV file of tabular data, even if expressed in JSON or XML as a structured key-value map, can have machine-comprehensible metadata about the meaning of each column attached. The semantic web already offers the ability to describe each data field not only in terms of a meaning defined in a common ontology such as UMBEL, but also in terms of expected data formats and of relationships and dependencies between fields, using a combination of RDF, Turtle, and OWL. This does require a more formal definition of an enterprise API, but much of it could be auto-generated.
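A stripped-down sketch of that idea: per-column metadata mapping each field to a concept URI and an expected datatype, which a consumer can then check rows against mechanically. The concept URIs and field names are invented for illustration (a real version would use RDF/OWL terms from a shared ontology):

```python
# Sketch: machine-readable column metadata for key-value tabular data,
# in the spirit of RDF/OWL field descriptions. URIs are hypothetical.
COLUMNS = {
    "postcode": {"concept": "http://example.org/onto/PostalCode",
                 "datatype": str},
    "balance":  {"concept": "http://example.org/onto/AccountBalance",
                 "datatype": float},
}

def validate_row(row, columns=COLUMNS):
    """Return the names of fields that violate their declared datatype."""
    return [name for name, meta in columns.items()
            if not isinstance(row.get(name), meta["datatype"])]

print(validate_row({"postcode": "3000", "balance": 12.5}))   # []
print(validate_row({"postcode": 3000, "balance": "12.5"}))
# ['postcode', 'balance']
```

Datatype checking is the trivial end of it; the value comes from the `concept` URIs, which let software agents decide whether two providers' "balance" fields actually mean the same thing.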

I’m exploring. Feel free to agree, comment, or worse.