finally a bnode with a uri

Posts tagged with: getting real

Quick thoughts on semantic microblogging

Motivation and wish list for a personal semantic microblogging system
This week, the first "Microblogging Conference Europe" will take place in Hamburg. I was lucky to get a late ticket (thanks to Dirk Olbertz, who won't be able to make it). The conference will have barcamp-style tracks, and (narrow-minded as I am) I started thinking about adding SemWeb power to microblogging.

The more I use Twitter and advanced clients like TweetDeck, the more I think that (slightly enhanced) microblogs could become great interfaces to the (personalized) Semantic Web. I'm already noticing that I don't use a feed reader or delicious to discover relevant content any more, so I'm effectively saving time. At the same time, though, it's becoming obvious that Twitter can be a distracting productivity killer. So, here is the idea: take all the good things from microblogging and add enough semantics to increase productivity again. And while I'm at it, use this semantic microblog as a work/life/idea log.

A semantic microblog would simplify the creation of structured, machine-readable information, partly for personal use, and more generally to let the computer take care of certain tasks or do things I haven't thought of yet.

I have only two days left to prepare a demo and a talk, so I'd better start developing. I'll keep the rest of this post short and log my progress on Twitter instead. The app will be called "smesher". I'm starting now (or rather tomorrow morning, I have to leave in 15 mins).

Use cases

  • How much time did I spend doing support this month? (see the query sketch after this list)
  • Who are my real contacts (evidence-driven, please, why do I have to manually add followees)?
  • Show me a complete history of activities related to project A
  • How much can I bill client B? (or even better: Generate an invoice for client B)
  • What was that great Tapas Bar we went to last summer again?
  • Where did I first meet C?
  • Bookmarks ranked by number of occurrences in other tweets
  • Show me all my blog posts about topic D
  • ...
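
Most of these use cases boil down to fairly plain queries once posts carry a little structure. As a rough illustration of the first one (the support-hours question), here is a minimal Python/rdflib sketch; the vocabulary, URIs, and sample data are all made up and are not part of any existing smesher code:

# Sketch: answering "How much time did I spend doing support this month?"
# All vocabulary terms, URIs, and sample posts below are made up for illustration.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

SM = Namespace("http://example.org/smesher/terms/")
g = Graph()

# Two hypothetical posts, already lifted from machine tags like "support:hours=1.5"
for i, (day, hours) in enumerate([("2009-04-02", 1.5), ("2009-04-17", 0.5)], 1):
    post = URIRef(f"http://example.org/posts/{i}")
    g.add((post, SM.activity, Literal("support")))
    g.add((post, SM.hours, Literal(hours, datatype=XSD.decimal)))
    g.add((post, SM.date, Literal(day, datatype=XSD.date)))

q = """
PREFIX sm: <http://example.org/smesher/terms/>
SELECT (SUM(?h) AS ?total) WHERE {
  ?post sm:activity "support" ;
        sm:hours ?h ;
        sm:date ?d .
  FILTER (STRSTARTS(STR(?d), "2009-04"))  # "this month"
}
"""
for row in g.query(q):
    print(row.total)  # -> 2.0

The other use cases on the list would mostly just swap the graph pattern and the aggregate.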

Microblogs: Strengths

  • Microblogs are web-based
  • Microblogs are very easy to use ("less is more")
  • Microblogs offer a great communication channel (asynchronous, but almost instant)
  • Microblog clients are getting ubiquitous
  • Microblogs can be used as life logs
  • Microblogs can be used for note taking
  • Microblogs can be used for bookmarking
  • Microblogs can be used for announcements
  • Microblogs can accelerate software development (near-real-time feedback loop)
  • Microblog search (and the associated feeds) can be used to track interests
  • Hashtags are a simple way to annotate posts
  • A microblog can be used as an interface to bots

Some Requirements and Nice-to-haves for semantic microblogging

  • access to a post's default information (author, title, date, source)
  • support for evolving patterns (@-recipients, people mentioned, URLs mentioned, hashtags, Re-Tweets)
  • groups, or at least private notes (some posts just don't need to be on the public timeline ;)
  • complete archives
  • perhaps semantic auto-tagging
  • post-publication tags (I'll surely forget a necessary tag every now and then)
  • private tags?
  • keep the simple UI (no checkbox overload etc.)
  • support for machine tags or a similar grassroots extensibility mechanism to increase granularity without losing usability/simplicity (see the sketch after this list)
  • an API that supports user-defined and evolving structures
  • URL expander for bit.ly etc.
  • rules to create/infer/extract information from (machine) tags and existing data, maybe recursively
  • Twitter/Identi.ca tracking/relaying
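
To make the machine-tag item a bit more concrete, here is a minimal sketch of how flickr-style "namespace:predicate=value" tags in a post could be lifted into RDF triples. Both the regex convention and the vocabulary URIs are invented for illustration; this is not an existing smesher feature:

# Sketch: lifting flickr-style machine tags ("namespace:predicate=value") from a
# post into RDF statements. The regex convention and vocabulary URIs are invented.
import re
from rdflib import Graph, Literal, Namespace, URIRef

SM = Namespace("http://example.org/smesher/terms/")
MACHINE_TAG = re.compile(r"\b(\w+):(\w+)=(\S+)")

def triples_from_post(post_uri, text):
    """Yield one (post, predicate, value) triple per machine tag found in the text."""
    for ns, predicate, value in MACHINE_TAG.findall(text):
        yield URIRef(post_uri), SM[f"{ns}/{predicate}"], Literal(value)

g = Graph()
tweet = "fixed the feed importer support:hours=1.5 project:name=smesher"
for triple in triples_from_post("http://example.org/posts/42", tweet):
    g.add(triple)

print(g.serialize(format="turtle"))

A rule layer (the "create/infer/extract" item above) could then post-process such raw triples, for example mapping the hours values into a proper time-tracking vocabulary.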

Approach

  • Getting Real (UI first etc., worked great last time)
  • RDF 'n' SPARQL FTW: I don't know what the final data model is going to be, and I want an API but don't have time to code it.
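
To illustrate the second point: an RDF store doesn't care if the data model keeps changing, and the generic SPARQL interface already behaves like a (read) API. A tiny sketch with rdflib and made-up terms:

# Sketch: why "RDF 'n' SPARQL" helps while the data model is still in flux.
# New properties can be attached to existing resources at any time, without a
# schema migration, and the query interface keeps working. Terms are made up.
from rdflib import Graph, Literal, Namespace, URIRef

SM = Namespace("http://example.org/smesher/terms/")
post = URIRef("http://example.org/posts/1")

g = Graph()
g.add((post, SM.content, Literal("kicking off the smesher demo")))

# Weeks later a previously unplanned property turns up; just add it:
g.add((post, SM.client, Literal("client-b")))

# The same SPARQL interface doubles as the (read) API:
q = """
PREFIX sm: <http://example.org/smesher/terms/>
SELECT ?p ?o WHERE { <http://example.org/posts/1> ?p ?o }
"""
for row in g.query(q):
    print(row.p, row.o)

That's essentially the "I want an API but don't have time to code it" part.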

Related Work

Getting Real with RDF & SPARQL at DevX

DevX article about combining the Getting Real approach with SemWeb technologies
My "Getting Real" with RDF and SPARQL article is now available in DevX' Semantic Web zone:
"Getting Real" is an agile approach to web application development. This article explains how it can be successfully combined with the flexibility of semantic web technologies. The article is a look behind the scenes of dooit's first iteration (and an introduction to Trice, code included). The focus is not so much on the Web aspect of RDF, but rather on its ability to accelerate software development ("Data First", etc).

Any feedback is welcome, in comments here or over at the DevX site.

dooit - a live Getting Real experiment

I created an RDF app following the Getting Real approach
I've probably read Getting Real half a dozen times since the release of the free online version last year. The agile process seems to fit quite nicely with RDF-based tools (Semantic CrunchBase was the most recent proof of concept for me). I'm currently writing a DevX article about using RDF and SPARQL in combination with Getting Real and wondered about concrete numbers for such an approach. As I usually don't record hours for personal projects, I had to create a new one: a to-do list manager with the silly name "dooit".

dooit follows a lot of GR suggestions, such as "UI first", not wasting too much time on a name, accepting that less may be enough for 80% of the use cases, and letting usage patterns evolve as "just-as-good" replacements for features ("mm-dd" tags could, for example, enable calendar-like functionality).
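
As a purely hypothetical example of such a "just-as-good" replacement (this is not dooit's actual code or data), plain "mm-dd" tags could simply be filtered like due dates:

# Sketch: how plain "mm-dd" tags could stand in for a calendar feature.
# The items and the tag convention are illustrative, not dooit's actual data.
import datetime

items = [
    {"label": "submit DevX draft", "tags": ["writing", "04-20"]},
    {"label": "prepare smesher demo", "tags": ["talk", "04-17"]},
    {"label": "refactor templates", "tags": ["code"]},
]

def due_on(item, day):
    """Treat any 'mm-dd' tag that matches the given date as a due date."""
    return f"{day:%m-%d}" in item["tags"]

today = datetime.date(2009, 4, 17)
print([i["label"] for i in items if due_on(i, today)])  # -> ['prepare smesher demo']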

I started the live experiment on Friday and finished the first iteration on Saturday. Below is a Twitter log of the individual activities. I was using Trice as a web framework; otherwise I would of course have spent much more time on generating forms, implementing AJAX handlers, etc. So the numbers only reflect the project-specific effort, but that's what I was interested in.
  • (Fr 08:24) trying the "Getting Real" approach for a small RDF app
  • (Fr 10:51) idea: a siiimple to-do list with taggable items
  • (Fr 11:02) nailing down initial feature set: ~15mins: add, edit, tick off taggable to-do items
  • (Fr 11:02) finding a silly product name: ~5mins: "dooit"
  • (Fr 11:27) creating paper sketches: ~20mins (IIRC, done yesterday evening)
  • (Fr 11:42) got unreal by first spending ~30mins on a logo
  • (Fr 12:07) Setting up blank Trice instance and basic layout to help with HTML creation: ~25mins
  • (Fr 13:52) first dooit HTML mock-up and CSS stylesheet: ~90mins
  • (Fr 17:14) JavaScript/AJAX hooks for editing in place, forms work, too, but w/o data access on the server: ~3h
  • (Fr 18:12) identifying RDF terms for the data structures: ~30min
  • (Fr 18:13) gotta run. time spent so far for creating RDF from a submitted form: 20mins
  • (Sa 14:40) continuing Getting Real live experiment
  • (Sa 14:41) "URIs everywhere" is one of the main issues for agile development of rdf-based apps. Will try to auto-gen them directly from the forms..
  • (Sa 19:04) rdf infrastructure work to auto-generate RDF from forms and to auto-fill forms from RDF: ~2h
  • (Sa 19:07) functions to send form data to RDF store via SPARQL DELETE/INSERT calls: ~1h
  • (Sa 19:09) replacing mockup template sections with SPARQL-generated snippets: ~1h (CRUD and filter-by-tag now in place, just ticking off items doesn't work yet)
  • (Sa 20:09) implementing rest of initial feature set, tests, fine-tuning: ~1 h. done :)
  • (Sa 20:14) Result of Getting Real experiment: http://semsol.org/dooit Got Real in ~10-12 hours
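
For illustration, the "form data to RDF store via SPARQL DELETE/INSERT" step from the log could look roughly like the sketch below, in Python/rdflib rather than Trice/PHP; the dooit property names and URIs are made up:

# Sketch of the "form data to RDF store via SPARQL DELETE/INSERT" step from the
# log above, in Python/rdflib rather than Trice/PHP; the dooit property names
# and URIs are made up for illustration.
from rdflib import Graph, Literal, URIRef

g = Graph()

def save_item(graph, item_uri, form_data):
    """Replace an item's label and done flag with the submitted form values."""
    item = URIRef(item_uri).n3()
    label = Literal(form_data["label"]).n3()
    done = Literal(bool(form_data.get("done"))).n3()
    # Delete whatever values are currently stored, then insert the new ones,
    # in a single SPARQL update.
    graph.update(f"""
        PREFIX do: <http://example.org/dooit/terms/>
        DELETE {{ {item} do:label ?old_label . {item} do:done ?old_done }}
        INSERT {{ {item} do:label {label} . {item} do:done {done} }}
        WHERE  {{ OPTIONAL {{ {item} do:label ?old_label }}
                  OPTIONAL {{ {item} do:done ?old_done }} }}
    """)

save_item(g, "http://example.org/dooit/items/1", {"label": "finish DevX article", "done": False})
print(g.serialize(format="turtle"))

Doing the delete and the insert in one update call keeps the store from ever holding both the old and the new value.
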
I think I can call it a success so far. One point of GR is staying focused, and working from the UI towards the code helps a lot here (as does live-logging, I guess ;). But I'm not done yet. Now that I have a first running version, I still have to see whether my RDF-driven app can evolve, and whether the code is manageable and easy to change. I'm looking forward to finding that out, but my shiny new dooit list suggests finishing the DevX article first ;)
