The Impacts of Digitisation and Convergence on Television

BitTorrent Download

Television has come a long way since its rise to mainstream popularity in the post-World War 2 era. Today, the convergence of the Internet with other digital innovations has, rather ironically, caused the same kind of industry-wide mass disruption that Television caused Radio all those years ago. However, where Television sought to replace Radio as the primary source of household entertainment, the Internet has instead empowered audiences to take control of their entertainment habits, granting them the ability to consume media on their own, often unique, terms. This ability to deeply personalize what was traditionally a static user experience has created a number of technological, social, ethical and commercial concerns, which will need to be addressed in the near future for Television to survive this most recent technological and social transition.

The most extraordinary shift in the disparity between old media and new consumption behaviors stems primarily from users discovering new means of accessing information and how heavily that information can be personalized. Traditional broadcast Television (old media) is a highly inflexible medium that can only be consumed in pre-packaged, linear ways, and it is on this content model that the entire Television industry has built its business models. Until recently, control over distribution channels was relatively easy to maintain, and this control gave commercial networks the power to monetize specific demographics. Primarily, the monetization strategies were structured around selling consumer attention (advertising to audiences).

Küng, Picard, & Towse state that “Once such content is available in digital format, it can be accessed through several devices, channels can be broken down into constituent parts, advertising can be avoided and programs can be consumed on demand, and in different locations, according to consumers’ individual preferences.” In other words: any way, any place and any time – something quite foreign to the established status quo of successful commercial Television networks. However, despite technological advances and changing audience behaviors over the past decade (favoring new and exciting content delivery mechanisms), the Television industry has been quite slow to find a place in this emerging new paradigm.

Increasingly, programs are utilizing the web, and specifically social networking (such as Twitter and Facebook), as a cost-effective means to increase audience engagement. Websites, video streaming (via catch-up services), blogs and social networks can be easily created to catalyze an audience and create ‘lock-in’, which aids in stabilizing audience numbers and potentially even increasing them through word-of-mouth. Content lifespan can also be extended through convergence. Jeremiah Zinn from MTV Networks said recently at a VideoNuze event in New York that “overlaying commentary and contests on TV reruns extends the life of that content and can make it relevant even during a second or third airing”. This clearly shows that old media is happy to converge with new technologies, provided they don’t interfere with existing monetization and distribution strategies. New monetization opportunities are beginning to be explored, but so far the Industry seems lethargic in embracing them, possibly due to the complex and sometimes delicate distribution deals currently existing between studios, producers, copyright-holders and Television networks around the world. The Film and Television Industries have learned from the music industry’s mistakes and the subsequent success of legal music download services such as iTunes, and are slowly embracing them; however, the burden of geography still prevents many users from getting what they want, when they want it.

It is this “digital distance” that is fracturing the Industry. The effect of these arbitrary geographical rules imposed on different audiences is a primary driver behind the increasing quantity of pirated Television. Some popular TV episodes are seeing as many as 5 million downloads per episode, a number that is rapidly approaching the total audience for the same episode in the US. Ernesto Van Der Sar, who created the Torrent Freak website, said that after analyzing some 400,000 torrents the “data indicated that approximately half of all the people using BitTorrent at any given point in time, were using it to download a TV-show”. This is supported by Michael Newman, who stated that in 2008 the most popular episode of Lost was downloaded over 5.7 million times versus Nielsen’s report of 13.4 million US viewers tuning in. The Industry is concerned that the increasing quantity of ‘bootlegged’ or otherwise unauthorized episodes downloaded online will only increase profit hemorrhaging, but there are also reports that the Industry has allegedly used these same illegal distribution networks as a marketing tool by deliberately leaking unaired pilots. One of the members of EZTV reportedly told Torrent Freak that it was his “understanding that many of the people that download TV shows from us are avid TV fans and will usually buy DVD boxsets of shows they like.”

Clearly, there is an argument that torrent downloading actually helps Television build stronger, deeper, more engaged audiences. Additionally, the Television network NBC recently commented that it actually makes less money on pay-per-download services than on free download sites, due to the advertising revenue gained from so many viewers, further justifying BitTorrent downloading as a viable distribution platform. Nevertheless, unauthorized downloading is publicized by traditional media as theft, whilst many downloaders argue that if recording freely broadcast media to a PVR isn’t illegal, then neither should peer-to-peer (P2P) downloading be. Proponents of Television downloading argue that P2P represents an opportunity, not a threat.

New media is something that the mainstream will soon expect from the Television Industry. New media is about breaking down the walled garden of traditional media, stripping away all geographical bias and enhancing the social relationship. People now see Television as an invested social experience; one to be shared, discussed and participated in. The concept of prime-time Television is starting to disintegrate and will soon be gone altogether, as audiences simply download or time-shift to the program they wish to watch at the time. Interestingly, only Reality Television and special events (e.g., the Olympics, sport finals) are likely to avoid this trend and be watched live, partly because of the need audiences will have to share these experiences in real-time. This latter trend has already converged into mainstream Television behavior, with people discussing live Television events as they watch, and some shows, such as the Australian ABC’s Q & A and America’s CNN, already incorporate a live audience Twitter feed into the regular broadcast.

The next generation will be the one to watch closely. They will be an entire generation who have never known Televisual media any other way, with an utterly transformed mode of consumption and interpretation. A generation of Television viewers who don’t care which network a program comes from, only that it’s right for them, at that moment, to be shared and discussed. They will expect to watch Television in a post-scarcity world and, in contrast to Television’s current status quo, expect Television to adapt its broadcasts to everyone’s own personal schedule. While the demands of unique personalization may take the Industry more time to adjust to, with audiences pulling in every personal direction, once the Industry embraces new media we will again reach equilibrium. Despite the possible short-term struggles with the ethical, commercial and technological challenges, the future looks bright for all Television audiences, because ultimately digitization enables new forms of communication, journalism and content consumption which will force us to reformulate the existing paradigm.

This post is a slightly modified version of a piece I wrote for a University assignment for the Curtin University Subject Web Media: WEB207, answering the question: What are the Impacts of Digitization and Convergence on Television?

See more from this unit.

Games: At Work, No One Knows I am a Wizard

A treant from World of Warcraft

There is still a strong social stigma attached to people who confess to regularly playing computer games in western culture. The lingering stereotype of gamers as solitary male teenagers with poor social skills persists, despite studies showing that the average gamer is 30 years old and has a better than 30% chance of being female. The fledgling industry is now breaking into the mainstream, and the rise of casual and social gaming has turned the games industry into a $39 billion per year powerhouse of entertainment. In the next 12 months, this figure is expected to balloon to $55 billion per annum, a figure that will rival the international film industry; some predict games will soon be the preferred and dominant form of entertainment.

The popularity and rise of recent casual and social gaming owes much to the phenomenal success of the Nintendo Wii games console, whose success is largely the result of its ability not only to break through the traditional image of the games industry but to transcend it entirely. The Wii made gaming accessible, making games a social experience anyone could enjoy (particularly families and the elderly) and opening up games to a new and untapped demographic. While not the sole reason, it was instrumental in the rise of casual and social gaming, which in the past 12-24 months has become a seeming tidal wave of success.

There is a deep psychology to gaming that’s yet to be fully understood. Researchers have found that games provide a “sense of freedom and connection to other[s]”, and this lets us explore ourselves, our friends, our families, and even complete strangers in ways we could never do during face-to-face interaction. Playing games, particularly online, gives us remarkable insight into other people, free from typical social constraints. For example, PlayStation’s Smash Up Derby allows users to drive classic motor cars, like the T-Bird, but also to drive them at breakneck speeds into other users.

This combination of reality and fiction is deeply stimulating. It also allows us to validate and test our moral systems, since people can be exposed to morally questionable situations that would never arise organically in the real world. Studies also suggest that games make us smarter. Educational games such as Immune Attack (presented by the Federation of American Scientists) provide mental and social benefits to players. Unfortunately, there is also a cost. Games are highly validating, in that they provide a source of fun, thrill and competitiveness, and this makes them very addictive, although there is no formal diagnosis of gaming addiction in the current medical or psychological literature. Unfortunately, the number and frequency of deaths and illnesses resulting from online game addiction continue to grow.

While social and casual gaming can clearly enrich our lives and relationships, we must be mindful of the possible problems when taken in excess.


This post is a slightly modified version of a piece I wrote for a University assignment for the Curtin University Subject Internet Studies 102/502: The Internet and Everyday Life, answering the question: What are the implications of the rise of casual and social games on the internet for online gaming and everyday life?

See more from this unit.

My Thoughts on Dating, Intimacy and Sexuality

Second Life

The Internet can be a powerful tool for everyday people to explore thoughts and emotions without inhibition. Since the Internet provides us with great anonymity, we can explore and share deep feelings and ideas without fear of judgment and retribution. This can facilitate very positive outcomes, especially for people with otherwise quite acceptable sexual feelings and desires who feel impeded and couldn’t or wouldn’t act on them in real life (as is often the case with young people exploring sex and homosexuality).

Arguably, cybersex is also perfectly safe. Cybersex provides people with a physically safe environment, since nothing physical ever occurs (other than possible masturbation, which often accompanies cybersex). However, sharing and exploring sexual feelings and desires online is accompanied by strong and intense emotions. It’s these feelings and emotions that are significant to infidelity, and therefore any sexual activity, regardless of whether it is merely flirting, seeking arousal or orgasm, could reasonably be considered betrayal by most romantic partners. Indeed, the Fortino Group reports “one-third of all divorce litigation now involves one partner’s online infidelity”.

We live in a world where the Internet is becoming a pivotal and sometimes pervasive component of our everyday lives. Our physical bodies are exposed to stimuli that transcend our own thoughts and views of the world, and we’re exposed to more than we can imagine. We can participate, or contribute, as much or as little of ourselves as we desire. Because of this, the line between the real world and the virtual world is becoming increasingly difficult to define. Since we carry ourselves into the virtual world, it has become a mere extension of our physical selves.

If we define physical acts of sexuality as foremost the emotional connection between two people sharing a sexually arousing experience, then Cybersex is just as real as intercourse. The Internet has also adapted to make Cybersex as real as possible, further blurring the line. The social game Second Life grants players great control to highly customize their game avatar, designed to be the player’s representation in the Second Life digital world. Players can then control their avatars, much like a puppeteer controls a puppet, and so can enact any activity the player can imagine. Second Life is known for player avatars being able to enact and enhance sexual activities, augmented by text chat or voice using a microphone. It’s easy to trivialize Cybersex as harmless fun, but doing so also trivializes illegal sexual activities, such as the computer depiction of adults having sex with children. While people should feel free to explore Cybersex as part of a normal and healthy sex life, normal real-world social rules and expectations still need to apply.


This post is a slightly modified version of a piece I wrote for a University assignment for the Curtin University Subject Internet Studies 102/502: The Internet and Everyday Life, answering the question: How far would a partner/spouse have to go online before it is considered cheating? Up to what point is flirting online acceptable? How ‘real’ is cybersex?

See more from this unit.

Politics: Julia Gillard is My Facebook Friend

GetUp!

Democracy literally means “government by the people”, deriving from the ancient Greek, and as a system of government it allows the citizens of a country to directly or indirectly participate in and influence the legislative process. Only a few short years ago, for a single individual to be heard required great effort, coordination and some luck. As a result, politics evolved many defenses against individuals attempting to disrupt the status quo. This meant that citizen activism was slow, cumbersome and difficult to coordinate en masse.

This is changing as the mainstream continues to adopt new Internet technologies, creating “transformative opportunities related to key public sector issues of transparency, accountability, communication and collaboration, and to promote deeper levels of civic engagement.” This has resulted in an unprecedented and transformational level of citizen participation and organization. Where citizen activism used to take days, weeks or even months to coordinate, today it can be organized and executed quite literally overnight. Getup.org.au (GetUp!) is a community advocacy group dedicated to achieving greater community participation on important issues. It takes a very strong and active role in citizen democracy, not only making suggestions but also providing specific and intricate instructions on how people can engage with a number of specific political and social agendas. The site provides many examples on its front page, and these are constantly changing with an ever-changing social landscape.

Conversely, the Queensland Government’s Get Involved initiative is more about how the general public can participate in their local communities. There are suggestions for influencing Government policies and decision-making, but the bulk of the suggestions involve passive political activities, such as donating and volunteering, and are reasonably ambiguous and nondescript. But it’s not just the underlying vagueness that is the problem with the Get Involved website.

The main issues are obviousness and timeliness: it provides dated, obvious suggestions of which most people would already be aware (e.g., volunteering at a local school). However, the Government cannot be seen to be biasing or influencing the public, which explains why the Get Involved website is quite generic in its suggestions. Even after forgiving Get Involved its politically constrained content, I think the main reason GetUp! is more successful is that it selects highly specific language, clearly designed to invoke an emotive reaction, along with specific links and activities to do something about it. This reduces social hegemony and action paralysis.

It’s indeed a microcosm of politics today: governments and politicians know they need to engage with new media; however, Social Media and the Blogosphere often cycle faster than Governments and Politicians can react. It will be interesting to see how the political machine evolves and adapts to a more open and Internet-aware public.


This post is a slightly modified version of a piece I wrote for a University assignment for the Curtin University Subject Internet Studies 102/502: The Internet and Everyday Life, answering the question: Navigate around and discuss two of the following sites in terms of the kinds of involvement they encourage. (www.pm.gov.au, www.tonyabbott.com.au, www.getinvolved.qld.gov.au or www.getup.org.au)

See more from this unit.

Adding Wiki Style Functionality to Your Rails Site Using 'acts_as_versioned'

If you need to add basic wiki style functionality to your Ruby on Rails models, there is a really easy way to get similar model versioning without having to resort to cutting the code yourself.

The acts_as_versioned ‘plugin’ has been available for quite some time, but it’s been made far better now that it has become a gem instead of an old-school plugin. The authors have gone to considerable effort to make it as painless as possible to use.

This post is designed to give you a brief overview of how to get up and running with models which ‘act as versioned’.  Because it’s the current version (at time of posting) and because it’s awesome, this walk-through assumes that you are using Rails 3, not 2.  The instructions for Rails 2 sites are similar, but you’ll need to tweak them for it to work.

First, you need to grab the gem:

sudo gem install acts_as_versioned

Next, add the dependency to the ‘Gemfile’, it doesn’t matter too much where it goes, I stuck it somewhere in the middle:

gem 'acts_as_versioned', '0.6.0'

Next, just under the ‘ActiveRecord::Base’ line in the model’s class file, instruct the class that it’s to act as a versioned model.

class Article < ActiveRecord::Base
  acts_as_versioned
end

Then, in the migration file you need to execute the model's method to create the versions table.  This is key, because the acts_as_versioned gem actually creates an additional database table to house all the previous versions of a given record.  Obviously, you also need to delete that table if the schema is taken down.  My migration now looks like:

class CreateArticles < ActiveRecord::Migration
  def self.up
    create_table :articles do |t|
      t.string :title
      t.string :body
      t.integer :user_id

      t.timestamps
    end
    Article.create_versioned_table
  end

  def self.down
    drop_table :articles
    Article.drop_versioned_table
  end
end

The key method is the

Article.create_versioned_table

which creates the version table of the model. Now, get rake to create the database:

 rake db:migrate

That's it!  It's done.  Using acts_as_versioned is simple. I'll provide some examples, where '@article' represents an instance of a model set up with 'acts_as_versioned'. To find the current version of an article, use the version property:

@article.version 

A normal ActiveRecord lookup returns the most current version anyway. To revert to a previous version, use the revert_to method on an article instance:

@article.revert_to(version_number)

You can save a previous version as the current one (just like you've done a hundred times before) using the save method. Saving a reverted article will just create a new version.
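To make the flow concrete, here's a hypothetical rails console session. It assumes the versioned Article model set up earlier; the titles and attribute values are purely illustrative:

```ruby
# Create an article, then edit it; each save produces a new version row
# in the articles versions table maintained by acts_as_versioned.
article = Article.create(:title => "First draft", :body => "Original text")
article.update_attributes(:body => "Edited text")  # now at version 2

article.revert_to(1)  # load the attributes of version 1
article.save          # saving the reverted record creates a new version
```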

To get the number of versions:

@article.versions.size

Since '@article.versions' returns an array of versions, you can do neat things like this:

History

<% for version in @article.versions.reverse %>
  Version <%= version.version %>
  <%= link_to '(revert to this version)', :action => 'revert_to_version', :version => version.id, :id => @article %>
<% end %>

Obviously, for this to work you'd need to create a 'revert_to_version' action in the appropriate controller, but you get the idea.
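For completeness, here's a minimal sketch of what that action might look like. This is an assumption about your app's structure (the controller name, flash messages and redirect are placeholders, not part of the gem); it simply reverts the record and saves it, which creates a new version as described above:

```ruby
class ArticlesController < ApplicationController
  # Hypothetical action; expects :id (the article) and :version (the
  # version to restore) from the link in the view above.
  def revert_to_version
    @article = Article.find(params[:id])
    @article.revert_to(params[:version].to_i)

    if @article.save  # saving the reverted record creates a new version
      flash[:notice] = "Article reverted."
    else
      flash[:error] = "Could not revert article."
    end
    redirect_to :action => 'show', :id => @article
  end
end
```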

acts_as_versioned is an amazing piece of work, and aside from the wiki-like functionality it gives you for very little effort, I can imagine scenarios such as audit and trace logs and "undo" features which could really benefit from this little gem.

Using Rails’ Flash Messages with AJAX Requests

Have you ever wondered how to get access to Ruby on Rails‘ flash message when performing an AJAX or RESTful web request?  You might hit yourself on the head when you discover how easy it is.  Simply append the flash message to the response headers.  You could even wrap this in a helper and use an after_filter to automatically add the header for you on every AJAX response.

class ApplicationController < ActionController::Base
  after_filter :flash_headers

  def flash_headers
    # This will discontinue execution if Rails detects that the request is
    # not an AJAX request, i.e. the header won't be added for normal requests
    return unless request.xhr?

    # Add the appropriate flash message to the header; add or remove types
    # as needed. Note that if more than one type is set, the last one wins.
    response.headers['x-flash'] = flash[:error]   unless flash[:error].blank?
    response.headers['x-flash'] = flash[:notice]  unless flash[:notice].blank?
    response.headers['x-flash'] = flash[:warning] unless flash[:warning].blank?

    # Stops the flash appearing when you next refresh the page
    flash.discard
  end
end

And then you just read the header with whatever you happen to be reading it with. For completeness sake here is an example of how to read the header in JavaScript using Prototype:

new Ajax.Request('/your/url', {
  onSuccess: function(response) {
    var flash = response.getHeader('x-flash');
    if (flash) alert(flash);
  }
});

Forget the UML Module for NetBeans!

A while ago, I wrote a blog post on how, with considerable effort, you can get a native UML NetBeans module up and running despite the NetBeans UML module being removed from the standard distribution.

I managed to get mine working, but there is a huge cost – once you close the project (or the IDE) housing the diagram, you can never reopen it.  Out of pure determination, desperation and perseverance I managed to get the diagram I needed printed and done; but I can never open it again to make adjustments.

Apparently, we’re all supposed to use SDE for NetBeans by Visual Paradigm now as the “official” replacement, but I tried it, and it was simply fail.  Proprietary and fail.

Fortunately, after taking a punt, I found a UML modelling tool which is not only more functional than the NetBeans module was, but looks better too.  It even has the ability to create code from class diagrams (which you can obviously just cut and paste into your NetBeans IDE project of choice).  It’s called ArgoUML.

ArgoUML is the leading open source UML modeling tool and includes support for all standard UML 1.4 diagrams. It runs on any Java platform and is available in ten languages.

I’ve used it a bit now, and I just love it.  I particularly like the way it can make recommendations on how to improve your diagram using its “critics” system.

Its features include:

  • All 9 UML 1.4 Diagrams supported
  • Platform Independent: Java 5+
  • Click and Go! with Java Web Start
  • Standard UML 1.4 Metamodel
  • UML Profile support with profiles provided
  • XMI Support
  • Export Diagrams as GIF, PNG, PS, EPS, PGML and SVG
  • Available in ten languages – EN, EN-GB, DE, ES, IT, RU, FR, NB, PT, ZH
  • Advanced diagram editing and Zoom
  • OCL Support
  • Forward Engineering
  • Reverse Engineering / Jar/class file Import
  • Cognitive Support
    • Reflection-in-action
      • Design Critics
      • Corrective Automations (partially implemented)
      • “To Do” List
      • User model (partially implemented)
    • Opportunistic Design
      • “To Do” List
      • Checklists
    • Comprehension and Problem Solving
      • Explorer Perspectives
      • Multiple, Overlapping Views

I haven’t yet worked out how to create object instances from my class diagrams, so I’m not sure if it just doesn’t support this or it’s user error, but in every other conceivable way it seems to be an excellent UML modelling application for virtually every OS you can name.

Optimizing Apache 2 Configuration for Smaller VPS Instances

I recently down-scaled the server which hosts this blog (one among a few). Being an Ubuntu server, it was trivial to install the LAMP stack, including Apache 2. However, I quickly discovered a problem with the default configuration on a server with less memory (in this case 512MB). The server would work just fine for a short while and then grind to a near halt, where even an SSH session becomes unusable. When I eventually got into the server, I listed the processes and found the ‘apache2’ process running several dozen times!

The default configuration for the prefork MPM (Multi-Processing Module) reads as follows:

# prefork MPM

   StartServers          16
   MinSpareServers       16
   MaxSpareServers       32
   ServerLimit           400
   MaxClients            400
   MaxRequestsPerChild   10000

I changed this to something more reasonable for a server with limited memory, such as:

# prefork MPM

   StartServers         4
   MinSpareServers      4
   MaxSpareServers      8
   MaxClients           35
   MaxRequestsPerChild  10000

I've found this has made my server much more stable, and I've not noticed any performance decrease from the new configuration.
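As a rough sanity check on those numbers, you can estimate a safe MaxClients value by dividing the memory you're willing to give Apache by the resident size of one child process. The figures below are assumptions for illustration (measure your own with `ps` and `free`); they happen to land close to the MaxClients of 35 used above:

```ruby
# Rough prefork MPM sizing for a small VPS. All figures are assumptions;
# substitute measurements from your own server.
total_ram_mb     = 512  # VPS memory
reserved_mb      = 100  # headroom for the OS, MySQL, sshd, etc.
child_process_mb = 12   # typical resident size of one apache2 child

max_clients = (total_ram_mb - reserved_mb) / child_process_mb
puts max_clients  # => 34
```

The goal is simply that worst-case Apache memory use (MaxClients times the per-child size) never pushes the box into swap, which is what causes the grinding halt described above.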