A Little Facelift for 2009

Cross-posted from the Particls blog.

In order to celebrate our renewed focus, and to inspire all start-ups to battle through these unfortunate times, we’ve given the Particls blog a little makeover. It’s been a long while since we gave our little blog some TLC, and I think it’s deserved.

Our new header image was designed by one of our developers, Andrew, and symbolizes millions of “Particls” of data – many isolated, but then coming together, forming streams of information. I love it. What do you think?

Twitter “Track” is back – Introducing the Particls Fountain

Cross-posted from the Particls blog.

From the day we launched Particls 1.0, people have always been excited by our approach to Attention Management. And while we certainly don’t consider it to be a failure, we always felt that, as a Windows-only desktop client with some complex UI challenges, it was too difficult for many users to grasp. There were effectively three paradigm shifts (aggregate everything, rank against interests, vary interruption based on relevance), and this was too much for a lot of users to wrap their heads around – ultimately creating a ‘barrier to entry’.

We switched over to a web-based solution, cutting Particls in half. The Attention Management Engine was removed and eventually became Engagd, and the visualization and alerting engine became what we called ‘Particls 2.0’. We’ve been working on these two platforms for quite some time, but as the economy turned, faced with ever-increasing costs and minimal resources, we decided to find another way – to work with fine scalpels instead of the axes we once swung. A limitation of resources always forces companies to focus on what really matters – and we chose to use this economic downturn as an opportunity instead of allowing it to consume us.

In order to reduce complexity and scope, we’ve diverted all resources onto a new project we’ve been internally calling “Particls Fountain”.

Particls Fountain will eventually become what we wanted Particls 2.0 to be, but rather than building it from the bottom up, we’re building it from left to right. Right now it is simply a replacement for the Twitter Tracking service: you follow topics of interest you define, and Particls alerts you to Tweets about those topics.

Currently these alerts are delivered via XMPP or Direct Messages, but other mechanisms have been requested and are in the pipeline. Unfortunately, however, Direct Messages are limited by the Twitter API, so we will be bringing new channels online to compensate. If you want to get started with Particls, simply follow the instructions at http://blog.particls.com/index.php/instructions

Aside from extremely agile development and frequent, smaller releases to the service, we are also letting the community be the primary driver of development. We’ve set up a UserVoice site where great ideas are already flowing from a community of about 100 testers. This feedback is vital, and it’s encouraging to see these users vote for their favorite features. It’s quite insightful, and it clearly demonstrates that what we think is a cool feature is not always what users actually want or care about.

As a developer, I also find it extremely rewarding to mark a feature as “complete” and get immediate feedback about it. It’s great, and so far we’ve found that not only do we as a team produce code faster, but we also build better stuff than we did without it.

It’s still early days for Particls Fountain, but we really do want to make this a tool everyone will find useful, so please come try it out and give us your thoughts.  Be our bosses and tell us what to do to make it something you love.

Because we do.


Windows Server Performance on Amazon EC2

One of the trending conversations on the Web at the moment (and one that has been growing for quite some time) is the idea of cloud-based computing. While distributed storage has come quite a long way since the conversation began, it’s only really recently that we’ve had a choice of cloud-based computing providers.

Amazon Web Services winning at Crunchies 2009

The idea behind computing in the cloud is genuinely, extremely exciting. Amazon Web Services – including EC2, S3 and the others – is a stroke of architectural genius. But the problem is that we’ve been given the false impression that cloud-based computing is going to change the web. We’re spun stories about how it’s going to radically decrease our infrastructure costs, and we’re spoon-fed fairy tales that our scale issues are going to be as easy to fix as double-clicking an icon.

You see, the problem is this: at the end of the day you’re dealing with a virtualized environment – and it’s always slower than the real deal.

While working on a recent project, we bought into the whole “Elastic Cloud” idea as well. We quickly learned that even though it’s relatively painless to spawn new instances, you’re still ultimately bound by the same rules as you would be with a cluster – if your code isn’t built to scale across several machines, it’s not going to.

After about 2.5 weeks of playing with, tuning and perfecting the Amazon EC2 Windows instance we were running, the performance compromise was simply too great to justify its use in a production environment. I suspect that the virtualization software used by Amazon actually blocks processes from running in parallel (as they normally would on a physical server), since the machine had extreme difficulty running more than one thing at a time. We also found that Apache would busy-wait when performing PHP RESTful API calls to our other systems. This resulted in just two concurrent users consuming 100% CPU for the entirety of their sessions.

In the end, the Windows Amazon EC2 solution was completely untenable. It wouldn’t even have been satisfactory for development, let alone production. So, giving up on trying to find a magical “setting”, we thought we’d scale up to a more powerful Amazon instance. But I didn’t get far before I was casually told that the AMI (the name for an Amazon VM image) I had spent three days lovingly crafting for our own purposes was not compatible with the Medium and Large instance types (since I’d used 32-bit Windows Server as the base of the AMI). At that pricing level, running the servers 24/7 for a whole month was going to cost the same as, if not more than, a similar(ish) physical machine hosted the ‘old-fashioned’ way. EPIC FAIL!

In the end we did get that physical machine, and despite it having less physical memory than is available through EC2, the machine is using virtually 0% CPU and is serving stuff up faster than we’d even thought it would.

Perhaps virtualization technology will improve, and perhaps Microsoft’s Azure platform will fare better – but in my books, using a Windows Server machine on Amazon’s EC2 is about as much fun as putting bamboo shoots under your fingernails. It really does feel like a wolf in sheep’s clothing.

SBS POP3 Connector Polling Interval

The minimum polling interval you can set through the GUI is 15 minutes.
You can change this through a registry setting: ScheduleAccelerator.
Remember, this connector is only available on Small Business Server!

1. Locate and then click the following registry subkey:
“HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SmallBusinessServer\Network\POP3 Connector”

2. On the “Edit” menu, point to “New”, and then click “DWORD Value”.

3. Type “ScheduleAccelerator” (without the quotation marks) as the entry name, and then press ENTER.

4. On the “Edit” menu, click “Modify”.

5. In the “Value data” box, type the value that you want, and then click “OK”. To determine the polling interval, the value that is configured on the “Scheduling” tab in the GUI is divided by the value that you type for the ScheduleAccelerator entry.

For example, if a 15-minute interval is specified in the GUI and you set the value of the ScheduleAccelerator entry to 3, the connector will poll every five minutes.

6. Quit Registry Editor and reboot the server.
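If you’d rather not click through Registry Editor by hand, the same change can be captured in a small .reg file and imported. This is just a sketch of the steps above – the value 3 is the example divisor (a 15-minute GUI interval divided by 3 gives five-minute polling), so substitute whatever divisor you actually want:

```
Windows Registry Editor Version 5.00

; Divides the GUI polling interval. With the 15-minute GUI
; setting, a value of 3 makes the connector poll every 5 minutes.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SmallBusinessServer\Network\POP3 Connector]
"ScheduleAccelerator"=dword:00000003
```

Double-click the file (or run `regedit /s scheduleaccelerator.reg`) and then reboot the server, as in the final step.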