'Load langs.xml failed!' Message When Opening Notepad++

Unless you actually like the plain, old, boring Notepad (which basically hasn’t changed since 1985), you’re probably already aware of a great replacement for it called Notepad++.  To put it shortly, it’s an amazing little text editor with a feature list as long as my arm, including syntax highlighting for virtually every scripting language you can think of.  It also integrates into Windows Explorer, so it’s easy to right-click and edit files which are not associated directly with Notepad++.

For years, it has never let me down, and while it still continues to serve faithfully, this week the following message started to appear every time I opened it on my 64bit Windows 7 workstation:

Configurator
Load langs.xml failed!
OK

That also meant I lost my syntax highlighting.  I could get it back, but I had to manually select the type of file I was editing; normally this is automatic.

Somehow, the langs.xml had errors in it. Perhaps while exploring various Notepad++ options I accidentally made unintended changes to it.  I don’t know, but when I went to the Notepad++ installation directory, the langs.xml file had a file size of 0 bytes.

So while in the installation folder for Notepad++ I renamed langs.xml to langs.xml.bad. Then, in that same folder, I copied langs.model.xml, and renamed the copy to langs.xml.
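
If you prefer to do it from a command prompt, the equivalent steps look something like this (the install path is an assumption on my part – adjust it to wherever Notepad++ lives on your machine, and run the prompt as Administrator if needed):

cd "C:\Program Files (x86)\Notepad++"
ren langs.xml langs.xml.bad
copy langs.model.xml langs.xml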

This fixed the problem for me; hopefully it will work for you as well. You may then want to compare langs.xml to langs.xml.bad and see if there is anything legitimate you want back – but in my case it was fairly obvious that the langs.xml file was pretty useless in its 0-byte state.

MacBook Won’t Eject CD (a.k.a My Mac ate my CD)

While using a Windows VM on my MacBook today, I needed some files off a Windows install disk. I put the disk in with no problems, did what I needed to do, and then tried to eject it. You can imagine my surprise when I could hear the drive (making its really ugly crunching sound) and then – no disk.

So I pressed eject again. Nothing.

And again. Nothing. hrmm. This wasn’t looking good.

So I went back to OSX, and the disk wasn’t being detected – despite working fine in the VM. I pressed the eject button. Nothing. I logged off (thinking some app had locked the SuperDrive). Pressed eject. Nothing. Rebooted, holding the mouse button down (I’d heard this forces an eject if done on boot). Still nothing.

I was worried.

In a last-ditch attempt, I found the terminal command ‘drutil tray eject’, which made the right sounds but didn’t eject the disk. Hrmm, getting closer. I thought to myself, “I wonder if it’s stuck on something?”, and then (and I have to admit I felt like I was molesting my Mac) I stuck the tip of a plastic cable tie into the slot, poked around a bit (just enough to feel some resistance inside), and tried ‘drutil tray eject’ again in the terminal window.

The disk ejected, like a good little MacBook.
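
For reference, the relevant Terminal commands look something like this (‘drutil status’ is a related command that simply reports whether OSX can see the drive and any media in it):

drutil status
drutil tray eject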

Fixing Audio Problems in Windows 7 x64 on MacBook (Not Pro) and Boot Camp

I was fortunate enough recently to get a license of Windows 7 x64 Home Premium which I promptly installed on my (non-pro) 15″ MacBook.

The install through the OSX Boot Camp wizard went really smoothly, and wireless and most other drivers worked out of the box. However, on my early 2008 MacBook the audio driver did not work (instead, a red light constantly shone out of the audio jack). Obviously, the driver provided by Boot Camp is not the right one. From my experience using XP on my Mac, I remembered that the audio chip is made by Realtek. After lots of googling, I downloaded the driver directly from Realtek and it worked. Here is the download link. The one I downloaded is the Vista Driver (32/64 bits), Driver Version R2.14. Extract the files and run the setup.exe.

BTW, if you are looking for older versions, you can use this FTP site: ftp://202.65.194.211/pc/audio/. It is a mirror site used by Realtek. Note that the download speed is kinda slow, so be patient.

PostgreSQL Performance Optimization

Recently, I’ve been dealing with databases at work which have millions, if not BILLIONS, of records.  So as you can imagine, having Postgres running smoothly and as quickly as possible is of utmost importance.  What follows is a guide compiled from a number of sources. Obviously faster, better, bigger hardware will make the database faster, but there are often other steps you can take to get PostgreSQL working a bit smarter and a bit harder. The first place to start with Postgres optimization is the Postgres configuration. The list below provides a guide (use at your own discretion) to some of the primary settings relating to resource use. Even small tweaks can have a big impact on server performance.

max_connections. This option sets the maximum number of database back end processes to have at any one time. Use this feature to ensure that you do not launch so many back ends that you begin swapping to disk and kill the performance of all the children. Depending on your application it may be better to deny the connection entirely rather than degrade the performance of all of the other children.

shared_buffers. Editing this option is the simplest way to improve the performance of your database server. Shared buffers defines a block of memory that PostgreSQL will use to hold requests that are awaiting attention from the kernel buffer and CPU. The default value is quite low for most modern hardware and any real-world workload, and needs to be beefed up. However, unlike databases like Oracle, more is not always better. There is a threshold above which increasing this value can hurt performance.

PLEASE NOTE: PostgreSQL relies heavily on the OS to cache data files and hence does not duplicate its file caching effort. The shared_buffers parameter assumes that the OS is going to cache a lot of files, and hence it is generally set very low compared with system RAM. Even for a dataset in excess of 20GB, a setting of 128MB may be too much if you have only 1GB RAM and an aggressive-at-caching OS like Linux.

Note that on Windows (and on PostgreSQL versions before 8.1), large values for shared_buffers aren’t as effective, and you may find better results keeping it relatively low (at most around 50,000, possibly less) and using the OS cache more instead.

It’s likely you will have to increase the amount of shared memory your operating system allows to be allocated at once in order to set shared_buffers to a large value. If you set it above what’s supported, you’ll get a message like this:

IpcMemoryCreate: shmget(key=5432001, size=415776768, 03600) failed: Invalid argument

This error usually means that PostgreSQL's request for a shared memory
segment exceeded your kernel's SHMMAX parameter. You can either
reduce the request size or reconfigure the kernel with larger SHMMAX.
To reduce the request size (currently 415776768 bytes), reduce
PostgreSQL's shared_buffers parameter (currently 50000) and/or
its max_connections parameter (currently 12).
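
On Linux, the usual fix is to raise the kernel’s SHMMAX limit. A minimal sketch, run as root on a Linux system; the 512MB figure here is just an example value, sized to cover the roughly 400MB request from the error above:

# check the current limit
sysctl kernel.shmmax

# raise SHMMAX to 512MB for the running kernel
sysctl -w kernel.shmmax=536870912

# make the change permanent across reboots
echo "kernel.shmmax = 536870912" >> /etc/sysctl.conf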

effective_cache_size. This value tells PostgreSQL’s optimizer how much memory is available for caching data and helps it determine whether or not to use an index. The larger the value, the greater the likelihood an index will be used. effective_cache_size should be set to an estimate of how much memory is available for disk caching by the operating system, after taking into account what’s used by the OS itself, dedicated PostgreSQL memory, and other applications. This is a guideline for how much memory you expect to be available in the OS buffer cache, not an allocation! This value is used only by the PostgreSQL query planner to figure out whether the plans it’s considering would be expected to fit in RAM or not. If it’s set too low, indexes may not be used for executing queries the way you’d expect.

Setting effective_cache_size to 1/2 of total memory would be a normal conservative setting, and 3/4 of memory is a more aggressive but still reasonable amount. You might find a better estimate by looking at your operating system’s statistics. On UNIX-like systems, add the free+cached numbers from free or top to get an estimate. On Windows, see the “System Cache” size in the Windows Task Manager’s Performance tab. Changing this setting does not require restarting the database (a HUP is enough).
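
For example, on a Linux box you can eyeball that estimate with the free command; the numbers below are purely illustrative:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          4096       3900        196          0        180       2600

Adding the free and cached columns (196MB + 2600MB, roughly 2.7GB here) gives a rough starting point for effective_cache_size.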

work_mem. This option controls the amount of memory used for sort operations and hash tables. While you may need to increase it if you do a ton of sorting in your application, care needs to be taken. This isn’t a system-wide parameter but a per-operation one, so if a complex query has several sort operations in it, it will use multiple work_mem units of memory. Not to mention that multiple backends could be doing this at once. This can often lead your database server to swap if the value is too large. This option was previously called sort_mem in older versions of PostgreSQL.
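
Because it is per-operation, one handy trick is to raise work_mem for a single session or query rather than globally. A hedged sketch (the 64MB figure is arbitrary, the memory-unit syntax assumes PostgreSQL 8.2 or later, and the items/etag names are borrowed from the EXPLAIN example further down):

SET work_mem = '64MB';               -- only affects the current session
SELECT * FROM items ORDER BY etag;   -- this sort can now use up to 64MB before spilling to disk
RESET work_mem;                      -- back to the server default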

max_fsm_pages. This option helps to control the free space map. When something is deleted from a table it isn’t removed from the disk immediately; it is simply marked as “free” in the free space map. The space can then be reused for any new INSERTs that you do on the table. If your setup has a high rate of DELETEs and INSERTs, it may be necessary to increase this value to avoid table bloat.  It sets the maximum number of disk pages for which free space will be tracked in the shared free-space map. Adjusting it upward makes vacuum runs faster and eliminates/reduces the need to “vacuum full” or “reindex”. It should be slightly more than the total number of data pages which will be touched by updates and deletes between vacuums. It requires little memory (6 bytes per slot), so be generous when adjusting its size. When running vacuum with the “verbose” option, the DB engine advises you about the proper size.
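
To get that advice, run a vacuum with the verbose option from psql; a minimal sketch:

VACUUM VERBOSE;
-- the tail of the output reports how many free-space-map page slots are in use
-- and how many are required, which tells you whether max_fsm_pages should be raised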

fsync. This option determines if all your WAL pages are fsync()’ed to disk before a transaction is committed. Having this on is safer, but can reduce write performance. If fsync is not enabled, there is a chance of unrecoverable data corruption. Turn this off at your own risk.

commit_delay and commit_siblings. These options are used in concert to help improve performance by writing out multiple transactions that are committing at once. If there are at least commit_siblings backends active at the instant your transaction is committing, the server waits commit_delay microseconds to try to commit multiple transactions at once.

random_page_cost. Sets the estimated cost of a non-sequential page fetch. Lower it to influence the optimizer to prefer index scans over table scans.

Note that many of these options consume shared memory and it will probably be necessary to increase the amount of shared memory allowed on your system to get the most out of these options.
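
To tie the above together, here is what a starting point might look like in postgresql.conf for a dedicated box with a few GB of RAM. Every number here is an assumption to be tested against your own workload, not a recommendation from any official source, and the memory-unit syntax assumes PostgreSQL 8.2 or later (older releases take raw buffer/page counts instead):

# postgresql.conf - illustrative starting values only
max_connections = 100           # deny extra connections rather than swap
shared_buffers = 256MB          # keep modest; the OS cache does most of the work
effective_cache_size = 2GB      # planner hint only, roughly free+cached from the OS
work_mem = 16MB                 # per sort/hash operation, multiplied across backends
max_fsm_pages = 200000          # only in versions that still have FSM settings (pre-8.4)
fsync = on                      # leave on unless you can afford to lose data
random_page_cost = 3.0          # lower than the default 4.0 to favour index scans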

If you are after a more comprehensive list of Postgres’ Tuning and Performance, the PostgreSQL documentation has a great wiki on the subject.

The other place that often gets overlooked for performance enhancement is the actual database queries themselves.  I must admit I was ignorant of Postgres’ ‘EXPLAIN ANALYSE’ keywords: placed before any SQL statement, they return a very comprehensive trace of the query through the database, including specific timings, index use, etc., which can be a big eye-opener for tables, sorts or indexes that may be being used incorrectly, or are just slow.  Here is an example of EXPLAIN ANALYSE on a SQL statement on a very large database:

EXPLAIN ANALYSE SELECT items.etag, subscriptions.subscription_data
FROM items, subscriptions WHERE items.item_id = subscriptions.item_id;

returns the entire query plan, like:

"Hash Join  (cost=1.29..22.38 rows=50 width=64) (actual time=0.055..0.084 rows=21 loops=1)"
"  Hash Cond: (subscriptions.item_id = items.item_id)"
"  ->  Seq Scan on subscriptions  (cost=0.00..17.70 rows=770 width=36) (actual time=0.010..0.012 rows=21 loops=1)"
"  ->  Hash  (cost=1.13..1.13 rows=13 width=36) (actual time=0.027..0.027 rows=13 loops=1)"
"        ->  Seq Scan on items  (cost=0.00..1.13 rows=13 width=36) (actual time=0.008..0.014 rows=13 loops=1)"
"Total runtime: 0.154 ms"

Scale is the New Black

Cross-posted from the Particls blog.

Over the past two weeks, the Faraday Media development team has been hard at work migrating all our products and initiatives into a new data center.  The new data center is much better suited to Faraday Media’s technology – much easier to scale, much faster and more reliable.

As part of our efforts, we’ve finally had the opportunity to give Particls and the Engagd platform their own dedicated servers, effectively quadrupling our processing capability.  This will allow us to service our partners and customers with increased reliability and confidence.

We’ve taken the opportunity with the new servers to finally move our blogs off Blogger and onto a hosted WordPress solution, giving us far more flexibility with our blogs and their presentation.

One problem, however, after we upgraded to WordPress 2.6, was that when we changed the permalink settings (to something more tolerable than ‘?p=x’), index.php suddenly worked fine but every other page reported ‘not found’.  After several long hours Googling for the answer, there were lots of “answers” for WordPress on Apache (specifically about correct access to the .htaccess file and ensuring the correct PHP/Apache modules are installed) – but none about how to solve these issues on IIS.  It turns out that there are a number of known issues with 2.6 on IIS, which are now solved with the release of WordPress 2.6.1.

With most of our migration issues now sorted, we can confidently continue to deliver our attention and data portability solutions to the masses, secure in the knowledge that our services are scalable and our bandwidth is plentiful.

Upgrade Subversion Client on Mac OSX

As a long-time Windows user and programmer, I cannot state enough just how great Mac OSX is as a development environment.  It comes with so many tools already installed as standard ready to go (or at the very least on the OSX install disk).

Like I said, Mac OSX comes with a Subversion client out of the box – at least Leopard does. If you do not believe me, type svn --version into a terminal window and watch the output:

svn, version 1.4.4 (r25188)
compiled May 31 2008, 03:45:57

Copyright (C) 2000-2006 CollabNet.
Subversion is open source software, see http://subversion.tigris.org/
This product includes software developed by CollabNet (http://www.Collab.Net/).

The following repository access (RA) modules are available:

* ra_dav : Module for accessing a repository via WebDAV (DeltaV) protocol.
- handles 'http' scheme
- handles 'https' scheme
* ra_svn : Module for accessing a repository using the svn network protocol.
- handles 'svn' scheme
* ra_local : Module for accessing a repository on local disk.
- handles 'file' scheme

However, as you can see, the default Subversion client is quite old.  The biggest problem for me is that I use Versions, and that stops the NetBeans IDE and the command-line client from working.  But never fear! It turns out it is very easy to update it.

  1. Head over to http://svnbinaries.open.collab.net/servlets/ProjectDocumentList?folderID=164&expandFolder=164&folderID=0 and grab the correct version of Subversion for you, then download and install it.  So long as the new version is higher than the old, you can just run the installer and it will install over the top of the old version.
  2. Make sure that the new binaries are on the path before the original subversion libraries.  To do this, issue the following command in a terminal: export PATH=/opt/subversion/bin:$PATH
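
That export only lasts for the current terminal session. To make it permanent, one option (assuming you use the default bash shell) is to append the same line to your ~/.bash_profile:

echo 'export PATH=/opt/subversion/bin:$PATH' >> ~/.bash_profile
source ~/.bash_profile    # or just open a new terminal window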

If you type svn --version into a terminal window again, you will now see the version you installed.

svn, version 1.5.4 (r33841)
   compiled Oct 27 2008, 11:19:10

Copyright (C) 2000-2008 CollabNet.
Subversion is open source software, see http://subversion.tigris.org/
This product includes software developed by CollabNet (http://www.Collab.Net/).

The following repository access (RA) modules are available:

* ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
  - handles 'http' scheme
  - handles 'https' scheme
* ra_svn : Module for accessing a repository using the svn network protocol.
  - with Cyrus SASL authentication
  - handles 'svn' scheme
* ra_local : Module for accessing a repository on local disk.
  - handles 'file' scheme
* ra_serf : Module for accessing a repository via WebDAV protocol using serf.
  - handles 'http' scheme
  - handles 'https' scheme

Hope this post helps you use the new and improved subversion.

Howto Backup PostgreSQL Database Servers With the pg_dump Command

Recently I had to do a lot of PostgreSQL database administration, as I needed to move several databases onto a production server.  PostgreSQL is one of the most robust open source database servers available and, for my money, faster and generally better than MySQL. Like the MySQL database server, it provides utilities for creating backups.

Backup a database using the pg_dump command. pg_dump is a utility for backing up a PostgreSQL database; it dumps only one database at a time (replace mydb below with the name of your database):

$ pg_dump mydb | gzip -c > mydb.dump.gz

Another option is to use the pg_dumpall command. As the name suggests, it dumps (backs up) every database and preserves cluster-wide data such as users and groups. You can use it as follows:

$ pg_dumpall | gzip -c > all.dump.gz
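
Restoring is essentially the reverse. A sketch, assuming plain-format dumps like those above and a hypothetical database named mydb:

$ createdb mydb                        # only needed if the database does not already exist
$ gunzip -c mydb.dump.gz | psql mydb

# a pg_dumpall backup is replayed through psql against an existing database such as postgres
$ gunzip -c all.dump.gz | psql postgres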


Windows Server Performance on Amazon EC2

One of the trending conversations on the Web at the moment (and one that has been growing for quite some time) is the idea of cloud-based computing.  While distributed storage has come quite a long way since the conversation began, it’s only really recently that we’ve had a real choice of cloud-based computing providers.

Amazon Web Services winning at Crunchies 2009

The idea behind computing in the cloud is genuinely and extremely exciting.  Amazon Web Services, including EC2, S3 and the others, are a stroke of architectural genius.  But the problem is that we’ve been given the false impression that cloud-based computing is going to change the web.  We’re spun stories about how it’s going to radically decrease our infrastructure costs, and we’re spoon-fed fairy-tales that our scale issues are going to be as easy to fix as double-clicking an icon.

You see, the problem is this: at the end of the day you’re dealing with a virtualized environment, and it’s always slower than the real deal.

While working on a project recently, we bought into the whole “Elastic Cloud” idea as well.  We quickly learned that even though it’s relatively painless to spawn new instances, you’re still ultimately bound by the same rules as you would be with a cluster: if your code isn’t built to scale across several machines, it’s not going to.

After about 2.5 weeks of playing, tuning and perfecting the Amazon EC2 Windows instance we were running, the performance compromise was simply too great to justify its use in a production environment.  I suspect that the virtualization software being used by Amazon actually blocks processes from running in parallel (as they normally would on a physical server), since the machine had extreme difficulty running more than one thing at a time.  We also found that Apache would do busy-waits when performing PHP RESTful API calls to our other systems.  This resulted in just two concurrent users consuming 100% CPU for the entirety of their sessions.

In the end, the Windows Amazon EC2 solution was completely untenable.  It wouldn’t even have been satisfactory for development, let alone production.  So, giving up on trying to find a magical “setting”, we thought we’d scale up to a more powerful Amazon instance.  But I didn’t get far before I was casually told that the AMI (the name for an Amazon VM image) I had lovingly crafted for three days to our own purposes was not compatible with the Medium and Large instance types (since I’d used a 32bit Windows Server as the base of the AMI).  At that pricing level, constantly running the servers 24/7 for a whole month was going to cost the same as, if not more than, a similar(ish) physical machine hosted the ‘old fashioned’ way. EPIC FAIL!

In the end we did get that physical machine, and despite having less physical memory than is available through EC2, the machine is using virtually 0% CPU and is serving stuff up faster than even we’d thought it would.

Perhaps virtualization technology will improve, and perhaps Microsoft’s Azure platform will be more beneficial – but in my books, using a Windows Server machine on Amazon’s EC2 is about as much fun as putting bamboo shoots under your fingernails.  It really does feel like a wolf in sheep’s clothing.

Configuring Windows Components (Like IIS) on Amazon EC2

After getting my Amazon EC2 image created, I quickly discovered that the default image is as bare as can be.  There are other default images with SQL Server 2005 Express on them, but I preferred to use a clean Windows install customized by yours truly.

Optional Windows Server operating system components are typically added or configured using installation media. This tutorial describes how to add or configure optional Windows components within the Amazon EC2 environment.

Windows Server operating systems include many optional components. Installing all components on each Amazon EC2 Windows AMI is not practical. Instead, you can access the necessary files to configure or install components using Elastic Block Storage (EBS) Snapshots.

The following is a list of available snapshots:

  • Windows 2003 R2 Enterprise 32-bit: snap-bb10f6d2
  • Windows 2003 R2 Datacenter 32-bit: snap-8010f6e9
  • Windows 2003 R2 Enterprise 64-bit: snap-d010f6b9
  • Windows 2003 R2 Datacenter 64-bit: snap-a310f6ca

Simply create a 2GB volume from the appropriate snapshot, using whatever tool or command line you prefer, and attach it to the instance you want to configure.  It might take a few moments for the instance to detect the volume, but after that you will be able to point the Windows Components Wizard at the new volume.
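
If you use the Amazon EC2 API command-line tools, the process looks roughly like this. The volume and instance IDs below are hypothetical placeholders, the availability zone must match your instance, and device-name conventions can vary:

# create a 2GB volume from the Windows 2003 R2 Enterprise 32-bit snapshot
$ ec2-create-volume --snapshot snap-bb10f6d2 -z us-east-1a

# attach the new volume to the instance you want to configure
$ ec2-attach-volume vol-1a2b3c4d -i i-0f1e2d3c -d xvdf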

Someone Pulled the Plug

Cross-posted from the Particls blog.

Guys, sorry about the troubles lately. It seems all I have done this week is work on the one part of Touchstone I can’t stand (a.k.a. the Feed Adapter) and fight with the hosting company to get us the heck back online.

I am pleased to report that the issues should now be over.

Symptoms may have included: inability to use the invite system, inability to launch Touchstone, and inability to browse to or use the community site.

There is a silver lining, however, in that our server is getting a major upgrade and relocation to a better data centre, so hopefully these issues are now in the past.

Now if I could just get SQL Server access back, we’ll be right-as-rain.