The Cog

That Little Bit of Knowledge That Makes Everything Work

Fix Vuze Not Starting on Linux

Just a few days ago, Vuze on my Ubuntu-powered server simply stopped working. It would start and then crash almost immediately. In my case, the problem was that the Java Standard Widget Toolkit (SWT) library was 64-bit while my server OS is 32-bit, and that architecture mismatch caused the crash. I have no idea how the problem started, since I don’t remember Vuze updating or anything like that.

How to resolve the problem:

  1. Begin by going to http://www.eclipse.org/swt/ and downloading SWT for Linux. Make sure to grab the build that matches your OS architecture (32-bit in my case, since the mismatch was the whole problem).
  2. Find the Vuze installation folder. In my case, I extracted Vuze to my home directory, but it could be anywhere.
  3. Delete the file ‘swt.jar’ in the installation folder.
  4. Open the archive you downloaded above, extract just ‘swt.jar’ from it, and put it in the Vuze installation directory.

You should now be able to launch Vuze, if this was indeed your problem. If it still fails to start, you can open up “/home/<username>/.azureus/logs/debug_*.log” and see the stack trace for the error.
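
If you would rather script steps 3 and 4 than do them by hand, here is a minimal Python sketch. The Vuze folder and the SWT archive name are assumptions for illustration; adjust them to match where you extracted Vuze and what you actually downloaded.

    # Minimal sketch of steps 3 and 4: remove the old swt.jar and pull the new one
    # out of the downloaded SWT archive. The paths and archive name are assumptions;
    # adjust them to your own setup.
    import shutil
    import zipfile
    from pathlib import Path

    vuze_dir = Path.home() / "vuze"                                    # assumed Vuze folder
    swt_archive = Path.home() / "Downloads" / "swt-gtk-linux-x86.zip"  # assumed download name

    # Step 3: delete the old (wrong architecture) swt.jar
    old_jar = vuze_dir / "swt.jar"
    if old_jar.exists():
        old_jar.unlink()

    # Step 4: extract just swt.jar from the archive into the Vuze folder
    with zipfile.ZipFile(swt_archive) as archive:
        with archive.open("swt.jar") as src, open(vuze_dir / "swt.jar", "wb") as dst:
            shutil.copyfileobj(src, dst)

    print("Replaced", old_jar)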

GNOME 3 on Ubuntu 11.10

In keeping with the theme of Ubuntu desktop environments, I thought that I would comment on my experiences with the latest version of the GNOME interface. As soon as Ubuntu 11.10 came out on October 13th, I decided to upgrade my previous installation. Bad idea. It ended up breaking so much that I could barely get into the system in failsafe mode, so a reinstall was necessary. I kind of expected this as 11.10 was a big change in some departments, and I did have an interesting system configuration. Anyway, back to environments. I used Unity for the first week or so, and I got used to it – somewhat. There were a few things that I just couldn’t get used to, like having the launcher all the way on the other side of my desk, a place where I seldom venture.

I then decided to try out GNOME 3 (also known as GNOME Shell). It is the successor to the ‘classic’ interface found in all previous versions of Ubuntu. You can install it with just a few clicks by opening Software Centre and searching for ‘GNOME Shell’. Once installed, simply log out, click the gear in the top right corner of the login area, and select ‘GNOME’. Installing GNOME 3 does not replace Unity, and you can switch back and forth at any time. I’ll be honest, I’m not entirely sold on either Unity or GNOME 3 yet, but I see where they’re headed, and I embrace the change they are imposing. I just don’t think that either one has perfected the ‘new’ style of interface yet.

The first thing that you will notice with GNOME 3 is that almost the entire interface is hidden. Only a thin bar at the top is visible. It features an ‘Activities’ button on the left, the clock in the middle, and the accessibility, volume, networking, and user applets on the right. You will not, however, find indicators there. They can be seen by moving your cursor to the bottom right of the screen: a transparent panel emerges from the bottom, and the indicators appear on the right in an almost stacked manner. They don’t all render properly, especially ones like indicator-multiload; their icons are cropped, so irregularly shaped ones don’t turn out right. There is a special one called ‘Removable Devices’ which lets you eject and unmount media. The title of the currently active application appears on the left of the top bar, directly beside the Activities button. At this point all it seems to do is let you close the application, but I can see a jump-list type menu coming in the near future.

The top panel is flared at the edges of the screen, so that when a window is maximized the corners are rounded, similar to Apple products. Speaking of maximizing, GNOME 3 has the same drag-and-snap features as Unity and Windows 7. What is interesting is that there is only one caption button in window title bars, and that is close; there is no minimize or maximize/restore. You can always right-click the title bar, though. The close button is on the right instead of on the left as in Unity. Application menus are located under the title bar, unlike in Unity where they appear in the top bar. The whole interface aims to be semi-transparent and rounded, with very simple and clean menus. The applets for volume and network are much more simplified than in Unity. There are subtle animations everywhere; they make the interface flow, but they’re not a waste of time and don’t reduce productivity. GNOME 3 does not have support for Compiz, so there are no customizable flashy effects at the moment; however, that is likely to change as the GNOME 3 window manager, Mutter, matures.

In terms of notifications, they appear at the bottom centre of the screen in that transparent bar I mentioned earlier. You can also turn them off in the user applet. The user applet is similar to the one in Unity, except that there is no section for devices and printers, and there is only an option for suspend, with no shut down or restart. It took me a while to figure out that when you hold down the Alt key, the suspend option turns into a menu that gives you the other options.

Now for the really important part of the interface: the ‘Activities’ panel. This is the heart of the new interface. You can bring it up by clicking ‘Activities’ in the top left corner of the main monitor, or by moving the cursor into the top left corner of any monitor. I love that feature, because no matter what monitor I’m using, I can always bring it up quickly in one fluid motion. Another great touch is that on a multi-monitor setup like mine, for the height of the top panel the cursor is held on one monitor as if there were no other screen, so you can nudge into the corner even though there isn’t really a corner there, just a boundary between two monitors. It is such a small thing, but epically crucial to the interface, and it is something that Unity failed on for a long time.

The Activities panel is loosely similar to the Dash in Unity. On the left is the ‘favourites bar’, which acts like a launcher. You can pin applications to it and start them by clicking. It appears on your primary monitor, so it’s in the right location regardless of your setup (are you listening, Unity developers?). Unlike Unity’s launcher, the icons simply get smaller as their number increases, rather than using Unity’s ‘folding’ effect. There are also no indicators showing how many instances are running.

The large area in the middle has two uses, which are toggled with two buttons at the top. The primary use is to show all the windows open in your current workspace. It arranges them as large as possible, in live-updating tiles, on the monitor each window resides on. The window title is displayed below, and you can close applications right from the Activities panel. You switch applications by clicking them. You might think that bringing up the panel to switch active windows would be a pain, but I found it faster than Unity’s launcher, partly because I didn’t have to move my cursor as far, but mostly because the targets are bigger, so you can go faster without risk of selecting another application.

On the right are the workspaces. GNOME 3 creates them dynamically as necessary: just drag windows onto the workspace you want them in. It will automatically create new workspaces as you fill them, and destroy them as you close or move applications. Applications that do not appear in your favourites can be opened by typing and searching, which works exactly the same way as in Unity. If you want to browse installed applications, click the second function button at the top and all the applications appear as tiles. I find this interface better than Unity’s because it’s full screen. You can also view by category, just as in Unity. To open a new instance of an already running application, hold down the Ctrl key. You can close the Activities panel by clicking on an application or launcher, or by nudging the corner again.

At this point you can clearly see that I enjoy using GNOME 3 far more than Unity, and I use it exclusively now. I highly recommend that you try out both because they offer a similar interface, with minor changes, and it’s those minor differences that can make or break your experience with Ubuntu.

Ubuntu 11.10 Oneiric Beta First Impressions

It has been a few days since the Oneiric Beta 1 was released, and I’ve had some time to play around with it.

The major change that most will notice is that 11.10 does not ship with GNOME 2; in fact, it does not ship with GNOME at all! The only interface available is Canonical Ltd.’s own Unity interface. As I have said before, I don’t like Unity very much. Regardless, I began testing with an open mind.

There are many things that bugged me about the Unity interface when I first used it in 11.04, so many in fact that I can’t list them all here. The first problem I had was that all my favorite panel applets were no longer compatible. I used applets like System Load Applet and Hardware Sensors Applet. Over time, the two projects were ported to become ‘indicators’, meaning they can show up in the indicator area on the top panel in Unity. The System Load Indicator project is progressing well, and it looks great, almost exactly as it was before. The Hardware Sensors Indicator is not quite ready yet, but it should be soon. These ports help make Unity usable to the extent that GNOME 2 was.

Enough about panels; time to delve into Unity itself. The first improvement I noticed was that Unity is slightly faster than before, which is a plus. The idea of replacing the applications menu with special ‘lenses’ in the Dash makes Unity seem more unified (no pun intended). Another welcome change is that the old ‘Ubuntu Button’, which was previously in the panel, is now in the Launcher itself. The Launcher also now comes with some settings. It still offers very little customizability compared to other systems, but the new options are a welcome advancement. They include changing the backlighting of the icons, changing the icon size, and controlling the hide behavior. The setting that is still missing is the ability to move the Launcher to another edge of the screen. When this question was posed to Canonical founder Mark Shuttleworth upon the release of 11.04, he responded that he wanted the Launcher to be close to the Ubuntu Button. While that might have been true then, the Ubuntu Button (now called the ‘Dash Home’) is now part of the Launcher, so it could move with the Launcher. I am still wondering why they haven’t done something about this.

The biggest problem of all is multiple monitor support. 11.04 pretty much had none. I have a two monitor setup, one in front (my primary), and one to the left. The Launcher is shown on the primary monitor, on the left side. In 11.04, this put the Launcher right in the middle of the monitor setup. Since the Launcher auto-hid itself, you would have to move the cursor to the screen edge to show it. The problem was that there was no screen edge there. This meant that I had to use the keyboard super key to show the Dash, then cancel it and scramble to hover over the Launcher before it hid itself again. As you can see, this was the main reason that I never used Unity. 11.10 changes the setup a bit. It treats the monitor to the left as the primary, always. This solves the screen edge problem, but it creates another. I have to move the cursor across both screens to get to the Launcher and back, a distance of sometimes 7000 pixels. While this problem is much more tolerable than the old one, it is still an area for improvement.

Unity, just as always, has been designed for 3D acceleration. However, Canonical was forced to create a 2D version for machines without 3D acceleration, and I must say that it has major problems. It has bugs galore, and major features don’t work, including basic things like rearranging the Launcher. The 3D version isn’t without bugs either: zeitgeist-daemon crashes often or becomes a runaway process, the music lens doesn’t seem to work at all, the volume indicator has major problems when you click and drag it, and there are other assorted problems that should be ironed out by release time. I strongly suggest not installing the Beta on any computer that matters, because it will let you down. I would wait until at least the Release Candidate.

In terms of bundled software, I was surprised to see Synaptic Package Manager not installed by default. While Ubuntu Software Centre does practically the same things, Synaptic is far more powerful for administration, and I wish they had not removed it. It can still be installed through Software Centre. Speaking of Software Centre, it has had a major upgrade since 11.04. The first welcome change is the speed improvement; it is still slow as all-get-out, but better than before. The replacement of Evolution with Thunderbird is another interesting change. I have nothing against Thunderbird; however, Evolution handles the non-mail tasks far better. And in case you think I haven’t been positive about anything, 11.10 also ships with a new login screen, which is the best I have seen so far in any OS.

Overall, 11.10 is shaping up to be a much better OS than 11.04 in the Unity department, and I think it will be welcomed throughout the community despite its (major) flaws. Two of my testing machines were fresh installs, and they went just fine. I also did an upgrade from 11.04 to 11.10, and that machine became unbearably slow, with startup over 4x slower. I’m not sure if this is just a bug, or if this might be the time to reinstall.

XAML “Assembly Not Found” Error Might Be Caused By Your Network

While working on a new Windows WPF application today, I ran into a serious problem. I had made a UserControl class and wanted to use it in my application. After declaring it in the XAML of the main window, Visual Studio said that the entire assembly could not be found. I spent several hours checking the code: namespaces, assemblies, cleaning/rebuilding, etc., but nothing helped. I eventually found out that Visual Studio has problems working with projects over mapped network drives. I copied the project from my server onto my desktop and voila! The error was gone.

Noctua NF-S12BFLX Review

First of all, you might be wondering why I’m writing a review about a fan. You might think that a fan is a fan and that’s all. But let me tell you, the Noctua NF-S12BFLX is a fan like no other. There is more technology in this fan than there is on the space station. The NF-S12BFLX is a simple 120mm computer case fan that is anything but simple. Just looking at the fan itself, it doesn’t seem very special at all, but after opening the Velcro flap on the box and reading all about it, you can understand why it costs $25. For starters, it has tapered blade ends for less noise; in fact, almost all of its features are aimed at increasing airflow and reducing noise. It comes with 4 mounting screws as well as 4 rubber plugs that you can use instead, which are supposed to reduce vibration and noise while increasing lifespan. It also comes with 2 inline power adapters, which lower the voltage to reduce the speed.

Before installing the fan, I powered it up using an old PSU. At maximum speed, which is only 1200 rpm, the fan is supposed to move 100 m³/h of air while producing 18 dBA of noise. My testing revealed that mine made only 16 dB of sound, which is pretty good. I didn’t test the airflow, but compared to my other fans it was very impressive. With the Low Noise Adapter (LNA) the fan was inaudible over the PSU fan, and the same went for the Ultra LNA.

I bought the fan to replace a burned-out exhaust fan on my server. With my backup server being a huge pain-in-the-ass to start up, I decided to replace the fan while the server was running. Removing the old one was pretty simple, but getting the NF-S12BFLX in was exactly the opposite. I wanted to use the rubber plugs, but the thing is, they need to be threaded through the case hole from the outside and then through the fan. Then they need to be stretched (which makes them thinner), and when released they expand and hold the fan in place. This worked for the first 2 holes, but it was nearly impossible to grab hold of the plugs on the holes closest to the motherboard. It took some clever maneuvering with right-angle pliers to get them in place. The plugs are designed fine, but I would not recommend hot-swapping them the way I did; the job would be so much easier with no motherboard in the case. The NF-S12BFLX comes with a 3-pin to 4-pin PSU adapter, which I used.

The fan has been running for a few weeks now and has made a drastic improvement in the airflow of the case. It claims an MTBF of over 150,000 hours, which is insane for any fan. It is too early to tell whether it will really last that long, but I will monitor it and keep you posted.

Update: it’s been 6 months of 24/7 operation, and it’s still going strong.

Encrypt Your Life With GPG


With privacy becoming a greater and greater concern every day, encryption is proving to be a very viable option for securing our most sensitive data. One of my favorite encryption systems is the GNU Privacy Guard (GnuPG, or simply GPG). GPG first came into existence in 1999 and is interoperable with the very popular PGP (Pretty Good Privacy) encryption system, which has been around since 1991.

GPG uses certificates to encrypt files. A certificate is a file which contains the key material used to encrypt and decrypt files. When you create a certificate, it contains both a public and a private (also called secret) key, known as a key pair. GPG uses what is known as asymmetric encryption, meaning that one key is used to encrypt and the other to decrypt. The idea is that when a certificate is created, you can distribute the public part but keep the private part to yourself. That way, when someone else encrypts a file for you, only you can open it; not even the person who just encrypted it can. This also means you don’t have to protect the public key file.

To begin encrypting you need to acquire the required software. If you are using Ubuntu or another version of Linux, you probably already have support. If you are using Windows you can grab a suite of software known as gpg4win at www.gpg4win.org.

The following is for Windows users using gpg4win, however the instructions are the same for most software.

Begin by making a new certificate in Kleopatra by going to File>New Certificate and selecting “Create a new personal PGP key pair”. Fill in the fields and enter a strong passphrase. This passphrase is required to decrypt files, as well as to modify the key later on, and it can be changed later. Once you have a certificate, select it and, in the File menu, click Export Secret Keys. This backs up the ENTIRE certificate, both public and private parts. You should store it somewhere very safe, such as on a flash drive or CD in a bank vault. Next, go to File>Export Certificates. The resulting file is just the public key part, which you can send to anyone that you want to be able to send you encrypted files. You don’t need to keep the public key protected; you can email it or even post it somewhere online. The public key cannot be used to derive the private key, so giving it out puts nothing at risk. If you receive an exported public key from someone, you can simply click Import Certificates in the toolbar to import it and begin encrypting files for that person.
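
If you ever want to script this instead of clicking through Kleopatra, the same steps can be driven from code. Below is a rough Python sketch using the third-party python-gnupg package, which simply wraps the gpg binary; the name, email address, passphrase, and file names are placeholders I made up for illustration.

    # Rough equivalent of the Kleopatra steps above, using the third-party
    # python-gnupg package (pip install python-gnupg), which wraps the gpg binary.
    # The name, email, passphrase, and file names below are placeholders.
    import gnupg

    gpg = gnupg.GPG()  # uses your default GnuPG home directory

    # Create a new key pair (certificate)
    params = gpg.gen_key_input(name_real="Example User",
                               name_email="user@example.com",
                               passphrase="a strong passphrase")
    key = gpg.gen_key(params)

    # Back up the ENTIRE certificate (public and private parts); keep this file safe
    backup = gpg.export_keys(key.fingerprint, secret=True,
                             passphrase="a strong passphrase")
    with open("my-key-backup.asc", "w") as f:
        f.write(backup)

    # Export just the public part, which you can hand out freely
    with open("my-key-public.asc", "w") as f:
        f.write(gpg.export_keys(key.fingerprint))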

To encrypt or decrypt a file, open Kleopatra and go to File>Encrypt/Decrypt File. Select the certificates that you want to encrypt the file with, and remember to include your own so that you can decrypt the file yourself afterwards. If you have a 32-bit computer, you can simply right-click a file and select Encrypt; this context menu support is not yet available for 64-bit computers.
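
For completeness, here is what the same encrypt and decrypt operations look like in the python-gnupg sketch started above; again, the file names, recipient address, and passphrase are placeholders.

    # Encrypting and decrypting a file outside of Kleopatra, again via python-gnupg.
    # File names, the recipient address, and the passphrase are placeholders.
    import gnupg

    gpg = gnupg.GPG()

    # Encrypt for one or more recipients; include yourself so you can decrypt it later
    with open("report.pdf", "rb") as f:
        result = gpg.encrypt_file(f, recipients=["user@example.com"],
                                  output="report.pdf.gpg")
    print("encrypted ok:", result.ok)

    # Decrypt it again (requires the matching private key and its passphrase)
    with open("report.pdf.gpg", "rb") as f:
        result = gpg.decrypt_file(f, passphrase="a strong passphrase",
                                  output="report-decrypted.pdf")
    print("decrypted ok:", result.ok)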

After working with Kleopatra for a while, you will probably notice the term “signing” used a lot. In case you’re wondering, signing is the process of using your key to vouch for the integrity of a file. Signing does not prevent changes, but it makes them detectable: when a signed file is verified later on, Kleopatra will tell you if the file is intact or if it has been changed. Signing does not encrypt or alter the file at all, but you can also sign and encrypt in one step, which both guards the file against undetected modification and encrypts it. When only signing, a file with the extension .sig is created; you need to transfer the .sig file along with the original, unencrypted file so that it can be checked.
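
The detached .sig workflow can be scripted the same way. A minimal sketch, again with the third-party python-gnupg package and placeholder file names:

    # Creating and checking a detached .sig file, again via python-gnupg.
    # File names and the passphrase are placeholders.
    import gnupg

    gpg = gnupg.GPG()

    # Sign: produces report.pdf.sig and leaves the original file untouched
    with open("report.pdf", "rb") as f:
        gpg.sign_file(f, passphrase="a strong passphrase",
                      detach=True, output="report.pdf.sig")

    # Verify: needs both the .sig file and the original file
    with open("report.pdf.sig", "rb") as f:
        verified = gpg.verify_file(f, "report.pdf")
    print("file intact:", bool(verified))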

Never Install 3rd Party Drivers

Driver – every Windows user cringes at the mere sound of the word, let alone the thought of having to deal with one. Regardless, drivers are required in order for your software to interact with hardware properly (or at all). The manufacturers of computer peripherals and components usually provide the drivers for their own equipment, so people instinctively go to the manufacturer’s website to download and install a driver before attempting to use the equipment.

The big mistake people make is assuming that you need to install the driver from the manufacturer, or even install any driver at all. Certain peripherals are designed to work with generic drivers, ones that come preinstalled with the operating system. Things such as external hard drives and cameras never need a driver, and you would be a fool to install one. Most of the time, though, a device will need some sort of driver, but the 3rd party driver is usually not the best option. The following only applies to those using Windows Vista and 7 (sorry, all you outdated XP lovers). Microsoft has created a repository known as Windows Update. Most people think of it as just the place their OS updates come from, but it also contains drivers for all types of devices. The best part is that Microsoft packages them in a way that makes them not only easier to install, but usually pre-configured so that they work better with the OS. They also don’t come with any of the excess bloat that most manufacturers ship with their drivers. For example, a “driver” for my HP scanner is 250MB, and only 5MB of that is actually the driver; the rest is crappy software that you cannot opt out of installing and that most people never use.

The thing is that on most installs of Windows, you need to manually enable support for installing these drivers. To do this, open the Start menu, right-click Computer, and select Properties. In the left sidebar, select Advanced system settings and navigate to the Hardware tab. Click the Device Installation Settings button at the bottom, and choose Yes in the resulting window. Windows should now automatically search Windows Update for drivers whenever you plug a device in. So next time you get a new device such as a scanner, mouse, or anything else that isn’t super-special, try plugging it in without installing the driver and see what Windows Update can do for you. If that driver does not do enough for you, you can always install the manufacturer’s driver afterward.

Chromium Adds Support for Everything GPU Accelerated

It seems that as of Chromium 13 build 85800, there is now support under about:flags for using the GPU to accelerate all webpages, not just ones with canvases. It seems to make browsing a little faster.

Get your own snapshot of the bleeding-edge Chromium: http://build.chromium.org/f/chromium/snapshots/

Hide Chromium’s Toolbar for More Screen Space

If you are using a new build of Chromium 13, you can now go to “about:flags” and enable “Compact Navigation”. After restarting, you can right click any tab and select “Hide the Toolbar”. If you also hide the bookmarks bar, you can make Chromium take up only as much space as the title bar.

So far this is only available in the Windows builds, but it should be coming to Linux and OS X soon.

How To Universally Compare The Performance of Processors


Like me, you want to get the most out of your money when it comes to a new computer, and that usually means the most speed for the lowest price. When looking at a computer’s specs, you are usually given the speed of the CPU, or central processing unit (the computer’s main processor). That speed is usually measured in gigahertz (GHz). Most people assume that the larger that number, the faster the computer is – but that is very wrong. A 1.6 GHz processor can run circles around a 3.7 GHz processor. Of course, it is also very possible that a 3.7 GHz processor is faster than a 1.6 GHz one, so you can’t use that number alone to reliably gauge speed.

In order to understand why (and to gauge speed correctly), you need to know what that number means, and in order to understand that, you need to know a little bit about how a processor works. I’ll try to keep this as simple as possible. A processor is essentially a slab of hundreds of millions of transistors, which are essentially tiny electronic switches. They are arranged and interconnected in such a way that they can turn electrical on and off pulses (the 1s and 0s people associate with computers) into new signals by comparing ons and offs using what is known as Boolean logic; this act is known as processing. Of course it is much more complicated than that, but for this explanation, more detail isn’t necessary. The CPU works in cycles, handling one set of information at a time, and it usually takes many cycles to perform a task. The number of cycles that the processor can run through in a given period of time is known as the processor’s clock speed. This speed is measured in hertz, meaning cycles per second. For example, if a processor is rated at 2.4 GHz, it runs through 2.4 billion processing cycles per second. You would think that a processor with a higher clock speed would be faster, but that isn’t always the case.

A long time ago, clock speed was everything. You could directly compare processors by their clock speeds: the higher the clock speed, the faster the processor. This is not the case anymore, for two reasons. The first is that computers today have what are known as multi-core processors. Believe it or not, a processor can only process one thing at a time. Even though there are hundreds of programs running on your computer at once, the processor only deals with one at a time, processing a little bit from one program, then going to the next, and back again. At least that was the deal until multi-core processors were invented. They involve having multiple smaller processors, known as cores, in one package. Each core can handle one thing at a time, so a 4-core processor can do 4 things simultaneously, while a 2-core can only do 2. CPU manufacturer Intel also uses a technology known as Hyper-threading, which allows one core to do 2 things at once, so a 4-core processor with Hyper-threading can do 8 things at once. This multi-core business complicates things a little, but if you think about it, you could still use clock speed as a gauge if you did some math on the numbers.
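
As a quick illustration, the operating system reports the number of logical processors, a figure that already counts both the cores and Hyper-threading; a one-liner in Python (just an example, any language can query this) shows it:

    # The OS reports logical processors, which counts each Hyper-threaded core twice.
    # On a 4-core CPU with Hyper-threading this prints 8.
    import os
    print(os.cpu_count())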

This is where reason two comes in. Processors perform their tasks using what are known as instructions. Instructions are exactly what they sound like: jobs that the processor must perform. If you take a computer program and break it down as far as possible, it is made up of billions of small instructions which, in the grand scheme of things, allow your computer to do useful things. Processors are designed to understand a certain number of predefined instructions, collectively known as an instruction set. A long time ago, all processors had the same instruction sets, meaning that they understood the same instructions. Most instructions take one or two clock cycles to execute, so the more clock cycles in a second, the faster a task was completed. Nowadays, different processors have different instruction sets, making some processors more efficient than others. For example, if I asked a toddler to draw a rectangle on a piece of paper, and he didn’t know what a rectangle was but did know what a line was, I would have to tell him to draw a line for the top, left, bottom, and right. Telling him to draw each line is one instruction, so in the end I had to give him 4 instructions to draw a rectangle. If I asked a different kid who did know how to draw a rectangle, I could simply tell him to draw a rectangle: one instruction. Assuming that they can both execute one instruction in the same amount of time (the same clock speed), the second kid would be done in a quarter of the time. This is why newer processors are much faster than older ones with roughly the same clock speeds. Only if two processors are from exactly the same family does a faster clock speed reliably mean a truly faster processor.
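
To put rough numbers on the rectangle example, here is a small back-of-the-envelope calculation with two made-up processors: a 3.7 GHz one that needs four instructions for the task (the ‘four lines’) and a 1.6 GHz one that needs only one (the ‘rectangle’). The instruction counts are invented purely for illustration.

    # Back-of-the-envelope illustration of why clock speed alone is misleading.
    # Both processors perform the same task; the instruction counts are made up.
    def seconds_for_task(instructions, clock_hz, cycles_per_instruction=1):
        return instructions * cycles_per_instruction / clock_hz

    old_cpu = seconds_for_task(instructions=4_000_000_000, clock_hz=3.7e9)  # "draws four lines"
    new_cpu = seconds_for_task(instructions=1_000_000_000, clock_hz=1.6e9)  # "draws a rectangle"

    print(f"3.7 GHz processor: {old_cpu:.2f} s")  # about 1.08 s
    print(f"1.6 GHz processor: {new_cpu:.2f} s")  # 0.62 s, faster despite the lower clock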

So now that instruction sets have made it impossible to use clock speed as a measure of overall performance, how do you measure it? The answer is a unit known as flops, which stands for Floating Point Operations Per Second: the number of mathematical calculations involving a decimal point that can be performed in one second. Modern computer processors are measured in gigaflops, or Gflops. This figure can be applied to any processor of any age, and it accurately reflects the processor’s performance. The flops value of any processor can be compared to that of any other processor, and the larger one is always the more powerful. There is only one catch – you can’t find this value on any spec sheet. You have to determine it yourself. The best way is with a benchmark known as Linpack. There are versions available for all platforms (Windows, Linux, OS X), and the test is simple to run. The reason processors don’t have a flops value printed on them is that the final number depends on the rest of the computer: processors rely on other components in order to function, and a flops result will reflect those components as well. This makes flops a great gauge for determining overall system performance (to a degree). A great application for Windows users is LinX, a graphical front-end for Linpack.
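
If you just want a ballpark figure without running Linpack, the theoretical peak can be estimated from the core count, the clock speed, and how many floating point operations each core can finish per cycle. That last number varies by processor family, so treat the value in this sketch as an assumption; a measured Linpack result will always come in below the theoretical peak.

    # Ballpark estimate of a processor's theoretical peak flops.
    # The flops-per-cycle figure varies by processor family and is an assumption here;
    # a measured Linpack result will always be lower than this theoretical peak.
    cores = 4
    clock_hz = 2.4e9
    flops_per_cycle_per_core = 4  # assumption; depends on the processor's design

    peak_flops = cores * clock_hz * flops_per_cycle_per_core
    print(f"Theoretical peak: {peak_flops / 1e9:.1f} Gflops")  # 38.4 Gflops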