Saturday, November 17, 2012

VirtualBox "Kernel driver not installed (rc=-1908)" on Mac OSX

I have a Windows virtual machine that I use, and when I tried to start it today it failed with a "Kernel driver not installed (rc=-1908)" error dialog.

After a bit of searching I found a pastebin with the answer: to load the kernel driver, run:

sudo kextload /Library/Extensions/VBoxDrv.kext/

Attempting to start the virtual machine progressed a little further, but I was then greeted with another error, and trying to edit the VM's settings produced a warning of its own. Clicking OK let me into the settings, but I couldn't change anything. The solution for me was to discard the VM's saved state, go into the settings and disable the host-only adapter (I'm not sure whether this step was necessary), then boot the VM.

Tuesday, September 18, 2012

Adding space to an Ubuntu VM without rebooting.

Using LVM allows you to add space to a virtual machine without rebooting it. However, you need to be using LVM already if you want to grow existing file systems this way. This post only covers increasing the size; shrinking is riskier and I generally avoid it.

Increasing the size of a filesystem where LVM is used directly on the disk:
  1. If you're using LVM on the storage device (not partition) directly, it's trivial to add more space.
  2. Increase the size of the virtual machine's storage device.
  3. Run `dmesg` and find the appropriate SCSI device (e.g. 2:0:0:0)
  4. Tell the kernel to rescan that device:
    $ echo 1 | sudo tee /sys/class/scsi_device/2\:0\:0\:0/device/rescan
    $ sudo dmesg
    …
    [151064.769416] sda: detected capacity change from 5368709120 to 6442450944
  5. Use `pvresize` to increase the size of the physical volume, then grow the logical volume and filesystem:
    $ sudo pvresize /dev/sda
    $ sudo lvextend -l+100%FREE /dev/VolGroupName/root
    $ sudo resize2fs /dev/VolGroupName/root
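
The capacity figures in that dmesg line are bytes. As a quick sanity check, you can pull the old and new sizes out of the message and confirm how much was added (the awk field positions assume the exact message format shown above):

```shell
# The dmesg line from the rescan above (old and new capacity in bytes).
line="[151064.769416] sda: detected capacity change from 5368709120 to 6442450944"
old=$(echo "$line" | awk '{print $(NF-2)}')
new=$(echo "$line" | awk '{print $NF}')
# 5368709120 bytes is 5 GiB and 6442450944 bytes is 6 GiB, i.e. 1 GiB added.
echo "grew by $(( (new - old) / 1024 / 1024 / 1024 )) GiB"
```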

This is only possible when the drive isn't partitioned. If the drive is partitioned, the easiest solution is to add a new storage device to the VM, use LVM directly on that, and add it to the existing volume group:

  1. Add new storage device and detect it in the guest using scsitools:
    sudo apt-get install scsitools
    sudo rescan-scsi-bus.sh
  2. Do something along the lines of:
    $ sudo pvcreate /dev/sdb
    $ sudo vgextend VolGroupName /dev/sdb
    $ sudo lvextend -l+100%FREE /dev/VolGroupName/root
    $ sudo resize2fs /dev/VolGroupName/root

Sunday, August 5, 2012

Disabling OSX's PostgreSQL to use Heroku's Postgres.app

If you want to use Postgres.app, you'll probably want to turn off any existing PostgreSQL servers you have running to free up port 5432. Typically these are configured to run via launchd, so you'll need to use launchctl to disable them, e.g.:

sudo launchctl unload -w /Library/LaunchDaemons/com.edb.launchd.postgresql-9.0.plist


As you can see, you need to pass the path to a .plist file. There are a couple of places these can live, so you can use grep to find them:

grep -ri postgres /Library/LaunchDaemons /System/Library/LaunchDaemons ~/Library/LaunchAgents
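
If the grep turns up more than one match, a small helper can turn the results into launchctl commands for review before you run them (`list_postgres_unloads` is a made-up name, and it only prints the commands rather than executing anything):

```shell
# Print (rather than run) an unload command for every plist that
# mentions postgres under the given directories.
list_postgres_unloads() {
    grep -rli postgres "$@" 2>/dev/null | while read -r plist; do
        echo "sudo launchctl unload -w $plist"
    done
}

# Review the output, then run the commands that look right.
list_postgres_unloads /Library/LaunchDaemons /System/Library/LaunchDaemons ~/Library/LaunchAgents
```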

Sunday, July 15, 2012

Installing Solr 3.6.0 on Ubuntu from a tarball


1. Make a solr user

    useradd -d /opt/solr solr

2. Download a tarball of Solr 3.6.0 and install to /opt/solr

    wget http://archive.apache.org/dist/lucene/solr/3.6.0/apache-solr-3.6.0.tgz
    tar -zxf apache-solr-3.6.0.tgz
    mv apache-solr-3.6.0 /opt/solr
    chown -R solr /opt/solr

3. Make the log directory

    mkdir /var/log/solr
    chown solr /var/log/solr

4. Create an Upstart job at /etc/init/solr.conf:

    description "Solr"


    start on runlevel [2345]
    stop on runlevel [016]
    respawn


    exec su solr -l -c "cd /opt/solr/example && java -Dsolr.solr.home=/etc/solr -Djetty.port=8080 -Djetty.host=0.0.0.0 -Djetty.logs=/var/log/solr -jar start.jar > /var/log/solr/stdouterr.log 2>&1"

5. Create the base data directory for cores:

    mkdir /var/lib/solr
    chown solr /var/lib/solr

6. Make /etc/solr/solr.xml:

    <?xml version="1.0" encoding="UTF-8" ?>
    <solr persistent="false">
      <cores adminPath="/admin/cores">
        <core name="example.com" instanceDir="/etc/solr/example.com">
          <property name="dataDir" value="/var/lib/solr/example.com/data" />
        </core>
      </cores>
    </solr>

7. Make the core config in /etc/solr/example.com/conf/{schema.xml,solrconfig.xml}
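
The contents of schema.xml and solrconfig.xml depend entirely on your data; the solrconfig.xml that ships in the tarball under example/solr/conf/ is a reasonable starting point. As an illustration only (the field name here is made up), a near-minimal Solr 3.x schema.xml looks something like:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<schema name="example.com" version="1.4">
  <types>
    <!-- A single primitive type is enough for a bare-bones schema. -->
    <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
  </types>
  <fields>
    <field name="id" type="string" indexed="true" stored="true" required="true" />
  </fields>
  <uniqueKey>id</uniqueKey>
</schema>
```

With the core config in place, start the service with `sudo start solr`; it should come up on port 8080 as configured in the Upstart job.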

Thursday, June 21, 2012

Disable Apache on OSX Server

I seem to have lost the OSX Server configuration app, which made trying to turn off Apache a little tricky. The solution is to use launchctl:

sudo launchctl unload /System/Library/LaunchDaemons/org.apache.httpd.plist

Pass -w as well if you want Apache to stay disabled across reboots. See man launchctl for details.

Sunday, May 6, 2012

Turn Python warnings into errors

It's generally good practice to run unit tests with warnings turned into errors to ensure deprecation warnings are honoured in a timely manner. Despite often coming across people saying that this can be done with a command line flag, after a bit of searching around I couldn't find a simple example of how to do this.

After a bit of trial and error it turns out to be very simple:

python -W error foo.py

The above flag will turn all warnings into errors.
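
To see the flag in action, here's a throwaway script (deprecated_demo.py is just a stand-in name, and python3 is used here) that emits a DeprecationWarning: it completes normally by default, but aborts under -W error.

```shell
# Create a throwaway script that triggers a DeprecationWarning.
cat > /tmp/deprecated_demo.py <<'EOF'
import warnings
warnings.warn("this API is deprecated", DeprecationWarning)
print("reached the end")
EOF

# Without the flag the script runs to completion; with -W error the
# warning is raised as an exception and the process exits non-zero.
python3 /tmp/deprecated_demo.py
python3 -W error /tmp/deprecated_demo.py || echo "aborted on warning"
```

The PYTHONWARNINGS environment variable (e.g. PYTHONWARNINGS=error) has the same effect when you can't change the interpreter's command line.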

For more complex usage Doug Hellman has a good write up: http://doughellmann.com/PyMOTW/warnings/

Thursday, April 26, 2012

Enabling Upstart user jobs to start at boot.

I've come across an issue with Upstart – user jobs' start on stanzas aren't honored at boot time.

The real issue is more general: user jobs aren't loaded into Upstart until the user creates an Upstart session. If a job isn't loaded into Upstart, it's basically invisible and hence its start on stanzas won't be honored.

Loading user jobs into Upstart is simple. It happens automatically when a user creates an Upstart session by connecting via D-Bus using initctl or one of the shortcuts (e.g. start or status). Until user jobs are loaded into Upstart, they're completely disabled.

So the problem at boot time is that users don't have the opportunity to create an Upstart session before the rc-sysinit job emits the runlevel event, which makes it impossible for user jobs with start on runlevel [2345] to be honored.

Perhaps this is by design – I'm not sure – but I wrote the following job to get around the issue by blocking rc-sysinit and creating an Upstart session for each user with an .init/ directory in their home:

The following job should be installed into /etc/init/load-user-jobs.conf:
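
The post ends before the job itself. As a reconstruction from the description above (the /home/*/.init glob and the use of initctl list to open each user's session are my assumptions, not necessarily the author's actual code), the job might look something like:

```
description "Load user jobs by creating an Upstart session per user"

# As a task started on `starting rc-sysinit`, this blocks rc-sysinit
# (and therefore the runlevel event) until the script finishes.
start on starting rc-sysinit
task

script
    for dir in /home/*/.init; do
        [ -d "$dir" ] || continue
        user=$(basename "$(dirname "$dir")")
        # Any initctl invocation connects over D-Bus and creates the
        # user's Upstart session, loading their jobs.
        su "$user" -l -c "initctl list" > /dev/null 2>&1 || true
    done
end script
```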