Musings

Moving WordPress to a subsite in Azure Websites

Deploying WordPress in Azure is relatively straightforward: deploy a Resource Group with a DB and a Web Server using the WordPress template, and Bob's your father's brother.

However, I wanted to move WP into a subfolder, which should be pretty straightforward. All the instructions online are out of date or for different systems though, so I wanted to note what I had to set to get it all working.

General Steps are:

1) Deploy your WordPress site.

2) Go into the WP site and perform the configuration to get it running bare-bones.

3) Go back into the Azure Portal and find the FTP configuration.

4) Log into the website with the FTP credentials (I used WinSCP).

5) Copy the entire WordPress site to your local machine.

6) Delete everything from your site except for:
              index.php
              azuredeploy.json
              web.config
I had a few problems deleting the wp-admin and wp-content folders (access kept being blocked), but they went eventually.

7) Create the subfolder you want to use (in my case /d/).

8) Copy the backup of your WordPress into the subfolder.
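Just for orientation, once steps 5 to 8 are done the wwwroot should end up looking roughly like this (/site/wwwroot being the usual Azure Websites FTP root, and /d/ being whatever subfolder name you chose):

/site/wwwroot/
    index.php
    web.config
    azuredeploy.json
    d/
        wp-admin/
        wp-content/
        wp-config.php
        ...the rest of the WordPress files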

9) Edit the root web.config file.  Mine is set up to redirect all traffic to the /d/ folder.  You may want a different setup, but this is mine:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<system.webServer>
    <rewrite>
        <rules>
            <rule name="Root Hit Redirect" stopProcessing="true">
                <match url="^$" />
                <action type="Redirect" url="/d/" />
            </rule>
        </rules>
    </rewrite>
</system.webServer>
</configuration>

10) Edit the root index.php file and change the require line
require( dirname( __FILE__ ) . '/wp-blog-header.php' );
to include the subfolder you want to use:
require( dirname( __FILE__ ) . '/d/wp-blog-header.php' );

11) The Azure WordPress deployment seems to ignore the database settings for the site and home URLs, so these must be hard-coded in the wp-config.php file.
Replace the existing paths below the 'stop editing' line as follows:

define('WP_HOME','http://example.site/d');
define('WP_SITEURL','http://example.site/d');
define('WP_CONTENT_URL', 'http://example.site/d/wp-content');

I also had to define WP_CONTENT_URL because my Salient theme wasn't picking up the subpath for some reason, so I hard-coded that too.

12) The last thing you may have to do is edit the subfolder's web.config file.  I was getting loads of errors until I deleted it, at which point things magically started working.  However, it seems to have been recreated as an essentially empty web.config, whose contents are here:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules/>
    </rewrite>
  </system.webServer>
</configuration>

I think that was about it, but it was quite the pain to get configured.  N.B. if you change the website name or add a custom domain, don't forget you'll have to update wp-config.php with the new domain name, as Azure won't do this for you automatically.

I’m also not sure how things will go when there is a new version available, but hopefully this will get you going.

HP EliteDesk 800 G1 – DisplayPort to HDMI Audio

I've been fighting for several days to get audio working from an HP EliteDesk 800 G1 SFF into my Panasonic TV using a DisplayPort to HDMI cable.  The PC is running Windows 10 build 10586.  I tried lots of things, including BIOS patches and various driver builds from MS and Intel, but all to no avail.

However, I have managed to get it working by changing the Default Format for the output down from DVD or CD quality to FM Radio Quality in the TV output's advanced properties.  I'm not sure if this is a TV or PC/driver problem, but I suspect it's the latter: I had the same problem with another TV, and the cable works fine from a Surface Pro 3 through a Mini-DP to DP adapter, so I think this is PC related.

Hopefully this helps someone else out.

Saving Images to AWS S3 Scriptomagically

Whilst I've been messing around creating boot images, I've hit the problem of needing to archive off some large images for later use.  Now that I've finally got access to a high-bandwidth internet link, I can back stuff up to Amazon's AWS S3 cloud in a reasonably timely fashion.

s3cmd does a great job of interfacing with AWS from a Linux CLI, but it is designed to deal with pre-created files, not data that is generated on the fly.  When you're talking about multi-gigabyte files, it isn't always an option to make a local archive file before pushing it to the remote storage location.

I'm used to using pigz, dd and ssh to copy files like this and wanted to achieve something similar with S3, but there don't seem to be many guides to doing so.  I have, however, made it work on my Debian-based distro relatively easily.

Tooling

This is the tooling I combined:

s3cmd

You need a recent version of s3cmd to make this work: v1.5.5 or above apparently supports the stdin/stdout handling you'll need.
At the time of writing, this can be obtained from the s3tools git repository at https://github.com/s3tools/s3cmd
You'll need git and some Python bits and pieces, but building was straightforward in my case.
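For what it's worth, grabbing and installing it went roughly like this for me (you may need python-setuptools and python-dateutil or similar installed first):

git clone https://github.com/s3tools/s3cmd.git
cd s3cmd
sudo python setup.py install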

Before you start, make sure you set up s3cmd using the command s3cmd --configure

pigz

I use pigz, although you can use gzip to achieve the same thing.  For those that don't know, pigz is a multi-threaded implementation of gzip, and it offers much better performance on modern multi-core systems.
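On a Debian-based box it's just a package away:

sudo apt-get install pigz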

tar

tar is on pretty much every Linux system, and handles folder contents in a way that gzip/pigz can't.

Usage

The command I built is as follows:

tar cvf - --use-compress-prog=pigz /tmp/source/directory/* --recursion --exclude='*.vswp' --exclude='*.log' --exclude='*.hlog' | /path/to/s3cmd put - s3://bucket.location/BackupFolder/BackupFile.tar.gz --verbose --rr

I think it's pretty self-explanatory, but I'll run through the command anyway…

tar cvf = tar create, verbose, and the next argument is the file to write to
- = stands for stdout in tar parlance
--use-compress-prog=pigz = fairly self-explanatory, but you can probably swap this for any compression app which supports stdout
/tmp/source/directory/* = the directory or mount point where your source files are coming from
--recursion = recurse through the directories to pick up all the files
--exclude='*.vswp' --exclude='*.log' --exclude='*.hlog' = exclude various file types (in this instance, I was backing up a broken VMFS storage array)
| = pipes the output into the next app
/path/to/s3cmd = wherever s3cmd resides (in my instance, the git repository version I'd installed)
put = send to S3; put works with a single file name
- = use stdin as the source
s3://bucket.location/BackupFolder/BackupFile.tar.gz = the S3 bucket and path where you want the output stored
--verbose = verbose output for debugging and status tracking
--rr = reduced redundancy storage, which is less expensive than full redundancy; include or exclude this based on your needs

The biggest problem with this is you don't really get an idea of how long a backup will take. s3cmd splits the file into chunks, but you don't know how many chunks there are until the process has completed.  I average around 6 MB/s, but a multi-gigabyte file can still take several hours to upload.  Whilst I didn't time it exactly, a 70GB file, compressed to 10GB, took around 90 minutes to send to S3.
You may want to leave your backup running in a screen session.
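Restoring is essentially the same pipeline in reverse; something like this should work, assuming your s3cmd build also supports streaming a get to stdout (the target directory is just an example):

/path/to/s3cmd get s3://bucket.location/BackupFolder/BackupFile.tar.gz - | pigz -dc | tar -xvf - -C /tmp/restore/directory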

Installshield Error 1327 (Or Invalid Drive)

A common problem I come across, which causes me lots of grief to resolve, particularly with networked machines, is the Invalid Drive error (usually error code 1327) when you try to run an InstallShield installer, often caused by a mapped network drive.  I believe the problem exists because you map a drive at a user level, but the administrator doesn't get the same 'layer' of drives (just try opening a mapped drive from an administrative command prompt).  Previously, I'd got round this problem by logging in as a local administrator, installing the needed app for All Users, then logging back in as the required user to finish off the setup.  Note, this problem usually occurs before you even choose which drive to install to, so I can only assume it's a feature of InstallShield related to either a temporary file store location or to the enumeration of drives at start-up.

However, I've discovered a really useful workaround which seems to resolve the problem: the subst command.  Subst associates a drive letter with an alternative drive/path, allowing you to have multiple letters pointing to one location.  It is to drive letters what mklink is to files and folders.

To use it, fire up an administrative command prompt and type:
subst M: C:\temp\installshieldtemp
where the general form is
subst ProblemDriveLetter TemporaryFolderLocation

Run the installer to let it install, and you can then clean up with:
subst /D M:
where /D is delete and M: is the drive letter you substituted earlier.

Just typing subst on its own will show you the drive letters you have substituted.
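Putting it all together, a typical session looks something like this (M: and the temp folder path are just examples):

subst M: C:\temp\installshieldtemp
rem ...run the problem installer as normal...
subst /D M: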

Azure Self-Signed Cert

I've been messing around with some of the Azure services, and am about to try some of the desktop utilities, such as the Hyper-V converter, to try publishing services.  One thing you'll come across if you're doing this is the need to create a self-signed management certificate to allow these apps to authenticate, and you'll see that all the TechNet articles mention the makecert.exe tool.

The problem is that makecert is bundled into the Windows 8.1 SDK and Visual Studio 2013 Express downloads, both of which are several hundred megabytes: overkill for what we need.

Well, there's an easy way to get makecert using only about 9 MB of storage space: download the 8.1 SDK installer from http://go.microsoft.com/fwlink/p/?linkid=84091 and run it.  When prompted to select the tools, you only need to install the MSI Tools.  This gives you makecert (as well as a few other bits and bobs) in a much more compact form than having all of the developer tools installed.
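Once you have makecert, the management certificate command those TechNet articles describe is along these lines (the certificate name here is just a placeholder):

makecert -sky exchange -r -n "CN=AzureMgmtCert" -pe -a sha1 -len 2048 -ss My "AzureMgmtCert.cer"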

Weird goings on with a Live CD and libdvdcss.so.2

I've been messing around trying to make a live CD with some transcoding/ripping utilities built in, to utilize some of the spare hardware I've got lying around. More on this later, but I've been reworking the guide at http://willhaley.com/blog/create-a-custom-debian-live-environment/ with my own utilities and tools.

One problem I've been challenged by over the last couple of days is HandBrakeCLI bombing out with the message:

[email protected]:/mnt/Videos/Movies/dvdrip/91# HandBrakeCLI -i BHD.iso -o BHD.mkv --preset="High Profile"
[20:41:41] hb_init: starting libhb thread
HandBrake 0.9.9 (2014070200) - Linux x86_64 - http://handbrake.fr
4 CPUs detected
Opening BHD.iso…
[20:41:41] hb_scan: path=BHD.iso, title_index=1
index_parse.c:191: indx_parse(): error opening BHD.iso/BDMV/index.bdmv
index_parse.c:191: indx_parse(): error opening BHD.iso/BDMV/BACKUP/index.bdmv
bluray.c:2341: nav_get_title_list(BHD.iso) failed
[20:41:42] bd: not a bd – trying as a stream/file instead
libdvdnav: Using dvdnav version 4.1.3
libdvdread: Missing symbols in libdvdcss.so.2, this shouldn't happen !
libdvdread: Using libdvdcss version  for DVD access
Segmentation fault

This has been bugging me, as it worked before I converted the image to a live CD.  I wondered if it was some kind of problem with the lack of 'real' disk space, or a lack of memory, or something like that, but nothing I could find would identify it.

Finally, I started looking into libdvdcss rather than HandBrake itself.  I think what confused me is that the symbols error looks like a warning, especially given there is a follow-on message which suggests libdvdcss is carrying on regardless.  Anyway, eventually I ran an md5sum on the libdvdcss.so.2 file to see if it matched the one on a non-live machine with a virtually identical build.

[email protected]:/# md5sum /usr/lib/x86_64-linux-gnu/libdvdcss.so.2
4702028ab20843fd5cb1e9ca4b720a72  /usr/lib/x86_64-linux-gnu/libdvdcss.so.2

N.B. libdvdcss.so.2 is a symlink to libdvdcss.so.2.1.0 in my current Debian sid-based build.

On the donor machine
[email protected]:/usr/lib# md5sum x86_64-linux-gnu/libdvdcss.so.2
c9b314d9ed2688223c427bc5e5a39e6f  x86_64-linux-gnu/libdvdcss.so.2

So I scp'd the source file onto the live machine, checked that the md5sum matched the donor machine (it did), and repeated the HandBrake job.  Lo and behold, it worked!  So I've re-added the two files to the live filesystem image and, success, it just works.
I don't know if something funky happens to the symlink when the image is created, but it's actually quite easy to fix once you understand the problem.
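In case it's useful, the fix itself boiled down to something like this, run on the live machine (donor-machine is a placeholder for the box with the known-good library):

scp donor-machine:/usr/lib/x86_64-linux-gnu/libdvdcss.so.2.1.0 /usr/lib/x86_64-linux-gnu/
md5sum /usr/lib/x86_64-linux-gnu/libdvdcss.so.2.1.0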

Hope this helps someone,  and I’ll be back soon with more details about building a live image, then booting it using iPXE.

Windows 7 Zombie Mapped Network Drives

In Windows 7, when using mapped drives on a laptop, you may find that after moving around (undocking, connecting via Wi-Fi, etc.) a mapped drive becomes a zombie: it still exists, but is essentially dead. This seems particularly prevalent where offline folders are used. It appears to be caused by the drive-mapping service starting before the network connection is fully stable. However, you can change the behaviour with a simple registry tweak.

Under HKLM\SYSTEM\CurrentControlSet\Control\NetworkProvider

add a new DWORD entry called RestoreConnection

and set its value to 0.
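If you'd rather script it than click through regedit, something like this from an administrative command prompt should do the same job:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\NetworkProvider" /v RestoreConnection /t REG_DWORD /d 0 /f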

After a reboot, network drives will only be reconnected when you try to access them through Explorer or the file-system APIs.

iPXE Booting OpenElec

Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution built from scratch as a platform to turn your computer into an XBMC media center. OpenELEC is designed to make your system boot fast, and the install is so easy that anyone can turn a blank PC into a media machine in less than 15 minutes.

This is a great live image for getting up and running with XBMC, or for testing it before committing to a hard disk install.  I've set it up today to boot from the network to see how well it works on a machine I'm thinking about using for a media centre.  It was a bit of a pain to get working, but now that it is, it works fine.

First of all, download a copy of OpenElec from http://www.openelec.tv/get-openelec/download. I got the tarball entitled OpenELEC-Generic.x86_64-devel-20131026131436-r16293 from the developer sources, but I think the stable versions will work equally well.

This was copied to my NAS server and untarred using the command:
 tar -xvf OpenELEC-Generic.x86_64-devel-20131026131436-r16293.tar
This spat out what I presume to be an OpenElec live CD or some such (but who cares, we don't do CDs, do we? 🙂 ).  Within the created folder there is a 'target' folder, which contains the images you need to boot from.

Make sure the target folder is in a location accessible over both HTTP and NFS.  Note, I've not been able to make this boot using HTTP alone, and I'm not sure it's possible, because it seems to use NFS as persistent storage for your configuration.

Next, create a folder for storing your persistent information (I created a folder called persistent within my target folder).
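For the NFS side of things, the whole openelec folder needs exporting read/write (the persistent folder gets written to).  On my NAS that means a line along these lines in /etc/exports, followed by exportfs -ra to apply it; the client subnet here is an assumption based on my addressing:

/boot.server/openelec  10.222.222.0/24(rw,no_root_squash,no_subtree_check)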

Now update your iPXE menu.

:OpenElec
echo Booting OpenElec Media Centre
echo HTTP and NAS Method 
kernel http://boot.server/openelec/OpenELEC-Generic.x86_64-devel-20131026131436-r16293/target/KERNEL boot=NFS=10.222.222.50:/boot.server/openelec/OpenELEC-Generic.x86_64-devel-20131026131436-r16293/target/ disk=NFS=10.222.222.50:/boot.server/openelec/persistent/ netboot=nfs ssh ip=dhcp
boot 

So this loads the kernel over HTTP from the server, and passes it the NFS boot location and the NFS persistent storage location.  Note, neither of the latter two specifies files, just folder paths; the kernel knows what it's looking for when it boots.
The final variables tell the kernel that it is being netbooted over NFS, to enable SSH (if you want it), and to get its IP using DHCP.  There are a number of other modes for debugging, text-only mode, that sort of thing, but they are not discussed here.

Anyway, other than configuring the iPXE menu to call :OpenElec, that's all there is to it.

XPerience Points

The massive gap between the number of computers running Windows XP and those on a more modern variant should come as no surprise. Yesterday, The Register published an article which mentions that ~500 million PCs still run XP, which goes end of life in April 2014.

The problem is that XP 'just worked' (eventually). It's a relatively lightweight OS, straightforward to configure, reliable and, for big IT outfits, part of their master machine image for an extended period. Why upgrade to something more complex, more finicky and frankly less stable in Vista? And if XP was working so well, why bother changing the images for Windows 7?

Well, the time has come for businesses and home users to think about replacing XP with something more modern. Windows 8 is about to become 8.1, and whilst it doesn't necessarily fix everything that is broken in 8, it appears to be a good leap forward. Plus, it's still possible to find Windows 7 machines in certain retailers for those who don't want to learn the new UI. For those who only use the internet, check out a Chromebook, which gives you a nice portal onto the WWW without the cruft of a heavyweight machine. For people who consider themselves reasonably confident IT users, why not check out one of the Linux distributions; Ubuntu comes highly recommended for Linux n00bz.

Whatever you do, I urge you to upgrade from XP. From April 2014 onwards there will be no patches, no updates, no security fixes. With that many PCs still running XP, I find it highly likely that those with a financial interest in attacking these machines and using them for nefarious purposes are sitting on exploits and security holes that will never get fixed. It's in your interest, as well as everyone else's, to consider your options now and migrate by April 2014.

When a Minimal Install Isn’t…

Over the past couple of days, I've been rewriting the recovery script for a Linux LAMP application I wrote about five years ago.  I test it every so often to make sure it still works.  This year, it doesn't. Basically, we've reached a stage where current software versions don't support the LAMP stack I chose (XAMPP).  Besides, XAMPP isn't really suitable for production servers, even though it's served us well in the intervening years.

So,  I’ve embarked on updating the recovery script to fit in with an ‘off-the-peg’ LAMP stack which will be easier to maintain going forward.

My favourite distribution is Debian, and that's the one I have most experience with.  However, the preferred distro in the office is RHEL, or variants based thereon.  So I got myself a fresh download of the Fedora 19 network install CD, loaded it into the virtual machine, and off we went.  The installer is a bit, err, low-rent: pretty graphics and the like, but not a lot of options to choose from.  I suppose I'm too used to the 'expert' mode of the Debian network install.  Anyway, I went through the necessary steps to get the network up and running, configured it to talk to our proxy server, found the disk config menu (hidden off-screen on a low-res display), then went to the package selection screen.  Being reasonably accomplished now at administering Linux systems, I went for the minimal selection so I could add the other packages later on, and off we went.

I quite liked how you can set the root password and create a new user whilst the OS installs; that's efficient. Then that was that, server installed.  And that's when the trouble started.

Giving yum proxy access was straightforward (although why the settings don't carry across from the installer, I don't know), and getting the LAMP stack installed was straightforward too.  The httpd service came straight up after install and was ready to go.  Except that it wasn't.  I could not for the life of me get an HTTP page to come up.  It seemed that SSH was the only port open by default.  I checked the network config and that all looked okay.  I could even wget http://localhost and get a page back.  So why no external connection?  Then I discovered SELinux was installed and running.  I disabled that and rebooted; still no damn connection!  There looked to be a load of iptables rules still listed; could they be a carryover from SELinux, I wondered?  Dropping the iptables rules magically got HTTP access back.  Rebooted, and the same problem again.

After reaching out to a colleague who has a little more experience with these distros than I, and after installing Webmin, we discovered that firewalld was running at startup.
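For anyone else hitting this, the firewalld fix (run as root) is along these lines; opening the port is politer than turning the service off entirely:

firewall-cmd --permanent --add-service=http
firewall-cmd --reload

or, if you really don't want it at all:

systemctl stop firewalld
systemctl disable firewalld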

Now,  when I install a minimal distro installation, I expect the following:

  • A bootloader
  • A kernel
  • A shell
  • Enough configuration to get from the bootloader to a shell
  • An ability to extend the system with a package manager.

I do not expect other things to get in the way, especially as I hadn't asked for them.  SELinux and firewalls are good practice, but I do not want them imposed on me, especially if I'm not expecting them.  There were a number of other packages loaded (wpa_supplicant, for example) that to my mind do not count as essential to getting Linux up and running.

Fedora 19 and I have not immediately started as friends.