Skype for Business – Audio Conferencing Behaviour

If you have Skype for Business telephony services, including audio conferencing hosted by Microsoft 365, it is worth sharing the current workflow experience, as it doesn't seem to be well documented.


From a host's (moderator's) perspective, you dial into the meeting using the assigned phone number shown on your Skype for Business invite.

  1. The Skype Meeting Attendant answers the phone and asks you to enter the conference ID, followed by the # key.
  2. You enter the meeting number (again, shown on the invite).
  3. You're prompted to press * if you are the leader, so you press *.
  4. You enter the PIN assigned to your account.
  5. You're dropped into the meeting, and your name or number is announced if announcements are enabled.

From an end user's perspective, the process is pretty much the same, except that once the leader has joined, attendees aren't prompted to enter a PIN.

Unlike other audio conferencing providers (ACPs), in-call control of the service appears to be pretty non-existent, and I think this is by design. After all, the meeting can be controlled from the mobile app if you're not near a desktop.
You're not able to start a meeting recording from the phone, as recording is performed by the Skype client, saving into a local folder on your computer; if recording is required, that is your only option.

I think MS Teams may take a different approach, but I’ve not got my hands on telephony/audio in that product yet.

Replicating Linux Machines across a Network

This command pipeline will replicate a running machine's disk onto a remote machine.
The source machine can be online and running; the remote machine should be booted from a Linux live image. The destination machine should also have a disk the same size as or larger than the source machine's.

Sending Machine
dd if=/dev/sda bs=16M conv=sync,noerror status=progress | gzip -9 -cf | nc destinationmachineip preferredport -q 10

Receiving Machine
nc -l -p preferredport | gzip -dfc | dd bs=16M of=/dev/sda
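Before pointing the pipeline at real hardware, the command shape can be sanity-checked locally. This is a sketch with ordinary files standing in for /dev/sda and the nc network hop; all paths are illustrative:

```shell
# Stand-in for the sending side: read a "device", compress, write the stream
printf 'pretend disk image contents\n' > /tmp/srcdev
dd if=/tmp/srcdev bs=1M 2>/dev/null | gzip -9 -cf > /tmp/clone.gz

# Stand-in for the receiving side: decompress the stream back onto a "device"
gzip -dfc /tmp/clone.gz | dd of=/tmp/dstdev bs=1M 2>/dev/null

# The copy should be byte-identical to the source
cmp /tmp/srcdev /tmp/dstdev && echo "round-trip OK"
```

The real run is the same pipeline with the file redirections swapped for the nc hop shown above.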

It is worth doing some test copies if you have a large image to send or are on a slow network, altering the block size (bs=) and the gzip compression level (-9). On the latter in particular: on a fast network you may be better off using lower compression, as the CPU cycles spent compressing can take longer than sending the data uncompressed or at a lower compression level.
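A quick way to compare compression levels before committing to a long transfer is to size them against a sample of your data. Everything below is an illustrative local test (the sample file and paths are made up), not part of the copy itself:

```shell
# Build a highly compressible 8 MB sample; for a real test, use a slice
# of your actual data, e.g. dd'd from the source disk
dd if=/dev/zero bs=1M count=8 2>/dev/null | tr '\0' 'a' > /tmp/sample.img

# Compare output sizes at the fastest and best compression levels
gzip -1 -c /tmp/sample.img | wc -c > /tmp/size_level1
gzip -9 -c /tmp/sample.img | wc -c > /tmp/size_level9
echo "level 1: $(cat /tmp/size_level1) bytes; level 9: $(cat /tmp/size_level9) bytes"
```

Wrapping each gzip line in `time` shows the CPU-cost side of the trade-off as well.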

You can also use other compression programs like pigz to achieve better performance.

Booting Windows 2016 on HP G8 Microserver MicroSD Card

As good as FreeNAS has been, most of the clients on my home network are Windows-based and speak CIFS/SMB, and I've not had great success with FreeNAS serving these protocols reliably. Under load, the shares sometimes lock up and stop responding, and permissions can be a bit hit and miss.

FreeNAS support forums drink their own special brand of Kool-Aid, so I've decided to try Windows, which, whilst part of its own Borg collective, has a much wider base of users and obviously native integration with my client base. So I'm piloting Windows Server 2016 with its various storage capabilities to see how it compares.
I've got an HP Microserver G8 which, as well as 4 disk trays, supports a fifth SATA device via an additional ODD port, an internal USB port and a MicroSD slot, as well as various external USB ports.
My FreeNAS box is an older N54L Microserver, which installs and boots easily from a USB drive, but Windows is a bit more pig-headed about booting from USB/MicroSD devices.
However, with the help of Daniel's Tech Blog, I have managed to get my Microserver booting from the MicroSD card.
Daniel's instructions are more or less spot on, except for one change.
list disk
select disk x
create partition primary
format quick fs=ntfs label="SD"
assign letter=C
dism /Apply-Image /ImageFile:D:\sources\install.wim /index:2 /ApplyDir:C:\
bootsect /nt60 C: /force /mbr
bcdboot C:\Windows

I couldn't get that final line to write to the MicroSD; I kept getting errors about BCDBOOT being unable to write the files or unable to find the source location. However, I read the documentation for BCDBOOT on Microsoft's MSDN site and happened upon the command for writing to USB devices.

bcdboot C:\Windows /s C: /f ALL

This seems to work fine, and a reboot allows Windows 2016 to boot.

Moving WordPress to a subsite in Azure Websites

Deploying WordPress in Azure is relatively straightforward: deploy a resource group with a DB and a web server using the WordPress template, and Bob's your father's brother.

However, I wanted to move WP into a subfolder, which all in all should be pretty straightforward. All the instructions online are out of date or for different systems, though, so I wanted to note what I had to set to get it all working.

General Steps are:

1) Deploy your WordPress site.

2) Go into the WP site and perform the configuration to get it running bare-bones

3) Go back into the Azure Portal and find the FTP configuration

4) Log into the Website with the FTP credentials (I used WinSCP)

5) Copy the entire WordPress site to your local machine

6) Delete everything from your site except for
I had a few problems deleting the wp-admin and wp-content folders – access kept being blocked, but they went eventually.

7) Create the name of the subfolder you want to use (in my case /d/)

8) Copy the backup of your WordPress into the subfolder.

9) Edit the root web.config file. Mine is set up to redirect all traffic to the /d/ folder. You may have a different setup, but this is mine:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <system.webServer>
        <rewrite>
            <rules>
                <rule name="Root Hit Redirect" stopProcessing="true">
                    <match url="^$" />
                    <action type="Redirect" url="/d/" />
                </rule>
            </rules>
        </rewrite>
    </system.webServer>
</configuration>

10) Edit the root index.php file and change the require line
require( dirname( __FILE__ ) . '/wp-blog-header.php' );
to include the subfolder you want to use:
require( dirname( __FILE__ ) . '/d/wp-blog-header.php' );

11) Azure WordPress seems to ignore the SQL database settings for the site and home URLs, so these must be hardcoded in the wp-config.php file.
Replace the existing paths below the "stop editing" line as follows.

define('WP_CONTENT_URL', '');

I also had to define the WP_CONTENT_URL string, as my Salient theme wasn't picking up the subpath for some reason, so I hardcoded it.

12) The only other thing you may have to do is edit the subfolder's web.config file. I was having loads of errors until I deleted it, at which point everything magically started working. However, it does seem to have been replaced with an empty web.config file, so its contents are here:
<?xml version="1.0" encoding="UTF-8"?>


I think that was about it, but it was quite the pain to get configured. N.b. if you change the website name or add a custom domain, don't forget to update wp-config.php with the new domain name, as Azure won't do this for you automatically.

I’m also not sure how things will go when there is a new version available, but hopefully this will get you going.

HP EliteDesk 800 G1 – DisplayPort to HDMI Audio

I've been fighting for several days to get audio working from an HP EliteDesk 800 G1 SFF into my Panasonic TV using a DisplayPort-to-HDMI cable. The PC is running Windows 10 build 10586. I tried lots of things, including BIOS patches and various driver builds from MS and Intel, but all to no avail.

However, I have managed to get it working by changing the Default Format for output down from DVD or CD quality to FM Radio Quality in the advanced properties of the TV output device. I'm not sure if this is a TV or PC/driver problem, but I suspect the latter: I had the same problem with another TV, yet the cable works on a Surface Pro 3 via a Mini-DP-to-DP adapter, so I think this is PC-related.

Hopefully this helps someone else out.

Saving Images to AWS S3 Scriptomagically

Whilst I've been messing around creating boot images, I've hit the problem of needing to archive off some large images for later use. Now I've finally got access to a high-speed internet link, I can back stuff up to Amazon's AWS S3 cloud in a reasonably timely fashion.

s3cmd does a great job of interfacing with AWS from a Linux CLI, but it is designed to deal with pre-created files, not data that is generated dynamically. When you're talking about multi-gigabyte files, it isn't always an option to create a local archive before pushing it to remote storage.

I'm used to using pigz, dd and ssh to copy files like this, and wanted to achieve something similar with S3; however, there don't seem to be many guides on doing so. I have, however, made it work on my Debian-based distro relatively easily.


This is the tooling I combined:


You need a recent version of s3cmd to make this work – v1.5.5 or above is apparently what supports the stdin/stdout handling you'll need.
As of writing, this can be obtained from the s3tools git repository @
You'll need git and some Python bits and pieces, but building was straightforward in my case.

Before you start, make sure you set up s3cmd using the command s3cmd --configure


I use pigz, although you can use gzip to achieve the same thing. For those that don't know, pigz is a multi-threaded implementation of gzip – it offers much better performance than gzip on modern multi-core systems.


tar is pretty much on every linux system, and helps deal with folder contents in a way that gzip/pigz can’t.


The command I built is as follows:

tar cvf - --use-compress-prog=pigz /tmp/source/directory/* --recursion --exclude='*.vswp' --exclude='*.log' --exclude='*.hlog' | /path/to/s3cmd put - s3://bucket.location/BackupFolder/BackupFile.tar.gz --verbose -rr

I think it's pretty self-explanatory, but I'll run through the command anyway…

tar cvf = tar: create, verbose, next option is a file
- = stands for stdout in tar parlance
--use-compress-prog=pigz = self-explanatory, but you can probably swap this for any compression app which supports stdout
/tmp/source/directory/* = the directory or mount point where your source files are coming from
--recursion = recurse through the directories to pick up all the files
--exclude='*.vswp' --exclude='*.log' --exclude='*.hlog' = exclude various file types (in this instance, I was backing up a broken VMFS storage array)
| = pipes the output into the next app
/path/to/s3cmd = the directory where s3cmd resides – in my instance, the git repository version I'd installed
put = send to S3; put works with a single file name
- = use stdin as the source
s3://bucket.location/BackupFolder/BackupFile.tar.gz = the S3 bucket and path where you want the output stored
--verbose = verbose debugging output and status tracking
-rr = reduced redundancy storage – less expensive than full redundancy; include or exclude this based on your needs
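Before pointing this at a real bucket, the tar-and-compress stage can be dry-run locally. In this sketch gzip stands in for pigz and a plain file stands in for the s3cmd upload, so nothing here touches S3 (all paths are made up):

```shell
# Create some sample source data
mkdir -p /tmp/s3demo/src
echo "sample data" > /tmp/s3demo/src/file.txt

# Same shape as the backup command, but ending in a file instead of s3cmd put
tar cf - /tmp/s3demo/src 2>/dev/null | gzip -c > /tmp/s3demo/backup.tar.gz

# Confirm the stream is a valid gzip archive before trusting a real upload
gzip -t /tmp/s3demo/backup.tar.gz && echo "stream OK"
```

Once the stream checks out, swap the final redirection for the `| /path/to/s3cmd put - s3://…` stage shown above.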

The biggest problem with this is that you don't really get an idea of how long a backup will take. s3cmd splits the file into chunks, but you don't know how many chunks there are until the process has completed. I average around 6 MB/s, but a multi-gigabyte file can still take several hours to upload. Whilst I didn't time it exactly, a 70 GB file, compressed to 10 GB, took around 90 minutes to send to S3.
You may want to leave your backup running in a screen session.

Installshield Error 1327 (Or Invalid Drive)

A common problem I come across, which causes me lots of trouble to resolve, particularly with networked machines, is the Invalid Drive error (usually error code 1327) when you try to run an InstallShield installer; it is often caused by a mapped network drive. I believe the problem exists because you map a drive at user level, but the administrator doesn't get the same 'layer' of drives (just try opening a mapped drive from an administrative command prompt). Previously, I'd got round this problem by logging in as a local administrator, installing the app for All Users, then logging back in as the required user to finish off the setup. Note, this problem usually occurs before you even choose which drive to install to, so I can only assume it's a feature of InstallShield related to either a temporary file store location, or enumeration of drives at startup.

However, I've discovered a really useful workaround which seems to resolve the problem: the subst command. Subst associates a drive letter with an alternative drive/path, allowing you to have multiple letters pointing to one location. It is to drive letters what mklink is to files and folders.

To use it, fire up an administrative command prompt and type:
subst M: C:\temp\installshieldtemp
(the general form being: subst ProblemDriveLetter TemporaryFolderLocation)

Run the installer to let it install, and you can then clean up with:
subst /D M:
where /D is delete and M: is the drive letter you used in the first step.

Just typing subst on its own will show you the drive letters you have substituted.

Azure Self-Signed Cert

I've been messing around with some of the Azure services, and am about to try some of the desktop utilities, such as the Hyper-V converter, to try publishing services. One thing you'll come across if you do this is the need to create a self-signed management certificate so these apps can authenticate, and you'll see all the TechNet articles mention the makecert.exe tool.

The problem is that makecert is bundled into the Windows 8.1 SDK and Visual Studio 2013 Express downloads, both of which are several hundred megabytes – overkill for what we need.

Well, there's an easy way to get makecert using only about 9 MB of storage space: download the 8.1 SDK installer from  and run it. When prompted to select the tools, you only need to install the MSI Tools. This gives you makecert (as well as a few other bits and bobs) in a much more compact form than installing all of the developer tools.

Weird goings on with a Live CD and

I've been messing around trying to make a live CD with some transcoding/ripping utilities built in, to make use of some of the spare hardware I've got lying around. More on this later, but I've been reworking the guide @ with my own utilities and tools.

One problem I've been challenged with over the last couple of days is HandBrakeCLI bombing out with the message:

root@livecd:/mnt/Videos/Movies/dvdrip/91# HandBrakeCLI -i BHD.iso -o BHD.mkv --preset="High Profile"
[20:41:41] hb_init: starting libhb thread
HandBrake 0.9.9 (2014070200) – Linux x86_64 –
4 CPUs detected
Opening BHD.iso…
[20:41:41] hb_scan: path=BHD.iso, title_index=1
index_parse.c:191: indx_parse(): error opening BHD.iso/BDMV/index.bdmv
index_parse.c:191: indx_parse(): error opening BHD.iso/BDMV/BACKUP/index.bdmv
bluray.c:2341: nav_get_title_list(BHD.iso) failed
[20:41:42] bd: not a bd – trying as a stream/file instead
libdvdnav: Using dvdnav version 4.1.3
libdvdread: Missing symbols in, this shouldn’t happen !
libdvdread: Using libdvdcss version  for DVD access
Segmentation fault

This had been bugging me, as it worked before I converted the image to a live CD. I wondered if it was some kind of problem with the lack of 'real' disk space, or a lack of memory, or something like that, but nothing I could find would identify it.

Finally, I started looking into libdvdcss rather than HandBrake itself. I think what confused me is that the symbols error looks like a warning, especially given the follow-on message which suggests libdvdcss is continuing. Anyway, eventually I ran an md5sum on the file to see if it matched the one on a non-live machine (a virtually identical build).

root@livecd:/# md5sum /usr/lib/x86_64-linux-gnu/
4702028ab20843fd5cb1e9ca4b720a72  /usr/lib/x86_64-linux-gnu/

N.b. is symlinked to in my current Debian sid based build.

On the donor machine
root@donor:/usr/lib# md5sum x86_64-linux-gnu/
c9b314d9ed2688223c427bc5e5a39e6f  x86_64-linux-gnu/

So I SCP'd the source file onto the live machine, checked the md5sum matched the donor machine (it did), and repeated the HandBrake job. Lo and behold, it worked! So I've restreamed the two files into the filesystem and, success, it just works.
I don't know if something funky happens when the image is created via the symlink, but it's quite easy to fix once you understand the problem.
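The checksum comparison above generalises into a quick check you can reuse whenever a live image misbehaves. This sketch fabricates a "donor" and a "live" copy of a file to show the shape of the test; the file names are purely illustrative:

```shell
# Fake a donor copy and a live copy of a library (here, identical)
printf 'library bytes\n' > /tmp/donor_lib.so
printf 'library bytes\n' > /tmp/live_lib.so

# Compare the checksums; a mismatch means the live copy should be replaced
donor_sum=$(md5sum /tmp/donor_lib.so | cut -d' ' -f1)
live_sum=$(md5sum /tmp/live_lib.so | cut -d' ' -f1)
[ "$donor_sum" = "$live_sum" ] && echo "match" || echo "MISMATCH: replace the live copy"
```

In the real case, you'd run the md5sum half on each machine and compare by eye, as above.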

Hope this helps someone, and I'll be back soon with more details about building a live image, then booting it using iPXE.