Musings

Mimecast to Office 365 – Split Routing of Email Domains

We’re currently going through a migration from our existing legacy email provider to Mimecast as our spam filter. We have some services which we can’t interrupt without planning, so we need to deploy Mimecast to Office 365 for our ‘user’ domain without disrupting our ‘alerting’ domain. We also need to validate the Mimecast configuration before it affects users, so we have a test domain to verify the setup as well.

We therefore wanted to stage the email filtering in the following order over a number of days:
1) Test
2) User
3) Alerting

However, the Mimecast documentation isn’t great at describing split routing of email based upon the sender’s domain, and essentially assumes that you want to send all email out through Mimecast from the off.

This great article from Antonio Vargas really helped us out in understanding why our rule wasn’t intercepting outbound messages from the domains we wanted to send out through Mimecast.

In the Conditions select “Apply this rule if..” > The recipient is located > Outside of the Organization

Once that condition was applied to our rule, we were immediately able to verify that the test domain was routing email through Mimecast from Office 365.

WD My Passport Pro – RClone Backup to Cloud (AWS S3)

I’ve set up my WDMPP to perform a regular cloud sync of my pictures into an Amazon S3 data store, so that whenever it is on an internet connection it will sit in the background and upload the pictures.

Note, I’m only backing up photos rather than video, as I intend to run this on a 4G MiFi hotspot and don’t want 4K video uploads to trash my data allowance. I’ll run the risk of losing the videos in the event of a failure whilst mobile, but it’s something I can live with.

First of all, you need to have rclone installed on your WDMPP, which involves using the SSH terminal. I’ll create a separate article on that at some point, but there is plenty of information about how to do this on the internet.
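For reference, a rough sketch of the generic approach (this assumes the device has internet access and that wget and unzip are available in the firmware; paths may differ on your WDMPP):

# download the current ARM build of rclone (the WDMPP is an ARM device)
cd /tmp
wget https://downloads.rclone.org/rclone-current-linux-arm.zip
unzip rclone-current-linux-arm.zip
cd rclone-*-linux-arm

# put the binary somewhere on the PATH and make it executable
cp rclone /usr/bin/
chmod 755 /usr/bin/rclone

# configure the cloud remote (e.g. AmazonS3) interactively
rclone config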

Create two files within the root of the hard drive:

rclonescript.sh contains the command which executes the backup:

rclone copy /media/sdb1/ AmazonS3:wdmpp.backup/ -v --log-file /media/sdb1/logs/rclone.log --copy-links --ignore-case --filter-from /media/sdb1/filestocopy.txt

Command = Meaning
rclone copy = Use the copy command in rclone
/media/sdb1/ = Source root path to look for data
AmazonS3:wdmpp.backup/ = Destination root path to send data to. In this instance I’m using AWS S3, but the same principle should work for other cloud services
-v = Verbose mode
--log-file /media/sdb1/logs/rclone.log = rclone logs to this path (note, you’ll need to mkdir the logs directory)
--copy-links = Follow symlinks when copying (seems to be required)
--ignore-case = Because the WDMPP backs up from a variety of devices, don’t be case sensitive when applying filters
--filter-from /media/sdb1/filestocopy.txt = The filtering definition rclone uses to identify the files to copy
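Putting it together, /media/sdb1/rclonescript.sh might look like this minimal sketch (the shebang is an assumption about the WDMPP’s shell, and you may need the full path to the rclone binary when it runs under cron):

#!/bin/sh
# back up photos from the drive to S3, logging to the drive itself
mkdir -p /media/sdb1/logs
rclone copy /media/sdb1/ AmazonS3:wdmpp.backup/ -v --log-file /media/sdb1/logs/rclone.log --copy-links --ignore-case --filter-from /media/sdb1/filestocopy.txt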

You will need to chmod +x this file to make it executable
chmod +x /media/sdb1/rclonescript.sh

/media/sdb1/filestocopy.txt contains the filtering rules:

- /logs/
- /.USB/
- /.wdmc/
- /.wdcache/
- /.DS_Store/
- *.txt
+ *.jpg
+ *.png
+ *.heic
+ *.bmp
+ *.raw
- *

Filter rule (- exclude / + include) = Description
- /logs/ = Exclude the logs path where rclone writes its own log
- /.USB/ = Exclude the system .USB path
- /.wdmc/ = Exclude the system .wdmc path
- /.wdcache/ = Exclude the system .wdcache path
- /.DS_Store/ = Exclude the macOS .DS_Store entries
- *.txt = Exclude any text files that exist (some of my camera devices create text logs which I’m not interested in copying)
+ *.jpg = Copy any JPEG files with the extension jpg
+ *.png = Copy any Portable Network Graphics files with the extension png
+ *.heic = Copy any High Efficiency Image File Format files with the extension heic (these come from my phone)
+ *.bmp = Copy any bitmap files with the extension bmp (not expecting any of these, but hey)
+ *.raw = Copy any RAW camera files (my camera uses the .raw extension)
- * = Exclude anything else

You can obviously change your filters as you need to, for example including video files or whatever else you write to the disk. rclone applies filter rules in order and the first match wins; I had to put the excludes before the includes, as I found it otherwise wouldn’t necessarily behave as expected. This seems to work well for me.
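Before scheduling it, you can sanity-check the filters with a dry run; --dry-run just reports what would be copied without transferring anything:

rclone copy /media/sdb1/ AmazonS3:wdmpp.backup/ -v --dry-run --copy-links --ignore-case --filter-from /media/sdb1/filestocopy.txt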

Once you’ve tested that it works,  it can be added to cron
First, create the cron path

mkdir /var/spool/cron

Then create the crontab

crontab -e


8 * * * * /media/sdb1/rclonescript.sh >/dev/null 2>&1

This crontab entry runs the script at minute 8 of every hour, i.e. once an hour. If you’re not sure how to create a cron job, https://crontab-generator.org/ is a great website for building cron lines.
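Note the difference between a fixed minute and an every-N-minutes schedule; a couple of illustrative lines:

# at minute 8 of every hour (once an hour) - what I use above
8 * * * * /media/sdb1/rclonescript.sh >/dev/null 2>&1
# every 8 minutes, if you want more frequent sync attempts
*/8 * * * * /media/sdb1/rclonescript.sh >/dev/null 2>&1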

WD My Passport Pro SSD – SMBv2 / Win 10

To enable SMBv2 compatibility on the Western Digital My Passport Pro SSD, so that it supports Windows 10, go through the following steps.

1) Enable SSH access via the admin console
2) Use PuTTY/etc to log into the console
3) nano /etc/samba/smb.conf
4) Add/update the [global] section so it includes the following lines (the key addition is protocol = SMB2):
[global]
workgroup = WORKGROUP
server string = MyPassport Wireless Pro
netbios name = MyPassport
protocol = SMB2

5) run /etc/init.d/S75smb restart
6) Try to browse to \\<IP address of the disk drive>
7) If you can’t log in (username admin), reset the password by typing:
8) /etc/samba/smbpasswd -a admin
Enter the new password
9) Finally restart Samba again (per 5)
10) Profit?
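As an optional check that the protocol setting took effect, you can dump Samba’s running configuration (assuming testparm is included in the device’s firmware):

testparm -s | grep -i protocol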

Skype for Business – Audio Conferencing Behaviour

If you have Skype for Business telephony services, including audio conferencing hosted by Microsoft (365), it is worth sharing the current workflow experience, which doesn’t seem to be well documented.


From a host, or moderator perspective, you dial into the meeting using your assigned phone number, shown on your Skype for Business invite.

  1. The Skype Meeting Attendant answers the phone and asks you to enter the conference ID, followed by the # key.
  2. You enter the meeting number (again, shown on the invite).
  3. You’re prompted to press * if you are the leader, so you press *.
  4. You enter the PIN assigned to your account.
  5. You’re dropped into the meeting, and your name or number is announced if that option is enabled.

From an end user perspective, the process is pretty much the same, except that if the leader has already joined, attendees are not prompted to enter a PIN.

Unlike other audio conferencing providers (ACPs), control of the service from the phone appears to be pretty non-existent, and I think this is by design. After all, the meeting can be controlled from the mobile app if you’re not near a desktop.
You’re also not able to start a meeting recording from the phone; recording is performed by the Skype client, which records into a folder on your local computer, so if recording is required then the client is your only option.

I think MS Teams may take a different approach, but I’ve not got my hands on telephony/audio in that product yet.

Replicating Linux Machines across a Network

This command will replicate a running machine’s disk onto a remote machine.
The source machine can be online and running; the remote machine should be booted via a Linux live image. The destination machine should also have a disk the same size as, or larger than, the source machine’s.

Sending Machine
dd if=/dev/sda bs=16M conv=sync,noerror status=progress | gzip -9 -cf | nc destinationmachineip preferredport -q 10

Receiving Machine
nc -l -p preferredport | gzip -dfc | dd bs=16M of=/dev/sda

It is worth doing some test copies if you have a large image to send or are on a slow network, altering the block size (bs=) and the gzip compression level (-9). On the latter point, on a fast network you may be better off with lower compression, as the CPU cycles spent compressing can take longer than simply sending the data uncompressed or at a lower compression level.

You can also use other compression programs like pigz to achieve better performance.
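As a sketch, the same pipeline using pigz with a lighter compression level would look something like this (assuming pigz is installed on both machines):

Sending Machine
dd if=/dev/sda bs=16M conv=sync,noerror status=progress | pigz -1 -c | nc destinationmachineip preferredport -q 10

Receiving Machine
nc -l -p preferredport | pigz -dc | dd bs=16M of=/dev/sda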

Booting Windows 2016 on HP G8 Microserver MicroSD Card

As good as FreeNAS has been, most of the clients on my home network are Windows based and speak CIFS/SMB,  and I’ve not had great success with FreeNAS reliably/stably serving these protocols.   Under load, the shares sometimes lock up and stop responding, and permissions can be a bit hit and miss.

The FreeNAS support forums drink their own special brand of Kool-Aid, so I’ve decided to try Windows, which, whilst being part of its own Borg collective, has a much wider base of users and obviously native integration with my client base. So I’m piloting Windows Server 2016 with its various storage capabilities to see how it compares.
I’ve got an HP Microserver G8 which, as well as 4 disk trays, supports a fifth SATA device via an additional ODD port, an internal USB and a MicroSD port, as well as various external USBs.
My FreeNAS box is an older N54L Microserver, which installs and boots easily to a USB drive, but Windows is a bit more pig-headed about booting from USB/MicroSD devices.
However, with the help of Daniel’s Tech Blog https://www.danielstechblog.info/how-to-deploy-windows-server-2016-tp3-onto-an-sd-card/ I have managed to get my Microserver booting from the MicroSD card.
Daniel’s instructions are more or less spot on, except for one change.
diskpart
list disk
select disk x
clean
create partition primary
format quick fs=ntfs label="SD"
active
assign letter=C
exit
dism /Apply-Image /ImageFile:D:\sources\install.wim /index:2 /ApplyDir:C:\
bootsect /nt60 C: /force /mbr
bcdboot C:\Windows

I couldn’t get that final line to write to the MicroSD. I kept getting errors about BCDBOOT not being able to write the files, or being unable to find the source location. However, I read the documentation about BCDBOOT on Microsoft’s MSDN site https://msdn.microsoft.com/en-gb/windows/hardware/commercialize/manufacture/desktop/bcdboot-command-line-options-techref-di and happened upon the command for writing to USB devices.

bcdboot C:\Windows /s C: /f ALL

This seems to work fine, and a reboot allows Windows 2016 to boot.

Moving WordPress to a subsite in Azure Websites

Deploying WordPress in Azure is relatively straightforward. Deploy a resource group with a DB and a web server using the WordPress template, and Bob is your father’s brother.

However, I wanted to move WP into a subfolder, which all in all should be pretty straightforward. All the instructions online are out of date or for different systems though, so I wanted to note what I had to set to get it all working.

General Steps are:

1) Deploy your WordPress site.

2) Go into the WP site and perform the configuration to get it running bare-bones

3) Go back into the Azure Portal and find the FTP configuration

4) Log into the Website with the FTP credentials (I used WinSCP)

5) Copy the entire WordPress site to your local machine

6) Delete everything from your site except for
              index.php
              azuredeploy.json
              web.config
I had a few problems deleting the wp-admin and wp-content folders – access kept being blocked, but they went eventually.

7) Create the subfolder you want to use (in my case /d/)

8) Copy the backup of your WordPress into the subfolder.

9) Edit the root web.config file. Mine is set up to redirect all traffic to the /d/ folder. You may have a different setup, but this is mine:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<system.webServer>
    <rewrite>
        <rules>
            <rule name="Root Hit Redirect" stopProcessing="true">
                <match url="^$" />
                <action type="Redirect" url="/d/" />
            </rule>
        </rules>
    </rewrite>
</system.webServer>
</configuration>
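To sanity-check the redirect once the file is saved, a quick request against the site root should come back with a redirect pointing at /d/ (a sketch, substituting your real hostname):

curl -I http://example.site/
# expect an HTTP 301/302 response with a Location header pointing at /d/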

10) Edit the root index.php file and change the require line
require( dirname( __FILE__ ) . '/wp-blog-header.php' );
to include the subfolder you want to use:
require( dirname( __FILE__ ) . '/d/wp-blog-header.php' );

11) Azure WordPress seems to ignore the SQL database settings for the site and home URLs, so these must be hardcoded in the wp-config.php file.
Replace the existing paths below the ‘stop editing’ line as follows.

define('WP_HOME','http://example.site/d');
define('WP_SITEURL','http://example.site/d');
define('WP_CONTENT_URL', 'http://example.site/d/wp-content');

I also had to define the WP_CONTENT_URL string as my Salient theme wasn’t picking up the subpath for some reason, so I hard coded it.

12) The only other thing you may have to do is edit the subfolder’s web.config file. I was getting loads of errors until I deleted it, at which point things magically started working. However, it does seem to have been recreated as a near-empty web.config file, so its contents are here:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules/>
    </rewrite>
  </system.webServer>
</configuration>

I think that was about it, but it was quite the pain to get it configured. NB: if you change the website name or add a custom domain, don’t forget you’ll have to update wp-config.php with the new domain name, as Azure won’t do this for you automatically.

I’m also not sure how things will go when there is a new version available, but hopefully this will get you going.

HP EliteDesk 800 G1 – DisplayPort to HDMI Audio

I’ve been fighting for several days to get audio working from an HP EliteDesk 800 G1 SFF device into my Panasonic TV using a DisplayPort to HDMI cable. The PC is running Windows 10 build 10586. I tried lots of things, including BIOS patches and various driver builds from MS and Intel, but all to no avail.

However, I have managed to get it working by changing the Default Format for the output down from DVD or CD quality to FM Radio Quality in the Advanced properties of the TV playback device. I’m not sure if this is a TV or PC/driver problem, but I suspect it’s the latter: I had the same problem with another TV, yet the cable works on a Surface Pro 3 via a Mini-DP to DP to HDMI adapter, so I think this is PC related.

Hopefully this helps someone else out.

Saving Images to AWS S3 Scriptomagically

Whilst I’ve been messing around creating boot images, I’ve hit the problem of needing to archive off some large images for later use. Now that I’ve finally got access to a high-bandwidth internet link, I can back stuff up to Amazon’s AWS S3 cloud in a reasonably timely fashion.

s3cmd does a great job of interfacing with AWS from a Linux CLI, but it is designed to deal with pre-created files, not data that is generated dynamically. When you’re talking about multi-gigabyte files, it isn’t always an option to make a local archive file before pushing it to the remote storage location.

I’m used to using pigz, dd and ssh to copy files like this, and wanted to achieve something similar with S3; however, there don’t seem to be many guides to achieving this. I have, however, made it work on my Debian-based distro relatively easily.

Tooling

This is the tooling I combined:

s3cmd

You need a recent version of s3cmd to make this work – v1.5.5 or above is apparently what supports stdin/stdout which you’ll need.
As of writing, this can be obtained from the s3tools git repository @ https://github.com/s3tools/s3cmd
You’ll need git and some python bits and pieces but building was straightforward in my case.
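For reference, a rough sketch of grabbing and installing it from the repository (this assumes git and the Python setuptools bits are already present):

git clone https://github.com/s3tools/s3cmd.git
cd s3cmd
sudo python setup.py install

Alternatively, you can just run the s3cmd script straight from the checkout, which is what the /path/to/s3cmd in the command further down refers to.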

Before you start, make sure you set up s3cmd using the command s3cmd --configure

pigz

I use pigz, although you can use gzip to achieve the same thing.  For those that don’t know, Pigz is a multi-threaded version of gzip – it offers much better performance than gzip on modern systems.

tar

tar is pretty much on every Linux system, and helps deal with folder contents in a way that gzip/pigz can’t.

Usage

The command I built is as follows:

tar cvf - --use-compress-prog=pigz --recursion --exclude='*.vswp' --exclude='*.log' --exclude='*.hlog' /tmp/source/directory/* | /path/to/s3cmd put - s3://bucket.location/BackupFolder/BackupFile.tar.gz --verbose --rr

I think it’s pretty self-explanatory, but I’ll run through the command anyway…

tar cvf = run tar: create an archive, verbose output, and the next argument is the file to write to
- = write the archive to stdout rather than to a file
--use-compress-prog=pigz = compress the archive through pigz; you can probably swap this for any compression app which supports stdin/stdout
--recursion = recurse through the directories to pick up all the files
--exclude='*.vswp' --exclude='*.log' --exclude='*.hlog' = exclude various file types (in this instance, I was backing up a broken VMFS storage array)
/tmp/source/directory/* = the directory or mount point where your source files are coming from
| = pipes the output into the next command
/path/to/s3cmd = the directory where s3cmd resides – in my instance, I’d installed the git repository version
put = send to S3; put works with a single file name
- = use stdin as the source
s3://bucket.location/BackupFolder/BackupFile.tar.gz = the S3 bucket and path where you want the output stored
--verbose = verbose output and status tracking
--rr = reduced redundancy storage – less expensive than full redundancy; include or exclude this based on your needs

The biggest problem with this is that you don’t really get an idea of how long a backup will take. s3cmd splits the file into chunks, but you don’t know how many chunks there are until the process has completed. I average around 6 MB/s, but a multi-gigabyte file can still take several hours to upload. Whilst I didn’t time it exactly, a 70GB file, compressed to 10GB, took around 90 minutes to send to S3.
You may want to leave your backup running in a screen session.
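If you later need to pull such a backup back down, the reverse pipeline looks something like this (a sketch, assuming your s3cmd build also accepts '-' as the download destination for streaming to stdout):

# create the restore location first
mkdir -p /tmp/restore/directory
/path/to/s3cmd get s3://bucket.location/BackupFolder/BackupFile.tar.gz - | pigz -dc | tar xvf - -C /tmp/restore/directory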