Windows 7 Zombie Mapped Network Drives

In Windows 7, when using mapped drives on a laptop, you may find that after moving around (undocking, connecting via WiFi etc.) the mapped drive becomes a zombie – it still exists, but is essentially dead. This seems particularly prevalent where offline folders are used. It appears to be caused by the network drive service starting before the network connection is stable. However, you can change the behaviour with a simple registry key.

Under HKLM\SYSTEM\CurrentControlSet\Control\NetworkProvider

add a new DWORD entitled RestoreConnection

Set the DWORD value to be 0
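
If you prefer the command line, the same change can be applied from an elevated command prompt:

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\NetworkProvider" /v RestoreConnection /t REG_DWORD /d 0 /f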

After a reboot, network drives will only be reconnected when you try to access them through Explorer or the file-system APIs.

XPerience Points

It should come as no surprise that there is a massive gap between computers running Windows XP and those on a more modern variant. Yesterday, The Register published an article noting that ~500 million PCs still run XP, which goes end-of-life in April 2014.

The problem is that XP ‘just worked’ (eventually). It’s a relatively lightweight OS: straightforward to configure, reliable and, for big IT outfits, part of their master machine image for an extended period. Why upgrade to something more complex, more finicky and frankly less stable in Vista? And if XP was working so well, why bother changing the images for Windows 7?

Well, the time has come for businesses and home users to think about replacing XP with something more modern. Windows 8 is about to become 8.1, and whilst it doesn’t necessarily fix everything that is broken in 8, it appears to be a good leap forward. Plus, it’s still possible to find Windows 7 machines in certain retailers for those who don’t want to learn the new UI. For those who only use the Internet, check out a Chromebook, which gives you a nice portal onto the WWW without the cruft of a heavyweight machine. For people who consider themselves reasonably confident IT users, why not try one of the Linux distributions; Ubuntu comes highly recommended for Linux n00bz.

Whatever you do, I urge you to upgrade from XP. From April 2014 onwards: no patches, no updates, no security fixes. With that many PCs still running XP, I find it highly likely that those with a financial interest in attacking these machines and using them for nefarious means are sitting on exploits and security holes that will never get fixed. It’s in your interest, as well as everyone else’s, to consider your options now and migrate by April 2014.

When a Minimal Install Isn’t…

Over the past couple of days, I’ve been rewriting the recovery script for a Linux LAMP application I wrote about 5 years ago. I test it every so often to make sure it still works. This year, it doesn’t. Basically, we’ve reached a stage where the software versions don’t support the LAMP stack I chose (XAMPP). Besides, XAMPP isn’t really suitable for production servers, even though it’s served us well in the intervening years.

So,  I’ve embarked on updating the recovery script to fit in with an ‘off-the-peg’ LAMP stack which will be easier to maintain going forward.

My favourite distribution is Debian and it’s the one I’ve most experience with. However, the preferred distro in the office is RHEL, or variants based thereon. So I got myself a fresh download of the Fedora 19 network install CD, loaded it into the virtual machine and off we go. The installer is a bit, err, low-rent – pretty graphics and the like, but not a lot of options to choose from. I suppose I’m too used to the ‘expert’ mode of the Debian network install. Anyway, I went through the necessary steps to get the network up and running, configured it to talk to our proxy server etc., found the disk config menu (hidden off-screen at low resolution), then went to the package selection screen. Being reasonably accomplished now at administering Linux systems, I went for the minimal selection so I could add the other packages later on, and off we went.

I quite liked how you can set the root password and create a new user whilst the OS installs – that’s efficient. Then that was that, server installed. And that’s when the trouble started.

Giving yum proxy access was straightforward (although why the settings don’t carry across from the installer, I don’t know) and getting the LAMP stack installed was straightforward too. The httpd service came straight up after install and was ready to go. Except that it wasn’t. I could not for the life of me get an HTTP page to come up. It seemed that SSH was the only port open by default. I checked the network config and that all looked okay. I could even wget http://localhost and get a page back. So why no external connection? Then I discovered SELinux was installed and running. Disabled that, and a reboot – still no damn connection! There looked to be a load of iptables rules still listed; could they be a carryover from SELinux, I wondered? Dropped the iptables rules and magically got HTTP access back. Rebooted, and the same problem again.

After reaching out to a colleague who has a little more experience with these distros than I, and after installing Webmin, we discovered that firewalld was running on startup.
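
For anyone hitting the same wall, this is roughly the check-and-fix sequence on Fedora 19 (a sketch assuming the stock firewalld setup; only disable the firewall outright if the box sits on a protected network):

    # is firewalld the thing writing those iptables rules?
    systemctl status firewalld
    iptables -L -n

    # either punch a hole for the web server...
    firewall-cmd --permanent --add-service=http
    firewall-cmd --reload

    # ...or stop it getting in the way entirely
    systemctl stop firewalld
    systemctl disable firewalld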

Now, when I do a minimal distro install, I expect the following:

  • A bootloader
  • A kernel
  • A shell
  • Enough configuration to get from the bootloader to a shell
  • An ability to extend the system with a package manager

I do not expect other things to get in the way, especially when I haven’t asked for them. SELinux and firewalls are good practice, but I do not want them imposed on me, especially if I’m not expecting them. There were a number of other packages loaded (wpa_supplicant, for one) that to my mind do not count as essential to getting Linux up and running.

Fedora 19 and I have not immediately started as friends.

Installing Debian Lenny on ML370 G3 with RAID Controller.

Whilst Debian Lenny is compatible with the Compaq/HP ML370 G3 and its on-board RAID controller, it took me a little work to get it installed, and I wanted to document my efforts for myself and others!  This was a painful process, and perhaps isn’t for the faint of heart or those early in their Linux-ninja training, but hopefully it will guide you to the path of enlightenment and joy!

PreReqs:-
An ML370 G3 with sufficient hardware to boot and install, etc.
A copy of HP SmartStart (I used 7.20, others may work). ftp://ftp.hp.com/pub/products/servers/supportsoftware/ZIP/
A copy of Windows 2003 (2000 may also work).
Debian Lenny NetInst CD
USB Stick formatted as FAT with ‘non-free’ Array Controller Driver saved to it (ql2300_fw.bin in my case).

How I did it:-

  1. Connect up the machine, power on and insert the HP SmartStart CD
  2. Once the SmartStart has booted, configure your array(s) as per your requirement.  In my case, I used 2x147GB drives in a RAID1 and 4x72GB drives in a RAID5.
  3. Do an operating system install, choose to install Windows and let it do its own partitioning.
  4. Let Windows install, don’t bother doing any major configuratoring as you’re going to wipe the server again shortly…
  5. Once Windows is installed and on the desktop, insert the Debian CD and Reboot.
  6. Boot to the Lenny Installer, and choose a ‘Standard Install’.
  7. Configure as normal.
  8. When you get to the bit about partitioning, choose the Manual Option.
  9. Go to the NTFS partition that the Windows install created, change it to ext3 format, ensure it is set to be bootable, and set the mount point to /.
  10. I’d leave any other partition configuration until the point that you can boot Linux.
  11. Carry on and install whatever packages/components you require as standard.
  12. When you’re prompted to install the GRUB bootloader into the MBR – SAY NO
  13. You’ll be asked where you want to install the bootloader.  My partition was /dev/cciss/c0d0p1.
  14. Finish off the Installer, remove the CD and Reboot.
  15. Fingers x’d, you’ll get the GRUB boot menu.  Mine then refused to boot the OS, saying that the operating system wasn’t found.   I edited the initial boot menu entry because it was referring to (hd1,0); I changed this to (hd0,0) and it booted.
  16. Login to the console.
  17. If the GRUB editing worked, make sure to edit /boot/grub/menu.lst and permanently change the setting to (hd0,0) for each of the menu items – see the example stanza below.
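
For reference, a corrected menu.lst stanza looks something like this (illustrative only – your kernel version will differ; the root device matches the /dev/cciss/c0d0p1 partition from step 13):

    title  Debian GNU/Linux, kernel 2.6.26-1-686
    root   (hd0,0)
    kernel /boot/vmlinuz-2.6.26-1-686 root=/dev/cciss/c0d0p1 ro
    initrd /boot/initrd.img-2.6.26-1-686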

Cool Runnings:-
This server, when running at full tilt, is a noisy, noisy beasty! However, you can control those fans to make them run at a dull roar rather than full hurricane force.  Follow the instructions from Jonas Bjork relating to Ubuntu. Debian works exactly the same way, but you’ll need to install snmpd and libstdc++2.10-glibc2.2 (for the latter you’ll have to manually download the .deb from the Etch repository).  Once it’s all installed, run the ‘hpasm activate’ command and follow the on-screen instructions.
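
On Lenny, that boils down to something like the following (a sketch – the exact .deb filename from the Etch archive will vary, and hpasm itself comes from HP’s support software):

    apt-get install snmpd
    # libstdc++2.10-glibc2.2 isn't in Lenny; download the .deb from the
    # Etch archive by hand, then:
    dpkg -i libstdc++2.10-glibc2.2_*.deb
    hpasm activate   # then follow the on-screen prompts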

Profit!

gOS(h)

As you may have heard, Google have decided to step into the operating system arena by releasing one aimed at revolutionising the desktop OS market.  At launch, it will be targeted at the lightweight ‘netbook’ range of devices, specifically for people who require regular, rapid access to the Internet.   Information coming from the Mountain View chocolate factory is that it uses the Linux kernel with a ‘new windowing system’, presumably their own GUI rather than relying on X Windows and KDE or Gnome.  Presumably it is a rework of the Android platform to better support larger screens, keyboards and mice, although they do say that the two projects are not intertwined.  The OS also has the Chrome browser built in, and is designed to be an interface into the ‘cloud’ computing resources that have been talked about over the last couple of years.  Because the big G now provides ‘office’ resources such as e-mail, documents, spreadsheets and presentations online, they don’t see the point in having massive computing resources on the desk to write a simple document when their server infrastructure can provide that for free (on the basis that they’ll sell advertising to you), provided you can get online.

It’s an interesting approach to take, and it will be interesting to see how it pans out.  However, I do have a few concerns.  Firstly, the Internet is NOT yet all-pervasive.  Sure, you might have it wall to wall at home, and you may have access in the office.  You may even have 3G on your phone or via a dongle to get online whilst you’re mobile, but you don’t have to travel very far to enter an Internet black hole.  For example, in our office I can get HSDPA access with T-Mobile and 3, but BT/Vodafone barely even has a 2G signal.  Likewise driving home: even though I travel the M6 corridor, one of the busiest routes on the motorway network, there are at least three spots where the signal drops out completely.  At the moment this isn’t a problem, because you don’t need access to the net 100% of the time to write a document or update your e-mail.  But if you’re sat there with a blank screen because your network has dropped out, frustration soon sets in.

Of course, you can get round all this with offline synchronisation, which is the one thing currently missing from another cloud computing environment you may be exposed to – Citrix.  However,  you’re then NOT working in the cloud when you’re working offline, which means more storage, processing and memory requirements, defeating the point of a lightweight computing model.

Maybe my fears are unfounded, or will be addressed when 4G (WiMAX or LTE) appears, but I still can’t see full coverage if you’re stuck up a mountain somewhere, miles from the nearest transmitter.  Still, I await this project with interest to see how it pans out in the battle with Microsoft and the soon-to-be-released Windows 7.   It’s good to see Linux continuing to make inroads into the end-user environment; I just hope it doesn’t become the pervasive kernel standard to the detriment of other Linux projects.

BootNote:

You may have heard the quote attributed to various IT thinkers of the early 20th century, but now generally credited to Thomas Watson of the IBM Corporation.  Believed to date from 1943, it goes: “I think there is a world market for maybe five computers” – ironic considering how ubiquitous computing now is.  But the point is that Google is now building a single massive global computer (a mainframe, if you will) that we all have access to via what could be pretty ‘dumb’ devices.  This computer currently runs the world’s most popular search engine; it hosts videos, lets you go shopping and send e-mails.  In 50 years’ time, will computing be done not at the desk, but in ‘the world’?  Will we access the Internet in the same manner that we access electricity today?  We currently plug an electrical appliance into a socket and it works, using electricity.  In the future, will we plug a terminal into a socket and the Internet will ‘just work’?   Don’t be surprised if we move to a “pay for what you use” ‘net where you don’t buy access to the grid, but you do buy access to the resources you use, just like the utilities of today.

Adventures in Time & Motion.

And now for my next experiment…

I found out a while ago (and have only just got round to trying it) that it’s possible to connect a Nintendo Wiimote to a computer using a Bluetooth adapter.   Wiimotes work via Bluetooth, so when you scan for them they appear as HIDs, with a PIN code of 0000.
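
On a Linux box with the BlueZ 3.x tools of that era (an assumption – newer stacks do this differently), the pairing dance looks roughly like this:

    hcitool scan                          # press 1+2 on the Wiimote so it's discoverable
    # it should appear as "Nintendo RVL-CNT-01" with its address
    sudo hidd --connect XX:XX:XX:XX:XX:XX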

So what’s the point?  Well, primarily it makes an excellent (and relatively inexpensive) remote control, ideal for a media centre or as a presentation controller.  Even without using the motion sensors, the full set of buttons works natively to navigate, click forward and back, and exit (you can map buttons to keys with software).

What is really cool is when you bring in the motion-tracking and infra-red-tracking elements.  The Wiimote (as I’m sure you’re aware) has a 3-axis accelerometer and an infra-red tracking camera built in, which allow it to know where it is in time and space.  Some clever developers have run with this; the most famous is Johnny Lee, a Carnegie Mellon student who has written some excellent interface software to demonstrate the capabilities and possibilities of the hardware.

Some examples include using an IR LED array and some reflective tape to provide an interface along the lines of "Minority Report",  building a cheap IR pen to provide multi-touch interactive whiteboards, and my personal favourite, head-tracking to give ‘virtual windows’ into the world using this inexpensive technology.  You should check out his website HERE, where he discusses the capabilities and lets you download the software.

Be sure to check out the videos on YouTube – some impressive stuff indeed!

CDs, and all that Jazz…

It’s pretty cool how quickly the music industry are embracing new ways of making money after spending years fighting against change.  I attended a concert at Birmingham’s NIA on Friday night (Jeff Wayne’s War of the Worlds) and, as well as the usual programmes, t-shirts and other marketing stuff, I was impressed to see that they were offering a live recording of the night on a CD that you could collect 10 minutes after the gig finished.

Me being cynical, I thought nah, it’ll be a recorded ‘as live’ CD, but what the hey, it’s an interesting and nice souvenir to take home with me.  At the end of the concert, we all dutifully trooped out of the hall and were directed to a ‘sheep pen’ to await the arrival of the magic CDs.  At that point I realised that perhaps it was a live CD, and that some fella was there burning them off with his laptop.  Not so!  A team of people were transferring CDs from a spindle into ready-printed cases, already pre-filled with the Act 1 and photo-excerpt CDs, whilst others ran back and forth with more discs! Amazing stuff, and really only a 15-minute delay in getting hold of the pressings.

Being intrigued by how it works, I looked up the company on the Internet (www.concertlive.co.uk) to see if I could find more information.  There’s a Sky News video of the process HERE (doesn’t work in the office), but basically they have their own sound engineers who sample and record the concert from backstage, then burn off the resultant recording in a trailer filled with hundreds of CD burners.  As far as I can tell, the CDs and cases come pre-printed and the photo CD pre-pressed (it’s a normal silver CD); they put the first-half CDs into the cases during the second half, then add the second-half CDs as they issue them. A genius idea, and easily achievable provided you have enough staff and enough CD burners!!!  The set costs £20, so more expensive than a shop-bought album, but not stupidly priced. And it helps fend off the bootleggers who record concerts then release them on the Internet, as the performers will each receive a cut of the CD sales, as will the producers and record companies.

As far as sound quality goes, it’s top notch.  There was a problem with the sound in the auditorium part way through, where the left channel was lost, but the CD recording is perfect – I suspect they take a feed from the pre-amp and process it themselves to sound good on CD, as well as taking a feed from atmospheric mics to get the applause at the appropriate points.

But all in all, an excellent show, and some excellent ideas for marketing.  I just wish the NIA was a bit better organised.

My Bargain of the Century…

I popped into my local Curry’s last night to look at headphones (see my previous thread on this topic) and, whilst perusing the many aisles where they store them, I happened to note the DAB radio section, and specifically the DAB/WiFi radio section, whereupon I spotted the product you see on the right.

It’s a Revo iblik wifi, and it’s a clock radio that does…  FM, DAB, Internet radio, podcasts, UPnP media streaming, an iPod dock and generic MP3 support.

Now, I’ve not got an iPod (having a deep mistrust of Apple products), but I have got access to all of the others.  And I have to say it’s amazing – rapid to set up, easy to use, excellent sound from something so small, it doesn’t look like a toaster or something from the 1950s (why do DAB radios seem to have this requirement?) and it works (so far) flawlessly.  I suppose it shouldn’t be surprising how good it is when it retails at play.com for £169.99.  Which is why it’s a bargain that I picked up an ex-demo unit for just £29.99. Unboxed, it was just the unit and the wall wart, whereas the boxed unit comes with…  a box, remote control, stereo cable and manuals.  Well, the box goes straight in the bin anyway, the manuals are available online, I’ve got stereo cables knocking around, and the remote can be bought for the princely sum of £10 from the manufacturer’s website, should I feel the need to control my alarm clock from the bed rather than reaching out an arm to tweak it.

And it seems like a good investment – the Government seem to be pushing to take the FM frequencies off the BBC and force them to use DAB only (more on the Digital Britain report later), but not only do I get local radio, I can listen to stations you might not normally hear – last night I listened to RTL Germany, the Sacramento fire department and the Atlantic radar control.  Amazing stuff, and recommended if you can find it at a fifth of the price it should be!

Oh yes, and I didn’t pick up any headphones. 🙂

NVidia Cards on Debian (specifically Lenny)

Anyone running Xorg on Debian will know that getting accelerated graphics drivers working (beyond using the nv driver) is more complicated than achieving world peace. I tried the official NVIDIA drivers, I tried the packages within the repositories, all to no avail.  However, http://desiato.tinyplanet.ca/~lsorense/debian/debian-nvidia-dri-howto.html has perfect instructions which worked first time.

Just a bit of detail around my card for reference:

  • NVIDIA GeForce 6200 128MB AGP
  • Debian Lenny (kernel 2.6.26-1-686)
  • Intel desktop mobo (an 82865, IIRC)
  • Gnome Desktop
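
Whichever install route finally works, the end state is the same: Xorg has to be told to use the proprietary driver. A minimal Device section for /etc/X11/xorg.conf looks like this (illustrative – the identifier string is arbitrary):

    Section "Device"
        Identifier "nvidia0"
        Driver     "nvidia"
    EndSection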

I hope this is of use to someone, as I lost many an hour trying to get the other instructions working.  Also, for reference, I gave up trying to get an ancient Riva TNT2 working, but the 6200 is an excellent upgrade, and pretty cheap too.   It also means I’m able to decommission a PC from the living room, and hopefully I’ll be able to get XBMC compiled and working on it too.

What time is it?

And the answer to the question is NOT Chico time.  The correct answer is ‘it depends’.  One of the core network services, but one that often gets overlooked, is keeping accurate time across all IT systems.  Back in the time before the railways, the time in an area was generally set by sunrise and sunset.  This was fine when people only travelled a few miles each day.  But once you could journey several hundred miles across the country in a day, you could end up with massive problems, especially when you wanted to catch a connecting train.  If your intermediate station was on a different time to your watch and the timetable set at your departing station, you could be in all sorts of trouble.  Hence the adoption of GMT and ‘Railway Time’: a commonly agreed standard time across the country, and eventually around the world, regardless of your location.

And so to the modern age of time synchronisation across a computer network.   Why is this so important, you might ask?  Well, having the correct time is less important than making sure every computer clock is set the same, though it does make life easier if everything is set to the correct time as well.  From allowing people to log in at certain times of the day, to analysing logs from different devices as you pass through the network, correlating activity is a far easier task if everything is synchronised.  Back before the days of flexible working, when you were expected to clock on at 9am and clock off at 5.30pm, you needed to know that you were on the same time as the boss, otherwise there’d be hell to pay!

The company I work for operates two time servers.  These run network time services which check both with time servers out on the Internet and with each other to decide on the correct time.  The reason for this multiple checking is that as time data is sent across the Internet, the inherent delay and the number of hops can skew it, leading to inaccurate information.  Even so, they are generally accurate to about 10ms, plenty for what we require.  Users needing more accurate time, such as financial services and broadcasters, have access to far more accurate time sources and generally synchronise with servers attached directly to what are known as stratum 0 devices.

Time servers are grouped into strata.  Stratum 0 devices are the ‘core’ clocks – atomic, GPS or radio clocks – and keep the most accurate time available.  These are generally not available to the network directly, but are connected to stratum 1 servers.   These level-1 devices receive the time feed from level 0, and then feed it down to stratum 2 computers.  Our time servers are stratum 2 devices synchronising with several stratum 1 servers, and they also check the time with each other to verify that neither is wildly off.  The company’s servers, routers and other ‘core’ devices query these level-2 servers and become stratum 3 devices.  Finally, the computers which synchronise with those become stratum 4 devices.   The protocol allows stratum numbers up to 15 (16 marks a clock as unsynchronised), but generally 5-6 is the most you’ll see.   However, stratum level DOES NOT INDICATE ACCURACY.  Because of the multiple check paths available to devices further down the hierarchy, they can end up more accurate than an individual level-0 or level-1 device, thanks to the compare-and-contrast work that they do.  The stratum simply notes how far ‘away’ from a reference clock a device is.
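
If you’re running ntpd, you can see all of this from the command line – the ‘st’ column in the output is the stratum of each upstream server:

    ntpq -p   # lists each configured peer with its stratum (st), poll
              # interval, reachability, delay, offset and jitter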

We could reduce the number of strata within the company; the easiest way would be to invest in GPS time clocks.  GPS navigation requires each satellite to know precisely what time it is, so that the receiver can compare the signals, work out how far away each satellite is, and therefore where it is in the world.  A high-quality GPS receiver can harness this to calculate the time and pass it to local stratum 1 servers, for example in the core data centres.  But is it worth it?  Probably not.

Another key requirement of a network time service is to ensure the clocks are regularly synchronised.  Computer real-time clocks can be wildly inaccurate, often losing or gaining several seconds per day, especially when machines get switched off and on.  Our stratum 2 servers synchronise every hour with the Internet and each other, but machines further down the strata generally update every 4, 8 or 24 hours, or when they are first switched on.   If you find that your clock is inaccurate and you do not have the ability to update it, try rebooting your computer and it should come back into line.
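
For a typical Linux client, pointing ntpd at a pair of internal time servers is only a few lines of /etc/ntp.conf (the hostnames here are placeholders; iburst just speeds up the initial sync):

    # /etc/ntp.conf - minimal client sketch
    server ntp1.example.com iburst
    server ntp2.example.com iburst
    driftfile /var/lib/ntp/ntp.drift   # remembers the clock's drift rate across restarts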

Network time is kept in UTC (Coordinated Universal Time), which to the casual user generally means GMT. This means that no matter what time zone you are in, or whether daylight saving is in force, your computer can work out the correct time for your location, whether you’re based in Australia, the Falkland Islands or central London (provided the time zone database on your computer is accurate).
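
You can see the UTC-versus-local split on any Linux box – the clock is disciplined in UTC and the zone conversion only happens on display:

    date -u                    # the UTC time that NTP actually keeps in sync
    TZ=Australia/Sydney date   # the same instant rendered for another zone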

So when you put your clocks forward this weekend (1am Sunday becomes 2am Sunday), at least you know that the clock at work SHOULD be correct.