Using Cloudflare with HAProxy

A while ago I switched my domain names' DNS to Cloudflare. The main reason I did this was for dynamic DNS, since I had a dynamic IP on my home Internet connection. I then looked into what else I could use Cloudflare for, and over time I have taken advantage of more of their free offerings.

I was looking at setting up HAProxy anyway because I have a server that I use to play with all kinds of web services: an Icinga2 instance for monitoring, a BookStack setup for taking notes, a Home Assistant install, and more. To make these easily accessible externally I wanted to use HAProxy with SNI. While looking into this I discovered the Cloudflare Origin CA, which I use in the following instructions.


To follow this setup completely you will need a Cloudflare account managing your domain's DNS and an OpenWrt/LEDE router with the HAProxy package installed.

Cloudflare Setup

First, work through the Cloudflare Getting Started guide so that Cloudflare is serving your domain's DNS.

You will then need to configure Cloudflare's Universal SSL and follow their guide to create a Cloudflare Origin Certificate to install onto your router. (Universal SSL covers the browser-to-Cloudflare leg; the Origin Certificate secures the Cloudflare-to-router leg.)

Once you have the certificate and key, you can combine them into a single .pem file. You will then copy this .pem to somewhere on your router; I've placed mine under /etc/ssl/cloudflare/.

Also remember to back this file up somewhere secure, as files in that location are not preserved during an upgrade of OpenWrt/LEDE.
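The combining step might look like this minimal sketch. The filenames here are hypothetical stand-ins for the certificate and key you download from the Cloudflare dashboard, and the placeholder contents exist only so the example is self-contained:

```shell
# Placeholder certificate and key for demonstration only; in practice these
# are the two files downloaded from the Cloudflare Origin CA dialog.
printf '%s\n' '-----BEGIN CERTIFICATE-----' '...' '-----END CERTIFICATE-----' > origin-cert.pem
printf '%s\n' '-----BEGIN PRIVATE KEY-----' '...' '-----END PRIVATE KEY-----' > origin-key.pem

# HAProxy wants one file: the certificate first, then the private key.
cat origin-cert.pem origin-key.pem > example.com.pem

# The key is secret, so keep the combined file root-readable only.
chmod 600 example.com.pem
```

You would then copy the combined file (named for your domain) into /etc/ssl/cloudflare/ on the router.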

HAProxy Configuration

This is the configuration I have used, but I am not sure it is the best way to go about it; I would love feedback on whether there is a better way to manage it.

The file is located at /etc/haproxy.cfg

global
    log 127.0.0.1 local0    # point this at your syslog setup
    maxconn 32000
    ulimit-n 65535
    uid 0
    gid 0
    tune.ssl.default-dh-param 2048
    ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
    ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

defaults
    log global
    # Slowloris protection
    timeout http-request 5s
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    timeout http-keep-alive 4s
    # Close the backend connection after each request
    option http-server-close

listen local_health_check
    bind :60000
    mode health

# Frontend for SNI passthrough
frontend frontend_snipt
    bind *:443
    mode tcp
    log global
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # Decide which backend to pass the traffic to based on the FQDN in the
    # client's SNI (hostnames are placeholders; use your own)
    use_backend backend_snipt_1 if { req_ssl_sni -i <fqdn-1> }
    use_backend backend_snipt_2 if { req_ssl_sni -i <fqdn-2> }
    use_backend backend_snipt_3 if { req_ssl_sni -i <fqdn-3> }
    use_backend backend_snipt_4 if { req_ssl_sni -i <fqdn-4> }
    use_backend backend_snipt_5 if { req_ssl_sni -i <fqdn-5> }
    # A TCP frontend must default to a TCP backend
    default_backend backend_snipt_1

# Backends for SNI passthrough: each hands the raw TLS stream to the matching
# local TLS-terminating frontend below (ports assumed to match the binds)
backend backend_snipt_1
    mode tcp
    server localhost 127.0.0.1:7000 check

backend backend_snipt_2
    mode tcp
    server localhost 127.0.0.1:7001 check

backend backend_snipt_3
    mode tcp
    server localhost 127.0.0.1:7002 check

backend backend_snipt_4
    mode tcp
    server localhost 127.0.0.1:7003 check

backend backend_snipt_5
    mode tcp
    server localhost 127.0.0.1:7004 check

# Normal frontend
frontend frontend_1
    bind *:7000 ssl strict-sni crt /etc/ssl/cloudflare/
    mode http
    use_backend backend_1

frontend frontend_2
    bind *:7001 ssl strict-sni crt /etc/ssl/cloudflare/
    mode http
    use_backend backend_2

frontend frontend_3
    bind *:7002 ssl strict-sni crt /etc/ssl/cloudflare/
    mode http
    use_backend backend_3

frontend frontend_4
    bind *:7003 ssl strict-sni crt /etc/ssl/cloudflare/
    mode http
    use_backend backend_4

frontend frontend_5
    bind *:7004 ssl strict-sni crt /etc/ssl/cloudflare/
    mode tcp
    option clitcpka
    timeout client 3h
    timeout server 3h
    use_backend backend_5

# Normal backends (server addresses are placeholders; point them at the
# actual services)
backend backend_1
    mode http
    server server01 <address:port> check

backend backend_2
    mode http
    server server01 <address:port> check

backend backend_3
    mode http
    server server02 <address:port> check

backend backend_4
    mode http
    server server01 <address:port> check

backend backend_5
    # frontend_5 is mode tcp, so this backend must be too
    mode tcp
    server server01 <address:port> check

With this you can also make the LuCI interface available externally and know that the traffic will all be encrypted. This setup offloads all the TLS work onto HAProxy, so the individual services behind it don't need their own certificates.

You will also need to start the haproxy service: either via System > Startup in LuCI, or from a shell with /etc/init.d/haproxy enable followed by /etc/init.d/haproxy start.

Open Port via LuCI

The last thing you need to make this all work is to open port 443 on the router. Browse to Network > Firewall > Traffic Rules and, under the Open ports on router section, add a rule for TCP port 443. To make this more secure, you can restrict the rule to accept connections only from Cloudflare's published IP ranges.
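The same rule can be sketched directly in /etc/config/firewall on the router. The rule name is arbitrary, and the single Cloudflare range shown is just one example from their published list:

```
config rule
	option name 'Allow-HTTPS-from-Cloudflare'
	option src 'wan'
	option proto 'tcp'
	option dest_port '443'
	option target 'ACCEPT'
	# Optional restriction to Cloudflare only; repeat the list line
	# for each published Cloudflare range you want to allow.
	list src_ip '173.245.48.0/20'
```

Reload the firewall afterwards (e.g. via Save & Apply in LuCI) for the rule to take effect.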


Allow ActiveSync for Android through Microsoft’s Web Application Proxy

While moving to Web Application Proxy (WAP) for our reverse proxy, replacing our TMG 2010 servers, we hit an issue with Android devices connecting to Exchange.

After much playing around I discovered the issue was due to Server Name Indication (SNI). According to Wikipedia:

Server Name Indication (SNI) is an extension to the TLS protocol[1] that indicates what hostname the client is attempting to connect to at the start of the handshaking process. This allows a server to present multiple certificates on the same IP address and port number and hence allows multiple secure (HTTPS) websites (or any other Service over TLS) to be served off the same IP address without requiring all those sites to use the same certificate. It is the conceptual equivalent to HTTP/1.1 virtual hosting for HTTPS.

My understanding is that Android supports this, but for some reason it wasn't working. This was tested with a few devices, e.g. Samsung Galaxy Note (CyanogenMod and stock), Samsung Galaxy S3 and S4, and HTC One.

I found this article that explains how to resolve the issue. You simply need to add a fallback binding by running this command (0.0.0.0:443 binds all addresses, the usual choice for this fallback; you can find the certificate hash with netsh http show sslcert):

netsh http add sslcert ipport=0.0.0.0:443 certhash=<your certificate's hash> appid={f955c070-e044-456c-ac00-e9e4275b3f04}

This acts as a legacy non-SNI fallback binding. Once it is in place you should be able to use Android devices through WAP.

Creating network diagrams with Inkscape

A while ago I was looking at creating a network diagram for my home. I first played around with the usual freely available tools, e.g. Draw, Dia, etc. These produced useful diagrams, but they were really boring. After some more searching I found that Inkscape has a Connector tool you can use to create diagrams. I ended up spending some time making a “modern”-looking network diagram, as you can see here:



The icons are from the Faenza icon set, which until recently I was using on my Ubuntu desktop. I have modified some of the icons for devices like the switch and WAP.

Using Inkscape to create network diagrams is not the best way to do it. I had a few issues with connectors losing their connections when moving devices around, and the occasional crash (which I reported to Inkscape on Launchpad). I would love to see more support for connectors and network-diagram-style drawing added to the SVG standard, because you can make really nice-looking diagrams this way.

Accessing your modem from an OpenWrt router

In this setup the modem's IP address will be <modem IP> and the OpenWrt router's address will be <router IP>.

Go to the Network section and open Interfaces.
At the bottom of the Interfaces section, click Add new interface…

[Screenshot: 01 Create Interface]

Name the interface, e.g. modem.
Select the radio button next to the wan interface in the Cover the following interface section.
Click Submit.
The configuration page for the new interface will now load.

[Screenshot: 02 Interface Config]

Enter the address as <an unused IP on the modem's subnet> and the netmask as <the modem subnet's netmask>.
Go to the Firewall Settings tab and make sure the new interface is assigned to the wan zone.

[Screenshot: 03 Zone Config]

Click Save & Apply.

Go back to the Network menu and select Interfaces.

[Screenshot: 06 Connect Interface]

Click Connect next to the modem interface.
You should now be able to access your modem's web console from a device on your LAN.
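For reference, the interface those steps create can be sketched in /etc/config/network. The device name and addresses here are examples only; use your WAN device and an unused IP on your modem's actual subnet:

```
config interface 'modem'
	option ifname 'eth0.2'          # example: the physical WAN device
	option proto 'static'
	option ipaddr '192.168.1.2'     # example: unused address on the modem's subnet
	option netmask '255.255.255.0'
```

On recent OpenWrt releases the option is 'device' rather than 'ifname'. Remember to keep this interface assigned to the wan firewall zone, as in the LuCI steps above.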

Deploy Image from Stand-Alone USB on SCCM 2012 R2


This deployment is done from a stand-alone USB drive; the machine joins the domain through a WPA2-Personal wireless connection during the Task Sequence, then switches to our WPA2-Enterprise connection once it has completed. The reason we needed to do this was that we were deploying to devices with no LAN port, only WLAN. We could have purchased a bunch of USB-to-LAN adapters, but only the manufacturer's own adapters were supported and they would have been more expensive than USB drives.

Adding an action to run a script after Task Sequence is completed

Early in the sequence (anywhere after “Restart in Windows PE”) you need to set a Task Sequence Variable called SMSTSPostAction. This allows you to run a command or script once the Task Sequence has completed. I have set it up with a script that runs gpupdate and deletes our hidden wireless network profile. Because I used a script, I had to use DISM to add the script into C:\Temp on the image; there are instructions available on how to modify the image this way. The step type is Set Task Sequence Variable: set the Task Sequence Variable name to SMSTSPostAction and the Value to C:\Temp\<script name>. My batch script is simple and looks like this:

timeout 60
gpupdate /force
timeout 60
netsh wlan delete profile name=<profilename>
netsh wlan connect name=<SSID>

Naming the computer automatically

The next step deals with naming. I have a Run Command Line step that sets the computer's name: a simple command that calls a script I have added to a package. The script is as follows:

strComputer = "."

Set env = CreateObject("Microsoft.SMS.TSEnvironment")
Set ProgressUI = CreateObject("Microsoft.SMS.TsProgressUI")
Set SWBemLocator = CreateObject("WbemScripting.SWbemLocator")
Set objWMIService = SWBemLocator.ConnectServer(strComputer, "root\CIMV2")
Set colItems = objWMIService.ExecQuery("Select * from Win32_BIOS",,48)

' Name the computer after the BIOS serial number
For Each objItem In colItems
    env("OSDComputername") = objItem.SerialNumber
Next

Connecting to hidden wireless network

The next part was connecting to the domain over wireless. We had to create an extra SSID, which we hid and protected with WPA2-Personal security. This required creating a connection to that network on another machine and then exporting the profile for the connection.

Exporting the XML after creating the connection is simple. In a command prompt, run:

netsh wlan export profile key=clear

This saves the profile to the directory your command prompt is running from. Note that key=clear writes the wireless passphrase in clear text, so treat the file carefully. With this I created a new Package in SCCM containing the XML (which I renamed domainjoin.xml) and a batch script. The batch script contains the following:

netsh wlan add profile filename=domainjoin.xml user=all
netsh wlan connect name=<SSID>
timeout 60

The timeout is required; I haven't tested anything shorter than 60 seconds.

Task sequence completion

After that, the Task Sequence performs the domain join and a restart.
Once the Task Sequence completes, the script set with SMSTSPostAction will run. Sometimes it takes a few minutes before the machine connects to the wireless, but I have yet to have one fail.

SCCM 2012 Backup Configuration

I set up a quick backup system that archives a week of SCCM backups. If you have not yet configured your backup, go to ConfigMgr Console > Administration > Site Configuration > Sites > Site Name > Site Maintenance.

[Screenshot: SCCM Site Maintenance]

Once your backup task is set up and working, you can create AfterBackup.bat, which runs automatically at the end of the backup maintenance task. I used these instructions to archive each backup under the name of the day on which it was created.

@echo off
setlocal enabledelayedexpansion
REM %date:~0,3% expands to the three-letter day name, e.g. Mon, giving one
REM archive folder per weekday
set target=\\SCCM01\ConfigMgr$\Backup\Archive\%date:~0,3%
if not exist %target% goto datacopy
rd %target% /s /q
:datacopy
xcopy "\\SCCM01\ConfigMgr$\Backup\WCMBackup\*" "%target%\" /E /-Y

I did a test with this and it all worked perfectly. I now have a rolling week of backups.

Microsoft Hyper-V Server 2012 for fun

I wanted to test an upgrade from SCCM 2012 SP1 to R2, as I couldn't find much information on how to do this yet. I also wanted to play around with Microsoft Hyper-V Server 2012. We had a nice Sun server lying around with 90GB of RAM, and it seemed a waste to have it sitting there, so I installed Hyper-V Server 2012 on it.

There are only two 120GB SAS drives in the server, and since it is just for stuffing around I put them in a RAID 0. The install was simple, and configuration is easy too as it has a basic CLI interface. I enabled remote access in moments and started managing it from my Windows 8 machine.

I had some issues connecting to ISOs located on my machine, but that was just down to security settings on my machine. I added the Hyper-V server to the Administrators group and then it worked fine; not the most secure way to do it, but it's just for testing.

I then discovered we had a NAS lying around that was a few years old, an Iomega StorCenter ix4-200r. I attempted to use it with the StorCenter software but it was being a pain, so I downloaded FreeNAS and used that instead. It worked without my having to make any changes. Once the NAS was set up as a RAID 10 with a single CIFS share, I moved all the Hyper-V storage onto it. I did run into a problem with moving the storage, though: I was unable to completely move the VMs because I was getting this error:

[Screenshot: VM Storage Move Failure]

It moves all the storage successfully but doesn't move the configuration file. I have looked around and couldn't find a fix for it besides creating a new VM and attaching the moved virtual hard disk.

Now that it is all running I have a nice little test environment. Prior to the NAS install I had tested the SCCM update and it went well. I have now updated our SCCM to 2012 R2.

I replaced the RAM in the NAS to bring it up to 4GB. Unfortunately the NAS has an old chipset that only supports 4GB max. This definitely improved the performance though.

Upgrade SCCM 2012 SP1 to SCCM 2012 R2

The first thing is to make sure you have a good backup and that the backup is working correctly.

Problems I ran into:

Had to manually reinstall the console (install location > Tools > AdminConsole.msi).

Had to disable and then re-enable PXE on the Distribution Point.

Should have uninstalled the client on the SCCM server before the install.

Had to update the Client Installation Settings on the site.