Tutorials

My first 10 minutes on a Windows development server

On the Windows Servers I use for development, I like to keep things simple. That means security should be in place, but at the same time it should be workable and flexible enough for me to install and download things without getting nagged by obnoxious, overactive security mechanisms. To that end, I execute the following steps on every Windows development server I install.

Install RDP Defender

If your Windows server is publicly available on the internet, then there is a 100% chance that hackers, network scanners and brute-force robots are trying to guess your Administrator login and password as we speak.

Using password dictionaries, they will automatically try to log in to your server hundreds to thousands of times every minute. Not only is this bad for your server’s security, it also wastes a lot of resources, such as CPU and bandwidth.

RDP Defender blocks these attacks by monitoring failed login attempts and automatically blacklisting the offending IP addresses after several failures. You can of course configure it to suit your needs, but it pretty much takes care of itself. It takes just 30 seconds to download and install: https://www.terminalserviceplus.com/rdp-defender.php

Increase RDP Security

Start → Run → gpedit.msc
Go to Computer Configuration → Administrative Templates → Windows Components → Remote Desktop Services → Remote Desktop Session Host → Security

Set client connection encryption level – Set this to High Level so your Remote Desktop sessions are secured with 128-bit encryption.

Require secure RPC communication – Set this to Enabled.

Require use of specific security layer for remote (RDP) connections – Set this to SSL (TLS 1.0).

Require user authentication for remote connections by using Network Level Authentication – Set this to Enabled.
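
For reference, here’s a sketch of the same four settings applied via their policy registry values from an elevated PowerShell prompt (the value names are the documented policy keys; verify against gpedit on your build before relying on this):

# Policy registry path backing the four GPO settings above
$rdp = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
New-Item -Path $rdp -Force | Out-Null
Set-ItemProperty $rdp -Name MinEncryptionLevel -Value 3  # High Level (128-bit)
Set-ItemProperty $rdp -Name fEncryptRPCTraffic -Value 1  # require secure RPC
Set-ItemProperty $rdp -Name SecurityLayer -Value 2       # SSL (TLS)
Set-ItemProperty $rdp -Name UserAuthentication -Value 1  # require NLA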

Disable Remote Management, unless specifically needed.

I’m not a fan of having stuff enabled that I don’t use or need, so even though this probably isn’t a security risk, I’m going to disable it anyway.

Go to Server Manager → Local Server → Remote Management and click ‘Enabled’.
In the window that opens, untick ‘Enable remote management of this server from other computers’ and hit Apply.
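
If you prefer the command line, the built-in Configure-SMRemoting tool does the same thing (run it from an elevated prompt):

Configure-SMRemoting.exe -Disable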


Do not start Server Manager automatically at logon

Go to Server Manager → Manage → Server Manager Properties
Check ‘Do not start Server Manager automatically at logon’.
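
A sketch of the registry equivalent, assuming the commonly documented per-user value (takes effect at the next logon):

Set-ItemProperty 'HKCU:\Software\Microsoft\ServerManager' -Name DoNotOpenServerManagerAtLogon -Value 1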

Disable Password Expiration

Like I said, this is a development server. There’s no need for top-notch security, as I’ll probably spin up a new machine in a couple of months and delete this one.

Start → Run → gpedit.msc
Go to Computer Configuration → Windows Settings → Security Settings → Account Policies → Password Policy.

Change ‘Maximum password age’ to 0. Hit apply and ‘Password will not expire’ should now be shown.
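
The same thing from an elevated command prompt, if you prefer:

net accounts /maxpwage:unlimited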

Schedule automatic update restarts

Windows Server 2012 and 2016 use ‘active hours’ to determine whether or not it’s safe to reboot the machine for updates. Moreover, the time frame of the ‘active hours’ cannot be greater than 12 consecutive hours. To be honest, I don’t know who came up with this brilliant idea, since a server is usually designed to be on 24/7. Therefore, I prefer to choose when Windows reboots for updates by scheduling a specific time, instead of playing Russian roulette over whether the thing is going to reboot while I’m running jobs or tests.

Start → Run → gpedit.msc
Go to Computer Configuration → Administrative Templates → Windows Components → Windows Update.

Open ‘Configure Automatic Updates’, tick ‘Enabled’, choose option 4 (‘Auto download and schedule the install’) and tick ‘Install during automatic maintenance’.

Note: when ticking ‘Install during automatic maintenance’, the schedule you define in gpedit (e.g. ‘Every day’ with a scheduled install time of 03:00) has no effect! The automatic maintenance option overrides this schedule. Automatic maintenance is performed daily, but you are free to change the time at which it takes place via Control Panel → System and Security → Security and Maintenance → Automatic Maintenance.
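
For reference, a sketch of the registry values behind the ‘Enabled + option 4 + schedule’ part of this policy (the documented WindowsUpdate\AU keys; verify on your build):

$au = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU'
New-Item -Path $au -Force | Out-Null
Set-ItemProperty $au -Name AUOptions -Value 4            # auto download and schedule the install
Set-ItemProperty $au -Name ScheduledInstallDay -Value 0  # 0 = every day
Set-ItemProperty $au -Name ScheduledInstallTime -Value 3 # 03:00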

Disable Internet Explorer Enhanced Security Configuration

On a development server, downloading new tools and utilities is common practice. Instead of whitelisting every domain, of which there are a lot nowadays, I simply turn off Internet Explorer Enhanced Security Configuration. Yes, I know this is a potential security risk, especially on production servers, but like I said, this is a development server. Besides, use your common sense when pointing and clicking at stuff on the interwebz and you should come a long way.

Go to Server Manager → Local Server → IE Enhanced Security Configuration and tick ‘Off’.
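
If you’d rather script it, IE ESC is toggled through these two widely documented Active Setup registry keys (a sketch; double-check the GUIDs on your build):

# 0 = IE ESC off; the first GUID applies to administrators, the second to users
$adminKey = 'HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A7-37EF-4b3f-8CFC-4F3A74704073}'
$userKey  = 'HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A8-37EF-4b3f-8CFC-4F3A74704073}'
Set-ItemProperty $adminKey -Name IsInstalled -Value 0
Set-ItemProperty $userKey -Name IsInstalled -Value 0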

Privacy settings

Windows Server 2012, and especially Windows Server 2016, are quite intrusive when it comes to privacy. I don’t like the automatic sharing of ‘diagnostic and usage data’ (whatever that may be), so I switch off these options as far as possible (hoping they actually do something instead of being bogus buttons/placeholders).

Go to Server Manager → Local Server → Feedback & Diagnostics and click ‘Settings’.
In the window that opens, choose ‘Never’ and ‘Basic’.

Do the same for Windows Defender by switching off ‘Cloud Protection’ and ‘Automatic Sample Submission’.
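
A sketch of the same settings from an elevated PowerShell prompt (the telemetry policy value and the Defender cmdlets are the documented ones; verify on your build):

# Telemetry: 1 = Basic (0 = Security exists only on Enterprise/Server SKUs)
$dc = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\DataCollection'
New-Item -Path $dc -Force | Out-Null
Set-ItemProperty $dc -Name AllowTelemetry -Value 1
# Defender: disable Cloud Protection (MAPS) and never submit samples
Set-MpPreference -MAPSReporting 0
Set-MpPreference -SubmitSamplesConsent 2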

Show extensions & hidden files, folders and drives

It’s always handy to know whether you’re opening invoice.pdf.exe or an actual invoice.pdf, isn’t it?

Open a random folder and go to File → Change folder and search options.

On the View tab, tick ‘Show hidden files, folders and drives’ and untick ‘Hide extensions for known file types’. Hit Apply and OK.
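
The same two checkboxes can be flipped via the per-user Explorer registry values (a sketch; Explorer needs a restart to pick them up):

$adv = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced'
Set-ItemProperty $adv -Name Hidden -Value 1       # show hidden files, folders and drives
Set-ItemProperty $adv -Name HideFileExt -Value 0  # show extensions for known file types
Stop-Process -Name explorer -Force                # Explorer restarts itself and applies the change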

Change Power Plan to High Performance

I hate waiting for my disks to spin up, and since this is a server, I always choose the High Performance power plan in order to get maximum performance.

Go to Control Panel → Hardware → Power Options and tick ‘High performance’.
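
Or from a command prompt, using the built-in alias for the High performance plan:

powercfg /setactive SCHEME_MIN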


Last but not least, install 7Zip & Notepad++

These two tools belong in every developer’s toolkit, so install them while you’re at it!
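
If you use a package manager such as Chocolatey (an assumption on my part; see https://chocolatey.org if you don’t have it yet), both are a one-liner:

choco install 7zip notepadplusplus -y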

That’s all for now. Comments or questions? Let me know down below. Cheers!

Tutorial: How to secure Traccar with SSL / HTTPS for free, using IIS and Let’s Encrypt on Windows Server

Introduction

In this guide, I’m going to show you how to secure your Traccar installation with SSL, so that it can be reached over https instead of http. Traccar is a free and open source modern GPS tracking system.
Since Traccar has no native support for encrypted connections, we’ll set up a reverse proxy using IIS (the method recommended by the developer) and use Let’s Encrypt to generate a free valid certificate for your Traccar installation.

Prerequisites

  • A working Traccar instance, reachable over http (by default http://localhost:8082), installed on Windows Server 2012 R2 or Windows Server 2016.
  • A Fully Qualified Domain Name (FQDN), for example ‘yourdomain.com’, with an A record pointing to the IP of your Traccar server:

    (Of course, change the variables to meet your environment, i.e. replace ‘123.123.123.123’ with the IP of your Traccar server and ‘traccar.yourdomain.com’ with your own (sub)domain.
    Please note that it can take up to 24 hours, though usually no more than 1-2 hours, for your DNS update to ‘propagate’, i.e. sync with the rest of the world.)

Getting Started

First, install the URL Rewrite add-on module. From Windows Server 2012 R2 and up, you can use the Microsoft Web Platform Installer (WebPI) to download and install the URL Rewrite Module. Just search for ‘URL Rewrite’ in the search options and click ‘Add’.


After installing, do the same for the Application Request Routing 3.0 add-on module:


Next, open IIS and add a new website:

In the window that opens, fill in the site details: the site name, the physical path and the host name binding. Change the variables to meet your environment.

Close IIS for now and download and install ‘Certify the web’, a free (up to 5 websites) SSL Certificate Manager for Windows (powered by Let’s Encrypt). Certify will automatically renew your certificates before they expire, so it pretty much takes care of itself.

After installing, open Certify. Before we can request a new certificate, we first need to set up a new contact. This is mandatory. So, first, go to ‘Settings’ and set a ‘New Contact’.


Next, click on ‘New Certificate’:

Select the website you created in IIS, in my case named ‘Traccar’:

The rest of the information should now autofill, based on the details you entered in IIS.

Next, go to the Advanced tab and click ‘Test’ to verify that everything is set up correctly.

If all goes well, you should get a popup confirming the test succeeded.

Click OK and click ‘Save’.

Next, click ‘Request Certificate’ to request your free valid SSL certificate from Let’s Encrypt for your Traccar installation:

If all goes well, you should get ‘Success’.

Next, close Certify and open IIS again. Go to the website you created (in my example ‘Traccar’) and click on URL Rewrite.

Click on ‘Add Rule(s)’ in the top right corner:

In the window that opens, click on ‘Reverse Proxy’ and click ‘OK’.

In the window that opens, enter ‘localhost:8082’ in the Inbound Rules text field,
select ‘Enable SSL Offloading’,
select ‘Rewrite the domain names of the links in the HTTP responses’ from ‘localhost:8082’
and select your Traccar domain from the dropdown menu, i.e. ‘traccar.yourdomain.com’ and click OK.
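
For reference, the wizard writes these rules to a web.config in the site’s root folder. A minimal sketch of what the inbound rule ends up looking like (the rule name and the generated outbound rules will differ on your setup):

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Forward all incoming requests to the local Traccar instance -->
        <rule name="ReverseProxyInboundRule1" stopProcessing="true">
          <match url="(.*)" />
          <action type="Rewrite" url="http://localhost:8082/{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>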

Next, go to your website in IIS again and click on Compression:

Outbound rewriting can only be applied to uncompressed responses. If the response is already compressed, the URL Rewrite Module will report an error whenever an outbound rule is evaluated against that response. Therefore, we need to disable compression in order to get Traccar to play nicely with IIS. Uncheck both options and click Apply.
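
A sketch of the same change from an elevated command prompt, assuming your site is named ‘Traccar’:

%windir%\system32\inetsrv\appcmd.exe set config "Traccar" -section:system.webServer/urlCompression /doStaticCompression:"False" /doDynamicCompression:"False"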

That’s it! We’re done! Your Traccar installation should now be reachable over HTTPS and have a valid SSL certificate.

If the website is not opening (times out), check whether inbound port 443 is open in your firewall.
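
If it’s closed, a rule like the following (run from an elevated command prompt) opens it in the built-in Windows Firewall:

netsh advfirewall firewall add rule name="HTTPS inbound" dir=in action=allow protocol=TCP localport=443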

Optional

Since your website is now reachable over https, you can change the Challenge Type to tls-sni-01 in Certify.

This way, you can remove the port 80 binding in IIS if you want, to force all traffic to your Traccar installation over https.

Have fun! Any questions or comments, let me know down below.

Blocking DNS Amplification attacks using IPtables and/or fail2ban

Updated 11 January 2019: Fixed syntax based on comments. Thank you!


If you are managing a Linux server, you’ve probably heard about DNS amplification attacks, which make use of misconfigured DNS servers. DNS amplification is a DDoS technique that uses open or misconfigured DNS resolvers to flood a target with large DNS replies. This is accomplished by spoofing the query with the source IP of the target victim and asking for a large DNS record, such as an ANY reply for the ROOT record or for isc.org, which is the most commonly seen. The request itself is usually around 60-70 bytes, while the reply can be as much as 2-3K. That’s why it’s called amplification. It will not only make your network participate in the attack, but it will also consume your bandwidth. More details can be found here.

Blocking these kinds of attacks can be tricky. However, there are some basic iptables rules that block most of them, optionally in combination with fail2ban. As usual, your mileage may vary. The commands below were tested and executed on Ubuntu Server 16.04 LTS 64-bit.

Basically, it all comes down to adding these IPtables rules:

iptables -A INPUT -p udp --dport 53 -m string --from 40 --algo bm --hex-string '|0000FF0001|' -m recent --set --name dnsanyquery 
iptables -A INPUT -p udp --dport 53 -m string --from 40 --algo bm --hex-string '|0000FF0001|' -m recent --name dnsanyquery --rcheck --seconds 60 --hitcount 3 -j DROP
iptables -A INPUT -p tcp --dport 53 -m string --from 52 --algo bm --hex-string '|0000FF0001|' -m recent --set --name dnsanyquery 
iptables -A INPUT -p tcp --dport 53 -m string --from 52 --algo bm --hex-string '|0000FF0001|' -m recent --name dnsanyquery --rcheck --seconds 60 --hitcount 3 -j DROP

Source: https://wiki.opennic.org/opennic/tier2security

The first rule of each pair looks for incoming packets on port 53 and searches the packet, starting at byte 40 for UDP (byte 52 for TCP), for the hex string ‘0000FF0001’, which is equivalent to an ANY query; matching source IPs are recorded in the ‘dnsanyquery’ list.
The second rule drops the packet if the same source IP has sent three or more such ANY queries within the past 60 seconds.

Make sure to save your iptables rules, using something like iptables-persistent, so that they stick when you reboot your server.
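
On Ubuntu 16.04, a sketch of how to do that with iptables-persistent:

sudo apt-get install iptables-persistent
sudo netfilter-persistent save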

In case this approach doesn’t work for you, try using the following alternative, which makes use of Fail2ban instead of IPtables.

First edit the file /etc/fail2ban/jail.conf and add the following contents:

[iptables-dns]
enabled = true
ignoreip = 127.0.0.1
filter = iptables-dns
action = iptables-multiport[name=iptables-dns, port="53", protocol=udp]
logpath = /var/log/iptables/dns_reqs.log
bantime = 86400
findtime = 120
maxretry = 1

[named-refused-udp]
enabled = true

[named-refused-tcp]
enabled = true

Next, create a new fail2ban filter by creating a new file called /etc/fail2ban/filter.d/iptables-dns.conf and adding the following contents to it:

[Definition]

failregex = fw-dns.*SRC=<HOST> DST
            ^.* security: info: client <HOST>#.*: query \(cache\) './(NS|A|AAAA|MX|CNAME)/IN' denied

ignoreregex =

After doing so, check with ‘fail2ban-client status’ whether the ‘iptables-dns’ jail is listed. If fail2ban refuses to start, check your regex for typos using the following command:

fail2ban-regex /var/log/kern.log /etc/fail2ban/filter.d/iptables-dns.conf

That’s all folks! Any feedback or suggestions? Let me know in the comments!

Recommended further reading: Muhammad Yeasir Arafat, Muhammad Morshed Alam and Feroz Ahmed, ‘A Realistic Approach and Mitigation Techniques for Amplifying DDOS Attack on DNS’, in Proceedings of the 10th Global Engineering, Science and Technology Conference, 2-3 January 2015, BIAM Foundation, Dhaka, Bangladesh, ISBN 978-1-922069-69-6.

How to migrate between Synology NAS (DSM 6.0 and later)

Recently I bought a Synology DS216j to replace my Synology DS214se. The DS214se is a good entry-level NAS for personal use, but it was struggling to keep up with my 3 HD IP-cameras as well as acting as a Mail Server, mainly because of its single-core CPU. Since I didn’t want to lose my data, I had to perform a migration from the DS214se to the DS216j. A quick Google search led me to this Synology knowledge base article: https://www.synology.com/en-us/knowledgebase/DSM/tutorial/General/How_to_migrate_between_Synology_NAS_DSM_5_0_and_later

The title of the knowledge base article above says it’s intended for Synology NAS running DSM 5.0 and later. At the time of writing, DSM 6.1 is the latest available DSM version, so I suspected the knowledge base article might be out of date. Because my NAS models were not identical, I had to follow section 2.2 of the article linked above: ‘Migrating between different Synology NAS models’. After doing so, I can confirm that my suspicion was right; the knowledge base article is out of date, and the migration process between two Synology NAS has gotten easier!

Here’s a small writeup about what has changed in migrating between Synology NAS between DSM 5.0 and DSM 6.0:


Section 2.2, ‘Migrating between different Synology NAS models’, starts with a word of caution, telling you that all packages on the target Synology NAS (i.e. your new NAS) will have to be reinstalled, which results in losing the following data: (…) Mail Server and Mail Station settings & Surveillance Station settings. This was applicable to my Synology NAS, as I had these packages installed and actively in use. However, after performing the migration to my new NAS as described in section 2.2 (which basically comes down to: update your old NAS to the latest DSM, switch it off, swap the drives into the new NAS and turn it on), my new Synology said the packages had to be repaired instead of reinstalled. After clicking the repair button, all my packages came back to life on the new NAS without any data loss; all my settings and files, including those from Mail Server, Mail Station and Surveillance Station (emails as well as recordings), were still there! Needless to say, it’s still good practice to back up your data before performing the migration, as described in section 1 of the knowledge base article linked above.

What did change, however, was the IP address of my NAS. I assumed that my new NAS would be using the same IP as my old NAS, since Synology instructs you to turn off your old NAS before powering up your new one, but that was not the case. So after the migration, use the Synology finder to find the new IP of your NAS and change it back to your old IP, which can be done in Control Panel → Network.

Lastly, I had to re-register my DDNS hostname by logging into my Synology account again, which can be done in Control Panel → External Access.

That’s all folks!

PS. Should you have bought any additional Surveillance Station license keys in the past, don’t forget to write them down and to deactivate them on your old NAS before the migration, since license keys can only be active on one Synology product at a time. Also, as an FYI, each license key can only be migrated once.

TeamSpeak Server 3.0.13 not starting? Install Visual C++ Redistributable for Visual Studio 2015!

Recently I encountered issues starting my TeamSpeak server after updating it from version 3.0.11.4 to 3.0.13.6; it would immediately crash with a blank error log.
Apparently, TeamSpeak Server 3.0.13 onwards requires the 32-bit Visual C++ Redistributable for Visual Studio 2015 to be installed. Yes, the 32-bit variant, even if your OS is 64-bit. So, should you encounter immediate server crashes after updating your TeamSpeak server to version 3.0.13, try downloading and installing the 32-bit variant of the Visual Studio 2015 run-time from here: https://www.microsoft.com/en-us/download/details.aspx?id=48145

PS. In case you missed the release notes of TeamSpeak Server 3.0.12: as of that version, the server binaries’ file names no longer contain platform suffixes. They’re all called “ts3server” now, so don’t forget to delete the old/obsolete binary with the platform suffix from your TeamSpeak Server installation folder (else it will crash as well…).

How to schedule a PHP script in Plesk for Windows using cronjob/crontab

Nowadays it’s dead easy to schedule a PHP script as a cronjob/crontab in Plesk Onyx for Windows. In previous versions, however, Plesk did not supply a sample syntax for scheduled tasks. Most examples found on the interwebs assume that you’re running Plesk on Linux, but if you are like me and run Plesk on Windows, that syntax is just plain wrong.

This small ‘note to self’ post shows how to correctly schedule a PHP script in Plesk for Windows, for those of you who are still running an older version of Plesk :)

Step 1. Open Plesk and search for Scheduled Tasks

Step 2. Create a new cronjob/crontab and adjust the parameters to your liking. In this example, I’ve scheduled the particular .php script to run every 5 minutes, every day of the week. A sketch of what such a task can look like is shown below.
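
For example (the PHP path is illustrative: pick the php.exe of whichever PHP version Plesk installed for you, and point the argument at your own script):

Path to executable: C:\Program Files (x86)\Parallels\Plesk\Additional\PleskPHP56\php.exe
Arguments: -q C:\Inetpub\vhosts\yourdomain.com\httpdocs\script.php
Run: every 5 minutes (cron style: */5 * * * *)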

Step 3. You’re done! If desired, you can also run the task on demand by clicking Run Now.

Microsoft Azure Tips & Tricks

Recently I’ve started to play around with Azure, Microsoft’s cloud platform. In this blog post, I’ll be sharing some of my initial findings, which might prevent you from making the same rookie mistakes as I did ;)

First of all, compared to other cloud providers such as Vultr and DigitalOcean, Azure is quite expensive. However, should you have an MSDN account, you might be eligible for free Azure credit of up to $150 per month! For more information, visit https://azure.microsoft.com/en-us/pricing/member-offers/msdn-benefits-details

The Azure control panel might be overwhelming at first, but it’s pretty intuitive once you start using it. Basically the workflow is from left to right, i.e. if you click on an option, a new screen opens to the right.


So without further ado, here’s my list of top tips & tricks for Azure rookies:

  • VMs are automatically allocated a dynamic WAN (‘internet-facing’) IP address in order to communicate with the internet. This causes the IP address to change when you stop and start the VM, for example when the VM is deallocated and started again on a schedule. You can specify a DNS domain name label for a public IP resource, which creates a mapping for domainnamelabel.location.cloudapp.azure.com to the public IP address in the Azure-managed DNS servers. For instance, if you create a public IP resource with contoso as a domainnamelabel in the West US Azure location, the fully-qualified domain name (FQDN) contoso.westus.cloudapp.azure.com will resolve to the public IP address of the resource. This is particularly helpful when using a dynamic IP address (see the CLI sketch after this list).
    Should you not want this, you can assign a static IP to the VM. However, you cannot specify the actual IP address assigned to the public IP resource. Instead, it gets allocated from a pool of available IP addresses in the Azure location the resource is created in. For more information, visit: https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-ip-addresses-overview-arm

    It’s possible to delete the VM and preserve the static IP address, for example to assign it to another/new VM. In order to do so, do not delete the entire resource group that is associated with the IP address. Instead, shut down the VM, then delete the NIC associated with the static IP you want to preserve. Then delete all other components in the resource group except for the IP you want to preserve. Please do note that IP addresses cannot change resource groups, so in order to re-use the IP address, you need to create a new VM (or load balancer) in the same resource group as the IP address.
  • VMs in a DevTest lab are deployed without a Network Security Group (also known as a firewall) within Azure. They only come with the default Windows Firewall running on the machine itself. Should you want an NSG/firewall within Azure as well, you can deploy one yourself in the same resource group and assign it to the VNet of the VM you created in the DevTest lab. Please do note that this requires you to forward ports twice: once within the VM itself in the Windows Firewall, and again in the NSG/firewall in the Azure portal.



    This is also the case with normal VMs (i.e. VMs that are not created in a DevTest lab), as they do come with an NSG/firewall by default.
  • Azure offers two types of storage disks: SSD (Premium Storage) and conventional HDD (Standard Storage). SSD disks come in 3 sizes: 128 GB, 256 GB and 512 GB. HDD disks have a flexible, user-chosen size of up to 1 TB. You need to create a software RAID in order to get a larger volume/disk.
    Premium Storage supports DS-series, DSv2-series, GS-series and Fs-series VMs. You can use both Standard and Premium Storage disks with Premium Storage-supported VMs, but you cannot use Premium Storage disks with VM series that are not Premium Storage compatible (for example the A-series or Av2-series).
    If you’re using Standard Storage, you’ll only be charged for the data you actually store. Say you created a 30 GB VHD but are storing only 1 GB of data in it: you will only be charged for 1 GB. Empty pages are not charged.
    HOWEVER, if you’re using SSD (Premium Storage), you pay for the FULL SSD disk regardless of how much data you store, and empty pages are charged… so it becomes expensive pretty quickly. For more information, visit https://azure.microsoft.com/nl-nl/blog/azure-premium-storage-now-generally-available-2/ and https://docs.microsoft.com/en-us/azure/storage/storage-premium-storage
  • Azure provides the capability to take backups (snapshots) of your VMs. In order to do so, you need a Recovery Services vault. A Recovery Services vault is an entity that stores all the backups and recovery points you create over time. The Recovery Services vault also contains the backup policy applied to the protected files and folders.
    Previously, when you created a backup vault, you had the option to select locally redundant storage (LRS) or geo-redundant storage (GRS). This has changed: you no longer get that option during vault creation.

    Now, by default, your vault will be created with GRS, which costs (way) more than LRS (almost double!). If you want to switch to LRS, you must do so after the vault has been created but PRIOR to registering any items to the vault. You cannot switch the storage type of your vault after you have registered items to it! To do this, simply access the Configure tab of your backup vault and change the replication from the default GRS to LRS. Don’t forget to click Save…

    For more information, visit: https://docs.microsoft.com/en-us/azure/backup/backup-configure-vault

  • When you create a VM in Azure, you are automatically provided with temporary storage. This temporary storage is “D:” on a Windows VM and “/dev/sdb1” on a Linux VM, and it is used to store the system paging file. This temporary storage must not be used to store data that you are not willing to lose, as there is no way to recover any data from the temporary drive. The temporary storage is present on the physical machine that is hosting your VM. Your VM can move to a different host at any point in time for various reasons (hardware failure, etc.). When this happens, your VM will be recreated on the new host using the OS disk from your storage account. Any data saved on the previous temporary drive will not be migrated, and you will be assigned a temporary drive on the new host. Because this temporary storage drive is present on the physical machine hosting your VM, it can have higher IOPS and lower latency than persistent storage such as a data disk. The temporary storage provided with each VM has no extra cost associated with it, for storage space as well as for transactions. For more information, visit: https://blogs.msdn.microsoft.com/mast/2013/12/06/understanding-the-temporary-drive-on-windows-azure-virtual-machines/
    If your application needs the D: drive to store data, you can use a different drive letter for the temporary disk. However, you cannot delete the temporary disk (it’ll always be there); you can only assign it a different drive letter. For instructions on how to do so, visit https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-classic-change-drive-letter
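
As promised, a quick Azure CLI sketch of the public IP tips above (assuming Azure CLI 2.0, an existing resource group named MyGroup, and illustrative names):

# Create a static public IP with a DNS label; the FQDN
# mytraccar.westus.cloudapp.azure.com keeps resolving even if you later
# re-associate the IP with another VM in the same resource group
az network public-ip create --resource-group MyGroup --name MyPublicIp --location westus --dns-name mytraccar --allocation-method Static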

That’s it for now!

How to: MikroTik Hairpin NAT with dynamic WAN IP for dummies

9 January 2019: This post is obsolete. Please refer to the video tutorial by Stevocee, who explains how to use the (free, built-in) DDNS feature to set up Hairpin NAT with a dynamic WAN IP and port forwarding.


About half a year ago, I bought a MikroTik RouterBoard RB962UiGS-5HacT2HnT hAP AC (phew, what a mouthful!). It’s a great router, or should I say routerboard, which has more features than I could ever wish for… maybe even too many!

Nevertheless, out of the box, I was unable to visit my local NAS/webserver using the WAN IP from my local network. After a quick DuckDuckGo search, I discovered that this requires a so-called ‘Hairpin NAT’. Basically, it reroutes traffic sent to your WAN IP from your local network back to (a specific IP address in) your local network. Graphically, it looks like this:

(Diagram: traffic from a LAN client to the WAN IP loops back into the LAN.) The looping arrow in that diagram, with a little bit of imagination, looks like a hairpin. Hence the name… anyway, let’s continue!

However, the issue with most Hairpin NAT configurations you find online is that they require you to have a static WAN IP, which I don’t have. Additionally, most tutorials use the terminal, whereas I prefer the graphical interface of Winbox. Therefore, I figured out how to set up a Hairpin NAT in combination with a dynamic WAN IP using Winbox myself, and since ‘sharing is caring’, here’s how to do it:

  1. Connect to your Mikrotik using Winbox
  2. Select IP → Firewall from the menu and open the NAT tab.
  3. Make sure that the default ‘defconf: masquerade’ rule is on top.
  4. Add a new rule named ‘Hairpin NAT’ (replace 192.168.88.0/24 with your own local network IP range); see the terminal sketch after this list for the exact fields.
  5. Add another rule. This rule will contain the IP and port you are trying to reroute. For example, let’s say I want to connect to my local NAS running on IP 192.168.88.50 and port 1337 using my WAN IP.
    Don’t forget the exclamation mark in front of the Destination Address!
    Protip: you can add more ports in the same rule. Just split ranges with dashes, like this: 1330-1337, and separate multiple ports with commas, like so: 80,443,1330-1337.
  6. Done! You should now be able to access your NAS/webserver using your WAN IP from your local network. Feel free to add more rules to your liking, but remember, the order is important: first the ‘defconf: masquerade’ rule, then the ‘Hairpin NAT’ rule and then all other rules :)
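
For those who do prefer the terminal, here’s a sketch of the two rules as RouterOS commands (assuming the default 192.168.88.0/24 LAN, the router’s LAN IP 192.168.88.1 and a NAS at 192.168.88.50:1337; adjust to your environment):

# Hairpin NAT: masquerade LAN-to-LAN traffic that has been dst-natted
/ip firewall nat add chain=srcnat action=masquerade src-address=192.168.88.0/24 dst-address=192.168.88.0/24 comment="Hairpin NAT"

# Port forward that also works with a dynamic WAN IP: dst-address-type=local
# matches any IP the router currently owns, and !192.168.88.1 excludes the
# LAN gateway address (that's the exclamation mark from step 5)
/ip firewall nat add chain=dstnat action=dst-nat dst-address=!192.168.88.1 dst-address-type=local protocol=tcp dst-port=1337 to-addresses=192.168.88.50 to-ports=1337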

How to: “Fix” Perflib Error 1008

For the past three days, I’ve been trying to fix several Perflib errors that were appearing in my Event Viewer. The errors cycled through like this:

The Open Procedure for service “BITS” in DLL “C:\Windows\System32\bitsperf.dll” failed
The Open Procedure for service “ESENT” in DLL “C:\Windows\system32\esentprf.dll” failed
The Open Procedure for service “Lsa” in DLL “C:\Windows\System32\Secur32.dll” failed
The Open Procedure for service “MSDTC” in DLL “C:\Windows\system32\msdtcuiu.DLL” failed

They all had Event ID 1008 and Perflib as source. The only differences were the service and DLL names.


Apart from these errors appearing in my Event Viewer, I didn’t notice anything strange about my computer; everything was working like it should, so I had no idea what could be causing this and what (according to Windows) wasn’t working like it should. After a quick Google search, I found out that the error itself isn’t a big deal; it’s just saying that Windows can’t collect performance data.

That was the easy part. Getting rid of the errors is a whole different story. These errors had me technically stumped, as I tried virtually every solution you can find on the web, to no avail:

– I ran a virus scan as well as a malware scan, without success.
– I ran a chkdsk, without success.
– I ran sfc /scannow, without success.
– I ran lodctr /r, without success.
– I ran Microsoft’s SystemFileChecker tool to repair missing or corrupted system files, without success.
– I removed and reinstalled the BITS, ESENT, LSA and MSDTC service without success.

This was driving me nuts. Since the error itself wasn’t really a big deal, I decided to disable the performance counters for the services that Windows was unable to collect performance data for. This can be achieved using the Extensible Counter List tool, which can be downloaded here: http://download.microsoft.com/download/win2000platform/exctrlst/1.00.0.1/nt5/en-us/exctrlst_setup.exe. Update 26-4-2018: link is dead. You can grab it from here: https://www.fileplanet.com/116691/download/Extensible-Performance-Counter-List

After downloading and installing Extensible Counter List, go to C:\Program Files (x86)\Resource Kit and run Exctrlst.exe as Administrator.
Next, find the service(s) in the list that Windows reports it is unable to collect performance data for and uncheck ‘Performance Counters Enabled’ for each service.
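
Under the hood, that checkbox just writes a ‘Disable Performance Counters’ value to the service’s Performance key; a sketch of the same thing from an elevated command prompt, with BITS as the example (repeat for ESENT, Lsa and MSDTC as needed):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\BITS\Performance" /v "Disable Performance Counters" /t REG_DWORD /d 1 /f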


Although this doesn’t solve the underlying problem of why the errors are occurring, at least it gets rid of them in Event Viewer (a.k.a. symptom treatment)… But since the errors themselves aren’t a big deal, you should be fine.

How to: Fix an unbootable Intel SSD suffering from the 8MB bug

A friend of mine was having issues booting his laptop. The BIOS recognized his SSD, an Intel SSD SA2BW120G3A, but Windows was nowhere to be found. Even bootable partition and hard drive managers showed no sign of the SSD. This got me thinking that the SSD was dead, which was odd, as the BIOS was still recognizing it.

Several minutes of Googling led me in the right direction: my friend’s SSD was suffering from the 8MB bug that was discovered in (almost all) Intel SSD firmwares back in July 2011. As my friend had never encountered issues with his SSD and wasn’t aware of this, he never updated his SSD’s firmware, which could have prevented the bug from happening.

The 8MB bug is triggered by an unexpected power loss under specific conditions. It reduces the capacity of the SSD to 8MB and changes the serial number to “BAD_CTX 0000013x”. Once this error occurs, no data on the SSD can be accessed: the user cannot write to or read from the SSD. The only way to get the SSD back to work is to erase it. That’s right, all data on the drive is permanently lost.

Some people have been able to start from scratch by wiping the drive’s contents with utilities such as HDDErase and Parted Magic, but this only works if your SSD is not ‘frozen’. And since my friend has all the luck in the world, sure enough, his SSD was frozen. Fixing a frozen Intel SSD suffering from the 8MB bug requires a more technical approach, but it’s not rocket science once you know what to do. So, let’s get started!

You need:
– Hiren’s Boot CD / USB: http://www.hirensbootcd.org/files/Hirens.BootCD.13.2.zip
Update 21-11-2016: Mini Linux was removed from recent Hiren’s Boot CD versions. The last Hiren’s Boot CD to include Mini Linux is version 13.2 which you can download from the link above.
– Physical access to your SSD (i.e. open up your computer case)

1. Burn Hiren’s Boot CD to a CD or create a bootable USB stick, insert it into your computer and boot from it.

2. Select ‘Mini Linux’ from the menu and hit Enter.

3. Once Linux has loaded, right-click on the wallpaper and select ‘Xterm’.

4. A command prompt / terminal should open. Enter the following command to get a list of all available hard drives in your computer:

 fdisk -l

Locate your Intel SSD in the list and take note of the device name, for example /dev/sda

5. Type the command:

 sudo hdparm -I /dev/sdX

where sdX is your SSD device.
This command just prints out some info about the drive. If you see the following in the output, that confirms that you are hit by this bug:

Serial Number: BAD_CTX

If the Security section reads ‘frozen’, you CANNOT continue; you have to use a workaround to eliminate the freeze first:
Unplug and then replug the SATA data cable of your Intel SSD while the system is still powered on. So, leave your computer powered on, open up your case, locate the SATA data cable of your Intel SSD, unplug it and then replug it. This should unfreeze your SSD.

6. Type the command:

 sudo hdparm --user-master u --security-set-pass SOMEPASS /dev/sdX

Again, /dev/sdX is your SSD drive, and SOMEPASS is a password you want to set for the SSD. (This password doesn’t lock the SSD or anything similar; it is just needed for this low-level dealing with the SSD.) We will need SOMEPASS later on, so remember it or write it down. (After the secure erase this password will be reset anyway, so it is not important in the long term.)

7. Check the drive again:

 sudo hdparm -I /dev/sdX

Now it should say ‘enabled’ and ‘not frozen’ in the Security section:

Security:
	Master password revision code = 65534
		supported
		enabled
	not	locked
	not	frozen
	not	expired: security count
		supported: enhanced erase

8. Type the command:

 sudo hdparm --user-master u --security-erase SOMEPASS /dev/sdX

This issues the secure erase command. Again, /dev/sdX is your SSD and SOMEPASS is the password set before. This operation can take a few minutes to complete. After this, your SSD should be functional; if not, try again with this command:

 sudo hdparm --user-master u --security-erase-enhanced SOMEPASS /dev/sdX

This latter command takes much more time (30-40 minutes), and you will have to set the password again (as in step 6) before running it, because SOMEPASS has likely already been reset by the previous command.

9. After this, check the drive again:

 sudo hdparm -I /dev/sdX

The BAD_CTX thing should be gone and your drive should be functional. You can now reinstall your OS. After all this, don’t forget to update the firmware of the SSD using the Intel SSD Toolbox, to prevent the bug from happening again in the future.

Source & Credits: http://askubuntu.com/questions/409684/image-or-reset-broken-ssd