Computer Stuff

My first 10 minutes on a Windows development server

On the Windows servers I use for development, I like to keep things simple. That means security should be in place, but at the same time it should be workable and flexible enough for me to install and download things without getting nagged by obnoxious, over-active security mechanisms. To do so, I execute the following steps on every Windows development server I install.

Install RDP Defender

If your Windows server is publicly available from the internet, then there is a 100% chance that hackers, network scanners and brute force robots are trying to guess your Administrator login and password as we speak.

Using password dictionaries, they will automatically try to log in to your server hundreds to thousands of times every minute. Not only is this bad for your server’s security, it also wastes a lot of resources, such as CPU and bandwidth.

RDP Defender blocks these attacks by monitoring failed login attempts and automatically blacklisting the offending IP addresses after several failures. You can of course configure it to suit your needs, but it pretty much takes care of itself. It takes just 30 seconds to download and install: https://www.terminalserviceplus.com/rdp-defender.php
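The mechanism behind tools like this is simple threshold-based blacklisting. Here is a minimal Python sketch of the idea (RDP Defender’s actual thresholds and internals may differ):

```python
from collections import defaultdict

class LoginGuard:
    """Blacklist an IP after too many failed logins (the RDP Defender idea)."""
    def __init__(self, max_failures=5):
        self.max_failures = max_failures
        self.failures = defaultdict(int)   # failed attempts per source IP
        self.blacklist = set()

    def record_failure(self, ip):
        self.failures[ip] += 1
        if self.failures[ip] >= self.max_failures:
            self.blacklist.add(ip)         # too many failures: block the IP

    def is_blocked(self, ip):
        return ip in self.blacklist

guard = LoginGuard(max_failures=5)
for _ in range(5):                         # simulate a small brute-force run
    guard.record_failure("203.0.113.7")
print(guard.is_blocked("203.0.113.7"))     # True
```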

Increase RDP Security

Start → Run → gpedit.msc
Go to Computer Configuration → Administrative Templates → Windows Components → Remote Desktop Services → Remote Desktop Session Host → Security

Set client connection encryption level – Set this to High Level so your Remote Desktop sessions are secured with 128-bit encryption.

Require secure RPC communication – Set this to Enabled.

Require use of specific security layer for remote (RDP) connections – Set this to SSL (TLS 1.0).

Require user authentication for remote connections by using Network Level Authentication – Set this to Enabled.

Disable Remote Management, unless specifically needed.

I’m not a fan of having stuff enabled that I don’t use or need, so even though this probably isn’t a security risk, I’m going to disable it anyway.

Go to Server Manager → Local Server → Remote Management and click ‘Enabled’.
In the window that opens, untick ‘Enable Remote management of this server from other computers’ and hit apply.


Do not start Server Manager automatically at logon

Go to Server Manager → Manage → Server Manager Properties
Check ‘Do not start Server Manager automatically at logon’.

Disable Password Expiration

Like I said, this is a development server. There’s no need for me to have top notch security, as I’ll probably spin up a new machine in a couple of months again and delete this one.

Start → Run → gpedit.msc
Go to Computer Configuration → Windows Settings → Security Settings → Account Policies → Password Policy.

Change ‘Maximum password age’ to 0. Hit apply and ‘Password will not expire’ should now be shown.

Schedule automatic update restarts

Windows Server 2012 and 2016 use ‘active hours’ to determine whether or not it’s safe to reboot the machine for updates. Moreover, the maximum time frame of the ‘active hours’ cannot be greater than 12 consecutive hours. To be honest, I don’t know who came up with this brilliant idea, since a server is usually designed to be on 24/7. Therefore, I prefer to choose when Windows reboots for updates by scheduling a specific time, instead of playing Russian roulette over whether or not the thing is going to reboot while I’m running jobs/tests.

Start → Run → gpedit.msc
Go to Computer Configuration → Administrative Templates → Windows Components → Windows Update.

Open ‘Configure Automatic Updates’, tick ‘Enabled’, choose option 4 and tick ‘Install during automatic maintenance’.

Note: when ticking ‘Install during automatic maintenance’, the schedule you define in gpedit (i.e. ‘Every day’ and the scheduled install time of 03:00, as in the screenshot above) has no effect! The automatic maintenance option overrides this schedule. Automatic maintenance is performed daily, but you are free to change at which time it takes place via Control Panel → System and Security → Security and Maintenance → Automatic Maintenance.

Disable Internet Explorer Enhanced Security Configuration

On a development server, downloading new tools and utilities is common practice. Instead of whitelisting every domain, and there are a lot of them nowadays, I simply turn off the Internet Explorer Enhanced Security Configuration. Yes, I know this is a potential security risk, especially on production servers, but like I said, this is a development server. In addition, use your common sense when pointing and clicking at stuff on the interwebz and you should come a long way.

Go to Server Manager → Local Server → IE Enhanced Security Configuration and tick ‘Off’.

Privacy settings

Windows Server 2012, and especially Windows Server 2016, are quite intrusive when it comes to privacy. I don’t like the automatic sharing of ‘diagnostic and usage data’ (whatever that may be), so I switch off these options as far as possible (hoping they actually do something instead of being bogus buttons/placeholders).

Go to Server Manager → Local Server → Feedback & Diagnostics and click ‘Settings’.
In the window that opens, choose ‘Never’ and ‘Basic’:

Do the same for Windows Defender, by switching off ‘Cloud Protection’ and ‘Automatic Sample Submission’:

Show extensions & hidden files, folders and drives

It’s always handy to know whether you’re opening invoice.pdf.exe or an actual invoice.pdf, isn’t it?

Open a random folder and go to File → Change folder and search options.

Tick ‘Show hidden files, folders and drives’ and untick ‘Hide extensions for known file types’. Hit apply and OK.
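The double-extension trick is easy to demonstrate: the last extension is the one that decides what Windows executes, and that’s exactly what hiding extensions conceals:

```python
import os

filename = "invoice.pdf.exe"
root, ext = os.path.splitext(filename)
print(ext)   # '.exe' -- the real extension Windows would execute
print(root)  # 'invoice.pdf' -- what you see when extensions are hidden
```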

Change Power Plan to High Performance

I hate waiting for my disks to spin up, and since this is a server, I always choose the High Performance power plan in order to get maximum performance.

Go to Control Panel → Hardware → Power Options and tick ‘High performance’.


Last but not least, install 7Zip & Notepad++

These two tools belong in every developer’s toolkit, so install them while you’re at it!

That’s all for now. Comments or questions? Let me know down below. Cheers!

Tutorial: How to secure Traccar with SSL / HTTPS for free, using IIS and Let’s Encrypt on Windows Server

Introduction

In this guide, I’m going to show you how to secure your Traccar installation with SSL, so that it can be reached over https instead of http. Traccar is a free and open source modern GPS tracking system.
Since Traccar has no native support for encrypted connections, we’ll set up a reverse proxy using IIS (which is the method recommended by the developer). We’ll be using Let’s Encrypt to generate a free valid certificate for your Traccar installation.

Prerequisites

  • A working Traccar instance, reachable over http (by default http://localhost:8082), installed on Windows Server 2012 R2 or Windows Server 2016.
  • A Fully Qualified Domain Name (FQDN), for example ‘yourdomain.com’, with an A record pointing to the IP of your Traccar server:

    (Of course, in the screenshot above, change the variables to meet your environment, i.e. replace ‘123.123.123.123’ with the IP of your Traccar server and ‘traccar.yourdomain.com’ with your own (sub)domain.
    Please note that it can take up to 24 hours, but usually no more than 1-2 hours, for your DNS servers to ‘propagate’, i.e. sync your update with the rest of the world.)
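You can check from the server itself whether the A record has propagated with a few lines of Python. The check below runs against localhost as a self-contained demo; substitute your own (sub)domain and IP:

```python
import socket

def resolves_to(hostname, expected_ip):
    """Return True once the A record for hostname points at expected_ip."""
    try:
        return socket.gethostbyname(hostname) == expected_ip
    except socket.gaierror:
        return False  # not resolvable (yet), e.g. DNS still propagating

# Demo against localhost; use e.g. resolves_to("traccar.yourdomain.com",
# "123.123.123.123") for your own setup.
print(resolves_to("localhost", "127.0.0.1"))
```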

Getting Started

First, install the URL Rewrite add-on module. From Windows Server 2012 R2 and up, you can use the Microsoft Web Platform Installer (WebPI) to download and install the URL Rewrite Module. Just search for ‘URL Rewrite’ in the search options and click ‘Add’.


After installing, do the same for the Application Request Routing 3.0 add-on module:


Next, open IIS and add a new website:

In the window that opens, fill in the following details:

Change the variables to meet your environment.

Close IIS for now and download and install ‘Certify the web’, a free (up to 5 websites) SSL Certificate Manager for Windows (powered by Let’s Encrypt). Certify will automatically renew your certificates before they expire, so it pretty much takes care of itself.

After installing, open Certify. Before we can request a new certificate, we first need to set up a new contact. This is mandatory. So, first, go to ‘Settings’ and set a ‘New Contact’:


Next, click on ‘New Certificate’:

Select the website you created in IIS, in my case named ‘Traccar’:

The rest of the information should now autofill, based on the details you entered in IIS.

Next, go to the Advanced tab and click ‘Test’ to verify that everything is set up correctly.

If all goes well, you should get this popup:

Click OK and click ‘Save’.

Next, click ‘Request Certificate’ to request your free valid SSL certificate from Let’s Encrypt for your Traccar installation:

If all goes well, you should get ‘Success’

Next, close Certify and open IIS again. Go to the website you created (in my example Traccar) and click on URL Rewrite

Click on ‘Add Rule(s)’ in the top right corner:

In the window that opens, click on ‘Reverse Proxy’ and click ‘Ok’

In the window that opens, enter ‘localhost:8082’ in the Inbound Rules text field,
select ‘Enable SSL Offloading’,
select ‘Rewrite the domain names of the links in the HTTP responses’ from ‘localhost:8082’
and select your Traccar domain from the dropdown menu, i.e. ‘traccar.yourdomain.com’ and click OK.
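Conceptually, the reverse proxy rule just relays each incoming request to the backend and returns its response. Here is a minimal Python sketch of that idea, using a stand-in backend instead of Traccar and automatically chosen ports:

```python
# Minimal reverse-proxy sketch: accept a request on the "public" side and
# relay it to a backend, like the IIS rule does for Traccar on localhost:8082.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Backend(BaseHTTPRequestHandler):
    """Stand-in for the Traccar web interface."""
    def do_GET(self):
        body = b"Traccar backend reply"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

def make_proxy(backend_port):
    class Proxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Forward the request to the backend and relay its response,
            # just like the URL Rewrite inbound rule.
            with urllib.request.urlopen(
                    f"http://127.0.0.1:{backend_port}{self.path}") as resp:
                body = resp.read()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        def log_message(self, *args):
            pass
    return Proxy

backend = HTTPServer(("127.0.0.1", 0), Backend)  # port 0 = any free port
proxy = HTTPServer(("127.0.0.1", 0), make_proxy(backend.server_port))
threading.Thread(target=backend.serve_forever, daemon=True).start()
threading.Thread(target=proxy.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{proxy.server_port}/") as resp:
    answer = resp.read()
print(answer.decode())
```

IIS adds the parts that matter in production on top of this: TLS termination (SSL offloading) and rewriting of domain names in the responses.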

Next, go to your website in IIS again and click on Compression:

Outbound rewriting can only be applied to uncompressed responses. If the response is already compressed, the URL Rewrite Module will report an error when any of the outbound rules is evaluated against that response. Therefore, we need to disable compression in order to get Traccar to play nicely with IIS. Uncheck both options and click Apply:

That’s it! We’re done! Your Traccar installation should now be reachable over HTTPS and have a valid SSL certificate:

If the website is not opening (times out), check if port 443 inbound is open in your firewall:

Optional

Since your website is now reachable over https, you can change the Challenge Type to tls-sni-01 in Certify:

This way, you can remove the port 80 binding in IIS if you want, to force all traffic to your Traccar installation over https:

Have fun! Any questions or comments, let me know down below.

Blocking DNS Amplification attacks using IPtables and/or fail2ban

Updated 11 January 2019: Fixed syntax based on comments. Thank you!


If you are managing a Linux server, you’ve probably heard about DNS amplification attacks, which make use of misconfigured DNS servers. DNS amplification is a DDoS technique which uses open DNS resolvers to flood the target with large replies. This is accomplished by spoofing the query with the source IP of the target victim and asking for a large DNS record, such as an ANY reply for the ROOT record or for isc.org, which is most commonly seen. The request itself is usually around 60-70 bytes, while the reply can be as much as 2-3 KB. That’s why it’s called amplification. It will not only make your network participate in the attack, but it will also consume your bandwidth. More details can be found here.
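Those numbers translate to roughly a 50x amplification factor:

```python
request_bytes = 60     # typical size of the spoofed ANY query
reply_bytes = 3000     # a large ANY response (~3 KB)

amplification = reply_bytes / request_bytes
print(amplification)   # 50.0 -> each request byte yields ~50 reply bytes
```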

Blocking these kinds of attacks can be tricky. However, there are some basic iptables rules that block most of it, optionally in combination with fail2ban. As usual, your mileage may vary. The commands below were tested and executed on Ubuntu Server 16.04 LTS 64-bit.

Basically, it all comes down to adding these iptables rules:

iptables -A INPUT -p udp --dport 53 -m string --from 40 --algo bm --hex-string '|0000FF0001|' -m recent --set --name dnsanyquery 
iptables -A INPUT -p udp --dport 53 -m string --from 40 --algo bm --hex-string '|0000FF0001|' -m recent --name dnsanyquery --rcheck --seconds 60 --hitcount 3 -j DROP
iptables -A INPUT -p tcp --dport 53 -m string --from 52 --algo bm --hex-string '|0000FF0001|' -m recent --set --name dnsanyquery 
iptables -A INPUT -p tcp --dport 53 -m string --from 52 --algo bm --hex-string '|0000FF0001|' -m recent --name dnsanyquery --rcheck --seconds 60 --hitcount 3 -j DROP
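To see why that hex string identifies an ANY query, here is a quick sketch that builds a minimal DNS query for isc.org with QTYPE=ANY. The terminating 0x00 of the query name followed by QTYPE 255 (ANY) and QCLASS 1 (IN) is exactly the byte sequence 00 00 FF 00 01. For UDP, the question section starts at byte 40 of the packet (20-byte IP header + 8-byte UDP header + 12-byte DNS header), which is why the rules match from that offset:

```python
import struct

# Minimal DNS query: 12-byte header, then the question section.
header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # id, flags, 1 question
qname = b"\x03isc\x03org\x00"                  # length-prefixed labels + 0x00 terminator
question = qname + struct.pack(">HH", 255, 1)  # QTYPE=ANY (255), QCLASS=IN (1)
packet = header + question

# The name terminator plus ANY/IN is the pattern the iptables rules look for:
print(bytes.fromhex("0000FF0001") in packet)   # True
```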

Source: https://wiki.opennic.org/opennic/tier2security

The first rule in each pair looks for incoming packets on port 53, searches the packet from the given byte offset (40 for UDP, 52 for TCP) for the hex string “0000FF0001” (which is equivalent to an ANY query) and records the source IP.
The second rule drops the packet if the source IP and query type (in this case “ANY”) match and have been seen three or more times in the past 60 seconds.

Make sure to save your iptables rules, using something like iptables-persistent, so that they stick when you reboot your server.

In case this approach doesn’t work for you, try using the following alternative, which makes use of Fail2ban instead of IPtables.

First edit the file /etc/fail2ban/jail.conf and add the following contents:

[iptables-dns]
enabled = true
ignoreip = 127.0.0.1
filter = iptables-dns
action = iptables-multiport[name=iptables-dns, port="53", protocol=udp]
logpath = /var/log/iptables/dns_reqs.log
bantime = 86400
findtime = 120
maxretry = 1
[named-refused-udp]
enabled = true
[named-refused-tcp]
enabled = true

Next, create a new fail2ban filter by creating a new file called /etc/fail2ban/filter.d/iptables-dns.conf and adding the following contents to it:

[Definition]

failregex = fw-dns.*SRC=<HOST> DST
            ^.* security: info: client #.*: query \(cache\) './(NS|A|AAAA|MX|CNAME)/IN' denied

ignoreregex =

After doing so, run fail2ban-client status and check that the ‘iptables-dns’ jail is listed. If fail2ban refuses to start, check your regex for typos using the following command:

fail2ban-regex /var/log/kern.log /etc/fail2ban/filter.d/iptables-dns.conf
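You can also sanity-check the first failregex offline with plain Python. The log line below is made up, and the <HOST> substitution is a simplified stand-in for fail2ban’s real expansion:

```python
import re

# fail2ban expands <HOST> into an IP/hostname capture group; substitute a
# simple equivalent to test the first failregex against a sample log line.
failregex = r"fw-dns.*SRC=<HOST> DST"
pattern = failregex.replace("<HOST>", r"(?P<host>\S+)")

logline = "kernel: fw-dns: IN=eth0 OUT= SRC=203.0.113.7 DST=198.51.100.2"
match = re.search(pattern, logline)
print(match.group("host"))  # 203.0.113.7 -> the IP fail2ban would ban
```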

That’s all folks! Any feedback or suggestions? Let me know in the comments!

Recommended further reading: A Realistic Approach and Mitigation Techniques for Amplifying DDOS Attack on DNS in Proceedings of 10th Global Engineering, Science and Technology Conference 2-3 January, 2015, BIAM Foundation, Dhaka, Bangladesh, ISBN: 978-1-922069-69-6 by Muhammad Yeasir Arafat, Muhammad Morshed Alam and Feroz Ahmed.

How to migrate between Synology NAS (DSM 6.0 and later)

Recently I bought a Synology DS216j to replace my Synology DS214se. The DS214se is a good entry-level NAS for personal use, but it was struggling to keep up with my 3 HD IP cameras as well as acting as a mail server, mainly because of its single-core CPU. Since I didn’t want to lose my data, I had to perform a migration from the DS214se to the DS216j. A quick Google search led me to this Synology knowledge base article: https://www.synology.com/en-us/knowledgebase/DSM/tutorial/General/How_to_migrate_between_Synology_NAS_DSM_5_0_and_later

The title of the knowledge base article above says that it’s intended for Synology NAS running DSM 5.0 and later. At the time of writing, DSM 6.1 is the latest available DSM version, so I had a suspicion that the knowledge base article might be out of date. Because my NAS models were not identical, I had to follow section 2.2 of the article linked above, ‘Migrating between different Synology NAS models’. After doing so, I can confirm that my suspicion was right: the knowledge base article is out of date, and the migration process between two Synology NAS just got easier!

Here’s a small writeup of what has changed in migrating between Synology NAS from DSM 5.0 to DSM 6.0:


Section 2.2, ‘Migrating between different Synology NAS models’, starts with a word of caution, telling you that all packages on the target Synology NAS (i.e. your new NAS) will have to be reinstalled, which results in losing the following data: (…) Mail Server and Mail Station settings & Surveillance Station settings. This was applicable to my Synology NAS, as I had these packages installed and actively used. However, after performing the migration to my new NAS as described in section 2.2 (which basically comes down to: update your old NAS to the latest DSM, switch it off, swap the drives to the new NAS and turn it on), my new Synology said the packages had to be repaired instead of reinstalled. After clicking the repair button, all my packages came back to life on the new NAS, without any data loss; all my settings and files, including those from Mail Server, Mail Station and Surveillance Station (emails as well as recordings), were still there! Needless to say, it’s still good practice to back up your data before performing the migration, as described in section 1 of the knowledge base article linked above.

However, what did change was the IP address of my NAS. I assumed that my new NAS would be using the same IP as my old NAS, as Synology instructs you to turn off your old NAS before powering up your new NAS, but that was not the case. So after the migration, use the Synology finder to find the new IP of your NAS and change it back to your old IP, which can be done in Control Panel → Network.

Lastly, I had to re-register my DDNS hostname by re-logging into my Synology account, which can be done in Control Panel → External Access.

That’s all folks!

PS. Should you have bought any additional Surveillance Station license keys in the past, don’t forget to write them down and to deactivate them on your old NAS before the migration, since the license keys can only be active on one Synology product at a time. Also, as an FYI, each license key can only be migrated once.

PicoTorrent; a BitTorrent client without the bloat.

Remember the good old days when BitTorrent clients were exactly that: BitTorrent clients? Nowadays BitTorrent clients are packed with loads of unnecessary features. Take uTorrent for example; it started as a lightweight BitTorrent client, but nowadays it is bloated with features such as streaming, and with advertisements. Roughly the same happened to qBittorrent, so I switched to Baretorrent. Sadly, development of Baretorrent stopped in 2013 and it is getting outdated in terms of encryption protocols, hence I started looking for an alternative, and behold: PicoTorrent, a true lightweight BitTorrent client. Basically it’s an updated version of Baretorrent: it has no unnecessary features, no advertisements, IPv6 support, and it’s open source!

Get it here: http://www.picotorrent.org/

TeamSpeak Server 3.0.13 not starting? Install Visual C++ Redistributable for Visual Studio 2015!

Recently I encountered issues starting my TeamSpeak server after updating it from version 3.0.11.4 to 3.0.13.6; it would immediately crash with a blank error log.
Apparently TeamSpeak Server 3.0.13 onwards requires the 32-bit Visual C++ Redistributable for Visual Studio 2015 to be installed. Yes, the 32-bit variant, even if your OS is 64-bit. So, should you encounter immediate server crashes after updating your TeamSpeak server to version 3.0.13, try downloading and installing the 32-bit variant of the Visual Studio 2015 runtime from here: https://www.microsoft.com/en-us/download/details.aspx?id=48145

PS. In case you missed the release notes of TeamSpeak Server 3.0.12: as of that version, the server binaries’ file names do NOT contain platform suffixes anymore. They’re all called “ts3server” now, so don’t forget to delete the old/obsolete binary with the platform suffix from your TeamSpeak Server installation folder (else it will crash as well…)

How to schedule a PHP script in Plesk for Windows using cronjob/crontab

Nowadays it’s dead easy to schedule a PHP script as a cronjob/crontab in Plesk Onyx for Windows. However, in the previous versions, Plesk did not supply a sample syntax for scheduled tasks. Most examples found on the interwebs assume that you’re running Plesk on Linux, but if you are like me and run Plesk on Windows, that syntax is just plain wrong.

This small ‘note to self post’ shows how to correctly schedule a PHP script in Plesk for Windows for those of you who are still running an older version of Plesk :)

Step 1. Open Plesk and search for Scheduled Tasks

Step 2. Create a new cronjob/crontab as shown above. Adjust the parameters to your liking. In this example, I’ve scheduled the particular .php script to run every 5 minutes of each day of the week.

Step 3. You’re done! Your finished cronjob/crontab should look like the image above. If desired, you can also run it on demand by clicking Run Now.
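For reference, the scheduled task boils down to three fields. The paths below are hypothetical and depend on your Plesk PHP version and vhost layout; the key point is that on Windows the executable (php.exe) and its arguments go into separate fields instead of one Linux-style crontab line:

```
Executable:  C:\Program Files (x86)\Plesk\Additional\PleskPHP\php.exe
Arguments:   -q "C:\Inetpub\vhosts\example.com\httpdocs\cron.php"
Schedule:    every 5 minutes, every day of the week
```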

HostsMan: Pi-hole without the pi (DNS-based adblocker)

If you’re tired of AdBlock Plus slowing down your browser and you don’t have a spare Raspberry Pi lying around to run Pi-hole, HostsMan is a great alternative that runs on Windows. One way to keep malware and advertisements out is to block the servers that serve this content. This can be done by adding the host names of these machines to the hosts file and redirecting them to ‘localhost’. Updating the hosts file by hand is time-consuming and prone to errors, and this is where HostsMan comes into play. This free program can retrieve current lists of websites known to serve advertisements and malware and combine them with the existing hosts file. Furthermore, it checks the hosts file for incorrect, duplicate or malicious entries. It also features a built-in editor and can be used to empty the DNS cache.
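At its core, a hosts-file blocker just generates entries like these (example domains only; HostsMan fetches real blocklists for you):

```python
# The core of a hosts-file blocker: point known ad/malware host names at the
# local machine so requests to them go nowhere.
blocked_domains = ["ads.example.com", "tracker.example.net"]

def hosts_entries(domains, sink="127.0.0.1"):
    """Render one hosts-file line per blocked domain."""
    return "\n".join(f"{sink} {domain}" for domain in domains)

print(hosts_entries(blocked_domains))
```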

Download link: http://www.abelhadigital.com/hostsman

Unlike AdBlock Plus, however, HostsMan doesn’t make it obvious which hosts file sources you subscribe to. Enabling all of them sounds like a good idea, but doing so hoses some functionality such as social sharing bookmarklets. For information on which sources to use in HostsMan, visit: https://jdrch.wordpress.com/2014/11/03/which-sources-to-use-in-hostsman/

Also, for a comparison of AdBlock Plus and HostsMan, visit: https://jdrch.wordpress.com/2014/11/05/the-best-way-to-block-ads-adblock-plus-vs-a-custom-hosts-file-hostsman/

Microsoft Azure Tips & Tricks

Recently I’ve started to play around with Azure, Microsoft’s cloud. In this blogpost, I’ll be sharing some of my initial findings, which might prevent you from making the same rookie mistakes as I made ;)

First of all, compared to other cloud providers such as Vultr and Digital Ocean, Azure is quite expensive. However, should you have an MSDN account, you might be eligible for free Azure credit of up to $150 per month! For more information, visit https://azure.microsoft.com/en-us/pricing/member-offers/msdn-benefits-details

The Azure control panel might be overwhelming at first, but it’s pretty intuitive once you start using it. Basically the workflow is from left to right, i.e. if you click on an option, a new screen opens to the right.

Image credit: http://pumpingco.de/your-own-game-server-on-azure-create-a-dedicated-counter-strike-server/

So without further ado, here’s my list of top tips & tricks for Azure rookies:

  • VMs are automatically allocated a dynamic WAN (‘internet facing IP address’) in order to communicate with the internet. This causes the IP address to change when you stop and start the VM, for example using auto-start/auto-stop as described above. You can specify a DNS domain name label for a public IP resource, which creates a mapping for domainnamelabel.location.cloudapp.azure.com to the public IP address in the Azure-managed DNS servers. For instance, if you create a public IP resource with contoso as a domainnamelabel in the West US Azure location, the fully-qualified domain name (FQDN) contoso.westus.cloudapp.azure.com will resolve to the public IP address of the resource. This is particularly helpful in case of using a dynamic IP address.
    Should you not want this, you can assign a static IP to the VM. However, you cannot specify the actual IP address assigned to the public IP resource. Instead, it gets allocated from a pool of available IP addresses in the Azure location the resource is created in. For more information, visit: https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-ip-addresses-overview-arm

    It’s possible to delete the VM and preserve the static IP address, for example to assign it to another/new VM. In order to do so, do not delete the entire resource group that is associated with the IP address. Instead, shut down the VM, then delete the NIC associated with the static IP you want to preserve. Then delete all other components in the resource group except for the IP you want to preserve. Please do note that IP addresses cannot change resource groups, so in order to re-use the IP address, you need to create a new VM (or load balancer) in the same resource group as the IP address.
  • VMs in a DevTest lab are deployed without a Network Security Group (also known as a firewall) within Azure. They only come with the default Windows Firewall running on the machine itself. Should you want an NSG/firewall within Azure as well, you can deploy one yourself in the same resource group and assign it to the VNet of the VM you created in the DevTest lab. Please do note that this requires you to forward ports twice: once within the VM itself in the Windows Firewall, and a second time in the NSG/firewall in the Azure portal.


    Image credit: http://pumpingco.de/your-own-game-server-on-azure-create-a-dedicated-counter-strike-server/

    This is also the case with normal VMs (i.e. VMs that are not created in a DevTest lab), as they come with an NSG/firewall by default.
  • Azure offers two types of storage disks: SSD (Premium storage) and conventional HDD (Standard storage). SSD disks come in 3 sizes: 128 GB, 256 GB and 512 GB. HDD disks have a flexible, user-chosen size which can vary up to 1 TB. You need to create a software RAID in order to get a larger volume/disk.
    Premium Storage supports DS-series, DSv2-series, GS-series, and Fs-series VMs. You can use both Standard and Premium storage disks with Premium Storage supported VMs. However, you cannot use Premium Storage disks with VM series which are not Premium Storage compatible (for example the A-series or Av2-series).
    If you’re using Standard storage, you’ll only be charged for the data you’re actually storing. Let’s say you created a 30 GB VHD but store only 1 GB of data in it; you will only be charged for 1 GB, as empty pages are not charged.
    HOWEVER, if you’re using SSD (Premium storage), you pay for the FULL SSD disk regardless of how much data you store, and empty pages are charged… so it becomes expensive pretty quickly. For more information, visit https://azure.microsoft.com/nl-nl/blog/azure-premium-storage-now-generally-available-2/ and https://docs.microsoft.com/en-us/azure/storage/storage-premium-storage
  • Azure Storage provides the capability to take snapshots of your VMs. In order to take snapshots of your VM, you need a Recovery Services vault. A Recovery Services vault is an entity that stores all the backups and recovery points you create over time. The Recovery Services vault also contains the backup policy applied to the protected files and folders.
    Previously, when you created a backup vault, you had the option to select locally redundant storage (LRS) or geo-redundant storage (GRS). This has changed, and you no longer have this option during vault creation.

    Now, by default, your vault will be created as GRS, which costs (way) more than LRS (almost double!). If you want to switch to LRS, you must do so after the vault has been created but PRIOR to registering any items to the vault. You cannot switch the storage type of your vault after you have registered items to it! To do this, simply access the Configure tab of your backup vault and change the replication from the default GRS to LRS. Don’t forget to click Save…

    For more information, visit: https://docs.microsoft.com/en-us/azure/backup/backup-configure-vault

  • When you create a VM in Windows Azure you are provided with a temporary storage automatically. This temporary storage is “D:” on a Windows VM and it is “/dev/sdb1” on a Linux VM and is used to save the system paging file. This temporary storage must not be used to store data that you are not willing to lose, as there is no way to recover any data from the temporary drive. The temporary storage is present on the physical machine that is hosting your VM. Your VM can move to a different host at any point in time due to various reasons (hardware failure etc.). When this happens your VM will be recreated on the new host using the OS disk from your storage account. Any data saved on the previous temporary drive will not be migrated and you will be assigned a temporary drive on the new host. Because this temporary storage drive is present on the physical machine which is hosting your VM, it can have higher IOPS and lower latency when compared to the persistent storage like data disk. The temporary storage provided with each VM has no extra cost associated with it for storage space as well as for transactions. For more information, visit: https://blogs.msdn.microsoft.com/mast/2013/12/06/understanding-the-temporary-drive-on-windows-azure-virtual-machines/
    If your application needs to use the D: drive to store data, you can use a different drive letter for the temporary disk. However, you cannot delete the temporary disk (it’ll always be there); you can only assign it a different drive letter. For instructions on how to do so, visit https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-classic-change-drive-letter
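The Standard-vs-Premium billing difference above is easy to illustrate with made-up per-GB prices (hypothetical numbers, not real Azure rates):

```python
# Standard storage bills the data you actually store; Premium bills the
# full provisioned disk size. Prices below are illustrative only.
standard_price_per_gb = 0.05   # per GB of data actually stored
premium_price_per_gb = 0.15    # per GB of provisioned disk

disk_size_gb = 128             # a provisioned 128 GB disk...
data_stored_gb = 10            # ...holding only 10 GB of data

standard_cost = data_stored_gb * standard_price_per_gb   # pay for 10 GB
premium_cost = disk_size_gb * premium_price_per_gb       # pay for all 128 GB

print(standard_cost, premium_cost)  # 0.5 19.2
```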

That’s it for now!