My main keyboard since 2012 has been the Ducky Shine 2 with Green LEDs and Cherry MX Blues. During the holidays I ordered some switch testers from Amazon to play around with and my favorite switch was the Kailh Box Navy. This is a very loud tactile switch with a very heavy spring; it took some getting used to compared to my old Cherry MX Blues. It’s definitely not a switch for everyone and some may find it too tiring due to the amount of force required to activate each switch.
Since I had recently completed my Black x Blue 3950X build, I wanted a new keyboard to match the same color scheme. I ended up buying the Glorious GMMK Full Size as it seemed to be the cheapest option with all of the features I wanted: RGB backlighting, hot-swappable switches, full size, and available without keycaps or switches. I ordered Kailh Box Navy switches separately from Novelkeys and a Tai-Hao black/blue PBT backlit keycap set from MechanicalKeyboards.
The Tai-Hao keycap set I bought has a rough matte texture that feels nice (to me) and is the only set I could find that was PBT, backlit and came in two colors (black/blue). Unfortunately the blue on this keycap set doesn’t match the blue on my Noctua Chromax fans but still looks pretty good. The only negative thing about this keycap set is the quality control; some of the keys are slightly off and some are very obviously crooked like the DEL and FN keycaps. I tried changing the switches under them but it appears to be a defect in the keycap alignment rather than the switch stem.
I’ve read mixed things about Glorious’s customer service but personally can only report a positive experience. I ordered my GMMK a few days before their end of year sale; I contacted support and they refunded me the difference (10%). The first GMMK I received was damaged in shipping (warped case); they sent me a replacement unit free of charge and didn’t even ask me to ship the old one back.
Overall I would say the GMMK is a good starter keyboard, though experienced users will see it as a glorified switch tester. It doesn’t have the same build quality or heft as my Ducky Shine 2 or Vortex Pok3r RGB, and bottoming out on some of the keys sounds a bit hollow. My next keyboard will probably be the Drop Shift, or I will just build a custom YMDK96 from scratch. If you are shopping for your first mechanical keyboard, I recommend buying a switch tester to get an idea of which switch you like, then getting the GMMK in the size you prefer.
About a week ago I finally noticed the Define Nano S actually has mounting holes for 2x 140mm fans on the top exhaust despite it not being mentioned on Fractal’s website. So naturally I just had to replace the 2x NF-S12A 120mm fans with 2x NF-A14 140mm fans. Thankfully Amazon has a generous return policy during the holiday shopping season. I’ve had this new system for barely a month and I thought I’d share all the changes I made since I originally built the system.
Photo 1: The build as originally completed. I had planned to only use 3 intake fans, 1 exhaust and keep the Moduvent cover on top; however I was surprised by how hot things were during gaming because of the heat from the 2080 Ti (initially on the quiet bios). During gaming my x570 was over 80C, GPU high 70s and my NVMEs got up to 70C. This is the photo I originally used when posting to PCPartPicker.
Photo 2: I switched my 2080 Ti to the stock bios which targets 65C for the GPU temp whereas the quiet bios has idle fan stop and targets 75C. This brought down the GPU temps which also brought down my x570/NVME temps during gaming. I found that adding fans on top further dropped the temps by another 5-8C. I also added the GPU brace after receiving some feedback on PCPartPicker as I didn’t really notice the GPU sag at first. This is the photo I used when I shared my build with Reddit.
Photo 3: I changed the thermal pads on the x570 and also returned the bottom NF-S12A, replacing it with the leftover NF-A15 from the NH-D15. I had to take everything apart to do the thermal pad swap and found that I had used way too much thermal paste the first time around. I used the spread method the first time but switched to just doing an X pattern. The NH-D15 comes with 2x NF-A15 fans, but I had to use an NF-F12 in the front due to clearance issues with the USB 3.0 header and the 24-pin ATX power connector. This is one of the photos from my last post.
Photo 4: The current configuration with 2x NF-A14s for top exhaust. I currently have all the chassis fans running at a 50% duty cycle, which is around 830rpm for the NF-A14s according to HWINFO. I also changed to a manual CPU fan curve that is more linear than stock: a minimum duty cycle of 40% at 30C, rising linearly to 100% at 70C. I’ll probably tweak this further to eliminate the subtle ramp-ups of the fan during regular use. With the fan changes, thermal pad swap, 2080 Ti on the stock bios, and CPU fan curve adjustment, the max temperatures I’m seeing in HWINFO are much lower than before: 75C for the x570, 60C for the NVMEs, and 65C for the 2080 Ti. CPU temps are slightly improved, which I suspect is just due to better application of thermal paste. On my early AIDA64 runs, the CPU temp according to Ryzen Master would bounce between 75 and 80C; now it hovers around 75C.
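For reference, a linear curve like the one above is just a straight-line interpolation between the two endpoints. A quick sketch of the math (the function name is mine, not from any fan-control tool, and real fan software does this internally):

```shell
# Hedged sketch: duty cycle (%) for a linear fan curve,
# 40% at 30C rising linearly to 100% at 70C, clamped outside that range.
fan_duty() {
  local t=$1
  if [ "$t" -le 30 ]; then
    echo 40
  elif [ "$t" -ge 70 ]; then
    echo 100
  else
    # slope = (100 - 40) / (70 - 30) = 1.5% per degree C
    echo $(( 40 + (t - 30) * 60 / 40 ))
  fi
}

fan_duty 50   # midpoint of the curve: 70
```

So at a typical gaming load around 50C, the fans sit at roughly 70% duty.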
More pictures and changes! I ended up returning the bottom intake NF-S12A fan and swapped it for the extra NF-A15 fan that came with the NH-D15. The NF-A15 is a 140mm fan with 120mm mounting holes and performs slightly better than the NF-S12A, which is a plain 120mm fan.
I also ended up trying a thermal pad swap on the Strix x570-I board. My first attempt was with the leftover 2.0mm Thermal Grizzly Minus Pad 8 I used for the rear NVME. This pad was thicker than the stock one and ended up being 1-2C hotter. I later tried a 1.0mm Fujipoly Ultra Extreme XR-m Thermal Pad that appears to be 3-5C cooler than the stock thermal pad.
I recently finished an all-new build featuring the AMD Ryzen 9 3950X; with the exception of the 2080 Ti, this is my first build in several years that doesn’t use components from previous builds.
My previous builds have used the Fractal Design Define R4; over the years that case has been home to an Intel i7-930 and i7-4790K. I went through many more GPU changes in the R4: first Radeon 6970 Crossfire, R9 290 Crossfire, Asus Strix 980 Ti, Asus Strix 1080 Ti and finally Asus Strix 2080 Ti.
I finally decided to retire that old R4 case and do an ITX build inside the Fractal Design Define Nano S – essentially a miniature version of the R4. I posted the complete build on pcpartpicker.com a few days ago (I even got featured on the front page!), but the primary specs are as follows:
- AMD Ryzen 9 3950X
- Noctua NH-D15 chromax.black
- 32GB G.Skill Trident Z Neo 16-16-16-36
- Asus ROG Strix x570-I Gaming
- Asus ROG Strix 2080 Ti
- Corsair SF750
- 512GB Samsung 970 Pro
- 1TB Western Digital SN750
I originally built this without the 2x NF-S12A fans on top for exhaust but found that GPU temps were 5-8C cooler with them in. CPU temps reached 75-80C under 30min of AIDA64 and GPU reached 67-68C after 35 minutes of OCCT. What I found most surprising was NVME temps reached as high as 60C when the 2080 Ti was under load and warming up the case.
The x570 chipset itself idles around 60C and can reach as high as 80C during gaming; fortunately I can’t hear the fan at idle but I can make out the whine of the fan when it gets closer to 80C. I might try replacing the thermal pad with paste or a better thermal pad as others have done on Reddit.
If you are still using a Logitech G900 or G903 mouse, I highly recommend replacing the switches with either the Omron D2F-01 or D2F-01-F. The D2F-01 has a higher actuation force, which provides a more tactile and satisfying click than the original switch; personally this is what I’m using. The D2F-01-F has the same actuation force as the original, if that’s what you prefer. The reason you’d want either of these switches is that both are Japanese-made Omron switches, which are considered higher quality and more reliable than the original Chinese Omron switch (D2FC-F-7N). The Logitech G903 and Logitech’s newer mice seem to be notorious for double-click issues; just Google it or take a look at the /r/LogitechG and /r/MouseReview subreddits. These YouTube videos explain in great detail why newer mice fail so often and cover the different switch types: https://www.youtube.com/watch?v=v5BhECVlKJA and https://www.youtube.com/watch?v=NhhRTUrz0R8.
The Logitech G900/G903 isn’t too difficult to take apart, but there are quite a few screws to keep track of and they are not all the same length. Be sure to have a spare set of replacement mouse feet, since the original feet must be removed in order to open the mouse. There are several YouTube videos that detail tearing down the Logitech G900/G903 step by step.
I also took some photos while taking apart my G900; hopefully all of this info will help you upgrade your mouse with new and better switches.
I’ve been running Pi-hole with DNS-over-HTTPS using Cloudflare’s DoH client (cloudflared) for some time now; I followed the guide posted here on the official Pi-hole documentation site. When updating cloudflared recently, I noticed it displayed some errors when the service tried to start up. After digging around, I found that cloudflared now has an option to install itself as a service, whereas the guide I used includes steps for creating the service manually. I believe this is a simpler way to set up cloudflared as your DNS-over-HTTPS client for Pi-hole.
Download the cloudflared daemon and install it:
sudo apt install ./cloudflared-stable-linux-amd64.deb
Create a folder and config file for the cloudflared daemon:
sudo mkdir /etc/cloudflared
sudo vi /etc/cloudflared/config.yml
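The post doesn’t reproduce the contents of config.yml. Based on the options shown in the service output further down (DoH proxy on localhost:5053 with two upstreams), a minimal config would look something like this — 1.1.1.1 and 1.0.0.1 are Cloudflare’s public resolvers, so adjust if you use a different DoH provider:

```yaml
# Minimal cloudflared config for Pi-hole DNS-over-HTTPS (sketch)
proxy-dns: true
proxy-dns-port: 5053
proxy-dns-upstream:
  - https://1.1.1.1/dns-query
  - https://1.0.0.1/dns-query
```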
Use the following command to instruct cloudflared to install itself as a service:
sudo cloudflared service install
Start the new cloudflared service and check the status:
sudo service cloudflared start
sudo service cloudflared status
You should get output similar to the following if successful:
● cloudflared.service - Argo Tunnel
   Loaded: loaded (/etc/systemd/system/cloudflared.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2019-09-02 03:50:20 UTC; 1s ago
 Main PID: 1479 (cloudflared)
    Tasks: 7 (limit: 4661)
           └─1479 /usr/local/bin/cloudflared --config /etc/cloudflared/config.yml --origincert /etc/cloudflared/cert.pem --no-autoupdate
Sep 02 03:50:20 sandbox systemd: Starting Argo Tunnel...
Sep 02 03:50:20 sandbox cloudflared: time="2019-09-02T03:50:20Z" level=info msg="Version 2019.8.4"
Sep 02 03:50:20 sandbox cloudflared: time="2019-09-02T03:50:20Z" level=info msg="GOOS: linux, GOVersion: go1.12.7, GoArch: amd64"
Sep 02 03:50:20 sandbox cloudflared: time="2019-09-02T03:50:20Z" level=info msg=Flags config=/etc/cloudflared/config.yml no-autoupdate=true origincert=/et
Sep 02 03:50:20 sandbox cloudflared: time="2019-09-02T03:50:20Z" level=info msg="Adding DNS upstream" url="https://220.127.116.11/dns-query"
Sep 02 03:50:20 sandbox cloudflared: time="2019-09-02T03:50:20Z" level=info msg="Adding DNS upstream" url="https://18.104.22.168/dns-query"
Sep 02 03:50:20 sandbox cloudflared: time="2019-09-02T03:50:20Z" level=info msg="Starting DNS over HTTPS proxy server" addr="dns://localhost:5053"
Sep 02 03:50:20 sandbox cloudflared: time="2019-09-02T03:50:20Z" level=info msg="Starting metrics server" addr="127.0.0.1:39507"
Sep 02 03:50:20 sandbox systemd: Started Argo Tunnel.
Now just configure Pi-hole to use cloudflared as its DNS resolver: in the Pi-hole admin interface, go to Settings → DNS, uncheck the preset upstream servers, and enter 127.0.0.1#5053 as a Custom upstream (matching the port cloudflared is listening on).
TLDR: Yes, you can put an ASUS Strix 2080 (and likely any other ASUS Strix card) in the Fractal Design Node 202. Check out the gallery below.
I had to RMA my ASUS Strix 1080 Ti last month; when packing it for shipment, I realized it would probably fit inside my Node 202 if I removed the fans and shroud. Thus I decided to upgrade my main PC to an ASUS Strix 2080 Ti and use the replacement card in the Node 202, which ended up being an ASUS Strix 2080. This is a 2.7-slot card and larger than the Strix 1080 Ti, but I was still able to make it fit inside the Node 202 after removing the fans and shroud. However, there are a few things you should know if you want to try this:
- The bracket and tabs on the heatsink for the fans and shroud protrude far enough to prevent the case fans from spinning if you set them up as exhaust for the 202. So you have to set the fans as intakes, unless you want to rip off the tabs/bracket and void your warranty.
- One of the 120mm fans must be offset from the other because of the tabs on the heatsink. See Pic #3 below.
- One of the screws that secures the GPU bracket to the Node 202 must be removed because it will hit the corner of the Strix PCB. See Pic #4 below.
- You’ll want the Node 202 in the vertical position and pressure-optimized fans to get as much air in as possible.
- By relying on 2x 120mm fans to cool the Strix 2080, you need a way to control them. The Strix 2080 has external fan headers, but the software to control them is buggy and requires a workaround.
I am using ASUS GPU TWEAK II to set a custom fan curve on the external fan headers. The very first time the program runs, an executable named ASUSGPUFanServiceEX.exe launches and then closes immediately; this exe is responsible for applying custom fan curves, so without it your custom fan curve is not applied. Launching the exe again fixes the problem but leaves you with a blank command prompt window. To fix this, I use Windows Task Scheduler to launch a Visual Basic script at every startup that runs the program hidden:
' Relaunch the ASUS fan service with its window hidden (the final 0 = hidden window style)
Set WShell = CreateObject("WScript.Shell")
WShell.Run """C:\Program Files (x86)\ASUS\GPU TweakII\ASUSGPUFanServiceEX.exe""", 0
Set WShell = Nothing
I haven’t done any rigorous testing but GPU temperatures seem to be decent. I’ve been playing Witcher 3 again at 1800p (custom resolution 3200×1800 with GPU scaling to 4K) on the Ultra Graphics and High Post Processing presets; temperatures generally range from 60C to 70C. My fan curve is set to run at 100% at 70C.
While it can be done, I would not recommend buying an ASUS Strix card if you already have a Node 202. It’s much easier and simpler to just use a 2 slot card. If you want to use the ASUS Strix for a new ITX build, buy a case that can actually hold it like the Cerberus or a Define Nano S with an SFX power supply.
I recently sent my ASUS ROG STRIX GeForce® GTX 1080 TI 11GB OC in for an RMA after my games started crashing within a few minutes; looking at the Event Viewer I found error messages indicating the Nvidia drivers were resetting (Display driver nvlddmkm stopped responding and has successfully recovered). To date I have only had one previous experience with ASUS RMA support; back in 2011 a friend of mine overseas gave me his RAMPAGE GENE III that was dead. I was able to RMA it with no issues and used it for an i7-950 build.
Here is the timeline of events:
- Monday July 22 – I contact ASUS Support through the website and state I have tried troubleshooting by reinstalling drivers, reinstalling Windows 10, trying different PCIE slots and a different system.
- Tuesday July 23 – I receive a reply from an ASUS Support Agent. Based on my initial message, they direct me to create an Online RMA by visiting https://www.asus.com/us/support/Article/818 (looks like I could have saved a step by going directly to this page). I fill out the Online RMA request that morning and let the Agent know I have done so. They reply back in the afternoon stating the Online RMA was not successfully pushed through and offer to process it manually.
- Wednesday July 24 – The Agent confirms my RMA has been processed; I receive an RMA Number and instructions on how to ship my 1080 Ti. I package the 1080 Ti and drop it off at USPS in the afternoon (I did not purchase the label from ASUS). It is expected to arrive at the RMA facility by Friday July 26.
- Friday July 26 – The USPS tracking number reports there is a delay.
- Monday July 29 – I check the tracking number again and the 1080 Ti has been delivered successfully.
- Tuesday July 30 – I visit the ASUS Check Repair Status page and it shows ASUS has received my 1080 Ti.
- Wednesday July 31 – I visit the Status page again and it is in the testing phase.
- Thursday August 1 – I receive a notification from the Fedex Delivery Manager that I have a package coming from ASUSTEK. I look at the Status page and it confirms my repair has been completed. There is no mention of what the replacement card is so I assume it is another 1080 Ti.
- Wednesday August 7 – I receive the package from ASUS. I open it to find a ROG Strix GeForce RTX™ 2080 OC.
It took 16 days from first contact to when I received my replacement graphics card. Potentially this could have been faster if USPS delivered my 1080 Ti on Friday instead of Monday and if I had created the Online RMA directly rather than contacting support first. The replacement took just under a week to be delivered; it was ground shipped from Jeffersonville, Indiana and I live on the opposite side of the US (West Coast).
Overall I would consider this experience a positive one; much better than the time I had to RMA Sapphire 290s repeatedly. The 2080 is a minor upgrade over the 1080 Ti according to TechPowerUp’s review. I have been using ASUS graphics cards since purchasing the ASUS STRIX GEFORCE® GTX 980 Ti back in 2016. After that came the 1080 Ti and now the 2080 replacement. I also have the ROG Strix GeForce RTX™ 2080 Ti OC, which I bought after sending in my 1080 Ti (I used this whole ordeal as an excuse to upgrade). If you are having trouble deciding which brand of graphics card to buy, consider differentiating them by their customer support and the location of their RMA center. For example, Gigabyte’s RMA facility is located in City of Industry, CA, which is a lot closer to me than Indiana.