
Building an Internal Website: Lessons Learned with Synology

Race Dorsey
A journey through building a secure internal family website on Synology NAS, complete with SSL certificates and a commenting system.

Intro

I recently wanted to set up an internal website for my home network–a digital family scrapbook where we could share moments and conversations. While the concept was straightforward, implementing it properly with SSL certificates and a comment system led me down an interesting path.

This isn’t intended to be a tutorial, but rather a tale of woes and eventual victory.

SSL without exposing IP

One of the first issues I ran into is that I wanted SSL, but since this is an internal website I couldn’t generate a Let’s Encrypt certificate without exposing my server IP.

After some research I learned about the DNS-01 challenge type, a method that lets you verify domain ownership through DNS records rather than by serving files from your web server. This meant I could get SSL without exposing my server's IP. I found a really good guide on using the DNS-01 challenge with Synology; thanks to its exact UI steps, this was one of the easiest parts of the project. I just needed to make sure I requested a certificate for the two subdomains I would be using (i.e. site.mydomain.tld and remark42.mydomain.tld).
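Under the hood, DNS-01 just asks you to prove control of the domain by publishing a TXT record, which is why no web server ever needs to be reachable from the internet. Roughly, it looks like this (names, TTLs, and tokens here are placeholders; ACME clients like certbot or acme.sh generate the values and, with a DNS API hook, can publish them automatically):

```text
; DNS-01 validation records the CA looks up, one per requested name
_acme-challenge.site.mydomain.tld.      300 IN TXT "<token-from-acme-client>"
_acme-challenge.remark42.mydomain.tld.  300 IN TXT "<token-from-acme-client>"
```

Once the CA sees the expected token in DNS, it issues the certificate, and the record can be deleted.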

For this to work I needed to own the domain, which I was fine with. There are other approaches, like managing certificates directly on each device, that would let you use a domain without owning it, but I didn't want to manage certificates on our internal devices, and I was happy to use an existing domain I owned.

Building the site

Next I needed a site to test my SSL certificate. I put together a basic site and pushed the /public folder to my NAS. Technically this data is already synced to my NAS, so what I really did was go to Control Panel -> Task Scheduler and set up an rsync script that monitors my /public folder and copies any changes over to a /web/projectname folder. I used Hugo to build my site, but any static site generator works here.

With my files syncing properly, the next step was to make them accessible via web server. I went to Web Station (download from Package Center if you don’t have it) and created a new Web Service. I specified static website and then pointed it to my /web/projectname folder. Note that my HTTP back-end server is Nginx.

After setting up the Web Service, I went to Web portal to configure how it would be accessed. For portal type I selected Name-based, and under hostname I listed my domain. The rest of the settings I left at their defaults.

I clicked apply, tried to navigate to my site, and received a 404. I quickly realized that I had told my NAS to publish the site, but my browser was trying to resolve the actual domain hostname. The final piece was making sure my internal DNS could resolve the domain. I went to my DNS server and added a local DNS entry pointing my domain to my NAS IP. Then I navigated to my domain and it worked!
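For anyone with a similar setup, the local override is one line per hostname in dnsmasq-style resolvers (the hostnames and IP below are placeholders; Pi-hole, AdGuard Home, and many routers accept an equivalent local A record):

```conf
# resolve the internal site and the comment server to the NAS's LAN IP
address=/site.mydomain.tld/192.168.1.50
address=/remark42.mydomain.tld/192.168.1.50
```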

Well, kind of. My browser displayed a warning due to the certificate. Looking at the certificate, I saw that Synology was serving the wrong one. I went back to Control Panel -> Security -> Certificate -> Settings and made sure that the entry for my site was configured to use the certificate I added earlier.

I saved and refreshed but continued to have issues for a little while: Nginx hadn't been reloaded, so it kept serving the wrong certificate until it eventually picked up the change. Finally, my site worked with the right certificate!

Comments with Remark42

For the next part of my project I wanted a comment system. Basically my concept for this internal site was like a family scrapbook where entries could be made, and comments could be added to posts. Of course we could just use other communication methods, but I figured it would be good practice in case I ever want to implement comments on this site you’re reading.

I had previously done research on comment systems, so I knew immediately I couldn't use any of the GitHub-based comment systems out there like giscus, because my site is internal and not on GitHub. Maybe eventually there will be a Forgejo equivalent and I'll have set up my own internal Forgejo instance instead of using raw git, but today is not that day.

I narrowed my selection down to isso and remark42 and settled on remark42. Both seemed to fit my needs pretty well, though isso seemingly only supports emailing me about new comments, whereas remark42 allows users themselves to be notified. Just note that remark42 is privacy focused (great), but this means users need to explicitly sign up for notifications on each post. More on that later.

I used Synology’s Container Manager to create a new Project. I created a docker-compose.yml based on the project’s documentation, and then inserted the front-end script into my static website templates. Specifically, within Hugo my site has a head.html partial which is loaded on every page, but I only want the .js file to be loaded on relevant pages. To do this:

  1. I created a remark42_embed.html partial with the front-end script (from the docs, with an updated host).

  2. I updated head.html to conditionally load remark42_embed.html if a page has the comments param set to true:

{{ if .Params.comments }} {{ partial "remark42_embed.html" . }} {{ end }}

  3. I also needed to update my single.html layout to initialize the script where I want it on the page, so I added <div id="remark42"></div> at the bottom of my page before the footer.

  4. Now on any page where I wanted comments rendered, I set comments: true in the frontmatter.
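The embed partial itself followed the remark42 docs; a simplified sketch looks like this (the host and site_id are examples, and the unrolled loader below is an unminified equivalent of the snippet the docs provide, which fetches each component as an ES module from the remark42 host):

```html
<!-- layouts/partials/remark42_embed.html (sketch) -->
<script>
  var remark_config = {
    host: "https://remark42.mydomain.tld", // where the remark42 container is served
    site_id: "projectname",
    components: ["embed"],
  };
  // load each requested component from the remark42 host
  (remark_config.components || ["embed"]).forEach(function (c) {
    var s = document.createElement("script");
    s.type = "module";
    s.src = remark_config.host + "/web/" + c + ".mjs";
    document.head.appendChild(s);
  });
</script>
```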

This worked, and comment sections started appearing on my site pages appropriately. I had disabled all OAuth options (not wanting to configure them yet) and enabled AUTH_ANON in my docker-compose .yaml. I was then able to sign in anonymously, but when I went to comment I was given a Not Authorized error, which I assumed was related to me running this on localhost rather than through my production site. I later learned I could update ALLOWED_HOSTS to cover my localhost, though I still would need to resolve the other errors I had not yet discovered.
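For context, the compose file looked roughly like this (a sketch based on the remark42 docs, not my exact file; the site id, secret, and port are placeholders, and the ALLOWED_HOSTS line reflects the localhost fix described above):

```yaml
services:
  remark42:
    image: umputun/remark42:latest
    restart: always
    environment:
      - REMARK_URL=https://remark42.mydomain.tld
      - SITE=projectname
      - SECRET=some-long-random-string   # used to sign auth tokens
      - AUTH_ANON=true                   # allow anonymous sign-in
      # include localhost here to test comments from `hugo server`
      - ALLOWED_HOSTS=https://site.mydomain.tld,http://localhost:1313
    ports:
      - "8080:8080"
    volumes:
      - ./var:/srv/var                   # comment data lives here
```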

This naturally led me to try to set up comments on the production site, and this is where I ran into a lot of headaches. Essentially I got a slew of 404 errors from embed.mjs failing to load. Inspecting the response, I saw that I was being served a text/html file instead of the .mjs file. I was also given many CORS-related errors due to missing information in headers. When I inspected the SSL cert coming from remark42.mydomain.tld, I found Synology was using the wrong certificate.

When I went to Synology’s Certificate UI, it claimed it was serving the correct certificate for remark42.mydomain.tld, but it was not. This led me to SSH into my machine and start searching for my nginx config files, which I eventually located; none of them handled remark42.mydomain.tld. I created a new one based on the remark42 docs. I needed to list my SSL certificates in this config, so I found the config file for my webservice_portal, looked up the SSL cert locations used for my static site, and used the same filepaths in the remark42.mydomain.tld config I was making.

I should note that originally I tried using remark42’s built-in backend configuration options for SSL certs: I mounted a storage volume for my container and linked the certs within the docker container, but I still received the incorrect SSL cert. I suspect I made a mistake there, but since my headers lacked the information required to resolve the CORS errors, I needed an nginx config file either way.
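The config I ended up with looked something like this (a sketch, not my exact file: the cert paths and container port are placeholders you would pull from your own webservice_portal config and compose file):

```nginx
server {
    listen 443 ssl;
    server_name remark42.mydomain.tld;

    # reuse the same cert files Web Station serves for the static site
    # (placeholder paths; copy them from the webservice_portal config)
    ssl_certificate     /usr/syno/etc/certificate/example/fullchain.pem;
    ssl_certificate_key /usr/syno/etc/certificate/example/privkey.pem;

    location / {
        # the remark42 container published on this port
        proxy_pass http://127.0.0.1:8080;
        # forward the original host/scheme so remark42 builds correct
        # URLs and the browser stops hitting CORS errors
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```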

Once I had my nginx config set up, I needed to test and reload my configuration:

sudo nginx -t
sudo nginx -s reload

After doing this, I made progress. My site was now using the correct SSL cert for remark42.mydomain.tld. I needed to make a few more minor adjustments to my nginx configuration to make sure it provided the correct header information to resolve the CORS issues, and finally after reloading the config, I had comment sections displaying on my site, and the anonymous logins were ‘authorized’ to comment.

Automating site build

One thing I noticed was that since I was syncing my /public folder to /web/projectname, the development server (hugo server) was also writing its output into /public, which then got synced to production. This meant test content was making it to production, which wasn’t ideal.

To resolve this, in my Hugo project’s /config/_default/hugo.toml I specified publishDir = "public". I then made a /config/development/hugo.toml and specified publishDir = "public-DEV". The /development config file is automatically picked up when running hugo server, and the default config file is used when building with hugo. The result is two separate build folders, with only the production build folder getting rsync’ed to the web server.
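Concretely, the two config files differ only in publishDir (the folder names are the ones described above):

```toml
# config/_default/hugo.toml — used by the production build
publishDir = "public"
```

```toml
# config/development/hugo.toml — picked up automatically by `hugo server`
publishDir = "public-DEV"
```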

I had also noticed earlier that some of my changes weren’t making it to the production site reliably. I think this had something to do with how rsync was evaluating the files. To resolve this, I initially considered rsync --checksum, but eventually landed on a script that checks a directory-level checksum rather than individual file checksums. The goal is to remove the /web/projectname folder contents if any file has changed, and then rsync the entire public folder.

#!/bin/sh
cd /volume1/web/scripts

SOURCE="/volume1/path/to/public/folder"
DEST="/volume1/web/projectname"
LAST_CHANGE="/volume1/web/scripts/projectname_last_change.txt"

# get current state of source directory (hash of all file hashes)
CURRENT_STATE=$(/usr/bin/find "$SOURCE" -type f -exec /usr/bin/md5sum {} \; | /usr/bin/sort | /usr/bin/md5sum)

# if the state file doesn't exist yet, create it and do an initial sync
if [ ! -f "$LAST_CHANGE" ]; then
    echo "$CURRENT_STATE" > "$LAST_CHANGE"
    # initial sync
    /usr/bin/rm -rf "${DEST:?}"/* && /usr/bin/rsync -av --chmod=D755,F755 "$SOURCE"/ "$DEST"/
    exit 0
fi

# check if the state has changed since the last run
LAST_STATE=$(/bin/cat "$LAST_CHANGE")
if [ "$CURRENT_STATE" != "$LAST_STATE" ]; then
    # state has changed, clear the destination and resync everything
    /usr/bin/rm -rf "${DEST:?}"/* && /usr/bin/rsync -av --chmod=D755,F755 "$SOURCE"/ "$DEST"/
    echo "$CURRENT_STATE" > "$LAST_CHANGE"
fi

This will keep track of the directory hash and then if it detects the hash has changed, it will remove the project folder and resync the entire contents. I placed this file in Synology and then had to chmod permissions to make it executable.

In Synology’s Task Scheduler I modified my task to point towards my new script and ran it. Below are some outputs from a version with logging to test the timing:

No change detected:

[1736353734018005579] 2025-01-08 11:28:54 - Calculating directory state...
[1736353734392143978] 2025-01-08 11:28:54 - Hash calculation took 369ms
[1736353734396052128] 2025-01-08 11:28:54 - No changes detected (369ms check)

Change detected:

[1736353734018005579] 2025-01-08 11:28:54 - Calculating directory state...
[1736353734392143978] 2025-01-08 11:28:54 - Hash calculation took 369ms
[1736353734396052128] 2025-01-08 11:28:54 - No changes detected (369ms check)
[1736353776282019039] 2025-01-08 11:29:36 - Calculating directory state...
[1736353776646378678] 2025-01-08 11:29:36 - Hash calculation took 360ms
[1736353776650390149] 2025-01-08 11:29:36 - Changes detected, starting sync
[1736353777049361096] 2025-01-08 11:29:37 - Sync completed in 395ms
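The bracketed prefix in those lines came from a small logging helper along these lines (a sketch; my actual script may differ slightly):

```shell
log() {
    # print "[<nanosecond epoch>] <date time> - <message>", matching the output above
    echo "[$(date +%s%N)] $(date '+%Y-%m-%d %H:%M:%S') - $1"
}

log "Calculating directory state..."
```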

Overall this was quite performant, so I switched it to run every minute, because why not. Once the site is being less actively developed, I might push it back to every 5-60 minutes.

Comment Subscription?

So far I have an internal website with SSL that is synced from my build folder and has comments enabled. One thing I’m not a huge fan of is that remark42 has no way to auto-enroll non-admins into receiving notifications about replies. While this is great from a privacy perspective, I’m running this on an internal network for family, so that isn’t a concern for this project.

Fortunately, remark42 offers a last-comments embed. Rather than reinvent the wheel with my own notification system, I decided to keep things simple and implemented last-comments on my homepage. This gave me a central place to show the latest comments rather than needing to opt into notifications on every post, and it meant I didn’t need to set up email notifications either.

I created a remark42_lastcomments.html partial in Hugo and made sure to include components: ["last-comments"], in the script. Then in my head.html partial I added:

{{ if .Params.lastcomments }} {{ partial "remark42_lastcomments.html" . }} {{ end }}

On my main site pages’ frontmatter I added lastcomments: true. The last thing I needed was to update my main page’s layout to initialize the script where I wanted it, so I added <div class="remark42__last-comments" data-max="5"></div> to my page template. Now I had a central place for the latest comments!

Victory?

This project achieved its main goals: I now have a secure internal website where my family can share content and discussions. The technical implementation works well, with automated builds, proper SSL, and a functional comment system. However, the journey revealed some interesting lessons about self-hosting on Synology.

When I first bought my Synology NAS years ago I was in a different place in my tech journey. Having the software/GUI was nice and made things easy for me. As I’ve gotten more into programming, though, I’m less thrilled about it. I didn’t cover all the issues I had with this project here, but I spent several hours trying to get Synology to behave the way I wanted. And even now, I’m not 100% positive that what I did with the nginx configuration will survive DSM updates. The NAS has been great, but trying to do more technical stuff has been enough of a challenge that I think I would’ve saved time overall had I just started this project on a more traditional server.

Who knows, maybe that will be my next purchase ;)