Host Unreal Engine 4 projects on Microsoft Azure DevOps with unlimited cost-free Git LFS quota

UPDATE 1 [2021/07/25]: It seems that Git LFS is able to resume your pushes after a network failure; at least, that is the case on Microsoft Azure DevOps. So, it should be totally redundant to divide huge commits into smaller ones. How did I notice this? Today, I pushed a huge single commit (around 53GB) and it failed at 39GB due to a connection error, without me noticing it for some time. A few hours later, when I made another attempt by issuing the push command again, it picked up and resumed the push at 39GB, which was really exciting.

UPDATE 2 [2021/07/25]: After pushing the repository to Azure DevOps, if you find yourself stuck on a git pull that does nothing, the following command will fix the consecutive pulls:

$ git pull origin master

Or, alternatively:

$ git fetch origin master
$ git reset --hard FETCH_HEAD

UPDATE 3 [2021/07/28]: I’ve noticed that, because file modification times affect how Rsync and Git work by default, my approach in writing the original script was totally wrong. This caused a bug where, on each update, all tracked files were committed over again, hugely bloating the repository even though the content of the files was unchanged. Thus, I completely rewrote the script. The new script has been extensively tested with two repositories/projects and works as expected. In addition to that, the script now shows progress for every step, which is a nice addition that keeps you informed and gives an estimate of how long the job is going to take. And, last but not least, I have edited and improved the blog post a bit.

UPDATE 4 [2021/08/04]: Due to nested .gitignore files inside the Unreal Engine dependencies, I noticed that small parts of the dependencies required for building UE4/UE5 on Microsoft Windows were not getting copied over to the repository. As a result, I fixed the script to take care of that as well.

UPDATE 5 [2021/11/30]: Sometimes the number of renamed Unreal Engine files surpasses Git’s rename detection limit inside the Sync repository (the intermediary local Git repository that we are going to use for syncing the engine source code with upstream):

warning: exhaustive rename detection was skipped due to too many files.
warning: you may want to set your diff.renameLimit variable to at least 13453 and retry the command.

So, you could set that to a really large number in order to keep track of file renames:

$ cd ~/dev/MamadouArchives-Sync
$ git config diff.renameLimit 999999
$ git config merge.renameLimit 999999

Note: You will get this warning only when the Git option diff.renames is set to true (the default behavior). Likewise, the above settings do not have any effect when copy/rename detection is turned off. Nonetheless, you can always check your settings with:

$ git config -l

UPDATE 6 [2021/12/18]: I’ve added a step regarding EngineAssociation in the project’s .uproject file, which I forgot to mention in the original post.
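
The update above doesn’t spell the step out here, but for context, EngineAssociation lives in the project’s .uproject file, which is a plain JSON file; when the project is tied to a source-built engine, the field is typically left empty or set to a registered engine identifier. A minimal illustration of the field (my sketch, not necessarily the exact change from the post):

{
    "FileVersion": 3,
    "EngineAssociation": "",
    "Category": "",
    "Description": ""
}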

UPDATE 7 [2023/03/03]: In UE5, the UE4Games.uprojectdirs file has been renamed to Default.uprojectdirs, though the syntax and the contents of the file have remained the same.

UPDATE 8 [2023/03/04]: After upgrading my project to Unreal Engine 5.1, I was getting an HTTP 413 Request Entity Too Large error, despite the fact that I had already set the Git configuration http.version to HTTP/1.1 as instructed in this article, that the commit was no bigger than 166.30 MB, and that I had acceptable upload bandwidth:

Enumerating objects: 190058, done.
Counting objects: 100% (164439/164439), done.
Delta compression using up to 16 threads
Compressing objects: 100% (113439/113439), done.
Writing objects: 100% (138834/138834), 166.30 MiB | 47.32 MiB/s, done.
Total 138834 (delta 35613), reused 121343 (delta 22206), pack-reused 0
error: RPC failed; HTTP 413 curl 22 The requested URL returned error: 413
send-pack: unexpected disconnect while reading sideband packet
fatal: the remote end hung up unexpectedly
Everything up-to-date

I tried every suggestion that I came across in order to debug and resolve the issue, to no avail, including enabling Git verbose logging:

$ export GIT_TRACE_PACKET=1                      
$ export GIT_TRACE=1
$ export GIT_CURL_VERBOSE=1

And then maxing out all the size limits, buffers, packet sizes, and other hinted settings:

[core]
    compression = 0
    packedGitLimit = 512m
    packedGitWindowSize = 512m
[http]
    postBuffer = 2147483648
[https]
    postBuffer = 2147483648
[init]
    defaultBranch = master
[pack]
    deltaCacheSize = 2047m
    packSizeLimit = 2047m
    window = 1
    windowMemory = 2047m

Then I tried to change the origin URL to SSH:

[remote "origin"]
	url = https://SOME-ORGANIZATION@dev.azure.com/SOME-ORGANIZATION/MamadouArchives/_git/MamadouArchives
	#url = git@ssh.dev.azure.com:v3/SOME-ORGANIZATION/MamadouArchives/MamadouArchives

And then pushing without LFS:

$ git push --set-upstream origin 5.1 --no-verify

This attempt was futile as well, which made me revert back to HTTPS. Then, since I had made a few commits, I tried to push commit by commit (5.1, which is repeated twice in the following command, is the name of the new local branch intended to be pushed):

$ git rev-list --reverse 5.1 | ruby -ne 'i ||= 0; i += 1; puts $_ if i % 1 == 0' | xargs -I{} git push origin +{}:refs/heads/5.1 --no-verify

And sadly, the approach of pushing one commit at a time was fruitless as well :/

Thus, for the time being I’m stuck pushing the updated project from my Linux machine and pulling it on my Windows machine. I’ll post another update once I’ve figured out what’s going wrong.

UPDATE 9 [2023/03/04]: As an experiment, I created a new organization and a new repository inside it. Then, prior to changing the origin URL, I fetched all LFS objects by issuing:

$ git lfs fetch --all
fetch: 78618 objects found, done.                                                                      
fetch: Fetching all references...

Then, I decided to first push the Git commits without the LFS objects. So, after updating the origin URL inside .git/config:

$ git push --set-upstream origin 5.1 --no-verify
Enumerating objects: 296578, done.
Counting objects: 100% (296578/296578), done.
Delta compression using up to 16 threads
Compressing objects: 100% (210384/210384), done.
Writing objects: 100% (296578/296578), 360.11 MiB | 5.78 MiB/s, done.
Total 296578 (delta 78143), reused 296578 (delta 78143), pack-reused 0
remote: Analyzing objects... (296578/296578) (77738 ms)
remote: Storing packfile... done (10303 ms)
remote: Storing index... done (3740 ms)
To https://dev.azure.com/SOME-ORGANIZATION/MamadouArchives/_git/MamadouArchives
 * [new branch]            5.1 -> 5.1
branch '5.1' set up to track 'origin/5.1'

¯\_(ツ)_/¯ As unexpected as it seems, it worked! As you can see, the actual Git objects (without the LFS objects) in this repository are nowhere near the 10 GB size limit:

$ git count-objects -vH                         
count: 0               
size: 0 bytes
in-pack: 296583
packs: 1
size-pack: 366.76 MiB
prune-packable: 0
garbage: 0
size-garbage: 0 bytes

The issue might be that I’ve reached some kind of limit on the main organization that I’m not aware of.

Anyway, I then pushed master and checked out the new branch again to continue the upgrade:

$ git checkout master
$ git push origin master --no-verify
$ git checkout 5.1

And then proceeded to push all the LFS objects:

$ git lfs push origin --all

UPDATE 10 [2023/03/05]: Yesterday, I removed a large redundant repository from the previous organization, in order to see whether I could then push my updates and whether the error I was getting was due to hitting some kind of ceiling limit. It didn’t work. I also cleaned up the limit hacks I had added to my ~/.gitconfig in UPDATE 8. Then, after successfully pushing to the new organization/repository, I decided to revert the URL for the origin remote inside the local repository’s .git/config and try to push once more to the old repository, and guess what? It worked! Weird Microsoft/Azure! I’m not sure what fixed the issue; it could even be that, if the organization size limit was the problem, I had to wait for Microsoft to actually free up the space of the repository I had deleted. I don’t really know.


In the gamedev industry, it’s a well-known fact that Unreal Engine project sizes have always been huge and a pain to manage properly. And it becomes more painful by the day as your project moves forward and grows in size. Some even keep the Engine source and its monstrous binary dependencies inside their source control management software. If you are a AAA game development company, or you work for one, there’s probably some system in place with an unlimited quota to take care of that. But for most of us indie devs or individual hobbyists, there do not seem to be many affordable options, especially if your team is scattered across the globe.

There are plenty of costly solutions for keeping UE4 projects under source control, ranging from maintaining a local physical server, or renting a VPS with plenty of space in the cloud equipped with self-hosted Git, SVN, or Perforce, to using cloud SCM providers such as GitHub, GitLab, BitBucket, or Perforce. Since I prefer cloud SCM providers and Git + Git LFS (which also supports file locking), let’s take a look at some popular ones such as GitHub and GitLab.

GitHub, for one, provides data packs, but the free offering is far from enough for collaborative UE4 projects:

Every account using Git Large File Storage receives 1 GB of free storage and 1 GB a month of free bandwidth. If the bandwidth and storage quotas are not enough, you can choose to purchase an additional quota for Git LFS. Unused bandwidth doesn’t roll over month-to-month.

Additional storage and bandwidth is offered in a single data pack. One data pack costs $5 per month, and provides a monthly quota of 50 GB for bandwidth and 50 GB for storage. You can purchase as many data packs as you need. For example, if you need 150 GB of storage, you’d buy three data packs.

As for GitLab, although its generous initial 10GB repository size is way beyond the 1GB repository size offered by GitHub, the LFS pricing is insanely high:

  1. Additional repository storage for a namespace (group or personal) is sold in annual subscriptions of $60 USD/year in increments of 10GB. This storage accounts for the size calculated from Repositories, which includes the git repository itself and any LFS objects.

  2. When adding storage to an existing subscription, you will be charged the prorated amount for the remaining term of your subscription. (ex. If your subscription ends in 6 months and you buy storage, you will be charged for 6 months of the storage subscription, i.e. $30 USD)

Well, before all this gets you disappointed, let’s hear the good news from the Microsoft Azure DevOps team:

In uncommon circumstances, repositories may be larger than 10GB. For instance, the Windows repository is at least 300GB. For that reason, we do not have a hard block in place. If your repository grows beyond 10GB, consider using Git-LFS, Scalar, or Azure Artifacts to refactor your development artifacts.

Before we proceed any further, there are some catches to consider about Microsoft Azure DevOps:

1. The maximum Git repository size is 10GB, which, considering that we keep binary assets and huge files in LFS, is way beyond any project’s actual needs. As for Git LFS, it seems that Microsoft has been providing unlimited free storage since at least 2015. For comparison, the engine source code for 4.27 is 1.4GB, which becomes less than 230MB once it is committed to the Git repository:

$ cd /path/to/ue4.27/source

$ du -h
1.4G	.

$ git init
$ git add .
$ git commit -m "add unreal engine 4.27 source code"

$ git count-objects -vH

count: 97545
size: 900.25 MiB
in-pack: 110815
packs: 1
size-pack: 227.80 MiB
prune-packable: 97545
garbage: 0
size-garbage: 0 bytes

2. The maximum push size is limited to 5GB at a time. The 5GB limit applies only to files in the actual repository and does not affect LFS objects; thus, there is no limit on LFS pushes. Even so, if your internet connection is not stable, you could divide your files into multiple commits and push them separately. For example, the initial Git dependencies for UE 4.27 are around 40GB spread across ~70,000 files. Instead of committing and pushing a 40GB chunk all at once, one could divide that into multiple commits and push those commits one by one using the following command:

$ git rev-list --reverse master \
    | ruby -ne 'i ||= 0; i += 1; puts $_ if i % 1 == 0' \
    | xargs -I{} git push origin +{}:refs/heads/master
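
As a side note of mine (not part of the original workflow): the modulus in the Ruby filter decides how many commits each push covers. As written, i % 1 == 0 matches every commit, so every single commit gets its own push; bumping the modulus pushes in batches instead, for example every fifth commit, followed by a final plain push to make sure the commits at the tip are uploaded when the total count is not a multiple of five:

$ git rev-list --reverse master \
    | ruby -ne 'i ||= 0; i += 1; puts $_ if i % 5 == 0' \
    | xargs -I{} git push origin +{}:refs/heads/master
$ git push origin master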

3. Sadly, at the moment Azure DevOps does not support LFS over SSH. So, you are bound to git push/pull over HTTPS, which for some might be annoying, especially since it keeps asking for the HTTPS token 3 consecutive times on any push or pull!

Q: I’m using Git LFS with Azure DevOps Services and I get errors when pulling files tracked by Git LFS.

A: Azure DevOps Services currently doesn’t support LFS over SSH. Use HTTPS to connect to repos with Git LFS tracked files.
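
If the repeated prompts drive you crazy, one common mitigation (my suggestion, not something from the quoted FAQ) is to let Git cache the HTTPS credentials for a while through a credential helper:

$ # Cache the HTTPS token in memory for one hour
$ git config --global credential.helper 'cache --timeout=3600'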

4. Last but not least, there is an issue with the Microsoft implementation of LFS, which rejects large LFS objects and spits out a bunch of HTTP 413 and 503 errors at the end of your git push. It happened to me when I was pushing 40GB of UE4 binary dependencies. The weird thing was that I tried twice, both times it took a good few hours until the end of the push operation, and, based on measuring the bandwidth usage, the LFS upload size appeared to be larger than the actual upload size. According to some answers on this GitHub issue and this Microsoft developer community question, it seems the solution is running the following command inside the root of your local repository, before any git pull/push operations:

$ git config http.version HTTP/1.1

Well, not only did it do the trick and work like a charm, but the push time on the following git push also dropped dramatically, to 30 minutes for those hefty 40GB of UE4 binary dependencies.
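
In case you would rather not repeat that in every repository, the same setting can also be applied globally (my note, not from the linked answers):

$ git config --global http.version HTTP/1.1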

OK, now that we have familiarized ourselves with all the limits, if you deem this a worthy solution for managing UE4 projects along with the engine source in the same repository, in the rest of this blog post I’m going to share my experiences and a script to keep the engine updated with ease using a Git + LFS setup.

[Read More...]

Keep Crashing Daemons Running on FreeBSD


UPDATE 1 [2019/05/11]: Thanks to @mirrorbox’s suggestion, I refactored the script to use service status instead of ps aux | grep, which makes the script even simpler. As a result, the syntax has changed. Since I am keeping the article untouched, visit either the GitHub or GitLab repositories for the updated code. The new syntax is as follows:

# Syntax
$ /path/to/daemon-keeper.sh

Correct usage:

    daemon-keeper.sh -d {daemon} -e {extra daemon to (re)start} [-e {another extra daemon to (re)start}] [... even more -e and extra daemons to (re)start]

# Example
$ /path/to/daemon-keeper.sh -d "clamav-clamd" -e "dovecot"

# Crontab
$ sudo -u root -g wheel crontab -l

# At every minute
*   *   *   *   *   /usr/local/cron-scripts/daemon-keeper.sh -d "clamav-clamd" -e "dovecot"

UPDATE 2 [2019/05/11]: Another thanks to @mirrorbox for mentioning sysutils/daemontools, which seems to be a proven solution for restarting a crashing daemon. It makes this hack redundant.

Daemontools is a small set of /very/ useful utilities, from Dan
Bernstein.  They are mainly used for controlling processes, and
maintaining logfiles.

WWW: http://cr.yp.to/daemontools.html

UPDATE 3 [2019/05/11]: Thanks to @dlangille for mentioning sysutils/py-supervisor, which seems to be a viable alternative to sysutils/daemontools.

Supervisor is a client/server system that allows its users
to monitor and control a number of processes on UNIX-like
operating systems.

It shares some of the same goals of programs like launchd,
daemontools, and runit. Unlike some of these programs, it is
not meant to be run as a substitute for init as "process id 1".
Instead it is meant to be used to control processes related to
a project or a customer, and is meant to start like any
other program at boot time.

WWW: http://supervisord.org/

UPDATE 4 [2019/05/13]: Thanks to @olevole for mentioning sysutils/fsc. It is minimalistic, dependency-free, and designed for FreeBSD:

The FreeBSD Services Control software provides service
monitoring, restarting, and event logging for FreeBSD
servers.  The core functionality is a daemon (fscd)
which is interfaced with using fscadm.  See manual pages
for more information.

UPDATE 5 [2019/05/13]: Thanks to @jcigar for bringing daemon(8) to my attention, which is available in the base system and seems perfectly capable of doing what I was trying to achieve with my script, and more.
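
For illustration only (my sketch, not something from the article), supervising a crash-prone process with daemon(8) could look like this, where /usr/local/bin/mydaemon is a hypothetical program:

# Restart the program after a one-second delay whenever it terminates;
# -P keeps a pidfile for the supervisor itself, -p for the child process
$ daemon -r -P /var/run/mydaemon_supervisor.pid -p /var/run/mydaemon.pid /usr/local/bin/mydaemon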


Amidst all the chaos at this stage of my life, I don’t know exactly what got into me that made me think it was a good idea to perform a major upgrade on a production FreeBSD server from 11.2-RELENG to 12.0-RELENG when I did not even have enough time to go through /usr/src/UPDATING thoroughly or consult the Release Notes or the Errata properly; let alone to deal with the esoteric changes that effectively crippled my mail server, which I only realized when I noticed it had been over a week since I had received any new emails.

At first, I did not take it seriously. I just rebooted the server and prayed to the gods that it would not happen again. It was a quick fix and it seemed to work, until, a few days later, I noticed it happening again. This time I prayed to the gods even harder - both the old ones and the new ones ¯\_(ツ)_/¯ - and rebuilt every installed port all over again to make sure I had not missed anything. I went for another reboot and, oops! There it was again, laughing at me. Having lost all faith in the gods, I took up the responsibility to investigate the issue further myself and ask the experts on the FreeBSD forums.

After messing around with it, it turned out that the culprit was the clamav-clamd service crashing without any apparent reason at first. I fired up htop after restarting clamav-clamd and figured out that even at idle times it devours around ~30% of the available memory. According to this Stack Exchange answer:

ClamAV holds the search strings using the classic string (Boyer Moore) and regular expression (Aho Corasick) algorithms. Being algorithms from the 1970s they are extremely memory efficient.

The problem is the huge number of virus signatures. This leads to the algorithms’ datastructures growing quite large.

You can’t send those datastructures to swap, as there are no parts of the algorithms’ datastructures accessed less often than other parts. If you do force pages of them to swap disk, then they’ll be referenced moments later and just swap straight back in. (Technically we say “the random access of the datastructure forces the entire datastructure to be in the process’s working set of memory”.)

The datastructures are needed if you are scanning from the command line or scanning from a daemon.

You can’t use just a portion of the virus signatures, as you don’t get to choose which viruses you will be sent, and thus can’t tell which signatures you will need.

I guess that due to some arcane changes in 12.0-RELEASE, FreeBSD kills memory hogs such as the clamav-clamd daemon (don’t take my word for it; it is just a poor man’s guess). I even tried to lower the memory usage, without much success. In the end, there were not too many choices or workarounds around the corner:

A. Pray to the gods that it go away by itself, which I deemed impractical

B. Put aside laziness and replace security/clamsmtp with security/amavisd-new in order to be able to run ClamAV on demand, which has its own pros and cons

C. Write a quick POSIX-shell script to scan for a running clamav-clamd process using ps aux | grep clamd, set it up as a cron job running at an X-minute interval, start the service if it cannot be found running, and be done with it for the time being.

For the sake of slothfulness, I opted to go with option C. As a consequence, I came up with a simple, generic script that is able to not only monitor and restart the clamav-clamd service, but also keep any other crashing service running on FreeBSD.
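
The core idea behind option C boils down to something like the following minimal sketch, using the service status approach from UPDATE 1 (the actual script linked above does more, such as (re)starting extra dependent daemons):

#!/bin/sh
# Minimal illustration: (re)start a daemon if it is not reported as running
daemon="clamav-clamd"

if ! service "${daemon}" status > /dev/null 2>&1; then
    service "${daemon}" start
fi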

[Read More...]

My Reddit Wallpaper Downloader Script

My i3wm setup with amazing gruvbox color scheme and a wallpaper from Reddit

Update [2019/05/08]: Many people have been asking for the wallpaper in the above screenshot. It is from System Failure II, oil on canvas, 31x43” on r/Art.

Well, I am really fascinated by Reddit art and genuine creative ideas such as Scrolller, which was made possible thanks to the gazillions of art pieces scattered throughout various art subreddits. I am also fascinated by the Unix philosophy and have been a *nix enthusiast for as long as I can remember. On top of all this, the discovery of r/unixporn - realizing I am not the only one who cares about the aesthetics of their Unix box - was a revelation for me; to the point that studying the GitHub dotfiles posted along with the screenshots on r/unixporn by fellow *nix-enthusiast redditors felt like a day-to-day hobby for me.

All the while, I had a successful experiment writing a complex piece of real-world software in pure Bash, with an amazingly wide range of features for around 3.5K lines of code. The real excitement came when it made it to the official FreeBSD Ports Tree. In spite of the fact that many people find Bash syntax annoyingly ugly and unmaintainable, and often wonder on Quora why people still write shell scripts, since the MS-DOS 6.22 era I have developed a certain love-hate relationship with shell scripting languages such as Batch files, Bash, etc. Thus, I still automate almost everything with these ancient technologies.

So, here is my fully-configurable wallpaper changer, written in Bash, which automagically fetches and displays wallpapers from your favorite subs. It has been powering and brightening up my i3wm setup for the past eight months, which led me to the conclusion that it deserves a proper introduction.
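
To give an idea of the core mechanism, here is a stripped-down sketch of the concept (my illustration; the actual script is far more configurable and robust), assuming curl, jq, and feh are installed and that the top post links directly to an image:

#!/usr/bin/env bash
# Fetch today's top post from a subreddit and set it as the wallpaper
sub="wallpapers"
url=$(curl -s -A "wallpaper-fetcher" \
    "https://www.reddit.com/r/${sub}/top.json?limit=1&t=day" \
    | jq -r '.data.children[0].data.url')
curl -s -A "wallpaper-fetcher" -o /tmp/reddit-wallpaper "${url}"
feh --bg-fill /tmp/reddit-wallpaper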

[Read More...]

How to Run Multiplayer Call of Duty 4: Modern Warfare (Promod LIVE) using Wine on GNU/Linux

Call of Duty 4: Modern Warfare

Well, I hadn’t played a multiplayer game in ages until recently, when my cool boss announced regular playtimes for all the employees in our company as a group activity, in order to put the fun back into work. Since I’m a die-hard COD4 fan and I used to play Promod LIVE heavily with colleagues and friends, I proposed Call of Duty 4: Promod LIVE 2.20, which happened to be favored by everybody; except there was one issue: everyone uses Microsoft Windows while I’m developing UE4 games on a single-boot Funtoo Linux system.

Naturally, my first attempt was running it inside a Windows 7 virtual machine under VMware Workstation for Linux, which supports up to Direct3D 10 (the exact API used by COD4). Sadly, the experience was very poor and painful, with lots of unbearable stuttering on my decent hardware. Thus, the last resort was running it under Wine, under which I used to happily run many Windows applications and games for many years. Throughout those years, though, I replaced almost every Windows application with an equivalent or alternative Linux application, until I gradually stopped using Wine. In the meantime, I also distanced myself from traditional desktop environments such as GNOME, KDE, Xfce, and LXDE, while experimenting with various window managers, especially i3wm, which caught my attention for many good reasons. So, in the end, I made up my mind and alienated myself from desktop environments once and for all.

Since I was already running a fully-fledged game engine such as Unreal Engine 4, I expected the COD4, Wine, and i3 combination to work fine out of the box, as it would under any other DE. Well, it turned out that I was too simple-minded about running a fullscreen game such as COD4 under Wine/i3wm. Fortunately, as the Wine FAQ states, the workaround is super easy.

Here is the full guide on running COD4 v1.7 with Promod LIVE 2.20 on GNU/Linux.
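
In case you are wondering what the workaround is before clicking through: the Wine FAQ suggests running fullscreen games inside a Wine virtual desktop so they don’t fight the window manager for the whole screen, along these lines (paths and resolution are placeholders):

$ cd "/path/to/Call of Duty 4 - Modern Warfare"
$ wine explorer /desktop=COD4,1920x1080 iw3mp.exe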

[Read More...]

Delete a File With Invalid or Bad Characters in File Name on FreeBSD

There once was a time when I did the following inside my home directory:

$ wget "some-url" -O "output-file.mp4"

I clearly remember copying the output file name from a web page. Unfortunately, the copied text had a newline at the beginning of it and I didn’t notice that, because newline and carriage return characters are control characters and have no visual representation. Anyway, when I listed the files inside my home directory, I noticed a strange file name on the list:

$ ls
?output-file.mp4
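
Before reading on, if you just want the file gone, two common approaches (a general sketch, not necessarily the exact method from the full post) are letting the shell glob match the invisible character, or deleting by inode number:

$ # Let the ? glob match the invisible leading newline
$ rm -i ./?output-file.mp4

$ # Alternatively, look up the inode number with `ls -i` and delete by inode
$ # (replace 102345 with the actual inode number reported for the file)
$ find . -maxdepth 1 -inum 102345 -delete
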
[Read More...]

OmniBackup: One Script to back them all up

Update 1 [2016/09/23]: OmniBackup now officially supports GNU/Linux. More info

Update 2 [2016/09/23]: The official documentation has moved to GitHub, which means this guide won’t be maintained anymore and may be out of date or inaccurate.

A week ago was System Administrator Appreciation Day. It is celebrated on the last Friday of July, and has been celebrated since July 28, 2000. But system administrators know that not all days are like that day. They face many hard times and struggles during their careers, and the worst of them all is either a security breach or data loss.

For so many years, I kept writing and maintaining backup scripts, one after another: for every single database I added, for every single directory with critical data, and for any other service I was running on every new server I got my hands on. In the end, I found myself with a pile of backup scripts and multitudinous cron entries that were a nightmare to keep track of. I even had to manage the schedule so that no two backup scripts ran at the same time. On top of that, I had to manually track the backups to see whether they were successful or not, and someone had to manually delete the old ones to make room for the new ones.

Therefore, I came up with an elegant solution to replace the old process, which I had found exceptionally error-prone - an end to all my hardships, which I call OmniBackup. At last, I’m able to confidently stay on top of all the ever-growing data that I have to keep safe.

“So, what exactly is OmniBackup?” you may ask. “A fair question,” I would say. OmniBackup is an MIT-licensed Bash script which delivers the following set of features:

  • Configuration and customization of backup mechanism through JSON
  • Support for OpenLDAP backups
  • Support for PostgreSQL backups as a whole or per database
  • Support for MariaDB and MySQL backups as a whole or per database
  • Support for filesystem backups with optional ability to follow symbolic links
  • Support for pluggable customized scripts to extend OmniBackup functionality beyond its original design, which allows support for many different backup scenarios that have not been built into OmniBackup, yet
  • Backup file name tagging which allows including date or host name in the archive name
  • Online backup without a prerequisite to suspend any service
  • Support for customized backup tasks priority order
  • Support for multiple backup servers
  • Ability to always keep a copy of backups offline
  • Ability to keep a copy of backups offline if no backup server is available, or in case of an error such as a file transfer error
  • Secure file transfer through SSH / SCP protocol
  • LZMA2, gzip and bzip2 compression algorithms along with different compression levels to maintain a balance between speed and file size
  • Ability to preserve permissions inside backup files
  • Support for symmetric cryptography algorithms AES-128, AES-192 and AES-256 (a.k.a Rijndael or Advanced Encryption Standard)
  • Random passphrase generation for encrypted backups with variable length and patterns or a unique passphrase for all backups
  • Support for RSA signatures to verify the backup origin and integrity
  • Passphrase encryption using RSA public keys for individual backup servers
  • Backup integrity verification by offering hash algorithms such as MD4, MD5, MDC-2, RIPEMD160, SHA, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512 and WHIRLPOOL
  • Optional Base64 encoding
  • System logs and a standalone log file including all details
  • Reporting through email to a list of recipients with ability to include passphrases
  • Customized mail subject for successful and failed backup operations
  • Customized support message for reports
  • Crontab integration
  • Custom temporary / working directory
  • Automatic and smart clean-up ability
  • One-instance-only policy, which avoids accidentally running multiple instances at the same time and therefore avoids system slow-down
  • An example configuration file in JSON format to get you up and running

There is also a list of planned features and TODOs which did not make it into 0.1.0 release:

  • Restore script
  • GnuPG integration
  • SFTP and FTP support
  • Refactoring and code clean-up
  • Any potential bug fixes

Disclaimer: Please be wary of the fact that this script has approximately 3.5K lines of Bash code and devoured a hell of a lot of my time to write and debug. You should also consider that this is my first heavy Bash experiment, and as a newcomer to the language I may not write quality Bash code. I do not claim that OmniBackup is production-ready; that’s why I versioned the first release at 0.1.0. Also keep in mind that OmniBackup heavily relies on 3rd-party software, which increases the chance of fatal bugs and therefore of losing data. So, I provide OmniBackup without any warranties, guarantees, or conditions of any kind, and I accept no liability or responsibility for any misuse or damage. Please use it at your own risk and remember that you are solely responsible for any resulting damage or data loss.

Credits: Many thanks go to my fellow and long-time friend, Morteza Sabetraftar, for his help and ideas, without whom OmniBackup would lack features or quality. Another kudos goes to my brother Amir for releasing me from shopping, cooking, and house-cleaning without even mentioning it - an invaluable and priceless assistance that encouraged me even more to use my best endeavours to get this task done.

Please, feel free to clone and modify it as you wish. Pull requests for new features, improvements or bug fixes are also very welcome.

The rest of this post serves as a comprehensive guide on how to set up OmniBackup in order to back up and restore all your critical data in a production environment.

[Read More...]

FreeBSD: Block Brute-force Attacks Using Sshguard and IPFW Firewall

There is an old saying that the only safe computer is one that’s disconnected from the network, turned off, and locked in an underground bunker—and even then you can’t be sure!

Since most of us can’t afford to keep our servers in an underground bunker, the least that can be done to keep their threat exposure at rock bottom is to protect them with a combination of a firewall and an intrusion prevention system or IPS (a.k.a. intrusion detection and prevention system or IDPS). Of course, that alone is not sufficient, and other security measures and best practices should be considered as well.

This blog post covers setting up a basic, secure, and stateful IPFW firewall on FreeBSD, along with Sshguard by iXsystems Inc as an intrusion prevention system.
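
To give a rough idea of the moving parts before diving into the full guide, on the FreeBSD side it mostly comes down to a handful of rc.conf knobs (a minimal sketch with assumed values; the post itself builds a custom stateful ruleset rather than relying on a canned firewall type):

# /etc/rc.conf
firewall_enable="YES"
firewall_type="workstation"   # assumption for the sketch; the post uses a custom ruleset
firewall_quiet="YES"
sshguard_enable="YES"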

[Read More...]

Getting real IP addresses using Nginx and CloudFlare

[Update]: Thanks to digitaltoast for informing me about the real_ip_header CF-Connecting-IP; directive missing from the script, and for providing a patch for it.

OK, I suppose you already know what CloudFlare is and are familiar with the Nginx configuration process before we proceed any further. Just in case you don’t: CloudFlare offers free and commercial cloud-based services to help secure and accelerate websites. The thing is, I’m really satisfied with the services they offer, except for one repellent issue: logging the real IP addresses of your website’s visitors. Since CloudFlare acts as a reverse proxy, all connections come from CloudFlare’s IP addresses, not the real visitors’ anymore. Anyway, using Nginx there’s a simple workaround for this issue, which I’ll describe in the rest of this post.
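
The gist of the workaround is Nginx’s realip module: trust CloudFlare’s published address ranges and take the visitor’s address from the CF-Connecting-IP header. A trimmed-down sketch (only a couple of example ranges shown; use the full, current list from https://www.cloudflare.com/ips/):

# inside the http { } or server { } block
set_real_ip_from 103.21.244.0/22;
set_real_ip_from 173.245.48.0/20;
# ...one set_real_ip_from line per published CloudFlare range...
real_ip_header CF-Connecting-IP;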

[Read More...]