
Development

Here we will explore tools for software development.

Python

Most of the time in Python you will need to work in a virtual environment. To create one:

python -m venv C:\path\to\new\virtual\environment

To activate an environment on Linux/macOS:

source <venv>/bin/activate

On Windows:

<venv>\Scripts\activate

To deactivate:

deactivate

Web Development with Python

Streamlit is a simple Python framework for web development. There are also Flask, Django, and FastAPI.
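As a minimal, hedged example of how little it takes to get started with Streamlit (app.py is a placeholder for your own script):

pip install streamlit
streamlit run app.py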

Neovim / NvChad

Neovim is a terminal-based editor that can be turned into a full IDE. It's pretty helpful when used in an environment with no mouse and no graphical interface. Install Neovim and then clone the NvChad starter repo to your .config folder:

git clone https://github.com/NvChad/starter ~/.config/nvim && nvim

Vim Motions

  • dd delete line
  • u undo
  • x delete one character
  • w go to the start of the next word
  • e go to the end of the next word
  • b go to the start of the previous word
  • ctrl + h/j/k/l move between window splits (NvChad mapping)
  • ctrl + d scroll down half a page
  • ctrl + u scroll up half a page
  • g + g go to the first line
  • shift + g go to the last line
  • shift + a go to the end of the current line and enter insert mode
  • :xa save and close all windows
  • yy "yank" (copy) the current line
  • p paste
  • o open a new line below and enter insert mode

Running in production

There are several ways to deploy to production, but the simplest is probably running the app as a service. First create a service unit at /etc/systemd/system/app.service. A basic config is:

/etc/systemd/system/app.service
[Unit]
Description=App
After=network.target

[Service]
Type=simple
User=yourappuser
WorkingDirectory=/path/to/your/app
ExecStartPre=/usr/bin/npm run build
ExecStart=/usr/bin/npm run serve
Restart=always
RestartSec=10
Environment=NODE_ENV=production
Environment=PORT=3000
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=myjsapp

[Install]
WantedBy=multi-user.target
  • After=network.target ensures the network is available before starting your app
  • Restart=always makes sure your app restarts if it crashes
  • Environment=NODE_ENV=production sets the production environment
  • User=yourappuser runs the service as a specific user (recommended for security)
  • WorkingDirectory should point to your application directory
  • ExecStartPre runs any command you want before the app starts (here, a build)
  • ExecStart is the command that starts the app

Then reload the systemd daemon and start the service. You can also enable the service if you want it to run when the system boots:

sudo systemctl daemon-reload
sudo systemctl enable app.service
sudo systemctl start app.service
systemctl status app.service
info

Sometimes system services are a little weird about how they load certain environments. If you can't get the service to work, I recommend creating a shell script and running that as the service instead of invoking Node.js or Python (or whatever) directly.
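A minimal sketch of such a wrapper, assuming the npm-based unit above (all paths are placeholders):

start.sh
#!/usr/bin/env bash
# Wrapper so the service starts with the environment it expects
cd /path/to/your/app
export NODE_ENV=production
exec /usr/bin/npm run serve

Make it executable with chmod +x, then point the unit at it with ExecStart=/path/to/your/app/start.sh.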

Git

info

As of February 2, 2025, Git has a market share of 87.53%. It's probably the most used tool in all of the tech industry (maybe even outside it). Here are some advanced concepts that make working with Git 10x better.

Git History

git log lists all the commits in a repo, but the default output is a bit hard to read. To make it look a little nicer:

git log --graph --decorate --oneline
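If you use this a lot, you can save it as a Git alias (the name lg here is just an example):

git config --global alias.lg "log --graph --decorate --oneline"
git lg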

Commit Amend

Amend changes the contents of the latest commit; it can also be used to just change the message:

git commit --amend -m "this is an amendment to the latest commit"

If you want to just add files, and not change the message:

git commit --amend --no-edit
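A typical use: stage a file you forgot (the filename is a placeholder) and fold it into the previous commit:

git add forgotten-file.txt
git commit --amend --no-edit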

Reverting

If you have committed and pushed some code that you have changed your mind about, you can revert it; revert creates a new commit that undoes the changes of a previous one. Before reverting, you will need to find the hash of the commit from the logs:

git log --oneline
git revert <commit_hash>

Stashing

Sometimes code is too sloppy or just not good enough to commit, but we still want to keep it. Stash will remove the changes in your working directory and save them for later:

git stash

To add the changes back:

git stash pop

If you use stashing a lot, you can save your stashes with names so they are easier to find and navigate:

git stash push -m "<awesome_name>"

And then you can list the stashes and apply them with their corresponding indices:

git stash list
git stash apply 0

If you want to overwrite all the files with the stash, you can checkout the stash and commit:

warning

This will overwrite your uncommitted work! Use it only if you are sure what you're doing!

git checkout stash -- .
git commit

Binary Search (Bisect)

Sometimes code breaks without anyone noticing, or you might want to find where something specific changed. Git can do a binary search through your commits: it checks out commits for you until you find the one you are looking for. Practically, let's assume there is a bug in our code and we want to find the commit that introduced it. You will first need to start a binary search with:

git bisect start

Then you will need to point Git to the latest known bad commit and the last known good commit:

git bisect bad # The current commit is bad
git bisect good d32401d # Commit d32401d is known to be good

Git will then select a commit in between these two and check it out in your working directory. You can test your project and then tell Git whether this is a good commit or a bad commit with:

git bisect good # or: git bisect bad

Once the search is done, Git will output a log with the commit that it found. After you are done, you will want to return to the original HEAD (latest commit):

git bisect reset
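If you have a test script that exits non-zero on failure, Git can run the whole search for you (test.sh and the good commit hash are placeholders):

git bisect start HEAD d32401d
git bisect run ./test.sh
git bisect reset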

Squashing

Squashing is used to combine several commits into one. It is especially useful when a feature needs to be rebased to the main branch. We can use an interactive rebase to squash everything together:

git rebase main --interactive

Otherwise, you can flag commits with the --squash flag as you make them, so that when you merge or rebase, Git will automatically squash them for you. To flag a commit, point it at the commit it should be squashed into:

git commit --squash <commit_hash>

And then run an interactive rebase with the --autosquash flag so Git will handle all the squashing automagically:

git rebase -i --autosquash main

Git LFS

Git LFS is an extension that replaces large files with text pointers inside Git, while storing the file contents on a remote server. This improves repository performance when working with large binary files.

Initializing Git LFS

To start using Git LFS in your repository:

# Initialize Git LFS in your repository
git lfs install

This sets up Git LFS for your user account. The git lfs install command only needs to be run once per user account.

Tracking Files with Git LFS

Specify which file types should be tracked with Git LFS:

# Track files by extension
git lfs track "*.psd"
git lfs track "*.zip"
git lfs track "*.pdf"

# Track specific files
git lfs track "large-file.bin"

# Track files in a specific directory
git lfs track "images/*.png"

After tracking files, commit the .gitattributes file which stores these tracking patterns:

git add .gitattributes
git commit -m "Configure Git LFS tracking"

Then add and commit your large files as usual:

git add large-file.bin
git commit -m "Add large binary file"
git push

Migrating Existing Files to Git LFS

If you need to move files that are already in your Git history to LFS:

# 1. First, track the file types you want to migrate
git lfs track "*.psd"
git add .gitattributes
git commit -m "Track PSD files with Git LFS"

# 2. Use the Git LFS migrate command
git lfs migrate import --include="*.psd" --everything

The --everything flag migrates all branches and tags. For more selective migration:

# Migrate only a specific branch
git lfs migrate import --include="*.psd" --include-ref=refs/heads/main

# Migrate only files larger than 1KB, skipping remote-tracking refs
git lfs migrate import --include="*.psd" --everything --exclude-ref="refs/remotes/*" --above=1kb

After migration, force push all updated refs (branches and tags) to update the remote repository:

git push --force --all
git push --force --tags

Verifying LFS Status

# List all tracked patterns
git lfs track

# Show all LFS files in the repo
git lfs ls-files

Common Operations

# Pull LFS content
git lfs pull

# Fetch without checking out
git lfs fetch

# Check LFS status
git lfs status

Remove files from Git without deleting them

git rm -r --cached .

If you want to untrack a specific file instead, replace the dot with the file path (the -r is only needed for directories).
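For example, to stop tracking a local config file (the filename is a placeholder) and keep it out of future commits:

git rm --cached config.local.json
echo "config.local.json" >> .gitignore
git commit -m "Stop tracking config.local.json"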

Delete all git history without resetting repo

Deleting the .git folder may cause problems in your repository. If you want to delete all your commit history but keep the code in its current state, it can be done safely as follows:

Checkout

git checkout --orphan latest_branch

Add all the files

git add -A

Commit the changes

git commit -am "commit message"

Delete the branch

git branch -D main

Rename the current branch to main

git branch -m main

Finally, force-update your repository

git push -f origin main

Git Hooks

When you make an operation with Git like pushing, pulling, committing, etc., it creates an event, and you can use hooks to run code before or after that event. If you open the .git folder, you will find a hooks folder with sample scripts for all the events you can use to trigger code.
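A minimal sketch of a pre-commit hook, assuming a Python project (adapt the check to your stack); make the file executable with chmod +x:

.git/hooks/pre-commit
#!/bin/sh
# Refuse the commit if any staged Python file fails to byte-compile
for f in $(git diff --cached --name-only --diff-filter=ACM -- '*.py'); do
    python -m py_compile "$f" || exit 1
done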

Migrating repo

Create a temp folder locally, and start a new empty repository on the destination service. Then clone the repo from the source as a mirror, and push the mirror to the destination.

git clone --mirror <source-repo-url>
cd <source-repo-name>
git push --mirror <dest-repo-url>

The mirror is a bare clone and not directly usable, so it can be deleted after pushing.

Reuse Recorded Resolution

rerere ("reuse recorded resolution") is a Git setting that lets Git remember how you resolved a merge conflict and reuse that resolution when it detects a similar conflict:

git config --global rerere.enabled true

Optimizing Repository

Git has a built-in maintenance feature to optimize a repo. It can help reduce the time Git actions take, and reduce the size of the repo. To enable scheduled optimization, run inside the repo:

git maintenance start
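You can also run the maintenance tasks once, or turn the schedule off again:

git maintenance run
git maintenance stop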

CI/CD

CI/CD stands for continuous integration and continuous delivery (or deployment). It's a set of practices and tools used to automate building, testing, and deploying. There are several tools out there that help coordinate and automate builds, but I will focus on two methods/tools I use for CI/CD: webhooks and Jenkins.

Webhooks

When it comes to updating a site on a web server, the simplest thing would be to use Git and GitHub webhooks. You can use any simple web framework to create a webhook server; I use Flask, a Python framework:

webhook.py
from flask import Flask, request
import subprocess

app = Flask(__name__)


@app.route('/webhook', methods=['POST'])
def webhook():
    # Only handle push events
    if request.headers.get('X-GitHub-Event') == 'push':
        # Run deploy script
        result = subprocess.run(
            ['bash', '/path/to/deploy.sh'],
            capture_output=True,
            text=True
        )
        return f"Deployed: {result.stdout}"

    return "OK"


if __name__ == '__main__':
    app.run(port=3000)

This script opens port 3000 on your server and listens for GitHub webhook requests. Since this endpoint needs to be exposed to the internet, you will need to add some safeguards:

  • Signature validation
  • SSL
  • Tunneling

Signature validation

With this validation we make sure the request comes from GitHub:

webhook.py
import hmac
import hashlib

WEBHOOK_SECRET = "your-secret"  # YOU SET THIS UP IN GITHUB


def is_valid_signature(payload_body, signature_header):
    if not signature_header:
        return False

    # Get expected signature
    expected_signature = hmac.new(
        key=WEBHOOK_SECRET.encode(),
        msg=payload_body,
        digestmod=hashlib.sha256
    ).hexdigest()

    # Compare with GitHub's signature
    # GitHub sends it in format: 'sha256=<hash>'
    return hmac.compare_digest(
        f'sha256={expected_signature}',
        signature_header
    )
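To test the validation locally, you can forge the same signature GitHub would send in the X-Hub-Signature-256 header; a hedged sketch using openssl (the secret and port match the examples above):

BODY='{"zen":"test"}'
SIG=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "your-secret" | awk '{print $2}')
curl -X POST http://localhost:3000/webhook \
  -H "X-GitHub-Event: push" \
  -H "X-Hub-Signature-256: sha256=$SIG" \
  -d "$BODY"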

SSL

Make sure the webhook server is served over HTTPS. GitHub strongly recommends HTTPS webhook URLs and verifies SSL certificates by default.

Tunneling

As additional protection, you can forward the traffic to your server through a tunnel. I use Cloudflare Tunnels as it is quick and easy to set up. If you use Cloudflare as your domain registrar, it has excellent documentation on how to set up tunnels with your web services.
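For a quick ad-hoc tunnel (named, permanent tunnels take more setup; see Cloudflare's docs), cloudflared can expose the local webhook server without opening any ports:

cloudflared tunnel --url http://localhost:3000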

Minio

Minio is an open-source alternative to AWS S3 object storage. It's great for backend development and can sometimes even be used as a database. In my case, I'm using Minio for backups. Minio offers bare-metal deployments, but Docker is superior, so we will explore that.

First you need to decide if you are using a single drive or multiple. If you're going for multi-drive, you need to define them in the .env file and mount them in the compose file. Make sure the drives are mounted on your OS and added to your fstab file so they auto-mount on boot.

Set up a .env file:

.env
# Login credentials
MINIO_ROOT_USER=user
MINIO_ROOT_PASSWORD=password

# Set the storage volume or path inside the container.
MINIO_VOLUMES="/mnt/data"

MINIO_OPTS="--console-address :9001"

Copy and run the compose file; adjust if needed:

docker-compose.yml
services:
  minio:
    image: minio/minio
    container_name: minio
    ports:
      - 9000:9000
      - 9001:9001
    restart: unless-stopped
    environment:
      - MINIO_ROOT_USER=$MINIO_ROOT_USER
      - MINIO_ROOT_PASSWORD=$MINIO_ROOT_PASSWORD
      - MINIO_CONFIG_ENV_FILE=/etc/config.env
    volumes:
      - ./data:/mnt/data
      - ./.env:/etc/config.env
    command: minio server --console-address ":9001"
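Then bring the container up and tail the logs (the container name is set in the compose file above):

docker compose up -d
docker logs -f minio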
info

It is recommended to use NVMe or SSD drives for production workloads (databases, dictionaries), and to use HDDs only for cold storage (backups, media); try not to mix and match drive types if you can. The MinIO docs also recommend XFS as the filesystem.

I am using a single-drive deployment on an HDD formatted with BTRFS and nothing has exploded yet.

Check the logs of the new container and make sure it's not boot-looping. Go to localhost:9001 or YOUR-SERVER-IP:9001 and log in. Make your first bucket. Make sure to create access keys for your buckets as well; you will need them to use the API.

In my case, the tool I'm using, restic, gets the keys from environment variables:

export AWS_ACCESS_KEY_ID=<YOUR-MINIO-ACCESS-KEY-ID>
export AWS_SECRET_ACCESS_KEY=<YOUR-MINIO-SECRET-ACCESS-KEY>

and uses them to connect to my bucket: s3:http://localhost:9000/bucket
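As a hedged end-to-end sketch (the bucket name and backup path are placeholders, and restic will prompt for a repository password):

restic -r s3:http://localhost:9000/bucket init
restic -r s3:http://localhost:9000/bucket backup /path/to/data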