This week, we worked with version control software and set up personal websites where we document our progress. I used Bulma and Wintersmith to create this website; it is built automatically with Gitlab pipelines.
Version Control Systems
First things first: What is a version control system (VCS)? A VCS allows us to organize the different versions of files that are created during the evolution of a project, which is especially useful when collaborating in teams. Usually the project in question is a software project, but in principle any file can be put under version control. One can differentiate between centralised and decentralised models; the most well-known systems are probably Subversion and Git, respectively, but there are a lot of other solutions.
In a centralised VCS, there is a single repository (that is, a place where the entire history of the project's files is stored). Everyone working on the project has a copy of the current version of the project's files, and all changes to these files are committed to this central repository. Other team members can then pull (i.e. download) the changes, and the VCS tool will update their files accordingly.
With decentralised (or distributed) VCS, everyone working on the project has their own local repository containing the whole history of all files of the project. For text files, it is sufficient to store a series of changes (i.e. just the lines that have changed between versions) instead of whole files. This makes version control systems very suitable for software projects, because most source code consists of plain text files.
I'm using git, because I am already familiar with it, and the Fab Academy uses Gitlab to manage the students' repositories. There are a variety of graphical git clients available, but I will use the terminal, because I'm used to working with it and find it faster once you get the hang of it.
How to use the Git Command Line Interface
Working with the git command line interface (CLI) is actually incredibly easy once you get used to it. The most used commands are short and largely self-explanatory, and if you ever need advanced git features, the command line is the way to go, as it is always available, provides a standard interface, and is probably fairly bug-free. Also, it can be scripted, which is a nice way to save time by automating frequently occurring tasks.
In general, you make some changes (add/delete lines in your files, or add/delete files) in your working directory, you add these changes to the repository's index, commit them to be saved in version control (i.e. git), pull in possible changes from the upstream (i.e. the online repository) made by other team members, merge or rebase your changes, and push the resulting history of changes back to the upstream. On your terminal, this process may look similar to the following:
$ git add file1 file2 someDirectory/
$ git commit -m "This is a commit message so others get an idea of what you did"
$ git pull --rebase
$ git push
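Since this add/commit/pull/push sequence occurs so frequently, it can be wrapped in a small shell function; the following is my own sketch, not a built-in git feature:
$ gsync() { git add -A && git commit -m "$1" && git pull --rebase && git push; }
$ gsync "document this week's progress"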
To see the current status of your working directory, use git status, and to see commit messages, type git log. Every command provides help when called with the --help flag.
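For instance, checking the status and viewing a condensed history (the --oneline flag is standard git and shows one commit per line):
$ git status
$ git log --oneline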
Not too complicated, is it? Now, there is a vast number of good git tutorials available online, and many of them go into much more detail than I have time and space for here; a complete writeup would be out of scope. There's the gittutorial, the user manual, tutorials by Atlassian and github, and many more if you keep searching.
Gitlab Pipelines & Static Site Generators
With pipelines, Gitlab provides a very useful tool to automatically run tasks when pushing to the repository. For example, one can configure it to run automated tests, or -- in my case -- automatically build my website. In order to do this, I leverage Wintersmith, a JavaScript-based static site generator. Wintersmith converts Markdown files to HTML and renders them with templates.
The pipeline is configured with the special .gitlab-ci.yml file, which specifies the build environment and describes which commands to run.
Wintersmith builds the site from Markdown files and HTML templates; to build the site locally, install Wintersmith via npm
$ npm install wintersmith -g
and run
$ wintersmith build
In order to publish the site, the contents of the build folder have to be moved to the public/ directory. These steps are executed automatically by the Gitlab pipeline.
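For illustration, a minimal .gitlab-ci.yml implementing these steps might look like the sketch below; the Docker image and job layout are assumptions on my part, and the actual file may differ:

image: node:latest          # assumed build environment with node/npm available

pages:                      # "pages" is the job name Gitlab Pages expects
  script:
    - npm install wintersmith -g
    - wintersmith build
    - mv build public       # Gitlab Pages serves the public/ directory
  artifacts:
    paths:
      - public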
During development, it is also very useful to preview the site.
For this, run the built-in server via
$ wintersmith preview
Additionally, I use the wintersmith-livereload plugin, which automatically reloads the site when a file changes.
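Installing it might look like this (assuming the plugin is published under that name on npm and then registered in the plugins array of Wintersmith's config.json):
$ npm install wintersmith-livereload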
The styling of my website is done with Bulma, a versatile CSS framework.
Additional Tools
I use a couple of additional tools to manage the contents of this page.
To take screenshots, I use the built-in screenshot tool of Ubuntu:
- Print key for the whole screen
- Alt + Print key for the current window
- Shift + Print key to select an area
Trimage losslessly minifies JPGs and PNGs.
It is the Linux counterpart of ImageOptim.
Just call it as trimage -d . to minify all images in the current working directory, or trimage -f <image> to minify a single file.
ImageMagick is a very versatile tool for image manipulation, and it's included in most Linux distributions. There are GUI interfaces available, but I just use the command line interface; it is available as convert and mogrify (the latter modifies images in place).
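To illustrate the difference (filenames are placeholders): convert writes its result to a new file, while mogrify overwrites the input:
$ convert input.jpg -resize 50% output.jpg
$ mogrify -resize 50% input.jpg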
To minify all jpgs in a folder, I use the following command, which produces images that are usually smaller than 200kb:
$ for i in *.jpg; do mogrify -resize "1200>" -sampling-factor 4:2:0 -strip -quality 85 -interlace JPEG -colorspace RGB "$i"; done
To rotate counterclockwise:
$ mogrify -rotate "-90" <image>
For videos, ffmpeg is more or less what ImageMagick is for images. To convert all mp4 videos in a folder to webm (which saves a lot of space at comparable quality) without the audio stream, I use the following command:
$ for i in *.mp4; do ffmpeg -i "$i" -an -q:v 1 "${i%mp4}webm"; done
To keep the audio stream, omit the -an flag. If the resulting file is still too large, one may try to reduce the quality by setting -q:v to a value higher than 1.
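For example, converting a single video at reduced quality (the value 5 is an arbitrary choice; experiment to taste):
$ ffmpeg -i <input> -an -q:v 5 <output>.webm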
To trim a video file, I use
$ ffmpeg -i <input> -ss <start> -to <end> -c:v copy -c:a copy <output>
where <input> and <output> are the input and output filenames, respectively. The parameters <start> and <end> are the start and end times in the format hh:mm:ss; the end time is optional.
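For example, to keep only the segment between 5 seconds and 1 minute 30 seconds (filenames are placeholders):
$ ffmpeg -i input.mp4 -ss 00:00:05 -to 00:01:30 -c:v copy -c:a copy output.mp4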
I advise trimming the video before encoding it as webm, since ffmpeg does not always trim webms correctly, for some reason.
To crop a video, use the -filter:v parameter:
-filter:v "crop=<width>:<height>:<x>:<y>"
where width and height are the desired dimensions of the crop, and x and y specify the position of its top-left corner. If x and y are omitted, the crop is centered.
The input width and height can be referred to as in_w and in_h, respectively.
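For example, a centered square crop of a landscape video (x and y are omitted, so the crop is centered automatically):
$ ffmpeg -i <input> -filter:v "crop=in_h:in_h" <output>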