Linux

Table of Contents

1 Tmp

1.1 Arduino

pacman -S arduino arduino-docs arduino-avr-core
gpasswd -a $USER uucp
gpasswd -a $USER lock
modprobe cdc-acm

2 Installation

On Windows, you need unetbootin. On Linux:

dd bs=4M if=/path/to/archlinux.iso of=/dev/sdx && sync
# restore
dd count=1 bs=512 if=/dev/zero of=/dev/sdx && sync

On Mac:

hdiutil convert -format UDRW -o ~/path/to/target.img ~/path/to/ubuntu.iso
diskutil list   # insert the USB, run diskutil list again => e.g. /dev/disk1
diskutil unmountDisk /dev/diskN
sudo dd if=/path/to/downloaded.img of=/dev/rdiskN bs=1m
diskutil eject /dev/diskN

Create MacOS LiveUSB

sudo /Applications/Install\ OS\ X\ Mavericks.app/Contents/Resources/createinstallmedia \
--volume /Volumes/Untitled \
--applicationpath /Applications/InstallXXX.app \
--nointeraction

2.1 Virtualization

Make sure the kernel modules kvm and virtio are loaded, and that CPU virtualization is enabled in the BIOS.
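
A quick check (lsmod and modprobe are standard; the exact module name, kvm_intel or kvm_amd, depends on the CPU):

lsmod | grep -e kvm -e virtio   # kvm plus kvm_intel/kvm_amd should show up
modprobe kvm_intel              # load manually if missing (kvm_amd on AMD CPUs)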

Using qemu, first create hard disk image file:

qemu-img create -f raw disk_image 30G

Then load the iso file and the hard disk image to install the system. The -m 1024 is crucial; otherwise booting will fail with errors. The -enable-kvm is also crucial for speed.

qemu-system-x86_64 -cdrom iso_image -boot order=d -drive file=disk_image,format=raw -enable-kvm -m 1024

Finally, run the system with

qemu-system-x86_64 -enable-kvm -m 1024 disk_image

Alternatively, you can use virt-manager as a GUI front-end.

To have sound and better resolution:

qemu-system-x86_64 -enable-kvm -m 4096 -vga virtio -soundhw hda -cpu host -smp 8 ubuntu

TODO: try SPICE.

3 GuixSD

Actually the installation process is quite joyful, except that no wifi driver is available.

Download the image and

xz -d guixsd-install-0.14.0.system.iso.xz
dd if=guixsd-install-0.14.0.x86_64-linux.iso of=/dev/sdX
sync

Boot the system. The network interfaces can be seen via ifconfig -a or ip a. You need to first bring the interface up:

ifconfig interface up

Then get the IP address via

dhclient -v interface

Then start the ssh daemon to continue the install remotely (remember to set a password):

herd start ssh-daemon

Partitioning the disk is the same as for other Linux distributions. The following is the setup for GPT.

parted /dev/sda mklabel gpt
parted /dev/sda mkpart ESP fat32 1MiB 513MiB
parted /dev/sda set 1 boot on
parted /dev/sda mkpart primary linux-swap 513MiB 5GiB
parted /dev/sda mkpart primary ext4 5GiB 100%

Then, format the disks

mkfs.fat -F32 /dev/sda1
mkfs.ext4 -L my-root /dev/sda3

The label here is important, because it can be used in the config files, or in the mount command below. Note that the ESP partition is mounted on /mnt/boot/efi, instead of /mnt/boot. There are actually two suggested mount points for the ESP partition on the Arch wiki, and /mnt/boot/efi should be preferred.

mount LABEL=my-root /mnt/
mkdir -p /mnt/boot/efi
mount /dev/sda1 /mnt/boot/efi

Then, start cow-store, making the /gnu/store copy-on-write

herd start cow-store /mnt

Copy the example configuration file into the target system. The point of copying it is that we will still have the config file after rebooting into the new system.

mkdir /mnt/etc
cp /etc/configuration/desktop.scm /mnt/etc/config.scm
zile /mnt/etc/config.scm

When editing the file, we need to modify:

  1. On legacy boot, make sure grub-bootloader is installed to /dev/sda. On UEFI, it should be grub-efi-bootloader with target /mnt/boot/efi (the path to the mount point of the ESP partition). The official manual says it should be /boot/efi, but mine shows the error: "grub-install: error: failed to get canonical path of /boot/efi"
  2. make sure file-system has the correct label and mount point
  3. If you didn't use encryption, remove the mapped-devices section, and probably add (title 'label) as indicated here

Now install the system:

guix system init /mnt/etc/config.scm /mnt/

The default config installs a lot of things, including GNOME, and takes an hour. I should definitely maintain a copy of my config file.

Done. Reboot.

Whenever you want to update the system:

guix pull
guix system reconfigure /etc/config.scm
guix package -u

3.1 Qemu Image

Running GuixSD in Qemu is probably the easiest way. Download the Qemu image, uncompress it, and run:

qemu-system-x86_64 \
   -net user -net nic,model=virtio \
   -enable-kvm -m 256 /path/to/image

To bring the network up:

ifconfig eth0 up
dhclient -v eth0

The system is now online. The ping command does not work, and that's fine.

guix pull
guix package -u

The qemu image is 1.2G. To expand it, first expand the image size:

qemu-img resize guixsd-vm-image-0.15.0.x86_64-linux  +10G

Boot the image

qemu-system-x86_64 -net user -net nic,model=virtio -vga virtio -enable-kvm -m 2048 -cpu host -smp 8 guixsd-vm-image-0.15.0.x86_64-linux

The partition does not need to be unmounted.

fdisk /dev/sda
d 2
d 1 # note that this starts from 2048
n # create partition that starts also from 2048
a # check the boot flag
w # write

Then, reload the partition table:

partprobe

Then resize the filesystem via resize2fs

resize2fs /dev/sda1

The image needs to connect to the internet.

dhclient eth0

You are online. The ping command will not work (QEMU user-mode networking does not pass ICMP); you can check the network with the guix download command instead.

Installing git is not enough: it complains that a certificate is needed. You probably need nss and nss-certs. It also shows some environment variables that are needed (how do I show this information again? Do they really belong to nss or to git?)

export GIT_SSL_CAINFO="/root/.guix-profile/etc/ssl/certs/ca-certificates.crt"
export GIT_EXEC_PATH="/root/.guix-profile/libexec/git-core"

3.2 Guile

When debugging guile files, use C-c C-s to set the Scheme implementation to guile. That enables following definitions; otherwise it just complains "No geiser REPL for this buffer" even after M-x run-geiser.

4 Git

Withdrawing a remote commit is actually fairly easy: first reset the local branch, then force-push.

git reset --hard <commit-hash>
git push -f origin master

By contrast, git-revert creates a new commit that undoes the previous commits.
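
For example, a sketch of the revert alternative (the hash is a placeholder):

git revert <commit-hash>   # new commit that undoes <commit-hash>
git push origin master     # no force push needed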

4.1 TODO gitolite

4.2 Server

There are several protocols. The smart HTTP protocol seems to be the way to go, because it supports both anonymous access and authentication.

But local and SSH are easy. For local, you can just clone using /abs/path/to/repo as the URL. For ssh, use user@server:/path/to/proj.git.
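
A minimal sketch of the SSH setup (paths and names are illustrative):

# on the server
git init --bare /srv/git/proj.git
# on the client
git clone user@server:/srv/git/proj.git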

Now let me talk about setting up smart HTTP with lighttpd and cgit.

in /etc/lighttpd/lighttpd.conf

server.port             = 80
server.username         = "http"
server.groupname        = "http"

server.document-root    = "/srv/http"

server.modules += ( "mod_auth", "mod_cgi", "mod_alias", "mod_setenv" )

alias.url += ( "/git" => "/usr/lib/git-core/git-http-backend" )
$HTTP["url"] =~ "^/git" {
  cgi.assign = ("" => "")
  setenv.add-environment = (
  "GIT_PROJECT_ROOT" => "/srv/git",
  "GIT_HTTP_EXPORT_ALL" => ""
  )
}
$HTTP["querystring"] =~ "service=git-receive-pack" {
        include "git-auth.conf"
}
$HTTP["url"] =~ "^/git/.*/git-receive-pack$" {
        include "git-auth.conf"
}

# alias.url += ( "/cgit" => "/usr/share/webapps/cgit/cgit.cgi" )                                           
# alias.url += ( "/cgit" => "/usr/lib/cgit/cgit.cgi" )                                                     
url.redirect += ("^/$" => "/cgit/")
$HTTP["url"] =~ "^/cgit" {
    server.document-root = "/usr/share/webapps"
    server.indexfiles = ("cgit.cgi")
    cgi.assign = ("cgit.cgi" => "")
    mimetype.assign = ( ".css" => "text/css" )
}

/etc/lighttpd/git-auth.conf

auth.require = (
        "/" => (
                "method" => "basic",
                "realm" => "Git Access",
                "require" => "valid-user"
               )
)

auth.backend = "plain"
auth.backend.plain.userfile = "/etc/lighttpd-plain.user"

In /etc/lighttpd-plain.user

hebi:myplainpassword

My /etc/cgitrc:

#
# cgit config
#

# css=/cgit.css
# logo=/cgit.png

# Following lines work with the above Apache config
#css=/cgit-css/cgit.css
#logo=/cgit-css/cgit.png

# Following lines work with the above Lighttpd config
css=/cgit/cgit.css
logo=/cgit/cgit.png

# if you do not want that webcrawler (like google) index your site
robots=noindex, nofollow

# if cgit messes up links, use a virtual-root. For example has cgit.example.org/ this value:
# virtual-root=/cgit


# Include some more info about example.com on the index page
# root-readme=/var/www/htdocs/about.html
root-readme=/srv/http/index.html

#
# List of repositories.
# This list could be kept in a different file (e.g. '/etc/cgitrepos')
# and included like this:
#   include=/etc/cgitrepos
#

clone-url=http://git.lihebi.com/git/$CGIT_REPO_URL.git
readme=:README.org
readme=:README.md
readme=:readme.md
readme=:README.mkd
readme=:readme.mkd
readme=:README.rst
readme=:readme.rst
readme=:README.html
readme=:readme.html
readme=:README.htm                                                                             
readme=:readme.htm                                                                             
readme=:README.txt                                                                             
readme=:readme.txt                                                                             
readme=:README                                                                                 
readme=:readme

section=hebi

repo.url=hebicc
repo.path=/srv/git/hebicc.git
repo.desc=Hebi CC

repo.url=cgit/hebicc
repo.path=/srv/git/hebicc.git
repo.desc=Hebi CC

repo.url=test
repo.path=/srv/git/test.git
repo.desc=Test

repo.url=pdf
repo.path=/srv/git/pdf.git
repo.desc=pdf


# The next repositories will be displayed under the 'extras' heading
section=extras


repo.url=baz
repo.path=/pub/git/baz.git
repo.desc=a set of extensions for bar users

repo.url=wiz
repo.path=/pub/git/wiz.git
repo.desc=the wizard of foo


repo.url=foo
repo.path=/pub/git/foo.git
repo.desc=the master foo repository
repo.owner=fooman@example.com
repo.readme=info/web/about.html

# Add some mirrored repositories
section=mirrors

repo.url=git
repo.path=/pub/git/git.git
repo.desc=the dscm

# For a non-bare repository
# repo.url=MyOtherRepo
# repo.path=/srv/git/MyOtherRepo/.git
# repo.desc=That's my other git repository

# scan-path=/srv/git/

The /srv/git directory must be owned by group http, and the group write bit must be set for pushing to work.
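
A rough sketch of that permission setup (paths from the config above):

chgrp -R http /srv/git
chmod -R g+w /srv/git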

I can clone via http://git.lihebi.com/git/repo.git. The cgit page is at http://git.lihebi.com/cgit.

In practice, I cannot push a lot of pdf files; it seems to be a problem with the lighttpd configuration for the maximum request body size, but I haven't looked into it yet. Cloning does not have this problem though.

If I don't have the Let's Encrypt certificate, I cannot use https. Then I can only clone, but not push, via git-http-backend.

/var/lib/certbot/renew-certificates may need to be run manually if /etc/letsencrypt/live/example.com does not exist.

But my server is inside the IASTATE network, so Let's Encrypt cannot reach my IP address. Thus nothing can be done, actually.

4.3 Configuration

git config --global user.email 'xxx@xxx'
git config --global user.name 'xxx'
git config --global credential.helper cache # cache 15 min by default
git config --global credential.helper 'cache --timeout=3600' # set in sec

4.4 Usage Tips

Show the diff along with each commit when inspecting the log:

git lg -p
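
lg is not a built-in subcommand, so it is presumably a custom alias; a hedged example of defining one:

git config --global alias.lg "log --graph --oneline --decorate"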

4.5 Individual tools

4.5.1 git-bisect

This command uses a binary search algorithm to find which commit in your project's history introduced a bug.

  1. The initial input: a "good" and a "bad" commit.
  2. bisect selects a commit, checks it out, and ASKS YOU whether it is good or bad.
  3. iterate step 2
4.5.1.1 start
  $ git bisect start
  $ git bisect bad                 # Current version is bad
  $ git bisect good v2.6.13-rc2    # v2.6.13-rc2 is known to be good
4.5.1.2 answer the question

Each time testing a commit, answer the question by:

  $ git bisect good # or bad
4.5.1.3 multiple good

If you know beforehand more than one good commit, you can narrow down the bisect space by specifying all of the good commits immediately after the bad commit when issuing the bisect start command:

  • v2.6.20-rc6 is bad
  • v2.6.20-rc4 and v2.6.20-rc1 are good
  $ git bisect start v2.6.20-rc6 v2.6.20-rc4 v2.6.20-rc1 --
4.5.1.4 run script

If you have a script that can tell if the current source code is good or bad, you can bisect by issuing the command:

  $ git bisect run my_script arguments
4.5.1.5 Some work flows

Automatically bisect a broken build between v1.2 and HEAD: in this case, we only find the commit that causes the compile failure.

  $ git bisect start HEAD v1.2 --      # HEAD is bad, v1.2 is good
  $ git bisect run make                # "make" builds the app
  $ git bisect reset                   # quit the bisect session

Automatically bisect a test failure between origin and HEAD: this time, use the make test workflow.

  $ git bisect start HEAD origin --    # HEAD is bad, origin is good
  $ git bisect run make test           # "make test" builds and tests
  $ git bisect reset                   # quit the bisect session

Automatically bisect a broken test case: Use a custom script.

  $ cat ~/test.sh
  #!/bin/sh
  make || exit 125                     # this skips broken builds
  ~/check_test_case.sh                 # does the test case pass?
  $ git bisect start HEAD HEAD~10 --   # culprit is among the last 10
  $ git bisect run ~/test.sh
  $ git bisect reset                   # quit the bisect session

4.5.2 git-blame

Annotates each line in the given file with information from the revision which last modified the line.

5 Network

When using a docker container, the host system cannot resolve the container's name to its IP address; I have to specify it manually. To resolve a name to an IP address, you can add it to /etc/hosts, e.g. at the end of the file, add:

172.18.0.2 srcml-server-container

In Arch, ifconfig is in the net-tools package and is deprecated. Use ip instead:

ip addr show <dev>
ip link # show links
ip link show <dev>

To kill apps listening on a port, use sudo fuser -k 8080/tcp.
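
To see which process is listening on which port in the first place (ss is from iproute2, the same family as ip):

ss -tlnp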

5.1 Wireless Networking

DHCP is not enabled by default. That is the philosophy of Arch: installing a package does not enable any service. Enable it with:

systemctl enable dhcpcd

The utility for configuring wireless network is called iw.

  • iw dev: list dev
  • iw dev <interface> link: show status
  • ip link set <interface> up: up the interface
  • ip link show <interface>: if you see <UP> in the output, the interface is up
  • iw dev <interface> scan: scan for networks
  • iw dev <interface> connect "SSID": connect to open network

iw can only connect to open networks. wpa_supplicant is used to connect to WPA2/WEP encrypted networks.

The config file (e.g. /etc/wpa_supplicant/example.conf) can be generated in two ways: using wpa_cli or using wpa_passphrase. wpa_cli is interactive, and has commands scan, add_network, save_config.

wpa_passphrase MYSSID <passphrase> > /path/to/example.conf

Inside this file there is a network section. The ssid is the quoted SSID name, while psk is the unquoted hashed passphrase. The psk can also be a quoted clear-text password. If the network is open, use key_mgmt=NONE in place of psk.

After the configuration, you can connect to a WPA/WEP protected network:

wpa_supplicant -B -i <interface> -c <(wpa_passphrase <MYSSID> <passphrase>)

where

  • -B: fork into the background
  • -i: the interface
  • -c: path to the configuration file

Alternatively, you can use the config file

wpa_supplicant -B -i <interface> -c /path/to/example.conf

After this, you need to get an IP address in the "usual" way, e.g.

dhcpcd <interface>

It seems that we should enable the services:

  • wpa_supplicant@<interface>
  • dhcpcd@<interface>

Also, dhcpcd has a hook that can launch wpa_supplicant implicitly.

To sum up: find the interface with iw dev. Say it is wlp4s0.

Create config file /etc/wpa_supplicant/wpa_supplicant-wlp4s0.conf:

  network={
          ssid="MYSSID"
          psk="clear passwd"
          psk=fjiewjilajdsf8345j38osfj
  }

  network={
          ssid="2NDSSID"
          key_mgmt=NONE
  }

Enable wpa_supplicant@wlp4s0 and dhcpcd@wlp4s0 (or just dhcpcd)
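
In systemd terms that would be something like (--now also starts the units immediately):

systemctl enable --now wpa_supplicant@wlp4s0
systemctl enable --now dhcpcd@wlp4s0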

To switch to another wifi network, kill the running wpa_supplicant and start it with another config:

sudo killall wpa_supplicant
wpa_supplicant -B -i wlp4s0 -c /path/to/wifi.conf

5.2 VPN

5.2.1 L2tp, IPSec

apt-get purge "lxc-docker*"
apt-get purge "docker.io*"
apt-get update
apt-get install apt-transport-https ca-certificates gnupg2
sudo apt-key adv \
       --keyserver hkp://ha.pool.sks-keyservers.net:80 \
       --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

# add to /etc/apt/sources.list.d/docker.list:
deb https://apt.dockerproject.org/repo debian-jessie main
apt-get update
apt-cache policy docker-engine
apt-get update
apt-get install docker-engine
service docker start
docker run hello-world

https://github.com/hwdsl2/setup-ipsec-vpn/blob/master/docs/clients.md https://hub.docker.com/r/fcojean/l2tp-ipsec-vpn-server/

docker pull fcojean/l2tp-ipsec-vpn-server

vpn.env

VPN_IPSEC_PSK=<IPsec pre-shared key>
VPN_USER_CREDENTIAL_LIST=[{"login":"userTest1","password":"test1"},{"login":"userTest2","password":"test2"}]
modprobe af_key
docker run \
    --name l2tp-ipsec-vpn-server \
    --env-file ./vpn.env \
    -p 500:500/udp \
    -p 4500:4500/udp \
    -v /lib/modules:/lib/modules:ro \
    -d --privileged \
    fcojean/l2tp-ipsec-vpn-server
docker logs l2tp-ipsec-vpn-server
docker exec -it l2tp-ipsec-vpn-server ipsec status

5.2.2 OpenVPN

5.2.2.1 Server Setup

https://github.com/kylemanna/docker-openvpn It is very interesting to use docker this way.

The persistent part is the storage volume, which is mounted at /etc/openvpn and serves as the configuration. Each time, create a new docker container mounting the same storage; each step writes to that configuration.

OVPN_DATA="ovpn-data-example"
docker volume create --name $OVPN_DATA
docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig -u udp://VPN.SERVERNAME.COM
docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn ovpn_initpki

It is easy to run the server itself. This time use -d option to make it a daemon.

docker run -v $OVPN_DATA:/etc/openvpn -d -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn

It is also easy to create certificates on the go. For that, create a new container to generate and retrieve the certificate.

docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn easyrsa build-client-full CLIENTNAME nopass
docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn ovpn_getclient CLIENTNAME > CLIENTNAME.ovpn
5.2.2.2 Client Setup

On Arch, copy hebi.ovpn to /etc/openvpn/client/hebi.conf; then the service openvpn-client@hebi will be available to systemd. On Ubuntu, the path is /etc/openvpn/hebi.conf, with service openvpn@hebi. Starting the service will forward traffic.
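
That is, roughly:

# Arch
systemctl start openvpn-client@hebi
# Ubuntu
systemctl start openvpn@hebi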

It is likely that you can connect and can ping any IP address, but cannot resolve names. You can even use drill @8.8.8.8 google.com to resolve names through the tunnel.

The trick is to update the local machine's resolv.conf from the options pushed by the server. First install openresolv and (from the AUR) openvpn-update-resolv-conf. Add the following to the end of the hebi.conf file:

script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf

For ubuntu the openvpn package already contains the file. Just modify the conf file.

6 App

6.1 mplayer

Interactive controls:

  • Forward/Backward: LEFT/RIGHT (10s), UP/DOWN (1m), PGUP/PGDWN (10m)
  • Playback speed: [] (10%), {} (50%), backspace (reset)
  • /*: volume

When changing the speed, the pitch changes too. To disable this, start mplayer with mplayer -af scaletempo. To stretch the image to full screen, pass the -zoom option when starting.

6.2 youtube-dl

When downloading a playlist, you can use the output template to number the files:

youtube-dl -o "%(playlist_index)s-%(title)s.%(ext)s" <playlist_link>

Download music only:

youtube-dl --extract-audio --audio-format flac <url>

6.3 chrome extensions

  • html5outliner: give you a toc of the page. VERY NICE!
  • render for email
  • unblockyouku
  • adblock
  • syntax highlighter

6.4 Remote viewer

The lab machines are accessed via SPICE. The client for SPICE is virt-viewer, which can be installed through the package manager. The actual client is called remote-viewer, which is shipped with virt-viewer. So the command to connect to the .vv file is: remote-viewer console.vv.

6.5 mpd

music player daemon

To start:

mkdir -p ~/.config/mpd
cp /usr/share/doc/mpd/mpdconf.example ~/.config/mpd/mpd.conf
mkdir ~/.mpd/playlists
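# then edit ~/.config/mpd/mpd.conf: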
# Required files
db_file            "~/.mpd/database"
log_file           "~/.mpd/log"

# Optional
music_directory    "~/music"
playlist_directory "~/.mpd/playlists"
pid_file           "~/.mpd/pid"
state_file         "~/.mpd/state"
sticker_file       "~/.mpd/sticker.sql"

# uncomment pulse audio section
audio_output {
	type		"pulse"
	name		"My Pulse Output"
}

Start mpd by:

systemctl --user start mpd
systemctl --user enable mpd

The client cantata can be used to create playlists. stumpwm-contrib has an mpd client. mpc is a command-line client.

6.6 fontforge

How I made the WenQuanYi Micro Hei ttf font (clx-truetype only recognizes ttf, not ttc):

  • input: ttc file
  • Tool: fontforge

Open the ttc file, select one font, generate the font, and choose TrueType. The validation failed, but that doesn't matter.

6.7 tmux

# start a new session, with the session name set to "helium"
tmux new -s helium
# attach, and the target is "helium"
tmux a -t helium

Some default commands (all after prefix key):

  • !: break the current pane into another window
  • :: prompt command
  • q: briefly display pane index (1,2,etc)

Commands

  • select-layout even-horizontal: balance window horizontally
  • last-window: jump to last active window
  • new-window
  • detach

7 Window System

X generally distinguishes between two types of selection, the PRIMARY and the CLIPBOARD. Every time you select a piece of text with the mouse, the selected text is set as the PRIMARY selection. Using the copy function will place the selected text into the CLIPBOARD. Pasting using the middle mouse button will insert the PRIMARY selection, pasting using the paste function will insert the CLIPBOARD.
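
These selections can also be set from scripts with xclip, if it is installed (it is not part of the core X utilities):

echo hello | xclip                        # sets the PRIMARY selection
echo hello | xclip -selection clipboard   # sets the CLIPBOARD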

7.1 xkill

Kill all Xorg instances

pkill -15 Xorg

If using kill:

ps -ef | grep Xorg # find the pid
kill -9 <PID>

xkill itself is not working properly, giving me an "unable to find display" error.

7.2 Display Manager

Install xdm. It will use the file $HOME/.xsession, so

ln -s $HOME/.xinitrc $HOME/.xsession

Change default desktop environment:

  • GNOME: gdm
  • KDE: kdm
  • LXDE: lightdm

Change (three approaches):

  1. edit /etc/X11/default-display-manager: I think we'd better use update-alternatives
  2. sudo dpkg-reconfigure gdm
  3. update-alternatives --config x-window-manager

7.3 screen

With multiple screens, stumpwm detects them as one. Install xdpyinfo; it is used to detect the heads.

check the screen resolution:

xdpyinfo | grep -B 2 resolution

Multiple Display:

# Mirror display
sudo xrandr --output HDMI-2 --same-as eDP-1
sudo xrandr --output HDMI-2 --off

Rotate

xrandr --output HDMI-1 --rotate left

Change resolution

xrandr --output HDMI-1 --mode 1920x1080

Touch screen might need calibration in dual screen setup. Simply find the touch screen device ID (e.g. 10) from xinput and screen ID (e.g. DP-1) from xrandr, and execute:

xinput map-to-output <device-id> <screen-id>

7.4 cursor

Install xcursor-themes:

aptitude install xcursor-themes
aptitude show xcursor-themes # here it will output the themes name

In .Xresources:

Xcursor.theme: redglass

7.5 Natural Scrolling

The old solution is to swap the pointer buttons "4" and "5", with xmodmap or xinput:

xmodmap -e "pointer = 1 2 3 5 4"
xinput --set-button-map 10 1 2 3 5 4

The 10 is the device id; to find it, run xinput without arguments.

But this way is deprecated; as of Chromium 49 and above it does not work any more. So use xinput to set the property instead:

xinput set-prop 10 "libinput Natural Scrolling Enabled" 1

I'm using logitech G900 and the property might be different. It works!

I'm not sure if the xinput command should be run each time the system boots. That would be awkward, since the device ID has to be specified.

The detail is, you can do this:

xinput # show a list of devices
xinput list-props <ID> # list of properties
xinput set-prop <deviceID> <propID> <value>

7.6 ratpoison

This is actually a wonderful WM. To start:

aptitude install ratpoison

In .xinitrc:

exec ratpoison
  • C-t ? to show the help

Actually C-t is the prefix of every command; C-g aborts.

  • C-t :: type command
  • C-t !: run shell command
  • C-t .: open menu
  • C-t c: open terminal

HOWEVER, this is pretty old, and it causes the screen to flicker brighter and darker back and forth. Fortunately stumpwm is very much like it, but is

  1. actively maintained on github.
  2. written in Common Lisp

7.7 StumpWM

7.7.1 Installation

In order to use the ttf-fonts module, the Lisp clx-truetype package needs to be installed. Install the SLIME IDE for Emacs, install quicklisp, then install clx-truetype using quicklisp. Follow the description in the Lisp wiki page.

7.7.1.1 A better way to install stumpwm
  • This seems a better way to install stumpwm (ql:quickload "stumpwm")

But this requires the .xinitrc to be

exec sbcl --load /path/to/startstump

with startstump

(require :stumpwm)
(stumpwm:stumpwm)
7.7.1.2 Live Debugging

To debug it live, you might need this in .stumpwmrc:

  (in-package :stumpwm)

  (require :swank)
  (swank-loader:init)
  (swank:create-server :port 4004
                       :style swank:*communication-style*
                       :dont-close t)

The above won't work unless swank is installed:

(ql:quickload "swank")

The port is actually interesting. Here it is set to 4004, and slime in Emacs defaults to 4005, so they won't clash. The trick to connect to stumpwm is to use slime-connect and enter 4004 at the port prompt.

So actually, if you just want to live debug, just install swank and

(require 'swank)
(swank:create-server)

Note lastly that to install using quickload, you need write permission. So:

sudo sbcl --load /usr/lib/quicklisp/setup

To test if it works, you should be able to switch to stumpwm namespace and operate the window, like this:

(in-package :stumpwm)
(stumpwm:select-window-by-number 2)

7.7.2 General

Same as ratpoison:

  • C-t C-h: show help
  • C-t !: run shell command
  • C-t c terminal
  • C-t e: open emacs!
  • C-t ;: type a command
  • C-t :: eval
  • C-t C-g: abort
  • C-t a: display time
  • C-t t: send C-t
  • C-t m: display last message
7.7.2.1 Get Help
  • C-t h k: from key binding to command: describe-key
  • C-t h w: from command to key binding: where-is
  • C-t h c: describe command
  • C-t h f: describe function
  • C-t h v: describe variable
  • mode-line: start mode-line

7.7.3 Window

  • C-t n
  • C-t p
  • C-t <double-quote>
  • C-t w list all windows
  • C-t k kill current frame (K to force quit)
  • C-t # toggle mark of current window

7.7.4 Frame

  • C-t s: hsplit
  • C-t S: vsplit
  • C-t Q: kill other frames, only retains this one
  • C-t r: resize, can use C-n, C-p interactively
  • C-t +: balance frame
  • C-t o: next frame
  • C-t -: show desktop

Other commands:

  • remove-split: remove the current frame
  • fclear: clear the current frame, show the desktop

To resize frames interactively, C-t r and then use the arrows.

7.7.5 Groups

Shortcuts:

  • C-t g c: create: gnew. Also available for float: gnew-float
  • C-t g n: next
  • C-t g o: gother
  • C-t g p: previous
  • C-t g <double-quote>: interactively select groups: grouplist
  • C-t g k: kill current group, move windows to next group: gkill
  • C-t g r: rename current group: grename
  • C-t G: display all groups and their windows
  • C-t g g: show list of group
  • C-t g m: move current window to group X
  • C-t g <d>: go to group <d>

7.7.6 Configuration

(stumpwm:define-key stumpwm:*root-map* (stumpwm:kbd "C-z") "echo Zzzzz...")

7.8 Xmonad

I use Xmonad in vncserver, and it works nicely with the host WM StumpWM because it uses a different set of keys. It has a red frame around windows by default, which is nice for visually distinguishing the local and remote screens.

The executable is xmonad. Mod key is alt.

  • Mod-shift-enter opens terminal.
  • Mod-j/k move focus to windows
  • Mod-space cycle layout
  • Mod-,/. decrease/increase the number of panels inside the master (current) panel
  • Mod-h/l resize
  • Mod-shift-c kill
  • mod-p execute dmenu (need installation)
  • mod-<1-9> switch workspace

Install xmobar and trayer.

Configuration is done in ~/.xmonad/xmonad.hs. Test whether your config file is syntactically correct:

xmonad --recompile

To load it, use Mod-q. This will re-compile and load the config file.

7.9 VNC

I use tigervnc because it seems to be fast.

  • vncpasswd: set the password
  • vncserver&: start the server.
    • It is started in :1 by default, so connect it with vncviewer <ip>:1
    • On Mac, the docker bridge network does not work, so you cannot connect to the container by IP address. In this case, map port 5901; 5900+N is the default VNC port for display :N.
    • vncserver -kill :1 will kill the vncserver
    • vncserver :2 will open :2

vncserver will use ~/.vnc/xstartup as startup script. It must have execution permission.

F8 opens the context menu, and f toggles fullscreen. Once fullscreened, the host WM shortcuts will not be honored.

8 System Management

The hardware beep sound is known as PC Speaker. To disable, simply remove the kernel module:

rmmod pcspkr
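
rmmod only lasts until the next boot. To make it permanent, the usual approach is to blacklist the module (the file name under /etc/modprobe.d/ is arbitrary):

echo "blacklist pcspkr" > /etc/modprobe.d/nobeep.conf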

To use an ssh key for connecting to a remote ssh daemon: on the local machine, run ssh-keygen, then ssh-copy-id user@server.
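
In command form:

ssh-keygen
ssh-copy-id user@server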

8.1 Audio

Bluetooth headsets:

  • bluez
  • bluez-utils
  • bluez-libs
  • pulseaudio-alsa
  • pulseaudio-bluetooth

use bluetoothctl to enter config:

[bluetooth]# power on
[bluetooth]# agent on
[bluetooth]# default-agent
[bluetooth]# scan on
[NEW] Device 00:1D:43:6D:03:26 Lasmex LBT10
[bluetooth]# pair 00:1D:43:6D:03:26
[bluetooth]# connect 00:1D:43:6D:03:26

If you're getting a connection error org.bluez.Error.Failed, retry after killing the existing PulseAudio daemon first:

$ pulseaudio -k
[bluetooth]# connect 00:1D:43:6D:03:26

8.2 Power Management

Power management can be handled by systemd (in place of acpid). The configuration file is /etc/systemd/logind.conf; see man logind.conf for details. hibernate saves to disk, while suspend saves to RAM. Both resume to the current state.

HandlePowerKey=hibernate
HandleLidSwitch=suspend

8.3 Booting

The grub2 menu configure file is located at /boot/grub/grub.cfg. It is generated by /usr/sbin/update-grub (8) using templates from /etc/grub.d/* and settings from /etc/default/grub.

The default run level is 2 (multi-user mode), corresponding to the /etc/rc2.d/XXX scripts. Those scripts start with "S" or "K", meaning start or kill (stop), and are symlinked to ../init.d/xxx. By default there is no difference between levels 2 to 5. Run level 0 means halt, S means single-user mode, 6 means reboot.

8.4 User Management

The useradd account will use the values given on the command line, plus the system defaults. A group will also be created by default. (See the example after the list.)

  • -g GROUP: specify the initial login group. Typically just ignore this, the default value will be used.
  • -G group1,group2,...: additional groups. You might want: video, audio, wheel
  • -m: create the home directory if it does not exist
  • -s SHELL: use this shell. Typically just ignore this, the system will choose for you.
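
For example (the username and group list are illustrative):

useradd -m -G wheel,video,audio hebi
passwd hebi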

8.5 File Management

8.5.1 Swap File

A swap file can also be used as swap memory. When linking, ld might fail because of lack of memory.

Check the current swap:

swapon -s

Create swap file:

dd if=/dev/zero of=/path/to/extraswap bs=1M count=4096
mkswap /path/to/extraswap
swapon /path/to/extraswap
swapoff /path/to/extraswap

This will not persist after reboot. To automatically enable it at boot, add to /etc/fstab:

/path/to/extraswap none swap sw 0 0

8.5.2 Back Up & Syncing

The rsync command is used to sync from a source to a destination. It does not perform two-way transfer. It considers a file changed if either of these differs (see the example after the list):

  • size change
  • last-modified time
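
A typical invocation (paths are illustrative; -a preserves attributes, -v is verbose, --delete removes files that no longer exist in the source):

rsync -av --delete ~/data/ user@server:/backup/data/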

8.5.3 MIME

Check the MIME type of a file:

file --mime /path/to/file

On Debian, the mapping from file suffix to MIME type is in /etc/mime.types.

Set the default application for xdg-open:

mkdir ~/.local/share/applications
xdg-mime default firefox.desktop application/pdf

~/.local/share/applications/mimeapps.list

[Default Applications]
application/pdf=firefox-esr.desktop

/usr/share/applications/*.desktop are the files defined for each application.

On Debian, you can also do this:

update-alternatives --config x-terminal-emulator
update-alternatives --config x-www-browser

8.6 LVM

8.7 Monitor the system information

lvs
vgs
pvs
df -h
vgdisplay
lvdisplay /dev/debian-vg/home

8.8 Extending a logical volume

lvextend -L10G /dev/debian-vg/tmp # to 10G
lvextend -L+1G /dev/debian-vg/tmp # + 1G
resize2fs /dev/debian-vg/tmp

8.9 Reduce a logical volume

The home LV is 890G. Shrink the filesystem first, then reduce the LV:

umount -v /home
# check
e2fsck -ff /dev/debian-vg/home
resize2fs /dev/debian-vg/home 400G
lvreduce -L -490G /dev/debian-vg/home
lvdisplay /dev/debian-vg/home
resize2fs /dev/debian-vg/home
mount /dev/debian-vg/home /home

9 Arch Linux

9.1 Installation

9.1.1 Verify UEFI

Nowadays (starting from 2017) Arch only supports 64-bit, and seems to prefer UEFI. Fine.

First, verify that the boot mode is UEFI by checking that the following folder exists:

ls /sys/firmware/efi/efivars

9.1.2 System clock

timedatectl set-ntp true

9.1.3 Partition

parted /dev/sda mklabel gpt
parted /dev/sda mkpart ESP fat32 1MiB 513MiB
parted /dev/sda set 1 boot on
parted /dev/sda mkpart primary linux-swap 513MiB 5GiB
parted /dev/sda mkpart primary ext4 5GiB 100%

This creates the EFI System Partition (ESP), a swap partition, and a root:

  • sda1: /boot, the ESP
  • sda2: swap
  • sda3: /

Format:

mkfs.fat -F32 /dev/sda1
mkfs.ext4 /dev/sda3
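
The swap partition is not initialized in these notes; presumably something like the following is needed before genfstab so that it ends up in fstab (a hedged sketch):

mkswap /dev/sda2
swapon /dev/sda2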

Mount

mount /dev/sda3 /mnt
mkdir /mnt/boot
mount /dev/sda1 /mnt/boot

9.1.4 Select mirror

Look into /etc/pacman.d/mirrorlist and modify it if necessary. The order matters. The file will be copied to the new system.

9.1.5 Install base system

pacstrap /mnt base

9.1.6 chroot

genfstab -U /mnt >> /mnt/etc/fstab
arch-chroot /mnt

9.1.7 Configure

Now we are in the new system.

ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime
hwclock --systohc

Uncomment en_US.UTF-8 UTF-8 inside /etc/locale.gen and run

locale-gen

Set LANG in /etc/locale.conf

LANG=en_US.UTF-8

Set hostname in /etc/hostname

myhostname

Set root password

passwd

Install grub

pacman -S grub efibootmgr
grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=myarch
grub-mkconfig -o /boot/grub/grub.cfg

Before reboot, it is good to make sure the network will work, by installing some networking packages:

  • dialog
  • wpa_supplicant
  • iw

Now reboot

9.1.8 Config

Install the packages, and config the system using my scripts:

  • setup-quicklisp
  • setup-git

9.1.9 Dual boot with Windows

The only difference is that you do not need to create the EFI boot partition, but use the existing one. Just mount it at /boot. The rest is the same.

9.2 Pacman

Options:

  • S: sync, a.k.a. install
  • Q: query

Parameters:

  • s: search
  • y: fetch the new package list. Usually used with u
  • u: update all packages
  • i: more information
  • l: location of files

Typical usage:

  • Syu: update the whole system
  • S: install a package
  • R: remove a package
  • Rs: remove a package and its unused dependencies
  • Ss: search for a package
  • Qi: show the description of a package
  • --noconfirm: use in scripts
  • --needed: do not reinstall packages that are already installed

Pacman stores all previously downloaded packages. So when you find your /var/cache/pacman getting big, consider cleaning it up using:

paccache -rk 1

9.3 AUR

You have to search through its web interface. Find the git clone URL and clone it; the clone remains pullable for updates.

Go into the folder and

makepkg -si

-s alone will build it; with i it is installed after the build. Dependencies are installed automatically if they can be found by pacman. If a dependency is itself from the AUR, you have to install it manually.

The md5sums line can be skipped for some packages: just replace the md5sum value inside the quotes with 'SKIP'.

10 CentOS

On installing a new instance of CentOS, issue the following commands:

# check the sshd status (it should be openssh's sshd)
service sshd status
# add user, -m means create home folder
useradd -m myname
# oh, wait, I forget to add myself to wheel
# -a means append, if no -a, the -G will accept a comma separated list, overwrite the previous setting
usermod -aG wheel myname

11 Debian

11.1 Package

  • /etc/apt/sources.list
  • /var/cache/apt/archives/

netselect-apt to select the fastest source!

dist-upgrade

cp /etc/apt/sources.list{,.bak}
sed -i -e 's/ \(stable\|wheezy\)/ testing/ig' /etc/apt/sources.list
apt-get update
apt-get --download-only dist-upgrade
# Dangerous
apt-get dist-upgrade
  • dpkg-reconfigure: reconfigure an installed package
  • debconf-show: show the current configuration of a package

Another part of a sources.list line is the component list. The default is main; if you want 3rd-party contributed packages, add contrib after main. If you further want non-free packages, also add non-free. (See the example line below.)
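
For example, a line like:

deb http://deb.debian.org/debian testing main contrib non-free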

11.2 Configuration

11.2.1 update-alternatives

Options:

  • --config: show options and select configuration interactively
  • --display: show the options

Some examples:

  • update-alternatives --config desktop-background

12 Docker

To remove the requirement of sudo:

sudo groupadd docker
sudo gpasswd -a ${USER} docker
sudo service docker restart
newgrp docker

You may find you have to type C-p twice for it to take effect. That is because C-p C-q is the default key binding for detaching from a container; it shadows C-p, so I have to type it twice. To change it, in ~/.docker/config.json, add:

{"detachKeys": "ctrl-],ctrl-["}

Restart docker daemon to take effect. This can also be set by --detach-keys option.

Network config:

  • docker network ls
  • docker network inspect <network-name>

12.1 Images

Docker images are the templates from which containers are created. docker images lists the images available locally.

You can build a docker image by writing a Dockerfile. The first line is typically a FROM command to specify a base image. Other commands are as follows (a small example Dockerfile follows the list):

  • RUN: this command is the most basic one. Since it expects to be non-interactive, when running a command such as installing a package, supply the -y or equivalent argument.
  • ENV key=value
  • ADD: ADD <src> .. <dst> The difference from copy:
    • ADD allows src to be url
    • ADD will decompress an archive
  • COPY: COPY <src> .. <dst> all srcs on the local machine will be copied to dst in the image. The src can use wildcards. The src cannot be out of the current build directory, e.g. .. is not valid.
  • USER: USER daemon The USER instruction sets the user name or UID to use when running the image and for any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile.
  • WORKDIR: The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile
    • if it does not exist, it will be created
    • it can be used multiple times, if it is relative, it is relative to the previous WORKDIR
  • ENTRYPOINT ["executable", "param1", "param2"]: configure the container to be run as an executable.
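
A minimal sketch tying these instructions together (base image, package, and paths are illustrative):

FROM ubuntu:18.04
# -y because RUN is non-interactive
RUN apt-get update && apt-get install -y git
ENV LANG=en_US.UTF-8
COPY . /app
WORKDIR /app
ENTRYPOINT ["git", "--version"]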

In the folder containing Dockerfile, run to build the image:

docker build -t my-image .

docker-compose is installed separately from docker. It must be run inside the folder containing docker-compose.yml.

Commands

  • docker-compose up: bring the services up. It will not exit; use C-c to exit, which sends docker-compose down.
    • The second time you up the compose, it will not recreate everything, but update the current containers. If they are all up to date, nothing happens.
  • docker-compose up -d: bring the services up and detach. You need to shut them down manually.
  • docker-compose down: shutdown the services

A sample compose file:

version: '2'
services:
  srcml-server-container:
    image: "lihebi/srcml-server"
  helium:
    image: "lihebi/arch-helium"
    tty: true
    volumes:
      - data:/data
  benchmark-downloader:
    # this is used to download benchmarks to the shared volume
    image: "lihebi/benchmark-downloader"
    tty: true
    volumes:
      - data:/data
volumes:
  # create a volume with default
  data:

A service is a container. Setting tty to true prevents it from stopping; that is the same effect as passing -t to docker run. The containers can be seen with docker ps, with names like compose_XXX_1. Changes to a container are not preserved after compose down: the containers are deleted, and the next up creates new ones.

Under any volume, if the external option is set to true, docker compose will look for it outside the compose file, and signal an error if it does not exist.

Once the compose is up, docker creates a bridge network called compose_default. All services (containers) are attached to it.

You may want to publish the image so that others can use it. DockerHub is the host for it.

When pushing and pulling, what exactly happens?

docker tag local-image lihebi/my-image
docker push lihebi/my-image
docker login                        # log in so that you can push
docker push lihebi/my-container     # push to Docker Hub
docker pull lihebi/my-container     # pull from the internet

12.2 Instance

To create an instance of an image and run it, use the docker run command. Specifically,

  • docker run [option] <image> /bin/bash
    • -i: interactive
    • -d: detach (opposite of -i)
    • -t: assign a tty. Even when using -d, you need this.
    • --rm: automatically remove the container when it exits
    • -p <port>: expose port <port> of the container. The host port will be randomly assigned; docker ps shows the port binding. If the port is not set when running a container, you have to commit it and run it again to assign a port or another port.
    • -v /volume: create a mount at /volume
    • -v /local/dir:/mnt: mount a local dir to /mnt in the container. The default is read-write mode; if you want read only, do this: -v /local/dir:/mnt:ro. The local dir must be an ABSOLUTE path.

To just create an instance without running it, use docker create.

To run some command on an already run container, use the docker exec command with the <ID> of the container:

  • docker exec <ID> echo "hello"
    • ID can be the UUID or container name
    • you can use -it as well, e.g. docker exec -it <ID> /bin/bash

When using docker exec, I cannot connect to the emacs server through emacsclient -t; the error message is that the terminal is not found. I cannot open tmux either. The problem does not appear when using docker run. The issue is that the docker exec tty is not a real tty. The solution is, when starting an exec command, to use script to run bash:

docker exec -it my-container script -q -c "/bin/bash" /dev/null
docker exec -it my-container env TERM=xterm script -q -c "/bin/bash" /dev/null

The TERM is not strictly necessary here, because in my case docker always sets it to xterm. I actually change it to screen-256color in my bashrc to get the correct colors.

To stop a container, use docker stop command to do it gracefully. It will send SIGTERM to the app, then wait for it to stop. If you don't want to stop it gracefully, just force kill using docker kill. The default wait time is 10 seconds. You can change this to, for example, 1 second:

docker stop -t 1 <container-ID>

The reason for a container to resist stopping may be that it ignores the SIGTERM request. Python does this, so for a python program you should handle this signal yourself:

  import sys
  import signal

  def handler(signum, frame):
      sys.exit(1)

  def main():
      signal.signal(signal.SIGTERM, handler)
      # your app

To stop all containers:

docker stop $(docker ps -a -q)

To start a stopped container, use docker start <ID>. It will be detached by default.

You can remove a stopped container by docker rm command. To remove all containers (will not remove non-stopped ones, but give errors):

docker rm $(docker ps -a -q)

When you make any changes to the container, you can view the difference made from the base image via docker diff <ID>. When desired, you can create a new image based on the current running instance, via docker commit:

docker commit <ID> my-new-image

You can assign a name to the container so that you can better remember and reference it.

12.3 Volume

You can create a volume by itself, using docker volume create hello, or create together with a container.

You have to mount the volume at the time you create the container. You cannot remount anything to it without committing it to an image and creating a container again. Use the -v option to declare the volume when creating the container:

docker run -v /mnt <image>
docker run -v my-named-vol:/mnt <image>
docker run -v /absolute/path/to/host/local/path:/mnt/in/container <image>

If only the inner path is provided, the volume will still be created, but as a long-named directory under /var/lib/docker/volumes.

The volumes will never be automatically deleted, even if the container is deleted.

To manage a volume:

  • docker volume inspect <volume-full-name>
  • docker volume ls
  • docker volume prune: # remove all unused volumes

13 Unix Programming

POSIX defines the operating system interface. The standard contains these volumes:

  • Base Definition: convention, regular expression, headers
  • System Interfaces: system calls
  • Shell & Utilities: shell command language and shell utilities
  • Rationale

I found most of them are not that interesting, except Base Definitions section 9 on regular expressions. This definition is used by many shell utilities such as awk.

13.1 Low-level IO

13.1.1 open

int open(const char *filename, int flags[, mode_t mode])

Create and return a file descriptor.

13.1.2 close

int close(int filedes)
  • file descriptor is deallocated
  • if all file descriptors associated with a pipe are closed, any unread data is discarded.

Return

  • 0 on success, -1 on failure

13.1.3 read

ssize_t read(int filedes, void *buffer, size_t size)
  • read up to size bytes, store result in buffer.

Return

  • number of bytes actually read.
  • return 0 means EOF

13.1.4 write

ssize_t write(int filedes, const void *buffer, size_t size)
  • write up to size bytes from buffer to the file descriptor.

Return

  • number of bytes actually written
  • -1 on failure

13.1.5 fdopen

FILE *fdopen(int filedes, const char *opentype)

from file descriptor, get the stream

13.1.6 fileno

int fileno(FILE *stream)

from stream to file descriptor

13.1.7 fd_set

This is a bit array.

  • FD_ZERO(&fdset): initialise fdset to empty
  • FD_CLR(fd, &fdset): remove fd from the set
  • FD_SET(fd, &fdset): add fd to the set
  • FD_ISSET(fd, &fdset): return non-0 if fd is in the set

13.1.8 select - synchronous I/O multiplexing

int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *errorfds, struct timeval *timeout)

Block until at least one fd is true for specific condition, unless timeout.

Params

  • nfds: the range of file descriptors to be tested. Should be the largest one in the sets + 1. But just pass FD_SETSIZE.
  • readfds: watch for read. can be NULL.
  • writefds: watch for write. can be NULL.
  • errorfds: watch for error. can be NULL.
  • timeout:
    • NULL: no timeout, block forever
    • 0: return immediately. Used for test file descriptors

Return:

  • if timeout, return 0
  • the sets are modified. Those in sets are those ready
  • return the number of ready file descriptors in all sets
int fd;
// init fd

fd_set set;
FD_ZERO(&set);
FD_SET(fd, &set);

struct timeval timeout;
timeout.tv_sec = 1;
timeout.tv_usec = 0;

select(FD_SETSIZE, &set, NULL, NULL, &timeout);

13.1.9 sync

void sync(void) // sync all dirty files
int fsync(int filedes) // sync only that file

13.1.10 dup

You can create a new descriptor to refer to the same file. They

  • share file position
  • share status flag
  • separate descriptor flags
int dup(int old)
// same as
fcntl(old, F_DUPFD, 0)

Copy old to the first available descriptor number.

int dup2(int old, int new)
// same as
close(new)
fcntl(old, F_DUPFD, new)

If old is invalid, it does nothing (does not close new)!

13.2 Date and Time

  • calendar time: absolute time, e.g. 2017/6/29
  • interval: between two calendar times
  • elapsed time: length of interval
  • amount of time: sum of elapsed times
  • period: elapsed time between two events
  • CPU time: like calendar time, but relative to process, i.e. when the process run on CPU
  • Processor time: amount of time a CPU is in use.

13.2.1 struct timeval

  • time_t tv_sec: seconds
  • long int tv_usec: microseconds, must be less than 1 million

13.2.2 struct timespec

  • time_t tv_sec
  • long int tv_nsec: nanoseconds. Must be less than 1 billion

13.2.3 difftime

double difftime (time_t time1, time_t time0)

13.2.4 time_t

On GNU it is long int. It should be the seconds elapsed since 00:00:00 Jan 1 1970, Coordinated Universal Time.

Get the current calendar time:

time_t time(time_t *result)

13.2.5 alarm

13.2.5.1 struct itimerval
  • struct timeval it_interval: 0 to send the alarm once, non-zero to send it every interval
  • struct timeval it_value: time left until the alarm. If 0, the alarm is disabled
13.2.5.2 setitimer
int setitimer(int which, const struct itimerval *new, struct itimerval *old)
  • which: ITIMER_REAL, ITIMER_VIRTUAL, ITIMER_PROF
  • new: set to new
  • old: if not NULL, fill with old value
13.2.5.3 getitimer(int which, struct itimerval *old)

get the timer

13.2.5.4 alarm
unsigned int alarm(unsigned int seconds)

To cancel existing alarm, use alarm(0). Return:

  • 0: no previous alarm
  • non-0: the remaining value of previous alarm
  unsigned int
  alarm (unsigned int seconds)
  {
    struct itimerval old, new;
    new.it_interval.tv_usec = 0;
    new.it_interval.tv_sec = 0;
    new.it_value.tv_usec = 0;
    new.it_value.tv_sec = (long int) seconds;
    if (setitimer (ITIMER_REAL, &new, &old) < 0)
      return 0;
    else
      return old.it_value.tv_sec;
  }

13.3 Process

Three steps

  • create child process
  • run an executable
  • coordinate the results with parent

13.3.1 system

int system(const char *command)
  • use sh to execute, and search in $PATH
  • return -1 on error
  • return the status code for the child

13.3.2 getpid

  • pid_t getpid(void): return the PID of the current process
  • pid_t getppid(void): the PID of the parent process

13.3.3 fork

pid_t fork(void)

return

  • 0 in child
  • child's PID in parent
  • -1 on error

13.3.4 pipe

int pipe(int filedes[2])
  • Create a pipe; filedes[0] is for reading, filedes[1] for writing

Return:

  • 0 on success, -1 on failure

13.3.5 exec

int execv (const char *filename, char *const argv[])
int execl (const char *filename, const char *arg0, ...)
int execve (const char *filename, char *const argv[], char *const env[])
int execle (const char *filename, const char *arg0, ..., char *const env[])
int execvp (const char *filename, char *const argv[])
int execlp (const char *filename, const char *arg0, ...)
  • execv: the last element of the argv array must be NULL. All strings are null-terminated.
  • execl: the arguments are passed separately; the last one must be NULL
  • execve: provide env
  • execle
  • execvp: find filename in $PATH
  • execlp

13.3.6 wait

This should be used in parent process.

pid_t waitpid(pid_t pid, int *status_ptr, int options)
  • pid:
    • positive: the pid for a child process
    • -1 (WAIT_ANY): any child process
    • 0 (WAIT_MYPGRP): any child process that has the same process group ID as the parent
    • -pgid (any other negative value): any child process whose process group ID is pgid
  • options: OR of the following
    • WNOHANG: no hang: the parent process should not wait
    • WUNTRACED: report stopped process as well as the terminated ones
  • return: PID of the child process that is reporting
pid_t wait(int *status_ptr)

wait(&status) is the same as waitpid(-1, &status, 0)

13.3.6.1 Status

The signature is int NAME(int status).

  • WIFEXITED: if exited: return non-0 if child terminated normally with exit
  • WEXITSTATUS: exit status: if above true, this is the low-order 8 bits of the exit code
  • WIFSIGNALED: if signaled: non-0 if the process terminated because it receives a signal that was not handled
  • WTERMSIG: term sig: if above true, return that signal number
  • WCOREDUMP: core dump: non-0 if the child process terminated and produce a core dump
  • WIFSTOPPED: if stopped: if the child process stopped
  • WSTOPSIG: stop sig: if above true, return the signal number that cause the child to stop
  1. TODO What is the difference between terminate and stop?

13.4 Unix Signal Handling

13.4.1 Ordinary signal handling

The handling of ordinary signals are easy:

  #include <signal.h>
  #include <stdio.h>    /* printf */
  #include <unistd.h>   /* pause */

  static void my_handler(int signum) {
    printf("received signal\n");
  }

  int main() {
    struct sigaction sa;
    sa.sa_handler = my_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_SIGINFO;
    // this segv does not work
    sigaction(SIGSEGV, &sa, NULL);
    // this sigint will work
    sigaction(SIGINT, &sa, NULL);
    // wait for a signal to arrive (e.g. C-c for SIGINT)
    pause();
  }

13.4.2 SIGSEGV handling

13.4.2.1 Motivation

The reason that I want to handle the SIGSEGV is that I want to get the coverage from gcov. Gcov will not report any coverage information if the program terminates by receiving some signals. Fortunately we can explicitly ask gcov to dump it by calling __gcov_flush() inside the handler. I confirmed this can work for ordinary signal handling.

  // declaring the prototype of gcov
  void __gcov_flush(void);

  void myhandler() {
    __gcov_flush();
  }

After experiment, I found:

  1. address sanitizer cannot work with this handling. AddressSanitizer will hijack the signal, and maybe output another signal.
  2. Even if I turned off address sanitizer, and the handler function is executed, the coverage information still cannot be obtained. This is possibly because the handler runs on a different stack.
13.4.2.2 a new stack

However, handling the SIGSEGV is challenging. The above will not work 1.

By default, when a signal is delivered, its handler is called on the same stack where the program was running. But if the signal is due to stack overflow, then attempting to execute the handler will cause a second segfault. Linux is smart enough not to send this segfault back to the same signal handler, which would otherwise cause an infinite cascade of segfaults. Instead, in effect, the signal handler simply does not run.

Instead, we need to make a new stack and install the handler on that stack.

  #include <signal.h>
  #include <stdio.h>   /* printf */
  #include <stdlib.h>  /* exit, valloc */
  #include <string.h>  /* strcpy */
  #include <strings.h> /* bzero */
  void sigsegv_handler(int signum, siginfo_t *info, void *data) {
    printf("Received signal finally\n");
    exit(1);
  }

  #define SEGV_STACK_SIZE BUFSIZ

  int main() {
    struct sigaction action;
    bzero(&action, sizeof(action));
    action.sa_flags = SA_SIGINFO|SA_STACK;
    action.sa_sigaction = &sigsegv_handler;
    sigaction(SIGSEGV, &action, NULL);


    stack_t segv_stack;
    segv_stack.ss_sp = valloc(SEGV_STACK_SIZE);
    segv_stack.ss_flags = 0;
    segv_stack.ss_size = SEGV_STACK_SIZE;
    sigaltstack(&segv_stack, NULL);

    char buf[10];
    char *src = "super long string";
    strcpy(buf, src);
  }
13.4.2.3 libsigsegv

I also tried another library, libsigsegv 2. I followed two of their methods, but I could not make either work. The code is listed here as a reference:

  #include <signal.h>
  #include <sigsegv.h>
  int handler (void *fault_address, int serious) {
    printf("Handler triggered.\n");
    return 0;
  }
  void stackoverflow_handler (int emergency, stackoverflow_context_t scp) {
    printf("Handler received\n");
  }
  int main() {
    char* mystack; // don't know how to use
    sigsegv_install_handler (&handler);
    stackoverflow_install_handler (&stackoverflow_handler,
                                   mystack, SIGSTKSZ);
  }

13.5 pThread

#include <pthread.h>
pthread_create (thread, attr, start_routine, arg)
pthread_exit (status)
pthread_join (threadid, status)
pthread_detach (threadid)

13.5.1 Create threads

If main() finishes before the threads it has created and exits with pthread_exit(), the other threads will continue to execute. Otherwise, they will be terminated automatically when main() finishes.

  #include <iostream>
  #include <cstdlib>
  #include <pthread.h>
  using namespace std;

  #define NUM_THREADS     5

  struct thread_data{
    int  thread_id;
    char *message;
  };

  // thread function: print the message carried in thread_data
  void *PrintHello(void *threadarg) {
    struct thread_data *my_data = (struct thread_data *) threadarg;
    cout << "Thread " << my_data->thread_id << ": " << my_data->message << endl;
    pthread_exit(NULL);
  }

  int main() {
    pthread_t threads[NUM_THREADS];
    struct thread_data td[NUM_THREADS];

    int rc;
    int i;

    for( i=0; i < NUM_THREADS; i++ ){
      td[i].thread_id = i;
      td[i].message = "This is message";
      rc = pthread_create(&threads[i], NULL, PrintHello, (void *)&td[i]);
      if (rc){
        cout << "Error:unable to create thread," << rc << endl;
        exit(-1);
      }
    }
    pthread_exit(NULL);
  }

13.5.2 Join and Detach

  int main () {
    int rc;
    int i;
        
    pthread_t threads[NUM_THREADS];
    pthread_attr_t attr;
    void *status;

    // Initialize and set thread joinable
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);

    for( i=0; i < NUM_THREADS; i++ ){
      cout << "main() : creating thread, " << i << endl;
      rc = pthread_create(&threads[i], &attr, wait, (void *)i );
                
      if (rc){
        cout << "Error:unable to create thread," << rc << endl;
        exit(-1);
      }
    }

    // free attribute and wait for the other threads
    pthread_attr_destroy(&attr);
        
    for( i=0; i < NUM_THREADS; i++ ){
      rc = pthread_join(threads[i], &status);
                
      if (rc){
        cout << "Error:unable to join," << rc << endl;
        exit(-1);
      }
                
      cout << "Main: completed thread id :" << i ;
      cout << "  exiting with status :" << status << endl;
    }

    cout << "Main: program exiting." << endl;
    pthread_exit(NULL);
  }

13.6 Other

sleep

#include <unistd.h>
unsigned int sleep(unsigned int seconds); // seconds
int usleep(useconds_t useconds); // microseconds
int nanosleep(const struct timespec *rqtp, struct timespec *rmtp);

14 Shell Utilities

  • sort -k 4 -n
  • tee
  for name in data/github-bench/*; do 
      echo "===== $name"\
          | tee -a log.txt; { time helium --create-cache $name; } 2>&1\
          | tee -a log.txt; done

Another example: redirect output of time

{ time sleep 1 ; } 2> time.txt
{ time sleep 1 ; } 2>&1 | tee -a time.txt
  • xz: a general-purpose data compression tool
  • cpio: copy files between archives and directories
  • shuf: random number generation
shuf -i 1-100 -n 1
  • bc calculator
  • grep: -i (case insensitive), -n (show line number), -v (inverse), -H (show file name)
  • xargs: consume the standard output, and integrate result with new command:
find /etc -name '*.conf' | xargs ls -l
# the same as:
ls -l $(find ...)
  • time <command>: # the total user and system time consumed by the shell and its children
  • column: formats its input into multiple columns. mount | column -t
  • dd: dd if=xxx.iso of=/dev/sdb bs=4m; sync
  • convert: convert xxx.jpg -resize 800 xxx.out.jpg # 800x<height>
  • nl: nl <filename> adds line numbers; output goes to stdout
  • ln: ln -s <target> <linkname>. Mnemonic: the new thing always comes last.
  • ls: sort order: -r reverse; -S file size; -X extension; -t time

14.1 Patch System

Create a patch (notice the order: old then new):

diff -u hello.c hello_new.c > hello.patch
diff -Naur /usr/src/openvpn-2.3.2 /usr/src/openvpn-2.3.4 > openvpn.patch

To apply a patch

patch -p3 < /path/to/openvpn.patch
patch -p1 -d /path/to/old/dir < /path/to/openvpn.patch

The number after -p indicates how many leading path components are stripped when locating the old file.

To reverse (un-apply) a patch:

patch -p1 -R <patch

This works as if you swapped the old and new file when creating the patch.

14.2 tr: translate characters

tr <string1> <string2>

The characters in string1 are translated into the characters in string2: the first character in string1 is translated into the first character in string2, and so on. If string1 is longer than string2, the last character of string2 is duplicated until string1 is exhausted. (See the example after the list below.)

Characters in the strings represent themselves, unless they are one of:

  • \\octal: A backslash followed by 1, 2 or 3 octal digits
  • \n, \t
  • a-z: inclusive, ascending
  • [:class:]: space, upper, lower, alnum
    • if [:upper:] and [:lower:] appears in the same relative position, they will correlate.
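
For example, upper-casing with a range or with the corresponding classes:

echo hello | tr 'a-z' 'A-Z'                  # HELLO
echo hello | tr '[:lower:]' '[:upper:]'      # HELLO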

14.3 uniq: report or filter out repeated lines in a file

Repeated lines in the input will not be detected if they are not adjacent, so it may be necessary to sort the files first.

  • uniq -c: precede each output line with the count of the number of times the line occurred in the input, followed by a single space. You can then combine this with sort -n (see the example after the list).
  • -u: Only output lines that are not repeated in the input.
  • -i: Case insensitive comparison of lines.
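
Putting that together, a common frequency-count pipeline (the file name is illustrative):

sort words.txt | uniq -c | sort -n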

14.4 Find

find . -type f -name '*.flac' -exec mv {} ../out/ \;

Copy file based on find, and take care of quotes and spaces:

find CloudMusic -type f -name "*mp3" -exec cp "{}" all_music \;
  • find
find ~/data/fast/pick-master/ -name '*.[ch]'

15 Trouble Shooting

15.1 Cannot su root

When su cannot change to root, run

chmod u+s /bin/su

15.2 in docker, cannot open chromium

failed to move to new namespace: PID namespaces supported, Network namespace supported, but failed: errno = Operation not permitted.

Solution

chromium --no-sandbox

15.3 Encoding

When converting MS Windows format to Unix format, you can use Emacs: call set-buffer-file-coding-system and set it to unix. Or you can use dos2unix, perhaps via:

find . -name '*.java' | xargs dos2unix

15.4 Cannot open shared library

On CentOS, the default library search path does not contain /usr/local/lib. The consequence is that -lpugi and -lctags are not found because they are installed in that directory. Set LD_LIBRARY_PATH, or edit /etc/ld.so.conf.d/local.conf and add the path. After that, run ldconfig as root to update the cache.
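
That is, roughly:

echo "/usr/local/lib" > /etc/ld.so.conf.d/local.conf
ldconfig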

15.5 auto expansion error for latex font

When compiling LaTeX using the acmart template, an auto expansion error is reported.

Solution:

mktexlsr # texhash
updmap-sys

Reference: https://github.com/borisveytsman/acmart/issues/95

15.6 time not up-to-date

Although I set the right timezone (checked with timedatectl), the clock is still incorrect. To fix it, install the ntp package and run:

sudo ntpd -qg

15.7 backlight on TP25

For regular laptops, using debian

cat /sys/class/backlight/intel_backlight/max_brightness
cat /sys/class/backlight/intel_backlight/brightness

echo 400 > /sys/class/backlight/intel_backlight/brightness

But on Archlinux, on TP25, The xorg-xbacklight is not working. The drop-in replacement acpilight (aur) does.

To set up backlight adjustment for users in the video group, place a file /etc/udev/rules.d/90-backlight.rules:

SUBSYSTEM=="backlight", ACTION=="add", \
  RUN+="/bin/chgrp video %S%p/brightness", \
  RUN+="/bin/chmod g+w %S%p/brightness"

The command is still xbacklight.

15.8 xinit won't start

On Debian, when I dist-upgraded Debian 8 Jessie to 9 Stretch, startx stopped working. I tried installing Debian 9 from its own image, and got the same result. The error messages say:

vesa: cannot read int vect
Screen(s) found, but none have a usable configuration
xf86EnableIOPorts: failed to set IOPL for I/O

The trick is you need:

chmod u+s /usr/bin/xinit

Footnotes: