Linux
Table of Contents
- 1. Tmp
- 2. V2Ray
- 3. Installation
- 4. NixOS
- 5. GuixSD
- 6. Other Package Managers
- 7. Git
- 8. Network
- 9. App
- 10. Window System
- 11. System Management
- 12. Arch Linux
- 13. CentOS
- 14. Debian
- 15. Docker
- 16. Unix Programming
- 17. Shell Utilities
- 18. Troubleshooting
1 Tmp
1.1 git checkout from multiple remotes
From this stack-overflow question:
git checkout -b test other-remote/test
1.2 chinese input method
In Ubuntu, install ibus-pinyin, run ibus-setup, then ibus restart, and log out and back in. In Settings, adding "Chinese" as an input source is not enough; you need to add "Chinese - Pinyin", see this post.
This does not work in Emacs out of the box. According to this post, you need to start Emacs with:
LC_CTYPE="zh_CN.UTF-8" emacs
1.3 SwitchyOmega
- default: socks5, localhost, 1088
- http: http, localhost, 8888
- https: http, localhost, 8888
1.4 RedmiBook wifi Atheros QCA6174 driver
Actually this post solved my problem. You need:
- https://github.com/kvalo/ath10k-firmware: put it into /lib/firmware/ath10k/QCA6174 and rename the necessary files accordingly. hw2.1 is not important.
- Comment 2 in this post attached a Windows binary. THIS IS IMPORTANT. Put it at both hw3.0/board.bin and hw3.0/board-2.bin.
I wanted to reply to that post, but a newly created account has to wait 5 hours before posting anything. This is stupid; it probably won't come back to my mind by then.
1.5 sudo
sudo -i
drops you into a root shell, even if you don't know the root password!
sudo -E
preserves the current user's environment variables.
1.6 compilation and linking
LD_LIBRARY_PATH is searched when the program starts; LIBRARY_PATH is searched at link time.
Do NOT set LD_LIBRARY_PATH in your login shell profile. Either create a wrapper script:
LD_LIBRARY_PATH=/path/to/lib1:/path/to/lib2:/path/to/lib3
export LD_LIBRARY_PATH
exec /path/to/bin/myprog "$@"
Or use env
to modify it for a single invocation:
env LD_LIBRARY_PATH=/path/to/lib1:/path/to/lib2:/path/to/lib3 ./myprog
Never:
$ export LD_LIBRARY_PATH=/path/to/lib1:/path/to/lib2:/path/to/lib3
$ ./myprog
Reference: https://www.hpc.dtu.dk/?page_id=1180
See also: readelf (-l/-d), runpath and rpath. The only difference between rpath and runpath is the order they are searched in, specifically their relation to LD_LIBRARY_PATH: rpath is searched before LD_LIBRARY_PATH, while runpath is searched after it. rpath seems to be the typical choice. https://amir.rachum.com/blog/2016/09/17/shared-libraries/
1.7 Arduino
pacman -S arduino arduino-docs arduino-avr-core
gpasswd -a $USER uucp
gpasswd -a $USER lock
modprobe cdc-acm
1.8 Deep Learning Framework GPU setup
In Ubuntu 18.04:
You can install the latest driver, no problem.
Install cuda 9.0 (not 9.2; the python package looks for exactly the 9.0 version of the dynamic libraries, e.g. libcublas.so.9.0). You can install multiple versions of cuda. Set $LD_LIBRARY_PATH to /usr/local/cuda/lib64 if you have made the symlink; otherwise include the full 9.0 path.
For the current tensorflow (1.12, as of 11/6/2018), install:
pip3 install --user tensorflow-gpu
Test with:
tf.Session().list_devices()
Or actually tf.Session() itself will print the information, e.g. run:
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
1.9 Docker in China
From https://lug.ustc.edu.cn/wiki/mirrors/help/docker:
In /etc/docker/daemon.json
:
{ "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"] }
docker.io
is the ubuntu package of docker
2 V2Ray
- official document: https://www.v2ray.com/
- A good tutorial https://guide.v2fly.org/
- providers:
- linode speed test: https://www.linode.com/speed-test/
- digital ocean speed test: http://speedtest-sgp1.digitalocean.com/
Server:
Install docker (https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04)
apt update -y
apt install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
apt update -y
apt install -y docker-ce
systemctl status docker
pull the docker image
sudo docker pull v2ray/official
Create /etc/v2ray/config.json
with
mkdir /etc/v2ray
touch /etc/v2ray/config.json
the config is:
{
  "inbounds": [
    {
      "port": 8888,
      "protocol": "vmess",
      "settings": {
        "clients": [
          {"id": "307b01e8-7d71-4e1c-a410-98c016e776c6", "alterId": 64}
        ]
      }
    }
  ],
  "outbounds": [
    {
      "protocol": "freedom",
      "settings": {}
    }
  ]
}
You can generate uuid using uuidgen
307b01e8-7d71-4e1c-a410-98c016e776c6
The command to run the container:
docker run -d --name v2ray \
  -v /etc/v2ray:/etc/v2ray \
  -p 8888:8888 \
  v2ray/official \
  v2ray -config=/etc/v2ray/config.json
Here the /etc/v2ray is mounted, the port 8888 is mapped, and the command is
v2ray -config=/xxx
.
In one line for easy copying:
docker run -d --name v2ray -v /etc/v2ray:/etc/v2ray -p 8888:8888 v2ray/official v2ray -config=/etc/v2ray/config.json
Then operate the container:
docker container start v2ray
docker container stop v2ray
docker container restart v2ray
docker container logs v2ray
If you change the config, you need to recreate the container:
docker container stop v2ray
docker container rm v2ray
3 Installation
On windows, you need unetbootin
. On linux:
dd bs=4M if=/path/to/archlinux.iso of=/dev/sdx && sync
# restore
dd count=1 bs=512 if=/dev/zero of=/dev/sdx && sync
On Mac:
hdiutil convert -format UDRW -o ~/path/to/target.img ~/path/to/ubuntu.iso
diskutil list
# insert usb
diskutil list   # => /dev/disk1
diskutil unmountDisk /dev/diskN
sudo dd if=/path/to/downloaded.img of=/dev/rdiskN bs=1m
diskutil eject /dev/diskN
Create MacOS LiveUSB
sudo /Applications/Install\ OS\ X\ Mavericks.app/Contents/Resources/createinstallmedia \
  --volume /Volumes/Untitled \
  --applicationpath /Applications/InstallXXX.app \
  --nointeraction
3.1 Virtualization
Make sure the kernel modules kvm
and virtio
are loaded, and that CPU virtualization is enabled in BIOS.
Using qemu
, first create hard disk image file:
qemu-img create -f raw disk_image 30G
Then load the iso file and the hard disk image to install the
system. The -m 1024
is crucial, otherwise booting will fail. The -enable-kvm
is also crucial for speed.
qemu-system-x86_64 -cdrom iso_image -boot order=d -drive file=disk_image,format=raw -enable-kvm -m 1024
To escape from Qemu app, press Ctrl-Alt-G
.
Finally, run the system with
qemu-system-x86_64 -enable-kvm -m 1024 disk_image
Alternatively, you can use virt-manager
as a GUI front-end.
To have sound and better resolution:
qemu-system-x86_64 -enable-kvm -m 4096 -vga virtio -soundhw hda -cpu host -smp 8 ubuntu
TODO: try SPICE.
4 NixOS
4.1 nixos
man configuration.nix
shows all options offline.
build a system without switch to it, for testing
nixos-rebuild build
build and switch
nixos-rebuild switch
To rebuild nixos with a local nixpkgs tree:
nixos-rebuild switch -I nixpkgs=/path/to/nixpkgs
To run repl
nix repl
Inside REPL, you can query the value of each option.
/var/log
goes into ~/.nix-profile/var/log/
4.2 Install nix without root
from nix wiki tutorial, using the nix-user-chroot project, basically:
mkdir -m 0755 ~/.nix
nix-user-chroot ~/.nix bash -c 'curl https://nixos.org/nix/install | sh'
This will update $HOME/.bash_profile
, to drop in the nix environment:
nix-user-chroot ~/.nix bash -l
-l loads the $HOME/.bash_profile
, and all nix commands will be available,
/nix
is under your user.
4.3 package development
To build a nix script
nix-build xxx.nix
The script should return a derivation, i.e. the result of applying
stdenv.mkDerivation
function.
To access pkgs in .nix:
pkgs = import <nixpkgs> {};
If you are in a nixpkgs local checkout root directory, you can use that and specify a package to build:
nix-build -A rPackages.RcppArmadillo
You can also use your current default path, '<nixpkgs>'
(HEBI: How is it resolved?)
nix-build '<nixpkgs>' -A rPackages.RcppArmadillo
Compute hash of a file:
nix-hash --type sha256 --flat --base32 veewee-0.4.5.1.gem
Importantly, the default is MD5. --flat
hashes the contents of the file itself, which for --type sha256 matches the behavior of GNU sha256sum
, and --base32
prints a base-32 representation rather than hexadecimal; Nix expressions all use base32.
Or you can use nix-prefetch-url:
nix-prefetch-url <url>
4.4 package management
search
nix-env -qaP '.*emacs.*'
To install packages
nix-env -iA nixos.thunderbird
nix-env -f '<nixpkgs>' -iA emacs
sets the nixpkgs repo explicitly; the default is
~/.nix-defexpr
, which has a nixos
link to /nix/store/xxx-nixos-19.09
. During
system installation, it seems to be recommended to use -f
'<nixpkgs>'
. <nixpkgs>
seems to be the channel name.
uninstall a package
nix-env -e thunderbird
list installed
nix-env -q
list generation:
nix-env --list-generations
roll back
nix-env --rollback
4.5 TODO local package manifest
4.6 channel
list current channel:
sudo nix-channel --list
# >>> nixos https://nixos.org/channels/nixos-19.09
Add the unstable channel:
sudo nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
Note that I'll need to remove the old channel to use the new channel:
nix-channel --remove nixpkgs
However, when doing nixos-rebuild, it complains about a missing nixos
channel, which I
guess is hardcoded. So I'll need to add the channel under the name "nixos" instead of
"nixpkgs":
sudo nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixos
Obtain the latest content from the channels:
nix-channel --update
Do a full upgrade:
sudo nixos-rebuild switch --upgrade
This command will do channel update for you.
Note that the channel setting is per-user, so for system upgrade, you need to set the root's channel.
To upgrade all your local packages that are installed via nix-env
:
nix-env -u
4.7 FHS
There are cases where you have to run some pre-built binary, for example language packages installed via other package managers (such as julia's), or downloaded binaries. Two things need to be taken care of to make them work:
- the interpreter, typically /lib64/ld-linux-x86-64.so.2. You can invoke it manually via /nix/store/xxx-glibc-xxx/lib/ld-linux-x86-64.so.2 /path/to/executable.
- the libraries. Run ldd <binary> to find the missing libraries, and add the correct paths to LD_LIBRARY_PATH in the wrapper script.
When you put a wrapper script in the same directory as the original executable, you
want $(dirname "$0")/gksqt-real
.
An example: the Plots.jl julia library builds the GR plotting backend and uses
its binary; the path is ~/.julia/packages/GR/oiZD3/deps/gr/bin/gksqt
. I moved
it to gksqt-real
and created gksqt
in the same directory with the following
content:
env LD_LIBRARY_PATH=/nix/store/1hf6bdlckrmmyv1n4ncimy2g1sx4bx0c-qt-4.8.7/lib/:/nix/store/1mr01vwvg7922xp0sgd4gry54swrx19m-gcc-8.3.0-lib/lib/ \
  /nix/store/8d94mp12ca2gihchw4r7jfmpdww8f2ha-glibc-2.27/lib/ld-linux-x86-64.so.2 \
  $(dirname "$0")/gksqt-real
reference: stackexchange-522822
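A toy version of the same wrapper pattern, runnable anywhere; the gksqt/gksqt-real names follow the example above, and the LD_LIBRARY_PATH value is a placeholder, not a real store path:

```shell
tmp=$(mktemp -d)
# Stand-in for the renamed original binary.
cat > "$tmp/gksqt-real" <<'EOF'
#!/bin/sh
echo "real binary called with: $@"
EOF
# The wrapper: point the loader at the needed libraries, then exec the
# real binary sitting next to the wrapper itself.
cat > "$tmp/gksqt" <<'EOF'
#!/bin/sh
export LD_LIBRARY_PATH=/nix/store/example-qt/lib   # placeholder path
exec "$(dirname "$0")/gksqt-real" "$@"
EOF
chmod +x "$tmp/gksqt" "$tmp/gksqt-real"
"$tmp/gksqt" --flag   # prints: real binary called with: --flag
```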
5 GuixSD
Actually the installation process is very joyful, except that no wifi driver available.
Download the image and
xz -d guixsd-install-0.14.0.system.iso.xz
dd if=guixsd-install-0.14.0.x86_64-linux.iso of=/dev/sdX
sync
Boot the system. The network interface can be seen via ifconfig -a
or ip a
. You need to first bring the interface up:
ifconfig interface up
Then get the IP address via
dhclient -v interface
Then start the ssh daemon to continue the install remotely (remember to set a password):
herd start ssh-daemon
Partitioning the disk is the same as for other linux distributions. The following is the setup for GPT.
parted /dev/sda mklabel gpt
parted /dev/sda mkpart ESP fat32 1MiB 513MiB
parted /dev/sda set 1 boot on
parted /dev/sda mkpart primary linux-swap 513MiB 5GiB
parted /dev/sda mkpart primary ext4 5GiB 100%
Then, format the disks
mkfs.fat -F32 /dev/sda1
mkfs.ext4 -L my-root /dev/sda2
The label here is important, because it can be used in the config
file and in the mount command below. Note that the ESP partition is
mounted on /mnt/boot/efi
, instead of /mnt/boot
. There are actually
two suggested mount points for the ESP partition on the arch wiki, and
/mnt/boot/efi
should be preferred.
mount LABEL=my-root /mnt/
mkdir -p /mnt/boot/efi
mount /dev/sda1 /mnt/boot/efi
Then, start cow-store, making the /gnu/store copy-on-write
herd start cow-store /mnt
Move the example configuration file into the target system, so that the config file is still there after we reboot.
mkdir /mnt/etc
cp /etc/configuration/desktop.scm /mnt/etc/config.scm
zile /mnt/etc/config.scm
When editing the file, we need to modify:
- On legacy boot, make sure grub-bootloader points to /dev/sda. On UEFI, it should be grub-efi-bootloader and /mnt/boot/efi (the path to the mount point of the ESP partition). The official manual says it should be /boot/efi, but mine shows the error: "grub-install: error: failed to get canonical path of /boot/efi".
- Make sure file-system has the correct label and mount point.
- If you didn't use encryption, you need to remove the mapped-devices section, and probably add (title 'label) as indicated here.
Now install the system:
guix system init /mnt/etc/config.scm /mnt/
The default config installs a lot of things, including gnome, and takes an hour. I should definitely maintain a copy of my own config file.
Done. Reboot.
Whenever you want to update the system:
guix pull
sudo -E guix system reconfigure
guix package -u
guix package -m manifest-file
5.1 Qemu Image
Running GuixSD in Qemu is probably the easiest way. Download the Qemu image, uncompress it, and run:
qemu-system-x86_64 \
  -net user -net nic,model=virtio \
  -enable-kvm -m 256 /path/to/image
To bring the network up:
ifconfig eth0 up
dhclient -v eth0
The system is now online. The ping
command does not work (QEMU user-mode networking does not forward ICMP), and
that's fine.
guix pull
guix package -u
The qemu image is 1.2G. To expand it, first expand the image size:
qemu-img resize guixsd-vm-image-0.15.0.x86_64-linux +10G
Boot the image
qemu-system-x86_64 -net user -net nic,model=virtio -vga virtio -enable-kvm -m 2048 -cpu host -smp 8 guixsd-vm-image-0.15.0.x86_64-linux
The partition does not need to be unmounted.
fdisk /dev/sda
d
2
d
1   # note that this partition starts from 2048
n   # create a partition that also starts from 2048
a   # check the boot flag
w   # write
Then, reload the partition table:
partprobe
Then resize the filesystem (on the partition) via resize2fs:
resize2fs /dev/sda1
The image needs to connect internet.
dhclient eth0
You are online. The ping
command will not work; you can check the
network with the guix download
command instead.
Installing git
is not enough; it complains that certificates are needed. You
probably need nss
and nss-certs
. It also shows some environment
variables that are needed (how do I show this information again? Do they really
belong to nss
or git
?)
export GIT_SSL_CAINFO="/root/.guix-profile/etc/ssl/certs/ca-certificates.crt"
export GIT_EXEC_PATH="/root/.guix-profile/libexec/git-core"
5.2 Guile
When debugging guile files, use C-c C-s
to set the Scheme implementation to
guile
; that enables following definitions. Otherwise it will
just complain "No geiser REPL for this buffer", even after M-x
run-geiser
.
5.3 Guix
- guix package --search-paths: show search paths
- guix channel: ???
- A good reference: https://gitlab.com/pjotrp/guix-notes/
- chromium package channel: https://gitlab.com/mbakke/guix-chromium
- How to use this channel? My guix pull shows this error:
guix pull: error: failed to load '/home/hebi/.config/guix/channels.scm': system/base/compile.scm:144:21: In procedure compile-file: failed to create path for auto-compiled file "/home/hebi/.config/guix/channels.scm"
5.4 Developing Guix packages
To get the hash:
git clone https://...
cd xxx
guix hash -rx .
It seems unnecessary to check out the commit if you are packaging the latest one, but the commit hash is needed anyway.
git log | head -1   # show the latest commit
git checkout c6e10a
5.5 Bootloader
/gnu/store/9nqaksx40zh5d6cg5rim3f3spy56bfb9-raw-initrd/initrd.cpio.gz
5.6 Nvidia driver
To install the Nvidia driver, we need the kernel source tree. However, there is no such package in Guix, so we need to build the kernel source first.
guix build linux-libre --check --keep-failed
When --check
and --keep-failed
are used together, it builds the
package and keeps the tree in /tmp/guix-build-linux-libre-x.x.x.drv-0
.
After that, we can simply unpack the Nvidia driver:
sh NVIDIA-Linux-x86_64-xxx.xx.run -x
cd NVIDIA-Linux-x86_64-xxx.xx
Note that you should use the same version of GCC, and guix
environment
does not seem to override the gcc version. So, say you need
gcc@7:
guix package -i gcc@7
Run the build
sudo ./nvidia-installer --kernel-source-path /tmp/guix-build-linux-libre-x.x.x.drv-0
It should build, and the kernel modules end up in
/lib/modules/4.20.7-gnu/video/
. However, they won't be loaded
successfully, and the installer will complain:
Driver file installation is complete.
ERROR: Unable to load the 'nvidia-drm' kernel module.
That's OK. Guix has its own module load path, while the module path is hard-coded in the Linux source as a single path, so there is practically no way to load those modules automatically; just load them manually. This means the driver cannot be used as the X11 driver, but it is fine for Tensorflow.
modprobe ipmi_devintf
insmod /lib/modules/x.x.x-gnu/video/nvidia.ko
insmod /lib/modules/x.x.x-gnu/video/nvidia-modeset.ko
insmod /lib/modules/x.x.x-gnu/video/nvidia-drm.ko
insmod /lib/modules/x.x.x-gnu/video/nvidia-uvm.ko
Last but not least, you need to manually prevent kernel from loading
nouveau, i.e. in config.scm
, you should have:
(operating-system
  (kernel-arguments '("intel_iommu=on" "iommu=pt" "modprobe.blacklist=nouveau"))
  ..)
5.7 Guix on Foreign distribution
5.7.1 Troubleshooting
On Ubuntu, every time I run guix package
, I got the warning:
guile: warning: failed to install locale
hint: Consider installing the `glibc-utf8-locales' or `glibc-locales' package and
defining `GUIX_LOCPATH', along these lines:
     guix package -i glibc-utf8-locales
     export GUIX_LOCPATH="$HOME/.guix-profile/lib/locale"
See the "Application Setup" section in the manual, for more info.
The problem is that, on Ubuntu, the guix-daemon runs as root. Thus,
the package and the path should be set in root's
profile. Specifically, in /etc/systemd/system/guix-daemon.service
:
Environment=GUIX_LOCPATH=/var/guix/profiles/per-user/root/guix-profile/lib/locale
The path is OK, but root does not have the package installed. Thus, the following command fixes it. There is no need to update the guix of the root.
sudo guix package -i glibc-utf8-locales
Reference: https://lists.gnu.org/archive/html/help-guix/2019-01/msg00211.html
6 Other Package Managers
- flatpak: https://flathub.org/home; this respects the system proxy
- snap: https://snapcraft.io/; this does NOT respect the system proxy
- AppImage: https://appimage.org/; runs directly
TODO: create a gnome desktop entry
~/.local/share/applications/test.desktop
with the following:
[Desktop Entry]
Name=My App
Exec=executable args
Icon=xxx
Terminal=false
Type=Application
7 Git
Withdrawing a remote commit is actually fairly easy: first reset the local commit, then force-push.
git reset --hard <commit-hash>
git push -f origin master
By contrast, git-revert
will create a new commit to undo the
previous commits.
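The difference is easy to see in a throwaway repo; this sketch only assumes git is installed (names and contents are invented for illustration):

```shell
tmp=$(mktemp -d); cd "$tmp"; git init -q
git config user.email tester@example.com
git config user.name tester
echo a >  f.txt; git add f.txt; git commit -qm one
echo b >> f.txt; git add f.txt; git commit -qm two
# revert adds a third commit that undoes "two"; history is not rewritten
git revert --no-edit HEAD
cat f.txt          # back to the content of commit "one"
git log --oneline  # three commits: one, two, Revert "two"
```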
show the diff together when inspecting log
git lg -p
Clone recursively for all submodules:
git clone --recursive https://xxx
If you cloned without the recursive option, you can retrieve the submodules by:
git submodule update --init --recursive
7.1 Configuration
git config --global user.email 'xxx@xxx'
git config --global user.name 'xxx'
git config --global credential.helper cache                    # cache 15 min by default
git config --global credential.helper 'cache --timeout=3600'   # timeout in seconds
7.2 Server
There are several protocols. The smart HTTP protocol seems to be the way to go, because it supports both anonymous and authentication.
But local and SSH is easy. For local, you can just clone using the
/abs/path/to/file
as URL. For ssh, use
user@server:/path/to/proj.git
.
Now let me talk about setting up smart HTTP with lighttpd and cgit.
in /etc/lighttpd/lighttpd.conf
server.port = 80
server.username = "http"
server.groupname = "http"
server.document-root = "/srv/http"
server.modules += ( "mod_auth", "mod_cgi", "mod_alias", "mod_setenv" )

alias.url += ( "/git" => "/usr/lib/git-core/git-http-backend" )
$HTTP["url"] =~ "^/git" {
    cgi.assign = ("" => "")
    setenv.add-environment = (
        "GIT_PROJECT_ROOT" => "/srv/git",
        "GIT_HTTP_EXPORT_ALL" => ""
    )
}
$HTTP["querystring"] =~ "service=git-receive-pack" {
    include "git-auth.conf"
}
$HTTP["url"] =~ "^/git/.*/git-receive-pack$" {
    include "git-auth.conf"
}

# alias.url += ( "/cgit" => "/usr/share/webapps/cgit/cgit.cgi" )
# alias.url += ( "/cgit" => "/usr/lib/cgit/cgit.cgi" )
url.redirect += ("^/$" => "/cgit/")
$HTTP["url"] =~ "^/cgit" {
    server.document-root = "/usr/share/webapps"
    server.indexfiles = ("cgit.cgi")
    cgi.assign = ("cgit.cgi" => "")
    mimetype.assign = ( ".css" => "text/css" )
}
/etc/lighttpd/git-auth.conf
auth.require = ( "/" =>
  (
    "method" => "basic",
    "realm" => "Git Access",
    "require" => "valid-user"
  )
)
auth.backend = "plain"
auth.backend.plain.userfile = "/etc/lighttpd-plain.user"
In /etc/lighttpd-plain.user
hebi:myplainpassword
My /etc/cgitrc
:
#
# cgit config
#
# css=/cgit.css
# logo=/cgit.png

# Following lines work with the above Apache config
#css=/cgit-css/cgit.css
#logo=/cgit-css/cgit.png

# Following lines work with the above Lighttpd config
css=/cgit/cgit.css
logo=/cgit/cgit.png

# if you do not want webcrawlers (like google) to index your site
robots=noindex, nofollow

# if cgit messes up links, use a virtual-root. For example, cgit.example.org/ has this value:
# virtual-root=/cgit

# Include some more info about example.com on the index page
# root-readme=/var/www/htdocs/about.html
root-readme=/srv/http/index.html

#
# List of repositories.
# This list could be kept in a different file (e.g. '/etc/cgitrepos')
# and included like this:
#   include=/etc/cgitrepos
#

clone-url=http://git.lihebi.com/git/$CGIT_REPO_URL.git

readme=:README.org
readme=:README.md
readme=:readme.md
readme=:README.mkd
readme=:readme.mkd
readme=:README.rst
readme=:readme.rst
readme=:README.html
readme=:readme.html
readme=:README.htm
readme=:readme.htm
readme=:README.txt
readme=:readme.txt
readme=:README
readme=:readme

section=hebi

repo.url=hebicc
repo.path=/srv/git/hebicc.git
repo.desc=Hebi CC

repo.url=cgit/hebicc
repo.path=/srv/git/hebicc.git
repo.desc=Hebi CC

repo.url=test
repo.path=/srv/git/test.git
repo.desc=Test

repo.url=pdf
repo.path=/srv/git/pdf.git
repo.desc=pdf

# The next repositories will be displayed under the 'extras' heading
section=extras

repo.url=baz
repo.path=/pub/git/baz.git
repo.desc=a set of extensions for bar users

repo.url=wiz
repo.path=/pub/git/wiz.git
repo.desc=the wizard of foo

repo.url=foo
repo.path=/pub/git/foo.git
repo.desc=the master foo repository
[email protected]
repo.readme=info/web/about.html

# Add some mirrored repositories
section=mirrors

repo.url=git
repo.path=/pub/git/git.git
repo.desc=the dscm

# For a non-bare repository
# repo.url=MyOtherRepo
# repo.path=/srv/git/MyOtherRepo/.git
# repo.desc=That's my other git repository

# scan-path=/srv/git/
The /srv/git
directory must be of group http
, and the group write mask must
be set for pushing.
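The group-write mask plus the setgid bit can be checked on any directory; here on a scratch directory instead of /srv/git, so no sudo is needed (the http group name above is specific to the lighttpd setup):

```shell
tmp=$(mktemp -d)       # created with mode 0700
chmod g+rwxs "$tmp"    # group rwx plus setgid: new files inherit the directory's group
ls -ld "$tmp"          # the group triplet now reads "rws"
```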
I can clone via http://git.lihebi.com/git/repo.git
. The cgit page is
at http://git.lihebi.com/cgit
.
In practice, I cannot push a lot of pdf files; it seems to be a problem with the lighttpd max request body size, but I haven't looked into it yet. Cloning does not have this problem.
If I don't have a Let's Encrypt certificate, I cannot use https; then I can only clone, not push, via git-http-backend. /var/lib/certbot/renew-certificates may need to be run manually if /etc/letsencrypt/live/example.com does not exist. But my server is inside IASTATE, so Let's Encrypt cannot find my IP address; thus nothing can be done, actually.
7.2.1 TODO gitolite
7.3 Individual tools
7.3.1 git-bisect
This command uses a binary search algorithm to find which commit in your project's history introduced a bug.
- The initial input: a "good" and a "bad" commit.
- bisect selects a commit, checks it out, and ASKS YOU whether it is good or bad.
- repeat step 2
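The whole loop can be scripted end to end. This toy repo plants a "bug" at commit v6; the value check is an invented stand-in for a real test, and only git is assumed:

```shell
tmp=$(mktemp -d); cd "$tmp"; git init -q
git config user.email tester@example.com
git config user.name tester
# Ten commits, each writing its number into version.txt.
for i in 1 2 3 4 5 6 7 8 9 10; do
  echo "$i" > version.txt
  git add version.txt
  git commit -qm "v$i"
done
git bisect start HEAD HEAD~9    # HEAD is bad, the first commit is good
# bisect run: exit 0 = good, nonzero = bad; values above 5 are "buggy" here.
out=$(git bisect run sh -c 'test "$(cat version.txt)" -le 5')
echo "$out" | grep 'first bad commit'
git bisect reset
```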
7.3.1.1 start
$ git bisect start
$ git bisect bad                 # Current version is bad
$ git bisect good v2.6.13-rc2    # v2.6.13-rc2 is known to be good
7.3.1.2 answer the question
Each time testing a commit, answer the question by:
$ git bisect good # or bad
7.3.1.3 multiple good
If you know beforehand more than one good commit, you can narrow the bisect space down by specifying all of the good commits immediately after the bad commit when issuing the bisect start command
- v2.6.20-rc6 is bad
- v2.6.20-rc4 and v2.6.20-rc1 are good
$ git bisect start v2.6.20-rc6 v2.6.20-rc4 v2.6.20-rc1 --
7.3.1.4 run script
If you have a script that can tell if the current source code is good or bad, you can bisect by issuing the command:
$ git bisect run my_script arguments
7.3.1.5 Some work flows
Automatically bisect a broken build between v1.2 and HEAD. In this case, it only finds the commit that causes the compile failure.
$ git bisect start HEAD v1.2 --   # HEAD is bad, v1.2 is good
$ git bisect run make             # "make" builds the app
$ git bisect reset                # quit the bisect session
Automatically bisect a test failure between origin and HEAD:
This time, use the make test
workflow:
$ git bisect start HEAD origin --   # HEAD is bad, origin is good
$ git bisect run make test          # "make test" builds and tests
$ git bisect reset                  # quit the bisect session
Automatically bisect a broken test case: Use a custom script.
$ cat ~/test.sh
#!/bin/sh
make || exit 125        # this skips broken builds
~/check_test_case.sh    # does the test case pass?
$ git bisect start HEAD HEAD~10 --   # culprit is among the last 10
$ git bisect run ~/test.sh
$ git bisect reset      # quit the bisect session
7.3.2 git-blame
Annotates each line in the given file with information from the revision which last modified the line.
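A minimal illustration in a throwaway repo (only git is assumed; file names are invented):

```shell
tmp=$(mktemp -d); cd "$tmp"; git init -q
git config user.email tester@example.com
git config user.name tester
printf 'alpha\n' >  f.txt; git add f.txt; git commit -qm first
printf 'beta\n'  >> f.txt; git add f.txt; git commit -qm second
# -s suppresses author and date; each line shows the commit that last touched it
git blame -s f.txt
```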
8 Network
When using docker containers, the host system cannot resolve a
container's name to its IP, so I have to specify it manually. To
resolve a name to an IP address, add it to
/etc/hosts
. E.g. at the end of the file, add:
172.18.0.2 srcml-server-container
In Arch, ifconfig
is in the net-tools
package and is deprecated. Use
ip
instead:
ip addr show <dev>
ip link              # show links
ip link show <dev>
To kill apps listening on a port, use sudo fuser -k 8080/tcp
.
8.1 SSH
Dropbear is a replacement for OpenSSH.
To set up RSA login:
# generate ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
ssh-keygen
# copy the public key to the remote
ssh-copy-id user@host
This will put the content of the public key onto the server's
~/.ssh/authorized_keys
. Which is to say, you can do the same thing
manually:
cat ~/.ssh/id_rsa.pub | ssh user@host "cat >> .ssh/authorized_keys"
In ~/.ssh/config
, you can set up an alias for a remote
host; then you use that alias in place of user@host
.
host remarkable
    Hostname 10.11.99.1
    User root
A side note: remarkable password can be changed by editing
/etc/remarkable.conf
and a reboot.
8.2 Wireless Networking
DHCP is not enabled by default. It is the philosophy of Arch: installing a package does not enable any service. Enable it by:
systemctl enable dhcpcd
The utility for configuring wireless network is called iw
.
- iw dev: list dev
- iw dev <interface> link: show status
- ip link set <interface> up: up the interface
- ip link show <interface>: if you see <UP> in the output, the interface is up
- iw dev <interface> scan: scan for networks
- iw dev <interface> connect "SSID": connect to open network
iw
can only connect to open networks. wpa_supplicant
is used to
connect to WPA2/WEP encrypted networks.
The config file (e.g. /etc/wpa_supplicant/example.conf
) can be
generated in two ways: using wpa_cli
or use wpa_passphrase
.
wpa_cli
is interactive, and has commands scan
, add_network
,
save_config
.
wpa_passphrase MYSSID <passphrase> > /path/to/example.conf
Inside this file, there's a network section. The ssid
is a quoted
SSID name, while psk
is the unquoted encrypted hash. The psk can also
be a quoted clear-text password. If the network is open, you can use
key_mgmt=NONE
in place of psk
After the configuration, you can actually connect to a WPA/WEP protected network:
wpa_supplicant -B -i <interface> -c <(wpa_passphrase <MYSSID> <passphrase>)
- -B: fork into background
- -i: interface
- -c: path to configuration file
Alternatively, you can use the config file
wpa_supplicant -B -i <interface> -c /path/to/example.conf
After this, you need to get IP address by the "usual" way, e.g.
dhcpcd <interface>
It seems that we should enable the services:
- wpa_supplicant@<interface>
- dhcpcd@<interface>
Also, dhcpcd has a hook that can launch wpa_supplicant implicitly.
To Sum Up, find the interface by iw dev
. Say it is wlp4s0
.
Create config file /etc/wpa_supplicant/wpa_supplicant-wlp4s0.conf
:
network={
    ssid="MYSSID"
    # either a quoted clear-text password or the unquoted hash from wpa_passphrase
    psk="clear passwd"
    psk=fjiewjilajdsf8345j38osfj
}
network={
    ssid="2NDSSID"
    key_mgmt=NONE
}
Enable wpa_supplicant@wlp4s0
and dhcpcd@wlp4s0
(or just dhcpcd
)
To change to another wifi, kill the daemon and start another one:
sudo killall wpa_supplicant
wpa_supplicant -B -i wlp4s0 -c /path/to/wifi.conf
8.3 VPN
8.3.1 L2tp, IPSec
apt-get purge "lxc-docker*"
apt-get purge "docker.io*"
apt-get update
apt-get install apt-transport-https ca-certificates gnupg2
sudo apt-key adv \
  --keyserver hkp://ha.pool.sks-keyservers.net:80 \
  --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
deb https://apt.dockerproject.org/repo debian-jessie main
apt-get update
apt-cache policy docker-engine
apt-get update
apt-get install docker-engine
service docker start
docker run hello-world
https://github.com/hwdsl2/setup-ipsec-vpn/blob/master/docs/clients.md https://hub.docker.com/r/fcojean/l2tp-ipsec-vpn-server/
docker pull fcojean/l2tp-ipsec-vpn-server
vpn.env
VPN_IPSEC_PSK=<IPsec pre-shared key>
VPN_USER_CREDENTIAL_LIST=[{"login":"userTest1","password":"test1"},{"login":"userTest2","password":"test2"}]
modprobe af_key
docker run \
  --name l2tp-ipsec-vpn-server \
  --env-file ./vpn.env \
  -p 500:500/udp \
  -p 4500:4500/udp \
  -v /lib/modules:/lib/modules:ro \
  -d --privileged \
  fcojean/l2tp-ipsec-vpn-server
docker logs l2tp-ipsec-vpn-server
docker exec -it l2tp-ipsec-vpn-server ipsec status
8.3.2 OpenVPN
8.3.2.1 Server Setup
https://github.com/kylemanna/docker-openvpn It is very interesting to use docker this way.
What persists is the storage volume, mounted on /etc/openvpn, which serves as the configuration. Each time, create a new docker container mounting the same volume; each step writes to the configuration.
OVPN_DATA="ovpn-data-example"
docker volume create --name $OVPN_DATA
docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig -u udp://VPN.SERVERNAME.COM
docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn ovpn_initpki
It is easy to run the server itself. This time use -d option to make it a daemon.
docker run -v $OVPN_DATA:/etc/openvpn -d -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
It is also easy to create certificates on the go: create a new container to build and retrieve the certificate.
docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn easyrsa build-client-full CLIENTNAME nopass
docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn ovpn_getclient CLIENTNAME > CLIENTNAME.ovpn
8.3.2.2 Client Setup
On arch, copy hebi.ovpn to /etc/openvpn/client/hebi.conf; then the service openvpn-client@hebi will be available in systemd. On ubuntu, the path is /etc/openvpn/hebi.conf, with service openvpn@hebi. Starting the service will forward traffic.
It is likely that you can connect and can ping any IP address, but cannot
resolve names. You can even use drill @8.8.8.8 google.com
to resolve
names along the way.
The trick is to update the local machine's resolv.conf with the DNS
servers pushed by the remote. First
install openresolv
and (from AUR) openvpn-update-resolv-conf
. Add the
following to the end of the hebi.conf file:
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
On Ubuntu, the openvpn package already contains the file. Just modify the conf file.
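A quick way to sanity-check the DNS push after the tunnel is up (drill comes from the ldns package; dig works the same way):

```shell
# the pushed nameserver should now appear here
cat /etc/resolv.conf

# resolve through the default resolver (should work after the push)
drill google.com

# bypass the local resolver to confirm the tunnel itself carries DNS
drill @8.8.8.8 google.com
```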
8.4 setup frp reverse proxy
Use https://natfrp.com: create tunnels, note the tunnel IDs and your token. Download their specialized frpc client, and start the service on the machine behind NAT:
frpc-xxx -f token:ID,ID,ID
To run it at startup, create a service file with:
[Unit]
Description=FRP Client Daemon
After=network.target
Wants=network.target

[Service]
Type=simple
Restart=always
RestartSec=1
User=root
ExecStart=/path/to/bin/frpc_linux_amd64 -f <token>:<ID>
# LimitNOFILE=infinity

[Install]
WantedBy=multi-user.target
For example, to expose several ports, I create one service file per port:
/etc/systemd/system/frpc-8000.service
: for 8000 port used by jupyter hub/etc/systemd/system/frpc-22.service
/etc/systemd/system/frpc-5901.service
Enable on boot:
systemctl enable frpc-8000 frpc-22 frpc-5901
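To check that the tunnels came up, the usual systemd tooling applies (unit names are the ones defined above):

```shell
systemctl status frpc-8000 frpc-22 frpc-5901   # should all be active (running)
journalctl -u frpc-8000 -f                     # follow one tunnel's client log
```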
The list of servers of http://natfrp.com:
| # | HTTP? | Name | Domain | Alias | Current port | Ping from Huaibei | Ping from Shenzhen |
|---|---|---|---|---|---|---|---|
| #7 | | Fuzhou Telecom | cn-fz-dx.sakurafrp.com | | | 300 | 22 |
| #8 | Y | Oregon AWS | us-or-aws.sakurafrp.com | | | 300 | 200 |
| #13 | Y | Los Angeles CN2 | us-la-kr.sakurafrp.com | | | 280 | 200 |
| #23 | Y | Tokyo IIJ-2 | jp-tyo-dvm-2.sakurafrp.com | | | 150 (unstable) | 200 |
| #24 | Y | Zhenjiang multi-line 2 | cn-zj-bgp-3.sakurafrp.com | | | 20 | 30 |
| #28 | | Zaozhuang multi-line 3 | cn-zz-bgp-3.sakurafrp.com | | | 35 | 40 |
| #3 | Y | [VIP] Hong Kong DMIT | cn-hk-dmit.sakurafrp.com | | | 75 | 80 |
| #14 | Y | [VIP] Shanghai multi-line | cn-sh-bgp.sakurafrp.com | frpsh | 5901:39574 | 27 | 35 |
| #25 | | [VIP] Hangzhou multi-line | cn-hz-bgp.sakurafrp.com | frphz | 22:43632 | 25 | 35 |
| #17 | Y | [VIP][Premium] Shenzhen multi-line | cn-sz-bgp.sakurafrp.com | | | 37 | 10 |
| #19 | Y | [VIP][Premium] Hong Kong Aliyun | cn-hk-ali.sakurafrp.com | | | 65 | 15 |
8.4.1 previous writing reference: expose local machine without public IP
Tunneling services (Dynamic DNS service):
- https://ngrok.com/: this does not seem to work without a proxy on both local
server and client
- 1.X source code: https://github.com/inconshreveable/ngrok
- https://www.noip.com/
- DynDNS https://account.dyn.com, no free account, not obvious pricing model
I'm not sure about the speed or stability of these international providers. Some Chinese providers:
- https://natapp.cn/: a Chinese NAT traversal service based on ngrok
- http://ngrok.ciqiuwl.cn/: Xiaomiqiu ngrok
- https://www.natfrp.com/: a free frp service provider
- This looks very good. You basically choose a server on their console, then run the local client and pick that same server.
However, I'm not sure about the performance of any of these models. Does all traffic have to go through the public server? The public server should really just act as a lookup service.
But even though I was assigned a public IP, none of the ports seem to be open. Thus, even if the DDNS services got the IP correct and up-to-date, it still would not work.
UPDATE: actually China Mobile did not assign me a public IP. The IP seen on my router is 100.99.240.168, while my public IP according to whatismyipaddress and ip138 is 36.161.131.36. Thus, I don't even have a public IP, let alone a static one. They must be doing tunneling and traffic forwarding.
8.4.1.1 VPN solutions
I found OpenVPN rather business oriented.
8.4.1.2 SSH tunneling
That keeps alive a SSH connection between your local machine and a public server, and forward all traffic to public server to your local machine.
On public server /etc/ssh/sshd_config
:
AllowTcpForwarding yes
GatewayPorts yes
Initiate the connection from local machine:
ssh -nN -R 8888:localhost:8889 [email protected]
Here, port 8888 on the public server will be forwarded to port 8889 on the local machine.
However, there are problems:
- The SSH connection must be kept alive. If it breaks for any reason, we need logic to restart it.
- This only sets up one port at a time. We'd need to run it multiple times for more ports.
- There's no redirection based on domain name.
But overall, this is clearly doable.
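The keep-alive problem above is commonly solved with autossh, which restarts the tunnel whenever it dies. A sketch, assuming autossh is installed and reusing the 8888:8889 forward to the same 1.2.3.4 server:

```shell
# -M 0 disables autossh's extra monitor port; rely on SSH's own keepalives instead
autossh -M 0 -nN \
    -o "ServerAliveInterval 30" \
    -o "ServerAliveCountMax 3" \
    -R 8888:localhost:8889 root@1.2.3.4
```

Several -R forwards can be stacked in one invocation, which also works around the one-port-at-a-time limitation.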
8.4.1.3 reverse proxy
Tool:
- https://github.com/fatedier/frp: reverse proxy
Looks like this is very promising, it can even work in P2P mode for large data transfer, and it has a dashboard.
Now the problem is still how to find a cheap and stable VPS.
8.4.1.4 VPS
- https://www.vultr.com/: seems to be more stable and cheaper than Digital Ocean
- Aliyun is soooo expensive
8.4.1.5 proxy service
- v2aky.com
- neoproxy.com
- shadowsocks.nl
Client:
- qv2ray (linux)
- v2rayng (android)
9 App
9.1 mplayer
Interactive controls:
- Forward/Backward: LEFT/RIGHT (10s), UP/DOWN (1m), PGUP/PGDWN (10m)
- Playback speed: [ and ] (10%), { and } (50%), backspace (reset)
- Volume: / and *
When changing the speed, the pitch changes too. To disable this, start mplayer with mplayer -af scaletempo. To stretch the image to full screen, pass the -zoom option when starting.
9.2 youtube-dl
When downloading a playlist, you can make the template to number the files
youtube-dl -o "%(playlist_index)s-%(title)s.%(ext)s" <playlist_link>
Download music only:
youtube-dl --extract-audio --audio-format flac <url>
9.3 chrome extensions
- html5outliner: gives you a TOC of the page. VERY NICE!
- render for email
- unblockyouku
- adblock
- syntax highlighter
9.4 Remote viewer
The lab machines are accessed via SPICE. The client for SPICE is virt-viewer, which can be installed through the package manager. The actual client binary is called remote-viewer, shipped with virt-viewer. So the command to connect to the .vv file is: remote-viewer console.vv.
9.5 mpd
music player daemon
To start:
mkdir -p ~/.config/mpd
cp /usr/share/doc/mpd/mpdconf.example ~/.config/mpd/mpd.conf
mkdir ~/.mpd/playlists
# Required files
db_file            "~/.mpd/database"
log_file           "~/.mpd/log"
# Optional
music_directory    "~/music"
playlist_directory "~/.mpd/playlists"
pid_file           "~/.mpd/pid"
state_file         "~/.mpd/state"
sticker_file       "~/.mpd/sticker.sql"
# uncomment pulse audio section
audio_output {
    type "pulse"
    name "My Pulse Output"
}
Start mpd by:
systemctl --user start mpd
systemctl --user enable mpd
The client cantata can be used to create playlists. stumpwm-contrib has an mpd client. mpc is a command-line client.
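A minimal mpc session might look like this, assuming the daemon configured above is running:

```shell
mpc update       # rescan music_directory into the database
mpc add /        # queue the whole library
mpc play
mpc toggle       # pause/resume
mpc next
mpc volume 80
```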
9.6 fontforge
How I made the WenQuanYi Micro Hei ttf font (clx-truetype only recognizes ttf, not ttc):
- input: ttc file
- Tool: fontforge
Open the ttc file, select one font, generate the font, and choose TrueType. The validation failed, but it doesn't matter.
9.7 tmux
# start a new session, with the session name set to "helium"
tmux new -s helium
# attach, and the target is "helium"
tmux a -t helium
Some default commands (all after prefix key):
- !: break the current pane into another window
- :: prompt for a command
- q: briefly display pane indexes (1, 2, etc.)
Commands
- select-layout even-horizontal: balance window horizontally
- last-window: jump to last active window
- new-window
- detach
10 Window System
X generally distinguishes between two types of selection, the PRIMARY and the CLIPBOARD. Every time you select a piece of text with the mouse, the selected text is set as the PRIMARY selection. Using the copy function will place the selected text into the CLIPBOARD. Pasting using the middle mouse button will insert the PRIMARY selection, pasting using the paste function will insert the CLIPBOARD.
10.1 xkill
Kill all Xorg instances
pkill -15 Xorg
If using kill:
ps -ef | grep Xorg # find the pid kill -9 <PID>
xkill itself is not working properly for me, giving an "unable to find display" error.
10.2 Display Manager
Install xdm. It will use the file $HOME/.xsession
, so
ln -s $HOME/.xinitrc $HOME/.xsession
Change default desktop environment:
- GNOME: gdm
- KDE: kdm
- Xfce: lightdm
Change (three approaches):
- edit /etc/X11/default-display-manager
- sudo dpkg-reconfigure gdm
- update-alternatives --config x-window-manager (I think we'd better use update-alternatives)
10.3 screen
With multiple screens, stumpwm detects them as one. Install xdpyinfo; it is used to detect the heads.
check the screen resolution:
xdpyinfo | grep -B 2 resolution
Multiple Display:
# Mirror display
sudo xrandr --output HDMI-2 --same-as eDP-1
sudo xrandr --output HDMI-2 --off
Rotate
xrandr --output HDMI-1 --rotate left
Change resolution
xrandr --output HDMI-1 --mode 1920x1080
Touch screen might need calibration in dual screen setup. Simply find
the touch screen device ID (e.g. 10) from xinput
and screen ID
(e.g. DP-1) from xrandr
, and execute:
xinput map-to-output <device-id> <screen-id>
10.4 cursor
Install xcursor-themes:
aptitude install xcursor-themes
aptitude show xcursor-themes   # here it will output the theme names
In .Xresources
:
Xcursor.theme: redglass
10.5 Natural Scrolling
The old solution is to swap the pointer button "4" and "5", by
xmodmap
or xinput
:
xmodmap -e "pointer = 1 2 3 5 4"
xinput --set-button-map 10 1 2 3 5 4
The 10 is the device id; to find it, run xinput without arguments.
But this way is deprecated, as of chromium 49 and above, it does not work any more. So use the xinput way to set the property:
xinput set-prop 10 "libinput Natural Scrolling Enabled" 1
I'm using logitech G900 and the property might be different. It works!
Not sure if the xinput command should be run each time the system boots. That would be hard for specifying ID.
The detail is, you can do this:
xinput                         # show a list of devices
xinput list-props <ID>         # list of properties
xinput set-prop <deviceID> <propID> <value>
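To avoid hard-coding the numeric ID (which can change across boots), xinput also accepts the device name; "Logitech G900" below is an example name, take yours from xinput's device list:

```shell
# set the property directly by device name
xinput set-prop "Logitech G900" "libinput Natural Scrolling Enabled" 1

# or resolve the ID explicitly first
id=$(xinput list --id-only "Logitech G900")
xinput set-prop "$id" "libinput Natural Scrolling Enabled" 1
```

This makes it practical to run from .xinitrc at every X startup.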
10.6 ratpoison
This is actually a wonderful WM. To start:
aptitude install ratpoison
In .xinitrc
:
exec ratpoison
C-t ? to show the help. Actually C-t is the prefix of every command, C-g to abort.
- C-t :: type command
- C-t !: run shell command
- C-t .: open menu
- C-t c: open terminal
HOWEVER, this is pretty old, and it causes the screen brightness to flicker back and forth. Fortunately stumpwm is very much like this one, but
- actively maintained on github.
- written in common lisp
10.7 StumpWM
10.7.1 Installation
In order to use ttf-fonts
module, the lisp clx-truetype
package needs to be installed.
Install the slime IDE for emacs, install quicklisp, then install it using quicklisp.
Follow the description in lisp wiki page.
10.7.1.1 A better way to install stumpwm
- This seems a better way to install stumpwm
(ql:quickload "stumpwm")
But this requires the .xinitrc to be
exec sbcl --load /path/to/startstump
with startstump
(require :stumpwm)
(stumpwm:stumpwm)
When using gdm, for example on Ubuntu, the default Xsession
is
/etc/gdm3/Xsession
. To add stumpwm into the entry, create
/usr/share/xsessions/stumpwm.desktop
with:
[Desktop Entry]
Encoding=UTF-8
Name=Stumpwm
Comment=Tiling, keyboard driven Common Lisp window manager
TryExec=stumpwm
Exec=stumpwm
Type=Application

[X-Window Manager]
SessionManaged=true
Of course this requires /usr/bin/stumpwm to be executable, with
#!/bin/sh
sbcl --load /path/to/startstump
10.7.1.2 Live Debugging
To debug it live, you might need this in .stumpwmrc:
(in-package :stumpwm)
(require :swank)
(swank-loader:init)
(swank:create-server :port 4004
                     :style swank:*communication-style*
                     :dont-close t)
The above won't work unless swank is installed:
(ql:quickload "swank")
The port is actually interesting. Here it is set to 4004, and slime in Emacs defaults to 4005, so they won't conflict. The trick to connect to stumpwm is to use slime-connect and enter 4004 at the port prompt.
So actually, if you just want to live debug, just install swank and
(require 'swank)
(swank:create-server)
Note lastly that to install using quickload, you need permission. So
sudo sbcl --load /usr/lib/quicklisp/setup
To test if it works, you should be able to switch to stumpwm namespace and operate the window, like this:
(in-package :stumpwm)
(stumpwm:select-window-by-number 2)
10.7.2 General
Same as ratpoison:
- C-t C-h: show help
- C-t !: run shell command
- C-t c: terminal
- C-t e: open emacs!
- C-t ;: type a command
- C-t :: eval
- C-t C-g: abort
- C-t a: display time
- C-t t: send C-t
- C-t m: display last message
10.7.2.1 Get Help
- C-t h k: from key binding to command (describe-key)
- C-t h w: from command to key binding (where-is)
- C-t h c: describe command
- C-t h f: describe function
- C-t h v: describe variable
- mode-line: start mode-line
10.7.3 Window
- C-t n
- C-t p
- C-t <double-quote>
- C-t w: list all windows
- C-t k: kill current frame (K to force quit)
- C-t #: toggle mark of current window
10.7.4 Frame
- C-t s: hsplit
- C-t S: vsplit
- C-t Q: kill other frames, only retain this one
- C-t r: resize, can use C-n, C-p interactively
- C-t +: balance frames
- C-t o: next frame
- C-t -: show desktop
Other commands
remove-split
- to remove the current frame
fclear
- clear the current frame, show the desktop
To resize frames interactively, C-t r
and then use the arrows.
10.7.5 Groups
Shortcuts:
- C-t g c: create (:gnew). Also available for float: :gnew-float
- C-t g n: next
- C-t g o: :gother
- C-t g p: previous
- C-t g <double-quote>: interactively select groups (:grouplist)
- C-t g k: kill current group, move windows to next group (:gkill)
- C-t g r: rename current group (:grename)
- C-t G: display all groups and their windows
- C-t g g: show list of groups
- C-t g m: move current window to group X
- C-t g <d>: go to group <d>
10.7.6 Configuration
(stumpwm:define-key stumpwm:*root-map* (stumpwm:kbd "C-z") "echo Zzzzz...")
10.8 Xmonad
I use Xmonad in vncserver, and it works nicely with the host WM StumpWM because it uses a different set of keys. It has a red frame around windows by default, which is nice for visually distinguishing the local and remote screens.
The executable is xmonad
. Mod key is alt
.
- Mod-shift-enter: open terminal
- Mod-j/k: move focus between windows
- Mod-space: cycle layout
- Mod-,/.: decrease/increase the number of panels inside the master (current) panel
- Mod-h/l: resize
- Mod-shift-c: kill
- Mod-p: execute dmenu (needs installation)
- Mod-<1-9>: switch workspace
Install xmobar
and trayer
.
Configuration is done in ~/.xmonad/xmonad.hs. Test whether your configuration file is syntactically correct:
To load, use Mod-q. This will recompile and load the configuration file.
10.9 VNC
I use tigervnc because it seems to be fast.
- vncpasswd: set the password
- vncserver&: start the server.
- It is started in :1 by default, so connect it with
vncviewer <ip>:1
- On Mac, the docker bridge network does not work, so you cannot connect to the container by IP address. In this case, map the port 5901. 5900+N is the default VNC port.
- vncserver -kill :1 will kill the vncserver
- vncserver :2 will open :2
vncserver
will use ~/.vnc/xstartup
as startup script. It must have
execution permission.
F8
to open context menu, and f
to fullscreen. Once fullscreened,
the host WM shortcut will not be honored.
On Ubuntu, vncserver will by default only listen on localhost. Thus, you need to pass -localhost no to enable outside access. This has nothing to do with the firewall (iptables or ufw). Enabling ufw will actually block the connection, even with ufw enable; ufw allow 5901/tcp. Just disable it.
Also, on Ubuntu, the clipboard seems not to be enabled by default. The problem is on the server side. vncconfig is the helper program specifically for maintaining the clipboard. You will need vncconfig -nowin& to start it. Probably add this to my .vnc/xstartup? This is not a problem on Arch.
On GuixSD, there is no tigervnc client. I use vinagre instead.
some random settings in xstartup
:
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
10.9.1 enable service
From this reference on archwiki, add /etc/systemd/system/[email protected]
:
[Unit]
Description=Remote desktop service (VNC)
After=syslog.target network.target

[Service]
Type=forking
User=foo
WorkingDirectory=/home/foo
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/bin/vncserver %i -geometry 1440x900 -alwaysshared
ExecStop=/usr/bin/vncserver -kill %i

[Install]
WantedBy=multi-user.target
Change the user name accordingly. When using tigervnc, I tend to leave geometry unspecified, and the client and server seem to be able to change it dynamically, which is super nice. On Ubuntu, the server must have the -localhost no option.
11 System Management
The hardware beep sound is known as PC Speaker. To disable, simply remove the kernel module:
rmmod pcspkr
To use an ssh key for connecting to a remote ssh daemon, on the host machine run ssh-keygen, then ssh-copy-id user@server.
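Expanded, with a manual fallback for servers where ssh-copy-id is unavailable (user@server is a placeholder):

```shell
ssh-keygen -t ed25519        # accept the default ~/.ssh/id_ed25519
ssh-copy-id user@server

# manual equivalent of ssh-copy-id
cat ~/.ssh/id_ed25519.pub | ssh user@server \
    'mkdir -p ~/.ssh && chmod 700 ~/.ssh && \
     cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'
```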
11.1 Audio
Bluetooth headsets:
- bluez
- bluez-utils
- bluez-libs
- pulseaudio-alsa
- pulseaudio-bluetooth
use bluetoothctl
to enter config:
[bluetooth]# power on
[bluetooth]# agent on
[bluetooth]# default-agent
[bluetooth]# scan on
[NEW] Device 00:1D:43:6D:03:26 Lasmex LBT10
[bluetooth]# pair 00:1D:43:6D:03:26
[bluetooth]# connect 00:1D:43:6D:03:26
If you're getting a connection error org.bluez.Error.Failed retry by killing existing PulseAudio daemon first:
$ pulseaudio -k
[bluetooth]# connect 00:1D:43:6D:03:26
11.2 Power Management
Power management is handled by systemd, via acpid. The configuration file is /etc/systemd/logind.conf; see man logind.conf for details. Hibernate saves to disk, while suspend saves to RAM. Both resume to the current state.
HandlePowerKey=hibernate
HandleLidSwitch=suspend
11.3 Booting
The grub2 menu configure file is located at /boot/grub/grub.cfg
. It
is generated by /usr/sbin/update-grub
(8) using templates from
/etc/grub.d/*
and settings from /etc/default/grub
.
The default run level is 2 (multi-user mode), corresponding to the /etc/rc2.d/XXX scripts. Those scripts start with "S" or "K", meaning start or stop is sent to the init utility. They are symlinked to ../init.d/xxx. By default there is no difference between levels 2 to 5. Run level 0 means halt, S means single-user mode, and 6 means reboot.
11.4 User Management
useradd will create the account using the values given on the command line, plus the system defaults. A group for the user will also be created by default.
- -g GROUP: specify the initial login group. Typically just ignore this; the default value will be used.
- -G group1,group2,...: additional groups. You might want: video, audio, wheel
- -m: create home if it does not exist
- -s SHELL: use this shell. Typically just ignore this; the system will choose for you.
11.5 File Management
11.5.1 Swap File
A swap file can be used as additional swap memory. When linking, ld might fail due to lack of memory.
Check the current swap:
swapon -s
Create swap file:
dd if=/dev/zero of=/path/to/extraswap bs=1M count=4096
Or using fallocate
fallocate -l 4096M /path/to/extraswap
Set the permission. A world-readable swap file is a huge vulnerability.
chmod 600 /path/to/extraswap
Format it:
mkswap /path/to/extraswap
Swap on/off:
swapon /path/to/extraswap
swapoff /path/to/extraswap
This will not be in effect after reboot. To automatically swap it on, in /etc/fstab
/path/to/extraswap none swap sw 0 0
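To confirm the swap file is active (after swapon, or after a reboot with the fstab entry in place):

```shell
swapon --show    # lists active swap files/devices
free -h          # the Swap row should reflect the new total
```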
11.5.2 Back Up & Syncing
The rsync command is used to sync from source to destination. It does not perform two-way transfer. It decides something changed if either of these happens:
- size change
- last-modified time
11.5.3 MIME
Check the MIME type of a file:
file --mime /path/to/file
On debian, the mapping from suffix to MIME type is /etc/mime.types
.
Create default application for xdg-open
mkdir ~/.local/share/applications
xdg-mime default firefox.desktop application/pdf
~/.local/share/applications/mimeapps.list:

[Default Applications]
application/pdf=firefox-esr.desktop
/usr/share/applications/*.desktop are the files defining each application.
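To verify what is registered, xdg-mime can query both directions (the pdf path below is a placeholder):

```shell
xdg-mime query filetype /path/to/some.pdf   # e.g. application/pdf
xdg-mime query default application/pdf      # e.g. firefox.desktop
```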
On Debian, you can also do this:
update-alternatives --config x-terminal-emulator
update-alternatives --config x-www-browser
11.6 LVM
11.7 Monitor the system information
lvs
vgs
pvs
df -h
vgdisplay
lvdisplay /dev/debian-vg/home
11.8 Extending a logical volume
lvextend -L10G /dev/debian-vg/tmp   # to 10G
lvextend -L+1G /dev/debian-vg/tmp   # + 1G
resize2fs /dev/debian-vg/tmp
11.9 Reduce a logical volume
The home is 890G.
umount -v /home
# check
e2fsck -ff /dev/debian-vg/home
resize2fs /dev/debian-vg/home 400G
lvreduce -L -490G /dev/debian-vg/home
lvdisplay /dev/debian-vg/home
resize2fs /dev/debian-vg/home
mount /dev/debian-vg/home /home
12 Arch Linux
12.1 Installation
12.1.1 Verify UEFI
Nowadays (starting from 2017) Arch only supports 64 bits, and seems to prefer UEFI. Fine.
First, verify the boot mode is UEFI by checking that the following folder exists:
ls /sys/firmware/efi/efivars
12.1.2 System clock
timedatectl set-ntp true
12.1.3 Partition
parted /dev/sda mklabel gpt
parted /dev/sda mkpart ESP fat32 1MiB 513MiB
parted /dev/sda set 1 boot on
parted /dev/sda mkpart primary linux-swap 513MiB 5GiB
parted /dev/sda mkpart primary ext4 5GiB 100%
This creates:
- sda1: /boot, the EFI System Partition (ESP)
- sda2: swap
- sda3: /
Format:
mkfs.fat -F32 /dev/sda1
mkfs.ext4 /dev/sda3
Mount
mount /dev/sda3 /mnt
mkdir /mnt/boot
mount /dev/sda1 /mnt/boot
12.1.4 Select mirror
look into /etc/pacman.d/mirrorlist
and modify if necessary. The order
matters. The file will be copied to new system.
12.1.5 Install base system
pacstrap /mnt base
12.1.6 chroot
genfstab -U /mnt >> /mnt/etc/fstab
arch-chroot /mnt
12.1.7 Configure
Now we are in the new system.
ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime
hwclock --systohc
Uncomment en_US.UTF-8 UTF-8
inside /etc/locale.gen
and run
locale-gen
Set LANG
in /etc/locale.conf
LANG=en_US.UTF-8
Set hostname in /etc/hostname
myhostname
Set root password
passwd
Install grub
pacman -S grub efibootmgr
grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=myarch
grub-mkconfig -o /boot/grub/grub.cfg
Before reboot, it is good to make sure the network will work, by installing some networking packages:
dialog
wpa_supplicant
iw
Now reboot
12.1.8 Config
Install the packages, and config the system using my scripts:
- setup-quicklisp
- setup-git
12.1.9 Dual boot with Windows
The only difference is that, you do not need to create the EFI boot partition, but use the existing one. Just mount it to boot. The rest is the same.
12.2 Pacman
Option
- S
- sync, a.k.a install
- Q
- query
Parameter:
- s
- search
- y
- fetch new package list. Usually use with
u
- u
- update all packages
- i
- more information
- l
- location of files
Typical usage:
- Syu
- update whole system
- S
- install package
- R
- remove package
- Rs
- remove package and its unused dependencies
- Ss
- search package
- Qi
- show description of a package
- --noconfirm
- use in scripts
- --needed
- do not install already-installed packages again
Pacman stores all previously downloaded packages. So when you find your /var/cache/pacman getting big, consider cleaning it up using:
paccache -rk 1
12.3 AUR
Have to search through its web interface. Find the git download link and clone it. It is pullable.
Go into the folder and
makepkg -si
-s alone will build it, with -i to install it after the build. The dependencies are automatically installed if they can be found by pacman. If a dependency is also on AUR, you have to install it manually.
The md5sum line can be skipped for some packages. Just replace the md5sum value inside the quotes with 'SKIP'.
13 CentOS
On installing a new instance of CentOS, issue the following commands:
# check the sshd status (the package should be openssh)
service sshd status
# add user, -m means create home folder
useradd -m myname
# oh wait, I forgot to add myself to wheel
# -a means append; without -a, -G accepts a comma-separated list and overwrites the previous setting
usermod -aG wheel myname
14 Debian
14.1 Package
/etc/apt/sources.list
/var/cache/apt/archives/
netselect-apt
to select the fastest source!
dist-upgrade
cp /etc/apt/sources.list{,.bak}
sed -i -e 's/ \(stable\|wheezy\)/ testing/ig' /etc/apt/sources.list
apt-get update
apt-get --download-only dist-upgrade
# Dangerous
apt-get dist-upgrade
- dpkg-reconfigure: reconfigure an installed package
- debconf-show: show the current configuration of a package
Another part is the main component. If you want third-party contributed packages, add contrib after main. If you further want non-free packages, also add non-free.
To fix dependency problems:
apt --fix-broken install
14.2 Configuration
14.2.1 update-alternatives
Options:
--config
: show options and select configuration interactively--display
: show the options
Some examples:
update-alternatives --config desktop-background
15 Docker
To remove the requirement of sudo
:
sudo groupadd docker
sudo gpasswd -a ${USER} docker
sudo service docker restart
newgrp docker
You may find you have to type C-p twice for it to take effect. That is because C-p C-q is the default binding for detaching a container. This shadows C-p, so I have to type it twice; it must be changed. In ~/.docker/config.json, add:
{"detachKeys": "ctrl-],ctrl-["}
Restart docker daemon to take effect. This can also be set by
--detach-keys
option.
Network config:
- docker network ls
- docker network inspect <network-name>
15.1 Images
Docker images are template of VMs. docker images
list available
images locally.
You can build a docker image by writing a docker file. The first line
is typically a FROM
command to specify a base image. Other commands
are as follows:
- RUN: the most basic command. Since it expects to be non-interactive, when running a command such as installing a package, supply the -y or equivalent arguments.
- ADD:
ADD <src> .. <dst>
The difference from copy:- ADD allows src to be url
- ADD will decompress an archive
- COPY:
COPY <src> .. <dst>
all srcs on the local machine will be copied to dst in the image. The src can use wildcards. The src cannot be out of the current build directory, e.g...
is not valid. - USER:
USER daemon
The USER instruction sets the user name or UID to use when running the image and for any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile. - WORKDIR: The WORKDIR instruction sets the working directory for any
RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in
the Dockerfile
- if it does not exist, it will be created
- it can be used multiple times, if it is relative, it is relative to the previous WORKDIR
- ENTRYPOINT ["executable", "param1", "param2"]: configure the container to be run as an executable.
In the folder containing Dockerfile, run to build the image:
docker build -t my-image .
docker-compose is installed separately from docker. It must be run inside the folder containing docker-compose.yml.
Commands
- docker-compose up: bring up the services. It will not exit; use C-c to exit, and the docker-compose down command will be sent.
  - The second time you up the compose, it will not recreate the containers, but update the current ones. If all of them are up to date, nothing happens.
- docker-compose up -d: bring up the services and exit. You need to shut them down manually
- docker-compose down: shutdown the services
A sample compose file:
version: '2'
services:
  srcml-server-container:
    image: "lihebi/srcml-server"
  helium:
    image: "lihebi/arch-helium"
    tty: true
    volumes:
      - data:/data
  benchmark-downloader:
    # this is used to download benchmarks to the shared volume
    image: "lihebi/benchmark-downloader"
    tty: true
    volumes:
      - data:/data
volumes:
  # create a volume with defaults
  data:
A service is a container. Setting tty to true prevents it from stopping. That is the same effect as passing -t to docker run.
The containers can be seen with docker ps, with names like compose_XXX_1. Changes to a container will not survive a compose down: the containers are deleted, and the next up creates new ones.
Under any volume, if the external option is set to true, docker compose will look for it outside, and signal an error if it does not exist.
Once the compose is up, docker creates a bridge network called compose_default. All services (containers) are attached to it.
You may want to publish the image so that others can use it. DockerHub is the host for it.
When pushing and pulling, what exactly happens?
docker tag local-image lihebi/my-image
docker push lihebi/my-image
- docker login
- login so that you can push
- docker push lihebi/my-container
- push to docker hub
- docker pull lihebi/my-container
- pull from the internet
15.2 Instance
To create an instance of an image and run it, use the docker run
command. Specifically,
docker run [option] <image> /bin/bash
- -i
- interactive
- -d
- detach (opposite to -i)
- -t
- assign a tty. Even when using -d, you need this.
- –rm
- automatically remove when exits
- -p <port>
- expose the port <port> of the container. The host port will be randomly assigned; running docker ps will show the port binding information. If the port is not set when running a container, you have to commit it and run it again to assign a port or another port.
- -v /volume
- create a mount at /volume
- -v /local/dir:/mnt
- mount the local dir to /mnt in the container. The default is read-write mode; if you want read-only, do this: -v /local/dir:/mnt:ro. The local dir must be an ABSOLUTE path.
To just create an instance without running it, use docker create.
To run some command on an already run container, use the docker exec
command with the <ID> of the container:
docker exec <ID> echo "hello"
- ID can be the UUID or container name
- you can use -it as well, e.g. docker exec -it <ID> /bin/bash
When using docker exec, I cannot connect to the emacs server through emacsclient -t; the error message says the terminal is not found. I cannot open tmux either. The problem does not appear when using the docker run command. The problem is that the docker exec tty is not a real tty. The solution is, when starting an exec command, use script to run bash:
docker exec -it my-container script -q -c "/bin/bash" /dev/null
docker exec -it my-container env TERM=xterm script -q -c "/bin/bash" /dev/null
The TERM is not necessary here, because in my case docker always sets it to xterm. I actually change it to screen-256color in my bashrc file to get the correct colors.
To stop a container, use docker stop
command to do it gracefully. It
will send SIGTERM to the app, then wait for it to stop. If you don't
want to stop it gracefully, just force kill using docker kill
. The
default wait time is 10 seconds. You can change this to, for example,
1 second:
docker stop -t 1 <container-ID>
The reason for a container to resist stopping may be that it ignores the SIGTERM. Python does this, so for a python program you should handle the signal yourself:
import sys
import signal

def handler(signum, frame):
    sys.exit(1)

def main():
    signal.signal(signal.SIGTERM, handler)
    # your app
To stop all containers:
docker stop $(docker ps -a -q)
To start a stopped container, use docker start <ID>
. It will be
detached by default.
You can remove a stopped container by docker rm
command. To remove
all containers (will not remove non-stopped ones, but give errors):
docker rm $(docker ps -a -q)
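A newer bulk-cleanup shortcut is docker system prune; it goes beyond containers, so read the confirmation prompt before agreeing:

```shell
docker system prune            # stopped containers, unused networks, dangling images
docker system prune --volumes  # additionally remove unused volumes
```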
When you make any changes to the container, you can view the
difference made from the base image via docker diff <ID>
. When
desired, you can create a new image based on the current running
instance, via docker commit
:
docker commit <ID> my-new-image
You can assign a name to the container so that you can better remember and reference it.
15.3 Volume
You can create a volume by itself, using docker volume create hello
,
or create together with a container.
You have to mount the volume at the time you create the container. You cannot remount anything to it without committing it to an image and creating a container again. Use the -v option to declare the volume when creating the container:
docker run -v /mnt <image>
docker run -v my-named-vol:/mnt <image>
docker run -v /absolute/path/to/host/local/path:/mnt/in/container <image>
If only the inner path is provided, the volume will still be created, but as a long-named directory under /var/lib/docker/volumes.
The volumes will never be automatically deleted, even if the container is deleted.
To manage a volume:
docker volume inspect <volume-full-name>
docker volume ls
docker volume prune: remove all unused volumes
15.4 The docker manager, Portainer
This is actually great: https://www.portainer.io/documentation/quick-start/, run via:
docker run -d -p 9000:9000 -p 8000:8000 --name portainer --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data portainer/portainer
And go to http://localhost:9000 for the UI. It even seems able to manage remote containers.
16 Unix Programming
POSIX defines the operating system interface. The standard contains these volumes:
- Base Definition: convention, regular expression, headers
- System Interfaces: system calls
- Shell & Utilities: shell command language and shell utilities
- Rationale
I found most of them are not that interesting, except Base Definitions section 9, regular expressions. That definition is used by many shell utilities, such as awk.
16.1 Low-level IO
16.1.1 open
int open(const char *filename, int flags[, mode_t mode])
Open the file and return a file descriptor (with O_CREAT in flags, create it if needed).
16.1.2 close
int close(int filedes)
- file descriptor is deallocated
- if all file descriptors associated with a pipe are closed, any unread data is discarded.
Return
- 0 on success, -1 on failure
16.1.3 read
ssize_t read(int filedes, void *buffer, size_t size)
- read up to size bytes, store result in buffer.
Return
- number of bytes actually read.
- return 0 means EOF
16.1.4 write
ssize_t write(int filedes, const void *buffer, size_t size)
- write up to size bytes from buffer to the file descriptor.
Return
- number of bytes actually written
- -1 on failure
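The four calls above compose naturally into a file copy loop. Below is a minimal sketch (the helper name copy_file is ours, not POSIX); note the inner loop, because write may write fewer bytes than requested:

```c
#include <fcntl.h>
#include <unistd.h>

/* Copy src to dst using only open/read/write/close.
   Returns 0 on success, -1 on failure. */
int copy_file(const char *src, const char *dst) {
  int in = open(src, O_RDONLY);
  if (in < 0) return -1;
  int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
  if (out < 0) { close(in); return -1; }
  char buf[4096];
  ssize_t n;
  while ((n = read(in, buf, sizeof buf)) > 0) {
    /* write may be partial: keep writing until all n bytes are out */
    ssize_t off = 0;
    while (off < n) {
      ssize_t w = write(out, buf + off, n - off);
      if (w < 0) { close(in); close(out); return -1; }
      off += w;
    }
  }
  close(in);
  close(out);
  return n < 0 ? -1 : 0;   /* n == 0 means clean EOF */
}
```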
16.1.5 fdopen
FILE *fdopen(int filedes, const char *opentype)
from file descriptor, get the stream
16.1.6 fileno
int fileno(FILE *stream)
from stream to file descriptor
16.1.7 fdset
This is a bit array.
- FDZERO(&fdset): initialise fdset to empty
- FDCLR(fd, &fdset): remove fd from the set
- FDSET(fd, &fdset): add fd to the set
- FDISSET(fd, &fdset): return non-0 if fd is in set
16.1.8 select - synchronous I/O multiplexing
int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *errorfds, struct timeval *timeout)
Block until at least one fd is true for specific condition, unless timeout.
Params
- nfds: the range of file descriptors to be tested. Should be the largest one in the sets + 1, but you can just pass FD_SETSIZE.
- readfds: watch for read. Can be NULL.
- writefds: watch for write. Can be NULL.
- errorfds: watch for error. Can be NULL.
- timeout:
- NULL: no timeout, block forever
- 0: return immediately. Used for test file descriptors
Return:
- if timeout, return 0
- the sets are modified. Those in sets are those ready
- return the number of ready file descriptors in all sets
int fd; // init fd
fd_set set;
FD_ZERO(&set);
FD_SET(fd, &set);
struct timeval timeout;
timeout.tv_sec = 1;
timeout.tv_usec = 0;
select(FD_SETSIZE, &set, NULL, NULL, &timeout);
16.1.9 sync
void sync(void)         // sync all dirty files
int fsync(int filedes)  // sync only that file
16.1.10 dup
You can create a new descriptor to refer to the same file. They
- share file position
- share status flag
- separate descriptor flags
int dup(int old) // same as fcntl(old, F_DUPFD, 0)
Copy old to the first available descriptor number.
int dup2(int old, int new) // same as: close(new); fcntl(old, F_DUPFD, new)
If old is invalid, it does nothing (it does not close new)!
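The classic use of dup2 is redirecting a standard stream. Here is a minimal sketch (redirect_demo and the message are illustrative names): stdout is saved with dup, pointed at a file with dup2, then restored.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Temporarily redirect stdout into path, then restore it.
   Returns 0 on success. */
int redirect_demo(const char *path) {
  int saved = dup(STDOUT_FILENO);   /* keep a copy of the original stdout */
  int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
  if (saved < 0 || fd < 0) return -1;
  fflush(stdout);
  dup2(fd, STDOUT_FILENO);          /* descriptor 1 now refers to the file */
  close(fd);                        /* descriptor 1 keeps the file open */
  printf("goes to the file\n");
  fflush(stdout);                   /* push the bytes out before restoring */
  dup2(saved, STDOUT_FILENO);       /* restore the original stdout */
  close(saved);
  return 0;
}
```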
16.2 Date and Time
- calendar time: absolute time, e.g. 2017/6/29
- interval: between two calendar times
- elapsed time: length of interval
- amount of time: sum of elapsed times
- period: elapsed time between two events
- CPU time: like calendar time, but relative to a process, i.e. the time during which the process runs on the CPU
- Processor time: amount of time a CPU is in use.
16.2.1 struct timeval
- time_t tv_sec: seconds
- long int tv_usec: microseconds, must be less than 1 million
16.2.2 struct timespec
- time_t tv_sec
- long int tv_nsec: nanoseconds, must be less than 1 billion
16.2.3 difftime
double difftime (time_t time1, time_t time0)
16.2.4 time_t
On GNU it is a long int. It is the number of seconds elapsed since 00:00:00 Jan 1 1970, Coordinated Universal Time.
Get the current calendar time:
time_t time(time_t *result)
16.2.5 alarm
16.2.5.1 struct itimerval
- struct timeval it_interval: 0 to send the alarm once, non-zero to send it every interval
- struct timeval it_value: time left until the alarm. If 0, the alarm is disabled
16.2.5.2 setitimer
int setitimer(int which, const struct itimerval *new, struct itimerval *old)
- which: ITIMER_REAL, ITIMER_VIRTUAL, ITIMER_PROF
- new: set to new
- old: if not NULL, fill with old value
16.2.5.3 getitimer(int which, struct itimerval *old)
get the timer
16.2.5.4 alarm
unsigned int alarm(unsigned int seconds)
To cancel existing alarm, use alarm(0). Return:
- 0: no previous alarm
- non-0: the remaining value of previous alarm
unsigned int alarm (unsigned int seconds)
{
  struct itimerval old, new;
  new.it_interval.tv_usec = 0;
  new.it_interval.tv_sec = 0;
  new.it_value.tv_usec = 0;
  new.it_value.tv_sec = (long int) seconds;
  if (setitimer (ITIMER_REAL, &new, &old) < 0)
    return 0;
  else
    return old.it_value.tv_sec;
}
16.3 Process
Three steps
- create child process
- run an executable
- coordinate the results with parent
16.3.1 system
int system(const char *command)
- uses sh to execute the command, so $PATH is searched
- returns -1 on error
- otherwise returns the wait-style termination status of the child
16.3.2 getpid
- pid_t getpid(void): return PID of the current process
- pid_t getppid(void): PID of the parent process
16.3.3 fork
pid_t fork(void)
return
- 0 in child
- child's PID in parent
- -1 on error
16.3.4 pipe
int pipe(int filedes[2])
- Create a pipe; filedes[0] is the read end, filedes[1] is the write end
Return:
- 0 on success, -1 on failure
16.3.5 exec
int execv (const char *filename, char *const argv[])
int execl (const char *filename, const char *arg0, ...)
int execve (const char *filename, char *const argv[], char *const env[])
int execle (const char *filename, const char *arg0, ..., char *const env[])
int execvp (const char *filename, char *const argv[])
int execlp (const char *filename, const char *arg0, ...)
- execv: the last of argv array must be NULL. All strings are null-terminated.
- execl: the argv entries are passed as separate arguments; the last one must be NULL
- execve: provide env
- execle
- execvp: find filename in $PATH
- execlp
16.3.6 wait
This should be used in parent process.
pid_t waitpid(pid_t pid, int *status_ptr, int options)
- pid:
  - positive: the PID of a specific child process
  - -1 (WAIT_ANY): any child process
  - 0 (WAIT_MYPGRP): any child process with the same process group ID as the parent
  - any other negative value: any child process whose process group ID equals -pid
- options: OR of the following
- WNOHANG: no hang: the parent process should not wait
- WUNTRACED: report stopped process as well as the terminated ones
- return: PID of the child process that is reporting
pid_t wait(int *status_ptr)
wait(&status) is the same as waitpid(-1, &status, 0)
16.3.6.1 Status
These are macros with the signature int NAME(int status).
- WIFEXITED: if exited: return non-0 if child terminated normally with exit
- WEXITSTATUS: exit status: if above true, this is the low-order 8 bits of the exit code
- WIFSIGNALED: if signaled: non-0 if the process terminated because it receives a signal that was not handled
- WTERMSIG: term sig: if above true, return that signal number
- WCOREDUMP: core dump: non-0 if the child process terminated and produce a core dump
- WIFSTOPPED: if stopped: if the child process stopped
- WSTOPSIG: stop sig: if above true, return the signal number that cause the child to stop
16.4 Unix Signal Handling
16.4.1 Ordinary signal handling
The handling of ordinary signals are easy:
#include <signal.h>
#include <stdio.h>

static void my_handler(int signum) {
  printf("received signal\n");
}

int main() {
  struct sigaction sa;
  sa.sa_handler = my_handler;
  sigemptyset(&sa.sa_mask);
  sa.sa_flags = 0;
  // this SIGSEGV handler does not work (see below)
  sigaction(SIGSEGV, &sa, NULL);
  // this SIGINT handler will work
  sigaction(SIGINT, &sa, NULL);
}
16.4.2 SIGSEGV handling
16.4.2.1 Motivation
The reason that I want to handle the SIGSEGV
is that I want to get the coverage from gcov
.
Gcov will not report any coverage information if the program terminates by receiving some signals.
Fortunately we can explicitly ask gcov to dump it by calling __gcov_flush()
inside the handler.
I confirmed this can work for ordinary signal handling.
// declare the prototype of the gcov flush function
void __gcov_flush(void);

void my_handler() {
  __gcov_flush();
}
After experiment, I found:
- AddressSanitizer cannot work with this handling: it hijacks the signal and may report the error itself.
- Even with AddressSanitizer turned off, and with the handler function executed, the coverage information still could not be obtained. This is possibly because the handler runs on a different stack.
16.4.2.2 a new stack
However, handling the SIGSEGV is challenging. The above will not work 1.
By default, when a signal is delivered, its handler is called on the same stack where the program was running. But if the signal is due to stack overflow, then attempting to execute the handler will cause a second segfault. Linux is smart enough not to send this segfault back to the same signal handler, which would prevent an infinite cascade of segfaults. Instead, in effect, the signal handler does not work.
Instead, we need to make a new stack and install the handler on that stack.
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>

void sigsegv_handler(int signum, siginfo_t *info, void *data) {
  printf("Received signal finally\n");
  exit(1);
}

#define SEGV_STACK_SIZE BUFSIZ

int main() {
  struct sigaction action;
  bzero(&action, sizeof(action));
  action.sa_flags = SA_SIGINFO | SA_ONSTACK;
  action.sa_sigaction = &sigsegv_handler;
  sigaction(SIGSEGV, &action, NULL);

  stack_t segv_stack;
  segv_stack.ss_sp = valloc(SEGV_STACK_SIZE);
  segv_stack.ss_flags = 0;
  segv_stack.ss_size = SEGV_STACK_SIZE;
  sigaltstack(&segv_stack, NULL);

  char buf[10];
  char *src = "super long string";
  strcpy(buf, src);
}
16.4.2.3 libsigsegv
I also tried another library, libsigsegv 2. I followed two of their methods, but I could not make either work. The code is listed here for reference:
#include <signal.h>
#include <sigsegv.h>
#include <stdio.h>

int handler (void *fault_address, int serious) {
  printf("Handler triggered.\n");
  return 0;
}

void stackoverflow_handler (int emergency, stackoverflow_context_t scp) {
  printf("Handler received\n");
}

int main() {
  char* mystack; // don't know how to use
  sigsegv_install_handler (&handler);
  stackoverflow_install_handler (&stackoverflow_handler, mystack, SIGSTKSZ);
}
16.5 pThread
#include <pthread.h>
pthread_create (thread, attr, start_routine, arg)
pthread_exit (status)
pthread_join (threadid, status)
pthread_detach (threadid)
16.5.1 Create threads
If main() exits with pthread_exit() before the threads it has created finish, those threads will continue to execute. Otherwise, they will be terminated automatically when main() finishes.
#define NUM_THREADS 5

struct thread_data {
  int thread_id;
  char *message;
};

int main() {
  pthread_t threads[NUM_THREADS];
  struct thread_data td[NUM_THREADS];
  int rc;
  int i;
  for (i = 0; i < NUM_THREADS; i++) {
    td[i].thread_id = i;
    td[i].message = "This is message";
    rc = pthread_create(&threads[i], NULL, PrintHello, (void *)&td[i]);
    if (rc) {
      cout << "Error:unable to create thread," << rc << endl;
      exit(-1);
    }
  }
  pthread_exit(NULL);
}
16.5.2 Join and Detach
int main () {
  int rc;
  int i;
  pthread_t threads[NUM_THREADS];
  pthread_attr_t attr;
  void *status;

  // Initialize and set thread joinable
  pthread_attr_init(&attr);
  pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);

  for (i = 0; i < NUM_THREADS; i++) {
    cout << "main() : creating thread, " << i << endl;
    rc = pthread_create(&threads[i], &attr, wait, (void *)i);
    if (rc) {
      cout << "Error:unable to create thread," << rc << endl;
      exit(-1);
    }
  }

  // free attribute and wait for the other threads
  pthread_attr_destroy(&attr);
  for (i = 0; i < NUM_THREADS; i++) {
    rc = pthread_join(threads[i], &status);
    if (rc) {
      cout << "Error:unable to join," << rc << endl;
      exit(-1);
    }
    cout << "Main: completed thread id :" << i;
    cout << " exiting with status :" << status << endl;
  }

  cout << "Main: program exiting." << endl;
  pthread_exit(NULL);
}
16.6 Other
sleep
#include <unistd.h>
unsigned int sleep(unsigned int seconds);   // seconds
int usleep(useconds_t useconds);            // microseconds
int nanosleep(const struct timespec *rqtp, struct timespec *rmtp);
17 Shell Utilities
- sort -k 4 -n
- tee
for name in data/github-bench/*; do
  echo "===== $name" | tee -a log.txt
  { time helium --create-cache $name; } 2>&1 | tee -a log.txt
done
Another example: redirect output of time
{ time sleep 1 ; } 2> time.txt
{ time sleep 1 ; } 2>&1 | tee -a time.txt
- xz: a general-purpose data compression tool
- cpio: copy files between archives and directories
- shuf: random number generation
shuf -i 1-100 -n 1
- bc: calculator
- grep: -i (case insensitive), -n (show line number), -v (invert match), -H (show file name)
- xargs: consume the standard output of the previous command and pass it as arguments to a new command:
  find /etc -name '*.conf' | xargs ls -l   # the same as: ls -l $(find ...)
- time <command>: the total user and system time consumed by the shell and its children
- column: format its input into multiple columns, e.g. mount | column -t
- dd: dd if=xxx.iso of=/dev/sdb bs=4M; sync
- convert: convert xxx.jpg -resize 800 xxx.out.jpg   # 800x<height>
- nl <filename>: add line numbers; output to stdout
- ln -s <target> <linkname> (mnemonic: the new thing always comes last)
- ls sort order: -r reverse; -S file size; -X extension; -t time
17.1 Patch System
Create a patch (notice the order: old then new):
diff -u hello.c hello_new.c > hello.patch
diff -Naur /usr/src/openvpn-2.3.2 /usr/src/openvpn-2.3.4 > openvpn.patch
To apply a patch
patch -p3 < /path/to/openvpn.patch
patch -p1 < openvpn.patch -d /path/to/old/dir
The number after -p indicates how many leading path components are stripped when locating the old file.
To reverse (un-apply) a patch:
patch -p1 -R <patch
This works as if you swapped the old and new file when creating the patch.
17.2 tr: translate characters
tr <string1> <string2>
The characters in string1 are translated into the corresponding characters in string2: the first character of string1 becomes the first character of string2, and so on. If string1 is longer than string2, the last character of string2 is duplicated until string1 is exhausted.
Characters in the strings can be:
- any character represents itself, unless it is one of:
- \octal: a backslash followed by 1, 2 or 3 octal digits, or an escape such as \n, \t
- a-z: a range, inclusive and ascending
- [:class:]: a character class, e.g. space, upper, lower, alnum
- if [:upper:] and [:lower:] appear in the same relative position, they correlate
17.3 uniq: report or filter out repeated lines in a file
Repeated lines in the input will not be detected if they are not adjacent, so it may be necessary to sort the files first.
- -c: Precede each output line with the count of the number of times the line occurred in the input, followed by a single space. You can then combine this with sort -n.
-u
: Only output lines that are not repeated in the input.-i
: Case insensitive comparison of lines.
17.4 Find
find . -type f -name '*.flac' -exec mv {} ../out/ \;
Copy file based on find, and take care of quotes and spaces:
find CloudMusic -type f -name "*mp3" -exec cp "{}" all_music \;
- find
find ~/data/fast/pick-master/ -name '*.[ch]'
18 Trouble Shooting
18.1 Cannot su root
When su cannot change to root, run
chmod u+s /bin/su
18.2 in docker, cannot open chromium
failed to move to new namespace: PID namespaces supported, Network namespace supported, but failed: errno = Operation not permitted.
Solution
chromium --no-sandbox
18.3 Encoding
When converting MS windows format to unix format, you can use emacs
and call set-buffer-file-coding-system
and set to unix. Or you can
use dos2unix
, perhaps by
find . -name '*.java' | xargs dos2unix
18.4 Cannot open shared library
On CentOS, the default LD_LIBRARY_PATH does not contain /usr/local/lib. The consequence is that -lpugi and -lctags are not recognized, because those libraries are installed in that directory. Set the variable, or edit /etc/ld.so.conf.d/local.conf and add the path. After that, run ldconfig as root to update the cache.
18.5 auto expansion error for latex font
When compiling latex using the acmart template, an auto expansion error is reported.
Solution:
mktexlsr   # aka texhash
updmap-sys
Reference: https://github.com/borisveytsman/acmart/issues/95
18.6 time not up-to-date
Although I set the right timezone (check with timedatectl), the clock is still incorrect. To fix that, install the ntp package and run
sudo ntpd -qg
18.7 backlight on TP25
For regular laptops, using debian
cat /sys/class/backlight/intel_backlight/max_brightness
cat /sys/class/backlight/intel_backlight/brightness
echo 400 > /sys/class/backlight/intel_backlight/brightness
But on Archlinux, on TP25, The xorg-xbacklight
is not working. The
drop-in replacement acpilight
(aur) does.
To setup for video group users to adjust backlight, place a file
/etc/udev/rules.d/90-backlight.rules
SUBSYSTEM=="backlight", ACTION=="add", \
  RUN+="/bin/chgrp video %S%p/brightness", \
  RUN+="/bin/chmod g+w %S%p/brightness"
The command is still xbacklight
.
18.8 xinit won't start
On Debian, when I dist-upgraded Debian 8 Jessie to 9 Stretch, startx stopped working.
I tried installing Debian 9 from its own image, with the same result.
The error message says:
vesa: cannot read int vect
screen found but none have a usable configuration
xf86EnableIOPorts: failed to set IOPL for I/O
The trick is you need:
chmod u+s /usr/bin/xinit