If, like me, you want to learn PostgreSQL but don’t want to mess up your system, Docker is for you.


To understand the importance of goroutines, we must first understand concurrency and parallelism.

As we all know, it’s the CPU, or more precisely, a CPU core, that executes our code, and one core can handle only one instruction at a time. That was fine back in the days when people still ran their programs from punch cards, but today it would hardly be satisfying if we had to run our programs one by one. There are basically two solutions to this problem: have more cores, or interleave programs.

By having more cores, we can increase the number of programs…


Based on this Stack Overflow answer

Before the introduction of Go modules, the file structure of local Go projects looked like this (everything lived under $GOPATH/src):

// an executable
$GOPATH/src/github.com/you/project1
-> $GOPATH/src/github.com/you/project1/package_main
-> $GOPATH/src/github.com/you/project1/package1
-> $GOPATH/src/github.com/user_a/project1/package2 // imports project3
-> $GOPATH/src/github.com/user_a/project1/package3
// an executable
$GOPATH/src/github.com/user_b/project2
-> $GOPATH/src/github.com/user_b/project2/main_package
-> $GOPATH/src/github.com/user_b/project2/package_a
-> $GOPATH/src/github.com/user_b/project2/package_b
// a library
$GOPATH/src/github.com/user_c/project3
-> $GOPATH/src/github.com/user_c/project3/package_a
-> $GOPATH/src/github.com/user_c/project3/package_b

While it’s simple, it’s also cumbersome and error-prone.

After the introduction of Go modules, project management became more convenient, but also more complicated. The old way still works, of course, but it is no longer recommended, and you can now place your project anywhere you like, like…


Despite what’s said about Go slices in this blog post, if you really know what dynamic arrays are internally, you should immediately recognize the resemblance between Go’s slices and dynamic arrays.

To be more precise, a typical dynamic array implementation looks like

typedef struct {
    int *array; // underlying storage, allocated with malloc
    int used;   // number of elements currently in use (the length)
    int size;   // number of elements allocated (the capacity)
} Array;

where the underlying array is allocated by malloc and will be reallocated by realloc if need be. Now, let’s take a look at the internals of a Go slice. Oh, it has three parts too: a pointer to an underlying array, a length, and a capacity. Sounds familiar, right? What should we call such a structure if not a “dynamic array”?


Based on https://www.calhoun.io/how-do-interfaces-work-in-go/

Coming from a C background, I did not understand the importance of interfaces at first, but as it turns out, in some ways interfaces can make object-oriented programming a lot easier.

What confused me about Go’s interfaces was this: if we still have to write the listed functions again and again for each type we define, what good does having an interface do? To understand this, we must first understand inheritance and polymorphism, two very useful concepts from the object-oriented programming paradigm.

The basic idea behind inheritance is that a class can inherit…


Test your HTTP server with ApacheBench (ab, shipped with apache2) like

ab -n 100 -c 10 -rkl http://127.0.0.1:1234

Check out man ab to find out what each of these options means.

A similar alternative to ab is wrk.


Install packages using pacman

# force-refresh the package databases and upgrade the system
sudo pacman -Syyu
# install a package, apache in this case
sudo pacman -S apache

Based on this Medium post

Long story short, when you use epoll, the kernel keeps the relevant data in kernel space, monitors the files in the interest list behind the scenes, and hands back only a short list of ready descriptors. poll/select, on the contrary, is more like an on-demand service: it remembers nothing and returns everything, which means you have to pass every descriptor to the kernel on every call, wait until the kernel is done polling, get back the long list of descriptors you just sent, and loop through it yourself to find out what’s up.

More details can be found here.


Based on The Linux Programming Interface

Today, if you are writing a server (an HTTP daemon, say), you should already expect tens of thousands of connections, and oftentimes, after these connections are established, most of them will stay alive for minutes to save unnecessary handshakes. But how can we handle such a huge number of (long-lived) connections simultaneously?

The most natural response to this problem would be to use more processes or threads to handle new connections. However, even today, it would still be too expensive to create tens of thousands of threads, let alone…


Based on this Stack Overflow answer and this Stack Exchange answer

Just like cmp and jl, the call and ret instructions accomplish many tasks in a single instruction, so it’s a bit hard to see what they are doing (if you don’t know already). If you take a look at the cdecl calling convention for Intel x86, it says only that a function will first push the contents of ebp onto the stack and pop them back into ebp when it’s done, to restore it, like

push ebp      ; prologue: save the caller's frame pointer
mov ebp, esp  ; set up our own frame
...
pop ebp       ; epilogue: restore the caller's frame pointer
ret

But what about eip? It’s mentioned from time to time that…

Isshiki🐈

Writing short notes.
