Allocate memory to work as swap space on VMs

When physical RAM is fully in use, VM instances can fall back on swap space as a short-term substitute for additional physical RAM.

Contents of RAM that aren’t in active use or that aren’t needed as urgently as other data or instructions can be temporarily paged to a swap file. This frees up RAM for more immediate use.

Resolution

Calculate the swap space size

As a general rule, calculate swap space according to the following:

Amount of physical RAM                       Recommended swap space
2 GB of RAM or less                          2x the amount of RAM, but never less than 32 MB
More than 2 GB of RAM but less than 32 GB    4 GB + (RAM – 2 GB)
32 GB of RAM or more                         1x the amount of RAM
Note: Swap space should never be less than 32 MB.
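As a quick sanity check, the table can be computed directly on the instance (a minimal sketch; it reads total RAM from /proc/meminfo, a standard Linux interface, and prints the recommended swap size in MB):

#!/bin/bash
# Recommended swap size, following the table above
ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
ram_mb=$((ram_kb / 1024))
if [ "$ram_mb" -le 2048 ]; then
    swap_mb=$((ram_mb * 2))               # 2x the amount of RAM...
    [ "$swap_mb" -lt 32 ] && swap_mb=32   # ...but never less than 32 MB
elif [ "$ram_mb" -lt 32768 ]; then
    swap_mb=$((4096 + ram_mb - 2048))     # 4 GB + (RAM - 2 GB)
else
    swap_mb=$ram_mb                       # 1x the amount of RAM
fi
echo "Recommended swap: ${swap_mb} MB"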

1.    Create the swap file with the dd command. In this example, the swap file is 4 GB (128 MB x 32):

$ sudo dd if=/dev/zero of=/swapfile bs=128M count=32
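On many modern distributions, fallocate creates the file faster than dd (an alternative to the command above; dd remains the safer choice on filesystems where fallocate-backed swap files are not supported):

$ sudo fallocate -l 4G /swapfile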

2.    Update the read and write permissions for the swap file:

$ sudo chmod 600 /swapfile

3.    Set up a Linux swap area:

$ sudo mkswap /swapfile

4.    Make the swap file available for immediate use by adding the swap file to swap space:  

$ sudo swapon /swapfile

5.    Verify that the procedure was successful:

$ sudo swapon -s
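You can also confirm with free; the Swap row should now show the new size:

$ free -h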

6.    Enable the swap file at boot time by editing the /etc/fstab file.

Open the file in the editor:

$ sudo nano /etc/fstab

Add the following new line at the end of the file, save the file, and then exit:

/swapfile swap swap defaults 0 0

Done! To confirm visually, run htop and check the Swp gauge:

$ htop

Awesome Docker WordPress

Simple and easy containerized WordPress website with docker-compose.

Components used:

  • [image] wordpress:latest
  • [image] mysql:5.7
  • [image] phpmyadmin/phpmyadmin
  • docker
  • docker-compose
  • wget, curl, tar, mysqladmin

Usage

  • $ git clone https://github.com/khaledalam/awesome-docker-wordpress
  • $ cd awesome-docker-wordpress
  • Make sure the ports in the docker-compose file (see the sketch below) are open and not already in use.
  • Make sure no running containers are using the same ports. To delete all containers: $ docker rm $(docker ps -aq) -f
  • $ sudo ./build.sh
  • Navigate to:
    website: localhost or 127.0.0.1
    PMA: localhost:5000 or 127.0.0.1:5000

Note: if you want to use your existing WordPress files, remove or comment out the "<<< Download latest wordpress" section in the build.sh file and put your files in the app folder!
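For orientation, a minimal docker-compose.yml along these lines might look like the sketch below. This is an illustration only, not the repo's actual file: the service names, volume path, and passwords are placeholder assumptions, while the images and ports match the lists above.

version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example        # placeholder password
      MYSQL_DATABASE: wordpress
  wordpress:
    image: wordpress:latest
    ports:
      - "80:80"                           # website: localhost
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: example      # placeholder password
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - ./app:/var/www/html               # local WordPress files (see note above)
    depends_on:
      - db
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - "5000:80"                         # PMA: localhost:5000
    environment:
      PMA_HOST: db
    depends_on:
      - db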


GitHub: https://github.com/khaledalam/awesome-docker-wordpress

API Multiprocessing

Motivation

In this article, I am going to show you how to reduce the total running time of a simple API script from 6 minutes 17 seconds to 1 minute 14 seconds.

The Idea

I will share one of my favourite simple techniques, one that I like to use especially when working on data science tasks such as data visualization, data analysis, code optimization, and big data processing.

Processing a task sequentially may take a long time, especially when we are dealing with a huge amount of data (e.g. big inputs).

This technique takes advantage of parallelization capabilities in order to reduce the processing time.

The idea is to divide the data into chunks so that each engine takes care of the entries in its corresponding chunk. Each engine then reads, processes, and writes its own chunk independently, so all chunks are processed in parallel and finish in roughly the same amount of time.

Example

The example I chose for this article is genderizing names that consist of 2 alphabetic characters.
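If the script queries the public genderize.io API (an assumption on my part; the repo may call a different service), a single lookup looks like this:

$ curl "https://api.genderize.io?name=ab"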

Output Analysis Chart

Explanation

Clone the GitHub repo and follow the instructions in its Usage section.

Let’s generate all alphabet names that consist of 2 characters (to make the testing process easy).

We can use a Kali Linux penetration-testing tool [1] such as crunch:
$ crunch 2 2 > names.txt
This generates all possible 2-character alphabetic names (26² = 676 lines).

Then let’s create the directories needed for the splitting process:
$ mkdir subs/ subs/inputs subs/outputs subs/outputs/parts subs/outputs/all

Now we can split our input data. There are many ways to do that, but I prefer the Unix split command [2]:
$ split -l 100 -d names.txt ./subs/inputs/
This splits the names.txt file into smaller files of 100 lines each.

Now let’s run all the processes:
$ ./init.bash
After they finish, use the merger.py script to merge all the outputs.
The merging step is kept separate to avoid conflicting writes and to handle sorting and saving the combined results.
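To illustrate the pattern, here is a minimal sketch of what such a parallel driver can look like (this is not the repo's actual init.bash; the worker script name genderize.py and its arguments are hypothetical):

#!/bin/bash
# Launch one background worker per input chunk
for chunk in ./subs/inputs/*; do
    python3 genderize.py "$chunk" "./subs/outputs/parts/$(basename "$chunk")" &
done
wait    # block until every background worker has finished
echo "All chunks processed; merge the parts with merger.py"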


The Project on GitHub:
https://github.com/khaledalam/api-multiprocessing

An application that uses this technique:
Hiring-Related Email(https://github.com/khaledalam/amazon-jobs)

Interesting related ideas:
– Parallelizing using GPUs
– MapReduce (https://en.wikipedia.org/wiki/MapReduce)

[1] https://tools.kali.org/password-attacks/crunch
[2] https://en.wikipedia.org/wiki/Split_(Unix)