Composer Smart Updater

Reflection:
In a big Symfony 5.0 project we wanted to upgrade our Composer packages (around 140 updates were available, including Doctrine/ORM packages, fixtures, migrations, API Platform, etc.).
After running composer update, the migration command symfony console doctrine:migrations:migrate -n no longer worked properly: there were conflicts, and a different migrations table name pattern meant it couldn't detect the migration file names. In addition, some packages required PHP 7.4.11 (released this month, 1 October 2020) while our environment used PHP 7.4.3, and some required Composer v2 while our environment still used an old Composer version.


I created a Python script that reads the composer.json file, loops over its require and require-dev packages, installs each package individually, then runs the migration command and checks whether everything is working fine or not!

composer-smart-updater.py on GitHub: https://github.com/khaledalam/composer-smart-updater

This script helps detect exactly which package causes the conflict.
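
As a rough illustration of the same idea in shell form (the actual script is Python, see the repo above; the jq filter, composer flags, and console binary below are assumptions):

#!/bin/bash
# Sketch: update packages one by one and stop at the first one that breaks the migrations.
# Assumes jq and composer are available and the script runs from the project root.
packages=$(jq -r '(.require + (."require-dev" // {})) | keys[]' composer.json | grep -vE '^(php|ext-)')

for pkg in $packages; do
    echo ">> Updating $pkg ..."
    composer update "$pkg" --with-dependencies

    # Re-run the migrations after each update to catch the package that causes the conflict.
    if ! symfony console doctrine:migrations:migrate -n; then
        echo "!! Migrations failed after updating $pkg"
        exit 1
    fi
done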

In addition, I unified the cloud infrastructure environment (AWS) and the Git pipeline (Bitbucket) container images to use the most up-to-date versions.

Using Pipelines to Invalidate an AWS CloudFront Cache that Points to an AWS S3 Bucket

Steps:

  • Add Repository Variables for Distribution IDs
  • Add Repository Variables for AWS keys as well
  • Add an invalidation step to your pipeline, e.g.:
- step:
    name: ">> Invalidate AWS CloudFront (by: Khaled alam)"
    script:
      - pipe: atlassian/aws-cloudfront-invalidate:0.4.1
        variables:
          AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
          AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
          DISTRIBUTION_ID: $TEST_CLOUDFRONT_DISTRIBUTION_ID
          # PATHS: <string> # Optional
          DEBUG: "true" # Optional

Note: in case you want to invalidate only specific files or paths, define them in the PATHS variable (a space-separated string of paths, e.g. PATHS: "/index.html /images/*").

Confirm AWS Elastic Beanstalk Deployment Progress

Amazon Web Services Elastic Beanstalk

Use case:
You are using Amazon Web Services (AWS) Elastic Beanstalk to handle your deployment process via some Git version control pipelines, and there is a step that needs to run after your code has been deployed successfully.

There are a lot of solutions that can be used, e.g. CloudWatch, Lambda functions, etc., but I decided to invent my own solution that is easy, interesting, and costs $0 :))



Idea:
Use a unique value at the pipeline level (e.g. the pipeline build number) and add it to the new code before uploading it to the server, then call some endpoint or file to check this value. While the returned value does not equal the new unique value from the pipeline, the deployment has not finished yet and we should wait!


How do we add the new value to the new code?
– We create a special endpoint or file in our project.
– In the pipeline, before uploading the new code to the server, we change the value in this file (e.g. deploy_version.txt) or endpoint ({api_link}/current_version), i.e. set some variable to the new unique value (you can do that using e.g. the sed Unix command).

Example:

We have a static endpoint that returns a JSON result with the value of the currentVersion variable, which is placed in e.g. `version.js`:

const currentVersion='123';

In the pipeline, before the upload-code step:
- sed -i "s/currentVersion='123'/currentVersion='$BITBUCKET_BUILD_NUMBER'/g" src/version.js


How do we validate deployment progress?

Create a bash script that sends a curl request to that endpoint and checks whether the current version returned by the endpoint (the currently deployed code) equals the new unique pipeline value.

What might this bash script look like? Email me for more details (khaledalam.net@gmail.com).
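
As a rough starting point, here is a minimal sketch of such a check (the endpoint URL, response parsing, retry count, and sleep interval below are assumptions and should be adapted to your project):

#!/bin/bash
# Sketch: poll the version endpoint until it reports the new pipeline build number.
EXPECTED_VERSION="$BITBUCKET_BUILD_NUMBER"
VERSION_URL="https://example.com/api/current_version"   # hypothetical endpoint

for attempt in $(seq 1 60); do
    # Extract the first number from the response (adjust the parsing to your response format).
    deployed=$(curl -s "$VERSION_URL" | grep -o '[0-9]\+' | head -n 1)

    if [ "$deployed" = "$EXPECTED_VERSION" ]; then
        echo "Deployment confirmed: version $deployed is live."
        exit 0
    fi

    echo "Attempt $attempt: deployed '$deployed', expecting '$EXPECTED_VERSION'. Waiting..."
    sleep 10
done

echo "Timed out waiting for version $EXPECTED_VERSION."
exit 1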


Allocate memory to work as swap space on VMs

When physical RAM is already in use, VM instances use swap space as a short-term replacement for physical RAM.

Contents of RAM that aren’t in active use or that aren’t needed as urgently as other data or instructions can be temporarily paged to a swap file. This frees up RAM for more immediate use.

Resolution

Calculate the swap space size

As a general rule, calculate swap space according to the following:

Amount of physical RAM                       Recommended swap space
2 GB of RAM or less                          2x the amount of RAM, but never less than 32 MB
More than 2 GB of RAM but less than 32 GB    4 GB + (RAM – 2 GB)
32 GB of RAM or more                         1x the amount of RAM
Note: Swap space should never be less than 32 MB.
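
For reference, here is a small sketch that applies the table above to the RAM of the current machine (reading /proc/meminfo assumes a Linux VM; adjust as needed):

#!/bin/bash
# Sketch: compute the recommended swap size (in MB) from the table above.
ram_mb=$(awk '/MemTotal/ {print int($2 / 1024)}' /proc/meminfo)

if [ "$ram_mb" -le 2048 ]; then
    swap_mb=$(( ram_mb * 2 ))
    [ "$swap_mb" -lt 32 ] && swap_mb=32        # never less than 32 MB
elif [ "$ram_mb" -lt 32768 ]; then
    swap_mb=$(( 4096 + ram_mb - 2048 ))        # 4 GB + (RAM - 2 GB)
else
    swap_mb=$ram_mb                            # 1x the amount of RAM
fi

echo "RAM: ${ram_mb} MB -> recommended swap: ${swap_mb} MB"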

1.    Create the swap file. In this example dd command, the swap file is 4 GB (128 MB x 32):

$ sudo dd if=/dev/zero of=/swapfile bs=128M count=32

2.    Update the read and write permissions for the swap file:

$ sudo chmod 600 /swapfile

3.    Set up a Linux swap area:

$ sudo mkswap /swapfile

4.    Make the swap file available for immediate use by adding the swap file to swap space:  

$ sudo swapon /swapfile

5.    Verify that the procedure was successful:

$ sudo swapon -s

6.    Enable the swap file at boot time by editing the /etc/fstab file.

Open the file in the editor:

$ sudo nano /etc/fstab

Add the following new line at the end of the file, save the file, and then exit:

/swapfile swap swap defaults 0 0

Done :)

Run $ htop and check the Swp [ ] section.