Friday, June 28, 2024

 

MONOLITHIC VS MICROSERVICE

THE LEGACY MONOLITH

A monolithic application is a software design pattern where all components of the application are interconnected and interdependent, forming a single unified unit. Here's a detailed breakdown of what a monolithic application entails:

Characteristics of Monolithic Applications

o   All functionalities of the application are contained within a single codebase.

o   This includes the user interface, business logic, and data access layers.

o   The different modules and components of the application are tightly integrated.

o   Any changes in one part of the application can potentially impact other parts, making it difficult to modify or update the system incrementally.

Disadvantages of Monolithic Applications

  1. Limited Flexibility:
    • Hard to adopt new technologies or frameworks incrementally.
    • Making changes or updates can be risky and time-consuming because of the tight coupling between components.
  2. Deployment Complexity:
    • The entire application must be redeployed even for small changes, which can lead to longer downtimes and increased risk of introducing bugs.
  3. Maintenance Challenges:
    • As the codebase grows, it becomes more complex and harder to manage.
    • Technical debt can accumulate quickly, making it harder to introduce new features or fix bugs.

 

What is a Monolithic Application?

Think of a monolithic application as a single, large restaurant kitchen where all the chefs work together in the same space to prepare every dish on the menu.

How it Works

  1. All-in-One Kitchen:
    • The entire kitchen is one big room where all the cooking happens.
    • In our restaurant, this means the appetizers, main courses, desserts, and drinks are all prepared in the same kitchen, using the same equipment and space.
  2. Tightly Connected Chefs:
    • All the chefs work closely together. If one chef needs to make a change, like using a different ingredient, they have to coordinate with everyone else in the kitchen.
    • This is like having all the cooking stations (grill, oven, prep area) in one big room. If one station changes, it can affect the others.
  3. Single Kitchen Operation:
    • When the restaurant needs to update the menu or fix something in the kitchen, they have to update or fix the entire kitchen.
    • Think of it like having to close the entire kitchen for renovations, even if only one part of it needs updating.

Disadvantages

  1. Hard to Scale:
    • If the restaurant gets busier and needs to serve more customers, scaling up the kitchen can be difficult because you can't just add more chefs without reorganizing the entire kitchen.
    • It’s like trying to fit more chefs into the same space without adding more kitchen areas, leading to overcrowding.
  2. Difficult to Update:
    • Making changes to the menu or fixing kitchen equipment can be risky and complicated because everything is so interconnected.
    • This is like needing to close the entire kitchen for any update or repair, which can disrupt the whole restaurant.
  3. Downtime:
    • Every time there's a major update or repair needed, the entire kitchen has to close, leading to more downtime.
    • It's like having to stop all cooking in the restaurant to fix or update one part of the kitchen.

When It's a Good Fit

A monolithic kitchen is great for smaller restaurants or when you're just starting out because it's simpler to manage and oversee. However, as the restaurant grows and the menu expands, it can become harder to manage efficiently and may require a different approach, like dividing the kitchen into specialized areas.

In summary, a monolithic application is like a single, large kitchen where everything is prepared and managed together. It's easy to start with but can be challenging to maintain and scale as the restaurant (or application) grows. For larger, more complex systems, a microservices architecture might be more suitable, as it allows for independent scaling, deployment, and management of different components.

 

 THE MODERN MICROSERVICE

What is a Microservice?

A microservice is a software design pattern where an application is composed of small, independent services that work together. Each service handles a specific piece of the application's functionality and can be developed, deployed, and scaled independently.

Characteristics of Microservices

  1. Independence:
    • Each microservice operates independently and focuses on a specific task or business function, such as user authentication, payment processing, or inventory management.
  2. Decoupled:
    • Microservices are loosely coupled, meaning changes to one service usually don’t affect the others.
  3. Small and Focused:
    • Microservices are small and manage a single piece of functionality, making them easier to understand and manage.

Advantages of Microservices

  1. Scalability:
    • Each microservice can be scaled independently. If one part of the application needs more resources, only that service needs to be scaled up.
  2. Flexibility:
    • Different microservices can use different technologies, languages, and databases best suited for their specific tasks.
  3. Resilience:
    • If one microservice fails, it doesn’t necessarily bring down the entire application. The other services can continue to operate.
  4. Faster Development:
    • Teams can develop, test, and deploy microservices independently, speeding up the development process and making continuous deployment easier.

When to Use Microservices

  • Large, Complex Applications:
    • Ideal for large applications with many different functions that need to be developed and maintained by separate teams.
  • Continuous Deployment:
    • When you need to deploy updates frequently and independently without affecting the whole application.


What is a Microservice?

Think of your application as a music band, and each musician in the band represents a different part of the application. In a microservice architecture, each musician (microservice) plays their own instrument and can perform independently, but together they create a complete performance.

How it Works

  1. Individual Musicians:
    • Each musician in the band plays a specific instrument: one plays the guitar, another plays the drums, and another plays the keyboard.
    • In your application, one microservice might handle user login, another might handle payments, and another might manage product inventory.
  2. Independent but Coordinated:
    • Each musician can practice and improve their part independently, but they coordinate to play music together.
    • Each microservice can be developed, deployed, and scaled independently, but they communicate and work together to form the complete application.
  3. Communication:
    • The musicians use sheet music or signals to stay in sync with each other.
    • Microservices communicate with each other through APIs (like sending messages or signals).
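The API-based communication in step 3 can be sketched with two tiny services. This is a minimal illustration only: the service names, port number, and JSON payload below are assumptions for the demo, not part of the original text.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryHandler(BaseHTTPRequestHandler):
    """A tiny 'inventory' microservice exposing one JSON endpoint."""

    def do_GET(self):
        # Any GET is treated as a stock lookup in this sketch.
        body = json.dumps({"item": "widget", "in_stock": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

def start_inventory_service(port):
    """Run the inventory service in a background thread."""
    server = HTTPServer(("127.0.0.1", port), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def checkout_asks_inventory(port):
    """The 'checkout' service knows only the inventory API, not its internals."""
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/stock") as resp:
        return json.loads(resp.read())

server = start_inventory_service(8900)
stock = checkout_asks_inventory(8900)
server.shutdown()
```

Note that the checkout side never touches the inventory service's code or data directly; it only sees the API response, which is what keeps the two services independently deployable.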

Advantages of Microservices

  1. Scalability:
    • If the band becomes popular, you can add more guitarists or drummers without needing to change the whole band.
    • In your application, if one service needs more resources (like handling more users), you can scale up just that service.
  2. Flexibility:
    • Each musician can use their own instrument and techniques that are best suited for their part of the performance.
    • Different microservices can use different technologies that are best suited for their specific tasks.
  3. Resilience:
    • If the drummer can't make it to the performance, the rest of the band can still play, albeit with some adjustments.
    • If one microservice fails, the others can keep running.
  4. Faster Development:
    • Each musician can practice their part separately, making it quicker to learn new songs.
    • Development teams can work on different microservices independently, speeding up the overall development process.

When to Use Microservices

  • Large, Complex Projects:
    • If your application is big and has many different features, breaking it into smaller, manageable pieces can be very helpful.
  • High Scalability Needs:
    • If different parts of your application need to handle different amounts of work, microservices allow you to scale them independently.
  • Frequent Updates:
    • If you need to update parts of your application often, microservices make it easier to update one part without affecting the others.

Conclusion

A microservice architecture is like a music band where each musician plays their own instrument independently, but together they create beautiful music. This makes it easier to manage, update, and scale different parts of the application separately. While it can be more complex to manage, the benefits often make it worth it for large and complex applications.


 Container Orchestrators

With enterprises containerizing their applications and moving them to the cloud, there is a growing demand for container orchestration solutions. While many solutions are available, some are mere re-distributions of well-established container orchestration tools, enriched with extra features but, sometimes, limited in flexibility.

 

Where to Deploy Container Orchestrators?

·        Most container orchestrators can be deployed on the infrastructure of our choice - on bare metal, on Virtual Machines, on-premises, or on public and hybrid clouds.

·        In addition, there are cloud solutions which allow production Kubernetes clusters to be installed, with only a few commands, on top of cloud Infrastructure-as-a-Service. These solutions paved the way for managed container orchestration as-a-Service, more specifically the managed Kubernetes-as-a-Service (KaaS) solutions offered and hosted by the major cloud providers. Examples of KaaS solutions are Amazon Elastic Kubernetes Service (Amazon EKS), Azure Kubernetes Service (AKS), DigitalOcean Kubernetes, Google Kubernetes Engine (GKE), IBM Cloud Kubernetes Service, and Oracle Container Engine for Kubernetes.

 

 

 

Friday, June 21, 2024

 Dockerize an Application (lab 9)


Docker also gives you the capability to create your own Docker images, and this can be done with the help of Dockerfiles. A Dockerfile is a simple text file with instructions on how to build your images.

The following steps explain how to create a Dockerfile.

Step 1 − Create a file called Dockerfile and edit it using:

vi Dockerfile

Step 2 − Add the following instructions to your Dockerfile.

# This is a sample Image
FROM ubuntu:latest
MAINTAINER Joe Ben <Jen200@icloud.com>
RUN apt-get update
RUN apt-get install -y nginx
ENTRYPOINT ["/usr/sbin/nginx","-g","daemon off;"]
EXPOSE 80






The following points need to be noted about the above file −

  • The first line "# This is a sample Image" is a comment. You can add comments to the Dockerfile with the help of the # symbol.

  • The next line has to start with the FROM keyword. It tells Docker which base image to build your image from. In our example, we are creating an image from the ubuntu image.

  • The MAINTAINER instruction records the person who maintains this image; you specify the keyword followed by the maintainer's name and email address. (Note that MAINTAINER is deprecated in newer versions of Docker in favor of a LABEL maintainer=... instruction.)

  • The RUN command executes commands while building the image. In our case, we first update the package lists and then install the nginx server on our Ubuntu image.

  • ENTRYPOINT: Specifies the command that will be executed when a container starts from the image

  • EXPOSE: Documents the port on which the container listens

  • Once you are done with that, just save the file.

Step 3 − Save the file, then build the image from the Dockerfile using the commands below:

docker build -t kakrahanson-app .

sudo docker images    # show the Docker image that was just built



To run a container from the image, use the following command:

docker run -itd -p host_port:container_port image_id

docker run -itd -p 9000:80 e916a9463607



Make sure port 9000 is open, then confirm the container is running with:

docker ps






Saturday, June 15, 2024

 CI/CD Pipelines

Branching Strategies/Feature Branch

This lab aims to help non-technical people, such as product managers, project managers, and executives, support their engineering teams on the path to building high-quality products, keeping a high delivery pace, and avoiding unpleasant surprises for valued customers.

The continuous delivery process starts with the keyboard strokes of your engineering team and ends with the experience your customer has with the product. The goal of continuous delivery is to shorten the time between these two points and to avoid mistakes along the way. A well-oiled, mature continuous delivery system takes the deployment management burden off your engineering team, allowing them to focus on engineering and building the product.

The very first and most important step towards continuous delivery is a centralized place to store and share code. Code is the key (and sometimes the only) outcome of software engineering activities, and this code needs a place to live. This place is a common and shared environment that everybody on your engineering team knows and uses every day. You can think of a code repository as a Google Drive or Dropbox for engineers to share and work on the code. A great place to start your journey to build continuous delivery is to introduce a code repository in your organization if you happen not to have one.
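As a concrete sketch, introducing a repository takes only a few Git commands. The project name, file contents, and the commented-out remote URL below are placeholders for illustration, not a real project:

```shell
# A hedged sketch: create a local repository and record a first revision.
mkdir -p my-product
cd my-product
git init -q                                   # start tracking code revisions
echo 'print("hello, team")' > app.py
git add app.py
git -c user.name="Demo" -c user.email="demo@example.com" \
    commit -q -m "Initial commit"
# Share it through a hosted provider such as GitHub, GitLab, or Bitbucket:
# git remote add origin git@github.com:example/my-product.git
# git push -u origin HEAD
cd ..
```

The push step is commented out because it requires an account with a hosting provider; everything before it works on any machine with Git installed.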

 


The code repository of choice is most often available through providers like GitHub, GitLab, or Bitbucket, to name a few [1]. These providers offer powerful free tiers that will serve you for a long time until you see a need to pay for premium features. Check in with the team to see if they have any preference or prior experience before choosing a specific platform.

The code repository is an integral part of any software development process and should be adopted by any technology company early in its life. The code repository makes it possible for engineers to store and share code, track revisions, and manage incoming change requests.

Review, inspect and test every product change

All changes to your products are made by humans, and humans are prone to making mistakes; that is not bad, but rather how things are. And while we can ask our colleagues and teams to write bug-free code, bugs still slip through the cracks. To minimize the risk of getting bad product changes into our customers’ hands, we can introduce a couple of additional hurdles that each change must clear. First, we can employ computer programs (automated tests and various code analysis tools) to be impartial judges of the quality of the changes we introduce. Second, other people on the team might be able to spot issues or deficiencies in the change that the author overlooked (the manual review process). Ideally, both of these steps would be triggered automatically on every code change.

 


Most code repository providers (including GitHub, GitLab, and Bitbucket) provide tools to build continuous delivery pipelines. A pipeline is a set of predefined steps that run automatically following a particular event (a code commit, a time schedule, a manual trigger). An example of a pipeline definition could be “on every commit, build the product and run automated tests on it”. Configuring and streamlining pipelines will require some time and iteration but will quickly bring the reward of stable products and happy customers.
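A pipeline definition like “on every commit, build the product and run automated tests on it” typically lives in a small configuration file in the repository. Below is a sketch of what this could look like as a GitHub Actions workflow; the file path, job name, and make targets are illustrative placeholders, not a prescribed setup:

```yaml
# .github/workflows/ci.yml — illustrative pipeline definition
name: CI
on: [push, pull_request]           # run on every commit and change request
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4  # fetch the submitted change
      - name: Build the product
        run: make build            # placeholder build command
      - name: Run automated tests
        run: make test             # placeholder test command
```

Other providers use the same idea with slightly different syntax (GitLab CI uses .gitlab-ci.yml, Bitbucket uses bitbucket-pipelines.yml).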

One way the process of accepting a new code change into the repository can look is as follows:

1.    Submit new code change. The author of a code change submits a change request to the code repository. At this stage, the code is placed in a separate “sandbox” (also known as a pull request or change request). This is done to prevent potentially harmful changes from making their way into the main code base unverified.

2.    Trigger automated tests. On changes submitted to the code repository, the pipeline executes tests that validate basic functionality, compliance with code style, and security standards. If the pipeline fails, it notifies the author of the spotted errors and waits for the author to provide a fix.

3.    Open a code review request. At the same time, either automatically after smoke tests or at the author's request, the code management system opens a code review request where teammates can check out the change candidate, leave comments and improvement suggestions, or ask questions. Once everyone involved is satisfied and approves the request, the change can move to the next stage. Having 1–2 people look through the code is usually enough.

Once the steps above are completed, your code is ready to be merged into the main codebase, where it will slowly progress through a series of stages and additional verifications.

 

 


