Linux+ - XK0-005

14 ) TCP/IP & Networking

- Describing Networking

Every network device needs an IP address assigned to it.

  • ip - a relatively new utility (part of the iproute2 package) that replaces several older networking tools.

ip addr - show the IP addresses of all interfaces.

  • ss - socket statistics utility.

ss -an - show all sockets.

a - show all sockets (listening and established).

n - numeric output; do not resolve names via DNS.
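As a quick hedged sketch (assuming the iproute2 utilities are installed), the two commands combine into a fast health check:

```shell
# One-screen view of interfaces and sockets (iproute2 assumed installed)
ip -brief addr show   # one line per interface: name, state, addresses
ss -tan               # TCP sockets, all states, numeric output (no DNS lookups)
```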

- Configuring Networking in Linux

Most systems come pre-configured. There are many tools for configuring networking.

  • ifconfig - an older tool, no longer installed by default on most distros.

Run without any arguments, it prints the networking information of your system.

  • route - controls the system routing table. You can use it to set your default gateway.
  • ip route - shows the routing table of your system. ip is the newer command and is found in pretty much all distros nowadays.
  • dhclient - requests an IP address from an available DHCP server on the network.
  • systemctl restart network.service - restarts the whole network stack.
  • service network restart - used by distros still on SysVinit. Current distros translate the old command to its systemd equivalent.

How to configure the network permanently varies according to the network manager in use.

Global DNS configuration is usually stored separately, in locations that differ between distros.

  • hostnamectl set-hostname HOST_NAME - sets the system hostname.
  • nmcli - is a tool that can be used to manage network config as well.

nmcli device status - can be used to verify which devices are currently being managed by nmcli.

nmcli device show INTERFACE_NAME - inspect the configuration of a particular device.

nmcli connection edit INTERFACE_NAME - enter the interactive CLI mode to configure a specific interface. save persistent persists the configuration to disk.

nmcli connection reload - re-reads the connection configuration files from disk so the changes can be applied.
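A hedged walk-through of the nmcli commands above, assuming a NetworkManager-managed interface named eth0 (the name is an example):

```shell
nmcli device status         # which devices NetworkManager currently manages
nmcli device show eth0      # inspect the configuration of eth0
nmcli connection edit eth0  # interactive editor; 'save persistent' writes to disk
nmcli connection reload     # re-read the connection files from disk
```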

15 ) Troubleshooting Network Connections

This section describes how to troubleshoot Linux network connections.

  • ping - send ICMP echo requests to a host and receive replies.
  • traceroute - map out all the hops between the host and a remote host.
  • tracepath - a simpler version of traceroute; it does the same job with fewer capabilities.
  • nslookup - query the system's default DNS server.
  • dig - another tool to query DNS servers; dig returns much more detailed information than nslookup.
  • tcpdump - a network sniffer; a powerful tool to capture and analyse network traffic.
  • netcat - nc is a utility that reads and writes data over TCP and UDP connections. It is used both for attacks and for security work, and it helps us debug and investigate networks. It runs on all major operating systems.

16 ) Installing & Managing Software

- Managing Software Packages Using Apt

apt is used to manage software on Debian-based distros.

  • apt - utility used to manage packages on Ubuntu/Debian based systems.
  • apt install PACKAGE_NAME - is used to install new software.
  • /etc/apt/sources.list - contains the list of repositories the OS looks for new packages.
  • apt list PACKAGE_NAME - search for a package by its exact name.
  • apt search REGULAR_EXPRESSION - search all your listed repositories; the argument accepts regular expressions to narrow the search.
  • apt update - update the packages cache.
  • apt upgrade - upgrade packages.
  • apt dist-upgrade - upgrade the distro.
  • apt remove PACKAGE_NAME - remove packages.
Also look into dpkg and apt-get for the exam.

- Managing Software Packages Using yum and dnf

yum is the traditional package manager for Red Hat/Fedora-based distros; dnf is its replacement on newer releases and accepts the same sub-commands.

  • /etc/yum.repos.d - location of the repositories list.
Example repo file: webmin.repo
  • yum list PACKAGE_NAME - search for a package named as PACKAGE_NAME. It accepts wildcards.
  • yum search SEARCH_TERM - search for keywords in the packages meta file.
  • yum info PACKAGE_NAME - displays information for the package queried.
  • yum install PACKAGE_NAME - install a package.
  • yum update - check for available package upgrades and apply them.
  • yum remove PACKAGE_NAME - remove the specified package.

17 ) Installing Software from Source Code

- Building and Installing from Source Code

Building software from source has a few advantages; one is the ability to customise the software for your hardware, increasing performance and even adding new features.

Three basic tools are typically used to compile software:

  • gcc - GNU project C and C++ compiler
  • gzip - tool to expand or compress files.
  • make - GNU make utility to maintain groups of programs.

1) ./configure - run this script inside the unpacked source folder to generate the Makefile.

2) make - compiles the software using the Makefile.

3) make install - installs the binaries into the system folders.
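The classic build flow, sketched under the assumption of a GNU autotools project (the archive name is an example):

```shell
tar -xzf example-1.0.tar.gz   # unpack the source archive
cd example-1.0
./configure                   # generate the Makefile for this system
make                          # compile the sources
sudo make install             # install the binaries into system folders
```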

18 ) Security Best Practices

- Describing Linux Security

CompTIA Linux+ relies on three security pillars (the CIA triad):

  • Confidentiality - data should be protected from unauthorised third parties.
  • Integrity - assurance that the data has not been modified.
  • Availability - the data should be accessible whenever it is needed.

  • chroot jail - isolates an application from the rest of the system, protecting the system from attacks on that application.
  • auditd - monitors file changes on the system and reports when changes are detected.
  • syslog - logs can be sent to a remote server so they cannot be wiped in case of an attack.
  • encryption - another technique to protect your data from unauthorised access.
  • shred - overwrite a file or disk with random data to make recovery difficult.

-n, --iterations=N - defines how many overwrite passes shred performs. The recommended minimum is 3.

  • cryptsetup --verbose --verify-password luksFormat VOLUME - manage plain dm-crypt and LUKS encrypted volumes.

--verbose - output verbose information of the operation.

--verify-passphrase - ask for the password twice.

luksFormat - format and encrypt the new volume.

  • cryptsetup luksOpen VOLUME VIRTUAL_MAPPER_NAME - maps the volume to a virtual mapper under /dev/mapper.
  • mkfs.xfs VOLUME_MAPPER_PATH - format a mapped drive.
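The full workflow sketched end-to-end; /dev/sdb1 and the mapper name secret are example values, and luksFormat destroys existing data, so treat this as illustration only:

```shell
cryptsetup --verbose --verify-passphrase luksFormat /dev/sdb1  # encrypt the volume
cryptsetup luksOpen /dev/sdb1 secret                           # map it to /dev/mapper/secret
mkfs.xfs /dev/mapper/secret                                    # put a filesystem on the mapping
mount /dev/mapper/secret /mnt                                  # use it like any other volume
```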

- Hardening SSH Clients and Servers

  • Use key authentication.
  • Disable empty passwords.
  • Disable root login.
  • Disable password login.
  • ssh-keyscan SERVER_IP - output the public key(s) of a server.
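The hardening points above map to sshd_config directives; a minimal sketch of an /etc/ssh/sshd_config excerpt (reload sshd after editing):

```shell
# /etc/ssh/sshd_config excerpt implementing the points above
PubkeyAuthentication yes
PermitEmptyPasswords no
PermitRootLogin no
PasswordAuthentication no
```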

19 ) SELinux & AppArmor

- Configuring SELinux Security

SELinux stands for Security-Enhanced Linux. It lets us restrict what applications can access. The technology came from the NSA. It creates walls around applications by setting tags (labels) and defining which parts of the system each app can access.

SELinux has 3 modes.

  • disabled - SELinux is off entirely; nothing is logged or blocked.
  • permissive - logs every time an application tries to access something it is not authorised to, but does not block it.
  • enforcing - logs and blocks every unauthorised access.

  • sestatus - displays the status of SELinux.
  • setenforce MODE - change SELinux to the specified MODE. ( permissive | enforcing )
  • /etc/selinux/config - location of the configuration file.
  • /var/log/audit - location of SELinux log files.
  • ls -lZ - show SELinux tags for files and folders.
  • ps auxZ - list processes with their SELinux labels.

SELinux uses labels to control access to resources.

Example: the Apache web server

On CentOS, Apache comes pre-configured with the correct labels. In the default document root /var/www/html, index.html carries the httpd_sys_content_t label.

If the document root is moved to a new folder such as /website, the files there will typically not carry the expected label.

Let's move the Apache root directory to /website.

/etc/httpd/conf/httpd.conf

Apache will display the test page, or fail, when we try to access the website.

Inspecting the SELinux log files we can see the denied log entries.

/var/log/audit/audit.log

To allow access we need to change the labels on that directory and its files. SELinux labels are called contexts.

We use the change-context command, chcon, to change the SELinux context of a file or folder.

  • chcon -Rv --type=httpd_sys_content_t /website - recursively changes the context of the /website folder.

Although the command above changes the folder's context, the recommended approach is to change the SELinux policy; otherwise the change will be removed when the default context is reapplied.

  • restorecon -Rv /website - changes a file/folder to its default context.
  • semanage fcontext -a -t httpd_sys_content_t "/website(/.*)?" - changes the SELinux policy itself; the regular expression covers the folder and everything under it.

If restorecon is run again, the folder will now be relabelled according to the policy we set.

SELinux can also block access to network ports.

  • semanage port -a -t http_port_t -p tcp 8080 - allows Apache to use an additional port.
  • semanage port -l - list all ports SELinux has on its policies.
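Putting the /website example together (the "/website(/.*)?" pattern is the usual way to cover a folder and its contents):

```shell
semanage fcontext -a -t httpd_sys_content_t "/website(/.*)?"  # record the policy
restorecon -Rv /website                                       # relabel files per the policy
ls -lZ /website                                               # verify the new context
```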

- Configuring AppArmor Security

AppArmor is very similar to SELinux and is broadly used on Debian-based distros.

The main difference is that SELinux ships policies that can restrict every process running on the system, while AppArmor is designed to confine individual apps: it creates per-application profiles.

SELinux attaches labels to the inodes directly on disk. AppArmor instead uses path names, which makes it simpler to work with. However, because AppArmor matches on paths, it is susceptible to path-name manipulation.

AppArmor has two modes: complain and enforce.

complain - only logs what monitored apps do.

enforce - blocks disallowed actions.

AppArmor operates with profile files located in /etc/apparmor.d/.

  • aa-unconfined - lists running processes that are not confined by an AppArmor profile.
  • aa-genprof PROCESS_NAME - creates a profile for the specified process: it scans what the process accesses, asks the user what to allow, and writes the profile.
  • aa-enforce PROFILE_FILE - turns on and enforces a profile.

19 ) Network Firewall & Traffic Filtering

- Using firewalld to Implement a Host-based Firewall

Distros moved to firewalld to better integrate with systemd.

firewalld offers easier management compared to iptables.

  • firewall-cmd - the command used to configure firewalld; firewall-cmd --state reports whether firewalld is running.

firewalld works by placing interfaces into zones; traffic passing through a zone is filtered by that zone's rules.

  • firewall-cmd --get-zones - list all the defined zones.
  • firewall-cmd --get-default-zone - show the default zone that interfaces are placed into.
  • firewall-cmd --get-active-zones - show the zones that contain interfaces.
  • firewall-cmd --reload - load the on-disk (permanent) configuration into the running configuration.

Use the --permanent flag to save rules to the disk.

To control the default zone an interface joins when it comes up, change that interface's config file and add the interface to the desired zone.

  • firewall-cmd --get-services - list all the pre-configured services that firewalld supports.
  • /usr/lib/firewalld/services - is the location of the services' configuration files.
  • firewall-cmd --zone=ZONE --permanent --add-service=http - allow http traffic on the specified zone. If ZONE is omitted the rule is applied to the default zone.
  • firewall-cmd --permanent --list-services - is used to list all allowed services.
  • firewall-cmd --permanent --add-port=8080/tcp - this command can be used to add a non-defined service port. If you want more control an XML file can be created with more options.
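An example session opening HTTP plus a custom port in the public zone (the zone and port are illustrative):

```shell
firewall-cmd --zone=public --permanent --add-service=http  # allow HTTP traffic
firewall-cmd --permanent --add-port=8080/tcp               # allow a custom port
firewall-cmd --reload                                      # load saved rules into the running config
firewall-cmd --zone=public --list-all                      # verify the zone's rules
```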

- Using iptables to Implement a Host-based Firewall

iptables uses chains to manipulate traffic. Traffic entering a chain is matched against rules, and an action is applied to it.

There are three common actions (targets): ACCEPT, REJECT and DROP.

  • iptables --list-rules - show all rules.
  • iptables --list - show all chains and their rules.
  • iptables-save - print all rules to stdout.
  • iptables-save > /etc/sysconfig/iptables - save the in-memory rules to the main config file.
  • iptables -F - flush all in-memory rules; the saved rules can then be reloaded from the config file (e.g. with iptables-restore).
  • iptables -vnL --line-numbers - show all rules and chains with packet/byte counters and line numbers.

watch -n 0.5 iptables -vnL - re-run a command at the specified interval; handy as a live "dashboard" for iptables.
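A sketch of adding and persisting a rule (the rule and the RHEL-style save path are examples):

```shell
iptables -A INPUT -p tcp --dport 22 -j ACCEPT  # append a rule: accept inbound SSH
iptables -vnL --line-numbers                   # confirm, with counters and line numbers
iptables-save > /etc/sysconfig/iptables        # persist the in-memory rules
```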

20 ) Backup & Restore

- Performing Backups and Restoring Files

Full backups - all data is backed up.

Incremental backups - only the changes since the previous backup are saved. To restore, the full backup and every subsequent incremental must be restored.

Differential backups - only the changes since the last full backup are saved, so only two backups need to be restored to recover all the data. However, each differential grows larger over time.

  • tar OPTIONS FOLDER_OR_FILE - the tape archive utility, usable for backups. Some flags to remember:

c - create an archive.

v - verbose; show more information while running.

z - compress the data with gzip.

f - set the file name of the archive.

x - extract an archive.

  • dar - disk archive utility; more flexible than tar, it can do differential and incremental backups.
  • dd - copy and convert utility. dd is suitable for disk backups.

Usage : dd if=INPUT_DISK of=OUTPUT_FILE
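A runnable sketch of a backup/restore round trip with the tar flags above (all paths are temporary examples):

```shell
# Create sample data, archive it, delete it, then restore from the archive
workdir=$(mktemp -d)
mkdir -p "$workdir/data"
echo "important" > "$workdir/data/file.txt"
tar -czf "$workdir/backup.tar.gz" -C "$workdir" data  # c=create, z=gzip, f=archive name
rm -r "$workdir/data"
tar -xzf "$workdir/backup.tar.gz" -C "$workdir"       # x=extract
cat "$workdir/data/file.txt"
```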

21 ) Bourne-again Shell & Scripting

- Using and Configuring Bash

Bash (Bourne Again SHell) has various configuration files, which are read in a defined order of precedence.

/etc/profile - start-up script.

On Pop!_OS, the global profile file loads /etc/bash.bashrc and all the scripts inside /etc/profile.d.

  • export VARIABLE_NAME=VALUE - command to set environment variables.
  • set -o allexport - export every variable defined from this point on.
  • alias - create shortcut commands.
  • unalias - remove aliases.

You can create functions inside the .bashrc file and run them as commands, e.g. a sysinfo function.
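A hypothetical sysinfo function of the kind described; the name and the fields shown are examples you might put in ~/.bashrc:

```shell
# Example ~/.bashrc function; call it like a command: sysinfo
sysinfo() {
    echo "Host:   $(uname -n)"
    echo "Kernel: $(uname -r)"
    echo "User:   $USER"
}
sysinfo
```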

- Creating a Bash Script

Bash scripts are files with a set of commands that will be run when the script file is executed.

Use a shebang (#!/BINARY_PATH) as the first line to tell the system which interpreter to use when the script is executed.
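A minimal runnable demonstration of the shebang (the script path is a temporary example):

```shell
# Write a two-line script, make it executable, then run it
script=$(mktemp)
printf '#!/bin/bash\necho "Hello from $0"\n' > "$script"
chmod +x "$script"
"$script"
```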

22 ) Scheduling Tasks

- Using at and cron to Schedule Tasks

at is an older command for scheduling one-off tasks on Linux; it cannot create recurring jobs.

CTRL + D closes the at prompt.

atq - list all scheduled jobs.

atrm - remove jobs.

cron is another tool to schedule tasks on Linux. You can add scripts to /etc/cron.daily, /etc/cron.hourly, /etc/cron.monthly and /etc/cron.weekly folders to have these tasks executed.

If a job needs to run at a specific date and time, /etc/crontab has to be edited.

crontab format: minute hour day-of-month month day-of-week command.
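For example, a hypothetical /etc/crontab entry running a backup script at 02:30 every Sunday (the script path is an example; /etc/crontab entries include a user field):

```shell
# minute hour day-of-month month day-of-week user command
30 2 * * 0 root /usr/local/bin/backup.sh
```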

- Using cron and anacron to Schedule Tasks

Users use the crontab -e command to edit their own tables. The command opens the configured text editor, where the user edits and saves to set jobs.

anacron - runs jobs that cron missed, e.g. while the machine was powered off.

23 ) Git Version Control

- Describing Version Control and Using Git

Git was created by Linus Torvalds. It is a distributed version-control system used to track changes.

The git package needs to be installed.

Setting up git's required variables:

  • git config --global user.name "NAME"
  • git config --global user.email "EMAIL"

  • git clone ADDRESS - command used to clone a repository.
  • git init - create a repository on the current folder.
  • git status - shows the status of the current repository.
  • git add FILE - tracks a file.
  • git commit -m "MESSAGE" - commit the changes to the repository.
  • git log - show the commits.
  • git branch -a - show all branches.
  • git checkout -b BRANCH_NAME - create a new branch and switch to it.
  • git merge BRANCH_NAME - merge branches.
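The commands above combined into a runnable session (identity values and file names are placeholders; git init -b needs Git 2.28+):

```shell
# Create a repo, commit on a branch, and merge it back
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.name "Example User"    # placeholder identity for the demo
git config user.email "user@example.com"
echo "first" > notes.txt
git add notes.txt
git commit -q -m "Initial commit"
git checkout -q -b feature             # create a branch and switch to it
echo "second" >> notes.txt
git commit -q -am "Update notes"
git checkout -q main
git merge -q feature                   # fast-forward merge
git log --oneline                      # both commits are now on main
```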

24 ) Implementing Configuration Management and IaC

- What is IaC?

Infrastructure as Code (IaC) is the practice of provisioning systems through automation instead of manual processes.

CI/CD is key to IaC: it is a method of frequently delivering apps and configurations.

The file format depends on the IaC tool in use.

  • YAML - Salt and Ansible
  • JSON - Often used as backend data
  • RUBY - Vagrant, Puppet, Chef

Several tools are covered for the Linux+ exam. These are:

  • Ansible
  • Chef
  • Vagrant
  • Puppet
  • Terraform

- Syntax Control Using Modeline in Vim

Let's configure vim to handle our various configuration file formats.

We need to create a .vimrc file in our home directory. This file can be used to control global configurations.

/home/user/.vimrc

The above config sets vim to apply syntax formatting to all files ending in .yaml.

To configure vim per file, add a modeline: a comment on the first line containing the parameters we want.

/home/user/Vagrantfile

- Understanding Vagrant Provisioning

Vagrant configuration file.

Vagrant configuration files have a main block from which we can refer to variables. Variables are defined with a $ prefix and referred to by name, e.g. $script.

We need to create a folder and initialise vagrant.

  • vagrant init --minimal ubuntu/focal64 - creates a virtual machine definition file.

The example above defines the provisioning of a virtual machine.

config.vm.provision "shell", inline: $script - we are using a provisioner of the type shell to run commands in our VM.

config.vm.synced_folder "web/", "/var/www/html" - map the local folder web/ to the VM /var/www/html.

config.vm.network "forwarded_port", guest:80, host: 8080 - map the port 80 of the VM to 8080 on the host.

$script - the script variable defines commands to be run on the VM.
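Assembling the quoted settings into a minimal Vagrantfile sketch (the $script body is a placeholder):

```ruby
# Vagrantfile - minimal sketch combining the settings discussed above
$script = <<-SCRIPT
  apt-get update
  apt-get install -y apache2   # placeholder provisioning commands
SCRIPT

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.provision "shell", inline: $script
  config.vm.synced_folder "web/", "/var/www/html"
  config.vm.network "forwarded_port", guest: 80, host: 8080
end
```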

Vagrant requires plugins to work. In our example it needs the VirtualBox Guest Additions plugin to deploy the virtual machine.

  • vagrant plugin list - list all installed plugins.
  • vagrant plugin install vagrant-vbguest - install the VirtualBox Guest Additions plugin.
  • vagrant up - used to deploy our VM.
  • vagrant port - list all the port mappings.

- Writing YAML

YAML is a data-structure language similar to JSON but without all the extra syntax elements.

A YAML file may start with --- (document header) and end with ... (document footer).

/home/user/example.yaml

We can use the yamllint package to verify that our syntax is correct.

YAML and JSON are comparable; JSON is more verbose than YAML.
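A small example of the same data in both formats (the data itself is made up):

```yaml
--- # doc header
server:
  name: web01
  ports:
    - 80
    - 443
... # doc footer
```

The JSON equivalent needs braces, brackets, quotes and commas for the same structure:

```json
{ "server": { "name": "web01", "ports": [80, 443] } }
```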

25 ) Implementing Ansible for Configuration Management.

- Introduction to Ansible

Ansible is owned by Red Hat and needs no central server (it works agentless over SSH). It is Python based and can be installed from distro repos or straight from PyPI.

We need to create a configuration and an inventory. To configure nodes we can run ad-hoc commands, but these are not repeatable; playbooks, on the other hand, are.

- Installing Ansible

To install on Ubuntu run the command below.

sudo apt install ansible

After the installation, run the command below to confirm it is installed.

ansible --version

A quick test can be run to confirm that ansible is working.

ansible localhost -m ping

- Creating Ansible Configurations and Inventory

We can create a new directory and a configuration file inside it.

Ansible ships a config file at /etc/ansible/ansible.cfg; rather than editing it, we create a new one per project.

/home/user/ansible/ansible.cfg

remote_user - is the name of the user to be used with the remote hosts.

inventory - is the list of hosts to be managed. inventory is a local file.

host_key_checking - set to false to automatically accept SSH host keys.

ansible-config dump --only-changed - shows all settings changed from their defaults.

The inventory file maintains remote host configurations. We can use group names and list as many remote hosts as we want.

/home/user/tiago/ansible/inventory

ansible-inventory --list --yaml - can be used to list all groups and remote hosts.

all:
  children:
    games:
      hosts:
        192.168.25.10: {}
    ungrouped: {}
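An inventory file that would produce the listing above (INI format; the group name and address come from the output):

```ini
# inventory
[games]
192.168.25.10
```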

We can run ansible against a group or the all group to reach all hosts.

Let's install a package on our remote host. The host does not yet have the tree package; let's use ansible to install it.

ansible all -kKbm package -a "name=tree state=present"
SSH password: 
BECOME password[defaults to SSH password]: 
192.168.25.10 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "cache_update_time": 1660877784,
...
        "Setting up tree (1.8.0-1) ...",
        "Processing triggers for man-db (2.9.1-1) ..."
    ]
}

-k - asks for SSH password.

-K - asks for sudo password.

-b - elevate privileges to root.

-m - module to use.

The remote host now has the tree package installed.

- Implementing Ansible Playbook

Let's create a playbook to uninstall the package tree installed in the example above.

--- # Example Playbook - START
- name: First PLaybook
  hosts: all
  become: true
  tasks:
    - name: Remove package
      package:
        name: tree
        state: absent
... # Example Playbook - END

Let's check the syntax of our playbook. If there are no errors the output will be blank.

ansible-playbook playbook.yaml --syntax-check

playbook: playbook.yaml

We can use the -C flag to check if there will be changes.

ansible-playbook -kK playbook.yaml -C
SSH password: 
BECOME password[defaults to SSH password]: 

PLAY [First PLaybook] *****************************************************************************************************************

TASK [Gathering Facts] ****************************************************************************************************************
ok: [192.168.25.10]

TASK [Remove package] *****************************************************************************************************************
changed: [192.168.25.10]

PLAY RECAP ****************************************************************************************************************************
192.168.25.10              : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

And finally, run the command without the -C flag to execute the tasks described in our playbook.

ansible-playbook -kK playbook.yaml
SSH password: 
BECOME password[defaults to SSH password]: 

PLAY [First PLaybook] *****************************************************************************************************************

TASK [Gathering Facts] ****************************************************************************************************************
ok: [192.168.25.10]

TASK [Remove package] *****************************************************************************************************************
changed: [192.168.25.10]

PLAY RECAP ****************************************************************************************************************************
192.168.25.10              : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

26 ) Implementing Puppet for Configuration Management.

Puppet is from PuppetLabs and it has two versions:

  • Puppet Enterprise
  • Puppet CE ( Community Edition )

It is a server/client model and uses port 8140. Puppet can also be used locally.

Puppet is Ruby based and its files use the .pp extension.

- Installing Puppet on Ubuntu

On Ubuntu the puppet package is simply named puppet.

sudo apt install puppet
Command to install puppet on Ubuntu.
  • puppet - has a large set of sub-commands. Use the describe sub-command to show help for a resource type.
  • puppet resource RESOURCE - can be used to show resources like the state of a package, details of a user and so on.

- Executing Ad-Hoc Puppet Commands

Puppet can apply configurations directly from the CLI. We need to elevate privileges if the underlying operation requires it.

We can use the puppet apply command to apply configurations straight from the command without the need of manifest files.

  • puppet apply -e 'package { "chrony": ensure => "installed" }'

The above command installs the package if it is not already installed. On the host above it was already present.

-e - this flag tells puppet to run apply without a manifest file.

  • puppet apply -e 'service { "chrony": ensure => "running", enable => true }'

The above command ensures the chrony service is running and enabled.

- Working with Modules

Predefined code can be downloaded as modules from forge.puppet.com.

Let's install apache as an example of how to use modules.

  • puppet module list - can be used to list all modules installed.
  • puppet module install -i /usr/share/puppet/modules puppetlabs/apache

With the above command we will install the apache module.

  • puppet apply -e "include apache"

The above command will install the apache webserver with its default configurations.

- Understanding Manifests

Manifests form the basis of Puppet configuration. They use a Ruby-like syntax, so we may choose to add a Ruby modeline.


We can copy an existing resource to create a new manifest and edit it as required.


The manifest above makes sure the SSH service is running and adds a line disabling root login to the SSH configuration file.

The last parameter ( notify ) restarts the SSH service when the file changes.

- Working with Manifests

Let's create a new user with puppet. First, we need to create a manifest file.

user.pp

puppet parser validate user.pp - verifies that the manifest contains no syntax errors.

puppet apply --noop user.pp - simulates a run and shows what the outcome would be.

puppet apply user.pp - to finally create the new user.
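A sketch of what user.pp might contain; the user name and attributes are hypothetical:

```puppet
# user.pp - hypothetical manifest creating a user
user { 'jdoe':
  ensure     => present,
  home       => '/home/jdoe',
  shell      => '/bin/bash',
  managehome => true,
}
```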

27 ) Implementing SaltStack for Configuration Management.

SaltStack is owned by VMware and the comercial version is called vRealize and has Salt Open which is the project version.

It is server/client and uses TCP ports 4505/4506 on the server.

SaltStack is Python based and uses YAML configuration files with the .sls suffix.

To use SaltStack we need to install the salt-minion agent package.

- Reading CLI Documentation

As well as web documentation, Salt ships with extensive command line documentation for the modules and functions. Here, we list help for the Remote Execution modules.

salt-call --local sys.list_functions pkg - can be used to list all functions inside a module.


Installing a package with SaltStack.

salt-call --local pkg.install vim-data


- Understanding Salt State Files

A state file defines the desired state of the system. We can install multiple packages if needed. The vim-data package is required for syntax highlighting.

Vim syntax configuration.

common.sls

The common.sls state file defines a list of packages to be installed and ensures that the time zone is set to UK Time Zone.
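A sketch of what common.sls could look like; the package list and time-zone value are assumptions:

```yaml
# common.sls - install a list of packages and set the UK time zone
common_packages:
  pkg.installed:
    - pkgs:
      - vim
      - vim-data

uk_timezone:
  timezone.system:
    - name: Europe/London
```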

The file needs to be copied to the /srv/salt directory and is executed using the state.sls Remote Execution module. If needed, we can test before the full operation.

salt-call --local state.sls common test=True - to check the state file syntax and present the changes that will be applied. Omit test=True to apply the state.

28 ) Implementing Chef for Configuration Management.

Chef is owned by a company called Progress. Chef is a server/client tool, Ruby based, and its configuration files are stored with a .rb extension.

apt update && apt install chef - to install the chef package.

- Working with the Git Resource

Chef recipes are written in Ruby, which adds a little complexity compared with YAML.

~/cookbook/repo/recipes/default.rb

The above file is an example of a recipe to clone a git repository.

chef-client -z -o repo - executes the default.rb recipe of the repo cookbook in local mode.

- Working with Files

We will create a file, set its mode and update its contents.

Let's create a new recipe directory and a recipe file.

~/cookbooks/hello/recipes/default.rb
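A sketch of the hello recipe described; the path, mode, and content are example values:

```ruby
# ~/cookbooks/hello/recipes/default.rb - create a file, set its mode and contents
file '/tmp/hello.txt' do
  content "Hello from Chef\n"
  mode '0644'
  action :create
end
```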

chef-client -z -o hello - executes the default.rb recipe of the hello cookbook in local mode.

29 ) Implementing Terraform for Configuration Management.

Use this tutorial as a reference for this module.

Terraform - Proxmox Provider Setup

30 ) Implementing Containers Using Docker

The docker container section of the blog and its articles cover all the topics expected by the CompTIA Linux+.

Docker - Infoitech - [B]logging
Docker related posts.

31 ) Managing Container Micro-services Using Kubernetes

Kubernetes adds clustering to your container applications, bringing reliability and the ability to scale services to meet demand.

MicroK8s is a CNCF-certified upstream Kubernetes deployment that allows a simple setup on a single host. Ideally a Kubernetes cluster would consist of many systems.

snap install microk8s --classic - install the MicroK8s snap.

gpasswd -a $USER microk8s - add the logged-in user to the microk8s group.

microk8s status - check the cluster status.

- Creating Micro-services in Kubernetes

Deployments exist as one or more pods running on one or more nodes within the cluster. A pod represents one or more containers. A deployment becomes a micro-service when a cluster IP is assigned by exposing a port.

For availability we can create an Nginx deployment with two replicas; in our case both replicas will run on the single node, but that is adequate for the demonstration.

microk8s kubectl get deployment - list all available deployments.

microk8s kubectl create deployment web --image=nginx - creates a new deployment using the nginx image.

microk8s kubectl get pods - list all running pods ( containers )

microk8s kubectl get services - list all available services.

microk8s kubectl scale deployment web --replicas=2 - scales the deployment to the requested number of replicas.

microk8s kubectl expose deployment web --type=NodePort --port=80 --name=nginx-web - command used to expose a running deployment.

microk8s kubectl delete service nginx-web

microk8s kubectl delete deployment web - commands used to delete resources.

Linux Certifications

Linux Foundation Certifications: A Primer - Linux Foundation - Training