Sunday, August 30, 2020

OCI — Local Peering between tenants in the same region — Part I

 

This blog explains how to set up local peering between two Virtual Cloud Networks (VCNs) owned by two different tenants in the same region.

Here is a high level diagram which depicts the Oracle Cloud Infrastructure resources in this topology.

In Part I of this series, I’ll cover the high level steps required to setup the topology.

Part II will cover how to create the required OCI resources in Tenancy 2 (Tenant B in the diagram above). Basically, it creates a compartment, VCN, subnet, and network security group, and then creates an Autonomous Data Warehouse using the private endpoint option. It also creates the Local Peering Gateway required to connect to the virtual machine in Tenancy 1 (Tenant A in the diagram). Since the peer VCN is in a different tenancy, IAM policies need to be added to allow peering between tenancies.

Part III will cover how to create the required OCI resources in Tenancy 1 (Tenant A in the diagram above). Basically, it creates a compartment, a VCN, two regional subnets, and two virtual machines (a bastion host in the public subnet, and another virtual machine in the private subnet used to connect to the Autonomous Database in Tenant B). It also creates the Internet Gateway required to connect to the internet from the public subnet, and the NAT Gateway required to make outbound connections from the private subnet to download required software. It also creates the Local Peering Gateway used to connect to the Autonomous Database in Tenancy 2. Since the peer VCN is in a different tenancy, IAM policies need to be added to allow peering between tenancies.

Part IV will cover how to establish the peering connection between the Virtual Cloud Networks owned by the two tenants. Once the connection is established, we install the required Oracle client software on the virtual machine in the private subnet and connect to the Autonomous Database in Tenancy 2.
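For reference, the cross-tenancy IAM policies mentioned above follow OCI's Define/Endorse/Admit pattern. Here is a rough sketch based on the OCI cross-tenancy peering documentation; group names, compartment names, and OCIDs are placeholders, and the exact resource-type names should be verified against the current docs:

```text
# In the requestor tenancy (Tenant A), written by its administrator:
Define tenancy AcceptorTenancy as <tenant_B_tenancy_OCID>
Endorse group RequestorGroup to manage local-peering-from in tenancy AcceptorTenancy

# In the acceptor tenancy (Tenant B):
Define tenancy RequestorTenancy as <tenant_A_tenancy_OCID>
Define group RequestorGroup as <tenant_A_group_OCID>
Admit group RequestorGroup of tenancy RequestorTenancy to manage local-peering-to in compartment NetworkCompartment
```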

Sunday, April 5, 2020

Kubernetes series - Part II - virtualization vs containerization



The main goal of virtualization/containerization is to provide a certain level of isolation of the host resources (CPU, memory, network, filesystem, etc.) used by each service/application running on the host.
Before we go into the details of containers and virtualization, let's look at the origin of virtualization.

Chroot environments

The UNIX/Linux chroot command (short for "change root") allows you to create filesystem-level isolation on a host. The chroot command changes the root of the filesystem for a process and its child processes. This helps to run an unprivileged service, like a web server, in a protected environment (a chroot jail): the web server process can see only files under the root directory of the chroot environment.

But this provides only a crude level of filesystem isolation. The web server can still hog other host resources like CPU, memory, and network.

Chroot feature was available as early as 1979 in version 7 of UNIX. This feature was added to BSD in 1982.
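A minimal sketch of a chroot jail (the directory layout is illustrative; the chroot step itself needs root, so it is shown commented out):

```shell
# build a tiny directory tree to act as the new root
mkdir -p /tmp/jail/bin /tmp/jail/etc

# copy in a shell; a statically linked binary (e.g. busybox) is
# easiest, since a dynamically linked one also needs its shared
# libraries copied into the jail
cp /bin/sh /tmp/jail/bin/

# entering the jail requires root; inside it, /bin/sh can only
# see files under /tmp/jail
# sudo chroot /tmp/jail /bin/sh
```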

FreeBSD jail

FreeBSD enhanced the chroot environment feature and shipped the jail feature in 2000. This feature enabled lightweight virtualization: each virtual environment running on a shared host has its own processes, files, network, and users. And the overhead of a jail is < 10 MB.

The FreeBSD jail doesn't provide true virtualization: the OS kernel is still shared by all virtual environments, and there is no way to limit the CPU/memory usage of each jail. Still, FreeBSD jails are a very popular solution in production.

Virtualization

Virtualization provides strong isolation compared to FreeBSD jails. Instead of installing an operating system directly, you install a software layer called a hypervisor on the host. You then log in to the hypervisor and create virtual machines, each with its own operating system, so nothing is shared between two virtual machines running on the same host. The hypervisor interacts with the hardware resources of the host machine and allocates these resources to the virtual machines.

Containerization

Containers are an advanced version of chroot and FreeBSD jails. They are built on top of two Linux kernel features: control groups (cgroups) and Linux namespaces.

Google originally developed control groups (cgroups) and contributed them to the Linux kernel in 2008. Control groups allow you to isolate resources like CPU, memory, network, and block IO used by a group of processes. They also allow you to stop and start (control) a group of processes. This feature, along with Linux namespaces (which isolate the network and process space), is the foundation of containers.
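You can observe the namespaces a process belongs to directly under /proc on any Linux host; two processes in the same namespace show the same inode number in the symlink target:

```shell
# each symlink below identifies one namespace (pid, net, mnt, uts, ...)
ls -l /proc/$$/ns

# the pid namespace of the current shell, e.g. "pid:[4026531836]"
readlink /proc/$$/ns/pid

# cgroup membership of the current shell
cat /proc/self/cgroup
```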

Docker leveraged cgroups and namespaces to implement containers, and is the most popular containerization technology. The Docker ecosystem makes it easy to build and share images, and to create and manage containers from those images.

Docker makes it very easy to create an image. All you need to do is create a file (a Dockerfile) and specify all the dependencies in it, then use the docker command to build an image based on the Dockerfile. You can then create a container from that image; the container executes in the Docker container runtime engine.
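For example, a minimal Dockerfile for a small Python app might look like the one below (the file contents and image tag are illustrative; the build/run steps assume Docker is installed, so they are commented out):

```shell
# write a minimal Dockerfile describing the image and its dependencies
cat > /tmp/Dockerfile <<'EOF'
FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
EOF

# build an image from it and run a container (requires Docker):
# docker build -t myapp:1.0 -f /tmp/Dockerfile .
# docker run -d -p 8080:8080 myapp:1.0
```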

The Docker engine is a client-server application. The server part is the container runtime, the Docker daemon (dockerd). There are two types of clients: the Docker command line interface (the docker command), which is used to build images, create containers from them, and manage the container lifecycle; and the Docker REST APIs and SDKs, which can be used to interact with Docker engines programmatically.

After creating an image, you can upload it to Docker Hub, a repository for storing and sharing images. You can version the images too.

Containers are lightweight compared to virtual machines: containers share the host operating system kernel, whereas each virtual machine has its own operating system. A good analogy is process vs thread. There is some overhead to spawn a new process; similarly, there is some overhead to start a virtual machine, since the operating system needs to boot. Threads are lightweight: they are created inside a process and share its resources, so there is less overhead. Similarly, containers share the host operating system kernel, so starting a container is faster than starting a virtual machine.


Let's look at the definition from docker.com: “A container is a standard unit of software that packages up code and all its dependencies.”

Packaging an application and its dependencies together helps keep the dev/stage/pre-prod/prod environments as similar as possible. This is very important for Microservices-based applications.



Kubernetes Series - Part I - Prerequisites - Microservices

Kubernetes Series - Part I - Prerequisites

There are many prerequisites to learning Kubernetes. It's difficult to understand a container orchestration tool without understanding containers, and how to package and deploy applications using them. When you search for containerization, you will also come across the term Microservices.

You will also see articles comparing containerization with virtualization, and virtualization with non-virtualization. If you keep searching, it goes back to the history of UNIX; and if you search enough, you will see references to Google Borg, chroot environments, and FreeBSD jails.

I’ll try to touch up on these concepts before getting into the details of Kubernetes.

Microservice

Before Microservices, applications were monolithic. For example, an online shopping application has subsystems like customer profile, catalog, shopping cart, and customer wish list. You can break this monolithic application down into its subsystems and model each one as a Microservice.

There are many advantages of breaking down a monolithic application into Microservices. 

You can make changes to each Microservice independently and deploy them. Typically each Microservice has a separate code base. This reduces dependencies between teams and makes development and deployment more agile.

In a monolithic application, you need to scale the entire application, even when only some of the subsystems are heavily utilized. With Microservices, you can scale each service independently on a need basis.

Typically monolithic applications are based on a single technology stack. But in a Microservices based architecture, the Microservices talk to each other over REST APIs. So each Microservice has the flexibility to choose the most appropriate technology stack.

But there are disadvantages too. A Microservices-based architecture increases complexity. In a monolithic application, for example a J2EE application, the entire application runs in a single JVM, so it's easy to make a call from one subsystem to another. But in a Microservices architecture, each Microservice runs in its own JVM and there is no way to make direct in-process calls, so you need to make calls using REST APIs.


Since each service is built and deployed independently, there is no way to know the URL of another Microservice up front. You may scale out a particular service based on need, so there could be multiple instances of the service running on different hosts. Additional services are therefore required to discover a service; only then can you invoke a REST API on that Microservice. A service registry is used to register all the Microservice instances: the URL and some additional metadata about each instance are published in the registry. Whenever a Microservice comes up, it registers these details with the service registry, and when one Microservice needs to invoke another, it first performs a lookup against the registry to get the other Microservice's URL (IP address, port, etc.). This service discovery layer adds complexity.
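The register/lookup flow can be illustrated with a toy, file-backed registry; this is purely a sketch (service names and addresses are made up), while real systems use something like Consul, Eureka, or DNS-based discovery:

```shell
# the "registry" is just a file of "service-name address" lines
REGISTRY=/tmp/service-registry.txt
: > "$REGISTRY"

# each instance registers its name and address on startup
register() { echo "$1 $2" >> "$REGISTRY"; }

# a caller looks up all registered addresses for a service by name
lookup() { awk -v s="$1" '$1 == s {print $2}' "$REGISTRY"; }

register cart-service 10.0.1.12:8080
register cart-service 10.0.1.13:8080   # scaled-out second instance
register catalog-service 10.0.2.7:8080

# discover all instances of cart-service before invoking it
lookup cart-service
```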

These additional layers (registry/lookup) also increase latency. In a monolithic application (say a J2EE app) everything runs in a single JVM and there is no network latency to invoke one subsystem from another. But in a Microservices architecture, you first need to do a service discovery (service registry lookup) to get hold of the other Microservice instance's URL and then invoke the REST API, which increases latency.

Compared to Microservices, it's easy to troubleshoot/diagnose a monolithic application: you just go to the application server logs to start triage. But in a Microservices-based application, the services may be running on different hosts and there could be multiple instances of each service. It's a challenge even to identify the correct host and then get to the log location in production. You also need to correlate the logs, as the log messages for a workflow might span Microservices.


Another challenge is packaging and deploying multiple Microservices. Containers are used widely to package and deploy Microservices. In the next part I'll go through containerization concepts.





Friday, February 21, 2020

How to connect to Oracle ATP using SQL Plus


I have created an ATP instance using Terraform from an OCI compute instance. Here are the steps to connect to the ATP instance using sqlplus from the OCI compute instance.

Install Oracle Instant Client RPM

I have provisioned the compute instance using the Oracle Linux 7.6 image. From the OCI compute instance, run the steps below to install the instant client.
To pull the correct yum repo, you need to figure out the OCI region. My account is in the Ashburn region (home region); use the steps below to get the region code.
cd /etc/yum.repos.d
 
# specific to OCI compute instances, get the yum mirror for the region
export REGION=`curl http://169.254.169.254/opc/v1/instance/ -s | jq -r '.region'| cut -d '-' -f 2`

# use below commands to verify that the region is set
export | grep REGION
or
echo $REGION

I am in the Ashburn region, so the above command returned iad.

# the below command will download the yum repo file for your region
sudo -E wget http://yum-$REGION.oracle.com/yum-$REGION-ol7.repo

# Since I am in Ashburn region, it downloaded yum-iad-ol7.repo
# open yum-iad-ol7.repo
less yum-iad-ol7.repo

# list available packages before enabling the repo
yum list oracle-instantclient*

# There is an entry with the name "ol7_oci_included" in yum-iad-ol7.repo
# enable the repository named "ol7_oci_included" using yum-config-manager
sudo yum-config-manager --enable ol7_oci_included

# list all packages after enabling the repo

yum list oracle-instantclient*    

We can see that oracle-instantclient* packages are available now.

Install the required packages:

sudo yum install -y  oracle-instantclient18.3-basic.x86_64 oracle-instantclient18.3-devel.x86_64 oracle-instantclient18.3-jdbc.x86_64 oracle-instantclient18.3-sqlplus.x86_64 oracle-instantclient18.3-precomp.x86_64 oracle-instantclient18.3-tools.x86_64

Now sqlplus binary is available under /usr/lib/oracle/18.3/client64/bin

Download and unzip wallet zip file

Since I created the ATP instance using Terraform from this OCI compute instance, the wallet zip file is already available under "/home/opc/atp-example/autonomous_database_wallet.zip".
cd /home/opc/atp-example/
jar -tvf /home/opc/atp-example/autonomous_database_wallet.zip
  6661 Fri Apr 12 13:53:16 GMT 2019 cwallet.sso
  3422 Fri Apr 12 13:53:16 GMT 2019 tnsnames.ora
  3336 Fri Apr 12 13:53:16 GMT 2019 truststore.jks
    87 Fri Apr 12 13:53:16 GMT 2019 ojdbc.properties
   114 Fri Apr 12 13:53:16 GMT 2019 sqlnet.ora
  6616 Fri Apr 12 13:53:16 GMT 2019 ewallet.p12
  3243 Fri Apr 12 13:53:16 GMT 2019 keystore.jks

Download Wallet zip from Service Console

Otherwise, you need to log in to the ATP instance's Service Console (the default user name is "admin"). After logging in, go to the Administration link on the left side and click "Download Client Credentials (Wallet)".
Provide a wallet password to download the wallet zip file. This client credentials wallet can then be used to log in from SQL Developer and other SQL clients like SQL Plus. If you downloaded the zip on Windows, you will need to scp the file to the OCI compute host.
Once the wallet zip is present on the OCI compute host, continue with the steps below.

Unzip wallet archive

unzip autonomous_database_wallet.zip
cd autonomous_database_wallet

Modify sqlnet.ora

Modify sqlnet.ora and set the wallet directory to "/home/opc/atp-example/autonomous_database_wallet".
Here is cat /home/opc/atp-example/autonomous_database_wallet/sqlnet.ora:
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/home/opc/atp-example/autonomous_database_wallet")))
SSL_SERVER_DN_MATCH=yes

Alternatively, set the TNS_ADMIN variable to point to the wallet folder and use DIRECTORY="$TNS_ADMIN" in sqlnet.ora:
cat /etc/ORACLE/WALLETS/ATPDB2/sqlnet.ora
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="$TNS_ADMIN")))
SSL_SERVER_DN_MATCH=yes

# Now export the TNS_ADMIN variable to point to the wallet folder
export TNS_ADMIN=/home/opc/atp-example/autonomous_database_wallet

# The wallet zip is extracted and the TNS_ADMIN env var points to the wallet folder. Refer to https://docs.oracle.com/en/cloud/paas/atp-cloud/atpug/connect-sqlplus.html#GUID-A3005A6E-9ECF-40CB-8EFC-D1CFF664EC5A

Connect to ATP instance using SQL Plus

/usr/lib/oracle/18.3/client64/bin/sqlplus admin/Welcome#1234@atpdb2_low

Create a schema in the ATP instance using SQL Plus

/usr/lib/oracle/18.3/client64/bin/sqlplus admin/Welcome#1234@atpdb2_low @/home/opc/schema/create_tables.sql

 

Monday, April 1, 2019

How to install OCI CLI on Oracle Enterprise Linux 6.8

Below are the steps to install the CLI and its dependencies manually. For more details, refer to https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/climanualinst.htm

 

Step 1: Configure Yum and install dependencies



I am using Red Hat Enterprise Linux Server release 6.8 (Santiago)
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.8 (Santiago)

You need to be root to install the packages below.
#sudo su

Set proxy server:

#bash
#export http_proxy=proxy_host_name_here:port_here
#export https_proxy=proxy_host_name_here:port_here

Make sure yum is configured:
#cd /etc/yum.repos.d
#wget https://yum.oracle.com/public-yum-ol6.repo
# yum repolist

Now install packages:
#sudo yum install gcc libffi-devel python-devel openssl-devel
Download EPEL (Extra Packages for Enterprise Linux), and then install python-pip.

Download RPM : 

Install RPM manually:
#rpm -Uvh epel-release-6-8.noarch.rpm
Make sure 'yum repolist' is showing epel:
#yum repolist
Loaded plugins: fastestmirror, refresh-packagekit, security, ulninfo
Loading mirror speeds from cached hostfile
epel/metalink | 18 kB 00:00
* epel: fedora-epel.mirror.lstn.net
epel | 4.7 kB 00:00
epel/primary_db | 6.0 MB 00:02
repo id repo name status
APOVM34 OracleVM APOVM34 x86_64 556
UEK4 OracleVM OVM34_UEK x86_64 1,374+2
*epel Extra Packages for Enterprise Linux 6 - x86_64 12,517
ol6_UEK_latest Latest Unbreakable Enterprise Kernel for Oracle Linux 6Server (x86_64) 723+125
ol6_latest Oracle Linux 6Server Latest (x86_64) 9,143+2,605
repolist: 24,313

Now install pip using 'sudo yum install python-pip':
# sudo yum install python-pip
Loaded plugins: fastestmirror, refresh-packagekit, security, ulninfo
Setting up Install Process
Loading mirror speeds from cached hostfile
* epel: fedora-epel.mirror.lstn.net
Resolving Dependencies
--> Running transaction check
---> Package python-pip.noarch 0:7.1.0-1.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved

==================================================================================================================================
Package Arch Version Repository Size
==================================================================================================================================
Installing:
python-pip noarch 7.1.0-1.el6 epel 1.5 M
Transaction Summary
==================================================================================================================================
Install 1 Package(s)
Total download size: 1.5 M
Installed size: 6.6 M
Is this ok [y/N]: y
Downloading Packages:
python-pip-7.1.0-1.el6.noarch.rpm | 1.5 MB 00:00
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
Importing GPG key 0x0608B895:
Userid : EPEL (6) 
Package: epel-release-6-8.noarch (installed)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
Is this ok [y/N]: y
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
** Found 1 pre-existing rpmdb problem(s), 'yum check' output follows:
libvirt-client-1.2.14-19.1.el6.x86_64 is a duplicate with libvirt-client-0.10.2-62.0.1.el6_9.1.x86_64
Installing : python-pip-7.1.0-1.el6.noarch 1/1
Verifying : python-pip-7.1.0-1.el6.noarch 1/1
Installed:
python-pip.noarch 0:7.1.0-1.el6
Complete!


I have installed the oci cli and its dependencies into /scratch/rajiv/oci.
mkdir -p /scratch/rajiv/oci
cd /scratch/rajiv/oci
Download Python 3.6 manually.
#tar -xvzf Python-3.6.0.tgz
#cd Python-3.6.0
I have installed python under /scratch/rajiv/oci/python-3.6.0
#mkdir -p /scratch/rajiv/oci/python-3.6.0
It's important to specify the --prefix and point it to the folder where you plan to install Python. Otherwise it will try to install under /usr/local, and since /usr is mounted read-only on hosted Linux boxes, it will fail.
#./configure --prefix=/scratch/rajiv/oci/python-3.6.0
Now do 'make':
#make
And then do 'make install':
#sudo make install


Step 2: Installing and Configuring virtualenv

This is to create a Python virtual environment. This helps to avoid dependency hell.
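Worth noting: since Python 3.3 the standard library ships a venv module that does the same job, so on newer setups you can skip installing virtualenv altogether. A minimal sketch (paths here are illustrative):

```shell
# create a fresh virtual environment using the stdlib venv module
# (--without-pip keeps the sketch minimal; omit it to have pip
# bootstrapped inside the environment as well)
python3 -m venv --without-pip /tmp/cli-testing

# activate it so subsequent python/pip commands use the isolated env
# source /tmp/cli-testing/bin/activate
```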
#pip3 install virtualenv - didn't work, since pip3 is not on the PATH; locate it first
# find . -name pip3
./python-3.6.0/bin/pip3
/scratch/rajiv/oci/python-3.6.0/bin/pip3 install virtualenv

Collecting virtualenv
Downloading https://files.pythonhosted.org/packages/33/5d/314c760d4204f64e4a968275182b7751bd5c3249094757b39ba987dcfb5a/virtualenv-16.4.3-py2.py3-none-any.whl (2.0MB)
100% |¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦| 2.0MB 473kB/s
Installing collected packages: virtualenv
Successfully installed virtualenv-16.4.3
You are using pip version 9.0.1, however version 19.0.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.


Now virtualenv installed under python 3.6 folder.
# find . -name virtualenv
./python-3.6.0/bin/virtualenv

To create a directory for storing your virtual environments, run the following command.
#mkdir -p /scratch/rajiv/oci/myvirtualspaces/virtualenvs

To create a new virtual environment without any packages, run the following command.

/scratch/rajiv/oci/python-3.6.0/bin/virtualenv myvirtualspaces/virtualenvs/cli-testing --no-site-packages

Using base prefix '/scratch/rajiv/oci/python-3.6.0'
New python executable in /scratch/rajiv/oci/myvirtualspaces/virtualenvs/cli-testing/bin/python3.6
Also creating executable in /scratch/rajiv/oci/myvirtualspaces/virtualenvs/cli-testing/bin/python
Installing setuptools, pip, wheel...
done.

To make the virtual environment reference the new version of Python, run the command with the -p parameter. Note that -p must point to the Python executable (e.g. /scratch/rajiv/oci/python-3.6.0/bin/python3.6), not the install folder; passing the folder fails with the error below.

/scratch/rajiv/oci/python-3.6.0/bin/virtualenv -p /scratch/rajiv/oci/python-3.6.0 cli-testing

The path /scratch/rajiv/oci/python-3.6.0 (from --python=/scratch/rajiv/oci/python-3.6.0) is not an executable file



Step 3: Installing the Command Line Interface


Download oci-cli from GitHub (https://github.com/oracle/oci-cli/releases):
#unzip oci-cli-2.5.5.zip
#cd /scratch/rajiv/oci/oci-cli
# /scratch/rajiv/oci/python-3.6.0/bin/pip3 install oci_cli-*-py2.py3-none-any.whl

This will install oci cli under /scratch/rajiv/oci/python-3.6.0/bin/oci


Token-based Authentication for the CLI

Find the location of oci cli
find . -name oci
./Python-3.6.0/build/temp.linux-x86_64-3.6/scratch/rajiv/oci
./python-3.6.0/lib/python3.6/site-packages/oci
./python-3.6.0/bin/oci


Starting a Token-based CLI Session

The step below needs to run from a VNC session as a non-root user; otherwise it won't be able to open a browser window.
All the steps up to this point were run as the root user, so you need to change the ownership of '/scratch/rajiv/oci' and all folders under it to rajiv:
#sudo chown -R rajiv:groups_name_here oci

Now authenticate session:
#/scratch/rajiv/oci/python-3.6.0/bin/oci session authenticate

This step will open a browser window. Provide the OCI tenant name and user credentials. Once authentication is successful, close the browser window and go back to the command line.

Now try the command below to make sure the oci cli is working with the newly created session credentials:
/scratch/rajiv/oci/python-3.6.0/bin/oci iam region list --config-file /home/rajiv/.oci/config --profile DEFAULT --auth security_token

It worked!
#./python-3.6.0/bin/oci --version
2.5.5

setenv PATH $PATH\:/scratch/rajiv/oci/python-3.6.0/bin
#oci --version
   2.5.5