Hi All,
I am moving to the domain http://www.adminz.in. Please feel free to contact me at http://www.adminz.in.
HAProxy Load Balancing Algorithms
The algorithm you define determines how HAProxy balances load across your servers. You can set the algorithm to use with the balance parameter.
Round Robin
Requests are rotated among the servers in the backend.
Servers declared in the backend section also accept a weight parameter, which specifies their relative weight. When balancing load, the Round Robin algorithm respects that weight ratio.
Example:
…
option tcplog
balance roundrobin
maxconn 10000
…
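The weight parameter mentioned above is set per server in the backend section. A minimal sketch (the backend name, server names, and addresses are illustrative, not from the original configuration):

```
backend web_backend
    balance roundrobin
    # web1 receives roughly three requests for every one sent to web2
    server web1 192.168.0.11:80 weight 3
    server web2 192.168.0.12:80 weight 1
```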
Static Round Robin
Each server is used in turn, according to its defined weight. This algorithm is a static version of the round-robin algorithm, which means that changing a server's weight ratio on the fly has no effect. However, you can define as many servers as you like with this algorithm. In addition, when a server comes online, this algorithm ensures that the server is immediately reintroduced into the farm after the full map is re-computed. This algorithm also consumes slightly fewer CPU cycles (around 1% less).
Example:
…
option tcplog
balance static-rr
maxconn 10000
…
Least Connection
The server with the lowest number of active connections receives the request. Round robin is performed within groups of servers with the same load, and server weights are respected. This algorithm is dynamic, so server weights can be adjusted on the fly. It is recommended for protocols with long sessions, such as LDAP, SQL, or TSE, rather than for protocols with short sessions such as HTTP.
Example:
…
option tcplog
balance leastconn
maxconn 10000
…
Source
A hash of the source IP is divided by the total weight of the running servers to determine which server will receive the request. This ensures that clients from the same IP address always hit the same server, which is a poor man’s session persistence solution.
Example:
…
option tcplog
balance source
maxconn 10000
…
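The idea behind the source algorithm can be sketched in shell. This is illustrative only: HAProxy's real hash function is different, and the toy hash below (summing the IP octets) is an assumption made purely for the example:

```shell
#!/bin/sh
# Toy model of `balance source`: hash the client IP, then take it
# modulo the number of running servers to pick a backend index.
servers=3
ip_to_index() {
  # Sum the four octets as a stand-in hash (NOT HAProxy's real hash).
  echo "$1" | awk -F. -v n="$servers" '{ print ($1 + $2 + $3 + $4) % n }'
}
ip_to_index 192.168.1.10   # the same source IP ...
ip_to_index 192.168.1.10   # ... always maps to the same server index
```

As long as the set of running servers does not change, a given client IP keeps hitting the same server; if a server goes down, the divisor changes and many clients get re-mapped.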
URI
This algorithm hashes either the left part of the URI (before the question mark) or the whole URI (if the whole parameter is present) and divides the hash value by the total weight of the running servers. The result designates which server will receive the request. This ensures that the proxy will always direct the same URI to the same server as long as all servers remain online.
This is used with proxy caches and anti-virus proxies in order to maximize the cache hit rate. This algorithm is static by default, which means that changing a server’s weight on the fly will have no effect. However, you can change this using a hash-type parameter.
You can only use this algorithm for a configuration with an HTTP backend.
Example:
…
option tcplog
balance uri
maxconn 10000
…
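By default the hash covers only the part of the URI before the question mark. A quick shell illustration of which portion that is (the URI below is made up for the example):

```shell
#!/bin/sh
# Strip everything from the first "?" onward, leaving the part of the
# URI that `balance uri` hashes by default.
uri="/images/logo.png?size=large&v=2"
left_part="${uri%%\?*}"
echo "$left_part"   # -> /images/logo.png
```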
URL Parameter
The URL parameter specified as an argument will be looked up in the query string of each HTTP GET request.
You can use this algorithm to check specific parts of the URL, such as values sent through POST requests. For example, you can set this algorithm to direct a request that specifies a user_id with a specific value to the same server using the url_param method. Essentially, this is another way of achieving session persistence in some cases (see the official HAproxy documentation for more information).
Example:
…
option tcplog
balance url_param userid
maxconn 10000
…
or
…
option tcplog
balance url_param session_id check_post 64
maxconn 10000
…
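To see which value the url_param hash is based on, here is a toy extraction of the userid parameter from a request URI (the URI and value are invented; HAProxy does this parsing internally):

```shell
#!/bin/sh
# Pull the value of the `userid` query parameter out of a request URI;
# this value is what `balance url_param userid` would hash.
uri="/app?userid=42&lang=en"
userid=$(echo "$uri" | sed -n 's/.*[?&]userid=\([^&]*\).*/\1/p')
echo "$userid"   # -> 42
```

In the check_post variant shown above, HAProxy also looks for the parameter in POST request bodies; see the official HAProxy documentation for the exact meaning of the numeric argument.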
Installing applications along with the Heat template. If needed, we can specify the network ID, image ID, etc. in the file itself instead of passing them in from outside.
================================================
heat_template_version: 2013-05-23
description: Test Template
parameters:
  NAME:
    type: string
    description: Instance Name
  ImageID:
    type: string
    description: Image used to boot a server
  NetID:
    type: string
    description: Network ID for the server
resources:
  server1:
    type: OS::Nova::Server
    properties:
      name: { get_param: NAME }
      image: { get_param: ImageID }
      key_name: Cloud
      flavor: "m1.small"
      networks:
        - network: { get_param: NetID }
      user_data_format: RAW
      user_data: |
        #!/bin/bash -v
        echo "nameserver 8.8.8.8" > /etc/resolv.conf
        yum update -y
        yum install httpd -y
outputs:
  server1_private_ip:
    description: IP address of the server in the private network
    value: { get_attr: [ server1, first_address ] }
================================================
heat stack-create -f heat.yml -P "ImageID=abc9818d-ee5f-4778-ada5-a29105ea9c02;NetID=71ed8a34-a2d5-4d84-9d47-e5e107dd8d7e" Centos-Stack
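The -P flag takes semicolon-separated Key=Value pairs. A toy illustration of how those pairs split (this is not how heat parses them internally, just a way to see the structure; the IDs are shortened and made up):

```shell
#!/bin/sh
# Split a heat-style parameter string into its key/value pairs.
params="ImageID=abc123;NetID=net456"
echo "$params" | tr ';' '\n' | while IFS='=' read -r key value; do
  echo "$key -> $value"
done
```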
Installing Heat – Orchestration Service
Installing the Packages
yum install openstack-heat-api openstack-heat-engine openstack-heat-api-cfn
Configuring the Service
Setting the Message Broker
rpc_backend = heat.openstack.common.rpc.impl_kombu
rabbit_host=controller
Configuring MySQL
openstack-config --set /etc/heat/heat.conf database connection mysql://heat:test4heat@controller/heat
On the MySQL server:
mysql
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'test4heat';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'test4heat';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'192.168.10.30' IDENTIFIED BY 'test4heat';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'192.168.10.31' IDENTIFIED BY 'test4heat';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'192.168.10.35' IDENTIFIED BY 'test4heat';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'192.168.10.32' IDENTIFIED BY 'test4heat';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'192.168.10.36' IDENTIFIED BY 'test4heat';
FLUSH PRIVILEGES;
exit
Create the heat service tables
# su -s /bin/sh -c "heat-manage db_sync" heat
Creating Service User
keystone user-create --name=heat --pass=test4heat --email=heat@example.com
keystone user-role-add --user=heat --tenant=service --role=admin
Run the following commands to configure the Orchestration service to authenticate with the Identity service:
openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_user heat
openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_password test4heat
openstack-config --set /etc/heat/heat.conf ec2authtoken auth_uri http://controller:5000/v2.0
Register the Heat and CloudFormation APIs with the Identity Service so that other OpenStack services can locate these APIs. Register the services and specify the endpoints:
keystone service-create --name=heat --type=orchestration --description="Orchestration"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ orchestration / {print $2}') --publicurl=http://controller:8004/v1/%\(tenant_id\)s --internalurl=http://controller:8004/v1/%\(tenant_id\)s --adminurl=http://controller:8004/v1/%\(tenant_id\)s
keystone service-create --name=heat-cfn --type=cloudformation --description="Orchestration CloudFormation"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ cloudformation / {print $2}') --publicurl=http://controller:8000/v1 --internalurl=http://controller:8000/v1 --adminurl=http://controller:8000/v1
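The endpoint-create commands above capture the service ID with a command substitution around keystone service-list and an awk filter. A sketch of how that filter behaves on sample output (the table layout mimics the keystone CLI, but the IDs are made up; real IDs are 32-character hex strings):

```shell
#!/bin/sh
# Simulated `keystone service-list` output.
sample='+----------+----------+----------------+------------------------------+
| id       | name     | type           | description                  |
+----------+----------+----------------+------------------------------+
| 11111111 | heat     | orchestration  | Orchestration                |
| 22222222 | heat-cfn | cloudformation | Orchestration CloudFormation |
+----------+----------+----------------+------------------------------+'
# Match the row containing " orchestration " and print the second
# whitespace-separated field; field 1 is the leading "|", field 2 the id.
service_id=$(printf '%s\n' "$sample" | awk '/ orchestration / {print $2}')
echo "$service_id"   # -> 11111111
```

The spaces around the pattern matter: / orchestration / matches only the type column, not the capitalized "Orchestration" in the description.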
Create the heat_stack_user role.
keystone role-create --name heat_stack_user
The example uses the IP address of the controller (192.168.10.30) instead of the controller host name, since our example architecture does not include a DNS setup. Make sure that the instances can resolve the controller host name if you choose to use it in the URLs.
openstack-config --set /etc/heat/heat.conf DEFAULT heat_metadata_server_url http://192.168.10.30:8000
openstack-config --set /etc/heat/heat.conf DEFAULT heat_waitcondition_server_url http://192.168.10.30:8000/v1/waitcondition
service openstack-heat-api start
service openstack-heat-api-cfn start
service openstack-heat-engine start
chkconfig openstack-heat-api on
chkconfig openstack-heat-api-cfn on
chkconfig openstack-heat-engine on
Error creating issue: Could not create workflow instance: root cause: while inserting: [GenericEntity:OSWorkflowEntry][id,null][name,jira][state,0] (SQL Exception while executing the following: INSERT INTO OS_WFENTRY (ID, NAME, INITIALIZED, STATE) VALUES (?, ?, ?, ?) (Binary logging not possible. Message: Transaction level 'READ-COMMITTED' in InnoDB is not safe for binlog mode 'STATEMENT'))
Cause: This is required by MySQL:
Statement based binlogging does not work in isolation level READ UNCOMMITTED and READ COMMITTED since the necessary locks cannot be taken.
Resolution
To change to row based binary logging, set the following in /etc/my.cnf (or your my.cnf if it’s elsewhere):
binlog_format=row
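For example, the relevant section of /etc/my.cnf would look like this (the section header is the standard MySQL server section):

```
[mysqld]
binlog_format=row
```

After restarting MySQL, you can confirm the setting with SELECT @@binlog_format; which should return ROW.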
The following command allows us to add a VMware license through SSH access to the ESXi server.
vim-cmd vimsvc/license --set *********************
ERROR: rsync error: protocol incompatibility (code 2) at compat.c(171) [sender=3.0.6]
I use rsync with ssh and authorized key files for automatic login to mirror a remote system to the local one. The only change I had made was in my .bashrc on the remote end: I added some commands to show file system usage, a du -f, and a tail of the log on login, for convenience.
My assumption here is that when rsync executed the ssh connection, it received 'junk'. Once I removed the extra output from the .bashrc file on the remote end, it worked just fine. So check your remote end for .profile, .bashrc, .bash_profile, etc. for any scripts that add extra output on login.
When the default logrotate is not working, we need to check its configuration using the command
/usr/sbin/logrotate -f /etc/logrotate.conf
and try running a selected configuration in debug mode using
logrotate -fd /etc/logrotate.d/test
where test is the configuration file name.
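A hypothetical /etc/logrotate.d/test might look like the following (the log path and options are examples, not from the original setup):

```
/var/log/test.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}
```

Running logrotate -fd against this file shows what would be rotated without actually rotating anything: -d turns on debug (dry-run) mode, and -f forces rotation regardless of schedule.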
Install Cinder – Block Storage Service
On Controller Node
Install the appropriate packages
yum install openstack-cinder -y
Configure Block Storage to use your database
openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:cinder4admin@controller/cinder
Creating Database
On the MySQL Server
mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.30' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.31' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.35' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.36' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.32' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.42' IDENTIFIED BY 'cinder4admin';
exit;
Create the database tables
su -s /bin/sh -c "cinder-manage db sync" cinder
Create a cinder user.
keystone user-create --name=cinder --pass=cinder4admin --email=cinder@example.com
keystone user-role-add --user=cinder --tenant=service --role=admin
Edit the /etc/cinder/cinder.conf configuration file:
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder4admin
Configure Block Storage to use the Qpid message broker:
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname 10.1.15.40
Register the Block Storage service with the Identity service so that other OpenStack services can locate it:
keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ volume / {print $2}') --publicurl=http://controller:8776/v1/%\(tenant_id\)s --internalurl=http://controller:8776/v1/%\(tenant_id\)s --adminurl=http://controller:8776/v1/%\(tenant_id\)s
Register a service and endpoint for version 2 of the Block Storage service API:
keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') --publicurl=http://controller:8776/v2/%\(tenant_id\)s --internalurl=http://controller:8776/v2/%\(tenant_id\)s --adminurl=http://controller:8776/v2/%\(tenant_id\)s
Start and configure the Block Storage services to start when the system boots:
service openstack-cinder-api start
service openstack-cinder-scheduler start
chkconfig openstack-cinder-api on
chkconfig openstack-cinder-scheduler on
On the Cinder Service Node
Setting Up the NFS Share
Installing NFS packages
yum install nfs-utils nfs-utils-lib
Make and configure partition
mkfs.ext4 /dev/mapper/vg_cloud2-LogVol03
mkdir /home/cinder_nfs
mount /dev/mapper/vg_cloud2-LogVol03 /home/cinder_nfs/
Add entries in Fstab
/dev/mapper/vg_cloud2-LogVol03 /home/cinder_nfs ext4 rw 0 0
Add Share to NFS
vi /etc/exports
/home/cinder_nfs *(rw,sync,no_root_squash,no_subtree_check)
exportfs -a
showmount -e 192.168.11.42
service nfs start
service nfs restart
service iptables stop
chkconfig iptables off
Install the Cinder Software
yum install openstack-cinder scsi-target-utils
Configure the Service
Copy the /etc/cinder/cinder.conf configuration file from the controller, or perform the following steps to set the keystone credentials:
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder4admin
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname 10.1.15.40
openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:cinder4admin@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_host controller
[root@compute2 ~]# cat /etc/cinder/nfsshares
192.168.11.42:/home/cinder_nfs
[root@compute2 ~]#
openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_shares_config /etc/cinder/nfsshares
openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver
service openstack-cinder-volume start
chkconfig openstack-cinder-volume on