We discussed the installation and configuration of the Keystone and Glance services for OpenStack in the previous posts. Now it's time to move on to the next service, i.e., the Compute or Nova service.
What is OpenStack Nova?
OpenStack Nova is a component within the OpenStack open source cloud computing platform developed to provide on-demand access to compute resources by provisioning and managing large networks of virtual machines (VMs).
Also known as OpenStack Compute, Nova offers massively scalable, on-demand, self-service access to compute resources such as virtual machines, containers and bare metal servers. It manages the lifecycle of compute instances in an OpenStack environment. Its responsibilities include spawning, scheduling and decommissioning machines on demand.
As the most distributed (and complex) component of the OpenStack platform, Nova interacts heavily with other OpenStack services: Keystone for authentication, Horizon for its web interface and Glance for supplying its images.
1) Install the compute service (controller)
a. Install the compute packages on the controller node as shown below.
root@controller# apt-get install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient
The Compute service uses a MySQL database to store its information, so you need to specify the location of the database in the Compute (nova) configuration file.
Add/modify the entry shown below in the /etc/nova/nova.conf file under the [database] section.
[database]
connection = mysql://nova:NOVA_DBPASS@controller/nova
Remember to replace NOVA_DBPASS with the desired password for the nova database user.
Configure the Compute service to use the RabbitMQ message broker by setting these configuration keys in the [DEFAULT] section of the /etc/nova/nova.conf file.
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
Make sure to replace the RABBIT_PASS with the RabbitMQ guest user password which you set while configuring the Message Broker.
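If you are not sure what the guest password currently is, you can reset it and confirm the broker is running from the controller node; a quick check, assuming RabbitMQ was installed as in the prerequisite part:
root@controller# rabbitmqctl change_password guest RABBIT_PASS   # set the guest password to match nova.conf
root@controller# rabbitmqctl status                              # the broker should report that it is running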
Configure the vncserver_listen, my_ip and vncserver_proxyclient_address variables to the management interface IP address of the controller node (192.168.1.11).
Edit the /etc/nova/nova.conf and add the entries as shown below under the [DEFAULT] section.
[DEFAULT]
my_ip = 192.168.1.11
vncserver_listen = 192.168.1.11
vncserver_proxyclient_address = 192.168.1.11
By default, the Ubuntu packages create a SQLite database. Delete the nova.sqlite file created in the /var/lib/nova/ directory so that it does not get used by mistake.
root@controller# rm /var/lib/nova/nova.sqlite
Log in to the MySQL database as root on the controller node and create the nova database and the nova database user.
root@controller# mysql -u root -p
mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Replace NOVA_DBPASS with the one set in the nova configuration file /etc/nova/nova.conf.
b. Populate the tables for compute service
root@controller# su -s /bin/sh -c "nova-manage db sync" nova
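To confirm that the sync actually populated the nova database, you can list its tables; a quick sanity check using the nova database user created above:
root@controller# mysql -u nova -p nova -e "SHOW TABLES;"   # enter NOVA_DBPASS; tables such as instances and services should be listed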
c. Create a nova user that the Compute service can use to authenticate with the Identity (Keystone) service
root@controller# keystone user-create --name=nova --pass=NOVA_PASS --email=nova@example.com
You can choose your own NOVA_PASS and email address.
d. Give the nova user the admin role and assign it to the service tenant as shown below
root@controller# keystone user-role-add --user=nova --tenant=service --role=admin
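Before moving on, you can verify that the user exists and has the admin role on the service tenant; a quick check using the admin credentials you already sourced:
root@controller# keystone user-list | grep nova
root@controller# keystone user-role-list --user nova --tenant service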
e. Configure the compute service to use Keystone service for authentication
Add/modify the [keystone_authtoken] and [DEFAULT] sections in the /etc/nova/nova.conf file as shown below.
[keystone_authtoken]
…
auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS
[DEFAULT]
…
auth_strategy = keystone
Edit the NOVA_PASS to match the one you set for the nova user created with the keystone user-create command.
f. Register the compute service with the Identity service and create the endpoint
root@controller# keystone service-create --name=nova --type=compute \
  --description="OpenStack Compute"
root@controller# keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
  --publicurl=http://controller:8774/v2/%\(tenant_id\)s \
  --internalurl=http://controller:8774/v2/%\(tenant_id\)s \
  --adminurl=http://controller:8774/v2/%\(tenant_id\)s
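To confirm the registration, list the services and endpoints known to Keystone; the compute service and its three URLs should appear in the output:
root@controller# keystone service-list
root@controller# keystone endpoint-list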
g. Restart compute services for the new configuration to take effect.
root@controller# service nova-api restart
root@controller# service nova-cert restart
root@controller# service nova-consoleauth restart
root@controller# service nova-scheduler restart
root@controller# service nova-conductor restart
root@controller# service nova-novncproxy restart
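If any of the services fail to start, the Nova log files on the controller usually say why; a quick way to scan them (the Ubuntu packages log to /var/log/nova/) is:
root@controller# grep -i error /var/log/nova/nova-*.log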
h. Verify the compute installation using the nova command.
root@controller# nova image-list
The output should show the CirrOS VM image that we already uploaded to the Glance service in the previous part.
2) Install the compute service (compute node)
With the Compute service configured on the controller node, we now configure the Compute service on the compute node. The compute node runs the hypervisor portion of the Compute service.
OpenStack supports many hypervisors; I'm using the KVM hypervisor in this tutorial.
a. Install the compute packages on the compute node
root@compute# apt-get install nova-compute-kvm
Edit/modify the /etc/nova/nova.conf file to match the configuration shown below.
[DEFAULT]
auth_strategy = keystone
…
[database]
connection = mysql://nova:NOVA_DBPASS@controller/nova
[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS
Replace NOVA_PASS with the one set in /etc/nova/nova.conf on the controller node.
b. Configure compute services to use RabbitMQ as the message broker
Edit the /etc/nova/nova.conf file on the compute node.
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
Make sure to replace RABBIT_PASS with the RabbitMQ guest user password which you set while configuring the message broker.
c. Configure Compute to provide remote console access to instances
Edit /etc/nova/nova.conf on the compute node and add the following entries under the [DEFAULT] section.
[DEFAULT]
my_ip = 192.168.1.31 #compute node management IP
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.1.31 #compute node management IP
novncproxy_base_url = http://controller:6080/vnc_auto.html
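With these settings, console traffic is proxied through the noVNC proxy on the controller (port 6080). Once an instance is running later in this series, you can request its console URL from the controller; a hedged example, using a hypothetical instance name:
root@controller# nova get-vnc-console demo-instance1 novnc   # returns the browser URL for the instance console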
We need to specify the host that runs the Image service under the [DEFAULT] section of the /etc/nova/nova.conf on the compute node.
[DEFAULT]
glance_host = controller
You must make sure that your compute node's processor supports hardware acceleration for virtual machines; a quick check is shown below.
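The check simply counts the CPU virtualization flags; a non-zero result means the processor exposes Intel VT-x or AMD-V:
root@compute# egrep -c '(vmx|svm)' /proc/cpuinfo
If this returns 0, KVM will not work on this node and you would have to fall back to software virtualization with QEMU (for example by setting virt_type = qemu under [libvirt] in /etc/nova/nova-compute.conf, depending on your release).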
By default, the Ubuntu packages create a SQLite database. Delete the nova.sqlite file created in the /var/lib/nova/ directory so that it does not get used by mistake.
root@compute# rm /var/lib/nova/nova.sqlite
Finally, restart the compute service on the compute node
root@compute# service nova-compute restart
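Back on the controller, you can confirm that the new compute node has registered itself with the Compute service; nova-compute should appear with state "up" alongside the controller services:
root@controller# nova service-list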
Legacy Network Service [Nova-network] Configuration
Legacy networking aims at providing network services to VM instances by using a flat network of IP addresses and assigning them to the VMs using DHCP. With this approach you cannot manage the network service from the OpenStack Dashboard; instead, you configure it manually using nova-network CLI commands. Legacy networking primarily involves the compute nodes, but we need to do a little configuration on the controller node too, as described below.
1) Controller Network Configuration
Edit the /etc/nova/nova.conf file on the controller node and add the following entries to the [DEFAULT] section:
[DEFAULT]
network_api_class = nova.network.api.API
security_group_api = nova
Restart the compute services on the controller node
root@controller# service nova-api restart
root@controller# service nova-scheduler restart
root@controller# service nova-conductor restart
2) Compute Network Configuration
Install the legacy networking services as shown below:
root@compute# apt-get install nova-network nova-api-metadata
Edit the /etc/nova/nova.conf on the compute node and add the following entries to the [DEFAULT] section.
[DEFAULT]
…
network_api_class = nova.network.api.API
security_group_api = nova
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
network_manager = nova.network.manager.FlatDHCPManager
network_size = 254
allow_same_net_traffic = True
multi_host = True
send_arp_for_ha = True
share_dhcp_address = True
force_dhcp_release = True
flat_network_bridge = br100
flat_interface = eth1
public_interface = eth0
Restart the network services
root@compute# service nova-network restart
root@compute# service nova-api-metadata restart
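Once the demo network is created in the next step and the first instance is launched, nova-network builds the br100 flat bridge on eth1 of the compute node. A hedged way to inspect it later, assuming the bridge-utils package is installed:
root@compute# brctl show            # br100 should list eth1 (plus instance vnet interfaces)
root@compute# ip addr show br100    # with multi_host FlatDHCP, the fixed-range gateway IP sits on this bridge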
3) Configure Initial Network
The initial network defines the range of IP addresses to be used when assigning IP addresses to the OpenStack instances. To do this, run the following commands on the controller node.
Source the admin variable file admin-openrc.sh
root@controller# source admin-openrc.sh
Create the network to be used by the instances.
root@controller# nova network-create demo-net --bridge br100 --multi-host T --fixed-range-v4 10.0.0.0/24
Here I'm using the 10.0.0.0/24 IP range for my instances. You can create your own network range.
Verify whether the network has been created by running the below command.
root@controller# nova net-list
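At this point you can optionally boot a test instance from the CLI to confirm everything works end to end. A minimal sketch, assuming the CirrOS image uploaded in the Glance part is named cirros-0.3.2-x86_64 (check nova image-list for the exact name) and that the default m1.tiny flavor exists:
root@controller# nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 test-instance
root@controller# nova list   # the instance should reach ACTIVE with an IP from 10.0.0.0/24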
Once all these steps are done properly, the basic services required to get an instance up and running are configured, and you can create instances from the CLI. But let's configure the OpenStack Dashboard next, so that you get a web-based interface from which you can manage instances.
Recommended Readings
OpenStack Cloud Computing Fundamentals
OpenStack On Ubuntu – Part 1- Prerequisite Setup
OpenStack on Ubuntu – Part 2 – Identity or Keystone Service
OpenStack on Ubuntu – Part 3 – Image or Glance Service
OpenStack on Ubuntu – Part 5 – Dashboard or Horizon Service
OpenStack on Ubuntu – Part 6 – Block Storage or Cinder Service