Thursday, January 22, 2015

CNI online course - systemd notes

systemd
Locations:  /usr/lib/systemd/system/  and  /etc/systemd/system/

Configuration example:
# cat   /usr/lib/systemd/system/sshd.service
[Unit]
Description=OpenSSH Daemon
After=network.target

[Service]
EnvironmentFile=/etc/sysconfig/ssh
ExecStartPre=/usr/sbin/sshd-gen-keys-start
ExecStart=/usr/sbin/sshd -D $SSHD_OPTS
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=always

[Install]
WantedBy=multi-user.target

systemctl   mask    <service>.service

  • mask — once a service is masked, it cannot be started manually
  • unmask — removes the mask

To check which services are masked ( a masked service is redirected to /dev/null ):
# ls   -l   /etc/systemd/system | grep null
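The /dev/null redirection that the ls check looks for can be sketched directly; a minimal simulation in a throwaway directory rather than on a live system ( foo.service is a made-up unit name; a real mask creates this link under /etc/systemd/system ):

```shell
# Simulate what "systemctl mask foo.service" does: link the unit to /dev/null
dir=$(mktemp -d)
ln -s /dev/null "$dir/foo.service"
readlink "$dir/foo.service"   # → /dev/null
rm -r "$dir"
```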

List units by type:
# systemctl  -t  service

# systemctl  -t  service | grep ssh
sshd.service                       loaded active running OpenSSH Daemon

Some targets correspond to the old runlevels.
Target units group services and other units.
Some target units are the equivalent of runlevels:
  • multi-user.target - runlevel 3
  • graphical.target - runlevel 5
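The two entries above can be rounded out into the full compatibility mapping; a small helper, assuming the standard systemd alias names ( runlevels 2-4 all map to multi-user.target ):

```shell
# Map a SysV runlevel to its equivalent systemd target
runlevel_to_target() {
  case "$1" in
    0) echo poweroff.target ;;
    1) echo rescue.target ;;
    2|3|4) echo multi-user.target ;;
    5) echo graphical.target ;;
    6) echo reboot.target ;;
    *) return 1 ;;
  esac
}
runlevel_to_target 3   # → multi-user.target
```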

# systemctl  -t  target
UNIT                 LOAD   ACTIVE SUB    DESCRIPTION
basic.target         loaded active active Basic System
cryptsetup.target    loaded active active Encrypted Volumes
getty.target         loaded active active Login Prompts
graphical.target     loaded active active Graphical Interface

At boot time you can temporarily switch the target ( runlevel ) by adding it to the linux line of the boot loader.
For example, if the system normally boots into the graphical interface but you want text mode this once:
systemd.unit=multi-user.target

If the system booted in text mode ( runlevel 3 ) and you want to switch to the graphical interface,
change the target ( for now, init still works too ):
# systemctl  isolate  graphical.target

Boot time:
# systemd-analyze
Startup finished in 2.739s (kernel) + 17.430s (userspace) = 20.170s
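The total is just kernel time plus userspace time; checking the arithmetic with awk shows the figures above sum to 20.169s, so the reported 20.170s presumably reflects each component being rounded separately for display:

```shell
# kernel + userspace from the systemd-analyze output above
awk 'BEGIN { printf "%.3fs\n", 2.739 + 17.430 }'
# → 20.169s
```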

Detailed per-service breakdown:
# systemd-analyze  blame
        11.260s network.service
         8.133s network@ens160.service
         2.104s ModemManager.service
         2.050s SuSEfirewall2_init.service
         1.658s network@ens192.service
         1.522s rsyslog.service
          974ms vmtoolsd.service

Like openSUSE 13.2, SLES 12 replaces syslog-ng with systemd-logger.
Systemd has an internal log daemon – journald

Save the journal to a file via redirection:
journalctl  >  file.txt

systemd and cgroups
  • Each service is placed into a cgroup in  /sys/fs/cgroup/systemd/system.slice
  • systemd uses this directory to control services, not for resource control

See man systemd.resource-control for available options

Inspect the cgroup tree:
# systemd-cgls
Working Directory /sys/fs/cgroup/systemd/system.slice:
├─1 /sbin/init showopts
├─polkit.service
│ └─18657 /usr/lib/polkit-1/polkitd --no-debug
├─accounts-daemon.service
│ └─18652 /usr/lib/accounts-daemon
├─system-network.slice
│ ├─network@ens192.service
│ │ ├─1955 avahi-autoipd: [ens192] sleeping  
│ │ └─1956 avahi-autoipd: [ens192] callout dispatche
│ └─network@ens160.service
│   ├─1518 avahi-autoipd: [ens160] sleeping  
│   └─1519 avahi-autoipd: [ens160] callout dispatche

Friday, January 16, 2015

Notes on fixing a Master that is missing the ceph packages because the ceph package site was unstable at install time

Normally, the ceph packages are installed on the master the first time you click Setup Slave.
But if the network to the main ceph package site was unstable at install time, the master can end up without ceph installed.

First, check whether the related shell scripts made it onto the master:

$ ls  /opt/ezilla/sbin/
ezilla-autoinstall-server          ezilla_diskless               ezilla-setup-nat.sh                ezilla-slave                        ezilla-slave-setup.sh
ezilla-ceph-install-setup-disk.sh  ezilla_diskver                ezilla-setup-network.sh            ezilla-slave-addhost.sh             ezilla-slave-ssh.sh
ezilla-ceph-install.sh             ezilla-drbl-patch.sh          ezilla-setup-one-user.sh           ezilla-slave-ceph-install.sh        ezilla-ssl-lighttpd.sh
ezilla-ceph-mount.sh               ezilla-filesystem-install.sh  ezilla-setup-opennebula-config.sh  ezilla-slave-filesystem-install.sh  mkdemo.sh
ezilla-ceph-patch.sh               ezilla-init                   ezilla-setup-opennebula-env.sh     ezilla-slave-generate-preseed.sh
ezilla-demo-modify-ip.sh           ezilla-libvirtd-patch.sh      ezilla-setup-opennebula-patch.sh   ezilla-slave-init.sh
ezilla-desktop.sh                  ezilla-pkg-install.sh         ezilla-setup-slave-netinstall.sh   ezilla-slave-network.sh

If they are all present on the master,

switch to root ( or use sudo ) and delete  /opt/ezilla/.already_filesys_setup  ( the installer checks whether this file exists ):

# rm  /opt/ezilla/.already_filesys_setup

Click Setup Slave again and rerun the Slave installation; this time the ceph packages will be installed on the Master.

^^


Noting this down for now

Saturday, December 27, 2014

VMware: importing OVF / OVA files from a datastore

I have been testing machines these past two days.
Not sure yet whether it is a network traffic limit or the firewall ( still to be investigated ),
but deploying by importing an OVF / OVA kept running into problems.
So the plan became: upload the OVF / OVA to a datastore first, then deploy from there ( taking the network out of the picture ).

But in both the vSphere Client and the vSphere Web Client, when you go to deploy an OVF / OVA,

the only option offered is to deploy from a file or URL.


After digging through some articles, I found you can log in to the VMware ESX Web-Based Datastore Browser to obtain a URL to deploy from.

Remember this screen?
Clicking  Browse datastores in this host's inventory  takes you to the Web-Based Datastore Browser.



Or connect directly with  https://<hostname>/folder

After connecting, an authentication prompt appears.



After logging in you will see the host's datastores.
The trick is simple: right-click the *.ova / *.ovf file and choose  Copy shortcut.
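The copied shortcut follows the datastore browser's URL pattern. A hypothetical helper that assembles such a URL ( the host, datacenter, datastore, and file path below are all example values, and the dcPath / dsName query parameters are an assumption based on the URLs the browser hands out ):

```shell
# Build a datastore-browser file URL; all names below are examples
datastore_url() {
  host=$1; dc=$2; ds=$3; path=$4
  printf 'https://%s/folder/%s?dcPath=%s&dsName=%s\n' "$host" "$path" "$dc" "$ds"
}
datastore_url esxi01.example.com ha-datacenter datastore1 isos/appliance.ova
# → https://esxi01.example.com/folder/isos/appliance.ova?dcPath=ha-datacenter&dsName=datastore1
```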


Paste this URL into the OVF deployment source field.




Don't worry, authentication is still required; not just anyone can reach it.



From there, just follow the usual import workflow.


But oddly, with the exact same URL:

* Windows vSphere Client import  --  success
* Windows vSphere Web Client import  --  fails, keeps saying the OVF cannot be accessed
* Linux vSphere Web Client  --  even after installing the integration plug-in, it still insists the plug-in is missing Orz.....


Noting this down for now
^^


Reference:
http://ephrain.pixnet.net/blog/post/46208406-%5Bvmware%5D-%E5%BE%9E-datastore-%E4%B8%8A%E5%8C%AF%E5%85%A5-ova-%E6%AA%94%E6%A1%88
http://www.virtuallyghetto.com/2012/03/how-to-deploy-ovf-located-on-esxi.html


VMware vSphere Web Client on Linux: garbled Chinese text

I recently started managing VMware vSphere equipment.

I have always managed vSphere through the vSphere Client and am used to the way it works.
But starting with VMware 5.5, features are gradually moving to the Web Client, and since I am a Linux / Mac user anyway, I am getting myself used to the vSphere Web Client.

Using the vSphere Web Client

  • Pros
    • No need for a Windows system with the vSphere Client installed to manage VMware vSphere
  • Cons
    • Deploying an OVF and certain other operations require downloading and installing a client plug-in

    • In the Chrome browser on Linux, the Chinese interface renders as garbage characters
Screenshots below:
on the left, a browser inside Windows 7 installed under VMware Player;
on the right, the Google Chrome browser on openSUSE 13.1.

The articles I found online all explain how to switch the interface to Chinese.
The method is to append  ?locale=<locale>  to the vsphere-client path:
https://<hostname>:9443/vsphere-client/?locale=zh_CN

So the same trick, with the locale switched to English, works around the garbled text:

https://<hostname>:9443/vsphere-client/?locale=en_EN



Noting this down for now
^^


Saturday, December 13, 2014

openSUSE 13.2 notes - the logging mechanism has changed

When many things change bit by bit, we have to learn to adapt.

New OSes are gradually adopting systemd. My earlier take was: it just changes how the system boots, so I never paid special attention.

Until today, when my disk acted up, the console spat out some messages, and I went back to check the logs... yes, it really is time to sit down and learn ^^

-- That's right, /var/log/messages is gone
-- because systemd-logger has replaced my beloved syslog-ng

You can see it announced here:

https://news.opensuse.org/category/distribution/sneak-peeks/

journald

journald is replacing the old logging technologies in openSUSE (at least for most common cases). The two most important commands you need to know:
  • journalctl – the old “cat /var/log/messages”
  • journalctl -f – the old “tail -f /var/log/messages”

But everyone's inner monologue is........... the print is that small....... as if anyone would actually notice

Right..... logs are now read with  journalctl
◢▆▅▄▃ 崩╰(〒皿〒)╯潰 ▃▄▅▆◣

I have not tried combining it with grep yet....
I immediately dove into /var/log:

# ls   /var/log/
README            apparmor  btmp     gdm      krb5     pbl.log          snapper.log        wpa_supplicant.log  zypper.log
YaST2             audit     cups     hp       lastlog  pk_backend_zypp  speech-dispatcher  wtmp                zypper.log-20141118.xz
alternatives.log  boot.log  faillog  journal  ntp      samba            tuned              zypp

Yes, the dearly missed /var/log/messages is gone.

The cute part is what you find if you read  /var/log/README

You are looking for the traditional text log files in /var/log, and
they are gone?

Here's an explanation on what's going on:

You are running a systemd-based OS where traditional syslog has been
replaced with the Journal. The journal stores the same (and more)
information as classic syslog. To make use of the journal and access
the collected log data simply invoke "journalctl", which will output
the logs in the identical text-based format the syslog files in
/var/log used to be. For further details, please refer to
journalctl(1).

Alternatively, consider installing one of the traditional syslog
implementations available for your distribution, which will generate
the classic log files for you. Syslog implementations such as
syslog-ng or rsyslog may be installed side-by-side with the journal
and will continue to function the way they always did.

Thank you!

Further reading:
        man:journalctl(1)
        man:systemd-journald.service(8)
        man:journald.conf(5)
        http://0pointer.de/blog/projects/the-journal.html

Doesn't that feel like being stabbed a second time??

Fine, you will think: there is a /var/log/journal directory, let's go take a look.

# ls -R /var/log/journal/
/var/log/journal/:
016627c3c4784cd4812d4b7e96a34226

/var/log/journal/016627c3c4784cd4812d4b7e96a34226:
system.journal                                     user-1001.journal
system@00050a15226e65e2-6a2adaf099149b92.journal~  user-1001@00050a1568c14eb9-763573ad8f79750c.journal~
user-1000.journal                                  user-484.journal

These files are not in a text format,
so if you try to cat them,
all you get is merciless gibberish.
And what is with those log sizes?  8M / 24M ?

# ls -hl  /var/log/journal/016627c3c4784cd4812d4b7e96a34226/
total 97M
-rw-r-----  1 root systemd-journal 8.0M Dec 13 19:21 system.journal
-rw-r-----  1 root systemd-journal  24M Dec 13 16:52 system@00050a15226e65e2-6a2adaf099149b92.journal~
-rwxr-xr-x+ 1 root systemd-journal 8.0M Nov 15 14:21 user-1000.journal
-rw-r-----+ 1 root systemd-journal 8.0M Dec 13 19:19 user-1001.journal
-rw-r-----+ 1 root systemd-journal  40M Dec 13 17:12 user-1001@00050a1568c14eb9-763573ad8f79750c.journal~
-rw-r-----+ 1 root systemd-journal 8.0M Dec 13 18:45 user-484.journal


Check the format with the  file  command:
# file  /var/log/journal/016627c3c4784cd4812d4b7e96a34226/user-1000.journal
/var/log/journal/016627c3c4784cd4812d4b7e96a34226/user-1000.journal: Journal file, offline, compressed

So the cat command is hopeless here Orz....

# journalctl   |  grep  error
Nov 15 20:31:34 linux-dxsi gdm-Xorg-:0[791]: (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
Nov 15 20:31:59 linux-dxsi org.a11y.Bus[1213]: g_dbus_connection_real_closed: Remote peer vanished witherror: Underlying GIOStream returned 0 bytes on an async read (g-io-error-quark, 0). Exiting.
Nov 15 20:31:59 linux-dxsi org.gtk.vfs.Daemon[1213]: g_dbus_connection_real_closed: Remote peer vanished with error: Underlying GIOStream returned 0 bytes on an async read (g-io-error-quark, 0). Exiting.
Nov 15 20:31:59 linux-dxsi ca.desrt.dconf[1213]: g_dbus_connection_real_closed: Remote peer vanished with error: Underlying GIOStream returned 0 bytes on an async read (g-io-error-quark, 0). Exiting.
Nov 15 20:31:59 linux-dxsi org.gtk.Private.GoaVolumeMonitor[1213]: g_dbus_connection_real_closed: Remote peer vanished with error: 取回郵件發生錯誤:連線被對方重設 (g-io-error-quark, 0). Exiting.

For now I am using  journalctl  piped into  grep,
but I am still not used to it.
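The journalctl-into-grep combination is plain text filtering; sketched here on two canned log lines instead of a live journal ( journalctl also has built-in filters such as -p for priority and -u for a unit ):

```shell
# Count lines containing "error" in some sample journal-style output
printf '%s\n' \
  'Nov 15 20:31:34 host gdm-Xorg-:0[791]: (EE) error, (WW) warning' \
  'Nov 15 20:31:59 host dbus[1213]: connection closed' |
  grep -c error
# → 1
```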

And for the moment I don't want to remove systemd-logger and reinstall  syslog-ng  ( how long can I hold out? )

# zypper   search   systemd-
Loading repository data...
Reading installed packages...

S | Name                              | Summary                                             | Type      
--+-----------------------------------+-----------------------------------------------------+-----------
i | systemd-32bit                     | A System and Session Manager                        | package   
i | systemd-bash-completion           | Bash completion support for systemd                 | package   
  | systemd-devel                     | Development headers for systemd                     | package   
  | systemd-journal-gateway           | Gateway for serving journal events over the netwo-> | package   
i | systemd-logger                    | Journal only logging                                | package  


Noting this down for now.
Time to go man the related pages:
# man  journald.conf

# ls  /etc/systemd/
bootchart.conf  journald.conf  logind.conf  system  system.conf  user  user.conf


~ fun in share

Tuesday, November 25, 2014

20141125 VMware Solutions Symposium notes

I attended the VMware Solutions Symposium this year
and am jotting down some notes from it.



The focus was mostly a preview of vSphere 6, plus a lot on VMware Horizon and vSAN.

These are my own notes, since my memory is poor.

Horizon is recommended together with vSAN to lower cost: VDI is first and foremost about storage and IO, so storage cost directly determines how well a deployment pans out.

Horizon

  • management tasks can be delegated to operations staff ( OP )
  • master image management
    • linked clones save storage cost
    • VMware Mirage for image management, P2V and V2P
  • management and monitoring
    • V4H ( vRealize Operations for Horizon )
    • end-to-end monitoring
    • virtual desktop assessment
  • application delivery
    • ThinApp ( run different app versions, e.g. an XP app under Windows 7 )
    • Hosted App ( Terminal Services, one-to-many )
    • App Volumes ( fast app delivery [ mounted as volumes ] )
  • user experience
    • PCoIP uses UDP 4172 ( Citrix uses TCP )
    • VMware Blast ( vSGA, vDGA, vGPU )
  • offline access
    • ThinApp ( package the software, run at the endpoint )
    • Horizon FLEX
      • unified image management
      • FLEX Client - Fusion / Player
      • Mirage - physical endpoint machines ( better 3D performance )
      • security controls
        • USB / time restrictions
Nvidia
  • http://www.nvidia.com/trygrid
  • GRID Virtual GPU Manager manages K1 / K2 resource allocation
Horizon tuning
VMware desktop optimization
  • a clean, maintainable master image is essential
  • it keeps problems traceable
Do not build the master with P2V; always create the master from scratch.
A P2V master carries a high risk for later maintenance.

Building the master
  • use KB1012225 to remove the hot-plug feature without affecting USB redirection ( Windows 7 )
  • remember to snapshot the master once it is built
  • optimize the system to lower IOPS / CPU demand
    • disable indexing / wifi-related settings
    • run the command script that ships with the Horizon View optimization guide
    • or use the VMware OS Optimization Tool ( it cannot be rolled back, so the snapshot matters )

Some picture memo

Noting this down for now
^^

Wednesday, October 29, 2014

openStack CL210 course notes - Day 3

20141029


Configuring Swift object storage service rings
workbook p69


How disks are allocated:
the file name is hashed, the hash is divided by the number of zones, and the remainder decides which disk the object lands on.
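The remainder rule described above, sketched in shell; cksum stands in here for the real hash, and the object name and zone count are example values:

```shell
# Map an object name to a zone: hash the name, then take the remainder
zone_for() {
  name=$1; zones=$2
  hash=$(printf '%s' "$name" | cksum | cut -d' ' -f1)
  echo $(( hash % zones ))
}
zone_for my-object 2   # prints 0 or 1; the same name always lands in the same zone
```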


[root@server5 ~]# source  /root/keystonerc_admin


[root@server5 ~(keystone_admin)]$ swift-ring-builder  /etc/swift/account.builder create  12  2 1


[root@server5 ~(keystone_admin)]$ swift-ring-builder  /etc/swift/container.builder create  12  2 1


[root@server5 ~(keystone_admin)]$ swift-ring-builder  /etc/swift/object.builder create  12  2 1


[root@server5 ~(keystone_admin)]$ for i in 1 2; do swift-ring-builder /etc/swift/container.builder add z${i}-192.168.0.105:6001/z${i}d1 100; done


WARNING: No region specified for z1-192.168.0.105:6002/z1d1. Defaulting to region 1.
Device d0r1z1-192.168.0.105:6002R192.168.0.105:6002/z1d1_"" with 100.0 weight got id 0
WARNING: No region specified for z2-192.168.0.105:6002/z2d1. Defaulting to region 1.
Device d1r1z2-192.168.0.105:6002R192.168.0.105:6002/z2d1_"" with 100.0 weight got id 1


[root@server5 ~(keystone_admin)]$ for i in 1 2; do swift-ring-builder /etc/swift/account.builder add z${i}-192.168.0.105:6002/z${i}d1 100; done
WARNING: No region specified for z1-192.168.0.105:6002/z1d1. Defaulting to region 1.
Device d0r1z1-192.168.0.105:6002R192.168.0.105:6002/z1d1_"" with 100.0 weight got id 0
WARNING: No region specified for z2-192.168.0.105:6002/z2d1. Defaulting to region 1.
Device d1r1z2-192.168.0.105:6002R192.168.0.105:6002/z2d1_"" with 100.0 weight got id 1


[root@server5 ~(keystone_admin)]$ for i in 1 2; do swift-ring-builder /etc/swift/object.builder add z${i}-192.168.0.105:6000/z${i}d1 100; done
WARNING: No region specified for z1-192.168.0.105:6002/z1d1. Defaulting to region 1.
Device d0r1z1-192.168.0.105:6002R192.168.0.105:6002/z1d1_"" with 100.0 weight got id 0
WARNING: No region specified for z2-192.168.0.105:6002/z2d1. Defaulting to region 1.
Device d1r1z2-192.168.0.105:6002R192.168.0.105:6002/z2d1_"" with 100.0 weight got id 1


[root@server5 ~(keystone_admin)]$ swift-ring-builder /etc/swift/account.builder rebalance
Reassigned 4096 (100.00%) partitions. Balance is now 0.00.


[root@server5 ~(keystone_admin)]$ swift-ring-builder /etc/swift/container.builder rebalance
Reassigned 4096 (100.00%) partitions. Balance is now 0.00.


[root@server5 ~(keystone_admin)]$ swift-ring-builder /etc/swift/object.builder rebalance
Reassigned 4096 (100.00%) partitions. Balance is now 0.00.


[root@server5 ~(keystone_admin)]$ ls /etc/swift/*.gz
/etc/swift/account.ring.gz  /etc/swift/container.ring.gz  /etc/swift/object.ring.gz


[root@server5 ~(keystone_admin)]$ chown -R root:swift  /etc/swift/


Lab: Deploying the Swift object storage proxy
workbook p72


[root@server5 ~(keystone_admin)]$ cp  /etc/swift/proxy-server.conf  /etc/swift/proxy-server.conf.orig


[root@server5 ~(keystone_admin)]$ openstack-config  --set /etc/swift/proxy-server.conf  filter:authtoken admin_tenant_name services


[root@server5 ~(keystone_admin)]$ openstack-config  --set /etc/swift/proxy-server.conf  filter:authtoken auth_host 192.168.0.105


[root@server5 ~(keystone_admin)]$ openstack-config  --set /etc/swift/proxy-server.conf  filter:authtoken admin_user swift


[root@server5 ~(keystone_admin)]$ openstack-config  --set /etc/swift/proxy-server.conf  filter:authtoken admin_password redhat


[root@server5 ~(keystone_admin)]$ service  memcached  start
Starting memcached:                                     [  OK  ]


[root@server5 ~(keystone_admin)]$ service  openstack-swift-proxy  start
Starting openstack-swift-proxy:                         [  OK  ]


[root@server5 ~(keystone_admin)]$ tail  /var/log/messages
Oct 28 18:28:30 server5 proxy-server Configuring auth_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint
Oct 28 18:28:30 server5 proxy-server Using /tmp/keystone-signing-swift as cache directory for signing certificate
Oct 28 18:28:30 server5 proxy-server Configuring auth_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint
Oct 28 18:28:30 server5 proxy-server Using /tmp/keystone-signing-swift as cache directory for signing certificate
Oct 28 18:28:30 server5 proxy-server Starting keystone auth_token middleware
Oct 28 18:28:30 server5 proxy-server Configuring auth_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint
Oct 28 18:28:30 server5 proxy-server Using /tmp/keystone-signing-swift as cache directory for signing certificate
Oct 28 18:28:30 server5 proxy-server Starting keystone auth_token middleware
Oct 28 18:28:30 server5 proxy-server Configuring auth_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint
Oct 28 18:28:30 server5 proxy-server Using /tmp/keystone-signing-swift as cache directory for signing certificate


[root@server5 ~(keystone_admin)]$ chkconfig  memcached  on
[root@server5 ~(keystone_admin)]$ chkconfig  memcached  --list
memcached 0:off    1:off    2:on    3:on    4:on    5:on    6:off


[root@server5 ~(keystone_admin)]$ chkconfig  openstack-swift-proxy  on
[root@server5 ~(keystone_admin)]$ chkconfig  openstack-swift-proxy  --list
openstack-swift-proxy    0:off    1:off    2:on    3:on    4:on    5:on    6:off

Lab:  Validating Swift object storage
workbook p74


Check the state before starting:
[root@server5 ~(keystone_admin)]$ swift list


[ Lab not finished ]


* Chapter  6 Implementing the Glance image service


Lab:  Deploying the Glance image service
workbook p82


[root@server5 ~]# yum  install -y openstack-glance


[root@server5 ~]# cp /etc/glance/glance-registry.conf  /etc/glance/glance-registry.conf.orig


[root@server5 ~]# cp /etc/glance/glance-api.conf   /etc/glance/glance-api.conf.orig


[root@server5 ~]# cp /usr/share/glance/glance-registry-dist.conf /etc/glance/glance-registry.conf
cp: overwrite `/etc/glance/glance-registry.conf'? y


[root@server5 ~(keystone_admin)]$ openstack-db --init --service glance --password redhat --rootpw redhat


[root@server5 ~(keystone_admin)]$ keystone user-create --name glance --pass redhat
+----------+----------------------------------+
| Property |           Value            |
+----------+----------------------------------+
|  email   |                               |
| enabled  |            True            |
| id | 3a924d24c2b84b2c95e35230ede33c9b |
|   name   |           glance           |
+----------+----------------------------------+


[root@server5 ~(keystone_admin)]$ keystone user-role-add --user glance --role admin --tenant services


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name services


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password redhat
[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/glance/glance-api.conf DEFAULT qpid_username qpidauth


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/glance/glance-api.conf DEFAULT qpid_password redhat


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/glance/glance-api.conf DEFAULT qpid_protocol ssl


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/glance/glance-api.conf DEFAULT qpid_port 5671


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name services


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password redhat

[root@server5 ~(keystone_admin)]$ service  openstack-glance-registry start
Starting openstack-glance-registry:                     [  OK  ]


[root@server5 ~(keystone_admin)]$ service openstack-glance-api start
Starting openstack-glance-api:                          [  OK  ]


[root@server5 ~(keystone_admin)]$ chkconfig openstack-glance-registry on
[root@server5 ~(keystone_admin)]$ chkconfig openstack-glance-registry --list
openstack-glance-registry    0:off    1:off    2:on    3:on    4:on    5:on    6:off


[root@server5 ~(keystone_admin)]$ chkconfig  openstack-glance-api on
[root@server5 ~(keystone_admin)]$ chkconfig  openstack-glance-api --list
openstack-glance-api    0:off    1:off    2:on    3:on    4:on    5:on    6:off

[root@server5 ~(keystone_admin)]$ egrep 'ERROR|CRITICAL' /var/log/glance/*
/var/log/glance/api.log:2014-10-28 19:44:16.861 3973 ERROR glance.store.sheepdog [-] Error in store configuration: Unexpected error while running command.
[root@server5 ~(keystone_admin)]$ keystone service-create --name glance --type image --description "openStack Image Service"
+-------------+----------------------------------+
|   Property  |           Value            |
+-------------+----------------------------------+
| description | openStack Image Service   |
|   id | e6b34babc2d34918a3003aa9c9005d3f |
| name |           glance           |
| type |           image            |
+-------------+----------------------------------+


[root@server5 ~(keystone_admin)]$ keystone endpoint-create --service-id e6b34babc2d34918a3003aa9c9005d3f --publicurl http://server5.example.com:9292 --adminurl  http://server5.example.com:9292 --internalurl http://server5.example.com:9292
+-------------+----------------------------------+
|   Property  |           Value            |
+-------------+----------------------------------+
|   adminurl  | http://server5.example.com:9292  |
|   id | 3fab15f5cd0747cfba284d2d648bf71e |
| internalurl | http://server5.example.com:9292  |
|  publicurl  | http://server5.example.com:9292  |
| region   |         regionOne          |
|  service_id | e6b34babc2d34918a3003aa9c9005d3f |
+-------------+----------------------------------+

Notes:
Changes to apply when building a template image
  • in the NIC configuration, remove the HWADDR setting
  • change the HOSTNAME setting to localhost.localdomain
  • delete /etc/udev/rules.d/70-persistent-net.rules
  • delete /etc/ssh/ssh_host_*
  • delete /etc/pki/tls/certs/localhost.crt
  • delete /etc/pki/tls/private/localhost.key
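The checklist above, sketched as a script run against a staged image root ( ROOT points at a scratch copy here, so nothing on the live system is touched; the staged file names are just examples ):

```shell
# Stage a fake image root with a couple of the files the checklist removes
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/udev/rules.d" "$ROOT/etc/ssh"
touch "$ROOT/etc/udev/rules.d/70-persistent-net.rules" \
      "$ROOT/etc/ssh/ssh_host_rsa_key"

# Apply the deletions from the checklist
rm -f "$ROOT/etc/udev/rules.d/70-persistent-net.rules" \
      "$ROOT/etc/ssh/ssh_host_"* \
      "$ROOT/etc/pki/tls/certs/localhost.crt" \
      "$ROOT/etc/pki/tls/private/localhost.key"

ls "$ROOT/etc/ssh"   # prints nothing: the host keys are gone
```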


Virtio for Windows


Display settings for virtio
  • for Video, choose  qxl
  • for Graphics, choose  Spice


==== Lunch break ====


Lab: Using Glance to upload a system image
workbook p85


Upload an image to glance from the command line:
[root@server5 ~(keystone_admin)]$ glance image-create --name "test" --is-public True --disk-format qcow2 --container-format bare --copy-from http://instructor.example.com/pub/materials/small.img
+------------------+--------------------------------------+
| Property      | Value                             |
+------------------+--------------------------------------+
| checksum      | None                              |
| container_format | bare                              |
| created_at    | 2014-10-28T13:42:13               |
| deleted       | False                             |
| deleted_at    | None                              |
| disk_format   | qcow2                             |
| id            | f8379597-981b-4105-bb4a-7b596b529156 |
| is_public     | True                              |
| min_disk      | 0                                 |
| min_ram       | 0                                 |
| name          | test                              |
| owner         | 0fa2ca1bd34c4a4b88ce36272038574d |
| protected     | False                             |
| size          | 92909568                          |
| status        | queued                            |
| updated_at    | 2014-10-28T13:42:13               |
+------------------+--------------------------------------+


[root@server5 ~(keystone_admin)]$ glance image-list
+--------------------------------------+------+-------------+------------------+----------+--------+
| ID                                | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+------+-------------+------------------+----------+--------+
| f8379597-981b-4105-bb4a-7b596b529156 | test | qcow2    | bare          | 92909568 | active |
+--------------------------------------+------+-------------+------------------+----------+--------+

Uploaded image files land in  /var/lib/glance/images


[root@server5 ~(keystone_admin)]$ ls -hl /var/lib/glance/images/
total 89M
-rw-r-----. 1 glance glance 89M Oct 28 21:42 f8379597-981b-4105-bb4a-7b596b529156


[root@server5 ~(keystone_admin)]$ glance image-show  test
+------------------+--------------------------------------+
| Property      | Value                             |
+------------------+--------------------------------------+
| checksum      | cf3345a6131ee413e8f41457ab57e8c8 |
| container_format | bare                              |
| created_at    | 2014-10-28T13:42:13               |
| deleted       | False                             |
| disk_format   | qcow2                             |
| id            | f8379597-981b-4105-bb4a-7b596b529156 |
| is_public     | True                              |
| min_disk      | 0                                 |
| min_ram       | 0                                 |
| name          | test                              |
| owner         | 0fa2ca1bd34c4a4b88ce36272038574d |
| protected     | False                             |
| size          | 92909568                          |
| status        | active                            |
| updated_at    | 2014-10-28T13:42:14               |
+------------------+--------------------------------------+

Glance has its own log files


[root@server5 ~(keystone_admin)]$ ls /var/log/glance/
api.log  registry.log


* Chapter 7 Implementing the Cinder block storage service


Lab: Install the Cinder block storage service and managing volumes
workbook p98

[root@server5 ~]# yum  install -y openstack-cinder


[root@server5 ~]# cp  /etc/cinder/cinder.conf   /etc/cinder/cinder.conf.orig


[root@server5 ~]# cp /usr/share/cinder/cinder-dist.conf /etc/cinder/cinder.conf
cp: overwrite `/etc/cinder/cinder.conf'? y


[root@server5 ~]# source /root/keystonerc_admin


[root@server5 ~(keystone_admin)]$ openstack-db --init --service cinder --password redhat --rootpw redhat
Verified connectivity to MySQL.
Creating 'cinder' database.
Updating 'cinder' database password in /etc/cinder/cinder.conf
Initializing the cinder database, please wait...


[root@server5 ~(keystone_admin)]$ keystone user-create --name cinder --pass redhat
+----------+----------------------------------+
| Property |           Value            |
+----------+----------------------------------+
|  email   |                               |
| enabled  |            True            |
| id | 2323a6d898994cf79bb2187e560531f4 |
|   name   |           cinder           |
+----------+----------------------------------+


[root@server5 ~(keystone_admin)]$ keystone user-role-add --user cinder --role admin --tenant services


[root@server5 ~(keystone_admin)]$ keystone service-create --name cinder --type volume --description "OpenStack Block Storage Service"
+-------------+----------------------------------+
|   Property  |           Value            |
+-------------+----------------------------------+
| description | OpenStack Block Storage Service  |
|   id | 79d96c900a174132a328a1ed078c8687 |
| name |           cinder           |
| type |           volume           |
+-------------+----------------------------------+


[root@server5 ~(keystone_admin)]$ keystone endpoint-create --service-id 79d96c900a174132a328a1ed078c8687 --publicurl "http://server5.example.com:8776/v1/%(tenant_id)s" --adminurl  "http://server5.example.com:8776/v1/%(tenant_id)s" --internalurl "http://server5.example.com:8776/v1/%(tenant_id)s"
+-------------+--------------------------------------------------+
|   Property  |                   Value                    |
+-------------+--------------------------------------------------+
|   adminurl  | http://server5.example.com:8776/v1/%(tenant_id)s |
|   id |      5aa848c8833a4afb8ddc6bb302e81834      |
| internalurl | http://server5.example.com:8776/v1/%(tenant_id)s |
|  publicurl  | http://server5.example.com:8776/v1/%(tenant_id)s |
| region   |                 regionOne                  |
|  service_id |      79d96c900a174132a328a1ed078c8687      |
+-------------+--------------------------------------------------+


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name services


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password redhat


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/cinder/cinder.conf DEFAULT verbose true


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_username qpidauth


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_password redhat


[root@server5 ~(keystone_admin)]$  openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_protocol ssl


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_port 5671


[root@server5 ~(keystone_admin)]$ chkconfig openstack-cinder-scheduler on
[root@server5 ~(keystone_admin)]$ chkconfig openstack-cinder-api on
[root@server5 ~(keystone_admin)]$ chkconfig openstack-cinder-volume on
[root@server5 ~(keystone_admin)]$ service openstack-cinder-api start
Starting openstack-cinder-api:                          [  OK  ]
[root@server5 ~(keystone_admin)]$ service openstack-cinder-scheduler start
Starting openstack-cinder-scheduler:                    [  OK  ]
[root@server5 ~(keystone_admin)]$ service openstack-cinder-volume start
Starting openstack-cinder-volume:                       [  OK  ]


[root@server5 ~(keystone_admin)]$ tail /var/log/cinder/*


[root@server5 ~(keystone_admin)]$ echo "include /etc/cinder/volumes/*" >> /etc/tgt/targets.conf
[root@server5 ~(keystone_admin)]$ tail -n 3 /etc/tgt/targets.conf
# </direct-store>
#</target>
include /etc/cinder/volumes/*


[root@server5 ~(keystone_admin)]$ service tgtd start
Starting SCSI target daemon:                            [  OK  ]
[root@server5 ~(keystone_admin)]$ chkconfig tgtd on
[root@server5 ~(keystone_admin)]$ chkconfig tgtd --list
tgtd       0:off    1:off    2:on    3:on    4:on    5:on    6:off


[root@server5 ~(keystone_admin)]$ tail /var/log/messages
Oct 29 13:50:59 server5 yum[1795]: Installed: scsi-target-utils-1.0.24-10.el6.x86_64
Oct 29 13:51:02 server5 yum[1795]: Installed: openstack-cinder-2013.2.1-1.el6ost.noarch
Oct 29 13:56:44 server5 ntpd[1299]: 0.0.0.0 c612 02 freq_set kernel 0.330 PPM
Oct 29 13:56:44 server5 ntpd[1299]: 0.0.0.0 c615 05 clock_sync
Oct 29 14:36:10 server5 rhsmd: In order for Subscription Manager to provide your system with updates, your system must be registered with the Customer Portal. Please enter your Red Hat login to ensure your system is up-to-date.
Oct 29 14:43:43 server5 tgtd: semkey 0x6101003d
Oct 29 14:43:43 server5 tgtd: tgtd daemon started, pid:2913
Oct 29 14:43:43 server5 tgtd: tgtd logger started, pid:2916 debug:0
Oct 29 14:43:43 server5 tgtd: work_timer_start(146) use timer_fd based scheduler
Oct 29 14:43:43 server5 tgtd: bs_init(313) use signalfd notification


[root@server5 ~(keystone_admin)]$ openstack-status
== Glance services ==
openstack-glance-api:                active
openstack-glance-registry:           active
== Keystone service ==
openstack-keystone:                  active
== Swift services ==
openstack-swift-proxy:               dead   (disabled on boot)
openstack-swift-account:             active
openstack-swift-container:           active
openstack-swift-object:              active
== Cinder services ==
openstack-cinder-api:                active
openstack-cinder-scheduler:          active
openstack-cinder-volume:             active
== Support services ==
mysqld:                              active
messagebus:                          active
tgtd:                                active
qpidd:                               active
memcached:                           active
== Keystone users ==
+----------------------------------+--------+---------+-------+
|                id                |  name  | enabled | email |
+----------------------------------+--------+---------+-------+
| 864fef71904746feaad1c75e0ba3a911 | admin  |   True  |       |
| 2323a6d898994cf79bb2187e560531f4 | cinder |   True  |       |
| 3a924d24c2b84b2c95e35230ede33c9b | glance |   True  |       |
| 11468bea059d4955b976c4c1753a1fdc | swift  |   True  |       |
+----------------------------------+--------+---------+-------+
== Glance images ==
+--------------------------------------+------+-------------+------------------+----------+--------+
| ID                                   | Name | Disk Format | Container Format | Size     | Status |
+--------------------------------------+------+-------------+------------------+----------+--------+
| f8379597-981b-4105-bb4a-7b596b529156 | test | qcow2       | bare             | 92909568 | active |
+--------------------------------------+------+-------------+------------------+----------+--------+


Observe the directory before creating anything
[root@server5 ~(keystone_admin)]$ ls  /etc/cinder/volumes/


[root@server5 ~(keystone_admin)]$ cinder create --display-name vol1 2
+---------------------+--------------------------------------+
|    Property   |             Value              |
+---------------------+--------------------------------------+
| attachments |               []               |
|  availability_zone  |              nova              |
|    bootable   |             false              |
|   created_at |   2014-10-29T06:46:07.962597   |
| display_description |              None              |
| display_name |              vol1              |
|       id      | 51ea44ec-3fe1-4b52-ae7b-26c2979085bf |
|    metadata   |               {}               |
|      size     |               2                |
| snapshot_id |              None              |
| source_volid |              None              |
|     status    |            creating            |
| volume_type |              None              |
+---------------------+--------------------------------------+


After a volume is created, its iSCSI target definition is placed in /etc/cinder/volumes (the volume data itself lives in the cinder-volumes VG)
[root@server5 ~(keystone_admin)]$ ls  /etc/cinder/volumes/
volume-51ea44ec-3fe1-4b52-ae7b-26c2979085bf
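What lands in that directory is not the volume data but a tgtd target definition, picked up via the include line added to targets.conf earlier. A sketch of roughly what the generated file contains (the IQN prefix shown is Cinder's default; treat the exact contents as illustrative):

```
<target iqn.2010-10.org.openstack:volume-51ea44ec-3fe1-4b52-ae7b-26c2979085bf>
    backing-store /dev/cinder-volumes/volume-51ea44ec-3fe1-4b52-ae7b-26c2979085bf
</target>
```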


[root@server5 ~(keystone_admin)]$ vgs
  VG             #PV #LV #SN Attr   VSize  VFree
  cinder-volumes   1   1   0 wz--n-  4.97g 2.97g
  vol0             1   2   0 wz--n- 29.97g     0
[root@server5 ~(keystone_admin)]$ lvs
  LV                                          VG             Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  volume-51ea44ec-3fe1-4b52-ae7b-26c2979085bf cinder-volumes -wi-ao----  2.00g
  root                                        vol0           -wi-ao----  4.00g
  var                                         vol0           -wi-ao---- 25.97g


[root@server5 ~(keystone_admin)]$ cinder delete vol1


[root@server5 ~(keystone_admin)]$ cinder list
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+
|                  ID                  |  Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+
| 51ea44ec-3fe1-4b52-ae7b-26c2979085bf | deleting |     vol1     |  2   |     None    |  false   |             |
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+


Notes:
  • With the default LVM driver, Cinder creates a volume group on the local host, carves each volume out of it as a logical volume, and exports that LV over iSCSI.
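A rough sketch of the equivalent manual steps (illustrative pseudocode only; the exact commands Cinder runs vary by version, and <uuid> stands for the volume's ID):

```shell
# Roughly what cinder-volume does for an LVM-backed volume (illustrative):
lvcreate -L 2G -n volume-<uuid> cinder-volumes   # carve the LV out of the VG
# drop a <target> stanza into /etc/cinder/volumes/volume-<uuid>,
# which targets.conf includes, then have tgtd pick it up:
tgt-admin --update ALL                           # export the LV over iSCSI
```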

Lab:  Adding a Red Hat Storage volume to Cinder
workbook p104


[root@server5 ~(keystone_admin)]$ yum -y install glusterfs-fuse


[root@server5 ~(keystone_admin)]$ cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.orig2


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends glusterfs,lvm


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/cinder/cinder.conf lvm volume_backend_name LVM


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/cinder/cinder.conf glusterfs volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/cinder/cinder.conf glusterfs glusterfs_shares_config /etc/cinder/shares.conf


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/cinder/cinder.conf glusterfs glusterfs_sparsed_volumes false


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/cinder/cinder.conf glusterfs volume_backend_name RHS


[root@server5 ~(keystone_admin)]$ echo "rhs.example.com:volume5" >> /etc/cinder/shares.conf
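Taken together, the openstack-config calls above should leave cinder.conf with multi-backend sections roughly like the following (shown for reference; the [DEFAULT] qpid and keystone settings from earlier are omitted):

```ini
[DEFAULT]
enabled_backends = glusterfs,lvm

[lvm]
volume_group = cinder-volumes
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM

[glusterfs]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_sparsed_volumes = false
volume_backend_name = RHS
```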


[root@server5 ~(keystone_admin)]$ for svc in scheduler volume; do service openstack-cinder-${svc} restart; done
Stopping openstack-cinder-scheduler:                    [  OK  ]
Starting openstack-cinder-scheduler:                    [  OK  ]
Stopping openstack-cinder-volume:                       [  OK  ]
Starting openstack-cinder-volume:                       [  OK  ]


[root@server5 ~(keystone_admin)]$ tail /var/log/cinder/volume.log
2014-10-29 15:23:12.779 3532 INFO cinder.service [-] Started child 3543
2014-10-29 15:23:12.789 3543 AUDIT cinder.service [-] Starting cinder-volume node (version 2013.2.1)
2014-10-29 15:23:18.452 3543 INFO cinder.openstack.common.rpc.impl_qpid [req-7f5c5f29-ccac-4916-87ab-0b1f640734b5 None None] Connected to AMQP server on localhost:5671
2014-10-29 15:23:18.467 3543 INFO cinder.volume.manager [req-7f5c5f29-ccac-4916-87ab-0b1f640734b5 None None] Starting volume driver LVMISCSIDriver (2.0.0)
2014-10-29 15:23:19.087 3542 INFO cinder.openstack.common.rpc.impl_qpid [req-fdaa0e13-64af-4943-8188-019fb5b0a629 None None] Connected to AMQP server on localhost:5671
2014-10-29 15:23:19.117 3542 INFO cinder.volume.manager [req-fdaa0e13-64af-4943-8188-019fb5b0a629 None None] Starting volume driver GlusterfsDriver (1.1.0)
2014-10-29 15:23:20.903 3542 INFO cinder.volume.manager [req-fdaa0e13-64af-4943-8188-019fb5b0a629 None None] Updating volume status
2014-10-29 15:23:20.985 3543 INFO cinder.volume.manager [req-7f5c5f29-ccac-4916-87ab-0b1f640734b5 None None] Updating volume status
2014-10-29 15:23:21.367 3542 INFO cinder.openstack.common.rpc.impl_qpid [req-fdaa0e13-64af-4943-8188-019fb5b0a629 None None] Connected to AMQP server on localhost:5671
2014-10-29 15:23:21.437 3543 INFO cinder.openstack.common.rpc.impl_qpid [req-7f5c5f29-ccac-4916-87ab-0b1f640734b5 None None] Connected to AMQP server on localhost:5671


[root@server5 ~(keystone_admin)]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vol0-root 4.0G  1.2G  2.6G  31% /
tmpfs                 1.9G 0  1.9G   0% /dev/shm
/dev/vda1             248M   34M  202M  15% /boot
/dev/mapper/vol0-var   26G  353M   24G   2% /var
/dev/vdb1              93M  5.6M   83M   7% /srv/node/z1d1
/dev/vdc1              93M  5.6M   83M   7% /srv/node/z2d1
rhs.example.com:volume5  1.3G   33M  1.3G   3% /var/lib/cinder/mnt/bd5297560573d0b99c0db6110059b92f


[root@server5 ~(keystone_admin)]$ cinder type-create lvm
+--------------------------------------+------+
|               ID               | Name |
+--------------------------------------+------+
| 918f5038-7051-4a08-bcd4-0254b2777f27 | lvm  |
+--------------------------------------+------+


[root@server5 ~(keystone_admin)]$ cinder type-key 918f5038-7051-4a08-bcd4-0254b2777f27 set volume_backend_name=LVM


[root@server5 ~(keystone_admin)]$ cinder type-create glusterfs
+--------------------------------------+-----------+
|               ID               | Name   |
+--------------------------------------+-----------+
| 8433f17b-c279-40c5-a0f7-a8ae80720f00 | glusterfs |
+--------------------------------------+-----------+


[root@server5 ~(keystone_admin)]$ cinder type-key 8433f17b-c279-40c5-a0f7-a8ae80720f00 set volume_backend_name=RHS


[root@server5 ~(keystone_admin)]$ cinder type-list
+--------------------------------------+-----------+
|                  ID                  |    Name   |
+--------------------------------------+-----------+
| 8433f17b-c279-40c5-a0f7-a8ae80720f00 | glusterfs |
| 918f5038-7051-4a08-bcd4-0254b2777f27 |    lvm    |
+--------------------------------------+-----------+
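The type-key settings are what let the scheduler route a request: each volume type carries a volume_backend_name extra spec that is matched against the backend sections in cinder.conf. The specs can be inspected against the running setup (command from memory of this client version; output omitted here):

```shell
# Each type should list its extra spec -- the lvm type should show
# volume_backend_name=LVM and the glusterfs type should show RHS.
cinder extra-specs-list
```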


[root@server5 ~(keystone_admin)]$ cinder create --volume-type lvm --display-name vol2 1
+---------------------+--------------------------------------+
|    Property   |             Value              |
+---------------------+--------------------------------------+
| attachments |               []               |
|  availability_zone  |              nova              |
|    bootable   |             false              |
|   created_at |   2014-10-29T07:37:41.554652   |
| display_description |              None              |
| display_name |              vol2              |
|       id      | 445e58d0-5a7b-4940-a23f-545c062d2102 |
|    metadata   |               {}               |
|      size     |               1                |
| snapshot_id |              None              |
| source_volid |              None              |
|     status    |            creating            |
| volume_type |              lvm               |
+---------------------+--------------------------------------+


[root@server5 ~(keystone_admin)]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 445e58d0-5a7b-4940-a23f-545c062d2102 | available |     vol2     |  1   |     lvm     |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+


[root@server5 ~(keystone_admin)]$ cinder create --volume-type glusterfs --display-name vol3 1
+---------------------+--------------------------------------+
|    Property   |             Value              |
+---------------------+--------------------------------------+
| attachments |               []               |
|  availability_zone  |              nova              |
|    bootable   |             false              |
|   created_at |   2014-10-29T07:38:51.478776   |
| display_description |              None              |
| display_name |              vol3              |
|       id      | edd10aec-efb5-4634-a123-1c8ffe31a669 |
|    metadata   |               {}               |
|      size     |               1                |
| snapshot_id |              None              |
| source_volid |              None              |
|     status    |            creating            |
| volume_type |           glusterfs            |
+---------------------+--------------------------------------+


[root@server5 ~(keystone_admin)]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 445e58d0-5a7b-4940-a23f-545c062d2102 | available |     vol2     |  1   |     lvm     |  false   |             |
| edd10aec-efb5-4634-a123-1c8ffe31a669 |  creating |     vol3     |  1   |  glusterfs  |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+


[root@server5 ~(keystone_admin)]$ cinder create --volume-type glusterfs --display-name vol4 1
+---------------------+--------------------------------------+
|    Property   |             Value              |
+---------------------+--------------------------------------+
| attachments |               []               |
|  availability_zone  |              nova              |
|    bootable   |             false              |
|   created_at |   2014-10-29T07:39:42.471966   |
| display_description |              None              |
| display_name |              vol4              |
|       id      | 7e126005-4187-4b98-b5db-e5d3431b9c36 |
|    metadata   |               {}               |
|      size     |               1                |
| snapshot_id |              None              |
| source_volid |              None              |
|     status    |            creating            |
| volume_type |           glusterfs            |
+---------------------+--------------------------------------+


[root@server5 ~(keystone_admin)]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 445e58d0-5a7b-4940-a23f-545c062d2102 | available |     vol2     |  1   |     lvm     |  false   |             |
| 7e126005-4187-4b98-b5db-e5d3431b9c36 |   error   |     vol4     |  1   |  glusterfs  |  false   |             |
| edd10aec-efb5-4634-a123-1c8ffe31a669 | available |     vol3     |  1   |  glusterfs  |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+


[root@server5 ~(keystone_admin)]$ cinder create --volume-type lvm --display-name vol5 1
+---------------------+--------------------------------------+
|    Property   |             Value              |
+---------------------+--------------------------------------+
| attachments |               []               |
|  availability_zone  |              nova              |
|    bootable   |             false              |
|   created_at |   2014-10-29T07:40:51.148736   |
| display_description |              None              |
| display_name |              vol5              |
|       id      | e310cbec-48a6-4217-af86-cd4c3710c40f |
|    metadata   |               {}               |
|      size     |               1                |
| snapshot_id |              None              |
| source_volid |              None              |
|     status    |            creating            |
| volume_type |              lvm               |
+---------------------+--------------------------------------+

[root@server5 ~(keystone_admin)]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 445e58d0-5a7b-4940-a23f-545c062d2102 | available |     vol2     |  1   |     lvm     |  false   |             |
| 7e126005-4187-4b98-b5db-e5d3431b9c36 |   error   |     vol4     |  1   |  glusterfs  |  false   |             |
| e310cbec-48a6-4217-af86-cd4c3710c40f | available |     vol5     |  1   |     lvm     |  false   |             |
| edd10aec-efb5-4634-a123-1c8ffe31a669 | available |     vol3     |  1   |  glusterfs  |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+


[root@server5 ~(keystone_admin)]$ cinder delete vol2
[root@server5 ~(keystone_admin)]$ cinder delete vol3
[root@server5 ~(keystone_admin)]$ cinder delete vol4
[root@server5 ~(keystone_admin)]$ cinder delete vol5


[root@server5 ~(keystone_admin)]$ cinder type-list
+--------------------------------------+-----------+
|                  ID                  |    Name   |
+--------------------------------------+-----------+
| 8433f17b-c279-40c5-a0f7-a8ae80720f00 | glusterfs |
| 918f5038-7051-4a08-bcd4-0254b2777f27 |    lvm    |
+--------------------------------------+-----------+


[root@server5 ~(keystone_admin)]$ cinder type-delete 8433f17b-c279-40c5-a0f7-a8ae80720f00


[root@server5 ~(keystone_admin)]$ cinder type-delete 918f5038-7051-4a08-bcd4-0254b2777f27


[root@server5 ~(keystone_admin)]$ cinder type-list


Restore the original configuration
[root@server5 ~(keystone_admin)]$ cp /etc/cinder/cinder.conf.orig2 /etc/cinder/cinder.conf
cp: overwrite `/etc/cinder/cinder.conf'? y


[root@server5 ~(keystone_admin)]$ chown cinder:cinder /etc/cinder/cinder.conf
[root@server5 ~(keystone_admin)]$ chmod 600 /etc/cinder/cinder.conf
[root@server5 ~(keystone_admin)]$ restorecon -v /etc/cinder/cinder.conf


[root@server5 ~(keystone_admin)]$ for svc in scheduler volume; do service openstack-cinder-${svc} restart; done
Stopping openstack-cinder-scheduler:                    [  OK  ]
Starting openstack-cinder-scheduler:                    [  OK  ]
Stopping openstack-cinder-volume:                       [  OK  ]
Starting openstack-cinder-volume:                       [  OK  ]


Notes
GlusterFS is file-based, so each Cinder volume is backed by a file created on the mounted share rather than a block device. With glusterfs_sparsed_volumes set to false the backing file is fully allocated up front, which is the likely reason vol4 went to error above: the 1.3G share could not hold a second fully allocated 1G volume.
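While volumes of both types exist, the difference is easy to see on this host (illustrative; paths assume the mounts and names shown earlier):

```shell
# GlusterFS-backed volumes are plain files on the mounted share:
ls -lh /var/lib/cinder/mnt/*/volume-*
# LVM-backed volumes are block devices in the cinder-volumes VG:
ls -l /dev/cinder-volumes/
```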

* Chapter 8 Implementing the OpenStack Networking service


Lab:  Installing OpenStack Networking
workbook p114


[root@server5 ~]# source  /root/keystonerc_admin


[root@server5 ~(keystone_admin)]$ keystone service-create --name neutron --type network --description "OpenStack Networking Service"
+-------------+----------------------------------+
|   Property  |           Value            |
+-------------+----------------------------------+
| description |   OpenStack Networking Service   |
|   id | a15abe8a14b942fc981a2a0f50d1e6be |
| name |          neutron           |
| type |          network           |
+-------------+----------------------------------+


[root@server5 ~(keystone_admin)]$ keystone endpoint-create --service-id a15abe8a14b942fc981a2a0f50d1e6be --publicurl "http://server5.example.com:9696" --adminurl  "http://server5.example.com:9696" --internalurl "http://server5.example.com:9696"
+-------------+----------------------------------+
|   Property  |           Value            |
+-------------+----------------------------------+
|   adminurl  | http://server5.example.com:9696  |
|   id | a5ae8d2814aa415a8ee26fbdc864e9e5 |
| internalurl | http://server5.example.com:9696  |
|  publicurl  | http://server5.example.com:9696  |
| region   |         regionOne          |
|  service_id | a15abe8a14b942fc981a2a0f50d1e6be |
+-------------+----------------------------------+


[root@server5 ~(keystone_admin)]$ keystone catalog
Service: volume
+-------------+---------------------------------------------------------------------+
|   Property  |                             Value                             |
+-------------+---------------------------------------------------------------------+
|   adminURL  | http://server5.example.com:8776/v1/0fa2ca1bd34c4a4b88ce36272038574d |
|   id |                60d06be459c94a778711bf6856d0b59b               |
| internalURL | http://server5.example.com:8776/v1/0fa2ca1bd34c4a4b88ce36272038574d |
|  publicURL  | http://server5.example.com:8776/v1/0fa2ca1bd34c4a4b88ce36272038574d |
| region   |                           regionOne                           |
+-------------+---------------------------------------------------------------------+
Service: object-store
+-------------+--------------------------------------------------------------------------+
|   Property  |                               Value                                |
+-------------+--------------------------------------------------------------------------+
|   adminURL  | http://server5.example.com:8080/v1/AUTH_0fa2ca1bd34c4a4b88ce36272038574d |
|   id |                  9a71a157dea348ba92d5d67d1a42bf92                  |
| internalURL | http://server5.example.com:8080/v1/AUTH_0fa2ca1bd34c4a4b88ce36272038574d |
|  publicURL  | http://server5.example.com:8080/v1/AUTH_0fa2ca1bd34c4a4b88ce36272038574d |
| region   |                             regionOne                              |
+-------------+--------------------------------------------------------------------------+
Service: image
+-------------+----------------------------------+
|   Property  |           Value            |
+-------------+----------------------------------+
|   adminURL  | http://server5.example.com:9292  |
|   id | 32567d94b08f4de0bf83f437a49cec2f |
| internalURL | http://server5.example.com:9292  |
|  publicURL  | http://server5.example.com:9292  |
| region   |         regionOne          |
+-------------+----------------------------------+
Service: network
+-------------+----------------------------------+
|   Property  |           Value            |
+-------------+----------------------------------+
|   adminURL  | http://server5.example.com:9696  |
|   id | 10ed087ee3d14319808ff4856ec4adb4 |
| internalURL | http://server5.example.com:9696  |
|  publicURL  | http://server5.example.com:9696  |
| region   |         regionOne          |
+-------------+----------------------------------+
Service: identity
+-------------+---------------------------------------+
|   Property  |              Value              |
+-------------+---------------------------------------+
|   adminURL  | http://server5.example.com:35357/v2.0 |
|   id | 778fca3b408242598aa5428d3f7fff70   |
| internalURL |  http://server5.example.com:5000/v2.0 |
|  publicURL  |  http://server5.example.com:5000/v2.0 |
| region   |            regionOne            |
+-------------+---------------------------------------+


[root@server5 ~(keystone_admin)]$ keystone user-create --name neutron --pass redhat
+----------+----------------------------------+
| Property |           Value            |
+----------+----------------------------------+
|  email   |                               |
| enabled  |            True            |
| id | 0805aea2c48947bbb56b72d437f68aa7 |
|   name   |          neutron           |
+----------+----------------------------------+


[root@server5 ~(keystone_admin)]$ keystone user-role-add --user neutron --role admin --tenant services


[root@server5 ~(keystone_admin)]$ keystone user-role-list
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                |  name |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| db5b551d50dc4d97a7bd89cc65edf149 | admin | 864fef71904746feaad1c75e0ba3a911 | 0fa2ca1bd34c4a4b88ce36272038574d |
+----------------------------------+-------+----------------------------------+----------------------------------+


[root@server5 ~(keystone_admin)]$ keystone --os-username neutron --os-password redhat --os-tenant-name services user-role-list
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                |  name |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| db5b551d50dc4d97a7bd89cc65edf149 | admin | 0805aea2c48947bbb56b72d437f68aa7 | 047e809fc22e4ff687cfecbe15e728a0 |
+----------------------------------+-------+----------------------------------+----------------------------------+


[root@server5 ~(keystone_admin)]$ yum -y install openstack-neutron openstack-neutron-openvswitch


[root@server5 ~(keystone_admin)]$ service qpidd status
qpidd (pid  1562) is running...


[root@server5 ~(keystone_admin)]$ cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.org


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend quantum.openstack.common.rpc.impl_qpid


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname 192.168.0.105


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_username qpidauth


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_password redhat


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_protocol ssl


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_port 5671


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name services


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password redhat


[root@server5 ~(keystone_admin)]$ openstack-config --set /etc/neutron/neutron.conf agent root_helper "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"


[root@server5 ~(keystone_admin)]$ vi  /root/keystonerc_neutron
Add the following content
export OS_USERNAME=neutron
export OS_TENANT_NAME=services
export OS_PASSWORD=redhat
export OS_AUTH_URL=http://server5.example.com:35357/v2.0/
export PS1='[\u@\h \W(keystone_neutron)]\$'


[root@server5 ~(keystone_admin)]$ source /root/keystonerc_neutron
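A stand-alone sketch of what sourcing an rc file does (hypothetical /tmp path; it simply exports the shell variables the OpenStack clients read, which is also why PS1 changes in the prompt below):

```shell
# Hypothetical demo: an rc file is just a list of exports.
cat > /tmp/keystonerc_demo <<'EOF'
export OS_USERNAME=neutron
export OS_TENANT_NAME=services
export OS_AUTH_URL=http://server5.example.com:35357/v2.0/
EOF
. /tmp/keystonerc_demo
echo "${OS_USERNAME}@${OS_TENANT_NAME}"   # prints neutron@services
```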


[root@server5 ~(keystone_neutron)]#yum -y install openstack-nova-common


[root@server5 ~(keystone_neutron)]#neutron-server-setup --yes --rootpw redhat --plugin openvswitch
Neutron plugin: openvswitch
Plugin: openvswitch => Database: ovs_neutron
Verified connectivity to MySQL.
Configuration updates complete!


[root@server5 ~(keystone_neutron)]#neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini stamp head
No handlers could be found for logger "neutron.common.legacy"
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.


[root@server5 ~(keystone_neutron)]#service neutron-server start
Starting neutron:                                       [  OK  ]


[root@server5 ~(keystone_neutron)]#egrep 'ERROR|CRITICAL' /var/log/neutron/server.log
2014-10-29 16:15:21.949 5242 ERROR neutron.common.legacy [-] Skipping unknown group key: firewall_driver


[root@server5 ~(keystone_neutron)]#chkconfig neutron-server on
[root@server5 ~(keystone_neutron)]#chkconfig neutron-server --list
neutron-server     0:off    1:off    2:on    3:on    4:on    5:on    6:off


[root@server5 ~(keystone_neutron)]#openstack-status


== neutron services ==
neutron-server:                      active
neutron-dhcp-agent:                  inactive  (disabled on boot)
neutron-l3-agent:                    inactive  (disabled on boot)
neutron-metadata-agent:              inactive  (disabled on boot)
neutron-lbaas-agent:                 inactive  (disabled on boot)
neutron-openvswitch-agent:           inactive  (disabled on boot)


[root@server5 ~(keystone_neutron)]#neutron-node-setup --plugin openvswitch --qhost 192.168.0.105
Neutron plugin: openvswitch
Would you like to update the nova configuration files? (y/n):
y
Configuration updates complete!


[root@server5 ~(keystone_neutron)]#service openvswitch start
/etc/openvswitch/conf.db does not exist ... (warning).
Creating empty database /etc/openvswitch/conf.db        [  OK  ]
Starting ovsdb-server                                   [  OK  ]
Configuring Open vSwitch system IDs                     [  OK  ]
Inserting openvswitch module                            [  OK  ]
Starting ovs-vswitchd                                   [  OK  ]
Enabling remote OVSDB managers                          [  OK  ]


[root@server5 ~(keystone_neutron)]#egrep 'ERROR|CRITICAL' /var/log/openvswitch/*


[root@server5 ~(keystone_neutron)]#chkconfig openvswitch on
[root@server5 ~(keystone_neutron)]#chkconfig openvswitch --list
openvswitch    0:off    1:off    2:on    3:on    4:on    5:on    6:off


[root@server5 ~(keystone_neutron)]#ovs-vsctl add-br br-int
[root@server5 ~(keystone_neutron)]#ovs-vsctl show
666a6415-7ab8-4ebe-980b-181cf5567c7d
Bridge br-int
    Port br-int
        Interface br-int
            type: internal
ovs_version: "1.11.0"


[root@server5 ~(keystone_neutron)]#cp /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.orig


[root@server5 ~(keystone_neutron)]#openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini OVS integration_bridge br-int


[root@server5 ~(keystone_neutron)]#service neutron-openvswitch-agent start
Starting neutron-openvswitch-agent:                     [  OK  ]


[root@server5 ~(keystone_neutron)]#egrep 'ERROR|CRITICAL' /var/log/neutron/openvswitch-agent.log


[root@server5 ~(keystone_neutron)]#chkconfig neutron-openvswitch-agent on
[root@server5 ~(keystone_neutron)]#chkconfig neutron-openvswitch-agent --list
neutron-openvswitch-agent    0:off    1:off    2:on    3:on    4:on    5:on    6:off


[root@server5 ~(keystone_neutron)]#chkconfig neutron-ovs-cleanup on
[root@server5 ~(keystone_neutron)]#chkconfig neutron-ovs-cleanup --list
neutron-ovs-cleanup    0:off    1:off    2:on    3:on    4:on    5:on    6:off


[root@server5 ~(keystone_neutron)]#neutron-dhcp-setup --plugin openvswitch --qhost 192.168.0.105
Neutron plugin: openvswitch
Configuration updates complete!


[root@server5 ~(keystone_neutron)]#service neutron-dhcp-agent start
Starting neutron-dhcp-agent:                            [  OK  ]


[root@server5 ~(keystone_neutron)]#egrep 'ERROR|CRITICAL' /var/log/neutron/dhcp-agent.log
2014-10-29 16:30:40.599 6913 ERROR neutron.common.legacy [-] Skipping unknown group key: firewall_driver


[root@server5 ~(keystone_neutron)]#chkconfig neutron-dhcp-agent on
[root@server5 ~(keystone_neutron)]#chkconfig neutron-dhcp-agent --list
neutron-dhcp-agent    0:off    1:off    2:on    3:on    4:on    5:on    6:off


[root@server5 ~(keystone_neutron)]#ovs-vsctl add-br br-ex
[root@server5 ~(keystone_neutron)]#ovs-vsctl show
666a6415-7ab8-4ebe-980b-181cf5567c7d
Bridge br-ex
    Port br-ex
        Interface br-ex
            type: internal
Bridge br-int
    Port br-int
        Interface br-int
            type: internal
ovs_version: "1.11.0"


[root@server5 ~(keystone_neutron)]#cp /etc/sysconfig/network-scripts/ifcfg-eth0 /root/
[root@server5 ~(keystone_neutron)]#cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-br-ex


[root@server5 ~(keystone_neutron)]#vi /etc/sysconfig/network-scripts/ifcfg-eth0
Keep only the following settings
DEVICE=eth0
HWADDR=52:54:00:00:00:05
ONBOOT=yes


[root@server5 ~(keystone_neutron)]#vi /etc/sysconfig/network-scripts/ifcfg-br-ex
Change DEVICE to br-ex and remove the HWADDR line:
DEVICE=br-ex
IPADDR=192.168.0.105
PREFIX=24
GATEWAY=192.168.0.254
DNS1=192.168.0.254
SEARCH1=example.com
ONBOOT=yes


[root@server5 ~(keystone_neutron)]#ovs-vsctl show
666a6415-7ab8-4ebe-980b-181cf5567c7d
Bridge br-ex
    Port br-ex
        Interface br-ex
            type: internal
Bridge br-int
    Port br-int
        Interface br-int
            type: internal
ovs_version: "1.11.0"


[root@server5 ~(keystone_neutron)]#ovs-vsctl add-port br-ex eth0 ; service network restart
Shutting down interface br-ex:                          [  OK  ]
Shutting down interface eth0:                           [  OK  ]
Shutting down interface eth1:                           [  OK  ]
Shutting down loopback interface:                       [  OK  ]
Bringing up loopback interface:                         [  OK  ]
Bringing up interface br-ex:  Determining if ip address 192.168.0.105 is already in use for device br-ex...
                                                       [  OK  ]
Bringing up interface eth0:                             [  OK  ]
Bringing up interface eth1:                             [  OK  ]


[root@server5 ~(keystone_neutron)]#ovs-vsctl show
666a6415-7ab8-4ebe-980b-181cf5567c7d
Bridge br-ex
    Port br-ex
        Interface br-ex
            type: internal
    Port "eth0"
        Interface "eth0"
Bridge br-int
    Port br-int
        Interface br-int
            type: internal
ovs_version: "1.11.0"
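After moving eth0 into br-ex, it is worth confirming that the IP address really moved to the bridge and that the gateway is still reachable, since a mistake here cuts off remote access. A minimal check, assuming the addresses used above:

```shell
# The IP should now sit on br-ex, not on eth0
ip addr show br-ex | grep 192.168.0.105
ip addr show eth0            # should carry no IPv4 address any more

# External connectivity through the bridge
ping -c 3 192.168.0.254
```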


[root@server5 ~(keystone_neutron)]#neutron-l3-setup --plugin openvswitch --qhost 192.168.0.105
Neutron plugin: openvswitch
Configuration updates complete!


[root@server5 ~(keystone_neutron)]#service neutron-l3-agent start
Starting neutron-l3-agent:                              [  OK  ]


[root@server5 ~(keystone_neutron)]#egrep 'ERROR|CRITICAL' /var/log/neutron/l3-agent.log
2014-10-29 16:43:50.503 10008 ERROR neutron.common.legacy [-] Skipping unknown group key: firewall_driver


[root@server5 ~(keystone_neutron)]#chkconfig neutron-l3-agent on
[root@server5 ~(keystone_neutron)]#chkconfig neutron-l3-agent --list
neutron-l3-agent    0:off    1:off    2:on    3:on    4:on    5:on    6:off


[root@server5 ~(keystone_neutron)]#openstack-status


== neutron services ==
neutron-server:                      active
neutron-dhcp-agent:                  active
neutron-l3-agent:                    active
neutron-metadata-agent:              inactive  (disabled on boot)
neutron-lbaas-agent:                 inactive  (disabled on boot)
neutron-openvswitch-agent:           active

Lab: Configuring OpenStack Networking
workbook p123



[root@server5 ~(keystone_neutron)]#source /root/keystonerc_myuser
[root@server5 ~(keystone_neutron)]#neutron router-create router1
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 9efbb27d-0b79-4efb-b4a2-3e0e82cd9d1a |
| name                  | router1                              |
| status                | ACTIVE                               |
| tenant_id             | 047e809fc22e4ff687cfecbe15e728a0     |
+-----------------------+--------------------------------------+
[root@server5 ~(keystone_neutron)]#neutron net-create private
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | ececab49-b4c7-4dac-a661-8d3b0c48309e |
| name                      | private                              |
| provider:network_type     | local                                |
| provider:physical_network |                                      |
| provider:segmentation_id  |                                      |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 047e809fc22e4ff687cfecbe15e728a0     |
+---------------------------+--------------------------------------+


[root@server5 ~(keystone_neutron)]#neutron subnet-create --name subpriv private 192.168.32.0/24
Created a new subnet:
+------------------+----------------------------------------------------+
| Field            | Value                                              |
+------------------+----------------------------------------------------+
| allocation_pools | {"start": "192.168.32.2", "end": "192.168.32.254"} |
| cidr             | 192.168.32.0/24                                    |
| dns_nameservers  |                                                    |
| enable_dhcp      | True                                               |
| gateway_ip       | 192.168.32.1                                       |
| host_routes      |                                                    |
| id               | ddb7a99b-9d05-49cd-8a3f-b56d55bfa8fd               |
| ip_version       | 4                                                  |
| name             | subpriv                                            |
| network_id       | ececab49-b4c7-4dac-a661-8d3b0c48309e               |
| tenant_id        | 047e809fc22e4ff687cfecbe15e728a0                   |
+------------------+----------------------------------------------------+

[root@server5 ~(keystone_neutron)]#neutron router-interface-add router1 subpriv
Added interface 3e55916a-de77-4468-a02c-ebc6086a6444 to router router1.


[root@server5 ~(keystone_neutron)]#neutron port-list
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 3e55916a-de77-4468-a02c-ebc6086a6444 |      | fa:16:3e:06:8c:11 | {"subnet_id": "ddb7a99b-9d05-49cd-8a3f-b56d55bfa8fd", "ip_address": "192.168.32.1"} |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
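Since Neutron (with namespace support enabled) places each router and DHCP server in its own network namespace, the new router interface can also be inspected directly on the host. A sketch, assuming the router id shown above:

```shell
# Namespaces are named qrouter-<router-id> and qdhcp-<network-id>
ip netns

# The qr- interface inside the router namespace should hold 192.168.32.1
ip netns exec qrouter-9efbb27d-0b79-4efb-b4a2-3e0e82cd9d1a ip addr
```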


Switch identity here, because changing the underlying (external) network requires admin privileges:
[root@server5 ~(keystone_neutron)]#source /root/keystonerc_admin


[root@server5 ~(keystone_admin)]$ neutron net-create --tenant-id services public --router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 52903fd8-05ff-4aa2-b2fe-e9da2c84d516 |
| name                      | public                               |
| provider:network_type     | local                                |
| provider:physical_network |                                      |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | services                             |
+---------------------------+--------------------------------------+


[root@server5 ~(keystone_admin)]$ neutron subnet-create --tenant-id services --allocation-pool start=172.24.5.1,end=172.24.5.100 --gateway 172.24.5.254 --disable-dhcp --name subpub public 172.24.5.0/24
Created a new subnet:
+------------------+------------------------------------------------+
| Field            | Value                                          |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "172.24.5.1", "end": "172.24.5.100"} |
| cidr             | 172.24.5.0/24                                  |
| dns_nameservers  |                                                |
| enable_dhcp      | False                                          |
| gateway_ip       | 172.24.5.254                                   |
| host_routes      |                                                |
| id               | d5cfb7d6-2ed7-4f5a-bb4b-dbe50e780323           |
| ip_version       | 4                                              |
| name             | subpub                                         |
| network_id       | 52903fd8-05ff-4aa2-b2fe-e9da2c84d516           |
| tenant_id        | services                                       |
+------------------+------------------------------------------------+


[root@server5 ~(keystone_admin)]$ neutron router-gateway-set router1 public
Set gateway for router router1


[root@server5 ~(keystone_admin)]$ neutron port-list
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 3e55916a-de77-4468-a02c-ebc6086a6444 |      | fa:16:3e:06:8c:11 | {"subnet_id": "ddb7a99b-9d05-49cd-8a3f-b56d55bfa8fd", "ip_address": "192.168.32.1"} |
| 81b922ce-1544-42b6-a60e-4d899618f34c |      | fa:16:3e:3f:52:5f | {"subnet_id": "d5cfb7d6-2ed7-4f5a-bb4b-dbe50e780323", "ip_address": "172.24.5.1"}   |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+


[root@server5 ~(keystone_admin)]$ neutron floatingip-list


[root@server5 ~(keystone_admin)]$ neutron floatingip-create public
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 172.24.5.2                           |
| floating_network_id | 52903fd8-05ff-4aa2-b2fe-e9da2c84d516 |
| id                  | b1db8c2a-09d9-4c5b-8e50-096e15af019c |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 0fa2ca1bd34c4a4b88ce36272038574d     |
+---------------------+--------------------------------------+


[root@server5 ~(keystone_admin)]$ neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| b1db8c2a-09d9-4c5b-8e50-096e15af019c |                  | 172.24.5.2          |         |
+--------------------------------------+------------------+---------------------+---------+
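At this point the floating IP is only allocated, not mapped to anything (fixed_ip_address and port_id are still empty). Once an instance exists on the private network, the floating IP would be associated with that instance's port roughly as follows; `<instance-port-id>` is a placeholder, not a value from this session:

```shell
# Take the instance's port id from `neutron port-list`, then map the
# floating IP (id from the output above) onto it.
neutron floatingip-associate b1db8c2a-09d9-4c5b-8e50-096e15af019c <instance-port-id>

# fixed_ip_address and port_id should now be filled in
neutron floatingip-list
```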