Tuesday, October 30, 2018

A Few Things About the Google Home Mini


On a recent business trip to Japan, I happened to catch a Bic Camera promotion on the Google Home Mini,
so our home gained a new member: 咕咕盧.



== Installing the Google Home Mini ==
This part is fairly simple ( I use Android, so the steps below are for Android )

== Registering and Linking Spotify ==

Register an account on the Spotify official website
  • I registered a free account
  • Since Google Music does not currently seem to be available in Taiwan, Spotify's features are actually sufficient


Once Spotify is linked, the music feature is enabled, so saying "OK Google, Play some Music" afterwards will play music.

==== Spotify in openSUSE Leap 15 ====

The easiest way to install Spotify on openSUSE Leap 15 is via Flatpak.
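The Flatpak route can be sketched roughly as follows (a sketch based on the usual Flathub setup; `com.spotify.Client` is the Flathub application ID, and the exact package names may differ on your system):

```shell
# install Flatpak itself from the openSUSE repositories
sudo zypper install flatpak

# register the Flathub remote, which hosts the Spotify client
sudo flatpak remote-add --if-not-exists flathub \
    https://flathub.org/repo/flathub.flatpakrepo

# install the Spotify client from Flathub, then launch it
sudo flatpak install flathub com.spotify.Client
flatpak run com.spotify.Client
```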



==== Current voice commands ====

These are the commands I currently use; of course, each must be prefixed with "Ok, Google" or "Hey, Google"

The next goal is IFTTT with the Google Home Mini

~ enjoy it


Saturday, October 20, 2018

Notes on the SCSI scan tool in VMware


OS: openSUSE Leap 42.3 in VMware
VMware: vSphere ESXi 6.5

These past two days a project required adjusting a VM's disk space, and I noticed a situation you run into when adding a disk inside VMware, so I'm writing this short note to record it.

First, the scenario.
When you add a new hard disk to a VM in VMware, intuition says the OS should detect the disk immediately.

So you would expect fdisk -l to show the new disk.

But inside VMware ( and perhaps on physical hardware as well ), that is not what happens: you must rescan the SCSI devices before the VM knows a new device is available.

The openSUSE Leap environment already ships a command for this:
rescan-scsi-bus.sh

The file lives under /usr/bin
# which  rescan-scsi-bus.sh
/usr/bin/rescan-scsi-bus.sh

It is provided by the sg3_utils package
# rpm  -qf  /usr/bin/rescan-scsi-bus.sh
sg3_utils-1.43-12.1.x86_64


When adding a disk in VMware, rescan-scsi-bus.sh must be given the -a parameter to take effect ( actual testing confirms this )

So the procedure is
# rescan-scsi-bus.sh  -a

After that, fdisk -l will show the new disk
:)
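Putting the steps together, the whole flow looks roughly like this (a sketch to be run as root; the device name /dev/sdb is only an illustration):

```shell
# before the rescan, the newly added disk is invisible to the guest
fdisk -l

# rescan all SCSI buses; -a scans all targets, which is what makes
# a disk hot-added in vSphere show up in the guest
rescan-scsi-bus.sh -a

# the new disk (e.g. /dev/sdb) should now be listed
fdisk -l
```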

==== Bonus: gparted ====

On openSUSE and SUSE, disk editing is done through yast2 disk, but on Ubuntu or other systems, another simple way to create partitions is gparted

On openSUSE, simply run
# zypper  install gparted
to install it.
The GParted program then appears in the graphical interface



==== Bonus: SCSI rescan in Ubuntu 16.04 ====

So how do you handle the same situation on Ubuntu 16.04?

OS: Ubuntu 16.04 in VMware

The approach is to install the scsitools package

# sudo  apt-get  install  scsitools

Then the rescan-scsi-bus command is available
# sudo  rescan-scsi-bus

Thanks also to Daniel Lin for giving me the information and the direction :)

~ enjoy it






Monday, October 15, 2018

Notes on Play with Kubernetes


The Kubernetes study group is coming up,
so a practice environment is an important topic.
I have several ideas for how to practice Kubernetes.

Today's note covers one of them: practicing with Play with Kubernetes

Official website



You can sign in with a GitHub account or a Docker ID

Docker also offers official tutorials to go along with it

The login screen is the same as Play with Docker's



Right after logging in, a message appears
that guides you through building a Kubernetes cluster

You can bootstrap a cluster as follows:

1. Initializes cluster master node:
kubeadm init --apiserver-advertise-address $(hostname -i)

2. Initialize cluster networking:
kubectl apply -n kube-system -f \
   "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 |tr -d '\n')"

3. (Optional) Create an nginx deployment:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/cn/docs/user-guide/nginx-app.yaml

First, look around at the environment in the built-in terminal

[node1 ~]$ ls  -a
.  .. .bash_logout  .bash_profile .bashrc  .cshrc .kube  .pki  .tcshrc  anaconda-ks.cfg

There is a .kube directory, with a config underneath
[node1 ~]$ ls -a .kube/
.  .. config

Looking more closely, this config is actually a symlink
[node1 ~]$ cat  .kube/config
cat: .kube/config: No such file or directory

[node1 ~]$ ls -l .kube/
total 0
lrwxrwxrwx 1 root root 26 May  9 02:25 config -> /etc/kubernetes/admin.conf

Next, let's try to build the Kubernetes cluster, starting with initialization.
The command given above is   kubeadm init --apiserver-advertise-address $(hostname -i)

I had rarely used the -i parameter with the hostname command before, so let's first see what it prints.
It prints the IP address; another trick learned.
[node1 ~]$ hostname  -i
192.168.0.13

A note from my own follow-up study: hostname -i only displays the IP when the host name can be resolved
-i, --ip-address
             Display the network address(es) of the host name. Note that this
             works only if the host name can be resolved.  Avoid  using this
             option; use hostname --all-ip-addresses instead.
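As an aside, `$(hostname -i)` is ordinary shell command substitution: the shell runs the inner command first and splices its stdout into the outer command line, so kubeadm receives the literal IP address, not the command text. A minimal sketch (the 127.0.0.1 fallback is only an illustration for the case where the host name cannot be resolved):

```shell
# resolve the host's IP; fall back to loopback if resolution fails
addr=$(hostname -i 2>/dev/null || echo "127.0.0.1")

# the outer command sees the expanded value, not the command text
echo "kubeadm init --apiserver-advertise-address ${addr}"
```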

Now let's run the initialization

[node1 ~]$ kubeadm  init  --apiserver-advertise-address $(hostname -i)
Initializing machine ID from random generator.
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.15
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.13]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 34.506542 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node1 as master by adding a label and a taint
[markmaster] Master node1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: daad56.e5f6d0ea2dc27904
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

 kubeadm join --token daad56.e5f6d0ea2dc27904 192.168.0.13:6443 --discovery-token-ca-cert-hash sha256:b99b6b81f35cd4d1da40c8999ecd7d0bd2b7ce250152bea3c730311c85fdd526

Waiting for api server to startup.............
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
daemonset "kube-proxy" configured
No resources found

After initialization, look at the environment again.
The .kube directory now has real content underneath
[node1 ~]$ ls -l .kube/
total 4
drwxr-xr-x 3 root root   23 Oct 15 13:34 cache
lrwxrwxrwx 1 root root   26 May 9 02:25 config -> /etc/kubernetes/admin.conf
drwxr-xr-x 3 root root 4096 Oct 15 13:34 http-cache

Checking again: kubectl get nodes now returns information.
Note also that the STATUS is NotReady

[node1 ~]$ kubectl  get  nodes
NAME      STATUS ROLES     AGE VERSION
node1     NotReady   master    4m v1.10.2

Initialization printed a token for kubeadm join, so let's try it next.

Open a new instance.

== On node2 ==

Join the cluster using the token the master printed

[node2 ~]$ kubeadm  join  --token daad56.e5f6d0ea2dc27904 192.168.0.13:6443 --discovery-token-ca-cert-hash sha256:b99b6b81f35cd4d1da40c8999ecd7d0bd2b7ce250152bea3c730311c85fdd526
Initializing machine ID from random generator.
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[discovery] Trying to connect to API Server "192.168.0.13:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.13:6443"
[discovery] Requesting info from "https://192.168.0.13:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.13:6443"
[discovery] Successfully established connection with API Server "192.168.0.13:6443"
[bootstrap] Detected server version: v1.8.15
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

Node join complete:
* Certificate signing request sent to master and response
 received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.


== On node1 ( the master ) ==

Check the state again:
node2 has joined, but its status is still NotReady

[node1 ~]$ kubectl  get  nodes
NAME      STATUS ROLES     AGE VERSION
node1     NotReady   master    7m v1.10.2
node2     NotReady   <none>    51s v1.10.2


Next, run step 2 from above to initialize the cluster networking
[node1 ~]$ kubectl  apply -n  kube-system -f  "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 |tr -d '\n')"

Let's break down what the trailing part of that kubectl command actually does.

[node1 ~]$ kubectl  version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.11", GitCommit:"1df6a8381669a6c753f79cb31ca2e3d57ee7c8a3", GitTreeState:"clean", BuildDate:"2018-04-05T17:24:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.15", GitCommit:"c2bd642c70b3629223ea3b7db566a267a1e2d0df", GitTreeState:"clean", BuildDate:"2018-07-11T17:52:15Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

[node1 ~]$ kubectl  version | base64
Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiI4IiwgR2l0VmVy
c2lvbjoidjEuOC4xMSIsIEdpdENvbW1pdDoiMWRmNmE4MzgxNjY5YTZjNzUzZjc5Y2IzMWNhMmUz
ZDU3ZWU3YzhhMyIsIEdpdFRyZWVTdGF0ZToiY2xlYW4iLCBCdWlsZERhdGU6IjIwMTgtMDQtMDVU
MTc6MjQ6MDNaIiwgR29WZXJzaW9uOiJnbzEuOC4zIiwgQ29tcGlsZXI6ImdjIiwgUGxhdGZvcm06
ImxpbnV4L2FtZDY0In0KU2VydmVyIFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1p
bm9yOiI4IiwgR2l0VmVyc2lvbjoidjEuOC4xNSIsIEdpdENvbW1pdDoiYzJiZDY0MmM3MGIzNjI5
MjIzZWEzYjdkYjU2NmEyNjdhMWUyZDBkZiIsIEdpdFRyZWVTdGF0ZToiY2xlYW4iLCBCdWlsZERh
dGU6IjIwMTgtMDctMTFUMTc6NTI6MTVaIiwgR29WZXJzaW9uOiJnbzEuOC4zIiwgQ29tcGlsZXI6
ImdjIiwgUGxhdGZvcm06ImxpbnV4L2FtZDY0In0K

[node1 ~]$ kubectl  version | base64 | tr -d '\n'
Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiI4IiwgR2l0VmVyc2lvbjoidjEuOC4xMSIsIEdpdENvbW1pdDoiMWRmNmE4MzgxNjY5YTZjNzUzZjc5Y2IzMWNhMmUzZDU3
ZWU3YzhhMyIsIEdpdFRyZWVTdGF0ZToiY2xlYW4iLCBCdWlsZERhdGU6IjIwMTgtMDQtMDVUMTc6MjQ6MDNaIiwgR29WZXJzaW9uOiJnbzEuOC4zIiwgQ29tcGlsZXI6ImdjIiwgUGxhdGZvcm06ImxpbnV4
L2FtZDY0In0KU2VydmVyIFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiI4IiwgR2l0VmVyc2lvbjoidjEuOC4xNSIsIEdpdENvbW1pdDoiYzJiZDY0MmM3MGIzNjI5MjIzZWEzYjdk
YjU2NmEyNjdhMWUyZDBkZiIsIEdpdFRyZWVTdGF0ZToiY2xlYW4iLCBCdWlsZERhdGU6IjIwMTgtMDctMTFUMTc6NTI6MTVaIiwgR29WZXJzaW9uOiJnbzEuOC4zIiwgQ29tcGlsZXI6ImdjIiwgUGxhdGZv
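The reason for the tr -d '\n' step becomes clear with a small experiment: GNU base64 wraps its output at 76 columns, and the embedded newlines would break the URL query string, so they have to be stripped out (a minimal sketch, assuming GNU coreutils):

```shell
# 100 bytes of input encode to 136 base64 characters,
# which base64 wraps onto two lines
printf '%0100d' 0 | base64

# tr -d '\n' removes the line breaks, leaving a single token
# that can safely be embedded in a URL query parameter
printf '%0100d' 0 | base64 | tr -d '\n'
```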


Check the nodes again

[node1 ~]$ kubectl get nodes
NAME      STATUS ROLES     AGE VERSION
node1     Ready     master    8m v1.10.2
node2     Ready     <none>    2m v1.10.2

With that, a Kubernetes environment is Ready

That's one more step toward the study group

~ enjoy it