{"id":166,"date":"2018-07-05T02:48:37","date_gmt":"2018-07-04T23:48:37","guid":{"rendered":"https:\/\/buraksuatgorgun.com.tr\/?p=166"},"modified":"2018-07-05T02:48:37","modified_gmt":"2018-07-04T23:48:37","slug":"ceph-storage-esxi-datastore-linux","status":"publish","type":"post","link":"https:\/\/www.buraksuatgorgun.com.tr\/index.php\/2018\/07\/05\/ceph-storage-esxi-datastore-linux\/","title":{"rendered":"CEPH Storage Kurulumu \/ Linux &#8211; ESXi ye Disk Eklenmesi"},"content":{"rendered":"<h3><strong>Ceph Storage Kurulum, Ubuntu, CentOS7 Mount ve ESXi Datastore olarak kullan\u0131m<\/strong><\/h3>\n<p>Merhaba Arkada\u015flar,<\/p>\n<p>Bu makalede Ceph Storage i\u00e7in kurulum prosed\u00fcrlerini ve \u00f6rnek bir ESXi sistemine datastore olarak nas\u0131l eklenece\u011fini i\u015fleyece\u011fiz.<\/p>\n<p>\u00d6ncelik ile Bu yap\u0131 i\u00e7in, en az 4 sunucu gereksinimimiz olacakt\u0131r. Bunlar;<\/p>\n<p>Admin \/ Deploy Node<\/p>\n<p>Monitor Node<\/p>\n<p>Storage Node 1<\/p>\n<p>Storage Node 2..<\/p>\n<p>Gibi olup, Her node ayr\u0131 bir sunucuyu belirtmektedir. Storage Node&#8217;lar\u0131 i\u00e7in ihtiyac\u0131n\u0131z ne ise o \u015fekilde d\u00fczenleyebilir, ek sunucular kurabilir yahut \u00e7al\u0131\u015fan yap\u0131ya ihtiyac\u0131n\u0131z oldu\u011funda rahatl\u0131kla yeni Storage Node&#8217;u ekleyebilirsiniz.<\/p>\n<p>Anlat\u0131mlarda 4 sunucunun da halihaz\u0131rda kurulu ve SSH&#8217;\u0131n aktif oldu\u011funu varsayarak ilerleyece\u011fim.<\/p>\n<h4><strong>\u00d6n Ayarlar<\/strong><\/h4>\n<p>Her\u015feyden \u00f6nce, OS diskiniz ve Ceph Storage \u00fczerinde kullan\u0131ma sunmay\u0131 planlad\u0131\u011f\u0131n\u0131z diskin ayr\u0131 olmas\u0131 gerekdi\u011fini belirtmek isterim. OS diskindeki alan kullan\u0131mda olmayacakt\u0131r. Kullan\u0131ma sunmay\u0131 planlad\u0131\u011f\u0131n\u0131z Storage diskinde ise hi\u00e7bir veri olmamal\u0131d\u0131r.<\/p>\n<p>\u0130lk olarak Admin \/ Deploy Node&#8217;unun kurulumunu ger\u00e7ekle\u015ftiriyoruz.<\/p>\n<p>Ben kurulumlarda Ubuntu 16.04.4 kulland\u0131m ancak ihtiyac\u0131n\u0131za g\u00f6re OpenSUSE yahut RHEL\/CentOS ta kullanabilirsiniz. Anlat\u0131m Ubuntu \u00fczerinden olacakt\u0131r,<\/p>\n<p>\u00d6ncelik ile Ceph Release key&#8217;i ekliyoruz;<\/p>\n<pre><span class=\"n\">wget<\/span> <span class=\"o\">-<\/span><span class=\"n\">q<\/span> <span class=\"o\">-<\/span><span class=\"n\">O<\/span><span class=\"o\">-<\/span> <span class=\"s1\">'https:\/\/download.ceph.com\/keys\/release.asc'<\/span> <span class=\"o\">|<\/span> <span class=\"n\">sudo<\/span> <span class=\"n\">apt<\/span><span class=\"o\">-<\/span><span class=\"n\">key<\/span> <span class=\"n\">add<\/span> <span class=\"o\">-<\/span><\/pre>\n<p>Ard\u0131ndan, repomuza Ceph paketlerini ekliyoruz.<\/p>\n<pre>echo deb https:\/\/download.ceph.com\/debian-{ceph-stable-release}\/ $(lsb_release -sc) main | sudo tee \/etc\/apt\/sources.list.d\/ceph.list<\/pre>\n<p>Ard\u0131ndan repomuzu update ederek ceph-deploy kurulumu ger\u00e7ekle\u015ftiriyoruz.<\/p>\n<pre><span class=\"n\">sudo<\/span> <span class=\"n\">apt<\/span> <span class=\"n\">update<\/span>\n<span class=\"n\">sudo<\/span> <span class=\"n\">apt<\/span> <span class=\"n\">install<\/span> <span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">deploy<\/span><\/pre>\n<p>Bu i\u015flemler ile Admin \/ Deploy nodunun ilk kurulumunu tamamlam\u0131\u015f olduk.<\/p>\n<p>NOT: Secure Linux (Selinux) aktif ise mutlaka devred\u0131\u015f\u0131 b\u0131rakmal\u0131s\u0131n\u0131z. (Ubuntu \u00fczerinde system utils kurmad\u0131 iseniz otomatik disable gelir.)<br \/>\nNOT: Seknronizasyon sorunu ya\u015fanmamas\u0131 i\u00e7in T\u00fcm node&#8217;lara ntp server kurulumunu ihmal etmemeniz gerekiyor.<\/p>\n<pre><span class=\"n\">apt<\/span> <span class=\"n\">install<\/span> <span class=\"n\">ntp\n\/etc\/init.d\/ntp start\n<\/span><\/pre>\n<p>Bu a\u015famada \u00f6ncelik ile, Admin \/ Deploy Node&#8217;unun sunuculara hostname ile eri\u015febilmesi laz\u0131m.<\/p>\n<p>Ben node&#8217;lara,<\/p>\n<p>Admin Node &#8211; ceph<\/p>\n<p>Monitor Node &#8211; monitor<\/p>\n<p>Storage Node 1 &#8211; node1<\/p>\n<p>Storage Node 2 &#8211; node2<\/p>\n<p>\u015eeklinde Hostname tan\u0131mlad\u0131m.<\/p>\n<p>Admin \/ Deploy nodu \u00fczerinde\u00a0\/etc\/hosts i\u00e7erisine girerek IP &#8211; Hostname \u015feklinde giriyoruz;<\/p>\n<p>93.187.202.212 monitor ceph.monitor<br \/>\n93.187.202.235 node1 ceph.node1<br \/>\n93.187.202.231 node2 ceph.node2<\/p>\n<p>Eri\u015fimi ping node1 diyerek test edebilirsiniz.<\/p>\n<p>Bu i\u015flemin ard\u0131ndan Monit\u00f6r ve Storage nodelar\u0131 \u00fczerinde Admin\/Deploy nodunun eri\u015fece\u011fi kullan\u0131c\u0131lar\u0131 olu\u015fturuyoruz ve \u015fifrelerini belirliyoruz.<\/p>\n<pre>useradd -d \/home\/cephusr -m cephusr\npasswd cephusr<\/pre>\n<p>Ve \u0130lgili kullan\u0131c\u0131lar\u0131n sudo yetkisi oldu\u011funa emin oluyoruz. Sudo yetkisi kazand\u0131rmak i\u00e7in;<\/p>\n<pre>echo \"cephusr ALL = (root) NOPASSWD:ALL\" | sudo tee \/etc\/sudoers.d\/cephusr\nsudo chmod 0440 \/etc\/sudoers.d\/cephusr<\/pre>\n<p>\u0130\u015flem ard\u0131ndan \u015eifresiz SSH eri\u015fimini bu kullan\u0131c\u0131lar i\u00e7in aktif etmemiz gerekecektir. ceph-deploy komutu \u015fifre girmez, bu y\u00fczden ssh key generate etmemiz gerekiyor.<\/p>\n<p>Admin nodunda ssh-keygen komutunu \u00e7al\u0131\u015ft\u0131r\u0131n.<\/p>\n<p>(passphrase bo\u015f b\u0131rak\u0131lacak)<\/p>\n<pre><span class=\"n\">ssh<\/span><span class=\"o\">-<\/span><span class=\"n\">keygen<\/span>\n\n<span class=\"n\">Generating<\/span> <span class=\"n\">public<\/span><span class=\"o\">\/<\/span><span class=\"n\">private<\/span> <span class=\"n\">key<\/span> <span class=\"n\">pair<\/span><span class=\"o\">.<\/span>\n<span class=\"n\">Enter<\/span> <span class=\"n\">file<\/span> <span class=\"ow\">in<\/span> <span class=\"n\">which<\/span> <span class=\"n\">to<\/span> <span class=\"n\">save<\/span> <span class=\"n\">the<\/span> <span class=\"n\">key<\/span> <span class=\"p\">(<\/span><span class=\"o\">\/<\/span><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">admin<\/span><span class=\"o\">\/.<\/span><span class=\"n\">ssh<\/span><span class=\"o\">\/<\/span><span class=\"n\">id_rsa<\/span><span class=\"p\">):<\/span>\n<span class=\"n\">Enter<\/span> <span class=\"n\">passphrase<\/span> <span class=\"p\">(<\/span><span class=\"n\">empty<\/span> <span class=\"k\">for<\/span> <span class=\"n\">no<\/span> <span class=\"n\">passphrase<\/span><span class=\"p\">):<\/span>\n<span class=\"n\">Enter<\/span> <span class=\"n\">same<\/span> <span class=\"n\">passphrase<\/span> <span class=\"n\">again<\/span><span class=\"p\">:<\/span>\n<span class=\"n\">Your<\/span> <span class=\"n\">identification<\/span> <span class=\"n\">has<\/span> <span class=\"n\">been<\/span> <span class=\"n\">saved<\/span> <span class=\"ow\">in<\/span> <span class=\"o\">\/<\/span><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">admin<\/span><span class=\"o\">\/.<\/span><span class=\"n\">ssh<\/span><span class=\"o\">\/<\/span><span class=\"n\">id_rsa<\/span><span class=\"o\">.<\/span>\n<span class=\"n\">Your<\/span> <span class=\"n\">public<\/span> <span class=\"n\">key<\/span> <span class=\"n\">has<\/span> <span class=\"n\">been<\/span> <span class=\"n\">saved<\/span> <span class=\"ow\">in<\/span> <span class=\"o\">\/<\/span><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">admin<\/span><span class=\"o\">\/.<\/span><span class=\"n\">ssh<\/span><span class=\"o\">\/<\/span><span class=\"n\">id_rsa<\/span><span class=\"o\">.<\/span><span class=\"n\">pub<\/span><span class=\"o\">.<\/span><\/pre>\n<p>Ard\u0131ndan t\u00fcm Node&#8217;lara keyi y\u00fcklemeliyiz.<\/p>\n<pre><span class=\"n\">ssh<\/span><span class=\"o\">-<\/span><span class=\"n\">copy<\/span><span class=\"o\">-<\/span><span class=\"nb\">id<\/span> cephusr<span class=\"nd\">@node1<\/span>\n<span class=\"n\">ssh<\/span><span class=\"o\">-<\/span><span class=\"n\">copy<\/span><span class=\"o\">-<\/span><span class=\"nb\">id<\/span> cephusr<span class=\"nd\">@node2<\/span>\n<span class=\"n\">ssh<\/span><span class=\"o\">-<\/span><span class=\"n\">copy<\/span><span class=\"o\">-<\/span><span class=\"nb\">id<\/span> cephusr<span class=\"nd\">@monitor<\/span><\/pre>\n<p>\u015eifreleri soracakt\u0131r, \u015fifrelerini girmenizin ard\u0131ndan art\u0131k Admin nodu, t\u00fcm node lara ssh \u00fczerinden \u015fifre gereksinimi olmadan eri\u015fim sa\u011flayabilecektir.<\/p>\n<p>Kontrol i\u00e7in;<\/p>\n<p>ssh cephusr@monitor<\/p>\n<p>Komutunu kullanabilir, eri\u015febildi\u011fini do\u011frulayabilirsiniz.<\/p>\n<p>Bu i\u015flemin ard\u0131ndan ceph-deploy&#8217;un SSH ba\u011flant\u0131s\u0131nda cephusr kullan\u0131c\u0131s\u0131n\u0131 \u00e7al\u0131\u015ft\u0131rabilmesi i\u00e7in, Admin \/ Deploy Node \u00fczerinde config dosyas\u0131 olu\u015fturuyoruz;<\/p>\n<p>nano \/etc\/.ssh\/config<\/p>\n<p>Ve i\u00e7erisini a\u015fa\u011f\u0131daki \u015fekilde d\u00fczenliyoruz;<\/p>\n<pre><span class=\"n\">Host<\/span> <span class=\"n\">node1<\/span>\n   <span class=\"n\">Hostname<\/span> <span class=\"n\">node1<\/span>\n   <span class=\"n\">User<\/span> <span class=\"p\">{<\/span><span class=\"n\">username<\/span><span class=\"p\">}<\/span>\n<span class=\"n\">Host<\/span> <span class=\"n\">node2<\/span>\n   <span class=\"n\">Hostname<\/span> <span class=\"n\">node2<\/span>\n   <span class=\"n\">User<\/span> <span class=\"p\">{<\/span><span class=\"n\">username<\/span><span class=\"p\">}<\/span>\n<span class=\"n\">Host<\/span> <span class=\"n\">node3<\/span>\n   <span class=\"n\">Hostname<\/span> <span class=\"n\">node3<\/span>\n   <span class=\"n\">User<\/span> <span class=\"p\">{<\/span><span class=\"n\">username<\/span><span class=\"p\">}<\/span><\/pre>\n<p>Bu benim yap\u0131land\u0131rmam i\u00e7in a\u015fa\u011f\u0131daki \u015fekildedir;<\/p>\n<pre>Host monitor\n  Hostname monitor\n  User cephusr\nHost node1\n  Hostname node1\n  User cephusr\nHost node2\n  Hostname node2\n  User cephusr<\/pre>\n<p>B\u00f6ylece d\u00fcz SSH ile ba\u011flanmak istedi\u011finde otomatik root a y\u00f6nlendirilmeyecek, cephusr kullan\u0131c\u0131s\u0131na y\u00f6nlendirilecektir.<\/p>\n<p>Bu i\u015flemler ile \u00f6n haz\u0131rl\u0131klar\u0131m\u0131z\u0131 tamamlam\u0131\u015f olduk. Art\u0131k konfig\u00fcrasyona ba\u015flayabiliriz.<\/p>\n<h4><strong>Storage Cluster yap\u0131s\u0131n\u0131n olu\u015fturulmas\u0131<\/strong><\/h4>\n<p>Admin nodu \u00fczerinde, ceph-deploy komutunun otomatik olu\u015fturdu\u011fu konfig\u00fcrasyon dosyalar\u0131 ve keylerin tutulmas\u0131 i\u00e7in bir klas\u00f6r olu\u015fturuyoruz.<\/p>\n<pre><span class=\"n\">mkdir<\/span> <span class=\"n\">my<\/span><span class=\"o\">-<\/span><span class=\"n\">cluster<\/span>\n<span class=\"n\">cd<\/span> <span class=\"n\">my<\/span><span class=\"o\">-<\/span><span class=\"n\">cluster<\/span><\/pre>\n<p>NOT: ceph-deploy komutu, \u00e7\u0131kt\u0131y\u0131 bulundu\u011fu klas\u00f6re aktaraca\u011f\u0131 i\u00e7in kullan\u0131rken daima my-cluster klas\u00f6r\u00fcnde yahut siz cluster ad\u0131n\u0131 ne \u015fekilde belirledi iseniz, o klas\u00f6rde olmal\u0131s\u0131n\u0131z.<\/p>\n<p>NOT: Ba\u015fka bir kullan\u0131c\u0131 ile giri\u015f yapm\u0131\u015f iken sudo kullanmamal\u0131s\u0131n\u0131z.<\/p>\n<p>Art\u0131k Cluster olu\u015fturma i\u015flemine ba\u015flayabiliriz.<\/p>\n<p>Test i\u00e7in ufak bir journal olu\u015fturarak devam ediyoruz.<\/p>\n<p>\/my-cluster\/ceph.conf i\u00e7erisinde, en alta;<\/p>\n<pre class=\"aphph-container aphph-addlightplain\"><code class=\"aphph-addlightplain\">[osd]\nosd_journal_size = 2000<\/code><\/pre>\n<p>\u015eeklinde ekliyoruz.<\/p>\n<p>Deploy i\u015fleminden sonra Journal olu\u015fmu\u015f olmal\u0131. E\u011fer herhangi bir hata al\u0131yor iseniz conf dosyas\u0131nda yahut kurulumda yanl\u0131\u015fl\u0131k ger\u00e7ekle\u015ftirmi\u015f olmal\u0131s\u0131n\u0131z, tekrar kontrol etmeniz gerekmektedir.<\/p>\n<p>\u015eimdi deploy i\u015flemini my-cluster klas\u00f6r\u00fc i\u00e7erisinde iken ger\u00e7ekle\u015ftirerek cluster&#8217;i olu\u015ftural\u0131m;<\/p>\n<pre><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">deploy<\/span> <span class=\"n\">new<\/span> monitor node1 node2<\/pre>\n<p>Ben \u015fahsen a\u015fa\u011f\u0131daki hatay\u0131 ald\u0131m;<\/p>\n<p>bash: python: command not found<br \/>\n[ceph_deploy][ERROR ] RuntimeError: connecting to host: monitor resulted in errors: IOError cannot send (already closed?)<\/p>\n<p>Bunun sebebi de ceph-deploy&#8217;un phyton ile \u00e7al\u0131\u015fmas\u0131 ve sunucularda phyton bulunmamas\u0131d\u0131r.<\/p>\n<p>T\u00fcm sunuculara a\u015fa\u011f\u0131daki komut ile phyton kurulumu ger\u00e7ekle\u015ftiriyoruz;<\/p>\n<p>sudo apt install python-minimal<\/p>\n<p>Bu i\u015flemin ard\u0131ndan ceph-deploy new nodex nodex nodex komutunu tekrar uyguluyoruz. Sorunsuzca tamamlanm\u0131\u015f olmal\u0131.<\/p>\n<p>Ard\u0131ndan node&#8217;lar \u00fczerine kurulumu sa\u011fl\u0131yoruz;<\/p>\n<pre><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">deploy<\/span> <span class=\"n\">install<\/span> <span class=\"n\">node1<\/span> <span class=\"n\">node2<\/span> <span class=\"n\">node3<\/span><\/pre>\n<p>\u0130\u015flem tamamland\u0131\u011f\u0131nda Ceph, her node&#8217;a kurulmu\u015f olacakt\u0131r.<\/p>\n<p>Ard\u0131ndan monit\u00f6r sistemi i\u00e7in \u00f6n haz\u0131rl\u0131k yap\u0131yoruz,<\/p>\n<pre><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">deploy<\/span> <span class=\"n\">mon<\/span> <span class=\"n\">create<\/span><span class=\"o\">-<\/span><span class=\"n\">initial<\/span><\/pre>\n<p>Bu komut keyleri toplayacakt\u0131r ve my-cluster klas\u00f6r\u00fcn\u00fczde;<\/p>\n<ul class=\"simple\">\n<li><code class=\"docutils literal\"><span class=\"pre\">ceph.client.admin.keyring<\/span><\/code><\/li>\n<li><code class=\"docutils literal\"><span class=\"pre\">ceph.bootstrap-mgr.keyring<\/span><\/code><\/li>\n<li><code class=\"docutils literal\"><span class=\"pre\">ceph.bootstrap-osd.keyring<\/span><\/code><\/li>\n<li><code class=\"docutils literal\"><span class=\"pre\">ceph.bootstrap-mds.keyring<\/span><\/code><\/li>\n<li><code class=\"docutils literal\"><span class=\"pre\">ceph.bootstrap-rgw.keyring<\/span><\/code><\/li>\n<li><code class=\"docutils literal\"><span class=\"pre\">ceph.bootstrap-rbd.keyring<\/span><\/code><\/li>\n<\/ul>\n<p>Dosyalar\u0131 olu\u015fmu\u015f olacakt\u0131r.<\/p>\n<p>Bu a\u015famada, Konfig\u00fcrasyon dosyalar\u0131 ve admin key&#8217;i node lar aras\u0131 haberle\u015fme i\u00e7in Deploy node&#8217;undan di\u011fer nodlar\u0131n\u0131za konfig\u00fcrasyonlar\u0131 ile aktarman\u0131z gerekiyor.<\/p>\n<pre><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">deploy<\/span> <span class=\"n\">admin monitor node1 node2<\/span><\/pre>\n<p>Ard\u0131ndan Node&#8217;lar\u0131n\u0131zdaki diski OSD olarak eklemelisiniz.<\/p>\n<p>Ekleme komutu;<\/p>\n<p>ceph-deploy osd create \u2013data {device} {ceph-node}<\/p>\n<p>Format\u0131ndad\u0131r. Buradaki Device, Node lar\u0131n\u0131z \u00fczerindeki OS diskiniz de\u011fil, Storage yap\u0131s\u0131na sa\u011flayaca\u011f\u0131n\u0131z 2. diskleriniz olacakt\u0131r. Komut benim i\u00e7in;<\/p>\n<pre>ceph-deploy osd create \u2013-data <span class=\"o\">\/<\/span><span class=\"n\">dev<\/span><span class=\"o\">\/sdb<\/span> monitor\nceph-deploy osd create \u2013-data <span class=\"o\">\/<\/span><span class=\"n\">dev<\/span><span class=\"o\">\/sdb<\/span> <span class=\"n\">node1<\/span>\nceph-deploy osd create \u2013-data <span class=\"o\">\/<\/span><span class=\"n\">dev<\/span><span class=\"o\">\/sdb<\/span> <span class=\"n\">node2<\/span><\/pre>\n<p>\u015eeklindedir.<\/p>\n<p>E\u011fer bu komut ile OSD leri olu\u015fturamad\u0131 iseniz, Manuel olarak zap&#8217;laman\u0131z (Formatlaman\u0131z) ve haz\u0131rlaman\u0131z gerekiyor. Bunun i\u00e7in de komut;<\/p>\n<pre>ceph-deploy disk zap {osd-server-name}:{disk-name} Format\u0131nda yani;\nceph-deploy disk zap monitor:sdb<\/pre>\n<p>Gibi. Zap i\u015flemi ard\u0131ndan OSD mizi haz\u0131rl\u0131yoruz;<\/p>\n<pre>ceph-deploy osd prepare {node-name}:{data-disk} Format\u0131nda yani;\nceph-deploy osd prepare monitor:sdb\nceph-deploy osd prepare node1:sdb\nceph-deploy osd prepare node2:sdb<\/pre>\n<p>\u015eeklinde.<\/p>\n<p>Bu i\u015flem sadece OSD leri haz\u0131rlamakta olup, aktif etmemektedir. Aktif etmek i\u00e7in;<\/p>\n<pre>ceph-deploy osd activate {node-name}:{data-disk-partition} Format\u0131nda Yani;\nceph-deploy osd activate monitor:\/dev\/sdb1\nceph-deploy osd activate node1:\/dev\/sdb1\nceph-deploy osd activate node2:\/dev\/sdb1<\/pre>\n<p>Komutlar\u0131 ile Ceph Storage sistemimizin Storage g\u00f6revini g\u00f6recek olan OSD lerimizi yay\u0131na al\u0131yoruz.<\/p>\n<p>Bu a\u015famada art\u0131k Monit\u00f6r Node&#8217;umuza g\u00f6revini belirtebilir, Sisteme Monit\u00f6r olarak atayabiliriz.<\/p>\n<pre><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">deploy<\/span> <span class=\"n\">mon<\/span> <span class=\"n\">add MONITOR-NODE Format\u0131nda yani;\nceph<span class=\"o\">-<\/span>deploy mon add monitor\n<\/span><\/pre>\n<p>Bu, Birden fazla monit\u00f6r nodu eklemek i\u00e7in;<\/p>\n<pre><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">deploy<\/span> <span class=\"n\">mon<\/span> <span class=\"n\">add MONITOR-NODE1 MONITOR-NODE2 Format\u0131nda yani;\nceph<span class=\"o\">-<\/span>deploy mon add monitor1 monitor2\n<\/span><\/pre>\n<p>\u015eeklinde olacakt\u0131r.<\/p>\n<p>NOT: G\u00fcvenlik i\u00e7in birden fazla monit\u00f6r nodu eklemeniz durumunda bu node lar birbirleri ile senkronize olarak quorum olu\u015fturacaklard\u0131r.<\/p>\n<p>Quorum durumunu;<\/p>\n<pre><span class=\"n\">ceph<\/span> <span class=\"n\">quorum_status<\/span> <span class=\"o\">--<\/span><span class=\"nb\">format<\/span> <span class=\"n\">json<\/span><span class=\"o\">-<\/span><span class=\"n\">pretty<\/span><\/pre>\n<p>Komutu ile inceleyebilirsiniz.<\/p>\n<p>NOT: G\u00fcvenlik ve Yedeklilik i\u00e7in birden fazla monit\u00f6r eklemeniz <strong>\u015e\u0130DDETLE<\/strong> tavsiye edilmektedir. Aksi halde bir monit\u00f6r fail verdi\u011finde, monit\u00f6r sunucusunda herhangi bir sorun olu\u015fur ise Ceph Storage sisteminiz susacak, \u00e7al\u0131\u015fmay\u0131 durduracakt\u0131r. En az 2. bir monit\u00f6r nodu eklemeniz, hasarl\u0131 sunucuyu onarana kadar sistemin durmamas\u0131 i\u00e7in \u00f6nemlidir. Anlat\u0131m\u0131 sadele\u015ftirmek ve minimal tutmak i\u00e7in tek monit\u00f6r kullan\u0131lm\u0131\u015ft\u0131r.<\/p>\n<p>NOT: Monitor eklemeye \u00e7al\u0131\u015f\u0131r iken;<\/p>\n<p>accepter.accepter.bind unable to bind to 6789: (98) Address already in use<\/p>\n<p>Ya da<\/p>\n<p>[ceph_deploy][ERROR ] GenericError: Failed to add monitor to host: monitor<\/p>\n<p>\u015eeklinde hata al\u0131r iseniz Monitor node \u00fczerinde<\/p>\n<pre>cd \/var\/lib\/ceph\/mon\nrm -rf *<\/pre>\n<p>Komutunu uygulaman\u0131z, Admin node&#8217;u \u00fczerinde de<\/p>\n<pre>rm -rf \/var\/run\/ceph\/ceph-mon.monitor.asok<\/pre>\n<p>Komutunu uygulayarak lock&#8217;u silmeniz gerekmektedir. Bu hatalar, bir yerde hatal\u0131 konfig\u00fcrasyon oldu\u011fu i\u00e7in monit\u00f6r ekleyemedi\u011fini belirtir. Ard\u0131ndan tekrar;<\/p>\n<pre><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">deploy<\/span> <span class=\"n\">mon<\/span> <span class=\"n\">add MONITOR-NODE Format\u0131nda yani;\nceph<span class=\"o\">-<\/span>deploy mon add monitor\n<\/span><\/pre>\n<p>Komutu ile monit\u00f6r\u00fcn\u00fcz\u00fc\/monit\u00f6rlerinizi tekrar eklemeyi deneyerek ba\u015far\u0131l\u0131 bir \u015fekilde ekleyebilirsiniz.<\/p>\n<h4><strong>CephFS kurulumu<\/strong><\/h4>\n<p>ODS lerimizi yaratt\u0131\u011f\u0131m\u0131z, monit\u00f6r\u00fcm\u00fcz\u00fc ekledi\u011fimize g\u00f6re art\u0131k Linux\/Unix sistemlerine mount edilebilen CephFS sistemini olu\u015fturabiliriz.<\/p>\n<p>Ben Metadata nodu olarak monit\u00f6r nodumu kulland\u0131m. Ancak dilerseniz ek bir sunucuyu, \u00f6rne\u011fin metadata nodunu sisteme ekleyerek g\u00f6revi ona da atayabilirsiniz.<\/p>\n<p>Anlat\u0131m sadeli\u011fi ve monit\u00f6re \u00e7ok y\u00fck d\u00fc\u015fmedi\u011fi i\u00e7in ben monit\u00f6r nodunu tercih ettim.<\/p>\n<p>Komutumuz;<\/p>\n<pre>ceph-deploy mds create Metadata Node Format\u0131nda yani benim i\u00e7in;\nceph-deploy mds create monitor<\/pre>\n<p>\u015eeklindedir. \u0130\u015flem ard\u0131ndan \u0130lgili Node \u00fczerinde,<\/p>\n<pre>ceph mds stat<\/pre>\n<p>Komutunu kullanarak sistemin up oldu\u011funu, standby da bekledi\u011fini g\u00f6rebiliyor olmal\u0131y\u0131z.<\/p>\n<figure id=\"attachment_197\" aria-describedby=\"caption-attachment-197\" style=\"width: 393px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-197 size-full\" src=\"https:\/\/buraksuatgorgun.com.tr\/wp-content\/uploads\/2018\/07\/Screenshot_7-1.png\" alt=\"mds\" width=\"393\" height=\"132\" \/><figcaption id=\"caption-attachment-197\" class=\"wp-caption-text\">Metadata node&#8217;unu kontrol<\/figcaption><\/figure>\n<p>Ard\u0131ndan Monit\u00f6r nodumuzda Cephfs i\u00e7in gerekli olan datapool ve metadatay\u0131 yaratmal\u0131y\u0131z.<code class=\"aphph-addlightplain\"><\/code><\/p>\n<p><code class=\"aphph-addlightplain\">ceph osd pool create cephfs_data 128<br \/>\nceph osd pool create\u00a0 cephfs_metadata 128<\/code><\/p>\n<p>\u0130\u015flem ard\u0131ndan bu pool&#8217;u kullanan mycephsfs isimli bir dosya sistemi yaratmak i\u00e7in a\u015fa\u011f\u0131daki komutu kullan\u0131yoruz;<\/p>\n<pre class=\"aphph-container aphph-addlightplain\"><code class=\"aphph-addlightplain\">ceph fs new mycephfs cephfs_metadata cephfs_data<\/code><\/pre>\n<p>A\u015fa\u011f\u0131daki komut ile olu\u015ftu\u011funu g\u00f6zlemleyebiliriz;<\/p>\n<pre class=\"aphph-container aphph-addlightplain\"><code class=\"aphph-addlightplain\">ceph fs ls<\/code><\/pre>\n<p>Bu i\u015flemler ile Ceph Storage sistemimiz \u00fczerinde, mycephfs ad\u0131nda bir dosya sistemi yaratm\u0131\u015f olduk. Art\u0131k bu dosya sistemini mount edebiliriz.<\/p>\n<h4><strong>Linux \u00dczerine Mount Etme<\/strong><\/h4>\n<p>Bu i\u015flem \u00e7ok basittir.<\/p>\n<p>\u00d6ncelik ile, Sunucuya ceph-common paketi kurulmal\u0131d\u0131r.<\/p>\n<h4><strong>CentOS 7 i\u00e7in;<\/strong><\/h4>\n<p>CentOS 7 sisteminde ve repolar\u0131nda ceph yerle\u015fik olarak bulunmamaktad\u0131r. Ancak i\u015flem i\u00e7in Epel Reposunu kullanabiliyoruz.<\/p>\n<p>A\u015fa\u011f\u0131daki komutlar ile Epel reposu sunucuya tan\u0131mlan\u0131r,<\/p>\n<p>yum -y install epel-release<br \/>\nrpm -Uhv http:\/\/download.ceph.com\/rpm-jewel\/el7\/noarch\/ceph-release-1-1.el7.noarch.rpm<\/p>\n<p>Ard\u0131ndan a\u015fa\u011f\u0131daki komutlar\u0131 kullanarak ceph-common paketini kurabilirsiniz.<\/p>\n<p class=\"command\">yum -y update<br \/>\nyum -y install ceph-common<\/p>\n<p>Bu i\u015flemler ard\u0131ndan \u00f6ncelik ile \/etc\/hosts dosyas\u0131n\u0131 Admin \/ Deploy Node&#8217;unda oldu\u011fu gibi hostname leri \u00e7\u00f6z\u00fcmleyecek \u015fekilde d\u00fczenlemelisiniz.<\/p>\n<p>Bu benim i\u00e7in daha \u00f6nce de belirtmi\u015f oldu\u011fum gibi;<\/p>\n<p>93.187.202.212 monitor ceph.monitor<br \/>\n93.187.202.235 node1 ceph.node1<br \/>\n93.187.202.231 node2 ceph.node2<\/p>\n<p>\u015eeklindedir.<\/p>\n<p>Ard\u0131ndan Admin Node&#8217;u \u00fczerinde, Admin Secret Key tespit edilir.<\/p>\n<pre class=\"aphph-container aphph-addlightplain\"><code class=\"aphph-addlightplain\">cat \/my-cluster\/ceph.client.admin.keyring | grep \"key =\" | awk {'print $3'} &gt; admin.secret<\/code><\/pre>\n<p>Bu komut bize admin.secret dosyas\u0131n\u0131 olu\u015fturacakt\u0131r.<\/p>\n<p>Bu admin.secret dosyas\u0131n\u0131 mount edece\u011fimiz sunucuya aktar\u0131yoruz.<\/p>\n<p>Ard\u0131ndan komutumuz a\u015fa\u011f\u0131daki gibidir;<\/p>\n<pre class=\"aphph-container aphph-addlightplain\"><code class=\"aphph-addlightplain\">mount -t ceph MONITOR:PORT,NODE:PORT,NODE:PORT,NODE:PORT:\/ \/KLAS\u00d6R\/ -o name=admin,secretfile=\/VER\u0130YOLU\/admin.secret\n<\/code><\/pre>\n<p>\u015eeklindedir. Bu benim olu\u015fturdu\u011fum yap\u0131 i\u00e7in;<\/p>\n<pre class=\"aphph-container aphph-addlightplain\"><code class=\"aphph-addlightplain\">mount -t ceph monitor:6789,node1:6789,node2:6789:\/ \/mnt\/ -o name=admin,secretfile=\/root\/admin.secret<\/code><\/pre>\n<p>Komutlar\u0131 uygulanacakt\u0131r.<\/p>\n<p>Tebrikler, CentOS 7 art\u0131k Ceph Storage&#8217;inizi kullan\u0131yor.<\/p>\n<figure id=\"attachment_172\" aria-describedby=\"caption-attachment-172\" style=\"width: 951px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-172 size-full\" src=\"https:\/\/buraksuatgorgun.com.tr\/wp-content\/uploads\/2018\/07\/Screenshot_7.png\" alt=\"Centos\" width=\"951\" height=\"195\" \/><figcaption id=\"caption-attachment-172\" class=\"wp-caption-text\">Ceph Mount Edildi<\/figcaption><\/figure>\n<p>Her reboot i\u015fleminde otomatik mount olmas\u0131 i\u00e7in \/etc\/fstab i\u00e7erisine;<\/p>\n<pre class=\"contents \"><code class=\"aphph-addlightplain\">NODE:PORT,NODE:PORT,NODE:PORT<\/code>:\/ \/MOUNT_ED\u0130LECEK_KLAS\u00d6R ceph name=admin,secretfile=\/VER\u0130YOLU\/admin.secret,_netdev,noatime 0 0<\/pre>\n<p>\u015eeklinde eklenir. Yani bu benim olu\u015fturdu\u011fum yap\u0131 i\u00e7in;<\/p>\n<pre class=\"contents \">monitor:6789,node1:6789,node2:6789:\/ \/mnt ceph name=admin,secretfile=\/root\/admin.secret,_netdev,noatime 0 0<\/pre>\n<p>\u015eeklindedir. Ard\u0131ndan sunucuyu reboot edip mount oldu\u011funu test edebilirsiniz.<\/p>\n<h4><strong>Ubuntu 16.04 i\u00e7in;<\/strong><\/h4>\n<p>A\u015fa\u011f\u0131daki komutlar\u0131 kullanarak ceph-common paketini kurabilirsiniz.<\/p>\n<pre>apt install ceph-common ceph-fs-common -y<\/pre>\n<p>Bu i\u015flemler ard\u0131ndan \u00f6ncelik ile \/etc\/hosts dosyas\u0131n\u0131 Admin \/ Deploy Node&#8217;unda oldu\u011fu gibi hostname leri \u00e7\u00f6z\u00fcmleyecek \u015fekilde d\u00fczenlemelisiniz.<\/p>\n<p>Bu benim i\u00e7in daha \u00f6nce de belirtmi\u015f oldu\u011fumdan biraz de\u011fi\u015fik \u00e7\u00fcnk\u00fc bu sefer admin \/ deploy nodunu da giriyoruz.<\/p>\n<p>93.187.202.207 ceph ceph.admin<br \/>\n93.187.202.212 monitor ceph.monitor<br \/>\n93.187.202.235 node1 ceph.node1<br \/>\n93.187.202.231 node2 ceph.node2<\/p>\n<p>\u015eeklindedir.<\/p>\n<p>Ard\u0131ndan Admin Node&#8217;u \u00fczerinde, Admin Key ve Admin Secret tespit edilir.<\/p>\n<pre class=\"aphph-container aphph-addlightplain\"><code class=\"aphph-addlightplain\">cat \/my-cluster\/ceph.client.admin.keyring | grep \"key =\" | awk {'print $3'} &gt; admin.secret<\/code><\/pre>\n<p>Bu bize admin.secret dosyas\u0131n\u0131 olu\u015fturacakt\u0131r.<\/p>\n<p>Bu admin.secret dosyas\u0131n\u0131 a\u00e7arak i\u00e7indeki Key kodunu al\u0131yoruz.<\/p>\n<p>Ard\u0131ndan komutumuz a\u015fa\u011f\u0131daki gibidir;<\/p>\n<pre class=\"aphph-container aphph-addlightplain\"><code class=\"aphph-addlightplain\">mount -t ceph NODE:PORT,NODE:PORT,NODE:PORT:\/ \/MOUNT_ED\u0130LECEK_KLAS\u00d6R\/ -o name=admin,secret=ADM\u0130N_SECRET_KEY\n<\/code><\/pre>\n<p>Ubuntu \u00fczerinde Secret File yerine, direkt Secret Code yani admin.secret dosyas\u0131n\u0131n i\u00e7eri\u011findeki kodu ADM\u0130N_SECRET_KEY alan\u0131na girece\u011fiz. Benim olu\u015fturdu\u011fum yap\u0131da bu;<\/p>\n<pre class=\"aphph-container aphph-addlightplain\"><code class=\"aphph-addlightplain\">mount -t ceph monitor:6789,node1:6789,node2:6789:\/ \/mnt\/ -o name=admin,secret=AQDfUjxbIshaMBAArKNSraYVhLc+tlqbtS9M1w==<\/code><\/pre>\n<p>\u015eeklindedir ve bu komutlar\u0131 uygulanm\u0131\u015ft\u0131r.<\/p>\n<p>Tebrikler, Art\u0131k Ceph Storage&#8217;iniz Ubuntu sunucunuz taraf\u0131ndan kullan\u0131lmaktad\u0131r.<\/p>\n<figure id=\"attachment_184\" aria-describedby=\"caption-attachment-184\" style=\"width: 1174px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-184 size-full\" src=\"https:\/\/buraksuatgorgun.com.tr\/wp-content\/uploads\/2018\/07\/Screenshot_2.png\" alt=\"Ubuntu\" width=\"1174\" height=\"530\" \/><figcaption id=\"caption-attachment-184\" class=\"wp-caption-text\">Ceph Mount Edildi<\/figcaption><\/figure>\n<figure id=\"attachment_185\" aria-describedby=\"caption-attachment-185\" style=\"width: 998px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-185 size-full\" src=\"https:\/\/buraksuatgorgun.com.tr\/wp-content\/uploads\/2018\/07\/Screenshot_3.png\" alt=\"Ubuntu\" width=\"998\" height=\"207\" \/><figcaption id=\"caption-attachment-185\" class=\"wp-caption-text\">Ceph Mount Edildi<\/figcaption><\/figure>\n<p>Her reboot i\u015fleminde otomatik mount olmas\u0131 i\u00e7in \/etc\/fstab i\u00e7erisine;<\/p>\n<pre class=\"contents \"><code class=\"aphph-addlightplain\">NODE:PORT,NODE:PORT,NODE:PORT<\/code>:\/ \/MOUNT_ED\u0130LECEK_KLAS\u00d6R ceph name=admin,secret=ADMIN_SECRET_KEY,_netdev,noatime 0 0<\/pre>\n<p>\u015eeklinde eklenir. Yani bu benim olu\u015fturdu\u011fum yap\u0131 i\u00e7in;<\/p>\n<pre class=\"contents \">monitor:6789,node1:6789,node2:6789:\/ \/mnt ceph name=admin,secret=AQDfUjxbIshaMBAArKNSraYVhLc+tlqbtS9M1w==,_netdev,noatime 0 0<\/pre>\n<p>\u015eeklindedir. Ard\u0131ndan sunucuyu reboot edip mount oldu\u011funu test edebilirsiniz.<\/p>\n<h4><strong>ESXi \u00dczerine Datastore Olarak Mount Etme<\/strong><\/h4>\n<p>Bu i\u015flem standart bir Linux kurulumuna mount etmekten \u00e7ok daha zahmetlidir. \u00c7\u00fcnk\u00fc Ceph Storage sistemi tek ba\u015f\u0131na ESXi&#8217;nin g\u00f6zlemleyebilece\u011fi bir yap\u0131da de\u011fildir.<\/p>\n<p>Bunun i\u00e7in \u00f6ncelik ile bir\u00a0iSCSI Gateway sunucusu ya da NFS Gateway sunucusu kurulmal\u0131 ve konfig\u00fcre edilmeli, CEPH Storage bu sunucu \u00fczerinde ESXi&#8217;nin okuyabilece\u011fi \u015fekilde ESXi ye mount edilmelidir. Kabaca Ceph i\u00e7in bir arac\u0131, bir Gateway sunucusu kurmam\u0131z gerekiyor.<\/p>\n<p>Bu i\u015flem i\u00e7in de NFS Gateway olarak Ubuntu 16.04.4 kullanaca\u011f\u0131m. iCSCI i\u00e7in Ubuntu, PetaSAN, OpenSUSE, CentOS\/RHEL gibi alternatiflerimiz mevcut ancak iCSCI \u00fczerindeki ara\u015ft\u0131rmalar\u0131mda yeterli verim alamad\u0131\u011f\u0131m i\u00e7in es ge\u00e7iyorum.<\/p>\n<h4><strong>NFS Gateway<\/strong><\/h4>\n<p>Bu i\u015flem i\u00e7in Admin\/Deploy Node, Monitor Node, Node1 ve Node2 d\u0131\u015f\u0131nda 5. bir sunucunun default olarak haz\u0131r kurulu oldu\u011funu varsay\u0131yorum. Anlat\u0131m\u0131mda Ubuntu&#8217;dan yola \u00e7\u0131kaca\u011f\u0131m.<\/p>\n<p>\u00d6ncelik ile,\u00a0 &#8220;Ubuntu 16.04 i\u00e7in&#8221; ba\u015fl\u0131\u011f\u0131ndaki t\u00fcm ad\u0131mlar\u0131 ger\u00e7ekle\u015ftirerek Ceph Storage yap\u0131n\u0131z\u0131, Ubuntu sunucunuza mount etti\u011finizi varsay\u0131yoruz.<\/p>\n<p>E\u011fer hen\u00fcz mount etmedi iseniz,\u00a0<a href=\"https:\/\/buraksuatgorgun.com.tr\/index.php\/2018\/07\/05\/ceph-storage-kurulumu\/#Ubuntu_1604_icin\">buraya<\/a> t\u0131klayarak makalede ilgili b\u00f6l\u00fcme ula\u015fabilirsiniz.<\/p>\n<p>\u00d6ncelik ile sunucumuza NFS Server kural\u0131m;<\/p>\n<pre>apt-get -y install portmap nfs-kernel-server<\/pre>\n<p>Kurulum tamamland\u0131\u011f\u0131nda\u00a0\/etc\/exports dosyas\u0131n\u0131 d\u00fczenleyece\u011fiz.<\/p>\n<p>Burada, ESXi Sunucumuzun IP Adresini belirterek NFS ye a\u00e7aca\u011f\u0131m\u0131z klas\u00f6r\u00fc, benim olu\u015fturdu\u011fum yap\u0131 i\u00e7in \/mnt klas\u00f6r\u00fcn\u00fc belirtece\u011fiz.<\/p>\n<p>nano \/etc\/exports<\/p>\n<p>Dosya i\u00e7erisine, en alt\u0131na;<\/p>\n<pre>\/NETWORKE_A\u00c7ILACAK_KLAS\u00d6R_VER\u0130YOLU  000.000.000.000(rw,async,no_subtree_check) Format\u0131nda Yani;\n\n\/mnt 93.187.202.230(rw,async,no_subtree_check)\n\n<\/pre>\n<p>Format\u0131nda ekleyece\u011fiz.<\/p>\n<figure id=\"attachment_193\" aria-describedby=\"caption-attachment-193\" style=\"width: 724px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-193 size-full\" src=\"https:\/\/buraksuatgorgun.com.tr\/wp-content\/uploads\/2018\/07\/Screenshot_6.png\" alt=\"NFS\" width=\"724\" height=\"160\" \/><figcaption id=\"caption-attachment-193\" class=\"wp-caption-text\">Mount edilmi\u015f ceph&#8217;i NFS ye a\u00e7mak<\/figcaption><\/figure>\n<p>Ard\u0131ndan,<\/p>\n<p>exportfs -ra<\/p>\n<p>Komutunu kullan\u0131p NFS Servisimizi restart ediyoruz.<\/p>\n<p>\/etc\/init.d\/nfs-kernel-server restart<\/p>\n<p>Tebrikler, Art\u0131k CephFS yap\u0131n\u0131z\u0131 NFS \u00fczerinden networke a\u00e7t\u0131n\u0131z!<\/p>\n<h4><strong>ESXi Sistemine NFS Disk Ekleme<\/strong><\/h4>\n<p>Ben kurulum ve testlerimde\u00a0VMWare 6.7 kulland\u0131m. Anlat\u0131m\u0131 da bunun i\u00e7in ger\u00e7ekle\u015ftirece\u011fim.<\/p>\n<p>\u0130\u015flemler i\u00e7in, ESXi Web aray\u00fcz\u00fcn\u00fcze giri\u015f yap\u0131n\u0131z.<\/p>\n<p>Storage<br \/>\n&gt; New Datastore<\/p>\n<p>\u015eeklinde ilerleyiniz.<\/p>\n<p>Mount NFS Datastore<\/p>\n<p>\u0130sim = Datastore ad\u0131<\/p>\n<p>NFS Server = Sunucu IP si<\/p>\n<p>NFS Share = Veri yolu<\/p>\n<p>NFS Version = NFS 3<\/p>\n<figure id=\"attachment_189\" aria-describedby=\"caption-attachment-189\" style=\"width: 848px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-189 size-full\" src=\"https:\/\/buraksuatgorgun.com.tr\/wp-content\/uploads\/2018\/07\/Screenshot_4.png\" alt=\"Ceph Datastore'u NFS olarak eklemek\" width=\"848\" height=\"376\" \/><figcaption id=\"caption-attachment-189\" class=\"wp-caption-text\">Datastore&#8217;u eklemek<\/figcaption><\/figure>\n<p>Tebrikler! Art\u0131k Ceph Storage sisteminizi sorunsuz bir \u015fekilde ESXi Sisteminize Mount etmi\u015f bulunuyorsunuz.<\/p>\n<figure id=\"attachment_191\" aria-describedby=\"caption-attachment-191\" style=\"width: 808px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-191 size-full\" src=\"https:\/\/buraksuatgorgun.com.tr\/wp-content\/uploads\/2018\/07\/Screenshot_5.png\" alt=\"Ceph Storage'nin Datastore olarak eklenmesi\" width=\"808\" height=\"274\" \/><figcaption id=\"caption-attachment-191\" class=\"wp-caption-text\">Eklenmi\u015f Datastore<\/figcaption><\/figure>\n<p>Asl\u0131nda bu bize yeterli olsa dahi NFS Gateway sunucumuzda olu\u015facak herhangi bir kesinti, VM&#8217;lerin de u\u00e7mas\u0131 anlam\u0131na gelmektedir. Bu y\u00fczden makalenin ilerleyen k\u0131s\u0131mlar\u0131nda \u00e7ok daha g\u00fcvenli ve yedekli bir yap\u0131 olan iCSCI format\u0131n\u0131 inceleyece\u011fiz.<\/p>\n<h4><strong>\u00d6nemli Bilgiler<\/strong><\/h4>\n<ul>\n<li>Herhangi bir a\u015famada herhangi bir node&#8217;umuzun sorunlu oldu\u011fundan \u015f\u00fcphelenecek olur isek sa\u011fl\u0131k durumunu;<\/li>\n<\/ul>\n<pre><span class=\"n\">ssh<\/span> NODE <span class=\"n\">sudo<\/span> <span class=\"n\">ceph<\/span> <span class=\"n\">health Format\u0131nda Yani;\n<\/span><span class=\"n\">ssh monitor sudo ceph health\nssh<\/span> <span class=\"n\">node1<\/span> <span class=\"n\">sudo<\/span> <span class=\"n\">ceph<\/span> <span class=\"n\">health\nssh node2 sudo ceph health<\/span><\/pre>\n<p>\u015eeklinde \u00f6\u011frenebiliriz. Daha detayl\u0131 bir analiz i\u00e7in;<\/p>\n<pre><span class=\"n\">ssh<\/span> NODE <span class=\"n\">sudo<\/span> <span class=\"n\">ceph<\/span> <span class=\"o\">-<\/span><span class=\"n\">s<\/span><\/pre>\n<p>Komutu ile analiz sa\u011flayabiliriz.<\/p>\n<ul>\n<li>Herhangi bir a\u015famada geri d\u00f6n\u00fc\u015f\u00fc olmayan bir hata yapt\u0131\u011f\u0131n\u0131z\u0131 anlarsan\u0131z, t\u00fcm ceph paketlerini temizleyerek en ba\u015ftan ba\u015flayabilirsiniz.<\/li>\n<\/ul>\n<p>Bunun i\u00e7in Komutumuz;<\/p>\n<pre><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">deploy<\/span> <span class=\"n\">purge<\/span> <span class=\"p\">{<\/span><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">node<\/span><span class=\"p\">}<\/span> <span class=\"p\">[{<\/span><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">node<\/span><span class=\"p\">}]<\/span>\n<span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">deploy<\/span> <span class=\"n\">purgedata<\/span> <span class=\"p\">{<\/span><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">node<\/span><span class=\"p\">}<\/span> <span class=\"p\">[{<\/span><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">node<\/span><span class=\"p\">}]<\/span>\n<span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">deploy<\/span> <span class=\"n\">forgetkeys<\/span>\n<span class=\"n\">rm<\/span> <span class=\"n\">ceph<\/span><span class=\"o\">.*<\/span><\/pre>\n<p>Format\u0131ndad\u0131r. Ve Admin \/ Deploy nodunda \u00e7al\u0131\u015ft\u0131r\u0131lmal\u0131d\u0131r.<\/p>\n<p>Yani bu benim olu\u015fturdu\u011fum yap\u0131 i\u00e7in;<\/p>\n<pre><span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">deploy<\/span> <span class=\"n\">purge<\/span> monitor node1 node2\n<span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">deploy<\/span> <span class=\"n\">purgedata<\/span> monitor node1 node2\n<span class=\"n\">ceph<\/span><span class=\"o\">-<\/span><span class=\"n\">deploy<\/span> <span class=\"n\">forgetkeys<\/span>\n<span class=\"n\">rm<\/span> <span class=\"n\">ceph<\/span><span class=\"o\">.*<\/span><\/pre>\n<p>Format\u0131ndad\u0131r.<\/p>\n<p>Kaynak\u00e7a;<\/p>\n<p><a href=\"http:\/\/docs.ceph.com\/docs\/master\/start\/quick-start-preflight\/\">\u00d6n Haz\u0131rl\u0131k<\/a><\/p>\n<p><a href=\"http:\/\/docs.ceph.com\/docs\/master\/start\/quick-ceph-deploy\/\">Cluster Deploy<\/a><\/p>\n<p><a href=\"http:\/\/docs.ceph.com\/docs\/master\/cephfs\/\">CephFS<\/a><\/p>\n<p><a href=\"https:\/\/www.digitalocean.com\/community\/tutorials\/how-to-set-up-an-nfs-mount-on-ubuntu-16-04\">Ubuntu NFS Gateway Kurulumu<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Ceph Storage Kurulum, Ubuntu, CentOS7 Mount ve ESXi Datastore olarak kullan\u0131m Merhaba Arkada\u015flar, Bu makalede Ceph Storage i\u00e7in kurulum prosed\u00fcrlerini ve \u00f6rnek bir ESXi sistemine datastore olarak nas\u0131l eklenece\u011fini i\u015fleyece\u011fiz. \u00d6ncelik ile Bu yap\u0131 i\u00e7in, en az 4 sunucu gereksinimimiz olacakt\u0131r. Bunlar; Admin \/ Deploy Node Monitor Node Storage Node 1 Storage Node 2.. Gibi [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4],"tags":[],"class_list":["post-166","post","type-post","status-publish","format-standard","hentry","category-linux"],"_links":{"self":[{"href":"https:\/\/www.buraksuatgorgun.com.tr\/index.php\/wp-json\/wp\/v2\/posts\/166","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.buraksuatgorgun.com.tr\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.buraksuatgorgun.com.tr\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.buraksuatgorgun.com.tr\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.buraksuatgorgun.com.tr\/index.php\/wp-json\/wp\/v2\/comments?post=166"}],"version-history":[{"count":0,"href":"https:\/\/www.buraksuatgorgun.com.tr\/index.php\/wp-json\/wp\/v2\/posts\/166\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.buraksuatgorgun.com.tr\/index.php\/wp-json\/wp\/v2\/media?parent=166"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.buraksuatgorgun.com.tr\/index.php\/wp-json\/wp\/v2\/categories?post=166"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.buraksuatgorgun.com.tr\/index.php\/wp-json\/wp\/v2\/tags?post=166"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}