Posts

Showing posts from 2018

Allow User to Launch GUI based YaST Without Password

Note that this is not for the security-conscious user; if you are concerned about security, you can skip this post :) To allow GUI-based yast2 to run without a password, create a yast2 file in /etc/sudoers.d with the following content:

Defaults env_keep += "DISPLAY XAUTHORITY"
tux ALL = NOPASSWD: /sbin/yast2

With the above file created, user tux will be able to launch graphical yast2 with the following command:

# sudo yast2

You can then create a desktop shortcut that calls sudo yast2. Enjoy!!!
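The steps above can be sketched as a pair of commands (a sketch: the username tux comes from the post, substitute your own; the visudo check is my addition, not from the post, but is a good habit with any sudoers drop-in):

```shell
# Write the sudoers drop-in (replace "tux" with your own username)
sudo tee /etc/sudoers.d/yast2 > /dev/null <<'EOF'
Defaults env_keep += "DISPLAY XAUTHORITY"
tux ALL = NOPASSWD: /sbin/yast2
EOF

# Syntax-check the drop-in before relying on it; a broken sudoers
# file can lock you out of sudo entirely
sudo visudo -c -f /etc/sudoers.d/yast2
```

The env_keep line matters because graphical programs need DISPLAY and XAUTHORITY from your session to reach the X server when run through sudo.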

Deploy salt-minion using salt-ssh

A typical saltstack setup requires installing salt-master on the master and salt-minion on every managed node. This task becomes tedious as the number of managed nodes increases. As long as the managed nodes are reachable via ssh from the master node, we can automate this task and deploy salt-minion using salt-ssh. On your salt-master, run the following:

# cd /srv/salt
# mkdir -p deploysalt/conf
# cd deploysalt

Create a file called init.sls with the following content:

Add_Repository:
  pkgr...
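Unlike regular salt, salt-ssh finds its targets through a roster file rather than through running minions, so one is needed before the state above can be applied. A minimal /etc/salt/roster might look like this (a sketch: the hostname, IP, and password are illustrative placeholders, not values from the post):

```shell
# Describe the ssh-reachable targets for salt-ssh (illustrative values)
cat > /etc/salt/roster <<'EOF'
node01:
  host: 192.168.100.51    # IP of the managed node (placeholder)
  user: root
  passwd: secret          # or use "priv: /root/.ssh/id_rsa" for key auth
EOF

# Apply the deploysalt state to that target over ssh
salt-ssh 'node01' state.apply deploysalt
```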

Vagrantfile - FOSSTECH Style - Disable NAT

I have been using vagrant to set up my demo systems, and I don't really like how vagrant sets up my virtual machines, so I made a few changes. I need my virtual machines to be able to interact with each other, so they have to be bridged. I also like to interact via standard ssh using a password instead of using vagrant ssh, so my setup enables password-based ssh and disables vagrant ssh. My environment's router is 192.168.100.254, so you will need to adjust the file a little to match your environment. Create a directory to contain your virtual machine:

# mkdir node01
# cd node01

Create a file called Vagrantfile containing the following:

Vagrant.configure("2") do |config|
  # Use SUSE Leap 15.0
  config.vm.box = "opensuse/openSUSE-15.0-x86_64"
  # Configure Hostname and Public IP Address
  config.vm.hostname = "node01"
  config.vm.network "public_network", bridge: "eth0", ip: "192.168.100.51"
  ...
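The excerpt is cut off before the password-ssh part, but enabling password-based ssh inside the guest typically comes down to two commands, which a Vagrant shell provisioner could run (a sketch of one way to do it, not the author's exact truncated configuration; assumes an openSUSE guest with systemd):

```shell
# Turn on PasswordAuthentication in sshd_config, whether the directive
# is currently commented out or set to "no", then restart sshd
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
systemctl restart sshd
```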

How to Vagrant in SUSE Leap 15

Whenever I need to quickly set up a demo system on my laptop, vagrant has been my go-to tool. vagrant packages can be found in the following repository: http://download.opensuse.org/repositories/Virtualization:/vagrant/openSUSE_Leap_15.0/ By default, vagrant works well with virtualbox, which is currently part of the default openSUSE Leap 15.0. Install virtualbox and add your current user to vboxusers. Assuming your username is testuser, run the following commands:

> sudo zypper in virtualbox
> sudo usermod -a -G vboxusers testuser

(Note the -a flag: without it, usermod replaces the user's supplementary groups instead of appending to them.) Log out and log in again so that the OS reloads your group information. You can verify that your user belongs to vboxusers by running the following command:

> id

Now the system is ready for vagrant. Add the repo above and install vagrant:

> sudo zypper in vagrant

I usually make a separate directory to keep my demo environment:

> mkdir vagrant
> cd vagrant

You can easily download and startup another openSUSE Lea...
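From inside that directory, the usual vagrant workflow looks like this (a sketch; the box name is the openSUSE Leap 15.0 box used elsewhere in this archive):

```shell
# Generate a Vagrantfile for the box, then download and boot the VM
vagrant init opensuse/openSUSE-15.0-x86_64
vagrant up

# Log in to the running VM, and destroy it when the demo is over
vagrant ssh
vagrant destroy
```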

SaltStack on SUSE Leap 42 and 15

SUSE is supposed to prefer saltstack over ansible, but I found that the saltstack repositories were outdated. I decided to rebuild saltstack using the openSUSE Build Service, and I am glad to report that I have saltstack-2018.3.0 built for SLES 12, SLES 15, SUSE Leap 42 and SUSE Leap 15. You can find the repository details here: https://build.opensuse.org/repositories/home:davidtio:saltstack Enjoy the build and do let me know if anything breaks!!! For those new to saltstack, I will write a mini how-to to start your saltstack adventure sometime next week.

Ansible on SUSE Leap 15

Ansible is more commonly associated with Red Hat than with SUSE, but setting up ansible on SUSE Leap 15 is actually quite simple. Below is how you can get started with ansible on SUSE Leap 15. Start by installing ansible:

# zypper in ansible

Add the hosts that you want to manage to /etc/ansible/hosts; the format is as follows:

ipaddress:port ansible_connection=ssh ansible_user=root ansible_password=password

Assuming the system you want to manage has the IP address 192.168.5.1, runs a root-enabled sshd with the password 'pass1234', and sshd listens on the default port 22, the configuration will be as follows:

192.168.5.1 ansible_connection=ssh ansible_user=root ansible_password=pass1234

Notice that the port number is optional when sshd is running on the default port 22. Disable host key checking by adding the following configuration in /etc/ansible/ansible.cfg:

host_key_checking = False

And just like that, you are ready to play with ansible....
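Once the inventory is in place, a quick way to confirm the setup works is ansible's ad-hoc ping module (a sketch against the example host above; note that password-based ssh connections in ansible also require the sshpass package to be installed on the control node):

```shell
# Contact every host in the inventory over ssh; a reachable,
# correctly configured host answers with "pong"
ansible all -m ping
```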