
Vagrant: create your own base box

Install vim and set it as the default editor:

apt-get install vim
update-alternatives --config editor

Ensure you have sudo installed:

apt-get install sudo

Add vagrant to the sudoers file with passwordless login:

vagrant ALL=(ALL) NOPASSWD:ALL

mkdir -p /home/vagrant/.ssh
wget --no-check-certificate -O /home/vagrant/.ssh/authorized_keys

# Ensure we have the correct permissions set
chmod 0700 /home/vagrant/.ssh
chmod 0600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant /home/vagrant/.ssh

apt-get install -y openssh-server

$ sudo apt-get install -y build-essential linux-headers-server

# Mount the guest additions ISO via the VirtualBox window, then run...
$ sudo mount /dev/cdrom /media/cdrom
$ sudo /media/cdrom/
$ sudo umount /media/cdrom
$ sudo apt-get clean

Make sure that GRUB_TIMEOUT is set to "1", GRUB_HIDDEN_TIMEOUT_QUIET is set to "true", and GRUB_CMDLINE_LINUX_DEFAULT is set to "quiet". Save and close the file, then update GRUB:

$ sudo vi /etc/default/grub
$ sudo update-grub

Zero out free disk space so the packaged box compresses well:

sudo dd if=/dev/zero of=/EMPTY bs=1M

$ sudo rm -f /EMPTY

# Shut down the machine
$ sudo shutdown -h now

The next command will actually create our box. The directory where you run this command is where the box file will be created.

$ vagrant package --base vagrant-{distro}-{version}
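After packaging, the box can be registered locally and booted. A minimal sketch; the base VM name vagrant-debian-7 and the box name my-base are placeholders, not from the notes above:

```shell
# Package the running VM into a .box file (created in the current directory).
vagrant package --base vagrant-debian-7 --output my-base.box

# Register the box under a local name and boot a VM from it.
vagrant box add my-base my-base.box
vagrant init my-base
vagrant up
```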


Fabric execution:

fab -H me@host1,me@host2,me@host3 function

Example: fab -H get_backup

or alternatively:

Example: fab production deploy, but then you'll have to have production defined inside your fabfile:

def production():
    ...

def development():
    ...

local - execute a local command (on the host from which we launch Fabric)

run - execute a remote command on all specified hosts, user-level permissions

sudo - sudo a command on the remote server

put - copy over a local file to a remote destination

get - download a file from the remote server

prompt - prompt user with text and return the input (like raw_input)

reboot - reboot the remote system, disconnect, and wait for wait seconds

Download some logs

get(remote_path="/tmp/log_extracts.tar.gz", local_path="/logs/new_log.tar.gz")

Python: combine dates - today's date with a different hour

Specification: I want to launch the deployment script, but not before 18:00, so every user can leave for home. I want to launch it today but at a different time. I think the best way is to combine today's date with time(18, 0) and then compare the result with the current time. The launching command is up to us, so just for tests let's pass the time in argv:

.. code-block:: bash

python 16 35

import sys
from datetime import datetime, date, time

ehour = int(sys.argv[1])
eminute = int(sys.argv[2])

current_time = datetime.now()
exe_time = datetime.combine(date.today(), time(ehour, eminute))
# or: d = date.today(); datetime(d.year, d.month, d.day, ehour, eminute)
print "time left:"
print (exe_time - current_time)

if exe_time <= current_time:
    print "executing..."


git diff --cached - what is about to be committed
git status - brief summary of the situation
git log - at any point you can view the history of your changes
git log -p - if you also want to see complete diffs at each step
git log --stat --summary - often the overview of the change is useful to get a feel of each step

Remove wrongly committed large files:

➜  my_project git:(master) ✗ git filter-branch --tree-filter 'rm -rf site_media/help' HEAD
Rewrite 22d1cead6a67d68939da2eef48c5372b36651a5c (70/70)
Ref 'refs/heads/master' was rewritten

Restore a deleted, uncommitted file:

git status
git checkout -- filename

The output tells you what you need to do. git reset HEAD etc.

This will unstage the rm operation. After that, running a git status again will tell you that you need to do a git checkout -- to get the file back.

Update: I have this in my config file:

$ git config alias.unstage "reset HEAD"

which I usually use to unstage stuff.
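The restore flow above can be sketched end to end; the throwaway repository and the notes.txt file are made up for the demo:

```shell
# Sketch: recover a file deleted with `git rm` before the deletion is committed.
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email "you@example.com"
git config user.name "you"
echo "data" > notes.txt
git add notes.txt
git commit -qm "add notes"

git rm -q notes.txt          # stages the deletion and removes the file
git reset -q HEAD notes.txt  # unstage the rm
git checkout -- notes.txt    # restore the working copy from HEAD
```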


pycurl fails to install when curl-config is missing:

(flats)➜  projects  pip install pycurl
 Downloading/unpacking pycurl
   Downloading pycurl- (116kB): 116kB downloaded
   Running (path:/workspace/virtualenvs/flats/build/pycurl/ egg_info for package pycurl
     Traceback (most recent call last):
       File "<string>", line 17, in <module>
       File "/workspace/virtualenvs/flats/build/pycurl/", line 563, in <module>
         ext = get_extension()
       File "/workspace/virtualenvs/flats/build/pycurl/", line 368, in get_extension
         ext_config = ExtensionConfiguration()
       File "/workspace/virtualenvs/flats/build/pycurl/", line 65, in __init__
       File "/workspace/virtualenvs/flats/build/pycurl/", line 100, in configure_unix
         raise ConfigurationError(msg)
     __main__.ConfigurationError: Could not run curl-config: [Errno 2] No such file or directory
     Complete output from command python egg_info:
     Traceback (most recent call last):

   File "<string>", line 17, in <module>

   File "/workspace/virtualenvs/flats/build/pycurl/", line 563, in <module>

     ext = get_extension()

   File "/workspace/virtualenvs/flats/build/pycurl/", line 368, in get_extension

     ext_config = ExtensionConfiguration()

   File "/workspace/virtualenvs/flats/build/pycurl/", line 65, in __init__


   File "/workspace/virtualenvs/flats/build/pycurl/", line 100, in configure_unix

     raise ConfigurationError(msg)

 __main__.ConfigurationError: Could not run curl-config: [Errno 2] No such file or directory

 Cleaning up...
 Command python egg_info failed with error code 1 in /workspace/virtualenvs/flats/build/pycurl
 Storing debug log for failure in /home/greg/.pip/pip.log
 (flats)➜  projects  pip install pycurl


Install the curl development headers so that curl-config is available, then reinstall pycurl:

apt-get install libcurl4-gnutls-dev librtmp-dev

Linux suspended jobs

If you suspend a job by accident with ctrl+z, you can always resume it with fg. Here are more examples:

ctrl+z - suspend the current job
jobs - list the current jobs
fg - resume the job that's next in the queue
fg %[number] - resume job [number]
bg - push the next job in the queue into the background
bg %[number] - push the job [number] into the background
kill %[number] - kill the job numbered [number]
kill -[signal] %[number] - send the signal [signal] to job number [number]
disown %[number] - disown the job so the shell no longer owns it and it stays alive after you leave the terminal
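A minimal sketch of the job commands in a script (ctrl+z itself only works at an interactive prompt, so here the job is simply started in the background):

```shell
# Start a background job, list it, then kill it by job number.
sleep 60 &
SLEEP_PID=$!
jobs                          # list the current jobs
kill %1                       # kill job number 1
wait %1 2>/dev/null || true   # reap it; exit status reflects the signal
```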

Postgres connect through ssh tunnel

It is possible to use a local pgadmin3 instance even if remote connections to the PostgreSQL server are not granted. Powerful SSH tunneling comes to help here. Create an SSH connection:

$ ssh -L 63333:mad-erp04:5432
$ ssh -L 63333:localhost:5432

It is possible to use SSH to encrypt the network connection between clients and a PostgreSQL server. Done properly, this provides an adequately secure network connection, even for non-SSL-capable clients.

First make sure that an SSH server is running properly on the same machine as the PostgreSQL server and that you can log in using ssh as some user. Then you can establish a secure tunnel with a command like this from the client machine:

ssh -L 63333:localhost:5432

The first number in the -L argument, 63333, is the port number of your end of the tunnel; it can be any unused port. (IANA reserves ports 49152 through 65535 for private use.) The second number, 5432, is the remote end of the tunnel: the port number your server is using. The name or IP address between the port numbers is the host with the database server you are going to connect to, as seen from the host you are logging in to, which is in this example. In order to connect to the database server using this tunnel, you connect to port 63333 on the local machine:

psql -h localhost -p 63333 postgres

To the database server it will then look as though you are really user joe on host connecting to localhost in that context, and it will use whatever authentication procedure was configured for connections from this user and host. Note that the server will not think the connection is SSL-encrypted, since in fact it is not encrypted between the SSH server and the PostgreSQL server. This should not pose any extra security risk as long as they are on the same machine.

In order for the tunnel setup to succeed you must be allowed to connect via ssh as, just as if you had attempted to use ssh to create a terminal session.

You could also have set up the port forwarding as

ssh -L but then the database server will see the connection as coming in on its interface, which is not opened by the default setting listen_addresses = 'localhost'. This is usually not what you want.

If you have to "hop" to the database server via some login host, one possible setup could look like this:

ssh -L Note that this way the connection from to will not be encrypted by the SSH tunnel. SSH offers quite a few configuration possibilities when the network is restricted in various ways. Please refer to the SSH documentation for details.
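Putting the tunnel pieces together, a minimal session might look like this; user joe is from the text above, while shell.example.com and db.example.com are placeholder hosts:

```shell
# Open the tunnel in the background: local port 63333 forwards to
# port 5432 on db.example.com, as seen from shell.example.com.
ssh -N -f -L 63333:db.example.com:5432 joe@shell.example.com

# Connect through the local end of the tunnel.
psql -h localhost -p 63333 -U joe postgres
```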