Letsencrypt autorenew

The project has been renamed from Letsencrypt to Certbot. I am working on updating my script to reflect the change, but I have to make sure it does not change or break any required dependencies. It is a great thing for securing internet servers, and it is free.
There is a catch, though: you have to renew your certificates often. Since they provide a tool to do so, I do not think that is a problem at all. First, install the command-line API tool: letsencrypt source

There are many ways to get a new certificate or renew an existing one.
I like the following way, which can be scripted easily.

Get New Certificate

./letsencrypt-auto --email <email> --agree-tos certonly -d <fqdn> -c <Location_for_config>

Configuration for certificate request / location

It is a good idea to create a config file for each certificate, because we can reuse it for renewal.


# Domain which you are trying to get certificate for;
domains = wiki.k2patel.in
# Define rsa keysize
rsa-key-size = 4096
# Define the api server
server = https://acme-v01.api.letsencrypt.org/directory
# email address for your certificate
email = k2patel@rediffmail.com
# we can disable the UI and turn on the text mode
text = True
# authenticate by placing file in webroot located under .well-known/acme-challenge/
authenticator = webroot
webroot-path = /var/www/letsencrypt/

Nginx configuration

I’m using https redirect for my hosts so i use following code on each domain.
Works fine for me.

    if ($request_uri !~ "^/.well-known/acme-challenge/(.*)") {
        rewrite     ^(.*)   https://$host$1 permanent;
    }
    location /.well-known/acme-challenge {
        root /var/www/letsencrypt;
    }
Cron setup

Now I have a script which runs every 11 weeks.


#!/usr/bin/env bash
# Renew certificates using lets-encrypt
# Author : Ketan Patel <k2patel.in>
# License : BSD
source /etc/bashrc
# Globals ( Please update )
ldomains=('wiki.k2patel.in' 'www.k2patel.in' 'ip.k2patel.in' 'rpm.k2patel.in')
# Enable system-level logging: redirect all output to logger
exec 1> >(logger -t $(basename $0)) 2>&1
for i in "${ldomains[@]}"; do
    ${LETSENCRYPT_HOME}/letsencrypt-auto certonly -c /etc/letsencrypt/config/${i}.conf --renew-by-default
done
# Restart web services
if /usr/bin/systemctl restart ${WEBSERVER} ; then
   echo "Web service re-started after certificate renewal."
else
   echo "Failed to start web services"
fi
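Cron has no "every N weeks" field, so the 11-week schedule has to be approximated. A hedged sketch is to run the script on the 1st of every second month, which keeps renewals well inside the 90-day certificate lifetime; the file path and script location below are assumptions, not part of the original setup:

```shell
# /etc/cron.d/letsencrypt-renew (hypothetical file)
# m  h  dom  mon  dow  user  command
 30  3  1    */2  *    root  /usr/local/sbin/renew-letsencrypt.sh
```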

Reference :

1.  https://wiki.k2patel.in/doku.php?id=letsencrypt


Clean Inodes on Linux

This morning I got an alarm from our system monitoring saying that one of the servers had only 5% of its inodes left. Checking with df -h showed no disk-space issue, which seemed weird at first. Reading the alarm details again made it clear: there was no disk-space problem, but an inode-space problem. You have to know the difference between disk space and inodes.

To check inodes, Linux has a separate command; you can run this to check inode usage:

df -i

and it will show something like this:

# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda3 671744 640908 30836 96% /var
/dev/sda4 3662848 584548 3078300 16% /home
tmpfs 507592 4 507588 1% /run/user/1033

The output above shows that /var is at 96% inode usage, so next we have to find which path under it holds a lot of files.

Just run this command to count the files under each directory:

# for i in /var/*; do echo $i; find $i |wc -l; done
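The same per-directory loop gets repeated at each level of the drill-down, so it can be wrapped in a small helper; count_entries is my own name for it, not a standard tool:

```shell
#!/usr/bin/env bash
# Print each immediate child of a directory together with the number of
# filesystem entries it accounts for (the child itself plus everything below).
count_entries() {
    local parent="$1" dir
    for dir in "$parent"/*; do
        printf '%s %d\n' "$dir" "$(find "$dir" | wc -l)"
    done
}

# Usage: count_entries /var, then count_entries /var/lib, and so on.
```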

The result of the command above showed that /var/lib had a lot of files, but we still do not know exactly which subdirectory, so we check again under /var/lib/ with this command:

# for i in /var/lib/*; do echo $i; find $i |wc -l; done

And the result above showed that /var/lib/php5 had a lot of files, so we check that directory once more:

# for i in /var/lib/php5/*; do echo $i; find $i |wc -l; done
The result showed that /var/lib/php5/sessions contained a very large number of files, so we are sure this is the directory to clean up. To clean up /var/lib/php5/sessions, just run the commands below.

First, make sure you are in the right directory before deleting anything; if you are in the wrong directory, this can cause a big problem, and you could even be fired 😀

find . -mtime +7 -exec stat -c "%n %y" {} \;
find . -mtime +7 -exec rm {} \;

The first command lists files older than 7 days along with their modification times, so you can review them; the second command deletes them.
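If you run this clean-up regularly, it is safer to pin the path and restrict the match to regular files rather than relying on the current directory; here is a small hedged wrapper (clean_old_files is my own name, not a standard tool):

```shell
#!/usr/bin/env bash
# Delete regular files older than a given number of days, only in the
# named directory itself (no recursion into subdirectories).
clean_old_files() {
    local dir="$1" days="$2"
    find "$dir" -maxdepth 1 -type f -mtime +"$days" -delete
}

# Usage: clean_old_files /var/lib/php5/sessions 7
```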

And We hope it will solve your issue 🙂

Note:
This may not work for you, and we will not be responsible for any errors on your OS!


Limiting Access with SFTP Jails on Debian and Ubuntu

As the system administrator for your Linode, you may want to give your users the ability to securely upload files to your server. The most common way to do this is to allow file transfers via SFTP, which uses SSH to provide encryption. This means you need to give your users SSH logins. But, by default, SSH users are able to view your Linode’s entire filesystem, which may not be desirable.

This guide will help you configure OpenSSH to restrict users to their home directories, and to SFTP access only. Please note that these instructions are not intended to support shell logins; any user accounts modified in accordance with this guide will have the ability to transfer files, but not the ability to log into a remote shell session.

These instructions will work for Ubuntu 9.04, Debian 5, and later. Unfortunately, the version of SSH packaged with Ubuntu 8.04 is too old to support this configuration.

Configure OpenSSH

First, you need to configure OpenSSH.

1. Edit your /etc/ssh/sshd_config file with your favorite text editor:

vim /etc/ssh/sshd_config

2. Add or modify the Subsystem sftp line to look like the following: /etc/ssh/sshd_config

Subsystem sftp internal-sftp

3. Add this block of settings to the end of the file: /etc/ssh/sshd_config

Match Group filetransfer
ChrootDirectory %h
X11Forwarding no
AllowTcpForwarding no
ForceCommand internal-sftp

Save the changes to your file.

4. Restart OpenSSH:

service ssh restart

OpenSSH has been successfully modified.

Modify User Accounts

In this section, we’ll set up the correct new groups, ownership, and permissions for your user accounts.

1. Create a system group for users whom you want to restrict to SFTP access:

addgroup filetransfer

2. Modify the user accounts that you wish to restrict to SFTP. Issue the following commands for each account, substituting the appropriate username. Please keep in mind that this will prevent these users from being able to log into a remote shell session.

sudo usermod -a -G filetransfer username
sudo chown root /home/username
sudo chmod go-w /home/username

These users will now be unable to create files in their home directories, since these directories are owned by the root user.

3. Next, you need to create new directories for each user, to which they will have full access. Issue the following commands for each user, changing the directories created to suit your needs:

sudo mkdir /home/username/writable
sudo chown username:filetransfer /home/username/writable
sudo chmod ug+rwX /home/username/writable
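When several accounts need the same treatment, the commands from steps 2 and 3 can be generated in one loop. This sketch (sftp_jail_cmds is my own name) only prints the commands so you can review them before running them as root:

```shell
#!/usr/bin/env bash
# Print the per-user SFTP-jail setup commands for review (dry run).
sftp_jail_cmds() {
    local user
    for user in "$@"; do
        echo "usermod -a -G filetransfer $user"
        echo "chown root /home/$user"
        echo "chmod go-w /home/$user"
        echo "mkdir -p /home/$user/writable"
        echo "chown $user:filetransfer /home/$user/writable"
        echo "chmod ug+rwX /home/$user/writable"
    done
}

# Review the output, then pipe it to a root shell:
#   sftp_jail_cmds alice bob | sudo sh
```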

Your users should now be able to log into their accounts via SFTP and transfer files to and from their assigned subdirectories, but they shouldn’t be able to see the rest of your Linode’s filesystem.

Reference :
1. https://www.linode.com/docs/tools-reference/tools/limiting-access-with-sftp-jails-on-debian-and-ubuntu

2. https://askubuntu.com/questions/134425/how-can-i-chroot-sftp-only-ssh-users-into-their-homes


Mysql : ERROR 1016 (HY000) at line 1: Can’t open file

When creating a large number of partitions or tables, MySQL may mysteriously stop working and you find this type of error on /var/lib/mysql/$HOSTNAME.err:

[ERROR] /usr/sbin/mysqld: Can't open file: './database/table.frm' (errno: 24)

errno: 24 simply means that too many files are open for the given process. There is a read-only MySQL variable called open_files_limit that shows how many open files mysqld allows:

SHOW VARIABLES LIKE 'open_files_limit';

Many systems set this to something very low, like 1024. Unfortunately, the following will NOT work:

SET open_files_limit=100000;

MySQL will respond with:

ERROR 1238 (HY000): Variable 'open_files_limit' is a read only variable

However, it is possible to make the change in /etc/my.cnf. This file may not exist; if not, just create it. Be sure the setting is in the [mysqld] section:

[mysqld]
open_files_limit = 100000

Then, be sure to restart mysql:

sudo /etc/init.d/mysql restart

Now, SHOW VARIABLES LIKE 'open%' should show 100000. The number you use may be different.
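Note that mysqld can never get more file descriptors than the kernel grants its process, so after raising open_files_limit it is worth checking the operating-system side too. A hedged sketch, assuming a standard Linux /proc layout and that the pidof utility is installed:

```shell
#!/usr/bin/env bash
# Show the current shell's open-file limit, and the limit actually
# applied to a running mysqld process, if there is one.
ulimit -n
if pid=$(pidof -s mysqld); then
    grep 'Max open files' "/proc/$pid/limits"
fi
```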

Source : http://www.solomonson.com/content/how-fix-errno-24-mysql
