How to Increase File Open Limits on Linux

Raising the Maximum Number of File Descriptors (Open Files) on Ubuntu 14.04 Trusty

As a developer, it can be difficult when you bump into issues that are really about the system or the network. This one was a complete nightmare to fix, because nothing about it looked related to the development work. Gear up for it: it is a real possibility, and you may well see the network folks flinging it over to developers while the developers remain absolutely clueless about it.


Knowing you have a problem at hand is the First Step

When the server is not yet live, follow these steps

To check the limits for your current session, use the ulimit command:

$ ulimit -n
4096
$ ulimit -Hn
4096
$ ulimit -Sn
4096

A couple of things to note:

  • There are separate limits for different users, so make sure to run this as the user your process is using.
  • There’s a hard limit and a soft limit. The latter is the actual limit your processes have to obey, and the former sets the maximum value the soft limit can be raised to. If you need to set separate values for these two, you probably already know how to do that and are not reading this post, so just keep in mind to always modify both, and to check the soft limit (see the quick example below).
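
A quick sketch of how the two interact in practice (the numbers here are just examples):

# The soft limit is what processes actually hit; the hard limit is the ceiling it can be raised to.
ulimit -Sn                    # show the soft limit
ulimit -Hn                    # show the hard limit
ulimit -Sn "$(ulimit -Hn)"    # a non-root user may raise the soft limit up to the hard limit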

So this is what you would find after 10 seconds of Googling, but keep in mind that ulimit is not guaranteed to give you the limits your processes actually have! There are a million things that can modify the limits of a process after (or before) you initialize your shell. So what you should do instead is fire up top, htop, ps, or whatever you want to use to get the ID of the problematic process, and do a cat /proc/{process_id}/limits:

$ cat /proc/1882/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             15922                15922                processes
Max open files            4096                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       15922                15922                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

Eek! Our Elasticsearch process has a max open file limit of 4096, which is way less than we intended! Lucky for us, /proc/{process_id}/fd is a directory that holds an entry for each file descriptor the process has open, so it’s pretty easy to count how close we are to reaching the limit:

$ sudo ls /proc/1882/fd | wc -l
4096

Welp, at least that explains why we’re seeing all those errors in the log. For the record, it took us 79 Elasticsearch indices to hit the 4096 open file limit. Oh well, let’s move on to actually fixing this.
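
If you want to keep an eye on how quickly that count grows, something along these lines works (1882 is the example PID from above):

# Re-count the open descriptors every 5 seconds
sudo watch -n 5 'ls /proc/1882/fd | wc -l'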

The Stuff You Came Here to Read: Raising the Limit

Sorry it took this long to get here! The ulimit -n 64000 command that’s floating around, like every easy ‘solution’, will not actually fix your problem. The issue is that the command only raises your limit for the active shell session, so it’s not permanent, and it most definitely will not affect your processes that are already running (actually, nothing will, so don’t have high expectations here).

The actual way to raise your descriptor limit consists of editing three files:

  • /etc/security/limits.conf needs to have these lines in it:
    *    soft nofile 64000
    *    hard nofile 64000
    root soft nofile 64000
    root hard nofile 64000
    

    The asterisk at the beginning of the first two lines means ‘apply this rule to all users except root’, and you can probably guess that the last two lines set the limit only for the root user. The number at the end is, of course, the new limit you’re setting; 64000 is a pretty safe number to use.

  • /etc/pam.d/common-session needs to have this line in it:
    session required pam_limits.so
    
  • /etc/pam.d/common-session-noninteractive also needs to have this line in it:
    session required pam_limits.so
    

I never got around to digging into the details, but pam_limits.so is the PAM module that applies the limits file you edited above at the start of each session; these two lines make sure it runs for both interactive and non-interactive sessions.
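
A quick way to sanity-check these edits (the paths are the Ubuntu defaults used above):

# Confirm pam_limits is enabled for both session types
grep pam_limits /etc/pam.d/common-session /etc/pam.d/common-session-noninteractive

# After a fresh login, confirm the limits for the user your service runs as
# ('elasticsearch' here is just an example user name)
sudo su - elasticsearch -s /bin/bash -c 'ulimit -Sn; ulimit -Hn'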

So, you did it, great job! Just reboot the machine (yup, sadly, you need to) and your limits should reflect your changes:

$ ulimit -n
64000
$ ulimit -Hn
64000
$ ulimit -Sn
64000

Whee! And to check your problematic process again:

$ cat /proc/1860/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             15922                15922                processes
Max open files            4096                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       15922                15922                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

Wait, what? That still says 4096! There must be something we missed (unless you’re seeing 64000, in which case you can pat yourself on the back and close this tab).

Finding That One Last Thing You Still Need to Do

So you stitched together and followed all the steps you found across three different blogs and four Stack Overflow questions, and it still won’t work. Huh?

The thing most resources neglect to emphasize is that your limits can easily be overridden by whatever is responsible for launching your processes. If ulimit -n (run as the correct user) gives you the number you just set, but cat /proc/{process_id}/limits still prints the low number, you almost certainly have a process manager, an init script, or something similar messing your limits up. Also keep in mind that processes inherit the limits of their parent process. This last step (if necessary) is going to vary a lot based on your stack and configuration, so the best I can do is give you an example of how our Supervisor setup had to be configured to fix the number of file descriptors.

Supervisor has a config variable that sets the file descriptor limit of its main process. This setting is in turn inherited by any and all processes it launches. What’s worse is that the default for this setting is 4096, which is no good when we want 64000 instead. To override this default, you can add the following line to /etc/supervisor/supervisord.conf, in the [supervisord] section:

minfds=64000

Pretty easy to fix, but really hard to find if you don’t know about this inheritance rule, or the fact that Supervisor by default ignores the OS-level limit configuration. At this point, all you have to do is restart supervisord, in my case with sudo service supervisor restart. This should automatically restart your problematic processes as well, with your newly set limit:

$ cat /proc/1954/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             15922                15922                processes
Max open files            64000                64000                files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       15922                15922                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

Neat! You got that thing fixed. Now go celebrate by setting this up on your other servers, too. And while you’re at it, you can tack on some monitoring as well.
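
If you want something crude for the monitoring part, a snippet along these lines can go into cron or a check script (1954 is the example PID from above; plug in your own process lookup):

# Report fd usage versus the soft limit for a given process
pid=1954
used=$(sudo ls "/proc/$pid/fd" | wc -l)
limit=$(sudo awk '/Max open files/ {print $4}' "/proc/$pid/limits")
echo "open files: $used / $limit"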

Thanks: https://underyx.me/2015/05/18/raising-the-maximum-number-of-file-descriptors

Remove unused Linux Kernels

Removing unused Ubuntu Linux Kernels.

When we install updates on Ubuntu Linux, the system occasionally warns us to free up kernel space. As updates are regularly downloaded and installed, the old kernel versions stick around alongside their newer counterparts. The old kernels can be purged by issuing the following command.

The command cleans up all unused kernels on Ubuntu Linux, leaving the currently running kernel untouched.

$ dpkg -l 'linux-*' | sed '/^ii/!d;/'$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge
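
Before running it, double-check which kernel you are currently booted into, since that is the one the filter above keeps:

$ uname -r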

Remote Debugging Play Framework

At times it is necessary to step through the code, and debugging is easy with any IDE. For a Play Framework app, remote debugging is possible by attaching to the running process and placing the appropriate breakpoints, which are hit once the debugger is attached.

Start the sbt shell.

Run the command clean compile test.

This starts a JVM process whose debug port shows up in the sbt-shell window; the PID can then be traced from that port:

opt/jdk1.8.0_181/bin/java -agentlib:jdwp=transport=dt_socket,address=localhost:35226,suspend=n,server=y -Xdebug -server -Xmx1536M -Didea.managed=true -Dfile.encoding=UTF-8 -Didea.runid=2018.2 -jar /home/sufia/.IdeaIC2018.2/config/plugins/Scala/launcher/sbt-launch.jar idea-shell
Listening for transport dt_socket at address: 35226

[sufia@sufia informis-server]$ lsof -i:35226
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 5981 sufia 4u IPv4 668592 0t0 TCP localhost:35226 (LISTEN)

In IntelliJ -> Run -> Attach to Process, select the PID: 5981

Your breakpoints will now be hit and you can step through the code.
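
If the JDWP agent line does not show up for you, one way to force it is to start sbt with an explicit debug agent yourself (a sketch, assuming the standard sbt launcher script; port 5005 is an arbitrary choice):

# Start sbt with a JDWP agent so any IDE can attach to port 5005
SBT_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005" sbt run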

Upgrade Ubuntu from 16.04 to 18

If you are on Ubuntu 16.04 and need to move to 18.04, the following steps will help you get there. The upgrade is large, and you will be asked a few questions while it is in progress.
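
Before starting, it is worth confirming which release you are actually on:

$ lsb_release -a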

First, update the current distribution with the following commands:

$ sudo apt update

$ sudo apt upgrade

This makes sure your Ubuntu is up to date. Next, upgrade the distribution:

$ sudo apt dist-upgrade

This handles changing software dependencies with new versions of packages.

I then follow this up with:

$ sudo apt-get autoremove

This removes dependencies left behind by uninstalled applications. You can do the same thing from the GUI desktop with the utility application Bleachbit. After that, it’s time to get things ready for the big upgrade with:

$ sudo apt install update-manager-core

Finally:

$ sudo do-release-upgrade

This will start upgrading your system to 18.04. Along the way, Ubuntu will ask you several questions about how to handle the upgrade.

Guice Dependency Injection

Examples

https://dzone.com/articles/about-dependency-injection

https://www.tgunkel.de/wiki/IT/Guice

Assisted Annotation

https://dzone.com/articles/advanced-dependency-injection

https://groups.google.com/forum/#!topic/google-guice/di5T1JthGPE

The 92% chunk asset optimisation problem nodejs Angular5

At times, running an Angular application in a fresh environment can be daunting. We may end up with errors that have nothing to do with our own code. One such case is the 92% chunk asset optimisation problem when running a build of the Angular application.

On AWS we hooked up an EC2 instance with Ubuntu 16.04; everything works fine until you run the command npm run build.

The build gets stuck at 92% chunk asset optimisation and the dist directory is not created. We tend to go looking for answers in the development stack, but at times the cause is the operating system’s swap space.

If the swap space on the OS is short or not available, this error shows up. To get around it, follow the instructions below to check and allocate swap space on Ubuntu. Hopefully this helps you get past the problem.

One of the easiest ways of increasing the responsiveness of your server and guarding against out-of-memory errors in your applications is to add some swap space. Swap is an area on a hard drive that has been designated as a place where the operating system can temporarily store data that it can no longer hold in RAM.

Basically, this gives you the ability to increase the amount of information that your server can keep in its working “memory”, with some caveats. The space on the hard drive will be used mainly when space in RAM is no longer sufficient for data.

The information written to disk will be slower than information kept in RAM, but the operating system will prefer to keep running application data in memory and use swap for the older data. Overall, having swap space as a fall back for when your system’s RAM is depleted is a good safety net.

We can check whether the system already has some swap configured by typing:

sudo swapon -s

Another way to see swap (and memory) usage is:

free -m

Before we do this, we should be aware of our current disk usage. We can get this information by typing:

df -h

Although there are many opinions about the appropriate size of a swap space, it really depends on your personal preferences and your application requirements. Generally, an amount equal to or double the amount of RAM on your system is a good starting point.

Since my system has 4 Gigabytes of RAM, and doubling that would take a significant chunk of my disk space that I’m not willing to part with, I will create a swap space of 4 Gigabytes to match my system’s RAM.

Create the swap file:
sudo fallocate -l 4G /swapfile
Check that the swap file was created and the space was reserved:
ls -lh /swapfile
Lock the file down so that only root can read and write it:
sudo chmod 600 /swapfile
Verify the permissions are set correctly:
ls -lh /swapfile
Mark the file as swap space:
sudo mkswap /swapfile
Now that our file is more secure, we can tell our system to set up the swap space by typing:
sudo swapon /swapfile

We can verify that the procedure was successful by checking whether our system reports swap space now:

sudo swapon -s
Filename                Type        Size    Used    Priority
/swapfile               file        4194300 0       -1

We have a new swap file here. We can use the free utility again to corroborate our findings:

free -m
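
Note that swapon only enables the swap file for the current boot. To make it permanent across reboots, the usual step is to add it to /etc/fstab (the path matches the /swapfile created above):

echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab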

https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04

Setting up Angular Environment Using NVM

Requirements:
1. nodejs
2. @angular/cli

Procedure:
1. Install NVM (Node Version Manager):
This is by far the easiest way to install node.
Run the command:
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.6/install.sh | bash

This installs NVM; use the command 'nvm -v' to check that it is installed.

Note: Re-open the terminal after installing nvm (if 'nvm -v' doesn't work).

2. Run the command:
nvm install node

This installs the latest version of node.

To install a specific version of node, use:
nvm install <version>
e.g.:
nvm install 6.11.5
(installs version 6.11.5)

3. Install the Angular CLI (Command Line Interface):
Run:
npm install -g @angular/cli

To check, run:
ng

Install pm2 (for running on a server; skip this step if you are doing a local setup):

npm install pm2 -g
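
As a rough usage sketch (the app name here is hypothetical, and this assumes your package.json defines a start script):

pm2 start npm --name my-angular-app -- run start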

And you are good to go.

git fails with: Failed to connect to xxx-xxx-xx-xx port xxxx: Connection refused


user@Sys154:~/projects/hipl$ git clone https://gitlab.com/test/test-server.git
Cloning into 'test-server'...
fatal: unable to access 'https://gitlab.com/test/test-server.git/': Failed to connect to 163.172.110.218 port 2465: Connection refused

This happens because a proxy is configured in git.

Since it’s an https proxy (and not http), setting git config http.proxy or git config --global http.proxy won’t help.

1 : take a look at your git configuration

git config --global -l

If you have nothing related to https proxy like https_proxy=... the problem is not here.

If you have something related to an https proxy, remove it from the file ~/.gitconfig and try again, as shown below.
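
For example, a proxy entry that was set globally through git config can usually be removed like this:

# Remove a globally configured https proxy (and the http one, if present)
git config --global --unset https.proxy
git config --global --unset http.proxy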

2 : if it still doesn’t work, unset environment variables

Check your environment variables :

env|grep -i proxy  

You should have one or several lines with https_proxy=...

Unset them one by one with unset https_proxy (or unset HTTPS_PROXY, depending on the name of the variable).

3 : check environment variables again

env|grep -i proxy

If it shows nothing you should be good.

Note: This solution applies to both http and https proxy problems; just the variable name changes from https to http.