How to Increase File Open Limits on Linux

Raising the Maximum Number of File Descriptors (Open Files) on Ubuntu 14.04 Trusty

As a developer, it’s frustrating to hit issues that belong to the system or network layer rather than to your code. This one was a complete nightmare to fix precisely because it had nothing to do with development. Be prepared: it can happen, and when it does, the network folks will often toss it over to the developers, who are left absolutely clueless about it.

 

Knowing you have a problem at hand is the First Step

When the server is not yet live, follow these steps

You can check the limits for your current session with the ulimit command:

$ ulimit -n
4096
$ ulimit -Hn
4096
$ ulimit -Sn
4096

A couple of things to note:

  • There are separate limits for different users, so make sure to run this as the user your process is using.
  • There’s a hard limit and a soft limit. The latter is the actual limit your processes have to obey, and the former sets the maximum the soft limit can be raised to. If you need different values for these two, you probably already know how to do that and aren’t reading this post, so just keep in mind to always modify both, and to check the soft limit.
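To see the soft/hard distinction in action, here’s a quick sketch you can run in a throwaway shell (512 is an arbitrary value below the usual hard limit):

```shell
ulimit -S -n 512   # lower only the soft limit for this shell
ulimit -Sn         # now prints 512
ulimit -Hn         # the hard limit is unchanged
# A process may raise its soft limit again, but never above the hard
# limit, and only root can raise the hard limit.
```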

So this is what you would find after 10 seconds of Googling, but keep in mind that ulimit is not guaranteed to give you the limits your processes actually have! A million things can modify a process’s limits after (or before) you initialized your shell. So what you should do instead is fire up top, htop, ps, or whatever you want to use to get the ID of the problematic process, and do a cat /proc/{process_id}/limits:

$ cat /proc/1882/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             15922                15922                processes
Max open files            4096                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       15922                15922                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

Eek! Our Elasticsearch process has a max open file limit of 4096, which is way less than we intended! Luckily for us, /proc/{process_id}/fd is a directory that holds an entry for each file the process has open, so it’s pretty easy to count how close we are to reaching the limit:

$ sudo ls /proc/1882/fd | wc -l
4096

Welp, at least that explains why we’re seeing all those errors in the log. For the record, it took us 79 Elasticsearch indices to hit the 4096 open file limit. Oh well, let’s move on to actually fixing this.
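You can turn that check into a quick one-off comparison of usage against the limit, both read straight from /proc (1882 here is just the example PID from above; substitute your own):

```shell
pid=1882
# Count open file descriptors for the process.
used=$(sudo ls "/proc/$pid/fd" | wc -l)
# Pull the soft "Max open files" value (4th column) from its limits file.
limit=$(awk '/Max open files/ {print $4}' "/proc/$pid/limits")
echo "$used / $limit file descriptors in use"
```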

The Stuff You Came Here to Read: Raising the Limit

Sorry it took this long to get here! The ulimit -n 64000 command that’s floating around as the easy ‘solution’ will not actually fix your problem. That command only raises the limit for the active shell session, so it isn’t permanent, and it most definitely will not affect processes that are already running. (On reasonably recent systems, prlimit --pid can tweak a running process’s limits, but you still want a fix that survives restarts.)

The actual way to raise your file descriptor limit consists of editing three files:

  • /etc/security/limits.conf needs to have these lines in it:
    *    soft nofile 64000
    *    hard nofile 64000
    root soft nofile 64000
    root hard nofile 64000
    

    The asterisk at the beginning of the first two lines means ‘apply this rule to all users except root’, and you can probably guess that the last two lines set the limit only for the root user. The number at the end is, of course, the new limit you’re setting; 64000 is a pretty safe number to use.

  • /etc/pam.d/common-session needs to have this line in it:
    session required pam_limits.so
    
  • /etc/pam.d/common-session-noninteractive also needs to have this line in it:
    session required pam_limits.so
    

I never got around to looking into exactly what this does, but I’d assume these two files control whether the limits file you edited above is actually read at the start of your sessions.
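A quick sanity check that the pam_limits module is referenced in both files (paths as above, stock Debian/Ubuntu layout assumed):

```shell
# -H prints the filename in front of each match.
grep -H pam_limits /etc/pam.d/common-session /etc/pam.d/common-session-noninteractive
```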

So, you did it, great job! Now reboot the machine (strictly speaking, a fresh login session is enough for PAM to apply the new limits, but a reboot also catches daemons started at boot) and your limits should reflect your changes:

$ ulimit -n
64000
$ ulimit -Hn
64000
$ ulimit -Sn
64000

Whee! And to check your problematic process again:

$ cat /proc/1860/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             15922                15922                processes
Max open files            4096                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       15922                15922                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

Wait, what? That still says 4096! There must be something we missed (unless you’re seeing 64000, in which case you can pat yourself on the back and close this tab).

Finding That One Last Thing You Still Need to Do

So you stitched together and followed all the steps from three different blogs and four Stack Overflow questions, and it still won’t work. Huh?

The thing that most resources neglect to emphasize is that your limits can easily be modified by anything responsible for launching your processes. If ulimit -n (run as the correct user) gives you the number you just set, but cat /proc/{process_id}/limits still prints the low number, you almost certainly have a process manager, an init script, or something similar clobbering your limits. Also keep in mind that processes inherit the limits of their parent process. This last step (if necessary) will vary a lot based on your stack and configuration, so the best I can do is give you an example of how our Supervisor setup had to be configured to fix the file descriptor limit.
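The inheritance rule is easy to demonstrate: lower the soft limit in a shell and every process it spawns sees the lowered value (512 is an arbitrary value below the usual hard limit):

```shell
# The inner shell inherits the limit the outer shell set.
bash -c 'ulimit -S -n 512; bash -c "ulimit -Sn"'   # prints 512
```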

Supervisor has a config variable that sets the file descriptor limit of its main process, and this setting is in turn inherited by every process it launches. What’s even worse is that the default is 4096, which is no good when we want 64000. To override this default, add the following line to /etc/supervisor/supervisord.conf, in the [supervisord] section:

minfds=64000

Pretty easy to fix, but really hard to find if you don’t know about this inheritance rule, or the fact that Supervisor ignores the OS-level limit configuration by default. At this point, all you have to do is restart supervisord, in my case with sudo service supervisor restart. This should automatically restart your problematic processes as well, with the newly set limit:

$ cat /proc/1954/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             15922                15922                processes
Max open files            64000                64000                files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       15922                15922                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

Neat! You got that thing fixed. Now go celebrate by setting this up on your other servers, too. And while you’re at it, you can tack on some monitoring as well.

Thanks: https://underyx.me/2015/05/18/raising-the-maximum-number-of-file-descriptors

Find Out Which Process Is Listening Upon a Port

If you don’t have lsof already, you can install it by becoming root and running:

root@mystery:~# apt-get install lsof

This will download and install the package for you, along with any dependencies which might be required:

Reading package lists... Done
Building dependency tree... Done
The following NEW packages will be installed:
  lsof
0 upgraded, 1 newly installed, 0 to remove and 16 not upgraded.
Need to get 339kB of archives.
After unpacking 549kB of additional disk space will be used.
Get:1 http://http.us.debian.org unstable/main lsof 4.75.dfsg.1-1 [339kB]
Fetched 339kB in 3s (90.8kB/s)
Selecting previously deselected package lsof.
(Reading database ... 69882 files and directories currently installed.)
Unpacking lsof (from .../lsof_4.75.dfsg.1-1_i386.deb) ...
Setting up lsof (4.75.dfsg.1-1) ...

Once you have the package installed, you can discover precisely which processes are bound upon particular ports.

If you have the Apache webserver running on port 80, that will provide a suitable test candidate. If not, choose another port you know is in use.

To discover the process name, ID (PID), and other details, you need to run:

lsof -i :port

So to see which process is listening upon port 80 we can run:

root@mystery:~# lsof -i :80

This gives us the following output:

COMMAND   PID     USER   FD   TYPE   DEVICE SIZE NODE NAME
apache2 10437     root    3u  IPv6 22890556       TCP *:www (LISTEN)
apache2 10438 www-data    3u  IPv6 22890556       TCP *:www (LISTEN)
apache2 10439 www-data    3u  IPv6 22890556       TCP *:www (LISTEN)
apache2 10440 www-data    3u  IPv6 22890556       TCP *:www (LISTEN)
apache2 10441 www-data    3u  IPv6 22890556       TCP *:www (LISTEN)
apache2 10442 www-data    3u  IPv6 22890556       TCP *:www (LISTEN)
apache2 25966 www-data    3u  IPv6 22890556       TCP *:www (LISTEN)
apache2 25968 www-data    3u  IPv6 22890556       TCP *:www (LISTEN)

Here you can see the command running (apache2), the username it is running as (www-data), and some other details.

Similarly we can see which process is bound to port 22:

root@mystery:~# lsof -i :22
COMMAND   PID USER   FD   TYPE   DEVICE SIZE NODE NAME
sshd     8936 root    3u  IPv6 12161280       TCP *:ssh (LISTEN)

To see all the ports open for listening upon the current host, you can use another command, netstat (contained in the net-tools package):

root@mystery:~# netstat -a |grep LISTEN |grep -v unix
tcp        0      0 *:2049                  *:*                     LISTEN     
tcp        0      0 *:743                   *:*                     LISTEN     
tcp        0      0 localhost.localdo:mysql *:*                     LISTEN     
tcp        0      0 *:5900                  *:*                     LISTEN     
tcp        0      0 localhost.locald:sunrpc *:*                     LISTEN     
tcp        0      0 *:8888                  *:*                     LISTEN     
tcp        0      0 localhost.localdom:smtp *:*                     LISTEN     
tcp6       0      0 *:www                   *:*                     LISTEN     
tcp6       0      0 *:distcc                *:*                     LISTEN     
tcp6       0      0 *:ssh                   *:*                     LISTEN

Here you can see that there are processes listening upon ports 2049, 743, 5900, and several others.

(The second grep we used above was to ignore Unix domain sockets).

If you’re curious to see which programs and services are used in those sockets you can look them up as we’ve already shown:

root@mystery:~# lsof -i :8888
COMMAND   PID    USER   FD   TYPE   DEVICE SIZE NODE NAME
gnump3d 25834 gnump3d    3u  IPv4 61035200       TCP *:8888 (LISTEN)

This tells us that the process bound to port 8888 is the gnump3d MP3 streamer.

Ports 2049 and 743 are both associated with NFS. The rest can be tracked down in a similar manner. (You’ll notice that some ports actually have their service names printed next to them, such as the smtp entry for port 25.)
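On newer distributions, netstat is deprecated in favour of ss from the iproute2 package; a roughly equivalent listing (flags per the ss manpage, not from the original walkthrough) looks like this:

```shell
ss -tln          # TCP (-t) listening (-l) sockets with numeric (-n) ports
sudo ss -tlnp    # -p adds the owning process name and PID (needs root)
```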

lsof is a very powerful tool which can be used for lots of jobs. If you’re unfamiliar with it I recommend reading the manpage via:

man lsof

If you do so, you’ll discover that the -i flag can take several different kinds of arguments, allowing you to check more than one port at a time, and to use IPv6 addresses too.
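For example (syntax per the lsof manpage; the port numbers here are arbitrary):

```shell
lsof -i :80,443       # two ports in one call (comma-separated list)
lsof -i TCP:1-1024    # a whole port range, restricted to TCP
lsof -i 6             # IPv6 sockets only
```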

It’s often used to see which files are open on mounted devices, so you can kill the owning processes and unmount the devices cleanly.
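For instance, to see what is keeping a mount busy before unmounting it (the mount point below is a made-up example):

```shell
sudo lsof /mnt/usb      # processes with open files on that filesystem
sudo lsof +D /mnt/usb   # recurse through the directory tree (slower)
```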

Install & Uninstall NVM and Node.js Globally on a Linux-Based System

Note: You should be a sudoer.

  1. Switch to the root user:
    $ sudo -s
  2. Install nvm globally, so it is accessible to all users:
    $ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.1/install.sh | NVM_DIR=/usr/local/nvm bash
  3. Install node:
    For a specific version of Node.js:
    $ nvm install <version of node>
    e.g.: nvm install 10.10.0
    Or, for the latest version of Node.js:
    $ nvm install node
  4. Create a file called nvm.sh in /etc/profile.d with the following contents:
    export NVM_DIR="/usr/local/nvm"
    [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
  5. Install the Angular CLI (Command Line Interface):
    $ npm install -g @angular/cli
  6. Install pm2 (needed for running on a server; skip this step for a local setup):
    $ npm install -g pm2
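After step 4, any new login shell should pick nvm up automatically; a quick sanity check (assuming the NVM_DIR used above) looks like this:

```shell
export NVM_DIR="/usr/local/nvm"
# Load nvm only if the install actually put it there.
if [ -s "$NVM_DIR/nvm.sh" ]; then . "$NVM_DIR/nvm.sh"; fi
command -v nvm && nvm --version   # nvm is a shell function, not a binary
node -v && npm -v                 # the currently active Node and npm
```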

To uninstall NVM, remove its directories:

rm -rf ~/.nvm ~/.npm ~/.bower
rm -rf $NVM_DIR

Upgrade or Downgrade Node

How to Upgrade (or Downgrade) Node.js using NPM

Need to update your version of Node.js? Here’s how you can upgrade or downgrade from the command line using NPM.

Upgrading to the latest stable version

This will update you to the latest available stable version:

sudo npm cache clean -f
sudo npm install -g n
sudo n stable

Upgrading to the latest LTS version

Node also offers a long-term support (LTS) version. If you need that version (or any other), simply specify the version number you want, like so:

sudo npm cache clean -f
sudo npm install -g n
sudo n 8.0.0

Checking your Node version

To see which version of Node is currently installed, simply run:

node -v

The version number displayed is the one that’s currently active on your machine.

Top UX Designers in India 2018 Reviews

Top UX Designers in India

  • Lollypop Design Studio. Crafting Sweet Experiences.
  • WinkTales. Designing Experiences, Forging Relationships.
  • YUJ Designs. Serious UX design is at the heart of what we do.
  • Divami Design Labs. UX Design and Innovation.
  • ProCreator. We Create Digital Experiences.
  • Rillusion.
  • Prismic Reflections.
  • Parallel Labs.

Update Visual Studio Code (VS Code) on Ubuntu

(1) Open the terminal (Ctrl+Alt+T).

(2) Download the latest .deb package by pasting this command into the terminal and pressing ENTER:
    wget https://vscode-update.azurewebsites.net/latest/linux-deb-x64/stable -O /tmp/code_latest_amd64.deb

(3) Install it by pasting this command into the terminal and pressing ENTER:
    sudo dpkg -i /tmp/code_latest_amd64.deb

(4) Close and reopen VS Code.

Good to go! Your Visual Studio Code is updated.
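To confirm the update took, you can check the installed version from the terminal (assuming code is on your PATH, which the .deb install sets up):

```shell
code --version   # prints the version, commit hash, and architecture
```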