"Out of memory" in Linux

Hi All!

Sorry that I'm not writing in German, but I think this is better for everyone :slight_smile:

I’m using PostgreSQL 8.3 on a dedicated 64-bit server with 4 GB RAM, running Debian.
I’d like to build/index a huge database (OpenStreetMap). With the default configuration it would take months, so I’ve been trying to optimize it.
The progress is much faster now, but the kernel keeps killing the postgres process due to “out of memory”.

What am I doing wrong? Any help or suggestion would be welcome!


The changes to the default configuration are the following:

PostgreSQL:

107,108c107
< shared_buffers = 800MB
---
> shared_buffers = 32MB
115,116c114,115
< work_mem = 128MB
< maintenance_work_mem = 128MB
---
> #work_mem = 1MB
> #maintenance_work_mem = 16MB
121c120
< max_fsm_pages = 411000
---
> max_fsm_pages = 204800
153c152
< fsync = off
---
> #fsync = on
163c162
< wal_buffers = 6MB
---
> #wal_buffers = 64kB
172c171
< checkpoint_segments = 15
---
> #checkpoint_segments = 3
174c173
< checkpoint_completion_target = 0.8
---
> #checkpoint_completion_target = 0.5
205c204
< random_page_cost = 2.0
---
> #random_page_cost = 4.0
209c208
< effective_cache_size = 3GB
---
> #effective_cache_size = 128MB
383c381
< autovacuum = off
---
> #autovacuum = on

Debian:
I’ve set SHMMAX to 1 GB = 1024 * 1024 * 1024 = 1073741824, and SHMALL to SHMMAX/4096 = 262144

After a few tries I’ve set strict overcommit mode (sysctl -w vm.overcommit_memory=2), as suggested in the “Managing Kernel Resources” part of the pg documentation.
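For reference, a minimal sketch of how these kernel settings can be made persistent across reboots (same values as above; run as root and adjust for your system):

# append the settings to /etc/sysctl.conf and reload them
cat >> /etc/sysctl.conf <<'EOF'
kernel.shmmax = 1073741824
kernel.shmall = 262144
vm.overcommit_memory = 2
EOF
sysctl -p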

Thanks in advance!
Gergő

Which process is killed by the kernel?

Oh, and 4 GB RAM for OpenStreetMap seems … a bit low.

Thanks for your quick reply!

The postgres process was killed by the oom-killer (while running gazetteer-loaddata.sql, if you are familiar with OSM things).

I forgot to mention that since the overcommit mode was set to strict, the oom-killer no longer kills the postgres process; instead PostgreSQL itself gives “out of memory” error messages (and the script stops running).
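(For reference: under strict overcommit the kernel refuses new allocations once Committed_AS reaches CommitLimit, which is roughly swap plus RAM times overcommit_ratio percent. Both counters can be watched while the script runs:)

# how close the system is to the strict-overcommit limit
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
# the ratio used to compute CommitLimit
cat /proc/sys/vm/overcommit_ratio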

Some things which are not totally related, but which are questionable:

  • Why is work_mem set to 128 MB?
  • Why is fsync off?
  • max_fsm_pages is probably too low.
  • effective_cache_size is probably too high.
  • autovacuum should be on.
  • More checkpoint_segments would be nice. Please monitor the pg_xlog directory to see how many logfiles PostgreSQL creates (a sample command follows below).
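For example, counting the WAL segment files can be done like this (the data directory is the stock Debian location for an 8.3 cluster and may differ on your system):

ls /var/lib/postgresql/8.3/main/pg_xlog | wc -l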


    How big is your SQL file? Is it all loaded in one big transaction?

Yes, good question. Consider: work_mem is allocated per client connection and, for complex queries, possibly more than once per query. In other words: with several connections and complex queries you run out of memory very fast …
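To make that concrete, a rough worst-case calculation with made-up numbers (20 connections, 3 sort/hash operations per query, 128MB work_mem):

# each sort/hash node may allocate up to work_mem in every backend
echo "$((20 * 3 * 128)) MB"   # 7680 MB of potential work_mem on a 4 GB machine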

Regards, Andreas (and forgive my bad English…)

Maybe this link will help you. OK, it is in German, but maybe it will work or you can get some hints. The main problem is that you do not have enough memory to run what you want as fast as you want, so all the playing around with kernel parameters will not solve this problem. Another question is: why do you need such a large osm-database? For testing, maybe a subset of the data will do the same (and you can specify your future hardware requirements :wink: ), and for production use the server is for sure too small. If you cannot get your boss to buy you a new server, maybe some partitioning of the data may be helpful, but that is also not a fast solution.

CU Uwe

Thank you for your suggestions! That’s exactly what I need, because my config settings were ad hoc; I’m not familiar with PostgreSQL.

This particular SQL file is small. The query inserts records from one table into another; a before-insert trigger does the work of making the data ready for mapping purposes (using PostGIS). The original table is 13 GB with 58M records (the Europe extract of the OSM data).

Turning off fsync and autovacuum for the initial load was suggested by the OSM folks.
I had misunderstood effective_cache_size: I thought it reserved hard drive space…

So I’m running the script now with the following changes:
work_mem=32MB
maintenance_work_mem=64MB
max_fsm_pages=611000
checkpoint_segments=20
effective_cache_size=1GB
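(Applied roughly like this on Debian; the cluster name “main” and the config path are the Debian defaults and may differ:)

# edit /etc/postgresql/8.3/main/postgresql.conf with the values above, then:
pg_ctlcluster 8.3 main restart
psql -c "SELECT name, setting FROM pg_settings WHERE name IN ('work_mem', 'maintenance_work_mem', 'checkpoint_segments')"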

With the current settings about 15 MB/h is processed (and it will slow down as the new table grows), so it would take 1-2 months to finish. :S
With my very first settings it was about 25 times faster (the main differences to the current ones: 8MB wal_buffers, 128MB work_mem, 5GB effective_cache_size).
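(The arithmetic behind the 1-2 month estimate, ignoring the slowdown:)

echo "$((13 * 1024 / 15)) hours"   # 13 GB at ~15 MB/h ≈ 887 hours, about 37 days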

In the pg_xlog directory 28 files were created in the first 30 mins.

Loading 13 GB should not take a month. It should take a few hours at most.

I could even load it in a few minutes, but not on your hardware.

The system has run out of memory again, and the process has stopped with these latest settings. What do you think I should adjust?

Getting more RAM makes sense. I’m trying to borrow some big memory modules for this initial phase, hoping that I won’t need to build the database again (I’ll make a backup of it).

Really, try more RAM …

Thank you for your help! I could finally borrow a neat server machine with 24 GB RAM and a fast HDD and CPU (System x3650 M2).
The script has been running on it for 8 hours now. About 3 MB/s is processed and it is slowing down. At this performance it will need 60 more hours to finish. That is far better than 60 days! :slight_smile:

But I still want to optimize it, because this is only the first of two phases, and I may not have enough time to finish if I need to restart it for some reason.

Do you have any adjustment suggestions? Here are my settings:
107,108c107
< shared_buffers = 3GB
---
> shared_buffers = 32MB
115,116c114,115
< work_mem = 256MB
< maintenance_work_mem = 256MB
---
> #work_mem = 1MB
> #maintenance_work_mem = 16MB
121c120
< max_fsm_pages = 1592000
---
> max_fsm_pages = 204800
153c152
< fsync = off
---
> #fsync = on
163c162
< wal_buffers = 24MB
---
> #wal_buffers = 64kB
172c171
< checkpoint_segments = 20
---
> #checkpoint_segments = 3
174c173
< checkpoint_completion_target = 0.9
---
> #checkpoint_completion_target = 0.5
205c204
< random_page_cost = 1.5
---
> #random_page_cost = 4.0
209c208
< effective_cache_size = 8GB
---
> #effective_cache_size = 128MB
383c381
< autovacuum = off
---
> #autovacuum = on

Thanks in advance!

I think you can increase shared_buffers up to 6 GB; work_mem is maybe too high. Why fsync = off? It’s dangerous! Why is autovacuum off? Is it a dedicated pg server? If yes, you can increase effective_cache_size up to 20 GB. Why random_page_cost = 1.5? I think it’s too low. Which version are you running? I’m asking because of max_fsm_pages; it’s deprecated in newer versions.
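As a sketch, those recommendations would look roughly like this in postgresql.conf on the dedicated 24 GB machine (the numbers are the suggestions above, not tested values; work_mem is left open since no value was suggested):

shared_buffers = 6GB              # up from 3GB
#work_mem = ...                   # 256MB is maybe too high
effective_cache_size = 20GB       # dedicated server with 24 GB RAM
random_page_cost = 4.0            # 1.5 is probably too low; 4.0 is the default
fsync = on                        # dangerous to leave off
autovacuum = on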

Sorry for my bad English, it’s not my native language.

Andreas

Thanks for your input Andreas. I’ll apply your suggestions when the current run ends.

The process runs on a dedicated pg server with version 8.3.12. This is the version best supported by the mapping application I want to use. As for fsync and autovacuum, I’m aware that turning them off is dangerous, but they are disabled only for this initial phase to make the process even faster.

The kernel (oom-killer) killed the postgres process again :frowning: . I don’t understand what I am doing wrong.
It’s a default Debian and PostgreSQL installation; nothing was changed besides what I’ve mentioned (in this case shmmax was set to 8 GB, and shmall accordingly).

Now I’m setting the overcommit mode to strict and applying the new settings to see what happens…

Here is the Debian message log of this incident. Most of it is Greek to me, but maybe somebody among you can draw conclusions from it:

Feb 17 12:41:57 psql-host kernel: [249339.209591] sshd invoked oom-killer: gfp_mask=0x1201d2, order=0, oomkilladj=0
Feb 17 12:41:57 psql-host kernel: [249339.209596] Pid: 4315, comm: sshd Not tainted 2.6.26-2-amd64 #1
Feb 17 12:41:57 psql-host kernel: [249339.209597]
Feb 17 12:41:57 psql-host kernel: [249339.209598] Call Trace:
Feb 17 12:41:57 psql-host kernel: [249339.209612] [] oom_kill_process+0x57/0x1dc
Feb 17 12:41:57 psql-host kernel: [249339.209616] [] __capable+0x9/0x1c
Feb 17 12:41:57 psql-host kernel: [249339.209618] [] badness+0x188/0x1c7
Feb 17 12:41:57 psql-host kernel: [249339.209622] [] out_of_memory+0x1f5/0x28e
Feb 17 12:41:57 psql-host kernel: [249339.209627] [] __alloc_pages_internal+0x31d/0x3bf
Feb 17 12:41:57 psql-host kernel: [249339.209633] [] __do_page_cache_readahead+0x79/0x183
Feb 17 12:41:57 psql-host kernel: [249339.209637] [] filemap_fault+0x15d/0x33c
Feb 17 12:41:57 psql-host kernel: [249339.209642] [] __do_fault+0x50/0x3e8
Feb 17 12:41:57 psql-host kernel: [249339.209647] [] handle_mm_fault+0x452/0x8de
Feb 17 12:41:57 psql-host kernel: [249339.209651] [] autoremove_wake_function+0x0/0x2e
Feb 17 12:41:57 psql-host kernel: [249339.209655] [] current_fs_time+0x1e/0x24
Feb 17 12:41:57 psql-host kernel: [249339.209659] [] do_page_fault+0x5d8/0x9c8
Feb 17 12:41:57 psql-host kernel: [249339.209663] [] vfs_read+0x11e/0x152
Feb 17 12:41:57 psql-host kernel: [249339.209665] [] recalc_sigpending+0xe/0x38
Feb 17 12:41:57 psql-host kernel: [249339.209669] [] error_exit+0x0/0x60
Feb 17 12:41:57 psql-host kernel: [249339.209674]
Feb 17 12:41:57 psql-host kernel: [249339.209675] Mem-info:
Feb 17 12:41:57 psql-host kernel: [249339.209676] Node 0 DMA per-cpu:
Feb 17 12:41:57 psql-host kernel: [249339.209679] CPU 0: hi: 0, btch: 1 usd: 0
Feb 17 12:41:57 psql-host kernel: [249339.209681] CPU 1: hi: 0, btch: 1 usd: 0
Feb 17 12:41:57 psql-host kernel: [249339.209682] CPU 2: hi: 0, btch: 1 usd: 0
Feb 17 12:41:57 psql-host kernel: [249339.209684] CPU 3: hi: 0, btch: 1 usd: 0
Feb 17 12:41:57 psql-host kernel: [249339.209685] Node 0 DMA32 per-cpu:
Feb 17 12:41:57 psql-host kernel: [249339.209687] CPU 0: hi: 186, btch: 31 usd: 158
Feb 17 12:41:57 psql-host kernel: [249339.209689] CPU 1: hi: 186, btch: 31 usd: 175
Feb 17 12:41:57 psql-host kernel: [249339.209691] CPU 2: hi: 186, btch: 31 usd: 124
Feb 17 12:41:57 psql-host kernel: [249339.209692] CPU 3: hi: 186, btch: 31 usd: 123
Feb 17 12:41:57 psql-host kernel: [249339.209694] Node 0 Normal per-cpu:
Feb 17 12:41:57 psql-host kernel: [249339.209696] CPU 0: hi: 186, btch: 31 usd: 177
Feb 17 12:41:57 psql-host kernel: [249339.209697] CPU 1: hi: 186, btch: 31 usd: 149
Feb 17 12:41:57 psql-host kernel: [249339.209699] CPU 2: hi: 186, btch: 31 usd: 185
Feb 17 12:41:57 psql-host kernel: [249339.209700] CPU 3: hi: 186, btch: 31 usd: 167
Feb 17 12:41:57 psql-host kernel: [249339.209703] Active:5651793 inactive:961599 dirty:0 writeback:0 unstable:0
Feb 17 12:41:57 psql-host kernel: [249339.209705] free:32269 slab:8346 mapped:59 pagetables:23779 bounce:0
Feb 17 12:41:57 psql-host kernel: [249339.209706] Node 0 DMA free:11600kB min:8kB low:8kB high:12kB active:0kB inactive:0kB present:10660kB pages_scanned:0 all_unreclaimable? yes
Feb 17 12:41:57 psql-host kernel: [249339.209710] lowmem_reserve[]: 0 1965 26205 26205
Feb 17 12:41:57 psql-host kernel: [249339.209713] Node 0 DMA32 free:98484kB min:1552kB low:1940kB high:2328kB active:829744kB inactive:693388kB present:2012496kB pages_scanned:2456458 all_unreclaimable? yes
Feb 17 12:41:57 psql-host kernel: [249339.209717] lowmem_reserve[]: 0 0 24240 24240
Feb 17 12:41:57 psql-host kernel: [249339.209720] Node 0 Normal free:18992kB min:19160kB low:23948kB high:28740kB active:21777172kB inactive:3153264kB present:24821760kB pages_scanned:45436605 all_unreclaimable? yes
Feb 17 12:41:57 psql-host kernel: [249339.209724] lowmem_reserve[]: 0 0 0 0
Feb 17 12:41:57 psql-host kernel: [249339.209726] Node 0 DMA: 2*4kB 3*8kB 3*16kB 4*32kB 2*64kB 0*128kB 2*256kB 1*512kB 2*1024kB 0*2048kB 2*4096kB = 11600kB
Feb 17 12:41:57 psql-host kernel: [249339.209733] Node 0 DMA32: 45*4kB 54*8kB 25*16kB 18*32kB 26*64kB 10*128kB 13*256kB 11*512kB 11*1024kB 28*2048kB 4*4096kB = 98484kB
Feb 17 12:41:57 psql-host kernel: [249339.209740] Node 0 Normal: 85*4kB 2*8kB 1*16kB 1*32kB 0*64kB 1*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB 4*4096kB = 18964kB
Feb 17 12:41:57 psql-host kernel: [249339.209747] 695604 total pagecache pages
Feb 17 12:41:57 psql-host kernel: [249339.209748] Swap cache: add 7431531, delete 7431528, find 738559/1270117
Feb 17 12:41:57 psql-host kernel: [249339.209750] Free swap = 0kB
Feb 17 12:41:57 psql-host kernel: [249339.209751] Total swap = 11590856kB
Feb 17 12:41:57 psql-host kernel: [249339.295405] 6815744 pages of RAM
Feb 17 12:41:57 psql-host kernel: [249339.295407] 113880 reserved pages
Feb 17 12:41:57 psql-host kernel: [249339.295409] 906 pages shared
Feb 17 12:41:57 psql-host kernel: [249339.295410] 3 pages swap cached

I was thinking about it, and it occurred to me that PostGIS is not part of the default pg. Searching on Google, I found that PostGIS versions prior to 1.3.5 have some memory leak issues. As I had 1.3.3 installed, I upgraded to the latest version, hoping that it solves my case.
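(If anyone wants to check their own installation, the installed PostGIS version can be queried from within the database; the database name here is just an example:)

psql -d osm -c "SELECT postgis_version();"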

I’ll inform you as soon as I get some results.

It was not PostGIS. :frowning:

WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.

Take a look at your syslog.
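For example, with the stock Debian log locations (paths may differ on your system):

# kernel messages from the OOM killer
grep -i -E "out of memory|oom-killer" /var/log/syslog
# PostgreSQL's own log usually shows which backend failed and why
tail -n 100 /var/log/postgresql/postgresql-8.3-main.log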