TL;DR: Can't find the core dump even after setting ulimit and looking into apport. Sick of working so hard to get a single backtrace. Questions at the bottom.

I'm having a little nightmare here. I'm currently doing some C coding, which in my case always means a metric ton of segfaults. Most of the time I'm able to reproduce the bug with little to no problem, but today I hit a wall.

My code produces segfaults inconsistently, and I need that core dump the shell's "(core dumped)" message keeps talking about.

So I go on a hunt for the core dump of my precious little a.out. And that is when I start to pull my hair out.

My intuition told me that core dump files should end up somewhere in the working directory - which obviously isn't the case. After reading this, I happily typed:

ulimit -c 750000

And... nothing. The output of my program told me that it did dump core - but I can't find the file in the cwd. So after reading this I learnt that I'm supposed to do things to apport and core_pattern.
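
Two quick checks that would have saved me some hair - what the shell will allow, and where the kernel actually sends the dump (the apport path below is just what Ubuntu typically installs, so treat it as illustrative, not my exact output):

ulimit -c                           # core size limit for this shell; 0 disables core files
cat /proc/sys/kernel/core_pattern   # on Ubuntu this is usually a pipe like |/usr/share/apport/apport ...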

Changing core_pattern seems a bit much for getting one core dump. I really don't want to mess with it, because I know I will forget to change it back later - and I tend to mess these things up really badly.

Apport has this magical property of choosing which core dumps are valuable and which are not. Its logs told me...

ERROR: apport (pid 7306) Sun Jan  3 14:42:12 2016: executable does not belong to a package, ignoring

...that my program isn't good enough for it.
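
(Side note I picked up along the way: if you do want apport out of the picture for a moment, stopping the service is the usual Ubuntu move - as far as I can tell it puts a plain "core" pattern back until the service starts again, but verify on your own box:)

sudo service apport stop
cat /proc/sys/kernel/core_pattern   # should no longer pipe into apport
# ...reproduce the segfault, grab the core...
sudo service apport start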


  1. Where is this core dump file?
  2. Is there a way to get a core dump a single time, manually, without having to set everything up? I rarely need them as files per se; GDB alone is enough most of the time. Something like let_me_look_at_the_core_dump <program name> would be great.

I'm already balding a little, so any help would be appreciated.

  • Try askubuntu.com/questions/246972/…?
    – lreeder
    Commented Jan 3, 2016 at 14:40
  • did you try find / -name 'core*' ? (Go out for a coffee to clear your head, as it will take a while to run). Feel your pain, good luck!
    – shellter
    Commented Jan 3, 2016 at 14:47
  • Why not set core size to unlimited? Commented Jan 3, 2016 at 14:54
  • @JonathanLeffler I'm a control freak. Joking aside, I don't think it would be a good idea to give unlimited SSD space to something that, with my tendency to produce core dumps, will write one every 10 seconds. It just makes me feel bad. Tried it though - and it refuses to make me happier than I was. Commented Jan 3, 2016 at 14:59
  • If you still don't get a core dump, I am not sure what's up. There was a chance your core file was too big for the finite limit. Next step: I suggest running the program from the debugger - no breakpoints or anything, simply load and run (rough sketch below). When it crashes, the debugger will intervene without the need for a post-mortem core dump. Do you have any idea why your programs routinely dump core? Mismanaged memory allocation? Not checking for error returns from system calls? Commented Jan 3, 2016 at 15:05
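
For reference, the debugger-only route from that last comment looks roughly like this (crash.c and some_ptr are made-up names; any binary built with -g behaves the same way):

gcc -g -O0 crash.c -o a.out    # build with debug info so the backtrace is readable
gdb -ex run ./a.out            # load and run; no core file is ever written
# when it segfaults, GDB stops at the faulting line:
#   (gdb) bt              show the backtrace
#   (gdb) frame 1         pick a frame
#   (gdb) print some_ptr  inspect a variable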

1 Answer


So, today I learnt:

  • ulimit resets after reopening the shell.
  • I made a big mistake in my .zshrc - zsh nested and reopened itself after typing some commands.

After fiddling a bit with this I also found a solution to the second problem - a small shell script:

#!/bin/sh
ulimit -c 750000     # allow core dumps, but only for this script's shell
./a.out              # run the crashing program
gdb ./a.out ./core   # open the binary with the fresh core file (assumes core_pattern is "core")
ulimit -c 0          # not strictly needed - the limit dies with the script anyway
echo "profit"
