Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

An in-place c++filt ?

Posted by peeterjoot on May 26, 2010

A filter script like c++filt can be a bit irritating sometimes. Imagine that you want to run something like the following:

$ c++filt < v > v

The effect of this is to completely clobber the input file, not to alter it in place. You may think that something like the following might work, since the read is done first by the cat program:

$ cat v | c++filt > v

but this doesn’t work either: one is again left with a zero-sized output file instead of the filtered output. I’ve run stuff like the following a number of times:
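The underlying reason both attempts fail is that the shell opens (and truncates) the redirect target before any command in the pipeline runs. A quick way to see this in a POSIX shell, with a stand-in command that writes nothing at all:

```shell
# The shell processes the > redirect (open with O_TRUNC) before the
# command runs, so even a command that never writes leaves the file empty.
echo hello > t
true > t        # 'true' writes nothing, but the shell has already truncated t
wc -c < t       # the file is now 0 bytes
rm -f t
```

With `c++filt < v > v` the same thing happens: by the time c++filt reads from its stdin, the `> v` redirect has already truncated the file to zero length.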

$ for i in *some list of files* ; do c++filt < $i > $i.tmp$$ ; mv $i.tmp$$ $i ; done

and have often wondered if there’s an easier way. One option would be to put something like this in a script and avoid re-creating that command line every time. I tried this in perl, making it a stdin/stdout filter by default, and a file-modifying helper when files are listed explicitly (not really a filter anymore, but often how I’d like to be able to invoke c++filt). Here’s that beastie:


use warnings ;
use strict ;

# slurp whole file into a single variable
undef( $/ ) ; #slurp mode

if ( scalar(@ARGV) )
{
   foreach (@ARGV)
   {
      my $cmd = "cat $_ | c++filt |" ;

      open( my $fhIn, $cmd ) or die "pipe open '$cmd' failed\n" ;

      my $file_contents = ( <$fhIn> ) ;

      close $fhIn or die "read or pipe close of '$cmd' failed\n" ;

      open( my $fhOut, ">$_" ) or die "open of '$_' for write failed\n" ;

      print $fhOut $file_contents ;

      close $fhOut or die "close or write to '$_' failed\n" ;
   }
}
else
{
   my $file_contents = ( <> ) ;

   print $file_contents ;
}

This also works, but is clunkier than I expected. If anybody knows of some way to use or abuse the in-place filtering capability of perl (ie: perl -p -i) to do something like this, or some other clever way to do it, I’d be curious to hear it.
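For comparison, here’s a shell-only sketch of the same temp-file dance wrapped in a helper script. This is only a sketch: the script name and the FILTER variable are made up for illustration, and it assumes a POSIX shell with mktemp. (If moreutils happens to be installed, `c++filt < v | sponge v` does the same job in one line, since sponge buffers all of stdin before opening its output file.)

```shell
#!/bin/sh
# Hypothetical helper: run a filter over each named file, replacing it
# in place only if the filter succeeds. FILTER defaults to c++filt.
FILTER="${FILTER:-c++filt}"
for f in "$@" ; do
    tmp=$(mktemp "$f.XXXXXX") || exit 1
    if $FILTER < "$f" > "$tmp" ; then
        mv "$tmp" "$f"       # rename, atomic on the same filesystem
    else
        rm -f "$tmp"         # keep the original intact on failure
        exit 1
    fi
done
```

Because the filter writes to a fresh temp file and the original is only replaced by the final mv, a failed or interrupted run never clobbers the input the way `c++filt < v > v` does.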

10 Responses to “An in-place c++filt ?”

  1. MattW said

    Hi Peeter,

    cat v | c++filt | tee v 1>/dev/null


    • peeterjoot said

Interesting, that does appear to work. Why is that? There’s also a subtlety here, since a very small variation:

      c++filt < t | tee t

      defeats it (this produces the usual empty output).

      • MattW said

c++filt < t > t

It’s the > t redirection that’s causing the problem in your first examples; the shell must open and truncate the file first, which then leaves it empty for input to the program.

  2. MattW said

    Let me try that again.

    Your example seems to work for me using bash/linux:

    $ echo "f__1XFi" > t
    $ c++filt < t | tee t
    $ cat t

I’d expect that to work, since it’s the > t redirection that was probably causing the problem initially, as the shell would be opening the FD and truncating the file before opening it to feed the program.
    • peeterjoot said

      that example doesn’t appear to work for me:

      $  echo $SHELL
      $  bash --version
      GNU bash, version 3.1.17(1)-release (x86_64-suse-linux)
      Copyright (C) 2005 Free Software Foundation, Inc.
      $  echo "f__1XFi" > t
      $  c++filt < t | tee t
      $  echo "f__1XFi" > t
      $  cat t | c++filt | tee t

      In fact, for a larger file, your first example doesn’t either

      $ cp alibctcsqe.esym t
      $ cat t | c++filt | tee t

It seems like one is vulnerable to the random ordering of the reads and writes in the pipeline, and injecting the tee command only sometimes makes that order come out right.

      • MattW said

        Odd, I can’t get it to fail for me… even with large symbol files (libsqe.esym). (using ksh for those on linux).

It sounds like pipes are buffered.

What if instead of the “cat t” at the beginning, you write a perl script that just opens, reads, and then closes the file, then outputs the contents to STDOUT? And at the end of the pipe, a script that reads from STDIN until EOF, then opens and writes to the file. By that time, because the first script has already closed the file, there shouldn’t be any contention. Just a guess.

  3. peeterjoot said

Nice idea Matt. It has the sort of elegance that seemed like it ought to be possible, but didn’t occur to me. However, at least with bash on linux, it doesn’t appear to work:

    use warnings ;
    use strict ;
    # slurp whole file into a single variable
    undef( $/ ) ; #slurp mode
    my $file_contents = ( <> ) ;
    print $file_contents ;


    $ cat h
    cp alibctcsqe.esym t
    slurp t | c++filt > t
    wc -l t
    $ ./h
    0 t

    It looks like the output file gets opened before this slurping script gets a chance at it.

    • MattW said

      Hey Peeter,

      I also meant to have something at the end, the redirect will cause the shell to truncate the file before slurp has a chance to read it. I’m pretty sure the shell opens all file descriptors for redirects before starting any commands.

      If you have another slurp like script like:

      use warnings ;
      use strict ;
      my $file = shift ;
      # slurp whole file into a single variable
      undef( $/ ) ; # slurp mode
my $file_contents = ( <STDIN> ) ;
open ( FD, ">$file" ) ;
print FD $file_contents ;
close ( FD ) ;

      I tried on a 10K line listing of the simple f__1XFi symbol running:

      slurp t | c++filt | burp t

      and it seemed to work. (where burp is the script above).

      I’m thinking you can expand on slurp to handle writing the file out like burp, and could use it like:

      slurp 't'
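The slurp/burp pattern above can be condensed into a single pipeline with stand-in commands (here tr replaces c++filt, and inline shell replaces the two perl scripts) to see why there’s no contention: the last stage doesn’t open the file for writing until it has read EOF from the pipe, by which point the first stage has long since finished reading the file.

```shell
echo "f__1XFi" > t
# "slurp": read the whole file to stdout; the shell never redirects onto t here
cat t | tr a-z A-Z | {
    out=$(cat)                # "burp": buffer all of stdin first...
    printf '%s\n' "$out" > t  # ...then open and truncate t, after the readers are done
}
cat t                         # t now holds F__1XFI
```

The redirect onto t is inside the last pipeline stage and only takes effect when printf runs, which is strictly after the upstream commands have closed their ends of the pipe.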
