[ensembl-dev] Thoughts on Speeding up the Variant Effect Predictor

Rocky Bernstein rocky.bernstein at gmail.com
Thu Dec 18 19:19:40 GMT 2014

Ok, good to know you all are looking at ways to optimize VEP run time.

To sum up, 50 minutes of elapsed time would probably drop to about 3-5
minutes with map/reduce (though with 20 or so times the number of
processors). It probably
wouldn't take that much thought.  This is an area I could explore if there
is interest.
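The chromosome-wise map step could be sketched like this. This is a toy partitioner of my own, not anything from the VEP code: the function name and file layout are invented, and the actual dispatch of chunks to workers (the map/reduce part proper) is left out.

```perl
# Partition a VCF into per-chromosome chunks that independent workers
# could then process in parallel. Assumes a well-formed, tab-delimited
# VCF; header lines (starting with '#') are copied into every chunk.
use strict;
use warnings;

sub split_vcf_by_chrom {
    my ($in_path, $out_dir) = @_;
    open my $in, '<', $in_path or die "open $in_path: $!";
    my (@header, %fh);
    while (my $line = <$in>) {
        if ($line =~ /^#/) { push @header, $line; next }
        my ($chrom) = split /\t/, $line, 2;   # first column is CHROM
        unless ($fh{$chrom}) {
            open $fh{$chrom}, '>', "$out_dir/$chrom.vcf"
                or die "open $out_dir/$chrom.vcf: $!";
            print { $fh{$chrom} } @header;    # each chunk keeps the header
        }
        print { $fh{$chrom} } $line;
    }
    close $_ for values %fh;
    return [ sort keys %fh ];                 # chromosomes seen
}
```

Each chunk would then be fed to a separate variant_effect_predictor.pl run and the outputs concatenated, which is essentially what --fork does within one machine.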

Any comments regarding recoding hot spots in C?

Other comments in line...

On Thu, Dec 18, 2014 at 11:34 AM, Will McLaren <wm2 at ebi.ac.uk> wrote:

> Hi Rocky,
> Thanks very much for the detailed feedback.
> We are currently looking into ways to further optimise VEP run time, and
> we are already considering a number of the solutions you propose, along
> with some others. We'll definitely take a look at the things we haven't
> thought of too.
> Further comments inline below.
> Regards
> Will McLaren
> Ensembl Variation
> On 18 December 2014 at 15:51, Rocky Bernstein <rocky.bernstein at gmail.com>
> wrote:
>> Running the Variant Effect Predictor on a Human Genome VCF file (130780
>> lines) with a local Fasta cache (--offline) takes about 50 minutes on a
>> quad-core Ubuntu box.
>> I could give more details, but I don't think they are that important.
>> In looking at how to speed this up, it looks like VEP goes through the
>> VCF file, which is sorted by chromosome, and processes each
>> chromosome independently. A simple and obvious way to speed this up would
>> be to do some sort of 24-way map/reduce.
>> There is of course the --fork option on the variant_effect_predictor.pl program,
>> which is roughly the same idea, but it parallelizes only across the cores
>> of a single computer rather than making use of multiple machines.
>> To pinpoint the slowness better, I used Devel::NYTProf. For those of you
>> who haven't used it recently, it now has flame graphs and it makes it very
>> easy to see what's going on.
> We regularly run Devel::NYTProf on the code, very useful it is too!
>> The first thing that came out was a slowness in code to remove carriage
>> returns and line feeds. This is in Bio::DB::Fasta::subseq:
>>      $data =~ s/\n//g;
>>      $data =~ s/\r//g;
>> Compiling the regexp, e.g:
>>      my $nl = qr/\n/;
>>      my $cr = qr/\r/;
>>      sub subseq {
>>          ....
>>         $data =~ s/$nl//g;
>>         $data =~ s/$cr//g;
>>      }
>> Speeds up the subseq method by about 15%. I can elaborate more or
>> describe the other methods I tried and how they fared, if there's interest.
>> But since this portion is really part of BioPerl and not Bio::EnsEMBL, I'll
>> try to work up a git pull request on that repository.
> Thanks, that would be useful. This regex came up in our NYTProf runs too.
>> So now I come to the meat of what I have to say. I should have put this
>> at the top -- I hope some of you are still with me.
>> The NYTProf graphs seem to say that there is a *lot* of overhead in
>> object lookup and type testing. I think some of this is already known as
>> there already are calls to "weaken" and "new_fast" object creators. And
>> there is this comment in
>>  Bio::EnsEMBL::Variation::BaseTranscriptVariation:_intron_effects:
>>     # this method is a major bottle neck in the effect calculation code so
>>     # we cache results and use local variables instead of method calls
>> where
>>     # possible to speed things up - caveat bug-fixer!
> One major avenue we will be investigating is subs such as this.
> For each TranscriptVariationAllele object created (this represents the
> overlap of a variant allele and a transcript), the VEP evaluates a number
> of predicate statements. All predicates are evaluated for all objects,
> without any filtering with prior knowledge (for example if we know a
> variant falls entirely in the coding sequence there's no point checking if
> it's intronic).
> This should be fairly low-hanging fruit to pick, so this will probably be
> one of the first optimisations to reach production code.

Ok. This too is good to know.  I can think of several ways this can be
done, so I will be eager to see how this works.

If I have this right (and I may not), the predicate tests are somewhere
under Bio::EnsEMBL::Variation::BaseVariationFeature::display_consequence, and
that constitutes less than half of the time.

An equal portion of time is in the part that formats the results for
output, Bio::EnsEMBL::Variation::Utils::VEP::vfa_to_line. It is in this
portion that the "transcript" method mentioned below lies.

In the graphs I have, vfa_to_line is in the right-hand portion while
display_consequence is in the left-hand portion. If you or others want to
see my graphs, I can try to make them available online somewhere.

Since you've looked at NYTProf output, you should see this too, right?
If I have this right, this is a big area to look for improvements.
Reformatting objects doesn't feel like it should be that time consuming.

>> In the few cases guided by NYTProf that I have looked at, I've been able
>> to make reasonable speed ups at the expense of eliminating the tests
>> and object overhead.
>> For example, in EnsEMBL::Variation::BaseTranscriptVariation changing:
>>  sub transcript {
>>      my ($self, $transcript) = @_;
>>      assert_ref($transcript, 'Bio::EnsEMBL::Transcript') if $transcript;
>>      return $self->SUPER::feature($transcript, 'Transcript');
>> }
>> to:
>>      sub transcript {
>>          my ($self, $transcript) = @_;
>>          return $self->{feature};
>>      }
>> gives a noticeable speed up. But you may ask: if we do that, don't we
>> lose type safety and open the potential for bugs?
>> Here is how to address these valid concerns.
> Type safety is of course important, but there are places (such as this
> one) where the setter is only ever called internally. Testing should find
> cases where this is done incorrectly, but it may be possible to remove the
> assert_ref() call here completely.

I just want to be clear about something here. It is not just removing the
assert_ref call that helps; it is also removing the call to
$self->SUPER::feature, which performs another assert_ref check before
returning $self->{feature}.
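To make the comparison concrete, here is a toy model of the two call paths. The class and method names are invented for illustration, not Ensembl's: the checked path re-validates its argument the way assert_ref does, while the flattened path reads the hash slot directly.

```perl
use strict;
use warnings;

package Toy::Feature;
use Scalar::Util qw(blessed);

sub new { my ($class, %args) = @_; return bless { %args }, $class }

# Checked path: validates the type on every set, as assert_ref would.
sub feature_checked {
    my ($self, $feature) = @_;
    if (defined $feature) {
        die "not a Toy::Transcript"
            unless blessed($feature) && $feature->isa('Toy::Transcript');
        $self->{feature} = $feature;
    }
    return $self->{feature};
}

# Flattened path: straight hash access, no validation overhead.
sub feature_fast { return $_[0]->{feature} }

package Toy::Transcript;
sub new { return bless {}, shift }

package main;
my $tv = Toy::Feature->new;
$tv->feature_checked(Toy::Transcript->new);
# Both accessors now return the same object; only the per-call
# overhead differs.
```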

>> First, I think there could be two sets of the Perl modules, such as for
>> EnsEMBL::Variation::BaseTranscriptVariation. One set with all of the
>> checks and a second, faster set without them.  A configuration parameter might
>> specify which version to use. In development or by default, one might use
>> the ones that check types.
> I think having two sets of modules would be a bit clumsy and could lead to
> inconsistencies between the two, even if more tests were added.

The model I was thinking of is like Scalar::Util, or at least the way it
used to be: it had a pure-Perl version and an XS version. In Ruby there
used to be (and there still may be) database drivers that had both
C-compiled versions and pure-Ruby ones.

As far as tests go, if one specified an optimized build, then that build
would be used for the tests. Travis or some other continuous integration
system would run with both sets of options.

But, yes, I can see that it is more complex. If there is interest, I might
be able to work up a little example in a branch of a fork to show how it
might work. I think the key, though, is to limit this to just the few
places where it is needed, dropping into C occasionally.
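One way to get the effect of two builds without maintaining two module trees is a compile-time constant: Perl folds away a branch guarded by a false `use constant`, so the fast build pays no runtime cost for the checks. A minimal sketch, with a hypothetical environment variable controlling the switch:

```perl
use strict;
use warnings;

package Toy::Checked;
# ENSEMBL_DEBUG_CHECKS is a made-up name for illustration. Because
# DEBUG is a compile-time constant, Perl removes the `if (DEBUG)`
# branch entirely when it is false.
use constant DEBUG => $ENV{ENSEMBL_DEBUG_CHECKS} ? 1 : 0;

sub new { return bless { feature => $_[1] }, $_[0] }

sub feature {
    my ($self) = @_;
    if (DEBUG) {
        # Expensive validation lives only in the debug build.
        die "feature slot is empty" unless defined $self->{feature};
    }
    return $self->{feature};
}

package main;
my $obj = Toy::Checked->new('some_feature');
```

The same source then serves as both the "checking" and the "fast" version, avoiding the drift between two parallel sets of modules.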

>> Second and perhaps more important, there are the tests! If more need to be
>> added, then let's add them. And one can always add a test to make sure
>> the two versions give the same results.
>> One last avenue of optimization that I'd like to explore is using, say,
>> Inline::C, or basically coding hot spots in C. In particular, consider
>> Bio::EnsEMBL::Variation::Utils::VariationEffect::overlap, which looks like
>> this:
>>      sub overlap {
>>          my ( $f1_start, $f1_end, $f2_start, $f2_end ) = @_;
>>          return ( ($f1_end >= $f2_start) and ($f1_start <= $f2_end) );
>>      }
> Again, something we've thought of, but I'd be surprised if you saw that
> much speedup from changing some simple int comparisons.
> I believe there may be better ways to optimise this overlap code, but it's
> not something we've got to yet.
>> I haven't tried it on this hot spot, but this is something that might
>> benefit from getting coded in C. Again, the trade-off for speed here is a
>> dependency on compiling C. In my view, anyone installing this locally or
>> installing CPAN modules probably already compiles C, but it does add complexity.
>> Typically, this is handled in Perl by providing both versions, perhaps as
>> separate modules.
>> Thoughts or comments?
>> Thanks,
>>    rocky
>> _______________________________________________
>> Dev mailing list    Dev at ensembl.org
>> Posting guidelines and subscribe/unsubscribe info:
>> http://lists.ensembl.org/mailman/listinfo/dev
>> Ensembl Blog: http://www.ensembl.info/