I have two scripts, each of which computes the factorial of a number, and I would like to know which one is faster. The time command reports times in milliseconds, and the result varies from run to run:
piousbox@piousbox-laptop:~/projects/trash$ time ruby fac2.rb
30414093201713378043612608166064768844377641568960512000000000000
real 0m0.089s
user 0m0.052s
sys 0m0.028s
piousbox@piousbox-laptop:~/projects/trash$ time ruby fac1.rb
30414093201713378043612608166064768844377641568960512000000000000
real 0m0.091s
user 0m0.048s
sys 0m0.036s
piousbox@piousbox-laptop:~/projects/trash$ time ruby fac1.rb
30414093201713378043612608166064768844377641568960512000000000000
real 0m0.088s
user 0m0.048s
sys 0m0.040s
piousbox@piousbox-laptop:~/projects/trash$ time ruby fac2.rb
30414093201713378043612608166064768844377641568960512000000000000
real 0m0.088s
user 0m0.048s
sys 0m0.028s
piousbox@piousbox-laptop:~/projects/trash$ time ruby fac1.rb
30414093201713378043612608166064768844377641568960512000000000000
real 0m0.087s
user 0m0.064s
sys 0m0.028s
piousbox@piousbox-laptop:~/projects/trash$ time ruby fac2.rb
30414093201713378043612608166064768844377641568960512000000000000
real 0m0.089s
user 0m0.068s
sys 0m0.016s
piousbox@piousbox-laptop:~/projects/trash$
How do I get the average time needed to run a script? I could parse and average the output of 100 time runs, but I imagine there is a better solution?
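Something along these lines is what I have in mind, as a rough sketch using Ruby's standard Benchmark module instead of parsing time output (the 100-run count is arbitrary, and fac1.rb / fac2.rb are just the two scripts above):

require 'benchmark'

RUNS = 100  # arbitrary number of repetitions

%w[fac1.rb fac2.rb].each do |script|
  # Benchmark.realtime returns the wall-clock time of the block in seconds;
  # each run spawns a fresh Ruby process, so interpreter startup is included.
  total = RUNS.times.sum { Benchmark.realtime { system("ruby #{script}", out: File::NULL) } }
  puts format('%s: %.4f s average over %d runs', script, total / RUNS, RUNS)
end

But this still measures interpreter startup on every iteration, so I suspect there is a cleaner way.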