What determines the “speed” of a programming language?

Suppose a program is written in two distinct languages, call them X and Y. If their compilers generate the same bytecode, why should I use language X instead of language Y? What determines that one language is faster than another?

I ask this because you often see people say things like: “C is the fastest language; ATS is a language as fast as C.” I am seeking to understand the definition of “fast” for programming languages.

Answer

There are many reasons that may be considered for choosing a language
X over a language Y. Program readability, ease of programming,
portability to many platforms, existence of good programming
environments can be such reasons. However, I shall consider only the
speed of execution as requested in the question. The question does not
seem to consider, for example, the speed of development.

Two languages can compile to the same bytecode, but that does not mean
the same code will be produced.

Actually, bytecode is only code for a specific virtual machine. It has
engineering advantages, but it does not introduce any fundamental
difference compared with compiling directly for specific hardware. So
you might as well consider comparing two languages compiled for direct
execution on the same machine.

This said, the issue of relative speed of languages is an old one,
dating back to the first compilers.

For many years, in those early times, professionals considered that
hand-written code was faster than compiled code. In other words,
machine language was considered faster than high-level languages such
as Cobol or Fortran. And it was, both faster and usually smaller. High-level
languages still developed because they were much easier to use
for many people who were not computer scientists. The cost of using
high-level languages even had a name: the expansion ratio, which could
concern either the size of the generated code (a very important issue in
those times) or the number of instructions actually executed. The
concept was mainly experimental, but the ratio was greater than 1 at
first, as compilers did a fairly simple-minded job by today's standards.

Thus machine language was faster than say, Fortran.

Of course, that changed over the years, as compilers became more
sophisticated, to the point that programming in assembly language is
now very rare. For most applications, assembly language programs
compete poorly with code generated by optimizing compilers.

This shows that one major issue is the quality of the compilers
available for the language considered, their ability to analyse the source
code, and to optimize it accordingly.

This ability may depend to some extent on the features of the language
that emphasize the structural and mathematical properties of the
source, in order to make the work easier for the compiler. For
example, a language could allow the inclusion of statements about the
algebraic properties of user-defined functions, so as to allow the
compiler to use these properties for optimization purposes.

The compiling process may be easier, hence producing better code, when
the programming paradigm of the language is closer to the features of
the machine that will interpret the code, whether a real or a virtual
machine.

Another point is whether the paradigms implemented in the language are
close to the type of problem being programmed. It is to be expected
that a programming language specialized for a specific programming
paradigm will compile the features related to that paradigm very
efficiently. Hence, for both clarity and speed, the choice of a
programming language may depend on how well it is adapted to the kind
of problem being programmed.

The popularity of C for system programming is probably due to the fact
that C is close to the machine architecture, and that system
programming is directly related to that architecture too.

Some other problems will be more easily programmed, with faster
execution, using logic programming and constraint-resolution languages.

Complex reactive systems can be programmed very efficiently with specialized synchronous programming languages such as Esterel, which embody very specialized knowledge about such systems and generate very fast code.

Or, to take an extreme example, some languages are highly specialized,
such as the syntax-description languages used to program parsers. A
parser generator is nothing but a compiler for such a language. Of
course, it is not Turing complete, but these compilers are extremely
good at their specialty: producing efficient parsing programs. Since
the domain of knowledge is restricted, the optimization techniques can
be very specialized and tuned very finely. These parser generators are
usually much better than what could be obtained by writing a parser by
hand in another language. There are many highly specialized languages with compilers that produce excellent and fast code for a restricted class of problems.

Hence, when writing a large system, it may be advisable not to rely on
a single language, but to choose the best language for different
components of the system. This, of course, raises problems of
compatibility.

Another point that matters often is simply the existence of efficient libraries for the topics being programmed.

Finally, speed is not the only criterion and may be in conflict with
other criteria such as code safety (for example with respect to bad
input, or resilience to system errors), memory use, ease of
programming (though paradigm compatibility may actually help there),
object code size, program maintainability, etc.

Speed is not always the most important parameter. It may also take different guises, such as complexity, which may be average complexity or worst-case complexity. Also, in a large system as in a smaller program, there are parts where speed is critical and others where it matters little, and it is not always easy to determine that in advance.

Attribution
Source : Link , Question Author : Rodrigo Valente , Answer Author : babou
