All you need is data
Upload a data set, and the automatic statistician will attempt to describe the final column of your data in terms of the rest of the data. After constructing a model of your data, it then tries to falsify its own claims, checking whether any aspect of the data has not been well captured by the model.
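For concreteness, here is a minimal sketch of that workflow in R, using ordinary linear regression as a stand-in for the system's actual model search (the data frame and variable names are made up for illustration):

```r
# Toy stand-in for an uploaded data set; the last column is the one
# to be described in terms of the others. All data are simulated.
set.seed(1)
df <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
df$y <- 2 * df$x1 - df$x2 + rnorm(100)

# Regress the final column on all remaining columns.
response   <- names(df)[ncol(df)]
predictors <- names(df)[-ncol(df)]
fit <- lm(reformulate(predictors, response), data = df)

summary(fit)          # the model's "description" of the final column
plot(residuals(fit))  # leftover structure here would flag aspects of
                      # the data the model has not captured well
```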
I describe an approach to compiling common idioms in R code directly to native machine code and illustrate it with several examples. Not only can this yield significant performance gains, but it also allows us to use new approaches to computing in R. Importantly, the compilation requires no changes to R itself; it is done entirely via R packages. This allows others to experiment with different compilation strategies and even to define new domain-specific languages within R. We use the Low-Level Virtual Machine (LLVM) compiler toolkit to create the native code and to perform sophisticated optimizations on it. By adopting this widely used software within R, we leverage its ability to generate code for different platforms such as CPUs and GPUs, and we will continue to benefit from its ongoing development. This approach potentially allows us to develop high-level R code that is also fast, that can be compiled to work with different data representations and sources, and that could even be run outside of R. The approach aims both to provide a compiler for a limited subset of the R language and to enable R programmers to write other compilers. This is another approach to helping us write high-level descriptions of what we want to compute, not how.
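For a flavor of what this can look like in code, here is a hedged sketch based on the Rllvm/RLLVMCompile packages that (to my knowledge) implement this approach; the specific names below (compileFunction, Int32Type, run) are assumptions drawn from those packages' examples and may differ across versions:

```r
# Hypothetical sketch: compile a simple R loop to native code via LLVM.
# API names (compileFunction, Int32Type, run) are assumed from the
# RLLVMCompile/Rllvm examples; treat them as illustrative, not exact.
library(RLLVMCompile)

sumTo <- function(n) {
  total <- 0L
  for (i in 1:n) total <- total + i
  total
}

# Compile sumTo, declaring argument and return types for code generation.
sumToC <- compileFunction(sumTo, Int32Type, list(n = Int32Type))

run(sumToC, 1000L)  # invoke the natively compiled routine
sumTo(1000L)        # same answer from the interpreted original
```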
So, I’m no longer the only one using machine learning to measure democracy: I just found out about this initiative. It looks interesting, and it made me regret not going to APSA this year. We differ in tools (they use SVM; I use Wordscores in one paper and a combination of LSA, LDA, and decision trees in another) and in texts (they use human rights reports and Freedom House reports; I use 6,043 newspapers and magazines), but the spirit is the same: producing measures that are more transparent and reproducible (and eventually, maybe, real-time).
1. C++ and Fortran are still considerably faster than any other alternative, although one needs to be careful with the choice of compiler.
2. C++ compilers have advanced enough that, contrary to the situation in the 1990s and some folk wisdom, C++ code runs slightly faster (5-7 percent) than Fortran code.
3. Julia, with its just-in-time compiler, delivers outstanding performance: its execution time is only between 2.64 and 2.70 times that of the best C++ executable.
4. Baseline Python is slow. Using the PyPy implementation, the code runs around 44 times slower than in C++; using the default CPython interpreter, it runs between 155 and 269 times slower.
5. However, a relatively small rewriting of the code and the use of Numba (a just-in-time compiler for Python invoked through decorators) dramatically improve Python’s performance: the decorated code runs only between 1.57 and 1.62 times slower than the best C++ executable.
6. Matlab is between 9 and 11 times slower than the best C++ executable. When combined with MEX files, though, the difference is only 1.24 to 1.64 times.
7. R runs between 500 and 700 times slower than C++. If the code is compiled (e.g., with R’s byte compiler), it is between 240 and 340 times slower; a sketch of byte compilation follows this list.
8. Mathematica can deliver excellent speed, about four times slower than C++, but only after a considerable rewriting of the code to take advantage of the peculiarities of the language. The baseline version of our algorithm in Mathematica is much slower, even after taking advantage of Mathematica’s compilation.
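To make point 7 concrete, here is a minimal sketch of byte compilation with base R’s compiler package, which appears to be the kind of compilation the benchmark refers to; the function is a deliberately naive toy, not the benchmark code from the paper:

```r
# Sketch: byte-compile an R function with the base 'compiler' package.
library(compiler)

slow_sum <- function(x) {
  s <- 0
  for (v in x) s <- s + v   # naive loop, slow in interpreted R
  s
}

fast_sum <- cmpfun(slow_sum)  # byte-compiled version of the same function

x <- runif(1e6)
system.time(for (i in 1:20) slow_sum(x))
system.time(for (i in 1:20) fast_sum(x))
# Note: recent R versions JIT byte-compile functions by default, so the
# gap is smaller today than when these benchmarks were run.
```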