During the past weeks I’ve been working again on the BlueZ project and now we can finally announce that the AVRCP 1.3 profile is officially supported.
For those who don’t know what I’m talking about, here’s a little background on those buzzwords:
BlueZ is the user-space part of the Bluetooth® technology stack used on Linux and Android. It supports several Bluetooth profiles such as RFCOMM, HID, PAN, PBAP, OBEX, HFP and A2DP (some of them are implemented as separate projects) that are defined by the Bluetooth SIG. In simpler terms, BlueZ is what allows your Linux device to do amazing things with Bluetooth technology, such as streaming stereo music, making phone calls and other wireless magic.
One of these profiles supported by BlueZ is AVRCP (Audio/Video Remote Control Profile), which allows two devices to communicate through Bluetooth technology and exchange commands/messages to control the music/video being played.
Until a few weeks ago BlueZ only had support for version 1.0 of the AVRCP profile. This early version allows a Controller device (e.g. a Bluetooth technology-based car kit) to tell the Target device (e.g. a smartphone) to play, pause, and skip to the next or previous track. We’ve now upstreamed an implementation of AVRCP version 1.3, which adds some nice features to the previously supported version, such as:
Transmitting metadata of the music being played;
Changing application settings such as Equalizer, Repeat, Shuffle and Scan modes;
Setting the current status of media playback: playing, stopped, paused, forward-seeking, reverse-seeking.
Some time ago I bought a Bluetooth stereo car kit. How boring it was to be able to stream music from my phone but not see any information about the artist, the album, etc. No more: now we have proper support for AVRCP 1.3 :-). Our ProFUSION team used the open source baseport for the OMAP™ processor-based Blaze™ mobile development platform from Texas Instruments Incorporated (TI) to help achieve this milestone. Additionally, we worked together with TI on testing and debugging to make this AVRCP 1.3 support a reality. Below you can see yours truly holding a Blaze™ mobile development platform from TI, sending music metadata to a Bluetooth technology-enabled car kit.
One of the issues I had with codespell was that it tried to parse cscope.out, since it’s a text file. On the Linux kernel this file can get very big, and besides taking much longer, codespell sometimes ran out of memory :-). Now codespell has an option to ignore files, even text ones. It’s as easy as passing --skip="*.eps,cscope.out" (notice this is useful for ignoring EPS images too). Another useful thing (not so much for the Linux kernel, though) is proper detection of encoding by using chardet.
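The skip option boils down to glob matching each filename against the comma-separated patterns before the file is opened. A minimal sketch of that idea in Python (illustrative only, not codespell’s actual code; `should_skip` is a hypothetical helper):

```python
# Sketch of how a --skip option can filter files before spell-checking.
from fnmatch import fnmatch

def should_skip(filename, skip_patterns):
    """Return True if filename matches any comma-separated skip pattern."""
    return any(fnmatch(filename, pat) for pat in skip_patterns.split(','))

# The patterns from the post: skip EPS images and cscope.out.
print(should_skip('cscope.out', '*.eps,cscope.out'))  # True
print(should_skip('figure.eps', '*.eps,cscope.out'))  # True
print(should_skip('main.c', '*.eps,cscope.out'))      # False
```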
For those who didn’t know, Stanford is offering some online courses, starting next month. I already took some AI classes during my degree, but I thought it would be interesting to participate in this class with thousands of students.
One area of AI that I like is teaching computers how to play games. I like it so much that during my AI classes at Politecnico di Milano I developed a computer game to play tic-tac-toe in NxNxN dimensions. It’s actually fun to play, and you can find some screenshots (and the code) in a past blog post. I didn’t have time to continue maintaining it, but the part I liked most was actually designing the algorithm, using MinMax, and eventually finding out that I couldn’t win the game anymore when playing against the computer. This was both motivating and demotivating, because I knew that if I developed it right, I couldn’t possibly win anymore. Maybe I’ll update this project when I have some time. If you would like to try it and find any problem, send me a patch and I will happily apply it. If you think you have a better algorithm, it’s very easy to develop your own Player and plug it into the game. I challenge you to beat my MinMax player ;-).
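The core of a MinMax player is short. Here is a minimal sketch for plain 3x3 tic-tac-toe (the NxNxN game generalizes the same idea); this is an illustration of the technique, not the code from my project:

```python
# Minimal MinMax (negamax form) for 3x3 tic-tac-toe.
# Board is a list of 9 cells: 'X', 'O' or None.

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minmax(board, player):
    """Return (score, move) from `player`'s view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: draw
    other = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        score, _ = minmax(board, other)
        board[m] = None
        score = -score  # opponent's best outcome is our worst
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# With perfect play from an empty board, the game is a draw (score 0).
score, move = minmax([None] * 9, 'X')
```

The frustrating-but-satisfying property from the post falls out directly: once the search is exhaustive, the player never blunders, so a human can at best draw.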
During the time I was developing this game, a friend of mine who lived with me in Italy had a dream of making money playing poker. Because of this he was attending the AI classes too, and eventually playing online to develop his strategies (on sites such as partypoker). It was fun hearing about his plans and strategies, though I don’t know if he succeeded.
Even if you’ve taken an AI class before, like me and my friend, I think it will be very interesting to participate in this one. If you haven’t registered yet, you still have some time: go to http://www.ai-class.com and subscribe.
I really enjoyed the event, which also had a talk by Antognolli, who works with me, about graphical interfaces on embedded systems using the EFL set of libraries. Another acquaintance of mine from other conferences, Glauber Costa, talked about QEMU.
To answer these questions Gustavo Barbieri created some C, JS and Python benchmarks to compare the performance of EFL using each of these languages. The JS benchmarks used Spidermonkey as the engine, since the elixir bindings for EFL were already done. I then created new engines (with only the necessary functions) to also compare with other well-known JS engines: V8 from Google and JSC (or Nitro) from WebKit.
For all benchmarks EFL revision 58186 was used. The setup for each engine follows:
Spidermonkey: I’ve used version 1.8.1-rc1 with the bindings already available in the EFL repository, elixir;
V8: version 126.96.36.199, using a simple binding I created for EFL. I named this binding ev8;
Startup time: This benchmark measures startup time by executing a simple application that imports evas, ecore, ecore-evas and edje, brings in some symbols and then iterates the main loop once before exiting. I measured the startup time for both hot and cold cache cases. In the former the application is executed several times in sequence; the latter includes a call to drop all caches, so we have to load the libraries from disk again.
Runtime – Stress: This benchmark executes as many frames per second as possible of a render-intensive operation. The application is not so heavy, but it does some loops, math and interacts with EFL. Usually a common application would do far fewer operations every frame, because many operations are done inside EFL itself, in C, such as list scrolling, which is done entirely in elm_genlist. This benchmark is made of 4 phases:
Phase 0 (P0): Un-scaled blend of the same image 16 times;
Phase 1 (P1): Same as P0, with additional 50% alpha;
Phase 2 (P2): Same as P0, with additional red coloring;
Phase 3 (P3): Same as P0, with additional 50% alpha and red coloring;
The C and elixir versions are available in the EFL repository.
Runtime – animation: usually an application doesn’t need “as many FPS as possible”; instead it limits itself to a certain number of frames per second. E.g. the iPhone’s browser tries to keep a constant 60 FPS, which is the value I used in this benchmark. The same application as in the previous benchmark is executed, but it tries to always keep the same frame rate.
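A frame-rate cap like this works by sleeping away whatever is left of each 1/60 s slot after rendering. A small sketch of that logic (hypothetical helper names, not the benchmark’s actual code):

```python
# Sketch of a 60 FPS frame limiter: render, then sleep out the rest
# of the frame slot so each frame takes at least 1/60 of a second.
import time

TARGET_FPS = 60
FRAME_TIME = 1.0 / TARGET_FPS  # ~16.67 ms per frame

def sleep_for_frame(frame_start, frame_time=FRAME_TIME,
                    now=None, sleep=time.sleep):
    """Sleep out the remainder of the current frame slot.

    Returns 0.0 if the frame finished within its slot, otherwise the
    amount of time by which it overshot (in that case no sleep is done).
    The `now` and `sleep` parameters exist only to make the logic testable.
    """
    now = time.perf_counter() if now is None else now
    remaining = frame_time - (now - frame_start)
    if remaining > 0:
        sleep(remaining)
        return 0.0
    return -remaining

# Typical render loop shape:
# while running:
#     start = time.perf_counter()
#     render_one_frame()  # hypothetical rendering call
#     sleep_for_frame(start)
```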
The first computer I used to run these benchmarks was my laptop, a Dell Vostro 1320: Intel Core 2 Duo with 4 GB of RAM and a standard 5400 RPM disk. The results are below.
The first thing to notice is that there are no results for the “Runtime – animation” benchmark. This is because all the engines kept a constant 60 FPS, so there were no interesting results to show. The first benchmark shows that V8’s startup time is the shortest when we have to load the application and libraries from disk. JSC was the slowest and Spidermonkey was in between.
With hot caches, however, we have a completely different scenario, with JSC being almost as fast as the native C application, followed by V8 with a slightly larger delay and Spidermonkey as the slowest.
The runtime-stress benchmark shows that all the engines perform well when there’s considerable load in the application, i.e. when removing P0 from the scenario. JSC was always at the same speed as native code; Spidermonkey and V8 had an impact only when considering P0 alone.
The next computer used to execute these benchmarks was a Pandaboard, so we can see how well the engines perform on an embedded platform. The Pandaboard has an ARM Cortex-A9 processor with 1 GB of RAM, and the partition containing the benchmarks is on an external flash storage drive. The results for each benchmark follow:
Once again, runtime-animation is not shown, since it had the same results for all engines. For the startup tests, Spidermonkey was now much faster than the others, followed by V8 and JSC, in both hot and cold cache cases. In the runtime-stress benchmark all the engines performed well, as on the first computer, but now JSC was the clear winner.
Two weeks ago I was supposed to give a talk about realtime Linux at ESC Brazil: “Usando Linux como Sistema de Tempo Real” (Using Linux as a Realtime OS). Sadly, a few days before, I broke my fibula playing soccer and had to have surgery. I regret that I couldn’t attend this conference.
At least in the company I work for there are more people with knowledge in this area. Gustavo Barbieri went there in my place and got good feedback from the attendees.
Now I have to stay home, at least for one or two months :-(.
I’m glad to announce the first RC of codespell 1.1. I decided to leave the biggest feature for the next version and release 1.1 with the small but important features that were already implemented. This new version comes with the following features:
Verbosity level: tired of seeing so many things printed while the fixes are taking place? Now you can filter what you see.
Exclusion list: there are cases in which codespell spots a false positive, but disabling the entry in the dictionary would prevent it from fixing many other places. This is particularly true when there are names in source code. In the Linux kernel I’ve seen names like “Taht” and “Teh” that were incorrectly fixed to “That” and “The”. Now we have a file with lines that are excluded from the ones codespell will fix. Hopefully such lines will not change very often, and we can maintain a file per project for future executions of codespell in each project. I’m maintaining one for the Linux kernel; it’s in data/linux-kernel.exclude.
Interactive mode: for those fixes that are not done automatically (because they have more than one possible fix), now we can interactively decide on each one. I recommend that everyone interested in this feature run codespell once without this option, to apply the automatic fixes, and a second time to go through the remaining ones.
Stats (summary) of the changes: are you interested in how many times a word was misspelled? Now codespell can display a summary of all the fixes it has done.
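A summary like that boils down to counting occurrences per misspelled word. A tiny sketch of the idea (illustrative only, with a made-up stream of misspellings; not codespell’s actual code):

```python
# Count how many times each misspelling was fixed and print a summary,
# most frequent first.
from collections import Counter

# Hypothetical misspellings found during a run.
found = ['teh', 'recieve', 'teh', 'occured', 'teh', 'recieve']

summary = Counter(found)
for word, count in summary.most_common():
    print(f'{word}: {count}')
```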
I was particularly worried about the increase in runtime when using the exclusion list. However, it proved to be very fast when excluding the lines by using their hashes: I can parse the entire Linux kernel tree within 1min30s on my laptop with its slow HD. The biggest feature that I’ve left for the next version is to allow changes to be applied only to parts of the source code, like comments and strings. I expect to implement this for a future 1.2 version.
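The hash-based exclusion can be sketched as: hash every line of the exclude file once into a set, then test each candidate line with an O(1) lookup. This is an illustration of the approach (the function names are mine), not codespell’s exact implementation:

```python
# Load the exclude file once, then membership tests are O(1) per line.
import hashlib

def load_exclusions(path):
    """Hash each line of the exclude file into a set; paid once at startup."""
    with open(path, 'rb') as f:
        return {hashlib.sha1(line).digest() for line in f}

def is_excluded(line, exclusions):
    """Cheap per-line check while scanning a source tree."""
    return hashlib.sha1(line).digest() in exclusions
```

Storing digests instead of the raw lines keeps the set small even when excluded lines are long, which matters on a tree the size of the Linux kernel.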
Besides these new features, there are some fixes to the dictionary. Thanks to all of you who have sent me fixes and suggestions. I’m glad to see patches generated by codespell being applied to open source projects even when I’m not the one sending them. As of now, I’ve seen patches applied to: Linux kernel, oFono, ConnMan, FreeBSD, LLVM, clang, EFL and others that I don’t remember right now.
For those who prefer to wait for a stable release, I’m also releasing codespell 1.0.2 with fixes only to the dictionary.
As always, you can download codespell packages from:
Today I added to the site a script to warn users running old (and insecure) browser versions. The project is “Salve a Web, por favor” (“Save the Web, please”). It’s open source and can be seen in its GitHub repository. Just load the script with the following entry in your page:
I think that, given the very nature of the posts on my blog, the browser split here isn’t similar to the global one, where Internet Explorer is still the most used browser. Still, I think it’s important to warn users. Who knows, maybe we’ll have a better and safer web in the future? See below the access statistics for this blog over the last year.
The few visitors with old browsers who read this blog will see the following banner (obtained by changing Chromium’s user agent to IE 6):
Very common questions I hear when compiling open source projects are:
How do I cross-compile a project using icecc/icecream?
How do I use a different compiler version to compile my project and still benefit from icecc/icecream?
Note: from now on I’ll always refer to icecream instead of icecc/iceccd/distcc as the name of the project.
Given you have already created your cross toolchain (or downloaded it from somewhere else, e.g. CodeSourcery/Linaro), these two questions are essentially the same. All you have to do is follow the two steps below:
1. Create the “compiler environment”
Understanding this part is really understanding how this remote-compiling magic works. When you want to compile a source file remotely, what icecream does is send a copy of your compiler and the things it needs to the remote machine, execute the process there and get back the result. By “things it needs” I mean: assembler, linker, libc, libgcc and some other libraries like libm, libgmp, libstdc++, libz, etc. Creating this environment with icecream is dead easy: call “icecc --build-native”. Following is the output I get on my Arch Linux box with GCC 4.6.0 as the default compiler:
Note that in the last line it created a .tar.gz file. This is the environment that will be sent to the other machines. If you want to use another compiler, you need to create another environment, which will later be passed to icecream in the second step.
To create an environment for a compiler that is not the default on your machine, the first thing you need is to have it in your PATH, pointing to the icecc binary. Here I sometimes use GCC 4.4 instead of the default compiler, so I’ll use it as an example. On my machine, GCC 4.4 is installed in /usr/bin/gcc-4.4 and icecream is installed in /opt/icecream/bin:
$ which gcc-4.4
/usr/bin/gcc-4.4
$ which icecc
/opt/icecream/bin/icecc
Go to where icecc is installed and make a symlink to icecc with the same name of the compiler you want to use:
$ sudo ln -s icecc gcc-4.4
$ sudo ln -s icecc g++-4.4
Now, tell icecream to create the environment for this compiler:
Now you can compile your source code as usual, be it calling gcc directly or through makefiles or other build systems. For example:
$ gcc-4.4 helloworld.c -o helloworld
If you manage a handful of machines running icecream, I’d recommend a tool we developed at ProFUSION called Picolé.
UPDATE: if you want a recommendation on how to build a cross toolchain, crossdev it is. The steps are the same as above, replacing gcc-4.4 with the name given to your compiler (e.g. arm-elf-gcc-4.6.0).