Back from Kernel Summit, LinuxCon Europe and ELCE

Last week, from 23-Oct to 28-Oct, I was at three conferences in Prague, Czech Republic, together with Gustavo Barbieri, Gustavo Padovan and Ulisses Furquim: the ProFUSION crew in Prague.

Starting with the Kernel Summit, I had the opportunity to join the Bluetooth Summit and participate in the discussions about this subsystem in Linux, both in kernel and user space. We had a lot of hot topics to discuss, including the upcoming BlueZ 5.0, Bluetooth 3.0 (high speed) and Bluetooth 4.0 (low energy), and I could also demonstrate the work I've been doing on the AVRCP profile. I'm glad it was well received by the other developers. Some of them I didn't know personally, such as Luiz von Dentz, Claudio Takahasi and Vinicius Gomes. Others I had the pleasure to meet again, like Marcel Holtmann and Johan Hedberg.

(We didn't discuss only Bluetooth-related things. We noticed that more than a third of the people there, working on the core of Bluetooth in Linux, were Brazilian, and soon we were discussing with Samuel Ortiz, a Frenchman and the maintainer of ConnMan, who the best soccer player is :-).)

Daniel Wagner from BMW also brought up some interesting scenarios of multiple devices connected through Bluetooth in car kits and helmets (like this one): HFP, A2DP, HSP (and maybe also AVRCP?), all of them interacting and working together at the same time. Since the GStreamer conference was taking place at the same facility, we could also discuss with PulseAudio developers. In the end, it seems BlueZ and PulseAudio are working pretty well together, though we still have to polish some rough edges for use cases like this one.

Being at the Kernel Summit was also a great opportunity to meet developers of other parts of the kernel, such as Steven Rostedt and Peter Zijlstra, with whom I had more contact some time ago when I was working on the Linux scheduler.

When the Kernel Summit was over (on Tuesday), LinuxCon and ELCE were taking off. It was great to have these two conferences co-located once more and to be able to attend talks at both of them. There were several talks I wanted to attend, but some of them overlapped. I'm looking forward to watching the recorded talks later this year[1]. It would be too extensive to detail each one here, so I'm only covering the ones that caught my attention the most.

Gustavo Barbieri and Sulamita Garcia talked about Demystifying HTML5 and how it can be used to develop apps. Gustavo focused on the EFL port of WebKit (of which I'm one of the developers ;-)) and the underlying technologies. It seems the mentality of "let's write apps in a very high-level language" instead of "providing a native language in an SDK" is coming back. Unlike what happened some years ago, maybe this time it will work out. Only the future will tell.

Since this year I got involved with Android and platform development, I went to several Android-related talks. Leveraging Android's Linux Heritage was really good stuff, showing how to replace some parts of the Android platform: bash instead of the I-wanna-be-a-shell that comes with Android by default, putting GStreamer in, optimizing some parts of the code, etc. In the same vein there was another talk entitled Build Community Android Distribution and Ensure the Quality. It was interesting (but not surprising) to see how hard it is to contribute to AOSP and how different Android is from the other open source projects we are used to.

Other interrelated areas that I'm interested in (maybe because I work for a company doing embedded systems :-)) are system initialization, fast boot and development boards (such as the Pandaboard). Therefore I attended systemd Administration in the Enterprise and Integrating systemd: Booting Userspace in Less Than 1 Second. The former, given by Lennart and Kay, detailed some systemd features for people running enterprise servers, while in the latter Koen told us about his experience reducing boot time by using systemd on a Pandaboard. In this last talk I also met Jean Christophe, one of the developers of barebox (a bootloader aiming to replace U-Boot). Last time I checked, the Pandaboard was not in the list of supported boards, but I was pleasantly surprised to see that now it is. Barebox has the advantages of running with caches enabled, having a much cleaner architecture and being much faster than U-Boot. In summary, IMHO it's a bootloader done right.

Another interesting talk was Tuning Linux For Embedded Systems: When Less is More, in which Darren Hart gave instructions on reducing boot time and image size in very resource-constrained scenarios (he was aiming for a rootfs of only 4MB and a total boot time under a second). Some key things to know are how to investigate what is not important to the application, what can be removed from kernel/userspace in order to fit the requirements, and when, why and what to replace. Last but not least, in Developing Embedded Linux Devices Using the Yocto Project and What's New in 1.1, David Stewart gave a status update on the Yocto Project. It's interesting how the project has evolved over this year, and the next time someone doing embedded systems thinks about rolling their own distro from scratch, it would be good to look at Yocto.

I met a lot of other people whom I apologize for not naming here; this post would be even longer than it already is. I had a really great time there and I hope to keep going to these conferences. The next one is LinuxCon Brazil, where I'll talk about How to Become an Open Source Developer. I look forward to seeing all of you there.


I'd like to thank the Linux Foundation for organizing such a great event and ProFUSION for allowing and sponsoring me to be there.

Side note: the problem is that now I want to do a lot of things in different projects without having time to: systemd, the Linux kernel, BlueZ, Pandaboard, barebox, Android, etc. :-)


[1] UPDATE: videos have been published –

AVRCP 1.3 on BlueZ

During the past weeks I’ve been working again on the BlueZ project and now we can finally announce that the AVRCP 1.3 profile is officially supported.

Technical background

For those who don't know what I'm talking about, here's a little background on those buzzwords:

BlueZ is the user-space part of the Bluetooth® technology stack used on Linux and Android. It has support for several Bluetooth profiles such as RFCOMM, HID, PAN, PBAP, OBEX, HFP and A2DP (some of them implemented as separate projects), which are defined by the Bluetooth SIG. In simpler terms, BlueZ is what allows your Linux device to do amazing things with Bluetooth technology such as streaming stereo music, making phone calls and other wireless magic.

One of these profiles supported by BlueZ is AVRCP (Audio/Video Remote Control Profile), which allows two devices to communicate through Bluetooth technology and exchange commands/messages to control the music/video being played.

New features

Until a few weeks ago BlueZ only had support for version 1.0 of AVRCP. This early version allows a Controller device (e.g. a Bluetooth technology-based car kit) to tell the Target device (e.g. a smartphone) to play, pause, and skip to the next or previous track. We've now upstreamed an implementation of AVRCP 1.3, which adds some nice features to the previously supported version (a sketch of how a player might hand this information to BlueZ follows the list below), such as:

  • Transmitting metadata about the music being played;
  • Changing application settings such as Equalizer, Repeat, Shuffle and Scan modes;
  • Reporting the current status of media playback: playing, stopped, paused, forward-seeking or reverse-seeking.
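
To give an idea of what this looks like from the player's side, below is a minimal sketch (in Python with dbus-python) of a media player handing track metadata and playback status to BlueZ so it can answer an AVRCP 1.3 controller. The interface name, method signature, object paths and metadata keys used here (org.bluez.Media, RegisterPlayer, Title/Artist/...) are assumptions for illustration and may differ between BlueZ versions.

import dbus

# Hedged sketch: hand the current track's metadata and playback status to
# BlueZ. Interface, method and property names are assumptions, not a reference.
bus = dbus.SystemBus()
adapter_path = "/org/bluez/hci0"  # hypothetical adapter object path
media = dbus.Interface(bus.get_object("org.bluez", adapter_path),
                       "org.bluez.Media")

metadata = {
    "Title": "Some Song",
    "Artist": "Some Artist",
    "Album": "Some Album",
    "TrackNumber": dbus.UInt32(1),
    "NumberOfTracks": dbus.UInt32(10),
    "TrackDuration": dbus.UInt32(180000),  # in milliseconds
}
properties = {
    "Status": "playing",
    "Shuffle": "off",
    "Repeat": "off",
}

# Register our player object; BlueZ would then forward this information to
# the remote controller (e.g. the car kit) over AVRCP.
media.RegisterPlayer(dbus.ObjectPath("/example/player"), properties, metadata)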

Some time ago I bought a Bluetooth stereo car kit. How boring it was to be able to stream music from my phone but not see any information about the song, artist or album. This is no more: now we have proper support for AVRCP 1.3 :-). Our ProFUSION team used the open source baseport for the OMAP™ processor-based Blaze™ mobile development platform from Texas Instruments Incorporated (TI) to help achieve this milestone. Additionally, we worked together with TI on testing and debugging to make this AVRCP 1.3 support a reality. Below you can see yours truly holding a Blaze™ mobile development platform from TI, sending music metadata to a Bluetooth technology-enabled car kit.

ANNOUNCE: codespell 1.2

Since I created a mailing list for codespell, the announcements here will not have as many details as before. Check out the new version of codespell:

One of the issues I had with codespell was that it was trying to parse cscope.out, since it's a text file. On the Linux kernel this file can get very big and, besides taking much longer, codespell was sometimes running out of memory :-). Now codespell has an option to skip files, even text ones. It's as easy as passing --skip="*.eps,cscope.out" (notice this is useful for ignoring EPS images too). Another useful thing (not so much for the Linux kernel though) is proper detection of encoding by using chardet.
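
The encoding detection boils down to asking chardet for its best guess before decoding a file; a simplified sketch of the idea (not codespell's actual code):

import chardet

def read_text(path):
    # Read raw bytes and let chardet guess the encoding; fall back to utf-8
    # if the detection is inconclusive. Simplified sketch, not codespell code.
    with open(path, "rb") as f:
        raw = f.read()
    guess = chardet.detect(raw)  # e.g. {'encoding': 'ISO-8859-1', 'confidence': 0.73}
    return raw.decode(guess["encoding"] or "utf-8", errors="replace")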

Attending AI classes

For those who didn't know, Stanford is offering some online courses starting next month. During my undergraduate studies I already had some AI classes, but I thought it would be interesting to participate in this class with thousands of students.

One area of AI that I like is teaching computers how to play games. I like it so much that during my AI classes at Politecnico di Milano I developed a computer game to play tic-tac-toe in NxNxN dimensions. It's actually fun to play, and you can find some screenshots (and the code) in a past blog post. I didn't have time to continue maintaining it, but the part I liked most was designing the algorithm, using MinMax, and eventually finding out that I couldn't win the game anymore when playing against the computer. This was both motivating and demotivating, because I knew that if I had developed it right, I couldn't possibly win anymore. Maybe I'll update this project when I have some time. If you would like to try it and find any problem, send me a patch and I will happily apply it. If you think you have a better algorithm, it's very easy to develop your own Player and plug it into the game. I challenge you to beat my MinMax player ;-).
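
For reference, the heart of such a player is a plain minimax search like the generic sketch below; the game-specific helpers (winner, is_full, legal_moves, apply_move, opponent) are hypothetical placeholders, not the code from my project.

def minimax(board, player, maximizing=True):
    # Generic minimax for a two-player, zero-sum game. Returns (score, move)
    # from `player`'s point of view: +1 win, -1 loss, 0 draw.
    w = winner(board)                      # hypothetical helper
    if w == player:
        return 1, None
    if w is not None:
        return -1, None
    if is_full(board):                     # hypothetical helper
        return 0, None

    best_score = -2 if maximizing else 2
    best_move = None
    mover = player if maximizing else opponent(player)     # hypothetical helper
    for move in legal_moves(board):                        # hypothetical helper
        child = apply_move(board, move, mover)             # hypothetical helper
        score, _ = minimax(child, player, not maximizing)
        if (maximizing and score > best_score) or \
           (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move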

While I was developing this game, a friend of mine who lived with me in Italy had a dream of making money playing poker. Because of this he was attending the AI classes too and eventually playing online to develop his strategies (on sites such as partypoker). It was a lot of fun hearing about his plans and strategies, though I don't know if he succeeded.

Even if you have attended an AI class before, like me and my friend, I think it will be very interesting to participate in this one. If you haven't registered yet, you still have some time: go to and subscribe.

Seminário de Linux Embarcado 2011

Last weekend I took part in the Seminário de Linux Embarcado (Embedded Linux Seminar), where I gave a talk on "systemd: repensando a inicialização" (systemd: rethinking init). The feedback I got from the audience was positive, showing a lot of interest in the various kinds of on-demand activation that are exclusive to this init system. The slides are available here on the blog.

I enjoyed the event a lot. It also featured a talk by Antognolli, who works with me, on graphical interfaces for embedded systems using the EFL set of libraries. Another acquaintance of mine from other conferences, Glauber Costa, talked about QEMU.

Benchmarking Javascript engines for EFL

The Enlightenment Foundation Libraries have several bindings for other languages in order to ease the creation of end-user applications and speed up their development. Among them there's a binding for Javascript using the Spidermonkey engine. The questions are: is it fast enough? Does it slow down your application? Is Spidermonkey the best JS engine to use?

To answer these questions Gustavo Barbieri created some C, JS and Python benchmarks to compare the performance of EFL using each of these languages. The JS benchmarks used Spidermonkey as the engine, since elixir, a binding based on it, was already done for EFL. I then created new bindings (with only the necessary functions) to also compare other well-known JS engines: V8 from Google and JSC (or Nitro) from WebKit.

Libraries setup

For all benchmarks EFL revision 58186 was used. The setup of each engine follows:

  • Spidermonkey: I used version 1.8.1-rc1 with the bindings already available in the EFL repository, elixir;
  • V8: version, using a simple binding I created for EFL. I named this binding ev8;
  • JSC: WebKit's sources are needed to compile JSC. I used revision 83063. Compiling with CMake, I chose the EFL port and enabled the SHARED_CORE option in order to have a separate library for Javascript.


Startup time: this benchmark measures the startup time by executing a simple application that imports evas, ecore, ecore-evas and edje, brings in some symbols and then iterates the main loop once before exiting. I measured the startup time for both hot and cold cache cases. In the former, the application is executed several times in sequence; the latter includes a call to drop all caches, so we have to load the libraries again from disk.
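
Measuring the cold-cache case is essentially a matter of dropping the kernel's caches before each run; below is a rough sketch of the methodology (the benchmark binary name is a placeholder, and the drop_caches write needs root).

import subprocess, time

def run_once(cmd):
    start = time.time()
    subprocess.check_call(cmd)
    return time.time() - start

def measure(cmd, runs=5, cold=False):
    # Hot cache: just run the command repeatedly. Cold cache: sync and drop
    # the page cache, dentries and inodes before every run (requires root).
    times = []
    for _ in range(runs):
        if cold:
            subprocess.check_call(["sync"])
            with open("/proc/sys/vm/drop_caches", "w") as f:
                f.write("3\n")
        times.append(run_once(cmd))
    return min(times)

print("hot :", measure(["./startup_bench"]))            # placeholder binary
print("cold:", measure(["./startup_bench"], cold=True))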

Runtime – Stress: this benchmark executes as many frames per second as possible of a render-intensive operation (a sketch of the measurement idea follows the list of phases below). The application is not that heavy, but it does some loops, math and interacts with EFL. A typical application would usually do far fewer operations per frame, because many operations happen inside EFL itself, in C, such as list scrolling, which is done entirely in elm_genlist. This benchmark is made of 4 phases:

  • Phase 0 (P0): Un-scaled blend of the same image 16 times;
  • Phase 1 (P1): Same as P0, with additional 50% alpha;
  • Phase 2 (P2): Same as P0, with additional red coloring;
  • Phase 3 (P3): Same as P0, with additional 50% alpha and red coloring;
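
The measurement itself is conceptually simple: render frames in a tight loop for a fixed wall-clock window and report the rate. The sketch below illustrates the idea; render_frame() is a placeholder for the per-phase EFL work (blend, alpha, coloring), not the actual benchmark code.

import time

def stress_fps(render_frame, seconds=5.0):
    # Run the render work as fast as possible for `seconds` and return the
    # achieved frames per second. render_frame is a placeholder callable.
    frames = 0
    start = time.time()
    while time.time() - start < seconds:
        render_frame()
        frames += 1
    return frames / (time.time() - start)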

The C and elixir versions are available in the EFL repository.

Runtime – Animation: usually an application doesn't need "as many FPS as possible"; instead it wants to limit itself to a certain number of frames per second. For example, the iPhone's browser tries to keep a constant 60 FPS, and this is the value I used in this benchmark. The same application as in the previous benchmark is executed, but it tries to always keep the same frame rate.
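
Capping the frame rate, in contrast, means sleeping away whatever is left of each frame's 1/60 s budget, roughly as in the sketch below (again with render_frame() as a placeholder; the real benchmark presumably lets the EFL main loop do this pacing).

import time

def run_at_fixed_fps(render_frame, fps=60, frames=600):
    # Render `frames` frames, sleeping after each one so that the overall
    # rate stays at `fps`. render_frame is a placeholder callable.
    budget = 1.0 / fps
    for _ in range(frames):
        start = time.time()
        render_frame()
        elapsed = time.time() - start
        if elapsed < budget:
            time.sleep(budget - elapsed)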


The first computer I used to run these benchmarks was my laptop, a Dell Vostro 1320 with an Intel Core 2 Duo, 4 GB of RAM and a standard 5400 RPM disk. The results are below.

Benchmarks on Dell 1320 laptop

The first thing to notice is that there are no results for the "Runtime – Animation" benchmark. This is because all the engines kept a constant 60 FPS, so there were no interesting results to show. The first benchmark shows that V8's startup time is the shortest one when we have to load the application and libraries from disk. JSC was the slowest and Spidermonkey was in between.

With hot caches, however, we have a completely different scenario, with JSC being almost as fast as the native C application, V8 following with a slightly larger delay, and Spidermonkey as the slowest one.

The runtime-stress benchmark shows that all the engines perform well when there's some considerable load in the application, i.e. leaving P0 out of this scenario. JSC was always at the same speed as native code; Spidermonkey and V8 took a hit only when considering P0 alone.


The next computer on which to run these benchmarks was a Pandaboard, so we can see how well the engines perform on an embedded platform. The Pandaboard has an ARM Cortex-A9 processor with 1GB of RAM, and the partition containing the benchmarks is on an external flash storage drive. The results for each benchmark follow:


Benchmarks on Pandaboard

Once again, runtime-animation is not shown since it had the same results for all engines. For the startup tests, this time Spidermonkey was much faster than the others, followed by V8 and JSC, in both the hot and cold cache cases. In the runtime-stress benchmark all the engines performed well, as on the first computer, but now JSC was the clear winner.


There are several points to be considered when choosing an engine to use as a binding for a library such as EFL. The raw performance and the startup time seem to be very close to the ones achieved with native code. Recently there were some discussions on the EFL mailing list regarding which engine to choose, so I think it's good to share the numbers above. It's also important to notice that these bindings take an approach similar to elixir's, mapping each function call in Javascript to the corresponding native function. I did this to be fair in the comparison among them, but depending on the use case it would be good to have a JS binding similar to what the Python bindings did, embedding the function calls in real Python objects.
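
To make the difference concrete, here is a toy, self-contained illustration (in Python, with made-up native_* stubs standing in for calls into the C library) of the two styles: a flat, elixir-like mapping of native calls versus a wrapper that embeds the native handle in a real object.

# Toy illustration of the two binding styles; the native_* functions are
# stubs standing in for calls into the C library.
def native_rectangle_add(canvas):
    return {"canvas": canvas, "w": 0, "h": 0, "visible": False}

def native_object_resize(handle, w, h):
    handle["w"], handle["h"] = w, h

def native_object_show(handle):
    handle["visible"] = True

# Style 1: flat binding, each scripted call maps 1:1 to a native function.
canvas = object()
rect = native_rectangle_add(canvas)
native_object_resize(rect, 100, 50)
native_object_show(rect)

# Style 2: the binding wraps the native handle in a real object.
class Rectangle:
    def __init__(self, canvas):
        self._handle = native_rectangle_add(canvas)
    def resize(self, w, h):
        native_object_resize(self._handle, w, h)
    def show(self):
        native_object_show(self._handle)

r = Rectangle(canvas)
r.resize(100, 50)
r.show()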

ESC Brazil – Realtime Linux with RT_PREEMPT

Two weeks ago I was supposed to give a talk about realtime Linux at ESC Brazil: "Usando Linux como Sistema de Tempo Real" (Using Linux as a realtime OS). Sadly, a few days before, I broke my fibula while playing soccer and had to have surgery. I regret that I couldn't attend this conference.

At least in the company I work for there are more people with knowledge in this area. Gustavo Barbieri went there in my place and got good feedback from the attendees.

Now I have to stay home, at least for 1 or 2 months :-(.


codespell 1.1-rc1

I'm glad to announce the first RC of codespell 1.1. I decided to leave the biggest feature for the next version and release 1.1 with the small but important features that were already implemented. This new version comes with the following:

  • Verbosity level: tired of seeing so many things printed while the fixes take place? Now you can filter what you see.
  • Exclusion list: there are cases in which codespell spots a false positive, but disabling the entry in the dictionary would prevent it from fixing many other places. This is particularly true when there are names in the source code. In the Linux kernel I've seen names containing "Taht" and "Teh" that were incorrectly fixed to "That" and "The". Now we have a file with lines that are excluded from the ones codespell will fix. Hopefully such lines will not change very often, so we can maintain one file per project for future executions of codespell. I'm maintaining one for the Linux kernel; it's in data/linux-kernel.exclude.
  • Interactive mode: for those fixes that are not applied automatically (because they have more than one possible fix), we can now decide interactively on each one. I recommend that everyone interested in this feature run codespell once without this option to apply the automatic fixes, and a second time to go through the remaining ones.
  • Stats (summary) of the changes: interested in how many times a word was misspelled? Now codespell can display a summary of all the fixes it has made.

I was particularly worried about the increase in runtime when using the exclusion list. However, it proved to be very fast when excluding the lines by their hashes: I can parse the entire Linux kernel tree within 1min30s on my laptop with a slow HD. The biggest feature I've left for the next version is allowing changes to be applied only to parts of the source code, like comments and strings. I expect to implement this for a future 1.2 version.
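
The trick that keeps the exclusion list fast is comparing hashes of whole lines against the exclusion file instead of doing any per-word matching; below is a simplified sketch of the idea (not codespell's actual implementation).

import hashlib

def line_hash(line):
    return hashlib.sha1(line.rstrip("\n").encode("utf-8")).digest()

def load_exclude(path):
    # One hash per excluded line; membership tests are then O(1) per line.
    with open(path, encoding="utf-8") as f:
        return {line_hash(line) for line in f}

def fixable_lines(path, excluded):
    # Yield only the lines the spell fixer is allowed to touch.
    with open(path, encoding="utf-8") as f:
        for number, line in enumerate(f, 1):
            if line_hash(line) not in excluded:
                yield number, line

# excluded = load_exclude("data/linux-kernel.exclude")
# for n, line in fixable_lines(some_source_file, excluded):
#     ...apply the spelling fixes to this line...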

Besides these new features, there are some fixes to the dictionary. Thanks to all of you who have sent me fixes and suggestions. I'm glad to see patches generated by codespell being applied to other open source projects even when I'm not the one sending them. So far I've seen patches applied to the Linux kernel, oFono, ConnMan, FreeBSD, LLVM, clang, EFL and others that I don't remember right now.

For those who prefer to wait for a stable release, I’m also releasing codespell 1.0.2 with fixes only to the dictionary.

As always, you can download codespell packages from:

Repositories are available at:

For a better and safer web

Today I added a script to the site to warn users running old (and insecure) browser versions. The project is "Salve a Web, por favor" ("Save the Web, please"). It's open source and can be seen in its GitHub repository. Just load the script with the following entry on your page:

<script type="text/javascript" src=""></script>

I think that, given the nature of the posts on my blog, the browser breakdown here doesn't look like the global one, where Internet Explorer is still the most used browser. Still, I think it's important to warn users. Who knows, maybe we'll have a better and safer web in the future? See below the access statistics for this blog over the last year.


Browsers used to access the blog
Operating systems used to access the blog


The few visitors with old browsers who read this blog will see the banner below (obtained by changing Chromium's user agent to IE 6):

Outdated Internet Explorer

by Lucas De Marchi