
Hacking the Intel fan, for fun

An alternative headline is: “how to show your wife how much you love her, the geek way”.

From 17 to 22 September I was in New Orleans participating in the discussions of the Linux Plumbers Conference, which has already turned into one of my favorite conferences. Lots of fun, great people to talk to and good discussions about systemd, containers, cgroups, kernel modules, etc. However, as the headline indicates, this blog post is not about the conference but rather about a toy the Intel booth was giving out: a fan with 7 LEDs in its propeller. See below:

Fan distributed to attendees during LPC

When turned on it shows a text message: “We’re hiring!”, “01.org/jobs”. So, if you are looking for a job and want to come work with me, you already know where to apply ;-). The fun part is that the box says “programmable message fan”. The guys at the booth told me that the first question people asked was how to change the message shown there, but they had no idea. This post shows how I did it.

Some days after arriving back in Brazil I saw a post from Steven Rostedt on G+ regarding this fan and a blog post he found: http://hackingwithgum.com/2009/10/06/hacking-the-cenzic-pov-fan/. Disassembling our fan showed that it’s a little different from that one: the EEPROM changed and there’s one extra pin.

Disassembling the fan

However, looking carefully at the board we can see it’s pretty similar: it’s a T24C04A EEPROM that is programmable via I2C. I’m not sure if the extra pin is for the write-protect feature present in this EEPROM or if it’s to select the address (in which case we would just have a different address on our side). Either way we are safe connecting it to ground. From the T24C04A’s datasheet we see it can work in the range 1.8V to 5.5V. So, instead of using a serial connector like in the other blog post, we can use any development board that has an I2C bus available to play with, in particular the BeagleBone Black, which has a 3.3V I2C bus and is what I’m using here. From the picture below you can notice that a) I didn’t have many HW components available and b) my drawing skills are horrible. I just did a quick hack to get it to work, i.e. connect GND, VCC and the pull-up resistors (since in I2C the bus is high when nobody is transmitting) [see UPDATE 2].

Wiring the fan to beaglebone

For reading from and writing to the EEPROM I’m using i2cdump and i2cset respectively, and i2cdetect to show me where the device was plugged in and its address. Beware that on the beaglebone the devices in /dev don’t match the ones in the HW schematics (thanks Koen for pointing this out to me).

Now the software part. Like in the other fan we have a column of 7 LEDs and each letter is “rendered” using 5 columns. However, the way the strings are stored is different. I tried to use the python script that was provided, but after some tests I figured I’d need to make some modifications. Below is how the strings are stored in our fan’s EEPROM:

Layout of the strings in the fan’s EEPROM

The first byte is the number of strings present in the EEPROM. Each string then has its length as its first byte. Then we have 5 bytes for each char, with chars in reverse order. They encode the state of the LEDs in each column: 0 means ON and 1 means OFF. After some trial and error we realize that not only is the string reversed, but so are the columns: the first byte of a character encodes its right-most column. In the end we have a 7×5 matrix for each char. I started to draw all the chars and change the python script to use them, but I got lazy and just finished the letters I was interested in (see UPDATE 1). The final result is the video shown above that says “Talita, I love you”, in Portuguese :-).
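To make the layout concrete, below is a minimal C sketch of encoding a single string into that format. The 7×5 glyph for ‘H’ and the exact meaning of the length byte are just my reading of the description above, so treat it only as an illustration; the real font table lives in the modified ascii2fan script.

#include <stdint.h>
#include <string.h>

/* Illustrative 7x5 glyph for 'H', one byte per column, bit i = row i,
 * 0 = LED on, 1 = LED off. 'H' is symmetric, so the right-to-left
 * column order doesn't change anything here. */
static const uint8_t glyph_H[5] = { 0x00, 0x77, 0x77, 0x77, 0x00 };

/* Encode one message into the layout described above: byte 0 is the
 * number of strings, then per string a length byte followed by
 * 5 bytes per char, chars stored in reverse order. This sketch only
 * knows the glyph for 'H'. */
static size_t encode_message(const char *msg, uint8_t *out, size_t outlen)
{
        size_t len = strlen(msg), pos = 0;

        if (outlen < 2 + len * 5)
                return 0;

        out[pos++] = 1;            /* one string stored in the EEPROM */
        out[pos++] = (uint8_t)len; /* string length (in chars, as I read it) */

        for (size_t i = len; i > 0; i--) {     /* chars in reverse order */
                memcpy(&out[pos], glyph_H, 5); /* real code: look up msg[i - 1] */
                pos += 5;
        }

        return pos;
}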

I used the following commands to dump the EEPROM, encode the text and write to it.

root@beaglebone:~# # dump what's in address 0x50 on bus 1 (use i2cdetect to find out the bus and address of your device)
root@beaglebone:~# i2cdump 1 0x50
root@beaglebone:~# # encode the message given as args
root@beaglebone:~# /tmp/ascii2fan "string1" "string 2 with space" "string 3" > ~/message.bin
root@beaglebone:~# # write the content of ~/message.bin into the EEPROM
root@beaglebone:~# i=0; od -An -t x1 ~/message.bin | while read line; do \
                   for c in $line; do \
                       cmd=$(printf "i2cset -y 1 0x50 0x%x 0x$c b" $i); $cmd; ((i++));
                   done;
               done

You can download my modified ascii2fan. I was keeping it in /tmp and lost it after a power cycle, so I had to change the file again and I didn’t confirm it’s still working. It’s almost the same as the one provided on hackingwithgum.com; only the table really changes.

UPDATES:

  1. I uploaded a new version of ascii2fan, containing all the letters.
  2. As Matt Ranostay pointed out, the beaglebone black already has an internal pull-up resistor, so this is not really necessary.

Optimizing hash table with kmod as testbed

One thing that caught my interest lately was the implementation of hash tables, particularly the algorithms we are currently using for calculating the hash value. In kmod we use Paul Hsieh’s hash function, self-entitled “superfast hash”. I feel troubled by anything that entitles itself super fast, especially given that the benchmarks provided are from some years ago, with older CPUs.

After some time spent benchmarking and researching I realized there were many more things to look at than just the hash function. With this post I try to summarize my findings, showing some numbers. However, do take them with a grain of salt.

The hash functions I’m using here for comparison are: DJB, Murmur3, Paul Hsieh and CRC32c (using the crc32c instruction present in SSE4.2). My goal is to benchmark hash functions when strings are used as keys (though some of the algorithms above can be adapted for blobs). Also, these hash tables are used only for fast lookups, so any cryptographic property of the functions, or lack thereof, is irrelevant here. For all the benchmarks I’m using depmod’s hash tables. There are 3 hash tables:

  1. module names: keys are very small, 2 to 20 chars
  2. symbol names: keys are mid-range, 10 to 40 chars
  3. module path names: keys are the largest: 10 to ~60 chars

In my benchmarks I identified the following items as important points to look at when optimizing:

  • Hash functions: how do they spread the items across the table?
  • Time to find the bucket
  • Time to find the item inside the bucket
  • Hash functions: time to calculate the hash value

Whilst the first and the last ones depend on the algorithm chosen for calculating the hash value, the second and third are more related to the data structure used to accommodate the items and to the size of the table.

Hash functions: how do they spread the items across the table?

It’s useless to find a hash function that is as fast as it can be if it doesn’t fulfill its role: spreading the items across the table. Ideally each bucket would have the same number of items. This ideal is what we call a perfect hash, something that is possible only if we know all the items a priori, and it can be accomplished by tools like gperf. If we are instead looking for a generic function that does its best at spreading the items, we need a function like the ones mentioned above: given any random string it calculates a 32-bit hash value used to find the bucket in which the value we are interested in is located. It’s not only desirable that we are able to calculate the hash value very fast, but also that this value is in fact useful. A naive person could say the function below is a super fast, constant-time hash function.

uint32_t naive_hashval(const char *key, int keylen)
{
        return 1;
}

However, as one can note, it isn’t used anywhere because it fails the very first goal of a hash function: to spread the items across the table. Before starting the benchmarks I had read in several places that the crc32c instruction is like the function above (though not that bad) and couldn’t really be used as a hash function. Using kmod as a test bed, that isn’t what I would say. See the figures below.

(Distribution of items per bucket for the Paul Hsieh, DJB, Murmur3 and CRC32c hash functions)

We used all 3 hash tables in depmod, dumping all the items just before destroying them. The graph shows how many items ended up in each bucket. For all algorithms we have almost the same average and standard deviation. So, which functions should we use for the next benchmarks? The answer is clearly all of them, since they provide almost the same results and are all good contenders.

Time to find the bucket

Once we have calculated the hash value, it’s used to find the bucket in which the item lies. For starters, a bucket is nothing more than a row in the table, as depicted below:

b0 it1 → it2 → it3
b1 it4
b2 it5 → it6 → it7 → it8
b3 it9 → it10 → it11

The hash table above has size 4, which is also the number of buckets. So we need a way to convert the (32-bit) hash value into a 2-bit index. The most common way used in hash table implementations is to just take the value’s modulo:

uint32_t hashval = hashfunc(h, key, keylen);
int pos = hashval % h->size;

The size above is set when the hash table is created (some hash table implementations use a grow-able approach, which is not treated in this post). The modulo above is usually implemented by taking the remainder of a division, which uses a DIV instruction. Even if nowadays this instruction is fast, we can optimize it away if we pay attention to the fact that usually the hash table size is a power of 2. Since kmod’s inception we use size=512 for module names and paths, and size=2048 for symbols. If size is always a power of 2, we can use the code below to derive the position, which will lead to the same result but much faster.

uint32_t hashval = hashfunc(h, key, keylen);
int pos = hashval & (h->size - 1);

The DEC and AND instructions above are an order of magnitude faster than the DIV on today’s processors. However, the compiler is not able to optimize the DIV away and use DEC + AND since it can’t ensure size is a power of 2. Using depmod as a test bed we have the following clock cycle measurements for calculating the hash value + finding the bucket in the table:

keylen      before   after
2-10          79.0    61.9 (-21.65%)
11-17         81.0    64.4 (-20.48%)
18-25         90.0    73.2 (-18.69%)
26-32        104.7    87.0 (-16.82%)
33-40        108.4    89.6 (-17.37%)
41-48        111.2    91.9 (-17.38%)
49-55        120.1   102.1 (-15.04%)
56-63        134.4   115.7 (-13.91%)

As expected, the absolute gain is roughly constant regardless of the key length. The time to calculate the hash value varies with the key length, which explains the bigger relative gains for short keys. In kmod, to ensure the size is a power of 2, we round it up in hash_init() to the next power of 2 with the following function:

static _always_inline_ unsigned int ALIGN_POWER2(unsigned int u)
{
	return 1 << ((sizeof(u) * 8) - __builtin_clz(u - 1));
}

There are other ways to calculate it; refer to kmod’s commit message as to why this one is used.
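Just to make the rounding concrete, here is a small sanity-check sketch of what ALIGN_POWER2() produces (my example, assuming the function above is in scope; not kmod code):

#include <assert.h>

static void check_align_power2(void)
{
        /* values that are already a power of 2 stay unchanged */
        assert(ALIGN_POWER2(512) == 512);
        assert(ALIGN_POWER2(2048) == 2048);
        /* everything else is rounded up to the next power of 2 */
        assert(ALIGN_POWER2(3) == 4);
        assert(ALIGN_POWER2(513) == 1024);
}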

Time to find the item inside the bucket

As noted in the previous section, we use a MOD operation (or a variation thereof) to find the bucket in the table. When there are collisions and a bucket is storing more than 1 item, hash table implementations usually resort to a linked list or an array to store the items. Then the lookup ends up being:

  1. Calculate the hash value
  2. Use the hash value to find the bucket
  3. Iterate through the items in the bucket comparing the key in order to find the item we are interested in.
  4. Return the value stored.

The item is often a struct like the one below:

struct hash_entry {
        const char *key;
        void *value;
};

In the struct above I’m considering the key to be a string, but it’s also possible to have other types as keys.

So once we have the bucket, we need to go through each item and strcmp() the key. In kmod, since we use an array to store the items, we have a slightly better approach: the array is kept sorted and during lookup it’s possible to use bsearch(). However, as one can imagine, keeping the array sorted doesn’t come for free. We are speeding up lookups at the cost of slowing down insertions.

Thinking about this problem, the following came to mind: we use and benchmark complicated functions that do their best to give a good 32-bit value and then, with the modulo operation, we just throw most of it away. What if we could keep using that value? If we don’t mind the extra memory used to store one more value in the struct hash_entry above, we can. We store the hash value of each entry and then compare them when searching inside the bucket. Since comparing uint32 values is very fast, there’s not much point in keeping the entries sorted anymore and we can just iterate very quickly through all items in the bucket, checking first if the hash values match and only then strcmp()’ing the key. With this we drastically reduce the number of string comparisons in a lookup-intensive path, the time to add an item (since the array doesn’t need to be kept sorted anymore) and also the complexity of the code. The downside is the memory usage, with one extra 32-bit value per entry. The table below shows the results.

keylen      before   after
2-10         222.8   127.7 (-42.68%)
11-17        231.2   139.1 (-39.85%)
18-25        273.8   181.3 (-33.78%)
26-32        328.7   236.2 (-28.13%)
33-40        366.0   306.1 (-16.34%)
41-48        354.0   341.7 (-3.48%)
49-55        385.1   390.5 (1.40%)
56-63        405.8   404.9 (-0.21%)
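For reference, here is a minimal sketch of what the lookup inside a bucket looks like with the stored hash value (the struct layout and names are illustrative, not necessarily kmod’s exact internals):

#include <stdint.h>
#include <string.h>

struct hash_entry {
        uint32_t hashval;   /* full 32-bit hash, stored at insertion time */
        const char *key;
        void *value;
};

struct hash_bucket {
        struct hash_entry *entries;
        unsigned int used;
};

/* The cheap uint32 comparison filters out almost every entry, so
 * strcmp() only runs on real candidates. */
static void *bucket_lookup(const struct hash_bucket *bucket,
                           uint32_t hashval, const char *key)
{
        unsigned int i;

        for (i = 0; i < bucket->used; i++) {
                const struct hash_entry *e = &bucket->entries[i];

                if (e->hashval != hashval)
                        continue;        /* fast reject, no string compare */
                if (strcmp(e->key, key) == 0)
                        return e->value; /* found it */
        }

        return NULL;
}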

Hash functions: time to calculate the hash value

This was my original intention: to benchmark several hash functions and choose the best one based on data from the real world, not random or fabricated strings. My desire was also to profit as much as I could from today’s processors. So if there’s a new instruction, let’s put it to good use.

Based on that I came to know the crc32c instruction in SSE4.2. By using it we have a blazing fast way to calculate CRC32, capable of a throughput of around 1 char per clock cycle. I was also optimistic about it when I checked that its distribution was as good as the other contenders’. The figure below shows the time to calculate the hash value for each of the algorithms.

(Time to calculate the hash value by key length, for each algorithm)

As can be seen in the benchmark above, the winner is indeed crc32c. It’s much faster than the others. It’s also noteworthy that its time increases much less than the others’ as the key length goes up.
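For reference, this is roughly how the crc32c instruction can be used to hash a string with the SSE4.2 intrinsics (a sketch, to be compiled with -msse4.2; not necessarily the exact code used in these benchmarks):

#include <stdint.h>
#include <nmmintrin.h>   /* SSE4.2 intrinsics: _mm_crc32_* */

/* Hash a string with the crc32c instruction, one byte at a time.
 * Real implementations usually process 4 or 8 bytes per step and use
 * the byte form only for the tail; this is the simplest form. */
static uint32_t crc32c_hashval(const char *key, int keylen)
{
        uint32_t crc = 0xffffffff; /* common seed choice; any fixed seed works for table lookups */
        int i;

        for (i = 0; i < keylen; i++)
                crc = _mm_crc32_u8(crc, (uint8_t)key[i]);

        return crc;
}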

Of the other contenders, DJB is the worst one, reaching much higher times. It’s the simplest one to implement (see the sketch below), but in my opinion the others are small enough (though not that simple) to take its place.
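For comparison, the classic DJB function (djb2) really is just a few lines; this is a sketch of the common variant, which may differ slightly from the one benchmarked here:

#include <stdint.h>

/* Classic djb2: hash = hash * 33 + c, starting from 5381. */
static uint32_t djb_hashval(const char *key, int keylen)
{
        uint32_t hash = 5381;
        int i;

        for (i = 0; i < keylen; i++)
                hash = hash * 33 + (unsigned char)key[i];

        return hash;
}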

How much does the hash function affect the hash table lookup time? By lookup time I mean:

total_time = time to calculate the hash value + time to find the bucket + time to find the item inside the bucket

The figure below answers this question:

(Total lookup time by key length, for each algorithm)

 

We can see from the figure above that the crc32c implementation is the fastest one. However, the gain is not as big as when we considered only the time to calculate the hash value. This is because the other operations take much more time.

Conclusion

Although the CRC32 implementation is faster than its contenders and I was tempted to switch to it in kmod, I’m not really doing it. There would be a significant gain only if the keys were big enough. As noted above, the time to calculate the hash value with CRC32 grows much more slowly than with the others. If the keys were big enough, like 500 chars, then it could be a change worth making. It’s important to note that if we were to change to this hash function we would need to add an implementation for architectures other than x86, as well as introduce the boilerplate code to detect at runtime whether there’s support for the crc32c instruction and fall back to the generic implementation otherwise.
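A sketch of what that runtime detection could look like using GCC’s CPU-detection builtin (one possible approach; the function names are mine and this is not code kmod carries):

#include <stdint.h>

uint32_t crc32c_hashval(const char *key, int keylen);  /* SSE4.2 version */
uint32_t generic_hashval(const char *key, int keylen); /* portable fallback */

typedef uint32_t (*hashfunc_t)(const char *key, int keylen);

/* Pick the hash function once, at startup, based on CPU support.
 * __builtin_cpu_supports() is a GCC/clang builtin for x86; other
 * architectures would simply compile in the generic version. */
static hashfunc_t select_hashfunc(void)
{
#if defined(__x86_64__) || defined(__i386__)
        if (__builtin_cpu_supports("sse4.2"))
                return crc32c_hashval;
#endif
        return generic_hashval;
}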

The other two optimizations, although not groundbreaking, are easy to make and not intrusive. Nonetheless, if people want to check these hash functions against their own workload, I will make the code and benchmarks available in a repository other than kmod’s.

Back from Linux Plumbers

I’m back from the USA after one week attending the Linux Plumbers Conference. This was my first time at LPC, where I was part of the Core OS track, talking about “From libabc to libkmod: writing core libraries”.

It was a very good experience and I’m glad to have met so many developers, both kernel and userspace hackers. Some of them I only knew from IRC, mailing lists, etc., and it was a great time to share our experiences, discuss the current problems in Linux and even fix bugs :-). We finally seem to have reached a consensus on how module signing should be done – the outcome of Rusty Russell’s talk is that he will now start applying some pending patches. There will be no required changes to kmod, except a cosmetic one in modinfo to show whether a module is signed or not.

Rusty was also very helpful in fixing a long-standing bug in the Linux kernel: a call to init_module() returns that a module is already loaded even if it hasn’t finished its initialization yet. This used to be “fixed” in module-init-tools by a nasty hack adding a “sleep(10000)” if the module state (checked in sysfs) is “coming”. I say “fixed” because this approach is still racy, even though the race window is much shorter than without it. So we finally sat down and wrote a draft patch to fix it. This will probably reach Linus’ tree in the next merge window.

The above example only seconds what Paul McKenney said on his blog yesterday: “A number of the people I informally polled called out the hallway track as the most valuable part of the conference, which I believe to be a very good thing indeed!” – I was one of the people he informally polled ;-). I’d like to thank the committee and everyone involved in organizing this conference – it was a great experience.

Finally, you can find my slides below (or download them from Google Docs). I think the audio will be published soon. Meanwhile you may enjoy Lennart’s picture as a child on slide #5 (during the talk he claimed it’s not him, but I don’t believe him – they look too similar :-)).

ELC 2012

Hey, this is my report from ELC 2012. If you didn’t read the first part, about ABS 2012, you can read the previous post first.

ELC is one of my favorite conferences, as I can meet several talented people and have good conversations about Linux in embedded devices. This time was no exception and I enjoyed it very much. The main reason I was there was to present kmod, the new tool to manage kernel modules. But that would only be on the last day of the conference. Let’s start from the beginning.

To open the conference Jon Corbet gave his usual kernel report, starting from January 2011 and going through the events of each month: the mess in ARM, the death of the big kernel lock, userspace code in the kernel tree (should we put LibreOffice there, too?) and so on. Following this keynote I went to see Saving the Power Consumption of the Unused Memory. Loïc Pallard from ST-Ericsson talked about how memory is increasingly important to the total power consumption of embedded devices. We are going from the usual 512 MB on smartphones to 2-4 GB of DDR RAM. There are some techniques to reduce the power drained and he presented the PASR framework, which allows the kernel to turn on/off specific banks/dies of memory, since not all of them are used all the time. Later on, talking to the guys from Chromium OS, I realized that this is especially true when the device is sleeping. We may want to discard caches (and therefore use much less memory in sleep mode) and then turn off the banks not in use. In my opinion battery consumption is one of the most important issues today for embedded Linux devices: I’m tired of having to charge my smartphone every day or every X hours. I hope we can improve the current state by using techniques like the one presented in this talk.

In Embedded Linux Pitfalls, Sean Hudson from Mentor Graphics shared his experience in moving from closed embedded solutions to open source ones. Nice talk! I think people doing closed development should see presentations like this: one of the main reasons for failing in open source is not being able to talk to each other: HW guys not talking to SW guys, NIH, not playing by the rules of the communities and therefore having to carry a lot of patches, etc. I’ve always been involved with open source so I don’t know very well how things work for companies doing closed development, but I do know that more often than not we see those companies trying to participate in communities/open source and failing miserably. In my opinion one of the main reasons is that they fail to talk, discuss and agree on the right solution with the communities.

One of the best talks at ELC 2012 was Making RCU Safe for Battery-Powered Devices. Paul McKenney is one of the well-known hackers of the Linux kernel, maintaining the RCU subsystem. Prior to this talk I had no idea RCU had anything to do with power consumption. He went through a series of slides showing how and why RCU got rewritten several times in the past years, how he solved the problems reported by the community and how things get much more complicated with preemption and RT. He finished his presentation saying that the last decade was the most important of his career, and that it was because of the feedback he got from RCU being used in real life. I’d really love to see more people from academia realize this.

The next day Mike Anderson gave a great keynote about the Internet of Things. Devices on the Internet are surpassing the number of people connected and soon they will be much more important. It’s a great opportunity for embedded companies and for Linux to become the most important operating system in the world. Recent news already tells us that 51% of Internet traffic is non-human (although we can’t classify all of that as “good traffic”). Following his keynote I went to see Thomas Petazzoni from Free Electrons talk about Buildroot. I like Buildroot’s simplicity and from what Thomas said this is one thing they care about: Buildroot is a rootfs generator and not a meta-distro like OpenEmbedded. There were at least 3 people asking if Buildroot could support binary packages and he emphasized that it was a design decision not to support them. I like this: use the right tool for each job. I had already used Buildroot before to create a rootfs using uClibc and it was great to see that it was already packaging the latest version of kmod before I went to ELC.

At the end of the second day I participated in the Real-Time BoF with Frank Rowand. It was great to have Steven Rostedt and Paul McKenney there, as they contributed a lot to the discussion, pointing out the difficulties in RT, the current status of the RT_PREEMPT patches regarding mainline and forecasts of when it will be completely merged. There were some discussions about “can we really trust RT Linux? How does it compare with having an external processor doing the RT tasks?”. In the end people seemed to agree that it all boils down to what you have in your kernel (you probably don’t want to enable crappy drivers), how you tolerate failures (hard-RT vs soft-RT) and that RT is not a magic flag you turn on and you’re done: it demands profiling, kernel and application tuning and expertise in the field. People gave several examples of devices using the RT_PREEMPT patches: from robots and aircraft in space to cameras (the Sony cameras given away on the last day were one of the examples).

On Friday, the last day of the conference, I was much more worried about my presentation at the end of the day than about the other talks. Nonetheless I couldn’t miss Koen Kooi from Texas Instruments talking about the Beaglebone. It’s a very interesting device for those who like to DIY: it’s much smaller than its brothers like the Beagleboard and Pandaboard and still has enough processing power for lots of applications. Koen was displaying his slides using node.js running on a Beaglebone. What I’d like to see though is barebox replacing u-boot as the bootloader. If you attended Koen’s talk at ELCE last year, you know u-boot is one of the culprits for a longer boot. Jason from TI kindly gave me a Beaglebone so I can use it for testing kmod; when I have some spare time I’ll take a look at what’s missing to use barebox on it, too.

The last talk of the conference was mine: Managing Kernel Modules With kmod. I received good feedback from the people there: they liked the idea behind kmod – designing a library and then the tools on top of it. I had some issues with my laptop in the middle of my presentation, but it all went well. I could show how kmod works, the details behind the scenes, the short history of the project and how it’s replacing a well-known piece of Linux userspace tooling in all major desktop and embedded distros. When I was showing the timeline of the project I remember Mike Anderson saying: “tell us when it will be done”. I can’t really say it’s done now, but after the conference we already had versions 6 and 7 and, unlike earlier releases, the number of commits in the latest versions is very small. After 3-4 months the project is reaching a maintenance phase, as I said it would. If you would like to see my slides, download them here or see them online below. You can also watch the video of my talk, as well as all the others, on LF’s video website.


ANNOUNCE: kmod 3

Hey, kmod 3 is out. Really nice to finish this release. I was hoping to have it out between the holidays, but there were some major bugs pending. It’s nice to see udev from git already using it instead of calling modprobe for each module. Kay reported a hundred fewer forks on bootup after starting to use libkmod and libblkid.

It’s also nice to receive feedback about other architectures that we don’t have access to. With kmod 3, sh4 joined the other architectures that have been tested with kmod.

Since I’m already doing the announcements to the mailing lists, I’ll not repeat the NEWS here. Just look at the archives if you didn’t receive the email.

Happy new year!

Given enough eyeballs, all bugs are shallow

So, in the last post I said kmod 2 could be released sooner than expected if there were major bugs. Not much of a surprise, there was one: depending on the alias passed to the lookup function, we ended up blocked iterating over a list.

It’s now fixed in the git tree. Thanks to Ulisses Furquim for fixing it and to Dave Reisner for the bug report. We already have some other great stuff implemented, so we’ll soon have another release.

More great news is that we now have the maintainer of module-init-tools (Jon Masters) cooperating with us. We will discuss how the two projects will co-exist/merge. So, from now on the official mailing list of the project is linux-modules@vger.kernel.org.

ANNOUNCE: kmod 1

For some weeks now Gustavo Barbieri and I at ProFUSION have been working on a new library and a set of tools, libkmod and kmod respectively. This is the announcement of their first public release.

Overview

The goal of the new library, libkmod, is to offer other programs the needed flexibility and fine-grained control over insertion, removal, configuration and listing of kernel modules. Using the library, it’s possible to interact with kernel modules with simple pieces of code, so there’s no need to rely on other tools for that. This has been lacking on Linux for a while and it’s one of the items in the Plumber’s Wish List for Linux. Quoting it:

provide a proper libmodprobe.so from module-init-tools:
Early boot tools, installers, driver install disks want to access
information about available modules to optimize bootup handling.

We went one step further: not only are we now able to provide an API to load and remove kernel modules, but all the other common operations are also being added to this API. The first user of this API will be udev. On a recent Linux desktop (and also on several embedded systems), when the computer is booting up, udev is responsible for checking the available hardware, creating device nodes under /dev (or at least configuring their permissions) and loading kernel modules for the available hardware. In a distribution kernel it’s pretty common to build most things as modules. Udev reads the /sys filesystem to check the available hardware and tries to load the necessary modules. This translates into hundreds of calls to the modprobe binary, several of them just to find out that the module is already loaded or built into the kernel. With libkmod it’s possible for udev to do the whole job with a few lines of code, benefiting from configurations and indexes already opened and parsed. We’ve been talking to Kay Sievers (udev’s maintainer) and Lennart Poettering (systemd’s maintainer) about this and we are looking forward to having udev use libkmod soon.

Example code:

To insert a module by name, without any options or strange configurations, it’s sufficient to do the following (error handling omitted for ease of comprehension – see the documentation for possible errors):

	struct kmod_ctx *ctx = kmod_new(NULL, NULL); /* new context with default dirs/config */
	struct kmod_module *mod;
	kmod_module_new_from_name(ctx, name, &mod);  /* module handle from its name */
	kmod_module_insert_module(mod, 0, NULL);     /* insert it: no flags, no extra options */
	kmod_module_unref(mod);
	kmod_unref(ctx);

Tools

Besides the library, we are re-designing the module-init-tools tools on top of the new API we created. With this first version we are already providing compatible binaries for lsmod, rmmod, insmod and modprobe, the last one with some functionality missing. In the next versions we plan to fill the gaps in the provided tools and add all the others, like depmod and modinfo.

License

We try to avoid issues regarding licenses: the library is licensed under “LGPLv2 or later” and the tools are under “GPLv2 or later”. There’s still lots of work to be done and places to optimize. We greatly appreciate contributions from other developers.

Roadmap

The API is not set in stone and is going to see some changes in future releases as we see fit while finishing the implementation of all the tools. Below is the list of features already implemented.

kmod 1

libkmod provides the necessary API for:

  • List modules currently loaded
  • Get information about loaded modules such as initstate, refcount, holders, sections, address and size
  • Lookup modules by alias, module name or path
  • Insert modules: options from configuration and extra options can be passed, but flags are not implemented, yet
  • Remove modules
  • Filter list of modules using blacklist
  • For each module, get its list of options and install/remove commands
  • Indexes can be loaded on startup to speed up lookups later

Tools provided with the same set of options as in module-init-tools:

  • kmod-lsmod
  • kmod-insmod
  • kmod-rmmod
  • kmod-modprobe, with some functionality still missing (use of softdep, dump configuration, show modversions)

Following is a rough roadmap for future releases:

kmod 2

  • Provide the API for features missing in kmod-modprobe, namely: dump configuration and indexes, soft dependencies, install and remove commands. Features relying on ELF manipulation will still be missing;
  • Provide all the tools available in module-init-tools. Some of them, like depmod, may be entirely copied from module-init-tools for later conversion;

kmod 3

  • Provide a single kmod tool that will abstract all the others, accepting commands like “kmod list”, “kmod remove”, “kmod insert”. Distributions may then use symlinks from current tools to the kmod binary and we can kill the ‘kmod-*’ test tools that we are introducing in kmod 1;

We thoroughly test the features implemented in kmod, but like any other software it may contain bugs that we didn’t find; we may decide to release new versions between the versions above, in which case these numbers change. Otherwise kmod 2 will already be sufficient for udev to pick it up as a dependency and start benefiting from the fine-grained control over its operations with kernel modules.

Repositories

The repository for this project is located at http://git.profusion.mobi/cgit.cgi/kmod.git/

Package with kmod 1 source code can be downloaded from: http://packages.profusion.mobi/kmod/

Thanks

Lastly I’d like to thank Kay Sievers for his support in reviewing code, giving advice and helping to design kmod.

AndroidConf 2011

Today I gave a talk at AndroidConf about “Modificando a API do Android” (Modifying the Android API). References to the AVRCP project I talked about can be found in a previous post of mine. The slides are available below.

EDIT 02/12/2011: I added a note to slide 7 describing what I said during the presentation about the use of IDEs.

For those who can’t view it above, here is the direct link to the presentation.

LinuxCon Brazil

I’m back from LinuxCon Brazil, which was held in Sao Paulo on 17 and 18 November. Before the first keynote, ProFUSION was announced as becoming a member of the Linux Foundation :-)! Our logo is already on their members page.

It was also a great time to talk again to some developers I had met at LinuxCon Europe last month, and to some who were not present there. One talk I really liked was given by Eugeni Dodonov about the Intel Linux Graphics stack. It was a good overview of the whole graphics stack in Linux, with attention to Intel’s boards and drivers. Gustavo Barbieri talked about HTML5 and WebKit, and two other ProFUSION employees – Rafael Antognolli and Bruno Dilly – presented “Application Development using the Enlightenment Foundation Libraries (EFL)”.

This time I also gave a presentation entitled “How to become an open source developer”. My focus was on the Brazilian crowd out there willing to start contributing to open source projects, looking for a job or just trying to understand why we do open source development. I hope it was useful for them, and for you reading this blog, too. So, below are the slides of my presentation:

For those of you who cannot see the file embedded above or want the direct link, here it is in PDF format.

I also talked to some important people regarding a new project of mine. Stay tuned for a new library soon.

Back from Kernel Summit, LinuxCon Europe and ELCE

Last week from 23-Oct to 28-Oct I was at 3 conferences in Prague, Czech Republic, together with Gustavo Barbieri, Gustavo Padovan and Ulisses Furquim: the ProFUSION crew in Prague.

Starting with the Kernel Summit, I had the opportunity to join the Bluetooth Summit and participate in the discussions regarding this subsystem in Linux, both in kernel and user space. We had a lot of hot topics to discuss, including the upcoming BlueZ 5.0, Bluetooth 3.0 (high speed) and Bluetooth 4.0 (low energy), and I could also demonstrate the work I’ve been doing on the AVRCP profile. I’m glad it was well received by the other developers. Some of them I didn’t know personally, such as Luiz von Dentz, Claudio Takahasi and Vinicius Gomes. Others I had the pleasure of meeting again, like Marcel Holtmann and Johan Hedberg.

(We didn’t discuss only Bluetooth-related things. We noticed that more than 1/3 of the people there, working on the core of Bluetooth in Linux, were Brazilian, and soon we were discussing with Samuel Ortiz – a Frenchman, maintainer of ConnMan – who the best soccer player is :-).)

Daniel Wagner from BMW also brought up some interesting scenarios of multiple devices connected through Bluetooth in car kits and helmets (like this one): HFP, A2DP, HSP (and maybe also AVRCP?), all of them interacting and working together at the same time. Since the GStreamer conference was also taking place at the same facility, we could also talk to PulseAudio developers. In the end, it seems BlueZ and PulseAudio are working pretty well together, though we still have to polish some rough edges for use cases like this.

Being at the Kernel Summit was a great time to meet developers from other parts of the kernel too, such as Steven Rostedt and Peter Zijlstra, with whom I had more contact some time ago when I was working on the Linux scheduler.

When the Kernel Summit was over (on Tuesday), LinuxCon and ELCE were taking off. It was great to once more have these two conferences co-located and be able to attend talks at both of them. There were several talks I’d have liked to attend, but some of them overlapped. I’m looking forward to seeing the recorded talks later this year[1]. It would be too extensive to detail each one here, so I’m just detailing some of the ones that grabbed my attention the most.

Gustavo Barbieri and Sulamita Garcia talked about Demystifying HTML5 and how it can be used to develop apps. Gustavo focused on the EFL port of WebKit (of which I’m one of the developers ;-)) and the underlying technologies. It seems like the mentality of “let’s do apps in a very high-level language” instead of “providing a native language in an SDK” is coming back. Unlike what happened some years ago, maybe this time it will work out. Only the future will tell.

Since this year I got involved with Android and platform development, I went to several Android-related talks. Leveraging Android’s Linux Heritage was really good stuff, showing how to replace some parts of the Android platform: bash instead of the I-wanna-be-a-shell that comes with Android by default, putting GStreamer in, optimizing some parts of the code, etc. In the same vein there was another talk entitled Build Community Android Distribution and Ensure the Quality. Interesting (but not surprising) to see how hard it is to contribute to AOSP and how different Android is from the other open source projects we are used to.

Other interrelated areas that I’m interested in (maybe because I work for a company dealing with embedded systems :-)) are system initialization, fast boot and development boards (such as the Pandaboard). Therefore I attended systemd Administration in the Enterprise and Integrating systemd: Booting Userspace in Less Than 1 Second. The former, given by Lennart and Kay, focused on detailing some systemd features for people running enterprise servers, while in the latter Koen told us about his experience reducing boot time by using systemd on a Pandaboard. In this last talk I also met Jean Christophe, one of the developers of barebox (a bootloader aiming to replace U-Boot). Last time I checked, the Pandaboard was not in the list of supported boards, but I was greatly surprised that now it is. Barebox has the advantages of running with caches enabled, having a much more beautiful architecture and being much faster than u-boot. In summary, IMHO it’s a bootloader done right.

Another interesting talk was Tuning Linux For Embedded Systems: When Less is More, in which Darren Hart gave instructions to reduce boot time and image size in very resource-constrained scenarios (he was aiming at a rootfs of only 4MB and a total boot time under a second). Some key things to know are how to investigate what is not important to the application, what can be removed from kernel/userspace in order to fit the requirements, and when, why and what to replace. Last but not least, in Developing Embedded Linux Devices Using the Yocto Project and What’s new in 1.1, David Stewart gave a status update on the Yocto project. It’s interesting how the project evolved over this year, and the next time someone doing embedded systems thinks about rolling their own distro from scratch, it would be good to look at Yocto.

I met a lot of other people, to whom I apologize for not citing their names here; this post would be even bigger than it already is. I had a really great time there and I hope to keep going to these conferences. The next one is LinuxCon Brazil, at which I’ll talk about How to Become an Open Source Developer. I look forward to seeing all of you there.

 

I’d like to thank the Linux Foundation for organizing such a great event and ProFUSION for allowing and sponsoring me to be there.

Side note: the problem is that now I want to do a lot of things in different projects without having the time to: systemd, Linux kernel, BlueZ, pandaboard, barebox, Android, etc :-)

 

[1] UPDATE: videos have been published - http://free-electrons.com/blog/elce-2011-videos/