AppArmor: First Impressions

Well, after playing around with Gutsy for a little while, I started looking into AppArmor, a software restriction tool. It's quite similar to SELinux, though easier to use (and faster, according to their site). AppArmor lets you specify which files/folders an application can access while running, so if it is compromised, it can only touch the files it should (preventing, say, Apache from reading /etc/passwd). There are a few pre-made profiles that can be installed by typing:

    sudo apt-get install apparmor-profiles
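
To give you an idea of what these look like, here's a tiny, made-up excerpt in AppArmor's profile syntax (the installed profiles end up in /etc/apparmor.d/); mydaemon and its paths are just for illustration:

    # Hypothetical profile: confine /usr/sbin/mydaemon to its own files
    /usr/sbin/mydaemon {
      #include <abstractions/base>

      /etc/mydaemon/** r,
      /var/log/mydaemon.log w,
    }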

From there, you can view the status of AppArmor using the command:

    sudo apparmor_status

This should return something like this:

apparmor module is loaded.
15 profiles are loaded.
9 profiles are in enforce mode.
   /usr/sbin/ntpd
   ...
   /usr/sbin/named
   /usr/sbin/avahi-daemon
6 profiles are in complain mode.
   /sbin/klogd
   ..
   /bin/ping
5 processes have profiles defined.
3 processes are in enforce mode :
   /usr/sbin/avahi-daemon (4958)
   /usr/sbin/cupsd (4595)
   /usr/sbin/avahi-daemon (4957)
2 processes are in complain mode.
   /sbin/klogd (4397)
   /sbin/syslogd (4345)
0 processes are unconfined but have a profile defined.

    The difference between enforce and complain is what AppArmor does when a profile violation occurs. In complain mode, it will log the offense but let it happen; in enforce mode, it will deny access to that file (and log it). To switch modes for a profile, use the aa-enforce <binname> and aa-complain <binname> commands as root.
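
For example, to flip the /bin/ping profile from the status output above between the two modes:

    sudo aa-complain /bin/ping
    sudo aa-enforce /bin/ping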

     You should probably keep things set to complain mode until you have used logprof to update the profiles. Speaking of which, logprof parses the logs for AppArmor violations and asks you what to do about each one: allow it or deny it. It will also let you widen access using globs to make a rule more generic (e.g. /proc/*/foo rather than /proc/17238/foo). Once you are done, you are given the option to save your changes. Once you've used the program's full functionality (e.g. Firefox downloads and Flash/applets) and finished the profile, you can switch it into enforce mode. Congratulations! You've added another link onto your computer fence.
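
To run it over the current logs, just invoke it as root (depending on your AppArmor version, the command may be installed as aa-logprof instead):

    sudo logprof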

    What about adding more programs to be watched? Glad you asked: for that, there is genprof. Simply run it as root with the path to the executable you'd like to add; it will see which libraries the program requires, build a simple profile, and put it in complain mode. Now run the program and use it how you normally would, making sure to test all the features that might access files. Then go back to genprof and hit 's': it will show you all the files the program tried to access, just like logprof. After adjusting the profile, you are good to go. If you think you did a good job, you can set it into enforce mode, or keep it in complain mode for a while longer to make sure you got all the needed files.
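
For example (the path here is just a placeholder for whatever program you want to profile):

    sudo genprof /usr/bin/someprogram

genprof will sit waiting while you run the program in another terminal, then scans the logs for its file accesses once you hit 's'.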

    Congratulations! You're now an AppArmor pro! If you'd like more information, check out AppArmor Geeks or the Ubuntu Docs. As an interesting side note, I found that Firefox looks at /etc/passwd, though if I block it, it still works fine. I'm sure there is a reason (Firefox is open source, after all), but it still makes you think.

                                               Peace and chow,

                                             ranok

Life as a semi-early adopter

    Well, in preparation for Ubuntu 7.10 (Gutsy) to arrive, I did what I normally do: I installed Tribe 5 on my backup machine (Compaq Presario V2000) to test for any glaring bugs or upgrade problems, and if all goes well, I upgrade my main machine (IBM T60). I do this so that when the final release comes out, I have less to download while the servers are slow.

    So, the upgrade to Gutsy on my Compaq was a breeze: no hitches, everything works fine. It's pretty neat being an early adopter and watching the new version take shape with every update you do. After a few weeks, once the updates were less dramatic, I plugged in my T60 and did the upgrade. I was a little suspicious, as Metacity seemed messed up while the update was in progress; however, I assumed it would right itself after a reboot.

    After the update had completed and I had restarted my computer, things started off badly. As it started booting, it would reach a certain stage and then print an endless stream of '[xxxxxxxx] Device Mapper: dm-linear: Device lookup failed'. After looking around the internet for a few minutes, I determined that this is related to LVM, which I didn't use (I had a similar error when upgrading to Feisty with mdadm). I hit CTRL-ALT-DEL, and it continued to boot as normal. The fix that made things work properly was to 'apt-get remove evms'. After fixing that and rebooting, I had X11 issues: it would try to launch gdm, fail, then try again, so it was tough getting to a virtual terminal to fix the problem (there were two Display 1's with different resolutions). However, the tool to change these settings is much nicer now, and can help set up external monitors, which previously required an xorg.conf edit.

    So, life is good now. Compiz is working great (though I can't get it to use the cube setting), as is the new Thunderbird. There are still some bugs to be worked out (my sshd crashed, along with a number of GNOME applets), but I think that this release will really help push Linux out onto the desktop market.

                                          Peace and chow,

                                            ranok

Regenerative DHT

Hello avid readers ;),

     Today I got a very rough proof-of-concept regenerative distributed hash table (DHT) working. It's a peer-to-peer setup. When the client makes a request, it queries the master server (version two will make it truly p2p) and gets the address of the host holding the data for that hash key. If that host is down or doesn't respond, the client returns to the master and gets the next host that has a copy of that key slice. If that host is up, the client downloads the entire key slice and tells the master server that it's now replacing the down host. If there are enough clients making requests, downed servers should be replaced fast enough to keep the data intact.
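
    To make that flow concrete, here is a minimal, in-memory sketch of the client logic in Python. Everything here (Master, Host, and the method names) is a hypothetical stand-in for the real network protocol, which I'm not showing:

    # In-memory sketch of the lookup/failover/regeneration flow described
    # above. All names (Master, Host, fetch, download_slice, replace) are
    # hypothetical stand-ins; the real wire protocol isn't shown here.

    class HostDown(Exception):
        """Raised when a storage host fails to answer."""

    class Host:
        def __init__(self, slice_data, up=True):
            self.slice_data = slice_data      # this host's copy of the key slice
            self.up = up

        def fetch(self, key):
            if not self.up:
                raise HostDown()
            return self.slice_data[key]

        def download_slice(self):
            if not self.up:
                raise HostDown()
            return dict(self.slice_data)

    class Master:
        def __init__(self, replicas):
            self.replicas = replicas          # hosts holding copies of the slice

        def lookup(self, key, exclude=()):
            # Stub: hand back the next candidate host for this key's slice.
            for host in self.replicas:
                if host not in exclude:
                    return host
            return None

        def replace(self, dead, replacement):
            self.replicas = [h for h in self.replicas if h not in dead]
            self.replicas.append(replacement)

    class Client(Host):
        """A client doubles as a Host once it has regenerated a key slice."""
        def __init__(self, master):
            super().__init__(slice_data={})
            self.master = master

        def get(self, key):
            tried = []
            while True:
                host = self.master.lookup(key, exclude=tried)
                if host is None:
                    raise KeyError(key)       # every replica is down
                try:
                    value = host.fetch(key)
                except HostDown:
                    tried.append(host)        # back to the master for the next copy
                    continue
                if tried:
                    # A replica was down: copy the live host's whole key slice
                    # and tell the master we now stand in for the dead host(s).
                    self.slice_data.update(host.download_slice())
                    self.master.replace(dead=tried, replacement=self)
                return value

    # Example: one dead replica gets replaced by the querying client.
    master = Master([Host({"k": "v"}, up=False), Host({"k": "v"})])
    print(Client(master).get("k"))            # 'v'; the client now serves the slice

    The design point is the last step: whoever reads from a slice becomes a replica of it, so the most popular data should heal itself the fastest.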

    Currently, it only allows users to query and regenerate; I haven't added any logic to allow for updates or additions, but I'm going to keep plugging away at it. I also want to add some logic to the master server so it can split up the entire DHT once it's redundant enough; that way each server is under less strain.

                         Peace and chow,

                                ranok