How to brew install OpenSSL 1.0.2p (for Python3.6)

brew install --ignore-dependencies -f openssl

Posted in Random

Linux users and groups in PostgreSQL database

Standard Linux user and group accounts are defined in three files:

  • /etc/passwd
  • /etc/shadow
  • /etc/group

These files store user accounts and group information one record per line, as fields separated by “:”. That kind of structure suffices for most user and group authentication needs, but problems arise when you want, for example, a centralized authentication database for your network, or more flexible means of managing user accounts.
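To make the colon-separated format concrete, here is how the seven fields of an /etc/passwd line unpack (a standalone illustration; the sample line is made up):

```python
# One /etc/passwd-style record: seven ":"-separated fields.
line = "testuser:x:10000:10000:Test User:/home/testuser:/bin/bash"
name, passwd, uid, gid, gecos, homedir, shell = line.split(":")

print(name, int(uid), shell)  # testuser 10000 /bin/bash
```

The same field order reappears later in the passwd_table schema, which is what makes the database a drop-in replacement for the flat file.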

I am currently designing and developing solution for hosting servers that requires defining customer Linux accounts. I could use for e.g. LDAP service for storing this information, but I prefer database, since it’s easier to develop tools and generally more flexible solution in the context of whole system.

I am a fan of PostgreSQL database server and it seems that there is a little nifty plugin for NSS (Name Service Switch) which allows you to store this information in database. The Name Service Switch is a standard Linux facility for common information and name resolution, which allows you to combine this information from multiple sources (flat files, LDAP, NIS and also various databases). We will use this facility to implement user and group information and authentication stored in PostgreSQL database.
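As a quick illustration of NSS at work: Python’s pwd and grp modules resolve accounts through the same C-library NSS machinery, so once the pgsql backend is active these calls will transparently return database-defined accounts as well (a sketch; runs on any Linux box):

```python
import pwd
import grp

# These lookups go through the C library, which consults the sources
# listed in /etc/nsswitch.conf -- flat files, LDAP, or a database backend.
root_entry = pwd.getpwuid(0)
print(root_entry.pw_name, root_entry.pw_dir)  # typically: root /root

root_group = grp.getgrgid(0)
print(root_group.gr_name)
```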

Enough talk, let’s get to action. The NSS plugin you need is named libnss-pgsql2. On some systems it may be lib64nss-pgsql2, or even libnss-pgsql or lib64nss-pgsql. Beware, however, that you need version 2 of the plugin: some systems use a package name without the number at the end but still ship the correct version, while others use libnss-pgsql to indicate an older version of the plugin, which you can use, but which is out of scope for this post. I am using Debian 7.8, so the following commands are for that system, but this tutorial will still be relevant for other distributions after slightly adjusting the commands (i.e. package manager, etc.).

We’ll begin by installing our plugin from terminal:

$ sudo apt-get install libnss-pgsql2

After this operation completes you’ll have default configuration files ready; we’ll get to them soon. Remember to install this package using sudo if you are a normal (i.e. non-administrative) user, or to issue these commands as root. Notice also that when installing this plugin, apt-get suggests installing nscd. This is the Name Service Cache Daemon, which speeds up resolving users and groups by caching them in memory, but for now we don’t want it interfering with setting up the libnss-pgsql2 plugin. We’ll get back to nscd later.

After installing NSS plugin we have to create our PostgreSQL database and database users which will be used to access system user and group information. I assume you already have PostgreSQL server installed. If not, install it and set it up first before you continue!

Let’s log in as the postgres user:

$ sudo su - postgres

We should now see a prompt like this:

postgres@localhost:~$

Now we create two users for accessing passwd, group and shadow information:

  • nss – which will be used to access passwd and group information
  • nssadmin – which will be used to access shadow information

Remember that we have to create two distinct users, because we don’t want to give non-administrative users access to the information stored in shadow: it contains password hashes, which may be abused by malicious users (i.e. attackers). Shadow information should be available only to an administrative user (mostly the root account). Let’s create those two users:

postgres@localhost:~$ createuser -P nss
Enter password for new role: PASSWORD
Enter it again: PASSWORD
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

This will create a new PostgreSQL user (role) named nss. Remember to provide a password for user nss, substituting PASSWORD with your own. We also deny this user superuser status, database creation, and creation of further PostgreSQL user accounts (which are called roles). We repeat this procedure for the user (role) nssadmin, choosing a password different from the nss role’s:

postgres@localhost:~$ createuser -P nssadmin
Enter password for new role: PASSWORD
Enter it again: PASSWORD
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

Now we need to create and set up database used by libnss-pgsql2. Still being logged in as user postgres we do:

postgres@localhost:~$ createdb -O postgres -E utf-8 unix

This will create a database named unix, owned by role postgres, with encoding set to UTF-8. Fairly standard. However, if your postgres role is not considered secure, if you have changed your PostgreSQL administrative role to another one, or if you would like a dedicated user for updating and managing the system accounts database, change the option -O postgres to -O your_role_name. Just remember to create that role first if it does not exist (as I’ve shown you before).

Now let’s verify that we have access to this newly created database. Still logged in as the postgres user, type:

postgres@localhost:~$ psql unix
psql (9.1.15)
Type "help" for help.


If you see no errors and something like above, we have our database working. Type in “\q” to quit PostgreSQL shell. Now we have to create the database structure. Here’s what it should look like:

-- Default table setup for nss-pgsql

CREATE SEQUENCE group_id MINVALUE 10000 MAXVALUE 2147483647 NO CYCLE;
CREATE SEQUENCE user_id MINVALUE 10000 MAXVALUE 2147483647 NO CYCLE;

CREATE TABLE "group_table" (
 "gid" int4 NOT NULL DEFAULT nextval('group_id'),
 "groupname" character varying(16) NOT NULL,
 "descr" character varying,
 "passwd" character varying(20),
 PRIMARY KEY ("gid")
);

CREATE TABLE "passwd_table" (
 "username" character varying(64) NOT NULL,
 "passwd" character varying(128) NOT NULL,
 "uid" int4 NOT NULL DEFAULT nextval('user_id'),
 "gid" int4 NOT NULL,
 "gecos" character varying(128),
 "homedir" character varying(256) NOT NULL,
 "shell" character varying DEFAULT '/bin/bash' NOT NULL,
 PRIMARY KEY ("uid")
);

CREATE TABLE "usergroups" (
 "gid" int4 NOT NULL,
 "uid" int4 NOT NULL,
 PRIMARY KEY ("gid", "uid"),
 CONSTRAINT "ug_gid_fkey" FOREIGN KEY ("gid") REFERENCES "group_table"("gid"),
 CONSTRAINT "ug_uid_fkey" FOREIGN KEY ("uid") REFERENCES "passwd_table"("uid")
);

CREATE TABLE "shadow_table" (
 "username" character varying(64) NOT NULL,
 "passwd" character varying(128) NOT NULL,
 "lastchange" int4 NOT NULL,
 "min" int4 NOT NULL,
 "max" int4 NOT NULL,
 "warn" int4 NOT NULL,
 "inact" int4 NOT NULL,
 "expire" int4 NOT NULL,
 "flag" int4 NOT NULL,
 PRIMARY KEY ("username")
);
This SQL defines two sequences: one for groups and one for user accounts. You can adjust MINVALUE to set starting UID and GID accordingly. The above SQL defines four tables:

  • group_table – which is equivalent for /etc/group
  • passwd_table – which is equivalent for /etc/passwd
  • shadow_table – which is equivalent for /etc/shadow
  • usergroups – which stores the relation between passwd_table and group_table, defining the supplementary groups a user is also assigned to (the primary group is stored in passwd_table, so you shouldn’t repeat it in the usergroups table)
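The way a user’s primary and supplementary groups combine can be sketched in a few lines of plain Python (purely illustrative data; the names and IDs are made up):

```python
# Illustrative only: how the tables relate when resolving a user's groups.
passwd_table = {"testuser": {"uid": 10000, "gid": 10000}}   # gid = primary group
group_table = {10000: "testgroup", 10001: "developers"}
usergroups = [(10001, 10000)]                               # (gid, uid) pairs

def groups_for(username):
    uid = passwd_table[username]["uid"]
    primary = passwd_table[username]["gid"]
    # Supplementary groups come from the usergroups relation, minus the primary.
    supplementary = [gid for gid, u in usergroups if u == uid and gid != primary]
    return [primary] + supplementary

print(groups_for("testuser"))  # [10000, 10001]
```

This mirrors what the groups_dyn query in the plugin configuration does on the SQL side.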

You should save the above SQL definition in a file db_schema.sql and then under user postgres do:

postgres@localhost:~$ psql unix < db_schema.sql

If no errors occurred, your database schema is set up in database unix. Now let’s verify that everything is OK. Type in psql unix and issue “\d” after logging in to the unix database:

postgres@srv01:~$ LC_ALL=en_US.UTF8 psql unix
psql (9.1.15)
Type "help" for help.

unix=# \d
              List of relations
 Schema |     Name     |   Type   |  Owner
 public | group_id     | sequence | postgres
 public | group_table  | table    | postgres
 public | passwd_table | table    | postgres
 public | shadow_table | table    | postgres
 public | user_id      | sequence | postgres
 public | usergroups   | table    | postgres
(6 rows)

If you see something similar, the database schema is properly set up. Still in the PostgreSQL shell, we now have to grant privileges to the two new roles we defined before. You can do this by typing:

unix=# grant select on passwd_table to nss;
unix=# grant select on group_table to nss;
unix=# grant select on passwd_table to nssadmin;
unix=# grant select on group_table to nssadmin;
unix=# grant select on shadow_table to nssadmin;
unix=# grant select on usergroups to nssadmin;
unix=# grant select on usergroups to nss;

This will grant the SELECT privilege on tables passwd_table, group_table and usergroups to role nss, and the SELECT privilege on all tables to role nssadmin. We don’t want to grant any other privileges on those tables to these two users, since the NSS facility will use them read-only. Watch out: never grant the shadow_table privilege to the nss role!

Now we can quit the PostgreSQL shell by typing “\q” and then log out from the postgres system account by typing “exit” or pressing CTRL+D. Let’s verify that our new roles, nss and nssadmin, have access to our database. Under a normal user account type:

wolverine@localhost:~$ psql -U nss -W unix
Password for user nss:
psql (9.1.15)
Type "help" for help.


and then, if no errors occurred, type in the PostgreSQL shell:

unix=> select * from passwd_table;
 username | passwd | uid | gid | gecos | homedir | shell
(0 rows)

unix=> select * from group_table;
 gid | groupname | descr | passwd
(0 rows)

unix=> select * from usergroups;
 gid | uid
(0 rows)

unix=> select * from shadow_table;
ERROR:  permission denied for relation shadow_table

This shows that role nss has the SELECT privilege on tables passwd_table, group_table and usergroups, but not on shadow_table – which is exactly what we want. Do the same verification for user nssadmin and you should see something like this:

wolverine@srv01:~$ psql -U nssadmin -W unix
Password for user nssadmin:
psql (9.1.15)
Type "help" for help.

unix=> select * from passwd_table;
 username | passwd | uid | gid | gecos | homedir | shell
(0 rows)

unix=> select * from group_table;
 gid | groupname | descr | passwd
(0 rows)

unix=> select * from usergroups;
 gid | uid
(0 rows)

unix=> select * from shadow_table;
 username | passwd | lastchange | min | max | warn | inact | expire | flag
(0 rows)

This shows that role nssadmin has permission to SELECT from all tables. If any errors occurred during the above verification, make sure that roles nss and nssadmin have the SELECT permission properly granted. Sometimes it may also be necessary to grant access to the database schema itself; consult the PostgreSQL documentation on how to do it.

You may wonder why I have spent so much time ensuring proper database privileges. If you fail to get them right, you will have a hard time debugging why libnss-pgsql is not working. The scarce documentation for libnss-pgsql doesn’t help either, and there is almost no information available on Google if you go looking for help. So make sure your database server works properly and the roles have the necessary privileges on the tables. Unfortunately, there is no way to debug or see logs for the libnss-pgsql plugin, so you have to be extra careful with this step.

With the database properly set up we can move on to the configuration files for libnss-pgsql. There are two files in the /etc directory which handle querying information from your database and feeding it to the NSS facility.

The first one is /etc/nss-pgsql.conf and should look like this:

connectionstring        = hostaddr= dbname=unix user=nss password=PASSWORD connect_timeout=1
# you can use anything postgres accepts as table expression

# Must return "usernames", 1 column, list
getgroupmembersbygid    = SELECT username FROM passwd_table WHERE gid = $1
# Must return passwd_name, passwd_passwd, passwd_gecos, passwd_dir, passwd_shell, passwd_uid, passwd_gid
getpwnam        = SELECT username, passwd, gecos, homedir, shell, uid, gid FROM passwd_table WHERE username = $1
# Must return passwd_name, passwd_passwd, passwd_gecos, passwd_dir, passwd_shell, passwd_uid, passwd_gid
getpwuid        = SELECT username, passwd, gecos, homedir, shell, uid, gid FROM passwd_table WHERE uid = $1
# All users
allusers        = SELECT username, passwd, gecos, homedir, shell, uid, gid FROM passwd_table
# Must return group_name, group_passwd, group_gid
getgrnam        = SELECT groupname, passwd, gid FROM group_table WHERE groupname = $1
# Must return group_name, group_passwd, group_gid
getgrgid        = SELECT groupname, passwd, gid FROM group_table WHERE gid = $1
# Must return gid.  %s MUST appear first for username match in where clause
groups_dyn      = SELECT ug.gid FROM passwd_table JOIN usergroups AS ug USING (uid) WHERE username = $1 AND ug.gid <> $2
allgroups       = SELECT groupname, passwd, gid  FROM group_table

Remember to substitute PASSWORD with your nss role password.

The second file is /etc/nss-pgsql-root.conf and should look like this:

# example configfile for PostgreSQL NSS module
# this file must be readable for root only

shadowconnectionstring = hostaddr= dbname=unix user=nssadmin password=PASSWORD connect_timeout=1

#Query in the following format
#shadow_name, shadow_passwd, shadow_lstchg, shadow_min, shadow_max, shadow_warn, shadow_inact, shadow_expire, shadow_flag
shadowbyname = SELECT * FROM shadow_table WHERE username = $1
shadow = SELECT * FROM shadow_table

Also remember to substitute PASSWORD with the nssadmin role password. If you fail to do this, you may render your system completely inaccessible! Both configuration files must be owned by root, and the second one must be readable only by root. Ensure they have proper permissions set:

wolverine@localhost:~$ sudo chown root:root /etc/nss-pgsql.conf /etc/nss-pgsql-root.conf
wolverine@localhost:~$ sudo chmod 644 /etc/nss-pgsql.conf
wolverine@localhost:~$ sudo chmod 600 /etc/nss-pgsql-root.conf

Now we have to be extra careful! I recommend leaving a second terminal open with an editor on /etc/nsswitch.conf until we verify everything works as it should. If there are errors or the plugin is not working properly, YOU WILL DISABLE ACCESS TO THE WHOLE SYSTEM (i.e. ssh, login and other services depending on system user accounts). Do not log out from the root account on at least one terminal before you make sure everything works properly!

Let’s log in as root:

sudo su

and then open up /etc/nsswitch.conf in vim or another console editor. Do the same on another terminal console (just so we can be sure to revert to previous configuration if anything goes wrong). When you have opened /etc/nsswitch.conf in editor, you have to change three lines to look like this:

passwd:     pgsql compat
group:      pgsql compat
shadow:     pgsql compat

Your file may have files instead of compat; in that case keep files and just put pgsql first:

passwd: pgsql files
group: pgsql files
shadow: pgsql files

Save the file and close it (leave it open in the other terminal). What we have done is tell NSS to first look for users in the database and, if that fails, fall back to the /etc/passwd, /etc/shadow and /etc/group files.

WARNING! The documentation for libnss-pgsql2 plugin states that you should state compat or files first and after this pgsql. THIS IS WRONG AND MAY RENDER YOUR SYSTEM UNUSABLE! The same goes for “[SUCCESS=continue]”. Do not use this statement in /etc/nsswitch.conf because it DOESN’T WORK PROPERLY and WILL DENY ACCESS TO ALL USERS!

Now we have to test if NSS is still resolving users and groups. You can do this by typing in:

getent group
getent passwd
getent shadow

Do this under root and under normal user. For root user you should see entries for group, passwd and shadow (essentially what is currently available in /etc files). The normal user should see group and passwd entries, but running getent shadow should not return anything. Here’s an example:

root@localhost:~# getent group

If any of the getent commands hangs or does not return entries, it indicates a problem with the libnss-pgsql2 configuration or nsswitch.conf. In that case I recommend reverting to the original /etc/nsswitch.conf and double-checking everything: that the PostgreSQL server is running, that the database exists and has the proper schema, and that the roles have the proper privileges. Also make sure that your pg_hba.conf is set up properly and that PostgreSQL is accessible through a TCP socket on localhost (or whatever other address you use if PostgreSQL runs on another server).

If all getent commands behaved as described and returned entries when they should, everything is working properly and our plugin is being used by the NSS facility.

Now we can create our first user in the database and see if we can log in. Let’s start by logging in as postgres user and then psql to our unix database:

wolverine@localhost:~$ sudo su - postgres
[sudo] password for wolverine:
postgres@localhost:~$ psql unix
psql (9.1.15)
Type "help" for help.

unix=# insert into group_table (groupname) values ('testgroup');

Now let’s verify our group was inserted into the table and get its gid, which we will need when setting up the user’s group:

unix=# select * from group_table;
  gid  | groupname | descr | passwd
 10000 | testgroup |       |
(1 row)
unix=# insert into passwd_table (username, passwd, gid, homedir) values ('testuser', 'x', 10000, '/home/testuser');

and verify if the user passwd entry is set:

unix=# select * from passwd_table;
 username | passwd |  uid  |  gid  | gecos |    homedir     |   shell
 testuser | x      | 10000 | 10000 |       | /home/testuser | /bin/bash
(1 row)

As you can see, the passwd entry exists. We have set ‘x’ as the user password, which means that we will use shadow_table to store the password instead of a plain text password in passwd_table (exactly as the /etc files do it). Let’s set up the shadow_table entry for our user. First we need to create the pgcrypto extension on our database:

unix=# create extension pgcrypto;

Remember that pgcrypto must be installed with your PostgreSQL server installation for this to work. Also note that only an administrative role (e.g. postgres) can create extensions on a database. Now let’s insert shadow information for our user:

unix=# insert into shadow_table values ('testuser', crypt('mypassword', gen_salt('md5')), cast(extract(epoch from now()) as INTEGER) / 86400, 0, 99999, 7, 0, -1, 0);

Let’s stop here for a moment. Since shadow_table and /etc/shadow format may not be very obvious I’ll explain each field here:

  • username – name of the user stored as username in passwd_table
  • passwd – encrypted hash for password
  • lastchange – number of days since epoch (1970-01-01)
  • min – minimal number of days before user is allowed to change password
  • max – maximum number of days after which user must change password
  • warn – number of days before maximum when user is warned to change password
  • inact – number of days after password expires that account will be disabled
  • expire – number of days since epoch (1970-01-01) after which the account will be disabled and cannot be used to log in
  • flag – reserved field
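Put together in the order above, the nine fields form one colon-separated record, just like a line of /etc/shadow; a quick sketch (reusing the sample hash from later in this post):

```python
# Assemble a shadow-style record from the nine fields described above,
# in the same order as the shadow_table columns.
fields = ("testuser", "$1$dksgT54M$JVwFYQS/j8NkZHeGVgbki0",
          16575, 0, 99999, 7, 0, -1, 0)
record = ":".join(str(f) for f in fields)
print(record)
# testuser:$1$dksgT54M$JVwFYQS/j8NkZHeGVgbki0:16575:0:99999:7:0:-1:0
```

(The real /etc/shadow leaves some numeric fields empty rather than using sentinel values, but the field order is the same.)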

Our insert to shadow_table may not be obvious since we have used two value constructs:

crypt('mypassword', gen_salt('md5')) 
cast(extract(epoch from now()) as INTEGER) / 86400

The first one uses the pgcrypto extension to generate a salted password hash from the password “mypassword” using the MD5 scheme. YOU SHOULD NOT USE MD5 in production, because it is considered weak. pgcrypto’s crypt() does not support the newer SHA-256 or SHA-512 shadow schemes, but it does support Blowfish (gen_salt('bf')), which is a stronger choice; for the SHA-based schemes you would have to devise your own solution, which is beyond the scope of this article.

The second one simply extracts the UNIX timestamp (seconds since the epoch) from the current date (now()); since this is a FLOAT, it casts it to INTEGER and then divides by the number of seconds in one day (86400) to obtain the number of days since 1970-01-01. We need this value for the lastchange field.
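The same lastchange computation in Python, to make the arithmetic concrete:

```python
import time

# Days since the UNIX epoch (1970-01-01), matching the SQL expression
# cast(extract(epoch from now()) as INTEGER) / 86400.
def days_since_epoch(ts=None):
    if ts is None:
        ts = time.time()
    return int(ts) // 86400

print(days_since_epoch(1432512000))  # 2015-05-25 00:00 UTC -> 16580
```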

Now, verify that the shadow entry was inserted properly:

unix=# select * from shadow_table;
 username |               passwd               | lastchange | min |  max  | warn | inact | expire | flag
 testuser | $1$dksgT54M$JVwFYQS/j8NkZHeGVgbki0 |      16575 |   0 | 99999 |    7 |     0 |     -1 |    0
(1 row)

If everything was OK, close the psql shell. If you are not logged in as a normal user (i.e. you are root), log out. Now you can test that your newly created database user resolves and can be logged into, by typing:

wolverine@srv01:~$ id testuser
uid=10000(testuser) gid=10000 groups=10000
wolverine@localhost:~$ su - testuser
No directory, logging in with HOME=/

Congratulations! Authentication through the PostgreSQL database now works and you can define new users simply by inserting records into the database. Of course, you have to create home directories and skeleton files yourself, since you cannot use useradd, usermod, groupadd and similar tools. You should build your own tooling for adding, modifying and deleting users in the database and for properly managing the home directory of each newly added or modified user.

The last thing we should do is install nscd, the Name Service Cache Daemon, which will cache entries from your PostgreSQL database in memory. This will significantly speed up user and group lookups and reduce the load on the PostgreSQL server – especially important when the user and group databases are large and queried often. You can install nscd by typing:

wolverine@localhost:~$ sudo apt-get install nscd

That’s it! Authenticating user accounts through PostgreSQL database is now fully set up. If you have any questions or comments, I’d love to hear them.

Posted in Administration, Linux

Defining custom vars for Pyramid scaffold

This is a quickie. I was working on a custom Pyramid scaffold to ease development of multiple REST-based microservices that share a common base. Instead of copy, paste, change, I decided to ease my work by creating a scaffold; the Pyramid documentation has a quick tutorial on how to do it.

However, it took me a little while to find out how I was supposed to pass custom variables used by PyramidTemplate when rendering files within a scaffold. The Pyramid documentation doesn’t state it explicitly, but it seems that PyramidTemplate is derived from the Template class from PythonPaste (or PasteDeploy, I don’t remember which one). Taking a quick look at the Paster templates documentation, I stumbled upon this sentence:

You can also prepare template variables in Python code in your Paster template class’s pre() method:

So. It seems that when defining your own Pyramid scaffold, you can override pre() method of PyramidTemplate like this:

from pyramid.scaffolds import PyramidTemplate

class MyCustomTemplate(PyramidTemplate):
    _template_dir = 'mycustom_scaffold'
    summary = 'Template for mycustom scaffold'

    def pre(self, command, output_dir, vars):
        vars['myvar'] = 'THIS IS MY VARIABLE'
        return PyramidTemplate.pre(self, command, output_dir, vars)

As you can see there is vars dictionary passed into pre() method which you can update with your own variables. Hope you find it useful.

Posted in Programming, Pyramid, Python

How to resolve “NoMethodError” in Chef

Recently I was given the task of implementing Chef on a client’s infrastructure. What I learned along the way is that when deploying Chef on existing server infrastructure there are almost no two identical systems; each server node is different. You have to be extra careful when provisioning servers, especially production ones. Sometimes you even stumble upon errors in Chef itself. I discovered one such error recently and I’m going to show you a simple way to solve it.

If you ever encountered error like this:

Error Syncing Cookbooks:

Unexpected Error:
NoMethodError: undefined method `close!' for nil:NilClass

you may wonder what this error means, especially if you are not a software developer. Luckily you can run chef-client in debug mode like this:

chef-client -l debug

If you know Ruby, you’ll probably spot a traceback, but you’ll have to dig deep into it to find that one of the libraries in Chef (file http.rb, line 368) has broken exception handling. When there is a problem creating a temporary file, the traceback fires, but instead of giving you a proper exception it raises the error at the top of this post. Changing the line:

tr.close!

to:

tr.close! if tr

resolves the exception problem and gives us the proper error:

Error Syncing Cookbooks:

Unexpected Error:
ArgumentError: could not find a temporary directory

This is far easier to solve than the previous error: it simply means that the temporary directory (usually /tmp) has improper permissions.

You should do:

chmod o+t /tmp

and voilà! The problem is solved. You can now run chef-client again and the cookbooks will sync.

Posted in Chef

256 color terminal in Konsole running under Mageia

I stumbled upon a problem with Konsole being incapable of showing 256 colors. The Linux distribution where I experienced this particular problem is Mageia. It turns out you have to do two things.

First, make sure you have ncurses-extraterms installed. In Mageia you can install it (as root) as follows:

urpmi ncurses-extraterms

After doing this, open Konsole and go to Settings -> Edit current profile -> Environment -> Edit, then add or replace the line beginning with TERM= as follows:

TERM=xterm-256color

Restart your Konsole and you should be ready to go.

Posted in Linux

Robotic Raspberry Pi powered lawn mower

Last week I got my second Raspberry Pi. If you don’t know it already, it’s a $25 fully blown computer of credit-card dimensions, with two USB ports, Ethernet, video, audio and an HDMI port. It has sixteen programmable GPIO ports, external display and camera ports, and is powered by a single micro-USB connector. Its power consumption is 3.5 watts and it’s capable of HD video output. Currently demand overwhelms supply, so it’s hard to come by, but I was lucky to get two development boards already.

So. What’s it good for? Well. There are many projects by hobbyists and geeks already in the workings, but since it runs on fully capable Linux it is very good solution for many things, especially universal and powerful robotics controllers. I have few ideas for projects using my Raspberry Pi’s. I want to talk a little bit about one of them here.

Since I have a recreational plot in the countryside, there’s always a problem with grass growing fast. On this parcel there are some flower borders, some bushes and some fruit trees, and the terrain is a little rough. Mowing this parcel is a lot of work and needs doing almost weekly, because the grass grows really fast. Unfortunately neither I nor my parents have time for it. And since our lawn mower is somewhat old, and mowing patches of rough terrain means carrying the cut grass to the composter, it’s tedious, hard work. So I came up with the idea of an automatic, robotic lawn mower. I have already started the programming side of the project and am currently investigating mechanical and electronic solutions. Since I have owned a car for a few months now and have had to work on fixing it myself (frankly, because I don’t have the money to pay professional mechanics), I have acquired a good deal of mechanical knowledge along the way.

Let me talk a little bit about this project. I have already coded some basic building blocks, like discrete topographic maps of the terrain. My language of choice is of course Python. The terrain map is a two-dimensional representation of discrete areas – one cell in an array represents ten by ten centimeters of terrain, which should be quite sufficient, and the resolution is adjustable, limited only by the memory and computing resources available. Since the topographic map is a discrete representation of 10 cm square areas and I only need to represent the passability of each cell, I have developed a class called BitMap which uses the great bitarray module. It’s lightning fast and uses very little memory. For example, a representation of 64 by 64 discrete cells, which corresponds to a 6.4 by 6.4 meter area, takes only 512 bytes. Representing a large terrain of 1000 by 1000 meters at ten-centimeter resolution would take 12.5 megabytes, which is not really a lot given the memory constraints of the Raspberry Pi. Of course the map can also be divided into segments (regions), offloaded to the SD card and loaded on demand to conserve memory further.
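A minimal stand-in for such a BitMap, using only the standard library, mirrors the memory math (the real class uses the bitarray module; this sketch just shows the bit-per-cell idea):

```python
# Minimal bit-per-cell grid; one bit marks a 10x10 cm cell as passable.
class BitMap:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.bits = bytearray((width * height + 7) // 8)  # 8 cells per byte

    def _index(self, x, y):
        i = y * self.width + x
        return i // 8, i % 8

    def set(self, x, y, passable=True):
        byte, bit = self._index(x, y)
        if passable:
            self.bits[byte] |= 1 << bit
        else:
            self.bits[byte] &= ~(1 << bit)

    def get(self, x, y):
        byte, bit = self._index(x, y)
        return bool(self.bits[byte] >> bit & 1)

m = BitMap(64, 64)
print(len(m.bits))  # 512 bytes for a 6.4 x 6.4 m area at 10 cm resolution
```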

The BitMap class represents the passability of terrain chunks. I’m currently developing a full software simulator, based on this class, for testing ideas. The class supports loading from and saving to raw byte representations and 1-bit bitmap images, and implements a few algorithms, for example Bresenham’s line drawing, an ultra-fast queue-linear flood fill, and matrix combinations and differences. It also implements simple bounding-box collision detection. I have a few more ideas for improving this class, but it already fulfills its basic goals.
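Bresenham’s algorithm mentioned above is compact enough to sketch here (the standard integer-only version, independent of the BitMap class):

```python
# Standard integer Bresenham line: the grid cells between two points.
def bresenham(x0, y0, x1, y1):
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:      # step in x
            err += dy
            x0 += sx
        if e2 <= dx:      # step in y
            err += dx
            y0 += sy
    return cells

print(bresenham(0, 0, 3, 2))  # [(0, 0), (1, 1), (2, 1), (3, 2)]
```

On a passability grid, marking each returned cell gives you a rasterized line, e.g. the edge of an off-limits area.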

The idea for my robotic lawn mower is to put it in unknown terrain and allow it to map it and it’s boundaries. Mechanically robot will be equipped with a gasoline powered engine like in typical lawn mowers. The engine will power alternator that will feed current into battery. The battery will be used to power electronics (including Raspberry Pi controller) and used for electric starter motor for the gasoline engine. Engine will be connected to simple electronic driven clutch and two-gear gearbox (forward and backwards). Front wheels will be able to turn by a servo controlled with Raspberry Pi. There will be some ultrasound sensors mounted on the front and back of the robot to detect obstacles. Since the lawn mower will explore the terrain by itself it will have some mechanical or other type of sensors for detecting holes in the ground, so the robot won’t fall into them. I haven’t decided just yet what solution I will go with for this problem. There’s also a problem for detecting off-limits areas of the terrain like water reservoirs and flower areas. There will be a camera mounted on the servo to allow computer vision including, but not limited to shape detection, obstacle detection aid and entity detection.

Since the robot must also be careful not to harm any animal or human in its area of operation (we are dealing with quickly rotating knives here), the camera and sensor arrays will also aid in detecting curious cats, playful dogs, humans, etc. This will need careful tuning of threshold values. It also means that terrain mapping will need some kind of heuristic that allows re-exploring chunks of terrain previously mapped as inaccessible. Since the core problem for a lawn mower is covering an unknown terrain area as quickly and efficiently as possible, topographic mapping and detection of area limits are of crucial importance. The software simulator I’m currently building will let me test different navigation and area-covering algorithms, and will also provide a platform for implementing statistical and feedback-based neural networks, allowing the robot to learn and improve its operating decisions with each iteration over a given terrain. Pathfinding will be based on heuristic algorithms, including the graph-based D* family, which is successfully used on the Mars Exploration Rovers and in military-grade autonomous systems.

Since the Raspberry Pi is equipped with sixteen GPIO ports and also an I2C bus, designing a relay board for the sensor arrays and servomotors shouldn’t be problematic. Of course, connecting a sensitive electronic board to electrical parts can be quite dangerous, so filter circuits must also be implemented to protect the controller board.
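The controller-side logic for the servos and ultrasound sensors largely reduces to simple unit conversions. A hedged sketch, assuming typical hobby-servo timings (1–2 ms pulse at 50 Hz) and an HC-SR04-style echo sensor; none of this is measured on my hardware yet:

```python
SPEED_OF_SOUND_CM_S = 34300  # speed of sound in air, cm/s, at roughly 20 C

def servo_duty_cycle(angle_deg, frame_ms=20.0, min_pulse_ms=1.0, max_pulse_ms=2.0):
    """Map a steering-servo angle (0-180 degrees) to a PWM duty cycle in percent.
    A typical hobby servo expects a 1-2 ms pulse every 20 ms (50 Hz)."""
    pulse_ms = min_pulse_ms + (angle_deg / 180.0) * (max_pulse_ms - min_pulse_ms)
    return pulse_ms / frame_ms * 100.0

def ultrasound_distance_cm(echo_time_s):
    """Convert an ultrasound echo round-trip time to an obstacle distance in cm.
    The pulse travels to the obstacle and back, hence the division by 2."""
    return echo_time_s * SPEED_OF_SOUND_CM_S / 2.0
```

On the real robot these values would be fed into whatever PWM facility ends up driving the servo pins; the functions above are hardware-independent on purpose, so the simulator can reuse them.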

That’s all there is for now; I will post more details about this project as it progresses. So stay tuned, comment and wait for further updates.

Posted in Hacking, Programming, Python | 2 Comments

Windows 7 installation under KVM hypervisor

I had to install a Windows 7 virtual machine under the KVM hypervisor running on a Debian 6.0 host server. The goal was to use a QCOW2 file as the virtual hard drive. I like to take advantage of the available tools, so I made the QCOW2 file using:

qemu-img create -f qcow2 win7.qcow2 100G

Then I used virt-manager to set up the libvirt/qemu file for the virtual machine. However, it seems that virt-manager (at least the stock one from Debian stable) has problems with XML description file management: it doesn’t always set the files up properly. So I modified the hard drive definition in /etc/libvirt/qemu/VM_NAME.xml by hand, changing the bus type from ide to virtio, the raw image to qcow2, and the address type from drive to pci (this is required when using the virtio driver). I obtained an ISO image of Windows 7 Professional which I attached to the VM as an IDE CD-ROM. So far, so good.

After restarting libvirt-bin I launched the installation of the VM from virt-manager. Windows 7 setup started without any problems, but as soon as I got to drive partitioning the problems started to mount. I downloaded the VirtIO drivers from Red Hat, as suggested by the KVM website, attached the drivers CD ISO as another IDE CD-ROM and restarted libvirt. After loading the drivers in the partitioning step of Windows 7 setup, the virtual drive appeared and I created a new partition, but the Windows installer refused to proceed, saying something along the lines of “Windows will not be able to boot from this drive due to a missing controller, so I won’t allow you to proceed with the installation”, or some such crap.

The solution: install on the QCOW2 image using the ide bus type instead of virtio. Installation will take many hours, guaranteed. After installing, start the VM and allow it to configure everything on the first run. As soon as you see the Start Menu appear, shut the VM down immediately, so you won’t have to wait until your death for Windows to install six million updates and then one more. Attach the VirtIO CD ISO to your VM and also create an additional small disk image that uses the virtio bus. You can use virt-manager to do this: click on the VM, and in the hardware setup/information tab choose Add below the hardware list. Choose Storage, set the driver type to VirtIO and create a small partition of 1GB or so. Don’t bother with QCOW2 here; it can be a simple RAW image. After doing this, start your Windows 7 virtual machine, wait until it loads and then go to: Start -> right click on Computer -> Properties -> Device Manager. Find the Unknown SCSI Controller, right click, Install/Update Driver, point it at your VirtIO CD, go to Win7\amd64 (I’m assuming you have a 64-bit virtual machine) and proceed. Windows should automatically find the appropriate driver, and after a while you should see that the Unknown SCSI Controller has become the Red Hat VirtIO SCSI Controller. Also, under hard drives you should see VIRTIO IDE DRIVE: this is your new small partition. After this, shut Windows 7 down again.

Why did we do this step, you might wonder? Well, Windows 7 won’t let you install a driver for non-existent hardware (or at least I don’t know how to do it), so we have to cheat and use a temporary decoy, a small virtio-bus partition, for Windows 7 to see, so it installs the VirtIO SCSI Controller driver.

After this we can delete the temporary decoy partition and change our /etc/libvirt/qemu/MACHINE_NAME.xml file. We have to change our primary partition to use the virtio bus instead of the ide bus, and select an appropriate address, usually type pci, domain 0x0000, bus 0x00, slot 0x06, function 0x0. If you have a non-standard setup, look through all the address tags and choose the slot accordingly: take the highest slot number on the same bus anywhere in the file and add one. Just remember that you count in hex, i.e. if your highest slot number is 0x09 then you have to use 0x0a, not 0x10.
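For reference, the resulting disk definition in the XML might look roughly like this (the image path, target device name and slot number are illustrative; adjust them to your own setup):

```
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/win7.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
```

Remember to restart libvirt (or redefine the domain) after editing the file by hand, otherwise your changes won’t be picked up.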

After doing this, start your VM and there you go: Windows 7 will run much faster using the paravirtualized VirtIO SCSI driver.

Posted in Random | 1 Comment

All alone at night

Posted in Random | Leave a comment

Force Midnight Commander to exit in current directory

I have some Gentoo systems with mc installed, but its default behaviour is to exit into the directory it was run from. I wanted to change this, because most of the time I’m using it to navigate the filesystem, and after exiting mc I want to be dropped into the directory I have navigated to. To achieve this you can install this little profile script, which will force Midnight Commander to exit in the current directory.

mkdir -p /etc/profile.d
cat > /etc/profile.d/mc.sh << EOF
if [ -f /usr/libexec/mc/mc.sh ]; then
    . /usr/libexec/mc/mc.sh
fi
EOF
chmod 775 /etc/profile.d/mc.sh


Posted in Random | Tagged | Leave a comment

Why the famous “cloud” is not the answer

Edit: Seems like my post is irrelevant since there are projects like Tahoe-LAFS and Ceph.

Being lucky enough to own a new machine capable of running multiple virtual systems, I decided to try a few of them that I wasn’t able to easily install and use before. Being a hardcore Linux user, but not inclined to bash Microsoft or any other operating system vendor right out of the box (hey, Windows 7 is not that bad after all), I tried a few Linux distros I wasn’t very familiar with, and also the new Windows 8 Developer Preview.

Since I am familiar with and really like Mandriva, I decided to give the new Mandriva 2011 a spin. I also downloaded the new Ubuntu, Fedora, Arch, Mageia and some other distros, just to see if any of them really are that much different from the others. I wanted to see the progress in Linux and other operating systems, as I haven’t been in touch with systems other than XP, Mandriva and Gentoo recently.

I won’t go into the details of each and every OS I tested, but I want to share some thoughts about the direction and general progress in the evolution of operating systems. My feelings are not very positive, I might say.

I’m currently using Mandriva 2010.2 mostly. My server is running multiple Gentoo installations over a VMware hypervisor. Those two distros are two different worlds and both have their pros and cons, but so far they have done the job for me. Mandriva 2010.2 is a really good desktop and laptop distro.

Gentoo makes a pretty good server system; however, I have one objection: you have to put a significant amount of time into maintaining your system. Time that is precious for me; I just want things to work and update as they should, without spending hours on simple system upkeep. I’m a business person and I calculate my time economically. Gentoo is a really cool system and I find it very satisfying and pleasant to use, when you have time. I don’t, but unfortunately I must somehow stick with it for the time being, lacking any real alternative that would offer that much stability and speed. Gentoo, if maintained properly, is fast. It’s customizable to a degree no other distro can match. However, if you’re a business-conscious person, I would not recommend using it on a daily basis. Too much knowledge and time must be put into simple upkeep and maintenance.

Mandriva is a really great distribution for daily use, but I would think twice about using it on a server. Why? Because there were some outstanding critical security bugs that led to the compromise of my server machines not once, but a few times. I wouldn’t hold it against them if patches were provided in a reasonable time, but honestly, waiting half a year for a fix for a critical security bug in ProFTPD was way too much. Mandriva is also a distro whose longevity (i.e. support period) is way too short for a real production server.

Of course, it has many security features out of the box. I like msec and the way it’s configured from the start. I like that it’s so unintrusive for a power user and doesn’t get in the way when you want to accomplish more advanced things without all that druid crap interfering. It just plays nicely, and I’m really, really amazed by how Mandriva found the balance between ease of use and the needs of power users. They had their ups and downs, but this distro was really solid most of the time. I have used many different distributions, but I was always amazed that in Mandriva, most of the time, everything worked. I liked its logical layout of configuration files and the interoperation between editing configs by hand and via the druids. Something Debian-based distros (including Ubuntu) could only dream of. Despite the fact that the OS was never pretty, and I must admit it lacked taste when it came down to fancy user experience, it was still rock solid. There were some great tools that Mandriva offered, like msec, mcc and the relevant drak*/*drake utilities, and URPMI, which I still find superior to any other package manager, maybe except for Gentoo’s emerge.

I’m still using MDV 2010.2 and honestly I’m a little disappointed that I’ll have to stick with it for the near future. I simply find this system to be the most rock-solid distro ever. For now. I did some certifications from Novell, so I also know SLES and SLED. Those are not bad distros, and when you take into account some of the tools Novell built, you have to say they are a really good choice for an enterprise network. Somehow, though, I’ve always found SuSE a really messy distro. They have their own ways of doing things which I personally don’t like. SuSE without the Novell stuff is just too chaotic and inconsistent. Nevertheless, Novell OES2 is a great business system which is stable and reliable. However, I don’t find the SuSE-based management tools too attractive, or sometimes even useful. Zypper is nowhere near URPMI. I know it’s a little more focused on tech-heads, but I have always found it distracting that Novell tools are made by engineers for engineers, and not for your ordinary next-door admin. Even as a highly technical person, I find them somehow too overwhelming and not pleasant to work with. Sure, I’m amazed by the technical superiority of SLES over other distros, but honestly, technical superiority doesn’t mean the system has to be practically impossible to manage without an engineer’s level of knowledge. Sure, I can manage it, but people who haven’t been in IT for twenty years, yet have some good knowledge, should be able to manage it too. Simply put, using and managing Novell systems is not fun at all. And your daily work should be not only pleasant, but also productive. Novell tools are technically superior to other solutions, but are just too hard to use for an ordinary person. And a technical person also has to invest significant time into getting the idea of how things tick. It shouldn’t be this way.

I’m a person who likes doing things my way, but I don’t want to invest much time in learning all the tips and tricks to make things work my way. Configuring your system should be pretty straightforward, and when you want to tackle parts of the system on a more advanced level, the system shouldn’t get in your way; it should let you configure and manage things the way you want, without the overhead of unnecessary technical details stealing your time. After all, a computer is just a tool for getting things done, and if you are like me, it is not a technical marvel to praise and pour all of your time into, exploring its nuances, quirks and technicalities when it’s really unnecessary.

Since the computer, and in fact the underlying operating system, is the key to your experience, we are slowly getting to the point of this elaborate piece. For me, productivity is the most valuable thing. I’m running a small company and I don’t want to invest time in managing my systems. I work with many clients who treat the computer as a tool and not as their toy. Time is what counts. I don’t want to be managing all the technical stuff, and since I work with many clients, I know from experience that they do not care about their updates, anti-virus scanning, backups, system configuration or management. They only care about how they can do their work with their computer and their operating system. Most of my clients treat IT as a necessary evil, but still evil. Most of my clients can’t afford a dedicated network administrator or a full-blown IT department. And I share their beliefs. An ordinary operating system user should get to the computer, do his work and leave, not caring about the technical details or computer/OS management stuff. For many years I have seen that most of the open source community (and IT in general) simply forgets this. The client treats the computer/OS as a tool, not as a thing of admiration. No matter how much we love all the intricate details of how those binary logic things tick, the ordinary user doesn’t share that love. And we shouldn’t rebel. We have to accept that not everyone can find the beauty in IT, nor does everyone want to know anything about the technical details of his own operating system. Never mind that it’s so beautiful, simple and logical for us. It’s a little arrogant of us, the IT guys, to force our knowledge and admiration on those poor souls who don’t understand the art of IT. And the shame is ours, not theirs, because we thought our way was the only way and that it should be taught to others. It’s not. And it’s a tragedy of IT and of the open source community as a whole that most of us don’t understand this.

I will stress this once again: the ordinary user wants to do his work on his operating system. The computer is just a tool. And honestly, as a technical person with a background as a network administrator and software developer, not to mention an owner of a company, I want to do my work with my computer and do it as productively as I can. That’s why I care about offloading tedious and mostly uninteresting work from my shoulders. I want my operating system to do most of the ordinary management automatically. I don’t want to care about backups, synchronization, antivirus sweeps, updates and such things when I really have to do my work. And do it quickly.

So there was an idea. Big companies and the industry thought: hey! Why don’t we solve this by pushing more and more to the cloud? We can keep our users updated and synchronized wherever they want to be, on whatever device they like. We can scan their files to see if they are infected, ensure their data is always available, and they will enjoy the benefits of the (marketing bullshit) cloud. And we will slowly force everybody towards Software-as-a-Service, upon which we can monetize (not to mention we will get knowledge of everything about them; who will guarantee we won’t?). Right?

Wrong. In my opinion the IT industry got it wrong. There are many objections to the cloud which we all know, privacy and data security being among the most significant concerns. Let me express my opinion this way: the industry got the right problem, but provides the wrong solution. Of course many will praise it as the way, but I won’t discuss their motives. I have just recently installed Ubuntu and Windows 8. What am I seeing? The two most significant operating systems (except Mac OS, but I’m not familiar with it) are subtly forcing their users to use their clouds. We all know about the privacy invasions done by mobile phone operating systems; guess what they will do with all your private or company data offloaded to the cloud. Who will guarantee that your data is properly stored, inaccessible to others, etc.? Knowledge is power. You’d better not hand your data right away to a corporation you have no control over. After all, corporations are known to hand over data about you to security agencies, or to use it against people who can endanger them. Corporations are entities making profit for their shareholders. Who can guarantee that your innovative technological company won’t interfere with their plans to dominate some part of the market where you are competing with them? The cloud is dangerous.

But I’m not going to elaborate more on this issue. I’m sure that if you’re interested in the topic, you already know these problems with the cloud. The cloud is a centralisation of power, and of knowledge about its users, in one invisible hand. Why should the open source community be on high alert when hearing “cloud”? Because it goes against one of the most significant principles of the community itself. Open source was, and I hope still is, about freedom. Freedom of communication, freedom to share, freedom to express, freedom to innovate, freedom to creatively express oneself. It’s not about the software. It’s about art. It’s about the humanistic ecosystem that was built around the software. It’s about the exchange of thoughts and ideas, improvement and striving for excellence, even if that manifests itself only in software development.

The cloud is against open source’s inherent ideal of decentralisation, of the Bazaar. The cloud is the next level of the Cathedral. You may own the free bazaar software, but you will be forced to use a cathedral cloud, one or another, but still in the hands of some entity. I’m really surprised that the community doesn’t raise objections to this matter. Take it this way: it’s an issue like Napster or Kazaa vs BitTorrent. The cloud is a centralised entity. It doesn’t matter that there are many clouds; they all share one weakness, a controlling entity, just like Napster and Kazaa did one day. BitTorrent, on the other hand, is decentralised. And that is what we need. We don’t need clouds. We need SWARMS.

What is a SWARM? It’s a decentralised cloud. No single entity controls it. It’s a community thing. A SWARM is a voluntary cloud. It’s a concept of encrypted, decentralised storage allowing synchronisation of your data in a secure manner. Much as BitTorrent is a peer-to-peer network for data exchange, so is the SWARM for synchronisation. Of course the SWARM poses significant technical difficulties to develop. But I’m sure there are many brilliant minds that can, and ultimately will, overcome the problems with this concept.

How would I see the SWARM implemented? It may be a global distributed filesystem. It may be a distributed database. Data synchronised with the SWARM would be distributed to active SWARM nodes and deliberately replicated. I imagine a daemon running on each and every participating SWARM node that would give up some resources to the whole SWARM. Think of it as a server running on each SWARM node, providing for example 100MB of storage space to the SWARM. The amount of data you could store in the SWARM would be determined by the resources you contribute to it. The security of your data would be guaranteed by strong cryptography and by chunking your data. Like in Freenet, nobody would know what his SWARM node is storing or to whom the data belongs.
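To make the idea concrete, here is a tiny sketch of two of the core mechanics, content-addressed chunking and deterministic replica placement (in the style of rendezvous hashing). The chunk size, replica count and node names are made up for illustration, and encryption is omitted entirely:

```python
import hashlib

CHUNK_SIZE = 4  # bytes; absurdly small for demonstration (a real SWARM might use e.g. 256 KiB)

def chunk_ids(data, chunk_size=CHUNK_SIZE):
    """Split data into fixed-size chunks and name each by its SHA-256 digest
    (content addressing): identical chunks always get identical IDs, so a node
    can store a chunk without knowing whose data it is."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

def assign_nodes(chunk_id, nodes, replicas=3):
    """Deterministically pick `replicas` nodes for a chunk: rank every node by
    hash(chunk_id + node) and take the top ones. Every peer computes the same
    answer with no coordinator, and losing one node reshuffles only its chunks."""
    ranked = sorted(nodes,
                    key=lambda n: hashlib.sha256((chunk_id + n).encode()).hexdigest())
    return ranked[:replicas]
```

The hard problems listed below (re-replication after node loss, tracking replica counts) are exactly what this naive sketch does not solve; it only shows that placement itself can be decentralised and deterministic.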

Unfortunately, there are some problems with this; Freenet storage, for instance, is based on the popularity of the requested content. And there are more problems. Which nodes should receive your chunk of data? How should the data be distributed and replicated? How many nodes should receive a copy of each chunk synchronised with the SWARM? How should the SWARM cope with data loss, or with a significant loss of storage-providing nodes? Should your node track the copies of your data, and if it loses some of the nodes holding them, how should it decide to redistribute its replicas? How do you ensure you can always reach your data, if we must assume that SWARM nodes are unreliable? How do you deal with a contraction in the number of nodes hosting copies of your data? How much data can you store in the SWARM at any given time?

If the SWARM is going to become a real project, these and other questions must be answered first. Due to its chaotic and unreliable nature, it would be hard or even impossible to devise algorithms that guarantee the availability of your data at all times. It may be necessary to loosen some of the restrictions of the SWARM concept. However, the idea might find its way into some private environments, where a SWARM might be an attractive alternative to a non-controllable cloud.

If you want to talk about the SWARM concept with me, feel free to do so in the comments section.

Posted in Random | Leave a comment