Retro-causal experiment

For a few years I have been contemplating ideas about the future, especially those that offer some insight into the predictability of future events. While working at a cryptocurrency company I was exposed to markets and became interested in them; I got hooked on the idea of predicting their behavior. I learned a great deal about game theory and stochastic processes, and gathered the sparks of insight that launched me on a journey to learn more about the mechanics of markets and, more generally, the patterns that govern time series analysis and predictability.

If you don’t know what time series data means, imagine, for example, temperature readings outside your home that you note down systematically at some specified interval: a minute, an hour, a day, a week, a month. You measure a specific variable, like temperature, pressure or solar irradiation, every minute, and given enough data you can make some predictions about seasonality, periodic trends caused by the seasons, and average temperatures during day and night. After all, you know that during the winter months the temperature is most likely lower than during the summer, at least in the northern hemisphere. So given enough data you can determine that the average temperature will be fifteen to twenty degrees higher in July than in January.

Since then I have been experimenting with different techniques to see whether any of them yields better odds than pure random chance. Three years ago I discovered something interesting that led me to develop some of the beliefs I currently hold and, with my limited understanding, mostly assume to be true. But before I explain exactly what idea I had, I need to explain a few things that allowed me to devise it. The idea may be very ineffectual, but it might or might not lead to something great, or at least spark some discussion and possibly lead to a new discovery in the future.

I have always been interested in physics, philosophy and the nature of consciousness. I find it fascinating to read and learn about new discoveries in physics, especially in the realm of quantum mechanics, and about recent findings hinting that quantum effects might have a large influence on how our brains and consciousness work. In neuroscience there are hints that our brains may rely on quantum effects like tunneling, and perhaps even quantum entanglement of particles, to function at all.

I have to admit that I am no expert in these topics. I am simply fascinated by them, and I try to integrate this knowledge into my everyday life and beliefs. I like them grounded in solid science, and I always keep a skeptical point of view, reinforced by different perspectives, opinions and new empirical evidence. Despite knowing that new inputs might change my view of some things, at any given time I can only devise hypotheses and try to verify them with the knowledge and evidence I have. But I am always willing to explore ideas that seem fringe or far out. My stance is that if you limit yourself to what you already know and never try to poke at the cracks in reality, you are essentially regressing, as the most motivating factor in one’s life is novelty. I always seek it, challenging my established views and opinions to break down and shake up my stable worldview. Honestly, despite much resistance from external and internal factors, it has never turned out badly. It has always led me to new insights and ideas.

And this is fun. Exploring new things. Shaking off everything you thought to be true, to broaden your perceptions and question your reality. After all, even if the truth is not pleasant, it is still the truth (I won’t say objective, as I am not capable of determining whether such a thing even exists), so one has to accept it or live in blissful ignorance.

So let’s talk for a while about the mundane topic of signal processing and the Fourier Transform. I came to understand what the Fourier Transform is: it decomposes a raw signal from the time domain into the frequency domain, showing the spectrum of frequencies the signal contains. When I first learned, visually and intuitively, how it works, effectively the first mathematical machine ever invented, I was absolutely astounded and, honestly, blown away. Every signal can be decomposed into its constituent sine-wave frequencies, showing exactly how much each frequency component contributes to the overall signal. This, however, comes with one problem. The Fourier Transform gives you accurate information about the frequencies in the signal’s spectrum, but no way of pinpointing where exactly those frequencies occur within the signal.
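
To make this concrete, here is a minimal sketch in Python (using numpy; the two-tone signal and all numbers are made up for illustration) showing how the transform exposes frequency content while discarding timing:

import numpy as np

# A toy signal sampled at 1 kHz: a 50 Hz and an 80 Hz sine mixed together.
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 80 * t)

# The spectrum shows sharp peaks at 50 Hz and 80 Hz...
spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
print(freqs[spectrum.argmax()])  # -> 50.0

# ...but it would look essentially the same if the 80 Hz tone occupied
# only the first half of the recording: the "when" is lost.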

Okay, so why is that, you might ask. It comes down to certain fundamental principles and physical laws that I see recurring time and time again in different physical and informational phenomena, and that essentially boil down to the Heisenberg Uncertainty Principle. Setting aside the little philosophical thought that the only certain thing in life and the universe is uncertainty, we can say that when you try to measure a signal you have two variables: the time of occurrence and the frequency of the constituent waves, sines or sub-signals. You can have near-absolute certainty about one, but then not about the other, and vice versa.

There is, of course, a little hack that allows you to analyze the signal with better precision across both of these domains. You can use an ingenious mathematical trick called wavelets to process the signal at different time and frequency resolutions, producing not a one-dimensional frequency map but a two-dimensional heat map that correlates the time and frequency content of the signal, so it’s easier to pinpoint where exactly a frequency appears in the time domain.
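
Here is a minimal sketch of that trick, assuming the simple continuous wavelet transform that SciPy has long shipped (scipy.signal.cwt with the Ricker wavelet; newer releases point you to PyWavelets’ pywt.cwt instead):

import numpy as np
from scipy import signal

# A non-stationary toy signal: 50 Hz in the first half, 120 Hz in the second.
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.where(t < 0.5, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 120 * t))

# One row per wavelet width: a 2-D (scale x time) map instead of a 1-D
# spectrum, so the change of frequency over time becomes visible.
widths = np.arange(1, 31)
cwt_map = signal.cwt(sig, signal.ricker, widths)
print(cwt_map.shape)  # -> (30, 1000)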

However, this gives you only moderate clues about when, where and what the signal looks like. Most of all, you are working on data that is already set in stone (it has already happened). This might give you some insight into how the signal will develop, and often, if your model works well, it might give you good hints for predictions about the signal’s future. But since most time series data is essentially a stochastic, random process, there is a high degree of uncertainty in your predictions. That can be alleviated to some extent by seasonality and trend analysis, but underneath you have to assume that time series data like a market is essentially a random walk, and you can’t predict it with 100% certainty. You can only make statistical assumptions, which are better or worse.

So, what can you do about it? Well, I found a probable way out of this. Let’s think of the signal over a given time period. What can we say about it? We can try to fit a function to it: a polynomial, or a compound of its constituent sines or base waves from the frequency domain. I’ll refer to this as polynomial fitting (curve fitting, strictly speaking). If you can find a highly correlated wave (a compound of sines) that matches and fits the signal, then you can quite easily extrapolate the fitted function into the future. Of course this prediction will degrade with time, but for short-term predictions it should be fairly accurate.
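
A minimal sketch of that idea with numpy (a noisy toy signal and a short extrapolation horizon; everything here is illustrative):

import numpy as np

t = np.linspace(0, 10, 200)
sig = np.sin(t) + 0.1 * np.random.randn(200)  # noisy toy signal

# Fit a 7th-degree polynomial to the observed window...
fit = np.poly1d(np.polyfit(t, sig, deg=7))

# ...and extrapolate a short distance past the end of the data.
t_future = np.linspace(10, 10.5, 10)
print(fit(t_future))  # plausible over a short horizon, diverges soon after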

Let me digress here a little before I expand on this topic further. When I first thought about all these things, I encountered the concept of retro-causality. It is a fact that many of the theories and physical tools we have at our disposal are time-reversible. Quantum-mechanical particle interactions can be visualized with Feynman diagrams, and within them it is perfectly normal to see particles interacting with other particles that formally travel backwards in time. The delayed-choice quantum eraser experiment with entangled particles is also sometimes read as observation (measurement) establishing the determination of a particle’s state in the past.

This is not yet a proven or established interpretation, or even a theory, in physics. But if you look at the philosophical concepts of Eastern scholars, like the idea that the only thing that matters is the perpetual now, you can try to imagine what it would be like if not only the past influenced your current state of being, but future events influenced the past as well. I won’t go into much detail here, as this is a broad topic and it’s not my intention to explore it now, but consider this: what if your future self is influencing where you are going? What if there were a way to communicate information from now to your past? Or from your future to your present? Is it even possible?

Well, many physical processes work the same in both directions of time, and the results hold consistent. Of course there’s entropy, and as far as we know anything that could travel backwards in time would in essence have to be faster than light, which we currently think is nonphysical. But there are hints: in the Feynman-Stückelberg interpretation, antimatter is mathematically equivalent to matter traveling backwards in time. And some other phenomena, like quantum entanglement, at least appear to break the cosmic speed limit: the “spooky action at a distance” happens instantly, not obeying the speed of light. Yes, we are fairly certain that neither of these can convey any meaningful information, but it keeps you wondering whether there could be a way to determine the future from signals originating in the future.

The strange nature of quantum interactions, and deductions from Feynman diagrams of particle interactions, might give you the counter-intuitive result that information could indeed be carried over from the future to the present, perhaps with a phase shift of the wave propagating from the signal’s point of origin back to the past, or to your present.

Most physicists, however, think that retro-causality is nonphysical, but I wouldn’t dismiss it yet, as there are some clues that it might be realistic and true. We just don’t know yet.

So, given all that I have said, is it possible to predict the future of an essentially random stochastic process? Perhaps it’s not a very elegant solution, but I think there is a way, at least to a very high degree of probability. Since the nature of the Universe, especially the quantum microscopic world, is essentially random, how could you predict randomness? It seems impossible.

But on small time scales you might reach a fairly good degree of certainty by using the oldest and simplest prediction algorithm: brute force.

Let’s assume we have a signal consisting of a variable that changes over time. We can use the tools mathematics and physics give us to model the signal with a high-degree polynomial fit, supported by the wavelet or Fourier transform. This would let us model the probable future outcomes of the fit at short time scales. Such fitting would be very inefficient, but in theory, if we define a vast array of candidate functions to compare our fit against, we should find a wave, or a compound of sines forming the signal, that lets us predict its future behavior from a rather simple equation. This is inefficient, slow and, like all brute-force algorithms, computationally intensive, but in theory it should work on short time scales.
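
A minimal sketch of this brute-force search, assuming scipy for the fitting; the candidate library here is tiny and entirely made up, where a real one would hold thousands of transformed base functions:

import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 10, 500)
observed = np.sin(1.3 * t + 0.4) + 0.05 * np.random.randn(500)

# A tiny library of candidate base functions with scale/shift parameters.
candidates = {
    "sine": lambda x, a, w, p: a * np.sin(w * x + p),
    "exponential": lambda x, a, w, p: a * np.exp(w * x) + p,
    "quadratic": lambda x, a, w, p: a * x**2 + w * x + p,
}

best_name, best_err, best_params = None, np.inf, None
for name, f in candidates.items():
    try:
        params, _ = curve_fit(f, t, observed, p0=[1.0, 1.0, 0.0], maxfev=5000)
    except RuntimeError:
        continue  # this candidate failed to converge; try the next one
    err = np.mean((f(t, *params) - observed) ** 2)
    if err < best_err:
        best_name, best_err, best_params = name, err, params

# Extrapolate the winning candidate a short distance into the "future".
winner = candidates[best_name]
print(best_name, winner(np.linspace(10, 11, 5), *best_params))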

So maybe this does not constitute a true retro-causal interaction, but in my opinion it is as close as we can get to predicting a signal that is essentially random in nature.

For all this to work, we need the concept of the coherence of two waves. In physics, and especially in quantum mechanics, coherence allows you to compare two waves in terms of their similarity to each other. For this to work, the algorithm would need a large set of functions, including chaotic ones (exponentials and the like), and a lot of trial and error on basic and not-so-basic transformations: scaling, translating, perhaps even rotating. And it has to work through a vast search space for a finite time series period. It would be very inefficient as an algorithm, but it’s a start.
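
SciPy happens to provide a spectral coherence estimate out of the box; a minimal sketch comparing a wave against a noisy copy of itself and against unrelated noise:

import numpy as np
from scipy import signal

fs = 1000
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 40 * t)
y_related = x + 0.3 * np.random.randn(len(t))  # the same wave plus noise
y_unrelated = np.random.randn(len(t))          # pure noise

# Coherence approaches 1 at frequencies where two signals are linearly related.
f, cxy = signal.coherence(x, y_related, fs=fs)
print(cxy[np.argmin(np.abs(f - 40))])  # close to 1 near 40 Hz

f, cxy = signal.coherence(x, y_unrelated, fs=fs)
print(cxy[np.argmin(np.abs(f - 40))])  # close to 0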

Intriguingly, coherence and the Fourier Transform give some interesting clues about non-stationary data. Did you know that when you feed them a chaotic, non-stationary function, the overall slope of the frequency-domain spectrum reflects the time-dependent change of the function’s values? Therein might lie a clue for predicting the coherence of a polynomial fit of the signal with the function of interest.
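
You can poke at this claim yourself; a small sketch that fits the log-log slope of the amplitude spectrum for a stationary series versus a random walk (a simple non-stationary process):

import numpy as np

def spectral_slope(x):
    # Fit a line to log-magnitude versus log-frequency (skipping DC at index 0).
    spec = np.abs(np.fft.rfft(x))[1:]
    freqs = np.fft.rfftfreq(len(x))[1:]
    return np.polyfit(np.log(freqs), np.log(spec + 1e-12), 1)[0]

rng = np.random.default_rng(0)
stationary = rng.standard_normal(4096)
random_walk = np.cumsum(rng.standard_normal(4096))  # non-stationary

print(spectral_slope(stationary))   # roughly flat, slope near 0
print(spectral_slope(random_walk))  # clearly negative, about -1 in amplitude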

If you think about it, there’s nothing that prevents such an algorithm from working, and I have run tests with code I developed showing that finding coherence between a polynomial fit of the signal and a baseline function is indeed possible. The hard part, of course, is finding a high-degree polynomial fit for the signal, but you could probably get away with a good approximation.

So, there you have it. It should be possible to predict the future development of a signal by searching for coherence between its polynomial fit and a vast search space of transformed base functions. It would be inefficient, but it could find solutions to a good degree, and with a lot of computing power it should be more than likely to beat the odds of random chance.

And one more piece of food for thought. If it is possible to take a peek at the future with a high degree of certainty, this mechanism offers some philosophical entertainment about the nature of consciousness and reason. What if our consciousness is not only a product of the past, but also of the future? What if our consciousness can only really function with retro-causal information? What if the future influences us as much as our past does? If consciousness incorporates this mechanism one way or another, even if we use it unconsciously, what would that mean for causality? Or, if given the strange nature of some interpretations of quantum mechanics, or of reality itself, this turns out to be essentially true, what would it mean for you to exist?

I’ll leave that up to you.


Doomsday paradox

It’s always good to imagine the worst. If you can imagine the worst that can happen, then repeating this process builds up your shield and hardens you against your fears, provided you are able to make the leap to the other side. Put yourself in front of a mirror. Be, in your mind, exactly what you despise and fear. Imagine it, visualize it, process it, look at yourself from the totally opposite perspective. If someone says you shouldn’t do something, but you can’t find any rationale behind it except disgust and repulsion, which are themselves mostly products of conditioning and socialization, then it’s only the fear of some possible penalty that prevents you from doing it.

But look at it this way. If you are a truth seeker, and it matters to you and your identity, then truth is the ultimate source of your fulfillment, so much so that when a prohibition starts to conflict with reason, when there is mounting evidence of a logical conflict, you ought to feel that this prohibition has nothing to do with reason and everything to do with prejudice. A lack of open discussion on a sensitive topic means a lack of dispute, a lack of exchange of thoughts, opinions and ideas, and most of all a lack of reasonable solutions. Plurality is important. It cannot be forgotten in the name of political correctness. Having only one side of a discussion is straightforwardly totalitarian. Of course, there is the question of whether a tolerant society can tolerate the intolerant, but I’ll leave that for the reader to figure out.

You can see this prohibition everywhere: in the media, when talking with friends, in your workplace, in your home. It covers almost every topic in your life. You have to do this and that, this way and this way only; otherwise you are prohibited. You are prohibited from entertaining certain topics, exploring far-out ideas, or even letting your mind loose and doing crazy things you had never dreamed of doing. Why? Because someone said you shouldn’t. Why? Because this and that. And then you, as a reasonable person, start to ask more specific questions, like a three-year-old child: “But why? But why?” “Shut up, you monster, and go clean your room.”

But being the smart-ass you are, you are not going to be flushed down the toilet. So you take the questions deeper. You have already seen the cracks in the wall, so now it’s only a matter of time before you poke at it and the cracks open wider and wider, letting you see the truth behind it. That’s what I like about science. Even though we have established theories, it doesn’t mean we won’t have better models and suppositions in the future. If a significant, repeatable, reproducible signal contradicting one of the established theories is found, then no reasonable person refutes the new finding based on prejudice and personal beliefs. Instead, if the signal leads to an extension of a theory, or to a new theory altogether, science accepts it as a new model, as close to factual reality as possible.

It seems, however, that the deeper you dive into physics, the more you begin to understand that reality itself might not be what we were brought up to think it is. We can happily and ignorantly live our whole lives within what we perceive as real, but is it really so? Our recent understanding of microscopic quantum interactions, the development of the holographic principle and other outlandish interpretations of some physical theories point to the idea that reality itself may not be “real”. For me, such knowledge was, and still is, eye-opening. I am still trying to merge the bits and pieces of information into a whole picture, which is probably a futile task, but there is a small, non-zero probability that it isn’t.

So here, in every line of the paragraph above, lies a stack of Pandora’s boxes and another can of worms. Of course, one can ponder this knowledge, attained, invented and found by so many minds and discoveries linked in an intricate web of inter-dependencies. Life is improbable. Life’s probability, and its lifespan compared to the predicted and estimated lifetime of the universe, is so small that it should be statistically impossible. And yet here is the quantum fluctuation that gave rise to a false Boltzmann brain, writing this stuff as if it were all real. So if this is all false, then why is it true? In philosophy you can take a supernatural stance, a nihilist stance or an absurdist one. Some great philosophers have said that with the first two you choose either intellectual death or spiritual death, so the only obvious choice is the absurdist view of existence.

Of course, there have been some recent developments that try to reconcile the ongoing dispute between determinism and indeterminism. However, it seems that our understanding of the true nature of reality is yet to be achieved, and probably lies a long time in the future. But maybe I shouldn’t write “we”, as it is I who lack this knowledge. Perhaps everyone else has it and is just playing games. That’s something I am probably unable to determine, so I set it aside as irrelevant. Since it’s statistically unlikely for anyone to be far above or below average, I assume that most of the human population is at a similar level of understanding to mine.

So let’s speculate a little about reality from a very subjective and probably non-scientific point of view. As I have written before, I don’t like prohibitions for their own sake.

First of all, there are some clues in theoretical physics that we might indeed be living in a universe that is a hologram. This holographic principle helped give rise to the Simulation Hypothesis. Now, setting aside all the discussion around this topic, is it possible that we live in such a simulation? We cannot yet claim to know the answer to this question, but there are clues in physics suggesting it might be a good description of reality.

Let’s assume we live in a simulation. Within our limited view, and by analogy, we can imagine the simulation running on some kind of computer or computing machine. Besides the philosophical questions that arise (“who started the simulation, what is it running on, what is its purpose?”), we can think about universal constants, for example energy, which changes in form but not in quantity. The universe has a constant pool of energy which can be used to do work. However, since we have entropy, the amount of useful work that can be done grows smaller as time progresses.

So we could draw an analogy between energy and available computing power. Let’s assume that the universe is a simulation running on unknown hardware. That would imply that the rules and projections of the Universe at this moment are governed by some kind of code, a program or something of similar purpose, that enables the creation of a rich, vast and dynamic world, including the simulation of supposedly intelligent entities at an unprecedented scale. We also know that information is another form of energy, and some researchers have even argued that information itself has mass. And if nothing cannot exist, then by the nature of the Universe something will always arise from nothing: random quantum fluctuations can give rise to a whole new Universe.

So how could we prove that we are living in a simulation? There are many arguments for it, but let’s consider a thought experiment, given those perhaps shaky assumptions.

Let’s suppose we somehow invent a process, a machine or an algorithm whose operation would result in exceeding the total energy of the Universe, which would probably mean that the Universe would cease to exist. But as far as we know, the Universe prohibits actions, taken from inside itself, that would cause it to self-destruct. Maybe with the notable exception of false vacuum decay, but that doesn’t really constitute a destruction of the Universe, rather its reorganization into a lower equilibrium state. So if we apply a Universal self-censorship principle, which states that no action resulting in the destruction of the Universe is allowed, then the machine, algorithm or process enabling this destruction cannot exist, and every attempt to build or invent such a thing will already have been self-censored by the Universe, and is therefore impossible. So does that mean we are in fact living in a simulation?

Let’s suppose that such a machine, process or algorithm were possible. Someone, at some point in the Universe’s lifetime, would invent such a doomsday device, sooner or later. That would mean that at some point such a Universe would cease to exist. But if the Universe ceases to exist, then why are we here? So perhaps nobody has invented this algorithm yet, the Universe is not eternal and has no self-censorship, and we are simply yet to experience the end of the Universe. Of course, this raises a whole lot of other questions, like “would we even know it?”

On the other hand, the possibility remains that the Universe is eternal and does have self-censorship. That requires a deeper thought about the nature of the Universe. If we, as conscious beings, observe the Universe as part of it, does that mean the Universe itself is looking at and exploring itself through our eyes and experiences? Does the observer exist because of the Universe, or the Universe because the observer exists? Or are both two aspects of the same thing? If the observer ceases to exist, does the Universe cease to exist? What does it even mean for the observer or the Universe to cease to exist? But if the Universe has those intrinsic properties that prevent its destruction from inside, then we must conclude that such a machine, algorithm or process cannot exist in this Universe, as any attempt to create one will be wiped out or prevented by the Universe itself.

Take, as an analogy, closed time-like curves in spacetime, which are recognized as contradicting causality in physics. We are not entirely sure, but we are almost certain that those spacetime structures are impossible, as they would violate conservation laws and causality. Physicists are fairly confident that the Universe doesn’t allow some fundamental laws to be broken, and we can extend this idea to a doomsday algorithm or process that would cause the destruction of the Universe: it is simply not allowed in this Universe.

To summarize: if a doomsday algorithm were allowed in the Universe, then the Universe would at some point cease to exist. But if we assume the Universe is eternal (or that when there is nothing, something must arise from quantum fluctuations), it cannot be non-existent, as one piece of evidence suggests: you are reading this. Therefore such an algorithm has already been censored, or any attempt at inventing it will be prevented by the Universe.

How might the Universe do this? If we take into account some interpretations of quantum mechanics, such as many-worlds, and we assume the Universe is a simulation, then we can speculate that every observation causes the timeline to branch into a tree of possible histories. Some are more likely than others, but nonetheless every possibility is evaluated this way. Consider that each such branching interaction is somehow evaluated by the simulation hardware. Multiple parallel histories are evaluated concurrently by this simulation, and when one of the branches dies off, because of, for example, the destruction of its Universe, its ghost, a timeline of bad choices, is reintegrated into your reality (the main timeline), with the positive result coming from the negative test performed on that branch of time.

Given that we are now considering two dimensions of time instead of one, that gives you some food for thought; how one could even imagine its manifestation in reality is beyond me. So, given all of this, we can conclude that a hypothetical doomsday algorithm results in a paradox which causes the Universe to cease to exist. But if we consider that the Universe, even if it had a beginning, is eternal and self-correcting (incidentally, researchers working on string theory have found mathematical structures in the laws of physics that resemble computer error-correction codes!), then such a doomsday algorithm will be censored by the Universe; therefore no such algorithm exists or ever may exist. However, the fact that it doesn’t exist in your timeline doesn’t mean it never existed at some point in another. There, it caused that Universe (or timeline) to cease to exist, and so it was reintegrated into our timeline with a negative result, or phase-shifted. The negative result reinforces our stable timeline.

So, if no such doomsday algorithm can exist, does that prove the simulation hypothesis? I’ll leave that up to you.


How to brew install OpenSSL 1.0.2p (for Python3.6)

brew install --ignore-dependencies -f https://raw.githubusercontent.com/Homebrew/homebrew-core/062799b7a384eddc42be0dfbfd1b63e7127c4d7b/Formula/openssl.rb


Linux users and groups in PostgreSQL database

Standard Linux user and group accounts are defined in three files:

  • /etc/passwd
  • /etc/shadow
  • /etc/group

These files store user account and group information one entry per line, as fields separated by “:”. That kind of structure suffices for most user and group authentication needs, but a problem arises when you would like to have, for example, a centralized authentication database in your network, or more flexible means of managing user accounts.

I am currently designing and developing a solution for hosting servers that requires defining customer Linux accounts. I could use, for example, an LDAP service to store this information, but I prefer a database, since it is easier to develop tools around and generally a more flexible solution in the context of the whole system.

I am a fan of the PostgreSQL database server, and it turns out there is a nifty little plugin for NSS (Name Service Switch) which allows you to store this information in a database. The Name Service Switch is a standard Linux facility for common information and name resolution, which allows you to combine this information from multiple sources (flat files, LDAP, NIS and also various databases). We will use this facility to implement user and group information and authentication stored in a PostgreSQL database.

Enough talk, let’s get to work. The plugin you need for NSS is named libnss-pgsql2. On some systems it may be lib64nss-pgsql2, or it may be named libnss-pgsql or lib64nss-pgsql. Beware, however, that you need version 2 of the plugin: some systems use a package name without the number at the end but still ship the correct version, while others use libnss-pgsql to indicate an older version of the plugin, which you can also use, although that is out of the scope of this post. I am using a Debian 7.8 system, so the following commands are for that system, but this tutorial will still be relevant for other distributions after slightly adjusting the commands (i.e. package manager, etc.).

We’ll begin by installing the plugin from the terminal:

$ sudo apt-get install libnss-pgsql2

After this operation completes you’ll have the default configuration files ready; we’ll talk about them soon. Remember that you should install this package using sudo if you are a normal (i.e. non-administrative) user, or issue these commands as the root user. You may also notice that when installing this plugin, apt-get suggests installing nscd. This is the Name Service Cache Daemon, which will speed up resolving users and groups by caching them in memory, but for now we don’t want it to interfere with setting up the libnss-pgsql2 plugin. We’ll get back to nscd later.

After installing the NSS plugin we have to create our PostgreSQL database and the database users which will be used to access system user and group information. I assume you already have a PostgreSQL server installed. If not, install it and set it up before you continue!

Let’s log in as the postgres user:

$ sudo su - postgres

We should now see a prompt like this:

postgres@localhost:~$

Now we create two users for accessing passwd, group and shadow information:

  • nss – which will be used to access passwd and group information
  • nssadmin – which will be used to access shadow information

Remember that we have to create two distinct users, because we don’t want to give non-administrative users access to the information stored in shadow: it contains password hashes, which may be abused by malicious users (i.e. attackers). Access to shadow information should only be available to an administrative user (usually the root account). Let’s create those two users:

postgres@localhost:~$ createuser -P nss
Enter password for new role: PASSWORD
Enter it again: PASSWORD
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

This will create a new PostgreSQL user (role) named nss. Remember to provide a password for the nss user, substituting PASSWORD with your own. We also disallow this user from being a superuser, from creating databases and from creating more PostgreSQL user accounts (which are called roles). We repeat this procedure for the user (role) nssadmin, choosing a password different from the nss role’s:

postgres@localhost:~$ createuser -P nssadmin
Enter password for new role: PASSWORD
Enter it again: PASSWORD
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

Now we need to create and set up the database used by libnss-pgsql2. Still logged in as the postgres user, we run:

postgres@localhost:~$ createdb -O postgres -E utf-8 unix

This will create a database named unix, owned by the role postgres, with its encoding set to UTF-8. Fairly standard. However, if your postgres role is not considered secure, if you have changed your PostgreSQL administrative role to some other one, or if you would like a dedicated user for updating and managing the system user accounts database, consider changing the option -O postgres to -O your_role_name. Just remember that you have to create this role first if it does not exist (as I’ve shown you before).

Now let’s verify that we have access to this newly created database. Still logged in as the postgres user, type:

postgres@localhost:~$ psql unix
psql (9.1.15)
Type "help" for help.

unix=#

If you see no errors and something like the above, our database is working. Type “\q” to quit the PostgreSQL shell. Now we have to create the database structure. Here’s what it should look like:

-- Default table setup for nss-pgsql

CREATE SEQUENCE group_id MINVALUE 1000 MAXVALUE 2147483647 NO CYCLE;
CREATE SEQUENCE user_id MINVALUE 1000 MAXVALUE 2147483647 NO CYCLE;

CREATE TABLE "group_table" (
"gid" int4 NOT NULL DEFAULT nextval('group_id'),
"groupname" character varying(16) NOT NULL,
"descr" character varying,
"passwd" character varying(20),
PRIMARY KEY ("gid")
);

CREATE TABLE "passwd_table" (
"username" character varying(64) NOT NULL,
"passwd" character varying(128) NOT NULL,
"uid" int4 NOT NULL DEFAULT nextval('user_id'),
"gid" int4 NOT NULL,
"gecos" character varying(128),
"homedir" character varying(256) NOT NULL,
"shell" character varying DEFAULT '/bin/bash' NOT NULL,
PRIMARY KEY ("uid")
);

CREATE TABLE "usergroups" (
"gid" int4 NOT NULL,
"uid" int4 NOT NULL,
PRIMARY KEY ("gid", "uid"),
CONSTRAINT "ug_gid_fkey" FOREIGN KEY ("gid") REFERENCES "group_table"("gid"),
CONSTRAINT "ug_uid_fkey" FOREIGN KEY ("uid") REFERENCES "passwd_table"("uid")
);

CREATE TABLE "shadow_table" (
"username" character varying(64) NOT NULL,
"passwd" character varying(128) NOT NULL,
"lastchange" int4 NOT NULL,
"min" int4 NOT NULL,
"max" int4 NOT NULL,
"warn" int4 NOT NULL,
"inact" int4 NOT NULL,
"expire" int4 NOT NULL,
"flag" int4 NOT NULL,
PRIMARY KEY ("username")
);

This SQL defines two sequences, one for groups and one for user accounts; you can adjust MINVALUE to set the starting UID and GID accordingly. It also defines four tables:

  • group_table – which is equivalent for /etc/group
  • passwd_table – which is equivalent for /etc/passwd
  • shadow_table – which is equivalent for /etc/shadow
  • usergroups – which stores the relation between passwd_table and group_table, defining the supplementary groups a user is also assigned to (the primary group is stored in passwd_table, so you shouldn’t repeat it in the usergroups table)

Save the above SQL definition in a file named db_schema.sql and then, as the postgres user, run:

postgres@localhost:~$ psql unix < db_schema.sql
CREATE SEQUENCE
CREATE SEQUENCE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE

If no errors occurred, you should have your database schema set up in the unix database. Now let’s verify that everything is OK. Type psql unix and issue “\d” after logging in to the unix database:

postgres@srv01:~$ LC_ALL=en_US.UTF8 psql unix
psql (9.1.15)
Type "help" for help.

unix=# \d
              List of relations
 Schema |     Name     |   Type   |  Owner
--------+--------------+----------+----------
 public | group_id     | sequence | postgres
 public | group_table  | table    | postgres
 public | passwd_table | table    | postgres
 public | shadow_table | table    | postgres
 public | user_id      | sequence | postgres
 public | usergroups   | table    | postgres
(6 rows)

If you see something similar, the database schema is properly set up. Now, still in the PostgreSQL shell, we have to grant privileges to the two new roles we defined before. You can do this by typing:

unix=# grant select on passwd_table to nss;
GRANT
unix=# grant select on group_table to nss;
GRANT
unix=# grant select on passwd_table to nssadmin;
GRANT
unix=# grant select on group_table to nssadmin;
GRANT
unix=# grant select on shadow_table to nssadmin;
GRANT
unix=# grant select on usergroups to nssadmin;
GRANT
unix=# grant select on usergroups to nss;
GRANT

This grants the SELECT privilege on the tables passwd_table, group_table and usergroups to the role nss, and grants SELECT to the role nssadmin on all tables. We don’t want to grant any other privileges on those tables to these two users, since they will be used read-only by the NSS facility. Watch out for granting the shadow_table privilege to the nss role. You shouldn’t do it!

Now we can quit the PostgreSQL shell by typing “\q” and then log out from the postgres system account by typing “exit” or pressing CTRL+D. Let’s verify that our new roles, namely nss and nssadmin, have access to our database. Under a normal user account, type:

wolverine@localhost:~$ psql -U nss -W unix
Password for user nss:
psql (9.1.15)
Type "help" for help.

unix=>

and then, if no errors occurred, type in the PostgreSQL shell:

unix=> select * from passwd_table;
 username | passwd | uid | gid | gecos | homedir | shell
----------+--------+-----+-----+-------+---------+-------
(0 rows)

unix=> select * from group_table;
 gid | groupname | descr | passwd
-----+-----------+-------+--------
(0 rows)

unix=> select * from usergroups;
 gid | uid
-----+-----
(0 rows)

unix=> select * from shadow_table;
ERROR:  permission denied for relation shadow_table

This shows that the role nss has the SELECT privilege on the tables passwd_table, group_table and usergroups, but not on shadow_table, which is exactly what we want. Do the same verification for the user nssadmin and you should see something like this:

wolverine@srv01:~$ psql -U nssadmin -W unix
Password for user nssadmin:
psql (9.1.15)
Type "help" for help.

unix=> select * from passwd_table;
 username | passwd | uid | gid | gecos | homedir | shell
----------+--------+-----+-----+-------+---------+-------
(0 rows)

unix=> select * from group_table;
 gid | groupname | descr | passwd
-----+-----------+-------+--------
(0 rows)

unix=> select * from usergroups;
 gid | uid
-----+-----
(0 rows)

unix=> select * from shadow_table;
 username | passwd | lastchange | min | max | warn | inact | expire | flag
----------+--------+------------+-----+-----+------+-------+--------+------
(0 rows)

This shows that the role nssadmin has permission to SELECT from all tables. If any errors occurred during the above verification, make sure that the roles nss and nssadmin have the SELECT permission properly granted. It may sometimes also be necessary to grant access to the database schema itself.
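
If you run into “permission denied for schema” errors, granting usage on the schema (assuming the default public schema here) looks like this:

unix=# grant usage on schema public to nss;
GRANT
unix=# grant usage on schema public to nssadmin;
GRANT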

You may wonder why I have given so much attention to ensuring proper privileges on the database. If you fail to do it properly, you will have a hard time debugging why libnss-pgsql is not working. The scarce documentation for libnss-pgsql doesn’t help either, and there is practically no information available on Google if you go looking for help. So make sure your database server is working properly and that the roles have the necessary privileges to access the database tables. Unfortunately, there is no way to debug or inspect logs for the libnss-pgsql plugin, so you have to be extra careful with this step.

With the database properly set up, we can move on to the configuration files for libnss-pgsql. There are two files in the /etc directory which handle querying information from your database and feeding it to the NSS facility.

The first one is /etc/nss-pgsql.conf and should look like this:

connectionstring        = hostaddr=127.0.0.1 dbname=unix user=nss password=PASSWORD connect_timeout=1
# you can use anything postgres accepts as table expression

# Must return "usernames", 1 column, list
getgroupmembersbygid    = SELECT username FROM passwd_table WHERE gid = $1
# Must return passwd_name, passwd_passwd, passwd_gecos, passwd_dir, passwd_shell, passwd_uid, passwd_gid
getpwnam        = SELECT username, passwd, gecos, homedir, shell, uid, gid FROM passwd_table WHERE username = $1
# Must return passwd_name, passwd_passwd, passwd_gecos, passwd_dir, passwd_shell, passwd_uid, passwd_gid
getpwuid        = SELECT username, passwd, gecos, homedir, shell, uid, gid FROM passwd_table WHERE uid = $1
# All users
allusers        = SELECT username, passwd, gecos, homedir, shell, uid, gid FROM passwd_table
# Must return group_name, group_passwd, group_gid
getgrnam        = SELECT groupname, passwd, gid FROM group_table WHERE groupname = $1
# Must return group_name, group_passwd, group_gid
getgrgid        = SELECT groupname, passwd, gid FROM group_table WHERE gid = $1
# Must return gid.  %s MUST appear first for username match in where clause
groups_dyn      = SELECT ug.gid FROM passwd_table JOIN usergroups ug USING (uid) WHERE username = $1 AND ug.gid <> $2
allgroups       = SELECT groupname, passwd, gid  FROM group_table

Remember to substitute PASSWORD with your nss role password.

The second file is /etc/nss-pgsql-root.conf and should look like this:

# example configfile for PostgreSQL NSS module
# this file must be readable for root only

shadowconnectionstring = hostaddr=127.0.0.1 dbname=unix user=nssadmin password=PASSWORD connect_timeout=1

#Query in the following format
#shadow_name, shadow_passwd, shadow_lstchg, shadow_min, shadow_max, shadow_warn, shadow_inact, shadow_expire, shadow_flag
shadowbyname = SELECT * FROM shadow_table WHERE username = $1
shadow = SELECT * FROM shadow_table

Also remember to substitute PASSWORD with the nssadmin role’s password. If you fail to do this, you may render your system completely inaccessible! Both configuration files must be owned by root, and the second one should be readable only by root. Ensure they have the proper permissions set:

wolverine@localhost:~$ sudo chown root:root /etc/nss-pgsql.conf /etc/nss-pgsql-root.conf
wolverine@localhost:~$ sudo chmod 644 /etc/nss-pgsql.conf
wolverine@localhost:~$ sudo chmod 600 /etc/nss-pgsql-root.conf

Now we have to be extra careful! I recommend leaving another terminal open with an editor on /etc/nsswitch.conf until we verify that everything works as it should. If there are errors or the plugin is not working properly, YOU WILL DISABLE ACCESS TO THE WHOLE SYSTEM (i.e. ssh, login and other services depending on system user accounts). Do not log out from the root account on at least one terminal before you make sure everything works properly!

Let’s log in as root:

sudo su

and then open /etc/nsswitch.conf in vim or another console editor. Do the same in another terminal console (just so we can revert to the previous configuration if anything goes wrong). When you have /etc/nsswitch.conf open in the editor, change these three lines to look like this:

passwd:     pgsql compat
group:      pgsql compat
shadow:     pgsql compat

Your system may have files instead of compat; if so, keep files in place of compat and you should be OK:

passwd: pgsql files
group: pgsql files
shadow: pgsql files

Save the file and close it (leave it open in the other terminal). What we have done is tell NSS to first look for users in the database and, if that fails, fall back to the /etc/passwd, /etc/shadow and /etc/group files.

WARNING! The documentation for the libnss-pgsql2 plugin states that you should put compat or files first, and pgsql after it. THIS IS WRONG AND MAY RENDER YOUR SYSTEM UNUSABLE! The same goes for “[SUCCESS=continue]”. Do not use this statement in /etc/nsswitch.conf, because it DOESN’T WORK PROPERLY and WILL DENY ACCESS TO ALL USERS!

Now we have to test whether NSS is still resolving users and groups. You can do this by typing:

getent group
getent passwd
getent shadow

Do this as root and as a normal user. As root you should see entries for group, passwd and shadow (essentially what is currently available in the /etc files). A normal user should see group and passwd entries, but running getent shadow should not return anything. Here’s an example:

root@localhost:~# getent group
root:x:0:
bin:x:1:
daemon:x:2:
sys:x:3:
adm:x:4:
tty:x:5:
disk:x:6:
lp:x:7:
mem:x:8:
kmem:x:9:
wheel:x:10:wolverine
mail:x:12:postfix
news:x:13:
uucp:x:14:
man:x:15:
...

If any of the getent commands hang or do not return entries, that indicates a problem with the libnss-pgsql2 configuration or with nsswitch.conf. In this case I recommend reverting to the original /etc/nsswitch.conf and making sure you have done everything properly: in particular, that the PostgreSQL server is running, that the database exists and has the proper schema, and that the roles have the proper privileges. Make sure your pg_hba.conf is set up properly and that PostgreSQL is accessible through a TCP socket on localhost (127.0.0.1), or on whatever other address you use if PostgreSQL runs on another server.
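
For reference, a minimal pg_hba.conf entry allowing both roles to reach the unix database over local TCP might look like the line below (md5 authentication is an assumption; adjust to your setup):

# TYPE  DATABASE  USER          ADDRESS        METHOD
host    unix      nss,nssadmin  127.0.0.1/32   md5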

If all the getent commands behaved as described and returned entries when they should, everything is working properly and our plugin is being used by the NSS facility.

Now we can create our first user in the database and see if we can log in. Let’s start by logging in as the postgres user and then opening psql on our unix database:

wolverine@localhost:~$ sudo su - postgres
[sudo] password for wolverine:
postgres@localhost:~$ psql unix
psql (9.1.15)
Type "help" for help.

unix=# insert into group_table (groupname) values ('testgroup');
INSERT 0 1

Now let’s verify that our group was inserted into the table and get its gid, which we will need for setting up the user’s group:

unix=# select * from group_table;
  gid  | groupname | descr | passwd
-------+-----------+-------+--------
 10000 | testgroup |       |
(1 row)
unix=# insert into passwd_table (username, passwd, gid, homedir) values ('testuser', 'x', 10000, '/home/testuser');
INSERT 0 1

and verify that the user’s passwd entry is set:

unix=# select * from passwd_table;
 username | passwd |  uid  |  gid  | gecos |    homedir     |   shell
----------+--------+-------+-------+-------+----------------+-----------
 testuser | x      | 10000 | 10000 |       | /home/testuser | /bin/bash
(1 row)

As you can see, the passwd entry exists. We have set “x” as the user’s password, which means we will use shadow_table to store the password instead of a plain-text password in passwd_table (exactly as the /etc files do). Let’s set up the shadow_table entry for our user. First we need to create the pgcrypto extension on our database:

unix=# create extension pgcrypto;
CREATE EXTENSION

Remember that pgcrypto must be installed with your PostgreSQL server installation for this to work. Also note that you can only create an extension on your database with an administrative role (e.g. postgres). Now let’s insert the shadow information for our user:

unix=# insert into shadow_table values ('testuser', crypt('mypassword', gen_salt('md5')), cast(extract(epoch from now()) as INTEGER) / 86400, 0, 99999, 7, 0, -1, 0);
INSERT 0 1

Let’s stop here for a moment. Since the shadow_table and /etc/shadow formats may not be very obvious, I’ll explain each field here:

  • username – name of the user, as stored in passwd_table
  • passwd – encrypted (salted and hashed) password
  • lastchange – number of days since the epoch (1970-01-01) when the password was last changed
  • min – minimum number of days before the user is allowed to change the password
  • max – maximum number of days after which the user must change the password
  • warn – number of days before max on which the user starts being warned to change the password
  • inact – number of days after the password expires until the account is disabled
  • expire – number of days since the epoch (1970-01-01) after which the account is disabled and can no longer be used to log in
  • flag – reserved field

Our insert into shadow_table may not be obvious, since we used two value constructs:

crypt('mypassword', gen_salt('md5')) 
cast(extract(epoch from now()) as INTEGER) / 86400

The first one uses the pgcrypto extension to generate a salted password hash from the password “mypassword”, using the md5 algorithm. YOU SHOULD NOT USE MD5 for hashing passwords, because MD5 is insecure. Pgcrypto’s crypt(), however, doesn’t support the newer SHA-256 or SHA-512 crypt schemes, which are considered secure. To salt and hash with those algorithms you have to devise your own solution, which is beyond the scope of this article.

The second one is a simple expression that extracts the UNIX timestamp (epoch) from the current date (now()), casts it from FLOAT to INTEGER, and then divides the number of seconds by the number of seconds in one day (86400) to obtain the number of days since 1970-01-01. We need this value inserted into the lastchange field.

Now, verify that the shadow entry was inserted properly:

unix=# select * from shadow_table;
 username |               passwd               | lastchange | min |  max  | warn | inact | expire | flag
----------+------------------------------------+------------+-----+-------+------+-------+--------+------
 testuser | $1$dksgT54M$JVwFYQS/j8NkZHeGVgbki0 |      16575 |   0 | 99999 |    7 |     0 |     -1 |    0
(1 row)

If everything was OK, close the psql shell. If you are not logged in as a normal user (i.e. you are root), log out. Now you should be able to test whether you can log in with the newly created database user by typing:

wolverine@srv01:~$ id testuser
uid=10000(testuser) gid=10000 groups=10000
wolverine@localhost:~$ su - testuser
Password:
No directory, logging in with HOME=/
testuser@localhost:/$

Congratulations! Authentication through the PostgreSQL database now works, and you can define new users simply by inserting records into the database. Of course, you have to create user home directories and skeleton files yourself, since you cannot use useradd, usermod, groupadd and similar tools. You should build your own solutions for adding, modifying and deleting users in the database, making sure to properly manage the home directories of each newly added or modified user.

The last thing we should do is install nscd, the Name Service Cache Daemon, which will cache entries from your PostgreSQL database in memory. This significantly speeds up user and group lookups and decreases the load on the PostgreSQL server, which is especially important when the user and group databases are large and queried frequently. You can install nscd by typing:

wolverine@localhost:~$ sudo apt-get install nscd

That’s it! Authenticating user accounts through a PostgreSQL database is now fully set up. If you have any questions or comments, I’d love to hear them.


Defining custom vars for Pyramid scaffold

This is a quickie. I was working on a custom Pyramid scaffold to ease the development of multiple REST-based microservices that share a common base. Instead of copying, pasting and changing, I decided to ease my work by creating a scaffold. There’s a quick tutorial in the documentation on how to do it: http://docs.pylonsproject.org/docs/pyramid/en/latest/narr/scaffolding.html.

However, it took me a little while to find out how I was supposed to pass custom variables to PyramidTemplate for use when rendering the files within a scaffold. The Pyramid documentation doesn’t explicitly state it, but it seems that PyramidTemplate is derived from the Template class in PythonPaste (or PasteDeploy, I don’t remember which). Taking a quick look at the Paster templates documentation here: http://docs.plone.org/develop/plone/misc/paster_templates.html – I stumbled upon this sentence:

You can also prepare template variables in Python code in your Paster template class’s pre() method:

So it seems that when defining your own Pyramid scaffold, you can override the pre() method of PyramidTemplate like this:

from pyramid.scaffolds import PyramidTemplate

class MyCustomTemplate(PyramidTemplate):
    _template_dir = 'mycustom_scaffold'
    summary = 'Template for mycustom scaffold'

    def pre(self, command, output_dir, vars):
        vars['myvar'] = 'THIS IS MY VARIABLE'
        return PyramidTemplate.pre(self, command, output_dir, vars)

As you can see, a vars dictionary is passed into the pre() method, and you can update it with your own variables. Hope you find it useful.
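
If I remember correctly, the scaffold’s template files (the ones carrying the _tmpl suffix) can then reference the variable with double-brace placeholders; a hypothetical mycustom_scaffold/README.txt_tmpl might contain:

This project was generated with myvar set to: {{myvar}}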


How to resolve “NoMethodError” in Chef

Recently I was given the task of implementing Chef on a client’s infrastructure. What I learned along the way is that when deploying Chef on existing server infrastructure there are almost no two identical systems: every server node is different. You have to be extra careful when provisioning servers, especially production ones. Sometimes you even stumble upon errors in Chef itself. I discovered one such error recently, and I’m going to show you a simple way to solve it.

If you have ever encountered an error like this:


================================================================================
Error Syncing Cookbooks:
================================================================================

Unexpected Error:
-----------------
NoMethodError: undefined method `close!' for nil:NilClass

you may wonder what this error means, especially if you are not a software developer. Luckily, you can run chef-client in debug mode like this:

chef-client -l debug

If you know Ruby, you’ll probably spot a traceback, but you’ll have to dig deeper into it to find that one of the libraries in Chef (file http.rb, line 368) has broken exception handling. When there is a problem creating a temporary file, the exception fires, but instead of giving you a proper exception it raises an error like the one at the top of this post. Changing the line:

tr.close!

to:

tr.close! if tr

resolves the problem with the exception and gives us the proper error:


================================================================================
Error Syncing Cookbooks:
================================================================================

Unexpected Error:
-----------------
ArgumentError: could not find a temporary directory

This is way easier to solve than the previous error, because it simply means that the temporary directory (usually /tmp) has improper permissions.

You should do:

chmod o+t /tmp

and voilà! The problem is solved. (The o+t sets the sticky bit; if the permissions on /tmp were mangled more thoroughly, the canonical mode is 1777, i.e. chmod 1777 /tmp.) You can now run chef-client again and the cookbooks will be synced.


256 color terminal in Konsole running under Mageia

I stumbled upon a problem with Konsole being incapable of showing 256 colors. The Linux distribution on which I experienced this particular problem is Mageia. It turns out that you have to do two things.

First, make sure you have ncurses-extraterms installed. You can install it (as root) on Mageia as follows:

urpmi ncurses-extraterms

After doing this, open Konsole and go to Settings -> Edit current profile -> Environment -> Edit, and then add or substitute the line beginning with TERM= as follows:

TERM=xterm-256color

Restart your Konsole and you should be ready to go.


Robotic Raspberry Pi powered lawn mower

Last week I got my second Raspberry Pi. If you don’t know it already, it’s a $25 fully blown computer of credit-card dimensions, with two USB ports, Ethernet, video, audio and an HDMI port. It has sixteen programmable GPIO pins, external display and camera ports, and is powered by a single micro-USB connector. Its power consumption is 3.5 watts, and it is capable of HD video output. Demand for it has currently overwhelmed supply, so it’s hard to come by, but I was lucky enough to get two development boards already.

So, what is it good for? Well, there are many projects by hobbyists and geeks already in the works, but since it runs a fully capable Linux, it is a very good fit for many things, especially as a universal and powerful robotics controller. I have a few ideas for projects using my Raspberry Pis, and I want to talk a little about one of them here.

Since I have a recreational plot in the countryside, there is always a problem with grass growing fast. On this parcel there are some flower borders, some bushes and some fruit trees, and the terrain is a little rough. Mowing the grass there is a lot of work, and it needs to be done almost weekly, because the grass grows really fast. Unfortunately, neither I nor my parents have time to do it. And since our lawn mower is somewhat old, and mowing the rough parts means carting the cut grass to the composter, it is tedious, hard work. So I have come up with the idea of an automatic, robotic lawn mower. I have already started on the programming side of the project and am currently investigating mechanical and electronic solutions. Having owned a car for a few months now and having had to work on fixing it, I have acquired a good deal of mechanical knowledge, mostly because I cannot afford to let professional mechanics do it and am forced to repair the car and bring it to a usable state on my own.

Let me talk a little bit about this project. I have already coded some basic building blocks, like discrete topographic representation maps for the terrain. My language of choice is of course Python. The terrain map is a two-dimensional representation of discrete areas – one cell in an array represents ten by ten centimeters of terrain, which should be quite sufficient, but the resolution is adjustable and limited only by the amount of memory and computing resources needed. Since the topographic map is a discrete representation of square 10 by 10 cm areas and I only need to represent the passability of each cell, I have developed a class called BitMap which uses the great bitarray module. It’s lightning fast and uses very little memory. For example, a representation of 64 by 64 discrete cells, which corresponds to a 6,4 by 6,4 meter area, takes only 512 bytes of memory. So representing a large terrain of a thousand square meters at a resolution of ten centimeters would take only 12,5 kilobytes of memory, which is next to nothing even given the memory constraints of the Raspberry Pi. Of course the topographic map can also be divided into segments (regions), offloaded to the SD card and loaded on demand to conserve memory further.
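
To give you an idea of how little there is to it, here is a simplified sketch (not the actual class) of a one-bit-per-cell grid built on bitarray:

from bitarray import bitarray

class BitMap(object):
    """One bit of passability info per 10 x 10 cm terrain cell (sketch)."""

    def __init__(self, width, height):
        self.width = width
        self.height = height
        self.bits = bitarray(width * height)
        self.bits.setall(False)          # False = unknown/impassable

    def set_passable(self, x, y, passable=True):
        self.bits[y * self.width + x] = passable

    def is_passable(self, x, y):
        return self.bits[y * self.width + x]

# 64 x 64 cells -> 4096 bits -> 512 bytes, as described above.
grid = BitMap(64, 64)
grid.set_passable(10, 20)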

The BitMap class represents the passability of terrain chunks. Based on this class, I’m currently developing a fully software simulator for testing ideas. The class allows loading from and saving to byte representations and 1-bit bitmap images, and implements a few algorithms, for example Bresenham’s line drawing algorithm, an ultra-fast queue-linear flood fill, and matrix combinations and differences. It also implements simple bounding-box collision detection. I have a few more ideas for improving this class, but for now it already fulfills its basic goals.
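
As a taste of what’s in there, Bresenham’s algorithm rasterizes a straight line onto grid cells – handy for marking a sensor’s line of sight or a straight path on the map. A rough sketch (the textbook integer version, not my exact implementation):

def bresenham_line(x0, y0, x1, y1):
    """Yield the grid cells on the line from (x0, y0) to (x1, y1)."""
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield x0, y0
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy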

The idea for my robotic lawn mower is to put it in unknown terrain and let it map the terrain and its boundaries. Mechanically, the robot will be equipped with a gasoline engine like a typical lawn mower. The engine will power an alternator that will charge a battery. The battery will power the electronics (including the Raspberry Pi controller) and the electric starter motor for the gasoline engine. The engine will be connected to a simple electronically driven clutch and a two-gear gearbox (forward and reverse). The front wheels will be turned by a servo controlled by the Raspberry Pi. Ultrasound sensors mounted on the front and back of the robot will detect obstacles. Since the lawn mower will explore the terrain by itself, it will also need mechanical or other sensors for detecting holes in the ground, so the robot won’t fall into them; I haven’t decided yet what solution I will go with for this problem. There’s also the problem of detecting off-limits areas of the terrain, like water reservoirs and flower beds. A camera mounted on a servo will enable computer vision, including but not limited to shape detection, obstacle detection aid and entity detection.

Since the robot must also be careful not to harm any animal or human in its area of operation (we are dealing with quickly rotating knives here), the camera and sensor arrays will also help detect curious cats, playful dogs, humans, etc. This will require careful tuning of threshold values. It also means that terrain mapping will need some kind of heuristic that allows re-exploring chunks of terrain previously mapped as inaccessible. Since the core problem for a lawn mower is covering an unknown area of operation as quickly and efficiently as possible, topographic mapping and detecting the limits of the area are of crucial importance. The software simulator I’m currently building will let me test different navigation and area-covering algorithms, and will also provide a platform for implementing statistical and feedback-based neural networks, allowing the robot to learn and improve its operating decisions with each iteration over a given terrain. Pathfinding will be based on heuristic algorithms, including the graph-based D* family, which has been used successfully on the Mars Exploration Rovers and in military-grade autonomous systems.
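
D* itself is a fair amount of work, so the simulator will start with something simpler. Even a plain breadth-first search over the grid already gives shortest paths on the passability map and makes a useful baseline. A minimal sketch, assuming the BitMap interface from the sketch above:

from collections import deque

def find_path(grid, start, goal):
    """Shortest 4-connected path between passable cells, or None."""
    parents = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            # Walk the parent links back to reconstruct the path.
            path = []
            cell = goal
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            path.reverse()
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < grid.width and 0 <= ny < grid.height
                    and (nx, ny) not in parents
                    and grid.is_passable(nx, ny)):
                parents[(nx, ny)] = (x, y)
                queue.append((nx, ny))
    return None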

Since the Raspberry Pi is equipped with sixteen GPIO ports and an I2C bus, designing a relay board for the sensor arrays and servomotors shouldn’t be problematic. Of course, connecting a sensitive electronic board to electrical parts can be quite dangerous, so filter circuits must also be implemented to protect the controller board.
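
On the software side, driving the pins from Python is straightforward with the RPi.GPIO library. A minimal sketch of steering-servo control via software PWM – the pin number and duty-cycle values here are made up for illustration and depend on the actual servo, and a hardware PWM or dedicated servo driver would give a steadier signal:

import time
import RPi.GPIO as GPIO

SERVO_PIN = 18          # hypothetical BCM pin driving the steering servo

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)

# Typical hobby servos expect a 50 Hz PWM signal; the pulse width
# (expressed here as a duty cycle) sets the steering angle.
pwm = GPIO.PWM(SERVO_PIN, 50)
pwm.start(7.5)                  # roughly centered

try:
    pwm.ChangeDutyCycle(5.0)    # steer one way
    time.sleep(1)
    pwm.ChangeDutyCycle(10.0)   # steer the other way
    time.sleep(1)
finally:
    pwm.stop()
    GPIO.cleanup()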

That’s all there is for now; more details about this project will be available as it progresses. So stay tuned, comment and wait for further updates.

Posted in Hacking, Programming, Python | 2 Comments

Windows 7 installation under KVM hypervisor

I had to install a Windows 7 virtual machine under the KVM hypervisor running on a Debian 6.0 host server. The goal was to use a QCOW2 file as the virtual hard drive. I like to take advantage of the available tools, so I created the QCOW2 file using:

qemu-img create -f qcow2 win7.qcow2 100G

Then I used virt-manager to set up the libvirt/qemu file for the virtual machine. However, it seems that virt-manager (at least the stock one from Debian stable) has problems managing the XML description files – it doesn’t always set them up properly. So I modified the hard drive definition in /etc/libvirt/qemu/VM_NAME.xml by hand, changing the bus type from ide to virtio, the image type from raw to qcow2, and the address type from drive to pci (this is required when using the virtio driver). I obtained an ISO image of Windows 7 Professional, which I attached to the VM as an IDE CDROM. So far, so good.

After restarting libvirt-bin I launched the VM installation from virt-manager. Windows 7 setup started without any problems, but as soon as I got to drive partitioning, the problems started to mount. I downloaded the VirtIO drivers from Red Hat, as suggested by the KVM website, attached the driver CD ISO as another IDE CDROM and restarted libvirt. After loading the drivers in the partitioning step of the Windows 7 setup, the virtual drive appeared and I created a new partition, but the Windows installer refused to proceed, saying something like “Windows will not be able to boot from this drive due to a nonexistent controller. Fuck you, I won’t allow you to proceed with the installation” or that kind of crap.

The solution: install on the QCOW2 partition using the ide bus type instead of virtio. The installation will take many hours, guaranteed. After installing, start the VM and allow it to configure everything on the first run. As soon as you see the Start Menu appear, shut down the VM immediately, so you won’t have to wait until your death for Windows to install six million updates and then one more.

Attach the VirtIO CD ISO to your VM and also create an additional small partition image that will use the virtio bus. You can use virt-manager to do this: click on the VM, then in the hardware setup/information tab choose Add below the hardware list, choose Storage, set the driver type to VirtIO and create a small partition, 1 GB or so. Don’t worry about QCOW2 here, it can be a simple RAW image. After doing this, start your Windows 7 virtual machine, wait until it loads and then go to: Start -> right click on Computer -> Properties -> Device Manager. Find the Unknown SCSI Controller, right click, Install/Update Driver, point it at your VirtIO CD, go to Win7\amd64 (I’m assuming you have a 64-bit virtual machine) and proceed. Windows should automatically find the appropriate driver, and after a while you should see that the Unknown SCSI Controller has become a Red Hat VirtIO SCSI Controller. Also, under hard drives you should see VIRTIO IDE DRIVE – this is your new small partition. After this, shut down Windows 7 again.

Why did we do this step, you might wonder? Well, Windows 7 won’t let you install a driver for non-existing hardware (or at least I don’t know how to do it), so we have to cheat and use a temporary decoy – a small virtio-bus partition – so that Windows 7 sees it and installs the VirtIO SCSI Controller driver.

After this we can delete the temporary decoy partition and change our /etc/libvirt/qemu/MACHINE_NAME.xml file. We have to change our primary partition to use the virtio bus instead of the ide bus, and select an appropriate address – usually type pci, domain 0x0000, bus 0x00, slot 0x06, function 0x0. If you have a non-standard setup, look through all the address tags and choose the slot accordingly: take the highest slot number on the same bus anywhere in the file and add one. Just remember that you count in hex, i.e. if your highest slot number is 0x09 then you have to use 0x0a and not 0x10.
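
For reference, the resulting disk section of the XML file should look roughly like this (the image path is just an example – use your own):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/win7.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>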

After doing this, start your VM and there you go – Windows 7 will run much faster using the paravirtualized VirtIO SCSI driver.

Posted in Random | 1 Comment

All alone at night

Posted in Random | Leave a comment