Category Archives: ubuntu

Authentication Tokens

I have two websites. The first (on the server tarragon) is readily available on the internet to the public (you are looking at it now), and also has a username/password based login capability. Some selected people are able to get into the back end of this website. Mostly these are people who get their mail on tarragon, or who have websites on tarragon, or both. The login capability allows them to manage their own accounts, change their password, etc.

The second website (on the server oregano) is inaccessible (or at least non-functional) except to authorized users. The authorized users are exactly those people having a login credential on tarragon. The only way to achieve a usable connection to oregano is to first log on to tarragon, and click a link there. This will create a redirect to oregano, passing a token which will allow the connection to succeed. The website on oregano will politely decline to function unless an appropriate token is received.

This article is about building that token. The properties of the token are as follows. First, it must provide the identity (username) of the user (the login used on tarragon). It must be encrypted, so that all (or at least part) of its contents are protected. It must not be replayable; that is, it should not be possible for someone to capture the token used by an authorized person and reuse it later. This includes the provision that it must be time limited: the token should expire after a short time and subsequently be useless. It should be possible to include other information in the token if needed. For example, the same token machinery can be (and is) used for sending an email to handle forgotten passwords, on the presumption that if joe has forgotten his password, we can send a link to joe’s email address and only joe will receive it.
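As a concrete illustration, the cleartext payload might look something like the following sketch (the field names and the five-minute lifetime here are hypothetical, not the exact format used on tarragon):

<?php
// Hypothetical token payload, before encryption.
$payload = [
    'user'    => 'joe',                                     // identity: the tarragon login name
    'issued'  => time(),                                    // when the token was created
    'expires' => time() + 300,                              // short lifetime limits replay
    'nonce'   => bin2hex(openssl_random_pseudo_bytes(16)),  // one-time random value
    'extra'   => ['purpose' => 'password-reset'],           // optional additional information
];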

In the first implementation, I thought to use joe’s password for the encryption. While this can be done, it is really a flawed plan, because I don’t actually have joe’s cleartext password, I only have the hashed version of it. Using joe’s hashed password as a key is obviously vulnerable to capturing the file containing the hashed passwords. The reason they are hashed at all, of course, is that capture of files full of user information occurs all too frequently. If tarragon is available on the internet, I am obliged to assume that a sufficiently motivated and funded attacker could get his hands on the user database. Of course, it is highly unlikely that tarragon would be an interesting target for such an attack. But just because the server doesn’t have national security secrets, that is no excuse to be sloppy.

So I reimplemented it using the certificates on the two machines. Both of the servers have certificates and use tls for their connections, i.e. they are https instead of http sites. After a little futzing around and reading, I discovered a fairly straightforward way for the php code on server ‘a’ (tarragon) to capture the certificate for remote server ‘b’ (oregano) and extract its public key. Then tarragon can encrypt the token using oregano’s public key, so that only oregano can decrypt the token. I could go further, and encrypt again (actually first) using tarragon’s private key, so that oregano could verify that only tarragon could have sent it.

Actually, public key encryption isn’t really used for the token itself. Instead, a random key is chosen for a symmetric cipher; that key is then encrypted with oregano’s public key, and subsequently decrypted on oregano with its private key. The encrypted key is sent along with the encrypted message.

The function I used in php is part of the openssl library in php, and is called openssl_seal (the other end is openssl_open). There are lower level functions that would allow one to accomplish the same things, but these seemed straightforward to use. One problem, however, is that openssl_seal is written to use RC4 for the cipher, and RC4 is frowned upon as insecure in a number of contexts. Openssl_seal allows passing an additional parameter to select a different cipher, but strangely has no provision for passing an initialization vector, so one can’t use any cipher that requires one. Eventually I decided to use AES in ECB mode, despite the problems with ECB. This passed syntax but failed horribly at run time: the apache worker just seemed to disappear! While debugging, an error_log call before the openssl_seal showed up in the log, an error_log call after it did not, and a surrounding try/catch block was never triggered. WTF? It took two days to figure this out. So I just went back to not specifying a cipher and letting it use RC4, and it worked. For the moment at least, I’m leaving it with RC4, since I am only encoding a small token.
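A minimal sketch of the seal/open round trip, assuming a payload like the one sketched earlier and hypothetical key file locations (error handling omitted):

<?php
// On tarragon: encrypt with oregano's public key (obtained as described below).
$payload = json_encode(['user' => 'joe', 'expires' => time() + 300]);
$pubkey  = openssl_pkey_get_public(file_get_contents('/etc/ssl/certs/oregano-pub.pem'));
$sealed  = '';
$keys    = [];
if (openssl_seal($payload, $sealed, $keys, [$pubkey]) === false) {   // default cipher is RC4
    error_log('openssl_seal failed');
}
// base64-encode $sealed and $keys[0] and pass both along with the redirect to oregano.

// On oregano: openssl_open decrypts the envelope key with the private key, then the payload.
$privkey = openssl_pkey_get_private(file_get_contents('/etc/ssl/private/oregano-key.pem'));
$opened  = '';
if (openssl_open($sealed, $opened, $keys[0], $privkey)) {
    $token = json_decode($opened, true);
}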

The trick to getting a remote certificate in PHP was to use the stream facility, which opens a socket, together with a stream context, which is a set of parameters attached to the stream. The stream context is set to use ssl, the socket is established to port 443, and then the stream context will happily yield up the peer certificate that it received during the tls negotiation.
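Roughly, the sequence looks like this (a sketch, with a hypothetical host name and no error handling):

<?php
// Ask the ssl transport to capture the peer certificate during the handshake.
$ctx = stream_context_create(['ssl' => ['capture_peer_cert' => true]]);

// Connect to the remote server's https port; the tls negotiation happens here.
$fp = stream_socket_client('ssl://oregano.example.net:443', $errno, $errstr,
                           30, STREAM_CLIENT_CONNECT, $ctx);

// Pull the captured certificate back out of the context and extract its public key.
$params = stream_context_get_params($fp);
$cert   = $params['options']['ssl']['peer_certificate'];
$pubkey = openssl_pkey_get_public($cert);
// $pubkey is what gets handed to openssl_seal() on tarragon.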

Making MySQL serve UTF8 correctly

If the MySQL server decides that its default environment is UTF8, and that its client actually wants Latin1, it will translate the return values.

I’ve never before had to be careful of the distinction. Perhaps once in a blue moon I would have a record with a “real” quotation mark or a character with an accent, but if it didn’t work correctly, it was never much of a bother.

Once it became important, I had to understand what was happening. I have a table that has filenames in it, and some of those filenames contain accented characters (a-acute, e-grave, o-umlaut, etc.). The actual files on disk have the names encoded in utf8. The records in the database are also recorded in utf8. But the records were being translated by mysql from utf8 to latin1 as they came in. So “Mamá” was recorded on disk as Mam\xc3\xa1 in the directory, and in the database, but when I got the row into memory, the filename field said Mam\xe1. The difference between latin1 and utf8 for this purpose is that all these many “western/latin special characters” were mapped in latin1 to values within the 256 characters available in 1 byte. So the first 128 in the latin1 codespace were ordinary ascii, and the high-order 128 had as many of the western/latin diacriticals as possible crammed in there. And in latin1, E1 is a-acute.

But on the web these days, utf8 is much preferred. Latin1 is ok if all you want is the carefully selected subset of 128 characters that can be shoehorned into the high end of the code-point space. But utf8 is a far more general solution, using a multibyte sequence to represent over a million characters and special symbols.

Turns out mysql has a bunch of variables to control character set and collating sequence. With phpmyadmin, one can look at database->Variables and see character_set_database, character_set_filesystem, character_set_results, character_set_server, character_set_system, and a bunch more. Or in the mysql client one can do show variables like ‘%character_set%’;

My problem was that the server had come up believing that some of these were set to utf8 and some were set to latin1. I haven’t tried to figure out the logic of how it chooses its defaults; I don’t want it to default, I want to tell it what I want. So the solution was to add the directive “character-set-server=utf8” to the mysql configuration file (on cinnamon it was /etc/mysql/mysql.conf.d/mysqld.cnf). After restarting, all of the relevant character_set_xxx variables come up as utf8.
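For reference, the directive goes in the [mysqld] section of that file; something like this (a sketch based on the path above, with the other shipped settings left untouched):

# /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
character-set-server = utf8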

Update: 8/9/17

I used these changes on oregano and tarragon as well, but that creates a different problem for me with the blogforacure data. The blogforacure database, built a long time ago, has lots of tables in latin1. There are not a lot of non-ascii characters, but there are a few. One frequent source is people typing a double space after a period, which ckeditor tries to preserve by creating a non-breaking space, which is hex A0 in latin1. When the site reads this back, if mysql is told that the database is actually utf8, then it displays this as Â. So if I see a bunch of A-circumflex in the output, it means I actually have latin1 characters in the database which I am interpreting as if they were utf8.

Removing the specification of character-set-server=utf8 causes the negotiation to give the right result, and the latin1 non-breaking space appears correctly in the output.

Reinstalling Libvirtd

I hosed up the configuration of libvirt on Cinnamon, trying to change the network definition.
It was so fouled up I decided to remove and reinstall libvirt-bin, qemu, and virt-manager.
After the reinstall, the default network did not reappear, and I went looking for how to reinstall it. In the end I had to piece together information from various places, but the upshot is that the definition of networks for libvirt is in /usr/share/libvirt/networks. This directory was missing for me, and I had to recreate it:

root@cinnamon:~# mkdir /usr/share/libvirt/networks
root@cinnamon:~# cd /usr/share/libvirt/networks
root@cinnamon:/usr/share/libvirt/networks# touch default.xml
root@cinnamon:/usr/share/libvirt/networks# chmod 0777 default.xml
root@cinnamon:/usr/share/libvirt/networks# emacs default.xml

What I put into the file was:
<network>
  <name>internal</name>
  <bridge name="virbr0" />
  <forward/>
  <ip address="192.168.122.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.122.2" end="192.168.122.254"/>
    </dhcp>
  </ip>
</network>

Getting a Gnome desktop in VNC under Ubuntu

There is a lot of old and wrong data out on the net about this. I think it is because of the continuing evolution of the desktop environment/gnome/unity etc., most of which I don’t understand.

Below is the overall approach I use, and some things I had to do to make it work. I will use as an example getting up a vnc viewer screen on oregano showing a gnome desktop on cinnamon. Interestingly, it proved much harder on cinnamon (running Wily Werewolf) than on pepper or the Butcher box named kodi, both of which are running Trusty Tahr.

On oregano the file /usr/local/bin/<remote hostname> contains a script to make an ssh connection to <remote hostname> and also to establish various ssh tunnels to that hostname (for example, to look at databases). In the case of hosts where I want to be able to open a graphical environment using vnc, the script will contain the following, among other things (here the remote host is cinnamon, and the port numbers in the command aren’t important, except that the “:3” has to match the “5903”. Selecting the port numbers carefully becomes important if one has multiple such connections in play at once):

ssh uname@cinnamon vncserver -geometry 2400x1200 :3
ssh -L *:5901:localhost:5903 -g uname@cinnamon

which runs the vncserver command on cinnamon as user uname. The vncserver command looks for an xstartup file under ~/.vnc/xstartup. That file is the key. The (important) contents of the file are (any line folding shown here is not in the actual file):

#!/bin/sh
unset SESSION_MANAGER
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
gnome-session --disable-acceleration-check --session=gnome-flashback &

I think that the parameter --disable-acceleration-check is still required, as of this writing, but it is one thing I put in to try to solve the problem, and then had to do other things, and I haven’t yet gone back to test without it. In any case I’m pretty sure it can’t hurt, because acceleration is what we don’t have. (Update: I later found pepper didn’t work correctly until I put it in.)

MYSQL on Ubuntu 15.10

I haven’t researched whether this changed in 15.10, or whether it has been this way since ubuntu switched to systemd, which is probably the case.

Under systemd, ubuntu no longer uses the /etc/init.d/mysql script, but instead uses a systemd unit in /lib/systemd/system/mysql.service which invokes /usr/bin/mysqld_safe to start and stop mysqld.

I have had a lot of trouble with this, and had to do a lot of debugging to figure out what is going on. Probably I would not have had trouble if I were not trying to port over a running mysql installation manually, i.e. if I just installed mysql-server and proceeded to create new databases, new entries in mysql etc.

One issue is that a mysql install creates a file in /etc/mysql called debian.cnf which contains a user/password for user debian-sys-maint with a generated password, and this is put into the mysql users table, to enable various operations to be performed by mysqladmin using these credentials.

The first problem was that when I copied over the mysql table from the previous installation, I was copying in the old password for debian-sys-maint, which didn’t match the debian.cnf file that was installed when I did the apt-get install mysql-server. So I had to read the debian.cnf file, extract the password, and change the password in the mysql table.
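Roughly, the fix looked like this (a sketch; the MySQL 5.6-era SET PASSWORD syntax is assumed, and the password shown is a placeholder for the one actually found in debian.cnf):

-- in the mysql client, as root, after reading the generated password out of /etc/mysql/debian.cnf
SET PASSWORD FOR 'debian-sys-maint'@'localhost' = PASSWORD('password-from-debian.cnf');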

S3cmd on ubuntu 15.04

After installing ubuntu 15.04 my backups to S3 stopped working.

I tried running them manually to see what was happening, and I got errors: some goofy stuff about the url I was using, net.wmbuck.backups….s3.amazonaws.com, not being part of *s3.amazonaws.com. When I searched the net I found that there was a change in python 2.7.9 having to do with validating certificates, and some conflict with the wildcard cert being used by Amazon S3, with the result that an error occurs whenever an S3 bucket happens to contain the “.” character in its name.

My buckets are all named net.wmbuck.x so I am vulnerable to this error.

There is a fix for this in S3cmd version 1.6.0, but the latest ubuntu as of this writing has only S3cmd 1.5.x, and attempting to upgrade using apt-get doesn’t get anything newer.

I did an apt-get remove of s3cmd, and then downloaded a tarball, and installed it into /usr/local/bin.
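For what it’s worth, the manual install went roughly like this (a sketch; the version number and tarball location are assumptions):

# after downloading the s3cmd 1.6.0 release tarball:
tar xzf s3cmd-1.6.0.tar.gz
cd s3cmd-1.6.0
sudo python setup.py install --prefix=/usr/local   # puts the s3cmd script under /usr/local/bin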

Ubuntu 15.10 will be coming out next month, and when I get around to installing that perhaps the version of s3cmd will have the fix.


Notes on setup of HDHomeRun, tvheadend, kodi live tv

HDHomeRun provides a source of tv in htsp format. They provide an app for windows/linux/mac which enables watching the tv stream directly, and changing channels. They also provide a Kodi Add-on which allows watching the streamed material directly from there. However, this is just watching, and doesn’t enable the guide, PVR etc.

To use the built in features in Kodi for “live tv”, you have to have another piece of software, which Kodi calls the “backend”. There are apparently different backends supporting different hardware, but one of the backends is called “tvheadend”, and it supports HDHomeRun, and is supported by Kodi.

The tvheadend software has to be installed. apt-cache search tvheadend shows:
tvheadend – Tvheadend
tvheadend-dbg – Debug symbols for Tvheadend
kodi-pvr-tvheadend-hts – Kodi PVR Addon TvHeadend Hts – PVR API:1.9.2
kodi-pvr-hts – TVHeadEnd PVR for Kodi
kodi-pvr-hts-dbg – debugging symbols for TVHeadEnd PVR for Kodi

The “kodi-pvr” bits are kodi add-ons that have to be added to kodi (in linux only) in order to provide the api between kodi and the backend. Kodi for mac and windows has the pvr bits included, but they have to be added in linux. After the apt install, go to add-ons -> my add-ons -> PVR clients, select TVHeadend HTSP Client, configure it, and then activate it.
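The install itself is just the usual apt step; something like this (a sketch: tvheadend goes on the backend machine, and the pvr add-on goes on a linux machine running kodi):

# on the backend host:
sudo apt-get install tvheadend
# on a linux host running kodi:
sudo apt-get install kodi-pvr-hts      # or kodi-pvr-tvheadend-hts, depending on the kodi version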

Since the kodi I watch is on coriander (the mac mini), the pvr stuff is already installed with kodi. I only needed to install the tvheadend piece somewhere, and I put it on cinnamon where the large file media array is, so that PVR recorded material can be stored there too.

Cyrus-Imap Administration

Every time I have to mess with cyrus-imap mailboxes I spend a half hour trying to figure out how to get cyradm to run. While I have by no means figured it all out, I do have one piece of lore worthy of being written down.

My imap server forbids plaintext logins unless they are within a TLS session, so /etc/imapd.conf has the setting allowplaintext: 0

But cyradm uses imap authentication (witness all the failed attempts to get cyradm to authenticate, which put entries in the /var/log/secure log via pam_unix imap:auth). The problem, of course, is that cyradm doesn’t have a tls session, so allowplaintext causes the plaintext password to be rejected.

Temporarily set allowplaintext: 1 in /etc/imapd.conf, systemctl restart cyrus-imapd, and then, as root, run cyradm tarragon. Make all the mailboxes you want. Then reverse the change and turn plaintext back off.
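In command form, the dance looks roughly like this (a sketch using the file and service names mentioned above; editing the file by hand works just as well as the sed one-liners):

sed -i 's/^allowplaintext:.*/allowplaintext: 1/' /etc/imapd.conf
systemctl restart cyrus-imapd
cyradm tarragon                 # as root; create the mailboxes
sed -i 's/^allowplaintext:.*/allowplaintext: 0/' /etc/imapd.conf
systemctl restart cyrus-imapd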

Managing passwords on this server

This blog is running on my wmbuck.net server, tarragon, in the Amazon cloud. This server, in addition to hosting this blog, hosts about 20-25 websites (for friends, most of them very low traffic), including my own. It also operates mail for myself and a few others, and provides some other services.
One of the weaknesses has been that most of the people who use the server aren’t really very unix literate, and they don’t really WANT to be. Perhaps they want a website, or they want to have a good place to manage their mail. But in general, the last thing they want is to learn how to ssh into the server to change their password.
So, for most of them, they just use whatever password I set up for them.
One of my friends, who just began using mail on the server, was surprised that it was not convenient to change his password. That spurred me to address the long-standing problem of how to let people manage their passwords for access to services.
The blog now has a new menu on the left, for access to the backend, and for linking to the reset-password screen. There is also a reset password link on the login page https://wmbuck.net/index/login.
The same password is used for all the wmbuck.net stuff: the password for access to mail, the password to get access to protected websites in apache, and the password for logging in to the wmbuck.net backend website.

Boinc client: No usable GPUs

The first thing I had to do to get this to work was to obtain the GPU support package; for ubuntu this was boinc-amd-opencl.

Then I had to add the xhost command into /etc/init.d/boinc-client, to give the boinc username access to the GPU.

The information on the web was wrong about this. The command I had to add was:

xhost si:localuser:boinc

si means server-interpreted, and the kinds of strings accepted are described in man Xsecurity. localuser means a local username. The web articles I found claimed one needed to do xhost local:boinc, but the description of the local: form is that it doesn’t take a username, and that it makes LOCAL connections available. Which sounds good, but didn’t work. After doing xhost local:boinc it was the same as if I had just done xhost local: I got an entry “LOCAL” when I ran xhost, but it didn’t work.