Saslauth, mail and realms

This server (tarragon) runs a postfix instance which provides mail service for my own wmbuck.net as well as for about a dozen other domains belonging to friends and clients. Postfix offers three different ways that a server can receive (be the final destination for) mail directed to a domain:

1) as what postfix calls the canonical destination (i.e. mail for x@wmbuck.net) where tarragon IS wmbuck.net, and each mail recipient maps onto a user who has a login account on the server, and messages are delivered to that account;

2) as a virtual alias destination, where mail directed to y@somedomain.com is accepted, but for each such address there is a corresponding forward address to some other location bob@gmail.com or something, and the actual mail messages do not reside on the server; and finally

3) as a virtual mailbox destination, where mail directed to z@anotherdomain.com arrives and is stored in mailboxes on tarragon, awaiting pickup/reading by the user, but without requiring that there be a user z with an actual login account on tarragon. This requires that the mail store on tarragon be set up to maintain different sets of mailboxes for different domains. There can be a user fred@wmbuck.net and another user fred@fredsdomain.com and the mail is not intermixed.

Tarragon uses cyrus-imap as the mail store, which provides the ability to have different mailboxes for different domains. To support that, mailbox names are constructed differently, so that cyrus-imap can have a mailbox fred, but can also have a separate mailbox fred@fredsdomain.com.

This requires, in turn, that the imap server be able to identify the correct mailbox when a mail client attaches, and be able to separately authenticate for each mailbox. When cyrus-imap is configured to support this separation, it requires that the username on login be fred@fredsdomain.com, rather than simply fred.
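
For reference, this separation is switched on in cyrus's imapd.conf. A sketch of the relevant settings, assuming the stock option names:

virtdomains: userid
defaultdomain: wmbuck.net
sasl_pwcheck_method: saslauthd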

Cyrus-imap uses the saslauthd daemon to authenticate. Saslauthd in turn calls upon pam, passing in the username, password and realm (domain) it receives from imap (or from postfix for smtp, or apache for website auth), which in turn received them in the login message from the user's mail client. Pam's authentication for mail is set to use a module called pam_mysql, which matches the credentials against entries in a mysql database.
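
The pam side looks roughly like the following sketch; the database, table, and column names here are illustrative placeholders, not my actual schema:

# /etc/pam.d/imap (sketch; values are placeholders)
auth    required pam_mysql.so user=mailauth passwd=dbpass host=localhost db=mail table=accounts usercolumn=username passwdcolumn=password crypt=1
account required pam_mysql.so user=mailauth passwd=dbpass host=localhost db=mail table=accounts usercolumn=username passwdcolumn=password crypt=1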

Here is where things get tricky. Take the mail account dee@thegraygeek.com. There is also a user dee with a system account (i.e. a type 1 canonical mail account, dee@wmbuck.net). I can choose either to have a) only one entry in the database, for user dee, with a single password, and that same entry is consulted for access to either mailbox (dee@wmbuck.net or dee@thegraygeek.com), though they are still separate mailboxes; or alternatively b) two different database entries, dee and dee@thegraygeek.com, each with its own password.

A digression: I could, and for many years did, choose to list thegraygeek.com as a canonical final destination in postfix. If I do that, then mail for dee@thegraygeek.com goes into the mailbox for user dee on tarragon, just the same as mail for dee@wmbuck.net. They go into the same mailbox. But when I began supporting virtual mailbox domains, I separated them – using the gray geek account as a test case for hosting virtual mailbox domains. 

Originally I set it up with the idea that there would be separate database entries. With them separated, an imap login for user dee at host wmbuck.net attaches to the mailbox for user dee, while an imap login for user dee@thegraygeek.com at host wmbuck.net attaches to the mailbox for dee@thegraygeek.com, who does not have an account on tarragon.

I set all this up over a year ago, and it seemed to be working. Then I set up a new account for a friend who had a new domain name, and discovered that I had a problem. It so happened that in every case where I had created a mailbox of the form fred@fredsdomain.com, I also had an account fred on wmbuck.net, many of them never used and left over from the days when I was only doing canonical logins. I discovered that even though I had entries in the database for login as fred@fredsdomain.com, the login process was actually using the database entry for fred. It also happened that all these account pairs (fred and fred@) had the same password, so logins succeeded anyway. As soon as I added an account bob@bobsdomain.com which did NOT have a corresponding server login account bob with the same password, authentication failed.

When I tried testsaslauthd -u bob@bobsdomain.com -p <pw>, it worked, so the pam machinery and the pam_mysql plugin were working correctly. The problem lay between cyrus-imap and saslauthd. I discovered that (a) cyrus-imap takes an incoming username of a@b and separates it into "username" and "realm", and passes those separately to saslauthd, and (b) saslauthd has a parameter '-r', which I had previously failed to discover, that causes it to append the incoming realm to the incoming username when it attempts to authenticate. Without '-r', saslauthd was using only the incoming username, fred or bob, in its call on pam. If there was such an account and the password matched, saslauthd would succeed and the connection would be permitted. Note that cyrus-imapd would still connect to the correct mailbox; the issue was that it used the wrong database entry to authenticate against.
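
On fedora, the flag goes into the saslauthd sysconfig file, something like this (assuming the stock packaging):

# /etc/sysconfig/saslauthd (excerpt)
MECH=pam
FLAGS="-r"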

Once I turned on the '-r' parameter to saslauthd, imap worked for both dee and dee@thegraygeek.com, but I got a problem in smtp. On the call from smtp->saslauthd->pam_mysql, when saslauthd has the -r parameter, simply 'dee' as the username will not authenticate. The log record coming from pam_mysql indicates the username comes across as 'dee', but the match fails. If I create a database entry for dee@wmbuck.net, and set the username on smtp to dee@wmbuck.net, it works.

After a couple of weeks, I began to change my mind. The database is used for several kinds of logins, and going this way, with separate logins for bob@bobsdomain.com and bob, meant two records in the database. Bob could get confused if he changed his password on one but not the other. For some users I could probably just delete the 'bob' entry and force bob to type his full email address in order to log in for other things, but I decided this was unappealing.

So I have turned off the '-r' parameter, and eliminated all the entries of the form bob@bobsdomain.com from the database. Bob must still use his full email address when connecting to imap, but for plain logins in other places he can use just bob. This also means I no longer need a separate database record for canonical mail users to do smtp.


Authentication Tokens

I have two websites. The first (on the server tarragon) is readily available on the internet to the public (you are looking at it now), and also has a username/password based login capability. Some selected people are able to get into the back end of this website. Mostly these are people who get their mail on tarragon, or who have websites on tarragon, or both. The login capability allows them to manage their own accounts, change their password, etc.

The second website (on the server oregano) is inaccessible (or at least non-functional) except to authorized users. The authorized users are exactly those people having a login credential on tarragon. The only way to achieve a usable connection to oregano is to first log on to tarragon, and click a link there. This will create a redirect to oregano, passing a token which will allow the connection to succeed. The website on oregano will politely decline to function unless an appropriate token is received.

This article is about building that token. The properties of the token are as follows. First, it must provide the identity (username) of the user (the login used on tarragon). It must be encrypted, so that all (or at least part) of its contents are protected. It must not be replayable; that is, it should not be possible for someone to capture the token used by an authorized person and reuse it later. This includes the provision that it must be time limited: the token should expire after a short time and subsequently be useless. It should be possible to include other information in the token if needed. For example, the same token machinery can be (and is) used for sending an email to handle forgotten passwords, on the presumption that if joe has forgotten his password, we can send a link to joe's email address and only joe will receive it.

In the first implementation, I thought to use joe’s password for the encryption. While this can be done, it is really a flawed plan, because I don’t actually have joe’s cleartext password, I only have the hashed version of it. Using joe’s hashed password as a key is obviously vulnerable to capturing the file containing the hashed passwords. The reason they are hashed at all, of course, is that capture of files full of user information occurs all too frequently. If tarragon is available on the internet, I am obliged to assume that a sufficiently motivated and funded attacker could get his hands on the user database. Of course, it is highly unlikely that tarragon would be an interesting target for such an attack. But just because the server doesn’t have national security secrets, that is no excuse to be sloppy.

So I reimplemented it using the certificates on the two machines. Both servers have certificates and use tls for their connections, i.e. they are https rather than http sites. After a little futzing around and reading, I discovered a fairly straightforward way for the php code on server 'a' (tarragon) to capture the certificate for remote server 'b' (oregano) and extract its public key. Then tarragon can encrypt the token using oregano's public key, so that only oregano can decrypt it. I could go further, and encrypt again (actually first) using tarragon's private key, so that oregano could verify that only tarragon could have sent it.

Actually, public key encryption isn't really used on the token itself. Instead, a random key is chosen for a symmetric cipher, the token is encrypted with that key, and the key itself is encrypted with oregano's public key, to be decrypted on oregano with its private key. The encrypted key is sent along with the encrypted message.

The functions I used are part of the openssl library in php: openssl_seal (and, on the other end, openssl_open). There are lower-level functions that would allow one to accomplish the same things, but these seemed straightforward to use. One problem, however, is that openssl_seal is written to use RC4 as the cipher, and RC4 is frowned upon as insecure in a number of contexts. Openssl_seal allows passing an additional parameter to select a different cipher, but strangely has no provision for passing an initialization vector, so one can't use any cipher that requires one. Eventually I decided to use AES in ECB mode, despite the problems with ECB. This passed syntax but failed horribly at run time: the apache worker just seemed to disappear. An error_log call before the openssl_seal showed up in the log, an error_log call afterwards did not, and a surrounding try/catch block was never triggered. WTF? It took two days to figure this out. So I went back to not specifying a cipher and letting it use RC4, and it worked. For the moment at least, I'm leaving it with RC4, since I am only encoding a small token.
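
For what it's worth, the seal/open round trip looks roughly like this. The variable names and token layout are mine; $pubkey and $privkey are assumed to already hold the loaded keys:

// on tarragon: openssl_seal picks a random symmetric key, encrypts the
// token with it (RC4 by default), and encrypts that key under oregano's public key
$token = json_encode(array('user' => 'joe', 'expires' => time() + 300));
$sealed = ''; $ekeys = array();
openssl_seal($token, $sealed, $ekeys, array($pubkey));
$payload = base64_encode($ekeys[0]) . '.' . base64_encode($sealed);

// on oregano: recover the symmetric key with the private key, then the token
list($k, $s) = explode('.', $payload);
openssl_open(base64_decode($s), $opened, base64_decode($k), $privkey);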

The trick to getting a remote certificate in PHP was to use the stream facility, which opens a socket, together with a stream context, which is a set of parameters for the stream. The stream context is set to use ssl, the socket is established to port 443, and the stream context will then happily yield up the peer certificate it received during the tls negotiation.
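
A sketch of that retrieval (the hostname is illustrative):

// open a tls connection and ask the context to keep the peer's certificate
$ctx = stream_context_create(array('ssl' => array('capture_peer_cert' => true)));
$fp = stream_socket_client('ssl://oregano.example.com:443', $errno, $errstr, 30,
        STREAM_CLIENT_CONNECT, $ctx);
$params = stream_context_get_params($fp);
$cert = $params['options']['ssl']['peer_certificate'];
// extract the public key for use with openssl_seal
$pubkey = openssl_pkey_get_public($cert);
fclose($fp);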

Setting up OpenVAS

I haven’t done any serious security scanning since playing around with Nessus back in 2006. I decided that I needed to do this, not only on my own servers but on those that I managed for others. It would be very embarrassing to get hacked.

So first I grabbed up Nessus, and discovered that it has, in the meantime, become a mostly commercial product. There is an open source spin called OpenVAS. This is about setting that up.

OpenVAS itself has two parts, and it comes with a third part, a web frontend, from a company called Greenbone Security. The two parts of openvas are the scanner (openvassd) and the manager (openvasmd), while the front end is gsad.

I installed them with dnf, as they are packaged with fedora. This creates a dozen binaries, /etc/openvas and /etc/pki/openvas, /var/lib/openvas, and systemd units. A good way to go through the setup process is to run openvas-check-setup, which gives clues about what to do next.

The first step was openvas-mkcert, which builds a self-signed cert in /etc/pki/openvas. The next step was to install a redis server (dnf install redis) and fix its config file with unixsocket /tmp/redis.sock, then systemctl enable redis; systemctl start redis. Another step needed before downloading the "nvt" files is to set up a gpg key. Some of the instructions wanted the gnupg directory in /etc/openvas, but the fedora install creates a gnupg directory in /var/lib/openvas, so I used that. Then I downloaded the nvt files (openvas-nvt-sync) and also ran openvas-scapdata-sync and openvas-certdata-sync.
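
Condensed, the sequence was roughly this (paths per the fedora packaging):

openvas-mkcert
dnf install redis
# add to /etc/redis.conf:  unixsocket /tmp/redis.sock
systemctl enable redis; systemctl start redis
openvas-nvt-sync
openvas-scapdata-sync
openvas-certdata-sync
openvas-check-setup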

I was unsuccessful in getting signature checking of nvt scripts to work on either tarragon or oregano. The openvassd scanner (systemctl start openvas-scanner) won't run with the parameter in /etc/openvas/openvassd.conf set to check script signatures.

When the instructions said to start the scanner and then run the manager with the --rebuild option, I started the scanner with systemctl but ran the manager as openvasmd --rebuild, to build the "tasks" database.

After that I enabled and started openvas-manager and openvas-gsa (had already enabled and started openvas-scanner).

To use this on tarragon I use an ssh tunnel rather than opening up another port there. The connection must be made over https.
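
Something like this, assuming gsad is on its default port of 9392:

ssh -L 9392:localhost:9392 dee@tarragon
# then browse to https://localhost:9392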

Making MySQL serve UTF8 correctly

If the MySQL server decides that its default environment is UTF8, and that its client actually wants Latin1, it will translate the return values.

I’ve never before had to be careful of the distinction. Perhaps once in a blue moon I would have a record with a “real” quotation mark or a character with an accent, but if it didn’t work correctly, it was never much of a bother.

Once it became important, I had to understand what was happening. I have a table that has filenames in it, and some of those filenames contain characters (a-acute, e-grave, o-umlaut, etc.). The actual files on disk have their names encoded in utf8. The records in the database are also recorded in utf8. But the rows were being translated by mysql from utf8 to latin1 as they came in. So "Mamá" was recorded on disk as Mam\xc3\xa1 in the directory, and likewise in the database, but when I got the row into memory, the filename field said Mam\xe1. The relevant difference between latin1 and utf8 is that in latin1 all these "western/latin special characters" are mapped to values within the 256 characters available in one byte: the first 128 code points are ordinary ascii, and as many of the western/latin diacriticals as possible are crammed into the high-order 128. In latin1, \xe1 is a-acute.

But on the web these days, utf8 is much preferred. Latin1 is ok if all you want is the carefully selected subset of 128 characters that can be shoehorned into the high end of the code-point space, but utf8 is a far more general solution, using multibyte sequences to represent over a million characters and special symbols.

It turns out mysql has a bunch of variables to control character set and collating sequence. With phpmyadmin, one can look at database->Variables and see character_set_database, character_set_filesystem, character_set_results, character_set_server, character_set_system, and a bunch more. Or in the mysql client one can show variables like '%character_set%';

My problem was that the server had come up believing that some of these were set to utf8 and some to latin1. I haven't tried to figure out the logic by which it picks its defaults; I don't want it to default, I want to tell it what I want. So the solution was to add the directive character-set-server=utf8 to the mysql configuration file (on cinnamon that is /etc/mysql/mysql.conf.d/mysqld.cnf). After restarting, all of the relevant character_set_xxx variables come up as utf8.
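
For the record, the directive belongs in the [mysqld] section:

[mysqld]
character-set-server=utf8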

Saving YouTube Video

Trying to collect the old episodes of NYPD Blue, and there were some I couldn’t find on usenet. Turns out that the DVDs past season 4 were never produced. But there are old episodes, mostly captured from VHS that are out on Youtube.
After a little research, there is a cute way to capture the video files from youtube.
Click the episode on youtube to start it, and then capture the url in the browser location bar.
Now open VLC, select to play a network stream, and give it that url. It will start to play.
At this point, one can just tell VLC to convert and save, but that takes a long time. An easier and faster technique, as long as the youtube webm format is ok:
In vlc, with the stream playing, open tools->codec information and copy the link that is in the “Location” text box at the bottom.
Paste that link into a browser and go, and now the video plays directly from the Youtube servers.
Now you can just right click on the browser window where it is playing, and save video as.

Automatic TV Downloads

Having spent a lot of time downloading tv shows from primewire, and keeping a spreadsheet up to date about what has been gotten and what has not, I decided to look into using sickbeard to automate.

The parts of this are:

Usenet – One apparently has to have access to usenet to do this well. Usenet (sometimes also called net news) consists of thousands of "newsgroups", and posts within the newsgroups are "articles" (originally this was all textual bulletin-board material). A single piece of a binary is posted in an article, so a number of articles must be reassembled to obtain the entire file. Usenet was at one time, before the invention of the world-wide web, one of the two or three most important things on the internet (along with FTP and mail). One used to find a site that offered a usenet feed and tap into it. These days only some parts of usenet are available for free. The so-called "binary" newsgroups, which have all the media stuff, are too big, too volatile, and take too much bandwidth for ISPs to provide them for free. One has to pay somebody who does the daily updates; more on that below.

NZB files – These are files that describe where one can find the binaries for the tv shows, or movies, or what have you. They are somewhat like torrent files. The NZB file lists which articles in which newsgroups have the pieces of a particular file. For a given show, if you can download an NZB file and give it to a download program (like sabnzbd, mentioned in a moment), that program will download all the pieces in parallel, put them together, and recreate the desired file.

Sickbeard – a python script that automates the process of keeping track of what tv episodes are available for designated shows, and which ones you have. It takes advantage of indices on the net to figure out where stuff can be downloaded from usenet. The index tells which articles in which newsgroups have which piece. Sickbeard doesn’t actually do downloads. It retrieves NZB files from the index sites, and gives them to Sabnzbd.

Sabnzbd – This is a download program driven by NZB files. It is a python script. If you give it an NZB file, it will download the parts of the desired file and reassemble them, taking care of decompression, encryption, whatever, with options for renaming, removing temporary files, and lots of other stuff. This is a VERY useful thing in its own right. There is a wealth of stuff out there on usenet in the alt.binaries newsgroups: applications, movies, tv, etc. It is quite easy to search for something, download an NZB file and stick it into the folder where Sabnzbd is listening (mentioned more later), and it will take care of everything.

Couchpotato – Where Sickbeard manages tv, Couchpotato manages movies. It keeps track of what movies are released on DVD, and when movies become available it will download them, either from usenet or via torrents from movie sites.

I have created a directory on oregano for all the parts of the automated tv downloading. I put it on the /backup volume because there is a lot of space there. The torrent downloads are already there (in /torrentdownloads), so now there is a directory /tvdownloads.

I got sickbeard from git://github.com/midgetspy/Sick-Beard.git. I run the python scripts directly from the git repository. Its configuration is in /etc/sickbeard. I created /etc/systemd/system/sickbeard.service (from an example file called init.systemd), then systemctl enable sickbeard and systemctl start sickbeard.
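
The unit file is short. Mine is roughly as follows; the repository path and user here are illustrative:

[Unit]
Description=Sick Beard
After=network.target

[Service]
User=dee
ExecStart=/usr/bin/python /backup/tvdownloads/Sick-Beard/SickBeard.py --datadir=/etc/sickbeard --nolaunch

[Install]
WantedBy=multi-user.target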

Manage sickbeard through a web interface at http://localhost:8081. Sickbeard is started with its datadir pointing at /etc/sickbeard; in there sickbeard keeps its database, with information about each show it is managing, and the location where finished shows should be placed (which I have pointing at my /tvshows directory). Sickbeard is configured so that when a new episode becomes available, it searches usenet index sites, gets an NZB file for that episode, and passes it to sabnzbd for download. When sabnzbd has completed the download, it invokes a script in sickbeard's /etc/sickbeard/autoProcessTV folder called sabToSickBeard.py. That script takes care of moving the file downloaded by sabnzbd to /tvshows.

As mentioned, I had to sign up with a commercial usenet provider; I chose Giganews, at $14.95/mo.

I cloned SABnzbd from github (https://github.com/sabnzbd/sabnzbd.git). This is also python, and I run it directly from the repo. It normally creates a config in /home, but I set up its config in /etc/sabnzbd. I also set it up in systemd; the launch command tells it where its config is. When sabnzbd runs it responds on http://localhost:8080, so it can be configured and managed with a browser. It has directories sabnzbd/incomplete and sabnzbd/complete in the /backup/tvdownloads folder to do its work in.
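
The launch command is the important part. Mine is along these lines (repository path illustrative; -f points sabnzbd at its ini file, -s sets the address it listens on):

/usr/bin/python /backup/tvdownloads/sabnzbd/SABnzbd.py -f /etc/sabnzbd/sabnzbd.ini -s localhost:8080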

Sabnzbd receives its instructions about what to download in one of two ways: (a) it watches for nzb files in /backup/tvdownloads/nzbtodownload; anything that appears there it tries to download, leaving the result in its "complete" folder; or (b) sabnzbd generates keys (an API key and an NZB key) which allow other programs to communicate with it through its url (on localhost), so a requester (like sickbeard) can pass the nzb information by connecting to sabnzbd on that url. I have tried both, and sickbeard is currently using the url api. Sabnzbd can also be configured to run a script when it has finished a download; as already mentioned, it invokes a sickbeard script.

Sickbeard searches several online indexes to discover NZB files. It comes configured to use "Womble's Index" and the "Sick Beard Index", and it can also use some others; one of them, called "Usenet-Crawler", I registered with.

There were a couple of shows sickbeard had trouble with; I don't know why yet. But it is worth writing down that there are some other sites out there where you can search manually and download nzb files by hand. For example, I downloaded the NZB files for Hill Street Blues manually.

Sickbeard is now successfully monitoring the tv shows I watch. When a new episode is available, it notifies Sabnzbd, which downloads the episode into its completed folder and then invokes sickbeard's script, and the show gets moved and renamed into the correct location in /tvshows.

I have also done "catchup" on shows. I had not downloaded any episodes of Doctor Who for the last (current) season. This morning I added Doctor Who as a show to monitor, told it to skip seasons 1-8 (which I have), but that I wanted all the episodes in season 9. It immediately queued everything up and started grabbing the episodes and downloading them.

I also told Sickbeard I wanted NYPD Blue, and it managed to get almost all of it, although I had to intervene a little manually. This is a very old show, and not all the parts were available. But sickbeard did an amazing job of finding and gathering everything that was available, with a little bit of help.

I cloned couchpotato, also python, from git (https://github.com/CouchPotato/CouchPotatoServer.git) and run it from the repo. Its config is in /home/dee/.couchpotato. It listens on localhost:5050. Like sickbeard it searches sites for movies, and it can use either sabnzbd (for usenet) or transmission (for torrents) to acquire them. You can pick movies you want as soon as they are released. They may not be available for months after they hit theaters, but you don't have to care: as soon as a version becomes available on the net, couchpotato will download it.

Reinstalling Libvirtd

I hosed up the configuration of libvirt on Cinnamon, trying to change the network definition.
It was so fouled up I decided to remove and reinstall libvirtd-bin, qemu, and virt-manager.
After the reinstall, the default network did not reappear, and I went looking for how to reinstall it. In the end I had to piece together various bits of information, but the upshot is that the definition of networks for libvirt is in /usr/share/libvirt/networks. This directory was missing for me, and I had to recreate it:

root@cinnamon:~# mkdir /usr/share/libvirt/networks
root@cinnamon:~# cd /usr/share/libvirt/networks
root@cinnamon:/usr/share/libvirt/networks# touch default.xml
root@cinnamon:/usr/share/libvirt/networks# chmod 0777 default.xml
root@cinnamon:/usr/share/libvirt/networks# emacs default.xml

What I put into the file was:
<network>
  <name>internal</name>
  <bridge name="virbr0" />
  <forward/>
  <ip address="192.168.122.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.122.2" end="192.168.122.254"/>
    </dhcp>
  </ip>
</network>
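
If libvirt doesn't pick the file up on its own, the network can be defined and started explicitly; the name argument must match the <name> element in the xml:

virsh net-define /usr/share/libvirt/networks/default.xml
virsh net-start internal
virsh net-autostart internal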

Kodi Keymap Files

Kodi control with the Harmony 1100

I have a Harmony 1100 Remote Control which I bought when I had nearly a dozen different devices that I wanted to control (cable boxes and several tivos, as well as tv, receiver, dvd player, vcr, etc). I don’t have most of this stuff anymore, since I dumped the cable and started using Kodi. So I could probably live without the Harmony Remote, but I paid a lot of money for it, so I want it to work correctly. A big part of this project, therefore, was figuring out how to get that remote to control Kodi properly. I also wanted to improve the handling of keyboards, and understand better how I could use the configuration capabilities in Kodi to get better control.

The Harmony remote software for the Harmony 1100 permits assigning to each hard button or touchscreen button a "command" and a "device" to which the command applies.

When a “device” is set up for control by the Harmony remote, one selects what type of device it is from the Harmony database. The database has various device types (tv, dvd player etc), including one called Media Center PC. Within each device type there are a number of specific “devices” with manufacturer (Sony, Samsung) and model number. For the device type Media Center PC, there is a manufacturer “Plex” and model “Plex Player”. There are other “manufacturers” in the database that look promising (like “kodi”), but they turn out not to have anything useful in them. The only one that has been useful for me is the one called “Plex”.

The database for a device, in this case “Plex”, has a set of what it calls “commands”, which to the user look like “Play”, “Pause”, “F3”, “Channel Up”, etc. When configuring the buttons on the remote, one can choose from that offered set of commands, and no others. For other devices with manufacturer supplied remote controls, one can create ones own “commands” by having the harmony app learn from the remote, but since there is no manufacturer supplied physical remote control for Kodi, the commands in the database are the only ones which can be assigned. One has to make do with the set of commands which are provided.
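
For reference, the keymap files Kodi reads (in userdata/keymaps/keyboard.xml) look like this; a minimal sketch, with an arbitrary key-to-action mapping:

<keymap>
  <global>
    <keyboard>
      <p>Play</p>
      <space>Pause</space>
      <backspace>Back</backspace>
    </keyboard>
  </global>
</keymap>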

Getting a Gnome desktop in VNC under Ubuntu

There is a lot of old and wrong data out on the net about this. I think it is because of the continuing evolution of the desktop environment/gnome/unity etc., most of which I don’t understand.

Below is the overall approach I use, and some things I had to do to make it work. I will use as an example bringing up a vnc viewer on oregano showing a gnome desktop on cinnamon. Interestingly, it proved much harder on cinnamon (running Wily Werewolf) than on pepper or the Butcher box named kodi, both of which run Trusty Tahr.

On oregano the file /usr/local/bin/<remote hostname> contains a script that makes an ssh connection to <remote hostname> and also establishes various ssh tunnels to that host (for example, to look at databases). For hosts where I want to be able to open a graphical environment using vnc, the script contains the following, among other things (here the remote host is cinnamon; the port numbers in the commands aren't important, except that the ":3" has to match the "5903". Selecting the port numbers carefully becomes important if one has multiple such connections in play at once):

ssh uname@cinnamon vncserver -geometry 2400x1200 :3
ssh -L *:5901:localhost:5903 -g uname@cinnamon

which runs the vncserver command on cinnamon as user uname. The vncserver command looks for an xstartup file under ~/.vnc/xstartup. That file is the key. The (important) contents of the file are (line folding is not in the file):

#!/bin/sh
unset SESSION_MANAGER
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
gnome-session --disable-acceleration-check --session=gnome-flashback &

I think the parameter --disable-acceleration-check is still required as of this writing, but it is one of the things I put in while trying to solve the problem, and I haven't yet gone back to test without it. In any case I'm pretty sure it can't hurt, because acceleration is what we don't have. (Update: I later found pepper didn't work correctly until I put it in.)

MYSQL on Ubuntu 15.10

I haven't researched whether this changed in 15.10, or whether it has been this way since ubuntu switched to systemd, which is probably the case.

Under systemd, ubuntu no longer uses the /etc/init.d/mysql script, but instead uses a systemd unit in /lib/systemd/system/mysql.service which invokes /usr/bin/mysqld_safe to start and stop mysqld.

I have had a lot of trouble with this, and had to do a lot of debugging to figure out what is going on. Probably I would not have had trouble if I were not trying to port over a running mysql installation manually, i.e. if I just installed mysql-server and proceeded to create new databases, new entries in mysql etc.

One issue is that a mysql install creates a file in /etc/mysql called debian.cnf, which contains a user/password for the user debian-sys-maint with a generated password; this is put into the mysql user table so that various operations can be performed by mysqladmin using these credentials.

The first problem was that when I copied over the mysql table from the previous installation, I copied in the old password for debian-sys-maint, which didn't match the debian.cnf file installed when I did apt-get install mysql-server. So I had to read the debian.cnf file, extract the password, and change the password in the mysql table.
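
The fix amounts to the following, where the password value is whatever debian.cnf contains ('xxxx' is a placeholder):

SET PASSWORD FOR 'debian-sys-maint'@'localhost' = PASSWORD('xxxx');
FLUSH PRIVILEGES;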