When I switched my main server to CentOS, as described in an earlier post, one of the big pains was that I had to use CentOS 7. A lot of software had come a long way since CentOS 7 was cut, and I had to upgrade a lot of things from upstream to get functionality I had grown reliant upon.
I didn’t realize that Apache itself was one of those things that was sufficiently backwards in CentOS 7 that I would have trouble.
Ever since I did that “upgrade”, I’ve been struggling with problems with certificates not being honored. For the last few days I have been working pretty diligently to figure out this nagging problem, and today I finally did. It is owing to the old Apache in CentOS 7.
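The gist: CentOS 7 ships httpd 2.4.6, which predates the 2.4.8 change that lets SSLCertificateFile carry the intermediate certificates as well. On an Apache that old, the chain has to be supplied separately, something like this (the paths here are illustrative, not my actual ones):

    # httpd 2.4.6 (CentOS 7): the chain must be a separate directive
    SSLCertificateFile      /etc/pki/tls/certs/example.com.crt
    SSLCertificateKeyFile   /etc/pki/tls/private/example.com.key
    SSLCertificateChainFile /etc/pki/tls/certs/example.com.chain.crt

    # httpd >= 2.4.8: SSLCertificateChainFile is deprecated, and the
    # intermediates can simply be appended to SSLCertificateFile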
Continue reading Apache certificate chains
I have had trouble twice now with modifying a working vpn configuration and then being unable to get it to start. Both times I never actually solved it so much as eliminated the problem by switching to a different nordvpn config file.
There was a breach at nordvpn in which some passwords and user info were leaked. I wanted to change my password, and did, and then had to get into the vpn router and change it there. After I did, the vpn just would not start. Eventually I switched to another vpn endpoint, put a new .conf file in /etc/openvpn/client, and it came right up.
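For the record, the swap amounts to something like this (the endpoint file name here is made up; the stock openvpn packaging looks for /etc/openvpn/client/&lt;name&gt;.conf):

    # grab a fresh endpoint config from nordvpn, then:
    cp us1234.nordvpn.com.udp.ovpn /etc/openvpn/client/nordvpn.conf
    systemctl restart openvpn-client@nordvpn
    systemctl status openvpn-client@nordvpn    # confirm the tunnel is up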
I don’t know what this is about.
For a very goofy reason involving a bug with composer, and new warning messages in PHP, I decided I needed to reinstall eclipse on oregano. Reinstalling eclipse is always a test of my patience. During the entire course of my life I do not believe that the installation of eclipse along with the various components I need, especially subclipse, has ever “just worked”. There is always some issue. So, of course, I do not look forward to it with gleeful anticipation.
I think part of the problem for me – a problem of my own making, I suppose – is that I mostly use distros (Fedora and Ubuntu) in which the packagers have already arranged for tools like eclipse to appear as “system level” rather than “user level” tools. By this I mean they appear in (or are symlinked from) /usr/bin, their configuration files are located in /etc, their icons appear when you search for apps, etc. Whereas if you just use the upstream installers, the installation will be carried out at the user level: they will ask you where to put the binary, and they will put configuration files in ~/.config.
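Most of what the packagers do, and what the upstream installer leaves to you, is a freedesktop .desktop file. A minimal one in ~/.local/share/applications looks something like this (the install path is whatever you told the installer; mine here is made up):

    # ~/.local/share/applications/eclipse.desktop
    [Desktop Entry]
    Type=Application
    Name=Eclipse
    Comment=Eclipse IDE (user-level install)
    Exec=/home/me/eclipse/java-latest/eclipse/eclipse
    Icon=/home/me/eclipse/java-latest/eclipse/icon.xpm
    Terminal=false
    Categories=Development;IDE;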
Continue reading Desktop Files
After implementing the new tarragon, the biggest problem I had involved the clamav package and its loading of signatures. If clamd doesn’t come up and open its socket, then amavisd (the daemon who is consulted by postfix to handle all the checking of each piece of mail on input and output) will fail, assuming he is configured to do virus checking. This results in various problems: amavis will mark the mail as “unchecked”, but worse, it will report failure back to postfix, who gets confused, and very often the message is delivered two or three times.
Clamd, the clamav daemon, now has over 6 million signatures. There are a lot of bad boys out there. The signatures are loaded by clamd from its database (in /var/lib/clamav) into memory on startup. As a result, clamd has a large memory footprint, almost 800MB on my system. The first issue, discovered before going live, was that systemd’s default parameters expect any daemon he starts to load within 90 seconds. If it fails to check in within that time, systemd considers it broken and terminates it. Clamd takes at least 3 minutes to load. I had to set a special TimeoutStartSec value in the systemd service script for clamd@.service.
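A drop-in override keeps the change out of the packaged unit file; something like this, assuming the instance is named “scan” and five minutes is enough headroom:

    # systemctl edit clamd@scan.service   -- creates
    # /etc/systemd/system/clamd@scan.service.d/override.conf
    [Service]
    TimeoutStartSec=300

    # then: systemctl daemon-reload && systemctl restart clamd@scan.service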
Whew! I thought, boy I’m glad I figured that out. Hah!
Continue reading Clamd signatures and Apache memory
This server, on Amazon, hosts my website and a dozen others, provides mail service for several people’s email including my own with postfix, dovecot, opendkim, amavis, spamassassin and clamd, provides contacts and calendar service using radicale, provides vpn service with openvpn, provides a tor relay, provides nextcloud service, and hosts my svn repository.
The server was last rebuilt in 2017. Long, long ago when I built the first version of it, I was most familiar with Red Hat/Fedora, and since then it has been easiest just to upgrade it with Fedora, always grumbling to myself that someday I’m going to change it. The problem with being on Fedora, of course, is that Fedora changes every 6 months, so I’m constantly behind. And after a year I’m at end of life. This is dumb for a server that I don’t want to be messing with all the time.
Continue reading Tarragon Rebuild 2019
I now have 8 of these gateway boxes out there. This morning as I was checking backups on one of them, I observed that it took quite a long time to respond. I ran a top on it and was horrified to see that its memory use was 100% and so was its swap. Holy @#$%!@ Batman!
Most of the memory was being used by the lxpanel. And (hangs head in embarrassment) there were actually two lxpanels running – one for the console and one in the vnc window I launch at startup.
It seems the lxpanels leak. I don’t know how badly, but it doesn’t matter. These boxes are meant to run forever so even a tiny leak is eventually fatal.
Well, this was simple. I will seldom, if ever, need to get into a graphical environment remotely, and if I do I can always start vnc from the command line. So I took the startvnc call out of the startup script. And I have even LESS need for a graphical console, since there is not even a monitor on these things. So I set the default systemd target to multi-user.target.
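The target change is one command, and just as easy to undo:

    sudo systemctl set-default multi-user.target    # boot to console, no X
    # to revert: sudo systemctl set-default graphical.target
    # one-off, without rebooting: sudo systemctl isolate graphical.target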
I did this on all the gateways that are running on pi-zeros. The few running on bigger ubuntu boxes didn’t really have the problem anyway.
After rebooting them they come up with no lxpanels. I’ll watch the memory use, but I think this will fix the problem.
Cinnamon and Rosemary are now both happily rack-mounted in the basement (where it is cool, and where their many disk drives and fans can make as much racket as they wish).
Mostly I control them from the office with ssh and/or vnc, but once in a while I need to actually be down there. My neighbor gave me a monitor, and I have plenty of mice and keyboards, so I hooked up a KVM switch to the two of them so I didn’t have to keep getting behind the rack to move the monitor.
But alas, neither of them picked up the resolution of the monitor. I suppose (though I’m not sure) that with the KVM in the middle, they can’t really read the EDID and such stuff from the monitor. And since it is an “unknown” monitor, the display panel only offers 1024×768, 800×600, etc. The monitor itself helpfully tells me that it wants to be 1440×900 @60Hz.
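The usual workaround is to hand X a modeline yourself. A sketch, assuming the output is named VGA-1 (xrandr -q shows the real name):

    cvt 1440 900 60     # prints a CVT modeline for 1440x900 @60Hz
    xrandr --newmode "1440x900_60.00" 106.50 1440 1528 1672 1904 900 903 909 934 -hsync +vsync
    xrandr --addmode VGA-1 "1440x900_60.00"
    xrandr --output VGA-1 --mode "1440x900_60.00"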
Continue reading Forcing Monitor resolution
I found out today something I am sure to forget.
In every zfs dataset there is an invisible directory (by invisible, I mean that it does NOT show up with ls -a) named .zfs. Inside this directory are two subdirectories, shares and snapshot.
The snapshot subdirectory provides perfectly serviceable read-only access to all the snapshots. Viz (dataset and snapshot names made up for the example):
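    zfs snapshot tank/home@tuesday
    ls /tank/home/.zfs/snapshot
    # tuesday
    ls /tank/home/.zfs/snapshot/tuesday
    # ...the entire dataset, read-only, exactly as of the snapshot...
    cp /tank/home/.zfs/snapshot/tuesday/precious.txt /tank/home/    # easy single-file restore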
Continue reading Invisible zfs snapshot directory
I realized as I was writing a new post that I had never documented the gateway pi undertaking.
This started when a friend in the mountains got a new internet service where the ISP would not allow him (and therefore me) access to his router. As a result I could no longer use ssh to connect to his systems.
I solved this problem by setting his systems up to use a tool called autossh, with which I could have his system start, monitor, and keep running an ssh session with reverse tunnels open to my system. I could then reach him by attaching through the reverse tunnels.
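In outline, his side runs something like this (host and ports are made up; -M 0 disables autossh’s monitor port in favor of ssh’s own keepalives):

    # on his system, started at boot:
    autossh -M 0 -f -N \
        -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
        -R 2222:localhost:22 tunnel@myserver.example.com

    # on my side, the reverse tunnel gets me back in:
    ssh -p 2222 hisuser@localhost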
Continue reading Gateway pi
I had in mind (still do) to use Cinnamon as a host for virtual machines. In fact, I have had that idea in the back of my mind for many years. Recently the idea percolated up to the top again, and one thing I did was to buy some additional ram for it: I bought a 16G stick and tried to add it. It wouldn’t boot. The very poorly written manual for the motherboard seems to suggest that it absolutely requires balanced sticks in the dimm slots. I find that hard to believe, but decided it couldn’t hurt to comply, and bought another stick, so I would have two 8G sticks and two 16G sticks, 48G in all.
It still wouldn’t boot. But this didn’t look like failing hardware – it gets up into Xen and then stops. I don’t think this is a hardware problem with the memory.
After some googling around I found an article on the Suse website describing a similar situation, saying that Dom0 won’t come up if it has more than 32G of memory, and offering a solution.
I’m very ignorant about Xen. I have never really gotten beyond installing it, with my Cinnamon ubuntu installation in Dom0 and using all the resources. But it is clear, of course, that the right way to do this is for Dom0 to be small and confined to its management job, and Cinnamon should actually be a DomU.
What seems to be true is that if you do not specify a limit on the command line, Dom0 will come up with all the memory, and if that is more than 32GB it will fail. Thus if you have more than 32GB of memory, you MUST avail yourself of the command line to limit the memory available to Dom0.
I added a dom0_mem cap to the Xen command line in the default grub, along these lines (the 4G figure below is a placeholder, not necessarily the value I used):
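    # /etc/default/grub -- cap Dom0's memory per the Suse article
    # (4096M is a placeholder value)
    GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=4096M,max:4096M"

    # then regenerate the grub config and reboot:
    sudo update-grub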
And the box came up fine. Once I manage to get Cinnamon and its functions into a separate DomU, I will reduce Dom0 down to 1G or so.