Friday, June 19, 2015

Email With Comcast

Circumstances pushed me into using Comcast for my Internet connection.

Comcast blocks ports and provides fake responses to erroneous DNS queries, so they're not my first choice for an ISP.  I appear to be getting normal error responses to bad name-server requests.  That's good.  I think it's because they activated a defunct account that still had the opt-out from Comcast's enhanced DNS.

Port 25 is blocked, inbound and outbound.  The mail server functions that needed to accept inbound connections were moved to a different server with a different ISP.  However, I still needed to process locally generated emails: server logs, alerts, web look-ups.

I'm using postfix for email services, and the changes were fairly straightforward.  In the main.cf file:

smtp_tls_cert_file = /etc/postfix/postfix-cert.pem
smtp_tls_key_file = /etc/postfix/postfix.key
smtp_tls_loglevel = 6
smtp_tls_note_starttls_offer = yes
smtp_tls_security_level = may
smtp_sasl_auth_enable = yes
smtp_sasl_security_options =
relayhost = [smtp.comcast.net]:submission
smtp_sasl_password_maps = hash:/etc/postfix/smtp_sasl_password_maps

The cert and key lines were already there.  Bump the loglevel so that you can see enough detail about the errors.  Once this is working, put the loglevel back to 1.  You DO want to encrypt your email transmissions.  It's not that hard.  NOTE: the empty smtp_sasl_security_options line is needed.  Otherwise you will see the error message: No worthy mechs found.

The relayhost line specifies the ISP's email server.  If you are also using Comcast just copy it verbatim.  Finally you need to supply your credentials for your ISP.  I used smtp_sasl_password_maps as the file name - shamelessly stolen from the postfix documentation. The file name tells you that we are creating a mapping between keys and values.  The key is the EXACT value specified for relayhost.  The value will be the credentials for that host, i.e. username:password.

My file contains a single line that looks like:
[smtp.comcast.net]:submission username:password
with white space (a tab in this case) separating the key and the value.  Your username is NOT user@comcast.net.  It is simply user.
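Before postfix can use the credentials file, it has to be hashed with postmap, and since the file holds a password its permissions should be tightened.  A sketch (the printf line just recreates the example entry above; the postmap step is guarded in case postfix isn't installed on the machine you try this on):

```shell
# Recreate the example credentials file: key, a tab, then username:password.
printf '[smtp.comcast.net]:submission\tusername:password\n' > smtp_sasl_password_maps

# The file holds a password, so keep it readable by root only.
chmod 600 smtp_sasl_password_maps

# postmap produces smtp_sasl_password_maps.db next to the source file.
if command -v postmap >/dev/null; then
    postmap smtp_sasl_password_maps
fi
```

Remember to re-run postmap every time you edit the source file; postfix reads the .db file, not the plain one.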

I use a Makefile to manage my postfix configuration courtesy of those nice folks at Tummy.com.

NEWALIASES=/usr/bin/newaliases
PDIR=/etc/postfix
ADIR=/var/spool/amavisd
POSTMAP=/usr/sbin/postmap
ETC=/etc

# NOTES
#
# sendmail -bv someone@somedomain
# will provide a delivery status report showing how that address would be handled
#
all: $(ETC)/aliases.db $(PDIR)/virtual.db $(PDIR)/access.db \
    $(PDIR)/smtp_sasl_password_maps.db $(ADIR)/virtual-domains \
    $(PDIR)/reload.done

$(ETC)/aliases.db: $(ETC)/aliases
	$(NEWALIASES)

$(PDIR)/virtual.db: $(PDIR)/virtual
	$(POSTMAP) $^

$(PDIR)/access.db: $(PDIR)/access
	$(POSTMAP) $^

$(PDIR)/generic.db: $(PDIR)/generic
	$(POSTMAP) $^

$(PDIR)/smtp_sasl_password_maps.db: $(PDIR)/smtp_sasl_password_maps
	$(POSTMAP) $^

$(ADIR)/virtual-domains: $(PDIR)/virtual-domains
	cp $^ $@

$(PDIR)/reload.done: $(PDIR)/virtual-domains $(PDIR)/main.cf $(PDIR)/smtp_sasl_password_maps.db
	xargs -i ./missing.sh {} virtual < virtual-domains
	touch $(PDIR)/reload.done
	service postfix reload

# check for domains with no address handling
check:
	xargs -i ./missing.sh {} virtual < virtual-domains

This makes it easy to keep the hashed files up to date.  I also have a little script to make sure that I've included all my virtual domains in the virtual file.

# cat missing.sh 
#!/bin/sh
grep -q "$1" "$2" || echo "$1"

I wrote this hoping to save others some time and effort. The places where I wasted a lot of time were:

  • smtp_sasl_security_options =
  • trying user@comcast.net when it must be user

Friday, November 15, 2013

rsync With Host in the Middle

I have clients that have configured their firewalls so that I must use my office Internet address when connecting to their servers.  This is inconvenient for me when traveling, but I understand their concerns.

This means I must connect to their system via my office server.  When using ssh, it's typically two commands:

ssh -A -X myoffice-computer
ssh -X the-client-computer # issued from myoffice-computer session
However, with rsync you can't break up the commands, or can you?  The -e option allows you to feed arguments to the underlying ssh transport. So:
rsync -a -e 'ssh -A myoffice-computer ssh' \
localfile client-computer:/path/dir
The file transfer can be in either direction.  rsync splices the specified remote hostname into the ssh commands.  This will work for longer chains of ssh connections.  Just follow the pattern:
-e 'ssh -A host1 ssh -A host2 ssh'
I've been using the ssh -A option in my examples.  From the man page:
-A      Enables forwarding of the authentication agent connection.
This can also be specified on a per-host basis in a configuration file.
You may need a different approach depending upon how you've configured authentication and distributed your keys among the different computers.
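As a sketch, the per-host form of that option in ~/.ssh/config (host name taken from the example above) looks like:

```
Host myoffice-computer
    ForwardAgent yes
```

With that in place, a plain ssh or rsync to myoffice-computer forwards the agent without needing -A on the command line.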

Thursday, September 26, 2013

Using Javascript

Javascript is the language supported within browsers.  It is handy for providing instant responsiveness on a web page without involving the web server.  There are all too many web pages implemented with Javascript that simply do not function when Javascript is disabled.

http://eloquentjavascript.net/ provides a very nice guide to the language. Javascript is quite different from Java in its underlying philosophy and approach.  Much of the information about Javascript is fairly low quality because its authors failed to understand the language behind the syntax.

Eloquent Javascript understands the functional roots in the language.  It is also nicely written.  This is by far the best Javascript guide that I have seen.

Thursday, September 12, 2013

Certificate Processing

xca is a GUI program that does a nice job of managing keys and certificates.  Whether you are operating your own little certifying authority or obtaining certificates from recognized public authorities, it's helpful to keep copies in the xca database.

Linux server programs specify certificates in their configuration files using pathnames to the actual files.  Once you figure out your naming conventions, it is quite easy to export files from xca and copy them to the proper file system locations, replacing expiring certificates with replacements.  Restart the service and the new certificate is in operation.

I find dealing with a Microsoft Windows server quite confusing.  The menu choices never seem to match what I am doing.  One key point: the server wants the key and the certificate to be bundled into one file.  This is a PKCS #12 format.  .p12 is commonly used for the file extension, but Microsoft prefers .pfx.  Once you manage to navigate the menus to where the server wants your .pfx file, you'll be able to install the certificate.
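If you'd rather build the bundle outside of xca, openssl can do it.  A sketch, with illustrative file names (a throwaway self-signed pair is generated first, just so there is something to bundle):

```shell
# Create a throwaway self-signed key/cert pair to demonstrate with.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout server.key -out server.crt \
    -subj "/CN=www.example.com" -days 365

# Bundle key and certificate into a single PKCS #12 file.
# Rename to .pfx if the Windows server insists on that extension.
openssl pkcs12 -export -inkey server.key -in server.crt \
    -passout pass:changeit -out server.pfx
```

The export password (changeit here) is what the Windows import dialog will ask for.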

A fine source for certificates is
https://www.startssl.com/
When you set up your account, they will install a certificate in your browser. There is no password to remember and logging in to the site is painless. If you need more than one identity, make sure your browser is configured to let you choose which certificate to present. My main gripe with the site is that the work flows all use a "wizard" approach, but with no capability to backtrack. This avoids complex forms, but can be quite frustrating when you're following the wrong flow and need to abandon your inputs and start over.

Wednesday, August 21, 2013

Remote Access to MySQL database

MySQL can tie accounts to specific hostnames and/or IP addresses.  This does not work so well when traveling or when the access locations are somewhat ad hoc.  While certificates are an option, it is easier to use SSH to tunnel the connection.  The normal localhost accounts then work remotely.

ssh -f -L 33306:localhost:3306 mysql.example.com sleep 300
MySQL listens on port 3306. We need to map a local port to the remote 3306. Since I already have a local MySQL server running I need a different port number. 33306 was picked as easy to remember and available. The sleep 300 simply holds the connection open for 5 minutes (300 seconds) giving me ample time to connect.
mysql -P 33306

should connect, but there is a pitfall. On Linux, a localhost connection is done through a socket file and _not_ through the network stack. If you are also running a local MySQL server, the -P (port) argument will be ignored and you will not connect to your remote MySQL server. What to do?

mysql -h 127.0.0.1 -P 33306
Now you are connecting to your localhost, but using the network stack. The port argument (-P 33306) is no longer ignored.
You really want better authentication than a password provides when opening a port to the Internet. SSH allows you to use Public Key Encryption to control access. The public key is stored on the server. Anyone trying to access the server _must_ have the corresponding private key to establish a connection.
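Setting that up is just a key pair and a copy of the public half.  A sketch (the key file name is illustrative, and the passphrase is empty here only for brevity; use a real one in practice):

```shell
# Generate a key pair; -N '' skips the passphrase for this demonstration.
ssh-keygen -t rsa -b 4096 -N '' -f ./id_rsa_demo -q

# The public key then goes into ~/.ssh/authorized_keys on the server,
# e.g.: ssh-copy-id -i ./id_rsa_demo.pub mysql.example.com
cat ./id_rsa_demo.pub
```

The private key (id_rsa_demo) stays on your machine and never travels to the server.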

Wednesday, July 24, 2013

RAID 1 Drive Failure

One of my laptop's mirrored drives failed.  This should have been a non-event.  I expected the laptop to just keep working off the remaining drive.  This was the case until I tried to reboot.  The laptop reported no bootable drives!

My bootable USB sticks were not terribly helpful.  Both the Ubuntu and PuppyOS sticks were missing mdadm (multiple device administration) and lvm2 (logical volume manager).  They were unable to read and mount the good drive.  The Ubuntu Desktop CD was similarly unhelpful.  The Ubuntu 12.04.2 Server CD did the trick.  It includes the necessary mdadm and lvm2 packages and provides a rescue mode that works very smoothly.  (I shall be building an improved Ubuntu boot stick.)

The first step in getting back in operation was to write the MBR (master boot record) to the drive.  I must have failed to do that when I first installed the drive.  RAID1 with mdadm mirrors the file system partitions.  However the MBR is written to space outside the partitions.  Ubuntu 12.04 is using grub2 and this was my first opportunity to deal with grub2 booting problems.  The command for me was:
grub-setup -d /mnt/laptop/boot/grub /dev/sda
-d is used to specify the grub directory, which must be accessible; I had mounted my boot partition at /mnt/laptop/boot.
/dev/sda is the name of the boot drive that is getting the new MBR.  When booting, the MBR needs to point grub to the grub directory.  And this time I remembered to repeat the command for /dev/sdb!

When I first tried to reboot with a single drive (degraded RAID), the laptop still failed to boot.  This appears to be a bug in the Ubuntu setup. mdadm was configured to allow a degraded RAID to boot. Once I installed /dev/sdb and gave it a few minutes to rebuild the boot partition, the laptop booted smoothly.
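For reference, the rebuild step looks roughly like this.  The device names are illustrative and these must run as root on the affected machine, so treat them as notes rather than something to paste:

```
# Watch array state and rebuild progress
cat /proc/mdstat
mdadm --detail /dev/md0

# Add the replacement drive's partition back into the mirror
mdadm --manage /dev/md0 --add /dev/sdb1
```

/proc/mdstat shows a progress bar while the mirror resyncs; wait for it to finish before rebooting.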

Things are much simpler when you have a working system.  Then the command to write an MBR is simply:
grub-install /dev/sda
grub will write an MBR for your current setup. This should be repeated for each of your potential boot drives. The command:
update-grub
will update the grub boot menu.

Don't go blindly using this advice on a dual boot system. There are other factors you need to worry about. In general, I think you are better off running any alternative OS virtualized. This lets you manage a simpler boot environment. If you need to run Windows, having Linux host it provides another level of security and management.