
openssl enc

I had to transfer my /Users directory to some other machine. I did not want to store it in the clear on the other machine, but time was an issue, so I was looking for a fast solution with a fast cipher. Also, I could not use rsync, as I did not trust the remote machine's filesystem to handle symlinks/permissions/ownerships very well. These were my options:

  • No cipher: I could tar the whole directory up into a large tarball and save it on a Truecrypt volume already set up (and large enough) on the remote side. However, the tarball would be larger than 4GB, and the volume is formatted with FAT32, which won't handle files that big.

  • Some years ago I came across aespipe, which does exactly what the name suggests. I did not have it installed though, and I would've liked to do this with the tools already at hand. Also, while AES sure is fast, it might be a bit overkill for this particular purpose.

  • Why not use openssl? It's installed on most systems, but I hardly use it (knowingly). Let's try:

    alice$ tar -cf - foo/ | ssh bob \
           "openssl enc -e -k s3cr3t -rc4 -out /tmp/foo.tar.rc4"
      bob$ openssl enc -d -rc4 -in /tmp/foo.tar.rc4 | tar -tf -
    enter rc4 decryption password:

  • Perfect, works like a charm! And since RC4 is basically just XOR (well, not really :-)), it should prove to be pretty fast (9.2GB of data in 15min, that's about 10MB/s - and I think I was hitting some other bottleneck, as neither CPU was running at full speed). Oh, and yes, RC4 is not to be trusted anymore, but it's perfectly fine for this particular setup of mine. Really. Come to think of it, I could've just used rot13, but I've never used that with binary data.
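As for rot13 on binary data: tr handles it just fine - it only rotates the letters and passes every other byte through untouched, which also means most of a tarball's bytes would stay in the clear. A quick sketch:

```shell
# rot13 via tr: letters rotate by 13, everything else (including binary
# bytes) passes through unchanged, so applying it twice is the identity.
printf 'Hello' | tr 'A-Za-z' 'N-ZA-Mn-za-m'
# -> Uryyb
printf 'Hello' | tr 'A-Za-z' 'N-ZA-Mn-za-m' | tr 'A-Za-z' 'N-ZA-Mn-za-m'
# -> Hello
```

So it is reversible, but as a "cipher" for a backup it would be purely decorative.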

    Update: Apparently aespipe compiles under MacOS X too, and now I remember the pain of using it:

    alice$ tar -cf - foo/ | ssh bob "aespipe -w5 > /tmp/foo.tar.enc"
    Error: Unable to allocate memory

    Oh dear. So maybe aespipe has a problem when it can't allocate a tty (passing -t/-T to ssh did not help). But what if we run aespipe locally:

    alice$ tar -cf - foo/ | aespipe -p3 | ssh bob "cat > /tmp/foo.tar.enc"
    Error: Password must be at least 20 characters.

    Ah, right - I'd first have to set up fd 3 for aespipe to read a password from.
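The missing piece is plain file-descriptor redirection: aespipe's -p3 option makes it read the passphrase from descriptor 3. A hedged sketch - the keyfile path is made up, and since aespipe may not be installed everywhere, the `cat <&3` line merely demonstrates that a child process really does see the keyfile on fd 3:

```shell
# A >= 20-character passphrase in a file (hypothetical path):
printf 'this-is-a-20-plus-char-passphrase\n' > /tmp/aeskey

# Demonstrate the fd 3 redirection itself: the child reads descriptor 3,
# which the shell attached to the keyfile; aespipe -p3 reads the same way.
sh -c 'cat <&3' 3< /tmp/aeskey

# The real pipeline would then look like (untested sketch):
#   tar -cf - foo/ | aespipe -p3 3< /tmp/aeskey | ssh bob "cat > /tmp/foo.tar.enc"
```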

You do not have permission to access this server.

It was time to login to SourceForge again but upon providing my credentials, all I got was:

    Error 403
    We're sorry, but we could not fulfill your request for /account/login.php on this server.
    You do not have permission to access this server.
    Your technical support key is: 43ae-fa75-17f4-e8c8

    Please contact ... and be sure to provide the technical support key shown above
    and as much information as you can so we can resolve the problem.

Well, I really needed to login, so I contacted the support guys. Although they responded within minutes(!), they were not really sure why I did not have permission to access this server. Something about my useragent, but then it could have been my IP address as well. After a few exchanges it became clear that it really was my useragent that was being blocked. As I've modified general.useragent.override, this must have been caught by one of their blacklists. To be specific, the following string was blocked:

     Mozilla/5.0 (X11; U; Linux; en-US; rv:1.0) Gecko/25250101
While the following allowed me to login:
     Mozilla/5.0 (X11; U; Linux; en-US; rv:1.0) Gecko/20050316
Now, the only two questions left open are:

  • What kind of blacklist logic filters on useragent strings?
  • Why can't they put a nice descriptive explanation in the error message instead of this tedious support key?

Update: their helpdesk turned out to be more helpful than I expected and they provided me with a few details. Apparently it was the BadBehaviour filter which refused my login because of the weird useragent.

You shall not upload!

Ever tried uploading a .html file to a Mediawiki installation? Well, I did and got:

    Permitted file types: png, gif, jpg, jpeg, pdf, txt, gz, bz2, deb.

OK, let's add .html to the wgFileExtensions array. Let's try again:

    Files of the MIME type "text/html" are not allowed to be uploaded.

Sigh. OK, so we have to edit wgMimeTypeBlacklist as well, or just disable this damn thing with wgVerifyMimeType. That should do it, right?
    This file contains HTML or script code that may be erroneously
    interpreted by a web browser.
Are you kidding me? And yes, I did check the "Ignore any warnings" box. Turns out that this is still an unresolved issue and wgDisableUploadScriptChecks did not make it into trunk yet.
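Collected in one place, the LocalSettings.php changes attempted above look roughly like this. A sketch only - the setting names are real MediaWiki globals, but as described above the HTML/script-code check still wins in the end:

```php
// LocalSettings.php - the knobs tried above (sketch)
$wgFileExtensions[] = 'html';                      // permit the .html extension
$wgMimeTypeBlacklist = array_diff(
    $wgMimeTypeBlacklist, array( 'text/html' ) );  // un-blacklist the MIME type
$wgVerifyMimeType = false;                         // or skip MIME verification entirely
// The "contains HTML or script code" warning has no switch yet;
// $wgDisableUploadScriptChecks is the proposed, not-yet-merged knob.
```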

Fedora 13

OK, F13 is not out yet, but testing their Alpha release made me wonder: it's 2010 and we're about to release Fedora 13, but we're still unable to boot off GPT disks? And when installing a headless system, we're still required to use VNC or mess with cryptic Kickstart files and hack together our own partition tables, as Anaconda doesn't let me customize the disk layout when booting in text mode.

So, that's two major annoyances and I haven't even installed the system yet :-\

Oh, and while I'm now fiddling with system-config-kickstart, I'm required to install intltool:

$ yum install intltool
Install      48 Package(s)
Upgrade       0 Package(s)
Total download size: 33 M
Is this ok [y/N]:
$ apt-get install intltool
0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
Need to get 1136kB of archives.
After this operation, 2949kB of additional disk space will be used.
Do you want to continue [Y/n]?
-- wtf? Oh, and does anybody know what import meh is and where I get it from?

$ yum install system-config-kickstart
Install     212 Package(s)
Upgrade       0 Package(s)
Total download size: 130 M
Is this ok [y/N]:
-- ? Speechless...


There's now a workaround for the GPT disk issue, and it seems to work with FC13-Alpha and FC13-Beta.RC5. But FC13-Alpha then just hangs after/while installing the very last package. The box is pretty much frozen and I can't see the syslog any more; I suspect something like #564330 or #571241.

With FC13-Beta.RC5, Xorg is working in a VirtualBox VM, then the kernel panics. Well, let's set up a serial console for this VM:

  • Enable the serial console in VirtualBox (COM1, use host-pipe, create pipe, /tmp/fedora.log)
  • After starting the VM, we can read on the socket with "nc -U /tmp/fedora.log"

  • We still have to tweak our bootloader:
    grub> [...]  console=ttyS0,115200n8 console=tty0 \
    However, shortly after booting, Fedora somehow detached itself from ttyS0. So we just tell rsyslog to log everything to the serial console:
    $ grep ttyS0 /etc/rsyslog.conf
    *.*         /dev/ttyS0
    $ kill -1 RSYSLOGPID

    Let's see if we can capture that oops now...
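The VirtualBox half of the setup above can also be configured headlessly from the CLI. A sketch of the equivalent configuration commands - the VM name "fedora13" is a placeholder, and the flags follow VBoxManage's modifyvm syntax:

```shell
# Point COM1 (I/O port 0x3F8, IRQ 4) at a host pipe; VirtualBox creates
# /tmp/fedora.log as the socket. "fedora13" is a hypothetical VM name.
VBoxManage modifyvm fedora13 --uart1 0x3F8 4
VBoxManage modifyvm fedora13 --uartmode1 server /tmp/fedora.log

# After starting the VM, read the guest's serial output from the socket:
nc -U /tmp/fedora.log
```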

clamscan vs. clamscan

Found some Facebook scam in the trashbin today and was curious enough to have a closer look at the attachment:

    $ clamscan | head -1
    Suspect.Bredozip-zippwd-3 FOUND
    $ unzip
    $ clamscan Facebook_password_845.exe | head -1
    Facebook_password_845.exe: OK

Huh? The .zip might contain a virus, but not the .exe file it includes? Could this be some kind of ZIP virus, where only the ZIP part is malicious? No, Virustotal confirms the infection of the .zip file and also of the .exe file - maybe my clamav version is just a bit old.

Notice of Claim of Copyright Infringement

Wow. Only a few weeks after starting to run a Tor node, the first "Notice of Claim of Copyright Infringement" made it to my inbox, sent by my beloved ISP. Well, it's a "notice of claim", so someone (they didn't tell me who) claims that my IP address was the "source of the infringing works", as they put it.

Of course I replied instantly, so let's see what happens next. Hopefully they'll put me in contact with the so-called "copyright owner, or its authorized agent" and Comcast won't have to deal with this. Dealing with this kind of crap won't get them (or any ISP, for that matter) any revenue, so maybe they'll just terminate my contract to be done with it. But maybe not - Comcast could use a few plus points on their net neutrality stance.

Oh, and for the record: "Harry Potter Audio Books", wtf? Not me, your honor - not me. Go get your LI tools straight.

urlsnarf uses obsolete PF_INET

Just before going to sleep, I spotted this in my kernel log:

    urlsnarf uses obsolete (PF_INET,SOCK_PACKET)

As someone else already explained a few years back:

    It means that it should be opening a PF_PACKET socket (see packet(7))
    instead of a PF_INET, SOCK_PACKET (see COMPATIBILITY in ip(7)):
        "For compatibility with Linux 2.0, the obsolete socket(PF_INET,
         SOCK_PACKET, protocol) syntax is still supported to open a
         packet(7) socket.  This is deprecated and should be replaced by
         socket(PF_PACKET, SOCK_RAW, protocol) instead.  The main
         difference is the new sockaddr_ll address structure for generic
         link layer information instead of sockaddr_pkt." - ip(7)

This made me curious: where exactly does urlsnarf use PF_INET or SOCK_PACKET? Turns out - it doesn't. But the Debian package introduces a patch trying to fix #420129:

    $ cat 15_checksum_libnids.dpatch
    +       *ifaces = malloc(ifaces_size);
    +       sock = socket(PF_INET, SOCK_DGRAM, IPPROTO_IP);
    +       if (sock <= 0)

Well, it turns out that even with the patch applied (i.e. a stock Debian dsniff-2.4b1+debian-18 installed) dsniff is not working. However, urlsnarf works - regardless of whether the patch is applied or not :-)


After starting to run a Tor node, the Hulu GeoFilter thinks I'm trying to access their content from outside the U.S. As their GeoFilter issues form was having difficulties finding out where I was connecting from, they advised me to send an email, but I haven't heard back from them since. That's a pity - now I can't watch their shows any more. Also, they've taken the Daily Show off their program, which is basically the only show I watched over there. So, it's off to Bittorrent-World for me now; maybe we'll meet again when you fix your GeoFilter?