Bypassing At&t U-verse hardware NAT table limits
http://blog.0xpebbles.org/Bypassing-At-t-U-verse-hardware-NAT-table-limits
12 Dec 2015

At&t U-verse comes with a vendor-supplied router that is designed for home use, even on a "small business" contract. This shows in the router's config interface, which is made for non-tech-savvy people, but also in a number of limitations. The main problem, especially in a business context, is the router's NAT table, capped at 2048 sessions (with At&t support claiming flaky behaviour already above 1000). It's unclear why At&t, which defines small business as up to 50 people, even sells this, as a lot of those customers will end up calling support very soon, complaining about intermittent packet loss.

The box provides some passthrough modes for people who run their own routers, but although it sounds like this would easily avoid that limitation, it doesn't. For whatever not-so-apparent reason, all of those passthrough modes actually just route, still filling up the NAT table. Some older firmwares had an exploitable vulnerability that allowed rooting some models and enabling a true bridge mode, but newer versions plugged this hole; and ideally we'd like a solution suitable for a business environment that doesn't require tampering with At&t's equipment, anyway.

It looks like many others are having the same problem (across different router models) and debating workarounds, but no real hands-off solution has been found that doesn't involve either rooting the router or re-rigging things occasionally.

One solution that does work and is easy to set up is to tunnel all of the office's outgoing traffic, generating exactly one entry in the At&t box' NAT table. Although that is a valid solution, this post focuses on an internal-network-only workaround; such a tunnel also wouldn't necessarily fix the problem for inbound connections.

Why can't we just use our own hardware? Well, the U-verse home-use uplinks require 802.1X authentication, using a certificate that lives on the router. The router additionally sends a CWMP periodic inform message to At&t every 24 hours, which we probably should keep sending, too. I think it would be possible to open the router and dump the box' ROM contents to get hold of the cert, then reimplement the logic on a better box. Or, if the limitation is software-based, we could even attempt flashing the router with a modified firmware, as the firmwares used seem to be all open source and available (excluding the cert, of course). However, both approaches would be tampering with their equipment.

All of this means that, since we don't want to tamper with the router, it has to stay part of the equation.

So, let's try to keep the At&t router connected to the uplink, so it can do the authentication and heartbeat, but without passing any traffic through it that doesn't need to go there. Basically, we want the following:


                       uplink
                         │
                         │
                 ┌──── magic ────┐
                 │               │
                 │             At&t
                 │            router
              intranet

Note that the traffic that goes from the uplink to the intranet does not pass through the At&t box at all. The latter simply stays attached to the uplink (whether you use fiber, ONT, etc.; in my case it was a fiber cable), but that is the only link connected to it. No other cable is attached to it.

So, magic now has to split the traffic into:

  • all 802.1X/EAP and management traffic should be allowed to and from the At&t box
  • everything else should flow between uplink and the intranet

Let's first hook up a box in between the At&t router and the uplink, so we can intercept the traffic. It doesn't really matter what type of box it is, but it needs to be more programmable/flexible than your standard managed switch's or router's vendor GUI/CLI. In my case, an Ubiquiti EdgeRouterPro was used, as it allows full Linux shell access.

So, with the At&t box connected to... let's say eth6, and the uplink connected to eth7, let's just bridge those interfaces (br0), so the At&t box can talk to the uplink. Turns out the 802.1D standard (which is about bridging) says that standards-compliant bridges aren't supposed to forward frames with destination MAC addresses in the range 01:80:c2:00:00:00 to 01:80:c2:00:00:0f. 802.1X uses 01:80:c2:00:00:03, so it effectively doesn't make it through the bridge. So much for the general assumption that a bridge is just a "virtual switch"... not.

Anyways, it looks like we can enable forwarding of those frames on our bridge br0: group_fwd_mask is a bitmask over the last byte of those group addresses, and bit 3 (value 8) corresponds to 01:80:c2:00:00:03. So, on a >=3.2 kernel:

echo 8 > /sys/class/net/br0/bridge/group_fwd_mask

If this is a pre-3.2 kernel, some folks maintained a patch to achieve the same.
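
For reference, here is a minimal sketch of the whole plain-Linux bridge setup so far, using the interface names from this post (on an EdgeRouter you would typically create the bridge through the vendor CLI and only need the group_fwd_mask step on the shell):

# Sketch, not EdgeOS syntax: bridge the At&t box (eth6) and the uplink (eth7).
brctl addbr br0
brctl addif br0 eth6
brctl addif br0 eth7
# Allow 802.1X frames through: bit 3 of the mask = 01:80:c2:00:00:03.
echo 8 > /sys/class/net/br0/bridge/group_fwd_mask
ip link set br0 up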

With this out of the way, the At&t router, with an uplink connection going through magic, should now sync with the At&t uplink just fine (and all sync/broadband LEDs turn steady green).

On to the next step. What's missing now is to direct all incoming traffic to the correct next hop, depending on whether it's meant for the At&t box itself (802.1X, management traffic, etc.) or not. Luckily, this is fairly easy to figure out, on paper. A U-verse "small business" contract that comes with 5 public IPs is set up the following way:

  • the At&t box gets provisioned with a static IP, which the support guy called "street IP", which "usually doesn't change, except if the physical wiring changes or something like that"; note that that IP is different from our 5 public ones
  • this is the IP assigned to the box itself, so everything sent to the box itself is using that one
  • the rest, with one of our 5 public IPs as destination, is also sent to the At&t box, for further routing
  • in other words, everything from the outside is passed to the At&t box, either as its destination or to be routed on to our public IP subnet; in layer-2 terms this means that every incoming ethernet frame has the At&t box' MAC as its destination address
  • for outgoing traffic, everything coming from the intranet needs to go directly to the uplink (which sounds obvious, but given that the gateway address for our public IP block ARPs to the At&t router, we also need a rule on magic to redirect those packets, so they actually go out and don't get dropped on the At&t router's WAN interface)

The last bullet point mentioned the gateway address - as a sidenote here: with this setup we will use the same gateway address as we would when using the At&t box the default way (running all traffic through it). This is the address the intranet uses as its internet gateway. It can be looked up in the At&t box' configuration, if you don't know it - it definitely has to match the one in the configuration, though, or the below won't work. The benefit is that this allows removing magic from the setup at any time, replugging everything the old way, and it will just work (except for then having NAT table limitations again).
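
As a concrete, hypothetical example: if the At&t box' configuration shows 1.2.3.1 as the gateway for our block, the internal router/firewall keeps exactly the default route it would have in the stock setup:

# 1.2.3.1 is a placeholder; use the gateway address from the At&t box' config.
ip route add default via 1.2.3.1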

So, what we want to do is filter on layer 2, matching on IP addresses, and rewrite the MACs so traffic goes either to the At&t box or to the intranet. Well, for the latter we now need a destination, so we need a box there, somewhere, acting as router, with one of our public IPs assigned. This would be your main firewall (in my case it also simply runs on the EdgeRouterPRO, virtually separated):


                       uplink
                         │
                         │
                 ┌──── magic ────┐
                 │               │
              router/fw        At&t
                 │            router
              intranet

We can use ebtables to do the traffic splitting, as this is layer-2 logic. Unfortunately, there is another stumbling block. At&t seems to use a VLAN here (not sure if always), probably per customer.
The current version of ebtables can match either layer-3 addresses or VLAN tags, but not both at the same time. So we first need to strip the VLAN/802.1Q tag from the ethernet frames to be able to make use of ebtables. And to do that, we first need to figure out the tag they assigned to us.

So, on magic's eth7, run a tcpdump with -e, and make sure some traffic goes through there (e.g. plug a machine into the LAN ports of the At&t router, and visit a website, or so):

tcpdump -ei eth7

The output will be something like this, with the VLAN tag displayed:

16:36:56.631607 10:20:30:40:50:60 (oui Unknown) > 60:50:40:30:20:10 (oui Unknown), ethertype 802.1Q (0x8100), length 147: vlan 2, p 0, ethertype IPv4, 1.2.3.4.5555 > 4.3.2.1.7777: UDP, length 101

So, in our case it's VLAN 2 that At&t assigned to us. Let's create VLAN interfaces eth6.2 and eth7.2, and also add them to br0. Now we can run ebtables rules on eth7.2 with matching on IP addresses, as this interface receives the incoming eth7 traffic with the VLAN tag removed. Before we get to the rules, we additionally need to add the interface that links to the intranet to our bridge; let's say this is eth5, so add eth5 to br0 for our example.
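
A minimal sketch of those interface additions, assuming plain iproute2/brctl tooling and the names used in this example:

# VLAN 2 subinterfaces on both bridge ports, plus the intranet link (eth5).
ip link add link eth6 name eth6.2 type vlan id 2
ip link add link eth7 name eth7.2 type vlan id 2
ip link set eth6.2 up
ip link set eth7.2 up
brctl addif br0 eth6.2
brctl addif br0 eth7.2
brctl addif br0 eth5

Then for ebtables: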

# This dnats destination MAC addresses on eth5 for our public IP subnet to the uplink MAC.
ebtables -t nat -A PREROUTING -p IPv4 -i eth5 --ip-src $OUR_PUB_IP_RANGE -j dnat --to-dst $MAC_ATT_UPLINK --dnat-target ACCEPT

# This will set the firewall box' MAC as destination for packets coming from the ISP. Do
# it on eth7.vlan, so ebtables can match on IP (which it can only do on packets with the VLAN
# tag stripped).
ebtables -t nat -A PREROUTING -p IPv4 -i eth7.2 --ip-dst $OUR_PUB_IP_RANGE -j dnat --to-dst $MAC_INTERNALFW --dnat-target ACCEPT

# This snats source MACs that go to the uplink (all of them) to the at&t box' MAC,
# to spoof it coming from there. This makes the packet go out to the ISP.
ebtables -t nat -A POSTROUTING -d $MAC_ATT_UPLINK -j snat --to-src $MAC_ATT_RG_WAN --snat-target ACCEPT

As you can see, there are some blanks to fill in, namely the following variables:

OUR_PUB_IP_RANGE  the public IP block given to you by At&t, e.g. 1.2.3.4/29
MAC_INTERNALFW    MAC address of the internal firewall box' interface that connects to eth5
MAC_ATT_RG_WAN    MAC address of the At&t box' WAN interface; usually printed on the At&t box' back
MAC_ATT_UPLINK    MAC address of the uplink hop's equipment, see below

In order to figure out MAC_ATT_UPLINK, which is the MAC address of the device at the other end of the cable coming out of your wall, you can use the following on magic:

brctl showmacs br0

This will list all MAC addresses that are part of, or directly attached to, br0:

port no mac addr                is local?       ageing timer
  5     44:77:44:77:44:77       no                 0.00
  1     10:20:30:40:50:60       no                 0.00
  1     00:11:77:44:22:05       yes                0.00
  2     00:11:77:44:22:06       yes                0.00
  4     00:11:77:44:22:07       yes                0.00
  3     a4:7a:77:a7:7a:77       no                44.82

Filtering out the three local addresses, which belong to the bridge interfaces eth5, eth6 and eth7 (note that eth6.2 and eth7.2 use the same MACs as eth6 and eth7), three others are left. Two of those are addresses we know, namely the firewall's (here: 10:20:30:40:50:60) and the At&t box' MAC (let's say this is a4:7a:77:a7:7a:77). The only one left in our example is 44:77:44:77:44:77, which is the one we are looking for. (If you have more than one remaining line, this might come from something else having been plugged in that the bridge learned. In that case, just watch the ageing timer column for an ever-increasing value and let those entries time out; eventually there should be only one address left.)

Now, put together your ebtables rules and give it a test.

This is basically it. You might want to think of the following though:

  • At&t might change the uplink hardware, so MAC_ATT_UPLINK might change; it's therefore a good idea to run brctl showmacs br0 in some cronjob to update the uplink MAC in case it changes (see the sketch after this list)
  • theoretically, At&t might also change the VLAN id; however, I don't think this is realistic, as the VLAN id is already an abstraction and easier for them to keep stable than, say, the hardware
  • be aware that every time you destroy and recreate br0, the group_fwd_mask needs to be reset to let 802.1X traffic pass
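
A hedged sketch of such a cronjob, reusing the example MACs and rules from above; the state file path and the assumption that the firewall and At&t MACs are the only known ones are mine, so adapt as needed:

#!/bin/sh
# Sketch only: re-learn the uplink MAC and re-point the ebtables rules that
# reference MAC_ATT_UPLINK. MACs, IP range and state file are placeholders.
MAC_INTERNALFW='10:20:30:40:50:60'
MAC_ATT_RG_WAN='a4:7a:77:a7:7a:77'
OUR_PUB_IP_RANGE='1.2.3.4/29'
STATE=/var/run/att_uplink_mac

# Non-local bridge entries, minus the two MACs we already know.
new=`brctl showmacs br0 | awk '$3 == "no" { print $2 }' \
    | grep -iv -e "$MAC_INTERNALFW" -e "$MAC_ATT_RG_WAN" | head -1`
cur=`cat $STATE 2>/dev/null`

if [ -n "$new" ] && [ "$new" != "$cur" ]; then
    if [ -n "$cur" ]; then
        ebtables -t nat -D PREROUTING -p IPv4 -i eth5 --ip-src $OUR_PUB_IP_RANGE \
            -j dnat --to-dst "$cur" --dnat-target ACCEPT
        ebtables -t nat -D POSTROUTING -d "$cur" \
            -j snat --to-src "$MAC_ATT_RG_WAN" --snat-target ACCEPT
    fi
    ebtables -t nat -A PREROUTING -p IPv4 -i eth5 --ip-src $OUR_PUB_IP_RANGE \
        -j dnat --to-dst "$new" --dnat-target ACCEPT
    ebtables -t nat -A POSTROUTING -d "$new" \
        -j snat --to-src "$MAC_ATT_RG_WAN" --snat-target ACCEPT
    echo "$new" > $STATE
fi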

UPDATE (2016-07-22): new methods to root the At&t box have been discovered in the meantime (see comments below), so the statement in the introduction doesn't fully hold anymore.

UPDATE (2016-10-02): as reported by others in the comments below, there are uplink setups that don't use any VLAN tagging, but still use 802.1Q frames with the special VLAN id 0, i.e. as a priority tag only. See the comments below for more info.

File rescue with dd and gawk
http://blog.0xpebbles.org/File-rescue-with-dd-and-gawk
11 Sep 2015

I recently had to undelete some accidentally deleted pictures on an SD card, after its owner had tried out different tools, and even brought it to a computer store (which tried more tools), but was only able to recover half of them. It was clear that the files had only been deleted, not overwritten, as he noticed his mistake immediately and refrained from using the card afterwards. The way deletion usually works by default, pretty much everything still had to be recoverable.

Turns out it was, and even without any rescue tool. When I started looking into it, the first tool I saw in the FreeBSD ports was magicrescue, but no matter what I tried, it always exited with the same error. Looking at the man page, I noticed right at the beginning:

It looks at "magic bytes" in file contents, [...] It works on any file system,
but on very fragmented file systems it can only recover the first chunk of each
file.  These chunks are sometimes as big as 50MB, however.

So, the tool is file-system agnostic: it seems to just look for some sequence of bytes and then recover some sequential run of bytes. This also means that very complex file systems, or features like compression and deduplication, will obviously not be suited for recovery with magicrescue.

Makes sense. And that also applies to my case: the card had a FAT32 file system on it (like probably most cameras use), meaning there won't be any fancy file-system features. Also, given that a camera stores one picture after the other (and when people delete one, it's often the last one, right after taking it), there is probably also little fragmentation.

So, basically, all I need to do is read all the bytes off of the card and split them on a certain pattern. split(1) and csplit(1) unfortunately don't help: the former only splits by size or line/byte counts, and the latter's patterns only match entire lines.

Inspecting the first few megabytes of the SD card revealed that the images on there are stored as Exif-JPEG files (starting with the magic numbers 0xff 0xd8, followed by 0xff 0xe1 for this subtype, details here). This is not something general-purpose, of course, and even for this one type of JPEG file not something to rely on. But I didn't want to split on 0xff 0xd8 alone (to keep false positives low), and assumed that the camera wrote all images in the same format/way.
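
To eyeball this on your own card, a quick, hedged check (with $SRC being the card's device, as below):

# Look for the Exif-JPEG magic (ff d8 ff e1) in the first few megabytes.
# Note: grep can miss matches straddling hexdump's column spacing, but
# this is good enough for a quick sanity check.
dd if=$SRC bs=1M count=4 | hexdump -C | grep 'ff d8 ff e1' | head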

Completely ignoring the end-markers of JPEG files, and accepting that the recovered images might have some garbage data appended, I started splitting up the data on the SD card on that 4-byte pattern. That works quite nicely with dd and gawk (note that POSIX awk won't work here, as its record separator can only be a single character):

dd if=$SRC of=/dev/stdout bs=1M | \
  gawk 'BEGIN { FS="fs is not important"; RS="\xff\xd8\xff\xe1" } { print RS$0 > sprintf("%04d.jpg", NR) }'

Of course, set $SRC to the device you want to recover your files from.

That's it - I was able to recover every single image off of that card, with a shell one-liner! Of course, this is a specific case that made this possible: simple file system, no fragmentation, only JPEG files to recover, and only one JPEG type to look for, etc., but it can easily be extended to suit other purposes.
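
For instance, a hypothetical variant that splits on the PNG signature instead (eight magic bytes, \x89PNG\r\n\x1a\n), with everything else unchanged:

# Same idea as above; the trailing-garbage caveat applies just the same.
dd if=$SRC of=/dev/stdout bs=1M | \
  gawk 'BEGIN { RS="\x89\x50\x4e\x47\x0d\x0a\x1a\x0a" } { print RS$0 > sprintf("%04d.png", NR) }'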

Here's a slightly more convenient version as a shell script, allowing you to seek, set the size to recover, and pass an optional prefix for the recovered images (still only looking for the same 4 bytes to split on, though):

#!/bin/sh
if [ $# -lt 3 ]; then echo "Usage: $0 DEV SIZE_MB SEEK_MB [OUT_PREFIX]"; exit 1; fi
# Note: iseek is the FreeBSD dd spelling; with GNU dd use skip= instead.
dd if=$1 of=/dev/stdout bs=1M count=$2 iseek=$3 | \
  gawk 'BEGIN { FS="fs is not important"; RS="\xff\xd8\xff\xe1" } { print RS$0 > sprintf("'$4'%04d.jpg", NR) }'
Use GNU screen to hide your login
http://blog.0xpebbles.org/Use-GNU-screen-to-hide-your-login
27 Jun 2015

A friend of mine showed me a cool trick to somewhat hide your login, using GNU screen. From what I can tell from online searches, this doesn't seem to be widely known... GNU screen has the command line options -l and -ln (or the command C-a L to toggle) to control a window's login behaviour. From the man page:

-l and -ln
    turns login mode on or off (for  /etc/utmp  updating).   This  can
    also be defined through the "deflogin" .screenrc command.
C-a L       (login)       Toggle this  windows  login  slot.  Available
                          only  if  screen  is configured to update the
                          utmp database.

The benefit we get from this: once login mode is turned off, we are still controlling screen, so we are logged in, but there is no record of it in the user accounting login records anymore. So, let's see, starting screen and running who(1):

$ who
dude            ttyv0    Jun 27 08:51
dude            pts/0    Jun 27 08:52 (:0)
dude            pts/1    Jun 27 08:52 (:0)
dude            pts/3    Jun 27 10:45 (:0)

Now after toggling login mode with C-a L:

$ who
dude            ttyv0    Jun 27 08:51
dude            pts/0    Jun 27 08:52 (:0)
dude            pts/1    Jun 27 08:52 (:0)
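
For completeness: instead of toggling after the fact, a window can start with login mode off right away, using the -ln option quoted above:

$ screen -ln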

Also, other commands like w(1) on FreeBSD behave similarly. However, it's not all hidden, of course. The screen process shows up in ps(1), along with the user running it, and I'm sure there are more ways... e.g. on FreeBSD I can use getent(1) to get some idea of what's going on:

$ getent utmpx active
[1435387900.738886 -- Sat Jun 27 08:51:40 2015] user process: id="74a6f9bdc43b89a6" pid="1631" user="dude" line="ttyv0" host=""
[1435387921.482140 -- Sat Jun 27 08:52:01 2015] user process: id="cdab068901338b19" pid="1976" user="dude" line="pts/0" host=":0"
[1435387922.927262 -- Sat Jun 27 08:52:02 2015] user process: id="0eb0980869dbcdaf" pid="2027" user="dude" line="pts/1" host=":0"
[1435394277.047666 -- Sat Jun 27 10:45:57 2015] user process: id="2f33000000000000" pid="1518" user="dude" line="pts/3" host=":0"

And after toggling, it still shows the same number of entries, but with one slightly suspicious difference - the last entry has lost its user, line and host fields:

$ getent utmpx active
[1435387900.738886 -- Sat Jun 27 08:51:40 2015] user process: id="74a6f9bdc43b89a6" pid="1631" user="dude" line="ttyv0" host=""
[1435387921.482140 -- Sat Jun 27 08:52:01 2015] user process: id="cdab068901338b19" pid="1976" user="dude" line="pts/0" host=":0"
[1435387922.927262 -- Sat Jun 27 08:52:02 2015] user process: id="0eb0980869dbcdaf" pid="2027" user="dude" line="pts/1" host=":0"
[1435394277.047666 -- Sat Jun 27 10:45:57 2015] user process: id="2f33000000000000" pid="1518"

What I don't understand, though, is how this actually works. On FreeBSD I can achieve a similar effect using utxrm(8), by modifying the accounting database directly; however, that is limited to root. At first I thought screen has those permissions because it is installed by default with the setuid (or setgid, on some distributions) bit set, which is necessary to make multiuser sharing work. However, a test shows that it still works after removing that bit...

Oh well, the answer is probably in what the manpage states: this only works "if screen is configured to update the utmp database", which seems to be the case on all systems I've touched.

Pretty neat... unfortunately, tmux doesn't have this feature. :(

shell script for S3-upload via curl using AWS version 4 signatures
http://blog.0xpebbles.org/shell-script-for-S3-upload-via-curl-using-AWS-version-4-signatures
19 Mar 2015

This is a more modern version of this script, switching to AWS version 4 signatures, which are mandatory for AWS regions created after January 2014. It also works with older regions, as they seem to support the new signature format as well.

The script's interface is a bit easier and more intuitive, too, and now also allows setting the access permissions. See the script's own help text for information and examples on how to use it.

#!/bin/sh

usage()
{
    cat <<USAGE

Simple script uploading a file to S3. Supports AWS signature version 4, custom
region, permissions and mime-types. Uses Content-MD5 header to guarantee
uncorrupted file transfer.

Usage:
  `basename $0` aws_ak aws_sk bucket srcfile targfile [acl] [mime_type]

Where the arguments are:
  aws_ak     access key ('' for upload to public writable bucket)
  aws_sk     secret key ('' for upload to public writable bucket)
  bucket     bucket name (with optional @region suffix, default is us-east-1)
  srcfile    path to source file
  targfile   path to target (dir if it ends with '/', relative to bucket root)
  acl        s3 access permissions (default: public-read)
  mime_type  optional mime-type (tries to guess if omitted)

Dependencies:
  To run, this shell script depends on command-line curl and openssl, as well
  as standard Unix tools

Examples:
  To upload file '~/blog/media/image.png' to bucket 'storage' in region
  'eu-central-1' with key (path relative to bucket) 'media/image.png':

    `basename $0` ACCESS SECRET storage@eu-central-1 \\
      ~/blog/image.png media/

  To upload file '~/blog/media/image.png' to public-writable bucket 'storage'
  in default region 'us-east-1' with key (path relative to bucket) 'x/y.png':

    `basename $0` '' '' storage ~/blog/image.png x/y.png

USAGE
    exit 0
}

guessmime()
{
    mime=`file -b --mime-type "$1"`
    if [ "$mime" = "text/plain" ]; then
        case $1 in
            *.css)           mime=text/css;;
            *.ttf|*.otf)     mime=application/font-sfnt;;
            *.woff)          mime=application/font-woff;;
            *.woff2)         mime=font/woff2;;
            *rss*.xml|*.rss) mime=application/rss+xml;;
            *)               if head "$1" | grep '<html.*>' >/dev/null; then mime=text/html; fi;;
        esac
    fi
    printf "$mime"
}

if [ $# -lt 5 ]; then usage; fi

# Inputs.
aws_ak="$1"                                                              # access key
aws_sk="$2"                                                              # secret key
bucket=`printf $3 | awk 'BEGIN{FS="@"}{print $1}'`                       # bucket name
region=`printf $3 | awk 'BEGIN{FS="@"}{print ($2==""?"us-east-1":$2)}'`  # region name
srcfile="$4"                                                             # source file
targfile=`printf '%s' "$5" | sed "s/\/$/\/$(basename $srcfile)/"`        # target file
acl=${6:-'public-read'}                                                  # s3 perms
mime=${7:-"`guessmime "$srcfile"`"}                                      # mime type
md5=`openssl md5 -binary "$srcfile" | openssl base64`


# Create signature if not public upload.
key_and_sig_args=''
if [ "$aws_ak" != "" ] && [ "$aws_sk" != "" ]; then

    # Need current and file upload expiration date. Handle GNU and BSD date command style to get tomorrow's date.
    date=`date -u +%Y%m%dT%H%M%SZ`
    expdate=`if ! date -v+1d +%Y-%m-%d 2>/dev/null; then date -d tomorrow +%Y-%m-%d; fi`
    expdate_s=`printf $expdate | sed s/-//g` # without dashes, as we need both formats below
    service='s3'

    # Generate policy and sign with secret key following AWS Signature version 4, below
    p=$(cat <<POLICY | openssl base64
{ "expiration": "${expdate}T12:00:00.000Z",
  "conditions": [
    {"acl": "$acl" },
    {"bucket": "$bucket" },
    ["starts-with", "\$key", ""],
    ["starts-with", "\$content-type", ""],
    ["content-length-range", 1, `ls -l -H "$srcfile" | awk '{print $5}' | head -1`],
    {"content-md5": "$md5" },
    {"x-amz-date": "$date" },
    {"x-amz-credential": "$aws_ak/$expdate_s/$region/$service/aws4_request" },
    {"x-amz-algorithm": "AWS4-HMAC-SHA256" }
  ]
}
POLICY
    )

    # AWS4-HMAC-SHA256 signature
    s=`printf "$expdate_s"   | openssl sha256 -hmac "AWS4$aws_sk"           -hex | sed 's/(stdin)= //'`
    s=`printf "$region"      | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`
    s=`printf "$service"     | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`
    s=`printf "aws4_request" | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`
    s=`printf "$p"           | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`

    key_and_sig_args="-F X-Amz-Credential=$aws_ak/$expdate_s/$region/$service/aws4_request -F X-Amz-Algorithm=AWS4-HMAC-SHA256 -F X-Amz-Signature=$s -F X-Amz-Date=${date}"
fi


# Upload. Supports anonymous upload if bucket is public-writable, and keys are set to ''.
echo "Uploading: $srcfile ($mime) to $bucket:$targfile"
curl                            \
    -# -k                       \
    -F key=$targfile            \
    -F acl=$acl                 \
    $key_and_sig_args           \
    -F "Policy=$p"              \
    -F "Content-MD5=$md5"       \
    -F "Content-Type=$mime"     \
    -F "file=@$srcfile"         \
    https://${bucket}.s3.amazonaws.com/ | cat # pipe through cat so curl displays upload progress bar, *and* response
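
A quick, hedged way to check that an upload landed, using the bucket and key from the first usage example above (this assumes a public-read ACL; some regions may want the regional endpoint in the URL):

curl -sI https://storage.s3.amazonaws.com/media/image.png | head -n 3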

Better filebrowser for mutt serving MH mailbox
http://blog.0xpebbles.org/Better-filebrowser-for-mutt-serving-MH-mailbox
19 Dec 2014

Mutt has its strong sides and also its weak sides. One of the weak ones is the file browser, which isn't really designed for actual browsing, as it only displays one level of the file hierarchy at a time. If not entering a path directly, actual browsing is done by:

  • either typing a subpath, then using the browser again on the partial path
  • or one can tell mutt up front about the folders in existence using the mailboxes command (and also populate that automatically, as shown here and here), and then use the Mailboxes view of the browser, which is a flat list

The problem is that both ways are frustrating to use with an ever-growing mailbox. The former makes it hard to see which folders and subfolders have unread mail, and to get an overview of the hierarchy in general. The latter is a flat list, in which it's hard to find anything once you have a lot of nodes. There is a patchset for mutt that adds a sidebar for browsing, but it suffers from the same problem.

Some mail setups don't face these problems, as they aren't file-system heavy, or they use indexers. However, I'm used to MH as a mailstore, and like to drop/filter my mail into many folders and subfolders, the old-school way. So, I decided to come up with my own solution, hooking up vim as a filebrowser. Note: the scripts below work with MH only, and require nmh to be accessible.

Add this "pretty" macro to your .muttrc. What it does is running nmh's flists command, uses the output to build a dummy directory tree under /tmp with number of unread emails (summed up towards the root), then lets vim do the folder browsing. Once a folder is picked, it hands it back to mutt:

# own filebrowser 
set wait_key=no 
macro index,pager b '\ 
	<shell-escape>\ 
	if [ -d /tmp/mailbox_tree_mirror ]; then rm -r /tmp/mailbox_tree_mirror; fi;\ 
	flists -alpha -recurse |\ 
		sed -E "s@^(.*) has *([0-9]+) .*@\1/\2@" |\ 
		sed -E "s@[+ ]*(/[0-9]+)\$@\1@" |\ 
		awk "\ 
			BEGIN{FS=\"/\"}\ 
			{s=\"\";for(i=1;i<NF;++i){s=s\"/\"\$i;u[s]+=\$NF}}\ 
			END{for(i in u){split(i,d);s=p=\"\";for(j in d){if(d[j]){s=s\"/\"d[j];p=p\"/\"d[j]\" (\"u[s]\")\"}}print p}}" |\ 
		while read l; do mkdir -p "/tmp/mailbox_tree_mirror/$l"; done;\ 
	(cd /tmp/mailbox_tree_mirror/ && vim -S ~/.mutt/browse_files.vim .);\ 
	<enter>\ 
	<enter-command>\ 
	source /tmp/mailbox_tree_mirror/pick\ 
	<enter>'

The macro is bound to b, and makes some assumptions about some paths. Adapt as needed.

Starting vim sources a file I put at ~/.mutt/browse_files.vim (the path the macro above references), which works with either netrw (usually comes with vim) or NERDTree:

" This folder browser depends on either netrw or NERDTree
" Usage:
"   - use with muttrc macro building a dummy image of the mailbox hierarchy, then using vim as browser on it
"   - browse with vim; gm (go mailbox) is mapped to select inbox and return to mutt

fu! SwitchToMailboxInMutt()
	" hack to get path and write to file, so mutt can source it
	:silent !print push \'<change-folder>`pwd|sed 's@^/.*tmp/mailbox_tree_mirror/@=@'|sed -E 's@ \([0-9]+\)(/|$)@/@g'|sed -E 's@( )@\<quote-char\>\1@'g`'\<return\> > /tmp/mailbox_tree_mirror/pick
	:q!
endf


function! AddMuttSyntaxHL(parent)
	exec 'syn match muttUnreadSel # ([0-9]\+)/# containedin='.a:parent.'Dir'
	syn match muttUnreadCnt #([0-9]\+)#   containedin=muttUnreadSel
	syn match muttUnread0   #(0)#         containedin=muttUnreadCnt
	hi link muttUnreadSel Ignore
	hi link muttUnreadCnt Number
	hi link muttUnread0   Comment
endf


function! ApplyMuttNERDTreeBindings()
	" Remove clutter and hook into syntax - call syntax setter directly,
	" as NT hijacks netrw, and thus is kinda delay-loaded, anyways (IIUC).
	let g:NERDTreeMinimalUI=1
	call AddMuttSyntaxHL('NERDTree')

	" hide files, jump to first folder entry
	:normal F
	call cursor(line('.')+4, 1)

	" gm picks this directory
	map gm cd<CR>:call SwitchToMailboxInMutt()<CR>
endf


function! ApplyMuttNetrwBindings()
	" Hook into syntax, needs to happen after netrw's syntax is fully loaded.
	autocmd FileType netrw call AddMuttSyntaxHL('netrw')

	" Use tree mode, only, and hide all files
	let g:netrw_liststyle=3
	let g:netrw_list_hide='.*[^/]$'

	" gm picks this directory - long macro b/c of toggle-behaviour of netrw's browser, didn't find a way to get path under cursor
	map gm <CR>:let s=b:netrw_curdir<CR><CR>:if(strlen(b:netrw_curdir)>strlen(s))\|let s=b:netrw_curdir\|endif<CR>:exec 'cd '.fnameescape(s)<CR>:call SwitchToMailboxInMutt()<CR>
endf


if &ft ==# "nerdtree"
	call ApplyMuttNERDTreeBindings()
else
	call ApplyMuttNetrwBindings()
endif

That's it - instead of having to deal with mutt's browser, I now hit b, browse, and when I've found the folder I want to switch to, I hit gm in vim (goto mailbox), and I'm back in mutt. Also, I get a quick overview of all my folders and their unread emails, even in color!

[Screenshot: mutt/mh with vim folder browser]