Simplistic and cross-platform make (as in Windows and *nix)

Traditional make utilities are often tied to a single platform. For example, msbuild and nmake are Windows-only, BSD has its own make flavors, and while ports of GNU make exist for a lot of non-GNU platforms, they don't come by default on many of them. To make things worse, IDEs mostly come with their own very specific project files, which often don't even stay compatible across versions of the same program (e.g. Visual Studio).

Having worked in games my whole career, the companies I worked for were pretty much committed to Visual Studio, and came up with all kinds of creative ways to support different versions of it, for various reasons. Half of the team uses version A, because it supports new and needed features, the other half sticks with the old version for a while, and somehow those project files need to be kept in sync. In middleware it gets even worse, as nobody wants to dictate to the clients which version to use, so multiple versions need to be supported at all times. But this is a different story...

Obviously, some developers like to write portable software and prefer to use portable tools, as well. Those needs are covered by project-file generators (like bakefile, genmake, cmake, and many more) or cross-platform build tools (like rake, SCons, etc.) - these tools are amazing, and I myself use rake for a lot of what I work on in my spare time. Ruby is available on many platforms, so that solves the problem for me.

However, it has always felt odd to build small projects with build tools that are much bigger than the project itself. Using FreeBSD and building from ports a lot, it's astounding to watch what scrolls by and see how many small open source projects use automake - with the latter often taking longer to generate the makefile than building the project itself takes. Other projects that focus on portability often depend on some portable build tool, which in turn depends on something else, and so on. Often, a ton of other projects needs to be built first, just in order to build the project one was after in the first place.

Of course, big projects benefit from the flexibility a decent build tool offers - I'm not trying to judge that, at all. However, wouldn't it be nice for tiny projects to simply use what's available by default on different platforms? Wouldn't it be nice if a small C/C++ project could be released without any additional dependencies on build tools, other than what usually comes with the default platform tools? While working on dyncall, we asked ourselves this question a lot, as our goal was to have a project without any dependencies on any platform (other than the platform's native build tools, of course). So, we simply wrote makefiles for GNU make, BSD make, Windows' nmake, Plan9's mk, etc. Obviously, this works nicely and is tiny compared to some makefile generators, but it is harder to maintain.

I came up with an experiment called dynMake, which aims to abstract the platforms' native build tools, relying only on the system's shell/cmd and the C compiler/preprocessor (which is needed to build, anyway). dynMake's feature set is pretty limited, but enough for simple builds. This means simple makefile-style files that should work out of the box.

Please note that this is not in any way finished or polished, and it abstracts only BSD make, GNU make and nmake, at the moment. At the very base, there is a Makefile, which is the main entry point for the build. It's tiny and only makes use of syntax that is valid for all three of the make systems mentioned above:

all: ./buildsys/dynmake/dynmake.bat
	$(?:/=\\) all $(MAKE) && exit || sh $(?:bat=sh) all $(MAKE)

The platform selection pretty much happens here. The make target "all" depends on a script that gets called and takes over the build process. Left of the "||", the Windows version of the script gets called: the parameter substitution turns the script path's slashes into backslashes, and the .bat file is invoked with the target name and the name of the make system in use. If this succeeds, we know we are on Windows, and the process exits. If we aren't running this from cmd, the left-hand side fails and the right-hand side gets executed (the file extension gets adjusted and the shell script gets called instead). This assumes that the two files live in a subfolder ./buildsys/dynmake, relative to the Makefile.
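The fallback pattern can be sketched in isolation as a plain shell command. This is a hypothetical stand-alone version of the same trick; the echo is just an illustrative stand-in for invoking dynmake.sh:

```shell
# On a POSIX system, the .bat file can't be run as a command here, so
# '&&' short-circuits and the command after '||' executes instead.
# (The echo is an illustrative stand-in for invoking dynmake.sh.)
./buildsys/dynmake/dynmake.bat all make 2>/dev/null && exit \
  || echo "not cmd.exe - falling back to the sh script"
```

On Windows, cmd would run the .bat successfully and never reach the right-hand side.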

dynmake.bat looks like this:

cl /nologo /DMAKE_CMD_%~n2 /EP Makefile.M 1> Makefile.dynmake
%2 /NOLOGO /f Makefile.dynmake %1

And dynmake.sh like this:

gcc -D MAKE_CMD_$2 -E -P -x c Makefile.M | sed "s/^ */	/" > Makefile.dynmake
$2 -f Makefile.dynmake $1

Both invoke the system's compiler to preprocess a file called Makefile.M, which should sit next to Makefile and is the makefile with the actual build logic. Depending on the build tool in use (specified by the MAKE_CMD_<tool> parameter), the preprocessed output is written to Makefile.dynmake, which in turn is used to build the project. So, the syntax abstraction is done by the C preprocessor. Unfortunately, the C standard doesn't guarantee that the preprocessor preserves whitespace, which is why the sed invocation is needed to turn whitespace at the beginning of a line back into tabs (the replacement part of the sed command is a literal tab; cl seems to keep tabs, so the .bat version gets by without this step).
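That sed step can be tried stand-alone. The sketch below uses the same expression as dynmake.sh, with the literal tab produced via printf for portability, and a made-up command line standing in for preprocessor output:

```shell
# Simulated preprocessor output: the command's leading tab has become
# spaces, which make would reject; sed folds the leading run of spaces
# back into a single tab character.
printf '    cc -c main.c\n' | sed "s/^ */$(printf '\t')/"
```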

This is how a simple Makefile.M would look (building a library, in this case: test.lib on Windows, libtest.a otherwise):

#include "buildsys/dynmake/Makefile.base.M"

all: _L(test)

_L(test): _O(obj1) _O(obj2) _O(obj3)

All the abstraction and substitution magic is in Makefile.base.M, which is required for this to work. Every Makefile.M simply needs to #include it at the top of the file.
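For illustration, here is roughly what the example above should expand to on each side (hand-expanded from the _L/_O definitions in Makefile.base.M, not actual dynMake output):

```makefile
# with nmake on Windows (_L(test) -> test.lib, _O(objN) -> objN.obj):
all: test.lib
test.lib: obj1.obj obj2.obj obj3.obj

# with GNU/BSD make elsewhere (_L(test) -> libtest.a, _O(objN) -> objN.o):
all: libtest.a
libtest.a: obj1.o obj2.o obj3.o
```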

The following is a (work-in-progress) version of Makefile.base.M:

/* dyncall_macros.h is from dyncall's sources and for DC_* macros, below. */
#include "../../dyncall/dyncall_macros.h"

#if defined(DC_WINDOWS) && defined(MAKE_CMD_nmake)

/* Abstractions */
#define _(X) $(X) /* Standard variables */
#define _L(X) X.lib
#define _O(X) X.obj

#define TARGET @
#define PREREQS **

/* Makefile internal vars for platform abstraction */

AR = lib

CFLAGS_USER = /nologo

LDFLAGS_USER = /nologo

ASFLAGS_USER = /nologo
AFLAGS = _(AFLAGS) _(ASFLAGS_USER) /* Set AFLAGS (without 'S'), which is the */
ASFLAGS = _(AFLAGS)                /* standard nmake predefined macro for MASM */

RM = del


/* inference rule: preprocess .S files with cl, then assemble them */
.S.obj:
	cl /nologo /EP $< > $*.asm
	_(AS) _(ASFLAGS) /c $*.asm
	del $*.asm

#else /* unix-like systems with GNU make, BSD make, ... */

/* Abstractions */
#define _(X) ${X} /* Standard variables */
#define _L(X) lib##X.a
#define _O(X) X.o

#define TARGET @
#if defined(MAKE_CMD_gmake) || \
    (defined(DC__OS_Linux)  && !defined(MAKE_CMD_bsdmake)) || \
    (defined(DC__OS_Darwin) && !defined(MAKE_CMD_bsdmake)) || \
    (defined(DC__OS_SunOS)  && !defined(MAKE_CMD_bsdmake))
# define PREREQS ^
#else
# define PREREQS >
#endif

/* Makefile internal vars for platform abstraction */

AR = ar

RM = rm -f

#endif


I guess the idea is obvious - the combination of a command line interpreter, the C preprocessor and some kind of make is powerful, and can be used to create a tiny and somewhat cross-platform build system. I say somewhat, because I'm sure that trying to get mk (or other make tools) into the mix will probably not be trivial. To build, just invoke nmake on Windows, make on other platforms, etc.

Please note that the above is simply a proof of concept and in no way complete. It's also nowhere near as flexible as the native make tools' feature sets, at least not without massively extending Makefile.base.M. However, for small projects with very straightforward makefiles - simple target/dependency mappings that are written as-is - it's pretty neat.

To sum it up: to use this, wherever one would invoke {g,bsd,n}make, there needs to be a Makefile with the stub that runs the shell/cmd scripts, and next to it a Makefile.M with the actual rules for the project. For a working, real-world example that is a bit more complex than the snippets above, check out dyncall's sources. In dyncall's case, the main Makefile to use for dynMake is called dynMakefile, though, so make sure to specify it on the command line (e.g. make -f dynMakefile).