CMake Rants

CMake is a highly popular ‘meta-build’ system: it is a custom declarative syntax that is used to generate build scripts for all major OSs and compilers (e.g., VS solutions and projects). Designing such a system is a formidable task, but really not a very wise one to undertake in the first place.

I know there are only languages that people complain about and languages nobody uses. I know it’s bad manners to complain about stuff you get for free. I also know I’m working on Windows, and CMake scripts are generally authored by people who don’t care much about Windows development.

And still I can’t help it. Here are a few particular irks.


When CMake asks ‘where to build the binaries’ it isn’t talking about anything resembling a binary. It’s asking about the proper location for the outputs of its processing – i.e., solutions and projects (or other build scripts).


This goes beyond a simple ‘designed by an engineer’ cliché. How long did it take you to figure out you need to repeatedly click the ‘configure’ button until all red lines are gone, then ‘Generate’?


Not so good.

Clean/Rebuild is generally just a recommendation.

But what really makes CMake nearly unusable to me is –

CMake’s treatment of paths

In all generated build scripts, paths are absolute. In vcxproj’s, everything – Output Path, Additional Include Directories, PDB path, linker input directories, the custom build steps in ZERO_CHECK and ALL_BUILD, etc. etc. – is an absolute path.

This makes CMake-generated projects almost useless: you cannot source control them or share them in any other way.

Turns out this has been known for quite some time. There’s a variable called CMAKE_USE_RELATIVE_PATHS, but its documentation says:

May not work!… In general, it is not possible to move CMake generated makefiles to a different location regardless of the value of this variable.

It seems they tried to fix it for a while but gave up, and instead posted a FAQ which I don’t really understand:

CMake uses full paths because:

  1. configured header files may have full paths in them, and moving those files without re-configuring would cause unpredictable behavior.
  2. because cmake supports out of source builds, if custom commands used relative paths to the source tree, they would not work when they are run in the build tree because the current directory would be incorrect.
  3. on Unix systems rpaths might be built into executables so they can find shared libraries at run time. If the build tree is moved old executables may use the old shared libraries, and not the new ones.

Can the build tree be copied or moved?

The short answer is NO. The reason is because full paths are used in CMake, see above. The main problem is that cmake would need to detect when the binary tree has been moved and rerun. Often when people want to move a binary tree it is so that they can distribute it to other users who may not have cmake in which case this would not work even if cmake would detect the move.

The workaround is to create a new build tree without copying or moving the old one.

The way I see it, the real reasons for this sorry state are laid out in this 2009 discussion:

You should give up on CMAKE_USE_RELATIVE_PATHS , and we should deprecate it from CMake.  It just does not work, and frustrates people.

… It is really hard to make everything work with relative paths, and you don’t get that much out of it, except lots of maintenance issues and corner cases that do not work.

An alternative that I’m growing fond of

Is ‘Project from existing code’:

Download the sources you wish to use, and instead of invoking CMake on the root CMakeLists.txt, invoke ‘Project from existing code’, select the source folder and follow the rest of the wizard instructions.

Today this approach worked for me perfectly on the first try (on this library, which is admittedly simple in structure), but that was probably just luck. It certainly isn’t perfect – but it is simple, and the generated project files do use relative paths. I’m beginning to think that even when tweaks are needed, this is a better starting point for a usable project. Do tell me in the comments if your experience is different.


It seems some words of clarification about my usage scenario are in order.

I wish to import an open source project to my build environment, and continue from there. With my build environment. Is that such an exceptional scenario? (It might be, judging by the comments below.) I was under the impression that this is what the CMake authors aimed for (why generate a VS project/solution otherwise?), and if I were creating an open source package, that is how I would want others to use my code.

Moreover, except in the simplest of cases this is the only way to go: a CMake-generated project cannot possibly be the final say. All native build engines have their special knobs and handles that often must be tweaked. Did you ever, e.g., want to change the import library for a dll? Not really possible in CMake. Not to mention more advanced stuff – e.g., rebasing it or enabling ASLR on it.  I didn’t mention this above because I don’t consider it a CMake flaw – it’s just too much to ask of a portable build system. All you can expect from it is to set a portable common ground.

So I would expect CMake to generate a native build package (say, a VS solution) in a way that would make it possible to forget it was generated by CMake.  Due to all the native knobs and handles this is an inherently hard user story to implement – but CMake fails much, much earlier. I’d consider using relative paths a must-have, and I don’t see why portability makes this task any harder than, say, the MSBuild authors’ task of using relative paths.

Posted in C++ | 12 Comments

On Matlab’s loadlibrary, proto file and pcwin64 thunk

Today we’ll try to shed some light on dark undocumented corners of Matlab’s external interfaces.

Matlab provides several ways to call into external native code – if you have just the binaries for this code, the way is loadlibrary.  To help parse the external dll contents you’d need to initially provide loadlibrary a C/C++ header file, but you can then tell loadlibrary to transform this header info into a Matlab-native representation, typically named a ***_proto.m file.  When the proper loadlibrary call is made, a proto.m file is generated along with something named ***_thunk_pcwin64.dll (well, on x64 PCs, obviously).  The documentation says nearly nothing about either proto files or thunk files:

A prototype file is a file of MATLAB commands which you can modify and use in place of a header file. …

A thunk file is a compatibility layer to a 64-bit library generated by MATLAB.

One could do with this terse phrasing until something goes wrong – as it inevitably does.  Googling shows only that this seems to be an open question online as well. Time to peek inside.

Peeking inside

Take a toy C++ dll:

// ToyDLL.h
#ifdef TOYDLL_EXPORT
#define TOYDLL_API __declspec(dllexport)
#else
#define TOYDLL_API __declspec(dllimport)
#endif

extern "C" { TOYDLL_API bool ToyFunc(int a, int b, double c); }

// ToyDLL.cpp: build with /D TOYDLL_EXPORT

#include "ToyDLL.h"
#include <stdio.h>
#include <tchar.h>

extern "C" {
	TOYDLL_API bool ToyFunc(int a, int b, double c)
		_tprintf(_T("%d, %d, %f"), a, b, c);
		return true;

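For reference, a minimal build command would be something along these lines (hedged – any switches that produce a 64-bit DLL with TOYDLL_EXPORT defined should do):

cl /LD /DTOYDLL_EXPORT ToyDLL.cpp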
Try to loadlibrary the resulting DLL in Matlab, and get:

Error using loadlibrary
Call to Perl failed.  Possible error processing header file.
Output of Perl command:
Working string is 'extern " C " {  bool ToyFunc ( int a , int b , double c ); }'.
at C:\Program Files\MATLAB\R2016a\toolbox\matlab\general\private\prototypes.pl line 1099
main::DumpError('extern "C" { found in file. C++ files are not supported.  Use...') called at C:\Program Files\MATLAB\R2016a\toolbox\matlab\general\private\prototypes.pl line 312
ERROR: extern "C" { found in file. C++ files are not supported.  Use #ifdef __cplusplus to protect.

Found on line 13 of input from line 12 of file ToyDLL.h

Hmmmm. The perl script, prototypes.pl, is shipped with Matlab (the error message above gives its full path), and the first few of its 1000 lines are:

# Parse a C/C++ header file and build up three data structures: the first
# is a list of the prototypes defined in the header file; the second is
# a list of the structures used in those prototypes.  The third is a list of the
# typedef statements that are defined in the file

It should be noted already that the C/C++ boundary is extremely fuzzy as far as Matlab is concerned. Not only is this script declared to ‘Parse C/C++ headers’ only to complain later that ‘C++ is not supported’; loadlibrary itself is advertised to ‘Load C/C++ shared library into MATLAB’, only to disclaim elsewhere that ‘The MATLAB® shared library interface supports C library routines only’ and offer various workarounds for C++.  More details later, but for now let’s humor the grumpy perl script and modify the header (it never touches the cpp) into:

// ToyDLL.h
#ifdef TOYDLL_EXPORT
#define TOYDLL_API __declspec(dllexport)
#else
#define TOYDLL_API __declspec(dllimport)
#endif

#ifdef __cplusplus
extern "C" {
#endif

TOYDLL_API bool ToyFunc(int a, int b, double c);

#ifdef __cplusplus
}
#endif

And now loadlibrary quietly succeeds.

Peeking deeper inside

As the perl source is available you could in principle study it to understand what it does – but I certainly couldn’t, in principle or not (it’s horrible even as far as perl scripts go). With some semi-hacking, we can just inspect its output. First, fire up Process Monitor and filter to process ‘perl.exe’ to observe the exact files it receives and generates:


Observe that the perl script operates on the VC-preprocessed file ToyDLL.i.  This could also be seen by examining earlier portions of the ProcMon trace, or by noting that prototypes.pl itself declares its usage internally as –

# prototypes [options] [-outfile=name] input.i  [optional headers to find prototypes in]

Next, observe that it outputs the source file ToyDLL_thunk_pcwin64.c.  Alas, trying to open it teaches that it is very short-lived.  The final hack is to copy this temp file somewhere upon its creation. I considered coding such a tool (shouldn’t be too much trouble) but thought I’d google for one first, and luckily came across the free edition of Limagito. After some tweaking I got a nice persistent copy of the thunk.c source, which is essentially:

#include <tmwtypes.h>

/* use BUILDING_THUNKFILE to protect parts of your header if needed when building the thunkfile */


#include "ToyDLL.h"

/*  bool ToyFunc ( int a , int b , double c ); */
EXPORT_EXTERN_C bool boolint32int32doubleThunk(void fcn(),const char *callstack,int stacksize)
{
int32_T p0;
int32_T p1;
double p2;
p0=*(int32_T const *)callstack;
callstack+=sizeof(p0) % sizeof(size_t) ? ((sizeof(p0) / sizeof(size_t)) + 1) * sizeof(size_t):sizeof(p0);
p1=*(int32_T const *)callstack;
callstack+=sizeof(p1) % sizeof(size_t) ? ((sizeof(p1) / sizeof(size_t)) + 1) * sizeof(size_t):sizeof(p1);
p2=*(double const *)callstack;
callstack+=sizeof(p2) % sizeof(size_t) ? ((sizeof(p2) / sizeof(size_t)) + 1) * sizeof(size_t):sizeof(p2);
return ((bool (*)(int32_T , int32_T , double ))fcn)(p0 , p1 , p2);
}

For completeness, here is the generated ToyDLL_proto.m file:

function [methodinfo,structs,enuminfo,ThunkLibName]=ToyDLL_proto
%TOYDLL_PROTO Create structures to define interfaces found in 'ToyDLL'.
%This function was generated by loadlibrary.m parser version  on Fri Jul 15 17:50:20 2016
%perl options:'ToyDLL.i -outfile=ToyDLL_proto.m -thunkfile=ToyDLL_thunk_pcwin64.c -header=ToyDLL.h'
ival={cell(1,0)}; % change 0 to the actual number of functions to preallocate the data.
fcns=struct('name',ival,'calltype',ival,'LHS',ival,'RHS',ival,'alias',ival,'thunkname', ival);
%  bool ToyFunc ( int a , int b , double c );
fcns.thunkname{fcnNum}='boolint32int32doubleThunk'; fcns.name{fcnNum}='ToyFunc'; fcns.calltype{fcnNum}='Thunk'; fcns.LHS{fcnNum}='bool'; fcns.RHS{fcnNum}={'int32', 'int32', 'double'};fcnNum=fcnNum+1;

Putting it all together

Now that all the raw material is at hand, we can gain some insight on what these components do and how.

The proto file is about calling the thunk

It contains the path to the thunk dll, and a ‘dictionary’ that tells Matlab which thunk function needs to be called en route to each dll-exported function. For our toy case, to get to ToyFunc Matlab calls into ‘boolint32int32doubleThunk’ – a name encoding the return and all argument types, though in principle any other name could be used.

The thunk DLL is about adjusting calling conventions

The thunk function boolint32int32doubleThunk receives its arguments in the Matlab calling convention: all arguments are passed consecutively on the stack, untyped, and aligned on sizeof(size_t) boundaries (64 bits, i.e. 8 bytes, on x64).  It also receives a function pointer to the actual DLL export, and after copying the arguments to local typed variables – calls this function with its native calling convention.  It never uses the ‘stacksize’ argument.
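To make that layout concrete, here’s a hedged sketch of driving the thunk by hand – this only illustrates the argument packing, and is emphatically not how Matlab actually invokes it (the DLL and export names are those of the generated sources above):

// Illustration only: pack arguments the way the generated thunk unpacks them.
#include <windows.h>
#include <cstdio>
#include <cstring>

typedef bool (*ThunkFn)(void (*fcn)(), const char* callstack, int stacksize);

int main()
{
    HMODULE toy   = LoadLibraryA("ToyDLL.dll");
    HMODULE thunk = LoadLibraryA("ToyDLL_thunk_pcwin64.dll");
    if (!toy || !thunk) return 1;

    void (*target)() = (void (*)())GetProcAddress(toy, "ToyFunc");
    ThunkFn thunkFn  = (ThunkFn)GetProcAddress(thunk, "boolint32int32doubleThunk");
    if (!target || !thunkFn) return 1;

    // Arguments laid out consecutively, each slot padded to a multiple of
    // sizeof(size_t) - 8 bytes on x64 - exactly as the thunk source reads them.
    char callstack[24] = {};
    int a = 2, b = 3; double c = 4.5;
    memcpy(callstack + 0,  &a, sizeof a);   // slot 0: int32, padded to 8 bytes
    memcpy(callstack + 8,  &b, sizeof b);   // slot 1: int32, padded to 8 bytes
    memcpy(callstack + 16, &c, sizeof c);   // slot 2: double, exactly 8 bytes

    bool ret = thunkFn(target, callstack, (int)sizeof callstack); // stacksize is ignored
    printf("\nToyFunc returned %d\n", ret);
    return 0;
}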

In real life cases the headers and generated thunks can get considerably more complicated – one notable omission so far is how structs and other compound types are handled (as far as I can tell this is the sole reason for inclusion of the library header in the thunk.c). The same technique laid out above can be used to investigate these cases, but we can already put all this newfound knowledge to good use.


Take our ToyDLL.h above and add to it the seemingly benign lines:

class ToyClass
{
public:
	ToyClass() {};
	~ToyClass() {};
};

Build and try to load the resulting DLL into Matlab, only to get:

Error using loadlibrary
Building ToyDLL_thunk_pcwin64 failed.  Compiler output is:
cl -I"C:\Program Files\MATLAB\R2016a\extern\include" /Zp8  /W3  /nologo
-I"[…]ToyDLL" "ToyDLL_thunk_pcwin64.c" -LD -Fe"ToyDLL_thunk_pcwin64.dll"
[…]\ToyDLL\ToyDLL.h(10): error C2061: syntax error: identifier 'ToyClass'
[…]ToyDLL\ToyDLL.h(10): error C2059: syntax error: ';'
[…]ToyDLL\ToyDLL.h (11): error C2449: found '{' at file scope (missing function header?)
[…]ToyDLL\ToyDLL.h (14): error C2059: syntax error: '}'

We didn’t violate any of loadlibrary’s documented limitations, but we already have enough visibility into the process to understand what’s going on.  The root issue is that, for whatever reason, the perl script generates a .c file, not a .cpp one.

The workaround I used, both in this toy scenario and in the real-life DLLs I wanted to load into Matlab, was: grab the perl-generated sources, rename them to .cpp, and build your own thunk dll.
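Concretely, that means re-running essentially the same command loadlibrary tried (quoted in the compiler output above), against the renamed file – something like:

cl -I"C:\Program Files\MATLAB\R2016a\extern\include" /Zp8 /W3 /nologo -I"[…]ToyDLL" "ToyDLL_thunk_pcwin64.cpp" -LD -Fe"ToyDLL_thunk_pcwin64.dll"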

In the immortal words of Todd Howard: it just works.  It solves plenty of other C/C++ idiosyncrasies too.

Now, I have exactly zero insight into MathWorks’ considerations and decisions, but I have a vague suspicion that the only reason for these half-hearted ‘C-only’ limitations is that they’re stuck with this essentially black-box perl script that parses headers. Most of the time it succeeds (on C++ headers too), sometimes it doesn’t. When it doesn’t, they sometimes document the root cause as unsupported.

The question remains: who in their right mind would try to parse a header with a perl script? Based on my own experience, I can suggest sort of an answer.


This discussion can lead to all sorts of other workarounds and solutions. Here’s, briefly, another one that was useful to us.

When the Matlab component consuming the native DLL is deployed, the loadlibrary call is made from wherever the CTF archive was extracted to – and it occasionally fails to find the thunk dll. Our solution was to intervene in the proto.m file, and have it take the thunk path from the registry.

Perhaps more on CTF archives one day.

Posted in Matlab, VC++ | 2 Comments

On API-MS-WIN-XXXXX.DLL, and Other Dependency Walker Glitches

Dependency Walker is the tool of choice for static dependency analysis of native binaries (it has some dynamic analysis too, but that niche at least has some alternative solutions). It is in a rather sorry state, however – development seems to have been abandoned around 2005, and it is unanimously described as aging. As a prominent example of Dependency Walker analysis failures, try to run it on itself:

It seems the dependencies it is able to resolve are a negligible minority of the overall dependencies – and the interwebs are full of similar reports. The DLLs falsely reported as missing are all strangely named and unfamiliar, and the explanations given in SO answers range from a vague ‘some internal OS stuff’ to hypotheses about delay loads and side-by-side assemblies.

To the best of my understanding, as of March 2016 Dependency Walker 2.2 resolves side-by-side manifests very well and has no trouble with delay loads. I’m aware of only two dependency scenarios where it falls short, but unfortunately they are ubiquitous.

1: Compatibility Shims

Maybe more on that in another post. But –

2: Api Sets

…are the main issue.

Scarcely mentioned on MSDN:

An API Set is a strong name for a list of Win32 APIs … you should think of an API Set’s name as just a unique character string, and not as a dll name … API Sets rely on operating system support in the library loader … the library loader performs a runtime redirection of the reference…

Don’t be alarmed if that still sounds opaque. Brief history, as I understand it:

Sometime in the Vista dev cycle an effort referred to as MinWin began: essentially, smart people started moving functionality around in hope of simplifying the OS architecture. To protect the myriad components from breaking during such changes, the ultimate solution was called in: an extra layer of indirection. This layer is exactly API sets.

For example, the API set “api-ms-win-core-fibers-l1-1-1.dll” is an ‘atom’ of functionality encompassing the 5 APIs FlsAlloc, FlsFree, FlsGetValue, FlsSetValue and IsThreadAFiber (an atypically small such ‘atom’). All applications that consume fiber functionality declare a dependency on this API set, and thereby become insensitive to the exact location of the implementation (which might change between OS releases). During load time, the OS searches somewhere and automagically routes the calls from api-ms-win-core-fibers-l1-1-1.dll to wherever they happen to be implemented in this OS version.
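For a quick hands-on glimpse of this routing, here’s a hedged sketch: to the best of my understanding the loader hands back a handle to the host DLL, so its file name betrays the redirection target.

// Probe where an API-set name really routes on the current OS (sketch).
#include <windows.h>
#include <cstdio>

int main()
{
    HMODULE h = LoadLibraryW(L"api-ms-win-core-fibers-l1-1-1.dll");
    if (!h) { printf("load failed: %lu\n", GetLastError()); return 1; }

    wchar_t host[MAX_PATH];
    GetModuleFileNameW(h, host, MAX_PATH);  // path of the *host* DLL
    wprintf(L"api-ms-win-core-fibers-l1-1-1.dll resolved to %s\n", host);

    FreeLibrary(h);
    return 0;
}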

One could argue that API sets now serve the original intended role of DLLs and that the architecturally clean solution is to have each API set implemented in its own DLL, but I’m sure this tradeoff has performance implications that I cannot even begin to quantify.

Some Internals

API sets are very partially documented, and the load-time mechanism that properly routes the calls – even less so. One could start by inspecting the shipped apiset.h (documented to be authored by the venerable Arun Kishan in Sep-2008), and learn that the key call is to the undocumented ApiSetResolveToHost. It is called from LoadLibrary, typically through a call stack such as –

ntdll.dll!_ApiSetResolveToHost@20()  + 0xf bytes
ntdll.dll!_LdrpApplyFileNameRedirection@28()  + 0x35 bytes
ntdll.dll!_LdrpLoadDll@24()  + 0xae bytes
ntdll.dll!_LdrLoadDll@16()  + 0x74 bytes
KernelBase.dll!_LoadLibraryExW@12()  + 0x120 bytes

The actual per-OS-version redirection data lies in a special file called ApiSetSchema.dll. It’s technically a DLL (it conforms to the PE spec), but not an executable one – the redirection data lies in a specialized section called .apiset, mentioned in the apiset.h macros. Sebastien Renaud did some spectacular reversing work and described the layout of the redirection data it contains.

Full(er) Redirection Table

In principle one could – and hopefully someday will – use Renaud’s work to create a community-maintained version of Dependency Walker, but until that day we can get by with the aforementioned built-in loader logging: whenever ShowSnaps is raised, the loader spits out many hundreds of messages like –

3e30:02b8 @ 370478046 – LdrpPreprocessDllName – INFO: DLL api-ms-win-core-rtlsupport-l1-2-0.dll was redirected to C:\WINDOWS\SYSTEM32\ntdll.dll by API set

Running a few applications and filtering the results, I arrived at the table dumped below. I’ll update it as time permits – but if you have some dependency you don’t understand, you can follow the same steps (well, for apps you can run, anyway): raise ShowSnaps for your app and inspect the output to see where the API set I missed really routes to. If you do, please comment here so I can correct the table.
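(Raising ShowSnaps for an image is a GFlags one-liner – something like gflags /i YourApp.exe +sls, where +sls is ‘Show Loader Snaps’.)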

API Set                                 Routes to…
api-ms-win-appmodel-state-l1-2-0.dll    kernel.appcore.dll
api-ms-win-core-apiquery-l1-1-0.dll     ntdll.dll

Edit (May 2016):

I won’t be listing the API sets here, as it turns out Geoff Chappell has already taken it upon himself to maintain a list of API set redirections, along with versions, a very nice survey of the underlying apparatus, and even a link to an MS patent describing it (if you’re able to decipher such descriptions).

Posted in Win32 | 8 Comments

Data Read Breakpoints – redux

The Problem

~5Y ago I blogged about data breakpoints. A hefty bit of the discussion was devoted to the persistence of hardware breakpoints across a thread switch: all four implementations mentioned assume that HW breakpoints persist across thread boundaries, and some rough testing showed that this was indeed the case back then. Alas, somewhere between Windows 7 and Windows 10 this assumption broke. The naïve implementation via SetThreadContext now indeed sets the debug registers only in the context of a specific thread. I suspect a deep change in the OS scheduler broke it – possibly hardware tasks are used today where they previously weren’t, but I have no proof.
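For reference, the ‘naïve implementation via SetThreadContext’ looks roughly like this (a sketch; the handle needs THREAD_GET_CONTEXT | THREAD_SET_CONTEXT | THREAD_SUSPEND_RESUME access, and on Windows 10 the effect is now per-thread only):

// Arm DR0/DR7 on a single thread: a 4-byte read/write breakpoint.
#include <windows.h>

bool SetDataBreakpoint(HANDLE hThread, const void* address)
{
    if (SuspendThread(hThread) == (DWORD)-1)
        return false;

    CONTEXT ctx = {};
    ctx.ContextFlags = CONTEXT_DEBUG_REGISTERS;
    bool ok = GetThreadContext(hThread, &ctx) != FALSE;
    if (ok)
    {
        ctx.Dr0 = (DWORD_PTR)address;  // linear address to watch
        ctx.Dr7 = 0x1                  // L0: enable DR0 for this thread
                | (0x3 << 16)          // R/W0 = 11b: break on read or write
                | (0x3 << 18);         // LEN0 = 11b: watch 4 bytes
        ok = SetThreadContext(hThread, &ctx) != FALSE;
    }
    ResumeThread(hThread);
    return ok;
}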

Failed Attempts

I’m aware of a single attempt to address this shortcoming and implement a truly cross-thread data breakpoint: a year ago my friend Meir Meshi published code that not only enumerates all existing threads and sets debug registers in their context, but also hooks thread creation (actually RtlCreateThread) via a coded assembly trampoline to make sure any thread created henceforth would respect the existing breakpoints. The code seemed to work marvelously for a while, and broke again in Windows 10 – where MS understandably recognizes patching of thread creation as an exploit, and bans it.

I set out to find a working alternative to hooking thread creation. Two immediate directions popped to mind: DLL_THREAD_ATTACH, and the lesser-known TLS callbacks. These are two documented hooks available to user mode upon thread creation, and they seemed like a natural place to access a list of pre-set breakpoints and apply them to the contexts of new threads. Both attempts fell short again: it seems these hooks are called before the target thread is created (from the stack of a different process thread), and setting the debug registers in this context does not persist to the target thread.
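For reference, registering a TLS callback under MSVC looks roughly like this – a sketch, x64 flavor (on x86 the linker symbols gain an extra leading underscore):

#include <windows.h>

static void NTAPI OnTlsCallback(PVOID /*hModule*/, DWORD reason, PVOID /*reserved*/)
{
    if (reason == DLL_THREAD_ATTACH)
    {
        // A new thread is starting - seemingly the natural spot to re-apply
        // debug registers. Per the text above, it doesn't stick.
    }
}

// Force the image's TLS directory in, and plant the callback pointer
// in the .CRT$XL* sequence the CRT reserves for this purpose.
#pragma comment(linker, "/INCLUDE:_tls_used")
#pragma comment(linker, "/INCLUDE:p_tls_callback")
#pragma const_seg(".CRT$XLB")
extern "C" const PIMAGE_TLS_CALLBACK p_tls_callback = OnTlsCallback;
#pragma const_seg()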

Bottom line: it seems that as of 2016 you really have to be a debugger and handle CREATE_THREAD_DEBUG_EVENT to manage hardware breakpoints. I was recently told by John Robbins that the VS team is aware of this need, but it just isn’t currently a priority (this might change if this UserVoice suggestion gets a few more votes, though). Luckily, VS isn’t the only debugger in the MS universe – and in fact it integrates with a much stronger one.

A Real Solution

WinDBG (and siblings) has had perfect hardware breakpoints (via ‘ba’) since forever. It is a lesser-known fact that VS integrates rather nicely with WinDBG, and for completeness I’ll rehash the integration steps here.

(1) Install the WDK, and mark the VS-integration checkbox.

(2) Run the debuggee without a debugger (Ctrl+F5) and attach to it via the newly added ‘Windows User Mode Debugger’ transport:

The debugging engine is now WinDBG. The debugging experience is noticeably different: the expression evaluator is different – e.g., the watch window’s display and what it agrees to evaluate are changed, the threads pane no longer has thread IDs (why??), etc. – but all in all, the large majority of VS commands and keyboard shortcuts map nicely to the new engine.

You should see a new ‘Debugger Immediate Window’ pane, which accepts command-line input identical to WinDBG’s, with the nice bonus of auto-completion and auto-help:

While at a breakpoint, type a ba command in this window. For example, to break upon read (r) of any of the 4 bytes (r4) following the address 0x000000c1`3219f7f4 (the windbg engine likes a ` separator in the middle of x64 addresses), type:

ba r4 0x000000c1`3219f7f4

And enjoy your shiny new read breakpoints, which work across existing threads and are inherited by newly created ones.

Posted in Debugging, Visual Studio | Leave a comment

Visual Studio Projects that Just Keep Rebuilding, or: How Quantum Mechanics Mess Up Your Build

A lot has already been said on it (no, seriously, a lot), and yet some root causes are not yet covered. All that follows holds for C++ projects, C#, VB, and everything MSBuild in general.

The symptom

Normally if you build your solution, change nothing and immediately build again – nothing happens, as you’d expect. But occasionally some projects are rebuilt despite not being modified. If by bad luck these projects are high up the dependency tree – many other projects are rebuilt, and irks galore abound.

First step: diagnose

Set MSBuild’s build verbosity to ‘diagnostic’, under Tools / Options / Projects and solutions / Build and Run:

Then build. The first lines of the output should hold the reason that VS thinks the project is out of date. Typical stated reasons are –

—— Up-To-Date check: Project: Whateva, Configuration: Release Win32 ——
Project not up to date because build input ‘F:\sources\Shiny.h’ is missing.

Project ‘Whateva’ is not up to date. Project item ‘Shiny.html’ has ‘Copy to Output Directory’ attribute set to ‘Copy always’.

Project ‘Whateva’ is not up to date. CopyLocal reference source ‘F:\folder1\blah.dll’ is more recent than ‘F:\folder2\blah.dll’.

Project ‘Whateva’ is not up to date. Input file ‘F:\sources\yadda.csproj’ is modified after output file ‘F:\bin\yadda.pdb’.

Project ‘Whateva’ is not up to date. Last build was with unsaved files.

In my personal experience the most common reason is the first one: a file that is included in a project but is missing on disk. Treating this state as out-of-date is a weird design decision (I’d say VS should refuse to build), but that’s just the way it is. Anyway, more can – and should – be said about file dates.

File Dates: More than meets the eye

Oddly, sometimes MSBuild’s claim that file1 is newer than file2 persists after a build re-copies file1 over file2. Sometimes manually copying file1 over file2 helps, and sometimes not.

The question of whether file1 is newer than file2 is not as straightforward as it might seem. NTFS has two associated times: created time and modified time (never mind ‘last access time’ for now). The full rules of how these dates react to a copy/move are somewhat involved, but the key piece in this context is:

If you copy a file from D:\NTFS to D:\NTFS\SUB, it keeps the same modified date and time but changes the created date and time to the current date and time. [In particular, making the created date later than the modified date]

So the date that is preserved across copies is the modified date – but MSBuild compares created dates to determine whether a file should be copied. I sincerely wish it wouldn’t – but wait: the created date of the copy can only grow newer than the source’s, and this shouldn’t call for a re-copy anyway, should it?

Enter NTFS Tunneling.

This Windows feature is beyond esoteric, so don’t feel bad if it doesn’t ring a bell. In a nutshell:

When a name is removed from a directory (rename or delete), its short/long name pair and creation time are saved in a cache, keyed by the name that was removed. When a name is added to a directory (rename or create), the cache is searched to see if there is information to restore. The cache is effective per instance of a directory. If a directory is deleted, the cache for it is removed.

Simply put, when you copy a file to a location where it previously existed, its original created date is resurrected – regardless of the created date of the actual source file.
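You can watch tunneling in action with a small experiment – a hedged sketch (hypothetical file name, no error handling):

// Create a file, record its creation time, delete and immediately recreate it,
// and compare: with tunneling active the original creation time is restored.
#include <windows.h>
#include <cstdio>

static FILETIME CreationTime(const wchar_t* path)
{
    WIN32_FILE_ATTRIBUTE_DATA fad = {};
    GetFileAttributesExW(path, GetFileExInfoStandard, &fad);
    return fad.ftCreationTime;
}

int main()
{
    const wchar_t* path = L"tunnel_test.txt";
    HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, nullptr, CREATE_ALWAYS,
                           FILE_ATTRIBUTE_NORMAL, nullptr);
    CloseHandle(h);
    FILETIME before = CreationTime(path);

    DeleteFileW(path);
    Sleep(1000);  // well within the advertised 15-second cache lifetime
    h = CreateFileW(path, GENERIC_WRITE, 0, nullptr, CREATE_ALWAYS,
                    FILE_ATTRIBUTE_NORMAL, nullptr);
    CloseHandle(h);
    FILETIME after = CreationTime(path);

    printf("creation time %s\n",
           CompareFileTime(&before, &after) == 0 ? "restored (tunneled)" : "differs");
    return 0;
}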


The name ‘tunneling’ derives from a quantum-mechanics phenomenon (hence the clickbait post title) where particles emerge in seemingly impossible locations. The original motivation for this design (probably dating back to MS-DOS) has been irrelevant for decades now, and it is most probably left around as a compatibility constraint. Even better, on my own computer it seems the lifetime of the cache (advertised to be 15 seconds) is really infinite, and modifying it via the MaximumTunnelEntryAgeInSeconds registry key doesn’t work.

<Double sigh/>.


You can shut down tunneling altogether by setting HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\MaximumTunnelEntries to 0, as shown in the KB article.
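(From an elevated prompt, that would be something like: reg add HKLM\SYSTEM\CurrentControlSet\Control\FileSystem /v MaximumTunnelEntries /t REG_DWORD /d 0 /f – a restart may be needed for it to take effect.)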

If for whatever reason you prefer not to mess with arcane NTFS registry keys, you can do the following: rename your bin folder (the one holding the created-date cache) to a temporary name, create a new bin folder, and copy the previous bin contents into it. This seemingly no-op maneuver clears the tunneling cache, and in my experience gets rid of the last bogus out-of-date checks.

These redundant re-builds are pretty much gone for me now. Please do tell me in the comments if it worked for you – and more importantly, if it didn’t.

Posted in Uncategorized | 2 Comments

How to Make a Phone Book Build and Run

Here’s a short cheat sheet summarizing an internal talk I gave a while ago about VC++ build diagnostics. These switches and tools are all documented, most are surveyed in previous posts here, and all are very useful when struck by baffling build/load errors.

Solution / project configuration: Tools / Options / Projects and Solutions / Build and Run / MSBuild output verbosity – Diagnostic
Includes: Project Properties / Configuration / C/C++ / Advanced / Show Includes – Yes
Preprocessor: Project Properties / Configuration / C/C++ / Preprocess to a File – Yes
Compiler output (obj or lib): VS command prompt – “Dumpbin /ALL YourBin.lib > YourOutput.txt”, optionally undname on relevant names in the result
Linkage: Project Properties / Configuration / Linker / Show Progress – VERBOSE
Complete binary: Dependency Walker
Loader: GFlags / ShowSnaps
When in doubt: Process Monitor

To use this effectively you must first define the exact stage where the failure occurs (which isn’t always trivial), then use the listed switch/tool.

Specifically, Process Monitor is useful in multiple contexts – but mostly when you suspect a file is being consumed from an erroneous location. I recommend not adding a process-name filter, as multiple processes are typically involved – devenv.exe, MSBuild.exe, cl.exe, link.exe, mspdbsrv, etc. For header files there’s /showIncludes and for dll’s there’s link /verbose or GFlags/ShowSnaps, but many other files are consumed and have potential for errors (property sheets, resources, etc.) – and Process Monitor covers them all.


Posted in Visual Studio | Leave a comment

An equivalent project (a project with the same global properties and tools version) is already present

Quick note: when you get this error while trying to add a VC project to your solution, there’s a good chance your project is missing a .filters file.

Googling taught me only that the same error message had a different cause a while ago, so this qualifies as deserving more web presence.

Posted in VC++, Visual Studio | Leave a comment