C++ Const Constructability

[Inspired by CppQuiz #264]

Take this code snippet:

struct C { int i; };
const C c;

It fails to compile in gcc, with:

error: ‘const struct C’ has no user-provided default constructor and the implicitly-defined constructor does not initialize ‘int C::i’

Clang and icc give similar error messages. MSVC does agree to compile it, but somewhat reluctantly:

warning C4269: ‘c’: ‘const’ automatic data initialized with compiler generated default constructor produces unreliable results

If you remove the const qualifier, everything builds fine. What’s the deal?


An uninitialized object that is also constant could never be populated with meaningful values later – and so is a strong indication of a coding error. The C++ standard made an exception to its usual philosophy and tries to stop this particular bullet from hitting your foot:

If a program calls for the default-initialization of an object of a const-qualified type T, T shall be a const-default-constructible class type or array thereof.

A class type T is const-default-constructible if default-initialization of T would invoke a user-provided constructor of T (not inherited from a base class) or if

  • each direct non-variant non-static data member M of T has a default member initializer or, if M is of class type X (or array thereof), X is const-default-constructible,

  • if T is a union …,

  • if T is not a union …,


1. First and most obvious, the compiler does not try to check whether the user-provided ctor actually does everything it should, or anything at all. This builds fine:

struct C {
  C() {}
  int i;
};
const C c;

Perhaps the ctor contents could have been checked (most compilers already know enough to generate warnings for uninitialized members), but the current standard doesn’t require it. To appease the compiler, it is enough that the user supplies any ctor.

2. Currently there are ways – or rather spec loopholes? – to still use the same compiler-generated constructor for const initialization.

struct C {int i;};
const C c1 = C();
const C c2 {};

– both are considered value initialization, distinct from the default initialization referred to in this part of the standard.

3. Somewhat surprisingly, while this fails:

struct C {
  C() = default;
  int i;
};
const C c;

taking the ‘ = default’ out of the class declaration makes the program valid!

struct C {
  C();
  int i;
};
C::C() = default;
const C c;

While in this toy example the difference seems negligible, typically the ctor implementation does not appear in all translation units that use C’s declaration. Thus, the ctor implementation – and in particular whether it’s defaulted or not – is invisible to the compiler, and the standard does not require the compiler to make decisions based on an implementation it can’t see.

Posted in C++ | Leave a comment

Checking if your Graphics software runs on GPU

What are we asking exactly?

This is actually not that easy to phrase. If you see anything on a screen, your software does make use of some graphics processing unit – but more likely you’re interested in particular capabilities of the GPU you’re running on. The device capabilities vary wildly, and the full details comprise hundreds and hundreds of fields in structs retrievable, e.g., via the DirectX API. One can –

  1. Choose a few (or a single) capabilities that are of interest and query only them,
  2. Make do with a vendor check – i.e., ‘I’m running on an nVidia card’,
  3. Use some abstraction of GPU ‘level’ that is hopefully available from one of the API sets.

In my own scenario the HW platform is controlled and the GPU choice is limited to (a) an integrated Intel graphics processor, (b) a known nVidia card. So my code choices were (2) and (3) – see more below.

If you have a GPU, why wouldn’t it be used?

If you have a GPU on your computer and your software wants to use it, why wouldn’t it be able to? The two reasons I came across are –

  1. Erroneous connectivity
    1. When you plug your monitor into a (desktop) motherboard socket rather than a graphics card socket, some systems do not use the graphics card,
    2. When your laptop is mounted on a docking station (a USB docking station in particular), sometimes the laptop’s motherboard makes the wrong decision.
  2. nVidia Optimus (1-paragraph survey, technical details) is an attempt by nVidia to save laptop battery life by turning off the GPU when (they think) you’re not using it. On an Optimus-enabled laptop, if you right click your desktop and choose ‘NVIDIA control panel/Manage 3D settings’, you’d be able to indirectly see and select the graphics output to use:

    Now, according to the whitepaper, a switch to the discrete NVIDIA GPU is triggered by DX, DXVA and CUDA calls – but not by OpenGL calls. One does come across online complaints, however, that CUDA calls do not trigger the switching mechanism.

Each of these is solvable, but all I wanted was a way for the software to warn about such situations and prompt the user for action.

Native solutions

The task of querying the underlying device is a basic one, and every reasonable platform that makes use of the GPU offers such an API.

  1. If you’re using DirectX – create a DX device, then try to use it to call IDXGIAdapter::GetDesc. Check whether device creation fails or DXGI_ADAPTER_DESC.VendorId is not nVidia’s. Check this example on MSDN.
  2. If you’re using CUDA – it has API to enumerate and query available devices – essentially, cudaGetDeviceProperties. One usage example is here.
  3. If you’re using OpenGL – create a dummy window and call glGetString(GL_VENDOR) to detect the OpenGL implementation used.

I’m sure there are also OpenCL, Vulkan, DirectCompute (etc. etc.) APIs for that, but that should be enough. Our SW operates on OpenGL, so I went with the 3rd option. Here’s a shameless copy/paste of the code snippet I used, by Daniel Cornel:

bool VerifyRunningOnNvidia()
{
    WNDCLASS wc = { 0 };
    wc.lpfnWndProc = DefWindowProc;
    wc.lpszClassName = L"DummyWindow";
    RegisterClass(&wc);
    HWND hWnd = CreateWindow(L"DummyWindow", NULL, 0, 0, 0, 256, 256,
                             NULL, NULL, GetModuleHandle(NULL), NULL);

    HDC hDC = GetDC(hWnd);
    PIXELFORMATDESCRIPTOR pfd = { 0 };
    pfd.nSize = sizeof(pfd);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 24;
    pfd.cDepthBits = 16;
    pfd.iLayerType = PFD_MAIN_PLANE;
    int pixelFormat = ChoosePixelFormat(hDC, &pfd);
    SetPixelFormat(hDC, pixelFormat, &pfd);
    HGLRC hRC = wglCreateContext(hDC);
    wglMakeCurrent(hDC, hRC);

    // Check the device information. Vendor should be Intel for the integrated GPU
    // and NVIDIA for the discrete GPU
    const char* GpuVendor = (const char*)glGetString(GL_VENDOR);
    bool IsNvidia = 0 == strcmp(GpuVendor, "NVIDIA Corporation");

    // Destroy the OpenGL context
    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(hRC);
    ReleaseDC(hWnd, hDC);
    DestroyWindow(hWnd);

    return IsNvidia;
}

Managed solution

Enter the System.Windows.Media.RenderCapability class, and specifically its Tier member. It provides a 3-value abstraction of the GPU ‘level’, which I expect should be enough for all but AAA game developers. Some of the descriptions given seem specific to the context of WPF (“…graphics features of WPF will use hardware acceleration…”), but in truth it suffices as a basic GPU query for all needs. The mapping to DirectX versions seems rather heuristic, and if you care about it there’s a good chance WPF is not the tool for you in the first place.

As an added bonus WPF provides an event that digs deep enough into the OS to give you a hook to respond to Render Tier change – which (afaik) is more than any other graphics framework provides. Such a change might occur if the user re-plugs the monitor to a different socket at runtime, or docks/undocks his laptop.

Here’s the relevant code snippet:

using System.Windows.Media;

void SomeEarlyInit()
{
    CheckRenderTier(null, null);
    RenderCapability.TierChanged += CheckRenderTier;
}

private void CheckRenderTier(object sender, EventArgs e)
{
    int renderingTier = RenderCapability.Tier >> 16;
    if (renderingTier < 2)
        MessageBox.Show("Graphics card inaccessible. Application requires an active GPU to function properly", "Warning");
}
Posted in DirectX, Win32 | 1 Comment


Some years ago I encountered a crash that I reduced down to the following toy code, composed of a dll:

// DLLwithOMP.cpp : build into a dll *with* /openmp
#include <tchar.h>

extern "C" __declspec(dllexport) void funcOMP()
{
#pragma omp parallel for
    for (int i = 0; i < 100; i++)
        _tprintf(_T("Please fondle my buttocks\n"));
}

and a console app:

// ConsoleApplication1.cpp : build into an executable *without* /openmp

#include <windows.h>
#include <stdio.h>
#include <tchar.h>

typedef void(*tDllFunc) ();

int main()
{
    HMODULE hDLL = LoadLibrary(_T("DLLwithOMP.dll"));
    tDllFunc pDllFunc = (tDllFunc)GetProcAddress(hDLL, "funcOMP");
    pDllFunc();
    FreeLibrary(hDLL);  // !  BOOM  !
    return 0;
}
As emphasized and commented, FreeLibrary causes a crash – typically (but not always) an access violation, with weird stacks in weird threads:

To understand what happens, let’s go over the full flow of events.

  1. The app loads the dll.
  2. The dll makes use of openmp, and thus the openmp runtime (part of the VC redist package) is loaded. It is a single dll, named vcomp[%VS_VER%][d].dll ([d] when you’re running a debug build).
  3. The OMP runtime opens its own thread pool, and does some work.
  4. The work ends and the dll function returns.
  5. The app frees the dll
  6. vcompXXX.dll refcount is decremented to zero (since the app doesn’t use it). vcompXXX.dll is thus unloaded as well.
  7. The threads in the OMP thread pool keep spinning, but the code they’re running has just been unloaded! The rug has been pulled from under their feet and they crash spectacularly – while their stack frames seem to point somewhere in outer space.

This much I understood myself. What remained unclear was the correct solution. Was this an OMP implementation bug? Was there some OMP cleanup API that I missed (not for lack of searching)? Are we stuck with a (weird) requirement that components which call into OMP-linked components must link against OMP themselves?

I went first to StackOverflow and then to Connect (hey, it was 2015). As often happens with Connect reports, it was arbitrarily deleted some time later. Part of Eric Brumer’s response I did document at the SO post:

for optimal performance, the openmp threadpool spin waits for about a second prior to shutting down in case more work becomes available. If you unload a DLL that’s in the process of spin-waiting, it will crash in the manner you see (most of the time).

You can tell openmp not to spin-wait and the threads will immediately block after the loop finishes. Just set OMP_WAIT_POLICY=passive in your environment, or call SetEnvironmentVariable(L"OMP_WAIT_POLICY", L"passive"); in your function before loading the dll. The default is "active" which tells the threadpool to spin wait. Use the environment variable, or just wait a few seconds before calling FreeLibrary.

MSDN explicitly mentions (for many versions now) that VC supports only OpenMP 2.0. OMP_WAIT_POLICY is part of the newer OpenMP 3.0 specification, and is the only newer environment variable that MS implemented. There’s a good chance they did it as part of this 2012 hotfix – and in the 5 years since, it remains undocumented.

Eric Brumer did mention in his Connect answer that he would nudge the documentation team to add it – but that either didn’t happen or didn’t help. Oh well, these tidbits are what keeps me blogging occasionally.

Posted in C++, VC++ | 2 Comments

Matlab’s mxArray Internals

Everything in Matlab is a Matrix. The scalar 4 is a 1×1 matrix with the single value 4. The string ‘asdf’ is a 4×1 (not a typo – it is in fact a column vector) matrix, with the 4 char values ‘a’, ‘s’, ‘d’, ‘f’, etc. When writing MEX functions in C/C++ (MEX = Matlab Extension), or when feeding data to Matlab-compiled components it is revealed that the underlying unified type for (almost) all Matlab data is the C type mxArray.

It seems that in the distant past (10Y+) Mathworks did deploy headers with the real type definition, but today mxArray is a completely opaque type – it is passed around only via pointers, and its only public declaration is a forward one, in matrix.h:

/* Forward declaration for mxArray */
typedef struct mxArray_tag mxArray;

The only serious attempt I’m aware of to reverse mxArray’s layout is this 2000 user-group posting by Peter Boettcher. Twelve years later Peter Li published this work – which is very partial, relies on ‘circumstantial’ evidence rather than investigation of disassembly, and contains multiple errors. The memory layout of mxArray has changed considerably since both works, and it is high time for a new investigation.

Below I hope to do more than spill out the results – rather, lay out the complete way to improve this work and reproduce it in the future (when mxArray’s layout changes and re-reversing is needed).

The Setup

Matlab installation includes the useful file [matlabroot]/extern/examples/mex/explore.c, which demonstrates usage of most of the mxArray-poking API. To use it –

  1. Build it from the matlab prompt:
    >> mex -g ‘C:\Program Files\MATLAB\R2017a\extern\examples\mex\explore.c’
    (assuming matlabroot is ‘C:\Program Files\MATLAB\R2017a’).
    This would create explore.mexw64 in your current folder – make sure you have write permissions to it. The -g switch generates debug symbols, to enable you to step through the generated code and watch variable contents in VS.
  2. Open explore.c in visual studio, set a breakpoint somewhere early at mexFunction():
  3. Attach to the running instance of matlab.
  4. From the matlab command prompt, create a variable of the type you wish to investigate and explore() it. E.g.:
    >> A=rand(3); explore(A)
  5. Step in VS and investigate the underlying mxArrays as detailed below. Repeat for various types.
    Expect to get obscure exceptions once in a while – they seem benign, and are handled internally by the JVM.

Start Simple

The first mx function called is mxGetNumberOfDimensions. It is the simplest getter possible, with two instructions:

This means that the mxArray member holding the number of dimensions is located at offset 24 (18 hex) from the mxArray start.

Similarly, mxGetPr disassembly shows that the real part of the data is at offset 56 (38 hex):

And the imaginary part of the data is at offset 64 (40 hex):

mxIsSparse is an itsy bit more complicated:

Meaning, at offset 36 (24 hex) is a bit mask of size at least 8 bits (1 byte). The 5th bit from the right is a ‘sparse’ flag.

mxGetClassID has a little bit more complexity, but the basic location of the info is immediate:

The ClassID data is at offset 8, and it is of type mxClassID. The value 10h is the maximum allowed, and values above it have 17 subtracted from them before returning. Note that values above 16 are mentioned in matrix.h, but it seems these are not values you’d meet every day:

typedef enum {
    mxUNKNOWN_CLASS = 0,
    /* ... */
    mxFUNCTION_CLASS, /* = 16 */
    mxOBJECT_CLASS, /* keep the last real item in the list */
#if defined(_LP64) || defined(_WIN64)
    /* ... */
#endif
} mxClassID;

And so on and so forth. Much of mxArray’s contents are as apparent as in these examples, but not all. Hopefully these examples are enough to get a feel for the type of investigation required.

Conclusions (some, anyway)

  1. mxArray’s contents are very dense, and many of the members are unions – i.e., have different interpretations in different contexts. For one, dimensions: when number_of_dims (offset 24) is 2, they’re stored as two size_t members – which I called rowdim and coldim. When number_of_dims is greater than 2, the first member (rowdim) is actually a pointer to a heap array of dims, of size number_of_dims.
    The main data pointers (pData, pimag_data) are much larger unions – they can point to an array of anything, from doubles to complete mxArrays, depending on the mxClassID.
  2. Peter Boettcher’s 2000 work describes how Matlab uses ‘crosslinks’ between copies of the same variable to implement copy-on-write semantics. Some time later in the millennium Mathworks decided to trade space for speed and made such links bi-directional, to save traversing the list of copies when one is changed: the layout as listed now includes both forward and backward crosslinks.
  3. Structs are rather complex. For a single struct –
    1. pData points to an array of mxArrays, containing the field values.
    2. The field names are accessed via 3 indirections:
      1. pimag_data points to a type I called struct_field_info.
      2. a member of struct_field_info points to another type I called struct_field_info_tag.
      3. a member of struct_field_info_tag (‘field_names’) is an array of pointers to null terminated ascii strings, holding the field names in order.
  4. Sparse matrices: the raw data is referenced by pData/pimag_data, as for a usual matrix. The data’s location is governed by the arrays irptr/jcptr, used to implement Compressed-Sparse-Column storage and accessible directly via mxGetIr and mxGetJc. The size of these arrays (nnz) is stored in the member I called nelements_allocated.


I imagine that even in the parts I did get right, the real mxArray headers greatly differ from my own. Since I’m interested only in watching mxArrays and not modifying them directly (as you should be too), I did not express, say, pData as a large union but rather as a single void*. The place where the content type manifests itself is inside the natvis file, in sections such as –

      <!--Dense Matrices-->
      <ArrayItems Condition="classID!=2 &amp;&amp; classID!=4 &amp;&amp; dataflags.sparse==false">
        <Size Condition="number_of_dims==2">(&amp;rowdim)[$i]</Size>
        <Size Condition="number_of_dims!=2">((size_t*)rowdim)[$i]</Size>
        <ValuePointer Condition="classID==1">(mxArray_tag*)pData</ValuePointer>
        <ValuePointer Condition="classID==6">(double*)pData</ValuePointer>
        <ValuePointer Condition="classID==7">(float*)pData</ValuePointer>
        <ValuePointer Condition="classID==8">(char*)pData</ValuePointer>
        <ValuePointer Condition="classID==9">(unsigned char*)pData</ValuePointer>
        <ValuePointer Condition="classID==10">(short*)pData</ValuePointer>
        <ValuePointer Condition="classID==11">(unsigned short*)pData</ValuePointer>
        <ValuePointer Condition="classID==12">(int*)pData</ValuePointer>
        <ValuePointer Condition="classID==13">(unsigned int*)pData</ValuePointer>
        <ValuePointer Condition="classID==14">(__int64*)pData</ValuePointer>
        <ValuePointer Condition="classID==15">(unsigned __int64*)pData</ValuePointer>
      </ArrayItems>

Here’s a simple example of the resulting watches:



While this work is partial – I stopped where it was useful enough for us – it might be of value as is. It is now on github; you’re welcome to use/improve/report issues. The license is as free as I know how to make it. If you want to learn a bit more about the natvis syntax, here’s where to do it.

The easiest way to use it is to add mxArrayWatch.h and mxArrayWatch.natvis to your C++ project which uses mxArrays.

MathWorks Plea

My guess is that you guys decided to hide mxArray’s layout after users wrote code that relied on undocumented specifics, the code broke when you upgraded the layout, and unnecessary burden on your support ensued. I can completely relate to that. However, I’m not sure you have a realistic picture of the price your customers pay.

Let’s take a far more ubiquitous runtime as an example – the MS CRT. Microsoft has been exposing nearly all of its source – type internals and logic – for at least 15 years now. STL and CRT types do change layout in major versions, code which illegally relied on internal layout comes crumbling down, and I’m sure their support suffers as a result – but I dare guess their support would be burdened considerably more had they not opened the CRT source. In real life, more often than not, you just have to peek in.

Encapsulation is a solid design principle – but when taken to extremes it fails. In general, you really need zero knowledge of internals only when everything 100% succeeds on your first attempt, which is never the case in real life projects. Just the other day we had a nasty crash with memory corruption on mxDestroyArray – and it turned out the issue in the code was that we updated the field ‘costGap’ instead of ‘CostGap’. There is no way in the world we would have been able to debug this without reverse engineering the mxArray layout.

If you take interfacing with C/C++ seriously, I urge you guys to reconsider. Please don’t force the community to reverse your stuff to be able to work with it.

Posted in Debugging, Matlab, VC++ | 2 Comments

Tracking the Current Directory from the debugger: RtlpCurDirRef

Some rogue code was changing our current directory from under our feet, and we needed to catch it in action. I was looking for a memory location to set a data breakpoint on.

(Note: for the remainder of this post it is assumed that ntdll.dll symbols are loaded.)

The High road

The natural repository for process-wide data is the Process Environment Block, and indeed you can get to the current directory from there. The PEB and its internal structures are almost entirely undocumented, but the venerable Nir Sofer (and others) got around that using MS public debug symbols: the path is PEB->ProcessParameters->CurrentDirectory. After translating the private field locations to offsets, you get a rather horrifying – but functional – expression that you can paste in a watch window:


To break execution when your current folder changes, you can set a data breakpoint on:


An Easier Alternative

Interestingly, when inspecting the GetCurrentDirectory disassembly it turns out it doesn’t go the TEB/PEB way, but takes a detour:

Ntdll.dll!RtlpCurDirRef is undocumented, but it is included in the public ntdll debug symbols and so can be used in the debugger. The Cygwin guys mention it as the backbone of their unix-like cwd command, and their _FAST_CWD_8 type seems to still accurately reflect the Windows type (as of July 2017). If you’re willing to modify your source to enable a better variable watch, go ahead and add –

typedef struct _FAST_CWD_8 {
    LONG           ReferenceCount;
    HANDLE         DirectoryHandle;
    ULONG          OldDismountCount;
    LONG           FSCharacteristics;
    WCHAR          Buffer[MAX_PATH];
} FAST_CWD_8;

And inspect in the debugger:

If you can’t or don’t want to modify the source, you can set the watch with a direct offset –


SetCurrentDirectory() replaces the contents of RtlpCurDirRef, so you can set a data breakpoint directly on it.

Posted in Debugging, VC++, Win32 | Leave a comment

Checking Memory Corruption from the debugger in 2016

It used to be something like


But this trick requires quite a bit of adaptation to use on modern VS versions.

First, the relevant module is now ucrtbased.dll – thanks to the universal CRT the expression is no longer version-dependent.

Second, the ugly context operator syntax – while still accepted – has an alternative in the module!function windbg-like form.

Even after these two fixes, more is needed. Type ucrtbased.dll!_CrtCheckMemory() in a watch window, and the value shown is –

No type information is available for the function being called. If you are calling a function from another module, please qualify the function name with the name of the module containing it.

The type information is definitely there – but don’t sweat it, just add it yourself:

((int (*)(void))ucrtbased.dll!_CrtCheckMemory)()

Posted in Debugging, VC++ | 2 Comments

CMake Rants

CMake is a highly popular ‘meta-build’ system: it is a custom declarative syntax that is used to generate build scripts for all major OSs and compilers (e.g., VS solutions and projects). Designing such a system is a formidable task, but really not a very wise one to undertake in the first place.

I know there are only languages that people complain about and languages nobody uses. I know it’s bad manners to complain about stuff you get for free. I also know I’m working on windows and CMake scripts are generally authored by people who don’t care much about windows development.

And still I can’t help it. Here are a few particular irks.


When CMake asks ‘where to build the binaries’ it isn’t talking about anything resembling a binary. It’s asking about the proper location for the outputs of its processing – i.e., solutions and projects (or other build scripts).


This goes beyond a simple ‘designed by an engineer’ cliché. How long did it take you to figure out you need to repeatedly click the ‘configure’ button until all red lines are gone, then ‘Generate’?


Not so good.


…is generally just a recommendation.

But what really makes CMake nearly unusable to me is –

CMake’s treatment of paths

In all generated build scripts, paths are absolute. In vcxproj’s – Output path, Additional Include Directories, PDB path, Linker input directories, custom build steps in ZERO_CHECK and ALL_BUILD, etc. etc. – are all absolute paths.

This makes CMake-generated projects almost useless: you cannot source control them or share them in any other way.

Turns out this has been known for quite some time. There’s a variable called CMAKE_USE_RELATIVE_PATHS, but its documentation says:

May not work!… In general, it is not possible to move CMake generated makefiles to a different location regardless of the value of this variable.

It seems they tried to fix it for a while but gave up, and instead posted a FAQ which I don’t really understand:

CMake uses full paths because:

  1. configured header files may have full paths in them, and moving those files without re-configuring would cause unpredictable behavior.
  2. because cmake supports out of source builds, if custom commands used relative paths to the source tree, they would not work when they are run in the build tree because the current directory would be incorrect.
  3. on Unix systems rpaths might be built into executables so they can find shared libraries at run time. If the build tree is moved old executables may use the old shared libraries, and not the new ones.

Can the build tree be copied or moved?

The short answer is NO. The reason is because full paths are used in CMake, see above. The main problem is that cmake would need to detect when the binary tree has been moved and rerun. Often when people want to move a binary tree it is so that they can distribute it to other users who may not have cmake in which case this would not work even if cmake would detect the move.

The workaround is to create a new build tree without copying or moving the old one.

The way I see it, the real reasons for this sorry state are laid out in this 2009 discussion:

You should give up on CMAKE_USE_RELATIVE_PATHS, and we should deprecate it from CMake. It just does not work, and frustrates people.

… It is really hard to make everything work with relative paths, and you don’t get that much out of it, except lots of maintenance issues and corner cases that do not work.

An alternative that I’m growing fond of

Is ‘Project from existing code’:

Download the sources you wish to use, and instead of invoking CMake on the root CMakeLists.txt, invoke ‘Project from existing code’, select the source folder and follow the rest of the wizard instructions.

Today this approach worked for me perfectly on the first try (on this library, which is admittedly simple in structure) – but that was just lucky. It certainly isn’t perfect, but it is simple, and the generated project files do use relative paths. I’m beginning to think that even when tweaks are needed, this is a better starting point for a usable project. Do tell me in the comments if your experience is different.


It seems some words of clarification about my usage scenario are in order.

I wish to import an open source project to my build environment, and continue from there. With my build environment. Is that such an exceptional scenario? (It might be, judging by the comments below.) I was under the impression that this is what CMake’s authors aimed for (why generate a VS project/solution otherwise?), and if I were creating an open source package that is how I would want others to use my code.

Moreover, except in the simplest of cases this is the only way to go: a CMake-generated project cannot possibly be the final say. All native build engines have their special knobs and handles that often must be tweaked. Did you ever, e.g., want to change the import library for a dll? Not really possible in CMake. Not to mention more advanced stuff – e.g., rebasing it or enabling ASLR. I didn’t mention this above because I don’t consider it a CMake flaw – it’s just too much to ask of a portable build system. All you can expect from it is to set a portable common ground.

So I would expect CMake to generate a native build package (say, a VS solution) in a way that would make it possible to forget it was generated by CMake. Due to all the native knobs and handles this is an inherently hard user story to implement – but CMake fails much, much earlier. I’d consider using relative paths a must-have, and I don’t see why portability makes this task any harder than, say, MSBuild authors’ task of using relative paths.

Posted in C++ | 16 Comments