Tinkering with VS2015 (CTP 6)

Today I downloaded the latest VS bits and played around with the native debugger. It was a brief session, and so these recorded impressions will be brief too.

☺ Universal CRT is here!

And seems like a great idea.

A lot of cheese was moved around in the process, and it would probably take me a while to know my way around again. As a prominent example, dbgint.h is now replaced by debug_heap.cpp – which is a borderline-breaking change: dbgint.h was kind-of-documented (although wrapped in disclaimers and admittedly an internal implementation detail), and real code came tumbling down. What’s worse, the type declarations that were available in dbgint.h are now hidden in debug_heap.cpp – which includes many un-published internal headers – and tool writers would probably have no choice but to cut and paste the type declarations and hope for the best.
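For a sense of what's lost, here is roughly the declaration tool writers used to get from dbgint.h and now have to replicate by hand – reproduced from memory from the VS2013-era header, so verify against your actual CRT sources:

#define nNoMansLandSize 4

typedef struct _CrtMemBlockHeader
{
    struct _CrtMemBlockHeader* pBlockHeaderNext;
    struct _CrtMemBlockHeader* pBlockHeaderPrev;
    char*                      szFileName;    // allocating source file
    int                        nLine;         // allocating source line
#ifdef _WIN64
    // nDataSize/nBlockUse are reversed on Win64 to keep 16-byte alignment
    int                        nBlockUse;
    size_t                     nDataSize;
#else
    size_t                     nDataSize;
    int                        nBlockUse;
#endif
    long                       lRequest;      // running allocation number
    unsigned char              gap[nNoMansLandSize];  // no-man's-land guard bytes
} _CrtMemBlockHeader;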

I’m not entirely sure this breaking change (and others like it) is by design. One can still hope the final bits will see this fixed. What’s much worse –

☹ Published MS symbols are all stripped

Which means you can’t step through CRT/MFC sources. This is a major setback in productivity, and I hope only a temporary one.

☺ Context operator replaced

The context operator (‘{,,dll}symbol’), while mighty useful at debug time, was broken beyond repair – and, as I hoped back in 2009, it is now replaced by the windbg-like ‘!’ operator:

However, as apparent in the screenshot:

☹ Context operator no longer deduces type

… and explicit casts are in order where they previously weren’t. That might seem like a quibble, but it in fact prevents some very useful hacks that were previously available – notably, checking memory integrity from the debugger:

The closest I currently have to a workaround is to capture the function to a variable in code, and invoke it from the watch window:
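A minimal sketch of that workaround (the pointer name is mine):

#include <crtdbg.h>

// Keeping a function pointer alive in the binary (debug builds only –
// in release, _CrtCheckMemory is a macro) lets the debugger evaluate
// 'g_pfnCheckMemory()' in a watch window, no casts needed.
int (*g_pfnCheckMemory)(void) = _CrtCheckMemory;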

☺ Micro Profiling!

After you step past a code line in the debugger, a neat little tooltip appears:

Even in disassembly!

☺ Wide-register watch

‘xmm0il’ now works in x64 also.

BTW, the default platform is now ‘x86’, and ‘Win32’ can be added as a separate configuration from the configuration manager. I’m not sure why, or what the difference is.

☹ Auto-vectorization

It seems little progress was made in auto-vectorization – AFAICT, all previous reports still hold.

Accelerating Debug Runs, Part 2: _ITERATOR_DEBUG_LEVEL

A previous post discussed the Windows Debug Heap – with the main motivation being how to avoid it, as it is just empty, expensive overhead, and it isn’t clear why it is on by default in the first place. Remarkably, 3 weeks after posting here the VC team announced that from VC “14” (now in CTP) onwards the WDH will be opt-in and not opt-out, as it should be. So hopefully in ~2 years the recommendations in the last post will be largely obsolete. Anyway, I’m often facing unworkably slow run times in debug even after disabling the WDH, and further measures are in order.

In debug builds the VC++ CRT implementation runs a hefty load of iterator-validation tests on all STL containers – the simplest example being raising an assertion when an std::vector subscript is out of range. This leads to the unfortunate reality wherein if someone writes C++ code ‘by the book’, using standard containers for everything, s/he often ends up writing code that is utterly unusable in debug. In one of our projects, image classes were coded with std::vector as the container for the image bits. The product code iterates intensively over the pixels of many images, and as a result debug builds completed a typical job in ~4 hours, whereas a release build completed it in ~4 minutes. For a long while, debugging that project was reduced to logging and stepping through disassembly, as debug builds were completely useless.

Now for some good news and some bad news. The good news is that this behavior is customizable via the _ITERATOR_DEBUG_LEVEL macro: #define it to 0 (or 1, if you have a particular need for it) early in the compilation – say in the project properties, or at the top of a precompiled header – and this disproportionate computational overhead is gone.

The bad news is that this doesn’t work.

MSVCMRTD.lib(locale0_implib.obj) : error LNK2022: metadata operation failed (8013118D) : Inconsistent layout information in duplicated types (std.basic_string<char,std::char_traits<char>,std::allocator<char> >): (0x0200004e).

Well, that was a tad dramatic – it doesn’t always work, and in /clr builds in particular it doesn’t.

<rant>Now /clr projects will probably forever be second-class citizens in the VC universe. Features will forever be coded for mainstream native code first, and trickle down to /clr code as time and priorities permit (two notable examples are data breakpoints and debugger visualizers, both still unsupported in mixed debugging – but trust me, there are plenty more).</rant> Anyhow, as far as _ITERATOR_DEBUG_LEVEL goes, this is much more a bug than something resembling a decision. The venerable Stephan T. Lavavej elaborates (3rd reply from the bottom):

…The underlying problem is that _ITERATOR_DEBUG_LEVEL affects the representations of STL containers, and C++ (both native and especially managed) really hates it when code can’t agree on the representation of an object.  When _SECURE_SCL/_HAS_ITERATOR_DEBUGGING were added in VC8, we should have created 5 variants of the CRT/STL binaries (including DLLs).  Unfortunately we didn’t (this was before my time, otherwise I would have spoken up), and having only debug and release DLLs causes headaches.  We suffered from longstanding problems in VC8/9 until we overhauled how this worked in VC10.  During VC10 we untangled the worst of the problems by making std::string header-only.  With invasive surgery we were able to get native code working correctly in every case except one very obscure one that nobody has noticed or complained about yet.  (We now have 5 static libs, which solves the case of static linking absolutely 100% perfectly, but still only 2 DLLs.)  But managed code is structured differently, and the tricks that work in native don’t work for it.  As a result, customizing _ITERATOR_DEBUG_LEVEL basically doesn’t work under /clr[:pure].  Very few customers have encountered this (you’re one of the first) because we changed the release mode default to IDL=0 (which everyone wants), and few people want to modify debug mode’s default of IDL=2.

The thread is from Jan 2011, and this particular issue was resolved in VS2013. Similar issues remain, though, and I’m not sure /clr code will ever make it into routine test matrices at MS – so as the CRT code evolves, such issues will probably keep popping up.

Bottom line: if – like me – you’re debugging C++ code that is both managed and makes extensive use of STL – your mileage may seriously vary when trying to customize iterator debug level. If you do develop purely native code and are trying to accelerate debug runs, I do recommend judiciously setting _ITERATOR_DEBUG_LEVEL to zero – and raising it back only when you’re tracking concrete iterator issues.
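For the purely-native case, a minimal sketch of that customization (the pixel-inverting function is a contrived example of mine). The macro must be defined before any standard header is included, and consistently across every translation unit that shares STL objects:

// Top of the precompiled header (or /D "_ITERATOR_DEBUG_LEVEL=0" in the
// project properties) - must precede all standard-library #includes:
#define _ITERATOR_DEBUG_LEVEL 0  // 0 = no checks, 1 = ~_SECURE_SCL, 2 = full checks (debug default)

#include <vector>

void invertPixels(std::vector<unsigned char>& bits)
{
    // With _ITERATOR_DEBUG_LEVEL == 0, operator[] compiles down to a raw
    // dereference - no per-access range validation.
    for (size_t i = 0; i < bits.size(); ++i)
        bits[i] = ~bits[i];
}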

In the same reply Stephan offers an alternative:

Have you considered making your “debug build” compile in release mode with no optimizations?  Release mode versus debug mode affects what CRT/STL/etc. you link to (and whether you can effectively debug into them) and as a side effect affects your IDL default, but it’s not inherently tied to whether your own code is compiled with optimizations or not, and that’s what affects the debuggability of your own code.  The IDE pairs release mode with optimizations and debug mode without optimizations, but there’s no fundamental reason linking the two.

This makes sense, and I briefly experimented with the approach. Sorry to report: still no success. While I still can’t pinpoint the root cause, even when compiling release builds with /Od (optimizations disabled) the debugging experience is severely crippled (and yes, of course I enabled the proper PDB-generation switches in both the compiler and the linker). Watching local variables and single-stepping seemed highly erratic; the ‘this’ pointer for class methods seemed to stick with $rcx throughout each method and thus gave rubbish on member watches – etc., etc.

However, this is a step in a better direction. More on that in the next post.

Accelerating Debug Runs, Part 1: _NO_DEBUG_HEAP

(A more appropriate but even-less-catchy title might have been ‘accelerating runs from the debugger‘. As elaborated below, these two are not strictly equal).

A common notion is that debug builds can and should carry as much debugging overhead as one can possibly cram in – after all, the whole point of debug builds is exactly this, debugging, and you should never care about their performance. After too many cases of builds slow to the extent of being utterly unworkable, I respectfully disagree. In this and the next post, a few techniques to make debug builds run faster are laid out.

Introducing the Windows Debug Heap

As many, many have already discovered, the WDH is a big deal as far as performance goes, and yet MSDN is unusually terse about it. The HeapSetInformation page says:

When a process is run under any debugger, certain heap debug options are automatically enabled for all heaps in the process. These heap debug options prevent the use of the LFH. To enable the low-fragmentation heap when running under a debugger, set the _NO_DEBUG_HEAP environment variable to 1.

And in some arcane corner of the WinDBG documentation:

Processes that the debugger creates (also known as spawned processes) behave slightly differently than processes that the debugger does not create.

Instead of using the standard heap API, processes that the debugger creates use a special debug heap. You can force a spawned process to use the standard heap instead of the debug heap by using the _NO_DEBUG_HEAP environment variable or the -hd command-line option.

(While the latter was written for windbg, everything except the -hd switch holds equally for VS).

What are these ‘certain heap debug options’? What is the price in performance? Can the WDH be avoided altogether? Stay tuned.

Creating and Avoiding the WDH

The debugger itself calls IDebugClient5::CreateProcess2, which by default creates a debuggee process with the WDH. The WDH can be bypassed by specifying DEBUG_CREATE_PROCESS_NO_DEBUG_HEAP in the options argument, and the MS debuggers do exactly that when the aforementioned environment variable _NO_DEBUG_HEAP exists and is set to 1.
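Here’s a rough, untested sketch of that dbgeng call (the wrapper function is mine; error handling omitted):

#include <windows.h>
#include <dbgeng.h>

HRESULT LaunchWithoutWDH(IDebugClient5* client, PSTR commandLine)
{
    DEBUG_CREATE_PROCESS_OPTIONS opts = {};
    opts.CreateFlags = DEBUG_PROCESS                       // spawn as a debuggee
                     | DEBUG_CREATE_PROCESS_NO_DEBUG_HEAP; // skip the WDH
    return client->CreateProcess2(0,           // no process server - local debugging
                                  commandLine,
                                  &opts, sizeof(opts),
                                  nullptr,     // inherit current directory
                                  nullptr);    // inherit environment
}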

(I suspect the underlying apparatus is that CreateProcess with the DEBUG_PROCESS flag causes Windows to check the _NO_DEBUG_HEAP environment variable and decide which process heap to create, but I didn’t verify.)

You can set this environment variable either globally for the machine (as I do) or in a specific debug session via the project properties:
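For the latter, the string to add under Configuration Properties → Debugging → Environment is simply:

_NO_DEBUG_HEAP=1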

What the WDH Does

  1. The only documented effect is disabling the LFH – which makes sense, as these are mutually exclusive heap layouts. You do lose some speedups by dropping the LFH but by and large this is a negligible factor compared to the others.
  2. On every allocation the memory manager initializes every allocated DWORD to 0xbaadf00d, and on every deallocation sets the memory to 0xfeeefeee – in addition to some bookkeeping just after the allocated chunk. Here’s the normal view:

And here’s the view with _NO_DEBUG_HEAP=1:

These magic numbers can help in some debugging scenarios – use of uninitialized heap memory, and usage after free – but truth be told, they rarely do. Here are some more details. Most of the extra time, however, is not spent there.

  3. On every memory operation, the WDH walks the heap and checks for integrity! To observe, add some corruption:
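For instance (a contrived snippet of mine, in the spirit of the original screenshot):

#include <stdlib.h>

int main()
{
    char* p = (char*)malloc(16);
    p[16] = '!';                 // off-by-one write - tramples heap bookkeeping
    char* q = (char*)malloc(16); // under the WDH, the walk here detects the damage
    free(q);
    free(p);
    return 0;
}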

And run:

Now run again with _NO_DEBUG_HEAP set to 1 – and watch the assertion vanish.

Err, this stuff actually sounds useful. Are you sure I should disable it?

For regular C++ applications – beyond a doubt, yes.

The CRT delivers identical functionality, on top of the Windows debug heap, with different magic numbers: 0xcdcdcdcd for fresh allocations and 0xdddddddd for freed memory. If you leave the WDH on, you’re initializing memory chunks twice – and worse, checking heap integrity twice – for every allocation. In regular development scenarios the WDH is just empty, very expensive overhead.

By ‘regular’ C++ programs I mean those that don’t do anything fancy with the heap and just stick to the built-in CRT heap. You can overload new/delete, as long as your overloads eventually call the shipped new/delete/malloc/free, or their dbg/aligned siblings.

One potential argument in favour of leaving the WDH on is that unlike the CRT debug heap, the WDH is operational in release builds too – but (1) it is disabled for any launch outside a debugger anyway, and (2) in the extremely unlikely case that you require memory-integrity checks but don’t want to run a debug build, I would suggest just editing your debug configuration to include optimizations (add /O2).

Oh, and in our applications setting _NO_DEBUG_HEAP=1 accelerated some runs by a factor of 10. ’Nough said.


Edit (Oct 6 2014):

Remarkably, 3 weeks after this post was initially published, it seems the VC team themselves agree. Beginning with VS “14”, the WDH will be opt-in, not opt-out – as it ought to be.

Debugging Memory Corruption II

Some years ago I shared a trick that lets you call _CrtCheckMemory from the debugger anywhere, without re-compilation. The updated (as of VS2013) string to type at a watch window is:

{,,msvcr120d.dll}_CrtCheckMemory()

Let’s expand on that today, in two steps.

Checking memory on every allocation

The CRT heap accepts a neat little flag called _CRTDBG_CHECK_ALWAYS_DF. Here’s how it’s used:

#include <crtdbg.h>   // _CrtSetDbgFlag and friends (effective in debug builds only)

int main()
{
    // Get the current flag
    int tmpFlag = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG);

    // Turn on the corruption-checking bit
    tmpFlag |= _CRTDBG_CHECK_ALWAYS_DF;

    // Set the flag to the new value
    _CrtSetDbgFlag(tmpFlag);

    int* p = new int[100]; // allocate,
    p[101] = 1;            // corrupt,    and…

    int* q = new int[100]; // BOOM! the alarm fires here
}

Testing for corruption on every allocation can tangibly slow down your program, which is why the CRT allows testing only every N allocations, N being 16, 128 or 1024.  Usage adds half a line of code – pasted from MSDN:

// Get the current bits
int tmp = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG);

// Clear the upper 16 bits and OR in the desired frequency
tmp = (tmp & 0x0000FFFF) | _CRTDBG_CHECK_EVERY_16_DF;

// Set the new bits
_CrtSetDbgFlag(tmp);

Note that testing for corruption on every memory allocation is nothing like testing on every memory write – the alarm would not fire at the exact time of the felony, but since your software allocates memory (even indirectly) very often – this will hopefully help narrow down the crime scene quickly.

Checking memory on every allocation – from the debugger

You might reasonably want to enable/disable these lavish tests at runtime.

The debug flags are stored in {,,msvcr120d}_crtDbgFlag, and the numeric value of _CRTDBG_CHECK_ALWAYS_DF is 4, so one might hope that these lines would enable and disable these intensive memory tests:

[screenshot: watch-window attempts to toggle the flag bit directly in _crtDbgFlag]

Alas, this doesn’t work – _CrtSetDbgFlag contains further logic that routes the input flags to internal variables. The easiest solution is to just call it:

[screenshot: watch-window calls to _CrtSetDbgFlag]

The first two lines enable, the last two disable. If you’re running with non-default flags, the actual values you see might differ.
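For reference, the calls look roughly like this, compressed to one line each here (flag values: the debug default _CRTDBG_ALLOC_MEM_DF is 0x1, _CRTDBG_CHECK_ALWAYS_DF is 0x4):

{,,msvcr120d.dll}_CrtSetDbgFlag(0x5)      ← enable (default 0x1 | check-always 0x4)
{,,msvcr120d.dll}_CrtSetDbgFlag(0x1)      ← disable (back to the debug default)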

Hidden Tracepoint Keywords

The tracepoints window includes instructions for several special keywords, the most useful by far being $CALLSTACK:


These are not all there are – two more exist: $TICK and $FILEPOS. Quoting the documentation:

$TICK inserts the current CPU tick count, while $FILEPOS inserts the current file position.

$TICK displays the time counter in hex, but otherwise both work as advertised and are documented and official. There is just a good chance nobody knows them, since – reasonably – no one thought of going to MSDN to dig them out, as the dialog itself already goes unusually deep into detail.
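For example, a tracepoint message mixing the keywords with free text (the wording is my own concoction):

Passed $FILEPOS at tick $TICK, on thread $TID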

Debugging Handle Leaks

This is all well-documented stuff and I won’t go into details – it’s here mostly for self-reference (3rd time I’ve had to chase this down on Google).

Steps are:

(1) Install the WDK to integrate the WinDbg engine with VS (not strictly necessary, but very convenient).

(2) Attach to the debuggee via the ‘User Mode’ transport:

[screenshot: ‘Attach to Process’ dialog with the user-mode transport selected]

(3) Continue execution, and break at the spot where the handle count is at ‘reference’ value.

(4) At the ‘Debugger Immediate Window’ type ‘!htrace -enable’.

(5) Continue execution and break at a point where the handle count is supposed to be at reference value but isn’t.

(6) At the ‘Debugger Immediate Window’ type ‘!htrace -diff’.


The offending stack[s] should be visible in the debugger immediate window. If you get garbage, there’s a good chance you’re debugging a 32-bit process on a 64-bit machine.
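If you want to dry-run the workflow first, a contrived leak like this one (my example, not from the original steps) gives !htrace something to find:

#include <windows.h>

int main()
{
    for (int i = 0; i < 100; ++i)
    {
        // Deliberate leak: the returned event handle is never closed,
        // so each iteration bumps the process handle count by one.
        CreateEventW(nullptr, FALSE, FALSE, nullptr);
        Sleep(100);
    }
    return 0;
}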