March 2016

Volume 31 Number 3

[Visual Studio]

Debugging Improvements in Visual Studio 2015

By Andrew Hall | March 2016

Despite our best efforts to write code that works correctly the first time, the reality is that as developers, we spend a lot of time debugging. Visual Studio 2015 brought new functionality and improvements to help developers identify issues earlier in the development cycle, as well as improving the ability to find and fix bugs. In this article, I’ll review the improvements added to Visual Studio 2015 for both .NET and C++ developers.

I’ll begin by looking at two usability improvements and the addition of performance tools that work during debugging for both the Microsoft .NET Framework and C++. I’ll then dig into a set of improvements specifically for .NET development and end with new developments for C++ developers.

Breakpoint Settings

Breakpoints are a fundamental capability of the debugger, and chances are high that you used them the last time you fired up the debugger. There are times when using conditional breakpoints can help find the cause of a bug much faster. Visual Studio 2015 makes it easier to take advantage of conditional breakpoints by introducing a non-modal Breakpoint Settings window that sits in context with your code and lets you easily combine different settings within the same window.

To review, Visual Studio 2015 offers the following settings for breakpoints:

  • Conditional expressions break only when specified conditions are met. This is the logical equivalent of adding an if statement to the code and placing the breakpoint inside the if so it is only hit when the conditions are true. Conditional expressions are useful when you have code that’s called multiple times but the bug only occurs with a specific input, so it would be tedious to have to manually check the values and then resume debugging.
  • Hit counts break only after the breakpoint has been hit a certain number of times. These are useful in situations where code is called multiple times and you either know exactly when it’s failing or have a general idea that it fails after at least a certain number of iterations.
  • Filters break when the breakpoint is hit on a specific thread, process or machine. Filter breakpoints are useful for debugging code running in parallel when you only want to stop in a single instance.
  • Log a message (called TracePoints) prints a message to the output window and is capable of automatically resuming execution. TracePoints are useful for doing temporary logging when you need to trace something and don’t want to have to break and manually track each value.

To illustrate the value of conditional breakpoints, consider the example shown in Figure 1, where applications are retrieved by type inside a loop. The code works for most input strings, failing only when the input is “desktop.” One option is to set a breakpoint and then inspect the value of appType each time it’s hit until it’s “desktop,” at which point I can begin stepping through the app to see what’s not working. However, it’s far quicker to create a breakpoint with a conditional expression that only breaks when appType is equal to “desktop,” as shown in Figure 1.

Figure 1 Breakpoint Settings Window with a Conditional Expression
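
The loop in question might look something like the following sketch; the method and app type names are hypothetical stand-ins for the code shown in Figure 1, not the article’s actual sample:

using System;
using System.Collections.Generic;

class BreakpointDemo
{
    // Hypothetical stand-in for the lookup shown in Figure 1.
    static List<string> GetAppsByType(string appType)
    {
        return new List<string> { appType + "-app-1", appType + "-app-2" };
    }

    static void Main()
    {
        string[] appTypes = { "phone", "tablet", "desktop", "web" };

        foreach (string appType in appTypes)
        {
            // Set a breakpoint on the next line with the conditional
            // expression  appType == "desktop"  so the debugger only
            // stops on the iteration that reproduces the bug.
            List<string> apps = GetAppsByType(appType);
            Console.WriteLine("{0}: {1} apps", appType, apps.Count);
        }
    }
}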

Using a breakpoint with a conditional expression reduces the manual steps required to get the application into the correct state for debugging.

Exception Settings

As a developer, you know that exceptions can occur while your app is running. In many cases, you need to account for the possibility that something will go wrong by adding try/catch statements. For example, when an application retrieves information via a network call, that call can throw an exception if the user doesn’t have a working network connection or if the server is unresponsive. In this case, the network request needs to be placed inside a try block, and if an exception occurs, the application should display an appropriate error message to the user. If the request fails when you expect it to work (because the URL is incorrectly formatted in code, for example), you might be tempted to search through code to look for a place to set a breakpoint, or to remove the try/catch so that the debugger will break on the unhandled exception.
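
As a minimal sketch of that pattern (the URL, the WebClient-based call and the error handling here are illustrative placeholders, not the article’s actual sample):

using System;
using System.Net;

class NetworkDemo
{
    static void Main()
    {
        try
        {
            // If the URL is malformed or the server is unreachable, this call
            // throws. Configuring the debugger to break when the exception is
            // thrown (via the Exception Settings window) stops execution right
            // here, before the catch block runs and hides the details.
            using (var client = new WebClient())
            {
                string body = client.DownloadString("https://example.com/api/data");
                Console.WriteLine("Received {0} characters", body.Length);
            }
        }
        catch (WebException ex)
        {
            // The user-facing fallback: show an appropriate error message.
            Console.WriteLine("The request failed: " + ex.Message);
        }
    }
}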

The more efficient approach, however, is to configure the debugger to break when the exception is thrown, using the Exception Settings dialog; this lets you set the debugger to break when all exceptions, or only exceptions of a certain type, are thrown. In previous versions of Visual Studio, the feedback was that the Exception Settings dialog was too slow to open and had poor search functionality. So in Visual Studio 2015 the old Exception Settings dialog was replaced with a new, modern Exception Settings window that opens instantly and offers the fast, consistent search you’ve come to expect, as shown in Figure 2.

Figure 2 The Visual Studio 2015 Exception Settings Window with a Search for All Exception Types That Contain “Web”

Performance Tools While Debugging

Your end users increasingly expect software to be fast and responsive, and exposing users to spinners or sluggish UIs can negatively affect user satisfaction and retention. Time is valuable and when users have a choice between applications with similar functionality, they’ll pick the one that has the better UX. 

However, when writing software, you often defer proactive performance tuning and instead follow best practices, hoping the application will be fast enough. The main reason is that measuring performance is time-consuming and difficult, and finding ways to improve it can be even harder. To help with this, in Visual Studio 2015 the Visual Studio Diagnostics team integrated a set of performance tools directly into the debugger, called PerfTips and the Diagnostic Tools window. These tools help you learn about the performance of your code as part of everyday debugging, so you can catch issues early and make informed design decisions that let you build your application for performance from the ground up.

An easy way to understand the performance of your application is to simply step through it using the debugger to get a feel for how long each line of code takes to execute. Unfortunately, this isn’t very scientific, as it depends on your ability to perceive differences; it’s hard to tell the difference between an operation that takes 25 ms and one that takes 75 ms. Alternatively, you can modify the code to add timers that capture accurate information, but that comes at the inconvenience of changing code.
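
The manual alternative that PerfTips replaces looks roughly like the following sketch, where DoWork is a hypothetical stand-in for the operation being measured:

using System;
using System.Diagnostics;
using System.Threading;

class TimingDemo
{
    static void Main()
    {
        // Without PerfTips, measuring a single operation means wrapping it in
        // timer code like this and remembering to remove it later.
        Stopwatch stopwatch = Stopwatch.StartNew();

        DoWork();  // The operation being measured.

        stopwatch.Stop();
        Console.WriteLine("DoWork took {0} ms", stopwatch.ElapsedMilliseconds);
    }

    // Hypothetical stand-in for the code under investigation.
    static void DoWork()
    {
        Thread.Sleep(75);
    }
}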

PerfTips solve this by showing you how long the code took to execute, placing the time in milliseconds at the right end of the current line when the debugger stops, as shown in Figure 3. PerfTips show the time it takes the application to run between any two break states in the debugger, which means they work both when stepping through code and when running to breakpoints.

Figure 3 PerfTip Showing the Elapsed Time It Took to Step over a Function Call

Let’s look at a quick example of how a PerfTip can help you know how long code takes to execute. You have an application that loads files from the disk when the user clicks a button and then processes them accordingly. The expectation is that it’ll take only a few milliseconds to load the files from the disk; however, using PerfTips, you can see it takes significantly longer. Based on this information, you can modify the application’s design so it doesn’t rely on loading all of the files when the user clicks the button, early in the development cycle and before the change becomes too costly. To learn more about PerfTips, visit aka.ms/perftips.
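
A hypothetical version of that button handler might look like the following; stepping over the LoadAllFiles call in the debugger would show a PerfTip with its total cost (the method and folder names are made up for illustration):

using System;
using System.Collections.Generic;
using System.IO;

class FileLoadDemo
{
    // Hypothetical handler: stepping over LoadAllFiles shows a PerfTip with
    // the elapsed time, revealing that reading every file up front takes far
    // longer than the few milliseconds expected.
    static void OnLoadButtonClicked(string folder)
    {
        List<string> contents = LoadAllFiles(folder);
        Console.WriteLine("Loaded {0} files", contents.Count);
    }

    static List<string> LoadAllFiles(string folder)
    {
        var results = new List<string>();
        foreach (string path in Directory.GetFiles(folder))
        {
            results.Add(File.ReadAllText(path));  // Synchronous read of every file.
        }
        return results;
    }

    static void Main()
    {
        OnLoadButtonClicked(Environment.CurrentDirectory);
    }
}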

The Diagnostic Tools window shows a history of the CPU and memory usage of the application and all debugger break events (breakpoints, steps, exceptions and break all). The window offers three tabs: Events, Memory Usage and CPU Usage. The Events tab shows a history of all debugger break events, which means it contains a complete record of all PerfTip values. Additionally, with the Enterprise edition of Visual Studio 2015, it contains all IntelliTrace events (covered later in this article). Figure 4 shows the Events tab in Visual Studio 2015 Enterprise. Because the CPU and memory information updates during your debugging session, you can also see the CPU and memory characteristics of specific sections of code. For example, you can step over a method call and watch how the graphs change, measuring the impact of that specific method.

Figure 4 Diagnostic Tools Window with the Events Tab Selected

The memory graph enables you to see the total memory usage of your application and spot trends in the application’s memory use. For example, the graph might show a steady upward trend that would indicate the application is leaking memory and could eventually crash. I’ll start by looking at how it works for the .NET Framework, and then cover the experience for C++. 

When debugging the .NET Framework, the graph shows when Garbage Collections (GCs) occur, as well as total memory; this helps to spot situations where the overall memory use of the application is at an acceptable level but its performance might be suffering due to frequent GCs (commonly caused by allocating too many short-lived objects). The Memory Usage tab lets you take snapshots of the objects in memory at any given point in time via the Take Snapshot button. It also provides the ability to compare two different snapshots, which is the easiest way to identify a memory leak: you take a snapshot, continue to use the application for a period of time in a manner that you expect should be memory-neutral, and then take a second snapshot. Figure 5 shows the memory tool with two .NET snapshots.
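
A classic pattern that the two-snapshot comparison catches is a static collection that keeps growing. The following is a minimal, hypothetical sketch; the snapshot diff would show the byte[] count climbing between the two snapshots:

using System;
using System.Collections.Generic;

class LeakDemo
{
    // A static list keeps every buffer alive, so none of them can ever be
    // garbage collected.
    static readonly List<byte[]> _history = new List<byte[]>();

    static void HandleRequest()
    {
        byte[] buffer = new byte[64 * 1024];
        // ... process the request ...
        _history.Add(buffer);  // The leak: buffers accumulate forever.
    }

    static void Main()
    {
        // Take a snapshot here, run a batch of work you expect to be
        // memory-neutral, then take a second snapshot and compare.
        for (int i = 0; i < 1000; i++)
        {
            HandleRequest();
        }
        Console.WriteLine("History holds {0} buffers", _history.Count);
    }
}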

Figure 5 Memory Usage Tab of the Diagnostic Tools Window with Two Snapshots

When you click on a snapshot, a second window called the Heap view opens, which shows you the details of what objects are in memory, including the total number and their memory footprint. The bottom half of the Heap view shows what’s holding references to the objects, preventing them from being garbage collected (referred to as Paths to Root), as well as what other types the selected type is referencing in the Referenced Types tab. Figure 6 shows the Heap view with the differences between the two snapshots.

Figure 6 Heap Snapshot View Shows the Differences Between Two .NET Snapshots

The C++ memory tool tracks memory allocations and deallocations to know what’s in memory at any given point. To view the data, use the Take Snapshot button to create a record of the allocation information at that moment in time. Snapshots can also be compared to see what memory has changed between them, making it much easier to track down memory leaks in code paths you expect to free all of their memory. When you select a snapshot to view, the Heap view shows you a list of types along with their sizes and, when comparing two snapshots, the differences between those numbers. When you see a type whose memory consumption you’d like to understand better, choose to view the instances of that type. The instances view shows you how big each instance is, how long it has been alive in memory and the call stack that allocated the memory. Figure 7 shows the instances view.

Figure 7 Heap View of C++ Memory Showing the Instances View with the Allocating Call Stack

The CPU graph shows the CPU consumption of the application as a percentage of all cores on the machine. This makes it useful both for identifying operations that cause unnecessary spikes in CPU consumption and for spotting operations that aren’t taking full advantage of the CPU. Consider an example of processing a large amount of data where each record can be processed independently. When debugging the operation, you notice that the CPU graph on a machine with four cores is hovering slightly below 25 percent. This indicates there’s an opportunity to parallelize the data processing across all of the cores on the machine to achieve faster performance.
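
Assuming each record really is independent, one way to act on that observation is to spread the loop across the cores, for example with Parallel.ForEach. This is a hedged sketch (the record set and Process method are made up), not the article’s sample:

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class ParallelDemo
{
    static void Main()
    {
        int[] records = Enumerable.Range(0, 1000000).ToArray();
        long total = 0;

        // A sequential foreach over records keeps the CPU graph near 25
        // percent on a four-core machine. Parallel.ForEach spreads the
        // independent work across all cores, so the graph climbs toward
        // 100 percent and the wall-clock time drops.
        Parallel.ForEach(
            records,
            () => 0L,                                            // Per-thread subtotal.
            (record, loopState, subtotal) => subtotal + Process(record),
            subtotal => Interlocked.Add(ref total, subtotal));   // Merge subtotals.

        Console.WriteLine("Total: {0}", total);
    }

    // Hypothetical stand-in for the per-record work.
    static int Process(int record)
    {
        return record % 7;
    }
}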

In Visual Studio 2015 Update 1, the Visual Studio Diagnostics team took this a step further and added a debugger-integrated CPU profiler to the CPU Usage tab that shows a breakdown of which functions in the application are using the CPU, as shown in Figure 8. For example, consider code that uses a regular expression to validate that an e-mail address entered by the user is in a valid format. When a valid e-mail address is entered, the code executes extremely fast; however, when an improperly formatted e-mail address is entered, the PerfTip shows that it can take close to two seconds to determine the address isn’t valid. Looking at the Diagnostic Tools window, you see that there was a spike in CPU during that time. Looking then at the call tree in the CPU Usage tab, you see that the regular expression matching is what’s consuming the CPU, which is also shown in Figure 8.

It turns out that C# regular expressions have a drawback: If they fail to match complex statements, the cost to process the entire string can be high. Using PerfTips and the CPU Usage tool in the Diagnostic Tools window, you can quickly determine that the application’s performance won’t be acceptable in all cases using the regular expression, so you can modify the code to use standard string operations instead, which yield much better performance when bad data is input. This could have been a costly bug to fix later, especially if it had made it all the way to production. Fortunately, the debugger-integrated tools enabled the design to change during development to ensure consistent performance. To learn more about the Diagnostic Tools window, go to aka.ms/diagtoolswindow.

Figure 8 CPU Usage Tab Showing the CPU Consumption of Matching a Regular Expression
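
The kind of change involved might look like the following sketch. The patterns and helper names are illustrative, not the actual code behind Figure 8; the regular expression deliberately uses a nested quantifier that backtracks badly on malformed input, and a match timeout keeps the demo bounded:

using System;
using System.Text.RegularExpressions;

class EmailValidationDemo
{
    // A backtracking-prone pattern: the nested quantifier ([\w\.\-]+)+ can
    // take seconds to reject long, malformed input.
    static readonly Regex EmailPattern = new Regex(
        @"^([\w\.\-]+)+@([\w\-]+\.)+[a-zA-Z]{2,}$",
        RegexOptions.None,
        TimeSpan.FromSeconds(2));  // Give up rather than hang the demo.

    static bool IsValidEmailRegex(string input)
    {
        try { return EmailPattern.IsMatch(input); }
        catch (RegexMatchTimeoutException) { return false; }
    }

    // The cheaper alternative hinted at in the article: a few plain string
    // checks that reject malformed input immediately.
    static bool IsValidEmailSimple(string input)
    {
        int at = input.IndexOf('@');
        if (at <= 0 || at != input.LastIndexOf('@')) return false;
        int dot = input.IndexOf('.', at);
        return dot > at + 1 && dot < input.Length - 1;
    }

    static void Main()
    {
        string malformed = new string('a', 30) + "!";              // No '@' at all.
        Console.WriteLine(IsValidEmailRegex(malformed));           // Slow: hits the timeout.
        Console.WriteLine(IsValidEmailSimple(malformed));          // False immediately.
        Console.WriteLine(IsValidEmailSimple("user@example.com")); // True.
    }
}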

Next, let’s look at some of the improvements made that are specifically for .NET debugging.

Lambda Support in the Watch and Immediate Windows

Lambda expressions (such as those used with LINQ) are an incredibly powerful and common way to quickly deal with collections of data. They enable you to do complex operations with a single line of code. Often, you find yourself wanting to test changes to expressions in the debugger windows, or to use LINQ to query a collection rather than manually expanding it in the debugger. As an example, consider a situation where your application is querying a collection of items and returning zero results. You’re sure there are items that match the intended criteria, so you start by querying the collection to extract the distinct elements in the list. The results confirm that there are elements that match your intended criteria, but there appears to be a mismatch in string casing; still, you don’t care whether the case matches exactly. Your hypothesis is that you need to modify the query to ignore casing when it does the string comparison. The easiest way to test that hypothesis is to type the new query into the Watch or Immediate window and see if it returns the expected results, as shown in Figure 9.
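
For instance, the two queries might look something like the following sketch; the item type and property names are hypothetical, not those from Figure 9, and the same expressions could be typed directly into the Watch or Immediate window while stopped in the debugger:

using System;
using System.Collections.Generic;
using System.Linq;

class LambdaDemo
{
    class Item
    {
        public string Category;
    }

    static void Main()
    {
        var items = new List<Item>
        {
            new Item { Category = "Desktop" },
            new Item { Category = "Phone" }
        };

        // 1. Inspect the distinct values actually in the collection.
        var distinct = items.Select(i => i.Category).Distinct().ToList();

        // 2. Test the revised, case-insensitive query before changing any code.
        var matches = items.Where(i =>
            string.Equals(i.Category, "desktop",
                StringComparison.OrdinalIgnoreCase)).ToList();

        Console.WriteLine("{0} distinct categories, {1} matches",
            distinct.Count, matches.Count);
    }
}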

Unfortunately, in versions of Visual Studio prior to 2015, typing a Lambda expression into a debugger window would result in an error message. Therefore, to address this top feature request, support was added for using Lambda expressions in the debugger windows.

Figure 9 Immediate Window with Two Evaluated Lambda Expressions

.NET Edit and Continue Improvements

A favorite debugging productivity feature in Visual Studio is Edit and Continue. Edit and Continue lets you change code while stopped in the debugger, then have the edit applied without the need to stop debugging, recompile and run the application to the same location to verify the change fixed the problem. However, one of the most frustrating things when using Edit and Continue is to make the edit, attempt to resume execution and see a message that the edit you made couldn’t be applied while debugging. This was becoming a more common problem as the framework continued to add new language features that Edit and Continue could not support (Lambda expressions and async methods, for example). 

To improve this, Microsoft added support for several previously unsupported edit types, which significantly increases the number of times edits can be successfully applied during debugging. Improvements include the ability to modify Lambda expressions, edit anonymous and async methods, work with dynamic types and modify C# 6.0 features (such as string interpolation and null-conditional operators). For a complete list of supported edits, visit aka.ms/dotnetenc. If you make an edit and receive an error message that the edit can’t be applied, make sure to check the Error List, where the compiler places additional information about why the edit couldn’t be compiled using Edit and Continue.
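
As a hypothetical illustration of the newly supported edit types, code like the following, which mixes a lambda, string interpolation and the null-conditional operator, can now typically be edited while stopped at a breakpoint and have the change applied without restarting the debugging session:

using System;
using System.Collections.Generic;
using System.Linq;

class EditAndContinueDemo
{
    static void Main()
    {
        var names = new List<string> { "Ada", "Grace", null, "Linus" };

        // While stopped at a breakpoint in this method, editing the lambda
        // below (for example, changing > 3 to >= 3), the interpolated string
        // or the ?. operator is now supported by Edit and Continue.
        var longNames = names.Where(n => n?.Length > 3).ToList();

        foreach (var name in longNames)
        {
            Console.WriteLine($"Long name: {name}");
        }
    }
}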

Additional improvements for Edit and Continue include support for applications using x86 and x64 CoreCLR, meaning it can be used when debugging Windows Phone apps on the emulator, and support for remote debugging.

New IntelliTrace Experience

IntelliTrace, available in the Enterprise edition, provides historical information about your application to help take the guesswork out of debugging and get you to the relevant parts of your code faster, with fewer debugging sessions. A comprehensive set of improvements was added to IntelliTrace to make it easier than ever to use. Improvements include a timeline that shows when in time events occur, the ability to see events in real time, support for TracePoints and integration into the Diagnostic Tools window.

The timeline enables you to understand when in time events occur and to spot groups of events that might be related. Events now appear live in the Events tab, whereas in prior versions you needed to enter a break state in the debugger to see the events IntelliTrace collected. TracePoint integration lets you create custom IntelliTrace events using standard debugger functionality. Finally, Diagnostic Tools window integration puts IntelliTrace events in context with performance information, letting you use the rich information of IntelliTrace to understand the cause of performance and memory issues by correlating the information across a common timeline.

When you see a problem in the application, you normally form a hypothesis about where to start investigating, place a breakpoint and run the scenario again. If the problem turns out not to be in that location, you need to form a new hypothesis about how to get to the correct location in the debugger and start the process over. IntelliTrace aims to improve this workflow by removing the need to re-run the scenario.

Consider the example from earlier in this article where a network call is failing unexpectedly. Without IntelliTrace, you need to see the failure the first time, enable the debugger to break when the exception is thrown, then run the scenario again. With IntelliTrace, when you see the failure, all you have to do is look in the Events tab of the Diagnostic Tools window. The exception appears as an event; select it and click Activate Historical Debugging. You’re then navigated back in time to the location in the source where the exception occurred, the Locals and Autos windows show the exception information and the Call Stack window is populated with the call stack where the exception occurred, as shown in Figure 10.

Figure 10 Visual Studio in Historical Debugging Mode at the Location an Exception Was Thrown

Finally, let’s look at some of the most notable improvements that were made for C++ debugging in Visual Studio 2015. 

C++ Edit and Continue

As mentioned earlier, Edit and Continue is a productivity feature that enables you to modify your code while stopped in the debugger and then have the edits applied when you resume execution, without the need to stop debugging and recompile the modified application. In previous versions, C++ Edit and Continue had two notable limitations. First, it only supported x86 applications. Second, enabling Edit and Continue resulted in Visual Studio using the Visual Studio 2010 C++ debugger, which lacked newer functionality such as support for Natvis data visualizations (see aka.ms/natvis). In Visual Studio 2015, both of these gaps were filled: Edit and Continue is enabled by default for C++ projects for both x86 and x64 applications, and it even works when attaching to processes and remote debugging.

Android and iOS Support

As the world moves to a mobile-first mentality, many organizations are finding the need to create mobile applications. With the diversification of platforms, C++ is one of the few technologies that can be used across any device and OS. Many large organizations are using shared C++ code for common business logic they want to re-use across a broad spectrum of offerings. To facilitate this, Visual Studio 2015 offers tools that enable mobile developers to target Android and iOS directly from Visual Studio. This includes the familiar Visual Studio debugging experience developers use for daily work in C++ on Windows.

Wrapping It Up

The Visual Studio Diagnostics team is extremely excited about the broad range of functionality that’s been added to the debugging experience in Visual Studio 2015. Everything covered in this article with the exception of IntelliTrace is available in the Community edition of Visual Studio 2015.

You can continue to follow the team’s progress and future improvements on the team blog at aka.ms/diagnosticsblog. Please try out the features discussed here and give the team feedback on how it can continue to improve Visual Studio to meet your debugging needs. You can leave comments and questions on the blog posts, or send feedback directly through Visual Studio by going to the Send Feedback icon in the top-right part of the IDE.


Andrew Hall is the program manager lead for the Visual Studio Diagnostics team that builds the core debugging, profiling and IntelliTrace experiences, as well as the Android Emulator for Visual Studio. Over the years, he has directly worked on the debugger, profiler and code analysis tools in Visual Studio.

Thanks to the following technical experts for reviewing this article: Angelos Petropoulos, Dan Taylor, Kasey Uhlenhuth and Adam Welch