Incremental rebuild was a feature of the C# compiler meant to increase throughput after an initial build. It worked on the principle that changes between builds are localized, and that the information gathered by the compiler from previous builds wouldn't be entirely invalidated; specifically, some of that information, and indeed the assembly itself, could be updated incrementally, resulting in faster builds.
Both the VS 2002 and VS 2003 compilers exposed this option through the /incr switch on the command line, and the ‘Incremental Rebuild’ option in the Advanced tab of Project Properties. In 2002 incremental rebuild was enabled by default for all project types.
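On the command line, enabling it looked roughly like this (the file and assembly names are just placeholders; /incr also had a longer /incremental spelling):

    csc /incr+ /debug+ /out:MyApp.exe Class1.cs Class2.cs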
After we shipped 2002, we started to get a large number of bugs (internal compiler errors - ICEs - the worst kind) that were a direct result of the incremental rebuild feature.
These bugs derived from the complexities involved in correctly implementing incremental rebuild and the problems associated with testing it. Consequently, for 2003 we fixed all known issues with incremental rebuild, but we also turned it off by default for all projects.
Incremental build initially seems like a no-brainer win: it can theoretically improve compilation times by significant amounts, and in certain cases we saw single-assembly build times improve by as much as 6x.
However, and this may seem counter-intuitive at first, incremental rebuild could also lead to longer build times. Why? The feature had several heuristics to determine whether it should do a full rebuild or not.
One of those heuristics kicked in when more than 50% of the files being tracked needed to be recompiled. However, in most cases the files that need to be recompiled aren't simply the files that changed; the compiler has to follow the dependency graph from the changed public interfaces of the types in those files to the other source files that depend on those types. So the compiler would do a lot of work to figure out the dependency graph, and occasionally discover it should actually do a full build anyway.
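To make the shape of that concrete, here's a rough sketch of the kind of decision involved - illustrative C#, not the compiler's actual code; the real compiler was native code and tracked far more state than this:

    using System.Collections.Generic;

    static class IncrementalSketch
    {
        // Walk from the files whose public surface changed to everything that
        // consumes those types, transitively. 'dependents' maps a file to the
        // files that depend on its public types.
        public static HashSet<string> ComputeAffected(
            IEnumerable<string> changedFiles,
            Dictionary<string, List<string>> dependents)
        {
            var affected = new HashSet<string>(changedFiles);
            var queue = new Queue<string>(affected);
            while (queue.Count > 0)
            {
                if (!dependents.TryGetValue(queue.Dequeue(), out var users)) continue;
                foreach (var user in users)
                    if (affected.Add(user))
                        queue.Enqueue(user); // a dependent's own public surface may ripple further
            }
            return affected;
        }

        // Only after all of that work does the ">50% of tracked files" check run;
        // failing it means falling back to a full build anyway.
        public static bool WorthBuildingIncrementally(int affectedCount, int trackedCount) =>
            affectedCount * 2 <= trackedCount;
    }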
Due to how incremental rebuild worked, it would then throw out all of the work it had done, and simply perform the normal build at that point, increasing the end-to-end time. Incremental rebuild also had a few other implications that were likely non-obvious to folks.
We wanted to update the existing assembly and PDB, but due to the way they are laid out, incrementally updating them meant they would still contain the old data as well; we'd just update the pointers (e.g. in the metadata tables) to point to the new locations in the file.
That meant the assembly generated from an incremental build was different from what you'd get from a non-incremental build, and it would be both larger and slower to load.
The slower-to-load aspect likely wasn't a big deal; however, the different output spoke to one of the major problems that users had with incremental build. In VS 2002 it was entirely possible to build, have the compiler choke and issue an error, and then simply build again
and have it work. This lowered confidence in the compiler (reasonably), and led to odd conclusions about why the compiler was exhibiting this behavior - conclusions unrelated to the incremental flag, because many users didn't even know it was set.
For example, folks might tweak their code, do another build, and have it work - not because of the change they made, but because a non-incremental build happened. There were at least 13 cases that would cause the compiler to bail on doing an incremental rebuild and perform a
full build instead, including things like 'more than 30 successful incremental builds without a full build', which would prevent the PE from getting too bloated over time. So, whether or not the user actually saw the incremental behavior was difficult for them to predict.
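Just to give a flavor of what those bail-out checks looked like, here's a hypothetical sketch - only the 30-build counter is something I'm describing from the actual feature; the other conditions are made up but representative:

    sealed class BuildState
    {
        public int SuccessfulIncrementalBuilds;   // reset whenever a full build happens
        public bool CompilerOptionsChanged;       // hypothetical: new switches invalidate the saved state
        public bool SourceFilesAddedOrRemoved;    // hypothetical: the tracked file set changed
        public bool PreviousStateAvailable;       // hypothetical: usable state left over from the last build?

        // The real compiler had at least 13 checks like these, which is why it was
        // so hard for users to predict whether a given build would be incremental.
        public bool MustDoFullBuild() =>
            SuccessfulIncrementalBuilds > 30      // keep the PE from bloating over time
            || CompilerOptionsChanged
            || SourceFilesAddedOrRemoved
            || !PreviousStateAvailable;
    }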
All of this led to the decision to cut incremental build in VS 2005. I should mention that this is incremental compilation of the assembly; incremental rebuild of the solution was most decidedly not cut and was greatly improved in VS 2005 through MSBuild.
The funny thing about this one is that we had initially spent significant effort on design, added complexity to the codebase, and invested heavily in validation (every feature added to the language needed to be separately tested in the incremental case), and when we turned it off by default in VS 2003,
essentially no one noticed. A few folks mentioned it in VS 2005, not because they saw compiler builds get slower, but simply because they saw the option was removed.
TBH the regular feedback we had was that the compiler was blazingly fast - particularly from folks who had been using C++ for a long while. This is another feature that, looking back, we probably should never have done.
It caused customers tons of headaches for little ultimate benefit. That said, I feel good about our willingness and decision to remove it. Occasionally at Microsoft we fall prey to the sunk cost fallacy, but it's gotten much better over the years as our telemetry has
dramatically improved and we have much more insight into usage and benefit than we had in 2003 when we made this decision. So, if anyone used VS 2002 and wondered what happened to this option, now you know :-)
One of the stories my first manager used to tell, and that I always got a kick out of, was the following. Context: Think Week was a week that Bill Gates used to take every year to learn about a huge variety of topics, and folks at MS would submit papers/bits for it. It was a big deal.
"I joined Microsoft on 3/19/99 and found out I was working on a new language. The first week was a blur just getting up to speed and I don't think I even installed the complier (not that it did much then - I think you could define an interface but not use it yet).
In my second week (3/27/99) I get an email from Drew saying that Bill [Gates] wants the C# Language spec and the latest build of the compiler by 4.30 (Note that it was 4.30 not 4/30). I'm still getting used to US dates so I see 4.30 and think it's 4:30pm that day.
In 2001 I had only recently joined Microsoft full time, so I was really just getting my feet underneath me in the org. There were many internal teams using C#, so one of the things I owned was an internal DL called CSharp User Community which had thousands of folks on it.
The point of the DL was for C# users to ask questions of each other and get help as they needed it, but I did participate a lot, as folks would often ask for definitive answers. The downside to this was that I often received a large number of mails directly throughout the day.
If I'd had some more experience, I would have added the user community back to the threads much more often than I did. Regardless, this ownership led to some funny and uncomfortable situations.
I was trying to remember any interesting event associated with a new year, and the best I could come up with this morning is from many years after what I've been tweeting about: 2010. In 2010 we were working on Dev11 (VS 2012) and iterating closely with Windows on Windows 8.
I was leading a team to create a tooling experience for JavaScript Windows Store apps. Windows 8 was the introduction of the Windows Store and the new WinRT APIs, ABI format, etc. that allowed languages like JS, C#, VB .NET, C++, etc. to directly call the Windows API.
It was still early in Dev11 development, and my team was writing a new JavaScript language service (as well as a new project system). There was already an existing JS language service.
1/ In early 2004 we were heads down executing on Edit and Continue across a large contingent of teams. There had been several iterations of scoping, redesigns, and customer feedback.
2/ We had a weekly meeting every Thursday morning at which representatives from each of the teams would get together and review progress. It was fairly heavyweight, but there were so many teams involved that a regular sync was necessary.
3/ Regardless, E&C was coalescing, but teams were stretched thin working diligently to enable scenarios, improve performance, fix bugs, etc. It had been a month or so since we decided to add support for C# to the matrix as well, so folks were a bit stressed.
1/ Edit and Continue was a beloved feature of VB6 and was a priority for making migration onto .NET easy for RAD developers. EnC is magical when it works correctly. For web developers who are used to hot reloading, it enables that type of rapid development, but maintains state.
2/ Unfortunately, it is an extremely difficult feature to implement in a JIT'ed world, as we discovered with .NET 1.0. We actually had a version of EnC in the early releases of VS 2002.
3/ I'm fairly sure it persisted all the way up to Beta 1, though the history of when we removed it is a little hazy. The initial implementation wasn't coalescing: there were a huge number of bugs, it performed poorly, and it often corrupted the debuggee.
1/ @werat asked about whether the debugger was using the C# compiler or language service in VS 2002. It was not. The debugger has a component called an ‘expression evaluator’ that is provided per language and is responsible for parsing and evaluating expressions when stopped at a
2/ breakpoint. For example, if you type into the immediate window, hover over a variable, type into the watch window, etc., the expression evaluator is involved. The debugger and the language service are actually deeply integrated in a number of scenarios in VS, which may
3/ initially seem surprising. I may talk about more of these scenarios in the future, but to give a flavor: when you set a breakpoint at design time the language service is involved; when you are using Edit and Continue the LS is involved; the range of what is being evaluated