My opinions on exception/error handling are like so: I (and I believe most other programmers) don't want to spend the time to carefully analyze every possible unlikely error case as I go. Some cases should be carefully handled. But if things go bad in an unexpected fashion, I believe this is what should happen: clean up all resources safely and then crash.
Seriously. And that's what exception handling does, if you design your code right. Either nice automatic resource cleanup (such as what's common in C#, Ruby, Python, and C++, and upcoming in Java 7) or somewhat manual work such as Java finally blocks can take care of cleanup during the crash.
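Here's a minimal sketch of the manual flavor in Java, with made-up names (Resource, handleRequest) rather than any particular API: the finally block releases the resource even though the exception keeps flying upward and "crashes" the request.

    // Minimal sketch: cleanup still runs while the exception propagates.
    // Resource and handleRequest are made-up names, not a real API.
    public class CleanupDemo {
        static class Resource {
            void close() { System.out.println("resource released"); }
        }

        static void handleRequest() {
            Resource r = new Resource();
            try {
                throw new RuntimeException("something unexpected went bad");
            } finally {
                r.close();  // runs even though the exception keeps propagating
            }
        }

        public static void main(String[] args) {
            handleRequest();  // prints "resource released", then the stack trace
        }
    }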
But crash? Yes. The question is the scope of the crash.
The crash should affect only the current request. For a web server, yep, that's the HTTP request. For a database, it would be the current query. For a GUI, it's the current user action.
In olden Unix days, processes were designed to do one thing and do it well. Globals were less (though still somewhat) evil, since every process was sort of like its own object. Global death was also less evil. Therefore, the old-timey C 'assert' feature was somewhat reasonable. If you kill the process, it generally kills the whole pipe chain, which also effectively kills the user command-line request. Quite reasonable in most cases.
If each response to a GUI command ran as its own process (an interesting thought, and probably one somebody has tried before), then killing that process by assert wouldn't be so evil.
But most of us seem to have decided that multi-process is too heavy and slow. We don't really need all that process weight, even going lighter than threads to "stackless" and subthread coroutines, as in Erlang, Stackless Python, and now Go. But in any case, whether threads or something smaller, killing the process by assert is clearly scary. Instead of killing the "request", you crash the whole system. Bad news.
Enter exception handling, which works well enough that almost all recent programming languages use very similar models with equivalents of 'throw' and 'catch'.
What doesn't work is expecting that everyone will want to pay attention to every possible unlikely error case along the way. Checked exceptions in Java and manual error code checking in C lead to the same problem: people ignore errors. Checked exceptions lead to ignored errors? Yes. I've seen it a lot, especially in the pre-chaining days in Java. And it's still an easy thing to do today. Why? Because I don't know what to do with the checked exception, but I have to catch it. So I catch it. And um, if I'm extra lazy (and surely no one's lazy, right?), I just ignore it. Maybe I'm nice enough to log it or wrap it.
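Here's roughly what that lazy catch looks like, with made-up names (Config, loadGreeting) standing in for anything that declares a checked exception:

    import java.io.IOException;

    // A sketch of the lazy catch. Config is a stand-in for any class whose
    // method declares a checked exception you don't know how to handle.
    public class LazyCatch {
        static class Config {
            static String load(String key) throws IOException {
                throw new IOException("config unavailable");
            }
        }

        static String loadGreeting() {
            try {
                return Config.load("greeting");
            } catch (IOException e) {
                // I have to catch it, but I don't know what to do with it.
                // Maybe I'm nice enough to log it; either way the error is
                // swallowed and the caller limps along with a null.
                return null;
            }
        }

        public static void main(String[] args) {
            System.out.println("greeting = " + loadGreeting());  // greeting = null
        }
    }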
But manual error code checking isn't just likely to be ignored; it's begging to be ignored, since nothing even forces you to look at the return value.
And if you ignore errors, instead of crashing the request (and cleaning up resources), you end up in an unstable, dangerous program state.
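C is the usual culprit, but the same thing happens anywhere errors ride along as return values. A quick Java sketch of how quietly an error can vanish:

    import java.io.File;

    // Sketch of an ignored error return. File.delete reports failure with a
    // boolean, and nothing forces anyone to look at it.
    public class IgnoredReturn {
        public static void main(String[] args) {
            new File("/no/such/path").delete();  // returns false; nobody notices
            System.out.println("carrying on as if the delete worked...");
        }
    }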
So, the fine folks behind Go (and yes, some of them obviously have many more years under their belts than me) say they don't need exceptions and can't make them work with serious parallelism anyway. All well and good, but expect most programmers to create unstable, buggy code.
You can't expect programmers to write "a good error message now", emphasis on the now. If you do, expect instead that laziness will lead to extreme bugs.
Seriously, figure out how to crash a specific "request" with automatic cleanup when errors happen, or expect bugs.
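Here's a rough sketch of that idea in Java, with made-up names (Request, Resources, process) rather than any particular framework: the catch sits at the request boundary, cleanup rides on finally, the failure gets logged once, and only that one request dies.

    import java.util.Random;

    // Rough sketch of a per-request crash boundary. Request, Resources, and
    // process are placeholders for illustration, not a real framework.
    public class RequestLoop {
        static class Request {
            final int id;
            Request(int id) { this.id = id; }
            public String toString() { return "request#" + id; }
        }

        static class Resources {
            void close() { System.out.println("  cleaned up"); }
        }

        static final Random random = new Random();

        static void process(Request req, Resources res) {
            if (random.nextBoolean()) {
                throw new RuntimeException("something unexpected went bad");
            }
            System.out.println("  handled " + req);
        }

        public static void main(String[] args) {
            for (int i = 1; i <= 5; i++) {      // stand-in for "while requests keep arriving"
                Request req = new Request(i);
                Resources res = new Resources();
                try {
                    process(req, res);          // an unexpected exception "crashes" only this request
                } catch (RuntimeException e) {
                    // One log line per failed request; the loop keeps serving the rest.
                    System.err.println(req + " failed: " + e.getMessage());
                } finally {
                    res.close();                // cleanup runs either way
                }
            }
        }
    }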
Oh, and figuring out how to produce only one error report in the logs for each failure is also awesome. Those of you who've worked in complex Java software (checked exceptions everywhere) can imagine what I mean by this. Similar issues seem easy to come by in a land of extreme parallelism, if you don't draw that "request" circle well.