Why is this an issue?

Sascha Brawer suggests: Maybe the confusion arises because there do exist distributed object systems that perform this sort of reference counting, allowing the server to know when no client has a reference left. DCOM has such a mechanism, and ILU has something similar, AFAIK.

Why doesn't CORBA follow these other systems? My personal impression is that this is another case of CORBA doing the right thing instead of the easy thing. Sascha continues:

However, I've read somewhere (I don't remember the exact location, sorry...) that the problem with distributed reference counting is that, first, it consumes network bandwidth, and second, it does not work well with unreliable network connections. According to that article, these drawbacks were the rationale for designing CORBA without refcounting.

This is a problem. If your client disappears, then you've got unneeded objects lying around. All it takes for clients to disappear is for them to be coded by bad programmers, and we all know how unlikely that is, right?

So how does one deal with this? There are several ways. The simplest real solution is a timeout. This is the approach I am using in the GNU Transaction Server: when the timeout expires, I deactivate all of my servant objects and free up their resources. Fortunately, the CosTransactions spec supports timeouts on transactions. Unfortunately, I don't really see how to do timeouts with the normal CORBA event model, so I'm going to have to use the ORBit-specific hack of having a glib event loop which supports both CORBA events and timeouts (a rough sketch of this is below).

Another way, I am told, is something called an Evictor pattern. I should understand this better once my fancy CORBA book arrives in the mail, but my current understanding of it is sketched after the timeout example. The final way is just to leak memory like a sieve leaks goldfish, which is certainly the least work of all.
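For the curious, here is a minimal sketch of what the timeout approach looks like with glib. This is not the actual GNU Transaction Server code: the bookkeeping (my_poa, my_servants), the timeout length, and the header path are all placeholders, and I'm assuming the standard C-mapping calls that ORBit provides (PortableServer_POA_deactivate_object and friends).

/* A minimal sketch of the timeout idea, assuming ORBit's C mapping and
 * the glib main loop.  my_poa and my_servants are hypothetical
 * bookkeeping; the real server would fill them in as it activates
 * servants. */

#include <glib.h>
#include <orb/orbit.h>          /* header path may differ per ORBit version */

#define IDLE_TIMEOUT_MS (5 * 60 * 1000)   /* five idle minutes, for example */

static PortableServer_POA my_poa;   /* set up elsewhere */
static GSList *my_servants;         /* list of PortableServer_ObjectId * */
static guint timeout_id;

/* Called by glib when the timeout fires: deactivate every servant we
 * are still holding so the POA can reclaim their resources. */
static gboolean
idle_timeout_expired (gpointer user_data)
{
  CORBA_Environment ev;
  GSList *l;

  CORBA_exception_init (&ev);
  for (l = my_servants; l != NULL; l = l->next)
    {
      PortableServer_POA_deactivate_object (my_poa,
                                            (PortableServer_ObjectId *) l->data,
                                            &ev);
      CORBA_free (l->data);
    }
  CORBA_exception_free (&ev);

  g_slist_free (my_servants);
  my_servants = NULL;
  timeout_id = 0;

  return FALSE;   /* one-shot: remove this timeout source */
}

/* Arm (or re-arm) the idle timer; call this from every incoming request. */
static void
reset_idle_timeout (void)
{
  if (timeout_id != 0)
    g_source_remove (timeout_id);
  timeout_id = g_timeout_add (IDLE_TIMEOUT_MS, idle_timeout_expired, NULL);
}

Re-arming the timer on every request means the servants only go away after the clients have been quiet for the whole interval, which is exactly the behavior you want for a transaction that may still be in progress.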
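And here, for what it's worth, is my rough understanding of the Evictor idea: keep a bounded set of active servants, and when activating a new one would exceed the bound, deactivate the least recently used. The sketch below is just that idea in glib-flavoured C; MAX_ACTIVE, evictor_poa, and active_ids are made-up names, and a real Evictor would also move an entry to the front whenever its servant is used.

/* A rough sketch of an Evictor: cap the number of active servants and
 * deactivate the least recently used one to make room for new ones.
 * evictor_poa and active_ids are hypothetical bookkeeping. */

#include <glib.h>
#include <orb/orbit.h>

#define MAX_ACTIVE 64

static PortableServer_POA evictor_poa;   /* set up elsewhere */
static GList *active_ids;                /* most recently used at the head */

/* Call this whenever a new servant is activated under evictor_poa. */
static void
evictor_note_activation (PortableServer_ObjectId *oid)
{
  CORBA_Environment ev;

  /* At capacity?  Deactivate the least recently used servant, which
   * lives at the tail of the list. */
  if (g_list_length (active_ids) >= MAX_ACTIVE)
    {
      GList *tail = g_list_last (active_ids);
      PortableServer_ObjectId *victim = (PortableServer_ObjectId *) tail->data;

      CORBA_exception_init (&ev);
      PortableServer_POA_deactivate_object (evictor_poa, victim, &ev);
      CORBA_exception_free (&ev);
      CORBA_free (victim);

      active_ids = g_list_remove_link (active_ids, tail);
      g_list_free_1 (tail);
    }

  /* The newly activated object is now the most recently used. */
  active_ids = g_list_prepend (active_ids, oid);
}

The other half of the pattern, as I understand it, is that an evicted object can be brought back on demand (for instance with a servant manager), so clients holding old references never notice the eviction.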