The basic problem of analogue modeling goes right to its core: the simulation of analogue feedback loops. You can model an analogue circuit at the component level and still have this problem, because analogue circuits for sound synthesis are full of feedback loops.
Why is this a problem?
Analogue circuits are at their root a way to control the flow of electrons. A straight piece of wire is the simplest analogue circuit; they grow in complexity from there. When you introduce a feedback loop in an analogue circuit, there's a stream of electrons from the output of the circuit back to its input. The signal travels at a large fraction of the speed of light, meaning that the output of the circuit is a continuous function of both the circuit's input and its output, and the flow makes millions of round trips per second through the feedback path.
The actual value of the circuit's output is described by one of those limits that blow your mind in first-year calculus: as X tends to infinity, Y tends to Z, where X is the number of round trips the current can make through the feedback path. In this case X tends to a number that isn't actually infinite, but might as well be.
Digital simulations of feedback loops don't have the luxury of making millions of round trips per second. To simulate feedback, all you can do is go back in time (i.e. introduce a delay) and let a previous output stand in for the continuous looping of the analogue circuit. Mathematically this amounts to
F(t) = F'(F(t-1))
Which is to say, the output of F is a function of the value of F one sample ago. This is a pretty nasty simplification of the analogue process it's attempting to model. Now, clever programmer/mathematicians can approach this problem for specific cases: they can look at the original circuit and use a function in the feedback loop that corresponds to the 'infinite sequence' limit embodied in the analogue circuit.
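The unit-delay feedback above can be sketched in a few lines of Python. This is a sketch of the general idea only; the names (`simulate_feedback`, `alpha`) are illustrative and not from any real modular package.

```python
def simulate_feedback(inputs, alpha=0.5):
    """Crude digital feedback: y[n] = x[n] + alpha * y[n-1].

    The previous output sample stands in for the continuously
    circulating signal of the analogue loop -- F(t) = F'(F(t-1)).
    """
    y_prev = 0.0
    out = []
    for x in inputs:
        y = x + alpha * y_prev  # output depends on output one sample ago
        y_prev = y
        out.append(y)
    return out
```

Feed it a single impulse and you can watch the feedback decay one whole sample at a time, which is exactly the coarseness being described: the loop can only "respond" to itself once per sample period.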
But in digital modular synthesis, a feedback loop is a user decision, and the programmers don't have prior knowledge of every possible feedback path, so the crudest approximation of analogue feedback is used. You can improve the approximation of the analogue behavior by oversampling, but there's only so much CPU bandwidth to go around. That's why selecting a high internal sample rate in Reaktor can sound better, even if the output is being downsampled to 44.1 kHz.
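A toy illustration of why oversampling helps (names and structure are my own, not Reaktor's internals): iterating the unit-delay loop several times per output sample lets it settle toward the value the analogue loop would reach almost instantly. For the loop y = x + alpha*y with |alpha| < 1, that settled value is the limit x / (1 - alpha).

```python
def feedback_oversampled(inputs, alpha=0.5, factor=4):
    """Run the unit-delay feedback loop `factor` times per output
    sample (a zero-order-hold oversample), keeping one result per
    input. Higher `factor` gets closer to the analogue limit."""
    y = 0.0
    out = []
    for x in inputs:
        for _ in range(factor):
            y = x + alpha * y  # each sub-step is one 'round trip'
        out.append(y)
    return out
```

With factor=1 and x=1.0 the first output is 1.0; with factor=16 it is already within a hundredth of a percent of the limit 1 / (1 - 0.5) = 2.0. More round trips per sample, closer to the analogue behavior -- and more CPU.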
If you don't believe me, all you need to do is compare the output of an analogue synth doing self-FM* of an oscillator with an equivalent patch in a software modular synthesis program. They simply sound nothing like each other. What's worse, self-FM is an aesthetically useful process in the analogue domain, and it usually isn't in Reaktor, or any other software modular I use.
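For context, here is what a naive digital self-FM patch amounts to (a sketch; the function and parameter names are mine, not from any particular synth): the oscillator's previous output sample modulates its own phase, so the one-sample delay sits right inside the modulation path.

```python
import math

def self_fm(freq=110.0, sr=44100, n=64, index=1.0):
    """Naive digital self-FM: sin(phase + index * previous output).

    The feedback arrives a whole sample late, which is a big part of
    why this doesn't behave like an analogue oscillator FM'ing itself.
    """
    phase = 0.0
    y_prev = 0.0
    out = []
    for _ in range(n):
        y = math.sin(2 * math.pi * phase + index * y_prev)
        y_prev = y  # feedback delayed by exactly one sample
        out.append(y)
        phase += freq / sr
    return out
```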
This doesn't mean that Digital Is Shit and Analogue is The Shit, though there are a lot of people who are analogue snobs. It means that they are different, and understanding what each can do means you can use the right tool for the right job. In music technology there's always an urge to introduce emulations of earlier technologies, primarily for economic and logistical reasons. A Sequential Circuits Prophet 5 cost a couple thousand dollars, but it was a lot cheaper than the salaries of a horn section, and it could emulate that horn section, kinda-sorta. Samplers were successful initially because they did a better job than the analogue synths at that same emulation.
Analogue synths started getting interesting when people started exploring their unique capabilities, finding their aesthetically useful possibilities with no attempt to emulate something else. E.g., a Roland TB-303 is a failed simulation of a bass player, but it came to define whole new genres of music once people gave up programming the bassline for 'Ticket To Ride' and started playing with the knobs. Sampling started getting interesting when people started experimenting with it for its own sake, rather than using it to fake a traditional instrument.
I think that we've only scratched the surface of what digital synthesis can really do. There are whole worlds of sound out there yet to be discovered, that will only be discovered when you go beyond aping what Bob Moog was doing 40 years ago.
*There's a difference between digital FM (as invented by Chowning, and most widely heard in the Yamaha FM synths) and digitally simulating analogue FM. Chowning's technique was really a reaction to the limitations of computing at the time: since Chowning FM reduced FM to a series of table lookups, it was extremely efficient, and was by far the cheapest way to introduce variations in timbre given the CPU speeds of the day. Chowning has a lot to answer for, though; I cringe every time I hear a DX7 electric piano patch. Nothing says cheesy imitation of real instruments like FM synths.
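A rough sketch of why table-lookup FM was so cheap (the code and names here are illustrative, not Chowning's actual implementation): one precomputed sine table, and each output sample is just two lookups and an add -- sin(carrier_phase + index * sin(mod_phase)) with phases kept as table positions.

```python
import math

TABLE_SIZE = 1024
# one shared sine wavetable -- the only transcendental cost is paid up front
SINE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def fm_sample(carrier_phase, mod_phase, index):
    """One FM output sample via two table lookups.

    Phases are normalized to [0, 1); `index` is modulation depth in
    cycles of the carrier. No sin() calls at sample rate.
    """
    mod = SINE[int(mod_phase * TABLE_SIZE) % TABLE_SIZE]
    return SINE[int((carrier_phase + index * mod) * TABLE_SIZE) % TABLE_SIZE]
```

Crucially there's no feedback path here at all, which is why this kind of FM is cheap and well-behaved digitally in a way that simulated analogue self-FM is not.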