Question about Mixing/Mastering

Hi friends, I have a question about mixing & mastering (I hope I don't embarrass myself).
I often see producers and composers render the MIDI tracks they created in their DAW and then continue the mixing and mastering process in audio format.
My question is: what is the advantage of this approach?
I should mention that in my case no tracks are recorded externally; all instruments are VST-based. I can apply any effect, in any way, to each of the MIDI and audio tracks.
I searched some forums for a reason to work that way, but I didn't find any hard, valid one. One guy said it was a question of workflow: he finishes the MIDI work and doesn't want to go back to MIDI and add things later.
My question is more about mixing/mastering quality: are there any differences in quality between the two approaches?

Well, if you ask 10 producers you will get 11 opinions. There are some pros and cons to rendering VST tracks before mixing. First, the CPU is less stressed, especially if you are using a power-hungry VST like Diva; the same goes for memory-intensive Kontakt libraries, e.g. Orchestral Tools. The other reasons are quite debatable. Working at a low bit depth and sample rate can lead to artefacts, errors, intersample peaks and so on, which rendering your VST tracks can partly mitigate, though intersample peaks may still occur even after rendering. That's why I strongly suggest that your projects be at least 24-bit/48 kHz, and optimally 32-bit floating point at 48 kHz (another debatable subject).
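
To make the intersample-peak point concrete: every sample of a track can sit at or below 0 dBFS while the reconstructed waveform between the samples swings higher. Here is a minimal Python sketch of the idea (NumPy/SciPy, nothing DAW-specific; the fs/4 sine is a textbook worst case chosen purely for illustration), estimating the true peak by oversampling:

```python
# Minimal sketch: estimate the "true peak" of a signal by oversampling.
# A sine at fs/4 with a 45-degree phase offset is never sampled at its
# crest, so the sample peak reads 0 dBFS while the reconstructed
# waveform actually reaches about +3 dBTP -- an intersample peak.
import numpy as np
from scipy.signal import resample_poly

fs = 48_000                        # project sample rate
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)
x /= np.max(np.abs(x))             # normalize sample values to 0 dBFS

sample_peak = np.max(np.abs(x))
# 8x oversampling approximates the continuous-time reconstruction.
true_peak = np.max(np.abs(resample_poly(x, up=8, down=1)))

to_db = lambda v: 20 * np.log10(v)
print(f"sample peak: {to_db(sample_peak):+.2f} dBFS")   # +0.00
print(f"true peak:   {to_db(true_peak):+.2f} dBTP")     # ~ +3.01
```

True-peak meters and limiters do essentially this internally, and it's also why a 32-bit float mix bus is forgiving: the format can represent values above 0 dBFS, so those overs don't actually clip until conversion to fixed point.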

3 Likes

One reason people do it is to get the timing more accurate. When you can see the rendered audio files of drums, for example, you can line them up perfectly (or deliberately not); at the very least you get better insight into what you're doing, because you can zoom into the waveforms (see the sketch below for a numeric take on this).
Others save CPU this way, and some indeed like to use rendering to separate composing from mixing.
When you have separate audio files you can also cut them up and apply effects to individual segments, rather than to the whole channel. It's a bit like triggering effects with MIDI automation, but more committed, applied to individual pieces of audio.
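
To put a number on the timing point above: once tracks are rendered to audio, the offset between two takes can be measured directly by cross-correlating the waveforms. A minimal sketch, assuming NumPy and using a synthetic "drum hit" in place of real rendered files:

```python
# Minimal sketch: find how far one rendered track lags another via
# cross-correlation. In a DAW you'd judge this by eye on the zoomed-in
# waveforms; this just expresses the same idea numerically.
import numpy as np

def offset_in_samples(a: np.ndarray, b: np.ndarray) -> int:
    """Lag (in samples) at which b lines up best inside a."""
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

fs = 48_000
t = np.arange(fs // 10) / fs
hit = np.exp(-40 * t) * np.sin(2 * np.pi * 60 * t)  # stand-in drum hit
late = np.concatenate([np.zeros(96), hit])          # same hit, 96 samples late

lag = offset_in_samples(late, hit)
print(f"offset: {lag} samples = {1000 * lag / fs:.2f} ms")  # ~2.00 ms
```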

3 Likes

I think saving RAM and CPU power is the main reason in most cases. Indeed, seeing the waveform can help you make better mixing decisions and solve some timing issues. But sometimes I change the timbre of an instrument, sometimes I need to correct the part it plays, and in those cases having a WAV just can't help. So I prefer to stay with MIDI and keep all the VSTs live rather than frozen and rendered. Of course, if you're writing an orchestral piece with a profound knowledge of music theory, chances are you won't need to change anything, so it might be a smart move to render all the tracks and work with WAVs, just to have more computing power.

2 Likes

Thanks for your answers and opinions, guys. As @SoniqBranding mentioned, it seems to be debatable. The reason for my question is that I've been working with MIDI and live VSTs (like @Theo_Sound) and have never had any noticeable issue. So it's good to see that there are multiple ways to get the work done, each with its own pros and cons. I just wanted to make sure I wasn't making a mistake simply by not knowing about an alternative approach.

Another reason to keep a rendered version of your projects (in addition to the original) is that you always have the option to go back, print out stems, and make changes to them years later, even if some of your virtual instruments are outdated and no longer working.
Also, if you're on a laptop but your virtual instruments live on external drives, you can mix or play back your projects away from the studio.

2 Likes

Those are two good points, @Hyperprod! Thank you.