DSP 2005. 4. 1. 13:38

VST Development in Detail

What is the VST plugin ID?
From the Steinberg SDK:
"The plug-in declares its unique identifier. The host uses this to identify the plug-in, for instance when it is loading effect programs and banks. [...] Be inventive; you can be pretty certain that someone already has used 'DLAY'. The unique plugin ID is set in the constructor of your plugin and consists of 4 characters, it must be different for every plugin/project you create. You have the option to check or (freely) register your unique plugin ID at this Steinberg site: http://service.steinberg.de/databases/plugin.nsf/plugIn?openForm

Does my plugin need a GUI (graphical user interface)?
No, the SDK provides a basic way of displaying changing parameters, so you just have to tell the host what parameters you have. Please note that different hosts display this so-called "generic plugin interface" very differently; compare, for example, Orion, Cubase, Wavelab and FruityLoops!

What is the VSTGUI?
From the SDK: "The VSTGUI Libraries, that are available for all four supported platforms, demonstrate how providing a custom user interface can also be as cross-platform as writing a VST plug-in the first place." Actually, the VSTGUI is another set of routines, also provided by Steinberg and included with the SDK download, that can be used to create interfaces in a cross-platform way: Steinberg provides all the control methods (buttons, switches, sliders), the coder supplies the bitmaps and the behaviour. The big advantage is that you are mostly platform independent in developing an interface (GUI=graphical user interface) for your plugin, the disadvantage is that you are limited to the functions Steinberg provides. For the latest VSTGUI versions, go here: http://ygrabit.steinberg.de

Can I use my own graphical user interface library?
Yes, nothing keeps you from using your own (or other third-party) libraries for your compiler, not only for graphical interfaces but also for other tasks like signal processing. After all, a plugin DLL is in many respects the same as an EXE application, so the same flexibility is possible. If you use components/libraries that are not open source and only available for one OS platform, you will lose platform compatibility.

What are the differences between process() and processReplacing()?
Here I may again just quote from the SDK:
"Audio processing in the plug is accomplished by one of 2 methods, namely process(), and processReplacing(). The first one must be provided, while the second one is optional (but it is highly recommended to always implement both methods). While process() takes input data, applies its pocessing algorithm, and then adds the result to the output (accumulating), processReplacing() overwrites the output buffer. The host provides both input and output buffers for either process method.
The reasoning behind the accumulating version process() is that it is much more efficient in the case where many processors operate on the same output (aux-send bus), while for simple chained connection schemes (inserts), the replacing method processReplacing() is more efficient."

Generally it is a good idea to
- support both types by calling canProcessReplacing() in your constructor
- write a doProcess() function and call this from both methods, as sketched below
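
A minimal sketch of that layout (the class APlugin and its gain parameter fGain are made up; signatures as in the VST 2.4 SDK, older SDKs use long instead of VstInt32):

void APlugin::doProcess (float** inputs, float** outputs, VstInt32 sampleFrames, bool replacing)
{
    float* in = inputs[0];
    float* out = outputs[0];
    for (VstInt32 i = 0; i < sampleFrames; i++)
    {
        float y = in[i] * fGain;  // whatever your algorithm does
        if (replacing)
            out[i] = y;           // overwrite the output buffer
        else
            out[i] += y;          // accumulate into the output buffer
    }
}

void APlugin::process (float** inputs, float** outputs, VstInt32 sampleFrames)
{
    doProcess (inputs, outputs, sampleFrames, false);
}

void APlugin::processReplacing (float** inputs, float** outputs, VstInt32 sampleFrames)
{
    doProcess (inputs, outputs, sampleFrames, true);
}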

What are the differences between suspend/resume and constructor/destructor?
The resume and suspend methods can be considered secondary constructors and destructors in your base plugin class. resume() is called whenever your plugin is activated (usually upon construction and when it leaves bypass mode) and suspend() is called whenever it is deactivated (put into bypass mode). If your plugin has variables that need to be reset when audio playback starts up (for example, a feedback or sample-history buffer of some sort, which is always the case with FIR & IIR filters), then these are the places to do that.

Things that only need to happen once (memory allocations, basic plugin initializations, etc.) should be taken care of in your constructor. Memory deallocations should be taken care of in your destructor. Because resume() and suspend() might be called during audio playback, you don't want to do any more work in them than necessary. It might be a good idea to put more costly operations in your suspend() function, since that will occur at a point when the DSP load eases up (your plugin has just been deactivated) and is therefore less likely to cause a CPU overload at that point.
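
A sketch of the reset idea in resume() (the buffer members are made up; memset needs <cstring>):

void APlugin::resume ()
{
    // wipe the filter history so the new playback doesn't start with stale samples
    memset (delayBuffer, 0, kDelayBufferSize * sizeof (float));
    bufferPos = 0;
    AudioEffectX::resume ();
}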

What does "interrupt processing" mean and what problems might occur?
Under Mac OS, audio routines happen at the interrupt level. More specifically for VST plugins this means that process() and processReplacing() get called in the interrupt. Therefore you can only use interrupt-safe routines, which basically means that you should stick to calculations and memory accesses, but no allocating or deallocating of memory and no file routines unless they are done asynchronously. This also means that you shouldn't handle anything having to do with the GUI during process() or processReplacing(). If you need something done with the GUI on a regular basis, then override the idle() method.

Here are two links from Apple for more information about interrupt processing:
interrupt-safe routines: http://developer.apple.com/technotes/tn/tn1104.html
asynchronous file access: http://developer.apple.com/techpubs/mac/Files/Files-139.html

Audio handling under Windows is not done in the interrupt, so none of the above applies, although generally you'll still want to follow most of the same rules so that you don't spend too much time during any process cycle and cause audio dropouts by not returning your results soon enough.

What is the idle() function used for?
Actually there are two idle functions in the VST world: idle() and fxIdle()
You can use them if you need something to happen regularly in your plugin, and don't want to use the time-critical process() and processReplacing() functions (which should be used for actual sound processing only).

1. idle() is a GUI method that is called regularly whenever your plugin's editor window is open, so you might for example put some code for activity LEDs or screen updates in there. You must add the idle() function to your editor class and override it to include whatever you want done in it. Also, make sure to call this at the end of your idle(): AEffGUIEditor::idle();
Keep in mind that idle() is called very frequently. It depends on the host, but it can be called several times per processing block. If your routine doesn't need to happen that often, then you should create some sort of flag to check on during idle() so that your idle routine will know whether to do what it's supposed to do or not.
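
For example, a sketch of an editor idle() with such a flag (member names like meterNeedsRedraw and redrawMeter() are made up; AEffGUIEditor::idle() is the VSTGUI base call):

void AEditor::idle ()
{
    if (meterNeedsRedraw)     // flag set elsewhere, e.g. from the processing side
    {
        redrawMeter ();       // hypothetical drawing helper
        meterNeedsRedraw = false;
    }
    AEffGUIEditor::idle ();   // always forward to the base class at the end
}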

2. fxIdle() is an idle method that you can include in your base plugin class. This will be called regularly whether or not your plugin has an editor window open. In order to ensure that it is called, you should call needIdle() during your base plugin class' constructor. Also, make sure that your fxIdle() method returns 1, which is its signal to the host that it wants to be called again. fxIdle() seems to be called far less often than idle(), only every several processing blocks.
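
A sketch of that (VST 2.3-era signatures; needIdle() and fxIdle() were deprecated with SDK 2.4; doSomethingPeriodically() is made up):

APlugin::APlugin (audioMasterCallback audioMaster)
    : AudioEffectX (audioMaster, kNumPrograms, kNumParams)
{
    needIdle ();                 // ask the host to start calling fxIdle()
}

long APlugin::fxIdle ()
{
    doSomethingPeriodically ();  // hypothetical regular task
    return 1;                    // tell the host we want to be called again
}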

How do I know how many samples to process in one block?
The value of sampleFrames that is passed along to process() and processReplacing() is the number of samples in the current processing block. Unfortunately, this number is unpredictable. You can assume that it will be a nonnegative number, but you can't assume anything else about it (that means that it can even be 0, although that is unlikely).

The number can vary between different hosts, but also from one processing block to the next with the same host. Therefore if your plugin requires a particular number of samples at a given time (for FFT or other analysis, look-ahead dynamics processing, etc.) or simply a consistent number, then you need to create your own buffer. This means that your output will be delayed. This delay can be compensated by using the setInitialDelay() function.

How does the setInitialDelay() function work?
If you use your own fixed-size buffer for audio processing, or use a filter, look-ahead processor or similar algorithm that introduces a delay in the signal chain, this delay can be compensated.
This of course does not work when processing real-time audio input; that is a drawback you can't do anything about. But if you're processing prerecorded audio, you can use setInitialDelay() to inform the host of your plugin's output delay, given in samples. This should be called in your constructor or during your resume() method. Not all hosts pay attention to setInitialDelay(), but you should still call it if your plugin will have a delay. Most hosts support only the first call of setInitialDelay() in the constructor, so variable delay compensation is not possible in most cases.
Also bear in mind that in Cubase, delay compensation works only if the effect is used as an insert, not as a send effect and does not work at all (neither insert nor send) on group channels.
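
A sketch of reporting a fixed look-ahead latency (kLookAhead is a made-up constant, in samples):

APlugin::APlugin (audioMasterCallback audioMaster)
    : AudioEffectX (audioMaster, kNumPrograms, kNumParams)
{
    setInitialDelay (kLookAhead);  // reported once; most hosts ignore later changes
}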

Can an effect plugin receive MIDI input?
Yes, this is part of the VST 2.0 specifications and it works perfectly in Cubase and Nuendo, for example; other hosts might not necessarily support it (yet).
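
If you want to try it, here is a sketch of the receiving side (VST 2.4 signatures; handleNoteOn() is made up; remember to call wantEvents() in resume()):

VstInt32 APlugin::processEvents (VstEvents* ev)
{
    for (VstInt32 i = 0; i < ev->numEvents; i++)
    {
        if (ev->events[i]->type != kVstMidiType)
            continue;
        VstMidiEvent* me = (VstMidiEvent*)ev->events[i];
        int status = me->midiData[0] & 0xF0;
        if (status == 0x90)  // note-on
            handleNoteOn (me->midiData[1], me->midiData[2]);
    }
    return 1;
}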

Can a plugin/synth send MIDI data to the host?
Yes, this is part of the VST 2.0 specifications and it works in some hosts like Cubase, Nuendo, FruityLoops and Orion, for example. The command used for this is "sendVstEventsToHost"; see audioeffectx.h for more info.
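
A sketch of sending a single note-on from inside the plugin (structures as declared in aeffectx.h):

VstMidiEvent me = {0};
me.type = kVstMidiType;
me.byteSize = sizeof (VstMidiEvent);
me.deltaFrames = 0;           // sample offset into the current block
me.midiData[0] = (char)0x90;  // note-on, channel 1
me.midiData[1] = 60;          // middle C
me.midiData[2] = 100;         // velocity

VstEvents ev = {0};
ev.numEvents = 1;
ev.events[0] = (VstEvent*)&me;
sendVstEventsToHost (&ev);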

How many parameters can my plugin have?
Practically unlimited. I have successfully tried plugins with several thousand single parameters and did not encounter problems, except for Emagic's Logic: Logic 4.x has an internal limit of about 32 kbyte for a VST plugin preset bank. This is Logic's bug and in no way justified! If the size of the program in Logic is larger than this 32k boundary, the preset data is simply not saved. However this behaviour seems to have changed with version 5.x of Logic.

What is the "chunk" method for parameter saving?
There are two internal ways to save the data of your current preset or the whole bank to an FXP/FXB file: single parameters and chunks.

single parameters are most common for smaller projects and should suffice in most cases. Here, every single parameter that is defined in your plugin class can be set independently with the setParameter() function. An example would be a call like setParameter(kCutoff, fCutoff), which would set the parameter with the index kCutoff to the value of fCutoff.
(By the way, generally it is a good idea to name your index constants corresponding to the float variable they represent, like kParX and fParX.)
The maximum amount of parameters and programs per bank is set within the constructor:
APlugin::APlugin(audioMasterCallback audioMaster): AudioEffectX(audioMaster, kNumPrograms, kNumParameters)
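
A sketch of the matching handler pair under that naming convention (kCutoff/fCutoff as above; VST 2.4 signatures):

void APlugin::setParameter (VstInt32 index, float value)
{
    switch (index)
    {
        case kCutoff: fCutoff = value; break;  // values always arrive as floats in 0..1
    }
}

float APlugin::getParameter (VstInt32 index)
{
    switch (index)
    {
        case kCutoff: return fCutoff;
    }
    return 0.f;
}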

The disadvantage of this method is that you can save only float values within your preset, so whenever you have to include additional information like arrays, strings, names, graphics or even sample data, an extensive conversion from these formats into float values needs to be done. It is not impossible, but it is easier to do this with chunks.

chunks are simply data blocks which can contain just about anything. Here, no fixed structure is given; instead it is your task to define within the getChunk() and setChunk() functions what information is to be found where. You just give a pointer to the start of your data block in memory and how much should be read from there. Please check audioeffect.h for more info.
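
A sketch of the pair (the state struct is made up; VST 2.4 signatures; memcpy needs <cstring>):

struct APluginState
{
    float fCutoff;
    char  presetName[28];
    // ... arrays, strings, whatever else you need to store
};

VstInt32 APlugin::getChunk (void** data, bool isPreset)
{
    *data = &state;                // pointer to the start of the data block
    return sizeof (APluginState);  // how many bytes the host should read
}

VstInt32 APlugin::setChunk (void* data, VstInt32 byteSize, bool isPreset)
{
    if (byteSize == sizeof (APluginState))
        memcpy (&state, data, byteSize);
    return 0;
}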

Normally, single parameters are used in a project; to switch to the chunk method, call programsAreChunks() within your constructor. You can use only one of the two methods in your plugin!

Please keep in mind that there is a big disadvantage when using the chunk method: You can no longer automate single parameters via the host :-( ! So the gain in flexibility concerning parameter/preset management automatically limits the ease of use concerning automation.

What about parameter automation?
From the SDK:
"The plug-in process can use whatever parameters it wishes, internally to the process, but it can choose to allow the changes to user parameters to be automated by the host, if the host application wishes to do so."
And you might add: if the host application supports this! Cubase 3.x and 5.0 allowed only the first 16 parameters of a plugin to be automated, higher parameter index numbers were put in completely wrong internal places and did not work. The latest Cubase 5.1 and Cubase SX have this 'bug' fixed.
Furthermore, the above-mentioned method is possible only for single-parameter plugins, not for plugins using the chunk method! And for single-parameter plugins, automation only works where the setParameterAutomated() function is used and not the regular setParameter()!
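
A sketch of the difference, in a made-up VSTGUI control callback (the exact valueChanged() signature depends on your VSTGUI version; effect is the editor's plugin pointer):

void AEditor::valueChanged (CControl* control)
{
    // setParameterAutomated() sets the value AND notifies the host, so the
    // change can be recorded; a plain setParameter() would not be picked up.
    effect->setParameterAutomated (control->getTag (), control->getValue ());
}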

Another option would be to let the user automate parameters via MIDI CC controllers. This would require some programming/conversion inside the processEvents() part of your plugin, but it can and should be done. I strongly opt for supporting both automation types and letting the user choose which method suits him (and his favorite host) best.

My plugin does not show up at all, or shows up in the wrong places (no insert, only send, or an effect listed as a VSTi)!
If it does not show up at all, make sure you've copied it into the correct "VSTPlugins" directory of your host (check host settings/preferences). Most times there are several of these directories on the system and if it's not in the right one, the host won't find your plugin.

If you used MS Visual C++ to compile and it is in the right directory, but still does not show up, the "main" function was probably not exported correctly. Check if you have used the DEF file in your project!
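
For reference, a minimal DEF file looks like this (the library name is made up; "main" is the entry point VST 2.x hosts look for):

LIBRARY "APlugin"
EXPORTS
    main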

If it shows up in the wrong category, you have to check several points:

- a VSTi should call "isSynth" in the constructor and "wantEvents" in resume (see the sketch after this list)

- an effect plugin should never call "isSynth", but may call "wantEvents" for MIDI data

- some "CanDos" should always be set properly, e.g. "ReceiveVSTMIDIEvents" or "PlugAsSend"

- the number of input/output channels should be set properly in the constructor, only even values please!
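
As a sketch, the relevant calls for a VSTi look like this (the class ASynth is made up; wantEvents() was deprecated with SDK 2.4 but older hosts still need it):

ASynth::ASynth (audioMasterCallback audioMaster)
    : AudioEffectX (audioMaster, kNumPrograms, kNumParams)
{
    setUniqueID (CCONST ('A','S','y','n'));
    isSynth ();           // list me as an instrument, not an effect
    setNumInputs (0);
    setNumOutputs (2);    // stereo out; keep channel counts even
    canProcessReplacing ();
}

void ASynth::resume ()
{
    wantEvents ();        // without this, many hosts send no MIDI events
    AudioEffectX::resume ();
}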

Is it possible to debug my plugins in realtime using breakpoints?
That depends on your host and debugger. In my experience, Orion and FruityLoops, for example, work very well together with debuggers. Wavelab crashes most of the time, but the Wavelab demo works. Most of the time you should use hosts like the above that load quickly and can access plugins comfortably, as booting up the whole Cubase or Logic system takes quite some time on most computers. Nevertheless a check with Cubase (demo or full version) should always be made before releasing your plugin, as it is the original and most widespread VST host application.

I change the code in the process() function, but nothing happens!
Please read the manual carefully for the difference between process() and processReplacing()! You probably called canProcessReplacing() in the constructor; then process() is not called at all, but the processReplacing() function instead! Generally you should support both functions, and maybe let them call the same sub-process function to make life easier.

Orion displays my GUI in a wrong way, it appears cropped or too large!
Make sure you set the ClientWidth and ClientHeight of the editor window form to the correct values; also the getRect() function must return the right values, or else the above-mentioned error occurs. This is not Orion's fault but the plugin's!
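
A sketch of a consistent getRect() (the size constants are made up; older SDKs declare the return type as long instead of bool):

bool AEditor::getRect (ERect** erect)
{
    static ERect r = {0, 0, kEditorHeight, kEditorWidth};  // top, left, bottom, right
    *erect = &r;
    return true;
}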

Logic crashes when switching from controller view back to my GUI!
This seems to be a bug in Logic: the editor's close() is called by Logic twice during this action, and if there is no window left to close (after the first of the two close calls), it crashes. Check in your editor's close() function whether the window still exists before doing anything.
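
A sketch of such a guard (frame is the CFrame* member of AEffGUIEditor):

void AEditor::close ()
{
    if (frame)  // Logic may call close() twice; the window may already be gone
    {
        delete frame;
        frame = 0;
    }
}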

I just finished my first VST plugin, what now?

- Is it working stably? Have you tested it under several hosts and different operating systems? It should at least be tested with the demo versions of: Cubase 5.x or SX (Steinberg), Wavelab (Steinberg), Logic Audio 5.x (Emagic), Fruityloops 3.x (Image-Line) and Orion (Synapse). If your plugin works fine in all of them, chances are high it will work in most other hosts too.

- Did you register your plugin ID at the Steinberg site?

Go to this link: http://service.steinberg.de/databases/plugin.nsf/plugIn?openForm and register your plugin ID for this plugin, or check at least if it is not used by another plugin already.

- If you think your plugin is worth releasing, post a note on some frequented VST forum like K-V-R (http://www.kvr-vst.com) and give a valid email address and/or website for feedback/bug reports/suggestions.

- Before uploading your plugin to a free web space provider, please bear in mind that many people will try to download it upon reading your post, so your traffic limit might very soon be exceeded (trust me, I know what I am talking about :-) ).

Source: http://www.u-he.com/vstsource/