fiadot_old

A blog for deep consideration and reflection on game server development methodology, in pursuit of leaving work on time!

DSP 2005. 4. 11. 19:51

What are VST plugins? What do I need to get them running?


VST stands for "Virtual Studio Technology" and is a standard plugin interface for audio
effects and instruments, developed by Steinberg Media Technologies GmbH.

VST effects and synthesizers are plugins that you can load into many audio
editing programs and sequencers on the market. VST is currently used on Windows
and Apple Macintosh platforms, although my plugins are for Windows only.

So, to use these plugins, you will need a VST compliant host like Steinberg Cubase,
Emagic Logic or Image-Line FruityLoops (there are lots more). Just extract the plugins
into the program's plugins directory and it should find them.
DSP 2005. 4. 7. 12:01

When using the VSTGUI AppWizard...

Remove the MFC option!

KNOWN ISSUES:
-------------------------------------------------------------
- The created project is currently configured to:
"Use MFC in a shared DLL". Change this to: "Not using MFC"
for all configurations (Project -> Settings) in order to
successfully build the PlugIn DLL.

- The "amount" display doesn't respond to slider changes.

Fixes to these issues will be posted.
DSP 2005. 4. 4. 14:37

[music-dsp] A good guitar distortion algorithm?
Nick music-dsp@ceait.calarts.edu
Mon Dec 1 07:30:01 2003


--------------------------------------------------------------------------------


Hey Christoph,
I have recently done some research into the subject and have a great
article for you. Let me know if it helps (quite frankly it is a little
over my head so if you get it maybe you can share the knowledge). I
guess for that matter if anybody on the list can explain it, I would be
very appreciative.


http://www.tele.ntnu.no/akustikk/meetings/DAFx99/schattschneider.pdf

Good Luck,
Nick

On Monday, December 1, 2003, at 07:29 AM, Christoph Jung wrote:

> Hello everybody,
>
> I'm working on a distortion algorithm for guitar with the aim of a future
> DSP implementation. My question is: what would be the right approach to
> achieve aliasing-free and flexible guitar overdrive with efficient use of
> the processing power?
>
> So far, I've experimented with several non-linear functions like tanh,
> arctan, 1/x, etc., but immediately encounter the problem of audible
> aliasing for higher gain settings - no matter which function is in use.
> Pre- and post-equalization and separation into frequency bands (distortion
> decreases with increasing frequency) can be of help but don't give me full
> control of the harmonic content of the output signal.
>
> A second approach of mine was the implementation of Chebyshev polynomial
> waveshaping, which has been occasionally mentioned in this mailing list.
> In my opinion, this method is fine for full control of harmonic additives
> but only works with not-too-complex input signals. Given a single note or
> a sine as input, my version of polynomial WS produces smooth overdrive
> with absolutely no aliasing. But as soon as polyphonic input signals
> occur, this method produces new tones ("intermodulation partials") which
> sound totally unmusical. This gets worse with increasing complexity of the
> input signal. If a full guitar chord is given as input signal, the
> harmonic output consists of almost nothing but noise - due to the huge
> amount of unwanted intermodulation partials. Knowing that polynomial WS
> works fine with sine-like signals, I could improve my algorithm by
> dividing the input signal into different frequency bands and applying
> waveshaping individually. But what if the input signal consists of two
> tones that lie very close to each other? Do I need to separate these tones
> with two different filters? Doesn't the complexity of such an algorithm
> exceed the computational capacity of a standard low-cost DSP?
>
> In general, my question is: what would be the right way to achieve a
> good-sounding guitar distortion algorithm? Should I continue with the
> polynomial WS? Or should I concentrate on the usage of non-linear
> functions (and maybe apply some oversampling for filtering of unwanted
> partials)? Or are there other ways? How does digital "modelling" of guitar
> amplifiers or stomp boxes work? Is there any expert in guitar effects
> processing around here to give me a hint?
>
> Thanks
> Christoph
>
> dupswapdrop -- the music-dsp mailing list and website: subscription info,
> FAQ, source code archive, list archive, book reviews, dsp links
> http://shoko.calarts.edu/musicdsp
> http://ceait.calarts.edu/mailman/listinfo/music-dsp



Source: http://aulos.calarts.edu/pipermail/music-dsp/2003-December/025472.html
DSP 2005. 4. 1. 16:23

Optimization

Author Topic: Optimization

texture (KVRian) - Posted: Thu Jan 01, 2004 6:09 am

I could do with learning a bit more about optimization techniques for C/C++. In particular for AMD or Intel.

Does anyone know of any decent books / resources?
benski (KVRer) - Posted: Thu Jan 01, 2004 8:18 am

texture,

Don't know of any books. It is a bit of a "gray art" these days and a lot of the older techniques will actually slow down a modern processor. Your best bet may be to check the music-dsp mailing list archives for information.

I'll post some specific examples in a later reply, but just to get things started, here are the basics of AMD/Intel optimization.

Vector Optimization:
SSE lets you load 16 bytes of memory, representing 4 floats, into one of 8 giant 128-bit "multimedia" registers. You can then do most floating point operations using it with other 128-bit mm registers. It basically does 4 floating point operations in the time it normally takes to do one. You can then "stream" the register back to another place in memory. Very useful for volume envelopes, FIR filters, and mixing.
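As a rough sketch of the kind of loop described here (not benski's code; it assumes SSE is available, the buffer is 16-byte aligned and its length is a multiple of 4), applying a gain to a block of samples could look like this:

#include <xmmintrin.h>  // SSE intrinsics

// Multiply 'count' floats by 'gain', four at a time.
// Assumes 'buffer' is 16-byte aligned and 'count' is a multiple of 4.
void applyGainSSE(float* buffer, int count, float gain)
{
    __m128 g = _mm_set1_ps(gain);            // broadcast gain into all four lanes
    for (int i = 0; i < count; i += 4)
    {
        __m128 x = _mm_load_ps(buffer + i);  // load four samples
        x = _mm_mul_ps(x, g);                // four multiplies in one instruction
        _mm_store_ps(buffer + i, x);         // store four samples back
    }
}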

Cache Control:
This is the real make-or-break optimization. L1 cache is very fast (can be accessed instantly). L2 is also fast, but takes a few clock cycles to access. And anything in RAM must be moved into cache before use, and can take on the order of 100 clock cycles.
In a multi-process, multithreaded environment, you can't have any guarantee as to what will be in the cache (a context switch will flush your cache).
In C/C++ you can't control the cache very well, but you can be mindful of how it works. Some of the old optimization techniques, like look-up tables or decrementing loop variables to cut out one instruction in the loop, no longer apply because of cache issues. (Depending on the calculation, a look-up table can be slower to access than the actual calculation - and worse, it may bump something important out of the cache. Incrementing the loop variable is faster because the processor pulls memory into the cache in chunks, so if you've pulled in MyArray[0], it is likely that MyArray[1] through MyArray[15] also got pulled in - the guarantee doesn't go the other way.)

Float-to-int conversions:
The default way that this is done causes a "stall" on the floating point processor, so no other work can get done while this is calculating. There is a short assembly routine (I believe it is available somewhere at www.musicdsp.org) that does it much more quickly.
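The exact routine from www.musicdsp.org is not reproduced here, but as a sketch of the idea: lrintf() (C99/C++11) or the SSE conversion intrinsic both let the compiler use the CPU's rounding conversion instead of the slow default truncation:

#include <cmath>        // lrintf
#include <xmmintrin.h>  // _mm_cvtss_si32

// Fast float-to-int conversion that avoids the FPU stall caused by a plain (int) cast.
inline int fastFloatToInt(float x)
{
    return _mm_cvtss_si32(_mm_set_ss(x));   // or, equivalently: (int)lrintf(x)
}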

Denormalization:
When floating point numbers get very small, the CPU goes into a special mode in order to maintain precision. This is a very, very slow mode =) If you've ever hit "stop" in your host, and seen the CPU meter hit 100%, this is the reason. The change-over occurs roughly at -300dB. There are various techniques for overcoming this problem, such as adding in noise or a slight DC offset (-300dB is waaaay below human hearing).
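A minimal sketch of the DC-offset trick (the constant 1e-18f is just a commonly used value, far below audibility):

// Keep a one-pole feedback path out of the denormal range by mixing in a tiny constant.
static const float kAntiDenormal = 1e-18f;

void processWithFeedback(float* buffer, int count, float& state, float feedback)
{
    for (int i = 0; i < count; ++i)
    {
        state = buffer[i] + feedback * state + kAntiDenormal;  // offset keeps 'state' normal
        buffer[i] = state;
    }
}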

Benchmarking:
You should always measure any change you make. Because of cache issues, a speed-up in one section of code may well slow down the section following it. There is an x86 instruction to get the clock cycle count (you run it before and after a block of code and calculate the difference).
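That instruction is rdtsc; a sketch using the compiler intrinsic (the header differs per compiler, and someDspBlock() is just a placeholder for the code being measured):

#include <intrin.h>   // __rdtsc on MSVC; GCC/Clang provide it in <x86intrin.h>

unsigned long long before = __rdtsc();        // cycle counter before the code under test
someDspBlock();                               // placeholder for the code being measured
unsigned long long after = __rdtsc();
unsigned long long cycles = after - before;   // approximate cycles spent in the block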

-benski
texture (KVRian) - Posted: Thu Jan 01, 2004 8:37 am

Thanks very much


I've been having a look around and have found the following if anyone is interested...


"Programming SIMD computers is no easy task, but here you will find all the information you need to build fast applications that fully exploit the power of MMX and SSE instructions on the current microarchitectures. "

http://www.tommesani.com/Docs.html
benski (KVRer) - Posted: Thu Jan 01, 2004 11:39 am

Looks like a good site! Thanks


Source: http://www.kvraudio.com/forum/viewtopic.php?t=32922
DSP 2005. 4. 1. 13:38

VST development in detail

What is the VST plugin ID?
From the Steinberg SDK:
"The plug-in declares its unique identifier. The host uses this to identify the plug-in, for instance when it is loading effect programs and banks. [...] Be inventive; you can be pretty certain that someone already has used 'DLAY'. The unique plugin ID is set in the constructor of your plugin and consists of 4 characters, it must be different for every plugin/project you create. You have the option to check or (freely) register your unique plugin ID at this Steinberg site: http://service.steinberg.de/databases/plugin.nsf/plugIn?openForm

Does my plugin need a GUI (graphical user interface)?
No, the SDK provides a basic way of displaying changing parameters, so you just have to tell the host what parameters you have. Please note that different hosts display this so-called "generic plugin interface" quite differently - compare for example Orion, Cubase, Wavelab and FruityLoops!

What is the VSTGUI?
From the SDK: "The VSTGUI Libraries, that are available for all four supported platforms, demonstrate how providing a custom user interface can also be as cross-platform as writing a VST plug-in the first place." Actually, the VSTGUI is another set of routines, also provided by Steinberg and included with the SDK download, that can be used to create interfaces in a cross-platform way: Steinberg provides all the control methods (buttons, switches, sliders), the coder supplies the bitmaps and the behaviour. The big advantage is that you are mostly platform independent in developing an interface (GUI=graphical user interface) for your plugin, the disadvantage is that you are limited to the functions Steinberg provides. For the latest VSTGUI versions, go here: http://ygrabit.steinberg.de

Can I use my own graphical user interface library?
Yes, nothing keeps you from using your own (or other third-party) libraries for your compiler, not only for graphical interfaces but also for other issues like signal processing. After all, a plugin DLL is in many aspects the same as an EXE application, therefore the same flexibility is possible. If you use some components/libraries that are not open source and only available for one OS platform, you will lose platform compatibility.

What are the differences between process() and processReplacing()?
Here I may again just quote from the SDK:
"Audio processing in the plug is accomplished by one of 2 methods, namely process(), and processReplacing(). The first one must be provided, while the second one is optional (but it is highly recommended to always implement both methods). While process() takes input data, applies its pocessing algorithm, and then adds the result to the output (accumulating), processReplacing() overwrites the output buffer. The host provides both input and output buffers for either process method.
The reasoning behind the accumulating version process() is that it is much more efficient in the case where many processors operate on the same output (aux-send bus), while for simple chained connection schemes (inserts), the replacing method processReplacing() is more efficient."

Generally it is a good idea to
- support both types by calling canProcessReplacing() in your constructor
- write a doProcess() function and call this from both methods (see the sketch below)
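A minimal sketch of that pattern, assuming the VST 2.x SDK and a hypothetical doProcess() helper (a simple mono pass-through stands in for the real algorithm):

// process() adds to the output, processReplacing() overwrites it;
// both delegate the actual DSP to doProcess().
void MyPlugin::process(float** inputs, float** outputs, long sampleFrames)
{
    doProcess(inputs, outputs, sampleFrames, true);    // accumulating
}

void MyPlugin::processReplacing(float** inputs, float** outputs, long sampleFrames)
{
    doProcess(inputs, outputs, sampleFrames, false);   // replacing
}

void MyPlugin::doProcess(float** inputs, float** outputs, long sampleFrames, bool accumulate)
{
    float* in  = inputs[0];
    float* out = outputs[0];
    for (long i = 0; i < sampleFrames; ++i)
    {
        float y = in[i];                        // the real algorithm goes here
        out[i] = accumulate ? out[i] + y : y;   // add to or overwrite the host buffer
    }
}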

What are the differences between suspend/resume and constructor/destructor?
The resume and suspend methods can be considered as secondary constructors and destructors in your base plugin class. resume() is called whenever your plugin is activated (usually upon construction and when it leaves bypass mode) and suspend() is called whenever it is deactivated (put into bypass mode). If your plugin has variables that need to be reset when audio playback starts up (for example, you might have a feedback buffer of some sort, which is always the case with FIR & IIR filters), then these are the places to do that.

Things that only need to happen once (memory allocations, basic plugin initializations, etc.) should be taken care of in your constructor. Memory deallocations should be taken care of in your destructor. Because resume() and suspend() might be called during audio playback, you don't want to do anything during them that is more than what's necessary. It might be a good idea to put more costly operations in your suspend() function, since that will occur at a point when the DSP load eases up (since your plugin has just been deactivated) and therefore is less likely to cause a CPU overload at that point.
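For example, a sketch with a hypothetical feedback buffer and filter state:

#include <cstring>   // memset

// resume() is called whenever the plugin is activated:
// reset audio state here, but leave allocations to the constructor.
void MyPlugin::resume()
{
    memset(fDelayLine, 0, kDelaySize * sizeof(float));  // hypothetical feedback buffer
    fZ1 = 0.0f;                                         // hypothetical IIR filter state
}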

What does "interrupt processing" mean and what problems might occur?
Under Mac OS, audio routines happen at the interrupt level. More specifically for VST plugins this means that process() and processReplacing() get called in the interrupt. Therefore you can only use interrupt-safe routines, which basically means that you should stick to calculations and memory accesses, but no allocating or deallocating of memory and no file routines unless they are done asynchronously. This also means that you shouldn't handle anything having to do with the GUI during process() or processReplacing(). If you need something done with the GUI on a regular basis, then overwrite the idle() method.

Here are two links from Apple for more information about interrupt processing:
interrupt-safe routines: http://developer.apple.com/technotes/tn/tn1104.html
asynchronous file access: http://developer.apple.com/techpubs/mac/Files/Files-139.html

Audio handling under Windows is not done in the interrupt, so none of the above applies, although generally you'll still want to follow most of the same rules so that you don't spend too much time during any process cycle and cause audio dropouts by not returning your results soon enough.

What is the idle() function used for?
Actually there are two idle functions in the VST world: idle() and fxIdle()
You can use them if you need something to happen regularly in your plugin and don't want to use the time-critical process() and processReplacing() functions (which should be used for actual sound processing only).

1. idle() is a GUI method that is called regularly whenever your plugin's editor window is open, so you might for example put some code for activity LEDs or screen updates in there. You must add the idle() function to your editor class and rewrite it to include whatever you want done in it. Also, make sure to call this at the end of your idle: AEffGUIEditor::idle();
Keep in mind that idle() is called very frequently. It depends on the host, but it can be called several times per processing block. If your routine doesn't need to happen that often, then you should create some sort of flag to check on during idle() so that your idle routine will know whether to do what it's supposed to do or not.

2. fxIdle() is an idle method that you can include in your base plugin class. This will be called regularly whether or not your plugin has an editor window open. In order to ensure that it is called, you should call needIdle() during your base plugin class' constructor. Also, make sure that your fxIdle() method returns 1, which is its signal to the host that it wants to be called again. fxIdle() seems to be called not nearly as regularly as idle(), only every several processing blocks.
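A sketch of both idle hooks (class names, flags and the exact return type follow the conventions described above and may differ between SDK versions):

// GUI idle: runs only while the editor window is open.
void MyEditor::idle()
{
    if (fMeterDirty)              // hypothetical flag set from the audio thread
    {
        redrawLevelMeter();       // hypothetical GUI update
        fMeterDirty = false;
    }
    AEffGUIEditor::idle();        // always call the base class at the end
}

// Plugin idle: call needIdle() in the constructor so the host keeps calling this.
long MyPlugin::fxIdle()
{
    doHousekeeping();             // hypothetical periodic task
    return 1;                     // ask the host to call us again
}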

How do I know how many samples to process in one block?
The value of sampleFrames that is passed along to process() and processReplacing() is the number of samples in the current processing block. Unfortunately, this number is unpredictable. You can assume that it will be a nonnegative number, but you can't assume anything else about it (that means that it can even be 0, although that is unlikely).

The number can vary between different hosts, but also from one processing block to the next with the same host. Therefore if your plugin requires a particular number of samples at a given time (for FFT or other analysis, look-ahead dynamics processing, etc.) or simply a consistent number, then you need to create your own buffer. This means that your output will be delayed. This delay can be compensated by using the setInitialDelay() function.

How does the setInitialDelay() function work?
If you use your own fixed-size buffer for audio processing, or have used a filter, look-ahead processor or similar algorithm that introduces a delay in the signal chain, this delay can be compensated.
This does of course not work when processing realtime audio input; that is a drawback you can't do anything about. But if you're processing prerecorded audio, you can use setInitialDelay() to inform the host of your plugin's output delay, given in samples. This should be called in your constructor or during your resume() method. Not all hosts pay attention to setInitialDelay(), but you should still call it if your plugin will have a delay. Most hosts support only the first call of setInitialDelay() in the constructor, so variable delay compensation is not possible in most cases.
Also bear in mind that in Cubase, delay compensation works only if the effect is used as an insert, not as a send effect and does not work at all (neither insert nor send) on group channels.
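A sketch, assuming a hypothetical look-ahead buffer of kLookAhead samples:

// Report the plugin's latency (in samples) to the host.
// Most hosts only honour the value given here, in the constructor.
MyLimiter::MyLimiter(audioMasterCallback audioMaster)
    : AudioEffectX(audioMaster, kNumPrograms, kNumParams)
{
    setInitialDelay(kLookAhead);   // e.g. the length of the look-ahead buffer
}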

Can an effect plugin receive MIDI input?
Yes, this is part of the VST 2.0 specifications and it works perfectly in Cubase and Nuendo for example, other hosts might not necessarily support it (yet).

Can a plugin/synth send MIDI data to the host?
Yes, this is part of the VST 2.0 specifications and it works in some hosts like Cubase, Nuendo, FruityLoops and Orion for example. The command used for this is "sendVstEventsToHost", see audioeffectx.h for more info.

How many parameters can my plugin have?
Practically unlimited. I have successfully tried plugins with several thousand single parameters and did not encounter problems, except for Emagic's Logic: Logic 4.x has an internal limit of about 32 kbyte for a VST plugin preset bank. This is Logic's bug and in no way justified! If the size of the program in Logic is larger than this 32k boundary, the preset data is simply not saved. However this behaviour seems to have changed with version 5.x of Logic.

What is the "chunk" method for parameter saving?
There are two internal ways to save the data of your current preset or the whole bank to an FXP/FXB file: single parameters and chunks.

Single parameters are most common for smaller projects and should suffice in most cases. Here, every single parameter that is defined in your plugin class can be set independently with the setParameter() function. An example would be a call like setParameter(kCutoff, fCutoff), which would set the parameter with the index kCutoff to the value of fCutoff.
(By the way, generally it is a good idea to name your index constants corresponding to the float variable they represent, like kParX and fParX.)
The maximum amount of parameters and programs per bank is set within the constructor:
APlugin::APlugin(audioMasterCallback audioMaster): AudioEffectX(audioMaster, kNumPrograms, kNumParameters)

The disadvantage of this method is that you can save only float values within your preset, so whenever you have to include additional information like arrays, strings, names, graphics or even sample data, an extensive conversion from these formats into float values needs to be done. It is not impossible, but it is easier to do this with chunks.

chunks are simply data blocks which can contain just about anything. Here, no fixed structure is given, instead it is your task to define within the getChunk() and setChunk() functions, what information is to be found where. You just give a pointer to the start of your data block in memory and how much should be read from there. Please check audioeffect.h for more info.

Normally, single parameters are used in a project; to switch to the chunk method, call programsAreChunks() within your constructor. You can use only one of the two methods in your plugin!

Please keep in mind that there is a big disadvantage when using the chunk method: you can no longer automate single parameters via the host :-( ! So the gain in flexibility concerning parameter/preset management automatically limits the ease of use concerning automation.
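A rough sketch of the two chunk callbacks; the state struct and member names are placeholders, and programsAreChunks() is called in the constructor as described above:

#include <cstring>   // memcpy

struct MyState          // placeholder: everything one preset needs to remember
{
    float cutoff;
    float resonance;
    char  name[32];
};

// Hand the host a pointer to the state block and return its size in bytes.
long MyPlugin::getChunk(void** data, bool isPreset)
{
    *data = &fState;             // fState is a MyState member of the plugin
    return sizeof(MyState);
}

// Restore the state from a block the host saved earlier.
long MyPlugin::setChunk(void* data, long byteSize, bool isPreset)
{
    if (byteSize == sizeof(MyState))
        memcpy(&fState, data, sizeof(MyState));
    return 0;
}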

What about parameter automation?
From the SDK:
"The plug-in process can use whatever parameters it wishes, internally to the process, but it can choose to allow the changes to user parameters to be automated by the host, if the host application wishes to do so."
And you might add: if the host application supports this! Cubase 3.x and 5.0 allowed only the first 16 parameters of a plugin to be automated, higher parameter index numbers were put in completely wrong internal places and did not work. The latest Cubase 5.1 and Cubase SX have this 'bug' fixed.
Furthermore, the above mentioned method is possible only for single parameter plugins, not for plugins using the chunk method! And for single parameter plugins, automation only works where the setParameterAutomated() function is used and not the regular setParameter()!

Another option would be to let the user automate parameters via MIDI CC controllers. This would require some programming/conversion inside the processEvents() part of your plugin, but it can and should be done. I strongly opt for supporting both automation types and letting the user choose which method suits him (and his favorite host) best.
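A sketch of both routes; the control/parameter mapping and CC number 74 are only examples, and the exact signatures depend on the SDK/VSTGUI version you build against:

// 1) Host automation: report GUI changes with setParameterAutomated(), not setParameter().
void MyEditor::valueChanged(CDrawContext* context, CControl* control)
{
    effect->setParameterAutomated(control->getTag(), control->getValue());
}

// 2) MIDI CC "automation": map controller messages to parameters in processEvents().
long MyPlugin::processEvents(VstEvents* events)
{
    for (long i = 0; i < events->numEvents; ++i)
    {
        if (events->events[i]->type != kVstMidiType)
            continue;
        VstMidiEvent* ev = (VstMidiEvent*)events->events[i];
        if ((ev->midiData[0] & 0xF0) == 0xB0 && (ev->midiData[1] & 0x7F) == 74)  // CC 74 as an example
            setParameterAutomated(kCutoff, (ev->midiData[2] & 0x7F) / 127.0f);
    }
    return 1;   // tell the host we want to keep receiving events
}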

My plugin does not show up at all, or it shows up in the wrong places (no insert, only send, or an effect listed as a VSTi)!
If it does not show up at all, make sure you've copied it into the correct "VSTPlugins" directory of your host (check host settings/preferences). Most times there are several of these directories on the system and if it's not in the right one, the host won't find your plugin.

If you used MS Visual C++ to compile and it is in the right directory, but still does not show up, the "main" function was probably not exported correctly. Check if you have used the DEF file in your project!

If it shows up in the wrong category, you have to check several points (a constructor sketch follows the list):

- a VSTi should call "isSynth" in the constructor and "wantEvents" in resume

- an effect plugin should never call "isSynth", but may call "wantEvents" for MIDI data

- some "CanDos" should always be set properly, e.g. "ReceiveVSTMIDIEvents" or "PlugAsSend"

- the number of input/output channels should be set properly in the constructor, only even values please!
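Putting those points together, a VSTi constructor could look roughly like this (all names are placeholders, VST 2.x SDK assumed):

MySynth::MySynth(audioMasterCallback audioMaster)
    : AudioEffectX(audioMaster, kNumPrograms, kNumParams)
{
    setNumInputs(0);          // a synth usually has no audio inputs
    setNumOutputs(2);         // stereo output; keep channel counts even
    canProcessReplacing();
    isSynth();                // an effect plugin must NOT call this
    setUniqueID('MySy');
}

void MySynth::resume()
{
    wantEvents();             // ask the host for MIDI events
}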

Is it possible to debug my plugins in realtime using breakpoints?
That depends on your host and debugger. In my experience, Orion and FruityLoops for example work very well together with debuggers. Wavelab crashes most of the time, but the Wavelab demo works. Most of the time you should use hosts like the above that load quickly and can access plugins comfortably, as booting up the whole Cubase or Logic system takes quite some time on most computers. Nevertheless a check with Cubase (demo or full version) should always be made before releasing your plugin, as it is the original and most widespread VST host application.

I change the code in the process() function, but nothing happens!
Please read the manual carefully for the difference between process() and processReplacing()! You probably called canProcessReplacing() in the constructor, then the process() is not called at all, but the processReplacing() function instead! Generally you should support both functions, maybe let them call the same sub-process function to make life easier.

Orion displays my GUI in a wrong way, it appears cropped or too large!
Make sure you set the ClientWidth and ClientHeight of the editor window form to the correct values; the getRect() function must also return the right values, or else the above-mentioned error occurs. This is not Orion's fault but the plugin's!

Logic crashes when switching from controller view back to my GUI!
Seems to be a bug in Logic: editor.close() is called by Logic twice during this action; if there is no window left to close (after the first of the two close commands), it crashes. In your editor.close() function, check whether the window still exists before doing anything.

I just finished my first VST plugin, what now?

- Is it working stably? Have you tested it under several hosts and different operating systems? It should at least be tested with the demo versions of: Cubase 5.x or SX (Steinberg), Wavelab (Steinberg), Logic Audio 5.x (Emagic), FruityLoops 3.x (Image-Line) and Orion (Synapse). If your plugin works fine in all of them, chances are high it will work in most other hosts too.

- Did you register your plugin ID at the Steinberg site?

Go to this link: http://service.steinberg.de/databases/plugin.nsf/plugIn?openForm and register your plugin ID for this plugin, or check at least if it is not used by another plugin already.

- If you think your plugin is worth releasing, post a note on some frequented VST forum like K-V-R (http://www.kvr-vst.com) and give a valid email address and/or website for feedback/bug reports/suggestions.

- Before uploading your plugin to a free web space provider, please bear in mind that many people will try to download it upon reading your post, so your traffic limit might very soon be exceeded (trust me, I know what I am talking about :-) ).

Source: http://www.u-he.com/vstsource/
DSP 2005. 4. 1. 11:32

process() / processReplacing()

What are the differences between process() and processReplacing()?
Here I may again just quote from the SDK:
"Audio processing in the plug is accomplished by one of 2 methods, namely process(), and processReplacing(). The first one must be provided, while the second one is optional (but it is highly recommended to always implement both methods). While process() takes input data, applies its pocessing algorithm, and then adds the result to the output (accumulating), processReplacing() overwrites the output buffer. The host provides both input and output buffers for either process method.
The reasoning behind the accumulating version process() is that it is much more efficient in the case where many processors operate on the same output (aux-send bus), while for simple chained connection schemes (inserts), the replacing method processReplacing() is more efficient."

Generally it is a good idea to
- support both types by calling canProcessReplacing() in your constructor
- write a doProcess() function and call this from both methods

Source: http://www.u-he.com/vstsource/




>So even if the host is so stupid to call process() instead of
>processReplace() where is the harm? Please, don't write something like:
>"Always use process() and processReplace() is optional." We're no kids here,
>explain (briefly) why we shouldn't leave the process() method empty and
>provide only a processReplace()? All my tests show that it works flawlessly.


if it works now that's not a proof that it will do so in the future,
or with other host applications.


in cubase, the current architecture is that a synth output streams into its
own channel, which has a single port by definition. thus, the replacing()
method is called.


however please consider a different scenario, where several plug-ins
(synths or whatever) stream to the *same* port (bus), as for instance
the send fx in cubase do...if there were only replacing methods, the hosting
application would need to make all those plugs stream to a buffer, and
then merge (add, mix) all these buffers together into the destination
buffer which is a very significant performance 'penalty' because of
memory caching issues. if however the plug reads a value from a buffer,
modifies it, and writes it back, this can be handled way more efficiently
by the cache.


you can make no assumption about what the host does with your plug. the idea
is that any application hosting vst plugs should be absolutely free to
use these black boxes as desired, and the (only) one compromise for
the plug to be made because of performance issues is the process ()
method.


either your algorithm is simple, then it will do no harm to copy/paste
the code; or it is more elaborated in which case you will probably
have a subroutine doing the actual stuff such that again copying/pasting
the process method is not a big deal.


also note that you should really always provide both methods, as when
you only provide process(), and your plug should be used streaming to
a single port, host has to first clear the buffer and then call
process() which in turn does one more unnecessary buffer memory access.
hope that helps,
charlie

Source: http://lalists.stanford.edu/lad/1999/05/0708.html


Personally, I think process() is for aux (send) effects, while processReplacing() is for insert effects (on each track).
(I don't know much more about this; don't ask why...)
DSP 2005. 4. 1. 11:27

Understanding the Fourier Transform

Signal processing engineers always think in two domains, time and frequency, before
analyzing a signal or doing work such as filtering.
Looking at a signal in the frequency domain reveals characteristics that are hard to
analyze in the time domain and gives a clearer, more physical insight into the signal.
The Fourier Transform (FT) is what makes this change of view possible.

The above applies to continuous signals; the FT of a signal that has been sampled in
the time domain is called the DTFT (Discrete Time Fourier Transform).

For digital signal processing, the continuous spectrum must also be made discrete in
the frequency domain, so that a computer can work with a finite number of data points;
the FT applied to such a signal is called the DFT (Discrete Fourier Transform).

One important fact here: taking the FT of a signal that is discrete in the time domain
gives a periodic function in the frequency domain, and by the duality theorem, taking
the FT of a signal that is periodic in the time domain gives a discrete spectrum.

Therefore, when a sampled signal is Fourier-transformed, its spectrum appears as a
periodic function, repeating at integer multiples of the sampling frequency.

Finally, the FFT (Fast Fourier Transform) can be seen as a special case that improves
computation speed by combining the redundant complex-number operations of the DFT into
a fast algorithm.
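For reference, the N-point DFT described above is usually written as

X[k] = \sum_{n=0}^{N-1} x[n] \, e^{-j 2\pi k n / N}, \qquad k = 0, 1, \ldots, N-1

and the FFT computes exactly this sum, only in O(N log N) operations instead of the
O(N^2) of direct evaluation.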
DSP 2005. 3. 30. 09:44

Link collection of DSP / audio programming sites


If you feel like diving into programming of plugins yourself (and I would encourage you to do so), here is an (unsorted) collection of links to get more information.
Some of them are pretty well-known among the community of programmers, others found their way into my bookmark list just by surfing the net and feeding the search engines with stupid questions.

Beware: Starting to program plugins can make you addicted, weird, look neglected and may isolate you from your normal environment :D
Furthermore, it can be very frustrating when things don't work as expected and your computer keeps on crashing. I recommend you to look for someone to pick you up from time to time... ;)



[NOTE:
- I am not responsible for the content the people behind these web sites offer.
- If a link on this page is no longer available, please let me know.]

Source: http://www.digitalfishphones.com/main.php?item=3&subItem=2
DSP 2005. 2. 26. 10:19

Implementing game sound with VST/VSTi

If I applied these, it would be pretty incredible...

Or maybe a Korean-language Vocaloid, heh heh heh...