• Stereo Microphone Placement

    From Anton Shepelev@21:1/5 to All on Wed Jan 11 16:23:54 2017
    Scott Dorsey to Fred McKenzie:

    I am attempting to set up a couple of micro-
    phones to record an Orchestra performance. In
    order to achieve a stereo effect, it was my im-
    pression that the microphones should be spaced
    several feet apart, one on each side of the cen-
    ter of the stage.

    This is called A-B stereo.

    Only if the microphones are omnidirectional and
    placed closer together, usually about 20-30 cm,
    which is about the ear-to-ear distance. The musi-
    cians are positioned within a 40-degree angle,
    which one may achieve by varying the distance to
    them.

    Other combinations of distance and microphone spac-
    ing are possible, but then you risk getting nasty
    comb filtering or a poor spatial impression:

    S = L * sqrt( 1 + ( 2d/w )^2 )

    where S is the microphone spacing, d the distance
    to the source, w the width of the source, and L the
    wavepath difference that creates a phase difference
    perceived as the location of the source directly to
    the left or right. It may be estimated from the
    recommendation above:
    25 * sin 20° = 8.55 (cm)
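    Anton's spacing formula is easy to check numerically. The sketch below
    (plain Python) reproduces the 8.55 cm estimate for L from the 25 cm
    spacing and 40-degree angle, then applies the formula; the orchestra
    dimensions in the example are hypothetical, not from the thread:

    ```python
    import math

    def ab_spacing(L, d, w):
        """Microphone spacing S for A-B stereo, per S = L * sqrt(1 + (2d/w)^2).

        L -- wavepath difference (cm) that pans a source fully left or right
        d -- distance from the microphone pair to the source (cm)
        w -- width of the source (cm)
        """
        return L * math.sqrt(1 + (2 * d / w) ** 2)

    # L estimated from the 25 cm spacing and 40-degree (+/-20) recording angle:
    L = 25 * math.sin(math.radians(20))
    print(round(L, 2))                        # 8.55 cm, as in the text

    # Hypothetical example: a 15 m wide orchestra recorded from 10 m away
    print(round(ab_spacing(L, 1000, 1500), 1))
    ```
    
    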

    It gives you some intensity imaging but no phase
    imaging because the phase differences between the
    channels are too great for the brain to make sense
    of them. It was very popular back in the 1950s
    and 1960s when good directional microphones did
    not exist and omnis were the order of the day.

    True A-B technique (with smaller spacing) is purely
    phase-based stereo, which is the only right kind of
    stereo because it imitates human hearing. Another
    great phase-based technique is SASS, which does not
    suffer from the A-B limitations mentioned above.
    But unfortunately the overwhelming majority of
    modern recordings are made using the polymicrophone
    technique and consequently intensity-based stereo.

    --
    () ascii ribbon campaign - against html e-mail
    /\ http://preview.tinyurl.com/qcy6mjc [archived]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Fred McKenzie@21:1/5 to Anton Shepelev on Wed Jan 11 12:06:19 2017
    In article <20170111162354.f37715f5ef0d7bde0c5ef467@gmail.com>,
    Anton Shepelev <anton.txt@gmail.com> wrote:

    True A-B technique (with smaller spacing) is purely
    phase-based stereo, which is the only right kind of
    stereo because it imitates human hearing.

    Anton-

    Since posting my earlier request, I settled on an approximation of this
    "ORTF" method. I am using a pair of Shure SM81 Cardioid microphones. I probably do not have the angles and separation exactly right, but have
    been satisfied with the results.

    One thing I found was that the SM81 microphones are extremely sensitive
    to mechanical disturbance. Mounted on an extended tall stand, motion
    induced by either flexing of the wooden floor or a breeze from the air
    handler resulted in a loud rumble. Apparently the microphones or
    connectors were rubbing against each other! Fortunately there is a
    Shure vibration isolator that fits the SM81.

    The loud rumble was improved using a 55 Hz High Pass filter, but at the
    expense of Tympani and Bass Drum levels.
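    The trade-off Fred describes can be illustrated with a minimal
    first-order high-pass filter (a sketch only; the recorder's actual
    55 Hz filter is almost certainly steeper). A 10 Hz rumble is strongly
    attenuated, while a 440 Hz tone passes nearly untouched:

    ```python
    import math

    def high_pass(samples, cutoff_hz, sample_rate):
        """First-order RC high-pass filter: y[i] = a*(y[i-1] + x[i] - x[i-1])."""
        rc = 1.0 / (2 * math.pi * cutoff_hz)
        dt = 1.0 / sample_rate
        alpha = rc / (rc + dt)
        out = [samples[0]]
        for i in range(1, len(samples)):
            out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
        return out

    sr = 44100
    t = [i / sr for i in range(sr)]
    rumble = [math.sin(2 * math.pi * 10 * x) for x in t]   # stand-borne rumble
    tone = [math.sin(2 * math.pi * 440 * x) for x in t]    # musical content

    print(max(map(abs, high_pass(rumble, 55, sr))))  # well below 1.0
    print(max(map(abs, high_pass(tone, 55, sr))))    # close to 1.0
    ```

    A first-order slope (6 dB/octave) still grazes the bottom octave of
    timpani and bass drum, which is the loss Fred noticed.
    
    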

    Fred

  • From Scott Dorsey@21:1/5 to fmmck@aol.com on Wed Jan 11 14:03:29 2017
    Fred McKenzie <fmmck@aol.com> wrote:

    Since posting my earlier request, I settled on an approximation of this
    "ORTF" method. I am using a pair of Shure SM81 Cardioid microphones. I
    probably do not have the angles and separation exactly right, but have
    been satisfied with the results.

    So try altering them to get a sense of what it does to the sound.

    One thing I found was that the SM81 microphones are extremely sensitive
    to mechanical disturbance. Mounted on an extended tall stand, motion
    induced by either flexing of the wooden floor or a breeze from the air
    handler resulted in a loud rumble. Apparently the microphones or
    connectors were rubbing against each other! Fortunately there is a
    Shure vibration isolator that fits the SM81.

    The vibration mount is a very good thing; also, if possible, it is wise
    to use a very flexible cable going into the microphone to reduce
    vibration transmitted through the cable. Olson will sell you a very
    good windscreen for wind issues.

    If you think the SM81 is sensitive to this kind of thing, you should see
    what the DPA omnis are like.

    The loud rumble was improved using a 55 Hz High Pass filter, but at the
    expense of Tympani and Bass Drum levels.

    Find out where it's coming from and fix it. If it's an air handler issue,
    it's possible moving a few feet will deal with it.
    --scott

    --
    "C'est un Nagra. C'est suisse, et tres, tres precis."

  • From Anton Shepelev@21:1/5 to All on Thu Jan 12 19:35:28 2017
    Fred McKenzie to Anton Shepelev:

    True A-B technique (with smaller spacing) is
    purely phase-based stereo, which is the only
    right kind of stereo because it imitates human
    hearing.

    Since posting my earlier request, I settled on an
    approximation of this "ORTF" method. I am using a
    pair of Shure SM81 Cardioid microphones. I proba-
    bly do not have the angles and separation exactly
    right, but have been satisfied with the results.

    Glad to know that. It will be intensity-based
    stereo. May I listen to a fragment of your record-
    ing, unprocessed and preferably lossless?


  • From Fred McKenzie@21:1/5 to Scott Dorsey on Thu Jan 12 11:39:04 2017
    In article <o55vi1$bk1$1@panix2.panix.com>,
    kludge@panix.com (Scott Dorsey) wrote:

    Fred McKenzie <fmmck@aol.com> wrote:

    Since posting my earlier request, I settled on an approximation of this
    "ORTF" method. I am using a pair of Shure SM81 Cardioid microphones. I
    probably do not have the angles and separation exactly right, but have
    been satisfied with the results.

    So try altering them to get a sense of what it does to the sound.

    One thing I found was that the SM81 microphones are extremely sensitive
    to mechanical disturbance. Mounted on an extended tall stand, motion
    induced by either flexing of the wooden floor or a breeze from the air
    handler resulted in a loud rumble. Apparently the microphones or
    connectors were rubbing against each other! Fortunately there is a
    Shure vibration isolator that fits the SM81.

    The vibration mount is a very good thing; also, if possible, it is wise
    to use a very flexible cable going into the microphone to reduce
    vibration transmitted through the cable. Olson will sell you a very
    good windscreen for wind issues.

    If you think the SM81 is sensitive to this kind of thing, you should see
    what the DPA omnis are like.

    The loud rumble was improved using a 55 Hz High Pass filter, but at the
    expense of Tympani and Bass Drum levels.

    Find out where it's coming from and fix it. If it's an air handler issue,
    it's possible moving a few feet will deal with it.
    --scott

    It took several sessions to realize that the problem was the result of
    mechanical motion. In the worst case, the microphone connectors were
    definitely touching each other. Vibration isolators, along with making
    sure the connectors and cables do not touch, seem to have completely
    eliminated the problem.

    Fred

  • From Anton Shepelev@21:1/5 to All on Fri Jan 13 00:19:14 2017
    Fred McKenzie:

    I am trying to record what a person would hear if
    they were at the same spot as the microphones, re-
    alizing that there are differences.

    A very commendable goal, which deplorably few sound
    engineers strive for. Binaural (dummy-head) record-
    ing is the best way to achieve it, but the record-
    ings must be listened to either via headphones or a
    conventional stereo system supplied with a binaural
    processor, which removes cross-feeding (right
    speaker to left ear and vice versa).

    At a recent concert, I had the two microphones
    about 10 inches apart, but pointing in almost the
    same direction.

    Were they the cardioid SM81s?

    They were about 7 feet above the stage floor, and
    about 5 feet left of center. I am very happy with
    the results as far as frequency response is con-
    cerned.

    Note that a cardioid will attenuate the lower fre-
    quencies when placed far from the sound source.

    I am not happy with some instruments not being
    heard as loudly as expected ->

    Do you mean as the listener would perceive their
    loudness at the microphone position? That may have
    to do with directivity and be amenable to correc-
    tion by orienting directional mics or using omnis.

    and the lack of stereo effect

    How far away was the (actual) scene and how wide?
    Try increasing the spacing to 20 inches. It may
    help.

    Try to compare your recording with the sound of ei-
    ther channel in mono using headphones. This con-
    trast helps to perceive even a small stereo effect
    if it is there, but only for AB stereo, which in
    your case means parallel microphones.

    One advantage of time-based stereo is that it will
    make individual instruments more discernible by bin-
    aural demasking.


  • From Fred McKenzie@21:1/5 to Anton Shepelev on Thu Jan 12 22:19:22 2017
    In article <20170113001914.2e0a121e4611d25bc2d5762a@gmail.com>,
    Anton Shepelev <anton.txt@gmail.com> wrote:

    At a recent concert, I had the two microphones
    about 10 inches apart, but pointing in almost the
    same direction.

    Were they the cardioid SM81s?

    I forget how long it has been since this thread started. My memory is
    getting old, but I believe I was using the SM81s then. They are now
    arranged at an angle of about 90 degrees.

    I use an adapter that would allow up to five microphones to be mounted
    on one stand. Two SM-81s are on the outside, with a Zoom H4N recorder
    in the center. The SM-81s are now each mounted in a Shure vibration
    isolator, with an 18 inch cable connecting between each microphone and
    the recorder.

    Needless to say, I am not a pro! My recordings are intended to be used
    by the Conductor and a few key Musicians to analyze the performance.

    In article <20170112193528.971ab20c0a7c8a8068ef3a43@gmail.com>,
    Anton Shepelev <anton.txt@gmail.com> wrote:

    Since posting my earlier request, I settled on an
    approximation of this "ORTF" method. I am using a
    pair of Shure SM81 Cardioid microphones. I proba-
    bly do not have the angles and separation exactly
    right, but have been satisfied with the results.

    Glad to know that. It will be intensity-based
    stereo. May I listen to a fragment of your record-
    ing, unprocessed and preferably lossless?

    The H4N recorder is configured to automatically set the recording level.
    As the concert progresses, the level gradually gets lower. I use
    "Audacity" to separate each piece, and then use it to amplify each to
    approximately the same level, sometimes more than 10 dB.

    Before separating pieces, the .WAV recordings run about 2 GB per hour.
    So unprocessed and/or lossless fragments would be too big for my limited
    internet access!

    Fred

  • From Anton Shepelev@21:1/5 to All on Fri Jan 13 16:52:22 2017
    Fred McKenzie:

    Needless to say, I am not a pro!

    Neither am I.

    My recordings are intended to be used by the Con-
    ductor and a few key Musicians to analyze the per-
    formance.

    I should say that the conductor and performers are
    more critical audience than the layman listener and
    usually have a more wholesome taste and criterii.

    If they are satisfied with your work it must be
    good.

    The H4N recorder is configured to automatically
    set the recording level. As the concert progress-
    es, the level gradually gets lower. I use "Audac-
    ity" to separate each piece, and then use it to
    amplify each to approximately the same level,
    sometimes more than 10 dB.

    So the level is set automatically at the beginning
    only? I simply ask the musician to play the loudest
    part and adjust my controls for that. Does the H4N
    use analog or digital attenuation? The latter is a
    bad idea.

    I too have used Audacity for simple editing. If you
    have to amplify by 10 dB then you are using about
    30% of the recorder's dynamic range, but if the con-
    cert is one indivisible programme it is all right.
    Otherwise, I should adjust the levels for every com-
    position.
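    The "about 30%" figure follows directly from the decibel definition: a
    signal peaking 10 dB below full scale has an amplitude of 10^(-10/20)
    of full scale. A quick check:

    ```python
    def db_to_linear(db):
        """Convert a decibel value to a linear amplitude ratio."""
        return 10 ** (db / 20)

    # A peak 10 dB below 0 dBFS uses about 32% of the available amplitude range:
    print(round(db_to_linear(-10), 3))  # 0.316
    ```
    
    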

    Before separating pieces, the .WAV recordings run
    about 2 GB per hour. So unprocessed and/or loss-
    less fragments would be too big for my limited in-
    ternet access!

    That must be a high-definition format. Three min-
    utes of CD audio take about 30 MB in WAV and about
    half that space in the lossless format FLAC.
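    Both figures are easy to verify from the raw PCM bitrate. The sketch
    below assumes the H4N was set to 24-bit/96 kHz stereo, which is my
    inference from Fred's 2 GB per hour, not something stated in the thread:

    ```python
    def wav_bytes(sample_rate, bits, channels, seconds):
        """Uncompressed PCM size in bytes (ignoring the small WAV header)."""
        return sample_rate * bits // 8 * channels * seconds

    # CD audio (16-bit/44.1 kHz stereo), three minutes -- Anton's ~30 MB figure:
    print(wav_bytes(44100, 16, 2, 180) / 1e6)   # ~31.8 MB

    # 24-bit/96 kHz stereo for an hour -- consistent with Fred's ~2 GB per hour:
    print(wav_bytes(96000, 24, 2, 3600) / 1e9)  # ~2.07 GB
    ```
    
    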

    Here's a recording that I made using the A-B tech-
    nique:

    https://soundcloud.com/anton-shepelev/honey-dont-cover

    It lasts 2:14 and weighs only 12 MB in FLAC.


  • From Anton Shepelev@21:1/5 to All on Fri Jan 13 18:10:18 2017
    I miswrote:

    ...and usually have a more wholesome taste and
    criterii.

    Should be: "and usually have more wholesome taste
    and criteria."


  • From Fred McKenzie@21:1/5 to Anton Shepelev on Fri Jan 13 11:44:58 2017
    In article <20170113165222.2196f7ac5fc507a9ebf4a419@gmail.com>,
    Anton Shepelev <anton.txt@gmail.com> wrote:

    Fred McKenzie:

    Needless to say, I am not a pro!

    Neither am I.

    My recordings are intended to be used by the Con-
    ductor and a few key Musicians to analyze the per-
    formance.

    I should say that the conductor and performers are
    more critical audience than the layman listener and
    usually have a more wholesome taste and criterii.

    If they are satisfied with your work it must be
    good.

    They take what they can get! One of the members used to record using a
    video cam in the auditorium projection booth.

    The H4N recorder is configured to automatically
    set the recording level. As the concert progress-
    es, the level gradually gets lower. I use "Audac-
    ity" to separate each piece, and then use it to
    amplify each to approximately the same level,
    sometimes more than 10 dB.

    So the level is set automatically at the beginning
    only? I simply ask the musician to play the loudest
    part and adjust my controls for that. Does the H4N
    use analog or digital attenuation? The latter is a
    bad idea.

    A concert starts with a tuning note followed by the Presentation of
    Colors and National Anthem. H4N recording level is set by the loudest
    sound, often the Bass Drum. By the time the first concert piece is
    started, the level has been set. Over the course of a 90-minute concert,
    occasional loud sounds will reduce the level further.

    I believe the recording level is digitally controlled. Whether it is
    gain or attenuation may be a matter of point of view!

    I too have used Audacity for simple editing. If you
    have to amplify by 10 dB then you are using about
    30% of the recorder's dynamic range, but if the con-
    cert is one indivisible programme it is all right.
    Otherwise, I should adjust the levels for every com-
    position.

    Before separating pieces, the .WAV recordings run
    about 2 GB per hour. So unprocessed and/or loss-
    less fragments would be too big for my limited in-
    ternet access!

    That must be a high-definition format. Three min-
    utes of CD audio take about 30 MB in WAV and about
    half that space in the lossless format FLAC.

    I was not familiar with .FLAC format. I see that Audacity can export
    it. If I do not forget, I'll check to see if the H4N can use it. There
    is a 2 GB limit to the size of a file the SD card format can accept, so
    a 90 minute concert is broken into two files. Perhaps .FLAC would help.

    Here's a recording that I made using the A-B tech-
    nique:

    https://soundcloud.com/anton-shepelev/honey-dont-cover

    It lasts 2:14 and weighs only 12 MB in FLAC.

    Listening on my laptop, your recording is very clean. It sounds like
    you have a nice studio with absolutely no background noise. In addition
    to extraneous audience noises, my recordings range from the solo Flute
    in a quiet passage to the blaring Brass in a loud passage. I have been
    tempted to try compression, but have resisted the temptation.

    Fred

  • From Scott Dorsey@21:1/5 to All on Fri Jan 13 12:04:49 2017
    Fred McKenzie:

    I am trying to record what a person would hear if
    they were at the same spot as the microphones, re-
    alizing that there are differences.

    Would you like rainbow-pooping unicorns with that?

    This is not only not possible, but it is a dangerous attitude to have because it will distract you from the actual goal of making an accurate recording.

    Things will _never_ sound the same at the microphone position, if only
    because when you listen on stereo playback you have the playback room
    acoustics superimposed on the original recording. So the original
    recording needs to be made drier, and specifically lacking in the
    short-term reflections that will be dominant in the playback
    environment. Combine that with the microphones never having the same
    pickup pattern as the ears, and you will find that the best position for
    the microphones is never going to be the best position for the listener.

    The only exception to this is for headphone playback, either with a
    binaural recording or with a mono recording. But that is a totally
    different animal indeed.

    I am not happy with some instruments not being
    heard as loudly as expected and the lack of stereo effect.

    Which ones? If you want more strings, raise the mikes. If you want more brass, drop them down.

    If the microphones are parallel, you will have zero intensity stereo.
    --scott
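    Scott's remark about parallel microphones can be checked against the
    ideal cardioid polar pattern, gain = (1 + cos θ)/2. This is an idealized
    sketch; the angles below are illustrative, not Fred's exact setup:

    ```python
    import math

    def cardioid_gain(theta_deg):
        """Ideal cardioid pickup: gain (1 + cos theta) / 2 at angle theta."""
        return (1 + math.cos(math.radians(theta_deg))) / 2

    # Parallel pair: a source 30 degrees off axis arrives at the same angle
    # on both capsules, so the channels match exactly -- no intensity stereo.
    parallel = cardioid_gain(30) / cardioid_gain(30)
    print(parallel)  # 1.0 -- identical levels

    # Pair angled +/-45 degrees (90 degrees between axes): the same source
    # is 15 degrees off one axis and 75 degrees off the other.
    angled_db = 20 * math.log10(cardioid_gain(15) / cardioid_gain(75))
    print(round(angled_db, 1))  # a usable inter-channel level difference
    ```
    
    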



  • From Anton Shepelev@21:1/5 to All on Sat Jan 14 01:30:04 2017
    Fred McKenzie:

    A concert starts with a tuning note followed by
    the Presentation of Colors and National Anthem.
    H4N recording level is set by the loudest sound,
    often the Bass Drum. By the time the first con-
    cert piece is started, level has been set. Over
    the course of a 90 minute concert, occasional loud
    sounds will reduce level further.

    Understood.

    I believe the recording level is digitally con-
    trolled.

    Well, there must be access to the level of analog
    attenuation, because the ADC assumes the signal is
    normalized to a certain fixed level...

    Whether it is gain or attenuation may be a matter
    of point of view!

    Indeed. My viewpoint is that the maximum digital
    level is unity, so the rest is more or less attenu-
    ated.

    I was not familiar with .FLAC format. I see that
    Audacity can export it. If I do not forget, I'll
    check to see if the H4N can use it. There is a 2
    GB limit to the size of a file the SD card format
    can accept, so a 90 minute concert is broken into
    two files. Perhaps .FLAC would help.

    Yes, or you might try decreasing the resolution to
    the CDDA standard, 16-bit/44.1 kHz.

    Here's a recording that I made using the A-B
    technique:

    https://soundcloud.com/anton-shepelev/honey-dont-cover

    Listening on my laptop, your recording is very
    clean.

    Hopefully via headphones rather than the built-in
    speakers?

    It sounds like you have a nice studio with abso-
    lutely no background noise.

    To me, clarity is not the absence of background
    noises but, rather, the absence of distortion in
    the recording-playback chain.

    That recording was made in terrible conditions -- in
    a tiny closet generously treated with sound-absorb-
    ing material -- an acoustically dead space.

    In addition to extraneous audience noises, ->

    That ambience or "atmosphere" is an essential part
    of any live recording, and I think it is the re-
    sponsibility of the sound engineer to convey it
    faithfully while keeping it unobtrusive.

    my recordings range from the solo Flute in a
    quiet passage to the blaring Brass in a loud pas-
    sage. I have been tempted to try compression, but
    have resisted the temptation.

    Good for you. There is a less harmful tool in Au-
    dacity that can help you. It is called Envelope:

    http://manual.audacityteam.org/man/envelope_tool.html

    With it you can gradually increase the volume be-
    fore a quiet solo and go back to unity gain after-
    wards.
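    A piecewise-linear gain envelope of the kind the Envelope tool draws
    can be sketched as follows (a simplified model, not Audacity's actual
    interpolation):

    ```python
    def apply_envelope(samples, sample_rate, points):
        """Apply a piecewise-linear gain envelope.

        `points` is a sorted list of (time_sec, gain) control points;
        gain is held flat before the first and after the last point.
        """
        times = [p[0] for p in points]
        gains = [p[1] for p in points]
        out = []
        for i, s in enumerate(samples):
            t = i / sample_rate
            if t <= times[0]:
                g = gains[0]
            elif t >= times[-1]:
                g = gains[-1]
            else:
                # linear interpolation between the surrounding control points
                for j in range(len(times) - 1):
                    if times[j] <= t <= times[j + 1]:
                        frac = (t - times[j]) / (times[j + 1] - times[j])
                        g = gains[j] + frac * (gains[j + 1] - gains[j])
                        break
            out.append(s * g)
        return out

    # Ramp up to 2x gain before a quiet solo at t = 2..4 s, back to unity after:
    sr = 100  # a tiny sample rate keeps the example small
    samples = [1.0] * (6 * sr)
    shaped = apply_envelope(samples, sr, [(1, 1.0), (2, 2.0), (4, 2.0), (5, 1.0)])
    ```
    
    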


  • From Ange@21:1/5 to Fred McKenzie on Mon Sep 4 14:41:41 2017
    On Tuesday, August 19, 2014 at 1:04:34 PM UTC-4, Fred McKenzie wrote:
    I am attempting to set up a couple of microphones to record an Orchestra performance. In order to achieve a stereo effect, it was my impression
    that the microphones should be spaced several feet apart, one on each
    side of the center of the stage.

    I came across an article that placed the two microphones together with
    front apertures almost touching in a "cross-eyed" configuration. The
    right microphone was pointed to the left and vice versa.

    I can see where it might be bad if microphones were too far apart. Our
    ears are fairly close together and hear things in stereo. Does it make sense to locate the microphones closer together than our ears are spaced?

    Fred

    My first esoteric encounter with this craft came one evening when my son
    was doing some guitar rendition with friends. I had a portable Nakamichi
    tape deck with headphones to hear what was being recorded, and a stereo
    mic pair that I was obliged to hold in both hands. I did hold them less
    than a foot apart, and the stereo action was clear. Having a necessary
    scratch, I temporarily put the two mics together in one hand (to scratch
    with the other). The stereo held until the mics were less than an inch
    apart. I then tested that phenomenon and concluded that stereo is
    achieved when the mics are 2" or more apart. Hypothetically, when they
    are about 5" apart, they should perform as do our ears naturally. That
    settles the basic issue.

    So why wide mic separations? IMHO, it is to sample two separate
    orchestra sound fields. On another occasion I witnessed that the 2 kHz
    etc. sound radiation from the violin section (viewed as left of
    orchestra center) radiates upward to the right (as seen from the
    audience), so that the left mic will intercept that more than the right
    mic. The right side of the orchestra contains largely horns and brass,
    so it is feasible to procure rich strings via the left mic and strong
    brass from the right mic. To that extent, the wider separation will
    provide that separation service. How you craft your mic setup is up to
    you.
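    Ange's inch figures translate into inter-channel time differences via
    the speed of sound. The sketch below (assuming 343 m/s at room
    temperature) shows that a 2-inch spacing already yields roughly 150
    microseconds for a source fully to one side, well above the commonly
    cited ~10-20 microsecond detection threshold of human hearing:

    ```python
    SPEED_OF_SOUND = 343.0  # m/s at room temperature

    def max_itd_us(spacing_m):
        """Largest inter-channel time difference (microseconds) a spaced
        pair can produce, for a source fully to one side."""
        return spacing_m / SPEED_OF_SOUND * 1e6

    for inches in (1, 2, 5, 10):
        spacing_m = inches * 0.0254
        print(inches, "in ->", round(max_itd_us(spacing_m)), "us")
    ```
    
    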

  • From Fred McKenzie@21:1/5 to Ange on Tue Sep 5 11:35:46 2017
    In article <2f3197b5-f2df-4f90-be27-016db884fdd8@googlegroups.com>,
    Ange <a.campanella@att.net> wrote:

    On Tuesday, August 19, 2014 at 1:04:34 PM UTC-4, Fred McKenzie wrote:
    I am attempting to set up a couple of microphones to record an Orchestra performance. In order to achieve a stereo effect, it was my impression that the microphones should be spaced several feet apart, one on each
    side of the center of the stage.

    I came across an article that placed the two microphones together with front apertures almost touching in a "cross-eyed" configuration. The
    right microphone was pointed to the left and vice versa.

    I can see where it might be bad if microphones were too far apart. Our ears are fairly close together and hear things in stereo. Does it make sense to locate the microphones closer together than our ears are spaced?

    Fred

    My first esoteric encounter with this craft came one evening when my son
    was doing some guitar rendition with friends. I had a portable Nakamichi
    tape deck with headphones to hear what was being recorded, and a stereo
    mic pair that I was obliged to hold in both hands. I did hold them less
    than a foot apart, and the stereo action was clear. Having a necessary
    scratch, I temporarily put the two mics together in one hand (to scratch
    with the other). The stereo held until the mics were less than an inch
    apart. I then tested that phenomenon and concluded that stereo is
    achieved when the mics are 2" or more apart. Hypothetically, when they
    are about 5" apart, they should perform as do our ears naturally. That
    settles the basic issue.

    So why wide mic separations? IMHO, it is to sample two separate
    orchestra sound fields. On another occasion I witnessed that the 2 kHz
    etc. sound radiation from the violin section (viewed as left of
    orchestra center) radiates upward to the right (as seen from the
    audience), so that the left mic will intercept that more than the right
    mic. The right side of the orchestra contains largely horns and brass,
    so it is feasible to procure rich strings via the left mic and strong
    brass from the right mic. To that extent, the wider separation will
    provide that separation service. How you craft your mic setup is up to
    you.

    Ange-

    Since my original posting, I have used a pair of microphones spaced
    about six inches apart, pointing about 90 degrees apart. Results have
    been acceptable.

    One thing I have noticed is that my microphones may be too close
    together for the weaker instruments at the front on each side. If they
    were located several feet apart, they would each be closer to the weak
    instruments. However, the stereo effect might then sound artificial.

    Fred

  • From Scott Dorsey@21:1/5 to a.campanella@att.net on Tue Sep 5 18:54:41 2017
    Ange <a.campanella@att.net> wrote:
    My first esoteric encounter with this craft came one evening when my son
    was doing some guitar rendition with friends. I had a portable Nakamichi
    tape deck with headphones to hear what was being recorded, and a stereo
    mic pair that I was obliged to hold in both hands. I did hold them less
    than a foot apart, and the stereo action was clear. Having a necessary
    scratch, I temporarily put the two mics together in one hand (to scratch
    with the other). The stereo held until the mics were less than an inch
    apart. I then tested that phenomenon and concluded that stereo is
    achieved when the mics are 2" or more apart. Hypothetically, when they
    are about 5" apart, they should perform as do our ears naturally. That
    settles the basic issue.

    The problem is that listening on headphones totally changes all of the imaging in every possible way. So it's hard to translate anything you're hearing on headphones into anything going on with speakers.

    So why wide mic separations? IMHO, it is to sample two separate
    orchestra sound fields. On another occasion I witnessed that the 2 kHz
    etc. sound radiation from the violin section (viewed as left of
    orchestra center) radiates upward to the right (as seen from the
    audience), so that the left mic will intercept that more than the right
    mic. The right side of the orchestra contains largely horns and brass,
    so it is feasible to procure rich strings via the left mic and strong
    brass from the right mic. To that extent, the wider separation will
    provide that separation service. How you craft your mic setup is up to
    you.

    Traditionally the reason why wide separation was popular was because the
    only microphones that were any good were omnis, and getting clean and
    accurate directionality with a big baffle between omni mikes is
    problematic. Putting a widely spaced triad of microphones up above the
    orchestra gives you the ability to control balances by moving up and
    down and side to side as you note, and it allows you to use omnis.

    The bad part about that technique is that the imaging is only from
    amplitude differences between channels. The phase differences are so
    wide that the ear can't correlate them. The overall effect is not bad
    but has a weird sense of depth that you don't hear live. The Mercury
    Living Presence recordings made by Bob Fine are the classic examples of
    the spaced triad technique.
    --scott


