Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Julio Di Benedetto

301
Agreed, and I don't mean to say all synths are equal, or that a Casio CZ101 is just as good as an Andromeda or a Moog or a Matrix 12 or any of the "greats." I tend to think of all synths as having their own strengths. For some synths, like a CZ101 or my Roland SH101 (hey, my first two synths both had the number "101"...) one of the strengths is that they're straightforward and easy to learn, so maybe their value is more as learning tools which are limited in terms of practical usage.


This is very true Mike.....the learning is crucial and limited is good.  Without a decent amount of practical electronic synthesis knowledge one would have a hard time getting the most out of an Andromeda or M12.  They are very deep instruments and thus a constant source of sonic surprises, even to the seasoned electronic musician. I'm a much more knowledgeable programmer after spending time using a modular synth.

I think a few different "limited" synths in one studio are just as good as using some of the "greats" because, as you suggest, one becomes a master.

302
Music Gearheads Tech Talk / Re: KVR Black friday/cyber monday sales list
« on: December 14, 2013, 05:17:14 AM »

In all my years of recording, the 2 most significant times when my ears perked up and I went ahhhh, I hear it! I get it! were:

1. When I treated my room and put my monitors on Primacoustic Recoil Stabilizers
2. When I ran sound through a real hardware LA-2A

PV

I had been wondering about the Primacoustic Recoil Stabilizers, seen them everywhere and heard good things, but from no one I knew and whose point of view I respect.  Treating one's studio is really the best thing you can do for your sound.  It requires little effort unless you choose to make your own panels, minimal knowledge, because the "where to place the panels" info is all over the web, and it does not have to be a big investment.

303
Now Playing / Re: Currently listening, part 1
« on: December 13, 2013, 05:37:10 AM »
Martian Chronicles : Seren + Oophoi    When I got this CD it did not grab me at first.  I've been playing it a lot these last few weeks and, like some of the music in this genre, it can need a deeper listening.  It asks that you come to it open.  Great disc that is still unfolding!

http://hypnos.com/mm5/merchant.mvc?Screen=PROD&Store_Code=HOS&Product_Code=hyp3159&Category_Code=hypnos

304
Exciting news....Hypnos downloads offering digital versions of the Hypnos catalog and then hopefully all the other ambient electronic labels and artists. I can see it growing into a digital "Backroads Music".

305
Other Ambient (and related) Music / Re: Vangelis
« on: December 12, 2013, 06:43:43 PM »
Yes....could be, Forrest.  Vangelis has some sort of corresponding symbols on his pedals as well.  I like the idea of inferred notation.

I remember seeing a Hans Zimmer score, well the one part he performed, and it was numerical and alphabetical, not notated in the traditional way...I guess a language he could understand, being self-taught.

306
Other Ambient (and related) Music / Re: Vangelis
« on: December 12, 2013, 03:25:52 PM »
I'm still trying to figure out what all the hieroglyphic-type symbols refer to.....a musical language of some sort.

307
Thank you Immersion...this is what I was hoping for as a contribution to this thread.

308
Music Gearheads Tech Talk / Re: effects - old and new
« on: December 10, 2013, 08:29:59 AM »
A lot of the FX these days do tend towards distortion, glitch and grunge, such as.....the OTO Biscuit.

www.youtube.com/watch?v=g7Bs9jDw3Mw&feature=player_embedded#at=15


In some ways I feel that musicians, certainly in this genre, are looking to the recording process as FX...that is, where it is recorded, natural reverb, etc.  Old-school "put the amp in the stairwell" type of thing.

Also, the way individual tracks come together within the DAW can often create an unexpected sense of processing when the actual tracks are clean and dry.  I'm finding this a lot personally.

Forrest recently mentioned AudioMulch.......I don't use it but know of it.   Something I want to look into: http://www.audiomulch.com

I do use Iris from time to time, which I think of more as an FX processor, which it actually is not.....http://www.izotope.com/products/audio/iris/

I do agree the power has increased, but the FX remain the same and are not always better.

Good post Seren......interested to see what others think.

309
So far, Immersion, if I can sum up for you regarding the possible future of synthesis......classic vintage gear other than Moog sounds bad.  Current analog synths like the Prophet 12 sound like bad soft synths, and soft synths in general also sound bad, except the one or two you use, though they don't really sound that good either, according to you.  So actually the future is in processing.  The source is of no importance because it's the processing that will make these dead synths come alive.  This is what I have come away with so far from your comments.

If you feel there is no future, say so.....then say why you think so.....oh, you have already done that, so why not offer some possible direction you hope it might go in.



310
Ok, I see what you mean, Forrest.  That would be exciting.  It seems most hardware synths allow you to process external audio but only run it through the FX and/or filters. I could see some deeper integration as you suggest, not just passing the signal through but the signal becoming part of the synth as a building block or as an oscillator, almost more organic if that's possible.  I've looked for such a thing but I don't think it exists....yet.

311
Julio, what I'd love to see in the future is more of an integration of treatment of real-time acoustic sound sources (not just loops) with synthesizers within the same box.  I guess Live does some of this, but I'd like to see some hardware synths take this on.

Forrest

I agree.....didn't the Korg OASYS system try, and succeed from what I've read, to actually create acoustic instruments, not samples?  The OASYS keyboard is discontinued but the technology still exists.

Within software an oscillator could be anything, and software is the future.  The Hartmann Neuron was a fantastic step forward and that technology is still around.  It made me laugh when people would complain that when you turned the Neuron on you could hear the computer boot up and the disk drive spin.....nothing's perfect.

I think these are exciting times, now and likely into the future.....modular synthesis is just going bonkers with new companies popping up all the time.  Synths like John Bowen's Solaris are selling out every production run.

In the very near future people won't care if a filter is modeled after a Moog or an Oberheim or an ARP, and actually I don't think people really do today.  The Alesis Andromeda has just such filter modeling, and the real programmers / players just got on and dug deep into the instrument to discover its unique character, which has little to do with Moogs or Obies.  How many times have I read that the A6 sounds just like...well, you kind of missed the boat if that's all that's worth commenting on about the Andromeda.

So it's awesome in these rare times when someone says, "hey, I'll try something new!"  Maybe it will succeed, maybe it will never get past the prototype stage, or maybe the technology gets absorbed into a future model, but it's still cool.

I can't wait to see what this one turns into, as the guy has a great track record.


Exactly....and making an effort to be sustainable.

312
Immersion...who are you actually talking to, because it sounds like it's yourself.  Your diatribe has nothing to do with any of the musicians here, who are very critical about the sound quality of the synths, software or hardware, that they use to create music.   Statements like "for me the Xpander was totally useless without processing".....good thing you put "for me" beforehand.  Can I assume you are talking about ambient music, or just generally?  The Xpander/M12 does sound really good through a "lost in space" FX, but this sort of processing is something found in our genre or perhaps soundtrack work. What about all the other music where an Xpander has been used, often with minimal processing because that was what the music needed? And tell me please, who uses a synth totally dry?

In all of your comments you have left out the most important thing.....how does it sound in the mix, because that is what people will ultimately hear.  If you are going to be scientific about it, then be so.  Dazzle us with findings.

As this is a thread about the future of synthesis, so to speak, what do you think.....where will it go.....what form will it take?

Your thoughts are welcome!

313
Music Gearheads Tech Talk / The THX sound
« on: December 09, 2013, 05:17:41 AM »
I had read somewhere that the THX sound was created on a 90-oscillator Serge system...it was not so.  Here's the story behind it.

THX INTRO HD QUALITY

"I like to say that the THX sound is the most widely recognized piece of computer-generated music in the world," says Andy Moorer. "This may or may not be true, but it sounds cool!"
>> It's called 'Deep Note'. 
>> It was made by Dr James 'Andy' Moorer in 1982, who has had a very cool career: Four patents, one Oscar. In the '60s he was working in Artificial Intelligence at Stanford. In the '70s he was at IRCAM in Paris, working on speech synthesis and ballet. In the '80s he worked at the LucasFilm DroidWorks, before joining Steve Jobs at NeXT. Today, he consults, repairs old tube radios and plays banjo.
>> At one point, the THX sound was being played 4,000 times a day at cinemas around the world (that's once every 20 seconds).
>> The Simpsons got permission for this [mpg movie] parody. Dr Dre was less lucky. He asked permission to sample 'Deep Note' but was turned down. He used it anyway, to open '2001', and LucasFilm sued.
>> Stanford student Jesse Fox tried to recreate 'Deep Note' for a course. His version sounds like a nasty accident in an organ factory. 
>> There are various theories on the web about how the THX sound was created - some people say it was a Yamaha CS-80, others that it was a Synclavier. I emailed Andy Moorer to ask how it was really made. The short answer was "On a big-ass mainframe computer at LucasFilm". But I thought I should give you the long answer here in full, just because it feels like Andy's writing his own history for the first time...
>> "I've never written the THX story down (nobody ever asked). So, here's the whole story:
>> "I was working in what was then called the "Lucasfilm Computer Division" that existed from roughly 1980 to 1987 or so. It spawned several companies, including Pixar and Sonic Solutions. I was head of the audio group. In about 1982, we built a large-scale audio processor. This was in the days before DSP chips, so it was quite a massive thing. We called it the ASP (Audio Signal Processor).
>> "At the same time Tom Holman was also working at Lucasfilm. He had developed what is now called the THX sound system. It was to premiere with Lucasfilm's "Return of the Jedi." They were making a logo to go before the film. I was asked by the producer of the logo piece to do the sound. He said he wanted "something that comes out of nowhere and gets really, really big!" I allowed as to how I figured I could do something like that.
>> "I set up some synthesis programs for the ASP that made it behave like a huge digital music synthesizer. I used the waveform from a digitized cello tone as the basis waveform for the oscillators. I recall that it had 12 harmonics. I could get about 30 oscillators running in real-time on the device. Then I wrote the "score" for the piece.
>> "The score consists of a C program of about 20,000 lines of code. The output of this program is not the sound itself, but is the sequence of parameters that drives the oscillators on the ASP. That 20,000 lines of code produce about 250,000 lines of statements of the form "set frequency of oscillator X to Y Hertz".
>> "The oscillators were not simple - they had 1-pole smoothers on both amplitude and frequency. At the beginning, they form a cluster from 200 to 400 Hz. I randomly assigned and poked the frequencies so they drifted up and down in that range. At a certain time (where the producer assured me that the THX logo would start to come into view), I jammed the frequencies of the final chord into the smoothers and set the smoothing time for the time that I was told it would take for the logo to completely materialize on the screen. At the time the logo was supposed to be in full view, I set the smoothing times down to very low values so the frequencies would converge to the frequencies of the big chord (which had been typed in by hand - based on a 150-Hz root), but not converge so precisely that I would lose all the beats between oscillators. All followed by the fade-out. It took about 4 days to program and debug the thing. The sound was produced entirely in real-time on the ASP.
>> "When we went to sync up the sound with the video (which I hadn't seen yet), we discovered that the timings were all different. I readjusted the times, generated a new score, and in ten minutes, we had the sound synced up with the video perfectly.
>> "There are many, many random numbers involved in the score for the piece. Every time I ran the C-program, it produced a new "performance" of the piece. The one we chose had that conspicuous descending tone that everybody liked. It just happened to end up real loud in that version.
>> "Some months after the piece was released (along with "Return of the Jedi") they lost the original recording. I recreated the piece for them, but they kept complaining that it didn't sound the same. Since my random-number generators were keyed on the time and date, I couldn't reproduce the score of the performance that they liked. I finally found the original version and everybody was happy.
>> "If you get permission from THX, I can supply you with the written "score" for the piece (in music notation - this was used to get the copyright) or even the original C program that produced the parameter lists. I can't supply you with a program that makes the sound itself.
>> "The ASP was decommissioned in 1986 and later sold for scrap."

314
Thanks for the words Pete......not going anywhere, just needed some air.

Immersion.....it's wonderful that you are able to express your emotions openly here on this forum, and I respect that. I just want us to be able to focus on the topic at hand.....

So on that note, the Nonlinear Labs prototype looks to me like it might have the format of the old Roland Alpha Juno, which was actually the first analog synth I owned.  All the editing had to be dialed in with that big "alpha dial" on the left and viewed on that tiny screen.  The Nonlinear Labs synth seems to have 2 large dials and a plethora of buttons.  Speculation of course, but maybe what actually was a bad design by Roland could be a good one for a software-based hardware synth.  Most musicians today are comfortable with external Akai-like push-button controllers, so entering data and edits via a large selection of buttons might make sense.


315
I'm done....might be time to take a little hiatus from the forum.  This is going nowhere, and all this is sucking my energy as long as this "format" persists.

All the power to you Immersion!

316
Well, if you don't believe in the future of software.....ok.  Oberheim Four Voice...not new and not so forward-looking, though truly beautiful.....just an amazing, awe-inspiring re-creation with much improved MIDI, etc. Amazing to have it available today.....one foot in the past, one foot in the future.

By the way this is not a conversation.......

And you have spent 4-5k on an Eventide external FX box to run soft synths through......Ok, time to take my blood pressure medication. Your notion of planning for the future has me nervous. My bad, you actually have no vision for the future.  ;)




317
I have a Matrix 12 in my studio.....its days are numbered.  What then?  Perhaps Stephan Schmitt's new synth will be just what is needed.  Can something replace it...perhaps.  Will synthesis progress in some shape or form in hardware after electrically fed oscillators? Yes!

Again I welcome a debate....with a positive look into the future.....my thread, my rules.  ;)




318
The new Prophet 12 seemed amazing, but it sounded like an average soft synth...
Or the Solaris synth. So much focus on everything else besides the pure sound.

Immersion.....In one sense I'm happy you are so vocal....and I do respect your opinion, but you are mistaken.....it's ok, you don't have the perspective, which is something middle-aged men like myself have earned through experience. You have youth, which should have hope and a less jaded viewpoint..... ;)

You do know your pursuit of what is pure will only lead to silence...I hope that's what you are looking for.

No offense meant....just tired of having, IMHO, potentially good threads trashed.

Love & Peace

 

319
Well, let me guess, another super-flexible product with terrible sound quality...
Usually that is the way... it is not often that both worlds meet.

I can't fathom how you have an opinion about this metal frame and keyboard when it hardly makes a sound, and one that you have not heard, unless you mean that NI soft synths have terrible sound quality and this man had something to do with the creation of that company, so......???  Did you actually read the quote or go to the website?  It is the "idea" of something that is one of the most powerful forces on this planet. It's usually simple in essence and can often be extraordinary in completion.

320
This text is taken from the website......a gigantic looper, I don't think so.  But who can tell at this stage.

Technology

The musical instruments we are developing embody a number of fundamental concepts which are important to us: performance-centered technology, product development focused on longevity and evolutionary development, an open source approach, and sustainable production methods.

Standalone systems. Our instruments are fully self-contained, with no external computers. We rely on ARM microcontrollers for the highest level of real-time performance, reliability and flexibility. Separate synthesis engines tap the vast audio processing power of embedded PCs. Optionally, software GUIs can be added by connecting Android mobile devices.

Full control. We have developed the TCD musical control protocol, which overcomes many limitations of MIDI. TCD stands for "Time, Curve, Destination" and implements high-resolution control over all aspects of a dynamic and expressive live musical performance. Read more about our TCD concept here.

Software-based digital sound synthesis. We are not interested in resurrecting the past by modeling analog machines of yesteryear. We are inspired by the virtually limitless sonic palette offered by digital sound synthesis. "Software-based" means that our durable instruments can evolve without falling into obsolescence. More about Phase 22, our first synthesis engine.

Top-quality hardware. Our musical instruments are built to last. They are not consumables to be thrown out and replaced every few years. We use the best components available to provide musicians with durable instruments.

Open source. Over the past few decades, the dynamics of open source has created many solid and mature technologies and has empowered people around the world. It is an invitation to sharing and community, fitting in well with how most musicians think. For Nonlinear Labs, it also means that our ideas can be used in other areas of music performance and production. Whenever possible, we will make our technologies freely available to these ends.

Local production. Our prototyping and production is 100% "Made in Berlin". Working locally means faster development cycles and better communication with manufacturing partners, resulting in higher quality. And by keeping travel and shipping to a minimum, we reduce our carbon footprint and can ensure that social working standards are met.
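
The quoted text only expands the TCD acronym, so the following is a purely invented illustration of what a "Time, Curve, Destination"-style event might look like next to a stream of coarse MIDI controller values. It is not Nonlinear Labs' actual TCD format, and every name, field and value in it is an assumption made up for this sketch.

# Purely invented illustration of a "Time, Curve, Destination"-style event;
# this is NOT Nonlinear Labs' actual TCD protocol, just a sketch of the idea
# that one message can describe a whole high-resolution parameter transition.
from dataclasses import dataclass
from enum import Enum

class Curve(Enum):
    LINEAR = 1
    EXPONENTIAL = 2
    S_CURVE = 3

@dataclass
class TCDStyleEvent:
    destination: str   # which parameter to move, e.g. "filter_cutoff" (hypothetical name)
    target: float      # high-resolution target value, 0.0-1.0 (vs. 0-127 MIDI CC steps)
    curve: Curve       # shape of the transition
    time_ms: float     # duration of the transition

    def sample(self, start: float, steps: int):
        """Return intermediate values of the transition (only LINEAR is implemented here)."""
        return [start + (self.target - start) * i / steps for i in range(1, steps + 1)]

# Example: sweep a hypothetical filter cutoff from 0.20 to 0.85 over half a second.
event = TCDStyleEvent("filter_cutoff", 0.85, Curve.LINEAR, 500.0)
print(event.sample(start=0.20, steps=5))

The point of the sketch is only the contrast with MIDI: one message describing a whole gesture instead of a stream of 7-bit controller steps.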
