
View Full Version : 64-bit Macs in August



Beamtracer
03-27-2003, 05:43 PM
According to MacBidouille, Apple will use the Worldwide Developers Conference on June 23 to announce 64-bit Macintoshes, in configurations of 1.4GHz, 1.8GHz and dual 2.3GHz, and these machines will be in stores in August this year.

Of course, none of this can be confirmed by Apple, but June 23 is not that far away for an announcement.

64-bit Macs running IBM 970 processors, with the 64-bit Panther OS. Yippee!!!!!

luka
03-27-2003, 06:42 PM
I wonder if any apps will be hitting 64 bits by then :confused:

Beamtracer
03-27-2003, 07:16 PM
That's a question for Newtek!

luka
03-27-2003, 09:54 PM
Well, Newtek, can we ask if you are thinking in that direction?

Arnie Cachelin
03-27-2003, 11:24 PM
This is way too complex a subject to deal with fully. LW has been '64-bit clean' since v6.5, but it is not clear what advantages one would gain with a massive conversion to actually using 64-bit data everywhere. For many math ops, LW already uses 64-bit double-precision math. Where 32-bit floats can be used, it is probably faster and definitely less of a RAM burden to stay with this type of math. Perhaps running these operations on a 64-bit processor will simply double the speed?

For integer operations, I would also hope to see some speed-up of operations on 32-bit ints. If not, then using 64-bit ints will just take more RAM to run at the same speed.

It is not clear what the performance boost or penalty will be for existing, 32-bit-compiled apps running on a 64-bit CPU, so it is not obvious that building a new LW version for a 64-bit CPU will be worthwhile. If Apple does things right, 32-bit apps could get a nice boost without the serious memory-footprint bloat that could occur by just using 64-bit data types indiscriminately. A lot will depend on the compilers available too.

Lynx3d
03-28-2003, 01:43 AM
Yeah, from what I understood about CPU architectures, 64-bit alone won't really speed up floating-point math; double-precision floats take the same amount of time as 32-bit floats on current 32-bit CPUs (Mac and PC).
If it were THAT easy to boost performance, all of today's CPUs would already be 64-bit, I guess.

You should rather hope it has strong vector (SIMD) units. Just look at the P4: SSE2 has support for double-precision floats, and that gives LW a massive boost.
From what I could find out, AltiVec can only handle single-precision floats, and the PPC970 has AltiVec-compatible SIMD units, seemingly without any additions... (?)

Some interesting articles about 64-bit computing, the PPC 970, and P4 vs. G4:
http://arstechnica.com/cpu/index.html

ingo
03-28-2003, 03:30 AM
Hmm, August... isn't that when Motorola wanted to present the 2 GHz 7457 processor, without the need for a dozen fans or a huge power supply like IBM's? Just curious....

Darth Mole
03-28-2003, 05:48 AM
Isn't it odd that games consoles are already employing 128-bit CPUs, while computers are only now edging towards 64-bit? (Atari's 64-bit Jaguar came out in 1993!)

I understand that it's one thing to make a dedicated games machine; quite another to develop a stable, mainstream computing workstation. But it's an interesting disparity.

wapangy
03-28-2003, 11:26 AM
Originally posted by Darth Mole
Isn't it odd that games consoles are already employing 128-bit CPUs, while computers are only now edging towards 64-bit? (Atari's 64-bit Jaguar came out in 1993!)

I understand that it's one thing to make a dedicated games machine; quite another to develop a stable, mainstream computing workstation. But it's an interesting disparity.

Well, those are just the graphics chips as far as I know, not the main CPU. All graphics chips in computers nowadays are 128-bit.

Lynx3d
03-28-2003, 05:44 PM
Well yeah, I was curious, and in the case of the Jaguar, none of the programmable parts had 64-bit registers, so you could call those "64-bit processors" GPUs. The PS2, however, does have a real 128-bit CPU, but it is also designed for pretty specific tasks.


Originally posted by Darth Mole
Isn't it odd that games consoles are already employing 128-bit CPUs, while computers are only now edging towards 64-bit?


Well, you've been able to buy 64-bit systems for ages. Have a look at SGI, Sun, HP, IBM, Alpha (OK, that one's pretty much dead)... they've all been selling 64-bit workstations for years...

Besides that, a P4 for example has 128-bit SIMD registers (SSE2), and that's exactly what boosts LW's render times. It's not really an issue how big your integers or address pointers can be, until you need more than 4GB of RAM or your integers get so big they over- or underflow.

You should really read the "Introduction to 64-bit Computing and x86-64" article from my link above to understand why there is no magical doubling of performance, or anything like it, from 64-bit alone...

Ed M.
03-31-2003, 06:15 PM
Arnie, keep in mind that the 970 brings more than just 64-bit to the table. There are a host of other improvements -- the bus being just one of them. Anyway, the "double precision" and AltiVec discussion seems to be surfacing again, so perhaps we should revisit that for a moment.

It's my understanding that double precision buys you something like an extra 29 bits of precision. 2^29 is about 5*10^8. Therefore, double precision can tolerate about half a billion times more accumulated error before it reaches some absolute error threshold beyond which there would be simply too much error in the calculation. In other words, double precision hides a lot of code-slop and error.

The question is... is Lightwave *actually* doing 1 million calculations on a pixel before it reaches the screen? If so, then perhaps they do *need* double precision. For comparison, a single-precision float is only accurate to about 1 part in 2^24 (roughly 1.7 * 10^7). Still, many I've talked to ask whether they actually *need* to do 1 million calculations on it to produce the desired result (i.e., photo-realistic rendering).

For clarity, I'll "repost" a snippet from another NewTek forum that dealt with the topic. Beam might even remember it...

Here is what was posted:

[[[Q: Is an updated double precision-centric AltiVec unit the way to go?

A: No.

This is why:

The vector registers have room for four single precision floats to fit in each one. So for single precision, you can do four calculations at a time with a single AltiVec instruction. AltiVec is fast because you can do multiple things in parallel this way.

Most AltiVec single precision floating point code is 3-4 times faster than the usual scalar single precision floating point code for this reason. The reason that it is more often only three times faster and not the full four times faster (as would be predicted by the parallelism in the vector register I just mentioned) is that there is some additional overhead for making sure that the floats are in the right place in a vector register, that you don't have to deal with in the scalar registers. (There is only one way to put a floating point value in a scalar register.)

Double precision floating point values are twice as big (take up twice as many bytes) as single precision floating point values. That means you can only cram two of them into the vector register instead of four. If our experience with single precision floating point translates to double precision floating point, then the best you could hope to get by having double precision in AltiVec is a (3 to 4)/2 = 1.5 to 2 times speed up.

Is that enough to justify massive new hardware on Motorola's or Apple's part?

In my opinion, no.

This is especially true when one notes that using the extra silicon to instead add a second or third scalar FPU could probably do a better job of getting you a full 2x or 3x speed up, and the beauty part of this is that it would require absolutely no recoding for AltiVec. In other words, it would be completely backwards compatible with code written for older machines, give *instant speedups everywhere* and require no developer retraining whatsoever. This would be a good thing.

Even if you still think that SIMD with only two way parallelism is better than two scalar FPU's, you must also consider that double precision is a lot more complicated than single precision. There is no guarantee that pipeline lengths would not be a lot longer. If they were, that 1.5x speed increase might evaporate -- quickly.

Yes, Intel has SSE2, which has two doubles in a SIMD unit. Yes, it is faster -- for Intel. It makes sense for Intel for a bunch of reasons that have to do with shortcomings in the Pentium architecture and nothing to do with actual advantages with double precision in SIMD.

To begin with Intel does not have a separate SIMD unit like PowerPC does. If you want to use MMX/SSE/SSE2 on a Pentium, you have to shut down the FPU. That is very expensive to do. As a work around, Intel has added Double precision to its SIMD so that people can do double precision math without having to restart the FPU. You can tell this is what they had in mind because they have a bunch of instructions in SSE2 that only operate on one of the two doubles in the vector. They are in effect using their vector engine as a scalar processing unit to avoid having to switch between the two. Their compilers will even recompile your scalar code to use the vector engine in this way because they avoid the switch penalty.

Okay, so Intel has double precision in their vector unit and despite what I have said, you still think that is absolutely wonderful. But do they *really* have a double precision vector unit? The answer is not so clear.

Their vector unit actually does calculations on the two doubles in the vector in a similar "one at a time fashion" to the way an ordinary scalar unit would. They only can get one vector FP op through [every two cycles] for this reason. AltiVec has no such limitation!

AltiVec can push through one vector FP op per cycle, doing four floating point operations simultaneously (up to 20 in flight concurrently). AltiVec also has a MAF core, which in many cases does two FP operations per instruction. This is the reason why despite large differences in clock frequency, AltiVec can meet and often beat the performance of Intel's vector engine.

The other big dividend that they get from double precision SIMD is the fact that they can get two doubles into one register. When you only have eight registers this is a big deal! [PowerPC has 32 registers for each of scalar floating point and AltiVec!] In 90% of the cases, we programmers don't need more space in there and the registers the PPC provides are just fine.

Simply put, (from a developers position) we just don't need double precision in the vector engine, and we wouldn't derive much benefit from it if we had it. The worst thing that could possibly happen for Mac developers is that we get it, because that would mean that the silicon could not be used to make some other part of the processor faster and more efficient, and a lot of code would need to be rewritten for little to no performance benefit. It wouldn't be a logical tradeoff.

*The only way this would be worthwhile would be to double the width of the vector register so that we get 4x parallelism for double precision FP arithmetic.

And with respect to 3D apps *requiring* double precision...

Most 3D rendering apps do not NEED double precision everywhere. They just need it in a few places, and often (if they *really* decide to look) they may find that there are more robust single-precision algorithms out there that would be just as good. In the end they should be using those algorithms anyway, because the speed benefits for SIMD are twice as good for single precision than they are for double precision.

Apps like this can get a lot more mileage out of the PowerPC if they just increase the amount of parallelism as much as possible in their data processing. Don't just take one square root at a time, do four etc. And this isn't even taking into account multiprocessing just yet or even AltiVec for that matter -- the scalar units alone, by virtue of their pipelines, are capable of doing three to five operations simultaneously! However if you don't give them 3-5 things to do at every given moment, this power goes unused.

Unfortunately, this can be noticed in quite a few Mac applications already on the market where performance doesn't seem to be as solid as it should be. What is baffling is why many Mac developers aren't taking advantage of this power. What it boils down to is that most of these apps just do one thing at a time (for the most part), and in turn are wasting 60-80% of the CPU cycles. That's a lot of waste. What's nice is that the AltiVec unit is also pipelined, so it is important to do a lot in parallel there too. The only problem is that developers actually have to make a conscious effort to use the processor the way it was designed to be used. ]]] - (Anonymous source)

--
Ed M.

Darth Mole
04-01-2003, 04:35 AM
AAAAAAARRRRRGHHHHH!!!!

My head hurts

wizlon
04-01-2003, 06:00 AM
Arnie, congratulations on joining the luxology team, I hope you still hang around these forums from time to time as your input and knowledge has been invaluable.

Johnny
04-01-2003, 07:54 AM
I wonder whether the new Macs (sporting the 970) will be killers, or simply maimers...

The first PowerMacs were sort of like speed-bumped Quadras in performance...the first G4s weren't quite what they ought to be.

Seems the box surrounding the chip often has to catch up to the chip's potential, or maybe Apple wants to use leftover parts before designing and selling brand-new parts which better unleash the new chip.

Could this be the case when the 970 hits? Yes, more powerful chip, but what if it's strapped into a VW Beetle?

J

Lynx3d
04-01-2003, 10:03 AM
@Ed M.

interesting posting...
I don't know how many SSE2 instructions using doubles LW uses on P4s, but SSE2 definitely helps a lot, even though it has no dedicated pipeline!

I don't think adding more FPU units will really speed up anything; the articles I read stated that it's a big challenge to keep multiple execution units busy with a single thread. Actually, it's THE challenge, and the reason why vector units and simultaneous multithreading (Intel's Hyper-Threading is one approach) were invented in the first place...

But from a quick search about basic raytracing techniques, it seems you really don't need double precision for many calculations, though I don't think you can abandon it completely.

All the speculations aside, let's just wait and see, it's unimportant how stuff would perform that you can't buy :D

eblu
04-01-2003, 11:30 AM
johnny,
i think you have approached the nail and are holding a hammer above its flat circular top.

no company wants to make a machine that is technically revolutionary. those machines are too hard to sell and too expensive to make.
Apple has enough trouble selling their slightly higher-quality machines, so they make sure they don't design white elephants, and try to stay away from good technologies that are doomed to fail, like the Betamax cassette (and, arg, some of my favs: QuickDraw 3D, OpenDoc, ADB, SCSI, etc.). They purposely let other companies cut themselves on bleeding edges, and they build machines from proven components.

A dramatic increase in processor speed (let's say 4 GHz, for argument's sake... wouldn't that sound lovely?) means an entirely new motherboard (etc.), with dramatic changes to the supply chain, higher cost of design and delivery, and... a box that just can't be sold in today's market.

It's just a bad idea for a company to make a dramatic leap in speed. So I think that people expecting a dedicated hard-core 3D box from Apple will be disappointed.
I will say, however, that rendering speeds could definitely increase if someone would go hunting for the bugs/bottlenecks in the code.

Beamtracer
04-01-2003, 06:37 PM
I disagree. Apple has a history of bringing new technology to the Mac long before it appears on Windows boxes.

DVD, Firewire, even USB! Apple can move quicker than the others. Apple has already designed a new motherboard for the 64-bit machines.

I wonder how much 64-bit software will be ready when these machines go on sale. If Apple makes its presentation to developers in June, and the machines go on sale around August, will that be enough time for developers to recompile their applications?

It sounds as though Apple will move the entire PowerMac line to the new 64-bit processor. That means that 64-bit computing will propagate through the graphics community very rapidly. Owners of these 64-bit machines will be looking for 64-bit software. Developers who are first to release such software will have a tremendous marketing advantage.

Johnny
04-01-2003, 06:47 PM
Originally posted by Beamtracer
It sounds as though Apple will move the entire PowerMac line to the new 64-bit processor. That means that 64-bit computing will propagate through the graphics community very rapidly. Owners of these 64-bit machines will be looking for 64-bit software. Developers who are first to release such software will have a tremendous marketing advantage.

This sounds pretty fine...I hate to rain on it, but how much effort would this take, and how willing are developers to do it when/if they've just re-written for OSX?

You and I would realize the benefit of finally having software that takes advantage of the PPC, but do developers see this as translating into fatter wallets?

the cool scenario would be that they could crow how well their apps perform on Apple's new Macs, but wouldn't they just as easily say, "Aw, screw it...we'll just keep concentrating efforts on the Wintel platform?"

J

Ed M.
04-02-2003, 12:06 AM
Well, after Arnie's post, I get the distinct feeling that if Apple releases the 970, NewTek will somehow provide us with reasons why improvements to Lightwave can't be made. What I mean is, there always seems to be this "skepticism".

Everyone should keep in mind what the PPC970 brings to the table...

The chip is superior to the G4 in all respects. The improvements are:

- Two independent, fully functional, scalar double precision FPUs.

- Very powerful out-of-order execution

- MUCH MUCH MUCH MORE BANDWIDTH (i.e. a six times faster bus, with low latency! )

- More internal parallelism (four instead of three instructions peak throughput per clock)

- Higher core clock speeds.

- SMP

The secret is that all the goodness and advantages of 64-bitness will not become apparent until these machines become more widespread and developers start to dream up new and innovative solutions that take advantage of it. The bottom line is that we'll likely see a SIGNIFICANT boost in Lightwave performance without any modifications to the code at all, and that's just based on the sheer improvements the 970 makes over the G4. What happens when developers really start to exploit it?

--
Ed M.

Lynx3d
04-02-2003, 01:50 AM
Just found an interesting quote:
"Should Apple move from 32-bit PPC to 64-bit PPC," says Jon Stokes of Ars Technica, "Mac users should not expect the same kinds of performance gains that x86 software sees from the jump to x86-64. 64-bit PPC gives you larger integers and more memory, and that's about it. There are no extra registers, no cleaned up addressing scheme, etc."
( Source: Mac buyer's guide (http://www.macbuyersguide.com/editorials/editorial-ppc970.htm) )

Possibly it really comes down to what Ed M. says: it's the overall architecture that has to show its potential, not the number "64".
On paper the chip looks great, but it also makes some tradeoffs. However, x86 has many more limitations that need to be worked around; that's why x86-64, for example, provides twice as many registers, and therefore programs (compilers mainly, I think) need to be optimized to use them.

Ed M.
04-02-2003, 05:57 PM
Well, Lynx3d, it would seem that you were saving that to dive-bomb an unsuspecting Mac user when the time *appeared* right ;-) Perhaps I'm just being a little presumptuous, though. Fortunately, I keep up on such matters. It seems the excerpt you quoted was taken out of context; not only that, it's outdated and has since been corrected. The new quote states it a little differently:


"Should Apple move from 32-bit PPC to 64-bit PPC, Mac users should not expect the same kinds of ISA-related performance gains that x86 software sees when ported to x86-64. 64-bit PPC gives you larger integers and more memory, and that's about it. There are no extra registers, no cleaned up addressing scheme, etc., because the PPC ISA doesn't really need these kinds of revisions to bring it into the modern era."

In other words, the PPC is already there.. It was designed with a lot of forethought and thus, better to begin with.

Updated text taken from here:

http://arstechnica.com/cpu/03q1/x86-64/x86-64-5.html

DKE article here:

http://www.igeek.com/articles/Opinion/x86-64.txt

Then read this one:

http://www.igeek.com/articles/Hardware/Processors/x86-64vPPC-64.txt

As for the Windows realm moving to 64-bit, I suggest that you read these links if you haven't already:

http://vbulletin.newtek.com/showthread.php?s=&threadid=1091

(look for *my* posts at the above URL)

http://www.theinquirer.net/?article=8476

http://arstechnica.com/archive/news/1048527903.html

As for Apple... I think things are looking awesome. Let's face it: for developers, OS X is fertile ground and a chance to break from the train wreck that's Windows. Not only that, OS X is the #1 UNIX and the #2 OS behind Microsoft. Not having your wares running on it isn't too forward-looking if you ask me. I wouldn't trust a company that only seeks to capitalize on short-term profits; it's what's currently "doing in" the industry as a whole. NewTek should get on board now. Forget Microsoft and Intel if they plan to put off 64-bit until the end of the decade. By then all their DRM and other Orwellian features will have been added in, and security and privacy will be as lame as ever... OS X is FERTILE ground. It's time for developers to get with it and move along.

--
Ed M.

Beamtracer
04-02-2003, 06:55 PM
6 or 7 years ago Apple was selling machines with 8MB of RAM. Kind of laughable now, but it was reasonable at the time.

I'm currently using a bit over 2GB of RAM, as are many other Lightwave users on this forum. In computer years it doesn't take long for specifications to double. 32-bit computers will soon hit the RAM ceiling of 4GB. They can't address more RAM than that.

That's why we need to quickly migrate to a 64-bit platform, or we'll forever be stuck with a RAM limit of 4GB.

I can't understand why there is any scepticism about moving to 64-bits. Nobody can seriously claim that we won't soon need more RAM.

A few years ago people would have questioned why you would ever need more than 8 or 16MB of RAM. 2GB would have seemed unimaginable. That's a 256x increase. Yet from 2GB to 4GB is only a 2x increase. That won't take long.

Sceptics should look at recent history for confirmation that 64-bit systems will soon be in demand. Newtek (and, in Arnie's case, Luxology) should send representatives to Apple's WWDC in June to learn about migrating to this new 64-bit platform.

wapangy
04-02-2003, 06:59 PM
Yeah, lots of people (Intel) are trying to downplay 64-bit. Wait till they get 64-bit (in consumer machines); they'll be all over it, saying it's the best thing ever.

Lynx3d
04-02-2003, 07:14 PM
Hehe, no, I didn't mean to dive-bomb anyone... I'd read most of those articles before.
It's just that I'm sick of people thinking "it's 64bit, it's gotta be soooo muuuuch faaaaster..." when that's just not automatically true, and the statement expressed that quite nicely IMO, maybe a bit oversimplified.

Maybe I didn't put it clearly enough: it's actually positive that programs will benefit from a PPC970 without having to be laboriously optimized for new registers or other 64-bit-specific things (there was just never a lack of registers, in contrast to the x86 architecture), so the transition should go faster than in the PC arena...
I'm pretty aware that M$ is leaving AMD standing out in the rain with x86-64 support...


Not having your wares running on it isn't too forward-looking if you ask me.

Uhm, if I got that right you're talking about TCPA, and that's one of my bigger concerns; it just sucks *****.
Unfortunately, however, IBM and Motorola are on the list of TCPA supporters too :(
What now? Back to SGI? :D

wapangy
04-02-2003, 07:29 PM
I'm not planning on it being way faster because it's 64-bit; I'm just happy that it's not a G4, and hopefully IBM can do a better job than Motorola at pumping up the MHz and everything. I hear it's also supposed to be cheaper to make, and requires less power (I think).

I just wish apple would make a quad processor tower, then it really would be as fast as PCs (although expensive).

Ed M.
04-02-2003, 08:23 PM
I just wish apple would make a quad processor tower, then it really would be as fast as PCs (although expensive).


*As* fast? lol

It's been estimated that a single PPC 970 will be more than enough to at least match whatever is in the Wintelon world at the time. Anything related to SIMD will be off the scale for AltiVec no matter how fast the competition's CPUs are. This thing has MEGA MEGA BANDWIDTH... and AltiVec LOVES bandwidth. Make it 2-way or 4-way -- exactly how IBM planned it to be -- and it's gonna be a major stomp-a-thon. What's more, it will only be the *first* in a new PPC line. All bets are off if Apple decides to license OS X to IBM to run on some of ~their~ MONSTER 4-way and 8-way configs for the ultra-high-end... ;-) It would be a complete bloodbath for the competition if that were to happen. As for now, I'll take a dual 970 ;-)

--
Ed

Johnny
04-02-2003, 09:06 PM
Originally posted by Ed M.
*As* fast? lol

It's been estimated that a single PPC 970 will be more than enough to at least match whatever is in the Wintelon world at the time. Anything related to SIMD will be off the scale for AltiVec no matter how fast the competition's CPUs are. This thing has MEGA MEGA BANDWIDTH...


Is it known whether the boxes built around this killer will let its power be unleashed, or will they be a bottleneck, as past and current Macs have been?

J

Ed M.
04-02-2003, 09:58 PM
[[[Is it known whether the boxes built around this killer will let its power be unleashed, or will they be a bottleneck, as past and current Macs have been?]]]

I think you misunderstand a few things... A few misconceptions regarding Apple.

Apple's boards that support the G4 are VASTLY more efficient than anything in the Intel/Athlon camp when compared on percentage of maximum utilization/throughput. Apple uses the G4 to its limits. Remember, what's important is the *THROUGHPUT*. Macs have the highest throughput, percentage-wise, compared to Intel and AMD systems.

And we shouldn't forget this little *gem* that Chris Cox from Adobe posted a while back when I invited him to join some of our other discussions. Listen to what he says... I know it's a bit *dated*, but it will hopefully clear up any misconceptions you might have about Apple not designing a system that can utilize a CPU to its limits.


"133 MHz bus means nothing, except in comparison to other PowerPC chips with different bus speed. The throughput (bandwidth) of the bus is what matters.

The Athlon XP has a 200 MHz double clocked bus and uses DDR DRAM -- but can only move 700 MB/s (on a good day).

The P4 has a fast bus and dual channel RDRAM and can move 1500 MB/s -- but only on very simple operations like memcpy, memset, memcmp. For complex operations it's not much better than PC133 (where it only moves about 600 MB/s).

The PowerMac G4 has a lowly 133 MHz bus, and moves 1085 MB/s, sustained over large (32 Meg) buffers. For less optimized code it can sustain 930 MB/s (again over large buffers).

Why is the slowest bus moving so much memory? Better bus design, better DRAM controller, better cache design, and several other details that only serious solderheads would understand.

Oh, and clockspeeds -- again, the number doesn't matter. Throughput (in this case computation completion) matters. That's why a P4 at 1500 MHz is usually slower than a P3 at 1000 MHz, and an Athlon XP at 1.6 GHz runs circles around a P4 at 2.0 GHz. The PowerPC has a lower clockspeed - sure. But it also has lower latencies, larger caches, more functional units, more pipelining, better cache control, and lots of other things that make it competitive with Intel chips at over twice the clockspeed.

If you're talking about double precision floating point - then the G4 has a LOT of advantages over Intel and AMD chips. My own render code is double precision, and it's about 40% faster on the G4 than on a P4 with twice the clockspeed.

And integer calculations... Well, that's nicely demonstrated with Photoshop or the distributed.net client -- all integer, and they're faster even without the vector code.

About the only disadvantage I've found on the G4 is the ATA66 supplied on the motherboard. But I normally replace that with a Ultra160 card anyway. " - Chris Cox, Adobe programmer (SIMD/optimization master)



Sure, things have changed since... the PCs are clearly faster (for now). On the other hand, imagine if Apple continues to offer such superior, highly efficient designs for this new processor... Imagine if Apple is able to utilize the 970 to near its full potential, just as they are doing with the G4... Remember, Apple was not at fault -- they were doing the best they could with what Motorola was giving them. There's a new player in town now.

Anyway, I hope that cleared up some of your misconceptions. In other words, Apple's designs/utilization are not the bottlenecks.

--
Ed

Ade
04-02-2003, 10:17 PM
PC ppl usually say, "I don't care if my PC uses 4 fans, drains 90 watts, is loud like a jet and hot like a frying pan; in the end it's still faster than a Mac by almost double"....

Another argument is that many apps like Adobe AE and Lightwave could be better optimised to take advantage of DP CPUs. Hence the same filters in FCP are much faster to render than in AE. When Shake comes out properly we will see this argument in practice.

I wish Newtek would get serious and announce a Mac team and their commitment to optimising it and getting those missing plugins into the Mac version. Lightwave sold more on Mac than PC, and if you look at these boards, the Mac ppl are very active. I hear the main Mac team, comprising one guy, moved to Luxology?

Hence the 7.5b update was a flop.

ingo
04-03-2003, 02:26 AM
>>PC ppl usually say, "i dont care if my pc uses 4 fans, drains 90 watts, is loud like a jet and hot like a fry pan, in the end its still faster than a mac by almost double"....

Well, if we really get the 970 IBM chip instead of the 2GHz Moto chip this August, we can easily compete with this; a bunch of loud fans and 60 watts of power drain should be enough to get the PC guys to switch to the Mac ;)

Lynx3d
04-03-2003, 04:35 AM
There were also rumors that Apple was considering Itaniums... 130 watts thermal design power, now that's pretty hot :D

However, my 2 CPUs also consume a max of about 130W in total, but watercooling keeps them silent... probably more silent than most Macs out there (and certainly the average PC; you only get what you pay for...)

Beamtracer
04-03-2003, 05:53 AM
It was PC Mag that spread the rumors about Apple and Itanium. They had no evidence or foundation for the rumors. They were just trying to stir things up.

Now, the Windows users would know more about this than me, but I believe AMD's 64-bit offering is running a bit late. This will mean Apple is first to market with a 64-bit machine for the masses. They will get a tremendous publicity coup from this.

I can just imagine the Windows users salivating when they see it!

Ade
04-03-2003, 07:38 AM
Itanium is a flop.
Opteron will be late. I'd rather stay with PPC; IBM has the best fabs of anyone, and even NVIDIA is going to them.

The x86 architecture is so old, but it gives me the ****s that it's not dead yet and is making Apple look slow.