Brutman wrote:You know normally I wouldn't look too kindly on a flame war on this site.
It's only a flame war when someone takes it personally - I'm not, I'm hoping you're not either... Of course being a backwoods New Englander I can insult you fifteen ways from Sunday and not even be aware I'm doing it... There's an attitude of "rip it all down and build it up again" that can really put off people who aren't used to the "Wellt, ya cahnt geht theyah frum heeyah..." mentality.
Besides, what did you think the smiley was for?

-- 'twas a gentle ribbing. Come on, have a sense of humor about it.
Brutman wrote:Yes - I am a C programmer. Even worse, I've been a C programmer for 20 years, professionally for 18. And my day job is to write C code for use in the C runtime provided by GLIBC. I reek of C ..
My condolences. C and I have never gotten along; I can do it, but the needlessly cryptic syntax, absurdly loose typing and total lack of keyword checking or forward declaration make it so ridiculously error prone I'm amazed anything written in it even works right in the first place.
But I've been a long-time Wirth fan -- structured semantic syntax, strict type checking, forward declarations.
It's why my work of the past decade in PHP has basically felt like slumming... especially given how it seems most PHP/HTML/CSS developers just vomit up code any old way.
Brutman wrote:And to make it worse, I am a reasonable PowerPC assembler programmer, and have been for nearly as long. And I love having 32 registers, nearly all identical. (There is a dirty little secret with R0, but we'll ignore that for now.)
Whereas I have trouble keeping track of nine.
Brutman wrote:I am your nemesis.
The anti-Jason? Joy.
Brutman wrote:Almighty God did not intend for anybody to divine the purpose of code by reading straight hex.
Not talking straight hex, I'm talking assembler, not machine language (and yes, there is a difference). Even understanding the opcodes from legacy 68K processors, looking at PPC source code makes my eyes bleed -- needlessly convoluted and over-complex... usually juggling so much in the air at once it's a miracle the processor can keep track of it, much less the programmer.
Brutman wrote:As for compilers, they hate constraints on register allocation - that adds to the complexity of the compiler. It hurts optimization and increases the time it takes to compile code when the compiler has to honor silly rules; nothing bogus about it.
I've heard that complaint before -- and it makes NO sense to me... fewer registers mean fewer opcodes and less confusion, making the code SIMPLER to write, not more complex (though admittedly, with a wee bit more memory traffic). The times people have trouble coding for it usually come from not thinking hard enough about breaking the job into smaller tasks, and from trying to do too much at once.
Kind of like how M$ C# targeting generic x86 is an order of magnitude faster than GCC with processor optimizations turned on? It's one of those things where, thanks to GCC having more targets than bullets, it's not really very well optimized for any one target -- MOST of the x86 code for its C operations being disastrously inefficient and poorly thought out. You'd think it was a crappy port from a RISC chip instead of an efficient port to the target processor or something.
(See that Quake III port that was floating around a few years ago that was 20% faster just because it was built using Borland C++ instead of GCC)
I mean it's bad when the inefficient FPC (at least compared to Delphi) can school GCC.
Brutman wrote:The segmenting method that x86 uses works well enough to extend the memory model and to make relocation easy. It fails miserably on the part where it keeps programs from stepping on each other, or from having wrap-around accidents. It's just a constant accident waiting to happen and it makes handling large amounts of data and code tedious. Perfectly suitable for 1981 when code was 5 K and machines had 128, but really nasty for larger programs.
... and a total non-issue once you migrate to using the segment registers as SELECTORS on protected mode CPUs... especially since to 'wrap' you'd either have to screw up your range checking or allocate a size in excess of the available RAM... before you even reach the wraparound you should be out of bounds. Once you're in 32 bit on the 386, your offset register spans the max size of available memory. You'd only have that problem if you're DUMB ENOUGH to do the 'flat memory' trick of allocating a 4 gig descriptor at address 0.
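For anyone following along, here's roughly what a selector points at -- a loose sketch (NASM-style syntax, hypothetical values) of one 8-byte GDT descriptor. The limit field is the whole point: any access past it raises a fault instead of silently wrapping.

    ; hypothetical 64K, byte-granular, writable data descriptor
    myDesc: dw      0xFFFF          ; limit bits 15:0 -> 64K minus one
            dw      0x0000          ; base bits 15:0
            db      0x00            ; base bits 23:16
            db      0x92            ; access: present, ring 0, writable data
            db      0x00            ; flags + limit bits 19:16 (byte granularity)
            db      0x00            ; base bits 31:24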
If you are wrapping unintentionally, your code is poorly thought out in the first place. God forbid a coder be expected to include range checking... But of course they don't, which is how we get buffer overrun vulnerabilities in everything from two-decade-old networking code that's STILL present inside OSX (despite having been fixed in the core BSD it was ripped off of a decade or so ago), to something as simple as JPEG decoder logic allowing privilege elevation.
THANKS C.
Though that's treading into the Wirth vs. AT&T argument -- do you want the compiler to include range checking of pre-declared variable and element sizes by default, completely preventing the CHANCE of an overflow? Or do you want the compiler to make code that not only fails to range check ANYTHING, but allows typos, bad or completely lacking memory handling practices, accidental assignments, cross-type assignments with memory overlaps messing up packed registers, etc, etc, etc... to be turned into an executable that may not show that error until a month or even years down the road, when it blows up and screws everyone?
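To put that in instruction-level terms -- a hedged sketch (16-bit NASM-style; the buf and rangeError labels are made up for illustration) of the store a Wirth-style compiler emits for an array declared 0..255, versus what C gives you:

    cmp     bx, 255         ; index against the DECLARED upper bound
    ja      rangeError      ; out of range? trap loudly, right now
    mov     [buf+bx], al    ; the store only happens with the index proven sane
    ; a C compiler emits just the MOV -- anything past buf+255 silently
    ; stomps whatever lives next door. Hello, buffer overrun.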
As you can tell, NOT a fan of C dialect languages.
Brutman wrote:Which is why 32 bit operating systems don't just implement a flat address space; they implement virtual memory, and they do it on a per process basis. (Each process has it's own address space.) This is a realization that 32 bit flat models where everybody is in the same address space is a bad thing.
Which is exactly what the 286 added with its protected mode in 1982... and thus the GPF was born.
Brutman wrote:Simple question - if you want to calculate an offset from one pointer to another, you need to do math.
You mean like "effective addresses", which can be computed ON THE FLY against an offset after loading the pointers? (LEA being so efficient it's often used INSTEAD of ADD or SUB?)
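A rough sketch of what I mean (32-bit NASM-style, register choice arbitrary) -- one LEA does the whole base + index*scale + displacement computation without even touching the flags:

    lea     eax, [ebx + esi*4 + 8]  ; eax = ebx + esi*4 + 8, one op, flags untouched
    ; versus the ALU way:
    ;   mov     eax, esi
    ;   shl     eax, 2
    ;   add     eax, ebx
    ;   add     eax, 8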
Brutman wrote:In a segmented model you need to take the segment registers into account, which may be different.
Which is a problem HOW exactly? Pre-protected mode you shouldn't be screwing with single vars larger than 64k anyways, since you're 16 bit... while from 32 bit onward you've got each 16 bit selector carrying its own protection and access rules (instead of that mapping nonsense that offers LESS protection if you fake ring-level), with its own allocated size and a zero index at the start of the allocated memory segment.
Brutman wrote:In a segmented architecture at best you know you are in the same segment already
That is a flawed assumption that should NEVER be made -- EVER.
EVAARRR -- if you're thinking that way I can see how you don't like segment registers. Yer not using them right. The LxS flavors exist for a reason -- passing a selector AND an offset as a pointer -- because you should NEVER assume you are in the same segment. In fact, the different segment registers exist FOR that reason... DS being for your heap and/or read segment, CS being for your code, and ES being for your write target. EVERY pointer you pass should ALSO be passing a segment. That's what LES, LDS, etc. are FOR.
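In practice that looks something like this (16-bit NASM-style sketch; destPtr is a made-up label standing in for a far pointer somebody passed you):

    destPtr dd      0                 ; far pointer in memory: offset word, then segment word

            les     di, [destPtr]     ; ONE instruction loads ES and DI together
            mov     byte [es:di], 'X' ; write through the pointer's OWN segment --
                                      ; no assuming ES already pointed anywhere useful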
Brutman wrote:and you still have an ALU operation to compute the offset.
Maybe once at the start, but once you've MADE a pointer you shouldn't be running that calculation again unless it's an EA based on something like BP and an IMMED -- and apart from passing values to calls on your heap and/or stack, there's no reason to be doing that... at least so far as a base offset is concerned.
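Which is to say the one legit recurring case -- stack parameters -- where the addressing hardware redoes the BP + IMMED math for free on every access. A minimal sketch (16-bit NASM-style; myProc and the argument layout are hypothetical):

    myProc: push    bp
            mov     bp, sp
            mov     ax, [bp+4]      ; one word argument, just past saved BP and the return address
            add     ax, [bp+6]      ; the next word argument -- same BP+IMMED trick
            pop     bp
            ret     4               ; near call, two words of arguments cleaned off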
Brutman wrote:By the way, that Celeron you speak so highly of is a RISC processor under the covers. Take a look at the architecture - it busts apart your CISC instructions into RISC-like micro-ops that it hides. Which allows it to use shadow registers (gasp! unmarked general purpose registers!), multiple pipelines, store queues, etc.
Which is basically CISC making RISC useful -- you'll notice I basically said that -- though from the rest of your responses it's almost like you COMPLETELY missed what I said from the latter half of my post onward.
Brutman wrote:Sorry - was thinking PowerPC, ARM, MIPs, and all of the other, more sane architectures that implement virtual memory using page tables, not segment registers. Those registers that you mention are holdovers from an architecture 30 years ago. Normal (and I am going to assert x86 is not normal) architectures don't use segment registers - they let the virtual memory system handle that detail.
... and I think that's where we differ, really. I HATE page maps. They are such a royal pain in the ass it's no wonder there's a near total dearth of quality low level software -- or even regular software -- on anything but x86. Worse, they seem MORE prone to memory leaks than selectors -- though with so many people using C or similar languages it's no wonder 99% of software out there bleeds memory like a steel sieve. (It's the best kind of sieve)
We make fun of Steve for it, but when he did his "Developers, Developers, Developers, Developers" rant, he hit it on the head... and I've rarely come across anyone who does low level stuff on a regular basis that has kind words for ARM, MIPS or PPC.
Brutman wrote:What? RISC has been the direction for chips since the late 1980s, early 1990s. You can't possibly claim that a processor that takes x86 on the surface and busts it into micro-ops is a CISC processor at the core.
You missed what I was saying -- I said right there RISC won in the end, but only by putting CISC over it. RISC chips that use RISC-style instructions with hordes of GP registers as the top-level interface for programmers sat firmly in the "also ran" category from RISC's inception as a twinkle in IBM's eye back in the '70s right up until the advent of the smartphone -- and even then there's been little to blow my skirt up (but then, I've never been a phone person in the first place). ARM continues to feel like a rinky-dink underpowered tinkertoy in everything it's in... I'd have thought by now we'd have handhelds at least as powerful as decade-old desktops... which we do in terms of the video hardware (the overglorified PowerVR chips -- shades of 1998; an S3 Savage is more video card), but in terms of the CPU in these things, not so much.
In a lot of ways I like to think of the CISC machine atop RISC as a JIT optimizing compiler, so that a mere mortal might have a chance of making decent code with it -- something that even most RISC high level language compilers barely manage (if at all; again, see the fat bloated disaster known as GCC).
But then, I've always hated linking, I've always hated makefiles, and I've always hated the stupidity of having to keep separate header files from my linked library code... NOT that GCC seems to bother with dead code removal from included libraries either, just lumping it all in there any old way. (NOT that ANY modern compiler is much better -- yay for 500k console apps for "hello world")
Brutman wrote:It is the best of both worlds in that you get to use your ancient x86 instructions, but every modern x86 since the Pentium Pro and the Pentium II are RISC processors under the covers.
That is what I said... it's kind of my point. I wouldn't voluntarily program RISC directly in the first place; it takes a CISC front-end to make RISC useful. Of course if you want access to all those extra bits, there's always SSE, SSE2, SSE3, SSE4... the latter two of those making AltiVec look like training wheels.
Oh, and the PII was not RISC under the hood... though yes, the Pentium Pro was... as was the PIII. In terms of widespread adoption AMD actually beat them to the punch with the K6, though yes, Cyrix was actually ahead of the curve on x86 over RISC.
Though as I said, it does come down to comfort zone... Even when I was working as an Apple tech during the G3 and G4 eras (hangs head in shame) I was never once enticed by anything other than x86... From 1982 to now there has never once been a computer on another CPU that had the combination of QUALITY software, CHOICE of software and hardware, EXPANDABILITY of hardware, at a reasonable PRICE to even make me look at it... well, except maybe the Amiga, and that was more for novelty factor than serious computing.
When I did have access to them, I was unimpressed at best, disgusted at worst. That REALLY holds for the G3s and G4s -- but as a service tech I did tend to see them on their worst days.
The only thing about Adobe web development products that can be considered professional grade tools are the people promoting their use.