PostPosted: Fri Sep 22, 2017 11:07 am 
Joined: Sun Apr 13, 2008 11:12 am
Posts: 6277
Location: Seattle
Bregalad wrote:
And [C is] not good at optimizing for hardware level either - the only reason it optimizes to decent code is that a HUGE amount of effort went into doing this, not because the language facilitates it.
Arguing that C is not good for "optimizing things for hardware" requires being willfully ignorant. Generating machine code for a modern computer is a huge, broad optimization problem, and it is not something that can, or even should, be expressed in the programmer's language itself.

Or are you seriously going to argue that the varying size of an L1 cache line from microarchitecture to microarchitecture is something that the programmer should be forced to think about?


PostPosted: Fri Sep 22, 2017 11:55 am 
Joined: Fri Nov 12, 2004 2:49 pm
Posts: 7230
Location: Chexbres, VD, Switzerland
Quote:
Generating machine code for a modern computer is a huge, broad optimization problem, and it is not something that can, or even should, be expressed in the programmer's language itself.

I didn't state otherwise.

lidnariq wrote:
Or are you seriously going to argue that the varying size of an L1 cache line from microarchitecture to microarchitecture is something that the programmer should be forced to think about?

I never said that.


PostPosted: Fri Sep 22, 2017 12:12 pm 
Joined: Sun Apr 13, 2008 11:12 am
Posts: 6277
Location: Seattle
Then justify your argument that "[C is] not good at optimizing for hardware level either" as a meaningful statement, with something to contrast it against, as opposed to just a useless statement of opinion.


PostPosted: Fri Sep 22, 2017 12:28 pm 
Joined: Fri Nov 12, 2004 2:49 pm
Posts: 7230
Location: Chexbres, VD, Switzerland
What I meant is that, when I started programming in C, I was tricked into thinking that if I wrote programs a certain way it made them optimized, when in fact I couldn't have been further from the truth.

For instance at first glance you'd think that:
Code:
   ++i;

will use the "inc" instruction and be more optimized
Code:
   i += 1;

will use the "add" (or watthever) instruction. But actually, it's completely wrong, the instruction used has nothing to do with the C code and the ++ operator is basically completely useless, they might just as well not have it in the language at all.

Similarly, you can think that
Code:
while(!something)

uses the zero flag and is faster than
Code:
while(something == 0)

But in reality it has nothing to do with that and makes no difference either way, so you might as well use the verbose, explicit comparison and make it clear what your code is doing.

The "const" keyword will make you think it helps the compiler to optimize the code, by putting things in ROM instead of RAM, or by avoiding useless copy of data. Actually it does none of this and this keyword is useless when it comes to optimization. Any data with "const" keyword is still writeable, just not through the name you're declaring.

The ANSI C standard requires all operations to be performed on at least 16-bit ints by default, killing any hope of getting decent performance on any 8-bit CPU.

Another problem is how arguments are passed by value, killing many optimisation possibilities, especially when passing large objects. You need to either pass a pointer (wastes resources), copy the data to the stack (wastes resources), or use C++ and the tedious "const&" all over the place. I know in some cases those copies can still be optimized out, but only at the price of a major effort from the compiler and full control-flow analysis of what your program is doing, definitely not thanks to the language.
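To illustrate the three options (the struct and function names here are made up just for the example):
Code:
#include <cstdint>

struct Sprite { std::uint8_t tiles[64]; };          // a "large" object for the example

// Pass by value: the caller copies all 64 bytes.
std::uint8_t first_by_value(Sprite s)          { return s.tiles[0]; }

// Pass a pointer: no copy, but an extra indirection.
std::uint8_t first_by_pointer(const Sprite *s) { return s->tiles[0]; }

// C++ const reference: no copy, same call syntax as by-value.
std::uint8_t first_by_ref(const Sprite &s)     { return s.tiles[0]; }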

Because data layout is explicit, C kills optimizations that the compiler could otherwise do behind your back to improve performance. For instance, in 6502 it's common to store LSB and MSB of arrays in two separate 8-bit arrays. C could never achieve such an optimisation, because arrays of 16-bit ints are explicit to the user and accessible with pointer arithmetic, whereas higher-level languages such as Python (or any other language without explicit pointers) COULD optimize it that way.
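To illustrate (again a made-up example): the split layout has to be written by hand, because the memory layout of an array of 16-bit ints is part of what the language promises you:
Code:
#include <cstdint>

// What the language guarantees: 16-bit values stored interleaved (lo, hi, lo, hi, ...).
std::uint16_t positions[64];

// What a 6502 programmer writes by hand: two parallel 8-bit arrays.
std::uint8_t positions_lo[64];
std::uint8_t positions_hi[64];

void set_position(std::uint8_t i, std::uint16_t v)
{
    positions_lo[i] = v & 0xFF;   // low byte
    positions_hi[i] = v >> 8;     // high byte
}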

Those are just the examples I thought of; there are probably a lot more. So there's nothing wrong with C as such, but it forces you to forgo some optimizations, and makes you think you're optimizing code when you're not.


PostPosted: Fri Sep 22, 2017 1:27 pm 
Joined: Sun Apr 13, 2008 11:12 am
Posts: 6277
Location: Seattle
Your argument seems to boil down to the tautology that because C is not machine code, C is not machine code...

But knowing the underlying machine code doesn't tell you anything about whether that code would perform well either!

The x86 LOOP and JCXZ instructions used to be the preferred (higher-performance) way to construct loops. But on modern machines (really, everything since the Pentium) they're slower than building the loop out of generic instructions instead.

If C had given you the ability to say "no, I want to use the INC instruction instead of an ADD instruction", it'd be giving you an even greater way to shoot yourself in the foot than just memory management. It'd be a portability nightmare, comparable to writing machine code directly instead of using a useful language.

Did you know that x86-64 deprecated several instructions altogether? The byte 0x40, which meant "INC (E)AX", is now used for something else (it became one of the REX prefixes). Other architectures don't provide an INC instruction at all. Why do you think this is something the programmer should be thinking about?


PostPosted: Fri Sep 22, 2017 1:28 pm 
Joined: Sun Jan 22, 2012 12:03 pm
Posts: 5718
Location: Canada
Bregalad wrote:
There's zero reason to force you to use a painful language just because it's used in the industry

You're getting a bit ridiculous here. "Painful language"? I use C++ in my own hobby stuff because I like the way it works and what it can do.

Give me a break.

Bregalad wrote:
(I can't optimize because ++i should use inc and i+=1 should not) ...the ++ operator is basically completely useless, they might just as well not have it in the language at all.

No, to be optimized it should use whatever method of incrementing is most efficient or practical for the situation, and that's what it does.

I think you're mistaking "optimization" for "direct control of assembly output".

I'd agree ++ is not a great feature, and I think it's a little bit vestigial, and there's actually something in it very much worth complaining about (e.g. prefix/postfix side effects), but in general it's very easy to use and perfectly easy to understand.

Bregalad wrote:
(I can't optimize because I can't specify when to use the zero flag with ! vs ==0)

Again, it does use the zero flag when it's efficient or practical to do so. Again, this is not an optimization concern, but an assembly output concern.

Most C compilers actually provide inline assembly as an extension for addressing that kind of concern if you need it. (MSVC has started to phase it out with its 64-bit compilers though, in favour of other techniques.)
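For example, with GCC or Clang on x86 it looks like this (the syntax is compiler- and architecture-specific, so treat it as a sketch):
Code:
unsigned increment(unsigned x)
{
    // GCC/Clang extended asm: forces an actual INC instruction on x.
    // "+r" keeps x in a register that is both read and written.
    asm volatile ("incl %0" : "+r" (x));
    return x;
}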

Bregalad wrote:
The "const" keyword will make you think it helps the compiler to optimize the code, by putting things in ROM instead of RAM, or by avoiding useless copy of data. Actually it does none of this and this keyword is useless when it comes to optimization. Any data with "const" keyword is still writeable, just not through the name you're declaring.

Not really true. On a platform that has ROM, or can write-protect memory, const data very often does go somewhere that is physically made read-only.

In some cases you can't protect it in that way, e.g. a const temporary on the stack. Attempts to assign it to a non-const pointer are a type safety violation and the compiler will stop you. You're allowed to explicitly override this, but in that case you're deliberately sabotaging yourself.

There is the problem that direct memory access is not prohibited in general, and the most frequent source of errors here comes from arrays having no bounds checking (e.g. buffer overflow attacks).

Of course in C++ they added mechanisms that provide bounds checking for arrays, so that problem is already solved if you're not obstinate about it.
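For example (a quick sketch using the standard containers):
Code:
#include <array>
#include <cstdio>
#include <stdexcept>

int main()
{
    std::array<int, 4> a {1, 2, 3, 4};

    std::printf("%d\n", a.at(2));   // fine: bounds-checked read
    try {
        a.at(10);                   // out of range: throws instead of trashing memory
    } catch (const std::out_of_range &) {
        std::printf("caught out_of_range\n");
    }
}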

Bregalad wrote:
The ANSI C standard requires all operations to be performed on at least 16-bit ints by default, killing any hope of getting decent performance on any 8-bit CPU.

No it doesn't. Maybe you're thinking that the default "int" is going to be 16-bit, but if the result is going back into an 8-bit type the compiler is by no means required to do any 16-bit calculations at all.
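For instance (a sketch; what any particular compiler actually emits is its own business):
Code:
#include <cstdint>

std::uint8_t add(std::uint8_t a, std::uint8_t b)
{
    // The abstract machine promotes a and b to int, but because the result is
    // stored back into 8 bits, a compiler may legally do the whole thing in
    // 8-bit arithmetic (the "as-if" rule): the observable result is identical.
    return a + b;
}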

Bregalad wrote:
Another problem is how arguments are passed by value, killing many optimisation possibilities, especially when passing large objects. You need to either pass a pointer (wastes resources), copy the data to the stack (wastes resources), or use C++ and the tedious "const&" all over the place. I know in some cases those copies can still be optimized out, but only at the price of a major effort from the compiler and full control-flow analysis of what your program is doing, definitely not thanks to the language.

Okay so you just described that you can pass by value, by reference, or inline. What is the other optimization possibility that is being killed here?

In a lot of other high-level languages you don't even have options about whether something is passed by value or by reference. Java is an example that bothers me, where primitive and object types are passed in different ways without any syntactic difference. Python, on the other hand, has types that are implicitly mutable or not, and often you don't find out which until a (cryptic) runtime exception occurs (e.g. is this a bytes or a bytearray?).


Bregalad wrote:
For instance, in 6502 it's common to store LSB and MSB of arrays in two separate 8-bit arrays. C could never achieve such an optimisation, because arrays of 16-bit ints are explicit to the user and accessible with pointer arithmetic, whereas higher-level languages such as Python (or any other language without explicit pointers) COULD optimize it that way.

C can do this through macros, to some degree.

C++ actually can do this efficiently and effectively though.
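Something like this hypothetical wrapper, for instance (not a real library, just the shape of the idea):
Code:
#include <cstddef>
#include <cstdint>

// Stores 16-bit values as two parallel 8-bit arrays, 6502-style, while the
// calling code still reads and writes plain uint16_t values.
template <std::size_t N>
class SplitU16Array {
    std::uint8_t lo[N] {};
    std::uint8_t hi[N] {};
public:
    std::uint16_t get(std::size_t i) const   { return lo[i] | (hi[i] << 8); }
    void set(std::size_t i, std::uint16_t v) { lo[i] = v & 0xFF; hi[i] = v >> 8; }
};

SplitU16Array<64> positions;   // used like an array of 16-bit values, stored split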


Anyway, I honestly find this nitpicky attack on C really strange. What are you comparing against? Is there a language that you use all the time that you really don't have any qualms with? Everything here you've picked on was either trivial or wrong. If you named any language I could say a thousand similar things about it. What's the point? They've all got some problems, but well-used languages have plenty of good ways to cope with those problems (i.e. practice effective coding styles, and learn the ropes).


PostPosted: Fri Sep 22, 2017 2:05 pm 
Joined: Mon Jan 30, 2017 5:20 pm
Posts: 294
Location: Colorado USA
I've read through a lot of books and tutorials on languages in the C family, and they were all only about making command-line tools; none of them mentioned anything about plotting pixels or creating an actual window for my program to run in. What is the equivalent of the WinAPI for Linux?


PostPosted: Fri Sep 22, 2017 2:09 pm 
Joined: Sun Apr 13, 2008 11:12 am
Posts: 6277
Location: Seattle
There are many options (raw X calls, XAW, TK, GTK, QT)...

but for what you seem to want to do, you should use SDL.


PostPosted: Fri Sep 22, 2017 2:13 pm 
Joined: Thu Mar 31, 2016 11:15 am
Posts: 197
itt: a guy who hasn't completed his game in ~13 years arguing with a guy who hasn't completed his game in ~4 years over which language is most effective to program in

really jogs the nog

use whatever the hell language works for you. if you latch on to C, use C. If you latch on to Python, use Python. doesn't matter what you use so long as you're using it. really, the bottleneck is not the language but the willpower and motivation of the person using it.

also, this whole language wars shit got old 30 years ago. stop wasting time on it and go outside or something

Quote:
I've read through a lot of books and tutorials on languages in the C family, and they were all only about making command-line tools; none of them mentioned anything about plotting pixels or creating an actual window for my program to run in. What is the equivalent of the WinAPI for Linux?

SDL2 library. SFML if you're doing C++.


PostPosted: Fri Sep 22, 2017 2:23 pm 
Joined: Sun Jan 22, 2012 12:03 pm
Posts: 5718
Location: Canada
pubby wrote:
itt: a guy who hasn't completed his game in ~13 years arguing with a guy who hasn't completed his game in ~4 years over which language is most effective to program in

Why would you even say this?


PostPosted: Fri Sep 22, 2017 2:38 pm 
Joined: Thu Mar 31, 2016 11:15 am
Posts: 197
I'm in a rude, sour mood today and hoped a cheap shot would put an end to the argument between you two.

My point was, it's foolish to think the choice of programming language is a magic bullet that allows one to work faster/better or produce better code. Or text editors. Or operating systems. Or gym equipment, et cetera. The difference is negligible compared to human factors like motivation and perseverance. A lot of time is spent arguing about programming languages but in reality it's all for naught.

I know you and Bregalad mostly weren't really talking about this. It was mostly you correcting his misunderstandings and such. So don't take what I say too seriously; I'm just shitposting cause I'm grouchy.


PostPosted: Sat Sep 23, 2017 3:51 am 
Joined: Mon Jan 03, 2005 10:36 am
Posts: 2962
Location: Tampere, Finland
pubby wrote:
itt: a guy who hasn't completed his game in ~13 years arguing with a guy who hasn't completed his game in ~4 years over which language is most effective to program in

You might have had an argument there if either game had actually been programmed in any of the languages being discussed.

_________________
Download STREEMERZ for NES from fauxgame.com! — Some other stuff I've done: kkfos.aspekt.fi


PostPosted: Sat Sep 23, 2017 6:31 am 
Joined: Mon Apr 01, 2013 11:17 pm
Posts: 437
Bregalad wrote:
What I meant is that, when I started programming in C, I was tricked into thinking that if I wrote programs a certain way it made them optimized, when in fact I couldn't have been further from the truth.

It's still true that how you write code can matter, but compilers nowadays are smart enough to see through the easy cases like ++i; versus i += 1;. There are still things that can trip up a modern compiler, though. For example, indirection via pointers is harder to optimize (because the compiler can't always prove that two pointers don't refer to the same data), so you can gain some speed by reducing indirection. Of course, you should only worry about that after you've come up with a good algorithm. I don't think any compiler is smart enough to replace a slow algorithm with a fast one.
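A tiny made-up example of the pointer problem:
Code:
// The compiler has to assume *counter might alias data[0], so it reloads
// data[0] from memory on every iteration instead of caching it in a
// register. (C's "restrict" qualifier exists to promise that they don't.)
void tally(const int *data, int *counter, int n)
{
    for (int i = 0; i < n; ++i)
        *counter += data[0];
}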

I know it's already been mentioned, but on modern CPUs the ideal instruction for a given operation is often not obvious. On some x86 CPUs, INC can be slower than ADD, and the compiler will take this into account when it generates code.

Bregalad wrote:
The "const" keyword will make you think it helps the compiler to optimize the code, by putting things in ROM instead of RAM, or by avoiding useless copy of data. Actually it does none of this and this keyword is useless when it comes to optimization. Any data with "const" keyword is still writeable, just not through the name you're declaring.

Converting a pointer to a non-const object to const and then back to non-const is defined behavior: since the object is non-const, any non-const pointer to the object may be used to change its value.

Casting a pointer to a const object to non-const and then writing through it is undefined behavior: the compiler assumes that your program will never do this, so the results will depend on how the compiler decided to optimize the const object. It may behave as if it were never declared const in the first place, or it may behave as if it's constant sometimes and not other times, or it may truly be constant, or attempting to change its value may crash your program.

Declaring an object as const doesn't physically prevent writes; it tells the compiler that it should assume you will never write to it. How much of a difference that makes depends on the compiler.
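To make the two cases concrete (a sketch, using C++'s const_cast for clarity; a plain C cast behaves the same way):
Code:
int       variable = 1;
const int constant = 2;

void demo()
{
    // Defined: the underlying object is non-const, so casting the const away
    // and writing through the pointer is allowed.
    const int *p = &variable;
    *const_cast<int *>(p) = 5;

    // Undefined: the underlying object really is const; the compiler may have
    // folded it, placed it in read-only storage, or anything else.
    const int *q = &constant;
    *const_cast<int *>(q) = 5;
}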

Bregalad wrote:
The ANSI C standard requires all operations to be performed on at least 16-bit ints by default, killing any hope of getting decent performance on any 8-bit CPU.

This sounds more like a problem with cc65's optimizer than anything to do with the C language itself.

Bregalad wrote:
For instance, in 6502 it's common to store LSB and MSB of arrays in two separate 8-bit arrays. C could never achieve such an optimisation, because arrays of 16-bit ints are explicit to the user and accessible with pointer arithmetic.

The 6502 is very limited compared to the minicomputers C was originally used with. It's unlikely anyone involved in designing C or the 6502 intended the two to be used together.


PostPosted: Sat Sep 23, 2017 8:12 am 
Joined: Tue Feb 07, 2017 2:03 am
Posts: 248
The point of C is that it doesn't matter whether I write
i++
++i (in a single-line, non-assignment case)
i = i + 1
i += 1

because the compiler will just work it out for me and do the hard work of figuring out exactly the right way to do it in this instance. That is what is good about C.
However, you need to remember that C was designed for line editors, back in the old days when you had a terminal and had to select the line you wanted to edit; it appeared on the line down the bottom, you edited it, and then it got written back for you. See vi for an example of this. Hence why C will let you do for (int i=0,j=0; i<100; j+=++i) { on a single line: back then, editing multiple lines was a real pain. That has no relevance today, which is why no C programmer would do such nonsense, and any lead would bash you with a stick if you did. Because yes, you should do
Code:
int j = 0;
for (int i = 0; i < 100; ++i)
{
    j += i;
    // ...stuff
}


Also, if you want to see good C++17 code on a 6502, watch https://www.youtube.com/watch?v=zBkNBP00wJE and you will see how the C++ really evaporates when you use const tricks and other tools to help the compiler optimise for you.
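For example, one of the tricks that kind of talk leans on is pushing work to compile time so nothing is left to do at run time (a made-up sketch, not taken from the video):
Code:
#include <array>
#include <cstdint>

// Build a 256-entry lookup table entirely at compile time; only the finished
// bytes end up in the binary (i.e. in ROM on a ROM target).
constexpr std::array<std::uint8_t, 256> make_table()
{
    std::array<std::uint8_t, 256> t {};
    for (int i = 0; i < 256; ++i)
        t[i] = static_cast<std::uint8_t>((i * i) >> 8);   // stand-in for a real curve
    return t;
}

constexpr auto table = make_table();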

It seems your issue is more that you want to be a "good" programmer, so your ability to see that i++ allows an inc instead of a clc/adc should produce better code, and damn that compiler for ignoring you and just emitting the best code in either case regardless of how "good" you are. That is silly and a waste of your time. What you should focus on is coming up with the algorithm and approach that solves the task at hand in the fastest way possible, and let the compiler work out whether an inc or an adc is better for that case on that line; i.e. things like converting ptr = y*320 + x*8 + row inside a loop into a simple fixed increment of ptr each iteration, to save all that recalculation work every time around the loop.
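Something along these lines, say (illustrative 320-byte-wide buffer; the exact numbers depend on your layout):
Code:
#include <cstdint>

std::uint8_t screen[200 * 320];

// Naive: recomputes y*320 + x for every pixel.
void fill_slow(std::uint8_t colour)
{
    for (int y = 0; y < 200; ++y)
        for (int x = 0; x < 320; ++x)
            screen[y * 320 + x] = colour;
}

// Strength-reduced: one pointer that just steps forward each iteration.
void fill_fast(std::uint8_t colour)
{
    std::uint8_t *ptr = screen;
    for (int i = 0; i < 200 * 320; ++i)
        *ptr++ = colour;
}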


PostPosted: Sat Sep 23, 2017 8:34 am 
Joined: Tue Feb 07, 2017 2:03 am
Posts: 248
DementedPurple wrote:
I've read through a lot of books and tutorials on languages in the C family, and they were all only about making command-line tools; none of them mentioned anything about plotting pixels or creating an actual window for my program to run in. What is the equivalent of the WinAPI for Linux?
That is because the language has no GUI or built-in windowing system, just like the 6502 has no windowing system, GUI, or graphics system. If you want to learn a graphics system, you need to look up tutorials for the one you choose. The WinAPI is for the most part the best; it is kind of why we all use Windows. If you are talking about Win32 GDI(+), then Charles Petzold's books are amazing: they are tomes, well written, very easy to read, and they will give you solid coding methods and design methodologies to follow.
https://www.amazon.com/Programming-Wind ... bc?ie=UTF8 or the 6th edition if you can find it.
MFC: don't touch it with a barge pole. WPF: go with C# in Visual C# and forget the rest. XAML is a nice improvement and I find it much easier to get along with, though I feel its learning curve is higher than WPF's. But both give you the drag-drop, double-click, edit-and-continue workflow for a super sweet development experience.

Qt is OK, but it has a lot of pain points: it's clunky and will cause you to pull your hair out if you don't understand how to write "qmake" scripts and its custom build system, and you need to be well versed in C++ (QString vs CString vs BSTR vs std::string, for example). SDL is fairly straightforward. X is written in "cardboard" and is archaic as anything, but there should be a tonne of tutorials for it if you can find them (the Wayback Machine might assist).

It sounds like what you really want is to run DOSBox, read Abrash's Black Book, and use "Mode X"; however, DX7 would probably also suit your style if you could get it to run.

