You are viewing a single comment's thread from:

RE: Prediction: The problematic scaling of Moore's Law will be compensated by AI advances

in #ai · 8 years ago

You're right. We humans use rules of thumb (heuristics) to speed up our thinking considerably, although at the cost of having to make exceptions from time to time. That's why natural-language comprehension has been so difficult to program.

If AI is brought into the picture as a way to keep Moore's Law going, the new speed-up will come at the cost of inaccuracy. Edge cases will proliferate, and programmers will have to become good at catching them and coming up with error-trapping routines to deal with them.

In fact, skilled edge-case-catching and error-trapping might well become a field of its own, as cryptography is now. Interestingly, the same skills are needed for both: becoming good at spotting subtle bugs and unexpected behavior.

Thank goodness for open-source collaboration and code auditing! We're going to need both more and more....


the new speed-up will come at the cost of inaccuracy.

Not necessarily. Current CPU architectures are far from optimal; most chip designs are suboptimal, including GPUs. Even their power efficiency (something the end user can fix) is left unoptimized when it could easily be improved. Intel launches a series of chips that all run at the same voltage, whether it's the 2 GHz or the 3 GHz chip from the same die. Yet the 2 GHz part could run fine at -0.2 V at stock speed, and the user will be burning all those extra watts for the life of the chip, for no reason at all.
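To put a rough number on that, here's a back-of-the-envelope sketch in C. It assumes the usual dynamic-power approximation P ≈ C·V²·f and a 1.2 V stock voltage purely for illustration; the exact figures will vary per chip.

/* Back-of-the-envelope estimate of the dynamic-power saving from a -0.2 V
 * undervolt at the same clock, using the standard approximation
 * P ~ C * V^2 * f.  The 1.2 V stock voltage is an illustrative assumption. */
#include <stdio.h>

int main(void) {
    double v_stock = 1.20;            /* assumed stock core voltage (V)  */
    double v_under = v_stock - 0.20;  /* the -0.2 V offset from the post */

    /* At a fixed clock, capacitance and frequency cancel out of the
     * comparison, so the power ratio reduces to (V_under / V_stock)^2. */
    double saving = 1.0 - (v_under * v_under) / (v_stock * v_stock);

    printf("Dynamic power saved at the same clock: ~%.0f%%\n", saving * 100.0);
    return 0;
}

With those assumed numbers the saving comes out around 30% of the dynamic power, which is why leaving every chip at the worst-case voltage is such a waste.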

Suboptimal implementations also apply to compilers and the code we run. A gcc compilation at -O3 cannot produce results close to what you can get by hand-tuning the asm in performance-critical code. Sometimes (say, in mining software) you see gains of 2-3x or more. The adoption of newer instruction sets is also extremely slow. I was looking the other day at Firefox 49 beta: they have just added an SSSE3 scaling filter for video, which gives a good speed bump. But the SSSE3 instruction set is 8-9 years old.
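As a concrete illustration (the flags below are stock gcc options; the exact output naturally depends on the compiler version), even a trivially vectorizable loop only gets full SIMD treatment if you ask for it:

/* A trivially vectorizable loop.  What the compiler emits depends on flags:
 *
 *   gcc -O2 scale.c                -> typically scalar code (older gcc)
 *   gcc -O3 scale.c                -> auto-vectorized, but only with the
 *                                     baseline x86-64 ISA (SSE2)
 *   gcc -O3 -march=native scale.c  -> may use SSSE3/SSE4/AVX if the CPU
 *                                     supports them
 *
 * Even at -O3, the generated code in hot loops is often far from what
 * careful hand-written asm or intrinsics can achieve. */
#include <stddef.h>

void scale(float *dst, const float *src, float k, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}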

There's actually tons of code where SSE/AVX is used but the instructions run in scalar rather than packed mode. Packed mode is when 2, 4, or 8 pieces of data get processed in the same cycles. I did some tests earlier this year and found the behavior was similar in most major compilers (gcc/icc/llvm): if you are not using an array, SIMD use is pretty low, even when the code is profiled and the whole logic and flow is analyzed before optimization. And that's the simple stuff that can be done. Rearranging code so it executes with the full performance benefit is something I'd expect AI to do.
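Here's a minimal sketch of the difference, using a made-up multiply-add kernel (the names and the length-divisible-by-4 assumption are mine, just for illustration):

/* Scalar vs. packed SSE: the same multiply-add done one float at a time
 * with the *_ss intrinsics versus four floats per instruction with *_ps. */
#include <immintrin.h>
#include <stddef.h>

/* Scalar mode: only one lane of each XMM register does useful work. */
void madd_scalar(float *dst, const float *a, const float *b, size_t n) {
    for (size_t i = 0; i < n; i++) {
        __m128 x = _mm_load_ss(&a[i]);
        __m128 y = _mm_load_ss(&b[i]);
        __m128 d = _mm_load_ss(&dst[i]);
        _mm_store_ss(&dst[i], _mm_add_ss(_mm_mul_ss(x, y), d));
    }
}

/* Packed mode: all four lanes work on every instruction.
 * Assumes n is a multiple of 4 to keep the sketch short. */
void madd_packed(float *dst, const float *a, const float *b, size_t n) {
    for (size_t i = 0; i < n; i += 4) {
        __m128 x = _mm_loadu_ps(&a[i]);
        __m128 y = _mm_loadu_ps(&b[i]);
        __m128 d = _mm_loadu_ps(&dst[i]);
        _mm_storeu_ps(&dst[i], _mm_add_ps(_mm_mul_ps(x, y), d));
    }
}

Compilers happily emit the first form for non-array code; the second form is where the real throughput is.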

For example, SHA-256 hashing is currently done linearly on CPUs, with no packed instructions used. Yet if you run multiple SHA-256 computations together, you can pack the data from each stage and batch-process them in SIMD fashion. Intel was showing reductions like 8 cycles per hash down to 2 cycles with 256-bit AVX if you apply such a technique properly... I believe the CPU mining algorithms out there today are totally crippled because they work on one hash at a time instead of batching multiple hashes at each stage. And imagine that this is supposed to be highly optimized software! There's too much room for improvement all around (hardware, code, compilers, etc.).
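To make the multi-buffer idea concrete, here's a sketch of just the round core for eight independent messages using AVX2. It is not a full implementation (no message schedule, padding, or lane transposition), and the names are mine, not from any real library:

/* Multi-buffer SHA-256 sketch: eight *independent* hashes sit one per
 * 32-bit lane of a 256-bit AVX2 register, so one round updates all eight
 * states at once.  Compile with -mavx2. */
#include <immintrin.h>

/* Rotate each 32-bit lane right by n bits. */
static inline __m256i rotr32(__m256i x, int n) {
    return _mm256_or_si256(_mm256_srli_epi32(x, n), _mm256_slli_epi32(x, 32 - n));
}

/* One SHA-256 round for 8 lanes.  a..h are the working variables, each
 * holding that word of 8 different hash states; w is the current
 * message-schedule word of the 8 messages; k is the round constant
 * broadcast to all lanes, e.g. _mm256_set1_epi32(K[t]). */
static inline void sha256_round_x8(__m256i *a, __m256i *b, __m256i *c, __m256i *d,
                                   __m256i *e, __m256i *f, __m256i *g, __m256i *h,
                                   __m256i w, __m256i k) {
    /* Sigma1(e) and Ch(e,f,g) = (e & f) ^ (~e & g) */
    __m256i s1 = _mm256_xor_si256(_mm256_xor_si256(rotr32(*e, 6), rotr32(*e, 11)),
                                  rotr32(*e, 25));
    __m256i ch = _mm256_xor_si256(_mm256_and_si256(*e, *f),
                                  _mm256_andnot_si256(*e, *g));
    __m256i t1 = _mm256_add_epi32(_mm256_add_epi32(_mm256_add_epi32(*h, s1),
                                                   _mm256_add_epi32(ch, k)), w);

    /* Sigma0(a) and Maj(a,b,c) = (a & b) ^ (a & c) ^ (b & c) */
    __m256i s0 = _mm256_xor_si256(_mm256_xor_si256(rotr32(*a, 2), rotr32(*a, 13)),
                                  rotr32(*a, 22));
    __m256i maj = _mm256_xor_si256(_mm256_xor_si256(_mm256_and_si256(*a, *b),
                                                    _mm256_and_si256(*a, *c)),
                                   _mm256_and_si256(*b, *c));
    __m256i t2 = _mm256_add_epi32(s0, maj);

    *h = *g;  *g = *f;  *f = *e;
    *e = _mm256_add_epi32(*d, t1);
    *d = *c;  *c = *b;  *b = *a;
    *a = _mm256_add_epi32(t1, t2);
}

The per-round instruction count barely changes, but every instruction now advances eight hashes at once, which is where the batching win comes from.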

Good points... so there is room for optimization that has nothing to do with "going AI." I suppose the chip and compiler companies haven't gotten around to those optimizations because Moore's Law is still doing its thing. I'm sure they'll focus a lot harder on optimizing once they bump into the limits of physics.

Yep... plenty of room all around, both for humans (who won't do the optimizations, for whatever reason) and for AI (which can do everything humans don't, plus more)...
