Prediction: The problematic scaling of Moore's Law will be compensated for by AI advances

in #ai • 8 years ago

As of 2016, the general consensus is that Moore's law will stop "scaling" in the mid-term future, as electronics start to hit the atomic barrier. Packing transistors at an ever-increasing density is acknowledged as a dead end for increasing performance.

I will use the blockchain as a time-stamping mechanism and predict that this won't affect the performance increases that end users experience. Instead, the user experience will be massively improved, despite the Moore's law scaling barrier, for the following reasons:

1) AI will initially assist in (and then completely take over) the design of vastly superior processor architectures that increase the processing efficiency of a given number of transistors. New instruction sets, lower cycle counts per instruction, new branch prediction and cache management logic, more aggressive out-of-order execution, etc.

2) AI will make advances in the way executable code is generated and executed (compiler software and the processor interface that receives the instructions), reversing the long-running trend of "Wirth's Law".

3) AI will make breakthroughs in physics and in materials that can be used in electronics, leading to much higher clock frequencies - an area that has been pretty stagnant for the past 10-12 years.


As a side note and prediction, these advances will allow the large-scale adoption of blockchain solutions (which typically have a serious scaling problem) even on readily available consumer hardware.


You're right. We humans use rules of thumb (heuristics) to speed up our thinking considerably, although at the cost of having to make exceptions from time to time. That's why natural-language comprehension has been so difficult to program.

If AI is brought into the picture as a way to keep Moore's Law going, the new speed-up will come at the cost of inaccuracy. Edge cases will proliferate, and programmers will have to become good at catching them and coming up with error-trapping routines to deal with them.

In fact, skilled edge-case-catching and error-trapping might well become a field of its own, like cryptography is now. Interestingly, the same skills are needed to do both - namely, becoming good at spotting subtle bugs and unexpected behavior.

Thank goodness for open-source collaboration and code auditing! We're going to need both more and more....

"the new speed-up will come at the cost of inaccuracy."

Not necessarily. Current CPU architectures are far from optimal - basically most chip designs are suboptimal, including GPUs. Even their power efficiency (which the user can fix) is suboptimal when it could easily have been optimized. Intel launches a series of chips that all run at the same voltage, whether it's the 2GHz or the 3GHz chip from the same die, yet the 2GHz part could run fine with -0.2V at stock speed. The user will be burning all those extra watts for the life of the chip, for no reason at all.

Suboptimal implementations also apply to compilers and the code we run. A gcc compilation at -O3 cannot produce results close to what one can get by hand-tuning the asm in performance-critical code; sometimes (say, in mining software) you see gains of 2-3x or more. Adoption of newer instruction sets is also extremely slow. I was looking the other day at the Firefox 49 beta: they have just added an SSSE3 scaling filter for videos, which gives good speed bumps - but the SSSE3 set is ...8-9 years old.
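To make the SSSE3 point concrete, here is a minimal sketch of the kind of filter involved (my own illustration, not Firefox's actual code - the function names and the two-tap weights are invented). Each output pixel is a weighted average of two neighbouring source pixels; the scalar loop produces one pixel per iteration, while the SSSE3 path produces eight per iteration using pshufb to pair neighbours and pmaddubsw to multiply-accumulate:

```c
#include <stdint.h>
#include <tmmintrin.h>   /* SSSE3 intrinsics */

/* dst[i] = (w0*src[i] + w1*src[i+1] + 64) >> 7, with w0 + w1 == 128
 * and both weights in [0,127].  Scalar: one pixel per iteration. */
static void scale_row_scalar(uint8_t *dst, const uint8_t *src,
                             int n, int w0, int w1)
{
    for (int i = 0; i < n; i++)
        dst[i] = (uint8_t)((w0 * src[i] + w1 * src[i + 1] + 64) >> 7);
}

/* Same filter with SSSE3: eight pixels per iteration. */
static void scale_row_ssse3(uint8_t *dst, const uint8_t *src,
                            int n, int w0, int w1)
{
    /* (w0, w1) byte pairs, as expected by pmaddubsw */
    const __m128i weights = _mm_set1_epi16((short)((w1 << 8) | w0));
    const __m128i round   = _mm_set1_epi16(64);
    /* pshufb mask turning bytes 0..8 into pairs (0,1),(1,2),...,(7,8) */
    const __m128i pairs   = _mm_setr_epi8(0,1, 1,2, 2,3, 3,4,
                                          4,5, 5,6, 6,7, 7,8);
    int i = 0;
    for (; i + 16 <= n; i += 8) {
        __m128i in  = _mm_loadu_si128((const __m128i *)(src + i));
        __m128i ab  = _mm_shuffle_epi8(in, pairs);     /* neighbour pairs  */
        __m128i acc = _mm_maddubs_epi16(ab, weights);  /* w0*a + w1*b (x8) */
        acc = _mm_srli_epi16(_mm_add_epi16(acc, round), 7);
        _mm_storel_epi64((__m128i *)(dst + i), _mm_packus_epi16(acc, acc));
    }
    scale_row_scalar(dst + i, src + i, n - i, w0, w1);  /* leftover pixels */
}
```

The speed-up comes from pmaddubsw doing eight multiply-accumulates per instruction instead of one.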

There's actually tons of code where SSE / AVX is used but the instructions run in scalar rather than packed mode. Packed is when 2, 4, or 8 sets of data are processed in the same cycles. I did some tests earlier this year and found that the behavior was similar in most major compilers (gcc/icc/llvm): unless you are working on an array, SIMD use is pretty low - even when the code is profiled and the whole logic and flow is analyzed before optimization. And that's the simple stuff. Rearranging code so it executes with the full performance benefits is something I'd expect AI to do.
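As a tiny illustration of scalar vs packed (my own example, not from any particular codebase): summing an array of floats. The scalar version consumes one float per addition; the AVX version consumes eight per vaddps, keeping eight partial sums and reducing them once at the end.

```c
#include <immintrin.h>

/* Scalar: one addition per element (addss under the hood). */
float sum_scalar(const float *x, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* Packed AVX: eight additions per vaddps, reduced once at the end. */
float sum_avx(const float *x, int n)
{
    __m256 acc = _mm256_setzero_ps();
    int i = 0;
    for (; i + 8 <= n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(x + i));

    /* horizontal reduction of the eight partial sums */
    __m128 lo = _mm_add_ps(_mm256_castps256_ps128(acc),
                           _mm256_extractf128_ps(acc, 1));
    lo = _mm_add_ps(lo, _mm_movehl_ps(lo, lo));
    lo = _mm_add_ss(lo, _mm_shuffle_ps(lo, lo, 1));
    float s = _mm_cvtss_f32(lo);

    for (; i < n; i++)      /* leftover elements */
        s += x[i];
    return s;
}
```

Note that the packed version changes the order of the floating-point additions, which is one reason compilers will not make this transformation on their own without flags like -ffast-math.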

For example, SHA256 hashing is currently done linearly on CPUs, and no packed instructions are used. Yet if you run multiple SHA256 hashes together, you can pack the data from each stage and batch-process them in SIMD fashion. Intel was showing reductions like 8 cycles/hash down to 2 cycles with 256-bit AVX when such a technique is properly applied... I believe the CPU mining algorithms out there today are totally crippled because they work on one hash at a time instead of multiple hashes batch-processed at each stage. And imagine that this is supposed to be highly optimized software! There is too much room for improvement all around (hardware, code, compilers, etc.).
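A rough sketch of that batching idea (my illustration, not Intel's actual multi-buffer code): instead of trying to vectorize inside a single hash, you place the same working variable of eight independent SHA-256 states in the eight 32-bit lanes of an AVX2 register, so one round's arithmetic advances eight hashes at once. Shown here for the Σ1 and Ch pieces of the compression round:

```c
#include <stdint.h>
#include <immintrin.h>

#define ROTR32(x, n) (((x) >> (n)) | ((x) << (32 - (n))))

/* One message at a time (scalar): Σ1(e) and Ch(e,f,g) from the SHA-256 round. */
static inline uint32_t big_sigma1(uint32_t e)
{
    return ROTR32(e, 6) ^ ROTR32(e, 11) ^ ROTR32(e, 25);
}

static inline uint32_t ch(uint32_t e, uint32_t f, uint32_t g)
{
    return (e & f) ^ (~e & g);
}

/* Eight messages at a time: each 32-bit lane of a __m256i holds the 'e'
 * (or 'f', 'g') word of a different, independent SHA-256 state. */
static inline __m256i big_sigma1_x8(__m256i e)
{
    __m256i r6  = _mm256_or_si256(_mm256_srli_epi32(e, 6),  _mm256_slli_epi32(e, 26));
    __m256i r11 = _mm256_or_si256(_mm256_srli_epi32(e, 11), _mm256_slli_epi32(e, 21));
    __m256i r25 = _mm256_or_si256(_mm256_srli_epi32(e, 25), _mm256_slli_epi32(e, 7));
    return _mm256_xor_si256(_mm256_xor_si256(r6, r11), r25);
}

static inline __m256i ch_x8(__m256i e, __m256i f, __m256i g)
{
    /* (e & f) ^ (~e & g); andnot computes (~e) & g in one instruction */
    return _mm256_xor_si256(_mm256_and_si256(e, f),
                            _mm256_andnot_si256(e, g));
}
```

The full multi-buffer scheme applies the same trick to every step of the round and to the message schedule. The catch is that the eight hashes must be independent - which is exactly the situation in mining, or on a server hashing many items at once.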

Good points...so there is room for optimization that has nothing to do with "going AI." I suppose the chip and compiler companies haven't gotten around to those optimizations because Moore's Law is still doing its thing. I'm sure that they'll focus a lot harder on optimizing once the limits of physics are bumped into.

Yep... plenty of room all around, both for humans (that won't do the optimizations for whatever reasons) and AI (which can do all that humans don't, plus more)...

Nice analysis! Also, I think that benefits from AI will come from identifying trends and learning, from big data sets, things that we can't easily spot. Advances in medicine gained through non-user-identifiable data would be one application. I can see AI being a real game changer. I'm not sure how it is going to affect Moore's law, but I would expect it to advance the user experience as you suggest.

I'll bet quantum computing and AI processes will change things a lot, I'll be interested to see how much.

It would seem to be a good fit to have AI processes learn and predict things from data sets stored in the blockchain. The Quotient coin was originally planned for this, but I guess it proved to be too difficult to do.

"I'll bet quantum computing and AI processes will change things a lot, I'll be interested to see how much."

The people most surprised by what is coming will actually be those of us in IT - even if we expect big leaps (because what's coming is even bigger than what we expect).

"Normal" people will be able to absorb what is coming and shrug it off as "technological progression". They will have no "limits" on their expectations...
