The Future of Microprocessors

in science •  2 years ago

This talk is astonishing. The speaker, Sophie Wilson, really knows her stuff, and she’s charming.

So, in a nutshell, the reason computers aren't running at 30GHz is not a physical limitation of the die. It's the difficulty of exposing the light required to etch the structures: at 7 nanometers, it would simply take too long to manufacture these super-fast CPUs.

Basically, the problem is economics, not physics, which is great. I find this encouraging because we're at the point where a breakthrough will open the floodgates. There is incentive to come up with a whole new process that's economical. Wilson seems skeptical, and I don't blame her.

Core Limits

Wilson also asserts that adding more cores doesn't help. This is a specific claim about scalar bottlenecks.

Her assertion is that CPU utilization maxes out at around four cores when browsing; adding cores beyond that doesn't help. But manufacturers keep cramming cores into our devices anyway.

For that specific claim, I agree that adding cores doesn't help. I also agree that this is true in most use cases for most consumers running most applications. But this doesn't address background processes.
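The four-core observation lines up with Amdahl's law, which bounds the speedup from parallel hardware by the serial fraction of the work. A quick illustrative sketch (the 75% parallel fraction is an assumption for the example, not a figure from the talk):

```python
def speedup(p, n):
    """Amdahl's law: speedup for parallel fraction p on n cores."""
    return 1 / ((1 - p) + p / n)

# If 75% of a workload parallelises, extra cores quickly stop helping:
for n in (1, 2, 4, 8, 16):
    print(n, round(speedup(0.75, n), 2))
# The speedup can never exceed 1 / (1 - 0.75) = 4x, no matter how many
# cores you add, and most of that is already captured by four cores.
```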

On an application level, there's no advantage. But on an OS level there is.

For applications to take advantage of the rising core numbers, there has to be a paradigm shift in how programmers tackle problems.
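One version of that shift is making parallelism explicit in application code, e.g. farming independent compute-bound tasks out to separate processes instead of writing a single sequential loop. A minimal Python sketch (the task and pool setup are illustrative, not a prescription):

```python
from concurrent.futures import ProcessPoolExecutor

def cpu_heavy(n):
    """Stand-in for a compute-bound task: sum of squares below n."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    # Eight independent tasks can occupy eight cores at once,
    # because no task depends on another's result.
    inputs = [10**6] * 8
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(cpu_heavy, inputs))
    print(len(results), "tasks done")
```

The catch, and the reason a paradigm shift is needed, is that most application logic is not naturally a list of independent tasks.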


I don't really know anything about this stuff but it actually makes a lot of sense.
I've been trying to research cost-effective gaming systems, and some dual-core processors actually perform better than some quad-cores while costing less (for the most part).
It seems like the manufacturers are pouring more water into an already-full cup, relying on consumers to assume more cores means more value when they're probably overpaying.


Yeah, that is happening. But there are also power users who will make sure they use everything they're paying for. I'm sure there's a car analogy that applies.

I am pretty sure the frequency problem is related to resistance, not precision. For at least the last five years, the energy cost per instruction has halved every 18 months. Multiple processors scale beautifully when doing 3D graphics.

Typical GPUs now have hundreds of parallel units. Cryptographic and media-decoding algorithms are highly parallelisable too. Matrix math naturally scales.
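Matrix multiplication shows why it scales: each output row depends only on one row of A and all of B, so rows can be computed on separate cores with no coordination. A toy sketch (real code would call a tuned library such as BLAS):

```python
def matmul_row(a_row, B):
    """Compute one row of A @ B; needs only a_row and B."""
    return [sum(a * b for a, b in zip(a_row, col)) for col in zip(*B)]

def matmul(A, B):
    # Each matmul_row call is independent, so this loop could be
    # dispatched across as many cores (or GPU units) as there are rows.
    return [matmul_row(row, B) for row in A]
```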

All these cores will be running blockchain, distributed computing, and storage nodes in the future. There is no future in centralised databases, and these distributed databases will hold immense value. Security requires a lot of encryption. If it isn't obvious to you yet, you aren't looking. Information security is the big issue now.

Number one, to everyone.

Blockchains are limited by single-threaded performance.
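That limit follows from a data dependency: each block's hash incorporates the previous block's hash, so block n cannot be computed before block n-1 no matter how many cores are available. A toy sketch of the serial chain (SHA-256 chaining, illustrative only):

```python
import hashlib

def chain(blocks):
    """Hash a list of block payloads, linking each to its predecessor."""
    prev = b"\x00" * 32  # genesis placeholder
    hashes = []
    for data in blocks:
        # prev is an input to this step, forcing strictly serial order.
        prev = hashlib.sha256(prev + data).digest()
        hashes.append(prev)
    return hashes
```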


But, that's on purpose, right?

