You are viewing a single comment's thread from:

RE: Are our ideas running out?

in #science • 7 years ago

I’ve done some of my own research on the subject & would love to share my opinions based on what I know and have learned. Granted, some may disagree with the conclusions I’ve drawn, and I would like nothing more than to discuss them further and hear your input. Moore’s Law has recently waned due to several complex technological difficulties in optimizing transistors and integrated-circuit performance. Despite this, I believe it is possible to maintain Moore’s Law until at least 2030. Improvements in computing architecture, in the material composition of transistors, and in the cost of producing these chips will be the driving factors in extending Moore’s Law & the computational consequences that follow.
With regard to the way we compute: packing more and more transistors onto a single chip doesn’t increase the clock speed. Multicore designs do increase speed; they can be used, for example, to let multiple programs on a desktop system execute concurrently. Multithreading makes it easier to logically have many things going on in a program at once, and one thread can absorb the dead time of another. Effective gains in performance come from using both to speed up a single program. Ultimately, though, multithreading is faking parallel computing and doesn’t address the heart of the problem: in parallel computing, many CPUs are all working at the same time. Still, exploiting concurrency within a single process on a single CPU with shared memory is at the very least a move in the right direction (see the Python sketch below).

Parallel computing may be the only approach with the potential to resurrect Moore’s Law and provide a platform for future economic growth and commercial innovation. Graphics processing units (GPUs), a type of parallel computer, enable continued scaling of computing performance in today’s energy-constrained environment. The critical need is to build them in an energy-efficient manner, with many processing cores, each optimized for efficiency rather than serial speed, working together on the solution of a problem. A fundamental advantage of parallel computers is that they efficiently turn more transistors into more performance: doubling the number of processors makes many programs run nearly twice as fast, whereas doubling the number of transistors in a serial CPU yields only a modest increase in performance, at tremendous expense in energy.

The challenge is for the computing industry to adopt this new platform. After generations of serial programming there is enormous resistance to change, as in many fields of study, since it requires a break with longstanding practice. Converting the enormous volume of existing serial programs to run in parallel is a formidable task, made even more burdensome by the scarcity of programmers trained in parallel programming.
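To make the threads-versus-processes distinction concrete, here is a minimal Python sketch of my own (purely illustrative; the function name and workload sizes are arbitrary). It times the same CPU-bound task run serially, across threads, and across processes. On standard CPython, the threaded version gains little because the global interpreter lock serializes CPU-bound threads, while the process pool can actually occupy multiple cores:

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def busy_work(n):
    """Purely CPU-bound task: sum of squares up to n."""
    return sum(i * i for i in range(n))

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label:>9}: {time.perf_counter() - start:.2f}s")

TASKS = [2_000_000] * 8  # eight identical chunks of work

if __name__ == "__main__":
    # Serial baseline: one core grinds through every chunk in turn.
    timed("serial", lambda: [busy_work(n) for n in TASKS])

    # Threads: concurrency, but CPython's GIL lets only one thread
    # execute Python bytecode at a time, so CPU-bound threads mostly
    # take turns and little speedup appears.
    with ThreadPoolExecutor(max_workers=4) as pool:
        timed("threads", lambda: list(pool.map(busy_work, TASKS)))

    # Processes: true parallelism. Each worker is a separate
    # interpreter running on (potentially) a separate core.
    with ProcessPoolExecutor(max_workers=4) as pool:
        timed("processes", lambda: list(pool.map(busy_work, TASKS)))
```

On a typical four-core machine the process-pool run finishes several times faster than the serial baseline, while the threaded run does not; that gap is exactly the sense in which multithreading "fakes" parallelism for CPU-bound work.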
Current transistors are primarily composed of silicon, a material that eventually reaches a physical limit on the number of transistors you can fit in a given area at the atomic level. Alternatively, carbon-based materials such as graphene have been proposed for transistors. When a single monolayer of carbon atoms is extracted from nonconductive bulk graphite, it exhibits electrical properties consistent with semiconductor behavior, making it a feasible substitute for silicon. More research must be performed, because graphene is not yet commercially viable for a number of reasons, one being that its resistivity increases, decreasing electron mobility. If that is achieved, integrated circuits will shrink dramatically, since the same number of transistors will fit in a significantly smaller area (a rough sense of the scale is sketched below).
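For a rough sense of what "transistors per area" means at these scales, here is a back-of-envelope Python calculation. The footprint (pitch) values are loose illustrative assumptions of my own, not measurements of any real process:

```python
# Back-of-envelope: how many square transistor footprints of a given
# pitch fit in one square centimetre? (Illustrative numbers only.)
NM_PER_CM = 1e7              # 1 cm = 10 million nanometres
AREA_NM2 = NM_PER_CM ** 2    # area of 1 cm^2 expressed in nm^2

for label, pitch_nm in [
    ("older silicon node", 100),    # assumed ~100 nm footprint
    ("near silicon's limit", 10),   # assumed ~10 nm footprint
    ("hypothetical graphene", 5),   # assumed ~5 nm footprint
]:
    count = AREA_NM2 / pitch_nm ** 2
    print(f"{label:>22}: ~{count:.0e} transistors per cm^2")
```

Halving the footprint quadruples the density, which is why even a modest shrink from a new material buys so much.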
Not only is the density of transistors in integrated circuits vital; the price per transistor will also play an imperative role in how computing changes. Finding ways to lower production costs will go a long way toward expediting overall progress in computing. Despite researchers observing that exponential growth is getting harder to achieve even with constant expansion of research efforts, the free market will ultimately find a way to meet the demands of consumers. Sharing ideas and research efforts will help counterbalance declining research productivity and maintain constant economic growth, and as companies realize this, they will be incentivized to cooperate with one another. I think cost is a factor that often gets overlooked when it comes to extending Moore’s Law (a toy calculation of what holding the trend until 2030 would imply follows below).
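To put the 2030 claim in perspective, here is a toy compounding calculation, assuming the classic formulation that transistor counts double roughly every two years and that cost per transistor falls in step (both are assumptions, not guarantees):

```python
# Toy compounding: what "Moore's Law until 2030" would imply, assuming
# a doubling every two years starting from 2018 (illustrative only).
start_year, end_year, years_per_doubling = 2018, 2030, 2

doublings = (end_year - start_year) / years_per_doubling
factor = 2 ** doublings
print(f"{doublings:.0f} doublings by {end_year}: "
      f"~{factor:.0f}x the transistors, "
      f"roughly 1/{factor:.0f} the cost per transistor")
```

Six doublings works out to about 64x the transistors for the same die area, which is the scale of improvement the cost side would have to keep pace with.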
At times, the information I’ve provided isn’t strictly pertinent to the topic of discussion; I found it difficult to adhere to the scope of Moore’s Law without touching on these tangential ideas. Some people might view other factors as more imperative to Moore’s Law, but these are the ones that caught my attention. I believe quantum computing will also play a major role in the limits of computation & is a whole topic in itself. What I also find captivating is the crossover between nanotechnology and biology: some studies have shown that biological micro-cells are capable of impressive computational power while remaining energy efficient, and taking advantage of that evolutionary efficiency may help us tackle future computational challenges. The future of computing is bright and we all play a part in it!
