Here's how Meltdown and Spectre work and how your computer is vulnerable to them

in #meltdown · 6 years ago (edited)


You may have heard these terms: they are two newly discovered flaws found in a wide range of processors from Intel, ARM, and AMD. Almost every system in the world is vulnerable to the attacks.

These two bugs were first reported in June last year and kept private from the public to give developers time to create fixes. macOS has been patched since December, but the changes can cause performance issues: applications that rely heavily on system calls may run noticeably slower once protected against Meltdown, and there is no way to fix the issue without this overhead.

What they could do

The main upshot of these bugs is that both of them allow unprivileged processes to read memory they should not be able to access. The computer has a kernel, the core of the operating system, which runs in a privileged mode: it protects processes from stomping all over each other and guards access to device drivers. If you are running a server, for example one providing web services to many users, these bugs are dangerous because they allow any user to run malicious code and potentially view secret data belonging to other users or to your own servers. If you are a desktop user, the outcome is that you could hit a compromised website running crafted JavaScript, which then starts reading secret data out of your web browser.


How it works

Intel and other processor developers have long been making each processor generation faster than the last. Around the turn of the millennium, clock speeds kept getting faster and faster, because raw clock rate was an easy way to tell users the processor was faster. In recent years, however, they have hit the limits of silicon and can no longer simply push the clock rate up. Instead, in parallel, there have been a lot of less well-known changes that make the processor more efficient per clock cycle.

One of these is cache memory. Accessing main memory is actually quite slow, because the RAM typically resides elsewhere on the motherboard, and it is a long path for the processor to reach all those gigabytes of main memory. So the processor has a cache, a small chunk of fast memory that sits right next to the core and keeps copies of data that lives in main memory. There are usually also level 2 and level 3 caches, which sit slightly further from the core and closer to main memory; each level is slower but bigger, because cache memory built into the processor's own silicon is expensive.
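The hit-versus-miss cost described above can be sketched with a toy cache model. The latency numbers here are illustrative assumptions, not real hardware figures:

```python
# Toy model of a CPU cache sitting in front of main memory.
# The cycle counts are assumptions for illustration, not real hardware numbers.

CACHE_HIT_CYCLES = 4      # assumed cost of a cache hit
MEMORY_CYCLES = 200       # assumed cost of going all the way to RAM

class ToyCache:
    def __init__(self):
        self.lines = set()        # addresses currently held in the cache
        self.total_cycles = 0

    def read(self, addr):
        if addr in self.lines:
            self.total_cycles += CACHE_HIT_CYCLES   # fast: data is next to the core
        else:
            self.total_cycles += MEMORY_CYCLES      # slow: fetch from main memory...
            self.lines.add(addr)                    # ...and keep a copy for next time

cache = ToyCache()
for _ in range(10):
    cache.read(0x1000)            # the first read misses, the next nine hit
print(cache.total_cycles)         # 200 + 9*4 = 236
```

The first access pays the full trip to memory; every repeat is cheap, which is exactly the asymmetry the attacks later measure.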

The second trick is something called pipelining of instructions. In older processors, decoding and executing an instruction might take several clock cycles, one instruction at a time. Pipelining allows the processor to execute multiple instructions simultaneously by taking advantage of the fact that many instructions do not interfere with each other. Beyond this, pipelining splits each instruction into multiple smaller operations that the processor can handle efficiently, and as long as the instructions' effects get assembled back in the correct order, you get much quicker execution.

The last trick, and the one that really matters for this class of bugs, is speculative execution. Remember that instructions can't be allowed to affect each other. It is quite common for code to contain an instruction that depends on the state of memory: it could do one thing or another. When you have pipelining going on and you hit such a branch, the processor needs to figure out which path it will probably go down. Speculative execution tries to guess which way the branch will go and starts executing the instructions of one branch before it knows for certain that this is the correct one. It makes the guess by keeping track of how the branch has behaved before, in a structure called the branch history buffer, which the CPU stores and maintains.

Internally, the odds are that if a branch took one path previously, it will go that way again more often than not. What makes this interesting is that the code is executed speculatively before the conditional has completed, and that conditional could be a security-critical check. The processor is supposed to roll back the speculative execution when it turns out to have done something it shouldn't, but it does not roll back the complete state. Two important things are left alone for performance reasons: the cache (anything copied into the cache stays in the cache) and the branch prediction history. Both are so important to the CPU's performance that they are not going to get messed around with.
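The branch history tracking mentioned above can be sketched with a two-bit saturating counter, a common textbook predictor design (real branch history buffers are far more elaborate):

```python
# Minimal 2-bit saturating-counter branch predictor, a standard textbook
# design; real CPUs use much larger and more sophisticated structures.

class TwoBitPredictor:
    def __init__(self):
        self.counter = 2               # 0..3; start weakly predicting "taken"

    def predict(self):
        return self.counter >= 2       # 2 or 3 means predict "taken"

    def update(self, taken):
        if taken:
            self.counter = min(3, self.counter + 1)
        else:
            self.counter = max(0, self.counter - 1)

# A branch taken 9 times out of 10, like the back-edge of a long loop.
history = ([True] * 9 + [False]) * 10

p = TwoBitPredictor()
correct = 0
for taken in history:
    if p.predict() == taken:
        correct += 1
    p.update(taken)

print(f"{correct}/{len(history)} predicted correctly")   # 90/100
```

The predictor is wrong only on the rare not-taken iteration, which is exactly why speculating down the usual path pays off, and why an attacker who controls the history can steer the guess.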


Meltdown

Meltdown is the one causing the most noise, because it affects kernel memory: it means a regular user process can read kernel memory. The way it works is this. The attack has two instructions. The first reads some value from a protected region inside the kernel; the second uses one bit of that kernel value to decide whether to load address A or address B. Now, the first instruction should stop execution right away, because a user process is trying to access protected kernel memory. But the check that decides whether the user can access the memory is, guess what, effectively a branch: it can go one way, saying the address is fine, or the other way, raising an exception. Since most memory accesses are legal, the processor will almost certainly follow the route that says "OK, continue to the next instruction." So it is possible for the second instruction to start acting on the contents of kernel memory, loading whichever address the targeted bit selects, before the speculation gets shut down and the exception happens.
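The key step, a forbidden read whose side effect survives the fault, can be modelled in a toy way. This is only a simulation of the logic; the addresses, the secret bit, and the "cache" here are all invented for the demo, and a real attack needs precise machine code, not Python:

```python
# Toy model: a "speculative" kernel read is rolled back architecturally,
# but leaves a footprint in the cache. Everything here is a stand-in:
# the secret bit, the addresses, and the cache set are invented for the demo.

cache = set()                     # stand-in for which memory lines are cached
KERNEL_SECRET_BIT = 1             # the protected value (assumed for the demo)
ADDR_A, ADDR_B = 0xA000, 0xB000   # two user-space probe addresses

def speculative_window():
    # Architecturally this access faults (user code reading kernel memory),
    # but transiently the dependent load below still executes first.
    bit = KERNEL_SECRET_BIT
    cache.add(ADDR_A if bit else ADDR_B)   # side effect survives the rollback
    raise PermissionError("page fault: kernel address")

try:
    speculative_window()
except PermissionError:
    pass                          # the attacker catches the fault and carries on

# Register state was rolled back, but the cache was not:
print(ADDR_A in cache, ADDR_B in cache)   # True False -> the secret bit was 1
```

The exception fires exactly as the architecture promises; the leak is entirely in what the rollback does not undo.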

The next thing that happens is that the exception gets delivered; the process that triggered it catches it and deals with it. How? It now goes and tries to load address A and address B, the choice that depended on that one bit of kernel memory. Because the speculatively chosen address was copied into the cache, whichever location was loaded should now load a lot faster. So the process can work one bit at a time: it measures the load time of the two locations, figures out which bit was set in the kernel, then flushes the cache and repeats for the next bit of memory. The upshot is that a sufficiently dedicated process can run through memory one bit at a time and read out any secret it wants, at a few KB per second. This is pretty bad.

Incidentally, the reason the user process is able to do this at all is that kernel memory is mapped into every user process's address space, so that when a system call is made, the kernel already has all its data mapped. It is still protected, which is why the access raises an exception. The way this is being fixed in a lot of cases is that kernel memory is no longer mapped into user space: every system call now has to change the memory mapping, perform the call, and clean up the mapping afterwards. That adds a lot of overhead, which is why calling into the kernel can become 30% or more slower. And that explains Meltdown.
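The full flush-leak-probe loop described above can be simulated end to end. Again, the latencies and the secret byte are invented for illustration, and the fault-and-catch step from a real attack is elided into a plain function call:

```python
# End-to-end toy of the Meltdown recovery loop: leak one bit per round via
# cache timing, flush, repeat. Latencies and the secret byte are invented.

HIT, MISS = 4, 200                  # assumed probe latencies in "cycles"
SECRET = 0b10110010                 # pretend kernel byte the attacker wants

def transient_leak(cache, bit_index):
    # In a real attack this faults; the dependent load still lands in cache.
    bit = (SECRET >> bit_index) & 1
    cache.add("A" if bit else "B")

def probe(cache, addr):
    # Timing the probe tells us whether the line is cached.
    return HIT if addr in cache else MISS

recovered = 0
for i in range(8):
    cache = set()                   # "flush the cache" between bits
    transient_leak(cache, i)        # fault raised and caught in a real attack
    bit = 1 if probe(cache, "A") < probe(cache, "B") else 0
    recovered |= bit << i

print(bin(recovered))               # 0b10110010 -> the secret byte, bit by bit
```

Eight rounds recover one byte; a real exploit repeats this across the address space, which is where the "few KB per second" figure comes from.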

Spectre

I'm not going to go into as much detail on the Spectre bugs. They exploit branch prediction: instead of trying to read kernel memory, an attacker can, say, try to read outside the bounds of a buffer. If that memory is managed by a virtual machine, say a JavaScript engine inside a browser, this can be used to read arbitrary memory inside the browser and get access to cookies, login credentials, and other data. The other way it can be exploited is this: if you know a process has code that performs branches (and they all do), you can poison the branch prediction cache and force it to execute one way or another, and again potentially read data just by looking at cache timing. Cache timing, incidentally, is an example of a side channel attack. This is where the designer didn't intend for the system to leak data, but the attacker has figured out that it leaks some sort of information anyway; an unintentional path by which data leaks back out is what a side channel is. And yes, this is a big deal.
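The bounds-check-bypass variant sketched above can also be modelled as a toy. The predictor is reduced to a boolean parameter, and all data and tags are invented; a real Spectre attack trains the hardware predictor and uses cache timing instead:

```python
# Toy Spectre-v1 shape: the bounds check is honored architecturally, but a
# mistrained predictor lets the out-of-bounds read run transiently.
# The predictor is simplified to a flag; all data here is invented.

array = [10, 20, 30, 40]
secret_beyond_array = 77            # pretend byte sitting past the buffer
cache = set()

def victim(index, predict_in_bounds):
    if index < len(array):          # the check the attacker trains to pass
        cache.add(("probe", array[index] % 2))
    elif predict_in_bounds:
        # Transient path: the predictor says "in bounds", so the load happens
        # anyway before the check resolves, tagging the cache with one bit.
        cache.add(("probe", secret_beyond_array % 2))

# Train with in-bounds calls so the branch "usually" passes, then strike.
for i in range(3):
    victim(i, predict_in_bounds=True)
cache.clear()
victim(1000, predict_in_bounds=True)

leaked_bit = 1 if ("probe", 1) in cache else 0
print(leaked_bit)                   # low bit of 77 -> 1
```

Note that the victim code is entirely correct: the bounds check is never logically bypassed. The leak lives purely in the microarchitectural trace the mispredicted path leaves behind.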


good work dear i also visit to your blog

amazing info.. thanks for share with us


Good post

This was a bit hard to read and follow maybe a diagram or something would have helped me.

Whoa!! Brilliant

Hey,Man..
I also Give you,,100%..ok

amazing info and great work (y)
