The architecture introduced by FTK 2.0 is a real revolution. The previous post mentioned that FTK 2.0 adopts a client/server architecture. That is true in principle, but under the hood lies an architecture designed with a clear goal: scalability. Computer forensics is no longer what it was 10 years ago, when the only tools ran under DOS (before the arrival of EnCase or FTK) and the data to be analyzed amounted to a few gigabytes of disk. Anyone working in this field today knows the problems that arise, whether from the pace of technology (750 GB disks in home PCs now!) or from the complexity of networks and operating systems. It is now common for a single examination to involve a couple of desktop PCs, a laptop, and maybe one or two portable devices like a BlackBerry or a PDA, not counting the USB thumbdrives, CD-ROMs, DVDs, and so on. The FTK tool breaks down into three distinct parts:
- the user interface, which requires very little memory and computing power.
- the database, which holds the information for the case (or cases) handled by the examiner.
- the worker, the component responsible for indexing, file recovery, and all the common tasks that demand computing power and memory.
The current (stand-alone) version of FTK lets you either run these three components on the same system or separate the user interface from the other two. The advantage is that we can then use a plain PC, even a laptop, to drive the analysis of our case and leave the dirty work to a multiprocessor system with as much RAM as we like. It would not be true scalability if we had to stop at this split: in the Professional and Lab Editions it is also possible to separate the database from the worker, or from multiple workers, as sketched below.
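To make the idea concrete, here is a minimal sketch (in Python, purely illustrative: the hostnames and the "layout map" format are invented for this post and have nothing to do with FTK's actual configuration) of the three deployment layouts just described:

```python
# Hypothetical deployment layouts for a three-component forensic suite.
# Hostnames and structure are invented for illustration only.

# Everything on the examiner's machine.
STAND_ALONE = {
    "ui":       "examiner-laptop",
    "database": "examiner-laptop",
    "workers":  ["examiner-laptop"],
}

# Stand-alone edition: at most the UI can be split off from db + worker.
UI_SEPARATED = {
    "ui":       "examiner-laptop",
    "database": "big-iron-01",
    "workers":  ["big-iron-01"],
}

# Professional / Lab Edition: database and workers on dedicated machines.
FULLY_DISTRIBUTED = {
    "ui":       "examiner-laptop",
    "database": "db-server",
    "workers":  ["worker-01", "worker-02", "worker-03"],
}

def describe(layout):
    hosts = {layout["ui"], layout["database"], *layout["workers"]}
    return f"{len(hosts)} machine(s), {len(layout['workers'])} worker(s)"

for name, layout in [("stand-alone", STAND_ALONE),
                     ("UI separated", UI_SEPARATED),
                     ("fully distributed", FULLY_DISTRIBUTED)]:
    print(f"{name}: {describe(layout)}")
```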
This is where the advantages of a transparent, scalable architecture emerge. Take a quick example: an architecture with one user interface (or even two), the database, and three workers. Once a disk image (or, why not, several images) is added to the case, the workers compete for the tasks to be performed. The first worker might, for example, start recovering deleted data, the second start indexing the image, and the last begin brute-forcing a password-protected file, and all of it is transparent to the user. Clearly, when a case involves many disk images and a series of operations on a few terabytes of data (and the trend is now heading toward those sizes), an architecture that can throw dedicated machines at the heavy lifting is nothing but an advantage. And the news from AccessData doesn't end here.
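As a rough mental model (again just a sketch, not how FTK is actually implemented), the workers can be pictured as consumers pulling jobs off a shared queue: whichever worker is free grabs the next task, and the examiner at the UI never has to care which machine did what.

```python
# Conceptual sketch of workers competing for tasks from a shared queue.
# Task names mirror the example in the post; the code is illustrative only.
import queue
import threading

tasks = queue.Queue()
for job in ["recover deleted files from image1.dd",
            "index image1.dd",
            "bruteforce protected.zip"]:
    tasks.put(job)

def worker(name):
    while True:
        try:
            job = tasks.get_nowait()
        except queue.Empty:
            return                      # nothing left to do
        print(f"{name} -> {job}")       # which worker handled which job is
        tasks.task_done()               # invisible to the user interface

threads = [threading.Thread(target=worker, args=(f"worker-{i}",))
           for i in range(1, 4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```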