Under the farming model, the parallel application is divided into one or more process farms. A process farm comprises one farmer task and one or more worker classes. A worker class contains one or more worker tasks, grouped together because they perform similar functions. A worker task is associated with exactly one farmer task (and hence with exactly one worker class). In a hierarchical farm, a worker task can in turn act as a farmer to its own set of worker classes and worker tasks; moreover, a worker can be a farmer to more than one farm. This is useful for subdividing a work packet into sub-parts for processing by another worker-class hierarchy. The farm implementation should also allow a worker task to leave a process farm (dynamic process migration to another farm) and join any other farm if needed.
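The relationships above (one farmer per farm, workers grouped into classes, workers doubling as farmers of sub-farms) can be sketched with a few illustrative data structures. All class and attribute names here are assumptions for exposition, not part of any real farming API:

```python
class WorkerTask:
    """A worker belongs to exactly one worker class (and hence one farm)."""
    def __init__(self, name):
        self.name = name
        self.worker_class = None
        # Hierarchical farming: a worker may itself farm one or more sub-farms.
        self.sub_farms = []


class WorkerClass:
    """Groups worker tasks that perform similar functions."""
    def __init__(self, name):
        self.name = name
        self.workers = []

    def add(self, worker):
        worker.worker_class = self
        self.workers.append(worker)

    def remove(self, worker):
        # Dynamic migration: a worker leaves this farm and may join another.
        worker.worker_class = None
        self.workers.remove(worker)


class Farm:
    """One farmer task plus one or more worker classes."""
    def __init__(self, farmer):
        self.farmer = farmer
        self.worker_classes = []


# Build a two-level (hierarchical) farm: worker w0 farms its own sub-farm.
top = Farm(farmer="top-farmer")
filters = WorkerClass("filters")
w0 = WorkerTask("w0")
filters.add(w0)
top.worker_classes.append(filters)
w0.sub_farms.append(Farm(farmer=w0))
```

The one-to-one association between a worker and its worker class is enforced by routing membership changes through `add`/`remove`, which keeps the back-reference consistent when a worker migrates between farms.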
To implement an algorithm using the farming paradigm, the data-parallel application would need to create one or more farms. The farmer task would split the data to be processed into a number of mutually independent work packets, each of which could be processed by a worker task. It would then give out the work packets to a worker class without knowing which particular worker task would process each one; the operating environment would decide which worker to forward a packet to. After processing a work packet, the worker would create a reply packet and address it to the farmer task. The farmer task would collect all the reply packets from a particular worker class and reassemble them appropriately. More than one farm, and more than one worker class, could be active at any instant, creating, sending, or receiving packets. The farmer could terminate a farm once it is no longer needed, at which point the worker tasks would automatically be expelled from the farm and informed accordingly. Before this happens, however, a worker should also be able to leave the farm of its own accord and stop receiving work packets from its worker class.
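This split–distribute–collect cycle can be illustrated with a minimal single-process sketch, using a shared work queue standing in for a worker class and a reply queue addressed to the farmer. The packet format (an index plus a datum) and the squaring "work" are assumptions chosen only to make the example concrete:

```python
import queue
import threading

def worker(work_q, reply_q):
    """A worker task: take work packets, send reply packets to the farmer."""
    while True:
        packet = work_q.get()
        if packet is None:                 # farm terminated: worker is expelled
            break
        idx, data = packet
        reply_q.put((idx, data * data))    # reply packet addressed to the farmer

def farmer(data, n_workers=3):
    """The farmer task: split, distribute, collect, reassemble, terminate."""
    work_q, reply_q = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=worker, args=(work_q, reply_q))
               for _ in range(n_workers)]
    for t in workers:
        t.start()
    # Split the data into independent work packets; the farmer does not know
    # (or care) which worker will pick up which packet.
    for idx, item in enumerate(data):
        work_q.put((idx, item))
    # Collect reply packets and reassemble them in the original order.
    results = [None] * len(data)
    for _ in data:
        idx, value = reply_q.get()
        results[idx] = value
    # Terminate the farm: each worker receives a sentinel and exits.
    for _ in workers:
        work_q.put(None)
    for t in workers:
        t.join()
    return results

print(farmer([1, 2, 3, 4]))   # -> [1, 4, 9, 16]
```

The queue plays the role attributed to the operating environment in the text: because any idle worker may take the next packet, scheduling is decided at run time rather than by the farmer, and the index carried in each packet lets the farmer reassemble replies regardless of the order in which they arrive.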