Size Tiered Compaction Strategy In Apache Cassandra

Preface:

Compaction combines multiple SSTables into a single SSTable to improve the performance of read operations by reducing disk I/O and to reclaim space from deleted data.
In this article we will discuss the size tiered compaction strategy.

Size Tiered Compaction Strategy:

The size tiered compaction strategy combines multiple SSTables which belong to a particular data size range.
This technique categorizes SSTables into different buckets, and the compaction process runs for each bucket individually.

    Compaction is based on the following properties:

  • bucket_high (default 1.5)
    An SSTable is added to a bucket if its size is less than 1.5 times the average size of that bucket.
    e.g. if the SSTable size is 14MB and the bucket's avg. size is 10MB, the SSTable will be added to that bucket and the bucket's new avg. size will be computed.
  • bucket_low (default 0.5) An SSTable is added to a bucket if its size is greater than 0.5 times the average size of that bucket.
    e.g. if the SSTable size is 7MB and the bucket's avg. size is 10MB, the SSTable will be added to that bucket and the bucket's new avg. size will be computed.
  • max_threshold (default 32) The maximum number of SSTables allowed in a bucket for compaction.
  • min_threshold (default 4) The minimum number of SSTables in a bucket to start a compaction.
  • min_sstable_size (default 50MB) The bucketing process groups SSTables whose sizes differ by less than 50%, which is too fine grained for small SSTables. All SSTables that do not match this criterion and are smaller than min_sstable_size therefore go into a single bucket whose avg. size is less than min_sstable_size.
    Let's assume we have SSTables of sizes (in MB) 78, 51, 100, 60, 19, 27, 34, 7, 1 and 10.

    [Figure: SSTables before compaction starts]

    Let's see how the compaction algorithm works.

    Algorithm:

    • Sort SSTables according to size in descending order. After sorting the above SSTables we have a sorted list (in MB): 100, 78, 60, 51, 34, 27, 19, 10, 7, 1.

      [Figure: SSTables sorted before compaction]

    • Categorize SSTables into multiple buckets based on the configuration and the following condition (a Java sketch of this bucketing logic appears after this list):
      IF ((bucket avg size * bucket_low < SSTable size < bucket avg size * bucket_high) OR (SSTable size < min_sstable_size AND bucket avg size < min_sstable_size))
      then add the SSTable to the bucket and compute the new avg. size of the bucket.

      ELSE create a new bucket with the SSTable.

      The following bucketing happens with bucket_low as 0.5, bucket_high as 1.5, min_sstable_size as 32MB, min_threshold as 4 and max_threshold as 32.

      [Figure: Bucketing before compaction]

    • Buckets are sorted according to their hotness (CASSANDRA-5515). The hotness of a bucket is the sum of the hotness of the SSTables in it; cold buckets get less priority for compaction than hot buckets.
    • Buckets that do not meet the min_threshold criterion are discarded. Buckets that have more SSTables than max_threshold are trimmed to max_threshold before compaction: the SSTables in such a bucket are sorted according to their hotness and only the top max_threshold (32) SSTables are considered.
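
    To make the bucketing condition concrete, below is a minimal, self-contained Java sketch of the algorithm described above. The class and method names are made up for illustration; this is not Cassandra's actual implementation, which also tracks hotness and applies min_threshold/max_threshold per bucket.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Illustrative sketch of size-tiered bucketing; sizes are in MB and the
    // thresholds mirror the example configuration above.
    public class SizeTieredBucketing {

        static final double BUCKET_LOW = 0.5;
        static final double BUCKET_HIGH = 1.5;
        static final long MIN_SSTABLE_SIZE = 32; // MB, as in the example

        static List<List<Long>> bucket(List<Long> sstableSizes) {
            // Step 1: sort SSTables by size in descending order.
            sstableSizes.sort(Comparator.reverseOrder());

            List<List<Long>> buckets = new ArrayList<>();
            List<Double> averages = new ArrayList<>(); // running avg. size per bucket

            outer:
            for (long size : sstableSizes) {
                // Step 2: apply the bucketing condition against each existing bucket.
                for (int i = 0; i < buckets.size(); i++) {
                    double avg = averages.get(i);
                    if ((size > avg * BUCKET_LOW && size < avg * BUCKET_HIGH)
                            || (size < MIN_SSTABLE_SIZE && avg < MIN_SSTABLE_SIZE)) {
                        List<Long> bucket = buckets.get(i);
                        bucket.add(size);
                        // Recompute the bucket's average size.
                        averages.set(i, bucket.stream().mapToLong(Long::longValue).average().orElse(0));
                        continue outer;
                    }
                }
                // No bucket matched: create a new bucket with this SSTable.
                List<Long> fresh = new ArrayList<>();
                fresh.add(size);
                buckets.add(fresh);
                averages.add((double) size);
            }
            return buckets;
        }

        public static void main(String[] args) {
            List<Long> sizes = new ArrayList<>(List.of(78L, 51L, 100L, 60L, 19L, 27L, 34L, 7L, 1L, 10L));
            // With the example sizes, this sketch yields two buckets:
            // [100, 78, 60, 51] and [34, 27, 19, 10, 7, 1].
            System.out.println(bucket(sizes));
        }
    }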

    Limitations:

    • Size tiered compaction doesn't give any guarantee about the column distribution of a particular row. The columns of a particular row key may be spread across several SSTables, in which case read performance suffers because the read operation has to touch every SSTable where those columns are present.
    • In the worst case, compaction may need as much free disk space (100%) as the SSTables being combined.

Apache Pig : Group By, Nested Foreach, Join Example

Preface:

In this post we will learn basic Apache Pig operations like group by, foreach, nested foreach and join, by example.

Input Data:

For experimental purposes we have generated dummy test data.
We have the following datasets for the operations.

  1. Customer’s order history data :

    This data contains the orders placed by customers. For example, the customer with id 'A' ordered item 'I'; the order date in milliseconds was '1391230800000' and the delivery date in milliseconds was '1391317200000'.
    Schema : ( customerId:int, itemId:int, orderDate:long, deliveryDate:long)
    You can generate the dummy data with a script along the lines of the sketch below:
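
    The original post embedded its generator script, which is not reproduced here; as a hypothetical stand-in, a minimal Java sketch that writes comma-separated rows in this schema could look like this (the file name and row counts are assumptions):

    import java.io.FileWriter;
    import java.io.IOException;
    import java.util.Random;

    // Hypothetical stand-in for the original generator script: writes 100k
    // random rows as customerId,itemId,orderDate,deliveryDate.
    public class GenerateOrders {
        public static void main(String[] args) throws IOException {
            Random rnd = new Random();
            long dayMs = 24L * 60 * 60 * 1000;
            try (FileWriter out = new FileWriter("testData100k")) {
                for (int i = 0; i < 100_000; i++) {
                    int customerId = rnd.nextInt(1000);
                    int itemId = rnd.nextInt(100);
                    long orderDate = 1391230800000L + rnd.nextInt(365) * dayMs;
                    long deliveryDate = orderDate + dayMs; // delivered the next day
                    out.write(customerId + "," + itemId + "," + orderDate + "," + deliveryDate + "\n");
                }
            }
        }
    }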

  2. Customer’s basic information data :

    This data contains the basic information of customers: customer id, name and city.
    Schema : ( customerId:int, name:chararray, city:chararray)
    You can generate the dummy test data with a script along the lines of the sketch below:
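
    Again as a hypothetical stand-in for the original script, a minimal Java sketch that writes one row per customer in this schema could look like:

    import java.io.FileWriter;
    import java.io.IOException;

    // Hypothetical stand-in: writes rows as customerId,name,city.
    public class GenerateCustomers {
        public static void main(String[] args) throws IOException {
            try (FileWriter out = new FileWriter("customerInformation")) {
                for (int id = 0; id < 1000; id++) {
                    out.write(id + ",name" + id + ",city" + (id % 10) + "\n");
                }
            }
        }
    }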

Examples:

  1. GROUP BY:
    Let's start with the 'group by' operation.
    Problem Statement:
    Find the number of items bought by each customer.
    Input:
    Customer’s order history data.
    Output:
    Output should contain the total number of items bought by each customer.
    Schema of output should be : (customerId:int, itemCnt:long)

    
    -- Problem Stmt : find the number of items bought by each customer.
    -- load the input data :: Schema ( customerId , itemId , order Date, delivery Date );
    orders = load '/testData100k' using PigStorage(',') as (cstrId:int, itmId:int, orderDate: long, deliveryDate: long );
    -- group input by customer id
    grpd = group orders by cstrId;
    -- count number of items bought by each customer
    items_by_customer = foreach grpd generate group as cstrId, COUNT(orders) as itemCnt;
    describe items_by_customer;
    --items_by_customer: {cstrId: int,itemCnt: long}
    
  2. NESTED FOREACH:
    Now let's discuss the nested foreach operation. We can use the foreach operator to iterate over input records, but we can also apply different relational operations to each record within the same data processing pipeline.

    Problem Statement:
    Find the total number of items bought by each customer and which item he/she bought the highest number of times. For example, customer A has bought item 'i1' 10 times, 'i2' 5 times and 'i3' 7 times. So the total number of items taken by customer A is 10+5+7=22, out of which item 'i1' was taken the highest number of times.
    Input:
    Customer’s order history data.
    Output:
    Output should contain the total number of items bought by each customer and which item he/she bought the highest number of times.
    Schema of output should be :
    (customerId:int, itemId:int, highestBoughtItemCnt:long, totalItemCnt:long)

    
    -- Problem Stmt : find the number of items bought by each customer and
    -- which item he/she bought the highest number of times.
    -- load the input data :: Schema ( customerId , itemId , order Date, delivery Date );
    orders = load '/testData100k' using PigStorage(',') as (cstrId:int, itmId:int, orderDate: long, deliveryDate: long );
    -- group by customer-id and item-id
    grpd_cstr_itm = group orders by (cstrId,itmId);
    grpd_cstr_itm_cnt = foreach grpd_cstr_itm generate group.cstrId as cstrId, group.itmId as itmId, COUNT(orders) as itmCnt;
    -- group by cstrId 
    grpd_cstr = group grpd_cstr_itm_cnt by cstrId ;
    describe grpd_cstr;
    -- grpd_cstr: {group: int,grpd_cstr_itm_cnt: {cstrId: int,itmId: int,itmCnt: long}}
    -- iterate over grpd_cstr and find the total number of items bought by each customer
    -- and which item he/she bought the highest number of times.
    result = foreach grpd_cstr{
    	total_orders = SUM(grpd_cstr_itm_cnt.itmCnt);
    	srtd_orders = order grpd_cstr_itm_cnt by itmCnt desc;
    	highest_bought = limit srtd_orders 1;
    	generate FLATTEN(highest_bought), total_orders as totalCnt;
    };
    -- result will contain ( customer_id, itm_id_bought_highest_times, number_of_times_it_was_bought, total_items );
    describe result;
    -- result: {highest_bought::cstrId: int,highest_bought::itmId: int,highest_bought::itmCnt: long,totalCnt: long}
    
  3. JOIN:
    Like in an RDBMS, the join operator in Pig joins datasets based on values common to both.
    We will join the customer order history dataset with the customer information dataset to get an interesting result.
    Problem Statement:
    Find the total number of times items were ordered by customers, per city.
    For example, customers from CityA ordered item 'i1' 300 times, and customers from CityB ordered item 'i4' 129 times.
    Input:
    Customer’s order history data.
    Customer’s basic information data.
    Output:
    Output should contain the total number of times each item was ordered by customers in each city.
    Schema of output should be :
    (itemId:int, city:chararray, totalItemCnt:long)

    
    -- Problem Stmt : find the number of times a particular item was ordered by customers, by city.
    -- For e.g. customers from CityA ordered the item 'item1' 350 times.
    -- load the input data :: Schema ( customerId , itemId , order Date, delivery Date );
    orders = load '/testData100k' using PigStorage(',') as (cstrId:int, itmId:int, orderDate: long, deliveryDate: long );
    -- load the customer information data :: (customerId , name , city )
    cstr_info = load '/customerInformation' using PigStorage(',') as (cstrId:int, name:chararray, city:chararray);
    -- join orders and customer_info by cstrId;
    jnd = join orders by cstrId, cstr_info by cstrId;
    describe jnd;
    -- jnd: {orders::cstrId: int,orders::itmId: int,orders::orderDate: long,orders::deliveryDate: long,cstr_info::cstrId: int,cstr_info::name: chararray,cstr_info::city: chararray}
    -- group  by itemId and city 
    jnd_grp = group jnd by (orders::itmId, cstr_info::city);
    describe jnd_grp;
    -- jnd_grp: {group: (orders::itmId: int,cstr_info::city: chararray),jnd: {orders::cstrId: int,orders::itmId: int,orders::orderDate: long,orders::deliveryDate: long,cstr_info::cstrId: int,cstr_info::name: chararray,cstr_info::city: chararray}}
    -- let's count and generate the result.
    result = foreach jnd_grp generate FLATTEN(group) , COUNT(jnd) as cnt;
    describe result;                                                     
    --result: {group::orders::itmId: int,group::cstr_info::city: chararray,cnt: long}
    

Apache Pig : Writing a User Defined Function (UDF)

Preface:

In this post we will write a basic/demo custom function for Apache Pig, called a UDF (User Defined Function).
A Pig Java UDF extends the functionality of EvalFunc. This abstract class has an abstract method "exec" which the user needs to implement in a concrete class with the appropriate functionality.

Problem Statement:

Let's write a simple Java UDF which takes as input a Tuple of two DataBags and checks whether the second databag (set) is a subset of the first databag (set).
For example, assume you have been given a tuple of two databags, where each DataBag contains numbers as its elements (tuples).

Input:
Databag1 : {(10),(4),(21),(9),(50)}
Databag2 : {(9),(4),(50)}
Output:
True

The function should return true, as Databag2 is a subset of Databag1.

From an implementation point of view

As we are extending the abstract class EvalFunc, we will implement the exec function. In this function we'll write the logic to determine whether the given set is a subset of the other or not. We will also override the function outputSchema to specify the output schema ( boolean : true or false ).


import java.io.IOException;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.DataType;
import org.apache.pig.data.Tuple;
import org.apache.pig.impl.logicalLayer.schema.Schema;
import org.apache.pig.impl.logicalLayer.schema.Schema.FieldSchema;

/**
 * Finds whether the given SetB is a subset of SetA.
 * <p>
 * input:
 * <br>setA : {(10),(4),(21),(9),(50)}
 * <br>setB : {(9),(4),(50)}
 * <p>
 * output:
 * <br>true
 * @author https://shrikantbang.wordpress.com
 *
 */
public class IsSubSet extends EvalFunc<Boolean> {

	@Override
	public Schema outputSchema(Schema input) {
		if(input.size()!=2){
			throw new IllegalArgumentException("input should contains two elements!");
		}

		List<FieldSchema> fields = input.getFields();
		for(FieldSchema f : fields){
			if(f.type != DataType.BAG){
				throw new IllegalArgumentException("input fields should be bag!");	
			}
		}
		return new Schema(new FieldSchema("isSubset",DataType.BOOLEAN));
	}

	private Set<Tuple> populateSet(DataBag dataBag){
		Set<Tuple> set = new HashSet<>();

		Iterator<Tuple> iter = dataBag.iterator();
		while(iter.hasNext()){
			set.add(iter.next());
		}
		return set;
	}

	@Override
	public Boolean exec(Tuple input) throws IOException {

		Set<Tuple> setA = populateSet((DataBag) input.get(0));
		Set<Tuple> setB = populateSet((DataBag) input.get(1));
		return setA.containsAll(setB) ? Boolean.TRUE : Boolean.FALSE;

	}

}

A Quick Test

Let's test our UDF to check whether a given set is a subset of another set or not.


-- Register jar which contains UDF.
register '/home/hadoop/udf.jar';

-- Define function for use.
define isSubset IsSubSet();

-- let's assume we have a dataset as follows:
dump dataset;
--({(10),(4),(21),(9),(50)},{(9),(4),(50)})
--({(50),(78),(45),(7),(4)},{(7),(45),(50)})
--({(1),(2),(3),(4),(5)},{(4),(3),(50)})

-- lets check subset function
result = foreach dataset generate $0,$1,isSubset($0,$1);

dump result;
--({(10),(4),(21),(9),(50)},{(9),(4),(50)},true)
--({(50),(78),(45),(7),(4)},{(7),(45),(50)},false)
--({(1),(2),(3),(4),(5)},{(4),(3),(50)},false)