To optimize the volume-scaling operation, three approaches were tested in this lab. All of them used a volume factor of 0.75 and 500 million random samples. The time command and /usr/bin/time were used to measure execution details; a gap was observed between the total (real) time and the sum of the user and system times.
1. Naive multiplication (volume_out = sample * volume_factor): This method was simple but slow, because each sample required a floating-point multiplication plus two type conversions (integer to float and back).
2. Lookup table: In this approach, all of the multiplications happen once while building the lookup table, so scaling each sample is reduced to a single table index. This made it faster than the previous option.
3. Fixed-point arithmetic: The volume factor was converted to a fixed-point integer during setup, and the actual per-sample calculation became an integer multiply followed by a bit shift. As the results below show, this method was much faster than the previous versions.
Finally, the -O3 compiler option was used to compare the run times after compiler optimization, which resulted in shorter times.
In this lab, I learned that the -O3 option makes the program faster through more aggressive compiler optimization; the results can be found in the table. I think that this option, gcc -O3, will be a very strong tool when the data set is massive.