Monday, November 4, 2013

Scalar Data

For this assignment, we are creating data readers to read in volume data and visualize it.  The data was given to us in NRRD format.  Using the header file of the 2D brain data we were given, I successfully imported the data and rendered the image seen below.

Further research into this data showed me that the intensity values were not between 0 and 1, where 1 is the maximum intensity and 0 is no intensity.  I remapped the values by linear interpolation to give the following image.
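The remapping step boils down to a min/max normalization.  Here is a minimal sketch in Python (not my actual reader code, and the sample values are made up for illustration):

```python
def normalize(values):
    """Linearly remap scalar values so the minimum maps to 0.0
    and the maximum maps to 1.0."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Illustrative raw intensities, not real brain data
row = [100.0, 200.0, 300.0, 500.0]
norm = normalize(row)
# 100 -> 0.0, 500 -> 1.0, everything in between scales linearly
```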


The next thing I did was go to ColorBrewer and choose a single-hue turquoise color scheme, since that was assigned to me in class.  The following is my output.


This was hard to read, so I inverted the color scale.  Below is the resulting image, along with all the colors I tried out.
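Applying a single-hue ramp is just a linear interpolation between a light and a dark color, and inverting the scale means flipping the interpolation parameter.  A sketch in Python (the turquoise endpoints here are illustrative, not the exact ColorBrewer values):

```python
def lerp_color(t, light, dark):
    """Linearly interpolate between two RGB colors; t in [0, 1]."""
    return tuple(round(l + t * (d - l)) for l, d in zip(light, dark))

# Illustrative light/dark turquoise endpoints
LIGHT = (224, 255, 255)
DARK = (0, 128, 128)

c0 = lerp_color(0.0, LIGHT, DARK)            # lowest intensity -> LIGHT
c1 = lerp_color(1.0, LIGHT, DARK)            # highest intensity -> DARK
flipped = lerp_color(1.0 - 0.0, LIGHT, DARK)  # inverted scale: flip t
```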






Personally, I still think the black/white visualization shows the most information, since there is a stronger change in intensity when studying the data.  Of the color maps, the turquoise one works best because it goes from light to dark, and turquoise makes the image look more like the inside of a brain in an MRI scan.  Since it is easiest on the eyes and looks better than the ColorBrewer black/white image, the first turquoise image, shown again below, is my winner.


Now it's time to rescale the image to a fixed height of 800 pixels.  Since I was using a point-plotting method, this is the image that came out.


I had to scale the strokes by 4, since I have a Retina display, to get the points to appear as pixels.



Now it's time to do linear interpolation between the points so they don't appear pixelated like the image above.  Interpolating along the left-to-right (x) direction, I get the following image.



I then add interpolation along the y-axis and get the following image.


There are still missing points because I also need to interpolate diagonally.  Here is the result after adding diagonal interpolation.
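The three passes above (x, y, diagonal) can be collapsed into the standard bilinear interpolation formula, which interpolates along x on two rows and then along y between them.  A sketch in Python (not my actual rendering code; the grid values are made up):

```python
def bilinear(data, x, y):
    """Sample a 2D grid at fractional position (x, y) with bilinear
    interpolation.  Clamping x1/y1 to the last index handles the border,
    the edge case my first version missed."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(data[0]) - 1)
    y1 = min(y0 + 1, len(data) - 1)
    fx, fy = x - x0, y - y0
    top = data[y0][x0] * (1 - fx) + data[y0][x1] * fx   # along x, top row
    bot = data[y1][x0] * (1 - fx) + data[y1][x1] * fx   # along x, bottom row
    return top * (1 - fy) + bot * fy                    # then along y

grid = [[0.0, 1.0],
        [1.0, 2.0]]
center = bilinear(grid, 0.5, 0.5)   # averages all four corners
```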


As you can see, the image looks much clearer when you linearly interpolate between the points in all directions, compared to the pixelated image shown again below.


The errors you see in the interpolated image are at the border, which is not rendered accurately since my algorithm does not handle the edge case.  Everything in the center looks much clearer, though you can still see that it is a little pixelated.  Rather than blending over a full neighborhood of values, all I did was interpolate between two adjacent pixels.  The image would look better with an algorithm that considered more pixels and gave the effect of a Gaussian blur instead.

Because the edges were a little weird and the diagonal interpolation was probably introducing noise, I redid the interpolation and got the following image.


Comparing the two images, my original interpolation was wrong, and this version actually gives a very good result.  The only area that looked blended off was the right side of the image.  When I tested further with the test.nrrd provided on the class web page, the image seemed to clamp correctly to the edge, so I felt good about my interpolation results.


Implementing the kernel algorithm on my own, I was able to visualize the isocontours on the graph using the isovalue 32700.


I then ran my marching squares algorithm on the brain dataset and checked it against the isovalue 176 to see if it was accurate.
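The core of marching squares is classifying each 2x2 cell of samples by which corners are above the isovalue, then placing contour crossings on the crossed edges by linear interpolation.  A minimal sketch of those two pieces (the corner naming is my own, not necessarily how my code is structured):

```python
def cell_case(a, b, c, d, iso):
    """Return the 4-bit marching-squares case for corners
    a (bottom-left), b (bottom-right), c (top-right), d (top-left):
    one bit per corner that is above the isovalue."""
    case = 0
    if a > iso: case |= 1
    if b > iso: case |= 2
    if c > iso: case |= 4
    if d > iso: case |= 8
    return case

def edge_crossing(v0, v1, iso):
    """Fraction along an edge where the contour crosses,
    found by linear interpolation between the endpoint values."""
    return (iso - v0) / (v1 - v0)

# With iso = 176 only the first corner is above, so the case index is 1
case = cell_case(200, 150, 100, 50, 176)
```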


It compared reasonably well with the example image that was given to us on the class web page.  The next thing I worked on was toggling contour mode with the 'c' key.  I then explored different isovalues within the brain dataset.  I noticed that anything above an isovalue of 226 was just static, and anything under 146 did not focus on data of interest.  My favorite isovalue was 216 because it seemed to focus on the outside of the brain, where the density was not as strong.


Isovalue = 146

Isovalue = 216

Isovalue = 226

To bounce between isovalues most effectively, rather than restarting the program every time I wanted to change them, I used the '[' key to decrement the isovalue by a set amount (10 for the brain picture) and the ']' key to increment it by the same amount.
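The key handling reduces to a small dispatch on the pressed character.  Here is a sketch in Python (my actual sketch does not use these exact state names; they are illustrative):

```python
ISO_STEP = 10  # step size I used for the brain dataset

def handle_key(key, state):
    """Update the visualization state for a key press.
    `state` holds a 'contours' flag and the current 'isovalue'."""
    if key == 'c':
        state['contours'] = not state['contours']   # toggle contour mode
    elif key == '[':
        state['isovalue'] -= ISO_STEP               # decrement isovalue
    elif key == ']':
        state['isovalue'] += ISO_STEP               # increment isovalue
    return state

state = {'contours': False, 'isovalue': 176}
handle_key(']', state)   # isovalue goes up by 10
handle_key('c', state)   # contour mode turns on
```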

It seems there are no problems in my marching squares algorithm; it compared spot-on with the demo that was posted online.  Comparing colors against the turquoise I liked, I tried a few different highlight colors, giving me the following results.

Purple Highlighting

Yellow Highlighting

Light Purple Highlighting

Blue Highlighting

My favorite results were the blue and black highlighting for contour lines.  I actually like blue more because it takes away less from the details of the drawing, so my color of choice is the blue.

Now it's time for data exploration.  Given the mountain dataset, this is the image I was able to produce.

I decided to change the values so that the color mapping would make more sense, giving me the following image.

Though I thought this one looked cooler, it drew attention to the dark area as though I were highlighting it, so I decided to change it back.  The following are the contours I studied.





It turns out that rescaling the isovalues to between 0 and 1 was a great idea, since this dataset even contained negative numbers.  Looking much closer at the dataset, I started noticing from the contours that larger values fell along the silhouette in the middle of the image.  Since this was a mountain dataset, I interpreted larger data values as steeper areas and smaller values as flatter areas.  I adjusted my visualization to accommodate these changes.

This image is much more fun to study and looks much more interesting, since it concentrates on just the steepness of the values, where negative numbers have no effect.  In the image above, the darker the color, the steeper the incline, and the lighter the color, the flatter the terrain is at that point.  I then studied the contours on this particular visualization.
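The darker-is-steeper mapping amounts to normalizing the data (which also absorbs the negative values) and then flipping the grayscale ramp.  A sketch in Python (the range endpoints are made up; this is my reading of the mapping, not my exact code):

```python
def steepness_shade(value, vmin, vmax):
    """Map a data value to a grayscale level where larger (steeper)
    values render darker.  Normalizing by vmin/vmax also handles
    datasets containing negative values."""
    t = (value - vmin) / (vmax - vmin)   # normalize to [0, 1]
    return round(255 * (1 - t))          # 255 = white (flat), 0 = black (steep)

# Illustrative range with negatives: the minimum maps to white,
# the maximum to black
flat = steepness_shade(-50, -50, 150)
steep = steepness_shade(150, -50, 150)
```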

It was very cool studying where the slopes are the same, and I'm very impressed with my results.  One of the most interesting things I found in my study was how it spots areas my eyes don't see as well, like small peaks in certain spots, as seen in the example below between the two circular contours toward the top of the image.

I really liked making the dynamic cube mapping.  It was interesting how accurate the image looks at a larger resolution and how the contours are still drawn without any error.  The biggest difference between the brain dataset and the mtHood dataset is that the brain dataset was all positive values, while the mtHood dataset had negative values.  It turned out that the colors did not matter as much on this dataset, even as I tried to map the data back to its original colors.


My data can be downloaded from http://sci.utah.edu/~mavinm/cs6630/scalarData.tar.gz or http://sci.utah.edu/~mavinm/cs6630/scalarData.zip.  You do need authentication to download the data.  Please contact me if you have trouble viewing the data.

