Saturday, November 16, 2013

Transfer Function

For the transfer function assignment, the first thing I did was install ImageVis3D to see what an example transfer function looks like.  As the data to visualize, I chose a hand dataset called hand16.uvf.  It came with a 1D transfer function called hand16.1dt that I loaded to look at first.  The data presented at first was very cool.  Below is the image produced by the 1D transfer function one of the researchers made.

This was a very cool picture, with the bone in yellow, the veins in orange, and the muscle in red.  It also provided a 1D transfer function to look at.


The first thing that I wanted to do was play with the visualization.  It had options to rotate and translate the 3d volume.


I then wanted to see what I could do with the transfer function, so I made it show only the skin fully.  I chose the color blue because I wanted to see what colors the RGB channels could produce other than the standard red/green/blue.

The transfer function for just the hand looks like this image below.

I then wanted to see if I could get the bone to appear by itself, so after playing with the transfer function a lot, I came up with a way to filter out everything but the bone, seen in white.


To get this transfer function, I used the following setup.


I played with the program enough to know what I enjoyed and what I want to do with my own transfer function.  One of my favorite things about the editor is the checkerboard at the top, which helps me understand the alpha channel and how the true color blends in at a particular isovalue.  I also liked the freedom of moving my cursor to select different types of data.

The difficult part about the widget/editor is that I had to learn everything myself.  I could have read documentation online, but I would rather have a tutorial right where I'm looking, since the average user wants to see what a program does before deciding to spend a lot of time in it.  I noticed the 3D window had some handy keyboard shortcuts, but I couldn't find any that worked inside the transfer function window.  I didn't like having to move my mouse all the way to the red checkbox to toggle it; I would rather press 'r' for red, and do the same for the other colors.  Overall, the biggest improvement I would make is speed for the researcher: keyboard shortcuts, with 'r' for red, 'g' for green, 'b' for blue, and 'a' for alpha.  I would also have presets on the keys 1-9 or something like that, so the researcher could reset the alpha to a certain value or jump to popular transfer function schemas.

In the next step, I downloaded the volume renderer Dr. Christoph Garth provided, along with its data, and got the following result.



I really liked the foot dataset, so after looking at all of the datasets, I stuck with the foot.  To make an effective visualization, I found that depending on what I was interested in, the ambient, specular, and diffuse lighting settings all played an important role.  I wanted to present a 3D model, so I made sure all of those settings were applied to the dataset, and after manipulating the data I got the following foot.



The thing I liked most was the density bar, which let me see the whole foot with skin and then, in real time, lower the density until only the bone model remained, as in the image just above.  I found this method very useful, especially for beginners: they can see in real time what each step function does and modify it accordingly.  I could hand this to a 10-year-old and they would understand how to use the program.  It is limited to beginners who don't need to investigate specific behaviors, though; if I were trying to do groundbreaking research, I would actually reach for ImageVis3D first, since it has much more support.

Now to come up with my own scheme, I sketched out three ideas.  Below, I will show my sketches and the ideas behind them.  

The first diagram is very similar to how ImageVis3D works.  I really liked its functionality, so I went off of its layout and added features like shortcut keys to toggle between the colors, such as 'r' for red and other letters for the other colors, based on the underlined letter.  Another thing I added is labeling: the bottom axis is f(x) and the left side is labeled with the intensity value of each color, so the user has a clearer definition of what the scale is.  I also really liked the alpha checkerboard box, so I again want to lay that color out in the box depending on whether the intensity value is empty or completely filled.



In the second sketch, I took out the checkboxes and allowed users to change the color within each of the grids.  This would make for a fast layout, and similar to the first draft, I would show the combined color of all the channels at the top.

For the third sketch, I wanted to play with polar coordinates, so I kept the same checkbox routine, but rather than a rectangle it is a circle: you drag your mouse around the circle to choose intensity values, where f(x) runs around the circumference, an intensity of 0 is at the center of the circle, and a color intensity of 255 is at the outside.
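To make the polar idea concrete, here is a rough sketch of the mapping in Python (not my Processing code; the function name and widget geometry are made up): the mouse angle picks the f(x) bin and the radius picks the intensity.

```python
import math

def polar_to_tf(mx, my, cx, cy, outer_r, n_bins=256, max_val=255):
    """Map a mouse position to (bin index, intensity) on a circular
    transfer-function widget: the angle around the center picks the
    f(x) bin, the radius picks intensity (0 at center, max at rim)."""
    dx, dy = mx - cx, my - cy
    angle = math.atan2(dy, dx) % (2 * math.pi)     # 0..2*pi around the rim
    index = min(int(angle / (2 * math.pi) * n_bins), n_bins - 1)
    radius = min(math.hypot(dx, dy), outer_r)      # clamp to the rim
    value = round(radius / outer_r * max_val)
    return index, value
```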

I didn't like the polar coordinates idea because f(0) and f(255) meet at the seam and are most likely discontinuous, so the circle would wrongly make them seem continuous.  I kind of liked the several-rectangle-boxes idea, though it wasn't nice because the different colors introduce redundancy.  If I did use that method, I would also have put the graphs side by side to see if that read better, but I am a fan of just one rectangular box.

I chose to go with the first idea, following ImageVis3D's method but with a few enhancements like keyboard shortcuts, since no one really wants to move their mouse all the way over to toggle certain boxes.

To implement the method, I first drew all of the coordinates, then mapped the mouse positions to the right indexes in the widget area, using alpha drawn as black.  This gave me a good test case, and this was my result.
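The mapping step could look something like this Python sketch (my actual implementation is in Processing, and the widget parameters here are hypothetical): x picks the f(x) bin, y picks the value, and clamping keeps drags bound to the widget.

```python
def mouse_to_tf(mx, my, x0, y0, w, h, n_bins=256, max_val=255):
    """Map a mouse position inside a w-by-h widget at (x0, y0) to a
    transfer-function entry.  Screen y grows downward, so the top edge
    of the widget maps to max_val."""
    # clamp so drags that leave the widget stay bound to its edges
    mx = min(max(mx, x0), x0 + w - 1)
    my = min(max(my, y0), y0 + h - 1)
    index = (mx - x0) * n_bins // w
    value = max_val - (my - y0) * max_val // (h - 1)
    return index, value
```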



As you can kind of see, the values mapped in appropriately, but dragging the mouse produced scattered points: only the point of focus was drawn on each frame cycle, at the speed of my system.  To fix this, I implemented a previous-point tracker, calculated the slope between the previous and current points, and interpolated all of the in-between points so that dragging the mouse behaved appropriately.  I also made sure the points drawn inside the window widget stayed bound inside the widget even if my mouse went outside of it; the widget only takes focus if the click starts inside it, and dragging outside of it is handled.  Another fix to my graph was that the left axis needed to show not only alpha but the red, green, and blue values as well.
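The previous-point interpolation works roughly like this (a Python sketch of the idea, not the Processing source):

```python
def fill_drag_gap(tf, prev, curr):
    """Linearly interpolate all bins between the previous and current
    drag samples so fast mouse moves don't leave gaps in the curve.
    tf is a list of bin values; prev and curr are (bin, value) pairs."""
    (i0, v0), (i1, v1) = sorted([prev, curr])
    span = i1 - i0
    for i in range(i0, i1 + 1):
        t = (i - i0) / span if span else 0.0
        tf[i] = round(v0 + t * (v1 - v0))
    return tf
```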

As you can see from this image, it looks much better since it takes care of all the in-between values.  There are still gaps, though: between 0 and the first value on the line, and between successive values.  To take care of this, rather than drawing points, I drew connecting lines.

This gave me results good enough to be happy with.  Next I needed a radio widget to choose between red, green, blue, and alpha.  Not only did I do that, I also changed the border to not be completely black, so you can see the alpha at the bottom and the colors at the top.  One behavior I added while doing this was making it impossible to deselect all of the radio buttons.  I did the same with "Step" and "Custom Transfer Function" so at least one of them is always required.  Another change was making the view start in "Custom Transfer Function" mode, since I didn't care about development in "Step" function mode.


As you can see, I wanted to keep the labeling consistent with the labels on the left, so I used the RadioButton controls from ControlP5.  Then I tied them all to the widget through the selection.  Below is an output where I modified all of the colors, along with the output image.


Now, as I wanted to do in the beginning, I added shortcut keys so you can press the keyboard to select a color.  I underlined the letter you can press to switch between red, green, blue, and alpha.  Since those keys are far apart on the keyboard, I also added shortcut keys to iterate between them: 'k' to iterate left and 'l' to iterate right.  This was very cool because I could be in the middle of a line and change colors while drawing.  Here is the result of drawing a straight line across while iterating through the colors with the keyboard at the same time.

Now I want to add the number of data values at each particular point, so the viewer knows how many samples map to each value in the 3D visualization.  Again, I had to dig into the black-box code that the class page recommended.  I found a way to use the data to my advantage and plotted, in this widget, where the data values were nonzero, up to the max value.  Another thing I did was change the focus window so that click focus extends 20 pixels outside the box; if someone clicks and holds just outside the box and drags, the boundaries at the borders are handled.
Now that the widget showed the number of values at each f(x), I changed the label on the left to say intensity, since that makes more sense.  The data values are rescaled to fit in the 0-255 box.
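The rescaling of the data-value counts into the widget's 0-255 box amounts to a simple linear rescale; a Python sketch (the names are mine, not from my code):

```python
def rescale_counts(counts, max_val=255):
    """Rescale per-bin sample counts so the tallest bin fits the
    0..max_val box of the widget."""
    peak = max(counts)
    if peak == 0:
        return [0] * len(counts)
    return [c * max_val // peak for c in counts]
```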

The last thing I wanted to add was a bar at the top that shows the color value at a certain f(x).  This took forever, since I had to write everything myself and there were many complications that forced me to use other routines to draw.  In the end, everything looked very good and I was impressed with my results.  Below I will show the checkerboard alpha bar that I drew.
After that, once I added color to the bar, which was where the complications came in, I got a working version of the following.
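The checkerboard alpha bar boils down to two small pieces: a checker pattern for the background and a standard alpha blend of the color on top of it.  A Python sketch of the idea (hypothetical helpers, not my Processing routines):

```python
def checker_color(x, y, cell=4, light=200, dark=100):
    """Background checkerboard gray value at pixel (x, y)."""
    return light if ((x // cell) + (y // cell)) % 2 == 0 else dark

def composite(fg, alpha, bg):
    """Standard 'over' blend of a foreground channel onto the checker
    background; all values are 0..255 integers."""
    return (fg * alpha + bg * (255 - alpha)) // 255
```

At alpha 255 the checker disappears behind the color; at alpha 0 only the checker shows, which is exactly what makes the alpha readable at a glance.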


Comparing the colors in the bar to the rendered result, I know that it is debugged correctly.  So I didn't have to restart the program every time, I used the space bar to reset the values back to their defaults.

The first dataset I decided to study was the fuel dataset.  The most interesting thing I found was that the values varied throughout the dataset: the center had a different value than the outside.  The values changed gradually across the volume, meaning the fuel dataset probably contains a density or a temperature, since they are not uniform throughout.  Below is the transfer function I came up with for the dataset.


The RGB values were hard to work with, so if I had more time I would have used HSV instead of RGB.  It was cool looking at the dataset and coming up with a good transfer function.  Some of the greatest strengths of my design are showing the color in the top bar and the easy functionality of moving the cursor around to discover data.  The con is the RGB issue I mentioned earlier.  The challenge of finding a good volume rendering is finding where the values of interest are; you have to spend a lot of time with the data to find interesting areas.  Still, this is a faster process than any other design pattern I've used in the past, which makes it super fun to play with.  Now to study another dataset.
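For reference, picking colors by hue is a one-liner with Python's standard colorsys module, which is why HSV would have been friendlier than juggling three RGB curves (this is an illustration, not part of my tool):

```python
import colorsys

def hue_to_rgb(hue, sat=1.0, val=1.0):
    """Pick a color by a single hue knob (0..1) instead of editing
    three RGB curves; returns 0..255 integer channels."""
    r, g, b = colorsys.hsv_to_rgb(hue, sat, val)
    return round(r * 255), round(g * 255), round(b * 255)
```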

In the Bucky dataset, I was able to come up with the following transfer function visualization.


While studying this visualization, I found some other flaws in my design that I would fix with more time.  I didn't like how clicking on red modified only the red; it is confusing to figure out what color I really want.  With more time, I would use a color bucket where you could select a color and use it in the graph at the intensity I wanted.  I felt like I had to really understand how RGB works to be able to use the tool effectively.  I decided that for my case, I wanted to improve the tool so the channel buttons could be selected simultaneously.  In the meantime, I improved the checkboxes so that the 'i' key activates all of the checkboxes and the 'o' key deactivates them all.  I did have to take away the 'k' and 'l' functionality, since I'm not using a radio technique anymore and it didn't make sense.

Below is the upgraded implementation I created to make color picking easier for the user to understand, similar to ImageVis3D.

As you can see in the image above, the values follow one another since all channels are selected.  After I deselect the green channel, I get the changes shown in the second photo.

I really like this implementation better.  Let's now go and do some data exploration on the same datasets again and see if we can discover anything new.


As you can see from the images above, it is easier to move values around, though I didn't find it significantly easier.  I did enjoy using the tool more, though, and felt it was a better implementation than the first.  Now let's look at the other dataset again and see if that makes it easier.


In this dataset, I did find it significantly easier to explore the data and put colors into different areas.  It was still confusing, since selecting red only modified red and didn't necessarily show more red in the image.

Overall, this has been my favorite assignment because it feels powerful to edit 3d images using 2d tools.  I like the ease of access and how things work uniformly.

You can download my code from sci.utah.edu/~mavinm/cs6630/TransferFunctions.zip; note that you need authentication to download it.  Please contact me if you have any issues.

Monday, November 4, 2013

Scalar Data

For this assignment, we are creating data readers to read in volume data and visualize it.  The data was given to us in NRRD format.  Using the header of the 2D brain data that was given to us, I successfully imported the data and rendered the image seen below.
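For the curious, the core of a reader like this is splitting the NRRD header from the raw data at the blank line.  Below is a minimal Python sketch that assumes a simple uncompressed 'raw' encoding; real NRRD files support many more options, and this is not the class-provided reader:

```python
def parse_nrrd(blob):
    """Minimal NRRD parsing sketch: the header is 'key: value' lines
    after the magic, terminated by a blank line, followed by the raw
    data block.  Assumes uncompressed 'raw' encoding."""
    head, _, data = blob.partition(b'\n\n')
    fields = {}
    for line in head.split(b'\n')[1:]:          # skip the NRRD magic line
        if line.startswith(b'#') or b':' not in line:
            continue                            # comments / non-field lines
        key, _, val = line.partition(b':')
        fields[key.strip().decode()] = val.strip().decode()
    sizes = [int(s) for s in fields['sizes'].split()]
    return fields, sizes, data
```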

Further research into this data showed me that the intensity values were not between 0 and 1, where 1 is the max intensity and 0 is no intensity.  I mapped the values into that range by linear interpolation, giving the following image.
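The mapping is just a min-max rescale; in Python it would look like this (a sketch, not my actual code):

```python
def normalize(values):
    """Linearly rescale raw intensities into [0, 1]: the minimum maps
    to 0 and the maximum maps to 1."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]
```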


The next thing I did was go to ColorBrewer and choose a single-hue color scheme in turquoise, since that was assigned by the class.  The following is my output result.


This was hard to visualize, so I inverted the color scale and got the following image, along with all the colors I tried out.






Personally, I still think the black/white visualization shows a lot more information, since there is a stronger change in intensity when studying the data.  Turquoise is the best of the color mapping techniques since it goes from light to dark and looks more like the inside of the brain in an MRI scan.  Since it is easiest on the eyes and looks better than the ColorBrewer black/white image, the first turquoise image, seen again below, is my winner.


Now it's time to rescale the image to a fixed height of 800 pixels.  Since I was using a point-plotting method, this was the image that came out.


I had to scale the strokes by 4, since I have a retina display, to get the points to appear pixelated.



Now it's time to do linear interpolation between the points so they don't appear pixelated as in the image above.  Interpolating the left-to-right pixels, I get the following image.



I then add in the y-axis interpolation and get the following image.


The reason there are still missing points is that I also need to interpolate diagonally.  Here is the result when I interpolate diagonally as well.


As you can see, the image looks much clearer when you linearly interpolate between the points in all directions, compared with the pixelated image, shown again below.


The error you can see in the interpolated image is that the border is not shown accurately, since my algorithm does not interpolate the edge case.  Everything in the center looks much clearer, though you can still see that it is a little pixelated.  Rather than full smoothing, all I did was interpolate between two adjacent pixels; the image would look a lot better with an algorithm that looks at more pixels and gives the effect of a Gaussian blur instead.

Because the edges were a little weird and the diagonal pass was probably causing some noise, I redid the interpolation and got the following graph.


Comparing the two images, my original interpolation was wrong, and this one actually gives a very good result.  The only area that blended off was the right side of the image.  When I reran the code on the test.nrrd provided by the class web page, the image clamped to the edge as expected, so I felt good about my interpolation results.
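The redone interpolation is essentially bilinear sampling with edge clamping; a compact Python sketch of that idea (not my Processing implementation):

```python
def bilinear(img, x, y):
    """Sample a 2-D grid (list of rows) at fractional coordinates using
    bilinear interpolation, clamping neighbors at the right/bottom edges."""
    h, w = len(img), len(img[0])
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx   # blend along x, top row
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx   # blend along x, bottom row
    return top * (1 - fy) + bot * fy                  # blend the two rows along y
```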


Implementing the kernel algorithm on my own, I was able to visualize the isocontours on the graph using the isovalue 32700.


I then ran my marching squares algorithm on the brain dataset and checked it at isovalue 176 to see if it was accurate.
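The heart of marching squares is classifying each cell by a 4-bit corner code and linearly interpolating where the contour crosses each edge.  A small Python sketch of those two pieces (not my full implementation, which also emits the line segments for each case):

```python
def cell_case(cell, iso):
    """4-bit marching-squares case index for one cell.
    cell = (tl, tr, br, bl) corner values; a set bit means that corner
    is at or above the isovalue."""
    tl, tr, br, bl = cell
    return ((tl >= iso) << 3) | ((tr >= iso) << 2) | ((br >= iso) << 1) | (bl >= iso)

def edge_cross(a, b, iso):
    """Fractional position (0..1) where the contour crosses the edge
    between corner values a and b, by linear interpolation."""
    return (iso - a) / (b - a)
```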


It compared reasonably with the example image that was given to us on the class web page.  The next thing I worked on was toggling contour mode by pressing the key 'c'.  I then explored different values within the brain dataset.  I noticed that anything above an isovalue of 226 was just static, and anything under 146 was not focusing on data of interest.  My favorite isovalue was 216 because it seemed to focus on the outside of the brain, where the density is not as strong.


Isovalue = 146

Isovalue = 216

Isovalue = 226

To bounce between isovalues most effectively, rather than restarting the program every time I wanted to change them, I used the '[' key to decrement the isovalue by a set amount (10 for the brain picture) and the ']' key to increment it by the same amount.
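The key handling is trivial but handy; something like this Python sketch (the clamp range is an assumption of mine):

```python
def step_isovalue(iso, key, step=10, lo=0, hi=255):
    """Adjust the current isovalue with '[' / ']' without restarting
    the program, clamped to an assumed data range."""
    if key == '[':
        iso -= step
    elif key == ']':
        iso += step
    return max(lo, min(hi, iso))
```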

There seem to be no problems in my marching squares algorithm; it compared spot-on with the demo that was posted online.  Alongside the turquoise I liked, I tried a few different highlight colors, giving me the following results.

Purple Highlighting

Yellow Highlighting

Light Purple Highlighting

Blue Highlighting

My favorite results were the blue highlighting and the black highlighting for contour lines.  I actually like blue more because it doesn't take away from the details of the drawing as much, so my color of choice is the blue.

Now is the time for data exploration.  Given the mountain dataset, this is the image I was able to produce.

I decided to change the values so that the colors would make more sense, giving me the following graph.

Though I thought this one looked cooler, it drew the eye to the dark area as if I were highlighting it, so I decided to change it back.  The following are the contours I studied.





It turns out that rescaling the isovalues to between 0 and 1 was a great idea, since there were even negative numbers in this dataset.  After looking much closer at the dataset, I started noticing from the contours that the larger numbers ran along the silhouette in the middle of the image.  Since this was a mountain dataset, I could treat larger data values as steeper areas and smaller values as flatter areas, and I adjusted my visualization to accommodate these changes.

This image is much more fun to study and looks much more interesting, since it concentrates on just the steepness and negative numbers no longer play a role.  In the image above, the darker the color, the steeper the incline; the lighter the color, the flatter the terrain at that point.  I then studied the contours on this particular visualization.
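If I wanted to quantify steepness directly rather than reading it off the raw values, a central-difference gradient magnitude would be one common way; a Python sketch (not something I implemented for this assignment):

```python
def gradient_magnitude(grid, x, y):
    """Central-difference gradient magnitude at an interior point of a
    2-D grid (list of rows); larger values mean steeper terrain."""
    gx = (grid[y][x + 1] - grid[y][x - 1]) / 2.0
    gy = (grid[y + 1][x] - grid[y - 1][x]) / 2.0
    return (gx * gx + gy * gy) ** 0.5
```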

It was very cool studying where the slopes are the same, and I'm very impressed with my results.  One of the most interesting things I found in my study was how it spots areas my eyes don't see as well, like small peaks in certain areas, as seen in the example below between the two circular contours toward the top of the image.

I really liked making the contour mapping dynamic.  It was so interesting how accurate the image looks at a larger resolution and how the contours are still drawn without any error.  The biggest difference between the brain dataset and the mtHood dataset is that the brain dataset was all positive values while the mtHood dataset had negative values.  It turned out that the colors did not matter on this dataset as much, even as I tried to put the data back into its original color.


My data can be downloaded from http://sci.utah.edu/~mavinm/cs6630/scalarData.tar.gz or http://sci.utah.edu/~mavinm/cs6630/scalarData.zip.  You do need authentication to download the data.  Please contact me if you have trouble viewing the data.