Monday, June 22, 2009

Latex, but not like the glove...

Been learning how to use LaTeX today to write up my work. Because I just started with a template made by Beth from Astro 333, it hasn't been hard to get going. I'm just plugging in my graphics and describing my process. (For the sake of good record keeping, and so that I can jump back in easily without losses when I get back from vacation.)

What was really exciting was that when I went back and looked at a color-color plot of my star catalogs (about 1000 stars in each field!) that I made last Friday, it showed the sequence brilliantly. I color-coded a few kinds of stars to look at, namely a "faint" vs. "bright" parameter (25th-mag cutoff), and now a flux-ratio cutoff, where anything less than 1.3 was displayed in one group, and 1.3-1.5 in another. The sequence was made up almost entirely of the <1.3 group, with only a few bright stars from the other group along the stream. I plan on going back to this to see how far past the cutoff those good-looking stars are, after I finish my LaTeX-ing. Which may mean after I get back from my two weeks away.

I'm going to finish up my LaTeX-ing tomorrow morning, and then I'm off to the U.K.
Cheerio then!

Thursday, June 18, 2009

Two down, plenty left to go...

About to finish up item 3 of the Plan.
The past few days I've just been going through and documenting my star-picking process, taking inventory of my really good graphics, and adjusting them for updated parameters. My official classification cutoff is now 0.75-1.0 for stars.

Today I have been working on the flux ratios. My hope is that two peaks will appear on my histogram and help me pick a parameter to further weed out a few galaxies (which would correspond to the anomalous lines on my aperture magnitude vs. aperture radius plots that don't fit a star's profile). When I went through, I was careful with the math in my code, making sure I had done my distance modulus algebra correctly and all, and made some good histograms.
Somewhere down the line I realized I'd in fact still managed to input my magnitudes backwards. I had the equations set up correctly, but instead of solving for flux(ap11)/flux(ap8) I had gotten flux(ap8)/flux(ap11). Not a huge deal, so I just flipped 'em around to see where that got me.
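For the record, the ratio comes straight out of the magnitude difference, since the zero point cancels. A quick Python sketch of the relation (the actual code is in IDL; the aperture labels just mirror the ones above):

```python
# Flux ratio from two aperture magnitudes. With m = -2.5*log10(flux) + const,
# the constant cancels:  flux(ap11)/flux(ap8) = 10^(-0.4 * (m_ap11 - m_ap8))
def flux_ratio(m_inner, m_outer):
    """Return flux(outer aperture) / flux(inner aperture)."""
    return 10 ** (-0.4 * (m_outer - m_inner))

# A star fully enclosed by the smaller aperture has nearly equal magnitudes,
# so the ratio sits just above 1; an extended object picks up extra light in
# the bigger aperture (smaller magnitude there), pushing the ratio higher.
star_like = flux_ratio(22.00, 21.98)
galaxy_like = flux_ratio(22.00, 21.50)
```

Swapping the two magnitudes just inverts the ratio, which is why flipping them around afterwards was painless.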

What's left to do is really look at their peaks (i.e., I've been looking at them, but am unsure what to do with what I see, and need to consult Beth in the morning, compare "star" vs. "galaxy" peaks, etc.) and pick a parameter, say treating objects with flux ratios greater than 1 as galaxies and discarding those from my star catalog.
Then: vacation, and then moving on with the Plan.

Monday, June 15, 2009

The Best Plans of Mice and Men...

...often go awry. ~Robert Burns

But here's to hoping this one doesn't!

The Plan (for the second half of the summer)
  1. Detail process of star isolating technique
  2. Make final call on classification cut-off
  3. Separating further with aperture magnitudes:
    Flux ratios and plots
  4. Look at the spatial distribution and cmds of the stars
  5. Pursue analysis or proceed with this work on a new data set

Monday was very productive, in addition to talking over this Plan with Beth. I sat down with her first thing in the morning and got through the two bugs left in my error code. Two rather simple things, so I spent the rest of the day full-bore making lovely error plots. I made magnitude-error and color-error plots with the median lines, in both N and S, for all objects and just for the "star" objects (as determined by class ge 0.7). I even made four corresponding nifty little plots showing just the median lines, in different colors, to compare them on a single set of axes. In the process I learned how to use "legend" to add a key to my graph. Beautiful.
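The median-line step itself is nothing fancy: bin the objects by magnitude and take the median of the errors in each bin. Sketched here in Python rather than IDL (the bin width and variable names are my own invention):

```python
import numpy as np

def median_line(mag, err, bin_width=0.5):
    """Bin objects by magnitude; return bin centers and the median error per bin."""
    edges = np.arange(mag.min(), mag.max() + bin_width, bin_width)
    centers, medians = [], []
    for lo in edges[:-1]:
        in_bin = (mag >= lo) & (mag < lo + bin_width)
        if in_bin.any():  # skip empty bins (holes at the bright end)
            centers.append(lo + bin_width / 2)
            medians.append(np.median(err[in_bin]))
    return np.array(centers), np.array(medians)
```

Each field/selection then gets its own median line, and those are what I overplotted in different colors with the legend.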

Next: Get started on that "best catalogue of stars."

Friday, June 12, 2009

To Err is Human

The day was filled with Errors, of the human and astronomical varieties.


I went through my multi-plots some more, this time with a magnitude=24 cutoff. By looking at these brighter objects, we gain two major things:
-less uncertainty in measured magnitudes
-probably a more accurate classification (star vs gal)

We also then make the assumption that "stars are stars are stars," i.e., that the dimmer stars are fundamentally and structurally of the same nature as the bright ones. Thus, by investigating the patterns of star sequences in brighter samples, we can glean info about the dimmer ones (which are farther away or cooler, in some cases). And, as Beth pointed out (and I surmised as well), this assumption would be ill advised in the realm of galaxies: dimmer galaxies don't all have the same structures as those closer to us, owing to the highly varied morphological properties of galaxies, their higher redshifts, and the phenomenon of "looking back in time" at younger galaxies, as opposed to our closer, more evolved galactic structures.

Once I implemented the higher magnitude cutoff, I made my multi-plots, showing different layers of brighter objects and color-coded stars vs. galaxies. **I did successfully show the separation of the brightest stars in a pretty sequence. Have yet to attempt this with an i-z color-color plot.**

And by the way, this was all on the larger set of processors, squid! Which is actually why this took me a little longer than it should have, because I was working in a less familiar environment. But I'm pretty acclimated to it now, even though I did prefer eel's IDL development environment. The switch became immediately necessary because poor little eel and his one processor could no longer handle the loads I was giving him with the multiple-plot procedures.


Began the morning by going through and making up a few quick multi-plots (in the same manner as yesterday) corresponding to the southern field, as I had worked only with the northern half prior.

Then came the Reign of Error! ;P
I had started a code yesterday evening, setting up to make my plots of magnitude and color uncertainties, in the style I found as I read Dylan's research. This plotting got off to a rough start, and only got worse.

First tackled was a simple plot of B measurement error vs. B magnitude. This looked about as expected, once axis parameters were implemented. So I moved on to a B-V error vs. B magnitude plot, and got an unpleasant surprise: numerous points appeared below the curve of the estimated minimum errors, in streaks towards the x-axis. Beth and I puzzled over these odd errors for a while, doing several sanity checks on my data and making sure my code was indeed debugged, and it seemed to me that the only way they could have appeared was if the B error was somehow smaller than the V error. Then it dawned on me that some of the B errors were showing up as 0's!
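In hindsight the geometry of the problem is simple: the color error adds the two band errors in quadrature, so a bogus zero in one band drags the point below the minimum-error curve. A small Python illustration (assuming independent band errors; the numbers are invented):

```python
import math

def color_error(sigma_b, sigma_v):
    """Uncertainty on the B-V color, assuming independent B and V errors."""
    return math.sqrt(sigma_b**2 + sigma_v**2)

normal = color_error(0.05, 0.04)    # combined error, about 0.064
suspect = color_error(0.00, 0.04)   # a zero B error collapses it to 0.04,
                                    # landing the point below the curve
```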

Investigating a small portion of these points, I found that most of them were classified as galaxies (like 90% between 0-0.04). I also went on to examine them in the largest aperture, to see how the measurements and errors compared. Using my new favorite IDL toy, the multi-plot, I set up a comparison between the two uncertainty plots in each aperture, but have been able to determine little from them other than that this odd phenomenon occurs in the same manner in both.

Since I needed further direction for the afternoon before Beth left, she also got me thinking about and working on taking a median of the plots. Using mostly the online tutorial from Astro333, and a little from an IDL book, I got the code written out to my satisfaction, and ran it with high hopes. These were soon torn down, because the code couldn't even get as far as the for loop. Debugging proceeded for hours (yes, multiple hours *exhaustion*), bit by bit, as each line was altered. And sanity-checked. And thoroughly examined. Repeatedly (thanks to Gail for helping me through this frustrating and grueling process). One error message at a time was sifted through.

Finally, we got to the bottom of the problem, and I solved the for loop's issues by adding some limits on the range of the data set. This stemmed from the realization that some of my data are funky: very dim, full of crazy filler readings of 99's, or there are simply holes with no data points at some magnitudes on the brighter end of the spectrum.
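The limits amount to masking the bad values before the loop ever sees them. Roughly, in Python (the 99 filler value is real; the magnitude limits here are placeholders I made up):

```python
import numpy as np

def clean(mag, err, bright_limit=18.0, faint_limit=28.0, sentinel=99.0):
    """Drop filler readings and clip to a usable magnitude range."""
    good = (mag != sentinel) & (mag > bright_limit) & (mag < faint_limit)
    return mag[good], err[good]
```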

Then, luckily, without another error, I got it to give me the median values I sought. And then, to my dismay, when I tried to oplot them, it failed, claiming that the result wasn't an array. Which I have yet to figure out, because I told it that it was an array, and I can't figure out why it doesn't know that... But at this point I'm going to need a fresh pair of eyes, because I've been working on this code too long for anything wrong to pop out at me.

So, for Monday:

-Finish debugging this code and get a pretty median uncertainty line.
-Meet with Beth about my work and set official goals for where I'm going with my project for the rest of the summer.

Also on my to-do list:
-Error plots for other bands and colors
-more reading

Wednesday, June 10, 2009

Colorful Couple of Days


Started out making some color-color plots, beginning with B-V vs V-i. When I got that successfully coded, I saw that the several thousand black dots were very hard to interpret.
Plan: a) make it plot different symbols for "stars" and "galaxies" ; and/or b) use randomu to generate a random selection of objects to narrow the sample to plot.
Option a seemed easier, so I started researching through one of the IDL books and browsing online documentation. Fairly easily I adapted my code to a plot and oplot of the two groups of objects, with different psym numbers, and looked at some CMDs and color-color plots. These were unfortunately still very hard to read, what with the plethora of points. Soon thereafter, Beth suggested using different-colored dots for the two sets, and I spent the rest of the afternoon working out my new code to include loadct, making the stars appear red in my new set of plots.

These new black-and-red plots were successful for the CMDs and B-V vs. V-i plots, in both fields, but I have yet to get my code working for a B-V vs. i-z plot.


Beth walked us through the code to add in order to keep plots in the 'x' display device from disappearing when overlapped by other windows. Hurray!
She also suggested, in addition to the randomu reminder, using the !p.multi mechanism to display multiple plots side by side. I used this to make a 2x2 display of my B-V vs. V-i plots, with the dual-color, solid black, just-stars, and just-galaxies plots.

Looking at these, there is a kind of stream pattern that could be the stars; it's just clouded over by "galaxies", so another thing I want to do is try to isolate those and highlight the pretty stars' path in red. (When Beth sent me Dylan's work on another project from the Spring semester, I looked at some of his color-color diagrams, and though they were in different bands, they had the same kinds of shapes, with this red line and a black cloud of other objects. My goal will eventually be to get plots as nicely set up as some of the ones in his paper.)

After our group meeting, because of a comment by Gail about how close to the "star cutoff" some of my anomalous objects were, I went back to my code and colored some of the dots on my CMD to distinguish the really starry stars (this time, 0.85-1.0 classification) from the less star-like ones (0.7-0.85). Sadly, this really didn't shed much light on the issue, as the two groups were still dispersed amongst each other, not indicative of any particular behavior that either possessed uniquely.
Beth then suggested cutting out some of the faintest objects, as the most likely to be misclassified. So, I started out making new plots with magnitude cut-offs of 23, which resulted in very few stars, in comparison. I think tomorrow I will try bumping it back up to 24 or 25 to see where that leads me.

Goal for Friday:
Work out how to isolate the line of stars OR get the B-V vs i-z color-color diagram code to work.

Other things To Do:
-Randomu to narrow sample size
-Think about some ways to use these plots and info to actually separate out the misclassified galaxies (and reconcile the discrepancy between the Besancon model star population prediction and the number I am currently working with)
-Read (for fun!) the Dark Matter Substructure paper forwarded by Beth, which I do find interesting.
-Work on learning how to use the IDL command line outside of the Development Environment so that I can work on squid instead of eel, because it seems to be slowing down and choking up a bit more and more every day...
-Figure out a better way to try to hang the clock. ;P

Tuesday, June 9, 2009

Besancon Model

Yesterday and this morning:
I looked up the RA and Dec of the GOODS fields and the HUDF (the HUDF was taken in the same field as the GOODS southern field, and therefore should have a relatively similar stellar density). Then I learned how to use the IDL procedure GLACTC to convert to galactic coordinates.

N: RA = 189.2282, Dec = 62.2355; Gl = 125.86662, Gb = 54.810068
S: RA = 53.122923, Dec = -27.79965; Gl = 223.55983, Gb = -54.430659
(all in degrees)

These were then used to generate a simulation with the Besancon model: Model of stellar population synthesis of the Galaxy: Catalogue simulation without kinematics, Johnson-Cousins photometric system.

I left in the default parameters, except expanding the distance range to 250kpc, and changing the V-band range to 10-28, then input the coordinates.
It generated a catalog with 16308 stars in the north, and one with 16102 in the south. This averages to 16205 stars per degree^2.
The GOODS field is 0.0889 deg^2, giving an estimate of 1440 expected stars.
In comparison, HUDF is 0.003055 deg^2, with an estimate of 50 stars (close to the results from the HUDF paper).
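Spelling out the bookkeeping (this assumes each simulated catalog covers one square degree, which is what the per-degree average above implies):

```python
# Besancon catalog counts, each over (assumed) 1 deg^2:
north, south = 16308, 16102
density = (north + south) / 2        # 16205 stars per deg^2

goods_area = 0.0889                  # deg^2 (N + S GOODS fields)
hudf_area = 0.003055                 # deg^2

goods_expected = density * goods_area   # about 1440 stars
hudf_expected = density * hudf_area     # about 50 stars
```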

In my preliminary sorting, by just using 0.7-1 as the set of "stars", I get 10638 stars in the north, and 10168 in the south. Some of these we suppose are misclassified galaxies, and some of my next steps are to see if I can't separate those out.

To Do:
-- color-color plots, b-v vs v-i and b-v vs i-z, with different symbols for stars and galaxies (meaning I'll be looking through code literature for a bit to learn that)
-- code something to see how many stars' magnitudes are still changing vs. leveled off (to try to separate out the anomalies in the CMDs)

Friday, June 5, 2009


For-loops have now replaced Fruit Loops in the top 1000 topics on my mind. ;P
Context: I was very happy I got my first for-loop to work today!

I was working on the optimal-aperture project, plotting magnitudes vs. aperture size for a small sampling of objects. It took a little while for me to work out how to code it, and with Beth's help when I had about half a step left, I got some pretty good plots! They show the "light distribution" (reminiscent of the HUDF paper, we later realized) of the stars: a rapid increase through the first few apertures, then leveling off around 20 to 30 pixels. So I chose aperture 8 (of the 11), with a radius of 33.33 pixels.

What was interesting were the objects that didn't level off so quickly, but grew progressively throughout... suggesting perhaps that they are misclassified galaxies. This may be interesting to pursue as a further way to analyze and categorize star vs galaxy.
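If I do pursue it, one way to flag those objects numerically rather than by eye would be to compare the last couple of aperture magnitudes; if they agree, the curve has flattened. A hypothetical Python sketch (the tolerance is invented):

```python
import numpy as np

def has_leveled_off(mags, tolerance=0.02):
    """True if the aperture magnitudes stop changing at the largest radii.

    mags: magnitudes in order of increasing aperture. An object whose last
    two apertures agree to within `tolerance` mag has enclosed essentially
    all of its light (star-like); one still brightening is a candidate
    misclassified galaxy.
    """
    return abs(mags[-1] - mags[-2]) < tolerance

star = np.array([24.0, 22.5, 22.1, 22.01, 22.005])   # flattens out
galaxy = np.array([24.0, 23.0, 22.4, 22.0, 21.7])    # keeps brightening
```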

Using aperture 8, I then went through and made quick work of producing cmds of the North and South fields, for all objects, as well as "galaxies" and "stars". Examining these cmds alongside the aperture 11 ones revealed something interesting: the mysterious cloud of slightly brighter points on ap 11 cmds appears to have migrated up from a slight overdensity at a dimmer magnitude on the ap 8 cmds.

I think this follows from my hypothesis about the galaxies continuing to brighten, after the stars have settled. The dots that remain consistent in the cmds between apertures are likely to correspond to those objects whose magnitude we saw level off on the aperture plot. The objects whose light continued to grow would have brighter magnitudes at larger apertures, and would thus appear to climb up the color-magnitude diagram, as did this cloud of points. Again, this could be further pursued as a method to classify and check whether objects are indeed stars or galaxies.

Continuing To-Do List for next week:

1) Look up RA and Dec (and galactic coordinates) of the two surveys and compare star/galaxy density predictions
2) Look up the Besancon galactic model and compare star/galaxy density predictions
3) Next week- look into using color information to distinguish mislabeled galaxies from true stars

Thursday, June 4, 2009

Troublesome CMDs

Well, yesterday was not nearly as productive as it should have been.
I did a lot of reading, trying to figure out some coding things. I started making my color-magnitude diagrams (using the magnitude measured with the largest aperture, since I haven't yet determined which aperture radius is optimal). I hit a lot of snags, and my plots came out looking very odd.


Spent the morning again failing to debug CMD code. Beth helped me get through it this afternoon, and I now have a lovely diagram, with corresponding "star-only" and "galaxies-only" CMDs, for comparison.
I also went back to see if the number of stars I've selected (semi-arbitrarily, by taking the class cutoff to be 0.7) actually matched the prediction based on the HUDF numbers. From my work and histogram today, it appears to be much larger (on the order of 10,000, rather than 500 per field). I have yet to determine how to account for this discrepancy, aside from claiming that the morphological parameters called some galaxies stars.


0) For-loops and figuring out optimal aperture for magnitudes
1) Look up RA and Dec (and galactic coordinates) of the two surveys and compare star/galaxy density predictions
2) Look up the Besancon galactic model and compare star/galaxy density predictions
3) Next week- look into using color information to distinguish mislabeled galaxies from true stars (some "astronomy kung fu" may have to happen here...). ;)

Tuesday, June 2, 2009

Histograms and Star-Count Estimate


To examine the distribution of objects as galaxies or stars more clearly, Beth suggested plotting a histogram. This led to a lot of IDL instruction reading, and a moderate amount of her help. But by the end of the day I had a good plot of the distribution of objects, clearly (thank goodness) showing tall spikes at the galaxy end of the spectrum, and a moderate spike at the star end.
Next I set out to make a histogram of the range of FWHM amongst the stars. I did so by learning how to use a "where statement" (Beth's favorite!) to pick out just those objects in the sample whose classifications had been assigned a value between 0.7 and 1.0. Debugging was still needed.
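The where-statement selection is just a threshold test; the Python equivalent of picking out the 0.7-1.0 "stars" would look something like this (classifier values invented):

```python
import numpy as np

# Classification values: 0 = galaxy-like, 1 = star-like.
class_star = np.array([0.05, 0.92, 0.75, 0.30, 0.98])

# Equivalent of IDL's where(class ge 0.7 and class le 1.0):
stars = np.where((class_star >= 0.7) & (class_star <= 1.0))[0]
```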


Finished up (learning how to) making the FWHM histogram.
Also read through the paper Stars in the Hubble Ultra Deep Field to learn how they separated the galaxies from the stars in their catalog of approx. 10,000 objects. Though they used some tricky super-mathematical techniques, it appears to have boiled down to their light distribution, and then a little magnitude profiling. This gave them 46 unresolved objects (non-galaxies, at least until further narrowing down). Of that set, they used what spectroscopy they could, plus elimination of everything dimmer than an i magnitude of 27, to arrive at 26 stars.

Given the number of objects, we had a bit of a scare, anticipating that if proportional, our catalogs would then yield a mere 100 or so stars. But never fear! Arithmetic is here!
The area of the HUDF is 11 sq. arcmin, meaning there were 2.364 stars per sq. arcmin. Our GOODS survey data (N and S fields combined) cover 320 sq. arcmin, giving roughly 756 expected stars.
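The arithmetic, written out:

```python
# Scale the HUDF star density up to the GOODS area.
hudf_stars = 26
hudf_area = 11.0      # sq. arcmin
goods_area = 320.0    # sq. arcmin, N and S combined

density = hudf_stars / hudf_area      # about 2.364 stars per sq. arcmin
expected = density * goods_area       # about 756 stars
```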

I'm glad: an estimated 750 stars is probably a lot more useful. Or maybe not? Guess we'll have to see how the numbers boil down.

For Tomorrow:

I began writing code to make some color-magnitude diagrams today, so I'll hopefully finish them up by our group meeting in the afternoon.

Monday, June 1, 2009

Plots Galore!

So, much has been accomplished since last Friday!

I made a couple of data structures (with instruction from Beth) to combine the catalogs of the different bands. So now, instead of reading in a bunch of files to make plots, etc, I just have to access either the North or South catalog I made. Sweet.
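In Python terms, the combined catalog behaves like one structured record per object, with all the bands side by side; a toy sketch with invented column names (the real thing is an IDL structure):

```python
import numpy as np

# One record per object, all bands in one place (columns are made up):
catalog = np.zeros(3, dtype=[("ra", "f8"), ("dec", "f8"),
                             ("b_mag", "f8"), ("v_mag", "f8")])
catalog["b_mag"] = [24.1, 22.3, 25.0]
catalog["v_mag"] = [23.5, 22.0, 24.2]

# Colors now come out in one line instead of juggling separate files:
b_minus_v = catalog["b_mag"] - catalog["v_mag"]
```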

Next, I continued to go through the literature searching for the answer to the star-galaxy classification mystery. Finally I found a "SExtractor for Dummies" guide, and after a bit of searching found a section explaining that the continuous scale rates objects from 0 (galaxy) to 1 (star).

Starting out:
Goals for the day:
Re-plot the ra-dec from the new catalog.
Plot fwhm against the classification.
Use this to approximate a cut off as to what we'll call a galaxy or star.
Also look at range of fwhm and use this to get an idea of what aperture size to use from the magnitude data.

As of lunch time:
All plots in the goals section have been made! RA vs Dec in both north and south, as well as all 8 fwhm-class plots in each band, N and S.
Proceeding to check out galaxy cut-off values as well as fluxes.