Friday, July 31, 2009

So Long, Farewell, Auf Wiedersehen, Good-Bye!

It's the last day of summer work! This is my last post! Ahhhh! ;)

Now that I've got my COSMOS data structures, I've gone through and plotted RA-Dec, some star/galaxy classification histograms, and CMDs. I'm also adding an "epilogue" about these to my summer work summary paper.

Classification and RA-Dec

The data include four parameters by which I can classify stars. From the ACS data, there is the class column, just as in the GOODS data, as well as their own column called mu_class. As far as I have been able to determine from my plots and n_elements(...), mu_class assigns objects with a class of about 0.85-0.9 or higher as stars. The redshift catalog has two additional classifier columns, zp_best and Type. Zp_best is based on redshift data, and based on the histograms, the objects assigned 0 (stars) correspond to the 1's (stars) according to Type. The interesting thing is that, when plotted, my classification of class greater than 0.75 obviously includes a few more stars than their mu_class, yet mu_class has a significantly higher number of stars than even the zp/Type classification. Also, the zp/Type RA-Dec plots show odd holes in the footprint, as if spheres were subtracted out; I initially supposed that this was some kind of masking, but it seems rather significant, and I've not been able to find any documentation explaining it.
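For the record, here's roughly how I've been tallying the three classifications against each other in IDL; the tag names (and the value mu_class uses to flag a star) are my placeholders, not necessarily the catalog's real ones:

    ; Tag names and flag values below are stand-ins for the real structure.
    mine = where(cosmos.class ge 0.75, n_mine)                    ; my class cut
    mu   = where(cosmos.mu_class eq 1, n_mu)                      ; assuming 1 = star here
    zp   = where(cosmos.zp_best eq 0 and cosmos.type eq 1, n_zp)  ; their redshift-based cut
    print, 'class cut:', n_mine, '   mu_class:', n_mu, '   zp/Type:', n_zp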

CMDs

They look beautiful! At least, until you get down to the I-band; those look a little weird, but it could just be some magnitude limits acting funny? Perhaps something to investigate later. The G-R vs R plots are especially nice, and you can clearly see the differences in the stellar cut-offs by watching the population get whittled down.

Color-Color Plots

I made a couple of these really quickly, in B-V vs V-I, and you can again see the beautiful narrowing of the stellar population between the different star parameters.

Document, Files, etc.

I updated my Research document, included a few notes on my directory and file names, copied it to data03 and eel, and e-mailed it to Beth and myself. I also went through and made sure all of the files in my home folder were copied over to data03, since it's been about a week since I last updated that. So, everything is all set, and now I'm kind of sad to be clearing my stuff out of the lab!

But it's been a great summer, and I'm sure I'll do more work in the future.

Au revoir,
~Jennifer

Tuesday, July 28, 2009

Hurray, COSMOS!

Well, it's been a tiring past few days, but it's finally done! The COSMOS catalogs were giving me grief, and the codes to data-structure-ify them ran into a lot of trouble.

I sorted out all of the nonsense with the RA and Dec columns: it turns out the data download site added its own two columns of calculated sexagesimal coordinates, in addition to the catalog's RA and Dec columns. Hence the confusion. So I'm just writing these as strings and ignoring them.

I ran into a lot of other trouble with "null" entries in other columns, so I had to read them in as strings, use a where statement to find the "null"s, and replace them all with "99.9"s. It took a lot of lines of code, and a lot of acrobatics to get the structure to co-operate and accept the columns. But, after a lot of de-bugging help from Beth, I got it to work this evening.
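The gist of the null clean-up, as a minimal sketch using readcol from the IDL Astronomy Library (file and column names are stand-ins for the real ones):

    ; Read the troublesome column as strings, swap the "null"s, then convert.
    readcol, 'cosmos_photcat.txt', id, magstr, format='L,A', /silent
    bad = where(magstr eq 'null', nbad)
    if nbad gt 0 then magstr[bad] = '99.9'
    mag = float(magstr)    ; now safe to store in the structure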

There is now a catalog for the ACS COSMOS data (cosmoscat), one for the photometric data (cosmosphotcat), and a merged catalog after spherematching their RAs and Decs (cosmos).

I've also got a pretty whole-survey RA-Dec plot done already! Next to tackle: classhist (comparing my star/galaxy classification parameter to theirs), selecting out stars, CMDs, color-color plots, etc.

I also finished my LaTeX document on GOODS. It looked great on squid, but when I scp'ed it to eel it turned crappy and the images weren't displayed correctly, and I also had some trouble e-mailing the file. To be checked out later.

Friday, July 24, 2009

GOODS to COSMOS

So, now that I've finished up my analysis of my GOODS Stars, here are my conclusions:

The spatial distribution of the stars in the GOODS fields, both as a whole and the halo stars alone, is statistically random, showing no significant structures or streaming. To me this means one of two things, or more likely a combination thereof:
a) the stars were formed in the disk and somehow randomized and ejected into the halo
b) the stars are from Milky Way satellites, but the accretion event was so long ago that the tidal streams have settled.

I think (b) could probably account for most of them. I suppose there could be ways of telling, with metallicity or something else, which is the origin of these stars, should that be desired.

Anyway, I've written my results and conclusions up in my paper, and I'm just fine-tuning it now.

I'm moving on to (attempting to) run my process on another data set, COSMOS, which as far as I can tell has over 1.1 million objects in it... much, much larger than GOODS. Thus far it has been an unfruitful attempt, with numerous obstacles in the way of actually obtaining a readable file and getting IDL to process it. I don't know who structured their catalog, but they made it really inaccessible by writing the RA and Dec in non-decimal form, so much of my difficulty has been getting those columns.

More work on COSMOS to be continued throughout next week.

Wednesday, July 22, 2009

Halo Stars

Nearly done with GOODS stars project!

K-S Test results for Halo Stars:

North:
28th mag: 0.9855 average w/ 0.24225 stdev
27th mag: 0.9988 average w/ 0.20298 stdev
26.5th mag: 0.7996 average w/ 0.3058 stdev

South:
28th mag: 0.9085 ave. w/ 0.18898 stdev
27th mag: 0.68989 ave. w/ 0.20114 stdev
26.5th mag: 0.8457 ave. w/ 0.1335 stdev

The trend towards more randomness in the 28th magnitude is still seen in the halo subset of GOODS stars, and these are not very different from those of the K-S test on the whole stellar catalog.

Given the high values, I'm comfortable gleaning from this that in the northern field the spatial distribution is random. Though the southern field has slightly lower values, they are still within the random-range, and within a standard deviation of the expected probability (based on the previous random vs random trials).

Wrapping Up

Now I'm just adding to the Results section of my Research Document, and I'll be wrapping up the GOODS stars project. Then I'll be playing with the COSMOS data and checking out its star/galaxy population and stellar spatial distribution.

Monday, July 20, 2009

K-S Test Results

Monday:
Kolmogorov-Smirnov Test
Tests for how similar two data sets are by measuring the largest distance between their two cumulative distribution functions. The IDL routine, kstwo, works by inputting two data sets and outputting the K-S statistic "D" and the corresponding "prob". If prob is small, the data sets are likely not drawn from the same distribution. I ran this test on my data and a couple of sets of random data, taking the mean and standard deviation of numerous trials for comparison. I got the kinds of results I was expecting between the random sets, but got two differing results on the GOODS stars depending on whether I included my whole star catalog or limited it to the 27th magnitude.
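One trial looks roughly like this (kstwo is the IDL Astronomy Library routine; I'm sketching it on the Dec coordinate, with made-up variable names):

    ; Compare one observed coordinate distribution against one random set.
    kstwo, dec_stars, dec_random, d, prob
    print, 'D =', d, '   prob =', prob
    ; Looping over the 9 random sets and collecting d/prob into arrays gives
    ; the means and standard deviations quoted below, via mean() and stddev().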

Results

GOODS-N to 27th mag vs. Random 1 (Normalized*, 1 set vs 9 sets)
d-mean: 0.19797699
d-stdev: 0.04482917
prob-mean: 0.69592268
prob-stdev: 0.2092225

*The first time I ran it, I hadn't yet normalized them, so the sets had varying total populations, and resulted in even higher values of d and lower probabilities.

GOODS-N to 28th mag vs. Random 1 (Normalized, 1 set vs 9 sets)
d-mean: 0.089285724
d-stdev: 0.055698492
prob-mean: 0.99998375
prob-stdev: 0.18113585

GOODS-S to 27th mag vs. Random 1 (Normalized, 1 set vs 9 sets)
d-mean: 0.16683391
d-stdev: 0.027475102
prob-mean: 0.84035881
prob-stdev: 0.12022707

GOODS-S to 28th mag vs. Random 1 (Normalized, 1 set vs 9 sets)
d-mean: 0.12184878
d-stdev: 0.042824080
prob-mean: 0.99533697
prob-stdev: 0.19727988

Random 2 vs. Random 3 (1 set vs 9 sets)
d-mean: 0.14321605
d-stdev: 0.055629589
prob-mean: 0.92319757
prob-stdev: 0.20686308

Random 2 vs. Random 3 (9 sets vs 9 sets)
d-mean: 0.12757371
d-stdev: 0.032610029
prob-mean: 0.98843256
prob-stdev: 0.070053501

Random 4 vs. Random 5 (100 sets vs 100 sets)
d-mean: 0.1610636
d-stdev: 0.046950535
prob-mean: 0.8827874
prob-stdev: 0.18292440

Conclusions

I'm more comfortable going with the statistics done on the GOODS data to the 27th magnitude, since in my earlier work eliminating the dimmest data points gave me a more purely stellar sample. This lowers the fields' likeness to randomness. The South field, at about 84%, I think is still well within range to call "close to random", given the averages and standard deviations from the trials where the sets are known to be random. The North field, at almost 70%, I can't say quite as confidently, but it lies at the edge of what I'd call random.

Friday, July 17, 2009

Phew!

Made it through the week!

Accomplished all of the plots

Took a huge bite out of yesterday's To-Do List. Got all of the multi-plots done in both fields, including the histograms! The only persisting problem was in the smooth plots: for some reason the contour lines only showed up on the first of the 9 plots, while all of the other labels, commands, etc. worked throughout the for-loop. This is a mystery, to be looked at later if need be.

To Do:

Read up on and implement Kolmogorov-Smirnov test
Update Research Document
...COSMOS!?

Thursday, July 16, 2009

Keep to the Code

Today: Spent the majority of the day engulfed in editing and writing code.

Fixed Random Distribution

Beth pointed out that I had inadvertently cut off the edges of the field, so this is now fixed. For some reason, though, I kept coming up with errors when I tried to run the code for the southern field.

Made Multi-plots!

Since the northern field worked, I proceeded with that, adding a for-loop and a multi-plot command to run numerous trials. I got a 3x3 set of RA-Dec plots, and a data structure saving these trials' data.

Smooth and Significance Histograms

Next on the list was to make the smoothed RA-Dec plots and the accompanying statistical-significance histograms. When I tried to run my code on the random data, though, I again ran into errors; I have isolated a problem or two, but not the solutions. I think the next thing I'll try will be to skip over the part of the code that's tripping up and see if I can get the rest of it to work, or if there are further issues.

To Do:

Southern random positions, multi-plot, and data structure
Smooth RA-Dec plots for both fields, and significance histograms
-finish debugging and get smoothed RA-Dec plots
-make accompanying cumulative histograms
Kolmogorov-Smirnov Test
-to compare trends of random trials and observed data, in addition to eye-balling it
Update Research Summary Document
Ditto to Alex's Nap Idea
-late-night showing of Harry Potter at the IMAX: movie and company were great beyond expectations, but caused a slight shortage of sleep.

Wednesday, July 15, 2009

Random-ness

Today:
Axes fixed

Contour RA-Dec plots are now beautiful.

Random position generator

I was given code to get me going. I set up a randomu and manipulated the arrays to have values within the RAs and Decs of the GOODS-N field, then spherematched these with actual data positions to accommodate the odd shape of the footprint. I plotted 500 of the random sample of stars (and oplotted the GOODS survey) to check that they overlapped properly. Oddly, the Decs' range was fine, but the outer edges of the RAs got chopped off. It wasn't obvious from looking at the code how this happened, so this will be addressed tomorrow.
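A sketch of the generator as I understand it (ra/dec stand for the observed GOODS-N positions; the 5-arcsec match length is just an illustrative choice):

    ; Uniform random positions spanning the field's RA/Dec ranges.
    n = 5000L
    rarand  = min(ra)  + randomu(seed, n)*(max(ra)  - min(ra))
    decrand = min(dec) + randomu(seed, n)*(max(dec) - min(dec))
    ; Keep only randoms that land near a real object, to trace the footprint.
    spherematch, rarand, decrand, ra, dec, 5./3600., m1, m2, d12
    plot, rarand[m1], decrand[m1], psym=3
    oplot, ra, dec, psym=3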

Tomorrow:
Random position cont...

-check out why edges got cut off
-make for southern field
-empirical comparison to observed stars' positions

Tuesday, July 14, 2009

Contoured Density Plots

Started off the day going over isochrone images with Beth

The Besancon model still seems puzzling, showing a lot of faint red stars. The Trilegal looks better, and the isochrone fitted at 20 kpc nearly follows the line of stars (great by my standards, since in my limited experience few things have lined up nearly this well, though Beth mentioned some discrepancies). We determined that the disk/halo star cut-off should be at a V-I of 1.5.
I haven't started working with the separated stars yet, as I got distracted today...

Distribution, smoothing, significance

Brainstorming ensues. Beth explained a little about how the smoothing works, which brought up some minor pixel issues. To get the edges and contrasts we wanted, I used SEARCH2D to make a pixel map of just 1's and 0's to separate out the outer area with no data. The significance ((image-mean)/stddev) was taken of the data, and plotted with the "nodata" area set to 0.0 on the grey-scale. I also made an accompanying histogram of n-sigma; it wasn't Gaussian, but it had the right tailing-off shape.
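Schematically, assuming 'img' is the smoothed density image and (x0, y0) is a pixel known to sit inside the footprint:

    ; Flood-fill from inside the data region to build the 1/0 pixel map.
    region = search2d(img, x0, y0, 1e-6, max(img))
    mask = byte(img*0)
    mask[region] = 1B
    good = where(mask eq 1B)
    sig  = img*0.
    sig[good] = (img[good] - mean(img[good]))/stddev(img[good])
    tvscl, sig                      ; "nodata" area stays at 0.0
    plothist, sig[good], bin=0.25   ; the n-sigma histogram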

Went to talk on BLAST-thought it was cool

Contour lines

The distribution density plot now has contour lines added to it. It took me pretty much all afternoon to figure out and work with the code from Beth to make them show up properly. In the end, with a little typo-spotting help from Gail, I got it to work. I even patted myself on the back a little for going back and making the contour lines shift from black to white, so that you can still distinguish them on both the lighter and darker areas. The only problems are the axes... even more so when I tried to make another plot of the northern field (I'd been working primarily with the southern one all day, and didn't want the poor guy to feel neglected).

Tomorrow:
(in no particular order)
Fixing axes

-Go through code again and see if I can't find the problem
-If not, talk to Beth about it
Implement color-cut
Random distribution generator

Monday, July 13, 2009

Isochrones

Finishing up from last week, I made gray-scale RA-Dec density plots, hess-style. Also considering going back to cutting out some of the faintest blue objects.

With some new direction on how to proceed in looking at the distributions of the GOODS stars, I set out to separate the disk vs halo stars into two groups. Using our brains for this, I started with an isochrone generated online, to check where it would fall on the Trilegal model. I had to convert it from absolute to apparent magnitude, using 20 kpc as the distance. I was pleased to find that the curve fell along the stars in my hess diagram.
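The conversion is just the distance modulus; at 20 kpc it amounts to adding about 16.5 magnitudes to the isochrone's absolute magnitudes ('abs_mag' here is a stand-in for the isochrone column):

    ; m = M + 5*log10(d/10pc); for d = 20 kpc the shift is 16.505 mag.
    d_pc = 20000.
    app_mag = abs_mag + 5.*alog10(d_pc/10.)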

Next is to use magnitude and color limits to pick out the groups of stars and check out their distributions for any kind of structure or randomness.

Friday, July 10, 2009

Enough of this stuff, It's Friday, I'm in love

Yep, it's Friday. And I like that song. ^_^

So, with my nice new star catalogs, as of this morning I have:
1) made RA-Dec plots
2) made CMDs
3) made a histogram of the distribution of apparent magnitudes
4) done the above for the Besancon Model catalog for comparison

The N and S catalogs have 1151 and 1175 stars in them, respectively. In the RA-Dec plots their distribution appears pretty homogeneous across the fields. About 35% of the stars are magnitude 26.5 or brighter, and these show overall the same kind of random distribution, but with some odd voids in places. (In the future, I may consider a random coordinates generator to test the probabilities of such distributions, to see whether these voids are significant.)
The magnitude distribution of the model has a similar shape, but with numbers increasing more evenly toward fainter stars, rather than the leap in numbers from brighter to fainter seen in the GOODS stars.

This afternoon I did a lot:
I started using the Trilegal model as another comparison for a star catalog.
I also learned how to make a Hess diagram, and with a little help from code from Alex and some of Dylan's work, I got it to work by the end of the day.

Further project direction Monday.

Wednesday, July 8, 2009

I'm Baaaaack!

I have returned from the British Isles and returned to work.

Since I had put together my research summary paper before leaving, it was easy to review and jump back in right where I left off. I spent the morning cleaning up a few things in my paper, and then spent most of the day polishing up and finalizing my flux-ratio cut-off. By group meeting after lunch I'd made a few plots, trying to be more precise with my parameters and sort through the data, systematically eliminating fainter objects that clouded over the color-color plot and getting down to the good stars. By the end of the day I made the final call: flux ratios up to 1.4 (best at magnitude 26.5 or brighter, but I am including up to 28).

That being done, I now have my stars catalog and am going to look at their distributions.

Monday, June 22, 2009

Latex, but not like the glove...

Been learning how to use LaTeX today to write up my work. Because I started with a template Beth made for Astro 333, it hasn't been hard to get going. I'm just plugging in my graphics and describing my process (for the sake of good record keeping, and so that I can jump back in easily, without losses, when I get back from vacation).

What was really exciting was that when I went back and looked at a color-color plot of my star catalogs (about 1000 stars in each field!) that I made last Friday, it showed the sequence brilliantly. I color-coded a few kinds of stars to look at: a "faint" vs "bright" parameter (25th mag cutoff), and now a flux-ratio cut-off, with anything less than 1.3 displayed as one group and 1.3-1.5 as another. The sequence was made up almost entirely of the <1.3 group, with only a few bright stars from the other group along the stream. I plan on going back to see how far past the cut-off those good-looking stars are, after I finish my LaTeX-ing. Which may mean after I get back from my two weeks away.

I'm going to finish up my LaTeX-ing tomorrow morning, and then I'm off to the U.K.
Cheerio then!

Thursday, June 18, 2009

Two down, plenty left to go...

About to finish up item 3 of the Plan.
The past few days I've just been going through and documenting my star-picking process, taking inventory of my really good graphics and adjusting them for updated parameters. My official classification cut-off for stars is now 0.75-1.0.

Today I have been working on the flux ratios. My hope is that two peaks will appear on my histogram and help me pick a parameter to further weed out a few galaxies (which would correspond to the anomalous lines, the ones that don't fit a star's profile, on my aperture magnitude vs aperture radius plots). When I went through, I was careful with the math in my code, making sure I had done my distance modulus algebra correctly and all, and made some good histograms.
THEN
somewhere down the line I realized I'd in fact still managed to input my magnitudes backwards. I had the equations set up correctly, but instead of solving for flux(ap11)/flux(ap8) I had gotten flux(ap8)/flux(ap11). Not a huge deal, so I just flipped 'em around to see where that got me.
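For the record, the ratio I'm after falls straight out of the magnitude difference (tag names are placeholders):

    ; Since m8 - m11 = -2.5*log10( f8/f11 ):
    ratio = 10.^((mag_ap8 - mag_ap11)/2.5)   ; = flux(ap11)/flux(ap8)
    plothist, ratio, bin=0.05
    ; Stars, whose light has leveled off by ap8, should pile up near 1;
    ; galaxies still gaining light at larger radii should sit above it.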

What's left is to really look at the peaks (i.e., I've been looking at them, but I'm unsure what to do with what I see, and need to consult Beth in the morning, to compare "star" vs "galaxy" peaks, etc.), then pick a parameter, like "flux ratios greater than 1 are in fact galaxies", and discard those from my star catalog.
Then: vacation, and then moving on with the Plan.

Monday, June 15, 2009

The Best Plans of Mice and Men...

...often go awry. ~Robert Burns

But here's to hoping this one doesn't!

The Plan (for the second half of the summer)
  1. Detail process of star isolating technique
  2. Make final call on classification cut-off
  3. Separating further with aperture magnitudes:
    Flux ratios and plots
  4. Look at the spatial distribution and cmds of the stars
  5. Pursue analysis or proceed with this work on a new data set

Monday was very productive, in addition to talking the Plan over with Beth. I sat down with her first thing in the morning and got through the two bugs left in my error code. Two rather simple things, so I got started full-bore making lovely error plots for the rest of the day. I made magnitude-error and color-error plots with the median lines, in both N and S, for all objects and for just the "star" objects (as determined by class ge 0.7). I even made four corresponding nifty little plots showing just the median lines, in different colors, to compare them on a single set of axes. In the process I learned how to use "legend" to add a key to my graph. Beautiful.

Next: Get started on that "best catalogue of stars."

Friday, June 12, 2009

To Err is Human

The day was filled with Errors, of the human and astronomical varieties.

Yesterday:

I went through my multi-plots some more, this time with a magnitude=24 cutoff. By looking at these brighter objects, we gain two major things:
-less uncertainty in measured magnitudes
-probably a more accurate classification (star vs gal)

We also then make the assumption that "stars are stars are stars", i.e., that the dimmer stars are fundamentally and structurally of the same nature as the bright ones. Thus, by investigating the patterns of star sequences in brighter samples, we can glean info about the dimmer ones (meaning farther away or colder, in some cases). And, as Beth pointed out (and I surmised as well), this assumption would be ill-advised in the realm of galaxies: the dimmer ones (farther away or not), we know, don't all have the same structures as those closer to us. This is due to the highly varied morphological properties of galaxies, their higher redshifts, and the phenomenon of "looking back in time" at younger galaxies, as opposed to our closer, more evolved galactic structures.

Once I implemented the magnitude cutoff, I made my multi-plots, showing different layers of brighter objects and color-coded stars vs galaxies. **I did successfully show the separation of the brightest stars in a pretty sequence. I have yet to attempt this with an i-z color-color plot.**

And by the way, this was all on the larger set of processors, squid! Which is actually why this took me a little longer than it should have: I was working in a less familiar environment. But I'm pretty acclimated to it now, even though I did prefer eel's IDL development environment. The switch became immediately necessary because poor little eel and his one processor could no longer handle the loads I was giving him with the multiple-plot procedures.

Today:

Began the morning by making a few quick multi-plots (in the same manner as yesterday) for the southern field, as I had worked only with the northern one prior.

Then came the Reign of Error! ;P
I had started a code yesterday evening, setting up to make my plots of magnitude and color uncertainties, in the style I found as I read Dylan's research. This plotting got off to a hazardous start, and only got worse.

First tackled was a simple plot of B measurement error vs. B magnitude. This looked pretty much as expected, once axis parameters were implemented. So I moved on to a B-V error vs. B magnitude plot, and got a not-so-nice surprise: numerous points appeared below the curve of the estimated minimum errors, in streaks towards the x-axis. Beth and I puzzled over these odd errors for a while, doing several sanity-checks on my data and making sure my code was indeed debugged, and it seemed to me that the only way they could have appeared was if the B error was somehow smaller than it should be. Then it dawned on me that some of the B errors were showing up as 0's!

Investigating a small portion of these points, I found that most of them were classified as galaxies (like 90% between 0-0.04). I also went on to examine them in the largest aperture, to see how the measurements and errors compared. Using my new favorite IDL toy, the multi-plot, I set up a comparison between the two uncertainty plots in each aperture, but I've been able to determine little from them, other than that this odd phenomenon occurs in the same manner in both.

Since I needed further direction for the afternoon, before Beth left she got me thinking about and working on taking a median line for the plots. Using mostly the online tutorial from Astro333, and a little from an IDL book, I got the code written out to my satisfaction and ran it with high hopes. These were soon torn down, because the code couldn't even get as far as the for-loop. De-bugging proceeded for hours (yes, multiple hours *exhaustion*), as, bit by bit, each line was altered. And sanity-checked. And thoroughly examined. Repeatedly (thanks to Gail for helping me through this frustrating and grueling process). One error message at a time was sifted through.

Finally, we got to the bottom of the problem, and I solved the for-loop's issues by adding some limits on the range of the data set. This stemmed from the realization that some of my data are funky: very dim, carrying crazy filler readings of 99's, or simply holes where there are no data points at some magnitudes on the brighter end of the spectrum.

Then, luckily, without another error, I got it to give me the median values I sought. And then, to my dismay, when I tried to oplot it, it failed, claiming that it wasn't an array. Which I have yet to figure out, because I told it that it was an array, and I can't figure out why it doesn't know that... But at this point I'm going to need a fresh pair of eyes, because I've been working on this code too long for anything wrong to pop out at me.
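For posterity, the shape of the median-line code as intended (all names are placeholders; the range limits are the fix that finally let the for-loop run):

    nbin    = 20
    edges   = 18. + findgen(nbin+1)*(28. - 18.)/nbin   ; magnitude bin edges
    mids    = (edges[0:nbin-1] + edges[1:nbin])/2.
    medline = fltarr(nbin)
    for i = 0, nbin-1 do begin
       inbin = where(bmag ge edges[i] and bmag lt edges[i+1] $
                     and berr gt 0. and berr lt 10., nin)   ; dodge the 0's and 99's
       if nin gt 0 then medline[i] = median(berr[inbin])
    endfor
    oplot, mids, medline, thick=2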

So, for Monday:

-Finish debugging this code and get a pretty median uncertainty line.
-Meet with Beth about my work and set official goals for where I'm going with my project for the rest of the summer.

Also on my to-do list:
-RANDOMU!!
-Error plots for other bands and colors
-more reading

Wednesday, June 10, 2009

Colorful Couple of Days

Yesterday:

Started out making some color-color plots, beginning with B-V vs V-i. When I got that successfully coded, I saw that the several thousand black dots were very hard to interpret.
Plan: (a) make it plot different symbols for "stars" and "galaxies"; and/or (b) use randomu to generate a random selection of objects to narrow the sample to plot.
Option (a) seemed easier, so I started researching through one of the IDL books and browsing online documentation. Fairly easily, I adapted my code to plot and oplot the two groups of objects with different psym numbers, and looked at some CMDs and color-color plots. These were unfortunately still very hard to read, what with the plethora of points. Soon thereafter, Beth suggested using different-colored dots for the two sets, and I spent the rest of the afternoon working out my new code to include loadct and make the stars appear red in my new set of plots.

These new black-and-red plots were successful in the CMDs and B-V vs V-i plots, in both fields, but I have yet to get my code working for a B-V vs i-z plot.

Today:

Beth walked us through the code to add so that the 'x' plot display device doesn't let plots disappear when they're overlapped by other windows. Hurray!
She also suggested (in addition to the randomu reminder) using the !p.multi mechanism to display multiple plots side by side. I used this to make a 2x2 display of my B-V vs V-i plots, with the dual-color, solid-black, stars-only, and galaxies-only plots.
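Put together, the 2x2 went something like this (bv/vi and the stars/gals index arrays are assumed to be already defined; the color-table choice is illustrative):

    device, decomposed=0              ; let loadct color indices apply
    loadct, 39                        ; a table where index ~250 is red
    !p.multi = [0, 2, 2]
    plot, bv, vi, psym=3, title='stars in red'
    oplot, bv[stars], vi[stars], psym=3, color=250
    plot, bv, vi, psym=3, title='all objects'
    plot, bv[stars], vi[stars], psym=3, title='stars only'
    plot, bv[gals], vi[gals], psym=3, title='galaxies only'
    !p.multi = 0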

Looking at these, there is a kind of stream pattern that could be the stars, it's just clouded over by "galaxies", so another thing I want to do is try to isolate those and highlight the pretty stars' path in red. (When Beth sent me Dylan's work on another project from the Spring semester, I looked at some of his color-color diagrams, and though they were in different bands, they had the same kinds of shapes, with this red line and a black cloud of other objects. My goal will be eventually to get plots as nicely set up as some of the ones in his paper.)

After our group meeting, because of a comment by Gail about how close to the "star cutoff" some of my anomalous objects were, I went back to my code and colored some of the dots on my CMD to distinguish the really starry stars (this time, 0.85-1.0 classification) from the less star-like ones (0.7-0.85). Sadly, this didn't shed much light on the issue, as the two groups were still dispersed among each other, not indicative of any particular behavior that either possessed uniquely.
Beth then suggested cutting out some of the faintest objects, as the ones most likely to be misclassified. So, I started making new plots with a magnitude cut-off of 23, which resulted in comparatively very few stars. I think tomorrow I will try bumping it back up to 24 or 25 to see where that leads me.

Goal for Friday:
Work out how to isolate the line of stars OR get the B-V vs i-z color-color diagram code to work.


Other things To Do:
-Randomu to narrow sample size
-Think about some ways to use these plots and info to actually separate out the misclassified galaxies (and reconcile the discrepancy between the Besancon model star population prediction and the number I am currently working with)
-Read (for fun!) the Dark Matter Substructure paper forwarded by Beth, which I do find interesting.
-Work on learning how to use the IDL command line outside of the Development Environment so that I can work on squid instead of eel, because eel seems to be slowing down and choking up a bit more every day...
-Figure out a better way to try to hang the clock. ;P

Tuesday, June 9, 2009

Besancon Model

Yesterday and this morning:
I looked up the RA and Dec of the GOODS fields and the HUDF (the HUDF was taken within the GOODS southern field, and therefore should have a relatively similar stellar density). Then I learned how to use the IDL procedure GLACTC to convert to galactic coordinates (a sketch of the call follows the numbers below).

N: RA = 189.2282
Dec = 62.2355
Gl = 125.86662
Gb = 54.810068

S: RA = 53.122923
Dec = -27.79965
Gl = 223.55983
Gb = -54.430659
(all in degrees)
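The GLACTC call itself, shown for the northern field (j=1 converts RA/Dec to gl/gb; /degree takes the RA in degrees rather than hours):

    glactc, 189.2282, 62.2355, 2000., gl, gb, 1, /degree
    print, gl, gb    ; roughly 125.87, 54.81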

These were then used to generate a simulation with the Besancon model: Model of stellar population synthesis of the Galaxy: Catalogue simulation without kinematics, Johnson-Cousins photometric system.

I left in the default parameters, except expanding the distance range to 250kpc, and changing the V-band range to 10-28, then input the coordinates.
It generated a catalog with 16308 stars in the north, and one with 16102 in the south. This averages to 16205 stars per degree^2.
The GOODS field is 0.0889 deg^2, giving an estimate of 1440 expected stars.
In comparison, HUDF is 0.003055 deg^2, with an estimate of 50 stars (close to the results from the HUDF paper).

In my preliminary sorting, by just using 0.7-1 as the set of "stars", I get 10638 stars in the north, and 10168 in the south. Some of these we suppose are misclassified galaxies, and some of my next steps are to see if I can't separate those out.

To Do:
-- color-color plots, b-v vs v-i and b-v vs i-z, with different symbols for stars and galaxies (meaning I'll be looking through code literature for a bit to learn that)
-- code something to see how many stars' magnitudes are still changing or have leveled off (to try to separate out the anomalies in the cmds)

Friday, June 5, 2009

For-loops!

For-loops have now replaced Fruit Loops in the top 1000 topics on my mind. ;P
Context: I was very happy I got my first for-loop to work today!

I was working on the optimal-aperture project, plotting magnitude vs aperture size for a small sampling of objects. It took me a little while to work out how to code it, and with Beth's help when I had about half a step left, I got some pretty good plots! They show the "light distribution" of the stars (reminiscent of the HUDF paper, we later realized): first a rapid increase through the first few apertures, then a leveling-off around 20 to 30 pixels. So I chose aperture 8 (of the 11), with a radius of 33.33 pixels.
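The loop, in spirit (assuming 'mags' is an [11, nobj] array of aperture magnitudes and 'radii' holds the 11 aperture radii in pixels):

    !p.multi = [0, 3, 3]   ; one object per panel
    for i = 0, 8 do begin
       plot, radii, mags[*, i], xtitle='aperture radius (pix)', ytitle='mag', $
             yrange=[max(mags[*, i]), min(mags[*, i])]   ; brighter is up
    endfor
    !p.multi = 0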

What was interesting were the objects that didn't level off so quickly, but grew progressively throughout... suggesting perhaps that they are misclassified galaxies. This may be interesting to pursue as a further way to analyze and categorize star vs galaxy.

Using aperture 8, I then went through and made quick work of producing cmds of the North and South fields, for all objects, as well as "galaxies" and "stars". Examining these cmds alongside the aperture 11 ones revealed something interesting: the mysterious cloud of slightly brighter points on ap 11 cmds appears to have migrated up from a slight overdensity at a dimmer magnitude on the ap 8 cmds.

I think this follows from my hypothesis about the galaxies continuing to brighten, after the stars have settled. The dots that remain consistent in the cmds between apertures are likely to correspond to those objects whose magnitude we saw level off on the aperture plot. The objects whose light continued to grow would have brighter magnitudes at larger apertures, and would thus appear to climb up the color-magnitude diagram, as did this cloud of points. Again, this could be further pursued as a method to classify and check whether objects are indeed stars or galaxies.

Continuing To-Do List for next week:

1) Look up RA and Dec (and galactic coordinates) of the two surveys and compare star/galaxy density predictions
2) Look up the Besancon galactic model and compare star/galaxy density predictions
3) Next week- look into using color information to distinguish mislabeled galaxies from true stars

Thursday, June 4, 2009

Troublesome CMDs

Well, yesterday was not nearly as productive as it should have been.
I did a lot of reading, trying to figure out some coding things. I started making my color-magnitude diagrams (using the magnitude measured with the largest aperture, since I haven't yet determined which aperture radius is optimal). I hit a lot of snags, and my plots came out looking very odd.

Today:

Spent the morning again failing to debug CMD code. Beth helped me get through it this afternoon, and I now have a lovely diagram, with corresponding "star-only" and "galaxies-only" CMDs, for comparison.
I also went back to see if the number of stars I've selected (semi-arbitrarily, by taking the class cutoff to be 0.7) actually matched the prediction based on the HUDF numbers. From my work and histogram today, it appears to be much larger (on the order of 10,000 rather than 500 per field). I have yet to determine how to account for this discrepancy, aside from claiming that the morphological parameters called some galaxies stars.

Tomorrow:

0) For-loops and figuring out optimal aperture for magnitudes
1) Look up RA and Dec (and galactic coordinates) of the two surveys and compare star/galaxy density predictions
2) Look up the Besancon galactic model and compare star/galaxy density predictions
3) Next week- look into using color information to distinguish mislabeled galaxies from true stars (some "astronomy kung fu" may have to happen here...). ;)

Tuesday, June 2, 2009

Histograms and Star-Count Estimate

Yesterday:

To examine the distribution of objects as galaxies or stars more clearly, Beth suggested plotting a histogram. This led to a lot of IDL instruction reading, and a moderate amount of her help. But by the end of the day I had a good plot of the distribution of objects, clearly (thank goodness) showing tall spikes at the galaxy end of the spectrum, and a moderate spike at the star end.
Next I set out to make a histogram of the range of FWHM among the stars. I did so by learning how to use a "where statement" (Beth's favorite!) to pick out just those objects in the sample whose classifications had been assigned a value between 0.7 and 1.0. Debugging was still needed.
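The where statement in question, roughly (structure tag names are my stand-ins; plothist is from the IDL Astronomy Library):

    stars = where(cat.class ge 0.7 and cat.class le 1.0, nstars)
    print, nstars, ' objects pass the star cut'
    plothist, cat[stars].fwhm, bin=0.5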

Today:

Finished up (learning how to) making the FWHM histogram.
Also read through the paper Stars in the Hubble Ultra Deep Field to learn how they separated the galaxies from the stars in their catalog of approx. 10,000 objects. Though they used some tricky super-mathematical techniques, it appears to have boiled down to the objects' light distributions, and then a little magnitude profiling. This gave them 46 unresolved objects (non-galaxies, at least until further narrowing down). Of that set, they used what spectroscopy they could, and eliminated anything dimmer than an i magnitude of 27, to arrive at 26 stars.

Given the number of objects, we had a bit of a scare, anticipating that, if proportional, our catalogs would then yield a mere 100 or so stars. But never fear! Arithmetic is here!
The area of the HUDF is 11 sq. arcmin, meaning there were 2.364 stars per sq. arcmin. Our GOODS survey data (N and S fields combined) cover 320 sq. arcmin, resulting in 756.48 stars.

I'm glad- an estimated 750 stars is probably a lot more useful. Or maybe not? Guess we'll have to see how the numbers boil down.

For Tomorrow:

I began writing code to make some color-magnitude diagrams today, so I'll hopefully finish them up by our group meeting in the afternoon.

Monday, June 1, 2009

Plots Galore!

So, much has been accomplished since last Friday!

I made a couple of data structures (with instruction from Beth) to combine the catalogs of the different bands. So now, instead of reading in a bunch of files to make plots, etc, I just have to access either the North or South catalog I made. Sweet.
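Schematically, the structures were built something like this (the tags shown are illustrative, not the full 104 columns):

    ; One row per object; replicate it, then fill columns from each band's catalog.
    row   = {ra: 0.d, dec: 0.d, bmag: 0., vmag: 0., imag: 0., zmag: 0., class: 0.}
    north = replicate(row, n_elements(ra))
    north.ra   = ra    & north.dec  = dec    ; positions (same in all four bands)
    north.bmag = bmag  & north.vmag = vmag   ; band-specific columns, and so on
    save, north, filename='goods_north.sav'  ; for easy re-use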

Next, continued to go through literature searching for the answer to the star-galaxy classification mystery. Finally I found a "SExtractor for Dummies" guide, and after a bit of searching found a section that said that the continuous scale rated objects from 0 (galaxy) to 1 (star).

Starting out:
Goals for the day:
Re-plot the ra-dec from the new catalog.
Plot fwhm against the classification.
Use this to approximate a cut off as to what we'll call a galaxy or star.
Also look at the range of fwhm and use this to get an idea of what aperture size to use for the magnitude data.

As of lunch time:
All plots in the goals section have been made! RA vs Dec in both north and south, as well as all 8 fwhm-class plots in each band, N and S.
Proceeding to check out galaxy cut-off values as well as fluxes.

Thursday, May 28, 2009

Initial Progress

Well, the search for info has given some results. It just took a while to track down the requisite papers.
In response to comments: GOODS covers an area of 10'x16' in each of its two fields. I think that comes out to about 0.0002% of the total sky area...

The magnitude limits on the bands were given to be:
B,V= 28.1+/- 0.3
i = 27.4 +/- 0.3
z = 26.95 +/- 0.35

Wednesday I made my first official plot! RA vs Dec of all of the objects in the survey. It wasn't so great at first; I was having a little trouble with the coding. After our group meeting, Beth took a look and helped me out. It turns out I hadn't told it not to connect all of the points, so the funny look was a ton of lines crisscrossing the plot. Once this was corrected, it looked like it should! It then took me longer than it should have to successfully save the image. The next observation was just the sheer multitude of data: all of the survey objects were included.
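The fix was a one-keyword thing; psym makes plot draw symbols (psym=3 is a dot) instead of connecting the points with lines:

    plot, ra, dec, psym=3, xtitle='RA (deg)', ytitle='Dec (deg)'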

Since I'm going to be looking at stars, the next step is to sift through and set aside all of the galaxy objects. I had hoped this wouldn't be too complicated, as there was a conveniently labeled column in the catalog that read "star-galaxy classification parameter". No such luck. The column appears to be a measure on a continuous scale from 0 to 1, and I have been unsuccessful in finding any documentation explaining it. I have decided to e-mail the "help" address given on the GOODS website, as I did for information about the mysterious 10 unlabeled columns (the response to that one: they had been used by the survey team to check for photometric consistency and overlap in their tiles, and could be ignored for my purposes).

Another idea proposed at our group meeting, which I am looking into, is to use a random number generator to pick a sample population to plot. Lastly, it was proposed that I combine the catalogs of the four bands. Since I had not previously considered this, I didn't know how difficult it would be, or whether the objects listed were consistent or in order. Sphere matching could be used to help with this, picking out matching objects between the catalogs.

Upon examination, I found that no such worries were needed: the objects in each catalog matched one another. In fact, the first 16 columns of data are all positional information (RA, Dec, assigned section numbers, x and y coordinates describing positions on the tiles, etc.), and these were unvarying between all four bands, as they should be! So I don't think much work will be required to match up rows, and maybe there is a relatively easy way to grab the non-repeated columns from three of the catalogs and add them to the first. --> probably a good project for today.

The whole of Thursday was spent researching and reading through more papers on the GOODS and its data.

Goal for Friday: Make a plan on how to combine catalogs and make some headway doing so. Also waiting to hear back from "Help" about classification parameter interpretation in order to move on that.

Tuesday, May 26, 2009

Early Stages

The first few days working at the computer have felt rather unproductive, but there has been progress. I have mainly been reading articles and familiarizing myself with my computer system, and beginning to learn IDL.

Early research:

I am working with data from the Great Observatories Origins Deep Survey (GOODS), from the Hubble Space Telescope's Advanced Camera for Surveys. Two fields were imaged with four broad, non-overlapping filters: F435W (B), F606W (V), F775W (i), and F850W (z). Exposure times were 3, 2.5, 2.5, and 5 orbits, respectively. It is not a very deep survey, but it is much larger than previous programs (the data were taken in 2002-2003). Its primary goal at the time was to gather information about small, faint galaxies at high redshift, and hopefully lead to information about the formation of our own galaxy. I will be using these data to look for small, faint, blue sources, to see if they fit the profile of Blue Horizontal Branch stars.

The data set:

Data releases were posted online by the survey team, where I accessed the latest set, version 2.0 (which had some updates and improvements over the previous version). The data were downloaded from the catalog as a set of 8 text files (4 from the northern field and 4 from the southern). These data tables hold 104 columns of position coordinates, fluxes, magnitudes, etc., so the next step was determining just which data would be useful. I will primarily be using the RA and Dec information, along with apparent magnitudes, angular sizes, and the full width at half maximum (FWHM).

Next: The long and somewhat tedious foray into the realm of IDL.

I spent a couple of days going through tutorials and acclimating myself to some of the language. Ideas I now have some grasp of (and could follow instructions to make examples of): Procedures, Functions, Objects.

To summarize a bit: Procedures are like to-do lists. Functions are like procedures, but give an end result (like a mathematical function, duh). Both of these are kinds of Methods.
Objects are like Methods with data embedded in them. There are a number of kinds of objects, which are grouped by Class. The main pieces of a typical object could include an init function, a cleanup procedure, a display procedure, and a defining procedure. Working through the example tutorial object, the tutorial warned that deleting the name that refers to an object does not delete the object itself; that has to be destroyed as well (the reference is an entity separate from its referent, and one will leave the other behind if destroyed alone).

Near-future goal:
Read-in file to make position plots from RA, dec data.

Thursday, May 21, 2009

Intro: Week 1 on the job

I have begun doing research in astrophysics this summer at Haverford with Professor Beth Willman. In just a few days, I've already been swept up in articles, the Sloan Digital Sky Survey website, and data from the Hubble Space Telescope, and have jumped into a new computer world of unknown operating systems and foreign programming languages. This is certainly going to be an adventure.

Exploring to infinity, and beyond! ...from a computer lab. ;)