
The Final Product!

After months of work and learning, we have finally reached the results we were looking for. The final product of this project, alongside a scientific paper, is this graph.

This graph demonstrates the results of the project. Below, I will explain how we got to this result.

Firstly, we have to prepare the data describing the emission of the star in this system, which has been measured with a telescope. This data is given to us in units of erg/s/cm2/A: erg (which is 10^-7 joules) per second per square centimeter per Angstrom (a unit of length). In other words, the flux is given per unit of wavelength, or in the wavelength realm as I like to call it; however, for our calculation we need it per unit of frequency. The following code converts the data that we received from the telescope into the units we need for the calculation.
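The original conversion code was shown as an image and is not reproduced here, but the conversion itself follows from F_nu = F_lambda * lambda^2 / c. A minimal sketch (the function name and array layout are my own, not the project's actual code):

```python
import numpy as np

# Speed of light in Angstroms per second
C_ANGSTROM_PER_S = 2.998e18

def flambda_to_fnu(wavelength_angstrom, flux_lambda):
    """Convert flux density from erg/s/cm^2/A (per unit wavelength)
    to erg/s/cm^2/Hz (per unit frequency): F_nu = F_lambda * lambda^2 / c."""
    wavelength = np.asarray(wavelength_angstrom, dtype=float)
    flux_nu = np.asarray(flux_lambda, dtype=float) * wavelength**2 / C_ANGSTROM_PER_S
    frequency_hz = C_ANGSTROM_PER_S / wavelength
    return frequency_hz, flux_nu
```

The lambda^2/c factor comes from requiring that the same energy is carried in a wavelength interval and in the corresponding frequency interval.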

This data is then written to a file to use for the next code.
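Writing the converted spectrum out can be as simple as a two-column text file; a sketch using NumPy (the filename and column layout are hypothetical):

```python
import numpy as np

# Illustrative values standing in for the converted spectrum
frequency_hz = np.array([5.996e14, 2.998e14])
flux_nu = np.array([8.34e-25, 1.20e-24])

# Two-column text file: frequency [Hz], flux density [erg/s/cm^2/Hz]
np.savetxt("stellar_spectrum_fnu.txt",
           np.column_stack([frequency_hz, flux_nu]),
           header="frequency_Hz  flux_erg_s_cm2_Hz")
```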

The following code is used to compare the "black body" data of the star to the "real" data. The black body is an idealized model of the star's thermal emission at a given temperature, while the real data is the spectrum that was actually measured; both describe the output of the star. I compare these two data types using the code below.
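The project's comparison code is not reproduced here, but the black body side of it comes down to evaluating the Planck function. A generic sketch in cgs units (the function name is my own):

```python
import numpy as np

# Physical constants (cgs units)
H = 6.626e-27    # Planck constant [erg s]
C = 2.998e10     # speed of light [cm/s]
K_B = 1.381e-16  # Boltzmann constant [erg/K]

def planck_fnu(frequency_hz, temperature_k):
    """Blackbody intensity B_nu(T) = 2 h nu^3 / c^2 / (exp(h nu / k T) - 1),
    in erg/s/cm^2/Hz/sr."""
    nu = np.asarray(frequency_hz, dtype=float)
    x = H * nu / (K_B * temperature_k)
    return 2.0 * H * nu**3 / C**2 / np.expm1(x)
```

To overplot this on the measured spectrum, the curve still has to be scaled by the solid angle the star subtends on the sky.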

This code produces the graph below.

The blue line is the real stellar spectrum and the red line is the black body. As you can see, the two spectra agree, meaning that the blue line data is good to use.

After this, I created input files for the code that Dr. Ertel had written, which computes the best fit for a selected group of parameters by exploring a 3-D probability space and tracing a path through it that fits the data increasingly well. One of these input files samples a range of minimum grain sizes and the other a range of exponents.
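Dr. Ertel's code itself is not shown here, but the general idea it describes, a random walk through parameter space that preferentially accepts better-fitting steps, can be sketched for a toy one-parameter problem. Everything below (the data, the model, the step sizes) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: the "truth" is slope = 2.0, with Gaussian noise of 0.1
x = np.linspace(0, 1, 50)
y = 2.0 * x + rng.normal(0, 0.1, x.size)

def chi2(slope):
    """Goodness of fit of the one-parameter model y = slope * x."""
    return np.sum((y - slope * x) ** 2 / 0.1**2)

# Metropolis-style random walk: propose a small step, always accept it
# if it fits better, and occasionally accept it even if it fits worse.
slope = 0.0
samples = []
for _ in range(5000):
    proposal = slope + rng.normal(0, 0.05)
    if rng.random() < np.exp(0.5 * (chi2(slope) - chi2(proposal))):
        slope = proposal
    samples.append(slope)

best_slope = np.mean(samples[1000:])  # average after discarding burn-in
```

The spread of the accepted samples also gives an estimate of the uncertainty on each fitted parameter, which is why this kind of probability-space exploration is preferred over a simple grid search.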

This file is passed to a code named "sand," which outputs the best-fitting values for the selected parameters. The output for those parameters is below.

As you can see, and as I will explain, the two outputs are similar but slightly different. We refer to the top result as the large grain model because its minimum dust grain size is 60 microns, while the second is called the small grain model because its minimum grain size is 1.009 microns. Another paper described the disk as having a minimum grain size of 60 microns; the models that we create will either prove or disprove this idea, and the goal in the end is to disprove the large grain model.

These results are then given to another code which can create a graph of the emission of the disk model as well as an image. The code for the image is shown below.

This code creates an image.

Here is an example of the image this creates.

This is an image of the debris disk for the small grain model. This image is then convolved with the Gaussian Distribution that I had created earlier in the project.

This convolution is created using the code below and creates the image below.

Then the peaks of the convolved image and the Gaussian are compared using the code below. The convolution of these two images gives the disk image as it would be seen through a telescope, and the ratio between their peaks will be used later to scale the upper limit for the emission of the debris disk.
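The convolution and peak comparison steps can be sketched as follows. The disk image here is a toy ring, not the actual model image, and the kernel size and width are made-up numbers; the real code may differ:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, sigma):
    """Square 2-D Gaussian kernel, normalized so it sums to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

# Toy disk image: a thin ring of emission standing in for the model image
n = 101
ax = np.arange(n) - n // 2
xx, yy = np.meshgrid(ax, ax)
r = np.hypot(xx, yy)
disk = np.where(np.abs(r - 20) < 1.5, 1.0, 0.0)

psf = gaussian_psf(31, sigma=4.0)

# Convolving the model with the Gaussian simulates how the telescope
# would see the disk: the beam smears the emission out.
observed = fftconvolve(disk, psf, mode="same")

# Ratio of peak values, used to scale the upper limit on the disk emission
peak_ratio = observed.max() / disk.max()
```

Because the kernel is normalized, the smearing can only lower the peak, so the ratio is between 0 and 1.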

Finally, all of this information is then used to create the final spectral energy distribution that compares the large grains to the small grains.

If you look at the 3 white triangles at about 10^3 microns, these are the upper limits of the data. The top triangle is the upper limit for the dashed line, and the second highest white triangle is the upper limit for the dotted line. The dashed line represents the small grain model, and the dotted line is for the large grain model. Both of the upper limits are scaled by the ratio between the Gaussian and the convolved image. This figure demonstrates that the large grain model is above the upper limit, meaning that the large grain model does not work for this debris disk. This is what Dr. Ertel was planning to prove with this project.

