ARLO STScI ACDSD MAST CASB Documentation
Here's a set of notes written by Jorge García and Mike Potter, to get you through a night's observations using the arlo package. This is very specific to running the version at STScI, maintained by Jorge García.
GETTING STARTED - SETTING UP THE ENVIRONMENT
GET THE DATA FROM TAPE TO DISK
BEGIN arlo PROCESSING: RUNNING fbd
CREATE A LIST OF OBJECT FIELDS AND GET THE GSC ASTROMETRIC REFERENCE
RUNNING pad
RUN app
MAKE LISTS FOR ics
ABOUT THE FIELD NAMES
BREAK THE LIST INTO TWO PARTS - A STANDARDS LIST AND A TARGET LIST
TRIM THE DAOPHOT DATA FILES
RUN ics
RUN MKNOBSFILE
RUN ics AGAIN
BUILD THE CONFIG.DAT FILE
RUNNING fitparam
BREAK UP THE TARGET LIST - CHECK THE DSS ARCHIVE - PREPARE COLOR.INDEX FILE
RUNNING res
RUN arlo2dira
ARCHIVE YOUR WORK TO TAPE: TAR
CREATE HISTOGRAM PLOTS
PUTTING TOGETHER THE FINAL REPORT
GETTING STARTED - SETTING UP THE ENVIRONMENT
Each night's observations should be reduced in a separate directory. Typically you will need at least one gigabyte of disk space for each night.
1) create a directory named "main", and go there.
2) copy /home/jgarcia/arlo/iraf_arlo to the main directory.
3) check the display environment definition.
4) source iraf_arlo.
This will create all the necessary subdirectories and set up the iraf environment for arlo. It also pops up the xgterm and ximtool windows. If you log out of an arlo session and need to return later to do more work on a given night's data, you only need to "source iraf_arlo" (or, if you prefer to work in 24-bit display mode, "source iraf_arlo_ds9") - the existing subdirectories will not be altered. After step #4, above, you will be in iraf, and will see the iraf system prompt. Load the arlo package:
cl> arlo
GET THE DATA FROM TAPE TO DISK
Copy your data from tape to disk. Here we encounter the first bunch of kludgy bits. Most of the data we will be reducing here has already been flat-fielded and debiased. arlo expects to do this stuff itself. Also, the data must be in fits format, but with a .fts extension. Finally, when we're done, we want all the files to have names that conform to the naming convention described below. Here's how this can all be accomplished:
a) Read the reduced image data tape (usually written as a fits tape) using the iraf task t2d. Use "n" as the prefix - all the files will be written to disk as "n001", "n002", "n003", and so on.
ar> t2d mtl n 1-999
b) Change the file extensions to ".fts"
ar> rename "n*" "fts" field="extn"
c) Since we are reducing data which have already been flat-fielded and debiased we need to fill some values into _buffer which would normally be filled by arlo while running the early steps of the fbd task. First, create a listfile called "temp.lis" containing the names of all of the image files:
ar> files *.fts > temp.lis
Then run the following:
ar> cl < arlodir$tfh_archive/kpno97ccdt2ka.9m.cl
(Obviously, this is for the KPNO data - similar scripts exist for data gathered elsewhere.)
Now, go into _buffer and fill in the data set name. To do so, just epar _buffer:
ar> epar _buffer
On the first line enter the data set name, using the following naming convention:
aaaarbbncc
Where:
aaaa = Observatory code = "KPNO" or "CTIO", etc.
r = Just the letter "R"
bb = Run number - 01 to 99
n = Just the letter "N"
cc = Night number, 01 - 99
As an example, the second night of the tenth run at Kitt Peak would be "KPNOR10N02". Please use CAPITAL letters!!
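The construction above is mechanical enough to script; here is a minimal Python sketch (the helper name `dataset_name` is made up for illustration and is not part of arlo):

```python
# Hypothetical helper illustrating the aaaarbbncc data-set name
# convention described above.
def dataset_name(observatory: str, run: int, night: int) -> str:
    """e.g. dataset_name("KPNO", 10, 2) -> "KPNOR10N02"."""
    if not (1 <= run <= 99 and 1 <= night <= 99):
        raise ValueError("run and night must be in 01-99")
    # Zero-pad run and night to two digits; force capital letters.
    return f"{observatory.upper()}R{run:02d}N{night:02d}"
```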
d) Use the iraf task hselect to create a list file containing all of the original iraf file names. Typically the filenames are not exactly in the format they need to be in, so after executing the command shown below you may have to edit the "file_out" file to conform to the naming convention outlined below.
ar> hselect "*.fts" fields="IRAFNAME" expr="yes" > file_out
e) If the filter names are screwed up, or if FILTER was not the filter keyword in the headers, now is the time to use hedit and hselect to update the headers so that each contains the keyword "FILTER" whose value is the actual filter name, like U, B, V, R, or I. As an example, suppose in the image headers the "V" and "R" filters are designated as "2" and "4", that the keyword is FBOLTPOS, and there is no FILTER keyword. Begin by using the hselect task to create lists of images for each filter:
ar> hselect *.fts $I "FBOLTPOS = 2" > v_list
ar> hselect *.fts $I "FBOLTPOS = 4" > r_list
NOTE >>>> If you get a message saying:
"ERROR: operands have incompatible types"
that usually means the keyword you're searching on (FBOLTPOS in the case above) is expressed as a string in the header (its value will be enclosed in either single or double quotes if it is a string). In that case you must enclose the value in quotes, as shown below:
ar> hselect *.fts $I "FBOLTPOS = '4'" > r_list
Then use hedit to add the FILTER keyword with the appropriate value:
ar> hedit @v_list FILTER 'V' add=yes del=no
ar> hedit @r_list FILTER 'R' add=yes del=no
f) run the iraf files task to create a list of the .fts files:
ar> files *.fts > file_in
>>>> Here is the file naming convention that MUST be adhered to throughout the processing steps:
arbbnddeee.imh where
a = observatory code, "k" for Kitt Peak, "c" for Cerro Tololo, etc.
r = just the letter "r"
bb = run number, 01-??
n = just the letter "n"
dd = night number, 01 to ??
eee = sequential frame number for the evening - should be 001 to ????
As an example: Kitt Peak run #10, night #1, image #145 would be named: "kr10n01145.imh"
Be sure that the names in the "file_out" listfile conform to this, and that the file names correspond to their log numbers for the night in question. If the original iraf file name has been preserved throughout the initial image calibration steps then the scheme outlined here will work. If not you will have to create "file_out" by hand.
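As a cross-check, the frame-name convention can be expressed as a short sketch (the helper `frame_name` is hypothetical, not part of arlo):

```python
# Hypothetical helper illustrating the arbbnddeee.imh frame-name
# convention described above.
def frame_name(obs_code: str, run: int, night: int, frame: int) -> str:
    """e.g. frame_name("k", 10, 1, 145) -> "kr10n01145.imh"."""
    # Observatory code is lower case; run/night are two digits,
    # the frame number is three digits.
    return f"{obs_code.lower()}r{run:02d}n{night:02d}{frame:03d}.imh"
```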
BEGIN arlo PROCESSING: RUNNING fbd
The first arlo task to run is fbd. Epar the task parameters for fbd to look something like this:
(dacd = "file_in") Input list of FITS files
(todsk = "file_out") Output list of IRAF images
(display = no) Use display?
(faiimh = yes) Create IRAF Images?
(listit = yes) Create list files? (y/n)
(chkhdr = no) Check Header,Observatory,CCD? (y/n)
(invertEW = no) Invert E<->W orientation (y/n)
(invertNS = no) Invert N<->S orientation (y/n)
(transim = no) Transpose images (y/n)
(hd_rch = "arlodir$tfh_archive/") Header Archive Directory
(fbderr = no) fbd error detected?
(ima = "") <<<<< See note below >>>>>>
(mode = "al")
dacd = the list of files with .fts extensions
todsk = list of properly named files with .imh extensions
If fbd crashes you will need to specify ima = elab.lis to restart.
fbd will convert the fits files to .imh files, and create a table containing data for each image necessary for following reduction steps. It also creates the following output files:
elab.lis - list of all files
object_v.lis - list of all "v" frames
object_r.lis - list of all "r" frames
... and so on ...
flat, bias and dark lists - which should be empty!
logbook.tab - table of image-specific data
fbd will also pause once it has built "logbook.tab" to give you the option to edit it if you want. Usually it's okay; just enter (ctrl)D and "quit" to exit.
Finally, somewhere in the processing, fbd makes some assumptions about which filter is assigned to which position in the filter wheel on the telescope, and it's often wrong. Look at the contents of "object_v.lis", and so on, to make sure that they are appropriately named. I've found that I often have to rename, say, "object_u.lis" to "object_b.lis" because fbd thinks the U filter is in the first position when the B filter actually was, and so on.
CREATE A LIST OF OBJECT FIELDS AND GET THE GSC ASTROMETRIC REFERENCE
Later in the processing we will need some astrometric reference tables giving the RA and Dec of objects in the digital sky survey. All you need to do is follow the instructions given by the utildss task and have an active account on PIXELA:
ar> utildss
RUNNING pad
The next task is called pad. Epar pad to look like this:
ar> lpar pad
(showdisp = no) Use graphics output?
(thres = 4.) DAOFIND threshold
(apedef = "pmt") Aperture photometry criteria
(aperad = 9.) Aperture rad if apedef=fixed
(psfdef = "2p3fwhm") PSF photometry criteria
(dann = 10.) DAOPARS dannulus parameter
(funz = "penny1") DAOPARS PSF fitting function
(track = "good") Telescope Tracking
(paderr = no) pad error detected?
(immag = "") >>>> See Note Below!
(alrdone = "") >>>> See Note Below!
(mode = "al")
NOTE: The two fields "immag" and "alrdone" should be left blank when running for the first time on a data set. If pad crashes (which it often does) you must then specify immag = "elab_pad.lis" and alrdone = "elab_app.lis". The first was written by fbd and is basically an input list; elab_app.lis is a list of files that pad completed, and is eventually used as the input list for app, the next step in processing.
Note that this will take a while. It sets up all the parameters necessary to run daophot - both for aperture and psf-fitting photometry - and writes a daophot parameter file for each image to each of the directories psfpars and apepars. If problems arise, a file prob_pad.lis will contain info about what happened.
Also note that one way which pad crashes is when there is a VERY bright star in the field. During the process of looking for cosmic rays the algorithm can become confused when there is a large saturated area. If this occurs you'll need to copy in a blank box (rectangular region of zeros) onto the area of the bright star.
RUN app
There's nothing tricky here - again, epar the task to look something like that shown below. This will also take quite a bit of time, typically 6 to 10 hours, sometimes even longer. As with all of the tasks so far, run app from the "main" directory.
ar> lpar app
(showdisp = no) Use display?
(commandf = "") Command file (apply to each image)
(maxs = 6) Number of stars for PSF computing
(magorpsf = yes) Use both Aperture and PSF
(apperr = no) app error detected?
(imas = "elab_app.lis")
(alrdone = "done.lis")
(mznxbc = "")
(mode = "al")
Note that the task frequently crashes, usually during psf fitting, when the function (chosen as "penny1" in pad) does not converge for some image while building a model psf. In those cases go to the psfpars and apepars directories, edit the daopars.function line to be something other than penny1, and retry. Note that to save time on a restart, edit the file elab_app.lis to delete files already done (listed in "done.lis"), then restart.
MAKE LISTS FOR ics
Now, making the input lists for the following tasks is a bit of a pain. The final goal will be a list which gives, for each line, a complete "set" of observations of either a standard star or a target field. The format is similar in all the lists, though for various reasons there are some slight differences in how you must designate the field names. The next task, "ics", needs two input lists, one for targets and one for standard fields. Both are formatted as follows:
field_desig : image_1 image_2 image_3
field_desig is the field designation, and must be unique, so if you observed an object more than once during the night you'll need to add a suffix, like "a,b,c,d,..." to keep the designation for each observation set unique. Usually the field_desig is just the standard star or DSS plate id with an additional letter added.
image1, image2 .... are the images that comprise that set of observations. If you were observing using only two filters then there MUST be two images per observing set. If for some reason there is a missing image for a given set you must include INDEF as a place-holder, but it cannot be the first image in the set (the order within a set is not important).
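The rules above can be captured as a quick sanity check; this is a hedged sketch (the helper name, and the idea of scripting the check at all, are mine and not part of arlo):

```python
# Hypothetical checker for one "field_desig : image ..." line, per the
# rules above: a fixed number of images per set (with INDEF as a
# place-holder) and INDEF never in the first slot.
def check_set_line(line: str, nfilters: int) -> None:
    desig, _, rest = line.partition(":")
    images = rest.split()
    assert desig.strip(), "missing field designation"
    assert len(images) == nfilters, f"expected {nfilters} images, got {len(images)}"
    assert images[0] != "INDEF", "INDEF cannot be the first image in a set"
```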
Getting there is a fairly involved process. Here's how!
1) run mkimsets on the entire list of images for the night. Here's a sample set of input. You will need the title info:
ar> lpar mkimsets
imlist = "kr9n01???.imh" The input image list
idfilters = "V R" The list of filter ids
imsets = "temp.lis" The output image set file
(imobsparams = "") The output image observing parameters file
(input = "images") The source of the input image list
(filter = "filter") The filter keyword
(fields = "title") Additional image list fields
(sort = "") The image list field to be sorted on
(edit = yes) Edit the input image list before grouping
(rename = yes) Prompt the user for image set names
(review = yes) Review the image set file with the editor
(list = "")
(mode = "al")
The first thing mkimsets will output is a list like the following (note that I've not included the header instructions):
##############################################################################
kr9n01004.imh V PII-260
kr9n01005.imh V PII-260
kr9n01006.imh R PII-260
kr9n01007.imh R PII-260
kr9n01008.imh V "100 241 Standard"
kr9n01009.imh R "100 241 Standard"
kr9n01010.imh V "100 394 Standard"
kr9n01011.imh R "100 394 Standard"
kr9n01012.imh V "104 490 Standard"
kr9n01013.imh R "104 490 Standard"
kr9n01014.imh V "104 598 Standard"
kr9n01015.imh R "104 598 Standard"
kr9n01016.imh V "104 598 Standard"
kr9n01017.imh R "104 598 Standard"
kr9n01018.imh V P037-F
kr9n01019.imh R P037-F
kr9n01020.imh V P037-F
kr9n01021.imh R P037-F
kr9n01022.imh V P168-F
kr9n01023.imh R P168-F
kr9n01024.imh V P168-F
kr9n01025.imh R P168-F
kr9n01026.imh V "100 241 Standard"
kr9n01027.imh R "100 241 Standard"
kr9n01028.imh R "100 241 Standard"
kr9n01029.imh V "100 394 Standard"
kr9n01030.imh R "100 394 Standard"
kr9n01031.imh V "100 394 Standard"
kr9n01032.imh R "100 394 Standard"
Edit the output as needed so that each group of observations is contiguous and INDEF lines are included for missing observations in a set. As an example, note that the second and third lines in the example above need to be swapped so that there are two observing sets. Also, the line: INDEF V "100 241 Standard" must be added after kr9n01028 to complete that set of observations. The order is not important, as long as the observations for each set are contiguous.
The next output mkimsets gives is a question about entering a new name. Just give a cr for each line; we'll include names in the next step. Finally, the program gives you one last chance to edit what the file will look like - just check that there are no "INDEF" entries as the first observation in a set. :wq to get out and it's done.
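The INDEF padding rule can be illustrated with a tiny sketch (hypothetical helper; arlo does not provide this, and remember that in the final set line INDEF must still not end up as the first image):

```python
# Hypothetical helper illustrating the padding rule above: one entry
# per filter, with "INDEF" standing in for any missing exposure.
def pad_set(images_by_filter, filters):
    """images_by_filter: dict mapping filter -> image root,
    e.g. {"R": "kr9n01028"}; filters: ordered filter list."""
    return [(images_by_filter.get(f, "INDEF"), f) for f in filters]
```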
2) Use the columns task to break temp.lis into (nfilters+2) separate single-column files. In this case we only have 2 filters, so:
ar> columns temp.lis numcol=4 outroot="qcol."
Note that the second column is just the colon from the input file - so the third column is the first image in a given observing set. Use it as input to the hselect task to create a column with the object name.
ar> hselect @qcol.3 title yes > qcol.5
Then use the unix paste command to put things back together:
ar> paste qcol.5 qcol.2 qcol.3 qcol.4 > temp2.lis
This produces the following (note that qcol.5 is the output from the hselect task):
PII-260 : kr9n01004.imh kr9n01006.imh
PII-260 : kr9n01005.imh kr9n01007.imh
"100 241 Standard" : kr9n01008.imh kr9n01009.imh
"100 394 Standard" : kr9n01010.imh kr9n01011.imh
"104 490 Standard" : kr9n01012.imh kr9n01013.imh
"104 598 Standard" : kr9n01014.imh kr9n01015.imh
"104 598 Standard" : kr9n01016.imh kr9n01017.imh
P037-F : kr9n01018.imh kr9n01019.imh
P037-F : kr9n01020.imh kr9n01021.imh
P168-F : kr9n01022.imh kr9n01023.imh
P168-F : kr9n01024.imh kr9n01025.imh
"100 241 Standard" : kr9n01026.imh kr9n01027.imh
"100 241 Standard" : kr9n01028.imh INDEF
"100 394 Standard" : kr9n01029.imh kr9n01030.imh
Edit temp2.lis to delete the ".imh" from the file names, get rid of the quotation marks, and put hyphens in the standard star names - basically clean it up, making sure all names conform to the naming standards for the rest of the process. (Not sure what those are right now, but I'm sure I'll find out!) Now sort temp2.lis using the iraf sort command:
ar> sort temp2.lis > all.lis
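If you prefer, the columns/hselect/paste shuffle can be approximated in a few lines of Python; a sketch only, where `get_title` stands in for the hselect call and is hypothetical:

```python
# Hedged sketch of the list rebuild above: emit one
# "title : image1 image2" line per observing set, taking the title
# from the first image of the set (as hselect does from the header).
def rebuild_lines(sets, get_title):
    """sets: list of (image1, image2) pairs from temp.lis;
    get_title: callable mapping an image name to its header title."""
    return [f"{get_title(img1)} : {img1} {img2}" for img1, img2 in sets]
```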
Edit all.lis and add a letter after the target name in each line so that the name for that set is unique. It's only necessary for those objects that were observed in more than one set, but just to be neat and tidy add an "a" even if there is only one observing set for a given object. Here's what it should look like now:
107-484a : kr9n01044 kr9n01045
107-484b : kr9n01060 kr9n01061
107-484c : kr9n01062 kr9n01063
107-484d : kr9n01078 INDEF
107-484e : kr9n01079 kr9n01080
109-71a : kr9n01064 kr9n01065
109-71b : kr9n01081 kr9n01082
109-71c : kr9n01093 kr9n01094
109-949a : kr9n01066 kr9n01067
109-949b : kr9n01083 kr9n01084
109-949c : kr9n01095 kr9n01096
P037-Fa : kr9n01018 kr9n01019
P037-Fb : kr9n01020 kr9n01021
P134-Fa : kr9n01097 kr9n01098
P134-Fb : kr9n01099 kr9n01100
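The suffix-lettering chore above lends itself to a short script; this sketch is not part of arlo:

```python
import string

# Hypothetical helper: append "a", "b", "c", ... to each field name so
# that every observing set gets a unique designation, as described above.
def add_suffixes(names):
    counts = {}
    out = []
    for name in names:
        i = counts.get(name, 0)      # how many sets of this field so far
        counts[name] = i + 1
        out.append(name + string.ascii_lowercase[i])
    return out
```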
ABOUT THE FIELD NAMES
The field names must conform to the DSS naming conventions. There is a bit of history involved with how and why the fields are named the way they are. The original northern survey was taken at Palomar Observatory on a 6-degree grid across the northern sky and the equatorial parts of the southern sky. The southern survey was taken on a 5-degree grid over the southern sky. The original Guide Star Photometric Catalogue included sequences in the centers of each of these plates. Since then a new northern sky survey has been (and is still being) taken at Palomar. The second Guide Star Photometric Catalogue includes sequences taken in the regions of the original catalogue in the north (the 6-degree grid) as well as on the new 5-degree grid of the second Palomar northern survey. Here's how they are designated:
For southern survey plates: Sxxx-q
For old northern survey plates: Pxxx-q
for new northern survey plates: XPxxx
WHERE: "xxx" is a three-digit plate-center number, usually numbered from the respective pole - that is S001 is the south pole region, and P001 is the north pole region; S800 would be a plate near the equator, as would P500. "q" is a letter designation indicating the star in the original Guide Star Photometric Catalogue upon which the new sequence is centered.
Note that in the case of the "XP" regions since there is no previous sequence there is no "q" designation.
Observers will often use slightly different naming conventions when at the telescope (if you go observing how about sticking with the program, eh?). Some common ones are to call the XP regions "Poss-II" or "P2" regions, and so on. If your list includes any of these erroneous designations you will need to correct them in the all.lis file.
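A quick way to catch stray designations is to check each base field name (before the set-suffix letter is appended) against the patterns above. The regular expression is my reading of the convention, not an official one:

```python
import re

# Hedged validator for the DSS field designations described above:
# Sxxx-q and Pxxx-q for the old surveys, XPxxx (no "q") for the new
# northern survey.  The allowed suffix letters are a guess.
DSS_FIELD = re.compile(r"^(?:[SP]\d{3}-[A-Za-z]|XP\d{3})$")

def is_dss_field(name: str) -> bool:
    return bool(DSS_FIELD.match(name))
```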
BREAK THE LIST INTO TWO PARTS - A STANDARDS LIST AND A TARGET LIST
Edit all.lis again, first deleting all of the observation sets of standard stars, and write the output to ta.lis (the targets list). Edit the file again, this time deleting all of the target observation sets and write to st.lis (the list of standards). Here's what each file should now look like:
>>>> ta.lis: <<<<
P037-Fa : kr9n01018 kr9n01019
P037-Fb : kr9n01020 kr9n01021
P134-Fa : kr9n01097 kr9n01098
P134-Fb : kr9n01099 kr9n01100
P168-Fa : kr9n01022 kr9n01023
>>>> st.lis: <<<<
100-241a : kr9n01008 kr9n01009
100-241b : kr9n01026 kr9n01027
100-241c : kr9n01028 INDEF
100-394a : kr9n01010 kr9n01011
100-394b : kr9n01029 kr9n01030
That should do it - these are the files that ics needs as input.
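The st.lis/ta.lis split can also be scripted; this sketch assumes, based on the examples above, that standard fields look like "100-241a" (digits-dash-digits) while target fields carry DSS-style names:

```python
import re

# Assumed pattern for Landolt standard field designations, e.g.
# "100-241a : ..." - an assumption based on the examples above.
STANDARD = re.compile(r"^\d+-\d+[a-z]?\s*:")

def split_lists(lines):
    """Split all.lis lines into (standards, targets)."""
    standards = [l for l in lines if STANDARD.match(l)]
    targets = [l for l in lines if not STANDARD.match(l)]
    return standards, targets
```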
TRIM THE DAOPHOT DATA FILES
In order to save time in the next few steps it makes sense to trim the daophot object databases for each image so that only the stars which satisfy the maximum estimated photometric error criterion (<= 0.05 mag) are included. Begin by making sure that all of the .mag and .als files have the same extension and version number. If app crashed during processing there may be some images with extra .mag or .als files. Just change the version number on the one you want to use to match the rest of the .mag and/or .als files. Then, from the main directory, use the following:
ar> cl < arlodir$cut_mag.cl
ar> cl < arlodir$cut_als.cl
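The contents of cut_mag.cl and cut_als.cl are not reproduced here; purely as a sketch of the stated criterion, the trimming amounts to something like:

```python
# Hedged illustration (not the actual cut_mag.cl/cut_als.cl logic):
# keep only stars whose estimated photometric error is <= 0.05 mag.
def trim_records(records, max_err=0.05):
    """records: list of (star_id, magnitude, mag_error) tuples."""
    return [r for r in records if r[2] <= max_err]
```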
RUN ics
ics must be run in three steps: once to determine image-to-image shifts for the targets list, once to determine image-to-image shifts for the standards list, and once to assign positions and designations to the standard stars in the standard star images.
Epar ics as shown below to enter image-to-image coordinate shifts for the list of standard star observations:
ar> lpar ics
(listima = "st.lis") Image set list
(doshift = yes) Calculate coords images shift?
(shout = "st.shift") Output shift file
(dostandard = no) Clean up MKOBSFILE output?
(aptmkobs = "st.temp") Output from MKOBSFILE - Aperture
(psfmkobs = "st.temp") Output from MKOBSFILE - PSF
(filtername = "R") Reference filter
(hmf = 2) How many filters? (0=auto)
(maxdist = 15) Maximum distance pointer-star
(recover = no) Recover from previous run?
(outputap = "st.ape") Cleaned file (aperture output)
(outputps = "st.psf") Cleaned file (PSF output)
(stddone = 0)
(imas = "")
(mkdat = "")
(mode = "al")
The task will display the first image in a set of observations in the imtool window. Point to a fairly bright star anywhere in the image and press "a". The next image in the set will be displayed - point to the same star and press "a". Continue through each set of observations. Note that if there were errors made while putting together the list "st.lis", or if the observers made mistakes in writing the image titles, you may find that the field in one image may not match the field in another image of the same set. If that happens you can try to patch things back together later; usually you'll have to rebuild the st.lis file and restart ics.
When finished, change listima = "ta.lis" and shout = "ta.shift" and run through the same procedure for the list of target observations.
RUN MKNOBSFILE
MKNOBSFILE must be run four times, twice each from the "resultape" and "resultpsf" directories. The task creates files that contain photometric data for every star from every frame. We need one file for the standard frames. Begin by moving to the "resultape" directory and run mknobsfile with parameters set like this:
ar> lpar mknobsfile
photfiles = "@newmag" The input list of APPHOT/DAOPHOT databases
idfilters = "V R" The list of filter ids
imsets = "../st.lis" The input image set file
observations = "st.temp" The output observations file
(obsparams = "") The input observing parameters file
(obscolumns = "") The format of obsparams
(minmagerr = 0.001) The minimum error magnitude
(shifts = "../st.shift") The input x and y coordinate shifts file
(apercors = "") The input aperture corrections file
(aperture = 1) The aperture number of the extracted magnitude
(tolerance = 5.) The tolerance in pixels for position matching
(allfilters = no) Output only objects matched in all filters
(verify = no) Verify interactive user input ?
(verbose = yes) Print status, warning and error messages ?
(mode = "al")
Now go to the "resultpsf" directory and run two more times exactly as before, only with the parameter "photfiles" set to "@newals".
RUN ics AGAIN
Go back to the main directory, epar ics to look something like this, and run over the standards list (st.lis) to find all standard stars in the reference image in each observation set:
ar> lpar ics
(listima = "st.lis") Image set list
(doshift = no) Calculate coords images shift?
(shout = "st.shift") Output shift file
(dostandard = yes) Clean up MKOBSFILE output?
(aptmkobs = "st.temp") Output from MKOBSFILE - Aperture
(psfmkobs = "st.temp") Output from MKOBSFILE - PSF
(filtername = "R") Reference filter
(hmf = 2) How many filters? (0=auto)
(maxdist = 15) Maximum distance pointer-star
(recover = no) Recover from previous run?
(outputap = "st.ape") Cleaned file (aperture output)
(outputps = "st.psf") Cleaned file (PSF output)
(stddone = 0)
(imas = "")
(mkdat = "")
(mode = "al")
This time you will be shown the "R" frame from each set of standard star observations. You need to indicate the location and name of each standard star that appears in the image. The finder charts for the standard stars can be found in AJ, Vol. 104, No. 1, pp. 340-371, and plates 21-76.
When running ics this time, use the "s" key to enter each star's position. After indicating the star's position you must enter its name. The correct format is fielddesig_starnumber - as an example, "101_281" or "101_L5". Also remember that you must give the name twice, once for the aperture photometry list and once again for the psf photometry list. It can take a fairly long time for each entry - so be patient! The immediate urge will be to type ahead - to enter the star name twice immediately after entering the star's position. That's not a good idea, because if the star does not exist in one of the lists the program is not going to ask you to enter a new name, and the type-ahead buffer will then assign that designation to the next object.
Once that's done, go to the resultape directory and edit the st.ape list. You need to delete the first two blank characters from each data line in the file (do not delete the first two characters in the header lines at the top of the file). Move over to the resultpsf directory and do the same thing to the st.psf file.
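The two-character trim can be done in the editor, or with a throwaway script like this sketch (it assumes header lines begin with "#", which may not match your files exactly):

```python
# Hedged sketch of the st.ape/st.psf fix-up described above: drop the
# first two characters of each data line, but leave header lines
# (assumed here to start with "#") untouched.
def clean_obsfile(lines):
    return [l if l.startswith("#") else l[2:] for l in lines]
```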
BUILD THE CONFIG.DAT FILE
The next thing we need is a config.dat file. This file describes how the standard star database file is organized, how the observation (instrumental) data files are organized, what values are to be solved for, and the equations used to solve them. The config.dat file is composed of three parts. The first part describes the structure of the photometric data file for the standards, and is kept in photcalx$catalogs/fnlandolt.dat - here's what it looks like:
# Declare the new Landolt UBVRI standards catalog variables
catalog
V 4          # the V magnitude
BV 5         # the (B-V) color
UB 6         # the (U-B) color
VR 7         # the (V-R) color
RI 8         # the (R-I) color
VI 9         # the (V-I) color
error(V) 12  # the V magnitude error
error(BV) 13 # the (B-V) color error
error(UB) 14 # the (U-B) color error
error(VR) 15 # the (V-R) color error
error(RI) 16 # the (R-I) color error
error(VI) 17 # the (V-I) color error
The second thing needed in the config.dat file is a description of how the observational data are stored in the st.ape and st.psf files. That info can be found in fst.temp.dat (it's just the st.temp file with "f" prepended and ".dat" appended to the name). Here's what's in it:
# Declare the observations file variables
observations
TV 3         # time of observation in filter V
XV 4         # airmass in filter V
xV 5         # x coordinate in filter V
yV 6         # y coordinate in filter V
mV 7         # instrumental magnitude in filter V
error(mV) 8  # magnitude error in filter V
TR 10        # time of observation in filter R
XR 11        # airmass in filter R
xR 12        # x coordinate in filter R
yR 13        # y coordinate in filter R
mR 14        # instrumental magnitude in filter R
error(mR) 15 # magnitude error in filter R
The final element is a description of the equations that "fitparam" must solve. That can be copied from photcalx$catalogs/tnlandolt.dat. Here's what it looks like:
# Sample transformation section for the new Landolt UBVRI system
transformation
fit u1=0.0, u2=0.65, u3=0.000
const u4=0.0
UFIT : mU = (UB + BV + V) + u1 + u2 * XU + u3 * UB + u4 * UB * XU
fit b1=0.0, b2=0.35, b3=0.000
const b4=0.0
BFIT : mB = (BV + V) + b1 + b2 * XB + b3 * BV + b4 * BV * XB
fit v1=0.0, v2=0.17, v3=0.000
const v4=0.0
VFIT : mV = V + v1 + v2 * XV + v3 * BV + v4 * BV * XV
fit r1=0.0, r2=0.08, r3=0.000
const r4=0.0
RFIT : mR = (V - VR) + r1 + r2 * XR + r3 * VR + r4 * VR * XR
fit i1=0.0, i2=0.03, i3=0.000
const i4=0.0
IFIT : mI = (V - VI) + i1 + i2 * XI + i3 * VI + i4 * VI * XI
Now, depending on how many filters were used, you will need to edit these definitions to match your dataset. As an example, for a night where you observed with just V and R filters, the config.dat file would look like this, after combining all three parts into a single file and editing out the unnecessary stuff:
# Configuration file for reducing VR photoelectric photometry

catalog
V 4          # the V magnitude
VR 7         # the (V-R) color
error(V) 12  # the V magnitude error
error(VR) 15 # the (V-R) color error

# Declare the observations file variables
observations
TV 3         # time of observation in filter V
XV 4         # airmass in filter V
xV 5         # x coordinate in filter V
yV 6         # y coordinate in filter V
mV 7         # instrumental magnitude in filter V
error(mV) 8  # magnitude error in filter V
TR 10        # time of observation in filter R
XR 11        # airmass in filter R
xR 12        # x coordinate in filter R
yR 13        # y coordinate in filter R
mR 14        # instrumental magnitude in filter R
error(mR) 15 # magnitude error in filter R

transformation
fit v1=0.0, v2=0.17, v3=0.000
const v4=0.0
VFIT : mV = V + v1 + v2 * XV
fit r1=0.0, r2=0.08, r3=0.000
const r4=0.0
RFIT : mR = (V - VR) + r1 + r2 * XR + r3 * VR + r4 * VR * XR
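To see what invertfit will later do with these coefficients, here is a hand inversion of the two VR equations above; the coefficient values are just the initial guesses from the config file, used as stand-ins:

```python
# Hand inversion of the VR transformation equations (a sketch of what
# invertfit computes; not arlo code).  Defaults are the initial
# guesses from the config file above, not a fitted solution.
def invert_vr(mV, mR, XV, XR, v1=0.0, v2=0.17, r1=0.0, r2=0.08, r3=0.0):
    # VFIT: mV = V + v1 + v2*XV          ->  V = mV - v1 - v2*XV
    V = mV - v1 - v2 * XV
    # RFIT: mR = (V - VR) + r1 + r2*XR + r3*VR   (r4 held constant at 0)
    #       ->  VR = (V + r1 + r2*XR - mR) / (1 - r3)
    VR = (V + r1 + r2 * XR - mR) / (1.0 - r3)
    return V, VR
```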
RUNNING fitparam
Once the config.dat file has been created you are ready to proceed with fitparam. Epar the task to look something like this:
ar> lpar fitparam
observations = "st.ape" List of observations files
catalogs = "photcalx$catalogs/nlandolt.dat" List of standard catalog files
config = "config.dat" Configuration file
parameters = "fitape" Output parameters file
(weighting = "uniform") Weighting type (uniform,photometric,equations)
(addscatter = yes) Add a scatter term to the weights ?
(tolerance = 3.0000000000000E-5) Fit convergence tolerance
(maxiter = 15) Maximum number of fit iterations
(nreject = 0) Number of rejection iterations
(low_reject = 3.) Low sigma rejection factor
(high_reject = 3.) High sigma rejection factor
(grow = 0.) Rejection growing radius
(interactive = yes) Solve fit interactively ?
(logfile = "STDOUT") Output log file
(log_unmatche = yes) Log any unmatched stars ?
(log_fit = no) Log the fit parameters and statistics ?
(log_results = no) Log the results ?
(catdir = )_.catdir) The standard star catalog directory
(graphics = "stdgraph") Output graphics device
(cursor = "") Graphics cursor input
(mode = "al")
When running fitparam you will need to delete obviously bad points, but do pay attention - if there are bad points it could be due to mis-identified stars (which is not a big problem) or a period of variable transparency (which is a big problem). Once you have a solution that you feel comfortable with, PRODUCE A HARDCOPY PLOT OF THE FIT. This is done by typing ":.snap" anytime the cursor is inside the plot window. Do one plot for each color. After exiting the fitparam task, give the "gflush" command to clear the plot buffer and send the plots to the printer. Save the plots - we'll use them in our final report, to be discussed later.
You will need to run fitparam once in each directory; once in resultape and again in resultpsf. Note that the first parameter must be changed from "st.ape" to "st.psf", and "fitape" changed to "fitpsf" to run in the resultpsf directory!
BREAK UP THE TARGET LIST - CHECK THE DSS ARCHIVE - PREPARE COLOR.INDEX FILE
Now another of those wonderfully kludgy things. We need to break the target list into as many files as there were observations for any object. In most cases we observe each target field twice - so in the main directory we need to break the "ta.lis" file into two files, "taa.lis" and "tab.lis". File "taa.lis" will include all of the "first" observations of a given target field, and "tab.lis" will contain all of the second observations of each target field (each "observation" consisting of a set of images representing a run through the entire filter set).
Next, check that the appropriate "dss" tables exist in the arlodir$dss/ directory. If they do not they must be created from GASP. There must be one table for each target field, named exactly the same as in the "ta?.lis" files, with a .tab extension. These files contain catalogues of the dss objects known to exist in the target fields - when building them within GASP it's best to keep the field size only slightly larger than the actual chip size.
One more file that must be constructed is the "color.index" file. It's very easy - the two examples below should suffice to explain:
Example 1: V and R filters:
V
errV
VR
errVR
Example 2: B,V, and R filters:
V
errV
BV
errBV
VR
errVR
res is the program which ties everything together. Using the method of similar triangles it tries to match objects found in the images - first between the aperture and psf photometry lists for a single color, then between each color. After that it employs the same method to match objects in the DSS table file to objects found in the images in order to calculate astrometric plate coefficients. Finally, using these plate coefficients along with the photometric solution calculated earlier, res determines calibrated magnitudes, colors, and position for every star in each image.
Epar the task parameters to look something like this:
ar> lpar res
(usedispl = yes) Use display?
(imslis = "taa.lis") Imsets list in mkobsfile task
(outputta = "taa") Output table
(toll = 2) Tolerance in mkobsfile task
(obsout = "mkobouta") Output file in mkobsfile task
(invout = "finalresulta") Output from invertfit task
(astrom = yes) Astrometry needed?
(U = no) U filter?
(B = no) B filter?
(V = yes) V filter?
(R = yes) R filter?
(I = no) I filter?
(indxfile = "arlodir$color.index") User Color Index File
(shout = "ta.shift") Coordinates Shift File
(filref = "R") Filter for coords reference
(objflag = "NF") Flag to ALL objects
(commflag = "No_comment") Comment to ALL objects
(toread = "../tab.lis") Internal use
(leggi = "uycv") Internal use
(mode = "al")
Note that we will usually have to run the task twice, once for each of the (now separated) lists (taa.lis and tab.lis). Before running the second time you'll need to create a couple of subdirectories and move the plate solution files from the first run into one of them. This prevents res from using the first data set's plate solutions on the second data set's images. Create the following two directories (from the main directory):
ar> mkdir platesola
ar> mkdir platesolb
Move the plate solution files from the first run into the platesola subdirectory:
ar> mv *.platesol platesola/.
Now run res on the second target list - change the imslis parameter to "tab.lis" and the outputta parameter to "tab" and let fly!
Depending on how crowded the field is, the triangle matching algorithm may fail fairly frequently. When that occurs you will be presented with a plot showing the objects from the DSS database table for the image, and the image itself will be displayed in the imtool window. You will be asked to mark the locations of some stars, first in the plot, then in the image. Try to mark 6 or more stars, scattered evenly across the image. Note that the image may be rotated or flipped with respect to the plot; if needed you can use the imtool menu and buttons to flip or rotate the image to match.
The tables created by res contain some data that we aren't interested in. It's also in IRAF table format; we'd prefer an ASCII file. To trim the unwanted objects (those which have only aperture or only psf photometry) use the following:
cl> tselect "kr10n01A" "kr10n01AS" "objflag == 'NF'"
cl> tselect "kr10n01B" "kr10n01BS" "objflag == 'NF'"
Essentially what you're doing is creating new tables which are subsets of the original res output. These new tables contain data only for the stars which have both aperture and psf magnitudes. Now we need to convert the output tables to an ASCII table in the format needed by CASB and Torino. That is done by the task arlo2dira:
ar> arlo2dira
Table to be read and to be write:
infile: kr10n01AS
outfile: kr10n01A.txt
Working in kr10n01AS and writing kr10n01A.txt
.....
.....
Note that there is no parameter file for arlo2dira; you will be asked for the names of the input table (no extension needed) and the output text file (extension is needed). When completed, send email to Jorge García to let him know where the text files are.
ARCHIVE YOUR WORK TO TAPE: TAR
Now find a free 8mm tape drive and use the Unix "tar" task to create a tape archive of all your work. Move to the main directory for the night and give the following command (assuming that you're using drive /dev/rmt/0)
unix> tar cvf /dev/rmt/0 . (NOTE - the period at the end is important!)
This will create a tape archive of all files in "main" as well as all files in all subdirectories under main. If you are confident you know what you are doing you can normally fit more than one night's reductions onto a single tar tape. If you are unsure of how to accomplish this I recommend not doing so - tapes are cheap; much cheaper than the time it takes to redo all of the reductions if you make a mistake!
Do not delete all of your files yet - wait until Jorge has a chance to look at your reductions to be sure everything is in order.
CREATE HISTOGRAM PLOTS
For each night, and for each of the subset tables (the ones used as input to arlo2dira), create histograms as follows:
ar> dvpar.device = "stdplot"
ar> tselect "kr10n01AS" "tempA" "(vapt-vpsf) < 0.05 && (rapt-rpsf) < 0.05"
ar> histogram tempA col="rapt" nbins=20 z1=10 z2=20 title="kr10n01A" xlabel="R Mag" ylabel="N Stars"
ar> histogram tempA col="vapt" nbins=20 z1=10 z2=20 title="kr10n01A" xlabel="V Mag" ylabel="N Stars"
ar> tselect "kr10n01BS" "tempB" "(vapt-vpsf) < 0.05 && (rapt-rpsf) < 0.05"
ar> histogram tempB col="rapt" nbins=20 z1=10 z2=20 title="kr10n01B" xlabel="R Mag" ylabel="N Stars"
ar> histogram tempB col="vapt" nbins=20 z1=10 z2=20 title="kr10n01B" xlabel="V Mag" ylabel="N Stars"
ar> gflush
ar> dvpar.device = "stdgraph"
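For intuition, the cut and binning those commands perform can be sketched in Python. The column names follow the tselect example above; the function itself is hypothetical, not part of arlo:

```python
# Sketch of the tselect cut plus histogram binning: keep stars whose
# aperture and psf magnitudes agree to better than 0.05 mag, then
# count them in equal-width bins between magnitudes z1 and z2.
def magnitude_histogram(stars, apt_key, nbins=20, z1=10.0, z2=20.0):
    width = (z2 - z1) / nbins
    counts = [0] * nbins
    for s in stars:
        # same expression as the tselect example (vapt/vpsf, rapt/rpsf)
        if (s["vapt"] - s["vpsf"]) < 0.05 and (s["rapt"] - s["rpsf"]) < 0.05:
            m = s[apt_key]
            if z1 <= m < z2:
                counts[int((m - z1) / width)] += 1
    return counts
```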
PUTTING TOGETHER THE FINAL REPORT
Create a table like the one shown below, including information for each night of the run. Print a copy and attach it to the histograms produced above, along with the FITPARAM plots produced earlier. Email a copy of the summary table to Jorge García, then give him the entire paper stack. You're done!!
Summary for Kitt Peak Run 10
--------------------------------------------------------------------------------
Night  Status    # of          RMS         Astrometry        Date        Reduced By
                 Fields    B    V    R     auto/manual/bad
--------------------------------------------------------------------------------
  1              11       x.xx 0.02 0.02   10/ 1/ 0          July 20 98  M Potter
  2    Clear     10       x.xx 0.03 0.03   10/ 0/ 0          July 22 98  M Potter
  3    Clouds     0
  4    Clouds     0
  5    Clouds     0
  6    Clouds     0
  7    P Cloud    2       x.xx 0.02 0.02    2/ 0/ 0          July 27 98  M Potter
--------------------------------------------------------------------------------