Files to have ready:

1) The data file(s) (obviously)
2) A previously prepared macro file, layout, stylesheet, etc.

Batch processing is then as simple as:

 $ tec360 -b -p macrofile

-b runs Tecplot in batch mode
-p plays the specified macro file

Processing several files at once with a loop

Using a shell script:
 #!/bin/sh
# export d1.plt ... d10.plt through batch.lay to d1.out ... d10.out
n=1
while test $n -le 10
do
    tecplot -b -p batch.mcr -y d$n.out batch.lay d$n.plt
    n=`expr $n + 1`
done

Either write and use a shell script like the one above, or

attach a loop directly to the macro file, as shown below:

#!MC 1120
$!EXPORTSETUP EXPORTFORMAT = PS
$!PRINTSETUP PALETTE = MONOCHROME
$!LOOP 10
$!OPENLAYOUT "batch.lay"
  ALTDATALOADINSTRUCTIONS = "d|LOOP|.plt"
$!EXPORTSETUP PRINTRENDERTYPE = VECTOR
$!EXPORTSETUP EXPORTFNAME = "d|LOOP|.out"
$!EXPORT
  EXPORTREGION = CURRENTFRAME
$!ENDLOOP
$!QUIT

For reference, to bundle several data files into one and open them with a layout file, use the macro below.

The "***.lay" layout file (export.lay in this example) must already exist.

#!MC 1120
$!LOOP 30
$!IF |LOOP| < 10
    $!VarSet |FILENAME| = 'plotn0|LOOP|'
    $!VarSet |OUTNAME| = '20|LOOP|'
$!ELSEIF |LOOP| >= 10
    $!VarSet |FILENAME| = 'plotn|LOOP|'
    $!VarSet |OUTNAME| = '2|LOOP|'
$!ENDIF
$!VarSet |MFBD| = './|FILENAME|'
$!READDATASET  '"|MFBD|.01" "|MFBD|.02" "|MFBD|.03" "|MFBD|.04" "|MFBD|.05" "|MFBD|.06" "|MFBD|.07" "|MFBD|.08" "|MFBD|.09" "|MFBD|.10" "|MFBD|.11" "|MFBD|.12" "|MFBD|.13" "|MFBD|.14" "|MFBD|.15" "|MFBD|.16" "|MFBD|.17" "|MFBD|.18" "|MFBD|.19" "|MFBD|.20" "|MFBD|.21" "|MFBD|.22" "|MFBD|.23" "|MFBD|.24" "|MFBD|.25" "|MFBD|.26" "|MFBD|.27" "|MFBD|.28" "|MFBD|.29" '
$!WRITEDATASET  "|OUTNAME|.plt"
  INCLUDETEXT = NO
  INCLUDEGEOM = NO
  INCLUDECUSTOMLABELS = NO
  BINARY = YES
  USEPOINTFORMAT = NO
  PRECISION = 9
  TECPLOTVERSIONTOWRITE = TECPLOTCURRENT
$!OPENLAYOUT "export.lay"
  ALTDATALOADINSTRUCTIONS = "|OUTNAME|.plt"
#$!EXPORTSETUP ExportFormat = PS
#$!PRINTSETUP PALETTE = MONOCHROME
#$!EXPORTSETUP PRINTRENDERTYPE = VECTOR
#$!EXPORTSETUP EXPORTFNAME = "|MFBD|.ps"
$!EXPORTSETUP IMAGEWIDTH = 1024
$!EXPORTSETUP EXPORTFORMAT = JPEG
$!EXPORTSETUP QUALITY = 100
$!EXPORTSETUP EXPORTFNAME = '|OUTNAME|.jpg'
$!EXPORT
  EXPORTREGION = CURRENTFRAME
$!ENDLOOP
$!QUIT






Type the following in a terminal. Two-finger scrolling then works nicely in Firefox.

xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Two-Finger Scrolling" 8 1
xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Synaptics Two-Finger Scrolling" 8 1 1
xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Synaptics Two-Finger Pressure" 32 10
xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Synaptics Two-Finger Width" 32 8
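
These settings are lost after a reboot or re-login, so one workaround is to put them in a small script and add it to Startup Applications (a sketch; the device name is the one used above, check yours with xinput list):

#!/bin/sh
# re-apply the Synaptics two-finger settings at login
DEV="SynPS/2 Synaptics TouchPad"
xinput set-int-prop "$DEV" "Two-Finger Scrolling" 8 1
xinput set-int-prop "$DEV" "Synaptics Two-Finger Scrolling" 8 1 1
xinput set-int-prop "$DEV" "Synaptics Two-Finger Pressure" 32 10
xinput set-int-prop "$DEV" "Synaptics Two-Finger Width" 32 8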




sudo gedit /boot/grub/grub.cfg
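
Note that grub.cfg is regenerated automatically by the GRUB 2 packaging scripts, so direct edits can be overwritten later. A sketch of the more durable route (standard GRUB 2 workflow, not from the original post):

sudo gedit /etc/default/grub   # change the defaults here (timeout, default entry, ...)
sudo update-grub               # regenerates /boot/grub/grub.cfg from those settings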






http://www.ncsa.illinois.edu/UserInfo/Resources/Hardware/Intel64Cluster/Doc/Jobs.html


1. Interactive Use

Jobs should not be run on the interactive nodes. Their use is primarily for compiling and building your programs. Instead, please run jobs on the compute nodes. See the section on qsub -I for instructions on how to run an interactive job on the compute nodes.

2. Running Programs

MPI

All the implementations of MPI on the NCSA Intel 64 Linux Cluster have the mpirun script for running an MPI program. See the sample batch scripts for syntax details for the MPI implementations.
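
For reference, the mpirun line inside such a batch script typically looks something like the sketch below (not taken from the official samples; exact flags vary between MPI implementations, and -np 16 assumes a nodes=2:ppn=8 request):

  # $PBS_NODEFILE lists the nodes assigned to the job
  mpirun -np 16 -machinefile $PBS_NODEFILE ./a.out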

Notes:

  • The environment variable $PBS_NODEFILE is automatically defined in a batch job to point to a temporary file that contains the list of nodes assigned to the job.
  • The arguments to mpirun need to come before your executable. Any arguments after your executable are considered to be arguments to your executable.
  • The VMI2 MPI implementation does not propagate environment variables well. The workaround is to create a wrapper script that sets all the environment variables that your code will need along with the executable. Then in your batch script use the wrapper script as the executable in your mpirun line.
  • As noted in the MVAPICH2 sample batch script, in order to run MVAPICH2 jobs, a file named .mpd.conf needs to exist in your home directory with the line:
    MPD_SECRETWORD=XXXXXXX     
    where XXXXXXX is a string of random alphanumeric characters, with at least one alphabetic character.

    The file should also be readable and writeable only by the owner, so the permissions need to be set as follows:

    chmod 700 $HOME/.mpd.conf
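
    A quick way to create the file (a sketch; substitute your own random string):

    echo "MPD_SECRETWORD=a7k2x9qz1" > $HOME/.mpd.conf
    chmod 700 $HOME/.mpd.conf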

OpenMP

Before you run an OpenMP program, set the environment variable OMP_NUM_THREADS to the number of threads you want. For example, to run program a.out interactively with two threads:

  setenv OMP_NUM_THREADS 2
./a.out

The following environment variables may also be useful in running your OpenMP programs:

  • OMP_SCHEDULE: sets the schedule type and (optionally) the chunk size for DO and PARALLEL DO loops declared with a schedule of RUNTIME. The default is STATIC.
  • KMP_LIBRARY: sets the run-time execution mode. The default is throughput, but it can be set to turnaround so worker threads do not yield while waiting for work.
  • KMP_STACKSIZE: sets the number of bytes to allocate for the stack of each parallel thread. You can use a suffix k, m, or g to specify kilobytes, megabytes, or gigabytes. The default is 4m.
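
For example, to run with four threads, a dynamic schedule, and a larger per-thread stack (values are illustrative only):

  setenv OMP_NUM_THREADS 4
  setenv OMP_SCHEDULE "dynamic,100"
  setenv KMP_STACKSIZE 16m
./a.out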

Hybrid MPI/OpenMP

To run an MPI/OpenMP hybrid program, you need to set the environment variable OMP_NUM_THREADS to the number of threads you want, and change the number of CPUs per node for MPI accordingly. For example, to run a program with 10 MPI ranks and 8 threads per rank, do the following in your batch script:

  #PBS -l nodes=10:ppn=1
setenv OMP_NUM_THREADS 8

See the exception with VMI2 in the MPI section above on using a wrapper.
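
Putting it together, a hybrid batch script might look like the sketch below (the executable name and mpirun flags are placeholders; adjust them for the MPI implementation you use):

  #!/bin/csh
  #PBS -l walltime=01:00:00,nodes=10:ppn=1
  #PBS -N hybridjob
  cd $PBS_O_WORKDIR
  setenv OMP_NUM_THREADS 8
  # 10 MPI ranks, one per node; each rank runs 8 OpenMP threads
  mpirun -np 10 -machinefile $PBS_NODEFILE ./hybrid.exe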

(See the qsub section for information on PBS directives.)

3. Batch System (Torque)

The NCSA Intel 64 Linux Cluster uses the Torque Resource Manager with the Moab Workload Manager for running jobs. Torque is based upon OpenPBS, so the commands are the same as PBS commands.

3.1 Scheduling Policies

The scheduling policy on Abe is set to highly favor large node-count jobs.

Also, as with other HPC systems at NCSA, the scheduling policy includes fair-share. This is a policy whereby a job's priority may be increased or decreased because of other jobs that the user's project may be running or have recently run. Basically, in order to give everyone a fair opportunity to run jobs, a user's job will have a higher priority if users in their project haven't run jobs in the recent past. Fair-share also factors in the ratio of the service units the user's project is allocated and the time to the allocation expiration.

To maximize utilization, the scheduler will also back-fill jobs. When trying to schedule large blocks of nodes for large jobs, there are often "holes" where some nodes are idle waiting to be added to a pool to start a large waiting job. The scheduler back-fills smaller jobs into these holes.

When figuring out a job's priority relative to other jobs, there are several factors which are taken into account. Some of these factors include:

  • job size (how many nodes)
  • job expansion factor (the ratio of the time the job has spent eligible to be run versus how much time the job has requested)
  • the raw amount of time the job has spent eligible to be run
  • fair-share factors
A relative weighting of these factors contributes to a job's priority.

A debug queue is available to facilitate fast turnaround on debugging/testing jobs. Jobs in this queue have an intrinsically higher priority; additionally, they accrue priority at a much higher rate because the expansion factor (and its associated priority factor) increases very quickly.

In order to keep jobs from the long queue from dominating the system and causing shorter jobs to wait behind them, there is a limit on the nodes currently running jobs from the long queue. Given the fluid nature of our job load, this limit is adjusted from time to time, but in the general case we tend to keep it between 1/4 and 1/3 of the available nodes. When that limit is reached, subsequent jobs in the queue may go into a blocked state until running jobs finish and free up resources. Then the jobs will automatically be moved from the blocked state and get scheduled to run.

3.2 Queues

The following queues are currently available for users:

Queue              Walltime    Max # Nodes
debug              30 mins     16
normal (default)   48 hours    256   (as of July 1, 2009)
wide               48 hours    600   (as of September 16, 2009)
long               168 hours   256   (as of July 1, 2009)

NOTES:

  1. Jobs submitted to all but the wide queue will stay within the bounds of either the 16GB memory or the 8GB memory nodes, i.e., jobs will not span across the two types of resources unless submitted to the wide queue.
  2. The minimum node count for the wide queue is 64.
  3. Access to resources over 600 nodes (up to a maximum of 1024 nodes) is available by special request. Please send email to consult@ncsa.uiuc.edu to request access. Include the number of nodes and the wall time required, and the number of jobs to be run.

3.3 Batch Commands

Below are brief descriptions of the useful batch commands. For more detailed information, refer to the individual man pages.

3.3.1 qsub

The qsub command is used to submit a batch job to a queue. All options to qsub can be specified either on the command line or as a line in a script (known as an embedded option). Command line options have precedence over embedded options. Scripts can be submitted using

qsub [list of qsub options] script_name

The main qsub options are listed below; a minimal submission-script sketch pulling several of them together follows the notes. The sample batch scripts illustrate qsub usage and options. Also see the qsub man page for other options.

  • -l resource-list: specifies resource limits. The resource_list argument is of the form:
    resource_name[=[value]][,resource_name[=[value]],...]:resource

    The resource_names are:

    walltime: maximum wall clock time (hh:mm:ss) [default: 10 mins]
    nodes: number of 8-core nodes [default: 1 node]
    ppn: how many cores per node to use (1 through 8) [default: ppn=1]
    resource: resource to be used. The available resource is himem to access the 16 GB memory nodes.
    Note: Specify the himem resource only if you absolutely need the higher memory nodes since it can impact turnaround time of the job.

    Examples:
    #PBS -l walltime=00:30:00,nodes=2:ppn=8
    #PBS -l walltime=00:30:00,nodes=2:ppn=8:himem
  • -q queue_name: specifies the queue name. [default: normal]

  • -N jobname: specifies the job name.

  • -o out_file: store the standard output of the job to file out_file. After the job is done, this file will be found in the directory from which the qsub command was issued. [default: <jobname>.o<PBS_JOBID>]

  • -e err_file: store the standard error of the job to file err_file. After the job is done, this file will be found in the directory from which the qsub command was issued. [default: <jobname>.e<PBS_JOBID>]

  • -j oe: merge standard output and standard error into standard output file.

  • -V: export all your environment variables to the batch job.

  • -m be: send mail at the beginning and end of a job.

  • -M myemail@myuniv.edu : send any email to given email address.

  • -A project: charge your job to a specific project (TeraGrid project or NCSA PSN). (for users in more than one project)

  • -X: enables X11 forwarding.      

Notes:

  • Using the -N option will generate stdout and stderr files of the form <jobname>.o<jobid> and <jobname>.e<jobid>, respectively, in the directory from which the batch job was submitted when used without the -o and -e options.
  • Temporary stdout/stderr files while the job is running are located in the home directory [$HOME/.pbs_spool or $HOME], and named <jobid>.abem5.OU and <jobid>.abem5.ER.
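
As mentioned above, a minimal submission script pulling several of these options together might look like this (a sketch; the job name, email address, and executable are placeholders):

  #!/bin/csh
  #PBS -l walltime=00:30:00,nodes=2:ppn=8
  #PBS -N testjob
  #PBS -j oe
  #PBS -m be
  #PBS -M myemail@myuniv.edu
  cd $PBS_O_WORKDIR
  mpirun -np 16 -machinefile $PBS_NODEFILE ./a.out

Saved as, say, testjob.pbs, it would be submitted with qsub testjob.pbs.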

3.3.2 qsub -I

The -I option tells qsub you want to run an interactive job. You can also use other qsub options such as those documented in the batch sample scripts. For example, the following command:

   qsub -I -V -l walltime=00:30:00,nodes=2:ppn=8

will run an interactive job with a wall clock limit of 30 minutes, using two nodes and eight cores per node.

After you enter the command, you will have to wait for Torque to start the job. As with any job, your interactive job will wait in the queue until the specified number of nodes is available. If you specify a small number of nodes for smaller amounts of time, the wait should be shorter because your job will backfill among larger jobs. Once the job starts, you will see something like this:

qsub: waiting for job 1244.abem5.ncsa.uiuc.edu to start
qsub: job 1244.abem5.ncsa.uiuc.edu ready

Now you are logged into the launch node. At this point, you can use the appropriate command to start your program.

When you are done with your runs, you can use the exit command to end the job.

3.3.3 qstat

The qstat command displays the status of batch jobs.
  • qstat -a gives the status of all jobs on the system.
  • qstat -n lists nodes allocated to a running job in addition to basic information.
  • qstat -f PBS_JOBID gives detailed information on a particular job.
    Note: Currently PBS_JOBID needs to be the full extension: <jobid>.abem5.ncsa.uiuc.edu.
  • qstat -q provides summary information on all the queues.

See the man page for other options available.

3.3.4 qhist

qhist, a locally written tool available on the NCSA Intel 64 Linux Cluster, summarizes the raw accounting record(s) for one or more jobs. See the output of "qhist --help" for details.
NOTE: As of May 6 2009, SU charges for a job are available the day after the job completes.

To display information about a specific job, the syntax is qhist PBS_JOBID.

3.3.5 qdel

The qdel command deletes a queued job or kills a running job. The syntax is qdel PBS_JOBID.

Note: You only need to use the numeric part of the Job ID.

3.4 Sample Batch Scripts

Sample batch scripts are available in the directory /usr/local/doc/batch_scripts for use as a template.

3.5 Disk Space for Batch Jobs

Scratch space for batch jobs is provided via a per-job scratch directory that is created at the beginning of the job. This directory is created under /scratch/batch, and is based on the JobID. If the batch script uses one of the sample scripts as a template, the name of this scratch directory is available to job scripts with the $SCR environment variable.

Your job scratch directory may be deleted soon [possibly immediately] after your job completes, so you should take care to transfer results to the mass storage system (see the section Automated Saving of Files from Batch Jobs).
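
A common pattern inside a batch script is a sketch like the following (assumes the sample-script convention that $SCR points to the per-job scratch directory; file names are placeholders):

  # run in the per-job scratch space, then copy results back before the job ends
  cd $SCR
  cp $PBS_O_WORKDIR/input.dat .
  $PBS_O_WORKDIR/a.out
  cp results.dat $PBS_O_WORKDIR/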

The cdjob command can be used to change the working directory to the scratch directory of a running batch job. The syntax is

cdjob PBS_JOBID

3.6 Automated Saving of Files from Batch Jobs

The saveafterjob utility is available for automated, guaranteed saving of output files from batch jobs to the mass storage system. It needs to be added with the SoftEnv key: +saj
For details on its use, see the saveafterjob page and the sample PBS batch scripts.

4. Notes

  • To avoid excessive paging, we recommend restricting job memory to 875MB/core or 7GB/node.
  • While a job is running, you can ssh to the compute nodes on which your job is running. qstat -n provides the list of hosts assigned to your job. The first host on the list is the launch node.




1. Check the program version (using gcc as an example)
# gcc --version

2. Find the program's location
# which gcc
/usr/bin/gcc

3. Check the link target
# ls -l /usr/bin/gcc
** prints the gcc symlink information

4. Remove the link
# unlink /usr/bin/gcc
# gcc --version
   -> nothing should come up (gcc is no longer found)

5. Recreate the link
# ln -s /usr/bin/gcc-4.3 /usr/bin/gcc
  *** ln -s {target path} {link name}

6. Verify the link
# gcc --version, etc.

'리눅스이야기' 카테고리의 다른 글

GNU Make 강좌 링크  (0) 2010.08.04
batch job 관련 참고자료.  (0) 2010.06.09
Missing libstdc++.so.5 on Ubuntu  (0) 2010.05.24
우분투 관리자 접속  (0) 2010.05.13
리눅스 사용자 시스템 제한 설정  (0) 2010.05.11
Posted by 스핏파이어
,



How to fix an error encountered while installing Tecplot on Ubuntu Linux.

============ The fix is as follows ====================

Original source: http://i-ubuntu.springnote.com

Some software needs libstdc++.so.5 in order to run.
If you are on an RPM-based distribution such as Fedora or SUSE, you can search
rpmfind.net to find out which package provides it.
For Fedora users, installing the compat-libstdc++-33 package that matches
their release is enough to fix it, but Ubuntu does not provide that package.
The workaround is as follows.

Code:

# change directory to /tmp directory:

cd /tmp/

# download deb package:

wget -c http://lug.mtu.edu/ubuntu/pool/main/g/gcc-3.3/libstdc++5_3.3.6-13ubuntu2_i386.deb --> this file no longer seems to exist.

I used the file below instead:

wget -c http://lug.mtu.edu/ubuntu/pool/main/g/gcc-3.3/libstdc++5_3.3.6-10_amd64.deb
(keep it all on one line, of course)

# unpack deb package to get library file

# (use the filename of the .deb you actually downloaded)
dpkg -x libstdc++5_3.3.6-10_amd64.deb libstdc++5

# copy library file to /usr/lib directory

sudo cp libstdc++5/usr/lib/libstdc++.so.5.0.7 /usr/lib

# change directory to /usr/lib directory

cd /usr/lib

# create symbolic link to library

sudo ln -s libstdc++.so.5.0.7 libstdc++.so.5
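
To confirm that the library is now found (a quick check, not from the original source):

# refresh the dynamic linker cache and look for the library

sudo ldconfig

ldconfig -p | grep libstdc++.so.5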




By default, Ubuntu does not set a password for the root account during installation, which can be confusing the first time around. Simply log in with your user account and set the root password as follows.

$ sudo passwd

You will be asked for the password of the account you are currently logged in with, and then prompted to set the password for root.

-Note---------------------------------------------------------------------------

If you simply want to work with administrator privileges all the time and find typing sudo tedious, you can log in as root from the start and avoid sudo altogether.

Once the root account has been set up:

System - Administration - Login Window - Security tab: check [Allow local system administrator login]

At the login screen, enter "root" instead of your user name and log in.

If you also check [Enable automatic login] on the Security tab, you will not have to log in at every boot.



$ ulimit -a

Use this command to check the current per-user resource limits:

[id@node001 ~]$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 36864
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Each limit can be changed using the option shown in parentheses for that entry.
Set it to unlimited to impose no limit.


** About the following error when using MPI ************************
net_send: could not write to fd=5, errno = 32

This error generally occurs when one of the processes in use has died.
The fix: check that a sufficiently large stack size has been configured in the environment.
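
For example, to raise the stack size before launching an MPI run (bash/sh syntax; in csh use limit stacksize unlimited; the launch command is illustrative):

ulimit -s unlimited      # remove the stack size limit for this shell and its children
ulimit -a | grep stack   # confirm the new setting
mpirun -np 4 ./a.out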




To use Fortran on a Unix or Linux machine, you normally have to ask the system administrator to install a Fortran compiler. On Linux machines, which are the focus of much attention these days, a Fortran compiler is usually installed by default. Let's look at how to compile the sample.for file written earlier on a Linux machine. The command used on most Linux machines is f77 or g77; the examples below use g77.
g77 sample.for
The file produced by this command is a bit different from what you get on Windows. Because no executable name was specified, an executable named a.out is created, so to run the program you have to type
./a.out
To give the executable a name of your own, use the -o option:
g77 -o sample sample.for
Also, unlike DOS, Unix and Linux do not require an extension such as .exe on executables; the file only needs execute permission. So to run the file compiled above, type
./sample
To generate only the object code, use the -c option:
g77 -c sample.for
In this case a file named sample.o is produced, slightly different from Windows, where sample.obj would be created.

Finally, unlike most other languages, Fortran is case-insensitive. On Windows this is unsurprising, since the system itself does not distinguish case, but Unix systems do. For example, if the files a.exe and A.exe both exist, Windows treats them as the same file, and the directory-change commands `cd' and `CD' are recognized as the same command. Unix, however, distinguishes case: a.exe and A.exe are different files, and while the command `cd' exists, there is no command `CD'. Nevertheless, on both systems Fortran treats variable and function names as case-insensitive.

C        1         2         3         4         5         6         7
C23456789012345678901234567890123456789012345678901234567890123456789012
C
C Sample file to show basic fortran program
C
      program sample

      integer a, B

      a = 1
      B = 2

      write(*,10) A, b
   10 format(2i5)
      stop
      end
Compiling the example above produces no errors, even though a/A and B/b are mixed freely. Most other languages treat these as distinct identifiers, so be careful when switching between Fortran and other languages. In Matlab, for instance, a variable named a and a variable named A are, as far as I know, different variables.

Original source: http://cheric.org/ippage/e/ipdata/2001/13/node11.html

