Encrypted Data Storage
Overview and Existing Documentation
The following document provides an overview of the Medical Data Manager (MDM) that we intend to deploy, as well as technical information on its implementation:
http://www.i3s.unice.fr/~johan/mdm/mdm-051013.pdf
The MDM is composed of the following services:
- gLite data management services v1.5 (fireman file catalog, hydra keystore, gLiteIO)
- a SRM-DICOM server
- the AMGA metadata manager
- a trigger script to register new DICOM files into the system
- a client library to access and visualize image files that were registered.
The following sections describe the installation of the gLite components, the SRM-DICOM server, AMGA, the trigger script and the client library, and give instructions for building from sources.
gLite topics
Install
For the basic functionality only two packages are needed (with their dependencies): glite-data-catalog-cli and glite-data-io-client. You also need the attached services.xml file, which points you to the services in the EGEE prototype testbed.
As 'root' you can install the packages with the following commands:
echo 'rpm http://egee-jra1-data.web.cern.ch/egee-jra1-data/repository-biomed dist glite' >/etc/apt/sources.list.d/egee-na4-biomed.list
echo 'rpm http://glitesoft.cern.ch/EGEE/gLite/APT/R1.4/ rhel30 externals' >/etc/apt/sources.list.d/glite-externals.list
apt-get update
apt-get install mdmdemo
The following NEW packages will be installed:
creaimage (0.0.0-1)
creaviewer (0.0.0-2)
glite-amga-cli (1.0.0-1)
glite-data-catalog-api-c (2.0.0-4)
glite-data-catalog-cli (1.7.2-4)
glite-data-hydra-cli (1.0.2-1)
glite-data-io-base (2.0.0-1)
glite-data-io-client (1.5.2-1)
glite-data-io-gss-auth (1.0.0-1)
glite-data-io-quanta (1.0.0-1)
glite-data-util-c (1.2.0-1)
glite-essentials-cpp (1.1.1-1_EGEE)
glite-service-discovery-api-c (2.2.0-0)
glite-service-discovery-file-c (2.1.0-2)
jmc (0.0.0-1)
mdmdemo (0.0.0-3)
vdt_globus_essentials (VDT1.2.2rh9-1)
wx-gtk2-ansi (2.6.2-1)
Need to get 32.5MB of archives.
After unpacking 123MB of additional disk space will be used.
export GLITE_LOCATION=/opt/glite
mkdir -p $GLITE_LOCATION/etc
wget -O $GLITE_LOCATION/etc/services.xml https://twiki.cern.ch/twiki/pub/EGEE/DMEncryptedStorage/services.xml
You can use the attached reinstall script to create a UI on your machine as a normal (not 'root') user:
export GLITE_ROOT=/tmp/ui
./reinstall
bash --rcfile $GLITE_ROOT/env_settings.sh
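The exact contents of the generated env_settings.sh depend on the reinstall script; a hypothetical minimal version would just export the installation paths, along these lines:

```shell
# Hypothetical sketch of an env_settings.sh for a user-space UI under
# /tmp/ui -- the file the reinstall script actually generates may differ.
export GLITE_ROOT=/tmp/ui
export GLITE_LOCATION=$GLITE_ROOT/opt/glite
export PATH=$GLITE_LOCATION/bin:$PATH
export LD_LIBRARY_PATH=$GLITE_LOCATION/lib:$GLITE_LOCATION/externals/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
```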
The installations described above make use of the following services (some of them only planned):
Cipher
The suggested algorithm is AES; however, it is not available in the OpenSSL version that ships by default with Globus 2.x. In those cases we have to fall back to IDEA (idea-cbc) or Blowfish (bf-cbc). Blowfish is faster, so we use bf-cbc with a 128-bit key for the demo.
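The encryption step performed by the prototype scripts can be approximated on the command line with openssl enc. The sketch below uses AES-128-CBC so that it runs on a modern OpenSSL; on the old Globus 2.x stack described above you would substitute -bf-cbc. The key and IV are illustrative, not the demo's key material.

```shell
# Round-trip a file through a block cipher, mirroring what
# glite-eds-put (encrypt before upload) and glite-eds-get
# (decrypt after download) do internally.
KEY=00112233445566778899aabbccddeeff   # illustrative 128-bit key (32 hex chars)
IV=000102030405060708090a0b0c0d0e0f   # illustrative IV (AES: 16 bytes)

echo 'patient image data' > plain.dat

# encrypt; substitute -bf-cbc (with a 64-bit IV) on the old OpenSSL
openssl enc -aes-128-cbc -K "$KEY" -iv "$IV" -in plain.dat -out cipher.dat

# decrypt
openssl enc -d -aes-128-cbc -K "$KEY" -iv "$IV" -in cipher.dat -out roundtrip.dat

cmp plain.dat roundtrip.dat && echo 'round trip OK'
```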
Prototype 0.
Prototype 0 is an implementation of the functionality in shell scripts, using a single key store (see the attached glite-eds-put, glite-eds-get, and glite-eds-rm scripts).
You can emulate the functionality locally by printing the remote commands instead of executing them. In this mode the encryption, (local) storage and decryption do happen, so you can test various ciphers (by setting the EDS_CIPHER and EDS_KEYINFO environment variables):
# the following line forces the commands
# to be printed instead of executed
export DRYRUN=echo
./glite-eds-put input-file /tmp/file
./glite-eds-get /tmp/file output-file
./glite-eds-rm /tmp/file
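The dry-run mode relies on the standard shell idiom of prefixing each remote command with an expandable variable: when DRYRUN=echo the command line is printed instead of run, and when DRYRUN is empty the command executes normally. A minimal sketch of the idiom (the function name is illustrative, not taken from the scripts):

```shell
#!/bin/sh
# remote_remove stands in for any "remote" command in the scripts:
# "$DRYRUN rm ..." expands to "echo rm ..." in dry-run mode.
remote_remove() {
    $DRYRUN rm -f "$1"
}

touch /tmp/dryrun-demo

DRYRUN=echo
remote_remove /tmp/dryrun-demo        # only prints the rm command
[ -f /tmp/dryrun-demo ] && echo 'dry run: file untouched'

DRYRUN=
remote_remove /tmp/dryrun-demo        # really removes the file
[ -f /tmp/dryrun-demo ] || echo 'real run: file removed'
```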
If you have installed a UI properly and you are using the services.xml file, then you can also run these commands with the services:
# storing the file to an SE
./glite-eds-put input-file /tmp/file
# checking the result
glite-catalog-stat /tmp/file
glite-meta-getattr /tmp/file
# retrieving the file from SE
./glite-eds-get /tmp/file output-file
# removing the file from SE
./glite-eds-rm /tmp/file
The prototype of the SRM-side registration (glite-eds-register) is almost the same as glite-eds-put; however, it does not involve the gLite I/O server. It takes an input file, an LFN and a SURL as input:
# encryption and registration
./glite-eds-register input-file /tmp/file srm://dicomsrm.example.com:8443/tmp/file output-file
# checking the result remotely
glite-catalog-stat /tmp/file
glite-meta-getattr /tmp/file
# and checking the encrypted file locally
ls -l output-file
# removing the local file
rm output-file
# removing the remote entries
glite-catalog-rm /tmp/file
glite-meta-remove /tmp/file
The created output-file shall be returned to the client for the specified SURL.
Prototype 1.
The glite-data-hydra-service and glite-data-hydra-cli packages have been added to implement the functionality of the above-mentioned scripts in a simple C library and CLI. They will be part of R1.5, but they are already included in a special APT repository used by the reinstall script mentioned above.
SRM-DICOM installation
/******************************************************************************
* Copyright (c) 2004 on behalf of the EU EGEE Project :
* Centre National de la Recherche Scientifique (CNRS), France
* Laboratoire de l'Accélérateur Linéaire (LAL), France.
******************************************************************************
* File name : readme.txt
* Author : Daniel Jouvenot
* Description : Help to construct the SRM DICOM system
*****************************************************************************/
/*****************************************************************************/
// Comments
/*****************************************************************************/
This system makes it possible to use a DICOM server behind an SRM server, so DICOM services can be used on a grid such as EGEE through a standard interface such as SRM.
This is a sub system of a Medical data management system described on the page http://www.i3s.unice.fr/~johan/mdm/ (see also https://twiki.cern.ch/twiki/bin/view/EGEE/DMEncryptedStorage).
/*****************************************************************************/
// Description
/*****************************************************************************/
The SRM DICOM system is principally composed of 6 parts:
1. An SRM server
2. An srmToDicom library
3. A sdmStorescp application
4. A DICOM server
5. A storageDicomManager.properties file
6. A sdmDaemon package
You have to install and configure these different parts one by one to be able to use the SRM DICOM servers correctly.
To use these systems you will need:
1. A database management system running MySQL and PostgreSQL
2. A running GridFTP service (if you use srmcp as a client for SRM)
/*****************************************************************************/
// SRM server
/*****************************************************************************/
To install the SRM server, download the RPM from http://quattor.web.lal.in2p3.fr/packages/site/
$ wget http://quattor.web.lal.in2p3.fr/packages/site/srmServer-last_version.noarch.rpm
As root, install this RPM:
$ rpm -i srmServer-last_version.noarch.rpm
Now you have to configure several files according to your system:
1. Configure the config.xml file under /opt/srmServer/.srmconfig. For example:
<!--jdbcUrl-->
<jdbcUrl> jdbc:postgresql:srmDataBase </jdbcUrl>
<!--jdbcClass-->
<jdbcClass> org.postgresql.Driver </jdbcClass>
<!--jdbcUser-->
<jdbcUser> dicom </jdbcUser>
<!--jdbcPass-->
<jdbcPass> enstore </jdbcPass>
You have to create the jdbcUrl database and the jdbcUser. To do so, configure your PostgreSQL driver, start the PostgreSQL server, and:
a. Create the user (as postgres):
$ /usr/local/pgsql/bin/createuser dicom
b. Create the database (as yourself):
$ /usr/local/pgsql/bin/createdb srmDataBase
2. Configure the dcache.kpwd to add the mapping and the login entries for the user according to the /etc/grid-security/grid-mapfile file
3. Configure the run-unix-fs file under /opt/srmServer/bin to specify the name of your machine
/*****************************************************************************/
// srmToDicom library
/*****************************************************************************/
This is the library used by the SRM server to call the DICOM server.
To install this library, download it from http://quattor.web.lal.in2p3.fr/packages/site/
$ wget http://quattor.web.lal.in2p3.fr/packages/site/srmToDicom-last_version.i386.rpm
$ rpm -i file.rpm (as root)
Specify the path of this library in your LD_LIBRARY_PATH variable.
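For example, assuming the library was installed under /opt/srmToDicom/lib (this path is an assumption; check the actual location with rpm -ql srmToDicom and adjust):

```shell
# Append the srmToDicom library directory to the dynamic loader path.
# /opt/srmToDicom/lib is an assumed install location, not confirmed.
SRMTODICOM_LIB=/opt/srmToDicom/lib
export LD_LIBRARY_PATH=$SRMTODICOM_LIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
```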
/*****************************************************************************/
// sdmStorescp application
/*****************************************************************************/
This is the service class provider that stores the DICOM files on disk.
To install this application, download it from http://quattor.web.lal.in2p3.fr/packages/site/
$ wget http://quattor.web.lal.in2p3.fr/packages/site/sdmStorescp-last_version.i386.rpm
$ rpm -i file.rpm (as root)
/*****************************************************************************/
// DICOM server
/*****************************************************************************/
The DICOM server is composed of 2 different parts:
1. The DCMTK libraries, which are used to interface with the server
2. The CTN applications, which supply the DICOM services
So you have to install these two components from http://quattor.web.lal.in2p3.fr/packages/site/
$ wget http://quattor.web.lal.in2p3.fr/packages/site/dcmtk-last_version.i386.rpm
$ wget http://quattor.web.lal.in2p3.fr/packages/site/ctn-last_version.mysql4.i386.rpm
$ rpm -i file.rpm (as root)
You also have to configure your DICOM server.
1. Create the ctn database and open a MySQL shell as root
$ mysqladmin -u root create ctn
$ mysql -u root
2. At the mysql prompt, enter:
mysql> GRANT ALL PRIVILEGES ON *.* TO ctn@localhost IDENTIFIED BY 'ctn' WITH GRANT OPTION;
mysql> quit
3. Create some tables using scripts (from main CTN directory)
$ cd cfg_scripts/mysql
$ csh ./CreateDB CTNControl
$ csh ./CreateTables Control CTNControl
$ csh ./CreateDB DicomImage
$ csh ./CreateTables DIM DicomImage
$ csh ./CreateDB FISDB
$ csh ./CreateTables FIS FISDB
$ csh ./CreateDB LTA_IDB
$ csh ./CreateDB LTA_FIS
$ csh ./CreateDB STA_IDB
$ csh ./CreateDB STA_FIS
$ csh ./CreateTables FIS LTA_FIS
$ csh ./CreateTables FIS STA_FIS
$ csh ./CreateTables DIM LTA_IDB
$ csh ./CreateTables DIM STA_IDB
4. Configure the CTN server by doing the following
$ /usr/local/ctn/bin/cfg_ctn_tables
From the application:
-> Control->General->Applications
Application Title = CTN
Node = name of your CTN server
Organization = any
Port = 10004
Application Title = MOVESCU
Node = name of your CTN server
Organization = any
Port = 10004
Application Title = STORESCU
Node = name of your CTN server
Organization = any
Port = 10004
Application Title = STORESCP
Node = name of your CTN server
Organization = any
Port = 10006
From the application:
-> Control->General->Security Matrix
Requesting Application (Vendor) = CTN
Responding Application (CTN) = CTN
Requesting Application (Vendor) = MOVESCU
Responding Application (CTN) = CTN
Requesting Application (Vendor) = STORESCU
Responding Application (CTN) = CTN
Requesting Application (Vendor) = STORESCP
Responding Application (CTN) = CTN
Requesting Application (Vendor) = CTN
Responding Application (CTN) = STORESCP
From the application:
-> Control->Image Server->Storage Access
Application Title (CTN) = CTN
Database key = DicomImage
Owner = any
Group Name = any
Comment = any
From the application:
-> Control->Image Server->Storage Control
Requesting Application = CTN
Responding Application = CTN
Medium = any
(File System) Root = /var/spool/srmToDicomCache/srmCache
Requesting Application = MOVESCU
Responding Application = CTN
Medium = any
(File System) Root = /var/spool/srmToDicomCache/srmCache
Requesting Application = STORESCU
Responding Application = CTN
Medium = any
(File System) Root = /var/spool/srmToDicomCache/ctnCache
Requesting Application = STORESCP
Responding Application = CTN
Medium = any
(File System) Root = /var/spool/srmToDicomCache/srmCache
Requesting Application = CTN
Responding Application = STORESCP
Medium = any
(File System) Root = /var/spool/srmToDicomCache/srmCache
From the application:
-> Control->FIS->FIS Access
Title = CTN
Database Key = FISDB
Group = any
Comment = any
Don't forget to create the /var/spool/srmToDicomCache/srmCache and /var/spool/srmToDicomCache/ctnCache directories with the appropriate permissions.
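"Appropriate permissions" means the account running the SRM and CTN daemons must be able to read and write both caches. A sketch, assuming the daemons run under a dicom account (an assumption; use whatever account your daemons actually run as):

```shell
# Create both cache directories referenced in the Storage Control entries.
mkdir -p /var/spool/srmToDicomCache/srmCache \
         /var/spool/srmToDicomCache/ctnCache
# Hand them to the daemon account; 'dicom' is an assumption.
chown -R dicom:dicom /var/spool/srmToDicomCache 2>/dev/null \
  || echo 'adjust ownership to your daemon account (run as root)'
chmod -R u+rwX /var/spool/srmToDicomCache
```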
/*****************************************************************************/
// storageDicomManager.properties file
/*****************************************************************************/
This file describes the properties needed by the SRM and DICOM servers; place it under the /etc/storageDicomManager directory by default. Example:
DICOM_SERVER_PEER = grid13.lal.in2p3.fr
DICOM_SERVER_PORT = 10004
RETRIEVE_PORT = 2100
SOCKET_SERVER_PORT = 30000
PUT_APPLICATION_TITLE = STORESCU
MOVE_APPLICATION_TITLE = MOVESCU
PEER_APPLICATION_TITLE = CTN
PROVIDER_APPLICATION_TITLE = STORESCP
QUERY_FILE_NAME = /tmp/medi-dicom-query-
QUERYDICOM_FILE_NAME = /tmp/medi-dicom-queryDicom-
SRM_STORAGE_DIR = /var/spool/srmToDicomCache/srmCache
/*****************************************************************************/
// sdmDaemon package
/*****************************************************************************/
This package contains the daemons that launch the SRM, storescp and DICOM applications. The daemons to launch are:
a. srmServer
b. sdm_storescp
c. archive_server
These daemons need some configuration files. Examples:
-> srmServer.conf file:
SRM_HOME=/opt/srmServer
SRM_CONFIG=/opt/srmServer/.srmconfig/config.xml
GRIDFTP_HOST=grid03.lal.in2p3.fr
GRIDFTP_PORT=2811
DCMDICTPATH=/opt/dcmtk/lib/dicom.dic
JAVA_HOME=/usr/java/j2sdk1.4.2_08
-> ctn.conf file:
PORT=10004
-> sdm_storescp.conf file:
PORT1=30000
PORT2=10006
STORAGE_DIR=/var/spool/srmToDicomCache/srmCache
PROVIDER_TITLE=STORESCP
Place these files under /etc.
To launch the daemons, as root do:
$ service srmServer start
$ service sdm_storescp start
$ service archive_server start
or
$ /etc/init.d/srmServer start
$ /etc/init.d/sdm_storescp start
$ /etc/init.d/archive_server start
/*****************************************************************************/
// For tests
/*****************************************************************************/
If you use srmcp
1. Put test
$ ./srmcp -debug=true -use_proxy=true -x509_user_proxy=/tmp/x509up_u6011 -webservice_protocol=http file:////home/jouvenot/trunk/images/ct.001 srm://grid13.lal.in2p3.fr:8443/xxx/yyy/try001
2. Get test
$ ./srmcp -debug=true -use_proxy=true -x509_user_proxy=/tmp/x509up_u6011 srm://grid13.lal.in2p3.fr:8443/home/jouvenot/trunk/images/ct.001 file:////home/jouvenot/try002
To get srmcp see https://srm.fnal.gov/twiki/bin/view/SrmProject/UnixFSSrm
AMGA topics
AMGA RPMs are available from the AMGA web site. We are currently using the glite-amga-cli-1.0.0-rc4c_noglobus.rpm client package. The client's connection to the server is configured through a ~/.mdclient.config file.
Trigger script
The trigger script makes use of:
* gLite and AMGA clients (see instructions above)
* the DCMTK library (download and install dcmtk-3.5.3-2.i386.rpm)
* SRM-DICOM (download and install it from LAL)
It can be retrieved from CVS and compiled as follows:
cvs -d :ext:jmontagn@jra1mw.cvs.cern.ch:/cvs/jra1mw co org.egee.na4.mdm
cd org.egee.na4.mdm/trigger
cd getuidSrc && make
cd writeMetaDataToAMGASrc && ./bootstrap
To use the trigger script, several environment variables need to be defined:
export GLITE_LOCATION=/opt/glite
export AMGA_DIR=${GLITE_LOCATION}
export GLITE_BIN_LOCATION=${GLITE_LOCATION}/bin
export SRMCP_LOCATION=/opt/d-cache/srm/bin
export GET_UID_LOCATION=<path to getuid binary compiled above>
export WRITE_TO_AMGA_LOCATION=<path to rdmd binary compiled above (writeMetaDataToAMGASrc)>
gLite I/O Server with file backend
As 'root' you can install the packages with the following commands:
echo 'rpm http://egee-jra1-data.web.cern.ch/egee-jra1-data/repository-biomed dist glite' >/etc/apt/sources.list.d/egee-na4-biomed.list
echo 'rpm http://glitesoft.cern.ch/EGEE/gLite/APT/R1.4/ rhel30 externals' >/etc/apt/sources.list.d/glite-externals.list
apt-get update
apt-get install glite-data-io-server
And some more, which might be useful for debugging:
apt-get install glite-service-discovery-cli
apt-get install glite-data-srm-cli
Setting up the environment and configuring the service:
groupadd dicom
useradd dicom -g dicom -m
mkdir /var/glite /var/glite/lock /var/glite/tmp
chown -R dicom.dicom /var/glite
mkdir /var/log/glite
chown dicom.dicom /var/log/glite
# you have to add later the proper DNs to this file
touch /etc/grid-security/grid-mapfile
ln -s /etc/grid-security/grid-mapfile /home/dicom/.gridmap
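Each grid-mapfile entry maps a certificate subject DN to a local account: the quoted DN followed by the account name. The DN below is illustrative; the local account is the dicom user created above.

```shell
# Ensure the directory exists, then append an example mapping from a
# certificate DN (illustrative) to the local 'dicom' account.
mkdir -p /etc/grid-security
cat >> /etc/grid-security/grid-mapfile <<'EOF'
"/C=CH/O=CERN/OU=GRID/CN=Jane Doe 1234" dicom
EOF
```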
# check to see if it works
su - dicom
# configuring the service
export GLITE_LOCATION=/opt/glite
export LD_LIBRARY_PATH=$GLITE_LOCATION/externals/lib:$GLITE_LOCATION/lib:$LD_LIBRARY_PATH
/opt/glite/bin/glite_data_config_generator \
-f /opt/glite/share/config/glite-data-io-server/io-server-fireman.config.xml \
-o /opt/glite/etc/glite-io-server-biomed.properties.xml \
-l /opt/glite/etc/glite-io-server-biomed.log-properties
level: 1 - Normal
Port: 5382
# all the other optional values should _not_ be set
FasEndPoint: https://lxb1434.cern.ch:8443/EGEE/glite-data-catalog-service-fr-mysql/services/FiremanCatalog
SrmEndPoint: httpg://egee1.unice.fr:8443/srm/managerv1
SeHostName: egee1.unice.fr
SeProtocol: file
RootPath: /
FiremanEndPoint : https://lxb1434.cern.ch:8443/EGEE/glite-data-catalog-service-fr-mysql/services/FiremanCatalog
Log File name: /var/log/glite/glite-io-server-biomed.log
cat >/opt/glite/etc/glite-data-io-server.conf <<EOF
export GRIDMAP=/etc/grid-security/grid-mapfile
export GLITE_SERVICE_USER=dicom
EOF
# this is a bug:
chmod +x /opt/glite/etc/init.d/glite-data-io-server
# registering the I/O server system wide and starting it
ln -s /opt/glite/etc/init.d/glite-data-io-server /etc/init.d/
service glite-data-io-server start
# running after reboot
chkconfig --add glite-data-io-server
# rotating the log files
ln -s /opt/glite/etc/logrotate.d/glite-data-io-server /etc/logrotate.d/
Removing the service:
chkconfig --del glite-data-io-server
rm /etc/init.d/glite-data-io-server
rm /etc/logrotate.d/glite-data-io-server
rm -rf /var/glite/
rm -rf /var/log/glite/
userdel dicom
groupdel dicom
rm -rf /home/dicom/
# if no other gLite component is installed, then simply:
rpm -qa | grep glite | xargs rpm -e
rpm -e d-cache-client
--
AkosFrohner - 02 Dec 2005
MDM client library and image format conversion
The RPMs and SRPMs for the MDM client library can be found here:
All these RPMs install in /usr/local/. The source RPMs or the binaries (for Scientific Linux 3 and Fedora Core 4) can be downloaded here:
On the server side, the following RPMs provide the image format conversion functionality:
- jmc-0.0.0-1.i386.rpm
- gdcm-1.3.0-1.3.0.20051206.i386.rpm
- creaimage-0.0.0-3.i386.rpm
The binary /usr/local/imgconv provides the format conversion functionality.
On the client side, the following RPMs are required. They provide image R/W, basic manipulation, and visualisation.
- jmc-0.0.0-1.i386.rpm
- gdcm-1.3.0-1.3.0.20051206.i386.rpm
- creaimage-0.0.0-3.i386.rpm
- creaviewer-0.0.0-4.i386.rpm
- mdmdemo-0.0.0-5.i386.rpm
For creaviewer, there is also a dependency on wxwindows, which you can install from:
- wx-base-ansi-2.6.2-1.i386.rpm
- wx-gtk2-ansi-2.6.2-1.i386.rpm
- wx-gtk2-ansi-gl-2.6.2-1.i386.rpm
In addition, if you need to recompile creaviewer from source RPM, you will need:
- wx-base-ansi-devel-2.6.2-1.i386.rpm
- wx-gtk2-ansi-devel-2.6.2-1.i386.rpm
If you need to recompile gdcm from source RPM you will also need cmake (>= 2.2):
- cmake-2.2.0-2.sl3.i386.rpm
Building from source
This is a step-by-step guide for building the client RPMs from source. This should work on any platform.
The EGEE Developer's Guide should provide more insight into the details. For CVS usage, please have a look at the CVS Service for LCG pages.
You need at least CVS, Ant 1.6, gcc 3.2 and Java SDK 1.4 installed.
Checking out the base modules
export CVSROOT=:pserver:anonymous@jra1mw.cvs.cern.ch:/cvs/jra1mw
export WORKSPACE=$PWD
cvs co -r glite_branch_1_5_0 org.glite
cd org.glite
ant -f project/glite.csf.xml amga data security service-discovery
Do not be worried! There are many packages checked out, but you do not need them all!
Compilation of the most important modules
# GridSite
cd $WORKSPACE/org.gridsite.core
ant dist
# security components
cd $WORKSPACE/org.glite.security
ant -Dtarget=dist
# service-discovery components
cd $WORKSPACE/org.glite.service-discovery
ant -Dtarget=dist cli file-c
# data management components
cd $WORKSPACE/org.glite.data
ant -Dtarget=dist hydra-cli catalog-cli transfer-cli srm-cli
# metadata components
cd $WORKSPACE/org.glite.amga
ant -Dtarget=dist server
Installation of the required packages
cd $WORKSPACE/dist/slc30/i386/RPMS
wget http://glite.web.cern.ch/glite/packages/externals/bin/rhel30/RPMS/glite-essentials-cpp-1.1.1-1_EGEE.i386.rpm
wget http://glite.web.cern.ch/glite/packages/externals/bin/rhel30/RPMS/vdt_globus_sdk-VDT1.2.2rh9-1.i386.rpm
sudo rpm -ivh glite-amga-cli-1.0.0-1.i386.rpm \
glite-data-catalog-api-c-2.0.0-4.i386.rpm \
glite-data-catalog-cli-1.7.2-4.i386.rpm \
glite-data-hydra-cli-1.0.2-1.i386.rpm \
glite-data-io-base-2.0.0-1.i386.rpm \
glite-data-io-client-1.5.1-3.i386.rpm \
glite-data-io-gss-auth-1.0.0-1.i386.rpm \
glite-data-io-quanta-1.0.0-1.i386.rpm \
glite-data-srm-api-c-1.1.0-1.i386.rpm \
glite-data-srm-cli-1.2.2-2.i386.rpm \
glite-data-util-c-1.2.0-1.i386.rpm \
glite-essentials-cpp-1.1.1-1_EGEE.i386.rpm \
glite-service-discovery-api-c-2.2.0-0.i386.rpm \
glite-service-discovery-cli-2.2.0-3.i386.rpm \
glite-service-discovery-file-c-2.1.0-2.i386.rpm \
gridsite-1.1.15-1.i386.rpm \
vdt_globus_sdk-VDT1.2.2rh9-1.i386.rpm
# also needed for the image handling libraries and wxWindows
sudo apt-get install libjpeg-devel libpng-devel libtiff-devel
sudo apt-get install gtk2-devel \
XFree86-devel atk-devel fontconfig-devel freetype-devel pango-devel
Compilation of the demo sources
# retrieving the source RPMs
wget http://www.i3s.unice.fr/~johan/mdm/creaimage-0.0.0-3.src.rpm
wget http://www.i3s.unice.fr/~johan/mdm/creaviewer-0.0.0-4.src.rpm
wget http://www.i3s.unice.fr/~johan/mdm/jmc-0.0.0-1.src.rpm
wget http://www.i3s.unice.fr/~johan/mdm/mdmdemo-0.0.0-5.src.rpm
wget http://www.i3s.unice.fr/~johan/mdm/wx-gtk2-ansi-2.6.2-1.src.rpm
wget ftp://sunsite.cnlab-switch.ch/mirror/mandrake/official/current/SRPMS/contrib/editline-1.12-2mdk.src.rpm
# creating a build environment
mkdir $HOME/rpm
cd $HOME/rpm
mkdir BUILD RPMS SOURCES SPECS SRPMS
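Instead of passing --define "_topdir ..." to every rpm and rpmbuild invocation below, you can persist the build root in ~/.rpmmacros, which is the standard rpm mechanism for overriding macros:

```shell
# Point rpm/rpmbuild at the build tree created above, so the
# --define "_topdir ..." flag could be omitted from later commands.
cat > $HOME/.rpmmacros <<EOF
%_topdir $HOME/rpm
EOF
```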
# installing the build dependencies
sudo apt-get install libjpeg-devel libpng-devel libtiff-devel
sudo apt-get install gtk2-devel \
XFree86-devel atk-devel fontconfig-devel freetype-devel pango-devel
sudo apt-get install libtermcap-devel
rpm -i --define "_topdir ${HOME}/rpm" \
wx-gtk2-ansi-2.6.2-1.src.rpm \
creaimage-0.0.0-3.src.rpm \
creaviewer-0.0.0-4.src.rpm \
jmc-0.0.0-1.src.rpm \
mdmdemo-0.0.0-5.src.rpm \
editline-1.12-2mdk.src.rpm
cd $HOME/rpm/SPECS
rpmbuild -bb --clean --without unicode --define "_topdir ${HOME}/rpm" wxGTK.spec
sudo rpm -ivh $HOME/rpm/RPMS/i386/wx*.rpm
rpmbuild -bb --clean --define "_topdir ${HOME}/rpm" jmc.spec
sudo rpm -ivh $HOME/rpm/RPMS/i386/jmc-0.0.0-1.i386.rpm
rpmbuild -bb --clean --define "_topdir ${HOME}/rpm" creaimage.spec
sudo rpm -ivh $HOME/rpm/RPMS/i386/creaimage-0.0.0-3.i386.rpm
rpmbuild -bb --clean --define "_topdir ${HOME}/rpm" --without gl creaviewer.spec
sudo rpm -ivh $HOME/rpm/RPMS/i386/creaviewer-0.0.0-4.i386.rpm
rpmbuild -bb --clean --define "_topdir ${HOME}/rpm" --without gl mdmdemo.spec
sudo rpm -ivh $HOME/rpm/RPMS/i386/mdmdemo-0.0.0-5.i386.rpm
# "patch" the spec file
wget -O editline.spec https://twiki.cern.ch/twiki/pub/EGEE/DMEncryptedStorage/editline.spec
rpmbuild -bb --clean --define "_topdir ${HOME}/rpm" editline.spec
sudo rpm -ivh $HOME/rpm/RPMS/libeditline0-1.12-3.i386.rpm
See this message for more information.
Last edit:
AkosFrohner on 2009-03-09 - 10:36
Maintainer:
AkosFrohner