
Revision 55, 2016-11-08 - DaveDykstra

Installing a frontier-squid2 cache server

 
NOTE: these are instructions for installing a frontier-squid2 package containing the former version of squid used for many years by the WLCG. Instructions to install the current version based on squid-3 are on the InstallSquid page. The frontier-squid2 package can run on the same computer as frontier-squid, as long as it is configured to use different ports. All of the paths in this package are similar to the paths in the frontier-squid package except they all have a '2' suffix; for example, /etc/squid2, /var/log/squid2, and /usr/sbin/squid2.
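For example, if frontier-squid is already using its default ports, one possible sketch of moving frontier-squid2 onto different ports in /etc/squid2/customize.sh (described in the Configuration section below) is the following; 3129 and 4401 are illustrative values, not required ones:
    setoption("http_port", "3129")
    setoption("snmp_port", "4401")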
 
The frontier-squid2 software package is a patched version of the standard squid http proxy cache software, pre-configured for use by the Frontier distributed database caching system. This installation is recommended for use by Frontier in the LHC CMS & ATLAS projects, and also works well with the CernVM FileSystem. Many people use it for other applications as well; if you have any questions or comments about general use of this package, contact frontier-talk@cern.ch.
  If you have any problems with the software or installation, or would like to suggest an improvement to the documentation, please submit a support request to the Frontier Application Development JIRA.

Why use frontier-squid2 instead of regular squid?

 
The most important feature of frontier-squid2 is that it supports the HTTP standard headers Last-Modified and If-Modified-Since more correctly than other distributions of squid. The Frontier distributed database caching system, which is used by the LHC projects ATLAS and CMS, depends on this feature working properly, so that is the main reason why that project maintains this squid distribution. Older versions of squid2 (including the one distributed with Red Hat EL 5) do not correctly support this feature, as documented in the infamous squid bug #7. Also, the frontier-squid2 package contains a couple of related patches that are not in any standard squid distribution. Details are in the beginning paragraph of the MyOwnSquid twiki page. Although this package expressly supports If-Modified-Since, it also works well with applications that do not require If-Modified-Since, including CVMFS. The collapsed_forwarding feature, which is important for the most common grid applications that use squid, is also missing from most versions of squid but is included in the frontier-squid2 package.
  In addition, the package has several additional features:
  1. A configuration file generator, so configuration customizations can be preserved across package upgrades even when the complicated standard configuration file changes.
  2. The ability to easily run multiple squid processes listening on the same port, in order to support more networking throughput than can be handled by a single CPU core (squid2 is single-threaded and has no concept of multiple workers like squid3).
 
  3. Automatic cleanup of the old cache files in the background when starting squid, to avoid problems with cache corruption.
  4. Default access control lists to permit remote performance monitoring from shared WLCG squid monitoring servers at CERN.
  5. The default log format is more human readable and includes contents of client-identifying headers.
  3) What network specs?
Latencies to the worker nodes will be lower if you have large bandwidth. The network is almost always the bottleneck for this system, so at least a gigabit each is highly recommended. If you have many job slots, 2 bonded gigabit network connections are even better, and squid on one core of a modern CPU can pretty much keep up with 2 gigabits. Squid is single-threaded, so if you're able to supply more than 2 gigabits, multiple squid processes on the same machine need to be used to serve the full throughput. This is supported in the frontier-squid2 package (instructions below), but each squid needs its own memory and disk space.
  4) How many squids do I need?
 

Software

The instructions below are for the frontier-squid2 rpm version >= 2.7STABLE9-23.1 on a Scientific Linux version 5, 6 or 7 based system. The rpm is based on the frontier-squid2 source tarball; there isn't separate documentation for installing from it, but the tarball is available and the steps are very similar to the instructions for installing directly from the frontier-squid tarball. Please see the rpm Release Notes for details on what has changed in recent versions. If, for some reason, you prefer to use a distribution of squid other than frontier-squid or frontier-squid2, see MyOwnSquid.
 

Puppet

A puppet module for configuring frontier-squid is available on puppet-forge which understands a lot of the following instructions. If you're using puppet, check there first. Note that the puppet module is for frontier-squid so you would have to adapt it for frontier-squid2.
 

Preparation

By default the frontier-squid2 rpm installs files with a "squid" user id and group. If they do not exist, the rpm will create them. If your system has its own means of creating logins, you should create the login and group before installing the rpm. If you want the squid process to use a different user id (historically it has been "dbfrontier"), then before installing the rpm create the file /etc/squid2/squidconf with, for example, the following contents:
 
    export FRONTIER_USER=dbfrontier
    export FRONTIER_GROUP=dbfrontier
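If you do use a custom account, it needs to exist before the rpm is installed. A minimal sketch of creating it follows; the account name and options are only an illustration, not something required by the package:
    # groupadd dbfrontier
    # useradd -r -g dbfrontier -s /sbin/nologin dbfrontier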
 

Next, install the package with the following command:

    # yum install frontier-squid2

 

Set it up to start at boot time with this command:

    # chkconfig frontier-squid2 on

 

Configuration

Custom configuration is done in /etc/squid2/customize.sh. That script invokes functions that edit a supplied default squid.conf source file to generate the final squid.conf that squid sees when it runs. Comments in the default installation of customize.sh give more details on what can be done with it. Whenever /etc/init.d/frontier-squid2 runs it generates a new squid.conf if customize.sh has been modified.
  It is very important for security that squid not be allowed to proxy requests from everywhere to everywhere. The default customize.sh allows incoming connections only from standard private network addresses and allows outgoing connections to anywhere. If the machines that will be using squid are not on a private network, change customize.sh to include the network/maskbits for your network. For example:
    setoption("acl NET_LOCAL src", "131.154.0.0/16")

 

Now that the configuration is set up, start squid with this command:

    # service frontier-squid2 start

 

To have a change to customize.sh take effect while squid is running, run the following command:

    # service frontier-squid2 reload

 

Moving disk cache and logs to a non-standard location

Often the filesystem containing the default locations for the disk cache (/var/cache/squid2) and logs (/var/log/squid2) isn't large enough, and there's more space available in another filesystem. To move them to a new location, simply change the directories into symbolic links to the new locations while the service is stopped. Make sure the new directories are created and writable by the user id that squid is running under. For example, if /data is a separate filesystem:
    # service frontier-squid2 stop
    # mv /var/log/squid2 /data/squid_logs2
    # ln -s /data/squid_logs2 /var/log/squid2
    # rm -rf /var/cache/squid2/*
    # mv /var/cache/squid2 /data/squid_cache2
    # ln -s /data/squid_cache2 /var/cache/squid2
    # service frontier-squid2 start

 
Alternatively, instead of creating symbolic links you can set the cache_log and coredump_dir options, the second parameter of the cache_dir option, and the first parameter of the access_log option in /etc/squid2/customize.sh. For example:
    setoption("cache_log", "/data/squid_logs2/cache.log")
    setoption("coredump_dir", "/data/squid_cache2")
    setoptionparameter("cache_dir", 2, "/data/squid_cache2")
    setoptionparameter("access_log", 1, "daemon:/data/squid_logs2/access.log")

 

It's recommended to use the "daemon:" prefix on the access_log path because that causes squid to use a separate process for writing to logs, so the main process doesn't have to wait for the disk. It is on by default for those who don't set the access_log path.

Changing the size of log files retained

The access.log is rotated each night, and also if it is over a given size (default 5 GB) when it checks each hour. You can change that value by exporting the environment variable SQUID_MAX_ACCESS_LOG in /etc/sysconfig/frontier-squid2 to a different number of bytes. You can also append M for megabytes or G for gigabytes. For example for 20 gigabytes each you can use:
 
    export SQUID_MAX_ACCESS_LOG=20G
By default, frontier-squid2 compresses log files when they are rotated, and saves up to 9 access.log.N.gz files where N goes from 1 to 9. In order to estimate disk usage, note that the rotated files are typically compressed to a bit under 15% of their original size, and that the uncompressed size can go a bit above $SQUID_MAX_ACCESS_LOG because the cron job only checks four times per hour. For example, for SQUID_MAX_ACCESS_LOG=20G the maximum size will be a bit above 20GB plus 9 times 3GB, so allow 50GB to be safe.
  If frontier-awstats is installed (typically only on central servers), an additional uncompressed copy is also saved in access.log.0.
As an alternative to setting the maximum size of each log file, you can leave each file at the default size and change the number of log files retained. For example, for 50 files (about 6GB total space) set the following in /etc/squid2/customize.sh:
 
    setoption("logfile_rotate", "50")
It is highly recommended to keep at least 3 days worth of logs, so that problems that happen on a weekend can be investigated during working hours. If you really do not have enough disk space for logs, the log can be disabled with the following in /etc/squid2/customize.sh:
 
    setoption("access_log", "none")
Then after doing service frontier-squid2 reload (or service frontier-squid2 start if squid was stopped) remember to remove all the old access.log* files.
 
On the other hand, the compression of large rotated logs can take a considerable amount of time, so if you have plenty of disk space and don't want the additional disk I/O and CPU resources taken during rotation, you can disable rotate compression by putting the following in /etc/sysconfig/frontier-squid2:
 
    export SQUID_COMPRESS_LOGS=false
That uses the old method of telling squid to do the rotation, which keeps access.log.N where N goes from 0 to 9, for a total of 11 files including access.log. When compression is turned off, the default SQUID_MAX_ACCESS_LOG is reduced from 5GB to 1GB, so override that to set your desired size. When converting between compressed and uncompressed format, all the files of the old format are automatically deleted the first time the logs are rotated.
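For example, a sketch of an /etc/sysconfig/frontier-squid2 that disables compression but keeps the 5 GB rotation size (both variables are described above; the size shown is just an illustration):
    export SQUID_COMPRESS_LOGS=false
    export SQUID_MAX_ACCESS_LOG=5G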
  To enable this, your site should open incoming firewall(s) to allow UDP requests to port 3401 from 128.142.0.0/16, 188.184.128.0/17, and 188.185.128.0/17. If you run multiple squid processes, each one will need to be separately monitored. They listen on increasing port numbers, the first one on port 3401, the second on 3402, etc. When that is ready, register the squid with WLCG to start the monitoring.
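If your site manages its firewall directly with iptables, a minimal sketch of rules allowing those monitoring requests might look like the following (adapt it to whatever firewall tooling your site actually uses):
    # iptables -I INPUT -p udp --dport 3401 -s 128.142.0.0/16 -j ACCEPT
    # iptables -I INPUT -p udp --dport 3401 -s 188.184.128.0/17 -j ACCEPT
    # iptables -I INPUT -p udp --dport 3401 -s 188.185.128.0/17 -j ACCEPT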
When running both frontier-squid2 and frontier-squid on the same computer, one of them will need to change the monitoring port, for example with the following in /etc/squid2/customize.sh:
    setoption("snmp_port", "4401")
 Note: some sites are tempted to not allow requests from the whole range of IP addresses listed above, but we do not recommend that because the monitoring IP addresses can and will change without warning. Opening the whole CERN range of addresses has been cleared by security experts on the OSG and CMS security teams, because the information that can be collected is not sensitive information. If your site security experts still won't allow it, the next best thing you can do is to allow the aliases wlcgsquidmon1.cern.ch and wlcgsquidmon2.cern.ch. Most firewalls do not automatically refresh DNS entries, so you will also have to be willing to do that manually whenever the values of the aliases change.

Testing the installation

 
    $ export http_proxy=http://yoursquid.your.domain:3128
and perform the fnget.py test twice again. It should pass through your squid, and cache the response. To confirm that it worked, look at the squid access log (in /var/log/squid2/access.log if you haven't moved it). The following is an excerpt:
 
    128.220.233.179 - - [22/Jan/2013:08:33:17 +0000] "GET http://cmsfrontier.cern.ch:8000/FrontierProd/Frontier?type=frontier_request:1:DEFAULT&encoding=BLOBzip&p1=eNorTs1JTS5RMFRIK8rPVUgpTcwBAD0rBmw_ HTTP/1.0" 200 810 TCP_MISS:DIRECT 461 "fnget.py 1.5" "-" "Python-urllib/2.6"
 

Log file contents

Error messages are written to cache.log (in /var/log/squid2 if you haven't moved it) and are generally either self-explanatory or an explanation can be found with google.
 
Logs of every access are written to access.log (also in /var/log/squid2 if you haven't moved it) and the default frontier-squid2 format contains these fields:
 
  1. Source IP address
  2. User name from ident if any (usually just a dash)
  3. User name from SSL if any (usually just a dash)
  takes care of this problem.
  • If squid has difficulty creating cache directories on RHEL 6 or RHEL 7, like for example:
    # service frontier-squid2 start
    Generating /etc/squid2/squid.conf
    Initializing Cache...
    2014/02/21 14:43:53| Creating Swap Directories
    FATAL: Failed to make swap directory /var/cache/squid2/00: (13) Permission denied
    ...
    Starting 1 Frontier Squid... Frontier Squid start failed!!!
then if SELinux is enabled and you want to leave it on, try the following command:
    # restorecon -R /var/cache/squid2
And start frontier-squid2 again.
 

Inability to reach full network throughput

If you have a CPU that can't quite keep up with full network throughput, we have found that up to an extra 15% throughput can be achieved by binding the single-threaded squid process to a single core, to maximize use of the per-core on-chip caches. This is not enabled by default, but you can enable it by putting the following in /etc/sysconfig/frontier-squid2:
 
    export SETSQUIDAFFINITY=true
 
  1. Make sure there's a "daemon:" prefix on the access_log if you have changed its value.
  2. Reduce the max log size before compression and increase the number of log files retained, to decrease the length of time of each log compression.
  3. Disable compression if you have the space.
  4. As root run ionice -c1 -p PID for the pid listed in squid.pid (default /var/run/squid2/squid.pid) for each squid process run, as sketched after this list. This raises their I/O priority above ordinary filesystem operations.
 
  5. Disable the access log completely.
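As an illustration of item 4, a minimal sketch for a single squid process, assuming the default pid file location:
    # ionice -c1 -p $(cat /var/run/squid2/squid.pid)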

Running out of file descriptors

By default, frontier-squid2 makes sure that there are at least 4096 file descriptors available for squid, which is usually enough. However, in some situations with very many clients that might not be enough. When this happens, a message like this shows up in cache.log:
 
    WARNING! Your cache is running out of filedescriptors

There are two ways to increase the limit:

  1. Add a line such as ulimit -n 16384 in /etc/sysconfig/frontier-squid2.
 
  2. Set the nofile parameter in /etc/security/limits.conf or a file in /etc/security/limits.d. For example use a line like this to apply to all accounts:
    * - nofile 16384
    
    or replace the '*' with the squid user name if you prefer.
 If you have either a particularly slow machine or a high amount of bandwidth available, you may not be able to get full network throughput out of a single squid process. For example, our measurements with a 10 gigabit interface on a 2010-era machine with 8 cores at 2.27Ghz showed that 3 squids were required for full throughput.

Multiple squids can be enabled very simply by doing these steps:

  • Stop frontier-squid2 and remove the old cache and logs
 
  • Create subdirectories under your cache directory called 'squid0', 'squid1', up to 'squidN-1' for N squids, making sure they are writable by the user id that your squid runs under
  • Start frontier-squid2 again. This will automatically detect the extra subdirectories and start that number of squid processes. It will create corresponding log subdirectories and /var/run/squid2 subdirectories, and generate a separate squid configuration file for each process in /etc/squid2/.squid-N.conf. It will also assign each squid process to a particular core as described above.
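For example, a rough sketch of those steps for three squids, assuming the cache is still in the default /var/cache/squid2 location and squid runs as the squid user (adjust the paths, number of squids, and user id to your setup):
    # service frontier-squid2 stop
    # rm -rf /var/cache/squid2/*
    # mkdir /var/cache/squid2/squid0 /var/cache/squid2/squid1 /var/cache/squid2/squid2
    # chown squid:squid /var/cache/squid2/squid0 /var/cache/squid2/squid1 /var/cache/squid2/squid2
    # service frontier-squid2 start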
 When running multiple squids, all of the memory & disk usage is multiplied by the number of squids. For example, if you choose a cache_dir size of 100GB, running 3 squids will require 300GB for cache space. All the squids listen on the same port and take turns handling requests. Only squid0 will contact the upstream servers; the others forward requests to squid0 (this can be changed, see the next section).
If you want to revert to a single squid, reverse the above process including cleaning up the corresponding log directories, /var/run/squid2 subdirectories, and the generated configuration files.
 

Running independent squids on the same machine

By default multiple squids are configured so that only one of them will read from upstream servers, and others read from that squid. To disable that feature and instead have each separately read from the upstream server, you can put the following in /etc/sysconfig/frontier-squid2:
 
    export SQUID_MULTI_PEERING=false
  Note that there is currently no mechanism to have a different administrator-controlled configuration for each of the independent squids.
 

Having squid listen on a privileged port

This package runs squid strictly as an unprivileged user, so it is unable to open a privileged TCP port less than 1024. The recommended way to handle that is to have squid listen on an unprivileged port and use iptables to forward a privileged port to the unprivileged port. For example, to forward port 80 to port 8000, use this:

    # iptables -t nat -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8000
You can change the port that squid listens on with this in /etc/squid2/customize.sh:
 
    setoption("http_port","8000")
  It should report "offline_mode is now ON" which will prevent cached items from expiring. Then as long as everything was preloaded and the laptop doesn't reboot (because starting squid normally clears the cache) you should be able to re-use the cached data. You can switch back to normal mode with the same command or by stopping and starting squid.
To prevent clearing the cache on start, put the following in /etc/sysconfig/frontier-squid2:
 
    export SQUID_CLEAN_CACHE_ON_START=false

If you do that before the first time you start squid (or if you ever want to clear the cache by hand), run this to initialize the cache:

    # service frontier-squid2 cleancache

 
 