Proxy Server
A proxy server is a computer system sitting between the client requesting a web document and the
target server (another computer system) serving the document. In its simplest form, a proxy server
facilitates communication between client and target server without modifying requests or replies.
When we request a resource from the target server, the proxy server intercepts our
connection and presents itself as a client to the target server, requesting the resource on our
behalf. When a reply is received, the proxy server returns it to us, giving the impression that we
have communicated directly with the target server.
In advanced forms, a proxy server can filter requests based on various rules, and may allow
communication only when requests can be validated against the available rules. The rules are
generally based on the IP address of the client or target server, the protocol, the content type of
web documents, and so on.
Sometimes, a proxy server can modify requests or replies, or can even store the replies from the
target server locally to fulfil the same request from the same or other clients at a later stage.
Storing replies locally for use at a later time is known as caching. Caching is a popular
technique used by proxy servers to save bandwidth, reduce load on web servers, and improve the
end user's browsing experience.
Installation
At a terminal prompt, enter the following command to install the Squid server:
$ sudo apt install squid
Configuration
Prior to editing the configuration file, you should make a copy of the original file and write-protect
it, so you keep the original settings as a reference and can reuse them as necessary.
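For example, assuming the default configuration path of /etc/squid/squid.conf, the backup might look like:

```
sudo cp /etc/squid/squid.conf /etc/squid/squid.conf.original
sudo chmod a-w /etc/squid/squid.conf.original
```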
To set your Squid server to listen on TCP port 8080 instead of the default TCP port 3128, change
the http_port directive as follows:
http_port 8080
or
http_port www.example.com:8080
Access Control Lists (ACLs) are the base elements for access control, and are normally used in
combination with other directives such as http_access , icp_access , and so on, to control access to
various Squid components and web resources. ACLs identify a web transaction, and then directives
such as http_access and cache decide whether the transaction should be allowed or not.
Also, we should note that the directives related to accessing resources generally end with _access .
Every access control list definition must have a name and type, followed by the values for
that particular ACL type:
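As a sketch, the general form is shown below; the localnet name and address in the second line are purely illustrative:

```
acl aclname acltype argument ...
acl localnet src 10.0.0.0/8
```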
Using Squid's access control, you may configure the Internet services proxied by Squid to be
available only to users with certain Internet Protocol (IP) addresses. For example, we will illustrate
access by users of the 192.168.42.0/24 subnetwork only:
Add the following to the bottom of the ACL section of your /etc/squid/squid.conf file:
acl fortytwo_network src 192.168.42.0/24
Then, add the following to the top of the http_access section of your /etc/squid/squid.conf
file:
http_access allow fortytwo_network
Using the excellent access control features of Squid, you may configure the Internet services
proxied by Squid to be available only during normal business hours. For example, we'll illustrate
access by employees of a business that operates between 9:00AM and 5:00PM, Monday
through Friday, and which uses the 10.1.42.0/24 subnetwork:
Add the following to the bottom of the ACL section of your /etc/squid/squid.conf file:
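A sketch of ACLs matching this description (the ACL names are assumptions) might be:

```
acl biz_network src 10.1.42.0/24
acl biz_hours time M T W H F 9:00-17:00
```

These could then be combined at the top of the http_access section, e.g. `http_access allow biz_network biz_hours`, so access is granted only when both ACLs match.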
1. A class 1 pool allows you to restrict the rate of bandwidth for large downloads. This
restricts the download rate of a large file.
Implementing a class 1 delay pool
Steps:
1. Define the ACL for the delay pool
2. Define the number of delay pools (delay_pools 1)
3. Define the class of the delay pool (delay_class 1 1)
4. Set the parameters for the pool number (delay_parameters 1 restore_rate/max_size). Once the
request exceeds max_size, Squid limits the bandwidth to the given
restore_rate for a user/source (the measurement is taken in bytes), e.g. delay_parameters 1
20000/15000
5. Enable delay_access to activate the feature (delay_access)
Configure the class 1 delay pool:
# vim squid.conf
acl bw_users src 192.168.1.0/24 # The ACL defined for the network
delay_pools 1 # This gives the number of delay pools
delay_class 1 1 # This defines delay pool number 1 as a class 1 delay pool
delay_parameters 1 20000/15000 # The parameters for pool number 1: a restore rate of 20000 once usage hits 15000 bytes
delay_access 1 allow bw_users # This is the access tag that ties the pool to the ACL bw_users
# reload squid
Test the rate of bandwidth using wget. Here we can see that the rate is restricted to 10% of
the ceiling from the beginning for all sources. This leaves the rest of the bandwidth free for
other purposes; i.e., out of 1.5M we have taken a ceiling of .5M for the internal network and told
Squid that each request from a source should get 10% of the .5M bandwidth.
In the class 1 pool, the bandwidth restriction started only after the maximum download size was
reached. In class 2, instead of a maximum download size, we define a ceiling, and the user is
restricted to it from the beginning.
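A minimal class 2 configuration consistent with the figures mentioned above (a .5M aggregate ceiling with roughly 10% per source) might look like the following; the ACL name and exact byte values are assumptions:

```
acl bw_users src 192.168.1.0/24
delay_pools 1
delay_class 1 2
# Class 2 takes two bucket pairs: aggregate restore/max, then per-user restore/max (in bytes)
delay_parameters 1 500000/500000 50000/50000
delay_access 1 allow bw_users
```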
# reload squid
This makes Squid cap bandwidth usage at 50% per subnet (in case we have 2 subnets in
our network), and each user will get 20% of the subnet ceiling. (I.e., out of 1.5M we have taken a
ceiling of .5M; each subnet shares 50% of this .5M ceiling (.25M); within each subnet, users
will get 20% (.05M) of the subnet ceiling (.25M).)
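Per-subnet buckets are provided by class 3 pools in Squid, which take aggregate, per-network and per-user bucket pairs. A sketch matching these figures (the ACL name and byte values are assumptions) would be:

```
acl bw_users src 192.168.0.0/16
delay_pools 1
delay_class 1 3
# aggregate (.5M), per-network (.25M), per-user (.05M) buckets, each as restore/max in bytes
delay_parameters 1 500000/500000 250000/250000 50000/50000
delay_access 1 allow bw_users
```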
# reload squid
This activates the class 2 pool only during office hours. Test by changing the
time on the Squid server after configuring the class 2 pool with a time period.
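A sketch of tying a delay pool to office hours follows; the time ACL, source network and byte values are assumptions:

```
acl office_hours time M T W H F 9:00-17:00
acl bw_users src 192.168.1.0/24
delay_pools 1
delay_class 1 2
delay_parameters 1 500000/500000 50000/50000
# The pool applies only when both ACLs match, i.e. during office hours
delay_access 1 allow bw_users office_hours
delay_access 1 deny all
```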
SquidAnalyzer parses the native access log format of the Squid proxy and generates general statistics
about hits, bytes, users, networks, top URLs, top second-level domains and denied URLs. Common and
combined log formats are also supported. SquidGuard logs can also be parsed, with ACL redirections
reported in the denied-URLs report. Statistic reports are oriented to user and bandwidth control; this is
not a pure cache statistics generator. SquidAnalyzer uses flat files to store data and doesn't need any
SQL, SQLite or Berkeley databases. The analyzer is incremental, so it should be run from a daily
cron. If you have log rotation enabled, take care to run it before rotation is done.
REQUIREMENT
Nothing is required other than a modern Perl, version 5.8 or higher. Graphics are based on the Flotr2
JavaScript library, so they are drawn on the browser side with no extra installation required.
INSTALLATION
Generic install
If you want the package to be installed into the Perl distribution, just do the following:
perl Makefile.PL
make
make install
Custom install
perl Makefile.PL \
LOGFILE=/var/log/squid3/access.log \
BINDIR=/usr/bin \
CONFDIR=/etc \
HTMLDIR=/var/www/squidreport \
BASEURL=/squidreport \
MANDIR=/usr/share/man/man3 \
DOCDIR=/usr/share/doc/squidanalyzer
Post installation
1. Modify your httpd.conf to allow access to the HTML output, as follows:
Alias /squidreport /var/www/squidanalyzer
<Directory /var/www/squidanalyzer>
Options -Indexes FollowSymLinks MultiViews
AllowOverride None
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Directory>
2. Schedule squid-analyzer to run from a daily cron (before log rotation, as noted above), or run it manually.
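A daily crontab entry might look like the following; the binary path assumes the BINDIR=/usr/bin custom install shown above, and the schedule is an assumption:

```
# /etc/crontab — run SquidAnalyzer nightly, before the logs are rotated
0 2 * * * root /usr/bin/squid-analyzer
```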