Squid – making a transparent HTTPS proxy

There seems to be a bit of confusion about configuring SQUID to transparently intercept SSL (read: HTTPS) connections. Some sites say it’s plain not possible:

http://www.faqs.org/docs/Linux-mini/TransparentProxy.html#ss2.3

Recent developments in SQUID have made this possible. This article explores how to set it up at a basic level. The SQUID proxy will essentially act as a man in the middle. The motivation behind setting this up is to decrypt HTTPS connections so that content filtering and similar policies can be applied.

There are concerns that transparently intercepting HTTPS traffic is unethical and can raise legal issues. True, and I agree that monitoring HTTPS connections without properly and explicitly notifying the user is bad, but we can use technical means to ensure that the user is properly notified and even prompted to accept the monitoring or back out. More on this towards the end of the article.

So, on to the technical details of setting up the proxy. First, install the dependencies. We will need to compile SQUID from source, since by default it is not built with the necessary switches. I recommend downloading the latest 3.1 version, especially if you want to notify users about the monitoring. On Ubuntu:

apt-get install build-essential libssl-dev

Note: for CentOS users, use openssl-devel rather than libssl-dev.

build-essential pulls in the compilers, while libssl-dev provides the SSL development libraries that enable SQUID to intercept the encrypted traffic. This package (libssl-dev) is needed during compilation; without it, running make will produce errors similar to the following in the console:

error: ‘SSL’ was not declared in this scope

Download and extract the SQUID source code from their site. Next, configure, compile and install the source code using:

./configure --enable-icap-client --enable-ssl
make
make install

Note the switches I included in the configure command:

* --enable-icap-client : we'll need this to use ICAP to provide a notification page to clients that they are being monitored.

* --enable-ssl : this is a prerequisite for SslBump, which SQUID uses to intercept SSL traffic transparently
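
Once make install completes, you can quickly confirm the binary was built with these switches. This is just a sketch, assuming the default /usr/local/squid install prefix:

/usr/local/squid/sbin/squid -v
# the configure options printed should include --enable-icap-client and --enable-ssl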

Once SQUID has been installed, a very important step is to create the certificate that SQUID will present to the end client. In a test environment, you can easily create a self-signed certificate using OpenSSL by using the following:

openssl req -new -newkey rsa:1024 -days 365 -nodes -x509 -keyout www.sample.com.pem -out www.sample.com.pem

This will of course cause the client browser to display a certificate warning, since the certificate is not signed by a trusted CA.

In an enterprise environment you'll probably want to generate the certificate using a CA that the clients already trust. For example, you could generate the certificate using Microsoft's CA and use certificate auto-enrolment to push the certificate out to all the clients in your domain.
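
If you do not have a Microsoft CA handy, the same idea can be illustrated with a small OpenSSL-based internal CA. This is only a sketch: the ca.crt/ca.key files are assumed to be your existing internal CA certificate and key, and the filenames are placeholders.

# generate a key and a certificate signing request for the proxy
openssl genrsa -out www.sample.com.key 2048
openssl req -new -key www.sample.com.key -out www.sample.com.csr
# sign the request with the internal CA (assumed to exist as ca.crt / ca.key)
openssl x509 -req -days 365 -in www.sample.com.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out www.sample.com.crt

Clients that already trust ca.crt will then accept the certificate SQUID presents without a warning.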

On to the actual SQUID configuration. Edit the /etc/squid.conf file to include the following:

always_direct allow all
ssl_bump allow all

http_port 192.9.200.32:3128 transparent

#the below should be placed on a single line
https_port 192.9.200.32:3129 transparent ssl-bump cert=/etc/squid/ssl_cert/www.sample.com.pem key=/etc/squid/ssl_cert/private/www.sample.com.pem

Note that you may need to change the "cert=" and "key=" values to point to the correct files in your environment. You will of course also need to change the IP address.

The first directive (always_direct) is needed because of SslBump. By default, ssl_bump operates as if the proxy were in accelerator mode, and in that mode the proxy does not know which backend server to retrieve the content from; in the cache.log debug output you would see "failed to select source for" errors. This directive instructs the proxy to ignore accelerator mode and go directly to the origin servers. More details on this here:

http://www.squid-cache.org/Doc/config/always_direct/

The second directive (ssl_bump) instructs the proxy to allow all SSL connections, but this can be modified to restrict access. You can also use the "sslproxy_cert_error" directive to deny access to sites with invalid certificates. More details on this here:

http://wiki.squid-cache.org/Features/SslBump
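
As an illustration of how this could be restricted, the snippet below is a sketch for SQUID 3.1 that skips bumping for a hypothetical banking domain and refuses servers whose certificates fail validation; the domain name is just a placeholder:

# do not bump (intercept) connections to this hypothetical domain
acl nobump_sites dstdomain .examplebank.com
ssl_bump deny nobump_sites
ssl_bump allow all

# refuse to forward traffic to servers presenting invalid certificates
sslproxy_cert_error deny all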

Start squid and check for any errors. If no errors are reported, run:

netstat -nap | grep 3129

to make sure the proxy is up and running. Next, configure iptables to perform destination NAT, basically to redirect the traffic to the proxy:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.9.200.32:3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.9.200.32:3129

The last thing to be done is to either place the proxy physically in line with the traffic or redirect the traffic to the proxy using a router. Keep in mind that the proxy will change the source IP address of the requests to its own IP; in other words, by default it does not reflect the client IP.
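
If the proxy box itself is acting as the inline gateway, it also needs to forward packets between its interfaces. A minimal sketch (standard Linux sysctl; persisting it in /etc/sysctl.conf is optional):

# enable IPv4 forwarding for the current session
echo 1 > /proc/sys/net/ipv4/ip_forward
# to make it permanent, set net.ipv4.ip_forward = 1 in /etc/sysctl.conf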

That was it in my case. I did try to implement something similar to the above, but using explicit mode. This was my squid.conf; note that only one port is needed for both HTTP and HTTPS, since HTTPS is tunnelled over HTTP using the CONNECT method:

always_direct allow all
ssl_bump allow all

#the below should be placed on a single line

http_port 8080 ssl-bump cert=/etc/squid/ssl_cert/proxy.testdomain.deCert.pem key=/etc/squid/ssl_cert/private/proxy.testdomain.deKey_without_Pp.pem
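
To sanity-check an explicit setup like this, you can point a client at the proxy with curl. This is only a sketch; the proxy address and target site are placeholders. The -k flag is needed because the bumped certificate will not match the requested hostname, and -v lets you inspect the certificate the proxy presents:

# send an HTTPS request through the explicit proxy on port 8080
curl --proxy http://192.9.200.32:8080 -k -v https://www.example.com/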

As regards my previous discussion of notifying users that they are being monitored, consider using greasyspoon:

http://dvas0004.wordpress.com/2011/02/28/squid-greasyspoon-enhancing-your-proxy-deployment-with-content-adaptation/

With this in place, you can instruct GreasySpoon to send a notification page to clients. If they accept this page, a cookie (let's say it is called "NotifySSL") is set. GreasySpoon can then check for the presence of this cookie in subsequent requests and, if present, allow the connection. If the cookie is not present, the user gets the notification page again. For security reasons cookies are normally only valid for one domain, so you may end up with users having to accept the notification for each different domain they visit. However, you can use GreasySpoon in conjunction with a backend MySQL database or similar to record source IP addresses that have already been notified and perform IP-based notifications. Anything is possible :)

– courtesy:  http://blog.davidvassallo.me/2011/03/22/squid-transparent-ssl-interception/

Squid https transparent proxy setup with SSL

Let's first understand how squid works in transparent mode. When setting up squid as a transparent proxy we forward all traffic arriving on port 80 to squid's own port, 3128 by default. Port 80 means the HTTP protocol. But what if we request Gmail, which uses HTTPS? That protocol sends requests to port 443, and if our iptables rules only forward traffic from port 80 to port 3128 we have forgotten about port 443, which HTTPS uses, while squid is an HTTP proxy server. Many folks may think it's easy enough to simply forward all traffic arriving on port 443 to squid's port 3128. No, that won't work, because an HTTPS connection establishes a secure channel over the network, and for that it uses a certificate and a public/private key pair (thank goodness for the RSA and DSA algorithms, since data encrypted with them is not easy to decrypt).

Squid is a middle man that rewrites packet headers and routes traffic out to the internet. So what we have to do is create a certificate and a public/private key pair for the internal network that the squid clients and the squid server can use; the squid server then routes the traffic out to the internet. For better performance it is preferable to have the certificate signed by a CA, since self-signed certificates slow the connection down a little. Because in transparent mode encryption and decryption are done twice, responses may be slower, so keep patient.

Steps are:

  1. iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
  2. iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 3130

Certificate and public/private key pair generation:

  1. openssl genrsa -des3 -out server.key 1024
  2. openssl req -new -key server.key -out server.csr

Steps to remove the passphrase:

  1. cp server.key server.key.org
  2. openssl rsa -in server.key.org -out server.key

Create server certificate

openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
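
Optionally, you can verify the resulting certificate before pointing squid at it (just a quick check, nothing more):

# inspect the subject and validity period of the freshly signed certificate
openssl x509 -in server.crt -noout -subject -dates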

Now make some changes to the squid.conf file:

  1. http_port 3128 transparent
  2. https_port 3130 transparent cert=/path/to/server.crt key=/path/to/server.key

Another easy way to create a certificate and public/private key pair is the genkey utility. In order to use it you must have the crypto-utils package installed on your machine.

Steps are:

  1. #yum -y install crypto-utils
  2. genkey -days 365 squidserver.hostname.com
  3. Hit next.
  4. Select the number of bits for the key (the default is 1024). The tool will then generate random bits.
  5. Generate the certificate.
  6. I suggest you never use a passphrase for the key, because if you assign a passphrase to the key then you have to share the passphrase along with the public key.
  7. Certificate and key are stored at /etc/pki/tls/certs/ and /etc/pki/tls/private/
  8. In squid.conf make the necessary changes like this:

http_port 3128 transparent

https_port 3130 transparent cert=/etc/pki/tls/certs/squidserver.hostname.com.crt key=/etc/pki/tls/private/squidserver.hostname.com.key
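
As a rough check that squid is presenting your certificate on the HTTPS port, you can attempt a handshake directly against it. This is just a sketch, assuming squid completes the handshake for a direct connection; adjust the address to your setup:

# inspect the certificate offered on squid's https_port
openssl s_client -connect 127.0.0.1:3130 </dev/null | openssl x509 -noout -subject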

Setup vacation auto-responder with Sendmail

Requirements: Vacation, Sendmail, SquirrelMail, the SquirrelMail Local User Autoresponder and Mail Forwarder plugin, the SquirrelMail Compatibility plugin, vsftpd and CentOS 5

1. Download vacation-1.2.6.3.tar.gz from the link below: –
http://vacation.sourceforge.net

2. Extract the vacation to a temporary directory as below: –
tar xvfz vacation-1.2.6.3.tar.gz -C /tmp

3. Change directory to /tmp/vacation-1.2.6.3 as below: –
cd /tmp/vacation-1.2.6.3

4. Run the “make” command as below: –
make

5. Copy the “vacation” binary to “/usr/bin” as below: –
cp vacation /usr/bin

6. Create a softlink in the Sendmail’s restricted shell utility “smrsh” as below: –
cd /etc/smrsh
ln -s /usr/bin/vacation vacation

7. Next, let's proceed with installing and configuring SquirrelMail's Local User Autoresponder and Mail Forwarder Plugin. Download local_autorespond_forward-3.0-1.4.0.tar.gz from the link below: –
http://www.squirrelmail.org/plugin_view.php?id=264

8. Extract the local_autoresponder_forward to SquirrelMail’s plugin directory (in CentOS 5) as below: –
tar xvfz local_autorespond_forward-3.0-1.4.0.tar.gz -C /usr/share/squirrelmail/plugins

9. Download the Compatibility plugin from the link below: –
http://www.squirrelmail.org/plugin_view.php?id=152

10. Extract the compatibility plugin to SquirrelMail’s plugin directory (in CentOS 5) as below: –
tar xvfz compatibility-2.0.8-1.0.tar.gz -C /usr/share/squirrelmail/plugins

11. Run the SquirrelMail’s config command as below: –
cd /usr/share/squirrelmail/config
./conf.pl

12. Patch your SquirrelMail according to your version as below: –
patch -p0 < patches/compatibility_patch-1.4.8.diff

13. In the SquirrelMail Configuration Main Menu, key-in "8" to enter the Plugins menu. Next, key-in the number that refers to "local_autorespond_forward" to install the plugin.

14. Create the local_autorespond_forward configuration file as below: –
cd /usr/share/squirrelmail/plugins/local_autorespond_forward
cp config.php.sample config.php

15. Edit the config.php file and change the following as below: –
$ftp_passive = 1;

16. Next, you need to enable the vsftpd service in init levels 3, 4 and 5 as below: –
chkconfig --level 345 vsftpd on

17. Let's start the vsftpd service as below: –
service vsftpd start

You can now begin to use SquirrelMail's local_autorespond_forward plugin to configure the vacation email responder for Sendmail; a sketch of what it manages is shown below.
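
For reference, what the plugin ultimately manages (via FTP) is the user's ~/.forward file, which tells Sendmail to pipe a copy of each message through vacation. A minimal hand-written sketch for a hypothetical user "alice" would look like this:

\alice, "|/usr/bin/vacation alice"

The leading backslash keeps delivering mail to alice's own mailbox without further forwarding, while the piped command invokes the vacation binary we linked into /etc/smrsh earlier.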

FreeBSD – fetchmail not scheduled

Fetchmail on my FreeBSD mail server did not seem to be running on a schedule.

I got this fix:

 

fetchmail -d 360 -f /usr/local/etc/fetchmailrc

 

-d – run fetchmail as a daemon

360 – poll every 360 seconds

-f – path to the configuration file
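
For completeness, here is a minimal sketch of what /usr/local/etc/fetchmailrc might contain; the server name, user and password are placeholders for illustration only:

# poll a remote POP3 mailbox over SSL and hand the mail to the local MTA
poll mail.example.com protocol pop3
  user "alice" password "secret" ssl

Remember that fetchmail refuses to run if the rc file containing a password is readable by other users, so chmod 600 it.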

mdadm – software RAID lost after reboot – solved

mdadm doesn't record the RAID settings automatically; after you set everything up, you have to remember to save the mdadm.conf file.

Anyway, I forgot to do that, and on reboot it hung my machine.

So here is how I recovered it.

originally, my stripe was created with:

mdadm -C /dev/md0 --level=raid0 --raid-devices=2 /dev/hda3 /dev/hdc1

and so I was able to recreate the stripe using:

mdadm -A /dev/md0 /dev/hda3 /dev/hdc1

and then mount it with:

mount /dev/md0 /www -t ext3

so then I saved the /etc/mdadm.conf file with:

echo 'DEVICE /dev/hda3 /dev/hdc1' > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
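
Before rebooting it's worth confirming the array is assembled and healthy (standard checks, nothing specific to this setup):

# quick overview of all active md arrays
cat /proc/mdstat
# detailed view of the reassembled stripe
mdadm --detail /dev/md0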

So before adding it back into my /etc/fstab, I reboot and check that I can mount it:

mount /dev/md0 /www -t ext3

and if that works (it did), I add it back into /etc/fstab:

/dev/md0  /www   ext3   defaults   1 2

and reboot again.
