Relayd proxy "how to" ( relayd.conf )





Relayd is an open source load balancer able to handle protocol layers 3, 4 and 7. Relayd, formerly called hoststated, is the load balancing application developed by the OpenBSD group. It can be set up as a forward, reverse or TCP port style relay or redirector, and it can also be used as an SSL accelerator. It is a fast, secure and stable front end for a web server or web cluster.

Relayd is included in OpenBSD v4.3, but you can also build it if you sync with the latest cvs tree in -current. If you need assistance with CVS, look at our help page, Getting your local src repository to cvs -current.

Let's take a look at some example uses of relayd, why they might be useful in your environment, and the working configuration files to help you get started.





Example 1: Reverse HTTP proxy with layer 7 filtering - Similar to Pound reverse proxy

The first example is an application filtering HTTP proxy we can set up in front of an internal web server. Relayd will accept connections from Internet clients and filter their requests for security. Relayd will then make the request to the internal web server on behalf of the external clients. The internal web server returns the results to relayd, which then responds directly to the client.

This is the traffic ideology we are looking to support:


             Internet  -->  relayd reverse proxy  -->  internal LAN web server



This config is for an HTTP load balancer in front of three (3) web servers. It does some layer 7 filtering by looking at the headers coming from the clients. It can also change the "Server" header coming from our servers to globally anonymize our web server revisions. Take a look at the comments for each line in the config for details.

NOTE: The commented lines at the end for URL and request method filtering are a work in progress. We are waiting for a response from the developers.

#######################################################
###  Calomel.org  /etc/relayd.conf   START
###  Reverse HTTP proxy with layer 7 filtering
###   for an internal web server
#######################################################
 
## Macros
#
relayd_addr="127.0.0.1"
relayd_port="8080"

## If you have a back-end web cluster or a single machine,
## you can specify each host here. Then use the table name
## with load balancing in the "relay" directive below.
#
web_port="80"
table <web_hosts> { 10.10.10.11, 10.10.10.22, 10.10.10.33 }

## Global Options
#
# Interval in seconds at which the back-end hosts
# will be checked (default: 10 seconds)
interval 10
 
# Timeout for back-end servers to respond. Set to
# 200 for local servers and around 1000 for servers
# on other subnets. (default: 200 milliseconds)
timeout 200
 
# Number of child processes to run. (default: 5)
prefork 5
 
# Log state notifications after completed host
# checks. State can be up, down or unknown.
log updates

## Reverse layer 7 HTTP proxy with filtering
#
http protocol "httpfilter" {

   ### TCP performance options 
    tcp { nodelay, sack, socket buffer 65536, backlog 100 }

   ### Return HTTP/HTML error pages
    return error

   ### allow logging of remote client ips to internal web servers
    header append "$REMOTE_ADDR" to "X-Forwarded-For"

   ### set Keep-Alive timeout to global timeout
    header change "Keep-Alive" to "$TIMEOUT"

   ### close connections upon receipt
    header change "Connection" to "close"

   ### Block bad or abusive User-Agents (case insensitive)
    label "BAD user agent"
    request header filter "Bandia*" from "User-Agent"
    request header filter "TwoSands*" from "User-Agent"

   ### Block bad Referrers, (case insensitive)
    label "BAD referrer"
    request header filter "Napsack*" from "Referer"

   ### Anonymize our webserver's name/type
    response header change "Server" to "JustSomeServer"

   ### Block requests to wrong host (case insensitive)
    label "BAD Host request"
    request header expect "calomel.org" from "Host"
    request header expect "www.calomel.org" from "Host"

## ## URL filtering (NOT working yet) (case insensitive)
## ## Only allow /, *.html and *.jpg
    # label "BAD path request"
    # request url expect "*.{html,jpg}"
    # request url expect "./*\.{html,jpg,gif}"
    # request path expect "(^\/|\.html|\.css|\.jpg|favicon\.ico|robots\.txt|\.png)$"
    # request path expect "/index.html"

## ## Block bad request method (NOT working yet) (case insensitive)
## ## Only allow GET and HEAD. Deny all others.
    # label "BAD request method"
    # request header expect "GET"
    # request header expect "HEAD"
}

relay httpproxy {
   ### listen and accept redirected connections from pf. For most
   ### protocol types you can also use the synproxy flag in your pf.conf rules.
    listen on $relayd_addr port $relayd_port

   ### apply web filters listed above
    protocol "httpfilter"

   ### forward to webserver(s) with load balancing and
   ### check the "/" path to make sure it comes back
   ### with code 200. You can check any path and also execute scripts
   ### with,  check http '/path/test_script.php' code 200
    forward to <web_hosts> port $web_port mode loadbalance check http "/" code 200 
}
#######################################################
###  Calomel.org  /etc/relayd.conf  END
###  Reverse HTTP proxy with layer 7 filtering
###   for an internal web server
#######################################################
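The relay above listens on 127.0.0.1 port 8080 and expects pf to redirect the incoming web traffic to it. Here is a minimal pf.conf sketch for that redirection, using the same rdr/pass pattern as the transparent proxy example later in this guide; the external interface (em0) and the EXTWEB tag are just assumed names for illustration:

## Add to your /etc/pf.conf (sketch)
ExtIf="em0"
rdr on $ExtIf inet proto tcp from any to ($ExtIf) port www tag EXTWEB -> lo0 port 8080
pass in log on $ExtIf inet proto tcp from any to lo0 port 8080 flags S/SA synproxy state tagged EXTWEB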



For more information about OpenBSD's PF firewall, CARP and HFSC quality of service options, check out our PF Config (pf.conf), PF CARP and PF quality of service HFSC "how to's".






Example 2: SSL Accelerated, Reverse Proxy with Layer 7 filtering to internal web servers

An SSL accelerator is a daemon that accepts SSL encrypted connections from remote clients. It then decrypts those connections using your private SSL key and signed certificate. Relayd will then proxy the connection _unencrypted_ to your back end servers. This is an especially useful tool when you want to support https connections, but you also want to load balance those connections between many back end servers.

This is the traffic ideology we are looking to support:

                      https                           http
           Internet -------->  relayd reverse proxy  ------->  internal LAN web server
                     port 443                        port 80



This example is an SSL accelerator in front of a set of identical internal web servers. The config is an HTTP load balancer for three (3) web servers and can be used to front a web cluster. Most importantly, it decrypts SSL connections from the client and relays the connection unencrypted to the internal web servers. Relayd also does some layer 7 filtering by looking at the headers coming from the clients. It can also change the "Server" header coming from our servers to globally anonymize our web server revisions. Take a look at the comments for each line in the config for details.

#######################################################
###  Calomel.org  /etc/relayd.conf   START
###  SSL Accelerated, Reverse Proxy with Layer 7 filtering
###   for an internal web server
#######################################################
## Macros
#   
relayd_addr="127.0.0.1"
relayd_port="8080" 
    
## If you have a back-end web cluster or a single machine,
## you can specify each host here. Then use the table name
## with load balancing in the "relay" directive below.
#
web_port="80"
table <web_hosts> { 10.10.10.11, 10.10.10.22, 10.10.10.33 }

## Global Options
#
# Interval in seconds at which the back-end hosts
# will be checked (default: 10 seconds)
interval 10
 
# Timeout for back-end servers to respond. Set to
# 200 for local servers and around 1000 for servers
# on other subnets. (default: 200 milliseconds)
timeout 200
 
# Number of child processes to run. (default: 5)
prefork 5

# Log state notifications after completed host
# checks. State can be up, down or unknown.
log updates

## SSL Accelerated reverse proxy with layer 7 HTTP filtering
#
http protocol "httpfilter" {

   ### TCP performance options 
    tcp { nodelay, sack, socket buffer 65536, backlog 100 }

   ### Return HTTP/HTML error pages
    return error

   ### allow logging of remote client ips to internal web servers
    header append "$REMOTE_ADDR" to "X-Forwarded-For"

   ### set Keep-Alive timeout to global timeout
    header change "Keep-Alive" to "$TIMEOUT"

   ### close connections upon receipt
    header change "Connection" to "close"

   ### Block bad or abusive User-Agents (case insensitive)
    label "BAD user agent"
    request header filter "Bandia*" from "User-Agent"
    request header filter "TwoSands*" from "User-Agent"

   ### Block bad Referrers, (case insensitive)
    label "BAD referrer"
    request header filter "Napsack*" from "Referer"

   ### Anonymize our webserver's name/type
    response header change "Server" to "JustSomeServer"

   ### Block requests to wrong host (case insensitive)
    label "BAD Host request"
    request header expect "calomel.org" from "Host"
    request header expect "www.calomel.org" from "Host"

   ### SSL accelerator ciphers. Set strong crypto cipher suites
   ### without anonymous DH, allowing SSL version 3 and TLS version 1 only.
   ### Do _NOT_ accept SSL version 2 due to its vulnerabilities.
    ssl { sslv3, tlsv1, ciphers "HIGH:!ADH", no sslv2 }
}

relay httpproxy {

   ### listen and accept SSL redirected connections from pf. For most
   ### protocol types you can also use the synproxy flag in your pf.conf rules.
    listen on $relayd_addr port $relayd_port ssl

   ### apply web filters listed above
    protocol "httpfilter"

   ### forward to webserver(s) unencrypted with load balancing and
   ### check the "/" path to make sure they return requests
   ### with code 200. You can check any path and also execute scripts
   ### with,  check http '/path/test_script.php' code 200
    forward to <web_hosts> port $web_port mode loadbalance check http "/" code 200
}
#######################################################
###  Calomel.org  /etc/relayd.conf  END
###  SSL Accelerated, Reverse Proxy with Layer 7 filtering
###   for an internal web server
#######################################################
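Since this relay listens for SSL connections on 127.0.0.1 port 8080, pf must redirect the incoming https (port 443) traffic to it. Here is a minimal pf.conf sketch under that assumption, again mirroring the rdr/pass pattern used elsewhere in this guide; the external interface (em0) and the EXTSSL tag are just assumed names:

## Add to your /etc/pf.conf (sketch)
ExtIf="em0"
rdr on $ExtIf inet proto tcp from any to ($ExtIf) port https tag EXTSSL -> lo0 port 8080
pass in log on $ExtIf inet proto tcp from any to lo0 port 8080 flags S/SA synproxy state tagged EXTSSL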



Generating RSA server certificates for relayd

Relayd will look in the directory /etc/ssl/private/ for the private key and in /etc/ssl/ for certificates. If the ssl keyword is present, as in our line "listen on $relayd_addr port $relayd_port ssl", the relay will accept connections using the encrypted SSL protocol. The relay will look for a private key in /etc/ssl/private/address.key and a public certificate in /etc/ssl/address.crt, where address is the IP address the relay listens on.

So, the names of the files must match the address relayd is listening on. In our example relayd.conf we are listening on "relayd_addr=127.0.0.1", so our files _MUST_ be named 127.0.0.1.key and 127.0.0.1.crt (we will also name the certificate signing request 127.0.0.1.csr for consistency).

OPTION 1: To support https transactions you will need to generate an RSA private key.

openssl genrsa -out /etc/ssl/private/127.0.0.1.key 1024

OPTION 2: Or, if you wish the key to be encrypted with a pass-phrase that you will have to type in when starting the server:

openssl genrsa -des3 -out /etc/ssl/private/127.0.0.1.key 1024

The next step is to generate a Certificate Signing Request (CSR), which is used to get a Certifying Authority (CA) to sign your certificate. To do this, use the command:

openssl req -new -key /etc/ssl/private/127.0.0.1.key -out /etc/ssl/private/127.0.0.1.csr

This 127.0.0.1.csr file can then be given to a Certifying Authority who will sign the certificate.

You can also sign the key yourself, using the command:

openssl x509 -req -days 365 -in /etc/ssl/private/127.0.0.1.csr -signkey /etc/ssl/private/127.0.0.1.key -out /etc/ssl/127.0.0.1.crt

With /etc/ssl/127.0.0.1.crt and /etc/ssl/private/127.0.0.1.key in place, you should be able to start relayd with the above config file and accept transactions with your machine on port 443 through pf.

You will most likely want to generate a self-signed certificate in the manner above along with your certificate signing request to test your server's functionality even if you are going to have the certificate signed by another Certifying Authority. Once your Certifying Authority returns the signed certificate to you, you can switch to using the new certificate by replacing the self-signed /etc/ssl/127.0.0.1.crt with the certificate signed by your Certifying Authority, and then restarting relayd.
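If relayd refuses to start or clients complain about the certificate, a quick sanity check is to confirm that the certificate and the private key actually belong together. This is a standard openssl technique rather than anything relayd specific; the two md5 digests must match:

openssl x509 -noout -modulus -in /etc/ssl/127.0.0.1.crt | openssl md5
openssl rsa -noout -modulus -in /etc/ssl/private/127.0.0.1.key | openssl md5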







Example 3: TCP Port relay

If you need it to, relayd can also forward almost any TCP connection to another box or port. For example, you can have it listen on one machine for ssh/scp/sftp connections and forward them to another machine (a sketch of that variant follows the config below). You can do this for port 80 web servers, port 25 mail servers or any other well behaved TCP service (i.e. _not_ ftp). All you need to do is change the machine names and the port numbers.

This is the traffic ideology we are looking to support:


             Internet  -->  relayd forwarder (box1_addr)  -->  server (box2_addr)



This is an example of our server listening on $box1_addr accepting web requests and relaying them to $box2_addr:80.

#######################################################
###  Calomel.org  /etc/relayd.conf  START
#### TCP port relay and forwarder
#######################################################

## Macros
#
box1_addr="10.10.10.10"
box1_port="80"
box2_addr="10.20.20.20"
box2_port="80"

## TCP port relay and forwarder
#
protocol "tcp_service" {
                   tcp { nodelay, socket buffer 65536 }
           }

           relay "tcp_forwarder" {
                   listen on $box1_addr port $box1_port
                   protocol "tcp_service"
                   forward to $box2_addr port box2_port
           }
#######################################################
###  Calomel.org  /etc/relayd.conf  END
#### TCP port relay and forwarder
#######################################################
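As mentioned above, only the machine names and port numbers need to change for other TCP services. A hypothetical ssh relay, for example, would keep the same protocol and relay blocks and only swap the macros; the addresses below are placeholders:

## Hypothetical ssh relay: only the macros change (sketch)
box1_addr="10.10.10.10"   # machine relayd listens on
box1_port="22"            # ssh (move box1's own sshd to another port or address first)
box2_addr="10.20.20.20"   # machine actually running sshd
box2_port="22"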







Example 4: DNS loadbalancer relay

If you have a few DNS servers you may want to distribute the load between them. You can also make the setup of your client systems easier by pointing them at just the relayd DNS load balancer. Some operating systems have the built-in ability to rotate DNS servers, but that is more of a hit or miss option.

Relayd can load balance DNS very well and even checks to make sure the DNS machines are up and available. In this example we have three back end DNS servers and relayd will balance the requests between them. On the client side, you only need to point them at the single relayd machine.

Relayd has the advantage over straight PF if you have multiple hosts you want to balance between (like a pool of web servers), since it checks the status of the hosts in the pool and, if one goes down, takes it out of the pool.

This is the traffic ideology we are looking to support:

                                                        /--> DNS one
          DNS clients  -->  relayd DNS load balancer ------> DNS two
                                                        \--> DNS three



This config will accept TCP and UDP connections redirected through pf to our box on localhost:8053. Relayd will then load balance the DNS requests between the three DNS servers listed in <dns_servers>.
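Here is a minimal pf.conf sketch for that redirection, assuming LAN macros like the $IntIf and $IntNet used in the transparent proxy example later in this guide; clients simply point their resolvers at the firewall's LAN address and pf hands the queries to relayd:

## Add to your /etc/pf.conf (sketch)
rdr on $IntIf inet proto { tcp, udp } from $IntNet to ($IntIf) port domain tag INTDNS -> lo0 port 8053
pass in log on $IntIf inet proto { tcp, udp } from $IntNet to lo0 port 8053 keep state tagged INTDNS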

#######################################################
###  Calomel.org  /etc/relayd.conf  START
#### DNS Load Balancer
#######################################################
## Macros
#
relayd_addr="127.0.0.1"
relayd_port="8053"

table <dns_servers> { 192.168.1.11, 192.168.2.22, 192.168.3.33 }
dns_servers_port="53"

## Global Options
#
# Set the interval in seconds at which the hosts
# will be checked. The default interval is 10 seconds.
interval 10

# Set the global timeout in milliseconds for checks.
timeout 200

# When using relays, run the specified number of
# processes to handle relayed connections.
prefork 5

# Log state notifications after completed host
# checks. State can be up, down or unknown.
log updates

## DNS loadbalancer
#
dns protocol "dnsfilter" {
   ### TCP performance options
    tcp { nodelay, sack, socket buffer 1024, backlog 1000 }
}

relay dnsproxy {
       ### listen and accept redirected connections from pf
        listen on $relayd_addr port $relayd_port

       ### apply web filters
        protocol "dnsfilter"

       ### forward to web server(s)
        forward to <dns_servers> port $dns_servers_port \
                mode loadbalance check tcp
}
#######################################################
###  Calomel.org  /etc/relayd.conf  END
#### DNS Load Balancer
#######################################################



For more information or ideas about the BIND DNS server check out our guides on Bind DNS Caching Server (named.conf), DNS Verify (ip to hostname to ip) and using DynDns.org with ddclient (ddclient.conf).







Example 5: Service Redirector

Relayd can be used not only as a proxy but also as a master redirector, by letting it install rdr rules through a pf anchor. A redirection represents a pf(4) rdr rule and is used for stateful redirections to the hosts in the specified tables. pf(4) rewrites the target IP addresses and ports of the incoming connections, operating on layer 3.

This is one of relayd's most powerful and most useful options. Since the data does not need to be proxied or relayed through relayd, relayd does not have to understand the protocol. Relayd only listens for connections and manages rdr rules through pf. You could have an IMAPS cluster, a Citrix server farm or whatever. It does not matter, because they will all work.

Relayd has the advantage over straight PF if you have multiple hosts you want to balance between (like a pool of web servers), since it checks the status of the hosts in the pool and, if one goes down, takes it out of the pool.

This is the traffic ideology we are looking to support:


          External clients  -->  relayd redirection  --> internal servers
                                 using PF rdr rules



The first step is adding the relayd anchor and any pass rules for the generated rdr rules to your pf.conf. Here we added the relayd anchor and a pass rule to allow traffic to our internal servers.

## Add to your /etc/pf.conf
##
################ Macros ###############################
some_servers="{ 192.168.1.111, 192.168.1.222, 192.168.1.333 }"

################ Translation ###############################
## Relayd
rdr-anchor "relayd/*"

################ Filtering #################################
pass in log on em0 inet proto tcp from any to $some_servers port 500 flags S/SA synproxy state tagged RELAYD



The second step is setting up relayd to generate rdr rules and watch the back end servers. For example, we have relayd using 100.200.300.400 port 500 on the external interface (em0). Relayd will then check the status of the internal servers listed in the table <some_servers> and make the appropriate rdr rules when external clients connect. If an internal server ever goes down, no rdr rules are generated for that host until it comes back up.

#######################################################
###  Calomel.org  /etc/relayd.conf  START
#### Service Redirector
#######################################################
## Macros
#
relayd_addr="100.200.300.400"
relayd_port="500"
relayd_int="em0"

table <some_servers> { 192.168.1.111, 192.168.1.222, 192.168.1.333 }
servers_port="500"

## Global Options
#
# Set the interval in seconds at which the hosts
# will be checked. The default interval is 10 seconds.
interval 10

# Set the global timeout in milliseconds for checks.
timeout 200

# When using relays, run the specified number of
# processes to handle relayed connections.
prefork 5

# Log state notifications after completed host
# checks. State can be up, down or unknown.
log updates

redirect anchor_name {
        listen on $relayd_addr port $relayd_port interface $relayd_int
        tag RELAYD
        sticky-address
        forward to <some_servers> port $servers_port mode roundrobin check tcp
}
#######################################################
###  Calomel.org  /etc/relayd.conf  END
#### Service Redirector
#######################################################

NOTE: In our example, relayd will make an anchor rule in the following form. The anchor will contain an rdr rule and pf will create a stateful pass rule. Once the pass rule is active, the rdr rule is removed from the anchor as it has served its purpose. This usually happens within tenths of a second.

rdr on [interface em0] from [client ip] to [interface em0] port 500 tag RELAYD -> [table: some_servers] port 500
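To verify the redirection is behaving, you can ask both relayd and pf directly. These are ordinary relayctl(8) and pfctl(8) commands, shown here only as a quick sanity check:

relayctl show summary                      # status of the redirect, table and hosts
relayctl show hosts                        # per-host availability in <some_servers>
pfctl -a 'relayd/anchor_name' -s nat       # rdr rules currently loaded in the anchor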







Example 6: Transparent HTTP proxy with layer 7 filtering - Similar to a Squid proxy

A transparent proxy is used to filter web requests from a LAN client to the server, providing transparent layer 7 forwarding in relays. A transparent proxy does this without any configuration changes to the clients. This is done by having the firewall intercept the LAN clients' port 80 traffic and redirect it to the proxy. The proxy can then filter what it needs to and make the connection to the remote server on behalf of the client. This is both secure, as users cannot bypass the filters, and saves work, since the only changes are made on the firewall itself.

A transparent proxy can be set up to block ads, block unwanted sites or any other information you do not want to cross the boundary of your firewall.

This is the traffic ideology we are looking to support:


             internal LAN clients  -->  relayd transparent  --> Internet
                                              proxy



First, add an rdr rule to pf so that any port 80 request from LAN clients to external web servers goes to relayd listening on localhost port 8080. This is the transparent part of the proxy. Then add a pass rule so internal machines can access relayd on localhost:8080.

BTW, none of the clients need to be set up any differently since we are using the firewall to redirect traffic on protocol layer 3. This is the beauty of a transparent proxy.

## Add to your /etc/pf.conf
##
################ Translation ###############################
## Relayd transparent proxy for the LAN
rdr on $IntIf inet proto tcp from $IntNet to any port www tag INTWEB -> lo0 port 8080

################ Filtering #################################
pass in log on $IntIf inet proto tcp from $IntNet to lo0 port 8080 flags S/SA synproxy state tagged INTWEB



Second, put the following into your /etc/relayd.conf file. It will allow traffic sent to localhost:8080 to be proxied to any external Internet server. With the filter examples in the config you can block external sites, bad internal user agents or anything else you do not want to pass through the firewall.

The important line in this config is "forward to nat lookup". It allows relayd to look up the state of the rdr rule we set up. From the state table relayd can figure out the original destination IP and make a connection to the remote site. Relayd then proxies that data back to the internal LAN client.

#######################################################
###  Calomel.org  /etc/relayd.conf  START
#### Transparent HTTP proxy with layer 7 filtering
#######################################################
## Macros
#
relayd_addr="127.0.0.1"
relayd_port="8080"

## Global Options
#
prefork 10

## Transparent HTTP proxy with layer 7 filtering
#
http protocol "httpfilter" {

   ### TCP performance options
    tcp { nodelay, sack, socket buffer 65536, backlog 1000 }

   ### Return HTTP/HTML error pages
    return error

   ### set Keep-Alive timeout to global timeout
    header change "Keep-Alive" to "$TIMEOUT"

   ### close connections upon receipt
    header change "Connection" to "close"

   ### Block bad User-Agents (case insensitive)
    label "BAD user agent"
    request header filter "Mozilla/4.0*" from "User-Agent"
    request header filter "SomeBrokeBrowser/1.0*" from "User-Agent"

   ### Block requests to wrong host (case insensitive)
    label "BAD Host request"
    request header filter "*youtube.com*" from "Host"
    request header filter "*myspace.com*" from "Host"
    request header filter "*facebook.com*" from "Host"
    request header filter "*bfriends.com*" from "Host"

   ### Obscure our user agents
    request header change "Accept" to "text/html,text/plain;q=0.9,*/*;q=0.8"
    request header change "Accept-Charset" to "ISO-8859-1,utf-8;q=0.9"
    request header change "Accept-Encoding" to "gzip"
    request header change "Accept-Language" to "en-us,en;q=0.9"
    request header change "User-Agent" to "Arcana imperii"
}

relay httpproxy {
       ### listen and accept redirected connections from pf
        listen on $relayd_addr port $relayd_port

       ### apply web filters
        protocol "httpfilter"

       ### transparent http proxy
        forward to nat lookup
}
#######################################################
###  Calomel.org  /etc/relayd.conf  END
#### Transparent HTTP proxy with layer 7 filtering
#######################################################







Example 7: Direct Server Return (DSR)

Load balancing is often used to distribute load over multiple resources (cpu, ram, disk, network, etc.) and to gain increased reliability because of the redundancy introduced with multiple servers. Since all return traffic has to flow through the load balancer, it soon becomes the bottleneck of your load balanced cluster. Enter DSR. With DSR, the server replies to the client by itself (hence the term "Direct Server Return"), so that the load balancer does not have to forward all traffic back to the client. Undeadly.org has an interview with the developer and shows examples at Undeadly on Direct Server Return support in OpenBSD.

SECURITY WARNING on DSR: we do not recommend you use DSR unless you know exactly what you are doing. The use of the "sloppy state" directive in PF is incredibly insecure and easily compromised. Understand your risks before implementing.

This is a quote from the developer about relayd in DSR mode:

reyk: Don't use it. It is something that is required to handle really high amounts of bandwidth, like streams, but it totally negates the security benefits of real load balancing with pf redirections and the extra security in our packet filter. So unless you really need it, you should get a faster box and run the traditional mode instead.

There is nothing comparable to the stateful filtering in pf, just the buzzword "SPI" (stateful packet inspection) doesn't say anything about the quality of the checks. The new sloppy mode is so scary and usable at the same time, but it is even worse that other firewalls can't do much more than our sloppy mode.





Testing Relayd

When you have relayd installed and the /etc/relayd.conf configuration file in place, it is time to make sure everything works. Log into the machine and open an xterm. We recommend starting relayd in the foreground with verbose debugging using the following arguments:

relayd -d -vv -f /etc/relayd.conf
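Before starting the daemon at all, it is worth letting relayd parse the config file. The -n flag runs a configtest and exits without starting anything:

relayd -n -f /etc/relayd.conf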



Web test: On a remote machine you can test relayd's response to a HEAD request with:

lynx -head http://your_hostname.com
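If you built the SSL accelerator from Example 2, you can also watch the SSL handshake and inspect the certificate relayd presents using openssl's standard s_client tool (the hostname is a placeholder):

openssl s_client -connect your_hostname.com:443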



DNS test: Testing DNS resolution through your relayd DNS load balancer is as easy as asking it for a host name lookup of google.com. Watch relayd send the query to each DNS server in turn.

host www.google.com your_relayd_dns_proxy_server
  ...or...
dig www.google.com @your_relayd_dns_proxy_server





Relayctl - The relayd command and control interface

The program "relayctl" can be used to interface with the relayd daemon. You can check the status of the proxy, enable or disable hosts and watch the states of client's connections.

Simply executing "relayctl" without arguments will print out the first tree of options available to you. Most arguments in the first tree will have sub arguments. The man page has all of the definitions (man relayctl).

root@machine: relayctl        
valid commands/args:
  monitor
  show
  poll
  reload
  stop
  redirect
  table
  host

If we wanted to see a summary of relayd's status and availability we could use the following. This output is from a server using a relayd.conf similar to Example 1 above. relayctl uses the socket /var/run/relayd.sock for all its information.

root@machine: relayctl show summary  
Id      Type     Name                    Avlblty     Status
0       relay    httpproxy                           active
1       table    web_hosts:80                        active (1 hosts up)
1       host     127.0.0.1               100.00%     up

You can also look at the status of each host in the table <web_hosts>:

root@machine: relayctl show hosts
Id      Type     Name                    Avlblty     Status
1       table    web_hosts:80                        active (1 hosts up)
1       host     127.0.0.1               100.00%     up
                 total: 825/825 checks
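Hosts can also be pulled out of rotation by hand, which is handy when a single back end server needs maintenance. These are standard relayctl commands; the address is just one of the example hosts from the configs above:

relayctl host disable 10.10.10.22    # stop sending new connections to this host
relayctl host enable 10.10.10.22     # put it back into the pool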





Starting Relayd for operational use

When you are satisfied with your relayd setup you can start it manually using the following command. Specifying the argument "-f /etc/relayd.conf" is optional as this is the default location for the config file.

relayd -f /etc/relayd.conf

If you have OpenBSD v4.3 or later installed, then add the following line to your /etc/rc.conf.local so relayd is started at boot:

relayd_flags=""

If you look at the processes after daemonizing relayd, you should see something similar to the following. This example uses the "prefork 5" config option, so there are five "socket relay engine" processes.

root@machine: ps -aux | grep relay | sort
_relayd   5371  0.0  0.1   724  1096 ??  S  10:10AM  0:00.01 relayd: socket relay engine (relayd)
_relayd   6161  0.0  0.1   724  1252 ??  S  10:10AM  0:00.01 relayd: socket relay engine (relayd)
_relayd   8476  0.0  0.1   724  1268 ??  S  10:10AM  0:00.01 relayd: host check engine (relayd)
_relayd  11374  0.0  0.1   724  1096 ??  S  10:10AM  0:00.01 relayd: socket relay engine (relayd)
_relayd  16556  0.0  0.1   724  1096 ??  S  10:10AM  0:00.01 relayd: socket relay engine (relayd)
_relayd  21966  0.0  0.1   724  1100 ??  S  10:10AM  0:00.01 relayd: socket relay engine (relayd)
_relayd  29844  0.0  0.2  1056  1616 ??  S  10:10AM  0:00.00 relayd: pf update engine (relayd)
root       204  0.0  0.2  1048  1564 ??  Ss 10:10AM  0:00.00 relayd: parent (relayd)



Want more speed? Make sure to also check out the Network Speed and Performance Guide. With a little time and understanding you could easily double your firewall's throughput.





Questions?

Why am I getting the error, "fatal: flush_table: cannot flush table stats: Operation not supported by device"?

Sometimes, if relayd shuts down incorrectly, the relayd anchor is left populated. We have seen relayd exit improperly if you pkill the process or if all the servers relayd is checking are unavailable. Execute the following to see if an anchor is still in place:
root@machine: pfctl -a 'relayd/*' -vvsA 
  relayd/anchor_name

To clear the relayd anchor, execute the following with your anchor name. Once the anchor is clear, relayd will start as normal.

root@machine: pfctl -a 'relayd/anchor_name' -Fa
rules cleared
nat cleared
1 tables deleted.

The name of the anchor comes from the name used in relayd.conf, "redirect anchor_name {}". The anchor must be empty at startup, otherwise relayd will show the following in the error logs:

Jan 10 10:10:10 test relayd[31094]: startup
Jan 10 10:10:10 test relayd[11727]: fatal: flush_table: cannot flush table stats: Operation not supported by device
Jan 10 10:10:10 test relayd[31094]: check_child: lost child: pf update engine exited
Jan 10 10:10:10 test relayd[21492]: host check engine exiting
Jan 10 10:10:10 test relayd[31094]: terminating

What are all these HEAD requests in my webserver's logs?

If you enabled http protocol checks with "check http "/" code 200", then relayd is responsible. It is making sure that the path "/" is available on all hosts so it can continue to send them traffic. The logs might look something like the following. Notice that the checks arrive every 10 seconds and the response code is "200"; the directive "interval 10" tells relayd to check every 10 seconds.
- - - [10/Jan/2008:10:10:10 -0400] "HEAD / HTTP/1.0" 200 0 "-" "-"
- - - [10/Jan/2008:10:10:20 -0400] "HEAD / HTTP/1.0" 200 0 "-" "-"
- - - [10/Jan/2008:10:10:30 -0400] "HEAD / HTTP/1.0" 200 0 "-" "-"
- - - [10/Jan/2008:10:10:40 -0400] "HEAD / HTTP/1.0" 200 0 "-" "-"

Do you know about the Nginx web server? Can you help me?

Nginx, pronounced "Engine X", is an incredibly efficient web server that rivals even Lighttpd. We highly recommend checking it out on the Calomel.org Nginx web server "how to" page. It is faster, more memory efficient and has more options than Lighttpd as well as having a very active development base.





Questions, comments, or suggestions? Contact Calomel.org