Focused on business continuity support and data protection
2009-04-01 Tech Digest


Pseudo-static URL rules for various applications

Pseudo-static, as the name suggests, is defined relative to truly static pages.
We usually generate static pages from article content to ease the load on the server and to be friendlier to search engines.
Some sites, however, need to display information in real time, or still want dynamic scripts to solve certain problems, and so cannot present their content statically.
That sacrifices search-engine friendliness. How do you find a middle ground between the two? This is where pseudo-static techniques come in:
what is presented looks like a static page ending in .html, but it is actually handled by a dynamic script such as ASP.
One approach of mine uses the IIS 404 error-handling mechanism, which is far more flexible than URL rewriting.
First, open the site's Properties, then Custom Errors.
Find HTTP error 404, click Edit Properties, set the message type to URL, and enter "/index.asp" (or your own error-handling page) in the URL field.
Now when a user or crawler visits http://tech.todo.net.cn/12345.html (12345 being the article's ID in the database), the page does not exist, so a 404 error fires and the request is redirected to index.asp.
In index.asp add:
CurrDomain=Request.ServerVariables("HTTP_HOST") ' the domain being visited
CurrURL=Replace(Request.ServerVariables("QUERY_STRING"),"404;http://"&CurrDomain&":80","") ' the URL being visited
At this point CurrURL should be: 12345.html.
That gives you the page the user was trying to reach. Then extract the article ID (here 12345) with a regular expression; I won't go into the details.
Then fetch the article content from the database, write it into the page, and you are done.
The user or crawler still sees the URL it requested, while we get to process the content with dynamic techniques. That is exactly the result we want.
This is a simplified account, but that is the basic idea.
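The ID-extraction step glossed over above ("use a regex") can be sketched outside ASP. Below is a minimal shell equivalent of the two ASP lines plus the regex; the host and query string are hard-coded stand-ins for the real IIS server variables:

```shell
# Hypothetical values standing in for the IIS server variables; the
# names mirror the ASP variables in the text.
CurrDomain="tech.todo.net.cn"
QueryString="404;http://${CurrDomain}:80/12345.html"

# Strip the "404;http://host:80/" prefix, as the ASP Replace() call does.
CurrURL="${QueryString#404;http://${CurrDomain}:80/}"

# Pull the numeric article ID out of "12345.html" with a regex.
ArticleID=$(printf '%s' "$CurrURL" | sed -n 's/^\([0-9][0-9]*\)\.html$/\1/p')
echo "$ArticleID"
```

The ID would then be used to look the article up in the database, exactly as the text describes.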
Pros and cons of pseudo-static
As the author of one article put it: "With any real traffic, pseudo-static sends CPU usage into overload. My site went down at just over 300 concurrent users with pseudo-static on, yet stayed up past 500 without it, and my IIS connection limit is 1000."
That is indeed the case. Because pseudo-static decides which page to show by regex matching rather than by a real address, the work of resolving the page moves from a direct lookup onto the CPU, and the resulting rise in CPU usage really is pseudo-static's biggest weakness.
Should we choose pseudo-static or true static?
1. For SEO there is no difference between true static and pseudo-static.
2. True static generation can wear out the hard disk and will affect forum performance.
3. Pseudo-static consumes a certain amount of CPU, and heavy use can overload it.
4. Most importantly, we go static for SEO in the first place.
Therefore:
1. True static can be ruled out directly, because however the pages are generated it is hard on the disk.
2. Since true and pseudo static are equivalent for SEO, we can choose pseudo-static.
3. But heavy use of pseudo-static will overload the CPU.
4. So we simply avoid heavy use.
5. Since static pages exist only for SEO, we only need to serve pseudo-static pages to crawlers, not to users.
6. So it is enough to enable pseudo-static only in the Archiver pages built specifically for crawlers.
7. Thanks for reading patiently.
8. Questions and differing views are welcome.
Comments on pseudo-static versus true static
There is still an essential difference between real static pages and pseudo-static ones. Serving a visitor a plain .htm file clearly uses less CPU than a PHP page that pulls in several pieces of data. I remember someone claiming that serving html causes heavy disk reads and writes, as if reading the database required no disk access, and as if the pile of scattered cached PHP files were not sitting on the disk too. Do those reads involve no disk operations? Ridiculous.
When reading a single .htm plus images and Flash attachments achieves the goal, why go to the trouble of reading the database, reading PHP cache files, reassembling the data for output, and then adding the images and Flash on top? A CMS front page does not need much interactivity; the forum approach should not be carried over here. What deserve more attention are appearance, compatibility, clarity of information, performance, and stability.


2009-04-01 Tech Digest


Garbled characters when connecting to Linux via SSH or PuTTY

vi /etc/sysconfig/i18n
Change its contents to:
LANG="zh_CN.GB18030"
LANGUAGE="zh_CN.GB18030:zh_CN.GB2312:zh_CN"
SUPPORTED="zh_CN.GB18030:zh_CN:zh:en_US.UTF-8:en_US:en"
SYSFONT="lat0-sun16"
With this, Chinese displays correctly in SSH and telnet terminals.
Alternatively, use the following to display English correctly:
LANG="en_US.UTF-8"
SUPPORTED="en_US.UTF-8:en_US:en"
SYSFONT="latarcyrheb-sun16"
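Since /etc/sysconfig/i18n is plain KEY="value" shell syntax, you can preview what a setting will do by sourcing it; a minimal sketch, run against a temp copy rather than the real file:

```shell
# Sketch: preview an i18n file by sourcing it (a login shell does the
# equivalent at startup). Uses a temp file instead of /etc/sysconfig/i18n.
tmpf=$(mktemp)
cat > "$tmpf" <<'EOF'
LANG="zh_CN.GB18030"
SUPPORTED="zh_CN.GB18030:zh_CN:zh:en_US.UTF-8:en_US:en"
SYSFONT="lat0-sun16"
EOF

. "$tmpf"       # import the variables into the current shell
echo "$LANG"    # the locale a new SSH session would use
rm -f "$tmpf"
```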

2009-04-01 Tech Digest


A PPPoE firewall and NAT shell script for Linux

#!/bin/bash
# Fill in your own parameters below; make sure they are correct!
EXTIF="ppp0" # the interface that holds the public IP
INIF="eth0" # the internal LAN interface; set to "" if there is none
INNET="10.0.0.0/8" # the internal LAN network; set to "" if there is no LAN
export EXTIF INIF INNET
# Part one: firewall settings for this host ###########################
# 1. Configure the kernel network parameters first:
echo "1" > /proc/sys/net/ipv4/tcp_syncookies
echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
for i in /proc/sys/net/ipv4/conf/*/rp_filter; do
echo "1" > $i
done
for i in /proc/sys/net/ipv4/conf/*/log_martians; do
echo "1" > $i
done
for i in /proc/sys/net/ipv4/conf/*/accept_source_route; do
echo "0" > $i
done
for i in /proc/sys/net/ipv4/conf/*/accept_redirects; do
echo "0" > $i
done
for i in /proc/sys/net/ipv4/conf/*/send_redirects; do
echo "0" > $i
done
# 2. Flush the rules, set the default policies, and open lo and related settings
PATH=/sbin:/usr/sbin:/bin:/usr/bin; export PATH
iptables -F
iptables -X
iptables -Z
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state RELATED -j ACCEPT
# 3. Run the additional firewall script modules
if [ -f /usr/local/virus/iptables/iptables.deny ]; then
sh /usr/local/virus/iptables/iptables.deny
fi
if [ -f /usr/local/virus/iptables/iptables.allow ]; then
sh /usr/local/virus/iptables/iptables.allow
fi
if [ -f /usr/local/virus/httpd-err/iptables.http ]; then
sh /usr/local/virus/httpd-err/iptables.http
fi
iptables -A INPUT -m state --state ESTABLISHED -j ACCEPT
# 4. Accept certain types of ICMP packets
AICMP="0 3 3/4 4 11 12 14 16 18"
for tyicmp in $AICMP
do
iptables -A INPUT -i $EXTIF -p icmp --icmp-type $tyicmp -j ACCEPT
done
# 5. Accept certain services; open these to suit your own environment
# iptables -A INPUT -p TCP -i $EXTIF --dport 22 -j ACCEPT # SSH
# iptables -A INPUT -p TCP -i $EXTIF --dport 25 -j ACCEPT # SMTP
# iptables -A INPUT -p UDP -i $EXTIF --sport 53 -j ACCEPT # DNS
# iptables -A INPUT -p TCP -i $EXTIF --sport 53 -j ACCEPT # DNS
# iptables -A INPUT -p TCP -i $EXTIF --dport 80 -j ACCEPT # WWW
# iptables -A INPUT -p TCP -i $EXTIF --dport 110 -j ACCEPT # POP3
# iptables -A INPUT -p TCP -i $EXTIF --dport 443 -j ACCEPT # HTTPS
# Part two: firewall settings for the hosts behind this one ##############################
# 1. Load some useful modules first
modules="ip_tables iptable_nat ip_nat_ftp ip_nat_irc ip_conntrack
ip_conntrack_ftp ip_conntrack_irc"
for mod in $modules
do
testmod=`lsmod | grep "${mod} "`
if [ "$testmod" == "" ]; then
modprobe $mod
fi
done
# 2. Flush the NAT table rules!
iptables -F -t nat
iptables -X -t nat
iptables -Z -t nat
iptables -t nat -P PREROUTING ACCEPT
iptables -t nat -P POSTROUTING ACCEPT
iptables -t nat -P OUTPUT ACCEPT
# 3. Enable routing and act as an IP-sharing router!
if [ "$INIF" != "" ]; then
/sbin/iptables -A INPUT -i $INIF -j ACCEPT
echo "1" > /proc/sys/net/ipv4/ip_forward
/sbin/iptables -t nat -A POSTROUTING -s $INNET -o $EXTIF -j MASQUERADE
fi
# If MSN never connects, or some sites work while others do not, it may be
# an MTU problem; uncomment the lines below to enable MSS clamping
# iptables -A FORWARD -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss \
# --mss 1400:1536 -j TCPMSS --clamp-mss-to-pmtu
# 4. Internal server settings:
# iptables -t nat -A PREROUTING -p tcp -i $EXTIF --dport 80 \
# -j DNAT --to 192.168.1.210:80
# Deny All Other Connections
/sbin/iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
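The module-loading loop in part two relies on a subtle detail: the space after ${mod} in the grep pattern, which keeps a short name like ip_conntrack from matching the longer ip_conntrack_ftp. The check can be exercised without root against a mock lsmod listing (module names and sizes below are made up for illustration):

```shell
# Sketch of the "load only if missing" check from part two, run against a
# mock lsmod listing. The trailing space in the grep pattern stops
# "ip_conntrack" from matching "ip_conntrack_ftp".
mock_lsmod="ip_tables              17029  2
ip_conntrack_ftp        7489  0
ip_conntrack           53153  2"

to_load=""
for mod in ip_tables iptable_nat ip_conntrack ip_conntrack_ftp; do
    if ! printf '%s\n' "$mock_lsmod" | grep -q "^${mod} "; then
        to_load="$to_load $mod"     # the real script runs: modprobe $mod
    fi
done
echo "missing:$to_load"
```

Only iptable_nat is absent from the mock listing, so only it would be modprobed.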

2009-04-01 Tech Digest


MySQL cluster configuration

MySQL Cluster: The definitive HOWTO
This guide was written over a year ago and is an excellent introduction; however, it has not been updated since! If you have any questions please email me or buy my book (US).

Introduction
You MUST have a third server as a management node, but this can be shut down after the cluster starts. Also note that I do not recommend shutting down the management server (see the extra notes at the bottom of this document for more information). You cannot run a MySQL cluster with just two servers and have true redundancy.

You should also disable SELinux or use the following rules (many thanks to Robin Bowes for letting me know of these):

allow mysqld_t port_t:tcp_socket name_connect;
allow mysqld_t var_lib_t:file append;
allow mysqld_t var_lib_t:sock_file create;
allow mysqld_t var_lib_t:file read;
allow mysqld_t var_lib_t:sock_file unlink;
allow mysqld_t var_lib_t:file { getattr write };
Although it is possible to set the cluster up on two physical servers, you WILL NOT GET the ability to "kill" one server and have the cluster continue as normal. For this you need a third server running the management node.

I am going to talk about three servers,

mysql1.domain.com 192.168.0.1
mysql2.domain.com 192.168.0.2
mysql3.domain.com 192.168.0.3
Servers 1 and 2 will be the two that end up "clustered". This would be perfect for two servers behind a loadbalancer or using round robin DNS and is a good replacement for replication. Server 3 needs to have only minor changes made to it and does NOT require a mysql install. It can be a low-end machine and can be carrying out other tasks.

STAGE 1: Install mysql on the first two servers:
Complete the following steps on both mysql1 and mysql2:

groupadd mysql
useradd -g mysql mysql
cd /usr/local/
wget http://dev.mysql.com/get/Downloads/MySQL-5.0/mysql-max-5.0.15-linux-i686-glibc23.tar.gz/from/http://mirror.trouble-free.net/mysql_mirror/
tar -zxvf mysql-max-5.0.15-linux-i686-glibc23.tar.gz
rm mysql-max-5.0.15-linux-i686-glibc23.tar.gz
ln -s mysql-max-5.0.15-linux-i686-glibc23 mysql
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root .
chown -R mysql data
chgrp -R mysql .
cp support-files/mysql.server /etc/rc.d/init.d/
chmod +x /etc/rc.d/init.d/mysql.server
chkconfig --add mysql.server
Do not start mysql yet.

STAGE 2: Install and configure the management server
You need the following files from the bin/ of the mysql directory: ndb_mgm and ndb_mgmd. Download the whole mysql-max tarball and extract them from the bin/ directory.

mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-max-5.0.15-pc-linux-gnu-i686.tar.gz/from/http://www.signal42.com/mirrors/mysql/
tar -zxvf mysql-max-5.0.15-pc-linux-gnu-i686.tar.gz
rm mysql-max-5.0.15-pc-linux-gnu-i686.tar.gz
cd mysql-max-5.0.15-pc-linux-gnu-i686
mv bin/ndb_mgm .
mv bin/ndb_mgmd .
chmod +x ndb_mg*
mv ndb_mg* /usr/bin/
cd
rm -rf /usr/src/mysql-mgm
You now need to set up the config file for this management node:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi [or emacs or any other editor] config.ini
Now, insert the following (changing the bits as indicated):

[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management Server
[NDB_MGMD]
# the IP of THIS SERVER
HostName=192.168.0.3
# Storage Engines
[NDBD]
# the IP of the FIRST SERVER (STORAGE NODE)
HostName=192.168.0.1
DataDir= /var/lib/mysql-cluster
[NDBD]
# the IP of the SECOND SERVER (STORAGE NODE)
HostName=192.168.0.2
DataDir=/var/lib/mysql-cluster
# 2 MySQL Clients
# I personally leave this blank to allow rapid changes of the mysql clients;
# you can enter the hostnames of the above two servers here. I suggest you don't.
[MYSQLD]
[MYSQLD]
Now, start the management server:

ndb_mgmd
This is the MySQL management server, not the management console. You should therefore not expect any output (we will start the console later).

STAGE 3: Configure the storage/SQL servers and start mysql
On each of the two storage/SQL servers (192.168.0.1 and 192.168.0.2) enter the following (changing the bits as appropriate):

vi /etc/my.cnf
Press i to enter insert mode and insert the following on both servers (changing the IP address to the IP of the management server that you set up in stage 2):

[mysqld]
ndbcluster
# the IP of the MANAGEMENT (THIRD) SERVER
ndb-connectstring=192.168.0.3
[mysql_cluster]
# the IP of the MANAGEMENT (THIRD) SERVER
ndb-connectstring=192.168.0.3
Be aware that anything in the [mysql_cluster] section will override the defaults in [mysql], so if you introduce a nodeid and then try to run multiple daemons on the same machines you should be aware of this!

Now, we make the data directory and start the storage engine:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
/usr/local/mysql/bin/ndbd --initial
/etc/rc.d/init.d/mysql.server start
If you have done one server, now go back to the start of stage 3 and repeat exactly the same procedure on the second server.
NOTE that you should ONLY use --initial if you are either starting from scratch or have changed the config.ini file on the management server.

STAGE 4: Check it's working
You can now return to the management server (mysql3) and enter the management console:

/usr/local/mysql/bin/ndb_mgm
Enter the command SHOW to see what is going on. A sample output looks like this:

[root@mysql3 mysql-cluster]# /usr/local/mysql/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.0.1 (Version: 5.0.15, Nodegroup: 0, Master)
id=3 @192.168.0.2 (Version: 5.0.15, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.3 (Version: 5.0.15)

[mysqld(API)] 2 node(s)
id=4 (Version: 5.0.15)
id=5 (Version: 5.0.15)

ndb_mgm>
If you see

not connected, accepting connect from 192.168.0.[1/2/3]
in the first or last two lines, then you have a problem. Please email me with as much detail as you can give and I can try to find out where you have gone wrong and change this HOWTO to fix it.

If you are OK up to here, it is time to test mysql. On either server mysql1 or mysql2 enter the following commands (note that we have no root password yet):

mysql
use test;
Create TABLE ctest (i INT) ENGINE=NDBCLUSTER;
Insert INTO ctest () VALUES (1);
Select * FROM ctest;
You should see 1 row returned (with the value 1).

If this works, now go to the other server and run the same Select and see what you get. Insert from that host and go back to host 1 and see if it works. If it works then congratulations.

The final test is to kill one server to see what happens. If you have physical access to the machine, simply unplug its network cable and see if the other server keeps going fine (try the Select query). If you don't have physical access, do the following:

ps aux | grep ndbd
You get an output like this:

root 5578 0.0 0.3 6220 1964 ? S 03:14 0:00 ndbd
root 5579 0.0 20.4 492072 102828 ? R 03:14 0:04 ndbd
root 23532 0.0 0.1 3680 684 pts/1 S 07:59 0:00 grep ndbd
In this case ignore the command "grep ndbd" (the last line) but kill the first two processes by issuing the command kill -9 pid pid:

kill -9 5578 5579
Then try the select on the other server. While you are at it, run a SHOW command on the management node to see that the server has died. To restart it, just issue

ndbd
NOTE: no --initial!

--------------------------------------------------------------------------------

Further notes about setup
I strongly recommend that you read all of this (and bookmark this page). It will almost certainly save you a lot of searching.

The Management Server
I strongly recommend that you do not stop the management server once it has started. This is for several reasons:

The server takes hardly any server resources
If a cluster falls over, you want to be able to just ssh in and type ndbd to start it. You don't want to have to start messing around with another server
If you want to take backups then you need the management server up
The cluster log is sent to the management server, so it is an important tool for checking what is going on in the cluster or what has happened since you last looked
All commands from the ndb_mgm client are sent to the management server, so there are no management commands without it
The management server is required in case of cluster reconfiguration (a crashed server or a network split). If it is not running, a "split-brain" scenario will occur; the management server's arbitration role is required in this type of setup to provide better fault tolerance.
However you are welcome to stop the server if you prefer.

Starting and stopping ndbd automatically on boot
To achieve this, do the following on both mysql1 and mysql2:

echo "ndbd" > /etc/rc.d/init.d/ndbd
chmod +x /etc/rc.d/init.d/ndbd
chkconfig --add ndbd
Note that this is a really quick script. You ought really to write one that at least checks if ndbd is already started on the machine.

Use of hostnames
You will note that I have used IP addresses exclusively throughout this setup. This is because using hostnames simply increases the number of things that can go wrong. Mikael Ronström of MySQL AB kindly explains: "Hostnames certainly work with MySQL Cluster. But using hostnames introduces quite a few error sources, since a proper DNS lookup system must be set up, sometimes /etc/hosts must be edited, and there might be security blocks ensuring that communication between certain machines is not possible other than on certain ports". I strongly suggest that while testing you use IP addresses if you can, then once it is all working change to hostnames.

RAM
Use the following formula to work out the amount of RAM that you need on each storage node:

(size of database * NoOfReplicas * 1.1) / number of storage nodes
NoOfReplicas is set to two by default. You can change it in config.ini if you want. So, for example, to run a 4GB database with NoOfReplicas set to 2 you need just under 9GB of RAM in total (4 * 2 * 1.1), so if you had two storage nodes you would need 4.5GB of RAM per storage node. For the SQL nodes and management nodes you don't need much RAM at all. Bear in mind that if you have variable-width fields in MySQL Cluster 4.0 or 5.0 you will find that you need a LOT more RAM than this formula predicts.

Note: A lot of people have emailed me querying the maths above! Remember that the cluster is fault tolerant, and each piece of data is stored on at least 2 nodes (2 by default, as set by NoOfReplicas). So you need TWICE the space you would need for just one copy, multiplied by 1.1 for overhead.
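As a sanity check, the formula can be evaluated with the example numbers from the text (a 4 GB database, NoOfReplicas=2, two storage nodes):

```shell
# Sketch: evaluate the RAM formula with the document's example numbers.
db_gb=4; replicas=2; nodes=2
per_node=$(awk -v d="$db_gb" -v r="$replicas" -v n="$nodes" \
    'BEGIN { printf "%.1f", d * r * 1.1 / n }')
echo "${per_node} GB per storage node"
# prints: 4.4 GB per storage node
```

That matches the "just under 9GB in total, 4.5GB per node" estimate above, give or take rounding.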

Adding storage nodes
If you decide to add storage nodes, bear in mind that 3 is not an optimal number. If you are going to move up from two (above), then move to 4.

Adding SQL nodes
If you want to add another SQL node (i.e. you have another server that you want to add to the cluster but you don't need it to act as a storage node), then just add the following to /etc/my.cnf on the server (it must be a mysql-max install):

[mysqld]
ndbcluster
# the IP of the MANAGEMENT (THIRD) SERVER
ndb-connectstring=192.168.0.3
[mysql_cluster]
# the IP of the MANAGEMENT (THIRD) SERVER
ndb-connectstring=192.168.0.3
Then you need to make sure that there is another [MYSQLD] line at the end of config.ini on the management server. Restart the cluster (see below for an important note) and restart mysql on the new API node. It should then be connected.

Important note on changing config.ini
If you ever change config.ini you must stop the whole cluster and restart it to re-read the config file. Stop the cluster with a SHUTDOWN command in the ndb_mgm client on the management server and then restart all the storage nodes.

Some useful configuration options that you will need if you have large tables:

DataMemory: defines the space available to store the actual records in the database. The entire DataMemory will be allocated in memory, so it is important that the machine contains enough memory to handle the DataMemory size. Note that DataMemory is also used to store ordered indexes; ordered indexes use about 10 bytes per record. Default: 80MB
IndexMemory: controls the amount of storage used for hash indexes in MySQL Cluster. Hash indexes are always used for primary key indexes, unique indexes, and unique constraints. Default: 18MB
MaxNoOfAttributes: defines the number of attributes that can be defined in the cluster. Default: 1000
MaxNoOfTables: obvious (bear in mind that each BLOB field creates another table for various reasons, so take this into account). Default: 128

View this page at mysql.com for further information about the things you can put in the [NDBD] section of config.ini

A note about security
MySQL Cluster is not secure. By default anyone can connect to your management server and shut the whole thing down. I suggest the following precautions:

Install APF and block all ports except those you use (do NOT include any MySQL cluster ports). Add the IPs of your cluster machines to the /etc/apf/allow_hosts file.
Run MySQL cluster over a second network card on a second, isolated, network.
Other resources
I found the following resources very useful:

The MySQL cluster documentation. This is gradually being reworked and is really quite good now.
On-line Backup of MySQL Cluster.
Defining MySQL Cluster Storage Nodes for information that you will need to allow for bigger database memory or a larger number of tables, indexes, unique indexes
MySQL Cluster mailing list.
Google.
MySQL Forums
The #mysql IRC channel on freenode and EFNet. If you need a free (open source) IRC client I recommend Bersirc.
Thanks
I must thank several others who have contributed to this: Mikael Ronström from MySQL AB for helping me to get this to work and spotting my silly mistake right at the end; Lewis Bergman for proof-reading this page, pointing out some improvements, and suffering the frustration with me; and Martin Pala for explaining the final reason to keep the management server up as well as a few other minor changes. Thanks also to Terry from Advanced Network Hosts, who paid me to set a cluster up and at the same time produce a HOWTO.

Comments/Corrections
I would be delighted for you to send any corrections or comments you may have to alex@davz.net. If you are also really stuck I would be delighted to help you.

Please do not copy this document; please just link to it. This is simply to prevent yet more old information getting out onto the internet. I will keep this updated. If you want something changed please email me.

2009-03-31 Tech Digest


Setting up a PPTP server on CentOS 5

Using Poptop you can build a Linux-based PPTP server that lets roaming users connect securely to the company's internal network from anywhere on the Internet. PPTP clients on Windows 95/98/Me/NT/2000/XP and on Linux are supported. Requirements: CentOS 5
kernel-2.6.18-8.el5
ppp-2.4.4-1.el5

Kernel 2.6.15 and later have built-in MPPE support. The CentOS 5 kernel is 2.6.18, so there is no need to install an MPPE module separately.

The ppp-2.4.4-1.el5 package shipped with CentOS 5 also supports MPPE.

1. Run the command below to test whether the kernel supports MPPE; if it does, it prints "ok":
modprobe ppp-compress-18 && echo ok

2. Download the RPM file pptpd-1.3.4-1.rhel5.1.i386.rpm from: http://poptop.sourceforge.net/yum/stable/packages/

3. Install the RPM file:
rpm -ivh pptpd-1.3.4-1.rhel5.1.i386.rpm

4. Edit /etc/ppp/options.pptpd as follows:

name pptpd
refuse-pap
refuse-chap
refuse-mschap
require-mschap-v2
require-mppe-128
proxyarp
lock
nobsdcomp
novj
novjccomp
nologfd

5. Edit /etc/pptpd.conf as follows:

option /etc/ppp/options.pptpd
logwtmp
localip 192.168.2.1
remoteip 192.168.2.11-156

Add an account and password to /etc/ppp/chap-secrets, for example account "johndie" with password "passwrd":

# Secrets for authentication using CHAP
# client server secret IP addresses

"user" pptpd "passwd" "ip"(*表示可以多人连接)

2009-03-31 Tech Digest


Configuring multiple ADSL lines under Linux

This document describes the configuration process for dual-ADSL setups and for adding further ADSL lines.

How to configure multiple ADSL lines on Linux

powered by KindGeorge http://kindgeorge.at.3322.org  After a period of observation the setup has proven to run well; here is a summary of the configuration process and method. Corrections are welcome.

Test environment:

Operating system: RedHat 7.3

Two ADSL lines, stable under long-term observation, dynamic IP, 2M bandwidth

Three NICs: eth0, tulip, connected to the internal LAN

eth1, 3c59x, connected to the first ADSL line

eth2, 8139too, connected to the second ADSL line

ethn, xxxx (if there are more….)

Goal: use both ADSL lines together to share the load, achieve simple load balancing, and carry the internal LAN's Internet traffic.

1. Add the NICs

Insert the NICs and boot the machine. Dual-line Internet access requires three NICs; see other references for details on configuring them.

Configure the driver module matching each model; for example, eth2 is an 8139 card:

[root@kindgeorge root]# vi /etc/modules.conf
alias parport_lowlevel parport_pc
alias eth0 tulip
alias eth1 3c59x
alias eth2 8139too

Plug the new line into eth2.

2. Configure ADSL

Method 1:

(1) Run the adsl-setup program, which prompts you step by step through the configuration.

Create dial-up configuration files for ppp0 and ppp1 and save them; they normally live under /etc/sysconfig/network-scripts/ (e.g. ifcfg-ppp1).

(2) In /etc/sysconfig/network-scripts/ifcfg-ppp0, make sure the PIDFILE parameter is set to:

PIDFILE=/var/run/ppp-adsl.pid

Then edit /etc/sysconfig/network-scripts/ifcfg-ppp1 and set its PIDFILE parameter to:

PIDFILE=/var/run/ppp-adsl1.pid

If you have more lines, keep incrementing: 0, 1, 2, 3….

This gives each dial-up connection its own PID file; without this change the second line's interface cannot be started.

(3) A typical configuration file:

USERCTL=no
BOOTPROTO=dialup
NAME=DSLppp1
DEVICE=ppp1
TYPE=xDSL
ONBOOT=yes
PIDFILE=/var/run/pppoe-adsl1.pid
FIREWALL=NONE
PING=.
PPPOE_TIMEOUT=20
LCP_FAILURE=3
LCP_INTERVAL=20
CLAMPMSS=1412
CONNECT_POLL=6
CONNECT_TIMEOUT=60
DEFROUTE=yes
SYNCHRONOUS=no
ETH=eth2 (the NIC for the new line)
PROVIDER=DSLppp1
USER=the new username provided by your ISP
PEERDNS=no

Method 2:

(1) Copy the file directly:

cp /etc/sysconfig/network-scripts/ifcfg-ppp0 /etc/sysconfig/network-scripts/ifcfg-ppp1

(2) In the copy, change ppp0 to ppp1,

change PIDFILE=/var/run/pppoe-adsl.pid to PIDFILE=/var/run/pppoe-adsl1.pid,

and change the NIC to the one attached to the new line, e.g. ETH=eth1 to ETH=eth2.
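Steps (1) and (2) can be automated with cp and sed. A sketch run against a temp file instead of the real ifcfg-ppp0, showing only the three fields mentioned in the text:

```shell
# Sketch: rewrite a copied ifcfg-ppp0 for the new line. In real use the
# input would be /etc/sysconfig/network-scripts/ifcfg-ppp0 and the output
# ifcfg-ppp1; a temp file stands in here.
src=$(mktemp)
cat > "$src" <<'EOF'
DEVICE=ppp0
PIDFILE=/var/run/pppoe-adsl.pid
ETH=eth1
EOF

out=$(sed -e 's/ppp0/ppp1/' \
          -e 's/adsl\.pid/adsl1.pid/' \
          -e 's/eth1/eth2/' "$src")
echo "$out"
rm -f "$src"
```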

(3) Add the new line's account and password. Credentials are normally stored in /etc/ppp/chap-secrets and pap-secrets.

We only need to append the new username 2 and password 2 at the end.

/etc/ppp/chap-secrets generally looks like this:

# Secrets for authentication using CHAP

# client server secret IP addresses

"用户名1" * "密码1"

"用户名2" * "密码2"

"用户名n" * "密码n"

3. Bring up the ppp interfaces

Because the adsl-start command by default only starts the first ppp interface, you must specify the configuration file in order to start both interfaces.

Method 1:

ifup ppp0

ifup ppp1

(ifup pppn…)

Or method 2:

adsl-start /etc/sysconfig/network-scripts/ifcfg-ppp0

adsl-start /etc/sysconfig/network-scripts/ifcfg-ppp1

(adsl-start /etc/sysconfig/network-scripts/ifcfg-pppn)

4. Check whether the new setup is up:

Method 1: run ifconfig

If ppp0 and ppp1 both appear and both have obtained an IP, it worked; the output looks like this:

ppp0 Link encap:Point-to-Point Protocol
inet addr:218.114.37.137 P-t-P:61.142.110.30 Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1492 Metric:1
RX packets:165721 errors:0 dropped:0 overruns:0 frame:0
TX packets:123673 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:3
RX bytes:157324193 (150.0 Mb) TX bytes:14068892 (13.4 Mb)

ppp1 Link encap:Point-to-Point Protocol
inet addr:218.114.35.62 P-t-P:61.142.110.30 Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1492 Metric:1
RX packets:560055 errors:0 dropped:0 overruns:0 frame:0
TX packets:439711 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:3
RX bytes:530025378 (505.4 Mb) TX bytes:80895162 (77.1 Mb)

Method 2: use the advanced ip command: ip link ls

This also verifies ppp0 and ppp1; the output looks like this:

1: lo: mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: mtu 1500 qdisc pfifo_fast qlen 100
link/ether 00:50:bf:07:4e:32 brd ff:ff:ff:ff:ff:ff
3: eth1: mtu 1500 qdisc pfifo_fast qlen 100
link/ether 00:01:02:9a:31:b9 brd ff:ff:ff:ff:ff:ff
4: eth2: mtu 1500 qdisc pfifo_fast qlen 100
link/ether 00:e0:4c:79:44:71 brd ff:ff:ff:ff:ff:ff
268: ppp1: mtu 1492 qdisc pfifo_fast qlen 3
link/ppp
273: ppp0: mtu 1492 qdisc cbq qlen 3
link/ppp

Method 3: test each line's response with ping

ping -I ppp0 202.96.134.133 (tests the ppp0 line)

ping -I ppp1 202.96.134.133 (tests the ppp1 line)

ping -I ppp2 ……n (if there are more lines)

5. If the goal is simply link load balancing, with ppp0 and ppp1 sharing the load, configure as follows:

(1) Enable routing and allow forwarding: echo 1 > /proc/sys/net/ipv4/ip_forward

(2) Set up masquerading:

iptables -A POSTROUTING -t nat -s 192.168.1.0/24 -o ppp0 -j MASQUERADE

iptables -A POSTROUTING -t nat -s 192.168.1.0/24 -o ppp1 -j MASQUERADE

iptables -A POSTROUTING -t nat -s 192.168.1.0/24 -o pppn -j MASQUERADE (if there are more lines)

(3) Change the default gateway

ip route replace default scope global nexthop dev ppp0 weight 1 nexthop dev ppp1 weight 1

Note: replace is used because it adds the gateway when none exists and modifies it when one already exists.

add also works, but if a default gateway already exists you must delete it first and then add.

The two weight 1 values mean the two links have equal weight and carry equal shares of the traffic.

One caveat: because routing is cache-based, in practice the two links will not split the traffic exactly 50/50.

nexthop NEXTHOP sets the next-hop address of a multipath route. NEXTHOP is fairly complex; its syntax resembles the following higher-level parameters:

via ADDRESS: the next-hop router;

dev NAME: the output device;

weight NUMBER: the weight of this element within the multipath route, reflecting relative bandwidth or quality of service.

If you have more lines, just keep appending nexthop dev ppp2 (ppp3……) to the command.
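For many lines, the multipath route command can be generated from a list of interfaces instead of being typed by hand; a sketch assuming equal weights for every link:

```shell
# Sketch: build the multipath default-route command for any number of
# ppp interfaces, all with weight 1 as in step 5.
cmd="ip route replace default scope global"
for dev in ppp0 ppp1; do
    cmd="$cmd nexthop dev $dev weight 1"
done
echo "$cmd"
# run the result (as root) with: eval "$cmd"
```

With ppp0 and ppp1 this reproduces exactly the command shown in step (3) above.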

(4) Flush the route cache

ip route flush cache

6. Check the direction of traffic in real time

Monitor the two lines separately with tcpdump, using the commands below:

tcpdump -i ppp0
tcpdump -i ppp1
tcpdump -i ppp2……n (if there are more lines)

7. To run all of this automatically at boot, put the commands in the startup script by appending the following to /etc/rc.local:

ifup ppp0
ifup ppp1
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -A POSTROUTING -t nat -s 192.168.1.0/24 -o ppp0 -j MASQUERADE
iptables -A POSTROUTING -t nat -s 192.168.1.0/24 -o ppp1 -j MASQUERADE
ip route replace default scope global nexthop dev ppp0 weight 1 nexthop dev ppp1 weight 1
ip route flush cache

Result: after months of operation, the setup is stable and works well.

Browsing is noticeably faster. Why? Because the two lines share the load. For example, with 80 internal users online at once on a single 2M ADSL line, the whole line is congested, jammed solid, and each user may get only about 30k of bandwidth; everyone clearly feels how slow it is.

Add a second line and the two 2M links share the congestion, so each user's share may double to about 60K. Speeding things up this way also increases the number of concurrent connections available.

2009-03-31 Tech Digest


Removing the CH entry from the Windows language bar

Right-click the language bar and choose Settings. First add an EN keyboard, for example English (United States) with the US keyboard layout. Click OK, then open the input method settings again and delete that English entry. The CH/EN list will never appear again; you are left with just the plain input-method bar.