Linux Network Management: NIC Alias and NIC Bonding Configuration

In day-to-day operations work you sometimes need to configure multiple IP addresses on a single physical NIC, which is what a NIC sub-interface (alias) is for, or to bond several NICs together, which in plain terms means several NICs sharing one IP address. Below I describe how to implement both.

Creating NIC sub-interfaces

On CentOS the network is managed by the NetworkManager service, which provides a graphical front end. This service, however, does not support configuring sub-interfaces on a physical NIC, so we need to stop it before setting up a sub-interface.

Stop it for the current session: service NetworkManager stop

Disable it at boot: chkconfig NetworkManager off

If you only need a sub-interface temporarily, create it like this:

[root@server ~]# ip addr add 10.1.252.100/16 dev eth0 label eth0:0

Note: the address is lost as soon as the network service is restarted.
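To confirm the temporary address is there, and to remove it again without waiting for a restart, something like the following should work (a minimal sketch using the same ip syntax and the address from the command above):

[root@server ~]# ip addr show dev eth0                  # the 10.1.252.100/16 address appears labelled eth0:0
[root@server ~]# ip addr del 10.1.252.100/16 dev eth0   # remove the temporary address by hand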

To make the sub-interface permanent it has to be written into a NIC configuration file. These files live under /etc/sysconfig/network-scripts/ and are named ifcfg- followed by the device name; suppose the configuration file for my sub-interface is called ifcfg-eth0:0.

vim /etc/sysconfig/network-scripts/ifcfg-eth0:0 (if typing this long path every time you edit a NIC configuration file gets tedious, you can define a shell alias for it, or simply cd into that directory and work from there)

DEVICE=eth0:0          // name of the sub-interface
BOOTPROTO=none         // addressing protocol; none means a static address
IPADDR=192.168.1.100   // IP address of the sub-interface
NETMASK=255.255.255.0  // netmask of the sub-interface
GATEWAY=192.168.1.254  // gateway for the sub-interface
DNS1=8.8.8.8           // DNS server for the sub-interface
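Two lines I would normally add here, though they are not in the file above, so treat them as assumptions for your own setup: ONBOOT=yes so the alias comes up when the network service starts, and NM_CONTROLLED=no so a still-running NetworkManager leaves the interface alone.

ONBOOT=yes        # bring the sub-interface up when the network service starts
NM_CONTROLLED=no  # keep NetworkManager away from this interface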

After editing the NIC configuration file, restart the network service:

[root@server network-scripts]# service network restart                                                    

[root@server network-scripts]# ifconfig                                                                                  

eth0      Link encap:Ethernet  HWaddr 00:0C:29:D1:18:FD                                     

          inet addr:10.1.252.100  Bcast:10.1.255.255  Mask:255.255.0.0          

          inet6 addr: fe80::20c:29ff:fed1:18fd/64 Scope:Link                                        

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1                

          RX packets:47570 errors:0 dropped:0 overruns:0 frame:0                           

          TX packets:1618 errors:0 dropped:0 overruns:0 carrier:0                            

          collisions:0 txqueuelen:1000                                                                                

          RX bytes:3140045 (2.9 MiB)  TX bytes:135945 (132.7 KiB)                         

                                                                                                                                                              

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:D1:18:FD                                                

          inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0 

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1                

At this point the network sub-interface is fully configured.
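To verify it, check that the address shows up on eth0 and then ping the sub-interface from another host on the 192.168.1.0/24 network (the client prompt below is just a placeholder for whatever other host you use):

[root@server ~]# ip addr show dev eth0    # 192.168.1.100/24 should be listed with the label eth0:0
[root@client ~]# ping -c 3 192.168.1.100  # run on the other host; replies mean the alias is reachable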

 

 

NIC bonding

Before showing how to configure bonding I will first explain how bonding works and what its operating modes are, and then walk through the actual configuration.

bonding

Bonding binds multiple NICs to one IP address to serve traffic, which gives you high availability or load balancing. Of course you cannot simply assign the same IP address to two NICs. With bonding, a virtual NIC is presented to the outside world and the physical NICs are changed to the same MAC address.

Normally a NIC only accepts Ethernet frames whose destination hardware address is its own MAC and filters out all other frames to reduce its load. A NIC also supports promiscuous (promisc) mode, in which it receives every frame on the network; tcpdump and bonding both rely on this mode. The bonding driver changes the MAC addresses of the two NICs to the same value, so frames destined for that MAC are accepted and handed to the bond driver for processing. The two NICs then behave as a single virtual NIC (bond0); this virtual NIC also needs a driver, and that driver is called bonding.
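As an aside, promiscuous mode is easy to observe for yourself; the bonding driver handles all of the MAC juggling on its own, so this is purely for illustration:

[root@server ~]# ip link set eth0 promisc on   # enable promiscuous mode on eth0
[root@server ~]# ip link show eth0             # the PROMISC flag now shows in the output
[root@server ~]# ip link set eth0 promisc off  # turn it back off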

Bonding operating modes

mode 0 balance-rr

Round-robin policy: packets are sent sequentially across the slave interfaces, from the first to the last. This mode provides load balancing and fault tolerance, and both NICs carry traffic.

 

mode 1 active-backup

Active-backup policy: only one slave in the bond is active; another slave is activated only when the active slave fails. To avoid confusing the switch, the bond's MAC address is externally visible on only one port.

 

mode 3 broadcast

Broadcast policy: every packet is transmitted on all slave interfaces. This mode provides fault tolerance.

 

Here I will configure mode 1. I am running this experiment in a VMware virtual machine, so before starting you need to add a second virtual NIC so that the Linux system actually has two NICs.
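After adding the second NIC in VMware and booting, it is worth confirming that both devices are visible before editing any configuration files (eth1 is simply the name the new NIC received on my system; yours may differ):

[root@server ~]# ip link show    # both eth0 and eth1 should be listed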

Step 1: create the configuration file for the bonding device

[root@server network-scripts]# vim ifcfg-bond0                                                                    

DEVICE=bond0                                                                                                                                 

BOOTPROTO=none                                                                                                                          

IPADDR=10.1.252.100                                                                                                                    

NETMASK=255.255.0.0                                                                                                                  

GATEWAY=10.1.0.1                                                                                                                         

DNS1=8.8.8.8                                                                                                                                    

BONDING_OPTS="miimon=100 mode=1"                                                                                 
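Depending on the release, CentOS 6 documentation also has you map the bond0 device name to the bonding kernel module so the module loads automatically when the interface comes up. A minimal sketch (the file name is only a convention; any .conf file under /etc/modprobe.d/ will do):

[root@server network-scripts]# vim /etc/modprobe.d/bonding.conf
alias bond0 bonding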

Step 2: edit the configuration files of the two physical NICs

[root@server network-scripts]# vim ifcfg-eth0                                                                       

DEVICE=eth0                                                                                                                                    

MASTER=bond0                                                                                                                               

SLAVE=yes                                                                                                                                          

                                                                                                                                                              

[root@server network-scripts]# vim ifcfg-eth1                                                                       

DEVICE=eth1                                                                                                                                    

MASTER=bond0                                                                                                                                

SLAVE=yes              
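The two files above show only the lines that matter for bonding. In practice I would expect each slave's ifcfg file to look more like the sketch below; ONBOOT, BOOTPROTO and USERCTL are my additions, not part of the files above, so adjust them to your environment:

DEVICE=eth0
BOOTPROTO=none   # the slave carries no address of its own
ONBOOT=yes       # bring the slave up at boot
MASTER=bond0
SLAVE=yes
USERCTL=no       # do not let non-root users control the interface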

Note: miimon performs link monitoring. With miimon=100 the system checks the link state every 100 milliseconds, and if one link goes down traffic is switched to the other.

    mode=1 means the working mode is active-backup.

    MASTER=bond0 means the master device is bond0.

 

Once the configuration is done, all that is left is to restart the network service. To test, ping bond0's IP address from another host, then check how the bond behaves: take one of the NICs down and see whether the other one takes over; if it does, the setup works.

Watching the bond status: watch -n 1 cat /proc/net/bonding/bond0 observes the bond state dynamically; its output looks like this:

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)                                           

Bonding Mode: fault-tolerance (active-backup)                                                             

Primary Slave: None                                                                                                               

Currently Active Slave: eth0                                                                                                

MII Status: up                                                                                                                          

MII Polling Interval (ms): 100                                                                                              

Up Delay (ms): 0                                                                                                                     

Down Delay (ms): 0                                                                                                                

Slave Interface: eth0                                                                                                             

MII Status: up                                                                                                                          

Speed: 1000 Mbps                                                                                                                  

Duplex: full                                                                                                                               

Link Failure Count: 0                                                                                                              

Permanent HW addr: 00:0c:29:d1:18:fd                                                                          

Slave queue ID: 0                                                                                                                    

Slave Interface: eth1                                                                                                             

MII Status: up                                                                                                                          

Speed: 1000 Mbps                                                                                                                  

Duplex: full                                                                                                                               

Link Failure Count: 0                                                                                                              

Permanent HW addr: 00:0c:29:d1:18:07                                                                         

Slave queue ID:  0                                                                                                                                                                                       

 

After I take eth0 down, the currently active NIC becomes eth1:

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)                                           

Bonding Mode: fault-tolerance (active-backup)                                                             

Primary Slave: None                                                                                                               

Currently Active Slave: eth1                                                                                                

MII Status: down                                                                                                                    

MII Polling Interval (ms): 100                                                                                              

Up Delay (ms): 0                                                                                                                     

Down Delay (ms): 0                                                                                                                

Slave Interface: eth0                                                                                                             

MII Status: up                                                                                                                          

Speed: 1000 Mbps                                                                                                                  

Duplex: full                                                                                                                               

Link Failure Count: 0                                                                                                              

Permanent HW addr: 00:0c:29:d1:18:fd                                                                          

Slave queue ID: 0                                                                                                                    

Slave Interface: eth1                                                                                                             

MII Status: up                                                                                                                          

Speed: 1000 Mbps                                                                                                                  

Duplex: full                                                                                                                               

Link Failure Count: 0                                                                                                              

Permanent HW addr: 00:0c:29:d1:18:07                                                                         

Slave queue ID: 0
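For completeness, the failover test itself can be driven with something like the sketch below: keep a ping running against bond0's address from another host while you take one slave down on the server (the client prompt and the choice of ifdown are assumptions; ip link set eth0 down works just as well):

[root@client ~]# ping 10.1.252.100            # keep this running on the other host during the test
[root@server ~]# ifdown eth0                  # take the active slave down
[root@server ~]# cat /proc/net/bonding/bond0  # Currently Active Slave should now read eth1
[root@server ~]# ifup eth0                    # bring eth0 back once the test is done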

                                                                                                                                                                                                                                             

Original article by fszxxxks. If you repost it, please credit the source: http://www.178linux.com/42839
