Installing VMware VCSA 6.7 (VMware-VCSA-all-6.7.0) on Windows Server 2016

Login URLs:

https://192.168.4.221/ui   # HTML5 client

https://192.168.4.221/vsphere-client/?csp  # Flash client

Account: administrator@vsphere.local

Password: 123456

Reference: https://blog.51cto.com/3701740/2112464

1. Download the installation package VMware-VCSA-all-6.7.0-13643870.iso, mount it with a virtual optical drive or extract it, run the installer and choose "Install"; the VCSA 6.7 installer offers the other options on the same screen.

Download link: https://pan.baidu.com/s/1gyTi3z18H1gPRSO_ki9Geg

Extraction code: 1qtx

Environment and tools

1) Three ESXi 6.7 hosts

2) One virtual machine running Windows Server 2019

3) VMware-VCSA-all-6.7.0-13643870.iso

2. Extract the package

3. The installer notes that deployment is split into two stages

4. Check "I accept the terms of the license agreement"

5. Select "Embedded PSC"

6. Specify the ESXi host or vCenter that VCSA 6.7 will be deployed to

7. A certificate warning appears; choose "Yes"

8. Configure the VCSA 6.7 VM name and root password

9. Select the deployment size

10. Select the storage for the VCSA 6.7 VM

11. Configure the VCSA 6.7 VM network

12. Confirm the stage 1 settings

13. Stage 1 deployment begins

14. During deployment the VCSA 6.7 VM is powered on and can be pinged

15. Stage 1 completes and stage 2 begins

16. Stage 2 configuration starts

17. Configure the NTP server


18. Configure the SSO parameters

19. Choose whether to join the CEIP

20. Confirm the settings

21. Confirm and start stage 2 deployment

22. Services start up

23. Deployment time depends on the performance of the physical server

24. VCSA 6.7 VM console

25. VCSA 6.7 offers both HTML5 and Flash clients; from initial use, the HTML5 client is noticeably more capable than in VCSA 6.5

26. Log in with the SSO account

27. VCSA 6.7 HTML5 interface

28. HTML5 home page

29. Flash interface

Windows Server 2016 KMS client setup keys

Windows Server 2016 Datacenter:CB7KF-BWN84-R7R2Y-793K2-8XDDG

Windows Server 2016 Standard:WC2BQ-8NRM3-FDDYY-2BFGV-KHKQY

Windows Server 2016 Essentials:JCKRF-N37P4-C2D82-9YXRT-4M63B

KMS key activation steps:

Right-click the Start icon and open Command Prompt (Admin)

slmgr /ipk CB7KF-BWN84-R7R2Y-793K2-8XDDG

slmgr /skms kms.03k.org

slmgr /ato
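
Optionally, you can confirm that activation succeeded; the two slmgr queries below are a suggested check and not part of the original steps (run them in the same elevated Command Prompt):

slmgr /xpr

slmgr /dlv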

Installing Pure-FTPd on CentOS 7

1. Create the group and user

groupadd ftpgroup

useradd ftpuser -g ftpgroup -s /sbin/nologin

2. Give ftpuser ownership of /data/www/html

mkdir -p /data/www/html

chown -R ftpuser:ftpgroup /data/www/html

3. Install pure-ftpd

yum install epel-release  # the default yum repos do not include pure-ftpd, so install the EPEL repo first

yum install pure-ftpd -y

4. Download a pre-configured pure-ftpd.conf

wget -P /etc/pure-ftpd/ http://www.kglan.com/soft/pure-ftp/pure-ftpd.conf

5. Edit pure-ftpd.conf

vi /etc/pure-ftpd/pure-ftpd.conf

# Change the settings as shown below:

# Chroot every user to their home directory

ChrootEveryone              yes

# Trusted group ID; not needed here, keep it commented out

# TrustedGID                    100

# Whether to disconnect non-compliant clients; no keeps compatibility with less standards-compliant FTP clients such as IE

BrokenClientsCompatibility  no

# Maximum number of simultaneous clients

MaxClientsNumber            10

# Run as a daemon; set to yes

Daemonize                   yes

# Maximum number of connections per IP address

MaxClientsPerIP             8

# Whether to log every FTP command

VerboseLog                  no

# List dot-files even when the client does not send -a?

DisplayDotFiles             yes

# Anonymous users only? This server is not public, so authentication is required and anonymous logins are not wanted

AnonymousOnly               no

# yes forbids anonymous logins; only authenticated users may log in

NoAnonymous                 yes

# The default syslog facility is "ftp"; "none" disables logging

SyslogFacility              ftp

# Message displayed to users after login

# FortunesFile              /usr/share/fortune/zippy

# Disable reverse DNS lookups; host names are not resolved in the log files

DontResolve                 yes

# LDAP configuration file path

# LDAPConfigFile                /etc/pure-ftpd/pureftpd-ldap.conf

# MySQL configuration file path

# MySQLConfigFile               /etc/pure-ftpd/pureftpd-mysql.conf

# PostgreSQL configuration file path

# PGSQLConfigFile               /etc/pure-ftpd/pureftpd-pgsql.conf

# Uncomment one of the lines above if you want to keep user accounts in that kind of database

# Path of the virtual-user database; the virtual users created below are stored here

PureDB                        /etc/pure-ftpd/pureftpd.pdb

# Socket path of the pure-authd authentication service

# ExtAuth                       /var/run/ftpd.sock

# Enable PAM authentication

PAMAuthentication             yes

# Unix authentication; only one authentication method is needed

# UnixAuthentication            yes

# Whether anonymous users may create directories

AnonymousCanCreateDirs      no

# Load threshold; when the system load exceeds this value, anonymous downloads are refused

MaxLoad                     2

# Port range for passive-mode data connections; 31888 to 36888 is used here

# mainly to avoid changing the firewall and to reuse the port rules from an earlier vsftpd setup

PassivePortRange          31888 36888

# Force a specific IP address in passive replies (PASV/EPSV/SPSV)

#ForcePassiveIP                192.168.0.1

# Upload/download ratio for anonymous users

# AnonymousRatio                1 10

# Upload/download ratio for all users (global)

# UserRatio                 1 10

# Disallow downloading files owned by the ftp user (anti-warez)

AntiWarez                   yes

# IP address and port the service listens on (default: all addresses, port 21)

# Bind                      127.0.0.1,21

# Bandwidth limit for anonymous users (KB/s)

# AnonymousBandwidth            8

# Bandwidth limit for authenticated users (KB/s)

# UserBandwidth             8

# umask for files and directories

Umask                       133:022

# Only users with a UID of at least 1000 may log in

MinUID                      1000

# Whether to use /etc/ftpusers to ban accounts; default is no

UseFtpUsers no

# Allow FXP transfers only for authenticated users? Default is no; set to yes here

AllowUserFXP                yes

# Allow anonymous FXP transfers for anonymous and non-anonymous users

AllowAnonymousFXP           no

# Users may not write or delete dot-files (file names starting with '.'), even if they own them

ProhibitDotFilesWrite       no

# Same as above, for reading dot-files

ProhibitDotFilesRead        no

# Automatically rename uploads when the file already exists? Must be no

AutoRename                  no

# yes prevents anonymous users from uploading new files

AnonymousCantUpload         yes

# Only allow non-anonymous connections from the following IP address

#TrustedIP                  10.1.1.1

# Add the PID to every log line

LogPID                     yes

# Log file path (CLF format)

AltLog                     clf:/var/log/pureftpd.log

# yes rejects the CHMOD command; users cannot change the permissions of their files

#NoChmod                     yes

# yes lets users resume and upload files but not delete them

#KeepAllFiles                yes

# Automatically create the home directory if it does not exist

CreateHomeDir               no

# Uncomment to enable quotas; 1000:10 limits each user to 1000 files and 10 MB in total

#Quota                       1000:10

# Runtime PID file path

#PIDFile                     /var/run/pure-ftpd.pid

# If pure-ftpd was compiled with pure-uploadscript support, this directive makes pure-ftpd

# write information about new uploads to /var/run/pure-ftpd.upload.pipe, so that pure-uploadscript

# can read it and call a script to handle each new upload.

# Used well, this feature can do a lot.

#CallUploadScript yes

# Maximum percentage of disk usage; uploads are refused once it is exceeded

MaxDiskUsage               99

# Set to 'yes' if you don't want your users to rename files.

# Whether to prevent users from renaming existing files

NoRename                  no

# yes protects users from locking themselves out of their own files with a bad chmod

CustomerProof              yes

# 3:20 means each authenticated user may have at most 3 simultaneous sessions, with at most 20 anonymous sessions in total

# PerUserLimits            3:20

# yes uploads to a temporary file and leaves any existing file of the same name untouched until the upload completes; no truncates the existing file first

NoTruncate               yes

# TLS                      1

# SSL is disabled by default. TLS 1.0, 1.1 and 1.2 are available by

# default.

# TLSCipherSuite           HIGH

# Certificate file, for TLS

# CertFile                 /etc/ssl/private/pure-ftpd.pem

# Only accept IPv4 connections

IPV4Only                 yes

# Listen only to IPv6 addresses in standalone mode (ie. disable IPv4)

# By default, both IPv4 and IPv6 are enabled.

# IPV6Only                 yes

FileSystemCharset    UTF-8

ClientCharset    UTF-8

6. Create a virtual user and generate the user database

pure-pw useradd myftp -u ftpuser -d /data/www/html

Password: 123456

pure-pw mkdb /etc/pure-ftpd/pureftpd.pdb
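
To verify the account was written correctly, you can list it (a suggested check; pure-pw show is part of pure-ftpd, but this step is not in the original notes):

pure-pw show myftp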

7. Start the service and enable it at boot

systemctl start  pure-ftpd

systemctl enable pure-ftpd

systemctl status pure-ftpd

8. Open firewall ports

FTP does not use only port 21: the control connection uses port 21 (plus 20 for active mode), and passive data connections use the PassivePortRange configured above, so open 20-21 and 31888-36888:

firewall-cmd --zone=public --add-port=20-21/tcp --permanent

firewall-cmd --permanent --zone=public --add-port=31888-36888/tcp

firewall-cmd --reload

9. Test locally with an FTP client

# Check the system log

cat /var/log/messages

# Check the security log

cat /var/log/secure

# First check whether port 21 is listening

netstat -an | grep 21

# Then check the pure-ftpd process

ps -aux | grep pure-ftpd

# When Linux starts a process, it creates a directory named after the PID under /proc; this directory maps kernel and process information and contains a symlink named exe that records the executable's absolute path, which you can view with ll or ls -l

ls -l /proc/PID

10. Change a virtual user's password

pure-pw passwd myftp
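
Note that pure-ftpd authenticates against the PureDB file, so a password change typically only takes effect after the database is rebuilt as in step 6 (or by adding -m to the pure-pw command to update it in one go):

pure-pw mkdb /etc/pure-ftpd/pureftpd.pdb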

ESET NOD32 Antivirus 4 activation codes

V263-3733-4US6-DSHF-C676-JNUV
V263-3733-4US6-DT3N-D4QL-5RHJ
V263-3733-4US6-DTNE-P7FB-C848
V263-3733-4US6-DUME-57LS-8HWL
V263-3733-4US6-DVF8-K4EK-9HQQ
V263-3733-4US6-DVQS-353F-D3MF
V263-3733-4US6-DWJF-Q5PW-HVVN
V263-3733-4US6-DWR3-N8ES-GBHD
V263-3733-4US6-DXCW-H88F-SQM7
V263-3733-4US6-DYPE-73EB-78EX
V263-3733-4US6-E36F-X6BC-UFSV
V263-3733-4US6-DRF5-E6JC-JNXF
V263-3733-4US6-DS6A-F6B5-YTC4

CNDU-W33T-AACE-EU9U-2XFG

Installing VNC Server on CentOS 7

# Install
yum groupinstall "GNOME Desktop"
yum install tigervnc-server tigervnc-server-module
cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
vim /etc/systemd/system/vncserver@:1.service
# Edit the unit file as follows
[Unit]
Description=Remote desktop service (VNC)
After=syslog.target network.target

[Service]
Type=forking

# Clean any existing files in /tmp/.X11-unix environment
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/sbin/runuser -l root -c "/usr/bin/vncserver %i"
ExecStop=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
# The session runs as root, so the PID file lives under /root
PIDFile=/root/.vnc/%H%i.pid

[Install]
WantedBy=multi-user.target

# Configure the firewall
firewall-cmd --permanent --zone=public --add-service vnc-server
firewall-cmd --reload
# Reload the systemd unit files
systemctl daemon-reload
# Enable at boot
systemctl enable vncserver@:1.service
# Start the service
systemctl start vncserver@:1.service

# Check which port the service listens on
netstat -lnpt|grep Xvnc
# Open the port that is listening; each user gets its own port, and the first display defaults to 5901. Here the session runs as root
firewall-cmd --add-port=5901/tcp --permanent
firewall-cmd --reload
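
A quick way to verify from another machine (a suggested check, not in the original notes; it assumes a VNC password has already been set on the server, and <server-ip> stands for the VNC host's address):
# On the server, once: set the VNC password the viewer will use
vncpasswd
# From a client machine with a VNC viewer such as TigerVNC; display :1 maps to port 5901
vncviewer <server-ip>:1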

Installing and configuring a Spark cluster on CentOS 7

I. Server plan
192.168.4.116 hadoop-namenode # this node runs only the namenode service
192.168.4.135 hadoop-yarn # this node runs the resourcemanager service
192.168.4.16 hadoop-datanode1 # data node
192.168.4.210 hadoop-datanode2 # data node
192.168.4.254 hadoop-datanode3 # data node

II. Server tuning
1. Prerequisites
Java 8+, Python 2.7+/3.4+, R 3.1+, Scala 2.11+ and Hadoop 3.1.1

OS: CentOS Linux release 7.3.1611 (Core)
Kernel: 4.19.0-1.el7.elrepo.x86_64
JDK version: 1.8.0_20
Hadoop version: 3.1.1
Scala version: 2.12.8
Spark version: 2.3.2

2. Upgrade the CentOS 7 kernel
Steps:
# Import the public key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# Install the ELRepo repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# Load the elrepo-kernel metadata
yum --disablerepo=\* --enablerepo=elrepo-kernel repolist
# List the available kernel packages
yum --disablerepo=\* --enablerepo=elrepo-kernel list kernel*
# Install the latest mainline kernel
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml.x86_64
# Reboot the system
reboot
# Remove the old kernel tools packages
yum remove kernel-tools-libs.x86_64 kernel-tools.x86_64
# Install the new kernel tools packages
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml-tools.x86_64
# Set the newly installed kernel as the default so it is used after reboot
grub2-set-default 0
grub2-mkconfig -o /etc/grub2.cfg
# Reboot the system again
reboot
The kernel upgrade is now complete.

# Check the distribution release
cat /etc/redhat-release
# Check the kernel version
uname -r

3. Configure static IPs
# Start and enable NetworkManager
systemctl start NetworkManager
systemctl enable NetworkManager
systemctl status NetworkManager
systemctl restart NetworkManager

# Configuration (run the line that matches each host)
nmcli con show
nmcli con mod 'Wired connection 1' ipv4.method manual ipv4.addresses 192.168.4.135/24 ipv4.gateway 192.168.4.1 ipv4.dns 8.8.8.8 connection.autoconnect yes
nmcli con mod 'Wired connection 1' ipv4.method manual ipv4.addresses 192.168.4.16/24 ipv4.gateway 192.168.4.1 ipv4.dns 8.8.8.8 connection.autoconnect yes
nmcli con mod 'Wired connection 1' ipv4.method manual ipv4.addresses 192.168.4.210/24 ipv4.gateway 192.168.4.1 ipv4.dns 8.8.8.8 connection.autoconnect yes
nmcli con mod 'Wired connection 2' ipv4.method manual ipv4.addresses 192.168.4.116/24 ipv4.gateway 192.168.4.1 ipv4.dns 8.8.8.8 connection.autoconnect yes
nmcli con mod 'Wired connection 2' ipv4.method manual ipv4.addresses 192.168.4.254/24 ipv4.gateway 192.168.4.1 ipv4.dns 8.8.8.8 connection.autoconnect yes
nmcli con reload
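# Optional check (a suggestion, not in the original notes): confirm the static address took effect and the gateway is reachable
ip addr show
ping -c 3 192.168.4.1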

5. Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

6. Tune kernel parameters
vim /etc/sysctl.conf
# Append the following:
net.ipv4.icmp_echo_ignore_all = 0
net.ipv4.tcp_fin_timeout = 2
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_max_orphans = 2000
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.ip_local_port_range = 5000 65000
net.core.netdev_max_backlog = 1000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.nf_conntrack_max = 25000000
net.netfilter.nf_conntrack_max = 25000000
net.netfilter.nf_conntrack_tcp_timeout_established = 180
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

Run sysctl -p to apply the changes.

7. Set the hostname (run the matching line on each host)
hostnamectl set-hostname hadoop-namenode --static
hostnamectl set-hostname hadoop-yarn --static
hostnamectl set-hostname hadoop-datanode1 --static
hostnamectl set-hostname hadoop-datanode2 --static
hostnamectl set-hostname hadoop-datanode3 --static

8. Passwordless SSH (this step can also be handled with the method in step 7 of Part IV "Install Hadoop"; if it is done here, skip that step there)
# Generate a key pair on every node
ssh-keygen -t rsa
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
# On hadoop-namenode (192.168.4.116)
scp -p ~/.ssh/id_rsa.pub root@192.168.4.16:/root/.ssh/authorized_keys
scp -p ~/.ssh/id_rsa.pub root@192.168.4.135:/root/.ssh/authorized_keys
scp -p ~/.ssh/id_rsa.pub root@192.168.4.210:/root/.ssh/authorized_keys
scp -p ~/.ssh/id_rsa.pub root@192.168.4.254:/root/.ssh/authorized_keys
# On hadoop-yarn (192.168.4.135)
scp -p ~/.ssh/id_rsa.pub root@192.168.4.16:/root/.ssh/authorized_keys
scp -p ~/.ssh/id_rsa.pub root@192.168.4.116:/root/.ssh/authorized_keys
scp -p ~/.ssh/id_rsa.pub root@192.168.4.210:/root/.ssh/authorized_keys
scp -p ~/.ssh/id_rsa.pub root@192.168.4.254:/root/.ssh/authorized_keys
# On hadoop-datanode1 (192.168.4.16)
scp -p ~/.ssh/id_rsa.pub root@192.168.4.116:/root/.ssh/authorized_keys
scp -p ~/.ssh/id_rsa.pub root@192.168.4.135:/root/.ssh/authorized_keys
scp -p ~/.ssh/id_rsa.pub root@192.168.4.210:/root/.ssh/authorized_keys
scp -p ~/.ssh/id_rsa.pub root@192.168.4.254:/root/.ssh/authorized_keys
# On hadoop-datanode2 (192.168.4.210)
scp -p ~/.ssh/id_rsa.pub root@192.168.4.16:/root/.ssh/authorized_keys
scp -p ~/.ssh/id_rsa.pub root@192.168.4.116:/root/.ssh/authorized_keys
scp -p ~/.ssh/id_rsa.pub root@192.168.4.135:/root/.ssh/authorized_keys
scp -p ~/.ssh/id_rsa.pub root@192.168.4.254:/root/.ssh/authorized_keys
# On hadoop-datanode3 (192.168.4.254)
scp -p ~/.ssh/id_rsa.pub root@192.168.4.16:/root/.ssh/authorized_keys
scp -p ~/.ssh/id_rsa.pub root@192.168.4.116:/root/.ssh/authorized_keys
scp -p ~/.ssh/id_rsa.pub root@192.168.4.135:/root/.ssh/authorized_keys
scp -p ~/.ssh/id_rsa.pub root@192.168.4.210:/root/.ssh/authorized_keys
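# Quick check that the key exchange worked (a suggestion, not part of the original notes): from hadoop-namenode, each login below should succeed without a password prompt
for ip in 192.168.4.135 192.168.4.16 192.168.4.210 192.168.4.254; do ssh root@$ip hostname; done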

9. Fix the case where pinging 127.0.0.1 or the host's own LAN IP fails
# Enable temporarily
echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all
# Enable permanently
echo "net.ipv4.icmp_echo_ignore_all = 0" >> /etc/sysctl.conf
sysctl -p

III. Install JDK 1.8
yum install lrzsz wget vim -y

# Upload the JDK 1.8 tarball, remove any bundled OpenJDK, then extract
rpm -qa | grep openjdk
yum -y remove java-*
tar -xvf jdk-8u20-linux-x64.tar.gz
rm -f jdk-8u20-linux-x64.tar.gz

# Configure the JDK environment variables
vim /etc/profile.d/java.sh
# Add the following:
#!/bin/bash
JAVA_HOME=/data/jdk1.8.0_20/
PATH=$JAVA_HOME/bin:$PATH
export PATH JAVA_HOME
export CLASSPATH=.

# Make the script executable and load it
chmod +x /etc/profile.d/java.sh
source /etc/profile.d/java.sh

# Check the JDK version
java -version

IV. Install Hadoop
1. Create the hadoop user
groupadd hadoop
useradd -g hadoop -s /usr/sbin/nologin hadoop

2. Give the hadoop user sudo rights; this makes deployment easier and avoids permission issues that can trip up beginners
visudo
# Add a line below root ALL=(ALL) ALL
hadoop ALL=(ALL) ALL

3. For easier testing, stop the firewall on all servers
systemctl stop firewalld # stop firewalld
systemctl disable firewalld # keep firewalld from starting at boot
firewall-cmd --state # check the firewall state (shows "not running" when stopped, "running" when started)

4. Disable SELinux on all servers
vim /etc/selinux/config
# Change to
SELINUX=disabled

5. Set the hostname
vim /etc/hostname
# Content:
hadoop-namenode # on the other nodes use hadoop-yarn, hadoop-datanode1, hadoop-datanode2 and hadoop-datanode3 respectively

6. Configure /etc/hosts
vim /etc/hosts
# Add:
192.168.4.116 hadoop-namenode
192.168.4.135 hadoop-yarn
192.168.4.16 hadoop-datanode1
192.168.4.210 hadoop-datanode2
192.168.4.254 hadoop-datanode3

7. Passwordless SSH login (likewise append the other nodes' public keys, so that every node holds every other machine's key)
# Press Enter through the prompts; id_rsa.pub is generated under ~/.ssh and is then appended to authorized_keys
ssh-keygen -t rsa
cd ~/.ssh

# Run on every server (this step is covered by "8. Passwordless SSH" in Part II; skip it if that was already done)
yum -y install openssh-clients
# On hadoop-namenode (192.168.4.116)
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.16
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.135
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.210
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.254
# On hadoop-yarn (192.168.4.135)
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.16
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.116
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.210
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.254
# On hadoop-datanode1 (192.168.4.16)
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.116
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.135
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.210
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.254
# On hadoop-datanode2 (192.168.4.210)
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.16
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.116
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.135
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.254
# On hadoop-datanode3 (192.168.4.254)
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.16
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.116
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.135
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.4.210

8. Extract Hadoop to the target directory
# Download the Hadoop tarball
wget http://apache.01link.hk/hadoop/common/hadoop-3.1.1/hadoop-3.1.1.tar.gz
# -C specifies the extraction directory
tar -zxvf hadoop-3.1.1.tar.gz -C /data
mkdir -pv /data/hadoop-3.1.1/dfs/tmp
mkdir -pv /data/hadoop-3.1.1/dfs/name
mkdir -pv /data/hadoop-3.1.1/dfs/data
chown -R hadoop.hadoop /data/hadoop-3.1.1
chmod 755 -R /data/hadoop-3.1.1
rm -f /data/hadoop-3.1.1.tar.gz

9. Configure the Hadoop environment variables
vim ~/.bash_profile
# Add the following:
export HADOOP_HOME=/data/hadoop-3.1.1
export PATH=$PATH:$HADOOP_HOME/bin

# Apply immediately; otherwise it only takes effect after the next login
source ~/.bash_profile
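
# Optional sanity check (not in the original notes): hadoop should now be on the PATH and report the installed release
hadoop version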

10. Configure hadoop-env.sh, mapred-env.sh and yarn-env.sh by adding the JAVA_HOME path to all three files, as follows
vim /data/hadoop-3.1.1/etc/hadoop/hadoop-env.sh
vim /data/hadoop-3.1.1/etc/hadoop/mapred-env.sh
vim /data/hadoop-3.1.1/etc/hadoop/yarn-env.sh
# Add the following:
export JAVA_HOME=/data/jdk1.8.0_20

11. Configure core-site.xml
vim /data/hadoop-3.1.1/etc/hadoop/core-site.xml
# Add the following (the hostnames can be replaced with the corresponding IPs):
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop-namenode:9000</value>
<description>NameNode address</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///data/hadoop-3.1.1/dfs/tmp</value>
<description>Directory where the NameNode stores its data</description>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
</configuration>

12. Configure hdfs-site.xml
vim /data/hadoop-3.1.1/etc/hadoop/hdfs-site.xml
# Add the following (the hostnames can be replaced with the corresponding IPs):
<configuration>
<property>
<name>dfs.namenode.http-address</name>
<value>hadoop-namenode:50070</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop-namenode:50090</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Replication factor; usually more than one, set to 1 here for testing</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///data/hadoop-3.1.1/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///data/hadoop-3.1.1/dfs/data</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>16m</value>
</property>

</configuration>

13. Configure mapred-site.xml
vim /data/hadoop-3.1.1/etc/hadoop/mapred-site.xml
# Add the following (the hostnames can be replaced with the corresponding IPs):
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop-yarn:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop-yarn:19888</value>
</property>

<property>
<name>mapreduce.application.classpath</name>
<value>
/data/hadoop-3.1.1/etc/hadoop,
/data/hadoop-3.1.1/share/hadoop/common/*,
/data/hadoop-3.1.1/share/hadoop/common/lib/*,
/data/hadoop-3.1.1/share/hadoop/hdfs/*,
/data/hadoop-3.1.1/share/hadoop/hdfs/lib/*,
/data/hadoop-3.1.1/share/hadoop/mapreduce/*,
/data/hadoop-3.1.1/share/hadoop/mapreduce/lib/*,
/data/hadoop-3.1.1/share/hadoop/yarn/*,
/data/hadoop-3.1.1/share/hadoop/yarn/lib/*
</value>
</property>

</configuration>

14. Configure yarn-site.xml
vim /data/hadoop-3.1.1/etc/hadoop/yarn-site.xml
# Add the following (the hostnames can be replaced with the corresponding IPs):
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop-yarn</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop-yarn:8032</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop-yarn:8031</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop-yarn:8030</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop-yarn:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop-yarn:8088</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>6</value>
<description>Maximum ratio of virtual memory to physical memory allowed per task</description>
</property>
</configuration>

15. Copy the configured Hadoop directory to the other servers with scp:
scp -r /data/hadoop-3.1.1 hadoop-yarn:/data
scp -r /data/hadoop-3.1.1 hadoop-datanode1:/data
scp -r /data/hadoop-3.1.1 hadoop-datanode2:/data
scp -r /data/hadoop-3.1.1 hadoop-datanode3:/data

16. Format the NameNode on hadoop-namenode
cd /data/hadoop-3.1.1
./bin/hdfs namenode -format

17. Start the NameNode on hadoop-namenode
./bin/hdfs --daemon start namenode

18. Start the ResourceManager and NodeManager on hadoop-yarn
cd /data/hadoop-3.1.1
./bin/yarn --daemon start resourcemanager
./bin/yarn --daemon start nodemanager

19. Start the DataNode and NodeManager on hadoop-datanode1, hadoop-datanode2 and hadoop-datanode3
cd /data/hadoop-3.1.1
./bin/hdfs --daemon start datanode
./bin/yarn --daemon start nodemanager

20. Use jps to list the running Java processes
jps

21. Verify the cluster with the bundled example job
cd /data/hadoop-3.1.1
./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar pi 1 2
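# A simple HDFS smoke test (a suggested extra check, not in the original notes) confirming the NameNode and DataNodes can exchange data
./bin/hdfs dfs -mkdir /test
./bin/hdfs dfs -put etc/hadoop/core-site.xml /test/
./bin/hdfs dfs -ls /test
./bin/hdfs dfsadmin -report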

22. On the Windows workstation, add the following host mappings to C:\Windows\System32\drivers\etc\hosts
# hadoop-spark big data cluster
192.168.4.116 hadoop-namenode
192.168.4.135 hadoop-yarn
192.168.4.16 hadoop-datanode1
192.168.4.210 hadoop-datanode2
192.168.4.254 hadoop-datanode3

23. Check the cluster through the web UIs
http://hadoop-namenode:50070 # or the corresponding IP
http://hadoop-yarn:8088 # or the corresponding IP

V. Install Scala
1. Download and extract
wget https://downloads.lightbend.com/scala/2.12.8/scala-2.12.8.tgz
tar -zxvf scala-2.12.8.tgz -C /data
chown -R hadoop:hadoop /data/scala-2.12.8
chmod 755 -R /data/scala-2.12.8
rm -f /data/scala-2.12.8.tgz

2. Edit /etc/profile
vim /etc/profile
## Scala environment variables:
export SCALA_HOME=/data/scala-2.12.8
export PATH=$PATH:$SCALA_HOME/bin
# Apply immediately
source /etc/profile
# Check the Scala version
scala -version

VI. Install Spark
1. The Spark installation breaks down into:
1) Preparation: upload the package to the master node, extract it and move it to /data/;
2) Cluster configuration: edit /etc/profile, conf/slaves and conf/spark-env.sh (3 files in all), then distribute the Spark directory to the other nodes;
3) Start the cluster and verify with jps and a browser on the host machine;
4) Start the spark-shell client and verify again from the browser.

2. Extract
tar -zxvf spark-2.3.2-bin-hadoop2.7.tgz -C /data
chown -R hadoop:hadoop /data/spark-2.3.2-bin-hadoop2.7
chmod 755 -R /data/spark-2.3.2-bin-hadoop2.7
rm -f /data/spark-2.3.2-bin-hadoop2.7.tgz

3. Configuration files and distribution
3.1 Configure /etc/profile on every node
vim /etc/profile
# Spark environment variables:
export SPARK_HOME=/data/spark-2.3.2-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
# Apply immediately
source /etc/profile

3.2 Configure conf/slaves
cp /data/spark-2.3.2-bin-hadoop2.7/conf/slaves.template /data/spark-2.3.2-bin-hadoop2.7/conf/slaves
vim /data/spark-2.3.2-bin-hadoop2.7/conf/slaves
# A Spark Worker will be started on each of the machines listed below.
# Replace the default #localhost entry with the worker nodes:
hadoop-datanode1
hadoop-datanode2
hadoop-datanode3

3.3 Configure conf/spark-env.sh
cp /data/spark-2.3.2-bin-hadoop2.7/conf/spark-env.sh.template /data/spark-2.3.2-bin-hadoop2.7/conf/spark-env.sh
vim /data/spark-2.3.2-bin-hadoop2.7/conf/spark-env.sh
# Add the following:
export JAVA_HOME=/data/jdk1.8.0_20
export SCALA_HOME=/data/scala-2.12.8
export HADOOP_HOME=/data/hadoop-3.1.1
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.interval=864000 -Dspark.worker.cleanup.appDataTtl=864000"
export SPARK_MASTER_IP=hadoop-datanode1
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=900M

3.4 Reset ownership and permissions
chown -R hadoop:hadoop /data/spark-2.3.2-bin-hadoop2.7
chmod 755 -R /data/spark-2.3.2-bin-hadoop2.7

3.5 Distribute Spark to the other nodes
# On hadoop-datanode1, copy the spark-2.3.2-bin-hadoop2.7 directory under /data/ to hadoop-datanode2 and hadoop-datanode3
scp -r /data/spark-2.3.2-bin-hadoop2.7 hadoop-datanode2:/data/
scp -r /data/spark-2.3.2-bin-hadoop2.7 hadoop-datanode3:/data/

3.6 On the Spark master hadoop-datanode1, set up SSH trust from the master to itself
Otherwise start-slaves.sh may fail with "Permission denied, please try again" when starting the workers.
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

3.7 Manage the Spark services
cd /data/spark-2.3.2-bin-hadoop2.7/sbin
./start-all.sh # start Spark
./stop-all.sh # stop Spark
jps # verify the processes started
netstat -nlt # check the listening ports on hadoop-datanode1

3.8 Verify a client connection
# On hadoop-datanode1, go to the bin directory of spark-2.3.2-bin-hadoop2.7 and connect to the cluster with spark-shell
cd /data/spark-2.3.2-bin-hadoop2.7/bin
spark-shell --master spark://hadoop-datanode1:7077 --executor-memory 600m
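# Another end-to-end check (a suggestion, not part of the original notes): submit a bundled example to the cluster; run-example passes options such as --master on to spark-submit
cd /data/spark-2.3.2-bin-hadoop2.7
./bin/run-example --master spark://hadoop-datanode1:7077 SparkPi 10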

VII. Service start-up scripts
1. On hadoop-namenode
mkdir -pv /root/.script
vim /root/.script/start_hadoop-namenode.sh
# Add the following:
#!/bin/bash
nowtime=`date --date='0 days ago' "+%Y-%m-%d %H:%M:%S"`
/data/hadoop-3.1.1/bin/hdfs --daemon start namenode
echo $nowtime "hadoop namenode started" >> hadoop.log
# Make it executable
chmod 755 /root/.script/start_hadoop-namenode.sh

2. On hadoop-yarn
mkdir -pv /root/.script
vim /root/.script/start_hadoop-yarn.sh
# Add the following:
#!/bin/bash
nowtime=`date --date='0 days ago' "+%Y-%m-%d %H:%M:%S"`
/data/hadoop-3.1.1/bin/yarn --daemon start resourcemanager
echo $nowtime "hadoop resourcemanager started" >> hadoop.log
/data/hadoop-3.1.1/bin/yarn --daemon start nodemanager
nowtime2=`date --date='0 days ago' "+%Y-%m-%d %H:%M:%S"`
echo $nowtime2 "hadoop nodemanager started" >> hadoop.log

# Make it executable
chmod 755 /root/.script/start_hadoop-yarn.sh

3. On hadoop-datanode1, hadoop-datanode2 and hadoop-datanode3
mkdir -pv /root/.script
vim /root/.script/start_hadoop-datanode.sh
# Add the following:
#!/bin/bash
nowtime=`date --date='0 days ago' "+%Y-%m-%d %H:%M:%S"`
/data/hadoop-3.1.1/bin/hdfs --daemon start datanode
echo $nowtime "hadoop datanode started" >> hadoop.log
/data/hadoop-3.1.1/bin/yarn --daemon start nodemanager
nowtime2=`date --date='0 days ago' "+%Y-%m-%d %H:%M:%S"`
echo $nowtime2 "hadoop nodemanager started" >> hadoop.log

# Make it executable
chmod 755 /root/.script/start_hadoop-datanode.sh

4. On the Spark master hadoop-datanode1, create start and stop scripts
4.1 Start script
vim /root/.script/start_spark.sh
# Add the following:
#!/bin/bash
nowtime=`date --date='0 days ago' "+%Y-%m-%d %H:%M:%S"`
/data/spark-2.3.2-bin-hadoop2.7/sbin/start-all.sh
echo $nowtime "spark started" >> spark.log

# Make it executable
chmod 755 /root/.script/start_spark.sh

4.2 Stop script
vim /root/.script/stop_spark.sh
# Add the following:
#!/bin/bash
nowtime=`date --date='0 days ago' "+%Y-%m-%d %H:%M:%S"`
/data/spark-2.3.2-bin-hadoop2.7/sbin/stop-all.sh
echo $nowtime "spark stopped" >> spark.log

# Make it executable
chmod 755 /root/.script/stop_spark.sh
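
The notes do not say how these scripts are invoked at boot; one simple option (an assumption, not from the original) is a @reboot cron entry on each node pointing at that node's script, for example on hadoop-namenode:
crontab -e
# Add:
@reboot /root/.script/start_hadoop-namenode.sh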

Installing and tuning RabbitMQ on CentOS 7

Reference: https://www.cnblogs.com/flying607/p/9046858.html
I. Install Erlang
Official download page: https://www.erlang-solutions.com/resources/download.html

1. Install via yum
wget https://packages.erlang-solutions.com/erlang-solutions-1.0-1.noarch.rpm
rpm -Uvh erlang-solutions-1.0-1.noarch.rpm
yum list erlang
yum install -y erlang

# Manual installation
Matching package versions: https://packagecloud.io/rabbitmq/erlang/
wget --content-disposition https://packagecloud.io/rabbitmq/erlang/packages/el/7/erlang-21.2.4-1.el7.centos.x86_64.rpm
yum install erlang-21.2.4-1.el7.centos.x86_64.rpm

2. Check the Erlang version
erl -version

3. Check the package status
rpm -q erlang

II. Install RabbitMQ
1. Set up the yum repository
vim /etc/yum.repos.d/rabbitmq.repo
# Add the following
[bintray-rabbitmq-server]
name=bintray-rabbitmq-rpm
baseurl=https://dl.bintray.com/rabbitmq/rpm/rabbitmq-server/v3.7.x/el/7/
gpgcheck=0
repo_gpgcheck=0
enabled=1

2. Install
wget https://github.com/rabbitmq/rabbitmq-server/releases/download/v3.7.11/rabbitmq-server-3.7.11-1.el7.noarch.rpm
rpm --import https://www.rabbitmq.com/rabbitmq-release-signing-key.asc
yum install rabbitmq-server-3.7.11-1.el7.noarch.rpm

3. Configure the firewall
firewall-cmd --zone=public --add-port=5672/tcp --permanent
firewall-cmd --zone=public --add-port=15672/tcp --permanent
firewall-cmd --reload

4. Enable the management UI
# RabbitMQ user roles: none, management, policymaker, monitoring, administrator. Run:
rabbitmq-plugins enable rabbitmq_management
rabbitmqctl add_user admin 123456
rabbitmqctl set_user_tags admin administrator
rabbitmqctl list_users
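# The new admin account typically also needs permissions on the default vhost before it can manage resources (a suggested extra step, not in the original notes)
rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"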

# Management UI address
http://192.168.4.116:15672/

Default account: guest/guest.
Note that the guest user may only log in from localhost;
logging in from elsewhere fails, with an error like this in the log:
# Sample log output
=WARNING REPORT==== 21-Oct-2017::23:31:33 ===
HTTP access denied: user 'guest' - User can only log in via localhost

5. Manage the rabbitmq-server service
systemctl start rabbitmq-server # start the service
systemctl enable rabbitmq-server # enable at boot
systemctl disable rabbitmq-server # disable start at boot
systemctl restart rabbitmq-server # restart the service
systemctl status rabbitmq-server # check the current status
systemctl list-units --type=service # list all running services

SafeDog installation guide (free edition)

Note: the Website SafeDog Linux-Nginx edition only supports Nginx versions below 1.12
I. Installation:
yum -y install mlocate dmidecode pciutils lsof
tar xzvf safedog_linux64.tar.gz
mv safedog_an_linux64_2.8.21207 /usr/local/
cd /usr/local/safedog_an_linux64_2.8.21207/
chmod +x *.py
./install.py
Note: the installation takes a while. At the start you are asked which web server Website SafeDog should protect: 1. apache 2. nginx. Choose according to your server; nginx is used here, so enter 2 and press Enter. You are then asked for the nginx installation path; here it is /usr/local/nginx.
II. Running the software (the following steps are optional)
1. Register and log in to a SafeDog Cloud (Fuyun) account at http://www.safedog.cn
2. On the client, bind the server from the command line: sdcloud -u <username>
sdcloud -u safedog
3. Once the client has joined SafeDog Cloud, it can be managed from the command line.
Usage:
service safedog status # check the SafeDog service
service safedog start # start the SafeDog service
service safedog stop # stop the SafeDog service
sdstart # restart the SafeDog service
III. Related sites
1. Website SafeDog:
2. Server SafeDog:
3. SafeDog Cloud login:

Installing SVN on CentOS 7 and running it as a service that starts at boot

1. Install
yum install -y subversion
2. Create the repository directory
mkdir /opt/svn/repositories
3. Create the repository
svnadmin create /opt/svn/repositories/
4. Add groups and accounts
vim /opt/svn/repositories/conf/authz
Add:
[groups]
# Groups and their members
pp =lm,yl,jw,zh
# Repository path permissions
[/]
# Group permissions
@pp = rw
# Everyone else
*=r
5. Set user passwords
vim /opt/svn/repositories/conf/passwd
Add:
[users]
lm = 123456
yl = 123456
jw = 123456
zh = 123456
6. Configure svnserve access control
vim /opt/svn/repositories/conf/svnserve.conf
Add:
[general]
# Anonymous access: read, write or none; default is read
anon-access=none
# Give authenticated users write access
auth-access=write
# Which file holds the accounts
password-db=passwd
# Which file holds the access rules
authz-db=authz
# Authentication realm; Subversion shows it in the auth prompt and uses it as the credential-cache key
realm=/opt/svn/repositories
7. Open the firewall port
The default port is 3690; here a custom port 9999 is used instead
systemctl status firewalld
systemctl start firewalld
systemctl enable firewalld
firewall-cmd --permanent --zone=public --add-port=9999/tcp
firewall-cmd --reload
firewall-cmd --list-all
systemctl restart firewalld
8. Start the service manually
svnserve -d -r /opt/svn/repositories/ --listen-port 9999
9. Check the svn process
ps -ef | grep svn
10. Check the svnserve version
svnserve --version
II. Run svnserve as a service that starts at boot
1. Edit /etc/sysconfig/svnserve; this is the key step: change the listening port by adding options here
vim /etc/sysconfig/svnserve
Change to:
# OPTIONS is used to pass command-line arguments to svnserve.
#
# Specify the repository location in -r parameter:
#OPTIONS="-r /var/svn"
OPTIONS="-r /opt/svn/repositories/ --listen-port 9999"
2. Edit /usr/lib/systemd/system/svnserve.service
vim /usr/lib/systemd/system/svnserve.service
Content:
# /usr/lib/systemd/system/svnserve.service
[Unit]
Description=Subversion protocol daemon
After=syslog.target network.target
[Service]
Type=forking
EnvironmentFile=/etc/sysconfig/svnserve
ExecStart=/usr/bin/svnserve --daemon --pid-file=/run/svnserve/svnserve.pid $OPTIONS
[Install]
WantedBy=multi-user.target
3. Enable at boot and manage the service
systemctl start svnserve
systemctl enable svnserve
systemctl status svnserve
systemctl restart svnserve
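
To verify the whole setup end to end (a suggested check, not in the original notes; replace <server-ip> with the SVN server's address), check out the repository over the custom port as one of the accounts defined above:
svn checkout svn://<server-ip>:9999/ --username lm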