Noah2021 / 电商秒杀系统深度优化 (E-Commerce Flash-Sale System: Deep Performance Optimization)

License: GPL-3.0

Test Instructions

Before placing an order, the corresponding item must first be published, which generates a token in Redis and prevents a flood of requests from crashing the server.

The URL for publishing an item is http://82.156.200.100:82/item/publishpromo?id=1 (set the trailing id to whatever item id you see on the site).

The live test address is http://82.156.200.100:81/shop/login.html

Username: 188888, password: 000000

A registration module exists, but self-service registration is not supported (SMS verification is not integrated; the verification code is only printed to the server console).

Local Project Startup Guide

If you have a cloud server, follow the steps below; otherwise set up the environment locally. The downside of a local setup is that RocketMQ is awkward to get running on Windows.

  1. On the cloud server, set up the required environment: JDK, MySQL, Redis (with a password), RocketMQ, and so on (open the ports and grant remote access so your local machine can reach all of them), then clone the project locally
  2. Edit application.properties (a hedged sketch follows this list)
  3. Start the application
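
A minimal application.properties sketch of the settings you would typically repoint; the Spring Boot keys shown are standard, but the values and the RocketMQ key name are assumptions about this project, so check them against the repository's actual configuration file:

```properties
# hypothetical values -- replace hosts and credentials with your own
spring.datasource.url=jdbc:mysql://your-server-ip:3306/miaosha?useUnicode=true&characterEncoding=UTF-8
spring.datasource.username=root
spring.datasource.password=your-db-password

spring.redis.host=your-server-ip
spring.redis.port=6379
spring.redis.password=your-redis-password

# assumed custom key for the RocketMQ name server; confirm the key actually used in this project
mq.nameserver.addr=your-server-ip:9876
```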

Course Introduction

This project comes from the imooc course 聚焦Java性能优化 打造亿级流量秒杀系统 (Focusing on Java Performance Optimization: Building a Flash-Sale System for Hundred-Million-Level Traffic).

Using an e-commerce flash-sale scenario as its case study, the course applies a range of performance-optimization techniques, summarizes the classic optimization approaches for "flash sales" in internet projects, and provides a unified design mindset to help you truly understand how each technique is used and the principles behind it.

(course poster image omitted)

Knowledge Map

(knowledge-map image omitted)

Technology Stack

Front end: jQuery

Back end: Spring Boot + MyBatis

Middleware: RocketMQ + Redis + Druid

Environment Setup

Dependencies

  • org.springframework.boot:spring-boot-starter-parent:2.2.2.RELEASE
  • org.springframework.boot:spring-boot-starter-web
  • org.springframework.boot:spring-boot-starter-test
  • org.springframework.boot:spring-boot-starter-jdbc
  • org.mybatis.generator:mybatis-generator-maven-plugin:1.3.5
  • org.mybatis.spring.boot:mybatis-spring-boot-starter:2.1.4
  • mysql:mysql-connector-java:5.1.41
  • com.alibaba:druid:1.2.3
  • org.springframework.boot:spring-boot-starter-data-redis
  • org.springframework.session:spring-session-data-redis:2.0.5.RELEASE
  • org.projectlombok:lombok
  • junit:junit:4.10
  • org.apache.commons:commons-lang3:3.3.2
  • org.hibernate:hibernate-validator:5.2.4.Final
  • joda-time:joda-time:2.6
  • com.google.guava:guava:18.0
  • org.apache.rocketmq:rocketmq-client:4.3.0
  • javax.xml.bind:jaxb-api:2.3.0 (Java EE API; added because JDK 9 no longer bundles it, not needed on JDK 6/7/8)
  • com.sun.xml.bind:jaxb-impl:2.3.0 (same as above)
  • com.sun.xml.bind:jaxb-core:2.3.0 (same as above)
  • javax.activation:activation:1.1.1 (same as above)

MySQL

I installed MySQL through the BT (宝塔) panel, choosing MariaDB as the server; the initial database password is set in the panel.

If connecting from your local machine to the cloud server fails with "Host xxx is not allowed to connect to this MariaDB server", your account is probably only allowed to log in from localhost. In that case, log in to MySQL on the server itself and change the host field of the root row in the mysql.user table from localhost to %:

mysql -u root -p
use mysql;
update user set host = '%' where user = 'root'  and host='localhost';
select host, user from user;

The cloud server itself failing to connect to MySQL can happen for the same reason, and is fixed the same way by adjusting the user table privileges. The result looks like this:

(screenshot omitted)

Then restart the MySQL service, or run FLUSH PRIVILEGES; so the change takes effect.

Redis

This part is error-prone, so take care. Steps:

  1. First edit the Redis configuration file redis.conf

    • In the NETWORK section, allow access from any host by changing the bind setting to bind 0.0.0.0
    • In the GENERAL section, set daemonize to yes
    • In the video the instructor added an include for requirepass under INCLUDES; for me that change broke startup and it only worked after removing it, so I am skeptical of it
    • In the SECURITY section, add requirepass <password> as an extra layer of protection; clients then have to authenticate with AUTH <password> before accessing data
  2. Create the Redis service

    • Enter the utils directory under the Redis directory
    • Run the install script and give it a new config file redis.conf, log file redis.log, and data directory data, which makes later management easier (the path prompts cannot be edited once typed, so prepare the paths in a text editor and paste them in)
    ./install_server.sh
    # configuration
    [root@LEGION-Y7000 utils]# ./install_server.sh
    Welcome to the redis service installer
    This script will help you easily set up a running redis server
    
    Please select the redis port for this instance: [6379] 
    Selecting default: 6379	# continue
    Please select the redis config file name [/etc/redis/6379.conf] /www/server/redis-5.0.8/redis.conf
    Please select the redis log file name [/var/log/redis_6379.log] /www/server/redis-5.0.8/redis.log
    Please select the data directory for this instance [/var/lib/redis/6379] /www/server/redis-5.0.8/data
    Please select the redis executable path [/usr/local/bin/redis-server] 
    Selected config:
    Port           : 6379
    Config file    : /www/server/redis-5.0.8/redis.conf
    Log file       : /www/server/redis-5.0.8/redis.log
    Data dir       : /www/server/redis-5.0.8/data
    Executable     : /usr/local/bin/redis-server
    Cli Executable : /usr/local/bin/redis-cli
    Is this ok? Then press ENTER to go on or Ctrl-C to abort.ok
    Copied /tmp/6379.conf => /etc/init.d/redis_6379
    Installing service...
    failed to glob pattern /etc/rc0.d/[SK][0-9][0-9]redis_6379: No such file or directory
    failed to glob pattern /etc/rc0.d/[SK][0-9][0-9]redis_6379: No such file or directory
    /var/run/redis_6379.pid exists, process is already running or crashed
    Installation successful!
    • After the service is installed successfully, you can inspect the redis_6379 service with chkconfig --list | grep redis or by looking at /etc/rc.d/init.d/redis_6379

Create the Database

backup.sql in the project is a MySQL dump; just restore it.

# back up
mysqldump -h<host> -u<user> -p<password> <database_name> > <file>.sql
# restore
mysql -h<host> -u<user> -p<password> <database_name> < <file>.sql
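
For example, assuming the database is named miaosha (as elsewhere in this README) and you restore on the database server itself, the command would be roughly: mysql -uroot -p miaosha < backup.sql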

Base Project Development

Auto-generating the tedious POJO classes, DAO interfaces, and corresponding mapper.xml files

We use the MyBatis generator tool to map the database tables; first add the generator plugin dependencies to the pom file:

<!-- mybatis-generator dependency: auto-generates JavaBeans and SQL mappers -->
<dependency>
    <groupId>org.mybatis.generator</groupId>
    <artifactId>mybatis-generator-maven-plugin</artifactId>
    <version>1.3.5</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.41</version>
    <scope>runtime</scope>
</dependency>

Write mybatis-generator.xml, which drives generation of the POJO classes and the XxxMapper.xml files:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE generatorConfiguration
        PUBLIC "-//mybatis.org//DTD MyBatis Generator Configuration 1.0//EN"
        "http://mybatis.org/dtd/mybatis-generator-config_1_0.dtd">
<generatorConfiguration>

    <context id="DB2Tables" targetRuntime="MyBatis3">
        <!-- database connection URL, user, and password -->
        <jdbcConnection driverClass="com.mysql.jdbc.Driver" connectionURL="jdbc:mysql://your-db-server-ip:3306/miaosha"
                        userId="root" password="admin">
        </jdbcConnection>
        <!-- where generated POJOs go -->
        <javaModelGenerator targetPackage="com.noah2021.pojo" targetProject="src/main/java">
            <property name="enableSubPackages" value="true"/>
            <property name="trimStrings" value="true"/>
        </javaModelGenerator>
        <!-- where generated mapper XML files go -->
        <sqlMapGenerator targetPackage="mybatis.mapper" targetProject="src/main/resources">
            <property name="enableSubPackages" value="true"/>
        </sqlMapGenerator>
        <!-- where generated DAO classes go -->
        <!-- client code: easy-to-use code targeting the Model objects and the XML configuration
                type="ANNOTATEDMAPPER": generate Java Models and annotation-based Mapper objects
                type="MIXEDMAPPER": generate Java Models and Mappers that mix annotations and XML
                type="XMLMAPPER": generate SqlMap XML files and separate Mapper interfaces
        -->
        <javaClientGenerator type="XMLMAPPER" targetPackage="com.noah2021.dao" targetProject="src/main/java">
            <property name="enableSubPackages" value="true"/>
        </javaClientGenerator>

        <!-- tables to generate and their class names; the five attributes per table suppress the generated Example (complex query) code -->
        <table tableName="user_info" domainObjectName="UserDO" enableCountByExample="false"
               enableUpdateByExample="false" enableDeleteByExample="false"
               enableSelectByExample="false" selectByExampleQueryId="false"></table>
        <table tableName="user_password" domainObjectName="UserPasswordDO" enableCountByExample="false"
               enableUpdateByExample="false" enableDeleteByExample="false"
               enableSelectByExample="false" selectByExampleQueryId="false"></table>
    </context>
</generatorConfiguration>

Add the mybatis-generator plugin; its version must match the dependency version above:

<pluginManagement><!-- lock down plugins versions to avoid using Maven defaults (may be moved to parent pom) -->
    <plugins>
        <plugin>
            <groupId>org.mybatis.generator</groupId>
            <artifactId>mybatis-generator-maven-plugin</artifactId>
            <version>1.3.5</version>
            <dependencies>
                <dependency>
                    <groupId>org.mybatis.generator</groupId>
                    <artifactId>mybatis-generator-core</artifactId>
                    <version>1.3.5</version>
                </dependency>
                <dependency>
                    <groupId>mysql</groupId>
                    <artifactId>mysql-connector-java</artifactId>
                    <version>5.1.41</version>
                </dependency>
            </dependencies>
            <executions>
                <execution>
                    <id>mybatis generator</id>
                    <phase>package</phase>
                    <goals>
                        <goal>generate</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <!-- allow moving the generated files -->
                <verbose>true</verbose>
                <!-- allow overwriting generated files: true the first time, then change it back to false -->
                <overwrite>true</overwrite>
                <configurationFile>
                    src/main/resources/mybatis-generator.xml
                </configurationFile>
            </configuration>
        </plugin>
    </plugins>
</pluginManagement>

Open the Run/Debug Configurations panel, add a Maven command mybatis-generator:generate, and run it; you end up with the directory structure shown below.

(screenshot omitted)

In the generated mapper.xml, the insert (and update) statements can set useGeneratedKeys="true" to fetch the auto-generated primary key and assign it to the domain-model property named by keyProperty; keyProperty refers to a property of the parameter object.
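
A minimal sketch of what such an insert statement looks like; the table and column names here are illustrative, not copied from the project's generated mappers:

```xml
<!-- hypothetical example: writes the auto-increment id back into ItemDO.id -->
<insert id="insertSelective" parameterType="com.noah2021.pojo.ItemDO"
        useGeneratedKeys="true" keyProperty="id">
  insert into item (title, price, description)
  values (#{title}, #{price}, #{description})
</insert>
```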

Model Architecture

Access layer: View Object, the model exposed to the front end; it hides internal implementation and is an aggregate model meant purely for display

Business layer: Domain Model, the core business model; it has a life cycle, is anemic, and exposes its capabilities through services

Data layer: Data Object, the data model mapped to database tables and used to operate the database via ORM

(architecture diagram omitted)

Business Operations

Demo Walkthrough

In a real production environment, POJOs must not simply hand database rows straight to the service layer; here we add a model layer to protect database information. Next we write a small demo following the model architecture above.

  1. First create a UserModel class containing all fields of UserDO plus the encrptPassword field of UserPasswordDO, then assemble the two into a UserModel

    private Integer id;
    private String name;
    private Byte gender;
    private Integer age;
    private String telphone;
    private String registerMode;
    private String thirdPartyId;
    private String encrptPassword;
  2. Write the corresponding service-layer interface and implementation

    public interface UserService {
        public UserModel getUserById(Integer id);
    }
    @Service
    public class UserServiceImpl implements UserService {
    
        @Autowired
        UserDOMapper userDOMapper;
    
        @Autowired
        UserPasswordDOMapper userPasswordDOMapper;
        @Override
        public UserModel getUserById(Integer id) {
            UserDO userDO = userDOMapper.selectByPrimaryKey(id);
            if(userDO == null)
                return null;
        //for rigor, the selectByPrimaryKey method of UserPasswordDOMapper was renamed to selectByUserId
            UserPasswordDO userPasswordDO = userPasswordDOMapper.selectByUserId(id);
            return convertFromDataObject(userDO,userPasswordDO);
    
        }
    //assemble (and validate) a UserModel object from userDO and userPasswordDO
        public UserModel convertFromDataObject(UserDO userDO, UserPasswordDO userPasswordDO){
            if(userDO == null)
                return null;
            UserModel userModel = new UserModel();
            BeanUtils.copyProperties(userDO, userModel);
        //userModel is only missing the encrptPassword field, so just copy that one field over from userPasswordDO
            if(userPasswordDO != null)
                userModel.setEncrptPassword(userPasswordDO.getEncrptPassword());
            return userModel;
        }
    }
  3. Write the controller

    @Controller
    @RequestMapping("/user")
    public class UserController {
    
        @Autowired
        UserService userService;
        @RequestMapping("/get")
        @ResponseBody
        public UserModel getUser(@RequestParam("id")int id){
            UserModel userModel = userService.getUserById(id);
            return userModel;
        }
    }
  4. The result is shown below. This is insecure: if the response is intercepted, the user's password is exposed (we will later MD5-hash the plaintext password, but it is still better not to leak it), so it is best to strip encrptPassword out

    (screenshot omitted)

  5. Previously we returned the model object straight to the front end; now, to drop encrptPassword, we add a viewobject layer with a new UserVO class that contains only the fields below

    private Integer id;
    private String name;
    private Byte gender;
    private Integer age;
    private String telephone;
  6. UserController converts the userModel into a UserVO (see the conversion sketch after this list) and returns that to the front end; restart the project and the result is as follows

    (screenshot omitted)
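
A minimal sketch of the model-to-VO conversion, assuming the fields listed above and Spring's BeanUtils; it mirrors the DO-to-model conversion shown earlier rather than being copied from the repository:

```java
//hypothetical helper inside UserController
private UserVO convertFromUserModel(UserModel userModel) {
    if (userModel == null) {
        return null;
    }
    UserVO userVO = new UserVO();
    //copies only the properties UserVO declares, so encrptPassword never leaves the server
    BeanUtils.copyProperties(userModel, userVO);
    return userVO;
}
```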

CommonReturnType

When the HTTP status is 500, the front end still needs a meaningful message, so we create a CommonReturnType class.

Returning success responses

public class CommonReturnType {
    //outcome of handling the request: "success" or "fail"
    private String status;

    //if status = success, data holds the JSON payload the front end needs
    //if status = fail, data holds the generic error-code structure
    private Object data;

    //generic factory methods
    public static CommonReturnType create(Object result){
        return CommonReturnType.create(result,"success");
    }

    public static CommonReturnType create(Object result,String status){
        CommonReturnType type = new CommonReturnType();
        type.setStatus(status);
        type.setData(result);
        return type;
    }

    public String getStatus() {
        return status;
    }

    public void setStatus(String status) {
        this.status = status;
    }

    public Object getData() {
        return data;
    }

    public void setData(Object data) {
        this.data = data;
    }
}

Then have UserController return CommonReturnType data; the result:

(screenshot omitted)

Decorator Pattern

The decorator (Decorator) and the concrete component (ConcreteComponent) both inherit from the component (Component). A concrete component's methods do not depend on any other object, whereas a decorator wraps a component, so it can decorate other decorators or concrete components. Decorating means wrapping the decorator around the decorated object to extend its behavior dynamically: some of the decorator's methods are its own added functionality, and the rest delegate to the wrapped object, so the original behavior is preserved. The concrete component is therefore the lowest layer of the decoration hierarchy, because only its methods are self-contained.

Returning failure responses

Error handling is assembled with a wrapper (decorator): because both EmBusinessError and BusinessException implement the CommonError interface, we get an errCode/errMsg pair without having to create a new class for every combination of EmBusinessError and BusinessException, and the interface's setErrMsg also lets us replace the default errMsg with a custom message.

  1. The CommonError interface

    //Component
    public interface CommonError {
        public int getErrCode();
        public String getErrMsg();
        //returns CommonError so BusinessException can chain the call
        public CommonError setErrMsg(String errMsg);
    }
  2. EmBusinessError implementation

    //ConcreteComponent
    public enum EmBusinessError implements CommonError {
        //generic error types start at 10001
        PARAMETER_VALIDATION_ERROR(10001,"参数不合法"),
        UNKNOWN_ERROR(10002,"未知错误"),
    
        //errors starting with 20000 are user-related
        USER_NOT_EXIST(20001,"用户不存在"),
        USER_LOGIN_FAIL(20002,"用户手机号或密码不正确"),
        USER_NOT_LOGIN(20003,"用户还未登陆"),
        //errors starting with 30000 are transaction-related
        STOCK_NOT_ENOUGH(30001,"库存不足"),
        ;
    
        EmBusinessError(int errCode,String errMsg){
            this.errCode = errCode;
            this.errMsg = errMsg;
        }
    
        private int errCode;
        private String errMsg;
    
        @Override
        public int getErrCode() {
            return this.errCode;
        }
    
        @Override
        public String getErrMsg() {
            return this.errMsg;
        }
    
        public void setErrCode(int errCode) {
            this.errCode = errCode;
        }
    
        @Override
        public CommonError setErrMsg(String errMsg) {
            this.errMsg = errMsg;
            return this;
        }
    }
  3. BusinessException implementation

    //Decorator
    public class BusinessException extends Exception implements CommonError {
    
        private CommonError commonError;
    
        //construct a business exception directly from an EmBusinessError
        public BusinessException(CommonError commonError){
            super();
            this.commonError = commonError;
        }
    
        //construct a business exception with a custom errMsg
        public BusinessException(CommonError commonError,String errMsg){
            super();
            this.commonError = commonError;
            this.commonError.setErrMsg(errMsg);
        }
    
        @Override
        public int getErrCode() {
            return this.commonError.getErrCode();
        }
    
        @Override
        public String getErrMsg() {
            return this.commonError.getErrMsg();
        }
    
        @Override
        public CommonError setErrMsg(String errMsg) {
            this.commonError.setErrMsg(errMsg);
            return this;
        }
    
        public CommonError getCommonError() {
            return commonError;
        }
    }
  4. Modify UserController

    // modified
    @RequestMapping("/get")
    @ResponseBody
    public CommonReturnType getUser(@RequestParam("id") int id) throws BusinessException {
        UserModel userModel = userService.getUserById(id);
        if(userModel == null)
            //deliberately triggers a NullPointerException to exercise the handler
            userModel.setEncrptPassword("111");
            //throw new BusinessException(EmBusinessError.USER_NOT_EXIST);
        UserVO userVO = convertFromUserModel(userModel);
        return CommonReturnType.create(userVO);
    }
    // added
    //handle exceptions not absorbed by the controller
    @ExceptionHandler(Exception.class)//entered when an Exception is thrown
    @ResponseStatus(HttpStatus.OK)//return 200 OK even on an exception
    @ResponseBody
    public Object handlerException(HttpServletRequest request, Exception e){
        BusinessException businessException = (BusinessException) e;
        CommonReturnType commonReturnType = new CommonReturnType();
        commonReturnType.setStatus("fail");
        HashMap<String, Object> data = new HashMap<>();
        data.put("errCode", businessException.getErrCode());
        data.put("errMsg", businessException.getErrMsg());
        commonReturnType.setData(data);
        return commonReturnType;
    }
  5. Move the new method into a BaseController base class and tidy the code, so every controller that extends it gets this behavior

    public enum EmBusinessError implements CommonError {
        //generic error types
        PARAMETER_VALIDATION_ERROR(10001, "参数不合法"),
        //unknown error
        UNKNOWN_ERROR(10002, "未知错误"),
        //errors starting with 20000 are user-related
        USER_NOT_EXIST(20001, "用户不存在")
        ;
        private int errCode;
        private String errMsg;
        //constructor and getters/setters as shown earlier
    }

    public class BaseController {
        //handle exceptions not absorbed by controllers
        @ExceptionHandler(Exception.class)//entered when an Exception is thrown
        @ResponseStatus(HttpStatus.OK)//return 200 OK even on an exception
        @ResponseBody
        public Object handlerException(HttpServletRequest request, Exception e) {
            HashMap<String, Object> data = new HashMap<>();
            if (e instanceof BusinessException) {
                BusinessException businessException = (BusinessException) e;
                data.put("errCode", businessException.getErrCode());
                data.put("errMsg", businessException.getErrMsg());
            } else {
                data.put("errCode", EmBusinessError.UNKNOWN_ERROR.getErrCode());
                data.put("errMsg", EmBusinessError.UNKNOWN_ERROR.getErrMsg());
            }
            return CommonReturnType.create(data, "fail");
        }
    }

The JSON-Handle extension from the Chrome Web Store is very handy for viewing these responses; recommended.

(screenshot omitted)

SMS Verification Code (OTP)

Here we implement a simple registration flow. Front-end code omitted.

The relevant part of UserController:

@Autowired
HttpServletRequest httpServletRequest;
	/*produces: specifies the response content type (and can also set the response character encoding);
    consumes: specifies the request Content-Type this handler accepts, e.g. application/json or text/html*/
@RequestMapping(value = "/getotp",method = {RequestMethod.POST}, consumes={CONTENT_TYPE_FORMED})
@ResponseBody
public CommonReturnType getOtp(@RequestParam("telphone") String telphone){
    Random random = new Random();
    int randomInt = random.nextInt(99999);
    randomInt += 10000;
    String otpCode = String.valueOf(randomInt);
    //the phone number and OTP are kept as a key-value pair; this really belongs in Redis, but for simplicity it is also printed to the console
    httpServletRequest.getSession().setAttribute(telphone, otpCode);
    System.out.println("telphone: " + telphone + ", otpCode: "+ otpCode);
    return  CommonReturnType.create(null);
}
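
A minimal sketch of what storing the OTP in Redis (as the comment above suggests) could look like, assuming the RedisTemplate introduced later in this README; the key prefix and the 5-minute expiry are illustrative choices, not the project's actual implementation:

```java
//hypothetical: keep the OTP for 5 minutes under a per-phone key
redisTemplate.opsForValue().set("otp_" + telphone, otpCode, 5, TimeUnit.MINUTES);

//during registration, read it back and compare with the submitted code
String expected = (String) redisTemplate.opsForValue().get("otp_" + telphone);
boolean otpValid = StringUtils.equals(expected, submittedOtpCode);
```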

User Registration Endpoint

  1. Define the method signature; the parameters include telphone, otpCode, name, gender, age, and password

  2. Verify that the submitted OTP matches the one stored for that phone number

  3. Run the registration flow: add a register method to UserService → convert the UserModel into a UserDO and a UserPasswordDO → implement it in UserServiceImpl (mind the null checks); a sketch follows this list

    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-lang3</artifactId>
        <version>3.7</version>
    </dependency>
    # Apache's StringUtils class can be used for the comparison

    insertSelective is preferred over insert here because, when a field of the supplied UserDO is null, it does not overwrite the column's database default.

    Tip: it is usually advisable to make database columns NOT NULL, but not always. If the site enforces strong binding (third-party login still requires a phone number), the phone-number column is a unique index, and a user who registered with a phone number and then tries to sign up again via third-party login would be blocked. In that case allowing NULL in the phone-number column is more appropriate, because a unique index does not constrain NULL values.
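
    A rough sketch of the registration service under the steps above (a transactional insert of both DOs via insertSelective); the helper names convertFromModel and convertPasswordFromModel are assumed here, not taken from the repository:

    @Override
    @Transactional
    public void register(UserModel userModel) throws BusinessException {
        if (userModel == null) {
            throw new BusinessException(EmBusinessError.PARAMETER_VALIDATION_ERROR);
        }
        //insertSelective keeps the database defaults for any null fields
        UserDO userDO = convertFromModel(userModel);
        userDOMapper.insertSelective(userDO);

        //the generated key is written back into userDO via useGeneratedKeys/keyProperty
        userModel.setId(userDO.getId());
        UserPasswordDO userPasswordDO = convertPasswordFromModel(userModel);
        userPasswordDOMapper.insertSelective(userPasswordDO);
    }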

Registration Front-End Page

  1. Write the front-end registration page
  2. Handle session sharing between getotp.html and register.html

DEFAULT_ALLOW_CREDENTIALS=true: must be paired with the front-end xhrFields credentials setting so the session can be shared across origins

@CrossOrigin(allowCredentials = "true", allowedHeaders = "*")
//front end
xhrFields:{withCredentials:true}
  3. Update the earlier MD5 hashing code
//JDK 9 removed sun.misc.BASE64Encoder, so java.util.Base64 is used instead
public String encodeByMd5(String str) throws NoSuchAlgorithmException, UnsupportedEncodingException {
    //select the digest algorithm
    MessageDigest md5 = MessageDigest.getInstance("MD5");
    //BASE64Encoder base64Encoder = new BASE64Encoder();
    Base64.Encoder encoder = Base64.getEncoder();
    //hash the string and Base64-encode the digest
    String newstr = encoder.encodeToString(md5.digest(str.getBytes("utf-8")));
    return newstr;
}

Two easy-to-hit pitfalls so far:

  • With too new a Spring Boot version, adding custom attributes to @CrossOrigin prevents the application from starting; I had to downgrade from 2.4.2 to 2.2.2
  • The video compares the submitted OTP with the session value using Druid's StringUtils.equals; debugging showed an invocation-target exception (stack overflow) as soon as that line ran, and switching to Apache's StringUtils fixed it

Login

The front-end page is simple and omitted; the back-end flow is:

  1. The controller first validates that neither the phone number nor the password is empty, then passes the MD5-hashed password to the service for verification
  2. The service looks up the user id by phone number, fetches the UserPasswordDO by user_id, assembles a UserModel, and compares its encrptPassword with the hashed password passed in; if they match, the model is returned to the controller (a sketch follows this list)
  3. The controller stores two session attributes, IS_LOGIN and LOGIN_USER, for later use, and returns the result to the front end
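
A rough sketch of that service-layer login check; validateLogin and selectByTelphone are assumed names following the conventions used elsewhere in this README, not necessarily the repository's:

```java
@Override
public UserModel validateLogin(String telphone, String encrptPassword) throws BusinessException {
    //look up the user by phone number, then the stored password hash by user id
    UserDO userDO = userDOMapper.selectByTelphone(telphone);
    if (userDO == null) {
        throw new BusinessException(EmBusinessError.USER_LOGIN_FAIL);
    }
    UserPasswordDO userPasswordDO = userPasswordDOMapper.selectByUserId(userDO.getId());
    UserModel userModel = convertFromDataObject(userDO, userPasswordDO);

    //compare the stored hash with the submitted, already-MD5-hashed password
    if (!StringUtils.equals(encrptPassword, userModel.getEncrptPassword())) {
        throw new BusinessException(EmBusinessError.USER_LOGIN_FAIL);
    }
    return userModel;
}
```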

Improved Form Validation :star:

  1. Add the dependency

    <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-validator</artifactId>
        <version>5.4.1.Final</version>
    </dependency>
  2. Implement a ValidationResult class to carry the validation outcome

    public class ValidationResult {
        //whether validation found any errors
        private boolean hasErrors = false;
    
        //map of field name to error message
        private Map<String, String> errorMsgMap = new HashMap<>();
        
        public boolean isHasErrors() {
            return hasErrors;
        }
    
        public void setHasErrors(boolean hasErrors) {
            this.hasErrors = hasErrors;
        }
    
        public Map<String, String> getErrorMsgMap() {
            return errorMsgMap;
        }
    
        public void setErrorMsgMap(Map<String, String> errorMsgMap) {
            this.errorMsgMap = errorMsgMap;
        }
    
        //join all error messages into one formatted string
        public String getErrMsg() {
            return StringUtils.join(errorMsgMap.values().toArray(), ",");
        }
    }
  3. Implement ValidatorImpl, register it as a bean, and have it return the validation result

    @Component
    public class ValidatorImpl implements InitializingBean{
        private Validator validator;
    
        //run validation and return the result
        public ValidationResult validate(Object bean){
            final ValidationResult result = new ValidationResult();
            Set<ConstraintViolation<Object>> constraintViolationSet = validator.validate(bean);
            if(constraintViolationSet.size() > 0){
                //errors were found
                result.setHasErrors(true);
                constraintViolationSet.forEach(constraintViolation->{
                    String errMsg = constraintViolation.getMessage();
                    String propertyName = constraintViolation.getPropertyPath().toString();
                    result.getErrorMsgMap().put(propertyName,errMsg);
                });
            }
            return result;
        }
    
        @Override
        public void afterPropertiesSet() throws Exception {
            //instantiate the Hibernate Validator via the default factory
            this.validator = Validation.buildDefaultValidatorFactory().getValidator();
        }
    }
  4. Add validation annotations to the model class

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    public class UserModel {
        private Integer id;
        @NotNull(message = "用户名不能为空")
        private String name;
        @NotNull(message = "性别不能为空")
        private Byte gender;
        @NotNull(message = "年龄不能为空")
        @Min(value = 0, message = "年龄必须大于0")
        @Max(value = 200, message = "年龄必须小于200")
        private Integer age;
        @NotNull(message = "手机号不能为空")
        private String telphone;
        private String registerMode;
        private String thirdPartyId;
        @NotNull(message = "密码不能为空")
        private String encrptPassword;
    }
  5. Validate in the service layer

    @Autowired
    private ValidatorImpl validator;
    
    //add this inside the method
    ValidationResult result =  validator.validate(userModel);
    if(result.isHasErrors()){
        throw new BusinessException(EmBusinessError.PARAMETER_VALIDATION_ERROR,result.getErrMsg());
    }

Debugging showed the runtime error came from using the @NotBlank annotation on the model class; switching to @NotNull fixed it. Online sources say @NotBlank should be fine on String fields, so the root cause remains unclear...

Implementing the Item Classes

  1. Update the pom file and the MyBatis generator config to auto-generate the POJO, mapper, and mapper.xml; create the model and ViewObject classes by hand
  2. Implement the Item controller and service, covering item creation, listing items, and fetching an item by id

Tips

  • For display purposes the ViewObject often carries more properties than the POJO; as with the User classes, the Model is aggregated from POJOs, and the ViewObject can likewise be aggregated from Models.

  • The service layer converts between model and POJO; the controller converts between ViewObject and model.

Creating an Item

  1. Implement ItemController: wrap the parameters sent from the front end into an ItemModel and return an ItemVO to the front end
  2. Implement ItemServiceImpl (a sketch follows this list)
    • Validate the input
    • Convert the ItemModel into an ItemDO
    • Insert it into the database
    • Return the created object
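
A condensed sketch of that create flow; the stock handling and the helper names are inferred from later sections of this README rather than copied from the code:

```java
@Override
@Transactional
public ItemModel createItem(ItemModel itemModel) throws BusinessException {
    //validate the input model
    ValidationResult result = validator.validate(itemModel);
    if (result.isHasErrors()) {
        throw new BusinessException(EmBusinessError.PARAMETER_VALIDATION_ERROR, result.getErrMsg());
    }
    //convert model -> DO and insert; stock lives in its own table
    ItemDO itemDO = convertItemDOFromModel(itemModel);
    itemDOMapper.insertSelective(itemDO);
    itemModel.setId(itemDO.getId());

    ItemStockDO itemStockDO = convertItemStockDOFromModel(itemModel);
    itemStockDOMapper.insertSelective(itemStockDO);

    //return the item as persisted
    return getItemById(itemModel.getId());
}
```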

Listing Items :star:

Here the Stream API is used to assemble models from the POJOs and collect them into a list.

// service
public List<ItemModel> listItem() {
    List<ItemDO> itemDOList = itemDOMapper.listItem();
    List<ItemModel> itemModelList =  itemDOList.stream().map(itemDO -> {
        ItemStockDO itemStockDO = itemStockDOMapper.selectByItemId(itemDO.getId());
        ItemModel itemModel = this.convertModelFromPojo(itemDO,itemStockDO);
        return itemModel;
    }).collect(Collectors.toList());
    return itemModelList;
}

// controller
@RequestMapping(value = "/list",method = {RequestMethod.GET})
@ResponseBody
public CommonReturnType listItem(){
    List<ItemModel> itemModelList = itemService.listItem();
    //use the Stream API to convert each ItemModel in the list into an ItemVO
    List<ItemVO> itemVOList =  itemModelList.stream().map(itemModel -> {
        ItemVO itemVO = this.convertFromItemModel(itemModel);
        return itemVO;
    }).collect(Collectors.toList());
    return CommonReturnType.create(itemVOList);
}

Item List Front-End Page

Use DOM manipulation to fill a table and display the item information on the page.

<script>
	// global array of item info
	var g_itemList = [];

	jQuery(document).ready(function () {
		$.ajax({
			type:"GET",
			url:"http://localhost:8080/item/list",
			xhrFields: {withCredentials: true},
			success:function (data) {
				if (data.status == "success") {
					// alert("获取商品信息成功");
					g_itemList = data.data;
					reloadDom();
				}else {
					alert("获取商品信息失败,原因:"+data.data.errMsg);
				}
			},
			error:function (data) {
				alert("获取商品信息失败,原因:"+data.responseText);
			}
		})
	})

	function reloadDom() {
		for (var i = 0; i < g_itemList.length; i ++){
			var itemVO = g_itemList[i];
			console.log(itemVO.title)
			var dom = "<tr data-id='"+ itemVO.id +"' id='itemDetail"+ itemVO.id +"'><td>"+ itemVO.title +"</td><td><img style='width: 100px;height: auto' src='"+ itemVO.imgUrl +"'></td><td>"+ itemVO.description +"</td><td>"+ itemVO.price +"</td><td>"+ itemVO.stock +"</td><td>"+ itemVO.sales +"</td></tr>";
			$("#container").append($(dom));
			$("#itemDetail"+itemVO.id).on("click",function (e) {
				window.location.href="getitem.html?id="+$(this).data("id");
			})
		}
	}
</script>

Placing an Order

  1. Use mybatis-generator.xml to generate OrderDO, OrderDOMapper, and OrderDOMapper.xml

  2. Implement the OrderModel class (mind how the itemPrice property is defined)

  3. Implement OrderServiceImpl

    • Validate the input
    • Decrement the item's stock in the item_stock table, checking first that enough stock remains
    • Build the OrderModel; the order number is generated as shown below
    • Convert the OrderModel into an OrderDO and insert it into order_info
    • Increase the item's sales in the item table
    • Return the OrderModel to the controller
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    private String generateOrderNo(){
        //the order number has 16 digits
        StringBuilder stringBuilder = new StringBuilder();
        //first 8 digits: the date (yyyyMMdd)
        LocalDateTime now = LocalDateTime.now();
        String nowDate = now.format(DateTimeFormatter.ISO_DATE).replace("-","");
        stringBuilder.append(nowDate);
    
        //middle 6 digits: an auto-increment sequence
        //fetch the current sequence; the getSequenceByName query on sequence_info must lock the row with FOR UPDATE
        int sequence = 0;
        SequenceDO sequenceDO =  sequenceDOMapper.getSequenceByName("order_info");
        sequence = sequenceDO.getCurrentValue();
        sequenceDO.setCurrentValue(sequenceDO.getCurrentValue() + sequenceDO.getStep());
        sequenceDOMapper.updateByPrimaryKeySelective(sequenceDO);
        String sequenceStr = String.valueOf(sequence);
        for(int i = 0; i < 6-sequenceStr.length();i++){
            stringBuilder.append(0);
        }
        stringBuilder.append(sequenceStr);
        //last 2 digits: shard bits, hard-coded for now
        stringBuilder.append("00");
    
        return stringBuilder.toString();
    }
  4. Implement OrderController (a sketch follows this list)

    • Check whether the session's IS_LOGIN attribute exists; if not, throw an exception
    • Read the session's LOGIN_USER attribute to get the user id (used for validation and the order_info insert), then call the service to place the order
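
A rough sketch of that controller, using the session-based check that a later section replaces with the token approach; the three-argument createOrder reflects this stage of the project (before promos) and is an assumption, not copied code:

```java
@RequestMapping(value = "/createorder", method = {RequestMethod.POST}, consumes = {CONTENT_TYPE_FORMED})
@ResponseBody
public CommonReturnType createOrder(@RequestParam(name = "itemId") Integer itemId,
                                    @RequestParam(name = "amount") Integer amount) throws BusinessException {
    //reject the request if the user has not logged in
    Boolean isLogin = (Boolean) httpServletRequest.getSession().getAttribute("IS_LOGIN");
    if (isLogin == null || !isLogin.booleanValue()) {
        throw new BusinessException(EmBusinessError.USER_NOT_LOGIN, "用户还未登陆,不能下单");
    }
    //the user id comes from the session's LOGIN_USER attribute, never from the request itself
    UserModel userModel = (UserModel) httpServletRequest.getSession().getAttribute("LOGIN_USER");
    orderService.createOrder(userModel.getId(), itemId, amount);
    return CommonReturnType.create(null);
}
```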

Promo (Flash-Sale) Items

  1. Use mybatis-generator.xml to generate PromoDO, PromoDOMapper, and PromoDOMapper.xml

  2. Implement the PromoModel class (with an added status property)

    @AllArgsConstructor
    @NoArgsConstructor
    @Data
    public class PromoModel {
        private Integer id;
    
        //promo status: 1 = not started, 2 = in progress, 3 = ended
        private Integer status;
    
        //promo name
        private String promoName;
    
        //promo start time; DateTime comes from joda-time
        private DateTime startDate;
    
        //promo end time
        private DateTime endDate;
    
        //the item the promo applies to
        private Integer itemId;
    
        //the item's promo price
        private BigDecimal promoItemPrice;
    }
  3. Add a promoId property to OrderModel and compose a PromoModel into ItemModel

  4. Implement PromoServiceImpl

    @Override
    public PromoModel getPromoByItemId(Integer itemId) {
        PromoDO promoDO = promoDOMapper.selectByItemId(itemId);
        PromoModel promoModel = convertFromPojo(promoDO);
        if(promoModel == null)
            return null;
        if(promoModel.getStartDate().isAfterNow())
            promoModel.setStatus(1);
        else if(promoModel.getEndDate().isBeforeNow())
            promoModel.setStatus(3);
        else
            promoModel.setStatus(2);
        return promoModel;
    }
  5. Add promo-related parameters to the relevant OrderController and OrderServiceImpl methods so the order is created with the promo information

    @Override
    @Transactional
    public OrderModel createOrder(Integer userId, Integer itemId, Integer promoId, Integer amount) throws BusinessException {
        //validate the input
        ItemModel itemModel = itemService.getItemById(itemId);
        if (itemModel == null) {
            throw new BusinessException(EmBusinessError.PARAMETER_VALIDATION_ERROR, "商品信息不存在");
        }
        UserModel userModel = userService.getUserById(userId);
        if (userModel == null) {
            throw new BusinessException(EmBusinessError.PARAMETER_VALIDATION_ERROR, "用户信息不存在");
        }
        if (amount <= 0 || amount > 99) {
            throw new BusinessException(EmBusinessError.PARAMETER_VALIDATION_ERROR, "数量信息不正确");
        }
        if (promoId != null) {
            if (promoId.intValue() != itemModel.getPromoModel().getId())
                throw new BusinessException(EmBusinessError.PARAMETER_VALIDATION_ERROR, "活动信息不正确");
            else if (itemModel.getPromoModel().getStatus() != 2)
                throw new BusinessException(EmBusinessError.PARAMETER_VALIDATION_ERROR, "活动还未开始");
        }
        //two ways to decrement stock: 1. on order placement  2. on payment (which risks the buyer having placed an order only to find the item gone when payment succeeds)
        boolean flag = itemService.decreaseStock(itemId, amount);
        if (!flag)
            throw new BusinessException(EmBusinessError.STOCK_NOT_ENOUGH);
        //persist the order
        OrderModel orderModel = new OrderModel();
        orderModel.setUserId(userId);
        orderModel.setItemId(itemId);
        orderModel.setAmount(amount);
        if (promoId != null)
            orderModel.setItemPrice(itemModel.getPromoModel().getPromoItemPrice());
        else
            orderModel.setItemPrice(itemModel.getPrice());
    
        orderModel.setPromoId(promoId);
        //use the order's item price set above, not itemModel.getPrice()
        BigDecimal orderPrice = orderModel.getItemPrice().multiply(new BigDecimal(amount));
        orderModel.setOrderPrice(orderPrice);
        orderModel.setId(generateOrderNo());
        //convert to a DO for persistence
        OrderDO orderDO = convertFromOrderModel(orderModel);
        //insert into the order_info table
        orderDOMapper.insertSelective(orderDO);
        //increase the sales count
        itemService.increaseSales(itemId, amount);
        return orderModel;
    }
  6. Implement the promo front end

Deploying to the Cloud

  1. Add the Spring Boot Maven plugin to the pom file
    <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  2. Run mvn clean package locally and upload the resulting jar to the cloud server
  3. Create an external application.properties and run with that configuration
    server.port=80
    
    # watch out for port conflicts; check a port with netstat -lnp|grep <port>
    [root@LEGION-Y7000 intellij]# java -jar miaosha-0.0.1-SNAPSHOT.jar --spring.config.addition-location=/www/intellij/miaosha/application.properties
  4. Write an external startup script deploy.sh
    nohup java -Xms400m -Xmx400m -XX:NewSize=200m -XX:MaxNewSize=200m -jar miaosha-0.0.1-SNAPSHOT.jar --spring.config.addition-location=/www/intellij/miaosha/application.properties
    
    # parameter notes
    nohup: keeps the program running even after the console session exits
    java: starts the JVM with initial and maximum heap of 400m and initial and maximum young generation of 200m; making them equal avoids the cost of asking the OS for more memory while resizing the heap
    spring.config.additional-location=: points at an extra configuration file (note that the correct Spring Boot property name is additional-location)
  5. Make the files executable
    chmod -R 777 *
  6. Run the shell script; its output is appended to nohup.out
    [root@LEGION-Y7000 miaosha]# ./deploy.sh &
    [1] 29471
    [root@LEGION-Y7000 miaosha]# nohup: ignoring input and appending output to ‘nohup.out’

Load Testing

  1. In JMeter, add a thread group, then add an HTTP Request, a View Results Tree, and an Aggregate Report; in the Advanced tab select the Java implementation

  2. Check server status

[root@LEGION-Y7000 miaosha]# ps -ef|grep java
root      3195 12989  0 18:29 pts/1    00:00:00 grep --color=auto java
root     26473     1  0 17:06 ?        00:00:00 jsvc.exec -java-home /usr/java/jdk1.8.0_121 -user www -pidfile /www/server/tomcat/logs/catalina-daemon.pid -wait 10 -outfile /www/server/tomcat/logs/catalina-daemon.out -errfile &1 -classpath /www/server/tomcat/bin/bootstrap.jar:/www/server/tomcat/bin/commons-daemon.jar:/www/server/tomcat/bin/tomcat-juli.jar -Djava.util.logging.config.file=/www/server/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dcatalina.base=/www/server/tomcat -Dcatalina.home=/www/server/tomcat -Djava.io.tmpdir=/www/server/tomcat/temp org.apache.catalina.startup.Bootstrap
www      26474 26473  0 17:06 ?        00:00:10 jsvc.exec -java-home /usr/java/jdk1.8.0_121 -user www -pidfile /www/server/tomcat/logs/catalina-daemon.pid -wait 10 -outfile /www/server/tomcat/logs/catalina-daemon.out -errfile &1 -classpath /www/server/tomcat/bin/bootstrap.jar:/www/server/tomcat/bin/commons-daemon.jar:/www/server/tomcat/bin/tomcat-juli.jar -Djava.util.logging.config.file=/www/server/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dcatalina.base=/www/server/tomcat -Dcatalina.home=/www/server/tomcat -Djava.io.tmpdir=/www/server/tomcat/temp org.apache.catalina.startup.Bootstrap
root     29472 29471  0 17:33 pts/1    00:00:15 java -Xms400m -Xmx400m -XX:NewSize=200m -XX:MaxNewSize=200m -jar miaosha-0.0.1-SNAPSHOT.jar --spring.config.addition-location=/www/intellij/miaosha/application.properties
[root@LEGION-Y7000 miaosha]# netstat -anp | grep 29472
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      29472/java          
tcp        0      0 172.20.77.40:80         39.149.232.113:10140    ESTABLISHED 29472/java          
tcp        0      0 172.20.77.40:80         39.149.232.113:10139    ESTABLISHED 29472/java          
tcp        0      0 172.20.77.40:47530      82.156.200.100:3306      ESTABLISHED 29472/java          
unix  2      [ ]         STREAM     CONNECTED     858555   29472/java           
unix  2      [ ]         STREAM     CONNECTED     859732   29472/java
[root@LEGION-Y7000 miaosha]# pstree -p 29472 | wc -l
30
[root@LEGION-Y7000 miaosha]# top -H

(screenshot omitted)

A few important commands:

# view server status
top -H
# number of concurrent threads in a process
pstree -p 29472 | wc -l
# inspect connections on a port
netstat -lnp | grep 3306

Tuning the Tomcat Configuration :star:

Tomcat's default capacity is modest (for example, only 10 spare threads by default). Two approaches:

  • Tune via the configuration file
  1. Inspect the Spring Boot defaults for each setting in spring-configuration-metadata.json:

    • server.tomcat.accept-count: length of the wait queue, default 100
    • server.tomcat.max-connections: maximum number of connections, default 10000
    • server.tomcat.max-threads: maximum number of worker threads, default 200
    • server.tomcat.min-spare-threads: minimum number of threads, default 10
    • With the defaults, connections beyond 10000 are refused
    • With the defaults, requests beyond roughly 200+100 in flight are rejected
    • Common JVM flags: -Xmx3550m maximum heap; -Xms3550m initial heap; -Xmn2g young generation of 2 GB; -Xss128k stack size per thread; -XX:NewRatio=4 ratio of the young generation (Eden plus the two Survivor spaces) to the old generation (excluding the permanent generation), i.e. young:old = 1:4; -XX:SurvivorRatio=4 ratio of Eden to the Survivor spaces, so the two Survivor spaces to Eden are 2:4; -XX:MaxPermSize=16m permanent generation of 16 MB; -XX:MaxTenuringThreshold=0 maximum object age before promotion
  2. Modify the external configuration file

server.port=80
server.tomcat.accept-count=1000
server.tomcat.max-threads=800
server.tomcat.min-spare-threads=100
  3. Kill the process and restart
[root@LEGION-Y7000 miaosha]# kill -9 29472
[root@LEGION-Y7000 miaosha]# ./deploy.sh &
[1] 4392
[root@LEGION-Y7000 miaosha]# nohup: ignoring input and appending output to ‘nohup.out’
  4. Result of the tuning
[root@LEGION-Y7000 miaosha]# pstree -p 27464 | wc -l
120
  • Tune the embedded Tomcat programmatically
  1. Relevant settings

    • keepAliveTimeout: how many milliseconds of inactivity before the server closes a keep-alive connection
    • maxKeepAliveRequests: how many requests a keep-alive connection serves before it is closed
    • Use WebServerFactoryCustomizer<ConfigurableServletWebServerFactory> to customize the embedded Tomcat
  2. Implementation

@Component
public class WebServerConfiguration implements WebServerFactoryCustomizer<ConfigurableWebServerFactory> {
    @Override
    public void customize(ConfigurableWebServerFactory configurableWebServerFactory) {
        //customize the Tomcat connector via the interface the factory class exposes
        ((TomcatServletWebServerFactory)configurableWebServerFactory).addConnectorCustomizers(new TomcatConnectorCustomizer() {
            @Override
            public void customize(Connector connector) {
                Http11NioProtocol protocol = (Http11NioProtocol) connector.getProtocolHandler();
                //keepAliveTimeout: close a keep-alive connection after 30 seconds with no requests
                protocol.setKeepAliveTimeout(30000);
                //close the keep-alive connection once the client has sent 10000 requests on it
                protocol.setMaxKeepAliveRequests(10000);
            }
        });
    }
}
  3. Database query latency
(screenshot omitted)

Distributed Scaling

(architecture diagram omitted)

Opening the Database to Remote Connections

Four Aliyun (Alibaba Cloud) servers are needed: one Nginx reverse proxy, one database server, and two application servers.

  1. The original server becomes the database server; copy everything under /www/intellij/miaosha to the other two application servers

    scp -r /www/intellij root@<app-server-1-ip>:/www/
    
    scp -r /www/intellij root@<app-server-2-ip>:/www/
  2. Start the application on the other two servers

    # connect
    ssh root@<app-server-1-ip>
    # ...
  3. Edit application.properties on both application servers

    server.port=80
    server.tomcat.accept-count=1000
    server.tomcat.max-threads=800
    server.tomcat.min-spare-threads=100
    # better to use the private network address here
    spring.datasource.url=jdbc:mysql://82.156.200.100:3306/miaosha?useUnicode=true&characterEncoding=UTF-8
  4. telnet <private-ip> 3306 did not connect, so adjust the privileges in the user table

    GRANT ALL PRIVILEGES ON *.* to root@'%' identified by 'root';
    FLUSH PRIVILEGES;

    After that, telnet <private-ip> 3306 gives:

    [root@LEGION-Y7000 miaosha]# telnet 172.20.77.40 3306
    Trying 172.20.77.40...
    Connected to 172.20.77.40.
    Escape character is '^]'.
    Y
    5.5.5-10.1.44-MariaDBUO&jhIr%^-? *TIO5X<d`710mysql_native_passwordConnection closed by foreign host.
  5. Install the JDK, then start the application with ./deploy.sh &

    # go to the directory containing the JDK rpm and make it executable
    chmod -R 777 jdk.rpm
    # install the rpm
    rpm -ivh jdk.rpm
    # check the Java version
    java -version
    # start the application
    ./deploy.sh &

Uploading Static Resources to the Cloud

What Nginx is used for:

  1. As a web server
  2. As a static/dynamic separation server
  3. As a reverse proxy
(diagram omitted)

OpenResty overview:

  • OpenResty bundles the Nginx core with many third-party modules and ships with a Lua development environment, so Nginx can be used as a full web server
  • Thanks to Nginx's event-driven model and non-blocking IO, it can power high-performance web applications
  • OpenResty provides many components, such as MySQL, Redis, and Memcached support, which make developing applications on Nginx easier and simpler

Common Nginx commands:

cd /usr/local/nginx/sbin/
./nginx             # start
./nginx -s stop     # stop
./nginx -s quit     # graceful shutdown
./nginx -s reload   # reload the configuration
ps aux|grep nginx   # list nginx processes

Steps:

  1. Install OpenResty

    • Upload openresty.tar.gz to the server, then run:
    chmod -R 777 openresty.tar.gz
    tar -xvzf openresty.tar.gz
    cd openresty
    ./configure
    # mine did not error; the configure output ended like this
    cd ../..
    Type the following commands to build and install:
        gmake
        gmake install
    # if configure errors, install the prerequisites first
    yum install pcre-devel openssl-devel gcc curl
    # build
    make
    # install
    make install
    # installation finished
    make[2]: Leaving directory `/www/openresty-1.17.8.2/build/nginx-1.17.8'
    make[1]: Leaving directory `/www/openresty-1.17.8.2/build/nginx-1.17.8'
    mkdir -p /usr/local/openresty/site/lualib /usr/local/openresty/site/pod /usr/local/openresty/site/manifest
    ln -sf /usr/local/openresty/nginx/sbin/nginx /usr/local/openresty/bin/openresty
    # start Nginx from the /usr/local/openresty/nginx directory
    sbin/nginx -c conf/nginx.conf

    Since I deployed Nginx on the database server, I changed its port to 81; after opening the port in the BT panel and the cloud security group, index.html became reachable.

  2. Modify the front-end pages (adding a gethost.js file so the pages point at the right host) and upload them to /usr/local/openresty/nginx/html

    Only at this point is the project truly deployed to the cloud.

  3. Edit nginx.conf, then move all static resources into a newly created resources directory

    location /resources/{
        alias /usr/local/openresty/nginx/html/resources/;
        autoindex on;
        root   html;
        index  index.html index.htm;
        autoindex_exact_size off;
        autoindex_localtime on;
    }
  4. Reload Nginx: sbin/nginx -s reload

Nginx as a Reverse Proxy :star:

  • Configure the upstream servers
  • Point the dynamic-request location at a proxy_pass target
  • Enable the Tomcat access log to verify

Reverse-proxy configuration: define a backend_server upstream that can point at different back-end server groups, listing the servers' LAN IPs and their round-robin weights, and add a location block; any request that matches the location is forwarded according to the reverse-proxy rules.

  1. Edit nginx.conf
#gzip  on;
upstream backend_server{
    server <app-server-1-private-ip>:81 weight=1;
    server <app-server-2-private-ip>:81 weight=1;
}

location / {
    proxy_pass http://backend_server;# round-robin across the two servers above
    #proxy_set_header Host $http_host:$proxy_port;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
  2. Reload Nginx

  3. Enable the Tomcat access log to verify

# first create a tomcat directory inside the project directory and chmod 777 it
# add to the external configuration file
server.tomcat.accesslog.enabled=true
server.tomcat.accesslog.directory=/www/intellij/miaosha/tomcat
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D
# save the file, kill the java process, and redeploy
# %h  client IP address
# %l  logical username, usually '-'
# %u  authenticated username, usually '-'
# %t  request timestamp
# %r  request line (method such as GET or POST, resource, and HTTP version)
# %s  HTTP status code returned
# %b  bytes returned
# %D  time taken to process the request, in milliseconds
  4. Switch the connection between Nginx and the back-end applications from short-lived to keep-alive (long) connections (by default client-to-Nginx, client-to-application, and application-to-database connections are long-lived, while Nginx-to-back-end connections are short-lived)

(diagram omitted)

upstream backend_server{
    server <app-server-1-private-ip>:81 weight=1;
    server <app-server-2-private-ip>:81 weight=1;
    keepalive 30;
}
location / {
    proxy_pass http://backend_server;# round-robin across the two servers above
    #proxy_set_header Host $http_host:$proxy_port;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
# by default Nginx speaks HTTP/1.0 to upstream servers and does not use keep-alive
  5. Reload Nginx and run the load test again

Why Nginx Is So Fast :star:

  • epoll multiplexing for non-blocking IO;
  • the master-worker process model, which allows smooth restarts and configuration reloads without dropping client connections, and pairs single-threaded workers with epoll multiplexing for efficient operation;
  • the coroutine mechanism, which keeps a single-process, single-thread model while still offering concurrent programming interfaces: each request maps to a coroutine within the thread, and combined with epoll multiplexing this lets code be written in a synchronous style;
  1. epoll multiplexing (solves the blocking-IO callback/notification problem)

I/O multiplexing is a mechanism for monitoring many descriptors at once; as soon as one becomes readable or writable, the program is notified to perform the corresponding read or write.

When I/O multiplexing is needed:

When a client handles multiple descriptors (typically interactive input plus network sockets), I/O multiplexing is required

When a TCP server must handle both the listening socket and the connected sockets, I/O multiplexing is generally needed

When a server must handle both TCP and UDP, I/O multiplexing is generally needed


Java BIO model

The client and server communicate over TCP/IP. A Java client's socket.write only returns once all the bytes have been handed to the TCP/IP buffer; if the network is slow and the buffer fills up, the client must wait until the buffer has drained enough for upstream writes to proceed before the call can return.


Linux select model

(diagram omitted)

Changes trigger a polling scan, and the number of file descriptors is capped at 1024; once the Java server is woken up and the changed socket connections have been flagged, data is ready for you to read or write.

Drawbacks:

Polling is inefficient, and there is the 1024-descriptor limit.


epoll model

(diagram omitted)

Changes trigger callbacks and the data can be read directly, with essentially no upper limit on descriptors; epoll was designed to fix the inefficiency of select/poll style polling.

Consider a scenario: one million clients hold TCP connections to a single server process, but at any moment only a few hundred to a few thousand of those connections are active (which is what most workloads look like). How do you support that level of concurrency?

In the select/poll era, the server process handed all one million connections to the operating system on every call (copying the handle data structures from user space to kernel space), the kernel scanned the sockets for events, the handles were copied back to user space, and the application then iterated over the events that had occurred. That round trip is expensive, which is why select/poll typically only handles a few thousand concurrent connections.

epoll is designed and implemented completely differently: it registers a simple file system inside the Linux kernel (file systems are generally backed by a B+ tree) and splits the old select/poll call into three parts:

call epoll_create() to build an epoll object (allocating resources for this handle in the epoll file system);

call epoll_ctl to add the one million connection sockets to the epoll object;

call epoll_wait to collect the connections on which events have occurred;

To implement the scenario above, you create one epoll object at process startup and then add or remove connections from it as needed. epoll_wait is also very efficient, because calling it does not bulk-copy the one million connection handles to the kernel, and the kernel does not have to walk every connection.

  2. The master-worker process model

The Nginx multi-process model is shown in the course diagram (omitted here). The administrator is the root user who starts and manages the nginx processes; signals are what start or restart Nginx, and every worker process is single-threaded. The master process mainly:

  • receives signals from the outside;
  • sends signals to each worker process;
  • monitors the workers' running state;
  • automatically starts a new worker when one exits abnormally;

nginx starts one master process, which then launches as many worker processes as the configuration specifies; the master and the workers have a parent-child relationship. The master manages the workers; the workers are what actually handle client connections.

The master first creates the listening socket bound to the configured port and then forks the worker processes; the master starts an epoll multiplexing model. When a client begins the classic TCP three-way handshake on that socket, epoll fires a callback that notifies every worker able to accept, but only one worker succeeds in accepting and the others fail.

Nginx provides a shared lock, accept_mutex, to guarantee that only one worker process accepts connections at any moment, which avoids the thundering-herd problem; once a worker accepts a connection it reads the request, parses it, handles it, produces the response, returns it to the client, and only then closes the connection.

  3. The coroutine mechanism

A thread can host many coroutines; coroutines live inside the thread's memory model:

  • they attach to the thread's memory model, so switching between them is cheap;
  • a coroutine yields execution whenever it would block, so the code stays synchronous in style while another, non-blocked coroutine runs;
  • no locking is needed;

Implementing Distributed Sessions

Previously our sessions relied on the HttpServletRequest wrapper provided by Spring Boot's embedded Tomcat. After scaling out, Nginx keeps rotating requests across different application servers, so a complete conversation would only work if two consecutive requests happened to hit the same server, which is unrealistic. So we rely on Redis for distributed sessions.

  • Cookie-transported session id: migrate from the embedded Tomcat implementation to a Redis-backed one
  • Token-transported (session-id-like) value: implement it in Java code, backed by Redis
  1. Add the Redis dependencies for distributed sessions; pulling in certain versions can conflict with existing jars. My Spring Boot version is 2.2.2
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.session</groupId>
        <artifactId>spring-session-data-redis</artifactId>
        <version>2.0.5.RELEASE</version>
    </dependency>
  2. Configure the Redis properties in application.properties
    # redis
    spring.redis.host=127.0.0.1
    spring.redis.port=6379
    spring.redis.database=10
    #spring.redis.password
    # Jedis connection pool: maximum active connections and minimum idle connections
    spring.redis.jedis.pool.max-active=50
    spring.redis.jedis.pool.min-idle=20
  3. Customize the Redis connection configuration
  4. Start the project and log in; the session is written to Redis but not yet serializable, so login fails
  5. Make the session objects serializable
    /* two serialization options: 1) use the default JDK serialization;
           2) switch the Redis serializer to JSON */
    public class UserModel implements Serializable
  6. Verify locally that login works against Redis, then repackage and upload; deploy Redis alongside the database server (not on the application servers)
  7. Update both application servers' configuration (spring.redis.host=82.156.200.100 and spring.redis.password=<your password>, skip the latter if there is none) and redeploy

  1. Inject RedisTemplate into UserController and bind the login token to the user's login state; UserModel must be serializable
    //login
    String uuidToken = UUID.randomUUID().toString();
    uuidToken = uuidToken.replace("-", "");
    redisTemplate.opsForValue().set(uuidToken, userModel);
    redisTemplate.expire(uuidToken, 1, TimeUnit.HOURS);//set the token's expiry
    return CommonReturnType.create(uuidToken);
  2. Front-end verification: modify getitem.html and gethost.js for local debugging
    <!--login.html-->
    var token = data.data;
    window.localStorage["token"]=token;
    <!--getitem.html-->
    var token = window.localStorage["token"];
    if(token == null){
        alert("没有登陆,不能下单");
        window.location.href="login.html";
        return false;
    }
    <!--pass the token back to the application server as a URL parameter-->
    url: "http://" + g_host + "/order/createorder?token="+token
  3. Back-end verification: modify OrderController
    @Autowired
    RedisTemplate redisTemplate;
    //order-creation endpoint
    @RequestMapping(value = "/createorder",method = {RequestMethod.POST},consumes={CONTENT_TYPE_FORMED})
    @ResponseBody
    public CommonReturnType createOrder(@RequestParam(name="itemId")Integer itemId,
                                        @RequestParam(name="amount")Integer amount,
                                        @RequestParam(name="promoId",required = false)Integer promoId
                                        ) throws BusinessException {
    //        Boolean isLogin = (Boolean) httpServletRequest.getSession().getAttribute("IS_LOGIN");
    //        if(isLogin == null || !isLogin.booleanValue()){
    //            throw new BusinessException(EmBusinessError.USER_NOT_LOGIN,"用户还未登陆,不能下单");
    //        }
        String token = httpServletRequest.getParameterMap().get("token")[0];//could also be bound as a request parameter
        if(StringUtils.isEmpty(token)){
            throw new BusinessException(EmBusinessError.USER_NOT_LOGIN,"用户还未登陆,不能下单");
        }
        //fetch the user's login state
        UserModel userModel = (UserModel) redisTemplate.opsForValue().get(token);
        if(userModel == null)//the token has expired
            throw new BusinessException(EmBusinessError.USER_NOT_LOGIN,"用户还未登陆,不能下单");
    //        UserModel userModel = (UserModel)httpServletRequest.getSession().getAttribute("LOGIN_USER");
        OrderModel orderModel = orderService.createOrder(userModel.getId(),itemId,promoId,amount);
        return CommonReturnType.create(null);
    }

Multi-Level Caching

Redis Cache & Local Cache :star:

  • Standalone
  • Sentinel mode
  • Cluster mode
  1. A Redis Sentinel cluster can be viewed much like a ZooKeeper ensemble: it is the heart of cluster high availability, usually made up of 3 to 5 nodes so that losing a few nodes does not stop it. It continuously monitors the health of the master and replica nodes and, when the master goes down, automatically promotes the best replica to master. A client first connects to Sentinel to ask for the master's address and then connects to the master for data; when the master fails, the client asks Sentinel again, gets the new master's address, and the application fails over without a restart.

    (diagram omitted)

    After Redis1 crashes, the master and replica roles are reassigned:

    (diagram omitted)

  2. Characteristics of cluster mode:

    • all Redis nodes are interconnected (PING-PONG mechanism) and use an internal binary protocol to optimize speed and bandwidth;

    • a node is only marked as failed once more than half of the cluster detects the failure;

    • clients connect directly to Redis nodes with no proxy layer in between, and a client does not need to connect to every node, just to any one available node;

    • redis-cluster maps all physical nodes onto the slots [0-16383], and the cluster maintains the node<->slot<->value mapping

  3. Redis centralized cache: item-detail dynamic content (part 1). Move the Item reads and writes to Redis, then ==configure the serialization: keys can be serialized directly, while values additionally need converters between Joda DateTime and JSON strings==

    @Component
    @EnableRedisHttpSession(maxInactiveIntervalInSeconds = 3600)
    public class RedisConfig{
        @Bean
        public RedisTemplate redisTemplate(RedisConnectionFactory redisConnectionFactory){
            RedisTemplate redisTemplate = new RedisTemplate();
            redisTemplate.setConnectionFactory(redisConnectionFactory);
            //serializer for keys
            StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
            redisTemplate.setKeySerializer(stringRedisSerializer);
            //serializer for values
            Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
    
            ObjectMapper objectMapper = new ObjectMapper();
            SimpleModule simpleModule = new SimpleModule();
            simpleModule.addSerializer(DateTime.class,new JodaDateTimeJsonSerializer());
            simpleModule.addDeserializer(DateTime.class,new JodaDateTimeJsonDeserializer());
            //include class information (and special property types) in the serialized output
            objectMapper.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
            objectMapper.registerModule(simpleModule);
    
            jackson2JsonRedisSerializer.setObjectMapper(objectMapper);
            redisTemplate.setValueSerializer(jackson2JsonRedisSerializer);
            return redisTemplate;
        }
    }
  4. Local hot-data cache: item-detail dynamic content (part 2). To cut the network cost of hitting Redis and the volume of Redis broadcast messages, the local hot cache is given a short lifetime; it exists to absorb bursts of access to hot data, so its lifetime is much shorter than the Redis keys', which keeps the window for dirty reads under passive invalidation very small. Guava Cache is essentially a HashMap: it can bound the number of keys and their values and set key expiry times, has a configurable LRU policy that evicts the least recently used keys first when memory runs low, and is thread-safe.

  5. Add the Guava Cache dependency

    <!--Guava Cache-->
    <dependency>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
        <version>18.0</version>
    </dependency>
  6. Configure the local cache

@PostConstruct annotates a non-static void method. ==A @PostConstruct method runs when the container loads the bean, and it runs only once==. It executes after the constructor and before init(); within bean initialization the order is: constructor -> @Autowired injection -> @PostConstruct method.

```java
@Service
public class CacheServiceImpl implements CacheService {
    private Cache<String,Object> commonCache = null;
    @PostConstruct
    public void init(){
        commonCache = CacheBuilder.newBuilder()
                //initial capacity of the cache: 10 entries
                .initialCapacity(10)
                //at most 100 keys; beyond that, entries are evicted by LRU
                .maximumSize(100)
                //entries expire 60 seconds after being written
                .expireAfterWrite(60, TimeUnit.SECONDS).build();
    }
    @Override
    public void setCommonCache(String key, Object value) {
        commonCache.put(key,value);
    }
    @Override
    public Object getFromCommonCache(String key) {
        return commonCache.getIfPresent(key);
    }
}
```
  7. Query item details through the multi-level cache
```java
//item detail page view
@RequestMapping("/get")
@ResponseBody
public CommonReturnType getItem(@RequestParam("id") Integer id) throws BusinessException {
//        ItemModel itemModel = itemService.getItemById(id);
    ItemModel itemModel = null;
    //try the local cache first
    itemModel = (ItemModel) cacheService.getFromCommonCache("item_" + id);
    if(itemModel == null){
        //then Redis
        itemModel = (ItemModel) redisTemplate.opsForValue().get("item_" + id);
        if (itemModel == null) {
            //finally MySQL
            itemModel = itemService.getItemById(id);
            redisTemplate.opsForValue().set("item_" + id, itemModel);
            redisTemplate.expire("item_" + id, 10, TimeUnit.MINUTES);
        }
        cacheService.setCommonCache("item_"+id, itemModel);
    }
//            throw new BusinessException(EmBusinessError.STOCK_NOT_ENOUGH, "该商品不存在");
    ItemVO itemVO = convertFromItemModel(itemModel);
    return CommonReturnType.create(itemVO);
}
```

Nginx Proxy Cache :star:

  • Prerequisite: Nginx sits in front as a reverse proxy
  • Cached responses are stored as files on the file system, indexed per entry
  • The file locations are cached in memory
  1. Configure the Nginx proxy cache
# declare a cache zone
# use a two-level directory layout: hash the url and take the last character as the first-level directory index,
# then the next character as the second-level index, spreading files across directories to reduce lookup cost
# keys_zone reserves 100m of nginx memory to hold all the keys
# entries unused for 7 days are dropped; the on-disk cache is capped at 10 GB
proxy_cache_path /usr/local/openresty/nginx/tmp_cache levels=1:2 keys_zone=tmp_cache:100m inactive=7d max_size=10g;
location / {
    proxy_pass http://backend_server;
    proxy_cache tmp_cache;
    proxy_cache_key $uri;
    proxy_cache_valid 200 206 304 302 7d;# only these upstream status codes are cached, for 7 days
    #proxy_set_header Host $http_host:$proxy_port;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
  2. Reload Nginx
  3. On the application server, check the access log in the tomcat directory created earlier: the recently queried data is now stopped at the Nginx reverse proxy and never reaches Tomcat's access log
[root@LEGION-Y7000 miaosha]# ls
application.properties  deploy.sh  miaosha-0.0.1-SNAPSHOT.jar  nohup.out  tomcat
[root@LEGION-Y7000 miaosha]# cd tomcat
[root@LEGION-Y7000 tomcat]# ls
access_log.2021-02-13.log  access_log.2021-02-14.log  access_log.2021-02-15.log
[root@LEGION-Y7000 tomcat]# tail -f access_log.2021-02-15.log 
39.149.232.3 - - [15/Feb/2021:18:49:47 +0800] "GET /item/get?id=1 HTTP/1.1" 200 326 4
39.149.232.3 - - [15/Feb/2021:18:49:47 +0800] "GET /favicon.ico HTTP/1.1" 200 98 3
39.149.232.3 - - [15/Feb/2021:18:49:47 +0800] "GET /item/get?id=1 HTTP/1.1" 200 326 4
39.149.232.3 - - [15/Feb/2021:18:49:47 +0800] "GET /favicon.ico HTTP/1.1" 200 98 2
39.149.232.3 - - [15/Feb/2021:18:53:58 +0800] "GET /item/get?id=1 HTTP/1.1" 200 326 4
39.149.232.3 - - [15/Feb/2021:18:58:33 +0800] "GET /item/get?id=1 HTTP/1.1" 200 326 4
39.149.232.3 - - [15/Feb/2021:18:58:34 +0800] "GET /favicon.ico HTTP/1.1" 200 98 2
39.149.232.3 - - [15/Feb/2021:19:10:52 +0800] "GET /item/list HTTP/1.1" 200 1472 8
39.149.232.3 - - [15/Feb/2021:19:10:53 +0800] "GET /item/get?id=1 HTTP/1.1" 200 326 5
39.149.232.3 - - [15/Feb/2021:19:10:56 +0800] "GET /item/get?id=3 HTTP/1.1" 200 1638 6
  4. Enter the newly created tmp_cache directory to inspect the Nginx proxy cache
[root@LEGION-Y7000 tmp_cache]# ls
0  8  d
[root@LEGION-Y7000 tmp_cache]# cd 8
[root@LEGION-Y7000 8]# ls
f6
[root@LEGION-Y7000 8]# cd f6
[root@LEGION-Y7000 f6]# ls
86e4d1b3ba4f1464e409c74be4ef6f68
[root@LEGION-Y7000 f6]# cat 86e4d1b3ba4f1464e409c74be4ef6f68 
F3`ÿÿÿÿÿÿÿÿŒ*`ksr¯`+Access-Control-Request-Headers莺ÿX[Kl8w󍪨² 
KEY: /item/get
HTTP/1.1 200 
Vary: Origin
Vary: Access-Control-Request-Method
Vary: Access-Control-Request-Headers
Content-Type: application/json
Transfer-Encoding: chunked
Date: Mon, 15 Feb 2021 10:53:58 GMT

{"status":"success","data":{"id":1,"title":"Sony_XM2","price":1100,"stock":86,"description":"初级降噪","sales":13,"imgUrl":"https://img12.360buyimg.com/n7/jfs/t1/153308/37/12948/287783/5feda8ceEf68df9ea/fe428c62d634d809.jpg","promoModel":null,"promoStatus":0,"promoPrice":null,"promoId":null,"startDate":null}}

Nginx Lua Caching

  • Lua coroutines
  • Nginx coroutines
  • Nginx Lua hook points
  • Nginx Lua in practice
  1. Nginx coroutines

    • every nginx worker process wraps coroutines on top of an event model such as epoll or kqueue

    • each request is handled by its own coroutine

    • even though Nginx Lua has to run Lua, which carries some overhead relative to C, it still sustains high concurrency

    How Nginx coroutines work:

    • each Nginx worker process creates one Lua virtual machine

    • all coroutines inside a worker share that VM

    • each external request is handled by one Lua coroutine, with data isolated between coroutines

    • when Lua code calls an asynchronous interface such as IO, the coroutine is suspended and its context is preserved

    • this happens automatically and does not block the worker process

    • once the asynchronous IO completes, the coroutine's context is restored and the code continues

    Nginx Lua hook points:

    • init_by_lua: called at system startup;
    • init_worker_by_lua: called when a worker process starts;
    • set_by_lua: compute an nginx variable from a complex Lua return value
    • rewrite_by_lua: rewrite URL rules
    • access_by_lua: access-control phase
    • content_by_lua: content-output phase

  2. Nginx Lua in practice

    [root@LEGION-Y7000 openresty]# mkdir lua
    # put the following text into a new init.lua
    [root@LEGION-Y7000 lua]# vim init.lua
    [root@LEGION-Y7000 lua]# cat init.lua 
    ngx.log(ngx.ERR,"init lua success");
    [root@LEGION-Y7000 lua]# cd ../
    [root@LEGION-Y7000 openresty]# cd nginx/
    [root@LEGION-Y7000 nginx]# vim conf/nginx.conf
    # add the following inside the http block
    init_by_lua_file ../lua/init.lua;
    # restart
    [root@LEGION-Y7000 nginx]# sbin/nginx -c conf/nginx.conf
    nginx: [error] [lua] init.lua:1: init lua success
    nginx: [emerg] bind() to 0.0.0.0:81 failed (98: Address already in use)
    nginx: [emerg] bind() to 0.0.0.0:81 failed (98: Address already in use)
    nginx: [emerg] bind() to 0.0.0.0:81 failed (98: Address already in use)
    nginx: [emerg] bind() to 0.0.0.0:81 failed (98: Address already in use)
    nginx: [emerg] bind() to 0.0.0.0:81 failed (98: Address already in use)
    nginx: [emerg] still could not bind()
    # the bind() errors just mean an nginx instance is already listening on port 81; the init.lua message did print, and sbin/nginx -s reload can be used to reload the running instance instead

OpenResty in Practice

  • OpenResty hello world
[root@LEGION-Y7000 lua]# vim helloworld.lua
[root@LEGION-Y7000 lua]# cat helloworld.lua 
ngx.exec("/item/get?id=1");
# add inside the server block
location /helloworld{
	content_by_lua_file ../lua/helloworld.lua;
}
# requesting http://82.156.200.100/helloworld now returns the content of /item/get?id=1
  • shared dict: shared-memory dictionary
# add inside the http block of /usr/local/openresty/nginx/conf/nginx.conf
lua_shared_dict my_cache 128m;
    server { # for reference only -- the directive itself goes above the server block
# create itemsharedic.lua in the lua directory
function get_from_cache(key)
        local cache_ngx = ngx.shared.my_cache
        local value = cache_ngx:get(key)
        return value
end

function set_to_cache(key,value,exptime)
        if not exptime then
                exptime = 0
        end
        local cache_ngx = ngx.shared.my_cache
        local succ, err, forcible = cache_ngx:set(key,value,exptime)
        return succ
end

local args = ngx.req.get_uri_args()
local id = args["id"]
local item_model = get_from_cache("item_"..id)
if item_model == nil then
        local resp = ngx.location.capture("/item/get?id="..id)
        item_model = resp.body
        set_to_cache("item_"..id, item_model, 1*60)
end
ngx.say(item_model)
# modify nginx.conf
location /luaitem/get{
    default_type "application/json";
    content_by_lua_file ../lua/itemsharedic.lua;
}
# restart Nginx
# turn off the Nginx proxy cache (keep proxy_pass), then visit http://82.156.200.100/luaitem/get?id=3 to get the corresponding JSON
  • OpenResty Redis (recommended)

Nginx can connect to Redis and perform read-only access. If the data is not in Redis, the request falls back to the application server; the application server also checks Redis and, if the data is still missing, reads MySQL and writes the result into Redis, so the next AJAX request from the H5 page can be served straight from Redis. Nginx does not need to manage any update mechanism: the downstream servers populate Redis, and Nginx only has to see the current data in Redis in real time. A Redis slave can also be added and kept in sync with the Redis master via master-slave replication to refresh stale data.

# create itemredis.lua
local args = ngx.req.get_uri_args()
local id = args["id"]
local redis = require "resty.redis"
local cache = redis:new()
local ok, err = cache:connect("your Redis server IP", 6379)
cache:auth("XXXXXX") -- the Redis requirepass password
local item_model = cache:get("item_"..id)
if item_model == ngx.null or item_model == nil then
        local resp = ngx.location.capture("/item/get?id="..id)
        item_model = resp.body
end
ngx.say(item_model)
# modify conf/nginx.conf
location /luaitem/get{
    default_type "application/json";
    content_by_lua_file ../lua/itemredis.lua;
}
# restart Nginx, then visit http://82.156.200.100/luaitem/get?id=3 to get the corresponding JSON

Page staticization

Serving static requests from a CDN

  • DNS CNAME resolution to the origin site
  • Back-to-origin cache settings
  • Forced invalidation (strong push)

The user's static requests, originally pointed at the ECS server, are resolved via CNAME to Alibaba Cloud CDN. The CDN can be thought of as an effectively unlimited content disk cache that stores no files of its own. When a user requests a static resource such as getitem.html, the CDN checks, according to its routing rules, whether it already holds the file; if so it returns it directly, otherwise it goes back to the origin, here the OSS bucket, to fetch it. Once the CDN has the file it returns it to the user and, at the same time, caches it for the lifetime indicated by the HTTP headers, so the next request can be served directly without going back to OSS.

Back-to-origin cache settings

Cache-Control response header: the server uses Cache-Control to tell the client whether this HTTP response may be cached and under what policy;

  • private: only the client may cache it (the default);
  • public: both the client and proxy servers may cache it. A request from the client to the server may pass through an Nginx reverse proxy, a forward-proxy egress server, or a CDN network; when an intermediate node sees Cache-Control: private it knows that only the originating client/browser may cache the response;
  • max-age=xxx: the cached content expires after xxx seconds;
  • no-cache: force revalidation with the server. The object is still stored in the client cache, but before the next use the client must ask the server whether the cached copy is still usable;
  • no-store: do not cache any part of the response.

The overall decision flow is illustrated by a flow chart in the course (not reproduced here).

Validity check (revalidation)

Revalidation means checking whether a cached copy is still valid (a minimal Spring sketch follows the list below);

  • ETag: a unique identifier for the resource, usually an MD5 of the response content. It is returned with the first response; the browser stores it and, on the next revalidation, sends it back to the server with the request instead of blindly reusing the cache. The server compares it with the ETag of the current content and, if they match, returns 304 Not Modified, telling the browser its cached copy is still valid;
  • If-None-Match: the request header in which the client sends the ETag to be matched;
  • Last-Modified: the time the resource was last modified;
  • If-Modified-Since: the request header in which the client sends the last-modified time it knows; if the resource has been modified after that time the cached copy is invalid, otherwise it is still valid
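
To make the headers above concrete, here is a minimal Spring MVC sketch (not part of the project code; the endpoint, class name and 60-second max-age are made up) showing Cache-Control plus ETag-based revalidation:

import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;
import org.springframework.http.CacheControl;
import org.springframework.http.ResponseEntity;
import org.springframework.util.DigestUtils;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.WebRequest;

// Illustrative controller: serves a body with Cache-Control and answers
// If-None-Match revalidation with 304 Not Modified when the ETag still matches.
@RestController
public class CacheHeaderDemoController {

    @GetMapping("/demo/item")
    public ResponseEntity<String> getItem(@RequestParam(name = "id") Integer id, WebRequest request) {
        String body = "{\"id\":" + id + "}"; // stand-in for the real item JSON
        String etag = "\"" + DigestUtils.md5DigestAsHex(body.getBytes(StandardCharsets.UTF_8)) + "\"";

        // checkNotModified compares the ETag with the request's If-None-Match header,
        // sets the ETag response header, and prepares an empty 304 when they match.
        if (request.checkNotModified(etag)) {
            return null;
        }
        return ResponseEntity.ok()
                // public + max-age: both browsers and CDN/proxy nodes may cache for 60 seconds
                .cacheControl(CacheControl.maxAge(60, TimeUnit.SECONDS).cachePublic())
                .body(body);
    }
}

In this project the static pages live on the CDN, so headers like these would normally be configured on the CDN or origin rather than in a controller; the sketch only makes the directives concrete.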

The three ways a browser refreshes

  • Pressing Enter in the address bar or following a link: check whether max-age in Cache-Control is still valid; if so, serve directly from cache; if Cache-Control is no-cache, enter the revalidation (negotiation) flow;
  • F5 or Command+R refresh: drop max-age from Cache-Control (or set max-age to 0), then go straight into the revalidation flow;
  • Forced refresh with Ctrl+F5 or Command+Shift+R: drop Cache-Control and the conditional headers entirely and fetch fresh content;
  • In the negotiation flow, Last-Modified and the ETag are sent to the server; if the server sees no change it returns 304 with no body, otherwise 200 with the data;


Custom CDN cache policies

  • Expiry time can be customized per directory;
  • Expiry time can be customized per file-name suffix;
  • Weights can be customized for those rules;
  • A CDN directory can be force-refreshed through the console or the API (success is not guaranteed);

Alibaba Cloud's CDN caching-policy documentation covers the details of these custom cache policies and is worth a read;

Static resource deployment strategies

  • Deploy css, js, img and similar assets with a version number, e.g. a.js?v=1.0: inconvenient and hard to maintain. HTML usually relies on forced invalidation: the html file gets a max-age and, on update, the CDN copies are forcibly invalidated so all requests go back to origin; that max-age should therefore be short;
  • Deploy assets with a content digest in the query string, e.g. a.js?v=45edw: suffers from the overwrite problem of whether to deploy the html or the assets first;
  • Deploy assets with the digest as the file name, e.g. 45edw.js: old and new versions coexist, rollback is possible, and the html is deployed only after the assets are in place;

Corresponding deployment strategy

  • Static assets never change within their lifetime, so max-age can be very long and ignores the update cycle;
  • The html file uses no-cache or a short max-age so it can be updated;
  • Alternatively the html file can keep a long max-age and the page dynamically fetches the latest version number from the backend, downloading and rendering the newest html asynchronously;
  • Dynamic requests can also be staticized into JSON resources pushed to the CDN;
  • Asynchronous requests to the backend can check resource status for emergency take-downs;
  • A batch job can push content to the CDN purely to take items off sale, and similar operations;

Full-page staticization

  • Serve html/css/js static resources from the CDN
  • Serve the js/ajax dynamic requests from the CDN as well
  • Full-page staticization
  • Render the html, css and even the js load on the server side into a pure HTML file, then deploy that file to the CDN as a static resource.

phantomjs

  • A headless browser that can emulate WebKit and execute JS; applied as follows:
  • Modify the page that needs full staticization, using an initView function and a hasInit flag to avoid initializing more than once;
  • Write the polling logic that generates the page content;
  • Push the fully static page to the CDN once it is generated (a rough Java sketch of driving phantomjs follows);
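
The rendering itself is done by phantomjs; a rough Java sketch of driving it might look like the following (the phantomjs path, the getitem.js script and the URL are assumptions, not files from this repository):

import java.io.IOException;
import java.util.concurrent.TimeUnit;

// Illustrative: run the headless browser, wait for it to finish rendering,
// then the generated HTML can be uploaded to OSS / refreshed on the CDN.
public class StaticPageGenerator {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "/usr/local/bin/phantomjs",                            // assumed install path
                "getitem.js",                                          // assumed phantomjs script that saves the page
                "http://localhost:8080/resources/getitem.html?id=1");  // assumed page to staticize
        pb.inheritIO();
        Process process = pb.start();
        if (!process.waitFor(30, TimeUnit.SECONDS)) {   // bound the rendering time
            process.destroyForcibly();
        }
        // next step (not shown): push the generated HTML file to the CDN/OSS
    }
}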

Caching stock

Transaction performance bottlenecks

  • JMeter load testing
  • Order validation depends on the database
  • Stock row lock (stock deductions are fully serialized)
  • Post-order processing logic

Order validation optimization :star:

  • User risk-control optimization: turn the policy into a cached model
  • Promo validation optimization: introduce a promo-publish flow, cache the model, add an emergency take-down capability
  • Risk involved: data in Redis and MySQL can diverge
  1. Cache the ItemModel and UserModel used at order time: implement ItemServiceImpl#getItemByIdInCache and UserServiceImpl#getUserByIdInCache and use them in OrderServiceImpl (a minimal sketch follows after item 3)

  2. Move stock deduction into the cache: at first the decreaseStock SQL locked the entire table because item_id was not a unique index. Orders are not all for a single item, yet our earlier load test only hit the same item, which is unrealistic. Adding a unique index gives the statement a row lock instead, improving performance: deductions go from being serialized across the whole table to being serialized per item_id. That is still a bottleneck, so the solution is: sync the stock into the cache when the promo is published, have the order path deduct only the Redis cached stock, and then deduct the MySQL stock through an asynchronous message

    <update id="decreaseStock">
      <!--
        WARNING - @mbg.generated
        This element is automatically generated by MyBatis Generator, do not modify.
        This element was generated on Mon Feb 08 21:40:03 CST 2021.
      -->
      update item_stock
      set stock = stock - #{amount, jdbcType=INTEGER}
      where item_id = #{itemId, jdbcType=INTEGER} and stock >= #{amount, jdbcType=INTEGER}
    </update>
  3. Operations may find a problem with a promo and edit it in the back office, e.g. end it early. If the cached promo in Redis has not expired, users can still buy at the seckill price even after the promo time is changed, so an emergency take-down capability is needed. Operations should therefore publish a promo at least half an hour before it starts, which leaves enough time to warm the cache, and an emergency take-down endpoint should exist (which the instructor promised but never showed, 😓!!!) that clears the Redis cache in code; once the status cannot be found in Redis, it is read from the database, which achieves the emergency take-down.
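
A minimal sketch of the model caching in step 1 (getItemByIdInCache comes from the project; the Redis key name, the 10-minute TTL and the fallback to getItemById are assumptions):

/*ItemServiceImpl -- sketch only*/
@Override
public ItemModel getItemByIdInCache(Integer id) {
    ItemModel itemModel = (ItemModel) redisTemplate.opsForValue().get("item_validate_" + id);
    if (itemModel == null) {
        // cache miss: load from the database and cache the model with a TTL
        itemModel = this.getItemById(id);
        redisTemplate.opsForValue().set("item_validate_" + id, itemModel);
        redisTemplate.expire("item_validate_" + id, 10, TimeUnit.MINUTES);
    }
    return itemModel;
}

UserServiceImpl#getUserByIdInCache would follow the same pattern with a user-keyed entry.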

Stock row-lock optimization

  • item_id needs a unique index
alter table item_stock add unique index item_id_index(item_id);
  • Move stock deduction into the cache
  1. Sync the stock into the cache when the promo is published

  2. Deduct the cached stock when the order is placed

  3. Problem: the database record is now inconsistent -- the cache is updated but the database is not;

  • Asynchronously sync the database
  1. Sync the stock into the cache when the promo is published

  2. Deduct the cached stock when the order is placed

  3. Deduct the database stock via an asynchronous message

This gives consumers a fast purchase experience while still guaranteeing eventual consistency of the database; the publish-time stock sync is sketched below.
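
A minimal sketch of step 1, syncing the stock into Redis when the promo is published (publishPromo, promoDOMapper, itemService and the promo_item_stock_ key match the names used elsewhere in these notes; the rest is an assumption):

/*PromoServiceImpl -- sketch only*/
@Override
public void publishPromo(Integer promoId) {
    PromoDO promoDO = promoDOMapper.selectByPrimaryKey(promoId);
    if (promoDO == null || promoDO.getItemId() == null || promoDO.getItemId().intValue() == 0) {
        return;
    }
    ItemModel itemModel = itemService.getItemById(promoDO.getItemId());
    // from this point on, orders deduct this Redis value; MySQL is updated asynchronously via MQ
    redisTemplate.opsForValue().set("promo_item_stock_" + itemModel.getId(), itemModel.getStock());
}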

分布式事务

image-20210216231148872

CAP covers three aspects of distributed design: consistency, availability and partition tolerance.

Partition tolerance is non-negotiable, so the choice is between strong consistency (the system stays unavailable until all replicas agree) and availability (give up strong consistency). Here we sacrifice strong consistency to keep A and P. Consistency still matters, but instead of instantaneous strong consistency we aim for eventual consistency: basically available, eventually consistent, with a soft state in between.

Soft state: at any instant some data may be inconsistent, e.g. part of an update has succeeded while the rest is still being processed; the business accepts this.

With cached stock, the value in Redis is always correct; in the instant before the asynchronous consumer fires, the database value is wrong. But as long as the transactional message is delivered, the database state is corrected -- this design is exactly how eventual consistency of the stock is achieved. As long as the message middleware is 99%+ highly available, there is a 99%+ guarantee that the database state converges to the state in Redis.

RocketMQ:star:

image-20210216231823322

  1. Install RocketMQ and initialize it
[root@LEGION-Y7000 www]# mkdir rocketmq
[root@LEGION-Y7000 www]# cd rocketmq/
[root@LEGION-Y7000 rocketmq]# wget https://mirrors.bfsu.edu.cn/apache/rocketmq/4.8.0/rocketmq-all-4.8.0-bin-release.zip
--2021-02-16 23:25:39--  https://mirrors.bfsu.edu.cn/apache/rocketmq/4.8.0/rocketmq-all-4.8.0-bin-release.zip
Resolving mirrors.bfsu.edu.cn (mirrors.bfsu.edu.cn)... 39.155.141.16, 2001:da8:20f:4435:4adf:37ff:fe55:2840
Connecting to mirrors.bfsu.edu.cn (mirrors.bfsu.edu.cn)|39.155.141.16|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13881969 (13M) [application/zip]
Saving to: ‘rocketmq-all-4.8.0-bin-release.zip’

100%[============================================================================>] 13,881,969  --.-K/s   in 0.1s    

2021-02-16 23:25:39 (92.7 MB/s) - ‘rocketmq-all-4.8.0-bin-release.zip’ saved [13881969/13881969]
[root@LEGION-Y7000 rocketmq]# chmod -R 777 *
[root@LEGION-Y7000 rocketmq]# unzip rocketmq-all-4.8.0-bin-release.zip # unzip the archive
[root@LEGION-Y7000 rocketmq]# ls
rocketmq-all-4.8.0-bin-release  rocketmq-all-4.8.0-bin-release.zip
[root@LEGION-Y7000 rocketmq]# cd rocketmq-all-4.8.0-bin-release
[root@LEGION-Y7000 rocketmq-all-4.8.0-bin-release]# ls
benchmark  bin  conf  lib  LICENSE  NOTICE  README.md

Start Name Server

  > nohup ./bin/mqnamesrv -n 82.156.200.100:9876 &
  > tail -f ~/logs/rocketmqlogs/namesrv.log	# `~` here expands to /root
  The Name Server boot success...

Start Broker

  # first open ports 9876, 10909, 10911 and 10912 in the security group and firewall, shrink the JAVA_OPT heap settings in bin/runbroker.sh to 512m, then continue with the steps below
  # add the following to conf/broker.conf
flushDiskType = ASYNC_FLUSH # for reference
namesrvAddr = 82.156.200.100:9876
brokerIP1 = 82.156.200.100
  > nohup sh bin/mqbroker -n82.156.200.100:9876 -c conf/broker.conf autoCreateTopicEnable=true &
  > tail -f ~/logs/rocketmqlogs/broker.log
  The broker[%s, 172.30.30.233:10911] boot success...

Send & Receive Messages

Before sending/receiving messages, we need to tell clients the location of name servers. RocketMQ provides multiple ways to achieve this. For simplicity, we use environment variable NAMESRV_ADDR

 > export NAMESRV_ADDR=localhost:9876
 > sh bin/tools.sh org.apache.rocketmq.example.quickstart.Producer
 SendResult [sendStatus=SEND_OK, msgId= ...

 > sh bin/tools.sh org.apache.rocketmq.example.quickstart.Consumer
 ConsumeMessageThread_%d Receive New Messages: [MessageExt...

Shutdown Servers

> sh bin/mqshutdown broker
The mqbroker(36695) is running...
Send shutdown request to mqbroker(36695) OK

> sh bin/mqshutdown namesrv
The mqnamesrv(36664) is running...
Send shutdown request to mqnamesrv(36664) OK
  2. Create the topic stock
[root@LEGION-Y7000 rocketmq-all-4.8.0-bin-release]# cd bin
[root@LEGION-Y7000 bin]# ./mqadmin updateTopic -n localhost:9876 -t stock -c DefaultCluster
# this command errors out; edit tools.sh as below and retry
[root@LEGION-Y7000 bin]# vim tools.sh
# change JAVA_OPT to: JAVA_OPT="${JAVA_OPT} -Djava.ext.dirs=${BASE_DIR}/lib:${JAVA_HOME}/jre/lib/ext:/usr/java/jdk1.8.0_121/jre/lib/ext"
[root@LEGION-Y7000 bin]# ./mqadmin updateTopic -n localhost:9876 -t stock -c DefaultCluster
RocketMQLog:WARN No appenders could be found for logger (io.netty.util.internal.PlatformDependent0).
RocketMQLog:WARN Please initialize the logger system properly.
create topic to 172.17.0.1:10911 success.
TopicConfig [topicName=stock, readQueueNums=8, writeQueueNums=8, perm=RW-, topicFilterType=SINGLE_TAG, topicSysFlag=0, order=false]
  3. Code
/* connect to MQ (application.properties) */
# mq
mq.nameserver.addr=82.156.200.100:9876
mq.topicname=stock
/* add the dependency */
<!--RocketMQ-->
<dependency>
    <groupId>org.apache.rocketmq</groupId>
    <artifactId>rocketmq-client</artifactId>
    <version>4.3.0</version>
</dependency>
/* implement MqProducer */
@Component
public class MqProducer {

    private DefaultMQProducer producer;

    @Value("${mq.nameserver.addr}")
    private String namesrvAddr;

    @Value("${mq.topicname}")
    private String topicName;

    @PostConstruct
    public void init() throws MQClientException {
        producer = new DefaultMQProducer("producer_group");
        producer.setNamesrvAddr(namesrvAddr);
        producer.start();
    }

    // asynchronous stock-deduction message
    public boolean asyncReduceStock(Integer itemId, Integer amount) {
        Map<String, Object> bodyMap = new HashMap<>();
        bodyMap.put("itemId", itemId);
        bodyMap.put("amount", amount);
        Message message = new Message(topicName, "increase",
                JSON.toJSON(bodyMap).toString().getBytes(Charset.forName("UTF-8")));
        try {
            producer.send(message);
        } catch (MQClientException e) {
            e.printStackTrace();
            return false;
        } catch (RemotingException e) {
            e.printStackTrace();
            return false;
        } catch (MQBrokerException e) {
            e.printStackTrace();
            return false;
        } catch (InterruptedException e) {
            e.printStackTrace();
            return false;
        }
        return true;
    }
}
/* implement MqConsumer */
@Component
public class MqConsumer {

    private DefaultMQPushConsumer consumer;
    @Value("${mq.nameserver.addr}")
    private String nameAddr;

    @Value("${mq.topicname}")
    private String topicName;

    @Autowired
    private ItemStockDOMapper itemStockDOMapper;

    @PostConstruct
    public void init() throws MQClientException {
        consumer = new DefaultMQPushConsumer("stock_consumer_group");
        consumer.setNamesrvAddr(nameAddr);
        consumer.subscribe(topicName, "*");

        consumer.registerMessageListener(new MessageListenerConcurrently() {
            @Override
            public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs, ConsumeConcurrentlyContext context) {
                // the logic that actually deducts the stock in the database
                Message msg = msgs.get(0);
                String jsonString = new String(msg.getBody());
                Map<String, Object> map = JSON.parseObject(jsonString, Map.class);
                Integer itemId = (Integer) map.get("itemId");
                Integer amount = (Integer) map.get("amount");

                itemStockDOMapper.decreaseStock(itemId, amount);
                return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
            }
        });

        consumer.start();
    }
}
/* update ItemServiceImpl */
@Override
@Transactional
public boolean decreaseStock(Integer itemId, Integer amount) {
//        int affectedRow = itemStockDOMapper.decreaseStock(itemId, amount);
    Long row = redisTemplate.opsForValue().increment("promo_item_stock_" + itemId, amount.intValue() * -1);
    if (row >= 0) { // changed > to >=
        // cache stock updated successfully
        boolean mqResult = producer.asyncReduceStock(itemId, amount);
        if (!mqResult) {
            redisTemplate.opsForValue().increment("promo_item_stock_" + itemId, amount.intValue());
            return false;
        }
        return true;
    } else {
        // update failed, e.g. the stock would go from 0 to -1; roll the cache back
        redisTemplate.opsForValue().increment("promo_item_stock_" + itemId, amount.intValue());
        return false;
    }
}
<!-- personally I feel the two methods the instructor adds below are pointless... -->
ItemService.java
Add two new methods:
// asynchronously deduct stock (send the MQ message)
boolean asyncDecreaseStock(Integer itemId,Integer amount);
// restore (roll back) stock
boolean increaseStock(Integer itemId,Integer amount)throws BusinessException;

ItemServiceImpl.java
@Override
public boolean asyncDecreaseStock(Integer itemId, Integer amount) {
    boolean mqResult = mqProducer.asyncReduceStock(itemId,amount);
    return mqResult;
}

@Override
public boolean increaseStock(Integer itemId, Integer amount) throws BusinessException {
    redisTemplate.opsForValue().increment("promo_item_stock_"+itemId,amount.intValue());
    return true;
}
  4. Debugging
    • Find a matching item_id and id in the promo table in MySQL
    • Hit http://localhost:8080/item/publishpromo?id=1 to store promo_item_stock_x into Redis
    • Open the item's detail page, place an order, and debug
  5. Remaining problems
    • The asynchronous message may fail to be sent
    • The database deduction may fail to execute
    • A failed order cannot correctly restore the stock

Transactional messages

Improvement :star:

The stock deduction in OrderServiceImpl's createOrder had a problem: if the Redis deduction succeeded but persisting the order failed, Redis was rolled back while the MQ message could not be cancelled, so the stock in MySQL ended up lower than in Redis and items go unsold (MySQL stock lower than the real stock in Redis). The improvement: the deduction used to consist of two parts (deduct in Redis, then send the MQ message that keeps MySQL consistent); now the MQ send is moved to the end of createOrder. Spring's @Transactional only commits after the method returns successfully, so if the commit itself fails because of a network problem or a full disk, stock would still have been deducted for nothing. The MQ send is therefore done in afterCommit, which runs only after the data has actually been committed; throwing an exception there achieves nothing, so that code is commented out.

/*OrderServiceImpl*/
TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
    @Override
    public void afterCommit() {
        // asynchronously deduct the stock
        boolean mqResult = itemService.asyncDecreaseStock(itemId, amount);
//                if (!mqResult) {
//                    itemService.increaseStock(itemId, amount);
//                    throw new BusinessException(EmBusinessError.MQ_SEND_FAIL);
//                }
    }
});
/*ItemServiceImpl*/
@Override
@Transactional
public boolean decreaseStock(Integer itemId, Integer amount) {
//        int affectedRow = itemStockDOMapper.decreaseStock(itemId, amount);
    Long row = redisTemplate.opsForValue().increment("promo_item_stock_" + itemId, amount.intValue() * -1);
    if (row >= 0) { // changed > to >=
        // cache stock updated successfully
//            boolean mqResult = producer.asyncReduceStock(itemId, amount);
//            if (!mqResult) {
//                redisTemplate.opsForValue().increment("promo_item_stock_" + itemId, amount.intValue());
//                return false;
//            }
        return true;
    } else {
        // update failed, e.g. the stock would go from 0 to -1; roll the cache back
        increaseStock(itemId, amount);
        return false;
    }
}

Only one problem remains: how do we guarantee the MQ send always succeeds? This is what transactional messages are for: ==as long as the database transaction commits, the message is guaranteed to be delivered; if the transaction rolls back, the message is never delivered; and while the commit outcome is still unknown, the message stays in a pending state==

<!--MqProducer-->
transactionMQProducer = new TransactionMQProducer("transaction_producer_group");
transactionMQProducer.setNamesrvAddr(nameAddr);

// register the transaction listener before calling start()
transactionMQProducer.setTransactionListener(new TransactionListener() {
    @Override
    public LocalTransactionState executeLocalTransaction(Message msg, Object arg) {
        // the real work: create the order
        Integer itemId = (Integer) ((Map)arg).get("itemId");
        Integer promoId = (Integer) ((Map)arg).get("promoId");
        Integer userId = (Integer) ((Map)arg).get("userId");
        Integer amount = (Integer) ((Map)arg).get("amount");
//                String stockLogId = (String) ((Map)arg).get("stockLogId");
        try {
            orderService.createOrder(userId,itemId,promoId,amount);
        } catch (BusinessException e) {
            e.printStackTrace();
            // mark the corresponding stockLog as rolled back
//                    StockLogDO stockLogDO = stockLogDOMapper.selectByPrimaryKey(stockLogId);
//                    stockLogDO.setStatus(3);
//                    stockLogDOMapper.updateByPrimaryKeySelective(stockLogDO);
            return LocalTransactionState.ROLLBACK_MESSAGE;
        }
        return LocalTransactionState.COMMIT_MESSAGE;
    }
    // checkLocalTransaction kicks in when executeLocalTransaction did not return a definite LocalTransactionState
    @Override
    public LocalTransactionState checkLocalTransaction(MessageExt msg) {
        // decide COMMIT, ROLLBACK or UNKNOWN based on whether the stock deduction succeeded
        String jsonString  = new String(msg.getBody());
        Map<String,Object>map = JSON.parseObject(jsonString, Map.class);
        Integer itemId = (Integer) map.get("itemId");
        Integer amount = (Integer) map.get("amount");
        String stockLogId = (String) map.get("stockLogId");
//                StockLogDO stockLogDO = stockLogDOMapper.selectByPrimaryKey(stockLogId);
//                if(stockLogDO == null){
//                    return LocalTransactionState.UNKNOW;
//                }
//                if(stockLogDO.getStatus().intValue() == 2){
//                    return LocalTransactionState.COMMIT_MESSAGE;
//                }else if(stockLogDO.getStatus().intValue() == 1){
//                    return LocalTransactionState.UNKNOW;
//                }
        return LocalTransactionState.ROLLBACK_MESSAGE;
    }
});
transactionMQProducer.start();
}
// transactional stock-deduction message, sent together with the local transaction
public boolean transactionAsyncReduceStock(Integer userId,Integer itemId,Integer promoId,Integer amount){
    Map<String,Object> bodyMap = new HashMap<>();
    bodyMap.put("itemId",itemId);
    bodyMap.put("amount",amount);
//        bodyMap.put("stockLogId",stockLogId);

    Map<String,Object> argsMap = new HashMap<>();
    argsMap.put("itemId",itemId);
    argsMap.put("amount",amount);
    argsMap.put("userId",userId);
    argsMap.put("promoId",promoId);
//        argsMap.put("stockLogId",stockLogId);

    Message message = new Message(topicName,"increase",
            JSON.toJSON(bodyMap).toString().getBytes(Charset.forName("UTF-8")));
    TransactionSendResult sendResult = null;
    try {
        sendResult = transactionMQProducer.sendMessageInTransaction(message,argsMap);
    } catch (MQClientException e) {
        e.printStackTrace();
        return false;
    }
    if(sendResult.getLocalTransactionState() == LocalTransactionState.ROLLBACK_MESSAGE){
        return false;
    }else if(sendResult.getLocalTransactionState() == LocalTransactionState.COMMIT_MESSAGE){
        return true;
    }else{
        return false;
    }
}

At this point order creation has been completely taken over by MqProducer, so OrderController's createOrder is changed to:

if (!mqProducer.transactionAsyncReduceStock(userModel.getId(), itemId, promoId, amount))
    throw new BusinessException(EmBusinessError.UNKNOWN_ERROR, "下单失败");

Stock operation log

To let checkLocalTransaction determine the state of a message, we introduce an operation log for stock (operational data: log data)

  1. Create the stock_log table and generate its mapper files with mybatis-generator
CREATE TABLE `stock_log` (
  `stock_log_id` varchar(64) NOT NULL,
  `item_id` int(11) NOT NULL DEFAULT '0',
  `amount` int(11) NOT NULL DEFAULT '0',
  `status` int(11) NOT NULL DEFAULT '0' COMMENT '//1表示初始状态,2表示下单扣减库存成功,3表示下单回滚',
  PRIMARY KEY (`stock_log_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
  2. First write a stock-log row in init status, then place the order through the transactional message
/*ItemServiceImpl*/
@Override
@Transactional
public String initStockLog(Integer itemId, Integer amount) {
    StockLogDO stockLogDO = new StockLogDO();
    stockLogDO.setItemId(itemId);
    stockLogDO.setAmount(amount);
    stockLogDO.setStockLogId(UUID.randomUUID().toString().replace("-",""));
    stockLogDO.setStatus(1); // 1 = initial, 2 = deduction succeeded, 3 = rolled back
    stockLogDOMapper.insertSelective(stockLogDO);
    return stockLogDO.getStockLogId(); // return the id so it can be passed into the transactional message
}
/*OrderController*/
String stockLogId = itemService.initStockLog(itemId, amount);
  1. stockLogId放入create方法内并修改相关代码,设置库存流水状态为成功
<!--OrderServiceImpl-->
StockLogDO stockLogDO = stockLogDOMapper.selectByPrimaryKey(stockLogId);
if(stockLogDO == null)
    throw new BusinessException(EmBusinessError.UNKNOWN_ERROR);
stockLogDO.setStatus(2);
stockLogDOMapper.updateByPrimaryKeySelective(stockLogDO);
  4. Implement MqProducer's checkLocalTransaction method
@Override
public LocalTransactionState checkLocalTransaction(MessageExt msg) {
    // decide COMMIT, ROLLBACK or UNKNOWN based on whether the stock deduction succeeded
    String jsonString  = new String(msg.getBody());
    Map<String,Object>map = JSON.parseObject(jsonString, Map.class);
    Integer itemId = (Integer) map.get("itemId");   // these two feel unused here; not sure why the instructor keeps them
    Integer amount = (Integer) map.get("amount");
    String stockLogId = (String) map.get("stockLogId");
    StockLogDO stockLogDO = stockLogDOMapper.selectByPrimaryKey(stockLogId);
    if(stockLogDO == null){
        return LocalTransactionState.UNKNOW;
    }
    if(stockLogDO.getStatus().intValue() == 2){
        return LocalTransactionState.COMMIT_MESSAGE;
    }else if(stockLogDO.getStatus().intValue() == 1){
        return LocalTransactionState.UNKNOW;
    }
    return LocalTransactionState.ROLLBACK_MESSAGE;
}
  5. Fill in MqProducer's executeLocalTransaction method
@Override
public LocalTransactionState executeLocalTransaction(Message msg, Object arg) {
    // the real work: create the order
    Integer itemId = (Integer) ((Map)arg).get("itemId");
    Integer promoId = (Integer) ((Map)arg).get("promoId");
    Integer userId = (Integer) ((Map)arg).get("userId");
    Integer amount = (Integer) ((Map)arg).get("amount");
    String stockLogId = (String) ((Map)arg).get("stockLogId");
    try {
        orderService.createOrder(userId,itemId,promoId,amount, stockLogId);
    } catch (BusinessException e) {
        e.printStackTrace();
        // mark the corresponding stockLog as rolled back
        StockLogDO stockLogDO = stockLogDOMapper.selectByPrimaryKey(stockLogId);
        stockLogDO.setStatus(3);
        stockLogDOMapper.updateByPrimaryKeySelective(stockLogDO);
        return LocalTransactionState.ROLLBACK_MESSAGE;
    }
    return LocalTransactionState.COMMIT_MESSAGE;
}

The root of the problem: there was no stock operation log.

Operational data (log data) records every stock-deduction operation so that its state can be tracked; based on that state we can roll back or query progress, which lets many asynchronous operations be built on top of operational data -- for example, configuration that operations staff create in the back office.

Master data: ItemModel is master data, recording the item's core attributes; the stock in ItemStock is master data as well.

Guaranteeing eventual consistency of the stock database

Approach:

Introduce the stock operation log, which achieves eventual consistency between Redis and the database; introduce the transactional-message mechanism;

Remaining problems:

How to handle Redis being unavailable; how to handle errors in the deduction log;

The business scenario dictates the high-availability design

Design principle: better to under-sell than to over-sell;

Approach:

The stock in Redis may be lower than the actual stock in the database; release reserved stock on timeout;

Stock sell-out

  • A sold-out flag;

  • Once sold out, skip the downstream flow;

  • Notify the other systems when sold out;

  • Restock / relist

/*ItemServiceImpl*/
@Override
@Transactional
public boolean decreaseStock(Integer itemId, Integer amount) {
//        int affectedRow = itemStockDOMapper.decreaseStock(itemId, amount);
    Long row = redisTemplate.opsForValue().increment("promo_item_stock_" + itemId, amount.intValue() * -1);
    if (row > 0) { // previously >= 0; now split into > 0, == 0 and < 0
        // cache stock updated successfully
//            boolean mqResult = producer.asyncReduceStock(itemId, amount);
//            if (!mqResult) {
//                redisTemplate.opsForValue().increment("promo_item_stock_" + itemId, amount.intValue());
//                return false;
//            }
        return true;
    } else if(row == 0){
        // set the sold-out flag
        redisTemplate.opsForValue().set("promo_item_stock_invalid_"+itemId, "true");
        return true;
    }else {
        // update failed, e.g. the stock would go from 0 to -1; restore the cache
        increaseStock(itemId, amount);
        return false;
    }
}
/*OrderController*/
// if the item is already sold out, fail the order immediately
if(redisTemplate.hasKey("promo_item_stock_invalid_"+itemId)){
    throw new BusinessException(EmBusinessError.STOCK_NOT_ENOUGH);
}

Post-order flow

Make the sales-count update asynchronous; make the order-record bookkeeping asynchronous

Traffic peak shaving

Seckill tokens

  1. Principle

    • The seckill order endpoint can only be entered with a token: it takes an extra parameter carrying the token obtained by the front end, and only a valid token lets the request into the order-creation logic

    • Tokens are generated by the seckill promo module; the trading system only verifies the token to decide whether this HTTP request may enter the seckill endpoint

    • The promo module fully owns token generation, so the logic is centralized in one place

    • A seckill token must be obtained before a seckill order can be placed

  2. Backend implementation

/*PromoServiceImpl*/
@Override
public String generateSecondKillToken(Integer promoId,Integer itemId,Integer userId) {

    // if the sold-out key exists, fail the token request immediately
    if(redisTemplate.hasKey("promo_item_stock_invalid_"+itemId)){
        return null;
    }
    PromoDO promoDO = promoDOMapper.selectByPrimaryKey(promoId);

    //dataobject->model
    PromoModel promoModel = convertFromPojo(promoDO);
    if(promoModel == null){
        return null;
    }

    // determine whether the promo is upcoming, running, or finished
    if(promoModel.getStartDate().isAfterNow()){
        promoModel.setStatus(1);
    }else if(promoModel.getEndDate().isBeforeNow()){
        promoModel.setStatus(3);
    }else{
        promoModel.setStatus(2);
    }
    // only continue if the promo is currently running
    if(promoModel.getStatus().intValue() != 2){
        return null;
    }
    // check that the item exists
    ItemModel itemModel = itemService.getItemByIdInCache(itemId);
    if(itemModel == null){
        return null;
    }
    // check that the user exists
    UserModel userModel = userService.getUserByIdInCache(userId);
    if(userModel == null){
        return null;
    }

    // decrement the seckill-gate counter
    long result = redisTemplate.opsForValue().increment("promo_door_count_"+promoId,-1);
    if(result < 0){
        return null;
    }
    // generate the token and store it in Redis with a 5-minute TTL
    String token = UUID.randomUUID().toString().replace("-","");

    redisTemplate.opsForValue().set("promo_token_"+promoId+"_userid_"+userId+"_itemid_"+itemId,token);
    redisTemplate.expire("promo_token_"+promoId+"_userid_"+userId+"_itemid_"+itemId,5, TimeUnit.MINUTES);

    return token;
}
/*OrderController*/
// generate the seckill token
@RequestMapping(value = "/generatetoken",method = {RequestMethod.POST},consumes={CONTENT_TYPE_FORMED})
@ResponseBody
public CommonReturnType generatetoken(@RequestParam(name="itemId")Integer itemId,
                                    @RequestParam(name="promoId")Integer promoId) throws BusinessException {
    // get the user from the login token
    String token = httpServletRequest.getParameterMap().get("token")[0];
    if(StringUtils.isEmpty(token)){
        throw new BusinessException(EmBusinessError.USER_NOT_LOGIN,"用户还未登陆,不能下单");
    }
    // fetch the user's login info from Redis
    UserModel userModel = (UserModel) redisTemplate.opsForValue().get(token);
    if(userModel == null){
        throw new BusinessException(EmBusinessError.USER_NOT_LOGIN,"用户还未登陆,不能下单");
    }
    // generate the seckill access token
    String promoToken = promoService.generateSecondKillToken(promoId,itemId,userModel.getId());

    if(promoToken == null){
        throw new BusinessException(EmBusinessError.PARAMETER_VALIDATION_ERROR,"生成令牌失败");
    }
    // return the token
    return CommonReturnType.create(promoToken);
}
/* comment out the validation code in ItemServiceImpl that token generation now covers */
/*OrderController*/
// verify the seckill token
if (promoId != null){
    String inRedisPromoToken = (String) redisTemplate.opsForValue().get("promo_token_"+promoId+"_userid_"+userModel.getId()+"_itemid_"+itemId);
    if(inRedisPromoToken == null)
        throw new BusinessException(EmBusinessError.PARAMETER_VALIDATION_ERROR, "秒杀令牌校验失败");
    if (!StringUtils.equals(promoToken,inRedisPromoToken))
        throw new BusinessException(EmBusinessError.PARAMETER_VALIDATION_ERROR, "秒杀令牌校验失败");
}
  3. Front-end implementation
$.ajax({
    type:"POST",
    contentType:"application/x-www-form-urlencoded",
    url:"http://"+g_host+"/order/generatetoken?token="+token,
    data:{
        "itemId":g_itemVO.id,
        "promoId":g_itemVO.promoId
    },
    xhrFields:{withCredentials:true},
    success:function(data){
        if(data.status == "success"){
            var promoToken = data.data;
            $.ajax({
                type:"POST",
                contentType:"application/x-www-form-urlencoded",
                url:"http://"+g_host+"/order/createorder?token="+token,
                data:{
                    "itemId":g_itemVO.id,
                    "amount":1,
                    "promoId":g_itemVO.promoId,
                    "promoToken":promoToken
                },
                xhrFields:{withCredentials:true},
                success:function(data){
                    if(data.status == "success"){
                        alert("下单成功");
                        window.location.reload();
                    }else{
                        alert("下单失败,原因:"+data.data.errMsg);
                        if(data.data.errCode == 20003){
                            window.location.href="login.html";
                        }
                    }
                },
                error:function(data){
                    alert("下单失败,原因:"+data.responseText);
                }
            });


        }else{
            alert("获取令牌失败,原因:"+data.data.errMsg);
            if(data.data.errCode == 20003){
                window.location.href="login.html";
            }
        }
    },
    error:function(data){
        alert("获取令牌失败,原因为"+data.responseText);
    }
});

Seckill gate :star:

To stop tokens from being generated without limit the moment the promo starts, which hurts system performance, the seckill-gate scheme is introduced;

  1. Principle: build a customized token-issuing policy on top of the token-authorization mechanism to control user traffic and act as a gate; issue a number of tokens proportional to the seckill item's initial stock to cap the flow through the gate; move the user risk-control checks forward into token issuance; move the sold-out check forward into token issuance.
  2. Implementation
/*PromoServiceImpl*/
// store the gate limit in Redis when the promo is published
//publishPromo
redisTemplate.opsForValue().set("promo_door_count_"+promoId, itemModel.getStock().intValue()*5);
// decrement the gate counter when issuing a token
//generateSecondKillToken
long result = redisTemplate.opsForValue().increment("promo_door_count_"+promoId,-1);
if(result < 0){
    return null;
}
  3. Shortcomings: the system still cannot absorb a sudden surge of traffic; the token limit handles multiple stocks / multiple items poorly
  4. Queue flood-release principle: queueing is sometimes more efficient than concurrency (e.g. Redis's single-threaded model, InnoDB mutex keys); use queueing to cap concurrent traffic; adjust how much the queue releases according to how congested the downstream is. Alipay's bank gateways are the classic example: Alipay integrates many bank gateways, and a user may bind several bank cards, each with its own payment channel. During big promotions Alipay's gateway sees traffic on the order of hundreds of millions of requests that the bank gateways cannot handle, so Alipay queues the payment requests in its own message queue and drains them at the TPS each bank gateway has committed to. A message queue works like a reservoir: it holds back the upstream flood and smooths the peak that reaches the downstream channel
  5. Queue flood-release implementation
/*OrderController*/
private ExecutorService executorService;

@PostConstruct
public void init() {
    executorService = Executors.newFixedThreadPool(20);
}
//createOrder
// synchronously call the thread pool's submit method
// a waiting queue with a congestion window of 20, used for queued flood release
Future<Object> future = executorService.submit(new Callable<Object>() {

    @Override
    public Object call() throws Exception {
        // write the stock-log row in init status
        String stockLogId = itemService.initStockLog(itemId, amount);
        // then place the order through the transactional message
        if (!mqProducer.transactionAsyncReduceStock(userModel.getId(), itemId, promoId, amount, stockLogId)) {
            throw new BusinessException(EmBusinessError.UNKNOWN_ERROR, "下单失败");
        }
        return null;
    }
});
try {
    future.get();
} catch (InterruptedException e) {
    throw new BusinessException(EmBusinessError.UNKNOWN_ERROR);
} catch (ExecutionException e) {
    throw new BusinessException(EmBusinessError.UNKNOWN_ERROR);
}
  6. Local or distributed? Local: keep the queue in local memory; distributed: keep the queue in an external Redis

Say there are 100 machines and each keeps a 20-slot queue: the overall congestion window is 2000. With load balancing, though, it is hard to guarantee that every machine receives an equal share of createOrder requests. If instead those 2000 queued requests are put into Redis, and every request asks Redis for a slot within the configured congestion window, that is a distributed queue;

Both approaches have pros and cons:

The biggest problem with a distributed queue is performance: every request costs a network round trip and puts load on Redis, which is itself centralized (even if it can be scaled out). It is also a single point of failure: if Redis goes down, the whole queueing mechanism is gone.

A local queue lives entirely in memory, so there is no network cost, and as long as the JVM is alive the queue keeps working; for enterprise applications the local queue is therefore the recommended choice for its performance and availability. It cannot be perfectly fair -- load balancing never spreads createOrder requests exactly evenly across servers -- but under high load that imbalance is acceptable. One can also use an external centralized queue and, when it becomes unavailable or unacceptably slow, degrade back to the local in-memory queue; a rough sketch of that fallback follows.
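
A rough sketch of that degrade-to-local fallback (the Redis list name order_queue and the exception-based fallback are assumptions; the project itself only implements the local ExecutorService queue shown above):

/*OrderController -- sketch only*/
public void enqueueOrder(Map<String, Object> orderArgs, Callable<Object> localTask) {
    try {
        // distributed queue: push the request onto a shared Redis list that downstream workers drain
        redisTemplate.opsForList().leftPush("order_queue", JSON.toJSONString(orderArgs));
    } catch (Exception e) {
        // Redis unavailable or too slow: degrade to the local congestion-window queue
        executorService.submit(localTask);
    }
}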

Anti-abuse and rate limiting

Captcha

  • Captcha generation and verification
  • Rate-limiting principles and implementation
  • Anti-scalper techniques
  1. Backend implementation
/*OrderController.java*/
// generate the captcha
@RequestMapping(value = "/generateverifycode",method = {RequestMethod.GET,RequestMethod.POST})
@ResponseBody
public void generateverifycode(HttpServletResponse response) throws BusinessException, IOException {
    // get the user from the login token
    String token = httpServletRequest.getParameterMap().get("token")[0];
    if (StringUtils.isEmpty(token)) {
        throw new BusinessException(EmBusinessError.USER_NOT_LOGIN, "用户还未登陆,不能生成验证码");
    }
    UserModel userModel = (UserModel) redisTemplate.opsForValue().get(token);
    if(userModel == null)
        throw new BusinessException(EmBusinessError.USER_NOT_LOGIN, "用户还未登陆,不能生成验证码");
    Map<String, Object> map =CodeUtil.generateCodeAndPic();
    redisTemplate.opsForValue().set("verify_code_"+userModel.getId(),map.get("code"));
    redisTemplate.expire("verify_code_"+userModel.getId(),10,TimeUnit.MINUTES);
    ImageIO.write((RenderedImage) map.get("codePic"), "jpeg", response.getOutputStream());
}
//generateToken
// validate the submitted verifyCode against the one stored in Redis
String redisVerifyCode = (String) redisTemplate.opsForValue().get("verify_code_"+userModel.getId());
if(StringUtils.isEmpty(redisVerifyCode)){
    throw new BusinessException(EmBusinessError.PARAMETER_VALIDATION_ERROR,"请求非法");
}
if(!redisVerifyCode.equalsIgnoreCase(verifyCode)){
    throw new BusinessException(EmBusinessError.PARAMETER_VALIDATION_ERROR,"请求非法,验证码错误");
}
  2. Front-end implementation
<div id="verifyDiv" style="display: none" class="form-actions" >
    <img src=""/>
    <input id="verifyContent" type="text" value=""/>
    <button class="btn blue" id="verifyButton" type="submit">
        验证
    </button>
</div>

$("#verifyButton").on("click",function () {
    var token = window.localStorage["token"];
    $.ajax({
        type:"POST",
        contentType:"application/x-www-form-urlencoded",
        url:"http://"+g_host+"/order/generatetoken?token="+token,
        data:{
            "itemId":g_itemVO.id,
            "promoId":g_itemVO.promoId,
            "verifyCode":$("#verifyContent").val()
        },
        xhrFields:{withCredentials:true},
        success:function(data){
            if(data.status == "success"){
                var promoToken = data.data;
                $.ajax({
                    type:"POST",
                    contentType:"application/x-www-form-urlencoded",
                    url:"http://"+g_host+"/order/createorder?token="+token,
                    data:{
                        "itemId":g_itemVO.id,
                        "amount":1,
                        "promoId":g_itemVO.promoId,
                        "promoToken":promoToken
                    },
                    xhrFields:{withCredentials:true},
                    success:function(data){
                        if(data.status == "success"){
                            alert("下单成功");
                            window.location.reload();
                        }else{
                            alert("下单失败,原因:"+data.data.errMsg);
                            if(data.data.errCode == 20003){
                                window.location.href="login.html";
                            }
                        }
                    },
                    error:function(data){
                        alert("下单失败,原因:"+data.responseText);
                    }
                });


            }else{
                alert("获取令牌失败,原因:"+data.data.errMsg);
                if(data.data.errCode == 20003){
                    window.location.href="login.html";
                }
            }
        },
        error:function(data){
            alert("获取令牌失败,原因为"+data.responseText);
        }
    });
});
$("#createorder").on("click", function () {
    var token = window.localStorage["token"];
    if(token == null){
        alert("没有登陆,不能下单");
        window.location.href="login.html";
        return false;
    }

    $("#verifyDiv img").attr("src","http://"+g_host+"/order/generateverifycode?token="+token);
    $("#verifyDiv").show();
});

Rate-limiting techniques

  1. Why rate-limit
  • Traffic is always far larger than you expect
  • A system that stays up (even degraded) beats one that is down
  • Better to serve only a few users than to serve nobody at all
  2. Rate-limiting approaches
  • Limit concurrency: keep a counter at the controller entrance (initialized to the allowed concurrency), decrement it on entry and increment it on exit
  • The industry-standard approach is to limit TPS/QPS
  • Token bucket: caps traffic at a fixed maximum per second while absorbing short bursts up to that cap (a minimal sketch follows this list)
  • Leaky bucket: smooths traffic so that downstream operations are fed at a fixed rate
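
The project uses Guava's RateLimiter below; the token-bucket idea itself fits in a few lines (illustrative only, not project code):

// Illustrative token bucket: refill `tokensPerSecond` tokens up to `capacity`,
// and let a request through only if a whole token is available.
public class SimpleTokenBucket {
    private final long capacity;
    private final double refillPerNano;
    private double tokens;
    private long lastRefillNanos;

    public SimpleTokenBucket(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / 1_000_000_000.0;
        this.tokens = capacity;
        this.lastRefillNanos = System.nanoTime();
    }

    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastRefillNanos) * refillPerNano);
        lastRefillNanos = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;   // bursts are absorbed up to `capacity` tokens
            return true;
        }
        return false;        // over the limit: reject, like the RATE_LIMIT error below
    }
}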

image-20210219151205354

  3. Implementation (Guava RateLimiter)
<!--OrderController-->
private RateLimiter orderCreateRateLimiter;

@PostConstruct
public void init() {
    executorService = Executors.newFixedThreadPool(20);
    orderCreateRateLimiter = RateLimiter.create(300);
}
//createOrder
if(!orderCreateRateLimiter.tryAcquire()){
    throw new BusinessException(EmBusinessError.RATE_LIMIT);
}
  4. Rate-limit granularity
  • Per endpoint
  • Global
  5. Rate-limit scope
  • Cluster-wide limiting: rely on Redis or other middleware as a shared counter, which often becomes a performance bottleneck itself
  • Per-machine limiting: behind load balancing, an evenly divided per-machine limit usually works better
  6. Traditional anti-abuse
  • Limit how many calls one session (session_id, token) may make per second/minute: easily bypassed by opening multiple sessions (a minimal Redis-based sketch follows this list)
  • Limit how many calls one IP may make per second/minute: the threshold is hard to choose and easily hits legitimate users behind shared IPs
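
A minimal sketch of the first approach, limiting one session/token to a fixed number of calls per minute with the project's redisTemplate (the key pattern, the two-minute expiry and the limit value are assumptions):

/*OrderController -- sketch only*/
// allow at most `limit` calls per token per minute; returns false once the quota is used up
public boolean allowRequest(String token, int limit) {
    String key = "rate_limit_" + token + "_" + (System.currentTimeMillis() / 60000); // current-minute bucket
    Long count = redisTemplate.opsForValue().increment(key, 1);
    if (count != null && count == 1L) {
        redisTemplate.expire(key, 2, TimeUnit.MINUTES); // let old buckets expire on their own
    }
    return count != null && count <= limit;
}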

Anti-scalper techniques

  1. Why scalpers are hard to stop
  • Emulator cheating: emulated hardware whose device info can be forged
  • Device-farm cheating: studios operating racks of real mobile devices
  • Manual cheating: part-time workers paid commission to place orders
  2. Device fingerprinting
  • Collect the device's parameters and generate a unique device fingerprint when the app starts
  • Use the fingerprint parameters to estimate the probability that the device is an emulator or otherwise suspicious
  3. Credential system
  • Issue a credential based on the device fingerprint
  • Carry the credential along critical business flows and have the business system verify it with the credential server
  • The credential server scores how suspicious a credential is, based on the device-fingerprint parameters behind it and a real-time behavioral risk-control system
  • If the score drops below a threshold, the business system returns a fixed error code and the front end pops a captcha challenge; passing the challenge raises the credential's score on the credential server
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: <program> Copyright (C) <year> <name of author> This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <http://www.gnu.org/licenses/>. The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. 
But first, please read <http://www.gnu.org/philosophy/why-not-lgpl.html>.
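
For illustration only: following the "How to Apply These Terms" appendix above, a source-file header in a Java project such as this one might look like the sketch below. The file name, class name, and the year/author placeholders are hypothetical and are not taken from the repository; only the wording of the notice comes from the template above.

```java
/*
 * DemoApplication.java (hypothetical file name, used only for this example)
 * Copyright (C) <year> <name of author>   <- placeholders, exactly as in the template above
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */
public class DemoApplication {
    public static void main(String[] args) {
        // The class body is irrelevant here; only the placement of the notice matters.
        System.out.println("This program is licensed under GPL-3.0.");
    }
}
```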
