Distributed Systems in Practice | Part 1: Applying ELK to the open-source full stack (Spring Cloud microservices + Vue + WeChat Mini Program) of the youlai project — not just distributed log collection, but a quick start with ElasticSearch

I. Foreword

Everyone is welcome to join the youlai open-source project chat group and take part in developing the project together~

I had actually planned to write this article a while ago, but lately most of my energy has gone into polishing the microservices, the permission design, the WeChat Mini Program and the admin front end. Thankfully the folks in the chat group kept helping by reporting issues, and the basic version is now more or less complete. Thanks to all of you, and I hope we can keep learning from each other and improving together~

OK, back to the topic. ELK is the combination of three open-source tools: Elasticsearch, Logstash and Kibana. Many of you have probably used ELK for distributed log collection. The flow can be summarized as: the microservice application ships its Logback output to Logstash over TCP, Logstash parses and filters the events and forwards them to ES, and Kibana provides the search and visualization UI on top.

In this hands-on case, a Spring AOP aspect plus Logback cuts across the authentication endpoint to record user login logs, which are collected into ELK; Spring Boot is then integrated with RestHighLevelClient to search and aggregate the data in ElasticSearch. From log collection to statistics, one complete pass to get you up to speed with ElasticSearch quickly.

All of the front-end and back-end source code covered in this article has been pushed to Gitee and GitHub; if you are already familiar with the youlai project, a quick skim of the steps is enough.

Project    GitHub              Gitee
Backend    youlai-mall         youlai-mall
Frontend   youlai-mall-admin   youlai-mall-admin

II. Requirements

On top of ELK log collection, this article implements the following requirements:

  1. Record system login logs, including the user's IP, the login latency and the JWT access token
  2. Count logins over the last ten days, plus today's visitor IPs and the total number of visitor IPs
  3. Make full use of the recorded JWT: force a user offline by blacklisting the token

The result looks like this:

Demo address: www.youlai.store

  • Kibana log visualization and statistics

  • Login counts, today's visitor IP count, total visitor IP count

  • Login records and forced logout (the demo shows me forcing my own session offline)

III. Setting up the ELK environment quickly with Docker

1. Pull the images

docker pull elasticsearch:7.10.1
docker pull kibana:7.10.1
docker pull logstash:7.10.1

2. Deploy elasticsearch

1. Prepare the environment

# Create the directories and config file
mkdir -p /opt/elasticsearch/{plugins,data}  /etc/elasticsearch
touch /etc/elasticsearch/elasticsearch.yml
chmod -R 777 /opt/elasticsearch/data/
vim /etc/elasticsearch/elasticsearch.yml
# elasticsearch.yml contents
cluster.name: elasticsearch
http.cors.enabled: true                               
http.cors.allow-origin: "*"                     
http.host: 0.0.0.0
node.max_local_storage_nodes: 100

2. Start the container

docker run -d --name=elasticsearch --restart=always \
-e discovery.type=single-node \
-e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
-p 9200:9200 \
-p 9300:9300 \
-v /etc/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
-v /opt/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
elasticsearch:7.10.1

3. Verify and check the ElasticSearch version

curl -XGET localhost:9200

3. Deploy kibana

1. Prepare the environment

# Create the config file
mkdir -p /etc/kibana
vim /etc/kibana/kibana.yml

# kibana.yml contents
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
i18n.locale: "zh-CN"

2. Start the container

docker run -d --restart always -p 5601:5601 --link elasticsearch \
-e ELASTICSEARCH_URL=http://elasticsearch:9200 \
-v /etc/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml \
kibana:7.10.1

4. Deploy logstash

1. Prepare the environment

  • Configure logstash.yml
# Create the config file
mkdir -p /etc/logstash/config
vim /etc/logstash/config/logstash.yml

# logstash.yml contents
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.management.pipeline.id: ["main"]
  • Configure pipelines.yml
# Create the config file
vim /etc/logstash/config/pipelines.yml

# pipelines.yml contents (mind the leading spaces)
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline/logstash.conf"
  • Configure logstash.conf
# Create the config file
mkdir -p /etc/logstash/pipeline
vim /etc/logstash/pipeline/logstash.conf

# logstash.conf contents
input {
    tcp {
      port => 5044
      mode => "server"
      host => "0.0.0.0"
      codec => json_lines
    }
}
filter{

}
output {
    elasticsearch {
        hosts => ["elasticsearch:9200"]
        # index name, built from the project/action fields of the event; created automatically if it does not exist
        index => "%{[project]}-%{[action]}-%{+YYYY-MM-dd}"
    }
}

2. Start the container

docker run -d --restart always -p 5044:5044 -p 9600:9600 --name logstash --link elasticsearch \
-v /etc/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml \
-v /etc/logstash/config/pipelines.yml:/usr/share/logstash/config/pipelines.yml \
-v /etc/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
logstash:7.10.1

5. Test

Open http://localhost:5601/ in a browser; if the Kibana UI loads, the stack is up.
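
Besides opening Kibana, you can also push a hand-crafted event straight into the Logstash TCP input to verify the whole pipeline before wiring up the application. Below is a minimal, throwaway Java sketch (the host, port and field values are illustrative; the project, action and date fields mirror what logback-spring.xml will send later):

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class LogstashTcpSmokeTest {

    public static void main(String[] args) throws Exception {
        // One JSON document per line, terminated by \n, which is what the json_lines codec expects
        String event = "{\"project\":\"youlai-auth\",\"action\":\"login\",\"date\":\"2021-03-25\","
                + "\"username\":\"admin\",\"message\":\"smoke test\"}\n";

        try (Socket socket = new Socket("localhost", 5044);
             OutputStream out = socket.getOutputStream()) {
            out.write(event.getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
        // If everything works, an index like youlai-auth-login-<today> should appear in Elasticsearch
    }
}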

IV. Logging login events with Spring AOP + Logback

1. A Spring AOP aspect that adds logging around the authentication endpoint

Code location: common-web#LoginLogAspect

@Aspect
@Component
@AllArgsConstructor
@Slf4j
@ConditionalOnProperty(value = "spring.application.name", havingValue = "youlai-auth")
public class LoginLogAspect {

    @Pointcut("execution(public * com.youlai.auth.controller.AuthController.postAccessToken(..))")
    public void Log() {
    }

    @Around("Log()")
    public Object doAround(ProceedingJoinPoint joinPoint) throws Throwable {

        LocalDateTime startTime = LocalDateTime.now();
        Object result = joinPoint.proceed();

        // Get the current request
        ServletRequestAttributes attributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        HttpServletRequest request = attributes.getRequest();

        // Skip refresh_token requests
        String grantType = request.getParameter(AuthConstants.GRANT_TYPE_KEY);
        if (AuthConstants.REFRESH_TOKEN.equals(grantType)) {
            return result;
        }

        // Timing
        LocalDateTime endTime = LocalDateTime.now();
        long elapsedTime = Duration.between(startTime, endTime).toMillis(); // elapsed time in milliseconds

        // Endpoint description from the @ApiOperation annotation
        MethodSignature signature = (MethodSignature) joinPoint.getSignature();
        String description = signature.getMethod().getAnnotation(ApiOperation.class).value(); // method description

        String username = request.getParameter(AuthConstants.USER_NAME_KEY); // login username
        String date = startTime.format(DateTimeFormatter.ofPattern("yyyy-MM-dd")); // needed for the index name, because the index date generated by default is in a different time zone

        // Extract the access token from the authentication response
        String token = Strings.EMPTY;
        if (result != null) {
            JSONObject jsonObject = JSONUtil.parseObj(result);
            token = jsonObject.getStr("value");
        }

        String clientIP = IPUtils.getIpAddr(request);  // client IP (note: extra Nginx configuration is required when behind a proxy)
        String region = IPUtils.getCityInfo(clientIP); // city/region resolved from the IP

        // MDC: extra Logback fields, see the custom output pattern in logback-spring.xml
        MDC.put("elapsedTime", StrUtil.toString(elapsedTime));
        MDC.put("description", description);
        MDC.put("region", region);
        MDC.put("username", username);
        MDC.put("date", date);
        MDC.put("token", token);
        MDC.put("clientIP", clientIP);

        log.info("{} logged in, took {} ms", username, elapsedTime); // a log statement must be emitted here for the event to be collected; the content is arbitrary and ends up in the message field, see logback-spring.xml
        return result;
    }
}

2. Shipping the Logback output to Logstash

Code location: common-web#logback-spring.xml

<!-- Ship login logs to Logstash, which forwards them to ElasticSearch -->
<appender name="LOGIN_LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>localhost:5044</destination>
    <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
        <providers>
            <timestamp>
                <timeZone>Asia/Shanghai</timeZone>
            </timestamp>
            <!-- Custom log output format -->
            <pattern>
                <pattern>
                    {
                    "project": "${APP_NAME}",
                    "date": "%X{date}", <!-- 索引名时区同步 -->
                    "action":"login",
                    "pid": "${PID:-}",
                    "thread": "%thread",
                    "message": "%message",
                    "elapsedTime": "%X{elapsedTime}",
                    "username":"%X{username}",
                    "clientIP": "%X{clientIP}",
                    "region":"%X{region}",
                    "token":"%X{token}",
                    "loginTime": "%date{\"yyyy-MM-dd HH:mm:ss\"}",
                    "description":"%X{description}"
                    }
                </pattern>
            </pattern>
        </providers>
    </encoder>
    <keepAliveDuration>5 minutes</keepAliveDuration>
</appender>

<!-- additivity="true" is the default; log events also propagate up to the root logger -->
<logger name="com.youlai.common.web.aspect.LoginLogAspect" level="INFO" additivity="true">
    <appender-ref ref="LOGIN_LOGSTASH"/>
</logger>
  • localhost:5044 is the TCP input address that Logstash listens on for incoming events
  • %X{username} prints the username value that was put into the MDC

V. Integrating the ElasticSearch client RestHighLevelClient into Spring Boot

1. pom dependencies

Code location: common-elasticsearch#pom.xml

The client version must match the server version, which here is 7.10.1.

<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.elasticsearch.client</groupId>
            <artifactId>elasticsearch-rest-client</artifactId>
        </exclusion>

        <exclusion>
            <artifactId>elasticsearch</artifactId>
            <groupId>org.elasticsearch</groupId>
        </exclusion>
    </exclusions>
</dependency>

<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.10.1</version>
</dependency>

<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-client</artifactId>
    <version>7.10.1</version>
</dependency>

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

2. yml configuration

spring:
  elasticsearch:
    rest:
      uris: ["http://localhost:9200"]
      cluster-nodes:
        - localhost:9200

3. The RestHighLevelClientConfig configuration class

Code location: common-elasticsearch#RestHighLevelClientConfig

@ConfigurationProperties(prefix = "spring.elasticsearch.rest")
@Configuration
@AllArgsConstructor
public class RestHighLevelClientConfig {

    @Setter
    private List<String> clusterNodes;

    @Bean
    public RestHighLevelClient restHighLevelClient() {

        HttpHost[] hosts = clusterNodes.stream()
                .map(this::buildHttpHost) // eg: new HttpHost("127.0.0.1", 9200, "http")
                .toArray(HttpHost[]::new);
        return new RestHighLevelClient(RestClient.builder(hosts));
    }

    private HttpHost buildHttpHost(String node) {
        String[] nodeInfo = node.split(":");
        return new HttpHost(nodeInfo[0].trim(), Integer.parseInt(nodeInfo[1].trim()), "http");
    }
}
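
With this bean registered, any other Spring bean can simply inject RestHighLevelClient. The following is only a minimal usage sketch, not project code: the EsHealthChecker class and the youlai-auth-login-* index pattern are made up for illustration, while the ping/count calls are the standard 7.x high-level client API:

import lombok.AllArgsConstructor;
import lombok.SneakyThrows;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.core.CountRequest;
import org.elasticsearch.index.query.QueryBuilders;
import org.springframework.stereotype.Service;

@Service
@AllArgsConstructor
public class EsHealthChecker {

    private final RestHighLevelClient client;

    /**
     * Quick sanity check: is the cluster reachable, and how many login documents are there?
     */
    @SneakyThrows
    public long countLoginDocs() {
        // ping() returns true when the cluster responds
        if (!client.ping(RequestOptions.DEFAULT)) {
            throw new IllegalStateException("ElasticSearch is not reachable");
        }
        // count all documents matching the (illustrative) login index pattern
        CountRequest countRequest = new CountRequest("youlai-auth-login-*");
        countRequest.query(QueryBuilders.matchAllQuery());
        return client.count(countRequest, RequestOptions.DEFAULT).getCount();
    }
}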

4. Wrapping the RestHighLevelClient API

Code location: common-elasticsearch#ElasticSearchService

  • Only the handful of methods needed for the requirements are wrapped for now: count, distinct count, date-histogram aggregation, list query, paged query and delete; more can be added later…
@Service
@AllArgsConstructor
public class ElasticSearchService {

    private RestHighLevelClient client;

    /**
     * Count
     */
    @SneakyThrows
    public long count(QueryBuilder queryBuilder, String... indices) {
        // Build the request
        CountRequest countRequest = new CountRequest(indices);
        countRequest.query(queryBuilder);

        // Execute the request
        CountResponse countResponse = client.count(countRequest, RequestOptions.DEFAULT);
        long count = countResponse.getCount();
        return count;
    }

    /**
     * Distinct count
     */
    @SneakyThrows
    public long countDistinct(QueryBuilder queryBuilder, String field, String... indices) {
        String distinctKey = "distinctKey"; // custom aggregation key; must match when reading the result below

        // Build the cardinality aggregation: the number of distinct values of the field
        CardinalityAggregationBuilder aggregationBuilder = AggregationBuilders
                .cardinality(distinctKey).field(field);

        // Build the search source
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        searchSourceBuilder.query(queryBuilder).aggregation(aggregationBuilder);

        // Build the request
        SearchRequest searchRequest = new SearchRequest(indices);
        searchRequest.source(searchSourceBuilder);

        // Execute the request
        SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
        ParsedCardinality result = searchResponse.getAggregations().get(distinctKey);
        return result.getValue();
    }

    /**
     * Date-histogram aggregation
     *
     * @param queryBuilder query condition
     * @param field        field to aggregate on, e.g. the date field of the login log
     * @param interval     bucket interval, e.g. 1 day or 1 week
     * @param indices      index names
     * @return bucket key (formatted date) -> document count
     */
    @SneakyThrows
    public Map<String, Long> dateHistogram(QueryBuilder queryBuilder, String field, DateHistogramInterval interval, String... indices) {

        String dateHistogramKey = "dateHistogramKey"; // custom aggregation key; must match when reading the result below

        // Build the aggregation
        AggregationBuilder aggregationBuilder = AggregationBuilders
                .dateHistogram(dateHistogramKey) // aggregation name, must match the lookup below
                .field(field) // date field name
                .format("yyyy-MM-dd") // format of the bucket keys
                .calendarInterval(interval) // calendar interval, e.g. 1s -> 1 second, 1d -> 1 day, 1w -> 1 week, 1M -> 1 month, 1y -> 1 year ...
                .minDocCount(0); // minimum document count; buckets below this are dropped

        // Build the search source
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        searchSourceBuilder
                .query(queryBuilder)
                .aggregation(aggregationBuilder)
                .size(0);

        // Build the SearchRequest
        SearchRequest searchRequest = new SearchRequest(indices);
        searchRequest.source(searchSourceBuilder);

        searchRequest.indicesOptions(
                IndicesOptions.fromOptions(
                        true, // ignore unavailable indices
                        true, // allow the request even if no indices match
                        true, // expand wildcards to open indices
                        false // do not expand wildcards to closed indices
                ));

        // Execute the request
        SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);

        // Process the result
        ParsedDateHistogram dateHistogram = searchResponse.getAggregations().get(dateHistogramKey);

        Iterator<? extends Histogram.Bucket> iterator = dateHistogram.getBuckets().iterator();

        Map<String, Long> map = new HashMap<>();
        while (iterator.hasNext()) {
            Histogram.Bucket bucket = iterator.next();
            map.put(bucket.getKeyAsString(), bucket.getDocCount());
        }
        return map;
    }

    /**
     * List query
     */
    @SneakyThrows
    public <T extends BaseDocument> List<T> search(QueryBuilder queryBuilder, Class<T> clazz, String... indices) {
        List<T> list = this.search(queryBuilder, null, 1, ESConstants.DEFAULT_PAGE_SIZE, clazz, indices);
        return list;
    }

    /**
     * Paged list query
     */
    @SneakyThrows
    public <T extends BaseDocument> List<T> search(QueryBuilder queryBuilder, SortBuilder sortBuilder, Integer page, Integer size, Class<T> clazz, String... indices) {
        // Build the SearchSourceBuilder
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        searchSourceBuilder.query(queryBuilder);
        searchSourceBuilder.sort(sortBuilder);
        searchSourceBuilder.from((page - 1) * size);
        searchSourceBuilder.size(size);
        // Build the SearchRequest
        SearchRequest searchRequest = new SearchRequest(indices);
        searchRequest.source(searchSourceBuilder);
        // Execute the request
        SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
        SearchHits hits = searchResponse.getHits();
        SearchHit[] searchHits = hits.getHits();

        List<T> list = CollectionUtil.newArrayList();
        for (SearchHit hit : searchHits) {
            T t = JSONUtil.toBean(hit.getSourceAsString(), clazz);
            t.setId(hit.getId()); // document id
            t.setIndex(hit.getIndex()); // index the hit came from
            list.add(t);
        }
        return list;
    }

    /**
     * Delete
     */
    @SneakyThrows
    public boolean deleteById(String id, String index) {
        DeleteRequest deleteRequest = new DeleteRequest(index,id);
        DeleteResponse deleteResponse = client.delete(deleteRequest, RequestOptions.DEFAULT);
        return true;
    }
}
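
The generic search methods above depend on two document classes that are not shown in this article. Their real definitions live in the repository; the sketch below is only my reconstruction from how search() and LoginRecordController use them, so any field beyond id/index/token/status is an assumption:

import lombok.Data;
import lombok.EqualsAndHashCode;

/**
 * Base class for documents read back from ElasticSearch;
 * search() fills in the document id and the index the hit came from.
 */
@Data
class BaseDocument {
    private String id;    // _id of the hit
    private String index; // index name of the hit
}

/**
 * Login log document (sketch). The field names mirror the JSON keys
 * defined in the logback-spring.xml pattern above.
 */
@Data
@EqualsAndHashCode(callSuper = true)
class LoginRecord extends BaseDocument {
    private String username;
    private String clientIP;
    private String region;
    private String token;
    private String loginTime;
    private String elapsedTime; // sent as a string in the log JSON
    private Integer status;     // session status filled in by LoginRecordController
}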

VI. Back-end APIs

With the high-level client RestHighLevelClient integrated into Spring Boot and a few methods wrapped, the next step is to expose the endpoints the front end needs: statistics, paged login-record queries, and delete-by-id.

1. Dashboard

The dashboard needs today's visitor IP count, the all-time visitor IP count, and per-day login counts for the last ten days. The code is as follows:

Code location: youlai-admin#DashboardController

@Api(tags = "首页控制台")
@RestController
@RequestMapping("/api.admin/v1/dashboard")
@Slf4j
@AllArgsConstructor
public class DashboardController {

    ElasticSearchService elasticSearchService;

    @ApiOperation(value = "控制台数据")
    @GetMapping
    public Result data() {
        Map<String, Object> data = new HashMap<>();

        // Today's distinct visitor IPs
        long todayIpCount = getTodayIpCount();
        data.put("todayIpCount", todayIpCount);

        // Total distinct visitor IPs
        long totalIpCount = getTotalIpCount();
        data.put("totalIpCount", totalIpCount);

        // Login counts
        int days = 10; // number of days to cover
        Map loginCount = getLoginCount(days);
        data.put("loginCount", loginCount);

        return Result.success(data);
    }

    
    private long getTodayIpCount() {
        String date = LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd"));
        TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("date", date);
        String indexName = ESConstants.LOGIN_INDEX_PREFIX + date; // name of today's index

        // Distinct count on clientIP (why the .keyword suffix? see the note below this class)
        long todayIpCount = elasticSearchService.countDistinct(termQueryBuilder, "clientIP.keyword", indexName);
        return todayIpCount;
    }

    private long getTotalIpCount() {
        long totalIpCount = elasticSearchService.countDistinct(null, "clientIP.keyword", ESConstants.LOGIN_INDEX_PATTERN);
        return totalIpCount;
    }

    private Map getLoginCount(int days) {

        LocalDateTime now = LocalDateTime.now();
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd");

        String startDate = now.plusDays(-days).format(formatter);
        String endDate = now.format(formatter);

        String[] indices = new String[days]; // ES index names to query
        String[] xData = new String[days]; // x-axis labels for the bar chart
        for (int i = 0; i < days; i++) {
            String date = now.plusDays(-i).format(formatter);
            xData[i] = date;
            indices[i] = ESConstants.LOGIN_INDEX_PREFIX + date;
        }

        // Query condition: restrict to the date range
        RangeQueryBuilder rangeQueryBuilder = QueryBuilders.rangeQuery("date").from(startDate).to(endDate);
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery()
                .must(rangeQueryBuilder);


        // Total login counts per day
        Map<String, Long> totalCountMap = elasticSearchService.dateHistogram(
                boolQueryBuilder,
                "date", // aggregate on the custom date field added in logback-spring.xml
                DateHistogramInterval.days(1),
                indices);

        // Counts for the current client's IP
        HttpServletRequest request = RequestUtils.getRequest();
        String clientIP = IPUtils.getIpAddr(request);

        boolQueryBuilder.must(QueryBuilders.termQuery("clientIP", clientIP));
        Map<String, Long> myCountMap = elasticSearchService.dateHistogram(boolQueryBuilder, "date", DateHistogramInterval.days(1), indices);


        // Assemble the ECharts data
        Long[] totalCount = new Long[days];
        Long[] myCount = new Long[days];

        Arrays.sort(xData); // ascending by default
        for (int i = 0; i < days; i++) {
            String key = xData[i];
            totalCount[i] = Convert.toLong(totalCountMap.get(key), 0L);
            myCount[i] = Convert.toLong(myCountMap.get(key), 0L);
        }
        Map<String, Object> map = new HashMap<>(4);

        map.put("xData", xData); // x-axis labels
        map.put("totalCount", totalCount); // total logins
        map.put("myCount", myCount); // my logins

        return map;
    }
}
  • Why does the clientIP aggregation field need the .keyword suffix? Dynamically mapped string fields are indexed as analyzed text, which cannot be used for terms or cardinality aggregations; ElasticSearch therefore also creates a clientIP.keyword sub-field that stores the raw, unanalyzed value, and that is the field the aggregation has to target.

2. Paged login-record query API

Code location: youlai-admin#LoginRecordController

@Api(tags = "登录记录")
@RestController
@RequestMapping("/api.admin/v1/login_records")
@Slf4j
@AllArgsConstructor
public class LoginRecordController {

    ElasticSearchService elasticSearchService;

    ITokenService tokenService;

    @ApiOperation(value = "列表分页")
    @ApiImplicitParams({
            @ApiImplicitParam(name = "page", value = "页码", defaultValue = "1", paramType = "query", dataType = "Long"),
            @ApiImplicitParam(name = "limit", value = "每页数量", defaultValue = "10", paramType = "query", dataType = "Long"),
            @ApiImplicitParam(name = "startDate", value = "开始日期", paramType = "query", dataType = "String"),
            @ApiImplicitParam(name = "endDate", value = "结束日期", paramType = "query", dataType = "String"),
            @ApiImplicitParam(name = "clientIP", value = "客户端IP", paramType = "query", dataType = "String")
    })
    @GetMapping
    public Result list(
            Integer page,
            Integer limit,
            String startDate,
            String endDate,
            String clientIP
    ) {

        // Date range
        RangeQueryBuilder rangeQueryBuilder = QueryBuilders.rangeQuery("date");

        if (StrUtil.isNotBlank(startDate)) {
            rangeQueryBuilder.from(startDate);
        }
        if (StrUtil.isNotBlank(endDate)) {
            rangeQueryBuilder.to(endDate);
        }

        BoolQueryBuilder queryBuilder = QueryBuilders.boolQuery().must(rangeQueryBuilder);

        if (StrUtil.isNotBlank(clientIP)) {
            queryBuilder.must(QueryBuilders.wildcardQuery("clientIP", "*" + clientIP + "*"));
        }
        // Total record count
        long count = elasticSearchService.count(queryBuilder, ESConstants.LOGIN_INDEX_PATTERN);

        // Sort by timestamp, newest first
        FieldSortBuilder sortBuilder = new FieldSortBuilder("@timestamp").order(SortOrder.DESC);

        // Paged query
        List<LoginRecord> list = elasticSearchService.search(queryBuilder, sortBuilder, page, limit, LoginRecord.class, ESConstants.LOGIN_INDEX_PATTERN);

        // Look up the session status of each record's token
        list.forEach(item -> {
            String token = item.getToken();
            int tokenStatus = 0;
            if (StrUtil.isNotBlank(token)) {
                tokenStatus = tokenService.getTokenStatus(item.getToken());
            }
            item.setStatus(tokenStatus);
        });

        return Result.success(list, count);
    }


    @ApiOperation(value = "删除登录记录")
    @ApiImplicitParam(name = "ids", value = "id集合", required = true, paramType = "query", dataType = "String")
    @DeleteMapping
    public Result delete(@RequestBody List<BaseDocument> documents) {
        documents.forEach(document -> elasticSearchService.deleteById(document.getId(), document.getIndex()));
        return Result.success();
    }

}

3. Force-offline API

Code location: youlai-admin#TokenController

  • The JWT is still added to a blacklist here, and the gateway then rejects requests that carry a blacklisted JWT (a sketch of that gateway check follows the service code below)
@Api(tags = "令牌接口")
@RestController
@RequestMapping("/api.admin/v1/tokens")
@Slf4j
@AllArgsConstructor
public class TokenController {

    ITokenService tokenService;

    @ApiOperation(value = "强制下线")
    @ApiImplicitParam(name = "token", value = "访问令牌", required = true, paramType = "query", dataType = "String")
    @PostMapping("/{token}/_invalidate")
    @SneakyThrows
    public Result invalidateToken(@PathVariable String token) {
        boolean status = tokenService.invalidateToken(token);
        return Result.judge(status);
    }

}

Code location: youlai-admin#TokenServiceImpl

@Override
@SneakyThrows
public boolean invalidateToken(String token) {

    JWTPayload payload = JWTUtils.getJWTPayload(token);

    // Check whether the token has already expired
    long currentTimeSeconds = System.currentTimeMillis() / 1000;
    Long exp = payload.getExp();
    if (exp < currentTimeSeconds) { // already expired, no need to blacklist it
        return true;
    }
    // Add it to the blacklist so it can no longer be used
    redisTemplate.opsForValue().set(AuthConstants.TOKEN_BLACKLIST_PREFIX + payload.getJti(), null, (exp - currentTimeSeconds), TimeUnit.SECONDS);
    return true;
}
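
On the gateway side, the check is roughly the reverse: read the jti from the incoming JWT and reject the request if a matching blacklist key exists in Redis. The project does this in its own gateway filter; the class below is only an illustrative sketch of the idea with Spring Cloud Gateway, not the actual youlai-gateway code (it also assumes the project's JWTUtils and AuthConstants helpers are on the classpath):

import lombok.AllArgsConstructor;
import lombok.SneakyThrows;
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.core.Ordered;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

@Component
@AllArgsConstructor
public class BlacklistTokenFilter implements GlobalFilter, Ordered {

    private final StringRedisTemplate redisTemplate;

    @Override
    @SneakyThrows
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        String authorization = exchange.getRequest().getHeaders().getFirst("Authorization");
        if (authorization == null || !authorization.startsWith("Bearer ")) {
            return chain.filter(exchange); // no token, let the security layer decide
        }
        String token = authorization.substring("Bearer ".length());
        String jti = JWTUtils.getJWTPayload(token).getJti(); // same helper used in TokenServiceImpl

        // A blacklist key means the token was invalidated by the force-offline endpoint
        Boolean blacklisted = redisTemplate.hasKey(AuthConstants.TOKEN_BLACKLIST_PREFIX + jti);
        if (Boolean.TRUE.equals(blacklisted)) {
            exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
            return exchange.getResponse().setComplete();
        }
        return chain.filter(exchange);
    }

    @Override
    public int getOrder() {
        return -100; // run before the routing filters
    }
}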

VII. Front-end pages

The front-end source lives in youlai-mall-admin; only the page paths are listed below. If you are interested, clone the project and check the source and the result locally.

Code location: src/views/dashboard/common/components/LoginCountChart.vue

  • Login counts, today's visitor IP count, total visitor IP count

Code location: src/views/admin/record/login/index.vue

  • Login records and forced logout (the demo shows me forcing my own session offline)

VIII. Troubleshooting

1. The recorded login time is 8 hours behind the actual time

The project is deployed with Docker, and the openjdk base image it depends on uses the UTC time zone, which is 8 hours behind Beijing time. Running the following commands to change the container's time zone fixes it:

docker exec -it youlai-auth /bin/sh
echo "Asia/Shanghai" > /etc/timezone
docker restart youlai-auth

2. How do I get the user's real IP behind an Nginx reverse proxy?

Add the following headers to the proxy configuration:

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
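
With those headers in place, the application reads the real client address from X-Real-IP / X-Forwarded-For instead of the Nginx container's address. The project uses its own IPUtils.getIpAddr for this; the helper below is only a simplified sketch of what such a method typically does, not the actual implementation:

import javax.servlet.http.HttpServletRequest;

public class ClientIpResolver {

    /**
     * Resolve the client IP, preferring the headers set by the reverse proxy.
     * X-Forwarded-For may be a comma-separated chain; the first entry is the original client.
     */
    public static String resolve(HttpServletRequest request) {
        String ip = request.getHeader("X-Real-IP");
        if (ip == null || ip.isEmpty() || "unknown".equalsIgnoreCase(ip)) {
            ip = request.getHeader("X-Forwarded-For");
            if (ip != null && ip.contains(",")) {
                ip = ip.split(",")[0].trim(); // first hop in the chain
            }
        }
        if (ip == null || ip.isEmpty() || "unknown".equalsIgnoreCase(ip)) {
            ip = request.getRemoteAddr(); // direct connection, no proxy in between
        }
        return ip;
    }
}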

IX. Searching the indices in Kibana

In the Logstash output we specified the index name "%{[project]}-%{[action]}-%{+YYYY-MM-dd}".

In logback-spring.xml, project is set to youlai-auth and action to login, so indices like youlai-auth-login-2021-03-25 are generated, with the date part changing daily. In the Kibana UI we then create a youlai-auth-login-* index pattern to search the logs.

  • Create the youlai-auth-login-* index pattern

  • Search the login logs by index pattern and date range

X. Conclusion

That completes the whole walkthrough: we set up the ELK environment, used a Spring AOP aspect to collect login logs at a single, well-defined point, and then integrated ElasticSearch's high-level Java client RestHighLevelClient with Spring Boot to aggregate and count the collected login logs, and to use the access token recorded in those logs for stateless JWT session management, forcing users offline by invalidating their JWT. Only the key code is shown in the article; utilities such as the IP-to-region lookup are not covered for reasons of space, so please refer to the complete source code in the repositories linked at the top.

I hope this article helps you get started with ElasticSearch quickly. If you have any questions, feel free to leave a comment or add me on WeChat (haoxianrui).

Appendix

Everyone is welcome to join the youlai open-source project chat group to study the Spring Cloud microservice ecosystem, distributed systems, Docker, K8S, Vue, element-ui, uni-app, WeChat Mini Program and other full-stack technologies together.

Finally, the previous articles in the youlai project series:

Back-end microservices

  1. Spring Cloud in Practice | Part 1: Setting up the Nacos server on Windows
  2. Spring Cloud in Practice | Part 2: Integrating Nacos as the registry
  3. Spring Cloud in Practice | Part 3: Integrating Nacos as the configuration center
  4. Spring Cloud in Practice | Part 4: Integrating Gateway as the API gateway
  5. Spring Cloud in Practice | Part 5: Integrating OpenFeign for calls between microservices
  6. Spring Cloud in Practice | Part 6: Unified authentication and authorization for microservices with Spring Cloud Gateway + Spring Security OAuth2 + JWT
  7. Spring Cloud in Practice | Part 7: Invalidating JWTs on logout under a unified Spring Cloud Gateway + Spring Security OAuth2 authentication platform
  8. Spring Cloud in Practice | Part 8: Seamless JWT renewal in a Spring Cloud + Spring Security OAuth2 + Vue front/back-end separated setup
  9. Spring Cloud in Practice | Part 9: Custom exception handling for unified authentication on the Spring Security OAuth2 authorization server
  10. Spring Cloud in Practice | Part 10: Spring Cloud + Nacos with Seata 1.4.1 for distributed transactions in a microservice architecture, a hurdle you have to clear on the way to the next level
  11. Spring Cloud in Practice | Part 11: Fine-grained control of RESTful API and button permissions with the Spring Cloud Gateway

Admin front end

  1. vue-element-admin in Practice | Part 1: Removing the mock layer and wiring up microservice APIs to build a Spring Cloud + Vue front/back-end separated admin platform
  2. vue-element-admin in Practice | Part 2: Connecting to the back end with minimal changes to load menus dynamically by permission

WeChat Mini Program

  1. vue + uni-app Mall in Practice | Part 1: Building a mall WeChat Mini Program from 0 to 1, with seamless Spring Cloud OAuth2 authentication and login

Deployment

  1. Docker in Practice | Part 1: Installing Docker on Linux
  2. Docker in Practice | Part 2: Deploying nacos-server:1.4.0 with Docker
  3. Docker in Practice | Part 3: One-click packaging and deployment of microservices from IDEA with the Docker plugin, a set-it-once technique worth trying
  4. Docker in Practice | Part 4: Installing Nginx with Docker to deploy a project built on vue-element-admin
  5. Docker in Practice | Part 5: Enabling TLS for Docker to close the security hole of exposing port 2375, lessons from having three cloud hosts compromised