ElasticSearch in Action

ES: https://www.elastic.co/cn/

Kibana: https://www.elastic.co/cn/kibana

Resources: https://pan.baidu.com/s/1qmXNZBVGrcp0fuo9bBqrRA (extraction code: 6zpo), shared via the 狂神说 WeChat public account

A JD.com-style product search (with highlighting)

1. Project setup (Spring Boot)

The project creation steps are omitted.

Directory structure:

2. Basic coding

① Import dependencies

    <properties>
        <java.version>1.8</java.version>
        <elasticsearch.version>7.6.1</elasticsearch.version>
    </properties>

    <dependencies>
        <!-- jsoup, for parsing HTML pages (for crawling video and other media, tools like tika are worth a look) -->
        <dependency>
            <groupId>org.jsoup</groupId>
            <artifactId>jsoup</artifactId>
            <version>1.10.2</version>
        </dependency>
        <!-- fastjson -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.70</version>
        </dependency>
        <!-- ElasticSearch -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
        </dependency>
        <!-- thymeleaf -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-thymeleaf</artifactId>
        </dependency>
        <!-- web -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <!-- devtools, for hot reloading -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
            <optional>true</optional>
        </dependency>
        <!-- configuration processor -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-configuration-processor</artifactId>
            <optional>true</optional>
        </dependency>
        <!-- lombok (requires the IDE plugin) -->
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <!-- test -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

② Import the front-end static resources

③ Write the application.properties configuration file

    # Change the port to avoid conflicts
    server.port=9999
    # Disable the thymeleaf cache
    spring.thymeleaf.cache=false

④ Test the controller and view

    @Controller
    public class IndexController {

        @GetMapping({"/", "index"})
        public String index() {
            return "index";
        }
    }

Visit localhost:9999

At this point you can jump ahead to section 3, write the crawler first, and then come back here.

⑤ Write the configuration class

    @Configuration
    public class ElasticSearchConfig {

        @Bean
        public RestHighLevelClient restHighLevelClient() {
            RestHighLevelClient client = new RestHighLevelClient(
                    RestClient.builder(
                            new HttpHost("127.0.0.1", 9200, "http")
                    )
            );
            return client;
        }
    }
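
The host and port are hard-coded above. If you prefer, they can be read from application.properties instead; a minimal sketch (the property names elasticsearch.host and elasticsearch.port are my own, not from the original notes):

    import org.apache.http.HttpHost;
    import org.elasticsearch.client.RestClient;
    import org.elasticsearch.client.RestHighLevelClient;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class ElasticSearchConfig {

        // Hypothetical properties, e.g. elasticsearch.host=127.0.0.1 and elasticsearch.port=9200,
        // with defaults matching the hard-coded values above
        @Value("${elasticsearch.host:127.0.0.1}")
        private String host;

        @Value("${elasticsearch.port:9200}")
        private int port;

        @Bean
        public RestHighLevelClient restHighLevelClient() {
            return new RestHighLevelClient(RestClient.builder(new HttpHost(host, port, "http")));
        }
    }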

⑥ Write the service

Because the data comes from the crawler, we skip the DAO layer here and will not write separate interfaces for the classes below; in real development you should strictly define interfaces first (a minimal interface sketch follows below).
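
For reference only, a sketch of what such an interface could look like (the interface name and method signatures are illustrative and simply mirror the implementation that follows):

    import java.io.IOException;
    import java.util.List;
    import java.util.Map;

    // Hypothetical interface; these notes skip this step and go straight to the implementation.
    public interface IContentService {

        // Crawl the JD search page for the keyword and bulk-index the results into ES
        Boolean parseContent(String keyword) throws IOException;

        // Paged keyword query against the "jd_goods" index
        List<Map<String, Object>> search(String keyword, Integer pageIndex, Integer pageSize) throws IOException;
    }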

ContentService

    @Service
    public class ContentService {

        @Autowired
        private RestHighLevelClient restHighLevelClient;

        // 1. Parse the crawled data and put it into the ES index
        public Boolean parseContent(String keyword) throws IOException {
            // Fetch the content
            List<Content> contents = HtmlParseUtil.parseJD(keyword);
            // Bulk-index the content into ES
            BulkRequest bulkRequest = new BulkRequest();
            bulkRequest.timeout("2m"); // adjust according to the actual data volume
            for (int i = 0; i < contents.size(); i++) {
                bulkRequest.add(
                        new IndexRequest("jd_goods")
                                .id("" + (i + 1))
                                .source(JSON.toJSONString(contents.get(i)), XContentType.JSON)
                );
            }
            BulkResponse bulk = restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
            // Note: do not close the shared client here; it is a singleton bean and will be reused by later requests
            return !bulk.hasFailures();
        }

        // 2. Paged query by keyword
        public List<Map<String, Object>> search(String keyword, Integer pageIndex, Integer pageSize) throws IOException {
            if (pageIndex < 0) {
                pageIndex = 0;
            }
            SearchRequest searchRequest = new SearchRequest("jd_goods");
            // Build the search source
            SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
            // Condition: exact (term) query of the "name" field by keyword
            TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("name", keyword);
            searchSourceBuilder.query(termQueryBuilder);
            searchSourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS)); // 60s
            // Paging
            searchSourceBuilder.from(pageIndex);
            searchSourceBuilder.size(pageSize);
            // Highlighting
            // .... (added in section 4)
            // Put the search source into the search request
            searchRequest.source(searchSourceBuilder);
            // Execute the query and get the response
            SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
            // Parse the results
            SearchHits hits = searchResponse.getHits();
            List<Map<String, Object>> results = new ArrayList<>();
            for (SearchHit documentFields : hits.getHits()) {
                Map<String, Object> sourceAsMap = documentFields.getSourceAsMap();
                results.add(sourceAsMap);
            }
            // Return the query results
            return results;
        }
    }

⑦ Write the controller

    @Controller
    public class ContentController {

        @Autowired
        private ContentService contentService;

        @ResponseBody
        @GetMapping("/parse/{keyword}")
        public Boolean parse(@PathVariable("keyword") String keyword) throws IOException {
            return contentService.parseContent(keyword);
        }

        @ResponseBody
        @GetMapping("/search/{keyword}/{pageIndex}/{pageSize}")
        public List<Map<String, Object>> parse(@PathVariable("keyword") String keyword,
                                               @PathVariable("pageIndex") Integer pageIndex,
                                               @PathVariable("pageSize") Integer pageSize) throws IOException {
            return contentService.search(keyword, pageIndex, pageSize);
        }
    }

⑧ Test results

1. Parse the data and put it into the ES index

2. Paged query by keyword
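
To exercise both methods, a rough test sketch (assuming JUnit 5 from spring-boot-starter-test, a running local ES instance, and network access to jd.com):

    import java.io.IOException;

    import org.junit.jupiter.api.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.context.SpringBootTest;

    @SpringBootTest
    class ContentServiceTest {

        @Autowired
        private ContentService contentService;

        @Test
        void parseAndSearch() throws IOException {
            // 1. Crawl the JD result page for "java" and bulk-index it into "jd_goods"
            System.out.println(contentService.parseContent("java"));
            // 2. Query the first page (10 hits) for the same keyword
            contentService.search("java", 0, 10).forEach(System.out::println);
        }
    }

Equivalently, with the application running you can open localhost:9999/parse/java and then localhost:9999/search/java/0/10 in a browser.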

3. Crawler (jsoup)

Data can be obtained from: a database, a message queue, a crawler, …

① Open the JD search page and analyze it

    http://search.jd.com/search?keyword=java

The page looks like this:

Inspect the page elements.

The id of the product list: J_goodsList

Target elements: img, price, name

② Crawl the data (fetch the page returned by the request and filter out what we can use)

Create HtmlParseUtil and write a first, simple version:
    public class HtmlParseUtil {

        public static void main(String[] args) throws IOException {
            /// You need to be online for this to work
            // Request URL (the scheme is required, otherwise new URL(...) fails)
            String url = "https://search.jd.com/search?keyword=java";
            // 1. Parse the page (jsoup returns a Document object, the same model as the browser DOM)
            Document document = Jsoup.parse(new URL(url), 30000);
            // Everything JS can do on document can be done here as well
            // 2. Get the element by id
            Element j_goodsList = document.getElementById("J_goodsList");
            // 3. Get every li inside the J_goodsList ul
            Elements lis = j_goodsList.getElementsByTag("li");
            // 4. Get img, price and name from each li
            for (Element li : lis) {
                String img = li.getElementsByTag("img").eq(0).attr("src"); // first image under the li
                String name = li.getElementsByClass("p-name").eq(0).text();
                String price = li.getElementsByClass("p-price").eq(0).text();
                System.out.println("=======================");
                System.out.println("img : " + img);
                System.out.println("name : " + name);
                System.out.println("price : " + price);
            }
        }
    }

Run result: the img values come back empty.

Why is that?

Sites with a lot of images usually lazy-load all of them.

    // Print the tag content
    Elements lis = j_goodsList.getElementsByTag("li");
    System.out.println(lis);

Printing all the li tags shows that the img tags have no src attribute at all; the image URL is instead carried in the data-lazy-img attribute.

Rewrite HtmlParseUtil:
  • Read the image URL from the data-lazy-img attribute instead of src

  • Combine the fields into an entity class, defined as follows

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    public class Content implements Serializable {

        private static final long serialVersionUID = -8049497962627482693L;

        private String name;
        private String img;
        private String price;
    }
  • Wrap the logic in a method

    public class HtmlParseUtil {

        public static void main(String[] args) throws IOException {
            System.out.println(parseJD("java"));
        }

        public static List<Content> parseJD(String keyword) throws IOException {
            /// You need to be online for this to work
            // Request URL (the scheme is required, otherwise new URL(...) fails)
            String url = "https://search.jd.com/search?keyword=" + keyword;
            // 1. Parse the page (jsoup returns a Document object, the same model as the browser DOM)
            Document document = Jsoup.parse(new URL(url), 30000);
            // Everything JS can do on document can be done here as well
            // 2. Get the element by id
            Element j_goodsList = document.getElementById("J_goodsList");
            // 3. Get every li inside the J_goodsList ul
            Elements lis = j_goodsList.getElementsByTag("li");
            // System.out.println(lis);
            // 4. Get img, price and name from each li
            // A list to hold the content of every li
            List<Content> contents = new ArrayList<Content>();
            for (Element li : lis) {
                // The site lazy-loads its images, so read data-lazy-img instead of src
                String img = li.getElementsByTag("img").eq(0).attr("data-lazy-img"); // first image under the li
                String name = li.getElementsByClass("p-name").eq(0).text();
                String price = li.getElementsByClass("p-price").eq(0).text();
                // Wrap the fields in an object
                Content content = new Content(name, img, price);
                // Add it to the list
                contents.add(content);
            }
            // System.out.println(contents);
            // 5. Return the list
            return contents;
        }
    }
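
If the crawl still comes back empty after the data-lazy-img change, jd.com may be rejecting requests that do not look like a browser. As a possible workaround (a sketch, not from the original notes; the header value is just an example), the fetch step can be switched to Jsoup.connect, which allows setting a User-Agent and other request headers:

    import java.io.IOException;
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;

    public class JsoupFetchSketch {

        // Fetch the JD search page with a browser-like User-Agent header
        static Document fetch(String keyword) throws IOException {
            return Jsoup.connect("https://search.jd.com/search?keyword=" + keyword)
                    .userAgent("Mozilla/5.0")   // example value; pretend to be a browser
                    .timeout(30000)             // 30s, same timeout as before
                    .get();
        }
    }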

Result:

4. Search highlighting

Building on section 3, add the following.

①ContentService

    // 3. Highlighted query, building on method 2
    public List<Map<String, Object>> highlightSearch(String keyword, Integer pageIndex, Integer pageSize) throws IOException {
        SearchRequest searchRequest = new SearchRequest("jd_goods");
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        // Exact (term) query condition
        TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("name", keyword);
        searchSourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS));
        searchSourceBuilder.query(termQueryBuilder);
        // Paging
        searchSourceBuilder.from(pageIndex);
        searchSourceBuilder.size(pageSize);
        // Highlighting =========
        HighlightBuilder highlightBuilder = new HighlightBuilder();
        highlightBuilder.field("name");
        highlightBuilder.preTags("<span style='color:red'>");
        highlightBuilder.postTags("</span>");
        searchSourceBuilder.highlighter(highlightBuilder);
        // Execute the query
        searchRequest.source(searchSourceBuilder);
        SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
        // Parse the results ==========
        SearchHits hits = searchResponse.getHits();
        List<Map<String, Object>> results = new ArrayList<>();
        for (SearchHit documentFields : hits.getHits()) {
            // Overwrite the original field value with the highlighted one
            Map<String, Object> sourceAsMap = documentFields.getSourceAsMap();
            // Highlighted fields
            Map<String, HighlightField> highlightFields = documentFields.getHighlightFields();
            HighlightField name = highlightFields.get("name");
            // Replace
            if (name != null) {
                Text[] fragments = name.fragments();
                StringBuilder new_name = new StringBuilder();
                for (Text text : fragments) {
                    new_name.append(text);
                }
                sourceAsMap.put("name", new_name.toString());
            }
            results.add(sourceAsMap);
        }
        return results;
    }

②ContentController

    @ResponseBody
    @GetMapping("/h_search/{keyword}/{pageIndex}/{pageSize}")
    public List<Map<String, Object>> highlightParse(@PathVariable("keyword") String keyword,
                                                    @PathVariable("pageIndex") Integer pageIndex,
                                                    @PathVariable("pageSize") Integer pageSize) throws IOException {
        return contentService.highlightSearch(keyword, pageIndex, pageSize);
    }

③ Result

5. Front-end/back-end separation (a simple use of Vue)

Note: the data endpoints in ContentController keep their @ResponseBody annotation (or the whole class can be marked @RestController); they have to return JSON to axios rather than a view name.

① Download and include vue.min.js and axios.min.js

If Node.js is installed, they can be fetched as follows; otherwise download them from the resource link at the end of this article.

    npm install vue
    npm install axios

② Modify the static page

Include the JS files:

    <script th:src="@{/js/vue.min.js}"></script>
    <script th:src="@{/js/axios.min.js}"></script>

The modified index.html:
    <!DOCTYPE html>
    <html xmlns:th="http://www.thymeleaf.org">
    <head>
        <meta charset="utf-8"/>
        <title>狂神说Java-ES仿京东实战</title>
        <link rel="stylesheet" th:href="@{/css/style.css}"/>
        <script th:src="@{/js/jquery.min.js}"></script>
    </head>
    <body class="pg">
    <div class="page">
        <div id="app" class=" mallist tmall- page-not-market ">
            <!-- Header search bar -->
            <div id="header" class=" header-list-app">
                <div class="headerLayout">
                    <div class="headerCon ">
                        <!-- Logo -->
                        <h1 id="mallLogo">
                            <img th:src="@{/images/jdlogo.png}" alt="">
                        </h1>
                        <div class="header-extra">
                            <!-- Search box -->
                            <div id="mallSearch" class="mall-search">
                                <form name="searchTop" class="mallSearch-form clearfix">
                                    <fieldset>
                                        <legend>天猫搜索</legend>
                                        <div class="mallSearch-input clearfix">
                                            <div class="s-combobox" id="s-combobox-685">
                                                <div class="s-combobox-input-wrap">
                                                    <input v-model="keyword" type="text" autocomplete="off" id="mq"
                                                           class="s-combobox-input" aria-haspopup="true">
                                                </div>
                                            </div>
                                            <button type="submit" @click.prevent="searchKey" id="searchbtn">搜索</button>
                                        </div>
                                    </fieldset>
                                </form>
                                <ul class="relKeyTop">
                                    <li><a>狂神说Java</a></li>
                                    <li><a>狂神说前端</a></li>
                                    <li><a>狂神说Linux</a></li>
                                    <li><a>狂神说大数据</a></li>
                                    <li><a>狂神聊理财</a></li>
                                </ul>
                            </div>
                        </div>
                    </div>
                </div>
            </div>
            <!-- Product listing -->
            <div id="content">
                <div class="main">
                    <!-- Brand filter -->
                    <form class="navAttrsForm">
                        <div class="attrs j_NavAttrs" style="display:block">
                            <div class="brandAttr j_nav_brand">
                                <div class="j_Brand attr">
                                    <div class="attrKey">
                                        品牌
                                    </div>
                                    <div class="attrValues">
                                        <ul class="av-collapse row-2">
                                            <li><a href="#"> 狂神说 </a></li>
                                            <li><a href="#"> Java </a></li>
                                        </ul>
                                    </div>
                                </div>
                            </div>
                        </div>
                    </form>
                    <!-- Sort options -->
                    <div class="filter clearfix">
                        <a class="fSort fSort-cur">综合<i class="f-ico-arrow-d"></i></a>
                        <a class="fSort">人气<i class="f-ico-arrow-d"></i></a>
                        <a class="fSort">新品<i class="f-ico-arrow-d"></i></a>
                        <a class="fSort">销量<i class="f-ico-arrow-d"></i></a>
                        <a class="fSort">价格<i class="f-ico-triangle-mt"></i><i class="f-ico-triangle-mb"></i></a>
                    </div>
                    <!-- Product cards -->
                    <div class="view grid-nosku">
                        <div class="product" v-for="result in results">
                            <div class="product-iWrap">
                                <!-- Cover image -->
                                <div class="productImg-wrap">
                                    <a class="productImg">
                                        <img :src="result.img">
                                    </a>
                                </div>
                                <!-- Price -->
                                <p class="productPrice">
                                    <em v-text="result.price"></em>
                                </p>
                                <!-- Title -->
                                <p class="productTitle">
                                    <a v-html="result.name"></a>
                                </p>
                                <!-- Shop name -->
                                <div class="productShop">
                                    <span>店铺: 狂神说Java </span>
                                </div>
                                <!-- Sales info -->
                                <p class="productStatus">
                                    <span>月成交<em>999笔</em></span>
                                    <span>评价 <a>3</a></span>
                                </p>
                            </div>
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>
    <script th:src="@{/js/vue.min.js}"></script>
    <script th:src="@{/js/axios.min.js}"></script>
    <script>
        new Vue({
            el: "#app",
            data: {
                "keyword": '',  // the search keyword
                "results": []   // the results returned by the back end
            },
            methods: {
                searchKey() {
                    var keyword = this.keyword;
                    console.log(keyword);
                    axios.get('h_search/' + keyword + '/0/20').then(response => {
                        console.log(response.data);
                        this.results = response.data;
                    })
                }
            }
        });
    </script>
    </body>
    </html>
Test it in the browser.

Installation packages and front-end resources

Link: https://pan.baidu.com/s/1M5uWdYsCZyzIAOcgcRkA_A
Extraction code: qk8p

Open questions:

1. When using term (exact) queries, I noticed three problems:

  • The query value must be a whole term that exists in the index, otherwise nothing matches

    • Problem: with Chinese text, a term query returns no data (for example, "编程" appears in the documents but cannot be found)

    • Cause: the index has no Chinese analyzer configured (by default the standard analyzer is used, which splits Chinese text into individual characters), so no term longer than one character exists in the index and the query matches nothing

    • Fix: configure a Chinese analyzer (e.g. IK's ik_max_word) when creating the index, like this:

      PUT example
      {
        "mappings": {
          "properties": {
            "name": {
              "type": "text",
              "analyzer": "ik_max_word"
            }
          }
        }
      }
  • English query text only matches in lowercase; uppercase characters never match

  • English words must be complete when querying (see the sketch right after this list)
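
The last two behaviors both come from the fact that a term query is not analyzed, while the indexed text is (the standard analyzer lowercases it and splits it into whole words), so the raw query string has to match an indexed term exactly. A common way around this, shown here only as a sketch (it is not part of the original notes), is to use a match query, which runs the query text through the field's analyzer before matching:

    import java.util.concurrent.TimeUnit;

    import org.elasticsearch.common.unit.TimeValue;
    import org.elasticsearch.index.query.QueryBuilders;
    import org.elasticsearch.search.builder.SearchSourceBuilder;

    public class MatchQuerySketch {

        // Build a search source with an analyzed match query instead of a term query.
        // The query text is tokenized the same way the "name" field was at index time,
        // so upper/lower case and partial phrases behave the way users expect.
        static SearchSourceBuilder buildSource(String keyword, int pageIndex, int pageSize) {
            SearchSourceBuilder source = new SearchSourceBuilder();
            source.query(QueryBuilders.matchQuery("name", keyword));
            source.timeout(new TimeValue(60, TimeUnit.SECONDS));
            source.from(pageIndex);
            source.size(pageSize);
            return source;
        }
    }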