• Stars: 157
• Rank: 238,399 (Top 5%)
• Language: Java
• License: Apache License 2.0
• Created: over 10 years ago
• Updated: about 6 years ago


Repository Details

HtmlExtractor is a Java-based, template-driven component for precise extraction of structured information from web pages.

HtmlExtractor is a Java-based, template-driven component for precise extraction of structured information from web pages. It does not crawl pages itself, but it can be called by crawlers or other programs to extract structured information from the pages they fetch more precisely.

HtmlExtractor is designed for large-scale distributed environments and uses a master-slave architecture. The master node maintains the extraction rules, and slave nodes request the rules from the master node. When the rules change, the master node actively notifies the slave nodes, so rule changes take effect dynamically in real time.
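For orientation, the two ExtractRegular.getInstance overloads used in the examples below correspond to these two modes. This is only a sketch that reuses those calls (imports omitted, as in the README's own examples):

// Standalone mode: the extraction rules are built and held in the local process.
// urlPatterns is a List<UrlPattern> constructed as in step 1 of the standalone example below.
ExtractRegular standalone = ExtractRegular.getInstance(urlPatterns);

// Distributed mode: a slave node loads the rules from the master node's web API and
// is notified of rule changes through the given Redis server (see the distributed example below).
ExtractRegular distributed = ExtractRegular.getInstance(
        "http://localhost:8080/HtmlExtractorServer/api/all_extract_regular.jsp",
        "localhost", 6379);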

Donations and Acknowledgements

How to use it?

To use SeleniumHtmlFetcher, a browser driver must be installed:
FirefoxDriver: brew install geckodriver
ChromeDriver:  brew install chromedriver
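
A minimal sketch of using SeleniumHtmlFetcher in place of the JSoupHtmlFetcher shown in the examples below; it assumes SeleniumHtmlFetcher implements the same HtmlFetcher interface (imports omitted, as in the README's own examples):

// Sketch: fetch a page with SeleniumHtmlFetcher instead of JSoupHtmlFetcher.
// The browser driver installed above (geckodriver or chromedriver) must be on the PATH.
HtmlFetcher htmlFetcher = new SeleniumHtmlFetcher();
String html = htmlFetcher.fetch("http://money.163.com/08/1219/16/4THR2TMP002533QK.html");
System.out.println("fetched " + html.length() + " characters");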

HtmlExtractor consists of two sub-projects: html-extractor and html-extractor-web.
html-extractor implements the data-extraction logic and acts as the slave node; html-extractor-web provides a web interface for maintaining the extraction rules and acts as the master node.
html-extractor is a jar package that can be referenced via Maven:
<dependency>
    <groupId>org.apdplat</groupId>
    <artifactId>html-extractor</artifactId>
    <version>1.1</version>
</dependency>
html-extractor-web is a war package that needs to be deployed to a Servlet/JSP container.
Run mvn jetty:run in the html-extractor-web directory to start the Jetty Servlet/JSP container, then open a browser and visit
http://localhost:8080/html-extractor-web/api/ to view the rules you have defined.
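
The rules listing is an ordinary HTTP endpoint, so it can also be fetched programmatically. The sketch below uses only the JDK's HttpClient and assumes the URL above; adjust it to your deployment:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListExtractRules {
    public static void main(String[] args) throws Exception {
        // Assumption: the rules API is served at the URL shown above.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/html-extractor-web/api/"))
                .GET()
                .build();
        // Print the raw response so the currently defined rules can be inspected.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}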

Note: extraction of a page succeeds only if every CSS path and extraction expression defined in the page template is extracted successfully;
     if even a single CSS path or extraction expression fails, the whole extraction counts as a failure.
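
This all-or-nothing rule shows up in the result API used below: any entry in the fail logs means the whole page failed. A minimal sketch (imports omitted, as in the examples below):

// Sketch: check the all-or-nothing extraction result for each page.
for (ExtractResult extractResult : extractResults) {
    if (!extractResult.isSuccess()) {
        for (ExtractFailLog failLog : extractResult.getExtractFailLogs()) {
            System.out.println("failed CSS path " + failLog.getCssPath()
                    + " for field " + failLog.getFieldName());
        }
    }
}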

How to use HtmlExtractor for precise template-based extraction of structured information from web pages?

Standalone (single-machine, centralized) usage:

//1. Build the extraction rules
List<UrlPattern> urlPatterns = new ArrayList<>();
//1.1 Build the URL pattern
UrlPattern urlPattern = new UrlPattern();
urlPattern.setUrlPattern("http://money.163.com/\\d{2}/\\d{4}/\\d{2}/[0-9A-Z]{16}.html");
//1.2 Build the HTML template
HtmlTemplate htmlTemplate = new HtmlTemplate();
htmlTemplate.setTemplateName("网易财经频道");
htmlTemplate.setTableName("finance");
//1.3 Associate the HTML template with the URL pattern
urlPattern.addHtmlTemplate(htmlTemplate);
//1.4 Build a CSS path
CssPath cssPath = new CssPath();
cssPath.setCssPath("h1");
cssPath.setFieldName("title");
cssPath.setFieldDescription("标题");
//1.5 Associate the CSS path with the template
htmlTemplate.addCssPath(cssPath);
//1.6 Build another CSS path
cssPath = new CssPath();
cssPath.setCssPath("div#endText");
cssPath.setFieldName("content");
cssPath.setFieldDescription("正文");
//1.7 Associate the CSS path with the template
htmlTemplate.addCssPath(cssPath);
//Multiple URL patterns can be constructed in the same way as above
urlPatterns.add(urlPattern);

//2. Get the extraction rules object
ExtractRegular extractRegular = ExtractRegular.getInstance(urlPatterns);
//Note: the extraction rules can be changed dynamically via the following 3 methods
//extractRegular.addUrlPatterns(urlPatterns);
//extractRegular.addUrlPattern(urlPattern);
//extractRegular.removeUrlPattern(urlPattern.getUrlPattern());

//3. Get the HTML extractor
HtmlExtractor htmlExtractor = new DefaultHtmlExtractor(extractRegular);

//4. Extract the web page
String url = "http://money.163.com/08/1219/16/4THR2TMP002533QK.html";
HtmlFetcher htmlFetcher = new JSoupHtmlFetcher();
String html = htmlFetcher.fetch(url);
List<ExtractResult> extractResults = htmlExtractor.extract(url, html);

//5. Print the results
int i = 1;
for (ExtractResult extractResult : extractResults) {
    System.out.println((i++) + "、网页 " + extractResult.getUrl() + " 的抽取结果");
    if(!extractResult.isSuccess()){
        System.out.println("抽取失败:");
        for(ExtractFailLog extractFailLog : extractResult.getExtractFailLogs()){
            System.out.println("\turl:"+extractFailLog.getUrl());
            System.out.println("\turlPattern:"+extractFailLog.getUrlPattern());
            System.out.println("\ttemplateName:"+extractFailLog.getTemplateName());
            System.out.println("\tfieldName:"+extractFailLog.getFieldName());
            System.out.println("\tfieldDescription:"+extractFailLog.getFieldDescription());
            System.out.println("\tcssPath:"+extractFailLog.getCssPath());
            if(extractFailLog.getExtractExpression()!=null) {
                System.out.println("\textractExpression:" + extractFailLog.getExtractExpression());
            }
        }
        continue;
    }
    Map<String, List<ExtractResultItem>> extractResultItems = extractResult.getExtractResultItems();
    for(String field : extractResultItems.keySet()){
        List<ExtractResultItem> values = extractResultItems.get(field);
        if(values.size() > 1){
            int j=1;
            System.out.println("\t多值字段:"+field);
            for(ExtractResultItem item : values){
                System.out.println("\t\t"+(j++)+"、"+field+" = "+item.getValue());   
            }
        }else{
            System.out.println("\t"+field+" = "+values.get(0).getValue());     
        }
    }
    System.out.println("\tdescription = "+extractResult.getDescription());
    System.out.println("\tkeywords = "+extractResult.getKeywords());
}

Distributed (multi-machine) usage:

1. Run the master node, which maintains the extraction rules:
Option 1: run mvn jetty:run in the html-extractor-web directory.
Option 2: run mvn install in the html-extractor-web directory,
      then deploy target/html-extractor-web-1.0.war to Tomcat.

2. Obtain an HtmlExtractor instance (a slave node); sample code:
String allExtractRegularUrl = "http://localhost:8080/HtmlExtractorServer/api/all_extract_regular.jsp";
String redisHost = "localhost";
int redisPort = 6379;
ExtractRegular extractRegular = ExtractRegular.getInstance(allExtractRegularUrl, redisHost, redisPort);
HtmlExtractor htmlExtractor = new DefaultHtmlExtractor(extractRegular);

3. Extract information; sample code:
String url = "http://money.163.com/08/1219/16/4THR2TMP002533QK.html";
HtmlFetcher htmlFetcher = new JSoupHtmlFetcher();
String html = htmlFetcher.fetch(url);
List<ExtractResult> extractResults = htmlExtractor.extract(url, html);

int i = 1;
for (ExtractResult extractResult : extractResults) {
    System.out.println((i++) + "、网页 " + extractResult.getUrl() + " 的抽取结果");
    if(!extractResult.isSuccess()){
        System.out.println("抽取失败:");
        for(ExtractFailLog extractFailLog : extractResult.getExtractFailLogs()){
            System.out.println("\turl:"+extractFailLog.getUrl());
            System.out.println("\turlPattern:"+extractFailLog.getUrlPattern());
            System.out.println("\ttemplateName:"+extractFailLog.getTemplateName());
            System.out.println("\tfieldName:"+extractFailLog.getFieldName());
            System.out.println("\tfieldDescription:"+extractFailLog.getFieldDescription());
            System.out.println("\tcssPath:"+extractFailLog.getCssPath());
            if(extractFailLog.getExtractExpression()!=null) {
                System.out.println("\textractExpression:" + extractFailLog.getExtractExpression());
            }
        }
        continue;
    }
    Map<String, List<ExtractResultItem>> extractResultItems = extractResult.getExtractResultItems();
    for(String field : extractResultItems.keySet()){
        for(ExtractResultItem item : extractResultItems.get(field)){
            System.out.println("\t"+field+" = "+item.getValue());
        }
    }
    System.out.println("\tdescription = "+extractResult.getDescription());
    System.out.println("\tkeywords = "+extractResult.getKeywords());
}

Continuous integration: https://travis-ci.org/ysc/HtmlExtractor

More Repositories

1. QuestionAnsweringSystem: a Java-implemented question answering system that automatically analyzes questions and gives candidate answers. (Java, 1,957 stars)
2. word: Java distributed Chinese word segmentation component - the word segmenter. (Java, 1,812 stars)
3. cws_evaluation: Java open source project cws_evaluation, an evaluation and comparison of the segmentation quality of Chinese word segmenters. (Lex, 948 stars)
4. APDPlat: APDPlat is short for Application Product Development Platform, an application-level product development platform. (JavaScript, 521 stars)
5. data-generator: if you work in big-data BI and want to compare implementations such as MySQL, GreenPlum, Elasticsearch, Hive, Spark SQL, Presto, Impala, Drill, HAWQ, Druid, Pinot, Kylin, ClickHouse and Kudu, you need a standard data set for testing; this open source project generates such standard data. (Java, 278 stars)
6. superword: Superword is a Java open source project dedicated to the study of English word analysis and auxiliary reading. (Java, 267 stars)
7. search: a meta search engine. (Java, 225 stars)
8. jsearch: a high-performance full-text search toolkit. (Java, 92 stars)
9. rank: an SEO tool for analyzing a website's search engine indexing and ranking. (Java, 66 stars)
10. realtime-log: real-time logging for microservices. (Java, 30 stars)
11. short-text-search: a customizable precise short-text search service. (Java, 18 stars)
12. word_web: centralized management of word segmenter resources through a web server. (Java, 17 stars)
13. counter: atomic counters and daily API call limits in a distributed environment. (Java, 17 stars)
14. high-availability: ensuring sustained high availability, high performance and load balancing of services. (Java, 17 stars)
15. baby-typing-game: a typing game for children aged 2 to 6. (HTML, 10 stars)
16. borm: object persistence for big data. (Java, 10 stars)
17. ysc.github.com: ysc.github.com. (CSS, 2 stars)
18. luke: automatically exported from code.google.com/p/luke. (Java, 1 star)
19. AudiobooksForKids: a collection of best-selling audiobooks for kids, from timeless classics to popular series. (1 star)