Summary: HPack outperforms CJSON
                                    json1    json2    json3    json4     json5
Original JSON size (bytes)          52966   104370   233012   493589   1014099
Minimized                           33322    80657   180319   382396    776135
Compressed with CJSON               24899    48605   108983   231760    471230
Compressed with HPack                5727    10781    23162    49099     99575
Gzipped                              2929     5374    11224    23167     43550
Gzipped and minimized                2775     5035    10411    21319     42083
Gzipped and compressed with CJSON    2568     4605     9397    19055     37597
Gzipped and compressed with HPack    1982     3493     6981    13998     27358
Original article: http://web-resource-optimization.blogspot.com/2011/06/json-compression-algorithms.html
About
JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write, and easy for machines to parse and generate. Like XML, it can serve as a general data-interchange format, but it has several advantages over the latter: JSON is simpler, its format is largely self-documenting, and it is much shorter because it carries no markup overhead. That is why JSON is often described as a fat-free alternative to XML.
However, the purpose of this post is not to discuss the pros and cons of JSON versus XML. Although JSON is one of the most widely used data-interchange formats, there is still room for improvement. For instance, JSON uses quotes liberally, and key names are repeated over and over. This redundancy can be reduced by JSON compression algorithms, of which more than one is available. This post analyzes two of them and draws a conclusion about whether JSON compression is useful and when it should be applied.
Compressing JSON with the CJSON algorithm
CJSON compresses JSON with automatic type extraction. It tackles the most pressing source of redundancy: the need to repeat key names over and over. Using this algorithm, the following JSON:
[
{ // This is a point
"x": 100,
"y": 100
}, { // This is a rectangle
"x": 100,
"y": 100,
"width": 200,
"height": 150
},
{} // an empty object
]
can be compressed as:
{
"templates": [
[0, "x", "y"], [1, "width", "height"]
],
"values": [
{ "values": [ 1, 100, 100 ] },
{ "values": [2, 100, 100, 200, 150 ] },
{}
]
}
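The template idea is easy to sketch. The following Python function is an illustrative, flattened variant of this packing, not the actual CJSON implementation: it stores each distinct key set once in a shared templates list (real CJSON builds a template tree, which is why template 2 in the output above lists only "width" and "height").

```python
import json

def cjson_pack(objects):
    """Illustrative, flattened CJSON-style packing (not the real algorithm):
    each distinct key set becomes one template; template ids start at 1."""
    templates = []
    values = []
    for obj in objects:
        if not obj:                      # empty objects are kept as-is
            values.append({})
            continue
        keys = list(obj.keys())
        if keys not in templates:        # store each key set only once
            templates.append(keys)
        tid = templates.index(keys) + 1
        values.append({"values": [tid] + [obj[k] for k in keys]})
    return {"templates": templates, "values": values}

points = [
    {"x": 100, "y": 100},
    {"x": 100, "y": 100, "width": 200, "height": 150},
    {},
]
packed = cjson_pack(points)
print(json.dumps(packed))
```

Since each key set is stored only once, the saving grows with the number of objects that share it.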
A more detailed description of the compression algorithm, along with its source code, can be found on the CJSON project page.
Compressing JSON with the HPack algorithm
JSON.hpack is a lossless, cross-language, performance-focused data set compressor. It can reduce the number of characters used to represent a generic homogeneous collection by up to 70%. The algorithm offers several levels of compression (from 0 to 4). Level 0 performs the most basic transformation: it removes the keys (property names) from the structure and stores them once in a header at index 0. Higher levels reduce the size even further by exploiting duplicated entries.
For the following JSON:
[{
    "name": "Andrea",
    "age": 31,
    "gender": "Male",
    "skilled": true
}, {
    "name": "Eva",
    "age": 27,
    "gender": "Female",
    "skilled": true
}, {
    "name": "Daniele",
    "age": 26,
    "gender": "Male",
    "skilled": false
}]
the hpack algorithm produces a compressed version which looks like this:
[["name","age","gender","skilled"],["Andrea",31,"Male",true],["Eva",27,"Female",true],["Daniele",26,"Male",false]]
More details about the hpack algorithm can be found on the project home page.
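The level-0 transformation is simple enough to sketch: collect the shared key set into a header row, then emit one value row per object. Here is a hedged Python re-implementation, not the reference JSON.hpack code; it assumes a homogeneous collection (every object has the same keys).

```python
import json

def hpack0(records):
    """Level-0 hpack-style packing: header row of keys, then value rows."""
    if not records:
        return []
    keys = list(records[0].keys())       # assume homogeneous records
    return [keys] + [[r[k] for k in keys] for r in records]

def hunpack0(packed):
    """Inverse of hpack0: rebuild the list of objects."""
    keys, rows = packed[0], packed[1:]
    return [dict(zip(keys, row)) for row in rows]

people = [
    {"name": "Andrea", "age": 31, "gender": "Male", "skilled": True},
    {"name": "Eva", "age": 27, "gender": "Female", "skilled": True},
    {"name": "Daniele", "age": 26, "gender": "Male", "skilled": False},
]
hp = hpack0(people)
print(json.dumps(hp))
assert hunpack0(hp) == people            # the transformation is lossless
```

The round-trip assertion is the important property: the receiver can always rebuild the original objects from the header row.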
Analysis
The purpose of this analysis is to compare the two JSON compression algorithms described above. We use five JSON files of different sizes, ranging from roughly 50KB to 1MB. Each file is served to a browser from a servlet container (Tomcat) after one of the following transformations:
- Unmodified JSON - no change on the server side
- Minimized JSON - whitespace and newlines removed (the most basic optimization)
- Compressed JSON using the CJSON algorithm
- Compressed JSON using the HPack algorithm
- Gzipped JSON - no other change on the server side
- Gzipped and minimized JSON
- Gzipped and compressed using the CJSON algorithm
- Gzipped and compressed using the HPack algorithm
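Of these transformations, minimization is the simplest to reproduce: it amounts to re-serializing the JSON without whitespace. A Python sketch (illustrative, not the server-side code used in the benchmark):

```python
import json

raw = """[
    { "x": 100, "y": 100 },
    { "x": 100, "y": 100, "width": 200, "height": 150 }
]"""

# Re-serialize with compact separators -- the "minimized" transformation.
minimized = json.dumps(json.loads(raw), separators=(",", ":"))
print(minimized)  # [{"x":100,"y":100},{"x":100,"y":100,"width":200,"height":150}]
```

Everything the parser ignores (indentation, newlines, spaces after `:` and `,`) is dropped, so the result is byte-for-byte smaller but semantically identical.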
RESULTS
The table below contains the results of the benchmark. Each row corresponds to one of the transformations listed above; each of the five columns corresponds to one of the JSON files.
                                    json1    json2    json3    json4     json5
Original JSON size (bytes)          52966   104370   233012   493589   1014099
Minimized                           33322    80657   180319   382396    776135
Compressed with CJSON               24899    48605   108983   231760    471230
Compressed with HPack                5727    10781    23162    49099     99575
Gzipped                              2929     5374    11224    23167     43550
Gzipped and minimized                2775     5035    10411    21319     42083
Gzipped and compressed with CJSON    2568     4605     9397    19055     37597
Gzipped and compressed with HPack    1982     3493     6981    13998     27358
Relative size of transformations (%)
The relative-size chart makes it easy to see whether the size of the input JSON affects the efficiency of compression or minimization. The percentages below are the size of the output relative to the original. A few observations:
- Minimization is more effective on smaller files (output around 60% of the original size); for large and very large files its efficiency is roughly constant at about 75%.
- The compression algorithms achieve about the same ratio for any input size.
- The CJSON algorithm is less effective (output around 45% of the original) than HPack (around 8-11%).
- The CJSON algorithm is also slower than HPack.
- Gzipped content is almost as small as gzipped-and-compressed content.
- Combining compression or minimization with gzip improves the result only slightly (about 1-2 percentage points).
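These percentages can be checked directly against the json1 column of the results table:

```python
# Sizes for json1, taken from the results table above (bytes).
original = 52966
sizes = {"Minimized": 33322, "CJSON": 24899, "HPack": 5727, "Gzipped": 2929}

# Output size as a percentage of the original, rounded to whole percent.
ratios = {name: round(100 * size / original) for name, size in sizes.items()}
for name, pct in ratios.items():
    print(f"{name}: {pct}% of original")
```

This prints 63% for minimization, 47% for CJSON, 11% for HPack, and 6% for gzip, matching the observations above.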
Conclusion
Both JSON compression algorithms are supported by wro4j since version 1.3.8, through the CJsonProcessor and JsonHPackProcessor processors. Each of them provides two methods, pack and unpack; the underlying implementation uses the Rhino engine to run the JavaScript code on the server side.
JSON compression algorithms considerably reduce JSON size. There are several such algorithms; we have covered two of them, CJSON and HPack. HPack appears to be both much more effective than CJSON and significantly faster. Note that when two parties exchange JSON and the source compresses it before it reaches the target, the receiver has to apply the inverse operation (unpacking) before the JSON can be used. This introduces a small overhead that must be taken into account when deciding whether to use JSON compression.
When gzipping of content is allowed, it is more effective than any of these compression algorithms. In short, it is not worth compressing JSON on the server if the client accepts gzipped content. Server-side compression does make sense when the client cannot handle gzipped content and it is important to keep traffic volume as low as possible (for cost or latency reasons).
Another use case for JSON compression is sending large JSON payloads from client to server, which typically go out ungzipped. In that case, the server must unpack the JSON before consuming it.
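The strength of gzip on JSON is easy to verify: its DEFLATE back-end already collapses the repeated key names on its own. A quick check with Python's standard library, using a synthetic payload (hypothetical data):

```python
import gzip
import json

# A payload with heavily repeated key names, as in a typical JSON collection.
records = [{"name": "Andrea", "age": 31, "gender": "Male", "skilled": True}] * 1000
payload = json.dumps(records).encode("utf-8")

gz = gzip.compress(payload)
print(len(payload), len(gz))  # gzip removes almost all of the key repetition
```

On repetitive data like this, the gzipped size is a tiny fraction of the original, which is why packing on the server buys little once gzip is in play.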