| |
ChristopherJam
Registered: Aug 2004 Posts: 1381 |
On the relative pros and cons of various RLE schemes
So over in Native crunch/decrunch code, I just suggested encoding
abbcdddeffffg as
abb{0}cdd{1}eff{2}g
Raistlin kindly liked the idea; Krill quite sensibly didn't want us to hijack that thread and suggested a new post for the new topic. So... have at it! |
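The scheme reads naturally as: a doubled byte introduces a run, and the next byte counts the *extra* repeats beyond the two literals. A minimal Python sketch, assuming a single count byte (so runs of 2 to 257 bytes; the function names are mine, not from the thread):

```python
def rle_encode(data: bytes, max_run: int = 257) -> bytes:
    """Doubled-byte RLE: a run is emitted as two literal copies of the
    byte followed by a count of additional repeats (0..255)."""
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        run = 1
        while i + run < len(data) and data[i + run] == b and run < max_run:
            run += 1
        if run == 1:
            out.append(b)            # lone byte: passed through as-is
        else:
            out += bytes([b, b, run - 2])  # doubled byte + extra-repeat count
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    """Inverse: a doubled byte means 'read one count byte and expand'."""
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        if i + 1 < len(data) and data[i + 1] == b:
            count = data[i + 2]
            out += bytes([b]) * (2 + count)
            i += 3
        else:
            out.append(b)
            i += 1
    return bytes(out)

# The example from the post: abbcdddeffffg -> abb{0}cdd{1}eff{2}g
print(rle_encode(b"abbcdddeffffg"))  # b'abb\x00cdd\x01eff\x02g'
```

The nice property is that there is no escape byte to reserve: lone bytes cost exactly one byte, and the worst case (every byte doubled, no longer runs) costs one extra count byte per pair.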
|
| |
ChristopherJam
Registered: Aug 2004 Posts: 1381 |
Ooh, I like that one, enthusi.
Really interesting, the historical divide between packers and crunchers, especially the way intros tended to be displayed between one and the other. It does indeed reduce the impact of runlength limits considerably. Not sure why I ever bothered trying to improve the ratio of my higher-ratio crunchers for empty space now :P |
| |
Martin Piper
Registered: Nov 2007 Posts: 645 |
In the old days, dictionary-based compression was quite slow and memory hungry. The usual ways of speeding it up were to limit the distance searched and to limit the dictionary size.
Running an RLE compression pass before the dictionary stage reduced the total input data, reduced the strain on the dictionary, and passed some of the benefit along to the dictionary-based compressor.
As dictionary algorithms improve, and as time and memory become less of an issue, the dictionary size can be increased, leaving less benefit in shrinking the input data with an RLE pre-pass. |
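The dictionary-size point can be demonstrated with zlib as a stand-in for the dictionary stage (an assumption; the C64 crunchers discussed are different LZ variants). Its `wbits` parameter caps the match-search window at 2**wbits bytes, so a repeat whose distance exceeds the window is invisible to the small dictionary:

```python
import random
import zlib

def compressed_size(data: bytes, wbits: int) -> int:
    """Deflate `data` with a 2**wbits-byte history window, return output size."""
    co = zlib.compressobj(9, zlib.DEFLATED, wbits)
    return len(co.compress(data) + co.flush())

# Two copies of the same 2 KiB pseudo-random block, 2 KiB apart: a
# 512-byte window (wbits=9) can no longer see the first copy when it
# reaches the second, while a 32 KiB window (wbits=15) still can.
random.seed(1)
block = bytes(random.randrange(256) for _ in range(2048))
data = block + block

print("small window:", compressed_size(data, 9))
print("large window:", compressed_size(data, 15))
```

With the large window the second block costs only a match reference; with the small window it is stored again almost verbatim. That limited reach is the same reason shrinking the input with an RLE pre-pass once paid off: less data for a constrained search to chew through.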
| |
Fungus
Registered: Sep 2002 Posts: 629 |
The main reason for packing first was that, before crunchers had REU versions with 2MHz or 8MHz support, they were slow as hell and took many hours, so smaller input files crunched way faster. Crunching large files also tended to bug out. The results were generally smaller too, so it became the norm. |